
Internal Assignment No.

Paper Code: CH - 201


Paper Title: Inorganic Chemistry

Q. 1. Answer all the questions:


1) Which element gives the highest oxidation state? Also give all the oxidation states of that element.
Ans The oxidation state, often called the oxidation number, is an indicator of the degree of oxidation (loss of electrons)
of an atom in a chemical compound. Conceptually, the oxidation state, which may be positive, negative or zero, is
the hypothetical charge that an atom would have if all bonds to atoms of different elements were 100% ionic, with no covalent component. This is never exactly true for real bonds.
The term "oxidation" was first used by Lavoisier to mean reaction of a substance with oxygen. Much later, it was realized
that the substance on being oxidized loses electrons, and the use of the term "oxidation" was extended to include other
reactions in which electrons are lost.
Oxidation states are typically represented by small integers. In some cases, the average oxidation state of an element is a fraction, such as +8/3 for iron in magnetite (Fe3O4). The highest known oxidation state is reported to be +9 in the cation IrO4+,[1] while the lowest known oxidation state is −5 for boron, gallium, indium, and thallium. The possibility of +9 and +10 oxidation states in platinum group elements, especially iridium(IX) and platinum(X), has been discussed by Kiselev and Tretiyakov.[2]
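As a quick check of the fractional value quoted above: in Fe3O4 the four oxide ions contribute 4 × (−2) = −8, so the three iron atoms must together carry +8, giving an average oxidation state of +8/3 (formally one Fe2+ and two Fe3+).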

2) Give examples of symmetrical and unsymmetrical orbitals.


Ans A MO with σ symmetry results from the interaction of either two atomic s-orbitals or two atomic pz-orbitals. An MO will
have σ-symmetry if the orbital is symmetrical with respect to the axis joining the two nuclear centers, the internuclear axis.
This means that rotation of the MO about the internuclear axis does not result in a phase change. A σ* orbital, sigma
antibonding orbital, also maintains the same phase when rotated about the internuclear axis. The σ* orbital has a nodal
plane that is between the nuclei and perpendicular to the internuclear axis. [10]
For molecules that possess a center of inversion (centrosymmetric molecules) there are additional labels of symmetry that
can be applied to molecular orbitals. Centrosymmetric molecules include:

• Homonuclear diatomics, X2
• Octahedral, EX6
• Square planar, EX4.

Non-centrosymmetric molecules include:

• Heteronuclear diatomics, XY
• Tetrahedral, EX4.

3) Define labile and inert complexes.


Ans In a labile complex the dissociation rate is high; ligands come and go from the complex rapidly. In an inert complex the dissociation rate is low; ligands come and go relatively infrequently. This is more or less independent of the equilibrium constant, which reflects the ratio of the rates of formation and dissociation.
In chemistry, a coordination complex or metal complex consists of a central atom or ion, which is usually metallic and is
called the coordination centre, and a surrounding array of bound molecules or ions, that are in turn known as ligands or
complexing agents.[1][2] Many metal-containing compounds, especially those of transition metals, are coordination
complexes.

4) Define supersaturation and nucleation.


Ans Supersaturation is the state of a solution (or vapour) that holds more dissolved material than the solvent could dissolve at equilibrium; it provides the driving force for nucleation. Nucleation is the first step in the formation of either a new thermodynamic phase or a new structure via self-assembly or self-organization. Nucleation is typically defined to be the process that determines how long an observer has
to wait before the new phase or self-organized structure appears. Nucleation is often found to be very sensitive to
impurities in the system. Because of this, it is often important to distinguish between heterogeneous nucleation and homogeneous nucleation. Heterogeneous nucleation occurs at nucleation sites on surfaces in the system.[1] Homogeneous nucleation occurs away from a surface.
Heterogeneous nucleation, nucleation with the nucleus at a surface, is much more common than homogeneous nucleation.[1][3] In classical nucleation theory, heterogeneous nucleation is typically understood to be much faster than homogeneous nucleation, because the theory predicts that the nucleation rate slows exponentially with the height of the free-energy barrier ΔG*.

5) Give the electrochemical series.

Ans Metals at the top of the series are good at giving away electrons. They are good reducing agents. The reducing
ability of the metal increases as you go up the series.

Metal ions at the bottom of the series are good at picking up electrons. They are good oxidising agents. The oxidising
ability of the metal ions increases as you go down the series.
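The series itself is not written out above; one common textbook version, from top (strongest reducing metals) to bottom, is:
Li > K > Ca > Na > Mg > Al > Zn > Fe > Ni > Sn > Pb > H2 > Cu > Ag > Au
with standard reduction potentials running from about −3.04 V for Li+/Li to +0.34 V for Cu2+/Cu and +0.80 V for Ag+/Ag.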

Note: Answer any two questions. Each question carries 5 marks (Word limits 500)

Q.2 Explain Werner's coordination theory.


Ans Werner's Theory:

Alfred Werner, a Swiss chemist, put forward a theory to explain the formation of complex compounds. It was the first successful explanation and became famous as the coordination theory of complex compounds, which is also known as Werner's theory.

Postulates:

(a) The central metal atom (or) ion in a coordination compound exhibits two types of valencies - primary and secondary.

(b) Primary valencies are ionisable and correspond to the number of charges on the complex ion. Primary valencies apply
equally well to simple salts and to complexes and are satisfied by negative ions.
(c) Secondary valencies correspond to the valencies that a metal atom (or) ion exercises towards neutral molecules (or)
negative ions in the formation of its complex ions.

(d) Secondary valencies are directional and so a complex has a particular shape. The number and arrangement of ligands
in space determines the stereochemistry of a complex.

The postulates of Werner's coordination theory were actually based on experimental evidence rather than theoretical.
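A classic illustration of these postulates: when [Co(NH3)6]Cl3 is treated with excess AgNO3, all three chlorides precipitate as AgCl (primary valency 3, satisfied by the ionisable Cl−), while the six NH3 ligands satisfy the secondary valency of 6; in [Co(NH3)5Cl]Cl2, by contrast, only two chlorides precipitate, since one Cl− occupies a secondary (coordination) position.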

Although Werner's theory successfully explains the bonding features in coordination compounds, it has drawbacks.

Drawbacks:
• It doesn't explain why only certain elements form coordination compounds.
• It does not explain why the bonds in coordination compounds have directional properties.
• It does not explain the colour, and the magnetic and optical properties, of complexes.
In chemistry, a coordination complex or metal complex consists of a central atom or ion, which is usually metallic and is
called the coordination centre, and a surrounding array of bound molecules or ions, that are in turn known as ligands or
complexing agents.[1][2] Many metal-containing compounds, especially those of transition metals, are coordination
complexes.
Coordination complexes are so pervasive that the structure and reactions are described in many ways, sometimes
confusingly. The atom within a ligand that is bonded to the central atom or ion is called the donor atom. In a typical
complex, a metal ion is bound to several donor atoms, which can be the same or different. Polydentate (multiple bonded)
ligands consist of several donor atoms, several of which are bound to the central atom or ion. These complexes are
called chelate complexes, the formation of such complexes is called chelation, complexation, and coordination.
The central atom or ion, together with all ligands, comprises the coordination sphere.[4][5] The central atom or ion and the donor atoms comprise the first coordination sphere.

Q.3 Explain the crystal field theory of octahedral complexes.


Ans Crystal Field Theory (CFT) is a model that describes the breaking of degeneracies of electron orbital states,
usually d or f orbitals, due to a static electric field produced by a surrounding charge distribution (anion neighbors). This
theory has been used to describe various spectroscopies of transition metal coordination complexes, in particular optical
spectra (colors). CFT successfully accounts for some magnetic properties, colours, hydration enthalpies, and spinel structures of transition metal complexes, but it does not attempt to describe bonding. CFT was developed by physicists Hans Bethe and John Hasbrouck van Vleck[1] in the 1930s. CFT was subsequently combined with molecular orbital theory to form the more realistic and complex ligand field theory (LFT), which delivers insight into the process of chemical bonding in transition metal complexes.
According to CFT, the interaction between a transition metal and ligands arises from the attraction between the positively
charged metal cation and negative charge on the non-bonding electrons of the ligand. The theory is developed by
considering energy changes of the five degenerate d-orbitals upon being surrounded by an array of point charges
consisting of the ligands. As a ligand approaches the metal ion, the electrons from the ligand will be closer to some of
the d-orbitals and farther away from others causing a loss of degeneracy. The electrons in the d-orbitals and those in the
ligand repel each other due to repulsion between like charges. Thus the d-electrons closer to the ligands will have a
higher energy than those further away which results in the d-orbitals splitting in energy. This splitting is affected by the
following factors:

• the nature of the metal ion.
• the metal's oxidation state. A higher oxidation state leads to a larger splitting.
• the arrangement of the ligands around the metal ion.
• the nature of the ligands surrounding the metal ion. The stronger the effect of the ligands, the greater the difference between the high- and low-energy d groups.

The most common type of complex is octahedral; here six ligands form an octahedron around the metal ion. In octahedral
symmetry the d-orbitals split into two sets with an energy difference, Δoct (the crystal-field splitting parameter) where
the dxy, dxz and dyz orbitals will be lower in energy than the dz2 and dx2−y2 orbitals, which will have higher energy, because the former group is farther from the ligands than the latter and therefore experiences less repulsion. The three lower-energy orbitals are collectively referred to as t2g, and the two higher-energy orbitals as eg. (These labels are based on the theory of molecular symmetry.)
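As a rough illustration of what this splitting implies energetically, each t2g electron contributes −0.4 Δoct and each eg electron +0.6 Δoct to the crystal-field stabilization energy (CFSE). A minimal Python sketch of this bookkeeping (pairing-energy terms ignored; the function name is ours):

    def octahedral_cfse(n_d, high_spin=True):
        """CFSE of a d^n octahedral ion, in units of Delta_oct."""
        if not 0 <= n_d <= 10:
            raise ValueError("d-electron count must be between 0 and 10")
        if high_spin:
            # weak field: singly occupy all five d orbitals, then pair up
            order = ["t2g", "t2g", "t2g", "eg", "eg"] * 2
        else:
            # strong field: fill the three t2g orbitals completely first
            order = ["t2g"] * 6 + ["eg"] * 4
        t2g = order[:n_d].count("t2g")
        eg = n_d - t2g
        return -0.4 * t2g + 0.6 * eg

    print(octahedral_cfse(6, high_spin=False))  # -2.4 (a low-spin d6 ion)
    print(octahedral_cfse(6, high_spin=True))   # -0.4 (a high-spin d6 ion)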

Internal Assignment No. 2

Paper Code: CH - 201


Paper Title: Inorganic Chemistry

Q. 1. Answer all the questions:


1) Define isomerism. Which types of isomerism are present in coordination compounds?
Ans • Stereoisomers occur when the ligands have the same bonds, but the bonds are in different orientations relative to one another.
• In cis molecules, the two ligands are on the same side of the complex. In trans molecules, the similar ligands are on opposite sides of the molecule. Tetrahedral molecules do not show this stereoisomerism.
• When three identical ligands occupy one face, the isomer is said to be facial, or fac. If the three ligands and the metal ion are in one plane, the isomer is said to be meridional, or mer.
• Optical isomerism occurs when a molecule is not superimposable with its mirror image.
• Structural isomers have the same chemical composition but the bonds are different.

An isomer (/ˈaɪsəmər/; from Greek ἰσομερής, isomerès; isos = "equal", méros = "part") is a molecule with the same chemical formula as another molecule, but with a different chemical structure. That is, isomers contain the same number of atoms of each element, but have different arrangements of their atoms.[1][2] Isomers do not necessarily share similar properties, unless they also have the same functional groups. There are many different classes of isomers, like positional isomers, cis–trans isomers and enantiomers. There are two main forms of isomerism: structural isomerism and stereoisomerism.

2) Give the sources of actinide elements.


Ans The actinide /ˈæktɨnaɪd/ or actinoid /ˈæktɨnɔɪd/ (IUPAC nomenclature) series encompasses the 15 metallic chemical elements with atomic numbers from 89 to 103, actinium through lawrencium.[2][3][4][5]
The actinide series derives its name from the first element in the series, actinium. The informal chemical symbol An is used in general discussions of actinide chemistry to refer to any actinide. All but one of the actinides are f-block elements, corresponding to the filling of the 5f electron shell; lawrencium, a d-block element, is also generally considered an actinide. In comparison with the lanthanides, also mostly f-block elements, the actinides show much more variable valence. They all have very large atomic and ionic radii and exhibit an unusually large range of physical properties. While actinium and the late actinides (from americium onwards) behave similarly to the lanthanides, the elements thorium through neptunium are much more similar to transition metals in their chemistry.

3) Give the properties of solvents.


Ans A solvent (from the Latin solvō, "I loosen, untie, I solve") is a substance that dissolves a solute (a chemically
different liquid, solid or gas), resulting in a solution. A solvent is usually a liquid but can also be a solid or a gas. The
quantity of solute that can dissolve in a specific volume of solvent varies with temperature. Common uses
for organic solvents are in dry cleaning (e.g., tetrachloroethylene), as paint thinners (e.g., toluene, turpentine), as nail
polish removers and glue solvents (acetone, methyl acetate, ethyl acetate), in spot removers (e.g., hexane, petrol ether),
in detergents (citrus terpenes) and in perfumes (ethanol). Water is a solvent for polar molecules and the most common solvent used by living things; all the ions and proteins in a cell are dissolved in water within the cell. Solvents find various applications in chemical, pharmaceutical, oil and gas industries, including in chemical syntheses and purification processes.
The global solvent market is expected to earn revenues of about US$33 billion in 2019. The dynamic economic development in emerging markets like China, India and Brazil will especially continue to boost demand for solvents.

4) Explain the Arrhenius concept.


Ans The Arrhenius equation is a formula for the temperature dependence of reaction rates. The equation was proposed
by Svante Arrhenius in 1889, based on the work of Dutch chemist Jacobus Henricus van 't Hoff who had noted in 1884
that Van 't Hoff's equation for the temperature dependence of equilibrium constants suggests such a formula for the rates
of both forward and reverse reactions. Arrhenius provided a physical justification and interpretation for the formula. [1][2]
[3]
Currently, it is best seen as an empirical relationship.[4] It can be used to model the temperature variation of diffusion
coefficients, population of crystal vacancies, creep rates, and many other thermally-induced processes/reactions.
The Eyring equation, developed in 1935, also expresses the relationship between rate and energy.
A historically useful generalization supported by Arrhenius' equation is that, for many common chemical reactions at room
temperature, the reaction rate doubles for every 10 degree Celsius increase in temperature.
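The equation itself, which the answer above does not write out, is k = A·exp(−Ea/RT), where k is the rate constant, A the pre-exponential factor, Ea the activation energy, R the gas constant and T the absolute temperature. A minimal Python sketch of the "doubling per 10 °C" rule, assuming an illustrative activation energy of 50 kJ/mol:

    import math

    def rate_constant(A, Ea, T):
        """Arrhenius equation k = A*exp(-Ea/(R*T)); Ea in J/mol, T in K."""
        R = 8.314  # gas constant, J/(mol K)
        return A * math.exp(-Ea / (R * T))

    # ratio of rate constants for a 10 K rise near room temperature
    ratio = rate_constant(1.0, 50e3, 308.0) / rate_constant(1.0, 50e3, 298.0)
    print(round(ratio, 2))  # ~1.93, i.e. roughly a doubling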

5) Explain lanthanide contraction.


Ans Lanthanide contraction is a term used in chemistry to describe the greater-than-expected decrease in ionic radii of
the elements in the lanthanide series from atomic number 57, lanthanum, to 71, lutetium, which results in smaller than
otherwise expected ionic radii for the subsequent elements starting with 72, hafnium.[1][2][3] The term was coined by the
Norwegian geochemist Victor Goldschmidt in his series "Geochemische Verteilungsgesetze der Elemente". [4]

Element | Atomic electron configuration (all begin with [Xe]) | Ln3+ electron configuration | Ln3+ radius (pm, 6-coordinate)
La | 5d1 6s2 | 4f0 | 103
Ce | 4f1 5d1 6s2 | 4f1 | 102
Pr | 4f3 6s2 | 4f2 | 99
Nd | 4f4 6s2 | 4f3 | 98.3
Pm | 4f5 6s2 | 4f4 | 97
Sm | 4f6 6s2 | 4f5 | 95.8
Eu | 4f7 6s2 | 4f6 | 94.7
Gd | 4f7 5d1 6s2 | 4f7 | 93.8
Tb | 4f9 6s2 | 4f8 | 92.3
Dy | 4f10 6s2 | 4f9 | 91.2
Ho | 4f11 6s2 | 4f10 | 90.1
Er | 4f12 6s2 | 4f11 | 89
Tm | 4f13 6s2 | 4f12 |
Yb | 4f14 6s2 | 4f13 |
Lu | 4f14 5d1 6s2 | 4f14 |

Note: Answer any two questions. Each question carries 5 marks (Word limits 500)

Q.2 Explain the Brønsted–Lowry concept of acids and bases.


Ans The Brønsted–Lowry theory is an acid–base reaction theory which was proposed independently by Johannes Nicolaus Brønsted and Thomas Martin Lowry in 1923. The fundamental concept of this theory is that when an acid and a
base react with each other, the acid forms its conjugate base, and the base forms its conjugate acid by exchange of
a proton (the hydrogen cation, or H+). This theory is a generalization of the Arrhenius theory.
In the Arrhenius theory acids are defined as substances which dissociate in aqueous solution to give H+ (hydrogen ions). Bases are defined as substances which dissociate in aqueous solution to give OH− (hydroxide ions).[1]
In 1923 physical chemists Johannes Nicolaus Brønsted in Denmark and Thomas Martin Lowry in England independently
proposed the theory that carries their names.[2][3][4] In the Brønsted–Lowry theory acids and bases are defined by the way
they react with each other, which allows for greater generality. The definition is expressed in terms of an equilibrium
expression
acid + base ⇌ conjugate base + conjugate acid
With an acid, HA, the equation can be written symbolically as:
HA + B ⇌ A− + HB+
The equilibrium sign, ⇌, is used because the reaction can occur in both forward and backward directions. The
acid, HA, can lose a proton to become its conjugate base, A −. The base, B, can accept a proton to become its
conjugate acid, HB+. Most acid-base reactions are fast so that the components of the reaction are usually
in dynamic equilibriumwith each other.[5]
Consider the following acid–base reaction:
CH3COOH + H2O ⇌ CH3COO− + H3O+
Acetic acid, CH3COOH, is an acid because it donates a proton to water (H2O) and becomes its conjugate base, the acetate ion (CH3COO−). H2O is a base because it accepts a proton from CH3COOH and becomes its conjugate acid, the hydronium ion (H3O+).[6]
The reverse of an acid-base reaction is also an acid-base reaction, between the conjugate acid of the base in the first
reaction and the conjugate base of the acid. In the above example, acetate is the base of the reverse reaction and
hydronium ion is the acid.
H3O+ + CH3COO− ⇌ CH3COOH + H2O

Q.3 Which types of principles are involved in the extraction of the elements?


Ans A chemical element (or element) is a chemical substance consisting of atoms having the same number of protons in
their atomic nuclei (i.e. the same atomic number, Z).[1] There are 118 elements that have been identified, of which the first 98 occur naturally on Earth, with the remaining 20 being synthetic elements. There are 80 elements that have at least one stable isotope and 38 that have exclusively radioactive isotopes, which decay over time into other elements. Iron is the most abundant element (by mass) making up the Earth, while oxygen is the most common element in the crust of the earth.[2]
Chemical elements constitute approximately 15% of the matter in the universe; the remainder is dark matter, whose composition is unknown but which is not composed of chemical elements.[3] The two lightest elements, hydrogen and helium, were mostly formed in the Big Bang and are the most common elements in the universe. The next three elements (lithium, beryllium and boron) were formed mostly by cosmic ray spallation, and are thus rarer than those that follow. Formation of elements with six to twenty-six protons occurred and continues to occur in main sequence stars via stellar nucleosynthesis. The high abundance of oxygen, silicon, and iron on Earth reflects their common production in such stars. Elements with more than twenty-six protons are formed by supernova nucleosynthesis in supernovae, which, when they explode, blast these elements far into space as planetary nebulae, where they may become incorporated into planets when they are formed.
When different elements are chemically combined, with the atoms held together by chemical bonds, they form chemical
compounds. Only a minority of elements are found uncombined as relatively pure minerals. Among the more common of
such "native elements" are copper, silver, gold, carbon (as coal, graphite, or diamonds), and sulfur. All but a few of the
most inert elements, such as noble gases and noble metals, are usually found on Earth in chemically combined form,
as chemical compounds. While about 32 of the chemical elements occur on Earth in native uncombined forms, most of
these occur as mixtures. For example, atmospheric air is primarily a mixture of nitrogen, oxygen, and argon, and native
solid elements occur in alloys, such as that of iron and nickel.
The history of the discovery and use of the elements began with primitive human societies that found native elements like carbon, sulfur, copper and gold. Later civilizations extracted elemental copper, tin, lead and iron from their ores by smelting, using charcoal. Alchemists and chemists subsequently identified many more, with almost all of the
naturally-occurring elements becoming known by 1900.
The properties of the chemical elements are summarized on the periodic table, which organizes the elements by
increasing atomic number into rows ("periods") in which the columns ("groups") share recurring ("periodic") physical and
chemical properties. Save for unstable radioactive elements with short half-lives, all of the elements are available industrially, most of them in high degrees of purity.

Internal Assignment No. 1

Paper Code: CH-202


Paper Title: ORGANIC CHEMISTRY

Q. 1. Answer all the questions:


(i) Write a note on Hyperchromic and Hypochromic Shifts.
Ans Hypsochromic shift is a change of spectral band position in the absorption, reflectance, transmittance, or emission
spectrum of a molecule to a shorter wavelength (higher frequency). Because the blue color in the visible spectrum has a
shorter wavelength than most other colors, this effect is also commonly called a blue shift.
This can occur because of a change in environmental conditions: for example, a change in solvent polarity will result in solvatochromism. A series of structurally related molecules in a substitution series can also show a hypsochromic shift.
Hypsochromic shift is a phenomenon seen in molecular spectra, not atomic spectra - it is thus more common to speak of
the movement of the peaks in the spectrum rather than lines.


For example, β-acylpyrrole will show a hypsochromic shift of 30-40 nm in comparison with α-acylpyrroles.

(ii) What is Hooke’s Law?

Ans Hooke's law is a principle of physics that states that the force F needed to extend or compress a spring by some distance X is proportional to that distance: F = kX, where k is a constant factor characteristic of the spring, its stiffness. The law is named after the 17th-century British physicist Robert Hooke. He first stated the law in 1660 as
a Latin anagram.[1][2] He published the solution of his anagram in 1678 as: ut tensio, sic vis ("as the extension, so the
force" or "the extension is proportional to the force").
Hooke's equation in fact holds (to some extent) in many other situations where an elastic body is deformed, such as wind
blowing on a tall building, a musician plucking a string of a guitar, or the filling of a party balloon. An elastic body or material for which this equation can be assumed is said to be linear-elastic or Hookean.
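A one-line numerical illustration: a spring with stiffness k = 200 N/m stretched by x = 0.05 m exerts a restoring force of magnitude F = kx = 200 × 0.05 = 10 N.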

(iii) What is Pinacol Rearrangement?


Ans The pinacol–pinacolone rearrangement is a method for converting a 1,2-diol to a carbonyl compound in organic
chemistry. The 1,2-rearrangement takes place under acidic conditions. The name of the reaction comes from the
rearrangement of pinacol to pinacolone.
This reaction was first described by Wilhelm Rudolph Fittig in 1860. The Fittig reaction, named after him, involves coupling of two aryl halides in the presence of sodium metal in dry ethereal solution.
In the course of this organic reaction, protonation of one of the –OH groups occurs and a carbocation is formed. If both the
–OH groups are not alike, then the one which yields a more stable carbocation participates in the reaction. Subsequently,
an alkyl group from the adjacent carbon migrates to the carbocation center. The driving force for this rearrangement step
is believed to be the relative stability of the resultant oxonium ion, which has complete octet configuration at all centers (as
opposed to the preceding carbocation).

(iv) Write the Zeisel’s Method.


Ans Zeisel's method is used to estimate the alkoxy linkages in an organic compound. In this method the organic
compound containing alkoxy group is treated with hydrogen iodide and the alkyl halide formed is further treated with silver
nitrate to precipitate silver iodide
In this reaction only hydrogen iodide is used because its bond is the most ionic, whereas HF, HCl and HBr are more covalent; the iodide ion liberated forms a precipitate with silver nitrate. The silver iodide can be weighed, and from its weight the number of alkoxy groups can be estimated.
The Zeisel determination or Zeisel test is a chemical test for the presence of esters or ethers in a chemical substance.[1][2][3][4] It is named after the Czech chemist Simon Zeisel (1854–1933). In a qualitative test a sample is first reacted with a mixture of acetic acid and hydrogen iodide in a test tube.
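A minimal sketch of the gravimetric estimate described above, assuming one mole of AgI per methoxy group; the sample and precipitate masses used here are hypothetical:

    M_AGI = 234.77   # molar mass of AgI, g/mol
    M_OCH3 = 31.03   # molar mass of the -OCH3 group, g/mol

    def percent_methoxy(m_agi_g, m_sample_g):
        """Each -OCH3 group liberates one CH3I, which precipitates as one AgI."""
        return 100.0 * (M_OCH3 / M_AGI) * (m_agi_g / m_sample_g)

    print(round(percent_methoxy(0.350, 0.150), 1))  # ~30.8% OCH3 for these masses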

(v) What are Monohydric Alcohols?


Ans In chemistry, an alcohol is any organic compound in which the hydroxyl functional group (–OH) is bound to a saturated carbon atom.[2] The term alcohol originally referred to the primary alcohol ethyl alcohol (ethanol), the predominant alcohol in alcoholic beverages.
The suffix -ol appears in the IUPAC chemical name of all substances where the hydroxyl group is the functional group
with the highest priority; in substances where a higher priority group is present the prefix hydroxy- will appear in
the IUPAC name. The suffix -ol in non-systematic names (such as paracetamol or cholesterol) also typically indicates that
the substance includes a hydroxyl functional group and, so, can be termed an alcohol. But many substances, particularly sugars (for example glucose and sucrose), contain hydroxyl functional groups without using the suffix. An important class of alcohols, of which methanol and ethanol are the simplest members, is the saturated straight-chain alcohols, the general formula for which is CnH2n+1OH.
Note: Answer any two questions. Each question carries 5 marks (Word limits 500)
Q.2 Give a detailed account of the Lederer-Manasse Reaction.
Ans The Lederer–Manasse reaction is an organic aromatic hydroxyalkylation reaction. It allows the introduction of a –CH2OH group onto a phenolic aromatic ring. It was published independently in the same period (1894) by O. Manasse and L. Lederer, who studied the base-induced reaction of formaldehyde with phenol.[1][2]
After heating for a short period in the presence of an excess of phenol, the reaction products undergo a further reaction with unreacted phenol, producing a linear polymer with elimination of water.
These reactions are the basis of the preparation of phenolic resins; they were developed by Leo Baekeland, who named and commercialized them as Bakelite. They are thermoplastic solids soluble in many organic solvents. They constituted the first large-scale production of synthetic plastics, and with them the era of modern synthetic plastics began.
The reaction is also used, under particular conditions, for the formation of calixarenes.
The Lederer–Manasse reaction takes place at room temperature and is catalyzed by both dilute acids and dilute bases (see the mechanisms below). It is slow and proceeds even at low temperature. The condensation takes place predominantly in the para position, with little formation of the ortho-substituted product. When the para position is occupied, the reaction proceeds entirely toward substitution in the ortho position. The nature of the products depends on the quantity of reagents involved. With an abundance of formaldehyde the products are 2,4- and 2,6-dihydroxymethylphenol, while a large quantity of phenol leads to the production of p,p'-dihydroxyphenylmethane and on to the formation of linear polymers.

The alcohol (V) partially decomposed during distillation under reduced pressure to an
uncharacterised phenolic material. From the reaction of o-toluquinol and formaldehyde followed by
methylation, Euler and his co-workers3 isolated a compound (A) to which they assigned the structure
melting point of 2,5-dihydroxy-4-methylbenzoic acid prepared by the Kolbe reaction. The compound
(A) was not found by us to be the same as our product (IV) and we have identified it as 4-
hydroxymethyl-2,5-dimethoxytoluene by obtaining the aldehyde (II) from its oxidation with
selenium dioxide, and 2,5-dimethoxy-4-methylbenzoic acid4 by alternative oxidation procedures. By
this result the structural assignment of the aldehyde (II) is also confirmed.

The second synthesis of compound (IV) was of poor yield. It involved a Mannich reaction of
compound (III) with formaldehyde and dimethylamine 5 yielding 2-dimethylaminomethyl-4-
methoxy-6-methylphenol (VI), which, when heated with acetic anhydride 6, gave 2-acetoxy-5-
methoxy-3-methylbenzyl acetate (VII). This was hydrolysed with alkali and compound (IV) was
obtained by methylation of the hydrolysate. A by-product of this reaction was bis-(2-hydroxy-5-
methoxy-3-methylphenyl)methane (VIII), which may have arisen from the reversible
decomposition of 2-hydroxymethyl-4-methoxy-6-methylphenol (V) to give 4-methoxy-2-
methylphenol (III) and formaldehyde, followed by the condensation of compound (III) with
compound (V) or with an intermediate arising from compound (V) by loss of a molecule of water.

Q.3 Derive and explain Acid Catalyzed Ring Opening.


Ans In polymer chemistry, ring-opening polymerization (ROP) is a form of chain-growth polymerization in which the terminal end of a polymer chain acts as a reactive center where further cyclic monomers can react by opening their ring system and form a longer polymer chain. The propagating center can be radical, anionic or cationic. Some cyclic monomers such as norbornene or cyclooctadiene can be polymerized to high molecular weight polymers by using metal catalysts. ROP continues to be the most versatile method of synthesis of major groups of biopolymers, particularly when they are required in quantity.
The driving force for the ring-opening of cyclic monomers is the relief of bond-angle strain or steric repulsions between atoms at the center of a ring. Thus, as is the case for other types of polymerization, the enthalpy change in ring-opening is negative.[3]
Cyclic monomers that are polymerized using ROP encompass a variety of structures, such as:

• alkanes and alkenes,
• compounds containing heteroatoms in the ring:
  • oxygen: ethers, acetals, esters (lactones, lactides, and carbonates), and anhydrides,
  • sulfur: polysulfur, sulfides and polysulfides,
  • nitrogen: amines, amides (lactams), imides, N-carboxyanhydrides and 1,3-oxaza derivatives,
  • phosphorus: phosphates, phosphonates, phosphites, phosphines and phosphazenes,
  • silicon: siloxanes, silathers, carbosilanes and silanes.
The term "polymer" derives from the ancient Greek word πολύς (polus, meaning "many, much") and μέρος (meros, meaning "parts"), and refers to a molecule whose structure is composed of multiple repeating units, from which originates a characteristic of high relative molecular mass and attendant properties.[6] The units composing polymers derive, actually or conceptually, from molecules of low relative molecular mass.[7] The term was coined in 1833 by Jöns Jacob Berzelius, though with a definition distinct from the modern IUPAC definition.[8][9] The modern concept of polymers as covalently bonded macromolecular structures was proposed in 1920 by Hermann Staudinger, who spent the next decade finding experimental evidence for this hypothesis.[10]
Polymers are studied in the fields of biophysics and macromolecular science, and polymer science (which includes polymer chemistry and polymer physics). Historically, products arising from the linkage of repeating units by covalent chemical bonds have been the primary focus of polymer science; emerging important areas of the science now focus on non-covalent links. Polyisoprene of latex rubber and the polystyrene of styrofoam are examples of polymeric natural/biological and synthetic polymers, respectively. In biological contexts, essentially all biological macromolecules (i.e., proteins (polyamides), nucleic acids (polynucleotides), and polysaccharides) are purely polymeric, or are composed in large part of polymeric components, e.g., isoprenylated/lipid-modified glycoproteins, where small lipidic molecule and oligosaccharide modifications occur on the polyamide backbone of the protein.[11]
The simplest theoretical models for polymers are ideal chains.
Internal Assignment No. 2

Paper Code: CH - 202


Paper Title: Organic Chemistry

Q. 1. Answer all the questions:


1) Differentiate between chromophores and auxochromes.
Ans Auxochrome is a Greek word arising from two word roots; ‘auxo’ meaning “to increase” and ‘chrome’ meaning “color”.
Auxochrome is a group of atoms which will impart a particular color when attached to a chromophore but when present
alone, will fail to produce that color. Chromophore is that part of the molecule which when exposed to visible light will
absorb and reflect a certain color.

Auxochrome is a group of atoms which is functional and has the capability to alter the capacity of the chromophore to
reflect colors. Azobenzene is an example of a dye which contains a chromophore. All substances like dyes produce colors
by absorption of visible light owing to the various constituent compounds. The electromagnetic spectrum has a very wide
variation in wavelengths but the human eye visualizes only short wavelength radiation. Chromophores do not absorb light
without the requisite contents but with the presence of an auxochrome there is a shift in the absorption of these
chromogens. Auxochrome increases the color of any organic substance. For instance, benzene does not have any color of
its own, but when it is combined with the -nitro group which acts as a chromophore; it imparts a pale yellow color.
2) Show by reactions the reduction of aldehydes and ketones.
Ans An aldehyde /ˈældɨhaɪd/ or alkanal is an organic compound containing a formyl group. The formyl group is a functional group, with the structure R-CHO, consisting of a carbonyl center (a carbon double bonded to oxygen) bonded to hydrogen and an R group,[1] which is any generic alkyl or side chain. The group without R is called the aldehyde group or formyl group. Aldehydes differ from ketones in that the carbonyl is placed at the end of a carbon skeleton rather than between two carbon atoms. Aldehydes are common in organic chemistry. Many fragrances are aldehydes.
Aldehydes feature an sp2-hybridized, planar carbon center that is connected by a double bond to oxygen and a single bond to hydrogen. The C-H bond is not acidic. Because of resonance stabilization of the conjugate base, an α-hydrogen in an aldehyde is far more acidic, with a pKa near 17, than a C-H bond in a typical alkane (pKa about 50).[2] This acidification is attributed to (i) the electron-withdrawing quality of the formyl center and (ii) the fact that the conjugate base, an enolate anion, delocalizes its negative charge. Related to (i), the aldehyde group is somewhat polar.
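Since the answer above does not actually show them, the standard reduction reactions the question asks for are:
RCHO + H2 → RCH2OH (over Ni or Pt; aldehyde → primary alcohol)
R2C=O + H2 → R2CHOH (ketone → secondary alcohol)
RCHO + NaBH4, then H2O → RCH2OH (mild hydride reduction)
R2C=O → R2CH2 by Clemmensen reduction (Zn–Hg/HCl) or Wolff–Kishner reduction (NH2NH2, KOH, heat), which reduce the carbonyl all the way to CH2.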

3) Deduce pinacol rearrangement through reactions.


Ans In the course of this organic reaction, protonation of one of the –OH groups occurs and a carbocation is formed. If
both the –OH groups are not alike, then the one which yields a more stable carbocation participates in the reaction.
Subsequently, an alkyl group from the adjacent carbon migrates to the carbocation center. The driving force for this
rearrangement step is believed to be the relative stability of the resultant oxonium ion, which has complete octet
configuration at all centers (as opposed to the preceding carbocation). The migration of alkyl groups in this reaction
occurs in accordance with their usual migratory aptitude, i.e.hydride > Phenyl > tertiary carbocation (if formed by
migration) > secondary carbocation (if formed by migration) > methyl cation . The conclusion which group stabilizes
carbocation more effectively is migrated
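The parent transformation, written out: (CH3)2C(OH)–C(OH)(CH3)2 (pinacol), on treatment with acid and loss of water, gives (CH3)3C–CO–CH3 (pinacolone); a methyl group migrates from the carbon that lost water to the adjacent carbocation centre.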

4) Give any two preparations of aldehydes.

1. Ans Acyclic aliphatic aldehydes are named as derivatives of the longest carbon chain containing the aldehyde group. Thus, HCHO is named as a derivative of methane, and CH3CH2CH2CHO is named as a derivative of butane. The name is formed by changing the suffix -e of the parent alkane to -al, so that HCHO is named methanal, and CH3CH2CH2CHO is named butanal.
2. In other cases, such as when a -CHO group is attached to a ring, the suffix -carbaldehyde may be used. Thus,
C6H11CHO is known as cyclohexanecarbaldehyde. If the presence of another functional group demands the use
of a suffix, the aldehyde group is named with the prefix formyl-. This prefix is preferred to methanoyl-.
3. If the compound is a natural product or a carboxylic acid, the prefix oxo- may be used to indicate which carbon atom is part of the aldehyde group; for example, CHOCH2COOH is named 3-oxopropanoic acid.
4. If replacing the aldehyde group with a carboxyl group (-COOH) would yield a carboxylic acid with a trivial name,
the aldehyde may be named by replacing the suffix -ic acid or -oic acid in this trivial name by -aldehyde.
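Two standard preparations that answer the question directly (textbook reactions, not covered by the nomenclature notes above):
1. Controlled oxidation or dehydrogenation of a primary alcohol, e.g. RCH2OH + PCC → RCHO, or RCH2OH over Cu at about 573 K → RCHO + H2.
2. Rosenmund reduction of an acid chloride: RCOCl + H2 → RCHO, using Pd on BaSO4 (a poisoned catalyst that stops the reduction at the aldehyde stage).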
5) What is the difference between secondary and tertiary amines? Give examples.
Ans Amines and ammonia are generally sufficiently basic to undergo direct alkylation, often under mild conditions. The
reactions are difficult to control because the reaction product (a primary amine or a secondary amine) is often more nucleophilic than the precursor and will thus preferentially react with the alkylating agent. For example, reaction of 1-bromooctane with ammonia yields almost equal amounts of the primary amine and the secondary amine.[2] Therefore, for laboratory purposes, N-alkylation is often limited to the synthesis of tertiary amines. A notable exception is the reactivity of alpha-halo carboxylic acids, which do permit synthesis of primary amines with ammonia.[3] Intramolecular reactions of haloamines X-(CH2)n-NH2 give cyclic aziridines, azetidines and pyrrolidines.
N-alkylation is a general and useful route to quaternary ammonium salts from tertiary amines, because overalkylation is
not possible.
Examples of N-alkylation with alkyl halides are the syntheses of benzylaniline, [4] 1-benzylindole,[5][6] and azetidine.[7]Another
example is found in the derivatization of cyclen.[8] Industrially, ethylenediamine is produced by alkylation of ammonia
with 1,2-dichloroethane.

Note: Answer any two questions. Each question carries 5 marks

Q.2 Write short notes on :


(i) Grignard reagents (ii) Acid catalyzed ring opening (iii) Properties of carboxylic acids.
Q.3 Give the preparation of 2° and 3° amines and the mechanism of the Gabriel synthesis.
Ans The Gabriel synthesis is a chemical reaction that transforms primary alkyl halides into primary amines.
Traditionally, the reaction uses potassium phthalimide.[1][2][3] The reaction is named after the German chemist Siegmund
Gabriel.[4]
The Gabriel reaction has been generalized to include the alkylation of sulfonamides and imides, followed by deprotection, to obtain amines (see Alternative Gabriel reagents).[5][6]
The utility of the method is based on the fact that the alkylation of ammonia is an unselective and inefficient route to amines in the laboratory (on an industrial scale, the alkylation of ammonia is, however, widely employed). The conjugate base of ammonia, sodium amide (NaNH2), is more basic than it is nucleophilic. In fact, sodium amide is used to deliberately obtain the dehydrohalogenation product.
The bond angles in aziridine are approximately 60°, considerably less than the normal hydrocarbon bond angle of 109.5°, which results in angle strain, as in the comparable cyclopropane and ethylene oxide molecules. A banana bond model explains bonding in such compounds. Aziridine is less basic than acyclic aliphatic amines, with a pKa of 7.9 for the conjugate acid, due to increased s character of the nitrogen free electron pair. Angle strain in aziridine also increases the barrier to nitrogen inversion. This barrier height permits the isolation of separate invertomers, for example the cis and trans invertomers of N-chloro-2-methylaziridine.
Cyclization of haloamines and amino alcohols
An amine functional group displaces the adjacent halide in an intramolecular nucleophilic substitution reaction to generate an aziridine. Amino alcohols have the same reactivity, but the hydroxy group must first be converted into a good leaving group. The cyclization of an amino alcohol is called a Wenker synthesis (1935), and that of a haloamine the Gabriel ethylenimine method (1888).
Nitrene addition
Nitrene addition to alkenes is a well-established method for the synthesis of
aziridines. Photolysis or thermolysis of azides are good ways to generate nitrenes. Nitrenes can also be prepared in situ from iodosobenzene diacetate and sulfonamides, or the ethoxycarbonylnitrene from the N-sulfonyloxy precursor.
The toxicology of a particular aziridine compound depends on its structure and activity, although sharing the general
characteristics of aziridines. As electrophiles, aziridines are subject to attack and ring-opening by endogenous
nucleophiles such as nitrogenous bases in DNA base pairs, resulting in potential mutagenicity. [21][22][23]
Exposure
Inhalation and direct contact. Some reports note that the use of gloves has not prevented permeation of aziridine. It is therefore important that users check the breakthrough permeation times for gloves, and pay scrupulous attention to avoiding contamination when degloving. Workers handling aziridine are expected to be provided with, and required to wear and use, a half-mask filter-type respirator for dusts, mists and fumes.[24]
There is relatively little human exposure data on aziridine. This is because it is considered extremely dangerous. In
industrial settings, class A pressure suits are preferred when exposure is possible.
Carcinogenicity
The International Agency for Research on Cancer (IARC) has reviewed aziridine compounds and classified them as
possibly carcinogenic to humans (IARC Group 2B).[25] In making the overall evaluation, the IARC Working Group took into
consideration that aziridine is a direct-acting alkylating agent which is mutagenic in a wide range of test systems and
forms DNA adducts that are promutagenic.
Irritancy
Aziridines are irritants of mucosal surfaces including eyes, nose, respiratory tract and skin.
Sensitization
Aziridine rapidly penetrates skin on contact.
Skin sensitizer — causing allergic contact dermatitis and urticaria.
Respiratory sensitiser — causing occupational asthma

Q.4 Write the chemical reactions of nitroalkenes and 2,4,6-trinitrophenol.
Ans Nitro compounds are organic compounds that contain one or more nitro functional groups (–NO2). They are often
highly explosive, especially when the compound contains more than one nitro group and is impure. The nitro group is one
of the most common explosophores (functional groups that make a compound explosive) used globally. This property of both nitro and nitrate groups is because their thermal decomposition yields molecular nitrogen N2 gas plus considerable energy, due to the high strength of the bond in molecular nitrogen.
Aromatic nitro compounds are typically synthesized by the action of a mixture of nitric and sulfuric acids on an organic molecule. The one produced on the largest scale, by far, is nitrobenzene. Many explosives are produced by nitration, including trinitrophenol (picric acid), trinitrotoluene (TNT), and trinitroresorcinol (styphnic acid).
Chloramphenicol is a rare example of a naturally occurring nitro compound. At least some naturally occurring nitro groups
arise by the oxidation of amino groups.[2] 2-Nitrophenol is an aggregation pheromone of ticks.
Examples of nitro compounds are rare in nature. 3-Nitropropionic acid is found in fungi and plants (Indigofera). Nitropentadecene is a defense compound found in termites. Nitrophenylethane is found in Aniba canelilla.[3] Nitrophenylethane is also found in members of the Annonaceae, Lauraceae and Papaveraceae.[4]
Many flavin-dependent enzymes are capable of oxidizing aliphatic nitro compounds to less-toxic aldehydes and
ketones. Nitroalkane oxidase and 3-nitropropionate oxidase oxidize aliphatic nitro compounds exclusively, whereas other
enzymes such as glucose oxidase have other physiological substrates

• Aliphatic nitro compounds are reduced to amines with hydrochloric acid and an iron catalyst.
• Nitronates are a tautomeric form of aliphatic nitro compounds.
• Hydrolysis of the salts of nitro compounds yields aldehydes or ketones in the Nef reaction.
• Nitromethane adds to aldehydes in 1,2-addition in the nitroaldol reaction.
• Nitromethane adds to alpha,beta-unsaturated carbonyl compounds in 1,4-addition as a Michael donor in the Michael reaction.
• Nitroalkenes are Michael acceptors in the Michael reaction with enolate compounds.[12][13]
• In nucleophilic aliphatic substitution, sodium nitrite (NaNO2) replaces an alkyl halide. In the so-called ter Meer reaction (1876), named after Edmund ter Meer,[14] the reactant is a 1,1-halonitroalkane.
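Returning to the second part of the question, the standard preparation of 2,4,6-trinitrophenol (picric acid) can be written as: phenol is first sulfonated with concentrated H2SO4 to phenolsulfonic acid, which on treatment with concentrated HNO3 gives 2,4,6-trinitrophenol; direct nitration is avoided because nitric acid would oxidize the phenol.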

Internal Assignment No. 1

Paper Code: CH - 203


Paper Title: Physical Chemistry

Q. 1. Answer all the questions:


1) What are thermodynamic processes?
Ans A thermodynamic process is a passage of a thermodynamic system from an initial state to a final state.
In equilibrium thermodynamics, the initial and final states are states of internal thermodynamic equilibrium, each
respectively fully specified by a suitable set of thermodynamic state variables, that depend only on the current state of the
system, not the path taken to reach that state. In general, in a thermodynamic process, the system passes through
physical states which are not describable as thermodynamic states, because they are far from internal thermodynamic
equilibrium. It is possible, however, that a process may take place slowly or smoothly enough to allow its description to be
usefully approximated by a continuous path of thermodynamic states. Then it may be approximately described by
a process function that does depend on the path.

2) Give the formula for heat capacity.


Ans Heat capacity or thermal capacity is a measurable physical quantityequal to the ratio of the heat added to (or

removed from) an object to the resulting temperature change.[1] The SI unit of heat capacity is joule per kelvin and the dimensional form is L2MT−2Θ−1. Specific heat is the amount of heat needed to raise the temperature of a certain mass by 1 degree Celsius.
Heat capacity is an extensive property of matter, meaning it is proportional to the size of the system. When expressing the same phenomenon as an intensive property, the heat capacity is divided by the amount of substance, mass, or volume, so that the quantity is independent of the size or extent of the sample. The molar heat capacity is the heat capacity per unit amount (SI unit: mole) of a pure substance and the specific heat capacity, often simply called specific heat, is the heat capacity per unit mass of a material. Occasionally, in engineering contexts, the volumetric heat capacity is used.
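In symbols, using the standard definitions: C = Q/ΔT; for a sample of mass m with specific heat c, Q = m·c·ΔT; the molar heat capacity is Cm = C/n and the specific heat capacity is c = C/m.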

3) Give the basic principle of Le Chatelier.


Ans In chemistry, Le Châtelier's principle, also called Chatelier's principle or "The Equilibrium Law", can be used to
predict the effect of a change in conditions on a chemical equilibrium. The principle is named after Henry Louis Le
Châtelier and sometimes Karl Ferdinand Braun who discovered it independently. It can be stated as:
When a system at equilibrium is subjected to change in concentration, temperature, volume, or pressure, then the
system readjusts itself to (partially) counteract the effect of the applied change and a new equilibrium is
established.
or whenever a system in equilibrium is disturbed the system will adjust itself in such a way that the effect of the
change will be nullified. (in short)
This principle has a variety of names, depending upon the discipline using it (see homeostasis, a term commonly
used in biology). It is common to take Le Châtelier's principle to be a more general observation, [1] roughly stated:
Any change in status quo prompts an opposing reaction in the responding system.
In chemistry, the principle is used to manipulate the outcomes of reversible reactions, often to increase
the yield of reactions. In pharmacology, the binding of ligands to the receptor may shift the equilibrium according
to Le Châtelier's principle, thereby explaining the diverse phenomena of receptor activation and desensitization.[2] In economics, the principle has been generalized to help explain the price equilibrium of efficient economic systems.
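A standard worked example: for N2(g) + 3H2(g) ⇌ 2NH3(g), with ΔH < 0, increasing the pressure shifts the equilibrium to the right (4 moles of gas become 2), while raising the temperature shifts it to the left; this is why ammonia synthesis uses high pressure and only a moderate temperature.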

4) Give the concept of spontaneity.


Ans Spontaneous conception is the conception and birth of a subsequent child, after the birth of a child conceived through in vitro fertilisation or other forms of assisted reproductive technology. There is an overall 18% chance of spontaneous conception after an in vitro fertilization (IVF) treatment, but that chance rose to 37% among younger women (less than 27 years). The likelihood also depends on the sperm performance of the man and the egg count of the woman.
Spontaneous generation or anomalous generation is an obsolete body of thought on the ordinary formation of living
organisms without descent from similar organisms. Typically, the idea was that certain forms such as fleas could arise
from inanimate matter such as dust, or that maggots could arise from dead flesh. A variant idea was that of equivocal generation, in which species such as tapeworms arose from unrelated living organisms, now understood to be
their hosts. Doctrines supporting such processes of generation held that these processes are commonplace and regular.
Such ideas are in contradiction to that of univocal generation: effectively exclusive reproduction from genetically related
parent(s), generally of the same species

5) Show the equation for the Gibbs function.


Ans In thermodynamics, the Gibbs free energy (IUPAC recommended name: Gibbs energy or Gibbs function; also known as free enthalpy[1] to distinguish it from Helmholtz free energy) is a thermodynamic potential that measures the "usefulness" or process-initiating work obtainable from a thermodynamic system at a constant temperature and pressure (isothermal, isobaric). Just as in mechanics, where potential energy is defined as capacity to do work, similarly different potentials have different meanings. The Gibbs free energy (SI units kJ/mol) is the maximum amount of non-expansion work that can be extracted from a thermodynamically closed system (one that can exchange heat and work with its surroundings, but not matter); this maximum can be attained only in a completely reversible process. When a system changes from a well-defined initial state to a well-defined final state, the Gibbs free energy change ΔG equals the work exchanged by the system with its surroundings, minus the work of the pressure forces, during a reversible transformation of the system from the initial state to the final state.
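The equation the question asks for: G = H − TS, so that at constant temperature and pressure ΔG = ΔH − TΔS; a process is spontaneous when ΔG < 0 and at equilibrium when ΔG = 0.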

Note: Answer any two questions. Each question carries 5 marks (Word limits 500)

Q.2 Deduce Joule's law and the calculation of ΔU for the expansion of ideal gases under isothermal conditions.
Ans In thermodynamics, the Joule–Thomson effect (also known as the Joule–Kelvin effect, Kelvin–Joule effect, or Joule–Thomson expansion) describes the temperature change of a gas or liquid when it is forced through a valve or porous plug while kept insulated so that no heat is exchanged with the environment.[1][2][3] This procedure is called a throttling process or Joule–Thomson process.[4] At room temperature, all gases except hydrogen, helium and neon cool upon expansion by the Joule–Thomson process; these three gases experience the same effect but only at lower temperatures.[5][6]
The effect is named after James Prescott Joule and William Thomson, 1st Baron Kelvin, who discovered it in 1852. It
followed upon earlier work by Joule on Joule expansion, in which a gas undergoes free expansion in a vacuum and the
temperature is unchanged, if the gas is ideal.
The throttling process is commonly exploited in thermal machines such as refrigerators, air conditioners, heat pumps, and
liquefiers.[7]
Throttling is a fundamentally irreversible process. The throttling due to the flow resistance in supply lines, heat exchangers, regenerators, and other components of (thermal) machines is a source of losses that limits the performance. The physical mechanism associated with the Joule–Thomson effect is closely related to that of a shock wave.
The adiabatic (no heat exchanged) expansion of a gas may be carried out in a number of ways. The change in
temperature experienced by the gas during expansion depends not only on the initial and final pressure, but also on the
manner in which the expansion is carried out.

• If the expansion process is reversible, meaning that the gas is in thermodynamic equilibrium at all times, it is called an isentropic expansion. In this scenario, the gas does positive work during the expansion, and its temperature decreases.
• In a free expansion, on the other hand, the gas does no work and absorbs no heat, so the internal energy is conserved. Expanded in this manner, the temperature of an ideal gas would remain constant, but the temperature of a real gas may either increase or decrease, depending on the initial temperature and pressure.
• The method of expansion discussed here, in which a gas or liquid at pressure P1 flows into a region of lower pressure P2 via a valve or porous plug under steady state conditions and without change in kinetic energy, is called the Joule–Thomson process. During this process, enthalpy remains unchanged.

(Figure caption: State-transition points for steam show a similar pattern to temperature inversion points on a T–s diagram for steam.)
A throttling process proceeds along a constant-enthalpy curve in the direction of decreasing pressure, which means that
the process occurs from right to left on a temperature–pressure diagram. If the pressure starts out high enough, the temperature increases as the pressure drops until an inversion temperature is reached; the point at which this happens is called the inversion point.
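Linking this back to the question: by Joule's law the internal energy of an ideal gas depends only on temperature, so for an isothermal expansion ΔU = 0 and q = −w, with reversible work w = −nRT ln(V2/V1). Correspondingly, the Joule–Thomson coefficient μJT = (∂T/∂P)H vanishes for an ideal gas, which is why ideal gases neither cool nor warm on throttling.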

Q.3 Explain conductance in metals and the variation of molar and equivalent conductivity.


Ans Molar conductivity is defined as the conductivity of an electrolyte solution divided by the molar concentration of the electrolyte, and so measures the efficiency with which a given electrolyte conducts electricity in solution. Its units are siemens per meter per molarity, or siemens meter-squared per mole. The usual symbol is a capital lambda, Λ or Λm.
Alternatively: the molar conductivity of a solution at a given concentration is the conductance of the volume (V) of the solution containing one mole of electrolyte, kept between two electrodes with area of cross section (A) and at a distance of unit length.
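In symbols: Λm = κ/c, where κ is the conductivity and c the molar concentration (equivalently Λm = κV for the volume V containing one mole of electrolyte); for strong electrolytes, Kohlrausch's law gives the concentration variation as Λm = Λ°m − K√c.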
Coordination compounds (or complexes) are molecules and extended solids that contain bonds between a transition
metal ion and one or more ligands. In forming these coordinate covalent bonds, the metal ions act as Lewis acids and
the ligands act as Lewis bases. Typically, the ligand has a lone pair of electrons, and the bond is formed by overlap of the
molecular orbital containing this electron pair with the d-orbitals of the metal ion. Ligands that are commonly found in
coordination complexes are neutral molecules (H 2O, NH3, organic bases such as pyridine, CO, NO, H2, ethylene, and
phosphines PR3) and anions (halides, CN-, SCN-, cyclopentadienide (C5H5-), H-, etc.). The resulting complexes can be
cationic (e.g., [Cu(NH3)4]2+), neutral ([Pt(NH3)2Cl2]) or anionic ([Fe(CN)6]4-). As we will see below, ligands that have weak or
negligible strength as Brønsted bases (for example, CO, CN -, H2O, and Cl-) can still be potent Lewis bases in forming
transition metal complexes.
Coordinate covalent bonds (also called dative bonds) are typically drawn as lines, or sometimes as arrows to indicate that
the electron pair "belongs" to the ligand X
Related learning goals:

 Determine oxidation states and assign d-electron counts for transition metals in complexes.
 Derive the d-orbital splitting patterns for octahedral, elongated octahedral, square pyramidal, square planar, and
tetrahedral complexes.
 For octahedral and tetrahedral complexes, determine the number of unpaired electrons and calculate the crystal
field stabilization energy.
 Know the spectrochemical series, rationalize why different classes of ligands impact the crystal field splitting
energy as they do, and use it to predict high vs. low spin complexes, and the colors of transition metal complexes.
 Use the magnetic moment of transition metal complexes to determine their spin state.
 Understand the origin of the Jahn-Teller effect and its consequences for complex shape, color, and reactivity.
 Understand the extra stability of complexes formed by chelating and macrocyclic ligands.

Internal Assignment No. 2

Paper Code: CH - 203


Paper Title: Physical Chemistry

Q. 1. Answer all the questions:


1) Give the spectrochemical series.
Ans A spectrochemical series is a list of ligands ordered on ligand strength and a list of metal ions based on oxidation
number, group and its identity. In crystal field theory, ligands modify the difference in energy between the d orbitals (Δ)
called the ligand-field splitting parameter for ligands or the crystal-field splitting parameter, which is mainly reflected
in differences in color of similar metal-ligand complexes.
However, keep in mind that "the spectrochemical series is essentially backwards from what it should be for a reasonable
prediction based on the assumptions of crystal field theory." [1] This deviation from crystal field theory highlights the
weakness of crystal field theory's assumption of purely ionic bonds between metal and ligand.
The order of the spectrochemical series can be derived from the understanding that ligands are frequently classified by
their donor or acceptor abilities. Some, like NH3, are σ-bond donors only, with no orbitals of appropriate symmetry
for π-bonding interactions. Bonding by these ligands to metals is relatively simple, using only the σ bonds to create
relatively weak interactions. Another example of a σ-bonding ligand is ethylenediamine; however, ethylenediamine has a
stronger effect than ammonia, generating a larger ligand field. The commonly quoted series, from weak-field to strong-field
ligands, runs: I− < Br− < S2− < SCN− < Cl− < NO3− < F− < OH− < C2O42− < H2O < NCS− < CH3CN < py < NH3 < en < bipy < phen < NO2− < PPh3 < CN− < CO.

2) Give the structure of Hemoglobin.


Ans Hemoglobin A, the main adult form, is a tetramer of two α and two β globin subunits, each of which carries a heme
group (an Fe2+ ion held in a protoporphyrin IX ring) at which oxygen binds. In mammals, the protein makes up about 96%
of the red blood cells' dry content (by weight), and around 35% of the total content (including water).[3] Hemoglobin has an
oxygen-binding capacity of 1.34 mL O2 per gram,[4] which increases the total blood oxygen capacity seventy-fold compared
to dissolved oxygen in blood. The mammalian hemoglobin molecule can bind (carry) up to four oxygen molecules.[5]
Hemoglobin is involved in the transport of other gases: It carries some of the body's respiratory carbon dioxide (about
10% of the total) as carbaminohemoglobin, in which CO2 is bound to the globin protein. The molecule also carries the
important regulatory molecule nitric oxide bound to a globin protein thiol group, releasing it at the same time as oxygen.

3) What types of bonds in STYX?


Ans Boranes are electron-deficient and pose a problem for conventional descriptions of covalent bonding that involves
shared electron pairs. BH3 is a trigonal planar molecule (D3h molecular symmetry). Diborane has a hydrogen-bridged
structure; see the diborane article. The description of the bonding in the larger boranes, formulated by William
Lipscomb, involved:

 3-center 2-electron B-H-B hydrogen bridges


 3-center 2-electron B-B-B bonds
 2-center 2-electron bonds (in B-B, B-H and BH2)

The styx number was introduced to aid in electron counting where s = count of 3-center B-H-B bonds; t = count of 3-
center B-B-B bonds; y = count of 2-center B-B bonds and x = count of BH 2 groups.
Lipscomb's methodology has largely been superseded by a molecular orbital approach, although it still affords insights.
The results of this have been summarised in a simple but powerful rule, PSEPT, often known as Wade's rules, that can be
used to predict the cluster type, closo-, nido-, etc. The power of this rule is its ease of use and general applicability to
many different cluster types other than boranes. There are continuing efforts by theoretical chemists to improve the
treatment of the bonding in boranes — an example is Stone's tensor surface harmonic treatment of cluster bonding. A
recent development is the four-center two-electron bond.

4) What is effective Atomic number (EAN)?


Ans In coordination chemistry, the effective atomic number (EAN) of a metal in a complex is its total electron count:
the metal's atomic number, minus the electrons lost to its oxidation state, plus the electrons donated by the ligands;
Sidgwick's EAN rule notes that complexes tend to be stable when this count equals that of a noble gas (e.g. 36, 54 or 86).
A related but distinct quantity, the effective nuclear charge Zeff, is the number of protons that an electron in the element
effectively 'sees' due to screening by inner-shell electrons. It is a measure of the electrostatic interaction between the
negatively charged electrons and positively charged protons in the atom. One can view the electrons in an atom as being
'stacked' by energy outside the nucleus; the lowest energy electrons (such as the 1s and 2s electrons) occupy the space
closest to the nucleus, and electrons of higher energy are located further from the nucleus.
The binding energy of an electron, or the energy needed to remove the electron from the atom, is a function of
the electrostatic interaction between the negatively charged electrons and the positively charged nucleus. In iron, atomic
number 26, for instance, the nucleus contains 26 protons. The electrons that are closest to the nucleus will 'see' nearly all
of them. However, electrons further away are screened from the nucleus by other electrons in between, and feel less
electrostatic interaction as a result.
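As a rough illustration of the screening idea, a minimal Python sketch estimating Zeff for a 3d electron of iron using Slater's empirical rules (the function name and electron grouping below are my own illustrative choices, not a standard routine; the rules themselves, 0.35 per other electron in the same d group and 1.00 per inner electron, are not stated in the text above):

def zeff_3d(Z, n_same_group, n_inner):
    # Slater screening for an nd electron: 0.35 per other electron in the
    # same d group, 1.00 per electron in any inner group.
    S = 0.35 * (n_same_group - 1) + 1.00 * n_inner
    return Z - S

# Fe (Z = 26), configuration [Ar] 3d6 4s2: six 3d electrons and
# eighteen inner electrons (1s2 2s2 2p6 3s2 3p6); 4s does not shield 3d.
print(zeff_3d(26, 6, 18))  # 26 - (0.35*5 + 18.0) = 6.25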

5) Derive the structure of m3 (co)12 type.


Ans (The M3(CO)12 carbonyls, M = Fe, Ru, Os, are triangular trimetallic clusters: Ru3(CO)12 and Os3(CO)12 have D3h
structures with four terminal CO ligands on each metal, while Fe3(CO)12 additionally has two CO ligands bridging one
Fe–Fe edge, lowering the symmetry to C2v.)
Paper is a thin material produced by pressing together moist fibres of cellulose pulp derived
from wood, rags or grasses, and drying them into flexible sheets. It is a versatile material with many uses,
including writing, printing, packaging, cleaning, and a number of industrial and construction processes.
It and the pulp papermaking process is said to have been developed in China during the early 2nd century AD, possibly as
early as the year 105 A.D.,[1] by the Han court eunuch Cai Lun, although the earliest archaeological fragments of paper
derive from the 2nd century BC in China.[2] The modern pulp and paper industry is global, with China leading its
production and the United States right behind it.
The oldest known archaeological fragments of the immediate precursor to modern paper, date to the 2nd century BC
in China. The pulp papermaking process is ascribed to Cai Lun, a 2nd-century AD Han court eunuch.[2] With paper as an
effective substitute for silk in many applications, China could export silk in greater quantity, contributing to a Golden Age.
Its knowledge and uses spread from China through the Middle East to medieval Europe in the 13th century, where the
first water powered paper mills were built.[3] Because of paper's introduction to the West through the city of Baghdad, it
was first called bagdatikos.

Note: Answer any two questions. Each question carries 5 marks (Word limits 500)

Q.2 Give the structure and nature of bonding in the mononuclear carbonyls.
Ans Metal carbonyls are coordination complexes of transition metals with carbon monoxide ligands. Metal carbonyls are
useful in organic synthesis and as catalysts or catalyst precursors in homogeneous catalysis, such
as hydroformylation and Reppe chemistry. In the Mond process, nickel carbonyl is used to produce pure nickel. In
organometallic chemistry, metal carbonyls serve as precursors for the preparation of other organometallic complexes.
Metal carbonyls are toxic by skin contact, inhalation or ingestion, in part because of their ability to carbonylate hemoglobin
to give carboxyhemoglobin, which prevents the binding of O2.
The nomenclature of the metal carbonyls depends on the charge of the complex, the number and type of central atoms,
and the number and type of ligands and their binding modes. They occur as neutral complexes, as positively charged
metal carbonyl cations or as negatively charged metal carbonylates. The carbon monoxide ligand may be bound
terminally to a single metal atom or bridging to two or more metal atoms. These complexes may be homoleptic, that is
containing only CO ligands, such as nickel carbonyl (Ni(CO)4), but more commonly metal carbonyls are heteroleptic and
contain a mixture of ligands.
Mononuclear metal carbonyls contain only one metal atom as the central atom. Except for vanadium hexacarbonyl, only
metals with an even atomic number, such as chromium, iron, nickel and their homologues, form neutral mononuclear
complexes; metals with odd atomic numbers form polynuclear metal carbonyls containing a metal–metal bond.[2]
Complexes with different metals, but only one type of ligand, are referred to as isoleptic.[2]
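The stability pattern behind this even/odd rule is the 18-electron count; a quick worked tally for the common neutral mononuclear carbonyls (each CO donates two electrons):

Ni(CO)4: 10 (Ni, d10) + 4 × 2 = 18
Fe(CO)5: 8 (Fe) + 5 × 2 = 18
Cr(CO)6: 6 (Cr) + 6 × 2 = 18
V(CO)6: 5 (V) + 6 × 2 = 17 (the odd-electron exception, which is why V(CO)6 is readily reduced to [V(CO)6]−)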
The number of carbon monoxide ligands in a metal carbonyl complex is indicated by a Greek numerical prefix followed by
the word carbonyl. Carbon monoxide has different binding modes in metal carbonyls, differing in hapticity and in bridging
mode. Hapticity describes the number of atoms of the CO ligand that are directly bonded to the central atom; it is denoted
by the prefix ηn prefixed to the name of the complex, where the superscript n gives the number of bonded atoms. In
monohapto coordination, as in terminally bonded carbon monoxide, the hapticity is 1 and it is usually not separately
designated. If carbon monoxide is bound to the metal via both the carbon atom and the oxygen atom, it is referred to as
dihapto-coordinated, η2.[3]
The carbonyl ligand engages in a range of bonding modes in metal carbonyl dimers and clusters. In the most common
bridging mode, the CO ligand bridges a pair of metals. This bonding mode is observed in the commonly available metal
carbonyls: Co2(CO)8, Fe2(CO)9, Fe3(CO)12, and Co4(CO)12.[1][4] In certain higher nuclearity clusters, CO bridges between
three or even four metals. These ligands are denoted μ3-CO and μ4-CO. Less common are bonding modes in which both
C and O bond to the metal, e.g. μ3-η2.

Q.3 Explain pearson's HSAB concept.


Ans The HSAB concept is an initialism for "hard and soft (Lewis) acids and bases". Also known as the Pearson acid
base concept, HSAB is widely used in chemistry for explaining stability of compounds, reaction mechanisms and
pathways. It assigns the terms 'hard' or 'soft', and 'acid' or 'base' to chemical species. 'Hard' applies to species which are
small, have high charge states (the charge criterion applies mainly to acids, to a lesser extent to bases), and are weakly
polarizable. 'Soft' applies to species which are big, have low charge states and are strongly polarizable.[1] The concept is a
way of applying the notion of orbital overlap to specific chemical cases.
The theory is used in contexts where a qualitative, rather than quantitative, description would help in understanding the
predominant factors which drive chemical properties and reactions. This is especially so in transition metal chemistry,
where numerous experiments have been done to determine the relative ordering of ligands and transition metal ions in
terms of their hardness and softness.
HSAB theory is also useful in predicting the products of metathesis reactions. Quite recently it has been shown that even
the sensitivity and performance of explosive materials can be explained on basis of HSAB theory. [2]
Ralph Pearson introduced the HSAB principle in the early 1960s[3][4][5] as an attempt to unify inorganic and organic
reaction chemistry.
The gist of this theory is that soft acids react faster and form stronger bonds with soft bases, whereas hard acids react
faster and form stronger bonds with hard bases, all other factors being equal.[7] The classification in the original work was
mostly based on equilibrium constants for reaction of two Lewis bases competing for a Lewis acid.
Hard acids and hard bases tend to have the following characteristics:

 small atomic/ionic radius
 high oxidation state
 low polarizability
 high electronegativity (bases)
 hard bases have highest-occupied molecular orbitals (HOMO) of low energy, and hard acids have lowest-
unoccupied molecular orbitals (LUMO) of high energy.[7][8]

Examples of hard acids are: H+, light alkali ions (Li through K all have small ionic radius), Ti4+, Cr3+, Cr6+, BF3. Examples of
hard bases are: OH–, F–, Cl–, NH3, CH3COO–, CO32–. The affinity of hard acids and hard bases for each other is
mainly ionic in nature.
Soft acids and soft bases tend to have the following characteristics:

 large atomic/ionic radius


 low or zero oxidation state bonding
 high polarizability
 low electronegativity
 soft bases have HOMO of higher energy than hard bases, and soft acids have LUMO of lower energy than hard
acids. (However, the soft-base HOMO energies are still lower than the soft-acid LUMO energies.)

Internal Assignment No. 1

Paper Code: MT - 201


Paper Title: Real Analysis & Metric Space

Q. 1. Answer all the questions:


1) Write the Heine–Borel property.
Ans In the topology of metric spaces the Heine–Borel theorem, named after Eduard Heine and Émile Borel, states:
For a subset S of Euclidean space Rn, the following two statements are equivalent:

 S is closed and bounded


 S is compact (that is, every open cover of S has a finite subcover).

In the context of real analysis, the former property is sometimes used as the defining property of compactness. However,
the two definitions cease to be equivalent when we consider subsets of more general metric spaces and in this generality
only the latter property is used to define compactness. In fact, the Heine–Borel theorem for arbitrary metric spaces reads:
A subset of a metric space is compact if and only if it is complete and totally bounded.

2) Define series and sequence.


Ans In mathematics, a sequence is an ordered collection of objects in which repetitions are allowed. Like a set, it
containsmembers (also called elements, or terms). The number of elements (possibly infinite) is called the length of the
sequence. Unlike a set, order matters, and exactly the same elements can appear multiple times at different positions in
the sequence. Formally, a sequence can be defined as a function whose domain is a countable totally ordered set, such
as the natural numbers.
For example, (M, A, R, Y) is a sequence of letters with the letter 'M' first and 'Y' last. This sequence differs from (A, R, M,
Y). Also, the sequence (1, 1, 2, 3, 5, 8), which contains the number 1 at two different positions, is a valid sequence.
Sequences can be finite, as in these examples, or infinite, such as the sequence of all even positive integers (2, 4, 6,...).
In computing and computer science, finite sequences are sometimes called strings, words or lists, the different names
commonly corresponding to different ways to represent them into computer memory; infinite sequences are also
called streams. The empty sequence ( ) is included in most notions of sequence, but may be excluded depending on the
context.

3) Write the different applications of contraction maps (I) and contraction maps (II).
Ans In mathematics, the Banach fixed-point theorem (also known as the contraction mapping
theorem orcontraction mapping principle) is an important tool in the theory of metric spaces; it guarantees the
existence and uniqueness of fixed points of certain self-maps of metric spaces, and provides a constructive method to find
those fixed points. The theorem is named after Stefan Banach (1892–1945), and was first stated by him in 1922.

 A standard application is the proof of the Picard–Lindelöf theorem about the existence and uniqueness of
solutions to certain ordinary differential equations. The sought solution of the differential equation is expressed as a
fixed point of a suitable integral operator which transforms continuous functions into continuous functions. The
Banach fixed-point theorem is then used to show that this integral operator has a unique fixed point.
 One consequence of the Banach fixed-point theorem is that small Lipschitz perturbations of the identity are bi-
Lipschitz homeomorphisms. Let Ω be an open set of a Banach space E; let I : Ω → E denote the identity (inclusion)
map and let g : Ω → E be a Lipschitz map of constant k < 1. Then

1. Ω′ := (I+g)(Ω) is an open subset of E: precisely, for any x in Ω such that B(x, r) ⊂ Ω one has B((I+g)(x), r(1−k)) ⊂ Ω′;
2. I+g : Ω → Ω′ is a bi-Lipschitz homeomorphism;
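To illustrate the constructive side of the theorem, a minimal Python sketch of fixed-point iteration for the contraction x ↦ cos x on [0, 1] (a contraction there since |sin x| ≤ sin 1 < 1; the function name, tolerance and starting point are illustrative choices of mine):

import math

def fixed_point(f, x0, tol=1e-12, max_iter=1000):
    # Iterate x_{n+1} = f(x_n); for a contraction this converges to the
    # unique fixed point at a geometric rate (Banach fixed-point theorem).
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

print(fixed_point(math.cos, 1.0))  # ~0.7390851332, the unique fixed point of cos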

4) Write the scope of Cauchy's first method.


Ans Stress–strain analysis (or stress analysis) is an engineering discipline covering methods to determine
the stressesand strains in materials and structures subjected to forces or loads.
Stress analysis is a primary task for civil, mechanical and aerospace engineers involved in the design of structures of all
sizes, such as tunnels, bridges and dams, aircraft and rocket bodies, mechanical parts, and even plastic
cutlery and staples. Stress analysis is also used in the maintenance of such structures, and to investigate the causes of
structural failures.
Typically, the input data for stress analysis are a geometrical description of the structure, the properties of the materials
used for its parts, how the parts are joined, and the maximum or typical forces that are expected to be applied to each
point of the structure. The output data is typically a quantitative description of the stress over all those parts and joints,
and the deformation caused by those stresses. The analysis may consider forces that vary with time, such
as engine vibrations or the load of moving vehicles. In that case, the stresses and deformations will also be functions of
time and space.

5) Define metric spaces with examples.


Ans In mathematics, a metric space is a set for which distances between all members of the set are defined. Those
distances, taken together, are called a metric on the set. Formally, a metric on a set M is a function d : M × M → R such
that (i) d(x, y) ≥ 0, with d(x, y) = 0 if and only if x = y, (ii) d(x, y) = d(y, x), and (iii) d(x, z) ≤ d(x, y) + d(y, z)
(the triangle inequality).
The most familiar metric space is 3-dimensional Euclidean space. In fact, a "metric" is the generalization of the Euclidean
metric arising from the four long-known properties of the Euclidean distance. The Euclidean metric defines the distance
between two points as the length of the straight line segment connecting them. Other metric spaces occur for example
in elliptic geometry and hyperbolic geometry, where distance on a sphere measured by angle is a metric, and
the hyperboloid model of hyperbolic geometry is used by special relativity as a metric space of velocities.
A metric on a space induces topological properties like open and closed sets, which lead to the study of more
abstract topological spaces.

Note: Answer any two questions. Each question carries 5 marks (Word limits 500)

Q.2 If x, y ∈ R then prove that:


(i) |x + y| ≤ |x| + |y|
(ii) |x - y| ≥ |x| - |y|
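A proof sketch, by the standard argument from the definition of absolute value:

(i) Since −|x| ≤ x ≤ |x| and −|y| ≤ y ≤ |y|, adding the two gives −(|x| + |y|) ≤ x + y ≤ |x| + |y|, which is exactly |x + y| ≤ |x| + |y|.

(ii) Applying (i) to x = (x − y) + y gives |x| ≤ |x − y| + |y|, hence |x − y| ≥ |x| − |y|.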
Q.3 State and prove the Bolzano–Weierstrass theorem.
Ans In mathematics, specifically in real analysis, the Bolzano–Weierstrass theorem, named after Bernard
Bolzano and Karl Weierstrass, is a fundamental result about convergence in a finite-dimensional Euclidean space Rn. The
theorem states that each bounded sequence in Rn has a convergent subsequence.[1] An equivalent formulation is that a
subset of Rn is sequentially compact if and only if it is closed and bounded.[2] The theorem is sometimes called
the sequential compactness theorem
Suppose A is a subset of Rn with the property that every sequence in A has a subsequence converging to an element

of A. Then A must be bounded, since otherwise there exists a sequence xm in A with || xm || ≥ m for all m, and then every

subsequence is unbounded and therefore not convergent. Moreover, A must be closed, since from a noninterior point x in
the complement of A one can build an A-valued sequence converging to x. Thus the subsets A of Rn for which every
sequence in A has a subsequence converging to an element of A – i.e., the subsets which are sequentially compact in the
subspace topology – are precisely the closed and bounded sets.
This form of the theorem makes especially clear the analogy to the Heine–Borel theorem, which asserts that a subset
of Rn is compact if and only if it is closed and bounded. In fact, general topology tells us that a metrizable space is compact
if and only if it is sequentially compact, so that the Bolzano–Weierstrass and Heine–Borel theorems are essentially the
same.
The Bolzano–Weierstrass theorem is named after mathematicians Bernard Bolzano and Karl Weierstrass. It was actually
first proved by Bolzano in 1817 as a lemma in the proof of the intermediate value theorem. Some fifty years later the result
was identified as significant in its own right, and proved again by Weierstrass. It has since become an essential theorem
of analysis.
There are different important equilibrium concepts in economics, the proofs of the existence of which often require
variations of the Bolzano–Weierstrass theorem. One example is the existence of a Pareto efficient allocation. An
allocation is a matrix of consumption bundles for agents in an economy, and an allocation is Pareto efficient if no change
can be made to it which makes no agent worse off and at least one agent better off (here rows of the allocation matrix
must be rankable by a preference relation). The Bolzano–Weierstrass theorem allows one to prove that if the set of
allocations is compact and non-empty, then the system has a Pareto-efficient allocation.

Q.4 If f(x) = (x − 3) log x, then prove the equation x log x = 3 − x is satisfied by at least one value of x lying between 1 and 3.
Ans In calculus, Rolle's theorem essentially states that any real-valued differentiable function that attains equal values at
two distinct points must have a stationary point somewhere between them—that is, a point where the first derivative (the
slope of the tangent line to the graph of the function) is zero.
Since the proof for the standard version of Rolle's theorem and the generalization are very similar, we prove the
generalization.
The idea of the proof is to argue that if f(a) = f(b), then f must attain either a maximum or a minimum somewhere
between a and b, say at c, and the function must change from increasing to decreasing (or the other way around) at c. In
particular, if the derivative exists, it must be zero at c.
By assumption, f is continuous on [a,b], and by the extreme value theorem attains both its maximum and its minimum in
[a,b]. If these are both attained at the endpoints of [a,b], then f is constant on [a,b] and so the derivative of f is zero at
every point in (a,b).
Suppose then that the maximum is obtained at an interior point c of (a,b) (the argument for the minimum is very similar,
just consider −f ). We shall examine the above right- and left-hand limits separately.
For a real h such that c + h is in [a,b], the value f(c + h) is smaller than or equal to f(c) because f attains its maximum at c.
Therefore, for every h > 0, (f(c + h) − f(c))/h ≤ 0, so the right-hand limit gives f′(c) ≤ 0; the same argument with h < 0
gives f′(c) ≥ 0, and hence f′(c) = 0.
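Applying this to the question itself (with log the natural logarithm): f(x) = (x − 3) log x is continuous on [1, 3], differentiable on (1, 3), and f(1) = (−2)·log 1 = 0 = f(3). By Rolle's theorem there is some c in (1, 3) with

f′(c) = log c + (c − 3)/c = 0,

and multiplying by c gives c log c = 3 − c; that is, the equation x log x = 3 − x is satisfied by at least one x between 1 and 3.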
We can also generalize Rolle's theorem by requiring that f has more points with equal values and greater regularity.
Specifically, suppose that

 the function f is n − 1 times continuously differentiable on the closed interval [a,b] and the nth derivative exists on
the open interval (a,b), and
 there are n intervals given by a1 < b1 ≤ a2 < b2 ≤ . . .≤ an < bn in [a,b] such that f(ak) = f(bk) for every k from 1 to n.

Then there is a number c in (a,b) such that the nth derivative of f at c is zero.
The requirements concerning the nth derivative of f can be weakened as in the generalization above, giving the
corresponding (possibly weaker) assertions for the right- and left-hand limits defined above with f (n−1) in place of f
Rolle's theorem is a property of differentiable functions over the real numbers, which are an ordered field. As such, it does
not generalize to other fields, but the following corollary does: if a real polynomial splits (has all its roots) over the real
numbers, then its derivative does as well – one may call this property of a field Rolle's property. More general fields do
not always have a notion of differentiable function, but they do have a notion of polynomials, which can be symbolically
differentiated. Similarly, more general fields may not have an order, but one has a notion of a root of a polynomial lying in
a field.
Thus Rolle's theorem shows that the real numbers have Rolle's property, and any algebraically closed field such as
the complex numbers has Rolle's property, but conversely the rational numbers do not – for example, x³ − x = x(x − 1)(x + 1)
splits over the rationals, but its derivative 3x² − 1 does not, since its roots ±1/√3 are irrational.
Internal Assignment No.2
Code: MT-201
Paper Title: Real Analysis and Metric Space
Q1. Answer all the questions. 1x5=5
(i) Find the following limit:
lim (x→0) (1 − cos x²)/(x² sin x²)
(ii) Determine whether the limit
lim (x→3) f(x), where f(x) = |x − 3|/(x − 3), x ≠ 3,
exists.
ANS
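Worked answers, under the readings stated above:

(i) With u = x², using 1 − cos u ~ u²/2 and sin u ~ u as u → 0:
(1 − cos x²)/(x² sin x²) ~ (x⁴/2)/(x²·x²) = 1/2, so the limit is 1/2.

(ii) f(x) = |x − 3|/(x − 3) equals +1 for x > 3 and −1 for x < 3; the two one-sided limits at 3 differ, so lim (x→3) f(x) does not exist.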

(iii) Check whether the intervals ]3, 7] and [8, 12[ are equivalent or not.

ANS An equal temperament is a musical temperament, or a system of tuning, in which every pair of adjacent pitches is
separated by the same interval. In other words, the pitches of an equal temperament can be produced by repeating a
generating interval. Equal intervals also means equal ratios between the frequencies of any adjacent pair, and,
since pitch is perceived roughly as the logarithm of frequency, equal perceived "distance" from every note to its nearest
neighbor.
In equal temperament tunings, the generating interval is often found by dividing some larger desired interval, often
the octave (ratio 2/1), into a number of smaller equal steps (equal frequency ratios between successive notes).
For classical music and Western music in general, the most common tuning system for the past few hundred years has
been and remains twelve-tone equal temperament (also known as 12 equal temperament, 12-TET, or 12-ET), which
divides the octave into 12 parts, all of which are equal on a logarithmic scale. That resulting smallest interval, 1/12 the
width of an octave, is called a semitone or half step. In modern times, 12TET is usually tuned relative to a standard pitch
of 440 Hz, called A440, meaning one pitch is tuned to A440, and all other pitches are some multiple of semitones away
from that in either direction, although the standard pitch has not always been 440 and has fluctuated and generally risen
over the past few hundred years.
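For the intervals in the question itself, a short argument: the map f(x) = 15 − x sends ]3, 7] bijectively onto [8, 12[ (x = 7 goes to 8, and as x decreases toward 3, f(x) increases toward 12 without reaching it). Since a bijection exists between them, the two intervals are equivalent (equinumerous); indeed both have the cardinality of the continuum.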

(iv) Write the inequality 4 ≤ 2x + 3 ≤ 6 in the modulus form.


ANS In mathematics, a modular form is a (complex) analytic function on the upper half-plane satisfying a certain kind
of functional equation with respect to the group action of the modular group, and also satisfying a growth condition. The
theory of modular forms therefore belongs to complex analysis, but the main importance of the theory has traditionally
been in its connections with number theory. Modular forms appear in other areas, such as algebraic topology and string
theory.
A modular function is a modular form invariant with respect to the modular group but without the condition that f(z)
be holomorphic at infinity. Instead, modular functions are meromorphic at infinity.
Modular form theory is a special case of the more general theory of automorphic forms, and therefore can now be seen as
just the most concrete part of a rich theory of discrete groups.
A modular form of weight k for the modular group SL(2, Z) is a complex-valued function f on the upper half-plane
H = {z ∈ C : Im(z) > 0}, satisfying the following three conditions:

(1) f is a holomorphic function on H.
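The inequality in the question converts as follows: 4 ≤ 2x + 3 ≤ 6 says that 2x + 3 lies in the interval [4, 6], whose midpoint is 5 and half-width is 1, so

|2x + 3 − 5| ≤ 1, i.e. |2x − 2| ≤ 1, i.e. |x − 1| ≤ 1/2.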

(v) Draw the graph of the function f given by
f(x) = |5 − x| + |x − 3|, x ∈ [2, 6]
ANS Remember that f′(x) is the slope of the tangent at the point (x, f(x)) on the graph of f. To sketch the graph of f′, we
make a table with several values of x (the corresponding points are shown on the graph) and rough estimates of the slope
of the tangent f′(x).

x      0    0.5    1    1.5    2    2.5    3
f′(x)  3    0     −4    −3     0    1      0

(Note that rough estimates are the best we can do; it is difficult to measure the slope of the tangent accurately without using
a grid and a ruler, so we couldn't reasonably expect two people's estimates to agree. However, all that is asked for is a rough
sketch of the derivative.) Plotting these points suggests the curve shown below.

Notice that the graph of f′(x) intersects the x-axis at points that correspond to the high and low points on the graph of f(x).
Why is this so?
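For the function in the question itself (reading the expression as f(x) = |5 − x| + |x − 3|), the graph is easiest to draw from the piecewise form:

f(x) = 8 − 2x for 2 ≤ x ≤ 3;  f(x) = 2 for 3 ≤ x ≤ 5;  f(x) = 2x − 8 for 5 ≤ x ≤ 6.

The graph is thus a descending segment from (2, 4) to (3, 2), a horizontal segment at height 2 from x = 3 to x = 5, and an ascending segment from (5, 2) to (6, 4).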
Any two questions.
Q.No.2: Test the following series for convergence (5 marks):
Σ (n = 1 to ∞) n·xⁿ⁻¹, x > 0
ANS Definition: An infinite series Σaₙ is an alternating series iff either aₙ = (−1)ⁿ|aₙ| or aₙ = (−1)ⁿ⁺¹|aₙ| holds.
Example: the alternating harmonic series Σ (n = 1 to ∞) (−1)ⁿ⁺¹/n = 1 − 1/2 + 1/3 − 1/4 + ⋯.

Theorem (Leibniz's test): If the sequence {aₙ} satisfies 0 < aₙ, aₙ₊₁ ≤ aₙ, and aₙ → 0, then the alternating series
Σ (n = 1 to ∞) (−1)ⁿ⁺¹aₙ converges.

Proof: Write the partial sum s₂ₙ in two ways:
s₂ₙ = (a₁ − a₂) + (a₃ − a₄) + ⋯ + (a₂ₙ₋₁ − a₂ₙ), which shows that s₂ₙ₊₂ ≥ s₂ₙ;
s₂ₙ = a₁ − (a₂ − a₃) − (a₄ − a₅) − ⋯ − (a₂ₙ₋₂ − a₂ₙ₋₁) − a₂ₙ, which shows that s₂ₙ is bounded above by a₁.
Therefore s₂ₙ converges, say s₂ₙ → L. Since s₂ₙ₊₁ = s₂ₙ + a₂ₙ₊₁ and aₙ → 0, also s₂ₙ₊₁ → L. We conclude that
Σ(−1)ⁿ⁺¹aₙ converges. The alternating harmonic series satisfies these hypotheses (terms positive, decreasing, tending
to 0), so it converges.
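The series in the question is in fact a positive-term series, so a direct test applies. By the ratio test, with aₙ = n·xⁿ⁻¹:

aₙ₊₁/aₙ = ((n + 1)/n)·x → x as n → ∞.

Hence the series converges for 0 < x < 1 and diverges for x > 1; at x = 1 it becomes Σ n, which diverges. So Σ n·xⁿ⁻¹ converges exactly for 0 < x < 1.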
Q. No.3: Determine the local minimum and local maximum values of the function defined by
f(x) = 3 − 5x³ + 5x⁴ − x⁵
ANS In mathematical analysis, the maxima and minima (the plurals of maximum and minimum) of a function, known
collectively as extrema (the plural of extremum), are the largest and smallest values of the function, either within a given
range (the local or relative extrema) or on the entire domain of a function (the global or absolute extrema).[1][2][3] Pierre de
Fermat was one of the first mathematicians to propose a general technique, adequality, for finding the maxima and
minima of functions.
As defined in set theory, the maximum and minimum of a set are the greatest and least elements in the set, respectively.
Unbounded infinite sets, such as the set of real numbers, have no minimum or maximum.

 The function x2 has a unique global minimum at x = 0.


 The function x3 has no global minima or maxima. Although the first derivative (3x2) is 0 at x = 0, this is an inflection
point.

 The function x^(1/x) has a unique global maximum over the positive real numbers at x = e.
 The function x^(−x) has a unique global maximum over the positive real numbers at x = 1/e.
 The function x3/3 − x has first derivative x2 − 1 and second derivative 2x. Setting the first derivative to 0 and
solving for x gives stationary points at −1 and +1. From the sign of the second derivative we can see that −1 is a local
maximum and +1 is a local minimum. Note that this function has no global maximum or minimum.
 The function |x| has a global minimum at x = 0 that cannot be found by taking derivatives, because the derivative
does not exist at x = 0.
 The function cos(x) has infinitely many global maxima at 0, ±2π, ±4π, …, and infinitely many global minima at ±π,
±3π, ….
 The function 2 cos(x) − x has infinitely many local maxima and minima, but no global maximum or minimum.
 The function cos(3πx)/x with 0.1 ≤ x ≤ 1.1 has a global maximum at x = 0.1 (a boundary), a global minimum
near x = 0.3, a local maximum near x = 0.6, and a local minimum near x = 1.0.
 The function x3 + 3x2 − 2x + 1 defined over the closed interval (segment) [−4,2] has a local maximum at x =
−1−√15⁄3, a local minimum at x = −1+√15⁄3, a global maximum at x = 2 and a global minimum at x = −4.
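For the function in the question, a direct computation:

f(x) = 3 − 5x³ + 5x⁴ − x⁵,  f′(x) = −15x² + 20x³ − 5x⁴ = −5x²(x² − 4x + 3) = −5x²(x − 1)(x − 3).

The critical points are x = 0, 1, 3. Since f′ is negative on both sides of x = 0, there is no extremum there; f′ changes from negative to positive at x = 1, giving a local minimum f(1) = 3 − 5 + 5 − 1 = 2; and f′ changes from positive to negative at x = 3, giving a local maximum f(3) = 3 − 135 + 405 − 243 = 30.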

Internal Assignment No. 1


Paper Code: MT - 202
Paper Title: Differential Equation

SECTION-A
1) Solve: (x + y)(dx − dy) = dx + dy
ANS Euler's notation uses a differential operator, denoted as D, which is prefixed to the function so that the derivatives of
a function f are denoted by Df for the first derivative, D²f for the second derivative, and Dⁿf for the nth derivative, for any
positive integer n.

When taking the derivative of a dependent variable y = f(x) it is common to add the independent variable x as a subscript
to the D notation, leading to the alternative notation Dₓy for the first derivative, Dₓ²y for the second derivative, and Dₓⁿy
for the nth derivative, for any positive integer n.


If there is only one independent variable present, the subscript to the operator is usually dropped, however.
Euler's notation is useful for stating and solving linear differential equations, as it simplifies presentation of the differential
equation, which can make seeing the essential elements of the problem easier.
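For the equation in question 1, a short solution sketch, using the observation that dx − dy = d(x − y) and dx + dy = d(x + y):

(x + y) d(x − y) = d(x + y) ⟹ d(x − y) = d(x + y)/(x + y),

and integrating both sides gives the general solution x − y = log|x + y| + C.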

2) Explain exact differential equation with examples.


ANS An equation M(x, y) dx + N(x, y) dy = 0 is called exact if there exists a function F(x, y) with ∂F/∂x = M and
∂F/∂y = N; its general solution is then F(x, y) = C. A practical test (on a simply connected region) is ∂M/∂y = ∂N/∂x.
Such equations arise in many problems in physics, engineering, and other sciences; the following example shows how to
solve one in a simple case where an exact solution exists.
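A minimal worked example: take 2xy dx + x² dy = 0. Here M = 2xy and N = x², and ∂M/∂y = 2x = ∂N/∂x, so the equation is exact. Seeking F with ∂F/∂x = 2xy gives F = x²y + g(y); then ∂F/∂y = x² + g′(y) = x² forces g′ = 0. The general solution is x²y = C.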

3) Find the general solutions of the following equation


y = px + a/p, where p = dy/dx.
ANS A singular solution ys(x) of an ordinary differential equation is a solution that is singular or one for which the initial
value problem (also called the Cauchy problem by some authors) fails to have a unique solution at some point on the
solution. The set on which a solution is singular may be as small as a single point or as large as the full real line. Solutions
which are singular in the sense that the initial value problem fails to have a unique solution need not be singular functions.
In some cases, the term singular solution is used to mean a solution at which there is a failure of uniqueness to the initial
value problem at every point on the curve. A singular solution in this stronger sense is often given as tangent to every
solution from a family of solutions. By tangent we mean that there is a point x where ys(x) = yc(x) and y′s(x) = y′c(x),
where yc is a solution in a family of solutions parameterized by c. This means that the singular solution is the envelope of
the family of solutions.
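This is exactly the situation for the question's equation, which is in Clairaut's form y = px + f(p) with f(p) = a/p. Differentiating with respect to x gives

p = p + (x − a/p²)·dp/dx, so (x − a/p²)·dp/dx = 0.

Either dp/dx = 0, giving p = c and the general solution y = cx + a/c (a family of straight lines), or x = a/p², which on eliminating p yields the singular solution y² = 4ax, the envelope of that family.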

4) Solve: (D² + 9) y = cos 2x


ANS
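A solution sketch: the auxiliary equation m² + 9 = 0 gives m = ±3i, so the complementary function is CF = c₁ cos 3x + c₂ sin 3x. For the particular integral, cos 2x corresponds to D² → −4, so

PI = cos 2x/(D² + 9) = cos 2x/(−4 + 9) = (1/5) cos 2x,

and the general solution is y = c₁ cos 3x + c₂ sin 3x + (1/5) cos 2x.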

5) Solve: y² + x²(dy/dx)² − 2xy(dy/dx) = 4(dy/dx)²
ANS
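Assuming the equation reads y² + x²p² − 2xyp = 4p² with p = dy/dx, a solution sketch: the left side is a perfect square, (y − xp)² = 4p², so y − xp = ±2p, i.e. y = (x ± 2)p. This separates as dy/y = dx/(x ± 2), giving the general solution y = C(x ± 2).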

SECTION-B

Q.3 Solve: x² d²y/dx² + 4x dy/dx + 2y = eˣ

ANS
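A solution sketch for this Cauchy–Euler equation: note that x²y″ + 4xy′ + 2y = d²(x²y)/dx², so the equation is simply (x²y)″ = eˣ. Integrating twice, x²y = eˣ + c₁x + c₂, hence

y = eˣ/x² + c₁/x + c₂/x².

(Equivalently, the substitution x = eᵗ gives the auxiliary equation m(m − 1) + 4m + 2 = 0 with roots m = −1, −2, reproducing the complementary function c₁/x + c₂/x².)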
Q.4 Solve: (D⁵ − D⁴ + 2D³ − 2D² + D − 1) y = cos x
ANS
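A solution sketch: the operator factors as

D⁵ − D⁴ + 2D³ − 2D² + D − 1 = (D − 1)(D² + 1)²,

so CF = c₁eˣ + (c₂ + c₃x) cos x + (c₄ + c₅x) sin x. Because cos x is annihilated by (D² + 1), the particular integral takes the resonant form y_p = x²(A cos x + B sin x); substituting and matching coefficients gives A = 1/16, B = −1/16. Hence

y = c₁eˣ + (c₂ + c₃x) cos x + (c₄ + c₅x) sin x + (x²/16)(cos x − sin x).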
Internal Assignment No. 2

Paper Code: MT - 202


Paper Title: Differential Equation

SECTION-A

ANS 1. A differential equation is a mathematical equation that relates some function with its derivatives. In
applications, the functions usually represent physical quantities, the derivatives represent their rates of change, and the
equation defines a relationship between the two. Because such relations are extremely common, differential equations
play a prominent role in many disciplines including engineering, physics, economics, and biology.
In pure mathematics, differential equations are studied from several different perspectives, mostly concerned with their
solutions—the set of functions that satisfy the equation. Only the simplest differential equations are solvable by explicit
formulas; however, some properties of solutions of a given differential equation may be determined without finding their
exact form.
If a self-contained formula for the solution is not available, the solution may be numerically approximated using
computers. The theory of dynamical systems puts emphasis on qualitative analysis of systems described by differential
equations, while many numerical methods have been developed to determine solutions with a given degree of accuracy.

2. These equations, containing a derivative, involve rates of change – so often appear in an engineering or scientific
context.
Solving the equation involves integration.

The order of a differential equation is given by the highest derivative appearing in it.

The degree of a differential equation is the power to which that highest derivative is raised, after the equation has been
cleared of radicals and fractions in the derivatives.

SECTION-B (Word limits 500)

Internal Assignment No. 1

Paper Code: MT - 203


Paper Title: Numerical Analysis and Vector Calculus
Q. 1. Answer all the questions:
1) Define the term Interpolation.
Ans In the mathematical field of numerical analysis, interpolation is a method of constructing new data points within the
range of a discrete set of known data points.
In engineering and science, one often has a number of data points, obtained by sampling or experimentation, which
represent the values of a function for a limited number of values of the independent variable. It is often required
tointerpolate (i.e. estimate) the value of that function for an intermediate value of the independent variable. This may be
achieved by curve fitting or regression analysis.
A different problem which is closely related to interpolation is the approximation of a complicated function by a simple
function. Suppose the formula for some given function is known, but too complex to evaluate efficiently. A few known data
points from the original function can be used to create an interpolation based on a simpler function. Of course, when a
simple function is used to estimate data points from the original, interpolation errors are usually present; however,
depending on the problem domain and the interpolation method used, the gain in simplicity may be of greater value than
the resultant loss in precision.
When the functions involved take values in more general spaces, such as Banach spaces, the problem is treated as
"interpolation of operators". The classical results about interpolation of operators are the Riesz–Thorin theorem and the
Marcinkiewicz theorem, and there are also many other subsequent results.

2) Prove that (1-E-1)


3) Write the Stirling's central formula.
Ans
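For reference, Stirling's central-difference formula about x₀, with x = x₀ + uh:

f(x) ≈ f₀ + u·(Δf₀ + Δf₋₁)/2 + (u²/2!)·Δ²f₋₁ + (u(u² − 1)/3!)·(Δ³f₋₁ + Δ³f₋₂)/2 + (u²(u² − 1)/4!)·Δ⁴f₋₂ + ⋯

It is most accurate for interpolation near the middle of the table, typically for |u| ≤ 1/4.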

4) Define algebraic and transcendental equations.


Ans An algebraic equation is one of the form f(x) = 0 where f(x) is a polynomial in x, e.g. x³ − 2x + 1 = 0. A
transcendental equation involves transcendental functions of the unknown, such as exponential, logarithmic or
trigonometric functions, e.g. x = cos x or eˣ = 3x; such equations generally have to be solved by numerical methods.
5) Explain differentiation of a vector.
Ans
In geometry, a position or position vector, also known as location vector or radius vector, is a Euclidean vector that
represents the position of a point P in space in relation to an arbitrary reference origin O. Usually denoted x, r, or s, it
corresponds to the straight-line distance from O to P.[1]
The term "position vector" is used mostly in the fields of differential geometry, mechanics and occasionally in vector
calculus.
Frequently this is used in two-dimensional or three-dimensional space, but can be easily generalized to Euclidean
spaces in any number of dimensions.[
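The differentiation itself is defined componentwise on such a vector function of a scalar parameter t: if r(t) = x(t)î + y(t)ĵ + z(t)k̂, then

dr/dt = lim (Δt→0) [r(t + Δt) − r(t)]/Δt = (dx/dt)î + (dy/dt)ĵ + (dz/dt)k̂,

which for a moving point is its velocity vector.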

Note: Answer any two questions. Each question carries 5 marks (Word limits 500)

Q.2 Given the following data :


x : 100 200 300 400 500 600 700 800
y : 0.9848 0.9397 0.8660 0.7660 0.6428 0.500 0.3420 0.1737
Evaluate (i) y(250) (ii) y(730)
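A minimal Python sketch of Newton's forward-difference interpolation for this equally spaced data (the helper name newton_forward is illustrative, not a library routine; since all eight points are used, the same interpolating polynomial answers both parts):

def newton_forward(xs, ys, x):
    # Build the forward-difference table for equally spaced nodes xs.
    h = xs[1] - xs[0]
    table = [list(ys)]
    while len(table[-1]) > 1:
        prev = table[-1]
        table.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    # Newton's forward formula: f0 + u*Df0 + u(u-1)/2! * D^2 f0 + ...
    u = (x - xs[0]) / h
    term, total = 1.0, 0.0
    for k, row in enumerate(table):
        if k > 0:
            term *= (u - (k - 1)) / k
        total += term * row[0]
    return total

xs = [100, 200, 300, 400, 500, 600, 700, 800]
ys = [0.9848, 0.9397, 0.8660, 0.7660, 0.6428, 0.500, 0.3420, 0.1737]
print(newton_forward(xs, ys, 250))  # (i)  ~0.9063
print(newton_forward(xs, ys, 730))  # (ii) ~0.2924 (backward differences are the textbook route here)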
Q.3 Evaluate y(1.1) using Runge-Kutta method given that :
= x2+y2, y(1) = 0
Ans
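A minimal Python sketch of one classical fourth-order Runge–Kutta step for dy/dx = x² + y², y(1) = 0, with step h = 0.1 (illustrative code, not from the source):

def rk4_step(f, x, y, h):
    # One classical RK4 step.
    k1 = f(x, y)
    k2 = f(x + h/2, y + h*k1/2)
    k3 = f(x + h/2, y + h*k2/2)
    k4 = f(x + h, y + h*k3)
    return y + (h/6)*(k1 + 2*k2 + 2*k3 + k4)

f = lambda x, y: x**2 + y**2
print(rk4_step(f, 1.0, 0.0, 0.1))  # y(1.1) ~ 0.1107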
Q.4 Solve (1+y2) dx = (tan-1y - x) dy
Ans The equation is linear in x when y is taken as the independent variable:

dx/dy + x/(1 + y²) = tan⁻¹y/(1 + y²).

Integrating factor: IF = e^∫ dy/(1+y²) = e^(tan⁻¹y). Multiplying through,

d/dy [ x·e^(tan⁻¹y) ] = e^(tan⁻¹y)·tan⁻¹y/(1 + y²).

Substituting t = tan⁻¹y, dt = dy/(1 + y²), the right-hand side integrates as ∫ t·eᵗ dt = (t − 1)eᵗ + C. Hence

x·e^(tan⁻¹y) = (tan⁻¹y − 1)·e^(tan⁻¹y) + C,

that is, x = tan⁻¹y − 1 + C·e^(−tan⁻¹y).


Internal Assignment No. 2

Paper Code: MT-203


Paper Title: Numerical Analysis & vector calculus

Q. 1. Answer all the questions:


i. If f(x, y, z) = x²z î − 2y²z² ĵ + xy²z k̂, find curl f at P(1, 1, −1).
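A worked sketch: with F = (x²z, −2y²z², xy²z),

curl F = ( ∂(xy²z)/∂y − ∂(−2y²z²)/∂z, ∂(x²z)/∂z − ∂(xy²z)/∂x, ∂(−2y²z²)/∂x − ∂(x²z)/∂y )
       = (2xyz + 4y²z, x² − y²z, 0).

At P(1, 1, −1): curl F = (−2 − 4, 1 + 1, 0) = −6î + 2ĵ.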
ii. Define first order and first degree differential equation with example.
ANS A differential equation of the first order and first degree involves only the first derivative dy/dx, raised to the first
power; it can be written dy/dx = f(x, y), for example dy/dx = x + y. In mathematics, linear differential equations are
differential equations whose solutions can be added together to form other solutions. They can be ordinary (ODEs) or
partial (PDEs). The solutions of linear equations form a vector space (unlike those of non-linear differential equations). A
typical simple example is the linear differential equation used to model radioactive decay.[2] Let N(t) denote the number of
radioactive atoms in some sample of material[3] at time t. Then for some constant k > 0, the decay is modelled by
dN/dt = −kN.
If y is assumed to be a function of only one variable, one speaks of an ordinary differential equation; otherwise the
derivatives and their coefficients must be understood as (contracted) vectors, matrices or tensors of higher rank, and we
have a (linear) partial differential equation.
The case where f = 0 is called a homogeneous equation and its solutions are called complementary functions. It is
particularly important to the solution of the general case, since any complementary function can be added to a solution of
the inhomogeneous equation to give another solution (by a method traditionally called particular integral and
complementary function). When the Ai are numbers, the equation is said to have constant coefficients.

iii. Write the Newton interpolation forward and backward formula.

ANS Newton's forward-difference formula, for x = x₀ + uh near the start of the table:

f(x) ≈ f₀ + uΔf₀ + (u(u − 1)/2!)·Δ²f₀ + (u(u − 1)(u − 2)/3!)·Δ³f₀ + ⋯

Newton's backward-difference formula, for x = xₙ + uh near the end of the table:

f(x) ≈ fₙ + u∇fₙ + (u(u + 1)/2!)·∇²fₙ + (u(u + 1)(u + 2)/3!)·∇³fₙ + ⋯

These formulas are convenient for computing tables of a given function when the point is at the beginning or the end of
the table, since in that case adding one or more nodes to increase the accuracy of the approximation does not force a
repetition of the whole computation, as it does with Lagrange's formula.
iv. Evaluate Δ³[(1 − x)(1 − 2x)(1 − 3x)].
ANS
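A worked sketch, taking the step of the difference operator to be h: expanding the product gives (1 − x)(1 − 2x)(1 − 3x) = 1 − 6x + 11x² − 6x³, a cubic. For a polynomial of degree 3 with leading coefficient a₃, the third forward difference is the constant a₃·3!·h³, so

Δ³[(1 − x)(1 − 2x)(1 − 3x)] = (−6)(6)h³ = −36h³, i.e. −36 for unit step h = 1.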
v. Define spline interpolation .
ANS In the mathematical field of numerical analysis, spline interpolation is a form of interpolation where the interpolant
is a special type of piecewise polynomial called a spline. Spline interpolation is often preferred over polynomial
interpolation because the interpolation error can be made small even when using low degree polynomials for the spline.
Spline interpolation avoids the problem of Runge's phenomenon, in which oscillation can occur between points when
interpolating using high degree polynomials.

Note: Answer any two questions. Each question carries 5 marks (Word limits 500)
Q.2 Find a positive root of x⁴ − x = 10 using the Newton–Raphson method.
ANS
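A minimal Python sketch (illustrative, not from the source): Newton–Raphson iterates xₙ₊₁ = xₙ − f(xₙ)/f′(xₙ) with f(x) = x⁴ − x − 10 and f′(x) = 4x³ − 1; since f(1.8) < 0 < f(1.9), x₀ = 2 is a reasonable start:

def newton_raphson(f, df, x0, tol=1e-10, max_iter=50):
    # Standard Newton-Raphson iteration.
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / df(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

f  = lambda x: x**4 - x - 10
df = lambda x: 4*x**3 - 1
print(newton_raphson(f, df, 2.0))  # positive root ~1.8556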
Q4 Solve the differential equation
dy/dx = x² − y, y(0) = 1,
by Picard's method of successive approximation, to get the value of y at x = 1.
ANS
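A minimal Python sketch of Picard iteration using sympy (illustrative; each step computes yₙ₊₁(x) = 1 + ∫₀ˣ (t² − yₙ(t)) dt, and the exact solution is y = x² − 2x + 2 − e⁻ˣ, so y(1) = 1 − 1/e ≈ 0.6321):

import sympy as sp

x, t = sp.symbols('x t')
y = sp.Integer(1)  # y0(x) = 1, the initial condition
for _ in range(6):
    # Picard iteration: y_{n+1}(x) = 1 + integral_0^x (t^2 - y_n(t)) dt
    y = 1 + sp.integrate(t**2 - y.subs(x, t), (t, 0, x))
print(sp.expand(y))         # truncated series of x^2 - 2x + 2 - exp(-x)
print(float(y.subs(x, 1)))  # ~0.632; exact value is 1 - 1/e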

Internal Assignment No. 1


Paper Code: PH– 201
Paper Title: Thermal and Statistical Physics

Q. 1. Answer all the questions:

a. What are the properties of absolute temperature?


Ans Thermodynamic temperature is the absolute measure of temperatureand is one of the principal parameters
of thermodynamics.
Thermodynamic temperature is defined by the third law of thermodynamics in which the theoretically lowest temperature
is the null or zero point. At this point, absolute zero, the particle constituents of matter have minimal motion and can
become no colder.[1][2] In the quantum-mechanical description, matter at absolute zero is in its ground state, which is its
state of lowest energy. Thermodynamic temperature is often also called absolute temperature, for two reasons: one,
proposed by Kelvin, that it does not depend on the properties of a particular material; two that it refers to an absolute zero
according to the properties of the ideal gas.
The International System of Units specifies a particular scale for thermodynamic temperature. It uses the Kelvin scale for
measurement and selects the triple point of water at 273.16 K as the fundamental fixing point. Other scales have been in
use historically. The Rankine scale, using the degree Fahrenheit as its unit interval, is still in use as part of the English
Engineering Units in the United States in some engineering fields. ITS-90 gives a practical means of estimating the
thermodynamic temperature to a very high degree of accuracy.

b. Two states with an energy gap of 6.9 × 10⁻²¹ J have relative probability e². Calculate the temperature of the complete system.

Ans
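A worked sketch, assuming Boltzmann statistics: the population ratio of two states separated by ΔE is

n₁/n₂ = e^(ΔE/kT) = e², so ΔE/kT = 2 and T = ΔE/(2k) = (6.9 × 10⁻²¹)/(2 × 1.38 × 10⁻²³) K = 250 K,

where k is Boltzmann's constant.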

c. State second law of thermodynamics.


Ans The second law of thermodynamics states that in every natural thermodynamic process the sum of the entropies
of all participating bodies is increased. In the limiting case, for reversible processes this sum remains unchanged.
The second law is an empirical finding that has been accepted as an axiom of thermodynamic theory.
Statistical thermodynamics, classical or quantum, explains the law.
The second law has been expressed in many ways. Its first formulation is credited to the French scientist Sadi Carnot in
1824 (see Timeline of thermodynamics).
The first law of thermodynamics provides the basic definition of thermodynamic energy, also called internal energy,
associated with all thermodynamic systems, but unknown in classical mechanics, and states the rule of conservation
of energy in nature

d. Explain the concept of triple point.


ANS In thermodynamics, the triple point of a substance is the temperature and pressure at which the
three phases (gas, liquid, and solid) of that substance coexist in thermodynamic equilibrium.[1] For example, the triple point
of mercury occurs at a temperature of −38.83440 °C and a pressure of 0.2 mPa.
In addition to the triple point for solid, liquid, and gas phases, a triple point may involve more than one solid phase, for
substances with multiple polymorphs. Helium-4 is a special case that presents a triple point involving two different fluid
phases (lambda point).[1]
The triple point of water is used to define the kelvin, the base unit of thermodynamic temperature in the International
System of Units (SI).[2] The value of the triple point of water is fixed by definition, rather than measured. The triple points of
several substances are used to define points in the ITS-90 international temperature scale, ranging from the triple point of
hydrogen (13.8033 K) to the triple point of water (273.16 K, 0.01 °C, or 32.018 °F).
The term "triple point" was coined in 1873 by James Thomson, brother of Lord Kelvin.

e. 1 kg of water (at 30 °C) is mixed with 2 kg of water (at 0 °C). Calculate the change in entropy.
ANS
Properties of water-NaCl mixtures [15]
NaCl, wt% Teq, °C ρ, g/cm3 n η, mPa·s

0 0 0.99984 1.333 1.002

0.5 −0.3 1.0018 1.3339 1.011

1 −0.59 1.0053 1.3347 1.02

2 −1.19 1.0125 1.3365 1.036

3 −1.79 1.0196 1.3383 1.052

4 −2.41 1.0268 1.34 1.068

5 −3.05 1.034 1.3418 1.085

6 −3.7 1.0413 1.3435 1.104

7 −4.38 1.0486 1.3453 1.124

8 −5.08 1.0559 1.347 1.145

9 −5.81 1.0633 1.3488 1.168

10 −6.56 1.0707 1.3505 1.193

12 −8.18 1.0857 1.3541 1.25

14 −9.94 1.1008 1.3576 1.317

16 −11.89 1.1162 1.3612 1.388

18 −14.04 1.1319 1.3648 1.463

20 −16.46 1.1478 1.3684 1.557

22 −19.18 1.164 1.3721 1.676

23.3 −21.1

23.7 −17.3

24.9 −11.1

26.1 −2.7

26.28 0

26.32 10
26.41 20

26.45 25

26.52 30

26.67 40

26.84 50

27.03 60

27.25 70

27.5 80

27.78 90

28.05 100
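A worked sketch for the question, assuming a constant specific heat c = 4186 J/(kg·K) for water:

Final temperature: T = (1 × 303.15 + 2 × 273.15)/3 K = 283.15 K (i.e. 10 °C).
ΔS = 1 × c × ln(283.15/303.15) + 2 × c × ln(283.15/273.15)
   ≈ −285.7 J/K + 301.0 J/K ≈ +15.3 J/K.

The net entropy change is positive, as the second law requires for an irreversible mixing process.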

NOTE: Answer any two questions. Each question carries 5 Marks. (1*5=5 Marks) (500 Words).
Attempt any four questions.

Q.2 What is a Carnot engine? Derive the expression for the efficiency of a Carnot engine.
ANS A Carnot heat engine[2] is an engine that operates on the reversible Carnot cycle. The basic model for this engine
was developed by Nicolas Léonard Sadi Carnot in 1824. The Carnot engine model was graphically expanded upon
by Benoît Paul Émile Clapeyron in 1834 and mathematically elaborated upon by Rudolf Clausius in 1857 from which the
concept of entropy emerged.
Every thermodynamic system exists in a particular state. A thermodynamic cycle occurs when a system is taken through a
series of different states, and finally returned to its initial state. In the process of going through this cycle, the system may
perform work on its surroundings, thereby acting as a heat engine.
A heat engine acts by transferring energy from a warm region to a cool region of space and, in the process, converting
some of that energy to mechanical work. The cycle may also be reversed. The system may be worked upon by an
external force, and in the process, it can transfer thermal energy from a cooler system to a warmer one, thereby acting as
a refrigerator or heat pump rather than a heat engine.
The previous image shows the original piston-and-cylinder diagram used by Carnot in discussing his ideal engines. The
figure at right shows a block diagram of a generic heat engine, such as the Carnot engine. In the diagram, the “working
body” (system), a term introduced by Clausius in 1850, can be any fluid or vapor body through which heat Q can be
introduced or transmitted to produce work. Carnot had postulated that the fluid body could be any substance capable of
expansion, such as vapor of water, vapor of alcohol, vapor of mercury, a permanent gas, or air, etc. Although, in these
early years, engines came in a number of configurations, typically QH was supplied by a boiler, wherein water was boiled
over a furnace; QC was typically supplied by a stream of cold flowing water in the form of a condenser located on a
separate part of the engine. The output work W here is the movement of the piston as it is used to turn a crank-arm, which
was then typically used to turn a pulley so to lift water out of flooded salt mines. Carnot defined work as “weight lifted
through a height”.
The Carnot cycle when acting as a heat engine consists of the following steps:

1. Reversible isothermal expansion of the gas at the "hot" temperature, TH (isothermal heat addition or
absorption). During this step (1 to 2 on Figure 1, A to B in Figure 2) the gas is allowed to expand and it does
work on the surroundings. The temperature of the gas does not change during the process, and thus the
expansion is isothermal. The gas expansion is propelled by absorption of heat energy Q1 and of entropy ΔS = Q1/TH
from the high temperature reservoir.
2. Isentropic (reversible adiabatic) expansion of the gas (isentropic work output). For this step (2 to 3 on
Figure 1, B to C in Figure 2) the piston and cylinder are assumed to be thermally insulated, thus they neither gain
nor lose heat. The gas continues to expand, doing work on the surroundings, and losing an equivalent amount of
internal energy. The gas expansion causes it to cool to the "cold" temperature, TC. The entropy remains
unchanged.
3. Reversible isothermal compression of the gas at the "cold" temperature, TC. (isothermal heat rejection) (3
to 4 on Figure 1, C to D on Figure 2) Now the surroundings do work on the gas, causing an amount of heat
energy Q2 and of entropy ΔS = Q2/TC to flow out of the gas to the low temperature reservoir. (This is the same amount
of entropy absorbed in step 1.)
4. Isentropic compression of the gas (isentropic work input). (4 to 1 on Figure 1, D to A on Figure 2) Once
again the piston and cylinder are assumed to be thermally insulated.

During this step, the surroundings do work on the gas, increasing its internal energy and compressing it, causing the
temperature to rise to TH. The entropy remains unchanged. At this point the gas is in the same state as at the start of step
1.
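The efficiency expression the question asks for follows directly from this cycle. Heat Q1 is absorbed at TH and heat Q2 is rejected at TC, and for the reversible cycle the entropy taken in equals the entropy given out, Q1/TH = Q2/TC. The work output over one cycle is W = Q1 − Q2, so the efficiency is

η = W/Q1 = 1 − Q2/Q1 = 1 − TC/TH.

The efficiency therefore depends only on the two reservoir temperatures, and approaches 1 only as TC → 0 or TH → ∞.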

Q.3 State and derive the Clausius–Clapeyron equation.


Ans The Clausius–Clapeyron relation, named after Rudolf Clausius[1] and Benoît Paul Émile Clapeyron,[2] is a way of
characterizing a discontinuous phase transition between two phases of matter of a single constituent. On apressure–
temperature (P–T) diagram, the line separating the two phases is known as the coexistence curve. The Clausius–
Clapeyron relation gives the slope of the tangents to this curve. Mathematically,

dP/dT = L/(T Δv) = Δs/Δv,

where dP/dT is the slope of the tangent to the coexistence curve at any point, L is the specific latent heat, T is the
temperature, Δv is the specific volume change of the phase transition, and Δs is the specific entropy change of the
phase transition.
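A derivation sketch: along the coexistence curve the specific Gibbs free energies (chemical potentials) of the two phases stay equal, so dg₁ = dg₂. Using dg = −s dT + v dP for each phase,

−s₁ dT + v₁ dP = −s₂ dT + v₂ dP ⟹ dP/dT = (s₂ − s₁)/(v₂ − v₁) = Δs/Δv,

and since the latent heat of the transition is L = T Δs, this is dP/dT = L/(T Δv).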

Q.4 What is the concept of liquid helium I and II? Explain the abnormal properties of liquid He.

Internal Assignment No. 2

Paper Code: PH– 201


Paper Title: Thermal and Statistical Physics

Q. 1. Answer all the questions.


A. What is regenerative cooling?
Ans Regenerative cooling is a method of cooling gases in which compressed gas is cooled by allowing it to expand and
thereby take heat from the surroundings. The cooled expanded gas then passes through a heat exchanger where it cools
the incoming compressed gas.
In 1857, Siemens introduced the regenerative cooling concept with the Siemens cycle.[2] In 1895, William Hampson in
England[3] and Carl von Linde in Germany[4] independently developed and patented the Hampson-Linde cycle to liquefy air
using the Joule Thomson expansion process and regenerative cooling.[5] On 10 May 1898, James Dewar used
regenerative cooling to become the first to statically liquefy hydrogen.


B. Prove that C² = 2kT/M, where C is the most probable molecular speed.

Ans The Maxwell speed distribution for N molecules of mass M at absolute temperature T is

f(C) = 4πN (M/2πkT)^(3/2) C² exp(−MC²/2kT).

The most probable speed is the speed at which f(C) is a maximum. Setting df/dC = 0:

d/dC [C² exp(−MC²/2kT)] = [2C − C²(MC/kT)] exp(−MC²/2kT) = 0,

which gives 2 = MC²/kT, i.e. C² = 2kT/M. Hence the square of the most probable speed is 2kT/M, as required; the mean-square speed is larger (3kT/M), so the r.m.s. speed exceeds the most probable speed.
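A minimal numerical check of this result (each velocity component of a Maxwellian gas is Gaussian with variance kT/M; the molecular mass of N2 is assumed for illustration):

# Sketch: the histogram of simulated speeds peaks at C = sqrt(2kT/M).
import numpy as np

k = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0          # temperature, K (assumed)
M = 4.65e-26       # mass of an N2 molecule, kg (approximate)

rng = np.random.default_rng(0)
v = rng.normal(0.0, np.sqrt(k * T / M), size=(1_000_000, 3))  # velocity components
speeds = np.linalg.norm(v, axis=1)

hist, edges = np.histogram(speeds, bins=200)
mode = 0.5 * (edges[np.argmax(hist)] + edges[np.argmax(hist) + 1])
print(mode, np.sqrt(2 * k * T / M))   # both come out near 422 m/s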

C. Explain thermal conduction.


Ans Thermal conduction is the transfer of internal energy by microscopic diffusion and collisions of particles or quasi-
particles within a body. The microscopically diffusing and colliding objects include molecules, atoms, and electrons. They
transfer disorganized microscopic kinetic and potential energy, which are jointly known as internal energy. Conduction can
only take place within an object or material, or between two objects that are in contact with each other. Conduction takes
place in all phases of ponderable matter, such as solids, liquids, gases and plasmas, but it is distinctly recognizable only
when the matter is undergoing neither chemical reaction nor differential local internal flows of distinct chemical
constituents. In the presence of such chemically defined contributory sub-processes, only the flow of internal energy is
recognizable, as distinct from thermal conduction. When the processes of conduction yield a net flow of energy across a
boundary because of a temperature gradient, the process is characterized as a flow of heat.
Heat spontaneously flows from a hotter to a colder body. In the absence of external drivers, temperature differences
decay over time, and the bodies approach thermal equilibrium.
In conduction, the heat flow is within and through the body itself. In contrast, in heat transfer by thermal radiation, the
transfer is often between bodies, which may be separated spatially.
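Quantitatively, steady one-dimensional conduction is described by Fourier's law, q = −k dT/dx. A minimal sketch with assumed values:

# Sketch: heat flow through a plane wall, q = k * A * (T_hot - T_cold) / d.
k = 0.8       # thermal conductivity of brick, W/(m K) (typical value)
A = 10.0      # wall area, m^2 (assumed)
d = 0.20      # wall thickness, m (assumed)
T_hot, T_cold = 293.0, 273.0   # inside and outside temperatures, K

q = k * A * (T_hot - T_cold) / d
print(f"heat flow = {q:.0f} W")   # 800 W flows from the warm side to the cold side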

D. Derive the relation between entropy and probability.


Ans In statistical mechanics, Boltzmann's equation is a probability equation relating the entropy S of an ideal gas to the quantity W, which is the number of microstates corresponding to a given macrostate:

S = kB ln W    (1)

where kB is the Boltzmann constant (also written as k), which is equal to 1.38065 × 10⁻²³ J/K.
In short, the Boltzmann formula shows the relationship between entropy and the number of ways the atoms or molecules of a thermodynamic system can be arranged. In 1934, Swiss physical chemist Werner Kuhn successfully derived a thermal equation of state for rubber molecules using Boltzmann's formula, which has since come to be known as the entropy model of rubber.
The equation was originally formulated by Ludwig Boltzmann between 1872 and 1875, but later put into its current form by Max Planck in about 1900.[2][3] To quote Planck, "the logarithmic connection between entropy and probability was first stated by L. Boltzmann in his kinetic theory of gases".
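A minimal worked example of S = kB ln W, using a toy system of 100 two-state particles, for which the number of microstates of the macrostate with n particles in the upper state is the binomial coefficient C(100, n):

# Sketch: entropy of a two-state toy system via Boltzmann's formula.
import math

kB = 1.380649e-23   # Boltzmann constant, J/K

def entropy(N: int, n: int) -> float:
    W = math.comb(N, n)          # microstates of this macrostate
    return kB * math.log(W)      # S = kB ln W

for n in (0, 25, 50):
    print(n, entropy(100, n))    # S is zero for n = 0 and largest for n = 50

The most probable macrostate (n = 50) is the one with the most microstates and hence the highest entropy, which is the content of the relation between entropy and probability.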

E. Explain thermionic emission.


Ans Thermionic emission is the thermally induced flow of charge carriers from a surface or over a potential-energy barrier. This occurs because the thermal energy given to the carrier overcomes the work function of the material. The charge carriers can be electrons or ions, and in older literature are sometimes referred to as "thermions". After emission, a
charge that is equal in magnitude and opposite in sign to the total charge emitted is initially left behind in the emitting
region. But if the emitter is connected to a battery, the charge left behind is neutralized by charge supplied by the battery
as the emitted charge carriers move away from the emitter, and finally the emitter will be in the same state as it was
before emission.
The classical example of thermionic emission is the emission of electrons from a hot cathode into a vacuum (also known as thermal electron emission or the Edison effect) in a vacuum tube. The hot cathode can be a metal filament, a coated metal filament, or a separate structure of metal or carbides or borides of transition metals.
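Quantitatively, the emitted current density is given by Richardson's law, J = A T² exp(−W/kT). A minimal sketch with typical values for tungsten (assumed):

# Sketch: Richardson's law for thermionic emission from a hot tungsten filament.
import math

A = 6.0e5             # effective Richardson constant for tungsten, A m^-2 K^-2 (approximate)
W = 4.5 * 1.602e-19   # work function of tungsten, J (about 4.5 eV)
k = 1.380649e-23      # Boltzmann constant, J/K

for T in (1500.0, 2000.0, 2500.0):
    J = A * T**2 * math.exp(-W / (k * T))
    print(T, J)       # the current density rises extremely steeply with temperature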

Note: Answer any two questions. Each question carries 5 marks (word limit: 500).

Q.5 Using the energy distribution function, prove that the mean kinetic energy of a system is always proportional to temperature.
Ans The kinetic theory describes a gas as a large number of submicroscopic particles (atoms or molecules), all of
which are in constant, random motion. The rapidly moving particles constantly collide with each other and with the walls of
the container. Kinetic theory explains macroscopic properties of gases, such as pressure, temperature, viscosity, thermal
conductivity, and volume, by considering their molecular composition and motion. The theory posits that gas pressure is
due to the impacts, on the walls of a container, of molecules or atoms moving at different velocities.
Kinetic theory defines temperature in its own way, not identical with the thermodynamic definition.[1]
Under a microscope, the molecules making up a liquid are too small to be visible, but the jittering motion of pollen grains
or dust particles can be seen. Known as Brownian motion, it results directly from collisions between the grains or particles
and liquid molecules. As analyzed by Albert Einstein in 1905, this experimental evidence for kinetic theory is generally
seen as having confirmed the concrete material existence of atoms and molecules.
The theory for ideal gases makes the following assumptions:

 The gas consists of very small particles known as molecules. This smallness of their size is such that the
total volume of the individual gas molecules added up is negligible compared to the volume of the smallest open ball containing all the molecules. This is equivalent to stating that the average distance separating the gas particles is large compared to their size.
 These particles have the same mass.
 The number of molecules is so large that statistical treatment can be applied.
 These molecules are in constant, random, and rapid motion.
 The rapidly moving particles constantly collide among themselves and with the walls of the container. All these
collisions are perfectly elastic. This means, the molecules are considered to be perfectly spherical in shape, and
elastic in nature.
 Except during collisions, the interactions among molecules are negligible. (That is, they exert no forces on one
another.)

This implies:
1. Relativistic effects are negligible.
2. Quantum-mechanical effects are negligible. This means that the inter-particle distance is much larger than the thermal de Broglie wavelength and the molecules are treated as classical objects.
3. Because of the above two, their dynamics can be treated classically. This means, the equations of motion of
the molecules are time-reversible.

 The average kinetic energy of the gas particles depends only on the absolute temperature of the system. The
kinetic theory has its own definition of temperature, not identical with the thermodynamic definition.
 The time during collision of molecule with the container's wall is negligible as compared to the time between
successive collisions.
 Because they have mass, the gas molecules will be affected by gravity.

More modern developments relax these assumptions and are based on the Boltzmann equation. These can
accurately describe the properties of dense gases, because they include the volume of the molecules. The necessary
assumptions are the absence of quantum effects, molecular chaos and small gradients in bulk properties. Expansions
to higher orders in the density are known as virial expansions.
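A minimal Monte Carlo check of the result the question asks for, namely that the mean kinetic energy per molecule equals (3/2)kT and is therefore proportional to T (argon mass assumed):

# Sketch: mean kinetic energy of simulated molecules, compared to (3/2) k T.
import numpy as np

k = 1.380649e-23   # Boltzmann constant, J/K
M = 6.63e-26       # mass of an argon atom, kg (approximate)
rng = np.random.default_rng(1)

for T in (100.0, 200.0, 400.0):
    v = rng.normal(0.0, np.sqrt(k * T / M), size=(500_000, 3))  # Maxwellian components
    ke = 0.5 * M * (v**2).sum(axis=1)
    print(T, ke.mean() / (1.5 * k * T))   # the ratio is ~1 at every temperature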

Q.6 For an ideal gas, prove PV = nRT using velocity distribution statistics.

Q.7 What are the postulates of quantum physics? State Planck's distribution law.
Ans Quantum mechanics (QM; also known as quantum physics, or quantum theory), including quantum field theory, is a fundamental branch of physics concerned with processes involving, for example, atoms and photons. In such processes, said to be quantized, the action has been observed to be only in integer multiples of the Planck constant, a physical quantity that is exceedingly, indeed perhaps ultimately, small. This is utterly inexplicable in classical physics.
Quantum mechanics gradually arose from Max Planck's solution in 1900 to the black-body radiation problem (reported
1859) and Albert Einstein's 1905 paper which offered a quantum-based theory to explain the photoelectric effect (reported
1887). Early quantum theory was significantly reformulated in the mid-1920s.
The mathematical formulations of quantum mechanics are abstract. A mathematical function, the wave function, provides
information about the probability amplitude of position, momentum, and other physical properties of a particle.
Important applications of quantum mechanical theory include superconducting magnets, LEDs and the laser, the transistor and semiconductors such as the microprocessor, medical and research imaging such as MRI and electron microscopy, and explanations for many biological and physical phenomena.
Scientific inquiry into the wave nature of light began in the 17th and 18th centuries, when scientists such as Robert Hooke, Christiaan Huygens and Leonhard Euler proposed a wave theory of light based on experimental observations.[1] In
1803, Thomas Young, an English polymath, performed the famous double-slit experiment that he later described in a
paper entitled On the nature of light and colours. This experiment played a major role in the general acceptance of
the wave theory of light.
In 1838, Michael Faraday discovered cathode rays. These studies were followed by the 1859 statement of the black-body radiation problem by Gustav Kirchhoff, the 1877 suggestion by Ludwig Boltzmann that the energy states of a physical system can be discrete, and the 1900 quantum hypothesis of Max Planck.[2] Planck's hypothesis that energy is radiated
and absorbed in discrete "quanta" (or energy elements) precisely matched the observed patterns of black-body radiation.
In 1896, Wilhelm Wien empirically determined a distribution law of black-body radiation, [3] known as Wien's law in his
honor. Ludwig Boltzmann independently arrived at this result by considerations of Maxwell's equations. However, it was
valid only at high frequencies and underestimated the radiance at low frequencies. Later, Planck corrected this model
using Boltzmann's statistical interpretation of thermodynamics and proposed what is now called Planck's law, which led to
the development of quantum mechanics.
Following Max Planck's solution in 1900 to the black-body radiation problem (reported 1859), Albert Einstein offered a
quantum-based theory to explain the photoelectric effect (1905, reported 1887). Around 1900–1910, the atomic theory and the corpuscular theory of light[4] first came to be widely accepted as scientific fact; these latter theories can be viewed as quantum theories of matter and electromagnetic radiation, respectively.
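To state the law the question asks for: Planck's distribution law gives the spectral radiance of a black body at absolute temperature T as

B_ν(T) = (2hν³/c²) · 1/(exp(hν/kT) − 1),

where ν is the frequency, h is Planck's constant, c is the speed of light and k is Boltzmann's constant. At low frequencies (hν ≪ kT) this reduces to the classical Rayleigh–Jeans result, while at high frequencies it falls off exponentially, which is what removes the ultraviolet catastrophe of classical theory.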

INTERNAL ASSIGNMENT NO-2


Paper Code PH 202
Q1 Answer all the questions:
(i) How will you produce coherent sources of light?
Ans In physics, two wave sources are perfectly coherent if they have a constant phase difference and the same
frequency. It is an ideal property of waves that enables stationary (i.e. temporally and spatially constant) interference. It
contains several distinct concepts, which are limiting cases that never quite occur in reality but allow an understanding of
the physics of waves, and has become a very important concept in quantum physics. More generally, coherence describes all properties of the correlation between physical quantities of a single wave, or between several waves or wave packets.
Interference is nothing more than the addition, in the mathematical sense, of wave functions. A single wave can interfere
with itself, but this is still an addition of two waves (see Young's slits experiment). Constructive or destructive interferences
are limit cases, and two waves always interfere, even if the result of the addition is complicated or not remarkable.
When interfering, two waves can add together to create a wave of greater amplitude than either one (constructive
interference) or subtract from each other to create a wave of lesser amplitude than either one (destructive
interference), depending on their relative phase. Two waves are said to be coherent if they have a constant relative
phase. The amount of coherence can readily be measured by the interference visibility, which looks at the size of the interference fringes relative to the input waves (as the phase offset is varied); a precise mathematical definition of the degree of coherence is given by means of correlation functions.
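A compact form of that measure: the fringe visibility is V = (I_max − I_min)/(I_max + I_min), where I_max and I_min are the intensities of adjacent bright and dark fringes; V = 1 corresponds to fully coherent sources and V = 0 to incoherent ones.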

(ii) Why are Newton's rings circular?


Ans Newton's rings is a phenomenon in which an interference pattern is created by the reflection of light between two
surfaces—a spherical surface and an adjacent flat surface. It is named after Isaac Newton, who first studied them in 1717.
When viewed with monochromatic light, Newton's rings appear as a series of concentric, alternating bright and dark rings
centered at the point of contact between the two surfaces. When viewed with white light, it forms a concentric-ring pattern
of rainbow colors, because the different wavelengths of light interfere at different thicknesses of the air layer between the
surfaces.
The light rings are caused by constructive interference between the light rays reflected from both surfaces, while the dark
rings are caused by destructive interference. Also, the outer rings are spaced more closely than the inner ones. Moving outwards from one dark ring to the next, for example, increases the path difference by the same amount, λ, corresponding to the same increase of thickness of the air layer, λ/2. Since the slope of the convex lens surface increases outwards, separation of the rings gets smaller for the outer rings.
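A minimal numerical sketch of that geometry (the dark-ring radii in the reflected pattern follow r_m = √(mλR); the lens radius of curvature R is assumed):

# Sketch: Newton's ring radii for sodium light; the gaps shrink outward.
import math

lam = 589e-9   # sodium D line wavelength, m
R = 1.0        # radius of curvature of the lens surface, m (assumed)

radii = [math.sqrt(m * lam * R) for m in range(1, 6)]
gaps = [b - a for a, b in zip(radii, radii[1:])]
print([f"{r*1e3:.2f} mm" for r in radii])   # 0.77, 1.09, 1.33, 1.53, 1.72 mm
print([f"{g*1e3:.2f} mm" for g in gaps])    # successive gaps decrease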
(iii) What is the Rayleigh criterion of resolution?
Ans Angular resolution or spatial resolution describes the ability of any image-forming device such as an optical or radio telescope, a microscope, a camera, or an eye, to distinguish small details of an object, thereby making it a major determinant of image resolution.
Resolving power is the ability of an imaging device to separate (i.e. to see as distinct) points of an object that are located
at a small angular distance or it is the power of an optical instrument to separate far away objects, that are close together,
into individual images. The term resolution or minimum resolvable distance is the minimum distance between
distinguishable objects in an image, although the term is loosely used by many users of microscopes and telescopes to
describe resolving power. In scientific analysis, in general, the term "resolution" is used to describe the precision with
which any instrument measures and records (in an image or spectrum) any variable in the specimen or sample under
study.
(v) What is the use of Babinet's compensator?
Ans The Babinet–Soleil compensator is a continuously variable, zero-order retarder. It consists of
a birefringent wedge which is movable and another birefringent wedge which is fixed to a compensator plate. The
orientation of the long axis of the wedges is perpendicular to the long axis of the compensator plate
His father was Jean Babinet and his mother Marie‐Anne Félicité Bonneau du Chesn.[1] Babinet started his studies at the Lycée Napoléon, but was persuaded to abandon a legal education for the pursuit of science. A graduate of the École Polytechnique, which he left in 1812 for the Military School at Metz, he was later a professor at the Sorbonne and at the Collège de France. In 1840, he was elected as a member of the Académie Royale des Sciences. He was also an
astronomer of the Bureau des Longitudes.
Among Babinet's accomplishments are the 1827 standardization of the Ångström unit for measuring light using the red cadmium line's wavelength, and the principle (Babinet's principle) that similar diffraction patterns are produced by two
complementary screens.
(vi) What is the difference between a He-Ne laser and a ruby laser?
Ans The first HeNe lasers emitted light at 1.15 μm, in the infrared spectrum, and were the first gas lasers. However, a
laser that operated at visible wavelengths was much more in demand, and a number of other neon transitions were
investigated to identify ones in which a population inversion can be achieved. The 633 nm line was found to have the
highest gain in the visible spectrum, making this the wavelength of choice for most HeNe lasers. However other visible as
well as infrared stimulated emission wavelengths are possible, and by using mirror coatings with their peak reflectance at
these other wavelengths, HeNe lasers could be engineered to employ those transitions; this includes visible lasers
appearing red, orange, yellow, and green.[1] Stimulated emissions are known from over 100 μm in the far infrared to
540 nm in the visible. Since visible transitions at wavelengths other than 633 nm have somewhat lower gain, these lasers
generally have lower output efficiencies and are more costly. The 3.39 μm transition has a very high gain but is prevented
from use in an ordinary HeNe laser (of a different intended wavelength) since the cavity and mirrors are lossy at that
wavelength. However, in high-power HeNe lasers having a particularly long cavity, superluminescence at 3.39 μm can become a nuisance, robbing power from the stimulated emission medium, often requiring additional suppression. The
best-known and most widely used HeNe laser operates at a wavelength of 632.8 nm in the red part of the visible
spectrum. It was developed at Bell Telephone Laboratories in 1962, [2][3] 18 months after the pioneering demonstration at
the same laboratory of the first continuous infrared HeNe gas laser in December 1960.
NOTE: Attempt any two. Each carries 5 marks (word limit: 500).
Q2 How will you produce plane polarized light, circularly polarized light and elliptically polarized light?
Ans In electrodynamics, elliptical polarization is the polarization of electromagnetic radiation such that the tip of the electric field vector describes an ellipse in any fixed plane intersecting, and normal to, the direction of propagation. An
elliptically polarized wave may be resolved into two linearly polarized waves in phase quadrature, with their polarization
planes at right angles to each other. Since the electric field can rotate clockwise or counterclockwise as it propagates,
elliptically polarized waves exhibit chirality.
Other forms of polarization, such as circular and linear polarization, can be considered to be special cases of elliptical
polarization.
In electrodynamics, circular polarization of an electromagnetic wave is a polarization in which the electric field of the
passing wave does not change strength but only changes direction in a rotary manner.
In electrodynamics the strength and direction of an electric field is defined by what is called an electric field vector. In the case of a circularly polarized wave, the tip of the electric field vector, at a given point in space, describes a circle as time progresses. If the wave is frozen in time, the electric field vector of the wave describes a helix along the direction of propagation.
Circular polarization is a limiting case of the more general condition of elliptical polarization. The other special case is the
easier-to-understand linear polarization.
The phenomenon of polarization arises as a consequence of the fact that light behaves as a two-dimensional transverse wave.
Consider the electric field vectors of a circularly polarized electromagnetic wave.[1] The electric field vectors have a constant magnitude but their direction changes in a rotary manner. Given that this is a plane wave, each vector represents the magnitude and direction of the electric field for an entire plane that is perpendicular to the axis. Specifically, given that this is a circularly polarized plane wave, these vectors indicate that the electric field, from plane to plane, has a constant strength while its direction steadily rotates. This light is considered to be right-hand, clockwise circularly polarized if viewed by the receiver. Since this is an electromagnetic wave, each electric field vector has a corresponding, but not illustrated, magnetic field vector that is at a right angle to the electric field vector and proportional in magnitude to it. As a result, the magnetic field vectors would trace out a second helix if displayed.
Circular polarization is often encountered in the field of optics; in this context the electromagnetic wave is simply referred to as light.
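A minimal sketch of all three cases in the Jones-vector picture, where the field is E = (Ex cos ωt, Ey cos(ωt − δ)) and the amplitudes and phase difference δ are the knobs (values assumed):

# Sketch: trace of the electric-field tip for linear, circular and elliptical light.
import numpy as np

def field_trace(Ex, Ey, delta, n=8):
    t = np.linspace(0, 2 * np.pi, n, endpoint=False)
    x = Ex * np.cos(t)
    y = Ey * np.cos(t - delta)
    return list(zip(np.round(x, 2), np.round(y, 2)))

print(field_trace(1, 1, 0))            # points on a line: linear polarization
print(field_trace(1, 1, np.pi / 2))    # points on a circle: circular polarization
print(field_trace(1, 0.5, np.pi / 4))  # points on an ellipse: elliptical polarization

In practice, plane polarized light is produced by a polarizer (or by reflection at Brewster's angle), and circular or elliptical light by passing plane polarized light through a quarter-wave plate with its axes at 45° (circular) or at some other angle (elliptical) to the polarization direction.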
Q3 What is the principle of an optical fiber? Derive an expression for the acceptance angle of an optical fiber.
Ans An optical fiber (or optical fibre) is a flexible, transparent fiber made by drawing glass (silica) or plastic to a diameter slightly thicker than that of a human hair.[1] Optical fibers are used most often as a means to transmit light between the two ends of the fiber and find wide usage in fiber-optic communications, where they permit transmission over longer distances and at higher bandwidths (data rates) than wire cables. Fibers are used instead of metal wires because signals travel along them with lesser amounts of loss; in addition, fibers are also immune to electromagnetic interference, a problem which metal wires suffer from excessively.[2] Fibers are also used for illumination, and are wrapped in bundles so that they may be used to carry images, thus allowing viewing in confined spaces, as in the case of a fiberscope.[3]
Specially designed fibers are also used for a variety of other applications, some of them being fiber optic
sensors and fiber lasers.[4]
Optical fibers typically include a transparent core surrounded by a transparent cladding material with a lower index of refraction. Light is kept in the core by the phenomenon of total internal reflection which causes the fiber to act as a waveguide.[2] Fibers that support many propagation paths or transverse modes are called multi-mode fibers (MMF), while
those that support a single mode are called single-mode fibers (SMF). Multi-mode fibers generally have a wider core diameter and are used for short-distance communication links and for applications where high power must be transmitted. Single-mode fibers are used for most communication links longer than 1,000 meters (3,300 ft).
An important aspect of a fiber optic communication is that of extension of the fiber optic cables such that the losses
brought about by joining two different cables is kept to a minimum.[2] Joining lengths of optical fiber often proves to be more complex than joining electrical wire or cable and involves careful cleaving of the fibers, perfect alignment of the fiber cores, and the splicing of these aligned fiber cores. For applications that demand a permanent connection, a mechanical splice which holds the ends of the fibers together mechanically could be used, or a fusion splice that uses heat to fuse the ends of the fibers together could be used. Temporary or semi-permanent connections are made by means of specialized optical fiber connectors.[2]
The field of applied science and engineering concerned with the design and application of optical fibers is known as fiber
optics.
Guiding of light by refraction, the principle that makes fiber optics possible, was first demonstrated by Daniel Colladon and Jacques Babinet in Paris in the early 1840s. John Tyndall included a demonstration of it in his public lectures in London, 12 years later.[5] Tyndall also wrote about the property of total internal reflection in an introductory book about the nature of light in 1870:
When the light passes from air into water, the refracted ray is bent towards the perpendicular... When the ray passes from water to air it is bent from the perpendicular... If the angle which the ray in water encloses with the perpendicular to the surface be greater than 48 degrees, the ray will not quit the water at all: it will be totally reflected at the surface.... The angle which marks the limit where total reflection begins is called the limiting angle of the medium. For water this angle is 48°27′, for flint glass it is 38°41′, while for diamond it is 23°42′.[6][7]
Unpigmented human hairs have also been shown to act as an optical fiber. [8]
Practical applications, such as close internal illumination during dentistry, appeared early in the twentieth century. Image
transmission through tubes was demonstrated independently by the radio experimenter Clarence Hansell and the
television pioneer John Logie Baird in the 1920s. The principle was first used for internal medical examinations
by Heinrich Lamm in the following decade. Modern optical fibers, where the glass fiber is coated with a transparent cladding to offer a more suitable refractive index, appeared later in the decade.
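As for the requested derivation: a ray entering the fiber end at angle θ_a to the axis refracts according to Snell's law, n₀ sin θ_a = n₁ sin θ_r, and is guided only if it strikes the core–cladding wall beyond the critical angle, i.e. cos θ_r ≥ n₂/n₁. Combining the two conditions gives n₀ sin θ_a = √(n₁² − n₂²), the numerical aperture (NA). A minimal numerical sketch with assumed indices:

# Sketch: acceptance angle of a step-index fiber from its core/cladding indices.
import math

n0 = 1.0    # launch medium (air)
n1 = 1.48   # core refractive index (assumed)
n2 = 1.46   # cladding refractive index (assumed)

NA = math.sqrt(n1**2 - n2**2)                # numerical aperture
theta_a = math.degrees(math.asin(NA / n0))   # acceptance half-angle
print(f"NA = {NA:.3f}, acceptance angle = {theta_a:.1f} degrees")   # ~0.242, ~14 degrees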
Internal Assignment No. 1

Paper Code: PH- 202


Paper Title: OPTICS

Q. 1. Answer all the questions:


1) What is the use of compensating plate in Michelson interferometer?
Ans The Michelson interferometer is a common configuration for optical interferometry and was invented by Albert Abraham Michelson. Using a beamsplitter, a light source is split into two arms. Each of those is reflected back toward the beamsplitter, which then combines their amplitudes interferometrically. The resulting interference pattern that is not directed back toward the source is typically directed to some type of photoelectric detector or camera. Depending on the interferometer's particular application, the two paths may be of different lengths or include optical materials or components under test.
The Michelson interferometer is especially known for its use by Albert Michelson and Edward Morley in the famous Michelson–Morley experiment (1887)[1] in a configuration which would have detected the earth's motion through the supposed luminiferous aether that most physicists at the time believed was the medium in which light waves propagated. As to the question asked: the compensating plate is a glass plate identical in thickness and material to the beamsplitter substrate, placed in the arm that would otherwise traverse the beamsplitter glass only once; it equalizes the optical path length and dispersion in the two arms, which is essential for observing fringes with white (broadband) light.
2) What is the Rayleigh criterion of resolution?
Ans The imaging system's resolution can be limited either by aberration or by diffraction causing blurring of the image. These two phenomena have different origins and are unrelated. Aberrations can be explained by geometrical optics and can in principle be solved by increasing the optical quality — and subsequently the cost — of the system. On the other hand, diffraction comes from the wave nature of light and is determined by the finite aperture of the optical elements. The lens' circular aperture is analogous to a two-dimensional version of the single-slit experiment. Light passing through the lens interferes with itself creating a ring-shaped diffraction pattern, known as the Airy pattern, if the wavefront of the transmitted light is taken to be spherical or plane over the exit aperture.
The interplay between diffraction and aberration can be characterised by the point spread function (PSF). The narrower the aperture of a lens the more likely the PSF is dominated by diffraction. In that case, the angular resolution of an optical system can be estimated (from the diameter of the aperture and the wavelength of the light) by the Rayleigh criterion invented by Lord Rayleigh: two point sources are just resolved when their angular separation is θ ≈ 1.22 λ/D, where λ is the wavelength and D is the aperture diameter.
3) Explain the structure of an optical fibre.
Ans An optical fiber (or optical fibre) is a flexible, transparent fiber made by drawing glass (silica) or plastic to a diameter slightly thicker than that of a human hair.[1] Optical fibers are used most often as a means to transmit light between the two ends of the fiber and find wide usage in fiber-optic communications, where they permit transmission over longer distances and at higher bandwidths (data rates) than wire cables. Fibers are used instead of metal wires because signals travel along them with lesser amounts of loss; in addition, fibers are also immune to electromagnetic interference, a problem which metal wires suffer from excessively.[2] Fibers are also used for illumination, and are wrapped in bundles so that they may be used to carry images, thus allowing viewing in confined spaces, as in the case of a fiberscope.[3] Specially designed fibers are also used for a variety of other applications, some of them being fiber optic sensors and fiber lasers.[4]
Optical fibers typically include a transparent core surrounded by a transparent cladding material with a lower index of refraction. Light is kept in the core by the phenomenon of total internal reflection which causes the fiber to act as a waveguide.
4) What are the conditions for interference?
Ans In physics, interference is a phenomenon in which two waves superpose to form a resultant wave of greater or lower amplitude. Interference usually refers to the interaction of waves that are correlated or coherent with each other, either because they come from the same source or because they have the same or nearly the same frequency. Interference effects can be observed with all types of waves, for example, light, radio, acoustic, surface water waves or matter waves.
The principle of superposition of waves states that when two or more propagating waves of same type are incident on
the same point, the total displacement at that point is equal to the pointwise sum of the displacements of the individual
waves. If a crest of a wave meets a crest of another wave of the same frequency at the same point, then
the magnitude of the displacement is the sum of the individual magnitudes – this is constructive interference
5) Why are Newton's rings circular in shape? And what happens if we use white light instead of sodium light?
Ans Newton's rings is a phenomenon in which an interference pattern is created by the reflection of light between two
surfaces—a spherical surface and an adjacent flat surface. It is named after Isaac Newton, who first studied them in 1717.
When viewed with monochromatic light, Newton's rings appear as a series of concentric, alternating bright and dark rings
centered at the point of contact between the two surfaces. When viewed with white light, it forms a concentric-ring pattern
of rainbow colors, because the different wavelengths of light interfere at different thicknesses of the air layer between the
surfaces.
The light rings are caused by constructive interference between the light rays reflected from both surfaces, while the dark rings are caused by destructive interference. Also, the outer rings are spaced more closely than the inner ones. Moving outwards from one dark ring to the next, for example, increases the path difference by the same amount, λ, corresponding to the same increase of thickness of the air layer, λ/2.
Note: Answer any two questions. Each question carries 5 marks (word limit: 500).
Q.3 Explain the construction and working of the He-Ne laser with an energy level diagram.
Ans A helium–neon laser or HeNe laser is a type of gas laser whose gain medium consists of a mixture of helium and neon (10:1) inside of a small bore capillary tube, usually excited by a DC electrical discharge. The best-known and most widely used HeNe laser operates at a wavelength of 632.8 nm in the red part of the visible spectrum.
The first HeNe lasers emitted light at 1.15 μm, in the infrared spectrum, and were the first gas lasers. However, a laser
that operated at visible wavelengths was much more in demand, and a number of other neon transitions were investigated
to identify ones in which a population inversion can be achieved. The 633 nm line was found to have the highest gain in
the visible spectrum, making this the wavelength of choice for most HeNe lasers. However other visible as well as infrared
stimulated emission wavelengths are possible, and by using mirror coatings with their peak reflectance at these other
wavelengths, HeNe lasers could be engineered to employ those transitions; this includes visible lasers appearing red,
orange, yellow, and green.[1] Stimulated emissions are known from over 100 μm in the far infrared to 540 nm in the visible.
Since visible transitions at wavelengths other than 633 nm have somewhat lower gain, these lasers generally have lower
output efficiencies and are more costly. The 3.39 μm transition has a very high gain but is prevented from use in an
ordinary HeNe laser (of a different intended wavelength) since the cavity and mirrors are lossy at that wavelength.
However, in high-power HeNe lasers having a particularly long cavity, superluminescence at 3.39 μm can become a nuisance, robbing power from the stimulated emission medium, often requiring additional suppression. The best-known
and most widely used HeNe laser operates at a wavelength of 632.8 nm in the red part of the visible spectrum. It was
developed at Bell Telephone Laboratories in 1962,[2][3] 18 months after the pioneering demonstration at the same laboratory of the first continuous infrared HeNe gas laser in December 1960.
The gain medium of the laser, as suggested by its name, is a mixture of helium and neon gases, in approximately a 10:1
ratio, contained at low pressure in a glass envelope. The gas mixture is mostly helium, so that helium atoms can be
excited. The excited helium atoms collide with neon atoms, exciting some of them to the state that radiates 632.8 nm.
Without helium, the neon atoms would be excited mostly to lower excited states responsible for non-laser lines. A neon
laser with no helium can be constructed but it is much more difficult without this means of energy coupling. Therefore, a
HeNe laser that has lost enough of its helium (e.g., due to diffusion through the seals or glass) will lose its laser
functionality since the pumping efficiency will be too low. [5] The energy or pump source of the laser is provided by a high
voltage electrical discharge passed through the gas between electrodes (anode and cathode) within the tube. A DC
current of 3 to 20 mA is typically required for CW operation. The optical cavity of the laser usually consists of two concave
mirrors or one plane and one concave mirror, one having very high (typically 99.9%) reflectance and the output
coupler mirror allowing approximately 1% transmission.
Q.4 Derive the Fresnel equations.
Ans The Fresnel equations (or Fresnel conditions), deduced by Augustin-Jean Fresnel /frɛˈnɛl/, describe the behaviour
of light when moving between media of differing refractive indices. The reflection of light that the equations predict
is known as Fresnel reflection.
When light moves from a medium of a given refractive index n1 into a second medium with refractive index n2,
bothreflection and refraction of the light may occur. The Fresnel equations describe what fraction of the light is reflected
and what fraction is refracted (i.e., transmitted). They also describe the phase shift of the reflected light.
The equations assume the interface between the media is flat and that the media are homogeneous. The incident light is
assumed to be a plane wave, and effects of edges are neglected.
S and p polarizations
The calculations below depend on polarisation of the incident ray. Two cases are analyzed:

1. The incident light is polarized with its electric field perpendicular to the plane containing the incident, reflected,
and refracted rays. This plane is called the plane of incidence; it is the plane of the diagram below. The light is
said to be s-polarized, from the German senkrecht (perpendicular).
2. The incident light is polarized with its electric field parallel to the plane of incidence. Such light is described as p-
polarized, from parallel.
In this treatment, the coefficient r is the ratio of the reflected wave's complex electric field amplitude to that of the incident wave. The coefficient t is the ratio of the transmitted wave's electric field amplitude to that of the incident wave. The light is split into s and p polarizations as defined above.
For s-polarization, a positive r or t means that the electric fields of the incoming and reflected or transmitted wave are parallel, while negative means anti-parallel. For p-polarization, a positive r or t means that the magnetic fields of the waves are parallel, while negative means anti-parallel.[3] It is also assumed that the magnetic permeability µ of both media is equal to the permeability of free space µ0.
(Be aware that some authors instead use the opposite sign convention for rp, so that rp is positive when the incoming and reflected magnetic fields are antiparallel, and negative when they are parallel. This latter convention has the convenient advantage that the s- and p- sign conventions are the same at normal incidence. However, either convention, when used consistently, gives the right answers.)
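With the sign convention described above, the amplitude coefficients themselves are r_s = (n₁cosθᵢ − n₂cosθₜ)/(n₁cosθᵢ + n₂cosθₜ) and r_p = (n₂cosθᵢ − n₁cosθₜ)/(n₂cosθᵢ + n₁cosθₜ). A minimal numerical sketch (indices and angle are example values):

# Sketch: Fresnel reflection coefficients at a planar air-glass interface,
# for s- and p-polarization, below the critical angle.
import math

def fresnel_r(n1, n2, theta_i_deg):
    ti = math.radians(theta_i_deg)
    tt = math.asin(n1 * math.sin(ti) / n2)   # Snell's law gives the refraction angle
    rs = (n1 * math.cos(ti) - n2 * math.cos(tt)) / (n1 * math.cos(ti) + n2 * math.cos(tt))
    rp = (n2 * math.cos(ti) - n1 * math.cos(tt)) / (n2 * math.cos(ti) + n1 * math.cos(tt))
    return rs, rp

print(fresnel_r(1.0, 1.5, 0.0))    # normal incidence: |r| = 0.2 for both polarizations
print(fresnel_r(1.0, 1.5, 56.3))   # near Brewster's angle: rp passes through zero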

Internal Assignment No. 1

Paper Code: PH- 203


Paper Title: Electronics
Q. 1. Answer all the questions:
1) Explain the Zener breakdown mechanism.
Ans The Zener effect is a type of electrical breakdown in a reverse-biased p–n diode in which the electric field enables tunneling of electrons from the valence to the conduction band of a semiconductor, leading to a large number of free minority carriers, which suddenly increase the reverse current.[1] Zener breakdown is employed in a Zener diode.
The Zener effect is distinct from avalanche breakdown which involves minority carrier electrons in the transition region
which are accelerated by the electric field to energies sufficient to free electron-hole pairs via collisions with bound
electrons. Either the Zener or the avalanche effect may occur independently, or both may occur simultaneously. In
general, diode junctions which break down below 5 V are caused by the Zener effect, while junctions which
experience breakdown above 5 V are caused by the avalanche effect. Intermediate breakdown voltages (around 5 V) are usually caused by a combination of the two effects. This Zener breakdown voltage is found to occur at an electric field intensity of about 3×10⁷ V/m.[1] Zener breakdown occurs in heavily doped junctions (p-type semiconductor moderately doped and n-type heavily doped), which produces a narrow depletion region.

2) Explain De Morgan's theorem with an example.


Ans In propositional logic and boolean algebra, De Morgan's laws[1][2][3] are a pair of transformation rules that are
both valid rules of inference. They are named after Augustus De Morgan, a 19th-century British mathematician. The rules
allow the expression of conjunctions and disjunctions purely in terms of each other via negation.
The rules can be expressed in English as:
The negation of a conjunction is the disjunction of the negations.
The negation of a disjunction is the conjunction of the negations.
or informally as:
"not (A and B)" is the same as "(not A) or (not B)"
also,
"not (A or B)" is the same as "(not A) and (not B)".
The rules can be expressed in formal language with two propositions P and Q as: ¬(P ∧ Q) ⇔ (¬P) ∨ (¬Q) and ¬(P ∨ Q) ⇔ (¬P) ∧ (¬Q).
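Because the laws are identities over a finite set of truth values, they can be verified exhaustively; a minimal sketch:

# Sketch: brute-force truth-table check of both De Morgan laws.
from itertools import product

for p, q in product([False, True], repeat=2):
    assert (not (p and q)) == ((not p) or (not q))   # first law
    assert (not (p or q)) == ((not p) and (not q))   # second law
print("De Morgan's laws hold for every truth assignment")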
3) Describe half wave Rectifier.
Ans A rectifier is an electrical device that converts alternating current (AC), which periodically reverses direction,
to direct current (DC), which flows in only one direction. The process is known as rectification. Physically, rectifiers take
a number of forms, including vacuum tube diodes, mercury-arc valves, copper and selenium oxide
rectifiers, semiconductor diodes, silicon-controlled rectifiers and other silicon-based semiconductor switches. Historically,
even synchronous electromechanical switches and motors have been used. Early radio receivers, called crystal radios,
used a "cat's whisker" of fine wire pressing on a crystal of galena (lead sulfide) to serve as a point-contact rectifier or
"crystal detector".
Rectifiers have many uses, but are often found serving as components of DC power supplies and high-voltage direct
current power transmission systems.
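For the half-wave case specifically: the single diode conducts only during the positive half-cycle, so the output is v_out = max(v_in, 0) for an ideal diode. A minimal numerical sketch (no forward voltage drop assumed):

# Sketch: ideal half-wave rectifier; DC value is V_peak / pi, RMS is V_peak / 2.
import numpy as np

Vp = 10.0                                  # peak input voltage, V (assumed)
t = np.linspace(0, 2 * np.pi, 100_000, endpoint=False)
v_in = Vp * np.sin(t)
v_out = np.maximum(v_in, 0.0)              # diode blocks the negative half-cycle

print(v_out.mean(), Vp / np.pi)            # average ~3.18 V, equal to V_peak / pi
print(np.sqrt((v_out**2).mean()), Vp / 2)  # RMS = V_peak / 2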

4) Write any four differences between common emitter, common base and common collector configurations.
Ans In electronics, a common emitter amplifier is one of three basic single-stage bipolar-junction-transistor (BJT)
amplifier topologies, typically used as a voltage amplifier.
In this circuit the base terminal of the transistor serves as the input, the collector is the output, and the emitter
is common to both (for example, it may be tied to ground reference or a power supply rail), hence its name. The
analogous field-effect transistor circuit is the common sourceamplifier, and the analogous tube circuit is the common
cathode amplifier.
Common emitter amplifiers give an inverted output and can have a very high gain that may vary widely from one transistor to the next. The gain is a strong function of both temperature and bias current, and so the actual gain is somewhat unpredictable. Stability is another problem associated with such high-gain circuits, due to any unintentional positive feedback that may be present.
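A rough quantitative sketch of why the gain drifts with temperature and bias (component value assumed): the small-signal voltage gain of a simple common-emitter stage is approximately A_v = −g_m R_C, where the transconductance is g_m = I_C/V_T and V_T = kT/q is the thermal voltage:

# Sketch: common-emitter voltage gain and its dependence on bias and temperature.
k, q = 1.380649e-23, 1.602e-19
R_C = 4.7e3                  # collector load resistor, ohms (assumed)

def gain(I_C, T=300.0):
    V_T = k * T / q          # thermal voltage, ~26 mV at 300 K
    return -(I_C / V_T) * R_C

print(gain(1e-3))                        # about -182 at 1 mA, 300 K
print(gain(2e-3), gain(1e-3, T=350.0))   # the gain shifts with both bias and temperature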

Note: Answer any two questions. Each question carries 5 marks (word limit: 500).
Q.2 Write down the Barkhausen criterion for sustained oscillation and explain the Hartley oscillator.
Ans In electronics, the Barkhausen stability criterion is a mathematical condition to determine when a linear electronic circuit will oscillate.[1][2][3] It was put forth in 1921 by German physicist Heinrich Georg Barkhausen (1881–1956).[4] It is widely used in the design of electronic oscillators, and also in the design of general negative feedback circuits such as op amps, to prevent them from oscillating.
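Written out, for a feedback loop with amplifier gain A and feedback fraction β, the criterion requires the loop gain to satisfy |βA| = 1 with a total phase shift around the loop equal to an integer multiple of 2π (∠βA = 2πn, n = 0, 1, 2, …). In practical oscillators the loop gain is made slightly greater than unity at start-up, and amplitude-limiting nonlinearity brings it back to unity in the steady state. In the Hartley oscillator the question mentions, the feedback network is a tapped inductor (two inductors) and a capacitor, oscillating near f = 1/(2π√((L₁+L₂)C)).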
In electronics, gain is a measure of the ability of a two-port circuit (often an amplifier) to increase the power or amplitude of a signal from the input to the output port[1][2][3][4] by adding energy converted from some power supply to the signal. It is
usually defined as the mean ratio of the signal amplitude or power at the output port to the amplitude or power at the input
port.[1] It is often expressed using the logarithmic decibel (dB) units ("dB gain").[4] A gain greater than one (zero dB), that
is amplification, is the defining property of an active component or circuit, while a passive circuit will have a gain of less
than one.[4]
The term gain alone is ambiguous, and can refer to the ratio of output to input voltage (voltage gain), current (current gain)
or electric power (power gain).[4] In the field of audio and general purpose amplifiers, especially operational amplifiers, the
term usually refers to voltage gain,[2] but in radio frequency amplifiers it usually refers to power gain. Furthermore, the term
gain is also applied in systems such as sensors where the input and output have different units; in such cases the gain
units must be specified, as in "5 microvolts per photon" for the responsivity of a photosensor. The "gain" of a bipolar transistor normally refers to forward current transfer ratio, either hFE ("beta", the static ratio of Ic divided by Ib at some operating point), or sometimes hfe (the small-signal current gain, the slope of the graph of Ic against Ib at a point).
The gain of an electronic device or circuit generally varies with the frequency of the applied signal. Unless otherwise
stated, the term refers to the gain for frequencies in the passband, the intended operating frequency range, of the
equipment. The term gain has a different meaning in antenna design; antenna gain is the ratio of radiation intensity from a directional antenna to the mean radiation intensity from a lossless antenna.


Q.4 Describe the Yagi antenna with a diagram.
Ans A Yagi-Uda antenna, commonly known simply as a Yagi antenna, is a directional antenna consisting of multiple parallel dipole elements in a line,[1] usually made of metal rods.[2] It consists of a single driven element connected to the transmitter or receiver with a transmission line, and additional parasitic elements: a so-called reflector and one or more directors.[3][4][5][6][2] The reflector element is slightly longer than the driven dipole, whereas the directors are a little shorter.[6] This design achieves a very substantial increase in the antenna's directionality and gain compared to a simple dipole.
The antenna was invented in 1926 by Shintaro Uda of Tohoku Imperial University, Japan,[4] with a lesser role played by his colleague Hidetsugu Yagi.[3][7] However, the "Yagi" name has become more familiar, with the name of Uda often omitted. This appears to have been due to Yagi filing a patent on the idea in Japan without Uda's name in it, and later transferring the patent to the Marconi Company in the UK.[8] Yagi antennas were first widely used during World War II in radar systems by the British, US and Germans.[7] After the war they saw extensive development as home television antennas.
Also called a "beam antenna",[6] the Yagi is very widely used as a high-gain antenna on the HF, VHF and UHF bands.[6][5] It
has moderate gain which depends on the number of elements used, typically limited to about 17 dBi, [5]linear polarization,
[5]
unidirectional (end-fire) beam pattern[5] with high front-to-back ratio of up to 20 db. and is lightweight, inexpensive and
simple to construct.[5] The bandwidth of a Yagi antenna, the frequency range over which it has high gain, is narrow, a few
percent of the center frequency, and decreases with increasing gain, [6][5] so it is often used in fixed-frequency applications.
The largest and most well-known use is as rooftop terrestrialtelevision antennas,[5] but it is also used for point-to-point
fixed communication links,[2] in radar antennas,[6] and for long distance shortwave communication by shortwave
broadcasting stations and radio amateurs.
The Yagi-Uda antenna consists of a number of parallel thin rod dipole elements in a line, usually half-wave dipoles,[5] typically supported on a perpendicular crossbar or "boom" along their centers.[2] There is a single driven element driven in the center (consisting of two rods each connected to one side of the transmission line), and a variable number of parasitic elements, a single reflector on one side and optionally one or more directors on the other side.[2][6][5] The parasitic elements are not electrically connected to the transmitter or receiver, and serve as resonators, reradiating the radio waves to modify the radiation pattern.[2] Typical spacings between elements vary from about 1/10 to 1/4 of a wavelength, depending on the specific design. The lengths of the directors are slightly shorter than that of the driven element, while the reflector(s) are slightly longer.[6] The radiation pattern is unidirectional, with the main lobe along the axis perpendicular to the elements in the plane of the elements, off the end with the directors.[5]

INTERNAL ASSIGNMENT NO-2


Paper Code PH 203
Paper title Electronics
Q1 Answer all the questions:
(i) Write any four differences between a conductor and a semiconductor.
Ans A semiconductor material has an electrical conductivity value falling between that of a conductor, such as copper,
and an insulator, such as glass. Semiconductors are the foundation of modern electronics. Semiconducting materials exist
in two types - elemental materials and compound materials.[1] The modern understanding of the properties of a
semiconductor relies on quantum physics to explain the movement of electrons and holes in a crystal lattice.[2] The unique
arrangement of the crystal lattice makes silicon and germanium the most commonly used elements in the preparation of
semiconducting materials. An increased knowledge of semiconductor materials and fabrication processes has made possible continuing increases in the complexity and speed of microprocessors and memory devices.
The electrical conductivity of a semiconductor material increases with increasing temperature, which is behaviour opposite
to that of a metal. Semiconductor devices can display a range of useful properties such as passing current more easily in
one direction than the other, showing variable resistance, and sensitivity to light or heat.
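A minimal numerical sketch of that opposite temperature behaviour (silicon band gap and the temperature coefficient of copper are typical values, assumed here):

# Sketch: conductivity vs. temperature, intrinsic semiconductor vs. metal.
import math

k_eV = 8.617e-5   # Boltzmann constant, eV/K
Eg = 1.12         # band gap of silicon, eV

def si_relative_conductivity(T, T0=300.0):
    # intrinsic carrier density (and so conductivity) ~ exp(-Eg / (2 k T))
    return math.exp(-Eg / (2 * k_eV * T)) / math.exp(-Eg / (2 * k_eV * T0))

def cu_relative_conductivity(T, T0=300.0, alpha=0.0039):
    return 1.0 / (1.0 + alpha * (T - T0))   # linear resistivity model for copper

for T in (300.0, 350.0, 400.0):
    print(T, si_relative_conductivity(T), cu_relative_conductivity(T))
# silicon conducts better when heated; copper conducts worse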
(ii) What are the basic logic gates? Explain.
Ans In electronics, a logic gate is an idealized or physical device implementing a Boolean function; that is, it performs a logical operation on one or more logical inputs, and produces a single logical output. Depending on the context, the term may refer to an ideal logic gate, one that has for instance zero rise time and unlimited fan-out, or it may refer to a non-ideal physical device[1] (see Ideal and real op-amps for comparison).
Logic gates are primarily implemented using diodes or transistors acting as electronic switches, but can also be
constructed using vacuum tubes, electromagnetic relays (relay logic), fluidic logic, pneumatic logic, optics, molecules, or
even mechanical elements. With amplification, logic gates can be cascaded in the same way that Boolean functions can
be composed, allowing the construction of a physical model of all of Boolean logic, and therefore, all of the algorithms
and mathematics that can be described with Boolean logic.
Logic circuits include such devices as multiplexers, registers, arithmetic logic units (ALUs), and computer memory, all the
way up through complete microprocessors, which may contain more than 100 million gates. In modern practice, most
gates are made from field-effect transistors.
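A minimal sketch of that composability (the NAND gate is functionally complete, so any Boolean function can be cascaded from NANDs; here XOR is built from four of them):

# Sketch: XOR composed from four NAND gates.
def NAND(a: bool, b: bool) -> bool:
    return not (a and b)

def XOR(a: bool, b: bool) -> bool:
    m = NAND(a, b)
    return NAND(NAND(a, m), NAND(b, m))

for a in (False, True):
    for b in (False, True):
        assert XOR(a, b) == (a != b)
print("XOR built from 4 NAND gates agrees with the truth table")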
(iii) Explain the testing of a transformer.
Ans A transformer is an electrical device that transfers electrical energy between two or more circuits
through electromagnetic induction. Commonly, transformers are used to increase or decrease the voltages of alternating
current in electric power applications.
A varying current in the transformer's primary winding creates a varying magnetic flux in the transformer core and a
varying magnetic field impinging on the transformer's secondary winding. This varying magnetic field at the secondary
winding induces a varying electromotive force (EMF) or voltage in the secondary winding. Making use of Faraday's Law in
conjunction with high magnetic permeability core properties, transformers can thus be designed to efficiently
change AC voltages from one voltage level to another within power networks.
Since the invention of the first constant potential transformer in 1885, transformers have become essential for the
AC transmission, distribution, and utilization of electrical energy. [3] A wide range of transformer designs is encountered in
electronic and electric power applications.
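A minimal sketch of the ideal-transformer relations that follow from this (winding counts and drive values assumed for illustration):

# Sketch: ideal transformer, V_s / V_p = N_s / N_p and I_s / I_p = N_p / N_s.
def transformer(V_p, N_p, N_s, I_p):
    V_s = V_p * N_s / N_p        # voltages scale with the turns ratio
    I_s = I_p * N_p / N_s        # currents scale inversely
    return V_s, I_s

V_s, I_s = transformer(V_p=230.0, N_p=1000, N_s=50, I_p=0.5)
print(V_s, I_s)                  # 11.5 V, 10 A: a step-down transformer
print(230.0 * 0.5, V_s * I_s)    # input and output power both 115 W (ideal case)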
(iv) Explain the dipole antenna.
Ans In radio and telecommunications a dipole antenna or doublet[1] is the simplest and most widely used class
of antenna.[2][3] It consists of two identical conductive elements[4] such as metal wires or rods, which are usually bilaterally
symmetrical.[3][5][6] The driving current from the transmitter is applied, or for receiving antennas the output signal to
the receiver is taken, between the two halves of the antenna. Each side of the feedline to the transmitter or receiver is
connected to one of the conductors. This contrasts with a monopole antenna, which consists of a single rod or conductor with one side of the feedline connected to it, and the other side connected to some type of ground.[6] A common example of a dipole is the "rabbit ears" television antenna found on broadcast television sets.
The most common form of dipole is two straight rods or wires oriented end to end on the same axis, with the feedline
connected to the two adjacent ends. This is the simplest type of antenna from a theoretical point of view.[1] Dipoles are resonant antennas, meaning that the elements serve as resonators, with standing waves of radio current flowing back and forth between their ends. So the length of the dipole elements is determined by the wavelength of the radio waves used.[3] The most common form is the half-wave dipole, in which each of the two rod elements is approximately 1/4 wavelength long, so the whole antenna is a half-wavelength long.
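A minimal sketch of the resulting length rule (the roughly 5% end-effect shortening applied to real, finite-thickness rods is a common rule of thumb, assumed here):

# Sketch: physical length of a half-wave dipole, L ~ 0.95 * c / (2 f).
c = 2.998e8   # speed of light, m/s

def half_wave_dipole_length(f_hz, k=0.95):
    return k * c / (2.0 * f_hz)

print(half_wave_dipole_length(100e6))   # ~1.42 m for an FM broadcast dipole
print(half_wave_dipole_length(600e6))   # ~0.24 m for a UHF TV channel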
(v) Explain bistable devices.
Ans In a dynamical system, bistability means the system has two stable equilibrium states. Something that is bistable can be resting in either of two states. These rest states need not be symmetric with respect to stored energy. In terms of potential energy, a bistable system has two local minima of potential energy separated by a peak (local maximum).
In a conservative force field, bistability stems from the fact that the potential energy has three equilibrium points. Two of
them are minima and one is a maximum. By mathematical arguments, the maximum must lie between the two minima. At
rest, a particle will be in one of the minimum equilibrium positions, because that corresponds to the state of lowest energy.
The maximum can be visualized as a barrier between them.
A system can transition from one state of minimal energy to the other if it is given enough activation energy to penetrate
the barrier (compare activation energy and Arrhenius equation for the chemical case). After the barrier has been reached,
the system will relax into the other minimum state in a time called the relaxation time.
Bistability is widely used in digital electronics devices to store binary data. It is the essential characteristic of the flip-
flop, a circuit widely used in latches and some types of semiconductor memory. A bistable device can store one bit of
binary data, with one state representing a "0" and the other state a "1". It is also used in relaxation
oscillators, multivibrators, and the Schmitt trigger. Optical bistability is an attribute of certain optical devices where two resonant transmission states are possible and stable, dependent on the input.
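A minimal sketch of the flip-flop idea (a set-reset latch modelled as two cross-coupled NOR gates; the short loop just lets the feedback settle):

# Sketch: bistable SR latch; with S = R = 0 it holds its state, storing one bit.
def sr_latch(S: int, R: int, Q: int) -> int:
    for _ in range(4):                  # iterate the cross-coupled NORs to settle
        Qbar = int(not (S or Q))
        Q = int(not (R or Qbar))
    return Q

Q = 0
Q = sr_latch(S=1, R=0, Q=Q); print(Q)   # set   -> 1
Q = sr_latch(S=0, R=0, Q=Q); print(Q)   # hold  -> 1 (the latch remembers)
Q = sr_latch(S=0, R=1, Q=Q); print(Q)   # reset -> 0
Q = sr_latch(S=0, R=0, Q=Q); print(Q)   # hold  -> 0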
NOTE: Attempt any two. Each carries 5 marks (word limit: 500).
Q2 Explain the principle and working of simple and superheterodyne receivers.
Ans In electronics, a superheterodyne receiver (often shortened to superhet) uses frequency mixing to convert a received signal to a fixed intermediate frequency (IF) which can be more conveniently processed than the original radio carrier frequency. It was invented by US engineer Edwin Armstrong in 1918 during World War I.[1] Virtually all modern radio receivers use the superheterodyne principle. At the cost of an extra frequency converter stage, the superheterodyne receiver provides superior selectivity and sensitivity compared with simpler designs.
"Superheterodyne" is a contraction of "supersonic heterodyne", where "supersonic" indicates frequencies above the range
of human hearing. The word heterodyne is derived from the Greek roots hetero- "different", and -dyne "power". In radio
applications the term derives from the "heterodyne detector" pioneered by Canadian inventor Reginald Fessenden in
1905, describing his proposed method of producing an audible signal from the Morse codetransmissions of the
new continuous wave transmitters. With the older spark gap transmitters then in use, the Morse code signal consisted of
short bursts of a heavily modulated carrier wave, which could be clearly heard as a series of short chirps or buzzes in the
receiver's headphones. However, the signal from a continuous wave transmitter did not have any such inherent
modulation and Morse Code from one of those would only be heard as a series of clicks or thumps. Fessenden's idea was
to run two Alexanderson alternators, one producing a carrier frequency 3 kHz higher than the other. In the receiver's
detector the two carriers would beattogether to produce a 3 kHz tone thus in the headphones the Morse signals would
then be heard as a series of 3 kHz beeps. For this he coined the term "heterodyne" meaning "generated by a difference"
(in frequency).
The superheterodyne principle was devised in 1918 by U.S. Army Major Edwin Armstrong in France during World War I.[2][3]
He invented this receiver as a means of overcoming the deficiencies of early vacuum tube triodes used as high-
frequency amplifiers in radio direction finding equipment. Unlike simple radio communication, which only needs to make
transmitted signals audible, direction-finders measure the received signal strength, which necessitates linear amplification
of the actual carrier wave.
(Figure: one of the first amateur superheterodyne receivers, built in 1920 even before Armstrong published his paper. Due to the low gain of early triodes it required 9 tubes, with 5 IF amplification stages, and used an IF of around 50 kHz.)
In a triode radio-frequency (RF) amplifier, if both the plate (anode) and grid are connected to resonant circuits tuned to the
same frequency, stray capacitive coupling between the grid and the plate will cause the amplifier to go into oscillation if
the stage gain is much more than unity. In early designs, dozens (in some cases over 100) of low-gain triode stages had to
be connected in cascade to make workable equipment, which drew enormous amounts of power in operation and
required a team of maintenance engineers. The strategic value was so high, however, that the British Admiralty felt the
high cost was justified.
Armstrong realized that if radio direction-finding (RDF) receivers could be operated at a higher frequency, this would allow
better detection of enemy shipping. However, at that time, no practical "short wave" (defined then as any frequency above
500 kHz) amplifier existed, due to the limitations of existing triodes.
Q4 Explain the full-wave (bridge) rectifier?
Ans A rectifier is an electrical device that converts alternating current (AC), which periodically reverses direction,
to direct current (DC), which flows in only one direction. The process is known as rectification. Physically, rectifiers take
a number of forms, including vacuum tube diodes, mercury-arc valves, copper and selenium oxide
rectifiers, semiconductor diodes, silicon-controlled rectifiers and other silicon-based semiconductor switches. Historically,
even synchronous electromechanical switches and motors have been used. Early radio receivers, called crystal radios,
used a "cat's whisker" of fine wire pressing on a crystal of galena (lead sulfide) to serve as a point-contact rectifier or
"crystal detector".
Rectifiers have many uses, but are often found serving as components of DC power supplies and high-voltage direct
current power transmission systems. Rectification may serve in roles other than to generate direct current for use as a
source of power. As noted, detectors of radio signals serve as rectifiers. In gas heating systems, flame rectification is used
to detect presence of a flame.
Because of the alternating nature of the input AC sine wave, the process of rectification alone produces a DC current that,
though unidirectional, consists of pulses of current. Many applications of rectifiers, such as power supplies for radio,
television and computer equipment, require a steady, constant DC current (as would be produced by a battery). In these
applications the output of the rectifier is smoothed by an electronic filter (usually a capacitor) to produce a steady current.
More complex circuitry that performs the opposite function, converting DC to AC, is called an inverter.
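The smoothing step can be illustrated with a minimal simulation: a full-wave rectified sine charges a reservoir capacitor up to each peak, and the capacitor then discharges into the load between peaks, leaving only a small ripple. The component values and the 50 Hz mains frequency are assumed examples.

    # Illustrative sketch of capacitor smoothing after full-wave rectification.
    # Component values (R, C) and the 50 Hz mains frequency are assumed examples.
    import numpy as np

    fs = 100_000.0                   # simulation sample rate, Hz
    t = np.arange(0, 0.1, 1/fs)      # 100 ms
    v_rect = np.abs(np.sin(2*np.pi*50.0*t))   # full-wave rectified sine, 1 V peak

    R, C = 1_000.0, 100e-6           # load resistor and reservoir capacitor
    dt = 1/fs
    v_out = np.zeros_like(v_rect)
    for i in range(1, len(t)):
        if v_rect[i] >= v_out[i-1]:
            v_out[i] = v_rect[i]                    # diode conducts: follow input
        else:
            v_out[i] = v_out[i-1] * (1 - dt/(R*C))  # diode off: RC discharge

    tail = v_out[len(t)//2:]                        # ignore start-up transient
    print(f"peak-to-peak ripple ~ {tail.max() - tail.min():.3f} V")
    # Small compared with the 1 V peak: the filter turns pulses into near-DC.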
Before the development of silicon semiconductor rectifiers, vacuum tube thermionic diodes and copper oxide- or
selenium-based metal rectifier stacks were used.[1] With the introduction of semiconductor electronics, vacuum tube
rectifiers became obsolete, except for some enthusiasts of vacuum tube audio equipment. For power rectification from
very low to very high current, semiconductor diodes of various types (junction diodes, Schottky diodes, etc.) are widely
used.
Devices that act as unidirectional current valves but also have control electrodes are used where more than
simple rectification is required, e.g., where a variable output voltage is needed. High-power rectifiers, such as those used
in high-voltage direct current power transmission, employ silicon semiconductor devices of various types. These
are thyristors or other controlled switching solid-state switches, which effectively function as diodes to pass current in only
one direction.
Rectifier circuits may be single-phase or multi-phase (three being the most common number of phases). Most low power
rectifiers for domestic equipment are single-phase, but three-phase rectification is very important for industrial applications
and for the transmission of energy as DC (HVDC).
In half wave rectification of a single-phase supply, either the positive or negative half of the AC wave is passed, while the
other half is blocked. Because only one half of the input waveform reaches the output, the mean voltage is lower. Half-wave
rectification requires a single diode in a single-phase supply, or three in a three-phase supply. Rectifiers yield a
unidirectional but pulsating direct current; half-wave rectifiers produce far more ripple than full-wave rectifiers, and much
more filtering is needed to eliminate harmonics of the AC frequency from the output.
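A quick numerical comparison (a minimal sketch with an idealized 1 V peak sine and ideal diodes) shows why full-wave rectification, as in the bridge circuit, is preferred: the mean output doubles, and the ripple fundamental moves to twice the input frequency, which is easier to filter.

    # Minimal sketch comparing idealized half-wave and full-wave rectification
    # of a 1 V peak sine (ideal diodes, no load; the values are illustrative).
    import numpy as np

    t = np.linspace(0, 1, 100_000, endpoint=False)   # one input cycle
    v_in = np.sin(2*np.pi*t)

    half_wave = np.maximum(v_in, 0.0)   # one diode: negative half blocked
    full_wave = np.abs(v_in)            # bridge: negative half inverted

    print(f"half-wave mean ~ {half_wave.mean():.3f} V")   # ~1/pi ~ 0.318 V
    print(f"full-wave mean ~ {full_wave.mean():.3f} V")   # ~2/pi ~ 0.637 V
    # The full-wave output repeats every half input cycle, so its ripple
    # fundamental sits at twice the input frequency and is easier to filter out.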