Homonuclear diatomics, X2
Octahedral, EX6
Square planar, EX4
Heteronuclear diatomics, XY
Tetrahedral, EX4
Ans Metals at the top of the series are good at giving away electrons. They are good reducing agents. The reducing
ability of the metal increases as you go up the series.
Metal ions at the bottom of the series are good at picking up electrons. They are good oxidising agents. The oxidising
ability of the metal ions increases as you go down the series.
Note: Answer any two questions. Each question carries 5 marks (Word limits 500)
Alfred Werner, a Swiss chemist, put forward a theory to explain the formation of complex compounds. It was the first
successful explanation and became famous as the coordination theory of complex compounds, also known as
Werner's theory.
Postulates:
(a) The central metal atom (or) ion in a coordination compound exhibits two types of valencies - primary and secondary.
(b) Primary valencies are ionisable and correspond to the number of charges on the complex ion. Primary valencies apply
equally well to simple salts and to complexes and are satisfied by negative ions.
(c) Secondary valencies correspond to the valencies that a metal atom (or) ion exercises towards neutral molecules (or)
negative ions in the formation of its complex ions.
(d) Secondary valencies are directional and so a complex has a particular shape. The number and arrangement of ligands
in space determines the stereochemistry of a complex.
The postulates of Werner's coordination theory were actually based on experimental evidence rather than theoretical reasoning.
Although Werner's theory successfully explains the bonding features in coordination compounds, it has drawbacks.
Drawbacks:
It doesn't explain why only certain elements form coordination compounds.
It does not explain why the bonds in coordination compounds have directional properties.
It does not explain the colour, and the magnetic and optical properties of complexes.
In chemistry, a coordination complex or metal complex consists of a central atom or ion, which is usually metallic and is
called the coordination centre, and a surrounding array of bound molecules or ions, which are in turn known as ligands or
complexing agents.[1][2] Many metal-containing compounds, especially those of transition metals, are coordination
complexes.
Coordination complexes are so pervasive that their structures and reactions are described in many ways, sometimes
confusingly. The atom within a ligand that is bonded to the central atom or ion is called the donor atom. In a typical
complex, a metal ion is bound to several donor atoms, which can be the same or different. Polydentate (multiple bonded)
ligands consist of several donor atoms, several of which are bound to the central atom or ion. These complexes are
called chelate complexes; the formation of such complexes is called chelation, complexation, or coordination.
The central atom or ion, together with all the ligands, comprises the coordination sphere.[4][5] The central atom or ion and the
donor atoms comprise the first coordination sphere.
The most common type of complex is octahedral; here six ligands form an octahedron around the metal ion. In octahedral
symmetry the d-orbitals split into two sets with an energy difference, Δoct (the crystal-field splitting parameter): the
dxy, dxz and dyz orbitals will be lower in energy than the dz2 and dx2−y2 orbitals, because the former
group is farther from the ligands than the latter and therefore experiences less repulsion. The three lower-energy orbitals
are collectively referred to as t2g, and the two higher-energy orbitals as eg. (These labels are based on the theory
of molecular symmetry.) Typical orbital energy diagrams are given below in the section High-spin and low-spin.
An isomer (/ˈaɪsəmər/; from Greek ἰσομερής, isomerès; isos = "equal", méros = "part") is a molecule with the
same chemical formula as another molecule, but with a different chemical structure. That is, isomers contain the same
number of atoms of each element, but have different arrangements of their atoms.[1][2] Isomers do not necessarily share
similar properties, unless they also have the same functional groups. There are many different classes of isomers,
like positional isomers, cis–trans isomers and enantiomers (see chart below). There are two main forms of
isomerism: structural isomerism and stereoisomerism.
| Element | Atomic electron configuration (all begin with [Xe]) | Ln3+ electron configuration | Ln3+ radius (pm, 6-coordinate) |
|---------|------------------------------------------------------|------------------------------|--------------------------------|
| La | 5d1 6s2 | 4f0 | 103 |
| Ce | 4f1 5d1 6s2 | 4f1 | 102 |
| Pr | 4f3 6s2 | 4f2 | 99 |
| Nd | 4f4 6s2 | 4f3 | 98.3 |
| Pm | 4f5 6s2 | 4f4 | 97 |
| Sm | 4f6 6s2 | 4f5 | 95.8 |
| Eu | 4f7 6s2 | 4f6 | 94.7 |
| Gd | 4f7 5d1 6s2 | 4f7 | 93.8 |
| Tb | 4f9 6s2 | 4f8 | 92.3 |
| Dy | 4f10 6s2 | 4f9 | 91.2 |
| Ho | 4f11 6s2 | 4f10 | 90.1 |
| Er | 4f12 6s2 | 4f11 | 89 |
| Tm | 4f13 6s2 | 4f12 | — |
| Yb | 4f14 6s2 | 4f13 | — |
| Lu | 4f14 5d1 6s2 | 4f14 | — |
Acetic acid, CH3COOH, is an acid because it donates a proton to water (H2O) and becomes its conjugate base, the
acetate ion (CH3COO−). H2O is a base because it accepts a proton from CH3COOH and becomes its conjugate acid, the
hydronium ion (H3O+).[6]
The reverse of an acid–base reaction is also an acid–base reaction, between the conjugate acid of the base in the first
reaction and the conjugate base of the acid. In the above example, acetate is the base of the reverse reaction and
hydronium ion is the acid.
H3O+ + CH3COO− ⇌ CH3COOH + H2O
Ans Hooke's law is a principle of physics that states that the force needed to extend or compress a spring by
some distance is proportional to that distance. That is, F = kX, where F is the force, X is the extension, and k is a
constant factor characteristic of the spring, its stiffness. The law is named after the 17th-century British physicist Robert
Hooke. He first stated the law in 1660 as a Latin anagram.[1][2] He published the solution of his anagram in 1678 as:
ut tensio, sic vis ("as the extension, so the force" or "the extension is proportional to the force").
Hooke's equation in fact holds (to some extent) in many other situations where an elastic body is deformed, such as wind
blowing on a tall building, a musician plucking a string of a guitar, or the filling of a party balloon. An elastic body or
material for which this equation can be assumed is said to be linear-elastic or Hookean.
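A minimal numerical sketch of the linear force law and the associated stored energy (the stiffness and displacement values below are illustrative assumptions, not from the source):

```python
# Hooke's law: restoring force F = -k*x for a linear-elastic (Hookean) spring.
def spring_force(k: float, x: float) -> float:
    """Force (N) exerted by a spring of stiffness k (N/m) displaced by x (m)."""
    return -k * x

# Elastic potential energy stored at displacement x: U = (1/2) k x^2.
def spring_energy(k: float, x: float) -> float:
    return 0.5 * k * x ** 2

k = 200.0   # assumed stiffness, N/m
x = 0.05    # assumed extension, m
print(spring_force(k, x))   # -10.0 N (the force opposes the extension)
print(spring_energy(k, x))  # 0.25 J
```

The sign convention shows the restoring character of the force: it always points opposite to the displacement.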
The alcohol (V) partially decomposed during distillation under reduced pressure to an
uncharacterised phenolic material. From the reaction of o-toluquinol and formaldehyde followed by
methylation, Euler and his co-workers [3] isolated a compound (A) to which they assigned the structure
(IV) on the basis of its conversion into a 2,5-dihydroxymethylbenzoic acid which depressed the
melting point of 2,5-dihydroxy-4-methylbenzoic acid prepared by the Kolbe reaction. The compound
(A) was not found by us to be the same as our product (IV) and we have identified it as 4-
hydroxymethyl-2,5-dimethoxytoluene by obtaining the aldehyde (II) from its oxidation with
selenium dioxide, and 2,5-dimethoxy-4-methylbenzoic acid [4] by alternative oxidation procedures. By
this result the structural assignment of the aldehyde (II) is also confirmed.
The second synthesis of compound (IV) was of poor yield. It involved a Mannich reaction of
compound (III) with formaldehyde and dimethylamine [5], yielding 2-dimethylaminomethyl-4-
methoxy-6-methylphenol (VI), which, when heated with acetic anhydride [6], gave 2-acetoxy-5-
methoxy-3-methylbenzyl acetate (VII). This was hydrolysed with alkali and compound (IV) was
obtained by methylation of the hydrolysate. A by-product of this reaction was bis-(2-hydroxy-5-
methoxy-3-methylphenyl)methane (VIII), which may have arisen from the reversible
decomposition of 2-hydroxymethyl-4-methoxy-6-methylphenol (V) to give 4-methoxy-2-
methylphenol (III) and formaldehyde, followed by the condensation of compound (III) with
compound (V) or with an intermediate arising from compound (V) by loss of a molecule of water.
alkanes, alkenes,
compounds containing heteroatoms in the ring:
oxygen: ethers, acetals, esters (lactones, lactides, and carbonates), and anhydrides,
sulfur: polysulfur, sulfides and polysulfides,
nitrogen: amines, amides (lactams), imides, N-carboxyanhydrides and 1,3-oxaza derivatives,
phosphorus: phosphates, phosphonates, phosphites, phosphines and phosphazenes,
silicon: siloxanes, silaethers, carbosilanes and silanes.
The term "polymer" derives from the ancient Greek word πολύς (polus, meaning "many, much") and μέρος
(meros, meaning "parts"), and refers to a molecule whose structure is composed of multiple repeating units, from
which originates a characteristic of high relative molecular mass and attendant properties.[6] The units composing
polymers derive, actually or conceptually, from molecules of low relative molecular mass. [7] The term was coined
in 1833 by Jöns Jacob Berzelius, though with a definition distinct from the modern IUPAC definition.[8][9] The
modern concept of polymers as covalently bonded macromolecular structures was proposed in 1920 by Hermann
Staudinger, who spent the next decade finding experimental evidence for this hypothesis. [10]
Polymers are studied in the fields of biophysics and macromolecular science, and polymer science (which
includes polymer chemistry and polymer physics). Historically, products arising from the linkage of repeating units
by covalent chemical bonds have been the primary focus of polymer science; emerging important areas of the
science now focus on non-covalent links. Polyisoprene of latex rubber and the polystyrene of styrofoam are
examples of polymeric natural/biological and synthetic polymers, respectively. In biological contexts, essentially
all biological macromolecules—i.e., proteins (polyamides), nucleic acids (polynucleotides), and polysaccharides—
are purely polymeric, or are composed in large part of polymeric components—e.g., isoprenylated/lipid-modified
glycoproteins, where small lipidic molecule and oligosaccharide modifications occur on the polyamide backbone
of the protein.[11]
The simplest theoretical models for polymers are ideal chains.
Internal Assignment No. 2
An auxochrome is a functional group of atoms which can alter the capacity of a chromophore to absorb light and
hence the colour it produces. Azobenzene is an example of a dye which contains a chromophore. Substances such as
dyes produce colours by absorption of visible light owing to their constituent compounds. The electromagnetic spectrum
spans a very wide range of wavelengths, but the human eye perceives only the narrow visible band. Chromogens do not
absorb visible light on their own, but in the presence of an auxochrome their absorption is shifted. An auxochrome thus
intensifies the colour of an organic substance. For instance, benzene has no colour of its own, but when it is combined
with a nitro group, which acts as a chromophore, it acquires a pale yellow colour.
2) Show by reactions the reduction of Aldehydes and ketones.
Ans An aldehyde /ˈældɨhaɪd/ or alkanal is an organic compound containing a formyl group. The formyl group is
a functional group, with the structure R-CHO, consisting of a carbonyl center (a carbon double bonded to oxygen) bonded
to hydrogen and an R group,[1] which is any generic alkyl or side chain. The group without R is called the aldehyde
group or formyl group. Aldehydes differ from ketones in that the carbonyl is placed at the end of a carbon skeleton rather
than between two carbon atoms. Aldehydes are common in organic chemistry. Many fragrances are aldehydes.
Aldehydes feature an sp2-hybridized, planar carbon center that is connected by a double bond to oxygen and a single
bond to hydrogen. The C-H bond is not acidic. Because of resonance stabilization of the conjugate base, an α-
hydrogen in an aldehyde is far more acidic, with a pKa near 17, than a C-H bond in a
typical alkane (pKa about 50).[2] This acidification is attributed to (i) the electron-withdrawing quality of the formyl center
and (ii) the fact that the conjugate base, an enolate anion, delocalizes its negative charge. Related to (i), the aldehyde
group is somewhat polar.
1. Ans Acyclic aliphatic aldehydes are named as derivatives of the longest carbon chain containing the aldehyde
group. Thus, HCHO is named as a derivative of methane, and CH3CH2CH2CHO is named as a derivative
of butane. The name is formed by changing the suffix -e of the parent alkane to -al, so that HCHO is
named methanal, and CH3CH2CH2CHO is named butanal.
2. In other cases, such as when a -CHO group is attached to a ring, the suffix -carbaldehyde may be used. Thus,
C6H11CHO is known as cyclohexanecarbaldehyde. If the presence of another functional group demands the use
of a suffix, the aldehyde group is named with the prefix formyl-. This prefix is preferred to methanoyl-.
3. If the compound is a natural product or a carboxylic acid, the prefix oxo- may be used to indicate which carbon
atom is part of the aldehyde group; for example, CHOCH2COOH is named 3-oxopropanoic acid.
4. If replacing the aldehyde group with a carboxyl group (-COOH) would yield a carboxylic acid with a trivial name,
the aldehyde may be named by replacing the suffix -ic acid or -oic acid in this trivial name by -aldehyde.
5) What is difference between secondary and tertiary amines with examples.
Ans Amines and ammonia are generally sufficiently basic to undergo direct alkylation, often under mild conditions. The
reactions are difficult to control because the reaction products (a primary amine or a secondary amine) are often more
nucleophilic than the precursor and will thus preferentially react with the alkylating agent. For example, reaction of 1-
bromooctane with ammonia yields almost equal amounts of the primary amine and the secondary amine.[2] Therefore, for
laboratory purposes, N-alkylation is often limited to the synthesis of tertiary amines. A notable exception is the reactivity of
alpha-halo carboxylic acids, which do permit synthesis of primary amines with ammonia.[3] Intramolecular reactions of
haloamines X-(CH2)n-NH2 give cyclic aziridines, azetidines and pyrrolidines.
N-alkylation is a general and useful route to quaternary ammonium salts from tertiary amines, because overalkylation is
not possible.
Examples of N-alkylation with alkyl halides are the syntheses of benzylaniline, [4] 1-benzylindole,[5][6] and azetidine.[7]Another
example is found in the derivatization of cyclen.[8] Industrially, ethylenediamine is produced by alkylation of ammonia
with 1,2-dichloroethane.
Q.4 Write the chemical reactions of nitroalkenes and 2,4,6-trinitrophenol.
Ans Nitro compounds are organic compounds that contain one or more nitro functional groups (–NO2). They are often
highly explosive, especially when the compound contains more than one nitro group and is impure. The nitro group is one
of the most common explosophores (functional groups that make a compound explosive) used globally. This property of
both nitro and nitrate groups is because their thermal decomposition yields molecular nitrogen N2 gas plus considerable
energy, due to the high strength of the bond in molecular nitrogen.
Aromatic nitro compounds are typically synthesized by the action of a mixture of nitric and sulfuric acids on an organic
molecule. The one produced on the largest scale, by far, is nitrobenzene. Many explosives are produced by nitration,
including trinitrophenol (picric acid), trinitrotoluene (TNT), and trinitroresorcinol (styphnic acid).
Chloramphenicol is a rare example of a naturally occurring nitro compound. At least some naturally occurring nitro groups
arise by the oxidation of amino groups.[2] 2-Nitrophenol is an aggregation pheromone of ticks.
Examples of nitro compounds are rare in nature. 3-Nitropropionic acid is found in fungi and plants
(Indigofera). Nitropentadecene is a defense compound found in termites. Nitrophenylethane is found in Aniba canelilla.[3]
Nitrophenylethane is also found in members of the Annonaceae, Lauraceae and Papaveraceae.[4]
Many flavin-dependent enzymes are capable of oxidizing aliphatic nitro compounds to less-toxic aldehydes and
ketones. Nitroalkane oxidase and 3-nitropropionate oxidase oxidize aliphatic nitro compounds exclusively, whereas other
enzymes such as glucose oxidase have other physiological substrates
Aliphatic nitro compounds are reduced to amines with hydrochloric acid and an iron catalyst.
Nitronates are a tautomeric form of aliphatic nitro compounds.
Hydrolysis of the salts of nitro compounds yields aldehydes or ketones in the Nef reaction.
Nitromethane adds to aldehydes in 1,2-addition in the nitroaldol reaction.
Nitromethane adds to alpha,beta-unsaturated carbonyl compounds in 1,4-addition in the Michael reaction, acting as a
Michael donor.
Nitroalkenes are Michael acceptors in the Michael reaction with enolate compounds.[12][13]
In nucleophilic aliphatic substitution, sodium nitrite (NaNO2) replaces an alkyl halide. In the so-called ter Meer
reaction (1876) named after Edmund ter Meer,[14] the reactant is a 1,1-halonitroalkane:
Heat capacity is the ratio of the heat added to (or removed from) an object to the resulting temperature change.[1] The
SI unit of heat capacity is joule per kelvin and the dimensional form is L2MT−2Θ−1. Specific heat is the amount of heat
needed to raise the temperature of a certain mass by one degree Celsius.
Heat capacity is an extensive property of matter, meaning it is proportional to the size of the system. When expressing the
same phenomenon as an intensive property, the heat capacity is divided by the amount of substance, mass, or volume, so
that the quantity is independent of the size or extent of the sample. The molar heat capacity is the heat capacity per unit
amount (SI unit: mole) of a pure substance and the specific heat capacity, often simply called specific heat, is the heat
capacity per unit mass of a material. Occasionally, in engineering contexts, the volumetric heat capacity is used.
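The distinction between heat capacity and specific heat can be illustrated with the defining relation Q = m·c·ΔT; in the sketch below the specific heat of water is an assumed textbook value:

```python
def heat_required(mass_kg: float, specific_heat: float, delta_T: float) -> float:
    """Heat Q (J) needed to change the temperature of mass_kg (kg) by
    delta_T (K), given the specific heat capacity in J/(kg·K)."""
    return mass_kg * specific_heat * delta_T

# Warm 0.5 kg of water (c ≈ 4186 J/(kg·K), an assumed value) by 10 K:
q = heat_required(0.5, 4186.0, 10.0)
print(q)  # 20930.0 J

# The heat capacity of this particular sample (an extensive quantity) is m·c:
print(0.5 * 4186.0)  # 2093.0 J/K
```

Note that the specific heat c is intensive (per kilogram), while m·c is the extensive heat capacity of the sample, matching the discussion above.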
Q.2 Deduce Joule's law and calculate ΔU for the expansion of ideal gases under isothermal conditions.
Ans In thermodynamics, the Joule–Thomson effect (also known as the Joule–Kelvin effect, Kelvin–Joule effect,
or Joule–Thomson expansion) describes the temperature change of a gas or liquid when it is forced through
a valve or porous plug while kept insulated so that no heat is exchanged with the environment.[1][2][3] This procedure is
called a throttling process or Joule–Thomson process.[4] At room temperature, all gases
except hydrogen, helium and neon cool upon expansion by the Joule–Thomson process; these three gases experience
the same effect but only at lower temperatures.[5][6]
The effect is named after James Prescott Joule and William Thomson, 1st Baron Kelvin, who discovered it in 1852. It
followed upon earlier work by Joule on Joule expansion, in which a gas undergoes free expansion in a vacuum and the
temperature is unchanged, if the gas is ideal.
The throttling process is commonly exploited in thermal machines such as refrigerators, air conditioners, heat pumps, and
liquefiers.[7]
Throttling is a fundamentally irreversible process. The throttling due to the flow resistance in supply lines, heat
exchangers, regenerators, and other components of (thermal) machines is a source of losses that limits the performance.
The physical mechanism associated with the Joule–Thomson effect is closely related to that of a shock wave.
The adiabatic (no heat exchanged) expansion of a gas may be carried out in a number of ways. The change in
temperature experienced by the gas during expansion depends not only on the initial and final pressure, but also on the
manner in which the expansion is carried out.
If the expansion process is reversible, meaning that the gas is in thermodynamic equilibrium at all times, it is
called an isentropic expansion. In this scenario, the gas does positive work during the expansion, and its temperature
decreases.
In a free expansion, on the other hand, the gas does no work and absorbs no heat, so the internal energy is
conserved. Expanded in this manner, the temperature of an ideal gas would remain constant, but the temperature of
a real gas may either increase or decrease, depending on the initial temperature and pressure.
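By contrast, a reversible isothermal expansion of an ideal gas does perform work, while Joule's law still gives ΔU = 0, so the heat absorbed equals the work done. A minimal sketch (the amounts, temperature, and volumes are illustrative assumptions):

```python
import math

R = 8.314  # molar gas constant, J/(mol·K)

def isothermal_work(n: float, T: float, V1: float, V2: float) -> float:
    """Work done BY n mol of an ideal gas (J) in a reversible isothermal
    expansion from volume V1 to V2 at temperature T: w = nRT ln(V2/V1)."""
    return n * R * T * math.log(V2 / V1)

# 1 mol at 298 K doubling its volume:
w = isothermal_work(1.0, 298.0, 1.0, 2.0)
# Joule's law: ΔU = 0 for an ideal gas at constant T, hence q = w.
print(round(w, 1))  # ≈ 1717.3 J
```

For a compression (V2 < V1) the logarithm is negative, so the gas does negative work and releases the corresponding heat.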
The method of expansion discussed in this article, in which a gas or liquid at pressure P1 flows into a region of
lower pressure P2 via a valve or porous plug under steady state conditions and without change in kinetic energy, is
called the Joule–Thomson process. During this process, enthalpy remains unchanged (see a proof below).
A throttling process proceeds along a constant-enthalpy curve in the direction of decreasing pressure, which means that
the process occurs from right to left on a temperature–pressure diagram. If the pressure starts out high enough, the
temperature increases as the pressure drops until the inversion temperature is reached; thereafter the temperature
decreases as the pressure continues to fall. The point at which the sign of the temperature change reverses is called the
inversion point.
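The inversion temperature can be estimated for a van der Waals gas, for which a standard approximation gives a maximum inversion temperature of T_inv ≈ 2a/(Rb). The constants for N2 below are approximate literature values, used here only for illustration:

```python
R = 8.314  # molar gas constant, J/(mol·K)

def jt_inversion_temperature(a: float, b: float) -> float:
    """Approximate maximum Joule–Thomson inversion temperature (K) of a
    van der Waals gas with constants a (Pa·m^6/mol^2) and b (m^3/mol)."""
    return 2.0 * a / (R * b)

# Approximate van der Waals constants for N2 (assumed values):
a_N2 = 0.1408    # Pa·m^6/mol^2
b_N2 = 3.913e-5  # m^3/mol
print(round(jt_inversion_temperature(a_N2, b_N2)))  # ≈ 866 K
```

The estimate is well above room temperature, consistent with the statement that nitrogen (unlike hydrogen, helium, and neon) cools on throttling at room temperature; the simple van der Waals formula is known to overestimate measured inversion temperatures.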
Determine oxidation states and assign d-electron counts for transition metals in complexes.
Derive the d-orbital splitting patterns for octahedral, elongated octahedral, square pyramidal, square planar, and
tetrahedral complexes.
For octahedral and tetrahedral complexes, determine the number of unpaired electrons and calculate the crystal
field stabilization energy.
Know the spectrochemical series, rationalize why different classes of ligands impact the crystal field splitting
energy as they do, and use it to predict high vs. low spin complexes, and the colors of transition metal complexes.
Use the magnetic moment of transition metal complexes to determine their spin state.
Understand the origin of the Jahn-Teller effect and its consequences for complex shape, color, and reactivity.
Understand the extra stability of complexes formed by chelating and macrocyclic ligands.
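The electron-filling and CFSE bookkeeping for octahedral complexes described above can be sketched as follows (a simplified model: orbital energies in units of Δoct, with pairing-energy corrections ignored):

```python
def octahedral_config(d_electrons: int, low_spin: bool):
    """Distribute d electrons over t2g (3 orbitals at -0.4 Δoct each)
    and eg (2 orbitals at +0.6 Δoct each) for an octahedral complex."""
    assert 0 <= d_electrons <= 10
    if low_spin:
        # fill t2g completely before populating eg
        t2g = min(d_electrons, 6)
        eg = d_electrons - t2g
    else:
        # high spin: singly occupy all five orbitals first (Hund's rule),
        # then pair up, again t2g before eg
        t2g = min(d_electrons, 3)
        eg = min(max(d_electrons - 3, 0), 2)
        remaining = d_electrons - t2g - eg
        extra = min(remaining, 3)
        t2g += extra
        eg += remaining - extra
    return t2g, eg

def cfse(d_electrons: int, low_spin: bool) -> float:
    """Crystal field stabilization energy in units of Δoct."""
    t2g, eg = octahedral_config(d_electrons, low_spin)
    return -0.4 * t2g + 0.6 * eg

def unpaired_electrons(d_electrons: int, low_spin: bool) -> int:
    t2g, eg = octahedral_config(d_electrons, low_spin)
    return (t2g if t2g <= 3 else 6 - t2g) + (eg if eg <= 2 else 4 - eg)

# d6 (e.g. Fe2+): high spin has 4 unpaired electrons, low spin has none
print(unpaired_electrons(6, low_spin=False), round(cfse(6, low_spin=False), 2))  # 4 -0.4
print(unpaired_electrons(6, low_spin=True), round(cfse(6, low_spin=True), 2))    # 0 -2.4
```

Whether the high- or low-spin configuration is adopted depends on the size of Δoct relative to the pairing energy, which is where the spectrochemical series enters.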
The styx number was introduced to aid in electron counting, where s = count of 3-center B-H-B bonds; t = count of 3-
center B-B-B bonds; y = count of 2-center B-B bonds; and x = count of BH2 groups.
Lipscomb's methodology has largely been superseded by a molecular orbital approach, although it still affords insights.
The results of this have been summarised in a simple but powerful rule, PSEPT, often known as Wade's rules, that can be
used to predict the cluster type, closo-, nido-, etc. The power of this rule is its ease of use and general applicability to
many different cluster types other than boranes. There are continuing efforts by theoretical chemists to improve the
treatment of the bonding in boranes — an example is Stone's tensor surface harmonic treatment of cluster bonding. A
recent development is the four-center two-electron bond.
Q.2 Give the structure and nature of bonding in the mononuclear carbonyls.
Ans Metal carbonyls are coordination complexes of transition metals with carbon monoxide ligands. Metal carbonyls are
useful in organic synthesis and as catalysts or catalyst precursors in homogeneous catalysis, such
as hydroformylation and Reppe chemistry. In the Mond process, nickel carbonyl is used to produce pure nickel. In
organometallic chemistry, metal carbonyls serve as precursors for the preparation of other organometallic complexes.
Metal carbonyls are toxic by skin contact, inhalation or ingestion, in part because of their ability to carbonylate hemoglobin
to give carboxyhemoglobin, which prevents the binding of O2.
The nomenclature of the metal carbonyls depends on the charge of the complex, the number and type of central atoms,
and the number and type of ligands and their binding modes. They occur as neutral complexes, as positively charged
metal carbonyl cations or as negatively charged metal carbonylates. The carbon monoxide ligand may be bound
terminally to a single metal atom or bridging to two or more metal atoms. These complexes may be homoleptic, that is
containing only CO ligands, such as nickel carbonyl (Ni(CO)4), but more commonly metal carbonyls are heteroleptic and
contain a mixture of ligands.
Mononuclear metal carbonyls contain only one metal atom as the central atom. Except for vanadium hexacarbonyl, only
metals with an even atomic number, such as chromium, iron, nickel and their homologues, form neutral mononuclear
complexes. Polynuclear metal carbonyls are formed from metals with odd atomic numbers and contain a metal–metal
bond.[2]
Complexes with different metals but only one type of ligand are referred to as isoleptic.[2]
The number of carbon monoxide ligands in a metal carbonyl complex is described by a Greek numeral prefix, followed by
the word carbonyl. Carbon monoxide has different binding modes in metal carbonyls, which differ in hapticity and
bridging mode. The hapticity describes the number of atoms of the carbon monoxide ligand that are directly bonded to the
central atom. It is denoted by the letter ηn, which is prefixed to the name of the complex; the superscript n indicates the
number of bonded atoms. In monohapto coordination, as in terminally bonded carbon monoxide, the hapticity is 1 and it is
usually not separately designated. If carbon monoxide is bound both via the carbon atom and via the oxygen atom to the
metal, it is referred to as dihapto coordinated, η2.[3]
The carbonyl ligand engages in a range of bonding modes in metal carbonyl dimers and clusters. In the most common
bridging mode, the CO ligand bridges a pair of metals. This bonding mode is observed in the commonly available metal
carbonyls: Co2(CO)8, Fe2(CO)9, Fe3(CO)12, and Co4(CO)12.[1][4] In certain higher nuclearity clusters, CO bridges between
three or even four metals. These ligands are denoted μ3-CO and μ4-CO. Less common are bonding modes in which both
C and O bond to the metal, e.g. μ3-η2.
small atomic/ionic radius
high oxidation state
low polarizability
high electronegativity (bases)
hard bases have highest-occupied molecular orbitals (HOMO) of low energy, and hard acids have lowest-
unoccupied molecular orbitals (LUMO) of high energy. [7][8]
Examples of hard acids are: H+, light alkali ions (Li through K all have small ionic radius), Ti4+, Cr3+, Cr6+, BF3. Examples of
hard bases are: OH–, F–, Cl–, NH3, CH3COO–, CO32–. The affinity of hard acids and hard bases for each other is
mainly ionic in nature.
Soft acids and soft bases tend to have the following characteristics:
In the context of real analysis, the former property is sometimes used as the defining property of compactness. However,
the two definitions cease to be equivalent when we consider subsets of more general metric spaces and in this generality
only the latter property is used to define compactness. In fact, the Heine–Borel theorem for arbitrary metric spaces reads:
A subset of a metric space is compact if and only if it is complete and totally bounded.
3) Write the different applications of contraction maps (I) and contraction maps (II).
Ans In mathematics, the Banach fixed-point theorem (also known as the contraction mapping
theorem or contraction mapping principle) is an important tool in the theory of metric spaces; it guarantees the
existence and uniqueness of fixed points of certain self-maps of metric spaces, and provides a constructive method to find
those fixed points. The theorem is named after Stefan Banach (1892–1945), and was first stated by him in 1922.
A standard application is the proof of the Picard–Lindelöf theorem about the existence and uniqueness of
solutions to certain ordinary differential equations. The sought solution of the differential equation is expressed as a
fixed point of a suitable integral operator which transforms continuous functions into continuous functions. The
Banach fixed-point theorem is then used to show that this integral operator has a unique fixed point.
One consequence of the Banach fixed-point theorem is that small Lipschitz perturbations of the identity are bi-
Lipschitz homeomorphisms. Let Ω be an open set of a Banach space E; let I : Ω → E denote the identity (inclusion)
map and let g : Ω → E be a Lipschitz map of constant k < 1. Then
1. Ω′ := (I+g)(Ω) is an open subset of E: precisely, for any x in Ω such that B(x, r) ⊂ Ω one has B((I+g)(x), r(1−k)) ⊂ Ω′;
2. I+g : Ω → Ω′ is a bi-Lipschitz homeomorphism;
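The constructive aspect of the theorem — iterate the map and the iterates converge to the unique fixed point — can be illustrated with a minimal sketch (the choice of cos and the starting point are illustrative assumptions):

```python
import math

def banach_fixed_point(f, x0, tol=1e-12, max_iter=1000):
    """Iterate x_{n+1} = f(x_n); for a contraction mapping this converges
    to the unique fixed point guaranteed by the Banach theorem."""
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("did not converge")

# cos is a contraction on [0, 1] (|cos'| = |sin| < 1 there), so the iteration
# converges to the unique solution of x = cos(x):
root = banach_fixed_point(math.cos, 0.5)
print(round(root, 6))  # 0.739085
```

The geometric convergence rate is governed by the Lipschitz constant k: each iteration shrinks the distance to the fixed point by at least a factor of k.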
of A. Then A must be bounded, since otherwise there exists a sequence xm in A with || xm || ≥ m for all m, and then every
subsequence is unbounded and therefore not convergent. Moreover, A must be closed, since from a noninterior point x in
the complement of A one can build an A-valued sequence converging to x. Thus the subsets A of Rn for which every
sequence in A has a subsequence converging to an element of A – i.e., the subsets which are sequentially compact in the
subspace topology – are precisely the closed and bounded sets.
This form of the theorem makes especially clear the analogy to the Heine–Borel theorem, which asserts that a subset
of Rn is compact if and only if it is closed and bounded. In fact, general topology tells us that a metrizable space is compact
if and only if it is sequentially compact, so that the Bolzano–Weierstrass and Heine–Borel theorems are essentially the
same.
The Bolzano–Weierstrass theorem is named after mathematicians Bernard Bolzano and Karl Weierstrass. It was actually
first proved by Bolzano in 1817 as a lemma in the proof of the intermediate value theorem. Some fifty years later the result
was identified as significant in its own right, and proved again by Weierstrass. It has since become an essential theorem
of analysis.
There are different important equilibrium concepts in economics, the proofs of the existence of which often require
variations of the Bolzano–Weierstrass theorem. One example is the existence of a Pareto efficient allocation. An
allocation is a matrix of consumption bundles for agents in an economy, and an allocation is Pareto efficient if no change
can be made to it which makes no agent worse off and at least one agent better off (here rows of the allocation matrix
must be rankable by a preference relation). The Bolzano–Weierstrass theorem allows one to prove that if the set of
allocations is compact and non-empty, then the system has a Pareto-efficient allocation.
Q.4 If f(x) = (x − 3) log x, then prove that the equation x log x = 3 − x is satisfied by at least one value of x lying
between 1 and 3.
Ans In calculus, Rolle's theorem essentially states that any real-valued differentiable function that attains equal values at
two distinct points must have a stationary point somewhere between them—that is, a point where the first derivative (the
slope of the tangent line to the graph of the function) is zero.
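For Q.4 above, Rolle's theorem applies to f(x) = (x − 3) log x on [1, 3], since f(1) = f(3) = 0; it guarantees a point c in (1, 3) with f′(c) = log c + (c − 3)/c = 0, which rearranges to c log c = 3 − c. A minimal numerical sketch of this conclusion (the bisection helper is illustrative, not part of the source):

```python
import math

# g has a root exactly where x·log x = 3 − x, i.e. where f'(x) = 0
# for f(x) = (x − 3)·log x.
def g(x: float) -> float:
    return x * math.log(x) - (3.0 - x)

def bisect(fn, a, b, tol=1e-10):
    """Simple bisection; requires fn(a) and fn(b) to have opposite signs."""
    fa = fn(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * fn(m) <= 0:
            b = m
        else:
            a, fa = m, fn(m)
    return 0.5 * (a + b)

# g(1) = −2 < 0 and g(3) = 3·log 3 > 0, so a root lies strictly in (1, 3):
c = bisect(g, 1.0, 3.0)
print(round(c, 4))
assert 1.0 < c < 3.0 and abs(g(c)) < 1e-8
```

The sign change of g at the endpoints is exactly the intermediate-value argument that Rolle's theorem formalises through f′.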
Since the proof for the standard version of Rolle's theorem and the generalization are very similar, we prove the
generalization.
The idea of the proof is to argue that if f(a) = f(b), then f must attain either a maximum or a minimum somewhere
between a and b, say at c, and the function must change from increasing to decreasing (or the other way around) at c. In
particular, if the derivative exists, it must be zero at c.
By assumption, f is continuous on [a,b], and by the extreme value theorem attains both its maximum and its minimum in
[a,b]. If these are both attained at the endpoints of [a,b], then f is constant on [a,b] and so the derivative of f is zero at
every point in (a,b).
Suppose then that the maximum is obtained at an interior point c of (a,b) (the argument for the minimum is very similar,
just consider −f ). We shall examine the right- and left-hand limits of the difference quotient at c separately.
For a real h such that c + h is in [a,b], the value f(c + h) is smaller than or equal to f(c) because f attains its maximum at c.
Therefore, for every h > 0, the quotient (f(c + h) − f(c))/h ≤ 0, and letting h → 0 from the right gives a right-hand limit ≤ 0. Similarly, for every h < 0 the quotient is ≥ 0, so the left-hand limit is ≥ 0. Since f is differentiable at c, the two one-sided limits agree, and hence f′(c) = 0.
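Applying this to Q.4: f(x) = (x − 3) log x vanishes at x = 1 (log 1 = 0) and at x = 3, so Rolle's theorem gives some c in (1, 3) with f′(c) = log c + (c − 3)/c = 0, which is exactly c log c = 3 − c. A quick numerical check by bisection (illustration only, not part of the formal proof):

```python
import math

# g is f'(x) for f(x) = (x - 3)*ln(x); a root of g in (1, 3) is the point
# promised by Rolle's theorem, and g(c) = 0 rearranges to c*ln(c) = 3 - c.
def g(x):
    return math.log(x) + (x - 3) / x

a, b = 1.000001, 2.999999      # g is negative near 1, positive near 3
for _ in range(60):            # plain bisection
    m = (a + b) / 2
    if g(a) * g(m) <= 0:
        b = m
    else:
        a = m
c = (a + b) / 2
print(c, c * math.log(c), 3 - c)   # c*ln(c) and 3 - c agree at c ≈ 1.86
```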
We can also generalize Rolle's theorem by requiring that f has more points with equal values and greater regularity.
Specifically, suppose that
the function f is n − 1 times continuously differentiable on the closed interval [a,b] and the nth derivative exists on
the open interval (a,b), and
there are n intervals given by a1 < b1 ≤ a2 < b2 ≤ . . .≤ an < bn in [a,b] such that f(ak) = f(bk) for every k from 1 to n.
Then there is a number c in (a,b) such that the nth derivative of f at c is zero.
The requirements concerning the nth derivative of f can be weakened as in the generalization above, giving the
corresponding (possibly weaker) assertions for the right- and left-hand limits defined above, with f^(n−1) in place of f.
Rolle's theorem is a property of differentiable functions over the real numbers, which are an ordered field. As such, it does
not generalize to other fields, but the following corollary does: if a real polynomial splits (has all its roots) over the real
numbers, then its derivative does as well – one may call this property of a field Rolle's property. More general fields do
not always have a notion of differentiable function, but they do have a notion of polynomials, which can be symbolically
differentiated. Similarly, more general fields may not have an order, but one has a notion of a root of a polynomial lying in
a field.
Thus Rolle's theorem shows that the real numbers have Rolle's property, and any algebraically closed field such as
the complex numbers has Rolle's property, but conversely the rational numbers do not – for example, x³ − x = x(x − 1)(x + 1) splits over the rational numbers, but its derivative 3x² − 1 does not.
Internal Assignment No.2
Code: MT-201
Paper Title: Real Analysis and Metric Space
Q1. Answer all the questions. 1x5=5
(i) Find the following limit:
lim_{x→0} (1 − cos x²) / (x² sin x²)
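As a numerical sanity check on (i) (illustration only, reading the garbled statement as lim_{x→0} (1 − cos x²)/(x² sin x²)): with u = x², 1 − cos u ≈ u²/2 and sin u ≈ u, so the limit should be 1/2.

```python
import math

def q(x):
    u = x * x
    return (1 - math.cos(u)) / (u * math.sin(u))

for x in (0.5, 0.2, 0.1):
    print(x, q(x))    # the values approach 0.5
```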
(ii) Determine whether the limit
lim_{x→3} f(x), where f(x) = |x − 3| / (x − 3), x ≠ 3,
exists.
ANS
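A quick numerical check, reading the statement as f(x) = |x − 3|/(x − 3) (the original is garbled, so this reading is an assumption): the two one-sided limits at 3 differ, so the limit does not exist.

```python
def f(x):
    return abs(x - 3) / (x - 3)

left  = [f(3 - h) for h in (0.1, 0.01, 0.001)]   # every value is -1
right = [f(3 + h) for h in (0.1, 0.01, 0.001)]   # every value is +1
print(left, right)   # left-hand limit -1, right-hand limit +1: no limit at 3
```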
(iii) Check whether the interval ]3, 7 ] and [8,12[ are equivalent or not.
ANS An equal temperament is a musical temperament, or a system of tuning, in which every pair of adjacent pitches is
separated by the same interval. In other words, the pitches of an equal temperament can be produced by repeating a
generating interval. Equal intervals also means equal ratios between the frequencies of any adjacent pair, and,
since pitch is perceived roughly as the logarithm of frequency, equal perceived "distance" from every note to its nearest
neighbor.
In equal temperament tunings, the generating interval is often found by dividing some larger desired interval, often
the octave (ratio 2/1), into a number of smaller equal steps (equal frequency ratios between successive notes).
For classical music and Western music in general, the most common tuning system for the past few hundred years has
been and remains twelve-tone equal temperament (also known as 12 equal temperament, 12-TET, or 12-ET), which
divides the octave into 12 parts, all of which are equal on a logarithmic scale. That resulting smallest interval, 1/12 the
width of an octave, is called a semitone or half step. In modern times, 12-TET is usually tuned relative to a standard pitch
of 440 Hz, called A440, meaning one pitch is tuned to A440, and all other pitches are some multiple of semitones away
from that in either direction, although the standard pitch has not always been 440 and has fluctuated and generally risen
over the past few hundred years.
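The 12-TET construction above reduces to a single formula: the pitch n semitones away from A440 has frequency 440 · 2^(n/12). A small sketch:

```python
# Frequency of the pitch n semitones above (or below, for negative n) A440
# in 12-tone equal temperament.
def freq(n, a4=440.0):
    return a4 * 2 ** (n / 12)

for n in range(13):
    print(n, round(freq(n), 2))   # 12 equal steps exactly double the frequency
```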
f′(x): 3, 0, −4, −3, 0, 1, 0 (rough slope estimates at the sampled points)
(Note that rough estimates are the best we can do; it is difficult to measure the slope of the tangent accurately without using a grid and a ruler, so we couldn't reasonably expect two people's estimates to agree. However, all that is asked for is a rough sketch of the derivative.) Plotting these points suggests the curve shown below.
∑_{n=1}^∞ n x^{n−1}, x ≠ 0
ANS Definition: An infinite series ∑a_n is an alternating series iff either a_n = (−1)^n |a_n| or a_n = (−1)^{n+1} |a_n| holds.
Examples: The alternating harmonic series ∑_{n=1}^∞ (−1)^{n+1}/n = 1 − 1/2 + 1/3 − 1/4 + ···. The following series is also alternating: ∑_{n=1}^∞ cos(nπ) n²/(n+1)! = ∑_{n=1}^∞ (−1)^n n²/(n+1)! = −1/2 + 4/6 − 9/24 + ···.
Theorem (Leibniz's test): If the sequence {a_n} satisfies 0 < a_n, a_{n+1} ≤ a_n, and a_n → 0, then the alternating series ∑_{n=1}^∞ (−1)^{n+1} a_n converges.
Proof: Write the partial sum s_{2n} in three ways:
s_{2n} = a_1 − a_2 + a_3 − a_4 + a_5 − ··· + a_{2n−1} − a_{2n}
s_{2n} = (a_1 − a_2) + (a_3 − a_4) + ··· + (a_{2n−1} − a_{2n})
s_{2n} = a_1 − (a_2 − a_3) − (a_4 − a_5) − ··· − (a_{2n−2} − a_{2n−1}) − a_{2n}.
The second expression implies s_{2n} ≤ s_{2(n+1)}, so the even partial sums are nondecreasing. The third expression says that s_{2n} ≤ a_1, so they are bounded above. Therefore the even partial sums converge, say s_{2n} → L. Since s_{2n+1} = s_{2n} + a_{2n+1} and a_n → 0, it follows that s_{2n+1} → L + 0 = L. We conclude that ∑(−1)^{n+1} a_n converges.
Show that the alternating harmonic series ∑_{n=1}^∞ (−1)^{n+1}/n converges. Solution: Write the series as ∑(−1)^{n+1} a_n with a_n = 1/n. The sequence {a_n} satisfies the hypotheses of the Leibniz test: a_n > 0; a_{n+1} < a_n; a_n → 0. We then conclude that ∑_{n=1}^∞ (−1)^{n+1}/n converges.
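A numerical illustration of the proof (not in the original answer): the even partial sums of the alternating harmonic series increase, the odd ones decrease, and both close in on the sum, which is known to be ln 2 ≈ 0.6931.

```python
import math

def partial_sum(n):
    # s_n for the alternating harmonic series
    return sum((-1) ** (k + 1) / k for k in range(1, n + 1))

evens = [partial_sum(2 * n) for n in (10, 100, 1000)]       # increasing
odds  = [partial_sum(2 * n + 1) for n in (10, 100, 1000)]   # decreasing
print(evens, odds, math.log(2))   # both squeeze toward ln 2
```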
Q. No.3: Define the local minimum or local maximum value of the function defined by
f(x) = 3 − 5x^3 + 5x^4 − x^5
ANS In mathematical analysis, the maxima and minima (the plurals of maximum and minimum) of a function, known
collectively as extrema (the plural of extremum), are the largest and smallest values of the function, either within a given
range (the local or relative extrema) or on the entire domain of a function (the global or absolute extrema).[1][2][3] Pierre de
Fermat was one of the first mathematicians to propose a general technique, adequality, for finding the maxima and
minima of functions.
As defined in set theory, the maximum and minimum of a set are the greatest and least elements in the set, respectively.
Unbounded infinite sets, such as the set of real numbers, have no minimum or maximum.
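Applied to the given function (a sketch, not part of the original answer): f′(x) = −15x² + 20x³ − 5x⁴ = −5x²(x − 1)(x − 3), so the critical points are x = 0, 1, 3. The sign of f′ shows a local minimum f(1) = 2 and a local maximum f(3) = 30, while f′ is negative on both sides of x = 0, so x = 0 is not an extremum.

```python
def f(t):
    return 3 - 5 * t**3 + 5 * t**4 - t**5

def fp(t):
    # f'(t) = -15 t^2 + 20 t^3 - 5 t^4 = -5 t^2 (t - 1)(t - 3)
    return -15 * t**2 + 20 * t**3 - 5 * t**4

# sign of f' just before and just after each critical point decides min/max
for c in (0.0, 1.0, 3.0):
    print(c, fp(c - 0.01), fp(c + 0.01))
print(f(1.0), f(3.0))   # local minimum value 2.0, local maximum value 30.0
```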
SECTION-A
1) (x+y) (dx-dy) = dx+ dy
ANS Euler's notation uses a differential operator, denoted D, which is prefixed to the function, so that the derivatives of
a function f are denoted Df, D²f, D³f, and so on.
5) Solve : y² + x²(dy/dx)² − 2xy(dy/dx) = 4(dy/dx)²
ANS
SECTION-B
Q.3 Solve : x² (d²y/dx²) + 4x (dy/dx) + 2y = e^x
ANS
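Reading the garbled question as the Cauchy–Euler equation x²y″ + 4xy′ + 2y = eˣ (an assumption): the trial y = x^m for the homogeneous part gives m(m − 1) + 4m + 2 = (m + 1)(m + 2) = 0, so y_h = C₁/x + C₂/x², and y_p = eˣ/x² works as a particular solution, giving y = C₁/x + C₂/x² + eˣ/x². A numerical verification sketch (not from the original answer sheet):

```python
import math

def y(t, c1=2.0, c2=-3.0):
    # proposed general solution: y = c1/x + c2/x^2 + exp(x)/x^2
    return c1 / t + c2 / t**2 + math.exp(t) / t**2

def residual(t, h=1e-4):
    # x^2 y'' + 4 x y' + 2 y - e^x, using central-difference derivatives
    yp  = (y(t + h) - y(t - h)) / (2 * h)
    ypp = (y(t + h) - 2 * y(t) + y(t - h)) / h**2
    return t**2 * ypp + 4 * t * yp + 2 * y(t) - math.exp(t)

print([residual(t) for t in (0.5, 1.0, 2.0)])   # all close to 0
```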
Q.4 Solve : (D^5 − D^4 + 2D^3 − 2D^2 + D − 1) y = cos x
ANS
Internal Assignment No. 2
SECTION-A
ANS A 1. A differential equation is a mathematical equation that relates some function with its derivatives. In
applications, the functions usually represent physical quantities, the derivatives represent their rates of change, and the
equation defines a relationship between the two. Because such relations are extremely common, differential equations
play a prominent role in many disciplines including engineering, physics, economics, and biology.
In pure mathematics, differential equations are studied from several different perspectives, mostly concerned with their
solutions—the set of functions that satisfy the equation. Only the simplest differential equations are solvable by explicit
formulas; however, some properties of solutions of a given differential equation may be determined without finding their
exact form.
If a self-contained formula for the solution is not available, the solution may be numerically approximated using
computers. The theory of dynamical systems puts emphasis on qualitative analysis of systems described by differential
equations, while many numerical methods have been developed to determine solutions with a given degree of accuracy.
2. These equations, containing a derivative, involve rates of change – so often appear in an engineering or scientific
context.
Solving the equation involves integration.
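For instance (an illustrative sketch, not one of the original examples): the growth equation dy/dx = ky separates to dy/y = k dx, and integrating both sides gives y = C e^{kx}. A quick check that the candidate solution really has the required slope:

```python
import math

k, C = 0.7, 2.0

def y(t):
    # candidate solution of dy/dt = k*y obtained by integration
    return C * math.exp(k * t)

h = 1e-6
for t in (0.0, 1.0, 2.0):
    slope = (y(t + h) - y(t - h)) / (2 * h)   # numerical dy/dt
    print(t, slope, k * y(t))                  # the two columns agree
```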
SECTION-B (Word limits 500)
Internal Assignment No. 1
Note: Answer any two questions. Each question carries 5 marks (Word limits 500)
tan - y - x = -1 • (y + x - tan)
Equation at the end of step 2 :
(((1+(y2))•d)•x)-(-d•(y+x-tan)•y) = 0
Step 3 :
Raise y to the 2nd power
Exponentiation :
Equation at the end of step 3 :
(((1+y2)•d)•x)--yd•(y+x-tan) = 0
Step 4 :
Multiply y2+1 by d
Polynomial Roots Calculator :
3.1 Find roots (zeroes) of : F(y) = y2+1
Polynomial Roots Calculator is a set of methods aimed at finding values of y for which F(y)=0
Rational Roots Test is one of the above-mentioned tools. It would only find rational roots, that is, numbers y which can
be expressed as the quotient of two integers
The Rational Root Theorem states that if a polynomial has a zero at a rational number P/Q (in lowest terms), then P is a factor of the Trailing
Constant and Q is a factor of the Leading Coefficient
d • (y2x + y2 + yx - ytan + x)
Equation at the end of step 6 :
d • (y2x + y2 + yx - ytan + x) = 0
Step 7 :
Solve d•(y2x+y2+yx-ytan+x) = 0
Theory - Roots of a product :
5.1 A product of several terms equals zero.
When a product of two or more terms equals zero, then at least one of the terms must be zero.
In other words, we are going to solve as many equations as there are terms in the product
Solution is d = 0
Solving a Single Variable Equation :
5.3 Solve y2x+y2+yx-ytan+x = 0
In this type of equation, which has more than one variable (unknown), you have to specify for which variable you want
the equation solved.
Formulas such as (3) are convenient for computing tables of a given function if the point of interest is at the beginning or the end of
the table, since in this case the addition of one or several nodes, caused by the wish to increase the accuracy of
the approximation, does not lead to a repetition of the whole work already done, as it does in computations with Lagrange's
formula.
iv. Evaluate Δ³[ (1 − x)(1 − 2x)(1 − 3x) ].
ANS
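Assuming the operator here is the forward difference Δ with unit step (the usual convention in this finite-difference context): (1 − x)(1 − 2x)(1 − 3x) = 1 − 6x + 11x² − 6x³ is a cubic with leading coefficient −6, so Δ³ applied to it is the constant −6 · 3! = −36. A quick check:

```python
def f(x):
    return (1 - x) * (1 - 2 * x) * (1 - 3 * x)

def delta(seq):
    # forward difference of a sequence of samples (unit step h = 1)
    return [b - a for a, b in zip(seq, seq[1:])]

vals = [f(x) for x in range(8)]
third = delta(delta(delta(vals)))
print(third)    # every entry is -36
```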
v. Define spline interpolation .
ANS In the mathematical field of numerical analysis, spline interpolation is a form of interpolation where the interpolant
is a special type of piecewise polynomial called a spline. Spline interpolation is often preferred over polynomial
interpolation because the interpolation error can be made small even when using low degree polynomials for the spline.
Spline interpolation avoids the problem of Runge's phenomenon, in which oscillation can occur between points when
interpolating using high degree polynomials.
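Runge's phenomenon, mentioned above, can be demonstrated directly (a small stdlib-only sketch, not part of the original answer): interpolating f(x) = 1/(1 + 25x²) at 11 equally spaced nodes on [−1, 1] with a single degree-10 polynomial produces large oscillations near the endpoints.

```python
def f(x):
    return 1 / (1 + 25 * x * x)

nodes = [-1 + 2 * i / 10 for i in range(11)]   # 11 equispaced nodes
fvals = [f(t) for t in nodes]

def lagrange(x):
    # evaluate the degree-10 Lagrange interpolant through (nodes, fvals) at x
    total = 0.0
    for i, xi in enumerate(nodes):
        term = fvals[i]
        for j, xj in enumerate(nodes):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

grid = [-1 + 2 * i / 400 for i in range(401)]
err = max(abs(lagrange(t) - f(t)) for t in grid)
print(err)    # of order 1: the polynomial oscillates badly near x = ±1
```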
Note: Answer any two questions. Each question carries 5 marks (Word limits 500)
Q.2 Find a positive root of x^4 − x = 10 using the Newton–Raphson method.
ANS
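A sketch of the iteration (the answer itself is blank in the original): with f(x) = x⁴ − x − 10 and f′(x) = 4x³ − 1, Newton–Raphson updates x_{n+1} = x_n − f(x_n)/f′(x_n); starting from x₀ = 2 it converges quickly to the positive root x ≈ 1.8556.

```python
def f(x):
    return x**4 - x - 10

def fp(x):
    return 4 * x**3 - 1

x = 2.0                      # initial guess
for _ in range(20):          # Newton-Raphson iteration
    x = x - f(x) / fp(x)
print(x, f(x))               # root near 1.8556, residual near 0
```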
Q4 Solve the differential equation
dy/dx = x² − y, y(0) = 1
by Picard's method of successive approximation to get the value of y at x = 1.
ANS
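A sketch of the successive approximations (the answer is blank in the original; polynomials are represented as coefficient lists): y₀ = 1 and y_{n+1}(x) = 1 + ∫₀ˣ (t² − y_n(t)) dt. The exact solution is y = x² − 2x + 2 − e^{−x}, so y(1) = 1 − e^{−1} ≈ 0.6321, and the iterates settle on that value.

```python
import math

def picard_step(coeffs):
    # y_{n+1}(x) = 1 + integral_0^x (t^2 - y_n(t)) dt
    integrand = [0.0, 0.0, 1.0]              # t^2
    for k, c in enumerate(coeffs):           # minus y_n(t)
        while len(integrand) <= k:
            integrand.append(0.0)
        integrand[k] -= c
    # integrate term by term, then add the initial condition y(0) = 1
    return [1.0] + [c / (k + 1) for k, c in enumerate(integrand)]

y = [1.0]                                    # y_0(x) = 1
for _ in range(12):
    y = picard_step(y)

y1 = sum(y)                                  # evaluate the polynomial at x = 1
print(y1, 1 - math.exp(-1))                  # both approximately 0.6321
```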
Ans
e. 1 kg of water (at 30 °C) is mixed with 2 kg of water (at 0 °C); calculate the change in entropy.
ANS
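A worked sketch for (e), assuming a constant specific heat c = 4186 J/(kg·K) for water (the original answer is blank): the final temperature is (1·30 + 2·0)/3 = 10 °C, and ΔS = Σ m·c·ln(T_f/T_i) over the two masses. The result is positive, as the second law requires for irreversible mixing.

```python
import math

c = 4186.0                  # specific heat of water, J/(kg*K), assumed constant
m1, T1 = 1.0, 303.15        # 1 kg at 30 degrees C
m2, T2 = 2.0, 273.15        # 2 kg at  0 degrees C

Tf = (m1 * T1 + m2 * T2) / (m1 + m2)   # 283.15 K = 10 degrees C
dS = c * (m1 * math.log(Tf / T1) + m2 * math.log(Tf / T2))
print(Tf - 273.15, dS)      # 10.0 degrees C, dS about +15.4 J/K
```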
Properties of water–NaCl mixtures [15]
NaCl, wt%   Teq, °C
23.3        −21.1
23.7        −17.3
24.9        −11.1
26.1        −2.7
26.28       0
26.32       10
26.41       20
26.45       25
26.52       30
26.67       40
26.84       50
27.03       60
27.25       70
27.5        80
27.78       90
28.05       100
NOTE: Answer any two questions. Each question carries 5 marks (1×5=5 marks). (Word limit 500)
Attempt any four questions.
Q.2 What is a Carnot engine? Derive the expression for the efficiency of the Carnot engine.
ANS A Carnot heat engine[2] is an engine that operates on the reversible Carnot cycle. The basic model for this engine
was developed by Nicolas Léonard Sadi Carnot in 1824. The Carnot engine model was graphically expanded upon
by Benoît Paul Émile Clapeyron in 1834 and mathematically elaborated upon by Rudolf Clausius in 1857 from which the
concept of entropy emerged.
Every thermodynamic system exists in a particular state. A thermodynamic cycle occurs when a system is taken through a
series of different states, and finally returned to its initial state. In the process of going through this cycle, the system may
perform work on its surroundings, thereby acting as a heat engine.
A heat engine acts by transferring energy from a warm region to a cool region of space and, in the process, converting
some of that energy to mechanical work. The cycle may also be reversed. The system may be worked upon by an
external force, and in the process, it can transfer thermal energy from a cooler system to a warmer one, thereby acting as
a refrigerator or heat pump rather than a heat engine.
The previous image shows the original piston-and-cylinder diagram used by Carnot in discussing his ideal engines. The
figure at right shows a block diagram of a generic heat engine, such as the Carnot engine. In the diagram, the “working
body” (system), a term introduced by Clausius in 1850, can be any fluid or vapor body through which heat Q can be
introduced or transmitted to produce work. Carnot had postulated that the fluid body could be any substance capable of
expansion, such as vapor of water, vapor of alcohol, vapor of mercury, a permanent gas, or air, etc. Although, in these
early years, engines came in a number of configurations, typically QH was supplied by a boiler, wherein water was boiled
over a furnace; QC was typically supplied by a stream of cold flowing water in the form of a condenser located on a
separate part of the engine. The output work W here is the movement of the piston as it is used to turn a crank-arm, which
was then typically used to turn a pulley so to lift water out of flooded salt mines. Carnot defined work as “weight lifted
through a height”.
The Carnot cycle when acting as a heat engine consists of the following steps:
1. Reversible isothermal expansion of the gas at the "hot" temperature, TH (isothermal heat addition or
absorption). During this step (1 to 2 on Figure 1, A to B in Figure 2) the gas is allowed to expand, and it does
work on the surroundings. The temperature of the gas does not change during the process, and thus the
expansion is isothermal. The gas expansion is propelled by absorption of heat energy Q1 and of entropy
from the high-temperature reservoir.
2. Isentropic (reversible adiabatic) expansion of the gas (isentropic work output). For this step (2 to 3 on
Figure 1, B to C in Figure 2) the piston and cylinder are assumed to be thermally insulated, thus they neither gain
nor lose heat. The gas continues to expand, doing work on the surroundings, and losing an equivalent amount of
internal energy. The gas expansion causes it to cool to the "cold" temperature, TC. The entropy remains
unchanged.
3. Reversible isothermal compression of the gas at the "cold" temperature, TC. (isothermal heat rejection) (3
to 4 on Figure 1, C to D on Figure 2) Now the surroundings do work on the gas, causing an amount of heat
energy Q2 and of entropy to flow out of the gas to the low temperature reservoir. (This is the same amount
of entropy absorbed in step 1.)
4. Isentropic compression of the gas (isentropic work input). (4 to 1 on Figure 1, D to A on Figure 2) Once
again the piston and cylinder are assumed to be thermally insulated.
During this step, the surroundings do work on the gas, increasing its internal energy and compressing it, causing the
temperature to rise to TH. The entropy remains unchanged. At this point the gas is in the same state as at the start of step
1.
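The efficiency asked for in Q.2 follows from the four steps above: for an ideal gas the heats exchanged in the two isothermal steps are QH = nR·TH·ln(V2/V1) and QC = nR·TC·ln(V3/V4), and the two adiabatic steps force V2/V1 = V3/V4, so η = W/QH = 1 − QC/QH = 1 − TC/TH. A numerical sketch (the reservoir temperatures and volume ratio are made-up illustrative values):

```python
import math

n, R = 1.0, 8.314            # one mole of ideal gas
TH, TC = 500.0, 300.0        # reservoir temperatures, K (illustrative)
V1, V2 = 1.0, 3.0            # isothermal expansion ratio (illustrative)

QH = n * R * TH * math.log(V2 / V1)   # heat absorbed at TH
QC = n * R * TC * math.log(V2 / V1)   # heat rejected at TC (same volume ratio)
W = QH - QC                           # net work per cycle
eta = W / QH
print(eta, 1 - TC / TH)               # both equal 0.4
```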
The Clausius–Clapeyron relation reads dP/dT = L/(T Δv) = Δs/Δv, where dP/dT is the slope of the tangent to the coexistence curve at any point, L is the specific latent heat, T is
the temperature, Δv is the specific volume change of the phase transition, and Δs is the entropy change of the
phase transition.
Q.4 What is the concept of liquid helium I and II? Explain the abnormal properties of liquid He.
B…. Prove that C² = 2kT/M.
Ans In proof theory, the semantic tableau (French pronunciation: [ta'blo]; singular: tableau; plural: tableaux), also
called truth tree, is a decision procedure for sentential and related logics, and a proof procedure for formulas of first-order
logic. The tableau method can also determine the satisfiability of finite sets of formulas of various logics. It is the most
popular proof procedure for modal logics (Girle 2000). The method of semantic tableaux was invented by the Dutch
logician Evert Willem Beth (Beth 1955) and simplified, for classical logic, by Raymond Smullyan (Smullyan 1968, 1995). It
is Smullyan's simplification, "one-sided tableaux", that is described below. Smullyan's method has been generalized to
arbitrary many-valued propositional and first-order logics by Walter Carnielli (Carnielli 1987).[1] Tableaux can be intuitively
seen as sequent systems upside-down. This symmetrical relation between tableaux and sequent systems was formally
established in (Carnielli 1991).[2]
An analytic tableau has, for each node, a subformula of the formula at the origin. In other words, it is a tableau satisfying
the subformula property.
S = kB ln W (1)
where kB is the Boltzmann constant (also written simply as k), which is equal to 1.38065 × 10−23 J/K, and W is the number of microstates.
In short, the Boltzmann formula shows the relationship between entropy and the number of ways
the atoms or molecules of a thermodynamic system can be arranged. In 1934, Swiss physical chemist Werner
Kuhn successfully derived a thermal equation of state for rubber molecules using Boltzmann's formula, which has
since come to be known as the entropy model of rubber.
The equation was originally formulated by Ludwig Boltzmann between 1872 and 1875, but later put into its current form
by Max Planck in about 1900.[2][3] To quote Planck, "the logarithmic connection between entropy and probability was first
stated by L. Boltzmann in his kinetic theory of gases".
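A toy illustration of the Boltzmann formula (not in the original text): for N independent two-state units, W = 2^N, so S = kB ln W = N·kB·ln 2 grows linearly with system size, which is why entropy is extensive.

```python
import math

kB = 1.38065e-23              # Boltzmann constant, J/K

def S(W):
    # Boltzmann's formula S = kB ln W
    return kB * math.log(W)

for N in (1, 10, 100):
    print(N, S(2 ** N))       # equals N * kB * ln 2: entropy is extensive
```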
Note: Answer any two questions. Each question carries 5 marks (Word limits 500)
Q.5 Using the energy distribution function, prove that the mean kinetic energy of a system is always
proportional to temperature.
Ans The kinetic theory describes a gas as a large number of submicroscopic particles (atoms or molecules), all of
which are in constant, random motion. The rapidly moving particles constantly collide with each other and with the walls of
the container. Kinetic theory explains macroscopic properties of gases, such as pressure, temperature, viscosity, thermal
conductivity, and volume, by considering their molecular composition and motion. The theory posits that gas pressure is
due to the impacts, on the walls of a container, of molecules or atoms moving at different velocities.
Kinetic theory defines temperature in its own way, not identical with the thermodynamic definition.[1]
Under a microscope, the molecules making up a liquid are too small to be visible, but the jittering motion of pollen grains
or dust particles can be seen. Known as Brownian motion, it results directly from collisions between the grains or particles
and liquid molecules. As analyzed by Albert Einstein in 1905, this experimental evidence for kinetic theory is generally
seen as having confirmed the concrete material existence of atoms and molecules.
The theory for ideal gases makes the following assumptions:
The gas consists of very small particles known as molecules. This smallness of their size is such that the
total volume of the individual gas molecules added up is negligible compared to the volume of the smallest open ball
containing all the molecules. This is equivalent to stating that the average distance separating the gas particles is
large compared to their size.
These particles have the same mass.
The number of molecules is so large that statistical treatment can be applied.
These molecules are in constant, random, and rapid motion.
The rapidly moving particles constantly collide among themselves and with the walls of the container. All these
collisions are perfectly elastic. This means, the molecules are considered to be perfectly spherical in shape, and
elastic in nature.
Except during collisions, the interactions among molecules are negligible. (That is, they exert no forces on one
another.)
This implies:
1. Relativistic effects are negligible.
2. Quantum-mechanical effects are negligible. This means that the inter-particle distance is much larger than
the thermal de Broglie wavelength and the molecules are treated as classical objects.
3. Because of the above two, their dynamics can be treated classically. This means, the equations of motion of
the molecules are time-reversible.
The average kinetic energy of the gas particles depends only on the absolute temperature of the system. The
kinetic theory has its own definition of temperature, not identical with the thermodynamic definition.
The time during collision of molecule with the container's wall is negligible as compared to the time between
successive collisions.
Because they have mass, the gas molecules will be affected by gravity.
More modern developments relax these assumptions and are based on the Boltzmann equation. These can
accurately describe the properties of dense gases, because they include the volume of the molecules. The necessary
assumptions are the absence of quantum effects, molecular chaos and small gradients in bulk properties. Expansions
to higher orders in the density are known as virial expansions.
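A stdlib-only sanity check of the claim in Q.5 (an illustration, not a proof): for a classical ideal gas each velocity component is Gaussian with variance kT/m, so the sample mean of ½m|v|² should approach (3/2)kT. The mass value below is an assumed, nitrogen-like number.

```python
import math
import random

random.seed(1)
k = 1.380649e-23            # Boltzmann constant, J/K
T, m = 300.0, 6.6e-26       # temperature (K) and molecular mass (kg), assumed
sigma = math.sqrt(k * T / m)   # per-component velocity spread

N = 200_000
ke = 0.0
for _ in range(N):
    vx, vy, vz = (random.gauss(0.0, sigma) for _ in range(3))
    ke += 0.5 * m * (vx * vx + vy * vy + vz * vz)
mean_ke = ke / N
print(mean_ke / (1.5 * k * T))   # ratio close to 1: <E> = (3/2) k T
```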
Q.6 For an ideal gas, prove PV = nRT using velocity distribution statistics.
Q.7 What are the postulates of quantum physics? State Planck's distribution law.
Ans Quantum mechanics (QM; also known as quantum physics, or quantum theory), including quantum field theory,
is a fundamental branch of physics concerned with processes involving, for example, atoms and photons. In such
processes, said to be quantized, the action has been observed to be only in integer multiples of the Planck constant, a
physical quantity that is exceedingly, indeed perhaps ultimately, small. This is utterly inexplicable in classical physics.
Quantum mechanics gradually arose from Max Planck's solution in 1900 to the black-body radiation problem (reported
1859) and Albert Einstein's 1905 paper which offered a quantum-based theory to explain the photoelectric effect (reported
1887). Early quantum theory was significantly reformulated in the mid-1920s.
The mathematical formulations of quantum mechanics are abstract. A mathematical function, the wave function, provides
information about the probability amplitude of position, momentum, and other physical properties of a particle.
Important applications of quantum mechanical theory include superconducting magnets, LEDs and the laser,
the transistor and semiconductors such as the microprocessor, medical and research imaging such as MRI and electron
microscopy, and explanations for many biological and physical phenomena
Scientific inquiry into the wave nature of light began in the 17th and 18th centuries, when scientists such as Robert
Hooke, Christiaan Huygens and Leonhard Euler proposed a wave theory of light based on experimental observations. [1] In
1803, Thomas Young, an English polymath, performed the famous double-slit experiment that he later described in a
paper entitled On the nature of light and colours. This experiment played a major role in the general acceptance of
the wave theory of light.
In 1838, Michael Faraday discovered cathode rays. These studies were followed by the 1859 statement of the black-body
radiation problem by Gustav Kirchhoff, the 1877 suggestion by Ludwig Boltzmann that the energy states of a physical
system can be discrete, and the 1900 quantum hypothesis of Max Planck.[2] Planck's hypothesis that energy is radiated
and absorbed in discrete "quanta" (or energy elements) precisely matched the observed patterns of black-body radiation.
In 1896, Wilhelm Wien empirically determined a distribution law of black-body radiation, [3] known as Wien's law in his
honor. Ludwig Boltzmann independently arrived at this result by considerations of Maxwell's equations. However, it was
valid only at high frequencies and underestimated the radiance at low frequencies. Later, Planck corrected this model
using Boltzmann's statistical interpretation of thermodynamics and proposed what is now called Planck's law, which led to
the development of quantum mechanics.
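Planck's distribution law, which Q.7 asks for but the answer above never actually states, gives the spectral radiance of a black body at frequency ν and temperature T:

```latex
B_\nu(\nu, T) = \frac{2 h \nu^3}{c^2} \cdot \frac{1}{e^{h\nu / (k_B T)} - 1}
```

where h is Planck's constant, c the speed of light, and k_B Boltzmann's constant. In the low-frequency limit hν ≪ k_B T it reduces to the Rayleigh–Jeans form 2ν²k_B T/c², while Wien's law emerges as the high-frequency limit.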
Following Max Planck's solution in 1900 to the black-body radiation problem (reported 1859), Albert Einstein offered a
quantum-based theory to explain the photoelectric effect (1905, reported 1887). Around 1900-1910, the atomic theory and
the corpuscular theory of light[4] first came to be widely accepted as scientific fact; these latter theories can be viewed as
quantum theories of matter and electromagnetic radiation, respectively.
Lycée Napoléon, but was persuaded to abandon a legal education for the pursuit of science. A graduate of the École
Polytechnique, which he left in 1812 for the Military School at Metz, he was later a professor at the Sorbonne and at
the Collège de France. In 1840, he was elected as a member of the Académie Royale des Sciences. He was also an
astronomer of the Bureau des Longitudes.
Among Babinet's accomplishments are the 1827 standardization of the Ångström unit for measuring light using the red
cadmium line's wavelength, and the principle (Babinet's principle) that similar diffraction patterns are produced by two
complementary screens.
(vi) What is the difference between a He-Ne laser and a ruby laser?
Ans The first HeNe lasers emitted light at 1.15 μm, in the infrared spectrum, and were the first gas lasers. However, a
laser that operated at visible wavelengths was much more in demand, and a number of other neon transitions were
investigated to identify ones in which a population inversion can be achieved. The 633 nm line was found to have the
highest gain in the visible spectrum, making this the wavelength of choice for most HeNe lasers. However other visible as
well as infrared stimulated emission wavelengths are possible, and by using mirror coatings with their peak reflectance at
these other wavelengths, HeNe lasers could be engineered to employ those transitions; this includes visible lasers
appearing red, orange, yellow, and green.[1] Stimulated emissions are known from over 100 μm in the far infrared to
540 nm in the visible. Since visible transitions at wavelengths other than 633 nm have somewhat lower gain, these lasers
generally have lower output efficiencies and are more costly. The 3.39 μm transition has a very high gain but is prevented
from use in an ordinary HeNe laser (of a different intended wavelength) since the cavity and mirrors are lossy at that
wavelength. However in high power HeNe lasers having a particularly long cavity, superluminescence at 3.39 μm can
become a nuisance, robbing power from the stimulated emission medium, often requiring additional suppression. The
best-known and most widely used HeNe laser operates at a wavelength of 632.8 nm in the red part of the visible
spectrum. It was developed at Bell Telephone Laboratories in 1962, [2][3] 18 months after the pioneering demonstration at
the same laboratory of the first continuous infrared HeNe gas laser in December 1960
NOTE: Attempt any two. Each carries 5 marks. (Word limit 500)
Q2 How will you produce plane polarized light, circularly polarized light and elliptically polarized light?
Ans In electrodynamics, elliptical polarization is the polarization of electromagnetic radiation such that the tip of
the electric field vector describes an ellipse in any fixed plane intersecting, and normal to, the direction of propagation. An
elliptically polarized wave may be resolved into two linearly polarized waves in phase quadrature, with their polarization
planes at right angles to each other. Since the electric field can rotate clockwise or counterclockwise as it propagates,
elliptically polarized waves exhibit chirality.
Other forms of polarization, such as circular and linear polarization, can be considered to be special cases of elliptical
polarization.
In electrodynamics, circular polarization of an electromagnetic wave is a polarization in which the electric field of the
passing wave does not change strength but only changes direction in a rotary manner.
In electrodynamics the strength and direction of an electric field is defined by what is called an electric field vector. In the
case of a circularly polarized wave, as seen in the accompanying animation, the tip of the electric field vector, at a given
point in space, describes a circle as time progresses. If the wave is frozen in time, the electric field vector of the wave
describes a helix along the direction of propagation.
Circular polarization is a limiting case of the more general condition of elliptical polarization. The other special case is the
easier-to-understand linear polarization.
The phenomenon of polarization arises as a consequence of the fact that light behaves as a two-dimensional transverse
wave.
On the right is an illustration of the electric field vectors of a circularly polarized electromagnetic wave. [1] The electric field
vectors have a constant magnitude but their direction changes in a rotary manner. Given that this is a plane wave, each
vector represents the magnitude and direction of the electric field for an entire plane that is perpendicular to the axis.
Specifically, given that this is a circularly polarized plane wave, these vectors indicate that the electric field, from plane to
plane, has a constant strength while its direction steadily rotates. Refer to these two images in the plane wave article to
better appreciate this. This light is considered to be right-hand, clockwise circularly polarized if viewed by the receiver.
Since this is an electromagnetic wave, each electric field vector has a corresponding, but not illustrated, magnetic
field vector that is at a right angle to the electric field vector and proportional in magnitude to it. As a result, the magnetic
field vectors would trace out a second helix if displayed.
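As a small illustrative sketch (the amplitude and frequency below are assumed, not taken from the text), the defining property described above can be checked numerically: at a fixed point in space, the electric field vector of a circularly polarized wave keeps a constant magnitude while its direction rotates.

```python
import math

# Sketch: electric field of a circularly polarized wave at one point in space.
# E(t) = E0 * (cos(w*t), sin(w*t)) in the plane transverse to propagation.
E0 = 1.0                 # field amplitude (arbitrary units, assumed)
w = 2 * math.pi          # angular frequency for an assumed 1 Hz wave

def field(t):
    """Electric field vector (Ex, Ey) at time t."""
    return (E0 * math.cos(w * t), E0 * math.sin(w * t))

# The magnitude stays constant while the direction rotates:
for t in (0.0, 0.1, 0.25, 0.4):
    Ex, Ey = field(t)
    print(f"t={t:.2f}s  |E|={math.hypot(Ex, Ey):.3f}  "
          f"direction={math.degrees(math.atan2(Ey, Ex)):.1f} deg")
```

Plotting (Ex, Ey) against the propagation coordinate instead of time would trace out the helix mentioned above.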
Circular polarization is often encountered in the field of optics and in this section, the electromagnetic wave will be simply
referred to as light.
Q3 What is the principle of an optical fiber? Derive an expression for the acceptance angle of an optical fiber.
Ans An optical fiber (or optical fibre) is a flexible, transparent fiber made by drawing glass (silica) or plastic to a
diameter slightly thicker than that of a human hair.[1] Optical fibers are used most often as a means to transmit light
between the two ends of the fiber and find wide usage in fiber-optic communications, where they permit transmission over
longer distances and at higher bandwidths (data rates) than wire cables. Fibers are used instead of metal wires because
signals travel along them with less loss; in addition, fibers are immune to electromagnetic interference, a problem from
which metal wires suffer excessively.[2] Fibers are also used for illumination, and are wrapped in bundles so that they may
be used to carry images, thus allowing viewing in confined spaces, as in the case of a fiberscope.[3]
Specially designed fibers are also used for a variety of other applications, some of them being fiber optic
sensors and fiber lasers.[4]
Optical fibers typically include a transparent core surrounded by a transparent cladding material with a lower index of
refraction. Light is kept in the core by the phenomenon of total internal reflection, which causes the fiber to act as
a waveguide.[2] Fibers that support many propagation paths or transverse modes are called multi-mode fibers (MMF), while
those that support a single mode are called single-mode fibers (SMF). Multi-mode fibers generally have a wider core
diameter and are used for short-distance communication links and for applications where high power must be transmitted.
[citation needed]
Single-mode fibers are used for most communication links longer than 1,000 meters (3,300 ft).[citation needed]
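The acceptance-angle expression the question asks for can be sketched as follows (the refractive indices below are assumed typical values, not given in the text). A ray entering the core at half-angle θa from the axis refracts by Snell's law, n0 sin θa = n1 sin θr; total internal reflection at the core/cladding boundary requires sin θc = n2/n1, i.e. θr ≤ 90° − θc, so n0 sin θa ≤ n1 cos θc = √(n1² − n2²), the numerical aperture.

```python
import math

# Hedged sketch of the acceptance-angle derivation (indices assumed):
#   n0*sin(theta_a) = n1*sin(theta_r)          (Snell's law at the fiber face)
#   sin(theta_c)    = n2/n1                    (total internal reflection)
#   theta_r <= 90 - theta_c, so
#   n0*sin(theta_a) <= n1*cos(theta_c) = sqrt(n1**2 - n2**2) = NA

def acceptance_angle_deg(n1, n2, n0=1.0):
    """Maximum half-angle of the cone of rays the fiber accepts."""
    numerical_aperture = math.sqrt(n1**2 - n2**2)
    return math.degrees(math.asin(numerical_aperture / n0))

n_core, n_clad = 1.48, 1.46   # assumed silica core/cladding indices
print(f"NA = {math.sqrt(n_core**2 - n_clad**2):.3f}")
print(f"acceptance angle = {acceptance_angle_deg(n_core, n_clad):.1f} degrees")
```

For these assumed indices the fiber accepts a cone of roughly 14° half-angle; a larger index step widens the cone but also increases modal dispersion.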
An important aspect of fiber-optic communication is the extension of fiber-optic cables such that the losses
brought about by joining two different cables are kept to a minimum.[2] Joining lengths of optical fiber often proves to be
more complex than joining electrical wire or cable and involves careful cleaving of the fibers, perfect alignment of the fiber
cores, and the splicing of these aligned cores. For applications that demand a permanent connection, a mechanical
splice, which holds the ends of the fibers together mechanically, or a fusion splice, which uses heat to fuse the ends of the
fibers together, can be used. Temporary or semi-permanent connections are made by means of
specialized optical fiber connectors.[2]
The field of applied science and engineering concerned with the design and application of optical fibers is known as fiber
optics.
Guiding of light by refraction, the principle that makes fiber optics possible, was first demonstrated by Daniel
Colladon and Jacques Babinet in Paris in the early 1840s. John Tyndall included a demonstration of it in his public
lectures in London, 12 years later.[5] Tyndall also wrote about the property of total internal reflection in an introductory book
about the nature of light in 1870:
When the light passes from air into water, the refracted ray is bent towards the perpendicular... When the ray passes from
water to air it is bent from the perpendicular... If the angle which the ray in water encloses with the perpendicular to the
surface be greater than 48 degrees, the ray will not quit the water at all: it will be totally reflected at the surface.... The
angle which marks the limit where total reflection begins is called the limiting angle of the medium. For water this angle is
48°27′, for flint glass it is 38°41′, while for diamond it is 23°42′.[6][7]
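Tyndall's limiting angles follow from the critical-angle relation sin θc = n2/n1. As a quick check (the refractive indices below are assumed modern values, not from the text, so small deviations from Tyndall's figures are expected):

```python
import math

# Sketch: critical angle for light leaving a dense medium (index n) into
# air (index 1), sin(theta_c) = 1/n.  Indices are assumed modern values.
def critical_angle_deg(n):
    return math.degrees(math.asin(1.0 / n))

for name, n in (("water", 1.333), ("flint glass", 1.60), ("diamond", 2.417)):
    print(f"{name}: n={n}  critical angle = {critical_angle_deg(n):.1f} deg")
```

Water comes out near 48.6°, flint glass near 38.7°, and diamond near 24.4°, close to the quoted 48°27′, 38°41′ and 23°42′; the diamond discrepancy reflects the less accurate index available in 1870.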
Unpigmented human hairs have also been shown to act as an optical fiber. [8]
Practical applications, such as close internal illumination during dentistry, appeared early in the twentieth century. Image
transmission through tubes was demonstrated independently by the radio experimenter Clarence Hansell and the
television pioneer John Logie Baird in the 1920s. The principle was first used for internal medical examinations
by Heinrich Lamm in the following decade. Modern optical fibers, in which the glass fiber is coated with a transparent
cladding to offer a more suitable refractive index, appeared later in the same decade.
Internal Assignment No. 1
1. The incident light is polarized with its electric field perpendicular to the plane containing the incident, reflected,
and refracted rays. This plane is called the plane of incidence; it is the plane of the diagram below. The light is
said to be s-polarized, from the German senkrecht (perpendicular).
2. The incident light is polarized with its electric field parallel to the plane of incidence. Such light is described as p-
polarized, from parallel.
3. In this treatment, the coefficient r is the ratio of the reflected wave's complex electric field amplitude to that of the
incident wave. The coefficient t is the ratio of the transmitted wave's electric field amplitude to that of the incident
wave. The light is split into s and p polarizations as defined above.
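The coefficients described in the list above are the Fresnel reflection coefficients. A hedged sketch (the function name and the indices are illustrative, not from the source) shows the standard forms for r in both polarizations, and checks that the p-polarized reflection vanishes at Brewster's angle, arctan(n2/n1):

```python
import math

# Sketch of the Fresnel reflection coefficients for light going from
# index n1 into index n2; theta_i in radians.  Illustrative only.
def fresnel_r(n1, n2, theta_i):
    theta_t = math.asin(n1 * math.sin(theta_i) / n2)   # Snell's law
    r_s = (n1 * math.cos(theta_i) - n2 * math.cos(theta_t)) / \
          (n1 * math.cos(theta_i) + n2 * math.cos(theta_t))
    r_p = (n2 * math.cos(theta_i) - n1 * math.cos(theta_t)) / \
          (n2 * math.cos(theta_i) + n1 * math.cos(theta_t))
    return r_s, r_p

# At Brewster's angle the p-polarized reflection is essentially zero:
brewster = math.atan(1.5 / 1.0)        # air (n=1) to glass (n=1.5, assumed)
r_s, r_p = fresnel_r(1.0, 1.5, brewster)
print(f"at Brewster's angle: r_s={r_s:.3f}, r_p={r_p:.2e}")
```

This is why reflected glare is predominantly s-polarized, and why polarizing sunglasses are oriented to block it.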
4) Write any four differences between the common emitter, common base and common collector configurations.
Ans In electronics, a common emitter amplifier is one of three basic single-stage bipolar-junction-transistor (BJT)
amplifier topologies, typically used as a voltage amplifier.
In this circuit the base terminal of the transistor serves as the input, the collector is the output, and the emitter
is common to both (for example, it may be tied to ground reference or a power supply rail), hence its name. The
analogous field-effect transistor circuit is the common source amplifier, and the analogous tube circuit is the common
cathode amplifier.
Common emitter amplifiers produce an inverted output and can have a very high gain that may vary widely from
one transistor to the next. The gain is a strong function of both temperature and bias current, and so the actual gain is
somewhat unpredictable. Stability is another problem associated with such high-gain circuits, due to any
unintentional positive feedback that may be present.
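The bias dependence mentioned above can be made concrete with the standard small-signal approximation (a textbook sketch, not taken from the source): the common-emitter voltage gain is roughly −gm·RC, with transconductance gm = IC/VT, so the gain tracks both the bias current and (through VT = kT/q) the temperature.

```python
# Sketch (assumed small-signal model, illustrative component values):
# common-emitter voltage gain Av ~ -g_m * R_C, where g_m = I_C / V_T.
def ce_voltage_gain(ic_amps, rc_ohms, vt_volts=0.025):
    gm = ic_amps / vt_volts          # transconductance in siemens
    return -gm * rc_ohms             # inverted output, hence the minus sign

# Doubling the bias current doubles the magnitude of the gain:
print(ce_voltage_gain(1e-3, 5000))   # about -200
print(ce_voltage_gain(2e-3, 5000))   # about -400
```

This is one reason practical designs add emitter degeneration or negative feedback: they trade raw gain for a value set by resistor ratios rather than by bias and temperature.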
Note: Answer any two questions. Each question carries 5 marks (Word limits 500)
Q.2 Write down the Barkhausen criteria for sustained oscillation and explain the Hartley oscillator.
Ans In electronics, the Barkhausen stability criterion is a mathematical condition to determine when a linear electronic
circuit will oscillate.[1][2][3] It was put forth in 1921 by the German physicist Heinrich Georg Barkhausen (1881–1956).[4] It is
widely used in the design of electronic oscillators, and also in the design of general negative feedback circuits such as op
amps, to prevent them from oscillating.
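In its usual statement, the criterion requires the loop gain Aβ around the feedback loop to have magnitude exactly 1 and a total phase shift of 0 (mod 360°). A hedged sketch (illustrative values, not from the source), including the standard resonance formula for the Hartley oscillator the question names:

```python
import cmath, math

# Sketch: Barkhausen criterion as a check on a complex loop gain A*beta.
def satisfies_barkhausen(loop_gain, tol=1e-6):
    magnitude_ok = abs(abs(loop_gain) - 1.0) < tol
    phase_ok = abs(cmath.phase(loop_gain)) < tol
    return magnitude_ok and phase_ok

print(satisfies_barkhausen(complex(1.0, 0.0)))   # unity gain, zero phase
print(satisfies_barkhausen(complex(0.5, 0.0)))   # loop gain too small
print(satisfies_barkhausen(complex(0.0, 1.0)))   # 90-degree phase shift

# Hartley oscillator (named in the question): the tank circuit is a tapped
# inductor L1 + L2 with a capacitor C; the assumed standard resonance is
# f = 1 / (2*pi*sqrt((L1 + L2) * C)).
def hartley_freq(l1, l2, c):
    return 1.0 / (2 * math.pi * math.sqrt((l1 + l2) * c))

print(f"{hartley_freq(100e-6, 100e-6, 100e-12) / 1e6:.2f} MHz")
```

In a real Hartley oscillator the tapped inductor provides the feedback fraction β, and the amplifier gain is set slightly above the Barkhausen minimum so oscillation starts, with amplitude limiting bringing the effective loop gain back to unity.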
In electronics, gain is a measure of the ability of a two-port circuit (often an amplifier) to increase the power or amplitude of
a signal from the input to the output port[1][2][3][4] by adding energy converted from some power supply to the signal. It is
usually defined as the mean ratio of the signal amplitude or power at the output port to the amplitude or power at the input
port.[1] It is often expressed using the logarithmic decibel (dB) units ("dB gain").[4] A gain greater than one (greater than
zero dB), that is, amplification, is the defining property of an active component or circuit, while a passive circuit will have a
gain of less than one.[4]
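The decibel conventions above can be sketched numerically (standard definitions; the function names are illustrative): power gain uses 10·log10, while voltage gain uses 20·log10, since power is proportional to voltage squared for equal impedances.

```python
import math

# Sketch of the standard dB gain definitions.
def power_gain_db(p_out, p_in):
    return 10 * math.log10(p_out / p_in)

def voltage_gain_db(v_out, v_in):
    return 20 * math.log10(v_out / v_in)

print(power_gain_db(2.0, 1.0))    # doubling power: about +3 dB
print(voltage_gain_db(2.0, 1.0))  # doubling voltage: about +6 dB
print(power_gain_db(0.5, 1.0))    # passive loss: negative dB
```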
The term gain alone is ambiguous, and can refer to the ratio of output to input voltage (voltage gain), current (current gain)
or electric power (power gain).[4] In the field of audio and general purpose amplifiers, especially operational amplifiers, the
term usually refers to voltage gain,[2] but in radio frequency amplifiers it usually refers to power gain. Furthermore, the term
gain is also applied in systems such as sensors where the input and output have different units; in such cases the gain
units must be specified, as in "5 microvolts per photon" for the responsivity of a photosensor. The "gain" of a bipolar
transistor normally refers to forward current transfer ratio, either hFE ("beta", the static ratio of Ic divided by Ib at some
operating point), or sometimes hfe (the small-signal current gain, the slope of the graph of Ic against Ib at a point).
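The distinction between the two current-gain figures can be sketched with illustrative numbers (not from the source): hFE is the static ratio Ic/Ib at one operating point, while hfe is the slope ΔIc/ΔIb between nearby points.

```python
# Sketch: static (hFE) vs small-signal (hfe) current gain of a BJT.
def h_fe_static(ic, ib):
    """hFE: static ratio Ic/Ib at an operating point."""
    return ic / ib

def h_fe_small_signal(ic1, ic2, ib1, ib2):
    """hfe: slope delta_Ic / delta_Ib between two nearby operating points."""
    return (ic2 - ic1) / (ib2 - ib1)

print(h_fe_static(10e-3, 50e-6))                       # about 200
print(h_fe_small_signal(10e-3, 12e-3, 50e-6, 60e-6))   # about 200
```

For an ideal transistor with constant beta the two figures coincide, as here; on a real device they differ because the Ic-Ib curve is not a straight line through the origin.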
The gain of an electronic device or circuit generally varies with the frequency of the applied signal. Unless otherwise
stated, the term refers to the gain for frequencies in the passband, the intended operating frequency range, of the
equipment. The term gain has a different meaning in antenna design; antenna gain is the ratio of the radiation intensity
from an antenna in a given direction to the radiation intensity that an ideal isotropic antenna would produce.
(Caption of an omitted figure: one of the first amateur superheterodyne receivers, built in 1920 even before Armstrong
published his paper. Due to the low gain of early triodes it required 9 tubes, with 5 IF amplification stages, and used an IF
of around 50 kHz.)
In a triode radio-frequency (RF) amplifier, if both the plate (anode) and grid are connected to resonant circuits tuned to the
same frequency, stray capacitive coupling between the grid and the plate will cause the amplifier to go into oscillation if
the stage gain is much more than unity. In early designs, dozens (in some cases over 100) low-gain triode stages had to
be connected in cascade to make workable equipment, which drew enormous amounts of power in operation and
required a team of maintenance engineers. The strategic value was so high, however, that the British Admiralty felt the
high cost was justified.
Armstrong realized that if radio direction-finding (RDF) receivers could be operated at a higher frequency, this would allow
better detection of enemy shipping. However, at that time, no practical "short wave" (defined then as any frequency above
500 kHz) amplifier existed, due to the limitations of existing triodes.
Q4 Explain the full-wave (bridge) rectifier.
Ans A rectifier is an electrical device that converts alternating current (AC), which periodically reverses direction,
to direct current (DC), which flows in only one direction. The process is known as rectification. Physically, rectifiers take
a number of forms, including vacuum tube diodes, mercury-arc valves, copper and selenium oxide
rectifiers, semiconductor diodes, silicon-controlled rectifiers and other silicon-based semiconductor switches. Historically,
even synchronous electromechanical switches and motors have been used. Early radio receivers, called crystal radios,
used a "cat's whisker" of fine wire pressing on a crystal of galena (lead sulfide) to serve as a point-contact rectifier or
"crystal detector".
Rectifiers have many uses, but are often found serving as components of DC power supplies and high-voltage direct
current power transmission systems. Rectification may serve in roles other than to generate direct current for use as a
source of power. As noted, detectors of radio signals serve as rectifiers. In gas heating systems, flame rectification is used
to detect the presence of a flame.
Because of the alternating nature of the input AC sine wave, the process of rectification alone produces a DC current that,
though unidirectional, consists of pulses of current. Many applications of rectifiers, such as power supplies for radio,
television and computer equipment, require a steady DC current (as would be produced by a battery). In these
applications the output of the rectifier is smoothed by an electronic filter (usually a capacitor) to produce a steady current.
More complex circuitry that performs the opposite function, converting DC to AC, is called an inverter.
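The size of the smoothing capacitor follows from a common approximation (a hedged sketch with assumed values, not from the source): the peak-to-peak ripple of a rectifier feeding a reservoir capacitor is roughly Vr = Iload/(fripple·C), where fripple is twice the mains frequency for a full-wave (bridge) rectifier and equal to it for a half-wave rectifier.

```python
# Sketch: ripple voltage of a capacitor-smoothed rectifier,
# V_r ~ I_load / (f_ripple * C).  Values below are illustrative.
def ripple_voltage(i_load, capacitance, mains_hz=50.0, full_wave=True):
    f_ripple = 2 * mains_hz if full_wave else mains_hz
    return i_load / (f_ripple * capacitance)

print(ripple_voltage(0.1, 1000e-6))                   # 0.1 A into 1000 uF
print(ripple_voltage(0.1, 1000e-6, full_wave=False))  # half-wave: twice the ripple
```

The factor of two in ripple frequency is the practical payoff of full-wave rectification: for the same load and capacitor, the bridge circuit halves the ripple.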
Before the development of silicon semiconductor rectifiers, vacuum tube thermionic diodes and copper oxide- or
selenium-based metal rectifier stacks were used.[1] With the introduction of semiconductor electronics, vacuum tube
rectifiers became obsolete, except for some enthusiasts of vacuum tube audio equipment. For power rectification from
very low to very high current, semiconductor diodes of various types (junction diodes, Schottky diodes, etc.) are widely
used.
Other devices that have control electrodes as well as acting as unidirectional current valves are used where more than
simple rectification is required—e.g., where variable output voltage is needed. High-power rectifiers, such as those used
in high-voltage direct current power transmission, employ silicon semiconductor devices of various types. These
are thyristors or other controlled switching solid-state switches, which effectively function as diodes to pass current in only
one direction.
Rectifier circuits may be single-phase or multi-phase (three being the most common number of phases). Most low power
rectifiers for domestic equipment are single-phase, but three-phase rectification is very important for industrial applications
and for the transmission of energy as DC (HVDC).
In half wave rectification of a single-phase supply, either the positive or negative half of the AC wave is passed, while the
other half is blocked. Because only one half of the input waveform reaches the output, mean voltage is lower. Half-wave
rectification requires a single diode in a single-phase supply, or three in a three-phase supply. Rectifiers yield a
unidirectional but pulsating direct current; half-wave rectifiers produce far more ripple than full-wave rectifiers, and much
more filtering is needed to eliminate harmonics of the AC frequency from the output.
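The lower mean voltage of half-wave rectification can be verified with a small numerical sketch (ideal diodes assumed, illustrative only): for a sine input of peak Vp, the mean output is Vp/π for half-wave and 2Vp/π for full-wave rectification.

```python
import math

# Sketch: mean output of half-wave vs full-wave rectification of a sine
# input with ideal (zero-drop) diodes, averaged over one full cycle.
def mean_output(v_peak=1.0, full_wave=True, samples=10000):
    total = 0.0
    for k in range(samples):
        v = v_peak * math.sin(2 * math.pi * k / samples)
        total += abs(v) if full_wave else max(v, 0.0)
    return total / samples

print(f"half-wave mean: {mean_output(full_wave=False):.4f}")  # ~ Vp/pi
print(f"full-wave mean: {mean_output(full_wave=True):.4f}")   # ~ 2*Vp/pi
```

The full-wave mean is exactly twice the half-wave mean, and the full-wave output's lowest ripple component sits at twice the input frequency, which is why it is so much easier to filter.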