

(A Science Magazine)

National Science Day

04th March 2019

Department of Physics
Faculty of Science
Sri Chandrasekharendra Saraswathi Viswa Mahavidyalaya
(Deemed to be University u/s 3 of UGC Act 1956)
(Accredited with “A” grade by NAAC)
Enathur, Kanchipuram – 631 561. Tamilnadu, India
Dean, Faculty of Science

The National Science Day is celebrated all over the country on 28th February every
year to mark the discovery of the Raman Effect by the great Indian scientist Sir C.V. Raman
in 1928. In connection with this, the Department of Physics has organized many events for the
students on the theme "Science for the people and people for Science".

It is a great pleasure to congratulate the Department of Physics for organizing "National Science
Day" with great enthusiasm. The attempt taken to encourage the students and staff of our
university to write articles on science, and to publish those articles in a science magazine,
"Vijnana Veeksha", is quite a great achievement. The articles are written on various interesting
topics, and this is one of the ways to inculcate a scientific temper in the minds of young students
and researchers.

My best wishes to the organizing members of the National Science Day 2019 and wish
them all success.

Sl.No Contents Page

1 Upcoming Technology Breakthroughs & Future Scientific New Inventions in 2019 1
Shyam Mohan J S, Assistant Professor, Dept. Of CSE,SCSVMV
2 3D Printing – Technology Ahead 4
T.Dinesh Kumar, Assistant Professor, Dept. of ECE, SCSVMV
3 IoT for Highly-Integrated Wireless Technology 5
M.A.Archana, Assistant Professor, Dept. of ECE, SCSVMV
4 Sensors to Improve Early Warning Tsunami Systems 7
Dr. K. Umapathy, Associate Professor, Department of ECE, SCSVMV
5 The Physics of Sports 8
C.Suvasini, Department of Physics, SCSVMV
6 Role of Mathematics in Nature 12
Dr. J. Sengamalaselvi ,Asst.Professor , Department of Mathematics, SCSVMV
7 Science In Ancient India 14
Dr. Sujatha Raghavan, Dept. of Sanskrit & Indian Culture, SCSVMV
8 Knowledge engineering in Artificial Intelligence 19
Dr. R. Poorvadevi, Assistant Professor, Department of CSE, SCSVMV
9 Science Behind Visiting Temples 22
M.Vetrivel, Associate Professor, Department of Mechanical Engg., SCSVMV
10 Role of Science Fiction in Modern Science and Technology 27
K.Rajalakshmi,Assistant Professor, Department of Physics, SCSVMV
11 Balancing Chemical Equation Using a Matrix 31
R.Malathi, Asst.Prof. Department of Mathematics, SCSVMV
12 Anti-Gravity 34
Dr.C.K.Gomathy,Assistant Professor, Department of CSE, SCSVMV
13 Block Chain Technology 36
M.Thirunavukkarasu, Assistant Professor, Department of CSE, SCSVMV
14 Selenium Nanoparticles As Antibacterial Agents 37
T.Lakshmibai, Assistant Professor, Department of EIE, SCSVMV
15 The AI Revolution In Science 38
L.Sathish Kumar, Assistant Professor, Department of ECE, SCSVMV
16 Wireless Charging ICs Available from Mouser 41
M.Vinoth, Assistant Professor, Department of ECE, SCSVMV
17 Modern Machine Vision Technologies 42
S.Selvakumar, Assistant Professor, Department of ECE, SCSVMV.
18 Lattice MachX03 FPGAs with Very High Density I/O 43
S.Selvakumar, Assistant Professor, Department of ECE, SCSVMV.
19 Long Life Rechargeable Batteries 44
M.Vinoth, Assistant Professor, Department of ECE, SCSVMV.
20 RF Power Amplifier Efficiency 45
J.Vinoth Kumar, Assistant Professor, Department of ECE, SCSVMV.
21 LED Lighting Circuit Protection 46
J.Vinoth Kumar, Assistant Professor, Department of ECE, SCSVMV.
22 Black Hole 47
R.Hariuttej, BE-I Year, Section:S4,Department of CSE, SCSVMV
23 The Physics behind Thermal Camera imaging 51
S.Girivel, Dept. of Physics, SCSVMV
24 How batteries and capacitors differ 53
V.Jayapradha, Asst Prof, Department of ECE, SCSVMV.
25 3D Hologram Technology Is The Future 56
G.Padmanabha Sivakumar, Assistant Professor, Department of EIE, SCSVMV
26 TensorFlow in Deep Learning using Python-Vision of Science 58
Dr.N.R.Ananthanarayanan,Associate Professor,Department of CSA,SCSVMV
27 Significant Science and Tech Discoveries in Ancient India 61
P.PURANDHAR SRI SAI , BE-I Year, Section:S4,Department of CSE,
28 Arduino Uno 65
S.Nandhini, Department of Physics, SCSVMV
29 7 Indian women scientists who are an inspiration to all 69
G.Poornima, Assistant Professor, Department of ECE, SCSVMV.
30 Action of D4 on X 71
S.Vijayabarathi , Assistant Professor, Department of Mathematics, SCSVMV.
31 Solid Mechanics 73
P. Srinanda, BE-I Year, Section:S4,Department of CSE, SCSVMV
32 Psychology 77
Rohan Tiwari, BE-I Year, Section:S4,Department of CSE, SCSVMV
33 Science of Meditation 82
K.Rajalakshmi, Assistant Professor, Department of Physics, SCSVMV
34 Application of Science for Agriculture 87
K.Anitha, Assistant Professor, Department of ECE,SCSVMV
35 Science In Everyday Life 88
Harshavardhan Varma, BE-I Year, Section:S4,Department of CSE, SCSVMV
36 Objects under study from Sloan Digital Sky Survey 90
V.Ragavendran, Assistant Professor, Department of Physics, SCSVMV
37 Sky Surveys from Sloan Digital Sky Server (2014-2020) 92
V.Ragavendran, Assistant Professor, Department of Physics, SCSVMV
38 Mapping the Sky - What does it mean? 95
V.Ragavendran, Assistant Professor, Department of Physics, SCSVMV
39 Gravitational Waves 96
N. Surendra, BE-I Year, Department of CIVIL ENGG, SCSVMV
40 Astronomy and Vedas 101
Janani R, Assistant Professor, Dept. of EIE, SCSVMV
41 “Vision of Science” 103
M.Dinesh, Assistant Professor, Department of Civil and Structural, SCSVMV
42 Scientific Explanations 104
D.Muthukumaran, Asst.Professor, Department of ECE, SCSVMV
43 Magnetic levitation 105
N.Sariha, M.Phil Physics,SCSVMV
44 Wave Energy 111
V. Geetha, Assistant Professor, Department of CSE, SCSVMV
45 Laser Technology 114
T.Jayanthi, Assistant Professor, Department of CSE, SCSVMV
46 Current Environmental Problems 117
Dr.S.Omkumar, Associate Professor, Department of ECE, SCSVMV
47 Industrial Internet of Things (IIoT) 118
S.Chandramohan, Assistant Professor, Department of ECE, SCSVMV
48 Pluto 120
R.A.V.S.R.Vamsi Krishna, BE-I Year, Section:S4,Department of CSE,SCSVMV
49 RF-based Wireless Charging and Energy Harvesting 123
A.Rajasekaran, Assistant Professor, Department of ECE, SCSVMV
K.Saraswathi, Assistant Professor, Department of E&I, SCSVMV
50 Aliens And A New Earth 126
Palle Anurag Kashyap, BE-I Year, Section: S3,Department of CSE,SCSVMV
51 Tiny Neptune Moon 128
J Suganthi, Assistant Professor, Department of Physics, SCSVMV
52 Science and Spirituality 131
V.Uma, Assistant Professor, Department of ECE, SCSVMV
53 Ancient space crystals may prove the sun threw heated tantrums as a tot 132
R.Prema, Assistant Professor, Department of CSE, SCSVMV
Upcoming Technology Breakthroughs & Future Scientific New Inventions
in 2019
Shyam Mohan J S, Assistant Professor, Dept. of CSE, SCSVMV

1. Quantum Computers:

Computing technology has been revolutionized over time, and we have seen a lot of changes
and advancements: from room-sized computing machines to hand-held PDAs (personal digital
assistants) and smartphones. Now it is time to shift to quantum computers. Although quantum
computers are still in their infancy, they have enough potential to take computing capabilities
to the next level.
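The basic resource quantum computers exploit can be illustrated with a tiny sketch, not tied to any real quantum hardware: a single qubit placed in superposition by a Hadamard gate gives equal measurement probabilities for 0 and 1, unlike a classical bit which is always definitely one or the other.

```python
import math

# Amplitudes of the basis states |0> and |1>; the qubit starts in |0>
a0, a1 = 1.0, 0.0

# Apply a Hadamard gate: H|0> = (|0> + |1>)/sqrt(2), an equal superposition
a0, a1 = (a0 + a1) / math.sqrt(2), (a0 - a1) / math.sqrt(2)

# Measurement probabilities are the squared amplitudes
p0, p1 = a0**2, a1**2
print(p0, p1)  # both 0.5: the qubit is "both" 0 and 1 until measured
```

A real quantum computer entangles many such qubits, which is where the exponential growth in computing capability comes from.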

A D-Wave quantum computer

2. Self-healing Materials:

Self-healing materials are not a new concept: thousands of years ago, ancient Indians and
Egyptians used such techniques to build houses, temples and traditional structures that have
lasted through the ages.

3. DNA Storage:
It all came to light when Microsoft claimed a record by storing 200 MB of data on
a synthetic DNA strand.
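As a rough illustration of why DNA is attractive for storage, each nucleotide (A, C, G, T) can represent two bits of data. The mapping below is a hypothetical toy scheme, not Microsoft's actual encoding, which adds redundancy and error correction:

```python
# Toy DNA storage scheme: map every 2 bits to one nucleotide
BASES = "ACGT"

def encode(data: bytes) -> str:
    """Turn bytes into a strand of bases, 4 bases per byte."""
    strand = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            strand.append(BASES[(byte >> shift) & 0b11])
    return "".join(strand)

def decode(strand: str) -> bytes:
    """Recover the original bytes from a strand."""
    out = bytearray()
    for i in range(0, len(strand), 4):
        byte = 0
        for base in strand[i:i + 4]:
            byte = (byte << 2) | BASES.index(base)
        out.append(byte)
    return bytes(out)

dna = encode(b"Hi")
print(dna)          # CAGACGGC (4 bases per byte)
print(decode(dna))  # b'Hi'
```

At this density, 200 MB corresponds to roughly 800 million bases, which is why synthesising and sequencing such strands is the hard part.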

4. Reversing Paralysis:
A cure for paralysis could finally be on its way. Research by the French neuroscientist
Grégoire Courtine raises hope of reversing paralysis caused by spinal cord injuries. He
developed a revolutionary technology that applies a few volts of electrical stimulation to
activate specific muscles. The Swiss research institution École Polytechnique Fédérale de
Lausanne (EPFL), where the main research is centred, said: "The neuroprosthetic system
'BSI' (Brain-Spinal Interface) bridges the spinal cord injury in real time and wirelessly.
The chip interprets and decodes the signal from the brain's motor cortex and sends the
information to the electrodes located on the surface of the lumbar spinal cord. This electrical
stimulation modulates distinct networks of neurons that can activate specific muscles of the legs."

5. AI Integrated Surveillance Systems (CCTV Security Cameras):
Electronic surveillance is among the most reliable security systems, and most criminal and
theft cases are solved based on CCTV footage. In offices, schools, streets and malls, our every
move is recorded and tracked. Yet there are limitations to traditional CCTV: conventional
surveillance systems are not smart, they need human intervention, and they make investigations
hugely time-consuming.

6. Flying cars:
The Dutch company PAL-V has offered a flying car called "Liberty" for sale and is
expected to make deliveries by the end of 2019. Much like a commutable drone, it uses a
Rotax engine-based dual-propulsion drive train which enables it to both drive and fly. In
drive mode, the propeller tucks into the rear while the rotors are folded and stored in the
top of the vehicle.


3D Printing – Technology Ahead
T.Dinesh Kumar, Assistant Professor, Department of ECE, SCSVMV

Metals were a new frontier for 3D printing this year. Businesses wanted to expand
beyond the traditional materials associated with 3D printing and into something more
durable. Where 3D-printed metal parts were once costly to produce, new software and
updated technology brought a decrease in price and a rise in quality. Metal printing is ideal
for smaller-scale production projects, as it lends itself to production flexibility while also
being perfect for designs with high complexity.
Metal additive manufacturing (AM) is on the rise, as the material is ideal for
producing lighter items at higher production rates. A recent trend has been to use it for
seats on aeroplanes: upon completion, they weighed an astonishing 50% less than their
regular counterparts, at only 766 grams. As the technology moves forward, we can expect
forms of metal printing to change the dynamics of entire industries.

Increased Productivity
Production speed has been a barrier to 3D printing adoption on a mass scale.
However, there has been a push by both researchers and 3D printing corporations to increase
production speeds, which will help lower this barrier by allowing manufacturing speeds to
double while producing the same standard of quality and, most importantly for
manufacturers, with no additional hardware costs. In parallel, MIT engineers developed a
desktop printer with ten times the output of anything on the commercial market: polymer
fed through the nozzle by a new mechanism flows faster, so what once took hours now
takes minutes.

A Rise in Customization
Customization is the "sweet spot" for 3D printing. Major companies and
manufacturers have started to take notice that consumers are interested in more customized
product options, which 3D printing can deliver. The automobile, aerospace and dental
industries have been utilizing 3D printing's customization ability for years now, but other
industries are also starting to take notice.
The State of 3D Printing study showed that offering customized products was a top
priority for the companies that took the survey. As software advances, materials expand,
demand for customization increases and 3D printing production becomes faster, we will see
more adoption of the technology across industries.

IoT for Highly-Integrated Wireless Technology
M.A.Archana, Assistant Professor, Department of ECE, SCSVMV

Wireless connectivity is ubiquitous, an invisible technology that has changed the
networking infrastructure and is now enabling the IoT. The implementation of wireless data
communications is achieved on many levels and in its simplest form can exist purely as a link
between two devices communicating over a short range, without the need for any kind of
protocol. Increasingly, of course, applications require a more sophisticated solution that
supports some form of networking between multiple devices and, here, the landscape is very
different: one dominated by standards.

The proliferation of wireless communications is largely thanks to the openness of the
RF spectrum. The vast majority of standards-based and proprietary protocols operate in the
license-free part of the spectrum which, while open to anyone, still comes with strict
restrictions in terms of operating power. Because of this, implementing a wireless connection
is easiest to achieve using either technology designed to comply with an industry standard, or
proprietary technology that has already been approved for use by the appropriate governing body.

Increasingly, it is the ability to create networks of devices that is propelling the
popularity of industry standards, but in broad strokes the different technologies available can
be categorized by range and bandwidth, both of which influence the operating power, which is
becoming an increasingly important design parameter. There are many protocols used in the
license-free ISM (Industrial, Scientific and Medical) and SRD (Short Range Device)
frequency bands, both Sub-1 GHz and 2.4 GHz. Some standards-based protocols such as
Zigbee operate in both bands, but for historic reasons proprietary protocols are more likely to
operate in the Sub-1 GHz band. Semiconductor providers such as Texas Instruments offer a
wide range of solutions for wireless connectivity, from transceivers to fully-integrated
wireless microcontrollers (MCUs); the SimpleLink MCU platform now supports over ten
wired and wireless protocols, ensuring there is a solution to fit every application. In many
cases the application will indicate which wireless technology to use.

With the right integrated solution, the low-power nature of a Sub-1 GHz application
of this type would allow a smart sensor to operate for as much as 20 years from a single coin
cell battery. As the IoT develops it has become apparent that a hardware platform (the
SimpleLink family) can be extended to address more application areas. A major part of
creating an application that forms part of the IoT involves connecting it to the internet, and
in many applications the wireless protocol that most easily achieves this is Wi-Fi. A Wi-Fi
connection provides global access to the functionality and data of an application; however,
the complexity of the protocol reflects this, and it is consequently more challenging to
implement than, say, protocols intended for private networks.
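The 20-year battery-life claim can be sanity-checked with simple arithmetic. Assuming a typical CR2032 coin cell of about 220 mAh (an assumed figure, not from the article), the average current the sensor may draw works out to roughly a microamp:

```python
# Back-of-the-envelope power budget for a coin-cell IoT sensor.
# Capacity figure is an assumption (typical CR2032), not from the article.
capacity_mah = 220.0
years = 20
hours = years * 365 * 24          # total operating hours

# Average current that drains the cell in exactly `years` years
budget_ua = capacity_mah * 1000 / hours   # microamps
print(f"average budget: {budget_ua:.2f} uA")  # about 1.26 uA
```

This is why such designs sleep almost all the time and wake only briefly to transmit; a sustained receive or Wi-Fi connection at milliamp currents would exhaust the cell in weeks.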

As the value of data collected through distributed Wireless Sensor Networks (WSNs)
increases, manufacturers are looking for reliable yet simple ways to create WSNs and start
collecting that data. In this application area, Sub-1 GHz wireless connectivity can excel in
both range and power.
The latest revision of the Bluetooth specification sees Bluetooth 5 now supporting
mesh networking, as well as an increased bit rate of up to 5 Mbit/s. With the same low-power
credentials, it means Bluetooth can now be used in a wider range of applications that need a
more direct internet connection, with the added benefit of greater range through the use of
mesh networking.

The internet is enabled through the Internet Protocol, or IP; one of the latest wireless
standards to provide IP functionality is 6LoWPAN, as defined by the Internet Engineering Task
Force. With full support for IPv6 on all nodes in a network, it is perhaps the most future-proof
of all wireless technologies, as it can operate in either the Sub-1 GHz or 2.4 GHz frequency
bands and on a number of physical layers.


Sensors to Improve Early Warning Tsunami Systems
Dr. K. Umapathy, Associate Professor, Department of ECE, SCSVMV

Natural Disaster in Japan:

On a Friday afternoon in 2011, the residents of northeastern Japan were hit by a six-minute
earthquake that shifted the country's main island by eight feet and triggered powerful
tsunami waves reaching up to 120 feet in height, according to media reports. Tsunami warnings
were initially broadcast minutes before the waves arrived, but unfortunately underestimated
their size. Many failed to evacuate to higher ground, and as a result 15,894 deaths resulted
from the disaster. Japan has since installed a network of seismic and pressure sensors on the
ocean floor that has raised the bar for tsunami early-warning systems worldwide.

Current Tsunami Systems:

New research, which appears in Geophysical Research Letters, suggests how warnings
could be made more accurate by combining data streaming in real time from sensors with
tsunami simulations. "Most tsunamis are caused by an offshore earthquake that pushes the
ocean up or down. As gravity pulls the water back toward equilibrium, a tsunami is born,"
says Eric Dunham, senior author and Associate Professor of Geophysics at Stanford
University's School of Earth, Energy & Environmental Sciences. But tsunamis can also be
generated in other ways. Underwater landslides, which might accompany an earthquake or
occur independently, are a classic example, and traditional warning systems completely miss
tsunamis from those types of sources. Current tsunami warning systems begin with an
estimate of earthquake properties from seismic waves and then utilize pre-computed relations
between earthquakes and the tsunamis they generate, explains Dunham.

Sensor Technology:
"Yuyun figured out how to apply a data assimilation technique, known as the ensemble
Kalman filter, to rapidly reconstruct the tsunami wavefield," says Dunham of lead author Yuyun
Yang, who uses tsunami wave propagation simulations to ultimately predict the wave height
and arrival time at the coast. "This new technology—offshore sensors connected via fiber-optic
cable to land—allows the data to stream in almost real time back to computers where it can be
processed and used in warning systems," says Dunham. The evolution of sensor technology not
only transforms the quality of natural disaster detection, as for tsunamis, but also touches other
areas of life, from the food we eat to aiding human-machine interactions.
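The ensemble Kalman filter used in the study is too involved for a magazine sketch, but the underlying idea, blending a model estimate with noisy streaming sensor data, can be shown with the scalar Kalman filter below. All values are invented for illustration and have nothing to do with the actual Stanford system:

```python
import random

random.seed(1)
true_height = 2.0   # hypothetical tsunami wave height at a sensor (m)
x, P = 0.0, 1.0     # current estimate and its uncertainty (variance)
R = 0.25            # assumed measurement-noise variance

for _ in range(50):                          # 50 streaming sensor readings
    z = true_height + random.gauss(0, 0.5)   # noisy pressure measurement
    K = P / (P + R)                          # Kalman gain: trust in new data
    x = x + K * (z - x)                      # blend estimate with measurement
    P = (1 - K) * P                          # uncertainty shrinks each step

print(round(x, 2))   # estimate converges close to 2.0
```

The real ensemble version does the same blending, but for an entire wavefield at once, using many simulation runs in place of the single variance `P`.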

The Physics of Sports
C.Suvasini, Teaching Assistant, Department of Physics, SCSVMV

The physics of sports has broad applications and is useful for boosting performance in a
variety of athletic disciplines. A good athletic performance is based on proper control and
coordination of movement.
Physics is involved in sports such as:
• The Physics of Archery
• The Physics of Hitting A Baseball
• The Physics of Basketball
• The Physics of Billiards
• The Physics of Bowling
• The Physics of Bungee Jumping
• The Physics of Cheerleading
• The Physics of Curling
• The Physics of Figure Skating
• The Physics of Football
• The Physics of Golf
• The Physics of A Golf Swing
• The Physics of Gymnastics
• The Physics of Hockey
• The Physics of Ice Skating
• The Physics of Jumping
• The Physics of Kite Flying
• The Physics of Luge
• The Physics of Pole Vaulting
• The Physics of Running
• The Physics of Sailing
• The Physics of Skateboarding
• The Physics of Skiing
• The Physics of Skydiving
• The Physics of Snowboarding
• The Physics of Soccer
• The Physics of Swimming
• The Physics of Tennis
• The Physics of Volleyball

1. The Physics of Skydiving – Terminal Speed

The physics behind skydiving involves the interaction between gravity and air
resistance. When a skydiver jumps out of a plane, he starts accelerating downwards until he
reaches terminal speed. This is the speed at which the drag from air resistance exactly
balances the force of gravity pulling him down. The mass of the object is also an important
factor. A feather will fall much more slowly than a solid object such as a rock, because the
drag force relative to body weight (mg) is much higher for a feather.
A skydiver typically reaches speeds of around 120 mph in the spread-eagle position
(shown in the first figure). But he can reach speeds up to 200 mph if he orients his body with
head pointing down. During free-fall a skydiver can perform a variety of acrobatic
manoeuvres such as spinning, moving forward, moving backward, just by changing the shape
of his body to "catch" the wind a certain way. By doing this he is essentially changing the
direction of the drag force acting on his body, much the same way airplane wings can be
oriented to produce a desired motion for the plane. The final drag force the skydiver must
experience is from releasing his parachute, which slows his descent enough for him to land safely.
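The balance described above gives a closed-form terminal speed: setting the drag force ½ρv²C_dA equal to the weight mg and solving for v. The numbers below are rough, assumed values for a spread-eagle skydiver, not measured data:

```python
import math

# Illustrative, assumed values (not measurements):
m = 80.0      # skydiver plus gear mass (kg)
g = 9.81      # gravitational acceleration (m/s^2)
rho = 1.2     # air density near the ground (kg/m^3)
Cd_A = 0.6    # drag coefficient times frontal area, spread-eagle guess (m^2)

# At terminal speed, drag (1/2 * rho * v^2 * Cd * A) equals weight (m * g)
v_t = math.sqrt(2 * m * g / (rho * Cd_A))
print(f"terminal speed ~ {v_t:.0f} m/s ({v_t * 2.237:.0f} mph)")
```

With these assumptions the result lands near 100 mph; a head-down diver shrinks Cd_A by roughly half, which pushes the terminal speed toward the 200 mph figure quoted above.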


The Physics of a Volleyball Serve

One way to optimize a volleyball serve is to minimize the time the ball spends in the
air. This in turn minimizes the reaction time of the opposing team, making it more difficult
for them to return the shot. In this analysis of volleyball physics, we will look at ways to
minimize the time the ball spends in the air after the serve is made. The physics behind this
analysis is of a kinematic nature, considering the motion of the ball. This optimization
problem is an interesting application of projectile motion.
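The kinematics can be sketched numerically. For a ball struck at height h with vertical launch speed v_y (all values below are hypothetical), the airborne time follows from the standard projectile-motion equation, and a flatter serve clearly spends less time in the air:

```python
import math

g = 9.81  # gravitational acceleration (m/s^2)

def airborne_time(vy, h):
    """Time for a ball launched at height h (m) with vertical speed
    vy (m/s, upward positive) to reach the floor, from
    0 = h + vy*t - g*t^2/2 solved for the positive root t."""
    return (vy + math.sqrt(vy**2 + 2 * g * h)) / g

# Hypothetical serves struck 2.5 m above the floor
print(round(airborne_time(3.0, 2.5), 2))  # lofted serve: longer flight
print(round(airborne_time(0.0, 2.5), 2))  # flat serve: shorter flight
```

The optimization, then, is to serve as flat (or even slightly downward) as possible while still clearing the net and landing in bounds.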

The Magnus Effect and Air Resistance

The airborne time of the volleyball can be reduced even more by putting top-spin on
the ball. This causes the ball to experience an aerodynamic force known as the Magnus
effect, which "pushes" the ball downward so that it lands sooner. This complicates the
physics of the serve.

As the ball spins, friction between the ball and air causes the air to react to the
direction of spin of the ball. As the ball undergoes top-spin (shown as clockwise rotation in
the figure), it causes the velocity of the air around the top half of the ball to become less than
the air velocity around the bottom half of the ball. This is because the tangential velocity of
the ball in the top half acts in the opposite direction to the airflow, and the tangential velocity
of the ball in the bottom half acts in the same direction as the airflow. In the figure shown, the
airflow is in the leftward direction, relative to the ball.
Since the (resultant) air speed around the top half of the ball is less than the air speed
around the bottom half of the ball, the pressure is greater on the top of the ball. This causes a
net downward force (F) to act on the ball. This is due to Bernoulli's principle which states
that when air velocity decreases, air pressure increases (and vice-versa). As a result, by
putting enough top-spin on the volleyball, the airborne time can be further reduced by as
much as a tenth of a second.
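The size of this Magnus force can be estimated with the standard aerodynamic lift relation ½ρv²AC_L. The lift coefficient and speed below are assumed for illustration, not taken from measurements:

```python
import math

# Rough, assumed numbers for a served volleyball:
rho = 1.2            # air density (kg/m^3)
v = 20.0             # ball speed (m/s)
r = 0.105            # ball radius (m)
A = math.pi * r**2   # cross-sectional area (m^2)
C_L = 0.2            # assumed lift coefficient from top-spin
m = 0.27             # ball mass (kg)

F_magnus = 0.5 * rho * v**2 * A * C_L   # directed downward for top-spin
weight = m * 9.81
print(f"Magnus force ~ {F_magnus:.2f} N vs weight {weight:.2f} N")
```

Even with these modest assumptions the Magnus force is a sizeable fraction of the ball's weight, which is why top-spin visibly bends the trajectory downward.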


The Physics of Kite Flying – Aerodynamic Lift
Kite flying is a fun activity which people of all ages can enjoy. The physics of how a
kite gains lift is very similar to how an airplane gains lift. The wings generate lift force by the
action of the moving air over the wing surface. A kite works in the same way. The wind
blows in the direction of the kite and somewhat underneath it. This creates lift.
A string is attached to the kite in different locations so that the kite doesn't flop
around in the wind. For further stability (as well as aesthetic value) a tail is often added to the
back of the kite. If the wind were to blow the tail from the side, the kite would rotate until the
tail (and kite) lined up with the wind. This allows the kite to remain straight and point in the
direction of the wind.

A lift force is generated in the direction perpendicular to the wind, and a drag force is
generated in the direction parallel to the wind. It's the same principle as if you were to stick
your hand out the window of a moving vehicle. With your hand tilted clockwise, the wind
force would push your hand up (due to lift) and back (due to drag). Both lift and drag are
unavoidable consequences of the aerodynamics involved; you cannot have one without the
other. Since the lift force acting on a kite is usually quite small, kites must be made of very
light and rigid material to get airborne and stay in one piece.
To get a kite airborne it is sometimes necessary to run while pulling the kite behind
you. This creates "apparent wind" which creates lift and pushes the kite up. Once the kite
reaches a high enough altitude where the wind becomes strong enough, you can stop running
and the kite will remain aloft.
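The lift described here follows the same ½ρv²AC_L relation used for aircraft wings. With assumed, illustrative values for a small kite in a steady breeze:

```python
# Illustrative kite lift estimate; every value below is an assumption.
rho = 1.2      # air density (kg/m^3)
v = 6.0        # wind speed (m/s), a steady breeze
A = 0.5        # kite sail area (m^2)
C_L = 0.8      # assumed lift coefficient at the kite's flying angle

lift = 0.5 * rho * v**2 * A * C_L     # newtons, perpendicular to the wind
mass_limit = lift / 9.81              # heaviest kite this lift can hold up
print(f"lift {lift:.1f} N, supports up to {mass_limit * 1000:.0f} g")
```

Because lift scales with v², doubling the wind speed quadruples the lift, which is why running with the kite (adding "apparent wind") is often enough to get it airborne.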

Role of Mathematics in Nature
Dr. J. Sengamalaselvi, Asst. Professor, Department of Mathematics, SCSVMV

For centuries, we have known that the world around us can be explained by the scientific
method. The difficulty has always been making the scientific discoveries needed to do so.
However, the existence of mathematics has made it a lot easier for us. We can see mathematics
in nature, from numerical patterns within sunflowers to breeding ratios, and formulas have been
used to predict the discovery of mathematical anomalies like black holes. Some say our universe
is literally made out of mathematics, in the same way that computer programmes are made out
of code. Everything we can observe has a mathematical explanation, even the most complex and
beautiful of anomalies. Here are two epic examples of mathematics in nature.

1. The Eclipse

We always have at least one eclipse each year, and they're quite fun. I remember
watching a rather significant eclipse in the year 2000; there won't be a longer one until
the year 3000! An eclipse is when the moon, the Earth and the sun align to the point where
sunlight is completely blocked out. It's an amazing sight, and an epic example of
mathematics in nature. This is of course only possible due to the size of the moon relative
to the size of the sun. The sun is about 1.4 million kilometres across, whereas the moon is
only about 3.5 thousand kilometres: a huge difference! But the sun is also a lot further away
from us than the moon is. This perspective allows them to align just right for the perfect
eclipse. It is by complete chance that the bodies can align like this, and we have no idea
whether it is common for planets to be like ours in this respect... but we haven't found much.
According to science, the moon is slowly moving further away from the Earth. If that
continues, our eclipses may eventually cease to exist.
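The "just right" alignment can be checked with the small-angle rule: angular size ≈ diameter / distance. Using approximate mean figures (assumed here, not quoted in the article), the sun and the moon subtend nearly the same angle in our sky:

```python
# Approximate mean astronomical figures (assumptions, not from the article)
sun_diameter_km = 1.39e6
sun_distance_km = 1.496e8     # ~1 astronomical unit
moon_diameter_km = 3.474e3
moon_distance_km = 3.844e5

# Small-angle approximation: angular size ~ diameter / distance (radians)
ang_sun = sun_diameter_km / sun_distance_km
ang_moon = moon_diameter_km / moon_distance_km

print(f"sun: {ang_sun:.4f} rad, moon: {ang_moon:.4f} rad")
```

Both come out near 0.009 radians (about half a degree), which is the numerical coincidence that makes total solar eclipses possible; as the moon recedes, its angle shrinks and the match is eventually lost.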

2. DNA

DNA is vital to all living organisms. It contains much of the genetic code that allows us to grow,
function, and produce new life via reproduction. How we live changes our DNA, and our DNA
affects how we live and age. DNA is no small thing: without it we couldn't exist.
The structure of DNA correlates to numbers in the Fibonacci sequence, with an extremely
similar ratio. The Fibonacci sequence is a mathematical pattern that appears in many examples
of mathematics in nature, including rabbit breeding patterns, snail shells, hurricanes and much
more. It was named after the man who described it, Fibonacci, whom some call the greatest
European mathematician of the Middle Ages. Clearly, DNA structure is related to the Fibonacci
numbers.
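The Fibonacci sequence is easy to generate, and the ratio of successive terms converges to the golden ratio φ ≈ 1.618, the constant behind many of these natural patterns:

```python
# Each Fibonacci number is the sum of the previous two: 1, 1, 2, 3, 5, 8, ...
a, b = 1, 1
for _ in range(20):
    a, b = b, a + b

# The ratio of successive terms approaches the golden ratio phi
phi = b / a
print(round(phi, 6))  # ~1.618034
```

The DNA connection mentioned above is often stated in terms of the helix's proportions (for example, the ratio of the length to the width of one full turn) coming close to this same ratio.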

Science In Ancient India
Dr. Sujatha Raghavan, Dept. of Sanskrit & Indian Culture, SCSVMV

Sanskrit is considered to be the oldest language of the world, and the Vedas are the oldest
literary monument available to us. The contribution of Sanskrit towards the different aspects of
world civilization is enormous: whether it be language, religion, mythology, philosophy,
astronomy or medicine, everyone has to resort to Sanskrit. Science and Sanskrit are inseparable,
since Indian scientific concepts, be they in astronomy, mathematics or medicine, have been
expounded in Sanskrit over the millennia.

It is said that 'a person without the knowledge of Science (scriptures) is considered to be
really blind'. The scientific truths and facts explained by modern scientists are embedded in
our ancient literature. The Rig Veda throws light on the astronomical ideas of the Vedic Indians.
The Vedic priests possessed adequate knowledge of the course of the Sun, the path and phases
of the moon, the movements of the planets and so on. Months were determined based on the
moon, and the names of the lunar months were taken from the Naksatras in which the full moon
occurred. Another important aspect of Vedic astronomy is the concept of a cosmic cycle of
43,20,000 years (equal to a Mahayuga).

Our Vedic philosophers had the same scientific curiosity that drives modern science, and
the questions asked in the Nasadiya Sukta bear a striking resemblance to the questions asked by
modern astronomers. The Satapatha Brahmana explains that the earth is spherical.
Aryabhatta explains the daily rising and setting of planets and stars in terms of the earth's
constant rotational motion. The Surya Siddhanta reveals that the earth, owing to its
gravitational force, draws all things to itself.

In the development of mankind, we can consider the wheel as the first and most primitive
machine used by man. The wheel was used in carts, for making pottery, in flour mills, in
churning buttermilk to get butter and ghee, in drawing water from wells and in cloth weaving.
Another form of simple machine is the use of the principle of the lever in pan balances. Such
references occur in the Rgveda and in the Yajurveda.

That the heat of the sun lifts water on earth into the atmosphere, which after some time
comes down as rain, was recognized by the Vedic seers at a very early stage. They conceived the
rainfall process as a Yajna taking place in the middle region (antariksa), and the rain drops were
supposed to impregnate the earth, as a result of which life comes upon the earth. The
Chandogya Upanisad describes this yajna in five stages in the regions comprising the
Suryamandala, the Antariksa and the earth. The 164th Sukta of the first mandala of the Rgveda
contains some clear concepts of the rainfall process. The Kariristi described in the
Taittiriya Samhita of the Yajurveda has many interesting statements containing the concepts of
the Vedic seers on the process of rainfall. The rks of the sukta mentioned above state:

❖The rays of the Sun hold for six months waters capable of fertilizing the earth.
Pervading the sky they wait for performing their duty of drenching the earth with showers.

❖The rays of the sun following the dark (southern) path take the waters and move
upwards (northwards). They turn back from the source of rta and drench the earth with waters.

❖The waters go upward and come downward in the same measure during periods of the
respective seasons. Agni takes the waters to the heavens and Parjanya brings them down as rain.

The seers imagined that Agni, in the rta form, is located in the south and is always
moving towards the north. On the analogy of snow present in the northern latitudes, they
imagined soma to be present in the north and always moving to the south. There is a constant
confrontation between the two on the earth's surface, and the seasons are produced as a result of
either of the two asserting itself over the other. The Matsya Purana (124.29-34), the Vayu
Purana (51.13-16) and the Visnu Purana (2.9.8-12) deal with the role of the sun in the rainfall
phenomenon.

The atomic theory of the Nyaya-Vaisesika system explains the order of creation and
destruction of non-eternal objects. It is only the composite substances of earth, water, air
and fire which are produced and destroyed. All the objects of the phenomenal world are
divisible into smaller parts, which are divisible into still more minute parts, until we come to the
indivisible atoms; these atoms cannot be split further and mark the limit of division. An atom
has been defined in the Nyaya-Vaisesika system as an ultimate particle, i.e., the minutest part
of a thing, with minute magnitude. It is globular (Parimandalya), though it has no parts. Hence,
the atom is indivisible and cannot be perceived through any of the organs of sense-perception.

The term Paramanu is suggestive of the possibility that, at least at an abstract level Indian
Philosophers in ancient times had conceived the possibility of splitting an atom, which, as we
know today, is the source of atomic energy. Kanada observed that an inherent urge made one
Paramanu combine with another. When two Paramanus belonging to one class of substance
combined, a Dvyanuka (binary molecule) was the result. This Dvyanuka had properties similar
to the two parent Paramanus. Kanada also put forth the idea of chemical changes occurring
because of various factors. He claimed that variation in temperature could bring about such
changes. These Indian ideas about atom and atomic physics could have been transmitted to the
west during the contacts created between India and the west by the invasion of Alexander.

Modern science has also propounded an atomic theory to explain the physical universe.
According to it, the whole universe is made up of matter, which can be sorted into elements,
compounds and mixtures. These elements, compounds and mixtures are made up of atoms,
which can be further divided into electrons, protons and neutrons. But even today, scientists believe
that the atom is the smallest particle of matter that can take part in a chemical reaction. In the
ultimate analysis, it is the concept of the element, and not that of the electron, proton etc., which is the
basis of physical science, as electrons etc. cannot have an independent existence. Hence the
discoveries made by the physical sciences are neither an advance upon the Indian system nor
do they differ from it fundamentally.

In physics, Kanada explained light and heat as different aspects of the same element, thus
anticipating Clerk Maxwell's electromagnetic theory, which unified different forms of radiant
energy. Sankara, in his Advaita thought, expounded the concept of the unity of matter and energy.
Vacaspati recognized light as composed of minute particles emitted by substances, anticipating
Newton's Corpuscular theory of light and the later discovery of the photon.

The ChandogyaUpanisad beautifully explains the structure of the physical world thus:

The essence of all these beings is the earth. The essence of the earth is water. The essence
of water is vegetation. The essence of vegetation is man. The essence of man is speech. The
essence of speech is the Rk. The essence of the Rk is the Saman. The essence of the Saman is the Udgita.

Further, the ChandogyaUpanisad stresses the importance of Science and describes the various
streams of Sciences. Narada, the divine Seer, approached Sanatkumara like an ordinary person
for acquiring the means of attaining the highest goal of life. Narada enumerates the sciences
read by him as follows:

I have read the Rgveda, Yajurveda, Samaveda, Atharvaveda, history and mythology
which together constitute the Panchama or fifth Veda, the rites for the manes, Mathematics
(rasi), the subject of natural disturbances (daiva), the subject of Mineralogy (nidhi) found in
books like Mahakala etc., logic (vakovakya), ethics (ekayana), etymology (devavidya),
knowledge of the Vedas (brahmavidya) regarding pronunciation, ceremonial, prosody and
lighting of the ceremonial fire, science of elements (bhutavidya), science of archery
(kshatravidya), astrology and astronomy (nakshatravidya), science of serpents (sarpavidya),
subject of fine arts (devajanavidya) like perfumery, dance, music (vocal and instrumental),
sculpture, painting, handicrafts etc.

The ChandogyaUpanisad notes that human beings are also a form of animals and
describes the procreation process. The process of perception by the human brain and the linkage of
perception to the sensory organs are clearly described here and in the Sankhyakarika (Karika 30) of
Isvarakrsna, and correspond to the nerve impulse transmission of the modern scientists. Extensive
references to Cosmology, Astronomy and Astrology are found repeatedly in several chapters. It
is apparent from the study of the Chandogya Upanishad that the Indians in that period had a great
deal of understanding of various sciences and technologies.

From the epics, there is the reference to the Pushpakavimana in the Ramayana, which is a
prototype of aeroplanes. The Yantrasarvasva describes the various types of aeroplanes, which
differ in the four yugas; the propelling fuels and the design of the aeroplane vary in every yuga.
Sanjaya narrating the happenings of the Mahabharata war to the blind king Dhritarashtra could
be taken to mean a sophisticated communication system of the present era, equivalent to a live
telecast.

The growth and nurturing of a fertilized ovum outside the womb of the mother are found
in the Mahabharata. The embryo of the Kauravas emerged from the womb of Gandhari in the
form of a single egg after a conception of two years.

Sage Vyasa divided the egg into 100 pieces and put them in a pitcher filled with ghee
and protected it in a secret place for two years, after which the Kaurava princes were
born (AdiParva 114.18-25). This process resembles the process of a test-tube baby up to the
stage of the embryos being divided into 100 parts and transferred into a pitcher, but how
the growth of the embryo takes place inside the pitcher for a period of two years is to be
pondered over by embryologists. There is a hymn in the Rgveda (VII.33.13) which has a
striking resemblance to the genesis of a test-tube baby. Then, there are stories of duplication of
bodies, similar bodies being produced from one body, akin to cloning. Sage Kardama produced
with the help of yogic powers nine bodies identical to his own.

The BhagavataMahapurana anticipates modern-day embryology by revealing the
conception of the embryo and the stages of growth of the human embryo in every month
(III.31.1-24). The concept of embryo transfer is explained in the BhagavataMahapurana (X.2.4-
13). When Kamsa had killed the six children of Devaki, Lord Visnu entered the womb of Devaki
as her seventh child and commanded Yogamaya, the transcendent creative energy, as follows:

"There exists in Devaki's womb, in the form of an embryo, my own manifestation
known by the name of Sesa (the serpent god). Taking it out, place it in the womb of Rohini;
he will be known as Balarama."

Indian Medical tradition goes back to the vedic times when the medical practitioners
were given the status of God. Dhanvantari, the god of medicine was the earliest to propound
medical sciences in India. Our knowledge of the nervous system fits in so accurately with the
internal description of the human body given in the Vedas. Susruta, the father of surgery, first
studied human anatomy and had acclaimed success in various kinds of surgery. Charaka was the
first physician who formulated medicines for various ailments.

The Vedic seers had great foresight in preserving the environment through the performance of
daily yajnas for protecting the ecology. The Dharmashastras prohibited men from disturbing the
bio-diversity and eco-system, as doing so was against the tenets of religion and was considered a
sinful activity. The Atharvaveda deals meticulously with the various aspects of the environment
and shows great concern for ecology. Vedic risis always stressed the planting of trees and
discouraged the cutting of trees (R.V. 6.48.17).

In the field of Mathematics the Indians were responsible for introducing decimal place
value system. Even in the Vedas one finds a list of powers of 10 beginning with the zero power
to the twelfth power.

Hence it can be said that the discovery of zero, as well as the decimal system, originated
in India. The performance of yajnas is considered one of the primary religious
activities among the Indians. The time of the sacrifice, the place for the construction of
homakundas, the number of homakundas, their shapes and sizes, the number of bricks, the area of
the homakundas etc. are to be determined with the help of JyotisaSastra and Geometry. The primary
objective of studying the JyotisaSastra is:

"The primary use of Vedas is in the performance of sacrifices and the sacrifices have to
be performed at the proper time because time is a part or anga of sacrifice. It is from
JyotisaSastra that one knows the exact time and hence it is termed as vedanga or an auxiliary part
of the veda".

The Sulbasutras of Boudhayana, Apastamba, Katyayana and Manava, which give the
rules for the construction of Homakundas, form the basis of geometry. The Trikona sutra
as stated in the Lilavati of Bhaskaracharya is identical to the Pythagorean theorem and was
applied by the Indians much earlier than Pythagoras.
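
The Sulba rule mentioned above can be checked numerically. A minimal Python sketch, assuming nothing beyond the theorem itself; the integer triples used are standard illustrations, not quotations from the Sulbasutras:

```python
# Baudhayana's rule: the square on the diagonal of a rectangle equals
# the sum of the squares on its two sides (the Pythagorean theorem).
# The triples below are standard illustrative examples.

def satisfies_sulba_rule(length, breadth, diagonal):
    """True when diagonal^2 == length^2 + breadth^2."""
    return diagonal ** 2 == length ** 2 + breadth ** 2

for triple in [(3, 4, 5), (5, 12, 13), (8, 15, 17)]:
    print(triple, satisfies_sulba_rule(*triple))  # all True
```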

Thus Sanskrit literature is a treasure house of all types of knowledge: science,
Astronomy, Mathematics, Medicine, Philosophy etc. There are many more references which
have parallels with the technological breakthroughs that have resulted from modern science.

Dr. R. Poorvadevi, Assistant Professor, Department of CSE, SCSVMV

Significance of Knowledge Engineering

Knowledge Engineering plays a vital role in the development of various technologies:
expert systems, neural networks, artificial intelligence, intelligent systems, data mining,
decision support systems, knowledge-based systems etc. These technologies are used in a
hybrid fashion, combining them so that their strengths offset one another's weaknesses, to
develop a hybrid knowledge-based medical information system.

The development of a hybrid knowledge-based medical information system needs
useful medical data. The medical data must be chosen carefully, studied and interpreted in a
proper manner to develop the knowledge-based medical information system. The knowledge
base (KB) consists of a collection of related knowledge to solve a given problem.
Knowledge-based systems (KBS) are an important area of artificial intelligence (AI) in which
a knowledge base (KB) and an inference engine are the main components. A knowledge
management system (KMS) facilitates the flow of knowledge from a source that has it to one
that needs it, keeping the system updated. The knowledge data collected needs proper
processing before it enters the knowledge base.

Process of Knowledge Engineering

A major process in artificial intelligence (AI), knowledge engineering is the process
of understanding and then representing human knowledge in data structures, semantic
models (conceptual diagrams of the data as it relates to the real world) and heuristics (rules
of thumb that guide the search for solutions). Expert systems and algorithms are
examples that form the basis of the representation and application of this knowledge.
The knowledge engineering process includes:
 Knowledge acquisition
 Knowledge representation
 Knowledge validation
 Inferencing
 Explanation and justification
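
The interplay of knowledge base and inference engine described above can be sketched in a few lines of Python. This is a minimal illustrative forward-chaining engine; the medical facts and rules are invented examples, not part of any real system:

```python
# A minimal forward-chaining inference engine: the knowledge base (KB)
# holds facts and if-then rules; the engine repeatedly fires rules whose
# conditions are satisfied until no new facts can be derived.
# (Illustrative sketch only -- the rules and facts are invented.)

rules = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "high_risk_patient"}, "refer_to_doctor"),
]

def forward_chain(facts, rules):
    """Derive every fact reachable from the initial fact set."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)   # rule fires: assert the new fact
                changed = True
    return facts

# Derives both "flu_suspected" and "refer_to_doctor" from the raw inputs.
print(forward_chain({"fever", "cough", "high_risk_patient"}, rules))
```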

The amount of collateral knowledge can be very large depending on the task. A
number of advances in technology and technology standards have assisted in integrating data
and making it accessible. These include the semantic web (an extension of the current web in
which information is given a well-defined meaning), cloud computing (enables access to
large amounts of computational resources), and open datasets (freely available datasets for
anyone to use and republish). These advances are crucial to knowledge engineering as they
expedite data integration and evaluation.

Knowledge management (KM) is an approach to managing and sharing information
stored in a database which supports the decision-making process. The acquired knowledge is
used in business management decision making. Integrating knowledge management with
decision-making systems can be done faster and more accurately with the help of the
knowledge discovery techniques involved in data mining applications and their source finding.

Fig: 1 Illustration of Knowledge Engineering Pipeline

The diagram above shows the principle of the knowledge process among various metadata
information sources in order to produce algorithmic computations. In knowledge engineering,
the process of building intelligent systems comprises:
 Problem assessment
 Data and knowledge acquisition
 Completing the system
 Evaluation and revision
 Integration and maintenance of the system

The following diagram shows the knowledge representation of the inferencing process in
a service-oriented AI system. It is used to identify the application-specific and process-specific
functional components of the service application process.

Fig: 2 Pictorial representation of Knowledgebase

Building the Knowledge Base

The following process is used to build the knowledge base within the service access
environment. It is used to represent the access objects and the relations that specifically
identify the domains.
 The Knowledge Engineer (KE) investigates a particular domain
 The KE determines what concepts are needed for the domain
 The KE creates a formal representation of the objects and relations
 The KE is trained in knowledge representation
 A real domain expert is required to educate the KE about the domain during the acquisition process

The KE must make certain decisions accurately and needs to identify the proper
knowledge base in tricky decision-making applications, in order to justify the conceptual
relations within the service environment.

M.Vetrivel, Associate Professor, Department of Mechanical Engineering, SCSVMV

Temples are a link between man and God: a place to sing the glory of God, a structure for
religious and spiritual activities, and a house of worship; throughout history, humans have created
such spaces for the divine.
An idol is the physical image of the Divine. It helps man to realize and move on to the
next stage of realizing God. Through worship, a person moves to the next stage of mental
prayer and then finally realizes God.
India is known for its tradition and culture. There are many temples built across the
country in different designs, locations and shapes. People visit temples not only to get blessings but
also to attain a better mindset and calmness. Temples are found at places where positive energy
is abundantly available from the magnetic and electric waves of the earth's north/south pole thrust.
So, visiting temples helps make our five senses very active.
Most temples are built facing east, so that the sun's rays first
fall over the Kumbham of the temple tower.

Before a Vigraham is installed at a place, a copper plate (Yantra) is taken, on which
Vedas and Mantras are written, and it is placed under the Vigraham in the Garbhagriha or
Moolasthanam. Proper Avahanam (rituals) and Prathista are done before installing the
Vigrahams. These copper plates have the ability to absorb the magnetic vibrations and pass them
on to the Vigraham. In Agamic temples, the inner sanctum is always small and
open on one side, so that these vibrations pass towards us while praying. These vibrations
give eternal energy to the soul, which is the reason we should visit temples
very often.
A temple is a miniature cosmos comprising the five elements and a presiding deity, from
which energy is constantly radiating. Temples are places where pure vibrations of magnetic and
electric fields with positive energy prevail. In olden days, temples were built in such a way that the
floor at the center of the temple was a good conductor of these positive vibrations, allowing them
to pass into the body through our feet. So, it is necessary to walk barefoot inside the temple.
The idol of God is set at the core center of the temple, known as the “Garbhagriha” or
“Moolasthanam”. The structure of the temple is built only after the idol has been placed at a
highly positive wave-centric spot. This “Moolasthanam” is the place where the earth's
magnetic waves are found to be at their strongest.
People visiting the temple should, and will, ring the bell before entering the inner
chamber where the main idol is placed. These bells are made of various metals like cadmium,
copper, zinc, nickel, chromium and manganese. They are mixed in such a proportion that the
bell, when rung, produces a sound creating unity in the left and right parts of our brain. The
moment we ring the bell, it produces a sharp and enduring sound which lasts for a minimum
of 7 seconds in echo mode. The duration of the echo is good enough to activate all the seven
healing centres of our body, emptying our brain of all negative thoughts.

Usually the Moolasthanam or Garbhagriha will be dark. We usually close our eyes while
praying, and when we open them we see the camphor lit for the Aarthi in front of the idol. This
light, seen inside the darkness, activates our sense of sight. When the camphor is brought near us
after the prayer, we put our hands over it, which makes our hands warm; touching our eyes with
these warm hands keeps our sense of touch active.
The fragrance of the flowers, camphor and incense sticks together keeps our sense of
smell active and pleasant, giving calmness to the mind. The fragrance from the flowers and the
burning of camphor give out chemical energy that creates a good aura. During the evening
Aarthis, when the door opens up, the combined positive energy from the idol, the copper plates
and the utensils used while worshipping gushes out onto everyone present there.
Inside a temple, Theertham is offered to the devotees by the temple priests, ideally from a
silver or copper vessel. The theertham usually contains thulsi leaves, saffron, karpura,
cardamom and clove dipped in water for at least eight hours in a copper vessel. In addition to the
high medicinal value of these materials, this water is given to all devotees after performing pooja
to the idol. The pooja charges the water with magnetic radiations and also increases its medicinal
value. This water serves mainly as a source of magneto-therapy. By drinking it, we activate the
sense of taste and also develop the ability to balance the three doshas of our body: vata,
pitta and kapha. The curd, honey, milk, sugar and coconut water with which we perform pooja to
the idol are believed to make the charna-amrit a blessing. The holy water containing thulsi and
camphor helps fight cough and cold.
The sound from the Conch or Sankha is associated with the sacred syllable “OM”, which
is believed to be the first sound of creation. It marks the beginning of any good work. The sound
is believed to be the purest form of sound, signifying freshness and new hope. It becomes more
powerful along with the positive energy radiated in temples and hence has a greater impact on
the devotees.

Applying Kumkum or Tilak between the eyebrows is said to retain energy and prevent
loss of energy. It also controls the various levels of concentration. On the forehead, between the
two eyebrows, is a spot considered a major nerve point in the human body. While applying
Kumkum, the points on the mid-eyebrow region and the Adnya-chakra are automatically
pressed. This facilitates the blood supply to the face muscles.

In a temple, the idol of God is closed on three sides. This increases the effect of all the
energies. The lighted lamp radiates heat energy and also provides light inside the sanctum for the
priests. The ringing of the bell and the chanting of prayers take a worshipper into a peaceful state
and relieve him of his stress.

During the time of Deeparadhana or Aarthi, the priests close the door, light the lamps
first, then suddenly open the doors and show the Aarthi to the Gods while the bells are rung.
During this time, the light and heat energy from the camphor, the sound energy from the bells
and the fragrant scents of the flowers adorning the God all combine with the magnetic vibrations
or energy, fall on us, pass through us and go back into the ground. The science of what happens
in a temple is astonishing, and the full explanation is yet to be identified.
Energy can neither be created nor destroyed; it can only be transferred from one body to
another, and temples do the same for us. Temples give us back the positive energy we lose,
aiming to rejuvenate the five senses. Thus, temples take the positive energy from the Earth's
surface and transfer it to the human body through many mediums.

Hindus often undertake yatras and visit temples. A temple is a guiding place for
people on a path. Once you have attained the deepest meditation (Dhyanam), you can create a
temple inside you. So let us visit temples at least once a week, dedicating some amount of time to it.

K.Rajalakshmi, Assistant Professor, Department of Physics, SCSVMV

Our world is transforming rapidly and violently, to the point where many people feel
completely unable to comprehend and utilize these advancements. In the early 90s, computer
technology was limited to a small number of people; nowadays, everyone has a smartphone and a
laptop. Many new technologies, such as mobile technology, genetic engineering and the emerging
field of nanotechnology, play a vital role in all areas of developing science. The scariest part of
all is the fact that science fiction, the literature of the human species encountering vast, mind-
altering changes, whether they arrive via scientific discoveries, technological innovations,
natural events, aliens, or societal shifts, has recently lost its magic to the point that we
wonder what is real and what is not nowadays. Let us check out some modern science and
technology breakthroughs.

Many researchers acknowledge the role that science fiction has played in triggering their
interest in science and inspiring breakthroughs. Indeed, there are many examples of fictional
technologies that have later emerged in the real world. In 1945, long before the first satellite
orbited Earth, Arthur C. Clarke famously described how radio signals could bounce off satellites
for long-distance communication. Today, communications satellites are common.

In 1945, pioneering science fiction author Arthur C. Clarke began circulating a manuscript called The Space
Station: Its Radio Applications. This paper proposed that
space stations could be used to broadcast television
signals — an “out there” statement during a time when
television was not yet a commercial entity. Seventeen
years later, in 1962, the Telstar 1 communications satellite relayed the first transatlantic
television signal in history. Fast-forward fifty-five years to 2017, when President Trump's
inauguration was viewed by a live global audience of 30.6 million viewers, in an event that was
virally compared to the science fiction TV show The Twilight Zone.

First planet with four suns discovered: An international team of astronomers has
announced the discovery of a planet whose skies are illuminated by four suns, the first known of
its type. The planet, located about five thousand light-years from Earth, has been dubbed PH1 in
honor of Planet Hunters, a program led by Yale University in the United States.

Flying cars : The Terrafugia flying car gets thirty-five miles to the
gallon as a car and consumes five gallons per hour as a plane. It
flies at 115 miles per hour and can cover 490 miles per flight.
A flying car is also a type of personal air vehicle or roadable
aircraft that provides door-to-door transportation by both ground
and air.

Robot snake: Researchers at Carnegie Mellon University's Biorobotics Laboratory have
adapted one of their robotic snakes so that it automatically wraps itself around an object or target
when it is thrown (in test cases, a light pole and a tree branch) and holds on, supporting itself.
Scientists believe its leg- and feet-free mode of locomotion might be ideal for use in hard-to-reach
places, such as buildings that have been demolished by an earthquake.

The Cancer Gene Fingerprint: Not all cancers are equally fatal; for example, prostate
cancer means a longer survival rate than a tumor in the esophagus. By analyzing the mutated
genome of a tumor, doctors can now pinpoint whether a cancer is sensitive to a certain
chemotherapy, or one that doesn't respond at all to current treatments.
NASA begins using robotic exoskeletons: The X1 Robotic Exoskeleton weighs in at
fifty-seven pounds and contains four motorized joints along with six passive ones. With two
settings, it can either hinder movement, such as when helping astronauts exercise in space, or aid
movement, assisting paraplegics with walking.

Diamond planet discovered: Planet 55 Cancri e is what's
known as a super-Earth because it is likely a rocky world
orbiting a sun-like star, but it has a radius twice as large as that
of Earth, and a mass eight times greater. The hot planet also
races around its star at such a close distance that one year lasts
just eighteen hours. The “alien planet,” as it’s also known, is
thought to be made largely of diamond but new studies have
shown that it might be less than glittering inside.

Artificial leaves generate electricity: Using relatively
inexpensive materials, Daniel G. Nocera created the world’s
first practical artificial leaf. The self-contained units mimic
the process of photosynthesis, but the end result is hydrogen
instead of oxygen. The hydrogen can then be captured into
fuel cells and used for electricity, even in the most remote
locations on Earth.

Voyager I leaves the Solar System : Launched in 1977,
Voyager I traveled past Jupiter and Saturn and by 2013 (when
NASA confirmed that it left our solar system) traveled more than
11.66 billion miles (18.67 billion kilometers) from the sun,
becoming the first spacecraft to enter interstellar space.

Self-driving buses are legal in Greece: Four tiny, driverless
buses are already on trial in the Greek city of Trikala, the
first of five European cities to introduce the automated
transportation. The vehicles are part of CityMobil2, an
EU-funded research project that is staging tests of
automated road transport systems with self-driving buses
across Europe. Each bus can carry 10 to 12 passengers at
speeds of up to twenty kilometers an hour and these buses
are electric, silent, and non-polluting.

3-D printer creates full-size houses in one session:
The D-Shape printer, created by Enrico Dini, is
capable of printing a two-storey building, complete
with rooms, stairs, pipes, and partitions. Using
nothing but sand and an inorganic binding compound,
the resulting material has the same durability as
reinforced concrete with the look of marble. The
building process takes approximately a fourth of the
time as traditional buildings, as long as it sticks to
rounded structures, and can be built without specialist
knowledge or skill sets.
DNA was photographed for the first time:
Using an electron microscope, Enzo di Fabrizio
and his team at the Italian Institute of Technology
in Genoa managed to capture the famous
Watson-Crick double helix in all its glory, by
imaging threads of DNA resting on a silicon bed
of nails. This technique now allows the
researchers to see how proteins, RNA, and other
biomolecules interact with DNA.

Genetically modified silk is stronger than steel: At the University
of Wyoming, scientists modified a group of silkworms to produce
silk that is, pound for pound, stronger than steel. Different groups
hope to benefit from the super-strong silk, including the medical
community for stronger sutures, businesses for use as a biodegradable
alternative to plastic, and the military for lightweight armor.

DARPA robot can traverse an obstacle course: DARPA, the Defense Advanced
Research Projects Agency, published a robot-themed viral video on its DARPAtv channel back
in 2012 for a robotics presentation. The so-called Pet-Proto bot traversed a specially made
obstacle course using autonomous decision-making. Many viewers couldn't resist commenting
that once the creepy robot is able to execute such things without the help of wires, humanity is
doomed.

Clarke was not the first sci-fi author with uncanny technological predictions — and
certainly not the last. Many works of science fiction involve technological speculation that bears
remarkable resemblance to the pieces of technology woven into our lives today. This brings to
mind an intriguing question of the role that speculative fiction (and especially science fiction)
plays in driving technology by postulating future advancements — and sparking innovation.

There is still much to explore and invent in modern technology, and there are no
limits to the human imagination. Science fiction literature has proven to have a great impact
on modern science and technology, and we can only wonder which Star Trek or Star Wars
inventions we will be able to use in real life in the future. The main point of this essay was to
show some impacts of science fiction ideas on the real world and to point out some important
inventions that came from them. Perhaps we'll be able to teleport across great distances or travel
at the speed of light. The limits of our future are way beyond our imagination. The art of
turning fiction into reality is still the most amazing thing about science. We can only hope that it
never ceases to amaze us as much as it does now, for there is no limit to a better life for a human
being, and there are many things left to invent and many dreams to be fulfilled.


R.Malathi, Assistant.Professor, Department.of Mathematics, SCSVMV


Consider the following equations

Zn + H2SO4 = ZnSO4 + H2 …… (1)

Fe + H2O = Fe3O4 + H2 ……… (2)
KClO3 = KCl + O2 ……….. (3)

According to the law of conservation of mass, “Mass can neither be created nor
destroyed in a chemical reaction.” It means the mass of the elements or number of atoms of
various elements present in reactants and products should be equal in a chemical reaction.
Let us examine the number of atoms of different elements of reactants and products in chemical
equation (1).

Element No. of atoms in reactants No. of atoms in products

Zn 1 1
H 2 2
S 1 1
O 4 4

The number of atoms of each element present in the reactants and products is the same. Such
an equation, which has an equal number of atoms or masses of the various elements on both
sides of the arrow (reactants and products), is called a balanced chemical equation. Let us
examine chemical equations (2) and (3):

Fe + H2O→ Fe3O4 + H2 ……… (2)

KClO3 →KCl + O2 ……….. (3)

Element No. of atoms in reactants No. of atoms in products

Fe 1 3
H 2 2
O 1 4

In contrast to equation (1), in equations (2) and (3) the number of atoms of the various
elements present in the reactants and products is not the same. Such an equation, which has an
unequal number of atoms of one or more elements in the reactants and products, is called an
unbalanced chemical equation. An unbalanced chemical equation has to be balanced to satisfy
the law of conservation of mass.
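
The atom-counting check described above can be automated. A small Python sketch (our own helper functions, limited to simple formulas without parentheses) counts the atoms on each side of equations (1) and (2):

```python
import re
from collections import Counter

def atom_counts(formula):
    """Count atoms in a simple formula such as 'H2SO4' (no parentheses)."""
    counts = Counter()
    for element, number in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        counts[element] += int(number) if number else 1
    return counts

def is_balanced(reactants, products):
    """Law of conservation of mass: atom counts must match on both sides."""
    left = sum((atom_counts(f) for f in reactants), Counter())
    right = sum((atom_counts(f) for f in products), Counter())
    return left == right

print(is_balanced(["Zn", "H2SO4"], ["ZnSO4", "H2"]))  # equation (1): True
print(is_balanced(["Fe", "H2O"], ["Fe3O4", "H2"]))    # equation (2): False
```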

Balancing Chemical Equation Using a Matrix

1. C3H8 + O2 = CO2 + H2O

Ans: a C3H8 + b O2 = c CO2 + d H2O

      C  [ 3   0  -1   0 ]   [ a ]   [ 0 ]
      H  [ 8   0   0  -2 ] x [ b ] = [ 0 ]
      O  [ 0   2  -2  -1 ]   [ c ]   [ 0 ]
                             [ d ]

3a - c = 0  =>  c = 3a

8a - 2d = 0  =>  d = 4a

2b - 2c - d = 0  =>  2b - 2(3a) - 4a = 0  =>  b = 5a

a = 1, b = 5, c = 3, d = 4

Therefore: C3H8 + 5 O2 → 3 CO2 + 4 H2O
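
The elimination above can be carried out mechanically. A Python sketch (our own helper, using exact fractions) reduces the element-count matrix and returns the smallest positive integer coefficients:

```python
from fractions import Fraction
from math import gcd, lcm

def balance(matrix):
    """Solve A x = 0 for the smallest positive integer coefficient vector.

    Rows are elements, columns are species; product columns carry
    negative entries, exactly as in the matrix written above.
    """
    rows = [[Fraction(x) for x in row] for row in matrix]
    n = len(rows[0])
    pivots, r = [], 0
    for col in range(n):                      # Gauss-Jordan elimination
        piv = next((i for i in range(r, len(rows)) if rows[i][col] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        rows[r] = [x / rows[r][col] for x in rows[r]]
        for i in range(len(rows)):
            if i != r and rows[i][col] != 0:
                f = rows[i][col]
                rows[i] = [x - f * y for x, y in zip(rows[i], rows[r])]
        pivots.append(col)
        r += 1
    free = [c for c in range(n) if c not in pivots]
    sol = [Fraction(0)] * n
    sol[free[0]] = Fraction(1)                # one free coefficient -> set to 1
    for i, col in enumerate(pivots):
        sol[col] = -sum(rows[i][c] * sol[c] for c in free)
    scale = lcm(*[f.denominator for f in sol])   # clear denominators
    ints = [int(f * scale) for f in sol]
    g = gcd(*ints)
    return [x // g for x in ints]

# a C3H8 + b O2 -> c CO2 + d H2O
print(balance([[3, 0, -1, 0],    # C
               [8, 0, 0, -2],    # H
               [0, 2, -2, -1]])) # O  -> [1, 5, 3, 4]
```

Setting the free coefficient to 1 reproduces c = 3a, d = 4a and b = 5a, so the result agrees with the hand elimination.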


2. FeCl3 + Be3(PO4)2 = BeCl2 + FePO4

Ans: 2 FeCl3 + 1 Be3(PO4)2 = 3 BeCl2 + 2 FePO4

3. H3PO4 + (NH4)2MoO4 + HNO3 → (NH4)3PO4·12MoO3 + H2O

Ans: 1 H3PO4 + 12 (NH4)2MoO4 + 21 HNO3



Ba3N2 + 6 H2O = 3 Ba(OH)2 + 2 NH3

3 CaCl2 + 2 Na3PO4 = Ca3(PO4)2 + 6 NaCl

4 FeS + 7 O2 = 2 Fe2O3 + 4 SO2

PCl5 + 4 H2O = H3PO4 + 5 HCl

2 As + 6 NaOH =2 Na3AsO3 + 3 H2

3 Hg(OH)2 + 2 H3PO4 = Hg3(PO4)2 + 6 H2O

12 HClO4 + P4O10 = 4 H3PO4 + 6 Cl2O7

8 CO + 17 H2 = C8H18 + 8 H2O

10 KClO3 + 3 P4 = 3 P4O10 + 10 KCl

SnO2 + 2 H2 = Sn + 2 H2O

3 KOH + H3PO4 = K3PO4 + 3 H2O

2 KNO3 + H2CO3 = K2CO3 + 2 HNO3

Na3PO4 + 3 HCl = 3 NaCl + H3PO4

TiCl4 + 2 H2O = TiO2 + 4 HCl

C2H6O + 3 O2 = 2 CO2 + 3 H2O

2 Fe + 6 HC2H3O2 = 2 Fe(C2H3O2)3 + 3 H2

4 NH3 + 5 O2 = 4 NO + 6 H2O

B2Br6 + 6 HNO3 → 2 B(NO3)3 + 6 HBr

4 NH4OH + KAl(SO4)2·12H2O → Al(OH)3 + 2 (NH4)2SO4 + KOH + 12 H2O

*** *** ***

Dr.C.K.Gomathy, Assistant Professor, Department of CSE, SCSVMV.

Anti-gravity (also known as a non-gravitational field) is the idea of creating a place or
object that is free from the force of gravity. It does not refer to the lack of weight under gravity
experienced in free fall or orbit, or to balancing the force of gravity with some other force, such
as electromagnetism or aerodynamic lift. Anti-gravity is a recurring concept in science fiction,
particularly in the context of spacecraft propulsion.
Need of Anti Gravity
Like most people, you probably took gravity for granted before space flights began.
Earth's normal gravitational pull holds you to the seat of your chair and, more importantly,
keeps you from being tossed into the air by the Earth's rotation. But gravity also causes a
lot of trouble and expense we seldom think about. Aircraft and rocket builders have to provide
heavy engines and huge weights of fuel just to offset gravity. In the construction of buildings
and bridges, and in a hundred other ways, gravity affects our lives and adds billions to the
cost of work. Gravity control could reduce or end many of these problems. And if global
warming eventually makes large parts of our planet hard to live in, anti-gravity propulsion
could let us explore space far more efficiently and search for suitable planets to migrate to.

From Newton's Gravity to General Relativity

In Newton's law of universal gravitation, gravity was an external force transmitted by
unknown means. In the 20th century, Newton's model was replaced by Einstein's general
relativity, in which gravity is not a force but the result of the geometry of space-time. Under
general relativity, anti-gravity is impossible except under contrived circumstances.
How to Produce Anti-Gravity
One proposed route to anti-gravity, described by Sir Hermann Bondi, uses negative
mass. Bondi proposed in 1957 that negative gravitational mass, combined with negative
inertial mass, would comply with the strong equivalence principle of general relativity. He
pointed out that a negative mass would fall toward (and not away from) normal matter:
although the gravitational force between them is repulsive, a negative inertial mass
(by Newton's law, F = ma) accelerates in the direction opposite to the applied force.
Normal mass, on the other hand, would fall away from negative matter.
Bondi noted that two identical masses, one positive and one negative, placed near each
other would therefore self-accelerate along the line between them, the negative mass chasing
after the positive one. In this sense negative mass would show an anti-gravity effect.
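Bondi's runaway pair can be illustrated with a toy numerical integration, written here as a Python sketch in natural units (G = 1, unit masses); the negative mass is of course hypothetical, and Newtonian gravity is assumed throughout.

```python
# Toy Euler integration of Bondi's runaway pair.  The acceleration of each
# body depends only on the OTHER body's mass: a = G * m_other * (x_other - x) / r**3
G = 1.0                      # natural units for the sketch
m_pos, m_neg = 1.0, -1.0
x_pos, x_neg = 0.0, 1.0      # negative mass starts behind the positive one
v_pos = v_neg = 0.0
dt = 0.001
for _ in range(1000):        # integrate one time unit
    r = x_neg - x_pos
    a_pos = G * m_neg * r / abs(r) ** 3
    a_neg = G * m_pos * (-r) / abs(r) ** 3
    v_pos += a_pos * dt
    v_neg += a_neg * dt
    x_pos += v_pos * dt
    x_neg += v_neg * dt

# Separation stays constant while both velocities grow in the same
# direction: the pair self-accelerates, the negative mass chasing the
# positive one, exactly as Bondi described.
print(round(x_neg - x_pos, 3), round(v_pos, 3), round(v_neg, 3))
```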

Negative Mass
In theoretical physics, negative mass is a hypothetical concept of matter whose mass is of
opposite sign to the mass of normal matter, e.g. −2 kg. Such matter would violate one or more
energy conditions and show some strange properties.

Anti-Gravity Devices
Claimed anti-gravity devices produce a force when twisted that operates "out of plane" and
can appear to lift them against gravity. One such device was invented by Henry Wallace. It
consists of a superconducting ceramic ring and solenoids. He claimed that passing current
through the solenoids creates a "gravitomagnetic" field that shows an anti-gravity effect.

NASA Antigravity Machine

NASA researchers hope to conduct an experiment that could determine whether the force of
gravity might someday be adjusted, like the volume of a radio. In 1992, Dr. Eugene
Podkletnov published the results of his experiments with high-temperature ceramic
superconductors. He devised an experiment in which a disc of superconducting material was
magnetically levitated and rotated at high speed, up to several thousand rpm, in the presence
of an external magnetic field. He noted that objects above the rotating disc showed a variable
but measurable loss of weight, from less than 0.5% to about 2%. This reported weight loss
was taken as a hint that anti-gravity might not be impossible.
"Anti-gravity" is often used to refer to devices that look as if they reverse gravity even
though they operate through other means, such as lifters, which fly in the air by moving air
with electromagnetic fields.

M.Thirunavukkarasu, Assistant Professor, Department of CSE, SCSVMV


The block chain technology is an incorruptible digital ledger of economic

transactions that can be programmed to record not just financial transactions but virtually
everything of value.


Picture a spreadsheet that is duplicated thousands of times across a network of

computers. Then imagine that this network is designed to regularly update this
spreadsheet and you have a basic understanding of the block chain.

Information held on a block chain exists as a shared and continually reconciled
database. This way of using the network has obvious benefits. The block chain
database isn't stored in any single location, meaning the records it keeps are truly public
and easily verifiable. No centralized version of this information exists for a hacker to
corrupt. Hosted by millions of computers simultaneously, its data is accessible to anyone
on the internet.
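The "incorruptible ledger" idea can be sketched in a few lines of Python: each block stores the hash of the previous block, so tampering with any past record invalidates every later hash. This is a minimal illustration of the principle, not a real block chain (there is no network, consensus, or proof-of-work here).

```python
import hashlib
import json

def block_hash(block):
    """Hash a block's contents, which include the previous block's hash."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

# Build a tiny two-block chain of toy transactions.
chain = []
prev = "0" * 64                          # genesis marker
for tx in ["A pays B 10", "B pays C 4"]:
    block = {"tx": tx, "prev": prev}
    prev = block_hash(block)
    chain.append((block, prev))

def valid(chain):
    """Re-verify every link: stored hashes must match recomputed ones."""
    prev = "0" * 64
    for block, h in chain:
        if block["prev"] != prev or block_hash(block) != h:
            return False
        prev = h
    return True

print(valid(chain))                      # True
chain[0][0]["tx"] = "A pays B 1000"      # tamper with history
print(valid(chain))                      # False: the corruption is detected
```

Because every node on the network can recompute these hashes independently, a tampered copy is immediately distinguishable from the honest majority's copy.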


T.Lakshmibai, Assistant Professor, Department of EIE, SCSVMV

Scientists have found that nano particles

of selenium, an essential micronutrient, can be
used as an antibacterial agent.

Selenium is found naturally in wheat,

eggs, cheese, nuts and sea food. It is an
antioxidant and immunity booster. Scientists
found that selenium nano particles, owing to
their unique structure and properties, may be
more effective than antibiotics as they have a
larger surface area and therefore can be more in
contact with the external environment.
Fig. Selenium-rich foods
The antibacterial effect may be due to the fact that at a particular concentration nano-
selenium interacts with the bacterial cell surface and penetrates into the cell, thus causing
damage. Selenium in excess is toxic. Though silver nano particles are also being used for
similar purposes, researchers choose selenium due to their stable nature.

Selenium nano particles were made by combining sodium selenite with vitamin C. As
the most easily observed property of nano particles is their colour change at different sizes,
researchers allowed the process to continue till a colour change was seen. Thereafter, a high-
speed centrifuge was used to separate the selenium nano particles in the form of pellets from
the solution.

To confirm that the newly-produced material was actually selenium, the sample
was matched in structure, function and properties with selenium, using various
methods. These artificially-made particles are spherical in shape, with average diameters
between 15 and 18 nanometers. The vitamin C used during the process helps in maintaining
better uniformity of the particles.

Scientists said nano-selenium can be an alternative to antibiotics like ampicillin to
prevent and treat a number of bacterial diseases and infections in humans.

The study has also indicated that nano-selenium is 60 times more effective in fighting
infections than conventional treatments. However, more research needs to be carried out
to understand the antimicrobial response of the disease-causing microorganisms.

L.Sathish Kumar, Assistant Professor, Department of ECE, SCSVMV

Big data has met its match. In field after field, the ability to collect data has
exploded—in biology, with its burgeoning databases of genomes and proteins; in astronomy,
with the petabytes flowing from sky surveys; in social science, tapping millions of posts and
tweets that ricochet around the internet. The flood of data can overwhelm human insight and
analysis, but the computing advances that helped deliver it have also conjured powerful new
tools for making sense of it all.

In a revolution that extends across much of science, researchers are unleashing

artificial intelligence (AI), often in the form of artificial neural networks, on the data torrents.
Unlike earlier attempts at AI, such “deep learning” systems don’t need to be programmed
with a human expert’s knowledge. Instead, they learn on their own, often from large training
data sets, until they can see patterns and spot anomalies in data sets that are far larger and
messier than human beings can cope with.

AI isn’t just transforming science; it is speaking to you in your smartphone, taking to

the road in driverless cars, and unsettling futurists who worry it will lead to mass
unemployment. For scientists, prospects are mostly bright: AI promises to supercharge the
process of discovery.

Unlike a graduate student or a postdoc, however, neural networks can’t explain their
thinking: The computations that lead to an outcome are hidden. So their rise has spawned a
field some call “AI neuroscience”: an effort to open up the black box of neural networks,
building confidence in the insights that they yield.

Understanding the mind inside the machine is likely to become more urgent as AI’s
role in science expands. Already some pioneers are turning to AI to design and carry out
experiments as well as interpret the results, opening up the prospect of fully automated
science. The tireless apprentice may soon become a full-fledged colleague.


Just what do people mean by artificial intelligence (AI)? The term has never had clear
boundaries. When it was introduced at a seminal 1956 workshop at Dartmouth College, it
was taken broadly to mean making a machine behave in ways that would be called intelligent
if seen in a human. An important recent advance in AI has been machine learning, which
shows up in technologies from spellcheck to self-driving cars and is often carried out by
computer systems called neural networks. Any discussion of AI is likely to include other
terms as well.

ALGORITHM A set of step-by-step instructions. Computer algorithms can be simple (if it’s
3 p.m., send a reminder) or complex (identify pedestrians).

BACKPROPAGATION The way many neural nets learn. They find the difference between
their output and the desired output, then adjust the calculations in reverse order of execution.

BLACK BOX A description of some deep learning systems. They take an input and provide
an output, but the calculations that occur in between are not easy for humans to interpret.

DEEP LEARNING How a neural network with multiple layers becomes sensitive to
progressively more abstract patterns. In parsing a photo, layers might respond first to edges,
then paws, then dogs.

EXPERT SYSTEM A form of AI that attempts to replicate a human’s expertise in an area,

such as medical diagnosis. It combines a knowledge base with a set of hand-coded rules for
applying that knowledge. Machine-learning techniques are increasingly replacing hand-coded rules.

GENERATIVE ADVERSARIAL NETWORKS A pair of jointly trained neural networks that

generates realistic new data and improves through competition. One net creates new
examples (fake Picassos, say) as the other tries to detect the fakes.

MACHINE LEARNING The use of algorithms that find patterns in data without explicit
instruction. A system might learn how to associate features of inputs such as images with
outputs such as labels.

NATURAL LANGUAGE PROCESSING A computer’s attempt to “understand” spoken or

written language. It must parse vocabulary, grammar, and intent, and allow for variation in
language use. The process often involves machine learning.

NEURAL NETWORK A highly abstracted and simplified model of the human brain used in
machine learning. A set of units receives pieces of an input (pixels in a photo, say), performs
simple computations on them, and passes them on to the next layer of units. The final layer
represents the answer.

NEUROMORPHIC CHIP A computer chip designed to act as a neural network. It can be

analog, digital, or a combination.

PERCEPTRON An early type of neural network, developed in the 1950s. It received great
hype but was then shown to have limitations, suppressing interest in neural nets for years.

REINFORCEMENT LEARNING A type of machine learning in which the algorithm learns by

acting toward an abstract goal, such as “earn a high video game score” or “manage a factory
efficiently.” During training, each effort is evaluated based on its contribution toward the goal.

STRONG AI AI that is as smart and well-rounded as a human. Some say it's impossible.
Current AI is weak, or narrow. It can play chess or drive but not both, and lacks common sense.

SUPERVISED LEARNING A type of machine learning in which the algorithm compares its
outputs with the correct outputs during training. In unsupervised learning, the algorithm
merely looks for patterns in a set of data.

TENSORFLOW A collection of software tools developed by Google for use in deep learning.
It is open source, meaning anyone can use or improve it. Similar projects include Torch and Caffe.

TRANSFER LEARNING A technique in machine learning in which an algorithm learns to

perform one task, such as recognizing cars, and builds on that knowledge when learning a
different but related task, such as recognizing cats.

TURING TEST A test of AI’s ability to pass as human. In Alan Turing’s original conception,
an AI would be judged by its ability to converse through written text.
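Several of the entries above (PERCEPTRON, SUPERVISED LEARNING, ALGORITHM) can be made concrete in a few lines. The sketch below is an illustrative Python example, not taken from any particular library: a single perceptron is trained on labelled examples of the logical AND function, nudging its weights whenever its output disagrees with the correct label.

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Train a single perceptron on (inputs, label) pairs (supervised learning)."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            out = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - out                    # 0 when the prediction is right
            w = [w[0] + lr * err * x[0], w[1] + lr * err * x[1]]
            b += lr * err
    return w, b

# Labelled examples of logical AND: output is 1 only for input (1, 1).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
pred = [1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0 for x, _ in data]
print(pred)  # [0, 0, 0, 1]
```

A perceptron can only learn linearly separable functions like AND; it famously cannot learn XOR, the limitation that suppressed interest in neural nets for years.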

M.Vinoth, Assistant Professor, Department of ECE, SCSVMV

These ICs support all equipment conforming to the WPC 1.1 Qi standard of the Wireless
Power Consortium, including portable audio, DSC, DVC, cell phone, and Smartphone devices.
The Panasonic Wireless Charging ICs, available from Mouser Electronics, feature low power
consumption and low heat dissipation in compliance with the Qi charging standard. As an
expansion to Panasonic's already popular Wireless Charging Control IC line, the
new NN32251A Transmitter IC includes both a microcontroller and an analog controller in
one package, reducing external components and saving board space. When used together,
the NN32251A transmitter and AN32258A-PR receiver chips achieve an energy conversion
efficiency of over 70% as a Qi compliant charger. The NN32251A Wireless Power Supply
Transmitter features a half-bridge (4-ch) or full bridge (2-ch) configurable gate driver. Other
features of the NN32251A include accurate voltage and current monitor for inverters, Qi-
defined output control by frequency or duty cycle, and a temperature detection circuit. This
device is compliant with Qi version 1.1 of the System Description Wireless Power Transfer,
Volume 1 for Low Power, defined by the Wireless Power Consortium, and is targeted toward
WPC-compliant wireless chargers.

Also fully compliant with Qi version 1.1 is the AN32258A Wireless Charging IC.
This receiver controller IC features synchronous full bridge rectifier control, a temperature
detecting circuit, full charge detection with an adjustable current level, and switching control
of an external power supply. Designers can use the AN32258A with any Qi-compliant
wireless charger, and the device is ideally suited for a variety of low-power and wireless
charging applications, including WPC compliant receivers, cell phones, Smartphones,
headsets, digital cameras, tablets, and portable media players.

S.Selvakumar, Assistant Professor, Department of ECE, SCSVMV

Video surveillance has proven itself to be an advanced sensor with benefits. Serving as
a remote set of eyes, video surveillance allows a virtual presence in off-site locations from a
single point. What's more, video cameras cover a large contiguous swath of view, allowing a
panning camera to steadily and consistently sweep a search pattern.

Video systems can also function in locations where humans cannot. The earliest known
video surveillance technology was used to safely monitor the development and launch of V-2
rockets in 1942. From a safe distance, scientists and engineers could observe performances
and identify failures.

Since then, video systems have acted as an extension of our eyes and ears. A steady
stream of technology developments and manufacturing advancements has taken video
surveillance to such a level that we feel comfortable relying on it for security purposes.

First Light

Light-sensitive materials can change their resistance or conductance based on the
presence or absence of light. Early monochrome video systems like the RCA Vidicon camera
system of the 1950s involved vacuum tubes featuring a light-sensitive selenium plate that
acted as the focus of the image to be sensed.

An electron beam would scan the plate and the resulting current was directly
proportional to the amount of light hitting that section of the plate at that exact time. Thus,
the raster-scanned tube produced a rudimentary electronic video signal that could easily be
transmitted long distances. The CRT television took this signal in reverse order, scanning a
phosphor screen with the electron beam to re-create corresponding light levels in the image.

For decades, video was limited to monochrome sensing and displaying of images in real
time. Color filters in front of each sensor limited the analog level to the intensity of the
constituent colors in order to create color sensors. Color phosphors placed in the path of the
electron beam were used to create colors. The advent of colorburst crystals helped to
synchronize color components in the video signals. Steady advances made improvements in
these tubes over time, including better resolutions, lower power, lower cost manufacturing,
and higher reliabilities. The Closed Circuit Television (CCTV) and broadcast industries were
born, driving development at an even quicker pace.

On the down side, these technologies used fragile glass and circuitry that needed higher
voltages. Size constraints made tube-based image sensors a large and bulky assembly.
Thanks to modern semiconductor technology, this is no longer the case.

S.Selvakumar, Assistant Professor, Department of ECE, SCSVMV

MachXO3™ Field Programmable Gate Arrays (FPGAs) from Lattice Semiconductor.

The MachXO3 uses a new packaging technology that eliminates bond wires, increasing
performance while reducing cost. This innovative packaging approach results in increased
gate and input/output (I/O) density while maintaining a small board footprint. The MachXO3
FPGAs are designed on a 65nm non-volatile low power process technology and operate as
fast as 150MHz.

The new Lattice Semiconductor MachXO3 Field Programmable Gate Arrays,

available from Mouser Electronics, boast a very high gate density not usually found
in programmable logic. These FPGAs feature low power consumption with almost instant-on
activity after power-up. Densities in this FPGA family range from 640 to 6900 Look-Up
Tables (LUTs). In addition to LUT-based low-cost programmable logic these devices also
feature embedded block RAM (EBR) and distributed RAM. Internal phase locked loops
(PLLs) reduce component count.
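What a Look-Up Table actually is can be illustrated in software: a k-input LUT is simply a 2^k-entry truth table addressed by the input bits. The Python sketch below is conceptual only (in a real FPGA the table contents come from the configuration bitstream, not from a function call).

```python
def make_lut(func, k):
    """Precompute a k-input LUT (truth table) from any boolean function."""
    table = []
    for i in range(2 ** k):
        bits = [(i >> j) & 1 for j in range(k)]   # unpack index into input bits
        table.append(func(*bits))
    return table

def lut_eval(table, *bits):
    """Evaluate the LUT: pack the input bits into an index and look it up."""
    index = sum(b << j for j, b in enumerate(bits))
    return table[index]

# "Configure" a 4-input LUT to implement (a AND b) OR (c XOR d).
lut = make_lut(lambda a, b, c, d: (a & b) | (c ^ d), 4)
print(lut_eval(lut, 1, 1, 0, 0))  # 1  (a AND b)
print(lut_eval(lut, 0, 0, 1, 0))  # 1  (c XOR d)
print(lut_eval(lut, 0, 0, 0, 0))  # 0
```

A device with 6900 such LUTs, plus routing between them, can realize any logic network whose gate count fits in that budget, which is what the density figures above measure.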

Advanced configuration support includes dual-boot capability and hardware versions

of commonly used embedded blocks such as an SPI interface, two I2C interfaces, and a
timer/counter. These features allow developers to quickly configure these devices for their
applications without having to configure the FPGA for these commonly used logic blocks.

Enhanced flexibility in the I/O include drive strength control for driving external
loads, slew rate control to compensate against high frequency oscillations, PCI bus
compatibility, internal pull-up and pull-down resistors on the I/O pins, and open drain outputs
for driving loads.

The MachXO3 is flexible enough to be used across many market segments including
consumer, communications, data storage, industrial, and automotive applications.

M.Vinoth, Assistant Professor, Department of ECE, SCSVMV

The new STMicroelectronics EFL700A39 Rechargeable Lithium Battery, available

from Mouser Electronics, is one of the thinnest batteries of its type. The battery has a lithium
cobalt oxide (LiCoO2) cathode which gives the battery a higher capacity for its size, a lithium
phosphorus oxynitride (LiPON) ceramic electrolyte which helps increase the overall energy
density of the battery, and a lithium anode. The internal resistance of the EnFilm battery is
only 100 ohms.

The EnFilm is an all solid-state battery designed to be ultra-thin. It boasts a fast

recharge time, and with a charging voltage of 4.2V can charge to 80% of its capacity in 20
minutes. The EnFilm boasts a low capacity loss over a long cycle life. In storage, the EnFilm
loses only 3% of its charge over a year which can be recovered by a recharge. Over a ten year
lifecycle the battery will lose 20% of its capacity like most lithium batteries.

The EnFilm battery is in full compliance with the IEC 62133 safety standard for
non-acid batteries. It also meets both the ISO7816 and IEC10373 mechanical and flexibility
standards for smartcards.

Target applications for these EnFilm batteries include Internet of

Things applications, sensors and sensor networks, smart card applications, RFID tags, energy
harvesting devices, non-implantable medical devices, backup power, and wearable devices.

J.Vinoth Kumar, Assistant Professor, Department of ECE, SCSVMV

The basic laws of thermodynamics ensure that no electronic device can achieve
100% efficiency, although switch-mode power supplies come close (approaching 98%).
Unfortunately, anything that generates RF power cannot presently boast such near-ideal
performance as there are simply too many impediments to converting DC power into RF
power, from losses incurred throughout the signal path, to operating frequency, the inherent
characteristics of the device, and others. The result, as an article in MIT Technology Review
uncharitably put it, is "a grossly inefficient piece of hardware."

Not surprisingly, every manufacturer of RF power products, from semiconductors to

amplifiers to transmitters, along with universities and the Department of Defense, spends
enormous amounts of time and money every year to increase the efficiency of RF power
generation. And for good reason: Even small increases in efficiency increase operating time
in battery-powered products and reduce the annual electricity bills of wireless base stations.
Figure 1 shows just how much the RF portion of a base station contributes to power
consumption: add up what the various RF-related base station components contribute and the
result is a very big number (Source: Globecom 2010, R. Grant and S.).

Fortunately, these efforts are delivering results that continue to increase RF efficiency
every year, some at the device level and others through use of techniques such as envelope
tracking, digital pre-distortion/crest factor reduction schemes, and higher classes of amplifiers
beyond the ubiquitous Class AB.

A major change in amplifier design, which has in five years become the standard in base
station amplifiers, is the Doherty architecture. Essentially dormant, it had been used in only a
few applications since it was invented by W.H. Doherty of Bell Labs (then a part of Western
Electric) in 1936. Doherty's research produced an amplifier architecture that delivers very
high power-added efficiency with input signals that have high peak-to-average ratios (PARs).
In fact, when properly designed, a Doherty amplifier can produce increases in efficiency of
11% to 14% compared to standard parallel Class AB amplifiers.

J.Vinoth Kumar, Assistant Professor, Department of ECE, SCSVMV

The advent of LED lighting in the past few years has now matured to the point where
sweeping changes exist in the way we think about illumination. LED lighting was once the
darling of scientists and "early adopters" but has now gone "mainstream". The technology
promises a virtual end to skyrocketing electric bills, the cold glow of fluorescent bulbs, and a
life expectancy that will make changing a light bulb so infrequent that we may require
instructions on how to do it properly! But does LED lighting actually deliver on these promises?

On a tour of a recently renovated industrial space earlier this year, the host was
detailing some of the features of the LED light fixtures that were providing general lighting
to the factory floor. His statement was something on the order of "They are really good lights
- and they have a seven year warranty!" There was not a hint of doubt in his voice, but then
he's not on the hook for repair/replacement costs if the lights don't actually last that long. If
those LED lights fail early - well, that's somebody else's problem - unless it was your
employer that sold the LED lighting fixture, then maybe that "somebody else" is you!

Let's look at five LED lighting applications with large markets and discuss the threats
to the lighting system as well as the various circuit protection strategies that can be deployed
to "harden" the design for maximum reliability.

The primary focus will be to invest in circuit protection where there is value. Of
course, the safety rating agencies (UL, ETL, etc.) will demand the minimum of fire protection
and personnel safety measures. The real goal of this article is to cost effectively extend the
useful life of the protected equipment (an issue not addressed by the safety folks). Value goes
beyond just the retail value of the light fixture (although that is surely a factor) and includes
valuable function (where failure may put people or other equipment in danger), difficult
access (where repair or replacement access costs may be an order of magnitude larger than
the equipment cost), or equipment sold into markets where reliability is a point of
competition between vendors.

R.Hariuttej, BE-I Year, Section:S4,Department of CSE, SCSVMV


A black hole is a region of spacetime exhibiting such strong gravitational effects that
nothing—not even particles and electromagnetic radiation such as light—can escape from
inside it.



The first modern solution that would characterize a black hole was found by Karl
Schwarzschild in 1916, although its interpretation as a region of space from which nothing
can escape was first published by David Finkelstein in 1958.


The physicist John Wheeler (1911–2008) is credited with the term "black hole". In 1967, a
student reportedly suggested the phrase at a lecture by Wheeler, who then adopted and
popularized it. Before that, the idea of a star from which not even light could escape already
existed, but such objects were known as "dark stars" or sometimes "frozen stars".


The term "black hole" is often taken to mean a literal hole at the centre of a galaxy with
gravity so strong that nothing can escape. That picture is misleading: the objects at the
centres of galaxies are collapsed cores that do not emit enough light to be seen against the
brightness of their surroundings, and they are not black holes at every stage of their
existence. We normally see an object when light falls on it and is reflected back to us; a
black hole instead pulls light toward itself, so no light is reflected from it and we cannot
see it. That is why black holes appear black.


Stars are bright and glow due to nuclear fusion. Nuclear fusion is the process of
making a single heavier nucleus from two lighter nuclei; this nuclear reaction releases a
large amount of energy, and the nucleus made by fusion is heavier than the starting nuclei.
Fusion happens in the core of stars, where hydrogen nuclei are fused together to make
helium, releasing the energy that powers the heat and light of the star. The outward pressure
of this released energy balances the inward pull of the star's own gravity, and with these two
forces acting opposite to each other the star stays in a stable condition. When the star runs
out of nuclear fuel, gravity gets the upper hand and the material in the core is compressed
further. The more massive the core of the star, the greater the force of gravity that
compresses the material, collapsing it under its own weight.

For small stars, when the nuclear fuel is
exhausted and there are no more nuclear
reactions to fight against gravity, the
repulsive forces among electrons within the
star eventually create enough pressure to
halt further gravitational collapse. The star
then cools and dies peacefully. This type of
star is called a "white dwarf."

When a very massive star exhausts its
nuclear fuel, it ends in a supernova
explosion. The outer parts of the star are
expelled violently into space, while the
core completely collapses under its own
weight.

If the core remaining after the supernova explosion is very massive (more than about 2.5
times the mass of the Sun), no known repulsive force inside a star can push back hard
enough to prevent gravity from completely collapsing the core into a black hole.





SINGULARITY: It is the very centre of the black hole, where gravity is strongest. The
singularity is the crushed remnant of the star, squashed into a point of effectively infinite
density.

EVENT HORIZON:

It is the outer boundary of the black hole. If you are outside or just at the event horizon,
you can still escape from the gravitational pull of the black hole.

INNER EVENT HORIZON:

It is the middle region of the black hole, where gravity is already very strong. Once you
cross the outer event horizon you can no longer escape the pull of the inner event horizon;
from this middle region objects are drawn further in, toward the region of strongest gravity
at the core, the centre of the black hole.


Black holes are of three types:

STELLAR BLACK HOLES:

A stellar black hole or stellar-mass black hole is a black hole formed by the collapse and
explosion of a star. Such black holes have masses ranging from about 5 up to around 100
solar masses, and the process that forms them is observed as a "hypernova" explosion or as
a gamma-ray burst.


SUPERMASSIVE BLACK HOLES:

Supermassive black holes contain between a million and a billion times more mass
than a stellar black hole. They are thought to exist at the centre of most large galaxies,
including the centre of our own galaxy, the Milky Way.


MINIATURE BLACK HOLES:

No one has ever discovered a miniature black hole, which would have a mass much
smaller than that of our Sun. But it is possible that miniature black holes formed shortly
after the "Big Bang": very early in the life of the universe, the rapid expansion of some
matter might have compressed slower-moving matter enough for it to contract into black
holes.


If you fell into a black hole, you would feel its gravity growing ever stronger. Because the
pull on the near side of your body would be much stronger than on the far side, your body
would be stretched out and eventually torn apart by these tidal forces. But we need not fear
this fate: the closest known black hole to us is V616 Monocerotis, also known as V616 Mon,
located about 3,000 light years away, with between 9 and 13 times the mass of the Sun.


We know that everything in the universe eventually dies, and black holes are no
exception. Black holes have a finite lifetime due to the emission of Hawking radiation.
However, for most known astrophysical black holes, the time it would take to completely
evaporate and disappear is far longer than the current age of the universe. For example, a
black hole with the mass of the Sun would take about 2×10^67 years to evaporate, whereas
the age of the universe is only 13.8×10^9 years (thus it will take more than 10^57 times the
current age of the universe for that black hole to evaporate).

Black Hole Evaporation Time: the formula for the evaporation time of a black hole of mass
M is t = 5120·π·G²·M³ / (ħ·c⁴), where G is the gravitational constant, ħ is the reduced
Planck constant and c is the speed of light.
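The standard result t = 5120·π·G²·M³/(ħ·c⁴) can be checked numerically; for one solar mass it reproduces the roughly 2×10^67 years quoted above. The Python sketch below uses rounded values of the physical constants.

```python
import math

G    = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
HBAR = 1.055e-34      # reduced Planck constant, J s
C    = 2.998e8        # speed of light, m/s
YEAR = 3.156e7        # seconds in a year

def evaporation_time_years(mass_kg):
    """Hawking evaporation time t = 5120*pi*G^2*M^3 / (hbar*c^4), in years."""
    t_seconds = 5120 * math.pi * G**2 * mass_kg**3 / (HBAR * C**4)
    return t_seconds / YEAR

M_SUN = 1.989e30      # mass of the Sun, kg
print(f"{evaporation_time_years(M_SUN):.1e} years")   # about 2.1e+67 years
```

Because the time scales as M³, a black hole of a few times the Sun's mass lives hundreds of times longer still, while a tiny primordial black hole could evaporate within the age of the universe.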

S.Girivel, Department of Physics, SCSVMV

How does thermal imaging work?

Human eyes can see objects that are illuminated by either the sun or
another form of light at specific wavelengths in the visual spectrum. In contrast,
thermal cameras "see" heat, or electromagnetic radiation within the infrared
spectrum, emitted by objects. Infrared (IR) light is electromagnetic radiation
carried by small particles named photons. All objects at temperatures above
absolute zero (−273.15°C or −459.67°F) emit infrared radiation, and this is how heat is
transferred and detected by IR (thermal) cameras. This is why a thermal camera
can operate even in complete darkness.
Though it is not visible to the human eye, infrared radiation can be
felt. If you hold your hand close to the side of a steaming cup of coffee, you
feel the heat radiating from the cup. Thermal cameras can see this radiation and
convert it to an image that we can then see with our eyes.
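Wien's displacement law explains why warm everyday objects show up in the infrared rather than in visible light: the wavelength of peak emission is λ_max = b/T, with b ≈ 2.898×10⁻³ m·K. For skin at roughly 310 K this lands near 9–10 µm, squarely in the long-wave infrared band that thermal cameras use. The temperatures below are illustrative round numbers.

```python
WIEN_B = 2.898e-3   # Wien's displacement constant, metre-kelvin

def peak_wavelength_m(temp_kelvin):
    """Wavelength of peak thermal emission for a black body (Wien's law)."""
    return WIEN_B / temp_kelvin

for label, t in [("human skin", 310.0), ("coffee cup", 350.0), ("the Sun", 5778.0)]:
    print(f"{label}: {peak_wavelength_m(t) * 1e6:.2f} µm")
```

Only something as hot as the Sun peaks in the visible band (around 0.5 µm), which is why room-temperature scenes are invisible to ordinary cameras in the dark but bright to a thermal one.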

How are thermal cameras different from traditional cameras?

A thermal camera produces an image similar to that of a regular camera.
But unlike a regular camera, thermal (infrared) sensors detect electromagnetic
waves of different wavelength from those of light. This gives thermal cameras
the ability to “see” heat, or more technically, infrared radiation. The hotter an
object is, the more infrared radiation it produces.
In other words, thermal imaging allows us to see an object’s heat
radiating off its surface. This way, thermal cameras measure the temperature of
various objects in the frame, and then assign each temperature a shade of a
color. Colder temperatures are often represented as some shade of blue, purple,
or green, while warmer temperatures get a shade of red, orange, or yellow.
Some thermal cameras use a gray scale instead. Night vision footage from
security cameras is always in black and white. There is a good reason for
that: human eyes can differentiate between black and white better than they can
differentiate between other shades of color, such as red or blue. Because of that, most
night vision cameras use a monochrome filter to make it easier for us to
understand what’s in the image. That is also why police helicopters use
greyscale to make suspects stand out.
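The temperature-to-color assignment described above can be sketched with a toy palette; the thresholds and color names below are arbitrary choices, purely for illustration:

```python
# Map each temperature reading to a color name, mimicking how a thermal
# camera assigns cool shades to cold readings and warm shades to hot ones.
def shade(temp_c):
    if temp_c < 10:
        return "blue"
    elif temp_c < 25:
        return "green"
    elif temp_c < 35:
        return "yellow"
    return "red"

readings = [4, 18, 30, 40]  # degrees Celsius
print([shade(t) for t in readings])  # ['blue', 'green', 'yellow', 'red']
```

A real camera does the same thing per pixel, with a much finer palette.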


Where is thermal imaging used?

 Utility and energy companies use it to see where a house might be losing
heat through cracks.
 The police use it to locate suspects from helicopters at night.
 Thermal cameras are used in advanced vehicles to see and classify items
that are difficult to parse with the typical cameras on an autonomous car.
 Weather stations use it to track storms and hurricanes.
 It’s used in the medical field to diagnose different disorders and diseases.
 Thermal imaging cameras are mounted on ships to help the crew spot
icebergs and passengers overboard.

V.Jayapradha, Assistant Professor, Department of ECE, SCSVMV

Energy can be stored in a variety of ways. When you pull back on a slingshot, energy
from your muscles is stored in its elastic bands. When you wind up a toy, energy gets stored in
its spring. Water held behind a dam is, in a sense, stored energy. As that water flows downhill, it
can power a water wheel. Or, it can move through a turbine to generate electricity.

Each energy-storage device has its own advantages and disadvantages

When it comes to circuits and electronic devices, energy is typically stored in one of two
places. The first, a battery, stores energy in chemicals. Capacitors are a less common (and
probably less familiar) alternative. They store energy in an electric field. In either case, the
stored energy creates an electric potential. (One common name for that potential is voltage.)
Electric potential, as the name might suggest, can drive a flow of electrons. Such a flow is called
an electric current. That current can be used to power electrical components within a circuit.
These circuits are found in a growing variety of everyday things, from smartphones to cars to
toys. Engineers choose to use a battery or capacitor based on the circuit they’re designing and
what they want that item to do. They may even use a combination of batteries and capacitors.
The devices are not totally interchangeable, however. Here’s why.


Batteries come in many different sizes. Some of the tiniest power small devices like
hearing aids. Slightly larger ones go into watches and calculators. Still larger ones run
flashlights, laptops and vehicles. Some, such as those used in smartphones, are specially
designed to fit into only one specific device. Others, like AAA and 9-volt batteries, can power
any of a broad variety of items. Some batteries are designed to be discarded the first time they
lose power. Others are rechargeable and can discharge many, many times.

A typical battery consists of a case and three main components. Two are electrodes. The
third is an electrolyte. This is a gooey paste or liquid that fills the gap between the electrodes.

The electrolyte can be made from a variety of substances. But whatever its recipe, that
substance must be able to conduct ions — charged atoms or molecules — without allowing
electrons to pass. That forces electrons to leave the battery via terminals that connect the
electrodes to a circuit.

When the circuit isn’t turned on, the electrons can’t move. This keeps chemical reactions
from taking place on the electrodes. That, in turn, enables energy to be stored until it is needed.

The battery’s negative electrode is called the anode (ANN-ode). When a battery is connected
into a live circuit (one that has been turned on), chemical reactions take place on the anode’s
surface. In those reactions, neutral metal atoms give up one or more electrons. That turns them
into positively charged atoms, or ions. Electrons flow out of the battery to do their work in the
circuit. Meanwhile, the metal ions flow through the electrolyte to the positive electrode, called
a cathode (KATH-ode). At the cathode, metal ions gain electrons as they flow back into the
battery. This allows the metal ions to become electrically neutral (uncharged) atoms once again.

The anode and cathode are usually made of different materials. Typically, the anode
contains a material that gives up electrons very easily, such as lithium. Graphite, a form of
carbon, holds onto electrons very strongly. This makes it a good material for a cathode. Why?
The bigger the difference in the electron-gripping behavior between a battery’s anode and
cathode, the more energy a battery can hold (and later share).

As smaller and smaller products have evolved, engineers have sought to make smaller,
yet still powerful batteries. And that has meant packing more energy into smaller spaces. One
measure of this trend is energy density. That’s calculated by dividing the amount of energy
stored in the battery by the battery’s volume. A battery with high energy density helps to make
electronic devices lighter and easier to carry. It also helps them last longer on a single charge.
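The energy-density calculation described above is just a division of stored energy by volume; a sketch with hypothetical example figures:

```python
# Energy density = stored energy / volume (hypothetical example values).
energy_wh = 10.0   # battery capacity in watt-hours
volume_l = 0.05    # battery volume in litres
density = energy_wh / volume_l
print(density, "Wh/L")  # 200.0 Wh/L
```

Shrinking the volume while keeping the stored energy fixed raises the density, which is exactly the trend the paragraph describes.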

In some cases, however, high energy density can also make devices more dangerous.
News reports have highlighted a few examples. Some smartphones, for instance, have caught
fire. On occasion, electronic cigarettes have blown up. Exploding batteries have been behind
many of these events. Most batteries are perfectly safe. But sometimes there may be internal
defects that cause energy to be released explosively inside the battery. The same destructive
results can occur if a battery is overcharged. This is why engineers must be careful to design
circuits that protect batteries. In particular, batteries must operate only within the range of
voltages and currents for which they have been designed.

Over time, batteries can lose their ability to hold a charge. This happens even with some
rechargeable batteries. Researchers are always looking for new designs to address this problem.
But once a battery can’t be used, people usually discard it and buy a new one. Because some
batteries contain chemicals that aren’t eco-friendly, they must be recycled. This is one reason
engineers have been looking for other ways to store energy. In many cases, they’ve begun
looking at capacitors.


Capacitors can serve a variety of functions. In a circuit, they can block the flow of direct
current (a one-directional flow of electrons) but allow alternating current to pass. (Alternating

currents, like those obtained from household electrical outlets, reverse direction many times each
second.) In certain circuits, capacitors help tune a radio to a particular frequency. But more and
more, engineers are also looking to use capacitors to store energy.

Capacitors have a pretty basic design. The simplest ones are made from two components
that can conduct electricity, which we’ll call the conductors. A gap that doesn’t conduct
electricity usually separates these conductors. When connected to a live circuit, electrons flow in
and out of the capacitor. Those electrons, which have a negative charge, are stored on one of the
capacitor’s conductors. Electrons won’t flow across the gap between them. Still, the electric
charge that builds up on one side of the gap affects the charge on the other side. Yet throughout,
a capacitor remains electrically neutral. In other words, the conductors on each side of the gap
develop equal but opposite charges (negative or positive).

The amount of energy a capacitor can store depends on several factors. The larger the
surface of each conductor, the more charge it can store. Also, the better the insulator in the gap
between the two conductors, the more charge that can be stored.
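Both factors, conductor area and the quality of the insulating gap, appear in the textbook parallel-plate formula C = ε_r ε_0 A / d, and the energy stored at voltage V is E = ½CV². A small sketch with illustrative dimensions:

```python
EPSILON_0 = 8.854e-12  # permittivity of free space, F/m

# Parallel-plate capacitance: C = epsilon_r * epsilon_0 * area / gap
def capacitance(area_m2, gap_m, epsilon_r=1.0):
    return epsilon_r * EPSILON_0 * area_m2 / gap_m

# Energy stored at a given voltage: E = 0.5 * C * V^2
def stored_energy(c_farads, volts):
    return 0.5 * c_farads * volts**2

C = capacitance(0.01, 1e-4)   # 10 cm x 10 cm plates, 0.1 mm air gap
print(stored_energy(C, 5.0))  # ~1.1e-8 J at 5 V
```

Doubling the plate area doubles C, and a better insulator (higher ε_r) raises it further, matching the two factors named above.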

In some early capacitor designs, the conductors were metal plates or disks separated by
nothing but air. But those early designs couldn’t hold as much energy as engineers would have
liked. In later designs, they began to add non-conducting materials in the gap between the
conducting plates. Early examples of those materials included glass or paper. Sometimes a
mineral known as mica (MY-kah) was used. Today, designers may choose ceramics or plastics
as their nonconductors.

Advantages and disadvantages

A battery can store thousands of times more energy than a capacitor having the same
volume. Batteries also can supply that energy in a steady, dependable stream. But sometimes
they can’t provide energy as quickly as it is needed.

Take, for example, the flashbulb in a camera. It needs a lot of energy in a very short time
to make a bright flash of light. So instead of a battery, the circuit in a flash attachment uses a
capacitor to store energy. That capacitor gets its energy from batteries in a slow but steady flow.
When the capacitor is fully charged, the flashbulb’s “ready” light comes on. When a picture is
taken, that capacitor releases its energy quickly. Then, the capacitor begins to charge up again.

Since capacitors store their energy as an electric field rather than in chemicals that undergo reactions, they
can be recharged over and over again. They don’t lose the capacity to hold a charge as batteries tend to
do. Also, the materials used to make a simple capacitor usually aren’t toxic. That means most capacitors
can be tossed into the trash when the devices they power are discarded.

G.Padmanabha Sivakumar, Assistant Professor, Department of EIE, SCSVMV

Smartphone-based augmented reality (AR) and the AR headset explosion will bring 3D
holograms into our lives everywhere. Meanwhile, though, the real AR hologram revolution is
being ignored.

A hologram is a 3D virtual object that isn’t actually “there,” but looks as if it were, either
floating in the air or standing on a nearby desk or table. Think of these hologram displays as the
next step in making digital content more human.

“We see the world in 3D. Our computer and phone screens
show us a 2D version of the world. It’s artificial”

Reality of Holograms: Holograms are as close as your

wallet. Most driver licenses include holograms, as do
ID cards and credit cards. Holograms can even be found
throughout our houses. Holograms come as part of CD,
DVD, Blu-ray, and software packaging, as well as nearly
everything sold as “official merchandise.” But these
security holograms, which discourage forgery, aren’t
impressive. They simply change shape and color when tilted.

However, large-scale holograms, illuminated with lasers or created in a dark room with carefully
placed lighting, are phenomenal. They’re basically two-dimensional surfaces that show very
accurate three-dimensional images of real objects. You don’t even have to wear special glasses
like when you go to a 3D movie. Holograms have surprising features. For example, if you cut
a hologram in half, each half contains whole views of the entire holographic image. The same is
true if you cut out a small piece: even a small fragment will still contain the entire picture.

Understanding the principles behind holograms helps you understand how the hologram, your
brain, and light waves work together to make clear, 3D pictures.

Working of Hologram

To make a hologram, you record an object (or person) in a clean environment with a laser beam
and apply the information to a recording medium that will clean up and clarify the image. The
laser beam is split in two and redirected with mirrors. One of the beams is directed at the object.
A portion of the light reflects off the object and is recorded on the medium. The second beam
(reference beam), is directed toward the recording medium. This means the beams coordinate to
make a precise image in the hologram location.

These two laser beams interfere and intersect with each other. The interference pattern is
imprinted on the recording medium to recreate the 3D image.
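The interference of the two beams follows the standard two-beam intensity formula, I = I1 + I2 + 2·sqrt(I1·I2)·cos(Δφ); a sketch in arbitrary units:

```python
import math

# Intensity of two coherent beams with a given phase difference:
# I = I1 + I2 + 2*sqrt(I1*I2)*cos(delta_phi)
def interference(i1, i2, delta_phi):
    return i1 + i2 + 2 * math.sqrt(i1 * i2) * math.cos(delta_phi)

print(interference(1.0, 1.0, 0.0))      # constructive: 4.0
print(interference(1.0, 1.0, math.pi))  # destructive: ~0.0
```

It is this bright/dark pattern, varying across the recording medium, that encodes the 3D information.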

There are also developers like Looking Glass Factory that are working on a display product
called HoloPlayer that's currently available for
60000 INR (PC dependent) or a 240000 INR
version (built-in PC).

A HoloPlayer displays 3D holograms on a

sheet of glass, so you don't need special eyewear
to view them. The device creates 3D hologram
objects that can be manipulated using in-air
gestures. When you look straight on, you see the
front of the image. Tilt your head to the side, and
you see the side of the image. These can be
manipulated with natural hand gestures - reaching
out, pretending to grab and turning will rotate the 3D objects. In-the-air swiping gestures also
work as expected, taking you to the next image in a series.

“A flexible holographic phone with an OLED display might be the future of smart phones.”

“True holograms give medical professionals a real sense of the human anatomy”

In Companies & Manufacturers perspective- “A 3D hologram features a great many advantages

that promise to give companies a significant advantage over their competitors and clearly set
their products apart from the competition. 3D holograms are amazing eye catchers that attract the
undivided attention of potential customers and the product presented remains embedded in the
memory of the observer for a long time. They also create more contacts, make it easier to enter
into a dialog with potential customers and guarantee that more leads are generated.”

Conclusion: Holograms used to be the stuff of science fiction that was “coming to a theater near
you”. However, the practical uses of holographic technology have eclipsed the film industry and
become a commonplace feature in our everyday lives. We are only seeing the beginning of the
usefulness of holograms and as the innovators and developers continue to improve the
technology, holograms will become an even larger part of society.

*** ****

Dr.N.R.Ananthanarayanan,Associate Professor, Department of CSA,SCSVMV

Many vision science studies employ machine learning, especially the version called “deep
learning.” Neuroscientists use machine learning to decode neural responses. Perception scientists
try to understand how living organisms recognize objects. To them, deep neural networks offer
benchmark accuracies for recognition of learned stimuli. Originally machine learning was
inspired by the brain. Today, machine learning is used as a statistical tool to decode brain
activity. Tomorrow, deep neural networks might become our best model of brain function. This
overview highlights deep learning, which is a branch of machine learning, in turn a branch of
artificial intelligence. Artificial intelligence, basically, is the branch of computer science that
attempts to solve tasks with computers for which seemingly human-level intelligence would be
required. Deep learning is itself a sub-branch of machine learning, where the above-mentioned
parametric computational model is an artificial neural network, which is inspired by the brain.
It consists of interconnected atomic processing units, called neurons. The signal for which some
properties are to be predicted is interpreted as stimulations of some of the neurons, the so-called
input neurons. The neurons can communicate the stimulations to others depending on the
strength of their connections. The training process in deep learning basically comprises finding
the right strengths of the connections, such that some neurons interpreted as output produce the
desired targets when the input neurons are stimulated with a signal. Deep learning is a subfield of
machine learning based on a set of algorithms inspired by the structure and function of the
brain. TensorFlow is the second machine learning framework that Google created and used to
design, build, and train deep learning models. You can use the TensorFlow library to do
numerical computations, which in itself doesn’t seem all too special, but these computations are
done with data flow graphs. In these graphs, nodes represent mathematical operations, while the
edges represent the data, which usually are multidimensional data arrays or tensors, that are
communicated between these nodes.
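The connection-strength idea described above can be reduced to a single artificial neuron; a minimal sketch (the ReLU activation and the specific weights are illustrative choices, not from the text):

```python
# One artificial neuron: a weighted sum of inputs plus a bias,
# passed through an activation function (ReLU here).
def relu(x):
    return max(0.0, x)

def neuron(inputs, weights, bias):
    return relu(sum(i * w for i, w in zip(inputs, weights)) + bias)

print(neuron([1.0, 2.0], [0.5, -0.25], 0.1))  # 0.1
```

Training, as described above, is the process of adjusting the `weights` (and `bias`) until outputs match the desired targets.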

Introduction to Tensorflow
TensorFlow is a powerful data flow oriented machine learning library created by the Brain Team
of Google and made open source in 2015. It is designed to be easy to use and widely applicable
to both numeric and neural network oriented problems as well as other domains. Basically,
TensorFlow is a low-level toolkit for doing complicated math and it targets researchers who
know what they’re doing to build experimental learning architectures, to play around with them
and to turn them into running software. Generally, it is a programming system in which one can
represent computations as graphs. Nodes in the graph represent math operations, and the edges
represent multidimensional data arrays (tensors) communicated between them.

Introducing Tensors

Understanding tensors requires a combination of linear algebra and vector calculus. A tensor
is the mathematical representation of a physical entity that may be characterized by magnitude

and multiple directions. And, just like you represent a scalar with a single number and a vector
with a sequence of three numbers in a 3-dimensional space, a tensor can be
represented by an array of 3^R numbers in a 3-dimensional space. The “R” in this notation
represents the rank of the tensor: this means that in a 3-dimensional space, a second-rank tensor
can be represented by 3 to the power of 2, or 9, numbers. In an N-dimensional space, scalars will
still require only one number, while vectors will require N numbers, and tensors will require N^R
numbers.
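The component count N^R can be checked directly:

```python
# A rank-R tensor in N-dimensional space has N**R components.
N = 3
for R in (0, 1, 2):
    print(f"rank {R}: {N**R} components")
# rank 0 (scalar): 1, rank 1 (vector): 3, rank 2: 9
```

A second-rank tensor in 3-D space thus needs 3² = 9 numbers, matching the text.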
Tensorflow APIs for Open Source Software
TensorFlow provides APIs for Python, C++, Haskell, Java, Go and Rust, and there’s also a third-party package for R
called tensorflow.

Ecosystem of Tensorflow

As can be seen from the above representation, TensorFlow integrates well and has dependencies
that include GPU processing, Python and C++, and you can use it integrated with container
software like Docker as well. Now, as the name suggests, it provides primitives for defining
functions on tensors and automatically computing their derivatives. Basically, tensors are higher-
dimensional arrays which are used in computer programming to represent a multitude of data in
the form of numbers. There are other n-d array libraries available, such as NumPy, but
TensorFlow stands apart from them as it offers methods to create tensor functions and
automatically compute derivatives.

Tensorflow using Python Code

# Import `tensorflow`
import tensorflow as tf

# Initialize two constants
x1 = tf.constant([1, 2, 3, 4])
x2 = tf.constant([5, 6, 7, 8])

# Multiply
result = tf.multiply(x1, x2)

# Initialize the Session
sess = tf.Session()

# Print the result
print(sess.run(result))

# Close the session
sess.close()
Tensorflow Applications
There are many applications of machine learning. TensorFlow allows you to explore the majority
of them, including sentiment analysis, Google Translate, text summarization and, the one for which
it is most famous, image recognition, which is used by major companies all over the world,
including Airbnb, eBay, Dropbox, Snapchat, Twitter, Uber, SAP, Qualcomm, IBM, Intel, and of
course, Google, Facebook, Instagram, and even Amazon, for various purposes. These are all
TensorFlow applications.
Features of Tensorflow
Below, we discuss the features of TensorFlow. TensorFlow has APIs for Matlab and C++
and has wide language support. With each passing day, researchers are working on making it
better, and recently, at the latest TensorFlow Summit, tensorflow.js, a JavaScript library for
training and deploying machine learning models, was announced.
Advantages of Tensorflow
 Tensorflow has a responsive construct as you can easily visualize each and every part of
the graph.
 It has platform flexibility, meaning it is modular and some parts of it can be standalone
while the others coalesced.
 It is easily trainable on CPU as well as GPU for distributed computing.
 TensorFlow has auto-differentiation capabilities, which benefit gradient-based machine
learning algorithms: you can compute derivatives of values with respect to other
values, which results in a graph extension.
 Also, it has advanced support for threads, asynchronous computation, and queues.
 It is customizable and open source.
Limitations of Tensorflow
 TensorFlow has GPU memory conflicts with Theano if imported in the same scope.
 No support for OpenCL
 Requires prior knowledge of advanced calculus and linear algebra along with a pretty
good understanding of machine learning.

References

1. TensorFlow for Beginners, by DataFlair Team, February 12, 2019.
2. TensorFlow Tutorial for Beginners, by Karlijn Willems, January 16, 2019.
3. Deep Learning: Using Machine Learning to Study Biological Vision, by Najib
J. Majaj and Denis G. Pelli. Journal of Vision, December 2018, Vol. 18, 2. doi:10.1167/18.13.2

P.Purandhar Sri Sai , BE-I Year, Section:S4, Department of CSE, SCSVMV

One of the oldest civilizations in the world, the Indian civilization has a strong
tradition of science and technology. Ancient India was a land of sages and seers as well as
a land of scholars and scientists. Research has shown that from making the best steel in the
world to teaching the world to count, India was actively contributing to the field of science
and technology centuries long before modern laboratories were set up. Many theories and
techniques discovered by the ancient Indians have created and strengthened the
fundamentals of modern science and technology. Some of these pioneering discoveries
are unknown to many people.

Here are some of the contributions, made by ancient Indians to the world of science and
technology, that will make you feel proud to be an Indian.

1.The Idea of Zero

Little needs to be written about the
mathematical digit ‘zero’, one of the most
important inventions of all time. Mathematician
Aryabhata was the first person to create a
symbol for zero and it was through his efforts
that mathematical operations like addition and
subtraction started using the digit, zero. The
concept of zero and its integration into the
place-value system also enabled one to write
numbers, no matter how large, by using only
ten symbols.

2. Fibonacci Numbers
The Fibonacci numbers and their
sequence first appear in Indian
mathematics as mātrāmeru, mentioned
by Pingala in connection with the
Sanskrit tradition of prosody. Later on,
the methods for the formation of these
numbers were given by mathematicians
Virahanka, Gopala and Hemacandra ,
much before the Italian mathematician
Fibonacci introduced the fascinating
sequence to Western European
mathematics.
3. Smelting of Zinc

India was the first to smelt zinc by the distillation

process, an advanced technique derived from a
long experience of ancient alchemy. The ancient
Persians had also attempted to reduce zinc oxide in
an open furnace but had failed. Zawar in the Tiri
valley of Rajasthan is the world’s first known
ancient zinc smelting site. The distillation
technique of zinc production goes back to the 12th
Century AD and is an important contribution of
India to the world of science.

4. Seamless Metal Globe

Considered one of the most remarkable feats

in metallurgy, the first seamless celestial globe
was made in Kashmir by Ali Kashmiri
ibnLuqman in the reign of the Emperor Akbar.
In a major feat in metallurgy, Mughal
metallurgists pioneered the method of lost-
wax casting to make twenty other globe
masterpieces in the reign of the Mughal
Empire. Before these globes were
rediscovered in the 1980s, modern metallurgists
believed that it was technically impossible to
produce metal globes without any seams, even
with modern technology.

5. Plastic Surgery

Written by Sushruta in 6th Century

BC, SushrutaSamhita is considered to
be one of the most comprehensive
textbooks on ancient surgery. The text
mentions various illnesses, plants,
preparations and cures along with
complex techniques of plastic surgery.
The SushrutaSamhita ’s most well-
known contribution to plastic surgery
is the reconstruction of the nose,
known also as rhinoplasty.

6. Cataract Surgery

The first cataract surgery is said to have been

performed by the ancient Indian physician
Sushruta, way back in 6th century BCE. To
remove the cataract from the eyes, he used a
curved needle, JabamukhiSalaka, to loosen
the lens and push the cataract out of the field
of vision. The eye would then be bandaged
for a few days till it healed completely.
Sushruta’s surgical works were later
translated into the Arabic language and, through
the Arabs, his works were introduced to the
Western world.
7. Ayurveda

Long before the birth of Hippocrates, Charaka

authored a foundational text, Charakasamhita, on
the ancient science of Ayurveda. Referred to as
the Father of Indian Medicine, Charaka was
the first physician to present the concepts of
digestion, metabolism and immunity in his book.
Charaka’s ancient manual on preventive medicine
remained a standard work on the subject for two
millennia and was translated into many foreign
languages, including Arabic and Latin.

8. Iron-Cased Rockets

The first iron-cased rockets were developed in

the 1780s by Tipu Sultan of Mysore who
successfully used these rockets against the larger
forces of the British East India Company during
the Anglo-Mysore Wars. He crafted long iron
tubes, filled them with gunpowder and fastened
them to bamboo poles to create the predecessor
of the modern rocket. With a range of about 2
km, these rockets were the best in the world at
that time and caused as much fear and confusion
as damage. Due to them, the British suffered one
of their worst ever defeats in India at the hands
of Tipu.


S.Nandhini, Department of Physics, SCSVMV


The Arduino Uno is a microcontroller board based on the ATmega328 . It has 14 digital
input/output pins (of which 6 can be used as PWM outputs), 6 analog inputs, a 16 MHz crystal
oscillator, a USB connection, a power jack, an ICSP header, and a reset button. It contains
everything needed to support the microcontroller; simply connect it to a computer with a USB
cable, or power it with an AC-to-DC adapter or battery, to get started.

The hardware is an open-source board designed around an 8-bit Atmel AVR
microcontroller or a 32-bit Atmel ARM. Current models consist of a USB interface, 6
analog input pins and 14 digital I/O pins, which allow the user to attach various extension boards.

“Uno” means one in Italian and is named to mark the upcoming release of Arduino 1.0.
The Uno and version 1.0 will be the reference versions of Arduino, moving forward. The Uno is
the latest in a series of USB Arduino boards, and the reference model for the Arduino platform;
for a comparison with previous versions, see the index of Arduino boards.

The maximum length and width of the Uno PCB are 2.7 and 2.1 inches respectively, with
the USB connector and power jack extending beyond the former dimension. Four screw holes
allow the board to be attached to a surface or case. Note that the distance between digital pins 7
and 8 is 160 mil (0.16"), not an even multiple of the 100 mil spacing of the other pins.


 External Power: It is used to power the board if the USB connector is not used. An AC
adapter (9 volts, 2.1 mm barrel tip, center positive) could be used for providing external power.
If there is no power at the power socket, the Arduino will use power from the USB
socket. It is safe to have power at both the power socket and the USB socket.

 Digital Pins (I/O): The Arduino Uno has 14 digital pins (0 to 13), of which 6 are
PWM (~). These pins can be either inputs or outputs, but we need to specify this in the Arduino
sketch (Arduino program). The PWM (Pulse Width Modulated) pins act as normal
digital pins and are also used to control some functions, for example the dimming of an
LED or the direction of a servo motor. Both digital inputs and digital outputs can read
one of two values, either HIGH or LOW.

 Analog Pins: The analog pins (0 to 5) act as inputs which are used to read the voltage of
analog sensors such as temperature sensors, gas sensors, etc. Unlike digital pins, which can only
read one of two values (HIGH or LOW), the analog inputs can measure 1024 different
voltage levels.

 ATmega Microcontroller: The Arduino uses the ATmega328 microcontroller. It is a
single-chip microcontroller created by Atmel. This chip works well with the Arduino IDE. If
damaged, this controller can be easily replaced. The ATmega328 has 32 KB of flash memory for
storing code (of which 0.5 KB is used for the bootloader). It also has 2 KB of SRAM and 1 KB of
EEPROM.

 3.3V Pin: A 3.3 volt supply generated by the on-board regulator. Maximum current draw is
50 mA.

 5V Pin: The regulated power supply used to power the microcontroller and other
components on the board. This can come either from an on-board regulator, or be supplied
by USB or another regulated 5V supply.

 Reset Button: It is used to reset the microcontroller. Pushing this button will temporarily
connect the reset pin to ground and restart any code that is loaded on the Arduino.

 USB: The USB port is used to power the board from the computer's USB port and also to
transfer the program code from computer into the Arduino microcontroller.
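The 1024 analog levels mentioned above map linearly onto the 0-5 V reference; a quick conversion sketch (Python here, just to illustrate the arithmetic the board performs):

```python
# Convert a 10-bit Arduino ADC reading (0-1023) to volts on a 5 V reference.
def adc_to_volts(reading, vref=5.0):
    return reading * vref / 1023

print(adc_to_volts(1023))           # 5.0 (full scale)
print(round(adc_to_volts(512), 2))  # 2.5 (about mid scale)
```

In an actual sketch, the same scaling is applied to the value returned by `analogRead()`.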


1. Microcontroller- ATmega328
2. Operating Voltage- 5V
3. Input Voltage (recommended) - 7to12V
4. Input Voltage(limit) - 6to20V
5. Digital I/O Pins-14 (of which 6 provide PWM output)
6. Analog Input Pins-6
7. DC Current per I/O Pin-40 mA
8. DC Current for 3.3V Pin-50 mA
9. Flash Memory-32 KB (ATmega328) of which 0.5 KB used by boot loader
10.SRAM-2 KB (ATmega328)
11.EEPROM-1 KB (ATmega328)
12.Clock Speed-16 MHz

Arduino programs are written in the Arduino Integrated Development Environment
(IDE). Arduino IDE is a special software running on your system that allows you to write
sketches (synonym for program in Arduino language) for different Arduino boards. The
Arduino programming language is based on a very simple hardware programming language
called processing, which is similar to the C language. After the sketch is written in the
Arduino IDE, it should be uploaded on the Arduino board for execution.

The first step in programming the Arduino board is downloading and installing the Arduino
IDE. The open source Arduino IDE runs on Windows, Mac OS X, and Linux. Download the
Arduino software (depending on your OS) from the official website.


1.Arduino Based Home Automation System

The project is designed by using Arduino uno board for the development of home automation
system with Bluetooth which is remotely controlled and operated by an Android OS smart
phone. Houses are becoming smarter and well developed by using such kind of advanced
technologies. Modern houses are gradually increasing the way of design by shifting to
centralized control system with remote controlled switches instead of conventional switches.

2.Arduino based Auto Intensity Control of Street Lights

Because the intensity of High Intensity Discharge (HID) lamps cannot be controlled, power
saving is not possible in street lights that use these lamps, even though traffic density on the
roads decreases from the peak hours of the night towards early morning.
This system overcomes the problem by controlling the intensity of LED street lights,
gradually reducing their brightness by controlling the voltage applied to the lamps. An
Arduino board is programmed so that the voltage applied to the lamps decreases gradually
through the late night, and the lamps shut down completely in the morning.
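The dimming logic described above can be sketched as follows. This is an illustrative Python sketch of the control schedule only; the hour thresholds and duty-cycle values are assumptions, not taken from the original project, and on a real Arduino the returned value would be passed to analogWrite() on a PWM pin:

```python
def led_duty_cycle(hour):
    """Return an illustrative PWM duty cycle (0-255) for a given hour (24 h clock),
    mirroring the kind of schedule an Arduino street-light sketch would run."""
    if 18 <= hour < 22:          # evening peak traffic: full brightness
        return 255
    if 22 <= hour or hour < 2:   # late night: reduced intensity
        return 150
    if 2 <= hour < 5:            # early morning: minimum intensity
        return 60
    return 0                     # daytime: lamps off

print([led_duty_cycle(h) for h in (19, 23, 3, 9)])  # [255, 150, 60, 0]
```

The stepped schedule is the simplest choice; a smoother design could interpolate the duty cycle between the thresholds.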

3. Web Server using Arduino

4. Line Follower Bot

5. Obstacle Avoiding Robot

6. Weather Sensing


The Arduino Uno is now very familiar to electronics students. It is considered fundamental
for an electronics engineer to understand Arduino and to have completed projects using the
Arduino Uno.



G.Poornima, Assistant Professor, Department of ECE, SCSVMV

Like most professions in India, the field of science is very much male-dominated.
When asked to name an Indian scientist, most of us can only think of A.P.J. Abdul Kalam or
Srinivasa Ramanujan. It's not often that someone names an Indian woman scientist. This
might make it seem like there aren't many women contributing to the field of science, but
that is not the case. Many women, over the years, have made immense contributions to
science, and have also paved a path for others to follow. Let us look at seven Indian women
scientists who have broken stereotypes and are an inspiration to all.

1. Mangala Narlikar

An Indian mathematician, Mangala has worked in the fields of both simple arithmetic
and advanced mathematics at the University of Pune and the University of Mumbai. One of
the few women mathematics researchers in the country, she completed her PhD 16 years
after her marriage. Having worked at the Tata Institute of Fundamental Research (TIFR),
Mangala has published several books on mathematical topics in both English and Marathi. She
was honoured with the Vishwanath Parvati Gokhale Award in 2002 for one of her books in
Marathi. Also a teacher, she is popularly known for the way she makes mathematics an
interesting subject for her students.

2. Aditi Pant

Aditi is an oceanographer, and the first Indian woman to travel to Antarctica, as a part
of the 1983 Indian expedition, to study geology and oceanography. Inspired by Alister
Hardy’s book The Open Sea, she pursued her MS in Marine Sciences, with a US government
scholarship, at the University of Hawaii.

Aditi completed her PhD at London’s Westfield College, and returned to India to join
the National Institute of Oceanography in Goa. She has conducted coastal studies and has
travelled the entire Indian west coast.

3. Indira Hinduja

An Indian gynaecologist and infertility specialist based in Mumbai, Indira was the
first to deliver a test-tube baby in India. She is a pioneer of the gamete intra-fallopian transfer
(GIFT) technique, and has also delivered India's first GIFT baby. She is also known for
the oocyte donation technique she developed for premature ovarian failure and
menopausal patients.

4. Paramjit Khurana

Paramjit is a scientist in the fields of Plant Biotechnology, Genomics, and Molecular
Biology. She has published over 125 scientific papers, and is a professor at the Plant
Molecular Biology Department of the University of Delhi. Paramjit has received several
awards, including the 'Certificate of Honour' from the Gantavaya Sansthan on the occasion of
International Women's Day in 2011.

5. Sunetra Gupta

A novelist and a professor of Theoretical Epidemiology at Oxford University, Sunetra
has a passion for studying infectious agents that cause diseases such as influenza and malaria,
among others. She has been honoured by the Zoological Society of London with its
Scientific Medal, and has also received the Royal Society Rosalind Franklin Award for her
contribution to science.

6. Nandini Harinath

A rocket scientist at the Indian Space Research Organisation (ISRO) Satellite Centre
in Bengaluru, Nandini has worked on 14 missions over her 20-year career. She was the
deputy operations director for the Mangalyaan mission, and says that her first exposure to
science was the cult television series Star Trek.

7. Rohini Godbole

Rohini is an Indian physicist and a professor at the Centre for High Energy Physics of
the Indian Institute of Science in Bengaluru. She has worked for over three decades on
particle phenomenology, and is particularly interested in exploring the Standard Model of
particle physics (SM). Rohini is an elected fellow of all three Indian science academies
and of the Science Academy of the Developing World.

S.Vijayabarathi, Assistant Professor, Department of Mathematics, SCSVMV

D4 is the group of symmetries of the square with vertices 1, 2, 3, 4.

We use ρi for rotations, μi for mirror images in perpendicular bisectors of sides, and δi
for diagonal flips. There are 8 permutations involved here, written so that the second row
of each permutation lists the images of 1, 2, 3, 4 in order:

ρ0 : 1 2 3 4 → 1 2 3 4
ρ1 : 1 2 3 4 → 2 3 4 1
ρ2 : 1 2 3 4 → 3 4 1 2
ρ3 : 1 2 3 4 → 4 1 2 3
μ1 : 1 2 3 4 → 2 1 4 3
μ2 : 1 2 3 4 → 4 3 2 1
δ1 : 1 2 3 4 → 3 2 1 4
δ2 : 1 2 3 4 → 1 4 3 2

Note that D4 is nonabelian.
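This can be verified computationally. A small illustrative Python snippet (not part of the original article), encoding each symmetry as the tuple of images of vertices 1 to 4:

```python
# Each symmetry is a tuple p where p[i-1] is the image of vertex i.
rho1 = (2, 3, 4, 1)   # rotation by 90 degrees counterclockwise
mu1  = (2, 1, 4, 3)   # mirror flip in a perpendicular bisector of sides

def compose(f, g):
    """(f o g)(x) = f(g(x)) for the vertices 1..4."""
    return tuple(f[g[x] - 1] for x in range(4))

print(compose(rho1, mu1))  # (3, 2, 1, 4), which is delta1
print(compose(mu1, rho1))  # (1, 4, 3, 2), which is delta2
assert compose(rho1, mu1) != compose(mu1, rho1)  # so D4 is nonabelian
```

Since composing ρ1 and μ1 in the two orders gives the two different diagonal flips, the group cannot be abelian.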

Let G be the group D4 = {ρ0, ρ1, ρ2, ρ3, μ1, μ2, δ1, δ2} of symmetries of the square. Label
the square with vertices 1, 2, 3, 4; sides S1, S2, S3, S4; diagonals d1, d2; vertical and horizontal
axes m1, m2; centre point C; and the midpoint of side Si as Pi.

(Figure: the square with vertices 1, 2, 3, 4, diagonals d1, d2, axes m1, m2, and side
midpoints P1, P2, P3, P4.)

ρi corresponds to rotating the square counterclockwise through iπ/2 radians, μi corresponds
to flipping about the axis mi, and δi to flipping about the diagonal di. We let

X = {1, 2, 3, 4, S1, S2, S3, S4, m1, m2, d1, d2, C, P1, P2, P3, P4}.

Then X can be regarded as a D4-set.

       1  2  3  4   S1 S2 S3 S4   m1 m2   d1 d2   C   P1 P2 P3 P4
ρ0 :   1  2  3  4   S1 S2 S3 S4   m1 m2   d1 d2   C   P1 P2 P3 P4
ρ1 :   2  3  4  1   S2 S3 S4 S1   m2 m1   d2 d1   C   P2 P3 P4 P1
ρ2 :   3  4  1  2   S3 S4 S1 S2   m1 m2   d1 d2   C   P3 P4 P1 P2
ρ3 :   4  1  2  3   S4 S1 S2 S3   m2 m1   d2 d1   C   P4 P1 P2 P3
μ1 :   2  1  4  3   S1 S4 S3 S2   m1 m2   d2 d1   C   P1 P4 P3 P2
μ2 :   4  3  2  1   S3 S2 S1 S4   m1 m2   d2 d1   C   P3 P2 P1 P4
δ1 :   3  2  1  4   S2 S1 S4 S3   m2 m1   d1 d2   C   P2 P1 P4 P3
δ2 :   1  4  3  2   S4 S3 S2 S1   m2 m1   d1 d2   C   P4 P3 P2 P1

This table completely describes the action of D4 on X.
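The rows of such a table can be generated from the vertex images alone. As an illustrative sketch (assuming the labelling in which side Si joins vertices i and i mod 4 + 1), the images of the sides under ρ1 come out as:

```python
rho1 = (2, 3, 4, 1)  # image of vertex i is rho1[i-1]

# Side S_i joins vertices i and (i mod 4) + 1. The image of a side under a
# symmetry is the side joining the images of its two endpoints.
sides = {frozenset({i, i % 4 + 1}): f"S{i}" for i in range(1, 5)}

def act_on_side(p, i):
    a, b = p[i - 1], p[i % 4]  # images of the two endpoints of S_i
    return sides[frozenset({a, b})]

print([act_on_side(rho1, i) for i in range(1, 5)])  # ['S2', 'S3', 'S4', 'S1']
```

The printed list matches the S1 to S4 entries of the ρ1 row; the same idea recovers the images of the midpoints Pi, since each Pi travels with its side Si.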

P. Srinanda, BE-I Year Section:S4, Department of CSE, SCSVMV

Solid mechanics is the branch of continuum mechanics that studies the behavior of solid
materials, especially their motion and deformation under the action of forces, temperature
changes, phase changes, and other external or internal agents.

Solid mechanics is fundamental for civil, aerospace, nuclear, biomedical and mechanical
engineering, for geology, and for many branches of physics such as materials science. It has
specific applications in many other areas, such as understanding the anatomy of living beings,
and the design of dental prostheses and surgical implants. One of the most common practical
applications of solid mechanics is the Euler-Bernoulli beam equation. Solid mechanics
extensively uses tensors to describe stresses, strains, and the relationship between them.
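As a concrete instance of the Euler-Bernoulli beam theory mentioned above: for a cantilever of length L with a point load F at its free end, the theory gives a tip deflection of F L^3 / (3 E I), where E is the modulus of elasticity and I the second moment of area. A minimal numerical sketch (the material and section values below are illustrative assumptions):

```python
def cantilever_tip_deflection(force_n, length_m, e_pa, i_m4):
    """Euler-Bernoulli tip deflection of an end-loaded cantilever: F*L^3 / (3*E*I)."""
    return force_n * length_m**3 / (3.0 * e_pa * i_m4)

# Illustrative values: a 2 m steel-like beam (E = 200 GPa) with I = 8e-6 m^4,
# loaded with 1 kN at the free end.
d = cantilever_tip_deflection(1000.0, 2.0, 200e9, 8e-6)
print(f"tip deflection = {d * 1000:.3f} mm")  # 1.667 mm
```

The cubic dependence on length is why doubling a beam's span increases its tip deflection eightfold for the same load.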

Relationship to continuum mechanics

As shown in the following classification, solid mechanics occupies a central place within
continuum mechanics. The field of rheology presents an overlap between solid and fluid
mechanics.

Continuum mechanics: the study of the physics of continuous materials.

Solid mechanics: the study of the physics of continuous materials with a defined rest shape.
Elasticity describes materials that return to their rest shape after applied stresses are removed.
Plasticity describes materials that permanently deform after a sufficient applied stress.
Rheology is the study of materials with both solid and fluid characteristics.

Fluid mechanics: the study of the physics of continuous materials which deform when
subjected to a force.
Newtonian fluids undergo strain rates proportional to the applied shear stress.
Non-Newtonian fluids do not undergo strain rates proportional to the applied shear stress.

Response models:

A material has a rest shape, and its shape departs from the rest shape due to stress. The
amount of departure from the rest shape is called deformation; the ratio of deformation to
original size is called strain. If the applied stress is sufficiently low (or the imposed strain is
small enough), almost all solid materials behave in such a way that the strain is directly
proportional to the stress; the coefficient of proportionality is called the modulus of elasticity.
This region of deformation is known as the linearly elastic region.

It is most common for analysts in solid mechanics to use linear material models, due to ease of
computation. However, real materials often exhibit non-linear behavior. As new materials are
used and old ones are pushed to their limits, non-linear material models are becoming more
common.

There are four basic models that describe how a solid responds to an applied stress:

Elasticity – When an applied stress is removed, the material returns to its undeformed state.
Linearly elastic materials, those that deform proportionally to the applied load, can be described
by the linear elasticity equations such as Hooke's law.

Viscoelasticity – These are materials that behave elastically, but also have damping: when the
stress is applied and removed, work has to be done against the damping effects and is converted
in heat within the material resulting in a hysteresis loop in the stress–strain curve. This implies
that the material response has time-dependence.

Plasticity – Materials that behave elastically generally do so when the applied stress is less than
a yield value. When the stress is greater than the yield stress, the material behaves plastically and
does not return to its previous state. That is, deformation that occurs after yield is permanent.

Thermoelasticity – There is coupling of mechanical with thermal responses. In general,
thermoelasticity is concerned with elastic solids under conditions that are neither isothermal
nor adiabatic. The simplest theory involves Fourier's law of heat conduction, as opposed to
advanced theories with physically more realistic models.
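The elastic and plastic responses above can be combined in the simplest possible material law, an elastic-perfectly-plastic model: stress follows Hooke's law up to the yield stress and is then capped. This is only an illustrative sketch with assumed values; real plasticity models also track the permanent strain that remains on unloading.

```python
def stress(strain, e_modulus, yield_stress):
    """Elastic-perfectly-plastic stress: linear (Hooke's law) up to the
    yield stress, then constant at the yield stress."""
    s = e_modulus * strain
    return max(-yield_stress, min(yield_stress, s))

E = 200e9    # Pa, illustrative steel-like modulus of elasticity
sy = 250e6   # Pa, illustrative yield stress

print(stress(0.001, E, sy) / 1e6)  # 200.0 MPa: still in the linearly elastic region
print(stress(0.01, E, sy) / 1e6)   # 250.0 MPa: capped at yield, material has yielded
```

The yield strain here is sy / E = 0.00125, so any strain beyond that produces the same capped stress.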


Timeline:

1452–1519: Leonardo da Vinci made many contributions

1638: Galileo Galilei published the book "Two New Sciences" in which he examined the failure
of simple structures

1660: Hooke's law by Robert Hooke

1687: Isaac Newton published "Philosophiae Naturalis Principia Mathematica" which contains
Newton's laws of motion

1750: Euler–Bernoulli beam equation

1700–1782: Daniel Bernoulli introduced the principle of virtual work

1707–1783: Leonhard Euler developed the theory of buckling of columns

1826: Claude-Louis Navier published a treatise on the elastic behaviors of structures

1873: Carlo Alberto Castigliano presented his dissertation "Intorno ai sistemi elastici", which
contains his theorem for computing displacement as partial derivative of the strain
energy. This theorem includes the method of least work as a special case
1874: Otto Mohr formalized the idea of a statically indeterminate structure.

1922: Timoshenko corrected the Euler-Bernoulli beam equation

1936: Hardy Cross' publication of the moment distribution method, an important innovation in
the design of continuous frames.

1941: Alexander Hrennikoff solved the discretization of plane elasticity problems using a lattice

1942: R. Courant divided a domain into finite subregions

1956: J. Turner, R. W. Clough, H. C. Martin, and L. J. Topp's paper on the "Stiffness and
Deflection of Complex Structures" introduces the name "finite-element method".



Rohan Tiwari, BE-I Year Section:S4, Department of CSE

Psychology is the study of the mind: its thoughts, feelings, and behavior. It is an academic
discipline which involves the scientific study of mental faculties, functions, and behaviors.

Psychology deals mainly with humans but also sometimes with nonhuman animals.
Because psychology may be difficult to study as a whole, psychologists often only look at small
parts of it at one time. Psychology has much in common with many other fields, and overlaps
with many of them. Some of these fields are medicine, ethology, and computer science.

In this field, a professional practitioner or researcher is called a psychologist and is a
social, behavioral, or cognitive scientist. Psychologists attempt to understand the role of
mental functions in individual and social behavior. They also explore the physiological and
neurobiological processes which underlie cognitive functions and behaviors.

Psychology has been split up into smaller parts called
branches. These are subjects in psychology that try to
answer a particular group of questions about how people
think. Some branches of psychology that are often
studied are:
 Abnormal psychology tries to work out what
differences there are between people who are healthy and
people who have a mental illness.
 Clinical psychology is about finding the best way
to help people to recover from mental illness.
 Cognitive psychology looks at how people think,
use language, remember and forget, and solve problems.
 Cross-cultural psychology looks at different ways of living and views of the world.
 Developmental psychology is interested in how people develop and change through their
lives. This includes what used to be called "child psychology".

 Educational or school psychology tests and helps students to learn and make friends.
 Evolutionary psychology studies how evolution may have shaped the way people think
and do things.
 Neuropsychology looks at the brain and how it works to make people the way they are.

 Motivational psychology studies the root causes of action.
 Perceptual psychology asks questions about how people make sense of what they see and
hear, and how they use that information to get around.
 Social psychology looks into how groups of people work together and how societies are
built.

Abnormal psychology:
Abnormal psychology is a part of psychology. People who study abnormal psychology
are psychologists. They are scientists that investigate the mind using the scientific method.
Different cultures tend to have different ideas of how strange (abnormal) any behavior is
considered. This tends to change over time within cultures, so people that live in a country at one
time in history might consider abnormal what people who live in the same country consider
normal years before or years later.

Abnormal psychology is often used to

understand or to treat people with mental
disorders to make life better for them. This is
because abnormal behavior is often defined
as when someone is not able to change how
they behave to fit different settings. This is
often also used to define some mental
disorders. When someone cannot change
their behavior to fit the people and situations
around them when they need to, it can cause suffering, and the person may be uncomfortable
when around people. Their behavior can be unreasonable and hard to understand. Their behavior
can even be dangerous.
Not everyone with a mental disorder is unable to adjust to their surroundings. People who
can change to fit the environment around them more easily than most can also have
behavior that is considered abnormal, and might also have an easier life with the help of a
psychologist.
Clinical psychology:
Clinical psychology is the branch of psychology that studies mental
disorders. It is about learning, understanding, diagnosing,
treating, or preventing these types of illnesses.
Clinical psychologists examine the mental functioning of a
person and use psychotherapy to treat disorders.
Psychotherapy uses talking instead of medical or physical
treatments.
The first psychological clinic was opened in 1896 at the University of Pennsylvania by Lightner
Witmer. In the first half of the 20th century, clinical psychology was mainly about psychological
assessment, not treatment. After World War II, there was a big increase in the number of trained
clinical psychologists. There are two main educational models: the Ph.D. scientist-practitioner
model, which emphasizes research, and the Psy.D. practitioner-scholar model, which emphasizes
treatment. Clinical psychologists are now regarded as experts in psychotherapy.
Neuropsychology:
Neuropsychology is the scientific study of the function and structure of the brain in relation
to common psychological processes and overt behaviors. The term has been applied to lesion
studies in humans and animals, and to efforts to record electrical activity from individual
cells (or groups of cells) in higher primates, including some studies of human patients.

Motivation:
Motivation is an important part of human psychology. It arouses a person to act towards a
desired goal; it is a driving force which promotes action. For example, hunger is a motivation
which causes a desire to eat. "Motivation is an energizer of behavior." Motivation is the
purpose or psychological cause of an action.
With animals, motivation is caused by basic needs: needs for food, water, warmth, safety,
mating, protecting the young, and defending territory, and needs to escape pain and threats.
The drive to do these things is instinctive, inborn, and triggered by appropriate stimuli.
With humans, whose mental life is so much more complex, motivation is more complicated.
Obviously, humans feel the need for food and water, and avoid pain and so on. But they are
also capable of making long-term plans which are more difficult to understand.

Emotions and unconscious motivation:

Motivation and emotion are intertwined: "Emotional states tend to have motivational properties".
Not all motivated behavior is the result of conscious decisions. Freudian psychology suggests
that much behavior is motivated by "unconscious factors, working through a network of defence
mechanisms, symbolic disguises and psychological cloaks".[1]
Scientific approaches:
Psychology is a type of science, and research psychologists use many of the same types
of methods that researchers from other natural and social sciences use.

Psychologists make theories to try to explain a behavior or pattern they see. Based on their
theory they make some predictions. They then carry out an experiment or collect other types of
information that will tell them whether their predictions were right or wrong.

Some types of experiments cannot be done on people
because the process would be too long, expensive,
dangerous, unfair, or otherwise unethical. There are also
other ways psychologists study the mind and behavior
scientifically, and test their theories. Psychologists might
wait for some events to happen on their own; they might
look at patterns among existing groups of people in natural
environments; or they might do experiments on animals
(which can be simpler and more ethical to study).

Psychology shares other things with natural sciences, as

well. For example, a good psychological theory may be possible to prove wrong. Just like in any
natural science, a group of psychologists can never be completely sure that their theory is the
right one. If a theory can be proved wrong, but experiments do not prove it wrong, then it is more
likely that the theory is accurate. This is called falsifiability.

Psychologists use many tools as part of their daily work. Psychologists use surveys to ask people
how they feel and what they think. They may use special devices to look at the brain and to see
what it is doing. Psychologists use computers to collect data as they measure how people behave
in response to pictures, words, symbols, or other stimuli. Psychologists also use statistics to help
them analyze the data that they get from their experiments.

Symbolic and subjective approaches:

Not all psychology is scientific psychology. Psychodynamic psychology and depth
psychology do things like interpreting people's dreams to understand the unconscious mind, as in
older approaches to psychology begun by Carl Jung, who was particularly interested in finding
methods for measuring what kind of personality people have.

Humanistic psychology and existential psychology also believe that it is more important to
understand personal meaning than to find causes and effects of mental processes and behaviors.
A drive or desire is a deficiency or need that activates behavior aimed at a goal or an incentive.
Drives may arise inside or outside an organism. External drives for humans are rewards and
punishments, and can be quite subtle: a frown or a smile may be sufficient for a young person.
Drives often occur within the individual and may not need external stimuli to encourage the
behavior. By contrast, outside rewards and stimuli are used in training animals by giving them
treats when they perform a trick correctly. The treat motivates the animals to perform the trick
consistently, even later when the treat is removed from the process. Children are motivated to
learn by the approval of friendly adults, and by their own pleasure at success.


Psychologists are people who work in the field of psychology. A psychologist may work
in either basic research or applied research. Basic research is
the study of people or animals to learn more about them.
Applied research is using what was learned from basic
research to solve real-world problems. If he or she is qualified as a clinical psychologist they
may be a therapist or counsellor as well as a researcher.
To become a psychologist, a person must first get a basic degree at a university and then go to
graduate school. A Master's degree, either MSc (Master of Science) or MA (Master of Arts),
allows a person to begin work, for example as a school psychologist. A doctoral degree takes
longer because it includes doing research and writing a detailed report called a dissertation or
thesis. The doctoral graduate uses the initials PhD or DPhil (Doctor of Philosophy) after his or
her name. Some clinical psychologists earn a Doctor of Psychology degree and use the initials
PsyD after their name. The American Psychological Association says that people need a PhD
(or PsyD and a current state license in the U.S.) in order to call themselves a 'psychologist'.

The words psychologist and psychiatrist may be confused with each other. A psychiatrist has
graduated from medical school and uses the initials MD or the equivalent (MB ChB at London
University, for example). A psychiatrist or doctor may work with a psychologist: they may
prescribe medication and check on its effects.

Thus we can say that a major part of our life depends on our psychological behavior and is
interlinked with everything we do.

K.Rajalakshmi, Assistant Professor, Department of Physics, SCSVMV

Meditation is a practice where an individual focuses the mind on a particular thought or
activity to train attention and achieve a mentally clear, emotionally calm, and stable state.
Meditation has been practiced in numerous religious traditions and beliefs as part of the path
towards enlightenment and self-realization. It may be used with the aim of reducing stress,
anxiety, depression, and pain, and increasing peace, perception, self-concept, and well-being.
Meditation is under research to define its possible health (psychological, neurological,
and cardiovascular) and other effects.

Forms of Meditation

Focused and open monitoring meditation

In the West, meditation techniques have sometimes
been thought of in two broad categories: focused
(or concentrative) meditation and open monitoring
(or mindfulness) meditation.
Open monitoring methods
These include mindfulness, shikantaza and other
awareness states.
Practices using both methods
Some practices use both
techniques, including vipassana (which
uses anapanasati as a preparation), samatha/calm-
abiding, and Headspace.

No thought
In these methods, "the practitioner is fully alert, aware, and in control of their faculties
but does not experience any unwanted thought activity." This is in contrast to the common
meditative approaches of being detached from, and non-judgmental of, thoughts, but not of
aiming for thoughts to cease. In the meditation practice of the Sahaja yoga spiritual movement,
the focus is on thoughts ceasing. Clear light yoga also aims at a state of no mental content, as
does the no-thought (wu nian) state taught by Huineng and the teaching of Yaoshan Weiyan.
Automatic self-transcending
One proposal is that transcendental meditation and possibly other techniques be grouped
as an 'automatic self-transcending' set of techniques.
Meditation Practice
Common practice timings
The transcendental meditation technique recommends practice of 20 minutes twice per
day. Some techniques suggest less time, especially when starting meditation, and Richie
Davidson has quoted research saying benefits can be achieved with a practice of only 8 minutes
per day. Some meditators practice for much longer, particularly when on a course
or retreat. Some meditators find practice best in the hours before dawn.
Physical postures and techniques
Asanas and positions such as the full-lotus, half-lotus, Burmese, seiza, and kneeling
positions are popular in Buddhism, Jainism, and Hinduism; other postures such as sitting,
supine (lying), and standing are also used. Meditation is
also sometimes done while walking, known as kinhin, or while doing a simple task mindfully,
known as samu.
Use of prayer beads
Some ancient religions of the world have a tradition of using prayer beads as tools in
devotional meditation. Most prayer beads and Christian rosaries consist of pearls or beads linked
together by a thread. The Roman Catholic rosary is a string of beads containing five sets of ten
small beads. The Hindu japa mala has 108 beads (the figure 108 in itself having spiritual
significance), as do the beads used in Jainism and Buddhism. Each bead is counted
once as a person recites a mantra until the person has gone all the way around the mala. The
Muslim misbaha has 99 beads.
Research on Meditation
Research has been carried out on the processes and the effects of meditation. Modern
scientific techniques, such as MRI and EEG, were used to observe neurological responses during
meditation. Since the 1970s, clinical psychology and psychiatry have developed meditation
techniques for numerous psychological conditions. Mindfulness practice is employed in
psychology to alleviate mental and physical conditions, such as reducing depression, stress,
and anxiety. Mindfulness is also used in the treatment of drug addiction. Studies demonstrate that
meditation has a moderate effect to reduce pain. There is insufficient evidence for any effect of
meditation on positive mood, attention, eating habits, sleep, or body weight.
A 2017 systematic review and meta-analysis of the effects of meditation
on empathy, compassion, and prosocial behaviors found that meditation practices had small to

medium effects on self-reported and observable outcomes, concluding that such practices can
"improve positive prosocial emotions and behaviors".
Preliminary studies showed a potential relationship between meditation and job performance,
resulting from cognitive and social effects.
Concerns have been raised on the quality of much meditation research, including the particular
characteristics of individuals who tend to participate.
Why should we meditate?
Meditation Decreases Beta Waves
Even after the very first time you meditate, you will see a significant decrease in beta
waves, which reflect information being processed by your brain. Decreasing them helps
because much of the information your brain processes is useless junk: noise preventing you
from focusing on the things you actually want to focus on.

Meditation Helps You Focus

One of the most immediate benefits of calming down the beta-wave cacophony is
an increased ability to focus. You do not have to run away to the woods to make it happen either:
researchers found that a mere 20 minutes a day, four times a week, made a significant difference
in the ability to focus. And meditation has a cumulative effect, so 20 minutes every
day will sharpen your focus and attention.

Meditation Decreases Anxiety

And while we are discussing the science of meditation, a decrease in anxiety is one of the
better-attested health benefits. This goes back to beta waves again, which create a constant
chorus of useless thoughts in your head. You will loosen up connections between the parts of
your brain that tell other parts of your brain to worry about traffic, your next dentist
appointment, or whatever else you are anxious about that you don't need to be.

Meditation Decreases Your Need For Sleep

One of the more interesting things we know about meditation is that it makes you need
less sleep. The study was done on people who just started meditating, not on those who have
practiced for years. However, what the study found is that 40 minutes of meditation can be a
better means of resting than 40 minutes of sleeping.

Meditation Makes You More Perceptive

A study conducted in 1984 found that those who meditate regularly are more perceptive
than those who do not. In short, the researchers found that meditation practitioners needed a
shorter period of time to register and recognize stimuli than their non-meditating counterparts.

Meditation Makes Your Brain Bigger

Researchers at Harvard found out that meditation will help your brain to grow bigger by
increasing your gray matter. They’re not entirely sure how this is happening quite yet, but further
studies are underway. For now, however, we know that meditation is scientifically proven to
increase the mass of your brain.
Meditation Helps You to Be More Compassionate
Being a little, or even a lot, more compassionate can make you a more empathetic
friend, partner, and leader. Meditation can help to cultivate that sensitivity. You need to make it a
goal of your practice, but if you do, you're going to have more compassion even when you're not
meditating.

Meditation Will Help You Remember Things

The same study that found meditation increased compassion also found that it helped
with rapid memory recall. That is to say, meditation won’t just help you to remember things — it
will help you to remember them faster.

Meditation Increases Creativity

For the entrepreneur or creative professional, creativity is an absolute must. As you
probably guessed, meditation is going to help you increase your creativity. Researchers at Leiden
University in The Netherlands found that certain forms of meditation open up your mind to new
ideas. This is not in the abstract. What it means is that your internal censor is less active after
certain forms of meditation, allowing you to fully realize ideas that you might otherwise shut
down before they fully blossom.

Meditation: How to Do It
There are different ways to meditate. There’s guided meditation, mantra meditation, Zazen
meditation, mindfulness meditation, Tai Chi meditation, transcendental meditation…

 Set the right environment. Turn off your phone. Close the laptop. Don’t play music. Make
your environment as quiet and tranquil as possible. Plan to do anything to avoid interruption.

 Set an alarm. Pick a time, starting with ten minutes, though the longer the better. Half an
hour is great if you can do it. Set your alarm and forget it. Don’t worry about how long
you’ve been meditating or how long you have to go.

 Get your physical stress out. Progressive muscle relaxation is a great way to prepare for
meditation. Spend five or ten minutes doing this before you meditate and it will make a huge
difference.

 Find a comfortable position for sitting. You can sit in a chair, on a pillow, or on the floor.
The main thing is that you need to be sitting up, not lying down, as the latter is almost always
a recipe for falling asleep. If you want to find a “mystical” pose to help you get in the mood,
go for it, but it’s not necessary.

 Settle into position. Take a couple seconds to settle into position. Wiggle around a little bit.
But once you’re set, stay there. In fact, learning to sit still — without moving, scratching, or
otherwise adjusting yourself — is a lot of what the early stages of learning to meditate are about.

 Close your eyes. Some forms of meditation don’t require this. For now, close your eyes.

 Breathe. A lot of meditation is really just about breathing. There are different ways to do
this, but one basic way is what’s called “four fold breath.” Breathe in for four seconds. Hold
your breath for four seconds. Breathe out for four seconds. Hold for four seconds. Focus on
getting this right for a bit and it will eventually become automatic.

 Let thoughts flow through you. A lot of people think they need to fight against their thoughts. Nothing could be further from the truth. Instead, just let them flow by like clouds through the sky. If it helps, say "thinking" when you have a thought and go back to focusing on nothing.

 Do the time you committed to. Don’t be surprised if it’s difficult to meditate at first. We
are so used to being constantly stimulated that when we are finally not, it can be
uncomfortable or difficult. But do whatever time you set out to do. Focusing on quieting our
mind allows us to better use it when we need to, rather than constantly being consumed by
mental chatter.
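The "four fold breath" pacing from the steps above can be sketched in a few lines of code. This is a minimal illustration, not from the article: the function name and defaults are invented, and the optional `pace` callback (pass `time.sleep` for a real session) lets the schedule be inspected without waiting.

```python
# A minimal sketch of the "four fold breath" described above: four
# phases (inhale, hold, exhale, hold) of four seconds each. The
# function name and parameters are illustrative, not from the article.
def four_fold_breath(cycles=3, phase_seconds=4, pace=None):
    """Return the list of (phase, seconds) steps; optionally pace them."""
    phases = ["breathe in", "hold", "breathe out", "hold"]
    schedule = []
    for _ in range(cycles):
        for phase in phases:
            schedule.append((phase, phase_seconds))
            if pace is not None:
                pace(phase_seconds)  # e.g. time.sleep for a timed session
    return schedule

# One cycle is 4 phases x 4 seconds = 16 seconds of breathing.
steps = four_fold_breath(cycles=1)
total_seconds = sum(seconds for _, seconds in steps)
```

With practice, as the text says, the rhythm becomes automatic and no timer is needed.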
As with anything else you’re introducing in your schedule, it’s important to make time for this. One way you can do this is just by blocking it out on your schedule. Then you’re going to find it harder to come up with reasons not to meditate. Even if you can’t do it every day (though you’ll find the best results that way), you can treat it like going to the gym — do it three or even five times a week.

Meditation has a number of proven benefits, but the biggest one is just having a greater mastery
over your mind. Leaders and celebrities from Barry Zito and Rupert Murdoch to Rick Rubin and
David Lynch meditate. Just like your muscles need exercise, so do your focus and concentration.

You’ll notice results immediately and in a week, but the real, powerful, and lasting results are
only going to come after several weeks or even months of meditation. The science of meditation
is real, and it can make a huge difference in your life. Try it for yourself.


K.Anitha, Assistant Professor, Department of ECE, SCSVMV

Agriculture has always been a lifeline for India. Although India has, with time, made a mark in various spheres and progressed in the manufacturing sector by leaps and bounds, agriculture still remains one of the key drivers of the economy. India ranks second worldwide in farm output, and agriculture accounts for about 50% of the country’s workforce. But this isn’t a new phenomenon.

From ancient times, agriculture has played a vital role in India’s growth. Rich fertile land,
plenty of water for irrigation, and domestication of crops and animals were some of the key
factors for its success.

Since then, Indian agriculture has witnessed many phases. However, the real success of scientific farming and the use of various technologies in agriculture can be attributed to the Green Revolution, which came in the 1960s when India was grappling with frequent droughts.

Green Revolution

This golden period in the agriculture sector increased crop yields manifold. Improved agronomic technology allowed India to overcome poor agricultural productivity.

A crucial aspect to the success of the Green Revolution in India was the various scientific
technologies developed to facilitate more yields. New farming irrigation methods such as drip
irrigation, stronger and more resistant pesticides, more efficient fertilizers, and newly developed
seeds helped in proficient crop growth. As a result of such new improvements in agricultural
methods, India experienced drastic increases in crop production. This eventually led to the
country becoming more self-sufficient and avoiding mass famine and starvation.

Harshavardhan Varma, BE-I Year, Section:S4, Department of CSE, SCSVMV

Ever since a caveman rubbed two stones together and made fire, scientific discoveries
have been altering the way humans live and interact with the world.
This National Science Day, we look at the focus of scientific research in India, to get an idea of where we might be headed. The ultimate aim of science is to understand the phenomena of nature and use that understanding to devise ways of elevating our living. Let us make that process sustainable, so that there is much left of nature for future generations to enjoy.

National Science Day is celebrated in India on February 28, to mark the discovery of the Raman effect. The discovery was made by physicist Sir Chandrasekhara Venkata Raman in 1928. For his discovery, Sir C.V. Raman was awarded the Nobel Prize in Physics in 1930.
Medical research:
Researchers from universities across India collaborated to develop an anti-cancer drug that is now ready for clinical trials. This study spanned diverse areas of research including biochemistry, molecular biophysics, pharmaceutical chemistry, genetics, computational biology, nanotechnology and bioinformatics, along with an exploration of the part that natural remedies play in the prevention of cancer.
Tuberculosis is another deadly disease that our researchers are fighting. Drug-resistant strains of the TB bacteria are particularly hard to beat. Scientists from the Indian Institute of Science (Bengaluru) have made path-breaking discoveries on how the bacteria become drug resistant. Based on this, a combination of antibiotics that can kill the TB bacteria has been formulated, which will save thousands of lives.
Astronomy and space science:
Last year saw some breakthroughs in astronomy, as Indian scientists discovered a supercluster
of galaxies and named it Saraswathi. Estimated to contain billions of stars, planets, dark
matter and much else that we do not know about, this will open up ways of understanding the
mysteries of the cosmos.
The Indian Space Research Organisation (ISRO) has been in the news a lot lately.
Among its many achievements last year, ISRO launched the largest number of satellites ever carried on a single flight (104, to be precise). ISRO also launched its heaviest payload so far (a 3,136-kilogram communications satellite) as well as its smallest (4-gram space probes). The space
probes are part of a programme aimed at exploring space beyond the solar system in search of
extraterrestrial life.
Indigenous transistor:
India’s technological missions are all set to get a bigger boost as researchers from the Indian
Institute of technology Bombay, in collaboration with ISRO’s Semi-Conductor Labs,

developed India’s first ever indigenous transistor, called the BiCMOS. Expected to play a big
role in the Internet of Things (IoT), this transistor will reduce our dependency on
multinational semiconductor manufacturers, accelerating our defence and space programmes
to a great degree.
Waste management:
India has been leading the world with innovations in managing waste by segregating, composting and recycling. Prof. Rajagopalan Vasudevan was awarded the Padmashri for his
patented method to use plastic for laying roads. Since plastic and tar are both derived from
petroleum, they blend well. Not only does this stop plastic from polluting our land and water,
it creates durable and lasting roads. However, this only takes care of existing plastic, and is
not the long-term answer to plastic pollution. Just imagine: every bit of plastic that has ever been manufactured is still present on the earth in some form, as it takes hundreds of years to break down. It is up to young minds, such as yours, to stop using plastic that you will discard after a single use, like straws, disposable cutlery, carry bags, shiny gift wraps, and even balloons.
Monsoon prediction:
Monsoon is the backbone of the Indian economy, as it dictates everything from planning agriculture activities to urban water resources management. But the weather is notoriously fickle, and the Southwest monsoon more so. Researchers at the Indian Institute of Science
(Bengaluru) have developed a new technique that studies several phenomena such as
atmospheric pressure, sea surface temperature, moisture content, and rainfall over a period of years. By correlating the date of the onset of the monsoon with changes in these patterns over the years, scientists can now predict the arrival of the monsoon to an accuracy of four days!
Sustainable agriculture:
Sustainable technologies are not just popular right now, but critically essential, if the planet
has to survive. Climate change is making its effects felt, with melting glaciers in the
Himalayas, and erratic monsoon patterns. Agriculture is one of the contributors of climate
change, with large forest lands making way for human habitat. Scientists at the International
Maize and Wheat Improvement Centre (New Delhi), are laying the groundwork for
‘conservation agriculture’ which will reduce greenhouse gas emissions. Environmentally
unfriendly practices such as crop burning will be stopped and more ‘climate-smart’ practices
for soil enrichment will be in place. These include minimum soil disturbance, zero tillage,
crop rotation, and retention of crop residues.

V.Ragavendran, Assistant Professor, Department of Physics, SCSVMV

Powered by the accretion of gas onto supermassive black holes at the centers of galaxies,
quasars are the most luminous objects in the Universe. With discoveries from its earliest imaging
campaigns, the SDSS extended the study of quasars back to the first billion years after the Big
Bang, showing the rapid early growth of black holes and mapping the end stages of the epoch of reionization.

With full quasar samples hundreds of times larger than those that existed before, the
SDSS has given us the most accurate descriptions of the growth of black holes over cosmic
history. SDSS spectra show that the properties of quasars have changed remarkably little from
the early universe to the present day.

SDSS studies have probed the dark matter environments of quasars through clustering
measurements, revealed populations of quasars whose central engines are hidden by obscuring
dust, captured changes in quasar spectra that show clouds moving in the gravitational grip of the
central black hole, and allowed a comprehensive census of the much fainter accreting black holes
(active galactic nuclei, or AGN) in present-day galaxies.

By seeing so many quasars, the SDSS has been able to find examples of entirely new
types of quasars. These new discoveries include quasars whose winds change drastically over
just a few years, and galaxies whose central quasars sometimes almost completely disappear. A
particularly unexpected and famous discovery was Hanny's Voorwerp — discovered by a
primary school teacher as part of the Galaxy Zoo citizen science project — that turned out to be
light reflected from a quasar that died more than 200,000 years ago.


SDSS has transformed the field of systematic galaxy analysis with accurate
measurements of hundreds of parameters for hundreds of thousands of galaxies across the full
range of cosmic environments. SDSS studies have demonstrated a bimodal distribution of galaxy
properties, with a clear separation between populations of star-forming galaxies like the Milky
Way, and passive galaxies that have little or no ongoing star formation.

Using weak gravitational lensing and statistical analyses of galaxy clustering, the SDSS
has mapped out the multi-faceted relationships between galaxies and their surrounding halos of
dark matter, showing that passive galaxies are found mainly at the centers of massive halos or as
satellites orbiting larger galaxies.

Today, the SDSS's Mapping Nearby Galaxies at Apache Point Observatory (MaNGA)
survey is studying the detailed structure of thousands of nearby galaxies for clues about the

origin and evolution of galaxies. These observations have been used to solve a long-standing
mystery about white dwarfs in galaxy centers, and to learn why galaxies stop making new stars.

The comprehensive census of present-day galaxies near and far from the SDSS provides
an essential testing ground for theoretical models of galaxy formation. These studies have
provided new insights into the ionization sources of gas in galaxy centers, and also into what
circumstances might cause galaxies to stop making new stars.

SDSS imaging enabled the discovery of a new population of "ultra-faint" dwarf galaxies
orbiting the Milky Way. To date, the majority of known Milky Way companions have been
found by the SDSS, along with several new companions of the Andromeda galaxy. With total
light output as low as a thousand times the luminosity of the Sun, these tiny systems provide
critical insights into the physics of galaxy formation and stringent tests of the properties of dark matter.

The Milky Way

The SDSS is mapping the structure and chemical makeup of our Galaxy through its
Apache Point Observatory Galaxy Evolution Experiment (APOGEE) surveys. With these
observations and prior SDSS results, SDSS researchers measure the abundances of multiple
elements in the periodic table found in each of hundreds of thousands of stars across the Galaxy.

This chemical cartography provides information about the formation and evolution of the
Milky Way, because the chemistry of stars encodes information about the gas from which they
formed, and this gas is enriched in different elements in a variety of astrophysical environments.
APOGEE is particularly good at studying the stars in the flat disk of the Milky Way — which
contains the vast majority of Milky Way stars — because it works in infrared light that is better
able to penetrate the dust that is also found in the disk of the Milky Way. In addition, APOGEE
takes advantage of another, identical spectrograph as a "second eye on the sky," located at the
Irénée du Pont Telescope at Las Campanas Observatory in Chile. Having one spectrograph in
each hemisphere allows the SDSS to simultaneously observe the entire Galaxy, both bulge and
disk, from Earth's Northern and Southern Hemispheres.

The new picture of the Milky Way that the SDSS reveals shows a galaxy criss-crossed by
long streams of stars, each associated with a former satellite galaxy that has since been absorbed
into the Milky Way. By studying the distribution of these streams, astronomers can reconstruct
the history of how our Galaxy evolved by absorbing smaller galaxies.

SDSS measurements of the motions of stars in the disk and stellar halo have yielded the
most precise determinations of the mass distribution of the Milky Way's dark matter halo,
implying a total halo mass of approximately one trillion solar masses, lower than many previous estimates.


V.Ragavendran, Assistant Professor, Department of Physics, SCSVMV

After nearly a decade of design and construction, the Sloan Digital Sky Survey saw
first light on its giant mosaic camera in 1998 and entered routine operations in 2000. While the
collaboration and scope of the SDSS have changed over the years, many of its key principles
have stayed fixed: the use of highly efficient instruments and software to enable surveys of
unprecedented scientific reach, a commitment to creating high quality public data sets, and
investigations that draw on the full range of expertise in a large international collaboration. The
generous support of the Alfred P. Sloan Foundation has been crucial in all phases of the SDSS,
alongside support from the Participating Institutions and national funding agencies in the U.S.
and other countries.

The latest generation of the SDSS (SDSS-IV, 2014-2020) is extending precision cosmological measurements to a critical early phase of cosmic history (eBOSS), expanding its revolutionary infrared spectroscopic survey of the Galaxy in the northern and southern hemispheres (APOGEE-2), and for the first time using the Sloan spectrographs to make spatially resolved maps of individual galaxies (MaNGA). Two smaller surveys will be executed as subprograms of eBOSS: the Time Domain Spectroscopic Survey (TDSS) will be the first large-scale, systematic spectroscopic survey of variable sources, while the SPectroscopic IDentification of eROSITA Sources (SPIDERS) will provide a unique census of supermassive black-hole and large-scale structure growth, targeting X-ray sources from ROSAT, XMM and eROSITA. Finally, the MaNGA stellar library (MaStar) will provide an optical stellar library covering a wide range of stellar parameters.

[Figure: The Orange Spider. This illustrates the wealth of information, on scales both small and large, available in the SDSS I/II and III imaging.]

APO Galaxy Evolution Experiment 2 (APOGEE-2)

A stellar spectroscopic survey of the
Milky Way, with two major components: a
northern survey using the bright time at APO
(APOGEE-2N), and a southern survey using
the 2.5m du Pont Telescope at Las Campanas
(APOGEE-2S).The second generation of the
Apache Point Observatory Galaxy Evolution
Experiment (APOGEE-2) observes the
"archaeological" record embedded in hundreds
of thousands of stars to explore the assembly
history and evolution of the Milky Way Galaxy. The details as to how the Galaxy evolved are preserved today in the motions and
chemical compositions of its stars. APOGEE-2 maps the dynamical and chemical patterns of
Milky Way stars with data from the 1-meter NMSU Telescope and the 2.5-meter Sloan
Foundation Telescope at the Apache Point Observatory in New Mexico (APOGEE-2N), and the
2.5-meter du Pont Telescope at Las Campanas Observatory in Chile (APOGEE-2S).

The Extended Baryon Oscillation Spectroscopic Survey (eBOSS)

A cosmological survey of quasars
and galaxies, also encompassing
subprograms to survey variable objects
(TDSS) and X-ray sources
(SPIDERS). eBOSS will precisely measure
the expansion history of the Universe
throughout eighty percent of cosmic history,
back to when the Universe was less than
three billion years old, and improve
constraints on the nature of dark energy.
"Dark energy" refers to the observed
phenomenon that the expansion of the
Universe is currently accelerating, which is one of the most mysterious experimental results in
modern physics.

eBOSS concentrates its efforts on the observation of galaxies and, in particular, quasars, in a range of distances (redshifts) currently left completely unexplored by other three-dimensional maps of large-scale structure in the Universe. In filling this gap, eBOSS will create the largest volume survey of the Universe to date. The figure to the right shows the region that will be newly mapped by the eBOSS project. This region corresponds to the epoch when the Universe was transitioning from deceleration due to the effects of gravity to the current epoch of acceleration.
Mapping Nearby Galaxies at APO (MaNGA)

The galaxy survey for people who love galaxies! MaNGA will explore the detailed
internal structure of nearly 10,000 nearby galaxies using spatially resolved spectroscopy.
Subprogram MaStar will provide an optical stellar library covering thousands of stars with a
wide range of parameters. Unlike previous SDSS surveys which obtained spectra only at the
centers of target galaxies, MaNGA enables spectral measurements across the face of each of
~10,000 nearby galaxies thanks to 17 simultaneous "integral field units" (IFUs), each composed
of tightly-packed arrays of optical fibers. MaNGA's goal is to understand the "life history" of
present day galaxies from imprinted clues of their birth and assembly, through their ongoing
growth via star formation and merging, to their death from quenching at late times.

To answer these questions, MaNGA will provide two-dimensional maps of stellar
velocity and velocity dispersion, mean stellar age and star formation history, stellar metallicity,
element abundance ratio, stellar mass surface density, ionized gas velocity, ionized gas
metallicity, star formation rate and dust extinction for a statistically powerful sample. The
galaxies are selected to span a stellar mass interval of nearly 3 orders of magnitude. No cuts are
made on size, inclination, morphology or environment, so the sample is fully representative of
the local galaxy population. Just as tree-ring dating yields information about climate on Earth
hundreds of years into the past, MaNGA's observations of the dynamical structures and
composition of galaxies will help unravel their evolutionary histories over several billion years.


V.Ragavendran, Assistant Professor, Department of Physics, SCSVMV

Since Hubble's time, a few more sky surveys have been conducted. But most astronomy
research focused on observing a small number of individual objects, often chosen because they
appeared somehow unusual. By choosing unusual objects, astronomers were attempting to
observe and categorize a broad range of celestial phenomena to discover and constrain the limits
of "what's out there." But some astronomers using this research method found that calculations
they thought were simple and straightforward were actually difficult. For example, they found
that it was particularly hard to determine the expansion rate of the universe (called the Hubble
constant), the density of the universe, how galaxies cluster together, and even what makes up
most of the matter in the universe. The reasons for these difficulties are clear: astronomers had
too little data to work with. It was as if they had been trying to study the oceans, but seeing only
a small patch of the North Atlantic. Astronomers realized that it was time for another map of the
entire sky, one that could see large portions of sky to distances up to several billion light-years.
Now that technology has become advanced enough, this map is being created by the Sloan
Digital Sky Survey.

What do we mean when we say the SDSS will map the universe? To the SDSS, mapping
means measuring the positions and properties of all the hundreds of millions of celestial objects
its telescope can adequately observe: over one-quarter of the northern sky. To find these objects,
SDSS's astronomers must first use their telescopes to take a picture of the sky over the whole
survey area. From this first set of observations, almost all objects can be categorized into well-
recognized types such as stars, galaxies, and quasars. This survey also measures the objects'
positions very precisely. The survey is, in some sense, already a map of the sky: it tells
astronomers where to look to find any of the objects. But astronomers are also interested in measuring the distances to these objects, to get a full three-dimensional picture of "what's out there."

Measuring distance is of especially great interest to cosmologists, who study the origin
and structure of the universe. To find distances to celestial objects, the SDSS's astronomers must
go back to each detected galaxy and observe it once again with an instrument called a
spectrograph - basically a giant prism that separates light into its component colors. The
spectrograph analyzes how much of each color of light comes from the object. Since the universe
is expanding, the wavelength of all the light coming from the galaxy has been stretched as it has
traveled. This stretching is called the redshift of light. By measuring the redshift of each galaxy,
astronomers can determine the distance to the galaxy and make a full three-dimensional map of
the positions of galaxies. The advanced technology of the Sloan Digital Sky Survey is able to
measure the distances to about 600 galaxies in less than an hour. In five years, the survey will
measure the distance to over a million galaxies.
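The redshift-to-distance step described above can be sketched with a few lines of arithmetic. This is a simplified illustration, not SDSS pipeline code: for small redshifts, Hubble's law gives distance ≈ cz/H0, and the H0 value and example wavelengths below are assumptions chosen for illustration.

```python
# Back-of-the-envelope version of the redshift measurement the text
# describes. The H0 value (70 km/s/Mpc) and the example wavelengths
# are illustrative assumptions, not SDSS numbers.
C_KM_S = 299_792.458          # speed of light, km/s
H0 = 70.0                     # Hubble constant, km/s per megaparsec

def redshift(observed_nm, rest_nm):
    """Fractional stretching of a spectral line's wavelength."""
    return (observed_nm - rest_nm) / rest_nm

def distance_mpc(z):
    """Low-redshift (z << 1) Hubble-law distance, in megaparsecs."""
    return C_KM_S * z / H0

# Hypothetical example: an H-alpha line with rest wavelength 656.3 nm
# observed at 669.4 nm gives z ~ 0.02, i.e. a distance of ~85 Mpc.
z = redshift(669.4, 656.3)
d = distance_mpc(z)
```

At larger redshifts the simple linear law breaks down and a full cosmological model is needed, but the principle — stretched wavelengths encode distance — is the same one the SDSS spectrograph exploits.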


N. Surendra, BE-I Year, Department of CIVIL Engg. SCSVMV

What can we learn from gravitational waves?

“Gravitational waves are a new way of observing the Universe. Astronomy

traditionally uses light to explore the cosmos, but there are lots of things you can miss
because a lot of the universe is dark, including black holes. One source of gravitational waves
is two dense objects (like black holes or neutron stars) in orbit around each other.”

A century after Albert Einstein rewrote our understanding of space and

time, physicists have confirmed one of the most elusive predictions of his general theory of
relativity. In another galaxy, a billion or so light-years away, two black holes collided,
shaking the fabric of spacetime. Here on Earth, two giant detectors on opposite sides of the
United States quivered as gravitational waves washed over them. After decades trying to
directly detect the waves, the recently upgraded Laser Interferometer Gravitational-Wave
Observatory, now known as Advanced LIGO, appears to have succeeded, ushering in a new
era of astronomy.

What are gravitational waves?

Colossal cosmic collisions and stellar explosions can rattle spacetime itself. General
relativity predicts that ripples in the fabric of spacetime radiate energy away from such
catastrophes. The ripples are subtle; by the time they reach Earth, some compress spacetime
by as little as one ten-thousandth the width of a proton.

How are they detected?

To spot a signal, LIGO uses a special mirror to split a beam of laser light and sends the beams down two 4-kilometer-long arms, at a 90-degree angle to each other. The beams ricochet back and forth 400 times, turning each beam’s journey into a 1,600-kilometre round trip, before recombining. The experiment is designed so that, in normal conditions, the light waves cancel one another out when they recombine, sending no light signal to the nearby detector.

But a gravitational wave stretches one tube while squeezing the other, altering the
distance the two beams travel relative to each other. Because of this difference in distance,
the recombining waves are no longer perfectly aligned and therefore don’t cancel out. The
detector picks up a faint glow, signaling a passing wave.

LIGO has one detector in Louisiana and another in Washington to ensure the wave is
not a local phenomenon and to help locate its source.
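The figures quoted above give a feel for the sensitivity involved. As a rough arithmetic sketch (the proton width used here is an assumed round value, not from the article):

```python
# Rough arithmetic for the strain LIGO must detect, using the figures
# quoted above: a 4 km arm changes length by about one ten-thousandth
# of a proton's width. The proton width (~1.7e-15 m) is an assumed
# round value for illustration.
PROTON_WIDTH_M = 1.7e-15       # approximate proton diameter, metres
ARM_LENGTH_M = 4_000.0         # one LIGO arm, metres

delta_L = PROTON_WIDTH_M / 10_000   # the displacement from the text
strain = delta_L / ARM_LENGTH_M     # dimensionless strain h = dL/L
# strain comes out of order 1e-23, which is why folding the beam into
# a 1,600 km round trip is needed to build up a measurable effect.
```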

What are other sources of gravitational waves?

Scientists can figure out what type of signals to expect from various gravitational-wave sources:

Spinning neutron stars

A single spinning neutron star, the core left behind after a massive star explodes, can
whip up spacetime at frequencies similar to those produced by colliding black holes.


Supernovas

Powerful explosions known as supernovas, triggered when a massive star dies, can shake up space and blast the cosmos with a burst of high-frequency gravitational waves.

Supermassive black hole pairs

Pairs of gargantuan black holes, more than a million times as massive as the sun and larger
than the ones Advanced LIGO detected, radiate long, undulating waves. Though Advanced
LIGO can’t detect waves at this frequency, scientists might spot them by looking for subtle
variations in the steady beats of pulsars.

Big Bang

The Big Bang might have triggered universe-sized gravitational waves 13.8 billion
years ago. These waves would have left an imprint on the first light released into the cosmos
380,000 years later, and could be seen today in the cosmic microwave background.

How else are we looking for gravitational waves?

LIGO isn’t the only game in town when it comes to hunting for gravitational waves.
Here are a few other ongoing and future projects.

Ground-based interferometers

A couple of other detectors similar to LIGO are in Europe. The Virgo detector, near
Pisa, Italy, is being upgraded and will team up with LIGO later this year. GEO600, near
Hannover, Germany, has been the only interferometer running for the past several years
while Virgo and LIGO underwent renovations. A third LIGO detector, this one in India, is
scheduled to join the search in 2019.

Space-based interferometers

In space, no one can hear you scream. Nor do you have to deal with pesky
Earth-based phenomena like seismic tremors. Researchers have been lobbying the European
Space Agency to put a LIGO-like detector in space — the Evolved Laser Interferometer
Space Antenna — sometime in the 2030s. In anticipation of eLISA, ESA recently
launched the LISA Pathfinder, a mission to test technologies needed for the full-fledged
space-based gravitational wave detector.

Pulsar timing arrays

To pick up the relatively low-frequency hum of colliding supermassive black holes,

researchers are turning to pulsars. These rapidly spinning neutron stars (the cores left behind
after a massive star explodes) send out steady pulses of radio waves. As a gravitational wave
squeezes and stretches the space between Earth and a pulsar, the beat appears to quicken and
diminish. Three projects — the Parkes Pulsar Timing Array in Australia, NANOGrav in North
America and the European Pulsar Timing Array in Europe — are monitoring dozens of
pulsars for tempo changes that can reveal not only single collisions but the cacophony of
gargantuan black holes smashing together throughout the universe.

Cosmic microwave background polarization

Gravitational waves released in the wake of the Big Bang would have left a mark on
the cosmic microwave background, or CMB. This radiation fills the universe and is a relic
from the moment light could first travel freely through the cosmos, about 380,000 years after
its birth. The CMB preserved how space stretched and squeezed following a phenomenal
expansion a trillionth of a trillionth of a trillionth of a second after the Big Bang. Many
telescopes are searching for this signature by looking for specific patterns in how the CMB
light waves align with one another. It’s not easy though; the project alreadymistook dust in
the Milky Way for its cosmic quarry.

What is the future for gravitational-wave science?

“LIGO has just finished its first observations using its new “advanced” sensitivity. It
will slowly be improved over the next five years, making it even more sensitive. Next year it
should also be joined by Virgo, a detector in Italy. There is also another detector being built
underground in Japan called KAGRA. There is also a plan for putting a LIGO detector in
India. Plans for a network of third generation of observatories such as the Einstein Telescope
are under way. Improving the worldwide network of detectors will help us to measure
properties of the signals, especially helping us figure out the position in the sky of the source
of the waves. At the same time, Pulsar Timing Arrays are taking data to observe giant black
holes at the centre of galaxies.

“Further in the future, there will be a space-based mission called eLISA. This will be
much bigger (100 times the size of the Earth) and look for gravitational waves from much
more massive objects.”


Janani R, Assistant Professor, Department of EIE, SCSVMV

Ancient Indian astronomy was far more advanced than its time, even by today's standards; perhaps in the decades to come we will learn still more about astronomy from the Vedas and other scriptures.

 The existence of rather advanced concepts like the sphericity of Earth and the cause of seasons is quite clear in Vedic literature. For example, the Aitareya Brahmana (3.44) declares: "The Sun does never set nor rise. When people think the Sun is setting, it is not so; for after having arrived at the end of the day, it makes itself produce two opposite effects, making night to what is below and day to what is on the other side. Having reached the end of the night, it makes itself produce two opposite effects, making day to what is below and night to what is on the other side. In fact, the Sun never sets."
 Shape of Earth is like an Oblate Spheroid (Rig Veda XXX.IV.V).
Earth is flattened at the poles (Markandeya Purana 54.12).
It is really unbelievable that sixty-four centuries before Isaac Newton, the Hindu Rig-Veda asserted that gravitation held the universe together. The Sanskrit-speaking Aryans subscribed to the idea of a spherical earth in an era when the Greeks believed in a flat one. The Indians of the fifth century A.D. calculated the age of the earth as 4.3 billion years; scientists in 19th-century England were convinced it was 100 million years.
 "Earth rotates in two ways by the Will of Brahma: first, it rotates on its axis; secondly, it revolves around the Sun. Days and nights are distinguished when it moves on its axis. Seasons change when it revolves around the Sun." (Vishnu Puran)
 There are planets in all directions, but they are only visible in the night sky (Rig Veda).

 The Brihat Samhita, chapter 35 (6th century CE), describes the formation of the
rainbow; this was proposed by Sir Isaac Newton nearly eleven centuries later.

 Meaning of the verse: the multi-coloured rays of the Sun, being dispersed in a cloudy
sky, are seen in the form of a bow, which is called the rainbow. Even in the Vedas the
seven colours of light are represented by the seven horses of the Sun god. In fact, the
same scripture describes the formation of the thunderbolt as well.

 After the formation of the earth, Brahma created the atmosphere in a group of seven;
from that formation the oceans began to exist, and the first form of life appeared on the
earth. The atmosphere was created as a protective skin of the earth (Shrimad Bhagwatam).
"Amazing, isn't it? The Vedas and Puranas are a divine source of knowledge," said Dr. Donald
Mitchell of the Johns Hopkins Applied Physics Laboratory. It is hard to believe that
these facts were already mentioned in Hindu books thousands of years ago, at a
time when humans did not know much about astronomy.
 "This earth is devoid of hands and legs, yet it moves ahead. All the objects over
the earth also move with it. It moves around the Sun." (Rig Veda 10.22.14) Long
before Isaac Newton explained gravity, ancient Indian scholars had already described
how it worked.
 The Vishnu Purana gives quite an accurate description of tides: "In all the oceans the
water remains at all times the same in quantity and never increases or diminishes; but
like the water in a cauldron, which in consequence of its combination with heat
expands, so the waters of the ocean swell with the increase of the Moon. The waters,
although really neither more nor less, dilate or contract as the Moon increases or wanes
in the light and dark fortnights."

M.Dinesh, Assistant Professor, Department of Civil and Structural Engg. SCSVMV

Our vision is to build a foundation for excellence and to encourage the development of the
institution by igniting and promoting interest and passion in physics. Science is the
intellectual and practical activity that studies the structure and behaviour of the biological,
physical and natural world through observation and experiment. Some of the most significant
recent scientific discoveries are given below.

 Scientists have finally created metallic hydrogen:

Harvard University, Jan 27, 2017: Researchers created metallic hydrogen by applying
almost 5 million atmospheres of pressure to liquid hydrogen. That is about 5 million
times the pressure we experience at sea level and 4,500 times that at the bottom of the
ocean. If it remains stable once the pressure is released, it could revolutionize rocketry.

 SpaceX's historic launch proves recycled rockets are the future of spaceflight:

Mar 30, 2017: Elon Musk's company made history by successfully relaunching and
relanding a used rocket booster for the first time. After the launch, Musk called it a
"huge revolution in spaceflight".

 Dark matter could be detected by firing microwaves into space: Feb 9, 2019

A powerful beam of microwaves could be fired into space to detect hypothetical
dark-matter particles called axions. Research on this is still ongoing.

D.Muthukumaran, Assistant Professor, Department of ECE, SCSVMV



The body loses heat far more quickly in water than it would in air at the same temperature.
Water has a high specific heat capacity, meaning it takes a lot of heat to raise its
temperature even a little, and it is very good at retaining heat or cold (the reason why hot
soup stays hot for a long time, and why the ocean is much cooler than land). Water is also a
far better thermal conductor than air, so it is very effective at transferring heat to or from
your body.
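The quantity of heat involved follows the familiar relation Q = mcΔT. A quick illustration (specific-heat values taken from standard tables; the iron comparison is our own addition, not the article's):

```python
def heat_required(mass_kg, delta_t_k, c_j_per_kg_k=4186.0):
    """Q = m * c * dT; water's c is about 4186 J/(kg.K),
    roughly ten times that of a typical metal."""
    return mass_kg * c_j_per_kg_k * delta_t_k

q_water = heat_required(1.0, 10.0)          # ~41.9 kJ to warm 1 kg of water by 10 K
q_iron = heat_required(1.0, 10.0, 449.0)    # ~4.5 kJ for the same mass of iron
```

The order-of-magnitude gap is why a kettle takes minutes while a frying pan heats in seconds.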


Unlike other solid materials, such as metals, glass is made up of loosely packed atoms
arranged randomly (it is amorphous). These atoms cannot absorb or dissipate the energy of
something like a bullet, nor rearrange themselves quickly enough to preserve the glass's
structure, so it collapses, scattering fragments everywhere.


Because water molecules are bent—made of two hydrogen atoms attached to one
oxygen atom—they have slightly different charges on their different sides, somewhat like a
magnet. The hydrogen end of the molecule is slightly positive, and the oxygen side is
slightly negative. This makes water excellent at sticking to other molecules. When you wash
away dirt, the water molecules stick to the dirt and pull it away from whatever surface it was
on. This is also the reason water has surface tension: it is great at sticking to itself.

N.Sariha, M.Phil Physics, SCSVMV

Magnetic levitation, maglev, or magnetic suspension is a method by which an object is
suspended with no support other than magnetic fields. Magnetic force is used to counteract the
effects of gravitational acceleration and any other acceleration.

The two primary issues involved in magnetic levitation are lifting forces: providing an
upward force sufficient to counteract gravity, and stability: ensuring that the system does not
spontaneously slide or flip into a configuration where the lift is neutralized.

Magnetic levitation is used for maglev trains, contactless melting, magnetic bearings
and product display purposes.


Magnetic materials and systems can attract or repel each other with a force that
depends on the magnetic field and on the area of the magnets. The simplest example of lift
is a dipole magnet positioned in the magnetic field of another dipole magnet, oriented with
like poles facing each other, so that the force between the magnets is repulsive.

Essentially all types of magnetism have been used to generate lift for magnetic levitation:
permanent magnets, electromagnets, ferromagnetism, diamagnetism, superconducting magnets
and magnetism due to induced currents in conductors.


Earnshaw's theorem proves that, using only paramagnetic materials (such as
ferromagnetic iron), it is impossible for a static system to stably levitate against gravity.

For example, the simple arrangement of two dipole magnets repelling each other is
highly unstable, since the top magnet can slide sideways or flip over, and it turns out that no
static configuration of magnets can produce stability.

However, servomechanisms, the use of diamagnetic materials, superconduction, or
systems involving eddy currents allow stability to be achieved.

In some cases the lifting force is provided by magnetic levitation, but stability is provided by a
mechanical support bearing little load. This is termed pseudo-levitation.

Static stability:

Static stability means that any small displacement away from a stable equilibrium causes
a net force to push it back to the equilibrium point.

Dynamic stability:

Dynamic stability occurs when the levitation system is able to damp out any vibration-
like motion that may occur.

Magnetic fields are conservative forces and therefore in principle have no built-in
damping; in practice many levitation schemes are under-damped, and some are negatively
damped. This can permit vibration modes to exist that can cause the item to leave the
stable region.

Damping of motion is done in a number of ways:

 External mechanical damping (in the support), such as dashpots, air drag etc.
 Eddy current damping (conductive metal influenced by field)
 Tuned mass dampers in the levitated object
 Electromagnets controlled by electronics


For successful levitation and control of all 6 axes (degrees of freedom: 3 translational and
3 rotational), a combination of permanent magnets and electromagnets or diamagnets or
superconductors, as well as attractive and repulsive fields, can be used. From Earnshaw's
theorem, at least one stable axis must be present for the system to levitate successfully, but
the other axes can be stabilized using ferromagnetism. The primary methods used in maglev
trains are servo-stabilized electromagnetic suspension (EMS) and electrodynamic suspension (EDS).

Mechanical constraint (pseudo-levitation):

With a small amount of mechanical constraint for stability, achieving pseudo-levitation is
a relatively straightforward process. If two magnets are mechanically constrained along a single
axis, for example, and arranged to repel each other strongly, this will act to levitate one of the
magnets above the other.

Another geometry is where the magnets are attracted, but constrained from touching by a
tensile member, such as a string or cable.


Electromagnetic suspension

The attraction from a fixed-strength magnet decreases with increasing distance and
increases at closer distances; this is unstable. For a stable system the opposite is needed:
variations from the stable position should push the object back toward the target position.

Stable magnetic levitation can be achieved by measuring the position and speed of the
object being levitated, and using a feedback loop which continuously adjusts one or more
electromagnets to correct the object's motion, thus forming a servomechanism.

Many systems use magnetic attraction pulling upwards against gravity for these kinds of
systems as this gives some inherent lateral stability, but some use a combination of magnetic
attraction and magnetic repulsion to push upwards.

EMS magnetic levitation trains are based on this kind of levitation: The train wraps
around the track, and is pulled upwards from below. The servo controls keep it safely at a
constant distance from the track.
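The servo idea above can be sketched in a few lines. In the toy simulation below, a proportional-derivative controller adjusts the electromagnet current to hold a fixed gap; the mass, the gains and the simplified force law F = k(I/gap)² are all illustrative assumptions, not parameters of any real maglev system:

```python
# Minimal sketch of servo-stabilized electromagnetic suspension (EMS).
# An attraction that grows as the gap shrinks is inherently unstable;
# the PD feedback on the measured gap and speed restores stability.

def simulate_ems(steps=2000, dt=1e-3):
    m, g, k = 1.0, 9.81, 2.0     # mass (kg), gravity, magnet constant (assumed)
    gap, vel = 0.010, 0.0        # start 2 mm below the target gap
    target = 0.008               # desired gap to the electromagnet (m)
    kp, kd = 4.0, 0.12           # hand-tuned PD gains (A/m and A per m/s)
    i_hold = (m * g / k) ** 0.5 * target   # current balancing gravity at target
    for _ in range(steps):
        err = gap - target                 # positive when the gap is too large
        current = i_hold + kp * err + kd * vel
        f_up = k * (current / gap) ** 2    # attraction grows as the gap shrinks
        vel += (g - f_up / m) * dt         # net downward acceleration
        gap += vel * dt
    return gap

final_gap = simulate_ems()   # settles near the 8 mm target
```

Without the feedback terms (kp = kd = 0) the same loop diverges, which is Earnshaw's theorem in miniature.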

Induced currents:

Electrodynamic suspension

These schemes work by repulsion due to Lenz's law. When a conductor is presented
with a time-varying magnetic field, electrical currents are set up in the conductor which create a
magnetic field that causes a repulsive effect.

These kinds of systems typically show an inherent stability, although extra damping is
sometimes required.

Relative motion between conductors and magnets:

If one moves a base made of a very good electrical conductor such as copper, aluminium
or silver close to a magnet, an (eddy) current will be induced in the conductor that will oppose
the changes in the field and create an opposite field that will repel the magnet (Lenz's law). At a
sufficiently high rate of movement, a suspended magnet will levitate on the metal, or vice versa
with suspended metal. Litz wire made of wire thinner than the skin depth for the frequencies
seen by the metal works much more efficiently than solid conductors.
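The skin depth mentioned here has a standard closed form, δ = √(ρ/(π f μ)): wire much thinner than δ carries the induced current efficiently, which is why litz wire helps. A quick illustration (copper resistivity taken from standard tables):

```python
import math

MU0 = 4 * math.pi * 1e-7   # permeability of free space (H/m)

def skin_depth(resistivity, freq_hz, mu_r=1.0):
    """Skin depth in metres: delta = sqrt(rho / (pi * f * mu))."""
    return math.sqrt(resistivity / (math.pi * freq_hz * mu_r * MU0))

# Copper, rho ~ 1.68e-8 ohm-m: about 9 mm at mains frequency, but well
# under a millimetre in the kilohertz range used for levitation melting.
d_50hz = skin_depth(1.68e-8, 50.0)
d_10khz = skin_depth(1.68e-8, 10e3)
```

The 1/√f dependence is why the oscillating-field levitation schemes described below favour thin stranded conductors at high frequency.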

Oscillating electromagnetic fields:

A conductor can be levitated above an electromagnet (or vice versa) with an alternating
current flowing through it. This causes any regular conductor to behave like a dia-magnet, due to
the eddy currents generated in the conductor. Since the eddy currents create their own fields
which oppose the magnetic field, the conductive object is repelled from the electromagnet, and
most of the field lines of the magnetic field will no longer penetrate the conductive object.

This effect requires non-ferromagnetic but highly conductive materials like aluminium or
copper, as the ferromagnetic ones are also strongly attracted to the electromagnet (although at
high frequencies the field can still be expelled) and tend to have a higher resistivity giving lower
eddy currents. Again, litz wire gives the best results.

The effect can be used for stunts such as levitating a telephone book by concealing an
aluminium plate within it. At high frequencies (a few tens of kilohertz or so) and kilowatt
powers, small quantities of metals can be levitated and melted using levitation melting, without
the risk of the metal being contaminated by the crucible. One source of oscillating magnetic
field that is used is the linear induction motor, which can provide propulsion as well as levitation.

Diamagnetically stabilized levitation:

Earnshaw's theorem does not apply to diamagnets. These behave in the opposite manner
to normal magnets owing to their relative permeability of μr < 1 (i.e. negative magnetic
susceptibility). Diamagnetic levitation can be inherently stable.

A permanent magnet can be stably suspended by various configurations of strong
permanent magnets and strong diamagnets. When using superconducting magnets, the levitation
of a permanent magnet can even be stabilized by the small diamagnetism of water in human
fingers.

Diamagnetic levitation:

Diamagnetism is the property of an object which causes it to create a magnetic field in
opposition to an externally applied magnetic field, thus causing the material to be repelled by
magnetic fields. Diamagnetic materials cause lines of magnetic flux to curve away from the
material. Specifically, an external magnetic field alters the orbital velocity of electrons around
their nuclei, thus changing the magnetic dipole moment.

According to Lenz's law, this opposes the external field. Diamagnets are materials with a
magnetic permeability less than μ0 (a relative permeability less than 1). Consequently,
diamagnetism is a form of magnetism that is only exhibited by a substance in the presence of an
externally applied magnetic field. It is generally quite a weak effect in most materials, although
superconductors exhibit a strong effect.

Direct diamagnetic levitation

A substance that is diamagnetic repels a magnetic field. All materials have diamagnetic
properties, but the effect is very weak, and is usually overcome by the object's paramagnetic or
ferromagnetic properties, which act in the opposite manner. Any material in which the
diamagnetic component is stronger will be repelled by a magnet.

Diamagnetic levitation can be used to levitate very light pieces of pyrolytic graphite or
bismuth above a moderately strong permanent magnet. As water is predominantly diamagnetic,
this technique has been used to levitate water droplets and even live animals, such as a
grasshopper, frog and a mouse. However, the magnetic fields required for this are very high,
typically in the range of 16 teslas, and therefore create significant problems if ferromagnetic
materials are nearby.
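The field strengths quoted above follow from a simple force balance: levitation requires (|χ|/μ₀)·B·(dB/dz) = ρg, so the needed field-gradient product scales with density over susceptibility. A short sketch (susceptibility and density values taken from standard tables; treat the numbers as illustrative):

```python
import math

MU0 = 4 * math.pi * 1e-7   # permeability of free space (H/m)
G = 9.81                   # gravitational acceleration (m/s^2)

def required_field_gradient(density, chi):
    """Return the product B * dB/dz (T^2/m) needed to levitate a diamagnet.

    Force balance per unit volume: (|chi| / mu0) * B * dB/dz = rho * g,
    where chi is the (negative) volume magnetic susceptibility.
    """
    return MU0 * density * G / abs(chi)

water = required_field_gradient(1000.0, -9.05e-6)     # ~1400 T^2/m: needs ~16 T magnets
graphite = required_field_gradient(2200.0, -4.5e-4)   # ~60 T^2/m: strong NdFeB suffices
```

The twenty-fold gap between the two results is why pyrolytic graphite floats over desktop magnets while levitating a frog needed a research magnet.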


Super diamagnetism

Superconductors may be considered perfect diamagnets, and completely expel magnetic
fields due to the Meissner effect when the superconductivity initially forms; thus
superconducting levitation can be considered a particular instance of diamagnetic levitation. In a
type-II superconductor, the levitation of the magnet is further stabilized due to flux pinning
within the superconductor; this tends to stop the superconductor from moving with respect to the
magnetic field, even if the levitated system is inverted.

Spin-stabilized magnetic levitation:

A magnet or properly assembled array of magnets with a toroidal field can be stably
levitated against gravity when gyroscopically stabilized by spinning it in a second toroidal field
created by a base ring of magnet(s). However, this only works while the rate of precession is
between both upper and lower critical thresholds—the region of stability is quite narrow both
spatially and in the required rate of precession.

The first discovery of this phenomenon was by Roy M. Harrigan, a Vermont inventor
who patented a levitation device in 1983 based upon it. Several devices using rotational
stabilization (such as the popular Levitron branded levitating top toy) have been developed
citing this patent. Non-commercial devices have been created for university research
laboratories, generally using magnets too powerful for safe public interaction.

Strong focusing:

Earnshaw's theorem strictly applies only to static fields. Alternating magnetic fields, even
purely alternating attractive fields, can induce stability and confine a trajectory through a
magnetic field to give a levitation effect.

This is used in particle accelerators to confine and lift charged particles, and has been proposed
for maglev trains as well.


Maglev transportation:

Maglev, or magnetic levitation, is a system of transportation that suspends, guides and
propels vehicles, predominantly trains, using magnetic levitation from a very large number of
magnets for lift and propulsion. This method has the potential to be faster, quieter and smoother
than wheeled mass-transit systems. The technology has the potential to exceed 6,400 km/h (4,000
mph) if deployed in an evacuated tunnel. If not deployed in an evacuated tube, the power needed
for levitation is usually not a particularly large percentage of the total, and most of the power is
used to overcome air drag, as with any other high-speed train. Some maglev Hyperloop prototype
vehicles were developed as part of the Hyperloop pod competition in 2015–2016 and were
expected to make initial test runs in an evacuated tube later in 2016.

The highest recorded speed of a maglev train is 603 kilometers per hour (374.69 mph),
achieved in Japan on April 21, 2015, 28.2 km/h faster than the conventional TGV speed record.

1. Magnetic bearings
2. Flywheels
3. Centrifuges
4. Magnetic ring spinning
5. Levitation melting

Levitation melting

Electromagnetic levitation (EML), patented by Muck in 1923, is one of the oldest
levitation techniques used for containerless experiments. The technique enables the levitation of
an object using electromagnets. A typical EML coil has reversed winding of upper and lower
sections energized by a radio-frequency power supply.

Maglev in India

The Somnath floating idol:

The famous Persian geographer Al Kazvini wrote the following interesting account –

“Somnath is a celebrated city of India, situated on the shore of the sea and washed by its
waves. Among the wonders of the place was the temple in which was placed the idol called
Somnath. This idol was in the middle of the temple without anything to support it from
below, or to suspend it from above. It was regarded with great veneration by the Hindus, and
whoever beheld it floating in the air was struck with amazement, whether he was a Mussulman
or an infidel."

V. Geetha, Assistant Professor, Department of CSE, SCSVMV

Wave energy, also known as ocean wave energy, is another type of ocean-based
renewable energy source that uses the power of the waves to generate electricity. Unlike
tidal energy, which uses the ebb and flow of the tides, wave energy uses the vertical movement
of the surface water that produces the waves.

Wave power is the capture of energy of wind waves to do useful work – for example, electricity
generation, water desalination, or pumping water. A machine that exploits wave power is a wave
energy converter (WEC).

Wave power is distinct from tidal power, which captures the energy of the current caused
by the gravitational pull of the Sun and Moon. Waves and tides are also distinct from ocean
currents which are caused by other forces including breaking waves, wind, the Coriolis
effect, cabbeling, and differences in temperature and salinity.

Waves are generated by wind passing over the surface of the sea. As long as the waves
propagate slower than the wind speed just above them, there is an energy transfer from the
wind to the waves. Both air pressure differences between the upwind and the lee side of a
wave crest and friction on the water surface by the wind (which puts the water under shear
stress) cause the growth of the waves.

Wave height is determined by wind speed, the duration of time the wind has been
blowing, fetch (the distance over which the wind excites the waves) and by the depth and
topography of the seafloor (which can focus or disperse the energy of the waves). A given wind
speed has a matching practical limit over which time or distance will not produce larger waves.
When this limit has been reached the sea is said to be "fully developed".

In general, larger waves are more powerful but wave power is also determined by wave
speed, wavelength, and water density.

Oscillatory motion is highest at the surface and diminishes exponentially with depth.
However, for standing waves (clapotis) near a reflecting coast, wave energy is also present as
pressure oscillations at great depth, producing microseisms. These pressure fluctuations at
greater depth are too small to be interesting from the point of view of wave power.

The waves propagate on the ocean surface, and the wave energy is also transported
horizontally with the group velocity. The mean transport rate of the wave energy through a
vertical plane of unit width, parallel to a wave crest, is called the wave energy flux (or wave
power, which must not be confused with the actual power generated by a wave power device).

In a sea state, the mean energy density per unit area of gravity waves on the water
surface is, according to linear wave theory, proportional to the wave height squared. Here E is
the mean wave energy density per unit horizontal area (J/m²), the sum of the kinetic and
potential energy densities per unit horizontal area. The potential energy density is equal to the
kinetic energy density, each contributing half of the wave energy density E, as can be expected
from the equipartition theorem. In ocean waves, surface tension effects are negligible for
wavelengths above a few decimetres.

As the waves propagate, their energy is transported, and the energy transport velocity is
the group velocity. As a result, the wave energy flux through a vertical plane of unit width
perpendicular to the wave propagation direction is equal to P = E·cg, with cg the group velocity
(m/s). Due to the dispersion relation for water waves under the action of gravity, the group
velocity depends on the wavelength λ, or equivalently, on the wave period T. Further, the
dispersion relation is a function of the water depth h. As a result, the group velocity behaves
differently in the limits of deep and shallow water, and at intermediate depths.
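In the deep-water limit the relations above reduce to a simple closed form: with E = (1/16)ρgH² and cg = gT/(4π), the flux per metre of crest is P = ρg²H²T/(64π). The sketch below is our own illustration using these standard linear-theory formulas, with an assumed sea-water density:

```python
import math

RHO = 1025.0   # sea-water density (kg/m^3), assumed
G = 9.81       # gravitational acceleration (m/s^2)

def wave_power_deep(height_m0, period_s):
    """Deep-water wave energy flux per metre of wave crest (W/m).

    E = (1/16) * rho * g * H^2   (mean energy density per unit area)
    cg = g * T / (4 * pi)        (deep-water group velocity)
    P = E * cg = rho * g^2 * H^2 * T / (64 * pi)
    """
    return RHO * G ** 2 * height_m0 ** 2 * period_s / (64 * math.pi)

# A moderate swell: significant wave height 3 m, period 8 s
p = wave_power_deep(3.0, 8.0)   # roughly 35 kW per metre of crest
```

The H² scaling is why wave-power estimates are dominated by the largest sea states.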


T.Jayanthi, Assistant Professor, Department of CSE, SCSVMV

A laser is a device that emits light through a process of optical amplification based on
the stimulated emission of electromagnetic radiation. The term "laser" originated as
an acronym for "Light Amplification by Stimulated Emission of Radiation". The first
laser was built in 1960 by Theodore H. Maiman at Hughes Research Laboratories, based on
theoretical work by Charles Hard Townes and Arthur Leonard Schawlow.

A laser differs from other sources of light in that it emits light coherently. Spatial
coherence allows a laser to be focused to a tight spot, enabling applications such as laser
cutting and lithography. Spatial coherence also allows a laser beam to stay narrow over great
distances (collimation), enabling applications such as laser pointers and lidar. Lasers can also
have high temporal coherence, which allows them to emit light with a very narrow spectrum, i.e.,
they can emit a single color of light. Alternatively, temporal coherence can be used to produce
pulses of light with a broad spectrum but durations as short as a femtosecond.

Lasers are distinguished from other light sources by their coherence. Spatial coherence is
typically expressed through the output being a narrow beam, which is diffraction-limited. Laser
beams can be focused to very tiny spots, achieving a very high irradiance, or they can have very
low divergence in order to concentrate their power at a great distance. Temporal (or longitudinal)
coherence implies a polarized wave at a single frequency, whose phase is correlated over a
relatively great distance (the coherence length) along the beam. A beam produced by a thermal
or other incoherent light source has an instantaneous amplitude and phase that vary randomly
with respect to time and position, thus having a short coherence length.
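Temporal coherence can be quantified as a coherence length, roughly the speed of light divided by the spectral linewidth. A small illustration (the linewidth figures below are typical order-of-magnitude assumptions, not measured values for any particular device):

```python
C = 299_792_458.0   # speed of light in vacuum (m/s)

def coherence_length(linewidth_hz):
    """Rough coherence length L ~ c / delta-nu for spectral width delta-nu."""
    return C / linewidth_hz

multimode_hene = coherence_length(1.5e9)   # ~0.2 m for a typical multimode HeNe
single_mode = coherence_length(1e6)        # ~300 m for a stabilized single-mode laser
white_light = coherence_length(3e14)       # ~1 micrometre for broadband thermal light
```

The contrast between the last two values is the quantitative version of the paragraph above: narrow spectrum means long-range phase correlation; a thermal source decorrelates within a wavelength or so.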

Lasers are characterized according to their wavelength in a vacuum. Most "single
wavelength" lasers actually produce radiation in several modes with slightly different
wavelengths. Although temporal coherence implies monochromaticity, there are lasers that emit
a broad spectrum of light or emit different wavelengths of light simultaneously. Some lasers are
not single spatial mode and have light beams that diverge more than is required by the diffraction
limit. All such devices are classified as "lasers" based on their method of producing light, i.e.,
stimulated emission. Lasers are employed where light of the required spatial or temporal
coherence cannot be produced using simpler technologies.

The word laser started as an acronym for "light amplification by stimulated emission of
radiation". In this usage, the term "light" includes electromagnetic radiation of any frequency,
not only visible light, hence the terms infrared laser, ultraviolet laser, X-ray laser and gamma-
ray laser. Because the microwave predecessor of the laser, the maser, was developed first,
devices of this sort operating at microwave and radio frequencies are referred to as "masers"
rather than "microwave lasers" or "radio lasers". In the early technical literature, especially
at Bell Telephone Laboratories, the laser was called an optical maser; this term is now obsolete.

A laser that produces light by itself is technically an optical oscillator rather than
an optical amplifier as suggested by the acronym. It has been humorously noted that the acronym
LOSER, for "light oscillation by stimulated emission of radiation", would have been more
correct. With the widespread use of the original acronym as a common noun, optical amplifiers
have come to be referred to as "laser amplifiers", notwithstanding the apparent redundancy in
that designation.

The back-formed verb to lase is frequently used in the field, meaning "to produce laser
light," especially in reference to the gain medium of a laser; when a laser is operating it is said to
be "lasing." Further use of the words laser and maser in an extended sense, not referring to laser
technology or devices, can be seen in usages such as astrophysical maser and atom laser.

A laser consists of a gain medium, a mechanism to energize it, and something to provide
optical feedback. The gain medium is a material with properties that allow it to amplify light by
way of stimulated emission. Light of a specific wavelength that passes through the gain medium
is amplified (increases in power).

For the gain medium to amplify light, it needs to be supplied with energy in a process
called pumping. The energy is typically supplied as an electric current or as light at a different
wavelength. Pump light may be provided by a flash lamp or by another laser.

The most common type of laser uses feedback from an optical cavity—a pair of mirrors
on either end of the gain medium. Light bounces back and forth between the mirrors, passing
through the gain medium and being amplified each time. Typically one of the two mirrors,
the output coupler, is partially transparent. Some of the light escapes through this mirror.
Depending on the design of the cavity (whether the mirrors are flat or curved), the light coming
out of the laser may spread out or form a narrow beam. In analogy to electronic oscillators, this
device is sometimes called a laser oscillator.

Most practical lasers contain additional elements that affect properties of the emitted
light, such as the polarization, wavelength, and shape of the beam.

Dr.S.Omkumar, Assistant Professor, Department of ECE, SCSVMV

Our environment is constantly changing; there is no denying that. However, as our
environment changes, so does the need to become increasingly aware of the problems that
surround it. Global warming has become an undisputed fact about our current livelihoods; our
planet is warming up and we are definitely part of the problem. Our planet is poised at the
brink of a severe environmental crisis. Current environmental problems make us vulnerable
to disasters and tragedies, now and in the future. Some of these environmental problems are
discussed below.

Global Warming: Climate change, such as global warming, is the result of human practices
like the emission of greenhouse gases. Global warming leads to rising temperatures of the
oceans and the earth's surface, causing melting of the polar ice caps, a rise in sea levels, and
unnatural patterns of precipitation such as flash floods, excessive snow or desertification.

Waste Disposal: The over-consumption of resources and the creation of plastics are producing
a global crisis of waste disposal. Nuclear waste disposal has tremendous health hazards
associated with it. Plastic, fast-food packaging and cheap electronic waste threaten the
well-being of humans. Waste disposal is one of the most urgent current environmental problems.

Loss of Biodiversity: Human activity is leading to the extinction of species and habitats and
the loss of biodiversity. Ecosystems, which took millions of years to perfect, are in danger
whenever any species population is decimated.

Deforestation: Our forests are natural sinks of carbon dioxide and produce fresh oxygen, as
well as helping to regulate temperature and rainfall. At present forests cover 30% of the land,
but every year tree cover equal in area to the country of Panama is lost to the growing
population's demand for more food, shelter and clothing. Deforestation simply means clearing
the green cover to make that land available for residential, industrial or commercial purposes.

Acid Rain: Acid rain occurs due to the presence of certain pollutants in the atmosphere. Acid
rain is a well-known environmental problem that can have serious effects on human health,
wildlife and aquatic species.

Genetic Engineering: Genetic modification of food using biotechnology is called genetic
engineering. Genetic modification of food can result in increased toxins and diseases, as genes
from an allergenic plant can transfer to the target plant. Genetically modified crops can cause
serious environmental problems, as an engineered gene may prove toxic to wildlife.

Many people don’t consider that what they do will affect future generations. If humans
continue moving forward in such a harmful way towards the future, then there will be no
future to consider. By raising awareness in your local community and within your families
about these issues, you can help contribute to a more environmentally conscious and friendly
place for you to live.

S.Chandramohan, Assistant Professor, Department of ECE, SCSVMV

Introduction: IIoT is a network of devices connected via communications technologies to form
systems that monitor, collect, exchange and analyze data, delivering valuable insights that enable
industrial companies to make smarter business decisions faster. An Industrial IoT system consists of:

1. Intelligent controllers, sensors and security components that can sense, communicate
and store information;
2. Data communications infrastructure, e.g., the cloud;
3. Analytics and applications that generate business information from raw data; and
4. People.

Advantages of IIoT Versus Conventional methods:

This involves organizations using real-time data generated from IIoT systems to predict
defects in machinery, for example, before they occur, enabling companies to take action to
address those issues before a part fails or a machine goes down.
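The predictive-maintenance idea above can be sketched as a simple streaming anomaly detector: flag a sensor reading that drifts several standard deviations away from its recent history. This is a toy illustration only; the window size and threshold are our own assumptions, not values from any real IIoT platform:

```python
from collections import deque
from statistics import mean, stdev

def make_detector(window=20, z_threshold=3.0):
    """Return a checker that flags readings far outside recent history."""
    history = deque(maxlen=window)
    def check(reading):
        anomalous = False
        if len(history) == window:           # only judge once history is full
            mu, sigma = mean(history), stdev(history)
            anomalous = sigma > 0 and abs(reading - mu) / sigma > z_threshold
        history.append(reading)
        return anomalous                     # True -> schedule maintenance
    return check

check = make_detector()
normal = [50.0 + 0.1 * (i % 5) for i in range(30)]   # steady machine readings
flags = [check(r) for r in normal] + [check(75.0)]   # then a sudden spike
```

In a real deployment the detector would run close to the sensor (edge analytics) and only the flagged events would travel to the cloud, which is exactly the data-reduction role the architecture above assigns to the analytics layer.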

Figure 1: IIoT Architecture

When products are connected to the internet of things, the manufacturer can capture and
analyze data about how customers use their products, enabling manufacturers and product
designers to tailor future IoT devices and build more customer-centric product roadmaps.

The Internet of Things (IoT) is stretching beyond our bodies, homes, and vehicles to include
things never-before networked: equipment, machines, sensors, and more. As the industrial IoT
(IIoT) expands, connecting everything everywhere, a continuous stream of data is being
generated — a whole lot of it.

Figure 2: Evolution of the Industrial Revolution

Industry 4.0 is the name given to the current trend of automation and data exchange
in manufacturing technologies. It includes cyber-physical systems, the Internet of Things and
cloud computing. Over the Internet of Things, cyber-physical systems communicate and
cooperate with each other and with humans in real time, both internally and across the
organizational services offered and used by participants in the value chain. Industry 4.0 is
commonly referred to as the fourth industrial revolution.

The rapid growth of these technologies encourages us to use smartphones to remotely control home appliances. An automated device can work with versatility, diligence and a very low error rate. Home automation is therefore a significant topic for researchers and home appliance companies alike. Automation not only reduces human labor but also saves time and energy.
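A minimal sketch of the home-automation idea, with hypothetical appliance names and a toy hour-based scheduler (no real smart-home API is assumed):

```python
from dataclasses import dataclass, field

@dataclass
class Appliance:
    """A hypothetical smart appliance controlled remotely (e.g. from a phone)."""
    name: str
    is_on: bool = False

    def turn_on(self):
        self.is_on = True

    def turn_off(self):
        self.is_on = False

@dataclass
class HomeController:
    """Minimal hub: schedule on/off actions by hour to save time and energy."""
    appliances: dict = field(default_factory=dict)
    schedule: list = field(default_factory=list)  # entries: (hour, name, action)

    def add(self, appliance):
        self.appliances[appliance.name] = appliance

    def at(self, hour, name, action):
        self.schedule.append((hour, name, action))

    def tick(self, hour):
        """Called once per hour; applies any actions scheduled for this hour."""
        for h, name, action in self.schedule:
            if h == hour:
                getattr(self.appliances[name], action)()

hub = HomeController()
hub.add(Appliance("water_heater"))
hub.at(6, "water_heater", "turn_on")   # heat water before morning
hub.at(8, "water_heater", "turn_off")  # switch off to save energy
hub.tick(6)
print(hub.appliances["water_heater"].is_on)  # True
```

In a real system the `tick` call would be driven by a clock or a message from a phone app, and actions would go out over a protocol such as Wi-Fi or ZigBee rather than mutating objects in memory.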


R.A.V.S.R. Vamsi Krishna, BE-I Year, Section: S4, Department of CSE, SCSVMV

An icy world on the outskirts of our solar system, Pluto was discovered by Clyde
Tombaugh in 1930.
The name Pluto was proposed by an eleven-year-old schoolgirl from England named
Venetia Burney (1918–2009). Venetia, who had an interest in Greek and Roman mythology,
suggested to her grandfather, Falconer Madan, that the new planet should be named Pluto
because it was dark and far away, like the god of the underworld. Venetia’s grandfather, a
librarian at Oxford’s Bodleian Library, passed on the suggestion to an astronomer friend of
his. This astronomer, Herbert Hall Turner, in turn cabled the suggestion to the Lowell
Observatory. Astronomers at the Lowell Observatory liked the suggestion, and Tombaugh’s
newly discovered celestial body was officially named Pluto on March 24, 1930. Venetia
received £5 as a reward for her suggestion.

Astronomical observations of Uranus and Neptune in the late 19th and early 20th
centuries revealed a slight discrepancy between the actual observed orbits and what the
calculated theoretical orbits of these two planets should have been. American astronomer
Percival Lowell (1855–1916) believed that these discrepancies were due to the influence of
an undiscovered ninth planet that he called Planet X. Lowell searched for Planet X between
1906 and 1916, but was unsuccessful in his endeavors.

The task was taken up again by Clyde Tombaugh (1906–1997) in 1929. Tombaugh
was an American astronomer working at the Lowell Observatory in Flagstaff, Arizona.
Tombaugh’s technique was to take pictures of the same part of the night sky, but over two
different dates. A comparison of the photographic plates would then reveal if any of the
“stars” had changed position. Tombaugh reasoned that if one of the stellar objects had
changed position in the relatively short time interval between the dates of the two plates, then
it was not really a star. On February 18, 1930, Tombaugh discovered just such a moving
object in a comparison of photographic plates that had been taken earlier, on January 23 and January 29, 1930. A new planet, to be named Pluto, had been discovered.
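Tombaugh's plate-comparison technique can be sketched in a few lines of code. The positions and object names below are invented for illustration; the idea is simply that anything whose position shifts between the two exposures is not a fixed star.

```python
def find_movers(plate1, plate2, tol=0.5):
    """Toy version of Tombaugh's blink comparison: given object positions
    measured on two photographic plates, report objects from plate 1 that
    are NOT at (nearly) the same position on plate 2, i.e. candidates
    that moved between exposures."""
    movers = []
    for name, (x1, y1) in plate1.items():
        x2, y2 = plate2[name]
        if abs(x2 - x1) > tol or abs(y2 - y1) > tol:
            movers.append(name)
    return movers

# Fixed stars stay put between the two exposures; "object_3" shifts.
jan23 = {"star_1": (10.0, 20.0), "star_2": (55.2, 14.8), "object_3": (30.0, 40.0)}
jan29 = {"star_1": (10.0, 20.0), "star_2": (55.2, 14.8), "object_3": (31.6, 40.9)}
print(find_movers(jan23, jan29))  # ['object_3'] -- a planet candidate
```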

Pluto is the largest and second-most-massive (after Eris) known dwarf planet in the
Solar System, and the ninth-largest and tenth-most-massive known object directly orbiting
the Sun. It is the largest known trans-Neptunian object by volume but is less massive than
Eris. Like other Kuiper belt objects, Pluto is primarily made of ice and rock and is relatively
small—about one-sixth the mass of the Moon and one-third its volume. It has a moderately
eccentric and inclined orbit during which it ranges from 30 to 49 astronomical units or AU
(4.4–7.4 billion km) from the Sun. This means that Pluto periodically comes closer to the Sun
than Neptune, but a stable orbital resonance with Neptune prevents them from colliding.
Light from the Sun takes about 5.5 hours to reach Pluto at its average distance (39.5 AU).

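That light travel time is easy to verify from the distance alone:

```python
AU_KM = 149_597_870.7   # one astronomical unit in kilometres
C_KM_S = 299_792.458    # speed of light in km/s

distance_km = 39.5 * AU_KM
hours = distance_km / C_KM_S / 3600
print(f"{hours:.2f} hours")  # 5.48 hours, matching the "about 5.5 hours" above
```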
Pluto has five known moons: Charon (the largest, with a diameter just over half that
of Pluto), Styx, Nix, Kerberos, and Hydra. Pluto and Charon are sometimes considered a
binary system because the barycenter of their orbits does not lie within either body.
While many people can point to a picture of Jupiter or Saturn and call it a "planet,"
the definition of this word is much more subtle and has changed over time. Many
astronomers decided on a new definition in 2006 after the discovery of several worlds at the
fringes of the solar system — a decision that remains controversial.
The International Astronomical Union defined a planet as an object that:
1. Orbits the Sun;
2. Has sufficient mass to be round, or nearly round;
3. Is not a satellite (moon) of another object; and
4. Has cleared debris and small objects from the area around its orbit.
The IAU also created a newer classification, "dwarf planet," which is an object that
meets planetary criteria except that it has not cleared debris from its orbital neighbourhood.
This definition meant that Pluto — considered a planet at the time — was demoted and
reclassified as a dwarf planet.



A. Rajasekaran, Assistant Professor, Department of ECE, SCSVMV

K. Saraswathi, Assistant Professor, Department of E&I, SCSVMV

RF energy is currently broadcast from billions of radio transmitters around the world, including mobile telephones, handheld radios, mobile base stations, and television/radio broadcast stations. The ability to harvest RF energy, from ambient or dedicated sources, enables wireless charging of low-power devices and brings benefits to product design, usability, and reliability. Battery-based systems can be trickle-charged to eliminate battery replacement or to extend the operating life of systems using disposable batteries. Battery-free devices can be designed to operate on demand or when sufficient charge is accumulated. In both cases, these devices can be free of connectors, cables, and battery access panels, and have freedom of placement and mobility during charging and usage.
Energy Sources

The obvious appeal of harvesting ambient RF energy is that it is essentially “free”
energy. The number of radio transmitters, especially for mobile base stations and handsets,
continues to increase. ABI Research and iSupply estimate the number of mobile phone
subscriptions has recently surpassed 5 billion, and the ITU estimates there are over 1 billion
subscriptions for mobile broadband. Mobile phones represent a large source of transmitters
from which to harvest RF energy, and will potentially enable users to provide power-on-
demand for a variety of close range sensing applications. Also, consider the number of WiFi
routers and wireless end devices such as laptops. In some urban environments, it is possible
to literally detect hundreds of WiFi access points from a single location. At short range, such
as within the same room, it is possible to harvest a tiny amount of energy from a typical WiFi
router transmitting at a power level of 50 to 100 mW. For longer-range operation, larger
antennas with higher gain are needed for practical harvesting of RF energy from mobile base
stations and broadcast radio towers. In 2005, Powercast demonstrated ambient RF energy
harvesting at 1.5 miles (~2.4 km) from a small, 5-kW AM radio station.

RF energy can be broadcast in unlicensed bands such as 868 MHz, 915 MHz, 2.4 GHz, and 5.8 GHz when more power or more predictable energy is needed than is available from ambient sources. At 915 MHz, government regulations limit the output power of radios using unlicensed frequency bands to 4 W effective isotropic radiated power (EIRP), as in the case of radio-frequency identification (RFID) interrogators. As a comparison, earlier generations of mobile phones based on analog technology had a maximum transmission power of 3.6 W, and Powercast’s TX91501 transmitter that sends power and data is 3 W.

RF Harvesting Receivers

RF energy harvesting devices, such as Powercast’s Powerharvester® receivers, convert RF energy into DC power. These components are easily added to circuit board designs and work with standard or custom 50-ohm antennas. With the current RF sensitivity of the P2110 Powerharvester receiver at -11 dBm, powering devices or charging batteries at distances of 40-45 feet from a 3 W transmitter is easily achieved and can be verified with Powercast’s development kits. Improving the RF sensitivity allows for RF-to-DC power conversion at greater distances from an RF energy source. However, as the range increases, the available power and rate of charge decrease.
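The link budget implied by these numbers can be sanity-checked with the Friis free-space formula. The sketch below is a rough estimate only: ideal free-space propagation is assumed, and the 6 dBi receive antenna gain is a hypothetical value, not a Powercast specification.

```python
import math

def received_dbm(eirp_dbm, freq_hz, dist_m, rx_gain_dbi):
    """Friis free-space estimate of power arriving at a harvester's antenna."""
    wavelength = 3e8 / freq_hz
    # Free-space path loss in dB for an isotropic receiver
    fspl_db = 20 * math.log10(4 * math.pi * dist_m / wavelength)
    return eirp_dbm - fspl_db + rx_gain_dbi

# 3 W transmitter (~34.8 dBm EIRP) at 915 MHz, 40 ft (~12.2 m) away,
# with an assumed 6 dBi receive antenna:
eirp = 10 * math.log10(3000)  # 3000 mW expressed in dBm
print(round(received_dbm(eirp, 915e6, 12.2, 6.0), 1))  # about -12.6 dBm
```

The result lands in the same ballpark as the -11 dBm sensitivity quoted above, which is consistent with operation at around 40 feet being near the edge of the usable range.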

An important performance aspect of an RF energy harvester is the ability to maintain
RF-to-DC conversion efficiency over a wide range of operating conditions, including
variations of input power and output load resistance. For example, Powercast’s RF energy-
harvesting components do not require additional energy-consuming circuitry for maximum
power point tracking (MPPT) as is required with other energy-harvesting technologies.
Powercast’s components maintain high RF-to-DC conversion efficiency over a wide
operating range that enables scalability across applications and devices. RF energy-harvesting
circuits that can accommodate multi-band or wideband frequency ranges, and automatic
frequency tuning, will further increase the power output, potentially expand mobility options,
and simplify installation.

Typical Applications

RF energy can be used to charge or operate a wide range of low-power devices. At close range to a low-power transmitter, this energy can be used to trickle-charge a number of devices including GPS or RTLS (real-time location system) tracking tags, wearable medical sensors, and consumer
electronics such as e-book readers and headsets. At longer range the power can be used for
battery-based or battery-free remote sensors for HVAC control and building automation,
structural monitoring, and industrial control. Depending on the power requirements and
system operation, power can be sent continuously, on a scheduled basis, or on demand. In large-scale sensor deployments, significant labor cost can be avoided by eliminating future maintenance efforts to replace batteries.

RF-Powered Wireless Sensors

Available power from a 3W transmitter will be low milliwatts within a few feet and
tens of microwatts at around 40 feet. This amount of power is best used for devices with low-
power consumption and long or frequent charge cycles. Typically, devices that operate for
weeks, months, or years on a single set of batteries are good candidates for
being wirelessly recharged by RF energy. In some applications simply augmenting the
battery life or offsetting the sleep current of a microcontroller is enough to justify adding RF-
based wireless power and energy harvesting technology.

A network of transmitters can be positioned in a facility to provide wireless power on a room-by-room basis, or in a many-to-many charging topology. Mobile phones can be used as portable power sources for a number of battery-free wireless devices. Imagine a mobile phone powering a battery-less, body-worn sensor that sends data to the phone via a commonly used protocol such as Wi-Fi, Bluetooth, or ZigBee. This data can be displayed locally on the handset or transmitted by the phone to a monitoring service. Powercast has already demonstrated this application using ambient RF energy from an iPhone.
Improved Product Design

Products with embedded wireless power technology can be sealed from
environmental conditions such as moisture and from user access. In addition, connectors and
cables can be eliminated. Product reliability and lifecycle can be significantly improved as a
result. When in range of a suitable RF source, charging is automatic and transparent to the
end-user which provides increased convenience of use. With Powercast’s components,
multiple battery chemistries and charge voltages can be supported which allows for
maximum power storage flexibility.


Ambient radio waves are universally present over an ever-increasing range of frequencies and power levels, especially in highly populated urban areas. These radio waves represent a unique and widely available source of energy, if they can be effectively and efficiently harvested. The growing number of wireless transmitters is naturally resulting in
increased RF power density and availability. Dedicated power transmitters further enable
engineered and predictable wireless power solutions. With continued decreases in the power
consumption of electronic components, increased sensitivity of passive receivers for RF
harvesting, and improved performance of low-leakage energy storage devices, the
applications for wire-free charging by means of RF-based wireless power and energy
harvesting will continue to grow.

Palle Anurag Kashyap, BE-I Year, Section: S3, Department of CSE, SCSVMV

What is an alien?
An alien is a foreigner, especially one who is not naturalized in the country where he or she is living. Aliens may be harmful or friendly.

Incidents Related to UFOs:

In the 19th century
One of the first recorded UFO sightings in the United Kingdom dates from this period (there were also sightings reported in earlier years). In 1801, a UFO was spotted over East Yorkshire. The Hull Packet reported that a fiery object appeared suddenly over the Humber: an immense moon-like globe with a black bar across the centre of its face. For a moment it bathed Hull and the Humber in a mysterious blue light; then it split into seven fireballs and vanished.

In the 20th century
In 1909, strange moving lights and some solid bodies were seen in Otago and elsewhere in New Zealand, and were reported to newspapers.
In Japan, the radar of an F-61 Black Widow detected a target below the aircraft. While the crew tried to intercept it, the pilot saw an object that appeared as a stubby cigar; then the object accelerated and vanished.
One of the latest sightings was on 08-11-2018, when several airliner pilots reported seeing “very bright lights” off the south-west coast of Ireland, moving at around “Mach 2” before veering off to the north. The sightings prompted the Irish Aviation Authority to launch an investigation.

On 25 June 2015 in Kanpur, a schoolboy claimed that he had captured photographs of a UFO from his house rooftop.
On 4 August, soldiers of the Indian Army observed a UFO at Ladakh, and it is reported that army troops observed more than 100 UFO movements along the Arunachal Pradesh border area over a period of seven months.

NASA’s New Planet Hunter Found Its 1st Alien World

NASA’s newest planet hunter has reported detecting its first alien world: a “super-Earth” that is likely evaporating under the heat from its star.
The Transiting Exoplanet Survey Satellite (TESS) launched to Earth orbit on April 18 atop a SpaceX Falcon 9 rocket. The space telescope is analysing several hundred thousand of the brightest stars in the Sun’s neighbourhood, looking for the tiny dips in brightness caused by the passage of orbiting planets as small as Earth across the faces of those stars.
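As a toy illustration of transit photometry (not the actual TESS pipeline), the sketch below flags points in a simulated light curve that dip below the median brightness by more than a chosen depth.

```python
def find_transits(flux, depth=0.002):
    """Toy transit search: flag samples where the normalized brightness
    drops more than `depth` below the median (out-of-transit) level."""
    ordered = sorted(flux)
    median = ordered[len(ordered) // 2]
    return [i for i, f in enumerate(flux) if median - f > depth]

# Simulated light curve: flat at 1.0 with a shallow dip (a transiting planet).
lc = [1.0, 1.0001, 0.9999, 0.996, 0.996, 0.9961, 1.0, 1.0002]
print(find_transits(lc))  # [3, 4, 5] -- the in-transit samples
```

A real search folds the light curve over many candidate periods to build up the signal of repeated transits, but the core observable is the same: a small, periodic dip below the star's baseline brightness.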
Scientists have used TESS data to discover a new planet around the star Pi Mensae, also known as HD 39091, which is located about 59.5 light-years from Earth in the constellation Mensa (the Table). Pi Mensae is a yellow dwarf star like the Sun and the second brightest among the stars known to have transiting exoplanets.
Previous research had already spotted a gas giant around Pi Mensae that is about ten times more massive than Jupiter. The exoplanet, named Pi Mensae b, has a highly oval-shaped, “eccentric” orbit that carries it several astronomical units (AU) from its star. (One AU is the average distance between Earth and the Sun, about 93 million miles, or 150 million kilometers.)
Now, scientists have detected a new world around Pi Mensae, one about 2.14 times Earth’s diameter and 4.82 times Earth’s mass. This super-Earth, dubbed Pi Mensae c, orbits about 0.07 AU from its star, more than five times closer than Mercury orbits the Sun.
Pi Mensae c is a super-Earth, a class of planets slightly larger and more massive than our world.
The density of Pi Mensae c is consistent with a picture in which “the entire planet is made of water,” said study lead author Chelsea Huang of the Massachusetts Institute of Technology. However, “it’s more likely to have a rocky core and an extended atmosphere made of hydrogen and helium,” she added. “We also think this planet might be evaporating right now, given the intense radiation it gets from its host star.”
Future research can investigate the odd configuration of Pi Mensae’s two known planets. The oval-shaped orbit of the Jupiter-like Pi Mensae b stands in stark contrast to the circular orbit of our own Jupiter. This suggests that “something must have happened in the history of this planetary system to excite the orbit of the Jupiter-like planet,” Huang said. “If so, how did the inner system survive? These questions still need further investigation, and understanding them will tell us a lot about planet-formation theory.”


J Suganthi, Assistant Professor, Department of Physics, SCSVMV

A tiny Neptune moon spotted by the Hubble Space Telescope may have broken off from a larger moon. An artist's concept shows the tiny moon Hippocamp, discovered with the Hubble Space Telescope in 2013. Only 20 miles across, it may actually be a broken-off fragment of a much larger neighboring moon, Proteus, seen as a crescent in the background. This is the first
evidence for a moon being an offshoot of a comet collision with a much larger parent body.

The tiny moon, named Hippocamp, is unusually close to a much larger Neptunian moon
called Proteus. Normally, a moon like Proteus should have gravitationally swept aside or
swallowed the smaller moon while clearing out its orbital path.

Researchers were initially surprised to find the tiny rock so close inside the orbit of
Proteus, which is more than 100 times its size. Observations suggest tidal forces have been
slowly pushing Proteus away from Neptune; a few billion years ago, it would have sat right
where Hippocamp is today. "Our initial thought was that's a very strange place to find a moon,"
Showalter said. Proteus also has a massive crater on its surface, called Pharos, likely left behind
after an impact from a comet or another passing object that nearly destroyed the moon at some
point in its history.

Perhaps, Showalter and his colleagues suggest, Hippocamp is some of the shrapnel from
that ancient collision. Only by sending a spacecraft back to the Neptune system to compare the
worlds' compositions can scientists know for sure.

Though the outer solar system is sometimes seen as dark, cold and dreary, the story of
Hippocamp demonstrates how much activity has gone unnoticed in this distant region, said
Kathleen Mandt, a planetary scientist at the Johns Hopkins University Applied Physics
Laboratory who was not involved in the new research.

Scientists say Neptune's icy largest moon, Triton, is an object acquired from the Kuiper
belt sometime after the planet's formation. Its arrival probably jostled the inner moons, causing
collisions that flung some bodies outward and fragmented others.
"It's just fascinating for dynamicists" studying how solar system bodies interact and evolve,
Mandt said.
Hippocamp, originally known as S/2004 N 1, is a small moon of Neptune, about 35 km
(20 mi) in diameter, which orbits the planet in just under one Earth day. Its discovery on 1 July
2013 increased the number of Neptune's known satellites to fourteen. The moon is so dim that it
was not observed when the Voyager 2 space probe flew by Neptune and its moons in 1989.
Mark Showalter of the SETI Institute found it by analyzing archived Neptune photographs
the Hubble Space Telescope captured between 2004 and 2009.
The moon was formally numbered Neptune XIV (14) on 25 September 2018 in a Minor Planet Circular.

Hippocamp is assumed to resemble Neptune's other inner satellites in having a surface as
dark as "dirty asphalt". Their geometrical albedos range from 0.07 to 0.10. Derived from
Hippocamp's apparent magnitude of 26.5, its diameter was initially thought to be around 16 to
20 km, making it the smallest of Neptune's known moons. More recent observations of Neptune's moons have shown that Hippocamp is almost twice as large as previously thought, giving it a diameter of 34.8 km. However, it remains by a wide margin the smallest of Neptune's inner, regular satellites.

The near-infrared spectra of Neptune's rings and inner moons have been examined with the HST NICMOS instrument. Similar dark, reddish material, characteristic of small outer Solar System bodies, appears to be present on all their surfaces. The data is consistent with organic compounds containing C−H and/or C≡N bonds, but the spectral resolution was inadequate to identify the molecules. Water ice, abundant in the outer Solar System, is believed to be present, but its spectral signature could not be observed (unlike the case of the small Uranian satellites).

Neptune and its smallest moon Hippocamp.

Hippocamp completes one revolution around Neptune every 22 hours and 28.1 minutes
(0.9362 days), implying a semi-major axis, or orbital distance of 105,283 km (65,420 mi), just
over a quarter that of Earth's Moon, and roughly twice the average radius of Neptune's rings.
Both its inclination and eccentricity are close to zero. It orbits between Larissa and Proteus,
making it the second outermost of Neptune's regular satellites. Its small size at this location runs
counter to a trend among the other regular Neptunian satellites of increasing diameter with
increasing distance from the primary.
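These orbital figures can be cross-checked with Kepler's third law. In the sketch below, Neptune's gravitational parameter is an assumed textbook value, and the simple point-mass model (which ignores Neptune's oblateness) lands close to, but not exactly on, the published semi-major axis.

```python
import math

GM_NEPTUNE = 6.8365e15        # Neptune's gravitational parameter, m^3/s^2 (assumed value)
period_s = 0.9362 * 86400     # Hippocamp's orbital period from above, in seconds

# Kepler's third law for a circular orbit: a^3 = GM * T^2 / (4 * pi^2)
a_m = (GM_NEPTUNE * period_s**2 / (4 * math.pi**2)) ** (1 / 3)
print(f"{a_m / 1000:,.0f} km")  # ~104,000 km, close to the 105,283 km quoted
```

The small difference from the published figure reflects the simplified point-mass model and the assumed GM, not an error in the measured period.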

The periods of Larissa and Hippocamp are within about one percent of a 3:5 orbital
resonance, while Hippocamp and Proteus are within 0.1% of a 5:6 resonance. Larissa and
Proteus are thought to have passed through a 1:2 mean-motion resonance a few hundred million
years ago. Proteus and Hippocamp have drifted away from Larissa since then because the former
two are outside Neptune-synchronous orbit (Neptune's rotational period is 0.6713 day) and are
thus being tidally accelerated, while Larissa is within and is being tidally decelerated.
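The quoted near-resonances can be checked directly from the orbital periods. Hippocamp's period comes from the text above; the Larissa and Proteus periods (~0.5547 and ~1.1223 days) are assumed published values, so treat this as an illustrative cross-check.

```python
# Orbital periods in days.
larissa, hippocamp, proteus = 0.5547, 0.9362, 1.1223

ratio_lh = hippocamp / larissa   # compare against 5/3 (a 3:5 resonance)
ratio_hp = proteus / hippocamp   # compare against 6/5 (a 5:6 resonance)

print(f"Larissa:Hippocamp off 3:5 by {abs(ratio_lh / (5 / 3) - 1):.1%}")
print(f"Hippocamp:Proteus off 5:6 by {abs(ratio_hp / (6 / 5) - 1):.1%}")
```

With these inputs the Larissa:Hippocamp pair is within roughly one percent of 3:5 and the Hippocamp:Proteus pair within about 0.1% of 5:6, consistent with the figures in the text.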

Diagram of the orbits of Neptune's moons out to Triton, with Hippocamp's orbit highlighted.

Neptune's Tiny New Moon Could Be a Fragment of Its Larger Moon Proteus

The earliest image of Hippocamp, obtained by the Hubble Space Telescope in 2004.
A diminutive nugget of a moon has been discovered lurking in the inner orbit of Neptune.
The moon, dubbed Hippocamp for the half-horse, half-fish sea monster from Greek legend, is
about the size of Chicago and so faint only the powerful Hubble Space Telescope can spot it. But
by examining data stretching more than a decade, researchers were able to discern its dim form
from 3 billion miles away.

"Being able to contribute to the real estate of the solar system is a real privilege," said
planetary scientist Mark Showalter, the lead author of a study on the discovery published
Wednesday in the journal Nature. "But it shows how much we still don't know about the ice
giants, Neptune and Uranus."

Showalter and his colleagues suggest Hippocamp is a fragment of a larger neighboring
moon called Proteus, broken off during a cataclysmic collision some 4 billion years ago.
Neptune has been explored just once in human history, with a brief flyby of the Voyager 2
spacecraft in 1989. "But there are all these interesting processes going on there, that we only got
a glimpse of," Showalter said. "Atmospheric phenomena, rings with peculiar properties and
these collisions and breakups that formed the inner moons." "It's not just a dinky little moon
we've found," he continued. Moons such as Hippocamp "are witnesses to the formation and
evolution of the planets they orbit. In my mind, they have very interesting stories to tell."

Showalter, a senior research scientist at the Search for Extraterrestrial Intelligence (SETI)
Institute in Mountain View, California, is something of a distant moon detective. By
accumulating scores of long-exposure Hubble images, then adjusting them to account for an
orbiting body's predicted movement, he has already uncovered two new moons each around
Pluto and Uranus.

The resulting images "are not pretty," Showalter said. The planets are so overexposed they become big white blotches, and the moons at their centers are little more than pale dots. The procedure generally does not capture enough data from the moons to allow scientists to take spectra, splitting light into its component parts to reveal clues about the moons' composition.

Hippocamp is the first new inner satellite found around the solar system's outermost planet since the Voyager 2 flyby.


V.Uma, Assistant Professor, Department of ECE, SCSVMV

The aim of science is to uncover the deepest truths, and the aim of spirituality is the
search for the cause behind fact. Scientists discover how creation and human life came into
being. Those engaged in spirituality try to find the hidden cause behind scientific fact. They are
interested in the laws of nature, but wish to find the divine law that brought everything into
being. While scientists search through outer instrumentation, spiritual scientists focus on higher
level of consciousness. Both science and spirituality aim at the same truth, but arrive there
differently. At the heart of science and spirituality is a partnership seeking to answer the same questions.

People today are familiar with reports of spiritual experiences. Polls and studies show
that many believe in God, the soul, and an after-life. Such people find no contradiction between
science and spirituality. They believe in scientific law as well as spiritual law.

George Washington Carver, an African-American who was once a slave, later became a world-famous scientist. He discovered 300 products that could be made from the humble peanut. Once he was asked how he came up with those discoveries. Carver replied, “One day I asked God what could be made out of a peanut. God told me, ‘You have brains; find out for yourself!’” While God is
the inspiration, human beings have a duty to search for knowledge and make discoveries. God
could easily create anything, but God leaves it to humans to make the discoveries. Human beings
have a choice: to live for themselves or devote their lives to the betterment of this planet. Some
live only to gratify their desires. Those who contact a higher power, discover that the greatest
purpose is to love and serve others. Scientists are devoted to bettering the lives of others. Thus,
we find the mission of spiritual scientists and physical scientists is the same. They are both here
to use their discoveries to better the lives of others. While physical scientists gaze at the stars
through telescopes, and listen to radio-waves from distant stars through instrumentation, spiritual
scientists gaze at the inner stars and listen to the inner Music of the Spheres through meditation.
They both sit in silence, watching and waiting. Physical scientists try to prove through the outer
eyes and ears, while spiritual scientists try to prove through the inner eyes and ears.

My scientific background helped me study spirituality as a science. This approach helped me prove the validity of spiritual experiences. My spiritual background enabled me to better pursue
the study of science. Meditation gives inspiration to uncover scientific truths. Discoveries come
as inspiration. Science and spirituality make a great partnership. If those engaged in science
spend time in the silence of themselves, inspiration will come and lead them to the answers they
seek. If those interested in spirituality apply the scientific law of testing hypothesis in the
laboratories of their own bodies, they will find results.

R.Prema, Assistant Professor, Department of CSE, SCSVMV

Many newborn and toddler stars are not all that different from newborn and toddler humans: prone to bouts of cranky energy, loud and violent tempers, and indiscriminate wailing and vomiting of heaps of disgusting matter in every direction. It’s natural to assume even our 4.6-billion-year-old sun had a messy heyday in its youth, but without any hard evidence to prove this was the case, the only thing many scientists had going for them were strong suspicions. New data, focused on a peculiar set of ancient blue crystals from space, suggests the sun emitted a much higher flux of cosmic rays in its early history than we once thought.

“We think hibonites like those in Murchison formed close to the young sun, because that
is where temperatures were high enough to form such minerals,” says Levke Kööp, a
cosmochemistry researcher at the University of Chicago and the lead author of the new study.
“Hibonites from Murchison are famous for showing large isotope anomalies that tell us about the
types of stars that contributed material to the molecular cloud that the sun formed from.” The
team doesn’t have an exact date on the hibonite grains, but based on the age of refractory
elements in the meteorite, it pegs the crystals to be a little over 4.5 billion years old.
If hibonite really was irradiated by an early active sun, the answer would be found by analyzing the crystals’ helium and neon isotopes. High-energy particles ejected by a volatile young sun would have hit calcium and aluminum atoms in the crystals, splitting them into neon and helium that would then remain irrevocably trapped for billions of years.

Lead author Levke Kööp at work in the lab.

(c) Field Museum
The research team studied the hibonite crystals using a highly sensitive mass spectrometer at
ETH Zurich in Switzerland, melting the grains of hibonite down with a laser while the
spectrometer measured and confirmed the presence of helium and neon concentrations.
Beyond simply illustrating that the young sun went through a phase of high activity, the new results also show how some meteorite materials from the solar nebula were directly affected by the young sun’s irradiation. The team also noticed that helium and neon were absent from younger crystals, indicating that the irradiation conditions created by the sun changed later on, and raising the question of what happened. This sort of insight might later mature into a better understanding of the roles stellar evolution plays in the creation of the elements and materials that eventually assemble into planets and other celestial bodies.

A tiny hibonite crystal from the Murchison meteorite.

(c) Andy Davis, University of Chicago
“Over the last few decades, there has been a controversy whether meteorites contain
evidence of an early active sun,” says Kööp. “In general, even for us, it was hard to know what
to expect from this study. In the end, we were very excited to see such a clear irradiation
signature in the hibonites.”

Andrew Davis, a study coauthor affiliated with the University of Chicago and the Field Museum of Natural History, points out that the minuscule size of the hibonite grains limited how well the team could measure helium and neon traces, and ruled out an analysis of the absolute age of the hibonite itself. Moreover, the analyses destroy the grains. “We are working on a new instrument in my lab to study the isotopic compositions of more elements in the hibonite grains, to better understand how different sources of dust were mixed in the early solar nebula,” he says. Still, the implications of these findings alone shouldn’t be understated. “I’ve been involved with this type of research for a very long time. I’ve constantly been skeptical of claims from scientists that traces of the early sun have been found. With this new study,” he says, “I’m happy to change my mind.”