
Chapter 1

Introduction

This chapter presents the formulation of the project. The purpose of the project is to build the Earth Sensor (ERS) of a satellite by assembling the hardware components and then interfacing them with software that calculates the pitch and roll of the satellite.

1.1 Earth Observation (Remote Sensing)


An important function of a satellite is attitude determination, which is directly related to earth observation. Attitude determination is essentially the control of the orientation of the satellite. Earth observation is used to gather information about objects with which we have no direct contact, a process scientifically called 'Remote Sensing'. Humans are naturally equipped with many sensors, awarded to them by God: ears as audio sensors, eyes as visual sensors, the nose as the sensor of smell, and the brain as the main processor that interprets the information provided by the other sensors [1]. With these sensors, humans accomplish remote sensing effortlessly in daily activities such as watching a movie or driving a car. In technology, the analogous procedure converts energy from one form to another and processes it in a central hub, the microprocessor, which plays a role similar to the brain in humans.
Let us now compare the human analogy with technology by taking the example of the human eye and a camera attached to a computer. The camera captures an image, converts it into an electrical signal, and delivers it to the microprocessor, which stores the image and displays it on an LCD at the same time. The human eye works in the same way: light striking an object reflects back, carrying the visual information as energy. When that energy reaches the eye, it is converted into electrical signals that are transmitted to the brain; the brain interprets these signals, matches the visual information against previously stored images, and identifies the objects. In this way the human senses mirror what technology does in remote sensing.

1.2 Electromagnetic Spectrum in Remote Sensing


Remote sensing works much like ordinary imaging, but in our project the remote sensing (earth sensing) captures images of the earth at various wavelengths in the 'Infrared Region' of the 'Electromagnetic Spectrum', shown in Figure 1.1, which is the major characteristic of this kind of remote sensing. When the energy is provided by the remote sensing platform itself, the technique is called active remote sensing [2].

Figure 1.1 Electromagnetic Spectrum

The infrared band spans the spectral range from just below the red end of the visible band (>0.79 µm) up to the microwave range (<1000 µm).

1.3 Steps of Remote Sensing [3]
Remote sensing (earth sensing in this project) consists of five major steps that combine to complete a system. These steps are described below:
1) Sun emission (self-emission): The first step is the emission of the electromagnetic waves by the source, which also provides the values needed for the calibration algorithm.
2) Transmission and absorption of radiation: The transmitted and received energy can be determined from the transmission of the radiation toward the earth and its partial absorption along the path.
3) Reflection and emission of electromagnetic waves: This involves the interaction of the earth with the electromagnetic radiation.
4) Transmission of the reflected and emitted energy to the sensor (an infrared camera in our case).
5) Interfacing of the sensor output with the microprocessor.

1.4 Formulation of Project


The project is based on the medium of communication in space technology: the satellite. Artificial satellites are man-made bodies placed in orbit in outer space for specific purposes, and they are sent into orbit by artificial means, as shown in Figure 1.2. Space also contains natural satellites, such as the moon, a heavenly body that exists naturally in its orbit.

Figure 1.2 Satellite assembled on Earth

The project concerns one part of the satellite, the 'Earth Sensor' (ERS), which plays an important role in the functioning of the satellite. The ERS is used to stabilize the satellite by finding the center of the earth, as shown in Figure 1.3. The ERS takes infrared images with the help of an infrared camera and then runs algorithms on them to calculate the position of the satellite above the earth.
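The pitch and roll calculation can be pictured with a small sketch. This is not the project's actual algorithm, only a minimal illustration under one assumption: a linear (pinhole-style) mapping between the offset of the detected earth-disc centre and the attitude angles. The function name, axis conventions, and the example numbers are all illustrative.

```python
def pitch_roll_from_center(cx, cy, width, height, fov_x_deg, fov_y_deg):
    """Estimate pitch and roll (degrees) from the detected earth-disc
    centre (cx, cy) in pixels, assuming a simple linear mapping between
    pixel offset and angle. Axis conventions here are illustrative."""
    # Angular size of one pixel along each axis (instantaneous FOV).
    ifov_x = fov_x_deg / width
    ifov_y = fov_y_deg / height
    # Offset of the disc centre from the optical axis (image centre).
    dx = cx - width / 2.0
    dy = cy - height / 2.0
    roll = dx * ifov_x    # rotation about the along-track axis (assumed)
    pitch = dy * ifov_y   # rotation about the cross-track axis (assumed)
    return pitch, roll

# A disc centred in a 320x240 image with a 40x30 degree FOV
# means no attitude error:
print(pitch_roll_from_center(160, 120, 320, 240, 40.0, 30.0))  # (0.0, 0.0)
```

With a 40° × 30° field of view (the SUPARCO specification), each pixel of a 320 × 240 image subtends 0.125° horizontally, so an 8-pixel offset of the disc centre corresponds to a 1° roll under this model.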

Figure 1.3 Scatterometer Coverage of ERS

If the ERS of a LEO satellite works in the visible light spectrum, it has the disadvantage that it cannot calculate values during an eclipse, because during that period no light is reflected back from the earth. To keep measuring at such times, the sensor should operate outside the visible range, in the thermal infrared band.

1.4.1 Problem Statement

The earth sensor of a satellite (ERS) is to be made according to the specifications given by SUPARCO. This project covers the interfacing of the hardware with the software that calculates the pitch and roll of the satellite.

The basic specifications required by SUPARCO are:

FOV                        40 x 30
Mass                       0.2 kg
Voltage                    5 V / 12 V
Current                    < 500 mA
Accuracy                   3σ < 0.1°
Dimensions                 50 x 50 x 35 mm
Spectral Range             8 µm - 14 µm
Data Output Rate           10 Hz
Power Consumption          3 W
Operating Temperature      -40 °C to 80 °C
Communication Interface    CAN, RS422
Other Requirements         Sun/moon avoidance; accurate knowledge of time; two-axis attitude information

Table 1.1 Project Specifications Given by SUPARCO

Chapter 2

Satellite

A satellite is a body moving in an orbit in outer space; it may be artificial or natural. Naturally existing bodies, such as the moon orbiting the earth and the planets orbiting the sun, are natural satellites, whereas man-made satellites, combining technology and telecommunications for communication purposes and for exploring space, are artificial satellites [3].

2.1 History

As humans became civilized, the need to communicate with each other grew in importance day by day, and they sought different ways of communicating with the help of all the skills they had.

Here is a brief history of satellite communications:

2.1.1 Passive Satellites

Clarke was the first to observe that a satellite in a circular orbit about 36,000 km above the earth would have the same angular velocity as the earth, so it would remain exactly above one point and could receive and transmit signals from one place on earth to another. Clarke made this observation in 1945, and the concept was followed by the launch of SPUTNIK 1 by the USSR in 1957, after which the United States and the USSR both set out to develop satellite technology [5].

SCORE, launched by the Air Force in 1958, was the first artificial communications satellite and a LEO satellite. Its purpose was to deliver a 4-minute message to stations around the world. It was powered by batteries with a 12-day life and decayed out of orbit after 22 days. It was used for point-to-point communications.

NASA launched ECHO 1 in 1960 and ECHO 2 in 1964, passive satellites that acted as reflectors. These large spheres attracted public attention because they could be seen from the earth in sunlight. ECHO 1 remained in orbit for 8 years, whereas ECHO 2 remained there for 5 years. COURIER, launched in 1960, extended the SCORE technology and demonstrated the store-and-forward capability of LEO satellites; it worked for 17 days and was the first artificial satellite to use solar cells for power. The US Army launched the passive satellite WESTFORD in 1963, the second example of the reflection principle. After WESTFORD, passive satellite technology came to an end [6].

2.1.2 Active Satellites

NASA launched the first active LEO satellites, TELSTAR 1 in 1962 and TELSTAR 2 in 1963. TELSTAR 1 provided telephone, telegraph, and television services in France, Britain, and the United States before failing in 1962 due to Van Allen belt radiation; TELSTAR 2 suffered less radiation damage and worked for 2 years. RCA launched RELAY 1 in 1962 and RELAY 2 in 1964 for NASA; these worked for one year and two months and provided television and telephony transmissions in the United States, Europe, and Japan. SYNCOM was a synchronous satellite series developed by Hughes Aircraft Company for NASA and GSFC: SYNCOM 1 failed, SYNCOM 2 was launched in 1963, and SYNCOM 3 was launched in 1964 and was used for tracking for the first time. EARLY BIRD was the first commercial satellite, developed by COMSAT for INTELSAT, so it was later renamed INTELSAT 1; it was launched by NASA in 1965 and worked until 1969 [6].

2.1.3 Application Technology Satellites


These were the following:
 ATS 1 was launched by NASA in 1966 and was a highly successful satellite. It carried a camera that took pictures of the whole earth from space, and it provided multipurpose communications.
 ATS 3, launched in 1967, provided the facility to take color pictures for the very first time.
 ATS 5, launched in 1969, was a gravity-stabilized satellite.
 ANIK A was launched by NASA in 1972 and was a domestic satellite.
 ATS 6 was launched by NASA in 1974.

2.2 Different Types of Satellites

Different types of satellites are present in orbit for different purposes. These satellites are used to explore space and to communicate across it, because the human mind is nowadays not restricted to exploring the earth's surroundings but is keen to explore the whole universe. Each satellite has its own specific purpose, such as a communication satellite for telecommunications. Our thesis mainly concerns the 'Earth Sensor' of the satellite.

Different types of satellites are shown in Figure 2.1.

Figure 2.1 Different Types of Satellites

There are two major types of satellites:

1) Military Satellites
2) Non-Military Satellites

2.2.1 Military Satellites

As the name indicates, these satellites are used for specific military purposes. Because of their military role they are kept confidential; some that have been declassified are:

2.2.1.1 Reconnaissance Satellites

These are usually used for communication or for earth observation.

2.2.1.2 Miniaturized Satellites

These are satellites of smaller size and less mass than usual satellites, and they are of the following types:

1) Mini satellite, with a maximum weight of 500 kg
2) Micro satellite, with a maximum weight of 100 kg
3) Nano satellite, with a maximum weight of 10 kg

2.2.2 Non-Military Satellites

These types of satellites are:

1) Fixed Satellites
2) Mobile satellites
3) Scientific research satellites
2.2.2.1 Fixed Satellites

These satellites are used specifically for telecommunication purposes; they can handle large volumes of voice calls and video transmission.

2.2.2.2 Mobile Satellites

These satellites are used to connect one part of the world to another and are also used for navigation systems.

2.2.2.3 Scientific Research Satellites

These satellites are used to provide meteorological information and land-survey data through techniques such as remote sensing.

2.2.3 Biosatellites

These satellites carry biological specimens into space for different kinds of experiments, including research into whether life could exist on other planets.

2.2.4 Navigational Satellites

These satellites are used to find the exact location of a mobile receiver on earth with the help of radio signals sent by a transmitter.

2.2.5 Weather satellites

Weather satellites are used to monitor the weather and climate of the earth.

2.2.6 Spaceships

These are larger satellites, called manned spacecraft, that take humans from earth to space and return them to the earth. They are also used as carriers to send and receive equipment.

2.2.7 Astronomical Satellites

These are used to explore the planets, stars or the other objects in the outer space.

2.2.8 Communication Satellites

These are widely used for telecommunications.

2.2.9 Earth Observation Satellites

These are used basically for monitoring of the environment.

2.2.10 Recovery Satellites

These are used for recovery of reconnaissance, biological, space-production and other payloads
from orbit to Earth.

2.2.11 International Space Station

The International Space Station is designed to sustain life for medium-term living in orbit, for weeks, months, or years. A propulsion and landing assembly is not available on the space station, which distinguishes it from other spacecraft.

2.3 Orbits of Satellite

Satellites require an orbit around the earth to move along, which is an imaginary path in space.

2.3.1 Geocentric Orbit

This is the orbit in space that provides a path for satellites around the earth. About 2,465 satellites orbit the earth in geocentric orbits. Geocentric orbits are divided on the basis of the following three parameters:

1) Altitude
2) Inclination
3) Eccentricity

2.3.2 Geocentric Orbit according to the Altitude

Geocentric orbits are divided into three sub-orbits based on the distance of the orbit from the earth:

1) LEO
2) MEO
3) HEO

Figure 2.2 shows the sub-orbits of the geocentric orbit.

Figure 2.2 Geocentric Orbit

2.3.2.1 Lower Earth Orbit (LEO)

This orbit is the closest to the earth's surface and ranges from 200 km to 2000 km.

Figure 2.3 LEO Orbit

2.3.2.2 Medium Earth Orbit (MEO)

This orbit lies between the other two and ranges from 2000 km to 35,786 km.

Figure 2.4 MEO Orbit

2.3.2.3 Higher Earth Orbit (HEO)

This is the farthest orbit; it starts at 35,786 km and extends upward from there.
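The three altitude bands above can be captured in a short helper. The function name is hypothetical; the boundaries are the ones quoted in the text (LEO 200-2000 km, MEO up to 35,786 km, HEO above that).

```python
def classify_geocentric_orbit(altitude_km):
    """Classify a geocentric orbit by altitude, using the boundaries
    quoted in the text."""
    if altitude_km < 200:
        return "below LEO"
    if altitude_km <= 2000:
        return "LEO"
    if altitude_km < 35786:
        return "MEO"
    return "HEO"

print(classify_geocentric_orbit(700))    # LEO
print(classify_geocentric_orbit(20200))  # MEO
print(classify_geocentric_orbit(35786))  # HEO
```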

Figure 2.5 HEO Orbit

2.3.3 Heliocentric Orbit

Objects that move around the Sun are classified as being in heliocentric orbit; all the planets, for example, move around the Sun in heliocentric orbits.

2.3.4 Areocentric Orbit

This is the orbit in which a satellite moves around its parent body, as the moon moves around the earth (strictly, 'areocentric' denotes an orbit around Mars).

2.4 Lower Earth Orbit (LEO) Satellite

Lower Earth Orbit is the orbit having the altitude of 200km to 2000 km in outer space around the
earth which is the imaginary path. The satellites orbiting the earth using this path are the LEO
satellites. These satellites are better in performance than the other satellites in different ways.

Figure 2.6 LEO Satellites

2.4.1 Comparison of LEO Satellites with GEO Satellites

Satellites    Advantages                      Disadvantages
LEO           1) Lower launching cost         1) Short life
              2) Lower path loss              2) Short call interruptions
              3) Short round-trip delay
GEO           1) Provides a constant view     1) Larger round-trip delays
              2) No Doppler problems          2) Antenna must be pointed to acquire the satellite

Table 2.1 LEO and GEO Satellites Comparison

2.4.2 Factors of LEO Satellites

Following are the factors upon which the speed and rotation of the satellite depend:

1) Mass of the satellite


2) Gravitational acceleration
3) Radius of the earth and the orbit

Figure 2.7 Factors for the Speed and Rotation of LEO Satellite

A single revolution around the earth takes about 90 minutes, with the satellite moving at a speed of about 8 km per second.

2.4.3 Parts of Satellite

The basic parts of each satellite are almost the same, though their construction differs between satellites according to requirements. They include the power supply: batteries are part of the satellite, and solar panels, which account for much of the satellite's weight, are essential for charging them, as shown in Figure 2.8.

Figure 2.8 Parts of Satellite

2.4.4 Working of LEO Satellites

Figure 2.9 shows the working of LEO satellites.

Figure 2.9 Working of LEO Satellites

A LEO satellite orbits the earth in a circular path governed by two forces: the earth's gravitational attraction, which pulls the satellite toward the earth, and the centrifugal force of its motion, which keeps the satellite moving along the circular path.
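This two-force balance fixes the orbital speed: setting the gravitational pull equal to the required centripetal force, GMm/r² = mv²/r, gives v = √(GM/r). A quick sketch confirms the roughly 8 km/s and 90-minute figures quoted earlier; the 500 km altitude is just an example value.

```python
import math

G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24      # mass of the earth, kg
R = 6_371_000.0   # mean radius of the earth, m

def orbital_speed_and_period(altitude_m):
    """Circular-orbit speed (m/s) and period (s) from the force balance
    G*M*m/r^2 = m*v^2/r, i.e. v = sqrt(G*M/r)."""
    r = R + altitude_m
    v = math.sqrt(G * M / r)
    T = 2 * math.pi * r / v   # circumference divided by speed
    return v, T

v, T = orbital_speed_and_period(500e3)   # a typical LEO altitude
print(round(v / 1000, 1), "km/s")        # ~7.6 km/s
print(round(T / 60, 1), "minutes")       # ~94 minutes
```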

2.5 Stabilization of Satellite

A satellite orbits the earth along a specific path, so its stability must be maintained to keep it in that orbit. Attitude determination is therefore needed to control the satellite; a communication satellite, in particular, must align its high-gain antenna with the earth station, because all communication between the earth and the satellite is in the Line of Sight (LOS).

2.5.1 Methods to Stabilize the Satellite

Following are the methods used to stabilize the satellite:

1) Spin Stabilization
2) Axis Stabilization

2.5.1.1 Spin Stabilization

Spin stabilization is a method of stabilization in which the satellite spins rapidly about its axis of symmetry. The satellite is stable when it is equipped with energy dissipation and spins about its principal axis of inertia, which gives the satellite gyroscopic qualities.

2.5.1.2 Axis Stabilization

Following are the types of the Axis Stabilization:

1) Single Axis Stabilization


2) 3-axis Stabilization
2.5.1.2.1 Single Axis Stabilization

In this method of stabilization, the sensor measures the difference between its boresight and a reference point, which may be the sun or the earth nadir, as shown in Figure 2.10.

Figure 2.10 Single Axis Stabilization

2.5.1.2.2 3-axis Stabilization

This method of stabilization uses three gyroscopes (a gyroscope is a spinning wheel with a free axis of rotation), one for each of the three axes, as depicted in Figure 2.11.

Figure 2.11 3-axis of Stabilization

Chapter 3

Sensors

Sensors measure physical quantities present in the environment and can also convert them from one form of energy to another. Because they play this important conversion role, sensors are also called "transducers" or "converters".

Sensors can also take the place of humans, meaning they can be used for automation instead of people doing jobs manually. For example, people hire a security guard to watch an area and raise the alarm if someone is found there. Instead of this manual procedure, vision sensors can cover the specified area; acting as human eyes, they can easily detect the presence of a person and alert the people of that area.

3.1 Types of Sensors on Power Basis

Sensors, which convert information into electrical form, are of two types on the basis of power:

1) Active Sensors
2) Passive Sensors

3.1.1 Active Sensors

Active sensors require external power for conversion to an electrical signal, and they give a high-power output signal, as shown on the left side of Figure 3.1.

3.1.2 Passive Sensors

Passive sensors require no external power for conversion to an electrical signal, and they give a low-power output signal, which should be amplified before being used for a specific purpose, as shown on the right side of Figure 3.1.

Figure 3.1 Active Sensor on left and Passive Sensor on right

3.2 Types of Sensors on Output Basis

The sensors are of two types on the basis of output.

1) Digital Sensors
2) Analog Sensors

3.2.1 Digital Sensors

A digital sensor converts information into an electrical output of HIGH and LOW signals only, which is then decoded to determine whether the reading means true or false. For example, a proximity sensor detects the presence of an object in front of it within a specific range: if an object is present, the sensor outputs HIGH, otherwise LOW.

Figure 3.2 Digital Sensor Output

3.2.2 Analog Sensors

An analog sensor converts information into an electrical output that can take more than two values and varies continuously with the input; for example, a temperature sensor whose output varies according to the input temperature.
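As a toy illustration of such an analog temperature sensor, consider a hypothetical LM35-style part whose output rises by 10 mV per degree Celsius (the part choice and scale factor are assumptions, not from the project). Reading it is a single scaling step:

```python
def voltage_to_celsius(v_out):
    """Convert the output of an LM35-style analog temperature sensor
    (assumed scale: 10 mV per degree Celsius) into a temperature.
    Equivalent to multiplying the voltage by 100 degC per volt."""
    return v_out * 100.0

print(voltage_to_celsius(0.250))  # 25.0
```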

Figure 3.3 Analog Sensor Output

3.3 Characteristics of Sensors

Several characteristics of sensors play an important role in the working and output of a system; they should be kept in mind, and the sensor should be chosen according to the specifications required for the system to obtain the desired output. Following are the characteristics of sensors:

1) Resolution
2) Range
3) Precision
4) Impedance
5) Sensitivity
6) Linearity

3.3.1 Resolution

The resolution of a sensor is the minimum change in the measured quantity that produces a change in the electrical signal. The higher the resolution, the easier it is to notice the smallest change in the quantity.

3.3.2 Range

The range of a sensor is the difference between the highest and lowest values it can measure. A high range is a desired characteristic.

3.3.3 Precision

Precision refers to the repeatability of the device: a precise sensor produces the same result again and again when measuring the same quantity.

3.3.4 Impedance

The construction of a sensor determines its input and output impedance, which govern the amount of current the sensor draws from its source and the amount of current it can deliver to its output. Sensors therefore generally require two impedance properties:

1) High input impedance
2) Low output impedance

3.3.5 Sensitivity

Sensitivity is the change in the sensor's output produced by a change in its input; it is a major characteristic of the sensor, and the greater the sensitivity, the better the sensor. Measurements can also contain errors, the difference between the measured and the true value, which are of two kinds:

1) Biased errors
2) Random errors

Sensitivity is the ratio of the change in the output signal to the change in the input signal. For a digital sensor, sensitivity is directly related to resolution, whereas for an analog sensor it is the slope of the output-versus-input line, as shown in Figure 3.4.
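The slope interpretation can be computed directly from calibration data. A minimal sketch, assuming a set of input/output calibration pairs (the function name and sample data are illustrative):

```python
def sensitivity(inputs, outputs):
    """Least-squares slope of output versus input -- the sensitivity of
    an analog sensor (change in output per unit change in input)."""
    n = len(inputs)
    mx = sum(inputs) / n
    my = sum(outputs) / n
    num = sum((x - mx) * (y - my) for x, y in zip(inputs, outputs))
    den = sum((x - mx) ** 2 for x in inputs)
    return num / den

# A perfectly linear sensor producing 0.5 V of output per unit of input:
print(sensitivity([0, 1, 2, 3], [0.0, 0.5, 1.0, 1.5]))  # 0.5
```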

Figure 3.4 Sensitivity of Sensors

3.3.6 Linearity

If the relation between the input and output is a straight line, the sensor is linear; most sensors are non-linear. Linearity is quantified as the maximal difference between the hysteresis loop and the ideal straight line, expressed as a percentage of full scale, as shown in Figure 3.5.
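The percentage-of-full-scale figure can be sketched numerically. This assumes the ideal line is drawn through the endpoints of the measured curve (one common convention; others fit a best-fit line instead), and the function name and sample data are illustrative:

```python
def nonlinearity_percent_fs(inputs, outputs):
    """Maximum deviation of the measured curve from the ideal straight
    line through its endpoints, as a percentage of full scale."""
    x0, x1 = inputs[0], inputs[-1]
    y0, y1 = outputs[0], outputs[-1]
    slope = (y1 - y0) / (x1 - x0)
    full_scale = y1 - y0
    worst = max(abs(y - (y0 + slope * (x - x0)))
                for x, y in zip(inputs, outputs))
    return 100.0 * worst / full_scale

# A mildly bowed response whose midpoint sags 0.05 below the ideal line:
print(round(nonlinearity_percent_fs([0, 1, 2], [0.0, 0.45, 1.0]), 6))  # 5.0
```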

Figure 3.5 Linearity of Sensors

3.4 Interfacing of Sensors

Sensors are the essential front end of any system that interfaces with the environment, because the microprocessor cannot sense the environment directly; it requires a sensor to convert physical quantities into electrical form. A sensor typically gives its output in analog form, whereas the microprocessor accepts signals in digital form, so a device is required to convert the analog signal into digital form before it is sent to the microprocessor for processing. An analog-to-digital converter serves this purpose between the sensor and the microprocessor, as shown in Figure 3.6, so it is important to choose the sensor according to the requirements.
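The conversion step in the middle of this chain can be modelled as ideal quantisation. The reference voltage and bit depth below are example values, not project specifications:

```python
def adc_read(voltage, v_ref=3.3, bits=12):
    """Model an ideal ADC sitting between an analog sensor and the
    microprocessor: quantise a voltage into an integer code."""
    levels = 2 ** bits
    code = int(voltage / v_ref * levels)
    return max(0, min(levels - 1, code))  # clamp to the valid code range

print(adc_read(1.65))  # half of v_ref maps to mid-scale: 2048
print(adc_read(3.3))   # full scale clamps to the top code: 4095
```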

Figure 3.6 Interfacing of Sensor

3.5 Types of Sensors for Satellites

Depending on the specifications of the satellite, different instruments and sensors are used, which may be active or passive according to the requirements. Passive sensors serve the purpose of recording solar radiation (visible light) and emitted radiation (heat from the sun), whereas active sensors serve the purpose of remote sensing (earth observation). To control the attitude of the satellite, different kinds of sensors can be used to control the satellite and its position. Following are the sensors that can be used in satellites:

1) Absolute Attitude Sensors
   I.   Horizon Sensor
   II.  Orbital Gyrocompass
   III. Sun Sensor
   IV.  Earth Sensor
   V.   IR Sensors
        i.   Simple IR Sensor
        ii.  Pyro-Electric IR Sensor
        iii. Micro Bolometer IR Sensor
        iv.  Spatial Resolution
2) Gyroscopes
3) Motion Reference Units

3.5.1 Absolute Attitude Sensors [4]

These sensors sense orientation from phenomena observed outside the spacecraft. Some of them are:

3.5.1.1 Horizon Sensor

These sensors use optical instruments to detect light from the earth's horizon. They work on the principle of sensing thermal infrared radiation, comparing the cooler and warmer temperatures of the surroundings.

3.5.1.2 Orbital Gyrocompass

This sensor senses gravity and uses a pendulum and gyrocompass to align with the earth's spin vector and point north.

3.5.1.3 Sun Sensor

This senses the direction of the sun and can be made as simple as a set of solar cells or as complex as a steerable telescope.

3.5.1.4 Earth Sensor

These are mostly infrared cameras and are used to detect the direction of the earth. Because of their low cost and reliability, they are widely used in satellites.

3.5.1.5 IR Sensor

An Earth Sensor can be made with the help of IR (infrared) sensors, which are of three types.

3.5.1.5.1 Simple IR Sensor

The simple IR sensor is used in the Earth Sensor to determine the attitude of the satellite relative to the earth. The IR sensors generate a potential difference between detectors, which a digital-signal-processing software algorithm uses to determine the angle of the infrared horizon. To reduce the number of errors, readings from known earth and space pixels are used to normalize the horizon data.
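The normalization step might look like the following sketch. The function and its reference-level arguments are hypothetical stand-ins for readings taken from pixels known to view deep space and the warm earth; the real algorithm is not specified in this text.

```python
def normalize_horizon(pixels, space_level, earth_level):
    """Rescale raw detector readings so that a deep-space reading maps
    to 0.0 and a warm earth reading maps to 1.0, reducing gain and
    offset errors in the horizon data."""
    span = earth_level - space_level
    return [(p - space_level) / span for p in pixels]

print(normalize_horizon([10, 55, 100], space_level=10, earth_level=100))
# [0.0, 0.5, 1.0]
```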

3.5.1.5.2 Pyro-Electric IR Sensor

Any object with energy has a temperature, and the temperature difference between an object and its environment causes energy to flow from the object to the environment; this energy is called 'radiation'. Humans generally cannot see this radiation because its wavelength lies in the infrared region, beyond the capacity of human sight, so electronic devices are required to detect it. These devices are designed to measure the energy emitted by any object whose temperature is above absolute zero, without using external power; such a device is called a pyro-electric sensor and is a 'passive sensor'. The sensor responds to the infrared radiation emitted by objects rather than to the heat itself; radiation and heat are different, but they are correlated.

Figure 3.7 Working of Pyrometer

3.5.1.5.3 Micro Bolometer IR Sensor

Microbolometer sensors are used in thermal cameras to detect thermal images by measuring infrared radiation. They work on the principle that when the detector material is exposed to radiation, it heats up and its resistance changes; the change is then processed, scaled according to temperature, and translated into an image.

Figure 3.8 Micro Bolo Meter

3.5.1.5.4 Spatial Resolution

The spatial resolution of the satellite's camera is the distance on the surface of the earth between the centers of two adjacent pixels.

3.5.2 Gyroscopes

A gyroscope balances and stabilizes an object without using external power, and it senses the rotation of the object in three dimensions in space with reference to its central axis of rotation.

3.5.3 Motion Reference Units

These are used to detect single- and multi-axial motion.

3.6 Design Approaches other than Sensors

The other designs for the Earth Sensor are:

1) IR Camera
2) CMOS VGA Camera
3) Star Tracker

3.6.1 IR Camera

An IR camera is a thermographic camera that forms an image from infrared radiation, with an operating range extending up to 14,000 nm, whereas a normal camera forms an image from visible light, in roughly the 400-700 nm range.

3.6.1.1 Working of IR Camera

An IR camera operates on infrared energy, which is converted to an electronic signal and then processed to form a thermal image and calculate temperatures. The IR camera senses and measures heat accurately, which makes it possible to assess thermal performance and the problems related to it [5].

The thermal image formed by the IR camera shows radiation that is not visible to the human eye. With the passage of time, IR cameras have become more advanced and cost-effective, providing thermal-analysis solutions. IR cameras offer improved detector technology, automatic functionality, infrared software for controlling the camera, and much more.

3.6.2 CMOS VGA Camera

A CMOS VGA camera uses an image sensor to convert the image into an electronic signal; the sensor is built with Complementary Metal Oxide Semiconductor (CMOS) technology.

3.6.3 Star Tracker

A star tracker uses either a camera or a photocell to find the positions of stars. Star trackers have a drawback: they may give bad results because of sunlight reflected from the spacecraft or interference from the exhaust system.

3.7 Earth Sensor Used in the Project Functionality

To find the position of the satellite, we use the technique of calculating the center of the earth from the satellite: images are collected as data, and different digital image processing techniques are applied to them. The IR spectral band is used because interference from the sun is 75 times less than in the visible spectral band. Interference from environmental effects such as clouds is also lower in the IR region, which is another advantage.

3.7.1 Block Diagram of ERS

The block diagram of ERS is shown in Figure 3.9.

[Block diagram: the IR Detector (lens and sensor unit) connects to the Control Unit (Intel C8051 microcontroller) via a Serial Peripheral Interface; a power-regulation stage converts the unregulated supply into a regulated 5 V for the sensor unit; the Control Unit communicates over the satellite CAN bus.]
Figure 3.9 Block Diagram of ERS

The IR detector consists of the optical unit and the sensor unit; the optical unit is mounted outside the satellite. The Control Unit includes the microcontroller with external RAM and the power circuit that supplies power to the hardware units.

The Control Unit is the most important unit of the ERS and performs the following functions:

1) It executes the control functions of the ERS.
2) It downloads and stores the image from the sensor via the Serial Peripheral Interface.
3) It processes the image data.
4) It relays the calculated telemetry data to the Attitude and Orbit Determination and Control System (AODCS) via the data bus.
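One cycle through these four functions can be sketched as below. The three callables are hypothetical stand-ins for the hardware drivers (SPI image download, the pitch/roll image processing, and the data-bus telemetry output); they are not the project's actual driver interfaces.

```python
def control_unit_cycle(read_spi_image, process_image, send_telemetry):
    """One cycle of the Control Unit: download an image over SPI, run
    the pitch/roll algorithms on it, and relay the result to the AODCS
    over the data bus. All three callables are hypothetical drivers."""
    frame = read_spi_image()                     # step 2: download image
    pitch, roll = process_image(frame)           # step 3: process image data
    send_telemetry({"pitch": pitch, "roll": roll})  # step 4: relay telemetry
    return pitch, roll

# Exercise the cycle with stub drivers:
sent = []
result = control_unit_cycle(lambda: [[0]], lambda f: (0.1, -0.2), sent.append)
print(result)  # (0.1, -0.2)
```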

3.7.2 Evaluation Setup Diagram

The Evaluation Setup Diagram of the system is given in Figure 3.10.

Figure 3.10 Evaluation Setup Diagram of ERS

All the components used in the Earth Sensor are shown in Figure 3.10. For transmission and debugging, a serial cable connects the OBC with the transceiver; all commands and data pass between the microcontroller and the OBC via this serial cable.
3.7.3 Flow Chart of Software

The Flow Chart of the software part of ERS is given in Figure 3.11.

Figure 3.11 Flow Chart of Software of ERS

Figure 3.11 shows the steps the software performs to calculate the pitch and roll of the LEO satellite.

3.8 Components of ERS

Following components are used in ERS unit

1) Microcontroller
2) Camera
3) RAM
4) Converter
5) Capacitor
6) Fuse
7) Diode
3.8.1 Microcontroller TMS320F28335 (ACTIVE) Delfino

The microcontroller processes the image using different algorithms to calculate the pitch and roll of the satellite, which are then sent to the OBC via an RS422 cable. The specifications of the microcontroller are:

Technology: Static CMOS
Frequency: 150 MHz
Logic level: 3.3 V
Architecture: 32-bit
Operation: 16 × 16 dual MAC and 32 × 32 MAC
Coding languages: C/C++ and assembly
External interface: 16-bit or 32-bit
Address space: over 2M × 16

Table 3.1 Microcontroller Specifications

3.8.2 Camera

The first priority is an infrared camera. The following three cameras are suitable:

1) ICI 9320 P-Series
2) ICI 9640 P-Series
3) Space IR camera by Neptec

The specifications of these cameras are:

Specification       ICI 9320 P-Series            ICI 9640 P-Series             Space IR Camera by Neptec
Weight              75 g (without lens)          -                             -
Thermalized lens    7.5 mm (40×30 FOV, +45 g)    14.25 mm (44×33 FOV, +16 g)   F/1, 50 mm FL
Dimensions          34×30×34 (without lens)      -                             -
Operating power     < 1 W                        < 1 W                         0.7 W
Operating range     -40 °C to 80 °C              -                             -10 °C to +50 °C
Pixel array         320×240                      640×480                       640×480
Frame rate          60 Hz                        30 Hz                         -

Table 3.2 Specifications of IR Cameras

3.8.2.1 Alternative to IR Camera

While looking for an alternative to an IR camera, we came across the TLV chip, which converts a visible-light image to an infrared image, but it is no longer in production. Searching further, we found that a strip of camera film can be used to block visible light while passing IR. Placing such a film strip in front of a CMOS camera therefore turns it into an IR camera. We then searched for CMOS cameras and selected the following candidates that meet our requirements:

1) uCAM-II
2) C3188A
3) VF2F2
4) T5000118 (multi-processor configuration capable)

uCAM-II
Weight: 6 g
Dimensions: 32 × 32 × 21 mm
DC supply: nominal 5 V
UART: up to 3.68 Mbps for transferring JPEG

C3188A
Resolution: VGA (640 × 480 pixels)
Effective image area: 4.86 × 3.64 mm
Frame rate: 30 fps
I/O: 3.3 V or 5 V
Operating voltage: 5 V DC ±5%
Operating current: < 120 mA active @ 30 fps
Board dimensions: 33 × 33 mm

VF2F2
Low-power operation: 25 mA running / 2 mA standby
Operating voltage: 3.3 V, with up-to-5 V safe inputs
Internal data SRAM: 4 KB
Connectivity: 2 × USB 2.0 host/slave ports
Controllers: dual DMA controllers

T5000118 (multi-processor configuration capable)
Dimensions: 38.11 mm (L) × 34.12 mm (W) × 6.00 mm (H)
Operating temperature: -20 to +70 °C
Signal system: CMOS, 2 megapixel
Resolution: 1600 × 1200
Frame rate: 15 fps
Interface: 30-pin FPC
Module dimensions: 60 × 8 × 4.26 mm (L × W × H)
Sensor resolution: 1304 × 1036, SVGA
Pixel size: 2 × 2 µm
Interface: USB 2.0
Max frame rate: 15 fps
Power consumption: 50 mA idle; 100 mA ±10 mA operating @ 640 × 480 / 15 fps
Operating temperature: -40 °C to 70 °C
OS support: Linux
Supported resolutions: 640 × 480

Table 3.3 Specifications of CMOS Cameras

Images from the CMOS camera, and from the CMOS camera fitted with one and two layers of film roll, are shown in Figures 3.12, 3.13, and 3.14.

Figure 3.12 Image taken by CMOS Camera

Figure 3.13 Image taken by CMOS Camera with Film Roll

Figure 3.14 Image taken by CMOS Camera with 2 Film Roll

3.8.3 Converter CON-422-PIE

A converter is used for communication between the earth sensor and the OBC; it converts the data from RS-232 to RS-422. The specifications of this converter are:

Baud rates: 300 bps to 115,200 bps
Connector type: RS-232 side and RS-485/RS-422 side 5-way terminal block
Power source: from RS-232 data lines
Current required: 10 mA max
Static protection: 15 kV ESD protection
Surge protection: 600 W/sec
Weight: 24 grams
Dimensions: 62.8 mm × 33.8 mm × 17.8 mm
Operating temperature: -40 °C to 85 °C
Operating humidity: 5% to 95%
Reliability: low failure rate; 99+% reliability since inception

Table 3.4 Specifications of Converter

3.8.4 Zener Diode 1N5221 thru 1N5254

A Zener diode is used to protect the unit from reverse current flow and over-voltage. Its specifications are:

Operating temperature: -55 °C up to +150 °C
Power dissipation: 500 mW
Forward voltage: 1.1 V at 200 mA

Table 3.5 Specifications of Diode

3.8.5 Fuse Series 473

A fuse is used to prevent the components from being damaged by over-current. Its features are:

1) Enhanced inrush withstand
2) Small size
3) Wide range of current ratings (375mA - 7A)
4) RoHS compliant
5) Wide operating temperature range
6) Low temperature de-rating

3.8.6 FOV, IFOV, SWATH

Field of View (FOV): the total angle observed by the camera.
Instantaneous Field of View (IFOV): the spatial resolution provided by the remote imaging system.
Swath: the total width of the image on the ground, in the direction orthogonal to the ground track.

Table 3.6 FOV, IFOV and SWATH

FOV, IFOV and Swath are shown in Figure 3.15.

Figure 3.15 FOV, IFOV and Swath

Chapter 4

Software

This chapter describes the software design of the ERS, beginning with an overview of the whole software and then describing each part in detail. Appendix A contains the code of all the algorithms discussed in this chapter, and Appendix B contains the Arduino code used to interface the hardware.

4.1 Review

First, the image data is stored in a 32 × 32 16-bit array; the same array can be used to accumulate multiple images in order to increase the SNR. The values of column 32 are then used to calibrate the image, compensating for the difference in amplifier gain between the rows.

The edge-detection algorithm produces a 2 × 31 array containing the edge-pixel coordinates.

Another algorithm can be used to increase the number of edge pixels, improving the accuracy of the line fitting.

With the help of the edge coordinates, a straight line is fitted to the edge points, and that line is used to compute the pitch and roll.

Figure 4.1 shows the overall procedure of the software.

Figure 4.1 Overall Procedure of the Software

4.2 Software Used


4.2.1 MATLAB

MATLAB is one of the most commonly used fourth-generation programming languages and is also widely used for numerical analysis. Its major advantage is that it can easily be interfaced with programs written in other languages. It is easy to learn, and its built-in libraries simplify our work. MATLAB is developed by MathWorks. The MATLAB environment provides the following windows, which help in programming:

Figure 4.2 Matlab

4.2.2 Arduino IDE

The Arduino IDE is used to develop programs for the Arduino boards. It is easy to use, can be downloaded free of cost, and ships with sample interfacing programs to help the user. The software can be downloaded from www.arduino.cc/en/main/software.

The IDE is very easy to use, with the following five buttons at the top left of the window:

Verify: checks the code for syntax errors; any error found is shown at the bottom of the window. It does not report logical errors.

Upload: sends the verified code to the development board. For a successful upload, the correct serial port and development kit must be selected, and the RX pin should not be connected to the hardware.

Save: saves the code to the computer's hard drive; the code must be saved manually.

New: opens a new IDE window for writing new code.

Open: opens existing code from the hard drive.

Table 4.1 Arduino Window Buttons

4.3 Image Retrieval and Calibration

The pins used to read an image from the array are shown in Figure 4.3.

Figure 4.3 Pins to Read Image from Array Set

To read the image from the thermopile, ARRAYSET (pin 12, as shown in Figure 4.3) is kept low for 20 µs and then driven high. This resets a 10-bit counter, selecting column 1 and making all the rows (from row 1) available as outputs on the multiplexer. CSMUX is then driven low to send the channel number over SPI, and driven high to place the row's pixel value on the ADC output. To read a value from the ADC, CSADC is kept high for at least 2.2 µs and then driven low again; CSADC is kept low in normal mode. The highest 8 bits are read first and then the lower ones; the two are combined and saved in an unsigned 32 × 32 16-bit array, which allows all 32 pixels in a row to be read.
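The byte-combining step described above is straightforward; a minimal Python sketch (the SPI read itself happens on the microcontroller, so only the arithmetic is shown, and the commented helper name is hypothetical):

```python
def combine_adc_bytes(high_byte: int, low_byte: int) -> int:
    """Combine the two 8-bit SPI reads from the ADC into one
    unsigned 16-bit pixel value (the high byte is read first)."""
    return ((high_byte & 0xFF) << 8) | (low_byte & 0xFF)

# Reading one 32-pixel row could then look like:
# row = [combine_adc_bytes(*read_adc_channel(ch)) for ch in range(32)]
print(combine_adc_bytes(0x12, 0x34))  # 4660 (0x1234)
```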

ARRAYCLK is then driven low for about 1 µs to place the values of the next column on the output pins. A delay of 115 µs is required before reading the pixel values, to let the amplifier settle.

Each row of the thermopile is clocked out using a different amplifier, and each amplifier's gain differs slightly. The last element of each row is connected to ground instead of a thermopile; it provides the bias of that row's amplifier, which is subtracted from the other row entries after the image has been saved.

The most common problem in the manufacturing of the focal plane array (FPA) is damaged pixels, which produce incorrect values higher than usual. To recover from this, an image is taken of a uniform object covering the whole FPA. This reference is compared with the output, and where an error is found, their difference is subtracted from the output value.

The image data is stored in a 32 × 32 16-bit array, and the same array can be used to accumulate multiple images in order to increase the SNR. As stated earlier, the values of column 32 are used to calibrate the image, compensating for the difference in amplifier gain between the rows.
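The accumulation and per-row calibration can be sketched in Python; this is an illustrative re-expression of the steps above (the actual code runs on the microcontroller and in the MATLAB of Appendix A), assuming the image is a 32 × 32 array whose last column holds each row amplifier's bias:

```python
def accumulate(frames):
    """Sum multiple 32x32 frames element-wise to increase SNR."""
    acc = [[0] * 32 for _ in range(32)]
    for frame in frames:
        for r in range(32):
            for c in range(32):
                acc[r][c] += frame[r][c]
    return acc

def calibrate(image):
    """Subtract the per-row amplifier bias (column 32, index 31)
    from the other 31 entries of each row."""
    return [[image[r][c] - image[r][31] for c in range(31)]
            for r in range(32)]
```

After calibration the working image is 32 × 31, matching the matrix size used by the edge-detection step.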

Figure 4.4 Images Taken by AXT25C: (left) a cup of coffee; (right) a hand

4.4 Edge Detection [7]

An edge is formed by the connected pixels on the boundary that separates two grey-scale regions. Since edges are detected from changes in grey level, the greater the difference in grey level between the two regions, the better the edge is detected. If the grey level changes in a single step between the two regions (from one pixel to the next), an ideal edge with a short slope is found, as shown on the left-hand side of Figure 4.5. More often, however, a number of pixels lie in the grey-scale transition, producing a blurred, ramp-like edge with a longer slope, as shown on the right-hand side of Figure 4.5.

Figure 4.5 Difference between Ideal and Real Edges

Edge-detection algorithms fall into two categories: gradient and Laplacian. Gradient methods find edges at the maxima and minima of the first derivative of the image, while Laplacian methods find them at the zero crossings of the second derivative. First-derivative algorithms include Roberts, Prewitt, Sobel, and Canny; second-derivative methods include Marr-Hildreth and zero-crossing. Roberts, Prewitt, and Sobel are noise-sensitive: as the noise in the image increases, the magnitude of the edge response decreases, causing inaccuracy. The Canny algorithm depends mainly on its tunable parameters: the standard deviation of the Gaussian filter and its threshold values. Larger values enlarge the Gaussian filter, and as the filter size increases the detector becomes less sensitive to noise, which is essential for noisy images and for detecting large edges. Canny is computationally more expensive than Roberts, Prewitt, and Sobel, but has better performance. Although it has the best response of all the algorithms mentioned above, every one of them can produce false responses. A small error in an edge produces an incorrect line fit, so an algorithm that does not produce false edges is the better option.

As noted above, an edge is formed by the connected pixels on the boundary that separates two grey-scale regions. First the image is read and saved in a 32 × 31 matrix, and a loop is applied to detect an edge in each column. To set the threshold value, the space value and the earth value are found with the following formulas:

Space value = (x(1, column) + x(2, column) + x(3, column)) / 3 ------------------------- 4.1

Earth value = (x(30, column) + x(31, column) + x(32, column)) / 3 --------------------- 4.2

Threshold = 0.3 × (Earth value − Space value) + Space value -------------------------- 4.3

Each column is then scanned for edges from row 2 to row 29. The current pixel and the three pixels below it must all be higher than the threshold value. If they are, and the pixel above the current pixel is 10% higher than the current pixel, the current pixel is delineated as the edge pixel. Once that mapping is done, the search moves to the next column, and so on [8].

If the current pixel and the three pixels below it are not all higher than the threshold, the algorithm continues processing with the lower pixels [9].

4.4.1 Algorithm of Edge Detection

The algorithm of the edge detection process is given in Figure 4.6

Figure 4.6 Algorithm of Edge Detection

The algorithm is as follows:

1) Read and save the image, then convert it to grey scale.
   a) Find the matrix size.
      i) Detect the rows and columns of the image.
2) Using a for loop, find the threshold for each column.
   a) Find the space value by averaging the top three pixels, using the formula in equation 4.1.
   b) Find the earth value by averaging the bottom three pixels, using the formula in equation 4.2.
   c) Find the threshold with the formula in equation 4.3.
3) Inside the column loop, apply a loop over the rows to detect the edges.
   a) The current pixel and the pixels below it are compared with the threshold; they are either higher or lower than it.
      i) If the value is higher than the threshold and the pixel above the current pixel is 10% higher than the current pixel, the current pixel is the edge pixel.
      ii) If the value is lower than the threshold, the loop moves on.
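The steps above can be sketched in Python; this is an illustrative version of the MATLAB code in Appendix A, assuming a 32-row × 31-column image with space (dark) at the top and earth (bright) at the bottom, and omitting the extra 10% check on the pixel above for brevity:

```python
def detect_edges(img):
    """Return (row, col) edge pixels, at most one per column.

    img: 32 rows x 31 columns of grey-scale values.
    """
    edges = []
    for col in range(len(img[0])):
        # Equations 4.1-4.3: space/earth averages and the threshold.
        space = (img[0][col] + img[1][col] + img[2][col]) / 3
        earth = (img[29][col] + img[30][col] + img[31][col]) / 3
        threshold = 0.3 * (earth - space) + space
        # Scan rows 2..29 (indices 1..28): the current pixel and the
        # three pixels below it must all exceed the threshold.
        for row in range(1, 29):
            window = [img[row + k][col] for k in range(4)]
            if all(v > threshold for v in window):
                edges.append((row, col))
                break  # edge found: move on to the next column
    return edges
```

On a synthetic horizon (dark rows above, bright rows below), the function returns one edge pixel per column at the first bright row.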

Image has been mapped as given in Figure 4.7

Figure 4.7 Image Mapping

4.5 Line Fitting

After finding the edges with the edge-detection method, a line is fitted to them. The two candidate methods for line fitting are the Hough Transform Method and the Least Square Method. The general equation of a line is:

y = ax + b -------------- 4.4

4.5.1 Hough Transform

The Hough transform is a line-detection algorithm based on a two-dimensional array called the 'accumulator', and is known as a 'feature extraction technique'. The dimension of the accumulator equals the number of parameters, so the method requires more memory; for this reason it is not the first choice in this project. It is widely used in computer vision and digital image processing, mainly for simple shapes such as circles and straight lines. The equation used to detect a line is:

r = x cos θ + y sin θ --------------- 4.5

where r is the smallest distance between the origin and the straight line, x and y are the pixel coordinates, and θ is the angle between the x-axis and the perpendicular from the origin to the line, as in Figure 4.8.

Figure 4.8 Hough Transform
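The accumulator idea can be sketched as follows (an illustrative toy version, not used in the final design; bin counts are arbitrary choices):

```python
import math

def hough_line(points, r_max, n_theta=180, n_r=100):
    """Minimal Hough accumulator for r = x*cos(theta) + y*sin(theta).

    Each point votes for every discretized (theta, r) pair it lies
    on; the most-voted cell gives the detected line's (theta, r).
    """
    acc = [[0] * n_r for _ in range(n_theta)]
    for x, y in points:
        for ti in range(n_theta):
            theta = math.pi * ti / n_theta
            r = x * math.cos(theta) + y * math.sin(theta)
            ri = int((r + r_max) / (2 * r_max) * (n_r - 1))
            if 0 <= ri < n_r:
                acc[ti][ri] += 1  # this point votes for cell (ti, ri)
    # The best line is the accumulator cell with the most votes.
    ti, ri = max(((t, r) for t in range(n_theta) for r in range(n_r)),
                 key=lambda c: acc[c[0]][c[1]])
    return math.pi * ti / n_theta, (2 * r_max) * ri / (n_r - 1) - r_max
```

The memory cost the text mentions is visible here: the accumulator holds n_theta × n_r cells regardless of how many edge points exist.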

4.5.2 Least Square Method [13], [14]

The Least Square Method is less complex and consumes less memory than the Hough Transform Method. It fits the model values Ax to the observations b, so the error between them is:

e = Ax − b ----------------------- 4.6

The best fit is the one for which this error is minimum; the method provides accuracy as well as saving memory. The norm of the error is:

J = (1/2)‖e‖² = (1/2)‖Ax − b‖² ---------------------- 4.7

  = (1/2)(Ax − b)ᵀ(Ax − b) ---------------------- 4.8

where e is the error and J is the norm of the error, which is minimized for the best fit; A holds the coordinates of the edge points and b the observed values. The regression produced by the Least Square Method is shown in Figure 4.9.

Figure 4.9 Regression Analysis

The line is represented by the matrices y, A, and c:

y = [y₀, y₁, …, y₍N−1₎]ᵀ,   A = [[x₀, 1], [x₁, 1], …, [x₍N−1₎, 1]],   c = [a, b]ᵀ

y = Ac ------------------- 4.9

where y is the dependent variable and x is the independent variable. Taking the derivative of the normalized error with respect to x and setting it to zero:

(δ/δx) J = Aᵀ(Ax − b) = 0 ----------------- 4.10

x = (AᵀA)⁻¹ Aᵀ b ----------------- 4.11

Substituting the matrices y, A, and c into equation 4.9, the value of c is found as:

c⁰ = [a⁰, b⁰]ᵀ = (AᵀA)⁻¹ Aᵀ y ------------- 4.12

Now take the first row of the edge-coordinate array as x₀, x₁, x₂, … and the second row as y₀, y₁, y₂, …. Solving the general line equation with these matrices gives the parameters a and b as:

With all sums running over n = 0 … N−1:

a⁰ = (N Σ xₙyₙ − Σ xₙ · Σ yₙ) / (N Σ xₙ² − (Σ xₙ)²) ----------- 4.13

b⁰ = (Σ xₙ² · Σ yₙ − Σ xₙ · Σ xₙyₙ) / (N Σ xₙ² − (Σ xₙ)²) ------------ 4.14
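Equations 4.13 and 4.14 translate directly into code; the following Python sketch mirrors the closed-form fit (the project's own implementation is the MATLAB code in Appendix A):

```python
def fit_line(xs, ys):
    """Least-squares fit y = a*x + b via equations 4.13 and 4.14.

    Returns (a0, b0), or None when the denominator is zero
    (a vertical line, which the algorithm handles separately).
    """
    n = len(xs)
    sx = sum(xs)
    sy = sum(ys)
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    denom = n * sxx - sx * sx          # shared denominator of 4.13/4.14
    if denom == 0:
        return None                    # vertical line: x = constant
    a0 = (n * sxy - sx * sy) / denom   # slope, equation 4.13
    b0 = (sxx * sy - sx * sxy) / denom # intercept, equation 4.14
    return a0, b0

print(fit_line([0, 1, 2, 3], [1, 3, 5, 7]))  # (2.0, 1.0)
```

The zero-denominator branch corresponds to the "equation-y becomes zero" case in the algorithm of the next subsection.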

4.5.2.1 Algorithm of Least Square Method

The algorithm of Least Square Method is shown in Figure 4.10

Figure 4.10 Algorithm of Line Fitting

1) Convert the image coordinates to Cartesian coordinates.
   a) Use a for loop up to the number of edges; the rows are converted first, then the columns.
2) Initialize a second for loop to accumulate summation X, summation Y, summation XY, and summation XX.
3) Define the denominator.
4) Check the value of the denominator.
   a) If the value is zero, the y equation becomes zero and the x equation is defined.
   b) If the value is non-zero, define a-zero and b-zero and set equation y to 1 and equation x to 0.

4.6 Calculating Pitch and Roll [15]

Pitch and roll are calculated relative to the image boresight, which is the important reference. Pitch is the inclination about one reference axis, as shown on the left side of Figure 4.11; it is found by taking the arctangent of the distance between the horizon line and the origin, divided by the focal length. Roll is the rotation of the object about another axis, as shown on the right side of Figure 4.11; it is the angle between the y-axis and the perpendicular to the horizon line through the origin.

Figure 4.11 Pitch and Roll

The grey colour shows the front of the satellite. Pitch and roll are calculated from the position of the horizon line relative to the image boresight, as shown in Figure 4.12. The pitch distance is the perpendicular distance from the origin to the line, shown as α in Figure 4.12. Roll is the angle ∅ between the y-axis and the line through the origin perpendicular to the horizon line. The point p(xₚ, yₚ) shown in Figure 4.12 is the intersection of the horizon line with that perpendicular.

Figure 4.12 Pitch and Roll of Horizon line

The following equations give the values of pitch α and roll ɸ:

xₚ = −ab / (a² + 1) --------------------- 4.15

yₚ = −b / (a² + 1) ---------------------- 4.16

α = √(xₚ² + yₚ²) ------------------ 4.17

ɸ = tan⁻¹(|xₚ| / |yₚ|) ----------------- 4.18

If the horizon line is parallel to the y-axis, which is very unusual, the pitch α is equal to the y-intercept, as shown in Figure 4.13, and the roll is taken as whatever angle is suitable.

Figure 4.13 Relationship between Pitch Distance and Pitch Angle

In this case, the pitch distance in the FPA (focal plane array) is the product of the pitch distance in pixels and the pixel height:

α′ = α × PIXELHEIGHT --------------- 4.19

The pitch is then the arctangent of this pitch distance divided by the focal length:

PITCH = tan⁻¹(α′ / f) ----------------------------- 4.20

4.6.1 Algorithm to Calculate Pitch and Roll

The algorithm to Calculate Pitch and Roll is shown in Figure 4.14

Figure 4.14 Algorithm to Calculate Pitch and Roll

1) If equation-y is equal to zero:
   a) Define the pitch distance of the straight line, using equation-x and the pixel height.
   b) Define the pitch from that pitch distance, using pi, the straight-line pitch, and the focal length.
   c) Convert the pitch value to degrees.
   d) Find the roll from equation-x.
2) If equation-y is non-zero:
   a) Find the intersection point of the line using a-zero and b-zero (the x and y components of the intersection point).
   b) Calculate the perpendicular distance on the image; the negative of the pitch is taken if the y component is zero, and the positive if it is non-zero.
   c) Calculate the perpendicular distance on the camera.
   d) Calculate the roll with the help of a-zero and convert it to degrees.

In summary: the image is first retrieved and calibrated, the edges are then found, a line is fitted to them, and finally the pitch and roll are computed from that line.

Chapter 5

Hardware

This chapter describes the hardware design of the ERS.

Hardware Required

The components include:

1) Power Supplies
2) DC Power
3) Arduino Development Kit
4) LCD
5) Web Cam

5.1 Power Supplies

Power supplies are required to operate the overall hardware. They include both AC and DC supplies, but in robots a DC power supply is mostly used. Batteries are the primary source of DC supply, and we can select them according to the requirement.

5.2 DC Power

Batteries are the major source of DC supply and are available in many voltage and current ratings, so the desired battery is easy to obtain. Since a battery is a combination of cells, the desired battery can also be made by connecting cells in series or parallel.

5.2.1 Voltage Regulator

A voltage regulator supplies a constant voltage to a circuit: even if the input voltage rises above the rated voltage, the output stays constant. The process of voltage regulation is shown in Figure 5.1.

Figure 5.1 Voltage Regulator

The whole circuit includes:

1) Reference voltage
2) Control circuit
3) Error generator
4) Feedback

The reference voltage is the input to the regulator and is forwarded to the control circuit, which drives the load. The load acts as the error generator, and the generated error signal is fed back to the reference in order to compensate for any change.

5.2.2 Performance of regulator

Performance of the regulator depends on:

1) Output
2) Output impedance
3) Line regulation
4) Load regulation
5) Heat dissipation
6) Maximum regulation ratio

5.2.3 Fixed Regulator ICs

A very common regulator IC, the 78xx series of positive voltage regulators, can be used for voltage regulation. This IC has three pins, as shown in Figure 5.2. The first pin is the input, which takes the DC voltage to be regulated; the second pin is grounded; and the third pin provides the regulated output. To minimize noise at the output, a capacitor is connected on the output side to filter out most of the noise. The IC's rated current of 1 A is sufficient for all kinds of microcontrollers.

Figure 5.2 Fixed Voltage Regulator

5.2.4 Variable Regulator IC

These regulators include a variable resistor that is used to change the regulated output. The LM317 is a variable voltage regulator and works in the same way as the fixed regulators. The variable voltage regulator is shown in Figure 5.3.

Figure 5.3 Variable Voltage Regulator

Vout is the output voltage, Vref the reference voltage, and Iadj·R2 the voltage drop across the variable resistor. The regulated output voltage for a given resistor setting is:

Vout = Vref (1 + R2/R1) + Iadj R2
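As a worked example of this formula, using the LM317's typical datasheet values (Vref ≈ 1.25 V, Iadj ≈ 50 µA) and illustrative resistor values chosen here, not taken from the project's circuit:

```python
def lm317_vout(r1, r2, vref=1.25, iadj=50e-6):
    """Output voltage of an LM317-style adjustable regulator:
    Vout = Vref * (1 + R2/R1) + Iadj * R2."""
    return vref * (1 + r2 / r1) + iadj * r2

# With the common R1 = 240 ohm and R2 = 720 ohm:
print(round(lm317_vout(240, 720), 3))  # 5.036 V
```

The Iadj term contributes only 36 mV here, which is why it is often neglected in rough calculations.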

5.3 Arduino Development Kit


5.3.1 Arduino Mega 2560

The Arduino Mega 2560, shown in Figure 5.4, is a prototyping development board based on the 100-pin ATmega2560 microcontroller. The board provides 54 digital input/output pins, which can drive relays, transistors, and other outputs, and read inputs from digital sensors; 15 of these pins can produce Pulse Width Modulation outputs for switching when required. It has 16 analog input pins, so up to 16 analog sensors can be connected to the board, and 4 serial ports (UARTs) for serial communication with other microcontrollers. A USB port on the board is used to burn code onto the controller and to communicate serially with a PC, so no separate microcontroller burner is needed; a 16 MHz crystal oscillator provides its clock. The board can be powered over USB or from an external supply: the power jack accepts 6 V minimum and 20 V maximum, with 7 V to 12 V recommended. It provides 256 KB of flash memory for code, 8 KB of SRAM, and 4 KB of EEPROM. The Arduino Mega 2560 is the central part of this project, because all other components are interfaced with it.

Figure 5.4 ArduinoMega2560

5.4 LCD

A Liquid Crystal Display (LCD) uses the light-modulating property of liquid crystals to show data. An LCD panel is thin and flat but does not emit light itself, so for a clear display LCDs are usually equipped with LED backlights, as shown in Figure 5.5.

Figure 5.5 LCD

LCDs are widely used all over the world in projects, systems, and applications. Nowadays they can be seen everywhere: in computers, clocks, TVs, calculators, watches, and many other things used in daily life. Cathode ray tubes once served this purpose, but LCDs have taken their place thanks to their clear display, ease of use, low cost, light weight, efficiency, reliability, and portability. LCDs are available in different sizes; in this project, one 20×4 alphanumeric LCD is used to display the data.

5.4.1 Interfacing of LCD with Arduino

The LCD plays an important role in the design of an embedded system because it serves as an indicator. The status of anything can be shown on it, so values obtained from a sensor, or values that are calculated, can easily be displayed. Generally a 2×16 LCD is used, meaning 2 rows and 16 columns. The description of the LCD pins is shown in Figure 5.6.

Figure 5.6 Configuration of LCD Pins

To interface the LCD with the Arduino, first power up the LCD; a variable resistor attached to pin 3 of the LCD controls the display contrast. Pin 4 and pin 6 of the LCD are connected to the Arduino, and LCD pins 11 to 14 are the data pins. Only four data pins of the LCD are connected to the Arduino because, in this project, the LCD is controlled in 4-bit mode. The interfacing of the LCD with the Arduino is shown in Figure 5.7.

Figure 5.7 Interfacing of LCD with Arduino

5.5 WEB CAM

Image sensors play an essential role in many applications because, given the right algorithm, they can sense anything that comes in front of them. Webcams are low-cost and flexible, and the resolution of current cameras is impressive. Webcams are offered at several resolutions:

1) Low resolution: 320×240.
2) Medium resolution: 640×480.
3) High resolution: 1280×720 and 1920×1080.

A webcam's lens, usually plastic, focuses the scene onto the image sensor and can be adjusted to focus in and out. Some webcams provide VGA resolution at 30 frames per second, and newer ones provide multiple resolutions at high frame rates, even 120 frames per second.

The image is read from the sensor and transmitted to the computer over USB; each frame is transmitted either uncompressed in RGB or compressed as JPEG.
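The difference between the two transfer formats is easy to quantify; for example, the raw size of an uncompressed RGB frame (a back-of-the-envelope sketch, not a measurement of any particular camera):

```python
def rgb_frame_bytes(width, height, bytes_per_pixel=3):
    """Size of one uncompressed RGB frame (3 bytes per pixel)."""
    return width * height * bytes_per_pixel

# A medium-resolution 640x480 webcam at 30 fps:
frame = rgb_frame_bytes(640, 480)       # 921,600 bytes per frame
print(frame, frame * 30 / 1e6, "MB/s")  # ~27.6 MB/s uncompressed
```

This is why JPEG compression is commonly used over bandwidth-limited links such as the uCAM-II's UART.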

Figure 5.8 Webcam

Chapter 6

Simulations and Results

This chapter discusses the simulations and results of the software as well as the hardware.

6.1 Software Simulations and Results

After designing the algorithms used in the Earth Sensor software, we test the software on the image shown in Figure 6.1.

Figure 6.1 Test Image (Earth Horizon)

First, an image of the earth against a space background is taken, as shown in Figure 6.1, and sent to the PC in .jpg format for processing by the algorithms coded in MATLAB: the image is resized, converted to grey scale, and then passed through the edge-detection and line-fitting algorithms to calculate the pitch and roll of the test image.

The result obtained after the image processing is shown in Figure 6.2.

Figure 6.2 Resultant of Image Processing

The values of the variables used in the algorithms to calculate pitch and roll of the tested image
are:

A = 1
azero = 0.2714
bzero = 0.5673
edgeNum = 31
xEqn = 0
yEqn = 1

The calculated values of pitch and roll of the tested image are:

pitch = -65.4590°
roll = 15.1828°

After all the results are verified in MATLAB, the new edge-detection technique used in this project is compared with the other techniques: PSNR values are taken for different images and averaged for each edge-detection technique, as shown in Table 6.1.

Image    Sobel (dB)    Canny (dB)    Advance (dB)
1        23.2694       23.964        24.296
2        19.109        19.901        20.180

Table 6.1 Comparison of Average PSNR Values of Different Edge Detection Techniques
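The PSNR values in Table 6.1 follow the standard definition of the metric; the sketch below is our reconstruction for 8-bit images, not the thesis code:

```python
import math

def psnr(original, processed, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two equal-size
    8-bit images, given as flat lists of pixel values."""
    mse = sum((o - p) ** 2
              for o, p in zip(original, processed)) / len(original)
    if mse == 0:
        return float("inf")  # identical images
    return 10 * math.log10(max_val ** 2 / mse)

print(round(psnr([0, 0, 0, 0], [5, 5, 5, 5]), 2))  # 34.15
```

A higher PSNR means the processed image is closer to the original, which is why the technique with the largest average in Table 6.1 is preferred.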

Figure 6.3 also shows the comparison of the different techniques that are discussed in Table 6.1.

[Bar chart of average PSNR (dB) for the Sobel, Canny, and proposed techniques, ranging from about 19.2 dB to 20.8 dB]

Figure 6.3 Comparison of PSNR

The results of some more images are shown in Table 6.2.

Images Pitch Roll

1 89.4285 53.0999

2 88.7317 41.2643

Table 6.2 Pitch and Roll Values of Figure 6.4 and Figure 6.5

Table 6.2 shows the results of Figure 6.4 and Figure 6.5 respectively.

Figure 6.4 Image 1

Figure 6.5 Image 2

6.2 Hardware Simulations and Results

6.2.1 Ground Testing

For ground testing, IR is generated in a small room using warm bulbs, which act as the Sun. A small round ball placed at the centre of the room acts as the Earth. The radiation from the bulbs is focused on the ball so that IR is emitted from the 'Earth' and can be sensed by the IR camera. The algorithm then runs, calculates the values of pitch and roll, and displays them on the LCD. The following figures were taken during testing.

Figure 6.6 Arduino Mega 2560 Figure 6.7 Round Ball

Figure 6.8 Warm bulbs Figure 6.9 Webcam

Figure 6.10 LCD result

Chapter 7

Conclusion

7.1 Conclusion

The objective of this project was to design an uncooled Earth Sensor from commercial off-the-shelf components: a device that operates like a commercial Earth Sensor but without consuming many resources, and at lower cost. The project also meets the requirements of the LEO satellite, including power, dimensions, mass, and environmental conditions.

Although it does not produce fully accurate results, they are very close to the required ones. The edge-detection method proposed in the project determines the satellite's position relative to the boresight accurately and reliably. The main purpose of the proposed edge-detection technique is to obtain accurate values of pitch and roll from a low-resolution image.

7.2 Recommendations for improvement in current ERS

Following are the recommendations to improve the designed ERS:

1) Perform more tests on the sub-pixel edge estimation algorithm and investigate possible improvements to it.
2) Isolate the noise sources to improve testing.
3) Investigate methods to compensate for the distortion introduced by the lens.
4) Improve the prototype unit.
5) Also test the project at higher (super-resolution) image resolutions.
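As one concrete direction for the first recommendation: a common sub-pixel refinement (not necessarily the one used in this project) fits a parabola through the gradient magnitude at the coarse edge pixel and its two neighbours, and takes the parabola's vertex as the edge position:

```python
def subpixel_offset(g_left, g_center, g_right):
    """Sub-pixel offset of an edge from parabolic interpolation of three
    gradient-magnitude samples, where g_center is the integer-pixel maximum.
    Returns an offset in pixels relative to the center sample."""
    denom = g_left - 2.0 * g_center + g_right
    if denom == 0:
        return 0.0  # flat neighborhood: keep the integer location
    # Vertex of the parabola through (-1, g_left), (0, g_center), (1, g_right).
    return 0.5 * (g_left - g_right) / denom

# Symmetric samples -> peak exactly on the pixel; asymmetry shifts it.
print(subpixel_offset(2.0, 5.0, 2.0))  # → 0.0
```

Testing such refinements against images with known horizon geometry would show whether they improve the pitch and roll accuracy.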

Bibliography
[1] V. V. Unhelkar (supervised by Prof. H. B. Hablani), "Satellite Attitude Estimation using Sun Sensors, Horizon Sensors and Gyros," Department of Aerospace Engineering, Indian Institute of Technology Bombay, 2012.
[2] V. Ivatury, K. Moore, S. Prasad, G. Sinclair, A. Sloboda, D. Smith, and Meng, "Earth Horizon Sensor for Small Spacecraft: Final Report and Recommendations," Electrical and Aerospace Engineering, University of Michigan, Ann Arbor, April 25, 2011.
[3] Y.-F. Shen, D. Krusienski, J. Li, and Z. Rahman, "A Hierarchical Horizon Detection Algorithm," IEEE Geoscience and Remote Sensing Letters, vol. 10, no. 1, January 2013.
[4] "A Low Weight/Power/Cost Infrared Earth Sensor," IEEE Aerospace Conference, 0-7803-8155-6/04, © 2004 IEEE.
[5] S. J. Dumble and P. W. Gibbens, "Horizon Profile Detection for Attitude Determination," Journal of Intelligent & Robotic Systems, DOI 10.1007/s10846-012-9684-7, received 2 September 2011, accepted 31 May 2012.
[6] M. N. Dol Bahar, M. E. Mohd Hassan, N. Hamzah, and A. S. Arshad, "Modular CMOS Horizon Sensor for Small Satellite Attitude Determination and Control Subsystem," Astronautic Technology (M) Sdn. Bhd. (ATSB), SSC06-VIII-8.
[7] R. Claypoole, J. Lewis, S. Bhashyam, and K. Kelly, "Image Morphing," http://www.owlnet.rice.edu/~elec539/Projects97/morphjrks/moredge.html
[8] G. T. Shrivakshan and C. Chandrasekar, "A Comparison of Various Edge Detection Techniques Used in Image Processing," IJCSI, vol. 9, issue 5, September 2012.
[9] "Study and Comparison of Different Edge Detectors for Image Segmentation," Global Journals Inc. (USA), vol. 12, issue 13, version 1.0, 2012.
[10] B. Green, "Canny Edge Detection Tutorial," Drexel University, www.pages.drexel.edu, 2002.
[11] "A Short Introduction to the Radon and Hough Transforms and How They Relate to Each Other," CiteSeerX.
[12] "MISB Standard 0601" (PDF), Motion Imagery Standards Board (MISB), retrieved 1 May 2015.
[13] L. Shapiro and G. Stockman, Computer Vision, Prentice-Hall, Inc., 2001.
[14] "SD 575 Image Processing: Fundamentals of Image Processing," September 29, 2008.
[15] "Roll pitch yaw," licensed under CC BY-SA 3.0 via Wikimedia Commons, 2012.
[16] R. C. Gonzalez and R. E. Woods, Digital Image Processing, 2nd Edition, Prentice Hall, 2002.
[17] M. D. Heath, A Robust Visual Method for Assessing the Relative Performance of Edge Detection Algorithms, Master's Thesis, University of South Florida, 1996.
[18] C. D. Hall, Spacecraft Attitude Dynamics and Control, Chapter 4, Aerospace and Ocean Engineering, Virginia Tech, Blacksburg, Virginia, www.aoe.vt.edu, 2003.
[19] L. Visagie, CMOS Horizon/Sun Sensor Algorithms, Sun Space and Information Systems, 2007.
[20] B. Anderson, L. Cutsinger, M. Gilpatric, M. Oberg, and M. Taylor, Condor Thermal Imaging System, Final Year Project, University of Colorado, 2006.
[21] A. Wicks and G. Underwood, "A Novel Two-Axis Earth Horizon Sensor," Surrey Space Centre, University of Surrey, Proceedings of the 5th International ESA Conference on Spacecraft Guidance (ESA SP-516), February 2003.
[22] R. A. Hanel, W. R. Bandeen, and B. J. Conrath, "The Infrared Horizon of the Planet Earth," Journal of the Atmospheric Sciences, vol. 20, pp. 73-86, March 1963.
