
A Technical Seminar Report

On

Night Vision Obstacle Detection and


Avoidance Based on Bio-Inspired Vision
Sensors
Submitted to JNTU HYDERABAD

In Partial Fulfillment of the requirements for the Award of Degree


of

BACHELOR OF TECHNOLOGY
IN
INFORMATION TECHNOLOGY

Submitted
By

M. SAI CHAND REDDY (198R1A1234)

Under the Esteemed guidance of

Mrs. T. SWATHI

Assistant Professor, Department of IT

Department of Information Technology

CMR ENGINEERING COLLEGE


(UGC AUTONOMOUS)
(Accredited by NAAC, Approved by AICTE NEW DELHI, Affiliated to JNTU, Hyderabad)
(Kandlakoya, Medchal, Medchal Dist. Hyderabad-501 401)

(2022-2023)
CMR ENGINEERING COLLEGE
(UGC AUTONOMOUS)
(Accredited by NAAC, Approved by AICTE NEW DELHI, Affiliated to JNTU, Hyderabad)
(Kandlakoya, Medchal Road, Medchal Dist. Hyderabad-501 401)

Department of Information Technology

CERTIFICATE

This is to certify that the seminar entitled “Night Vision Obstacle Detection and Avoidance Based on Bio-Inspired Vision Sensors” is a bonafide work carried out by

M. SAI CHAND REDDY (198R1A1234)

in partial fulfillment of the requirement for the award of the degree of BACHELOR OF
TECHNOLOGY in INFORMATION TECHNOLOGY from CMR Engineering College,
affiliated to JNTU, Hyderabad, under our guidance and supervision.
The results presented in this seminar have been verified and are found to be satisfactory. The
results embodied in this seminar have not been submitted to any other university for the award
of any other degree or diploma.

_________________ _______________________
Internal Guide Head of the Department
Mrs. T. SWATHI Dr. MADHAVI PINGILI
Assistant Professor Head of Department
Department of IT Department of IT
CMREC, Hyderabad CMREC, Hyderabad
DECLARATION

This is to certify that the work reported in the present seminar entitled “Night Vision Obstacle Detection and Avoidance Based on Bio-Inspired Vision Sensors” is a record of bonafide work done by me in the Department of Information Technology, CMR Engineering College, JNTU Hyderabad. This report is based on seminar work done entirely by me and has not been copied from any other source. I submit this seminar for further development by any interested students who share similar interests and wish to improve upon it in the future.

The results embodied in this seminar report have not been submitted to any other University or
Institute for the award of any degree or diploma to the best of my knowledge and belief.

M. SAI CHAND REDDY (198R1A1234)


ACKNOWLEDGEMENT

I am extremely grateful to Dr. A. Srinivasula Reddy, Principal and Dr. Madhavi Pingili,
HOD, Department of IT, CMR Engineering College for their constant support.

I am extremely thankful to Mrs. T. Swathi, Assistant Professor, Internal Guide,


Department of IT, for her constant guidance, encouragement and moral support throughout
the seminar.
I would be failing in my duty if I did not acknowledge with grateful thanks the authors of the references and other literature referred to in this seminar.

I express my thanks to all staff members and friends for all the help and coordination
extended in bringing out this seminar successfully in time.

Finally, I am very much thankful to my parents, who guided me through every step.

M. SAI CHAND REDDY (198R1A1234)


CONTENTS

S.No TOPIC PAGE NO

I ABSTRACT I

II LIST OF FIGURES II

1. INTRODUCTION 2

2. LITERATURE SURVEY 3

3. ARCHITECTURE 6

4. WORKING METHODOLOGY 7

5. TECHNOLOGY 8

6. ADVANTAGES 10

7. DISADVANTAGES 11

8. APPLICATIONS 12

9. FUTURE SCOPE 13

10. CONCLUSION 14

11. REFERENCES 15
ABSTRACT

Moving towards autonomy, unmanned vehicles rely heavily on state-of-the-art collision avoidance systems (CAS). However, the detection of obstacles, especially during night-time, is still a challenging task since the lighting conditions are not sufficient for traditional cameras to function properly. Therefore, we exploit the powerful attributes of event-based cameras to perform obstacle detection in low-light conditions. Event cameras trigger events asynchronously at a high temporal output rate with a high dynamic range of up to 120 dB. The algorithm filters background activity noise and extracts objects using a robust Hough transform technique. The depth of each detected object is computed by triangulating 2D features extracted using LC-Harris. Finally, an asynchronous adaptive collision avoidance (AACA) algorithm is applied for effective avoidance. A qualitative evaluation comparing the event camera with a traditional camera is presented.

List of Figures

S.NO FIGURE NO DESCRIPTION PAGE NO


1 3.1.1 System architecture 6

1. INTRODUCTION
With the exponential growth in the use of vehicles, the number of accidents has increased considerably, with studies showing that approximately 90% of accidents are due to human error. Reliable detection of obstacles is therefore one of the most important parts of advanced driver assistance systems (ADAS) and collision avoidance systems (CAS), with vision sensors among the most popular choices. The majority of methods utilise traditional optical sensors for detection of vehicles under normal lighting conditions such as daytime. Stereo-vision-based methods, motion-based methods, and monocular-vision-based methods are the three kinds of methods used for obstacle detection with optical sensors. Traditional cameras can use either indirect (i.e., feature-based) methods or direct methods.

Indirect & Direct Methods: As only some of the features can be tracked or detected, feature-based (i.e., indirect) methods are not robust in low-textured environments. Direct methods, however, utilise all of the relevant information, even weak intensity variations, which makes them more robust and helps them provide efficient results in such surroundings. Since direct methods are computationally demanding, a hybrid approach (a combination of both) is used to deal with this issue. In the proposed methodology, the direct method is utilised to efficiently ascertain the orientation, while the feature-based technique is utilised for determining the displacement.

The benefits of using event-based cameras over traditional vision sensors/cameras are high dynamic range, low power consumption, high temporal resolution, and low latency. Event-based cameras have a significantly higher dynamic range than traditional high-quality frame-based cameras (120 dB vs. 60 dB, respectively). Furthermore, in event cameras, instead of waiting for a global shutter, each pixel works independently, and the photoreceptors of the pixels operate on a logarithmic scale. This allows event-based cameras to capture information in all lighting conditions, from daytime to night-time scenes.
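
To make the event-generation behaviour concrete, the sketch below models, in highly simplified form, how a single event-camera pixel emits an event whenever its log-intensity changes by more than a contrast threshold. The event tuple (timestamp, x, y, polarity) and the logarithmic trigger are standard for such sensors, but the threshold value and the code itself are illustrative assumptions, not the sensor's actual implementation.

import numpy as np

# Illustrative event-generation model for a single pixel (assumed values,
# not the hardware's actual circuitry). A pixel fires an event
# (t, x, y, polarity) whenever its log-intensity changes by more than C
# since the last event it emitted.
C = 0.2  # contrast threshold (assumption)

class EventPixel:
    def __init__(self, x, y):
        self.x, self.y = x, y
        self.log_ref = None  # log-intensity at the last emitted event

    def update(self, t, intensity):
        """Return an event tuple if the log-intensity change exceeds C, else None."""
        log_i = np.log(intensity + 1e-6)  # photoreceptor responds on a logarithmic scale
        if self.log_ref is None:
            self.log_ref = log_i
            return None
        if abs(log_i - self.log_ref) >= C:
            polarity = 1 if log_i > self.log_ref else -1  # ON = brighter, OFF = darker
            self.log_ref = log_i
            return (t, self.x, self.y, polarity)
        return None

Because each pixel keeps only its own reference level and compares against it asynchronously, no global shutter or exposure time is involved, which is what gives event cameras their high temporal resolution and dynamic range.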

2. LITERATURE SURVEY
• J. N. Yasin, S. A. S. Mohamed, M. Haghbayan, J. Heikkonen, H. Tenhunen, and J. Plosila: Unmanned aerial vehicles (UAVs): Collision avoidance systems and approaches.
Moving towards autonomy, unmanned vehicles rely heavily on state-of-the-art collision
avoidance systems (CAS). A lot of work is being done to make the CAS as safe and
reliable as possible, necessitating a comparative study of the recent work in this
important area. The paper provides a comprehensive review of collision avoidance
strategies used for unmanned vehicles, with the main emphasis on unmanned aerial
vehicles (UAV). It is an in-depth survey of different collision avoidance techniques that
are categorically explained along with a comparative analysis of the considered
approaches w.r.t. different scenarios and technical aspects. This also includes a
discussion on the use of different types of sensors for collision avoidance in the context
of UAVs.

• S. A. S. Mohamed, M. Haghbayan, T. Westerlund, J. Heikkonen, H. Tenhunen, and J. Plosila: A survey on odometry for autonomous navigation systems.
The development of a navigation system is one of the major challenges in building a fully autonomous platform. Full autonomy requires a dependable navigation capability, not only in a perfect situation with clear GPS signals but also in situations where GPS is unreliable. Therefore, self-contained odometry systems have attracted much attention recently. This paper provides a general and comprehensive overview of the state of the art in the field of self-contained, i.e., GPS-denied, odometry systems and identifies the open challenges that demand further research. Self-contained odometry methods are categorized into five main types, i.e., wheel, inertial, laser, radar, and visual, where the categorization is based on the type of sensor data used for the odometry. Most of the research in the field is focused on analyzing the sensor data exhaustively or partially to extract the vehicle pose.

• C. D. Prakash, F. Akhbari, and L. J. Karam: Robust obstacle detection for
advanced driver assistance systems using distortions of inverse perspective
mapping of a monocular camera.
The highlight of our method is the ability to detect all obstacles without prior
knowledge and detect partially occluded obstacles including the obstacles that have
not completely appeared in the frame (truncated obstacles). Our results show an
improvement of 90% more true positives per frame compared to one of the state-of-
the-art methods. Our proposed method is robust to variations in illumination and to a
wide variety of vehicles and obstacles that are encountered while driving.

• J. N. Yasin, M.-H. Haghbayan, J. Heikkonen, H. Tenhunen, and J. Plosila: Formation maintenance and collision avoidance in a swarm of drones.
Distributed formation control and obstacle avoidance are two important challenges in
autonomous navigation of a swarm of drones and can negatively affect each other due
to possible competition that arises between them. In such a platform, a multi-priority
control strategy is required to be implemented in every node in order to dynamically
optimise the tradeoffs between collision avoidance and formation control.

• C. Scheerlinck, N. Barnes, and R. Mahony: Continuous-time intensity estimation using event cameras.
Event cameras provide asynchronous, data-driven measurements of local temporal
contrast over a large dynamic range with extremely high temporal resolution.
Conventional cameras capture low-frequency reference intensity information. These
two sensor modalities provide complementary information. We propose a
computationally efficient, asynchronous filter that continuously fuses image frames
and events into a single high-temporal-resolution, high dynamic-range image state.

• M. Liu and T. Delbruck: Adaptive time-slice block-matching optical flow algorithm for dynamic vision sensors.
Dynamic Vision Sensors (DVS) output asynchronous log intensity change events. They
have potential applications in high-speed robotics, autonomous cars and drones. The
precise event timing, sparse output, and wide dynamic range of the events are well
suited for optical flow, but conventional optical flow (OF) algorithms are not well
matched to the event stream data.

3. ARCHITECTURE

Fig. 3.1.1 System Architecture

The architecture diagram describes the overall view of this work. A novel fusion framework is proposed for night-vision applications such as pedestrian recognition, vehicle navigation and surveillance. The underlying concept is to fuse low-light visible and infrared imagery into a single output to enhance the visual perception.
The proposed framework is computationally simple since it operates only in the spatial domain. The core idea is to obtain an initial fused image by averaging all the source images. The initial fused image is then enhanced by selecting the most salient features, guided by the root mean square error (RMSE) and fractal dimension of the visible and infrared images, to obtain the final fused image. Extensive experiments on different scene imagery demonstrate that the method is consistently superior to conventional image fusion methods in terms of visual and quantitative evaluations.
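
As a rough illustration of this fusion idea, the sketch below assumes the low-light visible and infrared frames are already registered grayscale NumPy arrays; the simple deviation-based detail weighting stands in for the RMSE and fractal-dimension guidance described above, so it is a hedged approximation rather than the exact method.

import numpy as np

def fuse_night_vision(visible, infrared, alpha=0.5):
    """Spatial-domain fusion sketch: average the registered low-light visible
    and infrared images, then inject the more salient local details back in.
    alpha controls how strongly the details are re-injected (assumed value)."""
    visible = visible.astype(np.float32)
    infrared = infrared.astype(np.float32)

    # Step 1: initial fused image obtained by averaging the source images.
    initial = (visible + infrared) / 2.0

    # Step 2: enhance by adding back details from whichever source deviates
    # more from the initial fusion (a crude saliency proxy).
    dev_vis = np.abs(visible - initial)
    dev_ir = np.abs(infrared - initial)
    detail = np.where(dev_vis >= dev_ir, visible - initial, infrared - initial)

    fused = initial + alpha * detail
    return np.clip(fused, 0, 255).astype(np.uint8)

Working purely with per-pixel arithmetic like this keeps the framework in the spatial domain and avoids any multi-scale transform, which is why it remains computationally simple.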

4. WORKING METHODOLOGY
Working principle: Night vision technology works in two different ways, namely Image
Intensification and Thermal Imaging.
1. Image intensification: This method basically involves ambient light amplification. It works by detecting low levels of light and then amplifying them. When photons (tiny energy packets that make up light) enter an image enhancer, they first hit a layer called a photocathode, which releases electrons. These electrons hit a second layer called a microchannel plate, which multiplies the electrons before they hit the phosphor screen, which converts them back into light. Because there are now so many more electrons, we get a brighter image. However, this method fails if there isn't enough light for the image enhancer to amplify at all.
2. Thermal Imaging: This approach captures the upper portion of the infrared spectrum, which is emitted as heat energy by objects rather than simply reflected as light. Temperature is detected by capturing the different levels of IR radiation. Although we cannot see this light in the dark, it can be felt as heat provided the intensity is high enough. However, thermal imaging has several disadvantages: it is costly, the image formed is of poorer quality, and target objects cannot be seen if there are transparent obstacles (such as glass) in the field of view.

5. TECHNOLOGY
There are actually two similar technologies used in night vision equipment. Traditional night
vision devices use optoelectronic image enhancement, which works by sensing small
amounts of infrared light that are reflected off objects and then electrically amplifying that
light into a characteristic glowing green image. A newer technology, digital image
enhancement, captures available light on a digital image sensor and then digitally enhances
the images in a full-color display.

• OPTOELECTRONIC IMAGE ENHANCEMENT

Older night vision equipment uses optoelectronic image enhancement technology. This technology uses a series of optical lenses and a special electronic vacuum tube to capture and amplify the visible and infrared light that is reflected off nearby objects. The first lens in the system, called the objective lens, captures the dim visible light reflected from the subject, along with some light from the low end of the infrared spectrum. This light, like all light, is composed of small particles called photons. These photons pass through the objective lens into an image-intensifier tube, a special electronic vacuum tube powered by small AA or N-cell batteries, which consists of two components. The first part of the tube is called the photocathode; this component converts the incoming photons into electrons. As you might remember from science class, protons, neutrons, and electrons are the very small particles that make up an atom: protons and neutrons combine to form the nucleus of the atom, while electrons swirl around the nucleus and carry an electrical charge. The newly created electrons flow into the second part of the vacuum tube, called the microchannel plate (MCP). The MCP is a small glass disc with millions of tiny holes that multiplies the number of electrons, thus amplifying the electric signal several thousand times over. As the electrons exit the end of the image-intensifier tube, they hit a phosphor-coated screen. The phosphors on the screen light up when hit, creating a glowing green image that is considerably brighter than the dim light that originally entered the objective lens. You view the phosphor image through an ocular lens that lets you focus and, if necessary, magnify the image. Why isn't this traditional night vision image in color? It has to do with the conversion of the photons into electrons, which strips the color information from the image and converts the original colored light into a black-and-white image.

• DIGITAL IMAGE ENHANCEMENT

Most night vision devices today employ a digital version of traditional optoelectronic image
enhancement technology. Digital image enhancement technology results in smaller, lighter-
weight, more versatile night vision devices.

With digital night vision, the light entering the objective lens is converted into a digital signal
via a complementary metal-oxide-semiconductor (CMOS) sensor, like the ones used in
digital video cameras. The digital image is then enhanced electronically and magnified
several times, then sent to an LCD display for viewing. The larger the CMOS sensor, the
higher the resolution of the image you see. Many current digital night vision devices display
and record full 1080p HD video.

In addition to direct viewing via the LCD screen, many digital night vision devices can be
connected to other devices, such as still or video cameras, for remote viewing. Digital night
vision signals can also be stored digitally, on SD cards, USB drives, or other storage devices.
Some digital night vision devices feature Wi-Fi capability for easy sharing and live-streaming
of videos and images to smartphones, computers, and other devices.

Digital technology has revolutionized the night vision industry. Each subsequent generation of CMOS sensor has produced better images at lower cost. While the images from early digital night vision devices weren't nearly as detailed as traditional optical images, current-generation devices produce extremely high-resolution displays. Many high-end digital night vision devices even reproduce colour images instead of the old-school glowing green images. Thus, digital enhancement is used to make visual interpretation and understanding of imagery easier. The advantage of digital imagery is that it allows us to manipulate the digital pixel values in an image.
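
As a simple, hypothetical example of manipulating digital pixel values for low-light viewing (generic image processing, not the pipeline of any particular device), the sketch below lifts a dark frame from a CMOS sensor with gamma correction and then equalizes its luminance channel using OpenCV.

import cv2
import numpy as np

def enhance_low_light(frame_bgr, gamma=2.2):
    """Brighten a dark 8-bit BGR frame: gamma correction lifts the shadows,
    then histogram equalization of the luminance channel adds contrast.
    The gamma value is an assumption chosen for illustration."""
    # Gamma correction via a 256-entry lookup table.
    inv_gamma = 1.0 / gamma
    table = np.array([(i / 255.0) ** inv_gamma * 255 for i in range(256)],
                     dtype=np.uint8)
    brightened = cv2.LUT(frame_bgr, table)

    # Equalize only the luminance (Y) channel so colours are preserved.
    ycrcb = cv2.cvtColor(brightened, cv2.COLOR_BGR2YCrCb)
    y, cr, cb = cv2.split(ycrcb)
    y_eq = cv2.equalizeHist(y)
    return cv2.cvtColor(cv2.merge([y_eq, cr, cb]), cv2.COLOR_YCrCb2BGR)

This is the kind of per-pixel manipulation that digital night vision makes possible: because the image exists as numbers rather than as electrons in a vacuum tube, the enhancement can be tuned, recorded, or streamed without extra hardware.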

6. ADVANTAGES

1. No particular skill is required.
2. Compact system.
3. Reduction in accident cases.
4. Day and night use is possible.
5. Within 200 meters, it is possible to clearly identify objects.
6. It is possible to see through glass.

7. DISADVANTAGES

1. The display quality decreases considerably because the device cannot see through smoke.
2. The red glow of infrared fill lights may expose your location.
3. Camouflage cannot be identified.
4. The daytime effect is not as good as an optical sight.
5. Fast-paced movement may not be suitable for some applications.

8. APPLICATIONS

1. Military:

Military use isn't the only application for night vision, though. Night vision cameras are often used for security as well. They are especially useful in less-populated areas where there is less light. Law enforcement and the military also both use NVDs in helicopters for surveillance when necessary.

2. Thermal Imaging:
Thermal imaging devices can be used for many of these applications as well, and
more.
NVDs provide an amplified view of what you would normally see, although it’s only
in one color. Thermal imaging devices, on the other hand, pick up heat, which makes
them useful in other ways. Since thermal-IR energy is emitted rather than reflected,
these devices can work in the complete absence of any light. This makes them
particularly useful for firefighters, who may find themselves going into a building that
is not only dimly lit, but also choked with smoke.
3. Image intensification:
This method basically involves ambient light amplification. It works by detecting low levels of light and then amplifying them. When photons (tiny energy packets that make up light) enter an image enhancer, they first hit a layer called a photocathode, which releases electrons. These electrons hit a second layer called a microchannel plate, which multiplies the electrons before they hit the phosphor screen, which converts them back into light. Because there are now so many more electrons, we get a brighter image. However, this method fails if there isn't enough light for the image enhancer to amplify at all.
4. Security:
Night vision supports security when driving at night, flying planes at night, and general surveillance. It also allows individuals to conduct observations or inspections in the dark and to carry out search-and-rescue missions at night during emergencies.

9. FUTURE SCOPE

The concept is the surveillance of war zones or mining fields, where in most areas human intervention is not allowed or is dangerous. Spy robots are basically used for spying on enemies, and with their help a counterattack can be prepared to save soldiers' lives. This spy robot can also be used to observe mining areas. As the robot is user-friendly, it can easily move, capture images, and transmit them wirelessly, as well as avoid obstacles and alert people to dangerous situations. This helps organizations view things at a remote location. With the available facilities and infrastructure, cost-effective systems can be designed to meet the required applications. The wireless technology used helps to handle the robot efficiently without manual operation. As DTMF technology is used, the robot can cover a long range.

10. CONCLUSION

In this paper, we developed a night vision obstacle detection and collision avoidance algorithm utilising a dynamic vision sensor for autonomous vehicles. We performed background activity (BA) filtering to eliminate noise, which decreases the computational cost significantly and increases the accuracy. An adaptive slicing algorithm, based on the accumulated number of events, then generates event frames, and the Hough transform is used to detect objects from these frames. Furthermore, the AACA (asynchronous adaptive collision avoidance) algorithm is able to detect, evaluate, and tackle changes in the environment at run-time, adapting as soon as a new or an existing object under observation changes its parameters and endangers the safety of the system, i.e., poses a potential collision risk.
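
A minimal sketch of this detection stage is given below. It assumes events arrive as (t, x, y, polarity) tuples, uses a fixed slice size in place of the adaptive event-count criterion, and uses OpenCV's probabilistic Hough transform as a stand-in for the exact implementation; the sensor resolution and all parameters are illustrative assumptions.

import cv2
import numpy as np

SENSOR_W, SENSOR_H = 346, 260   # assumed DVS resolution
SLICE_EVENT_COUNT = 5000        # illustrative slice size; the real algorithm adapts this

def detect_objects(events):
    """Accumulate one slice of events into a binary frame, then detect
    line segments (object edges) with the probabilistic Hough transform."""
    frame = np.zeros((SENSOR_H, SENSOR_W), dtype=np.uint8)
    for t, x, y, polarity in events[:SLICE_EVENT_COUNT]:
        frame[y, x] = 255                     # mark pixels where events fired
    lines = cv2.HoughLinesP(frame, rho=1, theta=np.pi / 180, threshold=30,
                            minLineLength=15, maxLineGap=5)
    return frame, lines                       # lines is None if nothing is found

In practice, the BA filter would run on the event stream before accumulation, and the detected segments would be grouped into objects and passed on to the triangulation and AACA stages.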
Due to the space limitation of the conference, the main emphasis of our work has been on
showcasing the qualitative results. In future work, we plan to perform rigorous real-time
testing under different environmental conditions to provide comprehensive qualitative and
quantitative results for such DVS-based systems.

11. REFERENCES
1. J. N. Yasin, S. A. S. Mohamed, M. Haghbayan, J. Heikkonen, H. Tenhunen and J. Plosila, "Unmanned aerial vehicles (UAVs): Collision avoidance systems and approaches", IEEE Access, vol. 8, pp. 105139-105155, 2020.
2. S. A. S. Mohamed, M. Haghbayan, T. Westerlund, J. Heikkonen, H. Tenhunen and J. Plosila, "A survey on odometry for autonomous navigation systems", IEEE Access, vol. 7, pp. 97466-97486, 2019.
3. C. D. Prakash, F. Akhbari and L. J. Karam, "Robust obstacle detection for advanced
driver assistance systems using distortions of inverse perspective mapping of a
monocular camera", Robotics and Autonomous Systems, vol. 114, pp. 172-186, 2019.
4. J. N. Yasin, M.-H. Haghbayan, J. Heikkonen, H. Tenhunen and J. Plosila, "Formation
maintenance and collision avoidance in a swarm of drones", Proceedings of the 2019
3rd International Symposium on Computer Science and Intelligent Control, 2019.
5. M. Liu and T. Delbruck, "Adaptive time-slice block-matching optical flow algorithm for dynamic vision sensors", September 2018.
6. N. Krombach, D. Droeschel and S. Behnke, "Combining feature-based and direct
methods for semi-dense real-time stereo visual odometry", Advances in Intelligent
Systems and Computing, vol. 531, pp. 855-868, July 2017.
7. J. Feng, C. Zhang, B. Sun and Y. Song, "A fusion algorithm of visual odometry based on feature-based method and direct method", 2017 Chinese Automation Congress (CAC), pp. 1854-1859.
