
Abstract

In the last decade, the open source community has broadened to make it possible for people to build
complex products at home. Self-balancing robots are progressively becoming popular for their unique
ability to move around on two wheels, and they are characterized by high maneuverability and
outstanding agility. This project undertakes the construction and implementation of a two-wheeled
robot that is not only capable of balancing itself on two wheels but also navigates its way around with
the help of a detecting device (an image processing system) attached to it. The robot can be considered
a merger of two units: the balancing unit and the image processing unit. The balancing unit performs
all functions that keep the robot upright, whereas the image processing unit performs the specific task
assigned to it. The balancing unit runs a PID control loop on a microcontroller responsible for the
robot's motor control, which improves the system's stability. This system can be used as a base model
to accomplish complex tasks which would otherwise be performed by humans, such as footprint
analysis in wildlife reserves and autonomous indoor navigation.

Table of Contents

Acknowledgements .......................................................................................................... I

Abstract ..........................................................................................................................II

Table of Contents .......................................................................................................... III

List of Figures................................................................................................................ V

List of Tables ................................................................................................................ VI

Chapter 1 ........................................................................................................................ 1
1 Introduction ......................................................................................................... 1
1.1 Background ......................................................................................................... 1
1.2 Motivation........................................................................................................... 1
1.3 Problem statement ............................................................................................... 1
1.4 Objectives ........................................................................................................... 2
1.5 Overall Block Diagram........................................................................................ 2

Chapter 2 ........................................................................................................................ 4
2 Literature Review .................................................................................................... 4

Chapter 3 ........................................................................................................................ 7
3 Hardware Design .................................................................................................... 7
3.1 Hardware Components ........................................................................................ 7
3.1.1 Microcontroller ......................................................................................... 7
3.1.2 Ultrasonic Sensor - HC-SR04 .................................................................... 8
3.1.3 DC Geared Motor .................................................................................... 11
3.1.4 Camera .................................................................................................... 12
3.1.5 Raspberry Pi 3 Model B ........................................................... 13
3.1.6 MPU 6050 ................................................................................ 15
3.2 Circuit Diagrams ................................................................................. 16
3.2.1 Motor Driver Circuit ........................................................................ 16
3.2.2 WiFi Interface Circuit (NodeMCU Web Connectivity) .................... 17
3.2.3 Ultrasonic Sensor Interface Circuit .................................................. 18
3.2.4 Calculations ..................................................................................... 19
Chapter 4 ...................................................................................................... 30
4 Angle Estimation and Balancing ................................................................ 30
4.1 Angle Estimation .................................................................................... 31
4.2 MPU-6050 (Gyro + Accelerometer) ....................................................... 32
4.3 How to Code the MPU-6050 .................................................................. 33
4.4 What is PID (Proportional, Integral and Derivative Control) .................. 39
4.5 How to Tune PID for a Self-Balancing Robot ........................................ 40
Chapter 5 ...................................................................................................................... 33
5 Software design ..................................................................................................... 33
5.1 Algorithms used for image processing ............................................................ 33

5.2 Software ............................................................................................................ 49


5.2.1 ARDUINO IDE ....................................................................................... 49
5.2.2 PyCharm ................................................................................. 49
5.2.3 Proteus 8 Pro ........................................................................................... 49
5.2.4 VNC ........................................................................................................ 49

Chapter 6 ...................................................................................................................... 50
6 Results and Discussion .......................................................................................... 50
6.1 Operation and Working .............................................................................. 50
6.2 Problems Faced .......................................................................................... 50

Chapter 7 ...................................................................................................................... 52
7 Conclusion ............................................................................................................ 52

References ................................................................................................................... 53

Appendix A: Code ....................................................................................................... 54

List of Figures

Fig 1.1………………………………………………………………………………..1
Fig 1.2………………………………………………………………………………..3
Fig 3.1……………………………………………………………............…………..8
Fig 3.2………………………………………………………………………………..9
Fig 3.3……………………………………………………………….………………10
Fig 3.4…………………………………………………………………….…………10
Fig 3.5……………………………………………………………….………………11
Fig 3.6………………………………………………………………….……………12
Fig 3.7…………………………………………………………………….…………13
Fig 3.8……………………………………………………………………….………14
Fig 3.9………………………………………………………….……………………16
Fig 3.10……………………………………………………………….……………..16
Fig 3.11……………………………………………………………….……………..17
Fig 4.1……………………………………………………………………….………31
Fig 4.2…………………………………………………………….…………………32
Fig 5.1………………………………………………………….....……………........33
Fig 5.2……………………………………………………………………….………34
Fig 5.3………………………………………………………….....……………........35
Fig 5.4………………………………………………………….....……………........37
Fig 5.5………………………………………………………….....……………........39
Fig 5.6………………………………………………………….....……………........40
Fig 5.7………………………………………………………….....……………........41
Fig 5.8………………………………………………………….....……………........44
Fig 5.9………………………………………………………….....……………........46
Fig 5.10………………………………………………………….....…………….......47

List of Tables

Table 3.1……………………………………………………………………………….…9

Table 3.2………………………………………………………………….………………12

Table 3.3………………………………………………………………….………………15

Table 5.1…………………………………………………………….…………….……...48

Chapter 1
Introduction

1.1 Background
This report describes the design and advantages of the project titled 'Smart Robot with Image
Processing'. The project involves building a motorized vehicle with an onboard camera which
is interfaced for image processing. The user displays symbols/arrows to navigate the robot to the
destination point while the robot balances itself throughout the path. The robot is shown in
Figure 1.1.

Figure 1.1 Smart Image Processing Robot

1.2 Motivation
In the last decade, the open source community has broadened to make it possible for people to build
complex products at home. Self-balancing robots are progressively becoming popular for their unique
ability to move around on two wheels. They are characterized by their high maneuverability and
outstanding agility. Navigation has been the most essential and yet the most difficult aspect of
building such a mobile robot. For instance, if specialists built a bomb-disposal robot with
extraordinary sensors and arms, yet it could not reach its target on its own, how could we consider
the robot valuable? This shows how important navigation is in building a mobile robot.

1.3 Problem Statement


Nowadays, robotic technology has become more important as much of industry strives to upgrade its
equipment. This progress has produced increasingly capable robots that deliver excellent results, and
a large number of industrial robots have been devised to help societies run their daily lives. The
self-balancing robot is an essential example of such a machine: it balances itself on two wheels and
provides a unique stability which differentiates it from other, ordinary robots. This ability allows it
to navigate various terrains, sharp corners and so on, and helps solve a number of challenges in
industry and society.
Beyond that, its potential for future development is tremendous. By adding a collision avoidance
system, a wide variety of new mobile robots with different capabilities can be designed.
The basic idea of self-balancing is to drive the wheels in the direction in which the robot tilts
while following the external directions given through the camera. This balancing is itself very complex
to achieve. Another problem is performing image processing while the robot is destabilized, as
the robot must be stable for the camera to capture the signs.
Once these issues have been settled, the primary target can be achieved.

1.4 Aims and Objectives


The aims and objectives of our project are as follows:
• To balance on two wheels.
• To be stable, with the shortest settling time and smallest overshoot, and to avoid any large
unwanted movements.
• To use a camera and Raspberry Pi for image processing.
• To be economical.

1.5 Overall Block Diagram


The overall block diagram is shown in Figure 1.2. The general design of the robot is a two-layered
rectangular body on two wheels. The wheels are placed parallel to each other. The layers are made
of thick glass-epoxy PCB. The bottom layer carries the wheels and the electronic circuitry,
which includes the microcontroller, angle sensor, power circuit, motors and motor driver. The topmost
layer contains a LiPo battery.


The balancing unit and the image processing unit each have a separate CPU. The balancing unit contains
a microcontroller, an Arduino UNO, which runs at 16 MHz. The image processing unit contains a single-board
computer, a Raspberry Pi 3, which runs at 1.2 GHz. We interface an ultrasonic distance sensor with the
Raspberry Pi, give a trigger pulse and receive an echo pulse. The ultrasonic distance sensor uses sonar
to detect obstacles and to measure the distance to them.
An inertial measurement unit (IMU), a microcontroller, a motor driver and two motors form the
balancing unit. The microcontroller continuously reads the data from the IMU and calculates the angle of
tilt of the robot with respect to the vertical. Based on this data, the microcontroller then sends
appropriate control signals to the motor driver to drive the motors.

Figure 1.2 Block diagram



With the help of the camera, video is recorded for later image processing. During image processing,
we extract frames from the video and apply different image processing techniques; change is also
detected from the recorded video. The other techniques used are edge detection, histograms,
distance measurement (in pixels), image resizing, non-linear filtering, rotation and flipping.

Chapter 2
Literature Review

Robotic technology has been a mainstay of advanced manufacturing for over half a century. As robots
and their peripheral equipment become more sophisticated, reliable and miniaturized, these systems are
increasingly being used for entertainment, military and surveillance purposes. A self-balancing robot
is described as a two-wheeled robot that is not only capable of balancing itself on two wheels but also
directs its way around with the help of a detecting device (an image processing system) attached to it,
for specific purposes. These adaptable robots have important roles in the rescue and military domains.

A rescue robot is a kind of surveillance robot designed with the ultimate objective of
protecting people. Typical situations that use rescue robots are mining accidents, urban disasters,
hostage situations and explosions. Military robots are autonomous robots or remote-controlled devices
intended for military applications. Such systems are currently being studied by various militaries. US
Mechatronics has produced a working automated sentry gun and is currently developing it further for
commercial and military use so that it can be operated remotely; another particularly well-known
system is the Multi-Mission Unmanned Ground Vehicle, previously known as the Multifunction
Utility/Logistics and Equipment vehicle.

Operating over varied terrain puts extra demands on the mobile robot's drive system, among other
subsystems. Modern control management and drive-train designs use advanced materials and especially
efficient transmissions to attain higher speed, accuracy and durability, so the robot can work in a
broad range of conditions.

There are various microcontrollers on the market, with capabilities ranging from basic
input/output to high-end processing; these different types of microcontroller are purpose-made for
general applications. In this work, we propose an architecture for a Raspberry Pi and Arduino UNO
based robot that controls the robot's navigation and stabilization.

Robotics has always played a vital part in the human psyche. The dream of creating a machine
that replicates human thought and physical features extends throughout the existence of mankind.

Growth in technology over the past fifty years has established the basics of making these dreams
come true. Robotics is now achievable through the miniaturization of the microprocessors which
perform the processing and computations. New forms of sensor devices are being developed all the
time, further providing machines with the ability to perceive the world around them in many ways.
Effective and efficient control system designs give the robot the ability to control itself and
operate autonomously.
Artificial intelligence (AI) is becoming a definite possibility with advancements in non-linear control
systems such as neural networks and fuzzy controllers. Improved synthetics and materials allow
robust and cosmetically aesthetic designs to be implemented for the construction and visual aspects
of the robot. Two-wheeled robots are one variation of robot that has become a standard topic of research
and exploration for young engineers and robotics enthusiasts. They offer the opportunity to develop
control systems that can maintain the stability of an otherwise unstable system. This type of system is
also known as an inverted pendulum. This project aims to bring this, and many of the previously
mentioned aspects of a robot, together into the building of a two-wheeled balancing robot with a
closed-loop controller. This field of research is essential, as robots offer an opportunity to improve
the quality of life of every member of the human race. This will be achieved through the reduction of
human exposure to hazardous conditions, dangerous environments and harmful chemicals, and the provision
of continual 24-hour assistance and monitoring for people with medical conditions, etc. Robots will be
employed in many applications within society, including carers, assistants and security.
Chapter 3
Hardware Design
3.1 Hardware Components
This section discusses all the components used in this project.

3.1.1 Microcontroller Arduino Uno


The microcontroller used in the project is Arduino UNO. Arduino Uno is a microcontroller
board based on 8-bit ATmega328P microcontroller. Along with ATmega328P, it consists other
components such as crystal oscillator, serial communication, voltage regulator, etc. to support
the microcontroller. Arduino Uno has 14 digital input/output pins (out of which 6 can be used
as PWM outputs), 6 analog input pins, a USB connection, A Power barrel jack, an ICSP header
and a reset button. The package used in this project is a forty pin PDIP. The major specifications
are as follows.

The 14 digital input/output pins can be used as input or output pins via the pinMode(),
digitalRead() and digitalWrite() functions in Arduino programming. Each pin operates at 5 V, can
source or sink a maximum of 40 mA, and has an internal pull-up resistor of 20–50 kΩ (disconnected
by default). Out of these 14 pins, some have specific functions, as listed below:

• Serial Pins 0 (Rx) and 1 (Tx): Rx and Tx pins are used to receive and transmit TTL
serial data. They are connected to the corresponding pins of the board's USB-to-TTL serial chip.
• External Interrupt Pins 2 and 3: These pins can be configured to trigger an interrupt
on a low value, a rising or falling edge, or a change in value.
• PWM Pins 3, 5, 6, 9, 10 and 11: These pins provide an 8-bit PWM output via the
analogWrite() function.
• SPI Pins 10 (SS), 11 (MOSI), 12 (MISO) and 13 (SCK): These pins are used for SPI
communication.
• Built-in LED on Pin 13: This pin is connected to a built-in LED; when pin 13 is HIGH
the LED is on, and when pin 13 is LOW it is off.

Along with the 14 digital pins, there are 6 analog input pins, each of which provides 10 bits of
resolution, i.e. 1024 different values. They measure from 0 to 5 volts, but this range can be
changed by using the AREF pin with the analogReference() function.

• Analog pin 4 (SDA) and pin 5 (SCL) are also used for TWI (I2C) communication via the Wire
library.

The Arduino Uno has a couple of other pins, as explained below:

• AREF: Used to provide the reference voltage for analog inputs with the analogReference()
function.
• Reset Pin: Pulling this pin LOW resets the microcontroller.

Figure 3.1 Pin layout

3.1.2 Ultrasonic Sensor – HC-SR04


Product features:
The ultrasonic ranging module HC-SR04 provides 2 cm–400 cm non-contact measurement, and the
ranging accuracy can reach 3 mm. The module includes an ultrasonic transmitter, a receiver and a
control circuit. The basic principle of operation:
a) Drive the trigger (TRIG) pin high for at least 10 µs.
b) The module automatically sends eight 40 kHz pulses and detects whether an echo signal
returns.
c) If an echo returns, the echo (ECHO) pin goes high for a duration equal to the time from
sending the ultrasonic burst to receiving its reflection. Test distance = (high-level
time × velocity of sound (340 m/s)) / 2.

Figure 3.2 Sensor timing diagram

SENSOR PARAMETERS:

Table 3.1 Sensor Parameters

SENSOR LOS:

Figure 3.3 Sensor line of sight

Figure 3.4 Ultrasonic Sensor - HC-SR04

3.1.3 DC Geared Motors
A geared DC motor is an extension of the plain DC motor, with a gear assembly attached to the
motor shaft. The speed of a motor is measured in rotations of the shaft per minute, termed RPM.
The gear assembly increases the torque while decreasing the speed: using the correct combination
of gears, the speed can be reduced to any desired figure. This concept, where gears reduce the
speed of the output shaft while increasing its torque, is known as gear reduction. This section
covers the details of the gear head and, consequently, the working of the geared DC motor.
External Structure:
At first sight, the external structure of a geared DC motor looks like a straightforward
extension of the basic DC motor.

Figure 3.5 DC geared motor

Description of motor:
• Gear shaft: 4 mm
• Rated Voltage: 12 V
• Rated Current: 1.6 A
• No-load Current: 260 mA
• Rated Torque: 7 kg·cm
• Rated Speed: 270 rpm
• No-load Speed: 320 rpm

Motor Controller:
The motors are controlled using the L298N motor driver. Different input combinations are sent to
the motor driver to control the motors; the motor control logic is explained in Table 3.2 below.

Motor 1 | Motor 2 | Result
Forward | Forward | Forward
Forward | Stop    | Right
Stop    | Forward | Left
Stop    | Stop    | Stop

Table 3.2 Motor control logic

3.1.4 Camera
An 8 MP Raspberry Pi compatible camera built around the high-quality Sony IMX219 image sensor,
a CMOS image sensor. The frame rate for this camera is 30 frames/s. The sensor is capable of
3280 × 2464 pixel static images and 640 × 480 pixel video. It attaches to the Pi via the
dedicated standard CSI interface. It is a supplement to the official Raspberry Pi camera,
intended to fulfil demands for a different lens mount, field of view (FOV) and depth of field
(DOF), as well as a motorized IR-cut filter for both daylight and night vision.

Figure 3.6 Raspberry Pi camera module

3.1.5 Raspberry Pi 3 model B (1GB)
The Raspberry Pi 3 Model B is the third-generation Raspberry Pi. This powerful credit-card-sized
single-board computer can be used for many applications. It supersedes the original Raspberry Pi
Model B+ and the Raspberry Pi 2 Model B. The Raspberry Pi 3 Model B has a more powerful
processor, roughly 10× faster than the original Raspberry Pi, and it also includes wireless LAN
and Bluetooth connectivity, making it an ideal solution for powerful connected designs.

Here are the complete specs for the Pi 3:

• SoC: Broadcom BCM2837 (roughly 50% faster than the Pi 2)
• CPU: 1.2 GHz quad-core ARM Cortex-A53 (ARMv8 instruction set)
• GPU: Broadcom VideoCore IV @ 400 MHz
• Memory: 1 GB LPDDR2-900 SDRAM
• USB ports: 4
• Network: 10/100 Mbps Ethernet, 802.11n wireless LAN, Bluetooth 4.0

Figure 3.7 Raspberry Pi 3 Model B

• 40 GPIO pins
• Two pin numbering systems: BCM and BOARD
• All pins read or write at a 3.3 V logic level

Figure 3.8 Raspberry Pi GPIO Pinout Diagram
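
The distinction between the two numbering systems matters in practice. A minimal sketch using the
RPi.GPIO library (the pin choices here are illustrative only):

import RPi.GPIO as GPIO

# BCM mode: address pins by the Broadcom SoC channel number (e.g. GPIO17)
GPIO.setmode(GPIO.BCM)
GPIO.setup(17, GPIO.OUT)
GPIO.output(17, GPIO.HIGH)
GPIO.cleanup()

# BOARD mode: address pins by their physical header position (pin 11 is GPIO17)
GPIO.setmode(GPIO.BOARD)
GPIO.setup(11, GPIO.OUT)
GPIO.output(11, GPIO.HIGH)
GPIO.cleanup()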

3.1.6 MPU 6050
We need two sensing functions: sensing the tilt angle using a gyroscope and sensing motion using
an accelerometer. We selected the MPU 6050 since it can sense both, and it provides very fast
angle sensing, which is a key requirement for our project. Values taken from the MPU 6050
datasheet, listed in Table 3.3, describe its features.

Table 3.3 MPU 6050 Parameters

ESP8266:

ESP8266 is a chip with which manufacturers are making wirelessly networkable microcontroller
modules. More specifically, ESP8266 is a system-on-a-chip (SoC) with capabilities
for 2.4 GHz Wi-Fi (802.11 b/g/n, supporting WPA/WPA2), general-purpose input/output (16
GPIO), Inter-Integrated Circuit (I²C), analog-to-digital conversion (10-bit ADC), Serial
Peripheral Interface (SPI), I²S interfaces with DMA (sharing pins with GPIO), UART (on
dedicated pins, plus a transmit-only UART can be enabled on GPIO2), and pulse-width
modulation (PWM). It employs a 32-bit RISC CPU based on the Tensilica Xtensa L106
running at 80 MHz (or overclocked to 160 MHz). It has a 64 KB boot ROM, 64 KB instruction
RAM and 96 KB data RAM. External flash memory can be accessed through SPI.

Various vendors have consequently created a multitude of modules containing the ESP8266
chip at their cores. Some of these modules have specific identifiers, including monikers such
as "Wi07c" and "ESP-01" through "ESP-13"; while other modules might be ill-labeled and
merely referred to by a general description — e.g., "ESP8266 Wireless Transceiver."
ESP8266-based modules have demonstrated themselves as a capable, low-cost,
networkable foundation for facilitating end-point IoT developments. The AI-Thinker modules
are succinctly labeled ESP-01 through ESP-13. NodeMCU boards extend upon the AI-
Thinker modules.

We are using the ESP8266-12E module in our project, as a microcontroller with Wi-Fi capability.
The ESP is programmed through the Arduino IDE by the following process:
• First, install the ESP8266 board package in the Arduino IDE.
• Select the board in use, and the computer/laptop port to which the USB-to-serial
converter is connected.
• The ESP has two modes: programming mode and operating mode.
• To upload code, switch to programming mode.
• After uploading, reset the module again to use it in operating mode.

3.2 Circuit Diagrams

Figure 3.9 Circuit Diagram/ Complete Schematics

3.2.1 Motor Driver circuit

Figure 3.10 Motor Driver schematics

Calculation for Motor Selection:

• Maximum weight of the robot with all components in place = 3 kg
• Radius of the robot tyre = 4 cm = 4/100 m = 0.04 m
• Number of tyres used = 2
• Weight per tyre = total weight / 2 = 3 kg / 2 = 1.5 kg
• Required torque per motor = weight per tyre × tyre radius = 1.5 kg × 4 cm = 6 kg·cm

The selected motor's rated torque of 7 kg·cm therefore provides a margin above this requirement.
3.2.2 WiFi Interface Circuit (NodeMCU Web Connectivity):-

The basic working of this interface is summarized in the flow and block diagrams below, in which
the Arduino drives the motor driver based on the data received over WiFi.
Working Explanation:

Working Explanation of the Server on the Raspberry Pi:-

We must establish a connection between the image processing part, which is the Raspberry Pi, and
the Arduino installed on the robot. We establish this connection with a WiFi module on the
Arduino side and a server installed on the Raspberry Pi.

The process starts with the installation of web server packages on the Raspberry Pi. The
pre-installations we performed for the web server on the Raspberry Pi are as follows:
• sudo apt-get update
• sudo bash
• apt-get install apache2 apache2-doc apache2-utils
• apt-get install libapache2-mod-php5 php5 php-pear php5-xcache
• apt-get install php5-mysql
• apt-get install mysql-server mysql-client

After these installations, we test whether the server is working by typing the Raspberry Pi's
IP address into a web browser. If the default server page appears, the server is working; if
not, the installations must be repeated.

Once it is working, we can build our own web page, which is done in the PHP language. Our
purpose is simply to read some pins on the Raspberry Pi and continuously refresh the page so
that it stays current. The command we use to read a pin is:

$pinstatus=shell_exec("gpio -g read 17");

This command reads the status of pin 17. The next step after reading the pin status is to
display it on the page, which is done with the command:
echo $pinstatus;

The job of making the server is almost done; the last thing we have to do is continuously
refresh the web page, which can be done with:
header("Refresh: 0;");

The value 0 indicates that we are refreshing the page every zero seconds, which means
continuously.
Below is the index file we edited, located in the /var/www directory on the Raspberry Pi.
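
The original listing was reproduced as an image; the following is a minimal reconstruction
assembled from the commands quoted above (the pin choice is illustrative):

<?php
// index.php - publish a Raspberry Pi GPIO pin state for the robot to poll
header("Refresh: 0;");                        // refresh the page continuously
$pinstatus = shell_exec("gpio -g read 17");   // read BCM pin 17 via the gpio utility
echo $pinstatus;                              // the pin state becomes the page body
?>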

This code is written in the PHP language and saved in a file named index.php. The file can be
accessed using the Raspberry Pi's IP address in a browser, and these commands serve the pin
status of the Raspberry Pi GPIO.
On the robot end, the ESP8266-12E is used to read the status of the Raspberry Pi GPIO pins from
the web server, acting as a client. After reading the pin status, the ESP processes the data and
drives its own GPIO pins to control the direction of the robot.

Working Explanation of the ESP8266:-

The client code for the NodeMCU can be found among the Arduino IDE examples; it shows how to
connect to a web server. The parts of that code which have to be edited are explained in detail
below:

WiFiMulti.addAP("Mujtaba", "12345678");

This line holds the information for the access point through which we reach the server: we
provide the SSID (the name of our router) and the router's password.

After this we wait for the WiFi connection to be established, then begin our connection with:
http.begin("http://192.168.0.2/");

This is the IP address of our web server, which we set earlier on the Raspberry Pi. After
successfully communicating with the Raspberry Pi, we fetch the data from it.

httpCode > 0

This condition is satisfied only when the HTTP request has been sent and the response header has
been handled.

if(httpCode == HTTP_CODE_OK) {
    String payload = http.getString();
}

After a successful connection we fetch the data from the server with the http.getString() call
and save it in a string variable for later use and conditioning. We then convert the string
received from the server into data that is useful to us, apply conditions on that string, and
turn our GPIO pins on and off accordingly.

One of the problems in driving the GPIO pins is that the physical pin numbering differs from the
GPIO (logical) numbering, so we faced some difficulty locating the pins when referring to the
pin diagram.

3.2.3 Ultrasonic Sensors Interface Circuit with Raspberry Pi

Figure 3.11 Ultrasonic interfaced circuit

We interface an ultrasonic distance sensor with the Raspberry Pi, give a trigger pulse and
receive an echo pulse. The ultrasonic distance sensor uses sonar to detect obstacles and to
measure the distance to them.
To determine the distance, the sensor measures the elapsed time between sending and receiving
the waves. The speed of sound in air is about 343 m/s.
We generate a trigger of at least 10 µs from the Raspberry Pi and interface a voltage divider to
limit the echo signal at the Pi's input to ≤ 3.3 V.

3.2.4 Calculations for the Ultrasonic Sensor

We divide the echo voltage so that there is 3.3 V or less at the output, where Vin is the pulse
received on the echo line:

Vout = Vin × R2 / (R1 + R2)

• By choosing R1, we can calculate R2. With R1 = 1 kΩ and R2 = 2 kΩ, a 5 V echo pulse gives
Vout = 5 × 2/(1 + 2) ≈ 3.3 V.
• Distance from object = 343 m/s × elapsed time / 2
• Converting into centimetres:

Distance (cm) = 34300 × time / 2 = 17150 × time (s)

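
The report does not list the measurement code itself; a minimal sketch of this trigger/echo
sequence in Python with the RPi.GPIO library (pin numbers are illustrative) might look like:

import time
import RPi.GPIO as GPIO

TRIG, ECHO = 23, 24           # BCM pin numbers (illustrative)

GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG, GPIO.OUT)
GPIO.setup(ECHO, GPIO.IN)     # echo arrives through the voltage divider

GPIO.output(TRIG, True)       # 10 us trigger pulse
time.sleep(0.00001)
GPIO.output(TRIG, False)

start = end = time.time()
while GPIO.input(ECHO) == 0:  # wait for the echo pulse to start
    start = time.time()
while GPIO.input(ECHO) == 1:  # time how long the echo pin stays high
    end = time.time()

distance_cm = 17150 * (end - start)   # distance = 34300 cm/s * time / 2
print(round(distance_cm, 1), "cm")
GPIO.cleanup()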

Chapter 4
ANGLE ESTIMATION AND BALANCING

ANGLE ESTIMATION:

To find the direction and angle of tilt, we use an inertial measurement unit (IMU). It comprises
an accelerometer and a gyroscope, which together give the angular position and the angular
velocity.
The accelerometer gives an accurate reading averaged over a sufficient interval of time, but it
is highly susceptible to noise caused by sudden jerking movements of the robot: since the
accelerometer measures linear acceleration, a sudden jerk throws off the sensor accuracy. The
gyroscope measures angular velocity, which is then integrated to find the angle of tilt. Over a
small interval of time the gyroscope value is very accurate, but since the gyroscope experiences
drift and integration compounds the error, after some time the reading becomes unreliable. Thus,
we require some way to combine these two values. This is done with a complementary filter.
The complementary filter is essentially a high-pass filter and a low-pass filter combined, where
the high-pass filter acts on the gyroscope and the low-pass filter on the accelerometer. It uses
the gyroscope for short-term estimation and the accelerometer as the absolute reference.
This simple filter is easy to implement, can be tuned experimentally, and demands very little
processing power.

How does an accelerometer work?

Fig 4.1 Working of Accelerometer

An accelerometer works on the principle of the piezoelectric effect. Imagine a cuboidal box with
a small ball inside it, as in the picture above, whose walls are made of piezoelectric crystals.
Whenever you tilt the box, the ball is forced by gravity to move in the direction of the
inclination, and the wall the ball collides with generates a tiny piezoelectric current. A
cuboid has three pairs of opposite walls, and each pair corresponds to an axis in 3D space: the
X, Y and Z axes. From the currents produced by the piezoelectric walls, we can determine the
direction of the inclination and its magnitude.

How does a gyroscope work?

Figure 4.2 Gyroscope

Gyroscopes work on the principle of Coriolis acceleration. Imagine a fork-like structure in
constant back-and-forth motion, held in place by piezoelectric crystals. Whenever you tilt this
arrangement, the crystals experience a force in the direction of the inclination, caused by the
inertia of the moving fork. The crystals then produce a current, as a consequence of the
piezoelectric effect, and this current is amplified. The values are then refined by the host
microcontroller.

HOW THE PROGRAMMING WORKS:-


Introduction to MPU 6050
The MPU 6050 is a 6 DOF (degrees of freedom) sensor, which means that it gives six values as
output: three values from the accelerometer and three from the gyroscope.
3-Axis Accelerometer:
The accelerometer measures acceleration. It can be used to sense linear motion and vibration,
and to infer orientation from the force of gravity. The accelerometer has a programmable
full-scale range of ±2g, ±4g, ±8g and ±16g.
3-Axis Gyroscope:
The gyroscope measures angular rotation, which can be used to infer orientation. It has a
programmable full-scale range of ±250, ±500, ±1000 and ±2000 °/sec (dps).
DMP (Digital Motion Processor):
The MPU-6050 IMU contains a DMP, which fuses the accelerometer and gyroscope data together to
minimize the effects of the errors inherent in each sensor. The DMP computes the results, can
convert them to Euler angles, and can perform other computations with the data as well.
The MPU-6050 operates from a VDD power supply voltage range of 2.375 V–3.46 V. Additionally,
the MPU-6050 provides a VLOGIC reference pin in addition to VDD.

I2C Communication Introduction:

The Inter-Integrated Circuit (I2C) protocol is intended to allow multiple "slave" digital
integrated circuits to communicate with one or more "master" chips. In our case the master
device is the Arduino UNO and the slave device is the MPU 6050, which supports the I2C serial
interface only. I2C requires two lines for communication:
Serial Data Line (SDA)
Serial Clock Line (SCL)
The SCL line carries the clock signal, generated by the master device, which synchronizes the
data transfer between the devices on the I2C bus; the SDA line carries the data.
I2C can support a multi-master system, allowing more than one master to communicate with all
devices on the bus. Basic I2C communication uses transfers of 8 bits, or bytes. Each I2C slave
device has a 7-bit address: bits 1 to 7 carry the address, while bit 0 signals reading or
writing. If bit 0 is set to 1, the master device will read from the slave device; if bit 0 is
set to 0, the master device will write to the slave device.
In the idle state both the SCL and SDA lines are high. Communication is initiated by the master
device: it generates the start condition, followed by the address of the slave device with the
read/write bit (0 to write, 1 to read), and at the end the master device generates the stop
condition.

Power Management, Accelerometer and Gyroscope Registers:

The relevant register maps (the power management register and the accelerometer and gyroscope
output registers) are reproduced from the MPU-6050 register map document.
Accelerometer and Gyroscope Data Conversion from Raw to Real Values (Euler Angles):
Accelerometer data:

Acceleration_angle = atan((Acc_rawY/4096.0) / sqrt(pow((Acc_rawX/4096.0), 2) + pow((Acc_rawZ/4096.0), 2))) * rad_to_deg;

Here 4096.0 is the accelerometer sensitivity in LSB/g at the ±8g full-scale range.

Gyroscope data:

Gyro_angle = Gyr_rawX / 32.8;

Here 32.8 is the gyroscope sensitivity in LSB/(°/s) at the ±1000 dps full-scale range.

Application of the Complementary Filter:

Sensor Fusion:

Total_angle = 0.98 * (Total_angle + Gyro_angle * elapsedTime) + 0.02 * Acceleration_angle;

To remove the high-frequency distortion in the accelerometer reading, it is passed through a
low-pass filter. The angular velocity obtained from the gyroscope is integrated to obtain the
angle and then passed through a high-pass filter to remove the low-frequency distortions. The
results of the low-pass and high-pass filters are then summed to find the estimated angle.
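
On the robot this fusion runs inside the Arduino loop; the following Python sketch (illustrative
only, with an assumed fixed sample period) shows the same update rule:

# Complementary filter: high-pass the integrated gyro, low-pass the accelerometer
ALPHA = 0.98      # filter coefficient from the line above
DT = 0.004        # assumed sample period in seconds

total_angle = 0.0

def fuse(gyro_rate_dps, accel_angle_deg):
    """Combine gyro rate (deg/s) and accelerometer angle (deg) into a tilt estimate."""
    global total_angle
    total_angle = ALPHA * (total_angle + gyro_rate_dps * DT) + (1 - ALPHA) * accel_angle_deg
    return total_angle

# example: stationary robot tilted 5 degrees, no rotation
for _ in range(1000):
    angle = fuse(0.0, 5.0)
print(round(angle, 2))   # converges towards 5.0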

Chapter 5
Software Design

5.1 Algorithm
The algorithms used in our project are explained in two parts. The first part explains the ball
following test; the second part explains the sign detection algorithm. Together they cover all
the image processing algorithms.

BALL FOLLOWING

Fig 5.1 Flow Chart

BALL FOLLOWING TEST:


1. Capturing the video:-

To capture video we use the cv2.VideoCapture() function; its argument is the index of the camera
device we are using (0, -1 or 1). After capturing the video we need to separate the frames so
that we can apply our techniques to each frame, and at the end we release the camera with the
release() method.

import cv2

cap = cv2.VideoCapture(0)

while(True):
    # Capture frame-by-frame
    ret, frame = cap.read()

Fig 5.2 App View in making

2. Resizing the video:-

We resize the video so that processing becomes faster. For that we change the width and height
of the capture with the cap.set(3, 320) and cap.set(4, 240) calls; the default resolution is
640×480 and we are changing it to 320×240.

cap.set(3,320)
cap.set(4,240)

3. Converting to HSV:-

Fig 5.3 Conversion to HSV
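
The report does not reproduce the conversion code; a minimal sketch consistent with the later
thresholding step (variable names assumed from that step) is:

# Convert the BGR frame to HSV and split into the components thresholded below
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
hue, sat, val = cv2.split(hsv)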

4. Creating a trackbar:-

We create trackbars so that we can tune the threshold values for the hue, saturation and value
components; for this we use the cv2.createTrackbar() function. Its first argument is the
trackbar name, the second argument is the window name, the third is the default value, the
fourth is the maximum value, and the fifth is the callback function which runs every time the
trackbar value changes.

cv2.createTrackbar('hmin', 'HueComp',12,179,nothing)

5. Getting the trackbar position:-

cv2.getTrackbarPos(trackbarname, winname)

• trackbarname – name of the trackbar
• winname – name of the window that is the parent of the trackbar

hmn = cv2.getTrackbarPos('hmin','HueComp')
hmx = cv2.getTrackbarPos('hmax','HueComp')
hmx = cv2.getTrackbarPos('hmax','HueComp')

6. Applying thresholding:-

Thresholding means that if a pixel value is greater than some cutoff value it is assigned one
value (say, white), and otherwise it is assigned another value (say, black). OpenCV provides the
cv2.threshold() function for this. Its first argument is the source image, which must be
grayscale; the second argument is the threshold value used to classify the pixel values; the
third argument is the maximum value, assigned to pixels exceeding the threshold. OpenCV offers
several thresholding options:

• cv2.THRESH_BINARY
• cv2.THRESH_BINARY_INV
• cv2.THRESH_TRUNC
• cv2.THRESH_TOZERO
• cv2.THRESH_TOZERO_INV

Here, however, we threshold by a simpler technique: we apply cv2.inRange() to each of the hue,
saturation and value components and combine the results with a bitwise AND.

# Apply thresholding

hthresh = cv2.inRange(np.array(hue),np.array(hmn),np.array(hmx))

tracking = cv2.bitwise_and(hthresh,cv2.bitwise_and(sthresh,vthresh))

Fig 5.4 Applying thresholding

7. Applying morphological transformations:-

Morphological transformations are operations related to image shape, most often applied to
binary images. They need two inputs: our original image, and a structuring element (or kernel
matrix) which decides the nature of the operation applied to the binary image. Erosion and
dilation are the two basic morphological transformations; opening and closing are derived from
them.

EROSION:-

Erosion erodes away the boundaries of the foreground object, i.e. the region that appears as the
object; everything else is considered background. The foreground should be white so that it is
treated as the object. The kernel slides over the binary image, and a pixel in the result is
kept as 1 only if all the pixels under the kernel are 1; otherwise it is set to zero.

erosion = cv2.erode(img,kernel,iterations = 1)

DILATION :-

Dilation increases the white region in the image, i.e. the size of the foreground object grows.

dilation = cv2.dilate(img,kernel,iterations = 1)

OPENING:-

Opening is erosion followed by dilation and is used for noise removal. Erosion removes white
noise, but it also shrinks our object; so we dilate it afterwards. The noise removed by the
erosion is gone, and the disturbed object size is corrected by the dilation, as in the snippet
below.
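
The report shows no snippet for opening; with OpenCV it would presumably be:

opening = cv2.morphologyEx(img, cv2.MORPH_OPEN, kernel)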

CLOSING:-

Closing is dilation followed by erosion. It is useful for removing noise that occurs inside the
foreground object: in closing, small black points on the object are removed.

closing = cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel)

8. Gaussian blur:-

Gaussian blurring is applied with a Gaussian kernel, using the cv2.GaussianBlur() function. We
must specify the width and height of the kernel, which should be positive and odd, and the
standard deviations in the X and Y directions, sigmaX and sigmaY respectively. Gaussian
filtering is highly effective in removing Gaussian noise from an image.

blur = cv2.GaussianBlur(img,(5,5),0)

Fig 5.5 Applying Gaussian Blur

9. Drawing boundaries:-

To draw a circle, we need its center coordinates and radius, and we use cv2.circle(). Its first
argument is the image on which we are going to draw the circle, the second argument is the
center coordinates, the third argument is the radius of the circle, the fourth argument gives
the color of the circle drawn, and the last argument gives the thickness (-1 fills the circle).

img = cv2.circle(img,(447,63), 63, (0,0,255), -1)

Fig 5.6 drawing boundary
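
The report does not show how the ball's position is extracted from the thresholded mask; one
common approach consistent with the steps above (the size threshold and names are illustrative)
is:

# Locate the largest blob in the binary mask and enclose it in a circle
_img, cnts, _hier = cv2.findContours(tracking, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if cnts:
    c = max(cnts, key=cv2.contourArea)          # biggest contour = the ball
    (x, y), radius = cv2.minEnclosingCircle(c)
    if radius > 10:                             # ignore tiny specks
        cv2.circle(frame, (int(x), int(y)), int(radius), (0, 0, 255), 2)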

SIGN DETECTION

Fig 5.7 Sign Detection Flow chart

1. Capturing and resizing the video:-
We use the cv2.VideoCapture() function to capture video; its argument for taking video from the
Raspberry Pi is 0, because the Pi camera is at port 0. After capturing, we separate the frames
in order to proceed further.
cap = cv2.VideoCapture(0)
while(True):
    # Capture frame-by-frame
    ret, frame = cap.read()

We resize the video so that our processing becomes faster and matches the resolution our
Raspberry Pi can handle. For that we change the width and height of the capture: property 3
accesses the width and property 4 accesses the height, and we set the resolution to our own
choice.
cap.set(3,320)
cap.set(4,240)

2. Color conversion and smoothing:-

The next step is to find the edges in our images. Finding edges requires that we first convert
the frame to grayscale, then apply a smoothing technique, and then apply Canny edge detection.
In our code we use cv2.cvtColor() for the conversion to gray and then apply a bilateral filter
with the cv2.bilateralFilter() built-in function; it is highly effective in noise removal while
keeping edges sharp.
A bilateral filter combines a Gaussian filter in space with a Gaussian function of pixel
intensity difference. A plain Gaussian filter takes the neighbourhood around a pixel and finds
its Gaussian weighted average; it is a function of space alone, so all nearby pixels are
considered while filtering. The Gaussian function of intensity difference ensures that only
pixels with intensity similar to the central pixel are included in the blurring, so edges are
preserved, since pixels at edges show large intensity variation.

gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

gray = cv2.bilateralFilter(gray, 11, 17, 17)

3. Finding edges:-
We use Canny edge detection to find the edges in our image, via the cv2.Canny() function. It
takes as input the smoothed image from the previous step, and the next two arguments are the
minimum and maximum threshold values for the intensity gradient: they control how strong a
change in intensity must be to count as an edge. A further optional argument is aperture_size,
the size of the Sobel kernel used to find the image gradients.

edged = cv2.Canny(gray, 30, 200)

4. Finding contours:-

Now we have to find the sign area, i.e. where our sign is located in the whole image. In order
to find the sign portion in our edged image, we need to find the contours in the image. A
contour is the outline of an object.
To find contours in an image, we use the cv2.findContours function, which requires three
parameters. The first is the image; we pass in our edged image. The second parameter,
cv2.RETR_TREE, tells OpenCV to compute the hierarchy (relationship) between contours; we could
also have used the cv2.RETR_LIST option. Finally, we tell OpenCV to compress the contours to
save space using cv2.CHAIN_APPROX_SIMPLE.
In return, the cv2.findContours function gives us a list of contours.

5. Sorting of contours:-
We now have contours, but we do not know whether a contour surrounds just the sign portion or
some other part of the image. The first thing to do is reduce the number of contours we need to
process. We know the area of our sign is quite large with respect to the rest of the regions in
the image, so we sort our contours from largest to smallest by calculating the area of each
contour with cv2.contourArea, and keep only the 15 largest.

image, cnts,hirarchy = cv2.findContours(edged, cv2.RETR_TREE,
cv2.CHAIN_APPROX_SIMPLE)

cnts = sorted(cnts, key=cv2.contourArea, reverse=True)[:15]

6. Sorting of rectangular contours:-

We now have the 15 largest contours in the image, and we approximate each contour using
cv2.arcLength and cv2.approxPolyDP. These methods approximate the polygonal curve of a contour.
To approximate a contour, we must supply our level of approximation precision; in this case, we
use 2% of the perimeter of the contour. The precision is an important value: a contour whose
approximation deviates from the original curve by no more than this tolerance is simplified to
a few vertices, and since our sign is a rectangle with four vertices, we keep the contour whose
approximation has four points and discard the rest.
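
The report omits the approximation loop itself; a sketch consistent with the description above
(variable names are illustrative) is:

# Keep the first large contour that approximates to a 4-vertex polygon
screenCnt = None
for c in cnts:
    peri = cv2.arcLength(c, True)                     # contour perimeter
    approx = cv2.approxPolyDP(c, 0.02 * peri, True)   # 2% precision
    if len(approx) == 4:                              # our sign is rectangular
        screenCnt = approx
        break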

7. Warp perspective method:-

So far we have four points, but not in any arranged order: we do not know which corners of the
rectangle are topmost and which are bottommost. Knowing this ordering is important because, if
the sign in the camera image is not parallel to the camera (i.e. it is at some angle to it),
applying this method lets us extract the rectangle as a flat, 2D image.

Fig 5.8 Applying Warp perspective

8. The order_points function:-

We defined an order_points(pts) function for this purpose. It takes pts, a list of four points
specifying the (x, y) coordinates of each corner of the rectangle, which we calculated earlier
from the contour. The actual input ordering can vary, as long as the function puts the points in
a consistent order; we separate them as top-left, top-right, bottom-right and bottom-left.

Memory for the four ordered points is allocated with:

rect = np.zeros((4, 2), dtype="float32")

The top-left point is the one with the smallest x + y sum, and the bottom-right point is the one
with the largest x + y sum. We then find the top-right and bottom-left points by taking the
difference between the coordinates of each point using the np.diff function: the coordinates
associated with the smallest difference give the top-right point, whereas the coordinates with
the largest difference give the bottom-left point.
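
Assembled from the description above, the function presumably follows the widely used four-point
ordering recipe:

import numpy as np

def order_points(pts):
    # returns points ordered: top-left, top-right, bottom-right, bottom-left
    rect = np.zeros((4, 2), dtype="float32")
    s = pts.sum(axis=1)
    rect[0] = pts[np.argmin(s)]       # top-left: smallest x + y
    rect[2] = pts[np.argmax(s)]       # bottom-right: largest x + y
    d = np.diff(pts, axis=1)          # coordinate difference for each point
    rect[1] = pts[np.argmin(d)]       # top-right: smallest difference
    rect[3] = pts[np.argmax(d)]       # bottom-left: largest difference
    return rect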

9. The four_point_transform function:-

We now have four points in order, i.e. we know which point is top-left, top-right, bottom-left
and bottom-right. In this function we calculate the maximum width and maximum height of the
rectangle using the distance formula: we compute the top and bottom edge widths and take the
larger with the max() function, and similarly compute the left and right edge heights and take
the larger of the two with max().

Once we have the maximum height and maximum width, we define the dimensions of our new image.
The first entry in the destination list is (0, 0), indicating the top-left corner; the second
entry is (maxWidth - 1, 0), which corresponds to the top-right corner; then we have
(maxWidth - 1, maxHeight - 1), the bottom-right corner; finally, (0, maxHeight - 1) is the
bottom-left corner. This is how we define the dimensions of the new image.

To extract the image inside the rectangular shape we use the cv2.getPerspectiveTransform
function. This function requires two arguments: rect, the list of 4 points in the original
image, and dst, our list of transformed points. cv2.getPerspectiveTransform returns M, the
actual transformation matrix.
We then apply the cv2.warpPerspective function, passing in the image, our transformation matrix
M, and the width and height of the output image. The output of cv2.warpPerspective is our
warped image.

Fig 5.9 Four Point Transform
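
Put together from the two steps above, a sketch of the transform (again following the standard
recipe the description matches) is:

def four_point_transform(image, pts):
    rect = order_points(pts)
    (tl, tr, br, bl) = rect
    # new width: the larger of the bottom and top edge lengths
    widthA = np.sqrt((br[0] - bl[0]) ** 2 + (br[1] - bl[1]) ** 2)
    widthB = np.sqrt((tr[0] - tl[0]) ** 2 + (tr[1] - tl[1]) ** 2)
    maxWidth = max(int(widthA), int(widthB))
    # new height: the larger of the right and left edge lengths
    heightA = np.sqrt((tr[0] - br[0]) ** 2 + (tr[1] - br[1]) ** 2)
    heightB = np.sqrt((tl[0] - bl[0]) ** 2 + (tl[1] - bl[1]) ** 2)
    maxHeight = max(int(heightA), int(heightB))
    # destination corners: top-left, top-right, bottom-right, bottom-left
    dst = np.array([[0, 0],
                    [maxWidth - 1, 0],
                    [maxWidth - 1, maxHeight - 1],
                    [0, maxHeight - 1]], dtype="float32")
    M = cv2.getPerspectiveTransform(rect, dst)
    return cv2.warpPerspective(image, M, (maxWidth, maxHeight))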

10. Thresholding of the warped image:-

The warped image is the region of interest taken from the original image, so it may contain
noise and other factors we do not want. We therefore apply a threshold to convert the warped
image to pure black and white, i.e. every pixel becomes either zero or one. We set the threshold
at 80, which on a 0–255 scale is closer to 0, because we are more concerned with the black
portion, our region of interest. After thresholding we have a clean image, which we then resize
so that its size matches the size of our video frames.
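
A one-line sketch of this step, assuming the warped image is already grayscale and the earlier
320×240 frame size:

ret, warped_bw = cv2.threshold(warped, 80, 255, cv2.THRESH_BINARY)
warped_bw = cv2.resize(warped_bw, (320, 240))   # match the video frame size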

11. Image comparison:-

We now have our final converted video frames, on which we can apply the comparison operation.
For this purpose we first read our reference images using the cv2.imread() function, then resize
them to the size of our video frames. We then use the bitwise XOR function to compare the
reference images with the frames taken from the video: when two images are XORed, the portions
that are identical give a black result (0), and the portions that do not match give white (1).

Fig 5.10 Image Comparison

Table 5.1 Truth table of the XOR operation

12. Counting the non-zero bits:-

After the bitwise XOR we have a result image whose pixel values are zero or one, where the
unmatched portion of the image is one. By counting the ones in this image with cv2.countNonZero
we can decide whether the frame matches the reference or not.
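
A sketch of these two steps together (the filename and the match threshold are illustrative; in
practice the threshold is chosen experimentally):

MATCH_THRESHOLD = 5000                          # assumed, tuned by experiment

ref = cv2.imread('left_sign.png', 0)            # reference sign, grayscale
ref = cv2.resize(ref, (320, 240))               # match the frame size
diff = cv2.bitwise_xor(ref, warped_bw)          # identical pixels -> 0
if cv2.countNonZero(diff) < MATCH_THRESHOLD:
    print("sign matched")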

5.2 Software Used

5.2.1 ARDUINO IDE


Arduino is both an open-source software library and an open-source breakout board for the
popular AVR microcontrollers. The Arduino IDE (Integrated Development Environment) is the
program used to write code, and comes as a downloadable installer on the Arduino website. The
Arduino board is the physical board that stores and runs the code uploaded to it. Both the
software package and the board are referred to as "Arduino."

5.2.2 PyCharm
PyCharm is an Integrated Development Environment (IDE) used in computer programming,
specifically for the Python language. It provides code analysis, a graphical debugger, an
integrated unit tester, integration with version control systems (VCSes), and supports web
development with Django.

5.2.3 Proteus 8 Pro

Proteus 8 Professional is used for simulating circuits; it includes software models of
controllers, ICs, power supplies and more. All the code was first verified in Proteus before
undergoing hardware testing.

5.2.4 Virtual Network Computing (VNC)

Virtual Network Computing (VNC) is a type of remote-control software that makes it possible to
control another computer over a network connection. Keystrokes and mouse clicks are transmitted
from one computer to another, allowing technical support staff to manage a desktop, server, or
other networked device without being in the same physical location. VNC works on a
client/server model: a VNC viewer (or client) is installed on the local computer and connects to
the server component, which must be installed on the remote computer.

Chapter 6
Results and Discussion

6.1 Operation and working:


To operate the project, the following steps should be followed:
• Connect the battery to the robot.
• Show a sign; the camera captures it and the image processing runs on the Raspberry Pi.
• The resulting direction is sent by the Raspberry Pi to the server.
• The data on the server is read by the NodeMCU (ESP8266 module).
• According to the data present on the server, the NodeMCU sets its specific pin high.
• The pin logic on the NodeMCU is read by the microcontroller (Arduino), which turns on the
motors accordingly.
• The MPU6050 measures the angle of tilt and sends it to the microcontroller, which again drives
the motors correspondingly to stabilize the robot.
• After reaching the destination, the robot stops itself.

6.2 Problems Faced:


The following problems were faced during the making of this project:
1. Motor Selection
The motor we first chose had specifications suited to linear motion, but our model required
specifications for rotational motion; thus a motor with both high RPM and high torque was used
instead.

2. PID Tuning
The robot was jerky and the captured image was blurred, so detailed and careful PID tuning was
required.

3. Image Processing
• Devising algorithms for extracting the region of interest.
• Finding the cut-point for differentiating images for the left/right indications.
• Using I2C and serial communication at the same time.

Chapter 7
Conclusion

This project started from the problem statement of how to send a vehicle from one place to
another while it stabilizes itself through a self-balancing technique. The problem was addressed
with a vehicle called the 'Smart Image Processing Robot', based on the concept of controlling
the vehicle by displaying a set of symbols to navigate it to its destination. The underlying
concept was to move the vehicle autonomously from one place to another and perform image
processing; real-time image processing was also performed in the project. One of the aims was to
develop a low-cost autonomous vehicle for surveillance. All these aims and objectives were
successfully achieved by the end of the project.
