
MINOR PROJECT REPORT

ON
Wayfinding and Navigation Robot
Using ROS

Submitted in partial fulfillment of the requirements


For the award of the degree of

BACHELOR OF TECHNOLOGY
IN
ELECTRICAL & ELECTRONICS ENGINEERING

Submitted By

Bhavya Sharma Harshit Singh


40296204918 00996204918
Ujjwal Gupta Navendu Kumar
356962049118 01296204918

Under the guidance of


MD. Saquib Faraz, Asst. Professor, EEE department

Department of Electrical & Electronics Engineering


Dr. Akhilesh Das Gupta Institute of Technology & Management
Guru Gobind Singh Indraprastha University
Dwarka, Delhi-110078
JANUARY- 2022
CERTIFICATE

We hereby certify that the work presented in the project report entitled "Wayfinding and
Navigation Robot Using ROS", submitted in partial fulfillment of the requirements for the
award of the degree of Bachelor of Technology in Electrical & Electronics Engineering from
Dr. Akhilesh Das Gupta Institute of Technology & Management, New Delhi, is an authentic
record of our own work carried out during the period from August 2021 to December 2021
under the guidance of Md. Saquib Faraz, Asst. Professor, EEE Department.
The matter presented in this project has not been submitted by us for the award of any other
degree elsewhere.
Bhavya Sharma Harshit Singh
40296204918 00996204918
Ujjwal Gupta Navendu Kumar
356962049118 01296204918

This is to certify that the above statement made by the candidates is correct to the best of our
knowledge.

MD. Saquib Faraz


Asst. Professor
EEE Department

Dr. Amruta Pattnaik Dr. Parthish Kumar Paul Mr. Ajit Kumar Sharma
Project Coordinator Project Coordinator H.O.D., EEE Deptt.
ACKNOWLEDGEMENT

We would like to acknowledge the contributions of the following persons, without whose
help and guidance this report would not have been completed.
We acknowledge, with respect and gratitude, the counsel and support of our project guide
Md. Saquib Faraz, Asst. Professor, EEE Department, whose expertise, guidance, support,
encouragement, and enthusiasm have made this report possible. His feedback vastly
improved the quality of this report and made the work an enthralling experience. We are
indeed proud and fortunate to be supervised by him.
We are thankful to Mr. Ajit Kumar Sharma, H.O.D., EEE Department, Dr. Akhilesh Das
Gupta Institute of Technology & Management, New Delhi for his constant encouragement,
valuable suggestions, moral support and blessings.
We are immensely thankful to our esteemed Director, Prof. (Dr.) Sanjay Kumar, Dr.
Akhilesh Das Gupta Institute of Technology & Management, New Delhi, for his never-
ending motivation and support.
We shall ever remain indebted to Dr. Amruta Pattnaik (Project Coordinator), Dr.
Parthish Kumar Paul (Project Coordinator), and the departmental faculty and staff members of
Dr. Akhilesh Das Gupta Institute of Technology & Management, New Delhi.
Finally, we would like to express our heartfelt thanks to God, to our beloved parents for
their blessings, and to our friends and classmates for their help and good wishes for the
successful completion of this project.

Bhavya Sharma Harshit Singh


40296204918 00996204918
Ujjwal Gupta Navendu Kumar
356962049118 01296204918
ABSTRACT
A SLAM robot aims to navigate independently in unknown environments. Placed in a new
environment, the robot explores the whole area and dynamically generates a map as it
moves.
Once the complete map has been generated, the robot can navigate to any position in that
environment along the shortest route. It can detect both static and dynamic obstacles and
avoid them during its course.
Because SLAM navigation essentially means building a new map, or repeatedly improving an
existing map, while at the same time localizing the robot within that map, an inherent
chicken-and-egg problem arises: the answers to these two questions cannot be delivered
independently of each other, and errors in the map and in the pose estimate are correlated.
Further, the LiDAR and odometry data are fused using probabilistic filters to increase the
accuracy of the result, so the final accuracy depends on the measurable error probability of
the sensors currently used.
TABLE OF CONTENTS

Page No.
Title Page
Certificate ii
Acknowledgment iii
Abstract iv
Table of Contents v
List of Figure vii

CHAPTER 1: INTRODUCTION AND LITERATURE REVIEW


1.1. Introduction 1
1.2. Basic of SLAM 1
1.3. Literature Overview 2
1.4. Organization of Project Report 4

CHAPTER 2: METHODOLOGY ADOPTED


2.1. Castor Wheel 5
2.2. Arduino Mega 2560 Board 5
2.3. LiDAR 360 Degree 10
2.4. Encoder Motor and Wheels 11
2.5. Connecting Wires 13
2.6. Motor Driver L298N 2A 14
2.7. Acrylic Chassis 15
2.8. Bluetooth Module HC-05 16
2.9. Software Used
2.9.1. Python IDLE 18
2.9.2. Robot Operating System (ROS) 18
CHAPTER 3: DESIGNING AND RESULT ANALYSIS
3.1. Algorithms Used 20
3.2. Simulations 23
CHAPTER 4: MERITS, DEMERITS AND APPLICATIONS
4.1 Merits of SLAM Robot 24
4.2 Demerits of SLAM Robot 24
4.3 Applications of SLAM Robot 24

CHAPTER 5: CONCLUSIONS AND FUTURE SCOPE


5.1 Conclusion 26
5.2 Future Scope 28

REFERENCES

APPENDIX
List of Figures
Figure No. Title of Figure Page No.
1.1 Localization Methods 2
2.1 Arduino MEGA 2560 Board 10
2.2 LIDAR 11
2.3 Quadrature Pulses 12
2.4 Connecting Wires 14
2.5 Motor Driver L298N 2A 15
2.6 Acrylic Chassis 16
2.7 HC-05 Module 17
2.8 Python IDLE & ROS 19
3.1 Low-Pass Filter Characteristics Curve 21
3.2 High-Pass Filter Characteristics Curve 21
4.1 Applications 28
5.1 Mathematical position of scanner and motors with the
reference along with the landmarks placed randomly in arena 26
5.2 Improvement of robot state using the sensor data by complying
various techniques 26
5.3 Real Time Simulation of LIDAR 27
5.4 Robot setup with RPI LiDAR Scanning and data collection
Software 27
5.5 Electronic Setup of the SLAM Robot 27
5.6 Frontal View of SLAM Robot 27
CHAPTER 1: INTRODUCTION AND LITERATURE REVIEW
1.1. Introduction
Research on the SLAM problem began within the robotics community
(arguably with the 1986 papers by Smith and Cheeseman), usually with wheeled
robots traversing a flat ground plane. Typically, this was done by combining sensor
readings (such as from a laser scanner or sonar) with information about the control
input (e.g. steering angle) and the measured robot state (e.g. counting wheel
rotations). This may seem far removed from tracking a handheld camera freely
moving in space, but embodies many of the core difficulties of SLAM, such as creating
a consistent and accurate map, and making best use of multiple unreliable sources of
information.

More recently the use of visual sensors has become an important aspect of
SLAM research, in part because an image provides a rich source of information about
the structure of the environment (containing more information than a sonar ping for
example). A large amount of research on visual SLAM used stereo cameras, or
cameras alongside other sensors (such as accelerometers or GPS), but since around
2001 a number of works showed how SLAM could be done successfully using only a
single camera (known as monocular visual SLAM), for example the seminal work of
Andrew Davison at the University of Oxford.

This was crucial in making SLAM a more widely useful technology, since
devices equipped with just a single camera - such as webcams and mobile phones -
are vastly more common and accessible than specialist sensing equipment. More
recent work has demonstrated how monocular visual SLAM can be used to create
large scale maps, how the maps can be automatically enhanced with meaningful 3D
structures, and recover extremely detailed shapes in real time. SLAM is an active field
of research within computer vision and new and improved techniques are constantly
emerging.
1.2. Basics of SLAM Robot
Mobile robots are expected to perform complicated tasks that require navigation in
complex and dynamic indoor and outdoor environments without any human input. In
order to autonomously navigate, path plan, and perform these tasks efficiently and
safely, the robot needs to be able to localize itself in its environment based on the
constructed maps. Mapping the spatial information of the environment is done online
with no prior knowledge of the robot’s location; the built map is subsequently used by
the robot for navigation.

1.3. Literature Review


• Localization Methods
Robot localization requires sensory information regarding the position and
orientation of the robot within the built map. Each method involves some major
limitations, so proper sensor fusion techniques have been deployed to overcome the
constraints of each sensor alone.

Fig.1.1 Localization Methods [1]

• Relative Positioning Measurement


The simplest form is to use wheel odometry methods that rely upon encoders
to measure the amount of rotation of wheels. In those methods, wheel rotation
measurements are incrementally used in conjunction with the robot’s motion model to
find the robot’s current location with respect to a global reference coordinate system.
The most significant error source is wheel slippage in uneven terrain or slippery floors.
IMU is also used to measure the linear and rotational acceleration of the robots.
However, it can still suffer from factors like extensive drift and sensitivity to bumpy
ground.
Like with odometry, position estimates from inertial navigation are acquired by
integrating the obtained information from the sensors; once for obtaining the speed,
twice for obtaining the traveled distance of the robot. The systems are independent of
external information sources. However, since measurements are made by integration,
the position estimates drift over time, and can lead to increased errors.
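
To make the idea of incremental wheel odometry concrete, the following is a minimal Python sketch of a differential-drive dead-reckoning update; the wheel radius, wheel base and ticks-per-revolution values are illustrative placeholders, not the parameters of the actual robot.

import math

# Illustrative constants (not taken from the actual robot):
TICKS_PER_REV = 360        # encoder ticks per wheel revolution
WHEEL_RADIUS = 0.033       # metres
WHEEL_BASE = 0.15          # distance between the two drive wheels, metres

def odometry_update(pose, left_ticks, right_ticks):
    """Dead-reckon a new (x, y, theta) pose from incremental encoder ticks."""
    x, y, theta = pose
    # Convert ticks to travelled distance for each wheel.
    dist_per_tick = 2.0 * math.pi * WHEEL_RADIUS / TICKS_PER_REV
    d_left = left_ticks * dist_per_tick
    d_right = right_ticks * dist_per_tick
    # Differential-drive motion model: average forward motion plus a
    # heading change proportional to the difference between the wheels.
    d_center = (d_left + d_right) / 2.0
    d_theta = (d_right - d_left) / WHEEL_BASE
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    theta += d_theta
    return (x, y, theta)

pose = (0.0, 0.0, 0.0)
pose = odometry_update(pose, left_ticks=52, right_ticks=48)
print(pose)   # small forward motion with a slight turn

Because each call integrates the previous pose, any error in a single update is carried into all later estimates, which is exactly the drift problem described above.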

• Absolute Positioning Measurement


Laser range finders are commonly used as well. As optical sensors, they
estimate distance by calculating the phase difference between the emitted wave and the
reflected wave. Unfortunately, in large-scale environments, there are bound to be areas
devoid of features visible to a laser range finder, like open atria or corridors with glass
walls.
WiFi localization uses a graph based WiFi map by collecting the signal
strength across the field. In this approach, the mean and standard deviations of WiFi
RSSI observations are approximated by linear interpolation on a graph. This leads to
a computationally efficient observation likelihood function and the optimized location
could be derived from the probability function. However, it is restricted to the WiFi
signal covered area with the information of a pre-learned WiFi graph.
Most robots use Global Positioning System (GPS) which allows the acquisition
of pose information. With exact prior knowledge of where the satellites are, once the
receiver calculates its distance from three or more satellites using the travel time of
the radio signals, the true location can be calculated. Of course, poor coverage of
satellite signal for indoor environments will limit its accuracy.
This measurement supplies information about the location of the robot
independent of previous location estimates; the location is not derived from integrating
a sequence of measurements, but directly from one measurement. This has the
advantage that the error in the position does not grow unbounded, as is the case with
relative position techniques.

• Multi-Sensor Fusion
Mobile robots are usually equipped with several sensor systems to avoid the
limitations when only one sensor is used to reconstruct the environment. Relative
position measurements provide precise positioning information constantly, and at
certain times absolute measurements are made to correct potential errors. There are
a number of approaches to sensor fusion for robot localization, including merging
multiple sensor feeds at the lowest level before being processed homogeneously, and
hierarchical approaches to fuse state estimates derived independently from multiple
sensors.
The position measurements can be fundamentally combined in a formal
probabilistic framework, such as the Markov Localization framework. Based on all
available information, the robot can believe to be at a certain location to a certain
degree. The localization problem consists of estimating the probability density over the
space of all locations. The Markov Localization framework combines information from
multiple sensors in the form of relative and absolute measurements to form a
combined belief in the location.
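
As an illustration of the Markov Localization idea, the following toy Python sketch maintains a belief over a one-dimensional corridor of cells, combining a relative motion update with an absolute "door detector" measurement; the door layout, motion noise and sensor probabilities are invented for the example.

# Toy 1-D Markov localization over a corridor of discrete cells.
corridor = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]       # 1 = door, 0 = wall
belief = [1.0 / len(corridor)] * len(corridor)  # start fully uncertain

P_HIT, P_MISS = 0.8, 0.2                  # sensor model: P(measurement | cell)
P_EXACT, P_UNDER, P_OVER = 0.8, 0.1, 0.1  # motion model noise

def sense(belief, measurement):
    """Absolute measurement: reweight the belief, then normalise."""
    new = [b * (P_HIT if corridor[i] == measurement else P_MISS)
           for i, b in enumerate(belief)]
    total = sum(new)
    return [b / total for b in new]

def move(belief, step):
    """Relative motion: convolve the belief with the motion noise."""
    n = len(belief)
    new = [0.0] * n
    for i in range(n):
        new[i] = (P_EXACT * belief[(i - step) % n]
                  + P_UNDER * belief[(i - step + 1) % n]
                  + P_OVER * belief[(i - step - 1) % n])
    return new

# One predict/update cycle: the robot moves one cell and then sees a door.
belief = move(belief, 1)
belief = sense(belief, 1)
print([round(b, 3) for b in belief])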

1.4. Organization of the Project


The material presented in this report is organized into five chapters.
Chapter 1 gives a brief introduction and a literature survey of the topic.
Chapter 2 describes the hardware and software components adopted for building the SLAM
robot.
Chapter 3 summarizes the results of the implemented techniques and draws a comparison
between the libraries available.
Chapter 4 consists of the merits, demerits and applications of SLAM and its value in the
current world.
Finally, Chapter 5 presents the conclusions and the future scope of the work.
CHAPTER 2: METHODOLOGY ADOPTED
2.1. Castor Wheel
The castor wheel uses Muvtons' exclusively designed "Real Solid" elastomer wheels,
produced from premium liquid-cast polyurethane (73 ± 3 Shore D) mechanically and
chemically locked to a non-corrosive metallic core that holds the wheel bearings and
provides structural rigidity under heavy loads. These wheels resist compression set, have
excellent resilience for easy movement even under high loads, and expel floor debris. With
suitable bearings they can be used in corrosive applications. They are non-marking and
floor-protective, and are tested and guaranteed against bond failures. Wheel bearing
options include sealed precision ball bearings, tapered roller bearings and roller bearings.
Temperature range: -20 °C to +85 °C. Tread options: anti-static and conductive wheels are
available.

2.2 Arduino Mega 2560 Board


The Arduino Mega 2560 is a microcontroller board based on the ATmega2560.
It has 54 digital input/output pins (of which 15 can be used as PWM outputs), 16
analog inputs, 4 UARTs (hardware serial ports), a 16 MHz crystal oscillator, a USB
connection, a power jack, an ICSP header, and a reset button. It contains everything
needed to support the microcontroller; simply connect it to a computer with a USB
cable or power it with an AC-to-DC adapter or battery to get started. The Mega 2560
board is compatible with most shields designed for the Uno and the former boards
Duemilanove or Diecimila. The Mega 2560 is an update to the Arduino Mega, which it
replaces.

Technical Specifications
• Microcontroller ATmega2560
• Operating Voltage 5V
• Input Voltage (recommended) 7-12V
• Input Voltage (limit) 6-20V
• Digital I/O Pins 54 (of which 15 provide PWM output)
• Analog Input Pins 16
• DC Current per I/O Pin 20 mA
• DC Current for 3.3V Pin 50 mA
• Flash Memory 256 KB of which 8 KB used by bootloader
• SRAM 8 KB
• EEPROM 4 KB
• Clock Speed 16 MHz
• LED_BUILTIN 13
• Length 101.52 mm
• Width 53.3 mm
• Weight 37 g

Programming
The Mega 2560 board can be programmed with the Arduino Software (IDE).
The ATmega2560 on the Mega 2560 comes preprogrammed with a bootloader that
allows you to upload new code to it without the use of an external hardware
programmer. It communicates using the original STK500 protocol (reference, C
header files).
You can also bypass the bootloader and program the microcontroller through the
ICSP (InCircuit Serial Programming) header using Arduino ISP or similar; see these
instructions for details.
The ATmega16U2 (or 8U2 on the rev1 and rev2 boards) firmware source code is
available in the Arduino repository. The ATmega16U2/8U2 is loaded with a DFU
bootloader, which can be activated by:
• On Rev1 boards: connecting the solder jumper on the back of the board (near
the map of Italy) and then resetting the 8U2.
• On Rev2 or later boards: there is a resistor pulling the 8U2/16U2 HWB
line to ground, making it easier to put into DFU mode. You can then use
Atmel's FLIP software (Windows) or the DFU programmer (Mac OS X and
Linux) to load new firmware, or you can use the ISP header with an external
programmer.

Power
The Mega 2560 can be powered via the USB connection or with an external
power supply. The power source is selected automatically. External (non-USB) power
can come either from an AC-to-DC adapter (wall-wart) or battery. The adapter can be
connected by plugging a 2.1mm center-positive plug into the board's power jack.
Leads from a battery can be inserted in the GND and Vin pin headers of the POWER
connector.
The board can operate on an external supply of 6 to 20 volts. If supplied with less than
7V, however, the 5V pin may supply less than five volts and the board may become
unstable. If using more than 12V, the voltage regulator may overheat and damage the
board. The recommended range is 7 to 12 volts.
The power pins are as follows:

• Vin: The input voltage to the board when it's using an external power source
(as opposed to 5 volts from the USB connection or other regulated power
source). You can supply voltage through this pin, or, if supplying voltage via
the power jack, access it through this pin.
• 5V: This pin outputs a regulated 5V from the regulator on the board. The board
can be supplied with power either from the DC power jack (7 - 12V), the USB
connector (5V), or the VIN pin of the board (7-12V). Supplying voltage via the
5V or 3.3V pins bypasses the regulator, and can damage your board. We don't
advise it.
• 3V3: A 3.3 volt supply generated by the on-board regulator. Maximum current
draw is 50 mA.
• GND. Ground pins.
• IOREF. This pin on the board provides the voltage reference with which the
microcontroller operates. A properly configured shield can read the IOREF pin
voltage and select the appropriate power source or enable voltage translators
on the outputs for working with the 5V or 3.3V.

Memory
The ATmega2560 has 256 KB of flash memory for storing code (of which 8 KB
is used for the bootloader), 8 KB of SRAM and 4 KB of EEPROM (which can be read
and written with the EEPROM library).
Input and Output
The mapping between Arduino pins and ATmega2560 ports is given in the official
ATmega2560 pin-mapping table.


Each of the 54 digital pins on the Mega can be used as an input or output,
using the pinMode(), digitalWrite(), and digitalRead() functions. They operate at 5 volts.
Each pin can provide or receive 20 mA as the recommended operating condition and has
an internal pull-up resistor (disconnected by default) of 20-50 kΩ. A maximum of
40 mA must not be exceeded on any I/O pin to avoid permanent damage to the
microcontroller.
In addition, some pins have specialized functions:
• Serial: 0 (RX) and 1 (TX); Serial 1: 19 (RX) and 18 (TX); Serial 2: 17 (RX) and
16 (TX); Serial 3: 15 (RX) and 14 (TX). Used to receive (RX) and transmit (TX)
TTL serial data. Pins 0 and 1 are also connected to the corresponding pins of
the ATmega16U2 USB-to-TTL Serial chip.
• External Interrupts: 2 (interrupt 0), 3 (interrupt 1), 18 (interrupt 5), 19 (interrupt
4), 20 (interrupt 3), and 21 (interrupt 2). These pins can be configured to
trigger an interrupt on a low level, a rising or falling edge, or a change in level.
See the attachInterrupt() function for details.
• PWM: 2 to 13 and 44 to 46. Provide 8-bit PWM output with the analogWrite()
function.
• SPI: 50 (MISO), 51 (MOSI), 52 (SCK), 53 (SS). These pins support SPI
communication using the SPI library. The SPI pins are also broken out on the
ICSP header, which is physically compatible with the Arduino/Genuino Uno
and the old Duemilanove and Diecimila Arduino boards.
• LED: 13. There is a built-in LED connected to digital pin 13. When the pin is
HIGH value, the LED is on, when the pin is LOW, it's off.
• TWI: 20 (SDA) and 21 (SCL). Support TWI communication using the Wire
library. Note that these pins are not in the same location as the TWI pins on
the old Duemilanove or Diecimila Arduino boards.
The Mega 2560 has 16 analog inputs, each of which provides 10 bits of resolution
(i.e. 1024 different values). By default they measure from ground to 5 volts, though it
is possible to change the upper end of their range using the AREF pin and the
analogReference() function.
There are a couple of other pins on the board:

• AREF. Reference voltage for the analog inputs. Used with analogReference().
• Reset. Bring this line LOW to reset the microcontroller. Typically used to add a
reset button to shields which block the one on the board.

Communication
The Mega 2560 board has a number of facilities for communicating with a computer,
another board, or other microcontrollers. The ATmega2560 provides four hardware
UARTs for TTL (5V) serial communication. An ATmega16U2 (ATmega 8U2 on the
revision 1 and revision 2 boards) on the board channels one of these over USB and
provides a virtual COM port to software on the computer (Windows machines will need
a .inf file, but OS X and Linux machines will recognize the board as a COM port
automatically). The Arduino Software (IDE) includes a serial monitor which allows
simple textual data to be sent to and from the board. The RX and TX LEDs on the
board will flash when data is being transmitted via the ATmega8U2/ATmega16U2
chip and USB connection to the computer (but not for serial communication on pins 0
and 1).
A SoftwareSerial library allows for serial communication on any of the Mega
2560's digital pins. The Mega 2560 also supports TWI and SPI communication. The
Arduino Software (IDE) includes a Wire library to simplify use of the TWI bus; see the
documentation for details. For SPI communication, use the SPI library.

Physical Characteristics and Shield Compatibility


The maximum length and width of the Mega 2560 PCB are 4 and 2.1 inches
respectively, with the USB connector and power jack extending beyond the former
dimension. Three screw holes allow the board to be attached to a surface or case.
Note that the distance between digital pins 7 and 8 is 160 mil (0.16"), not an even
multiple of the 100 mil spacing of the other pins.
The Mega 2560 is designed to be compatible with most shields designed for
the Uno and the older Diecimila or Duemilanove Arduino boards. Digital pins 0 to 13
(and the adjacent AREF and GND pins), analog inputs 0 to 5, the power header, and
ICSP header are all in equivalent locations. Furthermore, the main UART (serial port)
is located on the same pins (0 and 1), as are external interrupts 0 and 1 (pins 2 and 3
respectively). SPI is available through the ICSP header on both the Mega 2560 and
Duemilanove / Diecimila boards. Please note that I2C is not located on the same pins
on the Mega 2560 board (20 and 21) as the Duemilanove / Diecimila boards (analog
inputs 4 and 5).

Automatic (Software) Reset


Rather than requiring a physical press of the reset button before an upload, the
Mega 2560 is designed in a way that allows it to be reset by software running on a
connected computer. One of the hardware flow control lines (DTR) of the ATmega8U2
is connected to the reset line of the ATmega2560 via a 100 nanofarad capacitor.
When this line is asserted (taken low), the reset line drops long enough to reset the
chip. The Arduino Software (IDE) uses this capability to allow you to upload code by
simply pressing the upload button in the Arduino environment. This means that the
bootloader can have a shorter timeout, as the lowering of DTR can be well-
coordinated with the start of the upload.
Fig. 2.1 Arduino MEGA 2560 [9]
The Mega 2560 board contains a trace that can be cut to disable the auto-
reset. The pads on either side of the trace can be soldered together to re-enable it. It's
labeled "RESETEN". You may also be able to disable the auto-reset by connecting a
110 ohm resistor from 5V to the reset line; see this forum thread for details.

2.3 LiDAR 360 Degree

• Range Radius: 12 meters; Power Supply: 5V


• 360 Degree Omnidirectional Laser Range Scanning
• Configurable Scan Rate from 2-10Hz; Plug and Play
• Sample rate of 8000 samples per second, the highest in the current economical
LiDAR industry
• OPTMAG original design prolongs the life-span; ideal for robot navigation
and localization

The RPLIDAR A1's scanning frequency reaches 5.5 Hz when sampling 1450 points each
round, and it can be configured up to a maximum of 10 Hz. The RPLIDAR A1 is basically a
laser triangulation measurement system. It works well in all kinds of indoor
environments, and in outdoor environments without direct sunlight exposure.
Fig. 2.2 LiDAR [3]

Applications

• Home service /cleaning robot navigation and localization


• General robot navigation and localization
• Smart toy’s localization and obstacle avoidance
• Environment scanning and 3D re-modeling
• General simultaneous localization and mapping (SLAM)

2.4 Encoder Motor and Wheels


Efficiently Reading Quadrature with Interrupts
Once you start wanting to control the position of a motor through feedback
control, you will likely end up using a rotary encoder as the feedback device. Encoders
can be broadly categorized as:

• Absolute position
• Incremental position
Absolute position encoders will tell you what the angular position is all the time,
even between power cycles. Most use "grey code" (a modified form of binary) with
several "tracks" (bits) to read position.
Incremental position encoders will tell you what the angular position is, but only
relative to where it was when you started paying attention (usually on power-up). Two
common types of incremental outputs are:
• Incremental (clever name)
• Quadrature
Incremental is rather useless for position control because it doesn't give you any
information about what direction you are turning, just that you are turning. Quadrature
encoders give you direction as well as incremental position. This article deals with
efficiently reading quadrature output on an Arduino by utilizing external interrupts and
some AVR bare-bones code.
What is Quadrature?

Fig. 2.3 Quadrature Pulses [5]


There are two channels of output in quadrature lovingly referred to as channels
A and B. They are each a square wave, but are offset from each other by 90 degrees.
Whether channel A is leading or lagging channel B depends on the direction the shaft
is turning, which is what allows you to determine direction. For example, both
channels are low and then channel A goes high, you know that you are spinning
CCW. If channel B had instead gone high before channel A, you would then know you
are spinning CW. Of course, this can be deduced starting from any state as can be
seen in the diagram. The output channels can be produced by a variety of means,
usually either magnets in a disk attached to the shaft and a pair of hall effect sensors,
or a disk with slots cut out and a pair of optical sensors looking through the slots. A
google image search of "quadrature encoder" will come up with plenty of pictures to
explain it.
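
To make the A/B phase relationship concrete, the following Python sketch decodes a sequence of (A, B) samples with the usual quadrature transition table; it mirrors what an interrupt routine on the Arduino would do, but it is only an illustrative simulation (which physical direction counts as positive depends on wiring).

# Quadrature decoding table: index by (previous_state, current_state),
# where a state is the 2-bit value (A << 1) | B.  +1 = one step in one
# direction, -1 = one step in the other; missing keys count as no movement.
TRANSITION = {
    (0b00, 0b01): +1, (0b01, 0b11): +1, (0b11, 0b10): +1, (0b10, 0b00): +1,
    (0b00, 0b10): -1, (0b10, 0b11): -1, (0b11, 0b01): -1, (0b01, 0b00): -1,
}

def decode(samples):
    """Accumulate a signed position from a sequence of (A, B) samples."""
    position = 0
    prev = (samples[0][0] << 1) | samples[0][1]
    for a, b in samples[1:]:
        curr = (a << 1) | b
        position += TRANSITION.get((prev, curr), 0)
        prev = curr
    return position

# One full electrical cycle in one direction, then the same cycle reversed.
forward = [(0, 0), (0, 1), (1, 1), (1, 0), (0, 0)]
print(decode(forward))         # 4 counts in one direction
print(decode(forward[::-1]))   # -4 counts in the other direction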

Selecting an Encoder

Selecting the best encoder for your design is about more than just getting the highest
resolution you can afford, since the higher the pulses per revolution, the faster you have to
read the events. Some important factors must be considered when selecting an appropriate
encoder:

• What is your required angular resolution?
• What is the maximum speed you need to be able to track?
• How fast can you read and process the encoder output?

The remainder of this article will deal with the third bullet point, allowing you to read the
encoder output as fast as possible. This will give you more freedom when considering the
other two points.
Mobile wheeled or tracked robots have a minimum of two motors which are
used to propel and steer the robot. Hobbyists tend to choose skid steering (like a tank)
because of its simplicity to design, incorporate and control. A three wheeled robot’s
third (rear) wheel usually prevents the robot from falling over. Four wheeled robots
have either two or four drive motors and use skid steering. Six wheeled robots most
commonly have either two, four or six drive motors. Individuals who use an R/C car as
a basis for their robot use rack and pinion steering where one motor is connected to a
drive train and the other (usually a servo motor) is used for steering. Increasing the
number of drive motors helps the robot to climb steeper inclines by increasing the
torque.
Adding "idle" wheels (wheels not connected to a motor) often has the
unfortunate consequence of removing weight from the drive wheels, resulting in slip
and loss of traction. In such a layout the center wheel, if mistakenly chosen as the
driven wheel, often loses contact with the ground. The way around this is to add
suspension.

2.5 Connecting Wires


A jump wire (also known as a jumper wire, or jumper) is an electrical wire, or a
group of them in a cable, with a connector or pin at each end (or sometimes without
them – simply "tinned"), which is normally used to interconnect the components of a
breadboard or other prototype or test circuit, internally or with other equipment or
components, without soldering.

Fig. 2.4 Connecting Wires [6]

2.6 Motor Driver L298N 2A

This L298N-based motor driver module is a high-power motor driver, perfect
for driving DC motors and stepper motors. It uses the popular L298 motor driver IC
and has an onboard 5 V regulator that can supply an external circuit. It can
control up to 4 DC motors, or 2 DC motors with direction and speed control. This
motor driver is perfect for robotics and mechatronics projects and for
controlling motors from microcontrollers, switches, relays, etc., for example in
micro-mouse robots, line-following robots and robot arms.

An H-bridge is a circuit that can drive a current in either polarity and be
controlled by Pulse Width Modulation (PWM).

Pulse Width Modulation is a means of controlling the duration of an electronic
pulse. For motors, imagine the brush as a water wheel and the electrons as the
flowing droplets of water. The voltage would be the water flowing over the wheel at a
constant rate; the more water flowing, the higher the voltage. Motors are rated for
certain voltages and can be damaged if the voltage is applied too heavily or if it is
dropped quickly to slow the motor down. This is where PWM comes in. Take the water
wheel analogy and think of the water hitting it in pulses but at a constant flow. The
longer the pulses, the faster the wheel will turn; the shorter the pulses, the slower the
wheel will turn. Motors will last much longer and be more reliable if controlled through
PWM.
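
As a small numerical illustration of the pulse-width idea, the sketch below converts an 8-bit duty value (as used by Arduino's analogWrite) into the average voltage the motor effectively sees; the 12 V supply value is an assumption made for the example.

# Average voltage seen by the motor under PWM, assuming an 8-bit duty
# value (0-255) and an illustrative 12 V motor supply.
SUPPLY_VOLTAGE = 12.0

def pwm_average_voltage(duty_8bit):
    duty_cycle = duty_8bit / 255.0       # fraction of time the pulse is high
    return duty_cycle * SUPPLY_VOLTAGE   # longer pulses -> higher average voltage

for duty in (64, 128, 191, 255):
    print(duty, '->', round(pwm_average_voltage(duty), 2), 'V')
# 64 -> 3.01 V, 128 -> 6.02 V, 191 -> 8.99 V, 255 -> 12.0 V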

Pins:

1. Out 1: Motor A lead out
2. Out 2: Motor A lead out
3. Out 3: Motor B lead out
4. Out 4: Motor B lead out
5. 12V: Supply input (can actually be from 5 V to 35 V, just marked as 12 V)
6. GND: Ground
7. 5V: 5 V input (unnecessary if your power source is 7 V-35 V; if the power source is
7 V-35 V then this pin can act as a 5 V output)
8. EnA: Enables the PWM signal for Motor A
9. In1: Direction input 1 for Motor A
10. In2: Direction input 2 for Motor A
11. In3: Direction input 1 for Motor B
12. In4: Direction input 2 for Motor B
13. EnB: Enables the PWM signal for Motor B
Fig. 2.5 Motor Driver L298N 2A [6]

2.7 Acrylic Chassis


Transparent acrylic chassis capable of mounting 2 motors and castor wheel for
a robot. The chassis is specially designed to fit Arduino compatible boards such as
SPDuino and also the Line array sensor. The body contains holes for easy mounting
of various sensors in the front and other mechanical components.

Features
• Made from Laser cut 3mm Acrylic Sheet
• Multiple slots for mounting extra electronics easily
• Compatible with 2 Axis Pan-Til for mounting wireless camera, Sharp sensors
and Ultrasonic distance sensor
• Compatible with all our normal DC geared motors, High torque motors and
High torque motors with current controlled drive
• Arduino / Raspberry Pi / Rhino board can be mounted inside as well as on top.
• Holes and slots for mounting Line sensor array on bottom
• Holes for mounting castors on bottom in case for use with only 2 motors.
Applications
• Many types of autonomous and manual controlled robots
Fig. 2.6 Acrylic Chassis [7]

2.8 Bluetooth Module HC-05


HC-05 is a very cool module which can add two-way (full-duplex) wireless
functionality to your projects. You can use this module to communicate between two
microcontrollers, such as Arduinos, or to communicate with any device with Bluetooth
functionality, such as a phone or laptop. There are many Android applications already
available that make this process a lot easier. The module communicates with the help
of USART at a 9600 baud rate, hence it is easy to interface with any
microcontroller that supports USART. We can also configure the default values of the
module by using the command mode. So if you are looking for a wireless module that
can transfer data from your computer or mobile phone to a microcontroller or vice
versa, then this module might be the right choice for you. However, do not expect this
module to transfer multimedia like photos or songs; you might have to look into the
CSR8645 module for that.
HC-05 has two operating modes, one is the Data mode in which it can send
and receive data from other Bluetooth devices and the other is the AT Command
mode where the default device settings can be changed. We can operate the device
in either of these two modes by using the key pin as explained in the pin description.
It is very easy to pair the HC-05 module with microcontrollers because it operates
using the Serial Port Protocol (SPP). Simply power the module with +5V and connect
the Rx pin of the module to the Tx of MCU and Tx pin of module to Rx of MCU as
shown in the figure below
Fig. 2.7 HC-05 Module [7]

HC-05 Technical Specifications


• Serial Bluetooth module for Arduino and other microcontrollers
• Operating Voltage: 4V to 6V (Typically +5V)
• Operating Current: 30mA
• Range: <100m
• Works with Serial communication (USART) and TTL compatible
• Follows IEEE 802.15.1 standardized protocol
• Uses Frequency-Hopping Spread spectrum (FHSS)
• Can operate in Master, Slave or Master/Slave mode
• Can be easily interfaced with Laptop or Mobile phones with Bluetooth
• Supported baud rate: 9600, 19200, 38400, 57600, 115200, 230400, 460800
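
On the computer side, data arriving over the HC-05 link appears as an ordinary serial port. The short Python sketch below reads lines from that port using pyserial; the port name ('/dev/rfcomm0') and the message format shown in the comment are assumptions for illustration (on Windows the port would appear as, e.g., 'COM5').

# Minimal sketch of reading lines sent by the robot over the HC-05 link.
import serial

port = serial.Serial('/dev/rfcomm0', baudrate=9600, timeout=1)
try:
    for _ in range(10):
        line = port.readline().decode('ascii', errors='ignore').strip()
        if line:
            print('received:', line)   # e.g. "Sensor1 34cm"
finally:
    port.close()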

2.9 Software Used


2.9.1 Python IDLE
IDLE (short for integrated development environment, or integrated development
and learning environment) is an integrated development environment for Python,
which has been bundled with the default implementation of the language since version
1.5.2b1. It is packaged as an optional part of the Python packaging with many Linux
distributions. It is completely written in Python and the Tkinter GUI toolkit
(wrapper functions for Tcl/Tk).
IDLE is intended to be a simple IDE suitable for beginners, especially in an
educational environment. To that end, it is cross-platform and avoids feature clutter.
According to the included README, its main features are:

• Multi-window text editor with syntax highlighting, auto-completion, smart indent
and more.
• Python shell with syntax highlighting.
• Integrated debugger with stepping, persistent breakpoints, and call stack
visibility.

2.9.2 Robot Operating System (ROS)


ROS is robotics middleware (i.e. a collection of software frameworks for robot software
development). Although ROS is not an operating system, it provides services
designed for a heterogeneous computer cluster, such as hardware abstraction, low-
level device control, implementation of commonly used functionality, message passing
between processes, and package management. Running sets of ROS-based
processes are represented in a graph architecture where processing takes place in
nodes that may receive, post and multiplex sensor data, control, state, planning,
actuator, and other messages. Despite the importance of reactivity and low latency in
robot control, ROS itself is not a real-time OS (RTOS). It is possible, however, to
integrate ROS with real-time code. The lack of support for real-time systems has been
addressed in the creation of ROS 2.0, a major revision of the ROS API which will take
advantage of modern libraries and technologies for core ROS functionality and add
support for real-time code and embedded hardware.
Software in the ROS ecosystem can be separated into three groups:
• Language- and platform-independent tools used for building and distributing
ROS-based software;
• ROS client library implementations such as roscpp, rospy, and roslisp;
• Packages containing application-related code which uses one or more ROS
client libraries.
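
As a minimal illustration of the node/topic pattern that the navigation script in the appendix is built on, the following rospy sketch publishes a message on an arbitrary topic; the topic name and publish rate are illustrative.

#!/usr/bin/env python
# Minimal ROS (rospy) node: one publisher on an arbitrary topic.
import rospy
from std_msgs.msg import String

def talker():
    rospy.init_node('demo_talker')
    pub = rospy.Publisher('/chatter', String, queue_size=10)
    rate = rospy.Rate(1)  # publish at 1 Hz
    while not rospy.is_shutdown():
        pub.publish(String(data='hello from the SLAM robot'))
        rate.sleep()

if __name__ == '__main__':
    try:
        talker()
    except rospy.ROSInterruptException:
        pass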

Fig. 2.8 Python IDLE & ROS


CHAPTER 3: DESIGNING AND RESULT ANALYSIS
3.1 Algorithms Used

1. A* Algorithm
• A* is the most popular choice for path finding, because it is fairly flexible and
can be used in a wide range of contexts.
• It is an artificial-intelligence algorithm used to find the shortest possible path from
the start state to the goal state.
• It can be applied to character path finding, puzzle solving and much more. It
really has countless applications.
• Peter Hart, Nils Nilsson and Bertram Raphael of Stanford Research Institute
(now SRI International) first published the algorithm in 1968.
• The A* algorithm uses both the actual distance from the start and the
estimated distance to the goal.
Now let us see how the A* algorithm works. It is based on the following concepts:

• START and GOAL states – where the search begins and where it aims to get.
• How you measure progress towards the goal.
• How you generate children, i.e. all possible paths towards the solution.
At each iteration of its main loop, A* needs to determine which of its paths to extend. It
does so based on the cost of the path and an estimate of the cost required to extend
the path all the way to the goal. Specifically, A* selects the path that minimizes

f(n) = g(n) + h(n)

where
n = the next node on the path,
g(n) = the cost of the path from the start node to n, and
h(n) = a heuristic function that estimates the cost of the cheapest path from n to the goal.
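
The following compact Python sketch shows A* on a small 4-connected occupancy grid with a Manhattan-distance heuristic; the grid, start and goal are made up for the example.

import heapq

def astar(grid, start, goal):
    """A* on a 4-connected grid: 0 = free cell, 1 = obstacle."""
    def h(n):                     # heuristic: Manhattan distance to the goal
        return abs(n[0] - goal[0]) + abs(n[1] - goal[1])
    open_set = [(h(start), 0, start, [start])]   # entries are (f, g, node, path)
    visited = set()
    while open_set:
        f, g, node, path = heapq.heappop(open_set)   # expand the cheapest f(n)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == 0 and (nr, nc) not in visited):
                g2 = g + 1                           # cost of one grid step
                heapq.heappush(open_set,
                               (g2 + h((nr, nc)), g2, (nr, nc), path + [(nr, nc)]))
    return None                                      # no path found

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0],
        [0, 1, 1, 0]]
print(astar(grid, (0, 0), (3, 3)))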

2. Low Pass Filter

The low-pass filter has a gain response over the frequency range from zero frequency
(DC) to the cutoff frequency (ωc), which means that any input component with a frequency
below ωc passes through, and anything above it is attenuated or rejected. The gain
approaches zero as frequency increases to infinity. Refer to the image below:
The input signal of the filter shown here has equal amplitudes at frequencies ω1 and
ω2.

Fig. 3.1 Low-Pass Filter Characteristics Curve [9]

After passing through the low-pass filter, the output amplitude at ω1 is


unaffected because it’s below the cutoff frequency ωc. However, at ω2, the signal
amplitude is significantly decreased because it’s above ωc.
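
In software, a simple first-order low-pass filter can be implemented as an exponential moving average, as sketched below; the smoothing factor and the sample values are illustrative.

# First-order digital low-pass filter (exponential moving average).
# alpha near 0 -> heavy smoothing (low cutoff); alpha near 1 -> light smoothing.
def low_pass(samples, alpha=0.2):
    filtered = []
    y = samples[0]
    for x in samples:
        y = alpha * x + (1.0 - alpha) * y   # slowly-changing output tracks the input
        filtered.append(y)
    return filtered

noisy = [10.0, 10.4, 9.7, 10.1, 30.0, 10.2, 9.9, 10.3]   # one spike at index 4
print([round(v, 2) for v in low_pass(noisy)])
# the spike is strongly attenuated because it lies above the filter's cutoff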

High Pass Filter

Opposite to the Low-pass filter, the high-pass filter has a gain response with a
frequency range from the cutoff frequency (ωc) to infinity. Any input having a
frequency below ωc gets attenuated or rejected. Anything above ωc passes through
unaffected. Refer to the image below:

Fig. 3.2 High-Pass Filter Characteristics Curve [9]

The input signal of the filter shown here has equal amplitudes at frequencies ω1 and
ω2. After passing through the high-pass filter, the output amplitude at ω1 is
significantly decreased because it is below ωc, and at ω2 the signal
passes through unaffected because it is above ωc.
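
A complementary high-pass output can be obtained by subtracting the low-pass component from the raw signal, as sketched below (this reuses the low_pass helper and the noisy samples from the previous sketch); only the fast changes remain.

# High-pass output = raw signal minus its low-pass (slowly-moving) component.
def high_pass(samples, alpha=0.2):
    slow = low_pass(samples, alpha)
    return [x - s for x, s in zip(samples, slow)]

print([round(v, 2) for v in high_pass(noisy)])   # the spike dominates the output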
3. Kalman Filter

Implements a linear Kalman filter. The best documentation for the implementation used
here is the free book Kalman and Bayesian Filters in Python; the test files that ship with
the library also give a basic idea of use, albeit without much description.

In brief, you will first construct this object, specifying the size of the state vector
with dim_x and the size of the measurement vector that you will be using with dim_z.
These are mostly used to perform size checks when you assign values to the various
matrices. For example, if you specified dim_z=2 and then try to assign a 3x3 matrix to
R (the measurement noise matrix), you will get an assert exception because R should
be 2x2. (If for whatever reason you need to alter the size of things midstream, just use
the underscore version of the matrices to assign directly: your_filter._R =
a_3x3_matrix.)

After construction the filter will have default matrices created for you, but you
must specify the values for each. It is usually easiest to just overwrite them rather than
assign to each element yourself. This is shown in the example below. All are of
type numpy.array.

These are the matrices (instance variables) which you must specify. All are of type
numpy.array (do NOT use numpy.matrix). If dimensional analysis allows you to get
away with a 1x1 matrix, you may also use a scalar. All elements must be of type
float.
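
The description above matches the KalmanFilter class of the filterpy Python library; assuming that library, a minimal construction for a one-dimensional constant-velocity model might look like the following, with all matrix values used purely as placeholders.

# Minimal filterpy KalmanFilter set-up for a 1-D constant-velocity model.
# dim_x = 2 (position, velocity), dim_z = 1 (position measurement).
import numpy as np
from filterpy.kalman import KalmanFilter

kf = KalmanFilter(dim_x=2, dim_z=1)
kf.x = np.array([[0.0],          # initial position
                 [0.0]])         # initial velocity
dt = 0.1
kf.F = np.array([[1.0, dt],      # state transition: x' = x + v*dt
                 [0.0, 1.0]])
kf.H = np.array([[1.0, 0.0]])    # we only measure position
kf.P *= 500.0                    # large initial uncertainty
kf.R = np.array([[5.0]])         # measurement noise
kf.Q = np.eye(2) * 0.01          # process noise

for z in [1.1, 2.0, 2.9, 4.2]:   # illustrative position measurements
    kf.predict()
    kf.update(z)
    print(kf.x.T)                # estimated [position, velocity]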

4. Extended Kalman Filter

Implements an extended Kalman filter. Again, the best documentation is the free
book Kalman and Bayesian Filters in Python; the test files that ship with the library also
give a basic idea of use, albeit without much description.

In brief, you will first construct this object, specifying the size of the state vector
with dim_x and the size of the measurement vector that you will be using with dim_z.
These are mostly used to perform size checks when you assign values to the various
matrices. For example, if you specified dim_z=2 and then try to assign a 3x3 matrix to
R (the measurement noise matrix), you will get an assert exception because R should
be 2x2. (If for whatever reason you need to alter the size of things midstream, just use
the underscore version of the matrices to assign directly: your_filter._R =
a_3x3_matrix.)

After construction the filter will have default matrices created for you, but you
must specify the values for each. It is usually easiest to just overwrite them rather than
assign to each element yourself. This is shown in the example below. All are of
type numpy.array.
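
Similarly, assuming filterpy's ExtendedKalmanFilter class, the sketch below passes a nonlinear range-to-landmark measurement function and its Jacobian to update(); the landmark position, noise values and measurements are invented for the illustration.

# Sketch of an extended Kalman filter with a nonlinear measurement:
# the robot state is (x, y) and the sensor returns the range to a landmark.
import numpy as np
from filterpy.kalman import ExtendedKalmanFilter

landmark = np.array([5.0, 3.0])

def hx(x):
    """Nonlinear measurement function: range from state (x, y) to the landmark."""
    dx, dy = landmark[0] - x[0, 0], landmark[1] - x[1, 0]
    return np.array([[np.sqrt(dx**2 + dy**2)]])

def H_jacobian(x):
    """Jacobian of hx with respect to the state, evaluated at x."""
    dx, dy = landmark[0] - x[0, 0], landmark[1] - x[1, 0]
    r = np.sqrt(dx**2 + dy**2)
    return np.array([[-dx / r, -dy / r]])

ekf = ExtendedKalmanFilter(dim_x=2, dim_z=1)
ekf.x = np.array([[0.0], [0.0]])   # initial position guess
ekf.F = np.eye(2)                  # stationary robot for this toy example
ekf.P *= 10.0
ekf.R = np.array([[0.1]])
ekf.Q = np.eye(2) * 0.01

for z in [5.9, 5.8, 5.85]:         # noisy range readings (true range is about 5.83)
    ekf.predict()
    ekf.update(np.array([[z]]), HJacobian=H_jacobian, Hx=hx)
    print(ekf.x.T)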
3.2 Simulations

Simulations were carried out for the Kalman filter, the extended Kalman filter, FastSLAM
1.0 and the particle filter.

CHAPTER 4: MERITS, DEMERITS AND APPLICATIONS

4.1 MERITS
• Simple to implement
• Works well in practice
• Can represent arbitrary, even multi-modal, distributions
• Adaptively focuses on probable regions of the state space
• Deals with non-Gaussian noise
• Operates automatically over ranges of roughly 0.15 m to 12 m

4.2 DEMERITS
• The mapping between observations and landmarks is unknown.
• Picking wrong data associations can have catastrophic consequences
(divergence).
• Laser ranging systems are accurate active sensors. The most common form
operates on the time-of-flight principle, sending a laser pulse in a narrow beam
towards the object and measuring the time taken by the pulse to be reflected off
the target and returned to the sender. Sonar-based systems are fast and provide
measurements and recognition capacities with amounts of information similar to
vision, but without appearance data. However, their dependence on dead-reckoning
sensors, such as odometers, implies that a small error can have large effects on later
position estimates.
• Vision systems, on the other hand, are passive and have long range and high
resolution, but the computational cost is considerably high and good visual
features are more difficult to extract and match. Vision is used to estimate the 3D
structure, feature locations and the robot pose, for instance by means of stereo
camera pairs or monocular cameras with structure-from-motion recovery.

4.3 APPLICATIONS
SLAM is central to a range of indoor, outdoor, in-air and underwater applications for
both manned and autonomous vehicles.
Examples:
• At home: vacuum cleaner, lawn mower
• Air: surveillance with unmanned air vehicles
• Underwater: reef monitoring
• Underground: exploration of mines
• Space: terrain mapping for localization
Fig. 4.1 Applications [2]
CHAPTER 5: CONCLUSIONS AND FUTURE SCOPE
5.1 CONCLUSION

Fig. 5.1 Mathematical position of scanner and motors with the reference along with the
landmarks placed randomly in arena

Fig. 5.2 Improvement of robot state using the sensor data by complying various
techniques
Fig. 5.3 Real Time Simulation of LiDAR

Fig. 5.4 Robot setup with RPI LiDAR Scanning and data collection software

Fig. 5.5 Electronic Setup of the SLAM Robot Fig. 5.6 Frontal View of SLAM Robot
5.2 FUTURE SCOPE
The localization and mapping problem is a fundamental one for any robot
navigating in unknown environments. So any robot that needs to move about
autonomously in a space needs to solve the problem of localization to know its current
position in the world. In case of unknown or dynamic environments where a map is not
available or constantly changing, the robot also needs to build and update the map in
real time. As of now there is no general solution that works in all cases but rather
algorithms that solve a specific problem for a particular use case.

Some form of SLAM is already used in almost all the autonomously navigating
robots that we see right now. This includes autonomous vehicles, autonomous aerial
vehicles, robot vacuum cleaners, toys like the Anki Drive, industrial robots, etc. SLAM
is the core of most robots trying to navigate autonomously today.

Coming to the future, I see SLAM still being relevant with added improvements in
methods as more research is done in the field. A little farther ahead in the future there
might be techniques that take a new perspective on the problem and solve it better,
but I think that will be really far in the future (say 20–40 years) and for that to be
implemented in mainstream products would take even longer. SLAM would still be
relevant but in a different form than we are familiar with today. Since SLAM is such a
fundamental problem it would still be relevant similar to how understanding Newton’s
gravitational model is still relevant to understanding relativistic models.
So, in the future I see it being used extensively for a lot of autonomous
navigation tasks even more than it is now.

REFERENCES
[1]. Christopher M. Bishop. Pattern Recognition and Machine Learning
(Information Science and Statistics). Springer-Verlag, Berlin, Heidelberg, 2006.
ISBN 0387310738.
[2]. Karan Chawla. Github account. https://github.com/karanchawla.
[3]. Joe Dinius. Github account. https://github.com/jwdinius.
[4]. Atsushi Sakai et al. Atsushisakai/pythonrobotics: Python sample codes for
robotics algorithms. https://github.com/AtsushiSakai/PythonRobotics.
[5]. David Gonzalez Bautista, Joshue Perez, Vicente Milanes, and Fawzi
Nashashibi. A review of motion planning techniques for automated vehicles.
pages 1–11, 11 2015.
[6]. Daniel Ingram. Github account. https://github.com/daniel-s-ingram.
[7]. Jesse Levinson, Jake Askeland, Jan Becker, Jennifer Dolson, David Held,
Soeren Kammel, J. Zico Kolter, Dirk Langer, Oliver Pink, Vaughan Pratt,
Michael Sokolsky, Ganymed Stanek, David Stavens, Alex Teichman, Moritz
Werling, and Sebastian Thrun. Towards fully autonomous driving: systems and
algorithms. In Intelligent Vehicles Symposium (IV), 2011 IEEE, 2011.
[8]. Brian Paden, Michal Cap, Sze Zheng Yong, Dmitry Yershov, and Emilio
Frazzoli. A survey of motion planning and control techniques for self-driving
urban vehicles.
2016.
[9]. Alexis Paques. Github account. https://github.com/AlexisTM.
[10]. Alejandro Perez, Robert Platt Jr, George Konidaris, Leslie P. Kaelbling, and
Tomas Lozano-Perez. LQR-RRT*: Optimal sampling-based motion planning with
automatically derived extension heuristics. pages 2537–2542, 05 2012.
[11]. Morgan Quigley, Ken Conley, Brian P. Gerkey, Josh Faust, Tully Foote,
Jeremy Leibs, Rob Wheeler, and Andrew Y. Ng. ROS: an open-source robot
operating system. In ICRA Workshop on Open Source Software, 2009.
[12]. Sebastian Thrun, Wolfram Burgard, and Dieter Fox. Probabilistic Robotics
(Intelligent Robotics and Autonomous Agents). The MIT Press, 2005. ISBN
0262201623.

Arduino Code
char x;                            // unused, kept from the original sketch
int motor[4] = {22, 24, 26, 28};   // L298N direction inputs IN1..IN4

// Ultrasonic sensor pins (sensors 6-8 are declared but not used below).
int trigPin1 = 2,  echoPin1 = 3;
int trigPin2 = 4,  echoPin2 = 5;
int trigPin3 = 6,  echoPin3 = 7;
int trigPin4 = 8,  echoPin4 = 9;
int trigPin5 = 10, echoPin5 = 11;
int trigPin6 = 30, echoPin6 = 31;
int trigPin7 = 32, echoPin7 = 33;
int trigPin8 = 34, echoPin8 = 35;

void setup() {
  Serial.begin(9600);
  pinMode(trigPin1, OUTPUT); pinMode(echoPin1, INPUT);
  pinMode(trigPin2, OUTPUT); pinMode(echoPin2, INPUT);
  pinMode(trigPin3, OUTPUT); pinMode(echoPin3, INPUT);
  pinMode(trigPin4, OUTPUT); pinMode(echoPin4, INPUT);
  pinMode(trigPin5, OUTPUT); pinMode(echoPin5, INPUT);
  for (int i = 0; i <= 3; i++) {
    pinMode(motor[i], OUTPUT);
  }
}

// Basic differential-drive motion primitives.
void forward() {
  digitalWrite(motor[0], HIGH);
  digitalWrite(motor[1], LOW);
  digitalWrite(motor[2], HIGH);
  digitalWrite(motor[3], LOW);
}

void back() {
  digitalWrite(motor[0], LOW);
  digitalWrite(motor[1], HIGH);
  digitalWrite(motor[2], LOW);
  digitalWrite(motor[3], HIGH);
}

void right() {
  digitalWrite(motor[0], LOW);
  digitalWrite(motor[1], HIGH);
  digitalWrite(motor[2], HIGH);
  digitalWrite(motor[3], LOW);
}

void left() {
  digitalWrite(motor[0], HIGH);
  digitalWrite(motor[1], LOW);
  digitalWrite(motor[2], LOW);
  digitalWrite(motor[3], HIGH);
}

void sp() {   // stop: both direction inputs high brakes each motor
  digitalWrite(motor[0], HIGH);
  digitalWrite(motor[1], HIGH);
  digitalWrite(motor[2], HIGH);
  digitalWrite(motor[3], HIGH);
}

// Trigger one ultrasonic sensor and return the measured distance in cm.
long readDistanceCm(int trigPin, int echoPin) {
  digitalWrite(trigPin, LOW);
  delayMicroseconds(2);
  digitalWrite(trigPin, HIGH);
  delayMicroseconds(10);
  digitalWrite(trigPin, LOW);
  long duration = pulseIn(echoPin, HIGH);
  return (duration / 2) / 29.1;   // speed of sound: roughly 29.1 us per cm
}

// Print one sensor reading, or "Out of range" for invalid values.
void reportSensor(int index, int trigPin, int echoPin) {
  long distance = readDistanceCm(trigPin, echoPin);
  if (distance >= 500 || distance <= 0) {
    Serial.println("Out of range");
  } else {
    Serial.print("Sensor");
    Serial.print(index);
    Serial.print(" ");
    Serial.print(distance);
    Serial.println("cm");
  }
  delay(100);
}

void loop() {
  // Read and report the five ultrasonic sensors.
  reportSensor(1, trigPin1, echoPin1);
  reportSensor(2, trigPin2, echoPin2);
  reportSensor(3, trigPin3, echoPin3);
  reportSensor(4, trigPin4, echoPin4);
  reportSensor(5, trigPin5, echoPin5);

  // Fixed, time-based test path: forward, two left turns, forward, stop.
  forward();
  delay(10000);
  left();
  delay(1500);
  forward();
  delay(2000);
  left();
  delay(1500);
  forward();
  delay(10000);
  sp();
  delay(20000);
}

Encoder Motor Code


#include <Encoder.h>

// L298N inputs and one quadrature encoder on pins 50 and 52.
int in1 = 22, in2 = 24, in3 = 26, in4 = 28;
Encoder E1(50, 52);

void setup() {
  pinMode(in1, OUTPUT);
  pinMode(in2, OUTPUT);
  pinMode(in3, OUTPUT);
  pinMode(in4, OUTPUT);
  Serial.begin(115200);
}

void loop() {
  long P1 = E1.read();        // signed encoder count
  Serial.println(P1);
  if (P1 < 0) {
    // Negative count: drive motor A in one direction.
    analogWrite(in1, 0);
    analogWrite(in2, 255);
    analogWrite(in3, 0);
    analogWrite(in4, 0);
  } else {
    // Zero or positive count: keep the motors stopped.
    analogWrite(in1, 0);
    analogWrite(in2, 0);
    analogWrite(in3, 0);
    analogWrite(in4, 0);
  }
}

Python IDLE Code

Code 1
# For each cylinder in the scan, find its cartesian coordinates,
# in the scanner's coordinate system.
# Write the result to a file which contains all cylinders, for all scans.
# 03_d_find_cylinders_cartesian
# Claus Brenner, 09 NOV 2012
from lego_robot import *
from math import sin, cos

# Find the derivative in scan data, ignoring invalid measurements.
def compute_derivative(scan, min_dist):
    jumps = [0]
    for i in range(1, len(scan) - 1):
        l = scan[i - 1]
        r = scan[i + 1]
        if l > min_dist and r > min_dist:
            derivative = (r - l) / 2.0
            jumps.append(derivative)
        else:
            jumps.append(0)
    jumps.append(0)
    return jumps

# For each area between a left falling edge and a right rising edge,
# determine the average ray number and the average depth.
def find_cylinders(scan, scan_derivative, jump, min_dist):
    cylinder_list = []
    on_cylinder = False
    direction = 'Left'
    sum_ray, sum_depth, rays = 0.0, 0.0, 0
    discard = False

    for i in range(len(scan_derivative)):
        # Whenever a cylinder is found, a tuple
        # (average_ray, average_depth) is added to cylinder_list.
        current_der = scan_derivative[i]
        if abs(current_der) > jump:
            if on_cylinder and direction == 'Left':
                if current_der < 0:  # Left again
                    discard = True
                else:
                    on_cylinder = False
                    average_ray = sum_ray / rays
                    average_depth = sum_depth / rays
                    cylinder_list.append((average_ray, average_depth))
                    sum_ray, sum_depth, rays = 0.0, 0.0, 0
            if not on_cylinder and current_der < 0:
                on_cylinder = True
                direction = 'Left'
        if scan[i] <= min_dist:
            discard = True
        if on_cylinder and scan[i] > min_dist:
            rays += 1
            sum_ray += i
            sum_depth += scan[i]
        if discard:
            sum_ray, sum_depth, rays = 0.0, 0.0, 0
            discard = False

    return cylinder_list

def compute_cartesian_coordinates(cylinders, cylinder_offset):
    result = []
    for c in cylinders:
        # Conversion from polar to Cartesian coordinates.
        # c is a tuple (beam_index, range).
        # For converting the beam index to an angle, use
        # LegoLogfile.beam_index_to_angle(beam_index).
        ray_ix = c[0]
        rng = c[1]
        ray_angle = LegoLogfile.beam_index_to_angle(ray_ix)
        x = cos(ray_angle) * (rng + cylinder_offset)
        y = sin(ray_angle) * (rng + cylinder_offset)
        result.append((x, y))
    return result

if __name__ == '__main__':
    minimum_valid_distance = 20.0
    depth_jump = 100.0
    cylinder_offset = 90.0

    # Read the logfile which contains all scans.
    logfile = LegoLogfile()
    logfile.read("robot4_scan.txt")

    # Write a result file containing all cylinder records.
    # Format is: D C x[in mm] y[in mm] ...
    # With zero or more points.
    # Note "D C" is also written for otherwise empty lines (no
    # cylinders in scan).
    out_file = open("cylinders.txt", "w")
    for scan in logfile.scan_data:
        # Find cylinders.
        der = compute_derivative(scan, minimum_valid_distance)
        cylinders = find_cylinders(scan, der, depth_jump,
                                   minimum_valid_distance)
        cartesian_cylinders = compute_cartesian_coordinates(cylinders,
                                                            cylinder_offset)
        # Write to file: one "D C" record per scan, all points on one line.
        print("D C", end=' ', file=out_file)
        for c in cartesian_cylinders:
            print("%.1f %.1f" % c, end=' ', file=out_file)
        print(file=out_file)
    out_file.close()
Code 2
# Subsample the scan. For each point, find a closest point on the
# wall of the arena.
# From those point pairs, estimate a transform and apply this to the pose.
# 05_b_estimate_wall_transform
# Claus Brenner, 17 NOV 2012
from lego_robot import *
from slam_b_library import filter_step
from slam_04_a_project_landmarks import write_cylinders
from slam_04_d_apply_transform_question import \
    estimate_transform, apply_transform, correct_pose
from slam_05_a_find_wall_pairs_question import \
    get_subsampled_points, get_corresponding_points_on_wall

if __name__ == '__main__':
    # The constants we used for the filter_step.
    scanner_displacement = 30.0
    ticks_to_mm = 0.349
    robot_width = 150.0

    # The start pose we obtained miraculously.
    pose = (1850.0, 1897.0, 3.717551306747922)

    # Read the logfile which contains all scans.
    logfile = LegoLogfile()
    logfile.read("robot4_motors.txt")
    logfile.read("robot4_scan.txt")

    # Iterate over all positions.
    out_file = open("estimate_wall_transform.txt", "w")
    for i in range(len(logfile.scan_data)):
        # Compute the new pose.
        pose = filter_step(pose, logfile.motor_ticks[i],
                           ticks_to_mm, robot_width,
                           scanner_displacement)

        # Subsample points.
        subsampled_points = get_subsampled_points(logfile.scan_data[i])
        world_points = [LegoLogfile.scanner_to_world(pose, c)
                        for c in subsampled_points]

        # Get the transformation.
        left, right = get_corresponding_points_on_wall(world_points)
        trafo = estimate_transform(left, right, fix_scale=True)

        # Correct the initial position using trafo. Also transform points.
        if trafo:
            pose = correct_pose(pose, trafo)
            world_points = [apply_transform(trafo, p) for p in world_points]
        else:
            world_points = []

        # Write to file.
        # The pose.
        print("F %f %f %f" % pose, file=out_file)
        # Write the scanner points and corresponding points.
        write_cylinders(out_file, "W C", world_points)

    out_file.close()

ROS Navigation Code

#!/usr/bin/env python
import rospy
from geometry_msgs.msg import Twist, Point
from sensor_msgs.msg import LaserScan
from nav_msgs.msg import Odometry
from tf.transformations import euler_from_quaternion
import math

x = 0.0
y = 0.0
theta = 0.0
regions = None

def Waypoints(t):
    global y
    y = 2*math.sin(t)*math.sin(t/2)
    return y

def odom_callback(msg):
    global x
    global y
    global theta
    x = msg.pose.pose.position.x
    y = msg.pose.pose.position.y
    rot_q = msg.pose.pose.orientation
    (roll, pitch, theta) = euler_from_quaternion([rot_q.x, rot_q.y, rot_q.z, rot_q.w])

def laser_callback(msg):
    global regions
    regions = {
        'bright': min(min(msg.ranges[0:143]), 10),
        'fright': min(min(msg.ranges[144:287]), 10),
        'front': min(min(msg.ranges[288:431]), 10),
        'fleft': min(min(msg.ranges[432:575]), 10),
        'bleft': min(min(msg.ranges[576:713]), 10),
    }

def control_loop():
    rospy.init_node('ebot_controller')
    pub = rospy.Publisher('/cmd_vel', Twist, queue_size=10)
    rospy.Subscriber('/odom', Odometry, odom_callback)
    rospy.Subscriber('/ebot/laser/scan', LaserScan, laser_callback)
    rate = rospy.Rate(10)
    velocity_msg = Twist()
    velocity_msg.linear.x = 0
    velocity_msg.angular.z = 0
    # pub.publish(velocity_msg)
    global t, x, y, theta, regions
    while not rospy.is_shutdown():
        # Algorithm to complete the obstacle course:
        # first follow the sine-curve waypoints, then avoid obstacles.
        curr_x = 0.5
        while curr_x < 2*math.pi:
            goal = Point()
            goal.x = curr_x
            goal.y = Waypoints(curr_x)
            while 1:
                inc_x = goal.x - x
                inc_y = goal.y - y
                angle_to_goal = math.atan2(inc_y, inc_x)
                if inc_x < 0.1 and inc_y < 0.1:
                    break
                if abs(angle_to_goal - theta) > 0.1:
                    if angle_to_goal > theta:
                        velocity_msg.linear.x = 0.4
                        velocity_msg.angular.z = 2.0
                    else:
                        velocity_msg.linear.x = 0.4
                        velocity_msg.angular.z = -2.0
                else:
                    velocity_msg.linear.x = 0.4
                    velocity_msg.angular.z = 0.0
                pub.publish(velocity_msg)
                rate.sleep()
            x = goal.x
            y = goal.y
            curr_x = curr_x + 0.5
            # break  # curve complete

        # Starting with the obstacle section.
        # laser = LaserScan()
        while curr_x < 12.5:
            velocity_msg.linear.x = 0
            velocity_msg.angular.z = 0
            if regions['front'] > 1 and regions['fleft'] > 1 and regions['fright'] > 1:
                state_description = 'case 1 - nothing'
                inc_x = 12.50 - x
                inc_y = 0.0 - y
                angle_to_goal = math.atan2(inc_y, inc_x)
                if abs(angle_to_goal - theta) > 0.1:
                    if angle_to_goal > theta:
                        velocity_msg.linear.x = 0.4
                        velocity_msg.angular.z = 2.0
                    else:
                        velocity_msg.linear.x = 0.4
                        velocity_msg.angular.z = -2.0
                else:
                    velocity_msg.linear.x = 0.4
                    velocity_msg.angular.z = 0.0
            elif regions['front'] < 1 and regions['fleft'] > 1 and regions['fright'] > 1:
                # state_description = 'case 2 - front'
                velocity_msg.linear.x = 0
                velocity_msg.angular.z = 2.5
            elif regions['front'] > 1 and regions['fleft'] > 1 and regions['fright'] < 1:
                # state_description = 'case 3 - fright'
                velocity_msg.linear.x = 0
                velocity_msg.angular.z = 2.5
            elif regions['front'] > 1 and regions['fleft'] < 1 and regions['fright'] > 1:
                # state_description = 'case 4 - fleft'
                velocity_msg.linear.x = 0
                velocity_msg.angular.z = -2.5
            elif regions['front'] < 1 and regions['fleft'] > 1 and regions['fright'] < 1:
                # state_description = 'case 5 - front and fright'
                velocity_msg.linear.x = 0
                velocity_msg.angular.z = 2.5
            elif regions['front'] < 1 and regions['fleft'] < 1 and regions['fright'] > 1:
                # state_description = 'case 6 - front and fleft'
                velocity_msg.linear.x = 0
                velocity_msg.angular.z = -2.5
            elif regions['front'] < 1 and regions['fleft'] < 1 and regions['fright'] < 1:
                # state_description = 'case 7 - front and fleft and fright'
                velocity_msg.linear.x = 0
                velocity_msg.angular.z = 2.5
            elif regions['front'] > 1 and regions['fleft'] < 1 and regions['fright'] < 1:
                # state_description = 'case 8 - fleft and fright'
                velocity_msg.linear.x = 0.4
                velocity_msg.angular.z = 0
            # rospy.loginfo(state_description)
            pub.publish(velocity_msg)
            curr_x = x
            # regions = LaserScan()
            # pub.publish(laser)
            rate.sleep()

        velocity_msg.linear.x = 0.0
        velocity_msg.angular.z = 0.0
        break

if __name__ == '__main__':
    try:
        control_loop()
    except rospy.ROSInterruptException:
        pass
