ON
Wayfinding and Navigation Robot
Using ROS
BACHELOR OF TECHNOLOGY
IN
ELECTRICAL & ELECTRONICS ENGINEERING
Submitted By
I/We hereby certify that the work presented in the project report entitled "Wayfinding and Navigation Robot Using ROS", in partial fulfillment of the requirements for the award of the degree of Bachelor of Technology in Electrical & Electronics Engineering from Dr. Akhilesh Das Gupta Institute of Technology & Management, New Delhi, is an authentic record of our own work carried out during the period from August 2021 to December 2021 under the guidance of Md. Saquib Faraz, Assistant Professor, EEE Department.
The matter presented in this project has not been submitted by us for the award of any other
degree elsewhere.
Bhavya Sharma Harshit Singh
40296204918 00996204918
Ujjwal Gupta Navendu Kumar
356962049118 01296204918
This is to certify that the above statement made by the candidates is correct to the best of our
knowledge.
Dr. Amruta Pattnaik Dr. Parthish Kumar Paul Mr. Ajit Kumar Sharma
Project Coordinator Project Coordinator H.O.D., EEE Deptt.
ACKNOWLEDGEMENT
I/We would like to acknowledge the contributions of the following persons, without whose
help and guidance this report would not have been completed.
I/We acknowledge, with respect and gratitude, the counsel and support of our project guide Md. Saquib Faraz, Assistant Professor, EEE Department, whose expertise, guidance, support, encouragement, and enthusiasm have made this report possible. His feedback vastly improved the quality of this report and provided an enthralling experience. I/We are indeed proud and fortunate to be supervised by him.
We are thankful to Mr. Ajit Kumar Sharma, H.O.D. EEE department, Dr. Akhilesh Das
Gupta Institute of Technology & Management, New Delhi for his constant encouragement,
valuable suggestions and moral support and blessings.
I/We are immensely thankful to our esteemed Director, Prof. (Dr.) Sanjay Kumar, Dr. Akhilesh Das Gupta Institute of Technology & Management, New Delhi, for his never-ending motivation and support.
I/We shall ever remain indebted to Dr. Amruta Patnaik (Project Coordinator), Dr.
Parthish Kumar Paul (Project Coordinator), Departmental faculty and staff members of
Dr. Akhilesh Das Gupta Institute of Technology & Management, New Delhi.
Finally, yet importantly, I/We would like to express our heartfelt thanks to God, our beloved
parents for their blessings, our friends/classmates for their help and wishes for the successful
completion of this project.
Page No.
Title Page
Certificate ii
Acknowledgment iii
Abstract iv
Table of Contents v
List of Figures vii
REFERENCES
APPENDIX
List of Figures
Figure No. Title of Figure Page No.
1.1 Localization Methods 2
2.1 Arduino MEGA 2560 Board 10
2.2 LIDAR 11
2.3 Quadrature Pulses 12
2.4 Connecting Wires 14
2.5 Motor Driver L293N 2A 15
2.6 Acrylic Chassis 16
2.7 HC-05 Module 17
2.8 Python IDLE & ROS 19
3.1 Low-Pass Filter Characteristics Curve 21
3.2 High-Pass Filter Characteristics Curve 21
4.1 Applications 28
5.1 Mathematical position of scanner and motors with the reference, along with the landmarks placed randomly in the arena 26
5.2 Improvement of robot state using the sensor data by applying various techniques 26
5.3 Real Time Simulation of LIDAR 27
5.4 Robot setup with RPi LiDAR scanning and data collection software 27
5.5 Electronic Setup of the SLAM Robot 27
5.6 Frontal View of SLAM Robot 27
CHAPTER 1 INTRODUCTION AND LITERATURE REVIEW
1.1. Introduction
Research on the SLAM problem began within the robotics community
(arguably with the 1986 papers by Smith and Cheeseman), usually with wheeled
robots traversing a flat ground plane. Typically, this was done by combining sensor
readings (such as from a laser scanner or sonar) with information about the control
input (e.g. steering angle) and the measured robot state (e.g. counting wheel
rotations). This may seem far removed from tracking a handheld camera freely
moving in space, but embodies many of the core difficulties of SLAM, such as creating
a consistent and accurate map, and making best use of multiple unreliable sources of
information.
More recently the use of visual sensors has become an important aspect of
SLAM research, in part because an image provides a rich source of information about
the structure of the environment (containing more information than a sonar ping for
example). A large amount of research on visual SLAM used stereo cameras, or
cameras alongside other sensors (such as accelerometers or GPS), but since around
2001 a number of works showed how SLAM could be done successfully using only a
single camera (known as monocular visual SLAM), for example the seminal work of
Andrew Davison at the University of Oxford.
This was crucial in making SLAM a more widely useful technology, since
devices equipped with just a single camera - such as webcams and mobile phones -
are vastly more common and accessible than specialist sensing equipment. More
recent work has demonstrated how monocular visual SLAM can be used to create
large scale maps, how the maps can be automatically enhanced with meaningful 3D
structures, and recover extremely detailed shapes in real time. SLAM is an active field
of research within computer vision and new and improved techniques are constantly
emerging.
1.2. Basics of SLAM Robot
Mobile robots are expected to perform complicated tasks that require navigation in
complex and dynamic indoor and outdoor environments without any human input. In
order to autonomously navigate, path plan, and perform these tasks efficiently and
safely, the robot needs to be able to localize itself in its environment based on the
constructed maps. Mapping the spatial information of the environment is done online
with no prior knowledge of the robot’s location; the built map is subsequently used by
the robot for navigation.
Multi-Sensor Fusion
Mobile robots are usually equipped with several sensor systems to avoid the
limitations when only one sensor is used to reconstruct the environment. Relative
position measurements provide precise positioning information constantly, and at
certain times absolute measurements are made to correct potential errors. There are
a number of approaches to sensor fusion for robot localization, including merging
multiple sensor feeds at the lowest level before being processed homogeneously, and
hierarchical approaches to fuse state estimates derived independently from multiple
sensors.
The position measurements can be fundamentally combined in a formal
probabilistic framework, such as the Markov Localization framework. Based on all
available information, the robot can believe to be at a certain location to a certain
degree. The localization problem consists of estimating the probability density over the
space of all locations. The Markov Localization framework combines information from
multiple sensors in the form of relative and absolute measurements to form a
combined belief in the location.
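As an illustrative sketch of this idea (the corridor map, sensor accuracy, and motion model below are assumptions made for the example, not values from this project), relative and absolute measurements can be fused as alternating shift and reweight steps over a belief histogram:

```python
# Minimal 1D Markov localization sketch: the robot lives on a circular
# corridor of 10 cells, some of which contain a landmark (a door).
# Illustrative assumptions: the sensor is right 90% of the time, motion is exact.

def normalize(belief):
    s = sum(belief)
    return [b / s for b in belief]

def motion_update(belief, step):
    # Relative measurement (odometry): shift the belief by `step` cells.
    n = len(belief)
    return [belief[(i - step) % n] for i in range(n)]

def sensor_update(belief, world, measurement, p_hit=0.9):
    # Absolute measurement: reweight each cell by how well it explains
    # the observation, then renormalize (Bayes rule).
    weighted = [b * (p_hit if world[i] == measurement else 1 - p_hit)
                for i, b in enumerate(belief)]
    return normalize(weighted)

world = ['door', 'wall', 'wall', 'door', 'wall',
         'wall', 'wall', 'door', 'wall', 'wall']
belief = [0.1] * 10                             # uniform prior: position unknown

belief = sensor_update(belief, world, 'door')   # robot sees a door
belief = motion_update(belief, 1)               # moves one cell to the right
belief = sensor_update(belief, world, 'wall')   # now sees a wall
print(max(range(10), key=lambda i: belief[i]))  # most likely cell after fusion
```

Each sensor update concentrates probability mass on cells consistent with the observation, while each motion update shifts (and in a real system would also blur) the belief, which is exactly the relative/absolute fusion described above.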
Programming
The Mega 2560 board can be programmed with the Arduino Software (IDE).
The ATmega2560 on the Mega 2560 comes preprogrammed with a bootloader that
allows you to upload new code to it without the use of an external hardware
programmer. It communicates using the original STK500 protocol (reference, C
header files).
You can also bypass the bootloader and program the microcontroller through the
ICSP (InCircuit Serial Programming) header using Arduino ISP or similar; see these
instructions for details.
The ATmega16U2 (or 8U2 in the rev1 and rev2 boards) firmware source code is
available in the Arduino repository. The ATmega16U2/8U2 is loaded with a DFU
bootloader, which can be activated by:
• On Rev1 boards: connecting the solder jumper on the back of the board (near
the map of Italy) and then resetting the 8U2.
• On Rev2 or later boards: a resistor pulls the 8U2/16U2 HWB line to ground, making it easier to put into DFU mode. You can then use Atmel's FLIP software (Windows) or the DFU programmer (Mac OS X and Linux) to load new firmware, or use the ISP header with an external programmer.
Power
The Mega 2560 can be powered via the USB connection or with an external
power supply. The power source is selected automatically. External (non-USB) power
can come either from an AC-to-DC adapter (wall-wart) or battery. The adapter can be
connected by plugging a 2.1mm center-positive plug into the board's power jack.
Leads from a battery can be inserted in the GND and Vin pin headers of the POWER
connector.
The board can operate on an external supply of 6 to 20 volts. If supplied with less than
7V, however, the 5V pin may supply less than five volts and the board may become
unstable. If using more than 12V, the voltage regulator may overheat and damage the
board. The recommended range is 7 to 12 volts.
The power pins are as follows:
• Vin: The input voltage to the board when it's using an external power source
(as opposed to 5 volts from the USB connection or other regulated power
source). You can supply voltage through this pin, or, if supplying voltage via
the power jack, access it through this pin.
• 5V: This pin outputs a regulated 5V from the regulator on the board. The board
can be supplied with power either from the DC power jack (7 - 12V), the USB
connector (5V), or the VIN pin of the board (7-12V). Supplying voltage via the
5V or 3.3V pins bypasses the regulator, and can damage your board. We don't
advise it.
• 3V3: A 3.3 volt supply generated by the on-board regulator. Maximum current draw is 50 mA.
• GND: Ground pins.
• IOREF: This pin on the board provides the voltage reference with which the microcontroller operates. A properly configured shield can read the IOREF pin voltage and select the appropriate power source or enable voltage translators on the outputs for working with 5V or 3.3V.
Memory
The ATmega2560 has 256 KB of flash memory for storing code (of which 8 KB
is used for the bootloader), 8 KB of SRAM and 4 KB of EEPROM (which can be read
and written with the EEPROM library).
Input and Output
See the mapping between Arduino pins and ATmega2560 ports:
• AREF. Reference voltage for the analog inputs. Used with analogReference().
• Reset. Bring this line LOW to reset the microcontroller. Typically used to add a
reset button to shields which block the one on the board.
Communication
The Mega 2560 board has a number of facilities for communicating with a computer,
another board, or other microcontrollers. The ATmega2560 provides four hardware
UARTs for TTL (5V) serial communication. An ATmega16U2 (ATmega 8U2 on the
revision 1 and revision 2 boards) on the board channels one of these over USB and
provides a virtual COM port to software on the computer (Windows machines will need
a .inf file, but OSX and Linux machines will recognize the board as a COM port
automatically). The Arduino Software (IDE) includes a serial monitor which allows
simple textual data to be sent to and from the board. The RX and TX LEDs on the
board will flash when data is being transmitted via the ATmega8U2/ATmega16U2
chip and USB connection to the computer (but not for serial communication on pins 0
and 1).
A SoftwareSerial library allows for serial communication on any of the Mega
2560's digital pins. The Mega 2560 also supports TWI and SPI communication. The
Arduino Software (IDE) includes a Wire library to simplify use of the TWI bus; see the
documentation for details. For SPI communication, use the SPI library.
The RPLIDAR A1's scanning frequency reaches 5.5 Hz when sampling 1450 points each
round, and it can be configured up to a maximum of 10 Hz. The RPLIDAR A1 is
essentially a laser triangulation measurement system. It works excellently in all
kinds of indoor environments, and in outdoor environments without direct sunlight
exposure.
Fig. 2.2 LiDAR [3]
Applications
• Absolute position
• Incremental position
Absolute position encoders will tell you what the angular position is all the time,
even between power cycles. Most use Gray code (a modified form of binary) with
several "tracks" (bits) to read position.
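As a short illustrative aside (not part of the project code), the standard binary/Gray conversion is a one-line XOR, and it guarantees that consecutive positions differ in exactly one bit, so a misread during a transition is off by at most one count:

```python
def binary_to_gray(n):
    # Adjacent integers map to Gray codewords differing in exactly one bit.
    return n ^ (n >> 1)

def gray_to_binary(g):
    # Invert by folding the bits back down with cumulative XOR.
    n = g
    g >>= 1
    while g:
        n ^= g
        g >>= 1
    return n

codes = [binary_to_gray(i) for i in range(16)]
# Every adjacent pair of codewords differs in exactly one bit:
print(all(bin(a ^ b).count("1") == 1 for a, b in zip(codes, codes[1:])))  # True
```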
Incremental position encoders will tell you what the angular position is, but only
relative to where it was when you started paying attention (usually on power-up). Two
common types of incremental outputs are:
• Incremental (clever name)
• Quadrature
A plain incremental output is rather useless for position control because it doesn't
give you any information about what direction you are turning, just that you are
turning. Quadrature encoders give you direction as well as incremental position. This
article deals with efficiently reading quadrature output on an Arduino by utilizing
external interrupts and some AVR bare-bones code.
What is Quadrature?
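The two output channels, A and B, are 90 degrees out of phase, so the sequence of 2-bit (A, B) states encodes both movement and direction. A minimal sketch of the decoding idea (the article applies the same state table with external interrupts on the Arduino; the state stream below is illustrative):

```python
# Quadrature decoding sketch. Valid clockwise sequence of (A, B) states:
# 00 -> 01 -> 11 -> 10 -> 00 -> ...
# Transition table indexed by (previous_state, new_state): +1 for one
# direction, -1 for the other, 0 for no change or an ambiguous double step.
DELTA = {}
cw_cycle = [0b00, 0b01, 0b11, 0b10]
for i, s in enumerate(cw_cycle):
    DELTA[(s, s)] = 0
    DELTA[(s, cw_cycle[(i + 1) % 4])] = +1   # one step clockwise
    DELTA[(s, cw_cycle[(i - 1) % 4])] = -1   # one step counter-clockwise
    DELTA[(s, cw_cycle[(i + 2) % 4])] = 0    # two edges missed: ambiguous

def count_transitions(states):
    # Accumulate a signed position count from a stream of (A, B) states.
    position = 0
    prev = states[0]
    for s in states[1:]:
        position += DELTA[(prev, s)]
        prev = s
    return position

# Two full electrical cycles clockwise, then one step back:
stream = cw_cycle * 2 + [0b00, 0b10]
print(count_transitions(stream))   # 8 steps forward, one back: prints 7
```

On the Arduino, the same table is typically evaluated inside an interrupt service routine on each edge of channel A or B rather than over a recorded stream.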
Selecting an Encoder
Selecting the best encoder for your design is about more than just getting the highest
resolution you can afford, since the higher the pulses per revolution (p/r), the faster
you have to read the events.
From this example we can see some important factors that must be considered when
selecting an appropriate encoder:
The remainder of this article will deal with the third bullet point, allowing you to read the
encoder output as fast as possible. This will give you more freedom when considering the
other two points.
Mobile wheeled or tracked robots have a minimum of two motors, which are
used to propel and steer the robot. Hobbyists tend to choose skid steering (like a tank)
because it is simple to design, incorporate and control. A three-wheeled robot's
third (rear) wheel usually prevents the robot from falling over. Four wheeled robots
have either two or four drive motors and use skid steering. Six wheeled robots most
commonly have either two, four or six drive motors. Individuals who use an R/C car as
a basis for their robot use rack and pinion steering where one motor is connected to a
drive train and the other (usually a servo motor) is used for steering. Increasing the
number of drive motors helps the robot to climb steeper inclines by increasing the
torque.
Adding “idle” wheels (wheels not connected to a motor) often has the
unfortunate consequence of removing weight from the drive wheels resulting in slip
and loss of traction. In the image below, the center wheel, chosen mistakenly as the
driven wheel, often loses contact with the ground. The way around this is to add
suspension.
This L298N-based motor driver module is a high-power motor driver, perfect
for driving DC motors and stepper motors. It uses the popular L298 motor driver IC
and has an onboard 5V regulator which can supply an external circuit. It can
control up to 4 DC motors, or 2 DC motors with directional and speed control. This
motor driver is perfect for robotics and mechatronics projects, for controlling motors
from microcontrollers, switches, relays, etc., and for driving DC and stepper motors
for micromouse, line-following robots, robot arms, etc.
Pins:
5.GND: Ground
6. 5V: 5V input (unnecessary if your power source is 7V-35V; if the power source is
7V-35V, this pin can act as a 5V output)
7. EnA: Enables PWM signal for Motor A (Please see the “Arduino Sketch
Considerations” section)
12. EnB: Enables PWM signal for Motor B (Please see the “Arduino Sketch
Considerations” section)
Fig. 2.5 Motor Driver L293N 2A [6]
Features
• Made from Laser cut 3mm Acrylic Sheet
• Multiple slots for mounting extra electronics easily
• Compatible with 2 Axis Pan-Til for mounting wireless camera, Sharp sensors
and Ultrasonic distance sensor
• Compatible with all our normal DC geared motors, High torque motors and
High torque motors with current controlled drive
• Arduino / Raspberry Pi / Rhino board can be mounted inside as well as on top.
• Holes and slots for mounting Line sensor array on bottom
• Holes for mounting castors on bottom in case for use with only 2 motors.
Applications
• Many types of autonomous and manual controlled robots
Fig. 2.6 Acrylic Chassis [7]
• Multi-window text editor with syntax highlighting, autocompletion, smart indent
and more.
• Python shell with syntax highlighting.
• Integrated debugger with stepping, persistent breakpoints, and call stack
visibility.
1. A* Algorithm
• A* is the most popular choice for pathfinding, because it is fairly flexible and
can be used in a wide range of contexts.
• It is an artificial intelligence algorithm used to find the shortest possible path
from the start to the goal state.
• It can be applied to character pathfinding, puzzle solving and much more. It
has countless applications.
• Peter Hart, Nils Nilsson and Bertram Raphael of Stanford Research Institute
(now SRI International) first published the algorithm in 1968.
• The A* algorithm uses both the actual distance from the start and the
estimated distance to the goal.
Now let's see how the A* algorithm works. It is based on the following concepts:
• START and GOAL states - where the program begins and where it aims to get.
• How you measure progress towards the goal.
• How you generate children, i.e. all possible paths towards the solution.
At each iteration of its main loop, A* needs to determine which of its paths to extend. It
does so based on the cost of the path and an estimate of the cost required to extend
the path all the way to the goal. Specifically, A* selects the path that minimizes
f(n) = g(n) + h(n), where
n = the next node on the path,
g(n) = the cost of the path from the start node to n,
h(n) = a heuristic function that estimates the cost of the cheapest path from n to the goal.
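As an illustrative sketch (not the report's implementation), the loop above can be written for a small 4-connected occupancy grid, using the Manhattan distance as the heuristic h(n); the grid and start/goal cells below are made up for the example:

```python
import heapq

def astar(grid, start, goal):
    # A* on a 4-connected grid of 0 (free) / 1 (blocked) cells.
    # f(n) = g(n) + h(n), with h = Manhattan distance (admissible here).
    rows, cols = len(grid), len(grid[0])
    h = lambda n: abs(n[0] - goal[0]) + abs(n[1] - goal[1])
    open_heap = [(h(start), 0, start)]          # entries are (f, g, node)
    came_from = {}
    best_g = {start: 0}
    while open_heap:
        f, g, node = heapq.heappop(open_heap)
        if node == goal:                        # reconstruct path backwards
            path = [node]
            while node in came_from:
                node = came_from[node]
                path.append(node)
            return path[::-1]
        if g > best_g.get(node, float('inf')):
            continue                            # stale heap entry, skip it
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1                      # uniform step cost
                if ng < best_g.get((nr, nc), float('inf')):
                    best_g[(nr, nc)] = ng
                    came_from[(nr, nc)] = node
                    heapq.heappush(open_heap, (ng + h((nr, nc)), ng, (nr, nc)))
    return None                                 # no path exists

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))
print(len(path) - 1)   # number of moves on the shortest path: 6
```

The heap always pops the node with the smallest f(n), which is exactly the "select the path that minimizes f(n) = g(n) + h(n)" rule described above.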
2. Low-Pass and High-Pass Filters
The low-pass filter has a gain response over the frequency range from zero frequency
(DC) to the cutoff frequency (ωc), which means that any input with a frequency below
ωc gets a pass, and anything above it gets attenuated or rejected. The gain
approaches zero as frequency increases to infinity. Refer to the image below:
The input signal of the filter shown here has equal amplitudes at frequencies ω1 and
ω2.
Opposite to the low-pass filter, the high-pass filter has a gain response over a
frequency range from the cutoff frequency (ωc) to infinity. Any input having a
frequency below ωc gets attenuated or rejected; anything above ωc passes through
unaffected. Refer to the image below:
The input signal of the filter shown here has equal amplitude at frequencies ω1 and
ω2. After passing through the high-pass filter, the output amplitude at ω1 is
significantly decreased because it is below ωc, and at ω2 the signal amplitude
passes through unaffected because it is above ωc.
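The pass/attenuate behaviour described above can be demonstrated with a first-order discrete-time filter; the sample rate, smoothing factor, and test frequencies below are illustrative assumptions, not parameters from this project:

```python
import math

def low_pass(samples, alpha):
    # First-order IIR low-pass: y[n] = y[n-1] + alpha * (x[n] - y[n-1]).
    out, y = [], 0.0
    for x in samples:
        y += alpha * (x - y)
        out.append(y)
    return out

def high_pass(samples, alpha):
    # Complement of the low-pass: keep only what the low-pass removes.
    lp = low_pass(samples, alpha)
    return [x - y for x, y in zip(samples, lp)]

fs = 1000.0                                           # sample rate in Hz (assumed)
t = [i / fs for i in range(2000)]
slow = [math.sin(2 * math.pi * 5 * s) for s in t]     # 5 Hz: below the cutoff
fast = [math.sin(2 * math.pi * 200 * s) for s in t]   # 200 Hz: above the cutoff
signal = [a + b for a, b in zip(slow, fast)]

alpha = 0.15                                          # sets the cutoff frequency
lp = low_pass(signal, alpha)
hp = high_pass(signal, alpha)

amp = lambda s: (max(s[500:]) - min(s[500:])) / 2     # steady-state amplitude
print(amp(lp) < amp(signal))   # low-pass attenuated the 200 Hz component
print(amp(hp) < amp(signal))   # high-pass attenuated the 5 Hz component
```

The low-pass output keeps the 5 Hz component nearly intact while shrinking the 200 Hz one, and the high-pass output does the opposite, matching the ω1/ω2 description above.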
3. Kalman Filter
Implements a linear Kalman filter. For now, the best documentation is the author's
free book, Kalman and Bayesian Filters in Python. The test files in this directory also
give you a basic idea of use, albeit without much description.
In brief, you will first construct this object, specifying the size of the state vector
with dim_x and the size of the measurement vector that you will be using with dim_z.
These are mostly used to perform size checks when you assign values to the various
matrices. For example, if you specified dim_z=2 and then try to assign a 3x3 matrix to
R (the measurement noise matrix), you will get an assert exception because R should
be 2x2. (If for whatever reason you need to alter the size of things midstream, just use
the underscore version of the matrices to assign directly: your_filter._R =
a_3x3_matrix.)
After construction the filter will have default matrices created for you, but you
must specify the values for each. It is usually easiest to just overwrite them rather than
assign to each element yourself. This will be clearer in the example below. All are of
type numpy.array.
These are the matrices (instance variables) which you must specify. All are of type
numpy.array (do NOT use numpy.matrix). If dimensional analysis allows you to get
away with a 1x1 matrix, you may also use a scalar. All elements must have a type of
float.
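The predict/update cycle is easiest to see in the scalar (1x1) case the text says is allowed. The sketch below is written without the library so the arithmetic is visible; the process noise Q, measurement noise R, and the readings are illustrative assumptions:

```python
# Scalar Kalman filter sketch. State x: estimated position; P: its variance.

def predict(x, P, u=0.0, Q=0.01):
    # Constant-position motion model: x stays put, uncertainty grows by Q.
    return x + u, P + Q

def update(x, P, z, R=0.5):
    # Blend prediction and measurement z, weighted by the Kalman gain K.
    K = P / (P + R)
    x = x + K * (z - x)
    P = (1 - K) * P
    return x, P

x, P = 0.0, 1000.0                     # start almost ignorant of the position
for z in [4.9, 5.2, 5.0, 5.1, 4.8]:    # noisy sensor readings near 5
    x, P = predict(x, P)
    x, P = update(x, P, z)
print(round(x, 1))   # the estimate converges toward 5.0
print(P < 0.5)       # and its variance shrinks below the sensor noise R
```

The library's matrix form generalizes exactly this: predict propagates the state and inflates P by Q, and update blends in each measurement through the gain computed from P and R.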
3.2 Simulations
4.1 MERITS
• Simple to implement
• Works well in practice
• Can represent arbitrary, even multi-modal, distributions
• Adaptively focuses on probable regions of the state space
• Deals with non-Gaussian noise
• Operates automatically over ranges from 0.15 m to 12 m
4.2 DEMERITS
• The mapping between observations and landmarks is unknown
• Picking wrong data associations can have catastrophic consequences
(divergence)
• Laser ranging systems are accurate active sensors
• Its most common form operates on the time of flight principle by sending a
laser pulse in a narrow beam towards the object and measuring the time
taken by the pulse to be reflected off the target and returned to the sender.
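The time-of-flight relation in that bullet reduces to distance = speed × round-trip time / 2. A quick sketch with illustrative numbers (the same formula, with the speed of sound, underlies the ultrasonic ranging code in the appendix):

```python
# Time-of-flight ranging sketch (the round-trip times here are made up).
# The pulse travels to the target and back, so divide the round trip by 2.
def tof_distance(round_trip_s, speed=3.0e8):
    # speed defaults to c for a laser pulse; use ~343 m/s for ultrasound.
    return speed * round_trip_s / 2.0

print(tof_distance(1.0e-6))                # laser: a 1 us round trip is ~150 m
print(tof_distance(0.01, speed=343.0))     # ultrasound: 10 ms is ~1.7 m
```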
Sonar-based systems are fast and provide measurements and recognition
capacities with amounts of information similar to vision, but without
appearance data. However, their dependence on inertial sensors, such as
odometers, implies that a small error can have large effects on later position
estimates. Vision systems, on the other hand, are passive; they have long
range and high resolution, but the computation cost is considerably high and
good visual features are more difficult to extract and match. Vision is used to
estimate the 3D structure, feature locations and robot pose, for instance by
means of stereo camera pairs or monocular cameras with structure-from-motion
recovery.
4.3 APPLICATIONS
SLAM is central to a range of indoor, outdoor, in-air and underwater applications for
both manned and autonomous vehicles.
Examples:
• At home: vacuum cleaner, lawn mower
• Air: surveillance with unmanned air vehicles
• Underwater: reef monitoring
• Underground: exploration of mines
• Space: terrain mapping for localization
Fig. 4.1 Applications [2]
CHAPTER 5 CONCLUSIONS AND FUTURE SCOPE
5.1 CONCLUSION
Fig. 5.1 Mathematical position of scanner and motors with the reference along with the
landmarks placed randomly in arena
Fig. 5.2 Improvement of robot state using the sensor data by applying various
techniques
Fig. 5.3 Real Time Simulation of LiDAR
Fig. 5.4 Robot setup with RPI LiDAR Scanning and data collection software
Fig. 5.5 Electronic Setup of the SLAM Robot Fig. 5.6 Frontal View of SLAM Robot
5.2 FUTURE SCOPE
The localization and mapping problem is a fundamental one for any robot
navigating unknown environments. Any robot that needs to move about
autonomously in a space needs to solve the localization problem to know its current
position in the world. In the case of unknown or dynamic environments, where a map
is not available or is constantly changing, the robot also needs to build and update the
map in real time. As of now there is no general solution that works in all cases, but
rather algorithms that solve a specific problem for a particular use case.
Some form of SLAM is already used in almost all the autonomously navigating
robots that we see right now. This includes autonomous vehicles, autonomous aerial
vehicles, robot vacuum cleaners, toys like the Anki Drive, industrial robots, etc. SLAM
is the core of most robots trying to navigate autonomously today.
Coming to the future, I see SLAM remaining relevant, with continued improvements
in methods as more research is done in the field. A little farther ahead, there may be
techniques that take a new perspective on the problem and solve it better, but I think
that will be really far in the future (say 20-40 years), and for that to be implemented
in mainstream products would take even longer. SLAM would still be relevant, but in
a different form than we are familiar with today. Since SLAM is such a fundamental
problem, it would remain relevant in much the same way that understanding Newton's
gravitational model is still relevant to understanding relativistic models.
So, in the future I see it being used extensively for a lot of autonomous
navigation tasks even more than it is now.
REFERENCES
[1]. Christopher M. Bishop. Pattern Recognition and Machine Learning
(Information Science and Statistics). Springer-Verlag, Berlin, Heidelberg, 2006.
ISBN 0387310738.
[2]. Karan Chawla. Github account. https://github.com/karanchawla.
[3]. Joe Dinius. Github account. https://github.com/jwdinius.
[4]. Atsushi Sakai et al. AtsushiSakai/PythonRobotics: Python sample codes for
robotics algorithms. https://github.com/AtsushiSakai/PythonRobotics.
[5]. David González Bautista, Joshué Pérez, Vicente Milanés, and Fawzi
Nashashibi. A review of motion planning techniques for automated vehicles.
pages 1-11, 11 2015.
[6]. Daniel Ingram. Github account. https://github.com/daniel-s-ingram.
[7]. Jesse Levinson, Jake Askeland, Jan Becker, Jennifer Dolson, David Held,
Soeren Kammel, J. Zico Kolter, Dirk Langer, Oliver Pink, Vaughan Pratt,
Michael Sokolsky, Ganymed Stanek, David Stavens, Alex Teichman, Moritz
Werling, and Sebastian Thrun. Towards fully autonomous driving: systems and
algorithms. In Intelligent Vehicles Symposium (IV), 2011 IEEE, 2011.
[8]. Brian Paden, Michal Cap, Sze Zheng Yong, Dmitry Yershov, and Emilio
Frazzoli. A survey of motion planning and control techniques for self-driving
urban vehicles. 2016.
[9]. Alexis Paques. Github account. https://github.com/AlexisTM.
[10]. Alejandro Perez, Robert Platt Jr, George Konidaris, Leslie P. Kaelbling, and
Tomas Lozano-Perez. LQR-RRT*: Optimal sampling-based motion planning with
automatically derived extension heuristics. pages 2537-2542, 05 2012.
[11]. Morgan Quigley, Ken Conley, Brian P. Gerkey, Josh Faust, Tully Foote,
Jeremy Leibs, Rob Wheeler, and Andrew Y. Ng. ROS: an open-source robot
operating system. In ICRA Workshop on Open Source Software, 2009.
[12]. Sebastian Thrun, Wolfram Burgard, and Dieter Fox. Probabilistic Robotics
(Intelligent Robotics and Autonomous Agents). The MIT Press, 2005. ISBN
0262201623.
Arduino Code
char x;
int motor[8] = {22, 24, 26, 28};   // motor direction pins; only the first four are used

int trigPin1 = 2;  int echoPin1 = 3;
int trigPin2 = 4;  int echoPin2 = 5;
int trigPin3 = 6;  int echoPin3 = 7;
int trigPin4 = 8;  int echoPin4 = 9;
int trigPin5 = 10; int echoPin5 = 11;
int trigPin6 = 30; int echoPin6 = 31;
int trigPin7 = 32; int echoPin7 = 33;
int trigPin8 = 34; int echoPin8 = 35;

void setup() {
  Serial.begin(9600);
  // Ultrasonic sensors: trigger pins are outputs, echo pins are inputs.
  // (Sensors 6-8 are declared above but not configured here.)
  pinMode(trigPin1, OUTPUT); pinMode(echoPin1, INPUT);
  pinMode(trigPin2, OUTPUT); pinMode(echoPin2, INPUT);
  pinMode(trigPin3, OUTPUT); pinMode(echoPin3, INPUT);
  pinMode(trigPin4, OUTPUT); pinMode(echoPin4, INPUT);
  pinMode(trigPin5, OUTPUT); pinMode(echoPin5, INPUT);
  for (int i = 0; i <= 3; i++) {
    pinMode(motor[i], OUTPUT);
  }
}

// Motor direction helpers: each motor channel uses two direction pins.
void forward() {
  digitalWrite(motor[0], HIGH);
  digitalWrite(motor[1], LOW);
  digitalWrite(motor[2], HIGH);
  digitalWrite(motor[3], LOW);
}

void back() {
  digitalWrite(motor[0], LOW);
  digitalWrite(motor[1], HIGH);
  digitalWrite(motor[2], LOW);
  digitalWrite(motor[3], HIGH);
}

void right() {
  digitalWrite(motor[0], LOW);
  digitalWrite(motor[1], HIGH);
  digitalWrite(motor[2], HIGH);
  digitalWrite(motor[3], LOW);
}

void left() {
  digitalWrite(motor[0], HIGH);
  digitalWrite(motor[1], LOW);
  digitalWrite(motor[2], LOW);
  digitalWrite(motor[3], HIGH);
}

void sp() {   // stop: both direction pins high on each channel
  digitalWrite(motor[0], HIGH);
  digitalWrite(motor[1], HIGH);
  digitalWrite(motor[2], HIGH);
  digitalWrite(motor[3], HIGH);
}

void loop() {
  // Sensor 1: send a 10 us trigger pulse, time the echo, and convert
  // the round-trip time in microseconds to a distance in centimeters.
  long duration1, distance1;
  digitalWrite(trigPin1, LOW);
  delayMicroseconds(2);
  digitalWrite(trigPin1, HIGH);
  delayMicroseconds(10);
  digitalWrite(trigPin1, LOW);
  duration1 = pulseIn(echoPin1, HIGH);
  distance1 = (duration1 / 2) / 29.1;
  if (distance1 >= 500 || distance1 <= 0) {
    Serial.println("Out of range");
  } else {
    Serial.print("Sensor1 ");
    Serial.print(distance1);
    Serial.println("cm");
  }
  delay(100);

  // Sensor 2
  long duration2, distance2;
  digitalWrite(trigPin2, LOW);
  delayMicroseconds(2);
  digitalWrite(trigPin2, HIGH);
  delayMicroseconds(10);
  digitalWrite(trigPin2, LOW);
  duration2 = pulseIn(echoPin2, HIGH);
  distance2 = (duration2 / 2) / 29.1;
  if (distance2 >= 500 || distance2 <= 0) {
    Serial.println("Out of range");
  } else {
    Serial.print("Sensor2 ");
    Serial.print(distance2);
    Serial.println("cm");
  }
  delay(100);

  // Sensor 3
  long duration3, distance3;
  digitalWrite(trigPin3, LOW);
  delayMicroseconds(2);
  digitalWrite(trigPin3, HIGH);
  delayMicroseconds(10);
  digitalWrite(trigPin3, LOW);
  duration3 = pulseIn(echoPin3, HIGH);
  distance3 = (duration3 / 2) / 29.1;
  if (distance3 >= 500 || distance3 <= 0) {
    Serial.println("Out of range");
  } else {
    Serial.print("Sensor3 ");
    Serial.print(distance3);
    Serial.println("cm");
  }
  delay(100);

  // Sensor 4
  long duration4, distance4;
  digitalWrite(trigPin4, LOW);
  delayMicroseconds(2);
  digitalWrite(trigPin4, HIGH);
  delayMicroseconds(10);
  digitalWrite(trigPin4, LOW);
  duration4 = pulseIn(echoPin4, HIGH);
  distance4 = (duration4 / 2) / 29.1;
Code 1
# For each cylinder in the scan, find its cartesian coordinates,
# in the scanner's coordinate system.
# Write the result to a file which contains all cylinders, for all scans.
# 03_d_find_cylinders_cartesian
# Claus Brenner, 09 NOV 2012
from lego_robot import *
from math import sin, cos

# Find the derivative in scan data, ignoring invalid measurements.
def compute_derivative(scan, min_dist):
    jumps = [0]
    for i in range(1, len(scan) - 1):
        l = scan[i - 1]
        r = scan[i + 1]
        if l > min_dist and r > min_dist:
            derivative = (r - l) / 2.0
            jumps.append(derivative)
        else:
            jumps.append(0)
    jumps.append(0)
    return jumps

# For each area between a left falling edge and a right rising edge,
# determine the average ray number and the average depth.
def find_cylinders(scan, scan_derivative, jump, min_dist):
    cylinder_list = []
    on_cylinder = False
    direction = 'Left'
    sum_ray, sum_depth, rays = 0.0, 0.0, 0
    discard = False
    for i in range(len(scan_derivative)):
        # --->>> Insert your cylinder code here.
        # Whenever you find a cylinder, add a tuple
        # (average_ray, average_depth) to the cylinder_list.
        current_der = scan_derivative[i]
        if abs(current_der) > jump:
            if on_cylinder and direction == 'Left':
                if current_der < 0:  # Left again
                    discard = True
                else:
                    on_cylinder = False
                    average_ray = sum_ray / rays
                    average_depth = sum_depth / rays
                    cylinder_list.append((average_ray, average_depth))
                    sum_ray, sum_depth, rays = 0.0, 0.0, 0
        if not on_cylinder and current_der < 0:
            on_cylinder = True
            direction = 'Left'
        if scan[i] <= min_dist:
            discard = True
        if on_cylinder and scan[i] > min_dist:
            rays += 1
            sum_ray += i
            sum_depth += scan[i]
        if discard:
            sum_ray, sum_depth, rays = 0.0, 0.0, 0
            discard = False
    return cylinder_list

def compute_cartesian_coordinates(cylinders, cylinder_offset):
    result = []
    # print cylinders
    for c in cylinders:
        # --->>> Insert here the conversion from polar to Cartesian coordinates.
        # c is a tuple (beam_index, range).
        # For converting the beam index to an angle, use
        # LegoLogfile.beam_index_to_angle(beam_index)
        ray_ix = c[0]
        rng = c[1]
        ray_angle = LegoLogfile.beam_index_to_angle(ray_ix)
        x = cos(ray_angle) * (rng + cylinder_offset)
        y = sin(ray_angle) * (rng + cylinder_offset)
        result.append((x, y))
    return result

if __name__ == '__main__':
    minimum_valid_distance = 20.0
    depth_jump = 100.0
    cylinder_offset = 90.0

# From the filter_step script:
if __name__ == '__main__':
    # The constants we used for the filter_step.
    scanner_displacement = 30.0
    ticks_to_mm = 0.349
    robot_width = 150.0

    # Write to file.
    # The pose.
    print("F %f %f %f" % pose, file=out_file)
    # Write the scanner points and corresponding points.
    write_cylinders(out_file, "W C", world_points)
    out_file.close()
#!/usr/bin/env python
import rospy
from geometry_msgs.msg import Twist, Point
from sensor_msgs.msg import LaserScan
from nav_msgs.msg import Odometry
from tf.transformations import euler_from_quaternion
import math
x = 0.0
y = 0.0
theta = 0.0
regions = None

def Waypoints(t):
    # The ebot's reference trajectory: y = 2*sin(t)*sin(t/2).
    # Returning the value directly avoids clobbering the global y,
    # which odom_callback keeps updated with the robot's position.
    return 2 * math.sin(t) * math.sin(t / 2)
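The control loop samples this curve every 0.5 m along x to build its goal list. A self-contained sketch of that sampling (the `waypoint_y` name is ours; the formula and the 0.5 m spacing are from the code above):

```python
import math

def waypoint_y(t):
    # The ebot's reference trajectory: y = 2*sin(t)*sin(t/2).
    return 2 * math.sin(t) * math.sin(t / 2)

# Goals sampled every 0.5 m along x, over one period of the curve,
# exactly as the control loop iterates curr_x from 0.5 to 2*pi.
goals = []
t = 0.5
while t < 2 * math.pi:
    goals.append((t, waypoint_y(t)))
    t += 0.5

print(len(goals))   # → 12
```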
def odom_callback(msg):
    global x, y, theta
    x = msg.pose.pose.position.x
    y = msg.pose.pose.position.y
    rot_q = msg.pose.pose.orientation
    (roll, pitch, theta) = euler_from_quaternion([rot_q.x, rot_q.y, rot_q.z, rot_q.w])
def laser_callback(msg):
    global regions
    # Split the 720-ray scan into five contiguous 144-ray sectors and
    # clamp each sector's nearest reading to 10 m.
    regions = {
        'bright': min(min(msg.ranges[0:144]), 10),
        'fright': min(min(msg.ranges[144:288]), 10),
        'front':  min(min(msg.ranges[288:432]), 10),
        'fleft':  min(min(msg.ranges[432:576]), 10),
        'bleft':  min(min(msg.ranges[576:720]), 10),
    }
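The sector reduction can be checked without ROS by feeding the same slicing logic a synthetic scan. In this sketch the `split_regions` helper is ours; the sector boundaries and the 10 m clamp mirror the callback above:

```python
def split_regions(ranges, clamp=10.0):
    # Five 144-ray sectors across a 720-ray scan, each reduced to its
    # nearest reading and clamped to `clamp` meters.
    sectors = {
        'bright': ranges[0:144],
        'fright': ranges[144:288],
        'front':  ranges[288:432],
        'fleft':  ranges[432:576],
        'bleft':  ranges[576:720],
    }
    return {k: min(min(v), clamp) for k, v in sectors.items()}

# Synthetic scan: everything out of range except an obstacle dead ahead.
ranges = [float('inf')] * 720
ranges[360] = 0.8
regions = split_regions(ranges)
print(regions['front'])   # → 0.8
```

Clamping with `min(..., 10)` turns `inf` readings (no return) into a finite "all clear" value, so the threshold tests in the control loop never compare against infinity.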
def control_loop():
    rospy.init_node('ebot_controller')
    pub = rospy.Publisher('/cmd_vel', Twist, queue_size=10)
    rospy.Subscriber('/odom', Odometry, odom_callback)
    rospy.Subscriber('/ebot/laser/scan', LaserScan, laser_callback)
    rate = rospy.Rate(10)
    velocity_msg = Twist()
    velocity_msg.linear.x = 0
    velocity_msg.angular.z = 0
    global x, y, theta, regions
    while not rospy.is_shutdown():
        # Phase 1: follow the sine curve by chasing waypoints sampled
        # every 0.5 m along x.
        curr_x = 0.5
        while curr_x < 2 * math.pi:
            goal = Point()
            goal.x = curr_x
            goal.y = Waypoints(curr_x)
            while True:
                inc_x = goal.x - x
                inc_y = goal.y - y
                angle_to_goal = math.atan2(inc_y, inc_x)
                if abs(inc_x) < 0.1 and abs(inc_y) < 0.1:
                    break
                if abs(angle_to_goal - theta) > 0.1:
                    # Turn toward the goal while moving forward.
                    velocity_msg.linear.x = 0.4
                    velocity_msg.angular.z = 2.0 if angle_to_goal > theta else -2.0
                else:
                    velocity_msg.linear.x = 0.4
                    velocity_msg.angular.z = 0.0
                pub.publish(velocity_msg)
                rate.sleep()
            # Assume the waypoint is reached; odom_callback overwrites
            # these with the next /odom message.
            x = goal.x
            y = goal.y
            curr_x = curr_x + 0.5
        # Phase 2: obstacle course from the end of the curve to x = 12.5.
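The waypoint-chasing rule above is a bang-bang steering law: turn at a fixed rate while the heading error exceeds 0.1 rad, otherwise drive straight. A self-contained unicycle-model simulation of that rule (all names and constants here are ours; the forward speed is reduced so the turning radius v/w stays below the 0.1 m arrival threshold, which the report's 0.4 m/s speed would not guarantee):

```python
import math

def step(x, y, theta, goal, dt=0.1, v=0.1, w=2.0):
    # Bang-bang steering toward the goal bearing, as in the loop above.
    err = math.atan2(goal[1] - y, goal[0] - x) - theta
    err = (err + math.pi) % (2 * math.pi) - math.pi  # normalize to [-pi, pi)
    wz = 0.0 if abs(err) <= 0.1 else (w if err > 0 else -w)
    # Unicycle kinematics integrated over one control period.
    theta += wz * dt
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    return x, y, theta

x = y = theta = 0.0
goal = (1.0, 0.5)
for _ in range(500):
    x, y, theta = step(x, y, theta, goal)
    if math.hypot(goal[0] - x, goal[1] - y) < 0.1:
        break
```

The robot first rotates onto the goal bearing, then drives nearly straight, making small corrections whenever the bearing drifts outside the 0.1 rad dead band.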
        while curr_x < 12.5:
            velocity_msg.linear.x = 0
            velocity_msg.angular.z = 0
            if regions['front'] > 1 and regions['fleft'] > 1 and regions['fright'] > 1:
                # case 1 - nothing ahead: head straight for (12.5, 0).
                inc_x = 12.5 - x
                inc_y = 0.0 - y
                angle_to_goal = math.atan2(inc_y, inc_x)
                if abs(angle_to_goal - theta) > 0.1:
                    velocity_msg.linear.x = 0.4
                    velocity_msg.angular.z = 2.0 if angle_to_goal > theta else -2.0
                else:
                    velocity_msg.linear.x = 0.4
                    velocity_msg.angular.z = 0.0
            elif regions['front'] < 1 and regions['fleft'] > 1 and regions['fright'] > 1:
                # case 2 - front blocked: turn left in place.
                velocity_msg.linear.x = 0
                velocity_msg.angular.z = 2.5
            elif regions['front'] > 1 and regions['fleft'] > 1 and regions['fright'] < 1:
                # case 3 - fright blocked: turn left in place.
                velocity_msg.linear.x = 0
                velocity_msg.angular.z = 2.5
            elif regions['front'] > 1 and regions['fleft'] < 1 and regions['fright'] > 1:
                # case 4 - fleft blocked: turn right in place.
                velocity_msg.linear.x = 0
                velocity_msg.angular.z = -2.5
            elif regions['front'] < 1 and regions['fleft'] > 1 and regions['fright'] < 1:
                # case 5 - front and fright blocked: turn left.
                velocity_msg.linear.x = 0
                velocity_msg.angular.z = 2.5
            elif regions['front'] < 1 and regions['fleft'] < 1 and regions['fright'] > 1:
                # case 6 - front and fleft blocked: turn right.
                velocity_msg.linear.x = 0
                velocity_msg.angular.z = -2.5
            elif regions['front'] < 1 and regions['fleft'] < 1 and regions['fright'] < 1:
                # case 7 - everything blocked: turn left until clear.
                velocity_msg.linear.x = 0
                velocity_msg.angular.z = 2.5
            elif regions['front'] > 1 and regions['fleft'] < 1 and regions['fright'] < 1:
                # case 8 - corridor (fleft and fright blocked): go straight.
                velocity_msg.linear.x = 0.4
                velocity_msg.angular.z = 0
            pub.publish(velocity_msg)
            curr_x = x
            rate.sleep()
        # Course complete: publish a zero velocity so the robot stops.
        velocity_msg.linear.x = 0.0
        velocity_msg.angular.z = 0.0
        pub.publish(velocity_msg)
        break
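The eight laser cases above can be condensed into a lookup table keyed on which sectors are clear, which makes the state machine easier to audit for missing combinations. A self-contained sketch (the `AVOID` table and `avoid_cmd` helper are ours; the 1 m clearance threshold and the velocity pairs mirror the cases above, with case 1's goal-seeking reduced to "drive straight" for brevity):

```python
# key = (front_clear, fleft_clear, fright_clear) -> (linear.x, angular.z)
AVOID = {
    (True,  True,  True):  (0.4,  0.0),   # case 1: nothing ahead
    (False, True,  True):  (0.0,  2.5),   # case 2: front blocked
    (True,  True,  False): (0.0,  2.5),   # case 3: fright blocked
    (True,  False, True):  (0.0, -2.5),   # case 4: fleft blocked
    (False, True,  False): (0.0,  2.5),   # case 5: front + fright
    (False, False, True):  (0.0, -2.5),   # case 6: front + fleft
    (False, False, False): (0.0,  2.5),   # case 7: all blocked
    (True,  False, False): (0.4,  0.0),   # case 8: corridor
}

def avoid_cmd(regions, clear=1.0):
    # Map the three frontal sectors to a velocity command.
    key = (regions['front'] > clear,
           regions['fleft'] > clear,
           regions['fright'] > clear)
    return AVOID[key]

print(avoid_cmd({'front': 0.5, 'fleft': 3.0, 'fright': 3.0}))  # → (0.0, 2.5)
```

Because the key covers all eight truth combinations, every sensor state maps to exactly one command, whereas a long elif chain can silently fall through.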
if __name__ == '__main__':
    try:
        control_loop()
    except rospy.ROSInterruptException:
        pass