
Simultaneous localisation and mapping for path

planning in static environment

Submitted By:
❏ Shreyansh Sharma (IEC2020089)
❏ Jyoti Mathur (IEC2020066)
❏ Yaramakula Sasikala (IEC2020108)
Supervisor:
❏ Dr. Surya Prakash, Assistant Professor
Department of Electronics and Communication Engineering
“Simultaneous localisation and mapping for path planning in static environment”
Outline

❖ Introduction
❖ Previous study
❖ Literature Survey
❖ Problem Gap
❖ Methodology
❖ Expected outcomes
❖ References
INTRODUCTION :
With the scientific advances happening in the world, robots are now essential in many industries, but their use is not limited to the industrial sector alone. We are therefore building a bot with the following features:
1) Simultaneous Localization and Mapping (SLAM), a technique that allows a robot to map its surroundings and determine its own location within that map. This brings increased accuracy, improved efficiency, better adaptability, and reduced dependency on external infrastructure (a minimal mapping sketch is given at the end of this section).
2) Path planning, which helps improve the efficiency, accuracy, and safety of warehouse bot operations, making the bot a valuable tool for warehouse management.
3) A static environment: a warehouse layout in which the placement of items and obstacles remains constant makes the bot more efficient, predictable, and safe to operate.
4) There are already many bots on the market, but they are not very accurate in this setting and can only cover a small area of a warehouse with good accuracy. We therefore aim to build a bot with high precision (centimetre range) that can work at the industrial level, using knowledge of both hardware and software.
5) Estimating depth with a bot in this manner is hard, so we take up this task and build a bot that also connects to a server and works in a real-time environment.
Raspberry Pi 3B+
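To make the mapping half of SLAM mentioned in point 1 concrete, here is a minimal sketch that updates a 2D occupancy grid from range readings taken at known robot poses. It is an illustration only: the grid size, resolution, poses, and helper names are assumptions, and a full SLAM system would estimate the poses jointly with the map rather than treating them as known.

```python
import numpy as np

# Assumed parameters for illustration: a 5 m x 5 m area at 5 cm resolution.
RESOLUTION = 0.05          # metres per cell
GRID_SIZE = 100            # 100 x 100 cells
grid = np.zeros((GRID_SIZE, GRID_SIZE), dtype=np.int8)  # 0 = free/unknown, 1 = occupied

def world_to_cell(x, y):
    """Convert world coordinates (metres) to grid indices."""
    return int(y / RESOLUTION), int(x / RESOLUTION)

def mark_obstacle(robot_pose, bearing, distance):
    """Mark the cell hit by a range reading taken from a known robot pose."""
    rx, ry, rtheta = robot_pose
    ox = rx + distance * np.cos(rtheta + bearing)
    oy = ry + distance * np.sin(rtheta + bearing)
    row, col = world_to_cell(ox, oy)
    if 0 <= row < GRID_SIZE and 0 <= col < GRID_SIZE:
        grid[row, col] = 1

# Example: robot at (2.5 m, 2.5 m) facing +x sees an obstacle 1 m straight ahead.
mark_obstacle((2.5, 2.5, 0.0), bearing=0.0, distance=1.0)
print("occupied cells:", int(grid.sum()))
```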
Previous study :

A vehicle that automatically drives along a line drawn on the ground is called a "path
follower bot". The track is usually a black line on a white surface. The
basic operations of a path follower are:
1. Most designs use photo reflectors, while other top competitors use image sensors for
image processing, detecting the line position with an optical sensor mounted on
the front of the robot. The line detection method therefore requires high resolution
and high robustness.
2. A steering mechanism keeps the robot on the line; in effect this is a servo
operation. Phase compensation is required to stabilise the tracking motion using
digital PID filters or other servo algorithms, and the speed is adapted to the track
conditions (a hedged PID steering sketch follows this list).
3. Friction between the tyres and the ground limits cornering speed, so a sturdier
mechanism allows larger actuation forces. As a result, the speed of the robot, and
hence its performance, can be increased.
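As a hedged illustration of point 2 above, the sketch below turns an array of reflectance-sensor readings into a line-position error and feeds it through a digital PID filter to get a steering correction. The sensor count, gains, and sample readings are assumptions for illustration, not values from the slides.

```python
# Minimal PID steering sketch for a line follower (illustrative values only).

class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def line_position(sensors):
    """Weighted average of reflectance readings; 0 means the line is centred."""
    weights = range(-(len(sensors) // 2), len(sensors) // 2 + 1)
    total = sum(sensors)
    return sum(w * s for w, s in zip(weights, sensors)) / total if total else 0.0

pid = PID(kp=0.8, ki=0.0, kd=0.2)          # assumed gains
sensors = [0.1, 0.2, 0.9, 0.3, 0.1]        # example readings from five reflectance sensors
steering = pid.update(line_position(sensors), dt=0.02)
print(f"steering correction: {steering:+.3f}")
```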
Literature Survey
Ref.1: Pakdaman, M., Sanaatiyan, M.M., and Ghahroudi, M.R. (2010, February). A line follower robot from design to implementation: technical issues and problems. In 2010 2nd International Conference on Computer and Automation Engineering (ICCAE) (Vol. 1, pp. 5-9). IEEE.

Background: Thanks to constant advances in machine learning and artificial intelligence, robots are more alert than ever. They are used in a
variety of situations beyond human control due to this property.
Objective: By employing a GPS (Global Positioning System) based coordinate system, it aims to create autonomous route development
algorithms and apply them to real terrain conditions.
Application: Rovers are specialised robots that can navigate terrain too difficult for humans. Despite being a robust bot, its main flaws
include inadequate automation and a lack of intelligence. A rover's primary function is to traverse extremely difficult terrain, so an
intelligent path generation and tracking system is absolutely necessary.
Method: The control station sends the GPS coordinates of the target point to the rover. The rover then reads its own GPS fixes to build a route between its current position and the destination (a hedged waypoint-bearing sketch follows this entry).

Result: After building such a system, the authors were able to navigate the rover effectively over this difficult terrain.
Comparison: The system monitors the deviation between the planned path, heading, and current position; as the rover follows its course toward the destination, it corrects these errors using a proportional-integral-derivative (PID) control-loop feedback mechanism.
Conclusion: In summary, a cheap on-board computer (a Raspberry Pi in this example) manages all the computations in the process and
controls the rover to complete the task.
Limitation: It works only with GPS signals, but its high accuracy makes navigation more reliable outdoors.
Future : In the future, it will be possible to deliberately deviate in these situations and regenerate the polynomial path from another point
that avoids these impassable roads. In addition, we will implement image processing in our work in progress.
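To make the described GPS-waypoint method concrete, here is a small hedged sketch (not the paper's code) that computes the bearing from the rover's current GPS fix to the target coordinates and derives a simple proportional heading correction. The example coordinates and the gain are assumptions.

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial bearing (degrees, clockwise from north) from point 1 to point 2."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return (math.degrees(math.atan2(y, x)) + 360.0) % 360.0

def heading_correction(current_heading, target_bearing, kp=0.5):
    """Proportional steering command from the heading error, wrapped to [-180, 180] degrees."""
    error = (target_bearing - current_heading + 180.0) % 360.0 - 180.0
    return kp * error

# Example: rover at one fix, target a short distance to its north-east.
target = bearing_deg(26.0, 80.0, 26.001, 80.001)
print(f"bearing to target: {target:.1f} deg, correction: {heading_correction(90.0, target):+.1f}")
```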
Ref.2: Islam, M.S., and Rahman, M.A. (2013). Line follower robot design and construction. Asian Journal of Applied Science and Engineering, 2(2), 127-132.

Background: The design combines the expertise of mechanical, electrical and computer engineers.
Objective: Line follower robots work on the same basic idea as light follower robots. However, LFR sensors
are used to track lines, not light.
Application: In this project, a 700 g line-tracking robot with a 9 W LDR sensor is developed and manufactured; it constantly moves along a black mark on a white surface.
Method: The electromechanical robot costs about 1150 BDT and measures roughly 7 x 5 x 5.2 inches.
Result: This low-cost line-sensing robot, built from simple electronic components, can carry a load of around 500 g without deviating from the line.
Comparison: Although a robot of this class might be expected to carry 1-2 kg, this one carries a more modest load (around 500 g).
Conclusion: The robot follows the black line successfully. On a white surface (art paper) with black lines running in different directions, the robot can still recognise its line and stay on track.
Limitation: Since it relies on a painted track, contact with certain substances can damage the line, which considerably limits its capability today.

Future: In the future, the robot could recognise different colours using colour sensors and be applied in various settings, such as industry and robot competitions.
Ref.4: Al-Jarrah, R., Shahzad, A., and Roth, H. (2015). Path planning and motion coordination for multi-robot systems using probabilistic neuro-fuzzy. IFAC-PapersOnLine, 48(10), 46-51.

Background: Coordination is based on the idea of leader and follower, meaning that the follower's actions are tied to the leader's position.
Objective: Each robot has a low-level probabilistic fuzzy controller to reduce probabilistic uncertainty and allow the
multi-robot team to safely navigate from the starting point to the destination.
Application: A high-level controller is built using a first-order Sugeno fuzzy inference system to simulate the main robotic system (a minimal Sugeno-style inference sketch follows this entry).
Method: The process begins with the generation of input and output data. Fuzzy rules that characterise associations
between input/output data are generated using subtractive clustering and least-squares estimation (LSE) techniques.
Result: As a result, a neural network-based learning method is provided to tune the membership function parameters,
and ANFIS tunes the fuzzy rules.
Comparison: Simulations are used to confirm the feasibility and effectiveness of the proposed approach.
Conclusion: Simulation results show that the proposed solution works well. In addition, tests with real robots were
conducted to validate some aspects of the proposed strategy.
Limitation: PFLS-based methods only use probabilistic and fuzzy uncertainties to represent typical nonlinear
relationships in input-output models.
Future: We fully evaluate the proposed technique using a multi-robot ground system led by AutoMerlin and an
airship robot that scans the environment and provides a map of the environment to the leader robot.
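As a hedged illustration of the first-order Sugeno (TSK) inference mentioned above, the sketch below evaluates two fuzzy rules with Gaussian memberships and combines their linear consequents by weighted averaging. The membership parameters, rule consequents, and inputs are made-up illustration values, not those of the cited controller.

```python
import math

def gaussmf(x, mean, sigma):
    """Gaussian membership function."""
    return math.exp(-0.5 * ((x - mean) / sigma) ** 2)

def sugeno_first_order(distance, angle):
    """Two-rule first-order Sugeno inference: output = weighted average of linear consequents."""
    # Rule 1: IF distance is SMALL AND angle is SMALL THEN v = 0.2*distance + 0.1*angle + 0.1
    w1 = min(gaussmf(distance, 0.0, 0.5), gaussmf(angle, 0.0, 0.3))
    z1 = 0.2 * distance + 0.1 * angle + 0.1
    # Rule 2: IF distance is LARGE AND angle is SMALL THEN v = 0.6*distance + 0.05*angle + 0.2
    w2 = min(gaussmf(distance, 2.0, 0.8), gaussmf(angle, 0.0, 0.3))
    z2 = 0.6 * distance + 0.05 * angle + 0.2
    return (w1 * z1 + w2 * z2) / (w1 + w2)

# Example: follower 1.2 m behind the leader, 0.1 rad off its heading.
print(f"commanded speed: {sugeno_first_order(1.2, 0.1):.3f} m/s")
```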
Ref.5: Latif, Abdul, Hendro Robbi Rahim, Agus Widodo, and Kunal Kunal (2020). Line follower robot implementation based on ATMega32A microcontroller. Journal of Robotics and Control (JRC), 1(3), 70-74.

Background: Robot technology is advancing rapidly, but Jakarta and the eastern region of Indonesia still lag behind.
Sultan Agung University of Islam also does not have access to microcontroller learning media devices.
Objective: By using the simplest possible robot design (a line-following robot that just follows a line), the author hopes to
break new ground.
Application: The mechanical design of this line follower features a robotic drive in the form of a robotic frame, sensor
placement, and robotic wheels. By making robot-like mechanical parts, we can build future robots that are lightweight
and easy to use.
Method: Requirement analysis, mechanical diagram design, electronic component design, control program design,
manufacturing, and testing.
Result : The results show that the line-following robot can move along a black line on a white floor while displaying its
surroundings on the LCD.
Conclusion: With the ATmega32A microcontroller as its processing system, the line follower robot drives all of its subsystems according to the uploaded program.
Limitation: This line follower robot still has problems with the sensitivity of the line sensor at certain speeds. Robots
following a line can only follow paths at 90-150 revolutions per minute (RPM).
Future: It incorporates new sensors to increase its efficiency, and because it is lightweight, it will also be used as a spy in
the future.
Ref.6: Imteaj, A., M.I.J. Chowdhury, M. Farshid, and A.R. Shahid (2019, May). RoboFI: an autonomous path-following robot that uses computer vision and the Internet of Things to geolocate and identify human bodies during search and rescue operations.
Background: Natural and man-made disasters, losses, catastrophes and cataclysms cause a significant number of deaths each
year, and this number is steadily increasing.
Objective: To solve this problem, our team developed an autonomous rescue robot that uses three ultrasonic sensors to follow a path.
Application: The study also looks at several sensors and how they perform in industry. The robot can be used effectively in search and rescue operations to reduce the time needed to locate and reach victims. RoboFI, an autonomous robot that follows a course, is dedicated to protecting people.
Method: A Raspberry Pi Model B is used as the processing module. This single-board computer runs algorithms that analyse the camera feed to detect motion (a hedged frame-differencing sketch follows this entry).
Result: When motion is detected, a PIR sensor confirms the presence of a person. The robot then continues its assigned tasks to maximise its success and save as many lives as possible after reaching those it has found.
Comparison: After the system is installed, the robot collects information for personal identification, passive infrared verification,
obstacle avoidance using ultrasonic sensors, live streaming imagery, and location mapping.
Conclusion: We have successfully developed an autonomous rescue robot with person identification and localization
capabilities.
Limitation: A limitation of this robot is that it can only operate on flat surfaces. It doesn't work on higher levels or in the outside
world. There's still a lot of room for development, so it's not as good as it could be.
Future: As soon as life is detected in a destroyed area, both the person and their surroundings can be viewed live over the internet.
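A hedged sketch of the kind of camera-based motion detection described above, using simple OpenCV frame differencing on a camera stream. The capture index, blur size, threshold, and pixel-count trigger are assumptions, and the PIR confirmation is only indicated by a placeholder comment.

```python
import cv2

cap = cv2.VideoCapture(0)              # assumed camera index
ok, prev = cap.read()
if not ok:
    raise RuntimeError("camera not available")
prev_gray = cv2.GaussianBlur(cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY), (21, 21), 0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (21, 21), 0)
    diff = cv2.absdiff(prev_gray, gray)                     # per-pixel change between frames
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    if cv2.countNonZero(mask) > 5000:                       # assumed trigger threshold
        print("motion detected")                            # here a PIR check would confirm a person
    prev_gray = gray

cap.release()
```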
Problem Gap :
These robots are immune to human emotions, fatigue and exhaustion. These robots work with
sensors that follow lines placed on the floor of the site. This line can be straight or curved. Even under
the most difficult and challenging conditions, this robot can do its job without fatigue.

1. To the best of our knowledge, simultaneous localization and mapping using a 2D camera has not been included in the available studies on path planning.
2. So far, studies have focused on 3D map construction using 3D cameras.
3. It therefore becomes relevant to explore the behaviour of a bot that uses a 2D camera for depth mapping: although a 3D camera combined with machine learning and AI can provide significant benefits in perception and object recognition for warehouse bots, limitations such as cost, processing power, limited field of view, and environmental factors still need to be addressed.
4. In this regard, our previous study suggested that the path-following bot developed and implemented in this way performed better; the addition of cloud storage and Wi-Fi-enabled cameras made it simpler to map and save particular data.
Problem Solution:
1) In the present study, we approach this in detail by building a map-construction algorithm for simultaneous localization and path planning based on a potential field, using a 2D camera, which helps improve accuracy (a hedged potential-field sketch follows this list).
2) We present an integrated approach for robot localization, obstacle mapping, and path planning in 3D environments based on data from an onboard consumer-level depth camera. These cameras are relatively accurate and provide dense, three-dimensional information directly from the hardware.
3) We present the first integrated navigation system consisting of localization, obstacle mapping, and collision avoidance for humanoid robots based on 2D camera data. Our approach relies on a given 3D environment model, and the bot also connects to the server and works in a real-time environment.

4) An image stabilization technique compensates for the motion blur caused by the robot's movement. The images captured by the camera are analysed, and algorithms remove unwanted motion; the camera also captures the scene frame by frame, which helps the bot move more accurately.
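To illustrate the potential-field idea named in point 1 above, the following sketch performs gradient descent on a simple attractive-plus-repulsive potential in 2D. The goal, obstacle position, gains, influence radius, and step size are illustration-only assumptions, not parameters of the actual system.

```python
import numpy as np

GOAL = np.array([4.0, 4.0])
OBSTACLES = [np.array([2.0, 2.2])]          # assumed obstacle positions
K_ATT, K_REP, RHO0, STEP = 1.0, 0.5, 1.0, 0.05

def force(pos):
    """Negative gradient of attractive + repulsive potentials at pos."""
    f = K_ATT * (GOAL - pos)                                 # attractive pull toward the goal
    for obs in OBSTACLES:
        diff = pos - obs
        rho = np.linalg.norm(diff)
        if 1e-6 < rho < RHO0:                                # repulsion only inside influence radius
            f += K_REP * (1.0 / rho - 1.0 / RHO0) / rho**2 * (diff / rho)
    return f

pos = np.array([0.0, 0.0])
path = [pos.copy()]
for _ in range(500):                                         # gradient-descent steps
    pos = pos + STEP * force(pos)
    path.append(pos.copy())
    if np.linalg.norm(GOAL - pos) < 0.1:
        break

print(f"reached {pos.round(2)} in {len(path) - 1} steps")
```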
Methodology and Experimentation

Make all the connections as shown in the image below.

We can power the motor driver and the Raspberry Pi from the same battery by using a step-down converter.
To run code on the Raspberry Pi, first set up VNC Viewer on your desktop to access the Pi remotely, or use a separate monitor. Then open the downloaded code on the Raspberry Pi; the OpenCV, NumPy, and RPi (RPi.GPIO) libraries must be installed before running the code.
Recheck all the connections and then place the robot in your warehouse. The route travelled is saved in cloud storage by mounting and mapping the camera, so the movement can be reviewed later.
If the robot's speed is far too slow, change the values in the p1.start(50) and p2.start(50) instructions to 100 (a hedged motor-PWM sketch follows this section).
Circuit:
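For context on the p1.start(50) / p2.start(50) instructions mentioned above, here is a hedged sketch of how two motor-PWM channels are typically created with the RPi.GPIO library. The pin numbers and PWM frequency are assumptions, not the project's actual wiring; only the duty-cycle values (50 vs. 100) mirror the text.

```python
import RPi.GPIO as GPIO  # available on the Raspberry Pi

LEFT_EN, RIGHT_EN = 12, 13      # assumed enable pins on the motor driver
GPIO.setmode(GPIO.BCM)
GPIO.setup(LEFT_EN, GPIO.OUT)
GPIO.setup(RIGHT_EN, GPIO.OUT)

p1 = GPIO.PWM(LEFT_EN, 1000)    # 1 kHz PWM, assumed frequency
p2 = GPIO.PWM(RIGHT_EN, 1000)
p1.start(50)                    # 50 % duty cycle, as in the original instructions
p2.start(50)

# If the robot is far too slow, raise the duty cycle toward 100 %:
p1.ChangeDutyCycle(100)
p2.ChangeDutyCycle(100)

# ... navigation loop would run here ...

p1.stop()
p2.stop()
GPIO.cleanup()
```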
EXPECTED OUTCOMES :
The expected outcomes of our project include:

○ Efficient and accurate navigation: the 2D camera helps the robot navigate efficiently and accurately by detecting obstacles and mapping the environment; its accuracy is expected to be in the centimetre range.
○ Improved productivity: By automating tasks such as transporting items, the warehouse bot can
increase productivity and reduce manual labor.
○ Reduced costs: The use of a Raspberry Pi and a 2D camera can reduce the cost of building
and operating the warehouse bot, making it more accessible to smaller businesses with limited
budgets.
○ Enhanced safety: the robot's ability to detect obstacles and hazards helps improve safety in the warehouse by operating in real-time scenarios with the help of servers.
○ Scalability: Multiple warehouse bots can be deployed to handle different tasks, leading to more
efficient operations and increased scalability.

Overall, building a warehouse bot with a 2D camera and a Raspberry Pi can result in an efficient, cost-effective, and safe automated system for warehouse operations, one that also includes a depth measurement capability that sets it apart from others.
References
● Pakdaman, M., Sanaatiyan, M.M., and Ghahroudi, M.R. (2010, February). A line follower robot from design to implementation: technical issues and problems. In 2010 2nd International Conference on Computer and Automation Engineering (ICCAE) (Vol. 1, pp. 5-9). IEEE.
● Islam, M.S., and Rahman, M.A. (2013). Line follower robot design and construction. Asian Journal of Applied Science and Engineering, 2(2), 127-132.
● Madhavan, B., and Sreekumar, M. (2013). Tracking algorithm for multiple robots using the leader-follower approach. Procedia Engineering, 64, 1426-1435.
● Al-Jarrah, R., Shahzad, A., and Roth, H. (2015). Path planning and motion coordination for multi-robot systems using probabilistic neuro-fuzzy. IFAC-PapersOnLine, 48(10), 46-51.
● Latif, Abdul, Hendro Robbi Rahim, Agus Widodo, and Kunal Kunal (2020). Line follower robot implementation based on ATMega32A microcontroller. Journal of Robotics and Control (JRC), 1(3), 70-74.
● Imteaj, A., Chowdhury, M.I.J., Farshid, M., and Shahid, A.R. (2019, May). RoboFI: an autonomous path-following robot that uses computer vision and the Internet of Things to geolocate and identify human bodies during search and rescue operations. In 2019 1st International Conference on Advances in Science, Engineering and Robotics Technology (ICASERT) (pp. 1-6). IEEE.
