
A CASE STUDY ON AUTONOMOUS ROBOTS USED IN

RESTAURANTS FOR FOOD DELIVERY

Paper 1:
https://www.ijcrt.org/papers/IJCRT2008442.pdf

IoT communication:
The IoT architecture comprises six
modules with various units (power
supply, sensors, actuators, Wi-Fi module,
keypads, wheels, servo motors, LCD
display, and an Arduino board with user
interfaces and transceivers). Integrating
these components makes the autonomous
system easier to install, control, and
interface with, and helps it deliver
optimal performance.

Embedded networking:
The paper shows a variation of the
industrial automation pyramid, which
symbolizes the hierarchy in an industrial
automation system. At the bottom is the
sensor and actuator level, with input and
output elements that directly read
switches and sensors, e.g., the current
speed of a conveyor belt, RPM values
from anything rotating, or a current
temperature.
Robot Navigation:
The Robot Operating System (ROS) framework and its modular design help make
the autonomous system more efficient and highly optimized. In ROS, trajectory
planning and navigation within a restaurant can be achieved effectively by
generating a digital map of the environment (via a SLAM algorithm); the position
of the robot can then be estimated by Adaptive Monte Carlo Localization (AMCL).

Robot used in the paper:


The base houses the motors driving four mecanum wheels, along with other
peripheral devices and components. The power source consists of two 24 V lithium
polymer (LiPo) batteries. The height of the robot is adjustable, so it can serve
tables of different heights by means of a motorized lifting mechanism between
the base and the dumbwaiter. The robot and its motion can be simulated in
RViz.

Localization, mapping and navigation:


Information about orientation and heading is collected by an Attitude
and Heading Reference System (AHRS) equipped with a triple-axis gyroscope,
accelerometer, and magnetometer. A low-cost laser scanner (RPLidar) is
used to generate a map for the robot.
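As an illustration (not the paper's implementation), gyroscope and magnetometer readings from such an AHRS can be fused into a heading estimate with a simple complementary filter; the function name and parameters below are hypothetical:

```python
def complementary_heading(gyro_rates, mag_headings, dt, alpha=0.98):
    """Fuse gyroscope yaw rates (rad/s) with magnetometer headings (rad).

    The gyro term tracks fast rotations; the magnetometer term corrects
    long-term drift. Angle wrap-around is ignored for simplicity.
    """
    heading = mag_headings[0]              # initialise from the magnetometer
    for rate, mag in zip(gyro_rates, mag_headings):
        predicted = heading + rate * dt    # integrate the gyro rate
        heading = alpha * predicted + (1 - alpha) * mag
    return heading
```

With a stationary robot (zero gyro rates, constant magnetometer heading) the estimate simply stays at that heading; with a drifting gyro, the magnetometer term slowly pulls the estimate back.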

AMCL for localization: The algorithm uses a particle filter to represent the
distribution of likely states, with each particle representing a possible state,
i.e., a hypothesis of where the robot is. The algorithm typically starts with a
uniform random distribution of particles over the configuration space,
meaning the robot has no information about where it is and assumes it is
equally likely to be at any point in space.
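The predict-weight-resample cycle described above can be sketched in plain Python (a 1-D toy with a direct position sensor, not the paper's implementation; names and defaults are illustrative):

```python
import math
import random

def amcl_step(particles, measurement, sense, motion, noise=0.2):
    """One predict-weight-resample cycle of a Monte Carlo localizer.

    particles   : hypothesised robot positions (1-D floats here)
    measurement : the range reading actually taken by the robot
    sense       : maps a hypothesised position to its expected reading
    motion      : commanded displacement applied to every particle
    """
    # Predict: move every particle by the commanded motion plus noise.
    moved = [p + motion + random.gauss(0.0, noise) for p in particles]
    # Weight: particles whose expected reading matches the measurement
    # receive higher weight (Gaussian likelihood).
    weights = [math.exp(-(sense(p) - measurement) ** 2 / (2 * noise ** 2))
               for p in moved]
    total = sum(weights)
    if total == 0.0:
        return moved                       # all weights underflowed: keep set
    weights = [w / total for w in weights]
    # Resample: draw a new particle set in proportion to the weights.
    return random.choices(moved, weights=weights, k=len(moved))
```

Starting from a uniform distribution over the space, repeated steps collapse the particle cloud around the true position.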
A demonstration of AMCL:

Problems encountered:
There are two considerations related to locating and navigating the robot so
that it delivers food to the right table. The first is global navigation
(navigating the waiter robot to the target table) and the second is local
navigation (recognizing and docking at the correct table). To solve the
first, the locations of the target table and of the robot must be known.

Approaches:
Two approaches were available: using an Indoor Positioning System
(IPS), and using the ROS navigation and localization modules.

- One approach uses IR range finders for docking. Two infrared sensors
located on the robot provide distances to the edge of the table. A program
adjusts the robot until the two readings are about 0.5 m from the edge of
the table. Once the robot is almost parallel, it begins its serving
routine. The docking sequence and program are implemented in ROS, and the
flow chart of this implementation is shown in Fig. 11.
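The adjust-until-parallel logic can be sketched as a simple controller (a hypothetical function with illustrative thresholds, not the paper's ROS code):

```python
def docking_adjust(d_left, d_right, target=0.5, tol=0.02):
    """Return (forward, rotate) commands from two IR range readings (m).

    The robot first rotates until the two readings match (parallel to the
    table edge), then drives until both are near the 0.5 m stand-off.
    Returns (0.0, 0.0) when docked and ready to serve.
    """
    skew = d_left - d_right
    if abs(skew) > tol:                 # not parallel yet: rotate only
        return 0.0, -0.5 if skew > 0 else 0.5
    error = (d_left + d_right) / 2 - target
    if abs(error) > tol:                # parallel: close the range gap
        return min(max(error, -0.2), 0.2), 0.0
    return 0.0, 0.0                     # docked: start serving routine
```

Calling this in a loop with fresh sensor readings reproduces the behaviour described above: rotation commands until parallel, then forward motion until the 0.5 m stand-off is reached.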
- In the second approach, the robot controller uses a Pixy camera with pan
and tilt angles. The camera tracks a colour tag and outputs the pan and
tilt angles to the robot controller, which steers the robot closer to the table.

Flowchart:
Paper 2:

https://www.mdpi.com/2075-1702/10/10/844/pdf?version=1663926220

Flowchart:
Planning:

Global planner: The main principle of the Dijkstra algorithm is to identify
the shortest path from each node of the map to the initial location. The
Dijkstra method uses the current shortest-path point as a bridge to update
the shortest distance from each unvisited point to the initial location. The
algorithm iterates until the shortest global distance from the initial location
to the goal point is attained. By coupling this Dijkstra algorithm with
the robot control strategy, the program implements the movement
technique under a defined starting pose and with no back-off limitation.
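The relaxation step described above can be sketched with a standard priority-queue implementation (illustrative, not the paper's code; the floor graph in the usage note is made up):

```python
import heapq

def dijkstra(graph, start):
    """Shortest distance from `start` to every reachable node.

    graph: dict mapping node -> list of (neighbour, edge_cost) pairs.
    Each settled node acts as the bridge that relaxes the distances
    of its still-unvisited neighbours.
    """
    dist = {start: 0.0}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue                    # stale entry: node already settled
        for nbr, cost in graph.get(node, []):
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return dist
```

For example, on a small floor graph `{"kitchen": [("A", 1), ("B", 4)], "A": [("B", 2), ("table", 5)], "B": [("table", 1)]}` the shortest kitchen-to-table distance is 4, via A and B.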

Local planner: During the navigation process, the TEB algorithm plans the
local path by following the global path and the step size parameter settings.
It continuously updates the obstacle information to keep the trajectory
at least the inflation (expansion) distance away from obstacles.

Autonomous navigation:
The robot uses nodes to publish target points in order, moves to each
target point in turn based on global path planning, and avoids obstacles in
real time. Starting from the map produced in advance, the SLAM algorithm
synchronously builds and updates a real-time map of the surrounding
environment so that the robot can plan its path accordingly. The
IMU posture sensor gathers inertial information to compensate for
position and pose inaccuracies. Combined with the lidar-based distance
sensor, obstacles such as walls and objects are identified, and the ideal
path between two points and the chassis motion mode are computed. The
adaptive Monte Carlo localization (AMCL) approach aids the real-time
estimation of the robot's position, and the robot's built-in encoder is
employed to increase the precision of motion planning.
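The publish-targets-in-order behaviour can be sketched as a loop over waypoints (a point-robot toy that stands in for the planner; obstacle avoidance and the real ROS node interface are omitted):

```python
import math

def visit_targets(start, targets, step=0.5):
    """Drive a point robot through target points in published order,
    advancing at most `step` per tick toward the current goal."""
    x, y = start
    path = [(x, y)]
    for tx, ty in targets:
        while math.hypot(tx - x, ty - y) > 1e-9:
            d = math.hypot(tx - x, ty - y)
            s = min(step, d)            # do not overshoot the target
            x += s * (tx - x) / d
            y += s * (ty - y) / d
            path.append((x, y))
    return path
```

Each target is fully reached before the next one is pursued, mirroring the in-order delivery of target points described above.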
Code identification:
According to the assumed task requirements, each ArUco tag in this project
is a 4 × 4-bit binary pattern whose binary reference marks can be used for
camera pose estimation. An ArUco code is a composite square marker
composed of a wide black border and an internal binary matrix that
determines its identifier. The black boundary facilitates rapid detection
in an image, and the binary coding allows recognition and the application
of error detection and correction techniques. The ArUco module was
implemented based on OpenCV to enable pose estimation and camera
calibration features.
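Real detection and decoding is handled by OpenCV's ArUco module; the following is only a toy illustration of how an identifier can be read from a binary matrix with error detection. The layout (three data rows plus a parity row) is hypothetical and is not the real ArUco dictionary:

```python
def decode_marker(bits):
    """Toy decoder for a 4x4 binary marker (hypothetical layout):
    rows 0-2 carry data, row 3 holds per-column parity bits, giving
    simple error *detection*.

    bits: four lists of four ints (0/1), the inner matrix after the
    black border has been stripped.
    """
    data, parity = bits[:3], bits[3]
    for col in range(4):
        if sum(row[col] for row in data) % 2 != parity[col]:
            return None                  # parity mismatch: reject marker
    # Pack the 12 data bits into an integer identifier.
    marker_id = 0
    for row in data:
        for bit in row:
            marker_id = (marker_id << 1) | bit
    return marker_id
```

The parity check plays the role of the error-detection coding mentioned above: a corrupted cell makes the marker unreadable rather than silently yielding a wrong identifier.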

Feature detection:
Typical detection algorithms can be split into two types. One
category comprises region-proposal methods such as R-CNN: the technique
generates target candidate boxes and then performs classification and
regression on them. The other group comprises one-stage
algorithms, such as YOLO and SSD, which use a single convolutional
neural network (CNN) to predict the categories and positions of distinct
targets directly. Algorithms in the first category have higher accuracy but
lower speed, whereas algorithms in the second category are faster but
less accurate. In this study, YOLOv4 target detection and SIFT
feature matching are employed for target detection; both can achieve
the goal of picture recognition, but their outcomes differ.
Paper 3:
