
CHAPTER 2

THEORY BACKGROUND OF THE SYSTEM

2.1 Types of Robots


Many types of robots can be used in the environment, and humans can control
them either manually or automatically. Manually controlled robots are operated in
various ways, by remote control or by sensors, and the same is true of automatic
ones. There are six main types of industrial robots: Cartesian, SCARA, cylindrical,
delta, polar and vertically articulated. However, the most common types of robots,
used not only in industry but also in society at large, are mobile robots, stationary
robots, autonomous robots, remote-controlled robots and virtual robots.

2.1.1 Mobile Robots


There are many applications of mobile robots, and their importance in
industrial processes continues to grow. Mobile robots can be used for transportation
tasks, surveillance, or cleaning. Increasingly, they play an economic role also in the
entertainment industry (artificial pets being the best known example).[7,8]

Figure 2.1. Sample Picture of Mobile Robot[7]


2.1.2 Stationary Robots
These robots are fixed in one place and cannot move. This category includes
robotic arms, computerized machine tools, and most other industrial robots. Industrial
robots are robots used in mass production, e.g. welding robots, CNC plate cutters and
CNC drills.[7,10]

Figure 2.2. Sample Picture of Stationary Robot[7]

2.1.3 Autonomous Robots


An autonomous robot is a robot that performs behaviors or tasks with a high
degree of autonomy. This feature is particularly desirable in fields such as spaceflight,
household maintenance (such as cleaning), wastewater treatment and the delivery of
goods and services.

Figure 2.3. Sample Picture of Autonomous Robot

2.1.4 Remote-controlled Robots


Many people may think that a robot is not fully a robot if it is not
autonomous, in other words if it is not able to move for extended periods of time
without human intervention. In fact, robot control systems have varying levels
of autonomy, and the tele-operated or remote-controlled mode – in which there is
direct interaction between human and robot and the human has nearly complete
control over the robot's motion – is one of them.[10,11]

Figure 2.4. Sample Picture of Remote-controlled Robot[12]

2.1.5 Virtual Robots


Virtual Robot provides a simple, web-based tool for content creation on
Engineered Arts (EA) robots such as RoboThespian and SociBot. Robot Virtual
Worlds (RVW) is a high-end simulation environment that enables students to learn
programming without physical robots. Research has shown that learning to program
in the RVW is more efficient than learning to program with physical robots.

Figure 2.5. Sample Picture of Virtual Robot[7]

2.2 Types of Mobile Robots


Of the kinds of robots described above, we chose a fully autonomous mobile
robot. Mobile robot platforms use different mechanisms to move: wheels, legs,
wings, even jets. Among wheeled mobile robot platforms there are:
Differential drive (two powered wheels; the other wheels are free casters; two
degrees of freedom).
Synchro drive (all wheels are powered, and the steering is also powered and
synchronized so that all wheels steer the same way simultaneously; fast moving but
limited mobility; two degrees of freedom).
Omni drive (three or four powered, specially designed wheels; fast moving,
high mobility; three degrees of freedom – the robot can move at any heading and turn
at the same time).
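The two degrees of freedom of the differential drive can be summarized with simple forward kinematics. The following is an illustrative sketch; the function name and parameter values are our own, not taken from any particular robot's software:

```python
import math

def diff_drive_step(x, y, theta, v_l, v_r, wheel_base, dt):
    """Advance a differential-drive pose (x, y, theta) by one time step.

    v_l, v_r:   left/right wheel linear speeds in m/s
    wheel_base: distance between the two powered wheels in m
    """
    v = (v_r + v_l) / 2.0             # forward speed of the robot center
    omega = (v_r - v_l) / wheel_base  # turn rate in rad/s
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
    return x, y, theta

# Equal wheel speeds give straight-line motion along the current heading:
x, y, th = diff_drive_step(0.0, 0.0, 0.0, 0.5, 0.5, 0.3, 1.0)
# → (0.5, 0.0, 0.0)
```

With unequal wheel speeds the same formulas make the robot arc, which is why two powered wheels alone give exactly two degrees of freedom: forward speed and turn rate.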
A mobile robot is a robot that is capable of locomotion. Mobile robotics is
usually considered a subfield of robotics and information engineering. Mobile
robots have the capability to move around in their environment and are not fixed to
one physical location.
A mobile robot can be "autonomous" (AMR), which means it is capable of
navigating an uncontrolled environment without the need for physical or
electromechanical guidance devices. Alternatively, a mobile robot can rely on
guidance devices that allow it to travel a pre-defined navigation route in a relatively
controlled space (AGV). The components of a mobile robot are a controller, control
software, sensors and actuators. The controller is generally a microprocessor, a
microcontroller or a personal computer. The control software can be written either in
assembly language or in a high-level language such as C, C++ or Python. The sensors
used depend on the requirements of the robot, which may include dead reckoning,
tactile and proximity sensing, triangulation ranging, collision avoidance, position
location and other specific applications.[2]

2.2.1 Manual Remote or Tele-op


A manually tele-operated robot is totally under the control of a driver with a
joystick or other control device. The device may be plugged directly into the robot,
may be a wireless joystick, or may be an accessory to a wireless computer or other
controller.
A tele-operated robot is typically used to keep the operator out of harm's way.
Examples of manual remote robots include Robotics Design's ANATROLLER ARI-
100 and ARI-50, Foster-Miller's Talon, iRobot's PackBot, and KumoTek's MK-705
Roosterbot.

2.2.2 Guarded Tele-op


A guarded tele-op robot has the ability to sense and avoid obstacles but will
otherwise navigate as driven, like a robot under manual tele-op. Few if any mobile
robots offer only guarded tele-op.[2,7]

2.2.3 Sliding Autonomy (Robotic mapping)


More capable robots combine multiple levels of navigation under a system
called sliding autonomy. Most autonomously guided robots, such as the Helpmate
hospital robot, also offer a manual mode. The Motivity autonomous robot operating
system, which is used in the ADAM, PatrolBot, SpeciMinder, MapperBot and a
number of other robots, offers full sliding autonomy, from manual to guarded to
autonomous modes.

Figure 2.6. Mapperbot[2]

2.2.4 Line-Follower Robot


Some of the earliest automated guided vehicles (AGVs) were line-following
mobile robots. They might follow a visual line painted on or embedded in the floor or
ceiling, or an electrical wire in the floor. Most of these robots operated a simple
"keep the center sensor on the line" algorithm. They could not circumnavigate
obstacles; they just stopped and waited when something blocked their path. Many
examples of such vehicles are still sold, by Transbotics, FMC, Egemin, HK Systems
and many other companies. These types of robots are still widely popular, and in
well-known robotics societies they are a first step in learning the nooks and corners
of robotics.
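The "keep the center sensor on the line" algorithm can be sketched in a few lines. The three-sensor layout and the command names below are illustrative assumptions, not code from any of the vehicles named above:

```python
def line_follower_step(left, center, right):
    """One control step of the classic "keep the center sensor on the line"
    rule. Each argument is True when that reflectance sensor sees the line."""
    if center:
        return "forward"     # line is under the middle sensor: keep going
    if left:
        return "turn_left"   # line drifted to the left: steer back toward it
    if right:
        return "turn_right"  # line drifted to the right
    return "stop"            # line lost or path blocked: stop and wait

# The line has drifted left, so the robot steers left to recapture it:
command = line_follower_step(True, False, False)
# → "turn_left"
```

The final branch reflects the behavior described above: these early AGVs could not go around an obstacle, so on losing the line they simply stopped and waited.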

Figure 2.7. Line follower Robot[2]

2.2.5 Autonomously guided robot


An autonomously guided robot knows at least some information about where
it is and how to reach various goals and/or waypoints along the way. "Localization",
or knowledge of its current location, is calculated by one or more means, using
sensors such as motor encoders, vision, stereopsis, lasers and global positioning
systems.
Positioning systems often use triangulation, relative position and/or Monte-
Carlo/Markov localization to determine the location and orientation of the platform,
from which it can plan a path to its next waypoint or goal. It can gather sensor
readings that are time and location-stamped. Such robots are often part of the wireless
enterprise network, interfaced with other sensing and control systems in the building.
For instance, the PatrolBot security robot responds to alarms, operates elevators and
notifies the command center when an incident arises. Other autonomously guided
robots include the SpeciMinder and the TUG delivery robots for the hospital. In 2013,
autonomous robots capable of finding sunlight and water for potted plants were
created by artist Elizabeth Demaray in collaboration with engineer Dr. Qingze Zou,
biologist Dr. Simeon Kotchomi, and computer scientist Dr. Ahmed Elgammal.
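The Monte-Carlo localization mentioned above can be sketched as a particle filter. The function below is a minimal illustration; the one-dimensional world, the wall position and the noise values in the usage example are made-up numbers, not from any real robot:

```python
import math
import random

def monte_carlo_update(particles, control, measurement, motion, likelihood):
    """One predict/weight/resample cycle of Monte Carlo localization.

    particles:        list of hypothesized robot poses
    motion(p, u):     samples a new pose from the motion model
    likelihood(p, z): how well sensor reading z fits pose p
    """
    moved = [motion(p, control) for p in particles]               # predict
    weights = [likelihood(p, measurement) for p in moved]         # weight
    return random.choices(moved, weights=weights, k=len(moved))  # resample

# 1-D example: the robot moves +1 m per step, and a range sensor
# measures the distance to a wall at x = 10.
random.seed(0)
particles = [random.uniform(0.0, 10.0) for _ in range(500)]
motion = lambda p, u: p + u + random.gauss(0.0, 0.1)
likelihood = lambda p, z: math.exp(-((10.0 - p) - z) ** 2 / 0.5)
particles = monte_carlo_update(particles, 1.0, 6.0, motion, likelihood)
# The surviving particles now cluster near x = 4, since 10 - 4 = 6.
```

Repeating this cycle as the robot moves and senses is what lets a platform like PatrolBot maintain an estimate of its position and orientation from which to plan a path to the next waypoint.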

Figure 2.8. Autonomous Guided Robots[2]

2.2.6 Chefbot Autonomous Mobile Robot


The robot should publish its odometry/position data with respect to the
starting position. The hardware components that provide odometry information are
wheel encoders, an IMU and 2D/3D cameras (visual odometry). There should be a
laser scanner or a 3D vision sensor that can act as a laser scanner. The laser
scanner data is essential for the map-building process using SLAM. The robot should
publish the transforms of the sensors and other robot components using ROS
transform (tf). The base controller is a ROS node that converts a twist message
from the Navigation stack into the corresponding motor velocities.[2,20]
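The conversion at the heart of such a base controller, from a twist command to motor velocities, can be sketched for a differential-drive base as follows. The function and the wheel geometry numbers are illustrative assumptions, not Chefbot's actual code:

```python
def twist_to_wheel_velocities(linear_x, angular_z, wheel_base, wheel_radius):
    """Convert a twist command (forward speed in m/s, turn rate in rad/s)
    into left/right wheel angular velocities in rad/s."""
    v_left = linear_x - angular_z * wheel_base / 2.0   # left wheel linear speed
    v_right = linear_x + angular_z * wheel_base / 2.0  # right wheel linear speed
    return v_left / wheel_radius, v_right / wheel_radius

# Pure rotation: the wheels spin in opposite directions at equal speed.
wl, wr = twist_to_wheel_velocities(0.0, 1.0, wheel_base=0.3, wheel_radius=0.05)
# wl ≈ -3.0 rad/s, wr ≈ 3.0 rad/s
```

In an actual ROS base controller, a function like this would sit inside the callback of a subscriber to the velocity command topic, with its outputs sent on to the motor drivers.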

Figure 2.9. Chefbot Autonomous Mobile Robot[20]



2.3 Theory Background of Robot Operation System (ROS)


In the early days at Stanford (2007 and earlier), the first pieces of what would
eventually become ROS were beginning to come together. Eric Berger and Keenan
Wyrobek, PhD students working in Salisbury's robotics laboratory at Stanford, were
leading the Personal Robotics Program. While working
on robots to do manipulation tasks in human environments, the two students noticed
that many of their colleagues were held back by the diverse nature of robotics: an
excellent software developer might not have the hardware knowledge required;
someone developing state of the art path planning might not know how to do the
computer vision required. In an attempt to remedy this situation, the two students set
out to make a baseline system that would provide a starting place for others in
academia to build upon. In the words of Eric Berger, “something that didn’t suck, in
all of those different dimensions”.
In their first steps towards this unifying system, the two built the PR1 as a
hardware prototype and began to work on software for it, borrowing the best
practices from other early open source robotic software frameworks, particularly
switchyard, a system that Morgan Quigley, another Stanford PhD student, had been
working on in support of the STAIR (Stanford Artificial Intelligence Robot) by the
Stanford Artificial Intelligence Laboratory. Early funding of US$50,000 was
provided by Joanna Hoffman and Alain Rossmann, which supported the development
of the PR1. While seeking funding for further development, Eric Berger and Keenan
Wyrobek met Scott Hassan, the founder of Willow Garage, a technology incubator
which was working on an autonomous SUV and a solar autonomous boat. Hassan
shared Berger and Wyrobek's vision of a “Linux for robotics”, and invited them to
come and work at Willow Garage. Willow Garage was started in January 2007, and
the first commit of ROS code was made to SourceForge on November 7, 2007.
Willow Garage (2007-2013) began developing the PR2 robot as a follow-up to
the PR1, and ROS as the software to run it. Groups from more than twenty institutions
made contributions to ROS, both the core software and the growing number of
packages which worked with ROS to form a greater software ecosystem. The fact that
people outside of Willow were contributing to ROS (particularly from Stanford's
STAIR project) meant that ROS was a multi-robot platform from the beginning.
While Willow Garage had originally had other projects in progress, they were
scrapped in favor of the Personal Robotics Program: focused on producing the PR2 as
a research platform for academia and ROS as the open source robotics stack that
would underlie both academic research and tech startups, much like the LAMP stack
did for web-based startups.
In December 2008, Willow Garage met the first of their three internal
milestones: continuous navigation for the PR2 over a period of two days and a
distance of pi kilometers. Soon after, an early version of ROS (0.4 Mango Tango) was
released, followed by the first RVIZ documentation and the first paper on ROS. In
early summer, the second internal milestone – having the PR2 navigate the office,
open doors, and plug itself in – was reached. This was followed in August by the
initiation
of the ROS.org website. Early tutorials on ROS were posted in December, preparing
for the release of ROS 1.0, in January 2010. This was Milestone 3: producing tons of
documentation and tutorials for the enormous capabilities that Willow Garage's
engineers had developed over the preceding 3 years. [20]
Following this, Willow Garage achieved one of its longest held goals: giving
away 10 PR2 robots to worthy academic institutions. This had long been a goal of the
founders, as they felt that the PR2 could kick-start robotics research around the world.
They ended up awarding eleven PR2s to different institutions, including University of
Freiburg (Germany), Bosch, Georgia Tech, KU Leuven (Belgium), MIT, Stanford,
TU Munich (Germany), UC Berkeley, U Penn, USC, and University of Tokyo
(Japan). This, combined with Willow Garage's highly successful internship program
(run from 2008 to 2010 by Melonee Wise), helped to spread the word about ROS
throughout the robotics world. The first official ROS distribution release, ROS Box
Turtle, was released on March 2nd of 2010, marking the first time that ROS was
officially distributed with a set of versioned packages for public use. These
developments led to the first drone running ROS, the first autonomous car running
ROS, and the adaptation of ROS for Lego Mindstorms. With the PR2 Beta program
well underway, the PR2 robot was officially released for commercial purchase on
September 9th, 2010.
2011 was a banner year for ROS with the launch of ROS Answers, a Q/A
forum for ROS users, on February 15th; the introduction of the highly successful
Turtlebot robot kit on April 18th; and the total number of ROS repositories passing
100 on May 5th. Willow Garage began 2012 by creating the Open Source Robotics
Foundation (OSRF) in April. The OSRF was immediately awarded a software
contract by DARPA. Later that year, the first ROSCon was held in St. Paul, MN, the
first book on ROS, ROS By Example, was published, and Baxter, the first commercial
robot to run ROS, was announced by Rethink Robotics. Soon after passing its fifth
anniversary in November, ROS began running on every continent on December 3rd,
2012.
In February 2013, the OSRF became the primary software maintainers for
ROS, foreshadowing the announcement in August that Willow Garage would be
absorbed by its founders, Suitable Technologies. At this point, ROS had released
seven major versions (up to ROS Groovy), and had users all over the globe. This
chapter of ROS development would be finalized when Clearpath Robotics took over
support responsibilities for the PR2 in early 2014. [16]
OSRF and Open Robotics (2013-present). In the years since OSRF took over
primary development of ROS, a new version has been released every year, while
interest in ROS continues to grow. ROSCons have occurred every year since 2012,
co-located with either ICRA or IROS, two flagship robotics conferences. Meetups of
ROS developers have been organized in a variety of countries, a number of
ROS books have been published, and many educational programs initiated. On
September 1st, 2014, NASA announced the first robot to run ROS in space:
Robonaut 2, on the International Space Station. In 2017, the OSRF changed its name
to Open Robotics. Tech giants Amazon and Microsoft began to take an interest in
ROS during this time, with Microsoft porting core ROS to Windows in September
2018, followed by Amazon Web Services releasing RoboMaker in November.
Perhaps the most important development of the OSRF/Open Robotics years
thus far (not to discount the explosion of robot platforms which began to support ROS
or the enormous improvements in each ROS version) was the proposal of ROS2, a
significant API change to ROS intended to support real-time programming, a wider
variety of computing environments, and the use of more modern technology. ROS2
was announced at ROSCon 2014, the first commits to the ros2 repository were made
in February 2015, followed by alpha releases in August 2015. The first distribution
release of ROS2, Ardent Apalone, was released on December 8th, 2017, ushering in a
new era of next-generation ROS development. [20,16]
