
THAI NGUYEN UNIVERSITY OF TECHNOLOGY

FACULTY OF INTERNATIONAL TRAINING

REPORT
VISION OF ROBOT ARM

Hoang Mai Trung


School of Mechanical Engineering, Thai Nguyen University of
Technology, 3-2 Road, Tich Luong District, Thai Nguyen City,
Thai Nguyen, Vietnam.
E-mail: k175905218022@tnut.edu.vn
Instructor: Ngo Ngoc Vu
Lecturer of Faculty of International Training, Thai Nguyen
University of Technology
ACKNOWLEDGEMENTS
I want to thank my teacher and everyone who guided me to complete this
project. I would like to express my deepest appreciation to my supervisor,
Ngo Ngoc Vu, who has helped and guided me in this subject and throughout this
project. Without this guidance and persistent help, the project could not have
been completed so smoothly. Once again, I thank Ngo Ngoc Vu for guiding and
helping me in undertaking this project.

ABSTRACT
Machine vision has played a vital role in the evolution of industrial robotics,
and the two are becoming increasingly integrated. A main reason for this trend is
that cameras have become more powerful and more accurate in rugged industrial
settings than ever before. While robotic capabilities have certainly improved too,
it’s been the ability of cameras to let robots see what’s around them that’s provided
some of the most profitable and productive benefits.

Machine vision allows a robot to see what it’s doing, in a sense. Without
machine vision the robot would be blind – only capable of repeating the same
exact task over and over until it’s reprogrammed.

This technology allows a robot to adjust to obstacles in its environment and
complete different preprogrammed tasks by recognizing which one needs to be
completed. One of the major benefits of machine vision in industrial robots is
increased flexibility: one robot with vision can do the tasks of several blind
robots. As long as the robot is preprogrammed and the tooling is correct, it
can switch between tasks seamlessly with little to no downtime. Another benefit
is that robots with vision technology require less programming. Typically, you
would only need to program once before a robot starts up, whereas blind robots
must be continually reprogrammed to widen their skill sets and improve their
performance. Related to flexibility and reduced programming, robots with
machine vision do not need precise part placement to do their job productively.
The ability of a robot to adjust to its environment allows it to pick up,
locate, and/or work on a part in any orientation.

I. Introduction
There are many benefits of using industrial robots with vision, but increased
flexibility, reduced programming time and less investment in loading/unloading
processes are some of the most obvious benefits. Machine vision has been used in
robotic arms for years now, but it continues to advance the capabilities of industrial
robots and find new ways to achieve productivity for manufacturers.

With the rapidly growing demand for industrial automation in the
manufacturing sector, machine vision now plays an important role in many fields.
Many applications of machine vision involve the inspection of components and
surfaces for defects that may affect the quality of products. Machine vision has
also been employed in varying degrees to assist in manipulating manufacturing
equipment in the performance of specific tasks. The machine vision method is
based on the human vision system, and has been used to detect objects of all types.
The information is transmitted to a personal computer via the signal line, and
the spatial position of the measured object in the world coordinate system is
then calculated. Because machine vision is considered to be an essential
sensing function for robots, various types of general- and special-purpose
image processing hardware have been developed to improve the performance of
visual data acquisition. A typical image processing system requires
high-resolution, homogeneous, high-speed processing over the full screen
region. In particular, a workpiece held in a robotic manipulator can be guided
to a targeted position using a machine vision feedback procedure together with
a robot programmed with a general set of movement instructions.
During the last several years, machine vision systems have been applied in
manufacturing with the goal of improving both quality and productivity. Machine
vision unifies illumination, imaging, image processing, and analysis, to provide
non-contact localization, characterization, and manipulation of stationary or
moving objects.
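
As a concrete illustration of this pixel-to-world calculation, the short
sketch below maps an object's image coordinates onto a flat platform through a
planar homography. This is a minimal example, not the cited systems' code; the
point correspondences and values are illustrative assumptions.

```python
# A minimal sketch (not the authors' code): mapping image-plane pixel
# coordinates to world coordinates on a flat workpiece platform using a
# planar homography. All point values below are illustrative.
import cv2
import numpy as np

# Pixel coordinates of four reference marks seen by the camera (illustrative).
img_pts = np.float32([[102, 88], [538, 92], [531, 411], [97, 405]])
# The same four marks measured on the platform, in millimetres (illustrative).
world_pts = np.float32([[0, 0], [280, 0], [280, 220], [0, 220]])

# 3x3 homography relating the image plane to the platform plane.
H = cv2.getPerspectiveTransform(img_pts, world_pts)

# Convert a detected object centre from pixels to world coordinates.
centre_px = np.float32([[[315.4, 247.9]]])          # shape (1, 1, 2)
centre_mm = cv2.perspectiveTransform(centre_px, H)  # -> world X, Y in mm
print(centre_mm.ravel())
```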

Nowadays, the machine vision method is widely used in many applications,
including quality detection, non-contact measurement systems, industrial
automation, electronic semiconductors, medical treatment, and defect
inspection.

Quang-Cherng Hsu, Ngoc-Vu Ngo, and Rui-Hong Ni [1] developed a machine vision
system for an automatic classification process under different lighting
environments and applied it to the operation of a robot arm with 6 degrees of
freedom (DOF). In order to obtain accurate positioning information, the overall
image is captured by a CMOS camera mounted above the working platform. For each
condition, global and local contrast threshold operations were used to obtain
good image quality. In their study, a quadratic transformation describing the
relationship between the image coordinates and the world coordinates was
proposed and compared to a linear transformation as well as the camera
calibration model in the MATLAB toolbox. Experimental results show that in a
back-lighting environment the image quality is improved, such that the
positions of the centers of objects are more accurate than in a front-lighting
environment. According to the calibration results, the quadratic transformation
is more accurate than the other methods.

Machine-vision-based reading and sorting devices have been used to measure and
classify items. Ngoc-Vu Ngo, Glen Andrew Porter, and Quang-Cherng Hsu [2]
extended this application to the sorting and assembling of items identified by
their geometry and color. In their study, they developed an improved machine
vision system that is capable of discerning and categorizing items of distinct
geometries and colors and utilizes a computer-controlled robotic system to
manipulate and segregate these items. Hence, a machine vision system for an
automatic classification process while operating a robotic arm was developed.
To obtain positioning information, the proposed system uses cameras mounted
above the working platform to acquire images. Perspective and quadratic
transformations were used to transform the image coordinates of the calibration
system to the world coordinates through a calibration procedure. By these
methods, the proposed system can ascertain the two- and three-dimensional
coordinates of the objects and automatically perform classification and
assembly operations using the data collected from the visual recognition
system.

Frank S. Cheng and Andrew Denman [3] presented a method of applying the 2D
vision technique for enhanced industrial robot intelligence in pick-and-place
operations. Their approach addresses the concerns and solutions related to 2D
vision system setup, robot system setup, and robot programming. The results
show that the developed method is effective for improved accuracy, flexibility,
and intelligence in vision-guided robot operations. A basic 2D vision system
uses a single camera and two-dimensional (2D) pictures for object
identification. Because of its simplicity in image processing, the technology
is commonly used in industrial robot applications. However, a 2D image cannot
provide depth information about the scene, which often makes it difficult to
identify the true position of a 3D part on a surface.

Most robotic arms of this kind are controlled by using an accelerometer sensor
with an artificial intelligence algorithm. Ariful Islam Bhuyan and Tuton
Chandra Mallick [4] proposed a gesture-recognition-based 6-DOF robotic arm
controller using a gyroscope together with an accelerometer to improve
stability and to detect the rotational gesture of the human arm. The arm also
has the capability to grab objects. To find the angular position of an object,
the easiest way is to fuse a 3-axis accelerometer and a 3-axis gyroscope. A
low-cost MEMS chip (integrating a 3-axis accelerometer and a 3-axis gyroscope)
is used to detect the human arm gesture as well as its angular position. Here,
the gyroscope provides gesture orientation data to determine dynamic gesture
behavior. An artificial intelligence algorithm is used to evaluate all gesture
data, which helps to train the robotic arm.

II. Literature review


1. Structure

Quang-Cherng Hsu, Ngoc-Vu Ngo, and Rui-Hong Ni [1] developed a machine vision
system for an automatic classification process under different lighting
environments and applied it to the operation of a robot arm with 6 degrees of
freedom (DOF).

With the rapidly growing demand for industrial automation in the
manufacturing sector, machine vision now plays an important role in many fields.
Many applications of machine vision involve the inspection of components and
surfaces for defects that may affect the quality of products. Machine vision has
also been employed in varying degrees to assist in manipulating manufacturing
equipment in the performance of specific tasks.

Fig. 1 Structure of the experimental system.[1]

Figure 1 shows the experimental setup of the system used in this study. The
overall system includes a robotic arm, a personal computer (PC), and a platform on
which a backlight box, the target objects, and workpieces were located. A cover
was installed above the platform to shield the work area from ambient room
lighting. The CMOS camera (C910) of the proposed system was mounted
underneath the cover, directly above the platform. Two front-lighting sources were
also mounted underneath the cover on opposite sides of the platform. The
computer was equipped with intelligent image analysis software to be used with
the 2-D image-based vision system, and an object detection algorithm.

The proper extraction and processing of the object's images is an essential
part of the machine vision system. To this end, images were captured and
contrast threshold operations were subsequently conducted. In the image
analysis process, mathematical morphology was used to implement object
recognition, image enhancement, segmentation, and defect inspection.
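
To make the thresholding and morphology step concrete, here is a minimal
OpenCV sketch of this kind of pipeline. It is an illustration under
assumptions (the file name and kernel size are placeholders), not the paper's
actual implementation.

```python
# A hedged illustration of a contrast-threshold plus morphology pipeline.
import cv2

gray = cv2.imread("platform.png", cv2.IMREAD_GRAYSCALE)  # assumed input image

# Global contrast threshold; Otsu picks the threshold level automatically.
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Morphological opening removes small speckle noise; closing fills pin holes.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
clean = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
clean = cv2.morphologyEx(clean, cv2.MORPH_CLOSE, kernel)

# Contours of the remaining blobs are candidate objects for recognition.
contours, _ = cv2.findContours(clean, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
print(f"{len(contours)} candidate objects found")
```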

Fig. 2 The application stages of the proposed vision system.[1]

The purpose of the camera calibration is to improve the transformation
results, which were, in this case, based on coordinate measurements rather than
on a known transformation mapping the coordinates from one coordinate system to
the other. The calibration was also used to construct the projection model that
relates the two coordinate systems, and to identify the relative camera
parameters, which include both intrinsic and extrinsic parameters.

2. Illumination conditions

In this paper, five illumination conditions were investigated: (A)
back-lighting, (B) front-lighting and room lighting without a cover, (C)
front-lighting and some room lighting with a cover, (D) only front-lighting
with a cover, and (E) only front-lighting without a cover. The relative
locations of the light fixtures in these conditions are shown in Fig. 1.

3. Calibration work

3.1 Camera calibration using MATLAB

In this study, we adopt Zhang's classic calibration method [5]. MATLAB is
chosen as the programming tool. Before the experiment, a chessboard pattern was
constructed with 70 squares (10 and 7 squares along the X and Y directions,
respectively). The camera in the proposed system was used to capture 12 images
of the chessboard pattern under different orientations.

Fig. 3 Camera calibration using MATLAB. a Auxiliary angle 1. b Auxiliary
angle 2. c Auxiliary angle 3. d Auxiliary angle 4. e Auxiliary angle 5. f Main
angle. [1]

Fig. 3 shows that the plane perpendicular to the calibration plane was chosen
as the main angle for the calibration measurement. Using the Camera Calibration
Toolbox of MATLAB, all the parameters of the camera can be obtained, including
the intrinsic parameters and the extrinsic matrix.
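
For readers without MATLAB, the same Zhang-style calibration [5] can be
sketched in Python with OpenCV. This is an assumed equivalent, not the authors'
script; the file names and square size are placeholders, and a 10 x 7 square
board exposes 9 x 6 inner corners.

```python
# A sketch of Zhang's calibration [5] with OpenCV instead of MATLAB.
import glob
import cv2
import numpy as np

pattern = (9, 6)   # inner corners (cols, rows) of a 10 x 7 square chessboard
square = 25.0      # square edge length in mm (assumed value)

# World coordinates of the corners on the planar chessboard (Z = 0).
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_pts, img_pts = [], []
for path in glob.glob("calib_*.png"):  # the 12 chessboard views (assumed names)
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)

# Returns the intrinsic matrix K, distortion coefficients, and per-view
# extrinsics (rotation and translation vectors).
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, gray.shape[::-1], None, None)
print("reprojection RMS:", rms, "\nintrinsics:\n", K)
```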

Fig. 4 The standardized calibration pattern.[1]

Fig. 4 shows the standardized calibration pattern that was used to establish
the relationship between the image coordinate system and the world coordinate
system using the linear and quadratic transformations. The calibration pattern
consists of 156 black circles with a diameter of 11 mm, arranged in 12 rows and
13 columns. The distance between any two adjacent black circles is 28 mm in the
X direction and 22 mm in the Y direction.

4. Platform description


Ngoc-Vu Ngo, Glen Andrew Porter, and Quang-Cherng Hsu [2] extended this
application to the sorting and assembling of items identified by their
geometry and color.

Fig. 5. (Color online) Experimental system.[2]

Figure 5 shows the components of the vision system. The overall hardware
included a robotic arm (HIWIN), three CMOS cameras, and a PC. The dual CMOS
C615 cameras and the third camera, a C525, were mounted on a lattice above the
platform. The dual cameras were used to capture the assembly modules, and the
C525 camera was used to capture the assembly parts. The 3D-image-based vision
equipment required an intelligent image analysis software program and an object
detection algorithm. The proposed framework combined control and image
processing to perform the desired operations.
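
Although the paper does not publish its 3D routine, a rectified dual-camera
pair recovers depth from disparity in the standard way; the following minimal
sketch illustrates this, with the focal length and baseline as assumed values.

```python
# Depth from disparity for a rectified stereo pair (assumed parameters).
f_px = 1200.0        # focal length in pixels (assumed)
baseline_mm = 60.0   # distance between the two camera centres (assumed)

def depth_from_disparity(x_left, x_right):
    """Depth of a matched point from its horizontal pixel disparity."""
    d = x_left - x_right           # disparity in pixels
    return f_px * baseline_mm / d  # Z in mm; larger disparity = nearer object

print(depth_from_disparity(412.0, 388.0))  # illustrative matched pair
```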
Fig. 6. Flowchart schema of the proposed system.[2]

As shown in Fig. 6, once the proposed system received a command from the
user, the camera calibration processes, including 2D and 3D calibrations, were
implemented first, and then the workspace was scanned to acquire the desired
images. The acquired images were processed to classify the colors, and then
blob analysis was used to calculate the properties of the blobs, such as area,
perimeter, and compactness, to identify the target objects and mark their
centers. Afterwards, the measurement data were sent to the robotic arm for
manipulation. For the 2D coordinate calibration, a quadratic transformation was
used in this study, with regression analysis to extract the point coordinates
for the calibration process. In this work, lens distortion was not considered.
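
A minimal sketch of such a blob analysis step is shown below, computing area,
perimeter, one common compactness measure, and a centroid marker per object
with OpenCV; the input mask file name is an assumption.

```python
# Blob analysis: area, perimeter, compactness, and centre of each object.
import cv2
import numpy as np

binary = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)  # assumed binary mask
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)

for c in contours:
    area = cv2.contourArea(c)
    perimeter = cv2.arcLength(c, closed=True)
    # One common compactness measure: 4*pi*A / P^2 (1.0 for a perfect circle).
    compactness = 4 * np.pi * area / perimeter**2 if perimeter else 0.0
    m = cv2.moments(c)
    if m["m00"]:  # centroid of the blob serves as the object's centre marker
        cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
        print(f"centre=({cx:.1f},{cy:.1f}) area={area:.0f} "
              f"compactness={compactness:.2f}")
```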

Ariful Islam Bhuyan and Tuton Chandra Mallick [4] proposed a
gesture-recognition-based 6-DOF robotic arm controller using a gyroscope
together with an accelerometer to improve stability and to detect the
rotational gesture of the human arm. The arm also has the capability to grab
objects. To find the angular position of an object, the easiest way is to fuse
a 3-axis accelerometer and a 3-axis gyroscope. A low-cost MEMS chip
(integrating a 3-axis accelerometer and a 3-axis gyroscope) is used to detect
the human arm gesture as well as its angular position. Here, the gyroscope
provides gesture orientation data to determine dynamic gesture behavior. An
artificial intelligence algorithm is used to evaluate all gesture data, which
helps to train the robotic arm. The popular Kalman filter is used to find the
exact position of the human arm more accurately.
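
The paper's fusion is Kalman-based; as a compact stand-in that shows the same
gyro-accelerometer idea, the sketch below uses a complementary filter instead,
with the sample rate and blend weight as assumed tuning values.

```python
# Complementary filter: a simpler stand-in for the paper's Kalman fusion.
# The gyro tracks fast motion; the accelerometer corrects slow drift.
import math

ALPHA = 0.98  # gyro weight (assumed tuning value)
DT = 0.01     # sample period in seconds (assumed 100 Hz)

def fuse_pitch(pitch_prev, gyro_rate_dps, ax, ay, az):
    """One filter step: gyro-integrated angle blended with the accel angle."""
    accel_pitch = math.degrees(math.atan2(ax, math.sqrt(ay*ay + az*az)))
    gyro_pitch = pitch_prev + gyro_rate_dps * DT
    return ALPHA * gyro_pitch + (1 - ALPHA) * accel_pitch

# Illustrative step: previous pitch 10 deg, gyro 5 dps, accel in g units.
print(fuse_pitch(10.0, 5.0, 0.17, 0.0, 0.98))
```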

Fig. 7. Robotic arm controlled by human hand gesture.[4]

The whole system is divided into two sections: a data transmitting section and
a data receiving section. The two sections are interfaced with the XBee
protocol. A hand-made artificial robotic arm with a 180-degree rotation angle
was used, together with an MPU-6050 IMU board. The MPU-6050 combines a 3-axis
gyroscope and a 3-axis accelerometer on the same silicon die, together with an
onboard Digital Motion Processor (DMP) capable of processing complex 9-axis
motion fusion algorithms. The parts feature a user-programmable gyroscope
full-scale range of ±250, ±500, ±1000, and ±2000 °/s (dps) and a
user-programmable accelerometer full-scale range of ±2 g, ±4 g, ±8 g, and
±16 g [10]. The board detects hand gestures and sends the data to the main
board for processing, where the proper angle of movement of the hand gesture is
calculated with the help of geometry. These data are then sent to the receiving
section by the XBee module; the transmitted data are received by the receiving
XBee module and processed by the receiving section's main board.
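
Converting the raw 16-bit readings to physical units uses the datasheet
sensitivities for the selected full-scale ranges (16384 LSB/g at ±2 g,
131 LSB per °/s at ±250 dps). The raw samples in the sketch below are
illustrative, not measured data.

```python
# Scaling raw MPU-6050 samples to physical units at the narrowest ranges.
ACCEL_LSB_PER_G = 16384.0  # sensitivity at the +/-2 g full-scale setting
GYRO_LSB_PER_DPS = 131.0   # sensitivity at the +/-250 deg/s setting

def convert(raw_accel, raw_gyro):
    """Turn raw 16-bit counts into g and deg/s values."""
    accel_g = [a / ACCEL_LSB_PER_G for a in raw_accel]
    gyro_dps = [g / GYRO_LSB_PER_DPS for g in raw_gyro]
    return accel_g, gyro_dps

# Illustrative raw counts, not data from the paper.
accel_g, gyro_dps = convert((1024, -512, 16200), (262, -131, 0))
print(accel_g, gyro_dps)  # ~[0.06, -0.03, 0.99] g, [2.0, -1.0, 0.0] dps
```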

Fig. 8. Wireless data transmitting section with XBee module. The right figure
shows the MPU-6050 IMU board. The total system is able to transmit hand
gesture data wirelessly. [4]

First, the sensor reads the hand gesture data and sends it to the main board
for calculation. The data is then sent over the XBee module to the receiving
end, where the receiver module receives it and transfers it to the receiving
main board, which uses the data to drive the servo motors. The whole system
can be expressed as follows:

Fig. 9. Block diagram of the hand gesture data receiving section. This system
also controls the robotic arm. [4]

Controlling a robotic arm is still a hassle and time-consuming in many
industrial sectors. Many universities and researchers work in this field to
make it simpler and smarter (or more skilled). A unique aspect is human-robot
interaction, and this interaction mode varies for different purposes. The
experimental design in [4] is a prototype intended to improve stability and
response time. The control strategy is synchronous with the sensor movement,
that is, with the human arm gesture. Many designs proposed in other research
papers use a neural network to find the Z-axis rotational angle; in this
system, however, just a 3-axis gyroscope is used to find the Z-axis rotational
angle, which makes the control system easier and gives a more stable and
synchronous response. By applying this prototype methodology, an industrial
robotic arm can be controlled easily. With a wireless camera module and a
powerful protocol, it may also be controllable from a great distance, without
the face-to-face presence of a human user.
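
The Z-axis angle estimate described here amounts to integrating the
gyroscope's z-rate over time, as the minimal sketch below shows; the sample
period is an assumed value.

```python
# Yaw (Z-axis rotation) by integrating the gyroscope's z-axis angular rate.
DT = 0.01  # 100 Hz sample period (assumed)

def update_yaw(yaw_deg, gyro_z_dps):
    """Accumulate yaw by integrating the z-axis angular rate over one step."""
    return yaw_deg + gyro_z_dps * DT

yaw = 0.0
for rate in [10.0] * 100:  # illustrative: 10 deg/s held for one second
    yaw = update_yaw(yaw, rate)
print(yaw)                 # ~10 degrees
```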

III. Conclusion
References

[1] Quang-Cherng Hsu, Ngoc-Vu Ngo, and Rui-Hong Ni (12 October 2018),
Development of a faster classification system for metal parts using machine
vision under different lighting environments.

[2] Ngoc-Vu Ngo, Glen Andrew Porter, and Quang-Cherng Hsu, Development of a
Color Object Classification and Measurement System Using Machine Vision.

[3] Frank S. Cheng and Andrew Denman, A Study of Using 2D Vision System for
Enhanced Industrial Robot Intelligence.

[4] Ariful Islam Bhuyan and Tuton Chandra Mallick, Gyro-Accelerometer based
control of a robotic Arm using AVR Microcontroller.

[5] Zhang Z (2000), A flexible new technique for camera calibration. IEEE
Trans Pattern Anal Mach Intell 22:1330–1334.
