REPORT
VISION OF ROBOT ARM
ABSTRACT
Machine vision has played a vital role in the evolution of industrial robotics,
and the two are becoming increasingly integrated. A main reason for this trend is
that cameras have become more powerful and more accurate in rugged industrial
settings than ever before. While robotic capabilities have certainly improved too,
it is the ability of cameras to let robots see their surroundings that has provided
some of the most profitable and productive benefits.
Machine vision allows a robot to see what it’s doing, in a sense. Without
machine vision the robot would be blind – only capable of repeating the same
exact task over and over until it’s reprogrammed.
I. Introduction
There are many benefits of using industrial robots with vision, but increased
flexibility, reduced programming time and less investment in loading/unloading
processes are some of the most obvious. Machine vision has been used in
robotic arms for years now, but it continues to advance the capabilities of industrial
robots and to open new ways for manufacturers to improve productivity.
Figure 1 shows the experimental setup of the system used in this study. The
overall system includes a robotic arm, a personal computer (PC), and a platform on
which a backlight box, the target objects, and work pieces were located. A cover
was installed above the platform to shield the work area from ambient room
lighting. The CMOS camera (C910) of the proposed system was mounted
underneath the cover, directly above the platform. Two front-lighting sources were
also mounted underneath the cover on opposite sides of the platform. The
computer was equipped with intelligent image-analysis software for the 2-D
image-based vision system, together with an object detection algorithm.
2. Illumination conditions
3. Calibration work
As shown in Fig. 3, the plane perpendicular to the calibration plane was
chosen as the main reference for the calibration measurements. Using the Camera
Calibration Toolbox for MATLAB, all of the camera parameters can be obtained,
including the intrinsic parameters and the extrinsic matrix.
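As an illustration of what these parameters mean, the sketch below (Python with NumPy; the numeric values are hypothetical, not taken from this study) projects a 3-D world point to pixel coordinates using an intrinsic matrix K and extrinsic rotation R and translation t:

```python
import numpy as np

# Hypothetical intrinsic matrix K (focal lengths fx, fy and principal
# point cx, cy in pixels) -- illustrative values only.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Extrinsic parameters: rotation R (identity here) and translation t,
# mapping world coordinates into the camera frame.
R = np.eye(3)
t = np.array([0.0, 0.0, 0.0])

def project(point_world):
    """Project a 3-D world point to 2-D pixel coordinates."""
    p_cam = R @ point_world + t     # world frame -> camera frame
    uvw = K @ p_cam                 # camera frame -> homogeneous image plane
    return uvw[:2] / uvw[2]         # perspective divide

# A point 2 m in front of the camera, 0.1 m to the right:
u, v = project(np.array([0.1, 0.0, 2.0]))
print(u, v)   # 800*0.1/2 + 320 = 360.0, 240.0
```

Calibration recovers K, R, and t from images of a known pattern; the projection above is simply the forward model those parameters describe.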
Figure 5 shows the components of the vision system. The overall hardware
included a robotic arm (HIWIN), three CMOS cameras, and a PC. A pair of
CMOS cameras (C615) and a third camera (C525) were mounted on a lattice
above the platform. The camera pair was used to capture the assembly modules,
and the C525 camera was used to capture the assembly parts. The 3-D
image-based vision equipment required an intelligent image-analysis software
program and an object detection algorithm. The proposed framework combined
control and image processing to perform the desired operations.
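One common object detection approach in such image-analysis pipelines is blob analysis. The minimal Python sketch below (illustrative only, not the study's actual software) computes the area, perimeter, compactness, and centroid of the foreground region in a small binary mask:

```python
import math

def blob_properties(mask):
    """Compute area, perimeter, compactness, and centroid of the
    foreground (value 1) region in a binary mask (list of lists)."""
    h, w = len(mask), len(mask[0])
    area, perimeter = 0, 0
    sum_r = sum_c = 0
    for r in range(h):
        for c in range(w):
            if not mask[r][c]:
                continue
            area += 1
            sum_r += r
            sum_c += c
            # Each 4-neighbour that is background (or outside the
            # image) exposes one edge, contributing to the perimeter.
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                rr, cc = r + dr, c + dc
                if not (0 <= rr < h and 0 <= cc < w) or not mask[rr][cc]:
                    perimeter += 1
    compactness = 4 * math.pi * area / perimeter ** 2
    centroid = (sum_r / area, sum_c / area)
    return area, perimeter, compactness, centroid

# A 3x3 solid square: area 9, perimeter 12, centroid at its centre.
props = blob_properties([[1, 1, 1], [1, 1, 1], [1, 1, 1]])
print(props)
```

Compactness (4πA/P²) is 1 for a circle and smaller for elongated or ragged shapes, which is why it is useful for telling target objects apart.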
Fig. 6. Flowchart of the proposed system. [2]
As shown in Fig. 6, once the proposed system received a command from the
user, the camera calibration processes, including the 2-D and 3-D calibrations,
were implemented first, and the workspace was then scanned to acquire the
desired images. The acquired images were processed to classify colors, and blob
analysis was used to calculate the properties of the blobs, such as area, perimeter,
and compactness, in order to identify the target objects and mark their centers.
Afterwards, the measurement data were sent to the robotic arm for manipulation.
For the 2-D coordinate calibration, we used a quadratic transformation, fitted by
regression analysis, to map the extracted point coordinates in the calibration
process. Lens distortion was not considered in this work.
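The quadratic transformation above can be sketched in Python with NumPy: a least-squares regression fits six coefficients that map pixel coordinates (u, v) to a world coordinate. The calibration points and the "true" mapping below are synthetic, purely for illustration:

```python
import numpy as np

def design(u, v):
    """Quadratic basis terms for one pixel coordinate (u, v)."""
    return [1.0, u, v, u * u, u * v, v * v]

# Synthetic calibration points: pixel coordinates of markers whose
# world coordinates are known (values illustrative, not measured).
pixels = [(100, 100), (300, 100), (500, 100),
          (100, 300), (300, 300), (500, 300),
          (100, 500), (300, 500), (500, 500)]
# Suppose the true mapping is a mild quadratic in u:
true_x = lambda u, v: 0.5 + 0.2 * u + 1e-5 * u * u
world_x = [true_x(u, v) for u, v in pixels]

# Least-squares fit of the six quadratic coefficients.
A = np.array([design(u, v) for u, v in pixels])
coeffs, *_ = np.linalg.lstsq(A, np.array(world_x), rcond=None)

# Predict the world x-coordinate of a new pixel:
u, v = (200, 200)
x_pred = np.dot(design(u, v), coeffs)
print(x_pred)   # close to true_x(200, 200) = 0.5 + 40 + 0.4 = 40.9
```

A second fit of the same form gives the world y-coordinate; the quadratic terms absorb mild perspective and scale effects that a purely linear mapping would miss.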
The whole system is divided into two sections: a data-transmitting section and
a data-receiving section, interfaced over the XBee protocol. A hand-made robotic
arm with a 180-degree rotation range was used, together with an MPU-6050 IMU
board. The MPU-6050 combines a 3-axis gyroscope and a 3-axis accelerometer
on the same silicon die, together with an onboard Digital Motion Processor (DMP)
capable of processing complex 9-axis MotionFusion algorithms. The parts feature
a user-programmable gyroscope full-scale range of ±250, ±500, ±1000, and
±2000 °/s (dps) and a user-programmable accelerometer full-scale range of ±2 g,
±4 g, ±8 g, and ±16 g [10]. The sensor detects hand gestures and sends the data
to the main board for processing, where the corresponding arm angles are
calculated geometrically. These data are then transmitted to the receiving section
by the XBee module, where they are received by the receiving XBee module and
processed by that section's main board.
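The full-scale ranges above imply simple conversion factors from the raw signed 16-bit sensor readings to physical units; a minimal Python sketch (register access and sign conventions omitted):

```python
def gyro_dps(raw, full_scale=250):
    """Convert a signed 16-bit gyroscope reading to degrees/second
    for the selected full-scale range (±250 dps by default)."""
    return raw * full_scale / 32768.0

def accel_g(raw, full_scale=2):
    """Convert a signed 16-bit accelerometer reading to g
    for the selected full-scale range (±2 g by default)."""
    return raw * full_scale / 32768.0

print(gyro_dps(16384))   # half of full scale -> 125.0 dps
print(accel_g(16384))    # 16384 counts at +/-2 g -> 1.0 g
```

Selecting a wider full-scale range trades resolution for headroom: at ±2000 dps each count represents eight times more angular rate than at ±250 dps.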
Fig. 8. Wireless data-transmitting section with the XBee module; the right-hand
figure shows the MPU-6050 IMU board. The complete system can transmit
hand-gesture data wirelessly. [4]
First, the sensor reads the hand-gesture data and sends them to the main board
for calculation. The computed values are then sent over the XBee module to the
receiving end, where the receiver module passes the data to its main board, which
uses them to drive the servo motors. The whole system can be expressed as follows:
Fig. 9. Block diagram of the hand-gesture data-receiving section. This section
also controls the robotic arm. [4]
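The data exchange between the two sections can be sketched as a simple framed packet. The Python example below is an assumption for illustration: the header/length/checksum format is invented here, not the XBee API-mode frame (XBee transparent mode simply carries raw serial bytes):

```python
def encode_packet(angles):
    """Frame a list of servo angles (0-180 each) for transmission:
    0x7E header byte, payload length, payload, additive checksum.
    Hypothetical framing for illustration only."""
    payload = bytes(angles)
    checksum = sum(payload) & 0xFF
    return bytes([0x7E, len(payload)]) + payload + bytes([checksum])

def decode_packet(packet):
    """Recover the servo angles, validating header and checksum."""
    if packet[0] != 0x7E:
        raise ValueError("bad header")
    length = packet[1]
    payload = packet[2:2 + length]
    if sum(payload) & 0xFF != packet[2 + length]:
        raise ValueError("checksum mismatch")
    return list(payload)

# Round trip: base, shoulder, elbow, gripper angles in degrees.
sent = encode_packet([90, 45, 120, 10])
print(decode_packet(sent))   # [90, 45, 120, 10]
```

The checksum lets the receiving main board reject corrupted frames before commanding the servos, which matters on a lossy wireless link.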
Controlling a robotic arm is still cumbersome and time-consuming in many
industrial settings, and many universities and researchers are working to make it
simpler and smarter. The key aspect is human-robot interaction, whose mode
varies with the application. Our experimental design is a prototype in which we
tried to improve stability and response time. The control strategy is synchronous
with the sensor movement, that is, with the human arm gesture. Several papers
have proposed designs in which a neural network is used to determine the Z-axis
rotational angle; in this system, however, we used only a 3-axis gyroscope, which
determines the Z-axis rotational angle and makes the control system simpler. The
gyroscope also makes the system more stable and its response more synchronous.
By applying this prototype methodology, an industrial robotic arm can be
controlled easily. With a wireless camera module and a more powerful protocol,
the arm could also be controlled from a distance, without the face-to-face
presence of a human operator.
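The gyroscope-based Z-axis angle mentioned above can be estimated by integrating the angular-rate samples over time; a minimal Python sketch, assuming a constant rate and a fixed sample period purely for illustration:

```python
def integrate_yaw(gyro_z_dps, dt):
    """Estimate the Z-axis (yaw) rotation angle in degrees by
    integrating gyroscope angular-rate samples (deg/s) over time
    steps of dt seconds."""
    angle = 0.0
    for rate in gyro_z_dps:
        angle += rate * dt   # rectangle-rule integration
    return angle

# 100 samples at 90 deg/s over 1 s (dt = 0.01 s) -> about 90 degrees.
yaw = integrate_yaw([90.0] * 100, 0.01)
print(yaw)
```

In practice the integrated angle drifts with gyroscope bias, so a real controller would periodically re-zero or fuse the estimate with other sensors, but for short gestures plain integration is often sufficient.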
III. Conclusion
References
[1] Quang-Cherng Hsu, Ngoc-Vu Ngo, and Rui-Hong Ni (12 October 2018),
Development of a faster classification system for metal parts using machine
vision under different lighting environments.
[3] Frank S. Cheng and Andrew Denman, A Study of Using 2D Vision System for
Enhanced Industrial Robot Intelligence.
[5] Z. Zhang (2000), A flexible new technique for camera calibration. IEEE Trans.
Pattern Anal. Mach. Intell. 22:1330–1334.