
Semester 1, 2016-2017

Camera Vision

EAB 3606 AGRICULTURAL AUTOMATION

IMAGE SENSOR

An image sensor is a device that converts an optical image into an electronic signal. It is used mostly in digital cameras, camera modules and other imaging devices. Currently used types are semiconductor charge-coupled devices (CCD) or active pixel sensors in complementary metal-oxide-semiconductor (CMOS) technology.

CCD VS CMOS

Today, most digital still cameras use either a CCD image sensor or a CMOS sensor. Both types of sensor accomplish the same task of capturing light and converting it into electrical signals.
Each cell of a CCD image sensor is an analog
device. When light strikes the chip it is held
as a small electrical charge in each photo
sensor. The charges are converted to voltage
one pixel at a time as they are read from the
chip. Additional circuitry in the camera
converts the voltage into digital information.
A CMOS imaging chip is a type of active pixel
sensor made using the CMOS semiconductor
process. Extra circuitry next to each photo
sensor converts the light energy to a voltage.
Additional circuitry on the chip may be
included to convert the voltage to digital
data.

CMOS sensors can potentially be implemented with fewer components, use less power, and/or provide faster readout than CCD sensors. CCD is a more mature technology and is in most respects the equal of CMOS.

CMOS sensors are less expensive to manufacture than CCD sensors.

USB CAMERA

USB
Universal Serial Bus (USB) is an industry standard
developed in the mid-1990s that defines the cables,
connectors and communications protocols used in a
bus for connection, communication, and power supply
between computers and electronic devices.
USB was designed to standardize the connection of
computer peripherals (including keyboards, pointing
devices, digital cameras, printers, portable media
players, disk drives and network adapters) to personal
computers, both to communicate and to supply electric
power.
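
For illustration, most USB cameras appear to the operating system as generic video devices, so a frame can be grabbed with a standard capture library. A minimal sketch in Python with OpenCV (the device index 0 and the output file name are assumptions, not part of the original notes):

# Sketch: grab one frame from a USB camera with OpenCV.
import cv2

cap = cv2.VideoCapture(0)            # open the first USB camera (assumed index 0)
if not cap.isOpened():
    raise RuntimeError("USB camera not found")

ok, frame = cap.read()               # frame is a BGR image stored as a NumPy array
if ok:
    cv2.imwrite("frame.png", frame)  # save the captured image
cap.release()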

FIREWIRE CAMERA

FIREWIRE
FireWire (IEEE 1394) is a serial bus architecture for high-speed data transfer. The system is commonly used to
connect data storage devices and DV (digital video)
cameras, but is also popular in industrial systems for
machine vision and professional audio systems. It is
preferred over the more common USB for its greater
effective speed and power distribution capabilities.
Data transfer rates are higher for FireWire than for USB
2.0, but lower than USB 3.0.

IP CAMERA
An Internet protocol camera, or IP camera, is a type of digital video
camera commonly employed for surveillance, and which can send
and receive data via a computer network and the Internet.

SPECTRAL CAMERA
A multispectral image is one that captures image data at specific frequencies across the electromagnetic spectrum.
The wavelengths may be separated by filters or by the use of instruments that are sensitive to particular
wavelengths, including light from frequencies beyond the visible light range, such as infrared. Spectral imaging can
allow extraction of additional information the human eye fails to capture with its receptors for red, green and blue.
It was originally developed for space-based imaging.

THERMAL CAMERA
A thermographic camera (also called an infrared camera or thermal imaging camera) is a
device that forms an image using infrared radiation instead of visible light.

STEREOVISION CAMERA
Stereo vision is the process of extracting 3D information from multiple 2D views of a scene. The 3D
information can be obtained from a pair of images, also known as a stereo pair, by estimating the relative
depth of points in the scene.

The GRA or lab technician will demonstrate the smart sprayer.
Please take note of how the camera is used to capture the image of weeds, save it, and use it as the reference image for the sprayer to open or close the nozzle for spraying.
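
A minimal sketch of this open/close decision, assuming a simple green-mask comparison between the live frame and the saved reference image in Python with OpenCV. The file names, hue range and the 0.5 factor are illustrative assumptions, not the actual sprayer implementation:

# Sketch: compare a live frame with a saved weed reference image and
# decide whether to open the spray nozzle.
import cv2

reference = cv2.imread("weed_reference.png")   # saved reference image (placeholder name)
frame = cv2.imread("current_frame.png")        # image captured in the field (placeholder name)

def green_fraction(img):
    # fraction of pixels falling in a rough "green vegetation" hue range
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (35, 40, 40), (85, 255, 255))
    return mask.mean() / 255.0

open_nozzle = green_fraction(frame) >= 0.5 * green_fraction(reference)
print("OPEN nozzle" if open_nozzle else "CLOSE nozzle")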

The solid-state image sensor has an array of photoelectric elements that generate electric charges from photons. The sensor can be classified by its scanning method, mainly into the charge-coupled-device type and the metal-oxide-semiconductor type.
The charge-coupled-device type has the advantage of high resolution, while the metal-oxide-semiconductor type suffers less from the blooming phenomenon.
The image sensor often has a sensitivity from the visible region to the infrared region (400-1200 nm). It can also be classified as an area sensor, which has a two-dimensional (2-D) array of photoelectric elements, or a linear sensor, which has a one-dimensional (1-D) array.

Chapter 5: Machine Vision

Vision System
5.1 Image Acquisition
  5.1.1 Image Sensors
  5.1.2 TV Cameras
  5.1.3 Image Grabber and Its Processing Device
  5.1.4 Luminaire
5.2 Discrimination
  5.2.1 Method of Red-Green-Blue Signals
  5.2.2 Method of the Most Suitable Wavelength Band Based on Spectral Reflectance
5.3 Recognition
  5.3.1 Features from Binary Image
  5.3.2 Features from the Gray-Level Image
  5.3.3 Recognition Algorithm for a Biological Object
5.4 Depth Measurement and Three-Dimensional Vision
  5.4.1 Depth Measurement
  5.4.2 Area-Based Stereo Vision
  5.4.3 Sensor Fusion
  5.4.4 Application to Bioproduction

ELECTROMAGNETIC SPECTRUM

Digital Images
A digital image contains a fixed number of rows and columns of picture elements (pixels).
Pixels are like tiles holding quantized values between 0 and 255 that represent the brightness at that point of the image.

An image is a matrix of pixels


Note: Matlab uses
Resolution
Digital cameras: 1600 x 1200 at a minimum
Video cameras: ~640 x 480
Grayscale: generally 8 bits per pixel, intensities in the range [0, 255]
RGB color: three 8-bit color planes
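
As a small illustration of these points, a minimal sketch assuming Python with OpenCV (the file name is a placeholder; note that OpenCV stores the planes in B, G, R order):

# Sketch: a digital image is a matrix of quantized 0-255 values.
import cv2

img = cv2.imread("sample.png")   # colour image, placeholder file name
print(img.shape)                 # (rows, columns, 3) -> three 8-bit colour planes
print(img.dtype)                 # uint8 -> each value lies in the range 0..255
print(img[0, 0])                 # B, G, R values of the top-left pixel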

COLOR SPACES
The RGB color model is an additive model in which red, green, and blue light are combined in various ways to reproduce other colors. The name of the model and the abbreviation RGB come from the three primary colors: red, green, and blue. A color in the RGB model is described by indicating how much of each of red, green, and blue is included. Each can vary between the minimum (fully dark) and the maximum (full intensity). If all the colors are at minimum, the result is black; if all the colors are at maximum, the result is white. RGB color is ideally suited for hardware implementation but is not well suited for describing colors.

The HSI (Hue, Saturation and Intensity) color space corresponds closely with the way humans describe and interpret color. The HSI color space is an ideal tool for developing image processing algorithms based on color descriptions that are natural and intuitive to humans (Gonzalez & Woods, 2010). Variations in lighting intensity do not affect the hue and saturation components, because the HSI color space separates the color information of an image from its intensity information. HSV (Hue, Saturation and Value) and HSL (Hue, Saturation and Lightness) are other variants of the color space with similar hue and saturation components.
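
A minimal sketch of this separation, using the closely related HSV space since OpenCV does not provide HSI directly (the file name is a placeholder):

# Sketch: convert an RGB image (BGR in OpenCV) to HSV and split the channels.
import cv2

img = cv2.imread("sample.png")                 # placeholder file name
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
h, s, v = cv2.split(hsv)
# h and s carry the colour description; v carries the intensity, so a change
# in lighting mainly moves v while leaving h and s comparatively stable.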

Image Histogram
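
As an illustration, an image histogram simply counts how many pixels fall into each of the 256 grey levels. A minimal sketch with OpenCV (the file name is a placeholder):

# Sketch: compute the 256-bin grey-level histogram of an image.
import cv2

gray = cv2.imread("sample.png", cv2.IMREAD_GRAYSCALE)     # placeholder file
hist = cv2.calcHist([gray], [0], None, [256], [0, 256])   # pixel count per grey level
print(int(hist.argmax()))                                 # most frequent grey level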

Image Conversion
RGB → Grayscale
Grayscale → Binary
[Example images: RGB, grayscale and binary versions of the same scene]
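
A minimal sketch of both conversions, assuming OpenCV in Python (the fixed threshold of 128 is an assumption; Otsu thresholding could be used instead, and the file names are placeholders):

# Sketch: RGB -> grayscale -> binary conversion.
import cv2

rgb = cv2.imread("sample.png")                                # placeholder file
gray = cv2.cvtColor(rgb, cv2.COLOR_BGR2GRAY)                  # RGB -> grayscale
_, binary = cv2.threshold(gray, 128, 255, cv2.THRESH_BINARY)  # grayscale -> binary
cv2.imwrite("binary.png", binary)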

Boundary Detection

Recognition - Shading

Lighting affects appearance

Sobel Edge Detection

Sobel

Canny Edge Detection
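
As an illustration of these two detectors, a minimal sketch in Python with OpenCV (the kernel size and the Canny thresholds are assumed values, and the file name is a placeholder):

# Sketch: Sobel gradients and Canny edges on a grayscale image.
import cv2

gray = cv2.imread("sample.png", cv2.IMREAD_GRAYSCALE)   # placeholder file
sobel_x = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)    # horizontal gradient
sobel_y = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)    # vertical gradient
edges = cv2.Canny(gray, 100, 200)                       # hysteresis thresholds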

Corner Detection

courtesy of S. Smith

SUSAN corners
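
SUSAN itself is not available in OpenCV, so as an illustration only, a Harris corner response can be computed as a stand-in. A minimal sketch (block size, aperture size, k and the threshold are assumed values):

# Sketch: Harris corner response used as a stand-in for a corner detector.
import cv2
import numpy as np

gray = cv2.imread("sample.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
response = cv2.cornerHarris(gray, blockSize=2, ksize=3, k=0.04)
corners = np.argwhere(response > 0.01 * response.max())   # (row, col) of corner pixels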

How to Determine Centroid, xl and xr

Flowchart:
1. Start
2. Read image (RGB image matrix)
3. Image conversion (RGB to HSV, grayscale or binary); in this case RGB to binary
4. Determine centroid of the objects (xl and xr)
5. End
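
A minimal sketch of the centroid step, assuming the objects appear white in the binary images and using image moments in OpenCV (the file names are placeholders; the same function applied to the left and right images gives xl and xr):

# Sketch: x coordinate of the centroid of the white object in a binary image.
import cv2

def centroid_x(binary_path):
    binary = cv2.imread(binary_path, cv2.IMREAD_GRAYSCALE)
    m = cv2.moments(binary, binaryImage=True)
    return m["m10"] / m["m00"]           # centroid x = first moment / area

xl = centroid_x("left_binary.png")        # centroid in the left image
xr = centroid_x("right_binary.png")       # centroid in the right image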

Stereovision Model

[Figure: stereo geometry showing the target point P, its image points pl and pr, and the left and right camera centres Cl and Cr]
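
With the centroids xl and xr from the flowchart above, depth can be illustrated with the standard parallel-camera stereo relation Z = f * B / (xl - xr). A minimal sketch, where the focal length (in pixels) and the baseline are placeholder values, not calibrated figures from the original work:

# Sketch: depth from disparity for a parallel-axis stereo pair.
def depth_from_disparity(xl, xr, focal_px=800.0, baseline_m=0.10):
    disparity = xl - xr                      # pixel disparity between the two views
    return focal_px * baseline_m / disparity # depth Z = f * B / disparity

print(depth_from_disparity(412.0, 396.0))    # example centroids -> depth in metres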

The objectives:
To differentiate and analyze oil palm colours
To differentiate oil palm fruit from other objects
To identify the maturity of the fruits

The specimens were captured using a CCD camera, a Matrox Meteor card and Matrox Intellicam software.

The image data were extracted using Matrox Inspector software.

The experiment was carried out in the laboratory with a constant distance between the camera and the specimen, and the lighting was controlled to a constant intensity.
The results showed that the ripe category could be differentiated from the other categories of specimen based on the RGB intensity and its range.

A computer program was also written to analyze the RGB data in order to differentiate the ripe category from the other categories, and the program can produce an electrical signal which can be used to activate a robot arm.

The experiment showed that when the camera detected a ripe bunch that matched the reference red colour intensity stored in the computer, the LED was turned ON. This signal can be used to turn on a switch to activate a machine or robot arm.
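
A minimal sketch of this matching step, assuming the mean red intensity of the bunch image is compared with a stored reference value. The reference value, the tolerance and the file name are hypothetical, and the print statements stand in for the LED/relay signal:

# Sketch: compare the detected mean red intensity with a reference value
# and raise a signal that could drive an LED or a switch.
import cv2

REFERENCE_RED = 180.0    # hypothetical stored reference intensity
TOLERANCE = 20.0         # hypothetical matching tolerance

img = cv2.imread("bunch.png")        # placeholder image of the bunch
mean_red = img[:, :, 2].mean()       # OpenCV stores channels as B, G, R

if abs(mean_red - REFERENCE_RED) <= TOLERANCE:
    print("RIPE: turn LED ON")       # signal to activate machine / robot arm
else:
    print("NOT RIPE: LED stays OFF")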

RGB CAMERA

The software is able to simulate real-time robot movement. The CCD camera functions as an image grabber to detect the fruit and, at the same time, to determine the X and Y axis location of the fruit, while the Z axis is fixed.

The second stage concentrated on the development of control software that integrates the vision system and the interfacing system.

The positioning of the end-effector close to the target, as obtained from the developed interface, is shown for two different target positions.

This project was successfully designed. Further research was carried out to determine the Z axis automatically, giving 3-D positioning along the x, y and z axes.

The use of a camera has failed to obtain sufficient information about the fruit, mainly in an unconstrained environment, where many factors affect the scene, such as weather conditions, the colour of the leaves and their position relative to the fruit, and light contrast.

Vision System with Videogrammetry Technique

The videogrammetry technique and triangulation method were used to measure the distance of the target object. By clicking the image displayed on the user interface, the three-dimensional (3-D) distance of the target from the robot arm is generated and a signal is sent to the robot to grip the selected target.
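
A minimal sketch of the triangulation step with OpenCV in Python. The projection matrices and the clicked pixel coordinates are placeholders that would come from camera calibration and from the user interface:

# Sketch: triangulate a clicked point seen by two calibrated cameras.
import cv2
import numpy as np

P1 = np.eye(3, 4)                                      # placeholder projection matrix, camera 1
P2 = np.hstack([np.eye(3), [[-0.1], [0.0], [0.0]]])    # placeholder projection matrix, camera 2
pt1 = np.array([[320.0], [240.0]])                     # clicked pixel in camera 1 image
pt2 = np.array([[300.0], [240.0]])                     # same target clicked in camera 2 image

X_h = cv2.triangulatePoints(P1, P2, pt1, pt2)          # homogeneous 4x1 result
X = (X_h[:3] / X_h[3]).ravel()                         # 3-D coordinates of the target
print(X)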

The plate shows the camera positions on the robot arm. The left side of the UI window shows the real-time video scene captured by camera 1, while the right side of the UI window shows the picture captured by camera 2.

The 3-D measurement of the target is based on the robot's Cartesian coordinate system. The origin of the robot's Cartesian system is selected at the centre of the robot base coordinate.

Development of Virtual Motion Pneumatic Interface (VMPRI)


The robot simulation window indicates the target location and the robot's simulated movement from the top view. Once the user has clicked the target in both scenes and clicked the triangulation button, the simulation is displayed and the robot starts to grab the selected target.

The camera control parameters are shown with the main UI window in the background. The camera control dialog box was retrieved from the Windows operating system using DLL files.

The main window of the user interface shows the video and picture scenes from both cameras and the buttons for robot operation (simulation).

Attachment of the harvester to the prime mover (tractor)

OPTICAL, MECHANICAL AND RHEOLOGICAL PROPERTIES FOR THE DETERMINATION OF FRUIT RIPENESS

BY :
WAN ISHAK WAN ISMAIL
MOHAMAD SAUFI MOHD KASSIM
LEE BOON HUET

Table 1.0: Colour Ripening Index Number of Pisang Mas

Determination of Optical Properties.

This paper investigates the use of machine vision to identify an object by colour and shape and to send a signal that enables the robot to pick or sort that object.


Equipment.
1) Sony CCD RGB camera.
2) Intel Pentium 166 MHz with 16 Mbytes of RAM.
3) Meteor board to receive the signal from the camera.
4) Matrox Intellicam 2.0 software.


5) Matrox Inspector 1.7 software was used for analysis of the colour of an image file.
6) The Turbo C++ programming language was used to write a program for colour analysis and to give the colour index results of the images.
7) Lighting chamber with one 10 W white fluorescent lamp and one 40 W frosted bulb. Both light sources were placed at a distance of 35 cm from the floor.

Figure 1.0: Lighting Chamber

METHODOLOGY

A computer program was written to analyze the RGB and HSL data of the images.
The computer program uses the average colour intensity to differentiate the ripe index from the other fruit indexes.
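
A minimal sketch of this averaging step, assuming the intensities are read along a horizontal line profile across the fruit (as in Figure 4.0) with OpenCV/NumPy; the file name and the row index are placeholders:

# Sketch: average colour intensity along a horizontal line profile,
# used as a simple feature for separating ripening indexes.
import cv2

img = cv2.imread("pisang_mas.png")     # placeholder image of the banana
row = img.shape[0] // 2                # line profile across the middle row
profile = img[row, :, :]               # B, G, R values along the line
mean_b, mean_g, mean_r = profile.mean(axis=0)
print(mean_r, mean_g, mean_b)          # average intensities used as the index feature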

Figure 2.0: Programming Development Flowchart.

Figure 3.0: Block Diagram of Computer Imaging System

Figure 4.0: The line profile used to analyze the whole colour intensity of Pisang Mas.

Figure 5.0: Experiment Procedure

RESULT.

Optical Properties.
Pisang Mas takes about two weeks to go through all the colour ripening indexes (index 1 to index 8) after harvesting at normal ambient temperature.

Sample of Colour intensity versus location for colour ripening index 1

Sample of Colour intensity versus location for colour ripening index 2

Sample of Colour intensity versus location for colour ripening index 3

Sample of Colour intensity versus location for colour ripening index 4

Sample of Colour intensity versus location for colour ripening index 5

Sample of Colour intensity versus location for colour ripening index 6
