
18.What is image resolution?

The resolution of a digital camera is often limited by the camera sensor (typically a CCD or
CMOS sensor chip) that turns light into discrete signals, replacing the job of film in traditional
photography. The sensor is read out pixel by pixel: the brighter the image at that point, the
larger the value that is read for that pixel.
19.Define tracking
Tracking is defined as estimating the motion of the scene, the objects or the camera from a
sequence of images. Knowing this motion, we can predict where things will project in the next
image, so that we do not have to search so hard for them.
20.What are the methods of teaching?
• Joint movements
• X-Y-Z coordinate motions
• Tool coordinate motion

PART- B
1. Explain the different types of noise in images.
Image noise is random (not present in the object imaged) variation of brightness or color
information in images, and is usually an aspect of electronic noise. It can be produced by the
sensor and circuitry of a scanner or digital camera. Image noise can also originate in film grain
and in the unavoidable shot noise of an ideal photon detector. Image noise is an undesirable
by-product of image capture that adds spurious and extraneous information.
Gaussian noise
Principal sources of Gaussian noise in digital images arise during acquisition (e.g., sensor noise
caused by poor illumination and/or high temperature) and/or transmission (e.g., electronic
circuit noise).
A typical model of image noise is Gaussian, additive, independent at each pixel, and
independent of the signal intensity, caused primarily by Johnson–Nyquist noise (thermal
noise), including that which comes from the reset noise of capacitors ("kTC noise"). Amplifier
noise is a major part of the "read noise" of an image sensor, that is, of the constant noise level
in dark areas of the image. In color cameras where more amplification is used in the blue color
channel than in the green or red channel, there can be more noise in the blue channel. At higher
exposures, however, image sensor noise is dominated by shot noise, which is not Gaussian
and not independent of signal intensity.
Salt-and-pepper noise
Fat-tail distributed or "impulsive" noise is sometimes called salt-and-pepper noise or spike
noise. An image containing salt-and-pepper noise will have dark pixels in bright regions and
bright pixels in dark regions. This type of noise can be caused by analog-to-digital converter
errors, bit errors in transmission, etc. It can be mostly eliminated by using dark frame
subtraction, median filtering and interpolating around dark/bright pixels. Dead pixels in an
LCD monitor produce a similar, but non-random, display.

Shot Noise

The dominant noise in the darker parts of an image from an image sensor is typically that caused
by statistical quantum fluctuations, that is, variation in the number of photons sensed at a given
exposure level. This noise is known as photon shot noise. Shot noise has a root-mean-square value
proportional to the square root of the image intensity, and the noises at different pixels are
independent of one another. Shot noise follows a Poisson distribution, which except at very low
intensity levels approximates a Gaussian distribution.
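
Since a Poisson-distributed photon count N has variance N, the square-root behaviour quoted above follows in one line:

$$ \sigma = \sqrt{N}, \qquad \mathrm{SNR} = \frac{N}{\sigma} = \sqrt{N} $$
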

In addition to photon shot noise, there can be additional shot noise from the dark leakage current
in the image sensor; this noise is sometimes known as "dark shot noise"[6] or "dark-current shot
noise". Dark current is greatest at "hot pixels" within the image sensor. The variable dark charge
of normal and hot pixels can be subtracted off (using "dark frame subtraction"), leaving only the
shot noise, or random component, of the leakage. If dark-frame subtraction is not done, or if the
exposure time is long enough that the hot pixel charge exceeds the linear charge capacity, the noise
will be more than just shot noise, and hot pixels appear as salt-and-pepper noise.

Periodic Noise:

A common source of periodic noise in an image is electrical or electromechanical
interference during the image capturing process.[7] An image affected by periodic noise will look
like a repeating pattern has been added on top of the original image. In the frequency domain this
type of noise can be seen as discrete spikes. Significant reduction of this noise can be achieved by
applying notch filters in the frequency domain.[7] The following images illustrate an image
affected by periodic noise, and the result of reducing the noise using frequency domain filtering.
Note that the filtered image still has some noise on the borders. Further filtering could reduce this
border noise, however it may also reduce some of the fine details in the image. The trade-off
between noise reduction and preserving fine details is application specific.
For example, if the fine details on the castle are not considered important, further low-pass filtering
could be an appropriate option. If the fine details of the castle are considered important, a viable
solution may be to crop off the border of the image entirely.
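
A minimal sketch of frequency-domain notch filtering in Python with NumPy; the spike coordinates and notch radius are illustrative and would normally be found by inspecting the magnitude spectrum:

```python
import numpy as np

def notch_filter(img, spikes, radius=5):
    """Zero out small neighborhoods ("notches") around noise spikes
    in the centered Fourier spectrum, then transform back."""
    F = np.fft.fftshift(np.fft.fft2(img))
    rows, cols = img.shape
    rr, cc = np.ogrid[:rows, :cols]
    for r, c in spikes:
        # Each spike has a conjugate-symmetric partner mirrored
        # through the center of the spectrum.
        for y, x in ((r, c), (rows - r, cols - c)):
            F[(rr - y) ** 2 + (cc - x) ** 2 <= radius ** 2] = 0.0
    return np.real(np.fft.ifft2(np.fft.ifftshift(F)))

# Example: suppress two (hypothetical) spike pairs in a test image.
img = np.random.rand(256, 256)
clean = notch_filter(img, spikes=[(100, 140), (90, 96)])
```
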

2. Explain in detail about image filtering.


Filtering is a technique for modifying or enhancing an image. For example, you
can filter an image to emphasize certain features or remove other features. Image
processing operations implemented with filtering include smoothing, sharpening, and
edge enhancement.
MEAN FILTER
We can use linear filtering to remove certain types of noise. Certain filters, such as
averaging or Gaussian filters, are appropriate for this purpose. For example, an
averaging filter is useful for removing grain noise from a photograph. Because each
pixel gets set to the average of the pixels in its neighborhood, local variations caused
by grain are reduced. Conventionally, linear filtering algorithms were applied for image
processing. The mean filter is a linear filter which uses a mask over each pixel in the
signal. Each of the components of the pixels which fall under the mask are averaged
together to form a single pixel. This filter is also called the average filter. The mean
filter is poor at preserving edges. The mean filter is defined, in standard form, by

$$ y[m,n] = \frac{1}{|w|}\sum_{(i,j)\in w} x[i,j] $$

where $w$ is the neighborhood (mask) centered at $[m,n]$ and $|w|$ is the number of pixels in it.

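A minimal sketch in Python (using NumPy and SciPy; the random image is just a stand-in for a grainy photograph):

```python
import numpy as np
from scipy import ndimage

img = np.random.rand(64, 64)                     # stand-in for a grainy image
smoothed = ndimage.uniform_filter(img, size=3)   # 3x3 mean (average) filter
```
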
MEDIAN FILTER
The Median filter is a nonlinear digital filtering technique, often used to remove noise.
Such noise reduction is a typical preprocessing step to improve the results of later processing
(for example, edge detection on an image). Median filtering is very widely used in digital
image processing because under certain conditions, it preserves edges whilst removing noise.
The main idea of the median filter is to run through the signal entry by entry, replacing each
entry with the median of neighboring entries. Note that if the window has an odd number of
entries, then the median is simple to define: it is just the middle value after all the entries in
the window are sorted numerically. For an even number of entries, there is more than one
possible median. The median filter is a robust filter. Median filters are widely used as
smoothers for image processing, as well as in signal processing and time series processing.
A major advantage of the median filter over linear filters is that the median filter can
eliminate the effect of input noise values with extremely large magnitudes. (In contrast, linear
filters are sensitive to this type of noise; that is, the output may be degraded severely by
even a small fraction of anomalous noise values.) The output y of the median filter at the moment
t is calculated as the median of the input values corresponding to the moments adjacent to t;
in standard form, for a window of 2k + 1 entries,

$$ y(t) = \operatorname{med}\{\,x(t-k),\ \ldots,\ x(t),\ \ldots,\ x(t+k)\,\} $$

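A matching sketch in Python; scipy.ndimage.median_filter slides the window over the image and replaces each entry with the window median, which is what removes salt-and-pepper outliers:

```python
import numpy as np
from scipy import ndimage

img = np.random.rand(64, 64)                    # stand-in for a noisy image
denoised = ndimage.median_filter(img, size=3)   # 3x3 median window
```
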
WIENER FILTER
The goal of the Wiener filter is to filter out noise that has corrupted a signal. It is based on
a statistical approach. Typical filters are designed for a desired frequency response. The
Wiener filter approaches filtering from a different angle. One is assumed to have knowledge
of the spectral properties of the original signal and the noise, and one seeks the LTI filter
whose output would come as close to the original signal as possible. Wiener filters are
characterized by the following:
a. Assumption: the signal and (additive) noise are stationary linear stochastic processes
with known spectral characteristics.
b. Requirement: the filter must be physically realizable, i.e., causal (this requirement can
be dropped, resulting in a non-causal solution).
c. Performance criterion: minimum mean-square error (MMSE).
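
In the frequency domain, the non-causal Wiener filter for a signal with power spectrum $S_s(f)$, corrupted by additive noise with power spectrum $S_n(f)$ uncorrelated with the signal, takes the standard form

$$ H(f) = \frac{S_s(f)}{S_s(f) + S_n(f)} $$

so the filter passes frequencies where the signal dominates and attenuates frequencies where the noise dominates.
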
3. Explain in detail about image processing technique.
Digital image processing is always an interesting field as it gives improved pictorial
information for human interpretation and processing of image data for storage, transmission,
and representation for machine perception. Image Processing is a technique to enhance raw
images received from cameras/sensors placed on satellites, space probes and aircraft, or
pictures taken in normal day-to-day life, for various applications. The field of image
processing has improved significantly in recent times and extended to various fields of science
and technology. Image processing mainly deals with image acquisition, image
enhancement, image segmentation, feature extraction, image classification, etc.

DIGITAL IMAGE PROCESSING


The term digital image processing generally refers to processing of a two-dimensional
picture by a digital computer [2]. In a broader context, it implies digital processing of any
two-dimensional data. A digital image is an array of real numbers represented by a finite
number of bits. The principal advantages of digital image processing methods are
versatility, repeatability and the preservation of original data precision. The various Image
Processing techniques are:
• Image preprocessing
• Image enhancement
• Image segmentation
• Feature extraction
• Image classification
IMAGE PREPROCESSING
In image preprocessing, image data recorded by sensors on a satellite contain errors related
to geometry and brightness values of the pixels. These errors are corrected using appropriate
mathematical models, which are either definite or statistical models. Image enhancement is
the modification of an image by changing the pixel brightness values to improve its visual
impact. Image enhancement involves a collection of techniques that are used to improve the
visual appearance of an image, or to convert the image to a form which is better suited for
human or machine interpretation.
Sometimes images obtained from satellites and conventional and digital cameras lack in
contrast and brightness because of the limitations of imaging sub systems and illumination
conditions while capturing image. Images may have different types of noise. In image
enhancement, the goal is to accentuate certain image features for subsequent analysis or for
image display [3]. Examples include contrast and edge enhancement, pseudo-coloring, noise
filtering, sharpening, and magnifying. Image enhancement is useful in feature extraction,
image analysis and image display. The enhancement process itself does not increase the
inherent information content in the data. It simply emphasizes certain specified image
characteristics. Enhancement algorithms are generally interactive and application dependent.
Some of the enhancement techniques are:
a. Contrast Stretching
b. Noise Filtering
c. Histogram modification
a. Contrast Stretching
Some images (e.g., over water bodies, deserts, dense forests, snow, clouds, and hazy
conditions over heterogeneous regions) are homogeneous, i.e., they do not have much change
in their gray levels. In terms of the histogram representation, they are characterized by
very narrow peaks. The homogeneity can also be due to incorrect illumination of the
scene [1]. The images hence obtained are not easily interpretable, because the image
occupies only a narrow range of the available gray levels. The contrast stretching methods are
designed exclusively for frequently encountered situations. Different stretching techniques
have been developed to stretch the narrow range to the whole of the available dynamic range.
b. Noise Filtering
Noise Filtering is used to filter the unnecessary information from an image. It is also used to
remove various types of noises from the images. Mostly this feature is interactive. Various
filters like low pass, high pass, mean, median etc., are available [1].
c. Histogram Modification
The histogram has a lot of importance in image enhancement. It reflects the characteristics of
the image. By modifying the histogram, image characteristics can be modified. One such
example is Histogram Equalization. Histogram equalization is a nonlinear stretch that
redistributes pixel values so that there is approximately the same number of pixels with each
value within a range. The result approximates a flat histogram. Therefore, contrast is
increased at the peaks and lessened at the tails [1].
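
A minimal sketch of histogram equalization in Python for an 8-bit grayscale image; the mapping is the normalized cumulative histogram scaled back to the 0–255 range:

```python
import numpy as np

def equalize_histogram(img):
    """Histogram equalization for a uint8 grayscale image.

    Maps each gray level through the normalized cumulative
    histogram so the output histogram is approximately flat.
    """
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]           # first occupied gray level
    # Standard equalization mapping, scaled back to 0..255.
    lut = np.clip(np.round((cdf - cdf_min) / float(cdf[-1] - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[img]

img = (np.random.rand(64, 64) * 256).astype(np.uint8)  # stand-in image
equalized = equalize_histogram(img)
```
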
IMAGE SEGMENTATION
Segmentation is one of the key problems in image processing. Image segmentation is the
process that subdivides an image into its constituent parts or objects. The level to which this
subdivision is carried out depends on the problem being solved, i.e., the segmentation should
stop when the objects of interest in an application have been isolated. For example, in
autonomous air-to-ground target acquisition, if our interest lies in identifying vehicles on a
road, the first step is to segment the road from the image and then to segment the contents of
the road down to potential vehicles. Image thresholding techniques are used for image
segmentation.
After thresholding a binary image is formed where all object pixels have one gray level and
all background pixels have another - generally the object pixels are 'black' and the
background is 'white'. The best threshold is the one that selects all the object pixels and maps
them to 'black'. Various approaches for the automatic selection of the threshold have been
proposed. Thresholding can be defined as a mapping of the gray scale into the binary set
{0, 1}:

$$ S(x, y) = \begin{cases} 1, & g(x, y) \ge T(x, y) \\ 0, & \text{otherwise} \end{cases} $$

where S(x, y) is the value of the segmented image, g(x, y) is the gray level of the pixel (x, y)
and T(x, y) is the threshold value at the coordinates (x, y). In the simplest case T(x, y) is
coordinate independent and a constant for the whole image. It can be selected, for instance,
on the basis of the gray level histogram. When the histogram has two pronounced maxima,
which reflect gray levels of object(s) and background, it is possible to select a single
threshold for the entire image. A method based on this idea, which uses a correlation
criterion to select the best threshold, is described below. Sometimes gray level histograms
have only one maximum. This can be caused, e.g., by inhomogeneous illumination of
various regions of the image. In such a case it is impossible to select a single thresholding
value for the entire image, and a local binarization technique must be applied. General
methods to solve the problem of binarization of inhomogeneously illuminated images,
however, are not available.
Segmentation of images involves sometimes not only the discrimination between objects
and the background, but also separation between different regions. One method for such
separation is known as watershed segmentation.
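
A minimal sketch in Python: global_threshold implements the {0, 1} mapping above, and iterative_threshold shows one classic automatic way to pick T (the intermeans iteration, named here for illustration; it is not necessarily the correlation-criterion method the text refers to):

```python
import numpy as np

def global_threshold(img, T):
    """S(x, y) = 1 where g(x, y) >= T, else 0 (single global T)."""
    return (img >= T).astype(np.uint8)

def iterative_threshold(img, eps=0.5):
    """Classic intermeans iteration: split at T, set T to the
    average of the two class means, repeat until T stabilizes."""
    T = img.mean()
    while True:
        T_new = 0.5 * (img[img < T].mean() + img[img >= T].mean())
        if abs(T_new - T) < eps:
            return T_new
        T = T_new

img = np.random.randint(0, 256, (64, 64))   # stand-in grayscale image
binary = global_threshold(img, iterative_threshold(img))
```
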
FEATURE EXTRACTION
The feature extraction techniques are developed to extract features in synthetic aperture radar
images. This technique extracts high-level features needed in order to perform classification
of targets. Features are those items which uniquely describe a target, such as size, shape,
composition, location etc. Segmentation techniques are used to isolate the desired object
from the scene so that measurements can be made on it subsequently. Quantitative
measurements of object features allow classification and description of the image.
When the pre-processing and the desired level of segmentation has been achieved, some
feature extraction technique is applied to the segments to obtain features, which is followed
by application of classification and post processing techniques. It is essential to focus on the
feature extraction phase as it has an observable impact on the efficiency of the recognition
system. Feature selection of a feature extraction method is the single most important factor
in achieving high recognition performance. Feature extraction has been given as “extracting
from the raw data information that is most suitable for classification purposes, while
minimizing the within class pattern variability and enhancing the between class pattern
variability”. Thus, the selection of a suitable feature extraction technique for the input at
hand needs to be done with utmost care. Taking all these factors into consideration, it
becomes essential to look at the various techniques available for feature extraction in a
given domain, covering a vast range of cases.
4. Explain Thresholding, Image resolution, Depth of field, Morphology and
Exposure in detail.
THRESHOLDING:
Thresholding is the simplest method of image segmentation. From a grayscale
image, thresholding can be used to create binary images.
The simplest thresholding methods replace each pixel in an image with a black pixel if the
image intensity I(i, j) is less than some fixed constant T (that is, I(i, j) < T), or with a white
pixel if the image intensity is greater than that constant. In the classic example of a snow
scene, this results in the dark tree becoming completely black and the white snow becoming
completely white.
IMAGE RESOLUTION:
Image resolution is the detail an image holds. The term applies to raster digital
images, film images, and other types of images. Higher resolution means more image
detail.
Resolution refers to the number of pixels in an image. Resolution is sometimes
identified by the width and height of the image as well as the total number of pixels in
the image. For example, an image that is 2048 pixels wide and 1536 pixels high
(2048 × 1536) contains 2048 × 1536 = 3,145,728 pixels (or 3.1 megapixels). You could call
it a 2048 × 1536 or a 3.1-megapixel image. As the megapixels in the pickup device in
your camera increase so does the possible maximum size image you can produce.
This means that a 5 megapixel camera is capable of capturing a larger image than a 3
megapixel camera.
DEPTH OF FIELD:
In optics, particularly as it relates to film and photography, depth of field (DOF),
also called focus range or effective focus range, is the distance between the nearest and
farthest objects in a scene that appear acceptably sharp in an image. Although a lens
can precisely focus at only one distance at a time, the decrease in sharpness is gradual
on each side of the focused distance, so that within the DOF, the unsharpness is
imperceptible under normal viewing conditions.
MORPHOLOGY:
Morphological image processing is a collection of non-linear operations related to
the shape or morphology of features in an image. Morphological operations rely only
on the relative ordering of pixel values, not on their numerical values, and are therefore
especially suited to the processing of binary images. Morphological operations can also
be applied to greyscale images whose light transfer functions are unknown and whose
absolute pixel values are therefore of no or minor interest.
Morphological techniques probe an image with a small shape or template called a
structuring element. The structuring element is positioned at all possible locations in
the image and it is compared with the corresponding neighborhood of pixels. Some
operations test whether the element "fits" within the neighborhood, while others test
whether it "hits" or intersects the neighborhood:
EXPOSURE:
Exposure is the amount of light per unit area (the image plane illuminance times the
exposure time) reaching a photographic film or electronic image sensor, as determined
by shutter speed, lens aperture and scene luminance. Exposure is measured in lux
seconds, and can be computed from exposure value (EV) and scene luminance in a
specified region.
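
In symbols, exposure is simply image-plane illuminance multiplied by time:

$$ H = E\,t $$

where $H$ is the exposure in lux seconds, $E$ the image-plane illuminance in lux, and $t$ the exposure time in seconds.
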
5. Explain Edge detection technique in image processing.
6. Explain morphology and its types.
7. Explain the generation of Robot programming.
8. Explain different types of commands in VAL II programming.
9. Explain in detail about Robot language structure.
10.Explain in detail about classification of programming.
11.Write a program for pick and place operation of a robot using VAL II
language.
PART B

1. Derive the forward and inverse kinematics for a 2-DOF RR robot using the trigonometric
method. (13)
2. (i) In a 2-DOF RR configuration robot, the lengths of links L1 & L2 are 36 cm & 24 cm
respectively. If the angles formed with respect to x1 & x2 are 30° & 70°, find the
position of the wrist of the robot. (6)
(ii) In a 2-DOF RR configuration robot, the lengths of links L1 & L2 are 30 cm & 18 cm
respectively. If the end position of the robot wrist points to x = 32 & y = 20, find the angles
Ɵ1 & Ɵ2. (7)
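For reference, problems 1 and 2 rest on the standard 2-DOF RR planar-arm equations (a sketch in the usual convention, with θ2 measured relative to link 1):

$$ x = L_1\cos\theta_1 + L_2\cos(\theta_1+\theta_2), \qquad y = L_1\sin\theta_1 + L_2\sin(\theta_1+\theta_2) $$

and, for the inverse problem,

$$ \cos\theta_2 = \frac{x^2+y^2-L_1^2-L_2^2}{2L_1L_2}, \qquad \theta_1 = \operatorname{atan2}(y,x) - \operatorname{atan2}(L_2\sin\theta_2,\; L_1\cos\theta_2) $$
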
3. Derive the rotational transformation matrix for rotation about the Z-axis. (13)
4. Derive the forward and inverse kinematics for an RRL 3-DOF robot. (13)
5. Derive the kinematics for a TRR robot using the D-H matrix. (13)

D-H Parameters

The link transformation matrices (where $C_i = \cos\theta_i$ and $S_i = \sin\theta_i$) are:

$$
A_1 = \begin{bmatrix} C_1 & 0 & S_1 & 0 \\ S_1 & 0 & -C_1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix},
\quad
A_2 = \begin{bmatrix} C_2 & -S_2 & 0 & C_2 L_1 \\ S_2 & C_2 & 0 & S_2 L_1 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix},
\quad
A_3 = \begin{bmatrix} C_3 & -S_3 & 0 & C_3 L_2 \\ S_3 & C_3 & 0 & S_3 L_2 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}
$$

$$
A_1 A_2 = \begin{bmatrix} C_1 C_2 & -C_1 S_2 & S_1 & C_1 C_2 L_1 \\ S_1 C_2 & -S_1 S_2 & -C_1 & S_1 C_2 L_1 \\ S_2 & C_2 & 0 & S_2 L_1 \\ 0 & 0 & 0 & 1 \end{bmatrix}
$$

The total transformation between the base of the robot and the hand is $^{R}T_H = A_1 A_2 A_3$:

$$
A_1 A_2 A_3 = \begin{bmatrix}
C_1 C_2 C_3 - C_1 S_2 S_3 & -(C_1 C_2 S_3 + C_1 C_3 S_2) & S_1 & C_1 C_2 C_3 L_2 - C_1 S_2 S_3 L_2 + C_1 C_2 L_1 \\
S_1 C_2 C_3 - S_1 S_2 S_3 & -(S_1 S_3 C_2 + S_1 S_2 C_3) & -C_1 & S_1 C_2 C_3 L_2 - S_1 S_2 S_3 L_2 + S_1 C_2 L_1 \\
S_2 C_3 + C_2 S_3 & C_2 C_3 - S_2 S_3 & 0 & S_2 C_3 L_2 + C_2 S_3 L_2 + S_2 L_1 \\
0 & 0 & 0 & 1
\end{bmatrix}
$$

The above matrix is in the form

$$
^{R}T_H = \begin{bmatrix} n_x & o_x & a_x & P_x \\ n_y & o_y & a_y & P_y \\ n_z & o_z & a_z & P_z \\ 0 & 0 & 0 & 1 \end{bmatrix}
$$

so that

$$
\begin{aligned}
P_x &= [L_2(C_2 C_3 - S_2 S_3) + C_2 L_1]\,C_1 \\
P_y &= [L_2(C_2 C_3 - S_2 S_3) + C_2 L_1]\,S_1 \\
P_z &= L_2(S_2 C_3 + C_2 S_3) + S_2 L_1
\end{aligned}
$$
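
The derivation can be checked symbolically with SymPy; a sketch assuming the standard D-H convention, with α1 = 90°, a2 = L1 and a3 = L2 read off the matrices above:

```python
import sympy as sp

th1, th2, th3, L1, L2 = sp.symbols('theta1 theta2 theta3 L1 L2')

def dh(theta, d, a, alpha):
    """Standard D-H link matrix Rot(z,theta)*Trans(z,d)*Trans(x,a)*Rot(x,alpha)."""
    ct, st = sp.cos(theta), sp.sin(theta)
    ca, sa = sp.cos(alpha), sp.sin(alpha)
    return sp.Matrix([[ct, -st*ca,  st*sa, a*ct],
                      [st,  ct*ca, -ct*sa, a*st],
                      [ 0,     sa,     ca,    d],
                      [ 0,      0,      0,    1]])

A1 = dh(th1, 0, 0, sp.pi/2)
A2 = dh(th2, 0, L1, 0)
A3 = dh(th3, 0, L2, 0)
T = sp.simplify(A1 * A2 * A3)
print(T[0, 3], T[1, 3], T[2, 3])   # Px, Py, Pz
```
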

6. Derive the kinematics for a TRL robot using the D-H matrix. (13)

7. Derive the kinematics for a TRLR robot using the trigonometric method. (13)
8. Derive the D-H representation matrices. (13)

Rotate about the zn-axis by an angle of θn+1. This will make xn and xn+1 parallel to each other.
This is true because an and an+1 are both perpendicular to zn, and rotating zn by an angle of
θn+1 will make them parallel (and thus coplanar).
Translate along the zn-axis a distance of dn+1 to make xn and xn+1 collinear. Since xn and
xn+1 were already parallel and normal to zn, moving along zn will lay them over each other.

Translate along the xn-axis a distance of an+1 to bring the origins of xn and xn+1
together. At this point, the origins of the two reference frames will be at the
same location.
Rotate the zn-axis about the xn+1-axis by an angle of αn+1 to align the zn-axis with the
zn+1-axis. At this point frames n and n+1 will be exactly the same, and we will have
transformed from one frame to the next.
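
Multiplying these four elementary transformations in order gives the standard D-H link matrix between frames n and n+1:

$$
A_{n+1} = \mathrm{Rot}(z,\theta_{n+1})\,\mathrm{Trans}(0,0,d_{n+1})\,\mathrm{Trans}(a_{n+1},0,0)\,\mathrm{Rot}(x,\alpha_{n+1}) =
\begin{bmatrix}
C\theta_{n+1} & -S\theta_{n+1}C\alpha_{n+1} & S\theta_{n+1}S\alpha_{n+1} & a_{n+1}C\theta_{n+1} \\
S\theta_{n+1} & C\theta_{n+1}C\alpha_{n+1} & -C\theta_{n+1}S\alpha_{n+1} & a_{n+1}S\theta_{n+1} \\
0 & S\alpha_{n+1} & C\alpha_{n+1} & d_{n+1} \\
0 & 0 & 0 & 1
\end{bmatrix}
$$
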
9. Derive the kinematics of an RRR 3-DOF robot using the trigonometric method. (13)
10. (i) A vector v = 3i + 2j + 7k is rotated by 60° about the z-axis of the reference frame. It is
then rotated by 30° about the x-axis of the reference frame. Find the rotation
transformation. (7)
(ii) For the vector v = -25i + 10j + 20k, perform a translation by a distance of 8 in
the x direction, 5 in the y direction and 0 in the z direction. Find the translation
transformation. (6)
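
As a quick numeric check of 10(i), a short Python sketch (the vector and angles come from the problem; rotations about the fixed reference frame pre-multiply, so the x-rotation is applied second, on the left):

```python
import numpy as np

def rot_z(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def rot_x(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

# First Rot(z, 60 deg), then Rot(x, 30 deg) about the fixed frame.
R = rot_x(np.radians(30)) @ rot_z(np.radians(60))
v = np.array([3.0, 2.0, 7.0])
print(R @ v)   # approximately [-0.232, -0.384, 7.861]
```
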
11. Explain the parameters of Robot Kinematics (13)
12. Static analysis of robot Dynamics (13)
PART - B

21. (i) Classify the industrial robots and briefly describe them. (7)

1.Stationary Robots

Stationary robots are robots that work without changing their positions. Referring to a robot as
“stationary” does not mean that the robot is not moving at all; what “stationary” means is that
the base of the robot does not move during operation.

These kinds of robots generally manipulate their environment by controlling the position and
orientation of an end-effector. The stationary robot category includes robotic arms, Cartesian
robots, cylindrical robots, spherical robots, SCARA robots and parallel robots.

1.1 Cartesian/Gantry Robots


1.2 Cylindrical Robots
1.3 Spherical Robots
1.4 SCARA Robots
1.5 Robotic Arms - (Articulated Robots )
1.6 Parallel Robots

2.Wheeled Robots

Wheeled robots are robots that change their positions with the help of their wheels. Wheeled
motion for a robot can be achieved easily in mechanical terms, and its cost is quite low.
Additionally, control of wheeled movement is generally easier.

These reasons make wheeled robots among the most frequently seen robots. Single-wheeled
robots, mobile ball robots, two-wheeled robots, three-wheeled robots, four-wheeled robots,
multi-wheeled robots and tracked robots are examples of wheeled robots.

2.1 Single Wheel (Ball) Robots


2.2 Two-Wheeled Robots
2.3 Three Wheeled Robots
2.4.Four Wheeled Robots
2.5.Multi Wheeled Robots
2.6.Tracked Robots

3. Legged Robots

Legged robots are mobile robots, similar to wheeled robots, but their locomotion methods are
more sophisticated and complicated than those of their wheeled counterparts. As their name
suggests, they use their legs to control their locomotion, and they perform much better than
wheeled robots on uneven terrain.
Although the cost and complexity of production are high for these robots, their advantages on
uneven terrain make them indispensable for many applications. One-legged robots, two-
legged robots, three-legged robots, four-legged robots, six-legged robots and multi-legged robots
are examples of this robot class.

3.1.One Legged Robots


3.2.Two Legged – Bipedal Robots (Humanoids)
3.3.Three Legged – Tripedal Robots
3.4.Four Legged – Quadrupedal Robots
3.5.Six Legged Robots (6 Legged Hexapod)
3.6 Robots With Many Legs

4.Swimming Robots – Robot Fish

Swimming robots are robots which move underwater. These robots are generally inspired by
fish and they use their fin-like actuators to maneuver in water.

5.Flying Robots

Flying robots are robots that float and maneuver in the air using their plane-like or bird/insect-like
wings, propellers or balloons. Examples of these robots are airplane robots, bird/insect-inspired
wing-flapping robots, propeller-based multicopters and balloon robots.

6.Rolling Robotic Balls (Mobile Spherical Robots)

7.Swarm Robots

Swarm robots are robotic systems that consist of multiple small robots. Structurally, these
robots do not form a single united robot; instead, the robot modules operate
cooperatively. Although similar to modular robotic systems, the elements of a robot swarm have
much less functionality, and herd configurations do not create new robots.

8.Modular Robots

Similar to swarm robots, modular robotic systems also have multiple robots in their
configurations. Modules of these systems are more functional compared to a robotic herd. For
example, a single module of a modular robotic system can have self-mobility and can operate
alone. The power of modular robotics comes from the versatility of its configurations. Modules of
a modular robotic system can be arranged in very different configurations, and the robots created
this way can have very distinct abilities.
9.Micro Robots

By definition, the term micro robot is used both for robots that have dimensions on the
micrometer scale and for robots that can operate at micrometer resolution. Therefore both possibly
very big stationary robots that can manipulate their environment on a micrometer scale and small
robots that are actually measured in micrometers are called micro robots.

10.Nano Robots

Similar to micro robots, nano robots are also defined a bit vaguely. The term nano robot covers
both very small robots with nanometer-scale dimensions and robots that can manipulate
their environment at nanometer-scale resolution, regardless of their actual size.

11.Soft Elastic Robots

Soft/elastic robots are a new addition to robotics. These robots are generally bio-inspired.
Most applications are inspired by squids or inchworms, both structurally and functionally.

(ii) Classify the robot based on generation. (6)


First Generation:

Second generation:

Third Generation:
22. (i) Describe the Basic components of Robots with neat sketch. (7)
(ii) Define links and joints. Explain the types of joints (6)
23. Explain robot specification in detail. (13)
24. (i) Briefly explain the following terms:
a) Payload b) Spatial Resolution c). Precision d) Accuracy (7)
(ii) Explain the types of robots based on control signal (6)
25. Sketch and explain the configuration of Robot (13)
26. What are the types of power transmission systems (13)
27. Describe the types of grippers with neat sketch (13)
28. With neat sketch explain the types of gripper mechanism (13)
29. Discuss about Magnetic and Vacuum Grippers with neat sketch. (13)
30. List out the parameters for the selection of a gripper (13)
31. Discuss about vacuum grippers
32. (ii) The diagram shows the linkage mechanism and dimensions of a gripper used to
handle a workpart for machining. It has been determined that the gripper force is to be 21 lb.
What actuating force is required to deliver this force of 30 lb? (6)
