The resolution of a digital camera is often limited by the camera sensor (typically a CCD or
CMOS sensor chip) that turns light into discrete signals, replacing the job of film in traditional
photography.
This means that the brighter the image is at a given point, the larger the value read for that
pixel.
19. Define tracking.
Tracking is defined as estimating the motion of the scene, the objects, or the camera from a
sequence of images. Knowing this motion, we can predict where things will project in the next
image, so that less work is needed to find them.
20. What are the methods of teaching?
• Joint movements
• X-Y-Z coordinate motions
• Tool coordinate motion
PART- B
1. Explain different types of noises in image.
Image noise is random (not present in the object imaged) variation of brightness or color
information in images, and is usually an aspect of electronic noise. It can be produced by the
sensor and circuitry of a scanner or digital camera. Image noise can also originate in film grain
and in the unavoidable shot noise of an ideal photon detector. Image noise is an undesirable
by-product of image capture that adds spurious and extraneous information.
Gaussian noise
Principal sources of Gaussian noise in digital images arise during acquisition (e.g., sensor
noise caused by poor illumination and/or high temperature) and/or transmission (e.g.,
electronic circuit noise).
A typical model of image noise is Gaussian, additive, independent at each pixel, and
independent of the signal intensity, caused primarily by Johnson–Nyquist noise (thermal
noise), including that which comes from the reset noise of capacitors ("kTC noise"). Amplifier
noise is a major part of the "read noise" of an image sensor, that is, of the constant noise level
in dark areas of the image. In color cameras where more amplification is used in the blue color
channel than in the green or red channel, there can be more noise in the blue channel. At higher
exposures, however, image sensor noise is dominated by shot noise, which is not Gaussian
and not independent of signal intensity.
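The additive, signal-independent Gaussian model described above can be sketched in a few lines of NumPy; the flat test image and the value of sigma below are illustrative choices, not details from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# A flat mid-grey 8-bit test "image" (illustrative).
clean = np.full((64, 64), 128.0)

# Additive Gaussian noise: independent at each pixel and independent of the
# signal intensity, with standard deviation sigma (here 10 grey levels).
sigma = 10.0
noisy = clean + rng.normal(0.0, sigma, clean.shape)

# Clip back to the valid 8-bit range, as a real sensor pipeline would.
noisy = np.clip(noisy, 0, 255)

print(round(noisy.std(), 1))  # close to sigma, since clipping rarely triggers here
```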
Salt-and-pepper noise
Fat-tail distributed or "impulsive" noise is sometimes called salt-and-pepper noise or spike
noise. An image containing salt-and-pepper noise will have dark pixels in bright regions and
bright pixels in dark regions. This type of noise can be caused by analog-to-digital converter
errors, bit errors in transmission, etc. It can be mostly eliminated by using dark frame
subtraction, median filtering and interpolating around dark/bright pixels. Dead pixels in an
LCD monitor produce a similar, but non-random, display.
Shot Noise
The dominant noise in the darker parts of an image from an image sensor is typically that caused
by statistical quantum fluctuations, that is, variation in the number of photons sensed at a given
exposure level. This noise is known as photon shot noise. Shot noise has a root-mean-square value
proportional to the square root of the image intensity, and the noises at different pixels are
independent of one another. Shot noise follows a Poisson distribution, which except at very low
intensity levels approximates a Gaussian distribution.
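The square-root relationship between RMS noise and intensity can be checked with a quick Poisson simulation (the photon counts and sample size below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Photon shot noise: the count sensed at a pixel is Poisson-distributed,
# with mean equal to the expected photon count at that exposure level.
stds = {}
for mean_photons in (100, 400):
    counts = rng.poisson(mean_photons, size=100_000)
    stds[mean_photons] = counts.std()

# RMS noise grows like sqrt(intensity): roughly 10 at 100 photons, 20 at 400.
for mean, std in stds.items():
    print(mean, round(std, 2))
```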
In addition to photon shot noise, there can be additional shot noise from the dark leakage current
in the image sensor; this noise is sometimes known as "dark shot noise"[6] or "dark-current shot
noise". Dark current is greatest at "hot pixels" within the image sensor. The variable dark charge
of normal and hot pixels can be subtracted off (using "dark frame subtraction"), leaving only the
shot noise, or random component, of the leakage. If dark-frame subtraction is not done, or if the
exposure time is long enough that the hot pixel charge exceeds the linear charge capacity, the noise
will be more than just shot noise, and hot pixels appear as salt-and-pepper noise.
Periodic Noise:
Periodic noise arises from electrical or electromechanical interference during image
acquisition. It appears as a repetitive pattern superimposed on the image and shows up as
concentrated spikes in the frequency domain, so it is usually reduced by frequency-domain
(notch) filtering rather than spatial filtering.
MEDIAN FILTER
The Median filter is a nonlinear digital filtering technique, often used to remove noise.
Such noise reduction is a typical preprocessing step to improve the results of later processing
(for example, edge detection on an image). Median filtering is very widely used in digital
image processing because under certain conditions, it preserves edges whilst removing noise.
The main idea of the median filter is to run through the signal entry by entry, replacing each
entry with the median of neighboring entries. Note that if the window has an odd number of
entries, then the median is simple to define: it is just the middle value after all the entries in
the window are sorted numerically. For an even number of entries, there is more than one
possible median. The median filter is a robust filter. Median filters are widely used as
smoothers for image processing, as well as in signal processing and time series processing.
A major advantage of the median filter over linear filters is that the median filter can
eliminate the effect of input noise values with extremely large magnitudes. (In contrast, linear
filters are sensitive to this type of noise; the output may be degraded severely by even a small
fraction of anomalous noise values.) The output y of the median filter at the moment t is
calculated as the median of the input values corresponding to the moments adjacent to t:

y(t) = median{ x(t - k), ..., x(t), ..., x(t + k) }, for a window of 2k + 1 samples.
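A minimal 2-D sketch of this idea in NumPy, applied to a synthetic salt-and-pepper-corrupted image; reflect-padding at the borders and the corruption rates are assumed conventions of the sketch, not details from the text:

```python
import numpy as np

def median_filter(img, k=3):
    """Replace each pixel with the median of its k x k neighbourhood
    (edges handled by reflecting the image, a common convention)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="reflect")
    # Gather every k x k window (needs NumPy >= 1.20).
    windows = np.lib.stride_tricks.sliding_window_view(padded, (k, k))
    return np.median(windows, axis=(-2, -1))

# Salt-and-pepper corruption of a flat image: isolated 0/255 outliers.
rng = np.random.default_rng(2)
img = np.full((32, 32), 100.0)
mask = rng.random(img.shape)
img[mask < 0.05] = 0.0      # "pepper"
img[mask > 0.95] = 255.0    # "salt"

restored = median_filter(img, k=3)
# Almost every corrupted pixel is replaced by the local median (100).
print(round((restored == 100.0).mean(), 3))
```

Because the median simply ignores a few extreme values in each window, the 0/255 spikes vanish, whereas a mean (linear) filter would smear them into their neighbourhoods.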
WIENER FILTER
The goal of the Wiener filter is to filter out noise that has corrupted a signal. It is based on
a statistical approach. Typical filters are designed for a desired frequency response. The
Wiener filter approaches filtering from a different angle. One is assumed to have knowledge
of the spectral properties of the original signal and the noise, and one seeks the LTI filter
whose output would come as close to the original signal as possible. Wiener filters are
characterized by the following
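As an illustration of the statistical idea, here is a minimal pixel-wise (local LMMSE) Wiener-style filter in NumPy, the same local scheme used by scipy.signal.wiener; the window size, the flat test image, and the noise-variance estimate are assumptions of this sketch:

```python
import numpy as np

def wiener_local(img, k=3, noise_var=None):
    """Shrink each pixel toward its local mean by the ratio of estimated
    signal variance to total local variance (pixel-wise LMMSE)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="reflect")
    win = np.lib.stride_tricks.sliding_window_view(padded, (k, k))
    local_mean = win.mean(axis=(-2, -1))
    local_var = win.var(axis=(-2, -1))
    if noise_var is None:
        # Estimate the noise power as the average local variance.
        noise_var = local_var.mean()
    signal_var = np.maximum(local_var - noise_var, 0.0)
    gain = signal_var / np.maximum(local_var, 1e-12)
    return local_mean + gain * (img - local_mean)

rng = np.random.default_rng(3)
clean = np.full((32, 32), 100.0)
noisy = clean + rng.normal(0.0, 10.0, clean.shape)
restored = wiener_local(noisy, k=5)
print(round(np.abs(noisy - clean).mean(), 2),
      round(np.abs(restored - clean).mean(), 2))
```

On this flat image the local variance is almost entirely noise, so the gain is near zero and the filter output approaches the local mean, which is exactly the "come as close to the original signal as possible" behaviour described above.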
3. Explain in detail about image processing technique.
Digital image processing is always an interesting field as it gives improved pictorial
information for human interpretation and processing of image data for storage, transmission,
and representation for machine perception. Image Processing is a technique to enhance raw
images received from cameras/sensors placed on satellites, space probes and aircrafts or
pictures taken in normal day-to-day life for various applications. This field of image
processing significantly improved in recent times and extended to various fields of science
and technology. The image processing mainly deals with image acquisition, Image
enhancement, image segmentation, feature extraction, image classification etc.
The segmented (binary) image is obtained as

S(x, y) = 1 if g(x, y) >= T(x, y), and S(x, y) = 0 otherwise,

where S(x, y) is the value of the segmented image, g(x, y) is the gray level of the pixel (x, y)
and T(x, y) is the threshold value at the coordinates (x, y). In the simplest case T(x, y) is
coordinate-independent and constant for the whole image. It can be selected, for instance,
on the basis of the gray-level histogram. When the histogram has two pronounced maxima,
which reflect the gray levels of the object(s) and the background, it is possible to select a
single threshold for the entire image. A method based on this idea, which uses a correlation
criterion to select the best threshold, is described below. Sometimes gray-level histograms
have only one maximum. This can be caused, e.g., by inhomogeneous illumination of
various regions of the image. In such a case it is impossible to select a single thresholding
value for the entire image and a local binarization technique must be applied. General
methods for binarizing inhomogeneously illuminated images, however, are not available.
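The single-threshold idea for a histogram with two pronounced maxima can be sketched with Otsu's classic between-class-variance criterion (one common histogram-based choice, not necessarily the correlation criterion the text refers to); the bimodal test image below is synthetic:

```python
import numpy as np

def otsu_threshold(img):
    """Histogram-based global threshold: pick T maximizing the
    between-class variance of the two grey-level populations."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    omega = np.cumsum(p)                   # class-0 probability up to each level
    mu = np.cumsum(p * np.arange(256))     # cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    return int(np.nanargmax(sigma_b))

# Bimodal test image: dark background (~50) with a bright object (~200).
rng = np.random.default_rng(4)
img = rng.normal(50, 5, (64, 64))
img[16:48, 16:48] = rng.normal(200, 5, (32, 32))
img = np.clip(img, 0, 255)

T = otsu_threshold(img)
binary = (img > T).astype(np.uint8)   # S(x, y) with a constant T
print(T)
```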
Segmentation of images involves sometimes not only the discrimination between objects
and the background, but also separation between different regions. One method for such
separation is known as watershed segmentation.
FEATURE EXTRACTION
The feature extraction techniques are developed to extract features in synthetic aperture radar
images. This technique extracts high-level features needed in order to perform classification
of targets. Features are those items which uniquely describe a target, such as size, shape,
composition, location etc. Segmentation techniques are used to isolate the desired object
from the scene so that measurements can be made on it subsequently. Quantitative
measurements of object features allow classification and description of the image.
When the pre-processing and the desired level of segmentation has been achieved, some
feature extraction technique is applied to the segments to obtain features, which is followed
by application of classification and post-processing techniques. It is essential to focus on the
feature extraction phase, as it has an observable impact on the efficiency of the recognition
system. Selection of the feature extraction method is the single most important factor in
achieving high recognition performance. Feature extraction has been defined as “extracting
from the raw data the information that is most suitable for classification purposes, while
minimizing the within-class pattern variability and enhancing the between-class pattern
variability”. Thus, a feature extraction technique suitable for the input at hand must be
selected with utmost care. Taking all these factors into consideration, it becomes essential
to examine the various available techniques for feature extraction in a given domain,
covering a vast range of possible cases.
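As a small illustration, here are a few simple shape features of the kind described (size, location, extent) extracted from a binary segment; the feature names and the synthetic rectangular "target" are assumptions of the sketch:

```python
import numpy as np

def region_features(binary):
    """Simple shape features of the foreground region in a binary
    segment, of the kind fed to a classifier."""
    ys, xs = np.nonzero(binary)
    area = len(xs)
    centroid = (ys.mean(), xs.mean())
    top, left, bottom, right = ys.min(), xs.min(), ys.max(), xs.max()
    height = bottom - top + 1
    width = right - left + 1
    extent = area / (height * width)   # fraction of the bounding box filled
    return {"area": area, "centroid": centroid, "extent": extent}

seg = np.zeros((20, 20), dtype=np.uint8)
seg[5:15, 8:12] = 1                    # a 10 x 4 rectangular "target"
feats = region_features(seg)
print(feats["area"], feats["centroid"], feats["extent"])
```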
4. Explain Thresholding, Image resolution, Depth of field, Morphology and
Exposure in detail.
THRESHOLDING:
Thresholding is the simplest method of image segmentation. From a grayscale
image, thresholding can be used to create binary images.
The simplest thresholding methods replace each pixel in an image with a black pixel if the
image intensity I(i, j) is less than some fixed constant T (that is, I(i, j) < T), or a white pixel
if the intensity is greater than that constant. In an image of a dark tree against white snow,
for example, this results in the tree becoming completely black and the snow becoming
completely white.
IMAGE RESOLUTION:
Image resolution is the detail an image holds. The term applies to raster digital
images, film images, and other types of images. Higher resolution means more image
detail.
Resolution refers to the number of pixels in an image. Resolution is sometimes
identified by the width and height of the image as well as the total number of pixels in
the image. For example, an image that is 2048 pixels wide and 1536 pixels high
(2048 × 1536) contains 2048 × 1536 = 3,145,728 pixels, or about 3.1 megapixels; you
could call it a 2048 × 1536 image or a 3.1-megapixel image. As the megapixel count of
the pickup device in your camera increases, so does the maximum possible size of the
image you can produce. This means that a 5-megapixel camera is capable of capturing a
larger image than a 3-megapixel camera.
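The pixel-count arithmetic can be verified directly:

```python
# Pixel-count arithmetic from the text: total pixels = width x height.
width, height = 2048, 1536
pixels = width * height
megapixels = pixels / 1_000_000
print(pixels, round(megapixels, 1))  # 3145728 and 3.1
```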
DEPTH OF FIELD:
In optics, particularly as it relates to film and photography, depth of field (DOF),
also called focus range or effective focus range, is the distance between the nearest and
farthest objects in a scene that appear acceptably sharp in an image. Although a lens
can precisely focus at only one distance at a time, the decrease in sharpness is gradual
on each side of the focused distance, so that within the DOF, the unsharpness is
imperceptible under normal viewing conditions.
MORPHOLOGY:
Morphological image processing is a collection of non-linear operations related to
the shape or morphology of features in an image. Morphological operations rely only
on the relative ordering of pixel values, not on their numerical values, and are therefore
especially suited to the processing of binary images. Morphological operations can also
be applied to greyscale images whose light transfer functions are unknown, so that their
absolute pixel values are of no or minor interest.
Morphological techniques probe an image with a small shape or template called a
structuring element. The structuring element is positioned at all possible locations in
the image and it is compared with the corresponding neighborhood of pixels. Some
operations test whether the element "fits" within the neighborhood, while others test
whether it "hits" or intersects the neighborhood:
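The "fits" and "hits" tests map directly onto binary erosion and dilation; a minimal NumPy sketch (zero-padding at the borders and a symmetric structuring element are assumed conventions):

```python
import numpy as np

def _windows(img, k):
    padded = np.pad(img, k // 2, mode="constant")
    return np.lib.stride_tricks.sliding_window_view(padded, (k, k))

def erode(img, se):
    """Erosion keeps a pixel only where the structuring element 'fits':
    every pixel under the element is foreground."""
    win = _windows(img, se.shape[0])
    return np.all(win[:, :, se.astype(bool)] == 1, axis=-1).astype(np.uint8)

def dilate(img, se):
    """Dilation keeps a pixel wherever the element 'hits' (intersects)
    the foreground at all."""
    win = _windows(img, se.shape[0])
    return np.any(win[:, :, se.astype(bool)] == 1, axis=-1).astype(np.uint8)

img = np.zeros((9, 9), dtype=np.uint8)
img[2:7, 2:7] = 1                      # a 5 x 5 square
se = np.ones((3, 3), dtype=np.uint8)   # 3 x 3 structuring element

# Erosion shrinks the square to its 3 x 3 core; dilation grows it to 7 x 7.
print(erode(img, se).sum(), dilate(img, se).sum())  # 9 and 49
```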
EXPOSURE:
Exposure is the amount of light per unit area (the image plane illuminance times the
exposure time) reaching a photographic film or electronic image sensor, as determined
by shutter speed, lens aperture and scene luminance. Exposure is measured in lux
seconds, and can be computed from exposure value (EV) and scene luminance in a
specified region.
5. Explain Edge detection technique in image processing.
6. Explain morphology and its types.
7. Explain the generation of Robot programming.
8. Explain different types of commands in VAL II programming.
9. Explain in detail about Robot language structure.
10.Explain in detail about classification of programming.
11.Write a program for pick and place operation of a robot using VAL II
language.
PART B
1. Derive Forward and inverse kinematic for 2-DOF RR Robot using Trigonometric
Method. (13)
2. (i) In a 2-DOF RR config robot length of the links L1 & L2 are 36 cm & 24 cm
respectively. If the angle form with respect to x1 & x2 are 30o & 70o. Find the
position of the wrist of the robot. (6)
(ii) In a 2-DOF RR config robot length of the links L1 & L2 are 30cm & 18 cm
respectively. If the end position of wrist robot point to x=32 & y=20. Find the angle
Ɵ1 & Ɵ2. (7)
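A sketch of the forward and inverse solutions for part (ii), assuming the usual textbook convention that θ2 is measured relative to the first link and taking the elbow-down branch of the inverse solution:

```python
import numpy as np

L1, L2 = 30.0, 18.0          # link lengths from part (ii), in cm

def fk(t1, t2):
    """Forward kinematics of a planar 2-DOF RR arm."""
    x = L1 * np.cos(t1) + L2 * np.cos(t1 + t2)
    y = L1 * np.sin(t1) + L2 * np.sin(t1 + t2)
    return x, y

def ik(x, y):
    """Inverse kinematics via the cosine rule (elbow-down branch)."""
    c2 = (x**2 + y**2 - L1**2 - L2**2) / (2 * L1 * L2)
    t2 = np.arccos(np.clip(c2, -1.0, 1.0))
    t1 = np.arctan2(y, x) - np.arctan2(L2 * np.sin(t2), L1 + L2 * np.cos(t2))
    return t1, t2

t1, t2 = ik(32.0, 20.0)      # wrist at x = 32, y = 20 as in part (ii)
print(round(np.degrees(t1), 2), round(np.degrees(t2), 2))
print(fk(t1, t2))            # should reproduce (32, 20)
```

The same fk function with L1 = 36, L2 = 24, θ1 = 30° and θ2 = 70° answers part (i).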
3. Derive Rotational Transformation matrix for Rotation about Z-Axis. (13)
4. Derive forward and inverse kinematic for RRL 3-DOF Robot. (13)
5. Derive Kinematics for TRR Robot using D-H matrix. (13)
D-H Parameters
(The frame-assignment figure and the D-H parameter table are not reproduced here.)

A1 = | C1   0    S1   0 |
     | S1   0   -C1   0 |
     | 0    1    0    0 |
     | 0    0    0    1 |

A2 = | C2  -S2   0   C2 L1 |
     | S2   C2   0   S2 L1 |
     | 0    0    1   0     |
     | 0    0    0   1     |

A3 = | C3  -S3   0   C3 L2 |
     | S3   C3   0   S3 L2 |
     | 0    0    1   0     |
     | 0    0    0   1     |

The total transformation between the base of the robot and the hand is

R_T_H = A1 A2 A3 = | nx  ox  ax  Px |
                   | ny  oy  ay  Py |
                   | nz  oz  az  Pz |
                   | 0   0   0   1  |
Rotate about the zn-axis an angle of θn+1. This makes xn and xn+1 parallel to each other;
this is true because an and an+1 are both perpendicular to zn, and rotating about zn by θn+1
makes them parallel (and thus coplanar).
Translate along the zn-axis a distance of dn+1 to make xn and xn+1 collinear. Since xn and
xn+1 were already parallel and normal to zn, moving along zn lays them over each other.
Translate along the xn-axis a distance of an+1 to bring the origins of xn and xn+1 together.
At this point, the origins of the two reference frames are at the same location.
Rotate the zn-axis about the xn+1-axis an angle of αn+1 to align the zn-axis with the
zn+1-axis. At this point frames n and n+1 are exactly the same, and we have transformed
from one frame to the next.
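The four steps above compose into the standard A matrix, Rot(z, θ) · Trans(z, d) · Trans(x, a) · Rot(x, α); a small sketch (the example joint values are illustrative):

```python
import numpy as np

def dh(theta, d, a, alpha):
    """A matrix for one joint, built from the four D-H steps:
    Rot(z, theta) . Trans(z, d) . Trans(x, a) . Rot(x, alpha)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

# Chain two frames; the base-to-hand transform is the product A1 A2.
A1 = dh(np.radians(30), 0.0, 0.0, np.radians(90))
A2 = dh(np.radians(45), 0.0, 1.0, 0.0)
T = A1 @ A2
print(np.round(T, 3))
```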
9. Derive RRR 3-DOF robot using Trigonometric Method. (13)
10. (i) A vector v = 3i + 2j + 7k is rotated by 60° about the z-axis of the reference frame. It
is then rotated by 30° about the x-axis of the reference frame. Find the rotation
transformation. (7)
(ii) For the vector v= -25i + 10j + 20k, perform a translation by a distance of 8 in
the x direction, 5 in the y direction and 0 in the z direction. Find the translation
transformation. (6)
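A sketch of problem 10 in NumPy, assuming rotations about the fixed reference-frame axes (so each new rotation pre-multiplies the accumulated transformation):

```python
import numpy as np

def rot_z(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def rot_x(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

# Part (i): rotations about the FIXED frame are pre-multiplied in the
# order performed: first Rot(z, 60 deg), then Rot(x, 30 deg).
v = np.array([3.0, 2.0, 7.0])
R = rot_x(np.radians(30)) @ rot_z(np.radians(60))
print(np.round(R @ v, 3))

# Part (ii): a pure translation just adds the displacement vector.
w = np.array([-25.0, 10.0, 20.0]) + np.array([8.0, 5.0, 0.0])
print(w)  # [-17. 15. 20.]
```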
11. Explain the parameters of Robot Kinematics (13)
12. Explain the static analysis of robot dynamics. (13)
PART - B
21. (i) Classify the Industrial Robots and briefly describe it. (7)
1. Stationary Robots
Stationary robots are robots that work without changing their position. Referring to a robot as
“stationary” does not mean that the robot is not moving at all; it means that the base of the
robot does not move during operation.
These kinds of robots generally manipulate their environment by controlling the position and
orientation of an end-effector. The stationary robot category includes robotic arms, Cartesian
robots, cylindrical robots, spherical robots, SCARA robots and parallel robots.
2. Wheeled Robots
Wheeled robots are robots that change their position with the help of their wheels. Wheeled
motion is easy to achieve in mechanical terms and its cost is fairly low; in addition, control of
wheeled movement is generally easier.
These reasons make wheeled robots among the most frequently seen robots. Single-wheeled
robots, mobile ball robots, two-wheeled robots, three-wheeled robots, four-wheeled robots,
multi-wheeled robots and tracked robots are examples of wheeled robots.
3. Legged Robots
Legged robots are mobile robots, similar to wheeled robots, but their locomotion methods are
more sophisticated and complicated than those of their wheeled counterparts. As their name
suggests, they use their legs for locomotion, and they perform much better than wheeled
robots on uneven terrain.
Although the cost and complexity of producing these robots is high, their advantages on
uneven terrain make them indispensable for many applications. One-legged robots, two-legged
robots, three-legged robots, four-legged robots, six-legged robots and multi-legged robots are
examples of this robot class.
4. Swimming Robots
Swimming robots are robots that move underwater. These robots are generally inspired by
fish, and they use fin-like actuators to maneuver in the water.
5. Flying Robots
Flying robots are robots that float and maneuver in the air using plane-like or bird/insect-like
wings, propellers or balloons. Examples of these robots are airplane robots, bird/insect-inspired
wing-flapping robots, propeller-based multicopters and balloon robots.
7. Swarm Robots
Swarm robots are robotic systems consisting of multiple small robots. These robots do not
structurally form a single united robot; instead, the system operates through the cooperative
operation of its robot modules. Although similar to modular robotic systems, the elements of a
robot swarm have much less functionality, and herd configurations do not create new robots.
8. Modular Robots
Similar to swarm robots, modular robotic systems also have multiple robots in their
configuration, but the modules of these systems are more functional than the members of a
robotic herd. For example, a single module of a modular robotic system can have self-mobility
and can operate alone. The power of modular robotics comes from the versatility of its
configurations: the modules of a modular robotic system can form very different
configurations, and the robots created this way can have very distinct abilities.
9. Micro Robots
By definition, the term micro robot is used for both robots that have dimensions on the
micrometer scale and robots that can operate at micrometer resolution. Therefore, both
possibly very large stationary robots that can manipulate their environment at a micrometer
scale and small robots whose size is actually measured in micrometers are called micro robots.
10. Nano Robots
Similar to micro robots, nano robots are also defined somewhat vaguely. The term nano robot
covers both very small robots with nanometer-scale dimensions and robots that can manipulate
their environment at nanometer-scale resolution, regardless of their actual size.
11. Soft/Elastic Robots
Soft/elastic robots are a recent introduction to robotics. These robots are generally bio-inspired;
most applications are inspired by squids or inchworms, both structurally and functionally.
Second generation:
Second-generation robots are equipped with sensors (vision, touch, force) and can adapt their
motion to changes in the environment.
Third Generation:
Third-generation robots are intelligent robots that use advanced computing and artificial
intelligence to plan their own actions and make decisions.
22. (i) Describe the Basic components of Robots with neat sketch. (7)
(ii) Define links and joints. Explain the types of joints (6)
23. Explain robot specification in detail. (13)
24. (i) Briefly explain the following terms:
a) Payload b) Spatial Resolution c). Precision d) Accuracy (7)
(ii) Explain the types of robots based on control signal (6)
25. Sketch and explain the configuration of Robot (13)
26. What are the types of power transmission systems (13)
27. Describe the types of grippers with neat sketch (13)
28. With neat sketch explain the types of gripper mechanism (13)
29. Discuss about Magnetic and Vacuum Grippers with neat sketch. (13)
30. List out the parameters for selection of a gripper. (13)
31. Discuss about vacuum grippers.
32. (ii) The diagram shows the linkage mechanism and dimensions of a gripper used to
handle a workpart for machining. It has been determined that the gripper force is to be 21 lb.
Compute the actuating force required to deliver this gripper force. (6)