
Proceedings of National Conference on Recent Advances in Electronics and Communication Engineering

(RACE-2015), March 2015

Digital Image Processing


Review Paper
Ajay Kumar¹, Rishu Jain², Dr. Leena Arya³

¹˒²Student, ³Associate Professor
Department of Electronics and Communication Engineering
I.T.S. Engineering College
¹soniajaykumar94@gmail.com, ²rishuuniverse@gmail.com, ³dr.leenaarya@its.edu.in

Abstract- Over the past dozen years, forensic and medical applications of technology first developed to record
and transmit pictures from space have changed the way we see things here on earth, including Anglo-Saxon
manuscripts. With their abilities combined, an electronic camera designed for use with documents and a digital
computer can now often enhance the legibility of formerly obscure or even invisible texts. The computer first
converts the analogue image, in this case a videotape, to a digital image by dividing it into a fine grid and
numbering each part by its relative brightness. Specific image processing programs can then radically improve the
contrast, for instance by stretching the range of brightness throughout the grid from black to white, accentuating
edges, and suppressing random background noise that comes from the equipment rather than the document. Applied
to some of the most damaged passages in the Beowulf manuscript, this new technology shows us some things we had
not seen before and forces us to rethink some established readings.
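The contrast-stretching operation described above can be sketched in a few lines of Python. This is a minimal illustration, not the paper's actual pipeline: the function name and the toy pixel values are invented, and a real system would operate on full scanned images.

```python
import numpy as np

def stretch_contrast(img):
    """Linearly stretch brightness so the darkest pixel maps to 0 and the
    brightest to 255 - a minimal sketch of the contrast enhancement
    described in the abstract."""
    img = img.astype(np.float64)
    lo, hi = img.min(), img.max()
    if hi == lo:                       # flat image: nothing to stretch
        return np.zeros_like(img, dtype=np.uint8)
    return ((img - lo) / (hi - lo) * 255).astype(np.uint8)

# A dim, low-contrast "scan": brightness values crowded between 100 and 140.
faded = np.array([[100, 120, 140],
                  [110, 130, 125]])
print(stretch_contrast(faded))
```

After stretching, the full black-to-white range is used, so formerly faint differences in brightness become visible.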
I. INTRODUCTION
Vision allows humans to perceive and understand the world surrounding them. Computer vision aims to duplicate the
effect of human vision by electronically perceiving and understanding an image. Giving computers the ability to
see is not an easy task - we live in a three-dimensional (3D) world, and when computers try to analyze
objects in 3D space, the available visual sensors (e.g., TV cameras) usually give two-dimensional (2D) images, and
this projection to a lower number of dimensions incurs an enormous loss of information. In order to simplify the
task of computer vision understanding, two levels are usually distinguished: low-level image processing and
high-level image understanding. Low-level methods usually use very little knowledge about the content of images.
High-level processing is based on knowledge, goals, and plans of how to achieve those goals; artificial
intelligence (AI) methods are used in many cases. High-level computer vision tries to imitate human cognition and
the ability to make decisions according to the information contained in the image. This paper deals almost
exclusively with low-level image processing; high-level processing is discussed in the course Image Analysis and
Understanding, which is a continuation of this course.
Many of the techniques of digital image processing, or digital picture processing as it was often called, were
developed in the 1960s at the Jet Propulsion Laboratory, MIT, Bell Labs, University of Maryland, and few other
places, with application to satellite imagery, wire photo standards conversion, medical imaging, videophone,
character recognition, and photo enhancement. But the cost of processing was fairly high with the computing
equipment of that era. In the 1970s, digital image processing proliferated when cheaper computers and dedicated
hardware became available. Digitization creates a film or electronic image of any picture or paper form; it is
accomplished by scanning or photographing an object and turning it into a matrix of dots (a bitmap), the meaning of
which is unknown to the computer, only to the human viewer. Scanned images of text may be encoded into computer
data (ASCII or EBCDIC) with page recognition software (OCR).

II. BASIC CONCEPTS


A signal is a function depending on some variable with physical meaning. Signals can be
o One-dimensional (e.g., dependent on time),
o Two-dimensional (e.g., images dependent on two co-ordinates in a plane),
o Three-dimensional (e.g., describing an object in space),
o Or higher-dimensional.
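The dimensionality of these signals maps directly onto array shapes; assuming NumPy for illustration (the shapes chosen here are arbitrary examples, not from the text):

```python
import numpy as np

audio  = np.zeros(44100)           # 1-D signal: one second of audio samples over time
image  = np.zeros((480, 640))      # 2-D signal: grayscale image, brightness over (row, col)
volume = np.zeros((64, 64, 64))    # 3-D signal: e.g. a CT scan over (x, y, z)

for name, sig in [("audio", audio), ("image", image), ("volume", volume)]:
    print(name, sig.ndim, sig.shape)
```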
Pattern recognition is a field within the area of machine learning. Alternatively, it can be defined as "the act of
taking in raw data and taking an action based on the category of the data" [1]. As such, it is a collection of
methods for supervised learning.
Pattern recognition aims to classify data (patterns) based either on a priori knowledge or on statistical
information extracted from the patterns. The patterns to be classified are usually groups of measurements or
observations, defining points in an appropriate multidimensional space, used to represent, for instance, color
images consisting of three component colors.
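A minimal sketch of such a classifier, assuming a 1-nearest-neighbor rule and invented (R, G, B) training measurements - the data and class labels are purely illustrative:

```python
import math

# Toy training patterns: (R, G, B) measurements, i.e. points in a
# three-component color space, with known classes.
training = [((255, 30, 20), "red"),
            ((250, 60, 40), "red"),
            ((20, 40, 240), "blue"),
            ((35, 25, 220), "blue")]

def classify(pattern):
    """1-nearest-neighbor: assign the class of the closest stored pattern."""
    _, label = min(training, key=lambda t: math.dist(t[0], pattern))
    return label

print(classify((240, 50, 30)))   # closest to the "red" examples
```

Real pattern recognition systems use far richer feature spaces and decision rules, but the principle - classifying a measurement point by its position relative to known patterns - is the same.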

III. IMAGE FUNCTION


The image can be modeled by a continuous function of two or three variables; Arguments are co-ordinates x, y in a
plane, while if images change in time a third variable t might be added. The image function values correspond to the
brightness at image points. The function value can express other physical quantities as well (temperature, pressure
distribution, distance from the observer, etc.). The brightness integrates different optical quantities - using brightness
as a basic quantity allows us to avoid the description of the very complicated process of image formation. The image
on the human eye retina or on a TV camera sensor is intrinsically 2D. We shall call such a 2D image bearing
information about brightness points an intensity image. The real world, which surrounds us, is intrinsically 3D. The
2D intensity image is the result of a perspective projection of the 3D scene. When 3D objects are mapped into the
camera plane by perspective projection a lot of information disappears as such a transformation is not one-to-one.
Recognizing or reconstructing objects in a 3D scene from one image is an ill-posed problem. Recovering
information lost by perspective projection is only one, mainly geometric, problem of computer vision. The second
problem is how to understand image brightness. The only information available in an intensity image is brightness
of the appropriate pixel, which is dependent on a number of independent factors such as
o Object surface reflectance properties (given by the surface material, microstructure and marking),
o Illumination properties,
o And object surface orientation with respect to a viewer and light source.
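The loss of information under perspective projection mentioned above can be seen in a toy pinhole-camera sketch (the focal length and sample points are arbitrary):

```python
def project(point3d, f=1.0):
    """Perspective projection of a 3-D point onto the image plane z = f.
    Depth is divided away, which is why the mapping is not one-to-one."""
    x, y, z = point3d
    return (f * x / z, f * y / z)

# Two different 3-D points on the same viewing ray...
print(project((1.0, 2.0, 4.0)))   # (0.25, 0.5)
print(project((2.0, 4.0, 8.0)))   # (0.25, 0.5) -- the same image point
```

Since distinct scene points collapse onto one image point, reconstructing the 3D scene from a single intensity image is ill-posed, exactly as stated above.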
Metric properties of digital images:
Distance is an important example. The distance between two pixels in a digital image is a significant quantitative
measure. The Euclidean distance D_E is defined by Eq. 2.42:

D_E((i,j),(h,k)) = sqrt[(i-h)^2 + (j-k)^2]    (2.42)

The city block distance D_4 (Eq. 2.43) counts the minimal number of horizontal and vertical moves between pixels:

D_4((i,j),(h,k)) = |i-h| + |j-k|    (2.43)


The chessboard distance D_8 (Eq. 2.44) is the minimal number of king moves between the pixels:

D_8((i,j),(h,k)) = max(|i-h|, |j-k|)    (2.44)
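The Euclidean, city block, and chessboard distances can be sketched directly in Python (pixels are taken as (row, column) pairs; the function names are our own):

```python
def d_euclidean(p, q):
    """Straight-line distance between two pixels."""
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def d_city_block(p, q):
    """D4: only horizontal and vertical unit moves are allowed."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def d_chessboard(p, q):
    """D8: diagonal moves are allowed as well (king moves)."""
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

p, q = (0, 0), (3, 4)
print(d_euclidean(p, q), d_city_block(p, q), d_chessboard(p, q))  # 5.0 7 4
```

The example shows the general ordering D_8 <= D_E <= D_4 for a given pair of pixels.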

Pixel adjacency is another important concept in digital images; the two usual choices are the 4-neighborhood and
the 8-neighborhood. It is often necessary to consider important sets consisting of several adjacent pixels --
regions. A region is a contiguous set. The square grid suffers from contiguity paradoxes; one possible solution is
to treat objects using the 4-neighborhood and the background using the 8-neighborhood (or vice versa). A hexagonal
grid solves many problems of the square grid, since any point in the hexagonal raster has the same distance to all
its six neighbors. The border of a region R is the set of pixels within the region that have one or more neighbors
outside R; both inner borders and outer borders exist. An edge is a local property of a pixel and its immediate
neighborhood: it is a vector given by a magnitude and a direction. The edge direction is perpendicular to the
gradient direction, which points in the direction of image function growth. The border is a global concept related
to a region, while the edge expresses local properties of an image function. Crack edges: four crack edges are
attached to each pixel, defined by its relation to its 4-neighbors. The direction of a crack edge is that of
increasing brightness and is a multiple of 90 degrees, while its magnitude is the absolute difference between the
brightness of the relevant pair of pixels (Fig. 2.9).
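A small sketch of the 4-neighborhood, the 8-neighborhood, and an inner-border computation over a region given as a set of pixel coordinates (the set representation and function names are our own choice):

```python
def neighbors4(i, j):
    """The four edge-adjacent neighbors of pixel (i, j)."""
    return [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]

def neighbors8(i, j):
    """All eight edge- and corner-adjacent neighbors of pixel (i, j)."""
    return [(i + di, j + dj)
            for di in (-1, 0, 1) for dj in (-1, 0, 1)
            if (di, dj) != (0, 0)]

def inner_border(region):
    """Pixels of the region having at least one 4-neighbor outside it."""
    return {p for p in region
            if any(n not in region for n in neighbors4(*p))}

# A 3x3 square region: only the centre pixel (1, 1) is not on the border.
region = {(i, j) for i in range(3) for j in range(3)}
print(sorted(inner_border(region)))
```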


IV. TOPOLOGICAL PROPERTIES OF DIGITAL IMAGES


Topological properties of images are invariant to rubber-sheet transformations. Stretching does not change the
contiguity of the object parts and does not change the number of holes in regions. One such image property is the
Euler-Poincare characteristic, defined as the difference between the number of regions and the number of holes in
them. The convex hull is used to describe topological properties of objects: it is the smallest region which
contains the object, such that any two points of the region can be connected by a straight line, all points of
which belong to the region.
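A simplified sketch of the Euler-Poincare characteristic for a small binary grid, counting 4-connected regions and holes by flood fill. For this toy example, 4-connectivity is used for both object and background; a careful implementation would follow the advice above and use 8-connectivity for one of them to avoid contiguity paradoxes.

```python
from collections import deque

def components(cells):
    """Count 4-connected components of a set of grid cells (flood fill)."""
    cells, count = set(cells), 0
    while cells:
        count += 1
        queue = deque([cells.pop()])
        while queue:
            i, j = queue.popleft()
            for n in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                if n in cells:
                    cells.remove(n)
                    queue.append(n)
    return count

def euler_characteristic(grid):
    """Number of regions minus number of holes. A hole is a background
    component that does not touch the padded outer background."""
    h, w = len(grid), len(grid[0])
    fg = {(i, j) for i in range(h) for j in range(w) if grid[i][j]}
    bg = {(i, j) for i in range(-1, h + 1) for j in range(-1, w + 1)} - fg
    regions = components(fg)
    holes = components(bg) - 1   # one background component touches the border
    return regions - holes

# A ring: one region enclosing one hole -> Euler characteristic 0.
ring = [[1, 1, 1],
        [1, 0, 1],
        [1, 1, 1]]
print(euler_characteristic(ring))   # 0
```

Stretching the ring (a rubber-sheet transformation) would change neither the region count nor the hole count, so the characteristic stays 0.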

V. CONCLUSION
Surveillance by humans depends on the quality of the human operator, and factors such as operator fatigue and
negligence may lead to degraded performance. These factors can make an intelligent vision system a better option,
as in systems that use gait signatures for recognition or in vehicle video sensors for driver assistance.

VI. REFERENCES

[1] John G. Proakis, Digital Signal Processing, Pearson, 2007.
[2] Steven W. Smith, The Scientist and Engineer's Guide to Digital Signal Processing, 1998.
[3] Donald Reay, Digital Signal Processing and Applications, 2nd edition, Wiley Student Edition, 2010.
[4] Richard G. Lyons, Understanding Digital Signal Processing, 3rd edition, 2010.
[5] http://itl7.elte.hu/~zsolt/Oktatas/editable_Digital_Signal_Processing_Principles_Algorithms_and_Applications_Third_Edition.pdf
