
CHAPTER 1

Introduction to Digital Image Processing


Introduction

Digital image processing systems are used in many applications such as remote sensing, medicine, image transmission and encoding, and machine vision. Image enhancement is the operation of improving the display or the properties of an image for more advanced image analysis. Image acquisition under adverse conditions, such as night-time or cloudy or smoky weather, suffers from many defects because of weak reflection of light from objects. Acquiring clear images in these unfavourable situations has become a challenge.

1.1 IMAGE:

An image is a two-dimensional picture that closely resembles some subject, usually a physical object or a person.

Images, such as photographs and screen displays, and even three-dimensional ones, can be captured by optical devices such as cameras, mirrors, lenses, telescopes and microscopes, or by natural objects and phenomena such as the human eye or a water surface.

The term image also covers any two-dimensional figure such as a graph, map, pie chart or painting. In this sense, images can be produced manually, by carving, drawing or painting, rendered automatically by printing or computer graphics, or developed by a combination of methods, as in a pseudo-photograph.

SRKR Engineering College, Bhimavaram


Fig 1.1: Image

An image is a rectangular grid of pixels, with a specific height and width measured in pixels. Each pixel is square and has a fixed size on a given display, although different monitors may use different pixel sizes. The pixels that constitute an image are arranged in a grid of columns and rows, and each pixel holds numbers representing the magnitude of its brightness and colour.

Fig 1.2: Pixel magnitude table



Each pixel has a colour, stored as a 32-bit integer. The first eight bits specify the redness of the pixel, the next eight bits the greenness, the next eight the blueness, and the remaining eight bits the transparency of the pixel.
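The byte layout described above can be illustrated with a few bit operations. The following sketch (plain Python, assuming red occupies the most significant byte as the text describes) packs and unpacks such a 32-bit pixel:

```python
def unpack_pixel(value):
    """Split a 32-bit pixel into its four 8-bit channels.

    Assumes the byte order described above: red in the highest byte,
    then green, blue, and transparency (alpha) in the lowest byte.
    """
    red   = (value >> 24) & 0xFF
    green = (value >> 16) & 0xFF
    blue  = (value >> 8)  & 0xFF
    alpha = value & 0xFF
    return red, green, blue, alpha

def pack_pixel(red, green, blue, alpha):
    """Combine four 8-bit channels back into one 32-bit integer."""
    return (red << 24) | (green << 16) | (blue << 8) | alpha

# A fully opaque pure-red pixel:
pixel = pack_pixel(255, 0, 0, 255)   # 0xFF0000FF
```

Masking with `0xFF` after each shift guarantees every channel stays within the 0 to 255 range of a single byte.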

Fig 1.3: 32-bit integer

1.2 IMAGE FILE SIZES:

Image file size is expressed as a number of bytes and grows with the number of pixels in the image and with the colour depth of those pixels. The greater the number of rows and columns, the greater the image resolution and the larger the file. Each pixel also takes more space as its colour depth increases: an 8-bit pixel (1 byte) can represent 256 colours, while a 24-bit pixel (3 bytes) can represent about 16 million colours, the latter known as true colour.

Image compression uses algorithms to decrease the size of a file. High-resolution cameras produce large image files, ranging from hundreds of kilobytes to many megabytes, depending on the camera's resolution and the image-storage format. High-resolution digital cameras record images of 12 megapixels (1 MP = 1,000,000 pixels) or more in true colour. Consider an image recorded by a 12 MP camera: since each pixel uses 3 bytes to record true colour, the uncompressed image would occupy 36,000,000 bytes of memory, a great amount of digital storage for one image, given that cameras must record and store many images to be practical. Faced with such large file sizes, both within the camera and on a storage disc, image file formats were developed to store these images efficiently.
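The 36,000,000-byte figure above follows directly from the pixel count and colour depth; a quick check:

```python
# Uncompressed size of a 12-megapixel true-colour image.
pixels = 12_000_000        # 12 MP (1 MP = 1,000,000 pixels)
bytes_per_pixel = 3        # 24-bit true colour = 3 bytes per pixel
size_bytes = pixels * bytes_per_pixel
size_mb = size_bytes / 1_000_000

print(size_bytes, "bytes =", size_mb, "MB")   # 36000000 bytes = 36.0 MB
```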



1.3 IMAGE FILE FORMATS:

Image file formats are standardized means of organizing and storing images. This entry is
about digital image formats used to store photographic and other images. Image files are
composed of either pixel or vector (geometric) data that are rasterized to pixels when displayed
(with few exceptions) in a vector graphic display. Including proprietary types, there are hundreds
of image file types. The PNG, JPEG, and GIF formats are most often used to display images on
the Internet.

Fig 1.4: Image format

In addition to straight image formats, metafile formats are portable formats that can include both raster and vector information. The metafile format is an intermediate format; most Windows applications open metafiles and then save them in their own native format.

1.3.1 RASTER FORMATS:

These formats store images as bitmaps (also known as pixmaps).

 JPEG/JFIF:

JPEG (Joint Photographic Experts Group) is a compression method. JPEG-compressed images are generally stored in the JFIF (JPEG File Interchange Format) file format. JPEG compression is lossy. Nearly every digital camera can save images in the JPEG/JFIF format, which supports 8 bits per colour channel (red, green, blue) for a 24-bit total, producing relatively small files. Photographic images may be better stored in a lossless non-JPEG format if


they will be re-edited, or if small "artifacts" are unacceptable. The JPEG/JFIF format also is used
as the image compression algorithm in many Adobe PDF files.

 EXIF:

The EXIF (Exchangeable Image File Format) standard is similar to the JFIF format, with TIFF extensions. It is incorporated in the JPEG-writing software used in most cameras. Its purpose is to record and standardize the exchange of image metadata between digital cameras and editing and viewing software. The metadata are recorded for individual images and include such things as camera settings, time and date, shutter speed, exposure, image size, compression, name of camera, colour information, etc. When images are viewed or edited in image-editing software, all of this image information can be displayed.

 TIFF:

The TIFF (Tagged Image File Format) is a flexible format that normally saves 8 or 16 bits per colour channel (red, green, blue), for 24-bit and 48-bit totals respectively, generally using either the TIFF or TIF filename extension. TIFF supports both lossy and lossless compression; some variants offer relatively good lossless compression for bi-level (black and white) images. Some digital cameras can save in TIFF format, using the LZW compression algorithm for lossless storage. The TIFF format is not widely supported by web browsers, but it remains widely accepted as a photograph file standard in the printing business. TIFF can handle device-specific colour spaces, such as the CMYK space defined by a specific set of printing-press inks.

 PNG:

The PNG (Portable Network Graphics) file format was created as the free, open-source successor to the GIF. The PNG format supports true colour (16 million colours), while the GIF supports only 256. PNG excels when an image has large, uniformly coloured areas. The lossless PNG format is best suited for editing pictures, while lossy formats such as JPEG are better for the final distribution of photographic images, because JPEG files are smaller than PNG files. PNG is an extensible file format for the lossless, portable, well-compressed storage of raster images. It provides a patent-free replacement for GIF and can also replace many common uses of TIFF. Indexed-colour, grayscale, and true-colour images are supported, plus an optional


alpha channel. PNG is designed to work well in online viewing applications, such as the World Wide Web. PNG is robust, providing both full file-integrity checking and simple detection of common transmission errors.

 GIF:

GIF (Graphics Interchange Format) is limited to an 8-bit palette, or 256 colors. This
makes the GIF format suitable for storing graphics with relatively few colors such as simple
diagrams, shapes, logos and cartoon style images. The GIF format supports animation and is still
widely used to provide image animation effects. It also uses a lossless compression that is more
effective when large areas have a single color, and ineffective for detailed images or dithered
images.

 BMP:

The BMP file format (Windows bitmap) handles graphics files within the Microsoft
Windows OS. Typically, BMP files are uncompressed, hence they are large. The advantage is
their simplicity and wide acceptance in Windows programs.

1.3.2 VECTOR FORMATS:

As opposed to the raster image formats above (where the data describe the characteristics of each individual pixel), vector image formats contain a geometric description that can be rendered smoothly at any desired display size.

At some point, all vector graphics must be rasterized in order to be displayed on digital
monitors. However, vector images can be displayed with analog CRT technology such as that
used in some electronic test equipment, medical monitors, radar displays, laser shows and early
video games. Plotters are printers that use vector data rather than pixel data to draw graphics.



 CGM:

CGM (Computer Graphics Metafile) is a file format for 2D vector graphics, raster
graphics, and text. All graphical elements can be specified in a textual source file that can be
compiled into a binary file or one of two text representations. CGM provides a means of
graphics data interchange for computer representation of 2D graphical information
independent from any specific application, system, platform, or device.

 SVG:

SVG (Scalable Vector Graphics) is an open standard created and developed by the World
Wide Web Consortium to address the need for a versatile, scriptable and all-purpose vector
format for the web and otherwise. The SVG format does not have a compression scheme of
its own, but due to the textual nature of XML, an SVG graphic can be compressed using a
program such as gzip.
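Since SVG is plain XML text, compressing it with gzip (the widely used `.svgz` convention) takes only a few lines. A minimal sketch with a made-up SVG document:

```python
import gzip

# A tiny, hypothetical SVG document (plain XML text).
svg = (
    '<svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">'
    '<circle cx="50" cy="50" r="40" fill="red"/>'
    '</svg>'
)

# Compress the textual markup and decompress it again.
compressed = gzip.compress(svg.encode("utf-8"))
restored = gzip.decompress(compressed).decode("utf-8")   # round-trips exactly
```

Because gzip is lossless, the restored markup is byte-for-byte identical to the original; the space saving grows with the amount of repetitive markup in the document.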

1.4 IMAGE PROCESSING:

Digital image processing, the manipulation of images by computer, is a relatively recent development in terms of man's ancient fascination with visual stimuli. In its short history, it has been applied to practically every type of image, with varying degrees of success. The inherent subjective appeal of pictorial displays attracts perhaps a disproportionate amount of attention from scientists and laymen alike. Digital image processing, like other glamour fields, suffers from myths, misconceptions, misunderstandings and misinformation. It is a vast umbrella under which fall diverse aspects of optics, electronics, mathematics, photography, graphics and computer technology. It is a truly multidisciplinary endeavour, ploughed with imprecise jargon.

Several factors combine to indicate a lively future for digital image processing. A major factor is the declining cost of computer equipment. Several new technological trends promise to further promote digital image processing. These include parallel processing made



practical by low-cost microprocessors, and the use of charge-coupled devices (CCDs) for digitizing, for storage during processing and display, and for large, low-cost image storage arrays.

1.5 FUNDAMENTAL STEPS IN DIGITAL IMAGE PROCESSING:

Fig 1.5: Fundamental block diagram



1.5.1 Image Acquisition:

Image acquisition means acquiring a digital image. To do so requires an image sensor and the capability to digitize the signal produced by the sensor. The sensor could be a monochrome or colour TV camera that produces an entire image of the problem domain every 1/30 s. The image sensor could also be a line-scan camera that produces a single image line at a time; in this case, the object's motion past the line

Fig 1.6: Digital camera

scanner produces a two-dimensional image. If the output of the camera or other imaging sensor is not in digital form, an analog-to-digital converter digitizes it. The nature of the sensor and the image it produces are determined by the application.

Fig 1.7: Scanner



1.5.2 Image Enhancement:

Image enhancement is among the simplest and most appealing areas of digital image processing. Basically, the idea behind enhancement techniques is to bring out detail that is obscured, or simply to highlight certain features of interest in an image. A familiar example of enhancement is increasing the contrast of an image because "it looks better." It is important to keep in mind that enhancement is a very subjective area of image processing.
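A simple contrast stretch of the kind mentioned above can be sketched in a few lines; the pixel values here are made up for illustration:

```python
def stretch_contrast(pixels, new_min=0, new_max=255):
    """Linearly rescale gray levels so the darkest pixel maps to
    new_min and the brightest to new_max (a basic contrast stretch)."""
    lo, hi = min(pixels), max(pixels)
    if hi == lo:                      # flat image: nothing to stretch
        return list(pixels)
    scale = (new_max - new_min) / (hi - lo)
    return [round(new_min + (p - lo) * scale) for p in pixels]

# A low-contrast strip of gray levels, bunched around mid-gray:
strip = [100, 110, 120, 130, 140]
print(stretch_contrast(strip))   # [0, 64, 128, 191, 255]
```

After stretching, the values span the full 0 to 255 range, so differences between neighbouring gray levels become far more visible on screen.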

Fig 1.8: Image Enhancement

1.5.3 Image restoration:


Image restoration is an area that also deals with improving the appearance of an image. However, unlike enhancement, which is subjective, image restoration is objective, in the sense that restoration techniques tend to be based on mathematical or probabilistic models of image degradation.

Fig 1.9: Image Restoration




Enhancement, on the other hand, is based on human subjective preferences regarding what constitutes a "good" enhancement result. For example, contrast stretching is considered an enhancement technique because it is based primarily on the pleasing aspects it might present to the viewer, whereas removal of image blur by applying a deblurring function is considered a restoration technique.

1.5.4 Color image processing:

The use of color in image processing is motivated by two principal factors. First, color is
a powerful descriptor that often simplifies object identification and extraction from a scene.
Second, humans can discern thousands of colour shades and intensities, compared with only about two dozen shades of gray. This second factor is particularly important in manual image analysis.

Fig 1.10: Color image processing from black and white

1.5.5 Wavelets and multi resolution processing:

Wavelets are the foundation for representing images at various degrees of resolution. Although the Fourier transform has been the mainstay of transform-based image processing since the late 1950s, a more recent transformation, the wavelet transform, now makes it even easier to compress, transmit and analyze many images. Unlike the Fourier transform, whose basis functions are sinusoids, wavelet transforms are based on small waves, called wavelets, of varying frequency and limited duration.



Fig 1.11: Multi resolution representation

Wavelets were first shown to be the foundation of a powerful new approach to signal processing and analysis called multiresolution theory. Multiresolution theory incorporates and unifies techniques from a variety of disciplines, including subband coding from signal processing, quadrature mirror filtering from digital speech recognition, and pyramidal image processing.

1.5.6 Compression:
Compression, as the name implies, deals with techniques for reducing the storage required to save an image, or the bandwidth required to transmit it. Although storage technology has improved significantly over the past decade, the same cannot be said for transmission capacity. This is especially true of the Internet, which is characterized by significant pictorial content. Image compression is familiar to most computer users in the form of image file extensions, such as the .jpg extension used by the JPEG (Joint Photographic Experts Group) image compression standard.
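JPEG itself is far more sophisticated, but the basic idea of removing redundancy can be illustrated with run-length encoding, a very simple lossless scheme (a sketch for illustration, not part of any standard mentioned above):

```python
def rle_encode(pixels):
    """Run-length encode a pixel sequence as (value, run_length) pairs;
    effective when long runs of identical values occur."""
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([p, 1])       # start a new run
    return [(v, n) for v, n in runs]

def rle_decode(runs):
    """Invert the encoding exactly: no information is lost."""
    out = []
    for value, count in runs:
        out.extend([value] * count)
    return out

# A scan line with long uniform runs compresses to just three pairs:
row = [255] * 20 + [0] * 12 + [255] * 8
encoded = rle_encode(row)    # [(255, 20), (0, 12), (255, 8)]
```

The 40-pixel row collapses to three pairs, and decoding reproduces it exactly, which is what distinguishes lossless from lossy compression.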

1.5.7 Morphological processing:


Morphological processing deals with tools for extracting image components that are
useful in the representation and description of shape. The language of mathematical morphology
is set theory. As such, morphology offers a unified and powerful approach to numerous image
processing problems. Sets in mathematical morphology represent objects in an image. For


example, the set of all black pixels in a binary image is a complete morphological description of
the image.

Fig 1.12: Morphing of image

In binary images, the sets in question are members of the 2-D integer space Z², where each element of a set is a 2-D vector whose coordinates are the (x, y) coordinates of a black (or white) pixel in the image. Gray-scale digital images can be represented as sets whose components are in Z³: in this case, two components of each element of the set refer to the coordinates of a pixel, and the third corresponds to its discrete gray-level value.
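Because morphology is set theory, the two basic operators can be written directly on sets of pixel coordinates. A small sketch (object and structuring element chosen purely for illustration):

```python
def dilate(points, se):
    """Morphological dilation: translate the object set by every
    vector in the structuring element and take the union."""
    return {(x + dx, y + dy) for (x, y) in points for (dx, dy) in se}

def erode(points, se):
    """Morphological erosion: keep only positions where the whole
    structuring element fits inside the object set."""
    return {(x, y) for (x, y) in points
            if all((x + dx, y + dy) in points for (dx, dy) in se)}

# A 3x3 block of black pixels and a "cross" structuring element:
obj = {(x, y) for x in range(3) for y in range(3)}
cross = {(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)}

eroded = erode(obj, cross)    # only the centre pixel survives
```

Erosion shrinks the object to the single interior pixel where the cross fits entirely, while dilation grows the object by one pixel in each cross direction.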

1.5.8 Segmentation:
Segmentation procedures partition an image into its constituent parts or objects. In general, autonomous segmentation is one of the most difficult tasks in digital image processing. A rugged segmentation procedure brings the process a long way toward the successful solution of imaging problems that require objects to be identified individually.

Fig 1.13: Segmentation representation



On the other hand, weak or erratic segmentation algorithms almost always guarantee
eventual failure. In general, the more accurate the segmentation, the more likely recognition is to
succeed.
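The simplest segmentation procedure is global thresholding, which the following sketch illustrates on a small, hypothetical gray-scale image:

```python
def threshold_segment(image, t):
    """Partition a gray-scale image into object (1) and background (0)
    by comparing each pixel against a global threshold t."""
    return [[1 if p > t else 0 for p in row] for row in image]

# A bright object on a dark background:
image = [
    [ 10,  12, 200, 210],
    [ 11, 205, 220,  13],
    [ 12,  11,  10,  12],
]
mask = threshold_segment(image, 128)
# mask: [[0, 0, 1, 1], [0, 1, 1, 0], [0, 0, 0, 0]]
```

Choosing a good threshold automatically (for example from the gray-level histogram) is exactly where segmentation becomes difficult in practice.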

1.5.9 Representation and description:

Representation and description almost always follow the output of a segmentation stage, which generally is raw pixel data, constituting either the boundary of a region (i.e., the set of pixels separating one image region from another) or all the points in the region itself. In either case, converting the data to a form suitable for computer processing is necessary. The first decision that must be made is whether the data should be represented as a boundary or as a complete region. Boundary representation is appropriate when the focus is on external shape characteristics, such as corners and inflections.

Regional representation is appropriate when the focus is on internal properties, such as texture or skeletal shape. In some applications, these representations complement each other. Choosing a representation is only part of the solution for transforming raw data into a form suitable for subsequent computer processing. A method must also be specified for describing the data so that features of interest are highlighted. Description, also called feature selection, deals with extracting attributes that result in some quantitative information of interest or that are basic for differentiating one class of objects from another.

1.5.10 Object recognition:

The last stage involves recognition and interpretation. Recognition is the process that assigns a label to an object based on the information provided by its descriptors. Interpretation involves assigning meaning to an ensemble of recognized objects.



1.5.11 Knowledgebase:

Knowledge about a problem domain is coded into an image processing system in the form of a knowledge database. This knowledge may be as simple as detailing the regions of an image where the information of interest is known to be located, thus limiting the search that has to be conducted in seeking that information. The knowledge base can also be quite complex, such as an interrelated list of all major possible defects in a materials-inspection problem, or an image database of high-resolution satellite images of a region for a change-detection application. In addition to guiding the operation of each processing module, the knowledge base also controls the interaction between modules. For example, a system reading addresses must be endowed with the knowledge to recognize the significance of the location of a character string with respect to other components of an address field. This knowledge guides not only the operation of each module, but also aids feedback operations between modules through the knowledge base. We implemented the preprocessing techniques using MATLAB.

1.6 COMPONENTS OF AN IMAGE PROCESSING SYSTEM:

As recently as the mid-1980s, numerous models of image processing systems being sold
throughout the world were rather substantial peripheral devices that attached to equally
substantial host computers. Late in the 1980s and early in the 1990s, the market shifted to image
processing hardware in the form of single boards designed to be compatible with industry
standard buses and to fit into engineering workstation cabinets and personal computers. In
addition to lowering costs, this market shift also served as a catalyst for a significant number of
new companies whose specialty is the development of software written specifically for image
processing.



[Diagram: network, image displays, computer, mass storage, hardcopy, specialized image processing hardware, image processing software, image sensors, problem domain]

Fig 1.14: Basic components of an image processing system

Although large-scale image processing systems still are being sold for massive imaging applications, such as the processing of satellite images, the trend continues toward the miniaturization and blending of general-purpose small computers with specialized image processing hardware. Fig 1.14 shows the basic components comprising a typical general-purpose system used for digital image processing. The function of each component is discussed in the following paragraphs, starting with image sensing.



 Image sensors:

With reference to sensing, two elements are required to acquire digital images. The first is a physical device that is sensitive to the energy radiated by the object we wish to image. The second, called a digitizer, is a device for converting the output of the physical sensing device into digital form. For instance, in a digital video camera, the sensors produce an electrical output proportional to light intensity, and the digitizer converts these outputs to digital data.

 Specialized image processing hardware:

Specialized image processing hardware usually consists of the digitizer just mentioned, plus hardware that performs other primitive operations, such as an arithmetic logic unit (ALU) that performs arithmetic and logical operations in parallel on entire images. One example of how an ALU is used is in averaging images as quickly as they are digitized, for the purpose of noise reduction. This type of hardware sometimes is called a front-end subsystem, and its most distinguishing characteristic is speed. In other words, this unit performs functions that require fast data throughput (e.g., digitizing and averaging video images at 30 frames/s) that the typical main computer cannot handle.

 Computer:

The computer in an image processing system is a general-purpose computer and can


range from a PC to a supercomputer. In dedicated applications, sometimes specially designed
computers are used to achieve a required level of performance, but our interest here is on
general-purpose image processing systems. In these systems, almost any well-equipped PC-type
machine is suitable for offline image processing tasks.



 Image processing software:

Software for image processing consists of specialized modules that perform specific
tasks. A well-designed package also includes the capability for the user to write code that, as a
minimum, utilizes the specialized modules. More sophisticated software packages allow the
integration of those modules and general-purpose software commands from at least one
computer language.

 Mass storage:

Mass storage capability is a must in image processing applications. An image of size


1024 × 1024 pixels, in which the intensity of each pixel is an 8-bit quantity, requires one megabyte of storage space if the image is not compressed. When dealing with thousands, or even millions, of images, providing adequate storage in an image processing system can be a challenge. Digital storage for image processing applications falls into three principal categories: (1) short-term storage for use during processing, (2) online storage for relatively fast recall, and (3) archival storage, characterized by infrequent access. Storage is measured in bytes (eight bits), Kbytes (one thousand bytes), Mbytes (one million bytes), Gbytes (one billion bytes), and Tbytes (one trillion bytes).

One method of providing short-term storage is computer memory. Another is by


specialized boards, called frame buffers, that store one or more images and can be accessed rapidly, usually at video rates. The latter method allows virtually instantaneous image zoom, as well as scroll (vertical shifts) and pan (horizontal shifts). Frame buffers generally are housed in the specialized image processing hardware unit shown in Fig 1.14. Online storage generally takes the form of magnetic disks or optical-media storage. The key factor characterizing online storage is frequent access to the stored data. Finally, archival storage is characterized by massive storage requirements but infrequent need for access. Magnetic tapes and optical disks housed in "jukeboxes" are the usual media for archival applications.



 Image displays:

Image displays in use today are mainly colour (preferably flat-screen) TV monitors. Monitors are driven by the outputs of image and graphics display cards that are an integral part of the computer system. Seldom are there requirements for image display applications that cannot be met by display cards available commercially as part of the computer system. In some cases it is necessary to have stereo displays, and these are implemented in the form of headgear containing two small displays embedded in goggles worn by the user.

 Hardcopy:

Hardcopy devices for recording images include laser printers, film cameras, heat-
sensitive devices, inkjet units, and digital units, such as optical and CD-ROM disks. Film
provides the highest possible resolution, but paper is the obvious medium of choice for written
material. For presentations, images are displayed on film transparencies or in a digital medium if
image projection equipment is used. The latter approach is gaining acceptance as the standard for
image presentations.

 Network:
Networking is almost a default function in any computer system in use today. Because of the large amount of data inherent in image processing applications, the key consideration in image transmission is bandwidth. In dedicated networks this typically is not a problem, but communications with remote sites via the Internet are not always as efficient. Fortunately, this situation is improving quickly as a result of optical fiber and other broadband technologies.



1.7 Digital Image Processing allows users to perform the following tasks

Image sharpening and restoration

The common applications of image sharpening and restoration are zooming, blurring, sharpening, grayscale conversion, edge detection, image recognition, image retrieval, etc.

Medical field

The common applications in the medical field are gamma-ray imaging, PET scans, X-ray imaging, medical CT, UV imaging, etc.

Remote sensing

It is the process of scanning the earth by satellite, acquiring information about its surface and activities from space.

Machine/Robot vision

It works on the vision of robots so that they can see things, identify them, etc.

Pattern recognition

It involves the study of image processing combined with artificial intelligence, so that computer-aided diagnosis, handwriting recognition and image recognition can be easily implemented. Nowadays, image processing is widely used for pattern recognition.

Video processing

It is also one of the applications of digital image processing. A collection of frames or pictures is arranged in such a way that it produces the impression of fast-moving pictures. It involves frame-rate conversion, motion detection, noise reduction, colour-space conversion, etc.



1.8 Characteristics of Digital Image Processing

 It uses software, some of which is available free of cost.


 It provides clear images.
 Digital image processing performs image enhancement to recover data from images.
 It is used widely in many fields.
 It reduces the complexity of working with images.
 It is used to support a better visual experience.

1.9 Advantages of Digital Image Processing

 Image reconstruction (CT, MRI, SPECT, PET).


 Image reformatting (Multi-plane, multi-view reconstructions).
 Fast image storage and retrieval.
 Fast and high-quality image distribution.
 Controlled viewing (windowing, zooming)

1.10 Disadvantages of Digital Image Processing

 It is very time-consuming.


 It can be very costly, depending on the specific system.
 It requires qualified personnel to operate.



CHAPTER 2
Literature Survey

Digital image processing stems from two principal application areas: the improvement of pictorial information for human interpretation, and the processing of image data for autonomous machine perception. Wang et al. proposed a colour-image correction method based on a non-linear transformation function, built on a light-reflection model and multi-scale theory. Traditional low-illumination image enhancement algorithms include the gray-transformation method and the histogram-equalization method. Huang et al. proposed an adaptive gamma correction algorithm that adaptively derives the gamma correction parameters from a cumulative-distribution probability histogram. Jobson et al. proposed the single-scale retinex (SSR) algorithm based on the illumination-reflection retinex model established by Land et al.; it later evolved into the multiscale retinex (MSR) algorithm, the MSR algorithm with colour restoration (MSRCR), and the MSR algorithm with chromaticity preservation (MSRCP). Fu et al. proposed a weighted variational model for simultaneous reflection and illumination estimation (SRIE) that can preserve the estimated reflectivity with high accuracy and suppress noise to a certain extent. Image enhancement methods based on machine learning have emerged in recent years. The image quality under different distortions, such as noise and compression, is estimated.

Fig 2.1: Illumination Reflection model



F(x,y) = I(x,y) · R(x,y)    [1]

Here F(x,y), the brightness of the image at a given pixel, is the product of I(x,y) and R(x,y), where I(x,y) is the illumination component of the incident light and R(x,y) is the reflection component from the object surface.
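Equation [1] can be demonstrated numerically; the values below are hypothetical, chosen only to show how dividing out the illumination (the idea behind retinex-style methods) recovers the reflectance:

```python
# Observed brightness F is the pointwise product of the illumination I
# falling on the scene and the reflectance R of the surface
# (hypothetical values, all in [0, 1]).
illumination = [0.2, 0.2, 0.9, 0.9]   # dim on the left, bright on the right
reflectance  = [0.5, 0.8, 0.5, 0.8]   # the surface's own properties

observed = [i * r for i, r in zip(illumination, reflectance)]

# Retinex-style enhancement tries to recover R by dividing out an
# estimate of I; with the true I, the reflectance comes back exactly.
recovered = [f / i for f, i in zip(observed, illumination)]
```

In practice I is unknown and must be estimated, for example by smoothing the observed image, which is where the SSR and MSR algorithms above differ.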

Image processing is very useful in various areas such as-

 Improvement of Pictorial Information:

The pictorial information of an image is improved for human perception; DIP enhances the quality of the image.

 Pictorial Information Enhancement Method:

The pictorial information of an image is enhanced for human perception by using the following image processing methods:

 Filtering is used to reduce the noise in an image.


 The contrast of an image is enhanced using intensity-transformation functions.
 A blurred image, shot from a moving platform or of a moving object, is enhanced using filters.
 Image Processing for Autonomous Machine Applications: it is useful in assembly automation, where it controls the quality of the assembled product in industry.
 Efficient Storage and Transmission: image processing is used to compress an image when the disk space required to store it is not available. If the transmission bandwidth is low, the signal is processed using image processing techniques and the required image is reconstructed at the receiver.
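The filtering method in the first bullet can be sketched as a 3×3 mean filter (plain Python, hypothetical pixel values):

```python
def mean_filter(image, x, y):
    """Replace pixel (x, y) with the average of its 3x3 neighbourhood,
    a basic noise-reduction filter; border pixels are not handled here."""
    window = [image[y + dy][x + dx]
              for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    return round(sum(window) / 9)

# A uniform region with one noisy "spike" pixel in the middle:
noisy = [
    [100, 100, 100],
    [100, 190, 100],
    [100, 100, 100],
]
smoothed_centre = mean_filter(noisy, 1, 1)   # 110
```

Averaging pulls the spike back toward its neighbours (190 becomes 110), at the cost of slightly blurring genuine edges; that trade-off is why more selective filters, such as the median filter, are often preferred.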



Image enhancement techniques improve the quality of a satellite image without knowledge about the source of degradation; if the source of degradation is known, the process is called image restoration. Both are iconical processes, viz., input and output are images. Many different, often elementary and heuristic, methods are used to improve images in some sense. Image restoration removes or minimizes some known degradations in an image. In many image processing applications, geometrical transformations facilitate processing. Examples are image restoration, where one frequently wants to model the degradation process as space-invariant, the calibration of a measurement device, or a correction to remove relative movement between object and sensor. In all cases, the first operation is to eliminate a known geometrical distortion.
Geometrically rectified imagery has to be enhanced to improve its effective visibility. Image enhancement techniques are generally applied to remote sensing data to improve the appearance of an image for human visual analysis. Apart from geometric transformations, some preliminary grey-level adjustments may be indicated to account for imperfections in the acquisition system. This can be done pixel by pixel, calibrating with the output of an image of constant brightness. Frequently, space-invariant grey-value transformations are also applied for contrast stretching, range compression, etc. The critical distribution is the relative frequency of each grey value: the grey-value histogram. Image enhancement techniques, while generally not required for automated analysis, have regained significant interest in recent years. Applications such as virtual environments or battlefield simulations require specific enhancement techniques to create 'real life' environments or to process images in near real time. The major focus of these procedures is to enhance imagery data for effective display or to record the data for subsequent visual interpretation. Enhancements are used to ease visual interpretation and understanding of imagery. The advantage of digital imagery is that it allows the digital pixel values of an image to be manipulated. Various image enhancement algorithms are applied to remotely sensed data to improve the appearance of an image for human visual analysis or, occasionally, for subsequent machine analysis. There is no single ideal or best image enhancement, because the results are ultimately evaluated by humans, who judge subjectively whether a given enhancement is useful. The purpose of image enhancement is to improve the visual interpretability of an image by increasing the apparent distinction between the features in the scene. Although radiometric corrections for
illumination, atmospheric influences, and sensor characteristics may be applied prior to distribution of the data to the user, the image may still not be optimized for visual interpretation. Remote sensing devices must cope with levels of target/background energy that are typical of all conditions likely to be encountered in routine use. With large variations in spectral response from a diverse range of targets, no generic radiometric correction can optimally account for and display the optimum brightness range and contrast for all targets. Thus, for each application and each image, a custom adjustment of the range and distribution of brightness values is generally unavoidable.
Normally, image enhancement involves techniques for increasing the visual distinction between features in a scene. The objective is to create new images from the original image data so as to increase the amount of information that can be displayed interactively on a monitor or recorded in hard copy, either in monochrome or in RGB colour. The techniques fall into three categories: contrast manipulation (gray-level thresholding, level slicing, contrast stretching); spatial feature manipulation (spatial filtering, edge enhancement, Fourier analysis); and multi-image manipulation (band ratioing, differencing, principal components, canonical components, vegetation components, intensity-hue-saturation). In raw imagery, the useful data often populate only a small portion of the available range of digital values (commonly 8 bits, or 256 levels). Contrast enhancement involves changing the original values so that more of the available range is used, thereby increasing the contrast between targets and their backgrounds. The key to contrast enhancement is the concept of an image histogram: a graphical representation of the brightness values that comprise an image. The brightness values (i.e. 0-255) are displayed along the x-axis of the graph, and the frequency of occurrence of each of these values in the image is shown on the y-axis. By manipulating the range of digital values in an image, represented graphically by its histogram, various enhancements of the data can be achieved.
There are many different techniques and methods for enhancing contrast in an image. The simplest type of enhancement is a linear contrast stretch. This involves identifying lower and upper bounds from the histogram (generally the minimum and maximum brightness values in the image) and applying a transformation to stretch this range to fill the full range. In the example, the minimum value (occupied by actual data) in the histogram is 84 and the maximum value is
153. These 70 levels occupy less than one third of the 256 available levels. A linear stretch uniformly expands this small range to cover the full range of values from 0 to 255. This enhances the contrast in the image, with light-toned areas becoming lighter and dark areas becoming darker, making visual interpretation much easier. This illustrates the increase in contrast in an image before (left) and after (right) a linear contrast stretch. A uniform distribution of the input range across the full output range may not always be an appropriate enhancement, particularly if the input range is not uniformly distributed. In this case, a histogram-equalized stretch may be better. This stretch assigns more display values (range) to the frequently occurring portions of the histogram; in this way, detail in these areas will be enhanced relative to areas of the original histogram where values occur less frequently. In other cases, it may be desirable to enhance the contrast in only a specific portion of the histogram.
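The linear contrast stretch described above can be sketched as follows. This is an illustrative Python snippet (not from the report), using the example bounds 84 and 153 and clipping values outside them:

```python
# Linear contrast stretch: map the occupied range [lo, hi] linearly onto
# the full 8-bit display range [0, 255], clipping out-of-range values.

def linear_stretch(pixels, lo=84, hi=153):
    scale = 255.0 / (hi - lo)
    return [max(0, min(255, round((p - lo) * scale))) for p in pixels]

print(linear_stretch([84, 118, 153]))  # -> [0, 126, 255]
```

The 70 occupied levels of the example are spread across all 256 display levels, which is exactly the increase in target/background contrast discussed above.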

2.1 Image Enhancement Method


The aim of the image enhancement system is to develop methods that are fast, handle noise efficiently, and perform accurate segmentation. For this purpose, the methodology below uses two stages: the first enhances the image in such a way that it improves the segmentation process, while the second carries out the actual segmentation. The enhancement and segmentation procedure is:

Step 1: Input geometrically rectified and registered imagery

Step 2: Colour conversion
Step 3: Image segmentation
Step 4: Clustering the edges
Step 5: Image enhancement technique
Stage 1: Contrast adjustment
Stage 2: Intensity correction
Stage 3: Noise removal
Step 6: Enhanced imagery



2.1.1 Color Conversion

Most remote sensing systems create arrays of numbers representing an area on the surface of the Earth. The entire array is called an image or scene, and the individual numbers are called pixels (picture elements). The value of a pixel represents a measured quantity, such as light intensity over a given range of wavelengths for a surface type (water body, wetland, forest area, etc.). However, it could also represent a higher-level product such as topography or chlorophyll concentration. Some active systems also provide the phase of the reflected radiation, so each pixel carries a complex number. Typical arrays with many pixels and multiple channels may require megabytes of storage per scene. Moreover, a satellite can collect 50 of these frames on a single pass, so the data sets can be enormous. Several colour models are established in computer graphics; the most common are the gray-scale model, the RGB (Red-Green-Blue) model, the HSI (Hue, Saturation, Intensity) model, and the CMYK (Cyan-Magenta-Yellow-Black) model. Gonzalez and Woods (2008) present a detailed explanation of their use in digital image processing for remote sensing.

2.1.2 RGB and L Color Transformation:

When red, green and blue light are combined, they form white. To reduce computational complexity, the geo-referenced data, which exists in the RGB colour model, is converted into a gray-scale image. The gray-scale values, ranging from black to white, are calculated by the equation

X = L = (0.2989 × R) + (0.5870 × G) + (0.1140 × B)

where X is the gray-scale imagery, L is the luminance, and R, G, B are the red, green and blue components. RGB is a colour space that originated in CRT (and similar) display applications, where it was convenient to describe colour as a combination of three coloured rays (red, green and blue).
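A minimal sketch of this conversion, using the report's luminance weights; Python is used here purely for illustration:

```python
# Gray-scale conversion with the standard luminance weights. The weights
# sum to ~1, so pure white (255, 255, 255) maps to ~255 and black to 0.

def rgb_to_gray(r, g, b):
    return 0.2989 * r + 0.5870 * g + 0.1140 * b

print(round(rgb_to_gray(255, 255, 255)))  # -> 255
print(round(rgb_to_gray(0, 0, 0)))        # -> 0
```

Note how green dominates the weighting, reflecting the human eye's higher sensitivity to green light.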



2.1.3 Segmentation

Satellite image segmentation is one of the most important problems in image preprocessing. It consists of constructing a symbolic representation of the imagery that divides an image into non-intersecting regions such that each region is homogeneous while the union of no two adjacent regions is homogeneous; it can be used to isolate objects of interest from the rest of the scene. In the literature, various segmentation algorithms can be found: starting from the sixties, diverse algorithms have appeared continually, depending upon the applications involved. Most remote sensing image analysis problems need a segmentation stage in order to identify objects or detect the boundaries in the imagery and convert it into regions that are homogeneous according to a given criterion (such as surface or colour), assigning labels to every pixel such that pixels with the same label share certain visual characteristics. Segmentation is still considered immature in the field of satellite image processing. The main causes of this are variations in image quality at capture time, the increasing size of images, and the difficulty of interpreting satellite images across various applications. These concerns have increased the use of computers for assisting the processing and analysis of data. Satellite images include many textured regions and differing backgrounds, and are often subject to illumination changes or varying ground-truth properties. All these factors create an urgent need in satellite image processing for a fast and efficient segmentation model that requires minimal involvement from the user. Existing solutions for segmentation of satellite images face three major drawbacks: representation degradation when supplied with large images, degradation of segmentation accuracy due to the quality of the acquired image, and segmentation speed that does not meet the standards of modern equipment. The enhancement considered here uses preprocessing and segmentation techniques for GIS and remote sensing applications. Preprocessing performs operations on the input imagery to improve its quality, and the FCM clustering algorithm improves image quality through the segmentation process. Preprocessing includes colour transformation, intensity correction, method and parameter selection, edge or boundary enhancement, and denoising; of these, boundary enhancement, pixel correction and denoising have the most impact on segmented results. The ERDAS Imagine segmentation process requires
several steps. Conversion of the input image to a specific feature space depends on the clustering technique, which uses two steps.
The primary step converts the input image into L=RGB colour value attributes using the fuzzy c-means clustering method.
The secondary step converts the image to feature space with the selected fuzzy c-means clustering method.
Fuzzy clustering is a process of assigning membership levels and then using them to assign data elements to one or more clusters. The most significant part of this segmentation method is the assignment of feature values, which is based on the simple idea that neighbouring pixels have approximately the same lightness and chroma values. In an actual image, noise corrupts the imagery data, and imagery commonly contains textured segments.
Cluster Centre Initialization

Adaptive histogram equalization maximizes contrast throughout an image by adaptively enhancing the contrast of each pixel relative to its local neighbourhood. This process produces improved contrast for all levels of contrast (small and large) in the original image. To enhance local contrast, histograms are calculated for small regional areas of pixels, producing local histograms. These local histograms are then equalized, i.e. remapped from the often narrow range of intensity values around a central pixel and its closest neighbours to the full range of intensity values available in the display. Further, a sigmoid function is used to enhance the edges.
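A minimal sketch of the tile-based idea described above, written in Python for illustration. It assumes non-overlapping tiles and omits the histogram clipping and inter-tile interpolation used by full implementations such as CLAHE:

```python
# Adaptive (local) histogram equalization, simplified: split the image into
# small tiles, compute a histogram per tile, and remap each tile through
# its own cumulative histogram onto the full display range.

def equalize_tile(tile, levels=256):
    flat = [p for row in tile for p in row]
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    # cumulative histogram -> mapping onto the full display range
    cdf, total, mapping = 0, len(flat), [0] * levels
    for k in range(levels):
        cdf += hist[k]
        mapping[k] = round((levels - 1) * cdf / total)
    return [[mapping[p] for p in row] for row in tile]

def adaptive_equalize(img, tile=2):
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for ty in range(0, h, tile):
        for tx in range(0, w, tile):
            block = [row[tx:tx + tile] for row in img[ty:ty + tile]]
            eq = equalize_tile(block)
            for dy, row in enumerate(eq):
                out[ty + dy][tx:tx + len(row)] = row
    return out

# A tile whose values sit in a narrow band gets stretched locally.
img = [[100, 101], [102, 103]]
result = adaptive_equalize(img, tile=2)
print(result)
```

The narrow local range 100-103 is remapped across the whole 0-255 range, which is the "narrow range of a central pixel and its neighbours to the full display range" behaviour described in the text.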

2.1.4 Intensity Correction

Intensity non-uniformity in satellite images arises from a number of causes during acquisition of the image data. In principle, it is due to non-uniformity of the acquisition devices and relates to artifacts caused by slow, non-anatomic intensity variations. Here, an Expectation-Maximization (EM) algorithm is employed to correct the spatial variations of intensity. The EM method makes no assumptions about the sequence type or texture intensity and can therefore be applied to all kinds of image sequences. In general, the EM algorithm consists of two steps:



(i) the E-step (Expectation step) and (ii) the M-step (Maximization step). The algorithm is similar to the K-means procedure in the sense that a set of parameters is re-computed until a desired convergence criterion is met. These two steps are repeated alternately in an iterative fashion until convergence is reached.
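The E-step/M-step loop can be sketched on a 1-D two-component Gaussian mixture. This illustrative Python example is not the report's implementation (which works on image intensities), but the structure of the iteration is the same: responsibilities in the E-step, parameter re-estimation in the M-step.

```python
import math

# EM for a two-component 1-D Gaussian mixture: alternate E and M steps
# until the means, variances, and mixing weights stabilize.

def em_two_gaussians(xs, iters=50):
    mu = [min(xs), max(xs)]   # crude initialization from the data range
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each sample
        resp = []
        for x in xs:
            p = [pi[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
                 for k in range(2)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: re-estimate parameters from the responsibilities
        for k in range(2):
            nk = sum(r[k] for r in resp)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, xs)) / nk
            var[k] = max(var[k], 1e-6)   # guard against variance collapse
            pi[k] = nk / len(xs)
    return mu

# Two well-separated clusters of "intensities": EM recovers their means.
data = [10.0, 11.0, 9.0, 10.5, 50.0, 51.0, 49.0, 50.5]
mu = em_two_gaussians(data)
print(sorted(mu))
```

The recovered means converge near the cluster averages (about 10.1 and 50.1), mirroring how EM separates intensity classes when correcting non-uniformity.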

2.1.5 Noise Removal

After intensity correction, an enhanced version of anisotropic diffusion is applied to remove speckle noise in a fast and efficient manner. Anisotropic diffusion, also called Perona-Malik diffusion, is a technique that reduces image noise without removing significant parts of the image content, typically edges, lines or other details that are important for the interpretation of the image. Anisotropic diffusion is a frequently used filtering technique for digital images. Despite its popularity, the classical anisotropic diffusion algorithm introduces blocking effects and destroys structural and spatial neighbourhood information; moreover, it is slow to reach convergence. To solve these problems, the algorithm is combined with an edge-sensitive partial differential equation in a new hybrid noise-removal method. The anisotropic filtering in the hybrid method simplifies image features to improve image segmentation, smoothing the image in homogeneous areas while preserving and enhancing edges. It reduces blocking artifacts by deleting small edges amplified by homomorphic filtering.
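A single step of classical Perona-Malik diffusion can be sketched as follows. This Python illustration uses the exponential edge-stopping function with invented parameter values (kappa, dt), and does not include the report's hybrid edge-sensitive extension:

```python
import math

# One explicit Perona-Malik diffusion step: gradients to the four
# neighbours drive diffusion, weighted by an edge-stopping function g,
# so smoothing is strong in flat areas and weak across edges.

def perona_malik_step(img, kappa=20.0, dt=0.2):
    h, w = len(img), len(img[0])
    g = lambda d: math.exp(-(d / kappa) ** 2)   # edge-stopping function
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            c = img[y][x]
            total = 0.0
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if 0 <= ny < h and 0 <= nx < w:
                    d = img[ny][nx] - c
                    total += g(abs(d)) * d
            out[y][x] = c + dt * total
    return out

# A noisy flat patch: one step pulls the outlier toward its neighbours
# while leaving genuinely flat pixels unchanged.
img = [[100.0, 100.0, 100.0],
       [100.0, 130.0, 100.0],
       [100.0, 100.0, 100.0]]
out = perona_malik_step(img)
print(out[1][1] < 130.0)  # -> True
```

With a larger intensity jump the weight g becomes tiny, so a genuine edge would diffuse far more slowly than this moderate outlier; that selectivity is what distinguishes anisotropic from ordinary Gaussian smoothing.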



CHAPTER 3

Existing Method

3.1 Adaptive Enhancement Technique

First, the RGB colour space is converted to the HSV colour space. The V-plane is extracted and its brightness information is improved; finally, the HSV image is converted back to RGB. The brightness plane of the image is enhanced using adaptive enhancement and histogram equalization techniques, followed by image fusion. The colour-space transformation, combined with multi-scale decomposition, is used for enhancing low-illumination images.

Fig 3.1: Mechanism of enhancement



Unlike the RGB colour space, which has primary colour components, HSV is closer to human perception of colour. The HSV colour space has three components: 1. Hue, 2. Saturation, 3. Value (brightness).

V = max(R, G, B) [2]
S = 1 − min(R, G, B) / V [3]

Let I be the input image; Ih, Is and Iv represent the three HSV colour planes.

Fig 3.2: Hue, Saturation, Value planes of Box image.


The illumination component of the image is extracted by convolving the image with a multi-scale Gaussian function:

Iv_g = Σ_{i=1..3} θ_i (Iv ∗ G_i(x, y)) [5]

where Iv_g is the estimated illumination component and Iv is the brightness plane of the image.

G(x, y) = λ e^(−(x² + y²)/σ²) [6]

where θ_i is the weighting coefficient (θ_i = 1/3)



and λ is a normalization factor, λ = 1/√(2πσ²).
The Gaussian function G(x, y) with three scale factors, σ = [15, 80, 250], is convolved with the image and the results are summed, extracting a finer illumination component even from unevenly illuminated images.
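The multi-scale estimate of Eq. [5]-[6] can be sketched in one dimension. This illustrative Python snippet uses the same Gaussian form e^(−x²/σ²) and equal weights 1/3, but smaller, invented scale factors suited to a short signal (the report uses σ = [15, 80, 250] on full images):

```python
import math

# Multi-scale Gaussian illumination estimate: blur the brightness signal
# at several scales and average the results with weights theta_i = 1/3.
# The kernel is renormalized at the borders so a uniform signal is
# returned unchanged.

def gaussian_blur_1d(signal, sigma):
    radius = max(1, int(3 * sigma))
    kern = [math.exp(-(i * i) / (sigma * sigma))
            for i in range(-radius, radius + 1)]
    out = []
    for i in range(len(signal)):
        acc = wsum = 0.0
        for j, k in enumerate(kern):
            idx = i + j - radius
            if 0 <= idx < len(signal):
                acc += k * signal[idx]
                wsum += k
        out.append(acc / wsum)
    return out

def illumination_estimate(v, sigmas=(2.0, 5.0, 10.0)):
    blurred = [gaussian_blur_1d(v, s) for s in sigmas]
    return [sum(b[i] for b in blurred) / len(sigmas)
            for i in range(len(v))]

flat = [50.0] * 20
print(round(illumination_estimate(flat)[10], 6))  # -> 50.0
```

Combining several scales captures both gradual and broad illumination variation, which is why the estimate works even on unevenly illuminated images.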

3.1.1 Image Enhancement:


1. Adaptive brightness enhancement:

The extracted reflection component is enhanced by modifying the distribution profile of the illumination components of the image: illumination values of over-illuminated regions are decreased, and vice versa. The brightness plane Iv is improved adaptively using the formula

I'v = Iv (255 + k) / (Iv + k) [7]

where 255 is the maximum gray level of an 8-bit image and k is an adjusting parameter. The magnitude of the enhancement decreases as k increases. To reduce over-enhancement, the formula is further modified as

I'v = Iv (255 + k) / (max(Iv, Iv_g) + k) [8]

where I'v is the V-plane after enhancement, and

k = α · mean(Is), where α ∈ [0.1, 1].

The adaptive parameter is calculated by multiplying α with the average value of the saturation plane. As k increases, the enhancement ability of the technique decreases.
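Equation [7] can be checked numerically; a minimal Python sketch:

```python
# Adaptive brightness: the gain (255 + k) / (Iv + k) amplifies dark
# pixels strongly while leaving values near 255 almost unchanged;
# a larger k weakens the enhancement.

def enhance(iv, k):
    return iv * (255 + k) / (iv + k)

print(enhance(50, 50))           # -> 152.5 (dark pixel boosted)
print(round(enhance(255, 50)))   # -> 255   (bright pixel unchanged)
```

A dark pixel at 50 is pushed up to 152.5 with k = 50, while a fully bright pixel stays at 255, illustrating the adaptive behaviour stated in the text.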

2. Histogram Equalization:

Histogram equalization is one of the basic image processing techniques used to improve the contrast of an image. It is a contrast enhancement technique accomplished by adjusting the image histogram: HE re-assigns the intensity values of the pixels in the input image such that the output image has a uniform distribution of intensities.

I'v = histequ(Iv) · max(Iv_g, Iv) [9]

where histequ denotes histogram equalization.



Image Fusion:

In image processing applications, image fusion is the method used to extract the key information from two or more sub-images. The fused image is more informative than the source images and provides all the necessary information.

The image fusion is performed as follows:

F = Σ_{i=1..N} w_i S_i [10]

where F is the fused image, w_i are the weighting coefficients, S_i are the source images, and N is the total number of images to be fused.

The sub-images to be fused contain largely similar information, so the weighting coefficients are obtained using PCA (principal component analysis). In PCA, the feature vectors of the source images are obtained, and the feature values are used to form the weighting coefficients.

Fig 3.3: Fusion methodology



Consider two source images S1, S2.

Fig 3.4: The two source images S1, S2 for two alpha values [0.01, 1] due to adaptive enhancement technique.

1. Generate the matrix S using both source images:
S = [S1, S2]
2. Calculate the covariance matrix of S:
C = [σ11² σ12²; σ21² σ22²] [11]
3. Calculate the eigenvalues (λ1, λ2) and feature vectors ε1, ε2 of the covariance matrix C.
4. Find the weighting coefficients w1 and w2 using the feature values:
w1 = ε1 / (ε1 + ε2) and w2 = ε2 / (ε1 + ε2) [12, 13]
5. Finally, calculate the fused image:
F = w1·S1 + w2·S2 [14]
The fused V-plane replaces the original V-plane, and the image is converted back into the RGB colour space.
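The weighting steps above can be sketched as follows. This illustrative Python snippet (the report's tool is MATLAB) solves the 2x2 eigen-problem in closed form on invented 1-D "images":

```python
# PCA-based fusion weights: form the 2x2 covariance of the two sources,
# take the eigenvector of the largest eigenvalue, and normalize its
# components so they sum to 1, giving w1 and w2.

def pca_fusion_weights(s1, s2):
    n = len(s1)
    m1, m2 = sum(s1) / n, sum(s2) / n
    c11 = sum((a - m1) ** 2 for a in s1) / n
    c22 = sum((b - m2) ** 2 for b in s2) / n
    c12 = sum((a - m1) * (b - m2) for a, b in zip(s1, s2)) / n
    # largest eigenvalue of [[c11, c12], [c12, c22]] (closed form, 2x2)
    tr, det = c11 + c22, c11 * c22 - c12 * c12
    lam = tr / 2 + ((tr / 2) ** 2 - det) ** 0.5
    # corresponding eigenvector is (c12, lam - c11), up to scale
    if abs(c12) > 1e-12:
        e1, e2 = c12, lam - c11
    elif c11 >= c22:
        e1, e2 = 1.0, 0.0
    else:
        e1, e2 = 0.0, 1.0
    w1, w2 = abs(e1), abs(e2)
    return w1 / (w1 + w2), w2 / (w1 + w2)

s1 = [10.0, 20.0, 30.0, 40.0]
s2 = [11.0, 19.0, 31.0, 39.0]   # nearly identical source, as in fusion
w1, w2 = pca_fusion_weights(s1, s2)
fused = [w1 * a + w2 * b for a, b in zip(s1, s2)]
print(round(w1 + w2, 6))  # -> 1.0
```

Because the sources carry almost the same information, the weights come out near 0.5 each; a source with more variance along the principal direction would receive the larger weight.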



CHAPTER 4

Proposed Method

4.1 Histogram Equalization Technique

Fig 4.1: Histogram Modeling, Histogram Equalization

Histogram modeling techniques such as histogram equalization provide a sophisticated method for modifying the dynamic range and contrast of an image by altering the image so that its intensity histogram has a desired shape. Unlike contrast stretching, histogram modeling operators may employ non-linear and non-monotonic transfer functions to map between pixel intensity values in the input and output images.

Histogram equalization employs a monotonic, non-linear mapping which re-assigns the intensity values of the pixels in the input image such that the output image has a uniform distribution of intensities. This technique is used in image comparison processes and in the correction of non-linear effects introduced by, say, a digitizer.



Histogram modeling is usually introduced using continuous, rather than discrete, process functions. Therefore, we suppose that the images of interest contain continuous intensity levels (in the interval [0, 1]) and that the transformation function f, which maps an input image onto an output image, is continuous within this interval. Further, it will be assumed that the transfer law, which may also be written in terms of intensity density levels as D_B = f(D_A), is single-valued and monotonically increasing, as is the case in histogram equalization, so that it is possible to define the inverse law D_A = f⁻¹(D_B). An example of such a transfer function is shown in Figure 4.2.

Fig 4.2: A histogram transformation function.

All pixels in the input image with densities in the region D_A to D_A + dD_A will have their pixel values re-assigned such that they assume an output pixel density value in the range from D_B to D_B + dD_B. The surface areas h_A(D_A)·dD_A and h_B(D_B)·dD_B will therefore be equal, yielding:

h_B(D_B) = h_A(D_A) / (dD_B/dD_A)

where D_B = f(D_A).

This result can be written in the language of probability theory if the histogram h is regarded as a continuous probability density function p describing the distribution of the (assumed random) intensity levels:

p_B(D_B) = p_A(D_A) / (dD_B/dD_A)

In the case of histogram equalization, the output probability densities should all be an equal fraction of the maximum number of intensity levels in the input image, D_M (where the minimum level considered is 0). The transfer function necessary to achieve this result is simply:

dD_B/dD_A = D_M · p_A(D_A)

Therefore,

f(D_A) = D_M · ∫₀^{D_A} p_A(u) du = D_M · F_A(D_A)

where F_A(D_A) is simply the cumulative probability distribution (i.e. cumulative histogram) of the original image. Thus, an image which is transformed using its cumulative histogram yields an output histogram which is flat.



A digital implementation of histogram equalization is usually performed by defining a transfer function of the form

f(D_A) = max(0, round(D_M · n_k / N) − 1)

where N is the number of image pixels, D_M is the maximum intensity level, and n_k is the number of pixels at intensity level k or less. In the digital implementation, the output image will not necessarily be fully equalized and there may be 'holes' in the histogram (i.e. unused intensity levels). These effects are likely to decrease as the number of pixels and intensity quantization levels in the input image are increased.
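This digital transfer function can be sketched directly; an illustrative Python snippet, taking N as the total pixel count and D_M = 255 as stated above:

```python
# Digital histogram equalization: build the cumulative histogram n_k and
# apply f(k) = max(0, round(D_M * n_k / N) - 1) to every pixel.

def equalize(pixels, dm=255, levels=256):
    n = len(pixels)
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    nk, mapping = 0, [0] * levels
    for k in range(levels):
        nk += hist[k]
        mapping[k] = max(0, round(dm * nk / n) - 1)
    return [mapping[p] for p in pixels]

# A narrow-range input is spread over the full dynamic range; note the
# "holes" (unused output levels) typical of the digital implementation.
print(equalize([100, 100, 101, 102]))  # -> [127, 127, 190, 254]
```

The three input levels 100-102 land at widely separated output levels, and many output levels between them remain unused, exactly the behaviour described in the paragraph above.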

Illustrative Example:

To illustrate the utility of histogram equalization, consider

Fig 4.3: Surface of the moon.

which shows an 8-bit gray-scale image of the surface of the moon. The histogram

Fig 4.4: Histogram of the surface-of-the-moon image.

confirms what we can see by visual inspection: the image has poor dynamic range. (Note that this histogram can be viewed as a description of pixel probability densities simply by scaling the vertical axis by the total number of image pixels and normalizing the horizontal axis by the number of intensity density levels (i.e. 256); the shape of the distribution is the same in either case.)

In order to improve the contrast of this image without affecting the structure (i.e. geometry) of the information it contains, we can apply histogram equalization. The resulting image is

Fig 4.5: Histogram-equalized surface of the moon.

and its histogram is shown in

Fig 4.6: Histogram of the equalized surface-of-the-moon image.

Note that the histogram is not flat, but that the dynamic range and contrast have been enhanced. Note also that when equalizing images with narrow histograms and relatively few gray levels, increasing the dynamic range has the adverse effect of increasing visual graininess. Compare this result with that produced by the linear contrast stretching operator.



To investigate the transformation defined by the histogram equalization operator further, consider the image below.

Fig 4.7: Image of the Scott Monument

Although the contrast on the building is acceptable, the sky region is represented almost entirely by light pixels. This causes most histogram pixels

Fig 4.8: Histogram representation of the Scott Monument

to be pushed into a narrow peak in the upper gray-level region. The histogram equalization operator defines a mapping based on the cumulative histogram, which yields the resulting image.

While histogram equalization has improved the contrast of the sky regions in the image, the image now looks unnatural because there is very little variety in the middle gray-level range. This occurs because the transfer function is based on the shallow slope of the cumulative histogram in the middle gray-level regions (i.e. intensity density levels 100 - 230), causing pixels from this area of the original image to be mapped to similar gray levels in the output image.

We can improve on this if we define a mapping based on a sub-section of the image which contains a better distribution of intensity densities from the low and middle gray-level ranges. If we crop the image so as to isolate a region which contains more building than sky, we can then define a histogram equalization mapping for the whole image based on the cumulative histogram of this smaller area. Since the cropped region contains a more even distribution of dark and light pixels, the slope of the transfer function is steeper and smoother, and the contrast of the resulting image is more natural. This concept of defining mappings based upon particular sub-sections of the image is taken up by another class of operators which perform Local Enhancements.

Common Variants

Histogram Specification

Histogram equalization is limited in that it can produce only one result: an image with a uniform intensity distribution. Sometimes it is desirable to control the shape of the output histogram in order to highlight certain intensity levels in an image. This can be accomplished by the histogram specification operator, which maps a given intensity distribution into a desired distribution using a histogram-equalized image as an intermediate stage.

The first step in histogram specification is to specify the desired output density function and find a transfer function g that equalizes it. If g is single-valued (which is true when there are no unfilled levels in the specified histogram and no errors arise in rounding to the nearest intensity level), then its inverse defines a mapping from the equalized levels of the original image, which are obtained with the equalizing transfer function f. The two transformations can be combined so that the image need not be histogram-equalized explicitly: the original image is mapped through f and then through the inverse of g.

Local Enhancements

The histogram processing methods described above are global, in the sense that they apply a transformation function based on the intensity-level distribution of an entire image. Although this approach can enhance the overall contrast and dynamic range of an image (thereby making certain details more visible), there are cases in which enhancement of details over small areas (i.e. areas whose pixel contribution to the total number of image pixels has a negligible influence on the global transform) is desired. The solution in these cases is to derive a transformation based on the intensity distribution in the local neighbourhood of every pixel in the image.

The histogram processes described above can be adapted for local enhancement. The procedure involves defining a neighbourhood around each pixel and, using the histogram characteristics of this neighbourhood, deriving a transfer function which maps that pixel to an output intensity level. This is done for every pixel in the image. (Since moving across rows or down columns only adds one new pixel to the local histogram, updating the histogram from the previous position with new data at each step is possible.) Local enhancement may also apply transforms based on pixel attributes other than the histogram; intensity mean (to control variance) and variance (to control contrast) are common.



CHAPTER 5

Tool Used

MATLAB R2018b

The name MATLAB stands for matrix laboratory. MATLAB was originally written to
provide easy access to matrix software developed by the LINPACK and EISPACK projects,
which together represent the state-of-the-art in software for matrix computation.

MATLAB is used for a range of applications, including deep learning and machine learning, signal processing and communications, image and video processing, control systems, test and measurement, computational finance, and computational biology. It allows the plotting of functions and data, implementation of algorithms, creation of user interfaces, and interfacing with programs written in other languages.

MATLAB is intended primarily for numerical computing; an optional toolbox using the MuPAD symbolic engine provides access to symbolic computing capabilities. An additional package, Simulink, adds graphical multi-domain simulation and model-based design for dynamic and embedded systems.

As of 2020, MATLAB has more than 4 million users worldwide. MATLAB users come from
various backgrounds of engineering, science, and economics.

Syntax

The MATLAB application is built around the MATLAB programming language. Common usage involves using the "Command Window" as an interactive mathematical shell or executing text files containing MATLAB code.



Examples:

>> x = 17
x =
    17

>> x = 'hat'
x =
hat

>> x = [3*4, pi/2]
x =
   12.0000    1.5708

>> y = 3*sin(x)
y =
   -1.6097    3.0000

Vectors and matrices

A simple array is defined using the colon syntax: initial:increment:terminator.

For instance:

>> array = 1:2:9
array =
     1     3     5     7     9

This defines a variable named array (or assigns a new value to an existing variable with that name) which is an array consisting of the values 1, 3, 5, 7, and 9. That is, the array starts at 1 (the initial value), increments from the previous value by 2 (the increment value), and stops once it reaches, or would exceed, 9 (the terminator value). The increment value can be left out of this syntax (along with one of the colons) to use a default value of 1.

>> ari = 1:5
ari =
     1     2     3     4     5

assigns to the variable named ari an array with the values 1, 2, 3, 4, and 5, since the default

value of 1 is used as the increment. Indexing is one-based, which is the usual convention
for matrices in mathematics, unlike zero-based indexing commonly used in other programming
languages such as C, C++, and Java.


Matrices can be defined by separating the elements of a row with blank space or comma and
using a semicolon to terminate each row. The list of elements should be surrounded by square
brackets [] . Parentheses () are used to access elements and subarrays (they are also used to

denote a function argument list).

>> A = [16 3 2 13; 5 10 11 8; 9 6 7 12; 4 15 14 1]
A =
    16     3     2    13
     5    10    11     8
     9     6     7    12
     4    15    14     1
>> A(2,3)
ans =
    11

Sets of indices can be specified by expressions such as 2:4 , which evaluates to [2, 3, 4] .

For example, a submatrix taken from rows 2 through 4 and columns 3 through 4 can be written
as:

>> A(2:4,3:4)
ans =
    11     8
     7    12
    14     1

A square identity matrix of size n can be generated using the function eye , and matrices of any

size with zeros or ones can be generated with the functions zeros and ones , respectively

>> eye(3,3)
ans =
     1     0     0
     0     1     0
     0     0     1
>> zeros(2,3)
ans =
     0     0     0
     0     0     0
>> ones(2,3)
ans =
     1     1     1
     1     1     1

Structures
MATLAB supports structure data types. Since all variables in MATLAB are arrays, a
more accurate name is "structure array", where each element of the array has the same field
names. In addition, MATLAB supports dynamic field names (field look-ups by name, field
manipulations, etc.).

Functions

When creating a MATLAB function, the name of the file should match the name of the
first function in the file. Valid function names begin with an alphabetic character, and can have
letters, numbers, or underscores. Variables and functions are case sensitive.

Function handles
MATLAB supports elements of lambda calculus by introducing function handles, or
function references, which are implemented either in .m files or anonymous/nested functions.

Classes and object-oriented programming

MATLAB supports object-oriented programming including classes, inheritance, virtual
dispatch, packages, pass-by-value semantics, and pass-by-reference semantics. However, the
syntax and calling conventions are significantly different from other languages. MATLAB has
value classes and reference classes, depending on whether the class has handle as a super-class
(for reference classes) or not (for value classes).


Graphics and graphical user interface programming

Fig 5.1: GUI Representation example

MATLAB has tightly integrated graph-plotting features. For example, the
function plot can be used to make a graph from two vectors x and y. The code:

x = 0:pi/100:2*pi;
y = sin(x);
plot(x,y)

produces a plot of the sine function over the interval from 0 to 2*pi.

MATLAB supports developing graphical user interface (GUI) applications. UIs can be
generated either programmatically or using visual design environments such as GUIDE and App
Designer.

Interfacing with other languages


MATLAB can call functions and subroutines written in the programming
languages C or Fortran. A wrapper function is created allowing MATLAB data types to be
passed and returned. MEX files (MATLAB executable) are the dynamically loadable object files
created by compiling such functions. Since 2014, increasing two-way interfacing
with Python has been added.


Libraries written in Perl, Java, ActiveX or .NET can be directly called from
MATLAB, and many MATLAB libraries (for example XML or SQL support) are implemented
as wrappers around Java or ActiveX libraries. Calling MATLAB from Java is more complicated,
but can be done with a MATLAB toolbox which is sold separately by MathWorks, or using an
undocumented mechanism called JMI (Java-to-MATLAB Interface), (which should not be
confused with the unrelated Java Metadata Interface that is also called JMI). Official MATLAB
API for Java was added in 2016.

As alternatives to the MuPAD-based Symbolic Math Toolbox available from MathWorks,
MATLAB can be connected to Maple or Mathematica. Libraries also exist to import and
export MathML.

MATLAB supports three-dimensional graphics as well:

[X,Y] = meshgrid(-10:0.25:10,-10:0.25:10);
f = sinc(sqrt((X/pi).^2+(Y/pi).^2));
mesh(X,Y,f);
axis([-10 10 -10 10 -0.3 1])
xlabel('{\bfx}')
ylabel('{\bfy}')
zlabel('{\bfsinc} ({\bfR})')
hidden off

Fig 5.2: This code produces a wireframe 3D plot of the two-dimensional unnormalized sinc function


CHAPTER 6

RESULTS

In this project, image enhancement for low-illumination images is performed using adaptive
enhancement and histogram equalization techniques. Six different low-illuminated images are
enhanced using these two methods. The methods are tested using MATLAB R2018b.
The output enhanced images are clear and bright. The adaptive enhancement is done
using the following algorithm:

6.1 Algorithm 1:

1. Read an image (I).
2. Convert RGB space to HSV space.
3. Extract the illumination component, Iv_g from Iv using Gaussian distribution [5].
4. Obtain two enhanced images Iv1 and Iv2 with adaptive mechanism using [8] taking α = 0.1
and α=1.
5. Compute the covariance matrix (C) of Iv1 and Iv2 using [11].
6. Calculate the eigenvalues and eigenvectors of C.
7. Find weighting factors w1 and w2 using [12, 13].
8. Apply the fusion formula to Iv1 and Iv2 using [14].
9. Pick up this fusion value plane, combine with Ih and Is and convert it into RGB space.
10. Display the enhanced image (J).
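The covariance-based fusion in steps 5-8 can be sketched in Python/NumPy (an illustrative reconstruction, not the project's MATLAB code; the function name fuse_v_planes and the choice of normalizing the principal eigenvector into the weights w1 and w2 are assumptions):

```python
import numpy as np

def fuse_v_planes(Iv1, Iv2):
    """Fuse two enhanced V-planes with covariance-based weights (steps 5-8).

    Iv1, Iv2: 2-D float arrays (same shape). The eigenvector of the 2x2
    covariance matrix belonging to the largest eigenvalue is normalized
    to obtain the weighting factors w1 and w2.
    """
    samples = np.stack([Iv1.ravel(), Iv2.ravel()])   # 2 x N observation matrix
    C = np.cov(samples)                              # step 5: 2 x 2 covariance
    eigvals, eigvecs = np.linalg.eigh(C)             # step 6: symmetric eigendecomposition
    v = np.abs(eigvecs[:, np.argmax(eigvals)])       # principal eigenvector
    w1, w2 = v / v.sum()                             # step 7: weights, w1 + w2 = 1
    return w1 * Iv1 + w2 * Iv2                       # step 8: weighted fusion
```

Because the weights are non-negative and sum to one, the fused plane always stays within the range spanned elementwise by Iv1 and Iv2.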


The six experimental images for the adaptive enhancement technique are 'Box', 'Hill', 'Fort',
'Elephant', 'City', and 'House'. The low-illuminated and enhanced images produced by
Algorithm 1 are shown below:

Fig 6.1: Box adaptive enhancement technique

The figure shows the reference image given as input to the algorithm and the output of the
adaptive enhancement technique; the output is very clear and bright.

Fig 6.2: Hill adaptive enhancement technique

The figure shows the reference image given as input to the algorithm and the output of the
adaptive enhancement technique; the output is very clear and bright.


Fig 6.3: Fort adaptive enhancement technique

The figure shows the reference image given as input to the algorithm and the output of the
adaptive enhancement technique; the output is very clear and bright.

Fig 6.4: Elephant adaptive enhancement technique

The figure shows the reference image given as input to the algorithm and the output of the
adaptive enhancement technique; the output is very clear and bright.


Fig 6.5: City adaptive enhancement technique

The figure shows the reference image given as input to the algorithm and the output of the
adaptive enhancement technique; the output is very clear and bright.

Fig 6.6: House adaptive enhancement technique

The figure shows the reference image given as input to the algorithm and the output of the
adaptive enhancement technique; the output is very clear and bright.


The enhancement of low-illuminated images using histogram equalization is done by the
following algorithm:

6.2 Algorithm 2:

1. Read an image (I).
2. Convert RGB space to HSV space.
3. Extract the illumination component, Iv_g from Iv using Gaussian distribution [5].
4. Obtain two enhanced images Iv1 and Iv2 with histogram equalization using [9].
5. Compute the covariance matrix (C) of Iv1 and Iv2 using [11].
6. Calculate the eigenvalues and eigenvectors of C.
7. Find weighting factors w1 and w2 using [12, 13].
8. Apply the fusion formula to Iv1 and Iv2 using [14].
9. Pick up this fusion value plane, combine with Ih and Is and convert it into RGB space.
10. Display the enhanced image (J).
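Step 4 of Algorithm 2 can be sketched as follows (an illustrative Python/NumPy reconstruction, not the project's MATLAB code taken from [9]; the function name and the 256-bin quantization are assumptions):

```python
import numpy as np

def equalize_v_plane(Iv):
    """Histogram-equalize a V-plane given as floats in [0, 1] (step 4).

    Quantizes to 256 gray levels, builds the cumulative distribution
    function (CDF), and maps each pixel through the CDF -- the standard
    histogram-equalization transfer function.
    """
    levels = (np.clip(Iv, 0.0, 1.0) * 255).astype(np.uint8)  # 256 levels
    hist = np.bincount(levels.ravel(), minlength=256)        # gray-level counts
    cdf = hist.cumsum() / levels.size                        # monotone map into (0, 1]
    return cdf[levels]                                       # equalized V-plane
```

Because the mapping is the CDF, frequently occurring gray levels are spread apart, which is what raises the contrast of the V-plane.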

The same six images (Box, Hill, Fort, Elephant, City, and House) are enhanced using
Algorithm 2. The low-illuminated and enhanced images are shown below.

Fig 6.7: Box histogram equalization technique


The figure shows the reference image given as input to Algorithm 2 and the output of the
histogram equalization technique; the output is not as clear and bright as that of Algorithm 1.

Fig 6.8: Hill histogram equalization technique

The figure shows the reference image given as input to Algorithm 2 and the output of the
histogram equalization technique; the output is not as clear and bright as that of Algorithm 1.

Fig 6.9: Fort histogram equalization technique


The figure shows the reference image given as input to Algorithm 2 and the output of the
histogram equalization technique; the output is not as clear and bright as that of Algorithm 1.

Fig 6.10: Elephant histogram equalization technique

The figure shows the reference image given as input to Algorithm 2 and the output of the
histogram equalization technique; the output is not as clear and bright as that of Algorithm 1.

Fig 6.11: City histogram equalization technique


The figure shows the reference image given as input to Algorithm 2 and the output of the
histogram equalization technique; the output is not as clear and bright as that of Algorithm 1.

Fig 6.12: House histogram equalization technique

The figure shows the reference image given as input to Algorithm 2 and the output of the
histogram equalization technique; the output is not as clear and bright as that of Algorithm 1.


6.3 Quantitative analysis:
Entropy:
Entropy is a statistical measure of randomness, used to characterize the texture of the
input image. It measures the amount of information available in the image. Let A be the source
set of symbols {ai}, and let P(ai) be the probability of occurrence of the symbol ai. The entropy is

E = −Σ (i = 1 to N) P(ai) log2 P(ai)

where N is the total number of symbols in the source set A.
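The entropy can be computed directly from the gray-level histogram; a minimal Python/NumPy sketch for an 8-bit grayscale image (the function name image_entropy is illustrative):

```python
import numpy as np

def image_entropy(img):
    """Entropy E = -sum_i P(a_i) * log2(P(a_i)) of an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)  # gray-level counts
    p = hist / hist.sum()                           # probabilities P(a_i)
    p = p[p > 0]                                    # skip empty bins: 0*log2(0) -> 0
    return float(-np.sum(p * np.log2(p)))
```

A constant image has entropy 0, while an image that uses all 256 levels equally often reaches the maximum of 8 bits.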

RMSE and PSNR:


The root mean-square error (RMSE) and the peak signal-to-noise ratio (PSNR) are used
to compare the quality of the enhanced image. The RMSE is the root mean square error between
the enhanced and the original image, whereas the PSNR is a measure of the peak error. The
lower the value of MSE, the lower the error.
MSE and RMSE are calculated as follows:

MSE = (1/MN) Σ (i = 1 to M) Σ (j = 1 to N) [x̂(i,j) − x(i,j)]²,  RMSE = √MSE

where M and N are the numbers of rows and columns of the image (MSE can be calculated for
images of any dimension), and x̂(i,j) and x(i,j) are the two images to be compared.
PSNR is calculated as follows:

PSNR = 10 log10 (peak² / MSE)

where 'peak' is either specified by the user or taken from the range of the image, and MSE is the
mean square error between the enhanced and input images. PSNR is expressed in dB.
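The two metrics can be sketched together in Python/NumPy (an illustrative version; MATLAB's Image Processing Toolbox offers comparable built-ins):

```python
import numpy as np

def rmse_psnr(reference, enhanced, peak=255.0):
    """RMSE and PSNR (in dB) between two equal-sized images."""
    diff = np.asarray(reference, dtype=np.float64) - np.asarray(enhanced, dtype=np.float64)
    mse = np.mean(diff ** 2)                        # (1/MN) * sum of squared errors
    rmse = float(np.sqrt(mse))                      # RMSE = sqrt(MSE)
    psnr = float(10.0 * np.log10(peak ** 2 / mse))  # diverges as mse -> 0
    return rmse, psnr
```

For 8-bit images, peak = 255; a lower MSE yields a higher PSNR, which is the relationship seen in the tables below.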


The entropy, PSNR, and RMSE are calculated for all six images and shown below in
Tables 1 and 2.

Algorithm 1

Image      Unprocessed Entropy   Enhanced Entropy   PSNR (dB)   RMSE
Box        6.00                  7.38               18.07       31.82
Hill       6.41                  6.96               13.31       55.06
Fort       6.43                  7.50               10.97       72.15
Elephant   5.61                  7.44               18.51       30.27
City       5.65                  7.16               17.04       35.87
House      6.10                  6.95               14.23       49.57

Table-1: Entropy, PSNR, RMSE values for the six enhanced images due to the adaptive enhancement
technique (Algorithm 1).

Algorithm 2

Image      Unprocessed Entropy   Enhanced Entropy   PSNR (dB)   RMSE
Box        6.00                  6.96               17.95       32.28
Hill       6.41                  6.74               13.26       55.43
Fort       6.43                  6.83               10.92       72.52
Elephant   5.61                  6.94               18.45       30.47
City       5.65                  6.76               16.98       36.12
House      6.10                  6.58               14.18       49.82

Table-2: Entropy, PSNR, RMSE values for the six enhanced images due to the histogram equalization
technique (Algorithm 2).


CHAPTER 7

CONCLUSIONS AND FUTURE SCOPE

In this project, the low-illuminated images are enhanced based on two enhancement
techniques:

1. Adaptive Enhancement,

2. Histogram Equalization.

The correction of brightness is done in the HSV colour space. The V-plane is enhanced in
both techniques by modifying the distribution profile of the reflection component. In the
adaptive enhancement technique the parameters are adaptively adjusted to enhance the image
brightness. In the histogram equalisation method, the contrast of the V-plane is enhanced. The
enhancement methods are followed by a fusion strategy to further improve the information
content of the image. The resulting enhanced images from both algorithms are displayed. The
processed images are clearer and brighter. Quantitative analysis is also performed on the
enhanced images: the entropy, PSNR, and MSE of the images are calculated and tabulated. The
entropy of the images is improved, the PSNR is increased, and the error is reduced. Comparing
the two methods, the adaptive enhancement technique provides finer results in all aspects than
the histogram equalization technique. In future, a complete framework can be developed which
checks the image and automatically chooses the method, enhancing the image irrespective of the
input type. Such a framework can reduce the manual work and computation time even further.
The effect of contrast enhancement on segmentation can be studied for various image types. The
proposed algorithm can be extended to colour and 3-D image histograms, and this project can be
further extended to video processing and 3-D image processing.


References

[1] Rafael C. Gonzalez, Richard E. Woods, "Digital Image Processing", 4th Edition.
[2] W. Wang, Z. Chen, X. Yuan, et al., "Adaptive image enhancement method for correcting
low-illumination images", Information Sciences 496 (2019).
[3] T. Huynh-The, B. Le, S. Lee, et al., "Using weighted dynamic range for histogram
equalization to improve the image contrast", EURASIP J. Image Video Process. 1 (1) (2014) 44.
[4] S. Huang, F. Cheng, Y. Chiu, "Efficient contrast enhancement using adaptive gamma
correction with weighting distribution", IEEE Trans. Image Process. 22 (3) (2013) 1032–1041.
[5] D. Jobson, Z. Rahman, G. Woodell, "Properties and performance of a center/surround
Retinex", IEEE Trans. Image Process. 6 (3) (1997) 451–462.
[6] E. Land, J. McCann, "Lightness and retinex theory", J. Opt. Soc. Am. 61 (1) (1971) 1–11.
[7] D. Jobson, Z. Rahman, G. Woodell, "A multiscale Retinex for bridging the gap between
color images and the human observation of scenes", IEEE Trans. Image Process. 6 (7) (1997)
965–976.
[8] Z. Rahman, D. Jobson, G. Woodell, "Retinex processing for automatic image enhancement",
J. Electron. Imaging 13 (1) (2004) 100–110.
[9] A. Petro, C. Sbert, J. Morel, "Multiscale Retinex", Image Processing On Line 4 (2014) 71–88.
[10] X. Fu, D. Zeng, Y. Huang, et al., "A weighted variational model for simultaneous
reflectance and illumination estimation", in: IEEE Conference on Computer Vision and Pattern
Recognition, 2016, pp. 2782–2790.
[11] M. Gharbi, J. Chen, J. Barron, et al., "Deep bilateral learning for real-time image
enhancement", ACM Trans. Graph. 36 (4) (2017) 118.
[12] Pinki, Rajesh Mehra, et al., "Estimation of the Image Quality under Different Distortions",
International Journal of Engineering and Computer Science, ISSN: 2319-7242, Volume 5, Issue
7, July 2016.
