Digital image processing systems are used in many applications such as remote sensing,
medicine, transmission and encoding, and machine vision. Image enhancement is the
operation of improving the appearance or properties of an image for further image
analysis. Image acquisition in adverse conditions, such as night-time or cloudy or smoky weather,
results in many defects because of the weak reflection of light from objects. Acquiring clear
images in these unfavorable situations has become a challenge.
1.1 IMAGE:
An image, such as a photograph or screen display, or a three-dimensional representation, can be
captured by optical devices such as cameras, mirrors, lenses, telescopes, and microscopes, as well
as by natural objects and phenomena such as the human eye or water surfaces.
The word image is also used for any 2-D representation such as a graph, map, pie chart, or
painting. In this sense, images can be produced manually, by carving, drawing, or painting;
automatically, by printing or computer graphics; or by a combination of
methods, as in a pseudo-photograph.
A digital image is a rectangular grid of pixels, with a specific height and a specific width in
pixels. Each pixel is a square of fixed size on a given display; different
monitors use different pixel sizes. The pixels that constitute an image are arranged in a
grid of columns and rows, and each pixel stores numbers representing the magnitude of its
brightness and color.
Image file size is expressed as a number of bytes and increases with the number of
pixels composing an image and with the color depth of the pixels. The greater the number of rows
and columns, the greater the image resolution, and the larger the file. Also, each pixel of an
image increases in size when its color depth increases: an 8-bit pixel (1 byte) stores 256 colors, while a
24-bit pixel (3 bytes) stores 16 million colors; the latter is known as true color.
Image compression uses algorithms to decrease the size of a file. High-resolution cameras
produce large image files, ranging from hundreds of kilobytes to megabytes, depending on the camera's
resolution and the image-storage format. High-resolution digital cameras record images of 12
megapixels (1 MP = 1,000,000 pixels) or more in true color. Consider an
image recorded by a 12 MP camera: since each pixel uses 3 bytes to record true color, the
uncompressed image would occupy 36,000,000 bytes of memory, a great amount of digital
storage for one image, given that cameras must record and store many images to be practical.
Faced with large file sizes, both within the camera and on a storage disc, image file formats were
developed to store such large images.
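As a minimal MATLAB sketch of the arithmetic above (the values are illustrative, taken from the 12 MP example in the text):

% Uncompressed size of a 12-megapixel true-color image (illustrative values)
pixels        = 12e6;                     % 12 MP sensor
bytesPerPixel = 3;                        % 24-bit true color = 3 bytes per pixel
rawBytes      = pixels * bytesPerPixel;   % 36,000,000 bytes
fprintf('Uncompressed size: %.0f bytes (%.1f MB)\n', rawBytes, rawBytes/1e6);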
Image file formats are standardized means of organizing and storing images. This section covers
digital image formats used to store photographic and other images. Image files are
composed of either pixel or vector (geometric) data, the latter being rasterized to pixels when
displayed (with few exceptions) on a raster display. Including proprietary types, there are hundreds
of image file types. The PNG, JPEG, and GIF formats are most often used to display images on
the Internet.
In addition to straight image formats, metafile formats are portable formats that can
include both raster and vector information. The metafile format is an intermediate format; most
Windows applications open metafiles and then save them in their own native format.
JPEG/JFIF:
JPEG (Joint Photographic Experts Group) is a lossy compression method; JPEG-compressed
images are usually stored in the JFIF (JPEG File Interchange Format) file format, with the JPG
or JPEG filename extension. Nearly every digital camera can save images in this format.
EXIF:
The EXIF (Exchangeable Image File Format) standard is similar to the JFIF
format with TIFF extensions. It is incorporated into the JPEG-writing software used in most
cameras. Its purpose is to record and standardize the exchange of images with image
metadata between digital cameras and editing and viewing software. The metadata are recorded
for individual images and include such things as camera settings, time and date, shutter speed,
exposure, image size, compression, name of camera, color information, etc. When images are
viewed or edited by image-editing software, all of this image information can be displayed.
TIFF:
The TIFF (Tagged Image File Format) format is a flexible format that normally saves 8
bits or 16 bits per color (red, green, blue), for 24-bit and 48-bit totals, respectively, generally
using either the TIFF or TIF filename extension. TIFF files can be lossy or lossless; some offer
relatively good lossless compression for bi-level (black-and-white) images. Some digital cameras
can save in TIFF format, using the LZW compression algorithm for lossless storage. The TIFF image
format is not widely supported by web browsers, but TIFF remains widely accepted as a photograph
file standard in the printing business. TIFF can handle device-specific color spaces, such as the
CMYK defined by a particular set of printing-press inks.
PNG:
The PNG (Portable Network Graphics) file format was created as the free, open-source
successor to the GIF. The PNG file format supports true color (16 million colors), while the GIF
supports only 256 colors. The PNG file excels when the image has large, uniformly colored areas.
The lossless PNG format is best suited for editing pictures, and lossy formats, like JPG, are
best for the final distribution of photographic images, because JPG files are smaller than PNG
files. PNG is an extensible file format for the lossless, portable, well-compressed storage of raster
images. PNG provides a patent-free replacement for GIF and can also replace many common
uses of TIFF. Indexed-color, grayscale, and true-color images are supported, plus an optional
alpha channel.
GIF:
GIF (Graphics Interchange Format) is limited to an 8-bit palette, or 256 colors. This
makes the GIF format suitable for storing graphics with relatively few colors such as simple
diagrams, shapes, logos and cartoon style images. The GIF format supports animation and is still
widely used to provide image animation effects. It also uses a lossless compression that is more
effective when large areas have a single color, and ineffective for detailed images or dithered
images.
BMP:
The BMP file format (Windows bitmap) handles graphics files within the Microsoft
Windows OS. Typically, BMP files are uncompressed, hence they are large. The advantage is
their simplicity and wide acceptance in Windows programs.
As opposed to the raster image formats above (where the data describes the
characteristics of each individual pixel), vector image formats contain a geometric description
that can be rendered smoothly at any desired display size.
At some point, all vector graphics must be rasterized in order to be displayed on digital
monitors. However, vector images can be displayed with analog CRT technology such as that
used in some electronic test equipment, medical monitors, radar displays, laser shows and early
video games. Plotters are printers that use vector data rather than pixel data to draw graphics.
CGM:
CGM (Computer Graphics Metafile) is a file format for 2D vector graphics, raster
graphics, and text. All graphical elements can be specified in a textual source file that can be
compiled into a binary file or one of two text representations. CGM provides a means of
graphics data interchange for the computer representation of 2D graphical information,
independent of any particular application, system, platform, or device.
SVG:
SVG (Scalable Vector Graphics) is an open standard created and developed by the World
Wide Web Consortium to address the need for a versatile, scriptable and all-purpose vector
format for the web and otherwise. The SVG format does not have a compression scheme of
its own, but due to the textual nature of XML, an SVG graphic can be compressed using a
program such as gzip.
Several factors combine to indicate a lively future for digital image processing. A major
factor is the declining cost of computer equipment. Several new technological trends, including
parallel processing, promise to further promote digital image processing.
A scanner produces a two-dimensional image. If the output of the camera or other imaging
sensor is not in digital form, an analog-to-digital converter digitizes it. The nature of the sensor
and the image it produces are determined by the application.
Image enhancement is among the simplest and most appealing areas of digital image processing.
Basically, the idea behind enhancement techniques is to bring out detail that is obscured, or
simply to highlight certain features of interest in an image. A familiar example of
enhancement is increasing the contrast of an image because "it looks better." It is
important to keep in mind that enhancement is a very subjective area of image processing.
The use of color in image processing is motivated by two principal factors. First, color is
a powerful descriptor that often simplifies object identification and extraction from a scene.
Second, humans can discern thousands of color shades and intensities, compared to only
about two dozen shades of gray. This second factor is particularly important in manual image analysis.
Wavelets are the foundation for representing images at various degrees of resolution.
Although the Fourier transform has been the mainstay of transform-based image processing since
the late 1950s, a more recent transformation, called the wavelet transform, is now making it
even easier to compress, transmit, and analyze many images. Unlike the Fourier transform,
whose basis functions are sinusoids, wavelet transforms are based on small waves, called
wavelets, of varying frequency and limited duration.
Wavelets were first shown to be the foundation of a powerful new approach to signal
processing and analysis called multiresolution theory. Multiresolution theory incorporates and
unifies techniques from a variety of disciplines, including subband coding from signal
processing, quadrature mirror filtering from digital speech recognition, and pyramidal image
processing.
1.5.6 Compression:
Compression, as the name implies, deals with techniques for reducing the storage
required to save an image, or the bandwidth required to transmit it. Although storage
technology has improved significantly over the past decade, the same cannot be said for
transmission capacity. This is true particularly in uses of the Internet, which are characterized by
significant pictorial content. Image compression is familiar to most users of computers in the
form of image file extensions, such as the .jpg extension used in the JPEG (Joint
Photographic Experts Group) image compression standard.
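As a small hedged sketch of this trade-off (the input file name is hypothetical; imwrite's 'Quality' parameter controls the strength of JPEG's lossy compression):

% Compare raw size with JPEG-compressed size (illustrative)
A = imread('photo.png');                      % hypothetical input image
imwrite(A, 'photo_q75.jpg', 'Quality', 75);   % lossy JPEG at quality 75
info = dir('photo_q75.jpg');
fprintf('Raw: %d bytes, JPEG: %d bytes\n', numel(A), info.bytes);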
1.5.7 Morphological Processing:
In binary images, the sets in question are members of the 2-D integer space Z^2, where
each element of a set is a 2-D vector whose coordinates are the (x, y) coordinates of a black (or
white) pixel in the image. Gray-scale digital images can be represented as sets whose
components are in Z^3. In this case, two components of each element of the set refer to the
coordinates of a pixel, and the third corresponds to its discrete gray-level value.
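To make the set interpretation concrete, here is a minimal sketch (the small binary image B is hypothetical) listing the coordinates of the foreground pixels as elements of Z^2:

% A binary image viewed as a set of 2-D points in Z^2
B = [0 1 0; 1 1 0; 0 0 1];   % small hypothetical binary image
[r, c] = find(B);            % coordinates of the foreground (1-valued) pixels
S = [r c];                   % each row is one element (x, y) of the set
disp(S)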
1.5.8 Segmentation:
Segmentation procedures partition an image into its constituent parts or objects. In
general, autonomous segmentation is one of the most difficult tasks in digital image processing.
A rugged segmentation procedure brings the process a long way toward the successful solution of
imaging problems that require objects to be identified individually.
Representation and description almost always follow the output of a segmentation stage,
which generally is raw pixel data constituting either the boundary of a region (i.e., the set of
pixels separating one image region from another) or all the points in the region itself. In either
case, converting the data to a form suitable for computer processing is necessary. The first
decision that must be made is whether the data should be represented as a boundary or as a
complete region. Boundary representation is appropriate when the focus is on external shape
characteristics, such as corners and inflections.
The last stage involves recognition and interpretation. Recognition is the process that
assigns a label to an object based on the information provided by its descriptors. Interpretation
involves assigning meaning to an ensemble of recognized objects.
Knowledge about a problem domain is coded into an image processing system in the form of
a knowledge database. This knowledge may be as simple as detailing regions of an image where
the information of interest is known to be located, thus limiting the search that has to be
conducted in seeking that information. The knowledge base can also be quite complex, such as
an interrelated list of all major possible defects in a materials inspection problem, or an image
database containing high-resolution satellite images of a region for change-detection
applications. In addition to guiding the operation of each processing module, the knowledge base
also controls the interaction between modules. The system must be endowed with the knowledge
to recognize the significance of the location of a string with respect to other components of an
address field. This knowledge guides not only the operation of each module, but also the
feedback operations between modules through the knowledge base. We implemented the
preprocessing techniques using MATLAB.
As recently as the mid-1980s, numerous models of image processing systems being sold
throughout the world were rather substantial peripheral devices that attached to equally
substantial host computers. Late in the 1980s and early in the 1990s, the market shifted to image
processing hardware in the form of single boards designed to be compatible with industry
standard buses and to fit into engineering workstation cabinets and personal computers. In
addition to lowering costs, this market shift also served as a catalyst for a significant number of
new companies whose specialty is the development of software written specifically for image
processing.
[Figure: Components of a general-purpose image processing system: problem domain, image sensor, specialized image processing hardware, image processing software, and hardcopy devices.]
Although large-scale image processing systems are still sold for massive imaging
applications, such as the processing of satellite images, the trend continues toward the miniaturization
and merging of general-purpose small computers with specialized image processing hardware.
The figure shows the basic components of a typical general-purpose system used for digital
image processing. The function of each component is discussed in the following paragraphs,
starting with image sensing.
With reference to sensing, two elements are required to acquire digital images. The first
is a physical device that is sensitive to the energy radiated by the object we wish to image. The
second, called a digitizer, is a device for converting the output of the physical sensing device into
digital form. For instance, in a digital video camera, the sensors produce an electrical output
proportional to light intensity, and the digitizer converts these outputs to digital data.
Specialized image processing hardware usually consists of the digitizer just mentioned,
plus hardware that performs other primitive operations, such as an arithmetic logic unit (ALU),
which performs arithmetic and logical operations in parallel on entire images. One example of
how an ALU is used is in averaging images as quickly as they are digitized, for the purpose of
noise reduction. This type of hardware is sometimes called a front-end subsystem, and its most
distinguishing characteristic is speed. In other words, this unit performs functions that require fast
data throughput (e.g., digitizing and averaging video images at 30 frames/s) that the typical main
computer cannot handle.
Computer:
The computer in an image processing system is a general-purpose computer and can range from
a PC to a supercomputer.
Software for image processing consists of specialized modules that perform specific
tasks. A well-designed package also includes the capability for the user to write code that, as a
minimum, utilizes the specialized modules. More sophisticated software packages allow the
integration of those modules and general-purpose software commands from at least one
computer language.
Mass storage:
Mass storage capability is a must in image processing applications, since uncompressed digital
images quickly consume large amounts of storage space.
Image displays:
Image displays in use today are mainly color (preferably flat-screen) TV monitors.
Monitors are driven by the outputs of image and graphics display cards that are an integral part
of the computer system. Seldom are there requirements for image display applications that
cannot be met by display cards available commercially as part of the computer system. In some
cases, it is necessary to have stereo displays, and these are implemented in the form of headgear
containing two small displays embedded in goggles worn by the user.
Hardcopy:
Hardcopy devices for recording images include laser printers, film cameras, heat-
sensitive devices, inkjet units, and digital units, such as optical and CD-ROM disks. Film
provides the highest possible resolution, but paper is the obvious medium of choice for written
material. For presentations, images are displayed on film transparencies or in a digital medium if
image projection equipment is used. The latter approach is gaining acceptance as the standard for
image presentations.
Network:
Networking is almost a default function in any computer system in use today. Because of
the large amount of data inherent in image processing applications, the key consideration in
image transmission is bandwidth. In dedicated networks, this typically is not a problem, but
communications with remote sites via the Internet are not always as efficient. Fortunately, this
situation is improving quickly as a result of optical fiber and other broadband technologies.
Image sharpening and restoration
The common applications of image sharpening and restoration are zooming, blurring,
sharpening, gray-scale conversion, edge detection, image recognition, image retrieval, etc.
Medical field
Common applications in the medical field are gamma-ray imaging, PET scans, X-ray
imaging, medical CT, UV imaging, etc.
Remote sensing
It is the process of scanning the earth by the use of satellite and acknowledges all
activities of space.
Machine/Robot vision
It works on the vision of robots so that they can see things, identify them, etc.
Pattern recognition
It requires the study of image processing; it is also combined with artificial intelligence
such that computer-aided diagnosis, handwriting recognition and images recognition can be
easily implemented. Now a days, image processing is used for pattern recognition.
Video processing
A video is a sequence of image frames, so video processing applies the techniques above frame
by frame; it covers operations such as noise reduction, detail enhancement, motion detection,
and frame-rate conversion.
Digital image processing stems from two principal application areas: improvement of
pictorial information for human interpretation, and processing of image data for autonomous
machine perception. Wang et al. proposed a colour image correction method based on a non-
linear transformation function, built on a light reflection model and multi-scale theory. Traditional
low-illumination image enhancement algorithms include the gray transformation method and the
histogram equalization method. Huang et al. proposed an adaptive gamma correction algorithm
that adaptively obtains gamma correction parameters based on a cumulative distribution probability
histogram. Jobson et al. proposed the single-scale retinex (SSR) algorithm based on the retinex
illumination-reflection model established by Land et al.; this later evolved into the multiscale retinex
(MSR) algorithm, the MSR algorithm with colour restoration (MSRCR), and the MSR
algorithm with chromaticity preservation (MSRCP). Fu et al. proposed a weighted variational
model for simultaneous reflectance and illumination estimation (SRIE) that can preserve the
estimated reflectivity with high accuracy and suppress noise to a certain extent. Image
enhancement methods based on machine learning have also emerged in recent years. The image
quality under different distortions, like noise and compression, has been estimated.
F(x, y), the brightness of the image at a given pixel, is the product of two components:

F(x, y) = I(x, y) * R(x, y),

where I(x, y) is the illumination component of the incident light and R(x, y) is the reflection
component from the object surface.
The aim is to improve the pictorial information of the image for human perception; digital image
processing enhances the quality of the image. The pictorial information of the image is enhanced
for human perception using the following image processing methods.
Most remote sensing systems create arrays of numbers representing an area on the
surface of the Earth, such as a water body, wetland, or forest area. The entire array is called an
image or scene, and the individual numbers are called pixels (picture elements). The value of a
pixel represents a measured quantity, such as light intensity over a given range of wavelengths;
however, it can also represent a higher-level product such as topography or chlorophyll
concentration, or almost anything else. Some active systems also provide the phase of the reflected
radiation, so each pixel holds a complex number. Typical array sizes, together with systems
having multiple channels, may require megabytes of storage per scene. Moreover, a
satellite can collect 50 of these frames on a single pass, so the data sets can be enormous. There
are several established color models used in computer graphics, but the most common are the
gray-scale model, the RGB (Red-Green-Blue) model, the HSI (Hue, Saturation, Intensity) model, and the
CMYK (Cyan-Magenta-Yellow-Black) model; Gonzalez and Woods (2008) present a detailed
explanation of their use in digital image processing for remote sensing.
When red, green, and blue light are combined, they form white. As a result, to reduce the
computational complexity, georeferenced data that exists in the RGB color model is converted
into a gray-scale image. The gray-scale value, ranging from black to white, of the imagery X is
given by its luminance L, calculated as

L = (0.2989 * R) + (0.5870 * G) + (0.1140 * B),

where R, G, and B are the red, green, and blue components. RGB is a color space that originated in
CRT (and similar) display applications, when it was convenient to describe color as a combination
of three colored rays (red, green, and blue).
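A minimal sketch applying the same weighted sum in MATLAB (using the peppers.png demo image that ships with MATLAB); the built-in rgb2gray uses these same coefficients:

% Gray-scale conversion with the luminance weights above
RGB = im2double(imread('peppers.png'));   % demo RGB image
R = RGB(:,:,1); G = RGB(:,:,2); B = RGB(:,:,3);
L = 0.2989*R + 0.5870*G + 0.1140*B;       % weighted luminance
% Equivalently: L = rgb2gray(RGB);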
Existing Method
Primarily, the RGB colour space is converted to the HSV colour space. The V plane is
extracted, its brightness information is improved, and finally the HSV image is converted back
into the RGB colour space. The brightness plane of the image is enhanced using adaptive
enhancement and histogram equalization techniques, followed by image fusion. Colour image
transformation combined with multi-scale decomposition is used to enhance low-illumination images.
Let I be the captured image, and let Ih, Is, Iv denote its three HSV colour planes.
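A minimal sketch of this colour-space step (the input file name is hypothetical; the plane names follow the text):

% Split a low-illumination RGB image into HSV planes
I   = im2double(imread('low_light.png'));   % hypothetical input image
HSV = rgb2hsv(I);                           % RGB -> HSV
Ih  = HSV(:,:,1);                           % hue plane
Is  = HSV(:,:,2);                           % saturation plane
Iv  = HSV(:,:,3);                           % value (brightness) plane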
2. Histogram Equalisation:
Histogram equalization is one of the basic image processing techniques used to improve
the contrast of an image. It is a contrast enhancement technique accomplished by adjusting the
image histogram. HE reassigns the intensity values of pixels in the input image such that the output
image has a uniform distribution of intensities:

I'v = histeq(Iv) * max(Iv_g, Iv)   [9]
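Continuing the sketch above (Ih, Is, Iv as defined there), a minimal illustration of the V-plane equalization step only; the max(.) term of equation [9] and the subsequent fusion of the full method are omitted:

% Equalize the brightness plane and convert back to RGB
Iv_eq = histeq(Iv);                      % histogram-equalized V plane
J     = hsv2rgb(cat(3, Ih, Is, Iv_eq));  % recombine the three planes
imshowpair(I, J, 'montage')              % compare input and result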
In image processing applications, image fusion is the method used to extract key information
from two or more sub-images. The fused image is more informative than the source images and
provides all the necessary information. The image fusion is done as follows:

F = Σ_{i=1}^{N} w_i S_i   [10]

The sub-images to be fused usually carry similar information, so the weighting
coefficients are obtained using PCA (principal component analysis). In PCA, the feature vectors
of the source images are obtained, and the feature values are used to form the weighting coefficients.
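A hedged sketch of one common PCA weighting recipe for two sub-images (S1 and S2 are assumed to be the two enhanced sub-images, class double and the same size; this is an illustrative reconstruction, not necessarily the exact procedure of the cited method):

% PCA-based weights for fusing two sub-images S1, S2
C      = cov([S1(:) S2(:)]);      % 2x2 covariance of the pixel samples
[V, D] = eig(C);
[~, k] = max(diag(D));            % dominant eigenvector
w      = V(:,k) / sum(V(:,k));    % normalized weighting coefficients
F      = w(1)*S1 + w(2)*S2;       % fused image F = sum_i w_i * S_i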
Fig 3.4: The two source images S1, S2 obtained for two alpha values [0.01, 1] with the adaptive enhancement technique.
Proposed Method
Histogram modeling (e.g., histogram equalization) provides a sophisticated method for
modifying the dynamic range and contrast of an image by altering that image such that its
intensity histogram has a desired shape. Unlike contrast stretching, histogram modeling
operators may make use of non-linear and non-monotonic transfer functions to
map between pixel intensity values in the input and output images.
Histogram equalization makes use of a monotonic, non-linear mapping which reassigns the
intensity values of the pixels in the input image such that the output image has a uniform
distribution of intensities. The technique is used in image comparison processes and
in the correction of non-linear effects introduced by, say, a digitizer.
Assume that the image intensities are continuous quantities in the interval [0, 1] and that the
transfer function f that maps an input image to an output image is continuous within this
interval. Further, it will be assumed that the transfer law, which may also be written in terms
of intensity density levels as D_B = f(D_A), is single-valued and monotonically increasing, as
is the case in histogram equalization, so that it is possible to define the inverse law
D_A = f^{-1}(D_B). An example of this type of function is illustrated in Figure 4.2.
All pixels in the input image with densities in the region D_A to D_A + dD_A will have their
pixel values re-assigned such that they assume an output pixel density value in the range D_B
to D_B + dD_B. The surface areas h_A(D_A) dD_A and h_B(D_B) dD_B will therefore be
equal, yielding

h_B(D_B) = h_A(D_A) / (dD_B / dD_A),

where D_A = f^{-1}(D_B).
The result can be written in the language of probability theory if the histogram h is regarded as
a continuous probability density function p describing the distribution of the (assumed random)
intensity levels:

p_B(D_B) = p_A(D_A) / (dD_B / dD_A).

In the case of histogram equalization, the output probability densities should all be an equal fraction
of the maximum number of intensity levels D_M in the input image (where the minimum level
considered is 0). The transfer function necessary to achieve this result is simply

D_B = f(D_A) = D_M * ∫_0^{D_A} p_A(u) du.

In the digital implementation, this becomes

f(D_A) = (D_M / N) * c_A(D_A),

where N is the number of image pixels and c_A(k) is the cumulative histogram, i.e., the number
of pixels at intensity level k or less. The output image will not necessarily be fully equalized,
and there may be 'holes' in the histogram (i.e., unused intensity levels). These effects are likely
to decrease as the number of pixels and intensity quantization levels in the input image are increased.
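A direct digital implementation of f(D_A) = (D_M / N) c_A(D_A) for an 8-bit image, as a minimal sketch (moon.tif is a demo image shipped with the Image Processing Toolbox):

% Manual histogram equalization of an 8-bit gray-scale image
A  = imread('moon.tif');           % 8-bit moon-surface demo image
h  = imhist(A, 256);               % histogram h_A(k)
cA = cumsum(h);                    % cumulative histogram c_A(k)
N  = numel(A);                     % number of image pixels
DM = 255;                          % maximum intensity level
f  = uint8(round(DM * cA / N));    % transfer function f(D_A)
B  = f(double(A) + 1);             % apply the mapping (1-based indexing)
imshowpair(A, B, 'montage')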
Consider, for example, an 8-bit gray-scale image of the surface of the moon, together with its histogram.
In order to improve the contrast of this image, without affecting the structure
(i.e., the geometry) of the information presented, we can apply histogram equalization. In the
resulting image, note that the histogram is not flat, but that the dynamic range and contrast have been
enhanced. Note also that when equalizing images with narrow histogram variations and
relatively few gray levels, increasing the dynamic range has the adverse effect of increasing
visual graininess. Compare this result with that produced by the linear contrast
stretching operator.
Even though the contrast of the building is acceptable, the sky area is represented almost entirely by
light pixels. This causes most histogram pixels
to be pushed into a narrow peak in the upper gray-level region. The histogram equalization
operator defines a mapping based on the cumulative histogram.
While histogram equalization has improved the contrast of the sky regions in the image, the picture now
looks unnatural because there is very little variety in the middle gray-level range. This occurs
because the transfer function is based on the shallow slope of the cumulative histogram in
the middle gray-level regions (i.e., intensity density levels 100 - 230), causing pixels from
this region of the original image to be mapped to similar gray levels in the output image.
We can improve on this by defining a mapping based on a sub-section of the image which has
a better distribution of intensity densities in the low and middle gray-level ranges. If we crop
the image so as to isolate a region which contains more building than sky,
we can then define a histogram equalization mapping for the whole image based on the cumulative
histogram of this region.
The resulting image is more natural. This concept of mappings based on specific sub-sections of the image is taken
up by another class of operators that perform local enhancement.
Common Variants
Histogram Specification
Histogram equalization is limited in that it is capable of producing only one result:
an image with a uniform intensity distribution. Sometimes it is useful to be able to control the shape of the
output histogram in order to highlight certain intensity levels in an image. This can be
done by the histogram specification operator, which maps a given intensity distribution into a
desired one.
The first step in histogram specification is to specify the desired output density function
and write a transformation g(c). If g is single-valued (which is true when there are no unfilled
intensity levels), then g^{-1} defines a mapping from the equalized levels of the original image.
Local Enhancements
The histogram processing methods described above are global, in the sense that they apply a
transformation function whose form is based on the intensity level distribution of an entire image.
Although this method can enhance the overall contrast and dynamic range of an image (thereby
making certain details more visible), there are cases in which enhancement of details over small
areas (i.e., areas whose total pixel contribution to the total number of image pixels has a
negligible influence on the global transform) is desired. The solution in these cases is to derive a
transformation based on the intensity distribution in the local neighbourhood of every
pixel in the image.
The histogram processes described above can be adapted for local enhancement. The
procedure involves defining a neighbourhood around each pixel and, using the histogram
characteristics of this neighbourhood, deriving a transfer function which maps that pixel into
an output intensity level. This is done for each pixel in the image. (Since moving
across rows or down columns only adds one new pixel to the local histogram, updating the
histogram from the previous observation with the new data at each step is possible.) Local
enhancement may also define transforms based on pixel attributes other than the
histogram; e.g., intensity mean (to control variance) and variance (to control contrast) are
common.
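One readily available local operator in this spirit is MATLAB's adapthisteq, which implements contrast-limited adaptive histogram equalization (CLAHE) over local tiles; a minimal sketch (pout.tif is a demo image shipped with the Image Processing Toolbox):

% Local (tile-based) histogram equalization with CLAHE
A = imread('pout.tif');                                   % low-contrast demo image
B = adapthisteq(A, 'NumTiles', [8 8], 'ClipLimit', 0.01); % one histogram per tile
imshowpair(A, B, 'montage')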
Tool Used
MATLAB R2018b
The name MATLAB stands for matrix laboratory. MATLAB was originally written to
provide easy access to matrix software developed by the LINPACK and EISPACK projects,
which together represent the state-of-the-art in software for matrix computation.
MATLAB is used for a range of applications, including deep learning and machine learning,
signal processing and communications, image and video processing, control systems, test and
measurement, computational finance, and computational biology.
As of 2020, MATLAB has more than 4 million users worldwide, coming from
various backgrounds in engineering, science, and economics.
Syntax
>> x = 17
x =
    17
>> x = 'hat'
x =
hat
A simple array is defined using the colon syntax: initial : increment : terminator.
For instance:

>> array = 1:2:9
array =
    1 3 5 7 9

defines a variable named array (or assigns a new value to an existing variable with the name
array) which is an array consisting of the values 1, 3, 5, 7, and 9. That is, the array starts at 1 (the
initial value), increments with each step from the previous value by 2 (the increment value), and
stops once it reaches (or is about to exceed) 9 (the terminator value). The increment value can
actually be left out of this syntax (along with one of the colons) to use a default value of 1:

>> ari = 1:5
ari =
    1 2 3 4 5

assigns to the variable named ari an array with the values 1, 2, 3, 4, and 5, since the default
value of 1 is used as the increment. Indexing is one-based, which is the usual convention
for matrices in mathematics, unlike zero-based indexing commonly used in other programming
languages such as C, C++, and Java.
Sets of indices can be specified by expressions such as 2:4 , which evaluates to [2, 3, 4] .
For example, a submatrix taken from rows 2 through 4 and columns 3 through 4 can be written
as:
>> A(2:4,3:4)
ans =
11 8
7 12
14 1
A square identity matrix of size n can be generated using the function eye, and matrices of any
size with zeros or ones can be generated with the functions zeros and ones, respectively.
>> eye(3,3)
ans =
    1 0 0
    0 1 0
    0 0 1
Structures
MATLAB supports structure data types. Since all variables in MATLAB are arrays, a
more appropriate name is "structure array", where each element of the array has the same field
names. In addition, MATLAB supports dynamic field names (field look-ups by name, field
manipulations, etc.).
Functions
When creating a MATLAB function, the name of the file should match the name of the
first function in the file. Valid function names begin with an alphabetic character, and can have
letters, numbers, or underscores. Variables and functions are case sensitive.
Function handles
MATLAB supports elements of lambda calculus by introducing function handles, or
function references, which are implemented either in .m files or anonymous/nested functions.
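A minimal sketch of function handles, both anonymous and named:

f = @(x) x.^2;    % anonymous function handle, squares elementwise
y = f(1:5)        % returns [1 4 9 16 25]
g = @sin;         % handle to a named function
g(pi/2)           % returns 1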
x = 0:pi/100:2*pi;
y = sin(x);
plot(x,y)
MATLAB supports developing graphical user interface (GUI) applications. UIs can be
generated either programmatically or using visual design environments such as GUIDE and App
Designer.
As alternatives to the MuPAD-based Symbolic Math Toolbox available from MathWorks,
MATLAB can be connected to Maple or Mathematica. Libraries also exist to import and
export MathML.
[X,Y] = meshgrid(-10:0.25:10,-10:0.25:10);
f = sinc(sqrt((X/pi).^2+(Y/pi).^2));
mesh(X,Y,f);
axis([-10 10 -10 10 -0.3 1])
xlabel('{\bfx}')
ylabel('{\bfy}')
zlabel('{\bfsinc} ({\bfR})')
hidden off
Fig 5.2: This code produces a wireframe 3D plot of the two-dimensional unnormalized sinc function
RESULTS
In this project, image enhancement for low-illumination images is performed using adaptive
enhancement and histogram equalization techniques. Six different low-illumination images are
enhanced using these two methods, tested in MATLAB R2018b.
The output enhanced images are clear and bright. The adaptive enhancement is done
using the following algorithm (Algorithm 1):
6.1 Algorithm 1:
As shown in the figures above, each reference image is given as input to Algorithm 1, and the
output of the adaptive enhancement technique is very clear and bright in every case.
6.2 Algorithm 2:
The same six images are enhanced using Algorithm 2. The low-illumination and enhanced
images obtained with Algorithm 2 (Box, Hill, Fort, Elephant, City, and House) are shown below.
As shown in the figures above, each reference image is given as input to Algorithm 2; the output
of the histogram equalization technique is not as clear and bright as that of Algorithm 1.
The entropy of an image is calculated as

E = -Σ_{i=1}^{N} P(a_i) log2 P(a_i),

where P(a_i) is the probability of intensity level a_i. The mean square error between two images is

MSE = (1 / MN) Σ_{i=1}^{M} Σ_{j=1}^{N} [x(i, j) - x̂(i, j)]^2,

where M and N are the numbers of rows and columns of the image; MSE can be calculated for an
image of any dimension, and x(i, j) and x̂(i, j) represent the two images to be compared.
PSNR is calculated as

PSNR = 10 log10 (peak^2 / MSE),

where 'peak' is either specified by the user or taken from the range of the image, and MSE is the
mean square error between the enhanced and input images. PSNR is expressed in dB.
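A hedged sketch computing the three metrics for an input/enhanced pair (X and Y are assumed to be gray-scale uint8 images of the same size):

% Entropy, MSE and PSNR for input X and enhanced Y
E    = entropy(Y);                  % -sum(p .* log2(p)), toolbox function
D    = double(X) - double(Y);
MSE  = mean(D(:).^2);               % mean square error
peak = 255;                         % peak value for 8-bit images
PSNR = 10 * log10(peak^2 / MSE);    % in dB
% Equivalently: psnr(Y, X) from the Image Processing Toolbox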
[Table: Entropy, PSNR, and MSE values of the enhanced images for Algorithm 1 and Algorithm 2.]
In this project, the low-illumination images are enhanced based on two enhancement
techniques:
1. Adaptive Enhancement,
2. Histogram Equalization.
The correction of brightness is done in the HSV colour space. The V plane is enhanced in
both techniques by modifying the distribution profile of the reflection
component. In the adaptive enhancement technique, the parameters are adaptively adjusted to
enhance the image brightness. In the histogram equalization method, the contrast of the V plane is
enhanced. The enhancement methods are followed by a fusion strategy to further improve
the information content of the image. The enhanced images produced by both algorithms are
displayed. The processed images are clearer and brighter. A quantitative analysis is also
performed on the enhanced images: the entropy, PSNR, and MSE of the images are calculated and
tabulated. The entropy of the images is improved, the PSNR is increased, and the error is reduced.
Comparing the two methods, the adaptive enhancement technique provides better results in all
aspects than the histogram equalization technique. In the future, a complete framework can be
developed which examines the image and automatically chooses the method and enhances the
image irrespective of the input type. Such a framework can reduce the manual work and
computation time still further. The effect of contrast enhancement on segmentation can be
studied for various image types and included in future work. The proposed algorithm can be
extended to colour and 3-D image histograms, and this project can be further extended to video
processing and 3-D image processing.
[1] Rafael C. Gonzalez, Richard E. Woods, "Digital Image Processing", 4th Edition.
[2] W. Wang, Z. Chen, X. Yuan, et al., "Adaptive image enhancement method for correcting
low-illumination images", Information Sciences 496 (2019).
[3] T. Huynh-The, B. Le, S. Lee, et al., "Using weighted dynamic range for histogram equalization
to improve the image contrast", EURASIP J. Image Video Process. 1 (1) (2014) 44.
[4] S. Huang, F. Cheng, Y. Chiu, "Efficient contrast enhancement using adaptive gamma
correction with weighting distribution", IEEE Trans. Image Process. 22 (3) (2013) 1032-1041.
[5] D. Jobson, Z. Rahman, G. Woodell, "Properties and performance of a center/surround
retinex", IEEE Trans. Image Process. 6 (3) (1997) 451-462.
[6] E. Land, J. McCann, "Lightness and retinex theory", J. Opt. Soc. Am. 61 (1) (1971) 1-11.
[7] D. Jobson, Z. Rahman, G. Woodell, "A multiscale retinex for bridging the gap between
color images and the human observation of scenes", IEEE Trans. Image Process. 6 (7) (1997)
965-976.
[8] Z. Rahman, D. Jobson, G. Woodell, "Retinex processing for automatic image enhancement",
J. Electron. Imaging 13 (1) (2004) 100-110.
[9] A. Petro, C. Sbert, J. Morel, "Multiscale Retinex", Image Processing On Line 4 (2014) 71-88.
[10] X. Fu, D. Zeng, Y. Huang, et al., "A weighted variational model for simultaneous
reflectance and illumination estimation", in: IEEE Conference on Computer Vision and Pattern
Recognition, 2016, pp. 2782-2790.
[11] M. Gharbi, J. Chen, J. Barron, et al., "Deep bilateral learning for real-time image
enhancement", ACM Trans. Graph. 36 (4) (2017) 118.
[12] Pinki, Rajesh Mehra, et al., "Estimation of the Image Quality under Different Distortions",
International Journal of Engineering and Computer Science, ISSN: 2319-7242, Volume 5, Issue 7,
July 2016.