CHAPTER 3

LITERATURE SURVEY

3.1 IMAGE DENOISING

Images have become an essential part of daily life. They are used to
document events and for visual communication in fields such as surveillance
and medicine, which has raised a massive demand for images of high accuracy
and visual quality. However, digital images are corrupted by noise during
acquisition and transmission, which degrades their visual appearance. Image
sensors such as Charge Coupled Device (CCD) cameras introduce some amount of
noise depending on sensor temperature and light levels. Corruption may also
occur during transmission, owing to lighting effects and atmospheric
disturbances that interfere with the transmission channel.

Image denoising is a fundamental problem in the field of image
processing. It is a subjective process used to recover the visual quality of
an image by reducing the noise in a given noisy version, and it is usually
the first step in image analysis. In general, image noise can be additive or
multiplicative. Additive noise, often modelled as Gaussian noise, arises
during acquisition, e.g. sensor noise caused by poor illumination. In the
additive noise model a random noise value is added to each pixel value; it
can be defined as
ŷ(i, j) = x(i, j) + n(i, j)                                        (3.1)

Here x(i, j) is the original pixel value and n(i, j) is the noise
added to that pixel. Multiplicative noise is an unwanted random signal that
gets multiplied with the original signal during capture, transmission or
other processing.
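As a brief, hedged sketch (synthetic data and illustrative noise levels, not any particular acquisition model), the two noise models can be written in NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.uniform(0.0, 1.0, size=(64, 64))  # synthetic stand-in for a grayscale image

# Additive model: noisy = x + n, with n drawn from a zero-mean Gaussian
sigma = 0.05
additive_noisy = image + rng.normal(0.0, sigma, size=image.shape)

# Multiplicative model: a random signal centred on 1 multiplies the original
multiplicative_noisy = image * rng.normal(1.0, sigma, size=image.shape)
```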

Before reviewing the various denoising schemes, we briefly describe
some well-known noise sources and noise types.

3.1.1 Noise Sources

Even though camera technology has grown tremendously over the past
decade, noise has still not been eradicated. It usually appears as small dots
over image areas that should be clear and smooth. Noise can enter an image for
different reasons. One cause is heat: when an image sensor heats up, photons
separate from the photosites and taint neighbouring photosites. Long exposures
also increase the risk of image noise, since the sensor is left open to gather
more image data, and this includes electrical noise. Image noise is a random
variation of brightness or color information in an image and is usually an
aspect of electronic noise; it can be produced by the sensor and circuitry of
a scanner or digital camera. Other noise sources in digital images include

1. Noise introduced by film grain while the image is scanned.
2. Noise introduced by the image sensor during image capture.
3. Noise occurring when the image is captured directly in digital format.
4. Insufficient light levels, which may cause noise in the image.
5. Interference in the transmission channel, which causes the received image
   to be noisy.
6. Noise introduced into the image when dust particles are present on the
   scanner screen.

3.1.2 Types of Noise

Noise is a very significant factor that degrades the transmitted
signal at the receiver. The noise level must therefore be known in order to
remove it from the image, and strategies must be designed to model and manage
the noise. Noise patterns can be categorized by their distribution, as
follows.

3.1.3 Gaussian Noise

Gaussian noise is an additive white noise whose distribution follows
a bell-shaped curve. The random noise is modelled by a Gaussian distribution,
and both the dark and the light areas of the image are affected. The Gaussian
probability density function is defined as

p(z) = (1 / (σ√(2π))) e^(−(z − μ)² / 2σ²)                          (3.2)

In equation (3.2), z represents the grey level, μ is the mean value
and σ is the standard deviation. The error observed due to Gaussian noise is
usually small. The plot of the Probability Density Function (PDF) is shown in
figure 3.2, and a sample image and its Gaussian-noise-affected counterpart
are shown in figures 3.1a and 3.1b. This noise is also called amplifier
noise.
Figure 3.1 a. Original Image b. Image with Gaussian Noise

Figure 3.2 Probability Density Function of Gaussian Noise
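As a small numerical check (with illustrative μ and σ), the density in equation (3.2) can be evaluated on a grid and integrated:

```python
import numpy as np

def gaussian_pdf(z, mu, sigma):
    """Gaussian density of equation (3.2) for grey level z."""
    coeff = 1.0 / (sigma * np.sqrt(2.0 * np.pi))
    return coeff * np.exp(-((z - mu) ** 2) / (2.0 * sigma ** 2))

z = np.linspace(-5.0, 5.0, 2001)          # grid of grey-level values
p = gaussian_pdf(z, mu=0.0, sigma=1.0)
area = float(np.sum(p) * (z[1] - z[0]))   # Riemann-sum approximation of the integral
```

The area should come out close to 1, as any probability density requires.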

3.1.4 Impulse Noise

Impulse noise is caused by a sudden disturbance in the image signal.
It mostly arises from malfunctioning camera sensor cells, synchronization
errors in image digitizing or transmission, and memory problems, which cause
pixels to be assigned incorrect extreme values. The noise appears as black
and white dots over the entire image and is also called salt-and-pepper noise
or shot noise. A sample image and the corresponding image corrupted by
impulse noise are shown in figures 3.4a and 3.4b. The noise is defined by the
following expression

p(z) = { Pa   for z = a
       { Pb   for z = b
       { 0    otherwise                                            (3.3)

Salt-and-pepper noise thus takes two possible values, a and b, and
the probability of each value is typically less than 0.2. The plot of the PDF
is shown in figure 3.3.

Figure 3.3 Probability Density Function of Impulse Noise

Figure 3.4 a. Original Image b. Image with Impulse Noise
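A hedged sketch of injecting salt-and-pepper noise (the probabilities, extreme values a = 0 and b = 1, and the test image are all illustrative):

```python
import numpy as np

def add_salt_pepper(image, p_salt=0.02, p_pepper=0.02, seed=0):
    """Set a random fraction of pixels to the extremes: 0 (pepper) and 1 (salt)."""
    rng = np.random.default_rng(seed)
    noisy = image.copy()
    u = rng.uniform(size=image.shape)
    noisy[u < p_pepper] = 0.0          # value a: dark impulses
    noisy[u > 1.0 - p_salt] = 1.0      # value b: bright impulses
    return noisy

clean = np.full((100, 100), 0.5)
noisy = add_salt_pepper(clean)
```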


3.1.5 Speckle Noise

Speckle noise appears in medical and Synthetic Aperture Radar
(SAR) images. Its source is attributed to random interference between the
coherent returns. The noise is random and its amplitude increases with signal
intensity, so it appears as bright specks in the lighter regions of the
image. It is also called multiplicative noise, and its PDF is shown in
figure 3.5. It can be modelled by multiplying a random noise term with the
pixel values of the original image; it is defined as

g(i, j) = x(i, j) + x(i, j) · n(i, j)                              (3.4)

Here n(i, j) is random noise with a zero-mean Gaussian probability
distribution function. A sample image and the corresponding speckle-noisy
image are shown in figures 3.6a and 3.6b.

Figure 3.5 Probability Density Function of Speckle Noise


Figure 3.6 a. Original Image b. Image with Speckle Noise

3.1.6 Rayleigh Noise

This type of noise is present in range images, which are used in
remote-sensing applications. The distribution is not symmetric, and the noisy
pixel value indicates the distance between the object and the camera system.
The PDF plot is shown in figure 3.7. The Rayleigh distribution is defined as

p(z) = { (2/b)(z − a) e^(−(z − a)² / b)   for z ≥ a
       { 0                                 for z < a               (3.5)

Figure 3.7 Probability Density Function of Rayleigh Noise
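Rayleigh-distributed noise can be sampled as sketched below. NumPy's generator exposes a one-parameter scale form (with b = 2·scale² in the notation used above), so the offset a is added explicitly; the values of a and the scale are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
a, scale = 0.1, 0.5                        # illustrative offset and scale
samples = rng.rayleigh(scale, size=100_000) + a

# The support starts at a, and the mean of the shifted distribution
# is a + scale * sqrt(pi / 2)
expected_mean = a + scale * np.sqrt(np.pi / 2.0)
```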


3.1.7 Periodic Noise

Periodic noise occurs due to electrical interference. It looks like
uniform bars over an image and can be modelled as a sinusoidal signal of a
specific frequency multiplied with the image. The original image and an image
corrupted by periodic noise are shown in figures 3.8a and 3.8b.

Figure 3.8 a. Original Image b. Image with Periodic Noise

3.2 CLASSIFICATION OF DENOISING METHODS

Numerous image denoising techniques have been developed to
minimize the effect of noise introduced during acquisition and transmission.
These techniques are classified into spatial-domain and frequency-domain
methods. Spatial-domain methods operate directly on the image pixel values,
whereas frequency-domain methods consider the rate at which the pixel values
change.

Noise elimination techniques must be selected according to the
degree of image quality degradation. Image denoising remains a challenge for
researchers, since noise removal tends to introduce artifacts and blur the
image.
This section describes different approaches to noise reduction
(or denoising), giving an insight into which algorithms can be applied to
obtain the most consistent estimate of the original image data from its
degraded version.

3.2.1 Spatial Domain Denoising Methods

Gabbouj et al. (1992) presented a theory of stack filters that
unifies many diverse filter classes, including morphological filters. The
theory is of particular importance because it brings together, in a single
analytical framework, both the estimation-based and the structure-based
approaches to the design of these filters.

Mihcak et al. (1999) proposed an image denoising scheme that first
estimates the underlying variance field using a Maximum Likelihood (ML) rule
and then applies a Minimum Mean Squared Error (MMSE) prediction scheme
accordingly. In the variance estimation step, they assumed the variance field
to be locally smooth to permit its reliable estimation, and used an adaptive
window-based estimation scheme to capture the effect of edges.

Chang et al. (2000b) proposed a spatially adaptive wavelet
thresholding technique based on context modelling, a general process used in
image compression to adapt the coder to varying image characteristics. Each
wavelet coefficient was modelled as a random variable with a generalized
Gaussian distribution with an unknown parameter. Context modelling was used
to estimate the parameter for each coefficient, which was then used to adapt
the thresholding approach.
This spatially adaptive thresholding extends to the entire wavelet
expansion and produces better results than the orthogonal transform.

Kervrann & Boulanger (2006) suggested a novel patch-based adaptive
technique for image denoising and representation. The technique is based on
pointwise selection of small image patches of fixed size in a variable
neighbourhood of each pixel. The approach associates with each pixel the
weighted sum of data points within an adaptive neighbourhood, in such a way
that it balances the accuracy of estimation against the stochastic error at
each spatial position. The technique is general and can be applied under the
assumption that there exists a repetitive pattern in the local neighbourhood
of a point.

Akshat & Vikrant (2011) proposed a spatial-domain image denoising
algorithm for filtering a mixture of speckle and impulse noise by combining
local statistics with a nonlinear robust estimator. Appropriate selection of
the despeckling filter is an important requirement for contrast enhancement,
while suppression of impulse noise assists the enhancement and preservation
of edges. The proposed algorithm uses a local statistical methodology for
despeckling, followed by application of the robust estimator for efficient
filtering of high-density impulse noise. Both the Coefficient of Correlation
(COC) and the Peak Signal to Noise Ratio (PSNR) are used as quality metrics
for evaluating the performance of the proposed algorithm.

Mittal et al. (2012) presented a natural scene statistic-based,
distortion-generic, blind/No-Reference (NR) Image Quality Assessment (IQA)
model that operates in the spatial domain. The model, termed the
Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE), does not
compute distortion-specific features such as blur, blocking or ringing;
instead it employs scene statistics of locally normalized luminance
coefficients to quantify possible losses of "naturalness" in the image owing
to the presence of distortions, leading to a holistic measure of quality.

Vandana & Shailja (2013) presented a survey of the work carried out
on filtering techniques for image denoising. An image often gets degraded by
assorted noises, visible or invisible, as it is gathered, coded, acquired and
transmitted. Noise affects many process parameters and may cause quality
problems for further image processing. Denoising of natural images appears
straightforward, yet becomes complicated under practical scenarios. Various
authors have noted that optimizing the outcome of a single algorithm over
parameters such as the type and amount of noise and the image content is
tedious.

Neeraj & Atul (2014) presented a performance comparison of
different image denoising algorithms in the spatial and wavelet domains. The
paper provides a complete evaluation of PSNR and MSE, with comparative
results for different kinds of noise such as Gaussian noise and
salt-and-pepper noise. In the spatial domain, a low-pass filtering of the
pixels is performed on the assumption that the noise occupies the higher
region of the frequency spectrum.

3.2.1.1 Linear and Nonlinear Filtering Approach

Lee & Kassam (1985) considered some generalizations of median
filters that combine the characteristics of both linear and median filters.
Chiefly, L-filters and M-filters were considered, motivated by robust
estimators that are generalizations of the median as a location estimator. A
related filter, which they called the Modified Trimmed Mean (MTM) filter, was
also described. The filters were examined for their effectiveness on noisy
signals containing sharp discontinuities or edges.

Arce & Foster (1989) formulated a theoretical analysis of
multistage median filters. It was shown that multistage median filters are a
combination of max/median and mid/median filters. Since multistage median
filters belong to the class of two-dimensional stack filters, they possess
the threshold decomposition property, which makes their theoretical analysis
straightforward. Statistical threshold decomposition is used to derive the
statistical properties of these filters, and the results are used to
determine the efficiency of the two varieties of multistage filters. Finally,
numerical and qualitative comparisons of the multistage filters with other
efficient detail-preserving filters are presented.

Acton & Bovik (1998) proposed a generalized set-theoretic framework
for nonlinear image estimation. Image computation algorithms and examples are
given using two PIMs, the PIecewise COnstant (PICO) and PIecewise LInear
(PILI) models, and two LIMs, the LOcally COnvex/COncave (LOCO) and LOcally
MOnotonic (LOMO) models. These representations describe properties that hold
over local image neighbourhoods, and the corresponding image estimates can be
computed by iterative optimization algorithms.

Zhang & Salari (2005) presented a neural-network-based system,
operating in the wavelet transform domain, for image denoising. A Layered
Neural Network (LNN) was configured and trained to learn the correlation
between the noise-free wavelet coefficients and their noisy observations.
After training, the network was applied to the noisy coefficients to generate
noise-reduced output values. Preliminary experimental results show that the
proposed method produces better results than the Wiener, VisuShrink and
BayesShrink approaches.

Thivakaran & Chandrasekaran (2010) proposed a method based on a
nonlinear Adaptive Median Filter (AMF) for image restoration. Image denoising
is a familiar task in digital image processing, aiming to remove noise that
may corrupt an image during acquisition or transmission while preserving its
quality; it is conventionally carried out by filtering in the spatial or
frequency domain. The objective is to rebuild the true image from the
corrupted one, since the process of image acquisition frequently degrades the
digitized image relative to the original. In a linear filter, the value of an
output pixel is a linear combination of neighbourhood values, which can blur
the image; an assortment of nonlinear smoothing procedures has therefore been
devised. The median filter is among the most widespread nonlinear filters: it
is very efficient for small neighbourhoods, but for large windows and high
noise densities it causes more blurring of the image. The Centre Weighted
Median (CWM) filter achieves improved performance over the plain median
filter.
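A minimal median filter of the kind discussed above can be sketched in NumPy (the window size and the test image are illustrative, not any particular paper's implementation):

```python
import numpy as np

def median_filter(image, size=3):
    """Plain size x size median filter with edge padding."""
    pad = size // 2
    padded = np.pad(image, pad, mode="edge")
    # All size x size windows, one per output pixel
    windows = np.lib.stride_tricks.sliding_window_view(padded, (size, size))
    return np.median(windows, axis=(-2, -1))

# A flat image with a single impulse: the median removes the outlier
img = np.full((7, 7), 10.0)
img[3, 3] = 255.0
out = median_filter(img)
```

At the impulse location the window holds eight values of 10 and one of 255, so the median restores 10.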

Bhumika & Negi (2013) concentrated on the denoising of images using
linear and nonlinear filtering techniques. Linear filtering was performed
with the mean filter and the LMS adaptive filter, while nonlinear filtering
was implemented using a median filter. Such filters are useful for removing
noise that is impulsive in nature, i.e. salt-and-pepper noise. The mean
filters find application where the noise is concentrated in a small portion
of the image.

Patidar et al. (2014) proposed an effective algorithm for image
filtering using median, Wiener and fuzzy filters. The aim of image filtering
is to eliminate noise from the image in such a manner that the "original"
image remains discernible. Linear filtering can be done with a linear filter,
in which the value of an output pixel is a linear combination of
neighbourhood values, which can blur the image; the median filter is one of
the most famous nonlinear filters. Image filtering techniques are applied to
remove the different kinds of noise that are either present in the image
during capture or injected into it during transmission. In this work,
Gaussian noise was considered and the filtering was performed with both
linear and nonlinear filters.

Gaussian Filter

Image denoising is an essential step in numerous image processing
and computer vision applications. The mark of a good image denoising
algorithm is that it removes noise as completely as possible while preserving
the edges.
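As an illustrative sketch (not a specific published implementation), a separable Gaussian smoothing filter can be written as:

```python
import numpy as np

def gaussian_kernel1d(sigma, radius=None):
    """Normalized 1-D Gaussian kernel of the given width."""
    radius = radius or int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    return k / k.sum()

def gaussian_blur(image, sigma=1.0):
    """Separable Gaussian smoothing: filter the rows, then the columns."""
    k = gaussian_kernel1d(sigma)
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, image)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, rows)

rng = np.random.default_rng(0)
noisy = 0.5 + rng.normal(0.0, 0.2, size=(64, 64))
smoothed = gaussian_blur(noisy, sigma=1.5)
```

Smoothing attenuates pixel-to-pixel noise variance but, as the text notes, it also blurs edges, which motivates the edge-preserving filters below.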

Shreyamsha Kumar (2013) proposed the integration of the Gaussian
filter and the bilateral filter with method-noise thresholding using
wavelets. Under Gaussian noise conditions, the performance of the proposed
technique was compared with existing denoising approaches: it performs below
Bayesian least squares estimation with a Gaussian scale mixture, but better
than the multi-resolution bilateral filter, the bilateral filter, wavelet
thresholding, kernel-based systems and NL-means.

Neighborhood Filter

Chen & Bui (2003) presented a denoising scheme based on
multiwavelets using neighbouring coefficients. The proposed scheme
outperforms single-wavelet-based denoising and a translation-invariant
method. The results can be improved further if a larger neighbourhood is
chosen instead of the immediate neighbourhood.

Buades et al. (2008) presented a study of neighbourhood filters.
They proposed three principles for the comparison of denoising methods,
evaluating the loss of image structure, the creation of artifacts, and the
exploitation of image self-similarity. After establishing a structural
equivalence of non-local denoising approaches with other classes, they
demonstrated that movie denoising can avoid the explicit computation of an
optical flow estimate. The recent extensions of NL-means mentioned in their
introduction point towards the involvement of still more sophisticated
statistical tools. Beyond denoising, they observed that associating with each
pixel a probability distribution weighting its resemblance to the other
pixels of the image will become a central tool in image analysis; this rich
structure unfolds image information and should be used in algorithms that
simultaneously analyze and process images.

SUSAN Filter

Smith & Brady (1997) proposed a new scheme for low-level image
processing, specifically corner detection, edge detection and
structure-preserving noise reduction. Nonlinear filtering is employed to
delineate which portion of the image is closely associated with each
individual pixel; every pixel is related to the local image region of
comparable brightness. The new feature detectors are based on the
minimization of this local image region, and the noise reduction technique
uses the region as its smoothing neighbourhood.

Mao et al. (2006), motivated by the effectiveness of the SUSAN
operator for low-level image processing and its simplicity of use, extended
it to denoise 3D meshes. They use the angle between surface normals to
determine the SUSAN area; each point is associated with the SUSAN area that
has a continuity characteristic analogous to the point. The SUSAN area
efficiently excludes features that would otherwise be treated as noise, so
the SUSAN operator contributes the maximal number of appropriate neighbours
over which to average, while no neighbours from dissimilar regions are
included; consequently, the overall structure is preserved. Moreover, they
extend the SUSAN operator to a two-ring neighbourhood through a squared
umbrella operator to increase surface smoothness with little loss of fine
detail.

Bilateral Filter

Tomasi & Manduchi (1998) presented the notion of bilateral
filtering for edge-preserving smoothing. The method is non-iterative, local
and simple. It combines gray levels or colors based on both their geometric
closeness and their photometric similarity, and prefers near values to
distant values in both the range and the domain. In contrast with filters
that operate on the three bands of a color image separately, a bilateral
filter can operate in a perceptual metric such as the CIE-Lab color space and
smooth colors while preserving edges in a way that matches human perception.
Unlike normal filtering, bilateral filtering generates no phantom colors
along edges in color images, and it reduces phantom colors even where they
appear in the original image.
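A brute-force sketch of the bilateral filter idea described above, with illustrative parameters (sigma_s controls geometric closeness, sigma_r photometric similarity):

```python
import numpy as np

def bilateral_filter(image, sigma_s=2.0, sigma_r=0.1, radius=3):
    """Brute-force bilateral filter: each output pixel is a weighted mean
    where the weights combine spatial closeness and intensity similarity,
    so averaging stops at strong edges."""
    H, W = image.shape
    out = np.empty_like(image)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys ** 2 + xs ** 2) / (2.0 * sigma_s ** 2))
    padded = np.pad(image, radius, mode="edge")
    for i in range(H):
        for j in range(W):
            patch = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            range_w = np.exp(-(patch - image[i, j]) ** 2 / (2.0 * sigma_r ** 2))
            w = spatial * range_w
            out[i, j] = (w * patch).sum() / w.sum()
    return out

# A noisy step edge: the filter smooths the flat regions but keeps the step
step = np.where(np.arange(32) < 16, 0.0, 1.0) * np.ones((32, 1))
rng = np.random.default_rng(0)
noisy = step + rng.normal(0.0, 0.05, size=step.shape)
out = bilateral_filter(noisy)
```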

Zhang & Gunturk (2008) presented a study of the bilateral filter, a
nonlinear filter that performs spatial averaging without smoothing edges and
has been shown to be an effective image denoising technique. A significant
concern in applying the bilateral filter is the selection of the filter
parameters, which affect the results considerably. The paper makes two main
contributions. The first is an empirical study of optimal bilateral filter
parameter selection in image denoising applications. The second is an
extension of the bilateral filter: a multi-resolution bilateral filter, in
which bilateral filtering is applied to the approximation (low-frequency)
sub-bands of a signal decomposed by a wavelet filter bank. The
multi-resolution bilateral filter was combined with wavelet thresholding to
create a new image denoising framework, which proves very effective in
removing noise from real noisy images.

Dong et al. (2013) demonstrated a low-rank approach to simultaneous
sparse coding (SSC) and delivered a conceptually simple interpretation from a
bilateral variance estimation context: the singular value decomposition of
similarly packed patches can be viewed as combining both local and nonlocal
data for estimating signal variances. This viewpoint motivated a new class of
image restoration algorithms termed Spatially Adaptive Iterative
Singular-value Thresholding (SAIST). For noisy data, SAIST generalizes the
celebrated BayesShrink from local to nonlocal models; for incomplete data,
SAIST extends previous deterministic annealing-based solutions to sparsity
optimization by incorporating the ideas of dictionary learning.
Indulekha & Sasikumar (2015) formulated a substantial denoising
technique in which the image is first decomposed into eight sub-bands by
means of the 3D DWT, after which bilateral filtering and thresholding are
applied: the approximation coefficients obtained from the DWT are filtered
with a bilateral filter, while the detail coefficients are subjected to
wavelet thresholding. Hard and soft thresholding are the commonly used
thresholding methods, though for improved results BayesShrink, VisuShrink,
etc. are used to compute the threshold value. The image is rebuilt by the
inverse wavelet transform of the resulting coefficients and then filtered
using a bilateral filter.

MRI and ultrasound images were used as the datasets for
quantitative verification. The Root Mean Square Error (RMSE), the Structural
Similarity Index (SSIM) and the Peak Signal to Noise Ratio (PSNR) were used
to quantify the denoising performance.

Akar (2016) addressed a substantial pre-processing stage for
Magnetic Resonance (MR) images in clinical settings: the removal of noise. An
edge-preserving scheme based on the Bilateral Filter (BF) was applied for
Rician noise removal in MR images. Since the choice of BF parameters affects
the denoising performance, as a novel approach the parameters of the BF were
optimized using a Genetic Algorithm (GA).

3.2.1.2 Singular Value Decomposition

In linear algebra, the Singular Value Decomposition (SVD) is a
factorization of a real or complex matrix. For an m × n real or complex
matrix A, the decomposition has the form A = UΣVᴴ, where U is an m × m
unitary matrix, Σ is an m × n rectangular diagonal matrix whose entries are
the singular values of A, and V is an n × n real or complex unitary matrix.
The m columns of U are the left-singular vectors and the n columns of V are
the right-singular vectors of A. For denoising, the important point is that
the matrix to be factored should be of (approximately) low rank, so that the
small trailing singular values can be attributed to noise and truncated.

The utilization of SVD methods for digital image processing was
discussed by Andrews & Patterson (1976); it is of significant interest for
facilities with large computational power and demanding imaging
requirements. The SVD methods are beneficial for images as well as for quite
general point spread function (impulse response) representations. The
approaches constitute straightforward applications of the theory of linear
filtering, and image improvement examples were formulated to illustrate the
principles. The most interesting cases of image restoration are those which
involve a space-variant imaging system.

Lee et al. (1991) presented a block-transform image processing
approach that applies singular value decomposition to reduce noise without
affecting texture and edge detail. A nonlinear gain function is created based
on the measured statistics of the singular values of the image noise. The
image is filtered to generate a detail image and a low-pass-filtered image.
The detail image is segmented into blocks, and the blocks are transformed to
obtain singular vectors and arrays of singular values. The nonlinear gain
function is applied to the arrays of singular values, and an inverse SVD
transform is applied to the modified singular values to yield a processed
detail image. The processed detail image is combined with the
low-pass-filtered image to produce a processed image with reduced noise.
Pre-processing of images and video sequences with spatial
filtering methods typically enhances image quality and, in addition,
compressibility. In this direction, Konstantinides et al. (1997) presented a
block-based nonlinear filtering algorithm founded on singular value
decomposition and compression-based filtering.

Kakarala & Ogunbona (2001) proposed a multi-resolution form of the
singular value decomposition and demonstrated how it might be used for signal
analysis and approximation. It is well known that the SVD has optimal
de-correlation and sub-rank approximation properties. The multi-resolution
form of the SVD preserves those properties and, furthermore, has linear
computational complexity. Using the multi-resolution SVD, several significant
features of a signal may be analyzed at each of numerous levels of
resolution: sphericity of principal components, isotropy, self-similarity
under scaling, and the decomposition of mean-squared error into meaningful
components. Theoretical calculations were provided for simple statistical
models to show what might be expected.

Zujun (2003) presented an image denoising approach that performs
block-SVD on the detail sub-bands of the wavelet transform domain. In
particular, an edge-adaptive thresholding method associated with the
block-SVD process was proposed.

Wongsawat et al. (2005) proposed a multichannel SVD-based image
denoising algorithm. The DCT is employed to de-correlate the image into
sixteen sub-bands; SVD is then applied to each of the sub-bands, and the
additive noise is minimized by truncating the eigenvalues.

Rowayda (2012) presented an experimental survey of SVD in image
processing applications. SVD has attractive properties in imaging that lead
to new contributions in different image processing applications, and research
directions using SVD in various applications are discussed.

Guo et al. (2016) proposed a computationally modest denoising
algorithm employing nonlocal self-similarity and low-rank estimation. The
method comprises three basic phases. First, similar image patches are grouped
by block matching to form patch groups that are approximately low-rank.
Second, each group of similar patches is factorized by singular value
decomposition and estimated by retaining only the few largest singular values
and the corresponding singular vectors. Finally, the denoised image is
produced by aggregating all processed patches. For low-rank matrices, the SVD
delivers the best energy compaction in the least-squares sense, and the
method exploits this optimal energy compaction property of the SVD to compute
a low-rank estimate of each group of similar patches. Unlike other SVD-based
approaches, the low-rank estimation in the SVD domain avoids learning a local
basis for representing the image patches, which is typically computationally
expensive.

Frobenius Norm

The Frobenius norm is the matrix analogue of the Euclidean norm:
it comes from the inner product on the space of all matrices. For an m × n
matrix A with entries aᵢⱼ, the norm is defined as
‖A‖F = √( Σᵢ₌₁ᵐ Σⱼ₌₁ⁿ |aᵢⱼ|² )                                     (3.6)

The norm can also be calculated directly from the trace of AᴴA or from the
singular values σᵢ of A:

‖A‖F = √( trace(AᴴA) ) = √( Σᵢ σᵢ² )                               (3.7)

The Frobenius norm acts on the space of matrices much as the L²
norm does in wavelet analysis. As equation (3.7) shows, it depends only on
the singular values of A, which are unique for a given matrix.
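The equivalent forms in equations (3.6) and (3.7) can be verified numerically; for a real matrix the conjugate transpose reduces to the ordinary transpose:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 7))  # an arbitrary real 5 x 7 matrix

entrywise = np.sqrt((np.abs(A) ** 2).sum())                           # equation (3.6)
trace_form = np.sqrt(np.trace(A.T @ A))                               # sqrt(trace(A^T A))
singular = np.sqrt((np.linalg.svd(A, compute_uv=False) ** 2).sum())   # equation (3.7)
builtin = np.linalg.norm(A, "fro")                                    # NumPy's own Frobenius norm
```

All four quantities agree to floating-point precision.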

Sutanshu et al. (2010) presented a novel Frobenius Norm Filter
(FNF) in the wavelet domain. The proposed filter is an adaptive
order-statistics filter that adjusts according to the noise level, and the
existence of a minimizer and its convergence were proved. The filter can be
used as a pre-processing filter in segmentation and feature extraction, and
FNF gives good denoising results even under high noise density.

Aditya et al. (2016) explained the use of the Frobenius norm as a basis for energy measurement using the SVD of image data. Linear regression is used to obtain the content parameters, and the experimental results show that the approach is effective in removing noise.

3.2.2 Transform Domain Filtering

Mallat (1989) studied the properties of the operator that approximates a signal at a given resolution. He showed that the difference of information between the approximations of a signal at two successive resolutions can be extracted by decomposing the signal on a wavelet orthonormal basis. The decomposition defines an orthogonal multi-resolution representation called a wavelet representation, computed with a pyramidal algorithm based on convolutions with quadrature mirror filters. For images, the wavelet representation differentiates several spatial orientations. The application of this representation to data compression in image coding, texture discrimination and fractal analysis is also studied.

Starck et al. (2002) demonstrated a digital implementation of two mathematical transforms, namely the ridgelet transform and the curvelet transform. They introduced a very simple interpolation in Fourier space that takes Cartesian samples and yields samples on a recto-polar grid, a pseudo-polar sampling set based on a concentric-squares geometry.

Portilla et al. (2003) described a method for removing noise from digital images, based on a statistical model of the coefficients of an overcomplete multiscale oriented basis. Neighbourhoods of coefficients at adjacent positions and scales are modeled as the product of two independent random variables: a Gaussian vector and a hidden positive scalar multiplier. The latter modulates the local variance of the coefficients in the neighbourhood and can thereby account for the empirically observed correlation between coefficient amplitudes. Under this model, the Bayesian least-squares estimate of each coefficient reduces to a weighted average of the local linear estimates over all possible values of the hidden multiplier variable.

Do & Vetterli (2005) offered a discrete-domain, multi-resolution and multi-direction expansion using non-separable filter banks, in much the same way that wavelets were derived from filter banks. This construction results in a flexible multi-resolution, local and directional image expansion using contour segments, and it is accordingly named the contourlet transform. The discrete contourlet transform has a fast iterated filter bank algorithm that requires O(N) operations for N-pixel images. They also established a precise link between the developed filter bank and the associated continuous-domain contourlet expansion via a directional multi-resolution analysis framework.

Elad & Aharon (2006) addressed the image denoising problem in which zero-mean white homogeneous Gaussian additive noise is to be removed from a given image.

The approach taken is based on sparse and redundant representations over trained dictionaries. Using the K-SVD algorithm, they obtain a dictionary that describes the image content effectively. Two training options are considered: using the corrupted image itself, or training on a corpus of high-quality images. Since the K-SVD is limited to handling small image patches, they extend its deployment to arbitrary image sizes by defining a global image prior that forces sparsity over patches in every location in the image. They show how this Bayesian treatment leads to a simple and effective denoising algorithm.

Dabov et al. (2006) introduced an image denoising scheme based on an enhanced sparse representation in transform domain. The enhancement of the sparsity is achieved by grouping similar 2D image fragments (e.g., blocks) into 3D data arrays called "groups". Collaborative filtering, a special procedure developed to deal with these 3D groups, was further addressed by Dabov et al. (2007). It is realized in three successive steps: 3D transformation of a group, shrinkage of the transform spectrum, and inverse 3D transformation. The result is a 3D estimate that consists of the jointly filtered grouped image blocks. By attenuating the noise, the collaborative filtering reveals even the finest details shared by grouped blocks while preserving the essential unique features of each individual block. The filtered blocks are then returned to their original positions. Because these blocks overlap, many different estimates are obtained for each pixel, and these need to be combined; aggregation, a particular averaging procedure, is employed to take advantage of this redundancy.

Analogously, a denoising strategy employing the framelet transform was demonstrated by Sulochana & Vidya (2012), which decomposes the image and performs a shrinkage operation to reduce the noise. The work presents a comparative survey of different thresholding techniques for image denoising in the framelet transform domain. The idea is to transform the data into the framelet basis, shrink the coefficients, and apply the inverse transform. Diverse shrinkage rules, namely SureShrink, MinMaxShrink, BayesShrink, VisuShrink, UniversalShrink and NormalShrink, were incorporated.

From the above study, eliminating noise from the original signal remains a challenging problem in signal and image processing. Denoising digital images degraded by Gaussian noise using wavelet techniques is very effective because of the wavelet transform's ability to capture the energy of a signal in a small number of transform coefficients. The Discrete Wavelet Transform (DWT) of an image produces a non-redundant representation that provides better spatial and spectral localization of image features.

3.2.2.1 Thresholding methods

In image denoising, the selection of an optimal threshold value is a crucial problem. If the threshold value is too small, the noisy components are left largely untouched; on the other hand, if the threshold value is too large, image details are removed as well, resulting in an over-smoothed image. An inefficient threshold value therefore either retains noise or blurs image detail. Hence, the selection of the threshold value plays an important role in image denoising.

Hard Thresholding Method and Soft Thresholding Method

Donoho & Johnstone (1994) proposed a method to reconstruct an image affected by noise, where the noise is considered to be independent, identically distributed Gaussian noise. The wavelet coefficients are thresholded, which smooths the estimate and suppresses the noise. These results help relate statistical inference to optimal recovery.
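The two classical thresholding rules can be stated compactly (a minimal sketch; the sample coefficients are hypothetical):

```python
import numpy as np

# Hard thresholding keeps coefficients above the threshold unchanged;
# soft thresholding additionally shrinks the survivors toward zero by
# the threshold value.
def hard_threshold(w, t):
    return np.where(np.abs(w) > t, w, 0.0)

def soft_threshold(w, t):
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

w = np.array([-3.0, -0.5, 0.2, 1.0, 4.0])
assert np.allclose(hard_threshold(w, 1.0), [-3.0, 0.0, 0.0, 0.0, 4.0])
assert np.allclose(soft_threshold(w, 1.0), [-2.0, 0.0, 0.0, 0.0, 3.0])
```

Soft thresholding yields a continuous shrinkage rule, which is why most of the estimators discussed below build on it.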

Coifman & Donoho (1995) presented a translation-invariant denoising scheme to suppress noise. Denoising by wavelet thresholding produces Gibbs phenomena near discontinuities because the transform lacks translation invariance. The proposed approach overcomes this problem by using a range of shifts: the shifted data are denoised, unshifted and averaged to obtain the final output. The method is named cycle spinning.
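Cycle spinning can be sketched with a toy one-level Haar shrinkage denoiser (an assumption made here for brevity; Coifman & Donoho apply the idea to full wavelet transforms):

```python
import numpy as np

# Apply a shift-variant denoiser to every circular shift of the signal,
# unshift the results, and average them. This suppresses the Gibbs-like
# artifacts the shift-variant transform would otherwise produce.
def haar_denoise(x, t):
    a = (x[0::2] + x[1::2]) / np.sqrt(2)              # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)              # detail coefficients
    d = np.sign(d) * np.maximum(np.abs(d) - t, 0.0)   # soft-threshold details
    y = np.empty_like(x)
    y[0::2] = (a + d) / np.sqrt(2)                    # inverse Haar step
    y[1::2] = (a - d) / np.sqrt(2)
    return y

def cycle_spin(x, t, denoise=haar_denoise):
    out = [np.roll(denoise(np.roll(x, -s), t), s) for s in range(len(x))]
    return np.mean(out, axis=0)

x = np.array([0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0])
y = cycle_spin(x, 0.1)
assert y.shape == x.shape
```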

Khare et al. (2010) presented a multilevel soft thresholding technique for noise elimination in the Daubechies complex wavelet transform domain. Two beneficial properties of the Daubechies complex wavelet transform, namely approximate shift invariance and strong edge representation, are exploited. Most uncorrelated noise can be removed by shrinking complex wavelet coefficients at the lowest level, whereas correlated noise is only partially removed at lower levels; accordingly, multilevel thresholding and shrinkage are applied to the complex wavelet coefficients. The technique first identifies strong edges using the imaginary components of the complex coefficients and then applies multilevel thresholding and shrinkage to the wavelet coefficients of non-edge points. Further, the proposed threshold depends on the variance of the wavelet coefficients and on the mean and median of the absolute wavelet coefficients at a specific level; this dependence makes the method adaptive in nature.

Okuwobi & Yonghua (2014) explored different wavelet approaches to digital image denoising through several wavelet threshold techniques such as SUREShrink, VisuShrink and BayesShrink, in search of an effective image denoising technique. The paper reviews the existing techniques and delivers a comprehensive evaluation; the proposed Wiener filtering technique was compared and analyzed, and the performance of all techniques was compared to ascertain the most efficient one.

VisuShrink

Donoho & Johnstone (1994) introduced a new principle for spatially adaptive estimation: selective wavelet reconstruction. They showed that variable-knot spline fits and piecewise-polynomial fits, when equipped with an oracle to choose the knots, are not dramatically more powerful than wavelet reconstruction with an oracle. They developed a practical spatially adaptive method, RiskShrink, which operates by shrinkage of empirical wavelet coefficients; RiskShrink mimics the performance of an oracle for selective wavelet reconstruction and is feasible in practice.
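The universal threshold underlying VisuShrink, T = σ√(2 log N), with σ estimated robustly from the finest-scale detail coefficients by the median absolute deviation, can be sketched as follows (the detail coefficients here are synthetic pure noise):

```python
import numpy as np

# Universal threshold: sigma * sqrt(2 log N), with sigma estimated from the
# median absolute deviation (MAD) of the finest detail coefficients.
def universal_threshold(detail_coeffs, n):
    sigma = np.median(np.abs(detail_coeffs)) / 0.6745   # robust noise estimate
    return sigma * np.sqrt(2.0 * np.log(n))

rng = np.random.default_rng(2)
n = 4096
d = rng.normal(0.0, 2.0, n)          # pure-noise details with sigma = 2
t = universal_threshold(d, n)
assert abs(t / np.sqrt(2 * np.log(n)) - 2.0) < 0.2   # sigma is recovered closely
```

Because T grows with N, VisuShrink tends to over-smooth large images, which motivates the adaptive thresholds discussed next.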

Babu et al. (2014) described several approaches for removing Gaussian noise from degraded MRI images using adaptive wavelet thresholding methods (VisuShrink, NeighShrink), proposed a new hybrid thresholding algorithm, and compared the results in terms of PSNR, showing the proposed method to be efficient.

BayesShrink

Chang et al. (2000a) presented an adaptive wavelet thresholding technique for both image compression and image denoising. The first part proposes an adaptive, data-driven threshold for image denoising via wavelet soft-thresholding. The threshold is derived in a Bayesian framework, and the prior on the wavelet coefficients is the Generalized Gaussian Distribution (GGD), which is widely applicable in image processing. The second part attempts to further validate recent claims that lossy compression can be used for denoising. The BayesShrink threshold can aid in the parameter selection of a coder designed with denoising in mind, thereby achieving simultaneous denoising and compression.
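The BayesShrink rule, T = σₙ²/σₓ with the signal standard deviation σₓ estimated per sub-band from the observed coefficient variance, can be sketched as follows (an illustrative sketch; the sub-band data are synthetic and σₙ is assumed known):

```python
import numpy as np

# BayesShrink threshold: sigma_n^2 / sigma_x, where sigma_x is estimated
# from the observed sub-band variance minus the noise variance.
def bayes_shrink_threshold(subband, sigma_n):
    sigma_y2 = np.mean(subband ** 2)                       # observed variance
    sigma_x = np.sqrt(max(sigma_y2 - sigma_n ** 2, 0.0))   # signal std estimate
    if sigma_x == 0.0:
        return np.max(np.abs(subband))                     # kill the whole sub-band
    return sigma_n ** 2 / sigma_x

rng = np.random.default_rng(3)
subband = rng.normal(0.0, 3.0, 10000) + rng.normal(0.0, 1.0, 10000)  # signal + noise
t = bayes_shrink_threshold(subband, sigma_n=1.0)
assert 0.25 < t < 0.4    # roughly sigma_n^2 / sigma_x = 1/3
```

Note that the threshold shrinks as the sub-band becomes more signal-dominated, which is exactly the adaptivity VisuShrink lacks.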

Nezamoddini & Fieguth (2005) presented a dyadic Gabor filter bank combined with the BayesShrink method for image denoising. In the proposed system, the noisy image is decomposed into diverse channels at multiple levels by a dyadic Gabor filter bank. To recover the image, the corrupting noise is removed by applying the proposed BayesShrink technique to the noisy Gabor coefficients. The noise variance is estimated in the Gabor domain, and the estimate is then used to dynamically determine an individual threshold for each spatial-frequency channel. Finally, the denoised coefficients are transformed back to reconstruct the image.

Ho & Hwang (2013) proposed an approach based on a hidden Bayesian network constructed from the wavelet coefficients using the prior probability of the original image. Denoised wavelet coefficients are obtained with a Maximum-A-Posteriori (MAP) estimator. The results demonstrate better perceptual quality in the textured areas of the image, although the execution time is high.

NeighShrink

In general, denoising an image degraded by Gaussian noise is a classical problem in signal and image processing. Donoho and his collaborators at Stanford established a wavelet denoising scheme that thresholds the wavelet coefficients arising from the standard discrete wavelet transform, work widely adopted in science and engineering applications. However, this denoising scheme tends to kill too many wavelet coefficients that might contain useful image information. Chen et al. (2004) therefore proposed a wavelet image thresholding scheme that incorporates neighbouring coefficients, namely NeighShrink. The method is well founded, since a large wavelet coefficient will probably have large wavelet coefficients in its neighbourhood.
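NeighShrink's neighbourhood rule scales each coefficient by β = (1 − T²/S²)₊, where S² is the sum of squared coefficients in a small window around it and T is the universal threshold. A minimal sketch (the 3×3 window and threshold value are hypothetical; not the authors' code):

```python
import numpy as np

# Each coefficient is shrunk by a factor that depends on the energy of its
# neighbourhood: isolated weak coefficients are suppressed, while a
# coefficient surrounded by (or carrying) large energy is preserved.
def neigh_shrink(coeffs, t, win=3):
    pad = win // 2
    padded = np.pad(coeffs, pad, mode='reflect')
    out = np.empty_like(coeffs)
    for i in range(coeffs.shape[0]):
        for j in range(coeffs.shape[1]):
            s2 = np.sum(padded[i:i + win, j:j + win] ** 2)    # neighbourhood energy
            beta = max(1.0 - t ** 2 / s2, 0.0) if s2 > 0 else 0.0
            out[i, j] = beta * coeffs[i, j]
    return out

c = np.array([[5.0, 0.1, 0.1],
              [0.1, 0.1, 0.1],
              [0.1, 0.1, 0.1]])
d = neigh_shrink(c, t=1.0)
assert abs(d[0, 0]) > 4.0        # strong coefficient largely preserved
assert abs(d[2, 2]) < 0.1        # isolated weak coefficients suppressed
```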

ModiNeighShrink

Denoising natural images degraded by Gaussian noise with wavelet techniques is very effective because of their ability to capture the energy of a signal in a few large transform coefficients. The wavelet denoising scheme accordingly thresholds the wavelet coefficients arising from the normalized discrete wavelet transform. Mohideen et al. (2008) explored the suitability of different wavelet bases and of different neighbourhood sizes for the performance of image denoising algorithms in terms of PSNR.

NeighShrink and ModiNeighShrink are efficient image denoising algorithms based on the universal threshold and the discrete wavelet transform. An enhanced image denoising technique based on wavelet thresholding with a modified universal threshold gives better results than ModiNeighShrink and NeighShrink. These approaches still kill many wavelet coefficients, some of which may contain valuable image information, so image quality may suffer. Hari & Mantosh (2014) extended the idea of Cai and Silverman to develop a new image denoising method that determines the coefficients of a neighbouring window for every sub-band.

SureShrink and NeighSure

Several shrinkage methods are limited by computational intractability, lack of spatial adaptivity, and similar issues. To address these problems, Donoho et al. (1995) developed an asymptotically minimax method, which is nearly minimax for pointwise error, global error, and other criteria. The underlying theory concerns the relation between statistical estimation and optimal recovery.

Dengwen & Wengang (2008) presented an improved method that determines an optimal threshold and neighbourhood window size for each sub-band using Stein's Unbiased Risk Estimate (SURE). Its denoising performance is significantly higher than that of NeighShrink and also surpasses SURE-LET, a recent denoising algorithm based on SURE. Since increasing the redundancy of wavelet transforms can considerably enhance denoising performance, the proposed scheme was also extended to the redundant Dual-Tree Complex Wavelet Transform (DT-CWT).
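SURE-based threshold selection for soft thresholding can be sketched as follows, assuming unit noise variance for simplicity: the unbiased risk estimate N − 2·#{|x| ≤ t} + Σ min(x², t²) is evaluated at every candidate threshold and the minimizer is returned (the SureShrink idea, here on synthetic data):

```python
import numpy as np

# Evaluate the SURE risk at each candidate threshold t = |x_(k)| (sorted);
# coefficients below the candidate contribute x^2, the rest contribute t^2.
def sure_threshold(x):
    x2 = np.sort(x ** 2)
    n = len(x)
    cum = np.cumsum(x2)
    risks = n - 2.0 * np.arange(1, n + 1) + cum + x2 * np.arange(n - 1, -1, -1)
    return np.sqrt(x2[np.argmin(risks)])

rng = np.random.default_rng(4)
noise_only = rng.standard_normal(2048)
t = sure_threshold(noise_only)
assert t > 1.0    # for pure noise, SURE picks a comparatively large threshold
```

In practice the coefficients are first normalized by the estimated noise standard deviation, and a hybrid rule falls back to the universal threshold in sparse sub-bands.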

NeighLevel

Noise reduction is a vital step in visual enhancement and plays a major role in subsequent processing tasks such as image analysis. The main goal of denoising natural images is thus to suppress the artifacts that corrupt an image and remove the additive Gaussian noise without sacrificing the delicate texture of the latent image; the dominant noise introduced during acquisition and transmission of digital images is assumed to be AWGN. Cho et al. (2009) presented three wavelet shrinkage approaches, namely NeighShrink, NeighSure and NeighLevel. NeighShrink thresholds the wavelet coefficients based on Donoho's universal threshold and the sum of squares of all wavelet coefficients within a neighbourhood window. NeighSure adopts Stein's Unbiased Risk Estimator (SURE) in place of the universal threshold of NeighShrink, obtaining the optimal threshold with minimum risk for each sub-band. NeighLevel employs parent coefficients at a coarser level in addition to neighbours in the same sub-band.

Hari & Mantosh (2012) proposed an improved approach that estimates the threshold and the neighbouring window size for each sub-band using its length. Experimental results show that the approach outperforms the existing methods, i.e., ModiNeighShrink, VisuShrink and NeighShrink, in terms of Peak Signal-to-Noise Ratio (PSNR), and hence in the visual quality of the image.

Bivariate Shrinkage

To recover the original noise-free image, a non-Gaussian bivariate probability distribution function is used to model the wavelet coefficients of images. The approach is based on a non-linear function that generalizes the soft thresholding approach of Donoho & Johnstone (1995).

Let w2 represent the parent of w1 (w2 is the wavelet coefficient at the same spatial position as w1, but at the next coarser scale). Then

y = w + n (3.8)

where w = (w1, w2), y = (y1, y2) and n = (n1, n2). Using the empirical histograms, the non-Gaussian bivariate probability distribution function can be stated as

p(w) = (3 / (2πσ²)) exp( −(√3/σ) √(w1² + w2²) ) (3.9)

In equation (3.9), w1 and w2 are uncorrelated, but not independent. The bivariate shrinkage function obtained using the Maximum-a-Posteriori (MAP) estimator of w1 is given as

ŵ1 = ( ( √(y1² + y2²) − √3 σn²/σ )₊ / √(y1² + y2²) ) · y1 (3.10)

where (g)₊ equals g if g ≥ 0 and 0 otherwise, and σn² is the noise variance.
The bivariate shrinkage in equation (3.10) is derived using a


Bayesian estimation approach proposed by Sendur & Selesnick (2002a).
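Equation (3.10) translates directly into code (an illustrative sketch; the coefficient values and variance parameters are hypothetical):

```python
import numpy as np

# Bivariate shrinkage: each coefficient y1 is shrunk jointly with its
# parent y2 at the next coarser scale, following equation (3.10).
def bivariate_shrink(y1, y2, sigma_n, sigma):
    r = np.sqrt(y1 ** 2 + y2 ** 2)
    gain = np.maximum(r - np.sqrt(3.0) * sigma_n ** 2 / sigma, 0.0)  # (.)_+ part
    return np.where(r > 0, gain / np.maximum(r, 1e-12) * y1, 0.0)

y1 = np.array([4.0, 0.2])
y2 = np.array([3.0, 0.1])          # parents at the coarser scale
w1 = bivariate_shrink(y1, y2, sigma_n=1.0, sigma=2.0)
assert np.isclose(w1[0], (5.0 - np.sqrt(3.0) / 2.0) / 5.0 * 4.0)
assert w1[1] == 0.0                # weak child-parent pair is set to zero
```

A coefficient with a strong parent is thus shrunk less than an isolated one of the same magnitude, which is the point of modeling the parent-child dependency.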

Sendur & Selesnick (2002a) modeled the dependencies between the coefficients and their parents in particular. For this purpose, new non-Gaussian bivariate distributions were proposed, and the corresponding nonlinear threshold functions (shrinkage functions) were derived from the models using Bayesian estimation theory. The new shrinkage functions do not assume independence of the wavelet coefficients. Three image denoising examples were exhibited to show the performance of these new bivariate shrinkage rules. In the second example, a simple sub-band-dependent, data-driven image denoising system was described and compared with effective data-driven techniques in the literature, namely the VisuShrink, hidden Markov, SureShrink and BayesShrink models. In the third example, the same idea was applied to the dual-tree complex wavelet coefficients.

Generally, the results of image denoising techniques can be improved if statistical dependencies among wavelet coefficients are taken into account. Sendur & Selesnick (2002b) proposed an image denoising method based on the joint statistics of wavelet coefficients, using a locally adaptive bivariate shrinkage function. They evaluated the approach with both the orthogonal wavelet transform and the dual-tree complex wavelet transform; the scheme was more competitive than existing methods.

Zhang et al. (2013) offered a novel image denoising technique based on an enhanced Bivariate Model (BM) in the Tetrolet domain. The model fits the joint distribution of parent-child Tetrolet coefficients with a Scale Variable Parameter Bivariate Model (SVPBM), and the corresponding nonlinear threshold shrinkage functions are derived from the SVPBM using the maximum a posteriori estimator. To evaluate the performance of the technique, the algorithm is applied to images corrupted by additive Gaussian noise over a wide range of noise variances.

3.2.3 Denoising using Dual Tree Complex Wavelet Transform

Kingsbury (1999) first reviewed how wavelets may be used for multi-resolution image processing, describing the filter-bank implementation of the Discrete Wavelet Transform (DWT) and how it may be extended via separable filtering to images and other multidimensional signals. He then showed that the condition for inversion of the DWT (perfect reconstruction) forces many commonly used wavelets to be similar in shape, and that this shape engenders severe shift dependence (variation of DWT coefficient energy at any given scale with shifts of the input signal). Separable filtering with the DWT also prevents the transform from providing directionally selective filters for diagonal image features. Complex wavelets can deliver both shift invariance and good directional selectivity, with only modest increases in signal redundancy and computational load. However, developing a Complex Wavelet Transform (CWT) with perfect reconstruction and good filter characteristics had proved difficult until recently. The dual-tree CWT was proposed as a solution to this problem, yielding a transform with attractive properties for a range of signal and image processing applications, including texture analysis and synthesis, motion estimation, denoising, and object segmentation.
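The shift dependence described above is easy to demonstrate with a one-level Haar decomposition (a toy sketch, not Kingsbury's code): the energy of the detail coefficients changes when the input is shifted by a single sample, whereas a shift-invariant transform would keep it constant.

```python
import numpy as np

# One-level Haar detail energy of a signal: sum of squared differences of
# sample pairs, scaled to preserve energy.
def haar_detail_energy(x):
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return float(np.sum(d ** 2))

step = np.array([0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0])
e0 = haar_detail_energy(step)               # edge falls between coefficient pairs
e1 = haar_detail_energy(np.roll(step, 1))   # edge falls inside a pair
assert e0 != e1                             # detail energy depends on the shift
```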

Kingsbury (2001) proposed a discrete wavelet transform that generates complex coefficients by using a dual tree of wavelet filters to obtain their real and imaginary parts. This introduces limited redundancy (2m : 1 for m-dimensional signals) and allows the transform to provide approximate shift invariance and directionally selective filters (properties lacking in the traditional wavelet transform) while preserving the usual properties of perfect reconstruction and computational efficiency with well-balanced frequency responses. He examines how the new transform can be designed to be shift invariant, how to estimate the accuracy of this approximation, and how to design suitable filters to achieve it.

Chen et al. (2012) proposed a new method for image denoising using the dual-tree complex wavelet transform, which has approximate shift invariance and good directional selectivity. The thresholding formula uses three scales of complex wavelet coefficients for image denoising.

Image denoising approaches are employed to eliminate the noise components without affecting the significant image features and content. Wavelet transforms represent image energy compactly, and this representation helps to find a threshold separating noisy components from essential image features. Chinna et al. (2012) suggested an appropriate data-based thresholding method using the dual-tree complex wavelet transform.

Bal (2012) demonstrated a denoising algorithm employing the dual-tree complex wavelet transform. The algorithm is based on the assumption that, for the Poisson noise case, threshold values for the wavelet coefficients can be predicted from the estimated coefficients. The proposed technique was compared with one of the state-of-the-art denoising algorithms, and improved results were obtained in terms of image quality metrics. Additionally, the contrast enhancement effect of the proposed system on collagen fiber images was examined. The approach permits fast and effective enhancement of images obtained under low-light conditions.

Remenyi et al. (2014) introduced an innovative image denoising technique based on 2D scale-mixing complex-valued wavelet transforms. Both the minimal (unitary) and redundant (maximum-overlap) versions of the transform were employed, and the covariance structure of white noise in the wavelet domain is taken into account. Estimation is performed via empirical Bayesian methods, including versions that preserve the phase of the complex-valued wavelet coefficients and versions that do not.

Varsha and Preetha (2014) presented dual-tree complex wavelet transform based image denoising using a generalized cross-validation technique. The denoising performance for different images using the dual-tree complex wavelet transform with different thresholding techniques is evaluated in terms of peak signal-to-noise ratio, mean structural similarity and coefficient of correlation.

Suresh et al. (2014) proposed two algorithms based on the wavelet transform, whose principal operation decomposes the signal into real and imaginary orthonormal series: the Dual Tree Complex Wavelet Transform (DTCWT) and the Dual Tree Complex Wavelet Transform with Orthogonal Shift Property (DTCWT with OSP), both applied to image denoising. The new technique for impulse noise reduction preserves both the orthogonal and symmetric properties as well as the originality of the image better than the plain wavelet transform, and the experimental results of the proposed algorithms show a better Signal-to-Noise Ratio (SNR).

Zhang & Liu (2015) described an image denoising algorithm using a locally adaptive window bivariate model in the Dual Tree Complex Wavelet Transform (DTCWT) domain. The algorithm exploits the approximate shift invariance and good directional selectivity of the DTCWT; according to the correlation of neighbourhood coefficients, an appropriately sized neighbourhood window is selected for the bivariate model, so as to achieve better noise reduction.

3.2.4 Denoising by Combined Techniques

Balster et al. (2006) proposed a combined spatial- and temporal-domain wavelet shrinkage algorithm for video denoising. The spatial-domain method is a selective wavelet shrinkage technique that uses a two-threshold criterion to exploit the geometry of the wavelet sub-bands of each video frame, each frame of the image sequence being spatially denoised independently of the others. The temporal-domain method is a selective wavelet shrinkage technique that estimates the level of noise corruption and the amount of motion in the image sequence. The noise level is estimated to determine how much filtering is needed in the temporal domain, and the amount of motion is taken into account to determine the degree of similarity between consecutive frames.

Othman & Qian (2006) implemented a novel noise reduction algorithm devised for the problem of denoising hyperspectral imagery. The algorithm resorts to the spectral derivative domain, where the noise level is elevated, and benefits from the dissimilarity of the signal regularity in the spectral and spatial dimensions of hyperspectral images. The performance of the algorithm was verified on two hyperspectral data cubes: an Airborne Visible/Infra-Red Imaging Spectrometer (AVIRIS) data cube acquired over a vegetation-dominated site, and a simulated AVIRIS data cube that simulates a geological site.

Chinna & Madhavi (2010) proposed a new denoising technique for images degraded by additive white Gaussian noise. The approach combines the wavelet transform and the fractal transform with recursive Gaussian diffusion. The digital image data compression apparatus comprises a controller circuit for receiving digital image data and processing it into blocks; the controller circuit delivers the processed image data to a plurality of transforming circuits and to a feeder circuit. The wavelet transform is known to be good at processing point singularities and small patches, so a combination of these two transforms can be used for image denoising: the resulting image contains the data considered significant by either of the two transforms, yielding a visually better denoised image.

Iqbal et al. (2014) proposed a dual-tree complex wavelet transform and singular value decomposition based medical image resolution enhancement using a non-local means filter. The image is enhanced using singular value decomposition, and the high-frequency sub-bands are obtained using the dual-tree complex wavelet transform. The enhanced low-resolution image and the high-frequency sub-bands are then interpolated using a Lanczos interpolator. A non-local means filter is used to reduce the artifacts produced by the dual-tree complex wavelet transform. Quantitative and qualitative analyses are used to assess the performance of the proposed technique.

Yu et al. (2016) proposed a combined acoustic noise reduction system based on the DTCWT and SVD, guided by the notion that noise reduction approaches ought to match the features of the noisy signal. According to prior studies, the energy of acoustic signals collected under leaking conditions is mostly concentrated in the low-frequency portion (0 to 100 Hz). Furthermore, the ultralow-frequency component (0 to 5 Hz), taken as the characteristic frequency band in that work, can travel a considerable distance and be captured by sensors. Consequently, to filter the noise while preserving the characteristic frequency band, the DTCWT is used as the core to perform multilevel decomposition and refinement of the acoustic signals, and SVD is used to remove noise in the non-characteristic bands.

Image processing is essentially carried out to improve or restore a noisy image; the former is known as image enhancement and the latter as image restoration. An image becomes degraded by noise during the acquisition phase or during the transmission phase. Denoising can be carried out by numerous approaches such as neighbourhood operations, arithmetic operations, transforms, etc. Pankaj & Rekha (2016) combined neighbourhood processing techniques with a transform, specifically the wavelet transform.
