
NOVEL REFORMULATION TAYLOR JET SPLITTING

APPROACH FOR LINEWISE SEGMENTATION


By

A.MARIA MICHAEL LIDIYA


960417621316
Of
C.S.I INSTITUTE OF TECHNOLOGY, THOVALAI
A PROJECT REPORT
Submitted to the
FACULTY OF INFORMATION AND COMMUNICATION ENGINEERING

In partial fulfillment of the requirements


for the award of the degree
of

MASTER OF COMPUTER APPLICATIONS

ANNA UNIVERSITY

CHENNAI
May 2020
ANNA UNIVERSITY, CHENNAI

BONAFIDE CERTIFICATE

Certified that this Report titled “NOVEL REFORMULATION TAYLOR JET SPLITTING APPROACH FOR LINEWISE SEGMENTATION” is the bonafide work of A. MARIA MICHAEL LIDIYA (960417621316), who carried out the work under my supervision. Certified further that, to the best of my knowledge, the work reported herein does not form part of any other thesis or dissertation on the basis of which a degree or award was conferred on an earlier occasion on this or any other candidate.

Head of the Department                                Supervisor

Mrs. Julie Emerald Jiju, MCA, M.Tech, M.Phil.         Mrs. V. Merin Shobi, MCA, M.Phil.
Head of the Department                                Assistant Professor
Department of M.C.A                                   Department of M.C.A
C.S.I Institute of Technology                         C.S.I Institute of Technology
Thovalai – 629302                                     Thovalai – 629302

Submitted for the project Viva-Voce examination held on

Internal Examiner External Examiner



ABSTRACT

We propose an algorithm to efficiently compute approximate solutions of the piecewise affine Mumford-Shah model. The algorithm is based on a novel reformulation of the underlying optimization problem in terms of Taylor jets. A splitting approach leads to linewise segmented jet estimation problems for which we propose an exact and efficient solver. The proposed method has the combined advantages of prior algorithms: it directly yields a partition, it does not need an initialization procedure, and it is highly parallelizable. The experiments show that the algorithm has lower computation times and that the solutions often have lower functional values than the state-of-the-art.

Keywords: Image partitioning, piecewise affine-linear Mumford-Shah model, image segmentation, unsupervised segmentation, non-convex optimization, splitting approach, image processing, image edge detection.



ACKNOWLEDGEMENT

First and foremost, I thank God Almighty for His grace in enabling me to complete this project work successfully.

Words cannot express my gratitude to our respected Principal, Dr. K. DHINESH KUMAR, M.E., Ph.D., for his support and the freedom given throughout my course.

I would like to thank our Head of the Department, Mrs. M. JULIE EMERALD JIJU, MCA, M.Tech, M.Phil., Department of MCA, for her valuable help and encouragement towards my project.

It is my proud privilege to express my sincere thanks and gratitude to my guide, Mrs. V. MERIN SHOBI, MCA, M.Phil., Department of Computer Applications, for providing me with her valuable suggestions and constant guidance throughout the development of this project.

It is my great pleasure to acknowledge our parents and family members for their prayers and generous support extended towards the successful completion of this project.

CHAPTER NO    TITLE    PAGE NO
ABSTRACT III
ACKNOWLEDGEMENT IV
LIST OF TABLES VIII
LIST OF FIGURES IX
LIST OF ABBREVIATIONS X
1 INTRODUCTION
2 SYSTEM ANALYSIS AND DESIGN

2.1 Fast Piecewise-Affine Motion Estimation Without Segmentation
2.2 Fast Approximate Energy Minimization via
Graph Cuts
2.3 Multiway Cut for Stereo and Motion with Slanted Surfaces
2.4 SAR Target Recognition Based on Active Contour Without Edges

2.5 Low-Rank Quaternion Approximation for Color Image Processing
3 SYSTEM DESIGN

3.1 Existing System

3.2 Proposed System

3.3 System Specification

3.3.1 Software Requirement

3.3.2 Hardware Requirement

4 SYSTEM IMPLEMENTATION

4.1 Modules
4.1.1 Piecewise Jet Formation

4.1.2 Discretization Jet Formation


4.1.3 Image Splitting

4.1.4 Linewise Segmentation

5 SYSTEM DESIGN

5.1 Block Diagram

5.2 Data Flow Diagram

6 LANGUAGE DESCRIPTION

6.1 Introduction

6.2 Matlab's Power Of Computational Mathematics
6.3 Features Of Matlab

6.4 Uses Of Matlab

6.5 Understanding The Matlab Environment

6.6 Commonly Used Operators And Special Characters
6.7 Commands

6.7.1 Commands for managing a session

6.8 Input And Output Command

6.9 M Files

6.10 Data Types Available In Matlab

7 SYSTEM TESTING

7.1 Introduction

7.2 Types Of Tests

7.2.1 Unit testing

7.2.2 Integration testing

7.2.3 Functional test

7.2.4 System Test

7.2.5 White Box Testing

7.2.6 Black Box Testing

7.3 Unit Testing

7.4 Integration Testing

7.5 Acceptance Testing

8 SCREEN SHOTS

9 CONCLUSION

APPENDIX XII

REFERENCES XIV

LIST OF TABLES

TABLE NO TABLE NAME PAGE NO

6.6 Operators And Special Characters

6.7.1 Commands For Managing A Session

6.8 Input and Output Commands

6.10 Data Types in MATLAB


LIST OF FIGURES

FIGURE NO FIGURE NAME PAGE NO

5.1 Architecture Diagram

5.2.1 Level 0 - Piecewise Jet Formation

5.2.2 Level 1 - Discretization Jet Formation

5.2.3 Level 2 - Image Splitting

5.2.4 Level 3 - Linewise segmentation

6.5.1 MATLAB desktop environment

6.5.2 Current folder

6.5.3 Command window

6.5.4 Workspace

6.5.5 Command history


LIST OF ABBREVIATIONS

NO    ABBREVIATION    DESCRIPTION

1    PCW Model    Piecewise Model

2    SVM    Support Vector Machine

3    SAR    Synthetic Aperture Radar

4    TGV    Total Generalized Variation Model

5    GNC    Graduated Non-Convexity

6    ADMM    Alternating Direction Method of Multipliers

APPENDIX

Pseudo Code for the ADMM Strategy


Details for the Piecewise-Affine Mumford-Shah Model in the Multivariate Case
1) Discretized Problem:
Starting from (30), in analogy to the derivation starting from (6) in the single-channel case, we obtain the discretized problem.

Then the analogue of the splitting (10) in the single-channel case becomes

We rewrite (33) in the way of (11), which gives

The Lagrangian of (34), which corresponds to (13) in the univariate case, is now understood w.r.t. the Lagrange multipliers

Algorithm 1 ADMM Strategy for the Piecewise Affine-Linear Mumford-Shah Problem

Input: Image f ∈ R^(m×n); model parameter γ > 0;
neighborhood {d_1, …, d_S} and weights {w_1, …, w_S};
stopping parameter ε; coupling sequences (u_j)_(j∈N), (v_j)_(j∈N)

Output: The first-order jet J* = (u*, a*, b*)

Initialize (u_s, a_s, b_s) = (f, 0, 0) for all s = 1, …, S; multipliers λ_(s,t), ρ_(s,t) for all 1 ≤ s < t ≤ S;
j ← 1

2) Univariate Subproblems:
The resulting univariate subproblems in (23) for a fixed line Q now have the prototypical form

(J_Q)*

Concerning efficiently solving linewise segmented multichannel jet problems as demonstrated in Section 2.4 of the main text, one has to mind that E denotes the sum over all channels k of the residual sum of squares of the best approximating bivariate polynomial, that is,

The least squares formulation of (36) on the discrete “interval” (q_l, …, q_r) is
CHAPTER 1
INTRODUCTION

Variational methods are an active field in image partitioning. The idea is to model the partitioning as the result of an optimization problem which imposes regularity on both regions and boundaries. A popular variational approach to image partitioning is the piecewise constant Mumford-Shah model. The central idea is to partition the image domain such that the total boundary length of all segments is small and such that the induced piecewise constant function approximates the image well. The model is frequently applied for unsupervised segmentation of color images, depth images, texture images, and medical images, to mention only some examples. However, the piecewise constant model assumption is often restrictive: many types of images possess linear trends within their segments: consider for example the sky in a landscape image, or an object with an illumination gradient in a conventional image; further, the bias in a magnetic resonance image, as well as depth and motion fields, show linear trends.
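The linewise solvers behind such models reduce, in their simplest (piecewise constant) form, to a one-dimensional Potts problem that can be solved exactly by dynamic programming. The following is an illustrative Python sketch (the report's implementation is in MATLAB; the function name and the jump penalty γ are chosen for the example):

```python
import numpy as np

def potts_1d(f, gamma):
    """Exact solver for min_u sum((u - f)^2) + gamma * (#jumps of u)
    via the classic O(n^2) dynamic program with prefix sums."""
    f = np.asarray(f, dtype=float)
    n = len(f)
    # Prefix sums of f and f^2 allow O(1) segment error evaluation.
    s1 = np.concatenate([[0.0], np.cumsum(f)])
    s2 = np.concatenate([[0.0], np.cumsum(f * f)])

    def seg_err(l, r):  # best SSE on segment f[l..r] (0-based, inclusive)
        m = r - l + 1
        mean = (s1[r + 1] - s1[l]) / m
        return (s2[r + 1] - s2[l]) - m * mean * mean

    B = np.full(n + 1, np.inf)   # B[r] = optimal energy of f[0..r-1]
    B[0] = -gamma                # offset: gamma is paid once per segment below
    jump = np.zeros(n + 1, dtype=int)
    for r in range(1, n + 1):
        for l in range(1, r + 1):
            cand = B[l - 1] + gamma + seg_err(l - 1, r - 1)
            if cand < B[r]:
                B[r], jump[r] = cand, l - 1
    # Backtrack the boundaries and fill each segment with its mean.
    u = np.empty(n)
    r = n
    while r > 0:
        l = jump[r]
        u[l:r] = f[l:r].mean()
        r = l
    return u

# A clean two-level signal with a small gamma splits into two segments.
print(potts_1d([0, 0, 0, 5, 5, 5], gamma=1.0))
```

A large γ forces a single segment (the global mean); a small γ recovers the true breakpoints exactly, which is the sense in which such linewise subproblems admit exact solvers.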

Another possibility to account for color gradients in the image is the piecewise smooth Mumford-Shah model, which penalizes the deviations from piecewise constant functions controlled by an additional hyperparameter β (instead of enforcing piecewise constancy). However, the piecewise smooth model comes with side-effects: the estimated discontinuity set is in general not the boundary set of a partition, as it can have open ends (so-called “crack tips”), and it suffers from the gradient limit effect, i.e. the creation of spurious discontinuities at steep slopes. Even though linear trends are better representable by the piecewise smooth Mumford-Shah model than by the piecewise constant variant, approximating them increases its energy (in contrast to the piecewise affine variant).

Therefore, the solution space of the piecewise smooth model does not contain genuine piecewise affine functions but only approximations of them. Depending on the application, the principal interest in solving (1) can be to find an optimal partition P*, which may be used for segmentation or superpixel computation; or one can be interested in finding an optimal piecewise affine function u∗, which may be used for the regularization of flow fields or for the guidance of image filters. The focus of this paper is the efficient computation of the partition.

Partitioning of images is a central task in computer vision. In a supervised setup, learning-based approaches, most notably convolutional neural networks, have become standard. If not sufficiently many training data are available, model-based methods are employed. Classical methods are k-means clustering, region growing, and the watershed algorithm. Another popular technique for clustering (without having to fix the number of clusters in advance) is based on mean shift filtering; overviews of such image filtering methods can be found in the literature.

1.1 Piecewise Smooth Approximations


A synthetic 128 × 128 image with three different blobs for each color channel is considered. In the piecewise smooth approximation, the vector MS model clearly favors solutions where the edge sets coincide. This is not the case for the channel-wise variant, which processes one color at a time and is thus “color blind” w.r.t. color as a whole. This is further confirmed on a real-world image by directly visualizing the edge sets of the different colors (coloring them accordingly). We used an 8×8×8 color discretization. We compute piecewise smooth approximations for the 256 × 256 image “Dama con l’ermellino” by Leonardo da Vinci using vector MS and channel-wise MS. The parameters are α = 100 and λ = 0.1, with a 32 × 32 × 32 discretization. The large parameter α makes the approximation nearly piecewise constant, showing significant color artifacts of the channel-wise MS model, as opposed to the vector one.

1.2 Denoising

To compare various regularizers, we devised a synthetic denoising experiment, degrading the 128 × 128 test image by adding severe 30% noise. The result using simple total variation regularization produces loss of contrast, and staircasing effects are visible in the grey lower right part. The result of the Ambrosio-Tortorelli method is also shown. It is a non-convex approximation of the Mumford-Shah energy and thus merely allows computing a local minimum, missing the blue region. Finally, the proposed vector MS relaxation provides more natural results than the other approaches (32×32×32).
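The total variation baseline referred to above can itself be sketched compactly. The following illustrative Python snippet (parameters and function name are assumptions for the demo, not the report's code) minimizes the 1D ROF energy ½‖u − f‖² + λ·TV(u) by projected gradient ascent on the dual variable:

```python
import numpy as np

def tv_denoise_1d(f, lam, iters=500):
    """1D ROF denoising via projected gradient on the dual problem:
    u = f - D^T p, with the dual variable p clipped to [-lam, lam]."""
    f = np.asarray(f, dtype=float)
    p = np.zeros(len(f) - 1)
    tau = 0.25  # safe step size: the difference operator satisfies ||D D^T|| <= 4

    def Dt(p):  # adjoint of the forward difference operator np.diff
        return np.concatenate([[-p[0]], p[:-1] - p[1:], [p[-1]]])

    for _ in range(iters):
        u = f - Dt(p)                              # primal estimate
        p = np.clip(p + tau * np.diff(u), -lam, lam)  # ascent step + projection
    return f - Dt(p)
```

Because sum(Dᵀp) telescopes to zero, the denoised signal keeps the mean of the input while its total variation shrinks, which is the loss-of-contrast/staircasing behavior discussed above.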

1.3 Unsupervised Image Partitioning

In the limiting case α → ∞, the smoothness constraint is enforced more and more, leading to the well-known piecewise constant approximation, also known as the minimal partition problem. The first constraint reduces to just σ_i^t(x, t) ≥ −(t − f_i(x))². Only one parameter λ remains in the model, which controls the overall length of the interfaces between the regions where u is constant. The standard approach to compute such a multi-region partitioning is by alternatingly determining a small number of color models (with fixed regions) and optimizing for the regions (with fixed color models). Of course, for such iterative approaches results are invariably suboptimal and will depend on the choice of the initial color models. In contrast, the proposed convex relaxation allows jointly optimizing over the regions of constancy and the corresponding colors in these regions in a single convex optimization problem. In particular, the algorithm determines the appropriate number of color models depending on the input image and the scale parameter λ.
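The alternating scheme described above (fit color models for fixed regions, then reassign regions for fixed models) is structurally the same as Lloyd's k-means iteration; a minimal Python sketch with illustrative data follows (names and data are assumptions for the demo):

```python
import numpy as np

def alternating_partition(pixels, k, iters=20, seed=0):
    """Alternate between fitting color models (segment means) and
    reassigning pixels to their nearest model -- a Lloyd-type scheme
    whose result depends on the initial models."""
    rng = np.random.default_rng(seed)
    pixels = np.asarray(pixels, dtype=float)
    models = pixels[rng.choice(len(pixels), size=k, replace=False)]
    for _ in range(iters):
        # Region step: assign each pixel to the closest color model.
        d = np.linalg.norm(pixels[:, None, :] - models[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Model step: refit each model as the mean color of its region.
        for j in range(k):
            if np.any(labels == j):
                models[j] = pixels[labels == j].mean(axis=0)
    return labels, models
```

The dependence on initialization visible in this sketch is exactly the weakness that the convex relaxation discussed above avoids.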
1.4 Joint Disparity and Segmentation

Our final experiment is a more advanced application of the vector Mumford-Shah model to stereo image analysis. Given a stereo image pair, the task is to jointly compute a disparity map and a color segmentation, the central idea being that discontinuities in disparity and color tend to coincide. While this problem has been addressed in the non-convex Ambrosio-Tortorelli framework, we can apply the proposed convex relaxation of the vector Mumford-Shah functional to compute solutions independent of initialization. The proposed problem simply corresponds to the case of k = 4 channels, corresponding to the three color channels and one depth channel: u = (u_color, u_depth).

CHAPTER 2
SYSTEM ANALYSIS AND DESIGN

2.1 Fast Piecewise-Affine Motion Estimation without Segmentation

We have proposed a new method to estimate piecewise affine motion fields. In contrast to related methods, our approach does not rely on explicit segmentation but directly estimates a piecewise-affine motion field. Key steps in the derivation are the specific formulation of the energy functional as a constrained optimization problem and the decomposition into tractable subproblems by an alternating direction method of multipliers strategy. Then, these subproblems are cast to (non-convex) univariate piecewise-affine problems. A crucial ingredient of our method is that we are able to solve them exactly and efficiently. Our method overcomes the two main limitations of previous piecewise-parametric approaches, namely, sensitivity to initialization and computational cost. Yet, our experiments show that it is competitive in terms of quality. Further, they suggest that the piecewise affine model improves upon total variation and total generalized variation regularizations when using similar data terms. The versatility of our proximal splitting strategy lets extensions of the method to new data terms be easily implemented. Thus, the proposed approach can serve as a general regularization framework for motion estimation.
Two important prior models have been explored for motion estimation. The first one works with a dense representation of motion and imposes at each pixel a smoothness constraint such as total variation (TV) or total generalized variation (TGV). These regularization terms are most often convex and well suited to a large collection of optimization techniques. However, they create undesirable artifacts like staircasing or smoothing of motion discontinuities. The second type of prior model works with parametric representations of motion, which may be chosen to provide a satisfying match of 3D translations on the camera plane. In particular, the piecewise-affine model overcomes the limitations of TV and TGV and has yielded very accurate results in several previous works. Yet, in spite of these achievements, local smoothness priors are still preferred to piecewise-parametric ones in most motion estimation methods. The main difficulty that restrains the spread of the piecewise-parametric model is the challenging optimization problem it involves. In this work, we address this optimization issue. The problem is usually formulated as the joint segmentation of the motion field and estimation of the parameters inside each region, following the seminal work of Mumford and Shah for image segmentation. These two tasks translate into a highly non-convex optimization problem. The existing solutions proceed iteratively by alternating an optimization step with respect to the image partition and an optimization step with respect to the motion parameters. This alternating scheme causes two main issues. Firstly, the resulting scheme is very sensitive to initialization and can only be used for refinement. Secondly, the computational cost is often prohibitive for practical applications. In particular, it depends on the number of regions, which should typically be very large to achieve high accuracy.
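The ADMM strategy mentioned above splits a coupled problem into subproblems with closed-form updates. A generic illustration in Python, applied to the classic lasso problem rather than the motion-estimation energy itself (all parameter values here are assumptions for the demo):

```python
import numpy as np

def lasso_admm(A, b, lam, rho=1.0, iters=300):
    """ADMM for min_x 0.5*||Ax - b||^2 + lam*||x||_1.
    The splitting x = z decouples a ridge-type solve from a
    soft-thresholding step, linked by the scaled multiplier u."""
    n = A.shape[1]
    x = z = u = np.zeros(n)
    # The x-update solves (A^T A + rho I) x = A^T b + rho (z - u).
    AtA = A.T @ A + rho * np.eye(n)
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(AtA, Atb + rho * (z - u))
        # z-update: proximal operator of lam*||.||_1 (soft-thresholding).
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)
        u = u + x - z  # dual ascent on the scaled multiplier
    return z
```

With A the identity, the lasso solution is the soft-thresholding of b, which the iteration reproduces; in the motion setting the same pattern yields the tractable subproblems described above.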

2.2 Fast Approximate Energy Minimization via Graph Cuts


We consider a wide class of energy functions with various discontinuity-preserving smoothness constraints. While it is NP-hard to compute the exact minimum, we developed two algorithms based on graph cuts that efficiently find a local minimum with respect to two types of large moves, namely, expansion moves and swap moves. Our expansion algorithm finds a labeling within a known factor of the global minimum, while our swap algorithm handles more general energy functions. Empirically, our algorithms perform well on a variety of computer vision problems such as image restoration, stereo, and motion. We believe that combinatorial optimization techniques, such as graph cuts, will prove to be powerful tools for solving many computer vision problems.
An example of a local method using standard moves is Iterated Conditional Modes (ICM): for each pixel, the label which gives the largest decrease of the energy function is chosen, until convergence to a local minimum. Another example of an algorithm using standard moves is simulated annealing, which was popularized in computer vision by Geman and Geman. Annealing is popular because it is easy to implement and it can optimize an arbitrary energy function. Unfortunately, minimizing an arbitrary energy function requires exponential time, and as a consequence simulated annealing is very slow. Theoretically, simulated annealing should eventually find the global minimum if run for long enough. As a practical matter, it is necessary to decrease the algorithm's temperature parameter faster than required by the theoretically optimal schedule. Once annealing's temperature parameter is sufficiently low, the algorithm will converge to a local minimum with respect to standard moves. Practical implementations of simulated annealing have been shown to give results that are very far from the global optimum even in the relatively simple case of binary labels.
If the energy minimization problem is phrased in continuous terms, variational methods can be applied. Variational techniques use the Euler equations, which are guaranteed to hold at a local minimum. To apply these algorithms to actual imagery, of course, requires discretization.
Another alternative is to use discrete relaxation labelling methods; this has been done by many authors. In relaxation labelling, combinatorial optimization is converted into continuous optimization with linear constraints. Then, some form of gradient descent which gives the solution satisfying the constraints is used. Relaxation labelling techniques are actually more general than energy minimization methods.
There are also methods that have optimality guarantees in certain cases. Continuation methods, such as graduated non-convexity, are an example. These methods involve approximating an intractable energy function by a sequence of energy functions, beginning with a tractable (convex) approximation. There are circumstances where these methods are known to compute the optimal solution. Continuation methods can be applied to a large number of energy functions, but except for these special cases nothing is known about the quality of their output. Mean field annealing is another popular minimization approach. It is based on estimating the partition function, from which the minimum of the energy can be deduced.
2.3 Multiway Cut for Stereo and Motion with Slanted Surfaces
Stereo and motion algorithms that search over all possible displacements to
minimize an energy functional have traditionally assumed that all the surfaces in the
world are parallel to the image plane. We have presented an algorithm to solve the
correspondence problem in the presence of slanted surfaces by alternately segmenting
an image into non-overlapping regions and finding the affine parameters of the
displacement function of each region. An additional step enables the algorithm to
recover when this alternation settles onto a suboptimal over segmentation. This
iterative, greedy algorithm is able to find clean, accurate displacement maps for a wide
range of images from stereo and motion.
The main limitation of this work is the restrictiveness of the energy functional used. For example, the algorithm may become stuck in local minima if there are extremely untextured surfaces in the world, in which case it will be difficult to automatically determine their affine parameters. Moreover, it is easily distracted when intensity edges do not accompany the region boundaries, and it prefers to draw region boundaries along straight lines, thus introducing a bias against tracing the contours of curved objects. Future work should be aimed at incorporating occlusions, the curvature of boundaries, or the shape of regions.
The goal of correspondence is to determine which points in one image
correspond to which points in another image taken of the same scene, i.e., to determine
which image points arise from the same physical point in the world. The images may be
taken by different cameras at the same time (stereo) or by the same camera at different
times (motion).
The problem of correspondence is often solved by minimizing an energy functional that matches similar-looking pixels (in terms of intensity or color, for example), while penalizing the discontinuities in order to preserve piecewise continuity. The result of such a search is the best mapping from pixels to displacements, according to the cost functional. In stereo the displacement is disparity (a scalar thanks to the epipolar constraint), while in motion it is a two-element vector. The result, therefore, does not accurately represent the shape or orientation of the surfaces, nor are the discontinuities and creases easily recoverable from such an output. That is, differentiating and thresholding this disparity map will generate many false discontinuities because of the large jumps in disparity that occur within surfaces, and the vertical crease along the interior edge of the Cheerios box cannot be recovered because it lies in the middle of a region.
To handle slanted surfaces, we propose to minimize an energy functional that allows not just constant displacements but rather affine warpings. Our approach segments the image into a number of non-overlapping regions, each corresponding to a different surface in the world, and finds the affine parameters of the displacement function for each region.
In this way, the algorithm greedily minimizes the energy functional until it
converges. If the result is over segmented, then an additional step merges adjacent
regions to further reduce the energy.
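The "affine parameters of the displacement function" step above amounts to a least-squares plane fit per region. An illustrative Python sketch (the data layout and function name are assumptions):

```python
import numpy as np

def fit_affine_displacement(xy, d):
    """Least-squares fit of d(x, y) ~ a*x + b*y + c over the pixels of
    one region; xy is an (N, 2) array of coordinates, d the observed
    displacement values at those pixels."""
    A = np.column_stack([xy, np.ones(len(xy))])  # design matrix [x  y  1]
    params, *_ = np.linalg.lstsq(A, d, rcond=None)
    return params  # (a, b, c)
```

In the alternation described above, this fit is the "parameters for fixed regions" step; the "regions for fixed parameters" step reassigns pixels to the affine model that matches them best.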
2.4 SAR target recognition based on active contour without edges
In this paper, the use of histogram equalization and active contour without edges to segment SAR images with low contrast between target and background is proposed. In addition, the active contour without edges can accomplish contour segmentation with no need to consider filtering the clutter that is abundant in SAR images. The simulation results on measured SAR images confirm the feasibility of our method in improving the recognition rate of SAR targets. Moreover, from the comparison results with the classical method, our method is more stable and faster and has a higher recognition rate under different training samples. This makes the proposed method suitable for SAR target recognition.
Target recognition is an important issue of great concern for both security and civilian applications. Many approaches for automatic target recognition (ATR) have been proposed in the literature. For ATR, there are two keys: one is segmentation and feature extraction, and the other is target classification. Due to the background clutter in a SAR image, it is difficult to identify the target we are concerned with. For many segmentation methods, the first step is to filter the clutter and then do target segmentation. Here, we use the segmentation method of active contour without edges, which does not consider the clutter and extracts the target directly. It is well known that different SAR images have different translation, scaling and rotation angles for even the same target, but Hu invariant moments are not affected: they have translation, scaling and rotation invariance. Therefore, we use Hu invariant moments of the SAR image as its features.
The active contour model is divided into two categories: one is based on boundaries and the other is based on regions. It is difficult for the former to obtain an overall segmentation due to image clutter; the most representative method is the traditional active contour. The latter minimizes the global energy functional that meets the image model to drive the expansion and contraction of the contour. To demonstrate the performance of the proposed method, we perform three simulated experiments, whereby we compare the traditional snakes-based contour extraction method and our proposed method. First, the recognition rate of our method is compared with that of the traditional snakes-based contour extraction method as a function of the number of training samples.
The recognition rate of both methods improves as the number of training samples increases, and it becomes stable once the number of training samples reaches a certain value. Moreover, from the comparison with the traditional snakes-based contour extraction method, one can see that the proposed method can improve the recognition rate of SAR targets regardless of the number of training samples.
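The translation, scale, and rotation invariance of Hu moments comes from building them out of normalized central moments. A brief Python sketch of the first two invariants (illustrative only; real pipelines typically use a library implementation):

```python
import numpy as np

def hu_first_two(img):
    """First two Hu invariants from normalized central moments eta_pq."""
    img = np.asarray(img, dtype=float)
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    xb, yb = (x * img).sum() / m00, (y * img).sum() / m00  # centroid

    def eta(p, q):  # central moment, normalized for scale invariance
        mu = (((x - xb) ** p) * ((y - yb) ** q) * img).sum()
        return mu / m00 ** (1 + (p + q) / 2)

    h1 = eta(2, 0) + eta(0, 2)
    h2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return h1, h2
```

Centering at the centroid removes translation, the μ₀₀ power removes scale, and the particular combinations h₁, h₂ are unchanged under rotation, which is why they make stable SAR target features.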

2.5 Low-rank quaternion approximation for color image processing


In this paper, we proposed a novel color image processing model, named low-rank quaternion approximation (LRQA), which aims at recovering a low-rank quaternion matrix from its noisy observation. Different from the existing sparse representation and low-rank matrix approximation (LRMA)-based methods, LRQA can process the three channels in a holistic way. Inspired by the excellent performance of nonconvex functions for LRMA, we proposed a general model for LRQA and then solved the optimization problem. Extensive experiments on color image denoising and inpainting tasks have demonstrated the effectiveness of the proposed LRQA. In the future, we would like to extend LRQA to other color image processing tasks, such as image deblurring, image super-resolution, image deraining and so on.
As an emerging mathematical tool, low-rank matrix approximation (LRMA) has been applied in a broad family of real applications, such as image denoising, image inpainting, image deblurring, background-foreground separation, subspace clustering, and so on. Due to the fact that high-dimensional data inherently possess a low-rank structure, the goal of LRMA is to exactly and efficiently recover the underlying low-rank matrix from its corrupted observation.
Most LRMA-based variants follow two lines: matrix factorization and matrix rank minimization. The first line usually factorizes the matrix to be recovered into two or three thin matrices. An intuitive example is the popular nonnegative matrix factorization, which factorizes the original data matrix into two factor matrices with nonnegative constraints; their product can well approximate the original data matrix under the Frobenius-norm fidelity. Recently, a bilinear factor matrix norm minimization for LRMA has been proposed. The main issue of the matrix factorization based LRMA algorithms is that the rank values are unknown in real applications. The second line is usually pursued with different rank approximation regularizers, such as the nuclear norm. Candès et al. have proven that the nuclear norm is the tightest convex relaxation of the NP-hard rank minimization function. However, many works have pointed out that nuclear norm-based LRMA may yield sub-optimal results for the original rank minimization. The reason is that each singular value is treated equally, which is contrary to the fact that the large singular values may contain more information of the original data. To better approximate the rank function, many nonconvex surrogates have been proposed, such as the Schatten p-norm, the weighted nuclear norm, and the log-determinant penalty (summarized in Table I), and these have achieved promising performance in grayscale image processing. Among them, the most popular one is the weighted nuclear norm minimization (WNNM), which assigns different weights to different singular values by considering their physical significance.
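Nuclear-norm LRMA reduces in practice to soft-thresholding of singular values (WNNM is the weighted variant of the same shrinkage). An illustrative Python sketch for the real-matrix case (the quaternion machinery of LRQA is not reproduced here):

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the proximal operator of
    tau * ||.||_* at M. Every singular value is shrunk by tau,
    zeroing the small ones and thereby lowering the rank."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
```

The uniform shrinkage by tau is precisely the "each singular value is treated equally" behavior criticized above; WNNM replaces tau with per-value weights so large, informative singular values are shrunk less.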
CHAPTER 3
SYSTEM DESIGN
3.1 EXISTING SYSTEM
Despite their early introduction and their relatively simple formulation, Mumford-Shah models are still a very active topic of research. This is because they lead to challenging (NP-hard) optimization problems. We first discuss work related to the piecewise constant variant, which is also widely known as the Potts model and recently also called ℓ0 gradient smoothing. Classical algorithms for the piecewise constant Mumford-Shah model are based on simulated annealing and on approximations by elliptic functionals. Currently popular approaches are active contours, graph cuts, convex relaxations, semi-global matching, fused coordinate descent, region fusion, iterative thresholding type techniques, and the alternating direction method of multipliers.
DISADVANTAGES
 It provides an approximation that is not necessarily piecewise affine.
 The discontinuity set is in general not the boundary set of a partition.
 Higher computation times.

3.2 PROPOSED SYSTEM


In this paper, we propose a novel algorithm to efficiently compute approximate solutions of the piecewise affine Mumford-Shah model. The key novelties of our method are as follows:
• We propose a formulation of the piecewise affine Mumford-Shah model as a minimization problem with respect to Taylor jets, i.e., fields of local Taylor polynomials. This eliminates the dependence on the partition P in (1);
• We propose a novel splitting approach based on a jet coupling which leads to linewise segmented jet estimation problems, together with an efficient and exact solver for these problems.
The proposed algorithm unifies the advantages of partition-based methods and those of function-based methods:
• It directly yields a partition P;
• It does not need an initialization procedure;
• Its average computation times are almost independent of the choice of the edge penalty;
• It is highly parallelizable.
The experimental results show that the proposed method improves upon the state-of-the-art approach to the piecewise affine Mumford-Shah model in the sense that the algorithm has lower computation times and often achieves lower functional values.

ADVANTAGES
 It directly yields a partition P.
 It does not need an initialization procedure.
 Its average computation times are almost independent of the choice of the edge penalty.
 It is highly parallelizable.

3.3 SYSTEM SPECIFICATION
3.3.1 SOFTWARE REQUIREMENTS
 Operating system : Windows XP/7.
 Coding Language : Matlab
 Tool : Matlab 2018a
3.3.2 HARDWARE REQUIREMENTS
 System : Core i3
 Hard Disk : 540 GB
 Floppy Drive : 1.44 MB
 Monitor : 22" VGA Colour
 Mouse : Logitech
 RAM : 2 GB
CHAPTER 4
SYSTEM IMPLEMENTATION
4.1 MODULES
 Piecewise Jet Formation
 Discretization Jet Formation
 Image Splitting
 Line wise Segmentation
4.1.1 Piecewise Jet Formation
A key point in our derivation is the formulation of problem (1) in terms of Taylor
jets. The Taylor jet of a function u is a field of Taylor polynomials, i.e., with every
point x in the domain Ω of u we associate the truncated Taylor expansion of u at the
point x up to a certain order k, where k is called the order of the jet. In the following,
we will need the (first order) jet J u of a function u at a point x ∈ Ω, which we define
as the first order polynomial J u(x) given by

J u(x)(z) := T_x u(z),

where T_x u is the (first order) Taylor polynomial of u centred at x and where z ∈ ℝ²
is the argument of the polynomial. Summing up, the jet J u : Ω → Π₁ is a function on Ω
with values in the space Π₁ of bivariate polynomials of order one; in particular, J u(x)
is a polynomial, and J u(x)(z) denotes the point evaluation of the polynomial J u(x)
at the point z ∈ ℝ². We note that the variables z in the above definition are shifted
versions of the ones typically used for the jet formulation. The reason for our choice
here is that we will later need the notion of equality of the polynomials in the way
defined above; this will become clear in the following. It is a basic but important
observation that u is piecewise affine linear if and only if the jet J u is a piecewise
constant field.
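This observation can be sketched in a few lines. The following Python/NumPy snippet is an illustrative sketch only (the project itself is implemented in MATLAB, and the function name jet_field is hypothetical): it builds a first-order jet field using the absolute (shifted) argument convention described above, so that equality of jets reduces to equality of coefficient triples, and verifies that a globally affine image has a constant jet.

```python
import numpy as np

def jet_field(u):
    """First-order Taylor jet of a discrete image u (illustrative sketch).

    For every pixel x we store the coefficients (a, b, c) of the local
    polynomial z -> a*z1 + b*z2 + c, i.e. the Taylor polynomial T_x u
    written in the *absolute* argument z, so that two jets are equal
    exactly when their coefficient triples are equal.
    Returns an (M, N, 3) coefficient array.
    """
    a, b = np.gradient(u)                          # derivatives along rows, cols
    yy, xx = np.mgrid[0:u.shape[0], 0:u.shape[1]]
    c = u - a * yy - b * xx                        # constant term in absolute coords
    return np.stack([a, b, c], axis=-1)

# u is (piecewise) affine-linear  <=>  its jet field is (piecewise) constant;
# for a globally affine image the jet is the same polynomial everywhere.
yy, xx = np.mgrid[0:8, 0:8]
u = 2.0 * yy + 3.0 * xx + 1.0
J = jet_field(u)
print(np.allclose(J, J[0, 0]))                     # -> True
```

For a piecewise affine image, the same computation would yield a jet field that is constant on each segment and jumps only across segment boundaries.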
4.1.2 Discretization Jet Formation
We may adopt a common way of discretizing the length penalty term, namely

‖∇J‖ ≈ Σ_{s=1}^{S} ω_s · #{x ∈ Ω : x + d_s ∈ Ω, J(x) ≠ J(x + d_s)}.

The terms on the right hand side count the number of changes of the field J with
respect to a direction d_s ∈ ℤ², where Ω = {1, . . . , M} × {1, . . . , N} denotes the
discretized domain. The directions d_s form a neighbourhood system {d₁, . . . , d_S},
with S ≥ 2. The simplest choice is the system {e₁, e₂}, e₁ = (1, 0)ᵀ, e₂ = (0, 1)ᵀ,
with the weights ω₁ = ω₂ = 1. To reduce anisotropy effects, we here focus on the more
isotropic discretization {e₁, e₂, e₁ + e₂, e₁ − e₂}.
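The direction-wise jump count above can be computed directly. The following Python sketch (a brute-force illustration, not the report's MATLAB implementation; the diagonal weight 1/√2 is a common choice and an assumption here, not a value taken from the report) counts the weighted changes of a field over the isotropic neighbourhood system.

```python
import numpy as np

def jump_count(J, dirs, weights):
    """Weighted number of direction-wise changes of a field J, i.e.
    sum over s of  w_s * #{x : J(x) != J(x + d_s)} -- a brute-force
    sketch of the discretized length penalty."""
    M, N = J.shape[:2]
    total = 0.0
    for (dy, dx), w in zip(dirs, weights):
        for y in range(M):
            for x in range(N):
                ny, nx = y + dy, x + dx
                if 0 <= ny < M and 0 <= nx < N and not np.array_equal(J[y, x], J[ny, nx]):
                    total += w
    return total

# The more isotropic neighbourhood system {e1, e2, e1+e2, e1-e2}.
# Diagonal weight 1/sqrt(2) is an assumed (common) choice.
dirs = [(1, 0), (0, 1), (1, 1), (1, -1)]
weights = [1.0, 1.0, 1.0 / np.sqrt(2), 1.0 / np.sqrt(2)]

J = np.zeros((4, 4))
J[:, 2:] = 1.0                      # a single vertical segment boundary
cost = jump_count(J, dirs, weights)
```

For the single vertical boundary above, the axial direction e₂ contributes 4 jumps and each diagonal direction contributes 3, so the weighted count is 4 + 6/√2.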
4.1.3 Image Splitting
Using the discretization for the proposed jet formulation, we obtain the
discretized problem (9). Our goal is to compute approximate solutions of (9).
Towards an optimization algorithm, we first split up the target J into S polynomial
fields, J¹, . . . , J^S, subject to the constraints that they are all equal.
Since two bivariate first-order polynomials are equal if and only if their
evaluations agree at three non-collinear points in ℝ², these equality constraints
can be expressed through pointwise evaluations.
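The three-point characterization of equality can be checked directly. The following Python sketch (illustrative only; the function names are hypothetical) evaluates two first-order polynomials at three non-collinear anchor points, which suffices to decide whether they coincide everywhere.

```python
import numpy as np

def affine_eval(p, z):
    """Evaluate the first-order polynomial p(z) = p[0]*z1 + p[1]*z2 + p[2]."""
    return p[0] * z[0] + p[1] * z[1] + p[2]

def equal_on_anchors(p, q, anchors):
    """Two bivariate first-order polynomials coincide everywhere iff
    they agree at three non-collinear anchor points, so the coupling
    constraints between the split fields can be checked (or enforced)
    through three point evaluations."""
    return all(np.isclose(affine_eval(p, z), affine_eval(q, z)) for z in anchors)

anchors = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]    # non-collinear points
p = (2.0, -1.0, 0.5)
q = (2.0, -1.0, 0.5)                              # identical polynomial
r = (2.0, -1.0, 0.6)                              # different constant term
print(equal_on_anchors(p, q, anchors), equal_on_anchors(p, r, anchors))  # -> True False
```

If the anchors were collinear, two distinct polynomials could agree on all of them, which is why non-collinearity is essential for the coupling.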
4.1.4 Linewise Segmentation
The computational effort of the proposed algorithm is dominated by solving the
linewise jet problems. To obtain an efficient overall algorithm, we here propose a
fast solver.
Let Q be an arbitrary line in direction d, parameterized as Q = (q₁, q₂, . . . , q_L)
with q_l = q₁ + (l − 1)d. Our first crucial observation is that (23) essentially is a
one-dimensional segmented least squares problem on the line Q. Solving it consists of
two steps. First, we compute an optimal partition I* of the line Q, i.e., a minimizer
of the corresponding segmented least squares functional. Here E_I denotes the residual
sum of squares of the best approximating bivariate linear polynomial p with
p(x) = αx₁ + βx₂ + θ on the "interval" I = {q_l, q_{l+1}, . . . , q_r} of the data.
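The optimal partition can be found exactly by classical segmented least squares dynamic programming. The following Python sketch is a univariate simplification for illustration (the report fits bivariate linear polynomials along image lines; the function name segment_line and the brute-force residual computation are assumptions, not the report's MATLAB solver).

```python
import numpy as np

def segment_line(data, gamma):
    """Exact 1-D segmented least squares by dynamic programming:
        B(r) = min over l <= r of  B(l-1) + gamma + E[l, r],
    where E[l, r] is the residual sum of squares of the best affine fit
    p(t) = a*t + b on the interval {l, ..., r}.  Returns the optimal
    segments as (first index, last index) pairs.
    """
    L = len(data)
    t = np.arange(L, dtype=float)

    def rss(l, r):
        # residual of the least-squares affine fit on data[l..r]
        if r == l:
            return 0.0
        a, b = np.polyfit(t[l:r + 1], data[l:r + 1], 1)
        return float(np.sum((a * t[l:r + 1] + b - data[l:r + 1]) ** 2))

    B = [0.0] + [np.inf] * L            # B[r]: best cost for data[0..r-1]
    arg = [0] * (L + 1)                 # arg[r]: start of the last segment
    for r in range(1, L + 1):
        for l in range(r):
            cand = B[l] + gamma + rss(l, r - 1)
            if cand < B[r]:
                B[r], arg[r] = cand, l
    segments, r = [], L                 # backtrack the optimal partition
    while r > 0:
        segments.append((arg[r], r - 1))
        r = arg[r]
    return segments[::-1]

# Two exactly-affine pieces are recovered as two segments.
data = np.concatenate([np.arange(10.0), 30.0 - 2.0 * np.arange(10.0)])
print(segment_line(data, gamma=1.0))    # -> [(0, 9), (10, 19)]
```

Each extra segment costs γ, so the minimizer balances the fitting residual against the number of breakpoints, which is exactly the trade-off in the linewise jet problems.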
CHAPTER 5

SYSTEM DESIGN
5.1 BLOCK DIAGRAM
5.2 DATA FLOW DIAGRAM

5.2.1 Level 0
Piecewise Jet Formation: Input → Reformulation → Output image

5.2.2 Level 1
Discretization Jet Formation: PCW constant model → Input γ values → Assign γ = 0.3 → Split the image

5.2.3 Level 2
Image Splitting: PCW smooth model → Input β = 10 → Smoothen image

5.2.4 Level 3
Linewise Segmentation: PCW affine model → Input γ = 0.4 → PCW smooth
CHAPTER 6
LANGUAGE DESCRIPTION
6.1 INTRODUCTION
The name MATLAB stands for MATrix LABoratory. MATLAB was written
originally to provide easy access to matrix software developed by the LINPACK
(linear system package) and EISPACK (eigensystem package) projects. MATLAB
is a high-performance language for technical computing. It integrates computation,
visualization, and programming in an easy-to-use environment. MATLAB has many
advantages compared to conventional computer languages (e.g., C, FORTRAN) for solving
technical problems. MATLAB is an interactive system whose basic data element is
an array that does not require dimensioning. Specific applications are collected in
packages referred to as toolboxes. There are toolboxes for signal processing, symbolic
computation, control theory, simulation, optimization, and several other fields of
applied science and engineering.

6.2 MATLAB'S POWER OF COMPUTATIONAL MATHEMATICS

MATLAB is used in every facet of computational mathematics.
Following are some areas of mathematical calculation where it is most commonly used:

• Dealing with Matrices and Arrays
• 2-D and 3-D Plotting and graphics
• Linear Algebra
• Algebraic Equations
• Non-linear Functions
• Statistics
• Data Analysis
• Calculus and Differential Equations
• Numerical Calculations
• Integration
• Transforms
• Curve Fitting
• Various other special functions
6.3 FEATURES OF MATLAB

Following are the basic features of MATLAB:

• It is a high-level language for numerical computation, visualization and
application development.
• It provides an interactive environment for iterative exploration, design
and problem solving.
• It provides a vast library of mathematical functions for linear algebra,
statistics, Fourier analysis, filtering, optimization, numerical integration and
solving ordinary differential equations.
• It provides built-in graphics for visualizing data and tools for creating custom
plots.
• MATLAB's programming interface gives development tools for improving
code quality and maintainability, and for maximizing performance.
• It provides tools for building applications with custom graphical interfaces.
• It provides functions for integrating MATLAB based algorithms with
external applications and languages such as C, Java, .NET and Microsoft Excel.

6.4 USES OF MATLAB

MATLAB is widely used as a computational tool in science and engineering


encompassing the fields of physics, chemistry, math and all engineering
streams. It is used in a range of applications including:

• Signal processing and Communications

• Image and video Processing

• Control systems
• Test and measurement

• Computational finance

• Computational biology

6.5 UNDERSTANDING THE MATLAB ENVIRONMENT

MATLAB development IDE can be launched from the icon created on the
desktop. The main working window in MATLAB is called the desktop. When
MATLAB is started, the desktop appears in its default layout.

Fig.6.5.1. MATLAB desktop environment

The desktop has the following panels:

Current Folder - This panel allows you to access the project


folders and files.

Fig.6.5.2. current folder


Command Window - This is the main area where commands can be entered at the
command line. It is indicated by the command prompt (>>).

Fig.6.5.3. command window

Workspace - The workspace shows all the variables created and/or imported from
files.

Fig.6.5.4.workspace

Command History - This panel shows or reruns commands that were entered at the
command line.
Fig.6.5.5 command history

6.6 COMMONLY USED OPERATORS AND SPECIAL CHARACTERS

MATLAB supports the following commonly used operators and special


characters:

Operator   Purpose
+          Plus; addition operator.
-          Minus; subtraction operator.
*          Scalar and matrix multiplication operator.
.*         Array multiplication operator.
^          Scalar and matrix exponentiation operator.
.^         Array exponentiation operator.
\          Left-division operator.
/          Right-division operator.
.\         Array left-division operator.
./         Array right-division operator.

Table 6.6 Commonly used MATLAB operators and special characters.

6.7 COMMANDS

MATLAB is an interactive program for numerical computation and data


visualization. You can enter a command by typing it at the MATLAB prompt '>>' on
the Command Window.

6.7.1 Commands for managing a session

MATLAB provides various commands for managing a session. The
following table lists them:

Command    Purpose
clc        Clears the command window.
clear      Removes variables from memory.
exist      Checks for the existence of a file or variable.
global     Declares variables to be global.
help       Searches for a help topic.
lookfor    Searches help entries for a keyword.
quit       Stops MATLAB.
who        Lists current variables.
whos       Lists current variables (long display).

Table 6.7.1 Commands for managing a session.
6.8 INPUT AND OUTPUT COMMANDS

MATLAB provides the following input and output related commands:

Command    Purpose
disp       Displays the contents of an array or string.
fscanf     Reads formatted data from a file.
format     Controls the screen-display format.
fprintf    Performs formatted writes to the screen or a file.
input      Displays a prompt and waits for input.
;          Suppresses screen printing.

Table 6.8 Input and output commands.
6.9 M FILES

MATLAB allows writing two kinds of program files:

Scripts:

Script files are program files with the .m extension. In these files, you write a
series of commands which you want to execute together. Scripts do not accept inputs
and do not return any outputs. They operate on data in the workspace.

Functions:

Function files are also program files with the .m extension. Functions can
accept inputs and return outputs. Internal variables are local to the function.
Creating and Running Script File:

To create scripts files, you need to use a text editor. You can open the
MATLAB editor in two ways:

• Using the command prompt


• Using the IDE

You can directly type edit, optionally followed by the filename (with .m extension):

edit
edit <filename>

6.10 DATA TYPES AVAILABLE IN MATLAB

MATLAB provides 15 fundamental data types. Every data type stores data
that is in the form of a matrix or array. The size of this matrix or array is a
minimum of 0-by-0 and this can grow up to a matrix or array of any size.

The following table shows the most commonly used data types in MATLAB:

Data type        Description

int8             8-bit signed integer
uint8            8-bit unsigned integer
int16            16-bit signed integer
uint16           16-bit unsigned integer
int32            32-bit signed integer
uint32           32-bit unsigned integer
int64            64-bit signed integer
uint64           64-bit unsigned integer
single           Single precision numerical data
double           Double precision numerical data
logical          Logical values 1 or 0, representing true and false respectively
char             Character data (strings are stored as vectors of characters)
Cell array       Array of indexed cells, each capable of storing an array of a
                 different dimension and data type
Structure        C-like structure, each structure having named fields capable of
                 storing an array of a different dimension and data type
Function handle  Pointer to a function
User classes     Objects constructed from a user-defined class
Java classes     Objects constructed from a Java class

Table 6.10 Data types in MATLAB


CHAPTER 7

SYSTEM TESTING

7.1 INTRODUCTION

The purpose of testing is to discover errors. Testing is the process of
trying to discover every conceivable fault or weakness in a work product. It
provides a way to check the functionality of components, sub-assemblies,
assemblies and/or a finished product. It is the process of exercising software
with the intent of ensuring that the software system meets its requirements
and user expectations and does not fail in an unacceptable manner. There are
various types of test; each test type addresses a specific testing requirement.

7.2 TYPES OF TESTS

7.2.1 Unit testing


Unit testing involves the design of test cases that validate that the
internal program logic is functioning properly, and that program inputs
produce valid outputs. All decision branches and internal code flow should
be validated. It is the testing of individual software units of the application.
It is done after the completion of an individual unit, before integration. This
is structural testing that relies on knowledge of the unit's construction and is
invasive. Unit tests perform basic tests at component level and test a specific
business process, application, and/or system configuration. Unit tests ensure
that each unique path of a business process performs accurately to the
documented specifications and contains clearly defined inputs and expected
results.
7.2.2 Integration testing
Integration tests are designed to test integrated software components
to determine if they actually run as one program. Testing is event driven
and is more concerned with the basic outcome of screens or fields.
Integration tests demonstrate that although the components were
individually satisfactory, as shown by successful unit testing, the
combination of components is correct and consistent. Integration testing is
specifically aimed at exposing the problems that arise from the
combination of components.

7.2.3 Functional test


Functional tests provide systematic demonstrations that functions tested
are available as specified by the business and technical requirements, system
documentation, and user manuals.

Functional testing is centered on the following items:

Valid Input : Identified classes of valid input must be accepted.

Invalid Input : Identified classes of invalid input must be rejected.

Functions : Identified functions must be exercised.

Output : Identified classes of application outputs must be exercised

Systems/Procedures: interfacing systems or procedures must be invoked.

Organization and preparation of functional tests is focused on
requirements, key functions, or special test cases. In addition, systematic
coverage pertaining to identifying business process flows, data fields,
predefined processes, and successive processes must be considered for
testing. Before functional testing is complete, additional tests are identified
and the effective value of current tests is determined.
7.2.4 System Test

System testing ensures that the entire integrated software system meets
requirements. It tests a configuration to ensure known and predictable
results. An example of system testing is the configuration oriented system
integration test. System testing is based on process descriptions and flows,
emphasizing pre-driven process links and integration points.

7.2.5 White Box Testing


White box testing is testing in which the software tester has
knowledge of the inner workings, structure and language of the software, or
at least its purpose. It is used to test areas that cannot be reached from a
black box level.

7.2.6 Black Box Testing


Black box testing is testing the software without any knowledge of the
inner workings, structure or language of the module being tested. Black box
tests, as most other kinds of tests, must be written from a definitive source
document, such as a specification or requirements document. It is testing in
which the software under test is treated as a black box: you cannot "see"
into it. The test provides inputs and responds to outputs without considering
how the software works.

7.3 UNIT TESTING

Unit testing is usually conducted as part of a combined code and unit


test phase of the software lifecycle, although it is not uncommon for coding
and unit testing to be conducted as two distinct phases.
Test strategy and approach
Field testing will be performed manually and functional tests will be
written in detail.
Test objectives
 All field entries must work properly.
 Pages must be activated from the identified link.
 The entry screen, messages and responses must not be delayed.
Features to be tested
 Verify that the entries are of the correct format
 No duplicate entries should be allowed
 All links should take the user to the correct page.
7.4 INTEGRATION TESTING
Software integration testing is the incremental integration testing of
two or more integrated software components on a single platform to produce
failures caused by interface defects.

The task of the integration test is to check that components or


software applications, e.g. components in a software system or – one step up
– software applications at the company level – interact without error.

Test Results: All the test cases mentioned above passed successfully. No
defects encountered.

7.5 ACCEPTANCE TESTING


User Acceptance Testing is a critical phase of any project and
requires significant participation by the end user. It also ensures that the
system meets the functional requirements.
CHAPTER 8

SCREEN SHOTS
CHAPTER 9

CONCLUSION

We considered the piecewise affine Mumford-Shah model, a
variational model for image partitioning. Determining approximate
solutions of this model is a challenging problem: it requires a higher
computational effort than employing classical non-variational
clustering methods for partitioning, such as the k-means algorithm or
the mean shift algorithm. However, in contrast to these methods, the
piecewise affine Mumford-Shah model incorporates the total length of
the segment boundaries in its regularizing term and, on each segment,
the input image is approximated by an affine function. We have
proposed an efficient algorithm for computing approximate solutions
of the piecewise affine Mumford-Shah model. A novel formulation of
the problem in terms of Taylor jets allowed us to invoke a splitting into
few tightly coupled jet subproblems for which we developed an
efficient and exact solver. We have seen that, in contrast to GNC and
FPAME, the proposed method provides a genuine partition of the
image. The experiments showed that the proposed approach yields
results with mostly lower average model energies than those of the
benchmark approach based on graph cuts. Hence, the model energy is
efficiently minimized. At the same time, it needs less computation time
than the benchmark method, especially for small edge penalties.

REFERENCES

[1] P. J. Besl and R. C. Jain, “Segmentation through variable-order


surface fitting,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 10, no. 2,
pp. 167–192, Mar. 1988.
[2] S. Beucher, “Use of watersheds in contour detection,” in Proc. Int.
Workshop Image Process., 1979, pp. 17–21.
[3] D. Comaniciu and P. Meer, “Mean shift: A robust approach toward
feature space analysis,” IEEE Trans. Pattern Anal. Mach. Intell., vol.
24, no. 5, pp. 603–619, May 2002.
[4] K. Fukunaga and L. Hostetler, “The estimation of the gradient of a
density function, with applications in pattern recognition,” IEEE Trans.
Inf. Theory, vol. IT-21, no. 1, pp. 32–40, Jan. 1975.
[5] P. Milanfar, "A tour of modern image filtering: New insights and
methods, both practical and theoretical," IEEE Signal Process. Mag.,
vol. 30, no. 1, pp. 106–128, Jan. 2013.
[6] J.-M. Morel and S. Solimini, Variational Methods in Image
Segmentation: With Seven Image Processing Experiments, vol. 14.
Berlin, Germany: Springer, 2012.
