
SALT BODY SEGMENTATION BASED ON EDGE DETECTION USING DEEP SUPERVISED LEARNING

CHAPTER 1
INTRODUCTION

1.1 INTRODUCTION
Many regions of the earth hold large amounts of oil and gas underground, which in turn form large salt bodies. Salt boundary analysis is important for understanding the structural model, the seismic migration speed, and model construction. The traditional methodology relies on experts to examine the salt body and its attributes and features. Because the attributes are handcrafted by experts, the results can be highly variable, are affected by complex noise pollution, and are often inaccurate. Such incorrect analyses can even endanger the company's drilling personnel. Previous work on this problem can be divided into four categories: (i) the use of seismic attributes; (ii) the application of computer vision methods; (iii) the combination of seismic attributes and machine learning; and (iv) deep learning algorithms.

Recently, deep learning methods have advanced many computer vision tasks, such as object detection and semantic segmentation, by fusing different features [12]. These models directly convert raw input data into a mapping of the geological area using deep statistical models. Some past studies have treated the salt body as a segmentation problem based on image features, while other studies have treated it as a regression problem. Deep learning algorithms such as CNNs are used to predict salt bodies from the data. F. Meng et al. proposed methods that use directed graphs for clustering and segmenting salt foregrounds. J. Long et al. [10] treated salt body classification as semantic segmentation. LBP (Local Binary Patterns) features are used to capture the image features accurately.
The proposed architecture for salt segmentation is an encoder-decoder architecture that consists of convolutional layers and outputs pixel-level binary labels; each pixel carries a probability of being salt or non-salt. In this paper we propose a deeply supervised U-shaped model for effective salt division and introduce a "Salt Boundary Prediction" branch to optimize the results. The model contains an encoder and a decoder for downsampling and upsampling, and it uses the ReLU (Rectified Linear Unit) activation. This makes the output accurate, but not faster; therefore, we use a sigmoid function [17] to predict the probability of the output and the efficiency of the resulting salt-delineated part.

VNITSW-CSE 1| P a g e

Steps in Deep Learning:

1. Deep-Supervised Model
2. Edge Prediction Branch
3. Optimized Edge Prediction

1. Deep-Supervised Model:
Supervised deep learning frameworks are trained on well-labelled data. Training teaches the learning algorithm to generalise from the training data and to apply what it has learned to unseen situations. After training is complete, the model is tested on a subset of the testing set to predict the output.

2. Edge Prediction Branch:
The edge of the salt body is predicted with the help of Batch Normalization.

3. Optimized Edge Prediction:
In order to get the optimized edge, a Sigmoid function is used.

CHAPTER-2
LITERATURE SURVEY

Alan Souza et al. [1] used convolutional neural networks for the semantic delineation of salt bodies in different seismic volumes. The numerical experiments in the paper showed that, even with a small number of interpreted lines, one can obtain reasonable salt segmentation results, and the calculations are very simple.

Mikhail Karchevskiy et al. [2] built on the manual interpretation of seismic images by geophysics specialists. The predictions provided even by a single DL model were able to achieve 27th place, and the efficiency is very high, but the process is tedious.

S. J. Seol et al. [3] mapped the distribution of electrical resistivity from electromagnetic data using a CNN. With this mapping, the subsurface salt structure can be found easily; the approach is efficient, stable, and reliable. However, a CNN does not encode the exact position of objects in the image, and a lot of training data is necessary.

H. Di et al. [4] used the K-means clustering algorithm. The attribute domain used here generates probability volume data that not only finds the salt boundaries but also supports more advanced salt interpretation. However, as the number of attributes increases, the computational time also increases.

C. Ramirez et al. [5] took a sparse representation of the seismic data. The sparsifying dictionary provides robustness on account of redundancy and is highly stable. However, space complexity, time complexity, and limited processing power are major issues with sparse representation.

Z. Long et al. [6] used a 2D discrete Fourier transform algorithm. The noise-robust algorithms developed were able to label the boundaries of salt domes effectively and efficiently, and the salt dome boundaries were detected accurately. However, the mathematical calculations are complex.

A. U. Waldeland et al. [7] used a convolutional neural network. Classification of the dataset is easy because only a small cube of the input data is taken, so there is no need for attribute observation. The method can achieve a test accuracy of over 70%, but it requires training data from many different datasets.

Sergey et al. [8] used a multilayer convolutional neural network for classification of the salt body. The model generalizes even to unseen parts, produces more optimized results, and can be applied to large 3D data. However, there is redundancy in high dimensions, and spatial information is disregarded.

Table 2.1. Literature Study

[1] Salt segmentation using deep learning
    Authors / Year: Alan Souza, Wilson Leao; 2019
    Algorithm: Convolutional neural networks for semantic segmentation of salt bodies on seismic volumes.
    Advantages: The calculation is very easy.
    Accuracy: The numerical experiments showed that with a small amount of interpreted lines (less than 1% of the volume) one could obtain reasonable salt segmentation results.
    Limitations: Some mistakes occur due to the lack of contrast among events which would characterize the salt/non-salt frontier.

[2] Automatic salt deposits segmentation: A deep learning approach
    Authors / Year: Mikhail Karchevskiy; 2018
    Algorithm: Deep learning on manually interpreted seismic images produced by geophysics specialists.
    Advantages: Efficiency is very high.
    Accuracy: The predictions provided even by a single DL model were able to achieve 27th place.
    Limitations: It is a tedious process.

[3] Salt Delineation From Electromagnetic Data Using Convolutional Neural Networks
    Authors / Year: S. Oh, K. Noh, D. Yoon, S. J. Seol, and J. Byun; 2019
    Algorithm: Mapping of subsurface electrical resistivity distributions from electromagnetic (EM) data with CNNs.
    Advantages: Stable, reliable, and efficient.
    Accuracy: Precise delineation of a subsurface salt structure is found.
    Limitations: CNNs do not encode the position and orientation of objects, and lots of training data is required.

[4] Multi-attribute K-means Cluster Analysis for Salt Boundary Detection
    Authors / Year: H. Di, M. Shafiq, and G. AlRegib; 2017
    Algorithm: K-means clustering (unsupervised ML algorithm).
    Advantages: More advanced salt interpretation is done.
    Accuracy: The attribute domain generates a probability volume that not only shows the salt boundaries but also supports more advanced salt interpretation, such as salt body extraction.
    Limitations: If the number of attributes used increases, the computational time also increases.

[5] Salt body detection from seismic data via sparse representation
    Authors / Year: C. Ramirez, G. Larrazabal, and G. Gonzalez; 2016
    Algorithm: Sparse representation.
    Advantages: Highly stable.
    Accuracy: The sparsifying dictionary used gives stability and robustness on account of redundancy.
    Limitations: Space complexity, time complexity, and limited processing power are major issues, as it uses sparse representation.

[6] Noise-robust detection and tracking of salt domes in post-migrated volumes using texture, tensors, and subspace learning
    Authors / Year: Z. Wang, T. Hegazy, Z. Long, and G. AlRegib; 2015
    Algorithm: 2D discrete Fourier transformation algorithm.
    Advantages: The noise-robust algorithms developed were able to label boundaries of salt domes effectively and efficiently and were robust to noise.
    Accuracy: The salt dome boundaries were detected efficiently and accurately.
    Limitations: Mathematical calculations are complex.

[7] Salt classification using Deep Learning
    Authors / Year: A. U. Waldeland and A. H. S. S. Solberg; 2017
    Algorithm: Convolutional Neural Network.
    Advantages: The input can be just a small cube from the raw data, which removes the need for attribute engineering and makes it easy to classify the dataset.
    Accuracy: Can achieve a test accuracy of over 70%.
    Limitations: Requires training data from many different datasets.

[8] Automatic Salt-Body Classification Using a Deep Convolutional Neural Network
    Authors / Year: not listed in the source; 2018
    Algorithm: Multilayer Convolutional Neural Network for classification of the salt body.
    Advantages: Efficiently applied to a whole 3D volume of seismic data.
    Accuracy: The model generalizes to unseen seismic slices, including both inline and crossline directions, and produces accurate salt body detection.
    Limitations: There will be redundancy in high dimensions, and it disregards spatial information.
CHAPTER-3
SYSTEM ANALYSIS

3.1. EXISTING SYSTEM:

Areas with a large amount of oil and gas on the earth are likely to form huge salt
deposits below the surface of the earth. In addition, salt boundary analysis is of great
significance for understanding the model construction of salt layer structure and seismic
migration speed. Currently, traditional seismic imaging still requires professionals to analyze the salt body. Manually designed attributes are devised based on expertise; however, these attributes may not fully describe actual seismic data with complex noise pollution. This leads to very subjective, highly variable results. Even more worrying, drillers at oil and gas companies may face dangerous situations because of these incorrect analysis results. Previous related work can be roughly divided into four categories: the use of seismic attributes, implementation of computer vision approaches, integration of seismic attributes with machine learning, and application of deep learning algorithms.
Traditional methods rely on geological, physical, and geometrical principles to
obtain seismic attributes. Specifically, automatic salt boundary interpretation approaches
need to analyze salt properties, such as discontinuities [1], texture [2], reflection quadratic
vector fields [3], and salt likelihood function [4]. Implementation of computer vision
approaches also helps salt delineation, for example, Shafiq et al. [5] proposed a phase
congruency-based method to generate an attribute map of edges in the seismic data, which is
used to detect salt bodies within migrated seismic volumes by an interactive region growing
method. The design of handcrafted attributes requires professional background knowledge.
However, seismic data in real scenarios may be polluted by various noises [6], and these
attributes may not fully describe such complex data in detail. Di et al. [7] proposed a new
salt-boundary detection method based on multiattribute k-means cluster analysis, which
integrates seismic attributes and machine learning. Recently, deep learning-based
methods boost many fields of computer vision by fusing different features [8], such as object
detection [9] and semantic segmentation [10].
Some recent works [11]–[16] also use a deep learning statistical model to directly
convert raw input seismic data into the final mapping of geological features. Waldeland and


Solberg [17] achieved pixel-level salt classification by training a convolutional neural


network (CNN). Previous studies [18]–[21] considered salt boundary extraction as an image segmentation problem, and Shi et al. [21] used Segnet [22] to design a segmentation network
of encoder–decoder structure. Oh et al. [23] regarded salt body analysis as a regression
problem. They use CNN to predict salt bodies from resistivity data.
The classification of salt bodies can be regarded as the category of semantic
segmentation. In this letter, we propose a deep-learning architecture for efficient salt
segmentation. It is essentially an encoder–decoder architecture—consisting of convolution
and upsampling layers. It provides a pixel-based semantic segmentation of salt/nonsalt
images, either in the form of a probability map, or as pixel-level binary labels, which
identifies each point as either salt or nonsalt.

3.2. PROPOSED SYSTEM:

The proposed architecture for salt segmentation is an encoder-decoder architecture that consists of convolutional layers and outputs pixel-level binary labels; each pixel carries a probability of being salt or non-salt. We propose a deeply supervised U-shaped model for effective salt division and introduce a "Salt Boundary Prediction" branch to optimize the results. The model contains an encoder and a decoder for downsampling and upsampling, and it uses the ReLU (Rectified Linear Unit) activation. This makes the output accurate, but not faster; therefore, we use a sigmoid function to predict the probability of the output and the efficiency of the resulting salt-delineated part.

3.3. FEASIBILITY STUDY:

A feasibility study is a preliminary study undertaken before the real work of a project starts to ascertain the likelihood of the project's success. It is an analysis of possible alternative solutions to a problem and a recommendation on the best alternative.

1. Economic Feasibility
2. Technical Feasibility
3. Operational Feasibility


3.3.1: Economic Feasibility:

It is defined as the process of assessing the benefits and costs associated with the development of a project. A proposed system, which is both operationally and technically feasible, must be a good investment for the organization. With the proposed system, users benefit greatly because they can delineate salt bodies from seismic images automatically instead of relying on manual expert interpretation. The proposed system does not need any additional software or a high system configuration. Hence the proposed system is economically feasible.

3.3.2. Technical Feasibility

The technical feasibility infers whether the proposed system can be developed
considering the technical issues like availability of the necessary technology, technical
capacity, adequate response, and extensibility. The project is built using Python. Jupyter Notebook is designed for use in the distributed environment of the internet, and for the professional programmer it is easy to learn and use effectively. As the developing
organization has all the resources available to build the system therefore the proposed
system is technically feasible.

3.3.3. Operational Feasibility:

Operational feasibility is defined as the process of assessing the degree to which a proposed
system solves business problems or takes advantage of business opportunities. The system is
self-explanatory and doesn't need any extra sophisticated training. The system has built-in
methods and classes which are required to produce the result. The overall time that a user
needs to get trained is less than one hour. The software used for developing this application is very economical and readily available in the market. Therefore, the proposed system is operationally feasible.

CHAPTER-4
REQUIREMENTS SPECIFICATION

4.1 PURPOSE, SCOPE, DEFINITION:


4.1.1 Purpose:
The purpose of the Software Requirements Specification is to serve as the basis for the entire project. It lays the foundation and the framework that every team involved in the development will follow. It is used to provide critical information to multiple teams such as development, quality assurance, operations, and maintenance. The Software Requirements Specification is a rigorous assessment of requirements before the more specific system designs, and its goal is to reduce later redesign. It should also provide a realistic basis for estimating product costs, risks, and schedules.

4.1.2 Scope:
The scope is the part of project planning that involves determining and documenting a list of specific project goals, deliverables, features, functions, tasks, deadlines, and ultimately costs. In other words, it is what needs to be achieved and the work that must be done to deliver a project. The software scope is a well-defined boundary which encompasses all the activities that are done to develop and deliver the software product. It clearly defines all the functionalities and artifacts to be delivered as part of the software.

4.1.3 Definition:

The Software Requirements Specification is a description of a software system to be developed. It is modeled after the business requirements specification, also known as the stakeholder requirements specification.

4.2 REQUIREMENT ANALYSIS


The process of gathering software requirements from clients and analyzing and documenting them is known as requirements engineering or requirements analysis. The goal of requirements engineering is to develop and maintain a sophisticated and descriptive 'System/Software Requirements Specification' document. It is generally a four-step process, which includes:

• Feasibility Study


• Requirements Gathering
• Software Requirements Specification
• Software Requirements Validation

The basic requirements of our project are:

• Research Papers
• Camera

4.2.1 FUNCTIONAL REQUIREMENT ANALYSIS

Functional requirements explain what has to be done by identifying the necessary task, action, or activity that must be accomplished. Functional requirements analysis will be used as the top-level functions for functional analysis.

4.2.2 USER REQUIREMENTS ANALYSIS


User requirements analysis is the process of determining user expectations for a new or modified product. These requirements must be quantifiable, relevant, and detailed.

4.2.3 NONFUNCTIONAL REQUIREMENT ANALYSIS

Non-functional requirements describe the general characteristics of a system. They are also known as quality attributes. Some typical non-functional requirements are performance, response time, throughput, utilization, and scalability.

Performance:

The performance of a device is essentially estimated in terms of efficiency,


effectiveness and speed.

• Short response time for a given piece of work.


• High throughput (rate of processing work)
• Short data transmission time.

Response Time:

Response time is the time a system or functional unit takes to react to a given input.


4.3 SYSTEM REQUIREMENTS

4.3.1 Software Requirements


Operating System: Windows 7 or 10 Ultimate, Linux, Mac.

Coding Language: Python

Software Environment: Anaconda

Packages: numpy, pandas, matplotlib, tensorflow, itertools, seaborn, os, sys, random, warnings, math

4.3.2 Hardware Requirements


Processor: Intel i3, i5, or i7

RAM: 4GB or Higher

Hard Disk: 500GB or Higher

CHAPTER-5
SYSTEM DESIGN

5.1 SYSTEM ARCHITECTURE:

The data characteristics of salt bodies are different from those of natural images. For example, the shape of salt bodies is indefinite with no prior knowledge of shape, and the texture of geological data is more prominent. Therefore, general semantic segmentation
networks do not perform well on salt body data. Aiming at these characteristics of the salt
body data, we designed a deep supervised semantic segmentation model and optimized the
segmentation results through edge prediction branch. The structure of the proposed method is
illustrated in Fig. 1. In this section, we first present the proposed deep-supervised model.
Then we describe how edge prediction branch optimizes segmentation results. Finally, we
introduce some important components used in the network framework, which are
indispensable for the final improvement.

Figure 5.1. Deep Supervised Model

5.2 MODULES:

There are 3 modules used


They are:

1. Deep-Supervised Model
2. Edge Prediction Branch
3. Optimized Edge Prediction

5.2.1. Deep-Supervised Model:

The overall architecture of the model is a U-shaped structure, which includes an


encoder and a decoder, and they fuse feature maps with corresponding resolutions through
jump connections. The encoder is a process of downsampling step by step. The backbone uses ResNet-34, which balances effectiveness and time consumption. The decoder is a
process of upsampling step by step, and it finally outputs the original size segmentation
results. Due to the insufficient amount of data, we use ImageNet pretrained weights for the
encoder for better experimental results and shorter training time.
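As an illustrative sketch of this U-shaped flow (plain numpy shape bookkeeping only, not the actual ResNet-34 network; `down`, `up`, and `u_shape` are hypothetical helpers):

```python
import numpy as np

def down(x):
    # 2x2 max pooling: halves spatial resolution (one encoder step)
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def up(x):
    # nearest-neighbour upsampling: doubles spatial resolution (one decoder step)
    return x.repeat(2, axis=0).repeat(2, axis=1)

def u_shape(img, depth=3):
    skips = []
    x = img
    for _ in range(depth):       # encoder: downsample step by step
        skips.append(x)
        x = down(x)
    for _ in range(depth):       # decoder: upsample and fuse the skip feature
        x = up(x)                # ("jump") connection of matching resolution
        x = x + skips.pop()
    return x

img = np.random.rand(32, 32)
out = u_shape(img)
print(out.shape)  # (32, 32): the decoder restores the original size
```

The skip connections are what make the architecture "U-shaped": each decoder level fuses the encoder feature map of the same resolution.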

Since a considerable part of the salt data does not contain salt at all, this will cause
confusion to the segmentation result. In order to reduce or even eliminate this adverse effect,
we add a binary classification at the top of the encoder to predict whether the image contains
salt, which is defined as nonempty or empty. Specifically, the feature map at the top of the
encoder undergoes a global average pooling and then outputs the classification results. This
classification branch uses cross-entropy loss supervision Lclass, which assists the supervision,
promotes the network to learn the semantic information, and generates better segmentation
results.
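A minimal numpy sketch of this classification branch, with hypothetical feature-map sizes and classifier weights (the real branch is a learned layer inside the network):

```python
import numpy as np

def global_average_pool(fmap):
    # fmap: (C, H, W) feature map at the top of the encoder -> (C,) vector
    return fmap.mean(axis=(1, 2))

def binary_cross_entropy(p, y):
    # L_class: cross-entropy between the predicted salt-presence probability p
    # and the empty/nonempty label y (1 = image contains salt)
    eps = 1e-7
    p = np.clip(p, eps, 1 - eps)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

fmap = np.random.rand(64, 8, 8)            # hypothetical encoder output
pooled = global_average_pool(fmap)          # (64,)
w = np.random.randn(64) * 0.01              # hypothetical classifier weights
p = 1.0 / (1.0 + np.exp(-(pooled @ w)))    # sigmoid -> P(image is nonempty)
loss = binary_cross_entropy(p, y=1.0)       # supervision signal L_class
```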

Additionally, we add a branch at the top of the decoder to predict the segmentation
results of images that include salt, which further eliminates the negative impact of empty
images on the network. In order to obtain better supervision, we perform semantic
supervision of nonempty pictures at different levels of the decoder and the losses are set to
Lfinal and Lnon_empty, respectively. The loss Lnon_empty is calculated as below where i denotes the
level of the decoder

L_non_empty = Σ_{i=1}^{5} L_non_empty^i
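The summation over decoder levels can be sketched as follows; the per-level predictions are hypothetical stand-ins, and binary cross-entropy is assumed here as the per-level segmentation loss:

```python
import numpy as np

def level_loss(pred, target):
    # per-level segmentation loss L^i_non_empty (binary cross-entropy here)
    eps = 1e-7
    pred = np.clip(pred, eps, 1 - eps)
    return float(np.mean(-(target * np.log(pred)
                           + (1 - target) * np.log(1 - pred))))

# hypothetical outputs from the 5 decoder levels, each upsampled to label size
rng = np.random.default_rng(0)
target = (rng.random((16, 16)) > 0.5).astype(float)
level_preds = [rng.random((16, 16)) for _ in range(5)]

# L_non_empty = sum over i = 1..5 of L^i_non_empty
L_non_empty = sum(level_loss(p, target) for p in level_preds)
```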


As shown in Fig. 1, we embed a global average pooling layer to extend the U-shape
architecture, then we concatenate the feature map of this layer with the feature map of the
final stage decoder, and finally output the segmentation results of all the images. With the
global average pooling layer, we introduce the global context information into the network as
a guidance.

5.2.2. Edge Prediction Branch:

In the semantic segmentation task, it is hard to extract different objects with similar
appearance, especially when they are adjacent spatially. In order to improve the segmentation
performance, it is necessary to amplify the distinction of features. In the salt body data set,
the spatial shapes of salt and nonsalt are more amorphous. For this reason, we propose an
edge prediction branch to guide feature learning. The branch directly learns the semantic
boundary through explicit semantic boundary supervision, so that the features on both sides
of the semantic boundary are more distinguishable. The main purpose of this edge prediction
branch is to obtain more accurate boundary information. Generally, the low level features
have more accurate spatial information and are more instructive for boundary prediction.
Therefore, the design of this branch is bottom-up.


Figure 5.2.2. Edge Prediction Branch

The label of the edge prediction branch can be obtained from the label of the semantic
segmentation. The specific method can use traditional image processing methods to extract
edges, such as the sobel operator, the canny operator, and so on. The branch uses focal loss
[24] that is defined as Ledge. The details of the edge prediction branch are illustrated in Fig. 2.
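The edge-label extraction described above can be sketched with a hand-rolled Sobel operator; this illustration and the toy mask are hypothetical, not the project's actual preprocessing code:

```python
import numpy as np

def sobel_edges(mask):
    # Derive a binary edge label from a binary salt segmentation mask
    # using Sobel gradients (one of the traditional operators mentioned above).
    gx_k = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    gy_k = gx_k.T
    h, w = mask.shape
    edge = np.zeros_like(mask, dtype=float)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = mask[i - 1:i + 2, j - 1:j + 2]
            gx = (patch * gx_k).sum()
            gy = (patch * gy_k).sum()
            edge[i, j] = np.hypot(gx, gy)
    return (edge > 0).astype(float)   # binary label for the edge branch

# toy mask: a salt block in the centre; edges appear only on its boundary
mask = np.zeros((8, 8))
mask[2:6, 2:6] = 1.0
edge_label = sobel_edges(mask)
```

Pixels strictly inside the salt block have zero gradient, so only the salt/non-salt boundary is marked.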
Finally, the total loss L of our model is calculated as

L = L_final + α · L_non_empty + β · L_class + γ · L_edge

where α, β, and γ are weighting parameters. Our method of selecting loss weights consists of
two steps. First, different loss functions have different orders of magnitude.
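As a worked example of the weighted sum, with hypothetical loss values and the weights α = 0.1, β = 0.05, γ = 0.1 that this report settles on:

```python
# the four losses would come from the corresponding branches of the network;
# the numeric values below are hypothetical
L_final, L_non_empty, L_class, L_edge = 0.40, 0.35, 0.20, 0.50

# weighting parameters alpha, beta, gamma
alpha, beta, gamma = 0.1, 0.05, 0.1

# total loss L = L_final + alpha*L_non_empty + beta*L_class + gamma*L_edge
L_total = L_final + alpha * L_non_empty + beta * L_class + gamma * L_edge
print(L_total)  # 0.495
```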


5.2.3 Optimized Edge Prediction:

In order to get an optimized edge a Sigmoid function is used.

Sigmoid function: Used to predict the probability of the output, as its values always fall within a fixed range.

Logistic Function: Gives the output in the range (0, 1).
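A one-line sketch of the sigmoid (logistic) function and its (0, 1) range:

```python
import math

def sigmoid(x):
    # logistic function: maps any real score into the range (0, 1),
    # interpreted here as the probability that a pixel is salt
    return 1.0 / (1.0 + math.exp(-x))

print(sigmoid(0.0))   # 0.5: an undecided score maps to probability 0.5
```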

Figure 5.2.3. Architecture of optimized Edge Prediction


A basic principle is to adjust these loss weights to a unified order of magnitude. Because of the segmentation task, the magnitude of the remaining losses should be consistent with the main loss Lfinal. Second, according to the control-variable method, only one weight is changed in each training process, and the accuracy of salt segmentation is gradually improved. Thus, a set of relatively optimum weighting parameters is obtained. In this letter, we set α, β, γ to 0.1, 0.05, 0.1, respectively.

Important Components: First, we inserted a lightweight attention mechanism module into each level of the encoder and decoder modules. This module is composed of a channel attention module and a spatial attention module, called concurrent spatial and channel squeeze & excitation modules (scSE) [25].
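A simplified numpy sketch of the scSE idea, with scalar gate weights standing in for the module's learned layers (the real scSE uses 1x1 convolutions and a fully connected bottleneck):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def scse(fmap, w_c, w_s):
    # fmap: (C, H, W). Concurrent spatial and channel squeeze & excitation.
    # channel attention: squeeze spatially, excite each channel
    z = fmap.mean(axis=(1, 2))                   # (C,) global average pool
    ch_gate = sigmoid(w_c * z)                   # (C,) per-channel weights
    cse = fmap * ch_gate[:, None, None]
    # spatial attention: squeeze channels, excite each pixel
    sp_gate = sigmoid(w_s * fmap.mean(axis=0))   # (H, W) per-pixel weights
    sse = fmap * sp_gate[None, :, :]
    return np.maximum(cse, sse)                  # combine both recalibrations

fmap = np.random.rand(4, 8, 8)
out = scse(fmap, w_c=1.0, w_s=1.0)
```

Because every gate lies in (0, 1), the module can only suppress features, letting the network keep the channels and positions favourable for the current task.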

They rescale and recalibrate the channel and spatial dimensions to let the network
automatically obtain the channel and spatial characteristics that are favorable for the current
task, and suppress the disadvantageous characteristics. In addition, we also used a strategy
called hypercolumns [26] when predicting the final segmentation results. The features that we
use to predict the segmentation results are not just the output of the last layer of the final
decoder. We upsample the feature maps of all decoders to the same size and concatenate
them together as the source of feature maps for predicting the segmentation results. It allows
getting more precise localization and captures the semantics at the same time.
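The hypercolumns strategy can be sketched as follows, with hypothetical decoder feature-map shapes:

```python
import numpy as np

def upsample(fmap, size):
    # nearest-neighbour upsampling of a (C, H, W) feature map to (C, size, size)
    C, H, W = fmap.shape
    return fmap.repeat(size // H, axis=1).repeat(size // W, axis=2)

# hypothetical feature maps from three decoder levels of different resolutions
decoder_maps = [np.random.rand(8, 8, 8),
                np.random.rand(4, 16, 16),
                np.random.rand(2, 32, 32)]

# hypercolumns: upsample all decoder outputs to the same size and
# concatenate along channels as the source for the final prediction
hyper = np.concatenate([upsample(f, 32) for f in decoder_maps], axis=0)
print(hyper.shape)  # (14, 32, 32)
```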

5.3 Design Overview


UML combines best techniques from data modeling (entity relationship diagrams),
business modeling (work flows), object modeling, and component modeling. It can be used
with all processes, throughout the software development life cycle, and across different
implementation technologies UML has synthesized the notations of the Booch method, the
Object modeling technique (OMT) and object-oriented software engineering (OOSE) by
fusing them into a single, common and widely usable modeling language. UML aims to be a
standard modeling language which can model concurrent and distributed systems.


Figure 5.3. Types and categories of UML diagrams

5.4 UML Diagrams

The Unified Modeling Language (UML) is used to specify, visualize, modify, construct, and document the artifacts of an object-oriented software-intensive system under development. UML offers a standard way to visualize a system's architectural blueprints, including elements such as:

• Actors
• Business process
• (Logical) Components
• Activities
• Programming language statements
• Database schemas, and


• Reusable software components.

The Unified Modelling Language allows the software engineer to express an analysis model using a modelling notation that is governed by a set of syntactic, semantic, and pragmatic rules. A UML system is represented using five different views that describe the system from distinctly different perspectives. Each view is defined by a set of diagrams, as follows:

User Model View:

• This view represents the system from the user's perspective.


• The analysis representation describes a usage scenario from the end-user's
perspective.
• The UML user model view encompasses the models which define a solution to a
problem as understood by the client stakeholders.

Structural Model View:

• In this model the data and functionality are arrived at from inside the system.
• This model view models the static structures.

Behavioral Model View:

• It represents the dynamic or behavioural parts of the system, depicting the interactions and collaborations between the various structural elements described in the user model and structural model views.

Implementation Model View:

• Implementation view is also known as Architectural view which typically captures the
enumeration of all the subsystems in the implementation model, the component
diagrams illustrating how subsystems are organized in layers and hierarchies and
illustrations of import dependencies between subsystems.

Environmental Model View:

• These UML models describe both structural and behavioural dimensions of the domain or environment in which the solution is implemented. This view is often also referred to as the deployment or physical view.


5.4.1 Use case Diagram:

A flow of events is a sequence of transactions performed by the system. Flows of events typically contain very detailed information, written in terms of what the system should do, not how the system accomplishes the task. Flows of events are created as separate files or documents in your favourite text editor and then attached or linked to a use case using the Files tab of a model element.

[Figure: use case diagram with the system actor connected to the use cases image data, feature extraction, global average pooling, salt body segmentation, training the data, applying the CNN algorithm, and salt body predictions.]

Figure 5.4.1. Use case Diagram

Use case diagrams are usually referred to as behaviour diagrams, used to describe a set of actions (use cases) that some system should or can perform in collaboration with one or more external users of the system (actors).


5.4.2 Activity Diagram:

Activity Diagrams are graphical representations of Workflows of stepwise activities and


actions with support for choice, iteration and concurrency. In the Unified Modelling
language, activity diagrams can be used to describe the business and operational step-by-step
workflows of components in a system. An activity diagram shows the overall flow of control.

Figure 5.4.2. Activity Diagram


5.4.3 Sequence Diagram:


A sequence diagram in UML is a kind of interaction diagram that shows how
processes operate with one another and in what order. It is a construct of a Message
Sequence Chart. A sequence diagram shows, as parallel vertical lines ("lifelines"),
different processes or objects that live simultaneously and, as horizontal arrows, the
messages exchanged between them, in the order in which they occur. This allows the
specification of simple runtime scenarios in a graphical manner.

Figure 5.4.3. Sequence Diagram

5.4.4 Component Diagram:

Component diagrams are used to display the various components of a software system as well as
the subsystems of a single system. They are used to represent physical things or components of a
system, and generally visualize the structure and organization of a system.


Figure 5.4.4. Component Diagram

5.4.5 Deployment Diagram:


A deployment diagram is a type of diagram that specifies the physical hardware on which the
software system will execute. It also determines how the software is deployed on the
underlying hardware.

Figure 5.4.5. Deployment Diagram


5.3 ALGORITHM:

Algorithm for Proposed system DSED:

Step 1: For salt body segmentation, a seismic image is taken as input.

Step 2: In 1x1 convolution, a filter of size 1x1 is selected and convolved with the input. In this
manner the entire image undergoes convolution pixel by pixel.

Step 3: Then a 3x3 convolution filter is selected. Although applied as a 2D convolution, the
selected 3x3 filter always has a 3D size: its depth is determined by the number of channels of
the input seismic image.

Step 4: Batch normalization is a process of tuning the pixel activations by calculating the
error rate from the previous data of the seismic image. This tuning ensures a very low
error rate. The combined objective is

M = M_last + P ∗ M_unempty + Q ∗ M_class + R ∗ M_border

Step 5: In this step the sigmoid function is applied. It is a logistic mathematical function,
introduced to produce accurate results: the input may be any real number, and the output is
always a value in [0, 1]:

M(a) = M / (1 + e^(−r(a−a0)))

Step 6: The Rectified Linear Unit (ReLU) is an activation function. It outputs the input value
when the input is positive; if the input value is negative, zero is returned:

Y = max(0, a)

Step 7: Steps 3, 5, and 6 are repeated to refine the results.

Step 8: After completion of all the above steps, an image is produced, which then undergoes
edge detection.


In the above algorithm, we first take a seismic image as input. Then a 1x1 convolution is
performed [23-25] on that input, visiting every single pixel of the seismic image. After that
a 3x3 filter is applied, as a 2D convolution over regular 2D matrix objects (2D images).
Then the batch normalization process is carried out, followed by the logistic sigmoid
function. Its input can be any real number, but its output always ranges from 0 to 1. In the
formula, a0 is the value of the sigmoid midpoint, M is the curve's maximum value, r is the
logistic growth rate, and a ranges between −∞ and +∞. The next step is the Rectified Linear
Unit function: if the input is positive, the output equals the input, and if it is negative,
zero is returned. The process continues in this way, with some steps repeated, which yields
the salt body boundary in the given image. Using this we have successfully obtained the
boundary of the salt body, and we have examined the results using different tests, which are
discussed in the results section.
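As a rough illustrative sketch of these per-pixel operations in plain NumPy (the kernels and the toy 4x4 patch below are invented for the example, not the project's trained filters):

```python
import numpy as np

def conv2d_same(img, kernel):
    """Naive 'same'-padded 2D convolution of one channel with one kernel."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)), mode="constant")
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def sigmoid(a, M=1.0, r=1.0, a0=0.0):
    """Logistic function from Step 5: M / (1 + e^(-r(a - a0)))."""
    return M / (1.0 + np.exp(-r * (a - a0)))

def relu(a):
    """Step 6: max(0, a), applied element-wise."""
    return np.maximum(0.0, a)

# Toy seismic patch (values are illustrative, not real data).
img = np.arange(16, dtype=float).reshape(4, 4)
x = conv2d_same(img, np.array([[0.5]]))       # Step 2: 1x1 convolution
x = conv2d_same(x, np.ones((3, 3)) / 9.0)     # Step 3: 3x3 convolution
prob = sigmoid(x - x.mean())                  # Step 5: per-pixel salt probability
act = relu(x - x.mean())                      # Step 6: ReLU activation
```

The sigmoid output can then be thresholded to give a binary salt/non-salt label per pixel.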


CHAPTER-6
IMPLEMENTATION

6.1. Steps for Implementation


Implementation in Python

What is a Script?

A script, or scripting language, is a computer language with a series of commands within a file
that can be executed without being compiled. This is a very useful capability that
allows us to type in a program and have it executed immediately in interactive mode.

• Scripts are reusable


• Scripts are editable

Difference between a script and a program

Script:

Scripts are distinct from the core code of the application, which is usually written in a
different language, and are often created or at least modified by the end user. Scripts are
often interpreted from source code or bytecode, whereas the applications they control are
traditionally compiled to native machine code.

Program:

A program has an executable form that the computer can use directly to execute the
instructions. The same program also exists in its human-readable source code form, from which
the executable is derived (e.g., by compilation).

Python:

What is Python? Python is an interpreted, high-level, general-purpose programming language.


It supports multiple programming paradigms, including procedural, object-oriented, and
functional programming.

Python concepts:


Python is a high-level, interpreted, interactive and object-oriented scripting language. Python
is designed to be highly readable. It frequently uses English keywords where other
languages use punctuation, and it has fewer syntactical constructions than other languages.

• Python is interpreted – Python is processed at runtime by the interpreter. You do not


need to compile your program before executing it. This is similar to PERL and PHP.
• Python is Interactive - You can actually sit at a Python prompt and interact with the
interpreter directly to write your programs.
• Python is Object-Oriented – Python supports Object-Oriented style or technique of
programming that encapsulates code within objects.
• Python is a Beginner’s Language – Python is a great language for the beginner-level
programmers and supports the development of a wide range of applications from
simple text processing to WWW browsers to games.

History of Python

Python was developed by Guido van Rossum in the late eighties and early nineties at
the National Research Institute for Mathematics and Computer Science in the
Netherlands.

Python is derived from many other languages, including ABC, Modula-3, C, C++,
Algol-68, Smalltalk, and Unix shell and Other scripting languages.

Python is copyrighted. Python source code is available under an open-source license (the
Python Software Foundation License), which is GPL-compatible.

Python is now maintained by a core development team, although Guido van Rossum has long
held a vital role in directing its progress.

Python Features

Python's features include:

• Easy-to-learn: Python has few keywords, a simple structure, and a clearly defined
syntax. This allows the student to pick up the language quickly.
• Easy-to-read: Python code is more clearly defined and visible to the eyes.
• Easy-to-maintain: Python's source code is fairly easy to maintain.


• A broad standard library: the bulk of Python's library is very portable and
cross-platform compatible on UNIX, Windows, and Macintosh.
• Interactive Mode: Python has support for an interactive mode which allows
interactive testing and debugging of snippets of code.
• Portable: Python can run on a wide variety of hardware platforms and has the
same interface on all platforms.
• Extensible: You can add low-level modules to the Python interpreter. These
modules enable programmers to add to or customize their tools to be more
efficient.
• Databases: Python provides interfaces to all major commercial databases.
• GUI Programming: Python supports GUI applications that can be created and
ported to many system calls, libraries and windows systems, such as Windows
MFC, Macintosh, and the X Window system of Unix.
• Scalable: Python provides a better structure and support for large programs than
shell scripting.

Python modules:

Python allows us to store our code in files (also called modules). To support this, Python
has a way to put definitions in a file and use them in a script or in an interactive instance of
the interpreter. Such a file is called a module; definitions from a module can be imported into
other modules or into the main module.

Testing code:

• Code is usually developed in a file using an editor.
• To test the code, import it into a Python session and try to run it.
• Usually there is an error, so you go back to the file, make a correction, and test
again. This process is repeated until you are satisfied that the code works. The entire
process is known as the development cycle.

About Jupyter Notebook:

The Jupyter Notebook is an open-source web application that allows you to create and
share documents that contain live code, equations, visualizations and narrative text. Uses


include: data cleaning and transformation, numerical simulation, statistical modeling, data
visualization, machine learning.

About Spyder:

Spyder is an open-source, cross-platform integrated development environment (IDE) for
scientific programming in the Python language. Spyder uses Qt for its GUI and is designed
to use either the PyQt or PySide Python bindings.

SIGMOID AND RELU FUNCTIONS:

Sigma is sometimes called the logistic function, and this class of neurons is called logistic
neurons. In other words, the output of a sigmoid neuron with inputs x1, x2, ..., weights w1,
w2, ..., and bias b is sigma(z) = 1/(1 + e^(−z)), where z = w·x + b. The algebraic forms of the
sigmoid neuron function and the Rectified Linear Unit (ReLU) are as follows:

Figure 6.1. Sigmoid Function and Rectified Linear Unit (ReLU)

At first, the sigmoid neuron appears very different from the perceptron. In fact, there are many
similarities between perceptrons and sigmoid neurons. To understand the similarity to the
perceptron model, suppose z = w·x + b is a large positive number; then e^(−z) ≈ 0 and so
sigma(z) ≈ 1. In other words, when z = w·x + b is large and positive, the output from the
sigmoid neuron is approximately 1, just as it would have been for a perceptron. If, on the
other hand, z = w·x + b is very negative, then e^(−z) → ∞ and sigma(z) ≈ 0. So, when


z = w·x + b is very negative, the behavior of the sigmoid neuron also closely approximates a
perceptron. It is only when w·x + b is of modest size that there is much deviation from the
perceptron model. What about the algebraic form of sigma? How can we understand that? In
fact, the exact form of sigma isn't so important; what really matters is the shape of the
function when plotted.

If sigma had in fact been a step function, then the sigmoid neuron would be a
perceptron, since the output would be 1 or 0 based on whether w·x + b is positive or
negative. The smoothness of sigma means that small changes in the weights and in the bias
will produce a small change in the output from the neuron. In fact, calculus tells us that
the change in the output is well approximated by the following equation:

Δoutput ≈ Σj (∂output/∂wj) Δwj + (∂output/∂b) Δb

Actually, when w·x + b = 0 the perceptron outputs 0, while the step function outputs 1,
so strictly speaking we would need to modify the step function at that one point. Here the
sum is over all the weights wj, and ∂output/∂wj and ∂output/∂b denote the partial
derivatives of the output with respect to wj and b, respectively. The change in the output
is a linear function of the changes in the weights and bias. This linearity makes it
easy to choose small changes in the weights and biases to achieve any desired small change
in the output. So, while sigmoid neurons have much of the same qualitative behavior as
perceptrons, they make it much easier to figure out how changing the weights and biases will
change the output.
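A quick numerical check of this limiting behaviour and smoothness (a toy sketch, not code from the project):

```python
import numpy as np

def sigmoid(z):
    # sigma(z) = 1 / (1 + e^(-z))
    return 1.0 / (1.0 + np.exp(-z))

# Large positive z: the neuron fires ~1, like a perceptron; very negative z: ~0.
print(sigmoid(10.0), sigmoid(-10.0))

# Smoothness: a small change dz produces a small output change, well
# approximated by the derivative sigma'(z) = sigma(z) * (1 - sigma(z)).
z, dz = 0.5, 1e-3
approx = sigmoid(z) * (1.0 - sigmoid(z)) * dz
exact = sigmoid(z + dz) - sigmoid(z)
```

The near-equality of `approx` and `exact` is exactly the linear approximation written above.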

6.1.1 Architecture of Neural Networks:

Suppose we have the network as shown below,

Here, the leftmost layer in this network is called the input layer, and the neurons within
that layer are called input neurons. The rightmost, or output, layer contains the output
neurons or, as in this case, a single output neuron. The middle layer is called a hidden
layer, since the neurons in this layer are neither inputs nor outputs.


The network above has just a single hidden layer, but some networks have multiple hidden
layers. For example, the following four-layer network has two hidden layers.

For historical reasons, multiple-layer networks are sometimes called multilayer perceptrons
or MLPs, despite being made up of sigmoid neurons, not perceptrons. The design of the input
and output layers in a network is often straightforward. For example, suppose we're trying to
determine whether a handwritten image depicts a "9" or not. A natural way to design the
network is to encode the intensities of the image pixels into the input neurons. If the image is
a 64 by 64 greyscale image, then we'd have 4096 = 64 × 64 input neurons, with the intensities
scaled appropriately between 0 and 1. The output layer will contain just a single neuron, with
output values of less than 0.5 indicating "input image is not a 9" and values greater than 0.5
indicating "input image is a 9".

Neural network researchers have developed many design heuristics for the hidden
layers, which help people get the desired behavior from their nets. For example, such heuristics
can be used to help determine how to trade off the number of hidden layers against the time
required to train the network. Neural networks where the output from one layer is used as
input to the next layer are called feedforward neural networks, which means that there are no
loops in the network. If we did have loops, we'd end up with situations where the input to the
sigmoid function depended on its own output; such situations are not allowed.
There are other models of artificial neural networks in which feedback loops are possible.
These models are called recurrent neural networks.

What we want is an algorithm which lets us find weights and biases so that the output from the
network approximates y(x) for all training inputs x. To quantify how well we're achieving this
goal, we define a cost function:

C(w, b) = (1/2n) Σx ||y(x) − a||²

The above equation is sometimes referred to as a loss or objective function. Here, w
denotes the collection of all weights in the network, b all the biases, n is the total number of
training inputs, a is the vector of outputs from the network when x is input, and the sum is
over all training inputs x. The output a depends on x, w, and b. The notation ||v|| just
denotes the usual length function for a vector v. We call C the quadratic cost function; it is
also known as the mean squared error, or just MSE. By studying the form of the
quadratic cost function, we see that


• C(w, b) is non-negative, since every term in the sum is non-negative. Furthermore,
the cost C(w, b) becomes small, i.e., C(w, b) ≈ 0, precisely when y(x) is approximately
equal to the output a for all training inputs x. So our training algorithm has done a
good job if it can find weights and biases such that C(w, b) ≈ 0.
• By contrast, it is not doing so well when C(w, b) is large, as that would mean y(x)
is not close to the output a for a large number of inputs. So the aim of our
training algorithm will be to minimize the cost C(w, b) as a function of the weights
and biases. In other words, we want to find a set of weights and biases which makes
the cost as small as possible. We will do that using an algorithm known as gradient
descent, by which the minimization problem is solved.
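Writing the quadratic cost out directly makes these two properties easy to verify; the toy targets and outputs below are invented for illustration:

```python
import numpy as np

def quadratic_cost(outputs, targets):
    """C = (1/2n) * sum_x ||y(x) - a||^2 over n training inputs."""
    n = len(targets)
    return sum(np.sum((y - a) ** 2) for y, a in zip(targets, outputs)) / (2 * n)

# Toy targets and two candidate sets of network outputs (made-up numbers).
targets = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
good = [np.array([0.9, 0.1]), np.array([0.1, 0.9])]
bad = [np.array([0.5, 0.5]), np.array([0.5, 0.5])]

c_good = quadratic_cost(good, targets)  # small cost: outputs near targets
c_bad = quadratic_cost(bad, targets)    # larger cost: outputs far from targets
```

When the outputs exactly equal the targets, the cost is exactly zero, which is what gradient descent drives towards.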

Deep Convolutional Neural Networks

Deep learning is a family of machine learning methods concerned with algorithms inspired by
artificial neural networks. It is also known as deep structured learning or differentiable
programming. Artificial neural networks are inspired by the biological neural networks that
constitute the brain, and are based on a collection of connected units, or nodes, called
artificial neurons, like those of a brain. A deep neural network is an artificial neural
network with multiple layers between the input and output layers, whereas a recurrent neural
network is a class of artificial neural network in which the connections between nodes form a
directed graph.

Deep learning architectures such as deep neural networks, recurrent neural networks, and
convolutional neural networks are being applied to fields including computer vision, speech
recognition, audio processing, bioinformatics, and medical image analysis, where they have
produced strong results. The benefit of deep learning models is their ability to perform
automatic feature extraction from raw data, also called feature learning.

Convolutional Neural Networks:

Convolutional Neural Network models were developed for image classification, in
which the model accepts a two-dimensional input representing an image's pixels and color
channels, in a process called feature learning. CNNs take a biological inspiration from the


visual cortex. The visual cortex has small regions of cells that are sensitive to specific
regions of the visual field. So the CNN is a deep neural network. Image classification is the
task of taking an input image and outputting a class, or a probability over classes, that best
describes the image.

For humans, this task of recognition is one of the first skills we learn from the moment
we are born, and one that comes naturally and effortlessly as adults. Without even thinking
twice, we are able to quickly and seamlessly identify the environment we are in as well as the
objects that surround us. These skills of being able to quickly recognize patterns,
generalize from prior knowledge, and adapt to different image environments are ones that
we do not share with our fellow machines.

1D Convolutional Neural Networks:

The model extracts features from sequence data and maps the internal features of the
sequence. A 1D CNN is very effective for deriving features from a fixed-length segment of
the overall dataset, where the location of the feature within the segment is not so important.

The 1D CNN works well in the following cases-

• Analysis of a time series of sensor data.
• Analysis of signal data over a fixed-length period, for example, an audio recording.
• Natural language processing, although recurrent neural networks that leverage Long
Short-Term Memory cells are often more promising than CNNs here, as they take into
account the proximity of words to create trainable patterns.

2D Convolutional Neural Networks:

2D convolutional layers take a three-dimensional input, typically an image with three
color channels. They pass a filter, also called a convolutional kernel, over the image, a small
window of pixels at a time, for example 3x3 or 5x5 pixels in size, moving the window
until they have scanned the entire image. The convolution operation calculates the dot
product of the pixel values in the current filter window with the weights defined in the filter.
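That windowed dot product can be sketched in a few lines of NumPy (the 5x5 image and the kernel values are invented for the demo):

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Slide a kernel over a single-channel image ('valid' positions only);
    each output value is the dot product of the window with the kernel weights."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            window = image[i:i + kh, j:j + kw]
            out[i, j] = np.sum(window * kernel)  # dot product of window and filter
    return out

# A 5x5 toy image and a 3x3 edge-style kernel (values invented for the demo).
image = np.arange(25, dtype=float).reshape(5, 5)
kernel = np.array([[-1.0, 0.0, 1.0]] * 3)
feature_map = conv2d_valid(image, kernel)
```

Because the toy image increases by 1 per column, this horizontal-gradient kernel produces a constant feature map, which is the expected response to a uniform gradient.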

3D Convolutional Neural Networks:


3D convolutions apply a three-dimensional filter to the dataset, and the filter moves in three
directions to calculate low-level feature representations. Their output is a three-dimensional
volume, such as a cube or cuboid.

They are helpful for event detection in videos, 3D medical images, and so on. They are not
limited to 3D space but can also be applied to 2D inputs such as images.

In a convolution, small areas of an image are scanned, the probability that they belong to
a filter class is assigned, and this is translated into an activation map, a representation of
the image layers. In a 3D CNN, the kernels move through three dimensions of data (height,
length, and depth) and produce 3D activation maps. Convolutional neural networks are a type
of deep model that can act directly on raw inputs; however, such models are often limited to
handling 2D inputs. A novel 3D CNN model can be developed for action recognition.


Figure 6.1.1. Upsampling and Downsampling of a seismic image using 3D CNN


Normalization images with Image Data Generator

This class can be used to rescale pixel values from the range 0-255 to the range 0-1 preferred
for neural network models. Scaling data to the range 0-1 is traditionally referred to as
normalization. This can be achieved by setting the rescale argument to a ratio by which each
pixel will be multiplied to achieve the desired range. Here, the ratio is 1/255, or about
0.0039.

For example:

# create generator (1.0/255.0 = 0.003921568627451)

datagen = ImageDataGenerator(rescale=1.0/255.0)

Target:

Target represents the desired output that we want our model to learn. In the case of a
classification problem, the targets are the labels of each of the examples in the training
set.

For example, target_size is a tuple of integers (height, width), with default (256, 256):
the dimensions to which all images found will be resized.

Batch:

We can't pass the entire dataset into the neural net at once, so we divide the dataset into a
number of batches, sets, or parts. Batch size is the total number of training examples present
in a single batch; in other words, it is the number of training examples processed in one
forward/backward pass.

Epoch:

An epoch represents a full pass over the entire training set, meaning that the model has seen
each example once. The number of training iterations per epoch is:

iterations per epoch = total number of training examples / batch size


In other words, an epoch is one forward pass and one backward pass of all the training
examples.
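This bookkeeping can be computed directly; the dataset size and batch size below are invented for illustration:

```python
import math

# Hypothetical training-set size and batch size.
total_examples = 4000
batch_size = 32

# One epoch = every example seen once, processed batch by batch.
iterations_per_epoch = math.ceil(total_examples / batch_size)

# Ten epochs therefore means ten forward+backward passes over the data.
epochs = 10
total_iterations = iterations_per_epoch * epochs
```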

Steps per Epoch:

An epoch usually means one iteration over all of the training data. With steps per epoch, we
can instead set a fixed number of steps, for instance 1000 per epoch, even if we have a much
larger dataset.

Samples per epoch:

One epoch means that each sample in the training dataset has had an opportunity to update
the internal model parameters. An epoch is comprised of one or more batches.

Validation data:

A validation set is a subset of your dataset containing examples that are available to a neural
network for adjusting the hyperparameters and the model architecture based on the validation
loss. The validation set is used during training to run validation examples through the model
after each epoch.

6.2 Coding:

import os, sys, random, warnings, math


import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from tqdm.auto import tqdm, trange
from itertools import chain
import cv2
from skimage.io import imread, imshow, concatenate_images
from skimage.transform import resize
from skimage.morphology import label
from tensorflow.keras.preprocessing.image import array_to_img, img_to_array, load_img, ImageDataGenerator
import tensorflow as tf
from tensorflow.keras import backend as K


from tensorflow.keras import models, Input, layers, callbacks, utils, optimizers


class config:
im_width = 128
im_height = 128
im_chan = 1
path_train = 'train/'
path_test = 'test/'
Data Exploration

! unzip -q ../input/tgs-salt-identification-challenge/train.zip -d train/


! unzip -q ../input/tgs-salt-identification-challenge/test.zip -d test/
random.seed(19)
ids = random.choices(os.listdir('train/images'), k=6)
fig = plt.figure(figsize=(20,6))
for j, img_name in enumerate(ids):
q = j+1
img = load_img('train/images/' + img_name)
img_mask = load_img('train/masks/' + img_name)

plt.subplot(2, 6, q*2-1)
plt.imshow(img)
plt.subplot(2, 6, q*2)
plt.imshow(img_mask)
fig.suptitle('Sample Images', fontsize=24);
train_ids = next(os.walk(config.path_train+"images"))[2]
test_ids = next(os.walk(config.path_test+"images"))[2]
X = np.zeros((len(train_ids), config.im_height, config.im_width, config.im_chan), dtype=np.uint8)
Y = np.zeros((len(train_ids), config.im_height, config.im_width, 1), dtype=bool)
print('Getting and resizing train images and masks ... ')
sys.stdout.flush()
for n, id_ in tqdm(enumerate(train_ids), total=len(train_ids)):
x = img_to_array(load_img(config.path_train + '/images/' + id_, color_mode="grayscale"))


x = resize(x, (128, 128, 1), mode='constant', preserve_range=True)


X[n] = x
mask = img_to_array(load_img(config.path_train + '/masks/' + id_,
color_mode="grayscale"))
Y[n] = resize(mask, (128, 128, 1), mode='constant', preserve_range=True)
print('Done!')
print('X shape:', X.shape)
print('Y shape:', Y.shape)
X_train = X[:int(0.9*len(X))]
Y_train = Y[:int(0.9*len(X))]
X_eval = X[int(0.9*len(X)):]
Y_eval = Y[int(0.9*len(X)):]
X_train = np.append(X_train, [np.fliplr(x) for x in X], axis=0)
Y_train = np.append(Y_train, [np.fliplr(x) for x in Y], axis=0)
X_train = np.append(X_train, [np.flipud(x) for x in X], axis=0)
Y_train = np.append(Y_train, [np.flipud(x) for x in Y], axis=0)
del X, Y
print('X train shape:', X_train.shape, 'X eval shape:', X_eval.shape)
print('Y train shape:', Y_train.shape, 'Y eval shape:', Y_eval.shape)
Model Building

def BatchActivate(x):
    x = layers.BatchNormalization()(x)
    x = layers.Activation('relu')(x)
    return x

def convolution_block(x, filters, size, strides=(1, 1), padding='same', activation=True):
    x = layers.Conv2D(filters, size, strides=strides, padding=padding)(x)
    if activation:
        x = BatchActivate(x)
    return x

def residual_block(blockInput, num_filters=16, batch_activate=False):
    x = BatchActivate(blockInput)
    x = convolution_block(x, num_filters, (3, 3))
    x = convolution_block(x, num_filters, (3, 3), activation=False)
    x = layers.Add()([x, blockInput])
    if batch_activate:
        x = BatchActivate(x)
    return x

def build_model(input_layer, start_neurons, DropoutRatio=0.5):
    scaled = layers.Lambda(lambda x: x / 255)(input_layer)
    conv1 = layers.Conv2D(start_neurons * 1, (3, 3), activation=None, padding="same")(scaled)
    conv1 = residual_block(conv1, start_neurons * 1)
    conv1 = residual_block(conv1, start_neurons * 1, True)
    pool1 = layers.MaxPooling2D((2, 2))(conv1)
    pool1 = layers.Dropout(DropoutRatio / 2)(pool1)
    # 50 -> 25
    conv2 = layers.Conv2D(start_neurons * 2, (3, 3), activation=None, padding="same")(pool1)
    conv2 = residual_block(conv2, start_neurons * 2)
    # ... (the remaining layers of build_model are not shown here)

model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['acc'])


model.summary()
X_test = np.zeros((len(test_ids), config.im_height, config.im_width, config.im_chan), dtype=np.uint8)

sizes_test = []

print('Getting and resizing test images ... ')

sys.stdout.flush()

for n, id_ in tqdm(enumerate(test_ids), total=len(test_ids)):

x = img_to_array(load_img(config.path_test + '/images/' + id_, color_mode="grayscale"))

sizes_test.append([x.shape[0], x.shape[1]])

x = resize(x, (128, 128, 1), mode='constant', preserve_range=True)

X_test[n] = x

print('Done!')


preds_test = model.predict(X_test, verbose=1)

preds_test_upsampled = []

for i in trange(len(preds_test)):
    preds_test_upsampled.append(resize(
        np.squeeze(preds_test[i]), (sizes_test[i][0], sizes_test[i][1]),
        mode='constant', preserve_range=True
    ))


CHAPTER-7
SYSTEM TESTING

7.1 Testing:
The purpose of testing is to discover errors. Testing is the process of trying to discover every
conceivable fault or weakness in a work product. It provides a way to check the functionality
of components, subassemblies, assemblies, and/or a finished product. It is the process of
exercising software with the intent of ensuring that the software system meets its
requirements and user expectations and does not fail in an unacceptable manner. There are
various types of test, and each test type addresses a specific testing requirement.
Manual Testing:
Manual testing means testing software manually, i.e., without using any automated tool or
script. In this type, the tester takes on the role of an end user and tests the software to
identify any unexpected behavior or bug. There are different stages of manual testing, such as
unit testing, integration testing, system testing, and user acceptance testing.
Automation Testing:
Automation testing, also known as test automation, is when the tester writes scripts
and uses additional software to test the product. This process involves the automation of a
manual process. Automation testing is used to re-run test scenarios that were performed
manually, quickly, and repeatedly.
What to Automate?
It is not possible to automate everything in a piece of software. Areas in which a user
performs transactions, such as login or registration forms, and any area where a large number
of users can access the software simultaneously, should be automated.
When to Automate?
Test Automation should be used by considering the following aspects of a software
• Large and critical projects
• Projects that require testing the same areas frequently
• Requirements not changing frequently
• Accessing the application for load and performance with many virtual users
• Stable software with respect to manual testing

VNITSW-CSE 42| P a g e
SALT BODY SEGMENTATION BASED ON EDGE DETECTION USING DEEP SUPERVISED LEARNING

• Availability of time

How to Automate?
Automation is done by using a supportive computer language like VB scripting and an
automated software application. There are many tools available that can be used to write
automation scripts. Before mentioning the tools, let us identify the process that can be used to
automate the testing process –
• Identifying areas within a software for automation
• Selection of appropriate tool for test automation
• Writing test scripts
• Development of test suites
• Execution of scripts
• Create result reports
• Identify any potential bug or performance issue

7.2 TYPES OF TESTS:


7.2.1 Unit Testing:
Unit testing involves the design of test cases that validate that the internal program logic is
functioning properly and that program inputs produce valid outputs. All decision branches
and internal code flow should be validated. It is the testing of individual software units of the
application, and is done after the completion of an individual unit and before integration.
This is structural testing that relies on knowledge of the unit's construction and is invasive.
Unit tests perform basic tests at the component level and test a specific business process,
application, and/or system configuration. Unit tests ensure that each unique path of a business
process performs accurately to the documented specifications and contains clearly defined
inputs and expected results.
Unit testing is usually conducted as part of a combined code and unit test
phase of the software lifecycle, although it is not uncommon for coding and unit testing to be
conducted as two distinct phases.
Test strategy and approach:
• Field testing will be performed manually and functional tests will be written in detail.
• Test objectives
• All field entries must work properly.


• Pages must be activated from the identified link.


• The entry screen, messages and responses must not be delayed.
Features to be tested:
• Verify that the entries are of the correct format
• No duplicate entries should be allowed
• All links should take the user to the correct page.
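A minimal sketch of unit tests for the first two features above; the email validator and registry class are hypothetical stand-ins for individual units of the application.

```python
import re

# Hypothetical field validator (stand-in for one unit of the application).
def is_valid_email(entry):
    """Entry must match a simple email format."""
    return re.fullmatch(r"[\w.+-]+@[\w-]+\.[\w.]+", entry) is not None

class Registry:
    """Unit under test: accepts valid entries and rejects duplicates."""
    def __init__(self):
        self._entries = set()

    def add(self, entry):
        if not is_valid_email(entry) or entry in self._entries:
            return False
        self._entries.add(entry)
        return True

# Unit tests: one unique path of the business process per test.
def test_correct_format_accepted():
    assert is_valid_email("user@example.com")

def test_bad_format_rejected():
    assert not is_valid_email("not-an-email")

def test_duplicates_rejected():
    r = Registry()
    assert r.add("user@example.com")       # first entry accepted
    assert not r.add("user@example.com")   # duplicate rejected
```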

7.2.2 Integration Testing:


Integration tests are designed to test integrated software components to determine if they
actually run as one program. Testing is event driven and is more concerned with the basic
outcome of screens or fields. Integration tests demonstrate that although the components were
individually satisfactory, as shown by successful unit testing, the combination of
components is correct and consistent. Integration testing is specifically aimed at exposing the
problems that arise from the combination of components.
Software integration testing is the incremental integration testing of two or
more integrated software components on a single platform to produce failures caused by
interface defects.
The task of the integration test is to check that components or software
applications, e.g. components in a software system or – one step up – software applications at
the company level – interact without error.
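As a small illustration, an integration test exercises two components together rather than in isolation; the parser and store functions below are hypothetical stand-ins, not the project's components.

```python
# Two individually unit-tested components (hypothetical stand-ins).
def parse_form(raw):
    """Component A: parse 'key=value' pairs from a submitted form string."""
    return dict(pair.split("=", 1) for pair in raw.split("&") if "=" in pair)

def save_user(store, fields):
    """Component B: persist a parsed record, requiring a username field."""
    if "username" not in fields:
        raise ValueError("missing username")
    store[fields["username"]] = fields
    return True

# Integration test: the combination must be correct and consistent,
# exposing interface defects that unit tests alone would miss.
def test_form_to_store():
    store = {}
    fields = parse_form("username=alice&role=admin")
    assert save_user(store, fields)
    assert store["alice"]["role"] == "admin"
```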

Test Results:
All the test cases mentioned above passed successfully. No defects encountered.

7.2.3 Functional Testing:


Functional tests provide systematic demonstrations that functions tested are available as
specified by the business and technical requirements, system documentation, and user
manuals.
Functional testing is centered on the following items:
Valid Input: identified classes of valid input must be accepted.
Invalid Input: identified classes of invalid input must be rejected.
Functions: identified functions must be exercised.


Output: identified classes of application outputs must be exercised.


Systems/Procedures: interfacing systems or procedures must be invoked.
Organization and preparation of functional tests is focused on requirements, key
functions, or special test cases. In addition, systematic coverage of identified
business process flows, data fields, predefined processes, and successive processes
must be considered for testing. Before functional testing is complete, additional tests
are identified and the effective value of current tests is determined.
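The valid-input/invalid-input classes above can be sketched as a table-driven functional test; the age-field validator and the class boundaries are hypothetical examples, not the project's requirements.

```python
# Hypothetical function under test: an age field accepting integers 1..120.
def accept_age(value):
    try:
        n = int(value)
    except (TypeError, ValueError):
        return False
    return 1 <= n <= 120

# Valid input: identified classes of valid input must be accepted.
VALID = ["1", "30", "120"]
# Invalid input: identified classes of invalid input must be rejected.
INVALID = ["0", "121", "-5", "abc", "", None]

def test_valid_classes():
    assert all(accept_age(v) for v in VALID)

def test_invalid_classes():
    assert not any(accept_age(v) for v in INVALID)
```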

7.2.4 System Testing:


System testing ensures that the entire integrated software system meets requirements. It tests a
configuration to ensure known and predictable results. An example of system testing is the
configuration-oriented system integration test. System testing is based on process descriptions
and flows, emphasizing pre-driven process links and integration points.

7.2.5 White Box Testing:


White box testing is testing in which the software tester has knowledge of the inner
workings, structure, and language of the software, or at least its purpose. It is used to test
areas that cannot be reached from a black-box level.

7.2.6 Black Box Testing:


Black box testing is testing the software without any knowledge of the inner workings,
structure, or language of the module being tested. Black box tests must be written from a
definitive source document, such as a specification or requirements document. It is testing
in which the software under test is treated as a black box: you cannot "see" into it. The test
provides inputs and responds to outputs without considering how the software works.


CHAPTER 8
RESULTS

8.1 RESULTS:
8.1.1. Accuracy
It is defined as the ratio of number of correctly predicted observations to the total number of
observations
Accuracy = ( TPr+ TNg ) / ( TPr+ FPr + FNg + TNg)

Our proposed DSED model has shown good results compared to the existing approaches.
Though CNN also shows good results as the number of epochs increases, our proposed
system's accuracy is close to 98% because of its low false-positive rate.

Figure 8.1.1. Accuracy for proposed DSED

8.1.2. Precision

It is the ratio of the number of correctly predicted positive observations to the total number
of predicted positive observations.
Precision = TPr / (TPr + FPr)

Figure 8.1.2. Precision for proposed DSED


Here the result shows that our proposed approach has better precision irrespective of the
number of epochs, compared to the other two approaches, since the number of positive
observations has no effect; for the other two, as the false observations increased, precision
was affected.

8.1.3 Recall
It is the ratio of number of correctly predicted positive observations to all observations in
actual class.

Recall=TPr / ( TPr + FNg)

Figure 8.1.3. Recall for proposed DSED

8.1.4 F-Score
F-Score is the harmonic mean of Recall and Precision.
F-Score = 2 * (Recall * Precision) / (Recall + Precision)

Figure 8.1.4. F-Score for proposed DSED


Our proposed DSED approach has shown better recall compared to the other two; it is
sometimes less stable because of the sampling, but for most of the observations it gave
good recall.
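The four metrics above can be computed directly from raw confusion-matrix counts. The counts used below are made-up illustrative numbers, not the project's measured values.

```python
# Accuracy, precision, recall, and F-score from confusion-matrix counts.
def metrics(tp, fp, fn, tn):
    accuracy  = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall    = tp / (tp + fn)
    f_score   = 2 * (recall * precision) / (recall + precision)
    return accuracy, precision, recall, f_score

# Illustrative counts only (not the project's data).
acc, prec, rec, f1 = metrics(tp=940, fp=60, fn=40, tn=960)
print(f"accuracy={acc:.3f} precision={prec:.3f} recall={rec:.3f} f1={f1:.3f}")
```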

Table 8.1 Comparison of proposed DSED with Salt Segmentation [11]

Metric       SaltSeg    Proposed DSED
Accuracy     0.960      0.980
Precision    0.900      0.923
Recall       0.956      0.947
F1 Score     0.923      0.915

Our proposed method is compared with the existing SaltSeg, which is a 3D model based on
CNN. We tested the performance using all the metrics mentioned above; all tests were
conducted over 100 epochs. The results clearly show that the proposed method has better
accuracy and precision, while recall and F1 score are almost identical.


8.2 Screenshots

Figure 8.2.1. Training the Model

Figure 8.2.2. After Training


Figure 8.2.3. Model Building

Figure 8.2.4. Output of Modeling


Figure 8.2.5. Testing the Model

Figure 8.2.6. Segmentation Results on TGS Data Set


CHAPTER-9
CONCLUSION AND FUTURE WORK

9.1 CONCLUSION & FUTURE WORK


In this project, we mainly concentrated on finding the boundaries of salt bodies. For
categorizing salt bodies we used a deep-supervised method, which locates the salt bodies
exactly. Semantic segmentation and a convolution process are applied to the seismic
image, classifying every pixel as salt body or not. To classify precisely and accurately, a
deep-supervised algorithm is used. To speed up the training of the network, ReLU is used,
and to increase uniformity and add non-linearity to the data, a sigmoid function is used,
which predicts the salt delineation accurately. We evaluated our system using different
parameters and obtained good accuracy and precision compared to the existing
approaches. Overall, we ended up with good results.


BIBLIOGRAPHY

1. Yunzhi Shi, Xinming Wu, and Sergey Fomel, "Salt classification using deep learning,"
The University of Texas at Austin, 2018.
2. Gopi, A. P., Jyothi, R. N. S., Narayana, V. L., & Sandeep, K. S. (2020). Classification
of tweets data based on polarity using improved RBF kernel of SVM. International
Journal of Information Technology, 1-16.
3. K. J. Naik, "Classification and Scheduling of Information-Centric IoT Applications in
Cloud-Fog Computing Architecture (CS_IcIoTA)," 2020 14th International Conference
on Innovations in Information Technology (IIT), 2020, pp. 82-87.
4. "A Flexible Sigmoid Function of Determinate Growth," Annals of Botany, Volume 91,
Issue 3, February 2003, Pages 361-371. https://doi.org/10.1093/aob/mcg029
5. Rao, B. T., Patibandla, R. L., Narayana, V. L., & Gopi, A. P. (2021). Medical Data
Supervised Learning Ontologies for Accurate Data Analysis. Semantic Web for
Effective Healthcare, 249-267.
6. Naik, K. J., & Soni, A. (2021). Video Classification Using 3D Convolutional Neural
Network. In A. Kumar & S. Reddy (Eds.), Advancements in Security and Privacy
Initiatives for Multimedia Images (pp. 1-18). IGI Global.
7. F. Meng et al., "Constrained directed graph clustering and segmentation propagation
for multiple foregrounds co-segmentation," IEEE Trans. Circuits Syst. Video
Technol., vol. 25, no. 11, pp. 1735-1748, Nov. 2015.
8. Z. Miao, K. Fu, S. Hao, S. Xian, and M. Yan, "Automatic water-body segmentation
from high-resolution satellite images via deep networks," IEEE Geosci. Remote Sens.
Lett., vol. PP, no. 99, pp. 1-5, 2018.
9. J. Long, E. Shelhamer, and T. Darrell, "Fully convolutional networks for semantic
segmentation," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Jun. 2015,
pp. 3431-3440.
10. K. Jairam Naik, "A Dynamic ACO based Elastic Load Balancer for Cloud
Computing (D-ACOELB)," Data Engineering and Communication Technology
(ICDECT), Lecture Notes in Advances in Intelligent Systems and Computing, book
series 1079, Springer Nature Singapore, pp. 11-20, 2020.


11. Alan Souza, Wilson Leao, Daniel Miranda, Nelson Hargreaves, Bruno Pereira Dias,
and Erick Talarico, PETROBRAS Petroleo Brasileiro, "Salt segmentation using deep
learning," Apr. 2017.
12. Mikhail Karchevskiy, Insaf Ashrapov, and Leonid Kozinkin, "Automatic salt
deposits segmentation: A deep learning approach," arXiv preprint arXiv:1812.01429, 2018.
13. S. Oh, K. Noh, D. Yoon, S. J. Seol, and J. Byun, "Salt delineation from
electromagnetic data using convolutional neural networks," IEEE Geosci. Remote
Sens. Lett., vol. 16, no. 4, pp. 519-523, Apr. 2019.
14. H. Di, M. Shafiq, and G. AlRegib, "Multi-attribute K-means cluster analysis for salt
boundary detection," in Proc. 79th EAGE Conf. Exhibit., Jun. 2017, pp. 1-5.
15. C. Ramirez, G. Larrazabal, and G. Gonzalez, "Salt body detection from seismic data
via sparse representation," 2016.

CHAPTER 11

PUBLICATION
Figure 11. Paper Acceptance
Figure 12. Certification of Participation
RESEARCH ARTICLE | APRIL 28 2023
Salt body segmentation based on edge detection using deep supervised learning
Satya Sandeep Kanumalli; Nobina Parvin Syed; Pranathi Sunkara; ... et al.
AIP Conference Proceedings 2724, 020009 (2023)
https://doi.org/10.1063/5.0130158


Salt Body Segmentation Based on Edge Detection Using
Deep Supervised Learning
Satya Sandeep Kanumalli1,a), Nobina Parvin Syed1,b), Pranathi Sunkara1,c),
Sneha Varsha Nuthalapati1,d), Sahitha Padarthi1,e)

1Vignan's Nirula Institute of Technology & Science for Women, Guntur, Andhra Pradesh, India

a) Corresponding author: satyasandeepk@gmail.com
b) sdnobinaparvin@gmail.com
c) Pranathi.sunkara2001@gmail.com
d) rashi.nuthalapati@gmail.com
e) sahithasahi06@gmail.com

Abstract: Convolutional neural networks have been used efficiently in many fields, and increasing effort is being applied in
the area of seismic imaging. Seismic image analysis is important in a variety of industrial applications; for example, finding
underground salt bodies plays a vital role in detecting oil and gas reservoirs. Seismic image analysis still requires experts to
examine the salt body, which is a time-taking process. In this paper, we present a deep-supervised edge detection and
optimization method that accurately segments the salt body. We create an edge-prediction branch to detect the salt body's
border, which aids feature learning by supervising boundary loss. We also use a sigmoid (logistic) function, which gives
output effectively and accurately. We compared our model with two existing approaches and ended up with good results.
Keywords
Deep Learning, Salt segmentation, Sigmoid function, Seismic imaging.

INTRODUCTION
Many regions on earth hold large amounts of oil and gas underground, which in turn form large amounts of salt
bodies. Salt boundary analysis is a model for understanding the structural model, the seismic migration speed, and a
construction model. The traditional methodology of analysis relies on experts to examine the salt body, its
attributes, and features. The properties are produced manually by experts, which may lead to highly variable results
and complex noise pollution. The results produced are not always accurate, and due to such incorrect analysis, the
company's drilling personnel are also put in danger. The data collected from existing work can be divided into four
categories [2-3]: (i) the use of seismic attributes; (ii) the implementation of computer vision techniques; (iii) the
combination of seismic attributes; and (iv) deep learning algorithms [1].
Recently, computer vision methods have been implemented using learning methods with different features, such as
object detection and semantic segmentation [5]. Using deep learning statistical models, the algorithm immediately
translates raw input data into geological region mapping. Previous investigations have regarded the salt body as a
segmentation [8] challenge, with image featuring, while other studies considered the salt body as a regression
problem. Deep learning algorithms like CNN [6,9] are used to predict the salt bodies from the data.
F. Meng et al. [7] proposed methods which used directed graphs for the purpose of clustering and segmenting salt
foregrounds. J. Long et al. [9] treated salt bodies as semantic segmentation. LBP (Local Binary Patterns) are
used to obtain the features accurately. The encoder-decoder architecture, which consists of a convolutional layer and
pixel-level binary labels, is presented for salt segmentation. Each pixel takes the form of a probability representing
salt/non-salt. In this paper we propose a deeply supervised U-shaped model for effective salt division and
introduce a "Salt Boundary Prediction" branch to optimize the results. It contains an encoder and a decoder for the
purpose of down-sampling and up-sampling. Another unit, ReLU (Rectified Linear Unit), is also used; with it the
output is accurate, but not faster. So we used a sigmoid function (Xinyou Yin et al. [4]) to predict the probability of
the output and improve the efficiency of the resultant salt-delineated part.

Computational Intelligence and Network Security


AIP Conf. Proc. 2724, 020009-1–020009-7; https://doi.org/10.1063/5.0130158
Published by AIP Publishing. 978-0-7354-4456-0/$30.00

020009-1
RELATED WORK
Alan Souza et al. [11] use convolutional neural networks for semantic delineation of salt bodies on different seismic
volume data. The numerical experiments in the paper show that, even with a small number of interpreted lines, one
can obtain reasonable salt segmentation results, and the calculations are very easy.
Mikhail Karchevskiy et al. [12] address the manual interpretation of seismic images by geophysics specialists. The
predictions provided even by a single DL model were able to achieve 27th place, and the efficiency is very high, but
it is a tedious process. S. J. Seol et al. [13] distribute the electrical resistivity data and map it with electromagnetic
data using a CNN. With this, the subsurface salt structure can be found easily. It is efficient, stable, and reliable, but
it does not encode the exact position of the image, and a lot of training data is necessary.
H. Di et al. [14] use the K-means clustering algorithm. The attribute domain used generates a probability volume
that not only finds the salt boundaries but also supports effective salt interpretation. More advanced salt
interpretation is done, but as the number of attributes used increases, the computational time also increases.
C. Ramirez et al. [15] take a sparse representation of the seismic data. The sparsifying dictionary used gives
robustness on account of redundancy, which is highly stable, but space complexity, time complexity, and limited
processing power are major issues, as it uses sparse representation.
Z. Long et al. [16] use a 2D discrete Fourier transform algorithm. The noise-robust algorithms developed were able
to label the boundaries of salt domes effectively and efficiently and were robust to noise; the salt dome boundaries
were detected efficiently and accurately, but the mathematical calculations are complex.
A. U. Waldeland et al. [17] use a convolutional neural network. Classification of the dataset is done easily, as just a
small cube from the input data set is taken, and there is no need for attribute observation. It can achieve test
accuracy over 70%, but it requires training data from many different datasets.
Sergey et al. [18] use a multilayer convolutional neural network for classification of the salt body. Even unseen
parts are generalized, producing more optimized results, and the model can be applied to large 3D data, but there is
redundancy in high dimensions and it disregards spatial information.

PROPOSED MECHANISM
The salt bodies are completely different from natural images, which is a major issue: because of it, a general
semantic segmentation model cannot be used to produce accurate results, so a deep-supervised model is used.
Feature Extraction: A feature describes a characteristic of an image, so feature extraction is an important aspect of
seismic imaging. Before feature extraction, every image undergoes several phases such as normalization, resizing,
and binarizing. Local Binary Pattern (LBP) is an image-featuring operator which outputs a label for each pixel by
thresholding its neighborhood of pixels against an initial threshold value. The operator is used as LBP_{P,R}^{u2},
and histograms of the resulting labels are used as features.

The subscript (P, R) represents using the operator in a (P, R) neighborhood, the superscript u2 means uniform
patterns, and n is the number of different labels produced.
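The LBP step can be sketched with scikit-image's `local_binary_pattern`; the library choice, image size, and (P, R) values here are assumptions for illustration, and a random array stands in for a normalized seismic image.

```python
import numpy as np
from skimage.feature import local_binary_pattern

# A random 2-D array stands in for a normalized, resized seismic image.
rng = np.random.default_rng(0)
image = (rng.random((64, 64)) * 255).astype(np.uint8)

P, R = 8, 1  # (P, R) neighborhood: 8 samples at radius 1 (assumed values)
lbp = local_binary_pattern(image, P, R, method="uniform")

# With "uniform" patterns there are n = P + 2 distinct labels; the histogram
# of these labels is the texture feature vector for the image.
n_labels = P + 2
hist, _ = np.histogram(lbp, bins=n_labels, range=(0, n_labels), density=True)
print(hist.shape)  # (10,)
```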
A Deep-Supervised Model: In this paper, the entire architecture is defined using a simple U-shaped structure. It has
one encoder and one decoder. Using jump connections between the encoder and the decoder, feature maps at
matching resolutions are fused. Up-sampling and down-sampling of the image are also done; the down-sampling of
the entire image is performed pixel by pixel by the encoder.

After the encoding and decoding of the image, the original size of the segmented image is obtained. There may be
conditions in which some part of the image does not contain any salt. In order to overcome this negative impact on
the sampling process, a binary classification is used. This classification identifies the parts of the image that contain
salt at the top of the encoder. Meanwhile, it passes through a global average pooling process to detect the salt that is
already present. To predict the segmented salt region, a branch is added to eliminate the negative impact of empty
images on the network.
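The U-shaped encoder-decoder with jump connections can be sketched in Keras as below; the input size, filter counts, and depth are illustrative assumptions, not the paper's exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_unet(input_shape=(128, 128, 1)):
    inp = layers.Input(shape=input_shape)

    # Encoder: down-sampling path
    c1 = layers.Conv2D(16, 3, padding="same", activation="relu")(inp)
    p1 = layers.MaxPooling2D(2)(c1)
    c2 = layers.Conv2D(32, 3, padding="same", activation="relu")(p1)
    p2 = layers.MaxPooling2D(2)(c2)

    # Bottleneck
    b = layers.Conv2D(64, 3, padding="same", activation="relu")(p2)

    # Decoder: up-sampling path with jump (skip) connections fusing
    # feature maps at matching resolutions
    u2 = layers.UpSampling2D(2)(b)
    u2 = layers.concatenate([u2, c2])
    c3 = layers.Conv2D(32, 3, padding="same", activation="relu")(u2)
    u1 = layers.UpSampling2D(2)(c3)
    u1 = layers.concatenate([u1, c1])
    c4 = layers.Conv2D(16, 3, padding="same", activation="relu")(u1)

    # Pixel-level binary labels: sigmoid gives a salt probability per pixel
    out = layers.Conv2D(1, 1, activation="sigmoid")(c4)
    return tf.keras.Model(inp, out)

model = build_unet()
print(model.output_shape)  # (None, 128, 128, 1)
```

The decoder restores the original image size, so the output is a per-pixel salt/non-salt probability map matching the input resolution.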

Edge Prediction Branch: It is always a difficult task to predict the exact image in a pool of similar images. So feature
learning is used, which can be done by an Edge Prediction Branch. Here ReLU (Rectified Linear Unit) is used,
which improves the training speed of the data.


Figure 1. System architecture of proposed DSED

The importance of ReLU here is to produce the results efficiently and quickly. The branch uses a focal loss term,
which can be called M_border:
M = M_last + P ∗ M_un-empty + Q ∗ M_class + R ∗ M_border

P, Q, and R are called weighting attributes. The loss weights are chosen by a two-part process: each loss function
has its own loss magnitude in a different order, and the basic aim is to bring these different magnitudes to the same
order. Then M_last is obtained. But using this alone we may not get the exact boundary values, so a sigmoid
function is used.
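The combined loss M above can be sketched as follows; the cross-entropy and focal-loss forms, and the values of P, Q, R, and gamma, are placeholder assumptions for illustration, not the paper's tuned settings.

```python
import numpy as np

def bce(y, p, eps=1e-7):
    """Binary cross-entropy between labels y and predicted probabilities p."""
    p = np.clip(p, eps, 1 - eps)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

def focal(y, p, gamma=2.0, eps=1e-7):
    """Focal loss: down-weights easy examples via the (1 - pt)**gamma factor."""
    p = np.clip(p, eps, 1 - eps)
    pt = np.where(y == 1, p, 1 - p)  # probability assigned to the true class
    return -np.mean((1 - pt) ** gamma * np.log(pt))

def total_loss(m_last, m_unempty, m_class, m_border, P=0.5, Q=0.5, R=1.0):
    """M = M_last + P*M_un-empty + Q*M_class + R*M_border.
    P, Q, R are chosen so the differently-scaled terms share one magnitude order."""
    return m_last + P * m_unempty + Q * m_class + R * m_border
```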
Sigmoid function: A sigmoid function is a mathematical function; the family contains several functions such as the
hyperbolic tangent, arctangent, and logistic functions. It gives non-linearity, i.e., it introduces uniformity and
provides evenness to the data. If the logistic function is used, the function gives accurate results whose values fall in
the range [0, 1].


Algorithm: Edge Detection using Deep Supervised Learning


Step 1: For salt body Segmentation, a Seismic image is taken as input.
Step 2: In 1x1 convolution a filter of size 1x1 is selected and it undergoes convolution. In this manner the
entire image undergoes convolution pixel by pixel.
Step 3: Then a 3x3 convolution filter is selected. Although a 2D convolution is applied, the selected 3x3
filter always has a 3D size, because its depth is based on the number of channels of the input
seismic image.


Step 4: Batch Normalization is a process of tuning pixels by calculating the error rate from the previous
data of the seismic image. This tuning ensures that there is a very low error rate.
M = M_last + P ∗ M_un-empty + Q ∗ M_class + R ∗ M_border
Step 5: In this step, the Sigmoid Function is applied. It is a logistic mathematical function, introduced
to produce accurate results. The input may be any real number; the output always lies in the
range [0, 1].

Step 6: The Rectified Linear Unit function (ReLU) is an activation function. It outputs the input value
if that value is positive; if the input value is negative, the output is zero.
y = max(0, a)
Step 7: Repeat Steps 3 to 6 to improve accuracy.
Step 8: After completion of all the above steps, the output image has a segmented edge.
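Steps 1-6 can be sketched as one small Keras block; the input size and filter counts are assumptions, and placing ReLU after batch normalization with a sigmoid output head follows common practice rather than the exact step order above.

```python
import tensorflow as tf
from tensorflow.keras import layers

inp = layers.Input(shape=(128, 128, 1))              # Step 1: seismic image input
x = layers.Conv2D(16, 1, padding="same")(inp)        # Step 2: 1x1 convolution
x = layers.Conv2D(16, 3, padding="same")(x)          # Step 3: 3x3 convolution
x = layers.BatchNormalization()(x)                   # Step 4: batch normalization
x = layers.Activation("relu")(x)                     # Step 6: ReLU, y = max(0, a)
edge = layers.Conv2D(1, 1, activation="sigmoid")(x)  # Step 5: sigmoid output in [0, 1]

model = tf.keras.Model(inp, edge)
print(model.output_shape)  # (None, 128, 128, 1)
```

In practice the 3x3/BatchNorm/ReLU group would be repeated (Step 7) to improve accuracy before the sigmoid head produces the segmented edge map (Step 8).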

In the above algorithm, we first take a seismic image as input. Then a 1x1 convolution is performed [23-25] on that
input, taking every single pixel of the seismic image. After that the 3x3 filter is applied, which is like a 2D
convolution applied to regular 2-D matrix objects (2-D images). After that the batch normalization process is done.
Then the sigmoid function is performed, using the logistic sigmoid: the input can be any real number, but the output
always ranges from 0 to 1. In that formula, a0 is the value of the sigmoid midpoint, M is the curve's maximum
value, and r is the logistic growth rate, with a ranging between -∞ and +∞. The next step is the rectified linear unit
function: if the input is positive the output is also positive, and if it is negative it returns only a zero value as output.
Some steps are then repeated, which gives the salt body boundary in the given image. Using this we have
successfully obtained the boundary for the salt body, and we have examined the results using different tests, which
are discussed in the results section.

RESULTS
In this section we compare our proposed DSED algorithm with two existing approaches, the CNN of S. Oh [13] and
the K-means of H. Di [14], using a confusion matrix with three parameters: accuracy, precision, and recall. We
have calculated the true positive, false positive, true negative, and false negative rates of our system and of the
other two mechanisms as well; the results are as follows.

Accuracy
It is defined as the ratio of number of correctly predicted observations to the total number of observations
Accuracy = ( TPr+ TNg ) / ( TPr+ FPr + FNg + TNg)
Our proposed DSED model has shown good results compared to the existing approaches. Though CNN also shows
good results as the number of epochs increases, our proposed system's accuracy is close to 98% because of its low
false-positive rate.

FIGURE 2: Accuracy for proposed DSED

Precision


It is the ratio of the number of correctly predicted positive observations to the total number of predicted positive
observations.
Precision = TPr / (TPr + FPr)

FIGURE 3: Precision for proposed DSED

Here the result shows that our proposed approach has better precision irrespective of the number of epochs,
compared to the other two approaches, since the number of positive observations has no effect; for the other two, as
the false observations increased, precision was affected.

Recall
It is the ratio of number of correctly predicted positive observations to all observations in actual class.
Recall=TPr / ( TPr + FNg)

FIGURE 4: Recall for proposed DSED

F-Score
F-Score is the harmonic mean of Recall and Precision.
F-Score = 2 * (Recall * Precision) / (Recall + Precision)


FIGURE 5: F-Score for proposed DSED

Our proposed DSED approach has shown better recall compared to the other two; it is sometimes less stable
because of the sampling, but for most of the observations it gave good recall.

Table 1: Comparison of proposed DSED with SaltSeg [16]

Metric       SaltSeg    Proposed DSED
Accuracy     0.960      0.980
Precision    0.900      0.923
Recall       0.956      0.947
F1 Score     0.923      0.915

Our proposed method is compared with the existing SaltSeg, which is a 3D model based on CNN. We tested the
performance using all the metrics mentioned above; all tests were conducted over 100 epochs. The results clearly
show that the proposed method has better accuracy and precision, while recall and F1 score are almost identical.

CONCLUSION
In this paper we mainly concentrated on finding the boundaries of salt bodies. For categorizing salt bodies we used
a deep-supervised method, which locates the salt bodies exactly. These methods are useful for detecting oil and gas
reservoirs because salt bodies form wherever oil and gas are present, so this analysis is very helpful to industry.
Semantic segmentation and a convolution process are applied to the seismic image, classifying each pixel as salt
body or not. To classify precisely and accurately, a deep-supervised algorithm is used. To speed up the training of
the network, ReLU (the rectified linear activation function) is used; this activation function outputs zero if the input
is a negative value, and outputs a positive value if the input is positive. To increase the uniformity and add
non-linearity to the data, a sigmoid function is used, which predicts the salt delineation accurately.

REFERENCES

1. Yunzhi Shi∗ ,Xinming Wu and Sergey Fomel,” Salt classification using Deep Learning” The University of
Texas at Austin,2018
2. Gopi, A. P., Jyothi, R. N. S., Narayana, V. L., & Sandeep, K. S. (2020). Classification of tweets data based
on polarity using improved RBF kernel of SVM. International Journal of Information Technology, 1-16.

020009-6
3. K.J Naik (2021), "Classification and Scheduling of Information-Centric IoT Applications in Cloud- Fog
Computing Architecture (CS_IcIoTA)," 2020 14th International Conference on Innovations in Information
Technology (IIT), 2020, pp. 82-87,
4. Annals of Botany, Volume 91, Issue 3, February 2003, Pages 361-371,” A Fliexible Sigmoid function of
Determinate growth” https://doi.org/10.1093/aob/mcg029 published on 01 February 2003.
5. Rao, B. T., Patibandla, R. L., Narayana, V. L., & Gopi, A. P. (2021). Medical Data Supervised Learning
Ontologies for Accurate Data Analysis. Semantic Web for Effective Healthcare, 249-267.
6. Naik, K. J., &Soni, A. (2021). Video Classification Using 3D Convolutional Neural Network, In A. Kumar,
& S. Reddy (Ed.), Advancements in Security and Privacy Initiatives for Multimedia Images (pp. 1-18). IGI
Global.
7. F. Meng et al., “Constrained directed graph clustering and segmentation propagation for multiple
foregrounds co-segmentation,” IEEE Trans. Circuits Syst. Video Technol., vol. 25, no. 11, pp. 1735–1748,
Nov. 2015
8. Z. Miao, K. Fu, S. Hao, S. Xian, and M. Yan, “Automatic water-body segmentation from high-resolution
satellite images via deep networks,” IEEE Geosci. Remote Sens. Lett., vol. PP, no. 99, pp. 1–5, 2018.

Downloaded from http://pubs.aip.org/aip/acp/article-pdf/doi/10.1063/5.0130158/17125737/020009_1_5.0130158.pdf


9. J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” in Proc.
10. #K Jairam Naik, “A Dynamic ACO based Elastic Load Balancer for Cloud Computing (D-
ACOELB)”, Data Engineering and Communication Technology (ICDECT), Lecture Notes in Advances in
Intelligent Systems and Computing, book series 1079, Springer Nature Singapore, pp. 11-20, 2020.
11. Alan Souza, Wilson Leao, Daniel Miranda, Nelson Hargreaves, Bruno Pereira Dias and Erick Talarico ˜
PETROBRAS PetroleoBrasileiro” Salt segmentation using deep learning” Apr,2017.
12. Mikhail Karchevskiy, InsafAshrapov, Leonid KozinkinarXiv preprint “Automatic salt deposits
segmentation: A deep learning approach ”arXiv:1812.01429, 2018.
13. S. Oh, K. Noh, D. Yoon, S. J. Seol, and J. Byun, “Salt delineation from electromagnetic data using
convolutional neural networks,” IEEE Geosci. Remote Sens. Lett., vol. 16, no. 4, pp. 519–523, Apr. 2019.
14. H. Di, M. Shafiq, and G. AlRegib, “Multi-attribute K-means cluster analysis for salt boundary detection,”
in Proc. 79th EAGE Conf. Exhibit., Jun. 2017, pp. 1–5
15. C.Ramirez, Gs.Larrazabal, and G. Gonzalez,” Salt body detection from seismic data via sparse
representation” 2016
16. Jiangtao Guo, Linfeng Xu, Jisheng Ding, Bin He, Shengxuan Dai, and Fangyu Liu, “A Deep
Supervised Edge Optimization Algorithm for Salt Body Segmentation,” Aug. 4, 2020.
17. A. U. Waldeland and A. H. S. S. Solberg (University of Oslo), “Noise-robust detection and
tracking of salt domes in post-migrated volumes using texture, tensors, and subspace learning,” 2017.
18. Patibandla, R. L., Narayana, V. L., Gopi, A. P., & Rao, B. T. (2021). Recommender Systems for the Social
Networking Context for Collaborative Filtering and Content-Based Approaches. In Recommender
Systems (pp. 121-137). CRC Press.
19. J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” in
Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Jun. 2015, pp. 3431–3440.
20. A. Asjad and D. Mohamed, “A new approach for salt dome detection using a 3D multidirectional edge
detector,” Appl. Geophys., vol. 12, no. 3, pp. 334–342, Sep. 2015.
21. J. Haukås, O. Roaldsdotter Ravndal, B. H. Fotland, A. Bounaim, and L. Sonneland, “Automated salt body
extraction from seismic data using the level set method,” First Break, vol. 31, pp. 35–42, 2013.
22. X. Wu, “Methods to compute salt likelihoods and extract salt boundaries from 3D seismic images,”
Geophysics, vol. 81, no. 6, pp. 119–126, Nov. 2016.
23. Krishna, Komanduri Venkata Sesha Sai Rama, et al., “Classification of Glaucoma Optical Coherence
Tomography (OCT) Images Based on Blood Vessel Identification Using CNN and Firefly
Optimization,” Traitement du Signal, vol. 38, no. 1, 2021.
24. Sirisha, A., Chaitanya, K., Krishna, K.V.S.S.R., Kanumalli, S.S. (2021). Intrusion detection models using
supervised and unsupervised algorithms - a comparative estimation. International Journal of Safety and
Security Engineering, Vol. 11, No. 1, pp. 51-58. https://doi.org/10.18280/ijsse.110106
25. Rani, B.M.S., Majety, V.D., Pittala, C.S., Vijay, V., Sandeep, K.S., Kiran, S. (2021). Road identification
through efficient edge segmentation based on morphological operations. Traitement du Signal, Vol. 38, No.
5, pp. 1503-1508. https://doi.org/10.18280/ts.380526
