SALT BODY SEGMENTATION BASED ON EDGE DETECTION USING DEEP SUPERVISED LEARNING
CHAPTER 1
INTRODUCTION
1.1 INTRODUCTION
Many regions of the earth hold large underground deposits of oil and gas, which in turn tend to form large salt bodies. Salt boundary analysis is important for understanding the structural model of the salt layer and the seismic migration speed. The traditional methodology relies on experts to examine the salt body, its attributes, and its features. These attributes are designed manually by experts, which may lead to highly variable results under complex noise pollution, and the results produced are often inaccurate. Because of such incorrect analyses, the company's drilling personnel may also be put in danger. Existing work can be divided into four categories: (i) the use of seismic attributes; (ii) the realization of computer vision methods; (iii) the combination of seismic attributes and machine learning; (iv) deep learning algorithms.
Recently, computer vision tasks such as object detection and semantic segmentation [12] have been boosted by deep learning methods that fuse different features. These models directly convert the raw input data into a mapping of the geological area using deep statistical models. Past studies have treated salt-body delineation as a segmentation problem using image features, while other studies have treated it as a regression problem. Deep learning algorithms such as CNNs are used to predict the salt bodies from the data. F. Meng et al. proposed methods that use directed graphs for clustering and segmenting salt foregrounds. J. Long et al. [10] treated salt bodies as a semantic segmentation problem. Local Binary Patterns (LBP) are used to obtain the features accurately.
The proposed architecture for salt segmentation is an encoder-decoder network built from convolutional layers that outputs pixel-level binary labels; each pixel carries a probability of being salt or non-salt. In this paper we propose a deeply supervised U-shaped model for effective salt delineation and introduce a "Salt Boundary Prediction" branch to optimize the results. The network contains an encoder and a decoder for downsampling and upsampling, respectively, and uses the ReLU (Rectified Linear Unit) activation. This makes the output accurate, but not faster; so we use a sigmoid function, as in [17], to predict the probability of the output and improve the efficiency of the resulting salt-delineated part.
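As a concrete illustration of the per-pixel probability, the sigmoid output can be thresholded at 0.5 to obtain the binary salt / non-salt label. This is a minimal stdlib-only sketch; the logit values are made up for illustration:

```python
import math

def sigmoid(z):
    """Map a real-valued logit to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical per-pixel logits from the decoder; thresholding the
# sigmoid output at 0.5 yields the binary salt / non-salt label.
logits = [-2.3, -0.1, 0.0, 1.7]
probs = [sigmoid(z) for z in logits]
labels = [1 if p > 0.5 else 0 for p in probs]
```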
1. Deep-Supervised Model:
Supervised deep learning frameworks are trained on well-labelled data. The learning algorithm is taught to generalise from the training data and to apply what it has learned in unseen situations. After the training process is complete, the model is tested on a held-out subset, the testing set, to predict the output.
CHAPTER 2
LITERATURE SURVEY
Alan Souza et al. [1] use convolutional neural networks for the semantic delineation of salt bodies in different seismic volume data. The numerical experiments shown in the paper indicate that, even with a small number of interpreted lines, reasonable salt segmentation results can be obtained. The calculations are very simple.
Mikhail Karchevskiy et al. [2] address the manual interpretation of seismic images by geophysics specialists. The predictions provided by even a single deep learning model were able to achieve 27th place, and the efficiency is very high; but it is a tedious process.
S. J. Seol et al. [3] map distributed electrical resistivity data together with electromagnetic data using a CNN. With this, the subsurface salt structure can be found easily. The method is efficient, stable, and reliable, but it does not encode the exact position of the image, and a lot of training data is necessary.
H. Di et al. [4] use the K-Means clustering algorithm. The attribute domain used here generates probability volume data that not only finds the salt boundaries but also supports effective salt interpretation, enabling more advanced interpretation. However, as the number of attributes increases, the computational time also increases.
C. Ramirez et al. [5] take a sparse representation of the seismic data. The sparsifying dictionary provides robustness on account of redundancy and is highly stable. However, space complexity, time complexity, and limited processing power are major issues, since the method relies on sparse representation.
Z. Long et al. [6] use the 2D discrete Fourier transform. The noise-robust algorithms developed were able to label the boundaries of salt domes effectively and efficiently; the salt dome boundaries were detected accurately. However, the mathematical calculations are complex.
A. U. Waldeland et al. [7] use a convolutional neural network. Classification of the dataset is easy because only a small cube of the input data set is taken, and no attribute observation is needed. The approach can achieve test accuracy of over 70%, but it requires training data from many different datasets.
Sergey et al. [8] use a multilayer convolutional neural network for classification of the salt body. The model generalizes even to unseen parts, produces more optimized results, and can be applied to large 3D data. However, there is redundancy in high dimensions, and spatial information is disregarded.
CHAPTER 3
SYSTEM ANALYSIS
Areas with a large amount of oil and gas on the earth are likely to form huge salt
deposits below the surface of the earth. In addition, salt boundary analysis is of great
significance for understanding the model construction of salt layer structure and seismic
migration speed. Currently traditional seismic imaging still requires professionals to analyze
the salt body. Manually designed attributes are devised based on expertise, however, these
attributes may not yet fully describe the actual seismic data of complex noise pollution. This
leads to very subjective, highly variable results. Even more worrying is that drillers at oil and
gas companies may face dangerous situations because of these incorrect analysis results.
Previous related work can be roughly divided into four categories: the use of seismic attributes, implementation of computer vision approaches, integration of seismic attributes and machine learning, and application of deep learning algorithms.
Traditional methods rely on geological, physical, and geometrical principles to
obtain seismic attributes. Specifically, automatic salt boundary interpretation approaches
need to analyze salt properties, such as discontinuities [1], texture [2], reflection quadratic
vector fields [3], and salt likelihood function [4]. Implementation of computer vision
approaches also helps salt delineation, for example, Shafiq et al. [5] proposed a phase
congruency-based method to generate an attribute map of edges in the seismic data, which is
used to detect salt bodies within migrated seismic volumes by an interactive region growing
method. The design of handcrafted attributes requires professional background knowledge.
However, seismic data in real scenarios may be polluted by various noises [6], and these
attributes may not fully describe such complex data in detail. Di et al. [7] proposed a new
salt-boundary detection method based on multiattribute k-means cluster analysis, which
integrates seismic attributes and machine learning. Recently, deep learning-based
methods boost many fields of computer vision by fusing different features [8], such as object
detection [9] and semantic segmentation [10].
Some recent works [11]–[16] also use a deep learning statistical model to directly convert raw input seismic data into the final mapping of geological features.
A Feasibility Study is a preliminary study undertaken before the real work of a project starts to ascertain the likelihood of the project's success. It is an analysis of possible alternative solutions to a problem and a recommendation on the best alternative.
1. Economic Feasibility
2. Technical Feasibility
3. Operational Feasibility
Economic feasibility is defined as the process of assessing the benefits and costs associated with the development of a project. A proposed system that is both operationally and technically feasible must be a good investment for the organization. With the proposed system, users benefit greatly because they are able to detect salt bodies in seismic images. The proposed system does not need any additional software or a high system configuration. Hence the proposed system is economically feasible.
Technical feasibility infers whether the proposed system can be developed considering technical issues like availability of the necessary technology, technical capacity, adequate response, and extensibility. The project is built using Python. Jupyter Notebook is designed for use in the distributed environment of the internet, and for the professional programmer it is easy to learn and use effectively. As the developing organization has all the resources available to build the system, the proposed system is technically feasible.
Operational feasibility is defined as the process of assessing the degree to which a proposed system solves business problems or takes advantage of business opportunities. The system is self-explanatory and doesn't need any extra sophisticated training; it has built-in methods and classes that are required to produce the result. The overall time a user needs to get trained is less than one hour. The software used for developing this application is very economical and readily available in the market. Therefore, the proposed system is operationally feasible.
CHAPTER 4
REQUIREMENTS SPECIFICATION
4.1.2 Scope:
The scope is the part of project planning that involves determining and documenting a list of specific project goals, deliverables, features, functions, tasks, deadlines, and ultimately costs. In other words, it is what needs to be achieved and the work that must be done to deliver a project. The software scope is a well-defined boundary that encompasses all the activities done to develop and deliver the software product. It clearly defines all the functionalities and artifacts to be delivered as part of the software.
4.1.3 Definition:
• Feasibility Study
• Requirements Gathering
• Software Requirements Specification
• Software Requirements Validation
• Research Papers
• Camera
Functional requirements explain what has to be done by identifying the necessary task, action, or activity that must be accomplished. Functional requirements analysis will be used as the top-level functions for functional analysis.
Non-functional requirements describe the general characteristics of a system. They are also known as quality attributes. Some typical non-functional requirements are performance, response time, throughput, utilization, and scalability.
Performance:
Response Time:
Response time is the time a system or functional unit takes to react to a given input.
CHAPTER 5
SYSTEM DESIGN
The data characteristics of salt bodies are different from natural images. For example,
the shape of salt bodies is indefinite and there is no prior knowledge of shape; the texture of
geological data is more prominent and so on. Therefore, the general semantic segmentation
networks do not perform well on salt body data. Aiming at these characteristics of the salt
body data, we designed a deep supervised semantic segmentation model and optimized the
segmentation results through edge prediction branch. The structure of the proposed method is
illustrated in Fig. 1. In this section, we first present the proposed deep-supervised model.
Then we describe how edge prediction branch optimizes segmentation results. Finally, we
introduce some important components used in the network framework, which are
indispensable for the final improvement.
5.2 MODULES:
Since a considerable part of the salt data does not contain salt at all, this will cause
confusion to the segmentation result. In order to reduce or even eliminate this adverse effect,
we add a binary classification at the top of the encoder to predict whether the image contains
salt, which is defined as nonempty or empty. Specifically, the feature map at the top of the
encoder undergoes a global average pooling and then outputs the classification results. This
classification branch uses a cross-entropy loss L_class, which assists the supervision, promotes the network to learn semantic information, and generates better segmentation results.
Additionally, we add a branch at the top of the decoder to predict the segmentation
results of images that include salt, which further eliminates the negative impact of empty
images on the network. In order to obtain better supervision, we perform semantic
supervision of nonempty pictures at different levels of the decoder, and the losses are set to L_final and L_non_empty, respectively. The loss L_non_empty is calculated as below, where i denotes the level of the decoder:
L_non_empty = ∑_{i=1}^{5} L^i_non_empty
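As a minimal sketch of this deep-supervision sum, with hypothetical per-level loss values (in the real network these come from the five decoder stages):

```python
# Hypothetical per-level segmentation losses from the five decoder
# stages; the deep-supervision loss is simply their sum:
# L_non_empty = sum over i of L^i_non_empty.
per_level_losses = [0.41, 0.33, 0.27, 0.22, 0.18]  # i = 1..5
L_non_empty = sum(per_level_losses)
```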
As shown in Fig. 1, we embed a global average pooling layer to extend the U-shape
architecture, then we concatenate the feature map of this layer with the feature map of the
final stage decoder, and finally output the segmentation results of all the images. With the
global average pooling layer, we introduce the global context information into the network as
a guidance.
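A toy sketch of this global-guidance step, assuming tiny hand-made feature maps (real feature maps would be multi-channel tensors produced by the encoder and decoder):

```python
def global_average_pool(feature_map):
    """Collapse each channel (a 2-D grid of values) to its mean."""
    return [sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))
            for ch in feature_map]

# Hypothetical top-of-encoder features: 2 channels of 2x2 values.
features = [
    [[1.0, 3.0], [5.0, 7.0]],
    [[2.0, 2.0], [2.0, 2.0]],
]
context = global_average_pool(features)   # one scalar per channel
decoder_features = [0.5, -1.0, 0.25]      # toy flattened final-stage features
fused = decoder_features + context        # concatenation used as global guidance
```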
In the semantic segmentation task, it is hard to extract different objects with similar
appearance, especially when they are adjacent spatially. In order to improve the segmentation
performance, it is necessary to amplify the distinction of features. In the salt body data set,
the spatial shapes of salt and nonsalt are more amorphous. For this reason, we propose an
edge prediction branch to guide feature learning. The branch directly learns the semantic
boundary through explicit semantic boundary supervision, so that the features on both sides
of the semantic boundary are more distinguishable. The main purpose of this edge prediction
branch is to obtain more accurate boundary information. Generally, low-level features have more accurate spatial information and are more instructive for boundary prediction. Therefore, the design of this branch is bottom-up.
The label for the edge prediction branch can be obtained from the semantic segmentation label, using traditional image processing methods to extract edges, such as the Sobel operator, the Canny operator, and so on. The branch uses the focal loss [24], defined as L_edge. The details of the edge prediction branch are illustrated in Fig. 2.
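The edge label can be derived from the segmentation label with any edge extractor. As a simple stand-in for the Sobel or Canny operators mentioned above, the sketch below marks a pixel as an edge whenever a 4-neighbour has a different class:

```python
def edge_label(mask):
    """Mark a pixel as edge if any 4-neighbour has a different class.
    A simple stand-in for the Sobel/Canny extraction described above."""
    h, w = len(mask), len(mask[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] != mask[y][x]:
                    edges[y][x] = 1
    return edges

# Toy 4x4 segmentation label: left half salt (1), right half non-salt (0).
mask = [[1, 1, 0, 0]] * 4
boundary = edge_label(mask)
```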
Finally, the total loss L of our model is calculated as

L = L_final + α·L_class + β·L_non_empty + γ·L_edge

where α, β, and γ are weighting parameters. Our method of selecting loss weights consists of two steps. First, different loss functions have different orders of magnitude.
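A minimal sketch of the weighted combination, keeping the final loss unweighted and using illustrative (not the paper's) weight and loss values:

```python
# Weighted combination of the four losses defined above. The specific
# weight and loss values here are illustrative placeholders only.
alpha, beta, gamma = 0.05, 0.5, 1.0
L_final, L_class, L_non_empty, L_edge = 0.30, 0.60, 1.41, 0.20
L_total = L_final + alpha * L_class + beta * L_non_empty + gamma * L_edge
```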
Sigmoid function: used to predict the probability of the output, since its range falls between 0 and 1.
They rescale and recalibrate the channel and spatial dimensions to let the network
automatically obtain the channel and spatial characteristics that are favorable for the current
task, and suppress the disadvantageous characteristics. In addition, we also used a strategy
called hypercolumns [26] when predicting the final segmentation results. The features that we
use to predict the segmentation results are not just the output of the last layer of the final
decoder. We upsample the feature maps of all decoders to the same size and concatenate
them together as the source of feature maps for predicting the segmentation results. It allows
getting more precise localization and captures the semantics at the same time.
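The hypercolumns idea can be sketched with nearest-neighbour upsampling of toy decoder maps to a common size before stacking them (real decoder outputs would be multi-channel tensors):

```python
def upsample(fm, factor):
    """Nearest-neighbour upsampling of a 2-D feature map."""
    out = []
    for row in fm:
        wide = [v for v in row for _ in range(factor)]  # repeat columns
        out.extend([wide] * factor)                     # repeat rows
    return out

# Toy decoder outputs at two resolutions, brought to a 4x4 target size.
maps = {1: [[1, 2], [3, 4]], 2: [[5] * 4 for _ in range(4)]}
up1 = upsample(maps[1], 2)        # 2x2 -> 4x4
hypercolumn = [up1, maps[2]]      # per-pixel stack of all decoder levels
```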
The Unified Modeling Language (UML) is used to specify, visualize, modify, construct, and document the artifacts of an object-oriented, software-intensive system under development. UML offers a standard way to visualize a system's architectural blueprints, including elements such as:
• Actors
• Business process
• (Logical) Components
• Activities
• Programming language statements
• Database schemas, and
The Unified Modeling Language allows the software engineer to express an analysis model using a modelling notation governed by a set of syntactic, semantic, and pragmatic rules. A UML system is represented using five different views that describe the system from distinctly different perspectives. Each view is defined by a set of diagrams, as follows:
• In this model view, the system's functionality is arrived at from inside the system.
• This model view models the static structures.
• The implementation view, also known as the architectural view, typically captures the enumeration of all the subsystems in the implementation model, the component diagrams illustrating how subsystems are organized in layers and hierarchies, and illustrations of import dependencies between subsystems.
• These UML models describe both the structural and behavioural dimensions of the domain or environment in which the solution is implemented. This view is often also referred to as the deployment or physical view.
Modules:
• Image data
• Feature Extraction
• Global Average Pooling
• Salt Body Segmentation
• Training the data
• Applying CNN Algorithm
• Salt Body Predictions
Use case diagrams are usually referred to as behaviour diagrams; they describe a set of actions (use cases) that a system should or can perform in collaboration with one or more external users of the system (actors).
Component diagrams are used to display the various components of a software system as well as the subsystems of a single system. They represent the physical things or components of a system and generally visualize its structure and organization.
5.3 ALGORITHM:
Step 1: Take the seismic image as input.
Step 2: In 1x1 convolution, a filter of size 1x1 is selected and convolved with the image; in this manner the entire image undergoes convolution pixel by pixel.
Step 3: Then a 3x3 convolution filter is selected. The 2D convolution applied with the selected 3x3 filter always has a 3D size; the filter to be applied is decided by the number of channels of the input seismic image.
Step 4: Batch normalization is a process of tuning the pixels by calculating the error rate from the previous data of the seismic image. This tuning ensures a very low error rate.
Step 5: The sigmoid (logistic) function is applied:

f(a) = M / (1 + e^(-r(a - a0)))
Step 6: The Rectified Linear Unit (ReLU) is an activation function. It outputs the input value if the input is positive; if the input is negative, zero results:

y = max(0, a)
Step 7: Steps 3, 5, and 6 are repeated to get accurate results.
Step 8: After completion of all the above steps, an image is displayed; this image then undergoes edge prediction.
In the above algorithm, we first take a seismic image as input. Then a 1x1 convolution is performed [23-25] on that input, taking every single pixel of the seismic image. After that, a 3x3 filter is applied, i.e., a 2D convolution of the kind applied to regular 2D matrix objects (2D images). Then the batch normalization process is done, followed by the logistic sigmoid function: its input can be any real number, but its output always ranges from 0 to 1. In the formula, a0 is the value of the sigmoid midpoint, M is the curve's maximum value, r is the logistic growth rate, and a ranges from -∞ to +∞. The next step is the Rectified Linear Unit function: if the input is positive, the output is that positive value; if it is negative, it returns zero. Some steps are then repeated, which yields the salt body boundary in the given image. Using this, we successfully obtain the boundary of the salt body, and we have examined the results using different tests, which are discussed in the results section.
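The steps above can be sketched end-to-end in plain Python on a toy single-channel patch. The patch values, the 1x1 weight, and the identity 3x3 kernel are illustrative stand-ins for learned filters:

```python
import math

def conv1x1(img, w, b=0.0):
    """1x1 convolution: an independent linear map applied at every pixel."""
    return [[w * p + b for p in row] for row in img]

def conv3x3(img, kern):
    """'Valid' 3x3 convolution over a single-channel image."""
    h, w = len(img), len(img[0])
    return [[sum(kern[i][j] * img[y + i][x + j]
                 for i in range(3) for j in range(3))
             for x in range(w - 2)]
            for y in range(h - 2)]

def batch_norm(img, eps=1e-5):
    """Normalise the map to zero mean and unit variance."""
    vals = [p for row in img for p in row]
    mu = sum(vals) / len(vals)
    var = sum((v - mu) ** 2 for v in vals) / len(vals)
    s = math.sqrt(var + eps)
    return [[(p - mu) / s for p in row] for row in img]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def relu(z):
    return max(0.0, z)

# Toy 4x4 "seismic" patch run once through the steps above.
img = [[0.0, 1.0, 2.0, 3.0],
       [1.0, 2.0, 3.0, 4.0],
       [2.0, 3.0, 4.0, 5.0],
       [3.0, 4.0, 5.0, 6.0]]
x = conv1x1(img, 0.5)
x = conv3x3(x, [[0, 0, 0], [0, 1, 0], [0, 0, 0]])  # identity kernel keeps the centre
x = batch_norm(x)
x = [[relu(sigmoid(p)) for p in row] for row in x]  # probabilities in (0, 1)
```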
CHAPTER 6
IMPLEMENTATION
What is a Script?
A script, or scripting language, is a computer language with a series of commands within a file that can be executed without being compiled. This is a very useful capability that allows us to type in a program and have it executed immediately in an interactive mode.
Script:
Scripts are distinct from the core code of the application, which is usually written in a different language, and are often created or at least modified by the end user. Scripts are often interpreted from source code or bytecode, whereas the applications they control are traditionally compiled to native machine code.
Program:
A program has an executable form that the computer can use directly to execute the instructions. The same program in its human-readable source-code form is the form from which executable programs are derived (e.g., compiled).
Python:
Python concepts:
History of Python
Python was developed by Guido van Rossum in the late eighties and early nineties at
the National Research Institute for Mathematics and Computer Science in the
Netherlands.
Python is derived from many other languages, including ABC, Modula-3, C, C++,
Algol-68, Smalltalk, and Unix shell and Other scripting languages.
Python is copyrighted. Like Perl, Python source code is now available under the GNU
General Public License (GPL).
Python is now maintained by a core development team at the institute, although Guido
van Rossum still holds a vital role in directing its progress.
Python Features
• Easy-to-learn: Python has few keywords, simple structure, and a clearly defined
syntax. This allows the student to pick up the language quickly.
• Easy-to-read: Python code is more clearly defined and visible to the eyes.
• Easy-to-maintain: Python‘s source code is fairly easy-to-maintain.
• A broad standard library: Python‘s bulk of the library is very portable and
cross-platform compatible on UNIX, windows, and Macintosh.
• Interactive Mode: Python has support for an interactive mode which allows
interactive testing and debugging of snippets of code.
• Portable: Python can run on a wide variety of hardware platforms and has the
same interface on all platforms.
• Extensible: You can add low-level modules to the Python interpreter. These
modules enable programmers to add to or customize their tools to be more
efficient.
• Databases: Python provides interfaces to all major commercial databases.
• GUI Programming: Python supports GUI applications that can be created and
ported to many system calls, libraries and windows systems, such as Windows
MFC, Macintosh, and the X Window system of Unix.
• Scalable: Python provides a better structure and support for large programs than
shell scripting.
Python modules:
Python allows us to store our code in files (also called modules). To support this, Python has a way to put definitions in a file and use them in a script or in an interactive instance of the interpreter. Such a file is called a module; definitions from a module can be imported into other modules or into the main module.
Testing code:
The Jupyter Notebook is an open-source web application that allows you to create and
share documents that contain live code, equations, visualizations and narrative text. Uses
include: data cleaning and transformation, numerical simulation, statistical modeling, data
visualization, machine learning.
Sigma (σ) is sometimes called the logistic function, and this new class of neurons is called logistic neurons. The output of a sigmoid neuron with inputs x1, x2, ..., weights w1, w2, ..., and bias b is σ(z) = 1 / (1 + e^(-z)), where z = Σ_j w_j x_j + b; the Rectified Linear Unit (ReLU) is y = max(0, z).
At first, the sigmoid neuron appears very different from the perceptron; in fact, there are many similarities between perceptrons and sigmoid neurons. To understand the similarity to the perceptron model, suppose z = w·x + b is a large positive number; then e^(-z) ≈ 0, and so σ(z) ≈ 1. In other words, when z = w·x + b is large and positive, the output from the sigmoid neuron is approximately 1, just as it would have been for a perceptron. If, on the other hand, z = w·x + b is very negative, then e^(-z) → ∞, and σ(z) ≈ 0. So when z = w·x + b is very negative, the behaviour of the sigmoid neuron also closely approximates a perceptron. It is only when w·x + b is of modest size that there is much deviation from the perceptron model. What about the algebraic form of σ? How can we understand that? In fact, the exact form of σ isn't so important; what really matters is the shape of the function when plotted.
If σ had in fact been a step function, then the sigmoid neuron would be a perceptron, since the output would be 1 or 0 based on whether w·x + b is positive or negative. (Strictly, when w·x + b = 0 the perceptron outputs 0 while the step function outputs 1, so we would need to modify the step function at that one point.) The smoothness of σ means that small changes Δw_j in the weights and Δb in the bias produce a small change Δoutput in the output from the neuron. In fact, calculus tells us that Δoutput is well approximated by

Δoutput ≈ Σ_j (∂output/∂w_j) Δw_j + (∂output/∂b) Δb

where the sum is over all the weights w_j, and ∂output/∂w_j and ∂output/∂b denote partial derivatives of the output with respect to w_j and b, respectively. Δoutput is a linear function of the changes Δw_j and Δb in the weights and bias. This linearity makes it easy to choose small changes in the weights and biases to achieve any desired small change in the output. So while sigmoid neurons have much of the same qualitative behaviour as perceptrons, they make it much easier to figure out how changing the weights and biases will change the output.
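A quick numerical check of this smoothness property, with hand-picked w, b, and x chosen so that z = 0 (the regime where the sigmoid differs most from a step function):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Near z = 0 the sigmoid changes smoothly, so a tiny change in a
# weight produces a correspondingly tiny change in the output.
w, b, x = 2.0, -1.0, 0.5
out = sigmoid(w * x + b)                   # z = 0 here, so out = 0.5
out_nudged = sigmoid((w + 1e-3) * x + b)   # tiny weight change
delta = abs(out_nudged - out)
```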
Here, the leftmost layer in this network is called the input layer, and the neurons within this layer are called input neurons. The rightmost, or output, layer contains the output neurons, or, as in this case, a single output neuron. The middle layer is called a hidden layer, since the neurons in this layer are neither inputs nor outputs.
The network above has just a single hidden layer, but some networks have multiple hidden layers. For example, a four-layer network has two hidden layers.
For historical reasons, multiple-layer networks are sometimes called multilayer perceptrons or MLPs, despite being made up of sigmoid neurons, not perceptrons. The design of the input and output layers in a network is often straightforward. For example, suppose we're trying to determine whether a handwritten image depicts a "9" or not. A natural way to design the network is to encode the intensities of the image pixels into the input neurons. If the image is a 64 by 64 greyscale image, then we'd have 4096 = 64 x 64 input neurons, with the intensities scaled appropriately between 0 and 1. The output layer will contain just a single neuron, with output values of less than 0.5 indicating "input image is not a 9", and values greater than 0.5 indicating "input image is a 9".
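The input/output layer sizing described above, as a tiny sketch (the `classify` helper is a hypothetical name introduced here for illustration):

```python
# A 64x64 grey-scale image feeds one input neuron per pixel.
width = height = 64
n_input_neurons = width * height   # 4096, as stated above

def classify(output_activation):
    """Single output neuron: an activation above 0.5 means 'image is a 9'."""
    return "is a 9" if output_activation > 0.5 else "not a 9"
```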
Neural network researchers have developed many design heuristics for the hidden layers, which help people get the desired behaviour from their nets. For example, such heuristics can be used to help determine how to trade off the number of hidden layers against the time required to train the network. Neural networks where the output from one layer is used as input to the next layer are called feed-forward neural networks, which means there are no loops in the network. If we did have loops, we'd end up with situations where the input to the sigmoid function depended on its output; such situations are not allowed. There are other models of artificial neural networks in which feedback loops are possible. These models are called recurrent neural networks.
We need an algorithm that lets us find weights and biases so that the output from the network approximates y(x) for all training inputs x. To quantify how well we're achieving this goal, we define a cost function

C(w, b) = (1 / 2n) Σ_x ||y(x) - a||^2

where n is the number of training inputs and a is the network's output when x is the input.
• C(w, b) is non-negative, since every term in the sum is non-negative. Furthermore,
the cost C(w, b) becomes small, that is C(w, b) ≈ 0, precisely when y(x) is approximately
equal to the output, a, for all training inputs x. So our training algorithm has done a
good job if it can find weights and biases so that C(w, b) ≈ 0.
• By contrast, it's not doing so well when C(w, b) is large; that would mean y(x)
is not close to the output a for a large number of inputs. So the aim of our
training algorithm is to minimize the cost C(w, b) as a function of the weights
and biases. In other words, we want to find a set of weights and biases which makes
the cost as small as possible. We'll do that using an algorithm known as gradient
descent, which solves exactly this kind of minimization problem.
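Gradient descent can be illustrated on a toy one-parameter cost (a sketch with a made-up quadratic, not the network's actual cost function):

```python
# Minimise C(w) = (w - 3)^2 by repeatedly stepping against the gradient dC/dw = 2(w - 3).
w = 0.0    # initial weight
eta = 0.1  # learning rate

for _ in range(100):
    grad = 2 * (w - 3)  # gradient of the toy cost at the current w
    w -= eta * grad     # step downhill

# After enough steps w is driven toward the minimiser w = 3, where C(w) = 0.
```

The same update rule, applied to every weight and bias of the network with the gradient of C(w, b), is what training a neural network by gradient descent means.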
Deep learning is a family of machine learning methods based on algorithms inspired by
artificial neural networks. It is also known as deep structured learning or differentiable
programming. Artificial neural networks are in turn inspired by the biological neural networks that
constitute the brain, and are based on a collection of connected units or nodes called artificial
neurons, analogous to biological neurons. A deep neural network is an
artificial neural network with multiple layers between the input and output layers, whereas a
recurrent neural network is a class of artificial neural network where connections between
nodes form a directed graph.
Deep learning architectures such as deep neural networks, recurrent neural networks, and
convolutional neural networks are being applied to fields including computer vision, speech
recognition, natural language processing, audio recognition, bioinformatics, and medical
image analysis, where they have produced strong results. The benefit of deep learning models is their ability to perform
automatic feature extraction from raw data, also called feature learning.
A convolutional neural network (CNN) is a deep neural network inspired by the organization of the
visual cortex. The visual cortex has small regions of cells that are sensitive to specific regions
of the visual field. Image classification is the task of
taking an input image and outputting a class, or a probability over classes, that best describes the
image.
For humans, this task of recognition is one of the first skills we learn from the moment
we are born, and is one that comes naturally and effortlessly as adults. Without even thinking
twice, we're able to quickly and seamlessly identify the environment we are in as well as the
objects that surround us. These skills of quickly recognizing patterns,
generalizing from prior knowledge, and adapting to different image environments are ones that
we do not share with our fellow machines.
The model extracts features from sequence data and maps the internal features of the
sequence. A 1D CNN is very effective for deriving features from a fixed-length segment of
the overall dataset, where the location of the feature within the segment is not important.
2D convolutional layers take a three-dimensional input, typically an image with three
color channels. They pass a filter, also called a convolutional kernel, over the image, a small
window of pixels at a time, for example 3x3 or 5x5 pixels in size, moving the window
until they have scanned the entire image. The convolution operation calculates the dot
product of the pixel values in the current filter window with the weights defined in the filter.
3D convolutional layers, by contrast, are helpful for event detection in videos, 3D medical
images, and so on; they are not limited to 3D spatial inputs but can also be applied to 2D
inputs such as images.
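The filter-window dot product described above can be sketched in NumPy (a naive valid-mode 2D convolution, for illustration only; real frameworks use much faster implementations):

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Slide `kernel` over `image` and take the dot product at each window."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            window = image[i:i + kh, j:j + kw]
            out[i, j] = np.sum(window * kernel)  # dot product of window and filter weights
    return out

image = np.arange(25, dtype=float).reshape(5, 5)
kernel = np.ones((3, 3)) / 9.0  # a 3x3 averaging filter
feature_map = conv2d_valid(image, kernel)
```

Each output position is one filter-window dot product; stacking many such filters gives the layer's activation maps.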
In a convolution, small areas of an image are scanned, the probability that they belong to
a filter class is assigned, and the result is translated to an activation map, a representation of the image
layers. In a 3D CNN, the kernels move through three dimensions of data (height, width, and
depth) and produce 3D activation maps. Convolutional neural networks are a type of deep
model that can act directly on the raw inputs; standard models, however, are limited to
handling 2D inputs, and extending the kernels to three dimensions yields a 3D CNN model
suitable for tasks such as action recognition.
This class can be used to rescale pixel values from the range 0-255 to the range 0-1 preferred for
neural network models. Scaling data to the range 0-1 is traditionally referred to as
normalization. This can be achieved by setting the rescale argument to a ratio by which each
pixel is multiplied to achieve the desired range. Here, the ratio is 1/255, or about
0.0039.
For example:
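A minimal NumPy sketch of this rescaling (plain arrays standing in for Keras's rescale argument; the pixel values are made-up data):

```python
import numpy as np

rescale = 1.0 / 255.0  # about 0.0039, as noted above
pixels = np.array([[0, 128, 255]], dtype=np.uint8)

# Each pixel is multiplied by the ratio, mapping 0-255 into 0-1.
normalised = pixels * rescale
```
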
Target:
Target represents the desired output that we want our model to learn. In this case of a
classification problem, the targets would be the labels of each of the examples in the training
set.
For example, target_size: tuple of integers (height, width), default: (256, 256). These are the
dimensions to which all images found will be resized.
Batch:
We can't pass the entire dataset into the neural net at once, so the dataset is divided into a number
of batches, sets, or parts. Batch size is the total number of training examples present in a single
batch; in other words, batch size is the number of training examples processed in one
forward/backward pass.
Epoch:
An epoch represents a full pass over the entire training set, meaning that the model has seen
each example once.
In other words, an epoch is one forward pass and one backward pass over all of the training
examples.
An epoch usually means one iteration over all of the training data; alternatively, one may set a
fixed number of steps, like 1000 per epoch, even when the dataset is much larger.
One epoch means that each sample in the training dataset has had an opportunity to update
the internal model parameters. An epoch is comprised of one or more batches.
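The batch/epoch relationship can be sketched as follows (a made-up dataset of 10 examples, not from this project):

```python
# 10 training examples split into batches of 4: batch sizes are 4, 4 and 2.
dataset = list(range(10))
batch_size = 4
epochs = 3

batches_per_epoch = 0
for epoch in range(epochs):
    for start in range(0, len(dataset), batch_size):
        batch = dataset[start:start + batch_size]  # one forward/backward pass per batch
        if epoch == 0:
            batches_per_epoch += 1
# Each epoch visits every example exactly once, here in 3 batches.
```
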
Validation data:
A validation set is a subset of your dataset containing examples used to tune the
hyperparameters and the model architecture based on the validation loss. The validation set is
used during training: the validation examples are run through the model after each epoch.
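A simple hold-out split can be sketched as follows (the 80/20 ratio here is a common but arbitrary choice, not taken from this project):

```python
import numpy as np

X = np.arange(100).reshape(100, 1)  # made-up dataset of 100 examples
split = int(0.8 * len(X))           # 80% for training, 20% held out for validation

X_train, X_val = X[:split], X[split:]
# X_val is only used to evaluate the model after each epoch, never to update weights.
```
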
6.2 Coding:
plt.subplot(2, 6, q*2-1)
plt.imshow(img)
plt.subplot(2, 6, q*2)
plt.imshow(img_mask)
fig.suptitle('Sample Images', fontsize=24)

train_ids = next(os.walk(config.path_train + "images"))[2]
test_ids = next(os.walk(config.path_test + "images"))[2]
X = np.zeros((len(train_ids), config.im_height, config.im_width, config.im_chan), dtype=np.uint8)
Y = np.zeros((len(train_ids), config.im_height, config.im_width, 1), dtype=bool)
print('Getting and resizing train images and masks ... ')
sys.stdout.flush()
for n, id_ in tqdm(enumerate(train_ids), total=len(train_ids)):
    x = img_to_array(load_img(config.path_train + '/images/' + id_, color_mode="grayscale"))
def BatchActivate(x):
    x = layers.BatchNormalization()(x)
    x = layers.Activation('relu')(x)
    return x

def convolution_block(x, filters, size, strides=(1, 1), padding='same', activation=True):
    x = layers.Conv2D(filters, size, strides=strides, padding=padding)(x)
    if activation:
        x = BatchActivate(x)
    return x

def residual_block(blockInput, num_filters=16, batch_activate=False):
    x = BatchActivate(blockInput)
    x = convolution_block(x, num_filters, (3, 3))
sizes_test = []
sys.stdout.flush()
sizes_test.append([x.shape[0], x.shape[1]])
X_test[n] = x
print('Done!')
preds_test_upsampled = []
for i in trange(len(preds_test)):
    preds_test_upsampled.append(resize(
    ))
CHAPTER-7
SYSTEM TESTING
7.1 Testing:
The purpose of testing is to discover errors. Testing is the process of trying to discover every
conceivable fault or weakness in a work product. It provides a way to check the functionality
of components, sub-assemblies, assemblies and/or a finished product. It is the process of
exercising software with the intent of ensuring that the software system meets its
requirements and user expectations and does not fail in an unacceptable manner. There are
various types of test; each test type addresses a specific testing requirement.
Manual Testing:
Manual testing involves testing the software manually, i.e., without using any automated tool or
script. In this type, the tester takes on the role of an end-user and tests the software to
identify any unexpected behavior or bugs. There are different stages of manual testing, such as
unit testing, integration testing, system testing, and user acceptance testing.
Automation Testing:
Automation testing, also known as test automation, is when the tester writes scripts
and uses additional software to test the product. This process involves automating a manual
process. Automation testing is used to re-run, quickly and repeatedly, test scenarios that were
originally performed manually.
What to Automate?
It is not possible to automate everything in a software product. Areas where a user can make
transactions, such as login or registration forms, and any area where a large number of
users can access the software simultaneously, should be automated.
When to Automate?
Test Automation should be used by considering the following aspects of a software
• Large and critical projects
• Projects that require testing the same areas frequently
• Requirements not changing frequently
• Accessing the application for load and performance with many virtual users
• Stable software with respect to manual testing
• Availability of time
How to Automate?
Automation is done by using a supportive computer language like VB scripting and an
automated software application. There are many tools available that can be used to write
automation scripts. Before mentioning the tools, let us identify the process that can be used to
automate the testing process –
• Identifying areas within a software for automation
• Selection of appropriate tool for test automation
• Writing test scripts
• Development of test suites
• Execution of scripts
• Create result reports
• Identify any potential bug or performance issue
Test Results:
All the test cases mentioned above passed successfully. No defects encountered.
CHAPTER 8
RESULTS
8.1 RESULTS:
8.1.1. Accuracy
It is defined as the ratio of number of correctly predicted observations to the total number of
observations
Accuracy = ( TPr+ TNg ) / ( TPr+ FPr + FNg + TNg)
Our proposed model, DSED, has shown good results compared to the existing approaches.
Though CNN also shows good results as the number of epochs increases, our proposed
system's accuracy is close to 98% because of its low false positive rate.
8.1.2. Precision
It is the ratio of the number of correctly predicted positive observations to the total number of
predicted positive observations.
Precision = TPr / (TPr + FPr)
Here the results show that our proposed approach has better precision irrespective of the number
of epochs compared to the other two approaches, as the number of positive observations has no
effect on it; for the other two, as the false observations increased, the precision was affected.
8.1.3 Recall
It is the ratio of the number of correctly predicted positive observations to all observations in the
actual class.
Recall = TPr / (TPr + FNg)
8.1.4 F-Score
F-Score is the weighted average of Recall and Precision.
F-Score = 2 * (Recall * Precision) / (Recall + Precision)
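All four metrics can be computed from the confusion-matrix counts as follows (plain Python; the example counts are hypothetical, not taken from our experiments):

```python
def metrics(tp, fp, fn, tn):
    """Compute accuracy, precision, recall and F-score from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_score = 2 * (recall * precision) / (recall + precision)
    return accuracy, precision, recall, f_score

# Hypothetical counts for illustration only:
acc, prec, rec, f1 = metrics(tp=90, fp=5, fn=5, tn=100)
```
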
Our proposed DSED approach has shown better recall compared to the other two; it is
sometimes not so stable because of the sampling, but for most of the observations it gave
good recall.
Our proposed method is also compared with the existing SaltSeg, which is 3D modelling based on
CNN. We tested the performance using all the metrics mentioned above; all tests were
conducted using 100 epochs. The results clearly show that the proposed method has better accuracy,
precision, and recall, while the F1 scores are almost identical.
8.2 Screenshots
CHAPTER-9
CONCLUSION AND FUTURE WORK
CHAPTER 10
BIBLIOGRAPHY
1. Yunzhi Shi, Xinming Wu and Sergey Fomel, "Salt classification using Deep
Learning," The University of Texas at Austin, 2018.
2. Gopi, A. P., Jyothi, R. N. S., Narayana, V. L., & Sandeep, K. S. (2020). Classification
of tweets data based on polarity using improved RBF kernel of SVM. International
Journal of Information Technology, 1-16.
3. K. J. Naik, "Classification and Scheduling of Information-Centric IoT
Applications in Cloud-Fog Computing Architecture (CS_IcIoTA)," 2020 14th
International Conference on Innovations in Information Technology (IIT), 2020, pp.
82-87.
4. Xinyou Yin et al., "A Flexible Sigmoid Function of Determinate Growth," Annals of
Botany, Volume 91, Issue 3, February 2003, Pages 361-371,
https://doi.org/10.1093/aob/mcg029, published on 01 February 2003.
5. Rao, B. T., Patibandla, R. L., Narayana, V. L., & Gopi, A. P. (2021). Medical Data
Supervised Learning Ontologies for Accurate Data Analysis. Semantic Web for
Effective Healthcare, 249-267.
6. Naik, K. J., & Soni, A. (2021). Video Classification Using 3D Convolutional Neural
Network. In A. Kumar & S. Reddy (Eds.), Advancements in Security and Privacy
Initiatives for Multimedia Images (pp. 1-18). IGI Global.
7. F. Meng et al., "Constrained directed graph clustering and segmentation propagation
for multiple foregrounds co-segmentation," IEEE Trans. Circuits Syst. Video
Technol., vol. 25, no. 11, pp. 1735-1748, Nov. 2015.
8. Z. Miao, K. Fu, S. Hao, S. Xian, and M. Yan, "Automatic water-body segmentation
from high-resolution satellite images via deep networks," IEEE Geosci. Remote Sens.
Lett., vol. PP, no. 99, pp. 1-5, 2018.
9. J. Long, E. Shelhamer, and T. Darrell, "Fully convolutional networks for semantic
segmentation," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2015,
pp. 3431-3440.
10. K. Jairam Naik, "A Dynamic ACO based Elastic Load Balancer for Cloud
Computing (D-ACOELB)," Data Engineering and Communication Technology
(ICDECT), Lecture Notes in Advances in Intelligent Systems and Computing, book
series 1079, Springer Nature Singapore, pp. 11-20, 2020.
11. Alan Souza, Wilson Leao, Daniel Miranda, Nelson Hargreaves, Bruno Pereira Dias
and Erick Talarico (PETROBRAS Petroleo Brasileiro), "Salt segmentation using deep
learning," Apr. 2017.
12. Mikhail Karchevskiy, Insaf Ashrapov, and Leonid Kozinkin, "Automatic salt
deposits segmentation: A deep learning approach," arXiv preprint arXiv:1812.01429, 2018.
13. S. Oh, K. Noh, D. Yoon, S. J. Seol, and J. Byun, "Salt delineation from
electromagnetic data using convolutional neural networks," IEEE Geosci. Remote
Sens. Lett., vol. 16, no. 4, pp. 519-523, Apr. 2019.
14. H. Di, M. Shafiq, and G. AlRegib, "Multi-attribute K-means cluster analysis for salt
boundary detection," in Proc. 79th EAGE Conf. Exhibit., Jun. 2017, pp. 1-5.
15. C. Ramirez, G. Larrazabal, and G. Gonzalez, "Salt body detection from seismic data
via sparse representation," 2016.
CHAPTER 11
PUBLICATION
Figure 11. Paper Acceptance
Figure 12. Certification of Participation
RESEARCH ARTICLE | APRIL 28 2023
Abstract: Convolutional neural networks have been used efficiently in many fields, and increasing effort is being
devoted to the area of seismic imaging. Seismic image analysis is important in a variety of industrial applications;
finding underground salt bodies plays a vital role in detecting oil and gas reservoirs. Seismic image analysis still
requires experts to examine the salt body, which is a time-taking process. In this paper, we present a method for
detecting and optimizing deep-supervised edges that accurately segments the salt body. We create an edge-prediction
branch to detect the salt body's border, which aids feature learning by supervising the boundary loss. We also used a
sigmoid (logistic) function, which gives output effectively and accurately. We compared our model with two existing
approaches, with good results.
Keywords
Deep Learning, Salt segmentation, Sigmoid function, Seismic imaging.
INTRODUCTION
Many regions on earth have large amounts of oil and gas underground, which in turn form large amounts of salt
bodies. Salt boundary analysis is a model for understanding the structural model, the seismic migration speed, and a
construction model. The traditional methodology of analysis relies on experts to examine the salt body, its
attributes, and features. The properties are identified manually by experts, which may lead to highly variable results
and complex noise pollution, and the results produced are not always accurate. Due to such incorrect analyses, the
company's drilling personnel may also be placed in danger. The approaches in the existing literature can be divided
into four categories [2-3]: (i) the use of seismic attributes; (ii) the implementation of computer vision techniques;
(iii) the combination of seismic attributes; and (iv) deep learning algorithms [1].
Recently, computer vision methods have been implemented using learning methods with different features, such as
object detection and semantic segmentation [5]. Using deep learning statistical models, the algorithm immediately
translates raw input data into geological region mapping. The salt body has been regarded as a segmentation
challenge in previous investigations [8], with image featuring, while other studies considered the salt body as a
regression problem. Deep learning algorithms like CNNs [6,9] are used to predict the salt bodies from the data.
F. Meng et al. [7] proposed methods which used directed graphs for the purpose of clustering and segmenting salt
foregrounds, while J. Long et al. classified salt bodies as semantic segmentation. LBP (Local Binary Patterns) is
used to obtain the features accurately. The encoder-decoder architecture, which consists of convolutional layers and
pixel-level binary labels, is presented for salt segmentation. Each pixel takes the form of a probability representing
salt/non-salt. In this paper we propose a deeply supervised U-shaped model for effective salt division and
introduce a "Salt Boundary Prediction" branch to optimize the results. It contains an encoder and a decoder for the
purposes of down-sampling and up-sampling. Another unit, ReLU (Rectified Linear Unit), is also used; using this, the
output is accurate, but not faster. So we used a sigmoid function, following Xinyou Yin et al. [4], to predict the
probability of the output and improve the efficiency of the resulting salt-delineated part.
020009-1
RELATED WORK
Alan Souza et al. [11] used convolutional neural networks for semantic delineation of salt bodies on
different seismic volume data. The numerical experiments shown in the paper indicate that, even with a small number
of interpreted lines, one can obtain reasonable salt segmentation results, and the calculations are very easy.
Mikhail Karchevskiy et al. [12] relied on manual interpretation of seismic images by geophysics specialists. The
predictions provided even by a single DL model were able to achieve 27th place, and the efficiency is very high,
but it is a tedious process. S. J. Seol et al. [13] used the distribution of electrical resistivity data, mapping this
data with electromagnetic data along with a CNN. Using this, the subsurface salt structure can be found easily. It
is efficient, stable, and reliable, but it does not encode the exact position in the image, and a lot of training data is
necessary.
H. Di et al. [14] used the K-means clustering algorithm. The attribute domain used here generates a
probability volume that not only finds the salt boundaries but also supports effective salt interpretation. More
advanced salt interpretation is done here, but as the number of attributes used increases, the computational time also
increases.
PROPOSED MECHANISM
The salt bodies are completely different from natural images, which is a major issue. Because of this, a general
semantic segmentation model cannot be used to produce accurate results, so a deep-supervised
model is used.
Feature Extraction: A feature describes a characteristic of an image, so feature extraction is an important aspect of
seismic imaging. Before feature extraction, every image undergoes several phases such as normalization, resizing,
and binarizing. Local Binary Pattern (LBP) is an image-featuring operator which outputs a label for each
pixel by thresholding its neighborhood of pixels against an initial threshold value. The operator is used as
LBP_{P,R}^{u2}; here, histograms are used.
The subscript (P, R) represents using the operator in a (P, R) neighborhood, the superscript u2 means uniform
patterns, and n is the number of different labels produced.
A Deep-Supervised Model: In this paper, the entire architecture is defined using a simple U-shaped structure with
one encoder and one decoder. Using skip connections between the encoder and the decoder, the feature maps are fused
across resolutions. Up-sampling and down-sampling of the image are also done; the down-sampling of the entire
image is done by the encoder, selecting pixel by pixel.
After the encoding and decoding of the image, the segmented image is restored to its original size. There may be
cases in which some part of the image does not contain any salt. To overcome this negative impact
in the sampling process, a binary classification head is used at the top of the encoder, which identifies whether the
image contains salt. Meanwhile, it passes through a global average pooling process to detect the salt that is
already present. To predict the segmented salt region, a branch is added to eliminate the negative impact of empty
images on the network.
020009-2
Edge Prediction Branch: It is always a difficult task to predict the exact image in a pool of similar images, so feature
learning is used, implemented via an edge prediction branch. Here ReLU (Rectified Linear Unit) is used,
which improves the training speed of the data.
The importance of this ReLU is to produce the results efficiently and quickly. A focal loss branch, which we call
M_border, is used:
M = M_last + P * M_unempty + Q * M_class + R * M_border
P, Q and R are weighting attributes. The choice of the loss weights is done in two steps: each loss function has its
own loss magnitude in a different order, and the basic aim is to bring these different magnitudes into the same
order. Then M_last is obtained. Using this alone we may not get the exact boundary values, so a sigmoid function
is used.
Sigmoid function: A sigmoid function is a mathematical function; the family contains several functions such as the
hyperbolic tangent, arctangent, and logistic functions. It provides non-linearity, i.e., it introduces uniformity and
provides evenness to the data. If we use the logistic function, it gives accurate results whose values fall in the
range [0, 1].
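A sketch of the generalised logistic function (a0 is the sigmoid midpoint, M the curve's maximum value, and r the logistic growth rate, matching the symbols used in the algorithm description):

```python
import math

def logistic(a, a0=0.0, M=1.0, r=1.0):
    """Generalised logistic: M / (1 + exp(-r * (a - a0)))."""
    return M / (1.0 + math.exp(-r * (a - a0)))

# With M = 1 the output always lies in (0, 1), and logistic(a0) = 0.5 at the midpoint.
```
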
020009-3
The explanation of the above algorithm is as follows. First, we take a seismic image as input. Then a 1x1 convolution
is performed on that input, taking every single pixel of the seismic image. After that, a 3x3 filter is applied; these are
2D convolutions applied to regular 2D matrix objects (2D images). After that, the batch normalization process is done.
Step 6: The Rectified Linear Unit (ReLU) is an activation function. This function outputs a positive
value if the input value is positive; if the input value is negative, it outputs zero.
Y = max(0, a)
Step 7: Repeat Step 3 to Step 6 to improve accuracy.
Step 8: After completion of all the above steps, the output image has a segmented edge.
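Step 6's activation, Y = max(0, a), can be written directly as a one-line function:

```python
def relu(a):
    """Rectified Linear Unit: positive inputs pass through, negative inputs map to zero."""
    return max(0, a)
```
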
To summarize the algorithm: first we take a seismic image as input. Then a 1x1 convolution is performed [23-25] on
that input, taking every single pixel of the seismic image. After that, the 3x3 filter is applied; these are 2D
convolutions applied to regular 2D matrix objects (2D images). After that, the batch normalization process is done.
Then the sigmoid function is applied, specifically the logistic sigmoid; its input can be any real number, but its
output always ranges from 0 to 1. In the formula, a0 is the value of the sigmoid midpoint, M is the curve's maximum
value, and r is the logistic growth rate, with a ranging between -∞ and +∞. The next step is the Rectified Linear
Unit function: if the input is positive, the output is the same positive value; if it is negative, it returns zero. The
process continues with some steps repeated, which yields the salt body boundary in the given image. Using this we
have successfully obtained the boundary of the salt body, and we have examined the results using different tests,
which are discussed in the results section.
RESULTS
In this section we compare our proposed DSED algorithm with two existing approaches, the CNN of S. Oh et al. [13]
and the K-means of H. Di et al. [14], using the confusion matrix and the parameters accuracy, precision, recall, and
F-score. We calculated the true positive, false positive, true negative, and false negative rates of our system and the
other two mechanisms; the results are as follows.
Accuracy
It is defined as the ratio of number of correctly predicted observations to the total number of observations
Accuracy = ( TPr+ TNg ) / ( TPr+ FPr + FNg + TNg)
Our proposed model, DSED, has shown good results compared to the existing approaches. Though CNN also shows
good results as the number of epochs increases, our proposed system's accuracy is close to 98% because of its low
false positive rate.
FIGURE 2: Accuracy for proposed DSED
Precision
It is the ratio of the number of correctly predicted positive observations to the total number of predicted positive
observations.
Precision = TPr / (TPr + FPr)
Here the results show that our proposed approach has better precision irrespective of the number of epochs compared
to the other two approaches, as the number of positive observations has no effect on it; for the other two, as the false
observations increased, the precision was affected.
Recall
It is the ratio of the number of correctly predicted positive observations to all observations in the actual class.
Recall = TPr / (TPr + FNg)
F-Score
F-Score is the weighted average of Recall and Precision.
F-Score = 2 * (Recall * Precision) / (Recall + Precision)
Our proposed DSED approach has shown better recall compared to the other two; it is sometimes not so stable
because of the sampling, but for most of the observations it gave good recall.
Our proposed method is also compared with the existing SaltSeg, which is 3D modelling based on CNN. We tested the
performance using all the metrics mentioned above; all tests were conducted using 100 epochs. The results clearly
show that the proposed method has better accuracy, precision, and recall, while the F1 scores are almost identical.
CONCLUSION
In this paper we mainly concentrated on finding the boundaries of salt bodies. For categorizing salt bodies we used a
deep-supervised method, which exactly locates the salt bodies. These methods are used for detecting oil and gas
reservoirs because, wherever oil and gas are present, these salt bodies are formed; this analysis is very
helpful to industry. Semantic segmentation and convolution are applied to the seismic image, classifying each
pixel as a salt body or not. To classify precisely and accurately, a deep-supervised algorithm is used.
Then, in order to speed up the training, ReLU (Rectified Linear activation function) is used; this
activation function outputs zero if the input is negative, and outputs the input itself if it is positive.
To increase the uniformity and add non-linearity to the data, a sigmoid function is used, which predicts
the salt delineation accurately.
REFERENCES
1. Yunzhi Shi, Xinming Wu and Sergey Fomel, "Salt classification using Deep Learning," The University of
Texas at Austin, 2018.
2. Gopi, A. P., Jyothi, R. N. S., Narayana, V. L., & Sandeep, K. S. (2020). Classification of tweets data based
on polarity using improved RBF kernel of SVM. International Journal of Information Technology, 1-16.
3. K.J Naik (2021), "Classification and Scheduling of Information-Centric IoT Applications in Cloud- Fog
Computing Architecture (CS_IcIoTA)," 2020 14th International Conference on Innovations in Information
Technology (IIT), 2020, pp. 82-87,
4. Annals of Botany, Volume 91, Issue 3, February 2003, Pages 361-371, "A Flexible Sigmoid Function of
Determinate Growth," https://doi.org/10.1093/aob/mcg029, published on 01 February 2003.
5. Rao, B. T., Patibandla, R. L., Narayana, V. L., & Gopi, A. P. (2021). Medical Data Supervised Learning
Ontologies for Accurate Data Analysis. Semantic Web for Effective Healthcare, 249-267.
6. Naik, K. J., & Soni, A. (2021). Video Classification Using 3D Convolutional Neural Network. In A. Kumar
& S. Reddy (Eds.), Advancements in Security and Privacy Initiatives for Multimedia Images (pp. 1-18). IGI
Global.
7. F. Meng et al., “Constrained directed graph clustering and segmentation propagation for multiple
foregrounds co-segmentation,” IEEE Trans. Circuits Syst. Video Technol., vol. 25, no. 11, pp. 1735–1748,
Nov. 2015
8. Z. Miao, K. Fu, S. Hao, S. Xian, and M. Yan, “Automatic water-body segmentation from high-resolution
satellite images via deep networks,” IEEE Geosci. Remote Sens. Lett., vol. PP, no. 99, pp. 1–5, 2018.