
ICEBERG-SHIP CLASSIFICATION

OF SATELLITE RADAR IMAGES


A project report
Submitted in partial fulfillment of the requirements for the degree of

Bachelor of Computer Science

by Karan Upare

Under the guidance of

Prof. Ghodichor

SINHGAD INSTITUTE OF TECHNOLOGY


LONAVLA - 410401
2020-21

CERTIFICATE

Certified that the project work entitled ‘ICEBERG-SHIP
CLASSIFICATION OF SATELLITE RADAR IMAGES’ was carried
out by Mr. KARAN UPARE, Roll No. C-55, a bonafide student of
SINHGAD INSTITUTE OF TECHNOLOGY, LONAVLA,
towards partial fulfillment of the B.Tech project in Computer Science
Engineering during the academic year 2020-21. The project report has
been approved as it satisfies the academic requirements with respect to the
project work prescribed for the said degree.

————————-

Prof. Ghodichor
(Signature of the Supervisor)

————————-
(Signature of the Head of College)

The project evaluation date:

Internal Examiner External Examiner



DECLARATION

I declare that this written submission represents my ideas in my own words, and where

others’ ideas or words have been included, I have adequately cited and referenced the

original sources. I also declare that I have adhered to all principles of academic honesty
and integrity and have not misrepresented, fabricated, or falsified any idea, data, fact, or
source in my submission. I understand that any violation of the above will be cause for

disciplinary action by the Institute and can also evoke penal action from the sources which
have thus not been properly cited or from whom proper permission has not been taken
when needed.

—————
(Signature)

Karan Upare
(Name of the student)

C-55
(Institute Roll Number)

ACKNOWLEDGEMENT

I have put a great deal of effort into this project. However, it would not have been possible without the

kind support and help of many individuals and organizations and I would like to extend

my sincere thanks to all of them.



I am highly indebted to Prof. Ghodichor for his guidance and constant supervision, for providing the necessary information regarding the project, and for his support in helping me work on it. I would like to express my gratitude towards my parents for their kind co-operation and encouragement. I would also like to thank all the people who have willingly helped me out with their abilities.

Contents

1 Introduction
  1.1 Motivation and Scope
  1.2 Objective
  1.3 Report Outline
2 Literature Survey
3 Data Engineering
  3.1 Data Collection
  3.2 Data Cleaning
  3.3 Visualizing the Dataset
4 Feature Extraction and the Training Networks
  4.1 Simple Convolutional Neural Network
  4.2 Convolutional Neural Network with Data Augmentation
  4.3 Convolutional Neural Network with Incidence Angle
  4.4 Transfer Learning
    4.4.1 ResNet Model
    4.4.2 VGG16 Model
  4.5 Feature Extraction and Different Classifiers
5 Experiments and Results
  5.1 Simple Convolutional Neural Network
  5.2 CNN with Data Augmentation
  5.3 CNN with Incidence Angle
  5.4 ResNet Model
  5.5 VGG16 Model
  5.6 CNN + Support Vector Machine Classifier
  5.7 CNN + Random Forest Classifier
  5.8 CNN + K-neighbors Classifier
  5.9 Visualization of Filters and Features



List of Figures

3.1 Band 1 of radar signal
3.2 Band 2 of radar signal
4.1 Architecture of baseline CNN
4.2 Architecture of CNN + Data Augmentation model
4.3 Architecture of CNN + Incidence Angle model
4.4 Architecture of ResNet model
4.5 Architecture of VGG16 model
5.1 Accuracy of baseline CNN model
5.2 Accuracy of CNN + Data Augmentation model
5.3 Accuracy of CNN + Incidence Angle model
5.4 Accuracy of ResNet model
5.5 Accuracy of VGG16 model
5.6 Confusion matrix of SVM
5.7 Confusion matrix of Random Forest
5.8 Confusion matrix of K-neighbors
5.9 Horizontal band of radar images
5.10 Filters of first CNN layer
5.11 Result of convolution with these filters

Abstract

Drifting icebergs present threats to navigation and activities in offshore areas. In remote areas with particularly harsh weather, aerial reconnaissance methods are not feasible, so machine learning techniques are needed. However, the major challenges that remain are the lack of sufficient training data and the integration of additional features. In this Bachelor's project report we go through some of these challenges and detect icebergs in satellite radar images. Different machine learning techniques were used, such as a support vector machine, which achieved an accuracy of 82 percent, and transfer learning with a ResNet model, which achieved an accuracy of 91 percent.

Chapter 1

Introduction

1.1 Motivation and Scope


Drifting icebergs present threats to navigation and activities in offshore areas. Companies use aerial reconnaissance and shore-based support to monitor environmental conditions and assess risks from icebergs. In remote areas with particularly harsh weather, these methods are not feasible, and the only viable monitoring option is via satellite. This motivates bringing the advancements of machine learning and deep learning to bear on real-life problems.

1.2 Objective
The main objective is to build an algorithm that automatically identifies whether a remotely sensed target is a ship or an iceberg, using machine learning and deep learning.

1.3 Report outline


The present work is outlined as follows. Chapter 2 contains the literature survey of currently available datasets and discusses what work has been done on this problem. Chapter 3 covers the data collection for this project, mentions the resources used for creating the dataset, and describes the preprocessing conducted on the dataset. Chapter 4 covers the feature extraction methods, viz. the baseline CNN, pre-trained models, etc., and the algorithms used to train and test the dataset for iceberg-ship classification. Chapter 5 presents the results obtained.

Chapter 2

Literature Survey
Several research papers have been published with different features and models for classifying satellite radar images into ship and iceberg classes. In [2] Cheng Zhan presented a convolutional neural network (CNN) designed to work with limited training data and features, while demonstrating its effectiveness on this problem. Results showed that transfer learning gave a significant boost in accuracy. The augmentation used for feature engineering was accomplished by a variety of image transformations: smoothing, first and second derivatives, gradient, and Laplacian. In [3] Ankita Rane and Vadivel Sangili presented a semi-supervised approach in which they labelled the test data using pseudo-labeling, which helped in data augmentation and increased accuracy.

Chapter 3

Data Engineering

3.1 Data Collection


Statoil, a multinational energy provider operating worldwide, partnered closely with companies such as C-CORE to provide the dataset on Kaggle. C-CORE has been using satellite data for over 30 years and has been developing a surveillance network based on computer vision. In November 2017, the St. John's-based applied R&D organization C-CORE and the international energy company Equinor provided the dataset on Kaggle [1].

3.2 Data Cleaning


Since the dataset contains radar images, it cannot be fed directly into a machine learning model, as it contains noise.

Following were the main steps undertaken to ensure that the data provided for feature
extraction is clean:

• Reshaping: The horizontal and vertical bands are flat vectors, so we have to reshape them into 75x75 matrices to work with them as images.

• Filters: Bilateral and wavelet denoising filters are used to remove noise from both the horizontal and vertical channels. A bilateral filter is an edge-preserving, noise-reducing filter: it averages pixels based on their spatial closeness and radiometric similarity. A wavelet denoising filter relies on the wavelet representation of the image; the noise is represented by small values in the wavelet domain, which are set to 0.

• Smoothing: The resultant image is de-noised again by smoothing with a Gaussian filter.

• Third Band: The HH and HV bands are normalized and added to create an extra feature as part of feature engineering.
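As a rough sketch of the steps above (reshaping, smoothing, and the third band), the per-sample preprocessing could look like the following. The function name, the `sigma` value, and the min-max normalization are illustrative assumptions, not the report's actual code; the bilateral and wavelet denoising steps are noted in a comment.

```python
import numpy as np
from scipy.ndimage import gaussian_filter


def _minmax(band):
    """Min-max normalize a band to [0, 1]."""
    return (band - band.min()) / (band.max() - band.min() + 1e-9)


def preprocess_sample(band_hh, band_hv, size=75):
    """Reshape flat band vectors, smooth them, and build a third band.

    In the pipeline described above, bilateral and wavelet denoising
    (e.g. skimage.restoration.denoise_bilateral / denoise_wavelet)
    would be applied before the Gaussian smoothing sketched here.
    """
    hh = np.asarray(band_hh, dtype=np.float64).reshape(size, size)
    hv = np.asarray(band_hv, dtype=np.float64).reshape(size, size)

    # Smoothing: de-noise again with a Gaussian filter (sigma is a guess)
    hh = gaussian_filter(hh, sigma=1.0)
    hv = gaussian_filter(hv, sigma=1.0)

    hh, hv = _minmax(hh), _minmax(hv)

    # Third band: sum of the normalized HH and HV bands
    band3 = _minmax(hh + hv)
    return np.stack([hh, hv, band3], axis=-1)  # shape (size, size, 3)
```

The resulting (75, 75, 3) array can then be fed to the CNNs of Chapter 4 like an ordinary three-channel image.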

3.3 Visualizing the Dataset


Band 1 and Band 2 are signals characterized by radar backscatter produced from different polarizations at a particular incidence angle. The polarizations correspond to HH (transmit/receive horizontally) and HV (transmit horizontally and receive vertically).

FIGURE 3.1: Band 1 of radar signal



FIGURE 3.2: Band 2 of radar signal


Radar images cannot be classified as iceberg or ship just by looking at the image.

Chapter 4

Feature Extraction and the Training Networks

4.1 Simple Convolutional Neural Network


The first step in any image classification system is to extract features, i.e. to identify components of the image that are good for distinguishing an iceberg from a ship among all the other content, carrying information such as shape, sharpness, edges, etc.

To extract features from an image, for example edges, a Haar filter or Canny filter is conventionally convolved with the image using fixed weights. Similarly, in a convolutional neural network we use convolution with filters to extract features from an image, but the filter weights are not pre-determined: they are learned through backpropagation from the last layer.

In a convolutional neural network (CNN), the first few layers are used to extract basic features like edges, lines, shapes, etc. These edge-enhanced images are then passed through another convolution layer to extract more important features, and so on.

The following architecture was used:



A CNN model was trained with 2 convolutional layers with a kernel size of 3x3 and "relu" activation, each followed by batch normalization, max pooling, and dropout with probability 0.2. The extracted features were then passed through a neural network with dense layers of 512 and 256 neurons with "relu" activation, and finally through a sigmoid activation layer to give the probability of belonging to each class.

FIGURE 4.1: Architecture of Baseline CNN
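A minimal Keras sketch of this baseline is shown below. The filter counts (32 and 64) and the optimizer are assumptions; the report specifies only the kernel size, the dropout rate, the dense-layer widths, and the activations.

```python
from tensorflow import keras
from tensorflow.keras import layers


def build_baseline_cnn(input_shape=(75, 75, 3)):
    """Two conv blocks (3x3 kernels, ReLU, batch norm, max pooling,
    dropout 0.2) followed by 512- and 256-unit dense layers and a
    sigmoid output, as described above."""
    model = keras.Sequential([
        keras.Input(shape=input_shape),
        layers.Conv2D(32, (3, 3), activation="relu"),   # 32 filters: assumption
        layers.BatchNormalization(),
        layers.MaxPooling2D((2, 2)),
        layers.Dropout(0.2),
        layers.Conv2D(64, (3, 3), activation="relu"),   # 64 filters: assumption
        layers.BatchNormalization(),
        layers.MaxPooling2D((2, 2)),
        layers.Dropout(0.2),
        layers.Flatten(),
        layers.Dense(512, activation="relu"),
        layers.Dense(256, activation="relu"),
        layers.Dense(1, activation="sigmoid"),          # P(iceberg)
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```

A single sigmoid unit suffices for the two-class problem: the output is the probability of the iceberg class, and 1 minus that is the probability of the ship class.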

4.2 Convolutional Neural Network with Data Augmentation
Lack of sufficient training data is a challenging problem in ML, but it is also realistic: even small-scale data collection can be incredibly costly or almost impossible in certain real-world use cases (e.g. medical imaging). To improve the accuracy of the model and effectively enlarge the training set, images from the training set are mirror-flipped left-right and up-down, and rotated.
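The flips and rotations described above can be sketched with NumPy; the function name and the choice of 90-degree rotations are illustrative, and each transformed copy keeps the label of the original image.

```python
import numpy as np


def augment(image):
    """Return the original image plus its left-right flip, up-down flip,
    and 90/180/270-degree rotations -- the transformations described
    above, giving six training samples per original image."""
    return [
        image,
        np.fliplr(image),      # mirror left-right
        np.flipud(image),      # mirror up-down
        np.rot90(image, k=1),  # rotate 90 degrees
        np.rot90(image, k=2),  # rotate 180 degrees
        np.rot90(image, k=3),  # rotate 270 degrees
    ]
```

These transformations are safe here because a radar image of an iceberg or ship remains a valid image of the same class under flipping and rotation.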

The following architecture was used:

FIGURE 4.2: Architecture of CNN+Data Augmentation model

4.3 Convolutional Neural Network with Incidence Angle


The angle at which a radar image is taken is an important piece of information, and thus a useful feature, for the detection of an iceberg or ship, so it is important to include it in the dataset. After feature extraction by the CNN model, the incidence angle is therefore included as a feature before the features are passed into the dense neural network for classification.
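The concatenation step can be sketched as follows, operating on the flattened CNN feature vectors; the function name and the standardization of the angle are assumptions, not the report's code.

```python
import numpy as np


def append_incidence_angle(cnn_features, angles):
    """Append the incidence angle to each sample's CNN feature vector
    before the dense classification layers.

    cnn_features: (N, D) array of flattened CNN outputs.
    angles: length-N sequence of incidence angles in degrees.
    """
    angles = np.asarray(angles, dtype=np.float64).reshape(-1, 1)
    # Standardize the angle so its scale matches the learned features
    angles = (angles - angles.mean()) / (angles.std() + 1e-9)
    return np.concatenate([cnn_features, angles], axis=1)  # (N, D + 1)
```

In a Keras model the same effect would be obtained with a second input and a `Concatenate` layer; the NumPy version above just makes the data flow explicit.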

The following architecture was used:

FIGURE 4.3: Architecture of CNN+Incidence model

4.4 Transfer Learning


Transfer learning is a form of machine learning in which a model built for one task is reused as the starting point for a model on a second task.
It is a common method in deep learning, where pre-trained models are used as the starting point for computer vision and natural language processing tasks, due to the vast computational and time resources required to build neural network models for these problems and the huge performance gains they provide on related tasks.

4.4.1 ResNet Model

The Residual Network, or ResNet for short, is a model that makes use of the residual module
involving shortcut connections.

It was developed by researchers at Microsoft and described in the 2015 paper titled

“Deep Residual Learning for Image Recognition.”


The ResNet architecture was modified to include the incidence angle feature, and this new architecture was then used. The convolution layer with 512 features was removed, and the output of the fourth 128-filter layer was concatenated with the incidence angle and passed through a neural network with dense layers of 256 and 64 neurons. Its output was passed through a sigmoid activation layer to give the probability of belonging to each class.

The following architecture was used:



FIGURE 4.4: Architecture of ResNet model

4.4.2 VGG16 Model

VGG16 is a convolutional neural network (CNN) architecture that was used to win the ILSVRC (ImageNet) competition in 2014. It is considered one of the best vision model architectures to date. The most distinctive thing about VGG16 is that instead of having a large number of hyperparameters, its designers focused on convolution layers with 3x3 filters and stride 1, always using same padding, and max pool layers with 2x2 filters and stride 2. It follows this arrangement of convolution and max pool layers consistently throughout the whole architecture. At the end it has 2 FC (fully connected) layers followed by a softmax for output. The 16 in VGG16 refers to its 16 layers that have weights. This is a pretty large network, with about 138 million parameters.

Features were extracted from the VGG16 and MobileNet models, and an extra global max-pooling layer was applied. Those features were then concatenated with the incidence angle, and the result was passed through a neural network followed by a sigmoid activation layer. Due to limited GPU resources, the model was trained on a small dataset. The following architecture was used:

FIGURE 4.5: Architecture of VGG16 model


4.5 Feature Extraction and Different Classifiers
Features extracted from the CNN were passed through Support Vector Machine, Random Forest, and K-neighbors classifiers rather than a dense neural network. A confusion matrix was calculated to determine the precision, recall, and F1-score of each model. A confusion matrix is a summary of prediction results on a classification problem: the numbers of correct and incorrect predictions are summarized with count values and broken down by class. The confusion matrix shows the ways in which a classification model is confused when it makes predictions. It gives insight not only into the errors being made by a classifier but, more importantly, into the types of errors being made.
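The classifier comparison above can be sketched with scikit-learn as follows. The hyperparameters are scikit-learn defaults and the function name is illustrative; the report does not state the settings actually used.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import confusion_matrix


def evaluate_classifiers(train_feats, train_y, test_feats, test_y):
    """Fit the three classical classifiers on CNN-extracted features
    and compute a confusion matrix for each.

    Rows of each matrix are true classes (ship, iceberg); columns are
    predicted classes. Precision, recall, and F1 follow from its entries.
    """
    classifiers = {
        "SVM": SVC(),
        "Random Forest": RandomForestClassifier(),
        "K-neighbors": KNeighborsClassifier(),
    }
    results = {}
    for name, clf in classifiers.items():
        clf.fit(train_feats, train_y)
        pred = clf.predict(test_feats)
        results[name] = confusion_matrix(test_y, pred)
    return results
```

For per-class precision, recall, and F1-score in one call, `sklearn.metrics.classification_report` can be applied to the same predictions.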

Chapter 5

Experiments and Results

5.1 Simple Convolutional Neural Network


Following are the results obtained for the simple CNN model.

FIGURE 5.1: Accuracy of Baseline CNN model

5.2 CNN with Data Augmentation


Following are the results obtained for the CNN model with data augmentation.

FIGURE 5.2: Accuracy of CNN+Data Augmentation model

5.3 CNN with Incidence Angle


Following are the results obtained for the CNN model with the incidence angle as an additional feature.

FIGURE 5.3: Accuracy of CNN+Incidence Angle Model

5.4 ResNet Model



Following are the results obtained for the ResNet convolutional neural network model.

FIGURE 5.4: Accuracy of ResNet Model

5.5 VGG16 Model


Following are the results obtained for the VGG16 convolutional neural network model.

FIGURE 5.5: Accuracy of VGG16 Model



5.6 CNN + Support Vector Machine Classifier


Following are the results obtained when features extracted from the convolutional neural network were classified with a Support Vector Machine.

FIGURE 5.6: Confusion Matrix of SVM


5.7 CNN + Random Forest Classifier

Following are the results obtained when features extracted from the convolutional neural network were classified with a Random Forest.

FIGURE 5.7: Confusion Matrix of Random Forest


5.8 CNN + Kneighbors Classifier

Following are the results obtained when features extracted from the convolutional neural network were classified with the K-neighbors algorithm.

FIGURE 5.8: Confusion Matrix of K-neighbors


5.9 Visualization of Filters and Features
• Visualizing the band

FIGURE 5.9: Horizontal band of Radar images

• Visualizing the filters of the first convolutional layer

There are 64 filters in the first convolutional layer; shown here are 6 filters of size 3x3. These filters are applied to iceberg or ship images to detect edges and shapes in a radar image by performing convolution with the image matrix. After convolution, the resultant image has more sharply detected shapes. This edge-detected resultant image is then passed through another convolutional layer to enhance further features.
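To illustrate how such a filter performs convolution with the image matrix, here is a hand-crafted vertical-edge (Sobel-style) filter applied to a toy image; the learned filters differ in their weights but act in exactly the same way. The filter values and the toy image are illustrative, not taken from the trained network.

```python
import numpy as np
from scipy.signal import convolve2d

# A hand-crafted 3x3 vertical-edge filter, analogous to one learned
# filter of the first convolutional layer.
edge_filter = np.array([[-1.0, 0.0, 1.0],
                        [-2.0, 0.0, 2.0],
                        [-1.0, 0.0, 1.0]])

# Toy "radar image": dark left half, bright right half.
image = np.zeros((8, 8))
image[:, 4:] = 1.0

# Convolving yields zero response in the flat regions and a strong
# response along the vertical edge between the two halves.
response = convolve2d(image, edge_filter, mode="valid")
```

The output is smaller than the input (6x6 for an 8x8 image with a 3x3 kernel in "valid" mode), which is why CNNs often pad their inputs to preserve spatial size.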

FIGURE 5.10: Filters of first CNN layer

Resultant image after convolution with the first-layer filters:

FIGURE 5.11: Resultant of Convolution with these Filters



Bibliography

[1] Statoil/C-CORE radar images of iceberg and ship dataset. https://www.kaggle.com/c/statoil-iceberg-classifier-challenge.

[2] Cheng Zhan, Licheng Zhang, Zhenzhen Zhong, Sher Didi-Ooi, Youzuo Lin, Yunxuan Zhang, Shujiao Huang, and Changchun Wang. Deep learning approach in automatic iceberg-ship detection with SAR remote sensing data. University of Bristol, Dec 2018.

[3] Ankita Rane and Vadivel Sangili. Implementation of improved ship-iceberg classifier using deep learning. Jun 2017.
