Sonar Processing Project
Yousef Qassim
Qassim_Youssef@hotmail.com
6/5/2008
This report is not for distribution. Only for class work.
Contents

Chapter 2: Background
    Section 2.1: Sonar
    Section 2.2: Beamforming
Section 3.3: Beam-Time-Record Images at 100, 300, and 400 Hz Using ULA and Chebyshev Tapered Beamforming
Section 3.5: Image Shows the Actual Array Elements in Space and the Actual Spacing between the Elements
Section 3.7: ULA BTR Images and the Actual Elements Positions BTR Images Comparison
Chapter 4: Conclusions
Chapter 6: References
1) Project Statement:

The project is MATLAB code written to perform sonar processing on data provided by Dr. Lisa Zurk. The data were recorded by an array fielded as part of the ONR Shallow Water Array Performance (SWAP) program: a 32-element bottom-mounted hydrophone array deployed off the Florida coast, sampled at 1 kHz, representing a passive sonar system. The files array_shape.mat and SWAP_sonar_data.mat contain the locations of the array hydrophones and the recorded data, respectively. The code generates spectrogram images and beam-time-record (BTR) images, and applies the passive sonar equation to the data under the assumption of a linear array. Finally, we provide a discussion of the results obtained by the MATLAB code.
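Before any processing, the two files can be loaded in MATLAB. The sketch below is illustrative; the variable names it assumes (samples, fs, x, y) come from the code excerpts later in this report, not from inspecting the files themselves.

% Minimal loading sketch; variable names (samples, fs, x, y) are assumed
% from the code excerpts later in this report
load('array_shape.mat');     % hydrophone locations (x, y coordinates)
load('SWAP_sonar_data.mat'); % recorded hydrophone data (samples, fs)
whos                         % list what the files actually contain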
2) Background:

2.1) Sonar:

SONAR is short for Sound Navigation and Ranging. Sonar is used for communication, navigation, and detection of targets by sending sound waves and/or listening to the returning echoes. There are two kinds of sonar: active sonar and passive sonar. Active sonar consists of a transmitter and a receiver. When the two are in the same place the system is called monostatic, which is the case for most sonar systems; other configurations are bistatic and multistatic sonar systems. Active sonar generates pulses, or pings, of sound and then listens for the echo and reflections (see Fig1). Using several hydrophones, it can measure the distance to an object, its Doppler shift, and so on. Active sonar performance can be measured by two equations: the first applies in the presence of ambient noise (the noise-limited case), and the other applies when reverberation dominates (the reverberation-limited case):
SL − 2TL + TS − (NL − DI) = DT    (1)

SL − 2TL + TS = RL + DT    (2)
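As a quick worked example of eq. (1), with made-up levels that are not taken from the SWAP data:

% Hypothetical levels for illustration only (not from the SWAP data)
SL = 220; TL = 60; TS = 15; NL = 70; DI = 20; % all in dB
DT = SL - 2*TL + TS - (NL - DI)               % eq. (1): DT = 65 dB here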
Fig1: Active Sonar System [1]
Passive sonar listens to sound in the water without transmitting sound waves of its own. Passive sonar is usually used in military applications. It uses different techniques to identify targets, usually carried out by a computer system: the computer matches the received sounds against large databases to identify the targets. However, the sonar operator usually performs the final classification. In passive sonar systems the hydrophones are usually towed by ships or submarines, so the system is mostly limited by the noise generated by the vehicle in addition to the ambient noise of the water environment. The performance of passive sonar is defined by the following equation; notice that only one-way propagation is involved:

SL − TL − (NL − DI) = DT    (3)
2.2) Beamforming:
Fig2: Visualization of a beamformer
In active systems the beamformer controls the phase and amplitude of each transmitted pulse or wave to produce a pattern of constructive and destructive interference in the wavefront. At the receiver, the signals from the different sensors are combined to form the expected pattern. In the receiver of an active system, and in passive sonar, the beamformer sums the signals from each sensor after delaying them by slightly different times. This is done so that each received signal reaches the output at the same time, making the total received signal add up into one strong signal. The signal from each sensor can also be amplified by a different weight. Different weighting patterns can be used to construct the desired sensitivity pattern; an example is the Dolph-Chebyshev weighting pattern.
A mainlobe, sidelobes, and nulls are produced, and they can be controlled, which gives the advantage of ignoring noise or jammers from specific directions while listening for signals in other directions. Fig3 shows a beamformer pattern with a mainlobe, sidelobes, and nulls.
Fig3: Beamformer pattern with mainlobe, sidelobes, and nulls [6].
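A pattern like Fig3 can be generated for a uniform linear array with a Dolph-Chebyshev taper. The sketch below uses assumed parameters (the 300 Hz frequency and 30 dB sidelobe level are illustrative choices, not necessarily the project's values):

% Sketch of a Dolph-Chebyshev beampattern (assumed illustrative parameters)
M = 32;                    % number of elements
d = 1.5;                   % element spacing (m)
c = 1500; f = 300;         % sound speed (m/s) and frequency (Hz)
lambda = c/f;              % wavelength (m)
w = chebwin(M,30);         % Chebyshev taper with 30 dB sidelobes
theta = 0:0.5:180;         % look angles (deg), 90 = broadside
A = exp(1j*2*pi*d/lambda*(0:M-1)'*cosd(theta)); % steering matrix
B = 20*log10(abs(w'*A)/sum(w));                 % normalized pattern (dB)
figure; plot(theta,B); grid on
xlabel('Angle (deg)'); ylabel('Response (dB)');
title('Chebyshev-tapered ULA beampattern');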
Fig4: Linear array sensors [6]
The wavefront reaches each sensor with a time delay of d/c, where d is the distance from the reference sensor measured along the propagation direction, and c is the speed of the plane wave.
Fig5: Wavefronts arrive at different sensors with a time delay equal to the travel time of the plane wave [3]
Fig6: Delay-and-sum beamformer [4].
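A minimal narrowband delay-and-sum sketch is shown below; since the project processes single frequencies, the delays are applied as phase shifts. The snapshot matrix X and the beam grid phi are assumptions for illustration, not the project's data:

% Narrowband delay-and-sum sketch (assumed names and placeholder data)
M = 32; d = 1.5; c = 1500; f = 300;   % array and signal parameters
K = 100;                              % number of time snapshots (assumed)
X = randn(M,K)+1j*randn(M,K);         % placeholder snapshots, M x K
phi = 0:4.5:180;                      % look directions (deg)
tau = d*(0:M-1)'*cosd(phi)/c;         % per-sensor delays, M x Nbeams
V = exp(-1j*2*pi*f*tau);              % phase-delay steering matrix
Y = V'*X/M;                           % beam outputs, Nbeams x K
P = 20*log10(abs(Y));                 % beam power (dB): one BTR column per snapshot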
Fig7: Lofargram of the 10th hydrophone
We can see in this picture that the intensity of the signals picked up at the 10th phone is at its highest levels between 0 and 300 seconds, and it starts fading after that. This means that, regardless of how many targets we have here, it is (or they are) moving away as time increases. The blue values represent the region where no signals could be picked up, because they are outside the range of the hydrophone linear array.
Assume the transmission loss TL = 75 dB. The array gain AG = 15 dB is calculated as 10*log10(M), where M is the number of phones; in this case M = 32. The source level SL is calculated from the loudest source echo level added to the transmission loss, giving SL = 195.7 dB. The noise level is calculated by adding the echo level estimated from the color bar of Fig7 in a quiet region (about 60 dB) to the array gain AG; this results in NL = 75 dB. Finally, SNR = 120.7 dB, found by taking the difference between SL and NL.
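The arithmetic above can be reproduced in a few lines; the loudest echo level of 120.7 dB is the value implied by SL − TL:

% Sonar-equation arithmetic using the values quoted above
M  = 32;
AG = 10*log10(M);   % array gain, about 15 dB
TL = 75;            % assumed transmission loss (dB)
EL_loud  = 120.7;   % loudest echo level from Fig7 (dB), implied by SL - TL
EL_quiet = 60;      % echo level in a quiet region of Fig7 (dB)
SL  = EL_loud + TL  % source level = 195.7 dB
NL  = EL_quiet + AG % noise level, about 75 dB
SNR = SL - NL       % about 120.7 dB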
3.3) Beam-Time-Record Images at 100, 300, and 400 Hz Using ULA and Chebyshev Tapered Beamforming:

A strong source is picked up by the hydrophone linear array, alongside many other weak sources. We can't tell the exact number of sources present in these figures, but we know that the data were recorded off the coast of Florida, which gives us a hint that many ships may be present there. Going back to bearing angle 90, where the strongest source exists, we can conclude that the source starts at the endfire of the linear array, because the intensity of the source is highest there. As time progresses, we can see the source intensity decrease, leading to the simple conclusion that the source is moving away from the linear array. Because of the cone-angle ambiguity of the uniform linear array, we can't tell whether the source is moving to the right or to the left of the ULA.
Fig8: BTR image at 100 Hz
Fig9: BTR image at 300 Hz

Fig10: BTR image at 400 Hz
I calculated the SL, NL, and SNR for each figure (Fig8, Fig9, and Fig10). The source level was computed by finding the loudest EL and adding TL, giving SL = 183.2, 174.4, and 172.7 dB for 100, 300, and 400 Hz respectively. The noise level was computed by taking the mean value of the EL over a quiet area and adding AG to it, giving NL = 54.5, 41.4, and 44.3 dB. Finally, the SNR was computed by taking the difference between SL and NL: SNR = 125.7, 133, and 128.4 dB.
3.5) Image Shows the Actual Array Elements in Space and the Actual Spacing between the Elements:
This part of the code is meant to show the actual positions of the array elements in a plan view and to reflect the linear distances between the elements. In Fig11 the red line shows the actual positions of the elements and the blue line shows the actual spacing between elements.
Fig11: The actual array element positions and the actual spacing between them
Fig12: BTR image using the actual array element positions at 100 Hz.
Fig13: BTR image using the actual array element positions at 300 Hz.

Fig14: BTR image using the actual array element positions at 400 Hz.
3.7) ULA BTR Images and the Actual Element Positions BTR Images Comparison:
By comparing the results obtained in Section 3.3 and Section 3.6, we can see that the results in general are alike. This means that in both cases the linear array picks up (receives) a strong source at bearing angle 90, and the source starts moving away as time progresses. We can also notice the existence of weak sources that may represent other objects (ships, boats, etc.). The main difference I could notice in these figures is that the resolution at 100 and 300 Hz is better in the second case, using the actual array element positions, while the resolution at 400 Hz is the same in both cases and the results match closely. I believe the difference in resolution between the two cases for the 100 and 300 Hz figures is related to the fact that we didn't take into consideration the distance between the source and the linear array in the first case, while we did in the second one. It seems that the second scheme, taking the distance between the source and each element in the array and using these distances to build the beamformer, is more consistent than the scheme used in the first case. But usually in passive sonar the only information we have is the array element positions, so it's impossible to calculate the distances between the target source and the array elements.
4) Conclusions:

Sonar systems are mainly divided into active and passive systems, where an active system consists of a transmitter and a receiver while a passive system consists of a receiver only. Beamforming is broadly divided into two categories: conventional and adaptive beamforming. Delay-and-sum is an example of the conventional kind. In this project we used tapered beamforming, which is considered conventional beamforming because it is based on delay-and-sum beamforming.
% Change the array type from single to double: no effect on the actual data,
% but its representation requires more space in memory
Csamples=double(samples); % Casted samples
SnapshotTime=4;           % Take a snapshot every 4 sec
Phone10=Csamples(:,10);   % Select the 10th phone
WindowSize=SnapshotTime*fs; % Window size equals 4000 samples (fs = 1000 Hz)
OLS=WindowSize/2;           % Overlap of 2000 samples to produce 50% overlap
nfft=4096;                  % Number of FFT points
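The exact spectrogram call is not shown in this excerpt; a plausible sketch that consumes these parameters (the Hamming window is an assumed choice) is:

% Plausible lofargram computation (the window choice is an assumption)
[S,F,T] = spectrogram(Phone10,hamming(WindowSize),OLS,nfft,fs);
figure
surf(T,F,20*log10(abs(S)),'EdgeColor','none')
axis tight; view(0,90); colorbar
xlabel('Time(Sec)','FontWeight','bold');
ylabel('Frequency(Hz)','FontWeight','bold');
title('Lofargram of the 10th hydrophone','FontWeight','bold');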
% Chebyshev taper vector for 32 phones with r dB sidelobe level
tv = chebwin(M,r);
c=1500; % Speed of sound, 1500 m/s
d=1.5;  % Spacing between elements (m)
wl=[c/f(1) c/f(2) c/f(3)]; % Wavelengths at 100, 300, and 400 Hz
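The following sketch shows how the taper and the wavelengths could combine into tapered delay-and-sum steering weights; the beam grid phi and the cell array W are assumptions for illustration, not the original script:

% Sketch: Chebyshev-tapered ULA steering weights (assumed beam grid)
phi = linspace(0,180,41);  % look directions (deg), 41 beams (assumed)
W = cell(1,3);             % one weight matrix per frequency
for k = 1:3
    V = exp(-1j*2*pi*d/wl(k)*(0:M-1)'*cosd(phi)); % ULA steering matrix
    W{k} = repmat(tv,1,numel(phi)).*V;            % apply the taper per beam
end
% e.g., beam outputs at 100 Hz for narrowband snapshots X100 (M x K):
% Y100 = W{1}'*X100;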
% BTR image @ 400 Hz
figure
surf(phi,t,20*log10(abs(Y400')),'EdgeColor','none')
axis xy; axis tight; colormap(jet); view(0,90);
colorbar
xlabel('Angle Phi(Deg)','FontWeight','bold');
ylabel('Time(Sec)','FontWeight','bold');
title('BTR @ f=400 Hz','FontWeight','bold');
%% ------------------------------------------------------------------------
%% 4) Discussion of the images, including how many sources are present and
%% where they are relative to the array. Calculation of the sonar equation,
%% including array gain, and comparison to values in the plot
%% ------------------------------------------------------------------------
% There is one target with the strongest received signal; it starts at
% the endfire of the sensor array and then moves to the right or
% the left of the array. We can't distinguish whether the target is moving
% to the right or the left because we are using a linear array, which
% doesn't have the ability to do so. Also, we can see several other weak
% signals picked up from other targets, but we can't tell exactly how
% many there are.
% SL, NL, and SNR calculation for the results at different frequencies
SL100=20*log10(abs(max(max(Y100))))+TL %SL @ 100 Hz(=183.2dB)
NL100=20*log10(abs(mean(min(Y100(10:15,350:400)))))+AG%NL @ 100 Hz(=54.5dB)
SNR100=SL100-NL100 %SNR @ 100 Hz(=125.7dB)
% We can see that the results almost match the ones obtained in the
% lofargram for one hydrophone (sensor) at all frequencies
%% ------------------------------------------------------------------------
%% 5) Plot of the actual array element positions reflecting the linear
%% distances between them
%% ------------------------------------------------------------------------
distance=zeros(31,1); % Vector of distances between elements
Xvec=zeros(32,1);     % X vector to represent the linear distances
Xvec(1)=x(1);         % It starts at the first point of the real x vector
Yvec=zeros(32,1);     % Y vector of zeros to reflect the linear distances
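The loop that fills these vectors is not shown in this excerpt; an assumed completion, taking x and y as the actual element coordinates, is:

% Assumed completion: cumulative element-to-element distances along a line
for n=1:31
    distance(n)=sqrt((x(n+1)-x(n))^2+(y(n+1)-y(n))^2); % spacing n to n+1
    Xvec(n+1)=Xvec(n)+distance(n); % unroll the array onto the x-axis
end
plot(x,y,'r',Xvec,Yvec,'b') % red: actual positions, blue: linearized spacing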
ylabel('Y-Axis','FontWeight','bold');
title('Array Actual Elements Position and the Linear Distances'...
    ,'FontWeight','bold');
legend('Actual Array Elements Position','Array Elements Linear Spacing',...
    'Location','NorthWest');
%% ------------------------------------------------------------------------
%% 6) Compute the BTR images using the actual element positions, assuming
%% the source is in the far field
%% ------------------------------------------------------------------------
% Create source points at distance R with different angles
R=20e3;                % Distance from source to the array (=20 km)
x_s=R*sin(phi*pi/180); % x-coordinates of the sources
y_s=R*cos(phi*pi/180); % y-coordinates of the sources
% Distance matrix containing the distance between each source and each sensor
dis=zeros(Nbeams,M);   % 41 angles and 32 sensors
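The lines that fill this matrix are likewise not shown; a sketch consistent with the far-field steering described above (the reference-element delay step is an assumption) is:

% Assumed fill of the source-to-sensor distance matrix
for b=1:Nbeams
    for m=1:M
        dis(b,m)=sqrt((x_s(b)-x(m))^2+(y_s(b)-y(m))^2);
    end
end
% Relative delays follow from range differences to a reference element:
% tau = (dis - repmat(dis(:,1),1,M))/c;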
surf(phi,t,20*log10(abs(Y300')),'EdgeColor','none')
axis xy; axis tight; colormap(jet); view(0,90);
colorbar
xlabel('Angle Phi(Deg)','FontWeight','bold');
ylabel('Time(Sec)','FontWeight','bold');
title('BTR @ f=300 Hz','FontWeight','bold');
6) References:

1. http://en.wikipedia.org/wiki/Main_Page; accessed 6/3/2008.
2. http://cnx.org/content/m12563/latest/; accessed 6/3/2008.
3. http://cnx.org/content/m12516/latest/; accessed 6/3/2008.
4. Gail L. Rosen, "ULA Delay-and-Sum Beamforming for Plume Source Localization," Drexel University, Philadelphia, PA 19104.
5. Joseph J. Sikora, "Sound Propagation around Underwater Seamounts," Master's thesis, MIT, August 2005.
6. D. G. Manolakis, Statistical and Adaptive Signal Processing, Artech House, Inc., Norwood, MA, 2005.