
DIGITAL SIGNAL PROCESSING

(EC-313)

Student Name: ______________________________

Degree/Syndicate: ____________________________

EC-313 Digital Signal Processing Lab Grading Rubric

Each trait is scored on four levels: Exceptional (9-10), Acceptable (6-8), Amateur (3-5), and Unsatisfactory (0-2).

Tasks Completion (Weight = 40%)
• Exceptional: All the lab tasks have been completed in class and shown in the report.
• Acceptable: Most of the lab tasks have been completed and shown in the report.
• Amateur: A few of the lab tasks have been completed and shown in the report.
• Unsatisfactory: No lab tasks, or hardly any of the lab tasks, have been completed and shown in the report.

Tasks Specifications (Weight = 30%)
• Exceptional: The code works properly, shows perfect outputs, and meets all the required specifications.
• Acceptable: The code works somewhat properly, produces correct results, and meets most of the specifications.
• Amateur: The code works somewhat properly but barely produces any correct results and hardly meets any specifications.
• Unsatisfactory: The code neither works properly nor produces correct results, and it does not meet any specifications.

Timeliness (Weight = 10%)
• Exceptional: The report is submitted in time, within the submission deadline.
• Acceptable: The report is submitted within an hour of the deadline.
• Amateur: The report is submitted within a day of the deadline.
• Unsatisfactory: The report is submitted within a week of the deadline.

Report (Documentation) (Weight = 20%)
• Exceptional: The report is well written with proper commenting and indentation and separate functions/header files, and it clearly explains what the code is accomplishing and how.
• Acceptable: The report is written in a concise manner with appropriate comments and indentation; however, the explanation is only somewhat useful for the reader and does not explain every aspect of the code.
• Amateur: The report is written in a simple manner with partial comments and improper indentation. The explanation makes it somewhat challenging for the reader to read and understand the code.
• Unsatisfactory: The report is written in an unclear manner, with no indentation and comments, which makes it very difficult for the reader to read and understand the code.

*In all cases, copied work will not be graded.

Score = ∑_{i=0}^{n−1} trait_weight_i × trait_score_i

Table of Contents

Lab # 01: Basics of Signal Processing
Lab # 02: Functions and Signal Operations
Lab # 03: Blind Source Separation
Lab # 04: Linear Predictive Coding
Lab # 05: Linear Convolution and Moving Average Filter
Lab # 06: Fourier Transform
Lab # 07: Fourier Transform Application
Lab # 08: Z-Transform
Lab # 09: Sampling of Audio Signals and Aliasing
Lab # 10: Interpolation & Decimation
Lab # 11: Installation and Introduction to Code Composer Studio (v7.4)
Lab # 12: Switching LEDs and Working of Codec on DSK6713
Lab # 13: Convolution Sum Implementation on DSK6713
Lab # 14: Speech Processing using Lowpass Filter on DSK6713

Lab # 01: Basics of Signal Processing

Objective:
1. The objectives of this session are to explore different types of digital signals: voice, image, and video.
2. Understand the basic operations on signals, images, and videos.
Description: MATLAB is required.
Procedure: In this lab we are going to process audio signals and images.
To record a voice signal, use the following MATLAB function:
recorder = audiorecorder(Fs,nBits,nChannels)
By default, Fs = 8000 Hz, nBits = 8, and nChannels = 1.
recordblocking(recorder, time in seconds)
To play the recording, use the MATLAB play function:
play(recorder)
Store the data in a double-precision array in order to plot it:
myRecording = getaudiodata(recorder)
To read an image, use the following MATLAB function:
A = imread('image_name');
To read a video and its frames, use the following MATLAB functions:
V = VideoReader('video_name')
frame = read(V,index)
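
A minimal sketch that strings these commands together is shown below; the image and video file names are placeholders and the recording length is illustrative.

recorder = audiorecorder(8000, 8, 1);   % default Fs, nBits, nChannels
recordblocking(recorder, 3);            % record for 3 seconds
play(recorder);                         % play back the recording
myRecording = getaudiodata(recorder);   % double-precision samples
plot(myRecording);                      % plot the recorded signal

A = imread('photo.jpg');                % read an image into a matrix
V = VideoReader('clip.mp4');            % open a video file
frame = read(V, 1);                     % read the first frame
imshow(frame);                          % display it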

LAB TASK
1. Record a voice with default sampling rate and perform following operations:
a) Flip your audio signal, play the voice and plot the signal using subplot.
b) Add 2 cosines to the audio with frequencies ranging from 1 kHz to 1.5 kHz, play the voice, and
plot the signal using subplot.
c) Divide the array of your audio into two equal parts, save the divided arrays into two
variables (let say a and b) and add both (a+b), play the resultant voice and plot the signal
using subplot.
d) Perform [(a*2)+(b*0.5)], play the resultant voice and plot the signal using subplot.
e) Add 4 cosines with frequency ranging from (5Hz to 4kHz) into audio signal, play the
resultant voice and plot the signal using subplot.
f) Drop the sampling rate of your audio by dropping every other sample, repeated 5 times consecutively, and
plot the signals using subplot.
2. Take two gray-scale images of the same size and perform the following operations:

a) Read the images in two matrices.
b) Take the transpose of each matrix and display the resultant images using subplot.
c) Add two images and display the result.
d) Read an image, multiply the image matrix by 0.5, and display the result.
3. Take a video and read it.
a) Read a video, split it into frames and display all the frames.
Home Task
b) After completing the above task, put the frames back into a video in reverse order and play the
video.
What to submit:
After you have completed your tasks, insert your code and figures into a Word file and submit it on
the LMS as a PDF.

Lab # 02: Functions and Signal Operations

Objective:
Generating basic sequences and performing operations on them. In this lab we will learn to:
• Generate Delta (Impulse) Function.
• Generate Unit Step Function.
• Generate Exponential Function.
• Generate sinusoidal function.
• Perform operations (scaling, shifting) on above functions.
The Unit Delta (Impulse) function: often called the discrete-time impulse or the unit impulse. It
is denoted by δ[n] and is defined by

δ[n] = 1 for n = 0, and δ[n] = 0 otherwise.

The Unit Step function: The unit step, denoted by u[n], is defined by

u[n] = 1 for n ≥ 0, and u[n] = 0 for n < 0.

Creation of Unit Impulse and Unit Step sequences: A unit impulse sequence I[n] of length N
can be generated using the MATLAB command
I = [1 zeros(1, N-1)];
Similarly, a unit step sequence S[n] of length N can be generated using the MATLAB command
S = ones(1, N);
The Exponential Function: Finally, an exponential sequence is defined by
Exponential signal: x[n] = A·α^n
The sign of α does not affect the rising or falling of the signal; instead, it affects its oscillatory behavior.
Negative alphas cause the signal to alternate between negative and positive values.
The magnitude of α affects the rising and falling behavior: |α|<1 results in falling signals,
whereas |α|>1 results in rising signals.
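
As a quick illustration of this behaviour, the sketch below plots A·α^n for a few illustrative values of α (not the ones required in the lab task):

n      = 0:20;
A      = 2;
alphas = [-0.5 0.5 -1.2 1.2];            % illustrative values of alpha
for i = 1:length(alphas)
    subplot(2, 2, i);
    stem(n, A * alphas(i).^n);           % falling for |alpha|<1, rising for |alpha|>1
    title(['alpha = ' num2str(alphas(i))]);
end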

The sinusoidal function: A sinusoidal sequence can be written as x[n] = A·cos(2π·f·n + θ), where A is the amplitude, f the (normalized) frequency, and θ the phase shift.

Basic Operations:
Signal Adding: This is a sample-by-sample addition given by

y[n] = x1[n] + x2[n]

and the lengths of x1[n] and x2[n] must be the same.
Signal Multiplication: This is a sample-by-sample multiplication (or "dot" multiplication) given by

y[n] = x1[n] · x2[n]

and the lengths of x1[n] and x2[n] must be the same.


Shifting: In this operation each sample of x[n] is shifted by an amount k to obtain a shifted
sequence y[n] = x[n − k].

Folding: In this operation each sample of x[n] is flipped around n = 0 to obtain a folded sequence
y[n] = x[−n].
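
One common way to implement these two operations, sketched below, is to carry the sample indices alongside the sample values (the signal used here is x1[n] from Lab Task # 02):

n = -2:2;  x = [11 -13 15 7 -9];   % sample indices and sample values

% Shifting by k: y[n] = x[n-k] -> the samples stay the same, the indices move by k
k  = 5;
ny = n + k;   y = x;

% Folding: y[n] = x[-n] -> reverse the samples and negate (and reverse) the indices
nf = -fliplr(n);   xf = fliplr(x);

subplot(2,1,1); stem(ny, y);  title('x[n - 5]');
subplot(2,1,2); stem(nf, xf); title('x[-n]');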

LAB TASK # 01:


a) Write generic MATLAB code to generate a unit impulse and a unit step.
b) Plot the exponential signal for A = 2 and α equal to -4, -0.5, 0.5, and 4. Plot all the sequences
in a single figure using the "subplot" command. Give your interpretation of the sequences.
c) Write generic MATLAB code to generate a sinusoid over the range n = -50:50, with frequency 0.08,
amplitude 2.5, and a phase shift of 90 degrees, and display it.
LAB TASK # 02:
x1[n] = [11 −13 15 7 −9],  −2 ≤ n ≤ 2
x2[n] = [−12 14 6 −8 5],  0 ≤ n ≤ 4
a) Write a MATLAB function for signal shifting, and shift the given signals by 5, i.e. x[n−5], and by
6, i.e. x[n+6], respectively.
b) Write a MATLAB function for signal flipping and flip the above signals. (Do not use the MATLAB
built-in function for flipping.)
c) Generate and plot the following sequences:
x[n] = 3·δ[n−10] + 15·δ[n+7],  −15 ≤ n ≤ 15
x[n] = n·u[n] + u[n−10] + u[n−20] + 10·α^(−0.8(n−5))·(u[n−20] − u[n−30]),  −30 ≤ n ≤ 30
where α = 2.
Note: Plot signals using stem. Give title to resultant signal and label x-axis and y-axis
properly.

LAB TASK # 03:


In Task 3 you have to equalize an audio signal using an equalizer. Run the code given to you, then add
different frequencies to filter the audio signal at different points and analyze the voice.
a) You have to design the GUI in MATLAB to control the gain of each band of frequencies as
shown in the figure.

Figure 1: Sample GUI
b) You should be able to play the combined audio after applying different gains to each band of
frequencies.

Lab # 03: Blind Source Separation

Objective:
In this lab you will work on the problem of removing an echo from a recording of a speech signal.

LAB TASK:
Write MATLAB code to build an audio conference system. Just like in a conference call, a speaker
can hear all the other members' voices but not their own.

In this task, you have to record the audio of four members and play the mixed audio to each
speaker one by one, ensuring that a given speaker cannot hear their own voice but can hear the other
three speech signals.
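
A minimal sketch of the mixing idea is shown below; it assumes the four recordings have already been captured as equal-length column vectors (the random placeholders stand in for real recordings):

Fs = 8000;                          % assumed sampling rate
r1 = randn(Fs,1); r2 = randn(Fs,1); % placeholders for the four recorded voices
r3 = randn(Fs,1); r4 = randn(Fs,1);

R = [r1 r2 r3 r4];                  % one column per member
for k = 1:4
    mix = sum(R, 2) - R(:, k);      % everyone except member k
    sound(mix, Fs);                 % what member k hears
    pause(length(mix)/Fs + 1);      % wait for playback to finish
end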

Research Work to be done individually

The work that you have done in this lab is covered under the following topics:

• Singular Value Decomposition


• Blind Source Separation
After you have done this lab, you are to research these two topics and write a comprehensive
report on what you have understood. It would be great if you can find some MATLAB
implementations to demonstrate your understanding of these two algorithms.

Lab # 04: Linear Predictive Coding

Objective:
In this lab you will look at how Linear Predictive Coding works and how it can be used to compress
speech audio.

Explanation:
Linear predictive coding (LPC) is a method used mostly in audio signal processing and speech
processing for representing the spectral envelope of a digital speech signal in compressed form,
using the information of a linear predictive model.
LPC is the most widely used method in speech coding and speech synthesis. It is a powerful
speech analysis technique, and a useful method for encoding good quality speech at a low bit rate.
LPC starts with the assumption that a speech signal is produced by a buzzer at the end of a
tube (for voiced sounds), with occasional added hissing and popping sounds (for voiceless sounds
such as sibilants and plosives). Although apparently crude, this Source–filter model is actually a
close approximation of the reality of speech production. The glottis (the space between the vocal
folds) produces the buzz, which is characterized by its intensity (loudness) and frequency (pitch).
The vocal tract (the throat and mouth) forms the tube, which is characterized by its resonances;
these resonances give rise to formants, or enhanced frequency bands in the sound produced. Hisses
and pops are generated by the action of the tongue, lips and throat during sibilants and plosives.
LPC analyzes the speech signal by estimating the formants, removing their effects from
the speech signal, and estimating the intensity and frequency of the remaining buzz. The process
of removing the formants is called inverse filtering, and the remaining signal after the subtraction
of the filtered modeled signal is called the residue.
The numbers which describe the intensity and frequency of the buzz, the formants, and the residue
signal, can be stored or transmitted somewhere else. LPC synthesizes the speech signal by
reversing the process: use the buzz parameters and the residue to create a source signal, use the
formants to create a filter (which represents the tube), and run the source through the filter,
resulting in speech.
Because speech signals vary with time, this process is done on short chunks of the speech
signal, which are called frames; generally, 30 to 50 frames per second give an intelligible speech
with good compression.

LAB TASK:
When plain speech audio is recorded and needs to be transmitted over a channel with limited
bandwidth it is often necessary to either compress or encode the audio data to meet the bandwidth
specs.

There are two major steps involved in this lab. In real life the first step would represent the side
where the audio is recorded and encoded for transmission. The second step would represent the
receiving side where that data is used to regenerate the audio through synthesis. In this lab you
will be doing both the encoding/transmitting and receiving/synthesis parts. When you are done,
you should have at least 4 separate MATLAB files: the main file, the Levinson function, the LPC
coding function, and the synthesis function.
Below is an overview list of steps for this lab.

1. Read the *.wav file into MATLAB and pass the data to the LPC coding function
2. The LPC coding function will do the following:
a. Divide the data into 30mSec frames
b. For every frame, find the data necessary to reproduce the audio
3. Pass the data from the LPC coding function to the Synthesis function
4. The Synthesis function will do the following:
a. Regenerate each frame from the given data
b. Reconnect all the frames
c. The Synthesis function will return the reconstructed audio
5. Play the original audio
6. Play the synthesized audio

Please look at all the given code and instructions carefully before starting this lab.

The LPC coding function


As mentioned above, the LPC coding function will take the speech audio signal and divide
it into 30mSec frames. These frames start every 20mSec, so each frame overlaps with the
previous and next frame, as shown in the figure below:

Figure 1: Audio signal to separate frames

After the frames have been separated, the LPC function will take every frame and extract
the necessary information from it. This is the voiced/unvoiced, gain, pitch, and filter coefficients
information.

To determine whether a frame is voiced or unvoiced, you need to find out if the frame has a
dominant frequency. If it does, the frame is voiced; if there is no dominant frequency, the frame is
unvoiced. The pitch of an unvoiced frame is simply 0. The pitch of a voiced frame is in fact the
dominant frequency in that frame. One way of finding the pitch is to correlate the frame with itself
(autocorrelation). This strengthens the dominant frequency component and cancels out most of the
weaker ones. If the two largest peak magnitudes are within a factor of 100 of each other, it means
that there is some repetition, and the distance between these two peaks gives the pitch period.
The gain and the filter coefficients can be found using Levinson’s method. (Look at the
MATLAB function levinson and use it in the body of your Levinson function)
After finding these variables for all the frames the function will pass them back to the main
file as seen below: (sample code format)
[Coeff, pitch, G] = lpc(x, Fs, Order);
• Coeff is a matrix of size (number of coefficients x number of frames) containing the filter
coefficients to all the frames
• Pitch is a vector of size (number of frames) containing the pitch information to all the
frames
• G is a vector of size (number of frames) containing the gain information to all the frames
• x is the input data
• Fs is the sample frequency of the input data
• Order is the order of the approximation filter

The Synthesis Function


The synthesis part is fairly easy compared to the coding function. First, for each frame you
need to create an initial signal to run through the filter. This initial signal is also of length 30mSec.
Using the information from the variable passed into the synthesis function you will be able to
synthesize each frame. After you have synthesized the frames you can put them together to form
the synthesized speech signal.
The initial 30mSec signal is created based on the pitch information. Remember that if the
pitch is zero, the frame is unvoiced. This means the 30mSec signal needs to be composed of
white noise (look at the MATLAB function randn). If the pitch is not zero, you need to create a
30mSec signal with pulses at the pitch frequency. (look at the MATLAB function pulstran)
Now that you have the initial signals all you have left to do is to filter them using the gain
and filter coefficients and then connect them together. (look at the MATLAB function filter)
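
A minimal sketch of synthesizing a single frame under these rules is shown below; all of the values (pitch, gain, filter coefficients, sampling rate) are illustrative and would really come from your LPC coding function:

Fs    = 8000;                  % assumed sampling rate
N     = round(0.030 * Fs);     % samples in a 30mSec frame
pitch = 100;                   % example pitch in Hz (0 would mean unvoiced)
G     = 1;                     % example gain for this frame
a     = [1 -0.9];              % example filter coefficients from levinson()

if pitch == 0
    src = randn(1, N);         % unvoiced frame: white-noise excitation
else
    T   = round(Fs / pitch);   % pitch period in samples
    src = zeros(1, N);
    src(1:T:end) = 1;          % pulse train at the pitch frequency
end

frame = filter(G, a, src);     % run the excitation through the LPC filter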
Putting the frames back together is also done in a special way. This is where the reason for
having the frames overlap becomes clear. Because each frame has its own pitch, gain and filter, if
we simply put them next to each other after synthesizing them all, it would sound very choppy. By
making them overlap, you can smooth the transition from one frame to the next. The figure below
shows how the frames are connected. The amplitude of the tip and tail of each frame’s data is
scaled and then simply added.

Figure 2: Addition of separate frames into final audio signal

Interpreting your results


Compare the input speech audio and the synthesized speech audio by playing them back-to-back
and listening carefully. Also include a copy of all the MATLAB code written and a
copy of all the plots created in the lab report.

Lab # 05: Linear Convolution and Moving Average Filter

Objective:
To study linear convolution with and without using the built-in function.
Description:
The MATLAB Signal Processing Toolbox is required.
Linear Convolution:
Convolution is an operation between the input signal to a system and its impulse response,
resulting in the output signal.
In discrete time, convolution of two signals involves summing the products of the two signals,
where one of the signals is "flipped and shifted". It does not matter which signal is flipped and
shifted, but care must be taken to get the limits of the sum correct. The convolution sum is given by:

y[n] = ∑_{k=−∞}^{∞} x[k] h[n−k]

Let’s try to understand the concept of convolution sum through an example.


Suppose we have a Linear Time Invariant (LTI) System having an impulse response h[n] and an
input signal x[n] is applied to the system. We will have a look at each step involved in the
convolution of these signals.
The input signal x[n] is given by:

Impulse response h[n] is given by:

Step I:-
Graph x[n] and h[n] as functions of k.

Step II:-
Determine h[n-k] with the smallest n that prevents any overlap between x[k] and h[n-k].
First we reflect h[k] about k=0 to obtain h[-k].

Note that x-axis has been extended on the left to include a few additional points. These will be
useful later.
Second, we shift h[−k] by n to obtain h[n−k]; for a negative n this is equivalent to shifting h[−k]
towards the left by |n|. We begin with a large negative value of n such that there is no overlap between
the non-zero values of the two signals.

Step III A:-


Increase n until x[k] and h[n−k] overlap. We choose the smallest value of n such that x[k] and
h[n−k] start overlapping. In this case the overlap starts at n = −5. The output y[n] is zero for n < −5.

Step III B:-
Calculate output y[n] from the overlapping region. We multiply the overlapping values of x[k] and
h[n-k] and add the results. In this case there is only one overlap occurring at k=-2. The product is
therefore 8(1)=8.
The output y[n] is 8 for n = −5 and is shown in the figure below.

Increment n by 1. Repeat Step 3B until h[n-k] slides past x[k].


Step IV:-
In step 4 we increment n by 1. This is equivalent to shifting h[n-k] toward the right-hand side by
1.

Since h[n−k] still overlaps with x[k], we repeat Step III B.


We multiply the overlapping values of x[k] and h[n-k] and add the results. In this case there are 2
overlaps, occurring at k = −2 and k = −1. The sum of products is therefore 4(1) + 8(1) = 12.
The output y[n] is 12 for n=-4 and is shown in the figure below.

Shift h[n-k] by 1.

The sum of products is 2(1) + 4(1) + 8(1) = 14. The output y[n] is 14 for n = −3 and is shown in the figure
below.

And so on…
Once h[n−k] no longer overlaps with x[k], the convolution sum is 0; this happens for n > 3.
Final Result

Moving Average:
The moving average is a simple operation usually used to suppress noise in a signal. As the name
implies, the moving average filter operates by averaging a number of points from the input signal
to produce each point in the output signal. In equation form, this is written:

y[i] = (1/M) ∑_{j=0}^{M−1} x[i + j]

Where x[ ] is the input signal, y[ ] is the output signal, and M is the number of points in the average.
For example, in a 5-point moving average filter, point 80 in the output signal is given by:

y[80] = ( x[80] + x[81] + x[82] + x[83] + x[84] ) / 5

As an alternative, the group of points from the input signal can be chosen symmetrically around
the output point:

y[80] = ( x[78] + x[79] + x[80] + x[81] + x[82] ) / 5

Moving average by convolution


As you may have recognized, calculating the simple moving average is similar to convolution:
in both cases a window is slid along the signal and the elements in the window are summed.
So let us try to do the same thing by using convolution. Use the following parameters: the input
signal x = [1 7 1 4 4 7 1] and a 3-point window.

The desired output is the 3-point moving average

y = 2.6667 3.0000 4.0000 3.0000 5.0000 4.0000 2.6667

As a first approach, let us see what we get by convolving the signal x with the kernel k = [1 1 1]:

>> x = [1 7 1 4 4 7 1];
>> k = [1 1 1];
>> y = conv(x, k, 'same')
y=
8 9 12 9 15 12 8

The output is exactly three times larger than expected. It can also be seen that the output values
are the sums of the three elements in the window. This is because during convolution the window
is slid along, and all of the elements in it are multiplied by one and then summed:

To get the desired values of y, the output shall be divided by 3:


>> x = [1 7 1 4 4 7 1];
>> k = [1 1 1];
>> y = conv(x, k, 'same');
>> y = y / 3
y=
2.6667 3.0000 4.0000 3.0000 5.0000 4.0000 2.6667

Expressed as a formula including the division: y = conv(x, k) / 3.

But would it not be better to do the division during the convolution itself? The idea comes from
rearranging the equation: the factor 1/3 can be folded into the kernel. So we shall use the following kernel:

k = [1/3 1/3 1/3]

In this way we will get the desired output.
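
A quick check of this, using the same x as above:

>> x = [1 7 1 4 4 7 1];
>> k = ones(1, 3) / 3;       % the division is built into the kernel
>> y = conv(x, k, 'same')
y =
2.6667 3.0000 4.0000 3.0000 5.0000 4.0000 2.6667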

LAB TASK # 01:


a) Write a MATLAB function which takes two signals x[n] and h[n] as parameters and performs
the convolution sum of the two signals.
b) Use the MATLAB built-in convolution function to perform the convolution sum of the same two
signals and compare the result to the result of part (a). The results should be the same.
The MATLAB function for convolution is conv.
LAB TASK # 02:
a) Write MATLAB code to apply a moving average filter to a noisy signal using the moving
average equation.

b) Apply the moving average to the same signal using convolution.

Lab # 06: Fourier Transform

Objective:
The objective of this session is to perform Discrete Time Fourier Transform (DTFT) in
MATLAB.

Description:
Each representation has some advantages and some disadvantages depending upon the
type of system under consideration. However, when the system is linear and time-
invariant, only one representation stands out as the most useful. It is based on the
complex exponential signal set e^{jωn} and is called the discrete-time Fourier transform.
If x[n] is absolutely summable, that is

∑_{n=−∞}^{∞} |x[n]| < ∞,

then its discrete-time Fourier transform is given by

X(e^{jω}) = ∑_{n=−∞}^{∞} x[n] e^{−jωn}

LAB TASK # 01:


1. Generate a cosine signal such that its first value is 0, with
frequency 5 Hz, amplitude 5, and Fs = 5000.
a. Compute the Fourier transform of this signal using the MATLAB command
fft() and plot the signal in the frequency domain.
b. Add Gaussian noise to the input signal using the command Y = awgn(x,10,'measured'),
where x is the input signal. Plot the resultant signal in the time domain.
2. Pass the DTFT of the input signal through an LTI system (lowpass band) and analyze the results.
Plot the resultant signal in the time domain.

Note: Plot all the signals and label them properly.
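
One possible starting point for Task 1 is sketched below (the −π/2 phase makes the first sample zero; awgn requires the Communications Toolbox):

Fs = 5000;                            % sampling frequency
f  = 5;                               % cosine frequency in Hz
n  = 0:Fs-1;                          % one second of samples
x  = 5 * cos(2*pi*f*n/Fs - pi/2);     % amplitude 5, first value 0

X  = fft(x);                          % Fourier transform of the signal
fr = (0:length(X)-1) * Fs / length(X);
subplot(2,1,1); plot(fr, abs(X)); title('Magnitude spectrum'); xlabel('Frequency (Hz)');

y  = awgn(x, 10, 'measured');         % add Gaussian noise (Communications Toolbox)
subplot(2,1,2); plot(n/Fs, y); title('Noisy signal'); xlabel('Time (s)');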

Expected Outputs:

LAB TASK # 02:

1. Consider an LTI system with an even unit sample response.

a. Plot the Frequency response of this filter


b. Plot the phase response of this filter
c. Plot the magnitude response of this filter

2. Now, make this system causal by shifting it to the right.

a. Plot the Frequency response of this filter


b. Plot the phase response of this filter
c. Plot the magnitude response of this filter

LAB TASK # 03:


You have to design an ideal lowpass filter using a fixed-length approximation of its (infinitely long)
impulse response, for different values of M. A sample output is given below. Find h[n] for each value of M,
make it causal, and plot the frequency, phase, and magnitude response of the system.

Lab # 07: Fourier Transform Application

Objective:
The objective of this session is to design filters using the FDA tool and to use them to build an
equalizer with a MATLAB GUI.

Description:
Conversion from a time-domain signal to a frequency domain signal or vice versa is usually done
with Fourier transform and it has various uses. For example, different devices like transistors,
detectors, amplifiers, and human ears respond differently to different frequencies. This means
some frequencies can be detected or retransmitted whereas the others can be blocked. For example,
if you have a signal in the time domain, the transformation to frequency domain is required to
determine the changes after passing through devices. Later, you can convert a frequency domain
signal to a time-domain signal to see the changes. These conversions are also used in various audio
compression algorithms, spectroscopy, and laser optics and electronics.

Say you are communicating with a friend who is far away and you have to transmit data, i.e. send two
signals x1 and x2. You have two ways of transmitting:

1. transmitting one signal after the other

2. transmitting both signals simultaneously

The first option is less efficient: it takes more time, since the two signals have to be transmitted
one after another.

The second option is therefore more efficient. However, if you transmit both signals at the same
time, they superimpose on the way to the receiver and cannot be separated, since they are mixed.
So what we do is multiply the first signal by cos(w1*t) and the second by cos(w2*t). Now, even though
the transmitted signals still superimpose and reach the receiver together, they occupy different
frequencies, because multiplying by the cosines shifts the frequency spectrum of each original signal.
By using filters we can therefore separate them in the frequency domain, and each can then be converted
back into a time-domain signal using the inverse FFT algorithm.

Each representation has some advantages and some disadvantages depending upon the type of
system under consideration. However, when the system is linear and time-invariant, only one
representation stands out as the most useful. It is based on the complex exponential signal set e^{jωn}
and is called the discrete-time Fourier transform.

If x[n] is absolutely summable, that is

∑_{n=−∞}^{∞} |x[n]| < ∞,

then its discrete-time Fourier transform is given by

X(e^{jω}) = ∑_{n=−∞}^{∞} x[n] e^{−jωn}

FDA TOOL:
The Filter Design and Analysis Tool (FDA Tool) is a powerful user interface for designing and
analyzing filters. FDA Tool enables you to quickly design digital FIR or IIR filters by setting
filter performance specifications, by importing filters from your MATLAB workspace, by
directly specifying filter coefficients, or by adding, moving, or deleting poles and zeros. FDA
Tool also provides tools for analyzing filters, such as magnitude and phase response plots and
pole-zero plots.

To open the Filter Design and Analysis Tool (FDA Tool), type
>> fdatool
The Filter Design and Analysis Tool opens with the Design Filter panel displayed.

Figure 2: FDA TOOLBOX

In the taskbar you can view the different filter response characteristics in the display region or in
a separate window:

Figure 3: Taskbar

Poles:
• Poles enhance the magnitude response of the system.
• Moving a pole nearer to the unit circle increases the effect of the enhancement (the amplitude
of the peak); moving it towards the origin decreases the effect.
• Moving a pole along the arc of the unit circle moves the peak along the frequency axis
(use normalized frequencies).

Zeros:
• Zeros suppress the magnitude response of the system.

• Moving a zero nearer to the unit circle increases the effect of the suppression; moving it
towards the origin decreases the effect.
• Moving a zero along the arc of the unit circle moves the suppression (notch)
along the frequency axis (use normalized frequencies).

LAB TASK:
Design an equalizer in order to equalize an audio signal. Use fdatool (the Filter Designer app) to design the
filters behind the sliders and then use the resulting coefficients in the filter() built-in function. Take the FFT
of the audio signal, apply the filters in the frequency domain, then convert the audio signal back into the
time domain and play the resultant audio. Design a GUI in MATLAB to control the gain of each band of
frequencies, as shown in the figure.

Figure 4: Sample GUI

You should be able to play the combined audio after applying different gains to each band of
frequencies.
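
A minimal sketch of the filter-and-sum idea behind the equalizer is shown below; fir1 (Signal Processing Toolbox) stands in for the coefficients exported from fdatool, and the file name, band edges, and gains are illustrative:

[x, Fs] = audioread('speech.wav');                 % hypothetical input file
x = x(:, 1);                                       % use one channel

bLow  = fir1(100, 500/(Fs/2));                     % lowpass up to 500 Hz
bMid  = fir1(100, [500 2000]/(Fs/2), 'bandpass');  % 500 Hz - 2 kHz band
bHigh = fir1(100, 2000/(Fs/2), 'high');            % above 2 kHz

gLow = 1.0; gMid = 0.5; gHigh = 2.0;               % example slider gains

y = gLow  * filter(bLow,  1, x) + ...
    gMid  * filter(bMid,  1, x) + ...
    gHigh * filter(bHigh, 1, x);                   % combine the equalized bands

sound(y, Fs);                                      % play the equalized audio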

Lab # 08: Z-Transform

Objectives:
The objective of this lab is to perform the Z-transform in MATLAB and to understand filter
design using the FDA tool.

Description:
The Z-transform X(z) of any signal x[n] is defined as

X(z) = ∑_{n=−∞}^{∞} x[n] z^{−n}   (two-sided)

or

X(z) = ∑_{n=0}^{∞} x[n] z^{−n}   (one-sided)

The inverse Z-transform is denoted by

x[n] = Z^{−1}{X(z)}

In particular, for any LTI system, the system response (output response) y[n] can be determined by using
the convolution sum, y[n] = x[n] ∗ h[n], where h[n] is the impulse response of the system.

By applying the convolution property in z transform on both sides, we have

𝑌(𝑧) = 𝑋(𝑧)𝐻(𝑧)

𝐻(𝑧) = 𝑌(𝑧) / 𝑋(𝑧)

where 𝑌(𝑧) and 𝐻(𝑧) are the z transform of 𝑦[𝑛] and ℎ[𝑛] respectively. 𝐻(𝑧) is also called the system
function or transfer function.

Z-transform in MATLAB:
Let us compute the z-transform of a symbolic expression.

Example:

x(n) = (1/4)^n u(n)

and its Z-transform is

X(z) = z / (z − 1/4)

MATLAB Code:
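
A minimal sketch of this computation for the example above (requires the Symbolic Math Toolbox):

syms n z
x = (1/4)^n;           % u(n) is implicit: ztrans assumes a causal sequence
X = ztrans(x, n, z)    % returns z/(z - 1/4)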

Vector representation of transfer function model:

H(z) = (4z^2 − 5z) / (8z^2 + 3z + 1)

or

H(z) = (4 − 5z^−1) / (8 + 3z^−1 + z^−2)

Vector representation of zero-pole-gain model:

H(z) = 4z^−1 (1 − 0.5z^−1)(1 − 2z^−1) / [(1 + 0.3z^−1)(1 − 0.4z^−1)(1 + 0.6z^−1)]

or

H(z) = 4(z − 0.5)(z − 2) / [(z + 0.3)(z − 0.4)(z + 0.6)]

Creating LTI Models:
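
A minimal sketch of creating a discrete-time LTI model for the first transfer function above (requires the Control System Toolbox; a sample time of −1 means "unspecified"):

num = [4 -5 0];          % 4z^2 - 5z        (descending powers of z)
den = [8  3 1];          % 8z^2 + 3z + 1
H   = tf(num, den, -1)   % discrete-time transfer-function object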

Switching Model Representation:


You can convert models from one representation to another using (tf2zp, zp2tf) command. For example:
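
A sketch of the conversion for the same coefficient vectors:

num = [4 -5 0];               % 4z^2 - 5z
den = [8  3 1];               % 8z^2 + 3z + 1
[z, p, k] = tf2zp(num, den)   % transfer function  ->  zeros, poles, gain
[b, a]    = zp2tf(z, p, k)    % zeros, poles, gain ->  transfer function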

Partial-fraction expansion:
MATLAB residuez command provides two features:
• Finds the residues, poles and direct term of a partial fraction expansion of the ratio of two
polynomials.
• Converts the partial fraction expansion back to the polynomials with coefficients.
Example:

H(z) = 4(z − 0.5)(z − 2) / [(z + 0.3)(z − 0.4)(z + 0.6)]

This gives the residues, poles, and direct terms of the expansion.

The polynomial coefficients can be recovered by calling the same command on the residues, poles, and direct terms.
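
A sketch of both features for the H(z) above, using its expanded coefficient vectors in powers of z^−1 (the expansion is worked out by hand here and should be double-checked):

b = [0 4 -10 4];               % 4z^-1 (1 - 0.5z^-1)(1 - 2z^-1)
a = [1 0.5 -0.18 -0.072];      % (1 + 0.3z^-1)(1 - 0.4z^-1)(1 + 0.6z^-1)
[r, p, c] = residuez(b, a)     % residues, poles, and direct terms
[b2, a2]  = residuez(r, p, c)  % convert the expansion back to polynomials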

roots, poles, Z-plane:
In this part, we will compute the poles and zeros of the system and mark them on the z-plane.

Compute the zeros, poles, and gain from the transfer-function model with the following commands:

roots, tf2zp, zero, pole

Example:

A plot of the poles and zeros on the z-plane can be obtained by using the MATLAB pzmap command
on the object form of the system.
FDA TOOL:
The Filter Design and Analysis Tool (FDATool) is a powerful user interface for designing and
analyzing filters. FDATool enables you to quickly design digital FIR or IIR filters by setting filter
performance specifications, by importing filters from your MATLAB workspace, by directly
specifying filter coefficients, or by adding, moving or deleting poles and zeros. FDATool also
provides tools for analyzing filters, such as magnitude and phase response plots and pole-zero
plots.
To open the Filter Design and Analysis Tool (FDATool), type
>> fdatool
The Filter Design and Analysis Tool opens with the Design Filter panel displayed.

Figure 5: FDA TOOLBOX

In the taskbar you can view the different filter response characteristics in the display region or in
a separate window:

Figure 6: Taskbar

LAB TASKS:
TASK#01:

1. Compute the inverse Z-transform of the following Z-transform functions.


X(z) = ( (3/7)z^−1 + 2z^−2 ) / (1 + 5z^−2)

2. Compute the Z-transform of the following function. Plot its poles and zeros and then
determine its ROC.

x[n] = (−1/7)^n u[n] − (3/17)^n u[n]

TASK#02:
Determine the partial fraction expansion of the z-transform H(z) given by:
H(z) = (25z^3 − 7z + 2) / (50z^3 − 8z^2 − 39z − 11)

Draw the pole-zero plot, state the ROC, and comment on the causality and stability of the system.

TASK#03:
Design a lowpass and a highpass filter using the FDA Tool. Analyze all of the following characteristics
of each filter using the tool:
• Magnitude and Phase responses
• Impulse response
• Step response
• Pole-zero plot

Apply both filters to an audio signal, then analyze and plot the resultant signals.

Lab # 09: Sampling of Audio Signals and Aliasing

Objectives:
The objective of this lab is to perform sampling on audio signals while taking care of aliasing.

Description:
In signal processing, sampling is the reduction of a continuous signal to a discrete signal. A
common example is the conversion of a sound wave (a continuous signal) to a sequence of samples
(a discrete-time signal).
A sample is a value or a set of values at a point in time and/or space, while a sampler is a subsystem
or operation that extracts samples from a continuous signal.
A theoretical ideal sampler produces samples equivalent to the instantaneous value of the
continuous signal at the desired points.
The Nyquist sampling theorem provides a prescription for the nominal sampling interval required
to avoid aliasing. It may be stated simply as follows:
The sampling frequency should be at least twice the highest frequency contained in the signal.

Fs ≥ 2Fc

where Fs is the sampling frequency (how often samples are taken per unit of time or space) and Fc
is the highest frequency contained in the signal. That this is so is really quite intuitive. Consider,
for example, a signal composed of a single sine wave at a frequency of 1 Hz:

If we sample this waveform at 2 Hz (as dictated by the Nyquist theorem), that is sufficient to
capture each peak and trough of the signal:

If we sample at a frequency higher than this, for example 3 Hz, then there are more than enough
samples to capture the variations in the signal:

If, however we sample at a frequency lower than 2 Hz, for example at 1.5 Hz, then there are now
not enough samples to capture all the peaks and troughs in the signal:

Note here that we are not only losing information, but we are getting the wrong information about
the signal. The person receiving these samples, without any previous knowledge of the original
signal, may well be misled into thinking that the signal has quite a different form:

From this example, we can see the reason for the term aliasing. That is, the signal now takes on a
different "persona," or a false presentation, due to being sampled at an insufficiently high
frequency. Now we are ready to think about the sampling of a complex signal composed of many
frequency components. By Fourier's theorem, we know that any continuous signal may be
decomposed in terms of a sum of sines and cosines at different frequencies.
For example, the following waveform was composed by adding together sine waves at frequencies
of 1 Hz, 2 Hz, and 3 Hz:

According to the Nyquist sampling theorem, the signal must be sampled at twice the highest
frequency contained in the signal. In this case, we have fc=3 Hz, and so the Nyquist theorem tells
us that the sampling frequency, fs, must be at least 6 Hz. And sure enough, this appears to be
sufficient:
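
A minimal sketch of this example (the 1 Hz + 2 Hz + 3 Hz signal sampled at fs = 6 Hz):

t  = 0:0.001:2;                                        % "continuous" time axis
x  = sin(2*pi*1*t) + sin(2*pi*2*t) + sin(2*pi*3*t);    % fc = 3 Hz
fs = 6;                                                % Nyquist rate: 2*fc
ts = 0:1/fs:2;
xs = sin(2*pi*1*ts) + sin(2*pi*2*ts) + sin(2*pi*3*ts); % sampled signal
plot(t, x); hold on; stem(ts, xs); hold off;
legend('original', 'samples at 6 Hz'); xlabel('Time (s)');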

LAB TASK:
1. Prove Nyquist's sampling theorem by sampling the following waves
a) Y = Cos(2*pi*f*t)
Where f=10Hz
b) Y = Sin(2*pi*f1*t) + Cos(2*pi*f2*t)
Where f1 = 50Hz and f2 = 200Hz
at three possible sampling rates i.e.
(1) Fs =2Fc (2) Fs <2Fc (3) Fs >2Fc
2. Sample an audio signal with Fs = 2Fc, Fs < 2Fc, and Fs > 2Fc. Observe the effect in each case and
plot the signals.

Lab # 10: Interpolation & Decimation

Objectives:
The objective of this lab is to learn upsampling and downsampling of a signal while taking care
of aliasing.

Description:
Interpolation and decimation are the terms used for the operations that increase and decrease the
sampling rate of an audio signal, respectively. Alternative common terms for these two techniques
are upsampling and downsampling, respectively.

Downsampling:
To downsample a signal, you can use MATLAB function downsample. This function
y = downsample(x,n)
decreases the sample rate of x by keeping the first sample and then every nth sample after the first.
If x is a matrix, the function treats each column as a separate sequence.
You can also use the decimate function, which automatically applies an anti-aliasing filter before
downsampling:
y = decimate(x,r) reduces the sample rate of x by a factor of r.

Upsampling:
To upsample a signal, you can use the MATLAB function upsample. This function
y = upsample(x,n)
increases the sample rate of x by inserting n−1 zeros between samples.
For upsampling you can also use the function interp.
Interpolation increases the original sampling rate for a sequence to a higher rate.
y = interp(x,r) increases the sampling rate of x by a factor of r.
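
A minimal sketch of the four functions side by side (the input file name is a placeholder; decimate and interp require the Signal Processing Toolbox):

[x, Fs] = audioread('speech.wav');   % hypothetical recording
x = x(:, 1);                         % use one channel

yDown = downsample(x, 2);            % keep every 2nd sample (no filtering)
yDec  = decimate(x, 2);              % anti-aliasing filter, then downsample
yUp   = upsample(x, 2);              % insert zeros between samples
yInt  = interp(x, 2);                % lowpass-interpolated upsampling

sound(yDec, Fs/2);                   % play the decimated version
sound(yInt, Fs*2);                   % play the interpolated version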

LAB TASKS
1. Record an audio signal at 8 kHz saying:

“Welcome to Digital Signal Processing Course.”


and plot the signal in the time and frequency domains with all axes correctly labeled. Note down the
frequency band occupied by your recorded voice. You can play the audio by using the
MATLAB command sound. Note: if you don't have a working microphone, you can always use a
previously recorded audio file.

2. Now decimate the audio signal using downsample() and plot the resultant signal
in both the time and frequency domains. Listen to and observe the change in the audio. Note
that decimation in the time domain results in the spectrum spreading in the frequency
domain by the same factor.

Keep decimating and plotting the signals in the time domain until aliasing starts to occur and you
can hear the change in the voice when playing it.
3. Upsample the audio signal repeatedly by a factor of 2 and plot the resultant signal in
both the time and frequency domains using the upsample() function. Note the changes happening
to the signal in the time domain and to the corresponding spectrum whenever we upsample
by inserting zeros in the time domain. Listen to and observe the change in the audio.
Follow the same steps and also plot the input signal in the time and frequency domains.

4. Resample the audio signal by a factor of 3/5 by using the functions
downsample() and upsample(). Repeat the task for a factor of 5/3.

Analyze the relation between the original audio signal and its decimated and interpolated versions,
based on the magnitude spectrum and by listening to the audio after each step.
Functions to explore:

• decimate()

• interp()

• resample()

Lab # 11: Installation and Introduction to Code Composer Studio (v7.4)

Objective:

The aim of this lab is to get basic knowledge and an understanding of Code Composer Studio 7.4.

Introduction to Code Composer Studio:

Code Composer is the DSP industry's first fully integrated development environment (IDE)
with DSP-specific functionality. With a familiar environment like MS-based C++™, Code
Composer lets you edit, build, debug, profile, and manage projects from a single unified
environment. Other unique features include graphical signal analysis, injection/extraction of
data signals via file I/O, multi-processor debugging, automated testing, customization via a
C-interpretive scripting language, and much more.

Code Composer Features Include:

• IDE
• Debug IDE
• Advanced watch windows
• Integrated editor
• File I/O, Probe Points, and graphical algorithm scope probes
• Advanced graphical signal analysis
• Interactive profiling
• Automated testing and customization via scripting
• Visual project management system
• Compile in the background while editing and debugging
• Multi-processor debugging
• Help on the target DSP

Useful Types of Files:

You will be working with a number of files with different extensions. They include:
• file.pjt: to create and build a project named file.
• file.c: C source program.
• file.asm: assembly source program created by the user, by the C compiler, or
by the linear optimizer.
• file.sa: linear assembly source program. The linear optimizer uses file.sa as input
to produce an assembly program file.asm.
• file.h: header support file.

• file.lib: library file, such as the run-time support library file rts6701.lib.
• file.cmd: linker command file that maps sections to memory.
• file.obj: object file created by the assembler.
• file.out: executable file created by the linker to be loaded and run on the processor.

Installation of Code Composer Studio:

To download CCS v7.4 (latest version of v7x) for your preferred OS, visit the below mentioned
link.
https://www.ti.com/tool/download/CCSTUDIO/7.4.0.00015

Once CCS v7.4 setup is downloaded, install the setup, and follow the steps as shown in the link
mentioned below.
https://software-dl.ti.com/ccs/esd/documents/users_guide_legacy/ccs_installation.html

For CCS system requirements, refer to the following link

https://software-dl.ti.com/ccs/esd/documents/users_guide_legacy/ccs_overview.html#System-
Requirements

During installation, select 'C6000 Power-Optimised DSP' in the processor-support step, click 'Next',
check all the options/boxes in the next window of the same format (shown in the installation-steps
link), and then complete the rest of the steps to finish the installation.

Once CCS is installed, go to the CCS App Center and install the C6000 compiler (v7.4.24) shown
in the CCS Add-ons window (please refer to the image below).

Alternate Method (for installation):
A complete installation video guide can also be viewed by clicking this link.

Creating First Project in CCS v7.4:

A step-by-step procedure for creating the first project in CCS 7.4 is described below:
1. Run CCS v7.4 as administrator.
2. Create a new CCS project by clicking 'CCS Project' from 'File → New → CCS Project'.

3. Type ‘6713’ in the target field and select ‘DSK6713’ from the drop-down menu on the right.

4. Select ‘Spectrum digital DSK-EVM-eZdsp onboard USB emulator’ from the connections
drop-down menu.

5. Type project name in the ‘Project name’ field, e.g. Test Project.

6. Make sure that the compiler version ‘TI v7.4.4’ is selected.

7. Click on the 'Hello World' project template, and click Finish. This will create a new project as
shown in the following screenshots.

8. Connect the DSK6713 kit via a USB connection, right-click on the project name, and click
'Rebuild All'. At this stage, CCS will show any errors and/or warnings in your code. You
need to correct at least the errors at this stage to continue running your code.

9. Now click the 'Debug Test Project' icon, as shown in the figure below. The USB connection of the
kit is verified at this stage. If you get an error about the connection, give it a retry or
reconnect your device.

10. Once the debug process is finished, press F8 or click the 'Resume' button from the menu. You
will see the output in the console at the bottom of the CCS screen.

LAB TASK:
• You are required to upload step-by-step screenshots of CCS up to step 8 (at least), which does
not require any connection with the DSK6713 processor (DSP kit), as shown in this manual.

Lab # 12: Switching LEDs and Working of Codec on DSK6713

Objective:

The aim of this lab is to perform LED switching and to understand the working of the CODEC.

Setting up CCS 7.4 for DSK6713:

Before starting the lab tasks, it is mandatory to follow the instructions below in order to set up
CCS 7.4 for the DSK6713 development kit.

1. Download DSK6713 source files from this link, extract and navigate to the ‘include’ folder
inside ‘c6000’ folder. Copy all the header files from this ‘include’ folder and paste in the
following directory: C:\ti\ccsv7\tools\compiler\ti-cgt-c6000_7.4.4\include
2. From the same source files (downloaded in step 1), now navigate to ‘lib’ folder inside ‘c6000’
folder. Copy the library file named ‘dsk6713bsl.lib’ and paste in the following directory:
C:\ti\ccsv7\tools\compiler\ti-cgt-c6000_7.4.4\lib
3. Download Chip Support Library (CSL) installation setup from here, and install on the same
drive (i.e. C drive) where CCS has already been installed. It is recommended to install it at:
C:\C6xCSL directory.
4. Make a new CCS project as you did in Lab 1. Right click on the project name in left-pane and
click on ‘properties’. Go to ‘general’ from left-pane and set ‘Linker command file’ to
‘C6713.cmd’, and ‘Runtime support library’ to ‘dsk6713bsl.lib’. Click OK.
5. Add CSL headers to the project: Project Properties → CCS Build → C6000 compiler → Include
options. Click Add and select the path "C:\C6xCSL\include".
6. Add "CHIP_6713" to Predefined Symbols: Project Properties → CCS Build → C6000 compiler
→ Predefined Symbols.
7. Add CSL to project: Project Properties → CCS Build → C6000 linker → File search path →
Include library file or ... Click Add and select path "C:\C6xCSL\lib_3x\csl6713.lib"

Procedure:

In order to control LED switching and Audio playback, run the following script in your created
project.

The initial values are used for the configuration of the CODEC. After running this code, LED
#1 will be switched off. Attach earphones to both the audio IN and audio OUT ports. The
CODEC will play back the audio on the output port.

LAB TASKS:

1. Observe the audio playback and comment on the result.

2. Using the built-in function 'DSK6713_waitusec(delay in microseconds, as a max 32-bit
decimal value)', modify the LED part of the code so that the LED, which was initially
ON, is turned off after exactly 2 seconds.

Lab # 13: Convolution Sum Implementation on DSK6713

Objective:

The aim of this lab is to perform the convolution sum using C++ and implement it on the DSK6713.

Setting up CCS 7.4 for DSK6713:

Before starting the lab task, it is mandatory to follow the instructions below in order to set up
CCS 7.4 for the DSK6713 development kit.

1. Download DSK6713 source files from this link, extract and navigate to the ‘include’ folder
inside ‘c6000’ folder. Copy all the header files from this ‘include’ folder and paste in the
following directory: C:\ti\ccsv7\tools\compiler\ti-cgt-c6000_7.4.4\include
2. From the same source files (downloaded in step 1), now navigate to ‘lib’ folder inside ‘c6000’
folder. Copy the library file named ‘dsk6713bsl.lib’ and paste in the following directory:
C:\ti\ccsv7\tools\compiler\ti-cgt-c6000_7.4.4\lib
3. Download Chip Support Library (CSL) installation setup from here, and install on the same
drive (i.e. C drive) where CCS has already been installed. It is recommended to install it at:
C:\C6xCSL directory.
4. Make a new CCS project as you did in Lab 1. Right click on the project name in left-pane and
click on ‘properties’. Go to ‘general’ from left-pane and set ‘Linker command file’ to
‘C6713.cmd’, and ‘Runtime support library’ to ‘dsk6713bsl.lib’. Click OK.
5. Add CSL headers to the project: Project Properties → CCS Build → C6000 compiler → Include
options. Click Add and select the path "C:\C6xCSL\include".
6. Add "CHIP_6713" to Predefined Symbols: Project Properties → CCS Build → C6000 compiler
→ Predefined Symbols.
7. Add CSL to project: Project Properties → CCS Build → C6000 linker → File search path →
Include library file or ... Click Add and select path "C:\C6xCSL\lib_3x\csl6713.lib"

Background:

In signal processing, the convolution sum is used to compute the output of an FIR filter by
performing a flip-drag-sum operation. For an input x[n] and impulse response h[n], the output
y[n] is given by the following discrete-time equation:

y[n] = ∑_{k=−∞}^{∞} x[k] h[n−k]

The flip-and-drag operation can be performed on either of the input sequences. If the length of the input
is N and the length of the impulse response is M, then the length of the output will be N + M − 1.

Procedure:

Using the template code structure provided below, you have to write the code for the
computation of the convolution sum of two sequences and observe the output in the CCS
workspace. Both input sequences are given in the template. You only have to write the code for
the computation of the convolution sum. (Don't change anything else in the template.)

LAB TASK:

• Perform the convolution of the same sequences using the built-in MATLAB function conv(),
and verify the output of your code.
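
A minimal MATLAB-side check might look like the sketch below; the sequences are illustrative and should be replaced with the ones hard-coded in the CCS template:

x = [1 2 3 4];      % illustrative input sequence
h = [1 1 1];        % illustrative impulse response
y = conv(x, h)      % length(y) = length(x) + length(h) - 1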

Lab # 14: Speech Processing using Lowpass Filter on DSK6713

Objective:

The aim of this lab is to remove unwanted noise from speech using a lowpass filter implemented on the DSK6713.

Setting up CCS 7.4 for DSK6713:

Before starting the lab task, it is mandatory to follow the instructions below in order to set up
CCS 7.4 for the DSK6713 development kit.

1. Download DSK6713 source files from this link, extract and navigate to the ‘include’ folder
inside ‘c6000’ folder. Copy all the header files from this ‘include’ folder and paste in the
following directory: C:\ti\ccsv7\tools\compiler\ti-cgt-c6000_7.4.4\include
2. From the same source files (downloaded in step 1), now navigate to ‘lib’ folder inside ‘c6000’
folder. Copy the library file named ‘dsk6713bsl.lib’ and paste in the following directory:
C:\ti\ccsv7\tools\compiler\ti-cgt-c6000_7.4.4\lib
3. Download Chip Support Library (CSL) installation setup from here, and install on the same
drive (i.e. C drive) where CCS has already been installed. It is recommended to install it at:
C:\C6xCSL directory.
4. Make a new CCS project as you did in Lab 1. Right click on the project name in left-pane and
click on ‘properties’. Go to ‘general’ from left-pane and set ‘Linker command file’ to
‘C6713.cmd’, and ‘Runtime support library’ to ‘dsk6713bsl.lib’. Click OK.
5. Add CSL headers to the project: Project Properties → CCS Build → C6000 compiler → Include
options. Click Add and select the path "C:\C6xCSL\include".
6. Add "CHIP_6713" to Predefined Symbols: Project Properties → CCS Build → C6000 compiler
→ Predefined Symbols.
7. Add CSL to project: Project Properties → CCS Build → C6000 linker → File search path →
Include library file or ... Click Add and select path "C:\C6xCSL\lib_3x\csl6713.lib"

Procedure:

In this lab, you will filter the unwanted noise from the speech by using a lowpass filter. The
filter will be designed by using MATLAB’s filter design toolbox. Follow the instructions
below:

1. To open this toolbox, type ‘filterDesigner’ in the command window of MATLAB. This
will open the filter design toolbox. Configure all the options as shown in the figure
below.
2. To export the filter coefficients, go to File → Export. A dialog box will open. With all the
other options at their defaults, change the name of the Numerator to 'h' and
click 'Export'.
3. A new variable named 'h' will be created in the MATLAB workspace. Copy the
contents of this variable and paste them into Notepad, inserting commas between adjacent
values so that they can be copied into the CCS code (a small helper is shown after this list).
4. Use the template given below to copy the coefficients and perform filtering by using
your own convolution sum code written and tested in the previous lab. Here
‘FILTER_LEN’ is the filter length to be defined in code using #define.
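
A small helper for step 3 is sketched below; it assumes the exported coefficient vector is named h, as above, and prints it in a form that can be pasted straight into the C array:

h = [0.1 0.2 0.4 0.2 0.1];                 % placeholder; use the exported 'h'
fprintf('%.10f, ', h);                     % comma-separated values for the C array
fprintf('\nFILTER_LEN = %d\n', length(h));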

In the last while statement, write ‘msg’ if you want to hear the unfiltered voice, and replace it with
‘y’ to hear the filtered voice. Follow all the previous instructions to build and run your program.

LAB TASK:

• Play back the unfiltered and filtered voices to note any difference and comment on the
result.
