
INDEX

S. No. Topic (Page No.)

Week 1
1. Introduction to Signals and Systems, Signal Classification – Continuous and Discrete Time Signals (1)
2. Signal Classification – Deterministic/Random, Even/Odd, Periodic Signals (13)
3. Energy/Power Signals, Unit Impulse Function, Complex Exponential (27)
4. Real Exponential, Sinusoidal Signals, Basic Discrete Time Signals – Unit Impulse/Complex Exponential (40)
5. Classifications of Systems – Memoryless and Causal/Non-Causal Systems (50)

Week 2
6. Linear Systems – Additivity/Homogeneity Properties, Time Invariant Systems, Linear Time Invariant (LTI) Systems, BIBO Stability (60)
7. Example Problems in Signals and Systems – Plot, Odd/Even Components, Periodicity (73)
8. Example Problems in Signals and Systems – Energy, Properties of Impulse, RL Circuit (83)
9. Example Problems in Signals and Systems – Properties of Modulator, Eigenfunction of LTI System (94)
10. Properties and Analysis of LTI Systems – Impulse Response, Response to Arbitrary Input, Convolution and Properties (103)
11. Properties and Analysis of LTI Systems – Memoryless Systems, Causality, Stability, Eigenfunction (114)

Week 3
12. Properties and Analysis of LTI Systems – Differential Equation Description, Linearity and Time Invariance (126)
13. Properties of Discrete Time LTI Systems – Impulse Response, Stability, Eigenfunction, Systems Described by Difference Equation (134)
14. Example Problems LTI Systems – Convolution, Periodic Convolution, BIBO Stability (145)
15. Example Problems LTI Systems – Eigenfunctions, System Described by Differential Equation, Homogeneous and Particular Solution (157)
16. Example Problems Discrete Time LTI Systems – Output of System, Causality, Stability (170)
17. Laplace Transform – Definition, Region of Convergence (ROC), LT of Unit Impulse and Step Functions (181)

Week 4
18. Laplace Transform Properties – Time Shifting Property, Differentiation/Integration in Time (193)
19. Laplace Transform – Convolution, Rational Function – Poles and Zeros, Properties of ROC (206)
20. Laplace Transform – Partial Fraction Expansion with Simple Poles and Poles with Multiplicity Greater than Unity, Laplace Transform of LTI Systems (220)
21. Laplace Transform Example Problems – Evaluation of Laplace Transform, Inverse LT through Partial Fraction (235)
22. Laplace Transform Example Problems – Inverse LT through Partial Fraction for Poles with Multiplicity Greater than Unity (245)
23. Laplace Transform – RL Circuit Problem, Unilateral Laplace Transform, RC Circuit with Initial Conditions (257)

Week 5
24. z-Transform – Definition, Region of Convergence (ROC), z-Transform of Unit Impulse and Step Functions (267)
25. Properties of z-Transform – Linearity, Time Shifting, Time Reversal, Multiplication by n (280)
26. Properties of z-Transform – Convolution, Inverse z-Transform through Partial Fractions, Properties of ROC (293)
27. z-Transform of LTI Systems – Causality, Stability, Systems Described by Difference Equation (304)
28. Example Problems in z-Transform – Evaluation of z-Transform, ROC (313)
29. Example Problems in Signals and Systems – Plot, Odd/Even Components, Periodicity (323)
30. Example Problems in z-Transform – LTI System Output, Step/Impulse Response of LTI System (334)

Week 6
31. Example Problems in z-Transform – Impulse Response of LTI System Described by Difference Equation (347)
32. General Inversion Method for Inverse z-Transform Computation – Method of Residues (354)
33. Fourier Analysis of Continuous Time Signals and Systems – Introduction (369)
34. Fourier Analysis: Complex Exponential Fourier Series, Trigonometric Fourier Series – Even and Odd Signals (379)
35. Conditions for Existence of Fourier Series – Dirichlet Conditions, Magnitude/Phase Spectrum, Parseval's Theorem (392)
36. Fourier Transform (FT): Definition, Inverse Fourier Transform, Fourier Spectrum, Dirichlet Conditions, Relation to Laplace Transform, FT of Unit Impulses (402)

Week 7
37. Fourier Transform of Exponential, Unit Step Function, Properties of Fourier Transform – Linearity, Time Shifting, Frequency Shifting, Time-Reversal (415)
38. Properties of Fourier Transform – Duality, Differentiation in Time, Convolution (428)
39. Fourier Transform – Parseval's Relation, Frequency Response of Continuous Time LTI Systems (438)
40. Fourier Transform – Distortionless Transmission, LTI Systems Characterized by Differential Equations, Ideal Low Pass and High Pass Filters (463)
41. Fourier Transform – Ideal Band Pass and Band Stop Filters, Non-Ideal Low-Pass Filter, 3 dB Bandwidth (478)

Week 8
42. Fourier Analysis Examples – Complex Exponential Fourier Series of Periodic Square Wave (680)
43. Fourier Analysis Examples – Trigonometric Fourier Series of Periodic Square Wave, Periodic Impulse Train (699)
44. Fourier Analysis Examples – Complex Exponential Fourier Series and Trigonometric Fourier Series of Periodic Triangular Wave, Periodic Convolution (715)
45. Fourier Analysis Examples – Fourier Transform of Square Pulse, Fourier Transform of Sinc Pulse (728)
46. Fourier Analysis Examples – Fourier Transform of Exponential, Cosine, Sgn, Unit-Step Signals, Even and Odd Components (393)
47. Fourier Analysis Examples – Fourier Transform of Gaussian Pulse, Fourier Transform Method to find Output of LTI Systems Described by Differential Equations (764)
48. Fourier Analysis Examples: Bode Plot for Magnitude/Phase Response – Simple Example (780)
49. Fourier Analysis Examples: Bode Plot for Magnitude/Phase Response – Second Example, Fourier Transform of Hilbert Transformer (566)

Week 9
50. Fourier Transform Examples: Filtering – Ideal Low Pass Filter (577)
51. Fourier Transform Problems: Unit Step Response of RC Circuit, Sampling of Continuous Signal (587)
52. Sampling: Spectrum of Sampled Signal, Nyquist Criterion (600)
53. Sampling: Reconstruction from Sampled Signal (611)
54. Fourier Analysis of Discrete Time Signals and Systems – Introduction (881)
55. Fourier Analysis of Discrete Time Signals – Duality, Parseval's Theorem (631)

Week 10
56. Discrete Time Fourier Transform: Definition, Inverse DTFT, Convergence, Relation between DTFT and z-Transform, DTFT of Common Signals (639)
57. Discrete Time Fourier Transform: Properties of DTFT – Linearity, Time Shifting, Frequency Shifting, Conjugation, Time-Reversal, Duality (654)
58. Discrete Time Fourier Transform: Properties of DTFT – Differentiation in Frequency, Difference in Time, Convolution, Multiplication, Parseval's Relation (668)
59. DTFT: Discrete Time LTI Systems – LTI Systems Characterized by Difference Equations (682)
60. Discrete Fourier Transform – Definition, Inverse DFT, Relation between DFT and DFS, Relation between DFT and DTFT, Properties – Linearity, Time Shifting (689)
61. Discrete Fourier Transform: Properties – Conjugation, Frequency Shifts, Duality, Circular Convolution, Multiplication, Parseval's Relation, Example Problems for Fourier Analysis of Discrete Time Signals (700)
62. Example Problems: DFS Analysis of Discrete Time Signals, Problems on DTFT (710)

Week 11
63. Example Problems: DTFT of Cosine, Unit Step Signals (721)
64. Example Problems: DTFT – Impulse Response (731)
65. Example Problems: DTFT – Sampling (754)
66. Example Problems: DTFT – FIR, Discrete Fourier Transform (768)
67. Example Problems: DFT (779)
68. Example Problems: DFT – DFT, IDFT in Matrix Form (787)

Week 12
69. Group/Phase Delay – Part I (789)
70. Group/Phase Delay – Part II (800)
71. IIR Filter Structures: Direct Form – I, Direct Form – II (817)
72. IIR Filter Structures: Example (826)
73. IIR Filter Structures: Cascade Form (832)
74. IIR Filter Structures: Cascade Form (837)
75. IIR Filter Structures: Parallel Form – I, Parallel Form – II, Examples (843)
Principles of Signals and Systems
Prof. Aditya K. Jagannatham
Department of Electrical Engineering
Indian Institute of Technology, Kanpur

Lecture- 01
Introduction to Signals and Systems, Signal Classification – Continuous and
Discrete Time Signals

Keywords: Introduction to Signals and Systems, Signal Classification, Continuous and Discrete Time Signals

Hello, welcome to this module in this massive open online course alright. So in this
module we are going to look at signals and systems, the properties of signals and
systems.

(Refer Slide Time: 00:30)

So this course concerns itself with, as the title implies, signals and systems, two of the
fundamental quantities which are relevant in all of electrical, electronics and
communication engineering and relevant in a very profound sense. Their knowledge is
fundamental to understanding the various concepts or aspects of different applications in
electrical electronics and communication engineering.

1
(Refer Slide Time: 01:32)

So what we are interested in this massive open online course is to understand the
fundamental concepts in the properties of signals and systems and more importantly we
are interested in this interplay between signals and systems.

(Refer Slide Time: 02:43)

Now consider the case of a system, which I am representing schematically here, and if I transmit a signal x(t) through this system, I have an output y(t). These are the signals: x(t) is the input signal and y(t) is the output signal. So we would like to characterize and better understand what happens when I take a signal and transmit it through a system, what this interaction is, and how the system acts upon this signal to generate the output, because we are going to use systems to process these signals suitably. So we would like to understand the interplay between signals and systems, the impact systems have on signals, and also how to design appropriate systems to extract certain behavior from the signals.

So this interaction between signals and systems is of fundamental importance and these
are used in several branches of electrical engineering, for instance in the design of power
systems where they are monitored and efficiently controlled using the smart grid and you
can have applications in communication systems, such as 3G wireless systems or Wi-Fi
such as a 802.11x systems. So this course aims to look at the properties of both
continuous as well as discrete signals along with systems.

(Refer Slide Time: 06:09)

3
(Refer Slide Time: 08:15)

Let us start with a definition of a signal.

(Refer Slide Time: 09:38)

A signal can be defined as basically, it is a physical quantity that conveys information


about some physical phenomena, for instance such as a voltage signal, an
electromagnetic wave which is a signal that is transmitted over the air from the base
station to the mobile station which is carrying information about the communication
between two individuals or let us say it is a data signal carrying information about either
a video or an image that has been transmitted. So a signal is a bearer of information.

4
(Refer Slide Time: 12:20)

An image can also be thought of as a signal in space and we have signals in both space
and time, for instance a video signal which is 2 dimensional, each frame of the video can
be thought of as an image. So it has variation in space as well as time because it
comprises of a sequence of frames in time.

(Refer Slide Time: 13:48)

Now when we consider time signals we represent them using x(t), y(t) etc., and many
principles that we develop for the analysis of such signals which vary in time in a single
dimension can also be extended for 2D, 3D signals or a separate set of techniques can be

5
developed for them, but based on the fundamental principles that we learn for this time
signal. So in this course we are going to consider the analysis of such simple signals
which are varying with time and these can be suitably extended to other scenarios for
instance images which are 2D space signals or video which is a 3 dimensional both space
and time varying signals.

So a signal represents some physical quantity which conveys information about some
phenomenon that we are interested and naturally to understand more about that
phenomenon, we need to process that signal suitably. So we are going to consider time
varying signals or signals which are functions of time and these are known as time
signals.

(Refer Slide Time: 17:35)

So consider the classification of signals. Signals can be continuous time signals, for instance x(t) = sin(2πt); that is a continuous time signal, also known as a sinusoid or a sinusoidal signal. Either cos(2πt) or sin(2πt), both are known as sinusoids. It is defined continuously over time.

6
(Refer Slide Time: 17:29)

So it is defined continuously at all time instants either from minus infinity to infinity or
over a continuous time interval not at a specific set of time instants.

(Refer Slide Time: 19:23)

Now we have discrete time signals which are defined at discrete set of time instants. For
instance, you have a discrete time signal which can be defined at discrete set of time
instants, this is known as a stem plot and these discrete set of time instants can either be
positive or negative.

7
(Refer Slide Time: 20:27)

So these are only defined at a set of discrete time instants. So it can be identified as a
series or sequence of numbers, for instance we have x(0) at time instant 0, x (1) at time
instant 1, x(2) at time instant 2 and so on.

(Refer Slide Time: 21:33)

So these can be defined as a sequence or series of numbers and so also known as a time
series, for instance if the signal is in time, it is a discrete time signal. So naturally similar
to discrete time signals, you can also have discrete space signals. For instance, if you
take an image signal and if you sample it at appropriate instants in space, this is a

8
discrete space signal. In fact, if you look at modern images, which are represented as a collection of pixels, they are nothing but discrete space signals. The discrete time signals can
also be obtained by sampling continuous time signals. So I can go from a continuous
time signal to a discrete time signal and I can obtain a continuous time signal from a
discrete time signal through a suitable filtering operation.

(Refer Slide Time: 24:02)

So a discrete time signal can be obtained by suitably sampling a continuous time signal. There are certain properties of the sampling process. If you take a continuous time signal and sample it at suitable points over a time grid, you get points that are equally spaced in time, and these are the samples.
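As a small illustration (my own sketch, not part of the lecture), the NumPy snippet below samples a continuous time sinusoid on a uniform time grid; the 5 Hz frequency and 10 ms sampling interval are arbitrary illustrative choices.

```python
import numpy as np

# Continuous-time signal (conceptually defined for all t): a 5 Hz sinusoid.
# The frequency and sampling interval below are illustrative choices.
f0 = 5.0           # Hz
Ts = 0.01          # sampling interval in seconds (uniform time grid)

x = lambda t: np.sin(2 * np.pi * f0 * t)

# Sampling on the grid t = n*Ts gives the discrete-time signal x[n] = x(n*Ts)
n = np.arange(0, 50)            # sample indices
x_n = x(n * Ts)                 # equally spaced samples of the continuous signal

print(x_n[:5])   # first few samples of the discrete-time signal
```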

9
(Refer Slide Time: 25:15)

These are the samples and these are the sampling time instants. So by sampling a continuous time signal I am able to obtain a discrete time signal. These discrete time signals are more convenient to process; for instance, in digital communication systems such as the 3G and 4G wireless systems that most of our mobile phones are based on, it is convenient to handle digital signals, which can in turn be obtained from discrete time signals.

So discrete time signals give rise to digital signals, and such signals can be processed much more readily in comparison to the conventional systems which were analog in nature, for instance amplitude modulation, FM radio and so on. Examples of systems built on discrete time signals are modern communication systems such as 3G, 4G, or GSM, which is a 2G system, as well as Wi-Fi and so on; even your modern landline probably uses digital communication, and the set top box for your TV, which is an alternative to analog cable, is another very good example. So that concludes the basic classification of signals into continuous and discrete time signals.

10
(Refer Slide Time: 30:10)

(Refer Slide Time: 30:30)

Consider an exponential kind of signal, for instance x(n) = (1/2)^n for n ≥ 0 and 0 otherwise. If you look at this signal, since one half is less than 1, it is going to be a decreasing signal. So this is a discrete time signal defined only at discrete time instants. So let us conclude this module with that.

So we have seen signals as physical quantities that convey some information about a
certain phenomenon. We are interested in studying the signals, behavior of these signals,

11
the properties of these signals and we begin with the characterization or classification of
these signals first as two basic classes that is continuous time and discrete time signals.
So we will stop here and continue with other aspects in subsequent modules. Thank you
very much.

12
Principles of Signals and Systems
Prof. Aditya K. Jagannatham
Department of Electrical Engineering
Indian Institute of Technology, Kanpur

Lecture – 02
Signal Classification – Deterministic/Random, Even/ Odd, Periodic Signals

Keywords: Deterministic and Random Signals, Even and Odd Signals

Hello welcome to another module in this massive open online course. So we are looking
at the classification of signals. And let us look at a different classification of signals that
is analog and digital signals. Analog signals are basically continuous time signals which
can take all possible values over a continuous interval.

(Refer Slide Time: 00:35)

13
(Refer Slide Time: 01:26)

So an analog signal x(t) is a continuous time signal which can take values belonging to an interval [a, b]. We have several examples of analog signals, for instance sin(2πft), which is a sinusoidal signal. It takes all values that belong to the interval [−1, 1]. Another such example is e^t, which can take all positive values belonging to the interval (0, ∞). So e^t is a continuous time signal.

(Refer Slide Time: 04:03)

14
On the other hand, a digital signal is a discrete time signal, which can take only values
that belong to a discrete or finite set.

(Refer Slide Time: 05:32)

For instance, x(n) can either be -1 or 1 and this is not an interval that is x(n) can take
only values from a discrete set of two possible values.

(Refer Slide Time: 08:11)

In communications, this is termed a binary phase shift keying system, where the transmitted digital symbol can be either −1 or 1, or the voltage level can be −1 volt or 1 volt, to indicate the information symbol 0 or 1.

15
(Refer Slide Time: 09:13)

Now another classification of signals can be real and complex signals. A signal is a real signal, whether it is a continuous time or discrete time signal, if it takes values in the set of real numbers, that is, it can take only real values.

(Refer Slide Time: 11:06)

For instance, sin(2πft) or e^t take only real values, so these are real signals. On the other hand, if you take other signals, such as a continuous time signal x(t) or a discrete time signal x(n), which can take values belonging to the set of complex numbers, these are termed complex signals.

16
(Refer Slide Time: 12:12)

(Refer Slide Time: 12:50)

For instance, the classic example of a complex signal is x(t) = e^{j2πft}, which can also be written as cos(2πft) + j sin(2πft), where j is the imaginary unit, √(−1). This is a complex quantity, also termed a complex sinusoid, which takes values in the set of complex numbers. Complex signals are very useful in the representation of signals; in fact, if you go back to the analysis of communication systems, all communication signals can be analyzed or represented as complex signals comprising in-phase and quadrature components.

17
So complex signals have great utility in the study of signals and in carrying out analysis in various areas of electronics and communication engineering. These are also an important class of signals.

(Refer Slide Time: 14:46)

Another important classification of signals is deterministic and random signals. These can also be either continuous time or discrete time signals. A deterministic signal is completely, that is deterministically, specified at any given time instant.

(Refer Slide Time: 16:16)

18
For instance, sin(2πft) and e^t are deterministic signals in the sense that at a given time instant t there is no ambiguity; they are completely specified, that is, given a time instant one can exactly determine the value of the signal. However, this is not the case for a random signal; as the name implies, it is random in nature, that is, it takes random values at various time instants.

(Refer Slide Time: 17:30)

So this takes random values at different time instants and hence it is not completely determined ahead of time. Let us say your signal represents the outcome of a coin toss experiment: if the outcome is a head it is signalled by plus 1, and if the outcome is a tail it is signalled by minus 1. So the signal is x(n) = 1 if the outcome equals heads, and x(n) = −1 if the outcome equals tails. This basically represents a coin toss experiment, and what it means is that at every instant of time you are tossing a coin; if the outcome is a head you are representing it by 1, and if the outcome is a tail you are representing it by −1. Therefore, since the outcome of the coin toss experiment is random, the signal itself is random in nature, and this is a discrete time random signal.

19
(Refer Slide Time: 19:46)

A classic example of a continuous time random signal is noise and it is some kind of a
signal which looks like as shown in slide. The noise limits the performance of a system
and it is important to understand the properties and behavior of noise to completely
characterize the performance and behavior of a system.

(Refer Slide Time: 21:15)

Noise is an inevitable component and whenever we analyze a signal, there is also an


underlying noise component that is present, its power may be less or the power level
relative to the signal might be varying.

20
Once again going back to our example of communication systems, to understand the
accuracy with which information can be transferred for instance from a base station to
the mobile, it is very important to understand and characterize the noise properties of the
system.

(Refer Slide Time: 23:52)

Another classification is even and odd signals. An even signal satisfies x(−t) = x(t), or for a discrete time signal, x(−n) = x(n). For instance, the classic example is cos(2πft), since cos(−2πft) = cos(2πft). This is an example of an even signal.

(Refer Slide Time: 25:37).

21
On the other hand, an odd signal satisfies x(−t) = −x(t) or x(−n) = −x(n). A classic example of an odd signal is x(t) = sin(2πft).

(Refer Slide Time: 27:21)

 1
Here the value for , that is at t  is 1 and this is the value correspondingly at
2 4f
1 1
t and you can see this is basically 1 and at t   it is -1. So this satisfies
4f 4f
basically sin(2 ft )   sin(2 ft ) . So this is a classic example of an odd signal. The
concept of even and odd signals comes in handy when analyzing the properties of the
behavior of signals. So even signal has even symmetry that is symmetric about 0 and odd
signal has odd symmetry.

22
(Refer Slide Time: 28:51)

Another very important classification of signals is periodic versus aperiodic signals. So x(t) is periodic if there exists a time period T such that x(t + T) = x(t) for all t.

(Refer Slide Time: 30:33)

Let us consider a periodic triangular wave. So this is 0, this is T this is 2T, 3T, -T and so
on. And you can see that for any T, that is if you look at values T apart, they are all the
same.

23
(Refer Slide Time: 32:06)

Consider the classic example again, the sine, which is a periodic signal. You can see that sin(2πFt) equals sin(2πF(t + T)), where the period is T = 1/F. For instance, for sin(2πt) the period is 1: sin(2π(t + 1)) = sin(2πt + 2π) = sin(2πt). So this has a period of T = 1.

(Refer Slide Time: 33:33)

Now if T is a period of the periodic signal, then mT is also a period, where m is any integer. We have x(t + mT) = x(t), and this also holds for all t.

24
(Refer Slide Time: 35:22)

So therefore the fundamental period is the smallest positive number, that is, the smallest time period T, such that x(t + T) = x(t) holds for all t. All other periods are basically multiples of this fundamental period. So let us go back to our example, sin(2πt); here T = 1 is the fundamental period.

(Refer Slide Time: 37:19)

Any multiple of the fundamental period is also a period, and the fundamental period is the smallest possible duration such that x(t + T) = x(t) for all time instants t. Now the same can be defined for a discrete time signal.

25
(Refer Slide Time: 39:00)

For a discrete periodic signal, we must have x(n + N) = x(n), and this must hold for all n. The smallest N for which this holds is known as the fundamental period N₀. So that is basically continuous time periodic signals and discrete time periodic signals.

So we have seen various classes of signals such as deterministic and random signals,
even and odd signals and periodic signals. So you can go over these different classes and
try to understand that better alright. So we will stop here and continue with other aspects
in the subsequent modules. Thank you very much.

26
Principles of Signals and Systems
Prof. Aditya K. Jagannatham
Department of Electrical Engineering
Indian Institute of Technology, Kanpur

Lecture - 03
Energy/ Power Signals, Unit Impulse Function, Complex Exponential

Keywords: Energy and Power Signals, Unit Impulse Function, Complex Exponential

Hello welcome to another module in this massive open online course. So let us continue
our discussion on the classification of signals, so let us look at energy and power signals.

(Refer Slide Time: 00:24)


The energy of a continuous time signal x(t) is defined as E = ∫_{−∞}^{∞} |x(t)|² dt. The energy of a discrete time signal is E = Σ_{n=−∞}^{∞} |x(n)|². Now a signal is an energy signal if the energy is finite.

27
(Refer Slide Time: 01:48)

If 0  E   , this implies that it has a finite energy and hence it is an energy signal. For
instance, a good example of an energy signal is et u (t ) or e nu (n) and these are energy
signals that is signals whose energies are finite.

(Refer Slide Time: 02:44)

And we also have the notion of power signals. The power of a signal is P = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} |x(t)|² dt, and the same can be defined for a discrete time signal, that is, P = lim_{N→∞} (1/(2N+1)) Σ_{n=−N}^{N} |x(n)|². If the power of the signal is finite, that is 0 < P < ∞, then it is known as a power signal. An example of a power signal is the sinusoid sin(2πft).
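As a quick numerical check (my own sketch, not from the lecture), the snippet below approximates the energy of e^{−t}u(t), whose exact energy is 1/2, and the power of a sinusoid, whose exact power is 1/2; the frequencies and integration windows are illustrative choices.

```python
import numpy as np

dt = 1e-4
t = np.arange(0, 50, dt)                 # e^{-t} u(t) is negligible beyond t ~ 50

# Energy signal: x(t) = e^{-t} u(t); E = int |x|^2 dt = 1/2
x = np.exp(-t)
E = np.sum(np.abs(x) ** 2) * dt
print("energy ~", E)                     # ~ 0.5

# Power signal: sin(2*pi*f*t); P = lim (1/T) int_{-T/2}^{T/2} |x|^2 dt = 1/2
f = 3.0
T = 100.0
t2 = np.arange(-T / 2, T / 2, dt)
s = np.sin(2 * np.pi * f * t2)
P = np.sum(np.abs(s) ** 2) * dt / T
print("power  ~", P)                     # ~ 0.5
```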

(Refer Slide Time: 05:03)

So we have covered most of the major classes of signals and now let us look at some
important continuous and discrete time signals which occur very frequently in
applications and in the analysis of signals and systems. The first important signal you
must be very familiar with is the unit step signal.

(Refer Slide Time: 06:33)

29

The unit step signal is defined as u(t) = 1 for t > 0 and u(t) = 0 for t < 0; at t = 0 you can sometimes define it as 1/2. At t = 0 there is a discontinuity, that is, it jumps from 0 to 1.

(Refer Slide Time: 07:04)

So this is your unit step function or the unit step signal. And another such important
signal is the unit impulse function.

(Refer Slide Time: 08:04)

30
And this is one of the most fundamental or key signals to understand the various
properties or the behavior of system.

(Refer Slide Time: 08:37)

 
Consider the following signal, which is a pulse from −Δ/2 to Δ/2. So the pulse has width Δ and height 1/Δ, and the area under the pulse equals Δ · (1/Δ) = 1. We denote this pulse by δ_Δ(t). So there is a sequence of pulses, one pulse for each value of Δ; we are considering a narrow pulse of width Δ, from −Δ/2 to Δ/2, and of height 1/Δ.

(Refer Slide Time: 10:48)

31
So what we have is that for each pulse δ_Δ(t), for every Δ, the area under the pulse equals 1. Now we define δ(t) as δ(t) = lim_{Δ→0} δ_Δ(t), and this is your impulse function, or simply the impulse. As Δ → 0, the width goes to 0 and the height 1/Δ → ∞, but the area under it still remains constant, that is, unity.

(Refer Slide Time: 11:48)

So what you can see is that the area under this satisfies ∫_a^b δ(t) dt = 1 if a < 0 < b, and 0 otherwise.

(Refer Slide Time: 12:35)

32

The impulse has several interesting properties, as follows: ∫_{−∞}^{∞} x(t)δ(t) dt = x(0), that is, if you multiply the impulse function by any signal x(t) and integrate, it picks the value of the function x(t) at t = 0. Similarly, ∫_{−∞}^{∞} x(t)δ(t − t₀) dt = x(t₀).


(Refer Slide Time: 14:39)

(Refer Slide Time: 16:00)

33
Now δ(t − t₀) is nothing but the impulse shifted to t = t₀. Some of the other properties of the impulse function are δ(at) = (1/|a|) δ(t), and, for any signal x(t), x(t)·δ(t) = x(0)·δ(t).

(Refer Slide Time: 16:57)


Further, we have ∫_{−∞}^{∞} x(τ)δ(t − τ) dτ = x(t), and this is a very important property.

(Refer Slide Time: 18:13)

34
(Refer Slide Time: 18:58)

So we are going to set τ = t − t′, which means t′ = t − τ and dτ = −dt′. So the integral simplifies as ∫_{−∞}^{∞} x(t − t′)δ(t′) dt′, where we use the property that δ(−t) = δ(t).


(Refer Slide Time: 20:06)


So this becomes ∫_{−∞}^{∞} x(t − t′)δ(t′) dt′; but x(t − t′)δ(t′) picks out the value of x(t − t′) at t′ = 0, so this is equal to simply x(t).

35
(Refer Slide Time: 20:54)


So what we have as a result is that ∫_{−∞}^{∞} x(τ)δ(t − τ) dτ = x(t), and this property is known as the sifting property of the impulse function, or the sifting property of the delta function. It would be good to examine and understand the properties of the impulse function, because it arises very frequently in the analysis of signals and systems, it is very important for understanding the behavior of systems, and we will see several applications of the impulse function as we go through the rest of this course. Let us come to another interesting function that occurs frequently, which is the complex exponential.
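Before moving on, here is a small numerical sketch (my own illustration, not from the lecture) of the sifting property, using the narrow pulse δ_Δ(t) of width Δ and height 1/Δ as an approximation of the impulse; the test signal and the value of Δ are arbitrary choices.

```python
import numpy as np

# Approximate the impulse by the pulse delta_Delta(t): width Delta, height 1/Delta.
Delta = 1e-3
dt = 1e-5
tau = np.arange(-2, 2, dt)

x = lambda t: np.cos(3 * t) + 0.5 * t          # an arbitrary smooth test signal

def delta_approx(t):
    return np.where(np.abs(t) < Delta / 2, 1.0 / Delta, 0.0)

t0 = 0.7
# Sifting property: integral of x(tau) * delta(t0 - tau) d tau  ->  x(t0)
sift = np.sum(x(tau) * delta_approx(t0 - tau)) * dt
print(sift, "vs exact", x(t0))                 # the two agree as Delta -> 0
```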

(Refer Slide Time: 23:01)

36
The complex exponential function is defined as x(t) = e^{jω₀t} or x(t) = e^{j2πf₀t}, which is cos(ω₀t) + j sin(ω₀t), where j = √(−1) is the imaginary unit. If you look at x(t) = e^{jω₀t}, the magnitude of x(t) is |x(t)| = √(cos²(ω₀t) + sin²(ω₀t)). Now cos²(ω₀t) + sin²(ω₀t) = 1, so |x(t)| is always 1; that is, a complex exponential always has unit magnitude, and further it is a periodic signal.

(Refer Slide Time: 24:27)

0
Here e j 2 f0t  e j0t where f 0  and this is an important relation. Here 0 is the
2
circular frequency and this is also known as the angular frequency and is in radians per
second, the unit of f0 frequency is Hertz.

37
(Refer Slide Time: 25:22)

This is a periodic signal and its period is T = 1/f₀, where f₀ = ω₀/(2π). For instance, if f₀ equals 5 Hz, then the period is T = 1/5 = 0.2 s. So the period is basically the reciprocal of the frequency.

(Refer Slide Time: 26:11)

A general complex exponential is defined with s = σ + jω and x(t) = e^{st}, which is x(t) = e^{st} = e^{(σ+jω)t} = e^{σt}(cos(ωt) + j sin(ωt)).

38
(Refer Slide Time: 26:54)

So what we have seen in this module is that we have seen other different classes of
signals such as energy and power signals etc., we have also seen some very commonly
arising and important continuous time signals such as the unit impulse or the unit step
function and we have also seen the complex exponential and the general complex
exponential signals. So we will stop here and continue with other aspects in the
subsequent modules. Thank you very much.

39
Principles of Signals and Systems
Prof. Aditya K. Jagannatham
Department of Electrical Engineering
Indian Institute of Technology, Kanpur

Lecture – 04
Real Exponential, Sinusoidal Signals, Basic Discrete Time Signals – Unit Impulse/
Complex Exponential

Keywords: Real Exponential, Sinusoidal Signals, Basic Discrete Time Signals – Unit
Impulse and Complex Exponential

Hello, welcome to another module in this massive open online course. So we are looking
at a classification of signals and let us look at yet another class of signals which are the
real exponential signals.

(Refer Slide Time: 00:29)

The real exponential signals are simply referred to as exponential signals, similar to the complex exponentials. The real exponential signal is of the form x(t) = e^{σt}, and if σ > 0 it is increasing. At t = 0 it is 1, and as t → −∞, e^{σt} → 0.

40
(Refer Slide Time: 01:36)

So e^{σt}, for any t, is in fact a positive signal; similarly, when σ < 0 it is a decreasing signal.

(Refer Slide Time: 03:03)

41
(Refer Slide Time: 03:37)

So let me just draw it: it is decreasing for σ < 0, and as t → ∞, e^{σt} → 0.

(Refer Slide Time: 05:01)

The last class of signals, which is also very important and one of the most fundamental classes of signals, is the sinusoidal signal, simply a real sinusoidal signal. This is x(t) = A cos(ω₀t + φ), where A is the amplitude of the sinusoid, ω₀ is the angular frequency or the radian frequency, and φ is the phase of the sinusoid. The frequency f₀, as we have already seen, is given as ω₀/(2π), and f₀ is the fundamental frequency; we have also seen that the signal can be represented as A cos(2πf₀t + φ). The sinusoid is a periodic signal.

(Refer Slide Time: 06:21)

(Refer Slide Time: 07:12)

Now let us look at some basic discrete time signals which again occur frequently in the analysis of systems. The unit step is u(n) = 1 for n ≥ 0 and u(n) = 0 for n < 0, and this is termed the discrete unit step function, which means it is defined only at discrete time instants.

43
(Refer Slide Time: 08:06)

(Refer Slide Time: 08:25)

44
(Refer Slide Time: 10:19)


Similarly, we have the unit impulse function δ(n) = 1 for n = 0 and δ(n) = 0 for n ≠ 0.

(Refer Slide Time: 10:46)


This also has some interesting properties, for instance Σ_{n=−∞}^{∞} δ(n) = 1, and we also have the sifting property for this unit impulse function, similar to what we had in the continuous time scenario, that is, Σ_{k=−∞}^{∞} x(k)δ(n − k) = x(n).

45
(Refer Slide Time: 11:27)

We can see that δ(n) = u(n) − u(n − 1), where u(n) is the discrete time unit step signal and u(n − 1) is the unit step signal shifted by 1. So this gives us the discrete time impulse function. Similarly, u(n) = Σ_{k=−∞}^{n} δ(k), so this is another representation of the unit step function.
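A tiny NumPy sketch (my own illustration, not from the lecture) verifying these discrete time relations on a finite window of sample indices:

```python
import numpy as np

n = np.arange(-10, 11)                  # a finite window of time indices
u = (n >= 0).astype(float)              # unit step u[n]
delta = (n == 0).astype(float)          # unit impulse delta[n]

# delta[n] = u[n] - u[n-1]  (u[n-1] is the step shifted right by one sample)
u_shift = (n - 1 >= 0).astype(float)
print(np.array_equal(delta, u - u_shift))      # True

# u[n] = sum_{k <= n} delta[k]  (running sum of the impulse)
print(np.array_equal(u, np.cumsum(delta)))     # True

# Sifting: convolving any signal with the unit impulse returns the signal itself
x = np.array([1.0, -2.0, 3.0, 0.5])
print(np.convolve(x, [1.0]))                   # unchanged: [ 1.  -2.   3.   0.5]
```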

(Refer Slide Time: 13:22)

46
j n
We have a discrete time complex exponential which is defined as x(n)  e o where

j n
 is again the frequency, so x(n)  e o  cos( n)  j sin( n) .
0 0
o

(Refer Slide Time: 14:35)

Now we need to know whether this discrete time complex exponential is periodic or not. We have seen that the continuous time complex exponential is always periodic and the fundamental period is given as 1/f₀. So let us look at e^{j2πf₀n}, where f₀ = ω₀/(2π). For this to be periodic we need e^{j2πf₀n} = e^{j2πf₀(n+N)} = e^{j2πf₀n} · e^{j2πf₀N}.

47
(Refer Slide Time: 16:25)

j 2 f N
So this is periodic if e 0  1 which means f0N must be equal to an integer. So

k
f 0 N  k which means f 0  . So it must be a rational number. So the condition for
N
k
periodicity of the discrete time impulse is that f 0  which is a rational number.
N

(Refer Slide Time: 17:17)

48
j n 
Therefore, e o is periodic only if f  0  k or some m , which has to be a rational
0 2 n n
number. So that is an interesting aspect of the discrete time complex exponential while
the complex exponential for the continuous time is always periodic.
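The following sketch (an illustration, not from the lecture) searches numerically for the smallest period N of e^{j2πf₀n}; the particular values of f₀ tried here, one rational and one irrational, are arbitrary.

```python
import numpy as np

def smallest_period(f0, max_N=200, tol=1e-9):
    """Smallest N with e^{j 2 pi f0 (n+N)} = e^{j 2 pi f0 n} for all n, i.e. f0*N integer."""
    for N in range(1, max_N + 1):
        if abs(np.exp(1j * 2 * np.pi * f0 * N) - 1) < tol:
            return N
    return None     # no period found: the exponential is not periodic

print(smallest_period(1 / 8))            # rational f0 = 1/8  -> fundamental period 8
print(smallest_period(3 / 10))           # rational f0 = 3/10 -> fundamental period 10
print(smallest_period(1 / (2 * np.pi)))  # irrational f0      -> None (never repeats)
```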

(Refer Slide Time: 18:15)

So this basically completes the classification of signals, where we have looked at a broad set of signals which frequently arise in practice. There are several other signals, and it is not possible to characterize the entire set of signals; however, we have managed to classify a fairly large set of frequently occurring signals. The properties of these signals are important to understand because they arise frequently in practice and are central to the principles of analysis of signals and systems. So we will stop this module here and look at other aspects in the subsequent modules. Thank you very much.

49
Principles of Signals and Systems
Prof. Aditya K. Jagannatham
Department of Electrical Engineering
Indian Institute of Technology, Kanpur

Lecture – 05
Classifications of Systems – Memoryless and Causal/ Non-Causal Systems

Keywords: Memoryless Systems, Causal Systems and Non-Causal Systems

Hello, welcome to another module in this massive open online course. So we have
looked at the classification of signals and we have looked at various frequently occurring
signals. Let us now start discussing the fundamental aspect of this course which is
systems, their behavior and their properties.

(Refer Slide Time: 00:39)

So we are going to start looking at the classification of systems and its representation.

50
(Refer Slide Time: 01:33)

So a system representation is a mathematical model for an actual physical process, a physical device or an object. It is an analytical model that precisely captures the input-output relationship of the system, that is, given an input signal, what is the behavior of the system, or what output is given out by the system. So for any given input signal, this system model or system representation must be able to precisely characterize the output.

(Refer Slide Time: 03:49)

51
And the system is represented mathematically as y(t) = T(x(t)), where T basically captures the physical process of the system, x(t) is the input signal and y(t) is the output signal; this is a continuous time system. We can also think of T as a transformation, or as an input-output mapping.

(Refer Slide Time: 04:43)

(Refer Slide Time: 05:34)

In the same manner, I can also define a discrete time system where the output is y(n) and the input is x(n): similarly, y(n) = T(x(n)).

52
(Refer Slide Time: 06:00)

(Refer Slide Time: 06:51)

Now let us start with the classification of systems to better understand the properties of various systems, and the first important class of systems that we are going to look at is systems with memory and memoryless systems.

53
(Refer Slide Time: 07:23)

Systems can either have memory or be memoryless. A memoryless system is one whose output depends only on the current input, that is, the input at the current time instant; the past inputs, or the past history of the signal, have no bearing on the output of the system. So this characterizes a memoryless system.

(Refer Slide Time: 08:55)

For instance, we have y(t) = Kx(t), where K is some constant. For instance, in a circuit we have V(t) = Ri(t) from Ohm's law, where V is the voltage, i is the current and R is the resistance, and the voltage at time instant t depends only on the current at time instant t. So this is a classic example of a memoryless system.

(Refer Slide Time: 10:08)

Similarly, for a discrete time signal I can have y(n) = Kx(n), where K is some constant, and this is also an example of a memoryless system.

(Refer Slide Time: 11:00)

55
On the other hand, we naturally also have systems with memory. Here the output depends not only on the input at the current time instant, but also on the input at past time instants. The output depends not only on x(t), but also on the past values of x(t). So the output y(t₀) depends not only on x(t₀), but also on x(t) for t < t₀, and this is known as a system with memory.

(Refer Slide Time: 12:56)

For instance, if you again go back to the example of circuits, you can have V(t) = (1/C) ∫_{−∞}^{t} i(τ) dτ, where V(t) is the voltage signal, i(τ) is the current signal and C is the value of the capacitance. This is a system with memory; the capacitor has memory because the voltage depends not only on the value of the current at the time instant t, but also on the values of the current signal at past time instants.

56
(Refer Slide Time: 15:07)

If you treat this as an input-output system where the input is current, output is voltage
then the present values of the voltage also depend on the past values of the current signal.
So the capacitor is a classic example of a system with memory. Let us now come to yet
another important property of a system that is causality that is causal and non- causal
systems.

(Refer Slide Time: 15:58)

Now a system is causal if the output y(t₀) depends only on x(t) for t ≤ t₀, that is, the output depends only on the past and present values of x(t) and does not depend on future values of x(t). Such a system is known as a causal system, and this property is known as causality. If a system is not causal, it is non-causal, that is, the output also depends on future values of the input signal.

(Refer Slide Time: 19:26)

(Refer Slide Time: 20:28)

For instance, let us go back to a simple system, y(t) = ∫_{−∞}^{t} x(τ) dτ. This is a continuous time causal system because y(t) depends on x(τ) only for τ ≤ t.

58
(Refer Slide Time: 21:20)

On the other hand, if you have a discrete time system such that y(n) = (1/(2N+1)) Σ_{k=−N}^{N} x(n − k), then y(n) depends on x(n − N), x(n − N + 1), and so on up to x(n + N). So it also depends on future values of x(n), hence this is non-causal. Since the output depends not only on the past values but also on the future values of x(n), it is an example of a non-causal system.
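As a numerical illustration (not from the lecture), the sketch below contrasts a causal running sum with the non-causal centered moving average above; the impulse location and the window length N = 2 are arbitrary choices.

```python
import numpy as np

x = np.zeros(21)
x[10] = 1.0                      # an impulse applied at n = 10

# Causal system: y[n] = sum_{k <= n} x[k]  (output never precedes the input)
y_causal = np.cumsum(x)

# Non-causal system: centered moving average with N = 2,
# y[n] = (1/(2N+1)) * sum_{k=-N}^{N} x[n-k]  -> needs future samples x[n+1], x[n+2]
N = 2
y_noncausal = np.convolve(x, np.ones(2 * N + 1) / (2 * N + 1), mode="same")

print(np.nonzero(y_causal)[0][0])     # 10: causal output starts only when the input arrives
print(np.nonzero(y_noncausal)[0][0])  # 8: output is nonzero *before* the input at n = 10
```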

(Refer Slide Time: 23:17)

So we are looking at the classification of systems and we will stop here and continue this
discussion in the subsequent lectures. Thank you very much.

59
Principles of Signals and Systems
Prof. Aditya K. Jagannatham
Department of Electrical Engineering
Indian Institute of Technology, Kanpur

Lecture – 06
Linear Systems – Additivity/ Homogeneity Properties, Time Invariant Systems,
Linear Time Invariant (LTI) Systems, BIBO Stability

Keywords: Linear Systems, Additivity Property and Homogeneity Property, Time Invariant Systems, Linear Time Invariant (LTI) Systems, BIBO Stability, Feedback Systems

Hello, welcome to this module in this massive open online course. So we are looking at
the classification of systems and their properties. So let us continue our discussion on the
various kinds of systems. So another important class of systems is what is known as
linear systems.

(Refer Slide Time: 00:33)

Now the system T(.) is linear if it satisfies the following properties. The first property is
known as additivity.

60
(Refer Slide Time: 01:36)

Additivity is very simple. Consider two input signals x₁(t), x₂(t) such that T(x₁(t)) = y₁(t), that is, input x₁(t) produces y₁(t), and T(x₂(t)) = y₂(t), that is, input x₂(t) produces output y₂(t). The system satisfies additivity if this implies T(x₁(t) + x₂(t)) = y₁(t) + y₂(t), that is, the input x₁(t) + x₂(t) produces the output y₁(t) + y₂(t), and this is true for all possible inputs x₁(t) and x₂(t).

(Refer Slide Time: 03:16)

So if this is satisfied this implies the system is additive or the system satisfies additivity
property. Now similarly, the next property is what is known as homogeneity.

61
(Refer Slide Time: 04:45)

Homogeneity simply implies that if any input x(t) produces the output y(t), then T(αx(t)) = αy(t), where α is any scalar quantity: it can be a real number for real signals or a complex number for complex signals. So if the input is scaled by a scalar α, then the output is also scaled by the same scalar α. If the system satisfies this property, it satisfies the homogeneity property.

(Refer Slide Time: 06:04)

62
(Refer Slide Time: 06:22)

Now a linear system is simply one which satisfies both additivity and homogeneity.
Otherwise it is known as a non-linear system. That is if you add two input signals the
resulting output signal should correspond to the sum of the output signals of these two
input signals and if you scale an input signal the resulting output signal should be
correspondingly a scaled version of the output signal and a system that satisfies both
these is known as a linear system.

(Refer Slide Time: 08:32)

63
For example, let us take y(t) = dx(t)/dt. This is known as a differentiator, and you can see that this is a linear system.

(Refer Slide Time: 09:04)

Now if you take another example, y(t) = 2x²(t), this is a non-linear system. You can check that it will not satisfy the properties related to linearity. Another important class of systems is what is known as the time invariant systems.
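Before turning to time invariance, here is a quick numerical check of additivity and homogeneity (an illustration, not from the lecture) for the differentiator and for y(t) = 2x²(t); the test inputs and the scalar α are arbitrary choices.

```python
import numpy as np

t = np.linspace(0, 1, 1001)
dt = t[1] - t[0]
x1 = np.sin(2 * np.pi * 3 * t)
x2 = t ** 2
alpha = 2.5

differentiator = lambda x: np.gradient(x, dt)    # y(t) = dx/dt
squarer = lambda x: 2 * x ** 2                   # y(t) = 2 x^2(t)

def is_linear(T):
    additive    = np.allclose(T(x1 + x2), T(x1) + T(x2), atol=1e-6)
    homogeneous = np.allclose(T(alpha * x1), alpha * T(x1), atol=1e-6)
    return additive and homogeneous

print(is_linear(differentiator))   # True for these test inputs
print(is_linear(squarer))          # False: 2(x1+x2)^2 != 2x1^2 + 2x2^2 in general
```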

(Refer Slide Time: 09:59)

64
So time invariance implies that if T(x(t)) = y(t), that is, if input x(t) to the system gives output signal y(t), then T(x(t − t₀)) = y(t − t₀), and this holds for all shifts t₀ and for all input signals x(t). That is, for any shift t₀ of the input, what we get is a correspondingly shifted version of the output. If t₀ is positive it is a delay; if t₀ is negative it is an advance.

(Refer Slide Time: 12:13)

So if it satisfies this property such a system is said to be time invariant system otherwise
it is a time variant or a time varying system.

(Refer Slide Time: 13:03)

65
Let us take an example: if you have y(t) = ∫_{−∞}^{t} x(τ) dτ, you can show that this is a time invariant system. For instance, delay the input by t₀, that is, apply x(τ − t₀); let us call the resulting output ŷ(t), and let us see what the impact on the output signal is.

(Refer Slide Time: 14:31)

t t 0

Let us set   t0    d  d . So this integral becomes y (t )  



x( )d and you can

see that this is nothing but y(t  t0 ) . If I shift the input by t0 the output is correspondingly
shifted by t0 and hence this is a time invariant system.

66
(Refer Slide Time: 15:37)

On the other hand, for instance, for y(t) = T(x(t)) = t·x(t), you can show that this is a time variant or time varying system.
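A small numerical sketch (illustrative, not from the lecture) of this test: shift the input and compare with the shifted output, for the running integrator and for y(t) = t·x(t). The test pulse and the shift t₀ = 2 are arbitrary choices.

```python
import numpy as np

dt = 1e-3
t = np.arange(0, 10, dt)
x = lambda tt: np.exp(-((tt - 3.0) ** 2))        # a smooth pulse as the test input
shift = 2.0                                       # delay t0
k = int(shift / dt)

integrator = lambda sig: np.cumsum(sig) * dt      # y(t) = int_{-inf}^{t} x(tau) d tau
gain_t     = lambda sig: t * sig                  # y(t) = t * x(t)

def time_invariant(T):
    y_of_delayed_input = T(x(t - shift))          # feed the delayed input
    delayed_output = np.roll(T(x(t)), k)          # delay the original output
    # compare away from the start, where np.roll wraps around
    return np.allclose(y_of_delayed_input[k:], delayed_output[k:], atol=1e-3)

print(time_invariant(integrator))   # True: shifting the input just shifts the output
print(time_invariant(gain_t))       # False: the t * x(t) system is time varying
```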

(Refer Slide Time: 17:18)

And that brings us to the most important class of systems which is that of an LTI system
or a linear time invariant system which is probably the most important class of systems
that we most frequently encounter in practice and which we are going to be most
frequently interested in. As the name implies these are systems that are both linear as
well as time invariant. So these satisfy linearity as well as time invariant property.

67
(Refer Slide Time: 18:55)

(Refer Slide Time: 20:25)

Another class of systems that is also important is stable systems. Naturally, from a practical standpoint, we are interested in systems that are stable, and not in systems that are unstable or exhibit unstable behavior. So we have the following definition of stability, which is known as the BIBO or Bounded Input Bounded Output criterion. Systems that satisfy this criterion are known as BIBO stable systems.

68
(Refer Slide Time: 22:19)

And the definition is: if the input is such that |x(t)| ≤ C, where C is some constant, or for a discrete time system |x(n)| ≤ C, then such bounded inputs must produce bounded outputs, that is, |y(t)| = |T(x(t))| ≤ K or |y(n)| = |T(x(n))| ≤ K, where K is some constant; bounded means the magnitude is less than or equal to some constant.

(Refer Slide Time: 23:59)

So if for every bounded input, the output y(t) is bounded then such a system is known as
a BIBO stable system, that is every bounded input produces an output y(t) which is
bounded.

69
(Refer Slide Time: 25:52)

Again let us take an example here. So y(t) = ∫_{−∞}^{t} x(τ) dτ. Now, if I give a bounded input, that is |x(τ)| ≤ C, let us look at |y(t)|, which is equal to |∫_{−∞}^{t} x(τ) dτ|; but the magnitude of an integral is less than or equal to the integral of the magnitude of the function, or the signal, that is being integrated.

(Refer Slide Time: 26:43)

70
So this is |y(t)| ≤ ∫_{−∞}^{t} |x(τ)| dτ, but we know |x(τ)| ≤ C because we have a bounded input. Let us make this integral run from t − T rather than from −∞. So |y(t)| ≤ ∫_{t−T}^{t} |x(τ)| dτ ≤ ∫_{t−T}^{t} C dτ = CT, and now you can set this as your quantity K. So |x(τ)| ≤ C implies |y(t)| ≤ K, where K = CT. This implies the system is bounded input bounded output stable; it satisfies the BIBO stability criterion.

(Refer Slide Time: 28:03)

(Refer Slide Time: 28:28)

71
The final classification of systems which are also important are known as the feedback
systems. Here we have a system T and an input x(t). So what I am going to do is I take
the output y(t) either directly or after suitably modifying it, I feed it back to the input. So
this is basically your feedback loop. So I have an input, I have an output and this is a
feedback. So output is being fed back to the input.

(Refer Slide Time: 30:19)

This completes our discussion on classification of systems. So we have looked at various


kinds or various classes of systems along with suitable examples. These are especially
important to understand the behavior and properties of various systems and also
understand the various applications, the various important criteria from a practical
perspective. So we will stop here and look at other aspects in the subsequent modules.
Thank you very much.

72
Principles of Signals and Systems
Prof. Aditya K. Jagannatham
Department of Electrical Engineering
Indian Institute of Technology, Kanpur

Lecture – 07
Example Problems in Signals and Systems – Plot, Odd/ Even Components,
Periodicity

Keywords: Odd Components, Even Components, Periodicity

Hello, welcome to another module in this massive open online course. So we have
looked at a basic introduction to the Principles of Signals and Systems, we have looked
at various kinds of signals and systems or a classification of different kinds of signals
and systems. So let us do some problems to understand these concepts better.

(Refer Slide Time: 00:36)

73
(Refer Slide Time: 01:45)

Consider the x(t) given below and plot x(2 − t); the signal x(t) is given from −2 to 2.

(Refer Slide Time: 02:21)

We can approach this in a very simple fashion. The first thing we will do is define a new signal x̃(t) = x(−t); x̃(t) simply corresponds to flipping the signal about the y-axis, that is, taking a mirror image about 0.

74
(Refer Slide Time: 04:22)

Therefore, let us now plot x̃(t), a mirror image of the signal about 0, which looks as shown in the slide; it is non-zero only from −2 to 2. Now I am going to define another signal x̂(t) = x̃(t − 2). But x̃(t) = x(−t), so this will be x̂(t) = x(−(t − 2)), which is equal to x(2 − t). Now x̃(t − 2) is simply x̃(t) delayed by 2 seconds (t₀ = 2), which means I take x̃(t), that is x(−t), and shift it to the right, that is, delay it.

(Refer Slide Time: 06:43)

75
So previously the signal was non-zero from −2 to 2; now it is going to be non-zero from 0 to 4. Therefore, the net signal looks as shown in the slide, and this is your x̂(t) = x(2 − t). So the approach was: flip the signal about 0, then shift it to the right by 2, and that gives the solution.
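A short sketch (not from the lecture) of the same flip-and-shift procedure on a sample grid; the triangular x(t) used here is only an illustrative stand-in for the signal drawn on the slide.

```python
import numpy as np

# Work on a sample grid; x(t) here is a triangle supported on [-2, 2],
# an illustrative stand-in for the signal drawn on the slide.
dt = 0.01
t = np.arange(-6, 6, dt)
x = lambda tt: np.clip(2 - np.abs(tt), 0, None)

x_flip = x(-t)            # step 1: x~(t) = x(-t), mirror image about t = 0
x_final = x(-(t - 2))     # step 2: delay x~ by 2  ->  x~(t - 2) = x(2 - t)

# The support moves from [-2, 2] to [0, 4]
support = t[x_final > 0]
print(support.min(), support.max())     # approximately 0 and 4
```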

(Refer Slide Time: 08:19)

Now let us do another example. Say we are given the signal x(t) = e^t. This is the exponential signal, and we are required to find the even and odd components of x(t), that is, to express x(t) as x(t) = x_e(t) + x_o(t). So we need to find the even and odd components of the signal x(t).

76
(Refer Slide Time: 09:45)

The even component of x(t) can be obtained as x_e(t) = (x(t) + x(−t))/2; you can see this is an even signal because x_e(−t) = (x(−t) + x(t))/2 = x_e(t). So x_e(t) in this case will be x_e(t) = (e^t + e^{−t})/2, which is nothing but the hyperbolic cosine, cosh(t).
2

(Refer Slide Time: 11:29)

77
Similarly, the odd component of x(t) is given as x_o(t) = (x(t) − x(−t))/2, and you can see that x_o(−t) = (x(−t) − x(t))/2, which is nothing but −x_o(t), so this is indeed an odd signal.

(Refer Slide Time: 12:15)

e t  e t
And for this particular example, we have xe (t )  , that is sine hyperbolic of t,
2
sinh(t ) . Now you can verify that x(t )  e t  xe (t )  xo (t )  cosh(t )  sinh(t ) and this is the
final solution. So we have decomposed the signal into its even and odd components.
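A quick numerical verification (my own illustration, not from the lecture) of this decomposition:

```python
import numpy as np

t = np.linspace(-3, 3, 601)
x = lambda tt: np.exp(tt)                 # x(t) = e^t

xe = (x(t) + x(-t)) / 2                   # even part -> cosh(t)
xo = (x(t) - x(-t)) / 2                   # odd part  -> sinh(t)

print(np.allclose(xe, np.cosh(t)))        # True
print(np.allclose(xo, np.sinh(t)))        # True
print(np.allclose(xe + xo, x(t)))         # True: the parts add back to e^t
```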

(Refer Slide Time: 12:47)

78
(Refer Slide Time: 13:57)

Let us do another example. Consider the signal x(t) = e^{j(5π/8)t}. This is sampled with sampling interval T_s = 2/3. We want to find out whether the resulting signal is periodic or not, and if so, what its period is.

(Refer Slide Time: 15:52)

79
Now we denote the sampled discrete time signal as x(n) = x(nT_s). So this is the nth sample of the discrete time signal. This is going to be x(n·2/3), which is therefore equal to e^{j(5π/8)(2/3)n} = e^{j(5π/12)n}, and this is the resulting discrete time signal. Now this is periodic if there exists M such that x(n + M) = x(n), which means e^{j(5π/12)(n+M)} can be simplified as e^{j(5π/12)n} · e^{j(5π/12)M}.

(Refer Slide Time: 17:19)

80
(Refer Slide Time: 18:26)

So if this should be equal to x(n), it implies that e^{j(5π/12)M} = 1, which means (5π/12)M should be an integer multiple of 2π. This gives (5π/12)M = 2πk, which implies M = 24k/5.

(Refer Slide Time: 19:50)

81
Now if you observe this, you will notice that M must be an integer, which means 24k should be divisible by 5. The smallest k for which this holds is k = 5, which implies M = (24·5)/5 = 24. So x(n) is indeed periodic and M = 24 is the fundamental period.
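A small numerical check (not from the lecture) that 24 is indeed the smallest period of x(n) = e^{j(5π/12)n}:

```python
import numpy as np

n = np.arange(0, 200)
x = np.exp(1j * 5 * np.pi / 12 * n)       # x[n] = e^{j (5*pi/12) n}

# Find the smallest M > 0 with x[n+M] = x[n] for all n in the window
for M in range(1, 100):
    if np.allclose(x[M:], x[:-M]):
        print("fundamental period =", M)   # prints 24
        break
```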

(Refer Slide Time: 21:01)

So these examples have probably helped you to better understand the various properties
of signals and systems. So we will continue this in the subsequent modules. Thank you
very much.

82
Principles of Signals and Systems
Prof. Aditya K. Jagannatham
Department of Electrical Engineering
Indian Institute of Technology, Kanpur

Lecture – 08
Example problems in Signals and Systems – Energy, Properties of Impulse, RL
Circuit

Keywords: Example Problems -Energy Signal, Properties of Impulse, RL Circuit

Hello, welcome to another module in this massive open online course. So let us continue
doing examples to illustrate the various principles of signals and systems and particularly
with respect to the properties and classification of signals and systems.

(Refer Slide Time: 00:34)

So in the previous lecture, the signal is periodic and the fundamental period is M = 24.

83
(Refer Slide Time: 00:49)

Let us continue our discussion with various other examples for signals and systems based on the concepts that we have covered so far. Let us now consider the concept of an energy signal for the discrete time signal x(n) = α^n u(n). We need to check whether this is an energy signal and, if so, what its energy is.

(Refer Slide Time: 02:20)

84
Here u(n) is the unit step function, u(n) = 1 for n ≥ 0 and 0 for n < 0. So this is basically x(n) = α^n for n ≥ 0 and 0 for n < 0. The energy of a discrete time signal is E = Σ_{n=−∞}^{∞} |x(n)|², which is Σ_{n=0}^{∞} (α^n)².

(Refer Slide Time: 04:00)


This implies E = Σ_{n=0}^{∞} α^{2n}, which is basically 1 + α² + α⁴ + α⁶ + ... and so on. Recall that this is an infinite geometric progression with ratio, or incremental factor, α². So the sum equals 1/(1 − α²) if |α| < 1, and it is equal to infinity, that is, it diverges, if |α| ≥ 1.

85
(Refer Slide Time: 05:29)

So this implies it is an energy signal if |α| < 1, and in that case the energy is equal to 1/(1 − α²). Otherwise it is not an energy signal; its energy diverges, meaning it does not have finite energy.
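As a numerical check (my own illustration, not from the lecture), the sketch below compares the truncated sum with the closed form 1/(1 − α²) for α = 0.5, an arbitrary choice with |α| < 1.

```python
import numpy as np

alpha = 0.5                                   # any |alpha| < 1 gives an energy signal
n = np.arange(0, 200)                         # the tail beyond n ~ 200 is negligible

x = alpha ** n                                # x[n] = alpha^n u[n]
E_numeric = np.sum(np.abs(x) ** 2)
E_formula = 1 / (1 - alpha ** 2)              # closed form of the geometric series

print(E_numeric, E_formula)                   # both ~ 1.3333 for alpha = 0.5
```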

(Refer Slide Time: 06:48)

Let us now come to another problem: to evaluate the function δ(at). Recall that δ(t), for any function x(t), is such that ∫_{−∞}^{∞} x(t)δ(t) dt = x(t)|_{t=0} = x(0).


86
(Refer Slide Time: 07:40)


Now similarly, let us look at ∫_{−∞}^{∞} x(t)δ(at) dt. We can substitute at = λ, which means dt = dλ/a. Let us also consider a > 0, that is, a strictly positive. The integral therefore becomes ∫_{−∞}^{∞} x(λ/a) δ(λ) (dλ/a), and the limits remain the same. So this becomes (1/a) ∫_{−∞}^{∞} x(λ/a) δ(λ) dλ, which is (1/a) x(λ/a)|_{λ=0}.

(Refer Slide Time: 09:54)

87
 
Now this is equal to (1/a)·x(0). So ∫_{−∞}^{∞} x(t)δ(at) dt = ∫_{−∞}^{∞} x(t)·(1/a)δ(t) dt, and this equality holds for any arbitrary x(t). This implies δ(at) = (1/a)δ(t) for a > 0.
a

(Refer Slide Time: 12:11)

 
So this implies that ∫_{−∞}^{∞} δ(at) dt = ∫_{−∞}^{∞} (1/a)δ(t) dt = 1/a for a > 0, where δ(t) is the impulse function. Let us proceed to the next example.

(Refer Slide Time: 13:03)

88

  (t ) (t )dt , where  (t ) is any function such


So in this problem, we want to evaluate '


that lim  (t )   ()  0 . In general this can be any constant. Now here I can evaluate
t 

 

this by parts. So I will have   (t ) ' (t )dt   (t )  (t )     ' (t ) ' (t )dt . Now  (t ) at
 

t   is 0 and we have assumed  () . Similarly,  (t ) (t ) t   is equal to 0. So the

first term evaluates to 0. Now the second term is nothing but  ' (t ) . So this is  ' (0)
t 0
d
that is basically .
dt t  0

(Refer Slide Time: 15:28)

89
(Refer Slide Time: 16:27)

Let us now look at some examples related to the properties and classification of systems. Consider the RL circuit below: a current source connected in parallel with a resistance and an inductance. The source current is the input and the current through the inductor is the output. So we want to find an input-output relation for the RL circuit.

(Refer Slide Time: 19:03)

So you can observe that, by Kirchhoff's current law, i(t), the current of the source, equals the current in the resistance plus the current in the inductance. That is, simply by KCL, $i(t) = i_R(t) + i_L(t)$. Now we denote the voltage across the circuit by V(t); by Ohm's law, the current through the resistance is $i_R(t) = \frac{V(t)}{R}$, and from the property of the
90
inductor, the voltage across the inductor is $V(t) = L\frac{di_L(t)}{dt}$, which means $i(t) = \frac{L}{R}\frac{di_L(t)}{dt} + i_L(t)$.

(Refer Slide Time: 21:31)

So I finally have $x(t) = \frac{L}{R}\frac{dy(t)}{dt} + y(t)$, where x(t) is the input and y(t) is the output. So this is basically the input-output relation. This input-output relation is expressed as a differential equation, and we will see later how to solve it to explicitly derive an expression for the transfer function.

(Refer Slide Time: 22:14)

So right now this input-output relation between the input x(t) and the output y(t) is
represented as a differential equation in the time domain, which is a very valid way to

91
represent an input-output relation, and it can in fact be seen that this is a linear system, that is, it satisfies all the properties of an LTI system.
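As a rough illustration (the component values here are arbitrary assumptions, not from the lecture), the relation $\frac{L}{R}\frac{dy(t)}{dt} + y(t) = x(t)$ can be integrated numerically; for a unit-step input current the inductor current should rise toward 1 with time constant L/R:

```python
import numpy as np

L, R = 1.0, 2.0              # assumed component values (henry, ohm)
tau = L / R                  # time constant of the RL circuit
dt = 1e-4
t = np.arange(0.0, 5 * tau, dt)

x = np.ones_like(t)          # unit step input current i(t) = u(t)
y = np.zeros_like(t)         # inductor current i_L(t), initially at rest

# Forward-Euler integration of dy/dt = (R/L) * (x - y)
for k in range(len(t) - 1):
    y[k + 1] = y[k] + dt * (R / L) * (x[k] - y[k])

# Compare against the analytic step response 1 - exp(-t/tau)
print(np.max(np.abs(y - (1 - np.exp(-t / tau)))))   # small discretization error
```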

(Refer Slide Time: 24:17)

Let us look at another interesting problem. This is a system which is termed a modulator, another very important system in communications. Here $T(x(t)) = y(t) = x(t)\cos(2\pi F_c t)$. So this operation is known as modulation
and this frequency Fc is known as the carrier frequency. In fact, the process of modulating a signal by a certain carrier forms a very integral component, a key aspect, of any communication system. For instance, modern cellular communication systems such as GSM have carrier frequencies around 800 to 900 megahertz; modern 4G wireless systems have carrier frequencies in the gigahertz range, around 2.1 to 2.3 gigahertz; even broadcast systems such as FM have carrier frequencies around 90 to 100 megahertz; and amplitude modulation systems have carrier frequencies in the range of kilohertz, and so on.

So by modulating a signal with a particular carrier frequency, it can be transmitted in a certain frequency band specifically allocated for that kind of communication, for instance TV broadcast, AM radio, FM radio, or 2G, 3G and 4G communication. This forms an integral part of any wireless communication system, be it a

92
personal communication system such as your cellular telephony or be it a broadcast
communication system such as your radio or TV broadcast system.

So we want to examine whether this is an LTI system. We have solved several problems in this module, so we will stop here and continue looking at some other problems in the subsequent modules. Thank you very much.

93
Principles of Signals and Systems
Prof. Aditya K. Jagannatham
Department of Electrical Engineering
Indian Institute of Technology, Kanpur

Lecture – 09
Example Problems in Signals and Systems – Properties of Modulator,
Eigenfunction of LTI System

Keywords: Properties of Modulator, Eigenfunction

(Refer Slide Time: 00:25)

Hello, welcome to another module in this massive open online course. So we are looking
at examples of LTI systems or we are solving problems related to LTI systems. So let us
continue looking at LTI systems. So here we need to find out whether the modulation by a carrier, that is $y(t) = x(t)\cos(2\pi F_c t)$, is an LTI system or not. Remember, for a system to be an LTI system it has to be both linear and time invariant.

94
(Refer Slide Time: 01:46)

So let us first check linearity; to check linearity we have to see if the system is additive and homogeneous. So let us say $y_1(t) = T(x_1(t)) = x_1(t)\cos(2\pi F_c t)$. Similarly, $y_2(t) = T(x_2(t)) = x_2(t)\cos(2\pi F_c t)$. Now, $T(x_1(t) + x_2(t)) = (x_1(t) + x_2(t))\cos(2\pi F_c t)$.

(Refer Slide Time: 03:04)

This is again $y_1(t) + y_2(t)$, that is, $T(x_1(t)) + T(x_2(t))$. Therefore, it satisfies the additivity property.

95
(Refer Slide Time: 03:54)

Now the other property in linearity is homogeneity. So let us say again $T(x(t)) = x(t)\cos(2\pi F_c t) = y(t)$. Now $T(\alpha x(t)) = \alpha x(t)\cos(2\pi F_c t) = \alpha y(t) = \alpha T(x(t))$. Therefore the system satisfies the homogeneity property. So the system satisfies both the additivity and homogeneity properties and is therefore linear. So the modulation system is a linear system.

(Refer Slide Time: 05:30)

96
(Refer Slide Time: 06:54)

Now we need to check whether this system is time invariant or not. Let us look at
$T(x(t)) = x(t)\cos(2\pi F_c t) = y(t)$.

(Refer Slide Time: 07:26)

Now for time invariance we have to consider a delayed or shifted input $x(t-t_0)$. So $T(x(t-t_0)) = x(t-t_0)\cos(2\pi F_c t)$, which must be compared with $y(t-t_0)$. But $y(t-t_0) = x(t-t_0)\cos(2\pi F_c(t-t_0))$, so that is the problem.

97
(Refer Slide Time: 08:24)

Hence if you delay the input, the resulting output is not the output corresponding to the
previous input that is delayed similarly. Therefore the system is not time invariant. And
hence it is not an LTI system, because the system is LTI only if it is both linear and time
invariant.
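A small numerical check of this conclusion (purely illustrative; the carrier frequency, delay and test input below are all assumed values): delaying the input to the modulator and delaying the modulator output give different signals.

```python
import numpy as np

Fc = 10.0                              # assumed carrier frequency in Hz
dt = 1e-4
t = np.arange(0.0, 1.0, dt)
t0 = 0.03                              # assumed delay in seconds
n0 = int(round(t0 / dt))

x = np.sin(2 * np.pi * 1.0 * t)        # an arbitrary test input
T = lambda sig: sig * np.cos(2 * np.pi * Fc * t)   # the modulator

y = T(x)
out_of_delayed_input = T(np.roll(x, n0))           # T(x(t - t0)), circular shift as an approximation
delayed_output = np.roll(y, n0)                    # y(t - t0)

print(np.max(np.abs(out_of_delayed_input - delayed_output)))   # clearly non-zero => not time invariant
```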

(Refer Slide Time: 09:35)

Hence the modulation operation is not a linear time invariant system.

98
(Refer Slide Time: 10:27)

So let us do another example for an LTI system. Now let T(·) represent an LTI system; we need to show that if you consider any input which is a complex sinusoid, the output is some constant C times the input, that is $T(e^{j2\pi F_0 t}) = Ce^{j2\pi F_0 t}$.

(Refer Slide Time: 12:30)

Let T (e j 2 F0t )  y(t ) . Since it is a linear time invariant system, this implies

T (e j 2 F0 (t t0 ) )  y(t  t0 ) . But T (e j 2 F0 (t t0 ) )  T (e j 2 F0t .e j 2 F0t0 )  y(t  t0 ) .

99
(Refer Slide Time: 13:15)

Now e j 2 F0t0 is simply a scaling factor because F0 is assumed to be a constant and t0, the
delay is also assumed to be a constant. So this is similar to the scaling factor  .

(Refer Slide Time: 14:14)

So this implies e j 2 F0t0 T (e j 2 F0t )  y(t  t0 ) , but T (e j 2 F0t ) is nothing but y(t). So this

implies y(t )e j 2 F0t0  y(t  t0 ) and this holds for all t, t0.

100
(Refer Slide Time: 15:23)

Now set t = 0 and t0 = -t; this implies $y(0)\,e^{j2\pi F_0 t} = y(t)$. So this y(0) can be taken to be the constant C. So what we have shown is that $y(t) = Ce^{j2\pi F_0 t}$. So the output is simply a scaled version of the input. Such an input is known as an eigenfunction, and hence $e^{j2\pi F_0 t}$ is an eigenfunction of the LTI system.
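A quick numerical illustration of this eigenfunction property (the impulse response below is an arbitrary assumed LTI system, not one from the lecture): passing $e^{j2\pi F_0 t}$ through a convolution with some h(t) yields, away from the initial transient, a complex constant times the input.

```python
import numpy as np

dt = 1e-3
t = np.arange(0.0, 2.0, dt)
F0 = 5.0                                     # frequency of the complex sinusoid
x = np.exp(1j * 2 * np.pi * F0 * t)          # eigenfunction candidate

h = np.exp(-3 * t)                           # assumed impulse response h(t) = e^{-3t} u(t)
y = np.convolve(x, h)[:len(t)] * dt          # LTI output via a Riemann-sum convolution

# Away from the initial transient, y(t)/x(t) stays (nearly) the same constant C
ratio = y[1000:1500] / x[1000:1500]
print(ratio[0], ratio[-1])                   # approximately equal complex numbers
```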

(Refer Slide Time: 17:18)

Hence this is a very interesting property and we are going to explore it further through
the various modules. But it is important to realize that this complex sinusoid has a very

101
important relevance in the context of analysis of LTI systems because this is an eigen
function of any LTI system.

So with that we will wrap up our example section. So we have done several examples
which to some extent have covered various aspects of the principles of signals and
systems and we will look at other aspects in the subsequent modules. Thank you very
much.

102
Principles of Signals and Systems
Prof. Aditya K. Jagannatham
Department of Electrical Engineering
Indian Institute of Technology, Kanpur

Lecture - 10
Properties and Analysis of LTI Systems – Impulse Response, Response to Arbitrary
Input, Convolution and Properties

Keywords: Analysis of LTI Systems, Response to Arbitrary Input, Convolution

(Refer Slide Time: 00:27)

Hello, welcome to another module in this massive open online course. So in this module
we are going to start looking at a new topic that is the analysis of LTI systems.

(Refer Slide Time: 02:28)

103
Let us start with the concept of an impulse response. So the impulse response of an LTI
system is nothing but the output signal of the LTI system corresponding to an impulse
$\delta(t)$. So if you have an LTI system represented by the transformation T, to which we give the input $\delta(t)$, then the output is $h(t) = T(\delta(t))$. This is the impulse response of the system.

(Refer Slide Time: 04:08)

Now, we will see later that this h(t) has a very important role to play in determining the
output of the LTI system corresponding to any arbitrary input.

(Refer Slide Time: 05:14)

104
Now let us look at the response of the LTI system to an arbitrary input signal. Let us say
x(t) is the arbitrary input to an LTI system which is characterized by T.

(Refer Slide Time: 06:06)

So we need to find the corresponding output when an arbitrary input signal is given and
also given that T ( (t ))  h(t ) that is let us also assume that we know the impulse
response for this system.

(Refer Slide Time: 07:08)

105
Now first let us start by using the sifting property for a continuous time signal. Using the

sifting property, we can write $x(t) = \int_{-\infty}^{\infty} x(\tau)\,\delta(t-\tau)\,d\tau$.


(Refer Slide Time: 07:59)

Therefore, the output of the LTI system y(t) is given as $y(t) = T(x(t))$. Now I am going to use the sifting property to substitute the above expression for x(t), that is, $y(t) = T\!\left(\int_{-\infty}^{\infty} x(\tau)\,\delta(t-\tau)\,d\tau\right)$. Now this is a weighted combination of impulses.

(Refer Slide Time: 09:09)

106
So you can think of this approximately as a continuous sum, that is, a linear combination of several signals, the $\delta(t-\tau)$, with the $x(\tau)$ as the weights. And since this is a linear time invariant system, the output of a linear combination of signals equals the linear combination of the outputs. Therefore, this quantity can now be simplified as $y(t) = \int_{-\infty}^{\infty} x(\tau)\,T(\delta(t-\tau))\,d\tau$. That is, the output of this linear combination is basically the linear combination of the outputs corresponding to the $\delta(t-\tau)$ signals. From time invariance, the output to $\delta(t-\tau)$ is $h(t-\tau)$.

(Refer Slide Time: 12:04)

So we have used linearity and time invariance. Therefore, in the equation $y(t) = \int_{-\infty}^{\infty} x(\tau)\,T(\delta(t-\tau))\,d\tau$, the term $T(\delta(t-\tau))$ can be replaced by $h(t-\tau)$. Now we can further simplify this as $y(t) = \int_{-\infty}^{\infty} x(\tau)\,h(t-\tau)\,d\tau$, and this describes the output y(t) for any arbitrary input x(t).

107
(Refer Slide Time: 13:49)

Therefore, knowing the impulse response, one can completely characterize the output signal corresponding to any arbitrary input signal; keep in mind that this holds only for an LTI system. This is known as the convolution integral, or simply termed convolution.
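As a side illustration (not in the lecture), the convolution integral can be approximated on a grid by a discrete convolution scaled by the sample spacing; the rectangular input and the exponential impulse response below are arbitrary assumed examples.

```python
import numpy as np

dt = 1e-3
t = np.arange(0.0, 4.0, dt)

x = ((t >= 0) & (t < 1)).astype(float)    # a rectangular input pulse
h = np.exp(-2 * t)                        # assumed impulse response h(t) = e^{-2t} u(t)

# y(t) = integral of x(tau) h(t - tau) d tau, approximated by a Riemann sum
y = np.convolve(x, h)[:len(t)] * dt

# For 0 <= t < 1 the exact answer is (1 - exp(-2t)) / 2
t_check = t[t < 1]
print(np.max(np.abs(y[t < 1] - (1 - np.exp(-2 * t_check)) / 2)))   # small error
```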

(Refer Slide Time: 16:32)

So y(t) is simply written as x(t) convolved with h(t). That is $y(t) = x(t) * h(t)$, and this is the convolution operation, $y(t) = \int_{-\infty}^{\infty} x(\tau)\,h(t-\tau)\,d\tau$. The input signal x(t) is convolved with the impulse response h(t), which determines the output corresponding to

108
the arbitrary input signal x(t). So this convolution integral or this convolution operation
has an important role to play in the analysis of LTI systems. Let us look at some of the
salient properties of this convolution integral.

(Refer Slide Time: 18:22)

So we have the first property that convolution is commutative. This means that
x(t )  h(t )  h(t )  x(t ) for any two signals x(t) and h(t).

(Refer Slide Time: 19:38)

109

We have x(t) convolved with h(t), which from our definition above is equal to $\int_{-\infty}^{\infty} x(\tau)\,h(t-\tau)\,d\tau$. Now set $t - \tau = \sigma$, so that $d\tau = -d\sigma$; when $\tau = \mp\infty$, $\sigma = \pm\infty$. Further, $x(\tau)$ becomes $x(t-\sigma)$ and $h(t-\tau)$ becomes $h(\sigma)$. Here we have the integral going from $+\infty$ to $-\infty$ and there is a negative sign, which can be used to change the order of the limits. So this will be $\int_{-\infty}^{\infty} x(t-\sigma)\,h(\sigma)\,d\sigma$, which is basically again $\int_{-\infty}^{\infty} h(\sigma)\,x(t-\sigma)\,d\sigma$, which is nothing but h(t) convolved with x(t). So the convolution operation is commutative, that is $x(t) * h(t) = h(t) * x(t)$.

(Refer Slide Time: 22:23)

The next property of convolution is that the convolution operator is associative. This
implies if x(t) is convolved with h1(t) and subsequently convolved with h2(t) this is
equivalent to h1(t) convolved with h2(t) and subsequently convolved with x(t). That is
( x(t )  h1 (t ))  h2 (t )  x(t )  (h1 (t )  h2 (t )) .

110
(Refer Slide Time: 23:54)

The third property is the distributive nature of convolution. This implies that x(t) convolved with the sum of h1(t) and h2(t) is equal to the sum of x(t) convolved with h1(t) and x(t) convolved with h2(t). That is, $x(t) * (h_1(t) + h_2(t)) = x(t) * h_1(t) + x(t) * h_2(t)$. So these are the three fundamental properties of convolution.
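A brief numeric sanity check of these three properties (purely illustrative; the random finite-length sequences are assumptions): discrete convolutions are commutative, associative, and distributive up to floating-point error.

```python
import numpy as np

rng = np.random.default_rng(0)
x  = rng.standard_normal(50)
h1 = rng.standard_normal(40)
h2 = rng.standard_normal(40)

commutative  = np.allclose(np.convolve(x, h1), np.convolve(h1, x))
associative  = np.allclose(np.convolve(np.convolve(x, h1), h2),
                           np.convolve(x, np.convolve(h1, h2)))
distributive = np.allclose(np.convolve(x, h1 + h2),
                           np.convolve(x, h1) + np.convolve(x, h2))

print(commutative, associative, distributive)   # True True True
```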

(Refer Slide Time: 25:30)

Now let us look at a graphical representation of this convolution operation. We have $y(t) = \int_{-\infty}^{\infty} x(\tau)\,h(t-\tau)\,d\tau$. For instance, let us take $h(t) = e^{-at}u(t)$, that is a decreasing exponential, which means $h(\tau) = e^{-a\tau}u(\tau)$ is non-zero for $\tau \ge 0$. Now, let us look at $h(-\tau)$. So we are trying to graphically interpret this convolution operation. $h(-\tau)$ corresponds to flipping $h(\tau)$ about the y axis.

(Refer Slide Time: 27:38)

So h( )  ea u( ) .

(Refer Slide Time: 29:01)

Now if you shift it to the right, or delay it by t, you have $h(t-\tau)$ as shown in the slide. So we are basically first flipping about the y axis and shifting to the right by t, which gives

112
h(t   ) . Now multiply this by x( ) and integrate from  to  and do this for every
delay or every point. So in each point t you are computing the value of this convolution
and that basically is a graphical representation of this convolution operation.

So what we have seen in this module is we have started looking at the properties and
started to analyze LTI systems. We have introduced the concept of impulse response and
described how the impulse response can be used to characterize the response of an LTI
system to any arbitrary input signal x(t) and following that we have looked at several
properties of convolution and the graphical representation. So we will stop here. Thank
you very much.

113
Principles of Signals and Systems
Prof. Aditya K. Jagannatham
Department of Electrical Engineering
Indian Institute of Technology, Kanpur

Lecture – 11
Properties and Analysis of LTI Systems – Memoryless Systems, Causality, Stability,
Eigen function
Keywords: Analysis of LTI Systems, Memoryless Systems, Causality, Stability

Hello, welcome to another module in this massive open online course. So we are looking
at the properties and analysis of LTI systems, so let us continue this discussion.

(Refer Slide Time: 00:24)

So we want to look at further properties and analysis of LTI systems; let us specifically look at memoryless systems.

114
(Refer Slide Time: 01:06)

Now remember that in a memoryless system we have the output $y(t) = T(x(t)) = Kx(t)$, where K is some constant, which implies the output depends only on the input at the current time instant and does not depend on the input at past or future time instants. So that is known as a memoryless system, and the corresponding impulse response of the system would therefore be $h(t) = K\delta(t)$, because the output depends only on the current input.

(Refer Slide Time: 02:39)

115
So it basically says that $h(t) = 0$ if $t \ne 0$. So that is the characterization of a memoryless system in terms of its impulse response.

(Refer Slide Time: 04:55)

Now another important property of LTI system is causality.

(Refer Slide Time: 05:19)


Now remember for any LTI system we have the output $y(t) = \int_{-\infty}^{\infty} x(t-\tau)\,h(\tau)\,d\tau$, where $h(\tau)$ is the impulse response of the LTI system. Now for a causal system the output should depend only on x(t) and past values of x(t). So it should not depend on the future

116
values of x(t). So this implies $h(\tau) = 0$ for $\tau < 0$, because if $h(\tau) \ne 0$ for $\tau < 0$ then the integral will also pick up negative values of $\tau$, which correspond to future values of the input.

(Refer Slide Time: 07:59)

So h( )  0 for   0 . So this is basically denotes the property of a causal LTI system

and as a result this integral can be simplified as y (t )   x(t   )h( )d . So now let us
0

look at another property, non causality.

(Refer Slide Time: 09:39)

117
A system is non-causal if $h(\tau) \ne 0$ for some $\tau < 0$, and the system is anti-causal if $h(\tau) = 0$ for $\tau > 0$; in this case of an anti-causal system, y(t) depends only on future values of x(t). Let us now look at another important aspect, that is stability.

(Refer Slide Time: 11:40)

In particular, let us look at BIBO stability, that is bounded input, bounded output.

(Refer Slide Time: 12:38)

BIBO stability means that if the input is bounded, that is $|x(t)| \le C$ for all t, where C is some finite value, then the system output must be less than or equal to some other constant

118
K for all t. So this basically implies that for any bounded input, correspondingly the
output should also be bounded. If the system satisfies this property this is known as a
BIBO stable system. Now we will derive an important condition on the impulse response for the system to be BIBO stable.

(Refer Slide Time: 14:47)

We will show that the system is BIBO stable if the impulse response is absolutely integrable, that is $\int_{-\infty}^{\infty} |h(t)|\,dt < \infty$. Now let us assume that we have a bounded input, that is $|x(t)| \le C$, and let us also assume that $\int_{-\infty}^{\infty} |h(t)|\,dt = \gamma$, where $\gamma$ is some finite constant. Now we have to show that the output is bounded.

(Refer Slide Time: 16:07)

119
(Refer Slide Time: 16:50)

Now the output for any arbitrary input signal x(t) is given in terms of the impulse response as $y(t) = \int_{-\infty}^{\infty} x(t-\tau)\,h(\tau)\,d\tau$, and therefore $|y(t)| = \left|\int_{-\infty}^{\infty} x(t-\tau)\,h(\tau)\,d\tau\right|$. Now the magnitude of an integral is less than or equal to the integral of the magnitude of the quantity being integrated. So $|y(t)| \le \int_{-\infty}^{\infty} |x(t-\tau)|\,|h(\tau)|\,d\tau$, but $|x(t-\tau)| \le C$, since we had assumed this from the bounded input property. This implies $|y(t)| \le C\int_{-\infty}^{\infty} |h(\tau)|\,d\tau$, and we had also assumed $\int_{-\infty}^{\infty} |h(t)|\,dt = \gamma$.

120
(Refer Slide Time: 18:47)

So this basically implies that y(t )  C which basically implies that the output y(t) is

also bounded. So what we have able to demonstrate is that if 

h(t ) dt   then every

bounded input will produce a bounded output for the LTI system. So this is the condition
for the LTI system to be BIBO stable.
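As an illustrative check of this condition (the two impulse responses below are assumed examples, not from the lecture): $h(t) = e^{-t}u(t)$ is absolutely integrable, while $h(t) = u(t)$ is not, so only the first corresponds to a BIBO stable system.

```python
import numpy as np

dt = 1e-3
t = np.arange(0.0, 100.0, dt)

h_stable   = np.exp(-t)          # e^{-t} u(t): integral of |h| converges to 1
h_unstable = np.ones_like(t)     # u(t): integral of |h| grows without bound

print(np.sum(np.abs(h_stable)) * dt)     # ~1, finite
print(np.sum(np.abs(h_unstable)) * dt)   # ~100, and it keeps growing with the window length
```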

(Refer Slide Time: 20:07)

Now another important aspect that we can take a look at is that of the Eigen functions of
an LTI system.

121
(Refer Slide Time: 20:48)

Consider $x(t) = e^{\alpha t}$, where $\alpha$ is some constant. Now for an LTI system we know the output is $y(t) = \int_{-\infty}^{\infty} x(t-\tau)\,h(\tau)\,d\tau$. So $y(t) = \int_{-\infty}^{\infty} e^{\alpha(t-\tau)}\,h(\tau)\,d\tau$. Here $e^{\alpha t}$ is a constant that does not depend on the integration variable $\tau$. So this is $y(t) = e^{\alpha t}\int_{-\infty}^{\infty} h(\tau)\,e^{-\alpha\tau}\,d\tau$.
(Refer Slide Time: 22:05)

122

 h( )e d  H ( ) . So the output depends only on the impulse response h(t) and

Now


the constant  . So this implies T (e t )  H ( )e t that is when the input is e t the output
is some scaling factor times the input signal.

(Refer Slide Time: 22:51)

So the output is a scaled version of the input. Recall, from basic linear algebra and the properties of matrices, the case where you have a matrix A and a vector x such that $Ax = \lambda x$; a vector satisfying this property is termed an eigenvector.

123
(Refer Slide Time: 24:24)

So this x is called an eigenvector and $\lambda$ is called an eigenvalue. Similarly, what we are observing here is that the output is simply a scaled version of the input $e^{\alpha t}$. Therefore $e^{\alpha t}$ is known as an eigenfunction of any LTI system, and these eigenfunctions play a very important role, because the quantity $H(\alpha) = \int_{-\infty}^{\infty} h(\tau)\,e^{-\alpha\tau}\,d\tau$ is a transform of the impulse response h(t), and there can be several such transforms depending on the nature of $\alpha$ that is chosen.

(Refer Slide Time: 25:54)

124
So there can be various transforms and these transforms and Eigen functions are very
fundamental to the analysis of LTI systems.

(Refer Slide Time: 27:57)

So Eigen functions are very important for the understanding of LTI systems. So in this
module we have continued our discussion on the properties and analysis of LTI systems.
So we have looked at several properties, let us stop here and we will continue this
discussion in the subsequent modules. Thank you very much.

125
Principles of Signals and Systems
Prof. Aditya K. Jagannatham
Department of Electrical Engineering
Indian Institute of Technology, Kanpur

Lecture – 12
Properties and Analysis of LTI Systems – Differential Equation Description,
Linearity and Time Invariance

Keywords: Differential Equation Description, Linearity and Time Invariance

Hello, welcome to another module in this massive open online course. So we are looking
at the properties and analysis of LTI systems. So let us look at another aspect that is the
differential equation representation of LTI systems.

(Refer Slide Time: 00:31)

We have seen previously in an example of LTI systems that the input output relationship
of an LTI system can be represented by a differential equation.

126
(Refer Slide Time: 01:49)

(Refer Slide Time: 02:44)

The general structure or canonical form of such a representation is
$$\sum_{m=0}^{M} a_m \frac{d^m y(t)}{dt^m} = \sum_{n=0}^{N} b_n \frac{d^n x(t)}{dt^n}.$$
Here x(t) is the input to the system and y(t) is the output, and we will see under what conditions this represents an LTI system. For example, we have a current source, an inductor and a resistor.

127
(Refer Slide Time: 04:13)

The source current is i(t), the current through the inductor is denoted by $i_L(t)$, and the current through the resistance is $i_R(t)$. We have shown that the input-output relationship of this system can be described by the differential equation $\frac{L}{R}\frac{di_L(t)}{dt} + i_L(t) = i(t)$.

(Refer Slide Time: 05:16)

So this is a simple circuit comprising a current source in parallel with an inductor and a resistance. Now since $i_L(t)$ is the output y(t) and i(t) is the input x(t), I can
128
equivalently write this as $\frac{L}{R}\frac{dy(t)}{dt} + y(t) = x(t)$. So this is the differential equation representation of the input-output relationship of this system.
representation of the input output relationship of this system.

(Refer Slide Time: 06:56)

Now the solution for this differential equation representation can be written as $y(t) = y_p(t) + y_h(t)$. So the output signal can be described as the sum of two signals: $y_p(t)$, termed the particular solution, and $y_h(t)$, termed the homogeneous solution.

(Refer Slide Time: 08:36)

129
So any solution y(t) can be expressed as the sum of a particular solution and a homogeneous solution.

(Refer Slide Time: 09:44)

The homogeneous solution is given by $\sum_{m=0}^{M} a_m \frac{d^m y_h(t)}{dt^m} = 0$. So you are taking the left hand side of the differential equation and setting it to 0. We need M auxiliary conditions along with this to solve it, and these M auxiliary conditions, termed the initial conditions or boundary conditions, determine the solution. These auxiliary conditions are given as the values of the output signal and its derivatives of various orders.

130
(Refer Slide Time: 12:46)

Now we need to find out when the system described by the above differential equation is linear. It is linear if all the auxiliary conditions are equal to 0.

(Refer Slide Time: 14:35)

Now for the system to be time invariant this system has to be at initial rest.

131
(Refer Slide Time: 16:25)

The initial rest condition basically implies that if $x(t) = 0$ for $t \le t_0$, then we assume $y(t) = 0$ for $t \le t_0$.

(Refer Slide Time: 17:37)

Therefore the initial conditions become $y(t_0) = \frac{dy}{dt}\Big|_{t=t_0} = \cdots = 0$; that is, the value of the output signal at $t_0$ and its derivatives of various orders up to the (M-1)th are equal to 0. So for the system to be time invariant, it has to satisfy the initial rest

132
condition. So, if the auxiliary conditions are 0 it becomes a linear system; if it satisfies the initial rest condition it becomes a time invariant system; and if it satisfies both, naturally it will be a linear time invariant system.

So that completes our discussion on the differential equation representation of a linear


system. So we will stop here and look at other aspects in the subsequent modules. Thank
you very much.

133
Principles of Signals and Systems
Prof. Aditya K. Jagannatham
Department of Electrical Engineering
Indian Institute of Technology, Kanpur

Lecture – 13
Properties of Discrete Time LTI Systems – Impulse Response, Stability,
Eigenfunction, Systems Described by Difference Equation

Keywords: Impulse Response, Stability, Eigenfunction, Systems Described by Difference


Equation

Hello, welcome to another module in this massive open online course. So we are looking
at the properties of LTI systems. So far we have looked at the properties of continuous
LTI systems. Let us now look at the properties of discrete LTI systems. Several
properties that we have looked at in the context of continuous LTI systems can be
extended in a straightforward manner to the scenario of discrete LTI systems.

(Refer Slide Time: 00:40)

Let us start with the fundamental quantity that is the impulse response.

134
(Refer Slide Time: 01:25)

The response of a discrete LTI system to the unit impulse $\delta(n)$ is the impulse response $h(n) = T(\delta(n))$.

(Refer Slide Time: 02:56)

Similar to what we have seen in the context of continuous time LTI system this is of
fundamental importance because the impulse response helps characterize the output for
any arbitrary input signal x(n) and this is given by the convolution sum.

135

So the response to an arbitrary input signal x(n) is $y(n) = \sum_{m=-\infty}^{\infty} x(m)\,h(n-m)$, which is $\sum_{m=-\infty}^{\infty} h(m)\,x(n-m)$, and this is known as the convolution sum.
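A small sketch of this convolution sum in code (the short sequences below are arbitrary assumed examples, not from the lecture); numpy's convolve implements exactly this finite-support sum.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])       # x(n) for n = 0, 1, 2 (zero elsewhere)
h = np.array([1.0, -1.0, 0.5])      # h(n) for n = 0, 1, 2 (zero elsewhere)

# Direct evaluation of y(n) = sum_m x(m) h(n - m)
N = len(x) + len(h) - 1
y_direct = np.zeros(N)
for n in range(N):
    for m in range(len(x)):
        if 0 <= n - m < len(h):
            y_direct[n] += x[m] * h[n - m]

print(y_direct)
print(np.convolve(x, h))            # identical result
```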

(Refer Slide Time: 05:05)

The convolution is commutative, that is $x(n) * h(n) = h(n) * x(n)$. Now a memoryless discrete LTI system, in which the output depends only on the current input and does not depend on past or future inputs, is characterized by an impulse response $h(n) = k\,\delta(n)$.

136
(Refer Slide Time: 06:48)

Now the discrete LTI system is causal, which means the output depends only on the present and past values of the input, if $h(n) = 0$ for $n < 0$.

(Refer Slide Time: 08:09)


Now the discrete time LTI system is BIBO stable if $\sum_{n=-\infty}^{\infty} |h(n)| < \infty$, that is, if the impulse response is absolutely summable (the sum is a finite quantity).

137
(Refer Slide Time: 09:22)

Now let us look at the Eigen function of discrete time LTI system.

(Refer Slide Time: 10:53)


Consider the function x(n)  z n . Now T ( x(n))   h(m) z
m 
( nm)
where h(m) is the


impulse response. Now this will simply be  z n  h ( m) z
m 
m
and is a function that

depends only on the impulse response and is H(z).

138
(Refer Slide Time: 12:46)

So this is $H(z)\,z^n$: if the input x(n) is $z^n$, the output is simply $H(z)\,z^n$, and therefore $z^n$ is an eigenfunction, because the output is simply a scaled version of the input. Now, similar to a continuous time LTI system, which can be described by a differential equation, a discrete time LTI system can be described by a difference equation.

(Refer Slide Time: 15:15)

139
The general structure of a difference equation is $\sum_{m=0}^{M} a_m\,y(n-m) = \sum_{m=0}^{N} b_m\,x(n-m)$, where x(n) is the input and y(n) is the output. This general form of a difference equation can be used to describe discrete time LTI systems.

So now let us start looking at some examples to better understand the properties of both
continuous time as well as discrete time LTI systems that we have looked so far in the
various modules.

(Refer Slide Time: 18:30)

Let us look at a very simple problem for a continuous time LTI system that is to simplify
x(t) convolved with shifted or delayed version of the unit step function that is
x(t )  u(t  t0 ) .

140
(Refer Slide Time: 19:28)


So this can be simplified as follows: $x(t) * u(t-t_0) = \int_{-\infty}^{\infty} x(\tau)\,u(t - t_0 - \tau)\,d\tau$. Now remember $u(t - t_0 - \tau)$ is non-zero only for $t - t_0 - \tau \ge 0$, that is $\tau \le t - t_0$. Therefore, this can be simplified as $\int_{-\infty}^{t-t_0} x(\tau)\,d\tau$.

(Refer Slide Time: 21:36)

141
Now let us look at another simple problem: given the input signal x(t) = u(t) and a system with impulse response $h(t) = e^{-t}u(t)$, what is the corresponding output signal y(t)? Now, as you know, the output for an arbitrary input is given by the
convolution of the input with the impulse response h(t).

(Refer Slide Time: 23:13)

 

 h( ) x(t   )d  e u( )u (t   )d . Now u ( ) is



This is basically y(t )  x(t )  h(t ) 
 

non-zero only for   0 and u(t   ) is non-zero only for t   0    t . So the


t

e

equivalent limits will be from 0 to t. So this integral can be replaced by d if t  0
0

and equal to 0 if t < 0.

142
(Refer Slide Time: 25:29)

Now let us simplify this integral: $y(t) = \int_{0}^{t} e^{-\tau}\,d\tau = -e^{-\tau}\Big|_{0}^{t} = 1 - e^{-t}$ when $t \ge 0$, and 0 otherwise.

(Refer Slide Time: 26:34)

So in this module we have looked at the properties of discrete time LTI systems and we
have also started exploring some example problems to illustrate the application of the
various concepts that we have learnt regarding the properties of continuous as well as discrete time LTI systems. And we will be continuing this for at least a few of the

143
upcoming subsequent modules in which we are going to look at further problems
regarding the analysis of both continuous as well as discrete time LTI systems. Thank
you very much.

144
Principles of Signals and Systems
Prof. Aditya K. Jagannatham
Department of Electrical Engineering
Indian Institute of Technology, Kanpur

Lecture - 14
Example Problems LTI Systems-Convolution, Periodic Convolution, BIBO
Stability

Keywords: Convolution, Periodic Convolution, BIBO Stability

Hello welcome to another module in this massive open online course. So we are looking
at example problems in the analysis of continuous time and discrete time LTI systems.

(Refer Slide Time: 00:30)

Let us start looking at another problem where we are given $x(t) = u(t) - u(t-1)$, where
u(t) is the unit step function.

145
(Refer Slide Time: 01:02)

We need to find out what is $y(t) = x(t) * x(t)$. Let us try plotting u(t), which is given by 1 if $t \ge 0$ and 0 otherwise.

(Refer Slide Time: 02:11)

Now u(t - 1) is the unit step function delayed by 1. And therefore we can plot x(t), which will be given as
$$x(t) = \begin{cases} 1 & 0 \le t \le 1 \\ 0 & \text{otherwise} \end{cases}$$
So it is a pulse of height 1 between 0 and 1.

146
(Refer Slide Time: 03:50)

And now we want to convolve x(t) with itself.

(Refer Slide Time: 04:52)

Now to convolve we need to flip x(t) about 0. So let us relabel the axis as the $\tau$ axis rather than the t axis; we make this $x(\tau)$, and $x(-\tau)$ will be $x(\tau)$ flipped about the y axis.

147
(Refer Slide Time: 05:53)

And x(t   ) will be delayed version of this by t. Now multiply this with x( ) and
integrate it from 0 to infinity. So this is basically the area of overlap between these two
pulses.

(Refer Slide Time: 07:00)


So this will be $\int_{0}^{\infty} x(\tau)\,x(t-\tau)\,d\tau = 1 \cdot t = t$. Therefore, this increases linearly with t for $0 \le t \le 1$. Now for $t \ge 1$ it decreases linearly until it reaches 0.

148
(Refer Slide Time: 08:45)

So as shown in the slide, for $0 \le t \le 1$ the value is t; the height at t = 1 corresponds to the maximum overlap, which is 1, and the decreasing portion is 2 - t. So $y(t) = x(t) * x(t)$ can be described as
$$y(t) = \begin{cases} 1 - |t - 1| & 0 \le t \le 2 \\ 0 & \text{otherwise} \end{cases}$$
So this is basically the triangular pulse. So the convolution of two square pulses of equal width and height yields a triangular pulse.
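A quick numerical confirmation of this (not in the lecture): convolving the unit pulse with itself on a grid reproduces the triangle $1 - |t-1|$ on $[0, 2]$.

```python
import numpy as np

dt = 1e-3
t = np.arange(0.0, 3.0, dt)
x = ((t >= 0) & (t <= 1)).astype(float)        # unit pulse of width 1

y = np.convolve(x, x)[:len(t)] * dt            # x(t) * x(t), Riemann-sum approximation

tri = np.where((t >= 0) & (t <= 2), 1 - np.abs(t - 1), 0.0)   # expected triangular pulse
print(np.max(np.abs(y - tri)))                 # small discretization error
```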

(Refer Slide Time: 10:46)

Let us now look at the next problem which is that of a periodic convolution between two
periodic signals. Let $x_1(t), x_2(t)$ be periodic with a common period $T_0$.

149
(Refer Slide Time: 11:58)

The periodic convolution is defined as $f(t) = \int_{0}^{T_0} x_1(\tau)\,x_2(t-\tau)\,d\tau$, that is, convolution over one period, and it is represented as $x_1(t) \circledast x_2(t)$; this is termed periodic convolution.

(Refer Slide Time: 13:37)

We need to show that this periodic convolution remains unchanged if you shift the interval of integration by $t_0$. That is, if you denote the shifted-interval result by $\tilde{f}(t)$, we need to show that it is equal to f(t).

150
(Refer Slide Time: 14:57)

Let us set t ( )  x1 ( ) x2 (t   ) for any given value of t. Now

t (  T0 )  x1 (  T0 )  x2 (t  (  T0 )) , which is x1 (  T0 ).x2 (t    T0 ) .

(Refer Slide Time: 16:23)

Now x1 (  T0 ) is shifting by period T0 and this is equal to x1 ( ).x2 (t   ) which is also

equal to t ( ) . So what we have shown is t (  T0 )  t ( ) which means t ( ) is periodic


T0

which means integral over any period is same. So you have   ( )d , you can shift it by
0
t

151
any constant quantity $t_0$. So this gives
$$f(t) = \int_{0}^{T_0} \phi_t(\tau)\,d\tau = \int_{t_0}^{t_0+T_0} \phi_t(\tau)\,d\tau = \int_{t_0}^{t_0+T_0} x_1(\tau)\,x_2(t-\tau)\,d\tau = \tilde{f}(t).$$

(Refer Slide Time: 18:16)

This follows because the integrand is a periodic function of $\tau$. So this basically shows that the periodic convolution remains unchanged when you integrate over any time interval of duration $T_0$.

(Refer Slide Time: 19:50)

152
Let us look at the next example, which considers the system with the impulse response $h(t) = e^{-t^2/2}$, $-\infty < t < \infty$. We need to find out whether this system is BIBO stable or not. For BIBO stability we need $\int_{-\infty}^{\infty} |h(t)|\,dt < \infty$, that is, the impulse response must be absolutely integrable.

(Refer Slide Time: 21:29)

Now if you look at $e^{-t^2/2}$, it is basically a Gaussian pulse; this is a standard signal, known as the Gaussian pulse or the bell-shaped function.

153
(Refer Slide Time: 22:24)

Now to check BIBO stability, $\int_{-\infty}^{\infty} |h(t)|\,dt = \int_{-\infty}^{\infty} \left|e^{-t^2/2}\right|\,dt$, but the integrand is always positive. So this is simply $\int_{-\infty}^{\infty} e^{-t^2/2}\,dt$. Now we multiply by $\sqrt{2\pi}$ outside the integral and by $\frac{1}{\sqrt{2\pi}}$ inside the integral, and this is $\sqrt{2\pi}\int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}}\,e^{-t^2/2}\,dt$. Now the quantity inside this integral is a Gaussian probability density function: recall that $\frac{1}{\sqrt{2\pi\sigma^2}}\,e^{-\frac{(t-\mu)^2}{2\sigma^2}}$ is a Gaussian probability density function with mean $\mu$ and variance $\sigma^2$, and $\int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi\sigma^2}}\,e^{-\frac{(t-\mu)^2}{2\sigma^2}}\,dt = 1$. Here you can see that the variance is $\sigma^2 = 1$ and the mean is $\mu = 0$. So this integral is 1, and hence the net integral is $\sqrt{2\pi}$, which is a finite quantity.
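A one-line numeric confirmation of this value (not part of the lecture): the absolute integral of the Gaussian pulse is $\sqrt{2\pi} \approx 2.5066$.

```python
import numpy as np

dt = 1e-3
t = np.arange(-20.0, 20.0, dt)                 # the Gaussian tail is negligible beyond |t| = 20
integral = np.sum(np.abs(np.exp(-t**2 / 2))) * dt

print(integral, np.sqrt(2 * np.pi))            # both ~2.5066, so the system is BIBO stable
```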

154
(Refer Slide Time: 24:58)

Hence the system with impulse response given by the Gaussian pulse is indeed BIBO stable. Let us now proceed to the next example: consider the LTI system where the output is $y(t) = \int_{-\infty}^{\infty} e^{-j\omega_0(t-\tau)}\,x(\tau)\,d\tau$.


(Refer Slide Time: 25:58)

155

So we need to find the impulse response h(t). Now $\int_{-\infty}^{\infty} e^{-j\omega_0(t-\tau)}\,x(\tau)\,d\tau$ can be written as $\int_{-\infty}^{\infty} h(t-\tau)\,x(\tau)\,d\tau$, which is basically $h(t) * x(t)$, and hence this implies that the impulse response is $e^{-j\omega_0 t}$.

(Refer Slide Time: 29:21)

So the impulse response of this system is $h(t) = e^{-j\omega_0 t}$. So we will stop this module here and
continue with other similar examples in the subsequent modules. Thank you very much.

156
Principles of Signals and Systems
Prof. Aditya K. Jagannatham
Department of Electrical Engineering
Indian Institute of Technology, Kanpur

Lecture – 15
Example Problems LTI Systems – Eigenfunctions, System Described by differential
Equation, Homogenous and Particular Solution

Keywords: Eigenfunctions, System Described by differential Equation, Homogenous and


Particular Solution

Hello, welcome to another module in this massive open online course. So we are looking
at problems on the analysis of LTI systems.

(Refer Slide Time: 00:28)

So we are looking at some example problems to illustrate the analysis and properties of LTI systems. Before I proceed further, let me just modify the previous problem slightly: I am simply going to change the integral to $\int_{-\infty}^{\infty} e^{-j\omega_0(t-\tau)}\,u(t-\tau)\,x(\tau)\,d\tau$.


157
(Refer Slide Time: 01:24)

So now you can see that $e^{-j\omega_0(t-\tau)}u(t-\tau)$ is your $h(t-\tau)$, and the integral is $h(t) * x(t)$. So the impulse response is $h(t) = e^{-j\omega_0 t}\,u(t)$.

(Refer Slide Time: 02:28)

158
Now we need to find the eigenvalue corresponding to $e^{st}$. I can write the output as x(t) convolved with h(t), which means it is $\int_{-\infty}^{\infty} e^{-j\omega_0\tau}\,u(\tau)\,x(t-\tau)\,d\tau$; now, since $u(\tau)$ is non-zero only for $\tau \ge 0$, this can be equivalently written as $\int_{0}^{\infty} e^{-j\omega_0\tau}\,x(t-\tau)\,d\tau$.

(Refer Slide Time: 03:52)

Now I am going to substitute the function or signal $x(t) = e^{st}$, and therefore this becomes $\int_{0}^{\infty} e^{-j\omega_0\tau}\,e^{s(t-\tau)}\,d\tau$. Now we take $e^{st}$ outside. So the integral becomes $e^{st}\int_{0}^{\infty} e^{-(s+j\omega_0)\tau}\,d\tau$.
159
(Refer Slide Time: 05:14)


e ( s  j0 )
This is est
. Now at  this is 0 if s  0 . So this is equal
( s  j0 ) 0

 1   1   1 
to e st  0   . So this is e  
st
 and this   is the Eigen value if
 ( s  j0 )   s  j0   s  j0 
real part of s  0 .

(Refer Slide Time: 06:43)

So this is the eigenvalue corresponding to the eigenfunction $e^{st}$.

160
(Refer Slide Time: 07:38)

So let us look at another interesting problem, which is to determine the differential equation describing a system; for instance, consider the system shown in the slide. The input is x(t); the output of the integrator is multiplied by a scaling constant, fed back, and subtracted from the input to give the error signal, which in turn drives the integrator.

(Refer Slide Time: 09:38)

So we have to determine the differential equation representation of the system above. We have $x(t) - \alpha y(t) = e(t)$, and e(t) is passed through an integrator to yield y(t), which means

161
$\int_{0}^{t} e(\tau)\,d\tau = y(t)$. Now, differentiating both sides of the above equation, we have $e(t) = \frac{dy(t)}{dt}$, which means $x(t) - \alpha y(t) = \frac{dy(t)}{dt}$.
dt dt

(Refer Slide Time: 11:32)

(Refer Slide Time: 11:50)

162
This implies that $\frac{dy(t)}{dt} + \alpha y(t) = x(t)$, and this is the differential equation describing the above system. We need to solve this to get the output response. So let us consider the auxiliary condition y(0) = y0 and the input $x(t) = Ce^{-\beta t}u(t)$.

(Refer Slide Time: 13:40)

So the solution for the output y(t) of any system described by this differential equation can be written as the sum of two components, yp(t) and yh(t), where yp(t) is a particular solution and yh(t) is the homogeneous solution.

(Refer Slide Time: 15:08)

163
So we are going to start with the assumption that yp(t) has the same structure as the input signal, that is $y_p(t) = Ke^{-\beta t}$.

(Refer Slide Time: 16:28)

Now we have $\frac{dy_p(t)}{dt} + \alpha y_p(t) = x(t)$; this implies $-K\beta e^{-\beta t} + \alpha K e^{-\beta t} = Ce^{-\beta t}$. So this implies that $K(\alpha - \beta)e^{-\beta t} = Ce^{-\beta t}$. So we have $K(\alpha - \beta) = C$, which in turn implies $K = \frac{C}{\alpha-\beta}$. So we have the particular solution $y_p(t) = \frac{C}{\alpha-\beta}\,e^{-\beta t}$.

(Refer Slide Time: 19:36)

164
Let us now find the homogeneous solution. To find the homogeneous solution, let us assume $y_h(t) = Ke^{st}$. Now remember the homogeneous solution can be found by setting $\frac{dy_h(t)}{dt} + \alpha y_h(t) = 0$. This implies $Kse^{st} + \alpha Ke^{st} = 0$, which implies $K(s+\alpha)e^{st} = 0$, which implies $s = -\alpha$.

(Refer Slide Time: 21:24)

So $y_h(t) = Ke^{-\alpha t}$ is the homogeneous solution, but we need to determine K, and that can be done as follows. We have $y(t) = y_h(t) + y_p(t)$. So this is $y(t) = Ke^{-\alpha t} + \frac{C}{\alpha-\beta}\,e^{-\beta t}$. Now, to determine the value of this unknown constant K, we use the auxiliary condition.

165
(Refer Slide Time: 23:13)

So y(0) = y0 implies $K + \frac{C}{\alpha-\beta} = y_0$, which implies $K = y_0 - \frac{C}{\alpha-\beta}$. Therefore, we have the solution $y(t) = \left(y_0 - \frac{C}{\alpha-\beta}\right)e^{-\alpha t} + \frac{C}{\alpha-\beta}\,e^{-\beta t}$ for $t \ge 0$, and this determines the complete output signal.
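As an illustrative cross-check of this closed-form solution (the values of $\alpha$, $\beta$, C and y0 below are arbitrary assumptions, not from the lecture), one can integrate the differential equation numerically and compare with the analytic expression for $t \ge 0$:

```python
import numpy as np

alpha, beta, C, y0 = 2.0, 0.5, 1.0, 0.3    # assumed parameters with alpha != beta
dt = 1e-4
t = np.arange(0.0, 5.0, dt)

x = C * np.exp(-beta * t)                  # input x(t) = C e^{-beta t} u(t)

# Forward-Euler integration of dy/dt = -alpha*y + x, starting from y(0) = y0
y_num = np.zeros_like(t)
y_num[0] = y0
for k in range(len(t) - 1):
    y_num[k + 1] = y_num[k] + dt * (-alpha * y_num[k] + x[k])

K = C / (alpha - beta)
y_analytic = (y0 - K) * np.exp(-alpha * t) + K * np.exp(-beta * t)

print(np.max(np.abs(y_num - y_analytic)))  # small discretization error
```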

(Refer Slide Time: 24:28)

166
Now for t  0 since the input is 0, one can determine the output as follows.
dy(t )
Let y(t )  Se t . So we have   y (t )  x(t ) which implies  Set   Set  0 , so
dt
this is now satisfies the homogeneous solution y(t )  Se t where S is some constant and
now we need to determine that by using the auxiliary condition y(0) = y0.

(Refer Slide Time: 26:17)

So this implies S  y0 . For t < 0, we have the output y(t )  y0e t .

(Refer Slide Time: 27:54)

167
(Refer Slide Time: 28:15)

Now if you look at the output for t > 0, it can be expressed as $y(t) = \left(y_0 - \frac{C}{\alpha-\beta}\right)e^{-\alpha t} + \frac{C}{\alpha-\beta}\,e^{-\beta t}$, which can be written as $y(t) = y_0 e^{-\alpha t} + \frac{C}{\alpha-\beta}\left(e^{-\beta t} - e^{-\alpha t}\right)$, and you can now see that this $y_0 e^{-\alpha t}$ is basically the output when the input is 0.

(Refer Slide Time: 29:52)

168
So I can express the solution as the sum of $y_{zi}(t)$, the zero-input response, which is the output for zero input, and $y_{zs}(t)$, where zs denotes the zero state. The zero-state response is basically the output under a zero auxiliary condition. You can see here that if y0 is 0, that is the auxiliary condition is 0, then $y_{zi}(t) = 0$ and the output reduces to the zero-state response. Similarly, if the input is 0 then, as we have already seen, the output is basically $y_{zi}(t)$, the response to zero input.

We have looked at a couple of interesting example problems pertaining to the eigenfunction of an LTI system, the differential equation representation of a given system, and also the solution, or output signal, of a system represented by a differential equation, given the input signal and the auxiliary conditions. So we will stop here
and continue with other aspects in subsequent modules. Thank you very much.

169
Principles of Signals and Systems
Prof. Aditya K. Jagannatham
Department of Electrical Engineering
Indian Institute of Technology, Kanpur

Lecture – 16
Example Problems Discrete time LTI Systems – Output of System, Causality,
Stability

Keywords: Output of System, Causality, Stability

Hello, welcome to another module in this massive open online course. So we are looking
at example problems for the analysis of LTI systems, let us continue looking at these
problems.

(Refer Slide Time: 00:24)

Let us look at a few more problems for the analysis of LTI systems. So now let us
consider discrete time LTI systems. So in discrete time systems let us say we have an
input $x(n) = \alpha^n u(n)$ and we have an impulse response $h(n) = \beta^n u(n)$, where u(n) is the
unit step function. Now we would like to find the discrete time convolution.

170
(Refer Slide Time: 01:58)

(Refer Slide Time: 02:48)

We know that $y(n) = x(n) * h(n)$, which can be expressed as the discrete sum $\sum_{m=-\infty}^{\infty} x(m)\,h(n-m)$, which can be written as $\sum_{m=-\infty}^{\infty} \alpha^m u(m)\,\beta^{n-m}u(n-m)$. Now $u(m) = 1$ if $m \ge 0$ and is equal to 0 if $m < 0$. Similarly, $u(n-m) = 1$ if $n - m \ge 0$ and is equal to 0 if $n - m < 0$, that is, for $n < m$.

171
(Refer Slide Time: 04:19)

Hence the summand is non-zero only for m lying between 0 and n. So
$$y(n) = \begin{cases} \sum_{m=0}^{n} \alpha^m\,\beta^{n-m} & \text{if } n \ge 0 \\ 0 & \text{if } n < 0 \end{cases}$$
The summand is non-zero only if $0 \le m \le n$, which automatically implies that for $n < 0$ the output equals 0. And you can also see this from the nature of the input signal and the impulse response: because the input signal is causal and the impulse response is causal, the output is naturally causal, that is, it is non-zero only for $n \ge 0$ and 0 otherwise. Now we simplify this further.

172
(Refer Slide Time: 06:24)

Now further we can write this as $y(n) = \beta^n \sum_{m=0}^{n} \left(\frac{\alpha}{\beta}\right)^m$, for which again we have two cases, that is,
$$y(n) = \begin{cases} \beta^n\,\dfrac{1 - \left(\frac{\alpha}{\beta}\right)^{n+1}}{1 - \dfrac{\alpha}{\beta}} & \text{if } \alpha \ne \beta \\[2ex] (n+1)\,\alpha^n & \text{if } \alpha = \beta \end{cases}$$
Both of these hold for $n \ge 0$, and y(n) = 0 for n < 0. That is the complete expression for the output signal y(n).

173
(Refer Slide Time: 08:53)

So let us now consider another example, where the output is $y(n) = \sum_{k=-\infty}^{n} 2^{k-n}\,x(k+1)$. We need to find whether this is causal or not.

(Refer Slide Time: 09:52)

n 1
Now set k  k  1  k  k  1 . So y(n)  2 k  n 1
x(k ) . Now if you separate the term
k 

n
corresponding to k becomes y (n)  x(n  1)  2 k  n 1
x(k ) .
k 

174
(Refer Slide Time: 11:53)

And now if you look at this expression you can see that the output y(n) depends on x(n +
1) and hence it is not causal.

(Refer Slide Time: 13:33)

If you want to find the impulse response, set $x(n) = \delta(n)$, so $y(n) = \sum_{k=-\infty}^{n} 2^{k-n}\,\delta(k+1)$. Now $\delta(k+1) = 1$ if $k = -1$ and 0 otherwise. So this implies y(n) is non-zero only if $n \ge -1$, because if n < -1 then the sum is 0, since $\delta(k+1) \ne 0$ requires $k = -1$, which lies outside the summation range.

175
(Refer Slide Time: 15:26)

For $n \ge -1$, the impulse response is $h(n) = \left(\frac{1}{2}\right)^{n+1}$, and $h(n) = 0$ otherwise. Now if you look at $h(-1) = 2^{-1-(-1)} = 2^{0} = 1$.

(Refer Slide Time: 16:18)

So the impulse response at time instant n = -1 is 1; this implies that h(n) is not 0 for all n < 0, which implies that the resulting LTI system is non-causal. So that completes that example.

176
(Refer Slide Time: 17:40)

Let us do another example where we will examine the concept of BIBO stability for a
discrete time LTI system. So we are going to check if the given LTI system with impulse
response $h(n) = \alpha^n u(n)$ is BIBO stable or not.

(Refer Slide Time: 18:52)

 

We have to check the absolute sum, which is $\sum_{n=-\infty}^{\infty} |h(n)| = \sum_{n=-\infty}^{\infty} |\alpha|^n\,u(n) = \sum_{n=0}^{\infty} |\alpha|^n$, which is $\frac{1}{1-|\alpha|}$ if $|\alpha| < 1$, and $\infty$ otherwise.


177
(Refer Slide Time: 19:56)

So the absolute sum of the impulse response is finite only if $|\alpha| < 1$, which implies that the system is BIBO stable only if $|\alpha| < 1$; otherwise it is an unstable system.

(Refer Slide Time: 20:54)

And let us do one more example, which is related to a difference equation. Since this is a discrete time system, we have a difference equation: $y(n) = \alpha\,y(n-1) + x(n)$, and we need to find out the impulse response of the system. The solution can be approached

178
as follows, where we also assume initial rest. Now set $x(n) = \delta(n)$; from initial rest, this implies that y(n) = 0 for n < 0.

(Refer Slide Time: 22:58)

This means $x(0) = \delta(0) = 1$. We have $y(0) = \alpha\,y(-1) + x(0) = 1$, since y(-1) = 0. Next, if you want to find y(1), set n = 1: $y(1) = \alpha\,y(0) + x(1)$.

(Refer Slide Time: 24:09)

179
Now x(1) is 0, so this becomes $\alpha\,y(0)$, which is $\alpha$ times 1, which is equal to $\alpha$. Similarly, $y(2) = \alpha\,y(1) + x(2)$, and x(2) = 0, so this becomes $\alpha \cdot \alpha = \alpha^2$; $y(3) = \alpha^3$, and so on. So the output y(n) of this system with initial rest, that is its response to the unit impulse, is $\alpha^n$ for $n \ge 0$.
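This recursion is easy to run directly in code (a minimal sketch; the value of $\alpha$ is an arbitrary assumption): starting from rest and driving the system with a unit impulse reproduces $h(n) = \alpha^n$ for $n \ge 0$.

```python
import numpy as np

alpha = 0.7                         # assumed value of the feedback coefficient
N = 10

x = np.zeros(N); x[0] = 1.0         # unit impulse delta(n)
y = np.zeros(N)                     # initial rest: y(n) = 0 for n < 0

for n in range(N):
    y_prev = y[n - 1] if n > 0 else 0.0     # y(-1) = 0 from initial rest
    y[n] = alpha * y_prev + x[n]            # y(n) = alpha*y(n-1) + x(n)

print(y)
print(alpha ** np.arange(N))        # identical: the impulse response is alpha^n
```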

So I think that completes the set of examples that we want to examine to describe the
properties of LTI systems. So we will stop this module here and we will look at other
aspects in the subsequent modules. Thank you very much.

180
Principles of Signals and Systems
Prof. Aditya K. Jagannatham
Department of Electrical Engineering
Indian Institute of Technology, Kanpur

Lecture – 17
Laplace Transform – Definition, Region of Convergence (ROC), LT of Unit
Impulse and Step Functions

Keywords: Laplace Transform, Region of Convergence (ROC), LT of Unit Impulse and


Step Functions

Hello, welcome to another module in this massive open online course. So in this module
we are going to start a new concept known as the Laplace transform.

(Refer Slide Time: 00:28)

So basically this gives us a transform domain representation, which is a convenient representation for signals as well as LTI systems, and it can be employed for analysis and also to obtain valuable insights regarding the performance and behavior of LTI systems for various signals.

181
(Refer Slide Time: 02:32)

So let us look at the definition of the Laplace transform, which for convenience we can abbreviate as LT. The LT is defined as $X(s) = \int_{-\infty}^{\infty} x(t)\,e^{-st}\,dt$. Naturally this is a function of s, where $s = \sigma + j\omega$ is complex valued. So X(s) is the Laplace transform of x(t), and x(t) and X(s) form a signal and Laplace transform pair. This is also known as the bilateral Laplace transform, because it uses an integral from $-\infty$ to $\infty$. Now an important concept associated with the Laplace transform is what is termed the ROC, or the region of convergence.

182
(Refer Slide Time: 06:12)

The ROC simply describes the region of the s plane, or basically the range of values of s, for which the Laplace transform integral exists. So this characterizes the range of values of s for which the Laplace transform of a given signal x(t) converges.

Let us examine this using an example. For instance, let us say my signal is $x(t) = e^{-\alpha t}u(t)$. Let us further assume that $\alpha$ is a real quantity. So $X(s) = \int_{-\infty}^{\infty} e^{-\alpha t}u(t)\,e^{-st}\,dt$, and since u(t) is non-zero only for $t \ge 0$, this integral can be written equivalently as $X(s) = \int_{0}^{\infty} e^{-(s+\alpha)t}\,dt$. So this becomes $X(s) = \dfrac{e^{-(s+\alpha)t}}{-(s+\alpha)}\Big|_{0}^{\infty}$.

183
(Refer Slide Time: 08:53)

Now the term at the upper limit becomes 0, but only under the condition that the real part of $s + \alpha$ is greater than 0.

(Refer Slide Time: 09:31).

So for $t \to \infty$ this requires the real part of $s + \alpha$ to be greater than 0, which implies that the real part of s must be greater than $-\alpha$. And therefore, this is the region of convergence.

184
(Refer Slide Time: 10:26)

Therefore, the Laplace transform of $x(t) = e^{-\alpha t}u(t)$ is given as $X(s) = \frac{1}{s+\alpha}$, but this exists only if the real part of s is greater than $-\alpha$. This ROC can be represented in the s plane as shown in the slide.
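As a side note (not from the lecture), this transform pair can be reproduced with a symbolic package; the snippet below is a sketch using sympy's laplace_transform, assuming $\alpha > 0$ so that the convergence condition simplifies cleanly.

```python
import sympy as sp

t, s = sp.symbols('t s')
alpha = sp.symbols('alpha', positive=True)

# sympy's laplace_transform is the unilateral transform, so the u(t) factor is implicit
X, a, cond = sp.laplace_transform(sp.exp(-alpha * t), t, s)
print(X)   # 1/(alpha + s)
print(a)   # -alpha, i.e. the transform converges for Re(s) > -alpha
```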

(Refer Slide Time: 11:13)

So the s plane simply has the real part, that is $\sigma$, on the x-axis and the imaginary part, that is $j\omega$, on the y-axis, and the ROC is basically the entire right half of the s

185
plane which is basically real part of s without the line corresponding to  , because real
part of s is strictly greater than  . So this is the ROC representation for x(t )  e t u(t ) .

(Refer Slide Time: 13:09)

Let us consider a slightly different signal, $x(t) = -e^{-\alpha t}u(-t)$. This is non-zero only if $t \le 0$, because u(-t) is non-zero only for $t \le 0$.

(Refer Slide Time: 13:28)

186
This signal exists on the left side and hence is a left-handed signal. Now, if you look at the Laplace transform of this signal, we have $X(s) = \int_{-\infty}^{\infty} -e^{-\alpha t}u(-t)\,e^{-st}\,dt$.
the Laplace transform of this signal, we have X ( s)  u (t )e st dt .


(Refer Slide Time: 14:49)

Now since this is non-zero only for t  0 , this can be equivalently written as
0
e ( s  )t
0
X ( s)    e  ( s  ) t
dt and the integral is X ( s)   .

( s   ) 

(Refer Slide Time: 15:39)

187
Now e ( s  )t evaluated at 0 is 1 and e ( s  )t as t   is 0 only if the real part of
s    0 implies real part of s   .

(Refer Slide Time: 16:00)

Therefore, the Laplace transform of $x(t) = -e^{-\alpha t}u(-t)$ is given by $X(s) = \frac{1}{s+\alpha}$, if the real part of $s < -\alpha$, and this is therefore the ROC. This can once again be represented in the s plane as shown in the slide.

(Refer Slide Time: 17:17)

188
(Refer Slide Time: 18:00)

Here the ROC is the entire region of the s plane with real part less than $-\alpha$, not including the line $\text{Re}(s) = -\alpha$. Now we observe that both the signals $e^{-\alpha t}u(t)$ and $-e^{-\alpha t}u(-t)$ have the same Laplace transform expression.

(Refer Slide Time: 19:26)

In fact these are two distinct signals. So two distinct signals have the same Laplace
transform but notice the ROCs of both are different.

189
(Refer Slide Time: 20:20)

Therefore for the Laplace transform to be unique, the ROC must also be specified as part
of the Laplace transform. So to determine the signal from the Laplace transform
uniquely, the ROC must also be specified as part of the Laplace transform.

(Refer Slide Time: 21:27)

Let us now proceed to compute the Laplace transforms of some popular signals and two
of the most important signals are of course the unit impulse and the unit step function.

190
(Refer Slide Time: 22:56)


Now we have $x(t) = \delta(t)$ and $X(s) = \int_{-\infty}^{\infty} \delta(t)\,e^{-st}\,dt$. This is simply the function $e^{-st}$ evaluated at t = 0. So $X(s) = e^{-st}\big|_{t=0} = 1$ for all s. So the region of convergence is all s; the ROC for the impulse function covers the entire s plane.

(Refer Slide Time: 24:32)

191
Now let us look at the unit step function, $x(t) = u(t)$. The Laplace transform of u(t) is $X(s) = \int_{-\infty}^{\infty} u(t)\,e^{-st}\,dt$. Since u(t) is non-zero for $t \ge 0$, this integral becomes $X(s) = \int_{0}^{\infty} e^{-st}\,dt$, which is $X(s) = \dfrac{e^{-st}}{-s}\Big|_{0}^{\infty}$. Now $e^{-st}$, as $t \to \infty$, tends to 0 only if the real part of s > 0. So this reduces to $\frac{1}{s}$ for real part of s > 0, and this is basically the ROC for the unit step function.

(Refer Slide Time: 26:24)

So in this module, we have basically looked at an important concept that is the definition
of the Laplace transform, which gives us the transform domain representation of several
signals, which is very useful to understand the behavior and properties of signals as well
as systems. We have also looked at some examples for the Laplace transform and
realized that it is also important to specify the region of convergence to uniquely
determine the signal given the Laplace transform. So we will stop here and continue in
the subsequent modules. Thank you very much.

192
Principles of Signals and Systems
Prof. Aditya K. Jagannatham
Department of Electrical Engineering
Indian Institute of Technology, Kanpur

Lecture – 18
Laplace Transform Properties – Time Shifting Property, Differentiation
/Integration in Time

Keywords: Time Shifting Property, Differentiation /Integration in Time

Hello, welcome to another module in this massive open online course. So we are looking
at the Laplace transform.

(Refer Slide Time: 00:45)

Let us now look at the Laplace transform of the cosine function, $x(t) = \cos(\omega_0 t)\,u(t)$. Now u(t) is nonzero only for $t \ge 0$, so $X(s) = \int_{0}^{\infty} \cos(\omega_0 t)\,e^{-st}\,dt$.

193
(Refer Slide Time: 01:40)

So this is equal to $\int_{0}^{\infty} \frac{1}{2}\left(e^{j\omega_0 t} + e^{-j\omega_0 t}\right)e^{-st}\,dt$, which is equal to $\frac{1}{2}\int_{0}^{\infty} e^{-(s-j\omega_0)t}\,dt + \frac{1}{2}\int_{0}^{\infty} e^{-(s+j\omega_0)t}\,dt$.

(Refer Slide Time: 02:52)

194
 
1 e ( s  j0t ) 1 e ( s  j0t )
So I have written it as the sum of two integrals. This is 
2 ( s  j0 ) 0 2 ( s  j0 ) 0

1 1 1  1  2s 
only if the real part of s > 0. So this is    . So this  2 
2  s  j0 s  j0  2  s   20 

s
which is equal to . So the Laplace transform of x(t )  cos(0t )u(t ) is
s   20
2

s
X ( s)  .
s   20
2

(Refer Slide Time: 04:59)

Similarly you can compute the Laplace transforms of some other commonly occurring
signals; for instance, you can try to compute the Laplace transform of $x(t) = \sin(\omega_0 t)\,u(t)$.

195
(Refer Slide Time: 06:18)

Now let us look at some properties of Laplace transform which indeed make it a very
handy tool. For instance, if you have two signals, x1(t) with Laplace transform X1(s),
x2(t) with Laplace transform X2(s), then since the Laplace transform is a linear transform,
the Laplace transform of $a_1 x_1(t) + a_2 x_2(t)$ is $a_1 X_1(s) + a_2 X_2(s)$, and we can easily show
this because integral of a sum is the sum of the integrals but the only important point
here is to look at the region of convergence because the Laplace transform is incomplete
without specifying the region of convergence.

So if the region of convergence of the first one is R1 and the second region of convergence
is R2, then the region of convergence of the linear combination will be their intersection.
So the ROC will be equal to R1 intersection with R2 because naturally both the Laplace
transforms X1(s) and X2(s) have to exist.

196
(Refer Slide Time: 08:32)

Now let us look at another important property which is the time shifting property.

(Refer Slide Time: 09:13)

We have the Laplace Transform of x(t) as X(s). Now x(t - t0) is a delayed version of x(t),
that is, if you have x(t) which is something as shown in slide, then x(t) delayed by t0 will
look something like this shifted to the right by t0.

197
(Refer Slide Time: 10:13)

Now we need to find the Laplace transform of the delayed signal, that is x(t - t0). So we have $\int_{-\infty}^{\infty} x(t-t_0)\,e^{-st}\,dt$; now set $t' = t - t_0$, so that $dt' = dt$. So this integral will become $\int_{-\infty}^{\infty} x(t')\,e^{-s(t'+t_0)}\,dt'$.


(Refer Slide Time: 11:24)

198
Now we can bring $e^{-st_0}$ outside the integral, and therefore we have $e^{-st_0}\int_{-\infty}^{\infty} x(t')\,e^{-st'}\,dt'$, where the integral is basically the Laplace transform of x(t), which is equal to X(s). So this is equal to $e^{-st_0}X(s)$, which is the Laplace transform of the delayed signal x(t - t0), and the region of convergence will be the same because we are directly using the Laplace transform X(s).

(Refer Slide Time: 12:37)

So when we delay a signal in time domain this is equal to multiplication by e st0 in the
Laplace domain or in the s domain.

199
(Refer Slide Time: 14:05)

Now let us look at another scenario when you have x(t) whose Laplace transform is X(s)
and if you multiply x(t) by e s0t that is, if you multiply this by an exponential signal then
 

e  x(t )e
s0t  st  ( s  s0 ) t
the Laplace transform will be x(t )e dt . This implies dt .
 

(Refer Slide Time: 14:54)

So if we look at this, this is the Laplace transform evaluated at s - s0. So this is X(s - s0).

200
(Refer Slide Time: 16:16)

The region of convergence of es0t x(t ) will be the region of convergence of x(t) along with
the real part of s0, that is, the region of convergence of x(t) shifted by the real part of s0.
Let us look at another property that is differentiation in time. If x(t) has Laplace
dx(t )
Transform X(s) then we need to find the Laplace Transform of .
dt

(Refer Slide Time: 18:04)

201

dx(t )  st
To evaluate the Laplace transform of this we have 

dt
e dt which is now after


applying integration by parts and applying the limits, we have s  x(t )e st dt  sX ( s ) .


(Refer Slide Time: 20:27)

The region of convergence will be the same as that of the region of convergence of x(t).
Therefore, the derivative has the Laplace transform sX(s).

(Refer Slide Time: 21:26)

202
So the differentiation in time is multiplication by s in the transform domain. Similarly,
you can explore other properties of the Laplace transform.

(Refer Slide Time: 22:50).

Let us look at one last property that is the integration in time. Let us consider x(t) which
has Laplace transform X(s). Then, if you pass it through an integrator the output is
t
x(t )   x( )d

and you can also show that it acts as a low pass filter.

203
(Refer Slide Time: 23:34)

d x(t )
Now  x(t ) , that is, when we differentiate the output of an integrator circuit we get
dt
back the original signal and x(t )  X ( s) . Now taking Laplace transform on both sides
X ( s)
we have s X ( s)  X ( s)  X ( s)  .
s

(Refer Slide Time: 25:26)

So basically that wraps up some of these interesting properties and we have looked at the
Laplace transform of some signals, such as the cosine signal and some interesting

204
properties, such as what happens when you delay a signal, what happens to Laplace
transform when you take a linear combination of signals and finally, when you
differentiate a signal and when you integrate a signal, what happens to the Laplace
transform. There is another important property of the Laplace transform which is the
convolution of two signals and we will look at that property in the subsequent modules,
so I stop here. Thank you very much.

205
Principles of Signals and Systems
Prof. Aditya K. Jagannatham
Department of Electrical Engineering
Indian Institute of Technology, Kanpur

Lecture – 19
Laplace Transform – Convolution, Rational Function – Poles and Zeros, Properties
of ROC

Keywords: Convolution, Rational Function – Poles and Zeros, Properties of ROC

Hello, welcome to another module in this massive open online course. So we are looking
at the properties of the Laplace transform, let us continue our discussion.

(Refer Slide Time: 00:26)

So we want to look at the convolution between two signals. So consider two signals x1(t)
with Laplace transform X1(s) and with ROC R1 and x2(t) with Laplace transform X2(s)
and ROC R2. Then we consider x1(t) convolved with x2(t) and we need to find out what
is the Laplace transform of x1(t) convolved with x2(t). So this can be expressed as
 

  x ( ) x (t   )d e
 st
1 2 dt . So the inside integral is that of convolution and the outer
 

integral is taking the Laplace transform of the convolution. And now this can be
simplified by modifying the order of integration as follows.

206
(Refer Slide Time: 02:44)

 

 x ( )d  x (t   )e
 st
So this can be simplified as 1 2 dt which can be again written as
 

 

 x1 ( )d e s  x (t  )e dt . Now set t    t  dt  dt which implies the integral


 s ( t  )
2
 

 

 x1 ( )e s d  x (t )e
 st
becomes 2 dt and this is basically X1(s) times X2(s).
 

(Refer Slide Time: 05:35)

207
So when we convolve two signals the output is given by the product of their Laplace
transforms.

(Refer Slide Time: 06:17)

So convolution in time is equivalent to taking the product in the transform domain that is
the Laplace domain or basically the s domain. This property helps easily evaluate the
convolution of two signals because rather than evaluating convolution which is slightly
difficult to evaluate, if one has knowledge of the Laplace transforms of the two signals
then the Laplace transform of the convolution can be readily evaluated by the
multiplication of the Laplace transform of these two signals.

So as a result of that naturally it is easier to look at the output of an LTI system given an
input signal because the output is nothing but the convolution between the input signal
and the impulse response of the LTI system.

208
(Refer Slide Time: 08:45)

So as a result the Laplace transform is a very convenient tool in both signal processing
and communication in several areas such as control systems, instrumentation etc., where
one needs to study the behavior of system, analyze the behavior of systems, examine the
behavior or the output of a given system for different input signals and also analyze the
impact of the system for different signals, characterize the interaction between various
signals and systems etc.

(Refer Slide Time: 10:11)

209
So let us now move on to another topic in the Laplace transform which is that of the
poles and zeros. Now consider X(s) to be a rational function of s so it is the form of
numerator polynomial divided by denominator polynomial, so this is going to be of the
a0 s m  a1s m1  ....  am
form X ( s)  .
b0 s n  b1s n 1  ....  bn

(Refer Slide Time: 12:02)

a0 ( s  z1 )( s  z2 )....(s  zm )
This can be again written as  where these roots of the
b0 ( s  p1 )( s  p2 )....(s  pn )
numerator polynomial that is zk for 1  k  m these are known as the zeros of the transfer
function and the roots of the denominator polynomials pk are known as the poles of the
transfer function.

210
(Refer Slide Time: 13:43)

So this is known as a proper rational function if m  n where m is the degree of


numerator polynomial and n is the degree of denominator polynomial and if m  n this
implies an improper rational function.

(Refer Slide Time: 15:41)

Now one thing to keep in mind is that poles of X(s) cannot lie in ROC, since X(s) is
undefined at the poles. So since X(s) evaluates to infinity at the poles, poles cannot lie in
the ROC. Let us take an example to understand this better.

211
(Refer Slide Time: 16:30)

3
4( s  )
4s  3 4
So I have X ( s)  2  . So this can be again written as
2s  6s  4 2( s  2)( s  1)
3
2( s  )
 4 .
( s  2)( s  1)

(Refer Slide Time: 17:24)

212
3
And now you can see here that the zeros z1   and poles p1  2, p2  1 and ROC
4
cannot include these poles. Now the degree of numerator polynomial m = 1, the degree
of denominator polynomial n = 2, so you have m < n implies this is a proper rational
function.

(Refer Slide Time: 18:57)

Let us look at the some of the properties of the ROC. The first property as we have
already seen is that the ROC does not contain poles. For a finite duration signal, that is
x(t) = 0 for t < t1 or t > t2, implies that signal is non-zero only for t1  t  t2 , so for such
signals, the ROC is typically the entire s plane.

213
(Refer Slide Time: 21:08)

Now a right handed signal implies x(t )  0  t  t1   . So this is non-zero only for

t  t1 .

(Refer Slide Time: 22:24)

For a right handed signal the ROC is of the form Re s   max where  max equals the

maximum of the real part of the poles of X(s). This implies that the ROC is the half plane
to right of the vertical line given by Re s   max .

214
(Refer Slide Time: 24:10)

Here all the other poles can only be to the left of this line given by Re s   max and of

course this line itself is not included in the ROC.

(Refer Slide Time: 25:54)

So let us take an example, consider x(t )  e2t u(t )  e3t u (t ) and this is a right handed
signal and it exists only for. If you look at the Laplace transform of this we have
1 1
X ( s)   and the poles are s= -2, -3.
s2 s3

215
(Refer Slide Time: 27:06)

Now  max  2 and therefore the ROC is Re s  2 .

(Refer Slide Time: 27:24)

If you plot this, this is your s plane, this is the line  max  2 and somewhere here you

have - 3 and the ROC is Re s  2 , so all the other poles lie on the left of this line and

so ROC is to the right of all poles and in fact it does not include this line.

216
(Refer Slide Time: 28:46)

Now let us look at the case of a left-handed signal. So x(t )  0 for t  t2   . Its ROC is

of the form Re s   min where  min is the minimum of the real part of the poles of X(s).

(Refer Slide Time: 30:12)

So ROC is the region to the left of all poles of X(s).

217
(Refer Slide Time: 30:58)

And this can be illustrated as shown in slide.

(Refer Slide Time: 32:52)

Let us take a look at a simple example to understand this. Consider the signal
x(t )  e2t u(t )  e3t u(t ) . Now this is a left-handed signal and this is non-zero only for
1 1
t  0 . The Laplace transform of this signal is given as X ( s)   and from this
s 2 s 3
you can see that the poles of this system equals 2, 3 and clearly you can see that
 min  2 .

218
(Refer Slide Time: 33:00)

Therefore the ROC of this left-handed signal is of the form Re s  2 .

(Refer Slide Time: 33:24)

Now if we plot this, this is the pole which is 2 and this is the line Re s  2 and the ROC

is to the left of this line and you can see all poles have to lie only to the right so ROC has
to lie to the left of all the poles of X(s). So we will stop this module here and continue
with other aspects on the subsequent modules. Thank you very much.

219
Principles of Signals and Systems
Prof. Aditya K. Jagannatham
Department of Electrical Engineering
Indian Institute of Technology, Kanpur

Lecture – 20
Laplace Transform –Partial Fraction Expansion with simple Poles and Poles with
Multiplicity Greater than Unity, Laplace Transform of LTI Systems

Keywords: Partial Fraction Expansion, Laplace Transform of LTI Systems

Hello welcome to another module in this massive open online course. So we are looking
at the properties of the region of convergence of the Laplace transform. Let us continue
this discussion if x(t) is a two-sided signal.

(Refer Slide Time: 00:26)

This implies x(t) is non-zero for entire range and it is as shown in slide.

220
(Refer Slide Time: 01:24)

Then the ROC is of the form 1  Re s   2 , where  1 ,  2 are the real parts of two

poles of X(s). Thus the ROC is the vertical strip between the lines,
Re s  1 and Re s   2 .

(Refer Slide Time: 02:47)

This can be drawn as shown in slide.

221
(Refer Slide Time: 03:17)

For instance, if you take x(t )  e2t u(t )  e3t u(t ) , it is a two-sided signal and the
1 1
Laplace transform is given as X ( s)   and the poles are 2, - 3 and the ROC
s2 s3
will be the intersection of the ROCs of the both signals, that is Re s  3  Re s  2 .

(Refer Slide Time: 04:58)

222
(Refer Slide Time: 05:13)

So this includes the strip between - 3 and 2 and this can be simply illustrated as in slide
and ROC is 3  Re s  2 and any other poles can only lie outside the ROC. And this

is basically a vertical strip.

(Refer Slide Time: 06:51)

( s  z1 )( s  z2 )....( s  zm )
Let us look at the partial fraction expansion. If X ( s)  K
( s  p1 )( s  p2 )....( s  pn )

where m < n and z1 , z2 ,..zm are the zeroes and p1 , p2 ,.. pn are the poles. This is known as
a proper rational function.

223
(Refer Slide Time: 08:05)

So if it is a proper rational function, it can be considered as the simple pole case. We are
assuming all the poles are distinct, that is we have simple poles.

(Refer Slide Time: 08:40)

That is no two poles are such that pi = pj. So we have all the poles are distinct and for
C1 C2 Cn
that scenario X(s) can be expressed as X ( s)    ....  , where the
s  p1 s  p2 s  pn

coefficient Ck is given as Ck  (s  pk ) X (s) s  p . So this is the partial fraction expansion


k

of X(s) and Ck is the kth coefficient.

224
(Refer Slide Time: 10:23)

Here all the poles are assumed to be simple, that is there are no repeated poles.

(Refer Slide Time: 11:40)

Now for the multiple pole case, X(s) in denominator has a factor of form and then in this
scenario, we say that the pole pi has a multiplicity of r.

225
(Refer Slide Time: 12:51)

1 2 r
Therefore X(s) will have terms of the form   ....  . So these
s  pi ( s  pi ) 2
(s  pi )r
are the terms corresponding to pole pi and you will have the rest of the terms
corresponding to other poles. The coefficient 1 , 2 ,....r k is evaluated as follows.

(Refer Slide Time: 14:18)

1 dk
k 
We have the expression for r  k as r k  ( s  pi )r X ( s)  .
k ! ds s p i

226
(Refer Slide Time: 15:48)

So this is basically the coefficient in the partial fraction expansion of X(s).

(Refer Slide Time: 16:37)

Now let us look at the Laplace transform properties for LTI systems and its application
for the for LTI systems. Consider an LTI system given by impulse response h(t) where
x(t) is the input and y(t) is the output of the LTI system. Then we have the output y(t) of
as the convolution of the input x(t) with the impulse response h(t). Now convolution in
the time domain is a product in the transform domain.

227
(Refer Slide Time: 18:26)

The Laplace transform Y(s) will be the product of the Laplace transforms of X(s) and
Y ( s)
H(s). This implies H ( s)  . This quantity has a very important role in analysis of
X ( s)
LTI systems and this is known as the transfer function of the system. This characterizes
the behavior of the LTI system.

(Refer Slide Time: 20:00)

Now let us look at other properties of the LTI systems from the perspective of the
Laplace transform.

228
(Refer Slide Time: 20:53)

Now let us look at causality. For a causal system, we have h(t) = 0 for t < 0, which
basically implies that h(t) is a right handed signal, which implies its Laplace transform
H(s) must have ROC, which is of the form Re s   max .

(Refer Slide Time: 22:52)

Now let us look at stability of the LTI system.

229
(Refer Slide Time: 24:28)


Now it is BIBO stable if 

h(t )   . This implies that ROC contains the j axis.

(Refer Slide Time: 24:56)

230
(Refer Slide Time: 25:42)


H ( j )   h(t )e
 jt
If you look at dt , that is if you substitute s  j ,


 
H ( j )    1 . So H ( j )  
 jt  jt
h(t ) e dt . But e h(t ) dt , which means therefore
 


if this is finite, this implies H ( j )  

h(t ) dt   which implies j belongs to the

ROC.

231
(Refer Slide Time: 27:12)

(Refer Slide Time: 28:25)

Now we have the transfer function of the LTI system described by the differential
N
d k y (t ) M d k x(t )
equation. that is we have  ak   bk . Now we know from the properties
k 0 dt k k 0 dt k
dy (t )
of the Laplace transform that y(t) has Laplace transform Y(s) and has Laplace
dt
transform sY(s).

232
(Refer Slide Time: 30:32)

d k y (t ) k d k x(t )
This implies has Laplace transform s Y ( s) . Similarly, has Laplace
dt k dt k
transform s k X ( s) . So substituting this, from the differential equation it follows that we
N M
have  a s Y (s)   b s
k 0
k
k

k 0
k
k
X (s) .

(Refer Slide Time: 32:03)

233
M

Y ( s) b s k
k

So we have H ( s)   k 0
N
and this is the transfer function of the system and
a s
X ( s) k
k
k 0

this is a rational function. So basically that completes our description of the definition
and the properties of the Laplace transform.

So we have completed our discussion of the definition, ROC, properties of the ROC,
some of the properties of the Laplace transform and the description of the properties of
the Laplace transform with respect to LTI systems and several other aspects. So this
completes our discussion of the theory and applications of the Laplace transform. And in
the subsequent modules, we look at some of the examples related to Laplace transform,
solving problems involving Laplace transform as it applies to the principles of signals
and systems. So we will stop here. Thank you very much.

234
Principles of Signals and Systems
Prof. Aditya K. Jagannatham
Department of Electrical Engineering
Indian Institute of Technology, Kanpur

Lecture – 21
Laplace Transform Example Problems – Evaluation of Laplace Transform, Inverse
LT through Partial Fraction

Keywords: Inverse Laplace Transform

Hello welcome to another module in this massive open online course. So we are looking
at the Laplace transform and the properties and applications of Laplace transform.

(Refer Slide Time: 00:29)

Let us now look at example problems to better understand the Laplace transform. So we
have a signal x(t )  e2t u(t )  e3t u (t ) , now we want to compute the Laplace transform
of this signal and also we want to find the ROC. So we already know that the Laplace
1
transform of e at u (t ) is and ROC is Re s  a and the Laplace transform of
sa
1
e at u (t ) is and ROC is Re s  a .
sa

235
(Refer Slide Time: 03:04)

Therefore if you look at this signal e2t u (t ) this is equal to e at u (t ) where a = - 2. So the
1
Laplace transform of this signal can be obtained as and the ROC is Re s  2 . On
s2
the other hand, if we look at the other signal that is e3t u (t ) this is equal to e at u (t )
1
with a = 3. So the Laplace transform is and the ROC is Re s  3 because this is
s3
a left handed signal.

So now you see the net ROC of the signal will be the intersection of these two ROCs
therefore, the net ROC equals Re s  2  Re s  3 and you can see that the

intersection of these two signals is basically an empty set. There is no value of s where
the Laplace transform of both these signals converge. And this implies that the Laplace
transform of this signal does not exist.

236
(Refer Slide Time: 05:59)

(Refer Slide Time: 06:46)

Let us look at another example for instance we have the signal x(t )  e3t u(t )  e2t u(t ) .
1
Now the Laplace transform of e3t u (t ) is and ROC is Re s  3 since this is a
s3
1
right handed signal. Now e2t u (t ) is a left handed signal with Laplace transform 
s2
and the ROC is Re s  2 . So the net ROC equals the intersection of these two ROCs

1 1
that is Re s  3  Re s  2 and the net Laplace transform X ( s)   .
s3 s2

237
(Refer Slide Time: 09:26)

5
So therefore X ( s)  and the ROC for this is 3  Re s  2 .
( s  3)( s  2)

(Refer Slide Time: 10:18)

Now let us look at another example where


x(t )  e2t  u(t )  u(t  5)   e2t u(t )  e2t u(t  5) . Now e2t u (t ) has the Laplace transform

1
with ROC of the form Re s  2 since this is a right handed signal.
s2

238
(Refer Slide Time: 11:55)

Now e2t u(t  5) can be equivalently written as e2t u(t  5)  e2(t 5) .e10u(t  5) which is

e10 .e2(t 5)u(t  5) . Now this signal is of the form x(t - t0) where t0 = 5. Hence we know

that the Laplace transform of x(t - t0) is e st0 X (s) therefore, the Laplace transform of this
1
signal will be e10e5 s with the ROC of the form Re s  2 .
s2

(Refer Slide Time: 13:43)

239
Therefore the final Laplace transform will be the sum of the two Laplace transforms that
e10e5 s
is X ( s) 
1
s2

s2
, which is X ( s) 
1
s2
1  e5( s2)  with the ROC of the
form Re s  2 .

(Refer Slide Time: 14:41)

Now let us do another example to find the Laplace Transform of e at cos(0t )u(t ) . So we

s
already know that the Laplace transform of cos(0t )u(t ) is and the ROC for
s   20
2

this will be Re s  0 and this is a right handed signal. Now if x(t) has the Laplace

transform X(s) then e at x(t ) has a Laplace transform X(s + a). So the signal we have will
sa
have the Laplace Transform X ( s)  .
( s  a)2   2 0

240
(Refer Slide Time: 15:55)

The region of convergence will be Re s  a  0  Re s  a .

(Refer Slide Time: 17:45)

3s  5
Let us now do another problem, we have the Laplace transform X ( s)  and
s  3s  2
2

the ROC is Re s  2 . Now we want to evaluate the inverse Laplace transform that is

what is the signal x(t).

241
Now observe that this is a rational function that is it is of the form numerator polynomial
divided by denominator polynomial, further the degree of numerator polynomial is 1 and
degree of denominator polynomial is 2. Therefore, we have m < n which implies this is a
proper rational function therefore we can express this as partial fractions.

(3s  5)
Now X ( s)  with the roots p1  2 and p2  1 .
( s  2)( s  1)

(Refer Slide Time: 20:21)

5
We have the zero at z1   . Therefore using the partial fraction expansion we can
3
C1 C
write, X ( s)   2 . Now to evaluate C1 we have
s  2 s 1
3s  5
C1  ( s  2) X ( s) s 2  1.
s  1 s 2

242
(Refer Slide Time: 21:55)

Therefore the coefficient C1 corresponding to the pole p1 =- 2 equals 1. Now similarly C2


3s  5
can be evaluated in a similar fashion as C2  ( s  1) X ( s) s 1  2.
s  2 s 1

(Refer Slide Time: 23:03)

1 2
And therefore from the partial fraction expansion we have X ( s)   and now
s  2 s 1
the ROC is Re s  2 which implies that it is a left handed signal.

243
(Refer Slide Time: 23:26)

When ROC is of the form Re s   max the corresponding time domain signal will be

x(t )  e2t u(t )  2et u (t ) . So that is the net time domain signal or the inverse Laplace
transform.

So in this module we have seen some examples with the Laplace transform as well as the
inverse Laplace transform. We will continue looking at other examples in the subsequent
modules. Thank you very much.

244
Principles of Signals and Systems
Prof. Aditya K. Jagannatham
Department of Electrical Engineering
Indian Institute of Technology, Kanpur

Lecture – 22
Laplace Transform Example Problems – Inverse LT through Partial Fraction for
Poles with Multiplicity Greater than Unity

Keywords: Inverse Laplace Transform, Poles, Multiplicity

Hello, welcome to another module in this massive open online course. So we are looking
at example problems to understand the properties and applications of the Laplace
transform, let us continue our discussion.

(Refer Slide Time: 00:26)

 s 2  2s  1
So we want to to find the inverse Laplace transform of X ( s)  and the
( s  2)( s  3) 2

ROC is Re s  2 .

245
(Refer Slide Time: 02:16)

So here the poles are p1 = - 2, p2 = - 3, but p2 is a multiple pole of order 2, that is this is a
repeated pole. So the partial fraction expansion will be of the form
C1  2
X ( s)   1  . Now multiplicity is 2 which means the factor r = 2. Now
s  2 s  3  s  3 2

the constant C1 corresponding to s + 2 in the partial fraction expansion can be found as


C1  (s  2) X (s) s 2 .

(Refer Slide Time: 04:05)

246
 s 2  2s  1
So this will be C1   1 . Now to evaluate the  s we have
( s  3)2 s 2

1 dk
k 
r k  ( s  pi )r X ( s)  .
k ! ds s p
i

(Refer Slide Time: 05:34)

Now first let us set k = 0, so we have r and we have r = 2. So

1  s 2  2s  1
2  ( s  3)2 X ( s)  2.
0! s 3 s2 s 3

247
(Refer Slide Time: 06:56)

1 d ( s  3) 2 X ( s)
Now if we set k = 1, we have (21)  1  . Now let us look at
1! ds s 3

d ( s  3)2 X ( s) d s 2  2s  1 1
  1  .
ds ds s2 ( s  2)2

(Refer Slide Time: 08:02)

248
(Refer Slide Time: 08:32)

d ( s  3)2 X ( s)
Now therefore 1   2 . So we have evaluated the coefficient 2
ds s 3

corresponding to ( s  3)2 and the coefficient 1 corresponding to s  3 . Therefore, the

1 2 2
partial fraction expansion of X(s) can now be written as, X ( s)    .
s  2 s  3  s  3 2

(Refer Slide Time: 09:59)

249
(Refer Slide Time: 11:43)

The ROC of the signal is given as, Re s  2 , now this implies it is a right handed

1
signal. Therefore you can see has the inverse Laplace transform e2t u (t ) .
s2

(Refer Slide Time: 12:41)

1
Similarly, will have the inverse Laplace transform e3t u (t ) . Now for the inverse
s3
1
Laplace transform of we will use the property that if x(t) has Laplace transform
 s  3
2

dX ( s)
X(s) then  tx(t ) .
ds

250
(Refer Slide Time: 13:54)

1 1 d 1
Now you can see that is nothing but  and therefore taking
 s  3  s  3 ds ( s  3)
2 2

1
the inverse Laplace transform of this derivative, we have  e3t u (t ) and
s3
d 1
  t (e3t u (t )) . So finally putting these three things together we find the
ds ( s  3)
inverse Laplace transform of the original signal.

(Refer Slide Time: 16:08)

251
So therefore the inverse Laplace transform of X(s) will be
x(t )  e2t u(t )  2e3t u(t )  2te3t u (t ) .

(Refer Slide Time: 18:22)

Let us now look at another example to find the inverse Laplace transform of
3  4se2 s
X (s)  . Now it is given that the ROC of this signal is Re s  2 which
s 2  6s  8
basically implies that this is a right handed signal.

(Refer Slide Time: 19:32)

252
1 1
Now we start by considering  . This has the partial fraction
s  6s  8 ( s  2)( s  4)
2

1 1 1 1
expansion .  . .
2 s2 2 s4

(Refer Slide Time: 20:08)

1 1 1
Now for the right handed signal . has the inverse Laplace transform e2t u (t ) and
2 s2 2
1 1 1 4t
. has the inverse Laplace transform e u (t ) . So
2 s4 2
1 1 1
 e2t u (t )  e4t u (t ) .
s  6s  8
2
2 2

253
(Refer Slide Time: 20:47)

s s 2 1
Now let us consider    , which has the inverse
s  6s  8 ( s  2)( s  4) s  4 s  2
2

Laplace transform 2e4t u(t )  e2t u(t ) .

(Refer Slide Time: 22:03)

We consider any signal Y(s) which has inverse Laplace transform y(t),
e st0 Y (s)  y(t  t0 ) , this is a delayed version of the signal. So this implies that

e2 sY (s)  y(t  2) this implies 4e2 sY (s)  4 y(t  2) .

254
(Refer Slide Time: 23:18)

4e2 s s
Therefore now using this property we have 2 will have the inverse Laplace
s  6s  8
transform 42e4(t 2)u(t  2)  e2(t 2)u(t  2)  8e4( t 2)u(t  2)  4e2( t 2) u(t  2) .

Therefore the final inverse Laplace transform corresponding to the original signal will be
the one we obtain when we put these two results that we had obtained, together.

(Refer Slide Time: 25:03)

255
Therefore the final inverse Laplace transform, the net signal is
3 3
x(t )  e2t u (t )  e4t u (t )  8e4(t 2)u (t  2)  4e 2(t 2)u (t  2) .
2 2

(Refer Slide Time: 26:16)

(Refer Slide Time: 27:27)

So basically what we have seen in this module is, we have seen slightly more advanced
or challenging problem of evaluating the inverse Laplace transform. So we will stop here
and look at other aspects in the subsequent modules. Thank you very much.

256
Principles of Signals and Systems
Prof. Aditya K. Jagannatham
Department of Electrical Engineering
Indian Institute of Technology, Kanpur

Lecture – 23
Laplace Transform – RL Circuit Problem, Unilateral Laplace Transform, RC
Circuit with Initial Conditions

Keywords: RL Circuit Problem, Unilateral Laplace Transform, RC Circuit

Hello, welcome to another module in this massive open online course. So we are looking
at example problems for the Laplace transform.

(Refer Slide Time: 00:27)

So consider the RL circuit that is given below. So there is a current source which is
connected in parallel with a resistance R and an inductor L.

257
(Refer Slide Time: 01:23)

diL (t )
The output current is iL(t) = y(t). We have v(t )  L . The current through the
dt
diL (t )
resistance is iR(t) and by Ohms law iR (t ) R  v(t )  L which implies
dt
L diL (t )
iR (t )  and further we have the current i(t )  iR (t )  iL (t ) .
R dt

(Refer Slide Time: 02:47)

258
L diL (t )
So i(t )   iL (t ) . So this is the input output equation. So we denote the source
R dt
current i(t) as x(t) and the output y(t) is the inductor current that is iL(t). Therefore, we
L dy (t )
have x(t )   y(t ) . Now taking the Laplace transform we have
R dt
L  L 
X ( s)  sY ( s)  Y ( s)  Y ( s) 1  s  . Therefore the transfer function H(s) is the
R  R 
Y ( s) 1
Laplace transform of the impulse response, so H ( s)   .
X (s) 1  L s
R

(Refer Slide Time: 05:09)

R
L R
So this can be written as H ( s )  and it has one pole p1   .
R L
s
L

259
(Refer Slide Time: 06:04)

R
Now let us assume a right handed signal. Therefore, ROC is basically s   and the
L
R  RL t
corresponding inverse Laplace transform, that is h(t) is basically h(t )  e u (t ) .
L

(Refer Slide Time: 07:06)

So basically we have found the impulse response of the RL circuit assuming that it is a
right handed signal. Now let us proceed on to the concept of the unilateral Laplace
transform. We have looked at so far the concept of a bilateral Laplace transform.

260
(Refer Slide Time: 08:00)

The unilateral Laplace transform or one sided Laplace transform can be defined as

 x(t )e
 st
X I (s)  dt and we are considering a right handed signal, now the ROC will be
0

of the form Re s   max . Now this is useful for determining the response of a causal

system to a causal input with non-zero initial conditions.

(Refer Slide Time: 10:02)

261
Now some of the properties of this unilateral transform are as follows. So let us say the
dx(t )
signal x(t) has unilateral Laplace transform XI(s), then  sX I ( s)  x(0 ) .
dt

(Refer Slide Time: 11:46)

t t 0
X ( s) X ( s) 1
Now  x( )d  I
s
and  x( )d  Is  s  x( )d . So these are some of the
0

properties of the unilateral transform. Let us look at an example to understand this in a


better manner.

(Refer Slide Time: 12:44)

262
Consider the RC circuit where you have a voltage source connected across a resistance
and a capacitance in series. So the voltage vc(t) is the voltage across the capacitor vR(t) is
the voltage across the resistance. Let us say we assume a current i(t) and let us say the
source voltage is given by vs(t).

Now we are given the initial condition, vc (0 )  V0 , this is the initial voltage of capacitor.

Further we are given that the source voltage vs (t )  Vu(t ) .

(Refer Slide Time: 14:11)

So we have a step change at t = 0, to V. Now we are required to find the capacitor


voltage for time t  0 . Now the source voltage will be the sum of the voltage across the
capacitor and the voltage across the resistance that is vc (t )  vR (t )  vs (t ) .

263
(Refer Slide Time: 15:14)

dvc (t )
The voltage across the resistance vR (t )  i(t ) R , but i(t )  C , which implies the
dt
dvc (t )
voltage across the resistance vR (t )  RC , which implies we have
dt
dvc (t )
vc (t )  RC  vs (t )  Vu (t ) . Now we take the one sided Laplace transform on both
dt
the sides because we have a system with non-zero initial conditions.

(Refer Slide Time: 16:57)

264
V
So taking the unilateral Laplace transform, we have Vc ( s)  RC ( sVc ( s)  Vc (0 ))  .
s

(Refer Slide Time: 17:35)

V RCV0 V
We have Vc ( s)(1  RCs)  RCV0  this implies Vc ( s)   . Now
s 1  RCs s(1  RCs)
V0 1 RC 
further simplifying this, we have Vc ( s)  V    which if you take the
s
1  s 1  RCs 
RC
inverse Laplace transform you get vc(t).

(Refer Slide Time: 19:18)

265
t
V0 
Now taking the inverse Laplace transform we have  V0 e RC
,
1
s
RC

1 RC  t t t
  
V    V (u (t )  e RC
u (t )) . So v (t )  V e RC
 V (1  e RC
)u (t ) for t  0 .
 s 1  RCs 
c 0

(Refer Slide Time: 20:42)

So using the concept of the unilateral Laplace transform, we have been able to find the
output for t  0 for a system with non-zero initial conditions.

So in this module we have seen different problems as well as the concept of the
unilateral Laplace transform. So we will stop here and continue in subsequent modules.
Thank you very much.

266
Principles of Signals and Systems
Prof. Aditya K. Jagannatham
Department of Electrical Engineering
Indian Institute of Technology, Kanpur

Lecture – 24
Z-Transform – Definition, Region of Convergence (ROC), z-Transform of Unit
Impulse and Step Functions

Keywords: Z-Transform, Region of Convergence (ROC)

Hello, welcome to another module in this massive open online course. In this module let
us start looking at z transform.

(Refer Slide Time: 28:00)

So we want to start looking at a new concept, the z transform which is basically to


represent and analyze discrete time signals and systems. Now for a given signal x(n), the

z transform X(z) is defined as X ( z )   x ( n) z
n 
n
.

267
(Refer Slide Time: 02:09)

So this is the definition of the z transform, where z is a complex number and can be
represented in terms of polar coordinates, that is magnitude and phase, as z  re j . Here
r  z and omega is the phase or the angle of the complex number.

(Refer Slide Time: 03:35)

And then x(n) and X(z) form a signal z- transform pair. Now similar to the Laplace
transform one can also define a region of convergence for the z transform, which is
basically the range of values for which the z transform converges.

268
(Refer Slide Time: 04:14)

For instance, consider x(n)  a nu(n) where u(n) is the discrete unit step function, that is,


u ( n)  1 n  0.
0 otherwise

(Refer Slide Time: 05:35)

269

Now the z transform of this signal is X ( z )   a u (n) z
n 
n n
which is given by


X ( z )   a n z  n since u(n) is non-zero only for n  0 and this therefore equals
n 0


1
 (az
n 0
1 n
) 
1  az 1
.

(Refer Slide Time: 06:40)

But this infinite sum converges only if az 1  1  a  z  z  a . This is basically the

region of convergence for this z transform. Let us consider ‘a’ to be a real quantity which
is 0  a  1.

270
(Refer Slide Time: 07:43)

z
So now if you look at this X ( z )  this has a 0, that is the root of the numerator is z
za
= 0 and this has a pole at z = a.

(Refer Slide Time: 08:22)

This can be plotted in the z plane as shown in slide and the region of convergence is all
values of z such that z  a and this is termed as the z plane. On the x axis, we have the

real part of z, on the y axis we have the imaginary part of z and we are plotting all values
of z where the z transform converges and this is known as the region of convergence and

271
that basically includes all values of z such that z  a that is basically all values of z in

the z plane which are outside this circle.

(Refer Slide Time: 10:41)


 n n  1 . So this is
Consider now as slightly different signal x(n)  a nu (n  1)   a
 0 otherwise

a left-handed signal because it is equal to 0 for n  0 .

(Refer Slide Time: 11:47)

272
 1
Now if you look at the z transform of this X ( z )   a u(n  1) z
n 
n n
 a z
n 
n n
. Now

 
a 1 z
set m = - n, so X ( z )   a  m z m . So this will be X ( z )   (a 1 z ) m  .
m 1 m 1 1  a 1 z

(Refer Slide Time: 12:53)

This converges only if a 1 z  1  z  a . Let us consider a slight modification, that is

n a 1 z
x(n)  a u(n  1) . So X ( z )   .
1  a 1 z

(Refer Slide Time: 14:18)

273
(Refer Slide Time: 14:29)

1 1 z
So this will be X ( z )   1
 1
 and in fact, the previous one also can
az  1 1  az za
1
be written as and now multiplying the numerator and denominator by z this
1  az 1
z
becomes .
za

(Refer Slide Time: 14:59)

274
z
So you can see both of them are equal to , however, the previous one has an ROC
za
z  a , however, this has an ROC magnitude of z  a . So we have the same z

transform but with different ROCs.

(Refer Slide Time: 15:48)

So two different signals can have the same z transform but different regions of
convergence. So to specify the signal accurately along with the z transform, the ROC
also has to be specified otherwise the z transform is incomplete. So the ROC of this
signal looks like again let us consider 0  a  1 that is the circle of amplitude a lies inside
the unit circle.

275
(Refer Slide Time: 16:51)

So the region of convergence is the region lying inside the interior of the circle
with z  a . So this is also termed as the unit circle and this is of course the z plane.

z
Therefore x(n)  a nu(n) has the z transform X ( z)  and ROC z  a and
za
x(n)  a nu(n  1) that is the left-handed signal also has the same z transform
z
X ( z)  but with different ROC z  a .
za

(Refer Slide Time: 08:11)

276
(Refer Slide Time: 20:04)

So to fully characterize the signal one has to also specify the ROC and now let us look at
the z transforms of some common sequences.

(Refer Slide Time: 20:40)

Let us start with the most common signal or the most fundamental signal which is the


unit impulse also the discrete impulse, the Kronecker delta which is  (n)  1 n  0 ,
0n0

now X ( z )    ( n) z
n 
n
 1.z 0  1 .

277
(Refer Slide Time: 21:21)

So the z transform of  (n) is basically unity.

(Refer Slide Time: 22:12)

Let us quickly look at another common signal that is the unit step


 
sequence u (n)  1 n  0 . So X ( z )   u (n) z  n   0 z  n 
1
.
0n0 n  n 0 1  z 1

278
(Refer Slide Time: 23:04)

The ROC is z  1 . Now if you look at the unit impulse, the ROC is in fact, all values of

the z, probably with the exception of z = 0 and z   .

So with this we will stop this module and continue our discussion on the z transform
further in the subsequent models. Thank you very much.

279
Principles of Signals and Systems
Prof. Aditya K. Jagannatham
Department of Electrical Engineering
Indian Institute of Technology, Kanpur

Lecture – 25
Properties of z -Transform –Linearity, Time Shifting, Time Reversal,
Multiplication by n

Keywords: Linearity, Time Shifting, Time Reversal, Multiplication by n

Hello, welcome to another module in this massive open online course. So we are looking
at the Z transform and its properties. So let us continue our discussion on this very
important transform for discrete time signals and systems.

(Refer Slide Time: 00:21)

We have looked at the Z transform of the unit impulse and the unit step function, now let
us look at the Z transform of a cosine signal that is x(n)  cos(0 n)u(n) so this is

basically
2
e
1 j0n  j0n
e 
u ( n) . The Z transform of this is

1 
X ( z)  
2 n
 
e j0n  e j0n u (n) z  n .

280
(Refer Slide Time: 01:58)

Now u(n) is non-zero only for n0. So this is equal to


1  j0 1 1   j0 1
e z  
e z  1 1 1 1
n n
 . So this is  and if you sum
2 n 0 2 n 0 2 1  e z 2 1  e j0 z1
j0 1

1 2  2cos 0 z 1
this, this is equal to .
2 1  2cos 0 z 1  z 2

(Refer Slide Time: 03:47)

281
1  cos 0 z 1
So this becomes which simplified by multiplying the numerator and
1  2cos 0 z 1  z 2

z 2  cos 0 z
denominator by z 2 as X ( z )  .
z 2  cos 0 z  1

(Refer Slide Time: 05:04)

So this has two poles that is p1  e j0 , p2  e j0 . So the ROC is z  e j0 , z  e j0

that is it has to be greater than the magnitude of both poles and the magnitude of both
poles is 1 so ROC is z  1 . So here  is the frequency and you can also derive the Z
0
transform of some other common signals for instance such as sin(0 n)u(n) and so on.
Now we will look at some of the important properties of the Z transform.

282
(Refer Slide Time: 08:01)

The first property is termed as the linearity property and the Z transform is a linear
transform which means if we have x1(n) it has Z transform X1(z) and ROC R1, x2(n) has
Z transform X2(z), ROC is R2 then this implies that
a1 x1 (n)  a2 x2 (n)  a1 X1 ( z)  a2 X 2 ( z ) .

(Refer Slide Time: 09:06)

That is a linear combination of the input signals results in a linear combination of their Z
transforms and the ROC will at least be greater than the intersection of the two ROCs.

283
Let us look at another property that is the time shifting property that is let us say x(n) has
the Z transform X(z), then we need to find the Z transform of x(n  n0 ) .

(Refer Slide Time: 10:59)

Now let us consider x(n  n0 ) as x(n) . Now its Z transform is


 
X ( z)   x(n)z
n 
n
  x(n  n )z
n 
0
n
. Now set m  n  n0  n  m  n0 and the limits

 
will still be the same. So X ( z )   x(m)z
m 
 ( m  n0 )
 z  n0  x(m)z
m 
m
. So this is

X ( z )  z  n0 X ( z ) .

284
(Refer Slide Time: 11:40)

(Refer Slide Time: 12:15)

So this is a signal delayed by n0 and this is the Z transform where the original Z
transform is multiplied by z  n0 .

285
(Refer Slide Time: 13:15)

Now for the special case where n0 = 1, that is for a unit delay, this will simply become
x(n  1)  z 1 X ( z ) , therefore this operator z 1 is also termed as the unit delay operator
and frequently we will see this in systems where there is a block which looks like as
shown in slide with input x(n), then the output is x(n - 1).

(Refer Slide Time: 15:00)

So this is simply a delay block or unit delay block that is used to represent a sample
delay and for instance if we connect two such blocks in series, the net signal will be
delayed two times so x(n), x(n - 1) and the output would be x(n – 2).

286
(Refer Slide Time: 16:12)

So let us go on to the next aspect that is time reversal. In time reversal we have
x(n)  x(n) and let us say the ROC of the Z transform is R. So you are reversing the

time for instance if x(n) is 0 for n < 0 then this x(n) is the time reversed version of x(n)
and it will be 0 for n > 0.

 
So the Z transform of this is given as X ( z )   x(n)z
n 
n
  x(n)z
n 
n
. Now set m = -

 
n implies this will be 
m 
x(m)z m   x(m)( z
m 
1  m
) and this is nothing but the Z

1
transform of x evaluated at z 1 or evaluated at .
z

287
(Refer Slide Time: 17:53)

1
So the time reversed version of x(n) has the Z transform X( ).
z

(Refer Slide Time: 18:44)

1
The ROC will be  R for the Z transform to converge. So let us look at another
z
property. Let us see what happens when you multiply the signal by n.

288
(Refer Slide Time: 20:12)

Consider the signal x(n) which has Z transform X(z) with the ROC given by R then

X ( z)   x ( n) z
n 
n
. Now if you differentiate this we have

 
dX ( z )
  (n) x(n) z  n1   z 1  nx(n) z  n . So this will be Z transform of nx(n).
dz n  n 

(Refer Slide Time: 21:45)

289
So that is time multiplication by n which is basically differentiation in the z domain. So
dX ( z )
nx(n)   z . Let us look at one final property that is the accumulation which is
dz
similar to integration in time.

(Refer Slide Time: 23:18)

n
So we have the accumulation property where we have y (n)   x(k ) . So this is termed
k 

as an accumulator which is similar to an integrator for continuous time. Now we consider


n n 1
y (n)  y (n  1)   x ( k )   x ( k )  x ( n) .
k  k 

290
(Refer Slide Time: 24:36)

Now taking the Z transform we have Y ( z )  z 1Y ( z )  X ( z ) which implies


X ( z)
(1  z 1 )Y ( z )  X ( z )  Y ( z )  .
1  z 1

(Refer Slide Time: 25:59)

zX ( z )
So we can simply write this as Y ( z )  . So this adds a pole in addition to poles in
z 1
X(z) at z = 1. If there is a right handed signal the ROC has to be z  1 . Previously the

signal x(n) has an ROC R. So the ROC must be intersection of this R and the region

291
z  1 . So since you are adding a pole at z  1 , the net ROC will be the previous ROC

intersection z  1 .

So we will stop here and look at other aspects in the subsequent modules. Thank you
very much.

292
Principles of Signals and Systems
Prof. Aditya K. Jagannatham
Department of Electrical Engineering
Indian Institute of Technology, Kanpur

Lecture – 26
Properties of z-Transform – Convolution, Inverse z-Transform through Partial
Fractions, Properties of ROC

Keywords: Convolution, Inverse z-Transform through Partial Fractions

Hello, welcome to another module in this massive open online course. We are looking at
the Z transform with properties and applications, let us continue this discussion. Let us
look at the property of the Z transform with respect to the convolution operation.

(Refer Slide Time: 00:33)

Let us say we have two discrete time signals x1(n) which has Z transform X1(z) and x2(n)
which has Z transform X2(z). The convolution of x1(n) and x2(n) is

e y (n)  x1 (n)  x2 (n)   x (m) x (n  m) .
m 
1 2

293
(Refer Slide Time: 01:53)


Now we have the Z transform of y(n) as Y(z) which is given as Y ( z )   y ( n) z
n 
n
. We

want to find out the Z transform of y(n), when y(n) is the convolution of two discrete
 
time signals x1(n) and x2(n). So we have Y ( z )    x (m) x (n  m) z
n  m 
1 2
n
. So now we

are going to interchange the order of summation.

(Refer Slide Time: 03:27)

294
 
So this implies  x (m)  x (n  m) z
m 
1
n 
2
n
. So this implies that this is the Z transform of


x2(n) that is delayed by m. So this is given as  x (m) X
m 
1 2 ( z) z m .

(Refer Slide Time: 05:06)


Now the X2(z) comes out common. So this is X 2 ( z )  x1 (m) z  m  X 1 ( z ) X 2 ( z ) which is
m 

basically the product of the Z transforms of two signals. So what we have here is
basically Y ( z )  X1 ( z ) X 2 ( z ) . So the Z transform of the convolution of two signals is
basically the product of their Z transforms and this is similar to what we have seen in the
Laplace transform. Hence similar to the Laplace transform this property gives us a
convenient way to implement convolution. So this is an interesting property which says
that convolution in time is multiplication in the Z transform domain.

295
(Refer Slide Time: 09:36)

Let us now look at techniques to evaluate the inverse Z transform and similar to the
inverse Laplace transform one can consider one of the simplest techniques, so that is by
partial fraction expansion.

(Refer Slide Time: 07:22)


So let us say we have X ( z )   x ( n) z
n 
n
. Now let us express X(z) as a power series.

Now once you express X(z) as a power series x(n) is nothing but the coefficient of z  n .
Now let us look at another technique.

296
(Refer Slide Time: 09:06)

Let us look at another technique that is a partial fraction expansion. So let us say X(z) is
N ( z)
a rational function, that is X ( z)  which can be written as
D( z )
( z  z1 )( z  z2 )....( z  zm )
k where k is some constant.
( z  p1 )( z  p2 )....( z  pn )

(Refer Slide Time: 10:20)

Now we know that z1 z2, zm are the zeros that is the roots of the numerator polynomial
and there are m such zeros. Now p1 p2, pn are the poles and we have n poles.

297
Let us assume that n  m and further assume all poles are simple that is there are no
X ( z)
repeated poles. This implies we can express the partial fraction expansion of as
z
n
X ( z ) C0 C C2 Cn C Ck
  1   ....  which can be simplified as  0   ,
z z z  p1 z  p2 z  pn z k 1 z  pk

where the coefficient C0 can be found as C0  X ( z ) z 0 Ck   z  pk  X ( z ) z  p .


k

(Refer Slide Time: 11:54)

(Refer Slide Time: 13:24)

298
And finally substituting the values of C0 C1 up to Cn and taking the z in the denominator
C1 z Cz Cz
on to the other side, we have X ( z )  C0   2  ....  n . Now the inverse
z  p1 z  p2 z  pn
transform can be found, because we know the inverse transform of each factor, which is
z
of the form . So compute inverse Z transforms of the individual terms and then
z  pi
followed by their sum. So that gives us the inverse Z transform.

(Refer Slide Time: 15:22)

Now if X(z) has multiple poles, let us say pole pi of multiplicity equals r means repeated
X ( z)
poles. So in that scenario, if let us say pole pi has multiplicity r then will have
z
1 2 r
terms of the form   ....  , where this
z  pi ( z  pi ) 2
( z  pi )r

1 dk  r X ( z) 
quantity r  k implies r k   ( z  pi )  .
k ! dz k  z  z p
i

299
(Refer Slide Time: 16:56)

From this, one can find the inverse Z transform of the given function.

(Refer Slide Time: 19:20)

Let us now look at the properties of the region of convergence. The ROC does not
contain any pole that is the first property. Now if x(n) is a finite sequence, implies that
x(n) = 0, for either n < N1 or n > N2 and X(z) converges for some value of z, then this
implies that ROC equals the entire Z plane.

300
(Refer Slide Time: 20:47)

So basically if you have a finite sequence the Z transform of that signal exists
everywhere on the Z plane.

(Refer Slide Time: 22:00)

Now for a right handed signal, implies that x(n) = 0 for n  N1 . Here ROC is of the form

z  rmax , where rmax equals the largest magnitude of the poles of X(z).

301
(Refer Slide Time: 23:47)

Now similarly for a left-handed signal where x(n) = 0 for some n  N 2 , the ROC is of

the form z  rmin , where rmin equals the minimum of the magnitude of poles of X(z).

(Refer Slide Time: 25:35)

Finally if x(n) is a two-sided signal, then this implies that the ROC is of the form
r1  z  r2 . Now r1 and r2 are the magnitudes of two poles of X(z). So these are the

properties of the ROC.

302
(Refer Slide Time: 27:12)

So these are the useful properties of ROC that helps us better reconstruct as well as
analyze the properties of a signal by looking at its Z transform. So we will stop here and
continue with other aspects in the subsequent modules. Thank you very much.

303
Principles of Signals and Systems
Prof. Aditya K. Jagannatham
Department of Electrical Engineering
Indian Institute of Technology, Kanpur

Lecture – 27
Z-Transform of LTI Systems – Causality, Stability, Systems Described by
Difference Equation

Keywords: Causality, Stability, Systems Described by Difference Equation

Hello, welcome to another module in this massive open online course. So we are looking
at Z transform and its various properties, let us continue our discussion with the system
function for a discrete time LTI system.

(Refer Slide Time: 00:28)

304
(Refer Slide Time: 01:15)

So we want to look at the system transfer function of a discrete time LTI system. So we
have x(n) which is given as input to a discrete time LTI system with impulse response
h(n) and the output is y(n). So we know that the output y(n) is given as the convolution
of the input with the impulse response. So we have y(n)  x(n)  h(n) and also we know
that convolution in time is basically multiplication in the transform domain.

(Refer Slide Time: 02:31)

305
Y ( z)
Therefore we have Y ( z)  X ( z) H ( z) implies the transfer function H ( z )  which
X ( z)
is the Z transform of the impulse response of the LTI system.

(Refer Slide Time: 03:40)

This is known as the system function or the transfer function of the discrete time LTI
system. Now let us look at the properties of the discrete time LTI system based on its
system function or the transfer function.

(Refer Slide Time: 04:23)

306
So we consider a causal LTI system where h(n) = 0 for n < 0. Therefore, the ROC must
be of the form z  rmax , where rmax is the maximum of the poles of H(z).

(Refer Slide Time: 06:27)

(Refer Slide Time: 06:35)

Now let us look at stability, bounded input bounded BIBO stability. For BIBO stability,
we require  h( n)  
n
that is it must be a finite quantity. Now for a BIBO stable

system the transfer function that is H(z) must include the unit circle in its ROC.

307
(Refer Slide Time: 08:44)

The unit circle is all the values of z which are of the form z  e j that is z  1 that is

unit magnitude, any arbitrary phase.

(Refer Slide Time: 10:08)


Now let us look at H (e j ) for a stable system. So H (e j )   h( n)e
n 
 jn
. Now


H (e j  )   h( n)e
n 
 jn
. Now this magnitude of a sum is less than or equal to the sum

308

of its magnitudes. H (e j )  
n 
h(n) e jn But e jn  1 . So this implies


H (e j  )  
n 
h(n)   since the system is BIBO stable.

(Refer Slide Time: 11:36)

This is positive quantity and this has to converge for every z  e j . Therefore z  e j for
every  is in the region of convergence, which means the unit circle must be in the
ROC for a BIBO stable LTI system. So the system transfer function must include the
unit circle in its region of convergence. Now let us look at the system function for an LTI
system described by a difference equation.

309
(Refer Slide Time: 13:45)

(Refer Slide Time: 14:41)

So for an LTI system the difference equation representation is given as


N M

 ak y(n  k )   bk x(n  k ) . Now this is a difference equation for discrete time signals.
k 0 k 0

310
(Refer Slide Time: 16:30)

N M
Now taking the Z transform on both sides we have  akY ( z) z k   bk X ( z) z k now this
k 0 k 0

N M
implies, Y ( z ) ak z  k  X ( z ) bk z  k .
k 0 k 0

(Refer Slide Time: 18:03)

311
M

Y ( z) b z k
k

Now H ( z )   k 0
N
is in the form of a rational transfer function. Now taking
a z
X ( z) k
k
k 0

the inverse Z transform, one can obtain the impulse response of the system described by
this difference equation.

So that completes our discussion on the definition, properties of the Z transform as well
as the analysis. So subsequently in the modules that follow we look at several problems
for the application of the Z transform, to understand the various properties and its
application. So we will look at other aspects in the subsequent modules. Thank you very
much.

312
Principles of Signals and Systems
Prof. Aditya K. Jagannatham
Department of Electrical Engineering
Indian Institute of Technology, Kanpur

Lecture – 28
Example Problems in z-Transform – Evaluation of z-Transform, ROC

Keywords: Causality, Stability, Systems Described by Difference Equation

Hello, welcome to another module in this massive open online course. So we are looking
at the z-transform and its various properties. Now let us start looking at some example
problems to better understand the applications of z-transform.

(Refer Slide Time: 00:28)


 n
So consider x(n)   a 0  n  N  1 .
 0 otherwise

313
(Refer Slide Time: 01:46)

We want to find the z-transform X(z) along with the poles and zeros. Now the z-
transform of this can be evaluated as follows.

(Refer Slide Time: 02:07)

 N 1 N 1
1  (az 1 ) N
The z-transform X ( z )  
n 
x(n) z  n   a n z  n   (az 1 ) n 
n 0 n 0 1  (az 1 )
.

314
(Refer Slide Time: 02:55)

zN  aN 1
So this can be written as X ( z )  . .
z  a z N 1

(Refer Slide Time: 04:09)

So this has pole at z = a and we have this term z N 1 which means this has a pole at z = 0
with multiplicity N - 1.

315
(Refer Slide Time: 04:58)

The zeros are obtained by solving the equation z N  a N  a N .1 . So this can be written as
k k
j 2 j 2
z  a.e N
where e N
is the Nth root of unity.

(Refer Slide Time: 06:29)

Now if you set k = 0, we have zero at z = a. So the zero and pole at z = a cancels. Now
k
j 2
the zeros are basically Nth roots of z  a.e N
, k ranging from 1 to N – 1. So zero with
multiplicity N – 1 is left out.

316
(Refer Slide Time: 07:30)

So we have the unit circle and we have the zeroes which are placed equidistant and we
have a pole of multiplicity N - 1 at zero. So the pole-zero diagram of the z-transform of
the given signal looks as shown in slide.

(Refer Slide Time: 11:06)

n n
1 1
Let us look at the next problem that is x(n)    u (n)    u (n  1) . We are
 3 2
required to find the z-transform of the given signal and also the associated ROC. Also
sketch the pole-zero plot.

317
So now to do this we are you going to use the following results which are
z z
a n u ( n)  , ROC is z  a and is a right handed signal. a nu (n  1)  and
za za
the ROC is z  a and this is a left-handed signal.

(Refer Slide Time: 13:10)

n n
1 z 1 1
So now   u (n) has z-transform and ROC z  and   u (n  1) has z-
3 z
1 3 2
3
z 1
transform and ROC z  . So the net z-transform of the signal is the sum of
1 2
z
2
these two components.

318
(Refer Slide Time: 14:09)

z z
So X ( z )   . Now the ROC of this is the intersection of these two that is
1 1
z z
3 2
1 1 1 1
z   z  which is basically  z  .
3 2 3 2

(Refer Slide Time: 15:04)

319
 1
z.   
The z-transform can be simplified as X ( z )   6 
1 z
and
 1  1 6 1  1
 z   z    z   z  
 3  2  3  2
therefore this is equal to the net z-transform. If you draw the region of convergence on
the Z plane, it looks something like as in slide.

(Refer Slide Time: 16:14)

1 1
So there is a zero at z = 0 and the poles are at z  , .
3 2

(Refer Slide Time: 16:29)

320
The ROC lies in between these two poles and this is a two sided signal.

(Refer Slide Time: 17:51)

The ROC comprises the region between two concentric circles, one corresponding to $|z| = \frac{1}{3}$ and the other corresponding to $|z| = \frac{1}{2}$.

(Refer Slide Time: 19:07)

321
Let us consider another problem: find the z-transform of $x(n) = a^{n+2}u(n+2)$. Now consider a different sequence $\tilde{x}(n) = a^n u(n)$; we can see that $x(n) = \tilde{x}(n+2)$, that is, a time-advanced version of $\tilde{x}(n)$. So now if we take the z-transform of $\tilde{x}(n)$, we have $\tilde{X}(z) = \frac{z}{z-a}$.

(Refer Slide Time: 20:29)

Now since $x(n) = \tilde{x}(n+2)$, we have $X(z) = z^2\tilde{X}(z) = z^2\cdot\frac{z}{z-a} = \frac{z^3}{z-a}$, and the ROC will be $|z| > |a|$ since this is a right-handed signal.

So we will wrap up this module with this problem and we will look at other problems in
the subsequent modules. Thank you very much.

322
Principles of Signals and Systems
Prof. Aditya K. Jagannatham
Department of Electrical Engineering
Indian Institute of Technology, Kanpur

Lecture – 29
Example Problems in Signals and Systems – Plot, Odd/ Even Components,
Periodicity

Keywords: Odd/ Even Components, Periodicity

Hello welcome to another module in this massive open online course. So we are looking
at several example problems to understand the application of z transform. So let us
continue this discussion.

(Refer Slide Time: 00:25)

So we want to use the power series approach to find the inverse z-transform of $X(z) = \frac{1}{1-az^{-1}}$, $|z| > |a|$.

323
(Refer Slide Time: 01:53)

So this is a right-handed signal. So now to use the power series approach, we will divide
the numerator by the denominator.

(Refer Slide Time: 02:48)

So this is similar to long division; using the power series approach we get $X(z) = \frac{1}{1-az^{-1}} = 1 + az^{-1} + a^2z^{-2} + a^3z^{-3} + \ldots$.

324
(Refer Slide Time: 04:40)

So here we have been able to express this as a power series in $z^{-1}$, and now it follows that the sequence x(n) is nothing but $x(n) = a^n u(n)$, that is, $a^n$ for $n \ge 0$. So this is basically the power series approach of computing the inverse z-transform.
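The long-division (power series) procedure can be mechanized; the following is a minimal Python sketch of the division in powers of $z^{-1}$, with a = 0.5 chosen arbitrarily.

import numpy as np

# Power-series (long-division) inverse z-transform of X(z) = 1 / (1 - a z^{-1}), |z| > |a|
a = 0.5
num = [1.0]          # numerator coefficients in powers of z^{-1}: 1
den = [1.0, -a]      # denominator coefficients in powers of z^{-1}: 1 - a z^{-1}

n_terms = 8
x = []
remainder = num + [0.0] * n_terms    # pad so we can keep dividing
for n in range(n_terms):
    c = remainder[n] / den[0]        # next quotient coefficient, i.e. x(n)
    x.append(c)
    for k, d in enumerate(den):      # subtract c * (denominator shifted by n)
        remainder[n + k] -= c * d

print(np.allclose(x, [a ** n for n in range(n_terms)]))   # True: x(n) = a^n for n >= 0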

(Refer Slide Time: 06:53)

So we have a rational function of z and we need to evaluate the inverse z-transform. We have $X(z) = \frac{3z^2 - 5z}{z^2 - 3z + 2}$, $|z| < 1$, and therefore this is a left-handed sequence. Whenever we have a rational function of z we can use the partial fraction approach.

325
(Refer Slide Time: 08:23)

So the denominator is $z^2 - 3z + 2 = (z-1)(z-2)$. The partial fraction expansion is of the form $\frac{X(z)}{z} = \frac{C_0}{z} + \frac{C_1}{z-1} + \frac{C_2}{z-2}$. So the poles of this are z = 1 and z = 2, and obviously the zeros satisfy $3z^2 - 5z = 0 \Rightarrow z(3z-5) = 0 \Rightarrow z = 0, \frac{5}{3}$.
4

(Refer Slide Time: 09:46)

So the partial fraction expansion depends only on the poles, that is to look at the structure
of these different terms we need knowledge of the poles.

326
(Refer Slide Time: 11:00)

Now we know $C_0 = X(z)\big|_{z=0}$ and on evaluating this we get $C_0 = 0$.

(Refer Slide Time: 11:36)

Now $C_1 = (z-1)\frac{X(z)}{z}\Big|_{z=1}$, which is basically equal to 2. Similarly we have $C_2 = (z-2)\frac{X(z)}{z}\Big|_{z=2}$, which is basically 1.

327
(Refer Slide Time: 12:44)

So these are the coefficients in the partial fraction expansion of $\frac{X(z)}{z}$. This implies $\frac{X(z)}{z} = \frac{2}{z-1} + \frac{1}{z-2}$, which basically implies that $X(z) = \frac{2z}{z-1} + \frac{z}{z-2}$, and the ROC remains the same, $|z| < 1$. Therefore, if you look at the left-handed sequence corresponding to this, we have $x(n) = -2u(-n-1) - 2^n u(-n-1)$.

(Refer Slide Time: 13:58)

So this is basically the inverse z transform.
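A simple numerical way to confirm a left-handed inverse like this is to sum the truncated series $\sum_n x(n)z^{-n}$ at a test point inside the ROC and compare with the rational form; a minimal Python sketch, with an arbitrary test point, is given below.

import numpy as np

# Check of the left-sided inverse x(n) = -2 u(-n-1) - 2^n u(-n-1) against
# X(z) = (3 z^2 - 5 z) / (z^2 - 3 z + 2), ROC |z| < 1, at a point inside the ROC.
z = 0.4 * np.exp(1j * 1.1)           # |z| = 0.4 < 1, arbitrary test point

# x(n) is nonzero only for n <= -1; truncate the sum at a large negative index
X_series = sum((-2.0 - 2.0 ** n) * z ** (-n) for n in range(-200, 0))

X_rational = (3 * z ** 2 - 5 * z) / (z ** 2 - 3 * z + 2)
print(np.isclose(X_series, X_rational))   # True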

328
(Refer Slide Time: 16:08)

So we consider another example: find the inverse z-transform of $X(z) = \frac{z(-z^2+5z-5)}{(z-2)(z-3)^2}$, where the ROC is $2 < |z| < 3$. So this is a two-sided, infinite-duration signal because the ROC lies between the two poles at 2 and 3. Now there are poles of multiplicity greater than 1; for instance, at z = 3 we have a pole of multiplicity 2.

(Refer Slide Time: 17:45)

So the partial fraction expansion will be of the form $\frac{X(z)}{z} = \frac{C_0}{z} + \frac{C_1}{z-2} + \frac{\lambda_1}{z-3} + \frac{\lambda_2}{(z-3)^2}$.

329
(Refer Slide Time: 19:02)

Now $C_0 = X(z)\big|_{z=0}$ and this is 0. Next, $C_1 = (z-2)\frac{X(z)}{z}\Big|_{z=2} = 1$.
z z 2

(Refer Slide Time: 20:41)

Now the coefficient corresponding to $(z-3)^2$ can be evaluated as $\lambda_2 = (z-3)^2\frac{X(z)}{z}\Big|_{z=3} = 1$.

330
(Refer Slide Time: 21:55)

Now we are going to use the general formula $\lambda_{r-k} = \frac{1}{k!}\frac{d^k}{dz^k}\left[(z-p_i)^r\frac{X(z)}{z}\right]_{z=p_i}$ to find $\lambda_1$, which can be written as $\lambda_1 = \lambda_{2-1} = \frac{1}{1!}\frac{d}{dz}\left[(z-3)^2\frac{X(z)}{z}\right]_{z=3}$.

(Refer Slide Time: 22:16)

Now let us evaluate $(z-3)^2\frac{X(z)}{z}$, which will be $-(z-3) + \frac{1}{z-2}$.
z z2

331
(Refer Slide Time: 23:59)

Now we have $\frac{d}{dz}\left[(z-3)^2\frac{X(z)}{z}\right]$, which is to be evaluated at z = 3.

(Refer Slide Time: 25:11)

Now this becomes - 2 and hence $\lambda_1 = -2$.

332
(Refer Slide Time: 26:38)

Therefore, we have $\frac{X(z)}{z} = \frac{1}{z-2} - \frac{2}{z-3} + \frac{1}{(z-3)^2}$. So this implies $X(z) = \frac{z}{z-2} - \frac{2z}{z-3} + \frac{z}{(z-3)^2}$ and the ROC still remains the same.

So we will stop this module here and we will continue this in the subsequent module.
Thank you very much.

333
Principles of Signals and Systems
Prof. Aditya K. Jagannatham
Department of Electrical Engineering
Indian Institute of Technology, Kanpur

Lecture – 30
Example Problems in z –Transform – LTI System Output, Step/ Impulse Response
of LTI System

Keywords: Step/ Impulse Response of LTI System

Hello welcome to another module in this massive open online course. So we are looking
at example problems for the z transform and let us continue this discussion.

(Refer Slide Time: 00:23)

So we are looking at example problems pertaining to the z-transform and its properties. From our previous discussion, we were evaluating the inverse z-transform and we have seen that $X(z) = \frac{z}{z-2} - \frac{2z}{z-3} + \frac{z}{(z-3)^2}$ with ROC $2 < |z| < 3$. Now in this, for instance, $\frac{z}{z-2}$ corresponds to $2^n u(n)$, and here the ROC corresponds to $|z| > 2$. So this is a right-handed signal with respect to the pole at 2.

334
(Refer Slide Time: 02:29)

Now $-\frac{2z}{z-3} \leftrightarrow (-2)\left(-3^n u(-n-1)\right) = 2\cdot 3^n u(-n-1)$, and the ROC is $|z| < 3$. So this is a left-handed signal with respect to the pole at 3.

(Refer Slide Time: 03:57)

Consider the term $\frac{z}{(z-3)^2}$. To evaluate this we use the property $-z\frac{d}{dz}X(z) \leftrightarrow n\,x(n)$.

335
(Refer Slide Time: 05:04)

Now consider $X(z) = \frac{z}{z-3}$. Differentiating this we have $\frac{d}{dz}X(z) = \frac{d}{dz}\frac{z}{z-3} = \frac{d}{dz}\frac{z-3+3}{z-3} = \frac{d}{dz}\left[1 + \frac{3}{z-3}\right] = -\frac{3}{(z-3)^2}$. Now $-z\frac{d}{dz}X(z) = -z\cdot\left(-\frac{3}{(z-3)^2}\right) = \frac{3z}{(z-3)^2}$.

(Refer Slide Time: 06:49)

336
So therefore $\frac{3z}{(z-3)^2} \leftrightarrow n\,x(n)$.
 z  3
2

(Refer Slide Time: 07:26)

Here $X(z) = \frac{z}{z-3}$ with ROC $|z| < 3$ corresponds to $x(n) = -3^n u(-n-1)$. So $\frac{3z}{(z-3)^2} \leftrightarrow -n\,3^n u(-n-1)$, which implies $\frac{z}{(z-3)^2} \leftrightarrow -n\,3^{n-1}u(-n-1)$.

(Refer Slide Time: 08:52)

337
Therefore the final inverse z-transform will be $x(n) = 2^n u(n) + 2\cdot 3^n u(-n-1) - n\,3^{n-1}u(-n-1)$. So we have constructed the inverse z-transform of this rational function of z which has a pole of multiplicity greater than 1.
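The two-sided result can be checked in the same spirit: sum the right-handed and left-handed parts of $\sum_n x(n)z^{-n}$ at a point inside the ROC 2 < |z| < 3 and compare with the rational expression. A minimal Python sketch (test point chosen arbitrarily) follows.

import numpy as np

# Check of x(n) = 2^n u(n) + 2*3^n u(-n-1) - n*3^(n-1) u(-n-1) against
# X(z) = z/(z-2) - 2z/(z-3) + z/(z-3)^2 on the ROC 2 < |z| < 3.
z = 2.5 * np.exp(1j * 0.3)

right = sum((2.0 ** n) * z ** (-n) for n in range(0, 300))            # n >= 0 part
left = sum((2 * 3.0 ** n - n * 3.0 ** (n - 1)) * z ** (-n) for n in range(-300, 0))
X_series = right + left

X_rational = z / (z - 2) - 2 * z / (z - 3) + z / (z - 3) ** 2
print(np.isclose(X_series, X_rational))   # True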

(Refer Slide Time: 09:53)

Let us look at another example: consider the system with impulse response $h(n) = a^n u(n)$, $0 < a < 1$. Now consider the input x(n) that is the unit step function.

(Refer Slide Time: 11:41)

338
Now we have to find the output using the z-transform. We have $h(n) = a^n u(n) \Rightarrow H(z) = \frac{z}{z-a}$, the ROC is $|z| > a$, and this is a right-sided signal.

(Refer Slide Time: 12:45)

(Refer Slide Time: 13:49)

Now x(n) = u(n) and the z-transform of this is $X(z) = \frac{z}{z-1}$. Now we know y(n) is simply the convolution of h(n) and x(n), which implies the z-transform

339
$Y(z) = H(z)X(z) = \frac{z}{z-a}\cdot\frac{z}{z-1} = \frac{z^2}{(z-1)(z-a)}$, and we need to include both ROCs; since a < 1, the ROC of this is $|z| > 1$. You can see that this ROC is the intersection of the other two ROCs.

(Refer Slide Time: 15:10)

Now $\frac{Y(z)}{z} = \frac{z}{(z-1)(z-a)} = \frac{C_1}{z-1} + \frac{C_2}{z-a}$. So $C_1 = (z-1)\frac{Y(z)}{z}\Big|_{z=1} = \frac{1}{1-a}$.

(Refer Slide Time: 16:11)

340
Now $C_2 = (z-a)\frac{Y(z)}{z}\Big|_{z=a} = \frac{a}{a-1}$. So this implies $Y(z) = \frac{1}{1-a}\cdot\frac{z}{z-1} + \frac{a}{a-1}\cdot\frac{z}{z-a}$.

(Refer Slide Time: 17:29)

Now we can take the inverse z-transform and we get $y(n) = \frac{1}{1-a}u(n) - \frac{a}{1-a}a^n u(n)$, that is, $y(n) = \frac{1-a^{n+1}}{1-a}u(n)$, and the ROC is $|z| > 1$, which means this is a right-handed signal.
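This closed-form output can also be verified by directly convolving the truncated sequences; a minimal Python sketch, with a = 0.7 chosen arbitrarily, is shown below.

import numpy as np

# Check of y(n) = (1 - a^(n+1)) / (1 - a) u(n) against the direct convolution
# of h(n) = a^n u(n) with the unit step x(n) = u(n).
a, L = 0.7, 50
n = np.arange(L)

h = a ** n                       # a^n u(n), truncated to L samples
x = np.ones(L)                   # u(n), truncated to L samples
y_conv = np.convolve(h, x)[:L]   # first L samples of the convolution are exact

y_closed = (1 - a ** (n + 1)) / (1 - a)
print(np.allclose(y_conv, y_closed))   # True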

(Refer Slide Time: 18:08)

341
(Refer Slide Time: 19:43)

Let us continue to another example where the step response of an LTI system is given as $a^n u(n)$ and we have to find the impulse response of this LTI system.

(Refer Slide Time: 20:36)

Now if the input is x(n) = u(n), we have $X(z) = \frac{z}{z-1}$. The output is $y(n) = a^n u(n)$ and $Y(z) = \frac{z}{z-a}$. Therefore the transfer function H(z) is given as $H(z) = \frac{Y(z)}{X(z)}$.

342
(Refer Slide Time: 21:39)

z 1
So this is H ( z )  and the ROC of both will be magnitude and this is the transfer
za
function for the LTI system.

(Refer Slide Time: 22:36)

Now $\frac{H(z)}{z} = \frac{z-1}{z(z-a)}$. So there are two poles, at z = 0 and z = a. So this can be expressed as $\frac{H(z)}{z} = \frac{C_1}{z} + \frac{C_2}{z-a}$. The coefficient $C_1$ corresponding to z is $C_1 = z\frac{H(z)}{z}\Big|_{z=0} = \frac{1}{a}$, and $C_2 = (z-a)\frac{H(z)}{z}\Big|_{z=a} = \frac{a-1}{a}$.

343
(Refer Slide Time: 23:53)

So therefore we have $\frac{H(z)}{z} = \frac{1}{a}\cdot\frac{1}{z} - \frac{1-a}{a}\cdot\frac{1}{z-a}$. Now this implies $H(z) = \frac{1}{a} - \frac{1-a}{a}\cdot\frac{z}{z-a}$.

(Refer Slide Time: 24:44)

Now we take the inverse z-transform and we get $h(n) = \frac{1}{a}\delta(n) - \frac{1-a}{a}a^n u(n)$.
a a

344
(Refer Slide Time: 25:19)

So now let us evaluate h(0): this is $h(0) = \frac{1}{a} - \frac{1-a}{a} = 1$.
a a

(Refer Slide Time: 26:40)

Now for any $n \ge 1$, $h(n) = -(1-a)a^{n-1}$. Therefore, combining both, we have $h(n) = \delta(n) - (1-a)a^{n-1}u(n-1)$. So this is the impulse response of the system. We started with a system whose step response is given, we were asked to find the impulse response, and we have found the impulse response of this system using the z-transform technique.
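To double-check the result, one can convolve this h(n) with the unit step and confirm that the step response $a^n u(n)$ is recovered; a minimal Python sketch with a = 0.6 (an arbitrary choice) follows.

import numpy as np

# Check that h(n) = delta(n) - (1-a) a^(n-1) u(n-1) has step response a^n u(n)
a, L = 0.6, 50
n = np.arange(L)

h = np.zeros(L)
h[0] = 1.0                            # delta(n)
h[1:] = -(1 - a) * a ** (n[1:] - 1)   # -(1-a) a^(n-1) for n >= 1

step = np.ones(L)                     # u(n)
s = np.convolve(h, step)[:L]          # step response
print(np.allclose(s, a ** n))         # True: the step response is a^n u(n)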

345
So we will stop here and look at other examples and continue with other aspects in the
subsequent modules. Thank you very much.

346
Principles of Signals and Systems
Prof. Aditya K. Jagannatham
Department of Electrical Engineering
Indian Institute of Technology, Kanpur

Lecture – 31
Example Problems in z–Transform – Impulse Response of LTI System Described
by Difference Equation

Keywords: Impulse Response Described by Difference Equation

Hello welcome to another module in this massive open online course. So we are looking
at example problems in z transform.

(Refer Slide Time: 00:28)

So consider the LTI system described by the constant coefficient difference equation $y(n) - \frac{5}{2}y(n-1) + y(n-2) = x(n)$.
2

347
(Refer Slide Time: 02:15)

Now we need to find the impulse response and the transfer function. Taking the z-transform on both sides, $Y(z) - \frac{5}{2}Y(z)z^{-1} + Y(z)z^{-2} = X(z)$.
2

(Refer Slide Time: 04:10)

 5 
So this implies Y ( z ) 1  z 1  z 2   X ( z ) which implies
 2 
Y ( z) 1 z2
  .
X ( z ) 1  5 z 1  z 2 z 2  5 z  1
2 2

348
(Refer Slide Time: 05:09)

And this is the transfer function. So $H(z) = \frac{z^2}{z^2 - \frac{5}{2}z + 1}$ is basically the transfer function, that is, the z-transform of the impulse response of the LTI system.

(Refer Slide Time: 05:46)

Now $H(z) = \frac{z^2}{(z-2)\left(z-\frac{1}{2}\right)}$. So there are two poles, at z = 2 and at $z = \frac{1}{2}$, and these are poles of multiplicity 1. So these are simple poles and now we are going to perform the

349
partial fraction expansion to find the inverse z-transform. So consider $\frac{H(z)}{z} = \frac{z}{(z-2)\left(z-\frac{1}{2}\right)}$.

(Refer Slide Time: 07:04)

Let us express this as $\frac{H(z)}{z} = \frac{C_0}{z-2} + \frac{C_1}{z-\frac{1}{2}}$. Now we consider this as a causal system, so the impulse response has to be a right-handed signal; therefore the ROC is of the form $|z| > 2$, since 2 is the maximum pole magnitude.

350
(Refer Slide Time: 08:31)

Now we evaluate $C_0 = (z-2)\frac{H(z)}{z}\Big|_{z=2} = \frac{4}{3}$ and $C_1 = \left(z-\frac{1}{2}\right)\frac{H(z)}{z}\Big|_{z=\frac{1}{2}} = -\frac{1}{3}$.
2

(Refer Slide Time: 09:19).

351
(Refer Slide Time: 09:56)

So therefore we have $\frac{H(z)}{z} = \frac{4}{3}\cdot\frac{1}{z-2} - \frac{1}{3}\cdot\frac{1}{z-\frac{1}{2}}$, and therefore $H(z) = \frac{4}{3}\cdot\frac{z}{z-2} - \frac{1}{3}\cdot\frac{z}{z-\frac{1}{2}}$. We have the ROC of the form $|z| > 2$, which implies this is a right-handed signal. Now $h(n) = \frac{4}{3}\,2^n u(n) - \frac{1}{3}\left(\frac{1}{2}\right)^n u(n)$, which is basically the impulse response of the LTI system described by the given difference equation.
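As a numerical confirmation, the difference equation can be simulated with an impulse input and compared against this closed form; a minimal Python sketch is given below.

import numpy as np

# Simulate the causal system y(n) = (5/2) y(n-1) - y(n-2) + x(n) for x(n) = delta(n)
# and compare with the closed form h(n) = (4/3) 2^n - (1/3) (1/2)^n derived above.
L = 25
x = np.zeros(L); x[0] = 1.0       # unit impulse input
y = np.zeros(L)
for n in range(L):
    y[n] = x[n]
    if n >= 1:
        y[n] += 2.5 * y[n - 1]
    if n >= 2:
        y[n] -= y[n - 2]

h_closed = (4.0 / 3.0) * 2.0 ** np.arange(L) - (1.0 / 3.0) * 0.5 ** np.arange(L)
print(np.allclose(y, h_closed))   # True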

(Refer Slide Time: 11:03)

352
So we have seen an LTI system described by the difference equation and found the
transfer function from that and then the impulse response. So we will stop here and look
at other aspects in the subsequent modules. Thank you.

353
Principles of Signals and Systems
Prof. Aditya K. Jagannatham
Department of Electrical Engineering
Indian Institute of Technology, Kanpur

Lecture – 32
General Inversion Method for Inverse z -Transform Computation – Method of
Residues

Keywords: Inversion Method, Method of Residues

Hello welcome to another module in this massive open online course. So we are looking
at the z transform, let us look at another technique to compute the inverse transform, this
is known as the general inversion method.

(Refer Slide Time: 00:30)

So let us say X(z) is the z-transform; then x(n) can be computed as $x(n) = \frac{1}{2\pi j}\oint_{\Gamma} X(z)z^{n-1}dz$, where this is a contour integral evaluated over a contour $\Gamma$ in the z-plane. This contour $\Gamma$ is a closed contour, traversed in the counterclockwise sense, enclosing all the poles of $X(z)z^{n-1}$, and it has to lie in the region of convergence of the z-transform.

354
(Refer Slide Time: 03:26)

(Refer Slide Time: 04:07)

This can be simplified as follows: $x(n) = \frac{1}{2\pi j}\oint_{\Gamma} X(z)z^{n-1}dz = \sum_{i=1}^{P}\operatorname{Res}_{z=p_i}\left[X(z)z^{n-1}\right]$, where the residues are evaluated at each pole.

355
(Refer Slide Time: 05:16)

So this P denotes the number of poles of $X(z)z^{n-1}$, and therefore x(n) is basically the sum of all the residues evaluated at the poles of $X(z)z^{n-1}$.

(Refer Slide Time: 06:10)

Now if $p_i$ is a simple pole, that is, a pole of multiplicity 1, then the residue at this simple pole is $\operatorname{Res}_{z=p_i} = \lim_{z\to p_i}(z-p_i)X(z)z^{n-1} = (z-p_i)X(z)z^{n-1}\Big|_{z=p_i}$.

356
(Refer Slide Time: 07:50)

Now for a pole with multiplicity r, where r > 1, the residue is $\frac{1}{(r-1)!}\lim_{z\to p_i}\frac{d^{r-1}}{dz^{r-1}}\left[(z-p_i)^r X(z)z^{n-1}\right]$. So we evaluate the residues of $X(z)z^{n-1}$ at the different poles and then we sum all these residues, which gives us the value of x(n).

(Refer Slide Time: 09:58)

357
So using this general inversion technique let us evaluate the inverse z-transform of $X(z) = \frac{1}{2(z-1)\left(z+\frac{1}{2}\right)}$. Now for n > 0, we have $X(z)z^{n-1} = \frac{z^{n-1}}{2(z-1)\left(z+\frac{1}{2}\right)}$.

(Refer Slide Time: 11:09)

Now the poles are at z = 1 and $z = -\frac{1}{2}$; therefore, x(n) is the sum of the residues evaluated at these poles, $x(n) = \operatorname{Res}_{z=1}\left[X(z)z^{n-1}\right] + \operatorname{Res}_{z=-\frac{1}{2}}\left[X(z)z^{n-1}\right]$. Now the residue at z = 1 is $\operatorname{Res}_{z=1}\left[X(z)z^{n-1}\right] = (z-1)X(z)z^{n-1}\Big|_{z=1} = \frac{1}{3}$.

(Refer Slide Time: 12:13)

358
(Refer Slide Time: 13:00)

Now the residue at $z = -\frac{1}{2}$ is $\operatorname{Res}_{z=-\frac{1}{2}}\left[X(z)z^{n-1}\right] = \left(z+\frac{1}{2}\right)X(z)z^{n-1}\Big|_{z=-\frac{1}{2}} = -\frac{1}{3}\left(-\frac{1}{2}\right)^{n-1}$.

(Refer Slide Time: 14:00)

359
(Refer Slide Time: 14:36)

(Refer Slide Time: 15:11)

n 1
1 1 1
Therefore x(n) is simply the sum of these two residues that is x(n)   .   . So
3 3 2
this is for n > 0 and now for n = 0 we obtain the following.

360
(Refer Slide Time: 15:46)

So consider $X(z)z^{n-1}$ for n = 0, which is $\frac{X(z)}{z}$; now $\frac{X(z)}{z} = \frac{1}{2z(z-1)\left(z+\frac{1}{2}\right)}$. Note that because of this division by z we have an additional pole at z = 0, which we have to take into consideration while evaluating the sum of the residues.

(Refer Slide Time: 16:48)

Now the residue at z = 0 is $z\frac{X(z)}{z}\Big|_{z=0} = X(z)\big|_{z=0} = -1$, and the residue at z = 1 is $(z-1)\frac{X(z)}{z}\Big|_{z=1} = \frac{1}{3}$.

361
(Refer Slide Time: 18:06)

Now the residue at $z = -\frac{1}{2}$ is $\left(z+\frac{1}{2}\right)\frac{X(z)}{z}\Big|_{z=-\frac{1}{2}} = \frac{2}{3}$.

(Refer Slide Time: 19:04)

362
(Refer Slide Time: 19:38)

So x(0) is basically the sum of residues.

(Refer Slide Time: 19:59)

n 1
1 2 1 1 1
So x(0)  1    0 . So x(n)   .   for n  1 which can be written as
3 3 3 3 2


1 1  1  
n 1

x(n)    .    u (n  1) . So this is basically evaluating using the general inversion
3 3  2  
 
technique.
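Assuming the right-sided (causal) inverse, this residue-method answer can be confirmed by summing the truncated series $\sum_{n\ge 1} x(n)z^{-n}$ at a test point with |z| > 1; a minimal Python sketch follows.

import numpy as np

# Check of the residue-method result x(n) = [1/3 - (1/3)(-1/2)^(n-1)] u(n-1) against
# X(z) = 1 / (2 (z - 1)(z + 1/2)) at a test point in the right-sided ROC |z| > 1.
z = 1.5 * np.exp(1j * 0.4)

X_series = sum((1/3 - (1/3) * (-0.5) ** (n - 1)) * z ** (-n) for n in range(1, 400))
X_rational = 1.0 / (2 * (z - 1) * (z + 0.5))
print(np.isclose(X_series, X_rational))   # True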

363
(Refer Slide Time: 21:34)

Let us now look at another simple example: find the inverse z-transform of $X(z) = \frac{z^2}{(z-1)^2(z-e^{-aT})}$. Now consider $X(z)z^{k-1} = \frac{z^{k+1}}{(z-1)^2(z-e^{-aT})}$.

(Refer Slide Time: 22:44)

Now note that this has a simple pole at $z = e^{-aT}$ and there is a pole of multiplicity 2 at z = 1.

364
(Refer Slide Time: 23:27)

First let us find the residue at the simple pole $z = e^{-aT}$, that is, $(z-e^{-aT})X(z)z^{k-1}\Big|_{z=e^{-aT}} = \frac{e^{-a(k+1)T}}{(1-e^{-aT})^2}$.

(Refer Slide Time: 24:15)

Now we have to find the residue at z = 1, where we have a pole of multiplicity 2. So we have to use the general formula for computing the residue at a pole with multiplicity r > 1. Setting r = 2 gives the residue as $\frac{1}{(2-1)!}\frac{d}{dz}\left[(z-1)^2 X(z)z^{k-1}\right]_{z=1} = \frac{k}{1-e^{-aT}} - \frac{e^{-aT}}{(1-e^{-aT})^2}$.

365
(Refer Slide Time: 25:23)

(Refer Slide Time: 26:01)

366
(Refer Slide Time: 27:11)

And therefore x(k) will be the sum of these two residues, that is, $x(k) = \frac{e^{-a(k+1)T}}{(1-e^{-aT})^2} + \frac{k}{1-e^{-aT}} - \frac{e^{-aT}}{(1-e^{-aT})^2}$, and we can simplify this by bringing the first and the last terms together as $x(k) = \frac{k}{1-e^{-aT}} - \frac{e^{-aT}\left(1-e^{-akT}\right)}{(1-e^{-aT})^2}$, for $k \ge 0$. So basically that

completes the problem.

(Refer Slide Time: 28:03)

367
So we have seen a different technique to evaluate the inverse z-transform, the general inversion technique, also known as the method of residues, which evaluates x(n) as the sum of the residues at the poles of $X(z)z^{n-1}$, and we have seen a
couple of examples to understand this technique better. So we will stop here and
continue in the subsequent modules. Thank you very much.

368
Principles of Signals and Systems
Prof. Aditya K. Jagannatham
Department of Electrical Engineering
Indian Institute of Technology, Kanpur

Lecture – 33
Fourier Analysis of Continuous Time Signals and Systems – Introduction

Keywords: Fourier Analysis

Hello welcome to another module in this massive open online course. So we are looking
at the various properties and principles of signals and systems in this course and we have
so far looked at various transforms including the Laplace transform and the z transform
and in this module let us start looking at yet another transform that is used frequently to
understand and analyze the properties and signals and systems, which is the Fourier
transform. So we will start by looking at the Fourier transform for continuous time
signals and subsequently also for discrete time signals and systems.

(Refer Slide Time: 00:56)

So the Fourier transform transforms signals or systems into the frequency or spectral
domain.

369
(Refer Slide Time: 01:56)

(Refer Slide Time: 02:57)

The Fourier transform is one of the most commonly employed and insightful transforms, used not just in electronics and signal processing, but in various fields of science and engineering. The basis for the Fourier transform is a very simple function, the complex exponential signal $e^{j\omega_0 t} = \cos(\omega_0 t) + j\sin(\omega_0 t)$, which is periodic with period $T_0 = \frac{2\pi}{\omega_0}$.

370
(Refer Slide Time: 04:32)

(Refer Slide Time: 05:51)

If you consider $e^{j\omega_0(t+kT_0)} = e^{j\omega_0 t}\,e^{j\omega_0 k\frac{2\pi}{\omega_0}} = e^{j\omega_0 t}\,e^{j2\pi k}$, and since $e^{j2\pi k} = 1$, we have $e^{j\omega_0(t+kT_0)} = e^{j\omega_0 t}$. So this is periodic with period $T_0 = \frac{2\pi}{\omega_0}$.

371
(Refer Slide Time: 07:01)

So $T_0 = \frac{2\pi}{\omega_0}$ is the fundamental period of the signal and $\omega_0$ is the fundamental angular frequency. Now $\omega_0 = 2\pi F_0$, where $F_0$ is the fundamental frequency.

(Refer Slide Time: 09:00)

So let us look at the Fourier series representation and this is defined for a continuous
periodic signal x(t) and let us say the period is T0, which is the fundamental period.

372
(Refer Slide Time: 10:34)

So we have a periodic signal x(t) with fundamental period $T_0$, and this x(t) can be represented as a sum of complex exponentials, $x(t) = \sum_{k=-\infty}^{\infty} C_k e^{jk\omega_0 t}$, which is basically the Fourier series representation. The frequencies $k\omega_0$, where k is an integer, are known as the harmonics, that is, they are multiples of the fundamental frequency. These $C_k$ are the Fourier series coefficients.

(Refer Slide Time: 14:15)

373
These $C_k$ can be obtained as $C_k = \frac{1}{T_0}\int_{-T_0/2}^{T_0/2} x(t)e^{-jk\omega_0 t}dt$.

2

(Refer Slide Time: 15:29)

Now we are going to evaluate over an interval of duration $T_0$, that is, $C_k = \frac{1}{T_0}\int_0^{T_0} x(t)e^{-jk\omega_0 t}dt$, and substitute the expression for x(t), for which a different index l is being used, as follows: $C_k = \frac{1}{T_0}\int_0^{T_0}\left(\sum_{l=-\infty}^{\infty} C_l e^{jl\omega_0 t}\right)e^{-jk\omega_0 t}dt$.

374
(Refer Slide Time: 16:50)

Now we interchange the integral and summation, bringing the summation outside, which implies $C_k = \sum_{l=-\infty}^{\infty} C_l\,\frac{1}{T_0}\int_0^{T_0} e^{j(l-k)\omega_0 t}dt$.
l  T0 0

(Refer Slide Time: 18:41)

Now if l = k this integral becomes $\frac{1}{T_0}\int_0^{T_0} 1\,dt = \frac{1}{T_0}\cdot T_0 = 1$. If l is not equal to k, it becomes $\frac{1}{T_0}\int_0^{T_0} e^{jm\omega_0 t}dt$ with $m = l-k \ne 0$.

375
(Refer Slide Time: 19:51)

When you integrate this over one fundamental period, the integral vanishes; therefore $\frac{1}{T_0}\int_0^{T_0} e^{jm\omega_0 t}dt = 0$. As a result we have $\frac{1}{T_0}\int_0^{T_0} e^{j(l-k)\omega_0 t}dt = \begin{cases}1 & l = k\\ 0 & l \ne k\end{cases}$, and this quantity is nothing but $\delta(l-k)$. So if we substitute this in the previous equation, we will have $\sum_{l=-\infty}^{\infty} C_l\,\frac{1}{T_0}\int_0^{T_0} e^{j(l-k)\omega_0 t}dt = \sum_{l=-\infty}^{\infty} C_l\,\delta(l-k) = C_k$, the kth Fourier coefficient.

(Refer Slide Time: 21:54)

376
(Refer Slide Time: 22:12)

(Refer Slide Time: 23:08)

Now when we set k = 0 here, $C_k = C_0 = \frac{1}{T_0}\int_0^{T_0} x(t)dt$, which is simply the mean or average of x(t) over the interval $T_0$. This $C_0$ is also known as the DC coefficient since it corresponds to zero frequency.
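The analysis formula for $C_k$ can be checked numerically by approximating the integral with samples over one period; the Python sketch below uses an arbitrarily chosen band-limited test signal whose coefficients are known in closed form.

import numpy as np

# Numerical check of C_k = (1/T0) * integral_0^T0 x(t) e^{-j k w0 t} dt for the
# test signal x(t) = 1 + cos(w0 t) + 0.5 sin(2 w0 t) (hypothetical choice).
T0 = 2.0
w0 = 2 * np.pi / T0
t = np.linspace(0, T0, 4000, endpoint=False)
x = 1 + np.cos(w0 * t) + 0.5 * np.sin(2 * w0 * t)

def C(k):
    # rectangle-rule approximation of the analysis integral over one period
    return np.mean(x * np.exp(-1j * k * w0 * t))

# Expected: C0 = 1 (the DC coefficient), C(+-1) = 1/2, C(+-2) = -+ j/4, others ~ 0
print(np.isclose(C(0), 1.0))
print(np.isclose(C(1), 0.5), np.isclose(C(-1), 0.5))
print(np.isclose(C(2), -0.25j), np.isclose(C(-2), 0.25j))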

377
(Refer Slide Time: 23:56)

So what we have done in this module is, we have introduced this new concept, the
Fourier analysis of continuous time signals and systems and in particular we started with
the continuous time periodic signal with fundamental period T0 and we have looked at
the Fourier series expansion of that and how to derive the coefficients in the Fourier
series of x(t). So we will stop here and continue in subsequent modules. Thank you very
much.

378
Principles of Signals and Systems
Prof. Aditya K. Jagannatham
Department of Electrical Engineering
Indian Institute of Technology, Kanpur

Lecture – 34
Fourier Analysis: Complex Exponential Fourier Series, Trigonometric Fourier
Series – Even and Odd Signals

Keywords: Complex Exponential Fourier Series, Trigonometric Fourier Series

Hello welcome to another module in this massive open online course. So we are looking
at the properties of signals and systems and in particular we are trying to introduce the
Fourier analysis or the Fourier transform as a very viable and a convenient method to
understand and analyze the various properties of signals and systems.

(Refer Slide Time: 00:39)

So we are looking initially at the Fourier analysis for continuous time signals and
systems, subsequently we will look at the Fourier analysis for discrete time system
signals and systems. So we are looking at the Fourier series representation.

379
(Refer Slide Time: 01:25)

Any periodic signal with fundamental period $T_0$ can be expressed as the sum of an infinite number of complex exponentials at the fundamental frequency $\omega_0$ and its various harmonics, that is, frequencies $k\omega_0$, where k is an integer.

(Refer Slide Time: 02:11)

380

So this sum can be expressed as $x(t) = \sum_{k=-\infty}^{\infty} C_k e^{jk\omega_0 t}$, where $\omega_0$ is the fundamental angular frequency, $\omega_0 = \frac{2\pi}{T_0}$, and the coefficient $C_k$ in the Fourier series is given by $C_k = \frac{1}{T_0}\int_0^{T_0} x(t)e^{-jk\omega_0 t}dt$.

(Refer Slide Time: 03:31)

(Refer Slide Time: 03:58)

Now let us look at some of the properties of this Fourier series representation.

381
(Refer Slide Time: 04:57)

Now if x(t) is a real function or a real signal, then we have $C_k = \frac{1}{T_0}\int_0^{T_0} x(t)e^{-jk\omega_0 t}dt$. If we take $C_k^* = \left[\frac{1}{T_0}\int_0^{T_0} x(t)e^{-jk\omega_0 t}dt\right]^*$ and bring the conjugate operation inside, we will have $C_k^* = \frac{1}{T_0}\int_0^{T_0} x^*(t)e^{jk\omega_0 t}dt$.

(Refer Slide Time: 05:53)

382
Now since x(t) is real, $x^*(t) = x(t)$, which implies $C_k^* = \frac{1}{T_0}\int_0^{T_0} x(t)e^{jk\omega_0 t}dt = \frac{1}{T_0}\int_0^{T_0} x(t)e^{-j(-k)\omega_0 t}dt$, which is nothing but $C_{-k}$, the Fourier series coefficient at the integer -k. So this implies that for a real signal x(t), $C_k^* = C_{-k}$.

(Refer Slide Time: 07:31)

That is, the Fourier series coefficients exhibit conjugate symmetry. Now if we look at the magnitude spectrum of $C_k$, it will have even symmetry because $|C_k| = |C_k^*| = |C_{-k}|$, that is, $|C_k| = |C_{-k}|$.

(Refer Slide Time: 08:21)

383
Similarly, if you look at the phase spectrum, $\angle C_{-k} = \angle C_k^* = -\angle C_k$, which implies the phase spectrum exhibits odd symmetry.

(Refer Slide Time: 09:16)

(Refer Slide Time: 10:27)

Let us now move to a different representation which is known as the trigonometric Fourier series. Let x(t) be a periodic signal that can be represented as $x(t) = \frac{a_0}{2} + \sum_{k=1}^{\infty}\left[a_k\cos(k\omega_0 t) + b_k\sin(k\omega_0 t)\right]$; these are trigonometric functions and not complex exponentials. These coefficients $a_k$, $b_k$ can in general be complex.

384
(Refer Slide Time: 11:58)

And the cosines and the sines are orthogonal, that is, if you multiply $\cos(k\omega_0 t)$ by $\sin(k\omega_0 t)$ and integrate over one period, the product vanishes. So we can see that these basis functions, the cosines and sines, are chosen for a particular reason, because they are orthogonal, and we can readily derive the expressions for the coefficients of the trigonometric Fourier series.

(Refer Slide Time: 13:41)

385
So $a_k = \frac{2}{T_0}\int_0^{T_0} x(t)\cos(k\omega_0 t)dt$ and $b_k = \frac{2}{T_0}\int_0^{T_0} x(t)\sin(k\omega_0 t)dt$. Now the Fourier series $x(t) = \sum_{k=-\infty}^{\infty} C_k e^{jk\omega_0 t}$ can be written as $x(t) = C_0 + \sum_{k=1}^{\infty}\left[C_k e^{jk\omega_0 t} + C_{-k}e^{-jk\omega_0 t}\right]$, and this can be further simplified as $x(t) = C_0 + \sum_{k=1}^{\infty}\left[(C_k + C_{-k})\cos(k\omega_0 t) + j(C_k - C_{-k})\sin(k\omega_0 t)\right]$.

(Refer Slide Time: 15:22)

And therefore by comparing the coefficients of the Fourier series and the trigonometric Fourier series, we can see that $C_0$, the DC coefficient corresponding to zero frequency, is $C_0 = \frac{a_0}{2}$, and $(C_k + C_{-k}) = a_k$, $j(C_k - C_{-k}) = b_k$.
2

386
(Refer Slide Time: 16:22)

(Refer Slide Time: 17:57)

That is the relation between the coefficients of the Fourier series and the trigonometric
Fourier series.

387
(Refer Slide Time: 17:34)

So this gives the trigonometric Fourier series in terms of the Fourier series; for the Fourier series in terms of the trigonometric Fourier series, we have $C_k = \frac{a_k - jb_k}{2}$, $C_{-k} = \frac{a_k + jb_k}{2}$. These are the relationships of the Fourier series coefficients in terms of the coefficients of the trigonometric series.

(Refer Slide Time: 18:27)

Now if x(t) is real then the coefficients of the trigonometric series $a_k$ and $b_k$ are real, and we have $a_k = C_k + C_{-k} = C_k + C_k^* = 2\operatorname{Re}\{C_k\}$.

388
(Refer Slide Time: 19:58)

Now $b_k = j(C_k - C_{-k}) = j(C_k - C_k^*) = -2\operatorname{Im}\{C_k\}$.
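These relations between the exponential and trigonometric coefficients are easy to verify numerically; the following Python sketch uses an arbitrary real test signal.

import numpy as np

# Numerical check of a_k = 2 Re{C_k} and b_k = -2 Im{C_k} for a real periodic signal.
T0 = 1.0
w0 = 2 * np.pi / T0
t = np.linspace(0, T0, 4000, endpoint=False)
x = 0.3 + 2 * np.cos(w0 * t) - 1.5 * np.sin(3 * w0 * t)   # hypothetical test signal

for k in [1, 2, 3]:
    Ck = np.mean(x * np.exp(-1j * k * w0 * t))            # exponential FS coefficient
    ak = 2 * np.mean(x * np.cos(k * w0 * t))              # trigonometric FS coefficients
    bk = 2 * np.mean(x * np.sin(k * w0 * t))
    print(np.isclose(ak, 2 * Ck.real), np.isclose(bk, -2 * Ck.imag))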

(Refer Slide Time: 20:37)

Now if x(t) is even then in the trigonometric Fourier series it is easy to infer that bk = 0,
since these bks are coefficients of the sin function which is an odd function.

389
(Refer Slide Time: 21:50)

So the trigonometric Fourier series is given as $x(t) = \frac{a_0}{2} + \sum_{k=1}^{\infty} a_k\cos(k\omega_0 t)$, that is, for an even signal only the cosine terms remain.

(Refer Slide Time: 23:07)

However, if x(t) is odd then the $a_k$ are 0, since they are the coefficients of the cosine function, which is an even function. So when x(t) is odd, the coefficients of the even components in the trigonometric Fourier series, that is, the cosine functions, vanish, as does the DC component. So the trigonometric Fourier series can be expressed as

390

$x(t) = \sum_{k=1}^{\infty} b_k\sin(k\omega_0 t)$, where $\omega_0 = \frac{2\pi}{T_0}$ is the fundamental angular frequency. So that completes the discussion on the trigonometric Fourier series.

(Refer Slide Time: 23:58)

So in this module we have started with the discussion of the Fourier series, the
trigonometric Fourier series and related these two expansions. We have also seen the
various properties of the coefficients. So we will stop here and continue in the
subsequent modules. Thank you very much.

391
Principles of Signals and Systems
Prof. Aditya K. Jagannatham
Department of Electrical Engineering
Indian Institute of Technology, Kanpur

Lecture – 35
Conditions for Existence of Fourier Series – Dirichlet Conditions, Magnitude/ Phase
Spectrum, Parseval’s Theorem

Keywords: Dirichlet Conditions, Magnitude/ Phase Spectrum, Parseval’s Theorem

Hello welcome to another module in this massive open online course. So we are looking
at the Fourier analysis and in particular the Fourier series of continuous time periodic
functions. So in this module let us start by looking at the conditions for the existence of
the Fourier series.

(Refer Slide Time: 00:31)

So for the Fourier series to converge there are certain conditions known as the Dirichlet
conditions, known after the mathematician Dirichlet.

392
(Refer Slide Time: 01:38)

So a periodic signal x(t) has a Fourier series representation, if it satisfies the Dirichlet
conditions.

(Refer Slide Time: 02:53)

The Dirichlet conditions which are to be satisfied for the existence of the Fourier series
are as follows. First, x(t) is absolutely integrable over a period, that is, $\int_0^{T_0}|x(t)|\,dt < \infty$; if we look at this integral of the magnitude of x(t) over a period, it has

393
to converge or this has to be a finite quantity. The second condition is x(t) has a finite
number of maxima or minima in any finite interval over time.

(Refer Slide Time: 04:08)

The third condition is that x(t) has a finite number of discontinuities within any finite
interval over t and each of these discontinuities is finite.

(Refer Slide Time: 05:56)

So these are the three Dirichlet conditions. Now an important thing to note is that, these
are sufficient conditions, these are not necessary.

394
(Refer Slide Time: 07:26)

So if these conditions are satisfied by a periodic signal x(t) then the Fourier series exists.

(Refer Slide Time: 09:20)

Let us also look at the amplitude and phase spectrum of this continuous periodic signal x(t). The complex Fourier series coefficients of x(t) can be expressed as $C_k = |C_k|e^{j\phi_k}$, where $|C_k|$ is the magnitude of $C_k$ and $\phi_k = \angle C_k$ is the phase of $C_k$.

395
(Refer Slide Time: 11:02)

Now if we plot $|C_k|$ versus the discrete frequencies $k\omega_0$, this is termed the magnitude spectrum (or amplitude spectrum) of the signal x(t). Similarly, a plot of $\phi_k$ versus $k\omega_0$ is termed the phase spectrum.

(Refer Slide Time: 12:59)

Now observe that these $|C_k|$ and $\phi_k$ exist only at a discrete set of points, that is, at $k\omega_0$, where $\omega_0$ is the fundamental angular frequency, and therefore this is also known as a

396
discrete frequency spectrum or a line spectrum. So note that $|C_k|$, $\phi_k$ are not continuous functions of $\omega$; they exist only at a discrete set of frequencies.

(Refer Slide Time: 14:53)

Now for a real periodic signal, the Fourier coefficient $C_{-k} = C_k^*$, corresponding to the frequency $-k\omega_0$.

(Refer Slide Time: 16:21)

397
Now this implies that $|C_{-k}| = |C_k|$, so the magnitude spectrum is an even function of frequency. Further, since they are complex conjugates of each other, this also implies that the phases satisfy $\phi_{-k} = -\phi_k$, so the phase spectrum is an odd function. So we have a magnitude spectrum which is an even function and a phase spectrum which is an odd function for a real periodic signal x(t).

(Refer Slide Time: 17:44)

So let us now look at the power of the periodic signal x(t), which is given as $P = \frac{1}{T_0}\int_0^{T_0}|x(t)|^2dt$. Now consider the Fourier series of x(t), $x(t) = \sum_{k=-\infty}^{\infty} C_k e^{jk\omega_0 t}$.

398
(Refer Slide Time: 18:48)

Then $|x(t)|^2 = x(t)x^*(t) = \left(\sum_{k=-\infty}^{\infty} C_k e^{jk\omega_0 t}\right)\left(\sum_{m=-\infty}^{\infty} C_m e^{jm\omega_0 t}\right)^* = \sum_{k=-\infty}^{\infty}\sum_{m=-\infty}^{\infty} C_k C_m^* e^{j(k-m)\omega_0 t}$.

(Refer Slide Time: 19:47)

399
(Refer Slide Time: 21:15)

Therefore, for the power we have $P = \frac{1}{T_0}\int_0^{T_0}\sum_{k=-\infty}^{\infty}\sum_{m=-\infty}^{\infty} C_k C_m^* e^{j(k-m)\omega_0 t}dt$, and if we interchange the sum and integral, bringing the summations outside, this gives $\sum_{k=-\infty}^{\infty}\sum_{m=-\infty}^{\infty} C_k C_m^*\,\frac{1}{T_0}\int_0^{T_0} e^{j(k-m)\omega_0 t}dt$.

(Refer Slide Time: 22:26)

400
And the integral of any harmonic of $\omega_0$ over the period $T_0$ is 0, unless k = m, in which case it is 1. So $\frac{1}{T_0}\int_0^{T_0} e^{j(k-m)\omega_0 t}dt = \begin{cases}1 & \text{if } k = m\\ 0 & \text{otherwise}\end{cases}$, which is basically the discrete impulse $\delta(k-m)$.

(Refer Slide Time: 23:51)

 
So this is $\sum_{k=-\infty}^{\infty}\sum_{m=-\infty}^{\infty} C_k C_m^*\,\delta(k-m)$, and only the terms corresponding to k = m survive while all the rest of the terms are 0, so this is $\sum_{k=-\infty}^{\infty}|C_k|^2$. Therefore we have shown a very important property: the power of this periodic signal computed in the time domain is given by $P = \frac{1}{T_0}\int_0^{T_0}|x(t)|^2dt = \sum_{k=-\infty}^{\infty}|C_k|^2$, and this is known as Parseval’s identity or Parseval’s theorem.
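Parseval's theorem for the Fourier series can likewise be verified numerically; the Python sketch below uses an arbitrary test signal with only a few harmonics.

import numpy as np

# Numerical check of Parseval: (1/T0) * integral |x(t)|^2 dt = sum_k |C_k|^2
T0 = 2.0
w0 = 2 * np.pi / T0
t = np.linspace(0, T0, 4000, endpoint=False)
x = 1 + np.cos(w0 * t) + 0.5 * np.sin(2 * w0 * t)          # hypothetical test signal

P_time = np.mean(np.abs(x) ** 2)                            # power in the time domain

ks = np.arange(-5, 6)                                       # the test signal has only |k| <= 2
Ck = np.array([np.mean(x * np.exp(-1j * k * w0 * t)) for k in ks])
P_freq = np.sum(np.abs(Ck) ** 2)                            # power from the coefficients

print(np.isclose(P_time, P_freq))   # True: 1 + 2*(1/2)^2 + 2*(1/4)^2 = 1.625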

So in this module we have looked at the Dirichlet conditions for the existence of the
Fourier series representation or the convergence of the Fourier series representation of a
periodic signal x(t) and we have also seen the Parseval’s relation which relates the power
of continuous time periodic signal x(t) to its Fourier series representation. So we will
stop here and look at other aspects in the subsequent modules. Thank you very much.

401
Principles of Signals and Systems
Prof. Aditya K. Jagannatham
Department of Electrical Engineering
Indian Institute of Technology, Kanpur

Lecture – 36
Fourier Transform (FT): Definition, Inverse Fourier Transform, Fourier Spectrum,
Dirichlet Conditions, Relation to Laplace Transform, FT of Unit Impulses

Keywords: Inverse Fourier Transform, Fourier Spectrum

Hello welcome to another module in this massive open online course. So we are looking
at the Fourier series representation and its various properties. In this module, let us start
looking at a different topic that is the Fourier transform. The Fourier transform is defined
for a continuous aperiodic signal.

(Refer Slide Time: 00:28)

402
(Refer Slide Time: 01:35)

The definition of the Fourier transform, as a function of the angular frequency $\omega$, is $X(\omega) = \int_{-\infty}^{\infty} x(t)e^{-j\omega t}dt$.


(Refer Slide Time: 02:14)

The Fourier transform framework is one of the most commonly used transforms to
understand and analyze the various properties and the behavior of signals and systems.

403

Similarly, the inverse Fourier transform is given as $x(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} X(\omega)e^{j\omega t}d\omega$, and this is simply represented as $\mathcal{F}^{-1}\{X(\omega)\}$, the inverse Fourier transform.

(Refer Slide Time: 03:29)

(Refer Slide Time: 04:09)

And x(t) and X ( ) constitute a Fourier transform pair and is represented simply using a
double headed arrow that is x(t )  X () . The Fourier spectrum can be obtained as

X ( )  X ( ) e j ( ) where  ( ) is the angle or the phase of  .

404
(Refer Slide Time: 05:46)

X ( ) is the magnitude spectrum of x(t) that is X ( ) versus angular frequency  and

the angle of X of omega or the phase of phi of omega this is the phase spectrum.

(Refer Slide Time: 06:47)

So you have the magnitude spectrum and the phase spectrum of the signal x(t). Now for real signals x(t), we have $X(\omega) = \int_{-\infty}^{\infty} x(t)e^{-j\omega t}dt$.


405
(Refer Slide Time: 07:19)


  
If you consider $[X(\omega)]^* = \left[\int_{-\infty}^{\infty} x(t)e^{-j\omega t}dt\right]^* = \int_{-\infty}^{\infty} x^*(t)e^{j\omega t}dt$, and we know $x^*(t) = x(t)$ for a real signal, this implies $[X(\omega)]^* = \int_{-\infty}^{\infty} x(t)e^{j\omega t}dt$, which is nothing else but $X(-\omega)$.


(Refer Slide Time: 08:26)

So therefore what we have is $X^*(\omega) = X(-\omega)$. So for a real signal, the Fourier transform at $-\omega$ is the conjugate of the Fourier transform at $\omega$.

406
(Refer Slide Time: 08:42)

So we have X ( )  X ( ) which basically implies that the magnitude spectrum is an

even function of  for a real signal and since X  ( ) has a negative phase,
 ()   () which implies the phase spectrum is an odd function of  .

(Refer Slide Time: 11:00)

So now consider the condition for convergence of $X(\omega)$, similar to the Dirichlet


conditions which are sufficient but not necessary conditions. If the signal x(t) satisfies

407
the Dirichlet conditions, then the Fourier transform exists, but if the Fourier transform
exists it does not automatically imply that the Dirichlet conditions are satisfied.

(Refer Slide Time: 12:26)

These conditions are as follows. First, x(t) is absolutely integrable, which implies that $\int_{-\infty}^{\infty}|x(t)|dt < \infty$, that is, this has to be a finite quantity. The next condition is that x(t) has a finite number of maxima and minima in any finite interval.

(Refer Slide Time: 13:38)

408
The third condition is that x(t) has a finite number of discontinuities in any finite interval, and at each of these discontinuities it takes a finite value.

(Refer Slide Time: 15:11)

So these are the Dirichlet conditions for the convergence of the Fourier integral and these
are sufficient, but not necessary and we will see examples of signals which do not satisfy
the Dirichlet conditions, but the Fourier transform still exists.

(Refer Slide Time: 15:47)

409
Now we want to see the relation between the Fourier and the Laplace transforms. We have the Fourier transform $X(\omega) = \int_{-\infty}^{\infty} x(t)e^{-j\omega t}dt$.


(Refer Slide Time: 16:46)

The Laplace transform is given as $X(s) = \int_{-\infty}^{\infty} x(t)e^{-st}dt$.


(Refer Slide Time: 17:32)

410
This ‘s’ is a complex number, $s = \sigma + j\omega$, which means $X(s) = X(\sigma+j\omega) = \int_{-\infty}^{\infty} x(t)e^{-(\sigma+j\omega)t}dt = \int_{-\infty}^{\infty}\left[x(t)e^{-\sigma t}\right]e^{-j\omega t}dt$, where $\sigma$ is the real part, $j\omega$ is the imaginary part, and s is the complex frequency. Now we can see that this is nothing but the Fourier transform of $x(t)e^{-\sigma t}$. So we have seen that the Laplace transform of x(t) at $s = \sigma + j\omega$ is basically the Fourier transform of $x(t)e^{-\sigma t}$.

(Refer Slide Time: 19:13)

So now we can see that if $\sigma = 0$, that is $s = j\omega$, then we have $X(j\omega) = \mathcal{F}\{x(t)\}$.

411
(Refer Slide Time: 21:00)

So we can say that the Fourier transform at $\omega$ can be obtained by setting $s = j\omega$ in the Laplace transform.

(Refer Slide Time: 21:42)

However, note that this is only true under some conditions; specifically, it holds only if the signal x(t) is absolutely integrable, that is, $\int_{-\infty}^{\infty}|x(t)|dt < \infty$.

412
(Refer Slide Time: 23:55)

Now one of the most prominent signals is the unit impulse function, the Dirac delta function $\delta(t)$, and we know that the Laplace transform of the Dirac delta is unity. Now let us look at the Fourier transform of this, which is $\int_{-\infty}^{\infty}\delta(t)e^{-j\omega t}dt = e^{-j\omega t}\Big|_{t=0} = 1$, by the sifting property of the delta function. Therefore the Fourier transform of the unit impulse function is one, and you can see that this is the same as its Laplace transform.

(Refer Slide Time: 24:58)

413
So here there is no need to replace s by $j\omega$; the delta function is non-negative and is an absolutely integrable function.

So what we have done in this module is basically we have introduced the concept of the
Fourier transform, we looked at the properties of Fourier transform, the magnitude and
the phase spectrum and also the conditions for the existence of the Fourier transform and
the relation between the Fourier transform and the Laplace transform. So we will stop
here and continue with other aspects in the subsequent modules. Thank you very much.

414
Principles of Signals and Systems
Prof. Aditya K. Jagannatham
Department of Electrical Engineering
Indian Institute of Technology, Kanpur

Lecture - 37
Fourier Transform of Exponential, Unit Step Function, Properties of Fourier
Transform – Linearity, Time Shifting, Frequency Shifting, Time-Reversal

Keywords: Fourier Transform, Linearity, Time Shifting, Frequency Shifting, Time-Reversal

Hello welcome to another module in this massive open online course. So we are looking
at the Fourier transform and the properties of the Fourier transform of continuous
signals.

(Refer Slide Time: 00:27)

We have looked at the Fourier transform of the impulse signal, so let us now look at the Fourier transform of an exponential signal, that is, a signal of the form $x(t) = e^{-at}u(t)$, assuming that a is a positive quantity. Notice that this signal is absolutely integrable.

415
(Refer Slide Time: 01:26)

Now if you look at the Laplace transform of this signal, we have $\mathcal{L}\{e^{-at}u(t)\} = \int_{-\infty}^{\infty} e^{-at}u(t)e^{-st}dt = \int_0^{\infty} e^{-at}e^{-st}dt = \int_0^{\infty} e^{-(s+a)t}dt = \left.\frac{e^{-(s+a)t}}{-(s+a)}\right|_0^{\infty} = \frac{1}{s+a}$.

(Refer Slide Time: 01:56)

416
(Refer Slide Time: 03:08)

For convergence we require the real part of s + a to be greater than 0, which implies that the ROC is of the form $\operatorname{Re}\{s\} > -a$. Since ‘a’ is positive, this includes the real part of s = 0. Now, looking at the complex frequency $s = \sigma + j\omega$, the real part of s = 0 means $\sigma = 0$, so this implies $s = j\omega$ is in the ROC.

(Refer Slide Time: 04:26)

417
So $s = j\omega$ belongs to the ROC if a > 0, and this is nothing but the Fourier transform; if you evaluate the Fourier transform you will observe that it equals $\int_0^{\infty} e^{-at}e^{-j\omega t}dt = \left.\frac{e^{-(a+j\omega)t}}{-(a+j\omega)}\right|_0^{\infty} = \frac{1}{a+j\omega}$.
a  j

(Refer Slide Time: 05:39)

Here we are using $\lim_{t\to\infty} e^{-(a+j\omega)t} = 0$ since a > 0; this is a decaying exponential which is absolutely integrable, and therefore the Fourier transform exists. The Fourier transform is obtained by setting $s = j\omega$ in the Laplace transform. Now let us consider a signal which is not absolutely integrable.
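The transform pair $e^{-at}u(t) \leftrightarrow \frac{1}{a+j\omega}$ can be checked by numerically approximating the Fourier integral; a minimal Python sketch with a = 2 (an arbitrary choice) follows.

import numpy as np

# Numerical check of F{ e^{-a t} u(t) } = 1 / (a + j w) by direct integration
a = 2.0
t = np.linspace(0, 40, 200001)          # e^{-a t} is negligible beyond t = 40 for a = 2
dt = t[1] - t[0]
x = np.exp(-a * t)

for w in [0.0, 1.0, 5.0]:
    f = x * np.exp(-1j * w * t)
    X_num = np.sum((f[:-1] + f[1:]) / 2) * dt      # trapezoidal approximation of the integral
    X_closed = 1.0 / (a + 1j * w)
    print(np.isclose(X_num, X_closed, atol=1e-6))  # True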

418
(Refer Slide Time: 07:30)

 
Consider the unit step function, x(t) = u(t). Now $\int_{-\infty}^{\infty}|x(t)|dt = \int_0^{\infty} 1\,dt$, and this is not finite. Therefore this signal is not absolutely integrable.

(Refer Slide Time: 08:35)

419
(Refer Slide Time: 08:54)

Now if you look at the Laplace transform, we have $\mathcal{L}\{u(t)\} = \frac{1}{s}$, and the region of convergence of the Laplace transform is $\operatorname{Re}\{s\} > 0$, which means $\sigma > 0$, where $\sigma$ is the real part of the complex frequency. So the ROC does not include $s = j\omega$.

(Refer Slide Time: 10:01)

And if you look at the Fourier transform you cannot evaluate this in a straightforward
fashion, but the Fourier transform of this exists and the Fourier transform can be shown

420
to be given as $\mathcal{F}\{u(t)\} = \pi\delta(\omega) + \frac{1}{j\omega}$. In fact, if you set $s = j\omega$ in the Laplace transform you will obtain $\frac{1}{j\omega}$ and will miss the term $\pi\delta(\omega)$. So you can see that the Fourier transform is not obtained by setting $s = j\omega$ in the Laplace transform, because the real part of s = 0 is not in the region of convergence, and you can also see that this function is not absolutely integrable. Let us now look at the properties of the Fourier transform.

(Refer Slide Time: 11:54)

(Refer Slide Time: 12:31)

421
The first property that we want to look at is termed the linearity property of the Fourier transform. We have two signals, $x_1(t)$ with Fourier transform $X_1(\omega)$ and $x_2(t)$ with Fourier transform $X_2(\omega)$, where $\omega$ is the angular frequency.

(Refer Slide Time: 13:25)

Then $a_1x_1(t) + a_2x_2(t) \leftrightarrow a_1X_1(\omega) + a_2X_2(\omega)$; this is the linearity of the Fourier transform.

(Refer Slide Time: 13:53)

422
Another property is the time shifting property, where we consider x(t) with Fourier transform $X(\omega)$ and a time shift of $t_0$, that is, x(t) becomes $x(t-t_0)$, which is x(t) delayed by $t_0$. So $x(t) \leftrightarrow X(\omega)$ and we are considering $x(t-t_0) \leftrightarrow \tilde{X}(\omega)$.

(Refer Slide Time: 14:53)


Now X ( )   x(t  t )e
 jt
0 dt . Now set t  t0  t  dt  dt and therefore now if you


 

 x(t )e  x(t )e
 j ( t  t 0 )
simplify this we have X ( )  dt   j t
dte jt0 .
 

423
(Refer Slide Time: 15:27)

So this is nothing but $X(\omega)e^{-j\omega t_0}$. We have $\tilde{X}(\omega) = X(\omega)e^{-j\omega t_0}$, where $e^{-j\omega t_0}$ is a complex exponential.

(Refer Slide Time: 16:44)

So we have the Fourier transform of $x(t-t_0)$ as $X(\omega)e^{-j\omega t_0}$. So a shift in time results in a modulation in the frequency domain.

424
(Refer Slide Time: 17:57)

Now let us look at another property, the frequency shifting property of the Fourier transform. This is a dual of the time shift: a modulation in time corresponds to a shift in the frequency domain, that is, $e^{j\omega_0 t}x(t) \leftrightarrow X(\omega - \omega_0)$. Another interesting property is the time scaling property.

(Refer Slide Time: 19:17)

Consider a > 0, a positive constant, and let $x(t) \leftrightarrow X(\omega)$. Then $x(at) \leftrightarrow \frac{1}{a}X\left(\frac{\omega}{a}\right)$ for a > 0. Now if a > 1, then x(at) is a shrunk version of x(t), which means in the frequency domain $\frac{1}{a}X\left(\frac{\omega}{a}\right)$ expands by a factor of a. Similarly, if a < 1, then x(at) is an expanded version of x(t) and

425
correspondingly in the frequency domain $\frac{1}{a}X\left(\frac{\omega}{a}\right)$ is a shrunk version of $X(\omega)$. So basically, expansion in time leads to shrinking in frequency, and similarly shrinking, or scaling down, in time leads to expansion, or scaling up, in frequency.

So as a signal becomes more and more localized in time, it becomes less and less localized in frequency. The Fourier transform of an impulse in time is 1, that is, it spreads over the entire frequency axis. As the signal starts expanding in time, it starts shrinking in frequency, and this is a well known property of the Fourier transform, the time-frequency localization.

(Refer Slide Time: 22:33)

So this is an important property, and we can see that for a > 0 the Fourier transform of x(at) is $\tilde{X}(\omega) = \int_{-\infty}^{\infty} x(at)e^{-j\omega t}dt$. Now set $at = t'$, so $dt = \frac{dt'}{a}$, which implies $\tilde{X}(\omega) = \int_{-\infty}^{\infty} x(t')e^{-j\frac{\omega}{a}t'}\frac{dt'}{a} = \frac{1}{a}\int_{-\infty}^{\infty} x(t')e^{-j\frac{\omega}{a}t'}dt'$, which is nothing but $\frac{1}{a}X\left(\frac{\omega}{a}\right)$, assuming a > 0.

426
(Refer Slide Time: 23:49)

So similarly you can show the time reversal property.

(Refer Slide Time: 24:33)

The time reversal property of the Fourier transform states that if $x(t) \leftrightarrow X(\omega)$, then $x(-t) \leftrightarrow X(-\omega)$, that is, reversal in time leads to a reversal in frequency, and this can be shown easily.

So what we have seen in this module is that we continued our analysis of the Fourier
transform, we have looked at the Fourier transform of the exponential and the unit step
functions and we also started looking at some of the interesting properties of the Fourier
transform. So we will stop here and continue in the subsequent modules. Thank you very
much.

427
Principles of Signals and Systems
Prof. Aditya K. Jagannatham
Department of Electrical Engineering
Indian Institute of Technology, Kanpur

Lecture - 38
Properties of Fourier Transform - Duality, Differentiation in Time, Convolution

Keywords: Duality, Differentiation in Time, Convolution

Hello welcome to another module in this massive open online course. So we are looking
at the Fourier analysis or Fourier transform for continuous time and aperiodic signals and
we are looking at the properties of the Fourier transform. So let us continue looking at
the properties; we have looked at the time reversal property, that is, if x(t) has the Fourier transform $X(\omega)$, then x(-t) has the Fourier transform $X(-\omega)$.

(Refer Slide Time: 00:45)

So let us continue our discussion on the properties of the Fourier transform. Let us look at another property, known as the duality or symmetry property of the Fourier transform. If x(t) has the Fourier transform $X(\omega)$, what can we say about the Fourier transform of $X(t)$?

428
(Refer Slide Time: 02:09)


Let us start with the inverse Fourier transform, $x(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} X(\omega)e^{j\omega t}d\omega$. Now we replace $\omega$ by t and t by $\omega$.

(Refer Slide Time: 03:07)


So what we are going to have is $x(\omega) = \frac{1}{2\pi}\int_{-\infty}^{\infty} X(t)e^{j\omega t}dt$. Now $x(-\omega) = \frac{1}{2\pi}\int_{-\infty}^{\infty} X(t)e^{-j\omega t}dt$, which basically implies that $2\pi x(-\omega) = \int_{-\infty}^{\infty} X(t)e^{-j\omega t}dt$, and this is nothing but the Fourier transform of X(t).

429
(Refer Slide Time: 04:14)

So we have $X(t) \leftrightarrow 2\pi x(-\omega)$, where $x(t) \leftrightarrow X(\omega)$, and this is known as the symmetry or duality property of the Fourier transform. Let us look at a simple example to understand this. We know that $\delta(t)$ has the Fourier transform 1, which means by the time shifting property $\delta(t-t_0)$ has the Fourier transform $e^{-j\omega t_0}$; let $\delta(t-t_0)$ be x(t) and let $e^{-j\omega t_0}$ be $X(\omega)$.
e j0t be X ( ) .

(Refer Slide Time: 05:44)

430
Now using duality we have $X(t) = e^{-jt_0 t}$, so $e^{-jt_0 t} \leftrightarrow 2\pi x(-\omega) = 2\pi\delta(-\omega-t_0) = 2\pi\delta(\omega+t_0)$. So basically what we have using duality is that $e^{-jt_0 t}$ has the Fourier transform $2\pi\delta(\omega+t_0)$. Now replacing $t_0$ by $-\omega_0$, we have $e^{j\omega_0 t} \leftrightarrow 2\pi\delta(\omega-\omega_0)$, which is an impulse located at $\omega_0$ scaled by $2\pi$. So we have demonstrated this using the property of duality.

(Refer Slide Time: 08:31)

Let us look at differentiation in time. We have x(t) with Fourier transform $X(\omega)$ and we need to find the Fourier transform of $\frac{dx(t)}{dt}$. Let us start with the inverse Fourier transform, $x(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} X(\omega)e^{j\omega t}d\omega$; then $\frac{dx(t)}{dt} = \frac{1}{2\pi}\frac{d}{dt}\int_{-\infty}^{\infty} X(\omega)e^{j\omega t}d\omega$. Moving the differentiation inside the integral, we have $\frac{1}{2\pi}\int_{-\infty}^{\infty} X(\omega)\frac{d}{dt}e^{j\omega t}d\omega = \frac{1}{2\pi}\int_{-\infty}^{\infty} j\omega X(\omega)e^{j\omega t}d\omega$, and this is nothing but the inverse Fourier transform of $j\omega X(\omega)$. Therefore differentiation in time is equivalent to multiplication by $j\omega$ in the frequency domain.

431
(Refer Slide Time: 10:00)

(Refer Slide Time: 11:03)

432
(Refer Slide Time: 12:11)

Similarly, we also have differentiation in frequency, given by $-jt\,x(t) \leftrightarrow \frac{dX(\omega)}{d\omega}$; that is, if we differentiate in frequency with respect to $\omega$, the corresponding time domain signal is multiplied by $-jt$. This can be proved similarly to the proof for differentiation in time. Now consider integration in time. Let us say that x(t) has Fourier transform $X(\omega)$. Then we have $\int_{-\infty}^{t} x(\tau)d\tau \leftrightarrow \pi X(0)\delta(\omega) + \frac{1}{j\omega}X(\omega)$. Now another important property is the convolution property of the Fourier transform.

(Refer Slide Time: 15:20)

433
Now we have $x_1(t) \leftrightarrow X_1(\omega)$ and $x_2(t) \leftrightarrow X_2(\omega)$. The convolution of $x_1(t)$ and $x_2(t)$ is given as $x(t) = x_1(t)*x_2(t) = \int_{-\infty}^{\infty} x_1(\tau)x_2(t-\tau)d\tau$. Now we want to look at the Fourier transform of the convolution of these two signals.

(Refer Slide Time: 17:07)


This is X ( )   x(t) e
 jt
dt .


(Refer Slide Time: 17:58)

434

   jt
Now we can substitute for x(t), that is, $X(\omega) = \int_{-\infty}^{\infty}\left[\int_{-\infty}^{\infty} x_1(\tau)x_2(t-\tau)d\tau\right]e^{-j\omega t}dt$. Now we interchange the order of integration to get $X(\omega) = \int_{-\infty}^{\infty} x_1(\tau)\left[\int_{-\infty}^{\infty} x_2(t-\tau)e^{-j\omega t}dt\right]d\tau$, and thus the outer integral becomes an integral with respect to $\tau$. Now we can see that the inner integral is nothing but the Fourier transform of the signal $x_2(t)$ delayed by $\tau$. Therefore this becomes $X(\omega) = \int_{-\infty}^{\infty} x_1(\tau)X_2(\omega)e^{-j\omega\tau}d\tau = X_2(\omega)\int_{-\infty}^{\infty} x_1(\tau)e^{-j\omega\tau}d\tau$.

(Refer Slide Time: 20:53)

So the integral which is left is nothing but the Fourier transform of $x_1(t)$. So this becomes $X(\omega) = X_1(\omega)X_2(\omega)$, which is the Fourier transform of the convolution of the two signals.

435
(Refer Slide Time: 22:03)

So therefore we can see that convolution in time leads to multiplication in the frequency
domain. This is one of the most interesting and important properties which makes the
Fourier transform very useful.
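A discrete, sampled illustration of this property is easy to set up with the FFT: linear convolution of two sequences matches the inverse transform of the product of their zero-padded transforms. The Python sketch below is only a discrete analogue of the continuous-time statement.

import numpy as np

# Discrete illustration of the convolution property via the FFT, with zero padding so
# that circular convolution matches linear convolution.
rng = np.random.default_rng(0)
x1 = rng.standard_normal(64)
x2 = rng.standard_normal(64)

y_time = np.convolve(x1, x2)                     # linear convolution, length 127

L = len(x1) + len(x2) - 1
Y_freq = np.fft.fft(x1, L) * np.fft.fft(x2, L)   # multiply the transforms
y_freq = np.fft.ifft(Y_freq).real                # back to the time domain

print(np.allclose(y_time, y_freq))   # True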

(Refer Slide Time: 23:37)

And similarly, another property which is the dual of this: when you multiply in time, we have $x_1(t)x_2(t) \leftrightarrow \frac{1}{2\pi}X_1(\omega)*X_2(\omega)$. So multiplication in time leads to convolution in the frequency domain.
to convolution in frequency domain.

436
(Refer Slide Time: 25:00)

So what we have seen in this module is we have continued our discussion of the
properties of the Fourier transform, we have looked at the differentiation in the time and
frequency domains, integration in the time domain and we have also looked at the
convolution property of the Fourier transform. So we shall look at other aspects in the
subsequent modules. Thank you very much.

437
Principles of Signals and Systems
Prof. Aditya K. Jagannatham
Department of Electrical Engineering
Indian Institute of Technology, Kanpur

Lecture - 39
Fourier Transform - Parseval’s Relation, Frequency Response of Continuous Time
LTI Systems

Keywords: Parseval’s Relation, Frequency Response of Continuous Time LTI Systems

Hello, welcome to another module in this massive open online course. So we are looking
at the Fourier transform for continuous aperiodic signals and its several properties. Let us
look at another interesting and important property that is Parseval’s relation and this will
bring up a lot of other aspects such as the autocorrelation function and so on.

(Refer Slide Time: 00:32)

438
(Refer Slide Time: 01:13)

Consider a signal x(t) which has the Fourier transform $X(\omega)$. Now we want to look at the Fourier transform of $x^*(-t)$.

(Refer Slide Time: 01:39)

Now using the properties of the inverse Fourier transform, you can see that x(t), which is the inverse Fourier transform of $X(\omega)$, is $x(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} X(\omega)e^{j\omega t}d\omega$, which implies that if you take the complex conjugate on both sides, then you have

439

 1 

 X ( ) e
jt

x (t )   d  which if you now take the conjugate operation inside, this
 2  

1
X ( ) e jt d . Now if you replace t by -t this implies that
 
will be x (t ) 
2 


1
x (t )  X ( ) e jt d .

2 

(Refer Slide Time: 03:08)

Now you can see that this right hand side is nothing but the inverse Fourier transform of $X^*(\omega)$. Therefore $x^*(-t)$ is basically the inverse Fourier transform of $X^*(\omega)$. Now consider $\hat{x}(t) = x(t)*x^*(-t)$. We know that convolution in the time domain corresponds to multiplication of the Fourier transforms of the two signals.

440
(Refer Slide Time: 04:46)

This implies that X ( )  X( ) X ( )  X( ) .


2

(Refer Slide Time: 05:54)

 x( ) x ((t   ))d that


 
Now let us again expand $\hat{x}(t)$ as $\hat{x}(t) = x(t)*x^*(-t) = \int_{-\infty}^{\infty} x(\tau)x^*(-(t-\tau))d\tau = \int_{-\infty}^{\infty} x(\tau)x^*(\tau-t)d\tau$, which is the definition of the convolution. So setting t = 0, you have $\hat{x}(0) = \int_{-\infty}^{\infty} x(\tau)x^*(\tau)d\tau = \int_{-\infty}^{\infty}|x(\tau)|^2d\tau$.

441
(Refer Slide Time: 07:40)

So this is nothing but the energy of the signal. That is, $\hat{x}(t)$, which is the convolution of x(t) and $x^*(-t)$, evaluated at t = 0 is simply the energy of the signal. Now we also know that $\hat{x}(t)$ is the inverse Fourier transform of $|X(\omega)|^2$.

(Refer Slide Time: 08:43)

442

So this implies that $\hat{x}(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty}|X(\omega)|^2 e^{j\omega t}d\omega$. Now set t = 0, which implies $\hat{x}(0) = \frac{1}{2\pi}\int_{-\infty}^{\infty}|X(\omega)|^2 d\omega$. So we have two relations: $\hat{x}(0) = \int_{-\infty}^{\infty}|x(\tau)|^2 d\tau$ and $\hat{x}(0) = \frac{1}{2\pi}\int_{-\infty}^{\infty}|X(\omega)|^2 d\omega$. Let the first relation be equation number 1 and the second one be equation number 2.

(Refer Slide Time: 10:01)

 
From 1 and 2 we have $\int_{-\infty}^{\infty}|x(t)|^2 dt = \frac{1}{2\pi}\int_{-\infty}^{\infty}|X(\omega)|^2 d\omega$; that is, the energy of the signal in the time domain equals the energy in the frequency domain, and this is basically Parseval's theorem or Parseval's relation. So we can say the energy of x(t) is basically the energy of $X(\omega)$, except that there is a scaling factor of $\frac{1}{2\pi}$, and the quantity $|X(\omega)|^2$ is termed the energy spectral density of x(t).

443
(Refer Slide Time: 11:59)

This is the distribution of the energy over the spectrum, that is how the energy of this
signal x(t) is distributed in the various frequency bands. In fact, integrating this energy
spectral density over the entire frequency band gives the energy of the signal.

(Refer Slide Time: 14:19)


Now we have $\hat{x}(t) = x(t)*x^*(-t)$, which we can write as $\hat{x}(t) = \int_{-\infty}^{\infty} x(\tau)x^*(\tau-t)d\tau$; substituting $\tau' = \tau - t$, this is $\hat{x}(t) = \int_{-\infty}^{\infty} x(\tau'+t)x^*(\tau')d\tau'$, and this is termed the



444
autocorrelation of the signal x(t). So this is a measure of the similarity between these two
signals.

(Refer Slide Time: 16:15)

This is denoted by Rxx (t ) . And this is the measure of correlation between the signal
values separated by the interval of t.

(Refer Slide Time: 17:39)

And we have already shown that the autocorrelation $R_{xx}(t)$ has the Fourier transform $|X(\omega)|^2$, which we denote by $S_{xx}(\omega)$, the energy spectral density. So

445
the energy spectral density is the Fourier transform of the autocorrelation function. Also note that since $S_{xx}(\omega) = |X(\omega)|^2$, the ESD is always greater than or equal to 0; that is, the energy spectral density is non-negative.

(Refer Slide Time: 19:12)

So if you have a particular band and you are interested in finding the energy in that band
let us say, the band 1 to 2 , we have the energy in that band
2 2
1 1
2  S xx ( )d  2 2  X( ) d . So basically what we are doing is we are
2

2 1 1

integrating the energy spectral density over the band of interest.

446
(Refer Slide Time: 19:45)

(Refer Slide Time: 21:29)

Now consider the frequency response of continuous time LTI system. So let us say we
have a signal x(t) which is input to an LTI system with impulse response h(t). So the
output y(t) is nothing but the convolution of the input with the impulse response.

447
(Refer Slide Time: 23:04)

Therefore, we have y(t )  x(t )  h(t ) . We also know that the convolution in time domain
is multiplication of the corresponding frequency responses. So this implies
Y ( )
Y ( )  X( ) H( )  H( )  and this is termed as the transfer function or the
X( )
frequency response of the system.

(Refer Slide Time: 23:56)

We can also characterize the magnitude and phase responses. So you can write this in
polar coordinates as H ( )  H ( ) e jH where H ( ) is termed as the magnitude

448
response of the system and the quantity  is termed as the phase response. Now
H
H ( ) the frequency response is nothing but the Fourier transform of the impulse
response h(t).

(Refer Slide Time: 25:55)

Now in addition consider the input signal which is a complex exponential, that is
x(t )  e j0t and the Fourier transform of this is X ( )  2 (  0 ) that is the shifted

impulse scaled by 2 .

(Refer Slide Time: 26:45)

449
Therefore the response to this input is Y ()  X () H ()  2 (  0 ) H () and you

can see that  (  0 )  0 everywhere except where   0 . So this is simply

Y ( )  H (0 )2 (  0 ) . Now if you take the inverse Fourier transform you can see

that Y ( ) is a scaled version of the impulse shifted by 0 . So this implies

y(t )  H (0 )e j0t .

So if you input a complex exponential, the output is another complex exponential at the
same frequency simply scaled by H (0 ) . So this is an interesting property where the
output is simply a scaled version of the input and such a function which when given as
input to your system the output that you get is simply a scaled version of the input, is
termed as an Eigen function of the system. Therefore this complex exponential is an
Eigen function of this LTI system and its corresponding Eigen value is H (0 ) which is
the frequency response evaluated at angular frequency.

(Refer Slide Time: 29:01)

So in this module we have seen other interesting aspects of the Fourier transform, we
have defined the autocorrelation function of a signal, the energy spectral density, we
have seen the concept of the Parseval’s relation for a continuous aperiodic signal and we
have also seen the Eigen functions of LTI systems. So we will stop here and continue in
the subsequent modules. Thank you very much.

450
Principles of Signals and Systems
Prof. Aditya K. Jagannatham
Department of Electrical Engineering
Indian Institute of Technology, Kanpur

Lecture – 40
Fourier Transform – Distortionless Transmission, LTI Systems Characterized by
Differential Equations, Ideal Low Pass and High Pass filters

Keywords: Distortionless Transmission, LTI Systems Characterized by Differential


Equations, Ideal filters

Hello, welcome to another module in this massive open online course. So we are looking
at the Fourier transform and its properties. Let us look at another new aspect that is
distortionless transmission and how a system with distortionless transmission is
characterized.

(Refer Slide Time: 00:32)

So consider an LTI system with input x(t), impulse response h(t) and the output y(t). For
distortionless transmission through this LTI system, the output y(t) is a delayed and
scaled that is amplified or attenuated version of x(t).

451
(Refer Slide Time: 02:09)

So y(t) should not distort x(t), but y(t) can differ from x(t) only to the extent that it is a
scaling factor K times x(t - td). So let us say we have x(t) which is consider a simple
example for your signal x(t) ok.

(Refer Slide Time: 03:53)

T T
So let us say we have a triangular pulse of height 1, center at 0, from  to . Now let
2 2
us pass it through the LTI system and if the LTI system is distortionless, all it can do is,

452
T T
it can simply scale and delay by td, which means this is shifted as   td to  td and
2 2
the height is K, so what you can see basically is that the shape remains intact that is x(t)
and y(t) are similar to each other in the sense that y(t) is simply delayed and scaled
version of x(t).

(Refer Slide Time: 06:03)

Now if you take the Fourier transform you have Y ( )  Ke jtd X() and now you can
see that this is nothing but the frequency response H ( ) of this distortionless system, so

H( )  Ke jtd . Now if you look at the magnitude spectrum, we have H( )  K that is

a constant.

453
(Refer Slide Time: 07:34)

So for a distortionless transmission, the magnitude spectrum has to be constant over the
entire frequency band.

(Refer Slide Time: 08:40)

Now if you look at the phase of  that is  H , this is  H  td and if you look at this
quantity, this has a linear characteristic, this is linear in  .

454
(Refer Slide Time: 09:10)

So its characteristic will be with slope td and this passes through the origin. So this is
the phase characteristic for distortionless transmission. So basically what this shows is
that the magnitude or the amplitude of the frequency response has to be constant over the
entire frequency band and the phase has to be linear.

(Refer Slide Time: 10:52)

Now let us now look at LTI systems characterized by differential equations. So this is
N
d k y (t ) M d k x(t )
given as  ak
k 0 dt k
  bk
k 0 dt k
and these ak and bk are the constant coefficients.

455
(Refer Slide Time: 12:31)

Now if you take the Fourier transform on both sides we have


N M

 a (j) Y ()   b (j)


k 0
k
k

k 0
k
k
X ( ) . Now this implies you have

 N  M 
Y ( )   ak (j )k     bk (j )k  X ( ) which basically implies that
 k 0   k 0 
M

Y ( )  b (j )
k
k

 k 0
which is nothing but the frequency response of the LTI system. So
X ( ) N

 a (j )
k 0
k
k

this characterizes the frequency response of the LTI system which is characterized by the
constant coefficient differential equation. Now let us look at the frequency response
characterization of ideal filters.

456
(Refer Slide Time: 13:43)

(Refer Slide Time: 14:19)

457
(Refer Slide Time: 15:38)

The filters basically allow only a specific set of frequencies to pass. So we have a system
which takes x(t) as the input and only allows a specific set of frequency components of
x(t) to pass through and it blocks the rest of the frequencies.

(Refer Slide Time: 16:56)

For instance we have an ideal low pass filter which allows only the lower frequencies to
pass through.

458
(Refer Slide Time: 17:37)


 
So we have the magnitude response H ( )  1 c . So when the input is
0 otherwise
multiplied by the frequency response you can see that all these frequencies in this band
are multiplied by unity gain and therefore it allows these components to pass but the gain
outside this band of c and c is 0 which means the input frequencies are completely

blocked. So this quantity c is termed as a cut-off frequency. For a low pass filter all

frequencies of the input signal greater than this cut-off frequency c are basically cut off
or these are blocked.

459
(Refer Slide Time: 19:56)

The ideal filter has a very sharp cut-off frequency. In practice it is difficult to design such
sharp filters and so you have the pass band, the stop band and in practice you will have a
transition band where it is transitioning. So in a non-ideal filter you will have a transition
band where the frequency response is transitioning from the pass band to the stop band,
but it is not exactly 1 or 0, but takes gains which are between 1 and 0. So we would like
to design filters which are close to ideal which means the cutoff is as sharp as possible,
the transition band is as small as possible. So in practice we would like to have this
transition band T to be very narrow which means we want to have a very sharp
cutoff. So ideal filters, which have very sharp edges are difficult to design.

460
(Refer Slide Time: 22:51)

Similarly one can have an ideal high pass filter.

(Refer Slide Time: 24:09)

In an ideal high pass filter, as the name implies instead of blocking frequencies greater
than a cutoff frequency c , you allow the frequencies greater than cutoff frequency c to

pass and so the gain outside the band from c to c is 1. So this blocks all frequency

components in the frequency band from c to c .

461
(Refer Slide Time: 25:25)


 
So its magnitude response H ( )  1 c . And obviously it is very difficult to
0 otherwise
design high pass filters with such sharp edges, so once again you will have transition
bands from the stop band to the pass band.

So in this module we have looked at the frequency response of a distortionless LTI


system, its magnitude and phase characteristics and we have also started looking at ideal
filters in particular the ideal low pass and the ideal high pass filter. So let us stop here
and look at other aspects in the subsequent modules. Thank you very much.

462
Principles of Signals and Systems
Prof. Aditya K. Jagannatham
Department of Electrical Engineering
Indian Institute of Technology, Kanpur

Lecture - 41
Fourier Transform - Ideal Band Pass and Band Stop Filters, Non-Ideal Low-Pass
Filter, 3 dB Bandwidth

Keywords: Ideal Band Pass, Band Stop Filters, Non-Ideal Low-Pass Filter, 3 dB
Bandwidth

Hello, welcome to another module in this massive open online course. So we are looking
at the Fourier transform and in particular looking at the frequency response of ideal
filters. So let us continue that discussion.

(Refer Slide Time: 00:36)

We have looked at the ideal low pass and high pass filters, we also want to look at what
we are going to look at next are the ideal band pass and band stop filters ok. So let us
continue our discussion on the concept of ideal filters ok. And we want to look at the
notion of an ideal, of an ideal band pass. An ideal band pass filter as the name implies
allows only frequency components of a signal within a certain band to pass through and
suppresses of blocks all other frequency components which do not lie in this band,
alright.

So let us say the band is omega1 to omega2 we can call this as the pass band then we
have magnitude for this band pass filter we have magnitude H of omega equals 1 for

463
omega1 less than modomega magnitude omega less than omega2 and 0 otherwise ok. So
it allows only frequency components which lie within this band either omega1 to omega2
or minus omega1 minus omega2 to minus omega1 to pass through.

(Refer Slide Time: 02:00)

So we have this band which is either omega1 to omega2, so it has a gain of 1 or minus
omega2 to minus omega1. So allows, so what this does is it basically allows only band of
frequency, allows only only frequency components, only frequency components in the
band omega1 to omega2 to pass through. And this is also known as the pass band that is
omega1 to omega2 this is also known as the pass band ok.

And this is an and this is an ideal pass ideal band pass filter and the opposite analogously
we have an ideal band stop filter which basically stops all the blocks all the frequency
components belonging to that band and passes or allows undistorted transmission of all
the other frequency components which do not lie in this band, alright.

464
(Refer Slide Time: 03:50)

So we have also a band stop and the band stop filter you have well, you have let us say
omega1 to omega2 is the stop band then you have all the frequencies in this band which
are blocked. And that is gain is 0 in this band and all other frequencies pass without
distortion. So these are the stop bands.

So, omega1 to omega2 is the stop band. So the band stop filter is magnitude H of omega.
So, its characterized by the frequency response magnitude H of omega equals 0 omega1
less than mod omega less than omega2 and this is 1 otherwise ok. So this is 1 and this is 1
otherwise. So these are the stop bands alright.

465
(Refer Slide Time: 05:35)

So this basically blocks all frequencies that belong to the band omega1 to omega2 or
minus omega2 to minus omega1. So blocks all frequencies omega1 to omega2 minus
omega2 to minus omega1, ok, so it all blocks all the frequencies in these bands ok. And
further we have only talked about the magnitude response, but we would not talk to the
phase response , but we know that to avoid distortion for a distortion less response the
phase response has to be linear ok. So although we have not specified on the phase
response you can note that the phase response has to be linear to avoid distortion ok.

So, we can say to avoid distortions all these filters all these ideals filters should have
should have a linear phase characteristic, this should have a linear phase characteristics.
Remember for a distortionless, a distortion less LTI system simply attenuates and delays
the signal corresponding to that is if the input is x(t) the output is k times k x(t minus td)
td is the delay k is the scaling factor alright. So the phase response of that, its magnitude
response is k the phase response is minus j that is minus omegatd that is the phase which
has a linear characteristic in omega alright. So that is basically that basically
characterizes a distortionless, ok.

So now so far we have looked at ideal filters, but of course it is difficult to design such
ideal filters as I said which such sharp cut offs. So naturally we have to look at rely on
non filters that are non-ideal to a certain extent. So let us start look at also look at was

466
one of the basic non ideal filter its one of the basic low pass non ideal low pass filter
which is formed by a simple RC circuit, ok.

(Refer Slide Time: 08:26)

So we want to look at a non ideal frequency selective filter, let us look at a non-ideal low
pass filter which you can say is simply given by this RC circuit, the simplest, so let us
say simplest low pass filter let us say x(t) is the input voltage y(t) is the output voltage let
us look at the relation between x(t) and y(t). So this is an RC circuit a simple and we
have let us say if we have this if I call this if we denote this current by i(t) then we have
x(t), the voltage is voltage drop across the resistance which is i(t) times R plus the
voltage drop across the capacitance which is y(t).

But we know the current across the capacitor is c times the derivative of the voltage
across the capacitor that is a dy(t) over dt which basically implies now, substituting this
expression for i(t) over here we have the expression x(t) equals RC dy(t) over dt plus
y(t). This is a constant coefficient differential. You can see this is the constant coefficient
differential equation which characterizes the LTI system ok.

467
(Refer Slide Time: 10:27)

This is a this is a constant coefficient differential equation for the LTI system, the RC
circuit. Now if you take the Fourier transform take the Fourier transform then we have X
of omega equals RC, remember the Fourier transform of the derivative is j omega times
the Fourier transform of the signal, so j omega times Y(omega) plus Y(omega) which is
well Y(omega) times 1 plus j omega RC ok.

(Refer Slide Time: 11:23)

And now, therefore, if you look at Y(omega) over X(omega), so which is nothing but the
frequency response that is basically you can readily see that this is given as 1 over 1 plus

468
j omega RC, 1 over 1 plus j omega RC which is basically I can also write this as 1 over 1
plus j times omega over omega0 where omega0 equals, now you can see omega0 equals 1
over RC ok.

(Refer Slide Time: 12:04)

So for an RC circuit I can represent the frequency response is 1 over 1 plus j omega or
omega0 where omega0 equals 1 over omega0 equals 1 over RC ok. So basically that is the
omega0 is 1 over RC ok. Now, if you look at the magnitude, now let us look at the
magnitude response of this the magnitude of this is 1 over 1 plus or 1 over 1 plus omega
over omega or 1 over 1 plus omega over omega0 square under root that is the magnitude
response of this and therefore, magnitude H of omega square if you take the square of the
magnitude response that is nothing but 1 over 1 plus omega square divided by omega0
square.

469
(Refer Slide Time: 12:41)

(Refer Slide Time: 13:26)

And if you look at the magnitude response you will observe that if you look at the
magnitude response it is easy to see something, that is at omega equal to 0 magnitude H
of omega square equals 1, omega tends to infinity implies omega square tends to infinity.
So, 1 plus omega square over omega0square tends to infinity. So 1 over 1 plus omega
square over omega0 tends square tends to 0. So magnitude H of omega square tends to 0
and therefore, this is 1 at omega equal to 0 and as omega tends to infinity it tends
monotonically to 0 similarly as omega tends to minus infinity tends to 0. So you can
basically see this has peak at omega equal to 0 it has a maximum gain of unity and both

470
sides alright it decays to that is omega tends to infinity omega tends to minus infinity it
decays to the gain of the filter it decays to 0. So this is clearly a low pass filter and non-
ideal low pass filter because it does not have any sharp edges, but it smoothly decaying
to 0 the gain is decaying to 0 as omega tends to either minus infinity or infinity. So this is
basically your non-ideal low pass filter.

This is a non-ideal low pass filter and tends to 0 as omega tends to infinity and again
here also tends to 0 as omega tends to minus infinity and this is basically the maximum
gain which is unity ok. And of course, what we would like to do is we would like to we
would like to what we would like to do is we would like to characterize the bandwidth of
this filter, but because it does not have any sharp edges we cannot clearly characterize
what is the cut-off frequency alright what is the stop band and what is the pass band as
we did for the ideal filters.

Therefore, you would like to develop measures in metrics to characterize the bandwidth
of this filter, one such metric is what is known as the 3 dB bandwidth ok, the way to
characterize the bandwidth of this filter the pass band and the stop band is what is known
as the 3 dB bandwidth, ok.

(Refer Slide Time: 16:10)

So we want to define the 3 dB, we want to define the 3 dB bandwidth of the filter
omega3 dB the technical definition of this is value of omega such that, such that, and
power the power of the filter you can say the power decreases by a factor of half, power

471
decreases by a factor of half when compared to the peak implies the amplitude decreases
by a factor of 1 over root 2.

So this can be understood as follows what we want to show is that if at the peak the
amplitude alright the amplitude of the transfer function of the amplitude of the frequency
response is at the peak if the amplitude of the frequency response is let us say k alright.
Then at the 3 dB point the amplitude decreases by a factor of root 2 that is its k over root
2 and therefore, the amplitude decreases by a factor of square root of 2 the power which
is the square of the amplitude decreases is basically a factor of half in relation to the
power at the peak, ok.

So basically what this implies is that if you look at the and of course, for the previous
low pass filter you can see the peak occurs at 0 and in fact, it is the gain is unity. So we
have magnitude H of 0 equals 1, which implies at the 3 dB point we must have
magnitude H of omega 3 dB equals 1 over square root of 2 which implies magnitude H
of omega 3 dB square equals half.

(Refer Slide Time: 18:16)

Basically it is all related to the amplitude at the peak. So this is you can say half 1 over
root 2 into 1. So this is half into 1 which is nothing, but half. And now, you can see
magnitude H of omega3 dB square equals 1 over 1 plus omega square by omega0 square
which is equal to 1, omega0 omega3 dB which implies omega3 dB square where omega0
square equals 1.

472
(Refer Slide Time: 19:36)

I am sorry this is equal to this is equal to half which implies omega0 square by omega3 dB
square which implies omega 1 plus omega3 dB square by omega0 square equals 2, which
implies omega3 dB square omega0 square equals one which implies very simply that
omega3 dB equals omega0 which is nothing but this is equal to remember this is equal to 1
over RC correct.

(Refer Slide Time: 20:00)

So omega3 dB equals omega0 which is 1 over RC. So 3 dB bandwidth of this low pass
filter you can think of this as an approximate cut-off frequency of this low pass filter is

473
defined as that is one of the ways to define the cut-off frequency of this non-ideal low
pass filter is one over RC, where R is of course the resistance and C is the capacitance
ok.

And why is this known as 3 dB and the relation is obvious because if you look at the
power that is magnitude H(omega3 dB) square equals half. So if you look at the decibel
value of this that is 10 log to the base 10 magnitude 3 dB square or 20 log to the base 10
times the magnitude that is equal to 20 or 10 log to the base 10 of half which is equal to
minus 3 dB you can say this is approximately minus 3 dB.

(Refer Slide Time: 20:43)

So the 3 dB magnitude the 3 dB, ok the 3 dB point at the 3 dB point the output power of
the signal is attenuated by 3 dB correct, at the 0 frequency remember the gain is one the
magnit the magnitude is one the power is also one alright. At the 3 dB point there is a 3
dB frequency the magnitude decreases by a factor of root 2 that is 1 over root 2. So the
power decreases by a factor of half.

So in decibel values the power decreases by a factor of 3 dB or minus 3 dB. So therefore,


is known as the 3 dB frequency. So 3 dB reduction in this, so 3 dB reduction in power,
so this gives you a 3 dB reduction in power, ok hence it is known as the 3 dB point. And
the 3 dB bandwidth is 1 over RC that is what we have seen ok. So the 3 dB bandwidth of
this RC circuit of this non-ideal low pass filter is 1 over RC ok.

474
Let us look at a few other aspects. Now, we can also define the signal bandwidth.

(Refer Slide Time: 22:42)

So far we have defined the bandwidth of of a filter, we can also use the same definition
to define the bandwidth of a filter, the bandwidth of a filter to define the bandwidth of a
filter. Let us say of a signal let us say we have a signal x(t) that has a Fourier transform X
of omega, 3 dB bandwidth omega3 dB equals the frequency omega such that we have
magnitude X(omega) or magnitude X(omega) at the 3 dB point equals 1 over root 2 of
course, magnitude X of 0 which means magnitude X of omega3 dB square which recall is
the energy spectral density of the signal equals half magnitude X of 0 square.

475
(Refer Slide Time: 24:06)

And also there is the notion of what is known as a band limited signal. I think something
important a notion of a x(t) is a band limited signal x(t) is a band limited signal if let us
say we have the Fourier transform of x(t) which is X of omega x(t) is band limited if
magnitude of X of omega equals 0 for omega greater than omegaM. So omegaM is the
maximum frequency and this plays an important role we will talk more about this later.
So this is termed as the maximum frequency.

(Refer Slide Time: 25:05)

476
So all the frequency components in the spectrum X of omega, for omega larger than
omegaM that is omega larger than omegaM or omega smaller than minus omegaM, are 0
ok. And you can visualize this as follows if you have the omega axis then let us say ok,
so this is your omegaM, this is your minus omegaM. So this is band limited to the signal is
this is your, let us say this is the magnitude spectrum and this is the this is band limited
to this is band limited to omegaM ok. Band limited signals have an important role to play
in sampling and so on, alright.

So this is basically more or less summarizes what we wanted to talk about the Fourier
transform, we have looked at the various aspects of the Fourier analysis of continuous
time signals starting with first periodic signals. Looking then also at aperiodic signals,
the application of Fourier transform in the analysis of LTI systems ideal filters and now
also non ideal filters and how to characterize the bandwidth of a non-ideal filter alright.
So I will stop here and look at other aspects in the subsequent modules. Thank you.

477
Principles of Signals and Systems
Prof. Aditya K. Jagannatham
Department of Electrical Engineering
Indian Institute of Technology Kanpur

Lecture - 42
Fourier Analysis Examples - Complex Exponential Fourier Series of Periodic
Square Wave

Keywords: Fourier Analysis Examples, Complex Exponential Fourier Series of Periodic


Square Wave

Hello welcome to another module in this massive of online course. So we are looking at
the Fourier analysis of both discrete time and as well as continuous time periodic as well
as aperiodic signals. In this module, we are going to start looking at various problems to
better understand the applications of the Fourier analysis for continuous time signals that
we have seen so far.

(Refer Slide Time: 00:45)


So consider the signal x(t )  cos(2 t  ) .
4

478
(Refer Slide Time: 01:50)

We want to obtain its complex exponential Fourier series which can be abbreviated as
the CEFS. Now here 0  2 which means the fundamental period of this is

2 2
 T0   .
0 2

(Refer Slide Time: 03:01)


So this can be expressed as the CEFS representation x(t )  Ce
k 
k
jk0t
and here Ck

denotes the complex coefficient corresponding to the kth harmonic that is the frequency
component which is k times 0 .

479
(Refer Slide Time: 03:52)

Rather than evaluating the coefficients directly, we can evaluate this as


 
 j (2t  )  j (2t  ) 1 j 4 j 2t 1  j 4  j 2t
x(t )  cos(2 t  )  e 4
e 4
. So this can be written as e e  e e ,
4 2 2

j0t  j0t 1 j 4 1  j 4
which is C1e  C1e where these coefficients C1  e and C1  e .
2 2

(Refer Slide Time: 05:06)

1 j 4 1     1 1 1  1 j
C1  e   cos  j sin    j  .
2 2 4 4  2 2 2 2 2

480
(Refer Slide Time: 06:02)

1  j 4 1     1 1 1  1 j
Similarly C1  e   cos  j sin    j  .
2 2 4 4  2 2 2 2 2

(Refer Slide Time: 06:53)

And from this you can clearly see that all Ck  0  k  1 .

481
(Refer Slide Time: 08:01)

Now let us look at a periodic sequence or a periodic square wave.

(Refer Slide Time: 08:40)

T0
So you have a square wave which is periodic and is of width and has a period T0. So
2
let us say the height of each square pulse or the peak amplitude of each square pulse is 1
and we want to find the complex exponential Fourier series and also the trigonometric
Fourier series which is an alternative Fourier series representation denoted by TFS.

482
(Refer Slide Time: 10:37)

(Refer Slide Time: 11:27)

2
So first observe that the fundamental period equals T0 which implies 0  and this
T0

implies that x(t )  Ce
k 
k
jk0t
. So this is the CEFS representation.

483
(Refer Slide Time: 12:30)

T0
T0 2
1 1
 x(t )e e
 jk0t  jk0t
Now the coefficient Ck can be found as Ck  dt  dt .
T0 0
T0 0

(Refer Slide Time: 13:22)

484
T0 T0
 jk0
 jk0t
1 e 2
1 e 1 2
2
So this will be  . And now we have 0   0T0  2
T0  jk0 0
T0  jk0 T0

1  e jk
and therefore, this expression for Ck can be simplified as Ck  and
jk 2

1   1
k

e    1 and therefore Ck 


 j k k
.
jk 2

(Refer Slide Time: 14:08)

1   1
2m

So now you can observe that if k is even implies if k = 2m Ck  0.


j 2m2

485
(Refer Slide Time: 16:19)

1   1
2 m 1
2 1
Now if k is odd implies k = 2m + 1, then Ck    .
2 j (2m  1) 2 j (2m  1) j (2m  1)

(Refer Slide Time: 17:32)

486
(Refer Slide Time: 18:15)

For the DC coefficient which is the term corresponding to k = 0 is simply


T0
T0 2
1 1 1 T0 1
C0 
T0  x(t )dt  T  1dt  T
0 0 0
. 
0 2 2
and finally what we have is


1 1 1
x(t )  
2 j
  2m  1e
m 
j (2 m 1)0t
. So this exists only for k = 2m + 1. So this is

basically the complex exponential Fourier series representation.

(Refer Slide Time: 19:11)

487
So basically what we have done in this module is we have started looking at the
problems for the Fourier analysis of continuous signals and systems, in particular, we
have looked at an example which basically finds the complex exponential Fourier series
of a simple signal followed by the complex exponential Fourier series of a periodic
square wave. We are also going to find the trigonometric Fourier series representation of
this in the subsequent module. So we stop here. Thank you very much.

488
Principles of Signals and Systems
Prof. Aditya K. Jagannatham
Department of Electrical Engineering
Indian Institute of Technology Kanpur

Lecture - 43
Fourier Analysis Examples-Trignometric Fourier Series of Periodic Square Wave,
Periodic Impulse Train

Keywords: Trignometric Fourier Series of Periodic Square Wave, Periodic Impulse


Train

Hello welcome to another module in this massive open online course. So we are looking
at various example problems to better understand Fourier analysis.

(Refer Slide Time: 00:25)

So in particular we have started looking at the Fourier analysis representation of periodic


T0
square wave which has the fundamental period T0 with pulses of width and height 1.
2

489
(Refer Slide Time: 01:01)

We want to now look at the TFS that is the trigonometric Fourier series of this. The TFS
can be obtained as follows, the TFS of any signal x(t) is given as,
a0  
  ak cos(k0t )   bk sin(k0t ) . Now we also know the relation for these various
2 k 1 k 1

a0
aks for instance  C0  a0  2C0 and we have already evaluated C0 which is
2
1 1
C0   a0  2C0  2.  1 .
2 2

490
(Refer Slide Time: 02:44)

Further we have the coefficient of cos(k0t ) given as ak  Ck  C k . Now we know that

Ck equals 0 for even k implies, now ak is also equal to 0 if k is even because both Ck and

1
C k are 0 for even k. If k is odd, let us say k  2m  1 we have C2 m1  .
j (2m  1)

(Refer Slide Time: 04:43)

491
(Refer Slide Time: 04:28)

1 1
So a2 m1  C2 m1  C (2 m1)    0 . So basically what we have
j (2m  1) j (2m  1)
been able to show that ak is 0, if k is even and ak is 0 when k is odd as well, which
implies ak is 0 for all k.

(Refer Slide Time: 06:01)

Now bk  j (Ck  C k ) this is equal to 0 if k is even. So if k is odd, then let us assume

 1 1 
k  2m  1 , then bk  j   , the j cancels, so this
 j (2m  1) j (2m  1) 

492
2
becomes b2 m1  . Therefore the TFS is given as
 (2m  1)
1  2 2
x(t )   sin(2m  1)0t . So if you take the outside the summation,
2 m0  (2m  1) 
1 2  sin  (2m  1)0t 
what you have is x(t )    and this is the trigonometric Fourier
2  m0 (2m  1)
series or the TFS of the periodic square wave and further it can be written as
1 2 1 1 
x(t )    sin 0t  sin(30t )  sin(50t )  ....  . So this is the again the TFS that is
2  3 5 
the trigonometric Fourier series representation.

(Refer Slide Time: 07:07)

493
(Refer Slide Time: 08:28)

(Refer Slide Time: 09:40)

So let us proceed to the next problem where we consider a periodic impulse train and this
has a lot of relevance in sampling.

494
(Refer Slide Time: 10:57)

The periodic impulse train is nothing but a train of successive impulses at a spacing of
T0.

(Refer Slide Time: 12:45)

So we want to determine the CEFS as well as the TFS for this periodic impulse train. So

let T0 (t )    (t  kT ) . So this basically denotes impulses at integer multiples of T0.
k 
0

T0 T
Now if you look at any particular period for instance   t  0 this is simply
2 2

495
x(t )   (t ) . So the coefficient of the Fourier series Ck can be evaluated as follows,
T0 T0
2 2
1 1
 x(t )e  j 20t dt    (t )e
 j 20t
Ck  dt .
T0 T T0 T
 0  0
2 2

(Refer Slide Time: 14:53)

1  j 20t 1
The using the property of the delta function, this is simply Ck  e  . So,
T0 t 0 T0

1
each of the Fourier series coefficients are . So the CEFS is given as
T0

1
T (t ) 
0
T0
e
k 
jk0t
and the trigonometric Fourier series can be obtained as follows,

a0  
a 1 2
T (t )    ak cos(k0t )   bk sin(k0t ) . Now 0  C0   a0  .
0
2 k 1 k 1 2 T0 T0

496
(Refer Slide Time: 15:46)

(Refer Slide Time: 17:06)

497
(Refer Slide Time: 18:20)

2 1
Further ak  Ck  C k  , since both of them are . Now
T0 T0
1 1
bk  j (Ck  C k )  j (  )  0.
T0 T0

(Refer Slide Time: 19:08)


1 2
So the TFS of the impulse train is x(t )  
T0 T0
 cos(k t ) .
k 1
0

498
(Refer Slide Time: 20:34)

Let us now move on to problem number 4, where we consider the triangular wave which
is slightly more difficult to analyze. So we have a periodic triangular wave of period T0.
T0
Let the height of this be . We want to determine the CEFS and the TFS for this
4
periodic triangular wave.

T0
We are going to differentiate this and so the slope of this is the height divided by the
4
T0 1
width of the rising part which is , which is basically . So the slope of the rising part
2 2
1 1
is and the slope of the falling part is symmetric. So this slope is  . So now consider
2 2
dx(t )
x ' (t)  .
dt

499
(Refer Slide Time: 23:57)

1
Now x ' (t) will look as shown in slide, at time 0 it starts at because the increasing slope
2
1 1 T 1
is , so it is for 0 and then it is  for the next duration and then it keeps
2 2 2 2
continuing. So we get a periodic square wave.

(Refer Slide Time: 26:03)

1
Now let us consider x(t )  x ' (t )  . So now we will get a periodic square wave of
2
height 1 and this is something that we have seen before.

500
(Refer Slide Time: 26:42)

(Refer Slide Time: 28:20)

So we basically started with this triangular wave, using the differentiation we have
reduced it to the periodic square wave that we have seen earlier. And now we can derive
both the CEFS as well as the TFS of the periodic square wave, for the periodic triangular
wave. So let us stop here and continue with this problem in the next module. Thank you
very much.

501
Principles of Signals and Systems
Prof. Aditya K. Jagannatham
Department of Electrical Engineering
Indian Institute of Technology, Kanpur

Lecture - 44
Fourier Analysis Examples - Complex Exponential Fourier Series and
Trigonometric Fourier Series of Periodic Triangular Wave, Periodic Convolution

Keywords: Complex Exponential Fourier Series and Trigonometric Fourier Series of


Periodic Triangular Wave, Periodic Convolution

Hello, welcome to another module in this massive of online course. So we are looking at
example problems in Fourier analysis, let us continue this discussion.

(Refer Slide Time: 00:25)

So we are currently looking at the Fourier series expansion of periodic triangular wave.

502
(Refer Slide Time: 01:02)

(Refer Slide Time: 01:06)

1
If we look at the derivative x(t ) and consider x(t )   x(t ) , that is given by this
2
periodic square wave that we have seen earlier.

503
(Refer Slide Time: 01:37)

1
Therefore x(t )  x(t )  .
2

(Refer Slide Time: 01:50)

We have already defined or derived the complex exponential Fourier series of x(t ) which

1 1 
e j  2 m10t
is given as 
2 j

m   2m  1
. This implies the complex exponential Fourier series of

504
 j  2 m 10t
1 1 e
x(t )  x(t )  is
2 j
  2m  1 . Now realize that x(t ) is nothing but the derivative of
m 

x(t).

(Refer Slide Time: 04:07)


Let the CEFS of x(t) which is the original triangular wave be Ce
m 
k
jk0t
. Now the


dx(t )
complex exponential Fourier series of x(t )  will be given as  jk0Ck e jk0t .
dt m 

This jk0Ck  C k .

505
(Refer Slide Time: 05:02)

Let us call this as expression number 2 and we already have an expression for the
complex exponential Fourier series of x(t ) from above and let us call this as expression
1. Now we are comparing the Fourier series coefficients of both these expressions.

(Refer Slide Time: 05:56)

1 1
So what we have is that C k  , but remember C k is nothing but jk0Ck .
j  2m  1

506
(Refer Slide Time: 07:48)

Now if k is even we have C k  0 implies jk0Ck  0  Ck  0 if k  0 .

(Refer Slide Time: 08:52)

1 1
On the other hand for odd k, where k  2m  1 we have C 2 m1  , which
j  2m  1

1 1
implies that j0 (2m  1)C2 m1  , which implies C2 m1  and
j  2m  1   2m  1 0
2

507
2 1 T0
remember 0  which implies C2 m1   2 . So we have
2 2    
2
T0   2m  1 2 2 m 1
T0
the coefficients, we have derived the expression for the coefficient Ck and the only
coefficient that is remaining now is C0 because we cannot derive C0 using this technique,
because when we differentiate x(t) the DC component C0 which is constant, vanishes. So
T0
1
we simply evaluate the DC coefficient C0 
T0  x(t )dt which
0
is the area under x(t),

where x(t) is the triangular wave in a duration of T0 and thus


T0
1 1 1 T0 T0
C0 
T0  x(t )dt  T
0
. .T0 .  .
0 2 4 8

(Refer Slide Time: 11:28)

Therefore, the CEFS of the original triangular wave, that is x(t) is,

T0 T 
e j  2 m10t
 0 2
8  2 
 .
 2m  1
2
m 

508
(Refer Slide Time: 12:26)

Similarly one can readily evaluate the TFS that is the trigonometric Fourier series and we
a0 T T
have  C0  0  a0  0 . Now ak for k  0 is Ck  C k and if k is even, Ck and C-k
2 8 4
both equals 0 this implies ak = 0.

(Refer Slide Time: 14:00)

509
(Refer Slide Time: 14:30)

Now if k is odd we have k  2m  1 , then


T0 T0 T0
a2 m1  C2 m1  C (2 m1)   = .
2  2m  1 2  2m  1   2m  1
2 2 2 2 2 2

(Refer Slide Time: 15:10)

510
(Refer Slide Time: 16:00)

Similarly now coming to bk, bk  j (Ck  C k ) and this is 0 if k is even. If k is odd,

 T0 T0 
k=2m+1, then b2 m1  j  2    0 . So bk is 0 for all k.
 2  2m  12 2 2  2m  12 
 

(Refer Slide Time: 17:10)

511
(Refer Slide Time: 18:05)

T0  T0 cos  (2 m 1)0t 
2

Therefore, the TFS of the periodic triangular wave x(t) is  ,


 2  2m  1
2
8 m1

so that basically completes this slightly elaborate procedure of deriving the CEFS and the
TFS, that is trying to employ the triangular wave and trying to integrate the triangular
wave by multiplying by the various complex exponentials or the various harmonics and
integrating to derive the various coefficients of the CEFS. Let us proceed to the next
problem that is problem number 5.

(Refer Slide Time: 20:09)

512
This problem is regarding periodic convolution. Now consider two signals x1(t) and x2(t)
which are periodic with common period T0. The periodic convolution of these two
signals with common period T0 is x(t )  x1 (t )  x2 (t ) .

(Refer Slide Time: 21:35)

So in conventional convolution we are integrating from  to  but this is for two


periodic signals with the common period T0 and hence integration is over only one
period. Now we want to derive the CEFS of x(t ) given the CEFS of x1(t) and x2(t) that is,
 
x1 (t )   dk e jk0t , x2 (t ) 
k 
ee
k 
k
jk0t
.

513
(Refer Slide Time: 23:44)

To To 
Now x(t )   x1 ( ) x2 (t   ) d    x1 ( )  ek e jk0 (t  ) d  . And now you interchange the
0 0 k 

 To

summation and integration and that gives us ee


k 
k
jk0 t
 x ( )e
1
 jk0
d .
0

(Refer Slide Time: 25:00)

So we can now see that this quantity is nothing but the complex exponential Fourier
series of x1 ( ) which is nothing but T0 times dk because you can see
To
1
 x ( )e
 jk0
dk  1 d .
T0 0

514
(Refer Slide Time: 26:35)


So finally you get x(t )  Td ee
k 
o k k
jk0t
. This implies that C k  T0 d k ek which is the kth

CEFS coefficient of periodic convolution.

(Refer Slide Time: 27:43)

So that completes our discussion of examples for the discrete Fourier series, that is the
complex exponential Fourier series and the trigonometric Fourier series of continuous
time periodic signals and in the subsequent module, we shall look at examples for the
Fourier expansion for or the Fourier analysis for continuous time aperiodic signals. So let
us stop here and continue in the later modules. Thank you very much.

515
Principles of Signals and Systems
Prof. Aditya K. Jagannatham
Department of Electrical Engineering
Indian Institute of Technology Kanpur

Lecture - 45
Fourier Analysis Examples - Fourier Transform of Square Pulse, Fourier
Transform of Sinc Pulse

Keywords: Fourier Transform of Square Pulse, Fourier Transform of Sinc Pulse

Hello, welcome to another module in this massive of online course. So we are looking at
the Fourier analysis and example problems in the Fourier analysis and so far we have
focused on example problems for discrete type signals, let us now shift our focus to
example problems for continuous time signals.

(Refer Slide Time: 00:36)

516
(Refer Slide Time: 01:15)

Let us start with example number 6, so far we have looked at continuous periodic
signals, let us now look at continuous aperiodic signals. So this is a rectangular pulse of
T T
width T centered at 0, from  to of height unity and we denote this signal x(t) as
2 2
T
t 

x(t )  PT (t )  1 2 . So now we want to find the Fourier transform of this pulse.
0 otherwise

(Refer Slide Time: 02:23)

517
(Refer Slide Time: 03:31)

(Refer Slide Time: 03:46)


The continuous time Fourier transform PT ( )   P (t )e
 jt
T dt , which is basically non-


T T
zero only from  to .
2 2

518
(Refer Slide Time: 04:12)

So this becomes
 T  T
2 j sin    2sin   
T T T T T
 j j
 jt 2
2 2
e e e2 2
 2  2
PT ( )   P (t )e e
 jt  jt
dt  dt   
 j  j  j 
T
T

T

T 
2 2 2

(Refer Slide Time: 04:58)

519
(Refer Slide Time: 06:34)

T
This is the Fourier transform of this pulse and you can multiply and divide by . So this
2
T  T  T
2 sin    T sin   
is PT ( ) 
2  2  2.
 T  T
   
 2  2

(Refer Slide Time: 07:16)

520
 T 
T sin   
 2  and we will now define a popular notation
This can be written as PT ( ) 
 T 
  
 2 

 sin  u 
known as the sinc function, sin cu    that arises very frequently in
 u 
communications and signal processing. Therefore this will simply be
 T 
PT ( )  T sinc  .
 2 

(Refer Slide Time: 08:19)


And further if we denote  F where  is the angular frequency, then this simply
2
becomes PT ( )  T sin c( FT ) T sinc FT and if you plot this you can see that

 T 
T sin  
 T   2  is 0 if T  k    2k .
PT ( )  T sinc  
 2  T 2 T
2

521
(Refer Slide Time: 09:05)

(Refer Slide Time: 09:34)

 T 
T sin  
2 2  2  T .
So this is 0 for  equals for instance  , 0, and so on and lim
T T   0  T
2
T 2
And you can see that the amplitude of this is decreasing as envelope will be  .
T 

2

522
(Refer Slide Time: 11:50)

So this is linearly decreasing in  and we can plot this.

(Refer Slide Time: 12:06)

So this will be something which is of height T with the decreasing envelope and these
2 4
points where it is 0 are , and so on and this is the sinc function and as    , the
T T
signal tends to 0. Now let us look at the Fourier transform of the sinc pulse.

523
(Refer Slide Time: 14:05)

sin( t )
We consider signal x(t )  sin c( t )  . Now we know that the Fourier transfer
 t
of square pulse is the sinc in the frequency domain. So using duality if you look at the
sinc pulse at the time domain, its Fourier transform must be related to that of a square
pulse in the frequency domain, so we use the duality principle which states that if x(t)
has Fourier transform X ( ) then capital X(t) must have the Fourier transform 2 x( ) .

(Refer Slide Time: 15:15)

524
(Refer Slide Time: 15:45)

 T 
So we have PT ( )  T sinc   which is the Fourier transform of PT (t ) . Now using the
 2 
 tT 
duality principle, it follows that if we replace  by t that is T sinc   , this has the
 2 
Fourier transform 2 PT ( ) . Now this pulse is an even function centered at 0 and it is

symmetric about 0. So 2 PT ( ) is the same as 2 PT ( ) .

(Refer Slide Time: 16:53)

525
 tT 
Since the square pulse is an even function this implies that sinc   has the Fourier
 2 
2 T
transform PT ( ) and now set    T  2 and therefore from here we will
T 2
2 P ( )
have sin c( t) has the Fourier transform P2 ( )  2 .
2 

(Refer Slide Time: 17:34)

P2 ( )
Now if you look at this is nothing but pulse of width 2 centered at 0 and

1
height and it spans from  to  .

526
(Refer Slide Time: 19:01)

(Refer Slide Time: 19:34)

Now if you bring  to the left hand side,  sin c ( t) has the Fourier transform P2 ( ) .

a a a
So if you set   this gives sin c( t)  P2 a ( ) .
  

527
(Refer Slide Time: 21:10)

sin at
So this can be written as  P2 a ( ) and this is basically pulse of height 1 and
t
width 2a centered at 0 from a to a .

(Refer Slide Time: 22:48)

So with this, we will stop here and continue with other problems in the Fourier analysis
for continuous time signals in the subsequent modules. Thank you very much.

528
Principles of Signals and Systems
Prof. Aditya K. Jagannatham
Department of Electrical Engineering
Indian Institute of Technology, Kanpur

Lecture - 46
Fourier Analysis Examples - Fourier Transform of Exponential, Cosine, Sgn, Unit-
Step Signals, Even and Odd Components

Keywords: Fourier Transform of Exponential, Cosine, Sgn, Unit-Step Signals, Even and
Odd Components

Hello welcome to another module in this massive open online course. So we are looking
at example problems for the Fourier analysis or the Fourier transform of aperiodic
signals.

(Refer Slide Time: 00:24)

So let us look at problem number 8 and we have a simple signal x(t )  e a t , a  0 .

529
(Refer Slide Time: 01:28)

This is basically an exponential which is decaying on both sides, so tends to 0 as t tends


to either  or   . And the Fourier transform of this can be evaluated as follows that is

e
a t
X ( )  e jt dt .


(Refer Slide Time: 02:21)

Now split this into two integrals, one from  to 0, t  0 , so t is t . So this becomes
0  0 
X ( )   e e dt   e e dt   e dt   e
at  jt  at  jt ( a  j ) t  ( a  j ) t
dt .
 0  0

530
(Refer Slide Time: 03:38)

0 
e( a  j )t e ( a  j )t 1 1
This implies    .
a  j 
a  j 0
a  j  a  j 

(Refer Slide Time: 05:02)

2a
So this finally evaluates as X ( )  . So this is the Fourier transform of the
a  2
2

decaying exponential signal.

531
(Refer Slide Time: 06:08)

Let us go to the next problem where we want to evaluate the Fourier transform of one of
the most frequently occurring signals which is x(t )  cos(0 t) and you can write this as

e j0t  e j0t
x(t )  . Now this has the Fourier transform,
2
1 1
X ( )  2 (  0 )  2 (  0 ) which is X ()   (  0 )   (  0 ) and
2 2
this is the Fourier transform of x(t )  cos(0 t) .

(Refer Slide Time: 07:18)

532
Similarly we can evaluate the Fourier transform of another very commonly occurring
signal that is the pure sinusoid x(t )  sin(0 t) .

(Refer Slide Time: 08:07)

Let us look at the next example, Fourier transform of the sign function


 t  0
x(t )  sgn(t)  11 t  0 .

 1 t 0
 2

(Refer Slide Time: 09:19)

533
And now to evaluate this, let us consider the derivative of sgn, the derivative of this is
basically if t > 0 it is constant, it is 1, so the derivative is 0, if t < 0 that is -1 which is
constant, so derivative is once again 0 and at t = 0 the derivative is an impulse because it
makes a transition or a step change from -1 to 1, that is a step change of basically
magnitude 2, so its the derivative is an impulse of magnitude 2. Let us call this derivative
by x(t )  2 (t) . And now the Fourier transform X ( ) is j X ( ) .

(Refer Slide Time: 10:39)

So from properties of Fourier transform, this is simply 2 since  (t ) has Fourier


transform of unity.

534
(Refer Slide Time: 11:27)

2
So this implies j X ( )  2  X ( )  which is the Fourier transform of sgn(t).
j

(Refer Slide Time: 12:18)

So now we want to use this result to derive the Fourier transform of u(t) that is your unit
1
step function. So the unit step function is basically 1 if t > 0, at 0 it is and for t < 0 it is
2
0.

535
(Refer Slide Time: 13:25)

Now we want to derive the Fourier transform of the unit step function U ( ) . To do this
first realize that the unit step function can be expressed in terms of the sgn function as
1 1 1 1 2 1
u (t )   sgn(t ) . So U ( )  2 ( )    ( )  .
2 2 2 2 j j

(Refer Slide Time: 14:41)

So this is the Fourier transform of the unit step function.

536
(Refer Slide Time: 15:39)

(Refer Slide Time: 16:09)

Let us now look at another interesting problem that is the Fourier transform of the even
and odd components of a signal.

537
(Refer Slide Time: 17:27)

So x(t) is a real signal. Let us say X ()  A()  jB() where A( ) denotes the real
part of the Fourier transform and B( ) denotes the imaginary part of the Fourier
transform. Now first we want to derive the expressions for the even and odd components
of this signal x(t). Let us denote this even and odd components by xe(t) and xo(t). So
x(t )  xe (t)  xo (t) .

(Refer Slide Time: 19:35)

Now the even component must exhibit even symmetry, which means xe (t )  xe (t ) and

the odd component has to exhibit odd symmetry that is xo (t )   xo (t ) . So this implies

x(t )  xe (t )  xo (t ) . Now x(t )  xe (t )  xo (t )  xe (t )  xo (t ) . So now if you look at

538
x(t )  x(t )
these two equations solving for xe(t) and xo(t), we have xe (t )  ,
2
x(t )  x(t )
xo (t )  and these represent the even and odd components of any signal x(t).
2

(Refer Slide Time: 21:01)

Now let us consider the Fourier transform of these even and odd components.

(Refer Slide Time: 22:29)

So let us say xe(t) has the Fourier transform X e ( ) and xo(t) has the Fourier transform

X o ( ) . Now recall that x(t )  xe (t )  xo (t ) which implies that the Fourier transform

X ( )  X e ( )  X o ( ) .

539
(Refer Slide Time: 23:20)

Further, x(t )  xe (t )  xo (t ) . Now if you take the Fourier transform of this we have

X ( )  X * ( )
X  ( )  X e ( )  X o ( ) . So we have X e ( )  .
2

(Refer Slide Time: 24:37)

2 Re  X ( )
Now this is nothing but  A( ) which is the real part of X ( ) .
2

540
(Refer Slide Time: 25:32)

X ( )  X * ( ) 2 j Im  X ( )
Similarly X o ( )    jB( ) where B( ) is the imaginary
2 2
part of X ( ) .

(Refer Slide Time: 26:29)

Therefore, we have X ()  A()  jB() which is similarly X ( )  X e ( )  X o ( ) .


And therefore that completes these interesting problems. So with these examples let us
stop this module here and we will continue with other examples in the subsequent
modules. Thank you.

541
Principles of Signals and Systems
Prof. Aditya K. Jagannatham
Department of Electrical Engineering
Indian Institute of Technology, Kanpur

Lecture - 47
Fourier Analysis Examples – Fourier Transform of Gaussian Pulse, Fourier
Transform Method to find Output of LTI Systems Described by Differential
Equations

Keywords: Fourier Transform of Gaussian Pulse, Fourier Transform Method to find


Output of LTI Systems Described by Differential Equations

Hello welcome to another module in this massive open online course. So we are looking
at example problems to understand the applications and theory behind Fourier transform
especially with relevance to its analysis of signals and systems. So in this module we are
going to look at the Fourier transform of an important signal that is the Gaussian pulse.

(Refer Slide Time: 01:33)

So by the Gaussian pulse we mean a signal which is of the form x(t )  e at .

542
(Refer Slide Time: 02:05)

And as t tends to both  and   , this tends to 0 and it is a bell shaped curve as shown in

slide. In fact, at t = 0 this is e0  1 and that is the peak value and then it decays as t
increases from 0 or t decreases from 0. So this is also frequently referred to as a bell
shaped and is used in signal processing and also in communication for instance in
communication this Gaussian pulse is now used in Gaussian shift keying or Gaussian
minimum shift keying which forms the basis of the GSM, a digital cellular standard

(Refer Slide Time: 03:41)

543

Now the Fourier transform of the Gaussian pulse that is X ( )  e
 at 2  jt
e dt .


(Refer Slide Time: 04:40)

 
dX ( )
  e at ( jt)e jt dt    jte at e jt dt .
2 2
So first we differentiate this, we have
d  

(Refer Slide Time: 06:15)

Now we are going to carry out integration by parts and when we obtain
  
j  at 2  jt j 
  e at ( j )e jt dt    e at e jt dt .
2 2
e e
2a  2a  2a 

544
(Refer Slide Time: 07:32)

Now if you recognize this, this is nothing but X ( ) that is the Fourier transform of e at .
2

(Refer Slide Time: 09:25)

dX ( )  dX ( ) 
So  X ( ) and this implies that   d . So from this we are
d 2a X ( ) 2a
obtaining a differential equation.

545
(Refer Slide Time: 10:51)

 
dX ( ) 
Now integrating on both sides we have     d .
0
X ( ) 0 2a

(Refer Slide Time: 11:59)


 2 2
And therefore this implies ln X ( ) 0    . This implies that
4a 0
4a

2 X ( ) 2
2

ln X ( )  lnX(0)    ln  . This implies X ( )  X (0) e 4 a .
4a X(0) 4a

546
(Refer Slide Time: 12:56)

Now all that is remaining in this Fourier transform is to evaluate X (0) which is nothing
but the Fourier transform value at   0 .

(Refer Slide Time: 14:10)

 e dt . Now this is difficult to evaluate and this also represents a


 at
This implies X (0) 
2



Gaussian probability density function, not exactly, but we can consider this with some

547
appropriate scaling to represent a Gaussian probability density function. So this can be
t2

1
1 2.
2a
written as e .
1
2
2a

(Refer Slide Time: 15:31)

So this represents a Gaussian probability density function with mean equal to 0 and
1 1
variance , here  2  .
2a 2a

(Refer Slide Time: 16:18)

548
t2

 
1 2.
1
1 
  e dt  2
 at
dt  1 which implies that 
2
2a
We have e and this is
1 2a a

2 

2a
nothing but X (0) .

(Refer Slide Time: 17:38)

2
 
So X ( )  e 4a
and this is the Fourier transform of the Gaussian pulse.
a

(Refer Slide Time: 18:54)

549
(Refer Slide Time: 20:03)

Let us now look at example number 13, where the applicability of the Fourier transform
in the analysis of the properties and the behavior of LTI systems is seen. So consider the
dy (t )
LTI system given by the constant coefficient differential equation  2 y (t )  x(t ) .
dt
This x(t) is the input and y(t) is the output. We need to find the expression for the output
for a given input signal x(t)  et u(t ) .

(Refer Slide Time: 22:33)

So taking the Fourier transform on both sides we have jY ()  2Y ()  X () .

550
(Refer Slide Time: 00:00)

Y ( ) 1
This implies that Y ( )(j  2)  X ( )    H ( ) which we also call the
X ( ) j  2
frequency response of the system which is the Fourier transform of the impulse response.

(Refer Slide Time: 24:34)

551
1
Now X ( )  that is the Fourier transform of x(t) which implies that the output
j  1
1 1 1 1
Y ( )  H ( ) X ( )  .   . So this is the partial fraction
j  2 j  1 j  1 j  2
expansion.

(Refer Slide Time: 26:02)

Now taking the inverse Fourier transform, we have


y(t )  et u(t )  e2t u (t )   et  e2t  u(t ) and that is basically the output signal to the

given input.

552
(Refer Slide Time: 27:16)

So in this module we have seen a couple of other examples of the Fourier transform. We
will stop here and in the subsequent modules we shall look at other important
applications of the Fourier transform starting with what is known as the Bode plot, to
basically get an idea or pictorially represent the properties or the frequency response of
an LTI system. So we will stop here. Thank you very much.

553
Principles of Signals and Systems
Prof. Aditya K. Jagannatham
Department of Electrical Engineering
Indian Institute of Technology, Kanpur

Lecture - 48
Fourier Analysis Examples: Bode Plot for Magnitude/ Phase Response – Simple
Example

Keywords: Bode Plot for Magnitude/ Phase Response – Simple Example

Hello, welcome to another module in this massive open online course. So we are looking
at the Fourier transform and example problems for the Fourier transform. In this module we are going to start with the Bode plot, which can be used to visualize the frequency response of an LTI system.

(Refer Slide Time: 00:39)

So the Bode plot gives us a graphical representation of the transfer function $H(\Omega)$ for an LTI system; it serves as a tool to graphically represent, visualize and thereby better understand the properties of the LTI system under consideration.

(Refer Slide Time: 02:17)

For instance, look at the first example (a): consider the following transfer function $H(\Omega) = 1 + \frac{j\Omega}{100}$. Now consider the dB power gain given as $G(\Omega) = 10\log_{10}|H(\Omega)|^2$.

(Refer Slide Time: 03:09)

So this is $G(\Omega) = 20\log_{10}|H(\Omega)|$. Now, in this case we have $G(\Omega) = 20\log_{10}\left|1 + \frac{j\Omega}{100}\right|$, with $\Omega_c = 100$ rad/s, and this is termed the corner frequency.

(Refer Slide Time: 04:41)

And now to plot this, let us divide this into two regions: one is when $\Omega \ll \Omega_c$ and the other corresponds to $\Omega \gg \Omega_c$. So consider the case $\Omega \ll \Omega_c \Rightarrow \frac{\Omega}{\Omega_c} \ll 1 \Rightarrow 1 + j\frac{\Omega}{\Omega_c} \approx 1$. So I can ignore $j\frac{\Omega}{\Omega_c}$, and in this case what you can see is $G(\Omega) = 20\log_{10}\left|1 + \frac{j\Omega}{100}\right| \approx 20\log_{10}1 = 0$.

(Refer Slide Time: 06:08)

So G( ) for  c is equal to 0. Now what happens when  c .

556
(Refer Slide Time: 06:38)

Now $\Omega \gg \Omega_c \Rightarrow \frac{\Omega}{\Omega_c} \gg 1 \Rightarrow 1 + j\frac{\Omega}{\Omega_c} \approx j\frac{\Omega}{\Omega_c}$; therefore, $G(\Omega) = 20\log_{10}\left|1+\frac{j\Omega}{100}\right| \approx 20\log_{10}\left|\frac{j\Omega}{100}\right| = 20\log_{10}\Omega - 20\log_{10}10^2 = 20\log_{10}\Omega - 40$.

(Refer Slide Time: 07:35)

(Refer Slide Time: 08:35)

Now let us look at what happens if $\Omega$ increases by a factor of 10: $G(10\Omega) = 20\log_{10}(10\Omega) - 40 = 20\log_{10}10 + 20\log_{10}\Omega - 40 = 20 + 20\log_{10}\Omega - 40 = 20 + G(\Omega)$.

(Refer Slide Time: 09:45)

So we can see that the gain rises by 20 dB whenever $\Omega$ increases by a factor of 10. So for every factor-of-10 increase in $\Omega$ it rises by 20 dB, and this is termed a 20 dB per decade increase in the Bode plot.

(Refer Slide Time: 10:48)

(Refer Slide Time: 11:50)

So the Bode plot is best plotted on a log axis. On a log axis you can see that every decade occupies a single unit: when $\Omega$ goes from 1 to 10, remember the log increases from 0 to 1, and when it goes from 10 to 100, the log goes from 1 to 2. Here we are going to have vertical grid lines at the decades, and the horizontal lines will mark the dB values. So for $\Omega$ much smaller than 100 the gain is basically 0 dB, and for $\Omega$ much larger than 100 it increases at 20 dB per decade.

(Refer Slide Time: 14:46)

If you want to compute the value exactly at the corner frequency, we have $\Omega = \Omega_c = 100$, so $G(\Omega) = 20\log_{10}\left|1 + j\frac{100}{100}\right| = 20\log_{10}|1+j| = 20\log_{10}\sqrt{2} = 10\log_{10}2 \approx 3\,\text{dB}$. So at the corner frequency it is basically 3 dB. So the exact plot will look something like as shown in the slide.

(Refer Slide Time: 16:40)

This is basically termed the Bode plot, more specifically the Bode plot for the magnitude or the power. So this is the Bode plot representation, where $\Omega$ is placed on a log axis and the gain is represented in the dB scale on the y-axis.
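As an illustration (not part of the lecture; matplotlib is assumed to be available), the exact gain and the straight-line asymptotes of example (a) can be drawn on a log axis:

```python
import numpy as np
import matplotlib.pyplot as plt

w = np.logspace(0, 4, 500)                      # 1 to 10^4 rad/s on a log axis
H = 1 + 1j * w / 100
G_exact = 20 * np.log10(np.abs(H))              # exact gain in dB
G_asymp = np.where(w < 100, 0.0, 20 * np.log10(w) - 40)   # 0 dB below the corner, +20 dB/decade above

plt.semilogx(w, G_exact, label="exact")
plt.semilogx(w, G_asymp, "--", label="asymptotes")
plt.axvline(100, color="gray", lw=0.5)          # corner frequency; exact curve is ~3 dB here
plt.xlabel("Omega (rad/s)"); plt.ylabel("Gain (dB)"); plt.legend(); plt.show()
```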

(Refer Slide Time: 17:33)

Let us now look at the phase plot. So consider now the phase $\angle H = \angle\left(1 + \frac{j\Omega}{100}\right)$, where 100 is the corner frequency. Again we analyze this similarly to what we have done before: we consider two cases, one is $\Omega \ll \Omega_c$ and the second is $\Omega \gg \Omega_c$. So $\Omega \ll \Omega_c = 100 \Rightarrow \angle H \approx \angle 1 = 0$.

(Refer Slide Time: 18:31)

So the phase is 0. Now, for $\Omega$ significantly larger than 100 this will be $\angle H \approx \angle\frac{j\Omega}{100} = \frac{\pi}{2}$, and at $\Omega = 100$ you can see that $\angle H = \angle(1+j) = \frac{\pi}{4}$. So the angle starts at 0 for $\Omega$ much less than the corner frequency, is $\frac{\pi}{4}$ at the corner frequency, and is $\frac{\pi}{2}$ for $\Omega$ significantly greater than the corner frequency.

(Refer Slide Time: 19:47)

So if you plot this Bode plot for the phase, it looks as shown in the slide: it rises from 0 to $\frac{\pi}{2}$, and at $\Omega = 100$ it is equal to $\frac{\pi}{4}$. Now let us look at a slightly more sophisticated example for this Bode plot.

(Refer Slide Time: 22:28)

So this is our example (b) for the Bode plot. Let $H(\Omega) = \frac{10 + j\Omega}{(1+j\Omega)(100+j\Omega)}$, so this can be simplified as follows: $H(\Omega) = \frac{\frac{1}{10}\left(1 + \frac{j\Omega}{10}\right)}{(1+j\Omega)\left(1+\frac{j\Omega}{100}\right)}$. Now you can see that there are three corner frequencies, namely 1, 10 and 100, corresponding to each of these terms in the numerator and denominator.

(Refer Slide Time: 24:35)

Now for very small $\Omega$, much smaller than 1, $G(\Omega) = 20\log_{10}|H(\Omega)| \approx 20\log_{10}\frac{1}{10} = -20\,\text{dB}$. This is because when $\Omega$ is much smaller than 1, all the bracketed terms simply reduce to constants.

(Refer Slide Time: 25:52)

Now, consider another scenario when $\Omega$ is larger, that is, $\Omega$ is in the range $1 \ll \Omega \ll 10$. Here $G(\Omega) \approx 20\log_{10}\left|\frac{\frac{1}{10}\cdot 1}{j\Omega\cdot 1}\right| = -20 - 20\log_{10}\Omega$. So it starts decreasing at 20 dB per decade.

So we are in the middle of this example and we will stop here in this module and we will
continue with this example, look at the bode plot of the magnitude and the phase and also
other examples in the subsequent module. Thank you very much.

Principles of Signals and Systems
Prof. Aditya K. Jagannatham
Department of Electrical Engineering
Indian Institute of Technology, Kanpur

Lecture - 49
Fourier Analysis Examples: Bode Plot for Magnitude/ Phase Response – Second
Example, Fourier Transform of Hilbert Transformer
Keywords: Fourier Transform of Hilbert Transformer

Hello, welcome to another module in this massive open online course. So we are looking
at the example problems for the Fourier transform and particularly focusing on the Bode
plot to represent both the magnitude as well as the phase response of an LTI system.

(Refer Slide Time: 00:32)

So we are looking at the Bode plot, specifically at the transfer function $H(\Omega) = \frac{\frac{1}{10}\left(1+\frac{j\Omega}{10}\right)}{(1+j\Omega)\left(1+\frac{j\Omega}{100}\right)}$.

(Refer Slide Time: 00:52)

We have derived the Bode plot for two regions, that is, for $\Omega \ll 1$ and $1 \ll \Omega \ll 10$ radians per second.

(Refer Slide Time: 01:05)

Now when $10 \ll \Omega \ll 100$, $G(\Omega) \approx 20\log_{10}\left|\frac{\frac{1}{10}\cdot\frac{j\Omega}{10}}{j\Omega\cdot 1}\right| = 20\log_{10}\frac{1}{100} = -40\,\text{dB}$.

(Refer Slide Time: 02:15)

So finally, when $\Omega \gg 100$ radians per second, we have $G(\Omega) \approx 20\log_{10}\left|\frac{\frac{1}{10}\cdot\frac{j\Omega}{10}}{j\Omega\cdot\frac{j\Omega}{100}}\right| = 20\log_{10}\frac{1}{\Omega} = -20\log_{10}\Omega$, which basically decreases at 20 dB per decade.

(Refer Slide Time: 03:53)

(Refer Slide Time: 04:36)

And therefore, the Bode magnitude plot looks something like as shown in the slide. The frequency axis runs from 0.1 to 1 to 10 to 100 to 1000 radians per second, with gain levels at 0 dB, -20 dB and -40 dB. For $\Omega \ll 1$ the gain is -20 dB; when $1 \ll \Omega \ll 10$ it decreases at 20 dB per decade; between 10 and 100 it is constant at -40 dB; and when $\Omega \gg 100$ it again starts decreasing at 20 dB per decade. So this is the actual Bode plot for $|H(\Omega)|$. Now when you look at the phase, it becomes a little bit more complicated.

(Refer Slide Time: 07:35)

So the transfer function is $H(\Omega) = \frac{\frac{1}{10}\left(1+\frac{j\Omega}{10}\right)}{(1+j\Omega)\left(1+\frac{j\Omega}{100}\right)}$. Now consider the first case, that is, when $\Omega \ll 1$, the first corner frequency. The phase $\theta_H = \angle H(\Omega)$ is $\theta_H \approx \angle\frac{1}{10} + \angle 1 - \angle 1 - \angle 1 = 0$, since all these terms are constants.

(Refer Slide Time: 09:41)

Now, when $\Omega = 1$, $\theta_H \approx \angle\frac{1}{10} + \angle 1 - \angle(1+j) - \angle 1 = -\frac{\pi}{4}$. Now, proceeding so forth, when $1 \ll \Omega \ll 10$, $\theta_H \approx 0 + 0 - \frac{\pi}{2} - 0 = -\frac{\pi}{2}$.

(Refer Slide Time: 11:26)

When $\Omega = 10$ (and $\Omega \ll 100$), $\theta_H \approx 0 + \frac{\pi}{4} - \frac{\pi}{2} - 0 = -\frac{\pi}{4}$, and now when $10 \ll \Omega \ll 100$, $\theta_H \approx 0 + \frac{\pi}{2} - \frac{\pi}{2} - 0 = 0$.
2 2

(Refer Slide Time: 13:03)

And when $\Omega = 100$, $\theta_H \approx 0 + \frac{\pi}{2} - \frac{\pi}{2} - \frac{\pi}{4} = -\frac{\pi}{4}$, and finally, when $\Omega \gg 100$, $\theta_H \approx 0 + \frac{\pi}{2} - \frac{\pi}{2} - \frac{\pi}{2} = -\frac{\pi}{2}$. So if you take all these cases and try to plot the Bode plot for the phase, it is a little bit tricky and it is going to be very approximate, because plotting the phase is more difficult.

(Refer Slide Time: 14:03)

So if you plot these values and join them by a line, it looks something like this, and this is the Bode plot of the phase. It is approximate and not exact, because it is much more difficult to plot the phase as the angle of the transfer function varies from very small values of $\Omega$ to much larger values. So it starts from 0 and ends up at $-\frac{\pi}{2}$. So that basically completes the discussion on the Bode plot.
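To cross-check the asymptotic sketch for example (b) (my own addition; matplotlib assumed), the exact magnitude and phase can be evaluated directly:

```python
import numpy as np
import matplotlib.pyplot as plt

w = np.logspace(-1, 3, 600)                      # 0.1 to 1000 rad/s
H = (10 + 1j * w) / ((1 + 1j * w) * (100 + 1j * w))

fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True)
ax1.semilogx(w, 20 * np.log10(np.abs(H)))        # should show -20 dB, a slope, -40 dB, another slope
ax1.set_ylabel("Gain (dB)")
ax2.semilogx(w, np.angle(H))                     # starts near 0, ends near -pi/2
ax2.set_ylabel("Phase (rad)"); ax2.set_xlabel("Omega (rad/s)")
plt.show()
```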

(Refer Slide Time: 18:35)

Let us now consider a phase shifter system; the frequency response of the system looks like $H(\Omega) = \begin{cases} e^{-j\pi/2}, & \Omega > 0 \\ e^{j\pi/2}, & \Omega < 0 \end{cases}$. Now this is also termed a Hilbert transformer, and it has a lot of applications in communication; it is used to generate a type of amplitude modulation known as single sideband modulation, where instead of transmitting both sidebands one can transmit only a single sideband.

(Refer Slide Time: 19:50)

(Refer Slide Time: 20:32)

Now we want to derive the impulse response of the phase shifter. What this is doing is: all the positive frequency components get shifted in phase by $-\frac{\pi}{2}$, and all the negative frequency components get shifted in phase by $\frac{\pi}{2}$. So $H(\Omega) = \begin{cases} -j, & \Omega > 0 \\ j, & \Omega < 0 \end{cases}$, which is equal to $-j\,\mathrm{sgn}(\Omega)$. So this is basically the frequency response.

(Refer Slide Time: 21:55)

Now we will use the principle of duality, which states that if $x(t)$ has Fourier transform $X(\Omega)$, then $X(t)$ has Fourier transform $2\pi\,x(-\Omega)$. Now we know that $\mathrm{sgn}(t)$ has Fourier transform $\frac{2}{j\Omega}$.

(Refer Slide Time: 22:37)

Now using the duality principle, $\frac{2}{jt}$ has Fourier transform $2\pi\,\mathrm{sgn}(-\Omega)$, but realize that $\mathrm{sgn}(t)$ is an odd function, therefore $\mathrm{sgn}(-\Omega) = -\mathrm{sgn}(\Omega)$. So this is equal to $-2\pi\,\mathrm{sgn}(\Omega)$, which basically implies that $\frac{1}{\pi t}$ has Fourier transform $-j\,\mathrm{sgn}(\Omega)$. So this is the corresponding impulse response.

(Refer Slide Time: 23:40)

(Refer Slide Time: 24:27)

So this is the impulse response of the Hilbert transformer, that is, $h(t) = \frac{1}{\pi t}$.

(Refer Slide Time: 24:59)

575
Now if I represent this as a system, I would have an input signal $x(t)$, an output signal $y(t)$, and the impulse response $h(t)$; remember $y(t)$ is $x(t)$ convolved with $h(t)$, which is basically $y(t) = x(t)*h(t) = \int_{-\infty}^{\infty}x(\tau)\,\frac{1}{\pi(t-\tau)}\,d\tau = \frac{1}{\pi}\int_{-\infty}^{\infty}\frac{x(\tau)}{t-\tau}\,d\tau$, and this is the output of the Hilbert transformer. So this $y(t)$ is the output of the Hilbert transformer, and it is also known as the Hilbert transform of the signal $x(t)$.
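As a discrete-time illustration (my own sketch; scipy assumed), scipy.signal.hilbert returns the analytic signal x(t) + j x_hat(t), so its imaginary part approximates the Hilbert transform; for a cosine it should come out as the corresponding sine:

```python
import numpy as np
from scipy.signal import hilbert

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
x = np.cos(2 * np.pi * 50 * t)          # test signal

analytic = hilbert(x)                   # x(t) + j * xhat(t)
xhat = np.imag(analytic)                # approximate Hilbert transform of x
# The Hilbert transform of cos(w0 t) is sin(w0 t); compare away from the edges.
err = np.max(np.abs(xhat[100:-100] - np.sin(2 * np.pi * 50 * t)[100:-100]))
print("max interior error:", err)
```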

(Refer Slide Time: 26:35)

So we have looked at a few more interesting problems and we will stop this module here
and continue with other aspects in the subsequent modules. Thank you very much.

Principles of Signals and Systems
Prof. Aditya K. Jagannatham
Department of Electrical Engineering
Indian Institute of Technology, Kanpur

Lecture - 50
Fourier Transform Examples: Filtering – Ideal Low Pass Filter

Keywords: Filtering, Ideal Low Pass Filter

Hello welcome to another module in this massive open online course. So we are looking
at example problems in the Fourier transforms. So let us start looking at another
interesting problem that is to understand the process of filtering.

(Refer Slide Time: 00:31)

So we are going to discuss an application on the theory of filtering.

(Refer Slide Time: 01:24)

So consider an ideal low pass filter; the response looks something like as in the slide. The cut-off frequency is $\Omega_c$ and the response is $H(\Omega) = \begin{cases} 1, & |\Omega| \le \Omega_c \\ 0, & \text{otherwise} \end{cases}$, and this is an ideal low pass filter.


(Refer Slide Time: 02:17)

Now what we want to do is, for this LPF, consider an input signal $x(t) = e^{-2t}u(t)$. We want to design the filter, which means we want to choose a cutoff frequency $\Omega_c$ such that it passes a certain desired fraction of the signal. Specifically, we want to find $\Omega_c$ such that the ideal LPF passes exactly half of the energy of the input signal. So this is our filter design problem, and we can approach it as follows.

(Refer Slide Time: 05:08)

Now first let us use the Fourier transform approach. Taking the Fourier transform of the input signal we have $X(\Omega) = \frac{1}{j\Omega + 2}$. Now, we know from the theory of the Fourier transform that the output Fourier transform is $Y(\Omega) = X(\Omega)H(\Omega) = \begin{cases} \frac{1}{j\Omega+2}, & |\Omega|\le\Omega_c \\ 0, & \text{otherwise} \end{cases}$. So it allows only the signal components which are in the required band and it suppresses all the frequency components which are outside this band.

(Refer Slide Time: 06:45)

Now the energy of the input signal is $E_X = \int_{-\infty}^{\infty}|x(t)|^2\,dt = \int_0^{\infty}e^{-4t}\,dt = -\frac{1}{4}e^{-4t}\Big|_0^{\infty} = \frac{1}{4}$, and this is the energy of the input signal.

(Refer Slide Time: 07:42)

Now we want to introduce the concept of energy spectral density, which gives us the
distribution of the energy of the input signal over the frequency spectrum. So this is a very good example to illustrate the practical application of this concept of energy spectral
density. So now we want to consider the energy spectral density of output.

(Refer Slide Time: 08:48)

Now integrating the energy spectral density of the output over frequency gives the output energy $E_Y = \frac{1}{2\pi}\int_{-\Omega_c}^{\Omega_c}|Y(\Omega)|^2\,d\Omega = \frac{1}{2\pi}\int_{-\Omega_c}^{\Omega_c}\frac{1}{\Omega^2+4}\,d\Omega = \frac{1}{2\pi}\cdot 2\cdot\frac{1}{2}\tan^{-1}\frac{\Omega}{2}\Big|_0^{\Omega_c} = \frac{1}{2\pi}\tan^{-1}\frac{\Omega_c}{2}$.
(Refer Slide Time: 11:24)

So this is the energy of the output, and you can also call this $E_Y$. We want to design the filter with a cut-off frequency $\Omega_c$ such that the output energy is half of the input energy; the input energy is $\frac{1}{4}$, so half of the input energy is $\frac{1}{8}$.

(Refer Slide Time: 13:15)

So we have $E_Y = \frac{1}{2}E_X$, where $E_X$ is the input energy, which means $\frac{1}{2\pi}\tan^{-1}\frac{\Omega_c}{2} = \frac{1}{8}$, which implies $\tan^{-1}\frac{\Omega_c}{2} = \frac{\pi}{4} \Rightarrow \frac{\Omega_c}{2} = 1 \Rightarrow \Omega_c = 2\ \text{rad/s}$. So this gives us a filter such that the energy of the output signal, corresponding to the exponential input signal, is basically half the energy of the input signal.
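A quick numerical confirmation of this design (my own addition; scipy assumed): with the cutoff at 2 rad/s the output energy should evaluate to 1/8, half of the input energy 1/4.

```python
import numpy as np
from scipy.integrate import quad

Wc = 2.0   # cutoff found analytically
# Output energy E_Y = (1/2pi) * integral_{-Wc}^{Wc} 1/(w^2 + 4) dw
E_Y, _ = quad(lambda w: 1.0 / (w**2 + 4), -Wc, Wc)
E_Y /= 2 * np.pi
print(E_Y, 1 / 8)    # both should be 0.125
```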

(Refer Slide Time: 14:42)

So let us proceed to the next problem, which is with respect to an RC filter; this problem deals with the rise time of an RC filter. So let us say we have a series RC circuit, the input is the signal $x(t)$ and the output voltage across the capacitor is $y(t)$. The problem is to find the unit step response and the rise time, that is, the time it takes the step response to go from 10 percent to 90 percent of its final value. So we want to characterize the unit step response.

(Refer Slide Time: 16:57)

And then we want to characterize the rise time in terms of the 3 dB bandwidth.

(Refer Slide Time: 18:00)

So the input voltage $x(t)$ satisfies $x(t) = y(t) + RC\,\frac{dy(t)}{dt}$; this is the differential equation for the input/output relation of the RC circuit. This implies that $X(\Omega) = Y(\Omega) + RC\,j\Omega\,Y(\Omega)$. Now we have a simple equation in terms of the Fourier transform which we can solve to obtain the transfer function: $\frac{Y(\Omega)}{X(\Omega)} = \frac{1}{1 + j\Omega RC}$.

(Refer Slide Time: 19:41)

Now consider the unit step function $u(t) = \begin{cases} 1, & t \ge 0 \\ 0, & \text{otherwise} \end{cases}$.

(Refer Slide Time: 20:36)

The Fourier transform of this is $U(\Omega) = \pi\delta(\Omega) + \frac{1}{j\Omega}$.

(Refer Slide Time: 21:15)

Therefore the output corresponding to the unit step is basically $Y(\Omega) = U(\Omega)H(\Omega) = \left(\pi\delta(\Omega) + \frac{1}{j\Omega}\right)\frac{1}{1+j\Omega RC}$. So this is basically $\pi\delta(\Omega)\,\frac{1}{1+j\Omega RC} + \frac{1}{j\Omega}\cdot\frac{1}{1+j\Omega RC}$, which is $\pi\delta(\Omega) + \frac{1}{j\Omega}\cdot\frac{1}{1+j\Omega RC}$.

(Refer Slide Time: 22:07)

So this is simply equal to $\pi\delta(\Omega) + \frac{1}{j\Omega} - \frac{RC}{1+j\Omega RC}$, which is $\pi\delta(\Omega) + \frac{1}{j\Omega} - \frac{1}{j\Omega + \frac{1}{RC}}$. So, taking the inverse Fourier transform, we have $y(t) = u(t) - e^{-\frac{t}{RC}}u(t) \Rightarrow y(t) = \left(1 - e^{-\frac{t}{RC}}\right)u(t)$.

(Refer Slide Time: 24:04)

So if you plot it, it is going to look something like as in slide.
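A minimal numerical sketch (my own addition; the value of RC is arbitrary and scipy is assumed) that checks this step response against the transfer function:

```python
import numpy as np
from scipy import signal

RC = 0.5                                        # arbitrary time constant for the check
t = np.linspace(0, 5, 1001)
# H(s) = 1 / (RC*s + 1); step() returns the unit step response of this system
_, y = signal.step(([1.0], [RC, 1.0]), T=t)
y_expected = 1 - np.exp(-t / RC)
print("max error:", np.max(np.abs(y - y_expected)))
```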

(Refer Slide Time: 27:18)

So this is the unit step response of the RC circuit; at $t = 0$ it is 0, and as $t \to \infty$ it approaches 1. This $RC$ is known as the time constant: the larger the time constant, the slower the rise.

(Refer Slide Time: 29:34)

So we want to find the rise time and its relation to the 3dB frequency. So in the next
module we can find the rise time and find its relation to the 3dB frequency, F3dB. So we
will stop this module here and look at the other aspects in the subsequent modules.
Thank you very much.

Principles of Signals and Systems
Prof. Aditya K. Jagannatham
Department of Electrical Engineering
Indian Institute of Technology, Kanpur

Lecture - 51
Fourier Transform Problems: Unit Step Response of RC Circuit, Sampling of
Continuous Signal

Keywords: Unit Step Response of RC Circuit, Sampling of Continuous Signal

Hello, welcome to another module in this massive open online course. So we are looking
at problems in the Fourier transform and specifically the unit step response of the serial
RC circuit.

(Refer Slide Time: 00:24)

So the unit step response is given as $y(t) = u(t)\left(1 - e^{-\frac{t}{RC}}\right)$.

(Refer Slide Time: 01:40)

Now, you can see that the final and initial values are $y(\infty) = 1$ and $y(0) = 1 - 1 = 0$, so the final output is 1 and all the voltage will be across the capacitor. So it starts from 0 and rises to 1. Now we define the rise time as the time taken by the RC circuit, or the capacitor voltage, to go from 10 percent of the final value to 90 percent of its final value.

(Refer Slide Time: 03:10)

Now 10 percent of its final value is $0.1\times 1 = 0.1$. We need to find the time $t_{10}$ taken to reach 10 percent of the final value, that is, $1 - e^{-\frac{t_{10}}{RC}} = 0.1 \Rightarrow e^{-\frac{t_{10}}{RC}} = 0.9 \Rightarrow t_{10} = -RC\ln 0.9$.
 0.9  t10   RC ln 0.9 .
 

(Refer Slide Time: 04:05)

Now, 90 percent of the final value is 0.9 times 1, that is, 0.9.

(Refer Slide Time: 04:46)

Now let this time be $t_{90}$, that is, $1 - e^{-\frac{t_{90}}{RC}} = 0.9 \Rightarrow e^{-\frac{t_{90}}{RC}} = 0.1 \Rightarrow t_{90} = -RC\ln 0.1$.

(Refer Slide Time: 05:54)

So the rise time is basically the time taken to reach 90 percent of the final value from 10 percent, that is, $t_{90} - t_{10} = -RC\ln 0.1 + RC\ln 0.9 = RC\ln 9 \approx 2.1972\,RC$.

(Refer Slide Time: 06:17)

So this is the expression for the rise time and you can clearly see as RC increases, the
rise time also increases.

(Refer Slide Time: 07:55)

Now, we want to relate the rise time to the 3 dB bandwidth. The transfer function is $H(\Omega) = \frac{1}{1+j\Omega RC} = \frac{1}{1+j\frac{\Omega}{\Omega_0}}$, where $\Omega_0 = \frac{1}{RC}$, and this is termed the 3 dB bandwidth because $H(\Omega_0) = \frac{1}{1+j} \Rightarrow |H(\Omega_0)| = \frac{1}{\sqrt{2}} \Rightarrow |H(\Omega_0)|^2 = \frac{1}{2}$.

(Refer Slide Time: 08:43)

(Refer Slide Time: 10:00)

Therefore the output power is half corresponding to that frequency which basically
corresponds to a 3dB suppression in the power of the input signal to this RC circuit.

(Refer Slide Time: 11:11)

So we have the rise time $T_{\text{rise}} = 2.1972\,RC$ and the 3 dB bandwidth $\Omega_{3\text{dB}} = \frac{1}{RC}$, so $f_{3\text{dB}} = \frac{\Omega_{3\text{dB}}}{2\pi} = \frac{1}{2\pi RC}$. So $f_{3\text{dB}}$ is the 3 dB frequency and $RC = \frac{1}{2\pi f_{3\text{dB}}}$. Now $T_{\text{rise}} = \frac{2.1972}{2\pi f_{3\text{dB}}} \approx \frac{0.35}{f_{3\text{dB}}}$.
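The 0.35/f3dB rule can be checked numerically from the step response itself (a sketch with an arbitrary RC, my own addition):

```python
import numpy as np

RC = 1e-3                                   # arbitrary time constant
t = np.linspace(0, 10 * RC, 100001)
y = 1 - np.exp(-t / RC)                     # unit step response of the RC circuit

t10 = t[np.searchsorted(y, 0.1)]            # first time the response crosses 10 percent
t90 = t[np.searchsorted(y, 0.9)]            # first time it crosses 90 percent
T_rise = t90 - t10
f_3dB = 1 / (2 * np.pi * RC)
print(T_rise * f_3dB)                       # close to 0.35 (exactly ln(9)/(2*pi))
```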

(Refer Slide Time: 11:56)

So basically that gives us the relation that rise time is inversely proportional to the 3dB
frequency, which means that if the 3dB frequency is large, the rise time is much smaller.
On the other hand if the 3dB frequency is small, that is the filter has a very small
bandwidth, then the rise time is much larger.

(Refer Slide Time: 13:55)

Let us now look at another very important aspect of the Fourier transform which is
related to the sampling of a continuous time analog signal.

(Refer Slide Time: 15:29)

Now consider a continuous time signal as shown in slide. Now before converting it to
digital signal we would like to convert it to a discrete time signal. So to convert this
continuous time signal into a discrete time signal, we have to look at the values of the
signal at discrete time instants or we have to extract the values of this signal at discrete
time instants which are regularly spaced. This process is termed as sampling that is
converting this continuous time analog signal into a discrete time signal.

(Refer Slide Time: 17:10)

So we take different time instants such as $0, T_s, 2T_s, 3T_s, -T_s$ and so on. So these are the various equi-spaced discrete time instants and these are the samples.

(Refer Slide Time: 18:21)

You have a continuous time signal, convert it into a discrete time signal by extracting
samples at equi-spaced intervals of time and this interval of time which is of duration Ts
is known as the sampling interval. So this is the first step to convert this analog signal
into a digital signal and it is one of the most important steps, because frequently the
natural signals are analog and to store and transmit these signals, we convert them into
digital signals. Now the quantity $\Omega_s = \frac{2\pi}{T_s}$ is the sampling frequency, or basically the angular frequency of sampling.

(Refer Slide Time: 20:39)

The sampling frequency can also be denoted by $f_s = \frac{1}{T_s}$, where one is the frequency and
the other is the angular frequency. And there are various techniques of sampling, one of
the simplest methods of sampling is basically where we have an impulse scaled by the
value of the signal at each multiple of Ts that is basically we have impulses which are
periodically spaced or equi-spaced at multiples of Ts and we scale the amplitude or we
scale each impulse by the value of the signal at that particular point. This is known as
impulse train sampling.

(Refer Slide Time: 21:59)

Now the impulse train is basically $\delta_{T_s}(t) = \sum_{k=-\infty}^{\infty}\delta(t - kT_s)$, that is, impulses of unit scaling at each multiple of $T_s$.

(Refer Slide Time: 23:16)

(Refer Slide Time: 23:40)

So we have the time axis and at each multiple of Ts we have unit impulses. So this is the
impulse train that is impulses spaced at equal intervals on the time axes at multiples of
Ts, where Ts is the sampling interval. Now the impulse train sampling is simply we take
the continuous time signal x(t) and multiply it with the impulse train.

(Refer Slide Time: 24:42)

This is $x(t)\,\delta_{T_s}(t) = x(t)\sum_{k=-\infty}^{\infty}\delta(t - kT_s) = \sum_{k=-\infty}^{\infty}x(t)\,\delta(t - kT_s)$.

(Refer Slide Time: 25:15)

And now, by the property of the impulse function, this basically extracts the value of the signal at the corresponding sampling instant $kT_s$. So this becomes $x_{T_s}(t) = \sum_{k=-\infty}^{\infty}x(kT_s)\,\delta(t - kT_s)$, and this is the sampled signal.

(Refer Slide Time: 27:18)

Now a very interesting viewpoint about the sampling process can be gained by looking at the Fourier transform, that is, at what happens to the spectrum of the signal when it is sampled. So if $x(t)$ has the spectrum $X(\Omega)$, then we would like to understand the spectrum of $x_{T_s}(t)$, that is, the sampled signal.

(Refer Slide Time: 28:19)

Now let us start with the spectrum of the impulse train. The impulse train is $\delta_{T_s}(t) = \sum_{k=-\infty}^{\infty}\delta(t - kT_s)$, and this is a periodic signal with period $T_s$, which is the fundamental period. The fundamental frequency is $\Omega_s = \frac{2\pi}{T_s}$.

(Refer Slide Time: 30:11)

Now since it is a periodic signal we can derive the CEFS, that is, the complex exponential Fourier series: $\delta_{T_s}(t) = \sum_{k=-\infty}^{\infty}C_k\,e^{jk\Omega_s t}$.

(Refer Slide Time: 30:53)

That is basically a combination of an infinite number of sinusoids, corresponding to the fundamental frequency $\Omega_s$ and multiples of the fundamental frequency, that is, harmonics at multiples $k\Omega_s$. We can derive the coefficients $C_k$ of the CEFS, and from that we can basically understand what the resulting spectrum of the sampled signal is. So we will stop this module here and continue in the subsequent module. Thank you very much.

Principles of Signals and Systems
Prof. Aditya K. Jagannatham
Department of Electrical Engineering
Indian Institute of Technology, Kanpur

Lecture - 52
Sampling: Spectrum of Sampled Signal, Nyquist Criterion

Keywords: Spectrum of Sampled Signal, Nyquist Criterion

Hello, welcome to another module in this massive open online course. So we are looking
at sampling and considering impulse train based sampling which is the most common
form of sampling a continuous time analog signal to convert it into a discrete time signal.

(Refer Slide Time: 00:32)

(Refer Slide Time: 00:53)

The impulse train $\delta_{T_s}(t)$ is a periodic signal, $\delta_{T_s}(t) = \sum_{k=-\infty}^{\infty}\delta(t - kT_s)$, and this $T_s$ is known as the sampling duration; $f_s = \frac{1}{T_s}$ is known as the sampling frequency, and $\Omega_s = \frac{2\pi}{T_s}$ is the angular frequency of sampling in radians per second.

(Refer Slide Time: 01:18)

We need to find the coefficients $C_k$ of the complex exponential Fourier series of the impulse train, and this coefficient is $C_k = \frac{1}{T_s}\int_{-T_s/2}^{T_s/2}\delta_{T_s}(t)\,e^{-jk\Omega_s t}\,dt$.

(Refer Slide Time: 02:25)

But in the interval $-\frac{T_s}{2}$ to $\frac{T_s}{2}$ this is simply $\delta(t)$, so this is basically $C_k = \frac{1}{T_s}\int_{-T_s/2}^{T_s/2}\delta(t)\,e^{-jk\Omega_s t}\,dt = \frac{1}{T_s}\,e^{-jk\Omega_s t}\Big|_{t=0} = \frac{1}{T_s}$.


(Refer Slide Time: 03:18)

So the complex exponential Fourier series of the impulse train is given as $\delta_{T_s}(t) = \frac{1}{T_s}\sum_{k=-\infty}^{\infty}e^{jk\Omega_s t}$.

(Refer Slide Time: 04:01)

Now you recall that $e^{jk\Omega_s t} \leftrightarrow 2\pi\,\delta(\Omega - k\Omega_s)$. So if you take the Fourier transform of the impulse train, that will simply be $\Delta_{T_s}(\Omega) = \frac{2\pi}{T_s}\sum_{k=-\infty}^{\infty}\delta(\Omega - k\Omega_s)$.

(Refer Slide Time: 05:24)

So therefore that will simply be $\Delta_{T_s}(\Omega) = \Omega_s\sum_{k=-\infty}^{\infty}\delta(\Omega - k\Omega_s)$. So this is basically the Fourier transform of the impulse train.

(Refer Slide Time: 06:30)

Now what we are doing in the sampling process is that basically we are taking the signal
x(t) and multiplying it with the impulse train. So when you multiply two signals in the
time domain, in the frequency domain the corresponding Fourier transforms are
convolved. So the corresponding spectrum of the sampled signal is therefore given by
the convolution of the Fourier transform of the original signal x(t) with that of the
impulse train.

(Refer Slide Time: 08:01)

So $x_1(t)\,x_2(t) \leftrightarrow \frac{1}{2\pi}X_1(\Omega)*X_2(\Omega)$. Now our sampled signal $x_s(t)$ is the product of the original signal $x(t)$ with the impulse train. So $x_s(t) = x(t)\,\delta_{T_s}(t)$, which implies its Fourier transform is going to be $\frac{1}{2\pi}X(\Omega)*\Delta_{T_s}(\Omega) = \frac{1}{2\pi}X(\Omega)*\Omega_s\sum_{k=-\infty}^{\infty}\delta(\Omega - k\Omega_s) = \frac{1}{T_s}\sum_{k=-\infty}^{\infty}X(\Omega)*\delta(\Omega - k\Omega_s)$.

(Refer Slide Time: 09:24)

So this is simply $X_{T_s}(\Omega) = \frac{1}{T_s}\sum_{k=-\infty}^{\infty}X(\Omega - k\Omega_s)$.

(Refer Slide Time: 11:00)

So this is the spectrum of the sampled signal, and you can see that this is basically a shift-and-add operation: you are adding shifted copies of the original spectrum. These are replicas of the original spectrum $X(\Omega)$, shifted by every integer multiple of the sampling frequency $\Omega_s$, and then all these shifted replicas are added.

(Refer Slide Time: 12:56)

Now consider this pictorially as shown in the slide; we have $X(\Omega) = 0$ for $|\Omega| > \Omega_m$, where $\Omega_m$ is the maximum frequency of the signal. So $X(\Omega)$ is restricted to the band $-\Omega_m$ to $\Omega_m$, and $\Omega_m$ is the bandwidth of the signal.

(Refer Slide Time: 14:53)

Now if you consider the spectrum of the sampled signal, it is basically scaled by $\frac{1}{T_s}$. Let us say that the original spectrum has unity scaling. So the spectrum of the sampled signal consists of shifted replicas of the original spectrum.

(Refer Slide Time: 15:29)

So if you look at this gap: if $\Omega_s - \Omega_m > \Omega_m$ then you will have this gap, which implies that the two spectral copies corresponding to $X(\Omega)$ and $X(\Omega - \Omega_s)$ do not overlap.

(Refer Slide Time: 17:18)

As a result they will not interfere with each other, and therefore there should be this gap, or guard band, between the original spectrum and its various shifted copies; the condition that has to be satisfied is $\Omega_s - \Omega_m \ge \Omega_m \Rightarrow \Omega_s \ge 2\Omega_m$. This quantity $\Omega_s$ is the sampling frequency and $\Omega_m$ is the maximum frequency, so if this is satisfied, then there will be no interference from spectral overlap.

(Refer Slide Time: 18:46)

If this condition is not satisfied, that is, if $\Omega_s - \Omega_m < \Omega_m$, then there will be overlap, and that causes distortion, which is termed aliasing.

(Refer Slide Time: 19:56)

So when these spectral copies superimpose on each other, that leads to distortion of the original spectrum, and then you cannot recover the original message spectrum by any kind of filtering operation. Hence this is termed aliasing, and this criterion for no aliasing is termed the Nyquist sampling theorem, or the Nyquist criterion for no aliasing. So we need $\Omega_s \ge 2\Omega_m$ for no distortion.
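A small illustration of aliasing (my own addition; the frequencies are arbitrary): sampling a 30 Hz cosine at 100 Hz respects the Nyquist criterion, while a 70 Hz cosine sampled at the same rate produces exactly the same samples, since 70 = 100 - 30.

```python
import numpy as np

fs = 100.0                              # sampling frequency in Hz (Omega_s = 2*pi*fs)
n = np.arange(50)
t = n / fs                              # sampling instants

x_good = np.cos(2 * np.pi * 30 * t)     # 30 Hz < fs/2, no aliasing
x_alias = np.cos(2 * np.pi * 70 * t)    # 70 Hz > fs/2, aliases onto 100 - 70 = 30 Hz

print(np.allclose(x_good, x_alias))     # True: the two sets of samples are identical
```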

(Refer Slide Time: 21:02)

And this is the Nyquist sampling theorem, so for no overlap we need to have this
sampling frequency to be greater than or equal to twice the maximum frequency of the
signal and this 2m is termed as the Nyquist rate which is basically the minimum rate at
which the signal has to be sampled so that there is no distortion. This is one of the key
properties which has to be kept in mind when converting a signal from the continuous
time to the discrete time domain. So I will stop here and continue in the subsequent
modules. Thank you very much.

Principles of Signals and Systems
Prof. Aditya K. Jagannatham
Department of Electrical Engineering
Indian Institute of Technology, Kanpur

Lecture - 53
Sampling: Reconstruction from Sampled Signal

Keywords: Reconstruction from Sampled Signal

Hello, welcome to another module in this massive open online course. So we are looking
at sampling and in particular we are looking at impulse train sampling to convert a
continuous time analog signal to a discrete time signal and we are multiplying the signal
x(t) by an impulse train, we have also derived an expression for the spectrum of the
sampled signal in terms of the spectrum of the original signal.

(Refer Slide Time: 00:39)

So we have seen that $x(t)$ is the original continuous time analog signal and has spectrum $X(\Omega)$. The spectrum of the sampled signal is $X_{T_s}(\Omega)$, and this is basically given as $X_{T_s}(\Omega) = \frac{1}{T_s}\sum_{k=-\infty}^{\infty}X(\Omega - k\Omega_s)$.

(Refer Slide Time: 02:04)

So that is adding over all shifts, and the original spectrum extends from $-\Omega_m$ to $\Omega_m$, where $\Omega_m$ is the maximum frequency.

(Refer Slide Time: 03:05)

Now, if we sample at rate $\Omega_s$, the sampled signal will have a spectrum that looks as shown in the slide, with all the copies of the original spectrum; and if $\Omega_s - \Omega_m > \Omega_m$, this implies that there is a guard band between copies, which implies no interference or distortion.

(Refer Slide Time: 05:27)

So there is no distortion or aliasing; aliasing means basically that some other frequency is trying to pose as the original frequency. So this implies no aliasing if $\Omega_s \ge 2\Omega_m$, and this is called the Nyquist criterion.

(Refer Slide Time: 06:45)

On the other hand, if there is no guard band, which happens when $\Omega_s < 2\Omega_m$, there is overlap, which implies distortion when you sum the copies. This implies there is aliasing. If you have aliasing you will not be able to recover the original spectrum, and hence we need to find out how to reconstruct the original message signal from the sampled signal.

(Refer Slide Time: 08:50)

So by reconstruction we mean basically retrieve the original signal from the sampled
signal. So we are talking about perfect reconstruction such that the original signal is
equal to the reconstructed signal and this is only possible when there is no aliasing.

(Refer Slide Time: 10:25)

So we have the copies of the spectrum, which are not aliased, and we use an ideal low pass filter to extract the spectral components corresponding to the original spectrum. The cut-off is $\Omega_c$, and I can choose $\Omega_c$ such that $\Omega_m \le \Omega_c \le \Omega_s - \Omega_m$. For this to be possible, $\Omega_s \ge 2\Omega_m$; that is, when there is a gap between those two bands we can design a low pass filter such that it extracts only the original spectral band corresponding to $x(t)$, with suitable scaling, while it suppresses the other shifted spectral copies of the original signal $x(t)$.

(Refer Slide Time: 13:34)

So when we pass the sampled signal through the ideal LPF, the resulting spectrum of the output is simply the multiplication of the spectrum of the input, that is, the sampled signal, with that of the low pass filter, which extracts only the original message spectrum in the band $-\Omega_m$ to $\Omega_m$, because the cutoff frequency is greater than or equal to $\Omega_m$. Now, without loss of generality, we can choose $\Omega_c = \Omega_m$.

(Refer Slide Time: 15:20)

So the scaling of each spectral replica is $\frac{1}{T_s}$. Now let us call the spectrum of the ideal LPF $H(\Omega) = \begin{cases} 1, & |\Omega| \le \Omega_m \\ 0, & \text{otherwise} \end{cases}$.


(Refer Slide Time: 16:51)

Now we know that if you look at the pulse in frequency of width $2a$ centered at 0 and of height 1, call it $P_{2a}(\Omega)$, then the corresponding time-domain signal is $\frac{\sin(at)}{\pi t}$, that is, $\frac{\sin(at)}{\pi t} \leftrightarrow P_{2a}(\Omega)$.

(Refer Slide Time: 17:49)

Now this implies that if you consider the pulse of width $2\Omega_m$ centered at 0, $P_{2\Omega_m}(\Omega)$, it has the impulse response $\frac{\sin(\Omega_m t)}{\pi t}$, which implies that if you multiply this by $T_s$, we have $T_s P_{2\Omega_m}(\Omega) \leftrightarrow \frac{T_s\sin(\Omega_m t)}{\pi t}$, and this is the impulse response of the ideal LPF required for reconstruction. So you pass $x_s(t)$ through an ideal LPF with cut-off frequency $\Omega_c = \Omega_m$ and gain $T_s$.

(Refer Slide Time: 19:53)

So let us denote the reconstructed signal by $x_R(t)$, which is $x_R(t) = \left(\sum_{k=-\infty}^{\infty}x(kT_s)\,\delta(t - kT_s)\right) * \frac{T_s\sin(\Omega_m t)}{\pi t}$. Now we are choosing the Nyquist rate, that is, $\Omega_s = 2\Omega_m \Rightarrow \frac{2\pi}{T_s} = 2\Omega_m \Rightarrow \Omega_m = \frac{\pi}{T_s} \Rightarrow \Omega_m T_s = \pi$. So here $T_s$ is the sampling interval between consecutive samples.

(Refer Slide Time: 22:10)

And in this case we have the reconstructed signal as $x_R(t) = \sum_{k=-\infty}^{\infty}x(kT_s)\,\delta(t - kT_s) * \frac{\sin(\Omega_m t)}{\Omega_m t}$, and you can take the sinc function inside the summation. That gives $x_R(t) = \sum_{k=-\infty}^{\infty}x(kT_s)\,\frac{\sin(\Omega_m t - k\pi)}{\Omega_m t - k\pi}$. So this is the reconstructed signal, and you can see that this is basically taking the samples and interpolating them: we are performing a linear combination of the samples $x(kT_s)$, interpolating them with a sinc filter. Now you can realize that this reconstructed signal is equal to the original signal only if there is no aliasing, which implies $\Omega_s \ge 2\Omega_m$.
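A numerical sketch of this interpolation formula (my own addition; the band-limited test signal and grids are arbitrary): np.sinc implements sin(pi u)/(pi u), which matches the kernel above when Omega_m Ts = pi.

```python
import numpy as np

f_m = 10.0                      # maximum frequency in Hz, Omega_m = 2*pi*f_m
fs = 2 * f_m                    # sample exactly at the Nyquist rate
Ts = 1 / fs

def x(t):                       # band-limited test signal (components at 3 Hz and 7 Hz, both below f_m)
    return np.sin(2 * np.pi * 3 * t) + 0.5 * np.cos(2 * np.pi * 7 * t)

k = np.arange(-400, 401)        # enough samples that truncation error is small in the middle
samples = x(k * Ts)

t = np.linspace(-1, 1, 501)     # reconstruction grid
# x_R(t) = sum_k x(kTs) * sinc((t - kTs)/Ts)
x_rec = np.array([np.sum(samples * np.sinc((ti - k * Ts) / Ts)) for ti in t])
print("max reconstruction error:", np.max(np.abs(x_rec - x(t))))
```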

(Refer Slide Time: 25:36)

Here perfect reconstruction is possible when there is no aliasing. So you sample it at a


sampling rate that is greater than or equal to Nyquist rate, then pass it through an ideal
low pass filter which satisfies the given cutoff frequency requirements and from that you
can reconstruct the original signal by extracting the spectrum corresponding to the
original signal and suppressing all the spectral copies, all the aliases that arise because of
the sampling process.

So that completes our discussion of the sampling as well as the reconstruction process
which as we have said is one of the fundamental aspects of signal processing and
communications since most of the processing is digital. So I will stop here and continue
with other aspects in the subsequent modules. Thank you very much.

Principles of Signals and Systems
Prof. Aditya K. Jagannatham
Department of Electrical Engineering
Indian Institute of Technology, Kanpur

Lecture - 54
Fourier Analysis of Discrete Time Signals and Systems – Introduction

Keywords: Fourier Analysis of Discrete Time Signals and Systems

Hello, welcome to another module in this massive open online course. In this module, we
are going to start looking at the Fourier analysis for discrete time signals and systems.

(Refer Slide Time: 00:40)

So far we have looked at the Fourier analysis for continuous time periodic as well as
aperiodic signals, starting with this module we are going to start looking at discrete time
periodic as well as aperiodic signals. Now we are going to start with the Discrete Fourier
series which is defined for a discrete time periodic signal, similar to the complex
exponential Fourier series and the trigonometric Fourier series that are defined for
continuous time periodic signals.

(Refer Slide Time: 02:35)

We consider a discrete time signal with period $N_0$ such that $x(n + N_0) = x(n)$ for all $n$; then we say that the discrete time signal is periodic with period $N_0$.

(Refer Slide Time: 03:48)

For example, we have seen the exponential signal $x(n) = e^{j\Omega_0 n}$, where $\Omega_0 = \frac{2\pi}{N_0}$. This is a periodic signal with period $N_0$, that is, $x(n + kN_0) = e^{j\frac{2\pi}{N_0}(n+kN_0)} = e^{j\frac{2\pi}{N_0}n}\,e^{j2\pi k} = e^{j\frac{2\pi}{N_0}n}$, where $k$ is any integer and $e^{j2\pi k} = 1$ for all $k$.

(Refer Slide Time: 04:44)

So this is simply $e^{j\Omega_0 n}$, which is basically $x(n)$, which means that this is a periodic signal and the period equals $N_0$.

(Refer Slide Time: 05:50)

(Refer Slide Time: 06:25)

Now the Discrete Fourier Series representation is given as follows. Consider a signal $x(n)$ whose fundamental period, that is, the smallest period, is $N_0$; then this can be expressed as $x(n) = \sum_{k=0}^{N_0-1}C_k\,e^{jk\Omega_0 n}$, and these $C_k$'s are basically the coefficients of the Discrete Fourier Series.

(Refer Slide Time: 07:36)

So you can see that this can be expressed as a sum of a finite number of complex exponentials, unlike the complex exponential Fourier series, which is the sum of an infinite number of complex exponentials at the fundamental frequency and its harmonics. So this can be expressed as a sum of a finite number of complex exponentials with frequencies $0, \frac{2\pi}{N_0}, \frac{4\pi}{N_0}$, and so on, up to $(N_0-1)\frac{2\pi}{N_0}$. So these form a discrete set of frequencies.

(Refer Slide Time: 09:20)

(Refer Slide Time: 10:36)

Consider $\sum_{n=0}^{N_0-1}x(n)\,e^{-jl\Omega_0 n}$; now, substituting the expression for the DFS, this equals $\sum_{n=0}^{N_0-1}\sum_{k=0}^{N_0-1}C_k\,e^{jk\Omega_0 n}e^{-jl\Omega_0 n}$.


(Refer Slide Time: 11:33)

N0 1 N0 1
Now interchanging the summations, we have C e
k 0
k
n 0
j0 ( k l ) n
. Now you can see

N0 1
for k  l ,  e j0 ( k l ) n  0 . So the only case that is remaining is, when k  l , so when
n 0

k  l , this is Cl N0.

625
(Refer Slide Time: 13:46)

N0 1
This means we have N 0Cl   x ( n )e
n 0
 jl 0 n
, which basically implies that the lth

N0 1
1
coefficient of the Discrete Fourier series is Cl 
N0
 x ( n )e
n 0
 jl 0 n
.

(Refer Slide Time: 15:18)

This summation can be taken over any $N_0$ consecutive samples. Finally, you can see the DC coefficient is $C_0 = \frac{1}{N_0}\sum_{n=0}^{N_0-1}x(n)$. So this is the mean of the samples over a period $N_0$.
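In practice the DFS coefficients can be computed with an FFT, since $C_k$ is just the DFT scaled by $1/N_0$; the following sketch (my own addition, with an arbitrary test signal) also checks the synthesis equation:

```python
import numpy as np

N0 = 8
n = np.arange(N0)
x = np.cos(2 * np.pi * n / N0) + 0.5            # one period of a periodic signal

C = np.fft.fft(x) / N0                          # C_k = (1/N0) * sum_n x(n) e^{-j k (2pi/N0) n}
print(np.round(C, 6))                           # C_0 = 0.5 (the mean), C_1 = C_7 = 0.5

# Synthesis check: x(n) = sum_k C_k e^{j k (2pi/N0) n}
k = np.arange(N0)
x_rebuilt = np.array([np.sum(C * np.exp(1j * k * 2 * np.pi / N0 * ni)) for ni in n])
print(np.allclose(x_rebuilt.real, x))           # True
```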

(Refer Slide Time: 16:26)

(Refer Slide Time: 17:37)

Now, if you look at this expression for the discrete Fourier series, it is a finite sum. By a finite sum we mean a summation over a finite number of elements, and the inverse discrete Fourier series relation that gives the coefficients $C_k$ is also a finite sum.

(Refer Slide Time: 18:27)

(Refer Slide Time: 18:43)

So because both sums are finite, convergence is guaranteed. This is unlike the continuous time periodic and continuous time aperiodic cases: for continuous time periodic signals we have a summation over an infinite series, and for continuous time aperiodic signals an integral over an infinite duration, and in both those scenarios it is important to ensure that the relevant summation or integral converges. However, since in this case both summations are finite, the convergence is guaranteed.

(Refer Slide Time: 19:57)

N0 1
1
You can see that Ck 
N0
 x ( n )e
n 0
 jk 0 n
,

N0 1 N0 1
1 1
Ck  N0 
N0
 x(n)e j (k  N0 )0n 
n 0 N0
 x ( n )e
n 0
 jk 0 n  j 2 n
e and e j 2 n  1 therefore, this

N0 1
1
again reduces to Ck  N0 
N0
 x(n)e
n 0
 jk 0 n
, which is nothing but Ck.

(Refer Slide Time: 21:13).

(Refer Slide Time: 21:46)

This is for any general k implies that the discrete Fourier series coefficients are periodic.

(Refer Slide Time: 22:37)

So we have a discrete time periodic signal and the corresponding discrete Fourier series
coefficients in the frequency domain are also periodic and the period incidentally is the
same. These quantities are known as the spectral coefficients of x(n).

So in this module we have started looking at the discrete Fourier series or the Fourier
analysis for discrete time signals and we have started with the discrete Fourier series for
periodic signals. So we will stop here and continue in the subsequent modules. Thank
you very much.

Principles of Signals and Systems
Prof. Aditya K. Jagannatham
Department of Electrical Engineering
Indian Institute of Technology, Kanpur

Lecture - 55
Fourier Analysis of Discrete Time Signals – Duality, Parseval’s Theorem

Keywords: Duality, Parseval’s Theorem

Hello, welcome to another module in this massive open online course. So you are
looking at the Fourier analysis for discrete time signals

(Refer Slide Time: 00:27)

(Refer Slide Time: 01:18)

We are looking at the Discrete Fourier Series, where a periodic discrete time signal $x(n)$ can be expressed as $x(n) = \sum_{k=0}^{N_0-1}C_k\,e^{jk\Omega_0 n}$. Similarly, from the inverse relation, the coefficients $C_k$ can be derived as $C_k = \frac{1}{N_0}\sum_{n=0}^{N_0-1}x(n)\,e^{-jk\Omega_0 n}$. Let us now look at another interesting aspect of this transform, which is basically the duality property of the discrete Fourier series.

(Refer Slide Time: 02:38)

N0 1
So we have x(n)  C e
k 0
k
jk 0 n
. What can we say about the sequence with Ck equals C(n),

remember duality is when a signal x(t) has a Fourier transform X ( ) what is the Fourier
transform of the signal X(t) that is the frequency domain becomes a time domain.
Similarly if the sequence Ck becomes a time domain signal, we already said Ck is
periodic with period N0 implies that the time domain signal C(n) is also periodic in time
with again period N0.

(Refer Slide Time: 04:36)

If you consider $x(N_0 - n) = \sum_{k=0}^{N_0-1}C_k\,e^{jk\Omega_0(N_0 - n)} = \sum_{k=0}^{N_0-1}C_k\,e^{jk2\pi}\,e^{-jk\Omega_0 n} = \sum_{k=0}^{N_0-1}C_k\,e^{-jk\Omega_0 n}$.


(Refer Slide Time: 06:07)

So now we write $C_k$ as $C(n)$ and $x(n)$ as $x_k$; then we have $x_{N_0-k} = \frac{1}{N_0}\sum_{n=0}^{N_0-1}N_0\,C(n)\,e^{-jk\Omega_0 n}$. Now, you can see that this has exactly the form of the analysis equation of the discrete Fourier series, where the time domain sequence $N_0\,C(n)$ has the discrete Fourier series coefficients $x(N_0 - k) = x_{-k}$.

(Refer Slide Time: 07:59)

Equivalently, $C(n) \leftrightarrow \frac{1}{N_0}\,x_{-k}$, or, going back to the original notation, the sequence $C_n$ has DFS coefficients $\frac{1}{N_0}\,x(-k)$. This is the duality property: if the $C_k$'s become a time domain signal, then the corresponding discrete Fourier series coefficients are given by $\frac{1}{N_0}\,x(-k)$. So this basically is the duality property.

(Refer Slide Time: 10:04)

Now let us consider the property of the discrete Fourier series for a real signal $x(n)$, which implies $x^*(n) = x(n)$.

(Refer Slide Time: 11:16)

N0 1
1
Then we have from the inverse discrete Fourier series Ck 
N0
 x ( n )e
n 0
 jk 0 n
. Now

N0 1 N0 1
1 1
CN0  k 
N0
 x(n)e j ( N0 k )0n 
n 0 N0
 x ( n )e
n 0
 j0 n
.

634
(Refer Slide Time: 12:24)

N0 1
1
Now this means C  N0  k 
N0
 x (n)e
n 0
  j0 n
.

(Refer Slide Time: 13:50)

But $x^*(n) = x(n)$ for a real signal, so this is simply $C^*_{N_0-k} = C_k$. Now $C_{N_0-k} = C_{-k}$, since the coefficients are periodic, so $C^*_{-k} = C_k$. So basically that means the magnitude of the coefficients has an even symmetry.

(Refer Slide Time: 15:48)

Parseval's theorem for the discrete Fourier series is $\sum_{k=0}^{N_0-1}|C_k|^2 = \sum_{k=0}^{N_0-1}\left|\frac{1}{N_0}\sum_{n=0}^{N_0-1}x(n)\,e^{-jk\Omega_0 n}\right|^2$. This is $\sum_{k=0}^{N_0-1}\frac{1}{N_0^2}\left(\sum_{n=0}^{N_0-1}x(n)\,e^{-jk\Omega_0 n}\right)\left(\sum_{m=0}^{N_0-1}x^*(m)\,e^{jk\Omega_0 m}\right) = \frac{1}{N_0^2}\sum_{k=0}^{N_0-1}\sum_{n=0}^{N_0-1}\sum_{m=0}^{N_0-1}x(n)\,x^*(m)\,e^{-jk(n-m)\Omega_0}$.

(Refer Slide Time: 17:19)

(Refer Slide Time: 19:17)

So I will bring the summation with respect to $k$ inside. That gives $\frac{1}{N_0^2}\sum_{n=0}^{N_0-1}\sum_{m=0}^{N_0-1}x(n)\,x^*(m)\sum_{k=0}^{N_0-1}e^{-jk(n-m)\Omega_0}$. Now $\sum_{k=0}^{N_0-1}e^{-jk(n-m)\Omega_0} = \begin{cases} 0, & n \ne m \\ N_0, & n = m \end{cases} = N_0\,\delta(n-m)$.

(Refer Slide Time: 21:19)

N0 1 N0 1 N0 1
1 1
  x(n) x (m)  N 0 (n  m) 

 x ( n)
2
So this is . So this is
N0 n 0 m0 N0 n 0

N0 1 N0 1
1
 Ck   x ( n)
2 2
and hence the Parseval’s relation. So this basically completes
k 0 N0 n 0

our discussion of the discrete Fourier series.
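This relation is easy to verify numerically (my own addition; the random test sequence is arbitrary):

```python
import numpy as np

N0 = 16
rng = np.random.default_rng(0)
x = rng.standard_normal(N0)                 # one period of an arbitrary real periodic signal

C = np.fft.fft(x) / N0                      # DFS coefficients
lhs = np.sum(np.abs(C)**2)
rhs = np.sum(np.abs(x)**2) / N0
print(lhs, rhs, np.isclose(lhs, rhs))       # the two sides agree
```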

(Refer Slide Time: 23:15)

So we have looked at the discrete Fourier series representation of a periodic discrete time
signal with fundamental period N0 and we have looked at its several properties. So we
will stop here and start with another transform which is the discrete time Fourier
transform for discrete time aperiodic signals in the subsequent module. Thank you very
much.

Principles of Signals and Systems
Prof. Aditya K. Jagannatham
Department of Electrical Engineering
Indian Institute of Technology, Kanpur

Lecture – 56
Discrete Time Fourier Transform: Definition, Inverse DTFT,
Convergence, Relation between DTFT and z-Transform, DTFT of Common Signals

Keywords: Discrete Time Fourier Transform, Inverse DTFT, Convergence

Hello welcome to another module in this massive open online course. So far we are
looking at the Fourier analysis of discrete time signals particularly periodic discrete time
signals and we have looked at the Fourier series representation of such signals.

(Refer Slide Time: 00:43)

In this module we will start looking at the Fourier analysis for discrete time aperiodic
signals through the discrete time Fourier transform. So we will discuss the discrete time
Fourier transform and its various properties. So this is also abbreviated as the DTFT and
this is basically used for the Fourier analysis of discrete time aperiodic signals.

(Refer Slide Time: 01:30)

(Refer Slide Time: 02:10)

Consider a discrete time aperiodic sequence $x(n)$; the DTFT of this is defined as $X(\Omega) = \sum_{n=-\infty}^{\infty}x(n)\,e^{-j\Omega n}$.

(Refer Slide Time: 03:00)

(Refer Slide Time: 03:40)

To find the inverse DTFT we start with $\int_{-\pi}^{\pi}X(\Omega)\,e^{j\Omega k}\,d\Omega$. This can be simplified as $\int_{-\pi}^{\pi}\sum_{n=-\infty}^{\infty}x(n)\,e^{-j\Omega n}\,e^{j\Omega k}\,d\Omega$. Now, interchanging the integral and the summation as usual, we have $\sum_{n=-\infty}^{\infty}x(n)\int_{-\pi}^{\pi}e^{j\Omega(k-n)}\,d\Omega$, and now you can see this is a complex exponential.

(Refer Slide Time: 05:30)

So $e^{j\Omega(k-n)}$, as a function of $\Omega$, is basically a harmonic, that is, a multiple of the fundamental frequency, which is 1. So if you integrate it over any fundamental period, that is, over $-\pi$ to $\pi$, the integral is going to be 0, except when $k = n$, in which case $e^{j\Omega(k-n)} = 1$ and the integral is equal to $2\pi$.

(Refer Slide Time: 07:00)

So the integral only survives when $k = n$, and in that case it extracts the element $x(k)$: we have $2\pi\,x(k) = \int_{-\pi}^{\pi}X(\Omega)\,e^{j\Omega k}\,d\Omega$, which implies that the inverse discrete time Fourier transform is given as $x(k) = \frac{1}{2\pi}\int_{-\pi}^{\pi}X(\Omega)\,e^{j\Omega k}\,d\Omega$.

(Refer Slide Time: 08:11)

Let us look at some of the properties of this DTFT; let us first start with the spectrum. The spectrum $X(\Omega)$ is in general complex, and I can represent $X(\Omega)$ as $|X(\Omega)|\,e^{j\theta(\Omega)}$. The quantity $|X(\Omega)|$ is the magnitude spectrum and the quantity $\theta(\Omega)$ is the phase spectrum.

(Refer Slide Time: 09:56)

Now if $x(n)$ is real we have $X(\Omega) = X^*(-\Omega)$, similar to what we have seen for the Fourier transform. This further implies that $|X(\Omega)| = |X(-\Omega)|$, because they are conjugates of each other. So basically this means that the magnitude spectrum for real signals is even. Further, you can see that the phase spectrum exhibits odd symmetry.

(Refer Slide Time: 11:11)

Now the condition for convergence is: $X(\Omega)$ converges if $\sum_{n=-\infty}^{\infty}|x(n)| < \infty$, that is, if this sum is a finite quantity, which means the sequence is absolutely summable.

(Refer Slide Time: 12:54)

Let us look at the relationship between the Fourier transform and the z-transform. The DTFT is given as $\sum_{n=-\infty}^{\infty}x(n)\,e^{-j\Omega n}$ and the z-transform is $X(z) = \sum_{n=-\infty}^{\infty}x(n)\,z^{-n}$.

(Refer Slide Time: 14:14)

This implies, as you can see, that if we substitute $z = e^{j\Omega}$ in the z-transform, then the z-transform becomes the DTFT. But remember, for this to be valid $z = e^{j\Omega}$ has to be in the ROC of the z-transform; only then can we substitute $z = e^{j\Omega}$ in the z-transform to derive the DTFT. So it is very important that the DTFT exists in this way only if $e^{j\Omega}$ is in the region of convergence of the z-transform, which implies that the ROC has to contain the unit circle.

(Refer Slide Time: 15:39)

(Refer Slide Time: 16:50)

(Refer Slide Time: 17:00)

You can see that the ROC contains the unit circle if the sequence is absolutely summable. For instance, $\left|\sum_{n=-\infty}^{\infty}x(n)\,e^{-j\Omega n}\right| \le \sum_{n=-\infty}^{\infty}|x(n)|\,|e^{-j\Omega n}|$, but $|e^{-j\Omega n}| = 1$, so this is finite if the sequence is absolutely summable.

(Refer Slide Time: 18:26)

The substitution of $z = e^{j\Omega}$ to yield the DTFT from the z-transform does not generally hold true if the sequence $x(n)$ is not absolutely summable.

(Refer Slide Time: 19:54)

(Refer Slide Time: 20:30)

So the discrete time impulse $\delta(n)$ equals 1 for $n = 0$ and is 0 for $n \ne 0$, and the DTFT is simply $\sum_{n=-\infty}^{\infty}\delta(n)\,e^{-j\Omega n}$, where the summand is non-zero only for $n = 0$.

(Refer Slide Time: 21:45)

So $\delta(n)$, that is, the unit impulse, has a DTFT which is unity. Let us look at the causal exponential signal; we consider a decaying exponential that is real, $x(n) = a^n u(n)$ with $|a| < 1$.

(Refer Slide Time: 23:00)

Now for this the DTFT is $X(\Omega) = \sum_{n=-\infty}^{\infty}a^n u(n)\,e^{-j\Omega n}$, and $u(n)$ is non-zero only for $n \ge 0$, where it is 1; so this simply reduces to $\sum_{n=0}^{\infty}a^n e^{-j\Omega n} = \sum_{n=0}^{\infty}\left(a\,e^{-j\Omega}\right)^n$, which is $\frac{1}{1 - a\,e^{-j\Omega}}$.

(Refer Slide Time: 23:50)

And further, you can see that the z-transform of this is given as $\frac{1}{1 - az^{-1}}$ with ROC $|z| > |a|$. Now $|z| > |a|$ with $|a| < 1$ implies that the ROC includes the unit circle.

(Refer Slide Time: 24:51)

(Refer Slide Time: 25:30)

(Refer Slide Time: 25:47)

(Refer Slide Time: 27:16)

And therefore, if you now replace $z$ by $e^{j\Omega}$, we obtain $\frac{1}{1 - a\,e^{-j\Omega}}$, which is the DTFT.
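A quick numerical check of this pair by truncating the sum (my own addition; a = 0.8 is an arbitrary choice):

```python
import numpy as np

a = 0.8
n = np.arange(0, 200)                       # |a|^200 is negligible, so truncation is safe
x = a**n

Omega = np.linspace(-np.pi, np.pi, 7)
X_numeric = np.array([np.sum(x * np.exp(-1j * W * n)) for W in Omega])
X_exact = 1.0 / (1 - a * np.exp(-1j * Omega))
print(np.max(np.abs(X_numeric - X_exact)))  # tiny truncation error
```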
1  ae j

(Refer Slide Time: 28:04)

Now, on the other hand, if you look at the unit step signal, you can see that it is not absolutely summable: $\sum_{n=-\infty}^{\infty}|u(n)| = \sum_{n=0}^{\infty}1$, which is not a finite quantity, which means the ROC does not contain the unit circle.

(Refer Slide Time: 29:07)

Hence you cannot obtain the DTFT by simple substitution of $z = e^{j\Omega}$.

(Refer Slide Time: 30:20)

Hence, if you take $\frac{1}{1 - e^{-j\Omega}}$, that is not equal to $X(\Omega)$. In fact, you can show that $X(\Omega)$ for the unit step signal is given as something slightly different, that is, $X(\Omega) = \pi\delta(\Omega) + \frac{1}{1 - e^{-j\Omega}}$. So this is the DTFT of the discrete time unit step signal, that is, it contains this additional term $\pi\delta(\Omega)$.

So in this module we have looked at the definition of the DTFT, the inverse DTFT and
also the convergence issues related to the DTFT, and the DTFT of some of the commonly occurring discrete time signals. So we will stop here and continue in subsequent
modules. Thank you very much.

Principles of Signals and Systems
Prof. Aditya K. Jagannatham
Department of Electrical Engineering
Indian Institute of Technology, Kanpur

Lecture – 57
Discrete Time Fourier Transform: Properties of DTFT - Linearity, Time Shifting,
Frequency Shifting, Conjugation, Time-Reversal, Duality

Keywords: Linearity, Time Shifting, Frequency Shifting, Conjugation, Time-Reversal, Duality

Hello welcome to another module in this massive open online course.

(Refer Slide Time: 00:30)

So we are looking at the Fourier analysis of discrete time aperiodic signals through the
discrete time Fourier transform. So now let us look at the properties of the DTFT.

(Refer Slide Time: 01:14)

Now, one of the most fundamental properties of the DTFT is that the DTFT is periodic, that is, $X(\Omega + 2\pi) = X(\Omega)$. This can be seen as follows: $X(\Omega) = \sum_{n=-\infty}^{\infty}x(n)\,e^{-j\Omega n}$.

(Refer Slide Time: 02:02)

And $X(\Omega + 2\pi) = \sum_{n=-\infty}^{\infty}x(n)\,e^{-j(\Omega+2\pi)n} = \sum_{n=-\infty}^{\infty}x(n)\,e^{-j\Omega n}\,e^{-j2\pi n} = \sum_{n=-\infty}^{\infty}x(n)\,e^{-j\Omega n}$, which is nothing but $X(\Omega)$.

(Refer Slide Time: 03:15)

(Refer Slide Time: 03:35)

Let us look at one of the simplest properties, which is obeyed by all the transforms that we have seen before, and that is linearity: if $x_1(n) \leftrightarrow X_1(\Omega)$ and $x_2(n) \leftrightarrow X_2(\Omega)$, then $a_1 x_1(n) + a_2 x_2(n) \leftrightarrow a_1 X_1(\Omega) + a_2 X_2(\Omega)$, where $a_1, a_2$ are constants.

(Refer Slide Time: 05:00)

Let us say $x(n)$ has the DTFT $X(\Omega)$. When you time-shift by $n_0$, the DTFT of $x(n - n_0)$, call it $\tilde{X}(\Omega)$, is $\sum_{n=-\infty}^{\infty}x(n - n_0)\,e^{-j\Omega n}$; set $n - n_0 = m$, which implies $n = m + n_0$.

(Refer Slide Time: 05:52)

So this is $\sum_{m=-\infty}^{\infty}x(m)\,e^{-j\Omega(m+n_0)}$; if you take $e^{-j\Omega n_0}$ outside, you have $e^{-j\Omega n_0}\sum_{m=-\infty}^{\infty}x(m)\,e^{-j\Omega m}$, and this sum is $X(\Omega)$; therefore $\tilde{X}(\Omega) = e^{-j\Omega n_0}X(\Omega)$.

(Refer Slide Time: 06:54)

So when there is a shift in time that is basically a modulation in frequency.

(Refer Slide Time: 07:30)

Similarly, we have frequency shifting: we have $x(n)$ with Fourier transform $X(\Omega)$, and we need to find the DTFT of $e^{j\Omega_0 n}x(n)$, that is, a modulated signal. Modulation in time leads to a corresponding shift in frequency, so this leads to $X(\Omega - \Omega_0)$, a shift in frequency by $\Omega_0$.

(Refer Slide Time: 08:58)

So modulation in time implies a shift in the frequency domain. Next, conjugation: if $x(n)$ has DTFT $X(\Omega)$, then $x^*(n)$ has DTFT $X^*(-\Omega)$, and therefore this naturally implies that if $x(n)$ is real, then $x(n) = x^*(n)$, and the corresponding DTFTs are equal, that is, $X(\Omega) = X^*(-\Omega)$.

(Refer Slide Time: 10:33)

Similarly, one can look at time reversal: if $x(n)$ has DTFT $X(\Omega)$, then $x(-n)$ has DTFT $\tilde{X}(\Omega) = \sum_{n=-\infty}^{\infty}x(-n)\,e^{-j\Omega n}$; now, setting $m = -n$, this becomes $\sum_{m=-\infty}^{\infty}x(m)\,e^{j\Omega m}$, that is, $X(-\Omega)$.

(Refer Slide Time: 11:37)

So basically $x(-n)$ has the Fourier transform $X(-\Omega)$.

(Refer Slide Time: 12:47)

Another interesting property is that of time scaling. While most properties are similar to the continuous time properties, this one is not, and the reason is as follows: if a continuous time signal $x(t)$ has the Fourier transform $X(\Omega)$, then $x(at)$ has the Fourier transform $\frac{1}{|a|}X\left(\frac{\Omega}{a}\right)$.

(Refer Slide Time: 14:20)

However, this is not true for discrete time signals that is if you have x(n) with DTFT
X(  ) then x(2n) will not have the DTFT as we obtained in the previous case.

(Refer Slide Time: 14:47)

(Refer Slide Time: 15:56)

However, if I define a signal $x_{(m)}(n) = \begin{cases} x\left(\frac{n}{m}\right), & \text{if } n = km \text{ for some integer } k \\ 0, & \text{if } n \ne km \end{cases}$

(Refer Slide Time: 17:15)

And, for instance, this signal can be visualized as follows. Let us take a simple example with the samples x(0), x(1), x(2), x(3), x(-3), and so on. If I consider the signal x2(n): x2(0) is simply x(0); x2(1), because 1 is not a multiple of 2, is 0; x2(2) is x(1); similarly x2(3) is 0 and x2(4) is x(2); and similarly on the negative axis. This is the upsampled version of x(n).

(Refer Slide Time: 18:50)

So you are inserting zeros in between and you are spreading the signal out: x2(0) is x(0), x2(1) is 0, x2(2) is x(1), x2(3) is 0, x2(4) is x(2), and so on. So we are spreading the signal out and basically inserting zeros in between; this is known as upsampling by a factor of 2.

(Refer Slide Time: 20:00)

Now when you upsample a signal, that is, form $x_{(m)}(n)$, the resulting Fourier transform is $X(m\Omega)$; that is, in the frequency domain the spectrum shrinks by a factor of $m$.

(Refer Slide Time: 20:43)

X(  ) is periodic, let us try to plot it.

(Refer Slide Time: 21:23)

Now x2(n) will have a Fourier transform that looks something like as in the slide. You can see that you have replicas of the same spectrum: replicas of the original spectrum within $-\pi$ to $\pi$, and the number of such replicas is equal to the upsampling factor. Now when you downsample, that is, the opposite of upsampling, where you compress the signal in the time domain, it is slightly more difficult to derive the DTFT of the resulting signal, because that results in a loss of samples. When you shrink the time domain signal there is a certain loss of information occurring.
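This spectrum compression can be seen numerically by zero-stuffing a sequence (my own sketch; the test sequence and frequency grid are arbitrary):

```python
import numpy as np

def dtft(x, n, Omega):
    """Evaluate the DTFT of a finite sequence x defined at indices n on a grid Omega."""
    return np.array([np.sum(x * np.exp(-1j * W * n)) for W in Omega])

n = np.arange(0, 64)
x = 0.9**n                                  # a decaying test sequence

m = 2
n_up = np.arange(0, 64 * m)
x_up = np.zeros(64 * m)
x_up[::m] = x                               # insert zeros: x_m(n) = x(n/m) for n a multiple of m

Omega = np.linspace(-np.pi, np.pi, 1001)
X_up = dtft(x_up, n_up, Omega)
print(np.max(np.abs(X_up - dtft(x, n, m * Omega))))   # X_up(Omega) equals X(m*Omega)
```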

(Refer Slide Time: 25:18)

Duality is another very interesting property. In continuous time, if you have $x(t)$ with Fourier transform $X(\Omega)$, then we know that $X(t)$ has the Fourier transform $2\pi\,x(-\Omega)$. Now in the context of a discrete time signal, it is slightly more involved: $x(n)$ has the DTFT $X(\Omega)$, and $X(t)$ is a periodic signal because $X(\Omega)$ is periodic.

(Refer Slide Time: 26:17).

So we have $X(t + 2\pi) = X(t)$; we cannot use the continuous time Fourier transform, but we can look at the complex exponential Fourier series. We have $X(\Omega) = \sum_{n=-\infty}^{\infty}x(n)\,e^{-j\Omega n}$, which means that if you replace $\Omega$ by $t$ we have $X(t) = \sum_{n=-\infty}^{\infty}x(n)\,e^{-jnt}$, and if you set $m = -n$ then you have $\sum_{m=-\infty}^{\infty}x(-m)\,e^{jmt}$.

(Refer Slide Time: 27:22)

And now you can see that here we have the fundamental frequency $\omega_0$.

(Refer Slide Time: 28:00)

So we have the time period $T_0 = 2\pi$, which means the fundamental frequency is $\omega_0 = \frac{2\pi}{T_0} = 1$, and this can be written as $\sum_{m=-\infty}^{\infty}x(-m)\,e^{jm\omega_0 t}$ with $\omega_0 = 1$. Now you can see this is the complex exponential Fourier series, because you are expressing $X(t)$ as a linear combination of an infinite number of complex exponentials at the fundamental frequency $\omega_0$ and its harmonics.

(Refer Slide Time: 29:11)

So basically you can say that $X(t)$ has the complex exponential Fourier series coefficients given by $x(-m)$; that is the duality result for the DTFT. Because of the asymmetric nature of the DTFT, when you take the frequency domain function as a signal in time, it is a periodic signal.

(Refer Slide Time: 30:17)

And in fact $X(\Omega)$, viewed as a signal X(t), is a continuous time signal but it is going to be a periodic one, and this equivalence is in terms of the complex exponential Fourier series coefficients x(-n), with fundamental period $T_0 = 2\pi$, linear frequency $f_0 = \frac{1}{2\pi}$, and angular frequency $\omega_0 = 2\pi f_0 = 1$ radian per second. This is the duality result for the DTFT.

So basically that completes several properties of the DTFT, starting from the simplest
property linearity, time shifting, modulation, etc., and finally, we have looked at up
sampling of a discrete time signal and also the duality property for the DTFT. So we will
stop here and look at other aspects in the subsequent modules. Thank you very much.

667
Principles of Signals and Systems
Prof. Aditya K. Jagannatham
Department of Electrical Engineering
Indian Institute of Technology, Kanpur

Lecture – 58
Discrete Time Fourier Transform: Properties of DTFT - Differentiation in
Frequency, Difference in Time, Convolution, Multiplication, Parseval’s Relation

Keywords: Differentiation in Frequency, Difference in Time, Convolution, Multiplication, Parseval’s Relation

Hello welcome to another module in this massive open online course. So we are looking
at the Fourier analysis for discrete time aperiodic signals through the discrete time
Fourier transform and we are looking at the properties of the DTFT. So let us continue
our discussion on that.

(Refer Slide Time: 00:30)

So the next property that we want to look at is differentiation in frequency.

668
(Refer Slide Time: 01:34)


So we have x(n) with the DTFT $X(\Omega)$, which means $X(\Omega) = \sum_{n=-\infty}^{\infty} x(n)\,e^{-j\Omega n}$. Then if you differentiate this, $\frac{dX(\Omega)}{d\Omega} = \sum_{n=-\infty}^{\infty} x(n)\,\frac{d}{d\Omega}e^{-j\Omega n}$.

(Refer Slide Time: 02:35)

So $\frac{dX(\Omega)}{d\Omega} = -j\sum_{n=-\infty}^{\infty} n\,x(n)\,e^{-j\Omega n}$, where the summation is the DTFT of $n\,x(n)$.

669
(Refer Slide Time: 03:31)

This implies, if you bring the $j$ to the other side, $j\frac{dX(\Omega)}{d\Omega} = \sum_{n=-\infty}^{\infty} n\,x(n)\,e^{-j\Omega n}$, and therefore we can conclude that $n\,x(n)$ has the DTFT $j\frac{dX(\Omega)}{d\Omega}$.

(Refer Slide Time: 04:16)

So the next property is the difference in time which means if x(n) has the DTFT X(Ω)
what can we say about the difference signal x(n) - x(n-1).

670
(Refer Slide Time: 05:05)

Now if you take the Fourier transform of this you can see it is $(1 - e^{-j\Omega})\,X(\Omega)$.

(Refer Slide Time: 06:18)

Let us now look at another property. Consider the accumulation $y(n) = \sum_{k=-\infty}^{n} x(k)$, that is, you are accumulating all the samples of the signal x(k) up to sample n. You can see this is very similar to the integrator, which integrates the input signal x(t); the corresponding system for an analog or continuous time signal is an integrator. So it integrates or accumulates the signal up to a certain time; in discrete time we are accumulating all the signal samples. If you look at the continuous time counterpart, that is $\int_{-\infty}^{t} x(\tau)\,d\tau$, so this is similar to the integrator.


671
(Refer Slide Time: 07:54)

And now the Fourier transform of $\sum_{k=-\infty}^{n} x(k)$ is given as $\pi X(0)\,\delta(\Omega) + \dfrac{X(\Omega)}{1 - e^{-j\Omega}}$, so that is the accumulation property for a discrete time signal. Let us now proceed to another important property, that is convolution. This is always a very important property in the analysis of signals and systems, because convolution describes the output of a linear time invariant system to any input, either continuous time or discrete time.

(Refer Slide Time: 09:16)

So we want to look at the convolution of two discrete time aperiodic signals. So y(n) is the convolution of $x_1(n)$ and $x_2(n)$, which means $y(n) = \sum_{m=-\infty}^{\infty} x_1(m)\,x_2(n-m)$.

672
(Refer Slide Time: 10:11)

And now we want to find the DTFT of y(n), where $x_1(n)$ has the DTFT $X_1(\Omega)$ and $x_2(n)$ has the DTFT $X_2(\Omega)$.

(Refer Slide Time: 10:30)

  
And now $Y(\Omega) = \sum_{n=-\infty}^{\infty} y(n)\,e^{-j\Omega n}$, which is basically $\sum_{n=-\infty}^{\infty}\sum_{m=-\infty}^{\infty} x_1(m)\,x_2(n-m)\,e^{-j\Omega n}$.

673
(Refer Slide Time: 11:34)

(Refer Slide Time: 12:35)

 
So that basically gives $\sum_{m=-\infty}^{\infty} x_1(m)\,e^{-j\Omega m}\,X_2(\Omega)$, and now if you look at $\sum_{m=-\infty}^{\infty} x_1(m)\,e^{-j\Omega m}$, this is nothing but $X_1(\Omega)$.

674
(Refer Slide Time: 13:20)

And therefore, what we get is that y(n), which is the convolution of $x_1(n)$ and $x_2(n)$, has the DTFT $Y(\Omega) = X_1(\Omega)\,X_2(\Omega)$. So convolution in time implies multiplication in frequency.

(Refer Slide Time: 14:13)

(Refer Slide Time: 15:07)

675
Let us look at multiplication in time; that is, if you look at the product of two signals $x_1(n)\,x_2(n)$, its DTFT is basically the periodic convolution of $X_1(\Omega)$ with $X_2(\Omega)$, which is $\frac{1}{2\pi}\int_{2\pi} X_1(\lambda)\,X_2(\Omega-\lambda)\,d\lambda$.

(Refer Slide Time: 16:55)

Now let us consider a real signal; I can always express a real signal as the sum of even and odd components, $x_e(n)$ and $x_o(n)$ respectively. We have seen that the even component of a signal is $\frac{x(n)+x(-n)}{2}$ and the odd component is $\frac{x(n)-x(-n)}{2}$. Further, the DTFT of x(n), which is in general complex, can be expressed as $A(\Omega) + jB(\Omega)$.

(Refer Slide Time: 18:08)

676
Then it can be shown that the even component has the DTFT $A(\Omega)$, that is, the real part of the DTFT $X(\Omega)$, and the odd component has the DTFT $jB(\Omega)$, which is purely imaginary.

(Refer Slide Time: 19:32)

And further, since we have a real signal, recall that for a real signal $X^*(-\Omega) = X(\Omega)$, where $X(\Omega)$ is the DTFT of the real signal x(n). So this implies that $A(\Omega) + jB(\Omega)$, which is $X(\Omega)$, must be equal to $X^*(-\Omega) = (A(-\Omega) + jB(-\Omega))^*$.

(Refer Slide Time: 20:50)

Now, equating the real and imaginary parts, we have $A(\Omega) = A(-\Omega)$, which implies $A(\Omega)$ is an even function.

677
(Refer Slide Time: 21:52)

And further we also have $B(\Omega) = -B(-\Omega)$, which means this is an odd function. So for a real signal, the real part of the DTFT is an even function and the imaginary part is an odd function.

(Refer Slide Time: 23:13)

So now if x(n) is an odd signal, the even component xe(n)=0.

678
(Refer Slide Time: 24:13)

And this implies $X(\Omega) = X_o(\Omega) = jB(\Omega)$, which is purely imaginary and odd. So if the signal is a real, even signal, its DTFT is purely real and even; if x(n) is a real, odd signal, its DTFT is purely imaginary and odd.

(Refer Slide Time: 25:08)

And then we can look at Parseval's relation. Let y(n) be the product of $x_1(n)$ and $x_2(n)$; then we know that $Y(\Omega)$ is the periodic convolution of $X_1(\Omega)$ and $X_2(\Omega)$, given as $Y(\Omega) = \frac{1}{2\pi}\int_{2\pi} X_1(\lambda)\,X_2(\Omega-\lambda)\,d\lambda$. We also have $\sum_{n=-\infty}^{\infty} y(n)\,e^{-j\Omega n} = Y(\Omega)$.

679
(Refer Slide Time: 26:30)


Now if you set $\Omega = 0$ in this, it implies $\sum_{n=-\infty}^{\infty} y(n) = Y(0)$, which implies $\sum_{n=-\infty}^{\infty} x_1(n)\,x_2(n) = Y(0)$.

(Refer Slide Time: 26:50)

So in fact, you can also write that as $Y(0) = \frac{1}{2\pi}\int_{2\pi} X_1(\lambda)\,X_2(-\lambda)\,d\lambda$. This is nothing but the correlation between the two discrete time sequences $x_1(n)$ and $x_2(n)$: $\sum_{n=-\infty}^{\infty} x_1(n)\,x_2(n) = \frac{1}{2\pi}\int_{2\pi} X_1(\Omega)\,X_2(-\Omega)\,d\Omega$. Now you can think of this as the generalized Parseval's relation.

680
(Refer Slide Time: 28:58)

Now if you set $x_2(n) = x_1^*(n)$, then you have $X_2(\Omega) = X_1^*(-\Omega)$, because the DTFT of the conjugate sequence is $X_1^*(-\Omega)$, which means $X_2(-\Omega) = X_1^*(\Omega)$. So we have $\sum_{n=-\infty}^{\infty} x_1(n)\,x_1^*(n) = \frac{1}{2\pi}\int_{2\pi} |X_1(\Omega)|^2\,d\Omega$, that is, the left hand side equals $|X_1(\Omega)|^2$ integrated over one period and scaled by $\frac{1}{2\pi}$. In fact, because of the division by $2\pi$, you can think of the right hand side as an average of $|X_1(\Omega)|^2$ over one period rather than its total area.

(Refer Slide Time: 29:43)


So this is Parseval's relation, that is, $\sum_{n=-\infty}^{\infty} |x_1(n)|^2 = \frac{1}{2\pi}\int_{2\pi} |X_1(\Omega)|^2\,d\Omega$. So basically that

completes our discussion of the properties of the DTFT. In the subsequent module we
will start looking at the DTFT and its properties with relation to the LTI systems alright.
So we will stop here. Thank you very much.

681
Principles of Signals and Systems
Prof. Aditya K. Jagannatham
Department of Electrical Engineering
Indian Institute of Technology, Kanpur

Lecture – 59
DTFT: Discrete Time LTI Systems- LTI Systems Characterized by Difference
Equations

Keywords: LTI Systems Characterized by Difference Equations

Hello welcome to another module in this massive open online course. So we are looking
at the DTFT for the analysis of discrete time signals and systems and in this module we
will look at the DTFT for the analysis of discrete time LTI systems ok. So we intend to
discuss the applications of the DTFT and in particular we would like to look at the
frequency response.

(Refer Slide Time: 00:33).

(Refer Slide Time: 01:33)

682
So consider a discrete time LTI system with impulse response given by h(n). So the input
to this LTI system is x(n) and output is y(n). And now we know that the output y(n) is
given as the convolution of x(n) and h(n).

(Refer Slide Time: 02:46)

Now taking the DTFT we have $Y(\Omega) = X(\Omega)\,H(\Omega)$, which implies the transfer function $H(\Omega) = \frac{Y(\Omega)}{X(\Omega)}$.

(Refer Slide Time: 03:51)

Now let us say x(n), the input signal, is a complex exponential, that is, $x(n) = e^{j\Omega_0 n}$. The output y(n) equals h(n) convolved with x(n), which is basically $\sum_{k=-\infty}^{\infty} h(k)\,e^{j\Omega_0(n-k)} = \sum_{k=-\infty}^{\infty} h(k)\,e^{j\Omega_0 n}\,e^{-j\Omega_0 k}$; now this $e^{j\Omega_0 n}$ is common and can be taken outside.

(Refer Slide Time: 04:50)

(Refer Slide Time: 05:33)


So that becomes $e^{j\Omega_0 n}\sum_{k=-\infty}^{\infty} h(k)\,e^{-j\Omega_0 k}$, where the sum is simply $H(\Omega_0)$, that is, the DTFT evaluated at $\Omega_0$. So the output is simply $H(\Omega_0)$ times $e^{j\Omega_0 n}$, and therefore $e^{j\Omega_0 n}$ is also known as the eigenfunction or eigensignal of this LTI system, because the output is simply a scalar multiple of the input.

684
(Refer Slide Time: 07:11)

Now if x(n) is periodic, then x(n) can be expressed as a discrete Fourier series: $x(n) = \sum_{k=0}^{N_0-1} C_k\,e^{jk\Omega_0 n}$, where $\Omega_0 = \frac{2\pi}{N_0}$ is the fundamental frequency. So it is expressed as a sum of complex exponentials at the fundamental frequency and its harmonics. Now, from linearity, we know that corresponding to each $e^{jk\Omega_0 n}$ the output of the LTI system is simply $H(k\Omega_0)\,e^{jk\Omega_0 n}$.

(Refer Slide Time: 08:42)

So for each $e^{jk\Omega_0 n}$, the corresponding output is $H(k\Omega_0)\,e^{jk\Omega_0 n}$, which implies that if the input is $x(n) = \sum_{k=0}^{N_0-1} C_k\,e^{jk\Omega_0 n}$, the output, by linearity, is simply $\sum_{k=0}^{N_0-1} C_k\,H(k\Omega_0)\,e^{jk\Omega_0 n}$.

685
(Refer Slide Time: 09:51)

So this is for a periodic signal x(n) with period $N_0 = \frac{2\pi}{\Omega_0}$; that is, you can see that if the input is periodic then the output is also periodic.

(Refer Slide Time: 11:49)

And finally, let us now look at discrete time LTI systems characterized by difference equations; let us say the difference equation is $\sum_{k=0}^{N} a_k\,y(n-k) = \sum_{k=0}^{M} b_k\,x(n-k)$. Now, taking the DTFT, we have $H(\Omega) = \dfrac{Y(\Omega)}{X(\Omega)} = \dfrac{\sum_{k=0}^{M} b_k\,e^{-jk\Omega}}{\sum_{k=0}^{N} a_k\,e^{-jk\Omega}}$. So that is the transfer function, and taking the inverse DTFT of this gives the impulse response.
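As a sketch of this step, the following assumes SciPy and NumPy are available; the coefficient values are arbitrary illustrations, not from the lecture.

import numpy as np
from scipy import signal

# a_0 y(n) + a_1 y(n-1) + ... = b_0 x(n) + b_1 x(n-1) + ...
b = [1.0, 0.5]          # b_k coefficients (numerator)
a = [1.0, -0.9]         # a_k coefficients (denominator)

omega, H = signal.freqz(b, a, worN=512)   # frequency response on 0 <= Omega < pi
print(np.abs(H[0]))                       # DC gain = (1 + 0.5)/(1 - 0.9) = 15

# The impulse response follows by driving the system with a unit impulse.
impulse = np.zeros(20); impulse[0] = 1.0
h = signal.lfilter(b, a, impulse)
print(h[:5])                              # 1, 1.4, 1.26, 1.134, ...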

(Refer Slide Time: 13:26)

(Refer Slide Time: 14:02)

687
(Refer Slide Time: 14:31)

So basically that completes the discussion of the application of the properties of the DTFT
for LTI systems with respect to the transfer function, the output for a periodic signal,
output for an aperiodic signal and also how to derive the impulse response and the
transfer function of an LTI system described by a constant coefficient difference
equation. So we will stop here and continue in the subsequent module. Thank you very
much.

688
Principles of Signals and Systems
Prof. Aditya K. Jagannatham
Department of Electrical Engineering
Indian Institute of Technology, Kanpur

Lecture - 60
Discrete Fourier Transform-Definition, Inverse DFT, Relation between DFT and
DFS, Relation between DFT and DTFT, Properties-Linearity, Time Shifting

Keywords: Discrete Fourier Transform, Inverse DFT, Linearity, Time Shifting

Hello, welcome to another module in this massive open online course. So we are looking
at the Fourier analysis of discrete time signals. We have completed our discussion of the
discrete time Fourier transform and in this module we will start a different concept that is
the discrete Fourier transform.

(Refer Slide Time: 00:34)

So this is the Fourier analysis of discrete time signals which has revolutionized digital signal processing, because there is a very fast way to implement the discrete Fourier transform, that is the DFT, through the FFT, the fast Fourier transform routine. It is used for finite length sequences, heavily in image, audio and video processing, and also for communication applications, especially wireless communications, and so on. The definition is as follows: consider a sequence x(n), a finite length sequence defined only for $0 \le n \le N-1$.

689
(Refer Slide Time: 02:01)

N 1  j 2 k
n
And X(k), the kth DFT coefficient is defined as  x ( n) e
n 0
N
and this is defined for a

finite length sequence 0  k  N-1.

(Refer Slide Time: 02:58)

And this can also be written as $X(k) = \sum_{n=0}^{N-1} x(n)\left(e^{-j\frac{2\pi}{N}}\right)^{kn}$.
n 0  

690
(Refer Slide Time: 03:53)

Now, $W_N = e^{-j\frac{2\pi}{N}}$ is a fundamental quantity, and also note that $W_N^{k}$, where k is any integer multiple of N, say $k = lN$, equals $e^{-j\frac{2\pi}{N}lN} = e^{-j2\pi l}$, which is equal to 1. So $W_N$ is in fact one of the Nth roots of unity.

(Refer Slide Time: 05:08)

N 1 2 km

 X (k ) e
j
N
Now, for the IDFT if you perform that is equal to
k 0

N 1 N 1 2 kn 2 km
j
 x(n) e
j
N N
e .
k 0 n 0

691
(Refer Slide Time: 06:24)

N 1 N 1 2 k ( m  n )

 x ( n)  e
j
N
So interchanging the order of summation we have . And now, you
n 0 k 0

N 1 2 k ( m  n )

e
j
can see that this N
is equal to 0 if m  n and this is equal to N if m = n. And
k 0

therefore, this is equal to Nx(m).

(Refer Slide Time: 07:26)

N 1 2 km 2 km
1 N 1
 X (k ) e  X (k ) e N and in
j j
N
So we have Nx(m) = which implies that x(m) =
k 0 N k 0
2
1 N 1

j
fact, e N
is WN-1, so this implies x(m) = X (k )WN km . So this is the expression for
N k 0
the IDFT, inverse discrete Fourier transform.
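A minimal direct implementation of these two definitions, assuming NumPy is available, checked against numpy.fft on an arbitrary test sequence.

import numpy as np

def dft(x):
    N = len(x)
    n = np.arange(N)
    k = n.reshape(-1, 1)
    W = np.exp(-2j * np.pi * k * n / N)      # W_N^{kn}
    return W @ x

def idft(X):
    N = len(X)
    n = np.arange(N)
    k = n.reshape(-1, 1)
    Winv = np.exp(2j * np.pi * k * n / N)    # W_N^{-kn}
    return (Winv @ X) / N

x = np.array([1.0, 2.0, -1.0, 0.5])
print(np.allclose(dft(x), np.fft.fft(x)))    # True
print(np.allclose(idft(dft(x)), x))          # True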

692
(Refer Slide Time: 08:51)

And x(n) and X(k) form a DFT pair, x(n) in the range 0 to N-1, k also in the range 0 to
capital N-1, so both have N samples. So we have seen several transforms and we want to
look at the relation between them.

(Refer Slide Time: 10:39)

So we want to look at the relation between the DFT and the DFS. Consider the periodic extension of x(n), that is, we repeat the same sequence x(n) periodically, and let $C_k$ denote its DFS coefficient. Then we have $C_k = \frac{1}{N}\sum_{n=0}^{N-1} x(n)\,e^{-j\Omega_0 kn}$.

693
(Refer Slide Time: 12:20)

In fact, we have $\Omega_0 = \frac{2\pi}{N}$, so this becomes $C_k = \frac{1}{N}\sum_{n=0}^{N-1} x(n)\,e^{-j\frac{2\pi kn}{N}}$, which is nothing but $\frac{1}{N}$ times the DFT coefficient X(k).

(Refer Slide Time: 13:09)

So this implies if Ck denotes the kth discrete Fourier series coefficient of the periodic
extension then NCk = X(k).

694
(Refer Slide Time: 14:32)

So the next thing that we want to see is the relation between the DFT and the DTFT. Consider a signal x(n) defined for $0 \le n \le N-1$ and 0 otherwise; if you look at the DTFT it will be $X(\Omega) = \sum_{n=0}^{N-1} x(n)\,e^{-j\Omega n}$.
n 0

(Refer Slide Time: 15:52)

N 1  j 2
kn
k
Now, if you substitute   2
N
, this will become  x ( n) e
n 0
N
which is nothing but

X(k). So this implies that X(k) is nothing but the DTFT of X(  ) evaluated at
k
  2 .
N

695
(Refer Slide Time: 16:24)

Let us now look at some properties of the DFT. We have x(n) for $0 \le n \le N-1$ and X(k) for $0 \le k \le N-1$. Now we consider $x(n-n_0)$, where the index has to be interpreted modulo N, with N the total number of samples.

(Refer Slide Time: 18:09)

Let us say we have $n_0 = 1$ and N = 4, and let us call the shifted signal $\tilde{x}(n) = x((n-n_0) \bmod N)$. So we have $\tilde{x}(1) = x(0)$, and so on; you are circularly shifting the sequence, the shifting being modulo 4, where N = 4 is the total length of the sequence. And now we look at the properties of the DFT, starting with linearity.

696
(Refer Slide Time: 19:52)

So linearity tells us that if $x_1(n)$ has the DFT coefficients $X_1(k)$ and $x_2(n)$ has the DFT coefficients $X_2(k)$, both of the same length $0 \le n \le N-1$, then $a_1x_1(n) + a_2x_2(n)$ has the DFT $a_1X_1(k) + a_2X_2(k)$.

(Refer Slide Time: 20:55)

Now let x(n) have the DFT X(k); what can we say about $x(n-n_0)$? We have $X(k) = \sum_{n=0}^{N-1} x(n)\,W_N^{kn}$.

697
(Refer Slide Time: 22:31)

Now, let $(n - n_0) \bmod N = m$; this implies $n = n_0 + LN + m$ for some integer L.

(Refer Slide Time: 23:32)

N 1 N 1
So we have X (k )   x(n  n0 )WNkn   x(m)WNk ( n0  LN  m ) . Now WNkn0 comes outside
n 0 m0

N 1
because it does not depend on m, so that is WNkn0  x(m)WNkm which is basically X(k)
m 0

since WNkLN  1 .

698
(Refer Slide Time: 25:07)

So the result is $W_N^{kn_0}\,X(k)$, which is similar to the modulation property: a shift in time leads to modulation in the frequency domain.
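A sketch of this circular time-shift property, assuming NumPy is available; the sequence and shift are arbitrary choices.

import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
N = len(x)
n0 = 2

x_shift = np.roll(x, n0)                   # circular shift: x((n - n0) mod N)
k = np.arange(N)
WN = np.exp(-2j * np.pi / N)               # W_N

lhs = np.fft.fft(x_shift)
rhs = (WN ** (k * n0)) * np.fft.fft(x)     # W_N^{k n0} X(k)
print(np.allclose(lhs, rhs))               # True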

(Refer Slide Time: 26:00)

So that basically covers the time shift property. So in this module we have introduced the
DFT, looked at its definitions and some of its properties. So we will stop here and
continue in the subsequent modules. Thank you very much.

699
Principles of Signals and Systems
Prof. Aditya K. Jagannatham
Department of Electrical Engineering
Indian Institute of Technology, Kanpur

Lecture - 61
Discrete Fourier Transform: Properties – Conjugation, Frequency Shifts, Duality,
Circular Convolution, Multiplication, Parseval’s Relation, Example Problems for
Fourier Analysis of Discrete Time Signals

Keywords: Conjugation, Frequency Shifts, Duality, Circular Convolution, Multiplication, Parseval’s Relation

Hello, welcome to another module in this massive open online course. So we are looking
at the DFT that is the discrete Fourier transform and its application for finite length
discrete time sequence.

(Refer Slide Time: 00:26)

(Refer Slide Time: 01:05)

700
So we have the conjugation property: we have a sequence x(n) which has the DFT X(k). Then you can show that $x^*(n)$ has the DFT $X^*(-k)$, where the index has to be interpreted modulo N, that is, $X^*((-k) \bmod N)$.

(Refer Slide Time: 02:05)

The next property is the frequency shift property: modulation in time results in a shift in frequency. So if x(n) has the DFT X(k), then $W_N^{-k_0 n}\,x(n)$ has the DFT $X(k-k_0)$.

(Refer Slide Time: 02:59)

Now we consider duality: x(n) has the DFT X(k), and we need to find the DFT of the time domain sequence given by X(n).

701
(Refer Slide Time: 03:42)

1 N 1
The inverse DFT can be written as  X (k )WNkn , now if we interchange the roles of n
N k 0
1 N 1
and k then you will have x(k )  
N n 0
X (n)WN nk .

(Refer Slide Time: 05:13)

1 N 1
And now if we consider x(-k) that becomes  X (n)WNnk and then Nx(-k) is
N n 0
N 1

 X (n)W
n 0
nk
N which is basically DFT of X(n).

702
(Refer Slide Time: 06:15)

So the next property is also fairly important, which is circular convolution.

(Refer Slide Time: 06:51)

N 1
So we have x1 (n)  x2 (n)   x(i) x2 (n  i) mod N .
i 0

703
(Refer Slide Time: 07:57)

Now we need to find the DFT of this quantity, and this can be found as follows. You have $\sum_{n=0}^{N-1}\sum_{i=0}^{N-1} x_1(i)\,x_2((n-i) \bmod N)\,W_N^{kn} = \sum_{i=0}^{N-1} x_1(i)\sum_{n=0}^{N-1} x_2((n-i) \bmod N)\,W_N^{kn}$, and the inner sum is the DFT of the circularly shifted sequence, which, from the time shift property, is simply $W_N^{ki}\,X_2(k)$.

(Refer Slide Time: 09:53)

N 1
So this basically reduces to  x (i)W
i 0
1
ki
N X 2 (k ) and X2(k) is a constant depending only on

k and you can see that the rest is nothing but X1(k), so that reduces to X1(k)X2(k). So the
circular convolution of two finite length sequences result in the multiplication of their
respective DFT coefficients and this is a very important property used by practical
wireless communication system, in fact, most of the 4G phones that are based on the LTI
standard which uses OFDM, orthogonal frequency division multiplexing are based on
this phenomenon.

704
(Refer Slide Time: 11:03)

So for periodic signals or finite length signals it is always circular convolution, for
infinite signals it is always linear convolution.
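The following small sketch, assuming NumPy is available, illustrates the property: circular convolution in time corresponds to multiplication of DFT coefficients. The sequences are arbitrary examples.

import numpy as np

def circular_convolve(x1, x2):
    N = len(x1)
    return np.array([sum(x1[i] * x2[(n - i) % N] for i in range(N)) for n in range(N)])

x1 = np.array([1.0, 2.0, 3.0, 4.0])
x2 = np.array([1.0, 0.0, -1.0, 0.5])

direct = circular_convolve(x1, x2)
via_dft = np.fft.ifft(np.fft.fft(x1) * np.fft.fft(x2)).real
print(np.allclose(direct, via_dft))    # True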

(Refer Slide Time: 12:10)

So $x_1(n)\,x_2(n)$ has the DFT $\frac{1}{N}\,X_1(k) \circledast X_2(k)$, a circular convolution; that is the rule for multiplication of two time domain signals, and now we can derive Parseval's relation from this.
two time domain signals and now we can derive the Parseval’s relation from this.

705
(Refer Slide Time: 13:10)

So set $x_2(n) = x_1^*(n)$. Now if you multiply $x_1(n)$ with $x_1^*(n)$, you have $|x_1(n)|^2$, and that has the DFT $\frac{1}{N}\sum_{i=0}^{N-1} X_1(i)\,X_1^*(i-k)$. So if you look at the kth DFT coefficient, $\sum_{n=0}^{N-1} |x_1(n)|^2\,W_N^{kn} = \frac{1}{N}\sum_{i=0}^{N-1} X_1(i)\,X_1^*(i-k)$. Now we set k = 0 on both sides.

(Refer Slide Time: 14:37)

N 1
1 N 1
So this reduces to  | x ( n) |
n 0
1
2
 
N i 0
| X 1 (i) |2 and that is the Parseval’s relation for DFT.
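A quick numerical check of this relation, assuming NumPy is available; the complex sequence below is an arbitrary example.

import numpy as np

x = np.array([1 + 2j, -0.5, 3.0, 0.25j, -2.0])
X = np.fft.fft(x)
print(np.sum(np.abs(x) ** 2), np.sum(np.abs(X) ** 2) / len(x))   # equal values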

706
(Refer Slide Time: 16:23)

That basically completes the discussion for the DFT. So we can start looking at some
example problems for the Fourier analysis of discrete time signals.

(Refer Slide Time: 17:42)

Let us consider a periodic signal with period $N_0$ being even, given in one period by $x(n) = 1$ for $0 \le n \le \frac{N_0}{2} - 1$ and $x(n) = 0$ for $\frac{N_0}{2} \le n \le N_0 - 1$. So this is the description of it in a single period, and then it repeats.

707
(Refer Slide Time: 19:45)

Now we need to find the discrete Fourier series coefficients of x(n). The fundamental frequency is $\Omega_0 = \frac{2\pi}{N_0}$, and the DFS coefficient $C_k$ is given as $C_k = \frac{1}{N_0}\sum_{n=0}^{N_0-1} x(n)\,e^{-j\Omega_0 kn}$.

(Refer Slide Time: 21:02)

708
(Refer Slide Time: 22:28)

So, as shown in the slide, it reduces to $C_k = \frac{1}{N_0}\,e^{-j\frac{k}{2}(\pi - \Omega_0)}\,\dfrac{\sin\!\left(\frac{\pi k}{2}\right)}{\sin\!\left(\frac{\Omega_0 k}{2}\right)}$.

(Refer Slide Time: 24:02)

So in this module we have completed the discussion of the DFT, the various properties
of the DFT followed by an example for the discrete Fourier series. So we will stop here
and look at other examples in the subsequent modules. Thank you very much.

709
Principles of Signals and Systems
Prof. Aditya K. Jagannatham
Department of Electrical Engineering
Indian Institute of Technology, Kanpur

Lecture - 62
Example Problems: DFS Analysis of Discrete Time Signals, Problems on DTFT

Keywords: DFS Analysis of Discrete Time Signals

Hello, welcome to another module in this massive open online course. So we are looking
at example problems for the Fourier analysis of discrete time signals and we have started
with example problems for the discrete Fourier series for discrete time periodic signals,
so let us continue our discussion.

(Refer Slide Time: 00:32)

(Refer Slide Time: 01:14)

710
So let us look at the next problem that is problem number 2 which is the following.

(Refer Slide Time: 01:46)

 
So we want to find the Fourier series of $\cos\!\left(\frac{\pi}{2}n\right) + \sin\!\left(\frac{\pi}{3}n\right)$, and you can see that these are periodic. The period of $\cos\!\left(\frac{\pi}{2}n\right)$ can be found as follows: $\frac{\pi}{2}N_1 = 2\pi$ implies $N_1 = 4$. Similarly, for the other signal, $\frac{\pi}{3}N_2 = 2\pi$ implies $N_2 = 6$.

(Refer Slide Time: 03:04)

So the period of the sum is the least common multiple, because it has to be a multiple of both periods, and therefore the minimum possible length is their least common multiple. So the fundamental period of the sum signal is $N_0 = \mathrm{LCM}(4, 6) = 12$; that is, $N_0 = 12$ is the period of x(n). Now, using the properties
711
 
j n j n
 e 2
e 2

of complex exponentials we can write cos( n) as and sin( n)
2 2 3
 
j n j n
e 3
e 3
as .
2j

(Refer Slide Time: 04:22)

2 2   
So we have 0  = = . So is 3 0 and is 2 0 .
N0 12 6 2 3

(Refer Slide Time: 05:23)

1 j 30n 1  j 30n 1 j 20n 1  j 20n


So we can write this as e  e  e  e . And clearly you can
2 2 2j 2j
see that the coefficient of 3 0 is half which is equal to C3, this is C-3, this is C2 and this

712
is modulo N. So $C_{-3} = C_{-3+12}$, that is basically $C_9$, and $C_{-2} = C_{-2+12}$, that is, $C_{10}$.

(Refer Slide Time: 06:28)

So from the above discrete Fourier series we have $C_3 = \frac{1}{2}$; $C_{-3}$, that is $C_9$, is also $\frac{1}{2}$; $C_2 = \frac{1}{2j} = -\frac{j}{2}$; and $C_{10} = -\frac{1}{2j} = \frac{j}{2}$.
2j 2 2j

(Refer Slide Time: 07:16)

We can write the discrete Fourier series representation as $x(n) = \frac{1}{2j}e^{j2\Omega_0 n} + \frac{1}{2}e^{j3\Omega_0 n} + \frac{1}{2}e^{j9\Omega_0 n} - \frac{1}{2j}e^{j10\Omega_0 n}$.

713
(Refer Slide Time: 08:42)

Let x(n) be a real periodic sequence and the DFS coefficient is Ck which can be
expressed as ak + jbk and we want to find the trigonometric discrete Fourier series. Let us
consider for simplicity the scenario where N0 is odd which also implies that N0 - 1 is
even.

(Refer Slide Time: 10:36)

N0 1
Now, the discrete Fourier series is given as x(n) = c e
k 0
k
jk 0 n
which when N0 is odd

N0 1
equals c0   ck e jk 0n .
k 1

714
(Refer Slide Time: 11:42)

N0 1
2
So I can write this as c0  ce
k 1
k
jk 0 n
 cN0 k e j ( N0 k ) 0n . Now, if you look at this, since this

is a real signal this implies Ck = C*- k = C*N0- k.

(Refer Slide Time: 13:00)

And further, if you look at $e^{j(N_0-k)\Omega_0 n}$, this is $e^{-jk\Omega_0 n}$, as shown in the slide.

715
(Refer Slide Time: 13:51)

N0 1
2
So basically we can write x(n) as c0  ce
k 1
k
jk 0 n
 ck*e jk 0 n . Now, you can see this

ck*e jk0n is nothing but (ck e jk0n )* . So what we get is 2 Re{ck e jk0n } . So this will be
N0 1
2
simplified as c0   2 Re{c e
k 1
k
jk 0 n
} . And the real part of ck e jk 0n equals basically the real

part of (ak+ jbk)(cos(k 0 n) + jsin(k 0 n)) which is equal to akcos(k 0 n) - bksin(k 0 n).

(Refer Slide Time: 16:15)

And therefore, substituting this, we have the trigonometric discrete Fourier series as $x(n) = c_0 + 2\sum_{k=1}^{\frac{N_0-1}{2}} \left( a_k\cos(k\Omega_0 n) - b_k\sin(k\Omega_0 n) \right)$, where $a_k$ is the real part and $b_k$ is the imaginary part of

716
the DFS coefficient $C_k$. Similarly, you can find the trigonometric discrete Fourier series when $N_0$ is even; in that case $N_0 - 1$ is odd, so you cannot apply exactly the same technique. That brings us to the next problem, which is to find a discrete time Fourier transform.

(Refer Slide Time: 18:39)

Let us say x(n) = u(n) - u(n-N); we need to find the corresponding DTFT of this signal. This implies x(n) is 1 for $0 \le n \le N-1$ and 0 otherwise.

(Refer Slide Time: 20:13)


Now the DTFT is given as $X(\Omega) = \sum_{n=-\infty}^{\infty} x(n)\,e^{-j\Omega n}$. This is non-zero only from 0 to N-1, so $X(\Omega) = \sum_{n=0}^{N-1} e^{-j\Omega n}$, which is a geometric series whose sum is $\dfrac{1 - e^{-j\Omega N}}{1 - e^{-j\Omega}}$,

717
from which, if you take $e^{-j\frac{\Omega N}{2}}$ common in the numerator and $e^{-j\frac{\Omega}{2}}$ common in the denominator, what is left is $X(\Omega) = e^{-j\Omega\frac{N-1}{2}}\,\dfrac{\sin\!\left(\frac{N\Omega}{2}\right)}{\sin\!\left(\frac{\Omega}{2}\right)}$.
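A numerical sanity check of this closed form, assuming NumPy is available; N = 8 is an arbitrary illustrative choice.

import numpy as np

N = 8
omega = np.linspace(-np.pi, np.pi, 500)          # grid chosen so Omega = 0 is not hit exactly

direct = np.array([np.sum(np.exp(-1j * w * np.arange(N))) for w in omega])
closed = np.exp(-1j * omega * (N - 1) / 2) * np.sin(N * omega / 2) / np.sin(omega / 2)

print(np.max(np.abs(direct - closed)))           # ~1e-13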

(Refer Slide Time: 20:56)

(Refer Slide Time: 22:14)

Let us find the inverse DTFT of another signal, as shown in the slide. This is periodic with period $2\pi$, and within one period $X(\Omega) = 1$ for $|\Omega| \le W$ and $X(\Omega) = 0$ for $W < |\Omega| \le \pi$.

718
(Refer Slide Time: 23:59)

The inverse DTFT is given as $x(n) = \frac{1}{2\pi}\int_{-\pi}^{\pi} X(\Omega)\,e^{j\Omega n}\,d\Omega$, which is $\frac{1}{2\pi}\int_{-W}^{W} e^{j\Omega n}\,d\Omega$, and this on evaluation reduces to $\frac{\sin(Wn)}{\pi n}$. So basically that gives us the inverse DTFT of x(n).

(Refer Slide Time: 25:41)

719
(Refer Slide Time: 26:30)

So we have looked at problems pertaining to the discrete Fourier series and also started
looking at problems pertaining to the DTFT in this module. Let us continue this
discussion in the subsequent modules. Thank you very much.

720
Principles of Signals and Systems
Prof. Aditya K. Jagannatham
Department of Electrical Engineering
Indian Institute of Technology, Kanpur

Lecture - 63
Examples Problems: DTFT of Cosine, Unit Step Signals

Keywords: DTFT of Cosine, Unit Step Signals

Hello, welcome to another module in this massive open online course. So we are looking
at example problems in the Fourier analysis of discrete time aperiodic signals.

(Refer Slide Time: 00:30)

(Refer Slide Time: 01:02)

721
Let us look at example number 6, where we want to find the IDTFT of $2\pi\,\delta(\Omega - \Omega_0)$. This can be found as follows: it is $\frac{1}{2\pi}\int_{2\pi} 2\pi\,\delta(\Omega - \Omega_0)\,e^{j\Omega n}\,d\Omega$; now the $2\pi$ cancels, and this will be $\int_{2\pi} \delta(\Omega - \Omega_0)\,e^{j\Omega n}\,d\Omega$, which is basically $e^{j\Omega_0 n}$.

(Refer Slide Time: 02:07)

(Refer Slide Time: 02:25)

722
(Refer Slide Time: 03:19)

Now let us apply this result to compute the DTFT of $\cos(\Omega_0 n)$. This can be written as $\cos(\Omega_0 n) = \frac{1}{2}e^{j\Omega_0 n} + \frac{1}{2}e^{-j\Omega_0 n}$, and $e^{j\Omega_0 n}$ has DTFT $2\pi\,\delta(\Omega - \Omega_0)$.
2 2

(Refer Slide Time: 03:54)

So this will be $\frac{1}{2}\cdot 2\pi\,\delta(\Omega - \Omega_0) + \frac{1}{2}\cdot 2\pi\,\delta(\Omega + \Omega_0)$.

723
(Refer Slide Time: 04:45)

So $\cos(\Omega_0 n)$ has the DTFT $\pi\,\delta(\Omega - \Omega_0) + \pi\,\delta(\Omega + \Omega_0)$.

(Refer Slide Time: 05:37)

Let us continue to another example, where we want to find the inverse DTFT of $\dfrac{1}{(1 - ae^{-j\Omega})^2}$, given the condition |a| < 1. Let us start by considering the signal $x(n) = a^n u(n)$, where |a| < 1, and we will manipulate the DTFT of this signal to derive the inverse DTFT of the given quantity. So this is related to the DTFT of the exponential signal.

724
(Refer Slide Time: 06:54)

If you look at the DTFT of this, you have $X(\Omega) = \dfrac{1}{1 - ae^{-j\Omega}}$. Now if you differentiate this with respect to $\Omega$, you will have $\dfrac{dX(\Omega)}{d\Omega} = -\dfrac{j a e^{-j\Omega}}{(1 - ae^{-j\Omega})^2}$, which implies that $j\dfrac{dX(\Omega)}{d\Omega}$ will be $\dfrac{a e^{-j\Omega}}{(1 - ae^{-j\Omega})^2}$.

(Refer Slide Time: 08:05)

725
(Refer Slide Time: 08:48)

So $j\dfrac{dX(\Omega)}{d\Omega}$ corresponds to the DTFT of $n\,x(n)$. So $n\,a^n u(n)$ corresponds to $\dfrac{a e^{-j\Omega}}{(1 - ae^{-j\Omega})^2}$.

(Refer Slide Time: 09:42)

726
(Refer Slide Time: 10:06)

ae j
So which implies - nanu(n) has the DTFT which is given by .
(1  ae j ) 2

(Refer Slide Time: 10:54)

Now you can add and subtract 1 in the numerator, so that will be equal to $\dfrac{(1 - ae^{-j\Omega}) - 1}{(1 - ae^{-j\Omega})^2}$.

727
(Refer Slide Time: 11:13)

So that is $\dfrac{1}{1 - ae^{-j\Omega}} - \dfrac{1}{(1 - ae^{-j\Omega})^2}$. If you take the inverse DTFT of the first term you will get $a^n u(n)$, and taking the inverse DTFT of the second term we get the signal we are looking for; let us call it $\tilde{x}(n)$. So this implies $-n\,a^n u(n) = a^n u(n) - \tilde{x}(n)$, which implies $\tilde{x}(n) = n\,a^n u(n) + a^n u(n)$, which is equal to $(n+1)\,a^n u(n)$.

(Refer Slide Time: 13:09)

Therefore the inverse DTFT of the given quantity is $(n+1)\,a^n u(n)$.

728
(Refer Slide Time: 14:03)

(Refer Slide Time: 14:36)

Look at this: the given DTFT is $\dfrac{1}{(1 - ae^{-j\Omega})^2}$, which is equal to $\dfrac{1}{1 - ae^{-j\Omega}}\cdot\dfrac{1}{1 - ae^{-j\Omega}}$. Now, multiplication in the DTFT domain must correspond to convolution of the corresponding inverse DTFTs, that is, of the time domain signal for $\dfrac{1}{1 - ae^{-j\Omega}}$, which is $a^n u(n)$, with itself. That convolution is $\sum_{k=-\infty}^{\infty} a^k u(k)\,a^{n-k} u(n-k)$, and since $a^k\,a^{n-k} = a^n$, which does not depend on k, it comes out of the summation.
729
(Refer Slide Time: 16:05)


Now if you look at  u(k )u(n  k ) , this non-zero only for k  0 and k  n. So this this is
k 

n
simply a n 1 for n  0. So this is basically (n+1)an for n  0 and you can write this as
k 0

(n+1)anu(n).

(Refer Slide Time: 17:03)

So this is another, slightly simpler, way to derive the same result. Let us do another example problem.

730
(Refer Slide Time: 17:56)

(Refer Slide Time: 17:59)

This is another standard problem: to find the DTFT of u(n), the unit step function. We can write u(n) as x(n) + y(n), where x(n) = 1/2 for all n, and y(n) = 1/2 for $n \ge 0$ and -1/2 for n < 0.

731
(Refer Slide Time: 18:44)

(Refer Slide Time: 19:19)

So 1 has the DTFT $2\pi\,\delta(\Omega)$, which implies 1/2 has the DTFT $\pi\,\delta(\Omega)$. So x(n) = 1/2 implies $X(\Omega) = \pi\,\delta(\Omega)$. Now, if you look at $y(n) - y(n-1)$, that is equal to $\delta(n)$. This implies $Y(\Omega) = \dfrac{1}{1 - e^{-j\Omega}}$.

732
(Refer Slide Time: 20:53)

By linearity we have $U(\Omega) = X(\Omega) + Y(\Omega)$, which implies $U(\Omega) = \pi\,\delta(\Omega) + \dfrac{1}{1 - e^{-j\Omega}}$; that is the DTFT of the standard unit step signal. Now let us look at the accumulation property.

(Refer Slide Time: 22:06)

Consider $y(n) = \sum_{k=-\infty}^{n} x(k)$; this is termed the accumulation property. It is similar to integration in continuous time, that is, you are integrating the input signal $x(\tau)$ from $-\infty$ to t (or from 0 to t).

733
(Refer Slide Time: 23:15)

(Refer Slide Time: 23:55)

So this is nothing but the convolution of x(n) with u(n), because the convolution of x(n) with u(n) is $\sum_{k=-\infty}^{\infty} x(k)\,u(n-k)$, and u(n-k) is non-zero only for $k \le n$.

734
(Refer Slide Time: 24:34)

So basically, convolution in time is multiplication in the frequency domain. So $Y(\Omega)$, which is the DTFT of the accumulator output, is basically $X(\Omega)\,U(\Omega)$. This is simply $\pi X(\Omega)\,\delta(\Omega) + \dfrac{X(\Omega)}{1 - e^{-j\Omega}}$.

(Refer Slide Time: 25:17)

735
(Refer Slide Time: 25:44)

And now you can see that $X(\Omega)\,\delta(\Omega)$ is nothing but $X(0)\,\delta(\Omega)$ from the properties of the unit impulse function. So this is $\pi X(0)\,\delta(\Omega) + \dfrac{X(\Omega)}{1 - e^{-j\Omega}}$; basically, that is the DTFT of the accumulator output. So let us stop this module here and we will continue looking at other problems in the subsequent modules. Thank you very much.

736
Principles of Signals and Systems
Prof. Aditya K. Jagannatham
Department of Electrical Engineering
Indian Institute of Technology, Kanpur

Lecture – 64
Example Problems: DTFT – Impulse Response

Keywords: DTFT – Impulse Response

Hello, welcome to another module in this massive open online course. So we are looking
at example problems in DTFT.

(Refer Slide Time: 00:25)

So let us start with the next problem, which deals with the impulse response of a given discrete time system. We have the difference equation $y(n) - \frac{5}{6}\,y(n-1) + \frac{1}{6}\,y(n-2) = x(n)$.
6 6

737
(Refer Slide Time: 01:25)

So this is the constant coefficient difference equation and we have seen how to derive the
impulse response from the constant coefficient difference equation using the discrete
time Fourier transform.

(Refer Slide Time: 02:28)

Now the frequency response is simply obtained by taking the DTFT on both sides; we have $Y(\Omega)\left(1 - \frac{5}{6}e^{-j\Omega} + \frac{1}{6}e^{-j2\Omega}\right) = X(\Omega)$.
6 6

738
(Refer Slide Time: 03:40)

(Refer Slide Time: 04:12)

Now $\dfrac{Y(\Omega)}{X(\Omega)} = H(\Omega) = \dfrac{1}{1 - \frac{5}{6}e^{-j\Omega} + \frac{1}{6}e^{-j2\Omega}}$; this is the frequency response.
6 6

739
(Refer Slide Time: 04:42)

Now we are going to start by factorizing this, and from that we can continue with the partial fraction expansion. If you take the inverse DTFT of the frequency response you get the impulse response. So we have $H(\Omega) = \dfrac{1}{\left(1 - \frac{1}{2}e^{-j\Omega}\right)\left(1 - \frac{1}{3}e^{-j\Omega}\right)}$, and now we split this into partial fractions.

(Refer Slide Time: 05:41)

 
 12 1
The partial fraction expansion is $6\left[\dfrac{\frac{1}{2}}{1 - \frac{1}{2}e^{-j\Omega}} - \dfrac{\frac{1}{3}}{1 - \frac{1}{3}e^{-j\Omega}}\right]$, and now you take the inverse DTFT of each component.

740
(Refer Slide Time: 06:29)

So $\dfrac{1}{1 - ae^{-j\Omega}}$ has the inverse DTFT $a^n u(n)$, so we are going to use this property.

(Refer Slide Time: 07:33)

So, keeping the factor of 6, the term $\dfrac{\frac{1}{2}}{1 - \frac{1}{2}e^{-j\Omega}}$ has the inverse DTFT $\frac{1}{2}\left(\frac{1}{2}\right)^n u(n)$, and $\dfrac{\frac{1}{3}}{1 - \frac{1}{3}e^{-j\Omega}}$ has the inverse DTFT $\frac{1}{3}\left(\frac{1}{3}\right)^n u(n)$, which on simplification yields $h(n) = 3\left(\frac{1}{2}\right)^n u(n) - 2\left(\frac{1}{3}\right)^n u(n)$. So this is the impulse response of the LTI system described by the given difference equation.
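A small check that this closed form satisfies the difference equation, assuming SciPy and NumPy are available.

import numpy as np
from scipy import signal

b = [1.0]
a = [1.0, -5.0 / 6.0, 1.0 / 6.0]

impulse = np.zeros(12); impulse[0] = 1.0
h_numeric = signal.lfilter(b, a, impulse)      # impulse response from the difference equation

n = np.arange(12)
h_closed = 3 * (0.5 ** n) - 2 * ((1.0 / 3.0) ** n)

print(np.allclose(h_numeric, h_closed))        # True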

741
(Refer Slide Time: 08:51)

So now let us proceed to the next problem, where we consider the LTI system given in the slide, and we have to find the frequency response $H(\Omega)$, the impulse response h(n), and also the 3 dB frequency, which is defined as the frequency at which the amplitude is $\frac{1}{\sqrt{2}}$ times the maximum, so that the power at that point corresponds to half of the maximum.

(Refer Slide Time: 10:08)

So we have the input x(n) going into the summer, and also into the block $z^{-1}$, which corresponds to a delay. So if x(n) is the input, the output of the delay is x(n-1), and you are summing x(n) and x(n-1); the difference equation that describes this is very simple: y(n) = x(n) + x(n-1).

742
(Refer Slide Time: 11:23)

Now, to find the impulse response, we set x(n) to be the impulse $\delta(n)$ and then find the output of the given LTI system: $h(n) = \delta(n) + \delta(n-1)$. Now you can see $h(0) = \delta(0) + \delta(-1) = 1$.

(Refer Slide Time: 12:25)

Similarly you can see h(1)=1 and h(n) for all other n is 0.

743
(Refer Slide Time: 12:41)

So the impulse response is very simple that is for n=0, h(n) is 1, for n=1, h(1) is 1 and for
all other n, h(n) is 0.

(Refer Slide Time: 13:27)

Now taking the DTFT we have $X(\Omega) + e^{-j\Omega}X(\Omega) = Y(\Omega)$. So we have $H(\Omega) = \dfrac{Y(\Omega)}{X(\Omega)} = 1 + e^{-j\Omega}$, which we can further simplify by taking $e^{-j\frac{\Omega}{2}}$ common.

744
(Refer Slide Time: 14:32)

This will be $e^{-j\frac{\Omega}{2}}\left(e^{j\frac{\Omega}{2}} + e^{-j\frac{\Omega}{2}}\right)$, and the bracketed term is $2\cos\!\left(\frac{\Omega}{2}\right)$.
2

(Refer Slide Time: 15:09)

 j
So $H(\Omega)$ is simply $e^{-j\frac{\Omega}{2}}\,2\cos\!\left(\frac{\Omega}{2}\right)$, and this is the frequency response of the given LTI system. In fact, you can treat this as a filter; it will be a low pass filter. The magnitude response is $|H(\Omega)| = 2\left|\cos\!\left(\frac{\Omega}{2}\right)\right|$.

745
(Refer Slide Time: 15:59)

(Refer Slide Time: 16:28)

So if you draw this, it will look something like the plot shown in the slide: it is symmetric and periodic with period $2\pi$. The peak is $|H(\Omega)| = 2$. So this is the plot for $|H(\Omega)|$: it starts at the maximum at $\Omega = 0$ and decreases towards $-\pi$ and $\pi$. Clearly this is a low pass filter, but it is not an ideal low pass filter, because the response does not drop to zero outside a cutoff frequency. Therefore, for a non-ideal low pass filter we can characterize the effective bandwidth using the 3 dB frequency.

746
(Refer Slide Time: 19:13)

(Refer Slide Time: 19:43)

Now, if you look at the maximum of $|H(\Omega)|$, this occurs at $\Omega = 0$ and is $2|\cos(0)| = 2$. Let us call the 3 dB frequency $\Omega_0$. Then $|H(\Omega_0)| = \frac{1}{\sqrt{2}}\max|H(\Omega)| = \frac{1}{\sqrt{2}}\cdot 2 = \sqrt{2}$.

747
(Refer Slide Time: 20:44)

0 1  1 
Now, for $\Omega < 0$ it is symmetric. From $2\cos\!\left(\frac{\Omega_0}{2}\right) = \sqrt{2}$ we get $\cos\!\left(\frac{\Omega_0}{2}\right) = \frac{1}{\sqrt{2}}$, which implies $\frac{\Omega_0}{2} = \cos^{-1}\!\left(\frac{1}{\sqrt{2}}\right) = \frac{\pi}{4}$, which implies the 3 dB frequency $\Omega_0 = \frac{\pi}{2}$.

(Refer Slide Time: 21:23)

748
(Refer Slide Time: 21:34)


And you can also say that the 3 dB bandwidth for this non-ideal low pass filter is $\frac{\pi}{2}$.
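A short numerical sketch of this result, assuming NumPy is available.

import numpy as np

omega = np.linspace(-np.pi, np.pi, 1001)
H = 1 + np.exp(-1j * omega)                               # DTFT of h = {1, 1}
print(np.allclose(np.abs(H), 2 * np.abs(np.cos(omega / 2))))   # True
print(np.abs(1 + np.exp(-1j * np.pi / 2)))                # ~1.4142 = 2/sqrt(2) at Omega = pi/2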

(Refer Slide Time: 22:03)

Let us now move on to the next problem, let us look at how to derive a high pass filter
from a low pass filter.

749
(Refer Slide Time: 22:35)

So let h(n) be the impulse response of an ideal low pass filter (LPF) with cutoff $\Omega_c$. Now we want to consider $(-1)^n h(n)$; let us denote this by $\tilde{h}(n)$, and we want to demonstrate that this is in fact a high pass filter. $H(\Omega)$ is as shown in the slide and it is periodic.

(Refer Slide Time: 23:54)

So we have $\tilde{h}(n) = (-1)^n h(n)$.

750
(Refer Slide Time: 25:04)

Now you can write this $-1$ as $e^{j\pi}$, so $(-1)^n$ is $(e^{j\pi})^n$, and $\tilde{h}(n) = (e^{j\pi})^n h(n) = e^{j\pi n}\,h(n)$; you can see this is modulation in time.

(Refer Slide Time: 25:31)

So the modulation property states that if h(n) has DTFT $H(\Omega)$, then $e^{j\Omega_0 n}\,h(n)$ has DTFT $H(\Omega - \Omega_0)$.

751
(Refer Slide Time: 26:15)

Which means $e^{j\pi n}\,h(n)$ has DTFT $H(\Omega - \pi)$, and that is $\tilde{H}(\Omega)$. So $\tilde{H}(\Omega) = H(\Omega - \pi)$, where you are taking the low pass filter response and shifting it by $\pi$.

(Refer Slide Time: 26:57)

So the shifted version $\tilde{H}(\Omega)$ will also be periodic with period $2\pi$; simply shifting neither induces nor destroys periodicity. This is the shifted LPF, as shown in the slide.

752
(Refer Slide Time: 27:48)

So we have h(n), which is a low pass filter, and $\tilde{h}(n)$, which is an equivalent high pass filter with cutoff $\pi - \Omega_c$.

(Refer Slide Time: 31:15)

So let us stop here and we will continue with other problems in the subsequent modules.
Thank you very much.

753
Principles of Signals and Systems
Prof. Aditya K. Jagannatham
Department of Electrical Engineering
Indian Institute of Technology, Kanpur

Lecture - 65
Example Problems: DTFT - Sampling

Keywords: DTFT - Sampling

Hello, welcome to another module in this massive open online course. We are looking at example problems for the discrete time Fourier transform. Let us look at the following discrete time LTI system described by the difference equation $y(n) = \sum_{k=1}^{N} a_k\,y(n-k) + \sum_{k=1}^{M} b_k\,x(n-k)$.

(Refer Slide Time: 00:25).

(Refer Slide Time: 01:41)

754
Now, taking the z-transform, or equivalently the DTFT, we have $Y(\Omega) - \sum_{k=1}^{N} a_k\,e^{-jk\Omega}\,Y(\Omega) = \sum_{k=1}^{M} b_k\,e^{-jk\Omega}\,X(\Omega)$.

(Refer Slide Time: 02:38)

(Refer Slide Time: 03:51)

Now, $H(\Omega) = \dfrac{Y(\Omega)}{X(\Omega)}$ is the transfer function, which relates the input and output in the frequency domain. So $H(\Omega) = \dfrac{\sum_{k=1}^{M} b_k\,e^{-jk\Omega}}{1 - \sum_{k=1}^{N} a_k\,e^{-jk\Omega}}$.

755
(Refer Slide Time: 04:47)

Now suppose this is a low pass filter; to get the corresponding transfer function of the equivalent high pass filter we have to shift it by $\pi$, so $H(\Omega)$ becomes $H(\Omega - \pi)$. We have to replace $\Omega$ by $\Omega - \pi$, and therefore $\tilde{H}(\Omega)$ of this high pass filter is $\dfrac{\sum_{k=1}^{M} b_k\,e^{-jk(\Omega-\pi)}}{1 - \sum_{k=1}^{N} a_k\,e^{-jk(\Omega-\pi)}}$.

(Refer Slide Time: 05:37)

756
(Refer Slide Time: 06:42)

Now you can simplify this as $\dfrac{\sum_{k=1}^{M} b_k\,(e^{j\pi})^k\,e^{-jk\Omega}}{1 - \sum_{k=1}^{N} a_k\,(e^{j\pi})^k\,e^{-jk\Omega}}$.

(Refer Slide Time: 07:30).

And now you can see $(e^{j\pi})^k = (-1)^k$, so $\tilde{H}(\Omega)$, the transfer function of the equivalent high pass filter, is $\dfrac{\tilde{Y}(\Omega)}{X(\Omega)} = \dfrac{\sum_{k=1}^{M} (-1)^k\,b_k\,e^{-jk\Omega}}{1 - \sum_{k=1}^{N} (-1)^k\,a_k\,e^{-jk\Omega}}$. Therefore, this implies that $\tilde{Y}(\Omega)\left(1 - \sum_{k=1}^{N} (-1)^k a_k\,e^{-jk\Omega}\right) = X(\Omega)\sum_{k=1}^{M} (-1)^k b_k\,e^{-jk\Omega}$. Now, taking the inverse DTFT, this gives rise to the difference equation $y(n) = \sum_{k=1}^{N} (-1)^k a_k\,y(n-k) + \sum_{k=1}^{M} (-1)^k b_k\,x(n-k)$.

(Refer Slide Time: 08:52)

So this is the difference equation of the equivalent high pass filter that is developed from
the difference equation of the given low pass filter.

(Refer Slide Time: 10:12).

Let us proceed to the next problem. Consider an impulse response h(n) such that h(n) is real, h(n) = 0 for n < 0 or $n \ge N$, and h(n) is non-zero only for $0 \le n \le N-1$.

758
(Refer Slide Time: 11:18)

And importantly h(n) has the property h(n)=h(N-1-n).

(Refer Slide Time: 11:36)

So this has symmetry, and it will look something like the sketch in the slide: with samples at 0, 1, 2, 3, so N = 4, we have h(0) = h(3-0) = h(3), and further h(1) = h(3-1) = h(2). So the impulse response of this filter is symmetric; it satisfies the property h(n) = h(N-1-n), and here N is even.

759
(Refer Slide Time: 13:27)

So now we want to find the phase response of this filter, $\theta(\Omega)$. From the property h(n) = h(N-1-n), we can see that if you flip this filter about zero and then delay it by N-1, you get the same filter again.

(Refer Slide Time: 14:56).

So if you flip it about the zero axis, that is, first you take $h(n) \to h(-n)$, then $H(\Omega) \to H(-\Omega)$; this is from the time reversal property. Then we delay it by N-1, that is, $h(-n) \to h(-(n-(N-1))) = h(N-1-n)$, whose DTFT is $H(-\Omega)\,e^{-j(N-1)\Omega}$.

760
(Refer Slide Time: 15:53)

(Refer Slide Time: 16:24).

Now, $H(-\Omega)$ is also equal to $H^*(\Omega)$, and this follows because h(n) is a real filter; so the above becomes $H^*(\Omega)\,e^{-j(N-1)\Omega}$. This can be written as $|H(\Omega)|\,e^{-j\theta(\Omega)}\,e^{-j(N-1)\Omega}$, but it must be equal to $H(\Omega) = |H(\Omega)|\,e^{j\theta(\Omega)}$ from the symmetry property of the filter. Now, equating the phases of these two expressions, we have $-\theta(\Omega) - (N-1)\Omega = \theta(\Omega)$, which implies $2\theta(\Omega) = -(N-1)\Omega$.

761
(Refer Slide Time: 18:03).

(Refer Slide Time: 18:43).

So $\theta(\Omega) = -\frac{1}{2}(N-1)\Omega$. This is a linear phase; so the symmetry property of this real filter results in a linear phase response.

762
(Refer Slide Time: 19:57)

Let us proceed to the next problem which talks about the sampling of a continuous time
system. So we have an RC circuit with the input voltage x(t), output voltage y(t).

(Refer Slide Time: 20:52).

Here we need to find the discrete time filter obtained by sampling the above continuous
time impulse response. Now we need to sample this continuous time impulse response
and derive a discrete time impulse response, so that gives us a discrete time LTI system
with a discrete time impulse response.

763
(Refer Slide Time: 22:40)

So the discrete time filter can be described in terms of either the impulse response or the transfer function, and this can be found as follows. We know the differential equation is $y(t) + RC\,\frac{dy(t)}{dt} = x(t)$; now, taking the Laplace transform, we have $Y(s) + RC\,sY(s) = X(s)$, with RC = 2 given. So this implies $(1+2s)\,Y(s) = X(s)$, which implies $\dfrac{Y(s)}{X(s)} = \dfrac{1}{1+2s}$.

(Refer Slide Time: 24:04).

This is H(s); taking the inverse Laplace transform, we have $h(t) = \frac{1}{2}e^{-\frac{t}{2}}\,u(t)$. Then h(n) is obtained by sampling this with sampling period $T_s$.

764
(Refer Slide Time: 24:36)

So $h(n) = h(nT_s) = \frac{1}{2}e^{-\frac{nT_s}{2}}\,u(nT_s)$. In fact, this will be $\frac{1}{2}e^{-\frac{nT_s}{2}}\,u(n)$ because $T_s$ is greater than zero, and this is equal to $\frac{1}{2}\left(e^{-\frac{T_s}{2}}\right)^n u(n)$. So this is the impulse response of the equivalent discrete time system.

(Refer Slide Time: 26:08)

Now taking the z-transform yields H(z), the transfer function. This h(n) can be considered as $\frac{1}{2}a^n u(n)$ where $a = e^{-\frac{T_s}{2}}$. So H(z) will be $\dfrac{1/2}{1 - az^{-1}} = \dfrac{1/2}{1 - e^{-\frac{T_s}{2}}z^{-1}}$, and this is the transfer function of the equivalent discrete time system.

765
(Refer Slide Time: 27:06)

Now, one can derive the discrete time Fourier transform by replacing z by $e^{j\Omega}$, so $H(\Omega)$ becomes $\dfrac{1/2}{1 - ae^{-j\Omega}}$, that is, $\dfrac{1/2}{1 - e^{-\frac{T_s}{2}}e^{-j\Omega}}$, where $a = e^{-\frac{T_s}{2}}$; and now we can also find the difference equation as follows.

(Refer Slide Time: 27:58)

So we have $\dfrac{Y(\Omega)}{X(\Omega)} = \dfrac{1/2}{1 - e^{-\frac{T_s}{2}}e^{-j\Omega}}$, which means $Y(\Omega)\left(1 - e^{-\frac{T_s}{2}}e^{-j\Omega}\right) = \frac{1}{2}X(\Omega)$, which implies that the difference equation is $y(n) - e^{-\frac{T_s}{2}}\,y(n-1) = \frac{1}{2}\,x(n)$. This is the difference equation of the equivalent discrete time system.
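A small sketch of this sampling step, assuming SciPy and NumPy are available; the 1/2 factor follows the worked impulse response above, and $T_s = 0.1$ is an arbitrary choice for the illustration.

import numpy as np
from scipy import signal

Ts = 0.1
n = np.arange(50)
h_sampled = 0.5 * np.exp(-n * Ts / 2)            # h(n) = h(nTs)

# Impulse response of the difference equation y(n) - a y(n-1) = 0.5 x(n)
a_coeff = np.exp(-Ts / 2)
impulse = np.zeros(50); impulse[0] = 1.0
h_diffeq = signal.lfilter([0.5], [1.0, -a_coeff], impulse)

print(np.allclose(h_sampled, h_diffeq))          # True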

766
(Refer Slide Time: 28:54).

So we have found the impulse response, transfer function as well as the difference
equation that describes the equivalent discrete time system obtained by sampling the
impulse response of the continuous time LTI system. So let us stop here and we will
continue with other problems in the future modules. Thank you very much.

767
Principles of Signals and Systems
Prof. Aditya K. Jagannatham
Department of Electrical Engineering
Indian Institute of Technology, Kanpur

Lecture – 66
Example Problems: DTFT – FIR, Discrete Fourier Transform

Keywords: FIR, Discrete Fourier Transform

Hello welcome to another module in this massive open online course. So we are looking
at example problems for the DTFT, let us continue our discussion.

(Refer Slide Time: 00:22)

Now let us look at a continuous time system with the Laplace transform $H(s) = \dfrac{1}{(s+2)(s+3)}$. If this is sampled at $T_s$, we want to find the DTFT of the resulting discrete time system.
DTFT of the resulting system.

768
(Refer Slide Time: 01:55)

So basically we want to take samples at intervals of multiples of Ts that is 0, Ts, 2Ts, -Ts,
1 1
-2Ts and so on. Now if you express this in partial fractions we will have  and
s2 s3
considering a causal system we have the corresponding inverse Laplace transform as
e2t u(t )  e3t u(t ) , that is a right handed signal and we have (e2t  e3t )u(t ) as the h(t).

(Refer Slide Time: 02:58)

This is the impulse response, which implies that if you sample this at $nT_s$, the nth sample of the discrete time impulse response will be $(e^{-2nT_s} - e^{-3nT_s})\,u(nT_s)$.

769
(Refer Slide Time: 04:17)

So this is $e^{-2nT_s}u(n) - e^{-3nT_s}u(n)$, which can be written as $\left(e^{-2T_s}\right)^n u(n) - \left(e^{-3T_s}\right)^n u(n)$. Therefore we have sampled the impulse response of the continuous time system and derived the corresponding discrete time impulse response.

(Refer Slide Time: 05:31)

So now, taking the z-transform, we have $H_d(z) = \dfrac{1}{1 - e^{-2T_s}z^{-1}} - \dfrac{1}{1 - e^{-3T_s}z^{-1}}$. Now, to derive the DTFT, substitute $z = e^{j\Omega}$, and that gives us $H_d(\Omega) = \dfrac{1}{1 - e^{-2T_s}e^{-j\Omega}} - \dfrac{1}{1 - e^{-3T_s}e^{-j\Omega}}$.

770
(Refer Slide Time: 06:49)

So this is the frequency response of the corresponding discrete time system.

(Refer Slide Time: 08:14)

Let us look at the next problem. For this we consider an LTI system with an infinite impulse response h(n), which means h(n) is non-zero over an infinite duration. Now we want to find an FIR approximation $\hat{h}(n)$ such that $\hat{h}(n) = 0$ for n < 0 or n > N-1, and possibly non-zero only for $0 \le n \le N-1$.

771
(Refer Slide Time: 10:32)

Since the DTFT is periodic, we want to look at the quantity $H(\Omega) - \hat{H}(\Omega)$; that is, we want to minimize the error between the DTFT of the original filter h(n) and the DTFT $\hat{H}(\Omega)$ of the new filter.

(Refer Slide Time: 11:48)

Now, to solve this, first observe that h(n) has the DTFT or frequency response $H(\Omega)$ and $\hat{h}(n)$ has the DTFT $\hat{H}(\Omega)$; this implies $h(n) - \hat{h}(n)$ has the DTFT $H(\Omega) - \hat{H}(\Omega)$.

772
(Refer Slide Time: 12:31)

 2
1
Now, if you look at the quantity $\frac{1}{2\pi}\int_{2\pi} \left| H(\Omega) - \hat{H}(\Omega) \right|^2 d\Omega$, from Parseval's theorem this is equal to $\sum_{n=-\infty}^{\infty} \left| h(n) - \hat{h}(n) \right|^2$.

(Refer Slide Time: 13:35)

Now you can split this sum into three components. For n from $-\infty$ to -1, $\hat{h}(n) = 0$, so $h(n) - \hat{h}(n)$ is just h(n) and the term reduces to $|h(n)|^2$. Similarly, for n from N to $\infty$, again $\hat{h}(n) = 0$ in this range, so that term also reduces to $|h(n)|^2$. These two components do not depend on $\hat{h}(n)$; therefore, to minimize the error, one needs to minimize the remaining middle term $\sum_{n=0}^{N-1} |h(n) - \hat{h}(n)|^2$.

773
(Refer Slide Time: 15:09)

This middle term is always greater than or equal to 0; its minimum value is 0, and that occurs when $\hat{h}(n) = h(n)$ for $0 \le n \le N-1$.

(Refer Slide Time: 16:14)

So the best FIR approximation is $\hat{h}(n) = 0$ for n < 0 or n > N-1, and $\hat{h}(n) = h(n)$ for $0 \le n \le N-1$; that is, simple truncation. This minimizes the squared error of the frequency response and gives the best FIR approximation to the infinite impulse response filter h(n).
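A quick illustration of this optimality, assuming NumPy is available; $h(n) = 0.9^n u(n)$ is an arbitrary IIR example and the long vector stands in for the infinite tail.

import numpy as np

n_long = np.arange(200)
h = 0.9 ** n_long
N = 10

def error(h_hat):
    # sum_n |h(n) - h_hat(n)|^2 with h_hat supported on 0..N-1
    full = np.zeros_like(h); full[:N] = h_hat
    return np.sum(np.abs(h - full) ** 2)

h_trunc = h[:N].copy()               # truncation
h_other = h_trunc + 0.05             # any other length-N filter

print(error(h_trunc) <= error(h_other))   # True: truncation gives the smallest error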

774
(Refer Slide Time: 18:00)


We are now going to start problems on the DFT. We are given two sequences: $x(n) = \sin\!\left(\frac{\pi}{2}n\right)$ for $0 \le n \le 3$, and the impulse response $h(n) = 2^n$ for $0 \le n \le 3$.

(Refer Slide Time: 19:49)

Now N, the length of the sequence, is 4. We want to find y(n), the circular convolution of x(n) with h(n), which is nothing but a wraparound convolution, defined by the expression $y(n) = \sum_{i=0}^{N-1} x(i)\,h((n-i) \bmod N)$.
i 0

775
(Refer Slide Time: 20:21)

That is you are basically wrapping the sequence around and you are performing the
circular convolution.

(Refer Slide Time: 21:37)

 
So we are given the sequence $x(n) = \sin\!\left(\frac{\pi}{2}n\right)$. So x(0) is 0; x(1) is $\sin\!\left(\frac{\pi}{2}\right)$, which is 1; x(2) is $\sin(\pi)$, which is again 0; x(3) is $\sin\!\left(\frac{3\pi}{2}\right)$, which is -1. So this is x(n). Now let us plot h(n): h(0) is $2^0 = 1$, h(1) is $2^1 = 2$, h(2) is $2^2 = 4$, and h(3) is $2^3 = 8$.

776
Now consider h(n-i) and this will be as shown in slide. Now we need to find the
corresponding y(n) as shown.

(Refer Slide Time: 25:50)

So this is circular convolution, as you can see at each time instant the sequence is
basically wrapping around itself. So the rightmost quantity is moving to the left and all
the others are shifting one place to the right.

(Refer Slide Time: 29:22)

777
(Refer Slide Time: 31:06)

(Refer Slide Time: 31:37)

So the output will be 6, -3, -6, 3 for $0 \le n \le 3$. This is the output of the circular convolution of the given sequences, and here we have used a time domain interpretation of the circular convolution. In the subsequent module we are going to carry it out in the frequency domain: by using the DFT of these finite length sequences, the circular convolution becomes a multiplication of the corresponding DFTs. We will use that principle to evaluate the output of the circular convolution. So we will stop here and continue in the subsequent module. Thank you very much.

778
Principles of Signals and Systems
Prof. Aditya K. Jagannatham
Department of Electrical Engineering
Indian Institute of Technology, Kanpur

Lecture - 67
Example Problems: DFT

Keywords: DFT

Hello welcome to another module in this massive open online course. So we are looking
at example problems in the Fourier analysis of discrete time signals, in particular, the
discrete Fourier transform.

(Refer Slide Time: 00:27)


So we have the signal $x(n) = \sin\!\left(\frac{\pi}{2}n\right)$, $0 \le n \le 3$, which turns out to be the sequence 0, 1, 0, -1. Consider the impulse response of the system h(n) to be 1, 2, 4, 8, and y(n) is x(n) circularly convolved with h(n). We have computed the circular convolution of these two signals in the time domain; now we want to evaluate it using the DFT.

779
(Refer Slide Time: 01:16)

(Refer Slide Time: 02:31)

N 1
So the kth DFT coefficient can be written as, X (k )   x(n) WN kn .
n 0

780
(Refer Slide Time: 03:59)

So this will be $X(k) = x(0) + x(1)\,W_N^{k} + x(2)\,W_N^{2k} + x(3)\,W_N^{3k}$. Substituting the values, we have $X(k) = W_N^{k} - W_N^{3k}$.

(Refer Slide Time: 04:56)

N 1
Now similarly H (k )   h(n) WN kn  1  2WN k  4WN 2 k  8WN 3k .
n 0

781
(Refer Slide Time: 06:46)

Now the DFT of the output is the product of the DFT coefficients of the input and the impulse response. So we have $Y(k) = X(k)\,H(k) = \left(W_N^{k} - W_N^{3k}\right)\left(1 + 2W_N^{k} + 4W_N^{2k} + 8W_N^{3k}\right)$. Now $W_N^{4k} = \left(e^{-j\frac{2\pi}{4}}\right)^{4k} = e^{-j2\pi k} = 1$, so this property can be used in the simplification of the above expression.

(Refer Slide Time: 08:02)

782
(Refer Slide Time: 08:37)

So after evaluation we will have $Y(k) = 6 - 3W_N^{k} - 6W_N^{2k} + 3W_N^{3k}$.

(Refer Slide Time: 10:57)

Now, if you do a term-by-term comparison with $Y(k) = \sum_{n=0}^{3} y(n)\,W_N^{kn}$, we have $y(0) = 6$, $y(1) = -3$, $y(2) = -6$, $y(3) = 3$. So we have evaluated this using the DFT, and we also evaluated the circular convolution directly, and both of them yield the same answer.
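A verification of the worked example, assuming NumPy is available, both directly and via multiplication of the DFTs.

import numpy as np

x = np.array([0.0, 1.0, 0.0, -1.0])
h = np.array([1.0, 2.0, 4.0, 8.0])
N = len(x)

direct = np.array([sum(x[i] * h[(n - i) % N] for i in range(N)) for n in range(N)])
via_dft = np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)).real

print(direct)                          # [ 6. -3. -6.  3.]
print(np.allclose(direct, via_dft))    # True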

783
(Refer Slide Time: 12:58)

Now, in the next example, we want to find the DFT of $e^{j\Omega_0 n}$, $0 \le n \le N-1$, under the condition that $\Omega_0$ is not equal to $\frac{2\pi k}{N}$ for any integer k. In this case $X(k) = \sum_{n=0}^{N-1} e^{j\Omega_0 n}\,W_N^{kn}$, which is a finite geometric summation. This can be evaluated as shown in the slides.

(Refer Slide Time: 14:06)

784
(Refer Slide Time: 15:44)

(Refer Slide Time: 16:25)

(Refer Slide Time: 18:02)

785
 2 k  N 
sin  0  
N  2 
 2 k  N 1 

And finally we get $X(k) = e^{j\left(\Omega_0 - \frac{2\pi k}{N}\right)\frac{N-1}{2}}\,\dfrac{\sin\!\left(\left(\Omega_0 - \frac{2\pi k}{N}\right)\frac{N}{2}\right)}{\sin\!\left(\left(\Omega_0 - \frac{2\pi k}{N}\right)\frac{1}{2}\right)}$ as the expression for the kth DFT coefficient of the given signal.

(Refer Slide Time: 20:06)

So the DFT and IDFT can basically be expressed in a matrix form. So let us stop this
here and we will look at this fundamental aspect in the next module. Thank you very
much.

786
Principles of Signals and Systems
Prof. Aditya K. Jagannatham
Department of Electrical Engineering
Indian Institute of Technology, Kanpur

Lecture – 68
Example Problems: DFT – DFT, IDFT in Matrix form

Keywords: DFT, IDFT in Matrix form

Hello, welcome to another module in this massive open online course. So we are looking
at example problems in the discrete Fourier transform and inverse discrete Fourier
transform.

(Refer Slide Time: 00:23)

So we would like to write the discrete Fourier transform and the IDFT in matrix form; that will be very convenient for evaluating the DFT and IDFT, as well as for representing them as linear transformations. We are given the finite length signal as the vector $\mathbf{x} = [\,x(0),\ x(1),\ \dots,\ x(N-1)\,]^T$, which has N samples.

787
(Refer Slide Time: 01:30)

So this is an $N \times 1$ vector, and correspondingly we also have the vector of N DFT coefficients, $\mathbf{X} = [\,X(0),\ X(1),\ \dots,\ X(N-1)\,]^T$, and we want to express $\mathbf{X} = \mathbf{W}_N\,\mathbf{x}$ and also $\mathbf{x} = \overline{\mathbf{W}}_N\,\mathbf{X}$.

(Refer Slide Time: 02:42)

So these are the DFT matrix and the IDFT matrix. So we are representing the DFT and
IDFT as linear transformations.

788
(Refer Slide Time: 03:20)

N 1
Now, the N point DFT is X (k )   x(n)W kn N . So you have
n 0

X (0)  x(0)  x(1)  ....  x(N 1) . Similarly,

X (1)  x(0)  x(1) WN  x(2)W 2 N  ...  x(N 1)W N 1N .

(Refer Slide Time: 04:56)

Now the rest follows as shown in the slide; we are developing a linear transformation, that is, a matrix representation for this DFT. The last DFT point is $X(N-1) = x(0) + x(1)\,W_N^{N-1} + x(2)\,W_N^{2(N-1)} + \dots + x(N-1)\,W_N^{(N-1)^2}$.
789
(Refer Slide Time: 06:21)

And you put all these things together, as a Matrix Form as shown in slide.

(Refer Slide Time: 06:38)

On the left you have the N-dimensional vector of DFT coefficients; then there is an $N \times N$ matrix, the DFT matrix of size N, whose entries are powers of $W_N = e^{-j\frac{2\pi}{N}}$; and this multiplies the $N \times 1$ signal vector, so that $\mathbf{X} = \mathbf{W}_N\,\mathbf{x}$. So this is the structure of the DFT matrix.

790
(Refer Slide Time: 10:27)

Now, you can do similarly for the IDFT matrix as shown in slide.

(Refer Slide Time: 11:25)

Here you have the scaling factor $\frac{1}{N}$, and the input will be the DFT coefficients. So this is the $N \times N$ IDFT matrix; it multiplies the $N \times 1$ DFT coefficient vector $\mathbf{X}$, the input, and the output is the $N \times 1$ signal vector. And you can see that $\mathbf{x} = \overline{\mathbf{W}}_N\,\mathbf{X}$.

791
(Refer Slide Time: 14:15)

(Refer Slide Time: 15:17)

So this implies that $\overline{\mathbf{W}}_N\,\mathbf{W}_N = \mathbf{I}$, the identity, which means $\overline{\mathbf{W}}_N$ is the inverse of $\mathbf{W}_N$. Similarly, $\mathbf{W}_N$ is the inverse of $\overline{\mathbf{W}}_N$, because the IDFT is the inverse of the DFT and the DFT is the inverse of the IDFT. The inverse of a square matrix, when it exists, is unique, and so the DFT and IDFT matrices satisfy this property.

792
(Refer Slide Time: 16:40)

So if you look at the DFT matrix, the element in the second row, third column is $W_N^2$, and this is the same as the element in the third row, second column. Similarly, you can see that $W_N(i, j) = W_N(j, i)$. This means that if you take the transpose of this matrix it remains unchanged, that is, the DFT matrix is equal to its own transpose.

(Refer Slide Time: 17:55)

So we have $W_N = W_N^T$, the symmetry property, and now if we look at the IDFT matrix we can see that $\overline{W}_N = \overline{W}_N^T$.

793
(Refer Slide Time: 19:09)

Now if you look at each element, for instance, $W_N(2,3) = W_N^2 = e^{-j\frac{4\pi}{N}}$. Now, if you look at $\overline{W}_N(2,3) = \frac{1}{N} W_N^{-2} = \frac{1}{N} e^{j\frac{4\pi}{N}}$, you can see that these elements are complex conjugates of each other (up to the $\frac{1}{N}$ factor). So we have $\overline{W}_N(2,3) = \frac{1}{N} W_N^*(2,3)$.

(Refer Slide Time: 20:23)

794
(Refer Slide Time: 21:33)

So you can say, basically, that the entire matrix $\overline{W}_N = \frac{1}{N} W_N^* = \frac{1}{N}\left(W_N^T\right)^* = \frac{1}{N} W_N^H$. Now we have $\overline{W}_N W_N = I_{N \times N} \Rightarrow \frac{1}{N} W_N^H W_N = I \Rightarrow W_N^H W_N = N I_{N \times N}$.
N

(Refer Slide Time: 23:01)

So basically you can also say that $W_N^{-1} = \frac{1}{N} W_N^H = \frac{1}{N} W_N^*$.

795
(Refer Slide Time: 24:04)

Now we want to do a simple example, where we write the DFT and IDFT matrices for N = 4, that is, the length of the sequence. Now, realize that when N = 4, we have $W_4 = e^{-j\frac{2\pi}{4}} = -j$.

(Refer Slide Time: 24:53)

Now we have the $N \times N$ matrix, which is a $4 \times 4$ matrix, denoted by $W_N$. The elements of the first row are all ones, and evaluating the remaining entries we get the matrix shown in the slide.

796
(Refer Slide Time: 27:11)

Now we write the IDFT matrix, which is denoted by $\overline{W}_4$, and it is evaluated as shown in the slide. Taking the conjugate of the DFT matrix and scaling by $\frac{1}{4}$ gives the IDFT matrix, and this can also be verified using MATLAB. So you can check these properties and verify them.
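As a small sketch of that check (in Python with NumPy rather than MATLAB), one can construct $W_4$ and $\overline{W}_4$ and confirm the conjugate, transpose and inverse properties discussed above:

```python
import numpy as np

N = 4
n = np.arange(N)
W4 = np.exp(-1j * 2 * np.pi / N) ** np.outer(n, n)   # DFT matrix, entries (-j)^{kn}
W4_bar = (1 / N) * np.conj(W4)                        # IDFT matrix = (1/N) * conjugate

print(np.round(W4))                                   # [[1,1,1,1],[1,-j,-1,j],[1,-1,1,-1],[1,j,-1,-j]]
print(np.allclose(W4, W4.T))                          # symmetric: W_N = W_N^T
print(np.allclose(W4_bar @ W4, np.eye(N)))            # inverse:   W_N_bar W_N = I
print(np.allclose(np.linalg.inv(W4), W4_bar))         # W_N^{-1} = (1/N) W_N^H
```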

(Refer Slide Time: 29:14)

So in this module we have dealt with an important aspect that is representing IDFT and
DFT operations as matrices and this is a very important tool, because it allows us to
represent these operations as linear transformations. So representing the signals and
outputs as vectors and representing these DFT and IDFT operations as matrices helps us
significantly simplify analysis. So we will stop here and continue in the subsequent
modules. Thank you very much.

797
Principles of Signals and Systems
Prof. Aditya K. Jagannatham
Department of Electrical Engineering
Indian Institute of Technology, Kanpur

Lecture – 69
Group/ Phase Delay - Part I

Keywords: Group / Phase Delay

Hello, welcome to another module in this massive open online course. In this module we
will start looking at a different concept that is group and phase delays of an LTI system.

(Refer Slide Time: 00:25)

Consider an LTI system with impulse response h(t), input x(t) and output y(t).

(Refer Slide Time: 01:22)

798
The impulse response h(t) has the Fourier transform $H(\omega)$, which is also known as the transfer function of the LTI system. Now we have $Y(\omega) = X(\omega)H(\omega)$.

(Refer Slide Time: 02:21)

The output is basically the convolution of the input signal with the impulse response, which means that in the Fourier domain it is a multiplication of the input Fourier transform $X(\omega)$ with the transfer function $H(\omega)$. Now, let $H(\omega) = |H(\omega)| e^{j\theta(\omega)}$, where $\theta(\omega)$ is the phase response or angle of $H(\omega)$. Now consider a sinusoidal carrier $x(t) = A\cos(\omega_0 t)$, which is given as input to the LTI system.

(Refer Slide Time: 03:41)

799
(Refer Slide Time: 03:59)

Now we have $X(\omega) = \pi A\,\delta(\omega - \omega_0) + \pi A\,\delta(\omega + \omega_0)$, that is, two impulses at $\omega_0$ and $-\omega_0$, scaled suitably.

(Refer Slide Time: 05:01)

Now, $Y(\omega) = H(\omega)X(\omega)$ implies $Y(\omega) = H(\omega)\,\pi A\,\delta(\omega - \omega_0) + H(\omega)\,\pi A\,\delta(\omega + \omega_0)$, which is equal to $\pi A H(\omega_0)\delta(\omega - \omega_0) + \pi A H(-\omega_0)\delta(\omega + \omega_0)$. Let us consider a filter with real impulse response h(t), which implies $H(-\omega_0) = H^*(\omega_0)$.

800
(Refer Slide Time: 06:14)

So we have $H(\omega_0) = |H(\omega_0)| e^{j\theta(\omega_0)}$ and $H(-\omega_0) = |H(\omega_0)| e^{-j\theta(\omega_0)}$, which means it has the same magnitude and negative phase. Substituting this above implies $Y(\omega) = \pi A |H(\omega_0)| e^{j\theta(\omega_0)}\delta(\omega - \omega_0) + \pi A |H(\omega_0)| e^{-j\theta(\omega_0)}\delta(\omega + \omega_0)$.

(Refer Slide Time: 07:39)

Now, if you take the inverse Fourier transform we will get $\frac{1}{2} A |H(\omega_0)| e^{j\omega_0 t} e^{j\theta(\omega_0)} + \frac{1}{2} A |H(\omega_0)| e^{-j\omega_0 t} e^{-j\theta(\omega_0)}$.

801
(Refer Slide Time: 09:29)

So this will simply be
$$A |H(\omega_0)| \left[ \frac{1}{2} e^{j(\omega_0 t + \theta(\omega_0))} + \frac{1}{2} e^{-j(\omega_0 t + \theta(\omega_0))} \right] = A |H(\omega_0)| \cos\left(\omega_0 t + \theta(\omega_0)\right).$$
Here $|H(\omega_0)|$ is the gain due to the LTI system and $\theta(\omega_0)$ is the phase offset. So when you input a pure sinusoid to an LTI system you get an output that is also a pure sinusoid with gain $|H(\omega_0)|$ and phase offset $\theta(\omega_0)$.

(Refer Slide Time: 11:15)

And this can again be written as
$$A |H(\omega_0)| \cos\!\left(\omega_0\left(t + \frac{\theta(\omega_0)}{\omega_0}\right)\right) = A |H(\omega_0)| \cos\!\left(\omega_0\left(t - \left(-\frac{\theta(\omega_0)}{\omega_0}\right)\right)\right) = A |H(\omega_0)| \cos\!\left(\omega_0(t - \tau_p)\right)$$
and this quantity is denoted by $\tau_p$, which is the delay. So what you are observing is that

802
you input a pure sinusoid and you get another pure sinusoid with a suitable gain, that is $|H(\omega_0)|$, and a delay $\tau_p$, which is known as the phase delay of the system.

(Refer Slide Time: 12:53)

So $\tau_p = -\frac{\theta(\omega_0)}{\omega_0}$ is basically the delay of the carrier input to the LTI system, where $\theta(\omega_0)$ is the phase response of the LTI system at $\omega_0$.

(Refer Slide Time: 14:51)

Let us now consider the other aspect, that is, the group delay. We consider a modulated signal, that is, a pure carrier modulated by a message signal. So we have $x(t) = A\cos(\omega_m t)\cos(\omega_0 t)$.

803
(Refer Slide Time: 16:38)

So when $\omega_0 \gg \omega_m$, the message frequency, the carrier frequency is much higher than the bandwidth of the message signal. For instance, in a typical AM signal the carrier frequencies are in the 1000s of kilohertz, while the message bandwidth is typically only in the 10s of kilohertz. So the carrier frequencies are typically about 100 times larger than the bandwidth of the message signal.

(Refer Slide Time: 17:31)

Now I can write this signal x(t) using the properties of trigonometric functions as
$$x(t) = \frac{A}{2}\left[\cos\left((\omega_0 - \omega_m)t\right) + \cos\left((\omega_0 + \omega_m)t\right)\right].$$
Now $\omega_0 - \omega_m = \omega_l$ can be thought of as the low frequency and $\omega_0 + \omega_m = \omega_h$ is the higher frequency component. So this implies
$$x(t) = \frac{A}{2}\left[\cos(\omega_l t) + \cos(\omega_h t)\right].$$
So because you are modulating a sinusoidal signal of

804
much smaller bandwidth with a carrier of much higher frequency, you get two resultant signals, one at the lower frequency and the other at the higher frequency.

(Refer Slide Time: 19:35)

And now, we pass this through the LTI system, and the output corresponding to this is
$$\frac{A}{2}|H(\omega_l)|\cos\left(\omega_l t + \theta(\omega_l)\right) + \frac{A}{2}|H(\omega_h)|\cos\left(\omega_h t + \theta(\omega_h)\right).$$
2 2

(Refer Slide Time: 21:21)

805
(Refer Slide Time: 22:08)

Now notice that $\omega_l = \omega_0 - \omega_m$, $\omega_h = \omega_0 + \omega_m$ and further we have $\omega_m \ll \omega_0$. So $\omega_h - \omega_l = 2\omega_m \ll \omega_0$. That is, the magnitude response of the LTI system can be assumed to be constant over the interval $[\omega_l, \omega_h]$, which implies that $|H(\omega_l)| \approx |H(\omega_h)| \approx |H(\omega_0)|$. So one can assume the variation of the magnitude response of the LTI system to be negligible over this small frequency interval.

(Refer Slide Time: 24:13)

So basically what we have seen in this module is we have looked at the phase delay and
also started our discussion on the group delay. We will complete this derivation in the
subsequent module. Thank you very much.

806
Principles of Signals and Systems
Prof. Aditya K. Jagannatham
Department of Electrical Engineering
Indian Institute of Technology, Kanpur

Lecture - 70
Group / Phase Delay - Part II

Keywords: Group / Phase Delay

Hello welcome to another module in this massive open online course and we are looking
at the group and phase delay, so let us continue our discussion.

(Refer Slide Time: 00:22)

(Refer Slide Time: 00:40)

807
(Refer Slide Time: 00:56)

Therefore, the net output of the LTI system corresponding to this modulated message signal $x(t) = A\cos(\omega_m t)\cos(\omega_0 t)$ is given as
$$\frac{A}{2}|H(\omega_0)|\cos\left(\omega_l t + \theta(\omega_l)\right) + \frac{A}{2}|H(\omega_0)|\cos\left(\omega_h t + \theta(\omega_h)\right).$$

(Refer Slide Time: 02:09)

808
(Refer Slide Time: 03:04)

So this can be written as
$$A|H(\omega_0)|\cos\!\left(\frac{\omega_h + \omega_l}{2}t + \frac{\theta(\omega_h) + \theta(\omega_l)}{2}\right)\cos\!\left(\frac{\omega_h - \omega_l}{2}t + \frac{\theta(\omega_h) - \theta(\omega_l)}{2}\right).$$

(Refer Slide Time: 04:39)

Now, we have $\frac{\omega_h + \omega_l}{2} = \omega_0$ and $\frac{\omega_h - \omega_l}{2} = \omega_m$, which means that this can be simplified as
$$A|H(\omega_0)|\cos\!\left(\omega_m t + \frac{\theta(\omega_h) - \theta(\omega_l)}{2}\right)\cos\!\left(\omega_0 t + \frac{\theta(\omega_h) + \theta(\omega_l)}{2}\right).$$

809
(Refer Slide Time: 05:52)

Now since $\omega_m$ is small, we are considering a small neighborhood around the carrier frequency $\omega_0$. So we can use a Taylor series expansion to simplify this phase function. We have $\theta(\omega_0 + \Delta\omega) \approx \theta(\omega_0) + \left.\frac{d\theta(\omega)}{d\omega}\right|_{\omega = \omega_0}\Delta\omega$. This is the well-known first order Taylor series approximation.

(Refer Slide Time: 07:30)

Thus this can be simplified as $\theta(\omega_h) = \theta(\omega_0 + \omega_m) \approx \theta(\omega_0) + \theta'(\omega_0)\,\omega_m$ and $\theta(\omega_l) = \theta(\omega_0 - \omega_m) \approx \theta(\omega_0) - \theta'(\omega_0)\,\omega_m$. These two relations are obtained using the Taylor series approximation.

810
(Refer Slide Time: 08:39)

Using these Taylor series approximations we can evaluate $\frac{\theta(\omega_h) + \theta(\omega_l)}{2} \approx \theta(\omega_0)$ and $\frac{\theta(\omega_h) - \theta(\omega_l)}{2} \approx \theta'(\omega_0)\,\omega_m = \left.\frac{d\theta}{d\omega}\right|_{\omega = \omega_0}\omega_m$. Substituting these in the expression above yields the net output.

(Refer Slide Time: 10:05)

Thus we have
$$A|H(\omega_0)|\cos\!\left(\omega_m t + \left.\frac{d\theta}{d\omega}\right|_{\omega_0}\omega_m\right)\cos\!\left(\omega_0 t + \theta(\omega_0)\right) = A|H(\omega_0)|\cos\!\left(\omega_m\left(t + \theta'(\omega_0)\right)\right)\cos\!\left(\omega_0\left(t + \frac{\theta(\omega_0)}{\omega_0}\right)\right).$$

811
(Refer Slide Time: 12:13)

We know that the quantity $-\frac{\theta(\omega_0)}{\omega_0}$ is $\tau_p$, that is, the phase delay, and the quantity $-\theta'(\omega_0)$ is the delay of the message signal; this is the group delay.

(Refer Slide Time: 12:36)

So the resulting output can be written as $A|H(\omega_0)|\cos\!\left(\omega_m(t - \tau_g)\right)\cos\!\left(\omega_0(t - \tau_p)\right)$. Therefore, the group delay is $\tau_g(\omega_0) = -\left.\frac{d\theta(\omega)}{d\omega}\right|_{\omega = \omega_0}$ and the phase delay is $\tau_p(\omega_0) = -\frac{\theta(\omega_0)}{\omega_0}$.

812
(Refer Slide Time: 13:57)

So when you pass a signal through an LTI system with impulse response h(t), frequency response $H(\omega)$ and phase response $\theta(\omega)$, the envelope, that is the message, is delayed by the group delay and the carrier is delayed by the phase delay.

(Refer Slide Time: 15:49)

Further, if the phase is linear, which implies $\theta(\omega) = -k\omega$, the phase delay is $\tau_p(\omega_0) = -\frac{\theta(\omega_0)}{\omega_0} = k$ and the group delay is $\tau_g(\omega_0) = -\left.\frac{d\theta(\omega)}{d\omega}\right|_{\omega_0} = k$. This implies that for a linear phase the phase delay equals the group delay. Thus, for a narrowband modulated message signal passed through an LTI system with a linear phase characteristic, the phase delay is equal to the group delay.

813
(Refer Slide Time: 17:33)

Let us do a simple example to understand this better. Consider the standard series RC circuit with input voltage $V_i(t)$ and output voltage $V_0(t)$. We consider f = 100 Hz, R = 1 kΩ and C = 1 μF. Now we want to derive the group delay for this series RC circuit with the output taken across the capacitor.

(Refer Slide Time: 19:10)

Now, we know that the transfer function of this circuit is simply $H(\omega) = \frac{V_0(\omega)}{V_i(\omega)} = \frac{1}{1 + j\omega RC}$.

814
(Refer Slide Time: 19:45)

Now, you can see that $RC = 1\,\text{k}\Omega \times 1\,\mu\text{F} = 10^{-3}$, with units of seconds. If you look at the phase, this will be $\theta(\omega) = -\tan^{-1}(\omega RC)$. Therefore, the group delay is
$$\tau_g(\omega) = -\frac{d\theta(\omega)}{d\omega} = \frac{RC}{1 + \omega^2 R^2 C^2} = \frac{1\,\text{k}\Omega \times 1\,\mu\text{F}}{1 + (2\pi \times 100)^2 (1\,\text{k}\Omega \times 1\,\mu\text{F})^2} \approx 0.717\,\text{ms}.$$
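A minimal numerical sketch of this calculation (assuming NumPy):

```python
import numpy as np

R, C, f = 1e3, 1e-6, 100.0          # 1 kOhm, 1 uF, 100 Hz
w = 2 * np.pi * f

theta = -np.arctan(w * R * C)                 # phase response of 1/(1 + j w RC)
tau_g = R * C / (1 + (w * R * C) ** 2)        # group delay: -d(theta)/dw
tau_p = -theta / w                            # phase delay, for comparison

print(tau_g * 1e3)   # ~0.717 ms
print(tau_p * 1e3)   # ~0.894 ms (different, since the phase is not linear)
```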

(Refer Slide Time: 21:16)

815
(Refer Slide Time: 22:09)

So in this module we have completed the discussion on the group delay and demonstrated that when you pass a modulated message signal through an LTI system, where the message signal frequency is much smaller than the carrier frequency, one can approximate the output signal as the message delayed by the group delay and the carrier delayed by the phase delay. We have calculated these delays for a linear phase system and shown that in that case both delays are equal. We have also evaluated the group delay for a simple RC circuit. So we will stop here and continue in the subsequent module. Thank you very much.

816
Principles of Signals and Systems
Prof. Aditya K. Jagannatham
Department of Electrical Engineering
Indian Institute of Technology, Kanpur

Lecture – 71
IIR Filter Structures: Direct Form – I, Direct Form – II

Keywords: Direct Form – I, Direct Form – II

Hello, welcome to another module in this massive open online course. So in this module
let us start looking at the implementation of IIR filters using various IIR filter structures.

(Refer Slide Time: 00:34)

IIR filter structures play an important role in the implementation of digital filters, and IIR stands for Infinite Impulse Response. To illustrate this, consider the third order filter with the transfer function $H(z) = \frac{P_0 + P_1 z^{-1} + P_2 z^{-2} + P_3 z^{-3}}{1 + d_1 z^{-1} + d_2 z^{-2} + d_3 z^{-3}}$, which is $\frac{P(z)}{D(z)}$, that is, $\frac{Y(z)}{X(z)}$, the ratio of the output z-transform to the input z-transform, and this can be implemented as follows.

817
(Refer Slide Time: 02:41)

This can be written as $Y(z) = X(z)H(z)$, which is $X(z)\frac{P(z)}{D(z)}$, and this is $X(z)P(z)\frac{1}{D(z)}$. So this can be implemented as a cascade of two systems, where $X(z)P(z)$ can be defined as W(z).

(Refer Slide Time: 03:35)

So this is $Y(z) = W(z)\frac{1}{D(z)}$ where $W(z) = X(z)P(z)$. So this is a cascade of two systems.

818
(Refer Slide Time: 04:08)

Now I have X(z) given as input to the first system $H_1(z)$, and the output W(z) is given as input to the second system $H_2(z)$. So $H_1(z)$ is basically $P(z)$ and $H_2(z)$ is $\frac{1}{D(z)}$. And now let us look at $W(z) = X(z)P(z)$, which is $X(z)\left(P_0 + P_1 z^{-1} + P_2 z^{-2} + P_3 z^{-3}\right)$.
1  P2 z 2  P3 z 3 ) .

(Refer Slide Time: 05:16)

And therefore the corresponding time domain expression, or the equivalent time domain difference equation, will be

819
$w(n) = P_0 x(n) + P_1 x(n-1) + P_2 x(n-2) + P_3 x(n-3)$. And the structure for this can be given as follows.

(Refer Slide Time: 06:03)

So this is x(n), which is multiplied by P0; x(n) also goes through a delay element $z^{-1}$, giving x(n-1), which is multiplied by P1. An adder then forms P0x(n) + P1x(n-1). Going through another delay element $z^{-1}$ gives x(n-2), which is multiplied by P2 and added to the running sum.

(Refer Slide Time: 07:53)

820
And finally, there is a third delay element $z^{-1}$, whose output x(n-3) is multiplied by the gain P3 and added to the existing terms. So this gives $w(n) = P_0 x(n) + P_1 x(n-1) + P_2 x(n-2) + P_3 x(n-3)$, which basically implements $X(z)P(z)$. Now we come to the next part, that is, $Y(z) = \frac{W(z)}{D(z)}$.

(Refer Slide Time: 09:09)

So we have $Y(z) = \frac{W(z)}{D(z)} = \frac{W(z)}{1 + d_1 z^{-1} + d_2 z^{-2} + d_3 z^{-3}}$. Now once again taking the inverse z-transform, this implies that $y(n) + d_1 y(n-1) + d_2 y(n-2) + d_3 y(n-3) = w(n)$.

821
(Refer Slide Time: 10:14)

Now this can also be written as $y(n) = w(n) - d_1 y(n-1) - d_2 y(n-2) - d_3 y(n-3)$, and I can construct the IIR structure corresponding to this as follows. Let us say the output is y(n); there has to be a delay here, which gives y(n-1), and proceeding as shown in the slide we can implement this. So we have implemented the filter as a cascade of two systems. This is known as the direct form 1 realization, also termed the DF 1 realization. Now, what you can do first is interchange these two blocks; because convolution is commutative, the resulting system remains unchanged.
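As a small sketch (assuming NumPy; the coefficient values and input below are arbitrary placeholders, not from the lecture), the direct form 1 realization can be written directly from these two difference equations:

```python
import numpy as np

def direct_form_1(x, P, d):
    """Direct form 1: w(n) = sum P_k x(n-k), then y(n) = w(n) - sum d_k y(n-k)."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        # feed-forward part: w(n) = P0 x(n) + P1 x(n-1) + P2 x(n-2) + P3 x(n-3)
        w = sum(P[k] * x[n - k] for k in range(len(P)) if n - k >= 0)
        # feedback part: d1 y(n-1) + d2 y(n-2) + d3 y(n-3)
        fb = sum(d[k] * y[n - k] for k in range(1, len(d)) if n - k >= 0)
        y[n] = w - fb
    return y

# placeholder third-order coefficients (assumption)
P = [1.0, 0.5, 0.25, 0.125]          # P0..P3
d = [1.0, -0.9, 0.2, -0.05]          # 1, d1, d2, d3
x = np.random.randn(20)
print(direct_form_1(x, P, d)[:5])
```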

(Refer Slide Time: 14:18)

822
So now we interchange the two branches in DF 1, and that gives the structure shown in the slide.

(Refer Slide Time: 14:59)

After interchanging the branches of DF 1, you can observe the following: if you label these pairs of signal points 1 and 1', 2 and 2', and 3 and 3', then the signals at each pair of points are the same. This means I can fuse each pair into a single signal point, and the advantage of doing so is that it eliminates the duplication of the delays. So I can fuse, or merge, the delays.

(Refer Slide Time: 19:15)

823
This implies that these node pairs are identical, so the delays can be shared in common, which eliminates the duplicate delays.

(Refer Slide Time: 20:31)

(Refer Slide Time: 22:01)

So the 6 delays have become 3 delays, and this is termed the Direct Form 2 realization, or the DF 2 realization.

824
(Refer Slide Time: 24:33)

So previously we had 6 delay elements and we have managed to reduce this to 3 delays
by fusing them suitably. That is the advantage of the direct form 2 realization.
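A minimal sketch of the direct form 2 idea (assuming NumPy/SciPy, with the same placeholder coefficients as before): a single set of shared delay registers holds the intermediate signal, and both the feedback and feed-forward taps read from it. The output should match scipy.signal.lfilter, which realizes the same transfer function.

```python
import numpy as np
from scipy.signal import lfilter

def direct_form_2(x, P, d):
    """Direct form 2: shared delay line v(n-1), v(n-2), v(n-3) used by both branches."""
    v = np.zeros(len(d) - 1)            # shared delay registers (3 for a third-order filter)
    y = np.zeros(len(x))
    for n in range(len(x)):
        # feedback first: v(n) = x(n) - d1 v(n-1) - d2 v(n-2) - d3 v(n-3)
        vn = x[n] - np.dot(d[1:], v)
        # feed-forward: y(n) = P0 v(n) + P1 v(n-1) + P2 v(n-2) + P3 v(n-3)
        y[n] = P[0] * vn + np.dot(P[1:], v)
        v = np.concatenate(([vn], v[:-1]))   # shift the shared delay line
    return y

P = [1.0, 0.5, 0.25, 0.125]             # placeholder coefficients (assumption)
d = [1.0, -0.9, 0.2, -0.05]
x = np.random.randn(20)
print(np.allclose(direct_form_2(x, P, d), lfilter(P, d, x)))   # True
```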

(Refer Slide Time: 25:13)

So in this module we have started looking at the realization of various IIR filters. We have looked at the direct form 1 and direct form 2 realizations, and we will continue with other realizations in the subsequent modules. Thank you very much.

825
Principles of Signals and Systems
Prof. Aditya K. Jagannatham
Department of Electrical Engineering
Indian Institute of Technology, Kanpur
Lecture - 72
IIR Filter Structures: Transpose Form

Keywords: Transpose Form

Hello, welcome to another module in this massive open online course. So we are looking
at IIR filter structures and in this one we will start looking at another filter structure that
is the transpose form.

(Refer Slide Time: 00:26)

(Refer Slide Time: 01:05)

826
Now consider the transfer function $H(z) = \frac{Y(z)}{X(z)}$, given, similar to what we have considered before, as $\frac{P(z)}{D(z)}$. Now let us write it as the cascade of $\frac{1}{D(z)}$ followed by $P(z)$. So we will call $X(z)\frac{1}{D(z)}$ as W(z). And therefore we have $X(z) = W(z)D(z) = W(z)\left(1 + d_1 z^{-1} + d_2 z^{-2} + d_3 z^{-3}\right)$.

(Refer Slide Time: 02:12)

Now, converting it into the time domain, this implies that $x(n) - d_1 w(n-1) - d_2 w(n-2) - d_3 w(n-3) = w(n)$. And this is the first stage of the cascade.

(Refer Slide Time: 03:27)

827
So I can represent this as follows: x(n) enters an adder whose output is w(n), and w(n) is in turn fed back as follows. There is a delay $z^{-1}$ followed by another adder, with a multiplication by $-d_1$; then another delay element $z^{-1}$ with another gain $-d_2$; and finally another adder, a delay, and a gain $-d_3$. If you trace the signals, at the first point you have $-d_1 w(n)$, at the next $-d_2 w(n)$, and at the last $-d_3 w(n)$; after the delays these become $-d_3 w(n-1)$, then $-d_3 w(n-2) - d_2 w(n-1)$, and finally $-d_3 w(n-3) - d_2 w(n-2) - d_1 w(n-1)$, which is added to x(n) to produce w(n). So that is basically the first part, implementing $W(z) = \frac{X(z)}{D(z)}$. Now from w(n) we have to generate y(n), and we know that $W(z)P(z) = Y(z)$.

(Refer Slide Time: 07:02)

1
So we have W ( z )( P0  Pz
1  P2 z 2  P3 z 3 ) and that will be

y(n)  P0 x(n)  Pw
1 (n  1)  P2 w(n  2)  P3 w(n  3) and that is the forward path. And

now, you can see w(n), send it through gain P0, it is an adder that gives me y(n). Now,
this I have to also try send through gain P1 and this will be delay z-1 alright and that will
be give that again you have another adder here ok and that will give you this is P1w(n).
And now, I will send that through another gain that is P2 and this is will be another delay
over here and finally, you send it through another gain stage and another delay, so that
will give me the gain is P3. So that basically completes this part and basically

828
implements W(z)P(z). And now, similar to what we did for the direct form 1, we want to interchange these two branches. Once we interchange them, we can see that the adders can be combined across the two branches, and once you combine them you get the representation with the minimum number of delays and adders.

(Refer Slide Time: 11:59)

This alternative representation is termed the direct form 2 transpose.

(Refer Slide Time: 13:11)

So this is the transpose structure, which is denoted by the subscript t. You can see that this direct form 1 transpose has 6 delays and 3 adders.

829
(Refer Slide Time: 17:20)

But if you look at the direct form II transpose that has only 3 delays and 4 adders.

(Refer Slide Time: 17:43)

So the direct form II transpose has fewer delays than the direct form I transpose. This is the minimal representation in terms of the number of delays and the number of adders.
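A small sketch of the direct form II transposed structure (assuming NumPy/SciPy; the coefficients below are placeholders). This is in fact the structure that scipy.signal.lfilter uses internally, so the two outputs should match.

```python
import numpy as np
from scipy.signal import lfilter

def df2_transposed(x, b, a):
    """Direct form II transposed: state registers s hold the delayed partial sums."""
    s = np.zeros(max(len(a), len(b)) - 1)    # one state per delay element
    y = np.zeros(len(x))
    for n in range(len(x)):
        y[n] = b[0] * x[n] + s[0]
        # update the delay chain: s_k <- b_{k+1} x(n) - a_{k+1} y(n) + s_{k+1}
        for k in range(len(s) - 1):
            s[k] = b[k + 1] * x[n] - a[k + 1] * y[n] + s[k + 1]
        s[-1] = b[len(s)] * x[n] - a[len(s)] * y[n]
    return y

b = [1.0, 0.5, 0.25, 0.125]     # numerator P0..P3 (placeholder values)
a = [1.0, -0.9, 0.2, -0.05]     # denominator 1, d1, d2, d3 (placeholder values)
x = np.random.randn(20)
print(np.allclose(df2_transposed(x, b, a), lfilter(b, a, x)))   # True
```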

830
(Refer Slide Time: 18:32)

So in this module we have looked at the direct form I and its transpose structure, and also the direct form II and its transpose structure, and we have seen that the direct form II transpose results in the minimum number of adders and delays. So let us stop this module here and continue in the subsequent modules. Thank you very much.

831
Principles of Signals and Systems
Prof. Aditya K. Jagannatham
Department of Electrical Engineering
Indian Institute of Technology, Kanpur

Lecture – 73
IIR Filter Structures: Example

Keywords: IIR Filter Structures

Hello welcome to another module in this massive open online course. So we are looking
at IIR filter structures and their implementation. We have seen various forms such as the
direct form 1, direct form 2, direct form 1 transpose and direct form 2 transpose. Let us
now look at several examples to understand the implementation.

(Refer Slide Time: 00:35)

To look at IIR filter structures, let us consider this example where we have $H(z) = \frac{1 - \frac{1}{2}z^{-1}}{1 - \frac{7}{8}z^{-1} + \frac{3}{32}z^{-2}}$. So the numerator polynomial is $P(z) = 1 - \frac{1}{2}z^{-1}$.

832
(Refer Slide Time: 02:17)

So this implies that $P_0 = 1$ and $P_1 = -\frac{1}{2}$, and if you look at the D(z) polynomial you can see that $d_0 = 1$, $d_1 = -\frac{7}{8}$ and $d_2 = \frac{3}{32}$, and we can now develop the various representations.
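As a quick sketch (assuming SciPy) of how these coefficients are used, the filter can be exercised directly with scipy.signal.lfilter, where the denominator vector holds 1, d1, d2:

```python
import numpy as np
from scipy.signal import lfilter

b = [1.0, -0.5]                 # P(z) = 1 - (1/2) z^-1
a = [1.0, -7/8, 3/32]           # D(z) = 1 - (7/8) z^-1 + (3/32) z^-2

impulse = np.zeros(8)
impulse[0] = 1.0
print(lfilter(b, a, impulse))   # first few samples of the impulse response h(n)
```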

(Refer Slide Time: 03:02).

So the DF 1, direct form 1, will be as follows: x(n) is fed to an adder, and x(n) also goes through a delay $z^{-1}$ and the gain $-\frac{1}{2}$ into this adder, giving $w(n) = x(n) - \frac{1}{2}x(n-1)$. The output of this goes through another adder; in the feedback branch there is a delay $z^{-1}$ with the gain $-d_1$, that is, $\frac{7}{8}$.

833
Further, you have another delay that goes through another gain into the adder; the gains are $-d_1 = \frac{7}{8}$ and $-d_2 = -\frac{3}{32}$.

(Refer Slide Time: 04:09)

And now we want to develop the direct form 2 realization which is obtained by
interchanging the branches and then merging the common delays.

(Refer Slide Time: 05:13)

So we have the direct form 2 realization as follows, with the delays along the central path. The feed-forward gain is $P_1 = -\frac{1}{2}$, which goes as input to the output adder, and on the other side you have $-d_1 = \frac{7}{8}$, which is added to x(n); then you have

834
another delay here with the corresponding gain $-\frac{3}{32}$. You can see that you have 3 adders and 2 delays. So this is the DF 2 realization, and it results in a significant saving because the delay elements are typically the costly part of the implementation.

(Refer Slide Time: 08:11)

And now let us look at the transpose versions. You have x(n), which is passed through the feedback section: a delay followed by a gain stage with the gain $-d_1 = \frac{7}{8}$, and also through another gain stage corresponding to $-d_2 = -\frac{3}{32}$. That gives w(n), which is then passed through the feed-forward section, with an adder, another delay element and the gain $-\frac{1}{2}$; this is the direct form 1 transpose. You can see that this also has 3 adders and 3 delay elements. And now finally, let us see the direct form 2 transpose, which is obtained by interchanging the branches and then merging the adders as well as the delay elements.

835
(Refer Slide Time: 11:25)

So we have the DF 2 transpose, which can be derived as follows: x(n) goes to an adder that gives y(n); then there is a delay $z^{-1}$ and another adder, with the feedback gain $\frac{7}{8}$ and the feed-forward gain $-\frac{1}{2}$, that is, $P_1$; and finally there is another delay element $z^{-1}$ with the gain $-\frac{3}{32}$. This is your DF 2 transpose form, and it has the minimum number of adders and delays. So we have four representations, called the direct forms because the multiplier coefficients are given directly in terms of the coefficients of the actual polynomials.

So you have the direct form 1, direct form 2, direct form 1 transpose and direct form 2 transpose, and the direct form 2 transpose has the minimum number of delay elements as well as adders. We have also demonstrated the construction of these structures using a practical example. So let us stop here and we will look at other IIR structures in the subsequent modules. Thank you very much.

836
Principles of Signals and Systems
Prof. Aditya K. Jagannatham
Department of Electrical Engineering
Indian Institute of Technology, Kanpur

Lecture - 74
IIR Filter Structures: Cascade Form

Keywords: Cascade Form

Hello welcome to another module in this massive open online course. So we are looking
at IIR filter structures, let us continue our discussion by looking at another form that is
the cascade form.

(Refer Slide Time: 00:27)

We have our transfer function H(z), which is a rational function in z, that is, $H(z) = \frac{P(z)}{D(z)}$, and these polynomials are factored into first and second order factors. So $H(z) = P_0\left(\frac{1 + \beta_{11} z^{-1}}{1 + \alpha_{11} z^{-1}}\right)\left(\frac{1 + \beta_{12} z^{-1} + \beta_{22} z^{-2}}{1 + \alpha_{12} z^{-1} + \alpha_{22} z^{-2}}\right)$, and this can be realized as follows.
1

837
(Refer Slide Time: 02:19)

So you can see that this is the cascade of several first order and second order terms.

(Refer Slide Time: 03:02)

And this can be represented as follows: x(n) passes through a cascade of one section after the other. It first goes through the gain P0, then we have a delay $z^{-1}$ and an adder, the delayed signal goes through another gain $\beta_{11}$ corresponding to the numerator into the adder, and so it continues for the remaining sections. That basically shows the cascade form representation of the IIR filter structure.

838
(Refer Slide Time: 07:06)

Let us go back to the example that we have considered previously. So we have $H(z) = \frac{1 - \frac{1}{2}z^{-1}}{1 - \frac{7}{8}z^{-1} + \frac{3}{32}z^{-2}}$. This is equal to $\frac{1 - \frac{1}{2}z^{-1}}{\left(1 - \frac{3}{4}z^{-1}\right)\left(1 - \frac{1}{8}z^{-1}\right)}$.

(Refer Slide Time: 08:38)

839
Therefore, I can write this as the two factors $\frac{1 - \frac{1}{2}z^{-1}}{1 - \frac{3}{4}z^{-1}}$ and $\frac{1}{1 - \frac{1}{8}z^{-1}}$, and the representation is as follows.

(Refer Slide Time: 09:02)

So we have x(n) going into the first section: a delay $z^{-1}$ with the gains $-\frac{1}{2}$ and $\frac{3}{4}$ going into the adder. This is your first factor. Then the signal goes into the second section with a delay $z^{-1}$ and the gain $\frac{1}{8}$, and the output of this is y(n). You can see that each of these sections is a DF 2 representation. So this is the example of the cascade form. Now let us look at another form for IIR implementation: the parallel form 1, in which we decompose the transfer function into a sum, that is, a partial fraction expansion of various factors.
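Before turning to the parallel form, here is a quick numerical check of the cascade decomposition above (a sketch assuming NumPy/SciPy): filtering through the two first-order sections in cascade gives the same output as the original second-order H(z).

```python
import numpy as np
from scipy.signal import lfilter

x = np.random.randn(50)                      # arbitrary test input

# original H(z) = (1 - 0.5 z^-1) / (1 - 7/8 z^-1 + 3/32 z^-2)
y_direct = lfilter([1, -0.5], [1, -7/8, 3/32], x)

# cascade: (1 - 0.5 z^-1)/(1 - 3/4 z^-1) followed by 1/(1 - 1/8 z^-1)
y_stage1 = lfilter([1, -0.5], [1, -3/4], x)
y_cascade = lfilter([1], [1, -1/8], y_stage1)

print(np.allclose(y_direct, y_cascade))      # True: the factorisation is correct
```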

840
(Refer Slide Time: 12:13)

So we have the parallel form 1, and let us again consider $H(z) = \frac{P(z)}{D(z)}$. Now we perform a partial fraction expansion in $z^{-1}$.

(Refer Slide Time: 13:08)

 01  02   12 z 1
So we can express H(z) as H ( z )   0   . And therefore
1  11 z 1 1  12 z 1   22 z 2
what you can see is that we have split it into a partial fraction expansion of the first order
and second order factors.

841
(Refer Slide Time: 14:17)

And the structure of this parallel form can be given as follows. You have x(n), and now we will have these parallel branches. The first one is simply the gain $\gamma_0$; then you have the branch with the gain $\gamma_{01}$, the delay $z^{-1}$ and the feedback gain $-\alpha_{11}$, which is the direct form representation of the first term; and then you will have the second order factor. The branch outputs are added to give y(n), and therefore this is basically your parallel form 1.

(Refer Slide Time: 17:34)

So in this module we have looked at several examples, introduced the cascade form and the parallel form, and we will continue this discussion in the subsequent modules. Thank you very much.

842
Principles of Signals and Systems
Prof. Aditya K. Jagannatham
Department of Electrical Engineering
Indian Institute of Technology, Kanpur

Lecture - 75
IIR Filter Structures: Parallel Form - I, Parallel Form - II, Examples

Keywords: Parallel Form - I, Parallel Form - II

Hello welcome to another module in this massive open online course. So we are looking
at the parallel form 1 for IIR filter implementation. Let us look at this with the aid of an
example.

(Refer Slide Time: 00:27)

Let us consider the following example: we have $H(z) = \frac{1 - \frac{1}{2}z^{-1}}{1 - \frac{7}{8}z^{-1} + \frac{3}{32}z^{-2}} = \frac{1 - \frac{1}{2}z^{-1}}{\left(1 - \frac{3}{4}z^{-1}\right)\left(1 - \frac{1}{8}z^{-1}\right)}$, which we split into the partial fraction expansion.

843
(Refer Slide Time: 01:24)

So this can be written as $H(z) = \frac{\frac{2}{5}}{1 - \frac{3}{4}z^{-1}} + \frac{\frac{3}{5}}{1 - \frac{1}{8}z^{-1}}$. Each of these is a first order factor in the partial fraction expansion, and now we are going to illustrate the parallel form 1 structure for this IIR filter. It is very simple: there are two factors, so we will have two branches.

(Refer Slide Time: 02:10)

844
So the top branch corresponds to the first factor, $\frac{\frac{2}{5}}{1 - \frac{3}{4}z^{-1}}$. It has a gain of $\frac{2}{5}$ and then a delay $z^{-1}$ with the feedback gain $\frac{3}{4}$. The other one is the bottom branch, $\frac{\frac{3}{5}}{1 - \frac{1}{8}z^{-1}}$.

(Refer Slide Time: 03:54)

And now we have to add the outputs of both branches, so we have an adder and the output will be y(n). This is basically the parallel form 1 structure for the IIR filter. Similarly, you have the parallel form 2 where, given a transfer function that is a rational function in z, you use a partial fraction expansion in z: in parallel form 1 you use a partial fraction expansion in $z^{-1}$, whereas in parallel form 2 you use a partial fraction expansion in z.
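As a quick check of the parallel form 1 expansion above (a sketch assuming SciPy), scipy.signal.residuez computes exactly this partial fraction expansion in $z^{-1}$ and should recover the residues 2/5 and 3/5 at the poles 3/4 and 1/8:

```python
from scipy.signal import residuez

b = [1.0, -0.5]             # numerator   1 - (1/2) z^-1
a = [1.0, -7/8, 3/32]       # denominator 1 - (7/8) z^-1 + (3/32) z^-2

r, p, k = residuez(b, a)    # H(z) = sum_i r[i] / (1 - p[i] z^-1) + direct terms k
print(r)                    # residues: 2/5 and 3/5 (possibly in a different order)
print(p)                    # poles:    3/4 and 1/8
print(k)                    # empty: no direct polynomial term here
```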

845
(Refer Slide Time: 05:21)

So in parallel form 2 you have $H(z) = \frac{P(z)}{D(z)}$ and we perform a partial fraction expansion on this. Therefore the transfer function can be represented as $H(z) = \delta_0 + \frac{\delta_{11} z^{-1}}{1 + \alpha_{11} z^{-1}} + \frac{\delta_{12} z^{-1} + \delta_{22} z^{-2}}{1 + \alpha_{12} z^{-1} + \alpha_{22} z^{-2}}$. The parallel form 2 structure is given as follows.

(Refer Slide Time: 06:53)

846
(Refer Slide Time: 07:31)

So you have x(n) and the top branch, which is simply the gain, once again, that is your factor $\delta_0$; then you have a middle branch with a delay $z^{-1}$ followed by the term $\delta_{11}$, and here you have the feedback gain $-\alpha_{11}$.

(Refer Slide Time: 08:44)

Then, for the second order factor, you have two delays followed by $\delta_{12}$ and the rest of the terms. All three branch outputs are combined together to yield the final output. So this is the parallel form 2.

847
(Refer Slide Time: 11:08)

And now let us do a simple example to understand this better. Let us consider the same
transfer function H(z) that we have been considering, but now do partial fraction
expansion in z and that gives us the parallel form 2 structure.

(Refer Slide Time: 11:44)

So we have $H(z) = \frac{1 - \frac{1}{2}z^{-1}}{1 - \frac{7}{8}z^{-1} + \frac{3}{32}z^{-2}}$, and we have to do the partial fraction expansion in z.

848
(Refer Slide Time: 12:41)

So now this can be written as $H(z) = \frac{z^2 - \frac{1}{2}z}{z^2 - \frac{7}{8}z + \frac{3}{32}} = 1 + \frac{\frac{3}{8}z - \frac{3}{32}}{\left(z - \frac{3}{4}\right)\left(z - \frac{1}{8}\right)}$.

(Refer Slide Time: 13:39)

And now we need to expand this into partial fractions; solving this yields $H(z) = 1 + \frac{\frac{3}{10}}{z - \frac{3}{4}} + \frac{\frac{3}{40}}{z - \frac{1}{8}}$.

849
(Refer Slide Time: 15:11)

And now, finally, multiplying the numerator and denominator of each term by $z^{-1}$ yields $H(z) = 1 + \frac{\frac{3}{10}z^{-1}}{1 - \frac{3}{4}z^{-1}} + \frac{\frac{3}{40}z^{-1}}{1 - \frac{1}{8}z^{-1}}$. We are converting it into $z^{-1}$ because we can implement only delays. So now we can implement this parallel form 2 realization, and that is given as follows.

(Refer Slide Time: 16:01)

850
3 1
So I have x(n), the constant gain is 1, so this top branch is 1, this is z and the gain
4
3 1 3
related to this is and the final branch z 1 and there is a gain of and all these
10 8 40
branches are added and then you get the output y(n).
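As a final sketch (assuming NumPy/SciPy), one can verify this parallel form 2 expansion by recombining the three branches and comparing the frequency response with the original H(z):

```python
import numpy as np
from scipy.signal import freqz

w = np.linspace(0, np.pi, 256)

# original H(z)
_, H_orig = freqz([1, -0.5], [1, -7/8, 3/32], worN=w)

# parallel form 2: constant branch + two first-order branches with z^-1 numerators
_, H1 = freqz([0, 3/10], [1, -3/4], worN=w)
_, H2 = freqz([0, 3/40], [1, -1/8], worN=w)
H_pf2 = 1 + H1 + H2

print(np.allclose(H_orig, H_pf2))   # True: the expansion reproduces H(z)
```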

So basically what we have done is we have seen several structures for IIR filter implementation, starting from the direct forms, that is, direct form 1, direct form 2 and their transposes, and then the cascade form and also the parallel form 1 and the parallel form 2. So we will stop here and we will continue in the subsequent modules. Thank you.

851
