
Haricharan

Signals and Systems


3rd May
Signal

Some definitions:

Signal: a mathematical function of one or more independent variables (commonly time)

Classification

Continuous or Discrete time


Digital or analog signal: has to do with the amplitude of the signal
Digital: amplitude can take only discrete values. With M different values, it's called an M-ary
signal
Analog: amplitude can take any value in a continuous range
Even or odd signal
Conjugate symmetry x(-t) = x*(t)
Real part is even, Imaginary part is odd
Periodic and aperiodic:
CT: x(t) = x(t + T₀), ω₀ = 2π/T₀
DT: x[n] = x[n + N], Ω₀ = 2π/N

Causal Signal:
x(t) = 0 for t < 0
Anticausal: x(t) = 0 for t >= 0
Deterministic or Random
Deterministic can be predicted
Random cannot be predicted

Power and Energy

For a current through a resistor:

P(t) = i(t) v(t)

E = ∫_{t1}^{t2} P(t) dt

Analogously, for any signal x(t):

P(t) = |x(t)|²

E = ∫_{t1}^{t2} |x(t)|² dt

P_avg = E / (t2 − t1)
If the energy is finite, the average power is zero, and the signal is called an energy signal.

Eg.

1. x(t) is 1 if 0 < t < 1, 0 otherwise

E ∞ = 1, P ∞ = 0

If the energy is infinite but the power is finite, it is called a power signal.

Eg.
2. x[n] = 2 for all n

P ∞ = 4, E ∞ = ∞

A signal can also be neither an energy nor a power signal (e.g. the ramp signal).

Eg.

3. x(t) = t

E ∞ = ∞, P ∞ = ∞
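These energy/power computations are easy to check numerically. A minimal sketch for Example 1 (the rectangular pulse), approximating the integrals by Riemann sums over a large window [−T, T]:

```python
import numpy as np

# Numerically estimate energy and average power over [-T, T] for
# the rectangular pulse: x(t) = 1 for 0 < t < 1, else 0.
T = 1000.0
t = np.linspace(-T, T, 2_000_001)
x = np.where((t > 0) & (t < 1), 1.0, 0.0)

dt = t[1] - t[0]
E = np.sum(np.abs(x) ** 2) * dt   # energy: integral of |x(t)|^2
P = E / (2 * T)                   # average power over the window

print(E)   # ~1: finite energy
print(P)   # ~0: vanishing power -> an energy signal
```

As T grows, E stays at 1 while P → 0, matching the energy-signal classification.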

Signal Transformation

Time shifting

x(t) -> x(t - T)

T units delayed

x(t) -> x(t + T)

T units advanced


Time Scaling

CT:

x(at), a > 1: squishes the signal by a factor of a

x(t/a), a > 1: stretches the signal by a factor of a
x(−t): reflection about the y axis

DT:

x[an] (integer a > 1) squishes the signal, but loses information. E.g. for x[3n]: x[1], x[2], x[4], x[5],
etc. are thrown away.
x[n/a] stretches the signal. E.g. x[n/2] would stretch the signal by 2.
However, what do we do for the intermediate values? We interpolate the signal.
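A quick numerical sketch of DT scaling with NumPy, using linear interpolation for the in-between samples (the signal x[n] = n is illustrative):

```python
import numpy as np

x = np.arange(10)                # illustrative signal: x[n] = n

# x[3n]: keep every third sample; x[1], x[2], x[4], ... are thrown away.
down = x[::3]

# x[n/2]: stretch by 2, then fill the in-between values by interpolation.
n_new = np.arange(2 * len(x) - 1)                 # stretched time axis
up = np.interp(n_new / 2, np.arange(len(x)), x)   # linear interpolation

print(down)     # [0 3 6 9]
print(up[:5])   # [0.  0.5 1.  1.5 2. ]
```

Note that down-then-up does not recover the original: the thrown-away samples are gone for good.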

Amplitude Scaling:

a x(t), a > 1: stretch along the y axis

−x(t): flip about the x axis

Elementary Signals

1. Exponential

x(t) = Ā e^(s̄t) = A e^(jθ) e^((σ+jω)t), where Ā = A e^(jθ)

s̄ = σ + jω -> complex frequency

Real signal: x(t) + x*(t)

Imaginary signal: x(t) − x*(t)

s̄ can be plotted on the complex plane

Depending on sign of σ:
Anything in the left-half plane is decaying exponential
Anything on imaginary axis is neither growing nor decaying, i.e. it is pure sinusoid
Anything on right half plane is growing exponential
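A small numerical sketch of how the sign of σ controls the envelope (the value ω = 2 rad/s is arbitrary):

```python
import numpy as np

# x(t) = e^(st) with s = sigma + j*omega: the sign of sigma decides
# whether the envelope |x(t)| = e^(sigma*t) decays, stays flat, or grows.
t = np.linspace(0, 10, 1001)
envelopes = {}
for sigma in (-0.5, 0.0, 0.5):
    x = np.exp((sigma + 2.0j) * t)     # omega = 2 rad/s, illustrative
    envelopes[sigma] = np.abs(x[-1])   # envelope value at t = 10

print(envelopes)  # decays for -0.5, stays at 1 for 0.0, grows for 0.5
```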

2. Unit step function or Heaviside step function

CT: u(t) = 1 for t > 0, 0 for t < 0

DT: u[n] = 1 for n >= 0, 0 for n < 0

Make any signal into a causal signal: x(t) u(t) is causal.


Gating: multiplying by u(t − t1) − u(t − t2) keeps x(t) only in the window t1 < t < t2.

3. Unit impulse or unit delta or Dirac delta function: a generalized function

δ(t) = δ(−t)

Sifting property: ∫_{−∞}^{∞} f(t) δ(t − T) dt = f(T)

4. Unit Ramp function

r(t) = t u(t)

δ(t) = du(t)/dt = d²r(t)/dt²

System

Continuous Time System

Input Signal -> System -> Output Signal

x (t) y(t)

Eg. Some electric circuit:

Vs = x(t)

i = (Vs − Vc)/R = C dVc/dt

dVc/dt + Vc/(RC) = Vs/(RC)

Eg. Cart: this is similar to a cart with friction: x(t) is the external force, and the a y(t) term corresponds to friction.

dy(t)/dt + a y(t) = b x(t)

This is a first-order system!

Discrete Time system

Input Signal -> System -> Output signal

x[n] y[n]

n = 0, ±1, ±2, . . .

Eg. Bank monthly balance:

y[n] is monthly balance


x[n] is net amount deposited = Deposit - Withdrawal
1% Interest also

y[n] = x[n] + 1.01y[n − 1]

This is a difference equation
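The balance recursion is easy to iterate directly. A minimal sketch (the deposit sequence and function name are illustrative):

```python
# Simulate the balance recursion y[n] = x[n] + 1.01*y[n-1].
def balance(deposits, y0=0.0, rate=0.01):
    y = y0
    history = []
    for x_n in deposits:
        y = x_n + (1 + rate) * y   # interest on last month's balance
        history.append(y)
    return history

# Deposit 100 every month for 3 months:
print(balance([100, 100, 100]))  # ~ [100.0, 201.0, 303.01]
```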

Eg. The same cart can also be represented in DT, with a time step Δn. The CT equation

dy(t)/dt + a y(t) = b x(t)

becomes (with y = v, x = f, a = ρ/m, b = 1/m)

(v[n] − v[n−1])/Δn + (ρ/m) v[n] = f[n]/m
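The cart's difference equation can be simulated directly: solving (v[n] − v[n−1])/Δn + (ρ/m) v[n] = f[n]/m for v[n] gives a one-line update. The parameter values below are illustrative:

```python
# Solve the cart's difference equation step by step:
# (v[n] - v[n-1])/dt + (rho/m)*v[n] = f[n]/m  =>  solve for v[n].
def simulate(f, dt=0.01, rho=0.5, m=1.0, v0=0.0):
    v = v0
    out = []
    for f_n in f:
        v = (v + dt * f_n / m) / (1 + dt * rho / m)
        out.append(v)
    return out

# Constant force: v[n] approaches the terminal velocity f/rho = 2.0.
vs = simulate([1.0] * 5000)
print(vs[-1])  # close to 2.0
```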

For CT-> Differential equation, For DT-> Difference equation

Important:

If we come up with methods to solve one class of systems, we can do the same thing for other systems
in the same class.

Classification of systems
A system is an interconnection of subsystems and devices that transforms an input into an output

CT and DT systems
Systems with and without memory
Without memory:
Resistor system: v(t) = i(t) R
Output depends only on the present input value
With memory:
Output depends on past (or future) values too
It "stores" energy!
Capacitor system: v(t) = (1/C) ∫_{−∞}^{t} i(τ) dτ
Summer: y[n] = ∑_{k=−∞}^{n} x[k] -> computes a running sum
Invertible and non-invertible
Invertible: If there exists S2 for S1, such that x(t) -> S1 -> y(t) -> S2 -> x(t), S1 is said to be
invertible
Eg. y(t) = 2 x(t)
Non-invertible
Eg. y(t) = 0
Causal and Non-causal system (not signal)
Causal:
Output at any time depends on values from present or past times not future
Eg: y(t) = x(t) cos (t + 1)
Non-causal
Eg. y[n] = x[n] - x[n + 1]
Stability
Stable system:
Small inputs lead to responses that do not diverge
|x(t)| < Bx < ∞ implies |y(t)| < By < ∞

BIBO -> Bounded input, bounded output


Time invariance
Time invariant: behaviour and characteristics do not vary with time
x(t) → y(t) ⟹ x(t − T0) → y(t − T0)

x(t) -> S -> Delay -> y(t − T0) equals x(t) -> Delay -> x(t − T0) -> S -> y(t − T0)
Eg. y(t) = sin(x(t))
No explicit time dependence
Time variant
Eg. y[n] = n x[n]
Explicit time dependence
Linearity
Linear system:
a x1(t) + b x2(t) -> S -> a y1(t) + b y2(t)
Eg. y(t) = t x(t)
Non-linear
Eg. y(t) = m x(t) + c (the constant offset c breaks superposition)
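Both examples can be checked numerically against the superposition definition. A minimal sketch (test inputs x1, x2 and constants are arbitrary):

```python
import numpy as np

# Numerically check superposition for the two examples above.
t = np.linspace(0, 1, 101)
x1, x2 = np.sin(5 * t), np.cos(3 * t)
a, b = 2.0, -3.0

sys_lin = lambda x: t * x              # y(t) = t x(t)
lin_ok = np.allclose(sys_lin(a * x1 + b * x2),
                     a * sys_lin(x1) + b * sys_lin(x2))

sys_aff = lambda x: 2 * x + 1          # y(t) = m x(t) + c, m=2, c=1
aff_ok = np.allclose(sys_aff(a * x1 + b * x2),
                     a * sys_aff(x1) + b * sys_aff(x2))

print(lin_ok, aff_ok)  # True False: the offset c breaks superposition
```

Note y(t) = t x(t) is linear even though it is time variant: the two classifications are independent.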

We're worried only about LTI systems in this course.

11th May
Convolution of two functions:

x[n] ∗ y[n] = ∑_{k=−∞}^{∞} x[k] y[n − k]
Imagine a DT system that is linear

Sifting property:

∑_{k=−∞}^{∞} x[k] δ[n − k] = x[n]

Consider an LTI system:

x[n] -> S -> y[n]

δ[n − k] -> S -> h_k[n] = h[n − k]

x[n] = ∑_{k=−∞}^{∞} x[k] δ[n − k] → ∑_{k=−∞}^{∞} x[k] h[n − k] = y[n]

This is called the superposition sum or convolution sum -> y[n] = ∑_{k=−∞}^{∞} x[k] h[n − k]

where h[n] is the unit impulse response.

y[n] = x[n] ∗ h[n]
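The convolution sum can be transcribed directly into code. A sketch for finite-length causal sequences, with a cross-check against NumPy's built-in:

```python
import numpy as np

# Direct transcription of the convolution sum
# y[n] = sum_k x[k] h[n-k], for finite-length causal x and h.
def conv_sum(x, h):
    y = [0.0] * (len(x) + len(h) - 1)
    for n in range(len(y)):
        for k in range(len(x)):
            if 0 <= n - k < len(h):
                y[n] += x[k] * h[n - k]
    return y

x = [1, 2, 3]
h = [1, 1]                     # a two-tap impulse response
print(conv_sum(x, h))          # [1.0, 3.0, 5.0, 3.0]
print(np.convolve(x, h))       # matches NumPy's result
```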

CT LTI system

Wow!

The response of the system is determined by unit impulse response

We can give an unit impulse, see what it does, then convolve x[n] to figure out y[n]!

δ_Δ(t − kΔ) → h_{kΔ}(t)

x(t) ≈ ∑_{k=−∞}^{∞} x(kΔ) δ_Δ(t − kΔ) Δ

In the limit Δ → 0: ∫_{−∞}^{∞} x(τ) δ(t − τ) dτ = x(t)

Using linearity:

∑_{k=−∞}^{∞} x(kΔ) δ_Δ(t − kΔ) Δ → ∑_{k=−∞}^{∞} x(kΔ) h_{kΔ}(t) Δ

Using time invariance, in the limit:

y(t) = ∫_{−∞}^{∞} x(τ) h(t − τ) dτ = x(t) ∗ h(t)

17th May
Properties:

Commutative property

x[n] ∗ h[n] = h[n] ∗ x[n]

y[n] = ∑_{k=−∞}^{∞} x[k] h[n − k] = ∑_{k=−∞}^{∞} x[n − k] h[k]

(substitute k → n − k in the summation)

- Distributive property

x[n] ∗ (h1[n] + h2[n]) = x[n] ∗ h1[n] + x[n] ∗ h2[n]

Proof: y[n] = ∑_{k=−∞}^{∞} x[k](h1[n − k] + h2[n − k]) = ∑_{k=−∞}^{∞} x[k] h1[n − k] + ∑_{k=−∞}^{∞} x[k] h2[n − k] = y1[n] + y2[n]

- Associative property

(x[n] ∗ h1[n]) ∗ h2[n] = x[n] ∗ (h1[n] ∗ h2[n])

x(t) -> S1 -> y1(t) -> S2 -> y2(t)

δ[n] → h1[n], δ[n] → h2[n]

Using commutativity,

(x[n] ∗ h1[n]) ∗ h2[n] = (x[n] ∗ h2[n]) ∗ h1[n]

If we're passing a signal x(t) through many LTI systems, the order doesn't matter.

- For memoryless systems

h[n] = A δ[n]

y[n] = ∑_{k=−∞}^{∞} x[k] A δ[n − k] = A x[n]

- Invertibility

x[n] ∗ (h1[n] ∗ h2[n]) = x[n] = x[n] ∗ δ[n]

⟹ h1[n] ∗ h2[n] = δ[n]

- Causality

h[n] = 0 for n < 0

Reason -> the unit impulse is 0 for n < 0, so for a causal system the unit impulse response must also be 0 for n < 0

- Stability

Consider a bounded input |x[n]| < B.

|y[n]| = |∑_{k=−∞}^{∞} x[k] h[n − k]| ≤ B ∑_{k=−∞}^{∞} |h[k]|

So the system is BIBO stable iff ∑_{k=−∞}^{∞} |h[k]| < ∞
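A quick numerical illustration of the absolute-summability condition, using the assumed example h[n] = aⁿ u[n] (a geometric impulse response):

```python
# BIBO stability of an LTI system reduces to sum |h[k]| < infinity.
# Partial sums of |h[n]| = |a|^n for n >= 0 show the dichotomy.
def abs_sum(a, n_terms=500):
    return sum(abs(a) ** n for n in range(n_terms))

print(abs_sum(0.5))   # ~2.0: geometric series converges -> stable
print(abs_sum(1.5))   # astronomically large -> unstable
```

For |a| < 1 the sum converges to 1/(1 − |a|); for |a| ≥ 1 the partial sums diverge.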

18th May
Unit Step Response:

u(t) -> System -> s(t)

s(t) = u(t) ∗ h(t) = h(t) ∗ u(t)

s[n] = h[n] ∗ u[n] = ∑_{k=−∞}^{∞} h[k] u[n − k] = ∑_{k=−∞}^{n} h[k]

h[n] = s[n] − s[n − 1]

s(t) = ∫_{−∞}^{∞} u(t − τ) h(τ) dτ = ∫_{−∞}^{t} h(τ) dτ

h(t) = ds(t)/dt
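These step/impulse relations are easy to verify numerically. A sketch with an illustrative impulse response h[n] = 0.8ⁿ u[n]:

```python
import numpy as np

# Verify: s[n] is the running sum of h[k], and the first difference
# h[n] = s[n] - s[n-1] recovers the impulse response.
n = np.arange(20)
h = 0.8 ** n                           # illustrative causal h[n]

s = np.cumsum(h)                       # step response: sum_{k<=n} h[k]
h_rec = np.diff(s, prepend=0.0)        # first difference recovers h[n]

print(np.allclose(h, h_rec))           # True
```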

19th May

FOURIER LESGOO

CT system

x(t) = e^(st)

y(t) = ∫_{−∞}^{∞} h(τ) e^(s(t−τ)) dτ = e^(st) ∫_{−∞}^{∞} h(τ) e^(−sτ) dτ = e^(st) H(s)

e^(st) -> eigenfunction of the system

H(s) -> eigenvalue of the system

DT system

x[n] = z^n

y[n] = ∑_{k=−∞}^{∞} h[k] z^(n−k) = z^n ∑_{k=−∞}^{∞} h[k] z^(−k) = z^n H(z)

z^n is the eigenfunction of the system

H(z) is the eigenvalue of the system
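The eigenfunction property can be checked numerically: feeding z^n through the convolution sum should return the same z^n scaled by H(z). The impulse response h and the value of z below are illustrative:

```python
import numpy as np

# For x[n] = z^n, the convolution sum gives y[n] = H(z) z^n,
# with H(z) = sum_k h[k] z^(-k).
h = np.array([1.0, 0.5, 0.25])          # illustrative finite h[n]
z = np.exp(0.3j)                        # a point on the unit circle

ks = np.arange(len(h))
n = np.arange(10, 20)                   # any window of output samples
y = np.array([np.sum(h * z ** (m - ks)) for m in n])

H = np.sum(h * z ** (-ks))              # the eigenvalue H(z)
print(np.allclose(y, H * z ** n))       # True
```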

If I can decompose x(t) into complex exponentials:

x(t) = ∑_{k=−∞}^{∞} a_k e^(s_k t) → y(t) = ∑_{k=−∞}^{∞} a_k H(s_k) e^(s_k t)

If x(t) is periodic, take s_k = jkω₀:

ϕ_k = e^(jkω₀t)

Any arbitrary periodic x(t) = ∑_{k=−∞}^{∞} a_k e^(jkω₀t)

For real x(t), the coefficients are conjugate-symmetric: a_{−k} = a_k*
