ECE354 Module 1
VISION
A provider of relevant and quality education to a
society where citizens are competent, skilled,
dignified and community-oriented.
MISSION
An academic institution providing technological,
professional, research and extension programs to
form principled men and women of competencies
and skills responsive to local and global
development needs.
QUALITY POLICY
Northwest Samar State University commits to
provide quality outcomes-based education,
research, extension and production through
continual improvement of all its programs, thereby
producing world class professionals.
CORE VALUES
Resilience. Integrity. Service. Excellence.
Table of Contents

Module 1: Classification and Characteristics of Signals
    Module Description
    Purpose of the Module
    Module Guide
    Module Outcomes
    Module Requirements
    Module Pre-test
Learning Plan
    Let’s Get Started
        Lesson 1: What is a Signal?
        Lesson 2: Analog versus Digital
        Lesson 3: Basic Elements of a Digital Signal Processing System
        Lesson 4: Advantages of Digital over Analog Signal Processing
        Lesson 5: Classifications of Signals
        Lesson 6: The Concept of Frequency in Continuous-time and Discrete-time Signals
    Let’s Do This
        Exercise
Module Post-test
References
Signals, Spectra, and Signal Processing
Rationale
In this course we present the fundamentals of discrete-time signals, systems, and
modern digital processing, as well as applications, for students in electronics engineering. The
material is suitable for a one-semester undergraduate course in discrete systems and digital
signal processing. It is assumed that the student has had undergraduate courses in advanced
calculus (including ordinary differential equations) and in linear systems for continuous-time
signals, including an introduction to the Laplace transform. We expect that many students may
have had this material in a prior course.
Course Code: ECE 354
Course Description: Fourier transform; z transform; convolution; FIR filters; IIR filters;
random signal analysis; correlation functions; DFT; FFT; spectral analysis; applications of
signal processing to speech, image, etc.
Course Outcomes: Upon completion of the course, the student must be able to conceptualize,
analyze and design signals, spectra and signal processing systems.
Course Content:
As explained above, ECE 354 introduces students to the concepts, theories, principles
and practice of signal processing, and discusses how these are applied to develop interventions
that address practical problems.
The table below shows the outline of the topics to be discussed in the lecture per week
vis-à-vis the course outcomes. It is designed based on the course syllabus approved by the
Dean of the College of Engineering Technology.
1. Auto-Correlation Functions
   a. Properties of the Auto-Correlation Function of an Energy Signal
   b. Auto-Correlation Function of Power Signals
Course Requirements:
OTHER REQUIREMENTS:
Aside from the major course outputs stated above, this course requires two (2) major exams:
the midterm and final examinations. In addition, each topic will require you to produce outputs.
Grading Criteria:
Course Materials:
RUBRICS FOR ASSESSMENT: Simulation Project Rubric
Criterion: Documentation (10%)
   1 – Did not follow the required format. Poorly written and organized. Difficult to follow.
   2 – Did not follow the required format. Well written and organized. Easy to follow.
   3 – Followed the required format. Adequately written and organized. Reasonably easy to follow.
   4 – Followed the required format. Well written and organized. Easy to follow. References were properly cited.
CLASS POLICIES
1. Problem sets will be given at least one week before the completion of the major topics
covered. There will be three major problem sets, each to be submitted before the
scheduled major exam.
2. There will be two (2) major exams in this course. Make-up exams will be given provided
that the reason for not taking the exam is excused, as approved by the dean of the college.
References:
1. Weeks, Michael. Digital Signal Processing Using MATLAB and Wavelets.
2. Proakis, John G. Digital Signal Processing. 4th Edition. Pearson.
3. Ziemer, Rodger E. and Tranter, William H. Principles of Communications: Systems,
Modulation and Noise. Wiley, 2015.
Module 1
Module Title: Classification and Characteristics of Signals
Module Description:
In this module our objective is to present an introduction of the basic analysis tools and
techniques for digital processing of signals. We begin by introducing some of the necessary
terminology and by describing the important operations associated with the process of
converting an analog signal to digital form suitable for digital processing. As we shall see,
digital processing of analog signals has some drawbacks. First, and foremost, conversion of an
analog signal to digital form, accomplished by sampling the signal and quantizing the samples,
results in a distortion that prevents us from reconstructing the original analog signal from the
quantized samples. Control of the amount of this distortion is achieved by proper choice of the
sampling rate and the precision in the quantization process. Second, there are finite precision
effects that must be considered in the digital processing of the quantized samples. While these
important issues are considered in some detail in this module, the emphasis is on the analysis
and design of digital signal processing systems and computational techniques.
Purpose of the module:
This module helps students understand the classification and characteristics of signals
so that they are equipped for the later topics of this course.
Module Outcomes:
After completing this module, students will be able to:
1. Explain what a signal, a system, and signal processing are.
2. Describe the basic elements of a digital signal processing system.
3. Discuss the advantages of digital over analog signal processing.
4. Discuss the four classifications of signals.
5. Explain the concept of frequency in continuous-time and discrete-time signals.
Module Requirements:
At the end of this module, the students will be assessed by a long quiz/exam through
Moodle/Google Classroom.
Module Pretest:
Classify the following signals according to whether they are (1) one-dimensional or
multidimensional; (2) single-channel or multichannel; (3) continuous-time or discrete-time;
and (4) analog or digital (in amplitude). Give a brief explanation.
Key Terms:
Learning Plan
Lesson No: 1
Lesson Title: What is a Signal?
Discussions:
A signal is a varying phenomenon that can be measured. It is often a physical quantity
that varies with time, though it could vary with another parameter, such as space. Examples
include sound (or more precisely, acoustical pressure), a voltage (such as the voltage
differences produced by a microphone), radar, and images transmitted by a video camera.
Temperature is another example of a signal. Measured every hour, the temperature will
fluctuate, typically going from a cold value (in the early morning) to a warmer one (late
morning), to an even warmer one (afternoon), to a cooler one (evening) and finally a cold value
again at night. Often, we must examine the signal over a period of time. For example, if you
are planning to travel to a distant city, the city’s average temperature may give you a rough
idea of what clothes to pack. But if you look at how the temperature changes over a day, it will
let you know whether or not you need to bring a jacket.
Signals may include error due to limitations of the measuring device, or due to the
environment. For example, a temperature sensor may be affected by wind chill. At best, signals
represented by a computer are good approximations of the original physical processes.
Some real signals can be measured continuously, such as the temperature. No matter
what time you look at a thermometer, it will give a reading, even if the time between the
readings is arbitrarily small. We can record the temperature at intervals of every second, every
minute, every hour, etc. Once we have recorded these measurements, we understand intuitively
that the temperature has values between the readings, though we do not know what those values
are. If a cold wind blows, the temperature goes down, and if the sun shines through the clouds,
then it goes up. For example, suppose we measure the temperature every hour. By doing this,
we are choosing to ignore the temperature for all time except for the hourly readings. This is
an important idea: the signal may vary over time, but when we take periodic readings of the
signal, we are left with only a representation of the signal.
A signal can be thought of as a (continuous or discrete) sequence of (continuous or
discrete) values. That is, a continuous signal may have values at any arbitrary index value (you
can measure the temperature at noon, or, if you like, you can measure it at 0.0000000003
seconds after noon). A discrete signal, however, has restrictions on the index, typically that it
must be an integer. For example, the mass of each planet in our solar system could be recorded,
numbering the planets according to their relative positions from the sun. For simplicity, a
discrete signal is assumed to have an integer index, and the relationship between the index and
time (or whatever parameter) must be given. Likewise, the values for the signal can be with an
arbitrary precision (continuous), or with a restricted precision (discrete). That is, you could
record the temperature out to millionths of a degree, or you could restrict the values to
something reasonable like one digit past the decimal. Discrete does not mean integer, but rather
that the values could be stored as a rational number (an integer divided by another integer). For
example, 72.3 degrees Fahrenheit could be thought of as 723/10. What this implies is that
irrational numbers cannot be stored in a computer, but only approximated. π is a good example.
You might write 3.14 for π, but this is merely an approximation. If you wrote 3.141592654 to
represent π, this is still only an approximation. In fact, you could write π out to 50 million
digits, but it would still be only an approximation!
It is possible to consider a signal whose index is continuous and whose values are
discrete, such as the number of people who are in a building at any given time. The index (time)
may be measured in fractions of a second, while the number of people is always a whole
number. It is also possible to have a signal where the index is discrete, and the values are
continuous; for example, the time of birth of every person in a city. Person #4 might have been
born only 1 microsecond before person #5, but they technically were not born at the same time.
That does not mean that two people cannot have the exact same birth time, but that we can be
as precise as we want with this time. Table 1.1 gives a few example signals, with continuous
as well as discrete indices and quantities measured.
For the most part, we will concentrate on continuous signals (which have a continuous
index and a continuous value), and discrete signals (with an integer index and a discrete value).
Most signals in nature are continuous, but signals represented inside a computer are discrete.
A discrete signal is often an approximation of a continuous signal. One notational convention
that we adopt here is to use x[n] for a discrete signal, and x(t) for a continuous signal. This is
useful since you are probably already familiar with seeing the parentheses for mathematical
functions, and using square brackets for arrays in many computer languages.
Therefore, there are 4 kinds of signals:
• A signal can have a continuous value for a continuous index. The “real world” is full of such
signals, but we must approximate them if we want a digital computer to work with them.
Mathematical functions are also continuous/continuous.
• A signal can have a continuous value for a discrete index.
• A signal can have a discrete value for a continuous index.
• A signal can have a discrete value for a discrete index. This is normally the case in a computer,
since it can only deal with numbers that are limited in range. A computer could calculate the
value of a function (e.g., sin(x)), or it could store a signal in an array (indexed by a positive
integer). Technically, both of these are discrete/discrete signals.
For example, the MATLAB statement

mySignal = [4.0 4.6 5.1 0.6 6.0]

produces the output

mySignal =
4.0000 4.6000 5.1000 0.6000 6.0000

Notice that the MATLAB version is much more compact, without the need to declare
variables first. For simplicity, most of the code in this book is done in MATLAB.
A signal is defined as any physical quantity that varies with time, space, or any other
independent variable or variables. Mathematically, we describe a signal as a function of one or
more independent variables. For example, the functions

s₁(t) = 5t        s₂(t) = 20t²        (1.1.1)

describe two signals, one that varies linearly with the independent variable t (time) and a
second that varies quadratically with t. As another example, consider the function

s(x, y) = 3x + 2xy + 10y²        (1.1.2)

This function describes a signal of two independent variables x and y that could represent the
two spatial coordinates in a plane.
The signals described by (1.1.1) and (1.1.2) belong to a class of signals that are precisely
defined by specifying the functional dependence on the independent variable. However, there
are cases where such a functional relationship is unknown or too complicated to be of any
practical use.
For example, a speech signal (see Fig. 1.1.1) cannot be described functionally by
expressions such as (1.1.1). In general, a segment of speech may be represented to a high degree
of accuracy as a sum of several sinusoids of different amplitudes and frequencies, that is, as

x(t) = Σ Aᵢ(t) sin[2π Fᵢ(t) t + θᵢ(t)],  summed over i = 1, …, N        (1.1.3)

where {Aᵢ(t)}, {Fᵢ(t)}, and {θᵢ(t)} are the sets of (possibly time-varying) amplitudes,
frequencies, and phases, respectively, of the sinusoids. In fact, one way to interpret the
information content or message conveyed by any short time segment of the speech signal is to
measure the amplitudes, frequencies, and phases contained in that short time segment of the
signal.
Another example of a natural signal is an electrocardiogram (ECG). Such a signal
provides a doctor with information about the condition of the patient’s heart. Similarly, an
electroencephalogram (EEG) signal provides information about the activity of the brain.
Speech, electrocardiogram, and electroencephalogram signals are examples of
information-bearing signals that evolve as functions of a single independent variable, namely,
time. An example of a signal that is a function of two independent variables is an image signal.
The independent variables in this case are the spatial coordinates. These are but a few examples
of the countless number of natural signals encountered in practice.
Associated with natural signals are the means by which such signals are generated. For
example, speech signals are generated by forcing air through the vocal cords. Images are
obtained by exposing a photographic film to a scene or an object. Thus signal generation is
usually associated with a system that responds to a stimulus or force. In a speech signal, the
system consists of the vocal cords and the vocal tract, also called the vocal cavity. The stimulus
in combination with the system is called the signal source. Thus we have speech sources,
image sources, and various other types of signal sources.
A system may also be defined as a physical device that performs an operation on a
signal. For example, a filter used to reduce the noise and interference corrupting a desired
information-bearing signal is called a system. In this case the filter performs some operation(s)
on the signal, which has the effect of reducing (filtering) the noise and interference from the
desired information-bearing signal.
When we pass a signal through a system, as in filtering, we say that we have processed
the signal. In this case the processing of the signal involves filtering the noise and interference
from the desired signal. In general, the system is characterized by the type of operation that it
performs on the signal. For example, if the operation is linear, the system is called linear. If the
operation on the signal is nonlinear, the system is said to be nonlinear, and so forth. Such
operations are usually referred to as signal processing.
For our purposes, it is convenient to broaden the definition of a system to include not
only physical devices, but also software realizations of operations on a signal. In digital
processing of signals on a digital computer, the operations performed on a signal consist of a
number of mathematical operations as specified by a software program. In this case the program
represents an implementation of the system in software. Thus we have a system that is realized
on a digital computer by means of a sequence of mathematical operations; that is, we have a
digital signal processing system realized in software. For example, a digital computer can be
programmed to perform digital filtering. Alternatively, the digital processing on the signal may
be performed by digital hardware (logic circuits) configured to perform the desired specified
operations. In a broader sense, a digital system can be implemented as a combination of digital
hardware and software, each of which performs its own set of specified operations.
Lesson No: 2
Lesson Title: Analog versus Digital
Discussions:
There are two kinds of signals: analog and digital. The word analog is related to the
word analogy; a continuous (“real world”) signal can be converted to a different form, such as
the analog copy seen in Figure 1.3. Here we see a representation of air pressure sensed by a
microphone in time. A cassette tape recorder connected to the microphone makes a copy of
this signal by adjusting the magnetization on the tape medium as it passes under the read/write
head. Thus, we get a copy of the original signal (air pressure varying with time) as
magnetization along the length of a tape. Of course, we can later read this cassette tape, where
the magnetism of the moving tape affects the electricity passing through the read/write head,
which can be amplified and passed on to speakers that convert the electrical variations to air
pressure variations, reproducing the sound.
An analog signal is one that has continuous values, that is, a measurement can be taken
at any arbitrary time, and for as much precision as desired (or at least as much as our measuring
device allows). Analog signals can be expressed in terms of functions. A well-understood
signal might be expressed exactly with a mathematical function, or approximately with such a
function if the approximation is good enough for the application. A mathematical function is
very compact and easy to work with. When referring to analog signals, the notation x(t) will
be used. The time variable t is understood to be continuous, that is, it can be any value: t = −1
is valid, as is t = 1.23456789.
A digital signal, on the other hand, has discrete values. Getting a digital signal from an
analog one is achieved through a process known as sampling, where values are measured
(sampled) at regular intervals, and stored. For a digital signal, the values are accessed through
an index, normally an integer value. To denote a digital signal, x[n] will be used. The variable
n is related to t with the equation t = nTs, where Ts is the sampling time. If you measure the
outdoor temperature every hour, then your sampling time Ts = 1 hour, and you would take a
measurement at n = 0 (0 hours, the start time), then again at n = 1 (1 hour), then at n = 2 (2
hours), then at n = 3 (3 hours), etc. In this way, the signal is quantized in time, meaning that
we have values for the signal only at specific times.
The sampling time does not need to be a whole value, in fact it is quite common to have
signals measured in milliseconds, such as Ts = 0.001 seconds. With this sampling time, the
signal will be measured every nTs seconds: 0 seconds, 0.001 seconds, 0.002 seconds, 0.003
seconds, etc. Notice that n, our index, is still an integer, having values of 0, 1, 2, 3, and so forth.
A signal measured at Ts = 0.001 seconds is still quantized in time. Even though we have
measurements at 0.001 seconds and 0.002 seconds, we do not have a measurement at 0.0011
seconds.
Figure 1.4 shows an example of sampling. Here we have a (simulated) continuous curve
shown in time, top plot. Next we have the sampling operation, which is like multiplying the
curve by a set of impulses which are one at intervals of every 0.02 seconds, shown in the middle
plot. The bottom plot shows our resulting digital signal, in terms of sample number. Of course,
the sample number directly relates to time (in this example), but we have to remember the time
between intervals for this to have meaning. In this text, we will start the sample number at zero,
just like we would index an array in C/C++. However, we will add one to the index in
MATLAB code, since MATLAB indexes arrays starting at 1.
Suppose the digital signal x has values x[1] = 2, and x[2] = 4. Can we conclude that
x[1.5] = 3? This is a problem, because there is no value for x[n] when n = 1.5. Any interpolation
done on this signal must be done very carefully! While x[1.5] = 3 may be a good guess, we
cannot conclude that it is correct (at the very least, we need more information). We simply do
not have a measurement taken at that time.
Digital signals are quantized in amplitude as well. When a signal is sampled, we store
the values in memory. Each memory location has a finite amount of precision. If the number
to be stored is too big, or too small, to fit in the memory location, then a truncated value will
be stored instead. As an analogy, consider a gas pump. It may display a total of 5 digits for the
cost of the gasoline; 3 digits for the dollar amount and 2 digits for the cents. If you had a huge
truck, and pumped in $999.99 worth of gas, then pumped in a little bit more (say 2 cents worth),
the gas pump will likely read $000.01 since the cost is too large a number for it to display.
Similarly, if you went to a gas station and pumped in a fraction of a cent’s worth of gas, the
cost would be too small a number to display on the pump, and it would likely read $000.00.
Like the display of a gas pump, the memory of a digital device has a finite amount of precision.
When a value is stored in memory, it must not be too large nor too small, or the amount that is
actually stored will be cut to fit the memory’s capacity. This is what is meant by quantization
in amplitude. Incidentally, since “too small” in this context means a fractional amount below
the storage ability of the memory, a number could be both too large and “too small” at the same
time. For example, the gas pump would show $000.00 even if $1000.004 were the actual cost
of the gas.
Consider the following C/C++ code, which illustrates a point about precision.
This code appears to be an infinite loop, since variable i starts out greater than 0 and always
increases. If we run this code, however, it will end. To see why this happens, we will look at the
internal representation of the variable i. Suppose our computer has a word size of only 3 bits,
and that the compiler uses this word size for integers. This is not realistic, but the idea holds
even if we had 32, 64, or 128 bits for the word size instead. Variable i starts out at the
decimal value 1, which, of course, is 001 in binary. As it increases, it becomes 010, then 011,
100, 101, 110, and finally 111. Adding 1 results in 1000 in binary, but since we only use 3
bits for integers the leading 1 is an overflow, and 000 will be stored in i. Thus, the
condition to terminate the while loop is met. Notice that this would also be the case if we
removed the unsigned keywords, since the above values for i would be the same; only our
interpretation of the decimal number that they represent would change.

Figure 1.5 shows how a signal can appear in three different ways: as an analog signal, as
a digital signal, and as an analog signal based upon the digital version. That is, we may have
a “real world” signal that we digitize and process in a computer, then convert back to the
“real world.” At the top, this figure shows how a simulated analog signal may appear if we
were to view it on an oscilloscope. If we had an abstract representation for it, we would call
this analog signal x(t). In the middle graph of this figure, we see how the signal would appear
if we measured it once every 10 milliseconds. The digital signal consists of a collection of
points, indexed by the sample number n. Thus, we could denote this signal as x[n]. At the
bottom of this figure, we see a reconstructed version of the signal, based on the digital points.
We need a new label for this signal, perhaps x′(t), x̃(t), or x̂(t), but these symbols have other
meanings in different contexts. To avoid confusion, we can give it a new name altogether,
such as x2(t), xreconstructed(t) or new_x(t). While the reconstructed signal has roughly the
same shape as the original (analog) signal, differences are noticeable. Here we simply
connected the points, though other (better) ways do exist to reconstruct the signal.
It is possible to convert from analog to digital, and digital to analog. Analog signal
processing uses electronic parts such as resistors, capacitors, operational amplifiers, etc.
Analog signal processing is cheaper to implement, because these parts tend to be inexpensive.
In DSP, we use multipliers, adders, and delay elements (registers) to process the signal. DSP
is more flexible. For example, error detection and error correction can easily be implemented
in DSP.
Lesson No: 3
Lesson Title: Basic Elements of a Digital Signal Processing System
Discussions:
Most of the signals encountered in science and engineering are analog in nature. That
is, the signals are functions of a continuous variable, such as time or space, and usually take on
values in a continuous range. Such signals may be processed directly by appropriate analog
systems (such as filters, frequency analyzers, or frequency multipliers) for the purpose of
changing their characteristics or extracting some desired information. In such a case we can
say that the signal has been processed directly in its analog form, as illustrated in Fig. 1.1.2.
Both the input signal and output signal are in analog form.
Digital signal processing provides an alternative method for processing the analog
signal, as illustrated in Fig. 1.1.3. To perform the processing digitally, there is a need for an
interface between the analog signal and the digital processor. This interface is called an analog-
to-digital (A/D) converter. The output of the A/D converter is a digital signal that is appropriate
as an input to the digital processor.
The digital signal processor may be a large programmable digital computer or a small
microprocessor programmed to perform the desired operations on the input signal. It may also
be a hardwired digital processor configured to perform a specified set of operations on the input
signal. Programmable machines provide the flexibility to change the signal processing
operations through a change in the software, whereas hardwired machines are difficult to
reconfigure. Consequently, programmable signal processors are in very common use. On the
other hand, when signal processing operations are well defined, a hardwired implementation
of the operations can be optimized, resulting in a cheaper signal processor and, usually, one
that runs faster than its programmable counterpart. In applications where the digital output from
the digital signal processor is to be given to the user in analog form, such as in speech
communications, we must provide another interface from the digital domain to the analog
domain. Such an interface is called a digital-to-analog (D/A) converter. Thus the signal is
provided to the user in analog form, as illustrated in the block diagram of Fig. 1.1.3. However,
there are other practical applications involving signal analysis, where the desired information
is conveyed in digital form and no D/A converter is required. For example, in the digital
processing of radar signals, the information extracted from the radar signal, such as the position
of the aircraft and its speed, may simply be printed on paper. There is no need for D/A converter
in this case.
Lesson No: 4
Lesson Title: Advantages of Digital over Analog Signal Processing
Discussions:
There are many reasons why digital signal processing of an analog signal may be
preferable to processing the signal directly in the analog domain, as mentioned briefly earlier.
First, a digital programmable system allows flexibility in reconfiguring the digital signal
processing operations simply by changing the program. Reconfiguration of an analog system
usually implies a redesign of the hardware followed by testing and verification to see that it
operates properly.
Accuracy considerations also play an important role in determining the form of the
signal processor. Tolerances in analog circuit components make it extremely difficult for the
system designer to control the accuracy of an analog signal processing system. On the other
hand, a digital system provides much better control of accuracy requirements. Such
requirements, in turn, result in specifying the accuracy requirements in the A/D converter and
the digital signal processor, in terms of word lengths, floating-point versus fixed-point
arithmetic, and similar factors.
Digital signals are easily stored on magnetic media (tape or disk) without deterioration
or loss of signal fidelity beyond that introduced in the A/D conversion. As a consequence, the
signals become transportable and can be processed off-line in a remote laboratory. The digital
signal processing method also allows for the implementation of more sophisticated signal
processing algorithms. It is usually very difficult to perform precise mathematical operations
on signals in analog form, but these same operations can be routinely implemented on a digital
computer using software.
In some cases a digital implementation of the signal processing system is cheaper than its
analog counterpart. The lower cost may be due to the fact that the digital hardware is cheaper,
or perhaps it is a result of the flexibility for modifications provided by the digital
implementation.
As a consequence of these advantages, digital signal processing has been applied in
practical systems covering a broad range of disciplines. We cite, for example, the application
of digital signal processing techniques in speech processing and signal transmission, in
seismology and geophysics, in oil exploration, in the detection of nuclear explosions, in the
processing of signals received from outer space, and in a vast variety of other applications.
As already indicated, however, digital implementation has its limitations. One practical
limitation is the speed of operation of A/D converters and digital signal processors. We shall
see that signals having extremely wide bandwidths require fast-sampling-rate A/D converters
and fast digital signal processors. Hence there are analog signals with large bandwidths for
which a digital processing approach is beyond the state of the art of digital hardware.
Lesson No: 5
Lesson Title: Classifications of Signals
Discussions:
The methods we use in processing a signal or in analyzing the response of a system to
a signal depend heavily on the characteristic attributes of the specific signal. There are
techniques that apply only to specific families of signals. Consequently, any investigation in
signal processing should start with a classification of the signals involved in the specific
application.
is complex valued.
In some applications, signals are generated by multiple sources or multiple sensors.
Such signals, in turn, can be represented in vector form. Figure 1.2.1 shows the three
components of a vector signal that represents the ground acceleration due to an earthquake.
This acceleration is the result of
three basic types of elastic waves. The primary (P) waves and the secondary (S) waves
propagate within the body of rock and are longitudinal and transversal, respectively. The
third type of elastic wave is called the surface wave, because it propagates near the ground
surface. If 𝑠𝑘 (𝑡), 𝑘 = 1, 2, 3, denotes the electrical signal from the kth sensor as function of
time, the set of p = 3 signals can be represented by a vector S3(t), where
S3(t) = [s1(t)  s2(t)  s3(t)]ᵀ
Similarly, a color television picture may be described by three intensity functions (red, green,
and blue), represented in vector form as
I(x, y, t) = [Ir(x, y, t)  Ig(x, y, t)  Ib(x, y, t)]ᵀ
x(n) = 0.8ⁿ if n ≥ 0, and x(n) = 0 otherwise          (1.2.1)
In applications, discrete-time signals may arise in two ways:
1. By selecting values of an analog signal at discrete instants of time. This process is
   called sampling.
2. By accumulating a variable over a period of time. For example, counting the number of
cars using a given street every hour, or recording the value of gold every day, results in
discrete-time signals. Figure 1.2.4 shows a graph of the Wölfer sunspot numbers. Each
sample of this discrete-time signal provides the number of sunspots observed during
an interval of 1 year.
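A discrete-time signal such as the one in (1.2.1) is easy to generate numerically. A minimal Python sketch (NumPy assumed available):

```python
import numpy as np

# x(n) = 0.8**n for n >= 0, and 0 otherwise, as in Eq. (1.2.1)
n = np.arange(-2, 8)
x = np.where(n >= 0, 0.8 ** n.astype(float), 0.0)
print(x)
```

Each entry of `x` is one sample of the discrete-time signal at the corresponding sample index in `n`.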
In order for a signal to be processed digitally, it must be discrete in time and its values
must be discrete (i.e., it must be a digital signal). If the signal to be processed is in analog form,
it is converted to a digital signal by sampling the analog signal at discrete instants in time,
obtaining a discrete-time signal, and then by quantizing its values to a set of discrete values.
The process of converting a continuous-valued signal into a discrete-valued signal, called
quantization, is basically an approximation process. It may be accomplished simply by rounding
or truncation. For example, if the allowable signal values in the digital signal are integers, say
0 through 15, the continuous-valued signal is quantized into these integer values. Thus the
signal value 8.58 will be approximated by the value 8 if the quantization process is performed
by truncation, or by 9 if it is performed by rounding to the nearest integer. An explanation of
the analog-to-digital conversion process is given later.
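The rounding and truncation rules just described can be sketched as follows; the function name and the 16-level range are illustrative assumptions, not part of any standard API:

```python
import numpy as np

def quantize(x, levels=16, mode="round"):
    """Quantize a continuous-valued sample to the integers 0..levels-1.

    mode="round": round to the nearest integer (assumed rule);
    mode="truncate": drop the fractional part. Both are simple
    approximation rules, as described in the text.
    """
    q = np.round(x) if mode == "round" else np.floor(x)
    return int(np.clip(q, 0, levels - 1))

print(quantize(8.58, mode="truncate"))  # 8
print(quantize(8.58, mode="round"))     # 9
```

The clip to 0..15 models the fact that a digital signal can take only the allowable set of values.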
Deterministic versus Random Signals
The mathematical analysis and processing of signals requires the availability of a
mathematical description for the signal itself. This mathematical description, often referred to
as the signal model, leads to another important classification of signals. Any signal that can be
uniquely described by an explicit mathematical expression, a table of data, or a well-defined
rule is called deterministic. This term is used to emphasize the fact that all past, present, and
future values of a signal are known precisely, without any uncertainty.
In many practical applications, however, there are signals that either cannot be
described to any reasonable degree of accuracy by explicit mathematical formulas, or for which
such a description is too complicated to be of any practical use. The lack of such a relationship
implies that the signals evolve in time in an unpredictable manner. We refer to these signals as
random. The output of a noise generator, the seismic signal of Fig. 1.2.1, and the speech signal
in Fig. 1.1.1 are examples of random signals.
The figure below shows two signals obtained from the same noise generator and their
associated histograms. Although the two signals do not resemble each other visually, their
histograms reveal some similarities. This provides the motivation for the mathematical
description of random signals, which is provided by the theory of probability and stochastic
processes.
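The observation about the histograms is easy to reproduce numerically. A sketch using NumPy's random generator as a stand-in for the noise generator (the seeds and sample size are arbitrary choices):

```python
import numpy as np

# Two independent realizations from the same (Gaussian) noise source
x1 = np.random.default_rng(1).normal(0.0, 1.0, 100_000)
x2 = np.random.default_rng(2).normal(0.0, 1.0, 100_000)

# The sample paths differ, but the normalized histograms nearly coincide
h1, _ = np.histogram(x1, bins=20, range=(-4, 4), density=True)
h2, _ = np.histogram(x2, bins=20, range=(-4, 4), density=True)
print(np.max(np.abs(h1 - h2)))  # small: the empirical distributions agree
```

The individual samples are unpredictable, yet the statistical description (the histogram) is stable across realizations, which is exactly what probability theory captures.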
It should be emphasized at this point that the classification of a real-world signal as
deterministic or random is not always clear. Sometimes, both approaches lead to meaningful
results that provide more insights into signal behaviour. At other times, the wrong classification
may lead to erroneous results, since some mathematical tools may apply only to deterministic
signals while others may apply only to random signals.
Lesson No. 6: The Concept of Frequency in Continuous-time and Discrete-time Signals
Discussions:
The concept of frequency is familiar to students in engineering and sciences. This
concept is basic in, for example, the design of a radio receiver, a high-fidelity system, or a
spectral filter for color photography. From physics we know that frequency is closely related
to a specific type of periodic motion called harmonic oscillation, which is described by
sinusoidal functions. The concept of frequency is directly related to the concept of time.
Actually, it has the dimension of inverse time. Thus we should expect that the nature of time
(continuous or discrete) would affect the nature of the frequency accordingly.
A simple harmonic oscillation is mathematically described by the continuous-time sinusoidal signal
xa(t) = A cos(Ωt + θ),  −∞ < t < ∞          (1.3.1)
shown in Fig. 1.3.1. The subscript a used with x(t) denotes an analog signal. This signal is
completely characterized by three parameters: A is the amplitude of the sinusoid, Ω is the
frequency in radians per second (rads/s), and θ is the phase in radians. Instead of Ω we often
use the frequency F in cycles per second or hertz (Hz), where
Ω = 2πF          (1.3.2)
In terms of F, (1.3.1) can be written as
xa(t) = A cos(2πFt + θ)          (1.3.3)
We will use both forms, (1.3.1) and (1.3.3), in representing sinusoidal signals.
The analog sinusoidal signal is characterized by the following properties:
A1. For every fixed value of the frequency F, xa(t) is periodic:
xa(t + Tp) = xa(t)          (1.3.4)
where Tp = 1/F is the fundamental period of the sinusoidal signal.
A2. Continuous-time sinusoidal signals with distinct (different) frequencies are themselves
distinct.
A3. Increasing the frequency F results in an increase in the rate of oscillation of the signal, in
the sense that more periods are included in a given time interval.
We observe that for F = 0 the value Tp = ∞ is consistent with the fundamental relation F = 1/Tp.
Due to the continuity of the time variable t, we can increase the frequency F without limit, with
a corresponding increase in the rate of oscillation.
The relationships we have described for sinusoidal signals carry over to the class of
complex exponential signals
xa(t) = Ae^(j(Ωt+θ))          (1.3.5)
This can easily be seen by expressing these signals in terms of sinusoids using the Euler identity
e^(±jφ) = cos φ ± j sin φ          (1.3.6)
Note that a sinusoidal signal can be obtained by adding two equal-
amplitude complex-conjugate exponential signals, sometimes called phasors, illustrated in Fig.
1.3.2. As time progresses the phasors rotate in opposite directions with angular frequencies ±Ω
radians per second. Since a positive frequency corresponds to counterclockwise uniform
angular motion, a negative frequency simply corresponds to clockwise angular motion.
For mathematical convenience, we use both negative and positive frequencies throughout
this lesson. Hence the frequency range for analog sinusoids is -∞ < F < ∞.
A discrete-time sinusoidal signal may be expressed as
x(n) = A cos(ωn + θ),  −∞ < n < ∞          (1.3.7)
where n is an integer variable, called the sample number, A is the amplitude of the sinusoid, ω
is the frequency in radians per sample, and θ is the phase in radians.
If instead of ω we use the frequency variable f defined by
𝜔 = 2𝜋𝑓 (1.3.8)
the frequency f has the dimensions of cycles per sample. We relate the frequency variable f of
a discrete-time sinusoid to the frequency F in cycles per second for the analog sinusoid later. For
the moment we consider the discrete-time sinusoid in (1.3.7) independently of the continuous-
time sinusoid given in (1.3.1). Figure 1.3.3 shows a sinusoid with frequency ω = π/6 radians
per sample (f = 1/12 cycles per sample) and phase θ = π/3.
In contrast to continuous-time sinusoids, discrete-time sinusoids are characterized by the
following properties:
B1. A discrete-time sinusoid is periodic only if its frequency f is a rational number.
By definition, a discrete-time signal x(n) is periodic with period N (N > 0) if and only if
x(n + N) = x(n)  for all n          (1.3.10)
The smallest value of N for which (1.3.10) is true is called the fundamental period.
The proof of the periodicity property is simple. For a sinusoid with frequency f0 to
be periodic, we should have
cos[2𝜋𝑓0 (𝑁 + 𝑛) + 𝜃] = cos(2𝜋𝑓0 𝑛 + 𝜃)
This relation is true if and only if there exists an integer k such that
2𝜋𝑓0 𝑁 = 2𝑘𝜋
or, equivalently,
f0 = k/N          (1.3.11)
According to (1.3.11), a discrete-time sinusoidal signal is periodic only if its frequency f0 can
be expressed as the ratio of two integers (i.e., f0 is rational).
To determine the fundamental period N of a periodic sinusoid, we express its
frequency f0 as in (1.3.11) and cancel common factors so that k and N are relatively prime. Then
the fundamental period of the sinusoid is equal to N. Observe that a small change in frequency
can result in a large change in the period. For example, note that f1 = 31/60 implies that N1 =
60 whereas f2 = 30/60 results in N2 = 2.
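As described above, the fundamental period is obtained by writing f0 = k/N in lowest terms and taking the denominator. A minimal Python sketch (the helper name is ours, not standard):

```python
from fractions import Fraction

def fundamental_period(k, n):
    """Fundamental period N of the discrete-time sinusoid cos(2*pi*(k/n)*m + theta):
    the denominator of f0 = k/n after cancelling common factors.
    Fraction normalizes to lowest terms automatically."""
    return Fraction(k, n).denominator

print(fundamental_period(31, 60))  # 60
print(fundamental_period(30, 60))  # 2
```

This reproduces the example in the text: a small change in frequency (31/60 versus 30/60) produces a large change in the period (60 versus 2).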
B2. Discrete-time sinusoids whose frequencies are separated by an integer multiple of 2𝜋 are
identical.
To prove this assertion, let us consider the sinusoid cos(ω0n + θ). It easily follows that
cos[(ω0 + 2π)n + θ] = cos(ω0n + 2πn + θ) = cos(ω0n + θ)
As a result, all sinusoidal sequences with frequencies
ωk = ω0 + 2kπ,  −π ≤ ω0 ≤ π,  k = 0, 1, 2, …
are indistinguishable (i.e., identical). Any sequence resulting from a sinusoid with a
frequency |ω| > π, or |f| > 1/2, is identical to a sequence obtained from a sinusoidal signal with
frequency |ω| < π. Because of this similarity, we call the sinusoid having the frequency |ω| >
π an alias of a corresponding sinusoid with frequency |ω| < π. Thus we regard frequencies in
the range −π ≤ ω ≤ π, or −1/2 ≤ f ≤ 1/2, as unique, and all frequencies |ω| > π, or |f| > 1/2, as
aliases. The reader should notice the difference between discrete-time sinusoids and
continuous-time sinusoids, where the latter result in distinct signals for Ω or F in the entire
range -∞ < Ω < ∞ or -∞ < F < ∞.
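Property B2 is easy to verify numerically. A short sketch with the arbitrary choice ω0 = π/6:

```python
import numpy as np

n = np.arange(32)
w0 = np.pi / 6
x0 = np.cos(w0 * n)                # frequency w0
x1 = np.cos((w0 + 2 * np.pi) * n)  # frequency w0 + 2*pi: an alias
print(np.allclose(x0, x1))         # True: the two sequences are identical
```

Because n takes only integer values, the extra term 2πn in the argument of the cosine has no effect on the samples.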
B3. The highest rate of oscillation in a discrete-time sinusoid is attained when ω = π (or ω =
−π) or, equivalently, f = 1/2 (or f = −1/2).
To illustrate this property, let us investigate the characteristics of the sinusoidal
signal sequence
𝑥(𝑛) = cos 𝜔0 𝑛
when the frequency varies from 0 to π. To simplify the argument, we take values of ω0 =
0, π/8, π/4, π/2, π, corresponding to f = 0, 1/16, 1/8, 1/4, 1/2, which result in periodic sequences
having periods N = ∞, 16, 8, 4, 2, as depicted in Fig. 1.3.4. We note that the period of the sinusoid
decreases as the frequency increases. In fact, we can see that the rate of oscillation increases as
the frequency increases.
To see what happens for π ≤ ω0 ≤ 2π, consider the sinusoids with frequencies ω1 = ω0 and
ω2 = 2π − ω0. Since
x1(n) = A cos ω1n = A cos(2π − ω2)n = A cos ω2n = x2(n),
ω1 is an alias of ω2. If we had used a sine function instead of a cosine function, the result
would basically be the same, except for a 180˚ phase difference between the sinusoids x1(n) and
x2(n). In any case, as we increase the relative frequency ω0 of a discrete-time sinusoid from π
to 2π, its rate of oscillation decreases. For ω0 = 2π the result is a constant signal, as in the
case of ω0 = 0. Obviously, for ω0 = π (or f = 1/2) we have the highest rate of oscillation.
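Both limiting cases discussed above can be checked directly: ω0 = π gives the alternating sequence with period 2 (the fastest possible oscillation), while ω0 = 2π gives a constant, just as ω0 = 0 does:

```python
import numpy as np

n = np.arange(8)
x_pi = np.cos(np.pi * n)       # omega0 = pi: fastest oscillation, period N = 2
x_2pi = np.cos(2 * np.pi * n)  # omega0 = 2*pi: constant 1, same as omega0 = 0
print(x_pi)
print(x_2pi)
```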
As for the case of continuous-time signals, negative frequencies can be introduced
as well for discrete-time signals. For this purpose we use the identity
x(n) = A cos(ωn + θ) = (A/2)e^(j(ωn+θ)) + (A/2)e^(−j(ωn+θ))          (1.3.15)
Since discrete-time sinusoidal signals with frequencies that are separated by an integer multiple
of 2π are identical, it follows that the frequencies in any interval 𝜔1 < 𝜔 < 𝜔1 + 2𝜋 constitute
all the existing discrete-time sinusoids or complex exponentials. Hence the frequency range
for discrete-time sinusoids is finite with duration 2π. Usually we choose the range 0 ≤ ω ≤
2π or −π ≤ ω ≤ π (equivalently, 0 ≤ f ≤ 1 or −1/2 ≤ f ≤ 1/2), which we call the fundamental range.
Sinusoidal signals and complex exponentials play a major role in the analysis of signals and
systems. In some cases we deal with sets of harmonically related complex exponentials (or
sinusoids). These are sets of periodic complex exponentials with fundamental frequencies that
are multiples of a single positive frequency. Although we confine our discussion to complex
exponentials, the same properties clearly hold for sinusoidal signals. We consider harmonically
related complex exponentials in both continuous time and discrete time.
The harmonically related continuous-time complex exponentials are
sk(t) = e^(jkΩ0t) = e^(j2πkF0t),  k = 0, ±1, ±2, …          (1.3.16)
We note that for each value of k, sk(t) is periodic with fundamental period 1/(kF0) = Tp/k or
fundamental frequency kF0. Since a signal that is periodic with period Tp/k is also periodic with
period k(Tp/k) = Tp for any positive integer k, we see that all of the sk(t) have a common period
of Tp. Furthermore, since F0 is allowed to take any value, all members of the set are distinct, in
the sense that if k1 ≠ k2, then sk1(t) ≠ sk2(t).
From the basic signals in (1.3.16) we can construct a linear combination of
harmonically related complex exponentials of the form
xa(t) = ∑_{k=−∞}^{∞} ck sk(t) = ∑_{k=−∞}^{∞} ck e^(jkΩ0t)          (1.3.17)
where ck, k = 0, ±1, ±2, …, are arbitrary complex constants. The signal xa(t) is periodic with
fundamental period Tp = 1/F0, and its representation in terms of (1.3.17) is called the Fourier
series expansion for xa(t). The complex-valued constants ck are the Fourier series coefficients,
and the signal sk(t) is called the kth harmonic of xa(t).
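A truncated version of (1.3.17) can be evaluated numerically to confirm that the linear combination is periodic with period Tp. The fundamental frequency F0 and the few coefficients below are arbitrary illustrative values:

```python
import numpy as np

F0 = 2.0                           # assumed fundamental frequency in Hz
Tp = 1.0 / F0                      # fundamental period
ck = {-2: 0.1, 1: 1.0, 3: 0.25j}   # a few arbitrary Fourier coefficients

def xa(t):
    # Truncated form of Eq. (1.3.17): sum of ck * exp(j*2*pi*k*F0*t)
    return sum(c * np.exp(2j * np.pi * k * F0 * t) for k, c in ck.items())

t = np.linspace(0.0, Tp, 64)
print(np.allclose(xa(t), xa(t + Tp)))  # True: periodic with period Tp
```

Each harmonic repeats after Tp (its own period Tp/k divides Tp), so any linear combination of harmonics does as well.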
For discrete-time signals, the harmonically related complex exponentials are
sk(n) = e^(j2πkf0n),  k = 0, ±1, ±2, …          (1.3.18)
where f0 = 1/N. In contrast to the continuous-time case, sk+N(n) = sk(n).
This means that, consistent with (1.3.10), there are only N distinct periodic complex
exponentials in the set described by (1.3.18). Furthermore, all members of the set have a
common period N samples. Clearly, we can choose any consecutive N complex exponentials,
say from 𝑘 = 𝑛0 to 𝑘 = 𝑛0 + 𝑁 − 1 to form a harmonically related set with fundamental
frequency 𝑓0 = 1/𝑁. Most often, for convenience, we choose the set that corresponds to 𝑛0 =
0, that is, the set
sk(n) = e^(j2πkn/N),  k = 0, 1, 2, …, N − 1
As in the case of continuous-time signals, the linear combination
x(n) = ∑_{k=0}^{N−1} ck sk(n) = ∑_{k=0}^{N−1} ck e^(j2πkn/N)
results in a periodic signal with fundamental period N. As we shall see later, this is the Fourier
series representation for a periodic discrete-time sequence with Fourier coefficients (𝑐𝑘 ). The
sequence 𝑠𝑘 (𝑛) is called the kth harmonic of x(n).
Example 1.3.1
Stored in the memory of a digital signal processor is one cycle of the sinusoidal signal
x(n) = sin(2πn/N + θ)
where θ = 2πq/N, and q and N are integers.
a) Determine how this table of values can be used to obtain values of harmonically related
sinusoids having the same phase.
b) Determine how this table can be used to obtain sinusoids of the same frequency but
different phase.
Solution:
a) Let 𝑥𝑘 (𝑛) denote the sinusoidal signal sequence
xk(n) = sin(2πnk/N + θ)
This is a sinusoid with frequency 𝑓𝑘 = 𝑘/𝑁, which is harmonically related to x(n). But
𝑥𝑘 (𝑛) may be expressed as
xk(n) = sin[2π(kn)/N + θ]
      = x(kn)
Thus we observe that 𝑥𝑘 (0) = 𝑥(0), 𝑥𝑘 (1) = 𝑥(𝑘), 𝑥𝑘 (2) = 𝑥(2𝑘), and so on. Hence,
the sinusoidal sequence 𝑥𝑘 (𝑛) can be obtained from the table of values of x(n) by taking
every kth value of x(n), beginning with x(0). In this manner we can generate the
values of all harmonically related sinusoids with frequencies fk = k/N for k =
0, 1, …, N − 1.
b) We can control the phase θ of the sinusoid with frequency fk = k/N by taking the first
value of the sequence from memory location q = θN/2π, where q is an integer. Thus the
initial phase θ controls the starting location in the table and we wrap around the table
each time the index exceeds N.
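The table-lookup procedure of Example 1.3.1 can be sketched in Python; N = 16 and q = 2 are illustrative choices, and `harmonic` is our own helper name:

```python
import numpy as np

N, q = 16, 2
theta = 2 * np.pi * q / N
table = np.sin(2 * np.pi * np.arange(N) / N + theta)  # one stored cycle of x(n)

def harmonic(k, n):
    """x_k(n) = sin(2*pi*k*n/N + theta), read from the stored table.
    Indexing with (k*n) mod N implements both the every-kth-sample
    stepping of part (a) and the wrap-around of the table."""
    return table[(k * n) % N]

# Matches direct evaluation for every harmonic and sample index
ok = all(
    np.isclose(harmonic(k, n), np.sin(2 * np.pi * k * n / N + theta))
    for k in range(N) for n in range(N)
)
print(ok)  # True
```

The modular index works because the sine is periodic in its argument with period 2π, so reducing kn modulo N leaves the sample value unchanged.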
Let’s Remember:
In this introductory module we have attempted to provide the motivation for digital
signal processing as an alternative to analog signal processing. We presented the basic elements
of a digital signal processing system and defined the operations needed to convert an analog
signal into a digital signal ready for processing. Of particular importance is the sampling
theorem, which was introduced by Nyquist (1928) and later popularized in the classic paper by
Shannon (1949).
Let’s Do this:
Review Questions:
1. What is a signal?
2. What is a system?
3. What is a transform?
4. What does x[n] imply?
5. What does x(t) imply?
6. What are the differences between analog and digital?
References/Sources:
1. Weeks, Michael. Digital Signal Processing Using MATLAB and Wavelets.
2. Proakis, John G. Digital Signal Processing. 4th Edition. Pearson
3. Ziemer, Rodger E. and Tranter, William H. Principles of Communications: Systems,
Modulation and Noise. Wiley. 2015
PROGRAM OBJECTIVES
ECE.A Apply knowledge of mathematics and sciences to solve electronics engineering
problems.
ECE.B Design and conduct experiments, as well as to analyze and interpret data.
ECE.C Design a system, component, or process to meet desired needs within realistic
constraints, in accordance with standards.
ECE.D Function in multi-disciplinary and multi-cultural teams.
ECE.E Recognize, formulate, and solve engineering problems
ECE.F Understand professional, social, and ethical responsibility
ECE.G Communicate electronics engineering activities effectively with the engineering
community and with society at large
ECE.H Understanding the impact of electronics engineering solutions in a global, economic,
environmental, and societal context
ECE.I Recognize the need for, and engage in, life-long learning
ECE.J Know contemporary issues
ECE.K Use techniques, skills, and modern engineering tools necessary for electronics
engineering practice
ECE.L Know and understand engineering and management principles as a member and leader
of a team, and to manage projects in a multidisciplinary environment
ECE.M Understand at least one specialized field of electronics engineering practice
COLLEGE OBJECTIVES