
Addis Ababa Institute of Technology

School of Electrical and Computer Engineering


Program: Communications Engineering/PG
Chair: Communication & Networking Track
Course Number ECEG-6302
Section Number 1
Course Title Digital Communication Systems
Semester/Year II/I

Instructor Dr. Dereje H. Woldegebriel

Assignment 1
Assignment Title

Submission Date Apr 27, 2020


Due Date

Student Name Yonas Desta Ebren


Section Comm. & Networking Track
Student ID Gsr/3116 /12

Assessor’s comments Grade


1.1 The basic elements of a digital communication system are: the source of information and input
transducer, source encoder, channel encoder, digital modulator, channel, demodulator, channel
decoder, source decoder, and the user of information.

Fig 1.1 Block diagram of a digital communication system


1.2
Source of information: the source produces the raw data, typically an analog signal such as a sound
signal, and the input transducer (such as a microphone) takes the physical input and converts it into
an electrical signal.
Transmitter
 Source encoder: the source encoder compresses the data into a sequence of binary
digits, removing unnecessary (redundant) bits. This helps the system operate
efficiently within the given bandwidth.
 Channel encoder: the channel encoder adds controlled redundancy to the
transmitted data; these extra bits are called error-correcting bits. The encoder also
maps the bitstream to a pulse pattern.
 Digital modulator: it modulates the transmitted signal onto an RF carrier,
which is used for effective long-distance transmission of the data.
Channel: it provides a path for the signal, carrying the transmitted waveform from the
transmitter end to the receiver end.
Receiver
 Demodulator: the data-retrieving process starts at the receiver end. The received
signal is demodulated and converted back from an analog waveform to a digital bitstream.
 Channel decoder: used for error correction after sequence detection. It exploits the
redundant bits added by the channel encoder, which helps achieve complete
recovery of the original bitstream.
 Source decoder: it decompresses the bitstream, reversing the source encoding so
that the digital output is obtained without any loss of information. It then delivers
the source output.
Output transducer: this is the final block, which converts the signal back into its original form.
The conversion of the electrical signal to a physical output (for example, a loudspeaker producing
sound) happens here.
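As a rough end-to-end illustration of how these blocks fit together, the following minimal Python sketch chains toy stand-ins for each stage; every function here is a hypothetical placeholder (8-bit character codes, a 3-fold repetition code, BPSK-style levels, Gaussian noise), not a real modem implementation:

```python
import random

random.seed(0)  # reproducible noise for the demo

# --- Transmitter ---
def source_encode(text):
    # Toy source coding: each character becomes an 8-bit code
    return [int(b) for ch in text for b in format(ord(ch), "08b")]

def channel_encode(bits):
    # Toy channel coding: repeat every bit 3 times (repetition code)
    return [b for bit in bits for b in (bit, bit, bit)]

def modulate(bits):
    # Toy BPSK-style mapping: 0 -> -1.0, 1 -> +1.0
    return [2.0 * b - 1.0 for b in bits]

# --- Channel ---
def channel(symbols, noise_std=0.6):
    # Additive Gaussian noise stands in for the physical channel
    return [s + random.gauss(0.0, noise_std) for s in symbols]

# --- Receiver ---
def demodulate(samples):
    # Hard decision: positive sample -> 1, otherwise 0
    return [1 if s > 0 else 0 for s in samples]

def channel_decode(bits):
    # Majority vote over each group of 3 repeated bits
    return [1 if sum(bits[i:i + 3]) >= 2 else 0 for i in range(0, len(bits), 3)]

def source_decode(bits):
    # Reassemble 8-bit groups back into characters
    return "".join(chr(int("".join(map(str, bits[i:i + 8])), 2))
                   for i in range(0, len(bits), 8))

tx = "hi"
rx = source_decode(channel_decode(demodulate(
        channel(modulate(channel_encode(source_encode(tx)))))))
print(tx, "->", rx)
```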
2.1 What is BER?
The final goal of a well-designed and well-configured communication system is to transport the
message signal to the user over a noisy channel, so delivery must be both efficient and reliable. When
data is transmitted over a data link, there is a possibility of errors being introduced into the system,
and if errors are introduced into the data, the integrity of the system may be compromised. As a
result, it is necessary to assess the performance of the system, and the bit error rate, BER, provides an
ideal way in which this can be achieved. Reliability is commonly expressed in terms of the BER (bit
error rate), which is measured and calculated at the receiver output. A smaller BER indicates a more
reliable communication system. BER is quoted for many communication systems and is a key
parameter in deciding what link parameters should be used, everything from power to modulation
type.
The bit error rate can be expressed as the ratio of the number of received bit errors to the total
number of transferred bits during some time interval.
If the medium between the transmitter and receiver is good and the signal-to-noise ratio is high, then
the bit error rate will be very small, possibly insignificant and having no noticeable effect on the
overall system. However, if noise can be detected, then there is a chance that the bit error rate will
need to be considered.
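As a small sketch of this definition, the following Python snippet counts the bit errors between a transmitted and a received bit sequence (the bit patterns are made up for illustration):

```python
def bit_error_rate(tx_bits, rx_bits):
    """Ratio of received bit errors to total transferred bits."""
    assert len(tx_bits) == len(rx_bits)
    errors = sum(t != r for t, r in zip(tx_bits, rx_bits))
    return errors / len(tx_bits)

tx = [1, 0, 1, 1, 0, 0, 1, 0]
rx = [1, 0, 0, 1, 0, 1, 1, 0]   # two bits flipped by the channel
print(bit_error_rate(tx, rx))   # 2 errors / 8 bits = 0.25
```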
2.2 Factors that determine bit error rate, BER
The bit error rate, BER, can be affected by a number of factors. By manipulating the variables that
can be controlled, it is possible to optimize a system to provide the required performance levels. This
is normally undertaken in the design stages of a data transmission system, so that the performance
parameters can be adjusted at the initial design-concept stage. The main factors that determine the
BER are:
 Interference: the interference levels present in a system are generally set by external factors
and cannot be changed by the system design. However, it is possible to set the bandwidth of the
system: by reducing the bandwidth, the level of interference can be reduced, although this also
limits the data throughput that can be achieved.
 Transmitter power: it is possible to increase the power level of the system so that the power
per bit is increased. This has to be balanced against factors including the interference caused to
other users and the impact of the higher output power on the size of the power amplifier,
overall power consumption, and battery life.
 Reduced bandwidth: another approach to reducing the bit error rate is to reduce the
bandwidth. Lower levels of noise will be received, and therefore the signal-to-noise
ratio will improve. Again, this results in a reduction of the attainable data throughput.
 Order of modulation and coding: lower-order modulation schemes can be used, but this
comes at the expense of data throughput.

It is necessary to balance all the available factors to achieve a satisfactory bit error rate. Normally it
is not possible to achieve all the requirements, and some trade-offs are required. However, even with
a bit error rate worse than what is ideally required, further trade-offs can be made in terms of the
level of error correction introduced into the transmitted data. Although more redundant data has to
be sent with higher levels of error correction, this can help mask the effects of any bit errors that
occur, thereby improving the overall bit error rate.
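To put numbers on the power/BER trade-off, here is a minimal Monte-Carlo sketch for BPSK over an AWGN channel, assuming NumPy is available; the bit count and Eb/N0 values are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n_bits = 200_000

for ebn0_db in [0, 2, 4, 6, 8]:
    ebn0 = 10 ** (ebn0_db / 10)
    bits = rng.integers(0, 2, n_bits)
    symbols = 2 * bits - 1                 # BPSK: 0 -> -1, 1 -> +1
    noise_std = np.sqrt(1 / (2 * ebn0))    # unit bit energy, noise variance N0/2
    received = symbols + noise_std * rng.normal(size=n_bits)
    decisions = (received > 0).astype(int)
    print(f"Eb/N0 = {ebn0_db} dB -> BER ~ {np.mean(decisions != bits):.2e}")
```

Raising the per-bit power (higher Eb/N0) drives the measured BER down rapidly, which is exactly the trade-off discussed above.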
3.1 Source coding in digital communication systems is a mapping from a sequence of symbols from
an information source to a sequence of alphabet symbols (binary bits) such that the source symbols
can be exactly recovered from the binary bits. In source coding we decrease the number of redundant
bits of information to reduce the bandwidth, and it is often termed data compression. Redundant
information can be detected simply by using the probability measure of each message: if the
probability of a message is higher, the number of bits required to represent it is smaller, so the
average code length decreases, which results in a reduction of bandwidth. The bandwidth of the
signal is thus adjusted for effective transmission; hence, in the presence of source coding, the
throughput of the communication system is increased.
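A standard example of such variable-length source coding is Huffman coding, where more probable messages receive shorter codewords; a minimal sketch in Python, assuming the symbol probabilities are known in advance:

```python
import heapq

def huffman_code(probs):
    """Build a Huffman code for a {symbol: probability} table."""
    # Heap entries: (probability, tie-breaker, {symbol: codeword so far})
    heap = [(p, i, {sym: ""}) for i, (sym, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        p1, _, code1 = heapq.heappop(heap)   # two least probable subtrees
        p2, _, code2 = heapq.heappop(heap)
        # Prefix one subtree's codewords with 0 and the other's with 1
        merged = {s: "0" + c for s, c in code1.items()}
        merged.update({s: "1" + c for s, c in code2.items()})
        heapq.heappush(heap, (p1 + p2, count, merged))
        count += 1
    return heap[0][2]

probs = {"a": 0.5, "b": 0.25, "c": 0.15, "d": 0.10}
code = huffman_code(probs)
avg_len = sum(probs[s] * len(code[s]) for s in code)
print(code)      # high-probability symbols get short codewords
print(avg_len)   # average codeword length, close to the source entropy
```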
3.2.1 The most common technique for changing an analog signal into discrete pulses is composed of
three basic processes:
Sampling: the process of measuring the amplitude of a continuous-time signal at discrete instants,
converting the continuous signal into a discrete-time signal.
Quantization: the process of approximating each sampled value by the nearest of a finite set of
discrete levels, rounding each sample off to the closest stabilized value. In practice, the number of
quantizing intervals used is 256 (+1 to +128 in the positive range and -1 to -128 in the negative
range), i.e. 8 bits per sample.
Encoding: the digitization of the analog signal is completed by the encoder. After each sample is
quantized and the number of bits per sample is decided, each sample can be changed into an n-bit
code. Encoding also minimizes the bandwidth used, and this is done by an encoder circuit.
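A minimal sketch of these three steps applied to a sine wave, assuming an 8-bit uniform quantizer; the sampling rate and tone frequency are arbitrary choices:

```python
import math

fs = 8000          # sampling rate in Hz (assumed)
f = 440            # tone frequency in Hz (assumed)
n_bits = 8         # bits per sample -> 2**8 = 256 quantizing intervals
levels = 2 ** n_bits

# Sampling: measure the amplitude at discrete instants t = n / fs
samples = [math.sin(2 * math.pi * f * n / fs) for n in range(8)]

# Quantization: round each sample (in [-1, 1]) to the nearest of 256 levels
quantized = [round((s + 1) / 2 * (levels - 1)) for s in samples]

# Encoding: represent each quantized level as an n-bit binary code
codes = [format(q, "08b") for q in quantized]

for s, q, c in zip(samples, quantized, codes):
    print(f"{s:+.3f} -> level {q:3d} -> {c}")
```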
3.2.2 The Nyquist theorem says that a signal of bandwidth W must be sampled at a rate of at least
2W for perfect reconstruction. In order to cleanly extract the original spectrum, the separation
between adjacent spectral replicas must be sufficient, which requires the sampling rate to be greater
than or equal to twice the bandwidth. If the signal is sampled below its Nyquist rate, spectral folding,
or aliasing, occurs, and hence low-pass filtering will not recover the baseband spectrum.
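A small numerical sketch of aliasing: a 700 Hz tone sampled at 1000 Hz (below its Nyquist rate of 1400 Hz) produces exactly the same samples as a 300 Hz tone, so the two are indistinguishable after sampling (the frequencies are arbitrary choices):

```python
import math

fs = 1000               # sampling rate in Hz, below the Nyquist rate of 1400 Hz
f_true = 700            # actual tone frequency
f_alias = fs - f_true   # 300 Hz alias predicted by spectral folding

for n in range(5):
    t = n / fs
    s_true = math.cos(2 * math.pi * f_true * t)
    s_alias = math.cos(2 * math.pi * f_alias * t)
    print(f"n={n}: {s_true:+.6f}  {s_alias:+.6f}")   # identical sample values
```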
3.3.1 Discrete time source
A discrete source is a source that can be encoded in such a way that the source output can be
uniquely retrieved from the encoded string of binary digits. The average amount of information
emitted by these types of sources is represented by the entropy $H(X)$. Examples of such sources
are an ergodic Markov source and a DMS.
3.3.2 Discrete memoryless source (DMS)
A DMS is a simple kind of source in which the data emitted at successive intervals is statistically
independent of previous values, as the source is memoryless, and the codes produced for this type of
source have to be efficient representations. In other words, the source symbol sequence is
independent and identically distributed. DMSs provide the tools for studying more realistic sources.
The output of a DMS is an unending sequence of randomly selected letters from a finite set (the
source alphabet). Each source output is selected from the source alphabet using a common
probability measure and is statistically independent of the other source outputs. A sequence of
independent tosses of a biased coin is one example of a DMS; the sequence of symbols drawn (with
replacement) in a Scrabble game is another.
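A small sketch of a DMS as an i.i.d. symbol generator, using the biased-coin example (the bias value is an arbitrary choice):

```python
import random

def dms(alphabet, probs, n, seed=0):
    """Emit n i.i.d. letters from a finite alphabet: a discrete memoryless source."""
    rng = random.Random(seed)
    return rng.choices(alphabet, weights=probs, k=n)

# Biased coin: P(H) = 0.7, P(T) = 0.3; each toss is independent of the others
print("".join(dms(["H", "T"], [0.7, 0.3], n=20)))
```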
4.1 In the equation $I(X_i; Y_j) = \log \frac{P(X_i \mid Y_j)}{P(X_i)}$,
$I(X_i; Y_j)$ is the mutual information content of the two random variables $X_i$ and $Y_j$.
$P(X_i \mid Y_j)$ is the conditional probability of the random variable $X_i$, given that the event
$Y_j$ has already occurred.
$P(X_i)$ is the marginal probability of the random variable $X_i$.
4.2 Mutual information is a quantity that measures how much one random variable tells about the
other. It can be thought of as the reduction in uncertainty about one random variable given
knowledge of the other. Here, $I(X_i; Y_j)$ is the amount of information provided about the random
variable $X$ by the observation of $Y$; it is not a function of the symbols $X_i$ and $Y_j$
themselves, but rather of the probabilities of occurrence of the symbols. Moreover, the sum in the
average mutual information is taken over only those pairs $(X_i, Y_j)$ for which
$P(X_i \mid Y_j) > 0$.
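As a small sketch of these quantities, the snippet below computes $\log_2 [P(X_i \mid Y_j)/P(X_i)]$ from a made-up joint distribution over two binary random variables (the probabilities are purely illustrative):

```python
import math

# Hypothetical joint distribution P(X, Y) over binary X and Y
joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}

def pointwise_mi(x, y):
    p_x = sum(p for (xi, _), p in joint.items() if xi == x)   # marginal P(X=x)
    p_y = sum(p for (_, yj), p in joint.items() if yj == y)   # marginal P(Y=y)
    p_x_given_y = joint[(x, y)] / p_y                         # conditional P(X=x | Y=y)
    return math.log2(p_x_given_y / p_x)

print(pointwise_mi(0, 0))   # positive: observing Y=0 makes X=0 more likely
print(pointwise_mi(0, 1))   # negative: observing Y=1 makes X=0 less likely
```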

5.1 The self-information of the event $X_i$, which is the information content of each symbol, can be
calculated (for $k$ equiprobable symbols, so that $P(X_i) = 1/k$) as

$$ I(X_i) = -\log P(X_i) = -\log \frac{1}{k} = \log k \ \text{shannons} $$

5.2 The entropy $H(X)$ of the source can be calculated as

$$ H(X) = \sum_{i=0}^{k-1} P(X_i)\, I(X_i) = \sum_{i=0}^{k-1} \frac{1}{k} \log_2 k = \log_2 k \ \text{bits/symbol} $$
A probability distribution is a function that assigns a probability to each possible outcome, and the
distribution is uniform when all of the outcomes are equiprobable. A good measure of uncertainty
therefore achieves its highest value for the uniform distribution, since equiprobable symbols yield
maximum uncertainty and hence maximum entropy.
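A quick numerical check of this result, computing the entropy of a uniform distribution over $k$ symbols ($k = 8$ is an arbitrary choice):

```python
import math

def entropy(probs):
    """H(X) = -sum p * log2(p), in bits per symbol."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

k = 8
uniform = [1 / k] * k
print(entropy(uniform), math.log2(k))   # both equal 3.0 bits/symbol

skewed = [0.7, 0.1, 0.05, 0.05, 0.04, 0.03, 0.02, 0.01]
print(entropy(skewed))                  # strictly less than log2(k)
```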

5.3 Yes, because the information content of an event $X_i$ is given by $I(X_i) = -\log P(X_i)$,
where $P(X_i)$ is the probability of the symbol. In the limiting cases, a high-probability event has
$P(X_i) = 1$ and a low-probability event has $P(X_i) \to 0$. The information content of a
high-probability event is

$$ I(X_i) = -\log P(X_i) = -\log(1) = 0 $$

while the information content of a low-probability event grows without bound:

$$ I(X_i) = -\log P(X_i) \to \infty \quad \text{as } P(X_i) \to 0 $$
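A short sketch showing how $-\log_2 P(X_i)$ grows without bound as the probability shrinks:

```python
import math

for p in [1.0, 0.5, 0.1, 0.01, 0.001]:
    print(f"P = {p:<6} -> I = {-math.log2(p):6.2f} bits")
# 0 bits for a certain event; the information grows without bound as p -> 0
```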
From this we can conclude that a lower probability of occurrence of a symbol implies a higher
degree of uncertainty (and vice versa), and random variables with a high degree of uncertainty carry
more information. Rare events are an example of this: they have higher information content, which is
why they make the news. Events that occur on a daily basis, such as local happenings, are typically
not covered in the news because we know intuitively that they do not carry much information,
whereas events that occur very rarely, such as those of national significance, attract a lot of coverage
precisely because they carry a lot of information. Events that occur very frequently contain less
information.
