Conclusion: The two-way traffic light system is designed using Proteus Design Suite, and it is observed that the two channels open one after the other, each for a specific period of time.
Huffman coding also uses the same principle. This type of coding makes the average number of binary digits per message nearly equal to the entropy (the average bits of information per message). It is a variable-length, prefix-free code: code words have different lengths, and no code word is a prefix of any other. This gives the advantages of variable-length, prefix-free coding, such as a lower bandwidth requirement. A major application of Huffman coding is image compression.
Program:
clear all;
close all;
symbol = 1:6; % Create a vector of data symbols.
p = [0.05 0.10 0.10 0.15 0.25 0.35]; % Assign a probability to each data symbol.
[dict,avglen] = huffmandict(symbol,p) % Generate a binary Huffman code dictionary (minimum-variance algorithm, the MATLAB default) and the average code length.
dict{1,:} % Display the Huffman code for the 1st data symbol.
dict{2,:} % Display the Huffman code for the 2nd data symbol.
dict{3,:} % Display the Huffman code for the 3rd data symbol.
dict{4,:} % Display the Huffman code for the 4th data symbol.
dict{5,:} % Display the Huffman code for the 5th data symbol.
dict{6,:} % Display the Huffman code for the 6th data symbol.
hcode = huffmanenco(symbol,dict); % Encode the signal using the Huffman code dictionary.
dhsig = huffmandeco(hcode,dict); % Decode the Huffman code vector using the code dictionary.
disp('Encoded Message:');
disp(hcode);
disp('Decoded Message:');
disp(dhsig);
Conclusion: From the study of Huffman encoding and decoding of the given code words, it is found that the symbol with the minimum probability is assigned the code word with the largest number of bits, while the symbol with the highest probability is assigned the code word with the smallest number of bits.
Conclusion: From the obtained results we conclude that the bit error rate of the M-ary PSK modulation techniques decreases monotonically with increasing SNR (Eb/N0). It is clearly observed from the simulation curves that as the modulation order M increases, the error probability over the AWGN channel also increases; thus higher-order modulations exhibit higher error rates.
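Both trends can be reproduced with the standard nearest-neighbour approximation for Gray-coded M-PSK over AWGN (a Python illustration; the formula is an approximation, exact only for BPSK):

```python
import math

def mpsk_ber_approx(M, ebn0_db):
    # Approximate Gray-coded M-PSK bit error rate over an AWGN channel:
    # SER ~ erfc(sqrt(k*Eb/N0) * sin(pi/M)), BER ~ SER / k, with k = log2(M).
    k = math.log2(M)
    ebn0 = 10 ** (ebn0_db / 10)
    if M == 2:
        return 0.5 * math.erfc(math.sqrt(ebn0))  # exact for BPSK
    ser = math.erfc(math.sqrt(k * ebn0) * math.sin(math.pi / M))
    return ser / k

for M in (2, 4, 8, 16):
    print(M, [round(mpsk_ber_approx(M, snr), 6) for snr in (0, 4, 8)])
```

For any fixed M the BER falls as Eb/N0 rises, and at a fixed Eb/N0 the BER grows with M (BPSK and QPSK coincide), matching the simulation curves described above.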
The most commonly used defuzzification method is the centre of area (COA) method, also commonly referred to as the centroid method. This method determines the centre of area of the fuzzy set and returns the corresponding crisp value.
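As a numerical illustration (in Python, with an assumed triangular output set rather than the tipping FIS itself), the centroid is the membership-weighted average over the discretised universe of discourse:

```python
import numpy as np

# Centre-of-area (centroid) defuzzification on a discretised universe:
# crisp = sum(mu(x) * x) / sum(mu(x)).
x = np.linspace(0, 10, 101)                # assumed universe of discourse
mu = np.maximum(0, 1 - np.abs(x - 6) / 3)  # assumed triangular set peaked at 6
crisp = np.sum(mu * x) / np.sum(mu)
print(crisp)  # 6.0 -- the set is symmetric about its peak
```

For an asymmetric aggregated set (the usual case after rule evaluation), the centroid shifts toward the side with more area, which is why COA gives a smooth, intuitive crisp output.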
Conclusion: The FIS is designed to decide the waiter's tip using the parameters ambience, quality of service, and quality of food.
Conclusion: From the study of various contrast-adjustment functions on the input image baby.jpg, it can be concluded that the original image has low contrast, with most of the pixel values lying in the middle of the intensity range. Histogram equalisation produces an output image with pixel values evenly distributed throughout the range.
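The equalisation step can be sketched in Python (NumPy only, with a synthetic low-contrast image standing in for baby.jpg): each input level r is mapped to round(255 * CDF(r)), which stretches the occupied levels over the full 0-255 range.

```python
import numpy as np

# Histogram equalisation via the cumulative-distribution mapping.
rng = np.random.default_rng(0)
img = rng.integers(100, 156, size=(64, 64))   # low-contrast image, levels 100-155

hist = np.bincount(img.ravel(), minlength=256)  # histogram over all 256 levels
cdf = np.cumsum(hist) / img.size                # cumulative distribution in [0, 1]
equalised = np.round(255 * cdf[img]).astype(np.uint8)

print(img.min(), img.max(), equalised.min(), equalised.max())
```

The input occupies only levels 100-155; after equalisation the output spans nearly the whole 0-255 range, which is exactly the effect observed in the experiment.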
Conclusion: The 4-bit ripple counter circuit is designed and simulated in Multisim software, and it is able to count from 0 to F.
When the first clock pulse is applied, FF0 changes state on its negative edge, so Q2Q1Q0 = 001. On the negative edge of the second clock pulse, FF0 toggles and its output changes from 1 to 0. This being a negative transition, FF1 changes state, so Q2Q1Q0 = 010. Similarly, the output of flip-flop FF2 changes only when there is a negative transition at its input, which first occurs when the fourth clock pulse is applied.
The output of the flip-flops is a binary number equal to the number of clock pulses received. On the negative edge of the eighth pulse, the counter resets to 000.
The counter also acts as a frequency divider: FF0 divides the clock frequency by 2, FF1 divides it by 4, and FF2 divides it by 8. If n flip-flops are cascaded, we get 2^n output states. The largest binary number counted by n cascaded flip-flops has a decimal equivalent of 2^n - 1. A MOD-8 counter counts up to the binary number 111, whose decimal equivalent is 2^3 - 1 = 7.
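The counting and reset behaviour described above can be simulated with a short Python sketch of a 3-bit negative-edge-triggered ripple counter (an illustration of the logic, not the Multisim circuit):

```python
# Simulate a 3-bit (MOD-8) negative-edge-triggered ripple counter.
# Each T flip-flop toggles on the falling edge of the previous stage's
# output, so stage n divides the clock frequency by 2**(n+1).
q = [0, 0, 0]   # Q0, Q1, Q2
counts = []
for pulse in range(8):
    stage = 0
    while stage < 3:
        q[stage] ^= 1          # falling clock/carry edge toggles this stage
        if q[stage] == 1:      # a 0->1 change is not a falling edge;
            break              # the ripple stops here
        stage += 1             # a 1->0 change ripples to the next stage
    counts.append(q[2] * 4 + q[1] * 2 + q[0])
print(counts)  # [1, 2, 3, 4, 5, 6, 7, 0] -- resets on the eighth pulse
```

The printed sequence shows the count advancing by one per pulse and rolling over to 000 on the eighth negative edge, exactly as the theory section states.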
Conclusion: The 3-bit up/down counter circuit is designed and simulated in Multisim, and it is clear from the output shown in fig. 7.2 that the circuit is able to count both up and down between 0 and 7.
Objective: To study the impact of Low Frequency and High Frequency components on
quality of image using different wavelets.
Software Used: MATLAB 2018a.
Theory: Images contain large amounts of information that requires much storage space,
large transmission bandwidths and long transmission times. Therefore, it is advantageous to
compress the image by storing only the essential information needed to reconstruct the image.
Wavelet analysis can be used to divide the information of an image into approximation and
detail sub-signals. The approximation sub-signal shows the general trend of pixel values, and
three detail sub-signals show the vertical, horizontal and diagonal details or changes in the
image. For a two-dimensional image, the 2-D discrete wavelet transform (DWT) is implemented by performing a one-dimensional DWT in the row direction, followed by a one-dimensional DWT in the column direction, as shown in fig 7.1. LL is a coarser version of the original image and contains the low-frequency approximation information; LH, HL, and HH are the high-frequency sub-bands containing the detail information.
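The row-then-column procedure can be illustrated with a minimal one-level Haar DWT in Python (a NumPy sketch under the assumption of even image dimensions; libraries such as PyWavelets handle boundary effects and other wavelet families):

```python
import numpy as np

def haar_1d(x):
    # One-level 1-D Haar DWT along the last axis: pairwise sums give the
    # low-pass (approximation) band, pairwise differences the high-pass band.
    a = (x[..., 0::2] + x[..., 1::2]) / np.sqrt(2)
    d = (x[..., 0::2] - x[..., 1::2]) / np.sqrt(2)
    return a, d

def haar_2d(img):
    # Rows first, then columns, producing the LL, LH, HL, HH sub-bands.
    L, H = haar_1d(img)          # 1-D DWT along each row
    LL, LH = haar_1d(L.T)        # 1-D DWT along each column of L
    HL, HH = haar_1d(H.T)        # 1-D DWT along each column of H
    return LL.T, LH.T, HL.T, HH.T

img = np.arange(16, dtype=float).reshape(4, 4)   # smooth test "image"
LL, LH, HL, HH = haar_2d(img)
print(LL)  # coarse approximation (each entry = sum of a 2x2 block / 2)
```

For this smooth ramp the HH band is exactly zero and LL carries essentially all the energy, which previews the conclusion below: most of the significant information sits in the low-frequency component.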
Wavelet: Wavelets are mathematical functions that cut up data into different frequency
components, and then study each component with a resolution matched to its scale. These
basis functions are short waves of limited duration, hence the name 'wavelets'. The basis functions of the wavelet transform are scaled with respect to frequency. Several
families of wavelets that have proven to be especially useful are:
1. Haar: The Haar wavelet is discontinuous, and resembles a step function. It represents the
same wavelet as Daubechies db1.
2. Daubechies: Ingrid Daubechies, one of the brightest stars in the world of wavelet
research, invented what are called compactly supported orthonormal wavelets — thus
making discrete wavelet analysis practicable. The names of the Daubechies family
wavelets are written dbN, where N is the order, and db the “surname” of the wavelet. The
db1 wavelet, as mentioned above, is the same as Haar wavelet.
3. Coiflets: Built by I. Daubechies at the request of R. Coifman. The wavelet function has
2N moments equal to 0 and the scaling function has 2N-1 moments equal to 0. The two
functions have a support of length 6N-1.
Mean Square Error (MSE): It is the cumulative squared error between the original image and the noise-added image; the lower the MSE, the lower the error. For an original image I and a noise-added image K:
MSE = (1/(M*N)) * sum_{i=1..M} sum_{j=1..N} [I(i,j) - K(i,j)]^2
Here M and N are the numbers of rows and columns in the input image, respectively. Hence, to evaluate the PSNR, the MSE value must be calculated first.
Peak Signal-to-Noise Ratio (PSNR) and Signal-to-Noise Ratio (SNR) are mathematical measures for image quality assessment between the original image and the noise-added image. PSNR measures the peak error:
PSNR = 10 * log10(MAX^2 / MSE)
Here MAX is the maximum possible pixel value of the input image data type (e.g., 255 for an 8-bit image).
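A minimal implementation of both measures (a Python illustration with a synthetic image pair; MATLAB's Image Processing Toolbox provides immse and psnr for the same quantities):

```python
import numpy as np

def mse(ref, test):
    # Mean squared error between two equal-sized greyscale images.
    diff = ref.astype(np.float64) - test.astype(np.float64)
    return np.mean(diff ** 2)

def psnr(ref, test, max_val=255.0):
    # Peak signal-to-noise ratio in dB; max_val is the peak of the data type.
    e = mse(ref, test)
    return float('inf') if e == 0 else 10 * np.log10(max_val ** 2 / e)

rng = np.random.default_rng(1)
original = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)
noise = rng.integers(-5, 6, size=(32, 32))
noisy = np.clip(original.astype(int) + noise, 0, 255)
print(mse(original, noisy), psnr(original, noisy))
```

Identical images give MSE = 0 and an infinite PSNR; as the distortion grows, MSE rises and PSNR falls, which is why higher PSNR values in the results tables indicate better reconstruction.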
1. PSNR and MSE calculation using Level 1 db wavelet for the given input image.
2. PSNR and MSE calculation using Level 1 sym wavelet for the given input image.
3. PSNR and MSE calculation using Level 1 bior wavelet for the given input image.
Conclusion: From the results obtained above, it can be concluded that most of the significant information of an image is stored in its low-frequency component, and that the bior wavelet gives better PSNR and MSE values for the input image 'cameraman.tif' than the db and sym wavelets.