


Mr. Mahantesh Paramashetti

Anusha G., Parveen A. G., Pallavi S. Yadav, Christeena S.

- Introduction
- Biologically Inspired Neuron
- Artificial Neural Networks
- Back Propagation Algorithm
- Compression Techniques
- Implementations
- Advantages
- Disadvantages
- Applications
- Conclusion

Uncompressed multimedia data requires considerable storage capacity and transmission bandwidth. Apart from existing technologies such as the JPEG and MPEG standards, newer techniques such as neural networks are being used for image compression.

Natural images are captured using image sensors and stored in memory banks, which requires a large amount of storage space.

e.g., a color image of size 256x256 requires a storage space of about 1.5 megabits.
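The 1.5-megabit figure follows directly from the image dimensions; a minimal sketch of the arithmetic, assuming 24-bit (8 bits per channel) color:

```python
# Storage needed for an uncompressed 24-bit color image.
width, height = 256, 256
bits_per_pixel = 24          # 8 bits each for R, G and B

total_bits = width * height * bits_per_pixel
print(total_bits)            # 1572864 bits
print(total_bits / 1e6)      # ~1.57 megabits
```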

Storage cost for 1 GB is approximately Rs. 200. At the available bandwidths of 64 kbps and 54 Mbps, transmitting a three-hour movie in uncompressed format takes about 2917 years and 19 days respectively.
Transmission of huge image data is time-consuming.

Artificial neural networks have been chosen for image compression due to their massively parallel and distributed architecture. The training of the network is based on the back-propagation algorithm.

The focus of this project is to implement the neural architecture digitally.

Biological Neurons

The Analogy to the Brain

Neurons are the basic signaling units of the nervous system of a living being; each neuron is a discrete cell with several processes arising from its cell body. This basic element of the human brain gives it the ability to remember, think, and apply previous experience to every action. Neural networks process information in a way similar to the human brain.

Biologically Inspired Neuron

Artificial Neural Networks

Artificial Neural Networks are used to process information the way biological systems process analog signals such as images and sound.

Types of ANN
Feed-forward networks:
- Information flows one way only
- One input pattern produces one output
- No sense of time (or memory of previous state)

Recurrent networks:
- Nodes connect back to other nodes or themselves
- Information flow is multidirectional
- Sense of time and memory of previous state(s)
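The one-way flow of a feed-forward network can be sketched in a few lines; this is an illustrative example (the layer sizes, sigmoid activation, and variable names are assumptions, not the project's MATLAB code):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, W1, b1, W2, b2):
    """One feed-forward pass: input -> hidden -> output.
    Information flows one way only; the network keeps no state."""
    hidden = sigmoid(W1 @ x + b1)
    return sigmoid(W2 @ hidden + b2)

rng = np.random.default_rng(0)
x  = rng.random(4)                            # a 4-element input pattern
W1 = rng.random((3, 4)); b1 = rng.random(3)   # 4 inputs -> 3 hidden units
W2 = rng.random((1, 3)); b2 = rng.random(1)   # 3 hidden -> 1 output

y = forward(x, W1, b1, W2, b2)                # one input pattern, one output
```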

Artificial Neuron System

Input layer Hidden layer Output layer

Block Diagram of Neural Architecture

Back propagation algorithm

Information about errors is filtered back through the system and used to adjust the connections between the layers, thus improving performance.
The feed-forward neural network architecture is capable of approximating most problems with high accuracy and generalization ability.

The back-propagation algorithm is used to update the weights and biases of the neural network. The weight and bias elements of the neurons decide the functionality of the network; the values of these elements are calculated during the training phase.
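The update of weights and biases described above can be sketched for a two-layer sigmoid network trained with gradient descent on a mean-squared-error loss. This is a minimal illustration, not the project's implementation; the learning rate, layer sizes, and training data are assumed for the example:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def backprop_step(x, target, W1, b1, W2, b2, lr=0.5):
    """One back-propagation update on a 2-layer sigmoid network (MSE loss)."""
    # Forward pass
    h = sigmoid(W1 @ x + b1)
    y = sigmoid(W2 @ h + b2)
    # Backward pass: the error is filtered back through the layers
    delta_out = (y - target) * y * (1 - y)          # output-layer error term
    delta_hid = (W2.T @ delta_out) * h * (1 - h)    # hidden-layer error term
    # Gradient-descent updates of the weights and biases (in place)
    W2 -= lr * np.outer(delta_out, h); b2 -= lr * delta_out
    W1 -= lr * np.outer(delta_hid, x); b1 -= lr * delta_hid
    return 0.5 * np.sum((y - target) ** 2)          # MSE before the update

rng = np.random.default_rng(1)
W1 = rng.standard_normal((3, 2)); b1 = np.zeros(3)
W2 = rng.standard_normal((1, 3)); b2 = np.zeros(1)
x, t = np.array([0.2, 0.8]), np.array([0.9])

errors = [backprop_step(x, t, W1, b1, W2, b2) for _ in range(200)]
# The recorded error falls as training proceeds
```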

Image compression refers to the task of reducing the amount of data required to store or transmit an image. The compressed image is then subjected to further digital processing such as error control coding, encryption or multiplexing with other data sources, before being used to modulate the analog signal that is actually transmitted through the channel or stored in a storage medium.


Compression phase:
1. Take the original images.
2. Scale each image (256x256).
3. Form vector values of the scaled images (16x4096).
4. Combine these images to increase the resolution (16x32768).
5. Normalize the combined image.
6. Add bias & weights.
7. Train the network.

Testing phase (for each image):
1. Convert the image to vector form.
2. Normalize.
3. Pass the image through the network.
4. Denormalize.
5. Compare the scaled & decompressed images by finding their PSNR & MSE values.
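The final comparison step uses the standard MSE and PSNR definitions for 8-bit images; a minimal sketch (the tiny test images are made up for illustration):

```python
import numpy as np

def mse(original, reconstructed):
    """Mean squared error between two images of the same size."""
    diff = original.astype(np.float64) - reconstructed.astype(np.float64)
    return np.mean(diff ** 2)

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB, assuming 8-bit pixel values."""
    err = mse(original, reconstructed)
    if err == 0:
        return float("inf")       # identical images
    return 10.0 * np.log10(peak ** 2 / err)

a = np.zeros((4, 4), dtype=np.uint8)           # stand-in for the scaled image
b = np.full((4, 4), 10, dtype=np.uint8)        # stand-in for the decompressed image
print(mse(a, b))    # 100.0
print(psnr(a, b))   # ~28.13 dB
```

A higher PSNR (lower MSE) indicates that the decompressed image is closer to the scaled original.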

The software implementation uses MATLAB version R2007b; the maximum error, MSE and PSNR values are calculated. Hardware implementation is done using an FPGA board (Spartan-3).

Neural network training & performance plots: The neural network is trained using the nntraintool, available in MATLAB. The plot of MSE with respect to epochs for different iterations is as shown:

A neural network can perform tasks that a linear program cannot. When an element of the neural network fails, it can continue operating without any problem thanks to its parallel nature. A neural network learns and does not need to be reprogrammed. It works even in the presence of noise, producing good-quality output.

The neural network needs training before it can operate. The architecture of a neural network is different from that of a microprocessor and therefore needs to be emulated. Large neural networks require high processing time, and as the number of neurons increases the network becomes complex.

- Pattern Matching
- Pattern Recognition
- Optimization
- Vector Quantization
- Data Clustering

Using the ChipScope Pro Analyzer, the design can easily be implemented and verified on the FPGA kit. The analysis showed that the input and output values closely match. With the ChipScope Pro Analyzer, smaller architectures can be easily built.