
Chapter-2

Literature Review
Daniel Baumgartner et al. [1] report their work on a performance benchmark of different implementations of some low-level vision algorithms. The algorithms are implemented on two high-speed embedded platforms, a Digital Signal Processor (DSP) and a Field Programmable Gate Array (FPGA); the target devices are a TI TMS320C6414 DSP and an Altera Stratix FPGA. The implementations are evaluated, compared and discussed. The paper claims that the DSP implementations outperform the FPGA implementations, but at the cost of the DSP spending all of its resources on these tasks; FPGAs, by contrast, are well suited to algorithms that benefit from parallel execution. Three algorithms – a Gaussian pyramid, a Bayer filter and a Sobel filter – were implemented on both a high-end DSP and on high-end FPGAs. The results indicate that all three low-level algorithms are processed faster on the DSP than on the FPGA: the DSP implementation of the Gaussian pyramid is about four times faster than the FPGA implementation, while the Bayer and Sobel filters run about two times faster on the DSP.
Thus, the DSP outperforms the FPGA on a sub-function basis. From a system view, however, the FPGA benefits from parallelism. The authors are currently investigating how to combine the advantages of both technologies in a new platform based on both DSPs and FPGAs. Such a platform would make it possible to split algorithms and to execute each part on the processing unit (DSP or FPGA) that is better suited for it.
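Of the three benchmarked kernels, the Sobel filter is the simplest to state. The following pure-Python sketch is illustrative only; it is not the optimized DSP/FPGA code that was benchmarked, and the test image is a made-up example:

```python
# Sobel edge-gradient magnitude (illustrative sketch only; the benchmarked
# DSP/FPGA implementations in [1] are optimised far beyond this).
GX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal-gradient kernel
GY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical-gradient kernel

def sobel(img):
    """Return the gradient magnitude for the interior pixels of a 2-D list."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(GX[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(GY[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# A vertical step edge yields a strong horizontal gradient response:
img = [[0, 0, 0, 9, 9, 9] for _ in range(5)]
mag = sobel(img)
```

Because every pixel is an independent 3x3 neighbourhood operation, this is exactly the kind of kernel that maps naturally onto FPGA parallelism, which is the system-level trade-off the paper discusses.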
Zhou Jianjun and Zhou Jianhong [2] introduce a high-speed digital image processing system based on ARM and DSP. The system increases the speed of digital image processing and realizes accurate recognition of figures (characters) in images. The paper also discusses the hardware structure of an image tracking system with the ARM-DSP pair as its main frame, together with the development process and control flow of the DSP, and finally looks forward to the development prospects of image processing. The paper studies the basic theory of digital image processing. By function, the system is roughly divided into two parts: a DSP image acquisition and processing part and an ARM real-time control part. Because the timing of the ARM differs from that of the DSP, data between the two parts is transferred through a dual-port RAM; this not only meets the timing requirements of the system but also improves its efficiency and makes the system more stable and reliable.

L. Siéler, C. Tanougast and A. Bouridane [3] proposed an approach that has resulted in much improved processing times with smaller and more flexible area utilization. To demonstrate the effectiveness of the approach, the architecture was implemented in Xilinx Virtex FPGA technology using a VHDL structural description. The whole architecture fits into a single FPGA without the use of any external memory and/or host machine. Moreover, the proposed architecture is scalable and can be tailored to process larger images.

Konstantinos Konstantinides [4] of Hewlett-Packard reviewed the book Embedded Image Processing on the TMS320C6000 DSP: Examples in Code Composer Studio and MATLAB by Shehrzad Qureshi (Springer, 2005). The book is intended for signal and image processing practitioners and software developers who plan to use a TI DSP, tackling how to implement signal and image processing algorithms efficiently on embedded processors, with a focus in particular on the efficient implementation of image processing algorithms on the TI TMS320C6000 family of DSPs. According to the author, all examples have been tested and debugged on either the TI C6701 Evaluation Module or the C6416 DSP Starter Kit; however, most of the programming techniques demonstrated can easily be applied to other embedded platforms. The reviewer finds the book well written, filling a big void in the image processing literature.

Duan Jinghong et al. [5] presented an image processing system structure based on DSP and FPGA, in which the DSP is used as the advanced image processing unit and the FPGA as the logic unit for image sampling and display. Along with the rapid development of large-scale integrated circuits, the paper takes a digital signal processor and a complex programmable logic device as its core to construct a recognition hardware platform. The hardware configuration and working principle are introduced first, and then some key problems, including the image data storage mode, color space conversion, and EDMA-based image transmission, are described. The developed system can acquire and display images and perform image processing operations including geometric transforms, orthographic transforms, pixel-based operations, image compression and color space conversion. A TMSC6713 DSP board is used for executing the image processing algorithms; its CPU, the TI TMSC6713, is a high-performance 255 MHz floating-point digital signal processor, and the board provides 1 Mbit of RAM, 512 Kbytes of Flash, 8 Mbytes of 32-bit external expanded SDRAM, 4 user-accessible LEDs and 4 DIP switches. Finally, the program flowchart for developing the image processing software is given.

K. Benkrid, D. Crookes and A. Benkrid [6] proposed high-level descriptions of task-specific architectures specifically optimised for Xilinx XC4000 FPGAs. The library contains high-level skeletons for compound operations, whose implementations include task-specific optimisations. Skeletons are parameterisable, for instance for different arithmetic representations, and different skeletons can be provided for the same operation. This in turn supports experimentation with different implementations and choosing the most suitable one for the particular constraints at hand (e.g. speed and area).

D. Chaikalis, N. Sgouros and D. Maroulis [7] stated that their parallel digital system realizes a number of computationally heavy calculations in order to achieve the real-time operation required by emerging 3D applications. The processing elements can be deployed in a systolic architecture and operate on multiple image areas simultaneously, while the memory organization allows random access to image data and copes with the increased processing throughput of the system. Operating results reveal that the proposed architecture is able to process 3D data at a real-time rate: the system can handle large-sized InIms in real time and outputs 3D scenes of enhanced depth and detailed texture.

Pasquale Corsonello et al. [8] implemented a widely known wavelet-based compression method, i.e. the Set Partitioning In Hierarchical Trees (SPIHT) algorithm, by means of custom circuits, including the computationally intensive 2-D wavelet transform. The aim of this work is to demonstrate and verify the feasibility of a compact and programmable image compression sub-system that uses just one low-cost FPGA device. The entire system consumes just 1637 slices of an XC2V chip, runs at a 100 MHz clock frequency, and reaches a speed performance suitable for several real-time applications.

P. Karthigaikumar, Anumol and K. Baskaran [9] note that digital watermarking is the process of embedding information into a digital signal in a way that is difficult to remove; a watermark is embedded in the host signal for authentication. Fragile and semi-fragile watermarking techniques have some serious disadvantages, such as increased use of resources, larger area requirements and high power consumption. To overcome this, a robust invisible watermarking technique for images is used in this paper. The whole algorithm is designed and simulated using Simulink blocks in MATLAB and then converted into a Hardware Description Language (HDL) using the Xilinx System Generator tool. The algorithm is prototyped on a Virtex-6 (vsx315tff1156-2) FPGA, and the results show that the proposed design can operate at a maximum frequency of 344 MHz while consuming only 1.1% of the available device.

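The additive, correlation-based principle behind robust invisible watermarking can be illustrated with a toy sketch. This is a generic spread-spectrum illustration with made-up parameters, not the hardware design of the reviewed paper:

```python
import random

def pattern(key, h, w):
    """Pseudorandom +/-1 pattern derived from a secret key."""
    rng = random.Random(key)
    return [[rng.choice((-1.0, 1.0)) for _ in range(w)] for _ in range(h)]

def embed(image, key, strength=8.0):
    """Additive embedding: image + strength * key-pattern (toy, not robust to attacks)."""
    pat = pattern(key, len(image), len(image[0]))
    return [[p + strength * q for p, q in zip(ri, rp)]
            for ri, rp in zip(image, pat)]

def detect(image, key):
    """Correlation of the zero-mean image with the key's pattern.

    Near the embedding strength for the correct key, near zero otherwise.
    """
    h, w = len(image), len(image[0])
    mean = sum(sum(r) for r in image) / (h * w)
    pat = pattern(key, h, w)
    return sum((image[y][x] - mean) * pat[y][x]
               for y in range(h) for x in range(w)) / (h * w)

rng = random.Random(0)
host = [[rng.uniform(0, 255) for _ in range(64)] for _ in range(64)]
marked = embed(host, key=1234)
```

A correct key yields a correlation score close to the embedding strength, while a wrong key sees only noise; hardware designs such as the one reviewed implement far more elaborate, attack-resistant variants of this idea.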
Samir Tagzout, Karim Achour and Oualid Djekoune [10] introduced a novel algorithm for computing the Hough transform. The basic idea consists in using a combination of an incremental method with the usual Hough transform expression to reconcile circuit performance with accuracy requirements. The algorithm is primarily developed to fit field programmable gate array (FPGA) implementation, FPGAs having become a competitive alternative for high-performance digital signal processing applications. The induced architecture presents a high degree of regularity, making its VLSI implementation very straightforward. This implementation may be achieved by a generator program, assuring a shorter design cycle and a lower cost. For illustration, implementation results for 8-bit image pixels are given.

B. Krill et al. [11] selected three intellectual property (IP) cores used in the pre-processing and transform blocks of compression systems, namely colour space conversion (CSC), the two-dimensional biorthogonal discrete wavelet transform (2-D DBWT) and the three-dimensional Haar wavelet transform (3-D HWT), to validate their proposed Dynamic Partial Reconfiguration (DPR) design flow and environment. The results obtained reveal that the proposed environment offers a better solution, providing a scriptable program to establish communication between the FPGA IP cores and their host application, power consumption estimation for the partial reconfiguration area, and automatic generation of the partial and initial bitstreams. The design exploration offered by the proposed DPR environment allows the generation of efficient IP cores with optimized area/speed ratios.

Haralick et al. [12] defined classification in a generic sense as the categorization of some input data into identifiable classes via the extraction of significant features or attributes of the data from a background of irrelevant details. The perception of texture is believed to play an important role in the human visual system for the recognition, interpretation and understanding of synthetic and natural image objects, and image features are mostly extracted from the shape and texture of segmented objects. The interpretation of images is only possible if classifiers can effectively label previously unseen objects, and the recognition ability of classifiers depends on the quality of the features used as well as the amount of training data available to them. They proposed a set of 14 textural features extracted from a co-occurrence matrix and reported an overall accuracy rate of 84 percent on eleven types of textures obtained from satellite images. The paper thus lays the foundation for texture-based image classification using exhaustive features extracted from one of the best texture description methods, i.e. the co-occurrence matrix.

Michael Unser [13] proposed sum and difference histograms as an alternative to the usual co-occurrence matrices for texture analysis. The sum and difference of two random variables with equal variances are de-correlated and define the principal axes of their associated joint probability function. Two maximum likelihood classifiers are presented, depending on the type of object used for texture characterization. He proved that the sum and difference histograms, used conjointly, perform as well as co-occurrence matrices while reducing computation time and memory storage. The paper thus suggests a novel texture feature in the form of sum and difference histograms as an alternative to the spatial gray level dependence matrix.

P. S. Hiremath and S. Shivashankar [14] present a feature extraction algorithm for texture classification that uses the wavelet-decomposed images of an image and its complementary image. The features are constructed from different combinations of the sub-band images and offer a better discriminating strategy for texture classification, enhancing the classification rate. The Euclidean distance measure and the minimum distance classifier are used to classify the textures.

Lucia Dettori and Lindsay Semler [15] focus on comparing the discriminating power of several multi-resolution texture analysis techniques using wavelet-, ridgelet- and curvelet-based texture descriptors. The approach consists of two steps: automatic extraction of the most discriminative texture features of regions of interest, and creation of a classifier. A comparison between wavelet, ridgelet and curvelet descriptors is carried out; among the three wavelet-based features, the Haar-based descriptors outperformed both Daubechies and Coiflet for most images and performance measures, while Coiflet and Daubechies had similar performance, with Coiflet slightly higher at accuracy rates of 85–93% compared to 83–93% for Daubechies.
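The relationship between the co-occurrence matrix of [12] and the sum and difference histograms of [13] can be made concrete with a small pure-Python sketch. The displacement, the 4x4 test image and the single Haralick-style feature (energy) are illustrative choices, not taken from the reviewed papers:

```python
from collections import Counter

def cooccurrence(img, dx=1, dy=0):
    """Grey-level co-occurrence counts for displacement (dx, dy)."""
    h, w = len(img), len(img[0])
    pairs = Counter()
    for y in range(h - dy):
        for x in range(w - dx):
            pairs[(img[y][x], img[y + dy][x + dx])] += 1
    return pairs

def sum_diff_histograms(img, dx=1, dy=0):
    """Unser-style sum and difference histograms for the same displacement."""
    h, w = len(img), len(img[0])
    hs, hd = Counter(), Counter()
    for y in range(h - dy):
        for x in range(w - dx):
            a, b = img[y][x], img[y + dy][x + dx]
            hs[a + b] += 1   # one 1-D histogram over sums ...
            hd[a - b] += 1   # ... and one over differences
    return hs, hd

def energy(hist):
    """Angular second moment (energy) of a normalised histogram."""
    n = sum(hist.values())
    return sum((c / n) ** 2 for c in hist.values())

img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [2, 2, 3, 3],
       [2, 2, 3, 3]]
glcm = cooccurrence(img)
hs, hd = sum_diff_histograms(img)
```

The point of [13] is visible in the data sizes: for G grey levels, the co-occurrence matrix needs O(G^2) cells, while the two histograms need only O(G) bins each, which is why they are attractive for memory-constrained hardware.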

Trygve Randen and John Håkon Husøy [16] reviewed most major filtering approaches to texture feature extraction and performed a comparative study; the problem addressed is to determine which features optimize the classification rate. The filtering approaches included are Laws masks, ring/wedge filters, dyadic Gabor filter banks, wavelet transforms, wavelet packets and wavelet frames, quadrature mirror filters, the discrete cosine transform, eigenfilters, optimized Gabor filters, linear predictors, and optimized finite impulse response filters. The features are computed as the local energy of the filter responses, keeping the local energy function and the classification algorithm identical for most approaches. For reference, comparisons with two classical non-filtering approaches, co-occurrence (statistical) and autoregressive (model-based) features, are given. Such features may be used in image segmentation, compression, inspection and other problems in computer vision. A ranking of the tested approaches based on extensive experiments is presented in the paper.

Grigorescu et al. [17] considered various filter-based texture feature extraction operators, which comprise linear filtering eventually followed by post-processing; the post-processing step comprises non-linear point operations and/or the computation of local statistics such as the mean and standard deviation. The filters used are Laws' masks, filters derived from well-known discrete transforms, and Gabor filters. The effect of the filtering is highlighted, and the performance is measured by means of the Mahalanobis distance between clusters of feature vectors derived from different textures. The results show that post-processing considerably improves the performance of filter-based texture operators, although with a given type of post-processing the different filtering schemes lead to different results. On average Mahalanobis distance, all sinusoidal transforms and the Laws' filter bank give comparable results; still, they are worse than those obtained with Gabor filters. Taking into consideration that the test material contains oriented textures only, it seems plausible to explain the good results obtained with Gabor filters by their orientation selectivity: all of the sinusoidal transforms result in filters with a rectangular power spectrum with only two possible orientations, while Gabor filters have an elliptical power spectrum with eight possible orientations. The paper thus claims that Gabor filters perform best among all linear filters for feature extraction and that some sort of post-processing necessarily improves the performance.

Phillipe P. Ohanian et al. [18] compared textural features for pattern recognition. The goal is to compare and evaluate in a quantitative manner four types of features, namely Markov random field parameters, multi-channel filtering features, fractal-based features and co-occurrence features. Four types of textures are studied, two synthetic and two natural. Performance is assessed by the criterion of classification error rate with a nearest-neighbor classifier and the leave-one-out estimation method using forward selection. The results show that the co-occurrence features perform best, followed by the fractal features; however, there is no universally best subset of features. The article thus deals with the performance evaluation of four different types of feature extractors for texture classification.

Guoliang Fan et al. [19] experimented with wavelet-based texture analysis and synthesis using hidden Markov models (HMMs). Wavelet-domain HMMs, in particular the hidden Markov tree (HMT), were recently proposed and applied to image processing, where it was usually assumed that the three sub-bands of the 2-D discrete wavelet transform (DWT), i.e. LH, HL and HH, are independent. They developed a new HMM, called HMT-3S, which, in addition to the joint statistics captured by HMT, can also exploit the cross-correlation across DWT sub-bands. The basic idea of HMT-3S is that a more complete statistical characterization of the DWT can be implemented by more sophisticated graph structures for Bayesian inference in the wavelet domain. Specifically, they studied four wavelet-based methods for statistical texture characterization in the wavelet domain, for describing textures in the synthesis and classification process. The proposed HMT-3S is applied to texture analysis, including classification, segmentation and synthesis, and in general works well. Experimental results show that HMT-3S provides the highest percentage of correct classification, over 95% on a set of 55 Brodatz textures; for texture segmentation, the more accurate texture characterization from HMT-3S allows significant improvements in terms of both classification accuracy and boundary localization; and texture synthesis shows improved performance over HMT. The article thus focuses on the development of a new model, the HMT-3S, and proves that wavelet-domain statistical image modeling plays an important role in texture characterization.

Wei Benjie et al. [20] proposed a novel 32-bit processor whose structure is simple and efficient for image processing. The paper puts forward a new image processor based on a customised RISC CPU, which gets rid of the drawbacks of ASICs. Referring to the RISC CPU architecture of MIPS, they study the whole structure of the customised image processor, then give the RTL design scheme, described in VHDL, for the wavelet transform, including the work flow and implementation of the circuit design, and at last make experiments to verify it; suggestions for the critical modules in the design are given. The paper claims that, compared with the fixed ASIC mode of image coding, the reconfigurable CPU circuit adopted is more flexible and has lower cost. The specially designed CPU can be embedded into a video encoder or other multimedia processor; noticeably, it can also perform the DWT (discrete wavelet transform) with only a few instructions, and it can be embedded into an image encoder as a reconfigurable IP core for arithmetic coding or other purposes. The simulation results show that the hardware model of the CPU is effective and practical, and the advantages of a programmable CPU are also discussed.

Klaus Illgner [21] presented an overview of DSP architectures and their advantages for embedded applications, mainly in the consumer market. Reduced time-to-market, combined with the ease of adding features, favours programmable solutions over dedicated chip designs (ASICs); noticeably, catalogue DSPs and DSP-based systems sustain and even gain market share against specialized processor concepts involving a general-purpose processor core (GPP) with accelerator units. Realizing such applications, for example digital still cameras and video phones, is constrained not only by the integration density of computational power: cost and power consumption are equally important. The paper first analyses the basic processor architectures enabling imaging in portable and embedded devices. After discussing today's platform concepts and why DSPs are especially well suited, the fundamental operations of imaging applications are analyzed, and the specific signal processing demands of image and video processing algorithms in these applications and their mapping to DSPs are described. The paper also discusses the feasibility and implementation issues of image processing algorithms on DSPs, aiming to give an insight into how DSPs can be used for certain imaging applications and video processing. Finally, two examples of implementing imaging systems on DSPs are introduced; recent results of the successful implementation of two major embedded image and video applications, a digital still camera and a video communications codec, on TI's TMS320C54x DSP series conclude the paper.

J. A. Kalomiros and J. Lygouras [22] evaluated the performance of a hardware/software architecture designed to perform a wide range of fast image processing tasks. The system architecture is based on hardware featuring a Field Programmable Gate Array (FPGA) co-processor and a host computer. A LabVIEW host application controlling a frame grabber and an industrial camera is used to capture and exchange video data with the hardware co-processor via a high-speed USB 2.0 channel. The FPGA accelerator is based on an Altera Cyclone II chip and is designed as a system-on-a-programmable-chip (SOPC) with the help of an embedded Nios II software processor. The SOPC system integrates the CPU, external and on-chip memory, the communication channel (implemented with a standard macrocell) and typical image filters appropriate for the evaluation of the system performance. Measured transfer rates over the communication channel and processing times for the implemented hardware/software logic are presented for various frame sizes. They also claimed that partitioning a machine vision application between a host computer and a hardware co-processor may solve a number of problems and can be appealing in academic or industrial environments where compactness and portability of the system are not of primary importance.

D. Karatzas and A. Antonacopoulos [23] argue that the challenging stage of segmenting text from complex images benefits from a human perspective of colour perception in preference to RGB colour space analysis. The method described is a result of the authors' systematic approach to approximating human colour perception characteristics for the identification of character regions. In this instance, characters are segmented as distinct regions with separate chromaticity and/or lightness by performing a layer decomposition of the image. More precisely, the image is decomposed by performing histogram analysis of Hue and Lightness in the HLS colour space and by merging using information on human discrimination of wavelength and luminance. The proposed approach enables the segmentation of text in complex situations, such as in the presence of varying colour and texture in both the characters and the background.

Zoltan Kato and Ting Chuen Pong [24] proposed a Markov random field (MRF) image segmentation model which aims at combining color and texture features: the perceptually uniform CIE-L*u*v* color values serve as color features and a set of Gabor filters as texture features. The theoretical framework relies on Bayesian estimation via combinatorial optimization (simulated annealing). The segmentation is obtained by classifying the pixels into different pixel classes, which are represented by multivariate Gaussian distributions; the only hypothesis about the nature of the features is that an additive Gaussian noise model is suitable to describe the feature distribution belonging to a given class. The Gaussian parameters are either computed using a training data set or estimated from the input image, and the authors also propose a parameter estimation method using the EM algorithm.

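The orientation selectivity that makes Gabor filters effective texture features can be shown with a minimal pure-Python sketch. The kernel size, wavelength, bandwidth and test pattern below are illustrative assumptions, not parameters from the reviewed papers:

```python
import math

def gabor_kernel(ksize=9, wavelength=4.0, theta=0.0, sigma=2.0):
    """Real part of a Gabor filter: a cosine plane wave under a Gaussian envelope."""
    half = ksize // 2
    k = [[0.0] * ksize for _ in range(ksize)]
    for y in range(ksize):
        for x in range(ksize):
            # rotate coordinates so the carrier wave points along theta
            xr = (x - half) * math.cos(theta) + (y - half) * math.sin(theta)
            yr = -(x - half) * math.sin(theta) + (y - half) * math.cos(theta)
            k[y][x] = (math.exp(-(xr * xr + yr * yr) / (2 * sigma * sigma))
                       * math.cos(2 * math.pi * xr / wavelength))
    return k

def local_energy(img, kernel):
    """Mean squared filter response over valid positions: one scalar feature."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(img), len(img[0])
    total, n = 0.0, 0
    for y in range(h - kh + 1):
        for x in range(w - kw + 1):
            r = sum(kernel[j][i] * img[y + j][x + i]
                    for j in range(kh) for i in range(kw))
            total += r * r
            n += 1
    return total / n

# Vertical stripes of period 4 resonate with the x-oriented (theta = 0) filter
stripes = [[1.0 if (x % 4) < 2 else 0.0 for x in range(16)] for _ in range(16)]
e0 = local_energy(stripes, gabor_kernel(theta=0.0))
e90 = local_energy(stripes, gabor_kernel(theta=math.pi / 2))
```

A bank of such filters at several orientations and wavelengths yields one energy per channel, which is the kind of multi-channel feature vector the filter-bank comparisons above evaluate and which [24] combines with color features.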
Experimental results are provided to illustrate the performance of the method on both synthetic and natural color images.

Jue Wu and Albert C. S. Chung [25] proposed a non-texture segmentation model using compound MRFs based on a boundary model. The main target of this approach is to enhance the performance of segmentation by emphasizing the interactions between the label and boundary MRFs. Comparisons with other existing MRF models show that the proposed model gives more accurate segmentation results in both high and low noise level regions while preserving subtle boundary information with high accuracy, and it performs favorably compared with four alternative segmentation algorithms.

Michal Haindl and Stanislav Mikeš [26] proposed a novel, efficient and robust method for unsupervised texture segmentation with an unknown number of classes, based on underlying CAR and GM texture models. A usual handicap of segmentation methods is their many application-dependent parameters that have to be estimated experimentally; this method requires only a contextual neighborhood selection and two additional thresholds. Although the algorithm uses a random-field-type model, it is relatively fast because it uses efficient recursive parameter estimation of the model, and it is therefore much faster than the usual Markov chain Monte Carlo estimation approach. The algorithm's performance is demonstrated in extensive benchmark tests on natural texture mosaics.

Daniel Baumgartner et al. [27] presented a performance benchmark of several DSP and FPGA implementations of some low-level image processing algorithms, in which three algorithms – a Gaussian pyramid, a Bayer filter and a Sobel filter – were implemented on both a high-end DSP and on high-end FPGAs (see also [1] above).

Summary of literature survey

The articles reviewed in the literature survey can be classified broadly into two categories:
 Algorithms for texture description, representation and modeling (co-occurrence matrix descriptors, sum and difference histogram descriptors, Markov random field models, Gabor function descriptors, discrete wavelet transforms, etc.), as applied in the space and spatial frequency (transform) domains and applied in a generic sense to the feature extraction process.
 Algorithms for texture classification using the extracted features, with design and implementation details for each approach, covering aspects such as optimality of choice, necessity of post-processing and variety of input textures.

It is obvious from the summary that, by and large, multi-channel Gabor filters, the co-occurrence matrix and wavelet transforms have been the feature extractors most widely deployed by researchers for texture classification, and that wavelet transforms are comparatively efficient and less time consuming. Selecting algorithms for hardware implementation of texture classification is a challenging task because of constraints such as the sequential processing architecture of existing processors, memory size, etc. It is therefore suggested to implement wavelet-based techniques on hardware platforms such as DSPs and PLDs.
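The wavelet-energy features suggested above for hardware implementation can be sketched in a few lines: one level of a 2-D Haar decomposition followed by a per-sub-band energy. This is a generic illustration (test image and feature choice are assumptions, not taken from any reviewed paper):

```python
def haar_step(seq):
    """One level of the 1-D Haar transform: pairwise averages and differences."""
    avg = [(seq[i] + seq[i + 1]) / 2 for i in range(0, len(seq), 2)]
    dif = [(seq[i] - seq[i + 1]) / 2 for i in range(0, len(seq), 2)]
    return avg, dif

def haar2d_level(img):
    """One level of the 2-D Haar transform; returns LL, LH, HL, HH sub-bands."""
    lo, hi = [], []
    for row in img:                      # transform every row first
        a, d = haar_step(row)
        lo.append(a)
        hi.append(d)

    def transform_columns(mat):
        a_cols, d_cols = [], []
        for x in range(len(mat[0])):     # then transform every column
            col = [mat[y][x] for y in range(len(mat))]
            a, d = haar_step(col)
            a_cols.append(a)
            d_cols.append(d)
        # transpose the per-column results back to row-major sub-bands
        return [list(r) for r in zip(*a_cols)], [list(r) for r in zip(*d_cols)]

    LL, LH = transform_columns(lo)       # row-lowpass, column low/high
    HL, HH = transform_columns(hi)       # row-highpass, column low/high
    return LL, LH, HL, HH

def band_energy(band):
    """Mean squared coefficient: one texture feature per sub-band."""
    n = sum(len(r) for r in band)
    return sum(v * v for r in band for v in r) / n

# Horizontal stripes vary only along y, so the detail energy lands in LH
img = [[1.0] * 8 if y % 2 == 0 else [0.0] * 8 for y in range(8)]
LL, LH, HL, HH = haar2d_level(img)
```

Because the Haar transform needs only additions, subtractions and shifts, the same computation maps cleanly onto the DSP and PLD platforms recommended above, which is one reason wavelet features are attractive for hardware implementation.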