
WRITTEN REPORT

ADAPTIVE IMAGE COMPRESSION TECHNIQUES FOR WIRELESS MULTIMEDIA APPLICATIONS

SUBMITTED BY:

NAVEEN MATHEW FRANCIS #105249595

INTRODUCTION

The advent of new technologies like GPRS and EDGE made possible the evolution of 3G networks, which integrated data and fax capabilities into the existing voice services. The 3G features provided include data services, internet, fax, online games, weather updates, stock updates and so on. There has been significant growth in this area in the past few years, and more and more users are getting involved in the new generation mobile systems. This predicts a huge market for wireless multimedia communication in the near future. However, the use of multimedia involves a large amount of information, which requires a large bandwidth. Also, factors like computation energy (energy consumed in processing the information to be transmitted) and communication energy (energy consumed in wirelessly transmitting the information) have to be taken into account for mobile multimedia communication. The large requirements for bandwidth and energy consumption are significant bottlenecks to wireless multimedia communication.

However, by designing a radio that adapts to current communication conditions and requirements, it is possible to help overcome these bottlenecks. One way to design a multimedia capable radio that accounts for varying communication conditions and requirements is to assume the worst case. For mobile cellular systems it has been found that many parameters, like signal strength, vary depending on user mobility, surroundings, environmental conditions, etc. This may cause significant changes in the wireless channel conditions. In some cases channel coding parameters are adapted to match the current channel conditions, thereby increasing the average bandwidth available. Also, the broadcast power of the power amplifier can be modified to meet QoMD (Quality of Multimedia Data) requirements, thereby lowering energy consumption. These two options are used together in some cases.

In this report, two different adaptive image compression techniques are explained and a comparison is made between them. The first is based on a dynamic algorithm that makes use of the effects on an image of varying parameters like the Quantization Level and Virtual Block Size of the JPEG image compression algorithm, a commonly used source coding technique. In the first case we analyze the effects of modifying the source coder: the effects of varying the parameters of the JPEG image compression algorithm on energy consumption, bandwidth, and latency are discussed in detail. We also introduce a methodology to select the optimal image compression parameters to minimize energy consumption given the current conditions and constraints. In the second part we introduce the Wavelet Transform and the corresponding Adaptive Wavelet Image Compression (AWIC) method. Then an Energy Efficient Wavelet Transform method is explained and compared to both AWIC and the first method.

EFFECTS OF VARYING JPEG IMAGE COMPRESSION PARAMETERS ON ENERGY, LATENCY, AND IMAGE QUALITY

One of the issues that make wireless multimedia communication difficult is the large amount of data that needs to be transmitted over the air interface between the mobile user and the base station. Therefore, source coding (compression) plays an important role in communicating multimedia information. Because multimedia data can be compressed in a lossy manner, the compressed representations of the multimedia data can be made smaller than is possible with lossless data compression. Tradeoffs between the Quality of Multimedia Data (QoMD) transmitted, the energy consumed, the bandwidth required for communication, and the Quality of Service (latency) required in wireless multimedia communication are therefore possible. The basic algorithm used in JPEG compression is given below.

A. The discrete cosine transform (DCT) helps separate the image into parts (or spectral sub-bands) of differing importance with respect to the image's visual quality. The DCT is similar to the discrete Fourier transform: it transforms a signal or image from the spatial domain to the frequency domain. However, all DCT multiplications are real, which lowers the number of required multiplications as compared to the discrete Fourier transform. With an input image in which A(i,j) is the intensity of the pixel in row i and column j, B(k1,k2) is the DCT coefficient in row k1 and column k2 of the DCT matrix, the coefficients of the output "image."

B. The input image is N2 pixels wide by N1 pixels high, and the DCT input is an 8 by 8 array of integers containing each pixel's gray scale level; 8-bit pixels have levels from 0 to 255. The output array of DCT coefficients contains integers, which can range from -1024 to 1023. For most images, much of the signal energy lies at low frequencies; these appear in the upper left corner of the DCT output. The lower right values represent higher frequencies and are often small, small enough to be neglected with little visible distortion.
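The 8x8 DCT described above can be sketched in a few lines. Here `dct2` is a hypothetical helper built directly from the orthonormal DCT-II basis matrix (not a routine from any JPEG library); a constant block illustrates how all of the signal energy lands in the upper-left (DC) coefficient.

```python
import numpy as np

def dct2(block):
    """2-D DCT-II of an N x N block (orthonormal), computed as C @ block @ C.T."""
    N = block.shape[0]
    k = np.arange(N)
    # Orthonormal DCT-II basis: rows are frequencies, columns are samples
    C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * N))
    C[0, :] = np.sqrt(1.0 / N)
    return C @ block @ C.T

# A flat 8x8 block: every coefficient except the DC term is (numerically) zero
block = np.full((8, 8), 100.0)
coeffs = dct2(block)
```

Because the basis is orthonormal, the transform also preserves total signal energy, which is why neglecting the small high-frequency coefficients causes only small distortion.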

Here the input image is divided into blocks of size 8 pixels by 8 pixels, and each 8x8 pixel block is transformed by a Discrete Cosine Transform (DCT) into its frequency-domain equivalent. After the transform stage, each frequency component is quantized to reduce the amount of information that needs to be transmitted. These quantized values are then encoded using a Huffman-encoding-based technique to reduce the size of the image representation.

Huffman coding is an entropy encoding algorithm used for data compression. The term refers to the use of a variable-length code table for encoding a source symbol (such as a character in a file), where the variable-length code table has been derived in a particular way based on the estimated probability of occurrence for each possible value of the source symbol. Huffman coding uses a specific method for choosing the representation for each symbol, resulting in a prefix-free code (that is, the bit string representing some particular symbol is never a prefix of the bit string representing any other symbol) that expresses the most common characters using shorter strings of bits than are used for less common source symbols. Huffman was able to design the most efficient compression method of this type: no other mapping of individual source symbols to unique strings of bits will produce a smaller average output size when the actual symbol frequencies agree with those used to create the code. For a set of symbols with a uniform probability distribution and a number of members which is a power of two, Huffman coding is equivalent to simple binary block encoding.

To investigate the effects of image compression parameters on energy, latency, quality of image, and bandwidth, we consider two parameters.

The first parameter is the scaling of the quantization values used in the quantization step of JPEG. The JPEG standard defines some default quantization tables that can be scaled up or down depending on the desired quality of the final image. As the quantization level decreases, the image quality increases, but more information needs to be transmitted, causing more bits to be transmitted.

The second parameter that we study is Virtual Block Size (VBS). This parameter affects the DCT portion of JPEG: the DCT still inputs the entire 8x8 block of pixels, but outputs a VBS x VBS amount of frequency information rather than an 8x8 block. When the block size is 8, all frequency information is computed; when the block size is 5, all frequency data outside the 5x5 block is set to 0. By setting the frequency values outside the VBS x VBS block to zero, less computation energy is required, because the elements set to zero do not have to be computed or quantized. In addition, due to the encoding stage of JPEG, zeros require fewer bits to transmit, lowering the communication energy as the VBS decreases.
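The VBS idea described above can be sketched as a simple mask over the DCT coefficient block. `apply_vbs` is a hypothetical helper for illustration, not part of any JPEG implementation; in a real energy-aware coder the discarded coefficients would simply never be computed.

```python
import numpy as np

def apply_vbs(coeffs, vbs):
    """Keep only the top-left VBS x VBS DCT coefficients; zero the rest."""
    masked = np.zeros_like(coeffs)
    masked[:vbs, :vbs] = coeffs[:vbs, :vbs]
    return masked

coeffs = np.arange(64, dtype=float).reshape(8, 8)
out = apply_vbs(coeffs, 5)   # everything outside the 5x5 corner becomes zero
```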

Figure: DCT coefficients of a sample 8x8 pixel block with VBS = 8 (all coefficients retained) and with VBS = 5 (all coefficients outside the top-left 5x5 block set to zero).

Effects of Varying Quantization Level

Varying the quantization level of the JPEG image compression algorithm has several effects on wireless multimedia communication. Increasing the quantization level decreases the number of bits to be sent, decreasing the communication energy, latency, and bandwidth required to wirelessly transmit the image. However, increasing the quantization level reduces the image quality.

Effects of Varying Virtual Block Size

As smaller VBSs are chosen, the quality of image and the computation energy used in the DCT and quantization portions of the JPEG algorithm decrease, while the energy consumed in the encoding portion remains the same. In addition, because zeros require fewer bits to encode, the number of bits transmitted, and therefore the corresponding communication energy, also decrease.

SELECTION OF IMAGE COMPRESSION PARAMETERS FOR LOW POWER WIRELESS COMMUNICATION

This process consists of three steps. The first two steps precompute an image quality parameters table consisting of the quantization level and compression ratio (in bits per pixel) for each possible image quality (PSNR, Peak Signal to Noise Ratio) and Virtual Block Size (VBS). The third step, performed on-line in the multimedia capable radio, uses the image quality parameters table, along with the current transmission energy/bit (set by the power amplifier to minimize inter-signal interference), to select the optimal image compression parameters for the current latency, bandwidth, and quality of image requirements.
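The quantization-level scaling described above can be sketched briefly. The flat quantization table here is a hypothetical stand-in for the JPEG default tables, which the standard allows to be scaled up or down; a larger scale (higher quantization level) drives more coefficients to zero.

```python
import numpy as np

def quantize(coeffs, qtable, scale):
    # Higher scale -> coarser quantization -> more zero coefficients
    return np.round(coeffs / (qtable * scale))

qtable = np.full((8, 8), 16.0)                       # hypothetical flat table
coeffs = np.linspace(-100.0, 100.0, 64).reshape(8, 8)
fine = quantize(coeffs, qtable, 1.0)                 # lower quantization level
coarse = quantize(coeffs, qtable, 4.0)               # higher quantization level
```

Since long runs of zeros are cheap for the entropy coder, the coarser result needs fewer bits to transmit, at the cost of image quality.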

Figure: Parameter selection methodology. Steps 1 and 2 (offline) precompute the possible parameters table from the possible VBS and QL combinations and the average PSNR and compression ratio for each; step 3 (online) computes the optimal image compression parameters from this table, the current transmission energy/bit, and the image quality, latency, and bandwidth constraints.

In the first precomputation step, the image quality (PSNR) and the number of bits to transmit, as determined by the compression ratio, are precomputed for each VBS and quantization level combination. Since image quality and compression ratio values vary from image to image, an average over a large number of images is used. The result of step 1 is a table of PSNRs and compression ratios referenced by VBS and quantization level.

Precomputation step 2 uses the table computed in step 1 to determine a quantization level and compression ratio for each possible image quality and VBS combination. To determine the quantization level for a given image quality and VBS pair, a search is made in the table generated by step 1 for the largest quantization value which yields an acceptable image quality with the given VBS. The resulting quantization value, along with the corresponding compression ratio found in the table resulting from step 1, is stored in the image quality parameters table for use in the multimedia capable radio.

Algorithm for run-time selection of image compression parameters

Once the image quality parameters table is stored inside the multimedia capable radio, step 3 of our methodology is performed, in which the most appropriate image compression parameters are selected. During wireless communication, the multimedia capable radio monitors the image quality, latency, and bandwidth constraints, as well as the transmission power per bit for the current communication conditions and multimedia service. If there is a significant change in the conditions or constraints, the radio must select, at run-time, which image compression parameters to use.

To determine the image compression parameters which will minimize the overall energy consumption of the mobile multimedia radio, the selection of the VBS and quantization level parameters, and their effects on both computation energy and number of bits transmitted (communication energy), must be considered simultaneously. As an example, consider the figure given below. The bottom line in the graph shows the decrease in computation energy as the VBS decreases. However, in order to obtain a PSNR of 30dB, the quantization level must decrease as the VBS decreases to maintain the same image quality, increasing the number of bits to transmit, and hence the communication energy. Therefore, a tradeoff between computation energy and number of bits to transmit occurs when choosing different VBS values, and step 3 of the methodology must consider not only the direct effect that choosing the VBS has on computation energy, but also the indirect effect on communication energy, in order to choose the image compression parameters which minimize overall energy consumption.
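The step-2 search described above can be sketched as follows. The table contents are hypothetical averages (not measured data), and `build_quality_table` is an illustrative helper: for each target PSNR and VBS it keeps the largest quantization level whose average PSNR still meets the target.

```python
def build_quality_table(step1_table, psnr_targets):
    """step1_table[(vbs, ql)] = (avg_psnr, bits_per_pixel).
    Returns quality_table[(target_psnr, vbs)] = (ql, bits_per_pixel)."""
    quality_table = {}
    for target in psnr_targets:
        for (vbs, ql), (psnr, bpp) in step1_table.items():
            if psnr >= target:
                key = (target, vbs)
                # Prefer the largest acceptable quantization level
                if key not in quality_table or ql > quality_table[key][0]:
                    quality_table[key] = (ql, bpp)
    return quality_table

# Toy step-1 table: PSNR falls as the quantization level rises
step1 = {(8, 1): (35.0, 2.0), (8, 2): (31.0, 1.5), (8, 4): (28.0, 1.0),
         (5, 1): (32.0, 1.6), (5, 2): (29.0, 1.2)}
qt = build_quality_table(step1, [30])
```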

For the required image quality (PSNR), and beginning with the largest VBS value (say VBS = 8), the algorithm identifies the quantization level and compression ratio needed to satisfy the image quality constraint by performing a lookup in the image quality parameters table. The multimedia capable radio then determines the overall energy dissipation by adding the computation energy (determined by the VBS) and the communication energy (determined as a product of the uncompressed image size, compression ratio, and transmission power/bit). The algorithm then compares the energy dissipation of the current VBS with that of the next smallest VBS. If choosing the next smallest VBS decreases the overall energy consumption without violating latency or bandwidth constraints, the algorithm decrements the VBS. If choosing the next smallest VBS would increase the overall energy consumption, or violate the latency or bandwidth constraints, then the current VBS is chosen. The process is repeated till the optimal VBS and quantization level parameters are identified.

Energy Savings Due to Adaptive Image Compression

Using the methodology presented to select the optimal JPEG image compression parameters, the overall energy consumed in transmitting an image can be reduced. The adaptive radio consumes significantly less energy than a non-adaptive radio: for example, for an image quality of 33dB, the adaptive radio consumes only 20% of the energy consumed by a non-adaptive radio.
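The greedy run-time search described above can be sketched as follows. The table, energy figures, and constraint check are toy stand-ins for the precomputed image quality parameters table and energy model, not measured values.

```python
def select_parameters(table, comp_energy, image_pixels, tx_energy_per_bit,
                      meets_constraints):
    """table[vbs] = (quantization_level, bits_per_pixel) meeting the target
    PSNR. Returns the (vbs, quantization_level) minimizing total energy."""
    def total_energy(vbs):
        _, bpp = table[vbs]
        comm = image_pixels * bpp * tx_energy_per_bit  # communication energy
        return comp_energy[vbs] + comm

    vbs = max(table)                       # begin with the largest VBS
    while vbs - 1 in table:
        if (total_energy(vbs - 1) < total_energy(vbs)
                and meets_constraints(vbs - 1)):
            vbs -= 1                       # smaller VBS saves energy
        else:
            break                          # current VBS is optimal
    return vbs, table[vbs][0]

# Toy data: bits/pixel first drops, then rises as the quantization level
# must fall to preserve quality at small VBS values
table = {8: (10, 2.0), 7: (8, 1.8), 6: (6, 1.9), 5: (4, 2.5)}
comp_energy = {8: 100.0, 7: 90.0, 6: 80.0, 5: 70.0}
vbs, ql = select_parameters(table, comp_energy, 1000, 0.1, lambda v: True)
```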

Another adaptive image compression method, called Wavelet Image Compression, is presented now.

Wavelet Transform

The forward wavelet-based transform uses a 1-D sub band decomposition process, where a 1-D set of samples is converted into the low-pass sub band (Li) and the high-pass sub band (Hi). The low-pass sub band represents a downsampled low-resolution version of the original image. The high-pass sub band represents residual information of the original image, needed for the perfect reconstruction of the original image from the low-pass sub band. The 2-D sub band decomposition is just an extension of the 1-D sub band decomposition. The entire process is carried out by executing the 1-D sub band decomposition twice, first in one direction (horizontal), then in the orthogonal (vertical) direction. For example, the low-pass sub band (Li) resulting from the horizontal direction is further decomposed in the vertical direction, leading to the LLi and LHi sub bands. Similarly, the high-pass sub band (Hi) is further decomposed into HLi and HHi. After one level of transform, the image can be further decomposed by applying the 2-D sub band decomposition to the existing LLi sub band. This iterative process results in multiple "transform levels". Consider the figure given below: the first level of transform results in LH1, HL1, and HH1, in addition to LL1, which is further decomposed into LH2, HL2, HH2, and LL2 at the second level, and the information of LL2 is used for the third level transform. We refer to the sub band LLi as a low-resolution sub band, and to the high-pass sub bands LHi, HLi, HHi as the horizontal, vertical, and diagonal sub bands respectively, since they represent the horizontal, vertical, and diagonal residual information of the original image. An example of a three-level decomposition into sub bands of the image CASTLE is shown in the figure below. The main property of the wavelet filter is that it includes neighborhood information in the final result, thus avoiding the block effect of the DCT transform which was discussed earlier.
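One level of the 2-D decomposition described above can be sketched with a simple Haar (average/difference) filter pair standing in for the wavelet filter discussed below; the structure (a horizontal pass followed by a vertical pass, yielding LL, LH, HL, and HH) is the same regardless of the filter.

```python
import numpy as np

def decompose_2d(img):
    L = (img[:, 0::2] + img[:, 1::2]) / 2.0   # horizontal low-pass
    H = (img[:, 0::2] - img[:, 1::2]) / 2.0   # horizontal high-pass
    LL = (L[0::2, :] + L[1::2, :]) / 2.0      # vertical low-pass of L
    LH = (L[0::2, :] - L[1::2, :]) / 2.0      # vertical high-pass of L
    HL = (H[0::2, :] + H[1::2, :]) / 2.0      # vertical low-pass of H
    HH = (H[0::2, :] - H[1::2, :]) / 2.0      # vertical high-pass of H
    return LL, LH, HL, HH

# A flat image has no residual detail: LL keeps the value, the rest are zero
flat = np.full((8, 8), 9.0)
LL, LH, HL, HH = decompose_2d(flat)
```

Applying `decompose_2d` again to `LL` gives the second transform level, and so on.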
Energy Consumption of the Wavelet Transform Algorithm

We choose the Daubechies 5-tap/3-tap filter [8] for embedding in the forward wavelet transform.

It also has good localization and symmetric properties, which allow for simple edge treatment, high-speed computation, and a high quality compressed image. In addition, this filter is amenable to energy efficient hardware implementation, because it consists of binary shifter and integer adder units rather than multiplier/divider units. Each low-pass coefficient is computed from five neighboring samples and each high-pass coefficient from three, using only shift and add operations.

We analyze energy efficiency by determining the number of times certain basic operations are performed for a given input, which in turn determines the amount of switching activity, and hence the energy consumption. We model the energy consumption of the low/high-pass decomposition by counting the number of operations, and denote this as the computational load. In the forward wavelet decomposition using the above filter, 8 shift and 8 add operations are required to convert a sample image pixel into a low-pass coefficient; similarly, the high-pass decomposition requires 2 shifts and 4 adds. Thus 8S + 8A units of computational load are required for a unit pixel of the low-pass decomposition and 2S + 4A units for the high-pass.

Suppose we first apply the decomposition in the horizontal direction. Since all even-positioned image pixels are decomposed into low-pass coefficients and all odd-positioned image pixels are decomposed into high-pass coefficients, the total computational load involved in the horizontal decomposition is 1/2 MN (10S + 12A). The amount of computational load in the vertical decomposition is identical. For a given input image size of M x N and wavelet decomposition applied through L transform levels, we can estimate the total computational load using the fact that the image size decreases by a factor of 4 in each transform level:

Total computational load = MN (10S + 12A) x [1 + 1/4 + ... + (1/4)^(L-1)] = (4/3) MN (10S + 12A) (1 - 4^(-L))

The transform also involves a large number of memory accesses. Here we estimate the data-access load by counting the total number of memory accesses during the wavelet transform. At each transform level, each pixel is read twice and written twice (once each in the horizontal and vertical decompositions). With the same conditions as in the above estimation, the total data-access load is given by the number of read and write operations:

Total data-access load = 4 MN x [1 + 1/4 + ... + (1/4)^(L-1)] = (16/3) MN (1 - 4^(-L))

The overall computation energy is computed as a weighted sum of the computational load and the data-access load. We also have to consider the communication energy, which is given by C x R, where C is the size of the compressed image (in bits) and R is the per-bit transmission energy consumed by the RF transmitter.
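The computational-load estimate can be checked numerically: each transform level costs (10 shifts + 12 adds) per pixel of the current level's image (horizontal plus vertical decomposition), and the image shrinks by a factor of 4 per level.

```python
def computational_load(M, N, levels):
    """Count shift and add operations for an M x N image over the given
    number of transform levels, at (10S + 12A) per pixel per level."""
    shifts = adds = 0
    size = M * N
    for _ in range(levels):
        shifts += 10 * size
        adds += 12 * size
        size //= 4          # next level works on the quarter-size LL band
    return shifts, adds

s, a = computational_load(512, 512, 3)
```

The loop agrees with the closed form (4/3) MN (1 - 4^(-L)) applied to each operation count.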

Energy Efficient Wavelet Image Transform Algorithm

Here we have an algorithm aimed at minimizing computation energy (by reducing the number of arithmetic operations and memory accesses) and communication energy (by reducing the number of transmitted bits), sometimes at a nominal loss in image quality. This algorithm exploits the numerical distribution of the high-pass coefficients to judiciously eliminate a large number of samples from consideration in the image compression process. We observe that the high-pass coefficients are generally represented by small integer values. For example, the figure shown below illustrates the distribution of high-pass coefficients after applying a 2-level wavelet transform to a 512 x 512 image: 80% of the high-pass coefficients for level 1 are less than 5. Because of this numerical distribution, and the effect of the quantization step on small-valued coefficients, we can estimate the high-pass coefficients to be zeros, and hence avoid computing them, while incurring minimal image quality loss.

The algorithm consists of two techniques attempting to conserve energy by avoiding the computation and communication of high-pass coefficients. The first technique attempts to conserve energy by eliminating the least significant sub band, the diagonal sub band (HHi); this technique is called "HH elimination". In the second scheme, only the most significant sub band (the low-resolution information, LLi) is kept, and all high-pass sub bands (LHi, HLi, and HHi) are removed; this is called "H* elimination".

This approach has two main advantages. First, because the high-pass coefficients do not have to be computed, we reduce the computation energy consumed during the wavelet image compression process by reducing the number of executed operations. Second, because all high-pass sub bands are eliminated in the transform step, no information about them needs to be transmitted across the wireless channel, and because the encoder and decoder are both aware of the estimation technique, the communication energy required is reduced. In this manner, the algorithm aims at effecting energy savings while minimally impacting the quality of the image.

Energy Efficiency of Elimination Techniques

The modifications made to the wavelet transform step to apply the elimination techniques are shown in the figure below. To implement the HH elimination technique, during the column transform the high-pass sub band resulting from the previous row transform is fed only into the low-pass filter, and not into the high-pass filter. This avoids the generation of the diagonal sub band (HH). To implement the H* elimination technique, the input image is processed through only the low-pass filter during both the row and column transform steps; all high-pass decomposition steps are removed from the transform.

Now let us analyze the energy efficiency of the elimination techniques. In the HH elimination technique, the computation load during the row transform is the same as with the AWIC algorithm. However, after the row transform, the high-pass sub band (HH) is not computed, resulting in a savings of 1/4 MN (4A + 2S) operation units of computational load (7.4% compared to the Adaptive Wavelet Image Compression algorithm). Thus, the total computational load when using HH elimination is the AWIC computational load reduced by 1/4 MN (4A + 2S) at each transform level at which the technique is applied.

Because the high-pass sub band resulting from the row transform is still required to compute the HL sub band during the column transform, we cannot save on "read" accesses using the HH elimination technique. However, we can save a quarter of the "write" operations (a 12.5% savings) during the column transform, since the results of the HH sub band are pre-assigned to zeros before the transform is computed; the total data-access load is reduced accordingly.

The HH elimination technique also results in significant communication energy savings. For each transform level at which the HH elimination technique is applied, 25% of the image data is removed, leading to less information to be transmitted over the wireless channel.

While the HH elimination technique reduces some computation load during the transform steps by eliminating one out of every four sub bands, the H* elimination technique targets more significant computation energy savings. In the H* elimination technique, only the LL sub band is generated and all high-pass sub bands are removed. Since only low-pass coefficients (L, LL) are written to memory and accessed through the next transform steps, in the row transform only even-positioned pixels are processed and fed to the subsequent column transform; odd-positioned pixels are skipped, since these pixels represent all the high-pass coefficients (HL, HH). Similarly, in the column transform, all odd-columned pixels are skipped and only even-columned low-passed pixels are processed. This leads to a savings of MN (6A + 4S) operation units of computational load (over 47% compared to the AWIC algorithm); the total computational load when using H* elimination is reduced accordingly.

H* elimination also reduces the data-access load significantly. Since the wavelet transform utilizes neighborhood pixels to generate coefficients, all image pixels must still be read once to generate the low-pass coefficients in the row transform. However, at the column transform step, only even-columned pixels are required, so the number of "read" accesses is reduced by 25%. Similarly, write operations are saved by 63%. The H* elimination technique can also result in significant savings in communication energy, since three out of four sub bands are removed from the compressed results.

Effects on Computation Energy and Image Quality

Here we compare the computation energy consumed and the image quality generated by each of the two elimination techniques and the AWIC algorithm. Consider the figure given below: we observe that the H* elimination technique leads to significant energy savings over the AWIC algorithm, while the loss in image quality is negligible. For example, at elimination level 1 the energy savings using H* elimination is about 34%, and at elimination level 2 the H* elimination technique yields 42% energy savings, while the image quality degradation is within 3dB. The figure also shows that significantly more energy savings can be accomplished using H* elimination than HH elimination. However, the degradation in image quality is more significant in the case of H* elimination.
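The H* elimination data flow can be sketched as follows, with a simple averaging low-pass filter standing in for the Daubechies 5/3 low-pass: the high-pass filter is never run, so only the quarter-size LL sub band is produced at each level, and the eliminated sub bands are treated as zeros by both encoder and decoder.

```python
import numpy as np

def h_star_level(img):
    """One transform level under H* elimination: low-pass only, both passes."""
    L = (img[:, 0::2] + img[:, 1::2]) / 2.0   # row transform, low-pass only
    LL = (L[0::2, :] + L[1::2, :]) / 2.0      # column transform, low-pass only
    return LL                                 # LH, HL, HH are assumed zero

img = np.arange(64, dtype=float).reshape(8, 8)
ll = h_star_level(img)   # quarter-size low-resolution image
```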

Effects on Communication Energy and Image Quality

Here we study the image quality obtained, and the communication energy consumed in transmitting the compressed image, when applying the HH and H* techniques at different levels of elimination. The communication energy for each technique is estimated from the size of the compressed image. As the number of elimination levels increases, the savings in communication energy increase. However, in doing so, the quality of the image also degrades, demonstrating a trade-off between communication energy and the quality of the image obtained. Here we note that when the H* elimination technique is applied through level 2, the communication energy consumption is 37% less compared to the AWIC algorithm, while the image quality degradation is within only 3dB.

To get an idea of the impact on image quality, we next present visual comparisons of two versions of the image obtained. The PSNRs of the two images are 31.08 dB (AWIC) and 28.63 dB (H* level 2) respectively. Note that while the energy saving between the two approaches is significant (42%), there is almost no perceivable difference in the quality of the two images. This demonstrates that, depending on the image quality desired by a wireless service and the state of the battery of the wireless appliance, different trade-offs can be obtained between the image quality obtained and the energy expended in compressing the image and transmitting the compressed image.

Wavelet Image Compression Parameters

Besides the elimination techniques we saw earlier, there are other wavelet image compression parameters which can be used to minimize the computation and communication energy consumed in wireless image communication.

Varying Wavelet Transform Level

Increasing the applied wavelet transform level can reduce the number of transmitted bits, leading to a lower bit-rate and less communication energy. However, increasing the transform level also results in an increase in computation energy consumption.

Varying Quantization Level

The goal of quantization is to reduce the entropy of the transformed coefficients so that the entropy coder can meet the target bit-rate required for wireless transmission. Varying the quantization level of the wavelet image compression algorithm has several effects on mobile image communication. By increasing the quantization level, we can decrease the number of transmitted bits, leading to less communication energy for mobile image communication. However, increasing the quantization level has negative effects, such as decreasing the image quality.

Adaptive Image Communication

Varying the three parameters (wavelet transform level, elimination level, and quantization level) of the new energy efficient image compression algorithm, EEWITA, can produce a significant impact on the computation and communication energy needed, the image quality obtained, and the bandwidth and air time (service cost) expended during multimedia mobile communication. Based on EEWITA and its parameters, we can have an adaptive image codec which can minimize the energy consumption and air time needed for an image-based wireless service, while meeting the bandwidth and latency constraints of the wireless network, and effect the desired trade-off between the energy consumed, the image quality obtained, and the latency and bandwidth required to wirelessly transmit the image.

CONCLUSION

Future deployment of cellular multimedia data services will require very large amounts of data to be transmitted, creating tremendously high energy and bandwidth requirements that cannot be fulfilled by the limited growth in battery technologies or the projected growth in available cellular bandwidth. By adapting the source coder of a multimedia capable radio to current communication conditions and constraints, it is possible to overcome the bandwidth and energy bottlenecks to wireless multimedia communication. The effects of varying the quantization level and VBS on quality of image, bandwidth required, computation energy, and communication energy are significant, and an adaptive algorithm depending on these factors was discussed. Another method, based on the Wavelet Transform, the adaptive EEWITA codec, enables transmission of image data, an important part of internet and other data applications, with significant savings in the energy consumed and air time (service cost) required, while meeting available bandwidth and data quality constraints. Central to the adaptive EEWITA is a dynamic parameter selection methodology, which can select the optimal Transform Level (TL), Elimination Level (EL), and Quantization Level (QL) to minimize energy consumption based on the bandwidth and latency constraints.

GLOSSARY

GPRS – General Packet Radio Service
EDGE – Enhanced Data Rates for Global Evolution
3G – Third Generation Cellular Networks
QoMD – Quality of Multimedia Data
VBS – Virtual Block Size
QL – Quantization Level
DCT – Discrete Cosine Transform
JPEG – Joint Photographic Experts Group
PSNR – Peak Signal to Noise Ratio
AWIC – Adaptive Wavelet Image Compression
Li – Low-pass sub band
Hi – High-pass sub band
EEWITA – Energy Efficient Wavelet Image Transform Algorithm

REFERENCES:

1) Adaptive and Energy Efficient Wavelet Image Compression For Mobile Multimedia Data Services – Dong-Gi Lee and Sujit Dey

2) Adaptive Image Compression for Wireless Multimedia Communication – Clark N. Taylor and Sujit Dey
3) Digital Image Processing – Gonzalez and Woods
4) Digital Picture Processing – A. Rosenfeld and A. C. Kak
