AUDIO COMPRESSION
Varun Kumar Sen, RB6703B50, 3460070010, Dept. of ECE, Lovely Professional University, Phagwara, Punjab, 144402. E-mail id: varun01638@gmail.com

ABSTRACT: Digital audio compression enables more efficient storage and transmission of audio data. The many forms of audio compression techniques offer a range of encoder and decoder complexity, compressed audio quality, and differing amounts of data compression. The MPEG/audio standard is a high-complexity, high-compression, and high-quality algorithm. These techniques apply to general audio signals and are not specifically tuned for speech signals.
1. INTRODUCTION:
Advances in digital audio technology stem from two sources: hardware developments and new signal processing techniques. When processors dissipated tens of watts of power and memory densities were on the order of kilobits per square inch, portable playback devices like an MP3 player were not possible. Now, however, power dissipation, memory densities, and processor speeds have improved by several orders of magnitude. Increasing hardware efficiency and an expanding array of digital audio representation formats are giving rise to a wide variety of new digital audio applications. These applications include portable music playback devices, digital surround sound for cinema, high-quality digital radio and television broadcast, the Digital Versatile Disc (DVD), and many others.

This paper introduces digital audio signal compression, a technique essential to the implementation of many digital audio applications. Digital audio signal compression is the removal of redundant or otherwise irrelevant information from a digital audio signal, a process that is useful for conserving both transmission bandwidth and storage space. We begin by defining some useful terminology. We then present a typical "encoder" (as compression algorithms are often called) and explain how it functions.

2. THEORY:
Digital audio compression allows the efficient storage and transmission of audio data. Audio compression is designed to reduce the transmission bandwidth requirement of digital audio streams and the storage size of audio files. Audio compression algorithms are implemented in computer software as audio codecs. General-purpose data compression algorithms perform poorly with audio data, seldom reducing the data size much below 87% of the original, so specifically optimized lossless and lossy audio algorithms have been developed. Lossy algorithms provide greater compression ratios and are used in mainstream consumer audio devices. In both lossy and lossless compression, redundant information is reduced, using methods such as coding, pattern recognition, and linear prediction to reduce the amount of information needed to represent the uncompressed data. Lossless audio compression produces digital data that can be expanded to an exact digital duplicate of the original audio stream. With lossy compression, the trade-off between slightly reduced audio quality and smaller transmission or storage size favors the latter for most practical audio applications, in which users may not perceive the loss in playback rendition quality. For example, one compact disc (CD) holds approximately one hour of uncompressed high-fidelity music, less than 2 hours of music compressed losslessly, or about 7 hours of music compressed in the MP3 format at medium bit rates.
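The figures in the CD example can be reproduced with back-of-the-envelope arithmetic. The sketch below assumes a ~650 MB (74-minute) audio CD, a typical lossless ratio of about 55%, and a "medium" MP3 bit rate of roughly 200 kbit/s; none of these numbers come from the paper itself.

# Back-of-the-envelope check of the CD figures above. Disc capacity,
# lossless ratio, and MP3 bit rate are illustrative assumptions.
CD_BYTES = 650 * 1024 * 1024          # assumed ~650 MB (74-minute) audio CD
SAMPLE_RATE = 44_100                  # CD sampling rate, samples/s per channel
BITS_PER_SAMPLE = 16
CHANNELS = 2

# Uncompressed CD audio: 44,100 * 16 * 2 = 1,411,200 bit/s (~176 kB/s).
pcm_bytes_per_s = SAMPLE_RATE * BITS_PER_SAMPLE * CHANNELS // 8

hours_uncompressed = CD_BYTES / pcm_bytes_per_s / 3600
hours_lossless = hours_uncompressed / 0.55           # assuming ~55% lossless ratio
hours_mp3 = CD_BYTES / (200_000 / 8) / 3600          # assumed ~200 kbit/s MP3

print(f"uncompressed: ~{hours_uncompressed:.1f} h")  # roughly 1 hour
print(f"lossless:     ~{hours_lossless:.1f} h")      # roughly 2 hours
print(f"MP3 (medium): ~{hours_mp3:.1f} h")           # roughly 7-8 hours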
There are two processes which reduce the data rate or storage size of a digital audio signal:

1). Dynamic range compression
In this process, compression is performed without reducing the amount of digital data.
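As a minimal illustration of this point, the sketch below applies a crude dynamic range compressor to a block of normalized samples: levels above a threshold are scaled down, yet the number of samples, and therefore the amount of digital data, stays the same. The threshold and ratio values are arbitrary assumptions, and a real compressor would add attack/release smoothing.

import numpy as np

def compress_dynamic_range(samples: np.ndarray,
                           threshold: float = 0.5,
                           ratio: float = 4.0) -> np.ndarray:
    """Reduce the level of samples whose magnitude exceeds `threshold`.

    The output has exactly as many samples as the input: dynamic range
    compression changes levels, not the amount of digital data.
    """
    magnitude = np.abs(samples)
    over = magnitude > threshold
    # Above the threshold, only 1/ratio of the excess level is kept.
    compressed_mag = np.where(over,
                              threshold + (magnitude - threshold) / ratio,
                              magnitude)
    return np.sign(samples) * compressed_mag

# Example: a loud 1 kHz tone, normalized float samples in [-1, 1].
t = np.linspace(0, 1, 44_100, endpoint=False)
loud_tone = 0.9 * np.sin(2 * np.pi * 1000 * t)
quieter = compress_dynamic_range(loud_tone)
print(len(loud_tone) == len(quieter))   # True: same amount of data
print(loud_tone.max(), quieter.max())   # peak level reduced (~0.9 -> ~0.6)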
2). Time compressed speech
In this process, compression is done by reducing the amount of time it takes to listen to a recording.

Compression is the reduction in size of data in order to save space or transmission time. There are numerous goals when compressing data, many of which are especially relevant to audio. Among these goals is reducing the required storage space, which in turn also reduces the cost of storage. Another goal in compressing audio is reducing the bandwidth required to transfer the content. This aspect is especially relevant when applied to the Internet and commercial television, both of which require streaming audio and video.

Compression is generally presented in two different forms, known as lossy and non-lossy or lossless. Lossless compression uses formulas to look for redundancy within data and to represent that redundancy using less information. By reversing the process, the data can be reproduced in an exact form, mirroring the original bit for bit. Lossy compression schemes throw away part of the data to get a smaller size. Using formulas, a description of the useful components of the data is recorded, and any excess information is left out. When reconstructed during decompression, the reproduced data is often substantially different from the original, but since only the least perceptually relevant portions of the signal are discarded, thanks to the psychoacoustic grounding of the compression methods, the removed data can be very hard to detect. Lossy compression results in vast improvements in final storage requirements, which makes the often-imperfect output quite acceptable. One of the biggest drawbacks of lossy schemes is that the effect is additive: successive iterations of saving the data will show progressively greater data loss. For this reason, they should never be used in the studio and are only of use for final output.
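The lossless/lossy distinction can be demonstrated with ordinary tools. In the sketch below, zlib's general-purpose coder stands in for a dedicated lossless audio coder and round-trips the samples bit for bit, while a crude requantization step (far simpler than real perceptual coding) stands in for the lossy case and does not. The signal and parameter choices are illustrative assumptions.

import zlib
import numpy as np

rng = np.random.default_rng(0)
# Fake "audio": 16-bit PCM samples of a tone plus a little noise.
t = np.arange(44_100) / 44_100
pcm = 0.4 * np.sin(2 * np.pi * 440 * t) + 0.01 * rng.standard_normal(t.size)
pcm16 = np.clip(pcm * 32767, -32768, 32767).astype(np.int16)

# Lossless: compress and decompress; the result matches bit for bit.
raw = pcm16.tobytes()
packed = zlib.compress(raw, level=9)
restored = np.frombuffer(zlib.decompress(packed), dtype=np.int16)
print("lossless ratio:", len(packed) / len(raw))
print("exact copy:", np.array_equal(pcm16, restored))        # True

# Lossy (crude): drop the 4 least significant bits before coding.
# Real lossy codecs discard perceptually irrelevant detail instead,
# but the effect is the same: the original can no longer be recovered.
quantized = (pcm16 >> 4).astype(np.int16)
packed_lossy = zlib.compress(quantized.tobytes(), level=9)
print("lossy ratio:", len(packed_lossy) / len(raw))
print("exact copy:", np.array_equal(pcm16, quantized << 4))  # False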
3. BASIC BUILDING BLOCKS:
Figure 1 shows a generic encoder that takes blocks of a sampled audio signal as its input. These blocks typically consist of between 500 and 1500 samples per channel, depending on the encoder specification. For example, the MPEG-1 Layer III (MP3) specification takes 576 samples per channel per input block. The output is a compressed representation of the input block that can be transmitted or stored for subsequent decoding.

3.1 Psychoacoustics:
The basic approach to reducing the size of the input data is to eliminate information that is inaudible to the ear. This type of compression is often referred to as perceptual encoding. To help determine what can and cannot be heard, compression algorithms rely on the field of psychoacoustics, i.e., the study of human sound perception. Specifically, audio compression algorithms exploit the conditions under which signal characteristics obscure or mask each other. This phenomenon occurs in three different ways: threshold cut-off, frequency masking, and temporal masking. The remainder of this section explains the nature of these concepts; subsequent sections explain how they are typically applied to audio signal compression.

3.2 Threshold Cut-off:
The human ear detects sounds as a local variation in air pressure, measured as the Sound Pressure Level (SPL). If variations in the SPL are below a certain threshold in amplitude, the ear cannot detect them. This threshold, shown in Figure 2, is a function of the sound's frequency.

3.3 Frequency Masking:
Even if a signal component exceeds the hearing threshold, it may still be masked by louder components that are near it in frequency. This phenomenon is known as frequency masking or simultaneous masking. Each component in a signal can cast a "shadow" over neighboring components. If the neighboring components are covered by this shadow, they will not be heard. The effective result is that one component, the masker, shifts the hearing threshold. Figure 3 shows a situation in which this occurs.
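To make the block structure and the threshold idea concrete, the sketch below splits a mono signal into 576-sample blocks (the MP3 block size quoted above) and compares one block's spectrum against a widely used approximation of the threshold in quiet (a Terhardt-style formula); spectral lines falling under the curve are candidates for removal. The test signal, the uncalibrated dB reference, and the choice of formula are illustrative assumptions rather than material from the paper.

import numpy as np

def absolute_threshold_db(freq_hz: np.ndarray) -> np.ndarray:
    """Commonly used approximation of the threshold in quiet (dB SPL)."""
    f = freq_hz / 1000.0
    return (3.64 * f ** -0.8
            - 6.5 * np.exp(-0.6 * (f - 3.3) ** 2)
            + 1e-3 * f ** 4)

BLOCK = 576  # samples per channel per block, as in MPEG-1 Layer III

def frames(signal: np.ndarray, block: int = BLOCK) -> np.ndarray:
    """Split a mono signal into consecutive fixed-size blocks."""
    n_blocks = len(signal) // block
    return signal[:n_blocks * block].reshape(n_blocks, block)

# A very quiet 14 kHz tone: its level can fall below the threshold in
# quiet, so a perceptual coder may allocate it no bits at all.
fs = 44_100
t = np.arange(fs) / fs
quiet_tone = 1e-4 * np.sin(2 * np.pi * 14_000 * t)

for block in frames(quiet_tone)[:1]:
    spectrum = np.fft.rfft(block * np.hanning(BLOCK))
    freqs = np.fft.rfftfreq(BLOCK, 1 / fs)
    # Rough level estimate in dB (the reference level is arbitrary here;
    # a real encoder calibrates signal levels against SPL).
    level_db = 20 * np.log10(np.abs(spectrum) + 1e-12)
    audible = level_db > absolute_threshold_db(np.maximum(freqs, 20.0))
    print(f"{audible.sum()} of {audible.size} spectral lines exceed the threshold")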
