
A-law algorithm

From WikiAudio
An A-law algorithm is a standard companding algorithm, used in European digital communications systems to optimize, i.e., modify, the dynamic range of an analog signal for digitizing.

A-law vs u-law:
A-law and u-law are two algorithms used in telephony systems across the globe, so it is easy to confuse the two terms with each other. Yet A-law and u-law do have some significant differences that set them apart.

What is A-law? The A-law algorithm can be defined as a standard companding algorithm used to optimize and modify the dynamic range of an analog signal for digitizing. It is widely used in European digital communications systems. This encoding was introduced because linear digital encoding does not serve the wide range of speech well; A-law reduces the varying range of the signal and thereby improves coding efficiency.
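The A-law companding curve itself can be sketched in a few lines of Python. This is a minimal illustration, not a telephony-grade implementation; the function name is ours, and A = 87.6 is the compression parameter used in European systems:

```python
import math

def a_law_compress(x, A=87.6):
    """Compress a normalized sample x in [-1, 1] with the A-law curve.

    The curve is linear for very small |x| (below 1/A) and logarithmic
    above that, so loud samples are compressed more than quiet ones.
    """
    sign = -1.0 if x < 0 else 1.0
    x = abs(x)
    if x < 1.0 / A:
        y = (A * x) / (1.0 + math.log(A))
    else:
        y = (1.0 + math.log(A * x)) / (1.0 + math.log(A))
    return sign * y
```

A full-scale input maps to full scale, while quiet inputs are boosted well above their linear value before quantization, which is exactly what "reducing the varying range of the signal" means here.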

What is u-law? The u-law algorithm, too, can be defined as a standard companding algorithm used to optimize and modify the dynamic range of an analog signal for digitizing. This system is widely used in North America and Japan. Whereas in analog systems it increases the signal-to-noise ratio (SNR) during transmission, in the digital domain it reduces the quantization error, increasing the signal-to-quantization-noise ratio in turn. This algorithm, too, is used because of the diverse range of speech and the need to preserve the finer details of that range during communication. U-law encoding aids in reducing the dynamic range of the signal, which increases the coding efficiency while biasing the signal in a way that yields an improved signal-to-distortion ratio.

What is the difference between A-law and u-law? Dynamic range is the span between the quietest and the loudest sound that can be represented by the signal. The main difference between A-law and u-law can therefore be described in terms of the dynamic range of the two algorithms' output: u-law has a larger dynamic range than A-law, whose dynamic range is comparatively low. Yet a higher dynamic range leads to greater distortion of small signals, so when the sound input is very soft, A-law is the better algorithm to use. The two algorithms also differ in the countries that use them: A-law is widely used by European digital communication systems, while u-law is used widely throughout North America and Japan. Beyond these distinctive factors, there are no significant differences between the two algorithms.

Summary

While A-law is widely used in Europe, u-law is generally used throughout North America and Japan. U-law has a larger dynamic range than A-law, whose dynamic range is comparatively low.
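The dynamic-range difference can be checked numerically with the two continuous companding curves. This is a hedged sketch (A = 87.6 and mu = 255 are the standard telephony parameters; the function names are illustrative, not from any particular library):

```python
import math

def a_law(x, A=87.6):
    """Continuous A-law companding of a normalized sample x in [-1, 1]."""
    s = -1.0 if x < 0 else 1.0
    x = abs(x)
    if x < 1.0 / A:
        return s * (A * x) / (1.0 + math.log(A))
    return s * (1.0 + math.log(A * x)) / (1.0 + math.log(A))

def mu_law(x, mu=255.0):
    """Continuous mu-law companding of a normalized sample x in [-1, 1]."""
    s = -1.0 if x < 0 else 1.0
    return s * math.log(1.0 + mu * abs(x)) / math.log(1.0 + mu)

# For a very quiet input, mu-law lifts the sample further above the
# quantization floor than A-law does -- the "larger dynamic range" --
# at the cost of more small-signal distortion.
quiet = 1e-4
mu_boost = mu_law(quiet)   # larger than a_law(quiet)
a_boost = a_law(quiet)
```

Both curves map full scale to full scale; the difference is entirely in how aggressively they lift quiet samples.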

Read more: http://www.wikidifference.com/difference-between-a-law-and-u-law/

What are A-Law and Mu-Law compression? In the simplest terms, they are standard forms of audio compression for 16-bit sounds. Like most audio compression techniques, they are lossy, which means that when you expand them back from their compressed state, the samples will not be exactly the same as when you compressed them. The compression is always 2:1, meaning that audio compressed with either of these algorithms will always be exactly half of its original size.

Mu-Law and A-Law compression are both logarithmic forms of data compression, and are extremely similar, as you will see in a minute. One definition of Mu-Law is "...a form of logarithmic data compression for audio data. Due to the fact that we hear logarithmically, sound recorded at higher levels does not require the same resolution as low-level sound. This allows us to disregard the least significant bits in high-level data. This turns out to resemble a logarithmic transformation. The resulting compression forces a 16-bit number to be represented as an 8-bit number." (www-s.ti.com/sc/psheets/spra267/spra267.pdf)

And from the comp.dsp newsgroup FAQ we also get this definition: Mu-law (also "u-law") encoding is a form of logarithmic quantization or companding. It's based on the observation that many signals are statistically more likely to be near a low signal level than a high signal level. Therefore, it makes more sense to have more quantization points near a low level than a high level. In a typical mu-law system, linear samples of 14 to 16 bits are companded to 8 bits. Most telephone quality codecs (including the Sparcstation's audio codec) use mu-law encoded samples.

In simpler terms, this means that loud samples can be stored with less precision than quiet ones: we can discard the least significant bits of high-level samples, and humans will not be able to hear a significant difference. Both Mu-Law and A-Law take advantage of this, and are able to compress 16-bit audio in a manner acceptable to human ears. A-Law and Mu-Law compression appear to have been developed at around the same time, and basically only differ by the particular logarithmic function used to determine the translation. When we get to the work of implementing the algorithms, you will see that the differences are nominal.
The main difference is that Mu-Law attempts to keep the top five bits of precision, and uses a logarithmic function to determine the bottom three bits, while A-Law compression keeps the top four bits and uses the logarithmic function to figure out the bottom four. Both of these algorithms are used as telecommunication standards, A-Law being used mainly in Europe, and Mu-Law being used in the United States.
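The sign/exponent/mantissa packing described above can be sketched as a byte-level mu-law codec. This is a hedged, table-free rendering modeled on common G.711-style reference code (bias 0x84, clip 32635); the names are ours, and production implementations usually use lookup tables instead of the bit-scan loop:

```python
BIAS = 0x84    # 132: added so even silence lands inside the first segment
CLIP = 32635   # largest magnitude that still fits after biasing

def mulaw_encode(sample):
    """Compress one 16-bit linear PCM sample to one mu-law byte."""
    sign = 0x80 if sample < 0 else 0
    magnitude = min(abs(sample), CLIP) + BIAS
    # exponent = position of the highest set bit above bit 7 (range 0..7)
    exponent = max((magnitude >> 7).bit_length() - 1, 0)
    mantissa = (magnitude >> (exponent + 3)) & 0x0F
    # mu-law bytes are transmitted bit-inverted
    return ~(sign | (exponent << 4) | mantissa) & 0xFF

def mulaw_decode(byte):
    """Expand one mu-law byte back to a 16-bit linear PCM sample."""
    byte = ~byte & 0xFF
    sign = byte & 0x80
    exponent = (byte >> 4) & 0x07
    mantissa = byte & 0x0F
    magnitude = (((mantissa << 3) + BIAS) << exponent) - BIAS
    return -magnitude if sign else magnitude
```

Silence encodes to 0xFF (all bits set before transmission inversion), and the round trip reproduces each sample to within the width of its logarithmic quantization step.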

An A-law algorithm is a standard companding algorithm, used in digital communications systems of the European digital hierarchy, to optimize, i.e., modify, the dynamic range of an analog signal for digitizing. It is similar to the mu-law algorithm used in the United States. The reason for this encoding is that the wide dynamic range of speech does not lend itself well to efficient linear digital encoding. A-law encoding effectively reduces the dynamic range of the signal, thereby increasing the coding efficiency and resulting in a signal-to-distortion ratio that is superior to that obtained by linear encoding for a given number of bits.
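A-law's byte layout can be sketched the same way as mu-law's. Again this is a hedged, table-free rendering of common G.711-style reference code (the even bits of every A-law byte are inverted by XOR with 0x55 for transmission); the names are illustrative:

```python
ALAW_CLIP = 32635

def alaw_encode(sample):
    """Compress one 16-bit linear PCM sample to one A-law byte."""
    sign = 0x80 if sample >= 0 else 0   # A-law marks positive samples
    magnitude = min(abs(sample), ALAW_CLIP)
    if magnitude >= 256:
        exponent = (magnitude >> 8).bit_length()       # 1..7
        mantissa = (magnitude >> (exponent + 3)) & 0x0F
        byte = (exponent << 4) | mantissa
    else:
        byte = magnitude >> 4                          # linear first segment
    return byte ^ sign ^ 0x55   # invert even bits for transmission

def alaw_decode(byte):
    """Expand one A-law byte back to a 16-bit linear PCM sample."""
    byte ^= 0x55
    sign = byte & 0x80
    exponent = (byte >> 4) & 0x07
    mantissa = byte & 0x0F
    if exponent > 0:
        magnitude = ((mantissa << 4) + 0x108) << (exponent - 1)
    else:
        magnitude = (mantissa << 4) + 8
    return magnitude if sign else -magnitude
```

Note the structural contrast with mu-law: A-law has no bias term, and its lowest segment is purely linear, which is one way of seeing why it distorts very quiet signals less.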

A-law vs u-law
A-law and u-law are two algorithms used to modify an input signal for digitization. These algorithms are implemented in telephony systems all over the world. The two algorithms have a fairly minimal difference, and most people would not notice it. The first difference between the two is the dynamic range of the output: u-law has a larger dynamic range than A-law. Dynamic range is basically the ratio between the quietest and loudest sound that can be represented in the signal. The downside of having a higher dynamic range is greater distortion of small signals. This simply means that A-law would sound better than u-law when the sound input is very soft.

The advantages and disadvantages of one over the other are fairly insignificant, and both are currently in use in different areas of the world. U-law is currently used by companies in North America and in Japan, while A-law is used in Europe. Other areas use a mixture of the two, depending on the country. Most countries use only one standard, so there should be no problems with local calls or even with international calls between countries that use the same standard. A problem arises when a call is made from a country that uses one standard to a country that uses the other. Although it is possible to convert from one algorithm to the other, the conversion is lossy and the result is a degraded signal. To avoid the problem, A-law is the algorithm used whenever either side uses A-law. Because of this, countries that use u-law must also be capable of using A-law, while countries that use A-law do not necessarily have to be able to do u-law.

Summary:
1. U-law has a larger dynamic range compared to A-law.
2. U-law has worse distortion with small signals compared to A-law.
3. U-law is used in North America and Japan, while A-law is commonly used in Europe.
4. A-law takes precedence over u-law with international calls.

Read more: Difference Between A-law and u-Law | Difference Between | A-law vs u-Law http://www.differencebetween.net/technology/difference-between-a-law-and-ulaw/

In telecommunication, a mu-law algorithm (μ-law) is a standard analog signal compression algorithm, used in digital communications systems of the North American digital hierarchy, to optimize, i.e., modify, the dynamic range of an analog signal prior to digitizing. It is similar to the A-law algorithm used in Europe. The reason for this encoding is that the wide dynamic range of speech does not lend itself well to efficient linear digital encoding. Mu-law encoding effectively reduces the dynamic range of the signal, thereby increasing the coding efficiency and resulting in a signal-to-distortion ratio that is greater than that obtained by linear encoding for a given number of bits.
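The claim that companding reduces quantization error for speech-like (mostly quiet) signals can be checked directly. The sketch below, with illustrative names, quantizes a quiet sample to 8 bits once linearly and once through the continuous mu-law curve (mu = 255):

```python
import math

def mu_compress(x, mu=255.0):
    """Continuous mu-law companding of a normalized sample x in [-1, 1]."""
    s = -1.0 if x < 0 else 1.0
    return s * math.log(1.0 + mu * abs(x)) / math.log(1.0 + mu)

def mu_expand(y, mu=255.0):
    """Inverse of mu_compress."""
    s = -1.0 if y < 0 else 1.0
    return s * ((1.0 + mu) ** abs(y) - 1.0) / mu

def quantize(x, bits=8):
    """Round a normalized value to the nearest of 2**bits uniform levels."""
    levels = 2 ** (bits - 1)
    return round(x * levels) / levels

x = 0.01  # a quiet, speech-like sample
linear_error = abs(quantize(x) - x)
companded_error = abs(mu_expand(quantize(mu_compress(x))) - x)
# companded_error comes out roughly an order of magnitude smaller
# than linear_error: the quiet sample gets finer quantization steps.
```

This is the signal-to-quantization-noise improvement the text describes: the nonlinear curve spends the 8 bits where the speech signal actually lives.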