

Audio Specifications
• Audio Distortion
• THD - Total Harmonic Distortion
• THD+N - Total Harmonic Distortion + Noise
• IMD – SMPTE - Intermodulation Distortion
• IMD – ITU-R (CCIF) - Intermodulation Distortion
• S/N or SNR - Signal-To-Noise Ratio
• EIN - Equivalent Input Noise
• BW - Bandwidth or Frequency Response
• CMR or CMRR - Common-Mode Rejection
• Dynamic Range
• Crosstalk or Channel Separation
• Input & Output Impedance
• Maximum Input Level
• Maximum Output Level
• Maximum Gain
• Caveat Emptor
Dennis Bohn
Rane Corporation
RaneNote 145
© 2000 Rane Corporation
RaneNote
AUDIO SPECIFICATIONS
Introduction
Objectively comparing pro audio signal processing products is often impossible. Missing from too many data sheets are the conditions used to obtain the published data. Audio specifications come with conditions. Tests are not performed in a vacuum with random parameters. They are conducted using rigorous procedures, and the conditions must be stated along with the test results.
To understand the conditions, you must first understand the tests. This note introduces the classic audio tests used to characterize audio performance. It describes each test and the conditions necessary to conduct it.
Apologies are made for the many abbreviations, terms and jargon necessary to tell the story. Please make liberal use of Rane's Pro Audio Reference (www.rane.com/digi-dic.html) to help decipher things. Also, note that when the term impedance is used, a constant pure resistance is assumed, unless otherwise stated.
The accompanying table (back page) summarizes common audio specifications and their required conditions. Each test is described next in the order of appearance in the table.
Audio Distortion
By its name you know it is a measure of unwanted signals. Distortion is the name given to anything that alters a pure input signal in any way other than changing its magnitude. The most common forms of distortion are unwanted components or artifacts added to the original signal, including random and hum-related noise. A spectral analysis of the output shows these unwanted components. If a piece of gear is perfect, the spectrum of the output shows only the original signal – nothing else – no added components, no added noise – nothing but the original signal. The following tests are designed to measure different forms of audio distortion.
THD. Total Harmonic Distortion
What is tested? A form of nonlinearity that causes unwanted signals to be added to the input signal that are harmonically related to it. The spectrum of the output shows added frequency components at 2x the original signal, 3x, 4x, 5x, and so on, but no components at, say, 2.6x the original, or any fractional multiplier – only whole-number multipliers.
How is it measured? This technique excites the unit with a single high-purity sine wave and then examines the output for evidence of any frequencies other than the one applied. Performing a spectral analysis on this signal (using a spectrum, or FFT, analyzer) shows that in addition to the original input sine wave, there are components at harmonic intervals of the input frequency. Total harmonic distortion (THD) is then defined as the ratio of the rms voltage of the harmonics to that of the fundamental component. This is accomplished by using a spectrum analyzer to obtain the level of each harmonic and performing an rms summation. That level is then divided by the fundamental level, and cited as the total harmonic distortion (expressed in percent). Measuring individual harmonics with precision is difficult, tedious, and not commonly done; consequently, THD+N (see below) is the more common test. Caveat Emptor: THD+N is always a larger number than plain THD. For this reason, unscrupulous (or clever, depending on your viewpoint) manufacturers choose to spec just THD instead of the more meaningful and easily compared THD+N.
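As an illustration of that arithmetic, here is a minimal numpy sketch of a THD computation from an FFT of a captured output signal. It is not the procedure of any particular analyzer; the window choice, nearest-bin peak picking and the synthetic test signal are all assumptions for demonstration.

```python
import numpy as np

def thd_percent(x, fs, f0, n_harmonics=5):
    """Ratio of the rms sum of harmonics 2..n to the fundamental, in percent."""
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    amp = lambda f: spec[np.argmin(np.abs(freqs - f))]  # nearest-bin amplitude
    harmonics = [amp(k * f0) for k in range(2, n_harmonics + 1)]
    return 100.0 * np.sqrt(sum(h * h for h in harmonics)) / amp(f0)

# Demo: a 1 kHz tone with 0.1% second harmonic deliberately added.
fs, f0 = 96000, 1000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * f0 * t) + 0.001 * np.sin(2 * np.pi * 2 * f0 * t)
print(f"THD = {thd_percent(x, fs, f0):.3f}%")  # ~0.100%
```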
Required Conditions. Since individual harmonic amplitudes are measured, the manufacturer must state the test signal frequency, its level, and the gain conditions set on the tested unit, as well as the number of harmonics measured. Hopefully, it's obvious to the reader that the THD of a 10 kHz signal at a +20 dBu level using maximum gain is apt to differ from the THD of a 1 kHz signal at a -10 dBV level and unity gain. And more different yet, if one manufacturer measures two harmonics while another measures five.
Full disclosure specs will test harmonic distortion over the entire 20 Hz to 20 kHz audio range (this is done easily by sweeping and plotting the results), at the pro audio level of +4 dBu. For all signal processing equipment except mic preamps, the preferred gain setting is unity. For mic preamps, the standard practice is to use maximum gain. Too often THD is spec'd only at 1 kHz, or worse, with no mention of frequency at all, and nothing about level or gain settings, let alone harmonic count.
Correct: THD (5th-order) less than 0.01%, +4 dBu, 20–20 kHz, unity gain
Wrong: THD less than 0.01%
THD+N. Total Harmonic Distortion + Noise
What is tested? Similar to the THD test above, except instead of measuring individual harmonics, this test measures everything added to the input signal. This is a wonderful test, since everything that comes out of the unit that isn't the pure test signal is measured and included – harmonics, hum, noise, RFI, buzz – everything.
How is it measured? THD+N is the rms summation of all signal components (excluding the fundamental) over some prescribed bandwidth. Distortion analyzers make this measurement by removing the fundamental (using a deep and narrow notch filter) and measuring what's left using a bandwidth filter (typically 22 kHz, 30 kHz or 80 kHz). The remainder contains harmonics as well as random noise and other artifacts.
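A frequency-domain sketch of the same idea: notch out the fundamental, keep everything else inside the measurement bandwidth, and take the rms ratio against the total in-band signal. The notch width and the synthetic test signal are illustrative assumptions, not part of any standard.

```python
import numpy as np

def thdn_percent(x, fs, f0, bw=22000.0, notch_hz=50.0):
    """rms of everything but the fundamental (within bw) over total in-band rms."""
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    in_band = freqs <= bw
    total_rms = np.sqrt(np.sum(np.abs(spec[in_band]) ** 2))
    keep = in_band & (np.abs(freqs - f0) > notch_hz)  # notch the fundamental
    residual_rms = np.sqrt(np.sum(np.abs(spec[keep]) ** 2))
    return 100.0 * residual_rms / total_rms

# Demo: 1 kHz tone plus a small amount of broadband noise.
fs = 96000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 1000 * t) + 1e-4 * np.random.randn(fs)
print(f"THD+N = {thdn_percent(x, fs, 1000):.4f}%")
```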
Weighting filters are rarely used. When they are used, too often it is to hide pronounced AC mains hum artifacts. An exception is the strong argument to use the ITU-R (CCIR) 468 curve because of its proven correlation to what is heard. However, since it adds 12 dB of gain in the critical midband (the whole point), it makes THD+N measurements bigger, so marketeers prevent its widespread use.
[Historical Note: Many old distortion analyzers labeled "THD" actually measured THD+N.]
Required Conditions. Same as THD (frequency, level & gain settings), except instead of stating the number of harmonics measured, the residual noise bandwidth is spec'd, along with whatever weighting filter was used. The preferred value is a 20 kHz (or 22 kHz) measurement bandwidth, and "flat," i.e., no weighting filter.
Conflicting views exist regarding THD+N bandwidth measurements. One argument goes: it makes no sense to measure THD at 20 kHz if your measurement bandwidth doesn't include the harmonics. Valid point. And one supported by the IEC, which says that THD should not be tested any higher than 6 kHz if measuring five harmonics using a 30 kHz bandwidth, or 10 kHz if only measuring the first three harmonics. Another argument states that since most people can't even hear the fundamental at 20 kHz, let alone the second harmonic, there is no need to measure anything beyond 20 kHz. Fair enough. However, the case is made that using an 80 kHz bandwidth is crucial, not because of 20 kHz harmonics, but because it reveals other artifacts that can indicate high frequency problems. All true points, but competition being what it is, standardizing on publishing THD+N figures measured flat over 22 kHz seems justified, while still using an 80 kHz bandwidth during the design, development and manufacturing stages.
Correct: THD+N less than 0.01%, +4 dBu, 20–20 kHz, unity gain, 20 kHz BW
Wrong: THD+N less than 0.01%
IMD – SMPTE. Intermodulation Distortion
– SMPTE Method
What is tested? A more meaningful test than THD, intermodulation distortion gives a measure of distortion products not harmonically related to the pure signal. This is important since these artifacts make music sound harsh and unpleasant.
Intermodulation distortion testing was first adopted in the U.S. as a practical procedure in the motion picture industry in 1939 by the Society of Motion Picture Engineers (SMPE – no "T" [television] yet) and made into a standard in 1941.
How is it measured? The test signal is a low frequency (60 Hz) and a non-harmonically related high frequency (7 kHz) tone, summed together in a 4:1 amplitude ratio. (Other frequencies and amplitude ratios are used; for example, DIN favors 250 Hz & 8 kHz.) This signal is applied to the unit, and the output signal is examined for modulation of the upper frequency by the low frequency tone. As with harmonic distortion measurement, this is done with a spectrum analyzer or a dedicated intermodulation distortion analyzer. The modulation components of the upper signal appear as sidebands spaced at multiples of the lower frequency tone. The amplitudes of the sidebands are rms summed and expressed as a percentage of the upper frequency level.
[Noise has little effect on SMPTE measurements because the test uses a low-pass filter that sets the measurement bandwidth, thus restricting noise components; therefore there is no need for an "IM+N" test.]
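To make the sideband summation concrete, here is a hedged numpy sketch. A dedicated analyzer demodulates the 7 kHz tone instead of using an FFT; the sideband count and the amplitude-modulated demo signal are assumptions for illustration only.

```python
import numpy as np

def imd_smpte_percent(x, fs, f_lo=60.0, f_hi=7000.0, n_sidebands=3):
    """rms sum of sidebands around f_hi (at multiples of f_lo) over f_hi level."""
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    amp = lambda f: spec[np.argmin(np.abs(freqs - f))]
    sidebands = [amp(f_hi + s * k * f_lo)
                 for k in range(1, n_sidebands + 1) for s in (+1, -1)]
    return 100.0 * np.sqrt(sum(a * a for a in sidebands)) / amp(f_hi)

# Demo: 4:1 mix of 60 Hz and 7 kHz, with the 7 kHz tone slightly
# amplitude-modulated by the 60 Hz tone (the distortion being measured).
fs = 96000
t = np.arange(fs) / fs
lo = np.sin(2 * np.pi * 60 * t)
x = 4 * lo + (1 + 0.002 * lo) * np.sin(2 * np.pi * 7000 * t)
print(f"IMD (SMPTE) = {imd_smpte_percent(x, fs):.3f}%")  # ~0.14%
```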
Required Conditions. SMPTE specifies that this test use 60 Hz and 7 kHz combined in a 12 dB ratio (4:1), and that the peak value of the signal be stated along with the results. Strictly speaking, all that needs stating is "SMPTE IM" and the peak value used. However, measuring the peak value is difficult. Alternatively, a common method is to set the low frequency tone (60 Hz) for +4 dBu and then mix in the 7 kHz tone at a value of –8 dBu (12 dB less).
Correct: IMD (SMPTE) less than 0.01%, 60 Hz/7 kHz, 4:1, +4 dBu
Wrong: IMD less than 0.01%
IMD – ITU-R (CCIF). Intermodulation
Distortion – ITU-R Method
What is tested? This tests for non-harmonic nonlinearities, using two equal-amplitude, closely spaced, high frequency tones, and looking for beat frequencies between them. Use of beat frequencies for distortion detection dates back to work first documented in Germany in 1929, but was not considered a standard until 1937, when the CCIF (International Telephonic Consultative Committee) recommended the test. [This test is often mistakenly referred to as the CCIR method (as opposed to the CCIF method) – a mistake compounded by the many correct audio references to the CCIR 468 weighting filter.] Ultimately, the CCIF became the radiocommunications sector (ITU-R) of the ITU (International Telecommunications Union); therefore the test is now known as the IMD (ITU-R).
How is it measured? The common test signal is a pair of equal amplitude tones spaced 1 kHz apart. Nonlinearity in the unit causes intermodulation products between the two signals. These are found by subtracting the two tones to find the first product at 1 kHz, then subtracting the second tone from twice the first tone, then the first tone from twice the second, and so on. Usually only the first two or three components are measured, but for the oft-seen case of 19 kHz and 20 kHz, only the 1 kHz component is measured.
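A sketch of that simplest ITU-R case, measuring only the second-order difference product at f2 - f1. Referencing the product to one tone's amplitude is an assumption here; analyzers differ on the reference convention.

```python
import numpy as np

def imd_itur_percent(x, fs, f1=19000.0, f2=20000.0):
    """Second-order difference product at f2 - f1, relative to one test tone."""
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    amp = lambda f: spec[np.argmin(np.abs(freqs - f))]
    return 100.0 * amp(f2 - f1) / amp(f2)

# Demo: 19 kHz + 20 kHz twin tones with a 0.1% difference product at 1 kHz.
fs = 96000
t = np.arange(fs) / fs
x = (np.sin(2 * np.pi * 19000 * t) + np.sin(2 * np.pi * 20000 * t)
     + 0.001 * np.sin(2 * np.pi * 1000 * t))
print(f"IMD (ITU-R) = {imd_itur_percent(x, fs):.3f}%")  # ~0.1%
```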
Required Conditions. Many variations exist for this test. Therefore, the manufacturer needs to clearly spell out the two frequencies used, and their level. The ratio is understood to be 1:1.
Correct: IMD (ITU-R) less than 0.01%, 19 kHz/20 kHz, 1:1, +4 dBu
Wrong: IMD less than 0.01%
S/N or SNR. Signal-To-Noise Ratio
What is tested? This specification indirectly tells you how noisy a unit is. S/N is calculated by measuring a unit's output noise, with no signal present, and all controls set in a prescribed manner. This figure is used to calculate a ratio between it and a fixed output reference signal, with the result expressed in dB.
How is it measured? No input signal is used; however, the input is not left open, or unterminated. The usual practice is to leave the unit connected to the signal generator (with its low output impedance) set for zero volts. Alternatively, a resistor equal to the expected driving impedance is connected between the inputs. The magnitude of the output noise is measured using an rms-detecting voltmeter. Noise voltage is a function of bandwidth – the wider the bandwidth, the greater the noise. This is an inescapable physical fact. Thus, a bandwidth is selected for the measuring voltmeter. If this is not done, the noise voltage measures extremely high, but does not correlate well with what is heard. The most common bandwidth seen is 22 kHz (the extra 2 kHz allows the bandwidth-limiting filter to take effect without reducing the response at 20 kHz). This is called a "flat" measurement, since all frequencies are measured equally.
Alternatively, noise filters, or weighting filters, are used when measuring noise. Most often seen is A-weighting, but a more accurate one is called the ITU-R (old CCIR) 468 filter. This filter is preferred because it shapes the measured noise in a way that relates well with what's heard.
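The S/N arithmetic itself is a simple dB ratio. The voltages below are illustrative; they happen to match the typical-spec figure on the back page (+4 dBu = 1.23 V against a 39 µV noise floor).

```python
import math

def snr_db(v_reference, v_noise_rms):
    """S/N in dB: fixed reference output level over measured output noise."""
    return 20.0 * math.log10(v_reference / v_noise_rms)

# +4 dBu reference (1.23 V) against 39 µV of output noise in a 22 kHz BW:
print(f"S/N = {snr_db(1.23, 39e-6):.0f} dB re +4 dBu")  # ~90 dB
```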
Pro audio equipment often lists an A-weighted noise spec – not because it correlates well with our hearing – but because it can "hide" nasty hum components that make for bad noise specs. Always wonder if a manufacturer is hiding something when you see A-weighted specs. While noise filters are entirely appropriate and even desired when measuring other types of noise, it is an abuse to use them to disguise equipment hum problems. A-weighting rolls off the low end, thus reducing the most annoying 2nd and 3rd line harmonics by about 20 dB and 12 dB respectively. Sometimes A-weighting can "improve" a noise spec by 10 dB.
The argument used to justify this is that the ear is not sensitive to low frequencies at low levels (à la Fletcher-Munson equal loudness curves), but that argument is false. Fletcher-Munson curves document equal loudness of single tones. Their curves tell us nothing of the ear's astonishing ability to sync in and lock onto repetitive tones – like hum components – even when these tones lie beneath the noise floor. This is what A-weighting can hide. For this reason most manufacturers shy from using it; instead they spec S/N figures "flat" or use the ITU-R 468 curve (which actually makes their numbers look worse, but correlate better with the real world).
However, an exception has arisen: digital products using A/D and D/A converters regularly spec S/N and dynamic range using A-weighting. This follows the semiconductor industry's practice of spec'ing delta-sigma data converters A-weighted. They do this because they use clever noise shaping tricks to create 24-bit converters with acceptable noise behavior. All these tricks squeeze the noise out of the audio bandwidth and push it up into the higher inaudible frequencies. The noise may be inaudible, but it is still measurable and can give misleading results unless limited. When used this way, the A-weighting filter rolls off the high frequency noise better than the flat 22 kHz filter and compares better with the listening experience. The fact that the low end also rolls off is irrelevant in this application. (See the RaneNote Digital Dharma of Audio A/D Converters.)
Required Conditions. In order for the published figure to have any meaning, it must include the measurement bandwidth, including any weighting filters, and the reference signal level. Stating that a unit has a "S/N = 90 dB" is meaningless without knowing what the signal level is, and over what bandwidth the noise was measured. For example, if one product references S/N to its maximum output level of, say, +20 dBu, and another product has the same stated 90 dB S/N, but its reference level is +4 dBu, then the second product is, in fact, 16 dB quieter. Likewise, you cannot accurately compare numbers if one unit is measured over a BW of 80 kHz and another uses 20 kHz, or if one is measured flat and the other uses A-weighting. By far, however, the most common problem is not stating any conditions.
Correct: S/N = 90 dB re +4 dBu, 22 kHz BW, unity gain
Wrong: S/N = 90 dB
EIN. Equivalent Input Noise or Input
Referred Noise
What is tested? Equivalent input noise, or input referred noise, is how noise is spec'd on mixing consoles, standalone mic preamps and other signal processing units with mic inputs. The problem in measuring mixing consoles (and all mic preamps) is knowing ahead of time how much gain is going to be used. The mic stage itself is the dominant noise generator; therefore, the output noise is almost totally determined by the amount of gain: turn the gain up, and the output noise goes up accordingly. Thus, the EIN is the amount of noise added to the input signal. Both are then amplified to obtain the final output signal.
For example, say your mixer has an EIN of –130 dBu. This means the noise is 130 dB below a reference point of 0.775 volts (0 dBu). If your microphone puts out, say, -50 dBu under normal conditions, then the S/N at the input to the mic preamp is 80 dB (i.e., the added noise is 80 dB below the input signal). This is uniquely determined by the magnitude of the input signal and the EIN. From here on out, turning up the gain increases both the signal and the noise by the same amount.
How is it measured? With the gain set for maximum and the input terminated with the expected source impedance, the output noise is measured with an rms voltmeter fitted with a bandwidth or weighting filter.
Required Conditions. This is a spec where test conditions are critical. It is very easy to deceive without them. Since high-gain mic stages greatly amplify source noise, the terminating input resistance must be stated. Two equally quiet inputs will measure vastly differently if not using the identical input impedance. The standard source impedance is 150 Ω. As unintuitive as it may be, a plain resistor, hooked up to nothing, generates noise, and the larger the resistor value, the greater the noise. It is called thermal noise or Johnson noise (after its discoverer, J. B. Johnson, in 1928) and results from the motion of the electron charge of the atoms making up the resistor. All that moving about is called thermal agitation (caused by heat – the hotter the resistor, the noisier).
The input terminating resistor defines the lower limit of noise performance. In use, a mic stage cannot be quieter than its source. A trick which unscrupulous manufacturers may use is to spec their mic stage with the input shorted – a big no-no, since it does not represent the real performance of the preamp.
The next biggie in spec'ing the EIN of mic stages is bandwidth. This same thermal noise limit of the input terminating resistance is a strong function of measurement bandwidth. For example, the noise voltage generated by the standard 150 Ω input resistor, measured over a bandwidth of 20 kHz (at room temperature), is –131 dBu, i.e., you cannot have an operating mic stage, with a 150 Ω source, quieter than –131 dBu. However, if you use only a 10 kHz bandwidth, then the noise drops to –134 dBu, a big 3 dB improvement. (For those paying close attention: it is not 6 dB like you might expect when the bandwidth is halved. Noise voltage is a square-root function of bandwidth, so it is reduced by the square root of one-half, or 0.707, which is 3 dB less.)
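Those resistor-noise figures follow from the Johnson noise formula v_n = sqrt(4kTRB). A quick check in Python, assuming a room temperature of about 290 K:

```python
import math

def thermal_noise_dbu(r_ohms, bw_hz, temp_k=290.0):
    """Johnson noise of a resistor, in dBu (dB re 0.775 V)."""
    k = 1.380649e-23  # Boltzmann constant, J/K
    v_rms = math.sqrt(4.0 * k * temp_k * r_ohms * bw_hz)
    return 20.0 * math.log10(v_rms / 0.775)

print(f"{thermal_noise_dbu(150, 20000):.0f} dBu")  # ~ -131 dBu over 20 kHz
print(f"{thermal_noise_dbu(150, 10000):.0f} dBu")  # ~ -134 dBu over 10 kHz
```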
Since the measured output noise is such a strong function of bandwidth and gain, it is recommended to use no weighting filters. They only complicate comparison among manufacturers. Remember: if a manufacturer's reported EIN seems too good to be true, look for the details. They may not be lying, only using favorable conditions to deceive.
Correct: EIN = -130 dBu, 22 kHz BW, max gain, Rs = 150 Ω
Wrong: EIN = -130 dBu
BW. Bandwidth or Frequency Response
What is tested? The unit's bandwidth, or the range of frequencies it passes. All frequencies above and below a unit's frequency response limits are attenuated – sometimes severely.
How is it measured? A 1 kHz tone of high purity and precise amplitude is applied to the unit and the output measured using a dB-calibrated rms voltmeter. This value is set as the 0 dB reference point. Next, the generator is swept upward in frequency (from the 1 kHz reference point), keeping the source amplitude precisely constant, until the output is reduced in level by the amount specified. This point becomes the upper frequency limit. The test generator is then swept down in frequency from 1 kHz until the lower frequency limit is found by the same means.
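A sketch of that sweep procedure in Python. The function response_db is hypothetical, standing in for an actual measurement; the one-pole rolloffs in the demo are assumptions for illustration.

```python
import numpy as np

def find_limits(response_db, drop_db=0.5):
    """Lowest/highest frequencies still within drop_db of the 1 kHz reference."""
    freqs = np.logspace(np.log10(10), np.log10(40000), 2000)  # 10 Hz - 40 kHz
    levels = np.array([response_db(f) for f in freqs])
    passband = freqs[levels >= -drop_db]
    return passband.min(), passband.max()

# Hypothetical device: one-pole rolloffs near 10 Hz and 30 kHz.
def response_db(f):
    hp = (f / 10.0) ** 2 / (1 + (f / 10.0) ** 2)  # high-pass power response
    lp = 1.0 / (1 + (f / 30000.0) ** 2)           # low-pass power response
    return 10 * np.log10(hp * lp)

lo, hi = find_limits(response_db)
print(f"Frequency Response: {lo:.0f} Hz - {hi / 1000:.1f} kHz, +0/-0.5 dB")
```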
Required Conditions. The reduction in output level is relative to 1 kHz; therefore, the 1 kHz level establishes the 0 dB point. What you need to know is how far down the response is where the manufacturer measured it. Is it 0.5 dB, 3 dB, or (among loudspeaker manufacturers) maybe even 10 dB?
Note that there is no discussion of an increase, that is, no mention of the amplitude rising. If a unit's frequency response rises at any point, especially the endpoints, it indicates a fundamental instability problem and you should run from the store. Properly designed solid-state audio equipment does not ever gain in amplitude when set for flat response (tube or valve designs using output transformers are a different story and are not dealt with here). If you have ever wondered why manufacturers state a limit of "+0 dB", that is why. The preferred condition here is at least 20 Hz to 20 kHz measured +0/-0.5 dB.
Correct: Frequency Response = 20–20 kHz, +0/-0.5 dB
Wrong: Frequency Response = 20-20 kHz
CMR or CMRR. Common-Mode Rejection
or Common-Mode Rejection Ratio
What is tested? This gives a measure of a balanced input stage's ability to reject common-mode signals. Common-mode is the name given to signals applied simultaneously to both inputs. Normal differential signals arrive as a pair of equal voltages that are opposite in polarity: one applied to the positive input and the other to the negative input. A common-mode signal drives both inputs with the same polarity. It is the job of a well-designed balanced input stage to amplify differential signals, while simultaneously rejecting common-mode signals. Most common-mode signals result from RFI (radio frequency interference) and EMI (electromagnetic interference, e.g., hum and buzz) inducing themselves into the connecting cable. Since most cables consist of a tightly twisted pair, the interfering signals are induced equally into each wire. The other big contributors to common-mode signals are power supply and ground related problems between the source and the balanced input stage.
How is it measured? Either the unit is adjusted for unity gain, or its gain is first determined and noted. Next, a generator is hooked up to drive both inputs simultaneously through two equal and carefully matched source resistors valued at one-half the expected source resistance, i.e., each input is driven from one-half the normal source impedance. The output of the balanced stage is measured using an rms voltmeter and noted. A ratio is calculated by dividing the generator input voltage by the measured output voltage. This ratio is then multiplied by the gain of the unit, and the answer expressed in dB.
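In code form, the computation described above is just a ratio scaled by the unit's gain; the example numbers are invented for illustration.

```python
import math

def cmrr_db(v_cm_in, v_out, gain):
    """Common-mode input over measured output, times differential gain, in dB."""
    return 20.0 * math.log10(gain * v_cm_in / v_out)

# 1 V common-mode in, 10 mV leaking out, unity gain:
print(f"CMRR = {cmrr_db(1.0, 0.010, 1.0):.0f} dB @ 1 kHz")  # 40 dB
```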
Required Conditions. The results may be frequency-dependent; therefore, the manufacturer must state the frequency tested along with the CMR figure. Most manufacturers spec this at 1 kHz for comparison reasons. The results are assumed constant for all input levels, unless stated otherwise.
Correct: CMRR = 40 dB @ 1 kHz
Wrong: CMRR = 40 dB
Dynamic Range
What is tested? First, the maximum output voltage and then the output noise floor are measured, and their ratio expressed in dB. Sounds simple, and it is simple, but you still have to be careful when comparing units.
How is it measured? The maximum output voltage is measured as described below, and the output noise floor is measured using an rms voltmeter fitted with a bandwidth filter (with the input generator set for zero volts). A ratio is formed and the result expressed in dB.
Required Conditions. Since this is the ratio of the maximum output signal to the noise floor, the manufacturer must state what the maximum level is; otherwise, you have no way to evaluate the significance of the number. If one company says their product has a dynamic range of 120 dB and another says theirs is 126 dB, before you jump to buy the bigger number, first ask, "Relative to what?" Second, ask, "Measured over what bandwidth, and were any weighting filters used?" You cannot know which is better without knowing the required conditions.
Again, beware of A-weighted specs. Use of A-weighting should only appear in dynamic range specs for digital products with data converters (see the discussion under S/N). For instance, using it to spec dynamic range in an analog product may indicate the unit has hum components that might otherwise restrict the dynamic range.
Correct: Dynamic Range = 120 dB re +26 dBu, 22 kHz BW
Wrong: Dynamic Range = 120 dB
Crosstalk or Channel Separation
What is tested? Signals from one channel leaking into another channel. This happens between independent channels as well as between left and right stereo channels, or between all six channels of a 5.1 surround processor, for instance.
How is it measured? A generator drives one channel, and this channel's output value is noted; meanwhile the other channel is set for zero volts (its generator is left hooked up but turned to zero, or alternatively the input is terminated with the expected source impedance). Under no circumstances is the measured channel left open. Whatever signal is induced into the tested channel is measured at its output with an rms voltmeter and noted. A ratio is formed by dividing the unwanted signal by the above-noted output test value, and the answer expressed in dB. Since the ratio is always less than one (crosstalk is always less than the original signal), the expression results in negative dB ratings. For example, a crosstalk spec of –60 dB is interpreted to mean the unwanted signal is 60 dB below the test signal.
Required Conditions. Most crosstalk results from printed circuit board traces "talking" to each other. The mechanism is capacitive coupling between the closely spaced traces and layers. This makes it strongly frequency dependent, with a characteristic rise of 6 dB/octave, i.e., the crosstalk gets worse at a 6 dB/octave rate with increasing frequency. Therefore, knowing the frequency used for testing is essential. And if it is only spec'd at 1 kHz (very common), then you can predict what it may be for higher frequencies. For instance, using the example from above of a –60 dB rating at, say, 1 kHz, the crosstalk at 16 kHz (four octaves up) probably degrades to –36 dB. But don't panic: the reason this usually isn't a problem is that the signal level at high frequencies is also reduced by about the same 6 dB/octave rate, so the overall S/N ratio isn't affected much.
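That 6 dB/octave extrapolation is easy to sketch; the example reproduces the -60 dB at 1 kHz to -36 dB at 16 kHz prediction above.

```python
import math

def crosstalk_at(f_hz, spec_db, f_spec_hz=1000.0, slope_db_per_octave=6.0):
    """Extrapolate a crosstalk spec along a 6 dB/octave rise."""
    return spec_db + slope_db_per_octave * math.log2(f_hz / f_spec_hz)

print(f"{crosstalk_at(16000, -60.0):.0f} dB")  # -36 dB at 16 kHz
```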
Another important point is that crosstalk is assumed level independent unless otherwise noted. This is because the parasitic capacitors formed by the traces are uniquely determined by the layout geometry, not the strength of the signal.
Correct: Crosstalk = -60 dB, 20–20 kHz, +4 dBu, channel-to-channel
Wrong: Crosstalk = -60 dB
Input & Output Impedance
What is tested? Input impedance measures the load
that the unit represents to the driving source, while
output impedance measures the source impedance that
drives the next unit.
How is it measured? Rarely are these values actually measured. Usually they are determined by inspection and analysis of the final schematic and stated as a pure resistance in ohms. Input and output reactive elements are usually small enough to be ignored. (Phono input stages and other inputs designed for specific load reactance are exceptions.)
Required Conditions. The only required information is whether the stated impedance is balanced or unbalanced (balanced impedances usually are exactly twice unbalanced ones). For clarity when spec'ing balanced circuits, it is preferred to state whether the resistance is "floating" (exists between the two lines) or ground referenced (exists from each line to ground). The impedances are assumed constant for all frequencies within the unit's bandwidth and for all signal levels, unless stated otherwise. (Note that while this is true for input impedances, most output impedances are, in fact, frequency-dependent – some heavily.)
Correct: Input Impedance = 20k Ω, balanced line-to-line
Wrong: Input Impedance = 20k Ω
Maximum Input Level
What is tested? Te input stage is measured to estab-
lish the maximum signal level in dBu that causes clip-
ping or specifed level of distortion.
How is it measured? During final product development, the design engineer uses an adjustable 1 kHz input signal, an oscilloscope and a distortion analyzer. In the field, apply a 1 kHz source, and while viewing the output, increase the input signal until visible clipping is observed. It is essential that all downstream gain and level controls be set low enough that you are assured the applied signal is clipping just the first stage. Check this by turning each level control and verifying that the clipped waveform just gets bigger or smaller and never reduces the clipping.
Required Conditions. Whether the applied signal is balanced or unbalanced, and the amount of distortion or clipping used to establish the maximum, must be stated. The preferred value is balanced and 1% distortion, but often manufacturers use "visible clipping," which is as much as 10% distortion, and creates a false impression that the input stage can handle signals a few dB hotter than it really can. No one would accept 10% distortion at the measurement point, so to hide it, it is not stated at all – only the max value is given, without conditions. Buyer beware.
The results are assumed constant for all frequencies within the unit's bandwidth and for all levels of input, unless stated otherwise.
Correct: Maximum Input Level = +20 dBu, balanced, <1% THD
Wrong: Maximum Input Level = +20 dBu
Maximum Output Level
What is tested? The unit's output is measured to establish the maximum signal possible before visible clipping or a specified level of distortion.
How is it measured? The output is fixed with a standard load resistor and measured either balanced or unbalanced, using an oscilloscope and a distortion analyzer. A 1 kHz input signal is increased in amplitude until the output measures the specified amount of distortion, and that value is expressed in dBu. Next, the signal is swept through the entire audio range to check that this level does not change with frequency.
Required Conditions. Two important issues are present here. The first is the need to know whether a unit can swing enough unclipped volts for your application. The second is more difficult and potentially more serious, and that is the unit's ability to drive long lines without stability problems or frequency loss.
The manufacturer must state whether the spec is for balanced or unbalanced use (usually balanced operation results in 6 dB more swing); what distortion was used for determination (with the preferred value being 1% THD); over what frequency range this spec is valid (prefer 20 Hz – 20 kHz; watch out for 1 kHz-only specs); and what load impedance is guaranteed (2k Ω or greater is preferred; 600 Ω operation is obsolete and no longer required except for specialized applications, broadcast and telecommunications being two of them).
This last item applies only to signal processing units designed as line drivers: these should specify a max cable length and the specs of the cable – either by specific brand & type, or by the max cable capacitance in pF/meter.
Correct: Max Output Level = +26 dBu, balanced, 20–20 kHz, >2k Ω, <1% THD
Wrong: Max Output Level = +26 dBu
Maximum Gain
What is tested? The ratio of the largest possible output signal to a fixed input signal, expressed in dB, is called the maximum gain of a unit.
How is it measured? With all level & gain controls set to maximum, and for an input of 1 kHz at an average level that does not clip the output, the output of the unit is measured using an rms voltmeter. The output level is divided by the input level and the result expressed in dB.
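The computation is the familiar 20·log10 voltage ratio; the example voltages below are invented to give the +6 dB of the spec line that follows.

```python
import math

def gain_db(v_out, v_in):
    """Gain as the dB ratio of output to input rms voltages."""
    return 20.0 * math.log10(v_out / v_in)

# 0 dBu (0.775 V) in, 1.55 V out:
print(f"Maximum Gain = {gain_db(1.55, 0.775):+.0f} dB")  # +6 dB
```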
Required Conditions. There is nothing controversial here, but confusion results if the test results do not clearly state whether the test was done using balanced or unbalanced outputs. Often a unit's gain differs 6 dB between balanced and unbalanced hook-up. Note that it usually does not change the gain whether the input is driven balanced or unbalanced; only the output connection is significant.
The results are assumed constant for all frequencies within the unit's bandwidth and for all levels of input, unless stated otherwise.
Correct: Maximum Gain = +6 dB, balanced-in to
balanced-out
Wrong: Maximum Gain = +6 dB
Caveat Emptor
Specifications Require Conditions. Accurate audio measurements are difficult and expensive. To purchase the test equipment necessary to perform all the tests described here would cost you a minimum of $10,000. And that price is for computer-controlled analog test equipment; if you want the cool digital-based, dual-domain stuff, double it. This is why virtually all purchasers of pro audio equipment must rely on the honesty and integrity of the manufacturers involved, and the accuracy and completeness of their data sheets and sales materials.
Tolerances or Limits. Another caveat for the informed buyer is to always look for tolerances or worst-case limits associated with the specs. Limits are rare, but they are the gristle that gives specifications truth. When you see specs without limits, ask yourself: is this manufacturer NOT going to ship the product if it does not exactly meet the printed spec? Of course not. The product will ship, and probably by the hundreds. So what is the real limit? At what point will the product not ship? If it's off by 3 dB, or 5%, or 100 Hz – what? When does the manufacturer say no? The only way you can know is if they publish specification tolerances and limits.
Correct: S/N = 90 dB (±2 dB), re +4 dBu, 22 kHz BW, unity gain
Wrong: S/N = 90 dB
Common Signal Processing Specs With Required Conditions

THD – Total Harmonic Distortion (%)
Required Conditions: Frequency; Level; Gain Settings; Harmonic Order Measured
Preferred Values*: 20 Hz – 20 kHz; +4 dBu; Unity (Max for Mic Preamps); At least 5th-order (5 harmonics)

THD+N – Total Harmonic Distortion plus Noise (%)
Required Conditions: Frequency; Level; Gain Settings; Noise Bandwidth or Weighting Filter
Preferred Values*: 20 Hz – 20 kHz; +4 dBu; Unity (Max for Mic Preamps); 22 kHz BW (or ITU-R 468 Curve)

IM or IMD – Intermodulation Distortion, SMPTE method (%)
Required Conditions: Type; 2 Frequencies; Ratio; Level
Preferred Values*: SMPTE; 60 Hz/7 kHz; 4:1; +4 dBu (60 Hz)

IM or IMD – Intermodulation Distortion, ITU-R method (was CCIF, now changed to ITU-R) (%)
Required Conditions: Type; 2 Frequencies; Ratio; Level
Preferred Values*: ITU-R (or Difference-Tone); 13 kHz/14 kHz (or 19 kHz/20 kHz); 1:1; +4 dBu

S/N or SNR – Signal-to-Noise Ratio (dB)
Required Conditions: Reference Level; Noise Bandwidth or Weighting Filter; Gain Settings
Preferred Values*: re +4 dBu; 22 kHz BW (or ITU-R 468 Curve); Unity (Max for Mic Preamps)

EIN – Equivalent Input Noise or Input Referred Noise (–dBu)
Required Conditions: Input Terminating Impedance; Gain; Noise Bandwidth or Weighting Filter
Preferred Values*: 150 Ω; Maximum; 22 kHz BW (Flat – No Weighting)

BW – Frequency Response (Hz)
Required Conditions: Level Change re 1 kHz
Preferred Values*: +0/–0.5 dB (or +0/–3 dB)

CMR or CMRR – Common-Mode Rejection or Common-Mode Rejection Ratio (dB)
Required Conditions: Frequency (assumed independent of level, unless noted otherwise)
Preferred Values*: 1 kHz

Dynamic Range (dB)
Required Conditions: Maximum Output Level; Noise Bandwidth or Weighting Filter
Preferred Values*: +26 dBu; 22 kHz BW (No Weighting Filter)

Crosstalk (as –dB) or Channel Separation (as +dB)
Required Conditions: Frequency; Level; What-to-What
Preferred Values*: 20 Hz – 20 kHz; +4 dBu; Chan.-to-Chan. & Left-to-Right

Input & Output Impedance (Ω)
Required Conditions: Balanced or Unbalanced; Floating or Ground Referenced (assumed frequency-independent with negligible reactance unless specified)
Preferred Values*: Balanced; No Preference

Maximum Input Level (dBu)
Required Conditions: Balanced or Unbalanced; THD at Maximum Input Level
Preferred Values*: Balanced; 1%

Maximum Output Level (dBu)
Required Conditions: Balanced or Unbalanced; Minimum Load Impedance; THD at Maximum Output Level; Bandwidth; Optional: Maximum Cable Length
Preferred Values*: Balanced; 2k Ω; 1%; 20 Hz – 20 kHz; Cable Length & Type (or pF/meter)

Maximum Gain (dB)
Required Conditions: Balanced or Unbalanced Output (assumed constant over full BW & at all levels, unless otherwise noted)
Preferred Values*: Balanced

* Based on the common practices of pro audio signal processing manufacturers.
[Figure: Signal Processing Definitions & Typical Specs – a level diagram relating -10 dBV (315 mV) Consumer Reference Level, +4 dBu (1.23 V) Pro Audio Reference Level, +26 dBu (15.5 V) Maximum Output Level and -86 dBu (39 µV) Output Noise Floor, giving S/N = 90 dB, Dynamic Range = 112 dB and Headroom = 22 dB.]
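Those figure levels interconvert with the standard dB references (0 dBu = 0.775 V, 0 dBV = 1 V); a quick sketch reproducing them, with small rounding differences:

```python
def dbu_to_volts(dbu):
    return 0.775 * 10 ** (dbu / 20.0)  # 0 dBu = 0.775 V rms

def dbv_to_volts(dbv):
    return 1.0 * 10 ** (dbv / 20.0)    # 0 dBV = 1 V rms

print(f"{dbv_to_volts(-10) * 1000:.0f} mV")  # -10 dBV consumer level, ~316 mV
print(f"{dbu_to_volts(4):.2f} V")            # +4 dBu pro level, ~1.23 V
print(f"{dbu_to_volts(26):.1f} V")           # +26 dBu max output, ~15.5 V
print(f"{dbu_to_volts(-86) * 1e6:.0f} µV")   # -86 dBu noise floor, ~39 µV
```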
Further Reading
1. Cabot, Richard C. "Fundamentals of Modern Audio Measurement," J. Audio Eng. Soc., Vol. 47, No. 9, Sep. 1999, pp. 738-762 (Audio Engineering Society, NY, 1999).
2. Metzler, R.E. Audio Measurement Handbook (Audio Precision Inc., Beaverton, OR, 1993).
3. Proc. AES 11th Int. Conf. on Audio Test & Measurement (Audio Engineering Society, NY, 1992).
4. Skirrow, Peter, "Audio Measurements and Test Equipment," Audio Engineer's Reference Book, 2nd Ed., Michael Talbot-Smith, Editor (Focal Press, Oxford, 1999), pp. 3-94 to 3-109.
5. Terman, F. E. & J. M. Pettit, Electronic Measurements, 2nd Ed. (McGraw-Hill, NY, 1952).
6. Whitaker, Jerry C. Signal Measurement, Analysis, and Testing (CRC Press, Boca Raton, FL, 2000).
Portions of this note appeared previously in the May/June & Sep/Oct 2000 issues of LIVESOUND! International magazine; reproduced here with permission.

Audio Distortion By its name you know it is a measure of unwanted signals. its level. no added noise – nothing but the original signal. hum. Total Harmonic Distortion + Noise What is tested? Similar to the THD test above. This is a wonderful test since everything that comes out of the unit that isn’t the pure test signal is measured and included – harmonics. Caveat Emptor: THD+N is always going to be a larger number than just plain THD. THD. tedious. but no components at. the preferred gain setting is unity. The following tests are designed to measure different forms of audio distortion. unscrupulous (or clever. The spectrum of the output shows added frequency components at 2x the original signal. Total harmonic distortion (THD) is then defined as the ratio of the rms voltage of the harmonics to that of the fundamental component. The most common forms of distortion are unwanted components or artifacts added to the original signal. Required Conditions. there are components at harmonic intervals of the input frequency. The level is then divided by the fundamental level. Hopefully. the manufacturer must state the test signal frequency. except instead of measuring individual harmonics this tests measures everything added to the input signal. 30 kHz or 80 kHz). Full disclosure specs will test harmonic distortion over the entire 20 Hz to 20 kHz audio range (this is done easily by sweeping and plotting the results). and not commonly done. For this reason. at the pro audio level of +4 dBu. Too often THD is spec’d only at 1 kHz. If a piece of gear is perfect the spectrum of the output shows only the original signal – nothing else – no added components. or FFT analyzer) shows that in addition to the original input sine wave. the standard practice is to use maximum gain. Performing a spectral analysis on this signal (using a spectrum. +4 dBu. or any fractional multiplier. 2. consequently. Measuring individual harmonics with precision is difficult. How is it measured? This technique excites the unit with a single high purity sine wave and then examines the output for evidence of any frequencies other than the one applied. Since individual harmonic amplitudes are measured. For mic pre amps. 20–20 kHz. Distortion is the name given to anything that alters a pure input signal in any way other than changing its magnitude. is apt to differ from the THD of a 1 kHz signal at a -10 dBV level and unity gain. as well as the number of harmonics measured.01%. including random and hum-related noise. only whole number multipliers. THD+N (see below) is the more common test. and cited as the total harmonic distortion (expressed in percent). unity gain Wrong: THD less than 0. it’s obvious to the reader that the THD of a 10 kHz signal at a +20 dBu level using maximum gain. let alone harmonic count. except mic preamps. And more different yet. Distortion analyzers make this measurement by removing the fundamental (using a deep and narrow notch filter) and measuring what’s left using a bandwidth filter (typically 22 kHz. 5x.01% THD+N. RFI. The remainder contains harmonics as well as random noise and other artifacts. For all signal processing equipment. say. noise. 4x. buzz – everything. and the gain conditions set on the tested unit. Correct: THD (5th-order) less than 0. Total Harmonic Distortion What is tested? A form of nonlinearity that causes unwanted signals to be added to the input signal that are harmonically related to it. 3x. or worst. Audio Specifications- . 
depending on your viewpoint) manufacturers choose to spec just THD. and so on. if one manufacturer measures two harmonics while another measures five. A spectral analysis of the output shows these unwanted components. How is it measured? THD+N is the rms summation of all signal components (excluding the fundamental) over some prescribed bandwidth. and nothing about level or gain settings. This is accomplished by using a spectrum analyzer to obtain the level of each harmonic and performing an rms summation. instead of the more meaningful and easily compared THD+N. with no mention of frequency at all.6x the original.

When they are used. DIN favors 250 Hz & 8 kHz. Intermodulation Distortion – SMPTE Method What is tested? A more meaningful test than THD. Conflicting views exist regarding THD+N bandwidth measurements. One argument goes: it makes no sense to measure THD at 20 kHz if your measurement bandwidth doesn’t include the harmonics. +4 dBu Wrong: IMD less than 0. if measuring five harmonics using a 30 kHz bandwidth. a common method is to set the low frequency tone (60 Hz) for +4 dBu and then mixing the 7 kHz tone at a value of –8 dBu (12 dB less). The amplitudes of the sidebands are rms summed and expressed as a percentage of the upper frequency level. as a practical procedure in the motion picture industry in 1939 by the Society of Motion Picture Engineers (SMPE – no “T” [television] yet) and made into a standard in 1941. SMPTE specifies this test use 60 Hz and 7 kHz combined in a 12 dB ratio (4:1) and that the peak value of the signal be stated along with the results. let alone the second harmonic.S.01% Audio Specifications- . and the output signal is examined for modulation of the upper frequency by the low frequency tone. An exception is the strong argument to use the ITU-R (CCIR) 468 curve because of its proven correlation to what is heard. (Other frequencies and amplitude ratios are used. unity gain. but because it reveals other artifacts that can indicate high frequency problems. And one supported by the IEC. too often it is to hide pronounced AC mains hum artifacts. there is no need to measure anything beyond 20 kHz. 60Hz/7kHz. How is it measured? The test signal is a low frequency (60 Hz) and a non-harmonically related high frequency (7 kHz) tone. for example. The modulation components of the upper signal appear as sidebands spaced at multiples of the lower frequency tone. As with harmonic distortion measurement. Correct: THD+N less than 0. standardizing on publishing THD+N figures measured flat over 22 kHz seems justified. so marketeers prevent its widespread use. if only measuring the first three harmonics. Another argument states that since most people can’t even hear the fundamental at 20 kHz. level & gain settings). not because of 20 kHz harmonics. this is done with a spectrum analyzer or a dedicated intermodulation distortion analyzer. [Noise has little effect on SMPTE measurements because the test uses a low pass filter that sets the measurement bandwidth. This is important since these artifacts make music sound harsh and unpleasant.01%. +4 dBu. the case is made that using an 80 kHz bandwidth is crucial. and “flat. Valid point. all that needs stating is “SMPTE IM” and the peak value used. no weighting filter. Fair enough. Correct: IMD (SMPTE) less than 0. However. the residual noise bandwidth is spec’d. 4:1. The preferred value is a 20 kHz (or 22 kHz) measurement bandwidth. or 10 kHz. since it adds 12 dB of gain in the critical midband (the whole point) it makes THD+N measurements bigger. 20–20 kHz. All true points.” i. Strictly speaking. Alternatively. except instead of stating the number of harmonics measured. However. Same as THD ( frequency. However. which says that THD should not be tested any higher than 6 kHz. therefore there is no need for an “IM+N” test..01%. while still using an 80 kHz bandwidth during the design. development and manufacturing stages. [Historical Note: Many old distortion analyzers labeled “THD” actually measured THD+N. 20 kHz BW Wrong: THD less than 0. but competition being what it is.] Required Conditions. 
intermodulation distortion gives a measure of distortion products not harmonically related to the pure signal.01% IMD – SMPTE. Intermodulation distortion testing was first adopted in the U. thus restricting noise components. summed together in a 4:1 amplitude ratio. along with whatever weighting filter was used.) This signal is applied to the unit. measuring the peak value is difficult.] Required Conditions.Weighting filters are rarely used.e.

Sometimes A-weighting can “improve” a noise spec by 10 dB. Required Conditions. and so on. 19 kHz/20 kHz. but does not correlate well with what is heard. a bandwidth is selected for the measuring voltmeter. and all controls set to a prescribed manner. and their level. A-weighting rolls off the low-end. Use of beat frequencies for distortion detection dates back to work first documented in Germany in 1929. and looking for beat frequencies between them. the noise voltage measures extremely high. A mistake compounded by the many correct audio references to the CCIR 468 weighting filter. using two equal amplitude. 1:1. then subtracting the second tone from twice the first tone. a resistor equal to the expected driving impedance is connected between the inputs. This is an inescapable physical fact. How is it measured? No input signal is used. and then turning around and subtracting the first tone from twice the second. noise filters. closely spaced.IMD – ITU-R (CCIF). only the 1 kHz component is measured. but that argu- Audio Specifications- . Most often seen is Aweighting. Thus. high frequency tones.01% What is tested? This specification indirectly tells you how noisy a unit is.] Ultimately. Signal-To-Noise Ratio What is tested? This tests for non-harmonic nonlinearities. the greater the noise. with no signal present. the CCIF became the radiocommunications sector (ITU-R) of the ITU (International Telecommunications Union). therefore the test is now known as the IMD (ITU-R). [This test is often mistakenly referred to as the CCIR method (as opposed to the CCIF method). Noise voltage is a function of bandwidth – wider the bandwidth. however the input is not left open. This is called a “flat” measurement. S/N is calculated by measuring a unit’s output noise. The usual practice is to leave the unit connected to the signal generator (with its low output impedance) set for zero volts. or unterminated. with the result expressed in dB. when the CCIF (International Telephonic Consultative Committee) recommend the test. How is it measured? The common test signal is a pair of equal amplitude tones spaced 1 kHz apart. Alternatively. While noise filters are entirely appropriate and even desired when measuring other types of noise. but was not considered a standard until 1937. Correct: IMD (ITU-R) less than 0. The magnitude of the output noise is measured using an rms-detecting voltmeter. the manufacturer needs to clearly spell out the two frequencies used. since all frequencies are measured equally. but a more accurate one is called the ITU-R (old CCIR) 468 filter.01%. Usually only the first two or three components are measured. thus reducing the most annoying 2nd and 3rd line harmonics by about 20 dB and 12 dB respectively. The ratio is understood to be 1:1. +4 dBu Wrong: IMD less than 0. Nonlinearity in the unit causes intermodulation products between the two signals. Therefore. it is an abuse to use them to disguise equipment hum problems. This filter is preferred because it shapes the measured noise in a way that relates well with what’s heard. The most common bandwidth seen is 22 kHz (the extra 2 kHz allows the bandwidth-limiting filter to take affect without reducing the response at 20 kHz). or weighting filters. If this is not done. Many variations exist for this test. These are found by subtracting the two tones to find the first location at 1 kHz. but for the oft-seen case of 19 kHz and 20 kHz. are used when measuring noise. 
This figure is used to calculate a ratio between it and a fixed output reference signal. Pro audio equipment often lists an A-weighted noise spec – not because it correlates well with our hearing – but because it can “hide” nasty hum components that make for bad noise specs. Intermodulation Distortion – ITU-R Method S/N or SNR. Alternatively. Always wonder if a manufacturer is hiding something when you see A-weighting specs. The argument used to justify this is that the ear is not sensitive to low frequencies at low levels (´ la Fletcher-Munson equal loudness curves).

. the output noise is almost totally determined by the amount of gain: turn the gain up. the terminating input resistance must be stated. Their curve tells us nothing of the ear’s astonishing ability to sync in and lock onto repetitive tones – like hum components – even when these tones lie beneath the noise floor. or input referred noise. the added noise is 80 dB below the input signal). Likewise. How is it measured? With the gain set for maximum and the input terminated with the expected source impedance. is how noise is spec’d on mixing consoles. For example. Fletcher-Munson curves document equal loudness of single tones. As unintuitive as it may be. unity gain Wrong: S/N = 90 dB EIN. say your mixer has an EIN of –130 dBu. then the second product is. In order for the published figure to have any meaning. generates noise. and the output noise goes up accordingly. The mic stage itself is the dominant noise generator. Required Conditions. the A-weighting filter rolls off the high frequency noise better than the flat 22 kHz filter and compares better with the listening experience. but their reference level is + 4 dBu. 22 kHz BW. instead they spec S/N figures “flat” or use the ITU-R 468 curve (which actually makes their numbers look worse. hooked up to nothing. The fact that the low-end also rolls off is irrelevant in this application. in fact. B. Correct: S/N = 90 dB re +4 dBu. an exception has arisen: Digital products using A/D and D/A converters regularly spec S/N and dynamic range using A-weighting. -50 dBu under normal conditions. When used this way. turning up the gain increases both the signal and the noise by the same amount. therefore. This means the noise is 130 dB below a reference point of 0. This is uniquely determined by the magnitude of the input signal and the EIN. Two equally quiet inputs will measure vastly different if not using the identical input impedance. All these tricks squeeze the noise out of the audio bandwidth and push it up into the higher inaudible frequencies. then the S/N at the input to the mic preamp is 80 dB (i. However. Both are then amplified to obtain the final output signal. This follows the semiconductor industry’s practice of spec’ing delta-sigma data converters A-weighted. and another product has the same stated 90 dB S/N. say. (See the RaneNote Digital Dharma of Audio A/D Converters) Required Conditions. By far however. the most common problem is not stating any conditions. a mic stage cannot be quieter than the source. The input terminating resistor defines the lower limit of noise performance. It is called thermal noise or Johnson noise (after its discoverer J. If your microphone puts out. From here on out. you cannot accurately compare numbers if one unit is measured over a BW of 80 kHz and another uses 20 kHz. This is what A-weighting can hide. The problem in measuring mixing consoles (and all mic preamps) is knowing ahead of time how much gain is going to be used.775 volts (0 dBu). In use. All that moving about is called thermal agitation (caused by heat – the hotter the resistor. including any weighting filters and the reference signal level.e. Thus.ment is false. +20 dBu. They do this because they use clever noise shaping tricks to create 24-bit converters with acceptable noise behavior. For example if one product references S/N to their maximum output level of. and the larger the resistor value the greater the noise. The noise may be inaudible. Equivalent Input Noise or Input Referred Noise What is tested? Equivalent input noise. 
a plain resistor. the EIN is the amount of noise added to the input signal. The standard source impedance is 150 Ω. 16 dB quieter. in 1928) and results from the motion of electron charge of the atoms making up the resistor. or if one is measured flat and the other uses A-weighting. For this reason most manufacturers shy from using it. standalone mic preamps and other signal processing units with mic inputs. but correlate better with the real world). A trick which unscrupulous Audio Specifications- . but it is still measurable and can give misleading results unless limited. the noisier). Stating that a unit has a “S/N = 90 dB” is meaningless without knowing what the signal level is. Johnson. and over what bandwidth the noise was measured. It is very easy to deceive without them. This is a spec where test conditions are critical. the output noise is measured with an rms voltmeter fitted with a bandwidth or weighting filter. Since high-gain mic stages greatly amplify source noise. say. it must include the measurement bandwidth.

especially the endpoints. the generator is swept upward in frequency (from the 1 kHz reference point) keeping the source amplitude precisely constant. that is why. 3 dB.707. If a unit’s frequency response rises at any point. Correct: Frequency Response = 20–20 kHz. you cannot have an operating mic stage. Rs = 150 Ω Wrong: EIN = -130 dBu BW. if you use only a 10 kHz bandwidth. until it is reduced in level by the amount specified. with a 150 Ω source. i. so it is reduced by the square root of one-half.. (For those paying close attention: it is not 6 dB like you might expect since the bandwidth is half. However. This same thermal noise limit of the input terminating resistance is a strong function of measurement bandwidth. The preferred condition here is at least 20 Hz to 20 kHz measured +0/-0. it indicates a fundamental instability problem and you should run from the store. that is. Bandwidth or Frequency Response What is tested? The unit’s bandwidth or the range of frequencies it passes. They only complicate comparison among manufacturers. +0/-0. quieter than –131 dBu. the 1 kHz level establishes the 0 dB point. All frequencies above and below a unit’s Frequency Response are attenuated – sometimes severely. They may not be lying. which is 3 dB less).manufacturers may use is to spec their mic stage with the input shorted – a big no-no. a big 3 dB improvement. or (among loudspeaker manufacturers) maybe even 10 dB? Note that there is no discussion of an increase. If you have ever wondered why manufacturers state a limit of “+0 dB”. it is recommended to use no weighting filters. then the noise drops to –134 dBu. Since the measured output noise is such a strong function of bandwidth and gain. The reduction in output level is relative to 1 kHz. What you need to know is how far down is the response where the manufacturer measured it.5 dB Wrong: Frequency Response = 20-20 kHz Audio Specifications- . the noise voltage generated by the standard 150 Ω input resistor. max gain.5 dB. This value is set as the 0 dB reference point. The test generator is then swept down in frequency from 1 kHz until the lower frequency limit is found by the same means.e. How is it measured? A 1 kHz tone of high purity and precise amplitude is applied to the unit and the output measured using a dB-calibrated rms voltmeter. look for the details. Required Conditions. This point becomes the upper frequency limit. since it does not represent the real performance of the preamp. 22 kHz BW. Is it 0. only using favorable conditions to deceive. Correct: EIN = -130 dBu. Next. The next biggie in spec’ing the EIN of mic stages is bandwidth. It is a square root function. Remember: if a manufacturer’s reported EIN seems too good to be true. Properly designed solid-state audio equipment does not ever gain in amplitude when set for flat response (tubes or valve designs using output transformers are a different story and are not dealt with here). therefore.5 dB. or 0. measured over a bandwidth of 20 kHz (and room temperature) is –131 dBu. For example. no mention of the amplitude rising.

CMR or CMRR. Common-Mode Rejection or Common-Mode Rejection Ratio

What is tested? This gives a measure of a balanced input stage's ability to reject common-mode signals. Common-mode is the name given to signals applied simultaneously to both inputs. Normal differential signals arrive as a pair of equal voltages that are opposite in polarity: one applied to the positive input and the other to the negative input. A common-mode signal drives both inputs with the same polarity. It is the job of a well designed balanced input stage to amplify differential signals, while simultaneously rejecting common-mode signals. Most common-mode signals result from RFI (radio frequency interference) and EMI (electromagnetic interference, e.g., hum and buzz) signals inducing themselves into the connecting cable. Since most cables consist of a tightly twisted pair, the interfering signals are induced equally into each wire. The other big contributors to common-mode signals are power supply and ground related problems between the source and the balanced input stage.

How is it measured? Either the unit is adjusted for unity gain, or its gain is first determined and noted. Next, a generator is hooked up to drive both inputs simultaneously through two equal and carefully matched source resistors valued at one-half the expected source resistance, i.e., each input is driven from one-half the normal source impedance. The output of the balanced stage is measured using an rms voltmeter and noted. A ratio is calculated by dividing the generator input voltage by the measured output voltage. This ratio is then multiplied by the gain of the unit, and the answer expressed in dB.

Required Conditions. The results may be frequency-dependent; therefore, the manufacturer must state the frequency tested along with the CMR figure. Most manufacturers spec this at 1 kHz for comparison reasons. The results are assumed constant for all input levels, unless stated otherwise.

Correct: CMRR = 40 dB @ 1 kHz
Wrong: CMRR = 40 dB

Dynamic Range

What is tested? The ratio of the maximum output signal to the output noise floor. If one company says their product has a dynamic range of 120 dB and another says theirs is 126 dB, before you jump to buy the bigger number, first ask, "Relative to what?" Second, "Measured over what bandwidth, and were any weighting filters used?" You cannot know which is better without knowing the required conditions.

How is it measured? The maximum output voltage is measured as described below (see Maximum Output Level), and the output noise floor is measured using an rms voltmeter fitted with a bandwidth filter (with the input generator set for zero volts). A ratio is formed and the result expressed in dB. Sounds simple, and it is simple, but you still have to be careful when comparing units.

Required Conditions. Since this is the ratio of the maximum output signal to the noise floor, the manufacturer must state what the maximum level is; otherwise, you have no way to evaluate the significance of the number. Again, beware of A-weighted specs. Use of A-weighting should only appear in dynamic range specs for digital products with data converters (see discussion under S/N); using it to spec dynamic range in an analog product may indicate the unit has hum components that might otherwise restrict the dynamic range.

Correct: Dynamic Range = 120 dB re +26 dBu, 22 kHz BW
Wrong: Dynamic Range = 120 dB
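Both procedures end in the same ratio-to-dB arithmetic. A small sketch with illustrative measurement values (the voltages shown are made up for the example, not taken from any real unit):

```python
import math

def db(ratio: float) -> float:
    """Convert a voltage ratio to decibels."""
    return 20 * math.log10(ratio)

# CMRR: generator voltage divided by measured output, times the unit's gain.
v_generator, v_output, unit_gain = 1.0, 0.01, 1.0      # unity-gain example
print(f"CMRR = {db(v_generator / v_output * unit_gain):.0f} dB @ 1 kHz")   # -> 40 dB

# Dynamic range: maximum output over the noise floor, both in volts rms.
v_max, v_noise = 15.5, 15.5e-6
print(f"Dynamic Range = {db(v_max / v_noise):.0f} dB re +26 dBu, 22 kHz BW")  # -> 120 dB
```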

Crosstalk or Channel Separation

What is tested? Signals from one channel leaking into another channel. This happens between independent channels as well as between left and right stereo channels, or between all six channels of a 5.1 surround processor. Most crosstalk results from printed circuit board traces "talking" to each other. The mechanism is capacitive coupling between the closely spaced traces and layers, which makes it strongly frequency-dependent, with a characteristic rise of 6 dB/octave: the crosstalk gets worse at a 6 dB/octave rate with increasing frequency. So if it is only spec'd at 1 kHz (very common), you can predict what it may be for higher frequencies. For instance, using the example below of a –60 dB rating, the crosstalk at 16 kHz probably degrades to –36 dB (a worked example of this extrapolation appears after the impedance section below). But don't panic: the reason this usually isn't a problem is that the signal level at high frequencies is also reduced by about the same 6 dB/octave rate, so the overall S/N ratio isn't affected much. Another important point is that crosstalk is assumed level-independent unless otherwise noted. This is because the parasitic capacitors formed by the traces are uniquely determined by the layout geometry, not the strength of the signal.

How is it measured? A generator drives one channel and this channel's output value is noted; meanwhile the other channel is set for zero volts (its generator is left hooked up, but turned to zero, or alternatively the input is terminated with the expected source impedance). Under no circumstances is the measured channel left open. Whatever signal is induced into the tested channel is measured at its output with an rms voltmeter and noted. A ratio is formed by dividing the unwanted signal by the above-noted output test value, and the answer expressed in dB. Since the ratio is always less than one (crosstalk is always less than the original signal), the expression results in negative dB ratings. For instance, a crosstalk spec of –60 dB is interpreted to mean the unwanted signal is 60 dB below the test signal.

Required Conditions. Crosstalk is strongly frequency-dependent; therefore, knowing the frequency used for testing is essential, along with the signal level and which channels were measured.

Correct: Crosstalk = –60 dB, 20–20 kHz, +4 dBu, channel-to-channel
Wrong: Crosstalk = –60 dB

Input & Output Impedance

What is tested? Input impedance measures the load that the unit represents to the driving source, while output impedance measures the source impedance that drives the next unit.

How is it measured? Rarely are these values actually measured. Usually they are determined by inspection and analysis of the final schematic and stated as a pure resistance in Ωs. Input and output reactive elements are usually small enough to be ignored. (Phono input stages and other inputs designed for specific load reactance are exceptions.) The impedances are assumed constant for all frequencies within the unit's bandwidth and for all signal levels, unless stated otherwise. (Note that while this is true for input impedances, most output impedances are, in fact, frequency-dependent – some heavily.)

Required Conditions. The only required information is whether the stated impedance is balanced or unbalanced (balanced impedances usually are exactly twice unbalanced ones). For clarity when spec'ing balanced circuits, it is preferred to state whether the resistance is "floating" (exists between the two lines) or is ground referenced (exists from each line to ground).

Correct: Input Impedance = 20k Ω, balanced line-to-line
Wrong: Input Impedance = 20k Ω
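As promised above, here is the crosstalk extrapolation as a one-function sketch. The 6 dB/octave model is the note's rule of thumb for capacitive coupling; real layouts vary:

```python
import math

def predicted_crosstalk(spec_at_1k_db: float, freq_hz: float) -> float:
    """Extrapolate a 1 kHz crosstalk spec assuming a capacitive 6 dB/octave rise."""
    return spec_at_1k_db + 20 * math.log10(freq_hz / 1000.0)

print(round(predicted_crosstalk(-60, 16000)))  # -> -36 : four octaves up, 24 dB worse
```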

Maximum Input Level

What is tested? The input stage is measured to establish the maximum signal level, in dBu, that causes clipping or a specified level of distortion. The result is assumed constant for all frequencies within the unit's bandwidth, unless stated otherwise.

How is it measured? During final product testing, the design engineer uses an adjustable 1 kHz input signal, an oscilloscope and a distortion analyzer: the input signal is increased in amplitude until the output measures the specified amount of distortion. In the field, apply a 1 kHz source and, while viewing the output, increase the input signal until visible clipping is observed, and that value is expressed in dBu. It is essential that all downstream gain and level controls be set low enough that you are assured the applied signal is clipping just the first stage. Check this by turning each level control and verifying that the clipped waveform just gets bigger or smaller and does not ever reduce the clipping.

Required Conditions. Whether the applied signal is balanced or unbalanced, and the amount of distortion or clipping used to establish the maximum, must be stated. The preferred conditions are balanced operation and 1% distortion. No one would accept 10% distortion at the measurement point, but often manufacturers use "visible clipping," which is as much as 10% distortion; so to hide it, it is not stated at all – only the max value is given, without conditions. This creates a false impression that the input stage can handle signals a few dB hotter than it really can. Buyer beware.

Correct: Maximum Input Level = +20 dBu, balanced, <1% THD
Wrong: Maximum Input Level = +20 dBu

Maximum Output Level

What is tested? The unit's output is measured to establish the maximum signal possible before visible clipping or a specified level of distortion.

How is it measured? The output is fixed with a standard load resistor and measured either balanced or unbalanced. Using an oscilloscope and a distortion analyzer, a 1 kHz input signal is increased in amplitude until the output measures the specified amount of distortion. Next, the signal is swept through the entire audio range to check that this level does not change, or show frequency loss, anywhere within the unit's bandwidth.

Required Conditions. Two important issues are present here. The first is the need to know whether a unit can swing enough unclipped volts for your application. The second is more difficult and potentially more serious, and that is the unit's ability to drive long lines without stability problems. The manufacturer must state whether the spec is for balanced or unbalanced use (usually balanced operation results in 6 dB more swing), what distortion was used for the determination (the preferred value being 1% THD), over what frequency range the spec is valid (prefer 20 Hz – 20 kHz; watch out for 1 kHz-only specs), and what load impedance is guaranteed (2k Ω or greater is preferred; 600 Ω operation is obsolete and no longer required except for specialized applications, with broadcast and telecommunications noted as two of them). The last issue applies only to signal processing units designed as line drivers: these should specify a maximum cable length and the specs of the cable – either a specific brand & type, or the maximum cable capacitance in pF/meter.

Correct: Max Output Level = +26 dBu, balanced, >2k Ω, 20–20 kHz, <1% THD
Wrong: Max Output Level = +26 dBu
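Because both maxima are quoted in dBu (0 dBu = 0.7746 Vrms), it is easy to convert them to the actual voltage swing involved. A minimal sketch:

```python
import math

def dbu_to_vrms(dbu: float) -> float:
    """Convert a level in dBu to volts rms (0 dBu = 0.7746 Vrms)."""
    return 0.7746 * 10 ** (dbu / 20)

def vrms_to_dbu(v: float) -> float:
    """Convert volts rms to dBu."""
    return 20 * math.log10(v / 0.7746)

print(f"+20 dBu = {dbu_to_vrms(20):.1f} Vrms")   # ~7.7 V  (max input example)
print(f"+26 dBu = {dbu_to_vrms(26):.1f} Vrms")   # ~15.5 V (max output example)
```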

Maximum Gain

What is tested? The ratio of the largest possible output signal to a fixed input signal, expressed in dB, is called the Maximum Gain of a unit.

How is it measured? With all level & gain controls set maximum, and with a 1 kHz input at an average level that does not clip the output, the output of the unit is measured using an rms voltmeter. The output level is divided by the input level and the result expressed in dB.

Required Conditions. The results are assumed constant for all frequencies within the unit's bandwidth and for all levels of input, unless stated otherwise. Often a unit's gain differs 6 dB between balanced and unbalanced hook-up. There is nothing controversial here, but confusion results if the test results do not clearly state whether the test was done using balanced or unbalanced outputs. Note that it usually does not change the gain if the input is driven balanced or unbalanced; only the output connection is significant.

Correct: Maximum Gain = +6 dB, balanced-in to balanced-out
Wrong: Maximum Gain = +6 dB

Caveat Emptor

Specifications Require Conditions. Accurate audio measurements are difficult and expensive. To purchase the test equipment necessary to perform all the tests described here would cost you a minimum of $10,000 – and that price is for computer-controlled analog test equipment; if you want the cool digital-based, dual-domain stuff, double it. This is why virtually all purchasers of pro audio equipment must rely on the honesty and integrity of the manufacturers involved, and on the accuracy and completeness of their data sheets and sales materials.

Tolerances or Limits. Another caveat for the informed buyer is to always look for tolerances or worst-case limits associated with the specs. Limits are rare, but they are the gristle that gives specifications truth. When you see specs without limits, ask yourself: is this manufacturer NOT going to ship the product if it does not exactly meet the printed spec? Of course not. The product will ship, and probably by the hundreds. So what is the real limit? At what point will the product not ship? If it's off by 3 dB, or 5%, or 100 Hz – what? When does the manufacturer say no? The only way you can know is if they publish specification tolerances and limits.

Correct: S/N = 90 dB (±2 dB), re +4 dBu, 22 kHz BW, unity gain
Wrong: S/N = 90 dB

Common Signal Processing Specs With Required Conditions

| Abbr | Name | Units | Required Conditions | Preferred Values* |
|---|---|---|---|---|
| THD | Total Harmonic Distortion | % | Frequency; Level; Gain settings; Harmonic order measured | 20 Hz – 20 kHz; +4 dBu; Unity (max for mic preamps); At least 5th-order (5 harmonics) |
| THD+N | Total Harmonic Distortion plus Noise | % | Frequency; Level; Gain settings; Noise bandwidth or weighting filter | 20 Hz – 20 kHz; +4 dBu; Unity (max for mic preamps); 22 kHz BW (or ITU-R 468 curve) |
| IM or IMD | Intermodulation Distortion (SMPTE method) | % | Type; 2 frequencies; Ratio; Level | SMPTE; 60 Hz/7 kHz; 4:1; +4 dBu (60 Hz) |
| IM or IMD | Intermodulation Distortion (ITU-R method; was CCIF, now changed to ITU-R) | % | Type; 2 frequencies; Ratio; Level | ITU-R (or difference-tone); 13 kHz/14 kHz (or 19 kHz/20 kHz); 1:1; +4 dBu |
| S/N or SNR | Signal-to-Noise Ratio | dB | Reference level; Noise bandwidth or weighting filter; Gain settings | re +4 dBu; 22 kHz BW (or ITU-R 468 curve); Unity (max for mic preamps) |
| EIN | Equivalent Input Noise or Input Referred Noise | –dBu | Gain; Input terminating impedance; Noise bandwidth or weighting filter | Maximum; 150 Ω; 22 kHz BW (flat – no weighting) |
| BW | Frequency Response | Hz | Level change re 1 kHz | +0/–0.5 dB (or +0/–3 dB) |
| CMR or CMRR | Common Mode Rejection or Common Mode Rejection Ratio | dB | Frequency (assumed independent of level, unless otherwise noted) | 1 kHz |
| — | Dynamic Range | dB | Maximum output level; Noise bandwidth or weighting filter | +26 dBu; 22 kHz BW (no weighting filter) |
| — | Crosstalk (as –dB) or Channel Separation (as +dB) | –dB or +dB | Frequency; Level; What-to-what | 20 Hz – 20 kHz; +4 dBu; Chan.-to-chan. & left-to-right |
| — | Input & Output Impedance | Ω | Balanced or unbalanced; Floating or ground referenced (assumed frequency-independent with negligible reactance unless specified) | Balanced; No preference |
| — | Maximum Input Level | dBu | Balanced or unbalanced; THD at maximum input level | Balanced; 1% |
| — | Maximum Output Level | dBu | Balanced or unbalanced; THD at maximum output level; Minimum load impedance; Bandwidth; Optional: maximum cable length | Balanced; 1%; 2k Ω; 20 Hz – 20 kHz; Cable length & type (or pF/meter) |
| — | Maximum Gain | dB (assumed constant over full BW & at all levels, unless noted otherwise) | Balanced or unbalanced output | Balanced |

* Based on the common practices of pro audio signal processing manufacturers.

Signal Processing Definitions & Typical Specs
[Figure: level diagram relating typical pro audio signal levels – Maximum Output Level = +26 dBu (15.5 V); Headroom = 22 dB; Pro Audio Reference Level = +4 dBu (1.23 V); Dynamic Range = 112 dB; S/N = 90 dB; Consumer Reference Level = –10 dBV (315 mV); Output Noise Floor = –86 dBu (39 µV).]

Further Reading
1. Metzler, Bob, Audio Measurement Handbook (Audio Precision, Inc., Beaverton, OR, 1993).
2. Cabot, Richard C., "Fundamentals of Modern Audio Measurement," J. Audio Eng. Soc., Vol. 47, No. 9, pp. 738-762 (Audio Engineering Society, NY, Sep. 1999).
3. Proc. AES 11th Int. Conf. on Audio Test & Measurement (Audio Engineering Society, NY, 1992).
4. Skirrow, Peter, "Audio Measurements and Test Equipment," Audio Engineer's Reference Book, 2nd Ed., Michael Talbot-Smith, Editor (Focal Press, Oxford, 1999), pp. 3-94 to 3-109.
5. Terman, F. E. & Pettit, J. M., Electronic Measurements, 2nd Ed. (McGraw-Hill, NY, 1952).
6. Whitaker, Jerry C., Signal Measurement, Analysis, and Testing (CRC Press, Boca Raton, FL, 2000).

Portions of this note appeared previously in the May/June & Sep/Oct 2000 issues of LIVESOUND! International magazine, reproduced here with permission.

©Rane Corporation, Mukilteo, WA, USA. WEB www.rane.com
