PDF generated using the open source mwlib toolkit. See http://code.pediapress.com/ for more information. PDF generated at: Wed, 05 Mar 2014 13:02:13 UTC
Contents

Articles
Saturation (magnetic)
Intermodulation
Volterra series

References
Article Sources and Contributors
Image Sources, Licenses and Contributors

Article Licenses
License
Saturation (magnetic)
Seen in some magnetic materials, saturation is the state reached when an increase in applied external magnetic field H cannot increase the magnetization of the material further, so the total magnetic flux density B levels off. It is a characteristic particularly of ferromagnetic materials, such as iron, nickel, cobalt and their alloys.
Description
Saturation is most clearly seen in the magnetization curve (also called the B-H curve or hysteresis curve) of a substance, as a bending to the right of the curve (see graph at right). As the H field increases, the B field approaches a maximum value asymptotically, the saturation level for the substance. Technically, above saturation the B field continues increasing, but at the paramagnetic rate, which is 3 orders of magnitude smaller than the ferromagnetic rate seen below saturation. The relation between the magnetizing field H and the magnetic field B can also be expressed as the magnetic permeability μ = B/H, or the relative permeability μr = μ/μ0, where μ0 is the vacuum permeability.

Magnetization curves of 9 ferromagnetic materials, showing saturation. 1. Sheet steel, 2. Silicon steel, 3. Cast steel, 4. Tungsten steel, 5. Magnet steel, 6. Cast iron, 7. Nickel, 8. Cobalt, 9. Magnetite

The permeability of ferromagnetic materials is not constant, but depends on H. In saturable materials the relative permeability increases with H to a maximum, then as the material approaches saturation it inverts and decreases toward one. Different materials have different saturation levels. For example, high-permeability iron alloys used in transformers reach magnetic saturation at 1.6-2.2 teslas (T), whereas ferrites saturate at 0.2-0.5 T. Some amorphous alloys saturate at 1.2-1.3 T. Mu-metal saturates at around 0.8 T.
Explanation
Ferromagnetic materials (like iron) are composed of microscopic regions called magnetic domains, which act like tiny permanent magnets that can change their direction of magnetization. Before an external magnetic field is applied to the material, the domains are oriented in random directions; their tiny magnetic fields cancel each other out, so the material has no significant net magnetic field. When an external magnetizing field H is applied to the material, it penetrates the material and aligns the domains, causing their tiny magnetic fields to turn and align parallel to the external field, adding together to create a large magnetic field B which extends out from the material. This is called magnetization. The stronger the external magnetic field H, the more the domains align, yielding a higher magnetic flux density B. Saturation occurs when practically all the domains are lined up, so further increases in H cannot cause further alignment of the domains and cannot increase B beyond the increment that would be caused in a nonmagnetic material.

Due to saturation, the magnetic permeability μ of a ferromagnetic substance reaches a maximum and then declines.
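The leveling-off of B can be illustrated with a toy numerical model. The arctangent shape and the parameter values below (saturation polarization js, knee field h0) are illustrative assumptions, not measured material data; the point is only that above saturation B keeps rising at roughly the vacuum rate μ0, orders of magnitude below the initial slope.

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability (T*m/A)

def b_field(h, js=2.0, h0=100.0):
    """Toy saturation curve: a ferromagnetic contribution that levels off
    at the (assumed) saturation polarization js, plus the vacuum term
    MU0*h that keeps growing above saturation."""
    return MU0 * h + (2.0 * js / math.pi) * math.atan(h / h0)

# Slope of the B-H curve well below vs. far above saturation:
low_slope = (b_field(200.0) - b_field(100.0)) / 100.0       # steep
high_slope = (b_field(1e7) - b_field(1e7 - 100.0)) / 100.0  # ~ MU0
print(low_slope / high_slope)  # several thousand: the curve has saturated
```

Past the knee, the incremental permeability dB/dH collapses to essentially μ0, which is the "paramagnetic rate" described above.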
Intermodulation
Intermodulation or intermodulation distortion (IMD) is the amplitude modulation of signals containing two or more different frequencies in a system with nonlinearities. The intermodulation between each frequency component will form additional signals at frequencies that are not just at harmonic frequencies (integer multiples) of either, but also at the sum and difference frequencies of the original frequencies and at multiples of those sum and difference frequencies. Intermodulation is caused by non-linear behaviour of the signal processing being used. The theoretical outcome of these non-linearities can be calculated by generating a Volterra series of the characteristic, while the usual approximation of those non-linearities is obtained by generating a Taylor series.
A frequency spectrum plot showing intermodulation between two injected signals at 270 and 275 MHz (the large spikes). Visible intermodulation products are seen as small spurs at 280 MHz and 265 MHz.
Intermodulation is rarely desirable in radio or audio processing, as it creates unwanted spurious emissions, often in the form of sidebands. For radio transmissions this increases the occupied bandwidth, leading to adjacent channel interference, which can reduce audio clarity or increase spectrum usage. It should not be confused with harmonic distortion (which has common musical applications), nor with intentional modulation (such as a frequency mixer in superheterodyne receivers) where signals to be modulated are presented to an intentional nonlinear element (multiplied); see non-linear mixers such as mixer diodes and even single-transistor oscillator-mixer circuits. In audio, the intermodulation products are nonharmonically related to the input frequencies and therefore "off-key" with respect to the common Western musical scale.
Causes of intermodulation
A linear system cannot produce intermodulation. If the input of a linear time-invariant system is a signal of a single frequency, then the output is a signal of the same frequency; only the amplitude and phase can differ from the input signal. Non-linear systems, however, generate harmonics: if the input of a non-linear system is a signal of a single frequency fa, then the output contains integer multiples of that frequency (i.e. some of fa, 2fa, 3fa, 4fa, ...). Intermodulation occurs when the input to a non-linear system is composed of two or more frequencies. Consider an input signal that contains three frequency components at fa, fb and fc, which may be expressed as

    x(t) = Ma sin(2π fa t + φa) + Mb sin(2π fb t + φb) + Mc sin(2π fc t + φc)

where the Mi and φi are the amplitudes and phases of the three components, respectively. We obtain the output signal y(t) by passing our input through a non-linear function G:

    y(t) = G(x(t))

The output y(t) will contain the three frequencies of the input signal, fa, fb and fc (the fundamental frequencies), as well as a number of linear combinations of the fundamental frequencies, each of the form

    ka fa + kb fb + kc fc

where ka, kb and kc are arbitrary integers which can assume positive or negative values. These are the intermodulation products (or IMPs). In general, each of these frequency components will have a different amplitude and phase, which depends on the specific non-linear function being used, and also on the amplitudes and phases of the original input components. More generally, given an input signal containing an arbitrary number N of frequency components fa, fb, ..., fN, the output signal will contain a number of frequency components, each of which may be described by

    ka fa + kb fb + ... + kN fN

where the coefficients ka, kb, ..., kN are arbitrary integer values.
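The generation of sum and difference products can be checked numerically. The sketch below (plain Python; the tone frequencies of 440 and 560 Hz and the cubic polynomial standing in for the non-linear function are assumed values for illustration) passes a two-tone signal through a memoryless nonlinearity and reads single DFT bins at the predicted product frequencies.

```python
import cmath
import math

FS = 8000              # sample rate (Hz); one second gives exact 1 Hz bins
N = 8000
F1, F2 = 440.0, 560.0  # assumed input tones

x = [math.sin(2 * math.pi * F1 * k / FS) + math.sin(2 * math.pi * F2 * k / FS)
     for k in range(N)]
# Memoryless nonlinearity standing in for G: y = x + 0.1 x^2 + 0.1 x^3
y = [v + 0.1 * v * v + 0.1 * v ** 3 for v in x]

def amplitude(signal, f):
    """Amplitude of the component at frequency f (single-bin DFT)."""
    acc = sum(signal[k] * cmath.exp(-2j * math.pi * f * k / FS)
              for k in range(N))
    return 2.0 * abs(acc) / N

# Second-order products at f2-f1 and f1+f2; third-order at 2f1-f2, 2f2-f1:
for f in (F2 - F1, F1 + F2, 2 * F1 - F2, 2 * F2 - F1):
    print(f, round(amplitude(y, f), 4))
```

The products at 120 Hz and 1000 Hz come from the x^2 term and those at 320 Hz and 680 Hz from the x^3 term; a purely linear system (y = x) would show nothing at any of these bins.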
Intermodulation order
The order of a given intermodulation product is the sum of the absolute values of the coefficients,

    O = |ka| + |kb| + |kc|

For example, in our original example above, third-order intermodulation products (IMPs) occur where |ka| + |kb| + |kc| = 3:

    fa + fb − fc,  fa + fc − fb,  fb + fc − fa,
    2fa − fb,  2fa − fc,  2fb − fa,  2fb − fc,  2fc − fa,  2fc − fb
In many radio and audio applications, odd-order IMPs are of most interest, as they fall within the vicinity of the original frequency components, and may therefore interfere with the desired behaviour.
Distribution of third-order intermodulations: in blue the position of the fundamental carriers, in red the position of dominant IMPs, in green the position of specific IMPs.
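The linear-combination rule is easy to enumerate. The helper below (a hypothetical utility, not from the article) lists all positive third-order product frequencies for a set of carriers; applied to the 270 and 275 MHz carriers from the spectrum plot earlier, it reproduces the dominant close-in products at 265 and 280 MHz.

```python
from itertools import product

def third_order_imps(freqs):
    """All positive frequencies k1*f1 + ... + kn*fn whose order
    |k1| + ... + |kn| equals 3."""
    out = set()
    for ks in product(range(-3, 4), repeat=len(freqs)):
        if sum(abs(k) for k in ks) == 3:
            f = sum(k * f0 for k, f0 in zip(ks, freqs))
            if f > 0:
                out.add(f)
    return sorted(out)

print(third_order_imps([270.0, 275.0]))
# 265 and 280 MHz fall right beside the carriers; the rest sit far away
```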
Sources of passive intermodulation (PIM)
Ferromagnetic materials are the most common materials to avoid; they include ferrites, nickel (including nickel plating) and steels (including some stainless steels). These materials exhibit hysteresis when exposed to reversing magnetic fields, resulting in PIM generation. PIM can also be generated in components with manufacturing or workmanship defects, such as cold or cracked solder joints or poorly made mechanical contacts. If these defects are exposed to high RF currents, PIM can be generated. As a result, RF equipment manufacturers perform factory PIM tests on components to eliminate PIM caused by these design and manufacturing defects. In the field, PIM can be caused by components that were damaged in transit to the cell site, installation workmanship issues and by external PIM sources. Some of these include:
- Contaminated surfaces or contacts due to dirt, dust, moisture or oxidation.
- Loose mechanical junctions due to inadequate torque, poor alignment or poorly prepared contact surfaces.
- Loose mechanical junctions caused by transportation, shock or vibration.
- Metal flakes or shavings inside RF connections.
- Inconsistent metal-to-metal contact between RF connector surfaces, caused by any of the following: trapped dielectric materials (adhesives, foam, etc.); cracks or distortions at the end of the outer conductor of coaxial cables, often caused by overtightening the back nut during installation; solid inner conductors distorted in the preparation process; hollow inner conductors excessively enlarged or made oval during the preparation process.
- Connectors, or contact between conductors made of two galvanically unmatched metals.
- Nearby metallic objects in the direct beam and side lobes of the transmit antenna, including rusty bolts, roof flashing, vent pipes, guy wires, etc.
PIM Testing
IEC 62037 is the international standard for PIM testing and gives specific details of PIM measurement setups. The standard specifies the use of two +43 dBm (20 W) tones as the test signals for PIM testing. This power level has been used by RF equipment manufacturers for more than a decade to establish pass/fail specifications for RF components.
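As a quick check of the figures above, dBm converts to watts as P = 10^(dBm/10) milliwatts, so the +43 dBm test tones are indeed the quoted 20 W to within rounding:

```python
def dbm_to_watts(dbm):
    """Power in watts from a level in dBm (decibels relative to 1 mW)."""
    return 10 ** (dbm / 10.0) / 1000.0

print(dbm_to_watts(43))  # ~19.95 W, quoted as 20 W
```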
In audio equipment, such nonlinear mechanisms modulate the signal as extraneous frequency modulation, and the resulting sideband products manifest as distortion. This distortion results in a thicker, grainier texture due to the excess non-musical sum and difference components riding above and below the harmonic content of the material.
Measurement
Intermodulation distortion in audio is usually specified as the root mean square (RMS) value of the various sum-and-difference signals as a percentage of the original signal's RMS voltage, although it may be specified in terms of individual component strengths in decibels, as is common with RF work. Audio IMD standard tests include SMPTE standard RP120-1994, in which two signals (at 60 Hz and 7 kHz, in a 4:1 amplitude ratio) are used for the test; many other standards (such as DIN and CCIF) use other frequencies and amplitude ratios. Opinion varies over the ideal ratio of test frequencies (e.g. 3:4,[3] or almost, but not exactly, 3:1).

After feeding the equipment under test with low-distortion input sine waves, the output distortion can be measured by using an electronic filter to remove the original frequencies, or spectral analysis may be performed using Fourier transforms in software or a dedicated spectrum analyser; when determining intermodulation effects in communications equipment, the measurement may be made using the receiver under test itself. Using a modern network analyzer with two internal RF sources and sensitive RF detectors simplifies the measurement setup and provides a sensitivity comparable to spectrum analyzers. Furthermore, a calibrated VNA setup removes mismatch errors which would otherwise be present in spectrum analyzer measurements. Error-corrected IM measurement systems are also available; these systems support frequency-converting vector measurements of S-parameters.[4] The user can locate IM sources and perform a vector or time-domain fitting or modelling of the IM signals and components.
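The SMPTE-style measurement can be sketched numerically. The distortion model below (a weak quadratic term) and the 48 kHz sample rate are assumptions for illustration; the structure, a 60 Hz / 7 kHz pair at 4:1 followed by reading the sum and difference sidebands around 7 kHz as a percentage, follows the test described above.

```python
import cmath
import math

FS = 48000
N = 48000                   # one second -> exact 1 Hz bins
F_LO, F_HI = 60.0, 7000.0   # SMPTE tone pair, 4:1 amplitude ratio

x = [4.0 * math.sin(2 * math.pi * F_LO * k / FS)
     + math.sin(2 * math.pi * F_HI * k / FS) for k in range(N)]
y = [v + 0.01 * v * v for v in x]  # device under test: assumed nonlinearity

def amplitude(signal, f):
    """Amplitude of the component at frequency f (single-bin DFT)."""
    acc = sum(signal[k] * cmath.exp(-2j * math.pi * f * k / FS)
              for k in range(N))
    return 2.0 * abs(acc) / N

# RMS of the first-order sidebands relative to the 7 kHz carrier:
sidebands = math.hypot(amplitude(y, F_HI - F_LO), amplitude(y, F_HI + F_LO))
imd_percent = 100.0 * sidebands / amplitude(y, F_HI)
print(round(imd_percent, 2))
```

A real SMPTE analyzer demodulates the high tone and sums several sideband orders; only the first-order pair is read here to keep the sketch short.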
External links
Lloyd Butler (1997). "Intermodulation Performance and Measurement of Intermodulation Components" [5]. VK5BR, Amateur Radio, August 1997. Retrieved 30 January 2012.
References
[1] Rane Pro Audio Reference for IM (http://www.rane.com/par-i.html#IM)
[2] Walter Jung, "Slewing Induced Distortion in Audio Amplifiers, Part 1", The Audio Amateur, Issue 1/1977. http://waltjung.org/PDFs/SID_TIM_TAA77_P1.pdf
[3] Graeme John Cohen, "3-4 Ratio: A method of measuring distortion products". http://www.leonaudio.com.au/3-4.ratio.distortion.measurement.pdf
[4] Thalayasingam, K. and Heuermann, H., "Novel Vector Non-Linear Measurement System for Intermodulation Measurements", European Microwave Conference, Rome, Italy, IEEE, 2009. Available online (http://ieeexplore.ieee.org/search/searchresult.jsp?newsearch=true&queryText=thalayasingam&x=32&y=12=no)
[5] http://users.tpg.com.au/ldbutler/Intermodulation.htm
This article incorporates public domain material from the General Services Administration document "Federal Standard 1037C" (http://www.its.bldrdoc.gov/fs-1037/fs-1037c.htm) (in support of MIL-STD-188).
Volterra series
The Volterra series is a model for non-linear behavior similar to the Taylor series. It differs from the Taylor series in its ability to capture 'memory' effects. The Taylor series can be used to approximate the response of a nonlinear system to a given input if the output of the system depends strictly on the input at that particular time. In the Volterra series, the output of the nonlinear system depends on the input to the system at all other times as well. This provides the ability to capture the 'memory' effect of devices such as capacitors and inductors. It has been applied in the fields of medicine (biomedical engineering) and biology, especially neuroscience. It is also used in electrical engineering to model intermodulation distortion in many devices, including power amplifiers and frequency mixers. Its main advantage lies in its generality: it can represent a wide range of systems. It is therefore sometimes referred to as a non-parametric model. In mathematics, a Volterra series denotes a functional expansion of a dynamic, nonlinear, time-invariant functional. Volterra series are frequently used in system identification. The Volterra series is an infinite sum of multidimensional convolution integrals.
History
The Volterra series is a modernized version of the theory of analytic functionals due to the Italian mathematician Vito Volterra, in work dating from 1887.[1] Norbert Wiener became interested in this theory in the 1920s through contact with Volterra's student Paul Lévy. He applied his theory of Brownian motion to the integration of Volterra analytic functionals. The use of Volterra series for system analysis originated from a restricted 1942 wartime report[2] of Wiener, then professor of mathematics at MIT. It used the series to make an approximate analysis of the effect of radar noise in a nonlinear receiver circuit. The report became public after the war.[3] As a general method of analysis of nonlinear systems, Volterra series came into use after about 1957 as the result of a series of reports, at first privately circulated, from MIT and elsewhere.[4] The name Volterra series came into use a few years later.
Mathematical theory
The theory of Volterra series can be viewed from two different perspectives: either one considers an operator mapping between two real (or complex) function spaces or a functional mapping from a real (or complex) function space into the real (or complex) numbers. The latter, functional perspective is in more frequent use, due to the assumed time-invariance of the system.
Continuous time
A continuous time-invariant system with x(t) as input and y(t) as output can be expanded in Volterra series as:

    y(t) = h0 + Σ_{n=1}^{N} ∫_a^b ... ∫_a^b h_n(τ1, ..., τn) x(t − τ1) ... x(t − τn) dτ1 ... dτn

where h0 is a constant and the functions h_n(τ1, ..., τn) are called the n-th order Volterra kernels, which can be regarded as higher-order impulse responses of the system. If N is finite, the series operator is said to be truncated. If a, b and N are finite, the series operator is called a doubly-finite Volterra series.
Sometimes the n-th order term is divided by n!, a convention which is convenient when considering the combination of Volterra systems by placing one after the other ('cascading').

The causality condition: since in any physically realizable system the output can only depend on previous values of the input, the kernels will be zero if any of the variables τi is negative, i.e. h_n(τ1, ..., τn) = 0 if any τi < 0. The integrals may then be written over the half range from zero to infinity.

Fréchet's approximation theorem: the use of the Volterra series to represent a time-invariant functional relation is often justified by appealing to a theorem due to Fréchet. This theorem states that such a system can be approximated uniformly and to an arbitrary degree of precision by a sufficiently high finite-order Volterra series. The input set over which this approximation holds must be compact; it is usually taken to be the set of equicontinuous, uniformly bounded functions, which is compact by the Arzelà-Ascoli theorem. In many physical situations this assumption about the input set is a reasonable one. The theorem, however, gives no indication as to how many terms are needed for a good approximation, which is the important question in applications.
Discrete time
A discrete time-invariant system with x(n) as input and y(n) as output can similarly be expanded as:

    y(n) = h0 + Σ_{p=1}^{P} Σ_{τ1=a}^{b} ... Σ_{τp=a}^{b} h_p(τ1, ..., τp) x(n − τ1) ... x(n − τp)

where h0 is a constant and the functions h_p(τ1, ..., τp) are called the p-th order Volterra kernels. If P is finite, the series operator is said to be truncated. If a, b and P are finite, the series operator is called a doubly-finite Volterra series. If a ≥ 0, the operator is causal.

Each kernel can always be considered symmetrical: by the commutativity of multiplication it is always possible to symmetrize it without changing the output. So for a causal system with symmetrical kernels we can write

    y(n) = h0 + Σ_{p=1}^{P} Σ_{τ1=0}^{b} Σ_{τ2=τ1}^{b} ... Σ_{τp=τ(p−1)}^{b} h_p(τ1, ..., τp) x(n − τ1) ... x(n − τp)
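A doubly-finite, causal discrete series of this kind is straightforward to evaluate directly. The function below is a sketch up to second order (the kernel values in the example are illustrative, not from any identified system):

```python
def volterra_output(x, h0, h1, h2):
    """y[n] = h0 + sum_t1 h1[t1]*x[n-t1]
                 + sum_t1 sum_t2 h2[t1][t2]*x[n-t1]*x[n-t2],
    with causal kernels: samples before the start of x are taken as 0."""
    def past(n, t):
        return x[n - t] if n - t >= 0 else 0.0
    y = []
    for n in range(len(x)):
        acc = h0
        acc += sum(h1[t1] * past(n, t1) for t1 in range(len(h1)))
        acc += sum(h2[t1][t2] * past(n, t1) * past(n, t2)
                   for t1 in range(len(h2)) for t2 in range(len(h2)))
        y.append(acc)
    return y

# With the quadratic kernel zeroed, this reduces to ordinary convolution:
print(volterra_output([1.0, 0.0, 0.0], 0.0, [0.5, 0.25], [[0.0]]))
# -> the first-order impulse response [0.5, 0.25, 0.0]
```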
Crosscorrelation method
This method, developed by Lee and Schetzen, orthogonalizes with respect to the actual mathematical description of the signal, i.e. the projection onto the new basis functionals is based on knowledge of the moments of the random signal. To allow identification by orthogonalization, the Volterra series must be rearranged in terms of orthogonal non-homogeneous G operators (the Wiener series):

    y(n) = Σ_p G_p[k_p; x(n)]

The Wiener functionals G_p are constructed so that

    E{ H_j[x(n)] · G_p[k_p; x(n)] } = 0   for all j < p

whenever H_j is an arbitrary homogeneous Volterra functional of order j and x(n) is stationary white noise (SWN) with zero mean and variance A.

Recalling that every Volterra functional is orthogonal to all Wiener functionals of greater order, cross-correlating the output with delayed copies of the white-noise input isolates one Wiener kernel at a time. For non-diagonal points (all the τi distinct) this gives the Lee-Schetzen formula:

    k_p(τ1, ..., τp) = (1 / (p! A^p)) E{ y(n) x(n − τ1) ... x(n − τp) }

If we want to consider the diagonal points (some τi equal), the lower-order contributions no longer vanish, and the solution proposed by Lee and Schetzen is to subtract the outputs of the lower-order Wiener operators before cross-correlating:

    k_p(τ1, ..., τp) = (1 / (p! A^p)) E{ (y(n) − Σ_{j<p} G_j[k_j; x(n)]) x(n − τ1) ... x(n − τp) }
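For the first-order kernel the cross-correlation formula is especially simple: k1(τ) = E[y(n) x(n − τ)] / A. The sketch below checks it on a synthetic linear system (the kernel values and noise length are arbitrary choices for the demonstration):

```python
import random

random.seed(0)
A = 1.0                                    # white-noise variance
N = 20000
x = [random.gauss(0.0, A ** 0.5) for _ in range(N)]

# System to identify: a known first-order (linear) kernel.
true_k1 = [1.0, 0.5, -0.25]
y = [sum(true_k1[t] * (x[n - t] if n - t >= 0 else 0.0)
         for t in range(len(true_k1))) for n in range(N)]

def estimate_k1(x, y, tau, A):
    """Lee-Schetzen first-order estimate: time-average of y(n)x(n-tau)/A."""
    terms = [y[n] * x[n - tau] for n in range(tau, len(x))]
    return sum(terms) / (len(terms) * A)

estimated = [estimate_k1(x, y, t, A) for t in range(len(true_k1))]
```

With 20000 samples the estimates land within a few percent of [1.0, 0.5, -0.25]; higher-order kernels use products of several delayed noise samples, with the diagonal-point corrections discussed above.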
Efficient formulas and references for diagonal kernel point estimation can be found in [5] and [6].
Linear regression
Linear regression is a standard tool from linear analysis. Hence, one of its main advantages is the widespread existence of standard tools for solving linear regressions efficiently. It has some educational value, since it highlights the basic property of Volterra series: a linear combination of non-linear basis functionals. For estimation, the order of the original system should be known, since the Volterra basis functionals are not orthogonal and estimation thus cannot be performed incrementally.
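Because the series is a linear combination of non-linear basis functionals, ordinary least squares identifies the kernel coefficients. Below is a minimal sketch for a memoryless second-order model y ≈ c1·x + c2·x² (the target coefficients 2.0 and 0.3 are invented for the demonstration), solved via the 2x2 normal equations:

```python
# Training data from the (assumed) system y = 2.0*x + 0.3*x^2.
xs = [0.1 * k for k in range(-10, 11)]
ys = [2.0 * v + 0.3 * v * v for v in xs]

# Basis functionals phi1(x) = x, phi2(x) = x^2; solve the normal equations.
s11 = sum(v * v for v in xs)       # <phi1, phi1>
s12 = sum(v ** 3 for v in xs)      # <phi1, phi2>
s22 = sum(v ** 4 for v in xs)      # <phi2, phi2>
b1 = sum(v * w for v, w in zip(xs, ys))      # <phi1, y>
b2 = sum(v * v * w for v, w in zip(xs, ys))  # <phi2, y>
det = s11 * s22 - s12 * s12
c1 = (b1 * s22 - b2 * s12) / det
c2 = (s11 * b2 - s12 * b1) / det
print(round(c1, 6), round(c2, 6))  # recovers 2.0 and 0.3
```

The same normal-equations structure extends to kernels with memory; the caveat in the text applies, since the basis functionals are correlated, the model order must be fixed in advance.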
Kernel method
This method was invented by Franz & Schölkopf and is based on statistical learning theory. Consequently, this approach is also based on minimizing the empirical error (often called empirical risk minimization). Franz and Schölkopf proposed that the kernel method could essentially replace the Volterra series representation, although noting that the latter is more intuitive.
Differential sampling
This method was developed by van Hemmen and coworkers and utilizes Dirac delta functions to sample the Volterra coefficients.
References
[1] Vito Volterra. Theory of Functionals and of Integrals and Integro-Differential Equations. New York: Dover Publications, 1959.
[2] Wiener N: Response of a nonlinear device to noise. Radiation Lab MIT 1942, restricted report V-16, no. 129 (112 pp). Declassified Jul 1946; published as rep. no. PB-1-58087, U.S. Dept. Commerce. URL: http://www.dtic.mil/dtic/tr/fulltext/u2/a800212.pdf
[3] Ikehara S: A method of Wiener in a nonlinear circuit. MIT Dec 10 1951, tech. rep. no. 217, Res. Lab. Electron.
[4] Early MIT reports by Brilliant, Zames, George, Hause, Chesler can be found on dspace.mit.edu.
[5] M. Pirani, S. Orcioni, and C. Turchetti, "Diagonal kernel point estimation of n-th order discrete Volterra-Wiener systems", EURASIP Journal on Applied Signal Processing, vol. 2004, no. 12, pp. 1807-1816, Sept. 2004.
[6] S. Orcioni, M. Pirani, and C. Turchetti, "Advances in Lee-Schetzen method for Volterra filter identification", Multidimensional Systems and Signal Processing, vol. 16, no. 3, pp. 265-284, 2005.
Further reading
Barrett J.F.: Bibliography of Volterra series, Hermite functional expansions, and related subjects. Dept. Electr. Engrg, Univ. Tech. Eindhoven, NL 1977, T-H report 77-E-71. (Chronological listing of early papers to 1977.) URL: http://alexandria.tue.nl/extra1/erap/publichtml/7704263.pdf
Bussgang, J.J.; Ehrman, L.; Graham, J.W.: Analysis of nonlinear systems with multiple inputs. Proc. IEEE, vol. 62, no. 8, pp. 1088-1119, Aug. 1974.
Giannakis G.B. & Serpedin E.: A bibliography on nonlinear system identification. Signal Processing, 81 (2001) 533-580. (Alphabetic listing to 2001.) www.elsevier.nl/locate/sigpro
Korenberg M.J., Hunter I.W.: The Identification of Nonlinear Biological Systems: Volterra Kernel Approaches. Annals of Biomedical Engineering (1996), Volume 24, Number 2.
Kuo Y.L.: Frequency-domain analysis of weakly nonlinear networks. IEEE Trans. Circuits & Systems, vol. CS-11(4) Aug 1977; vol. CS-11(5) Oct 1977.
Rugh W.J.: Nonlinear System Theory: The Volterra-Wiener Approach. Baltimore 1981 (Johns Hopkins Univ. Press). Online versions available, e.g. www.ece.jhu.edu/~rugh/volterra/book.pdf
Schetzen M.: The Volterra and Wiener Theories of Nonlinear Systems. New York: Wiley, 1980.
License
Creative Commons Attribution-Share Alike 3.0 //creativecommons.org/licenses/by-sa/3.0/