Types of Hysteresis
Hysteresis phenomena are widespread in nature. Ferromagnetic hysteresis is just one among many examples. The mechanisms underlying hysteresis fall within the realm of nonequilibrium thermodynamics, a topic with many conceptual difficulties not yet fully resolved. As a consequence, the word hysteresis may be used with different meanings in different contexts, depending on the physical picture one has in mind and the approximations that one is introducing in the description. Thus it seems appropriate to clarify the physical picture and the approximations that will accompany the description of hysteresis given in this book. We shall concentrate on three approximate views of the behavior of metastable systems: rate-independent hysteresis, viscous-type rate-dependent hysteresis, and thermal relaxation. In particular, we shall discuss how these approximations apply to bistable systems, which represent the simplest case displaying the phenomena we are interested in.

Because we are going to discuss general aspects of hysteresis, not limited to ferromagnetic materials, we shall employ a general terminology. We consider a system, on which we act through some external action, named input or control variable or external field, and indicated by H. In the case of a magnetic material, H would represent the applied magnetic field. The system responds to this action and changes its internal state. The change is monitored by looking at the behavior of some quantity X, named output or state variable. In the magnetic case, this could be, for example, the total magnetic moment of the body. H is the independent variable and X is the dependent one, in the sense that we assume to know in advance the time dependence H(t) and that we want to describe and predict the corresponding time response X(t). This is not always the case. In the treatment of magnetic losses, for example, one usually considers dynamic hysteresis loops where the output, not the input, is specified (e.g., losses under sinusoidal magnetization). This situation necessarily requires some feedback control, which adjusts the input in order to produce the required output. Such feedback complications will not be considered in this chapter.

The nature and the mathematical description of hysteresis can be qualitatively different depending on the tensor character of the input and output variables. We shall limit our considerations to scalar hysteresis, where both H and X are just numbers. In reality, the natural input and output variables for ferromagnetic materials are the magnetic field and the magnetization, which are vector quantities. Therefore, vector hysteresis is in principle expected to be the correct framework, and scalar hysteresis should emerge from the general picture under appropriate approximations. Yet, scalar hysteresis descriptions are often introduced from the beginning on the basis of some phenomenological assumption, mainly because they permit a simpler and more intuitive treatment of otherwise complicated phenomena. We will discuss the value of such approaches and their limitations on several occasions. Some aspects of vector hysteresis are considered in Chapter 8.

Although not directly addressed in this book, one aspect of hysteresis is attracting increasing interest: its universal and ubiquitous nature. How general can the mathematical description of hysteresis be? Is there some universal paradigm that can be applied to all situations where hysteresis is observed? The ideas presented in this chapter and the more detailed discussion of Preisach systems given in Part V might also be useful as a contribution toward a better comprehension of these general questions.
2.1 WHAT WE MEAN BY HYSTERESIS

In a loose sense, we might say that hysteresis appears when the output X is not a single-valued function of the input H. Often, when using the word hysteresis, one has in mind a hysteresis loop of the type shown in Chapter 1, where there are two possible output values for any input value. We shall see that hysteresis phenomena have a much more complex nature, which cannot be reduced to the analysis of hysteresis loops only. Yet, it is a fact that hysteresis loops are the basic feature characterizing magnetic materials. Therefore, we begin by discussing certain situations where hysteresis loops naturally arise as a consequence of the existence of a phase lag between input and output. In linear systems, this behavior is described by a generalized susceptibility fully characterizing the internal structure of the system. This kind of situation gives rise to rate-dependent hysteresis, depending on the rate of change of the input, and disappearing under quasi-static excitation. Subsequently, we introduce the concept of
memory and we discuss situations that give rise to rate-independent hysteresis as a consequence of memory mechanisms persistent in time. Finally, we show how the various types of rate-dependent and rate-independent hysteresis can originate in a system whose free energy has a complicated structure, with many local minima corresponding to metastable states. As discussed in Chapter 1, this is the typical situation expected in magnetic systems.

In the literature on hysteresis, there is a tendency to use the word hysteresis to describe rate-independent hysteresis only. Rate-dependent effects are considered as additional phenomena that complicate the picture and should be ruled out as much as possible. For the physical systems considered in this book, this separation appears somewhat artificial and not very useful. Rate-independent hysteresis is nothing more than an approximation to processes that are intrinsically rate-dependent, and we prefer to summarize under the word hysteresis the whole set of intimately connected phenomena arising from the simultaneous existence of metastable states, dissipation mechanisms with characteristic time scales, and thermal relaxation. On the other hand, it is just in the mathematical description of rate-independent hysteresis that one encounters the most interesting and challenging difficulties, and it is in this direction that important mathematical progress is being achieved.
If we plot X as a function of H under varying time t, we obtain the elliptical loops shown in Fig. 2.1, traced out when a sinusoidal input of amplitude H_0 and pulsation ω produces a sinusoidal output of amplitude X_0, lagging behind the input by the phase φ (Eq. (2.1)). In this context, the problem is to know the functional dependence of the amplitude, X_0(H_0, ω), and of the phase lag, φ(H_0, ω), on H_0 and ω. As an example, Fig. 2.1 shows the set of loops obtained at fixed ω and variable H_0 when X_0 is proportional to H_0 and φ is independent of H_0. There is no a priori reason to expect that the response of the system to a sinusoidal excitation should still be sinusoidal. This is determined by the internal structure of the system, and in general one will find that
FIGURE 2.1. Elliptical loops produced by input-output phase lag (Eq. (2.1)).
a sinusoidal excitation gives rise to a distorted output. An undistorted response like the one of Eq. (2.1) is typical of linear systems, where the superposition principle holds. This principle permits one to construct the response of a time-invariant linear system to a generic input as the superposition of the responses to input impulses. Let χ(t - t_0) be the system response at time t to an input impulse δ(t - t_0) that occurred at the previous time t_0. Often χ(t) has the form
χ(t) = χ_i δ(t) + χ_d(t) θ(t)
(2.2)
The two terms represent the instantaneous and the delayed system response. The presence of the Heaviside step function θ(t) ensures that causality holds and that the response follows the excitation but never anticipates it. We shall assume that χ_d(t) is a regular function of t, such that χ_d(t) → 0 for t → ∞. This implies that the state occupied by the system under zero input, after all transients have died away, is always X = 0. According to the superposition principle, the system response to the generic input H(t) will be
X(t) = χ_i H(t) + ∫_{-∞}^{t} χ_d(t - t') H(t') dt'
(2.3)
The impulse response function χ(t) fully characterizes the internal structure of the system. Equation (2.3) acquires a particularly interesting form when we pass to Fourier transforms
z_ω = ∫_{-∞}^{+∞} z(t) exp(iωt) dt

(2.4)
where z stands for H or X. In Fourier space, Eq. (2.3) becomes
X_ω = χ(ω) H_ω

(2.5)
where

χ(ω) = χ_i + ∫_0^{∞} χ_d(t) exp(iωt) dt

(2.6)
is the Fourier transform of χ(t). χ(ω) is named the generalized susceptibility. It is a complex quantity, with a real and an imaginary part:

χ(ω) = χ'(ω) + i χ''(ω)

(2.7)
From Eq. (2.6) and the fact that χ_d(t) is a real function, we deduce that χ*(ω) = χ(-ω), which means that

χ'(-ω) = χ'(ω),  χ''(-ω) = -χ''(ω)

(2.8)
χ' is an even function of ω, whereas χ'' is odd and changes sign with ω. We can also express χ in terms of its amplitude and phase, as

χ(ω) = |χ| exp(iφ)

(2.9)

where

cos φ = χ'/|χ|,  sin φ = χ''/|χ|

(2.10)
These results make clear the behavior of a linear system under sinusoidal excitation. Equation (2.5) shows that a linear system responds to a sinusoidal input just as described by Eq. (2.1) and Fig. 2.1, with X_0(H_0, ω) = |χ(ω)| H_0, proportional to H_0, and φ(H_0, ω) equal to the loss angle of Eq. (2.10), independent of H_0. As to the frequency dependence of χ(ω), we mentioned the fact that χ''(ω) is an odd function of ω, which usually changes sign by passing through the origin. This means that χ''(0) = 0, i.e., after Eq. (2.10), φ(0) = 0. Thus in the quasi-static limit, where the input rate of change is arbitrarily small, the phase lag vanishes and we no longer have any hysteresis loop. The quasi-static system response X(H) reduces to the single-valued linear relationship

X(H) = χ'(0) H

(2.11)
The behavior described by Eq. (2.3) and Eq. (2.5) is an example of rate-dependent hysteresis, disappearing when the excitation is sufficiently slow.
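As a concrete illustration (a sketch of mine, not an example from the text), take the simplest delayed kernel, χ_i = 0 and χ_d(t) = (χ_0/τ) exp(-t/τ), for which Eq. (2.3) is equivalent to the relaxation equation τ dX/dt = χ_0 H - X and the susceptibility is χ(ω) = χ_0/(1 - iωτ). The sketch below drives this system sinusoidally and shows that the X(H) loop encloses a finite area at finite frequency but collapses in the quasi-static limit:

```python
import numpy as np

def loop_area(omega, chi0=1.0, tau=1.0, H0=1.0, periods=10, n=20000):
    """Steady-state area of the X(H) loop for tau*dX/dt = chi0*H - X,
    driven by H(t) = H0*sin(omega*t). Exponential midpoint integrator."""
    dt = 2 * np.pi / omega / n
    alpha = np.exp(-dt / tau)
    t = np.arange(periods * n) * dt
    xs = np.empty(len(t))
    x = 0.0
    for k, tk in enumerate(t):
        # exact decay over dt, with H sampled at the interval midpoint
        x = alpha * x + (1 - alpha) * chi0 * H0 * np.sin(omega * (tk + dt / 2))
        xs[k] = x
    h = H0 * np.sin(omega * (t + dt))       # input at the sample times
    dxdt = np.gradient(xs[-n:], dt)         # last (steady-state) period
    return np.sum(h[-n:] * dxdt) * dt       # W = loop integral of H dX

W_fast = loop_area(1.0)    # omega*tau = 1: wide open loop
W_slow = loop_area(0.01)   # quasi-static regime: loop nearly closed
print(W_fast, W_slow)
```

For this kernel χ''(ω) = χ_0 ωτ/(1 + ω²τ²), so the fast loop area should approach π χ'' H_0² = π/2 (the loss formula of Eq. (2.15) below), while at ωτ = 0.01 the area is roughly fifty times smaller: the vanishing of the loop as ω → 0 is precisely the disappearance of rate-dependent hysteresis under quasi-static excitation.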
dU = δQ + H dX

(2.12)
where U represents the system internal energy and δQ is the heat absorbed by the system in the transformation. Let us now consider the case where the system is kept at constant temperature and is subjected to a periodic input. If the ensuing system response is itself periodic, the variation of the internal energy in one cycle will be zero, and we obtain from Eq. (2.12)
∮_cycle H dX = - ∮_cycle δQ
(2.13)
According to Eq. (2.13), the area of the loop described by X(H) gives the work dissipated as heat in each cycle. This quantity is named energy loss per cycle. The term power loss is used when the dissipation per unit time rather than unit cycle is considered. Note that the loop area must always be positive, or, in other words, the X(H) loop must always be traversed counterclockwise. In the opposite case, one would have a cyclic transformation whose sole result would be the transformation into work of a certain heat amount absorbed from a single heat reservoir, which would contradict the second law of thermodynamics. This also shows that the cyclic transformation considered is irreversible, because the reversed transformation would not be admissible, for the same reason. In the case of the linear systems discussed in the previous section, the energy loss can be expressed in terms of the generalized susceptibility. Under sinusoidal excitation, we obtain from Eq. (2.1) and Eq. (2.13) that the loop area W is given by
W = ∮_cycle H dX = π H_0 X_0 sin φ

(2.14)

Expressing X_0 and φ through the generalized susceptibility, X_0 = |χ(ω)| H_0 and sin φ = χ''(ω)/|χ(ω)| (Eq. (2.10)), this becomes

W = π χ''(ω) H_0²

(2.15)
Equation (2.15) shows that the dissipation is controlled by the imaginary part of the generalized susceptibility. We conclude by remarking that expressing losses in terms of loss angles is not necessarily limited to linear systems. In the evaluation of power losses in magnetic materials, the loss under sinusoidal output is usually considered. When one estimates the loss through Eq. (2.13), one can decompose the input H into its Fourier components. Due to the orthogonality properties of sinusoidal functions, only the fundamental harmonic will contribute to the integral. Therefore, the loss can still be expressed in the form of Eq. (2.14), where φ represents the phase shift between the input fundamental harmonic and the output.
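This orthogonality argument is easy to check numerically. In the sketch below (arbitrary parameter values of mine), the output carries a third-harmonic distortion on top of the fundamental, yet the loop integral ∮ H dX still equals π H_0 X_0 sin φ computed from the fundamental alone:

```python
import numpy as np

# H(t) = H0 sin(wt); the output has a fundamental of amplitude X0 lagging
# by phi, plus a third harmonic that should not contribute to the loop area.
H0, X0, phi = 2.0, 0.7, 0.6
X3, phi3 = 0.4, 1.1            # arbitrary distortion of the output
w, N = 1.0, 200000

t = np.linspace(0.0, 2 * np.pi / w, N, endpoint=False)
dt = (2 * np.pi / w) / N
H = H0 * np.sin(w * t)
# analytic dX/dt of the distorted output
dXdt = X0 * w * np.cos(w * t - phi) + 3 * w * X3 * np.cos(3 * w * t - phi3)

W = np.sum(H * dXdt) * dt      # loop integral of H dX over one period
print(W, np.pi * H0 * X0 * np.sin(phi))
```

The harmonic cross-terms average to zero over the cycle, so the computed area matches π H_0 X_0 sin φ regardless of the distortion amplitude X3.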
Consider, for example, an input that is held at the constant value H_0 for all times t > t_0, after an arbitrary previous history. According to Eq. (2.3), the response at times t > t_0 is

X(t) = χ_i H_0 + ∫_{-∞}^{t_0} χ_d(t - t') H(t') dt' + H_0 ∫_0^{t-t_0} χ_d(t') dt'

(2.16)
As t increases beyond t_0, the first integral progressively becomes smaller, because χ_d(t) → 0 for t → ∞, and the response tends to the limit

X = [χ_i + ∫_0^{∞} χ_d(t') dt'] H_0 = χ'(0) H_0

(2.17)
which coincides with the quasi-static response previously discussed (see Eq. (2.11)). When this limit is approached, the memory of the field history H(t) for t < t_0 is lost. In this sense, the memory effect has a nonpersistent character. Systems with a more complex internal structure, however, can have persistent memory of the past. By this we mean that the state of the system
under constant input keeps on depending on the past history of the input even after all transients have died out. In other words, given a certain input value H, the system occupies one of several possible states, and it is the input history that selects which of these states is actually occupied. The system remains in that state for an indefinite time, if the external conditions are not modified, and this produces the persistent memory effect. According to this viewpoint, a system with persistent memory cannot be in thermodynamic equilibrium, because it is allowed to occupy one of several states, whereas the equilibrium state under given external conditions (H and temperature) is unique. If a system with persistent memory occupies a state that depends on its past history, then its future evolution, that is, the behavior of the curve X(H) starting from given values H_1 and X_1, will depend on that past history as well. In particular, having reached the point (H_1, X_1) under increasing or decreasing input will play a role. The expected situation is shown in Fig. 2.2. The point (H_1, X_1) has been reached under increasing H. A new evolution branch is generated if we stop increasing H and start decreasing it. A point where the sign of the input variation is reversed is called a reversal point. From the thermodynamic viewpoint, the fact that, after the reversal point, the system does not trace back the same X(H) curve in the reversed sense is again an indication that the system is not in thermodynamic equilibrium. In fact, if the system were in equilibrium for each value of the input, the X(H) curve would depend only on H and temperature, and thus would be exactly the same, independent of the sense in which it is traversed. Because a system with persistent memory has the possibility of occupying one of many different states, it is natural to consider how we can identify these states.
This will depend on the internal structure of the system, and in particular on the fact that the system is endowed with local or nonlocal memory.
FIGURE 2.2. Branching in rateindependent hysteresis.
Local memory. In a system with local memory, the values of H and X are sufficient to identify the state. The various states associated with a given H are then necessarily characterized by different values of X. Any point in the H-X plane uniquely identifies one and only one state. Given the point (H_1, X_1) of Fig. 2.2, if the system has local memory its subsequent evolution out of this point will at most depend on the sense (increasing or decreasing) of the input variation. Therefore, there will be only two possible curves originating from that point, one under increasing input and the other under decreasing input. The role of past history is to select which point (H_1, X_1) is reached by the system. Once this point is selected, the future evolution of the system is fully determined by the initial condition (H_1, X_1) and past history no longer plays a role.

Nonlocal memory. In a system with nonlocal memory, H and X do not give a complete characterization of the system. Many states are associated with a given point of the H-X plane, and a whole set of X(H) curves starting from the single point (H_1, X_1) will exist, depending on past history. In a system with nonlocal memory, additional internal state variables are needed to complete the description of the system. In Part V, we shall discuss the nature of such internal variables in greater detail. This kind of behavior is very close to what is observed in magnetic materials. In Chapter 1, we gave several examples of magnetization curves dependent on field history, where branching and nonlocal memory effects play a dominant role.
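A minimal toy model of nonlocal memory (my construction; the rectangular "relay" unit is the elementary hysteron underlying the Preisach systems of Part V) is the sum of two relays with different switching thresholds. Two different input histories can bring the system to the same point (H, X) of the H-X plane while leaving different internal relay states, so the subsequent branches differ:

```python
class Relay:
    """Rectangular hysteresis unit: switches to +1 at h >= up, to -1 at h <= down."""
    def __init__(self, up, down, state=-1):
        self.up, self.down, self.state = up, down, state

    def apply(self, h):
        if h >= self.up:
            self.state = +1
        elif h <= self.down:
            self.state = -1
        return self.state

def run(history, relays):
    for h in history:
        for r in relays:
            r.apply(h)
    return sum(r.state for r in relays)

# Two histories ending at the same input H = 0 ...
A = [0.0, 1.5, 0.0]          # flips only the narrow relay up
B = [0.0, 4.0, -1.5, 0.0]    # flips both up, then the narrow one back down

ra = [Relay(1, -1), Relay(3, -3)]
rb = [Relay(1, -1), Relay(3, -3)]
XA, XB = run(A, ra), run(B, rb)
print(XA, XB)                # same output X at the same input H

# ... but a further input change reveals the hidden internal state:
XA2, XB2 = run([-1.2], ra), run([-1.2], rb)
print(XA2, XB2)              # the two branches now differ
```

Both histories end at (H, X) = (0, 0), yet lowering the input to -1.2 drives history A to X = -2 while history B stays at X = 0: the point (H, X) alone does not determine the future evolution, which is the defining feature of nonlocal memory.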
Nowhere in the previous analysis of persistent memory did time have a role. No time scale was introduced and we simply assumed that, in order to see persistent-memory effects, one should wait long enough to let possible transients decay to zero. In this sense, hysteresis due to persistent memory is rate-independent, because it does not depend on the rate at which the input is varied (provided it is low enough), but only on the sequence of values attained by the input during the system evolution. As mentioned at the beginning of this chapter, rate-independence is often considered the most distinctive feature of hysteresis. Yet, from the thermodynamic viewpoint, it only represents an approximation of limited validity. In fact, any system spontaneously relaxes toward thermodynamic equilibrium under the action of thermal agitation. When equilibrium is reached, any memory of the past is lost, which shows that absolutely persistent memory cannot exist. In principle, by waiting long enough to let the system reach equilibrium under each input value, one could drive the system through a sequence of equilibrium states and no hysteresis at all would be observed. Nonetheless, rate-independence may naturally arise as a useful approximation when the input rate takes values in an
appropriate interval. On the one hand, the input rate must be low enough to rule out fast rate-dependent effects, like, for example, eddy-current damping in metals. On the other hand, the input rate must be large enough to prevent the system from significantly relaxing toward equilibrium. For all those systems (and many magnetic systems fall within this category) where thermal relaxation is extremely slow and proceeds with the logarithm of time, there will exist a substantial input-rate interval where both these requirements are simultaneously fulfilled. In this interval, the system will exhibit rate-independent hysteresis to a very good approximation.
We shall call this function the Landau free energy, after L.D. Landau, who developed on this basis a phenomenological theory of phase transitions. In spite of the formal similarity, there is an essential difference between G_L and the Gibbs energy G at equilibrium. G is a function of H and T only, in which X must be expressed as a function of H and T through the equation of state of the system. Conversely, G_L is the energy of that particular restriction where the state variable X is forced to take a certain given value, as if it were an external constraint. The physical meaning of G_L becomes clear in the frame of statistical mechanics, where G_L appears as the result of a partial averaging process carried out over the partition function Z(H, T) of the system. According to statistical thermodynamics, Z determines all thermodynamic properties, and is defined as
Z(H,T) = Σ_i exp[-(E_i - H X_i)/(k_B T)] = exp[-G(H,T)/(k_B T)]

(2.19)
In Eq. (2.19), Σ_i is a shortcut notation for any sum or integral over admissible microstates i, and E_i and X_i are the values of the system energy and of the variable X in microstate i. The evaluation of the sum gives the exponential of the Gibbs energy², G(H, T). Let us now carry out the thermodynamic sum in two steps, by summing up first over all the microstates in which X has a fixed value, and then over all possible X values. The result of the first partial sum gives the Landau free energy:

exp[-G(H,T)/(k_B T)] = Σ_X Σ_{i: X_i = X} exp[-(E_i - H X_i)/(k_B T)]

(2.20)

= Σ_X exp[-G_L(X; H, T)/(k_B T)]
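The two-step summation of Eq. (2.20) can be made concrete on a toy system (my construction, not from the text): three Ising spins s_j = ±1 with energy E = -J(s_1 s_2 + s_2 s_3) and state variable X = s_1 + s_2 + s_3. The partial sum over the microstates with a fixed X defines G_L(X; H, T), and summing exp(-G_L/k_B T) over X must restore the full partition function of Eq. (2.19):

```python
import itertools
import math

J, H, kT = 1.0, 0.3, 1.0   # arbitrary coupling, field, and k_B*T

states = list(itertools.product([-1, 1], repeat=3))

def energy(s):             # internal energy of microstate s
    return -J * (s[0] * s[1] + s[1] * s[2])

# Full partition function, Eq. (2.19): Z = sum_i exp(-(E_i - H X_i)/kT)
Z = sum(math.exp(-(energy(s) - H * sum(s)) / kT) for s in states)

# Landau free energy from the partial sum at fixed X, Eq. (2.20)
GL = {}
for X in (-3, -1, 1, 3):
    zX = sum(math.exp(-(energy(s) - H * X) / kT)
             for s in states if sum(s) == X)
    GL[X] = -kT * math.log(zX)

# Summing over X must restore Z, i.e. exp(-G/kT) = sum_X exp(-GL(X)/kT)
Z_check = sum(math.exp(-g / kT) for g in GL.values())
print(Z, Z_check)
```

Here the agreement is exact by construction; the point of the decomposition is that G_L(X), unlike G, retains the dependence on the constrained variable X and can therefore exhibit several minima.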
If the separation of time scales previously mentioned holds to a sufficient approximation, then one can stop after the first partial average of Eq. (2.20), and take G_L as the free energy of the system. According to the last equality of Eq. (2.20), exp(-G_L/k_B T) will be proportional to the probability of occupying a given X state, and the system will tend to stay in those states where G_L is low. In general, it is extremely difficult to carry out the partial average explicitly and thus to calculate the Landau energy from statistical mechanics. Most often, one exploits symmetry arguments and

² Sometimes the same definition is said to give the Helmholtz, not the Gibbs free energy. This exchange of roles depends on whether one decides to consider the term -H X_i as part of the energy of microstate i or as potential energy in the external field H.
heuristic considerations to arrive at a reasonable, approximate estimate of the form taken by G_L in particular cases. In the rest of this book, the energy G_L will play a role in a number of considerations concerning hysteresis. We shall refer to G_L simply as the free energy of the system, but the subscript L will be used whenever possible as a reminder of the many assumptions and approximations accompanying its use. During relaxation, the system can be thought of as being under the action of competing thermodynamic forces, the internal force [∂F/∂X]_T and the external field H. There will be equilibrium between these two forces whenever the following relation is fulfilled:
H = [∂F/∂X]_T

(2.21)
From Eq. (2.18) and Eq. (2.21), we see that when the system is in equilibrium, the G_L free energy is at an extremum:

[∂G_L/∂X]_{H,T} = 0

(2.22)
The equilibrium is stable when the extremum is a minimum. When Eq. (2.21) or Eq. (2.22) is satisfied, there is equilibrium between the thermodynamic forces acting on the system, but this does not mean that the system is in complete thermodynamic equilibrium. The situation is illustrated by Fig. 2.3.

FIGURE 2.3. Landau free energy with indication of metastable states.

The arrows indicate the local minima of the system free energy. Each minimum is a solution of Eq. (2.22) and represents a possible metastable state for the system. In a metastable state, there is stable equilibrium between the forces acting on the system. Over short time periods, these forces tend to keep the system in the energy well initially occupied, but over longer time scales thermal agitation progressively makes other states accessible to the system. The system visits neighboring states through random thermal fluctuations, and once it reaches by chance a local energy maximum separating the initial state from neighboring energy wells, it spontaneously jumps to some other metastable state that becomes accessible. Through this mechanism, the system probes progressively larger regions of phase space, until it reaches thermodynamic equilibrium, where the probability to occupy any given state is dictated by Boltzmann statistics. How fast or slow the approach to equilibrium will be is determined by the height of the energy barriers separating neighboring states as compared with the strength of thermal agitation. This qualitative picture will be made more precise in Section 2.4.

Let us consider how the various hysteresis mechanisms discussed so far emerge from this description. First of all, we see that persistent memory is the natural consequence of the presence of multiple metastable states. The system can occupy one of several local energy minima and past input history determines the minimum actually occupied. Note that, in the particular example of Fig. 2.3, each minimum is identified by the corresponding value of X, which means that the system represented in that example possesses local memory. In the case of nonlocal memory, G_L would be a function of additional internal variables and the energy profile of Fig. 2.3 would become a multidimensional one, with a much more complex structure. The memory is absolutely persistent in the zero-temperature limit, where the effect of thermal agitation can be neglected. This is the limit, further discussed in Section 2.2, where rate-independent hysteresis can be observed. If the external conditions do not change, the system will indefinitely remain in any local energy minimum it may initially occupy. The only way to force the system to evolve is to change the external field H. Due to the presence of the term -HX in Eq.
(2.18), varying the external field distorts the energy profile, eventually transforming the initial G_L minimum into an inflection point. At that moment, the system loses stability and makes a spontaneous jump to the nearest local minimum. This event will be called a Barkhausen jump, and is analogous to the Barkhausen effect presented in Section 1.2. During the jump, the energy of the system suddenly decreases. Some energy is thus irreversibly dissipated as heat into the thermal bath. The dissipation mechanism by which this occurs will have some characteristic time scale. In the rate-independent approximation, one assumes that the external field changes so slowly that it practically remains unchanged during the time needed to complete the Barkhausen jump. In this limit, the role of the external field is just to force the system to pass from one local minimum to the next. The sequence of energy minima visited is the only important feature, and time plays no role. However, when the external field rate becomes
so high that appreciable field variations take place during individual Barkhausen jumps, then rate-independence no longer applies. The system no longer evolves through a sequence of spontaneous jumps but approaches a regime of forced dynamic evolution driven by the external field. This transition from rate-independent to rate-dependent hysteresis will be discussed in Section 2.3.
f(x) = x⁴ - 2ax²

(2.23)
where a is a positive constant. We use lowercase letters to indicate that the problem has been reduced to some convenient dimensionless form, in which all variables and parameters are dimensionless. f(x) is shown in Fig. 2.4. It has two equal minima located at x = ±a^{1/2}, and a maximum at x = 0. According to Eq. (2.18), the free energy under nonzero input h will be
g_L(x; h) = x⁴ - 2ax² - hx
(2.24)
³ The temperature dependence is not important for the subsequent considerations and will be understood in the rest of this section and throughout Section 2.3.
FIGURE 2.4. Free energy of the bistable system described by Eq. (2.23), with a = 0.5. The energy minima are located at x = ±x_r = ±a^{1/2}.
The metastable states under the generic field h are determined by the condition ∂g_L/∂x = 0, with ∂²g_L/∂x² > 0, identifying local g_L minima. The qualitative behavior of g_L(x; h) for various values of h is shown in Fig. 2.5. When h is large, the energy of interaction with the external field dominates and g_L exhibits only one minimum. When h increases from -∞, at a certain field -h_c, to be determined later, a new energy minimum is formed. At h = 0 the two minima have the same energy. At h = h_c the minimum initially occupied by the system becomes an inflection point:
FIGURE 2.5. Sequence of energy profiles calculated from Eq. (2.24) under different external fields, showing genesis of Barkhausen jumps and hysteresis loop. The numbers refer to the field values indicated in Fig. 2.6.
the system is no longer stable and makes a spontaneous and irreversible Barkhausen jump to the lower energy state. Only one minimum is present for higher fields. A similar situation is found when the field is progressively decreased from +∞, except for the fact that the Barkhausen jump will now take place at h = -h_c. We can represent the same situation by plotting the free energy gradient ∂f/∂x and by considering that, according to the equilibrium requirement ∂g_L/∂x = 0, the condition h = ∂f/∂x must always be satisfied as the system evolves (Fig. 2.6). The ∂f/∂x profile can be decomposed into two stable branches, one for x < -x_c and the other for x > x_c, where ∂²f/∂x² > 0, and a central unstable branch where ∂²f/∂x² < 0. The right stable branch is traversed when h decreases from +∞ down to h = -h_c. At h = -h_c, x = x_c, the right branch ends and the system jumps to the point h = -h_c, x = -x_d of the left branch. A similar description applies for increasing fields. If we plot the state variable x as a function of the input field h, we obtain the hysteresis loop shown in Fig. 2.7. In spite of its simple structure, this loop already exhibits many of the features of the hysteresis loops actually observed in real systems. The two stable states existing at h = 0 are examples of remanent states. There the state variable takes the value x = ±x_r determined by the condition ∂f/∂x = 4x(x² - a) = 0, which gives x_r = ±a^{1/2}. The two instability points h = h_c, x = -x_c and h = -h_c, x = x_c, where the Barkhausen jumps take place, are points where the two conditions, ∂g_L/∂x = 0 and ∂²g_L/∂x² = 0, must simultaneously hold, that is,

4x³ - 4ax - h = 0,  3x² - a = 0

(2.25)

The solution gives h_c = 8(a/3)^{3/2}, x_c = (a/3)^{1/2}. The final state x = x_d > 0 reached by the system after the Barkhausen jump at h = h_c is also a solution of the equation 4x_d³ - 4ax_d - h_c = 0, with 3x_d² - a > 0. One finds x_d = 2(a/3)^{1/2}. In Section 2.1.2, we showed that the area of the hysteresis loop measures the amount of work dissipated as heat during each excitation cycle. This dissipation takes place in the Barkhausen jumps and is directly related to the g_L energy. Let us consider the shaded area ΔW shown in Fig. 2.6. By construction, this area is equal to the integral
ΔW = ∫_{-x_c}^{x_d} [h_c - ∂f/∂x] dx = - ∫_{-x_c}^{x_d} [∂g_L/∂x]_{h=h_c} dx

(2.26)

= g_L(-x_c; h_c) - g_L(x_d; h_c)
This is exactly the energy decrease occurring when the system makes the Barkhausen jump (see Fig. 2.5). The system suddenly reduces its energy and transfers the energy difference as heat to the thermal bath. The same occurs in the jump taking place under a decreasing field. As shown by Fig. 2.6, the areas associated with the two jumps just sum up to give the total hysteresis loop area. This description makes clear the two steps through which the work performed by external sources is transformed into heat. Initially, the system gains energy from the external field as the minimum it occupies grows in energy and becomes less and less deep. Then, at the point where the minimum becomes an inflection point, the energy previously gained is quickly transferred to the thermal bath as the system jumps to lower energy, and the whole process can start again. Note also that the description is rate-independent, because we have assumed that the system always occupies one of the energy minima existing at the given field, whatever the field rate of change is. In particular, when we draw the two horizontal branches of Fig. 2.6 associated with the two Barkhausen jumps, we are implicitly assuming that the field does not change appreciably during the time needed by the system to make the jump. In Section 2.3, we will see how rate-dependent hysteresis naturally emerges when this assumption no longer holds.

FIGURE 2.6. Equilibrium between thermodynamic forces in bistable system and genesis of hysteresis loop. The numbers refer to the energy profiles of Fig. 2.5.

FIGURE 2.7. Solid line: Hysteresis loop of bistable system. Compare with Fig. 2.6. Broken line: Phase coexistence curve obtained by applying Maxwell convention.
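The closed-form results above are easily checked numerically. The sketch below (dimensionless units, a = 0.5 as in Fig. 2.4) sweeps h up and down while keeping the system in the local minimum of g_L(x; h) selected by its history, in the rate-independent spirit of this section. It verifies that the jumps occur at h = ±h_c and that the loop area ∮ h dx equals twice the jump dissipation ΔW of Eq. (2.26):

```python
import numpy as np

a = 0.5
xc = (a / 3) ** 0.5              # instability point, from Eq. (2.25)
hc = 8 * (a / 3) ** 1.5          # field at which the Barkhausen jump occurs
xd = 2 * (a / 3) ** 0.5          # landing point after the jump

def gL(x, h):                    # Landau free energy, Eq. (2.24)
    return x**4 - 2 * a * x**2 - h * x

dW = gL(-xc, hc) - gL(xd, hc)    # energy released in one jump, Eq. (2.26)

def next_state(h, x_prev):
    """Local minimum of gL(.; h) reached from x_prev (rate-independent rule)."""
    r = np.roots([4.0, 0.0, -4.0 * a, -h])   # dgL/dx = 4x^3 - 4ax - h = 0
    r = r.real[np.abs(r.imag) < 1e-9]        # real equilibria only
    r = r[3 * r**2 - a > 1e-12]              # keep minima: d2gL/dx2 > 0
    return r[np.argmin(np.abs(r - x_prev))]  # stay in the current well

hs = np.concatenate([np.linspace(-1.5, 1.5, 30001),
                     np.linspace(1.5, -1.5, 30001)])
xs, x = [], -1.0
for h in hs:
    x = next_state(h, x)
    xs.append(x)
xs = np.array(xs)

# loop area = closed integral of h dx over the cycle (trapezoid form)
area = 0.5 * np.sum((hs[1:] + hs[:-1]) * np.diff(xs))
print(area, 2 * dW)              # total dissipation per cycle vs 2*ΔW
```

The agreement confirms that all the dissipation of the cycle is concentrated in the two Barkhausen jumps, each releasing ΔW as heat.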
x (n = 1), and two control parameters, h and a (k = 2). We include the parameter a among the control parameters because this is a situation of definite physical interest that will be encountered several times, for example in Chapter 5, when dealing with the Weiss mean-field theory of ferromagnetism, and in Chapter 8, when discussing coherent magnetization rotation processes. Given the potential⁴ V(x; h), one can, for each specific choice h = h_0 of the control parameter set, classify the equilibrium points where ∇V = 0 according to the properties of the Hessian matrix V_ij = ∂²V/∂x_i ∂x_j calculated at the equilibrium point. Two situations are possible.
Isolated or nondegenerate or Morse critical points. We shall simply term these points equilibrium points. These are points where ∇V = 0 and the determinant of the Hessian matrix is not zero, det V_ij ≠ 0, which means
that the eigenvalues of the Hessian matrix are all different from zero. If all the eigenvalues are positive, then the point is a local minimum of the potential and represents a state of locally stable equilibrium. If, on the contrary, a certain number i of eigenvalues is negative, the point is named a Morse i-saddle and describes an equilibrium situation unstable along certain directions. The number and the nature of equilibrium points determine the qualitative properties of the potential. For example, the potential of Eq. (2.24) may describe two qualitatively different situations, one in which there is only one potential minimum, and the other where there are two minima separated by an unstable potential maximum (see Fig. 2.5).
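This classification is easy to check numerically. The following Python sketch (an illustration added here, not part of the original text; the function name and the numerical tolerance are our own choices) classifies a Morse equilibrium point from the eigenvalues of its Hessian, and applies the test to the bistable potential of Eq. (2.24) at zero field:

```python
import numpy as np

def classify_equilibrium(hessian, tol=1e-9):
    """Classify a nondegenerate (Morse) equilibrium point from its Hessian."""
    eig = np.linalg.eigvalsh(np.asarray(hessian, dtype=float))
    if np.any(np.abs(eig) < tol):
        raise ValueError("degenerate (non-Morse) critical point: det V_ij = 0")
    n_neg = int(np.sum(eig < 0))
    if n_neg == 0:
        return "minimum"
    if n_neg == len(eig):
        return "maximum"
    return f"saddle (i = {n_neg})"

# Bistable potential of Eq. (2.24) at h = 0: g_L(x) = x^4 - 2*a*x^2.
# Equilibria: x = 0 (maximum) and x = +/- sqrt(a) (the two minima).
a = 0.5
for x0 in (0.0, np.sqrt(a), -np.sqrt(a)):
    hessian = [[12 * x0**2 - 4 * a]]   # d^2 g_L / dx^2, as a 1x1 "matrix"
    print(x0, classify_equilibrium(hessian))
```

For a one-dimensional potential the Hessian reduces to the single second derivative, but the same routine applies unchanged to the multidimensional case.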
Nonisolated or degenerate or non-Morse critical points. We shall simply term these points critical points. These are points where ∇V = 0 and det V_ij = 0. Some of the eigenvalues of the Hessian matrix, say l of them, are here equal to zero. Degenerate critical points play a fundamental role in the theory, since they control the qualitative properties of the potential as a function of the control parameters. To illustrate this fact, let x₀ be a certain critical point when the control parameter set takes the value h = h₀. The fact that x₀ is a critical point characterizes the behavior of the potential V(x; h₀) when x varies in the neighborhood of x = x₀ and h is kept fixed, h = h₀. The degenerate character of the critical point depends on the choice h = h₀, and the general question arises of how the existence of degenerate critical points, and, more generally, the properties of the potential will be modified when we vary the control parameters in the neighborhood of h = h₀. The deep conclusion reached by catastrophe theory is that the joint
⁴ We will simply indicate by x and h the whole sets of state variables and control parameters whenever there is no risk of ambiguity.
x and h dependence of the potential around a critical point cannot be arbitrary, but is limited to a few canonical forms, described by functions named catastrophe functions or simply catastrophes, which are classified according to the number l of eigenvalues of the Hessian matrix that are zero at the critical point, and to the number k of control parameters. The classification of catastrophes can be found in the texts mentioned in the bibliographical notes. Of interest for our purposes are the fold and cusp catastrophes.
The fold catastrophe is described by the function

V(x; h) = x³/3 − hx    (2.27)
The behavior of the potential under varying h is shown in Fig. 2.8. The control space is represented by the h axis, and the critical point is at h = 0. In fact, the set of equilibrium points of the potential satisfies the condition

∂V/∂x = x² − h = 0    (2.28)
This equation has two real roots when h > 0, which correspond to an energy minimum and an energy maximum. Conversely, no real solution exists when h < 0. The potential passes from one condition to the other at the critical point, where the two equilibrium points merge together and form an inflection point. This occurs when, in addition to Eq. (2.28), the condition ∂²V/∂x² = 0 is fulfilled, i.e., after Eq. (2.27), at x = 0. By inserting this result into Eq. (2.28), we see that this can only occur when h = 0. The point h = 0 represents the set (in this case consisting of one
FIGURE 2.8. Fold catastrophe.
point only) separating the regions of control space where the potential takes qualitatively different forms. This set is called the bifurcation set.
The cusp catastrophe is described by the function

V(x; h₁, h₂) = x⁴ + h₁x + h₂x²    (2.29)
We immediately recognize the structure of the energy of the bistable system of Eq. (2.24), with the external field h and the parameter a as control parameters. The form of the catastrophe function governs the qualitative properties of the potential as a function of the control parameters. This is shown in Fig. 2.9. In order to understand the meaning of this
FIGURE 2.9. Cusp catastrophe, with qualitative representation of energy profiles at different points in control space. Evolution along the horizontal line in the lower part of the figure generates the behavior shown in Figs. 2.5 and 2.6. The cusp-like solid line represents the bifurcation set (Eq. (2.32)), the dashed line the Maxwell set.
figure, let us begin by considering that the set of equilibrium points for the potential of Eq. (2.24) is determined by the condition

∂g_L/∂x = 4x³ − 4ax − h = 0    (2.30)
This is a cubic equation that has one or three real roots, depending on the values of h and a. Having one or three equilibrium points changes the qualitative behavior of the potential (one minimum against two minima separated by a maximum). If we represent the regions of the (h, a) control space where the potential takes one of the two qualitatively different forms just mentioned, we obtain the phase diagram shown in Fig. 2.9. The boundary line where one passes from one type of behavior to the other is the bifurcation set. At the bifurcation set, two equilibrium points merge together and disappear. This occurs through the formation of an inflection point, which, in addition to Eq. (2.30), also fulfills the condition

∂²g_L/∂x² = 12x² − 4a = 0    (2.31)
This equation has real solutions only for a > 0. In this case, one finds x = ±(a/3)^(1/2). By inserting this result into Eq. (2.30), one obtains the equation describing the bifurcation set:

h² = 64 (a/3)³    (2.32)
The cusp-like appearance of the bifurcation set makes clear the origin of the name given to this type of catastrophe.
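The root counting behind the phase diagram can be verified numerically: the number of real solutions of Eq. (2.30) must change from three to one as h crosses the bifurcation set of Eq. (2.32). A short Python sketch (illustrative; the helper names are ours):

```python
import numpy as np

def n_equilibria(h, a):
    """Count the real roots of Eq. (2.30): 4x^3 - 4ax - h = 0."""
    roots = np.roots([4.0, 0.0, -4.0 * a, -h])
    return int(np.sum(np.abs(roots.imag) < 1e-8))

def critical_field(a):
    """Positive field on the bifurcation set, from Eq. (2.32): h^2 = 64(a/3)^3."""
    return np.sqrt(64.0 * (a / 3.0) ** 3)

a = 0.75
h_c = critical_field(a)            # equals 1.0 for a = 0.75
print(n_equilibria(0.9 * h_c, a))  # inside the cusp: 3 equilibria
print(n_equilibria(1.1 * h_c, a))  # outside the cusp: 1 equilibrium
```

Fields just inside the cusp give two minima and one maximum; just outside, a single minimum survives, in agreement with Fig. 2.9.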
2.2.3 Evolution
Knowledge of the number and the nature of equilibrium points does not clarify which of these points will actually be occupied by the system. In principle, this would require some dynamic description of the system evolution, of the type discussed in Sections 2.3 and 2.4. A simpler approach, adopted in catastrophe theory, is to summarize such dynamic aspects through appropriate conventions. Two extreme cases are in particular considered.
Maxwell convention. The system always occupies the state where the potential attains its absolute minimum.
By considering states not homogeneous in space, made up of domains where the system is in one or the other phase (see Fig. 2.10), we can actually generate states that continuously span the interval −x_F ≤ <x> ≤ +x_F by adjusting the relative volumes of the two phases. The free energy of the mixture is a simple linear combination of the energies of the two phases, if we can neglect the energy concentrated at the interfaces separating different domains. In this frame, the Maxwell set can be interpreted as the set where phase coexistence is realized. When the system moves along the vertical broken line of Fig. 2.7, both phases are present, and the system smoothly passes from the condition <x> = −x_F (whole system in the x = −x_F phase) to the condition <x> = +x_F (whole system in the x = +x_F phase) with no hysteresis, by adjusting the relative volume of the domains. Magnetic domains are at the heart of the behavior of magnetic materials. However, the free energy structure responsible for them is more complicated than the simple bistable one considered here, and requires a specific detailed analysis. This aspect will be addressed in Part III.
forced evolution driven by the external field take place on comparable time scales, rate-independent approximations are no longer applicable, and more general frames of description are needed. We shall discuss this aspect for the bistable system of Section 2.2, which gives a simple and clear illustration of the intimate relationship existing between rate-independent and rate-dependent hysteresis and, in particular, of the limit conditions under which rate-independent hysteresis naturally emerges from the general rate-dependent picture.
γ dX/dt = −∂G_L/∂X    (2.33)
where γ is some positive constant. This is an equation of the type "velocity proportional to force," describing a viscous-type mechanism, in which γ represents the friction constant, measuring the ability of the system to resist the action of the force. The dynamics is overdamped, with no inertial effects giving oscillations around equilibrium. By taking into account Eq. (2.18), we can write
γ dX/dt = H(t) − ∂F/∂X ≡ H(t) − H_F(X)    (2.34)
where we have introduced the field H_F(X) = ∂F/∂X. The explicit time dependence in H(t) recalls the fact that we are going to apply the equation to situations where the rate of change of the external field may attain significant values and affect the evolution of the system. On the other
hand, we can write an evolution equation for X only if X has a well-defined value at each instant of time. This means that the internal relaxation processes leading to a given value of X must act over times much shorter than the times over which X undergoes significant variations. This is consistent with our previous interpretation of G_L (see Section 2.1.4). A system like the one described by Eq. (2.34), that is, by an equation of the form dX/dt = f(X, t), is called a dynamical system. If the function f explicitly depends on time, the system is called a nonautonomous dynamical system. In addition, for the cases we are interested in, like the bistable system of Fig. 2.4, f is a nonlinear function of X, which also makes the system nonlinear. The general study of nonlinear, nonautonomous dynamical systems is rather difficult. In this section, we shall simply list and discuss a few qualitative aspects relevant to the interpretation of hysteresis phenomena. Texts giving a more detailed treatment are mentioned in the bibliographical notes. As a simple introductory example, it is instructive to consider the case where the system free energy is parabolic, F(X) = X²/2χ₀, with χ₀ > 0. For any given field H, the system has only one equilibrium position, given by X = χ₀H. The quasi-static system response is thus linear, with no hysteresis. According to Eq. (2.34), under arbitrary dynamic excitation the system is governed by the equation
γ dX/dt + X/χ₀ = H(t)    (2.35)

This is the well-known equation describing exponential relaxation effects. The solution is

X(t) = (χ₀/τ) ∫_{−∞}^{t} exp[−(t − t′)/τ] H(t′) dt′    (2.36)

where τ = γχ₀. Equation (2.36) describes a linear system of the type defined by Eq. (2.3). The generalized susceptibility, calculated from Eq. (2.6), is given by
χ(ω) = χ₀/(1 + iωτ) = χ₀/(1 + ω²τ²) − i ωτχ₀/(1 + ω²τ²)    (2.37)
The loss per cycle under sinusoidal excitation (Eq. (2.15)) becomes

W = π [ωτχ₀/(1 + ω²τ²)] H₀² = π γω X₀²    (2.38)
The loss per cycle is proportional to the friction constant γ and, for given output amplitude X₀, is also proportional to the frequency of excitation. The process is purely rate-dependent. Under quasi-static conditions, the loss goes to zero and the system response reduces to the non-hysteretic, linear law X = χ₀H.
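Equation (2.38) can be checked by direct integration of Eq. (2.35) over sinusoidal cycles and comparison of the resulting loop area ∮H dX with the closed-form loss. A Python sketch (the parameter values are arbitrary and the explicit Euler scheme is our own choice):

```python
import numpy as np

# Linear viscous system of Eq. (2.35): gamma*dX/dt + X/chi0 = H(t).
gamma, chi0, H0, omega = 0.7, 1.3, 1.0, 2.0
tau = gamma * chi0

def loss_formula():
    """Loss per cycle from Eq. (2.38)."""
    return np.pi * omega * tau * chi0 / (1.0 + (omega * tau) ** 2) * H0**2

def loss_numeric(n_cycles=20, steps_per_cycle=5000):
    """Loop area of the steady-state cycle, from explicit Euler integration."""
    dt = 2.0 * np.pi / omega / steps_per_cycle
    n = n_cycles * steps_per_cycle
    t = np.arange(n + 1) * dt
    H = H0 * np.sin(omega * t)
    X = np.zeros(n + 1)
    for i in range(n):
        X[i + 1] = X[i] + dt * (H[i] - X[i] / chi0) / gamma
    s = slice(n - steps_per_cycle, n + 1)   # keep last cycle (transient decayed)
    Hc, Xc = H[s], X[s]
    # W = loop area = closed-path integral of H dX (trapezoidal rule)
    return float(np.sum(0.5 * (Hc[1:] + Hc[:-1]) * np.diff(Xc)))
```

The two numbers agree to within the discretization error of the Euler scheme.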
dx/du = h − h_F(x)
dh/du = r(u)    (2.39)
where r(u) represents the input field rate of change. Expressing the problem in the form of Eq. (2.39) is useful, because it shows that the equations governing the joint behavior of h and x become autonomous (i.e., with no explicit time dependence) whenever the field changes at some fixed rate r, independent of time. Dealing with an autonomous description introduces important simplifications. In particular, there is just one solution of the problem (one trajectory) passing through any given point (x₀, h₀) of the x-h plane. We can then describe the behavior of the system by the so-called phase portrait of the equation, in which we represent the flow of trajectories passing through the various points of the plane of coordinates (x, h). Let us consider what this representation gives for a bistable system, where h_F(x) = ∂f/∂x = 4x³ − 4ax (see Eq. (2.23)). In this case, Eq. (2.39) takes the form
dx/du = h − 4x³ + 4ax
dh/du = r    (2.40)
Figure 2.11 shows two phase portraits associated with different values of r. Notice that the trajectories of each portrait never intersect. There is just one trajectory passing through any given point of the plane. On the other hand, all trajectories intersect the h_F(x) equilibrium line with vertical slope, because at that point, according to Eq. (2.39), h = h_F(x), and thus dx/du = 0. Each portrait represents the behavior of the system when the external field increases at the constant rate r. The portrait under decreasing field, that is, under opposite r, can easily be deduced by symmetry considerations. In fact, by taking into account that h_F(−x) = −h_F(x), we see that, if
FIGURE 2.11. Phase portraits of Eq. (2.40), for a = 0.5. Top: r = 0.25. Bottom: r = 2.5. The broken line is an example of a symmetric trajectory followed under negative r. The thick solid line represents the free energy gradient h_F(x) = 4x³ − 4ax.
[x(u), h(u)] is a solution of Eq. (2.40), then [−x(u), −h(u)] is a solution of the same equation when r → −r. Some of the trajectories under negative r are shown by the dashed lines of Fig. 2.11. The joint portraits under opposite r values are most useful in providing a straightforward method to construct dynamic hysteresis loops associated with triangular input waveforms. Such a waveform is in fact a sequence of constant-slope ramps. The corresponding loop is thus obtained by identifying the two trajectories, associated with opposite r values, that intersect at the desired peak input values. Examples of loops obtained by this construction for different values of r are shown in Fig. 2.12. Figure 2.12 shows that the loop shape depends substantially on the input rate of change. In addition, even if the peak value and the frequency of oscillation of the input are the same, the loop shape still exhibits a residual dependence on the particular input waveform applied. The loop under triangular excitation is different from the one under sinusoidal excitation, as shown by the dashed line of Fig. 2.12.
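The same construction can be carried out numerically: integrate Eq. (2.40) over one triangular field cycle and measure the loop area. The Python sketch below (illustrative; reduced units with unit friction constant are assumed, as in Eq. (2.40)) starts from equilibrium on the single branch existing at h = −h₀:

```python
import numpy as np

def dynamic_loop(a=0.5, r=0.25, h0=2.5, du=1e-3):
    """One triangular cycle -h0 -> +h0 -> -h0 of Eq. (2.40); returns the
    (h, x) trajectory and the loop area (energy loss per cycle)."""
    n = int(round(2.0 * h0 / (r * du)))          # Euler steps per half-cycle
    h_up = -h0 + r * du * np.arange(n + 1)
    # initial condition: the only real root of 4x^3 - 4ax - h = 0 at h = -h0
    x = float(min(np.roots([4.0, 0.0, -4.0 * a, -h_up[0]]).real))
    hs, xs = [], []
    for ramp in (h_up, h_up[::-1]):              # increasing, then decreasing h
        for h in ramp:
            hs.append(h)
            xs.append(x)
            x += du * (h - 4.0 * x**3 + 4.0 * a * x)   # dx/du = h - h_F(x)
    hs, xs = np.array(hs), np.array(xs)
    area = float(np.sum(0.5 * (hs[1:] + hs[:-1]) * np.diff(xs)))  # loop integral of h dx
    return hs, xs, area
```

Running it for r = 0.25 and r = 2.5 reproduces the qualitative behavior of Fig. 2.12: the loop area grows with the field rate.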
FIGURE 2.12. Dynamic hysteresis loops predicted by Eq. (2.39) and Eq. (2.40). Continuous lines: loops under triangular input waveform of peak value h₀ = 2.5, predicted by Eq. (2.40) for a = 0.5 and r = ±0.25, ±2.5 (see Fig. 2.11). The rate-independent loop (r → 0) is also shown for comparison. Dashed line: loop under sinusoidal input waveform h(u) = h₀ sin(ωu), h₀ = 2.5, predicted by Eq. (2.39). The triangular and sinusoidal input waveforms with r = 2.5 have the same amplitude and the same period.
Rate-dependent effects disappear when |r| → 0. In this limit, rate-independent hysteresis naturally emerges from the general rate-dependent description. To show how this occurs, let us focus attention on the geometrical shape of the trajectories generated by Eq. (2.39) or Eq. (2.40), with no consideration of the time at which specific trajectory points are reached. From Eq. (2.39) we see that the trajectory h(x) obeys the differential equation
[h − h_F(x)] dh/dx = r    (2.41)
In the limit |r| → 0, the product [h − h_F(x)] dh/dx must be close to zero. In addition, if for example r > 0, dh/dx and h − h_F(x) must both be positive. We see that, whenever dh_F/dx > 0, a possible solution is h ≈ h_F(x) and dh/dx ≈ dh_F/dx > 0. In those regions where dh_F/dx < 0, however, this solution is not possible, because one would get the product of terms with opposite signs. In this case, one can simply set dh/dx ≈ 0, whatever the value of h − h_F(x). This is where a Barkhausen jump takes place. Analogous considerations apply when r < 0. Thus, one obtains a loop made up of the stable branches of h_F(x), joined by horizontal segments where Barkhausen jumps take place. The most striking property of the dynamic loops of Fig. 2.12 is certainly the increase of the loop area with |r|, which points to increasing energy dissipation. The amount of energy dissipated at each instant of time can be calculated from Eq. (2.39). If the system evolves under isothermal conditions, the energy dissipated in a small transformation where the state variable varies by Δx is given by (h − h_F) Δx. In fact, h_F(x) represents the energy gradient ∂f/∂x, so that (h − h_F) Δx = h Δx − Δf. As discussed in Chapter 4, under isothermal conditions, the difference between the work h Δx performed by external sources on the system and the amount of free energy Δf stored in the system is precisely the amount of energy dissipated as heat during the transformation. The dissipated energy [h − h_F(x)] Δx is graphically represented by the shaded area shown in Fig. 2.13. The energy loss is given by the area between the h(x) trajectory and the h_F(x) equilibrium line. This conclusion applies to any generic transformation, and does not require closed-loop trajectories. On the other hand, it is consistent with the fact that the total energy loss in a closed loop is given by the loop area.
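With unit friction constant, Eq. (2.39) gives h − h_F(x) = dx/du, so each elementary contribution (h − h_F) Δx = (dx/du)² Δu is non-negative: the dissipation can never be negative along a trajectory, whatever the field history. A minimal Python sketch accumulating the loss (illustrative; the names are ours):

```python
import numpy as np

def h_F(x, a=0.5):
    """Free energy gradient df/dx of the bistable system (Eq. (2.23))."""
    return 4.0 * x**3 - 4.0 * a * x

def integrate_with_loss(h_history, x0, du=1e-3, a=0.5):
    """Euler integration of dx/du = h - h_F(x), accumulating the heat
    (h - h_F) dx released at each step (shaded area of Fig. 2.13)."""
    x, loss = x0, 0.0
    for h in h_history:
        drive = h - h_F(x, a)
        dx = du * drive        # unit friction constant assumed
        loss += drive * dx     # = (dx/du)^2 du >= 0: never negative
        x += dx
    return x, loss

u = np.arange(0.0, 10.0, 1e-3)
x_final, q = integrate_with_loss(0.5 * np.sin(u), x0=-0.7)
```

No closed loop is required: the accumulated loss is well defined for any generic transformation.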
2.4 THERMAL RELAXATION
The role of temperature has been anticipated many times, but not yet discussed in this chapter. The results obtained in the previous sections
FIGURE 2.13. The shaded region represents the amount of energy dissipated in the x increment Δx.

were all based on the working hypothesis that the system will indefinitely remain in whatever state it initially occupied, unless the external field is modified. In the terminology of catastrophe theory, this means that the delay convention is adopted. Thermal agitation takes the system away from this condition. When thermal effects are dominant, a different, in a sense opposite, limit condition becomes the natural one, in which the system relaxes so fast toward thermodynamic equilibrium that when we probe it under an arbitrary field, we always find it in the state of minimum free energy. In this limit we have no hysteresis at all, and the Maxwell convention applies. An example is shown by the dashed line of Fig. 2.7. The real behavior of a given system will always be somewhere in between these two limits, as a result of the competition between the time scale over which the system approaches equilibrium and the time scale over which the field varies significantly and takes the system away from equilibrium. This gives rise to rate-dependent effects of physical origin completely different from the one discussed in the previous section. In particular, the state of the system will change in time even if the field does not change at all. In this section, we discuss thermal relaxation under constant field, to illustrate some general concepts that can be applied to more complicated situations as well.⁵ Two main points will be addressed. First, we shall introduce a proper frame to deal with the fact that, in the presence of thermal agitation, the state variable X(t) becomes a stochastic process, that is, a function exhibiting irregular fluctuations in time. This calls for a

⁵ Thermal rate-dependence under varying field will be considered in Section 14.2.
more refined definition of the concepts of state and state variable. Second, we shall discuss a valuable approximation, summarized by the term thermal activation, in which, of all the details of the free energy landscape in which the system is evolving, only the energy extrema, that is, minima, maxima, or saddle points, control the relaxation. As in the rest of this chapter, we discuss these aspects for the bistable system introduced in Section 2.2. Having a one-dimensional free energy with only one or two minima is an enormous simplification with respect to the general problem of relaxation in a multidimensional, multi-valley energy surface. However, we shall see in Chapters 10 and 14 that the treatment of more complex systems can often be reduced to the superposition of many bistable contributions of the type discussed in this section.
<X(t)> = ∫ X P(X, t) dX    (2.42)
<X(t)> is a statistical-ensemble average. We can imagine having a large number of identical systems under identical conditions. Each of them will evolve according to a different random function Xi(t), where i identifies a given system in the ensemble. If, at a certain time, we pick the values of Xi(t) over the whole ensemble and take the average, we obtain <X(t)>. Note that there is no reason why the individual functions Xi(t) should all be close to <X(t)>. For example, if Xi(t) is equal to 1 for half of the ensemble and is equal to −1 for the other half, we get <X(t)> = 0, that is, neither 1 nor −1. As we shall see, this is just what may happen in bistable systems. In such cases, the naive guess that the average <X(t)> should reproduce, apart from small fluctuations, the time behavior of the Xi(t) of each system in the ensemble is not true. In this frame, the behavior of the system is determined by the properties of the probability distribution P(X, t). A first conclusion of general validity is drawn by noting that, whatever the initial distribution P(X, t = 0) might be, at times large enough the system should reach thermodynamic equilibrium, characterized by some equilibrium distribution P_eq(X). We know from Section 2.1.4 (Eq. (2.20)) that the probability of finding a given value X in thermodynamic equilibrium is controlled by the exponential of the energy G_L(X; H, T). Therefore, we conclude that
P_eq(X) ≡ P(X, t → ∞) ∝ exp[−G_L(X; H, T)/k_BT]    (2.43)
Let us apply this result to the bistable system of Fig. 2.4, under zero field (Fig. 2.14). If the variation of G_L versus X is large with respect to k_BT, then, due to the exponential dependence on energy in Eq. (2.43), P_eq(X) exhibits two identical sharp peaks around the two energy minima. It is almost inevitable to reason in terms of two discrete states, (+) and (−), and to describe the situation by saying that the system is with probability 1/2 in the (+) state and with probability 1/2 in the (−) state. The average value of X at equilibrium is thus <X>_eq = 0. How this result is reflected in the time behavior of the individual random functions Xi(t) is shown in Fig. 2.15. The individual system stays for some time around one of the two minima and then, aided by some favorable thermal fluctuation, has a chance to overcome the energy barrier at X = 0 and to pass into the other energy well. The system spends on average half of its time in one minimum and half in the other, and thus, over time intervals long enough with respect to the typical transition times from one minimum to the other, it appears to occupy both minima with probability 1/2. The random nature of the Xi(t) function is recognized in the fine structure of its fluctuations around the minimum energy position and, most of all, in the random location of the times at which the system jumps from one minimum to the other.
FIGURE 2.14. Bistable system under zero field: energy G_L and probability distribution around the (+) and (−) minima, for the initial and the final state.

FIGURE 2.15. Bistable system: Typical time behavior of individual state variable Xi(t).
Figure 2.15 shows well how rate-independent hysteresis emerges in this context. As we shall see shortly, the average time separation between subsequent jumps from one energy minimum to the other depends exponentially on the ratio between the energy barrier separating the two minima and k_BT. If the temperature is low or the barrier large, this time can easily become huge. On the time scale of a typical experimental observation, we do not wait long enough to detect at least one of these transitions, which means the system simply remains in whatever state it was initially occupying. In the probability representation, shown in Fig. 2.14, this means that if we initially have a peak around one of the two minima, this peak will simply remain unchanged during the time of our observation.
γ dX/dt = −∂G_L/∂X + H_T(t)    (2.44)
where H_T(t) is an additional random force describing the coupling to the thermal bath. In most cases, the microscopic interactions described by H_T(t) take place on a time scale much shorter than the time scale over which <X(t)> exhibits significant variations, and the values of the random force at different times appear to be independent of each other down to negligible time separations. A rapidly varying function of this sort is described by a stochastic process known as Gaussian white noise. Equation (2.44) is a stochastic differential equation, that is, a differential equation containing a random term, and, as such, it will produce random solutions X(t), just as we expect on physical grounds. Mathematically, this type of equation is known as a Langevin equation, and its study is part of the

⁶ Some information on the various concepts and methods of the theory of stochastic processes mentioned in this and in the following section can be found in Appendix E.
theory of Markovian stochastic processes. Equation (2.44) is based on the assumption that a clear-cut separation of time scales exists in the problem, so that the total force acting on the system can be expressed as the sum of the deterministic macroscopic force described by the free energy gradient and of a short-time-scale random force of thermal origin. The ensemble of the random processes Xi(t) shown in Fig. 2.15 is the ensemble of the solutions of Eq. (2.44). The statistical properties of this ensemble are described by the distribution P(X, t) previously introduced, and it is natural to investigate the relationship between the properties of P(X, t) and the properties of Eq. (2.44). The theory of stochastic processes shows that, under appropriate conditions, given the stochastic process described by Eq. (2.44), the corresponding probability distribution P(X, t) will obey the partial differential equation
γ ∂P/∂t = ∂/∂X [(∂G_L/∂X) P] + k_BT ∂²P/∂X²    (2.45)
This equation is known as the Smoluchowski equation, and is a particular case of a class of equations known as Fokker-Planck equations. The time-scale separation mentioned before is here reflected in the presence of two terms on the right-hand side of Eq. (2.45). The former produces a drift of the distribution in the direction dictated by the free energy gradient, whereas the latter is a diffusive term, of strength proportional to the temperature, which describes thermal agitation. By solving Eq. (2.45) under appropriate initial and boundary conditions, one has in principle a general description of the relaxation process, and one can calculate the time evolution of the average state variable <X(t)> through Eq. (2.42). For example, the time-independent, stationary solution of Eq. (2.45) is precisely the equilibrium distribution given by Eq. (2.43), so that at the end of the relaxation the system will actually reach thermodynamic equilibrium. However, the point is that, except for a few particular cases, one does not know how to solve Eq. (2.45) for a generic form of G_L. Some approximation must be introduced.
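A practical alternative is to sample individual trajectories of the Langevin equation (2.44) numerically. The Python sketch below (illustrative, not from the original text) uses the Euler-Maruyama scheme; the white-noise strength 2γk_BT is our assumption, chosen so that the stationary distribution of Eq. (2.45) reproduces the Boltzmann form of Eq. (2.43):

```python
import numpy as np

def langevin_bistable(kT=0.15, a=0.5, gamma=1.0, dt=1e-3, n_steps=200_000, seed=0):
    """Euler-Maruyama integration of Eq. (2.44) for g_L(x) = x^4 - 2*a*x^2
    at zero field. Noise variance 2*gamma*kT*dt per step (our
    fluctuation-dissipation assumption) makes the stationary law Boltzmann."""
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps)
    x[0] = np.sqrt(a)                        # start in the (+) well
    sigma = np.sqrt(2.0 * kT * dt / gamma)
    for i in range(n_steps - 1):
        force = -(4.0 * x[i]**3 - 4.0 * a * x[i])   # -dg_L/dx
        x[i + 1] = x[i] + dt * force / gamma + sigma * rng.standard_normal()
    return x

x = langevin_bistable()
```

With barrier a² = 0.25 and k_BT = 0.15, the barrier is only a few k_BT high, so the trajectory jumps back and forth between the two wells and reproduces the telegraph-like signal of Fig. 2.15.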
convenient neighborhoods of the locations X+ and X− of the two minima. One can thus attempt a discrete description of the problem where, instead of a probability distribution that is a continuous function of X, one simply has two probabilities, P+(t) and P−(t), to study as functions of time, with the constraint P+ + P− = 1. In this approximation, Eq. (2.42) reduces to
<X(t)> = X+P+ + X−P− = X− + (X+ − X−)P+    (2.46)
There are methods discussed in the literature showing that this discrete description can indeed be derived from Eq. (2.45) in the limit of energy barriers large with respect to k_BT. We shall illustrate the form taken by this approximation for the bistable system pictured in Fig. 2.16, in which the two energy minima are at unequal levels. The lack of symmetry may be the result of the internal structure of the system, or may simply be due to the presence of a nonzero external field. The main aspects of the approximate description can be summarized as follows.
Master equation. P+(t) obeys the master equation

dP+/dt = w−P− − w+P+    (2.47)
where w+ and w− represent appropriate transition rates, defined next. Equation (2.47) can be interpreted as follows. Suppose we have a statistical ensemble of identical systems. Then P+ and P− will be proportional to the numbers of systems occupying the two minima. The systems in the (−) or (+) state have a probability per unit time w− or w+ of jumping to the (+) or (−) state. In the time interval dt, numbers of systems proportional to P−w−dt and to P+w+dt will make opposite transitions from one minimum to the other and will give opposite contributions to the variation in the number of systems in the (+) state. The two flows proceed independently of each other, and Eq. (2.47) keeps track of the total balance.
FIGURE 2.16. Bistable system: Energy barriers ΔG+ and ΔG− and relaxation channels involved in thermal activation.
Transition rates. The transition rates w± depend on the free energy and on temperature. Remarkably, many details of the energy profile turn out to be eventually unimportant. The dominant role is played by the energy difference between each of the minima and the energy maximum that separates them (see Fig. 2.16). One finds that w± is expressed by the Arrhenius formula

w± = (1/τ₀) exp(−ΔG±/k_BT)    (2.48)
where ΔG± > 0 are the two energy barriers indicated in Fig. 2.16. The characteristic time constant τ₀ summarizes other details of the free energy profile and of the temperature dependence. However, to a first approximation these aspects have a marginal role in comparison with the strong exponential dependence on ΔG±/k_BT. On the other hand, accurate predictions of τ₀ are difficult, and it is usually accepted that this constant is poorly known. Equation (2.48) can be qualitatively interpreted as follows. Let us consider for instance the (−) state of Fig. 2.16. According to Boltzmann statistics, the relative probability of finding a given system of the statistical ensemble at the bottom and at the top of the energy barrier is given by exp(−ΔG−/k_BT). If we imagine that the system has the possibility to update its state in a time interval of the order of τ₀, then Eq. (2.48) will represent the number of times per unit time in which the system will be found on top of the barrier. The state on top of the barrier is dynamically unstable, and when the system is there it will jump, with a probability of the order of 1 (say 1/2), toward the other energy minimum accessible from the top of the barrier. This gives rise to a transition rate from the (−) to the (+) state of the order of Eq. (2.48). Similar considerations hold for the other energy minimum. Equation (2.47) summarizes the effect of the two opposite flows.
By taking into account that P+ + P− = 1, Eq. (2.47) can be written as

dP+/dt = −(P+ − P+,eq)/τ    (2.49)

where

1/τ = w+ + w− = (2/τ₀) exp(−ΔG_c/k_BT) cosh(ΔG_u/k_BT)    (2.50)

P+,eq = w−/(w+ + w−) = 1/[1 + exp(2ΔG_u/k_BT)]    (2.51)

and

ΔG_c = (ΔG− + ΔG+)/2 ,  ΔG_u = (ΔG− − ΔG+)/2    (2.52)
The solution of Eq. (2.49), given the initial probability P+(t = 0), is

P+(t) = P+,eq + [P+(0) − P+,eq] exp(−t/τ)    (2.53)
Analogous expressions hold for the average state variable <X(t)> defined by Eq. (2.46). According to Eq. (2.53), the system relaxes exponentially, with a time constant τ given by Eq. (2.50). In the final equilibrium state described by Eq. (2.51), the two energy minima are both significantly populated only if |ΔG_u| ≲ k_BT. Otherwise, only the absolute minimum is occupied. When this is the case, only one of the two flows of Eq. (2.47) plays a role, the one describing jumps from the higher to the lower energy state. Jumps in the opposite direction have a negligible probability of occurring, and the relaxation simply reduces to a continuous transfer of probability from one minimum to the other, until the whole probability is concentrated in one minimum only. The situation encountered in magnetic materials is inevitably much more complicated than what we have discussed here. Yet, as we shall see in Chapters 10, 13, and 14, a description in terms of collections of bistable contributions may often be acceptable. Those cases can be treated on the basis of the present results, by introducing appropriate distributions of the various parameters involved in the description.
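The discrete two-state description of Eqs. (2.47)-(2.53) can be condensed into a few lines of code. The Python sketch below (illustrative; dimensionless units with τ₀ = 1 and energies measured in units of k_BT are assumed) builds the Arrhenius rates of Eq. (2.48) and returns the relaxation law of Eq. (2.53):

```python
import numpy as np

def two_state_relaxation(dG_plus, dG_minus, kT, tau0=1.0):
    """Arrhenius rates (Eq. (2.48)), relaxation time and equilibrium
    occupation (Eqs. (2.50)-(2.51)), and the solution P+(t) of Eq. (2.53).
    dG_plus (dG_minus) is the barrier seen from the (+) ((-)) minimum."""
    w_plus = np.exp(-dG_plus / kT) / tau0    # rate of leaving (+): (+) -> (-)
    w_minus = np.exp(-dG_minus / kT) / tau0  # rate of leaving (-): (-) -> (+)
    tau = 1.0 / (w_plus + w_minus)           # Eq. (2.50)
    P_plus_eq = w_minus / (w_plus + w_minus) # Eq. (2.51)
    def P_plus(t, P0):
        return P_plus_eq + (P0 - P_plus_eq) * np.exp(-t / tau)  # Eq. (2.53)
    return tau, P_plus_eq, P_plus

# deeper (+) well (larger escape barrier) -> (+) dominates at equilibrium
tau, P_eq, P = two_state_relaxation(dG_plus=10.0, dG_minus=8.0, kT=1.0)
```

With ΔG+ − ΔG− = 2 k_BT, the equilibrium occupation is P+,eq = 1/(1 + e^{−2}) ≈ 0.88, consistent with the remark that both minima are significantly populated only when |ΔG_u| ≲ k_BT.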
The mathematical description of hysteresis, mainly of a rate-independent nature, is addressed in [B.28-B.31, B.111, B.115]. In [B.29], a clear presentation of the concepts of branching and local/nonlocal memory is given. The subject of metastability and approach to equilibrium is of fundamental physical importance and of challenging difficulty, and has been only superficially addressed here [see B.32, B.38, B.41, B.45]. The conceptual difficulties encountered when attempting a rigorous definition of metastability in statistical mechanics are discussed in Ref. 2.1. Our summary of catastrophe theory in Section 2.2.2 has been largely inspired by [B.22]. The general connection between magnetization processes in magnetic systems and elementary catastrophes is analyzed in Ref. 2.2. System equilibrium and stability, and the properties of nonautonomous, nonlinear dynamical systems, considered in Section 2.3, are discussed in [B.23]. The concept of Landau free energy and its connection to partial statistical averages is neatly discussed in [B.37, Chapter 5]. Its physical significance and its limits of applicability are considered in Refs. 2.3, 2.4, and 2.5. For the use of the theory of stochastic processes to describe thermal relaxation and thermal activation, see Appendix E and the references therein, as well as Refs. 2.6 and 2.7. The relevance of stochastic processes to nonequilibrium statistical mechanics is discussed in detail in [B.44, Chapters 2 and 3]. An important example of the application of the methods of Section 2.4 to thermal relaxation in magnetic systems is discussed in Refs. 2.8, 2.9, and 2.5. In Ref. 2.8, the discrete approximation of Section 2.4.3 is used, whereas in Refs. 2.9 and 2.5 a Langevin equation based on the more complete approach of Section 2.4.2 is considered. An interesting topic not discussed in this book is linear-response theory and the fluctuation-dissipation theorem, whereby one connects fluctuations in thermodynamic equilibrium to dissipation. For this, see [B.38, B.42, B.44, B.46].
2.1 O. Penrose and J. L. Lebowitz, "Towards a Rigorous Molecular Theory of Metastability," in [B.49], 323-375.
2.2 M. A. Pinto, "Catastrophe Model for Micromagnetics," Phys. Rev. Lett. 59 (1987), 2798-2801; "Morphology of Micromagnetics," Phys. Rev. B 38 (1988), 6824-6831.
2.3 K. Binder, "Theory of First-Order Phase Transitions," Rep. Progr. Phys. 50 (1987), 783-850.
2.4 J. D. Gunton, M. San Miguel, and P. S. Sahni, "The Dynamics of First-Order Phase Transitions," in Phase Transitions and Critical Phenomena, Vol. 8 (London: Academic Press, 1983), 267-466.
2.5 W. F. Brown, Jr., "Thermal Fluctuations of Fine Ferromagnetic Particles," IEEE Trans. Magn. 15 (1979), 1196-1208.
2.6 H. Haken, "Cooperative Phenomena in Systems Far from Thermal Equilibrium and in Nonphysical Systems," Rev. Mod. Phys. 47 (1975), 67-121.
2.7 P. Hänggi, P. Talkner, and M. Borkovec, "Reaction-Rate Theory: Fifty Years after Kramers," Rev. Mod. Phys. 62 (1990), 251-341.
2.8 L. Néel, "Théorie du traînage magnétique des ferromagnétiques en grains fins avec application aux terres cuites," Ann. Géophys. 5 (1949), 99-136 (in French).
2.9 W. F. Brown, Jr., "Thermal Fluctuations of a Single-Domain Particle," Phys. Rev. 130 (1963), 1677-1686.