

CHAPTER 2

Types of Hysteresis

Hysteresis phenomena are widespread in nature. Ferromagnetic hysteresis is just one among many examples. The mechanisms underlying hysteresis fall within the realm of nonequilibrium thermodynamics, a topic with many conceptual difficulties not yet fully resolved. As a consequence, the word hysteresis may be used with different meanings in different contexts, depending on the physical picture one has in mind and the approximations that one is introducing in the description. Thus it seems appropriate to clarify the physical picture and the approximations that will accompany the description of hysteresis given in this book. We shall concentrate on three approximate views of the behavior of metastable systems: rate-independent hysteresis, viscous-type rate-dependent hysteresis, and thermal relaxation. In particular, we shall discuss how these approximations apply to bistable systems, which represent the simplest case displaying the phenomena we are interested in.

Because we are going to discuss general aspects of hysteresis, not limited to ferromagnetic materials, we shall employ a general terminology. We consider a system, on which we act through some external action, named input or control variable or external field, and indicated by H. In the case of a magnetic material, H would represent the applied magnetic field. The system responds to this action and changes its internal state. The change is monitored by looking at the behavior of some quantity X, named output or state variable. In the magnetic case, this could be, for example, the total magnetic moment of the body. H is the independent variable and X is the dependent one, in the sense that we assume the time dependence H(t) to be known in advance and that we want to describe and predict the corresponding time response X(t). This is not always the case. In the treatment of magnetic losses, for example, one usually considers dynamic hysteresis loops where the output, not the input, is specified (e.g., losses under sinusoidal magnetization). This situation necessarily requires some feedback control, which adjusts the input in order to produce the required output. Such feedback complications will not be considered in this chapter.

The nature and the mathematical description of hysteresis can be qualitatively different depending on the tensor character of the input and output variables. We shall limit our considerations to scalar hysteresis, where both H and X are just numbers. In reality, the natural input and output variables for ferromagnetic materials are the magnetic field and the magnetization, which are vector quantities. Therefore, vector hysteresis is in principle expected to be the correct framework, and scalar hysteresis should emerge from the general picture under appropriate approximations. Yet, scalar hysteresis descriptions are often introduced from the beginning on the basis of some phenomenological assumption, mainly because they permit a simpler and more intuitive treatment of otherwise complicated phenomena. We will discuss the value of such approaches and their limitations on several occasions. Some aspects of vector hysteresis are considered in Chapter 8.

Although not directly addressed in this book, one aspect of hysteresis is attracting increasing interest: its universal and ubiquitous nature. How general can the mathematical description of hysteresis be? Is there some universal paradigm that can be applied to all situations where hysteresis is observed? The ideas presented in this chapter and the more detailed discussion of Preisach systems given in Part V might also be useful as a contribution toward a better comprehension of these general questions.

2.1 WHAT WE MEAN BY HYSTERESIS

In a loose sense, we might say that hysteresis appears when the output X is not a single-valued function of the input H. Often, when using the word hysteresis, one has in mind a hysteresis loop of the type shown in Chapter 1, where there are two possible output values for any input value. We shall see that hysteresis phenomena have a much more complex nature, which cannot be reduced to the analysis of hysteresis loops only. Yet, it is a fact that hysteresis loops are the basic feature characterizing magnetic materials. Therefore, we begin by discussing certain situations where hysteresis loops naturally arise as a consequence of the existence of a phase lag between input and output. In linear systems, this behavior is described by a generalized susceptibility fully characterizing the internal structure of the system. This kind of situation gives rise to rate-dependent hysteresis, depending on the rate of change of the input, and disappearing under quasi-static excitation. Subsequently, we introduce the concept of memory and we discuss situations that give rise to rate-independent hysteresis as a consequence of memory mechanisms persistent in time. Finally, we show how the various types of rate-dependent and rate-independent hysteresis can originate in a system whose free energy has a complicated structure, with many local minima corresponding to metastable states. As discussed in Chapter 1, this is the typical situation expected in magnetic systems.

In the literature on hysteresis, there is a tendency to use the word hysteresis to describe rate-independent hysteresis only. Rate-dependent effects are considered as additional phenomena that complicate the picture and should be ruled out as much as possible. For the physical systems considered in this book, this separation appears somewhat artificial and not very useful. Rate-independent hysteresis is nothing more than an approximation to processes that are intrinsically rate-dependent, and we prefer to summarize under the word hysteresis the whole set of intimately connected phenomena arising from the simultaneous existence of metastable states, dissipation mechanisms with characteristic time scales, and thermal relaxation. On the other hand, it is just in the mathematical description of rate-independent hysteresis that one encounters the most interesting and challenging difficulties, and it is in this direction that important mathematical progress is being achieved.

2.1.1 Hysteresis: Lag

The correspondence between hysteresis and lag is close to the etymological meaning of the word. By this, we mean that there is hysteresis when the output is lagging behind the input. In the case where a sinusoidal input gives rise to a sinusoidal output, we will have

H(t) = H_0 cos(ωt),  X(t) = X_0 cos(ωt − φ)    (2.1)

If we plot X as a function of H under varying time t, we obtain the elliptical loops shown in Fig. 2.1. In this context, where the input is defined by the amplitude H_0 and the pulsation ω, the problem is to know the functional dependence of the amplitude, X_0(H_0, ω), and of the phase lag, φ(H_0, ω), on H_0 and ω. As an example, Fig. 2.1 shows the set of loops obtained at fixed ω and variable H_0 when X_0 is proportional to H_0 and φ is independent of H_0. There is no a priori reason to expect that the response of the system to a sinusoidal excitation should still be sinusoidal. This is determined by the internal structure of the system, and in general one will find that



FIGURE 2.1. Elliptical loops produced by input-output phase lag (Eq. (2.1)).

a sinusoidal excitation gives rise to a distorted output. An undistorted response like the one of Eq. (2.1) is typical of linear systems, where the superposition principle holds. This principle permits one to construct the response of a time-invariant linear system to a generic input as the superposition of the responses to input impulses. Let ψ(t − t_0) be the system response at time t to an input impulse δ(t − t_0) applied at the previous time t_0. Often ψ(t) has the form

ψ(t) = χ_i δ(t) + ψ_d(t) θ(t)    (2.2)


The two terms represent the instantaneous and the delayed system response. The presence of the Heaviside step function θ(t) ensures that causality holds and that the response follows the excitation but never anticipates it. We shall assume that ψ_d(t) is a regular function of t, such that ψ_d(t) → 0 for t → ∞. This implies that the state occupied by the system under zero input, after all transients have died away, is always X = 0. According to the superposition principle, the system response to the generic input H(t) will be

X(t) = χ_i H(t) + ∫_{−∞}^{t} ψ_d(t − t′) H(t′) dt′    (2.3)


The impulse response function ψ(t) fully characterizes the internal structure of the system. Equation (2.3) acquires a particularly interesting form when we pass to Fourier transforms

z_ω = ∫_{−∞}^{+∞} z(t) exp(−iωt) dt    (2.4)



where z stands for H or X. In Fourier space, Eq. (2.3) becomes

X_ω = χ(ω) H_ω    (2.5)



where

χ(ω) = χ_i + ∫_0^{∞} ψ_d(t) exp(−iωt) dt    (2.6)


is the Fourier transform of ψ(t). χ(ω) is named the generalized susceptibility. It is a complex quantity, with a real and an imaginary part:

χ(ω) = χ′(ω) − i χ″(ω)    (2.7)


From Eq. (2.6) and the fact that ψ_d(t) is a real function, we deduce that χ*(ω) = χ(−ω), which means that

χ′(−ω) = χ′(ω),  χ″(−ω) = −χ″(ω)    (2.8)

χ′ is an even function of ω, whereas χ″ is odd and changes sign with ω. We can also express χ in terms of its amplitude and phase, as

χ(ω) = |χ| exp(−iφ)    (2.9)

where

cos φ = χ′/|χ|,  sin φ = χ″/|χ|    (2.10)

Given the symmetry properties expressed by Eq. (2.8), we have that φ(−ω) = −φ(ω).

These results make clear the behavior of a linear system under sinusoidal excitation. Equation (2.5) shows that a linear system responds to a sinusoidal input just as described by Eq. (2.1) and Fig. 2.1, with X_0(H_0, ω) = |χ(ω)| H_0, proportional to H_0, and φ(H_0, ω) equal to the loss angle of Eq. (2.10), independent of H_0. As to the frequency dependence of φ(ω), we mentioned the fact that χ″(ω) is an odd function of ω, which usually changes sign by passing through the origin. This means that χ″(0) = 0, i.e., after Eq. (2.10), φ(0) = 0. Thus in the quasi-static limit where the input rate of change is arbitrarily small, the phase lag vanishes and we no longer have any hysteresis loop. The quasi-static system response X(H) reduces to the single-valued linear relationship

X(H) = χ′(0) H    (2.11)

The behavior described by Eq. (2.3) and Eq. (2.5) is an example of rate-dependent hysteresis, disappearing when the excitation is sufficiently slow.
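To make the rate dependence concrete, here is a minimal numerical sketch, not taken from the text: a linear system with an assumed exponential kernel ψ_d(t) = (1/τ)exp(−t/τ), for which χ(ω) = 1/(1 + iωτ) and χ_i = 0. Sweeping a sinusoidal input and measuring the loop area should reproduce W = πχ″(ω)H_0² (Eq. (2.15) below) and show the loop closing in the quasi-static limit.

```python
import numpy as np

# Assumed kernel: psi_d(t) = (1/tau)*exp(-t/tau), i.e. dX/dt = (H - X)/tau.
# We drive the system with H(t) = H0*cos(omega*t), discard transients, and
# measure the loop area over the last full cycle.

def loop_area(omega, tau=1.0, H0=1.0, cycles=20, steps_per_cycle=4000):
    dt = 2.0 * np.pi / omega / steps_per_cycle
    t = np.arange(cycles * steps_per_cycle + 1) * dt
    H = H0 * np.cos(omega * t)
    X = np.zeros_like(t)
    for k in range(1, len(t)):                 # forward Euler relaxation
        X[k] = X[k - 1] + dt * (H[k - 1] - X[k - 1]) / tau
    last = slice(-steps_per_cycle - 1, None)   # one closed cycle
    Hc, Xc = H[last], X[last]
    return 0.5 * np.sum((Hc[:-1] + Hc[1:]) * np.diff(Xc))  # loop int. of H dX

omega, tau, H0 = 1.0, 1.0, 1.0
chi_imag = omega * tau / (1.0 + (omega * tau) ** 2)   # chi''(omega)
W_num = loop_area(omega)
W_th = np.pi * chi_imag * H0 ** 2
```

Here W_num agrees with πχ″(ω)H_0² to within the Euler discretization error, and rerunning loop_area at much smaller ω gives a proportionally smaller area, in line with the vanishing of the phase lag as ω → 0.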



2.1.2 Hysteresis: Dissipation

The correspondence between hysteresis and dissipation naturally arises from the idea that hysteresis is an out-of-equilibrium phenomenon, where irreversible processes take place. Dissipation has to do with energy transformations from one form to another, and acquires a definite meaning only when we clarify how the system can exchange energy with the external world. We shall consider the case where the input H and the output X are conjugate work variables, that is, when H dX represents the work performed by external sources on the system in the infinitesimal transformation where the output varies by dX. The first law of thermodynamics then takes the form

dU = H dX + δQ    (2.12)


where U represents the system internal energy and δQ is the heat absorbed by the system in the transformation. Let us now consider the case where the system is kept at constant temperature and is subjected to a periodic input. If the ensuing system response is itself periodic, the variation of the internal energy in one cycle will be zero, and we obtain from Eq. (2.12)
∮_cycle H dX = − ∮_cycle δQ    (2.13)


According to Eq. (2.13), the area of the loop described by X(H) gives the work dissipated as heat in each cycle. This quantity is named energy loss per cycle. The term power loss is used when the dissipation per unit time rather than unit cycle is considered. Note that the loop area must always be positive, or, in other words, the X(H) loop must always be traversed counterclockwise. In the opposite case, one would have a cyclic transformation whose sole result would be the transformation into work of a certain heat amount absorbed from a single heat reservoir, which would contradict the second law of thermodynamics. This also shows that the cyclic transformation considered is irreversible, because the reversed transformation would not be admissible, for the same reason. In the case of the linear systems discussed in the previous section, the energy loss can be expressed in terms of the generalized susceptibility. Under sinusoidal excitation, we obtain from Eq. (2.1) and Eq. (2.13) that the loop area W is given by
W = ∮_cycle H dX = π X_0 H_0 sin φ    (2.14)


or, by taking into account Eq. (2.10),

W = π χ″(ω) H_0²    (2.15)




Equation (2.15) shows that the dissipation is controlled by the imaginary part of the generalized susceptibility. We conclude by remarking that expressing losses in terms of loss angles is not necessarily limited to linear systems. In the evaluation of power losses in magnetic materials, the loss under sinusoidal output is usually considered. When one estimates the loss through Eq. (2.13), one can decompose the input H into its Fourier components. Due to the orthogonality properties of sinusoidal functions, only the fundamental harmonic will contribute to the integral. Therefore, the loss can still be expressed in the form of Eq. (2.14), where φ represents the phase shift between the input fundamental harmonic and the output.
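The orthogonality argument can be checked numerically. In the sketch below the output is sinusoidal while the input carries a third harmonic; all amplitudes and phases are hypothetical illustrative choices:

```python
import numpy as np

# Sinusoidal output X, distorted input H with a third harmonic. Only the
# fundamental of H should contribute to the loop area, so the result must
# match W = pi*X0*H1*sin(phi1) as in Eq. (2.14).

t = np.linspace(0.0, 2.0 * np.pi, 200001)        # one closed period
X0, H1, phi1 = 1.0, 2.0, 0.3
X = X0 * np.cos(t)
H = H1 * np.cos(t + phi1) + 0.7 * np.cos(3.0 * t + 1.1)
W_num = 0.5 * np.sum((H[:-1] + H[1:]) * np.diff(X))   # loop integral of H dX
W_th = np.pi * X0 * H1 * np.sin(phi1)
```

The third harmonic changes the shape of the loop but not its area; dropping it from H leaves W_num unchanged to numerical accuracy.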

2.1.3 Hysteresis: Memory and branching

The statement that, in a system exhibiting hysteresis, future evolution depends on past history is probably the most common way to present hysteresis. According to this viewpoint, a system with hysteresis is a system with memory, in which the output X(t) at time t depends not only on the input H(t) at the same time, but also on the values H(t′) attained at previous times t′ < t. We already met an example of memory in Section 2.1.1, when we discussed linear systems. Equation (2.3) is a relationship endowed with memory, for X(t) is determined by the time integral of the input over all previous times. This type of memory, however, has a nonpersistent character, in the following sense. Suppose that, whatever the input history for times t < t_0, at time t_0 the input acquires the value H_0, which remains constant for all subsequent times. According to Eq. (2.3), the output at the generic time t > t_0 can be written as

X(t) = χ_i H_0 + ∫_{−∞}^{t_0} ψ_d(t − t′) H(t′) dt′ + H_0 ∫_0^{t−t_0} ψ_d(t′) dt′    (2.16)


As t increases beyond t_0, the first integral progressively becomes smaller, because ψ_d(t) → 0 for t → ∞, and the response tends to the limit

X = [χ_i + ∫_0^{∞} ψ_d(t′) dt′] H_0 = χ′(0) H_0    (2.17)


which coincides with the quasi-static response previously discussed (see Eq. (2.11)). When this limit is approached, the memory of the field history H(t) for t < t_0 is lost. In this sense, the memory effect has a nonpersistent character. Systems with a more complex internal structure, however, can have persistent memory of the past. By this we mean that the state of the system
under constant input keeps on depending on the past history of the input even after all transients have died out. In other words, given a certain input value H, the system occupies one of several possible states, and it is the input history that selects which of these states is actually occupied. The system remains in that state for indefinite time, if the external conditions are not modified, and this produces the persistent memory effect. According to this viewpoint, a system with persistent memory cannot be in thermodynamic equilibrium, because it is allowed to occupy one of several states, whereas the equilibrium state under given external conditions (H and temperature) is unique. If a system with persistent memory occupies a state that depends on its past history, then its future evolution, that is, the behavior of the curve X(H) starting from given values H_1 and X_1, will depend on that past history as well. In particular, having reached the point (H_1, X_1) under increasing or decreasing input will play a role. The expected situation is shown in Fig. 2.2. The point (H_1, X_1) has been reached under increasing H. A new evolution branch is generated if we stop increasing H and we start decreasing it. A point where the sign of the input variation is reversed is called a reversal point. From the thermodynamic viewpoint, the fact that, after the reversal point, the system does not trace back the same X(H) curve in reversed sense is again an indication that the system is not in thermodynamic equilibrium. In fact, if the system were in equilibrium for each value of the input, the X(H) curve would only depend on H and temperature, and thus would be exactly the same, independent of the sense in which it is traversed. Because a system with persistent memory has the possibility of occupying one of many different states, it is natural to consider how we can identify these states.
This will depend on the internal structure of the system, and in particular on the fact that the system is endowed with local or nonlocal memory.



FIGURE 2.2. Branching in rate-independent hysteresis.



Local memory. In a system with local memory, the values of H and X are sufficient to identify the state. The various states associated with given H are then necessarily characterized by different values of X. Any point in the H-X plane uniquely identifies one and only one state. Given the point (H_1, X_1) of Fig. 2.2, if the system has local memory its subsequent evolution out of this point will at most depend on the sense (increasing or decreasing) of the input variation. Therefore, there will be only two possible curves originating from that point, one under increasing input and the other under decreasing input. The role of past history is to select which point (H_1, X_1) is reached by the system. Once this point is selected, the future evolution of the system is fully determined by the initial condition (H_1, X_1) and past history no longer plays a role.

Nonlocal memory. In a system with nonlocal memory, H and X do not give a complete characterization of the system. Many states are associated with a given point of the H-X plane, and a whole set of X(H) curves starting from the single point (H_1, X_1) will exist, depending on past history. In a system with nonlocal memory, additional internal state variables are needed to complete the description of the system. In Part V, we shall discuss the nature of such internal variables in greater detail. This kind of behavior is very close to what is observed in magnetic materials. In Chapter 1, we gave several examples of magnetization curves dependent on field history, where branching and nonlocal memory effects play a dominant role.
Nowhere in the previous analysis of persistent memory did time have a role. No time scale was introduced and we simply assumed that, in order to see persistent-memory effects, one should wait long enough to let possible transients decay to zero. In this sense, hysteresis due to persistent memory is rate-independent, because it does not depend on the rate at which the input is varied (provided it is low enough), but only on the sequence of values attained by the input during the system evolution. As mentioned at the beginning of this chapter, rate-independence is often considered the most distinctive feature of hysteresis. Yet, from the thermodynamic viewpoint, it only represents an approximation of limited validity. In fact, any system spontaneously relaxes toward thermodynamic equilibrium under the action of thermal agitation. When equilibrium is reached, any memory of the past is lost, which shows that absolutely persistent memory cannot exist. In principle, by waiting long enough to let the system reach equilibrium under each input value, one could drive the system through a sequence of equilibrium states and no hysteresis at all would be observed. Nonetheless, rate-independence may naturally arise as a useful approximation when the input rate takes values in an
appropriate interval. On the one hand, the input rate must be low enough to rule out fast rate-dependent effects, like, for example, eddy-current damping in metals. On the other hand, the input rate must be large enough to prevent the system from significantly relaxing toward equilibrium. For all those systems (and many magnetic systems fall within this category) where thermal relaxation is extremely slow and proceeds with the logarithm of time, there will exist a substantial input-rate interval where both these requirements are simultaneously fulfilled. In this interval, the system will exhibit rate-independent hysteresis to a very good approximation.
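As an illustration of branching and nonlocal memory, not taken from the text, consider the superposition of two relay hysterons with illustrative switching thresholds. Only the sequence of input values matters, never the rate, so the construction is rate-independent by design:

```python
# Two relay hysterons with assumed thresholds; their weighted sum exhibits
# nonlocal memory: the same (H, X) point can hide different internal states.

def make_relay(beta, alpha):
    """Relay: output +1 once H >= alpha, -1 once H <= beta; starts at -1."""
    state = {'s': -1}
    def step(H):
        if H >= alpha:
            state['s'] = +1
        elif H <= beta:
            state['s'] = -1
        return state['s']
    return step

def run(history, probe=1.5):
    relays = [make_relay(-1.0, 1.0), make_relay(-2.0, 2.0)]
    for H in history:
        X = 0.5 * sum(r(H) for r in relays)
    branch = 0.5 * sum(r(probe) for r in relays)   # raise H after the history
    return X, branch

X_a, branch_a = run([0.0, 3.0, -1.0, 0.0])   # saturate up, partial reversal
X_b, branch_b = run([0.0, -3.0, 1.0, 0.0])   # saturate down, partial reversal
```

Both histories end at the same point (H, X) = (0, 0), yet raising the input to 1.5 yields different branches (1.0 versus 0.0): the pair (H, X) does not identify the state, which is the signature of nonlocal memory discussed above.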

2.1.4 Hysteresis: Metastability

When the connection between hysteresis and metastability is analyzed, one finds that it summarizes all the previous interpretations of hysteresis, which naturally emerge as approximate descriptions of the general picture. To illustrate this aspect, we need a proper frame for the treatment of metastability. To keep the discussion simple, we shall limit our considerations to systems with local memory, where H, X, and the temperature T provide a complete characterization of states. Let F(X, T) be the Helmholtz free energy of the system.[1] If H and X are conjugate work variables, as we are assuming throughout this chapter, then G(H, T) = F − HX is the corresponding Gibbs free energy. It is known from thermodynamics that G is the thermodynamic potential controlling spontaneous transformations under fixed H and T. Any transformation of this kind can only proceed in the sense of producing a decrease of G, and thermodynamic equilibrium is reached when G attains its global minimum value. The occurrence of a spontaneous transformation implies the existence of internal degrees of freedom which can change even if H and T do not change. Let us consider the case where this internal degree of freedom is represented by X itself. In other words, we suppose that X varies during the relaxation process and that the internal processes leading to a certain value of X have characteristic relaxation times much shorter than the time scale over which X varies significantly and the system globally approaches equilibrium. This means that the system will relax by passing through a sequence of thermodynamic states, each characterized by a well-defined value of X. The energy of these nonequilibrium states is, for given H and T, a function of X, given by

G_L(X; H, T) = F(X, T) − H X    (2.18)

[1] Some aspects of thermodynamics and metastability are discussed in Chapter 4.



We shall call this function the Landau free energy, after L.D. Landau, who developed on this basis a phenomenological theory of phase transitions. In spite of the formal similarity, there is an essential difference between G_L and the Gibbs energy G at equilibrium. G is a function of H and T only, in which X must be expressed as a function of H and T through the equation of state of the system. Conversely, G_L is the energy of that particular restriction where the state variable X is forced to take a certain given value, as if it were an external constraint. The physical meaning of G_L becomes clear in the frame of statistical mechanics, where G_L appears as the result of a partial averaging process carried out over the partition function Z(H, T) of the system. According to statistical thermodynamics, Z determines all thermodynamic properties, and is defined as

Z(H, T) = Σ_i exp[−(E_i − H X_i)/k_B T] = exp[−G(H, T)/k_B T]    (2.19)



In Eq. (2.19), Σ_i is a shortcut notation for any sum or integral over admissible microstates i, and E_i and X_i are the values of the system energy and of the variable X in microstate i. The evaluation of the sum gives the exponential of the Gibbs energy,[2] G(H, T). Let us now carry out the thermodynamic sum in two steps, by summing up first over all the microstates in which X has a fixed value, and then over all possible X values. The result of the first partial sum gives the Landau free energy:

exp[−G(H, T)/k_B T] = Σ_X Σ_{i: X_i = X} exp[−(E_i − H X_i)/k_B T] = Σ_X exp[−G_L(X; H, T)/k_B T]    (2.20)

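The two-step summation of Eqs. (2.19) and (2.20) can be verified explicitly on a toy system. The three-spin chain, its coupling J, and the parameter values below are hypothetical choices for illustration:

```python
import itertools, math

# A hypothetical chain of three Ising spins with nearest-neighbour coupling J
# in a field H. Grouping microstates by X = total spin gives the Landau free
# energy G_L(X; H, T); summing exp(-G_L/kT) over X must reproduce Z.

J, H, kT = 0.3, 0.2, 1.0
microstates = list(itertools.product([-1, 1], repeat=3))

def energy(s):
    # E_i for the open chain: -J*(s1*s2 + s2*s3)
    return -J * (s[0] * s[1] + s[1] * s[2])

Z = sum(math.exp(-(energy(s) - H * sum(s)) / kT) for s in microstates)

G_L = {}
for X in (-3, -1, 1, 3):                    # admissible total-spin values
    zX = sum(math.exp(-(energy(s) - H * X) / kT)
             for s in microstates if sum(s) == X)
    G_L[X] = -kT * math.log(zX)             # partial sum at fixed X

Z_check = sum(math.exp(-g / kT) for g in G_L.values())
```

Z_check equals Z identically, and the minimum of G_L singles out the value of X the system prefers at this field and temperature.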
If the separation of time scales previously mentioned holds to a sufficient approximation, then one can stop after the first partial average of Eq. (2.20), and take G_L as the free energy of the system. According to the last equality of Eq. (2.20), exp(−G_L/k_B T) will be proportional to the probability of occupying a given X state, and the system will tend to stay in those states where G_L is low. In general, it is extremely difficult to carry out the partial average explicitly and thus to calculate the Landau energy from statistical mechanics. Most often, one exploits symmetry arguments and heuristic considerations to arrive at a reasonable, approximate estimate of the form taken by G_L in particular cases.

[2] Sometimes the same definition is said to give the Helmholtz, not the Gibbs free energy. This exchange of roles depends on whether one decides to consider the term −H X_i as part of the energy of microstate i or as potential energy in the external field H.

In the rest of this book, the energy G_L will play a role in a number of considerations concerning hysteresis. We shall refer to G_L simply as the free energy of the system, but the subscript L will be used whenever possible as a reminder of the many assumptions and approximations accompanying its use. During relaxation, the system can be thought of as being under the action of competing thermodynamic forces, the internal force [∂F/∂X]_T and the external field H. There will be equilibrium between these two forces whenever the relation

H = [∂F/∂X]_T    (2.21)

is fulfilled. From Eq. (2.18) and Eq. (2.21), we see that when the system is in equilibrium, the G_L free energy is at an extremum:

[∂G_L/∂X]_{H,T} = 0    (2.22)
The equilibrium is stable when the extremum is a minimum. When Eq. (2.21) or Eq. (2.22) is satisfied, there is equilibrium between the thermodynamic forces acting on the system, but this does not mean that the system is in complete thermodynamic equilibrium. The situation is illustrated by Fig. 2.3.

FIGURE 2.3. Landau free energy with indication of metastable states.

The arrows indicate the local minima of the system free energy. Each minimum is a solution of Eq. (2.22) and represents a possible metastable state for the system. In a metastable state, there is stable equilibrium between the forces acting on the system. Over short time periods, these forces tend to keep the system in the energy well initially occupied, but over longer time scales thermal agitation progressively makes other states accessible to the system. The system visits neighboring states through random thermal fluctuations, and once it reaches by chance a local energy maximum separating the initial state from neighboring energy wells, it spontaneously jumps to some other metastable state that becomes accessible. Through this mechanism, the system probes progressively larger regions of phase space, until it reaches thermodynamic equilibrium, where the probability to occupy any given state is dictated by Boltzmann statistics. How fast or slow the approach to equilibrium will be is determined by the height of the energy barriers separating neighboring states as compared with the strength of thermal agitation. This qualitative picture will be made more precise in Section 2.4. Let us consider how the various hysteresis mechanisms discussed so far emerge from this description. First of all, we see that persistent memory is the natural consequence of the presence of multiple metastable states. The system can occupy one of several local energy minima and past input history determines the minimum actually occupied. Note that, in the particular example of Fig. 2.3, each minimum is identified by the corresponding value of X, which means that the system represented in that example possesses local memory. In the case of nonlocal memory, G_L would be a function of additional internal variables and the energy profile of Fig. 2.3 would become a multidimensional one, with a much more complex structure. The memory is absolutely persistent in the zero-temperature limit, where the effect of thermal agitation can be neglected. This is the limit, further discussed in Section 2.2, where rate-independent hysteresis can be observed. If the external conditions do not change, the system will indefinitely remain in any local energy minimum it may initially occupy. The only way to force the system to evolve is to change the external field H. Due to the presence of the term −HX in Eq.
(2.18), varying the external field distorts the energy profile, eventually transforming the initial G_L minimum into an inflection point. At that moment, the system loses stability and makes a spontaneous jump to the nearest local minimum. This event will be called a Barkhausen jump, and is analogous to the Barkhausen effect presented in Section 1.2. During the jump, the energy of the system suddenly decreases. Some energy is thus irreversibly dissipated as heat into the thermal bath. The dissipation mechanism by which this occurs will have some characteristic time scale. In the rate-independent approximation, one assumes that the external field changes so slowly that it practically remains unchanged during the time needed to complete the Barkhausen jump. In this limit, the role of the external field is just to force the system to pass from one local minimum to the next. The sequence of energy minima visited is the only important feature, and time plays no role. However, when the external field rate becomes
so high that appreciable field variations take place during individual Barkhausen jumps, then rate-independence no longer applies. The system no longer evolves through a sequence of spontaneous jumps but approaches a regime of forced dynamic evolution driven by the external field. This transition from rate-independent to rate-dependent hysteresis will be discussed in Section 2.3.
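The competition between barrier heights and thermal agitation described above can be illustrated with a minimal two-state kinetic model. The Arrhenius-type rates and all parameter values below are assumed forms, anticipating the more precise treatment of Section 2.4:

```python
import math

# Two metastable wells with Landau energies g1, g2 separated by a barrier,
# and assumed Arrhenius jump rates. Whatever the initial occupation (the
# "history"), the probabilities relax to the Boltzmann ratio, so memory is
# erased at any finite temperature; at kT -> 0 the rates vanish and the
# initial state persists, which is the rate-independent limit.

g1, g2, barrier, kT = 0.0, 0.3, 1.0, 0.25
w12 = math.exp(-(barrier - g1) / kT)    # escape rate from well 1
w21 = math.exp(-(barrier - g2) / kT)    # escape rate from well 2

def relax(p1, t_end=2000.0, dt=0.01):
    """Integrate dp1/dt = w21*(1 - p1) - w12*p1 by forward Euler."""
    for _ in range(int(t_end / dt)):
        p1 += dt * (w21 * (1.0 - p1) - w12 * p1)
    return p1

p_eq = math.exp(-g1 / kT) / (math.exp(-g1 / kT) + math.exp(-g2 / kT))
p_a = relax(1.0)    # history A: starts fully in well 1
p_b = relax(0.0)    # history B: starts fully in well 2
```

Both histories converge to the same equilibrium occupation p_eq, showing that absolutely persistent memory cannot survive thermal relaxation.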


2.2 RATE-INDEPENDENT HYSTERESIS

In this section we discuss rate-independent hysteresis and energy dissipation, along the lines anticipated at the end of the preceding section. In principle, one should address the subject by considering a multidimensional energy landscape, with a complicated multivalley structure and nonlocal memory effects. Yet, this is too complex a problem to be a good starting point. We prefer to show how many of the phenomena in which we are interested are already present in the simple case of a bistable system, whose free energy is characterized by two minima only. This is a system with local memory, where, in addition, there are at most two metastable states available to the system. In spite of its simplicity, this case illustrates well several important aspects present under more general conditions. In addition, we shall see, especially in Chapters 10, 13, and 14, that complicated situations can often be approximately described just in terms of the superposition of many elementary bistable contributions.

2.2.1 Bistable systems

Let us consider a system whose free energy is given by the expression[3]

f(x) = x⁴ − 2a x²    (2.23)


where a is a positive constant. We use lowercase letters to indicate that the problem has been reduced to some convenient dimensionless form, in which all variables and parameters are dimensionless. f(x) is shown in Fig. 2.4. It has two equal minima located at x = ±a^{1/2}, and a maximum at x = 0. According to Eq. (2.18), the free energy under nonzero input h will be

g_L(x; h) = x⁴ − 2a x² − h x    (2.24)


[3] The temperature dependence is not important for the subsequent considerations and will be understood in the rest of this section and throughout Section 2.3.













FIGURE 2.4. Free energy of the bistable system described by Eq. (2.23), with a = 0.5. The energy minima are located at x = ±x_r = ±a^{1/2}.

The metastable states under the generic field h are determined by the condition ∂g_L/∂x = 0, with ∂²g_L/∂x² > 0, identifying local g_L minima. The qualitative behavior of g_L(x; h) for various values of h is shown in Fig. 2.5. When h is large, the energy of interaction with the external field dominates and g_L exhibits only one minimum. When h increases from −∞, at a certain field −h_c, to be determined later, a new energy minimum is formed. At h = 0 the two minima have the same energy. At h = h_c the minimum initially occupied by the system becomes an inflection point:

FIGURE 2.5. Sequence of energy profiles calculated from Eq. (2.24) under different external fields, showing genesis of Barkhausen jumps and hysteresis loop. The numbers refer to the field values indicated in Fig. 2.6.


CHAPTER 2 Types of Hysteresis

the system is no longer stable and makes a spontaneous and irreversible Barkhausen jump to the lower energy state. Only one minimum is present for higher fields. A similar situation is found when the field is progressively decreased from +∞, except for the fact that the Barkhausen jump will now take place at h = -h_c. We can represent the same situation by plotting the free energy gradient ∂f/∂x and by considering that, according to the equilibrium requirement ∂g_L/∂x = 0, the condition h = ∂f/∂x must always be satisfied as the system evolves (Fig. 2.6). The ∂f/∂x profile can be decomposed into two stable branches, one for x < -x_c and the other for x > x_c, where ∂²f/∂x² > 0, and a central unstable branch where ∂²f/∂x² < 0. The right stable branch is traversed when h decreases from +∞ down to h = -h_c. At h = -h_c, x = x_c, the right branch ends and the system jumps to the point h = -h_c, x = -x_d of the left branch. A similar description applies for increasing fields. If we plot the state variable x as a function of the input field h, we obtain the hysteresis loop shown in Fig. 2.7. In spite of its simple structure, this loop already exhibits many of the features of the hysteresis loops actually observed in real systems. The two stable states existing at h = 0 are examples of remanent states. There the state variable takes the value x = ±x_F, determined by the condition ∂f/∂x = 4x(x^2 - a) = 0, which gives x_F = ±a^{1/2}. The two instability points h = h_c, x = -x_c and h = -h_c, x = x_c, where the Barkhausen jumps take place, are points where the two conditions ∂g_L/∂x = 0 and ∂²g_L/∂x² = 0 must simultaneously hold, that is,

4x^3 - 4ax - h = 0
3x^2 - a = 0     (2.25)

The solution gives h_c = 8(a/3)^{3/2}, x_c = (a/3)^{1/2}. The final state x = x_d > 0 reached by the system after the Barkhausen jump at h = h_c is also a solution of the equation 4x_d^3 - 4ax_d - h_c = 0, with 3x_d^2 - a > 0. One finds x_d = 2(a/3)^{1/2}. In Section 2.1.2, we showed that the area of the hysteresis loop measures the amount of work dissipated as heat during each excitation cycle. This dissipation takes place in the Barkhausen jumps and is directly related to the g_L energy. Let us consider the shaded area ΔW shown in Fig. 2.6. By construction, this area is equal to the integral
ΔW = ∫_{-x_c}^{x_d} [h_c - ∂f/∂x] dx = -∫_{-x_c}^{x_d} [∂g_L/∂x]_{h=h_c} dx

= g_L(-x_c; h_c) - g_L(x_d; h_c)     (2.26)

This is exactly the energy decrease occurring when the system makes the



FIGURE 2.6. Equilibrium between thermodynamic forces in bistable system and genesis of hysteresis loop. The numbers refer to the energy profiles of Fig. 2.5.








FIGURE 2.7. Solid line: Hysteresis loop of bistable system. Compare with Fig. 2.6. Broken line: Phase coexistence curve obtained by applying Maxwell convention.



Barkhausen jump (see Fig. 2.5). The system suddenly reduces its energy and transfers the energy difference as heat to the thermal bath. The same occurs in the jump taking place under a decreasing field. As shown by Fig. 2.6, the areas associated with the two jumps just sum up to give the total hysteresis loop area. This description makes clear the two steps through which the work performed by external sources is transformed into heat. Initially, the system gains energy from the external field as the minimum it occupies grows in energy and becomes less and less deep. Then, at the point where the minimum becomes an inflection point, the energy previously gained is quickly transferred to the thermal bath as the system jumps to lower energy, and the whole process can start again. Note also that the description is rate-independent, because we have assumed that the system always occupies one of the energy minima existing at the given field, whatever the field rate of change is. In particular, when we draw the two horizontal branches of Fig. 2.6 associated with the two Barkhausen jumps, we are implicitly assuming that the field does not change appreciably during the time needed by the system to make the jump. In Section 2.3, we will see how rate-dependent hysteresis naturally emerges when this assumption no longer holds.
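These closed-form results are easy to check numerically. The following sketch (illustrative Python, not from the book; it assumes NumPy and the value a = 0.5 used in the figures) verifies h_c, x_c, x_d, and ΔW, then constructs the rate-independent loop by branch following and confirms that its area equals the sum of the two jump areas, 2ΔW:

```python
import numpy as np

# Illustrative numerical check (not from the book) of the bistable-system
# results, using f(x) = x^4 - 2ax^2 with a = 0.5 as in Figs. 2.4-2.7.
a = 0.5

def f(x):                                   # free energy, Eq. (2.23)
    return x**4 - 2*a*x**2

def g_L(x, h):                              # free energy under field h, Eq. (2.24)
    return f(x) - h*x

# Switching field, instability point, and landing state, from Eq. (2.25)
x_c = (a/3)**0.5
h_c = 8*(a/3)**1.5
x_d = 2*(a/3)**0.5
assert abs(4*(-x_c)**3 - 4*a*(-x_c) - h_c) < 1e-12   # inflection point at h = h_c
assert abs(4*x_d**3 - 4*a*x_d - h_c) < 1e-12         # landing state of the jump

# Energy released in one Barkhausen jump, Eq. (2.26)
dW = g_L(-x_c, h_c) - g_L(x_d, h_c)

def stable_states(h):
    """Stable equilibria: real roots of 4x^3 - 4ax - h = 0 with 12x^2 - 4a > 0."""
    r = np.roots([4.0, 0.0, -4*a, -h])
    real = r[np.abs(r.imag) < 1e-7].real
    return [x for x in real if 12*x**2 - 4*a > 1e-9]

def sweep(h_values, x0):
    """Rate-independent evolution: stay in the minimum closest to the current
    state; a Barkhausen jump occurs when that minimum disappears."""
    x, xs = x0, []
    for h in h_values:
        x = min(stable_states(h), key=lambda xr: abs(xr - x))
        xs.append(x)
    return np.array(xs)

h = np.linspace(-1.0, 1.0, 4001)
x_up = sweep(h, x0=-a**0.5)                 # ascending branch
x_dn = sweep(h[::-1], x0=+a**0.5)[::-1]     # descending branch

# Loop area (energy dissipated per cycle) = sum of the two jump areas = 2 dW
gap = x_dn - x_up
area = float(np.sum(0.5*(gap[1:] + gap[:-1]))*(h[1] - h[0]))
assert abs(area - 2*dW) < 0.02
```

For a = 0.5 one finds ΔW = 3/4, so the loop area comes out close to 1.5, in agreement with the statement that the two jump areas sum up to the total loop area.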

2.2.2 Catastrophe theory

The analysis of bistable systems presented in the previous section has many points of contact with a general mathematical approach known as catastrophe theory. It is not our purpose to give here a detailed account of this subject, for which we refer the reader to the texts mentioned in the bibliographical notes. Yet, we want to stress certain aspects and concepts, typical of catastrophe theory, that may have a role in magnetic hysteresis. Catastrophe theory has to do with systems described by some potential V(x_1, x_2, …, x_n; h_1, h_2, …, h_k). The x_i variables describe the state of the system and are named state variables, whereas the h_i variables describe certain external actions exerted on the system and are named control parameters. For given values of the control parameters, the system is in equilibrium at any point in state space where ∇V = 0, where the operator ∇ represents the gradient calculated with respect to the x_i state variables. The objective of catastrophe theory is to study the qualitative properties of the distribution of equilibrium points as a function of control parameters. There is an evident analogy between the V potential and the free energies previously introduced. In particular, in the case of the bistable system of Eq. (2.24), we have a potential, g_L(x; h, a), with one state variable,



x (n = 1), and two control parameters, h and a (k = 2). We include the parameter a among the control parameters because this is a situation of definite physical interest that will be encountered several times, for example in Chapter 5, when dealing with the Weiss mean-field theory of ferromagnetism, and in Chapter 8, when discussing coherent magnetization rotation processes. Given the potential⁴ V(x; h), one can, for each specific choice h = h_0 of the control parameter set, classify the equilibrium points where ∇V = 0 according to the properties of the Hessian matrix V_ij = ∂²V/∂x_i∂x_j calculated at the equilibrium point. Two situations are possible.

Isolated or non-degenerate or Morse critical points. We shall simply term these points equilibrium points. These are points where ∇V = 0 and the determinant of the Hessian matrix is not zero, det V_ij ≠ 0, which means that the eigenvalues of the Hessian matrix are all different from zero. If all the eigenvalues are positive, then the point is a local minimum for the potential and represents a state of local stable equilibrium. If, on the contrary, a certain number i of eigenvalues is negative, the point is named a Morse i-saddle and describes an equilibrium situation unstable along certain directions. The number and the nature of equilibrium points determine the qualitative properties of the potential. For example, the potential of Eq. (2.24) may describe two qualitatively different situations, one in which there is only one potential minimum, and the other one where there are two minima separated by an unstable potential maximum (see Fig. 2.5).
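As a concrete illustration of this classification (a hypothetical Python sketch, not from the book), consider the one-dimensional potential of Eq. (2.24), where the Hessian reduces to the single second derivative ∂²g_L/∂x²:

```python
# Classify the equilibria of g_L(x; h) = x^4 - 2ax^2 - hx by the sign of the
# second derivative (the 1x1 "Hessian"). Illustrative sketch, a = 0.5.
a = 0.5

def dV(x, h):                 # first derivative of g_L
    return 4*x**3 - 4*a*x - h

def d2V(x):                   # second derivative (1x1 Hessian)
    return 12*x**2 - 4*a

def classify(x, h, tol=1e-12):
    assert abs(dV(x, h)) < 1e-9            # must be an equilibrium point
    if d2V(x) > tol:
        return 'minimum'
    if d2V(x) < -tol:
        return 'maximum (Morse 1-saddle)'
    return 'degenerate (non-Morse)'

# At h = 0: two minima at x = +-a**0.5 and an unstable maximum at x = 0
kinds = [classify(x, 0.0) for x in (-a**0.5, 0.0, a**0.5)]

# At the instability field h_c the minimum at x = -x_c has merged with the
# maximum: the second derivative vanishes and the point is degenerate
x_c, h_c = (a/3)**0.5, 8*(a/3)**1.5
kind_c = classify(-x_c, h_c)
```

Running this gives `kinds = ['minimum', 'maximum (Morse 1-saddle)', 'minimum']` and `kind_c = 'degenerate (non-Morse)'`, the two situations discussed in the text.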

Non-isolated or degenerate or non-Morse critical points. We shall simply term these points critical points. These are points where ∇V = 0 and det V_ij = 0. Some of the eigenvalues of the Hessian matrix, say l of them, are here equal to zero. Degenerate critical points play a fundamental role in the theory, since they control the qualitative properties of the potential as a function of the control parameters. To illustrate this fact, let x_0 be a certain critical point when the control parameter set takes the value h = h_0. The fact that x_0 is a critical point characterizes the behavior of the potential V(x; h_0) when x varies in the neighborhood of x = x_0 and h is kept fixed, h = h_0. The degenerate character of the critical point depends on the choice h = h_0, and the general question arises of how the existence of degenerate critical points, and, more generally, how the properties of the potential will be modified when we vary the control parameters in the neighborhood of h = h_0. The deep conclusion reached by catastrophe theory is that the joint
⁴We will simply indicate by x and h the whole set of state variables and control parameters whenever there is no risk of ambiguity.



x and h dependence of the potential around a critical point cannot be arbitrary, but it is limited to a few canonical forms, described by functions, named catastrophe functions or simply catastrophes, which are classified according to the number l of eigenvalues of the Hessian matrix that are zero at the critical point, and to the number k of control parameters. The classification of catastrophes can be found in the mentioned texts. Of interest for our purposes are the fold and cusp catastrophes.

Fold catastrophe. This catastrophe occurs in potentials V(x; h) depending on one state variable and one control parameter. In the neighborhood of the critical point, the potential can be expressed, after some convenient change of variable, in the form

V(x; h) = x^3/3 - hx     (2.27)




The behavior of the potential under varying h is shown in Fig. 2.8. The control space is represented by the h axis, and the critical point is at h = 0. In fact, the set of equilibrium points for the potential satisfies the condition

∂V/∂x = x^2 - h = 0     (2.28)



This equation has two real roots when h > 0, which correspond to an energy minimum and an energy maximum. Conversely, no real solution exists when h < 0. The potential passes from one condition to the other at the critical point, where the two equilibrium points merge together and form an inflection point. This occurs when, in addition to Eq. (2.28), the condition ∂²V/∂x² = 0 is fulfilled, i.e., after Eq. (2.27), at x = 0. By inserting this result into Eq. (2.28), we see that this can only occur when h = 0. The point h = 0 represents the set (in this case consisting of one


FIGURE 2.8. Fold catastrophe.




point only) separating the regions of control space where the potential takes qualitatively different forms. This set is called the bifurcation set.

Cusp catastrophe. This catastrophe occurs in potentials V(x; h_1, h_2) depending on one state variable and two control parameters. In the neighborhood of the critical point, the potential, after a suitable change of variables, takes the canonical form

V(x; h_1, h_2) = x^4 + h_1 x + h_2 x^2     (2.29)


We immediately recognize the structure of the energy of the bistable system of Eq. (2.24), with the external field h and the parameter a as control parameters. The form of the catastrophe function governs the qualitative properties of the potential as a function of the control parameters. This is shown in Fig. 2.9. In order to understand the meaning of this





FIGURE 2.9. Cusp catastrophe, with qualitative representation of energy profiles at different points in control space. Evolution along the horizontal line in the lower part of the figure generates the behavior shown in Figs. 2.5 and 2.6. The cusplike solid line represents the bifurcation set (Eq. (2.32)), the dashed line the Maxwell set.



figure, let us begin by considering that the set of equilibrium points for the potential of Eq. (2.24) is determined by the condition

∂g_L/∂x = 4x^3 - 4ax - h = 0     (2.30)


This is a cubic equation that has one or three real roots, depending on the values of h and a. Having one or three equilibrium points changes the qualitative behavior of the potential (one minimum against two minima separated by a maximum). If we represent the regions of the (h, a) control space where the potential takes one of the two qualitatively different forms just mentioned, we obtain the phase diagram shown in Fig. 2.9. The boundary line where one passes from one type of behavior to the other is the bifurcation set. At the bifurcation set two equilibrium points merge together and disappear. This occurs through the formation of an inflection point, which, in addition to Eq. (2.30), also fulfills the condition

∂²g_L/∂x² = 12x^2 - 4a = 0     (2.31)


This equation has real solutions only for a > 0. In this case, one finds x = ±(a/3)^{1/2}. By inserting this result into Eq. (2.30), one obtains the equation describing the bifurcation set:

h^2 = (4a/3)^3     (2.32)

The cusplike appearance of the bifurcation set makes clear the origin of the name given to this type of catastrophe.
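A quick numerical check of the bifurcation set (an illustrative sketch, not from the book, assuming Python with NumPy; the root counting uses the cubic of Eq. (2.30)):

```python
import numpy as np

# The number of equilibria of Eq. (2.30) changes exactly across the
# bifurcation set h^2 = (4a/3)^3 of Eq. (2.32). Illustrative sketch.
def n_equilibria(h, a):
    """Count real roots of 4x^3 - 4ax - h = 0."""
    r = np.roots([4.0, 0.0, -4*a, -h])
    return int(np.sum(np.abs(r.imag) < 1e-7))

for a in (0.25, 0.5, 1.0):
    h_bif = (4*a/3)**1.5                       # boundary of the cusp region
    assert abs(h_bif - 8*(a/3)**1.5) < 1e-12   # same value as the switching field h_c
    assert n_equilibria(0.9*h_bif, a) == 3     # inside the cusp: three equilibria
    assert n_equilibria(1.1*h_bif, a) == 1     # outside: only one
```

Note that the boundary field of the cusp region coincides with the switching field h_c = 8(a/3)^{3/2} found in Section 2.2.1, as it must.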

2.2.3 Evolution rules and phase coexistence

Knowledge of the number and the nature of equilibrium points does not clarify which of these points will actually be occupied by the system. In principle, this would require some dynamic description of the system evolution, of the type discussed in Sections 2.3 and 2.4. A simpler approach, adopted in catastrophe theory, is to summarize such dynamic aspects through appropriate conventions. Two extreme cases are in particular considered.

Delay convention. The system remains in the equilibrium state initially occupied until this state is made unstable by the action of the control parameters.



Maxwell convention. The system always occupies the state where the potential is at its absolute minimum.

The delay convention is exactly the one on which we have based our description of rate-independent hysteresis. The behavior illustrated by Fig. 2.5 can be rephrased in the frame of catastrophe theory as follows. The evolution of the system is represented by the horizontal line of Fig. 2.9. The points where the evolution line crosses the bifurcation set are the only points where Barkhausen jumps can take place. In this context, a Barkhausen jump is an event in which a small change in the control parameters entails a substantial change in the state variables. Any crossing of the bifurcation set will not necessarily lead to a Barkhausen jump. For this to occur, the system must be in one of the equilibrium states disappearing at the crossing. With reference to the horizontal evolution line of Fig. 2.9, one can check that a Barkhausen jump occurs only when the evolution point exits the cusp region. In general, for a Barkhausen jump to occur, the system must enter the cusp region from one side (e.g., h < 0) and exit through the opposite one (e.g., h > 0). The Maxwell convention can instead be interpreted as an approximate description of situations where the effect of thermal agitation is so important that the system rapidly relaxes to the state of minimum energy in times much shorter than the time scale over which the input exhibits significant variations. When the Maxwell convention is adopted, a new bifurcation set can be defined, known as the Maxwell set. This is the set of points in control space where the value of the potential at two or more minima becomes the same. The Maxwell set for the potential of Eq. (2.24) is described by the condition h = 0 and is represented by the broken line of Fig. 2.9. When the system reaches the Maxwell set, the state variable changes by a finite amount even under constant field. This is shown, for example, by the broken line of Fig. 2.7. This discontinuity is, however, perfectly reversible and entails no energy dissipation.
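The two conventions can be contrasted directly on the bistable system (an illustrative Python/NumPy sketch, not from the book, with a = 0.5 as before):

```python
import numpy as np

# Maxwell convention switches at the Maxwell set h = 0; the delay
# convention switches only at the bifurcation set h = +-h_c.
a = 0.5
h_c = 8*(a/3)**1.5

def g_L(x, h):
    return x**4 - 2*a*x**2 - h*x

def minima(h):
    """Local minima of g_L: real roots of 4x^3 - 4ax - h = 0 with 12x^2 - 4a > 0."""
    r = np.roots([4.0, 0.0, -4*a, -h])
    real = r[np.abs(r.imag) < 1e-7].real
    return [x for x in real if 12*x**2 - 4*a > 0]

def maxwell_state(h):
    """Maxwell convention: the absolute minimum of g_L."""
    return min(minima(h), key=lambda x: g_L(x, h))

def delay_state(h, x_prev):
    """Delay convention: the minimum closest to the currently occupied one."""
    return min(minima(h), key=lambda x: abs(x - x_prev))

# Start from the negative remanent state and apply a moderate positive field
h = 0.5*h_c
assert maxwell_state(h) > 0                       # Maxwell: switched at h = 0 already
assert delay_state(h, x_prev=-a**0.5) < 0         # delay: still in the old minimum
assert delay_state(1.1*h_c, x_prev=-a**0.5) > 0   # beyond h_c the jump must occur
```

For 0 < h < h_c the two conventions disagree, which is exactly the field interval responsible for hysteresis under the delay convention.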
From the thermodynamic viewpoint, a free energy like the one shown in Fig. 2.4 points to the existence of two possible phases for the system, each corresponding to one of the two equivalent energy minima, x = x_F and x = -x_F. One might wonder if there is any possibility of having stable states where x = 0. As x = 0 is an energy maximum, no such state is possible if we look for spatially homogeneous states, where x has the same value at each point in space. However, if we are dealing with a spatially extended system and we are only interested in the mean value <x> of x over the whole volume of the system, we can realize the condition <x> = 0 by a phase mixture, in which half of the volume is in the x = x_F phase and half is in the x = -x_F phase. By introducing the concept of



states not homogeneous in space, made up of domains where the system is in one or the other phase (see Fig. 2.10), we can actually generate states that continuously span the interval -x_F ≤ <x> ≤ +x_F, by adjusting the relative volumes of the two phases. The free energy of the mixture is a simple linear combination of the energies of the two phases, if we can neglect the energy concentrated at the interfaces separating different domains. In this frame, the Maxwell set can be interpreted as the set where phase coexistence is realized. When the system moves along the vertical broken line of Fig. 2.7, both phases are present, and the system smoothly passes from the condition <x> = -x_F (whole system in the x = -x_F phase) to the condition <x> = x_F (whole system in the x = x_F phase) with no hysteresis, by adjusting the relative volume of the domains. Magnetic domains are at the heart of the behavior of magnetic materials. However, the free energy structure responsible for them is more complicated than the simple bistable one considered here, and requires a specific detailed analysis. This aspect will be addressed in Part III.


We said that rate-independent hysteresis applies when Barkhausen jumps develop in times much shorter than the time scale fixed by the field rate of change. Under this approximation, a Barkhausen jump appears as an instantaneous event, of which we do not see the internal structure. Yet, a system always needs a certain time to react to varying external actions. This is the case, for example, in metallic ferromagnets, where magnetization changes are damped by the production of eddy currents. When the spontaneous evolution of the system through Barkhausen jumps and the

FIGURE 2.10. Phase coexistence with domains.



forced evolution driven by the external field take place on comparable time scales, rate-independent approximations are no longer applicable, and more general frames of description are needed. We shall discuss this aspect for the bistable system of Section 2.2, which gives a simple and clear illustration of the intimate relationship existing between rate-independent and rate-dependent hysteresis and, in particular, of the limit conditions under which rate-independent hysteresis naturally emerges from the general rate-dependent picture.

2.3.1 Dynamical systems

Let us reconsider Eq. (2.18). The various thermodynamic forces acting on the system are described by the energy gradient ∂G_L/∂X. The equilibrium condition ∂G_L/∂X = 0 states that the total force acting on the system must be zero at equilibrium. The equilibrium condition has the form of a balance, H - ∂F/∂X = 0, between the external field H and the gradient -∂F/∂X describing internal mechanisms. The natural question that arises at this point is: what will happen when the total force is not zero? How will the system react to an initial nonequilibrium condition? If the system is not in equilibrium, it will try to approach equilibrium by evolving in some direction. The state variable X will change in time, i.e., dX/dt ≠ 0. We want to know the dependence of dX/dt on ∂G_L/∂X. If we are not too far from equilibrium, we can expand dX/dt in powers of ∂G_L/∂X and truncate the expansion after the first-order term. Because both dX/dt and ∂G_L/∂X must be zero at equilibrium, we arrive at the equation

γ dX/dt = -∂G_L/∂X     (2.33)


where γ is some positive constant. This is an equation of the type "velocity proportional to force," describing a viscous-type mechanism, in which γ represents the friction constant, measuring the ability of the system to resist the action of the force. The dynamics is overdamped, with no inertial effects giving oscillations around equilibrium. By taking into account Eq. (2.18), we can write

γ dX/dt = H(t) - ∂F/∂X = H(t) - H_F(X)     (2.34)


where we have introduced the field H_F(X) = ∂F/∂X. The explicit time dependence in H(t) recalls the fact that we are going to apply the equation to situations where the rate of change of the external field may attain significant values and affect the evolution of the system. On the other



hand, we can write an evolution equation for X only if X has a well-defined value at each instant of time. This means that the internal relaxation processes leading to a given value of X must act over times much shorter than the times over which X undergoes significant variations. This is consistent with our previous interpretation of G_L (see Section 2.1.4). A system like the one described by Eq. (2.34), that is, by an equation of the form dX/dt = f(X,t), is called a dynamical system. If the function f explicitly depends on time, then the system is called a nonautonomous dynamical system. In addition, for the cases we are interested in, like the bistable system of Fig. 2.4, f is a nonlinear function of X, which also makes the system nonlinear. The general study of nonlinear, nonautonomous dynamical systems is rather difficult. In this section, we shall simply list and discuss a few qualitative aspects relevant to the interpretation of hysteresis phenomena. Texts giving a more detailed treatment are mentioned in the bibliographical notes. As a simple introductory example, it is instructive to consider the case where the system free energy is parabolic, F(X) = X^2/2X_0, with X_0 > 0. For any given field H, the system has only one equilibrium position, given by X = X_0 H. The quasi-static system response is thus linear, with no hysteresis. According to Eq. (2.34), under arbitrary dynamic excitation, the system is governed by the equation

γ dX/dt + X/X_0 = H(t)     (2.35)

This is the well-known equation describing exponential relaxation effects. The solution is

X(t) = (X_0/τ) ∫_{-∞}^{t} exp[-(t - t′)/τ] H(t′) dt′     (2.36)


where τ = γX_0. Equation (2.36) describes a linear system of the type defined by Eq. (2.3). The generalized susceptibility, calculated from Eq. (2.6), is given by


χ(ω) = X_0/(1 + iωτ) = X_0/(1 + ω^2τ^2) - i X_0 ωτ/(1 + ω^2τ^2)     (2.37)


The loss per cycle under sinusoidal excitation (Eq. (2.15)) becomes

W = π X_0 ωτ/(1 + ω^2τ^2) H_0^2 = π γ ω X_m^2     (2.38)

where H_0 is the field amplitude and X_m = X_0 H_0/(1 + ω^2τ^2)^{1/2} is the output amplitude. The loss per cycle is proportional to the friction constant γ, and for given output amplitude X_m, is also proportional to the frequency of excitation. The process is purely rate-dependent. Under quasi-static conditions, the loss goes to zero and the system response reduces to the nonhysteretic, linear law X = X_0 H.
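This prediction can be checked by direct integration of Eq. (2.35) (an illustrative Python sketch, not from the book, with arbitrary dimensionless parameters):

```python
import math

# Integrate gamma dX/dt + X/X0 = H0 sin(omega t), Eq. (2.35), and compare
# the loss per cycle with Eq. (2.38). Parameters are illustrative.
gamma, X0, H0, omega = 1.0, 1.0, 1.0, 2.0
tau = gamma*X0
T = 2*math.pi/omega
n = 20000
dt = T/n

def H(t):
    return H0*math.sin(omega*t)

# Explicit Euler; run five periods first so the exp(-t/tau) transient dies out
X, t = 0.0, 0.0
for _ in range(5*n):
    X += dt*(H(t) - X/X0)/gamma
    t += dt

# Then accumulate the loss  W = closed-loop integral of H dX  over one cycle
W = 0.0
for _ in range(n):
    dX = dt*(H(t) - X/X0)/gamma
    W += H(t)*dX
    X += dX
    t += dt

W_theory = math.pi*X0*omega*tau*H0**2/(1 + (omega*tau)**2)   # Eq. (2.38)
assert abs(W - W_theory) < 0.02*W_theory
```

The numerical loop area reproduces Eq. (2.38) to within the Euler discretization error, and shrinks to zero as ω → 0, as stated above.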



2.3.2 Nonlinear systems

The reason why the previous example leads to a purely rate-dependent behavior is that the system free energy has only one minimum. We know from Section 2.1.4 that the presence of multiple metastable states is essential in order to have rate-independent hysteresis. Some coexistence of rate-independent and rate-dependent effects is thus expected when we introduce in Eq. (2.34) the gradient of a potential more complicated than a parabolic well, characterized by several local minima. When the free energy gradient H_F(X) is a generic function of X, Eq. (2.34) becomes a nonlinear nonautonomous first-order differential equation. Its structure is complicated enough to make it difficult to discuss its general properties. In particular, the explicit presence of time is a substantial complication. We shall limit the discussion to a few qualitative considerations. Because the values of the various physical constants will not play any role in the discussion, we assume that we have reduced Eq. (2.34) to a convenient dimensionless form, where the constants have been absorbed in the change of variables. To remind us of this fact, we use lowercase letters for the dimensionless variables and u for dimensionless time. In addition, it is useful to rewrite Eq. (2.34) as a set of coupled equations for x and h:
dx/du = h - h_F(x)
dh/du = r(u)     (2.39)

where r(u) represents the input field rate of change. Expressing the problem in the form of Eq. (2.39) is useful, because it shows that the equations governing the joint behavior of h and x become autonomous (i.e., with no explicit time dependence) whenever the field changes at some fixed rate r, independent of time. Dealing with an autonomous description introduces important simplifications. In particular, there is just one solution of the problem (one trajectory) passing through any given point (x_0, h_0) of the x-h plane. We can then describe the behavior of the system by the so-called phase portrait of the equation, in which we represent the flow of trajectories passing through the various points of the plane of coordinates (x, h). Let us consider what this representation gives for a bistable system, where h_F(x) = ∂f/∂x = 4x^3 - 4ax (see Eq. (2.23)). In this case, Eq. (2.39) takes the form
dx/du = h - 4x^3 + 4ax
dh/du = r     (2.40)





Figure 2.11 shows two phase portraits associated with different values of r. Notice that the trajectories of each portrait never intersect. There is just one trajectory passing through any given point of the plane. On the other hand, all trajectories intersect the h_F(x) equilibrium line with vertical slope, because at this point, according to Eq. (2.39), h = h_F(x), and thus dx/du = 0. Each portrait represents the behavior of the system when the external field increases at the constant rate r. The portrait under decreasing field, that is, under opposite r, can easily be deduced by symmetry considerations. In fact, by taking into account that h_F(-x) = -h_F(x), we see that, if

FIGURE 2.11. Phase portraits of Eq. (2.40), for a = 0.5. Top: r = 0.25. Bottom: r = 2.5. Broken line is an example of symmetric trajectory followed under negative r. The thick solid line represents the free energy gradient h_F(x) = 4x^3 - 4ax.



[x(u), h(u)] is a solution of Eq. (2.40), then [-x(u), -h(u)] is a solution of the same equation when r → -r. Some of the trajectories under negative r are shown by the dashed lines of Fig. 2.11. The joint portraits under opposite r values are most useful in providing a straightforward method to construct dynamic hysteresis loops associated with triangular input waveforms. Such a waveform is in fact a sequence of constant-slope ramps. The corresponding loop is thus obtained by identifying the two trajectories, associated with opposite r values, that intersect at the desired peak input values. Examples of loops obtained by this construction for different values of r are shown in Fig. 2.12. Figure 2.12 gives evidence of the fact that the loop shape is substantially dependent on the input rate of change. In addition, even if the peak value and the frequency of oscillation of the input are the same, the loop shape still exhibits a residual dependence on the particular input waveform applied. The loop under triangular excitation is different from the one under sinusoidal excitation, as shown by the dashed line of Fig. 2.12.
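The loop construction can be reproduced by direct numerical integration (an illustrative Python sketch of Eqs. (2.39)-(2.40); the explicit Euler scheme and step counts are implementation choices, not from the book):

```python
# Dynamic loops of Eq. (2.40) under a triangular input of peak h0 = 2.5,
# a = 0.5, for two values of the field rate r. Illustrative sketch.
a, h0 = 0.5, 2.5

def h_F(x):
    return 4*x**3 - 4*a*x

def loop_area(r, n=100000):
    """Integrate Eq. (2.40) with explicit Euler through three triangular
    periods; the loop area (integral of h dx) is measured on the last one,
    after the transient has been discarded."""
    du = 2*h0/(n*r)                 # n steps per half-period of the triangle
    x, area = -1.0, 0.0
    for cycle in range(3):
        measure = (cycle == 2)
        for sgn in (+1.0, -1.0):    # rising ramp, then falling ramp
            h = -sgn*h0
            for _ in range(n):
                dx = du*(h - h_F(x))
                if measure:
                    area += h*dx
                x += dx
                h += sgn*du*r       # dh/du = +-r
    return area

slow, fast = loop_area(0.25), loop_area(2.5)
# The quasi-static loop area is 2*(3/4) = 1.5 for a = 0.5 (Section 2.2.1);
# the dynamic loops are wider, and widen further as |r| grows.
assert fast > slow > 1.5
```

This reproduces the trend of Fig. 2.12: the loop area, and thus the dissipation per cycle, grows with the input rate of change, and approaches the rate-independent value as |r| → 0.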




FIGURE 2.12. Dynamic hysteresis loops predicted by Eq. (2.39) and Eq. (2.40). Continuous lines: loops under triangular input waveform of peak value h_0 = 2.5 predicted by Eq. (2.40) for a = 0.5 and r = ±0.25, ±2.5 (see Fig. 2.11). The rate-independent loop (r → 0) is also shown for comparison. Dashed line: loop under sinusoidal input waveform h(u) = h_0 sin(ωu), h_0 = 2.5, r = 2.5, predicted by Eq. (2.39). The triangular and sinusoidal input waveforms with r = 2.5 have the same amplitude and the same period.



Rate-dependent effects disappear when |r| → 0. In this limit, rate-independent hysteresis naturally emerges from the general rate-dependent description. To show how this occurs, let us focus attention on the geometrical shape of the trajectories generated by Eq. (2.39) or Eq. (2.40), with no consideration of the time at which specific trajectory points are reached. From Eq. (2.39) we see that the trajectory h(x) obeys the differential equation

dh/dx = r/[h - h_F(x)]     (2.41)


In the limit |r| → 0, the product [h - h_F(x)]·dh/dx must be close to zero. In addition, if for example r > 0, dh/dx and h - h_F(x) must both be positive. We see that, whenever dh_F/dx > 0, a possible solution is h ≈ h_F(x) and dh/dx ≈ dh_F/dx > 0. In those regions where dh_F/dx < 0, however, this solution is not possible, because one would get the product of terms with opposite signs. In this case, one can simply set dh/dx ≈ 0, whatever the value of h - h_F(x). This is where a Barkhausen jump takes place. Analogous considerations can be made when r < 0. Thus, one obtains a loop made up of the stable branches of h_F(x), joined by horizontal segments where Barkhausen jumps take place. The most striking property of the dynamic loops of Fig. 2.12 is certainly the increase with |r| of the loop area, which points to increasing energy dissipation. The amount of energy dissipated at each instant of time can be calculated from Eq. (2.39). If the system evolves under isothermal conditions, the energy dissipated in a small transformation where the state variable varies by Δx is given by (h - h_F)Δx. In fact, h_F(x) represents the energy gradient ∂f/∂x, so that (h - h_F)Δx = hΔx - Δf. As discussed in Chapter 4, under isothermal conditions, the difference between the work hΔx performed by external sources on the system and the amount of free energy Δf stored in the system just represents the amount of energy dissipated as heat during the transformation. The dissipated energy [h - h_F(x)]Δx is graphically represented by the shaded area shown in Fig. 2.13. The energy loss is given by the area between the h(x) trajectory and the h_F(x) equilibrium line. This conclusion applies to any generic transformation, and does not require closed loop trajectories. On the other hand, it is consistent with the fact that the total energy loss in a closed loop is given by the loop area.
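This energy bookkeeping can be verified along a numerically integrated trajectory (an illustrative Python sketch, not from the book; the sweep parameters are arbitrary):

```python
# Along a trajectory of Eq. (2.40), the work h*dx splits into stored free
# energy df and dissipated heat (h - h_F)*dx, the latter never negative.
a, r = 0.5, 1.0

def f(x):
    return x**4 - 2*a*x**2

def h_F(x):
    return 4*x**3 - 4*a*x

du, x, h = 1e-5, -1.0, -2.0
x0 = x
work = dissipated = 0.0
for _ in range(400000):               # sweep h from -2 to +2 at rate r = 1
    dx = du*(h - h_F(x))              # Eq. (2.40), explicit Euler
    work += h*dx                      # work performed by external sources
    dissipated += (h - h_F(x))*dx     # = (dx/du)^2 du >= 0 at every step
    x += dx
    h += r*du

stored = f(x) - f(x0)                 # free energy change along the sweep
assert dissipated >= 0.0
assert abs(work - (stored + dissipated)) < 1e-2   # energy balance
```

The dissipated term is a sum of non-negative contributions, so heat is released along the whole trajectory, with the dominant contribution concentrated in the Barkhausen jump.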


The role of temperature has been anticipated many times, but not yet discussed in this chapter. The results obtained in the previous sections



FIGURE 2.13. Shaded region represents the amount of energy dissipated in the x increment Δx.

were all based on the working hypothesis that the system will indefinitely remain in whatever state initially occupied, unless the external field is modified. In the terminology of catastrophe theory, this means that the delay convention is adopted. Thermal agitation takes the system away from this condition. When thermal effects are dominant, a different, in a sense opposite, limit condition becomes the natural one, in which the system relaxes so fast toward thermodynamic equilibrium that, when we probe it under arbitrary field, we always find it in the state of minimum free energy. In this limit we do not have hysteresis at all, and the Maxwell convention applies. An example is shown by the dashed line of Fig. 2.7. The real behavior of a given system will always be somewhere in between these two limits, as a result of the competition between the time scale over which the system approaches equilibrium and the time scale over which the field varies significantly and takes the system away from equilibrium. This gives rise to rate-dependent effects of physical origin completely different from the one discussed in the previous section. In particular, the state of the system will change in time even if the field does not change at all. In this section, we discuss thermal relaxation under constant field, to illustrate some general concepts that can be applied to more complicated situations as well.⁵ Two main points will be addressed. First, we shall introduce a proper frame to deal with the fact that, in the presence of thermal agitation, the state variable X(t) becomes a stochastic process, that is, a function exhibiting irregular fluctuations in time. This calls for a

⁵Thermal rate-dependence under varying field will be considered in Section 14.2.



more refined definition of the concepts of state and state variable. Second, we shall discuss a valuable approximation, summarized by the term thermal activation, where, of all the details of the free energy landscape in which the system is evolving, only the energy extrema, that is, minima, maxima, or saddle points, control the relaxation. As in the rest of this chapter, we discuss these aspects for the bistable system introduced in Section 2.2. Having a one-dimensional free energy with only one or two minima is an enormous simplification with respect to the general problem of the relaxation in a multidimensional, multivalley energy surface. However, we shall see in chapters 10 and 14 that the treatment of more complex systems can often be reduced just to the superposition of many bistable contributions of the type discussed in this section.

2.4.1 Random fluctuations

Let us reconsider Eq. (2.33). This is a deterministic equation for the state variable X(t). At each instant of time, X attains a well-defined value and one assumes that this value is all that is needed in order to study the problem. When thermal effects are important, this is no longer the case. The system is coupled to the thermal bath, and small amounts of energy are continuously and randomly exchanged between the system and the bath. It is through these microscopic interaction processes that thermodynamic equilibrium is established and conserved in time. The consequence is that the state variable will also undergo small random fluctuations in time. In other words, the function X(t) becomes a stochastic process, that is, an erratic function of time, reflecting the microscopic nature of the interaction processes with the bath. Under these conditions, X(t) ceases to be the natural quantity that one would like to associate with the state of the system. We are not interested in the complicated behavior of X(t) over the short time scale of microscopic interactions, but in its average behavior over the much longer time scale where the macroscopic evolution of the system becomes manifest. In order to deal with this situation, probabilistic methods and concepts are needed. Let us introduce the function P(X,t), such that P(X,t)dX represents the probability that, at time t, the value of the state variable lies in the interval (X, X + dX). P(X,t) is now the quantity characterizing the state of the system. In particular, the value of X that we associate with a given state is now the average

⟨X(t)⟩ = ∫ X P(X, t) dX     (2.42)




⟨X(t)⟩ is a statistical-ensemble average. We can imagine having a large number of identical systems under identical conditions. Each of them will evolve according to a different random function Xi(t), where i identifies a given system in the ensemble. If, at a certain time, we pick up the values of Xi(t) over the whole ensemble and we take the average, we obtain ⟨X(t)⟩. Note that there is no reason why the individual functions Xi(t) should all be close to ⟨X(t)⟩. For example, if Xi(t) is equal to 1 for half of the ensemble and is equal to −1 for the other half, we get ⟨X(t)⟩ = 0, that is, neither 1 nor −1. As we shall see, this is just what may happen in bistable systems. In such cases, the naive guess that the average ⟨X(t)⟩ should reproduce, apart from small fluctuations, the time behavior of Xi(t) of each system in the ensemble is not true. In this frame, the behavior of the system is determined by the properties of the probability distribution P(X, t). A first conclusion of general validity is drawn by noting that, whatever the initial distribution P(X, t = 0) might be, at times large enough the system should reach thermodynamic equilibrium, characterized by some equilibrium distribution Peq(X). We know from Section 2.1.4 (Eq. (2.20)) that the probability of finding a given value X in thermodynamic equilibrium is controlled by the exponential of the energy GL(X; H, T). Therefore, we conclude that
Peq(X) = P(X, t → ∞) ∝ exp[−GL(X; H, T)/kBT]     (2.43)


Let us apply this result to the bistable system of Fig. 2.4, under zero field (Fig. 2.14). If the variation of GL versus X is large with respect to kBT, then, due to the exponential dependence on energy in Eq. (2.43), Peq(X) exhibits two identical sharp peaks around the two energy minima. It is almost inevitable to reason in terms of two discrete states, (+) and (−), and to describe the situation by saying that the system is with probability ½ in the (+) state and with probability ½ in the (−) state. The average value of X at equilibrium is thus ⟨X⟩eq = 0. How this result is reflected in the time behavior of the individual random functions Xi(t) is shown in Fig. 2.15. The individual system stays for some time around one of the two minima and then, aided by some favorable thermal fluctuation, has a chance to overcome the energy barrier at X = 0 and to pass into the other energy well. The system spends on the average half of its time in one minimum and half in the other, and thus, over time intervals long enough with respect to the typical transition times from one minimum to the other, it appears to occupy both minima with probability ½. The random nature of the Xi(t) function is recognized in the fine structure of its fluctuations around the minimum energy position and, most of all, in the random location of the times at which the system jumps from one minimum to the other.
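This two-peak picture is easy to check numerically. The sketch below evaluates the Boltzmann weights of Eq. (2.43) on a grid, for a hypothetical quartic free energy GL(X) = X⁴/4 − X²/2 (an illustrative choice, not a form derived in the text), whose minima sit at X = ±1 with a barrier of height 0.25 at X = 0.

```python
import math

def equilibrium_average(g, kT, xs):
    """<X>_eq = normalized sum of x * exp(-G(x)/kT): a grid version
    of Eqs. (2.42) and (2.43)."""
    weights = [math.exp(-g(x) / kT) for x in xs]
    z = sum(weights)
    return sum(x * w for x, w in zip(xs, weights)) / z

# hypothetical symmetric double well: minima at X = +1 and X = -1,
# barrier height 0.25 at X = 0
g = lambda x: x**4 / 4 - x**2 / 2
xs = [i / 1000.0 for i in range(-3000, 3001)]

avg = equilibrium_average(g, kT=0.05, xs=xs)   # barrier / kT = 5
# the weight piles up in two sharp peaks at X = +1 and X = -1;
# by symmetry <X>_eq = 0, a value that X itself almost never takes
```

Adding a tilt −HX (a nonzero field) moves essentially all the equilibrium weight into the deeper minimum; this is the strongly asymmetric regime discussed in Section 2.4.3.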








FIGURE 2.14. Bistable system: Behavior of probability distribution in approach to equilibrium.


FIGURE 2.15. Bistable system: Typical time behavior of individual state variable.

Figure 2.15 shows well how rate-independent hysteresis emerges in this context. As we shall see shortly, the average time separation between subsequent jumps from one energy minimum to the other depends exponentially on the ratio between the energy barrier separating the two minima and kBT. If the temperature is low or the barrier large, this time can easily become huge. On the time scale of a typical experimental observation, we do not wait long enough to detect even one of these transitions, which means the system simply remains in whatever state it was initially occupying. In the probability representation shown in Fig. 2.14, this means that if we initially have a peak around one of the two minima, this peak will simply remain unchanged during the time of our observation.

2.4.2 Langevin and Fokker-Planck equations

The qualitative considerations developed so far summarize the main physical aspects of thermal relaxation, but leave undetermined several quantitative aspects. We need appropriate mathematical tools to describe and study the random functions of Fig. 2.15, and to predict how a given initial distribution P(X,t = 0), like the single-peak distribution of Fig. 2.14, will evolve in time and will approach the final equilibrium distribution Peq(X) given by Eq. (2.43). We said before that the coupling of the system to the thermal bath entails microscopic interaction processes that produce random changes in X(t). These interactions can be described as additional forces that act on the system and contribute to the force balance expressed by Eq. (2.34). Therefore, in the presence of thermal fluctuations, we expect that the deterministic description given by Eq. (2.33) should be modified into

Γ dX/dt = −∂GL/∂X + HT(t)     (2.44)


where HT(t) is an additional random force describing the coupling to the thermal bath. In most cases, the microscopic interactions described by HT(t) take place on a time scale much shorter than the time scale over which ⟨X(t)⟩ exhibits significant variations, and the values of the random force at different times appear to be independent of each other down to negligible time separations. A rapidly varying function of this sort is described by a stochastic process known as Gaussian white noise. (Some information on the various concepts and methods of the theory of stochastic processes mentioned in this and in the following section can be found in Appendix E.) Equation (2.44) is a stochastic differential equation, that is, a differential equation containing a random term, and, as such, it will produce random solutions X(t), just as we expect on physical grounds. Mathematically, this type of equation is known as a Langevin equation, and its study is part of the



theory of Markovian stochastic processes. Equation (2.44) is based on the assumption that a clear-cut separation of time scales exists in the problem, so that the total force acting on the system can be expressed as the sum of the deterministic macroscopic force described by the free energy gradient and of a short-time-scale random force of thermal origin. The ensemble of the random processes Xi(t) shown in Fig. 2.15 is the ensemble of the solutions of Eq. (2.44). The statistical properties of this ensemble are described by the distribution P(X, t) previously introduced, and it is natural to investigate the relationship between the properties of P(X, t) and the properties of Eq. (2.44). The theory of stochastic processes shows that, under appropriate conditions, given the stochastic process described by Eq. (2.44), the corresponding probability distribution P(X, t) will obey the partial differential equation

Γ ∂P/∂t = ∂/∂X [(∂GL/∂X) P + kBT ∂P/∂X]     (2.45)


This equation is known as the Smoluchowski equation, and is a particular case of a class of equations known as Fokker-Planck equations. The time-scale separation mentioned before is here reflected in the presence of two terms on the right-hand side of Eq. (2.45). The former produces a drift of the distribution in the direction dictated by the free energy gradient, whereas the latter is a diffusive term, of strength proportional to the temperature, which describes thermal agitation. By solving Eq. (2.45) under appropriate initial and boundary conditions, one has in principle a general description of the relaxation process, and one can calculate the time evolution of the average state variable ⟨X(t)⟩ through Eq. (2.42). For example, the time-independent, stationary solution of Eq. (2.45) is precisely the equilibrium distribution given by Eq. (2.43), so that at the end of the relaxation the system will actually reach thermodynamic equilibrium. However, the point is that, except for a few particular cases, one does not know how to solve Eq. (2.45) for a generic form of GL. Some approximation must be introduced.
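Even when no closed-form solution exists, the Langevin equation (2.44) can be integrated numerically. Below is a minimal Euler-Maruyama sketch, assuming Γ = 1 and an illustrative quartic free energy GL(X) = X⁴/4 − X²/2 (not a form taken from the text); a Gaussian increment of variance 2 kBT dt per step reproduces the diffusive term of Eq. (2.45).

```python
import math, random

def langevin_path(g_prime, x0, kT, dt, n_steps, rng):
    """Euler-Maruyama integration of dX = -G_L'(X) dt + noise,
    with a Gaussian white-noise increment of variance 2*kT*dt per step."""
    x = x0
    sigma = math.sqrt(2.0 * kT * dt)
    path = [x]
    for _ in range(n_steps):
        x += -g_prime(x) * dt + rng.gauss(0.0, sigma)
        path.append(x)
    return path

g_prime = lambda x: x**3 - x          # G_L(X) = X^4/4 - X^2/2 (illustrative)
rng = random.Random(1)
path = langevin_path(g_prime, x0=1.0, kT=0.05, dt=0.01, n_steps=20000, rng=rng)
# at low kT the trajectory fluctuates around a minimum (X near +1 or -1),
# with only rare jumps across the barrier at X = 0
```

With barrier/kBT = 5, the trajectory reproduces the behavior of Fig. 2.15: small fluctuations around one minimum, punctuated by rare, randomly located barrier crossings.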

2.4.3 Thermal activation

If the temperature is low enough with respect to the energy barriers involved in the problem, the probability distribution P(X, t) will have at any time the peaked structure shown in Fig. 2.14. One can introduce, with no ambiguity, the probabilities P+ and P_ to find the system around one of the two energy minima, calculated by taking the integral of P(X, t) over



convenient neighborhoods of the locations, X+ and X_, of the two minima. One can thus attempt a discrete description of the problem, where, instead of a probability distribution that is a continuous function of X, one simply has two probabilities, P+(t) and P_(t), to study as functions of time, with the constraint P+ + P_ = 1. In this approximation, Eq. (2.42) reduces to
⟨X(t)⟩ = X+P+ + X_P_ = X_ + (X+ − X_)P+     (2.46)


There are methods discussed in the literature which show that this discrete description can indeed be derived from Eq. (2.45), in the limit of energy barriers large with respect to kBT. We shall illustrate the form taken by this approximation for the bistable system pictured in Fig. 2.16, in which the two energy minima are at unequal levels. The lack of symmetry may be the result of the internal structure of the system, or may simply be due to the presence of a nonzero external field. The main aspects of the approximate description can be summarized as follows.
Master equation. P+(t) obeys the master equation


dP+/dt = w_P_ − w+P+     (2.47)


where w+ and w_ represent appropriate transition rates, defined next. Equation (2.47) can be interpreted as follows. Suppose we have a statistical ensemble of identical systems. Then P+ and P_ will be proportional to the number of systems occupying the two minima. The systems in the (−) or (+) state have a probability per unit time w_ or w+ to jump to the (+) or (−) state. In the time interval dt, a number of systems proportional to P_w_ dt and to P+w+ dt will make opposite transitions from one minimum to the other and will give opposite contributions to the variation in the number of systems in the (+) state. The two flows proceed independently of each other, and Eq. (2.47) keeps track of the total balance.
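This ensemble reading can be mimicked directly by simulating an individual system as a two-state jump process whose dwell times are exponentially distributed, with mean 1/w+ in the (+) state and 1/w_ in the (−) state. A sketch with arbitrary illustrative rates (the specific numbers are not from the text):

```python
import random

def telegraph_occupancy(w_plus, w_minus, t_total, rng):
    """Fraction of time a single two-state system spends in (+), given
    exit rate w_plus from (+) and w_minus from (-), as in Eq. (2.47)."""
    state, t, t_plus = +1, 0.0, 0.0
    while t < t_total:
        rate = w_plus if state == +1 else w_minus
        dwell = min(rng.expovariate(rate), t_total - t)
        if state == +1:
            t_plus += dwell
        t += dwell
        state = -state          # jump into the other well
    return t_plus / t_total

frac = telegraph_occupancy(w_plus=1.0, w_minus=1.0, t_total=5000.0,
                           rng=random.Random(3))
# symmetric rates: the long-time occupancy of each state approaches 1/2
```

For asymmetric rates the occupancy of (+) approaches w_/(w+ + w_), the equilibrium probability that reappears as P+,eq in Section 2.4.3.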






FIGURE 2.16. Bistable system: Energy barriers and relaxation channels involved in thermal activation.



Transition rates. The transition rates w± depend on the free energy and on temperature. Remarkably, many details of the energy profile turn out to be unimportant. The dominant role is played by the energy difference between each of the minima and the energy maximum that separates them (see Fig. 2.16). One finds that w± is expressed by the Arrhenius formula

w± = (1/τ0) exp(−ΔG±/kBT)     (2.48)



where ΔG± > 0 are the two energy barriers indicated in Fig. 2.16. The characteristic time constant τ0 summarizes other details of the free energy profile and of the temperature dependence. However, to a first approximation these aspects play a marginal role in comparison with the strong exponential dependence on ΔG±/kBT. On the other hand, accurate predictions of τ0 are difficult, and it is usually accepted that this constant is poorly known. Equation (2.48) can be qualitatively interpreted as follows. Let us consider for instance the (−) state of Fig. 2.16. According to Boltzmann statistics, the relative probability of finding a given system of the statistical ensemble at the bottom and at the top of the energy barrier is given by exp(−ΔG_/kBT). If we imagine that the system has the possibility to update its state in a time interval of the order of τ0, then Eq. (2.48) will represent the number of times per unit time in which the system will be found on top of the barrier. The state on top of the barrier is dynamically unstable, and when the system is there it will jump, with a probability of order one (say ½), toward the other energy minimum accessible from the top of the barrier. This gives rise to a transition rate from the (−) to the (+) state of the order of Eq. (2.48). Similar considerations hold for the other energy minimum. Equation (2.47) summarizes the effect of the two opposite flows.
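The exponential dependence in Eq. (2.48) is what makes the poor knowledge of τ0 tolerable, as a small numeric sketch shows (arbitrary numbers, assuming τ0 = 1):

```python
import math

def arrhenius_rate(barrier, kT, tau0=1.0):
    """Transition rate w = (1/tau0) * exp(-DeltaG/kT), Eq. (2.48)."""
    return math.exp(-barrier / kT) / tau0

# raising the barrier from 10 kT to 20 kT slows the jumps by e^10 ~ 2.2e4,
# dwarfing any factor-of-a-few uncertainty in tau0
w_low_barrier = arrhenius_rate(barrier=10.0, kT=1.0)
w_high_barrier = arrhenius_rate(barrier=20.0, kT=1.0)
ratio = w_low_barrier / w_high_barrier
```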

Relaxation equation. By recalling that P+ + P_ = 1, we can write Eq. (2.47) in the equivalent form

dP+/dt = (P+,eq − P+)/τ     (2.49)

where

1/τ = w+ + w_ = (2/τ0) cosh(ΔGu/kBT) exp(−ΔGm/kBT)     (2.50)

P+,eq = w_/(w+ + w_) = ½ [1 − tanh(ΔGu/kBT)]     (2.51)

ΔGm = (ΔG_ + ΔG+)/2,   ΔGu = (ΔG_ − ΔG+)/2     (2.52)

The solution of Eq. (2.49), given the initial probability P+(t = 0), is

P+(t) = P+,eq + [P+(0) − P+,eq] exp(−t/τ)     (2.53)



Analogous expressions hold for the average state variable ⟨X(t)⟩ defined by Eq. (2.46). According to Eq. (2.53), the system relaxes exponentially, with a time constant τ given by Eq. (2.50). In the final equilibrium state described by Eq. (2.51), the two energy minima are both significantly populated only if |ΔGu| ≲ kBT. Otherwise, only the absolute minimum is occupied. When this is the case, only one of the two flows of Eq. (2.47) plays a role, the one describing jumps from the higher to the lower energy state. Jumps in the opposite direction have a negligible probability of occurring, and the relaxation simply reduces to a continuous transfer of probability from one minimum to the other, until the whole probability is concentrated in one minimum only. The situation encountered in magnetic materials is inevitably much more complicated than what we have discussed here. Yet, as we shall see in chapters 10, 13, and 14, a description in terms of collections of bistable contributions may often be acceptable. Those cases can be treated on the basis of the present results, by introducing appropriate distributions of the various parameters involved in the description.
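The discrete description is easy to verify numerically, by integrating the master equation (2.47) with a forward-Euler step and comparing with the exponential solution of Eq. (2.53). The rates w+ = 2, w_ = 0.5 below are arbitrary illustrative numbers (giving τ = 0.4 and an equilibrium probability P+,eq = 0.2):

```python
import math

def relax(p0, w_plus, w_minus, dt, n_steps):
    """Forward-Euler integration of Eq. (2.47):
    dP+/dt = w_minus * P_minus - w_plus * P+, with P_minus = 1 - P+."""
    p = p0
    for _ in range(n_steps):
        p += (w_minus * (1.0 - p) - w_plus * p) * dt
    return p

w_plus, w_minus = 2.0, 0.5
tau = 1.0 / (w_plus + w_minus)           # relaxation time constant
p_eq = w_minus / (w_plus + w_minus)      # equilibrium probability of (+)

t = 1.0
n = 100000
p_num = relax(p0=1.0, w_plus=w_plus, w_minus=w_minus, dt=t / n, n_steps=n)
p_exact = p_eq + (1.0 - p_eq) * math.exp(-t / tau)   # Eq. (2.53)
# p_num and p_exact agree to within the Euler discretization error
```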


2.5 BIBLIOGRAPHICAL NOTES

The mathematical description of hysteresis, mainly of a rate-independent nature, is addressed in [B.28-B.31, B.111, B.115]. In [B.29], a clear presentation of the concepts of branching and local/nonlocal memory is given. The subject of metastability and approach to equilibrium is of fundamental physical importance and of challenging difficulty, and has been only superficially addressed here [see B.32, B.38, B.41, B.45]. The conceptual difficulties encountered when attempting a rigorous definition of metastability in statistical mechanics are discussed in Ref. 2.1. Our summary of catastrophe theory in Section 2.2.2 has been largely inspired by [B.22]. The general connection between magnetization processes in magnetic systems and elementary catastrophes is analyzed in Ref. 2.2. System equilibrium and stability, and the properties of nonautonomous, nonlinear dynamical systems, considered in Section 2.3, are discussed in [B.23]. The concept of Landau free energy and its connection to partial statistical averages is neatly discussed in [B.37, Chapter 5]. Its physical significance and its limits of applicability are considered in Ref. 2.3, Ref. 2.4, and Ref. 2.5. For the use of the theory of stochastic processes to describe thermal relaxation and thermal activation, see Appendix E and the references therein, Ref. 2.6, and Ref. 2.7. The relevance of stochastic processes to nonequilibrium statistical mechanics is discussed in detail in [B.44, chapters 2 and 3]. An important example of application of the methods of Section 2.4 to thermal relaxation in magnetic systems is discussed in Ref. 2.8, Ref. 2.9, and Ref. 2.5. In Ref. 2.8, the discrete approximation of Section 2.4.3 is used, whereas in Ref. 2.9 and Ref. 2.5 a Langevin equation based on the more complete approach of Section 2.4.2 is considered. An interesting topic not discussed in this book is linear-response theory and the fluctuation-dissipation theorem, whereby one connects fluctuations in thermodynamic equilibrium to dissipation. For this see [B.38, B.42, B.44, B.46].

2.1 O. Penrose and J. L. Lebowitz, "Towards a Rigorous Molecular Theory of Metastability," in [B.49], 323-375.
2.2 M. A. Pinto, "Catastrophe Model for Micromagnetics," Phys. Rev. Lett. 24 (1987), 2798-2801; "Morphology of Micromagnetics," Phys. Rev. B 38 (1988), 6824-6831.
2.3 K. Binder, "Theory of First-Order Phase Transitions," Rep. Progr. Phys. 50 (1987), 783-850.
2.4 J. D. Gunton, M. San Miguel, and P. S. Sahni, "The Dynamics of First-Order Phase Transitions," in Phase Transitions, Vol. 8 (London: Academic Press, 1983), 267-466.
2.5 W. F. Brown, Jr., "Thermal Fluctuations of Fine Ferromagnetic Particles," IEEE Trans. Magn. 15 (1979), 1196-1208.
2.6 H. Haken, "Cooperative Phenomena in Systems far from Thermal Equilibrium and in Nonphysical Systems," Rev. Mod. Phys. 47 (1975), 67-121.
2.7 P. Hänggi, P. Talkner, and M. Borkovec, "Reaction-Rate Theory: Fifty Years after Kramers," Rev. Mod. Phys. 62 (1990), 251-341.
2.8 L. Néel, "Théorie du Traînage Magnétique des Ferromagnétiques en Grains Fins avec Application aux Terres Cuites," Ann. Géophys. 5 (1949), 99-136 (in French).
2.9 W. F. Brown, Jr., "Thermal Fluctuations of a Single-Domain Particle," Phys. Rev. 130 (1963), 1677-1686.