TABLE OF CONTENTS:
I Some Foundations from Thermodynamics and from Heat and Mass Transfer 2
1. The first law of thermodynamics. 2
2. Properties of simple systems – p, υ, T. Evaporation and condensation. 4
3. Humidity. 7
4. Energy transfer with and without mass ... Second Law limitation on energy transfer. 13
II General Law of Conservation 17
1. Definition of system and its boundary. 17
2. Quantity and its species – conservation thereof. 17
3. Rate of change of quantity in a system. 17
4. Examples of lumped systems from various domains: mechanical, power, electrical, economic … 18
III State Equations of Lumped Systems 25
1. Definition of state. 25
2. State differential equations. 25
3. Output equations. 25
4. State equations – initial value problem. 25
IV Linear Systems 27
1. Linearisation of state equations. 27
2. Transient behaviour of the state and system. 29
3. Asymptotic stability of the state. 30
4. Transfer functions. 31
V Discrete-Time Models of Systems – Numerical Integration & Simulation 33
1. Finite difference approximation – difference equations. 33
2. Stability of linear difference equations. 35
3. Stability of explicit and implicit Euler methods. 35
4. Further examples of single-step simulation methods: Trapezoidal (Tustin), Eitelberg, Runge-Kutta … 37
5. Step size control. 46
LABORATORIES 47
1. MATLAB arrays, array operations, graphics. 47
2. Relative humidity as a function of temperature. 48
3. MATLAB Matrix and vector operations. 48
4. Eigenvalue and stability calculations. 49
5. Introduction to Simulink, sources, sinks and other blocks. 49
6. Simulation of state models with initial conditions (given method and step-size). 49
7. Time-delay modules in Simulink. 50
8. Output to work-space and transient plotting. 50
9. Coding the explicit Euler method for a stiff example. 51
10. Coding the linearly implicit Eitelberg method for a stiff example. 51
11. Plotting of the stability bounds of Eitelberg’s A-stable methods. 51
TUTORIALS 52
BIBLIOGRAPHY 53
SUMMATIVE EXERCISES 54
This chapter summarises some classical physics that has been developed by many
authors over many years. Generally, it is futile to try to give credit to anyone in
particular as most of this information is simply common knowledge. However, it is
acknowledged that the undergraduate textbook of Moran and Shapiro [1] has been
used extensively and due credit is hereby given to them.
m (dv/dt) = F_N                                                       (1.1)

We calculate the scalar product of the velocity v with the left and right hand sides
of this identity:

m v · (dv/dt) = v · F_N                                               (1.2)

The left hand side of eq. (1.2) is the rate of change of the kinetic energy of the
constant mass m:

m v · (dv/dt) = d/dt ( ½ m |v|² )                                     (1.3)

If this body is accelerated by a force F_N along a path described by the vector r(t),
then this force does work on this body at the rate of

P_N = F_N · (dr/dt) = F_N · v                                         (1.4)

In other words, the rate of kinetic energy increase of the constant mass m is equal to
the power P_N – the rate of work done on this mass:

d/dt ( ½ m |v|² ) = P_N                                               (1.5)
Energy is a quantitative property of the body in motion. Work is the mechanism by
which this energy is changed. The preceding derivation does not explicitly state that –
or under what conditions – this kinetic energy can be transformed into work. This is
related to the question of reversibility of processes – a major topic in thermodynamics.
It seems most introductory texts, like [1], prefer the integrated form of eq. (1.5):

½ m |v(t2)|² − ½ m |v(t1)|² = ∫[t1,t2] P_N dt = W_N                   (1.6)

Both the kinetic energy and the work are measured in units of

1 J = 1 N·m = 1 kg·m²/s²                                              (1.7)

J stands for joule, after the English physicist Joule. However, it is pronounced as in
French. Even though torque is also measured in units of N·m, it is not energy or
work.
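The work–energy relationship in eqs. (1.5)–(1.6) can be checked numerically. The sketch below (Python; the mass, force and step size are hypothetical values chosen for illustration) accelerates a constant mass with a constant force and compares the accumulated work ∫ P_N dt with the kinetic energy gained.

```python
m, F, dt = 2.0, 10.0, 1e-4      # mass [kg], constant force [N], time step [s]
v, W = 0.0, 0.0                 # velocity and accumulated work, starting from rest

for _ in range(10_000):         # simulate 1 s of motion
    W += F * v * dt             # accumulate the power P_N = F*v  (eq. 1.4)
    v += (F / m) * dt           # Newton's second law (eq. 1.1)

ke = 0.5 * m * v**2             # kinetic energy after 1 s (eq. 1.5)
# W and ke agree to within the discretisation error of the explicit Euler step
```

The small residual difference between `W` and `ke` shrinks with the step size – a first taste of the numerical-integration issues treated later in the course.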
One of the earliest extensions of the concept of kinetic energy seems to have been
the 'potential energy'. It can be motivated as follows. Under the given conditions, eq.
(1.5) fully describes the energy and work relationship of the body – if FN denotes the
total ('Newtonian') force vector acting on the body.
It turns out that it may be convenient to redefine some part, or all, of the work in eq.
(1.6) as negative potential energy. For example, if the total force vector F_N has a
gravitational force component F_g = m g – where g is the gravitational acceleration
vector pointing towards the centre of the earth and it is (nearly) constant – then we
obtain from eq. (1.6)

∫[t1,t2] F_g · v dt = −( m g z(t2) − m g z(t1) )                      (1.8)

where z is the height above a reference level and g = |g|. Substituting this equation
into eq. (1.6) and re-arranging yields

½ m |v(t2)|² + m g z(t2) − ½ m |v(t1)|² − m g z(t1) = ∫[t1,t2] P′_N dt   (1.9)

where P′_N is the power of all forces other than gravity.
It is a straight-forward matter to add other field forces to the gravitational field force
and thus extend the definition of the potential energy. Similarly, elastic potential
energy can be defined with respect to reversible elastic deformation forces. This
introduces the need to avoid double accounting of the relevant forces – either they are
covered on the energy side of the equation or on the work side of the equation, but not
both simultaneously!
Internal energy is used to cover a number of other forms of energy in bodies and
systems – such as thermal energy or energy stored in chemical, molecular and
atomic bonds. (The undergraduate text [1] considers elastic potential energy as part
of internal energy of a system.) Hence, eq. (1.5) is usually generalised as

d/dt ( ½ m |v|² + m g z + U ) = Ẇ                                     (1.10)

where Ẇ denotes the rate of all work done on the system other than by gravity. When
this work is done by changing the volume V of the system against a pressure p, its
rate is

Ẇ = −p (dV/dt)                                                        (1.11)
End of example.
So far, only the kinetic energy change strictly relies on Newton's second law.
The gravitational potential energy is still based on classical Newtonian physics. The
internal energy could be considered a thermodynamic concept. It is based on a large
body of experimental data.
A fundamentally thermodynamic extension to eq. (1.10) is based on the discovery
that energy of a body of mass – or of a system in general – can be changed by other
means than work done by an applied macroscopic force. Energy can be transferred
by heat. For example, when water is heated on a hot plate its energy is increased
even though no mechanical work is done. "This type of interaction is called …
energy transfer by heat." [1, p. 44] Heat transfer rate to a body or system is denoted
by Q̇. Hence, eq. (1.10) becomes more fully now

d/dt ( ½ m |v|² + m g z + U ) = Q̇ + Ẇ                                 (1.12)
Energy transfer by heat occurs only in the negative direction of the temperature
gradient. That is what we are taught in universities and it has been experimentally
and exhaustively verified. Nevertheless, one has to be careful. This statement is true
in observed macroscopic systems for conduction and radiation. It is not necessarily
true when mass and heat transfers coincide.
The commonly used terminology can be confusing, perhaps, because this area of
knowledge combines experimental scientific evidence from many fields of physics
and engineering and from different scientific traditions. The following quote from [1,
p. 49] should help:
"The terms work [W] and heat [Q] denote different means whereby energy is
transferred and not what is transferred. However, to achieve economy of expression
in subsequent discussions, W and Q are often referred to simply as work and heat
transfer, respectively. This less formal manner of speaking is commonly used in
engineering practice."
The first law of thermodynamics is nothing else but the energy balance – or energy
conservation – equation (1.12) above or its equivalent integral form below.
Δ( ½ m |v|² ) + Δ( m g z ) + ΔU = Q + W                               (1.13)
Solid, liquid (fluid), and vapour (gas) states. Critical point and triple line (point).
Phase change and two-phase mixtures. Sublimation - deposition, melting –
freezing/solidification, evaporation (vaporization) - condensation.
In the mixtures of phases – solid-vapour, liquid-vapour, or solid-liquid – the
temperature and pressure are not independent. One is defined by the other. The
ratio of material mass in the different phases of the mixture depends on the internal
energy of the mixture.
For actual calculations, use the appended tables in [1] or other sources, as in [10].
V = m_f ν_f + m_v ν_g                                                 (1.14)

Define the quality x = m_v/m_sat – where m_v is the vapour mass and m_sat = m_f + m_v
the total mixture mass – and note that 0 ≤ x ≤ 1. Hence, eq. (1.14) can be
written as

ν = V/m_sat = (1 − x) ν_f + x ν_g                                     (1.15)

ν = ν_f + x ( ν_g − ν_f )                                             (1.16)
End of example.
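The quality interpolation of eq. (1.15) is easy to exercise numerically. In the sketch below, the default saturated liquid and vapour specific volumes (water at 100 °C, about 101.325 kPa: ν_f ≈ 1.044×10⁻³ m³/kg, ν_g ≈ 1.673 m³/kg) are standard steam-table numbers, not values taken from this text's appendix.

```python
def wet_steam_v(x, vf=1.044e-3, vg=1.673):
    """Specific volume of a liquid-vapour mixture of quality x (eq. 1.15).
    Defaults: water at 100 degC from standard steam tables [m3/kg]."""
    if not 0.0 <= x <= 1.0:
        raise ValueError("quality x must lie between 0 and 1")
    return (1.0 - x) * vf + x * vg

v_half = wet_steam_v(0.5)   # 50% quality: roughly 0.837 m3/kg
```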
In many thermodynamic calculations, the internal energy U appears together with
the product between pressure p and the volume V. This combination is called
enthalpy, H = U + pV. The specific enthalpy is expressed as

h = u + p ν                                                           (1.17)
Just like in eq. (1.15) above, the specific internal energy and the specific enthalpy of
the wet steam can be calculated as follows.
u = (1 − x) u_f + x u_g                                               (1.18)

h = (1 − x) h_f + x h_g                                               (1.19)

Wet steam pressure and temperature are not independent. Hence the latent heat of
evaporation can be expressed as a function of pressure, h_fg(p), or of temperature,
h_fg(T):

h_fg = h_g − h_f                                                      (1.20)

For the superheated vapour – and for dry air – the ideal gas equation is a good
approximation:

p ν = (R̄/M) T                                                         (1.21)

where R̄ ≈ 8.314 kJ/(kmol·K) is the universal gas constant, M the molar mass in
kg/kmol, and T the absolute temperature.
(1.22)
(about 3000 m above sea level) the atmospheric pressure drops to about 70 kPa and
the air density at becomes 0.8 kg/m3.
End of example.
Another popular form of the ideal gas equation is obtained from eq. (1.21) by
multiplying both sides with the density:
p = (ρ/M) R̄ T                                                         (1.23)

The ratio ρ/M could be called the molar density, because it indicates the number of
moles in unit volume, measured in kmol/m3. A kmol is equal to a fixed number
(about 6.022×10^26) of ‘elementary entities’ – here we refer to molecules. Clearly, the
ideal gas equation is completely independent of which molecules the gas is
composed of. The pressure and temperature relationship depends only on the
number of molecules in a cubic meter.
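Eq. (1.23) is easy to exercise numerically. The sketch below evaluates ρ = pM/(R̄T); the 70 kPa pressure matches the altitude example above, while the 25 °C temperature is an assumption made here for illustration.

```python
R_BAR = 8.314        # universal gas constant [J/(mol*K)]
M_AIR = 0.028965     # molar mass of dry air [kg/mol]

def gas_density(p, T, M=M_AIR):
    """Ideal-gas density rho = p*M/(R*T), eq. (1.23); p in Pa, T in K."""
    return p * M / (R_BAR * T)

rho = gas_density(70e3, 273.15 + 25.0)   # about 0.82 kg/m3 at ~3000 m altitude
```

At sea-level conditions (101.325 kPa, 15 °C) the same function returns the familiar 1.225 kg/m³.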
3. Humidity.
The concept of humidity concerns lay people and experts alike. It is very important
to both, but it is misunderstood by the general public. "The study of such systems
… is known as psychrometrics." [1, p. 579]
By definition, dry air contains no water vapour. Technically, moist or humid air refers
to a mixture of dry air with water vapour. And this is precisely where the difficulties
begin. Generally, mixtures of simple or not so simple substances – salt, sugar, liquid
acids, oil, oxygen, nitrogen or other substances in water, alcohol and other liquids –
may cause complex changes in the physical properties even when the mixed
quantities are small compared to one main substance. Amazingly, it turns out that
humidity has practically nothing to do with air – in or near the conditions in which
humans can survive in the atmosphere. Hence, I begin by completely ignoring the
air and the associated thick layer of cattle manure in common or popular
explanations.
Humidity, as measured in practice, is entirely a matter of some space containing
water vapour and it does not depend on whether this space also contains air or not!
Humidity is simply a property of water vapour in relation to the saturated vapour
line in Figure 1.1 above. The most relevant portion of this generic figure is
reproduced in Figure 1.2 specifically for water (T-ν diagramme is convenient here).
[Figure 1.2: T-ν and T-p saturation diagrammes for water. Left panel: temperature
(−40 to 140 °C) against specific volume [m3/kg] on a logarithmic scale, showing the
vapour, liquid + vapour and solid + vapour regions with constant-pressure lines at
0.10, 0.61, 6, 35 and 70 kPa. Right panel: saturation temperature against
pressure [kPa].]
The mass of vapour per unit volume – the vapour density, sometimes called the
absolute humidity – is

ρ_v = m_v / V                                                         (1.24)

Apart from adding a new name to the same old thing, it is not new. It is also of
rather little interest in typical applications in the earth’s atmosphere.
Of greater interest is what is called the relative humidity. But first, recall that to the
left of the vapour saturation curve in the T-ν diagramme, the steam (vapour) quality
was defined as the ratio of the mass m_v of the gaseous phase (vapour) to the
saturated wet steam mixture’s mass m_sat at the same temperature – x = m_v/m_sat.
The quality is 100% on the saturation curve and less to the left of it.
One can formally extend the definition range of x to characterise the quality of
vapour to the right of the vapour saturation curve – note that msat no longer contains
liquid water and mv is superheated. This is one, albeit not the most popular,
definition of relative humidity, RH = m_v/m_sat. Divide the numerator and
the denominator by the volume V to yield RH = ρ_v/ρ_sat. Here too, this ratio
is 100% on the vapour saturation line and less to the right of it.
Apparently, the most popular definition is based on the ratio of the superheated
vapour pressure to the saturation pressure at the same temperature:
RH = p_v / p_sat(T)                                                   (1.25)

However, within the limits of accuracy that the superheated vapour is governed by
the ideal gas equation, the two definitions actually coincide. Substitute the density
ρ = p M/(R̄ T) from eq. (1.21) into eq. (1.25) to yield a very good approximation for the
definition in eq. (1.25):

RH = ρ_v / ρ_sat(T) = p_v / p_sat(T)                                  (1.26)
The popularity of the RH definition as the ratio of pressures in eq. (1.25) may have to
do with the fact that it relates to the concept of dew point, or dew point temperature.
For every superheated vapour state there is a unique constant pressure line on the
T-ν diagramme. This pressure line crosses the saturation curve at the saturation
temperature – this is called the dew point temperature Td. Once we know the dew
point temperature of the given superheated vapour, its pressure is uniquely defined
by this Td alone. Hence
RH = p_sat(T_d) / p_sat(T)                                            (1.27)
This corollary to the definition in eq. (1.25) is independent of the ideal gas
approximation in eq. (1.26) and it is valid both above and below freezing
temperatures. The dew point temperature of superheated vapour could be measured
by cooling this vapour at constant pressure until the first signs of condensation or
deposition occur. (Something similar is used in meteorological practice when the wet
bulb technique is used to determine humidity in the air – see further below. The wet
bulb temperature is always between the air temperature and dew point of water.)
(Alternatively, the dew point could be defined by constant density cooling of the
superheated vapour – this would combine nicely with the relative humidity definition
as the density ratio. I am not aware of any such proposal actually having been made
previously.)
Both saturation curves in Figure 1.2 have very good analytic approximations. In
particular, the saturation T-p curve can be approximated with the Magnus equation.
p_sat(T) = C exp( a T / (b + T) )                                     (1.28)
Apparently, there is some doubt about the naming of eq. (1.28) – see [9]. Also, the
variously reported parameters are not particularly suitable in the temperature range
of Figure 1.2. The following parameters were chosen by me and yield saturation
curves that are difficult to visually distinguish from the tabulated values.
(1.29)
The temperature enters eq. (1.28) in °C and the pressure results in kPa. If high-
precision calculations are required, the reader needs to investigate the voluminous
literature and then decide which formula and what parameters are most suitable for
her or his particular needs. For the purposes of this course, the above parameters
are adequate.
Substituting the Magnus equation (1.28) into the ideal gas equation, we can
evaluate the saturation specific volume as follows. Recall that ν = R̄ T_K /(M p),
where T_K = T + 273.15 is the absolute temperature:

ν_sat(T) = R̄ (T + 273.15) / ( M p_sat(T) )                            (1.30)
Both calculated saturation curves are indicated in Figure 1.2 with dash-dotted lines
that are hard to visually distinguish from the (thick) solid lines that are based on
tabulated data.
Substituting the Magnus equation (1.28) into the relative humidity equation (1.27)
yields
RH = p_sat(T_d) / p_sat(T) = exp( a T_d/(b + T_d) − a T/(b + T) )     (1.31)
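The Magnus-type relationships of eqs. (1.28)–(1.31) can be sketched in code as below. Note that the fitted parameters a, b, C used here are the commonly cited Alduchov–Eskridge values, not the parameters chosen in this text, so small numerical differences from the tables are to be expected.

```python
import math

A, B, C = 17.625, 243.04, 0.61094   # assumed Magnus fit: T in degC, p in kPa

def p_sat(T):
    """Saturation pressure over liquid water, eq. (1.28) form [kPa]."""
    return C * math.exp(A * T / (B + T))

def dew_point(pv):
    """Invert the Magnus equation for the dew point temperature [degC]."""
    y = math.log(pv / C)
    return B * y / (A - y)

def rel_humidity(T, Td):
    """Relative humidity from temperature and dew point, eqs. (1.27)/(1.31)."""
    return p_sat(Td) / p_sat(T)

# e.g. p_sat(30.0) comes out close to the tabulated 4.25 kPa at 30 degC
```

Inverting `p_sat` analytically, as `dew_point` does, is exactly the corollary of eq. (1.27): the dew point alone fixes the vapour pressure.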
Before continuing, let me clarify that all of the above theory – like thermodynamics
in general – relates to equilibrium (static) states of the substances and systems. Do
you notice the contradiction in terms? I would have proposed ‘thermostatics’, but it
seems too late for that. As a consequence, we need to distinguish between the
humidity at equilibrium and the dynamic and transient process of net evaporation,
sublimation, condensation and deposition. In the former subject matter, the
spontaneous evaporation or sublimation rate is balanced by the spontaneous
condensation or deposition rate. Non-zero net evaporation (or scientifically
vapourisation) can take place in various physically observable forms, such as
– evaporation from the surface of a macroscopically homogeneous body of
liquid water (drops of water are homogeneous enough for this discussion)
without significant rapid change of the surface area;
– evaporation by ‘boiling’, the onset of which is indicated by a significant
increase of the surface area between the two phases due to the appearance of
vapour bubbles inside the previously homogeneous liquid phase.
The main difference between the two seems to be the rate of phase change that is
required to keep the phases close to equilibrium. For example, this rate of phase
change may depend on the rate of heating, or on the rate of pressure reduction. In a
sense, boiling is further away from thermodynamic equilibrium than surface
evaporation. However, both forms of evaporation still take place rather close to the
equilibrium saturation curves in Figure 1.2.
When boiling takes place close enough to the surface of the liquid phase, then the
vapour pressures above the liquid and in the bubbles inside the liquid must be
approximately equal. The same is true for vapour temperature above and inside the
liquid water.
Now we are ready for the tricky part – we add air! In so far as we are concerned
with humidity in thermodynamic equilibrium, we are primarily interested in the
gaseous atmosphere of air and water vapour mixture – that is to the right of the
saturation curve on the T-ν diagramme in Figure 1.2.
However, when we have to address the process of changing humidity, we may have
to understand also what is happening to the left of the saturation curve – in the
liquid or solid phase of water. In the following analysis, we ignore the air dissolved in
the liquid and solid phases of water. For example, at equilibrium it is less than
100 mg per litre of water – mostly much less, and under 0.01% by mass.
Commonly, air denotes the mixture of all of its component gases – that includes
water vapour if present. However, in the present context we need to clearly
distinguish between (dry) air and water vapour in the mixture that is called humid
air. In humid air, all component gases have the same common temperature T.
However, the component densities, specific volumes and pressures (to be defined
shortly below) are not common and they are generally different from the uniform
mixture pressure p, density ρ, or specific volume ν. Because of the conservation of
mass, the individual component partial densities of vapour ρv = mv/V and air ρair =
mair/V add up to the mixture density:
ρ_v + ρ_air = ρ                                                       (1.32)
Because of the conservation of the number of molecules in the non-reacting
mixtures, the number of moles of individual components in a unit volume add up to
the number of moles of the mixture – if the average molar mass is defined
appropriately. Denoting the average molar mass of humid air with M yields the
‘molar density’ balance equation:
ρ_v/M_v + ρ_air/M_air = ρ/M                                           (1.33)
On the one hand, this equation can be used to calculate the average air molar mass
M = ρ / ( ρ_v/M_v + ρ_air/M_air )                                     (1.34)

This result is in agreement with the experimentally derived Dalton’s law, now more
than two centuries old. Here, we go one step further. Define the partial pressure
ratio p_v/p and use the relevant ideal gas equations to yield
p_v / p = (ρ_v/M_v) / (ρ/M)                                           (1.35)
This partial pressure ratio depends only on the molar density ratio and it is
independent of temperature. Hence, cooling and heating of humid air does not
change the partial pressure ratio – unless the humid air composition is changed. An
important consequence of this is that the dew point temperature of water can be
found by cooling humid air at constant air pressure – until the onset of
condensation or deposition, after which the air composition and partial pressure
ratios change.
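Eqs. (1.33)–(1.35) can be collected in a short numerical sketch: given the two component densities, compute the average molar mass and the vapour partial-pressure (mole) fraction. The molar masses are standard values; the sample densities are hypothetical.

```python
M_V, M_AIR = 18.015, 28.965    # molar masses [kg/kmol]: water vapour, dry air

def humid_air_mix(rho_v, rho_air):
    """Average molar mass (eq. 1.34) and vapour pressure fraction (eq. 1.35)
    from the component densities (any common unit)."""
    n_v, n_air = rho_v / M_V, rho_air / M_AIR    # molar densities
    M_avg = (rho_v + rho_air) / (n_v + n_air)    # eq. (1.34)
    return M_avg, n_v / (n_v + n_air)            # p_v/p, independent of T

M_avg, y_v = humid_air_mix(0.03, 1.10)   # hypothetical densities in kg/m3
```

As expected, the average molar mass lands between the two component values, and adding vapour makes humid air lighter than dry air at the same pressure and temperature.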
Under the presently considered conditions, all humidity characteristics – including
RH – depend on the (partial) vapour pressure, (common) temperature and (molar)
density of vapour alone. This is correct to the right of the saturation curve, in the
gaseous phase of water.
Thus, for example, the relative humidity definition in eq. (1.25) has to be written as
RH = p_v / p_sat(T)                                                   (1.36)
However, all dew point related derived equations need no modification, because the
dew point temperature is not affected by the presence of air at the liquid-vapour
interface.
Whenever we have liquid or solid water present in the considered volume or system,
we need to consider carefully what happens at and across the interface between the
humid air and the liquid or solid phases of water (approximated as pure for the
purposes of this course).
In equilibrium, at the humid air interface:
– The humid air temperature T is common to all phases and all gases on both
sides of the saturation curve.
– The liquid pressure is equal to the total humid air pressure p. This pressure
is the weight of the atmosphere above 1 m2 of area. Hence, it does not change
with local change of humidity, density, air composition, or temperature. It
changes with altitude.
A corollary is that the water tables and graphs need to be re-interpreted, because the
saturated vapour and liquid pressures are not necessarily equal at equal temperature!
Pressure in the solid phase seems to be of no interest in this section.
Humans feel the (relative) humidity indirectly. At saturation, no sweat from the skin
can evaporate spontaneously. The lower the RH value, the faster water from the
skin can evaporate. Evaporation draws internal energy from the skin (to add latent heat
to water vapour). (However, the skin temperature can ideally only reach the wet bulb
temperature – not the lesser dew point. The reasons go beyond the scope of this
course.) Lower internal energy content of the skin (predominantly water) lowers the
temperature of the skin. That and the associated heat transfer is what we humans
feel.
The above example hit the limits of the equilibrium theory of water saturation rather
hard. A closely related subject matter is the common boiling of water in the
atmosphere. During evaporation from the liquid surface at any temperature, vapour
is produced at the liquid phase surface until its partial pressure in the air reaches
the saturation pressure at the given temperature. However, one cannot boil water at
any temperature, unless its pressure is equal to, or below, the corresponding
saturation pressure of pure water.
Boiling can happen locally in the vicinity of very hot surfaces, when the temperature
drops at a (small) distance from the heating surface – think also of Leidenfrost. That
goes beyond the bounds of this section. Here, boiling is associated with producing
and maintaining vapour bubbles within the liquid water at or very near saturation
conditions. The liquid and vapour saturation pressures on the liquid side of the
humid air interface are equal because there is no ingress of air into the vapour
bubbles – as opposed to the humid air above the boiling water surface. Hence in
principle, boiling cannot occur at or near equilibrium conditions with (humid) air
because the saturation pressure p_sat(T) remains below the total pressure p – except
when all of the dry air has been replaced with water vapour near the surface of the
boiling water. This is the reason why boiling water vapour visibly condenses in the
air – the surrounding air cools the 100%
humid vapour before it dissipates (and re-evaporates) in the surrounding drier air.
In conclusion, the atmospheric pressure is the liquid water saturation pressure and
it alone determines the equilibrium boiling temperature of pure water.
p(h) ≈ p0 exp( −h / 8.4 )                                             (1.37)
The altitude h enters this equation in km and the sea level pressure is p0 – let us
assume the average 101 kPa. Since the height of Table Mountain is a little more
than 1000 m, the atmospheric pressure on the mountain is about 90 kPa. As the
pressure drops, the relative humidity drops – contrary to some www nonsense.
Even if we assume the extreme of 100% humid air blowing in from the adjacent
Atlantic Ocean at, say, 30 °C and pv = 4.25 kPa, by the time it reaches the top of the
mountain, it would become drier if only its pressure dropped to 90 kPa. Since the
composition of the rising air does not change before cloud-forming, the vapour
partial pressure drops proportionally to the atmospheric pressure drop: pv =
4.25*90/101 kPa = 3.79 kPa. That would yield a drier RH = 3.79/4.25 = 89% – to
the right of saturation – if there was no change in temperature.
The cause for cloud forming is the simultaneous cooling of the expanding humid air.
Let us calculate the maximum temperature on top of Table Mountain for clouds to
form there, when the sea level humidity is 80% at 30 °C. We could use the analytic
relationship in eq. (1.28), or we could use water tables. Let us do the latter and leave
the former as an exercise. At sea level, from eq. (1.36), pv = 0.8*4.25 kPa = 3.40 kPa.
On the mountain the vapour partial pressure drops to pv = 3.40*90/101 kPa = 3.03
kPa. From Table A-2, at 3.03 kPa, we find the saturation temperature of 24 °C – 6
degrees below the temperature in town. With RH = 100% at sea level, the necessary
temperature drop would have been a mere 2 degrees for clouds to form.
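The worked example above can be reproduced programmatically. The sketch below uses assumed Magnus parameters (the commonly cited Alduchov–Eskridge fit, not necessarily this text's eq. (1.29) values), so the result lands near, rather than exactly on, the tabulated 24 °C.

```python
import math

A, B, C = 17.625, 243.04, 0.61094     # assumed Magnus fit: T in degC, p in kPa

def p_sat(T):
    return C * math.exp(A * T / (B + T))      # eq. (1.28)

def dew_point(pv):
    y = math.log(pv / C)                      # invert eq. (1.28)
    return B * y / (A - y)

p_sea, p_top = 101.0, 90.0            # kPa: sea level and top of the mountain
pv_sea = 0.80 * p_sat(30.0)           # 80% RH at 30 degC, eq. (1.36)
pv_top = pv_sea * p_top / p_sea       # composition unchanged while rising
Td_top = dew_point(pv_top)            # clouds form below this temperature (~24 degC)
```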
One should ask: why does air become cooler when it expands? Here, only a brief
explanation can be provided. As atmospheric air expands locally, it spends internal
energy on working against the atmospheric pressure according to eq. (1.11) – look
for ‘adiabatic expansion’ in the literature. Lower internal energy lowers the air
temperature. The adiabatic cooling rate of rising (expanding) unsaturated air is
about 10 degrees per km and about half that with simultaneous condensation.
End of example.
4. Energy transfer with and without mass: convection, conduction and radiation. Second
Law limitation on energy transfer.
The first law of thermodynamics was concerned with energy conservation in
something we call a system. What a system is was left vague in this course, but it
should be intuitively understood from the context. Now, we extend the analysis of
thermodynamic systems in two directions. First, the system is associated with a
volume with defined boundaries. These boundaries need not be fixed and the volume
may change. Second, energy balance alone is very seldom sufficient. We need to
consider the mass m of, or in, the system. In classical physics, mass is a quantity
that is conserved rather trivially. The rate of change of mass in the system is equal
to the net flow-rate of mass into the system across its boundary. (Outflow rate is
negative inflow rate.) This leads to the almost tautological differential equation
dm/dt = Σ_i ṁ_i                                                       (1.38)

where the ṁ_i are the mass-flow rates across the boundary, inflows counted positive.
It is true that the actual calculation of the mass-flow rate may be complicated in
fluid and gaseous phases and that is why we need experts in mass transfer theory.
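Eq. (1.38) is also the simplest possible state equation to simulate, which connects this chapter to the numerical-integration material later in the course. A minimal explicit-Euler sketch, with hypothetical flow rates:

```python
def simulate_mass(m0, mdot_in, mdot_out, dt, steps):
    """Explicit-Euler integration of dm/dt = mdot_in - mdot_out (eq. 1.38)."""
    m = m0
    for _ in range(steps):
        m += (mdot_in - mdot_out) * dt
    return m

m_end = simulate_mass(5.0, 0.20, 0.05, 0.01, 1000)   # 10 s at net +0.15 kg/s
```

With constant flow rates the Euler step is exact; the interesting cases, where the flow rates depend on the state itself, are the subject of chapters III–V.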
As a consequence of the mass transfer across the system boundary, we now have to
deepen the understanding of energy conservation in such systems.
Clearly, all mass has some internal, kinetic and potential energy. This energy is
carried across the system boundary together with the mass that crosses this
boundary. Let the boundary have a number of mass crossing areas with uniform
internal, kinetic and gravitational energies. Hence, eq. (1.12) needs to be extended
as follows.
d/dt ( ½ m |v|² + m g z + U ) = Q̇ + Ẇ + Σ_i ṁ_i ( u_i + ½ |v_i|² + g z_i )   (1.39)
It must not be forgotten that the work transfer rate Ẇ does not include work done by
the gravitational force on the mass within the system boundary – it was split off the
work transfer rate and redefined as potential energy change in the system. Here,
another splitting of work is added for cases where mass is transferred in the form of
fluids or gases. This transfer contains a significant amount of work spent on
overcoming pressure at the system boundary. This is related to work done to change
volume in eq. (1.11) and is split off of Ẇ. At a uniform fluid boundary, the flow work
rate can be calculated as ṁ p ν – see p. 130 in [1]. We shall continue using Ẇ
to denote all other energy transfer rates across the system boundary by work! Hence
eq. (1.39) becomes
d/dt ( ½ m |v|² + m g z + U ) = Q̇ + Ẇ + Σ_i ṁ_i ( u_i + p_i ν_i + ½ |v_i|² + g z_i )   (1.40)

This is the main (or only?) reason why the notion of enthalpy was introduced in
thermodynamics – with h = u + p ν,

d/dt ( ½ m |v|² + m g z + U ) = Q̇ + Ẇ + Σ_i ṁ_i ( h_i + ½ |v_i|² + g z_i )   (1.41)

Just remember, in eq. (1.41), Ẇ excludes work done by gravitational field forces on
the system and work done to move material over the boundary at fluid interfaces
(that includes gases). The former is accounted for in potential energy and the latter
is accounted for in the specific enthalpy term. Some may call it an accounting trick,
I call it useful tautology.
Let us turn the attention to heat transfer. It is traditionally classified into three main
forms: heat transfer by conduction, by convection and by radiation.
Energy transfer by conduction takes place in solids, liquids and gases. Its rate is in
practice proportional to the temperature gradient and heat is transferred in the
opposite direction of the gradient. In one-dimensional transfer along the x axis that
is directed into the system, through a surface area dA that is perpendicular to x, one
can write
dQ̇ = −k dA (∂T/∂x)                                                    (1.42)

The thermal conductivity is denoted by k. Generally, good electrical conductors are
also good thermal conductors and vice versa. For example, thermal and electrical
conductivity of solid aluminium are 240 W/(K·m) and 38 MS/m respectively. For
solid copper they are 400 W/(K·m) and 60 MS/m. Deionized saturated water
(without additives) has a thermal conductivity of only around 0.5 W/(K·m) and a
matching electrical conductivity of somewhere above 5 µS/m.
Energy transfer by convection is less precisely defined – it takes place at solid (or
liquid) surface of area dA at a temperature Tb in contact with moving gas or liquid at
a temperature Tf. Its rate is in practice proportional to the temperature difference
and heat is transferred in the direction of the lower of the two temperatures.
Assuming that Tf indicates the temperature outside the system,
dQ̇ = h dA ( T_f − T_b )                                               (1.43)
The heat transfer coefficient h is an empirical number and depends strongly on the
fluid flow speed relative to the system boundary. Table 2.1 of [1] indicates values of
h in the range of 2-25 W/(K·m2) for gases in free convection and in the range of
25-250 W/(K·m2) for gases in forced convection. Achievable values are orders of
magnitude higher with liquids.
Energy transfer by radiation takes place between solids, liquids and gases.
Calculation of its rate is very interesting, while rather complicated, but it goes
beyond the scope of this course. However, the foundation of these calculations is
simple enough.
All matter can emit and absorb electromagnetic radiation – in thermodynamics it is
called thermal radiation. The rate at which energy is emitted from a surface element
dA at a temperature Tb is quantified by a modified Stefan-Boltzmann law
dQ̇_e = ε σ T_b^4 dA                                                   (1.44)

The emissivity ε of the surface depends on surface properties. The Stefan-Boltzmann
constant is σ ≈ 5.67×10^-8 W/(m2·K4). The complications in the
calculation of the net heat transfer arise from various causes, such as directional
variation of radiation, highly variable reflection or absorption of emissions from
other (visible) surfaces of varying temperatures and spectral influence on emissivity,
reflectivity and absorptivity. See for example [2] for further reading and Appendix 1
in [7] where the present writer documents the whole dynamical simulation model for
an industrial brazing furnace with predominantly radiative heat transfer.
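The three rate laws of eqs. (1.42)–(1.44) can be collected in a small sketch. The sign conventions follow the text: x points into the system, T_f is the fluid temperature outside the boundary, and heat into the system is positive. All numerical inputs below are illustrative.

```python
SIGMA = 5.67e-8    # Stefan-Boltzmann constant [W/(m2*K4)]

def conduction(k, dA, dT_dx):
    """Fourier conduction through area dA, eq. (1.42); x directed into the system."""
    return -k * dA * dT_dx

def convection(h, dA, T_f, T_b):
    """Convection at a boundary surface of temperature T_b, eq. (1.43)."""
    return h * dA * (T_f - T_b)

def radiated_emission(eps, dA, T_b):
    """Emitted radiant power, modified Stefan-Boltzmann law, eq. (1.44); T_b in K."""
    return eps * SIGMA * dA * T_b**4

# e.g. a 1 m2 surface at 300 K with emissivity 0.9 emits about 413 W
```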
Finally, we need to mention the second law of thermodynamics. Actually, there isn't
one. There are two popular explanations of the so-called second law – the Clausius
statement of the second law and the Kelvin-Planck statement of the second law.
Experts say they are equivalent. Hence, only one should suffice here as an
introduction.
The primary motive for the second law seems to have been the scientists' inability to
predict the direction of processes. The knowledge of the first law alone does not
(necessarily) determine the direction of a process. For example: will kinetic energy be
converted into potential energy, or the other way around; will a system store internal
energy on the basis of work transfer into the system, or will internal energy be spent
on system transferring work to other systems. It seems engineers do not have such
doubts.
Be that as it may, electrical engineers should be puzzled by the lack of evidence for
spontaneous heat flow against temperature gradient – there is no such lack of
evidence of electrical current flowing spontaneously against electric potential
gradient. For example, an alternating current transformer is perfectly able to raise
the voltage difference from the input to the output purely by passive means – there
is no need to add work into this process/system. In well designed transformers, the
losses are negligible – relatively speaking. Mechanical engineers are envious,
because they have not been able to find a 'transformer' that would raise temperature
differences without irreversibly adding work and losing very significant amounts of
energy.
We quote from p. 177 of [1]. "The Clausius statement of the second law asserts that:
It is impossible for any system to operate in such a way that the sole result
would be an energy transfer by heat from a cooler to a hotter body."
The most obvious two systems that relate directly to this formulation of the second
law are refrigerators and heat pumps. Both transfer energy from cooler to hotter
bodies. This does not violate the second law because both systems receive work from
outside the system. Their compressors are driven usually by electric motors or
directly by mechanical motors. Less commonly, electro-thermal effects, such as
Seebeck or Peltier, are used. In all known cases, there is some other effect that
accompanies heat transfer when energy is transferred in the direction of the
temperature gradient.
One of the most important engineering corollaries to the second law relates to
systems that produce work cyclically between two thermal reservoirs. One is at the
hot temperature THot and the other is at the cold temperature TCold. QHot and QCold are
heats transferred to the system from the hot reservoir and from the system to the
cold reservoir respectively during one cycle. The work produced during the cycle is
W = QHot − QCold. If we are interested in a machine that uses thermal energy to
produce work, then the thermal (or energy) efficiency is defined as
η = W/QHot = 1 − QCold/QHot    (1.45)
For refrigerators and heat pumps, the coefficients of performance are defined as
COPref = QCold/W and COPhp = QHot/W respectively.
From the second law of thermodynamics, the theoretically maximum efficiencies and
COPs of thermal cycles are
ηmax = 1 − TCold/THot,  COPref,max = TCold/(THot − TCold),  COPhp,max = THot/(THot − TCold)    (1.46)
Since the cold side temperature is mostly just the environmental temperature of
about 300 K, the only way to improve an efficiency is by maximising the hot side
temperature. That means, for example, burning fuel in turbines and car motors at
maximum possible temperature – which is limited by materials used in such
turbines or motors.
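These second-law limits can be sketched in a short calculation, assuming the standard Carnot expressions; the temperatures (in kelvin) below are illustrative values, not taken from the text.

```python
# A minimal sketch of the second-law limits, assuming the standard Carnot
# expressions; the temperatures (in kelvin) are illustrative, not from the text.
def carnot_efficiency(t_hot, t_cold):
    """Maximum thermal efficiency of a cyclic work-producing machine."""
    return 1.0 - t_cold / t_hot

def cop_refrigerator_max(t_hot, t_cold):
    """Maximum coefficient of performance of a refrigerator."""
    return t_cold / (t_hot - t_cold)

def cop_heat_pump_max(t_hot, t_cold):
    """Maximum coefficient of performance of a heat pump."""
    return t_hot / (t_hot - t_cold)

# Raising the hot-side temperature is the only way to improve the limit
# when the cold side is fixed at the environmental temperature of about 300 K:
print(carnot_efficiency(900.0, 300.0))   # lower firing temperature
print(carnot_efficiency(1500.0, 300.0))  # higher firing temperature
```

The numbers confirm the point made above: with the cold side fixed near 300 K, only a higher hot-side temperature raises the theoretical limit.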
II General Law of Conservation
If we consider a quantity, then the generalised statement of conservation of such a
quantity can be written as follows.
rate of change of quantity in the system = rate of transfer + rate of transformation    (2.1)
'Transfer' means transfer into the system across its boundary. 'Transform' means
transformation from another species of the quantity within the system boundary.
This equation is helpful in explaining why such properties of systems as
temperature and pressure are not quantities.
4. Examples of lumped systems from various domains: mechanical systems, power systems,
electrical systems and circuits, economics, and others.
(2.2)
Solving for Tmin and substituting numbers yields
(2.3)
This is a good starting point but not very realistic for the given 'under-powered'
passenger vehicle. (The corresponding zero-to-hundred time would be 10 seconds –
check it!) One of the problems is that car motors cannot be operated in a constant
(maximum) power delivery mode – the available power is motor speed dependent, among other
things. Here we shall consider another shortcoming of the above calculation – it
ignores entirely the losses between the power input and the car acceleration.
In order to keep it as simple as possible, we consider friction losses dominated by air
friction. Friction in the air is approximately proportional to the square of the speed:
Ffrict = kfrict·v²    (2.4)
We begin by considering the conservation of the kinetic energy of the laden car, Ek.
Its rate of change is
dEk/dt = P − Ffrict·v    (2.5)
m·v·(dv/dt) = P − kfrict·v³    (2.6)
Let and . Substitute these and the other given values into
the differential equation (2.6):
(2.7)
Figure 2.1: Charge transfer through resistor. The switch is closed at t = t0.
Just before the switch is closed at the initial time t0, each capacitor has an initial
electrical charge – q01 and q02.
If we are interested in the entire circuit of Figure 2.1 as a system, then there is
nothing much of interest happening – the total charge q = q01 + q02 is conserved and
constant. The mass is not changing and there is no apparent energy transfer either
(we shall return to this later). Everything happens within the system. If this is of
interest, then we need to model at least one of the capacitors as a system (or a
sub-system). However, it is usually more straightforward to model each capacitor as a
separate system – but not always.
Now we need to decide which fundamental quantities are of interest here. Mass is
not changing. Hence, there is no need to write mass conservation equations.
Electrical charge and energy in a capacitor are related algebraically:
E = q²/(2C) = C·v²/2    (2.8)
Hence only one of them needs to be modelled – the other can always be calculated
from the model. We shall model the charges. Charge transfer rate is current i. Let us
define the current direction through R as positive when it flows from C1 to C2. Hence
the two charge conservation equations are
dq1/dt = −i,  dq2/dt = i    (2.9)
In order to solve these differential equations we need to define the current i. Before
that, we need to calculate the voltages at the two capacitors: v1 = q1/C1 and v2 = q2/C2. Hence,
i = (v1 − v2)/R = (q1/C1 − q2/C2)/R    (2.10)
The resulting model can be written as
dq1/dt = −(q1/C1 − q2/C2)/R,  dq2/dt = (q1/C1 − q2/C2)/R    (2.11)
Since this course is taught to electrical engineering students, an alternative
formulation of the same model is preferable, because electrical engineers do not
usually measure charges. Scientists may do so but not engineers. Hence, we shall
substitute q = C·v into the left-hand side of eq. (2.9). Notice that only for constant
capacitances do we obtain the simple differential equations for voltages,
dv1/dt = −(v1 − v2)/(R·C1),  dv2/dt = (v1 − v2)/(R·C2)    (2.12)
We shall deal with solving such systems of linear differential equations a little later.
Presently, we return to the equation of current by noting that at all times the total
charge remains constant and q2 = q − q1. Hence, only one of the two
differential equations in eq. (2.11) needs to be solved to determine all transients in
this particular special case:
dq1/dt = −(1/R)·(q1/C1 − (q − q1)/C2)    (2.13)
The solution to this first order differential equation is
q1(t) = q1∞ + (q01 − q1∞)·e^(−(t−t0)/τ),  τ = R·C1·C2/(C1 + C2)    (2.14)
After long enough time, the final charge distribution on the capacitors will be
q1∞ = q·C1/(C1 + C2),  q2∞ = q·C2/(C1 + C2)    (2.15)
Clearly, the total charge in the combined system cannot change in this example, but
what about energy? Let us start with the electrical energy in the two capacitors. At
the initial time, the total electrical energy is given as
E0 = q01²/(2C1) + q02²/(2C2)    (2.16)
The final total electrical energy is obtained from eq. (2.15) as
E∞ = q²/(2(C1 + C2))    (2.17)
It is straightforward to show that the final energy is less than the initial
energy (unless the initial voltages are equal) by the following amount:
∆E = E0 − E∞ = (C1·C2/(2(C1 + C2)))·(q01/C1 − q02/C2)²    (2.18)
If, for example, C1 = C2 and only one of the two capacitors is initially charged, then
exactly one half of the electrical energy is ‘lost’ – compare equations (2.16) and
(2.17). Where does it go?
Well, students are taught that current flowing through a resistor heats this resistor.
That means energy is transferred from the capacitor subsystems to the resistor
subsystem. The current transient is obtained from eq. (2.10) by substituting q1 from
eq. (2.14):
i(t) = ((q01/C1 − q02/C2)/R)·e^(−(t−t0)/τ)    (2.19)
The heating rate is classically given as P = R·i². Integrating it gives the total energy
transfer to the resistor:
∫t0→∞ R·i²(t) dt = E0 − E∞    (2.20)
That seems satisfactory – at first sight. All of the lost electrical energy went to
heating the resistor. Curiously, this is entirely independent of the value of the
resistance, R!
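The energy bookkeeping of this example can be sketched numerically; the capacitance and charge values below are illustrative, not taken from the text.

```python
# A numerical sketch of the two-capacitor energy balance; the capacitances and
# initial charges are illustrative values, not taken from the text.
def energies(c1, c2, q01, q02):
    """Initial and final total electrical energies of the two capacitors."""
    q = q01 + q02                              # total charge is conserved
    e0 = q01**2 / (2 * c1) + q02**2 / (2 * c2)  # initial energy
    e_inf = q**2 / (2 * (c1 + c2))              # final energy
    return e0, e_inf

# Equal capacitances, only one capacitor initially charged:
e0, e_inf = energies(1e-6, 1e-6, 1e-4, 0.0)
print(e_inf / e0)  # 0.5 – exactly half of the energy is 'lost'
```

Note that the resistance R does not appear in the energy bookkeeping at all, consistent with the observation above that the loss is independent of R.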
Careful observers of electrical experiments should now object – something is not
right here. The reason for my raising this point is the following.
If one connects two charged capacitors in parallel by closing a switch, or simply
bringing the capacitor leads into electrical contact, then one cannot fail to notice a
flash of light and an explosive sound. We assume negligible resistance and we
assume the voltages of the two capacitors are different before closing of the switch.
The light flash is electromagnetic radiation that removes some energy from the
combined system. Also, sound waves transport energy from this system. Both
amounts of energy must come from the difference in the electrical energy in eq.
(2.18). If nothing was heated – when there is no resistance – then all of the energy
difference is radiated away by electromagnetic field and by sound waves in a
medium. How the electrical energy loss is distributed between heating,
electromagnetic radiation, sound waves, or something else, I do not know precisely –
it would be very complicated to calculate this distribution from all available
theories/equations. However, the sum of these losses must be given by eq. (2.18) if
we believe in thermodynamics and hence in energy conservation.
In this circuit, with or without a resistor, charge transfer from one capacitor to another
is energy inefficient. Is this necessarily so? To answer this question in the negative,
one needs to find ways of reversing the lost energy difference in eq. (2.18). A thermal
loss is almost always only partially recoverable, if at all. However, electromagnetic
field loss is recoverable if it can be sufficiently contained near the capacitor circuit.
One such containment is achieved when the resistor R in Figure 2.1 is replaced by a
variably configured inductor. The inductor stores/contains energy in its magnetic
field and is capable of transferring practically all of it to a capacitor. A portion of
energy may be radiated away, but that can be limited by avoiding too high frequency
transients. This is the foundation for energy efficient switched-mode power supplies
of buck-boost type.
End of example.
(2.21)
A richer person may own productive capital, which depreciates at some rate that is
specific to the type of capital. Depreciation could be considered as destruction of
wealth, or it could be considered as transformed into the product that is sold to
create income – closer analysis reveals that there is some of both in reality. The
model becomes then
(2.22)
Do not assume that the additional negative wealth rate makes the rich person
poorer than the working person. While this is in many cases true, successful
rich people arrange their earnings to be much larger than those of a working person
– because they can.
Equation (2.22) is a macro-economic model – it does not show the interactions
between the many persons that collaborate and compete in the same market, state,
or world. A more realistic model has to include many such differential equations and
additional equations that describe the interactions between these differential
equations. I think this should suffice as a starter. For example, progressive taxation
of income, property and most other things could be considered as part of expenses,
but its calculation can be very complex.
End of example.
(2.23)
If U and I denote the root mean square (RMS) line voltage and line current
respectively, then the three-phase generator delivers electrical power to a balanced
three-phase load according to
P = √3·U·I·cos φ    (2.24)
where cos φ is the well-known power factor. φ is the phase angle between the
sinusoidal phase-voltage and phase-current and it is equal to zero for ‘real’ or
resistive loads. Much engineering effort is put into designing and operating
electricity grids near unit power factor. Hence, cos φ = 1 is assumed in this example.
Now, we restrict attention to the case where a single turbo-generator unit supplies
electrical energy to a balanced resistive three-phase load. In addition, we consider
the load to be star-connected (Y-connected). If each phase resistance of the load is
denoted with Rp then the supply line current is related to the line voltage as
I = U/(√3·Rp)    (2.25)
Hence, the delivered power is
P = √3·U·I = U²/Rp    (2.26)
Inside the generator, the generator speed is related to the generated voltage through
the magnetic field. Assuming negligible armature winding impedance, this relationship
is approximated by
(2.27)
where B denotes the (rotating) magnetic field flux density in the air gap between the
rotor and stator poles:
(2.28)
Ix is a DC current that is supplied to the rotor coils via slip rings. Power
systems practitioners tell us that the excitation current is proportional to the
excitation voltage:
(2.29)
With this understanding, equations (2.27) to (2.29) can be combined to
(2.30)
In real generators, kx varies with the generator’s speed of rotation. Substitution into
eq. (2.26) yields
(2.31)
With constant inertia J, we obtain the final version of this model
(2.32)
In power systems engineering, there seems to exist a belief that field excitation
controls the generator voltage. However, eq. (2.26) does not allow constant rotor
speed in case of (transient) power imbalances and, in steady state, the excitation
cannot affect the voltage in eq. (2.30). Instead, it controls the steady state generator
speed.
In steady state, the generator rotates at a constant speed Ω and delivers electrical
power at the line voltage of
(2.33)
(2.34)
End of example.
III State Equations of Lumped Systems
1. Definition of state.
Using the conservation law of quantities leads quite naturally to a set of first order
differential equations that describe both the transient and the steady state
behaviour of the system of interest. Thermodynamics and system theory define the
state of a system differently. But the longer one studies both, the more these
definitions look similar. The rest of this Chapter III follows closely what I wrote almost
30 years ago in a book that deals with optimal estimation, [3].
Any variable that is differentiated on the left hand side of the conservation equations
is a state variable. We combine all these variables into a state vector. The state
vector defines the state of the system. We standardise the notation as follows.
x is a state variable and x is a state (column-) vector with n variables xi as its
components.
u is an input variable and u is an input (column-) vector with p variables ui
as its components.
y is an output variable and y is an output (column-) vector with q variables
yi as its components.
n is the order of the system.
For example, in eq. (2.32) the generator rotational speed is the state. The excitation
voltage and fuel power are inputs. Anything else of interest can be defined as
output. For example the supply frequency or the supply voltage can be outputs if
they interest us. Both are calculated from the state and inputs – usually in an
algebraic equation.
2. State differential equations.
Using this standard notation we combine all equations into two vector equations.
ẋ = f(x, u)    (3.1)
The solution x(t) of the state differential equation is defined for any given input u(t)
by the initial condition x(t0) = x0. The dot above x indicates the time derivative.
3. Output equations.
The second equation is the output equation.
y = g(x, u)    (3.2)
ẋ = f(x, u, t),  y = g(x, u, t)    (3.3)
The state equations are generally non-linear and all variables are generally functions
of time. Sometimes, it is convenient to show explicitly the system’s dependence on
26
some parameters that are considered as separate from the inputs. If these
parameters are combined into a parameter vector p and they too depend on time t
then we get a more detailed form of a general model of a dynamical system:
ẋ = f(x, u, p(t), t),  y = g(x, u, p(t), t)    (3.4)
For the engineering design of dynamical systems, there are not many good design
techniques that are applicable to or effective on non-linear models. Fortunately, a
good enough linear approximation to eq. (3.3) can often be found. See below.
Example: Motorcar.
Let us consider the motorcar equation (2.6).
(3.5)
Since we may be interested in the location of the car along a given path, we add the
trivial ‘conservation of space’ equation that indicates how the travel distance s
changes over time:
ṡ = v    (3.6)
The state vector contains the speed v and distance s in arbitrary order. Let us define
x = [x1, x2]ᵀ = [v, s]ᵀ    (3.7)
We choose the variable motor power as the input u. Let the distance travelled, s, and
the travelling speed be the outputs of interest:
y = [y1, y2]ᵀ = [s, v]ᵀ    (3.8)
The complete state equations are for this example:
(3.9)
The system parameters are kfrict and m.
End of example.
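This example's state equations can be sketched in code, assuming the power-balance form m·v·dv/dt = u − kfrict·v³ for the speed implied by the kinetic energy conservation of Chapter II; the mass and friction-coefficient values below are hypothetical, not from the text.

```python
# A minimal sketch of the motorcar state equations, assuming the power-balance
# form m*v*dv/dt = u - k_frict*v**3 for the speed; the parameter values below
# (laden mass m and air-friction coefficient k_frict) are hypothetical.
M = 1500.0       # kg, assumed laden mass
K_FRICT = 0.4    # N*s^2/m^2, assumed air-friction coefficient

def f(x, u):
    """Right-hand side of the state differential equations.
    x = [v, s] is speed and distance, u is motor power in W; valid for v > 0."""
    v, s = x
    dv = (u - K_FRICT * v**3) / (M * v)   # from the kinetic energy balance
    ds = v                                # 'conservation of space'
    return [dv, ds]

def g(x, u):
    """Output equation: distance travelled and speed."""
    v, s = x
    return [s, v]
```

The function f is exactly what a numerical integrator of Chapter V consumes: the state vector in, its time derivative out.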
IV Linear Systems
1. Linearisation of state equations.
(4.1)
Equation (4.1) defines an operating trajectory. If the operating condition is constant,
then it is called an operating point. We assume that the first derivatives of f
and g exist at the operating condition. If these derivatives do not exist at the operating
condition then the system cannot be approximated with a linear system at this
trajectory or point. Now, the idea is to expand both non-linear functions in the
Taylor expansion. The Taylor series does not need to converge – we shall ignore the
second and higher order terms. Formally, we write
f(x, u) = f(x̄, ū) + (∂f/∂x)·(x − x̄) + (∂f/∂u)·(u − ū) + HOT,  and similarly for g(x, u)    (4.2)
HOT stands for ‘Higher Order Terms’. It is customary to denote the four Jacobian
matrices as follows.
A = ∂f/∂x,  B = ∂f/∂u,  C = ∂g/∂x,  D = ∂g/∂u  (all evaluated at the operating condition)    (4.3)
For small enough deviations from the operating condition, we obtain a linear
approximation for the state equations:
ẋ ≈ f(x̄, ū) + A·(x − x̄) + B·(u − ū),  y ≈ g(x̄, ū) + C·(x − x̄) + D·(u − ū)    (4.4)
This is not what is found in the systems or control textbooks used by professors in
universities all over the world. The ‘standard’ formulation of the textbook state
equations is obtained by two additional assumptions:
The operating condition is a steady state operating point.
The state equations are written for the (small) deviations from this steady
state operating point.
28
Steady state is understood literally – the state satisfies ẋ = 0. (Usually, this
requires that the input is constant too – a deeper analysis of this exceeds the scope of this
text.) It follows that f(x̄, ū) = 0. The deviations are denoted by ∆x = x − x̄,
∆u = u − ū, and ∆y = y − ȳ. Now, the approximate linear model with constant
coefficients is obtained from eq. (4.4) as
∆ẋ ≈ A·∆x + B·∆u,  ∆y ≈ C·∆x + D·∆u    (4.5)
Equation (4.5) is not really an equation that can be solved for the state vector ∆x – it
merely states that there is some approximation between the left and right hand
sides. It is customary to replace the approximation sign with the equality sign and to
drop the deviation sign ‘∆’. With this ‘sleight of hand’, we obtain the standard form of
state equations:
ẋ = A·x + B·u,  y = C·x + D·u    (4.6)
Note that the solution of eq. (4.6) does not give us the original x or ∆x – it is an
approximation only. Among all the approximations made in engineering practice,
however, this one is usually not the most significant.
Example: … of linearisation.
A system is defined as:
(4.7)
Derive the linear state equations around the steady state for . The solution
follows.
In steady state,
(4.8)
It follows from eq. (4.3):
(4.9)
These matrices are to be substituted into eq. (4.6).
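When the partial derivatives are tedious to derive by hand, the Jacobian matrices of eq. (4.3) can also be evaluated numerically. The sketch below uses central finite differences; the right-hand side f is illustrative only and is not the example system of eq. (4.7).

```python
# A sketch of evaluating the Jacobian matrices of eq. (4.3) numerically by
# central finite differences. The right-hand side f below is illustrative only
# and is not the example system of eq. (4.7).
def jacobian(f, x0, eps=1e-6):
    """Matrix of partial derivatives df_i/dx_j evaluated at x0."""
    n = len(f(x0))
    m = len(x0)
    J = [[0.0] * m for _ in range(n)]
    for j in range(m):
        xp = list(x0); xp[j] += eps
        xm = list(x0); xm[j] -= eps
        fp, fm = f(xp), f(xm)
        for i in range(n):
            J[i][j] = (fp[i] - fm[i]) / (2 * eps)
    return J

# Illustrative non-linear right-hand side, linearised at x = (2, 1):
def f(x):
    return [-x[0]**2 + x[1], -3.0 * x[1]]

A = jacobian(f, [2.0, 1.0])   # approximately [[-4, 1], [0, -3]]
```

The same routine applied with respect to the input vector yields the B and D matrices.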
A comparison between the non-linear and linear system output transients is shown
in the following two figures. In both figures, the systems start from the same steady
state operating condition. In Figure 4.1, the deviations are ‘small’ and the linear
model is a good match. In Figure 4.2, the deviations are evidently not ‘small’ and the
linear model step response deviates significantly from the response of the original
(non-linear) system. Calculation of these results is based on material that is
presented in Chapter V, further ahead.
[Figure: comparison of the non-linear output and the linear output; vertical axis from 5.5 to 6.4; horizontal axis: Time (seconds), 0 to 15.]
Figure 4.1: Small deviations from the steady-state operating condition. The input
deviation is a step of +0.05 at t = 5.
[Figure: output transient; vertical axis from 0 to 8; horizontal axis: Time (seconds), 0 to 15.]
Figure 4.2: Significant deviations from the steady-state operating condition. The
input deviation is a step of +0.5 at t = 5.
End of example.
In the time domain analysis, the matrix exponential e^(At) is of crucial importance. It
is defined by taking the Taylor series of the scalar exponential e^(at) and replacing the
scalar a with the matrix A:
e^(At) = I + A·t + (A·t)²/2! + (A·t)³/3! + …    (4.10)
(4.11)
(4.12)
(e^(At))⁻¹ = e^(−At)    (4.13)
In other words, the inverse of the matrix exponential always exists – the matrix
exponential is never singular.
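The series definition and the non-singularity claim can be checked numerically; the 2×2 matrix A below is illustrative, and the raw truncated series is only a sketch (production codes use scaling-and-squaring instead).

```python
# A numerical sketch of the series definition of the matrix exponential and of
# its non-singularity; the 2x2 matrix A is illustrative. A production code
# would use a scaling-and-squaring routine instead of the raw series.
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def expm(A, terms=30):
    """exp(A) = I + A + A^2/2! + ... , truncated after 'terms' terms."""
    n = len(A)
    result = [[float(i == j) for j in range(n)] for i in range(n)]  # I
    term = [row[:] for row in result]                               # A^0/0!
    for k in range(1, terms):
        term = [[t / k for t in row] for row in matmul(term, A)]    # A^k/k!
        result = [[result[i][j] + term[i][j] for j in range(n)]
                  for i in range(n)]
    return result

A = [[0.0, 1.0], [-2.0, -3.0]]
negA = [[-a for a in row] for row in A]
P = matmul(expm(A), expm(negA))   # approximately the 2x2 identity matrix
```

The product of exp(A) and exp(−A) comes out as the identity matrix to machine precision, which is the numerical face of the inverse relation above.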
(4.14)
(4.15)
(4.16)
Now the (transient) time-domain solution of the state differential equation is found
as follows.
(4.17)
(4.18)
(4.19)
(4.20)
(4.21)
Thus, the transient solution of the state differential equation and the corresponding
output are given as
x(t) = e^(A(t−t0))·x(t0) + ∫t0→t e^(A(t−τ))·B·u(τ) dτ,  y(t) = C·x(t) + D·u(t)    (4.22)
The matrix exponential is so prominent in the above solution that it is called a
state transition matrix:
Φ(t) = e^(At)    (4.23)
Calculate the time to accelerate from 50 to 100 km/h from the linearised
motorcar model.
Example: … continuation.
The system matrix of the previous example is
(4.24)
The characteristic equation is
det(λI − A) = (λ + 2)(λ² + 4) = 0    (4.25)
The eigenvalues are −2 and ±2j. The system is not asymptotically stable.
End of example.
s·X − x(0) = A·X + B·U    (4.26)
Solving for X yields
X = (sI − A)⁻¹·x(0) + (sI − A)⁻¹·B·U    (4.27)
Comparing equations (4.22) and (4.27) we must conclude that
L{e^(At)} = (sI − A)⁻¹    (4.28)
From the same comparison we find that the Laplace transform of a convolution
integral is equal to the product of the Laplace transforms of the two convolved
matrices:
L{∫t0→t e^(A(t−τ))·B·u(τ) dτ} = (sI − A)⁻¹·B·U(s)    (4.29)
In the classical systems’ and control theory, one is interested in the transfer
characteristics of systems. That means we assume x(0) = 0. Now eliminate X from eq.
(4.27):
Y = [C·(sI − A)⁻¹·B + D]·U    (4.30)
Whence the transfer matrix is defined as
G(s) = C·(sI − A)⁻¹·B + D    (4.31)
For a single-input-single-output (SISO) system, this matrix is a scalar and it is
called the transfer function.
Example: … continuation.
From eq. (4.31):
(4.32)
Matrix inversion requires more calculations than solution of the associated linear
system of equations (in this case). Define
(4.33)
(4.34)
The transfer function results as
(4.35)
End of example.
G(s) = g·(s − β1)·…·(s − βm) / ((s − λ1)·…·(s − λn))    (4.36)
The complex-valued λ and β are the poles and zeros of the transfer function. The
real-valued g is a gain. In much of the linear system theory, the transfer function is
defined after all identical pole-zero pairs have been cancelled in eq. (4.36).
Accordingly, the order of the transfer function may be less than the order of the
system’s state equations.
V Discrete-Time Models of Systems –
Numerical Integration, Numerical Simulation
1. Finite difference approximation – difference equations.
ẋ = f(x, u, t)  (SDE),   y = g(x, u, t)  (OE)    (5.1)
We focus on the SDE here, because the OE is algebraic and explicitly defines the
output from the known input and the solution of the SDE. The terms ‘numerical
simulation’, ‘numerical solution of the SDE’, and ‘numerical integration’ (of the right
hand side of eq. (5.1)) are used synonymously here.
Recall from mathematics that the derivative can be defined by the following limiting
process, if it converges:
ẋ(t) = lim∆t→0 (x(t + ∆t) − x(t))/∆t    (5.2)
If this limit exists, then there is a small enough positive ∆t, for which the following
is a good approximation.
ẋ(t) ≈ (x(t + ∆t) − x(t))/∆t    (5.3)
The same finite difference expression in the right hand side of eq. (5.3) approximates
the derivative at any other time point close to the interval between t and
t + ∆t. Thus, also,
ẋ(t + ∆t) ≈ (x(t + ∆t) − x(t))/∆t    (5.4)
Let us continue from eq. (5.3) at first. Solve for x(t + ∆t) and substitute the state
derivative from eq. (5.1):
x(t + ∆t) ≈ x(t) + ∆t·f(x(t), u(t), t)    (5.5)
Equation (5.5) tells us that the new value of the state is approximated by the
expression on the right hand side. A recursive algorithm is obtained by replacing the
approximation sign with the equation sign – but then we no longer get the true
sequence of x(ti)! The strictly correct way to do this is by introducing a new
sequence
xi ≈ x(ti),  ti = t0 + i·∆t    (5.6)
… and calculating the new sequence from the following algorithm
xi+1 = xi + ∆t·f(xi, ui, ti)    (5.7)
This is called the Euler method. For reasons that will become obvious later, here it is
called the explicit Euler method. From the point of view of system theory, eq. (5.7) is
a state difference equation. It defines the state in discrete-time as opposed to the
state differential equation that defines the state in continuous-time.
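The algorithm of eq. (5.7) can be sketched in a few lines; the test problem dx/dt = −x, x(0) = 1 (exact solution exp(−t)) and the step size are illustrative choices.

```python
# A minimal sketch of the explicit Euler algorithm of eq. (5.7) on the scalar
# test problem dx/dt = -x, x(0) = 1, whose exact solution is exp(-t); the step
# size is an illustrative choice.
import math

def euler_explicit(f, x0, dt, steps):
    """Generate the sequence x_0, x_1, ..., x_steps of eq. (5.7)."""
    xs = [x0]
    for _ in range(steps):
        xs.append(xs[-1] + dt * f(xs[-1]))   # x_{i+1} = x_i + dt*f(x_i)
    return xs

xs = euler_explicit(lambda x: -x, 1.0, 0.01, 100)
print(xs[-1], math.exp(-1.0))   # both approximately 0.37
```

The computed sequence is close to, but not identical with, the exact solution – exactly the distinction between xi and x(ti) made above.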
The accuracy of any numerical method depends primarily on the method itself, on
the chosen step size that may be varied from step to step, and on the differential
equation (5.1).
The contribution of the method to error is commonly characterised by its order. The
order of the method is equal to the order m of the error in the following equation:
xi − x(ti) = O(∆t^m)    (5.8)
For simplicity, a constant step-size is assumed for the error analysis. If eq. (5.8) is
true then the single-step error must be [4]:
xi+1 − x(ti + ∆t) = O(∆t^(m+1)),  assuming xi = x(ti)    (5.9)
For proof see for example Lambert (1973), [4]. The order of single-step numerical
methods can be determined by expanding xi+1 − x(ti + ∆t) in a Taylor series until we
find the first non-zero term. The order of the method is then one less than the
order of the first non-zero term in this Taylor expansion – or, equivalently, the
highest order of the initial (low order) zero terms.
x(ti + ∆t) = xi + ∆t·f(xi) + (∆t²/2)·J·f(xi) + O(∆t³),  J = ∂f/∂x at xi    (5.10)
The explicit Euler method in eq. (5.7) is already in the form
of a truncated Taylor expansion. Substitution into eq. (5.9) yields
xi+1 − x(ti + ∆t) = −(∆t²/2)·J·f(xi) + O(∆t³) = O(∆t²)    (5.11)
Conclusion: the explicit Euler method is a first order numerical method, because the
single-step error in eq. (5.11) is of order ∆t², i.e. m + 1 = 2.
Let us now turn our attention to eq. (5.4). Repeating the steps above, we arrive at the
implicit Euler method/algorithm:
xi+1 = xi + ∆t·f(xi+1, ui+1, ti+1)    (5.12)
Here, the next state is given implicitly – the non-linear algebraic equation still
needs to be solved for the unknown xi+1. This is often a very difficult task. The
important question is: is this effort worth it? The brief answer is: no and yes. The
longer version will have to wait a little longer. For now, we shall only prove its order.
The first three Taylor terms of the solution are given in eq. (5.10) above. The Taylor
expansion of the implicit Euler method in eq. (5.12) requires some effort. Note that
xi+1 = xi + O(∆t), hence f(xi+1) = f(xi) + ∆t·J·f(xi) + O(∆t²):
xi+1 = xi + ∆t·f(xi) + ∆t²·J·f(xi) + O(∆t³)    (5.13)
Substitution of equations (5.10) and (5.13) into eq. (5.9) yields
xi+1 − x(ti + ∆t) = (∆t²/2)·J·f(xi) + O(∆t³) = O(∆t²)    (5.14)
Conclusion: both Euler methods are first order numerical methods, because the
global error in eq. (5.8) converges to 0 as fast as ∆t. From the point of view of the
accuracy of the method, clearly the explicit version is preferable. It is as accurate as
the implicit version but much simpler to evaluate – that is the reason for the ‘no’
part in the brief answer above.
End of example.
xi+1 = e^(A∆t)·xi + ∫0→∆t e^(A(∆t−τ))·B·u(ti + τ) dτ    (5.15)
The pure state transition part in eq. (5.15) is exact and entirely discrete in time. The
transfer of the input, however, is entirely continuous-time. An LTI state difference
equation needs to have a discrete input term too:
xi+1 = Ad·xi + Bd·ui    (5.16)
This model is impossible for continuous-time systems – unless the shape of the
input is fixed and known between the time-points ti and ti+1, or we accept that it is
an approximation. It is indeed exact when the analog system input is formed
by, for example, a pulse amplitude modulating digital-to-analog converter (DAC) at
the output of a computer, micro-controller, or similar. In that case, the system input
is constant between the DAC conversion instants and eq. (5.15) becomes (5.16)
with
Ad = e^(A∆t),  Bd = ∫0→∆t e^(Aτ)·B dτ    (5.17)
A discrete-time system is asymptotically stable, iff all eigenvalues of Ad have a
magnitude less than one. This applies to eq. (5.15) as well as to eq. (5.16) and is
carried over from continuous-time systems via the following observation.
|λd| = |e^(λ∆t)| = e^(Re(λ)·∆t) < 1  ⟺  Re(λ) < 0    (5.18)
ẋ = A·x    (5.19)
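The eigenvalue-magnitude test for discrete-time stability can be sketched for a 2×2 Ad using the closed-form eigenvalues of a 2×2 matrix; the matrices below are illustrative.

```python
# A sketch of the discrete-time stability test: asymptotic stability iff every
# eigenvalue of Ad has magnitude below one. The 2x2 matrices are illustrative.
import cmath

def eig2(A):
    """Closed-form eigenvalues of a 2x2 matrix from its characteristic polynomial."""
    tr = A[0][0] + A[1][1]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    disc = cmath.sqrt(tr * tr - 4.0 * det)
    return [(tr + disc) / 2.0, (tr - disc) / 2.0]

def is_stable_discrete(Ad):
    return all(abs(lam) < 1.0 for lam in eig2(Ad))

print(is_stable_discrete([[0.5, 0.2], [-0.1, 0.4]]))   # True
print(is_stable_discrete([[1.1, 0.0], [0.0, 0.5]]))    # False
```

For larger systems the same test applies, with the eigenvalues computed by a general-purpose routine.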
Lambert [4] defines two types of numerical method stability as follows.
A numerical method is said to be A-stable, if applied to an [asymptotically]
stable equation ẋ = A·x, it yields xi+1 = Ad·xi, where all the eigenvalues
of Ad are inside the unit circle for any ∆t > 0. The method is said to be
L-stable, when in addition Ad → 0 as ∆t → ∞.
Eitelberg [7] re-defined in 1983 the above two types of numerical method stability as
A0-stable and A1-stable respectively. More rigorously, Eitelberg [7] considers
numerical methods that, when applied to eq. (5.19), yield
(5.20)
(5.21)
(5.22)
Equation (5.22) characterises single step convergence to steady state for
large step sizes. For the Runge-Kutta type of numerical methods considered in this
work, it follows from eq. (5.21) and does not need separate verification for each
method. (This may have been proven in one of my early publications.)
Applying the explicit Euler method (5.7) to the test equation (5.19) yields
xi+1 = (I + ∆t·A)·xi    (5.23)
Here,
λd = 1 + ∆t·λ    (5.24)
Obviously, this simulation method is unstable with any stable A when the step size is
too large. We calculate the geometric domain of λ for which the simulation is stable
with |λd| < 1:
|1 + ∆t·λ| < 1    (5.25)
That means that the simulation is stable iff all eigenvalues λ are inside a circle with
the radius of 1/∆t and centred on λ = −1/∆t. See Figure 5.1. The explicit Euler method
is not A-stable – or A0-stable.
Figure 5.1: Stability range of the explicit and implicit Euler methods.
Applying the implicit Euler method (5.12) to the test equation (5.19) yields
xi+1 = (I − ∆t·A)⁻¹·xi    (5.26)
Here,
λd = 1/(1 − ∆t·λ)    (5.27)
We calculate the geometric domain of λ for which the simulation is stable with
|λd| < 1:
|1 − ∆t·λ| > 1    (5.28)
That means that the simulation is stable iff all eigenvalues λ are outside a circle
with the radius of 1/∆t and centred on λ = +1/∆t. See Figure 5.1.
That means the implicit Euler method simulates any stable LTI system stably. It is
obviously A0-stable. Since an asymptotically stable system matrix A is not singular,
it follows that λd → 0 as ∆t → ∞ in eq. (5.27) and the implicit Euler method is also A1-
stable – eq. (5.26) delivers the correct steady state in a single long step. Note,
however, that for a large enough step size, an unstable system will deceptively yield a
stable simulation result.
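The contrast between the two methods shows up immediately on a stiff scalar test equation, using the standard scalar amplification factors 1 + λ∆t (explicit) and 1/(1 − λ∆t) (implicit); λ and the step size below are illustrative choices.

```python
# A sketch comparing the two Euler methods on a stiff, stable scalar test
# equation dx/dt = lam*x with lam = -100 and an illustrative step size
# dt = 0.05, so that lam*dt = -5 lies outside the explicit stability circle.
lam, dt = -100.0, 0.05
x_exp = x_imp = 1.0
for _ in range(20):
    x_exp = x_exp * (1.0 + lam * dt)   # explicit Euler amplification
    x_imp = x_imp / (1.0 - lam * dt)   # implicit Euler amplification

print(abs(x_exp))  # grows without bound: |1 + lam*dt| = 4 > 1
print(abs(x_imp))  # decays to zero:      |1/(1 - lam*dt)| = 1/6 < 1
```

The exact solution decays rapidly to zero; the explicit simulation explodes while the implicit one decays, which is the 'yes' part of the answer about whether the implicit effort is worth it.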
The trapezoidal method is also known as Tustin’s method in the British
literature. It can be derived by averaging the approximations in eq. (5.3) and eq.
(5.4):
(x(t + ∆t) − x(t))/∆t ≈ (ẋ(t) + ẋ(t + ∆t))/2    (5.29)
From here the (implicit) trapezoidal method is obtained as
xi+1 = xi + (∆t/2)·(f(xi, ui, ti) + f(xi+1, ui+1, ti+1))    (5.30)
The state increments of the explicit (eq. (5.7)) and implicit (eq. (5.12)) Euler
methods are averaged. Hence, the order of the method can be found by averaging
the respective Taylor expansions in eq. (5.11) and in eq. (5.14). It is not proven here,
but the trapezoidal method is second order accurate. That is an advantage over the
Euler methods, but the uniqueness of the trapezoidal method lies in its stability
characteristics.
Applying the trapezoidal method (5.30) to the LTI test equation (5.19) yields
xi+1 = (I − (∆t/2)·A)⁻¹·(I + (∆t/2)·A)·xi    (5.31)
Hence,
λd = (1 + ∆t·λ/2)/(1 − ∆t·λ/2)    (5.32)
|λd| < 1  ⟺  Re(λ) < 0    (5.33)
That means, with the trapezoidal method:
all stable systems are simulated stably with any positive step size;
all unstable system simulations are unstable with any positive step size.
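Both bullet points can be verified numerically with the standard Tustin amplification factor λd = (1 + λ∆t/2)/(1 − λ∆t/2); the complex eigenvalue and step-size values below are illustrative.

```python
# A sketch of the trapezoidal (Tustin) amplification factor
# lam_d = (1 + lam*dt/2)/(1 - lam*dt/2); the eigenvalue and step-size values
# are illustrative complex/real numbers.
def tustin_gain(lam, dt):
    z = lam * dt / 2.0
    return (1.0 + z) / (1.0 - z)

# A stable eigenvalue (Re lam < 0) maps inside the unit circle for ANY dt > 0,
# and an unstable one (Re lam > 0) maps outside it:
print(abs(tustin_gain(-1.0 + 5.0j, 10.0)))  # < 1
print(abs(tustin_gain(+1.0 + 5.0j, 10.0)))  # > 1
```

In other words, the trapezoidal method maps the left half of the λ-plane exactly onto the inside of the unit circle, which is the uniqueness claimed above.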
(5.34)
It is the explicit Euler method with and it is the linearly implicit Euler
method with . The latter is A1-stable.
The papers [6] and [7] give examples of second order multi-stage Runge-Kutta type
methods that are independent of the Jacobian splitting or approximation, are
linearly implicit and are A-stable with . In the paper [5] of 1979, I defined a
general form of the 3-stage algorithms as follows.
(5.35a)
(5.35b)
(5.35c)
(5.35d)
(5.35e)
In the same paper, I admitted that I was not able to systematically calculate suitable
parameters a, b, c, and d. By trial and error, I found a set of parameters for a 2-
stage algorithm with a3 = 0. But I rejected it and, until very recently, published a
number of different sets of parameters for 3 stages – these gave me advantages for the
simulation of very complex and stiff power plant steam generators. These
advantages included automatic step size control and A2 stability. The reader is
referred to the three publications [5], [6] and [7] for specific details and some useful
choices of parameters. Here, I present a previously unpublished new formulation.
In January 2023, I took a new look at this class of linearly implicit methods.
After about 45 years, I noticed that there was a more convenient and equivalent
formulation of the class of algorithms in equations (5.35) – replace akbk with ak, and
ckrbr with ckr. Furthermore, I no longer have a need for A2 stability and I think that
two stages are sufficient for a good A1 stable method with the property that its 2nd
order is independent of the use of the Jacobian matrix of the system. With only two
stages, eqs. (5.35) are now converted to
(5.36a)
(5.36b)
(5.36c)
(5.36d)
The two stages suffice to achieve 2nd order accuracy – see below. It is not shown
here that the two stages are not enough to reach 3rd order accuracy. The first three
terms of the Taylor expansion of the exact solution of a state differential equation
around a known x(ti) are given in eq. (5.10). To determine the conditions for the
parameters a, b, c, and d for the accuracy order of 2, we need to start with the same
xi = x(ti). We start by writing the Taylor expansions for the individual stages ∆x1(∆t)
and ∆x2(∆t). It is elementary that ∆x1(0) = ∆x2(0) = 0. For the first and second order
terms in the Taylor expansion, we need the first two derivatives of ∆xk(∆t) at ∆t = 0.
For the first stage (k = 1), we rewrite eq. (5.36b) as
(5.37)
Differentiate both sides of this equation twice with respect to ∆t:
(5.38)
At ∆t = 0:
(5.39)
Hence, the Taylor expansion can be written as
(5.40)
For the second stage (k = 2), we rewrite eq. (5.36c) as
(5.41)
Proceed as above:
(5.42)
At ∆t = 0:
(5.43)
Hence, the Taylor expansion can be written as
(5.44)
Substitute equations (5.40) and (5.44) into eq. (5.36a) to yield the Taylor expansion
of xi+1(∆t).
(5.45)
Now, compare the numerical solution in eq. (5.45) to the exact solution in eq. (5.10).
For first order accuracy, the only condition is a1 + a2 = 1. The conditions for second
order accuracy depend on how one intends to use the Jacobian matrix J in the
algorithm (5.36). If S(J) = J then we have a set of three conditions:
(5.46a)
(5.46b)
(5.46c)
However, my goal in this work has always been the independence of the second
order accuracy from the choice or use of S(J). Hence, eq. (5.46b) has to be split in
two. The term with the ‘arbitrary’ S(J) in eq. (5.45) is neutralised by the condition
a1b1 + a2b2 = 0. Hence, I shall continue with the following set of four conditions for
second order accuracy.
(5.47a)
(5.47b)
(5.47c)
(5.47d)
Note that eq. (5.47b) makes the second order of the method independent of the
Jacobian ‘approximation’ – this is equivalent to using the explicit form of the
algorithm (5.36) by setting b1 = b2 = 0. The explicit form’s discrete-time eigenvalue
equation – with – is:
(5.48)
Prove eq. (5.48).
The stability bound is obtained from eq. (5.48) by setting the magnitude of the
discrete-time eigenvalue to 1 and solving this equation for λ∆t.
(5.49)
This stability bound is shown in Figure 5.2 and it does not depend on the actual
values of the parameters – as long as they satisfy equations (5.47a) and (5.47c).
Figure 5.2: Stability range of the complex-valued λ∆t for any explicit version of
the 2nd order method (5.36).
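The parameter independence can be checked numerically: with b1 = b2 = 0, any two-stage method satisfying the 2nd order conditions has the stability function R(q) = 1 + q + q²/2 with q = λ∆t – a standard result for explicit two-stage 2nd order methods. A sketch in Python (used here instead of MATLAB purely for illustration):

```python
import numpy as np

def R(q):
    """Stability function of any explicit two-stage 2nd order method."""
    return 1.0 + q + 0.5 * q**2

# Scan the negative real axis: the method is stable where |R(q)| <= 1.
q = np.linspace(-3.0, 0.0, 3001)
stable = np.abs(R(q)) <= 1.0
q_min = q[stable].min()   # left end of the real stability interval
print(q_min)              # close to -2, consistent with Figure 5.2
```

The real stability interval [-2, 0] agrees with the bound shown in Figure 5.2.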
To derive the implicit form’s discrete-time eigenvalue equation, equations (5.36) with
the full Jacobian are applied to the linear test equation and yield the
eigenvalues of Ad according to
(5.50a)
(5.50b)
(5.50c)
The simplifications in equations (5.50b) and (5.50c) are valid for both sets of second
order conditions expressed in equations (5.46) or (5.47).
The above 2nd order class of methods is …
A0-stable if
(5.51)
A1-stable if, in addition to eq. (5.51),
(5.52)
There are many coefficients that satisfy these equations and yield an A0 stable
simulation method – the reader is challenged to discover some good ones and
compare them. In the meantime, I shall consider A1 stable methods further. Hence
equations (5.47) need to be augmented with N2 = 0. The combined and re-ordered
equations for an A1 stable method are now
(5.53a)
(5.53b)
(5.53c)
(5.53d)
(5.53e)
For A0 and A1 stability, it is necessary that μJ in eq. (5.50a) has no pole in the left
half of the complex plane, hence b1 > 0 and b2 > 0. It then follows from eq. (5.53b)
that either a1 or a2 must be negative. If c21 > 0 in eq. (5.53e) then a1 < 0, a2 > 0 is the
only option.
Start by solving the three equations (5.53a-c) for the four unknowns a1, a2, b1, and
b2. Select a value for b1. Then from eq. (5.53c)
(5.54)
Substitute this b2 into eq. (5.53b):
(5.55)
The solution is
(5.56)
To summarise, the algorithm in equations (5.36) is 2nd order accurate with any
dimensionally correct matrix S when a value of b1 is selected from either of the two
ranges
(5.57a)
Then
(5.57b)
(5.57c)
(5.57d)
(5.57e)
(5.57f)
In the fully implicit form, S = J and
(5.58)
A cursory analysis indicates that the fully implicit version is A1 stable with any value
of b1 in eq. (5.57a). The lower range of b1 values yields a significantly larger single
stability bound in the right half of the complex plane than the higher range of b1
values. The higher range creates a small stability bound inside the instability range,
while the lower range does not. The reader is encouraged to prove any or all of these
observations. The stability bounds will be calculated for a number of b1 values in the
Laboratory number 11.
Presently, my favourite choice is based on the lower range value of b1 = 2/5:
(5.59)
The corresponding implicit version’s stability bound is shown in Figure 5.3.
[Figure 5.3, plotted for b1 = 0.4: in the complex λ∆t plane (Re(λ∆t) from -2 to 16,
Im(λ∆t) from -10 to 10), a bounded instability range of λ∆t lies in the right
half-plane; the regions outside it are marked stable.]
Figure 5.3: Stability bound of the complex-valued λ∆t for the implicit version of
the two-stage method in eq. (5.36) with parameters from eq. (5.59).
(5.60)
Show that its stability bound is the same as shown in Figure 5.2.
It is a very simple explicit method that fits neatly into the format of equations (5.36)
with the parameters
(5.61)
These parameters satisfy the 2nd order accuracy conditions (5.46) or (5.47). No such
2nd order accurate linearly implicit A0 stable version with nonzero b1 or b2 is known
to me.
Try to find suitable nonzero values of b1 and b2 in eq. (5.61) for A0 stability
while retaining the 2nd order of the method. Alternatively, you may try to
prove that A0 stability is impossible without changes in the other parameters
of the Heun method.
(5.62)
It has a solution
(5.63)
(5.64)
Apply the explicit Euler method, in eq. (5.7), to the differential equation (5.62). This
yields, after the first step of size ∆t, an approximation on the initial tangent
of the solution in eq. (5.63). It is equal to the first two terms of the Taylor expansion
in eq. (5.64):
(5.65)
Application of the Heun method (5.60) to the differential equation (5.62) yields, after
the first step of size ∆t, an approximation:
(5.66)
Note that it is a third-order polynomial in ∆t. Yet, the third-order term does not match
the corresponding term in the Taylor expansion (5.64). (Of course, this mismatch
was to be expected – it is only a second order method.)
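If – as the cubic in eq. (5.66) and Figure 5.4 suggest – the test equation (5.62) is dx/dt = -x^2 with exact solution x(t) = x0/(1 + x0·t), the single steps can be reproduced numerically. A sketch in Python (the assumed form of (5.62) is an inference, not a quotation):

```python
def f(x):
    return -x**2                 # assumed right hand side of eq. (5.62)

def euler_step(x0, dt):
    """One explicit Euler step, eq. (5.7): follows the initial tangent."""
    return x0 + dt * f(x0)

def heun_step(x0, dt):
    """One Heun step, eq. (5.60): Euler predictor, trapezoidal corrector."""
    xp = x0 + dt * f(x0)
    return x0 + 0.5 * dt * (f(x0) + f(xp))

x0, dt = 2.0, 0.1
exact = x0 / (1.0 + x0 * dt)     # exact solution of dx/dt = -x^2 after dt
xe, xh = euler_step(x0, dt), heun_step(x0, dt)
print(xe, xh, exact)             # the Heun value lies much closer to the exact one
```

Expanding heun_step symbolically reproduces the cubic of eq. (5.66) with a ∆t³ coefficient that differs from the Taylor expansion – the mismatch noted above.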
The graphical comparison between the exact solution (shown with solid black line)
and the Euler and Heun approximations (shown with solid red and green lines
respectively) is shown in Figure 5.4. In addition, they are compared to the explicit
and implicit forms of the second order A1 stable method of Eitelberg in eqs. (5.36)
with parameters from eq. (5.59) (shown with dash-dotted and dashed blue lines
respectively). The explicit version is labelled with ‘explEit’ and the implicit version is
labelled with ‘implEit’.
[Figure 5.4: left panel 'Single Step Solution to dx/dt = -x^2'; right panel 'Single
Step Solution Detail'.]
Laboratories (in parallel with lectures)
1. MATLAB arrays, array operations, graphics.
MATLAB originally denoted MATrix LABoratory and was created for use on early
personal computers. It was created by a number of people, some of whom – such as
Cleve Moler – had been involved in creating two packs of FORTRAN subroutines for
matrix and vector operations on physically huge mainframe computers: LINPACK
(linear equations) and EISPACK (eigenvalues and eigenvectors). I used these
subroutines in the 1970s, when neither personal computing nor MATLAB was
known. My programs were coded on huge packs of cardboard punch cards, and I do
not miss that technology. These packs were fatally sensitive to being dropped on
the ground.
While MATLAB allows looped code, loops are strongly discouraged. Vectorised code
and array operations are strongly encouraged instead.
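As an illustration of the difference (sketched in Python/NumPy here purely to show the idea; the laboratory itself uses MATLAB arrays, where the same principle applies):

```python
import numpy as np

t = np.linspace(0.0, 1.0, 5)

# Looped version (discouraged): element-by-element computation.
y_loop = np.empty_like(t)
for i in range(t.size):
    y_loop[i] = t[i]**2 + 1.0

# Vectorised version (encouraged): one array expression.
y_vec = t**2 + 1.0

print(np.allclose(y_loop, y_vec))   # the two results agree
```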
This course is not meant to ‘teach’ MATLAB. Instead, necessary MATLAB skills are
learned by doing whatever it takes to legally get the required results or answers –
copying is not permitted. A starting base is given in each laboratory assignment.
However, students need to prepare for the laboratory by looking for additional
information from the web or other sources.
Use
>> size(t3)
to find out the number of elements in the various arrays.
Use
>> help size
to learn about MATLAB functions and operators.
Add grid lines to your plots and learn to add lines to existing plots. Add various
titles and text to your plots – programmatically and interactively.
2. Relative humidity as a function of temperature.
Create a graphical representation of relative humidity from the analytic relationship
(1.31):
(Prac.1)
T (temperature) and Td (the dew point temperature) are measured in °C. The RH
results lie between 0 and 100%.
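A hedged sketch of such a computation, using the Magnus form recommended by Lawrence (2005) [9] – the coefficients 17.625 and 243.04 °C are assumptions for illustration and need not match eq. (1.31):

```python
import numpy as np

A1, B1 = 17.625, 243.04   # Magnus coefficients (Lawrence 2005); assumed values

def rel_humidity(T, Td):
    """Relative humidity in % from temperature T and dew point Td, both in degC."""
    return 100.0 * np.exp(A1 * Td / (B1 + Td) - A1 * T / (B1 + T))

T = np.linspace(0.0, 40.0, 401)   # temperature axis
RH = rel_humidity(T, 10.0)        # RH curve for a fixed dew point of 10 degC
# In the laboratory: plot(T, RH); note RH = 100% where T equals the dew point.
```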
(Prac.2)
Introduce the matrix M and the vectors b and x as follows.
(Prac.3)
This results in a set of simultaneous linear equations that can only be satisfied
approximately:
(Prac.4)
In the laboratory: select a spread of suitable saturation data between -40 °C and
160 °C. Then find the linear least squares solution to this equation. Explain why
A = 0.6 is a good choice for the Magnus equation in a temperature range that
includes 0 °C!
If you do not know the solution to this class of problems, you may use the simplified
summary from Eitelberg (1991) below.
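Solving such an overdetermined system in the least squares sense is a one-line operation in most environments. A generic sketch in Python – the matrix and right hand side below are made up for illustration and are not the saturation data of (Prac.3):

```python
import numpy as np

# Hypothetical overdetermined system M x ~= b (more equations than unknowns).
M = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
b = np.array([0.1, 0.9, 2.1, 2.9])

# Linear least squares: minimises the Euclidean norm of the residual M x - b.
x, residual, rank, sv = np.linalg.lstsq(M, b, rcond=None)
print(x)   # close to [0.06, 0.96] for this made-up data
```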
4. Eigenvalue and stability calculations.
Let
(Prac.5)
6. Simulation of state models with initial conditions (given method and step-size).
Implement the motorcar example in eq. (2.6) in Simulink. Use the Euler method
and, initially, a step size of 0.1 s. You may have to modify the step size.
One of the tasks is to measure the time it takes to accelerate at maximum power to
100 kph from 50 kph. Show transients on a scope. Can you simulate from 0 initial
speed?
Extend the model to eq. (3.8). Show the speed and distance transients on the same
scope. How far does the car travel until it reaches 100 kph?
Pay close attention to physical SI and non-SI units.
7. Time-delay modules and sub-systems in Simulink.
Build the following model in the laboratory:
[Block diagram: Step source r2 → PI2 controller → Transport Delay → subsystem P;
uScope on the control signal. P has ports In1/Out1 and In2/Out2, each path with
its own Transport Delay; its outputs are y1 and y2.]
The background for this example will be explained in the laboratory. Keep the step
times about 5 seconds apart in the two step blocks. Switch the disturbance step
alternately to Out1 or to Out2 and observe what happens to the other, undisturbed
output from the subsystem P.
Add a slider from the Dashboard sub-library to your model. Link it to the delay of
the control system and vary the delay time during your slowed-down simulation.
[Block diagram: Clock → time; logged signals u&y; Step source r2 → PI2 → P;
uScope.]
9. Coding the explicit Euler and Heun methods for a stiff example.
In preparation for the practical: analyse the following function and script files.
The function file calculates the right hand side of the model differential
equation. The script file evaluates the explicit Euler method. Understand the
differences between script files and function files. Write the state differential
equations for this system.
% msturbostate1
x = [1000; 7];            % initial state: generator speed [rpm] and excitation voltage [V]
u = [15; 6];              % input: fuel flow [l/h] and excitation voltage demand [V]
endtime = 29;             % end time of simulation [s]
dt = 0.05;                % time step [s]
nst = round(endtime/dt);  % number of steps
tt = zeros(1, nst+1);     % preallocate time array
xx = zeros(2, nst+1);     % preallocate state array
xx(:,1) = x;
for i = 1:nst
    xx(:,i+1) = xx(:,i) + dt*msturbogen1(xx(:,i), u);   % explicit Euler step
    tt(i+1) = tt(i) + dt;
end
subplot(2,1,1); plot(tt, xx(1,:))
subplot(2,1,2); plot(tt, xx(2,:))
In the laboratory, you will adapt this code to the Heun method and compare both by
simulation with various values of the time step dt and time-varying excitation
voltages, such as .
10. Coding the linearly implicit A1 stable Eitelberg methods for a stiff example.
Tutorials (in parallel with lectures)
The purpose of these weekly tutorials is to reinforce the material that is taught in
the lectures. Generally, discussions must be initiated by students. Some homework
will be assigned in advance and it must be solved by the students before it can be
discussed.
At this time, the weekly tutorial topics are not fully fixed and tutorial work is not
formally assessed. Some tentative topics are given below.
Week 1: Familiarise with some sources, specifically the water tables. Linear
interpolation: https://www.youtube.com/watch?v=Cvc-XalN_kk .
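Linear interpolation between two table rows (T1, y1) and (T2, y2) gives y(T) = y1 + (y2 - y1)(T - T1)/(T2 - T1). A sketch in Python with approximate water saturation pressures as sample data (check the actual values in your own tables):

```python
import numpy as np

# Approximate tabulated saturation pressures [kPa] vs temperature [degC].
T_tab = np.array([40.0, 45.0, 50.0])
p_tab = np.array([7.384, 9.593, 12.35])

# Interpolate at 42 degC: numpy.interp does piecewise linear interpolation.
p42 = np.interp(42.0, T_tab, p_tab)
print(p42)   # 7.384 + (9.593 - 7.384)*2/5, about 8.27 kPa
```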
Week 2: Swing – consider also heat transfer and free fall. Compare hfg and ufg.
Week 3: Evaluation of cv and cp from tabulated data and their use in calculations:
superheated steam (compressible) and liquid water (incompressible).
Week 4: Humid air by the sea from p. 11 – without preparation.
Week 5: Water saturation temperature from ideal gas equation above 60 °C.
Week 6: Consider an apartment of 100 m² with a ceiling height of 2.8 m. Initially,
the 80% humid air in the apartment has a temperature of 35 °C. Then the split unit
air conditioner is switched on and the apartment is cooled down to 25 °C. The
refrigerant temperature of the air conditioner is operated at 7 °C – verify this in a
literature search.
What is the final air humidity and how much condensate water is removed from the
airtight apartment in this cooling process? {Should be nearly 7 litres – test this at
your home!}
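One way to check the bracketed estimate: assume the final vapour content is fixed by saturation at the 7 °C coil, and use a Magnus-type saturation pressure. The coefficients below follow Lawrence (2005) [9] and are assumptions for illustration:

```python
from math import exp

def p_sat(T):
    """Approximate saturation pressure of water vapour [Pa], T in degC (Magnus form)."""
    return 610.94 * exp(17.625 * T / (T + 243.04))

V = 100.0 * 2.8          # apartment volume [m^3]
Rv = 461.5               # gas constant of water vapour [J/(kg K)]

# Initial vapour mass: 80% relative humidity at 35 degC.
m_i = 0.8 * p_sat(35.0) * V / (Rv * (35.0 + 273.15))

# Final vapour mass: saturated at the 7 degC coil temperature (assumption).
m_f = p_sat(7.0) * V / (Rv * (7.0 + 273.15))

print(m_i - m_f)         # condensate mass [kg], roughly 7 kg = 7 litres
```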
Week 7: Extension of Tutorial 6: Let the initial and final temperatures be, as in
tutorial 6, 35 °C and 25 °C respectively. However, the refrigerant temperature of the
air conditioner is operated at 14 °C. Let the initial humidity RH35 vary between 10%
and 80%. Draw a graph of the final humidity RH25 as a function of the initial
humidity. You can do the relevant calculations manually, but a (sophisticated)
MATLAB code would be preferable.
Week 8: Discussion of examples, homework, or summative exercises.
Week 9: A 9 m³ rigid vessel is filled with a mixture of hydrogen and oxygen gases at
a common temperature of 300 K. Their masses are 20 grams and 160 grams
respectively. The mixture is ignited and then cooled down to the initial temperature
of 300 K.
Calculate the pressure in the vessel before and after the hydrogen oxidation
reaction. Do you expect this reaction to be explosive? Do you expect any liquid water
after the cooling and why? What if there is more than 20 grams of hydrogen
initially? {The pressures should differ by a factor of 1.5.}
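The bracketed factor follows from mole counting and the ideal gas law; a sketch (standard molar masses of 2 g/mol and 32 g/mol assumed):

```python
# Moles before ignition: H2 (2 g/mol) and O2 (32 g/mol).
n_H2 = 20.0 / 2.0        # = 10 mol
n_O2 = 160.0 / 32.0      # = 5 mol -> exactly stoichiometric for 2 H2 + O2 -> 2 H2O
n_before = n_H2 + n_O2   # 15 mol of gas

# After complete reaction and cooling back to 300 K: 10 mol of water vapour.
n_after = n_H2           # 2 H2 + O2 -> 2 H2O consumes both reactants fully

R, T, V = 8.314, 300.0, 9.0
p_before = n_before * R * T / V    # Pa
p_after = n_after * R * T / V      # Pa, valid only if no water condenses

print(p_before / p_after)          # = 15/10 = 1.5
```

The final partial pressure of the water vapour (about 2.8 kPa) stays below the saturation pressure at 300 K (about 3.6 kPa), so the assumption that no water condenses holds at this filling.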
Week 10: Linearise the three-phase turbo-generator example (2.32) in section 4 of
Chapter II above and calculate its transfer matrix. Let the operating condition be
4000 rpm. The rotational inertia of the system is , the excitation
coefficient is , the operating value of excitation voltage is , the
fuel power transfer efficiency is , and the phase resistance of the load is
. Define the deviations from the operating values of fuel energy flow rate
and the excitation voltage as the two system inputs and
calculate its steady state value. Let the corresponding deviations from the operating
values of rotational speed and the line voltage be the two
outputs.
Week 11: Trapezoidal method with large or summative exercises.
Week 12: Discussion of examples, homework, or summative exercises.
Week 13: Discussion of examples, homework, or summative exercises.
Bibliography:
1. Michael J. Moran; Howard N. Shapiro: Fundamentals of Engineering
Thermodynamics. Wiley, Chichester, 5th ed., 2006.
2. Frank P. Incropera; David P. DeWitt: Fundamentals of Heat and Mass Transfer.
Wiley, New York, 4th ed., 1996.
3. Ed. Eitelberg: Optimal Estimation for Engineers. NOYB Press, Durban, 1991.
4. J.D. Lambert: Computational Methods in Ordinary Differential Equations. Wiley,
New York, 1973.
5. Ed. Eitelberg: Numerical simulation of stiff systems with a diagonal splitting
method. Mathematics and Computers in Simulation, XXI, 1979, pp. 109-115.
6. Ed. Eitelberg: Parameter studies of a class of robust L-stable integration methods.
Proceedings of the 10th IMACS World Congress, Volume 1, 1982, Montreal,
Canada, pp. 22-24.
7. Ed. Eitelberg: A simple A2-stable numerical method for state space models with
stiff oscillations. Mathematics and Computers in Simulation, XXV, 1983, pp.
346-355.
8. Ed. Eitelberg: Control Engineering. NOYB Press, Durban, 2000.
9. Mark G. Lawrence: The relationship between relative humidity and the dew-point
temperature in moist air. Bulletin of the American Meteorological Society
(BAMS), February 2005, pp. 225-233.
10. https://www.academia.edu/31361586/
THERMODYNAMICS_TABLES_BOOK_e_MORAN_AND_SHAPIRO_ALL_Fundame
ntals_of_Engineering_Thermodynamics_7TH_ED (added on the 26th of May
2020).
Summative Exercises:
Exercise 3: Temperature controlled filling of a bathtub.
A bathtub is filled with hot and cold water. The two flows mix to a uniform
temperature – with negligible delay. Ignore heat transfer between the bathtub body
and its content. You may assume that this water is incompressible and hence, the
specific internal energy is a function of temperature alone, . Alternatively,
.
Derive two state differential equations: for the mass of water and its temperature in
the bathtub. The variable mass flow rates and temperatures of the hot and cold
water are the four inputs: , , , and respectively. Build this process
in Simulink and add a relay-based temperature regulation system – as in Exercise 2
– that varies hot water flow rate to maintain the bath temperature close to a desired
reference temperature. You are welcome to replace the relay controller with a
proportional gain controller.
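From the mass and energy balances, the state equations take the form dm/dt = wh + wc and m·dT/dt = wh·(Th - T) + wc·(Tc - T) for incompressible water with constant specific heat. A sketch in Python – the symbols wh, wc, Th, Tc and all numerical values are mine, chosen purely for illustration, and the relay is reduced to a simple on/off rule:

```python
def bathtub_rhs(m, T, wh, wc, Th, Tc):
    """State derivatives for water mass m [kg] and temperature T [degC]."""
    dm = wh + wc                                # mass balance
    dT = (wh * (Th - T) + wc * (Tc - T)) / m    # energy balance, incompressible water
    return dm, dT

# Explicit Euler simulation with an on/off (relay) hot water valve.
m, T = 20.0, 20.0               # initial mass [kg] and temperature [degC]
Tref, dt = 38.0, 0.5            # reference temperature [degC] and step size [s]
wc, Th, Tc = 0.10, 60.0, 15.0   # cold flow [kg/s] and supply temperatures [degC]
for _ in range(600):            # 300 s of filling
    wh = 0.25 if T < Tref else 0.0   # relay: hot water on below the reference
    dm, dT = bathtub_rhs(m, T, wh, wc, Th, Tc)
    m, T = m + dt * dm, T + dt * dT
print(m, T)   # the temperature settles near the reference
```

In Simulink the same structure appears as an integrator pair for m and T, a Relay (or Gain) block for the hot water valve, and constant or signal sources for the four inputs.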