
Dynamical System Modelling and Simulation

Professor Eduard Eitelberg

The subject matter of this course covers two distinct but interlinked areas of
knowledge or expertise: dynamical system modelling and numerical simulation of
dynamical systems.
The students will learn to derive mathematical models by
applying the 'law of conservation' to various common processes
with lumped parameters. The students will analyse the transient
behaviour of these models in a laboratory type environment,
where they will use numerical simulation methods to solve a
model's non-linear state differential equations.

TABLE OF CONTENTS:
I Some Foundations from Thermodynamics and from Heat and Mass Transfer 2
1. The first law of thermodynamics. 2
2. Properties of simple systems – p, υ, T. Evaporation and condensation. 4
3. Humidity. 7
4. Energy transfer with and without mass ... Second Law limitation on energy transfer. 13
II General Law of Conservation 17
1. Definition of system and its boundary. 17
2. Quantity and its species – conservation thereof. 17
3. Rate of change of quantity in a system. 17
4. Examples of lumped systems from various domains: mechanical, power, electrical, economic … 18
III State Equations of Lumped Systems 25
1. Definition of state. 25
2. State differential equations. 25
3. Output equations. 25
4. State equations – initial value problem. 25
IV Linear Systems 27
1. Linearisation of state equations. 27
2. Transient behaviour of the state and system. 29
3. Asymptotic stability of the state. 30
4. Transfer functions. 31
V Discrete-Time Models of Systems – Numerical Integration & Simulation 33
1. Finite difference approximation – difference equations. 33
2. Stability of linear difference equations. 35
3. Stability of explicit and implicit Euler methods. 35
4. Further examples of single-step simulation methods: Trapezoidal (Tustin), Eitelberg, Runge-Kutta … 37
5. Step size control. 46
LABORATORIES 47
1. MATLAB arrays, array operations, graphics. 47
2. Relative humidity as a function of temperature. 48
3. MATLAB Matrix and vector operations. 48
4. Eigenvalue and stability calculations. 49
5. Introduction to Simulink, sources, sinks and other blocks. 49
6. Simulation of state models with initial conditions (given method and step-size). 49
7. Time-delay modules in Simulink. 50
8. Output to work-space and transient plotting. 50
9. Coding the explicit Euler method for a stiff example. 51
10. Coding the linearly implicit Eitelberg method for a stiff example. 51
11. Plotting of the stability bounds of Eitelberg’s A stable methods. 51
TUTORIALS 52
BIBLIOGRAPHY 53
SUMMATIVE EXERCISES 54

Copyright: Dr. Eduard Eitelberg


Karmiel, 2018 — 2023
I Some foundations from thermodynamics and
from heat and mass transfer

This chapter summarises some classical physics that has been developed by many
authors over many years. Generally, it is futile to try to give credit to anyone in
particular as most of this information is simply common knowledge. However, it is
acknowledged that the undergraduate textbook of Moran and Shapiro [1] has been
used extensively and due credit is hereby given to them.

1. The First Law of Thermodynamics.


Science has always been concerned with accurate description of motion of bodies.
Galileo's work cannot be ignored, but Newton's laws of motion are fundamental in
classical physics. They "… led to the concepts of work, kinetic energy and potential
energy, and these led eventually to a broadened concept of energy." [1, p. 29]
We assume a non-accelerating frame of reference throughout this course, unless
specifically indicated otherwise. A small non-rotating body with constant mass m
that moves with the velocity (vector) v is governed by Newton's second law

m\,\frac{d\mathbf{v}}{dt} = \mathbf{F}_N          (1.1)
We calculate the scalar product between the left and right hand sides of this identity
with the velocity vector \mathbf{v} = d\mathbf{s}/dt, along the path s:

m\,\mathbf{v}\cdot\frac{d\mathbf{v}}{dt} = \mathbf{F}_N\cdot\mathbf{v} = \mathbf{F}_N\cdot\frac{d\mathbf{s}}{dt}          (1.2)
The left hand side of eq. (1.2) is the rate of change of the kinetic energy of the
constant mass m:

m\,\mathbf{v}\cdot\frac{d\mathbf{v}}{dt} = \frac{d}{dt}\left(\frac{m\,\mathbf{v}\cdot\mathbf{v}}{2}\right) = \frac{dE_k}{dt}          (1.3)

If this body is accelerated by a force \mathbf{F}_N along a path described by the vector \mathbf{s}, then
this force does work on this body at the rate of

P_N = \mathbf{F}_N\cdot\frac{d\mathbf{s}}{dt} = \mathbf{F}_N\cdot\mathbf{v}          (1.4)
In other words, the rate of kinetic energy increase of the constant mass m is equal to
the power PN – the rate of work done on this mass.

\frac{dE_k}{dt} = P_N          (1.5)
Energy is a quantitative property of the body in motion. Work is the mechanism by
which this energy is changed. The preceding derivation does not explicitly state that –
or under what conditions – this kinetic energy can be transformed into work. This is
related to the question of reversibility of processes – a major topic in thermodynamics.
It seems that most introductory texts, like [1], prefer the integrated form of eq. (1.5):

E_k(t_2) - E_k(t_1) = \int_{t_1}^{t_2} P_N\,dt = W_N          (1.6)
Both the kinetic energy and the work are measured in units of

1\,\mathrm{J} = 1\,\mathrm{N\,m} = 1\,\mathrm{kg\,m^2/s^2}          (1.7)

J stands for joule, after the English physicist Joule. However, it is pronounced as in
French. Even though torque is also measured in units of N·m, it is not energy or
work.
One of the earliest extensions of the concept of kinetic energy seems to have been
the 'potential energy'. It can be motivated as follows. Under the given conditions, eq.
(1.5) fully describes the energy and work relationship of the body – if FN denotes the
total ('Newtonian') force vector acting on the body.
It turns out that it may be convenient to redefine some part, or all, of the work in eq.
(1.6) as negative potential energy. For example, if the total force vector F_N has a
gravitational force component m\mathbf{g} – where \mathbf{g} is the gravitational acceleration
vector pointing towards the centre of the earth and it is (nearly) constant – then we
obtain from eq. (1.6)

\int_{t_1}^{t_2} m\,\mathbf{g}\cdot\mathbf{v}\,dt = -m\,g\,\big(z(t_2) - z(t_1)\big) = -\big(E_p(t_2) - E_p(t_1)\big)          (1.8)
Substituting this equation into eq. (1.6) and re-arranging yields

E_k(t_2) + E_p(t_2) - E_k(t_1) - E_p(t_1) = \int_{t_1}^{t_2} P\,dt = W          (1.9)

where P and W now denote the power and work of the remaining (non-gravitational) forces.
It is a straight-forward matter to add other field forces to the gravitational field force
and thus extend the definition of the potential energy. Similarly, elastic potential
energy can be defined with respect to reversible elastic deformation forces. This
introduces the need to avoid double accounting of the relevant forces – either they are
covered on the energy side of the equation or on the work side of the equation, but not
both simultaneously!

Internal energy is used to cover a number of other forms of energy in bodies and
systems – such as thermal energy or energy stored in chemical, molecular and
atomic bonds. (The undergraduate text [1] considers elastic potential energy as part
of internal energy of a system.) Hence, eq. (1.5) is usually generalised as

\frac{d}{dt}\big(E_k + E_p + E_i\big) = P          (1.10)

Example: Compression work.


Let a given amount of gas have a volume V and uniform pressure p. When this
amount of gas is compressed by applying force/pressure on its outer boundary, then
work is done on this amount of gas at the rate

P = -p\,\frac{dV}{dt}          (1.11)
End of example.

So far, only the kinetic energy change relies strictly on Newton's second law.
The gravitational potential energy is still based on classical Newtonian physics. The
internal energy could be considered a thermodynamic concept. It is based on a large
body of experimental data.
A fundamentally thermodynamic extension to eq. (1.10) is based on the discovery
that energy of a body of mass – or of a system in general – can be changed by other
means than work done by an applied macroscopic force. Energy can be transferred
by heat. For example, when water is heated on a hot plate its energy is increased
even though no mechanical work is done. "This type of interaction is called …
energy transfer by heat." [1, p. 44] Heat transfer rate to a body or system is denoted
by \dot{Q}. Hence, eq. (1.10) now becomes, more fully,

\frac{d}{dt}\big(E_k + E_p + E_i\big) = P + \dot{Q}          (1.12)
Energy transfer by heat occurs only in the negative direction of the temperature
gradient. That is what we are taught in universities and it has been experimentally
and exhaustively verified. Nevertheless, one has to be careful. This statement is true
in observed macroscopic systems for conduction and radiation. It is not necessarily
true when mass and heat transfers coincide.
The commonly used terminology can be confusing, perhaps, because this area of
knowledge combines experimental scientific evidence from many fields of physics
and engineering and from different scientific traditions. The following quote from [1,
p. 49] should help:
"The terms work [W] and heat [Q] denote different means whereby energy is
transferred and not what is transferred. However, to achieve economy of expression
in subsequent discussions, W and Q are often referred to simply as work and heat
transfer, respectively. This less formal manner of speaking is commonly used in
engineering practice."
The first law of thermodynamics is nothing else but the energy balance – or energy
conservation – equation (1.12) above or its equivalent integral form below.

\Delta E_k + \Delta E_p + \Delta E_i = W + Q          (1.13)

 Prove that E_k + E_p is constant in a frictionless swing.

2. Properties of simple systems – p, υ, T. Evaporation and condensation.


"The state of a closed system at equilibrium is its condition as described by the
values of its thermodynamic properties [such as temperature T, pressure p, or
density ρ]. From observation … it is known that not all of these properties are
independent of one another, and the state can be uniquely determined by giving the
values of the independent properties." [1, p. 69] The other thermodynamic properties
can be calculated from the set of chosen independent properties.
In simple compressible systems, such as pure water or a uniform mixture of non-
reacting gases, only two thermodynamic properties are independent. Properties like
velocity and elevation above the sea-level are excluded from this discussion. In such
systems, for example, temperature and density may be chosen as the two
independent properties. Then pressure can be calculated as p = p(T, \rho) and the
specific internal energy can be calculated as u = u(T, \rho). Incidentally, some authors
prefer to use the specific volume \nu = 1/\rho instead of the density. Note that, in this
section, the thermodynamic internal energy of simple compressible systems is
denoted by U, instead of the more general Ei.
Follow [1] pp. 70—82…

Solid, liquid (fluid), and vapour (gas) states. Critical point and triple line (point).
Phase change and two-phase mixtures. Sublimation - deposition, melting –
freezing/solidification, evaporation (vaporization) - condensation.

In the mixtures of phases – solid-vapour, liquid-vapour, or solid-liquid – the
temperature and pressure are not independent. One is defined by the other. The
ratio of material mass in the different phases of the mixture depends on the internal
energy of the mixture.
For actual calculations, use the appended tables in [1] or other sources, as in [10].

Figure 1.1: Properties of pure, simple compressible substances.


Steam (mixture) quality x, at saturation. Specific volume of steam (mixture of vapour
and liquid):

v = \frac{V}{m} = \frac{m_{liq}\,v_f + m_{vap}\,v_g}{m}          (1.14)

Define the quality x = m_{vap}/m and note that m = m_{liq} + m_{vap}. Hence, eq. (1.14) can be
written as

v = (1 - x)\,v_f + x\,v_g          (1.15)

Example: 1 kg of steam at 100 °C and x = 0.9.

From table A-2 of [1], v_f \approx 1.0435\times 10^{-3}\,\mathrm{m^3/kg} and v_g \approx 1.673\,\mathrm{m^3/kg}. That means,
from 1.04 liter of water at 100 °C, by adding internal energy, we obtain about 1500
times more volume of (slightly wet) steam:

v = 0.1\cdot 1.0435\times 10^{-3} + 0.9\cdot 1.673 \approx 1.51\,\mathrm{m^3/kg}          (1.16)
End of example.
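
The arithmetic of eqs. (1.15) and (1.16) is easy to check numerically, for example in MATLAB as used in the laboratories. The following minimal sketch assumes the approximate table A-2 values quoted above:

vf = 1.0435e-3;           % saturated liquid specific volume [m^3/kg], table A-2 (approx.)
vg = 1.673;               % saturated vapour specific volume [m^3/kg], table A-2 (approx.)
x  = 0.9;                 % steam quality
v  = (1 - x)*vf + x*vg    % mixture specific volume from eq. (1.15) [m^3/kg], about 1.51
v/vf                      % roughly 1400 to 1500 times the liquid volume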

In many thermodynamic calculations, the internal energy U appears together with
the product between pressure p and the volume V. This combination is called
enthalpy, H = U + pV. The specific enthalpy is expressed as

h = u + p\,v          (1.17)
Just like in eq. (1.15) above, the specific internal energy and the specific enthalpy of
the wet steam can be calculated as follows.

u = (1 - x)\,u_f + x\,u_g          (1.18)

h = (1 - x)\,h_f + x\,h_g          (1.19)
Wet steam pressure and temperature are not independent. Hence the latent heat of
evaporation h_{fg} = h_g - h_f can be expressed as a function of pressure, h_{fg}(p), or of
temperature, h_{fg}(T).

 Why is ? When are they equal?


When the internal energy or enthalpy are (approximately) linearly dependent on
temperature, then many calculations are simplified by using the notion of specific
heat. There are actually two separate definitions:

c_v = \left.\frac{\partial u}{\partial T}\right|_v, \qquad c_p = \left.\frac{\partial h}{\partial T}\right|_p          (1.20)

In incompressible substances (like liquids and solids), c_v = c_p = c. And when, in
addition, c is constant, then u_2 - u_1 = c\,(T_2 - T_1) and h_2 - h_1 = c\,(T_2 - T_1) + v\,(p_2 - p_1).
Gases, in general, cannot be approximated as incompressible. However, most gases
(p << pc or T >> Tc) behave approximately like an ideal gas, according to the ideal gas
equation

p\,v = \frac{\bar{R}}{M}\,T          (1.21)

The universal gas constant \bar{R} = 8.314\,\mathrm{kJ/(kmol\cdot K)} and M is the molecular mass of
the gas measured in kg/kmol. Note that using kJ in \bar{R} leads to pressure measured
in kPa, if the specific volume and temperature are measured in straight SI units. For
specific heat of ideal gases, the reader is referred to [1] or other relevant literature.

Example: Density of air from ideal gas equation.


Dry air is composed of mainly nitrogen (78% of mole fraction) and oxygen (21% of
mole fraction). From table A-1 of [1], M_{N_2} = 28.01\,\mathrm{kg/kmol} and M_{O_2} = 32.00\,\mathrm{kg/kmol}.
We could calculate the average molar weight as is done in [1], but it is simpler,
albeit very slightly less accurate, to take the nitrogen molar mass as representing
air. For the masochists, for dry air M \approx 28.97\,\mathrm{kg/kmol}. This is really of no great
practical value, because the usually humid air contains a variable amount of water
molecules with the molecular mass of M_{H_2O} \approx 18.02\,\mathrm{kg/kmol} and this would lower the
average molecular mass of the air – often even below 28 kg/kmol. Hence, the air
density is obtained from eq. (1.21) as

\rho = \frac{1}{v} = \frac{p\,M}{\bar{R}\,T}          (1.22)

Assuming an atmospheric pressure at sea level of about 101 kPa and a temperature
of about 15 °C yields an air density of about 1.2 kg/m3. Verify that this corresponds to
actually measured values. However, at the height of the Drakensberg escarpment
(about 3000 m above sea level) the atmospheric pressure drops to about 70 kPa and
the air density at the same temperature becomes about 0.8 kg/m3.
End of example.
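
A few MATLAB lines reproduce the density estimates of this example from eq. (1.22); the nitrogen molar mass stands in for dry air as above, and the 15 °C temperature is an assumption consistent with the quoted densities:

Rbar = 8.314;                                  % universal gas constant [kJ/(kmol*K)]
M    = 28.01;                                  % nitrogen molar mass standing in for dry air [kg/kmol]
rho  = @(p, T) p*M./(Rbar*(T + 273.15));       % eq. (1.22): p in kPa, T in deg C -> rho in kg/m^3
rho_sea      = rho(101, 15)                    % about 1.2 kg/m^3 at sea level
rho_mountain = rho(70, 15)                     % about 0.8 kg/m^3 at roughly 3000 m altitude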

Another popular form of the ideal gas equation is obtained from eq. (1.21) by
multiplying both sides with the density:

p = \frac{\rho}{M}\,\bar{R}\,T          (1.23)

\rho/M could be called the molar density, because it indicates the number of moles in a
unit volume, measured in kmol/m3. A kmol is equal to a fixed number (about
6.022\times 10^{26}) of ‘elementary entities’ – here we refer to molecules. Clearly, the
ideal gas equation is completely independent of which molecules the gas is
composed of. The pressure and temperature relationship depends only on the
number of molecules in a cubic meter.

3. Humidity.
The concept of humidity concerns lay people and experts alike. It is very important
to both, but it is misunderstood by the general public. "The study of such systems
… is known as psychrometrics." [1, p. 579]
By definition, dry air contains no water vapour. Technically, moist or humid air refers
to a mixture of dry air with water vapour. And this is precisely where the difficulties
begin. Generally, mixtures of simple or not so simple substances – salt, sugar, liquid
acids, oil, oxygen, nitrogen or other substances in water, alcohol and other liquids –
may cause complex changes in the physical properties even when the mixed
quantities are small compared to one main substance. Amazingly, it turns out that
humidity has practically nothing to do with air – in or near the conditions in which
humans can survive in the atmosphere. Hence, I begin by completely ignoring the
air and the associated thick layer of cattle manure in common or popular
explanations.
Humidity, as measured in practice, is entirely a matter of some space containing
water vapour and it does not depend on whether this space also contains air or not!
Humidity is simply a property of water vapour in relation to the saturated vapour
line in Figure 1.1 above. The most relevant portion of this generic figure is
reproduced in Figure 1.2 specifically for water (T-ν diagramme is convenient here).

[Figure 1.2 consists of two panels: on the left, the T-ν diagramme of water (temperature in °C against specific volume in m3/kg) with constant pressure lines at 0.10, 0.61, 6, 35 and 70 kPa and the vapour, liquid + vapour and solid + vapour regions indicated; on the right, the corresponding saturation temperature against pressure curve (temperature in °C against pressure in kPa).]

Figure 1.2: Pure water saturation.


The constant pressure lines to the right of the (thick) saturation curve in the T-ν
diagramme can be based on tabulated data or they can be calculated from the ideal
gas equation for H2O – I tried both and was not able to visually distinguish them in
the relatively low-resolution T-ν diagramme of Figure 1.2. As a matter of fact, the 6,
35 and 70 kPa lines are based on water tables and the 0.61 and 0.10 kPa lines are
calculated from the ideal gas equation.
In Figure 1.2, the saturation pressure and temperature relationship is based on
tabulated data and it is shown with a thick solid line style. It is also calculated from
the ideal gas equation – pressure is calculated from the tabulated saturation
temperature and specific volume. The resulting dashed line is visually
indistinguishable from the solid curve. However, one needs to be careful here. When
this same curve is calculated in the ideal gas equation from tabulated saturation
pressure and specific volume, a visually significant deviation of temperature is
noticeable above about 60 °C.
 Show this difference at a few points on the saturation curve.
The concept of humidity relates to the superheated vapour state to the right of the
saturation curve in the T-ν diagramme. Curiously, the so called absolute humidity is
nothing else but the density of the superheated vapour:

\rho_v = \frac{m_v}{V}          (1.24)
Apart from adding a new name to the same old thing, it is not new. It is also of
rather little interest in typical applications in the earth’s atmosphere.
Of greater interest is what is called the relative humidity. But first, recall that to the
left of the vapour saturation curve in the T-ν diagramme, the steam (vapour) quality
was defined as the ratio of the mass of the gaseous phase (vapour) to the
saturated wet steam mixture’s mass at the same temperature – x = m_v/m_{sat}.
The quality is 100% on the saturation curve and less to the left of it.
One can formally extend the definition range of x to characterise the quality of
vapour to the right of the vapour saturation curve – note that m_{sat} no longer contains
liquid water and m_v is superheated. This is one, albeit not the most popular,
definition of relative humidity, RH = m_v/m_{sat}. Divide the numerator and
the denominator by the volume V to yield RH = \rho_v/\rho_{sat}. Here too, this ratio
is 100% on the vapour saturation line and less to the right of it.
Apparently, the most popular definition is based on the ratio of the superheated
vapour pressure to the saturation pressure at the same temperature:

RH = \frac{p_v}{p_{sat}(T)}          (1.25)
However, within the limits of accuracy that the superheated vapour is governed by
the ideal gas equation, the two definitions actually coincide. Substitute p = \rho\,\bar{R}\,T/M
from eq. (1.21) into eq. (1.25) to yield a very good approximation for the definition in
eq. (1.25):

RH \approx \frac{\rho_v}{\rho_{sat}(T)} = \frac{v_{sat}(T)}{v_v}          (1.26)
The popularity of the RH definition as the ratio of pressures in eq. (1.25) may have to
do with the fact that it relates to the concept of dew point, or dew point temperature.
For every superheated vapour state there is a unique constant pressure line on the
T-ν diagramme. This pressure line crosses the saturation curve at the saturation
temperature – this is called the dew point temperature Td. Once we know the dew
point temperature of the given superheated vapour, its pressure is uniquely defined
by this Td alone. Hence

RH = \frac{p_{sat}(T_d)}{p_{sat}(T)}          (1.27)

This corollary to the definition in eq. (1.25) is independent of the ideal gas
approximation in eq. (1.26) and it is valid both above and below freezing
temperatures. The dew point temperature of superheated vapour could be measured
by cooling this vapour at constant pressure until the first signs of condensation or
deposition occur. (Something similar is used in meteorological practice when the wet
bulb technique is used to determine humidity in the air – see further below. The wet
bulb temperature is always between the air temperature and dew point of water.)
(Alternatively, the dew point could be defined by constant density cooling of the
superheated vapour – this would combine nicely with the relative humidity definition
as the density ratio. I am not aware of any such proposal actually having been made
previously.)
Both saturation curves in Figure 1.2 have very good analytic approximations. In
particular, the saturation T-p curve can be approximated with the Magnus equation

p_{sat}(T) \approx a\,\exp\!\left(\frac{b\,T}{c + T}\right)          (1.28)
Apparently, there is some doubt about the naming of eq. (1.28) – see [9]. Also, the
variously reported parameters are not particularly suitable in the temperature range
of Figure 1.2. The following parameters were chosen by me and yield saturation
curves that are difficult to visually distinguish from the tabulated values.

(1.29)
The temperature enters eq. (1.28) in °C and the pressure results in kPa. If high-
precision calculations are required, the reader needs to investigate the voluminous
literature and then decide which formula and what parameters are most suitable for
her or his particular needs. For the purposes of this course, the above parameters
are adequate.
Substituting the Magnus equation (1.28) into the ideal gas equation, we can
evaluate the saturation specific volume as follows. Recall that the ideal gas equation
requires the absolute temperature T + 273.15 (in K) when T is given in °C.

v_{sat}(T) = \frac{\bar{R}\,(T + 273.15)}{M\,p_{sat}(T)} = \frac{\bar{R}\,(T + 273.15)}{M\,a}\,\exp\!\left(-\frac{b\,T}{c + T}\right)          (1.30)
Both calculated saturation curves are indicated in Figure 1.2 with dash-dotted lines
that are hard to visually distinguish from the (thick) solid lines that are based on
tabulated data.
Substituting the Magnus equation (1.28) into the relative humidity equation (1.27)
yields

RH = \exp\!\left(\frac{b\,T_d}{c + T_d} - \frac{b\,T}{c + T}\right)          (1.31)
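
The Magnus-type saturation curve and the relative humidity of eq. (1.31) can be scripted in a few MATLAB lines. The parameter values below are a commonly quoted set and are only a stand-in for the ones chosen in eq. (1.29):

a = 0.61094;  b = 17.625;  c = 243.04;                % assumed Magnus parameters: a [kPa], c [deg C]
psat = @(T) a*exp(b*T./(c + T));                      % eq. (1.28): saturation pressure [kPa], T in deg C
RH   = @(Td, T) exp(b*Td./(c + Td) - b*T./(c + T));   % eq. (1.31): relative humidity from the dew point
psat(30)                                              % about 4.2 kPa, compare table A-2 of [1]
RH(15, 30)                                            % about 0.40, compare the beach example below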
Before continuing, let me clarify that all of the above theory – like thermodynamics
in general – relates to equilibrium (static) states of the substances and systems. Do
you notice the contradiction in terms? I would have proposed ‘thermostatics’, but it
seems too late for that. As a consequence, we need to distinguish between the
humidity at equilibrium and the dynamic and transient process of net evaporation,
sublimation, condensation and deposition. In the former subject matter, the
spontaneous evaporation or sublimation rate is balanced by the spontaneous
condensation or deposition rate. Non-zero net evaporation (or scientifically
vapourisation) can take place in various physically observable forms, such as
 evaporation from the surface of a macroscopically homogeneous body of
liquid water (drops of water are homogeneous enough for this discussion)
without significant rapid change of the surface area;
 evaporation by ‘boiling’, the onset of which is indicated by a significant
increase of the surface area between the two phases due to appearance of
vapour bubbles inside the previously homogeneous liquid phase.

The main difference between the two seems to be the rate of phase change that is
required to keep the phases close to equilibrium. For example, this rate of phase
change may depend on the rate of heating, or on the rate of pressure reduction. In a
sense, boiling is further away from thermodynamic equilibrium than surface
evaporation. However, both forms of evaporation still take place rather close to the
equilibrium saturation curves in Figure 1.2.
When boiling takes place close enough to the surface of the liquid phase, then the
vapour pressures above the liquid and in the bubbles inside the liquid must be
approximately equal. The same is true for vapour temperature above and inside the
liquid water.
Now we are ready for the tricky part – we add air! In so far as we are concerned
with humidity in thermodynamic equilibrium, we are primarily interested in the
gaseous atmosphere of air and water vapour mixture – that is to the right of the
saturation curve on the T-ν diagramme in Figure 1.2.
However, when we have to address the process of changing humidity, we may have
to understand also what is happening to the left of the saturation curve – in the
liquid or solid phase of water. In the following analysis, we ignore the air dissolved in
the liquid and solid phases of water. For example, at equilibrium, it is less than
100 mg per liter of water – mostly much less. It is less than 0.01% by mass.
Commonly, air denotes the mixture of all of its component gases – that includes
water vapour if present. However, in the present context we need to clearly
distinguish between (dry) air and water vapour in the mixture that is called humid
air. In humid air, all component gases have the same common temperature T.
However, the component densities, specific volumes and pressures (to be defined
shortly below) are not common and they are generally different from the uniform
mixture pressure p, density ρ, or specific volume ν. Because of the conservation of
mass, the individual component partial densities of vapour ρv = mv/V and air ρair =
mair/V add up to the mixture density:

\rho = \rho_v + \rho_{air}          (1.32)
Because of the conservation of the number of molecules in the non-reacting
mixtures, the number of moles of individual components in a unit volume add up to
the number of moles of the mixture – if the average molar mass is defined
appropriately. Denoting the average molar mass of humid air with M yields the
‘molar density’ balance equation:

\frac{\rho}{M} = \frac{\rho_v}{M_v} + \frac{\rho_{air}}{M_{air}}          (1.33)
On the one hand, this equation can be used to calculate the average humid air molar
mass M = \rho\,\big(\rho_v/M_v + \rho_{air}/M_{air}\big)^{-1}. On the other hand, it leads to the conclusion
that also the component pressures add up to the mixture pressure p.
The vapour and dry air component pressures are called the partial pressures of
vapour pv and of dry air pair respectively. Partial vapour pressure pv is defined as the
pressure that the same mass of water vapour would have in the same volume with
all of the air removed. The partial pressure of the air pair is defined similarly. In the
conditions of interest, dry air, superheated vapour and their mixture behave as ideal

gases. From the ideal gas equation (1.23) we obtain \rho/M = p/(\bar{R}T), \rho_v/M_v = p_v/(\bar{R}T)
and \rho_{air}/M_{air} = p_{air}/(\bar{R}T). Substituting these molar densities into eq.
(1.33) yields

p = p_v + p_{air}          (1.34)
This result is in agreement with the over two centuries old experimentally derived
Dalton’s law. Here, we go one step further. Define the partial pressure ratio p_v/p_{air} and use
the relevant ideal gas equations to yield

\frac{p_v}{p_{air}} = \frac{\rho_v/M_v}{\rho_{air}/M_{air}}          (1.35)
This partial pressure ratio depends only on the molar density ratio and it is
independent of temperature. Hence, cooling and heating of humid air does not
change the partial pressure ratio – unless the humid air composition is changed. An
important consequence of this is that the dew point temperature of water can be
found by cooling humid air at constant air pressure – until the onset of
condensation or deposition, after which the air composition and partial pressure
ratios change.
Under the presently considered conditions, all humidity characteristics – including
RH – depend on the (partial) vapour pressure, (common) temperature and (molar)
density of vapour alone. This is correct to the right of the saturation curve, in the
gaseous phase of water.
Thus, for example, the relative humidity definition in eq. (1.25) has to be written as

RH = \frac{p_v}{p_{sat}(T)}          (1.36)
However, all dew point related derived equations need no modification, because the
dew point temperature is not affected by the presence of air at the liquid-vapour
interface.
Whenever we have liquid or solid water present in the considered volume or system,
we need to consider carefully what happens at and across the interface between the
humid air and the liquid or solid phases of water (approximated as pure for the
purposes of this course).
In equilibrium, at the humid air interface:
 The humid air temperature T is common to all phases and all gases on both
sides of the saturation curve.
 The liquid pressure is equal to the total humid air pressure p. This pressure
is the weight of atmosphere above 1 m2 of area. Hence, it does not change
with local change of humidity, density, air composition, or temperature. It
changes with altitude.
A corollary is that the water tables and graphs need to be re-interpreted, because the
saturated vapour and liquid pressures are not necessarily equal at equal temperature!
Pressure in the solid phase seems to be of no interest in this section.
Humans feel the (relative) humidity indirectly. At saturation, no sweat from the skin
can evaporate spontaneously. The lower the RH value, the faster water from the
skin can evaporate. Evaporation draws internal energy from the skin (to add latent heat
to water vapour). (However, the skin temperature can ideally only reach the wet bulb
temperature – not the lesser dew point. The reasons go beyond the scope of this
course.) Lower internal energy content of the skin (predominantly water) lowers the
temperature of the skin. That and the associated heat transfer is what we humans
feel.

Example: Humid air by the sea.

Assume an atmospheric pressure near a beach of 101 kPa and an equilibrium
air and water temperature of 15 °C. At this temperature the vapour
pressure of water, from table A-2 of [1], is 1.705 kPa. In equilibrium, at 100%
relative humidity, the vapour partial pressure is saturated at p_v = 1.705 kPa
and the dew point temperature is T_d = 15 °C. From the same table, we find
that the vapour has a (partial) density of 0.0128 kg/m3.
The water pressure at the surface cannot be 1.7 kPa, because it has to balance the
entire atmospheric pressure of 101 kPa on its surface! Table A-2 of [1] instructs us
that at this pressure, the saturated water temperature must be 100 °C – which is at
first sight confusing. Clearly, at liquid water and humid air interface, the liquid
phase cannot reach water saturation temperature – only the vapour phase does but
at a much lower partial pressure under atmospheric conditions.
Now assume that the sun is rapidly heating the beach and this humid air without
significant additional evaporation from the sea. At 30 °C the partial
pressure is still the same, p_v = 1.705 kPa, because the atmospheric pressure is
essentially constant and the humid air composition remains constant. However, at
this higher temperature the saturation pressure of water vapour, from table A-2 of
[1], is now p_sat = 4.246 kPa. Eq. (1.25) (or eq. (1.36)) yields the warm air relative
humidity as RH = 1.705/4.246 = 40%. The analytic approximation in eq. (1.31)
yields 39%.
In addition, we find from this table that the saturated vapour density would be
0.0304 kg/m3. The approximate evaluation of the relative humidity in eq. (1.26)
yields 0.0128/0.0304 = 42%. Note that the dew point temperature is still 15 °C
and it is significantly below the air temperature, as it should be when the
air is not saturated.
If one waited long enough so that the liquid and vapour phases of water came to a
new equilibrium at 30 °C near the sea, then the air would again be saturated at
100% humidity. This is seldom realistic on the sea beach – the water temperature is
not likely to increase by 15 degrees in a few hours of sunshine to reach this new
hypothetical equilibrium. It would be more realistic near a shallow body of static
water – like shallow lake or relatively closed bay. The humidity close to the slowly
warming water will increase above 40%, but it goes beyond this course to try to
guess by how much and how soon. This requires modelling of the transient
evaporation processes together with the heat and mass transfer in the sea and in
the atmosphere.

The hypothetical equilibrium vapour partial pressure would rise to 4.25 kPa. In
reality, it would rise to some value between the initial 1.7 kPa and the hypothetical
4.25 kPa. When this (saturated) air is cooled down to 15 °C again, then more
than half of the water in the air (17 grams out of 30 grams in every cubic meter of
air) would condense on some surfaces. Something like this is what we see on
parked cars early in cool mornings near the sea and elsewhere.
End of example.
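
The following MATLAB lines retrace the numbers of this example directly from the quoted table A-2 values:

p_sat15   = 1.705;    % saturation pressure at 15 deg C [kPa], table A-2
p_sat30   = 4.246;    % saturation pressure at 30 deg C [kPa], table A-2
rho_sat15 = 0.0128;   % saturated vapour density at 15 deg C [kg/m^3]
rho_sat30 = 0.0304;   % saturated vapour density at 30 deg C [kg/m^3]
RH_pressure = p_sat15/p_sat30                % eq. (1.36): about 40% after heating to 30 deg C
RH_density  = rho_sat15/rho_sat30            % eq. (1.26): about 42%
condensed   = 1000*(rho_sat30 - rho_sat15)   % about 17 g of water per m^3 condenses on cooling back to 15 deg C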

The above example hit the limits of the equilibrium theory of water saturation rather
hard. A closely related subject matter is the common boiling of water in the
atmosphere. During evaporation from the liquid surface at any temperature, vapour
is produced at the liquid phase surface until its partial pressure in the air reaches
the saturation pressure at the given temperature. However, one cannot boil water at
any temperature, unless its pressure is equal to, or below, the corresponding
saturation pressure of pure water.
Boiling can happen locally in the vicinity of very hot surfaces, when the temperature
drops at a (small) distance from the heating surface – think also of Leidenfrost. That
goes beyond the bounds of this section. Here, boiling is associated with producing
and maintaining vapour bubbles within the liquid water at or very near saturation
conditions. The liquid and vapour saturation pressures on the liquid side of the
humid air interface are equal because there is no ingress of air into the vapour
bubbles – as opposed to the humid air above the boiling water surface. Hence in
principle, boiling cannot occur at or near equilibrium conditions with (humid) air,
because the vapour partial pressure p_v = p_{sat}(T) stays below the total pressure p – except
when all of the dry air has been replaced with water vapour near the surface of the
boiling water. This is the reason why boiling water vapour visibly condenses in the
air – the surrounding air cools the 100% humid vapour before it dissipates (and
re-evaporates) in the surrounding drier air.

In conclusion, at boiling the atmospheric pressure equals the liquid water saturation
pressure, and it alone determines the equilibrium boiling temperature of pure water.

Example: Clear weather clouds on top of mountains.


Visitors to Cape Town are sometimes surprised to find sunny weather in town and
on the adjacent sea while only the top of Table Mountain is enveloped in a cloud.
When wind is blowing humid air against a mountain, the air has to gain altitude in
order to get over the mountain. It is well known that atmospheric pressure declines
as the height above sea level increases. This relationship can be approximated with

(1.37)
The altitude h enters this equation in km and the sea level pressure is p0 – let us
assume the average 101 kPa. Since the height of Table Mountain is a little more
than 1000 m, the atmospheric pressure on the mountain is about 90 kPa. As the
pressure drops, the relative humidity drops – contrary to some www nonsense.
Even if we assume the extreme of 100% humid air blowing in from the adjacent
Atlantic Ocean at, say, 30 °C and pv = 4.25 kPa, by the time it reaches the top of the
mountain, it would become drier if only its pressure dropped to 90 kPa. Since the
composition of the rising air does not change before cloud-forming, the vapour
partial pressure drops proportionally to the atmospheric pressure drop: pv =
4.25*90/101 kPa = 3.79 kPa. That would yield a drier RH = 3.79/4.25 = 89% – to
the right of saturation – if there were no change in temperature.
The cause for cloud forming is the simultaneous cooling of the expanding humid air.
Let us calculate the maximum temperature on top of Table Mountain for clouds to
form there, when the sea level humidity is 80% at 30 °C. We could use the analytic
relationship in eq. (1.28), or we could use water tables. Let us do the latter and leave
the former as an exercise. At sea level, from eq. (1.36), pv = 0.8*4.25 kPa = 3.40 kPa.
On the mountain the vapour partial pressure drops to pv = 3.40*90/101 kPa = 3.03
kPa. From Table A-2, at 3.03 kPa, we find the saturation temperature of 24 °C – 6
degrees below the temperature in town. With RH = 100% at sea level, the necessary
temperature drop would have been a mere 2 degrees for clouds to form.
One should ask: why does air become cooler when it expands? Here, only a brief
explanation can be provided. As atmospheric air expands locally, it spends internal
energy on working against the atmospheric pressure according to eq. (1.11) – look
for ‘adiabatic expansion’ in the literature. Lower internal energy lowers the air
temperature. The adiabatic cooling rate of rising (expanding) unsaturated air is
about 10 degrees per km and about half that with simultaneous condensation.
End of example.
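
A short MATLAB sketch of the mountain-top calculation. The exponential pressure profile with a scale height of about 8.4 km is an assumed stand-in for eq. (1.37), and the Magnus parameters are the same assumed set as before:

p0 = 101;  h = 1.06;                      % sea-level pressure [kPa], Table Mountain height [km]
p  = p0*exp(-h/8.4)                       % assumed barometric profile: about 90 kPa on the mountain
a = 0.61094;  b = 17.625;  c = 243.04;    % assumed Magnus parameters (see eq. 1.28)
psat = @(T) a*exp(b*T./(c + T));          % saturation pressure [kPa], T in deg C
pv0  = 0.8*psat(30);                      % vapour partial pressure at sea level: RH = 80% at 30 deg C
pv   = pv0*p/p0                           % partial pressure drops with the total pressure, about 3.0 kPa
T_cloud = fzero(@(T) psat(T) - pv, 20)    % cloud-forming (saturation) temperature, about 24 deg C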

4. Energy transfer with and without mass: convection, conduction and radiation. Second
Law limitation on energy transfer.
The first law of thermodynamics was concerned with energy conservation in
something we call a system. What a system is was left vague in this course, but it
should be intuitively understood from the context. Now, we extend the analysis of
thermodynamic systems in two directions. First, the system is associated with a
volume with defined boundaries. These boundaries need not be fixed and the volume
may change. Second, energy balance alone is very seldom sufficient. We need to
consider the mass m of, or in, the system. In classical physics, mass is a quantity
that is conserved rather trivially. The rate of change of mass in the system is equal
to the net flow-rate of mass into the system across its boundary. (Outflow rate is
negative inflow rate.) This leads to the almost tautological differential equation

\frac{dm}{dt} = \sum_j \dot{m}_j          (1.38)
It is true that the actual calculation of the mass-flow rate may be complicated in
fluid and gaseous phases and that is why we need experts in mass transfer theory.
As a consequence of the mass transfer across the system boundary, we now have to
deepen the understanding of energy conservation in such systems.
Clearly, all mass has some internal, kinetic and potential energy. This energy is
carried across the system boundary together with the mass that crosses this
boundary. Let the boundary have a number of mass crossing areas with uniform
internal, kinetic and gravitational energies. Hence, eq. (1.12) needs to be extended
as follows.

\frac{d}{dt}\big(E_k + E_p + E_i\big) = P + \dot{Q} + \sum_j \dot{m}_j\left(u_j + \frac{v_j^2}{2} + g\,z_j\right)          (1.39)

It must not be forgotten that the work transfer rate P does not include work done by
the gravitational force on the mass within the system boundary – it was split off the
work transfer rate and redefined as potential energy change in the system. Here,
another splitting of work is added for cases where mass is transferred in the form of
fluids or gases. This transfer contains a significant amount of work spent on
overcoming pressure at the system boundary. This is related to work done to change
volume in eq. (1.11) and is split off of P. At a uniform fluid boundary, the flow work
rate can be calculated as \dot{m}\,p\,v – see p. 130 in [1]. We shall continue using P
to denote all other energy transfer rates across the system boundary by work! Hence
eq. (1.39) becomes

\frac{d}{dt}\big(E_k + E_p + E_i\big) = P + \dot{Q} + \sum_j \dot{m}_j\left(u_j + p_j v_j + \frac{v_j^2}{2} + g\,z_j\right)          (1.40)
This is the main (or only?) reason why the notion of enthalpy was introduced in
thermodynamics – h = u + p\,v – and

\frac{d}{dt}\big(E_k + E_p + E_i\big) = P + \dot{Q} + \sum_j \dot{m}_j\left(h_j + \frac{v_j^2}{2} + g\,z_j\right)          (1.41)

Just remember, in eq. (1.41), P excludes work done by gravitational field forces on
the system and work done to move material over the boundary at fluid interfaces
(that includes gases). The former is accounted for in potential energy and the latter
is accounted for in the specific enthalpy term. Some may call it an accounting trick,
I call it useful tautology.
Let us turn the attention to heat transfer. It is traditionally classified into three main
forms: heat transfer by conduction, by convection and by radiation.
Energy transfer by conduction takes place in solids, liquids and gases. Its rate is in
practice proportional to the temperature gradient and heat is transferred in the
opposite direction of the gradient. In one-dimensional transfer along the x axis that
is directed into the system, through a surface area dA that is perpendicular to x, one
can write

d\dot{Q} = -\kappa\,\frac{\partial T}{\partial x}\,dA          (1.42)

The thermal conductivity is denoted by \kappa. Generally, good electrical conductors are
also good thermal conductors and vice versa. For example, thermal and electrical
conductivity of solid aluminium are 240 W/(K·m) and 38 MS/m respectively. For
solid copper they are 400 W/(K·m) and 60 MS/m. Deionized saturated water
(without additives) has a thermal conductivity of only around 0.5 W/(K·m) and a
matching electrical conductivity of somewhere above 5 µS/m.
Energy transfer by convection is less precisely defined – it takes place at solid (or
liquid) surface of area dA at a temperature Tb in contact with moving gas or liquid at
a temperature Tf. Its rate is in practice proportional to the temperature difference
and heat is transferred in the direction of the lower of the two temperatures.
Assuming that Tf indicates the temperature outside the system,
d\dot{Q} = h\,(T_f - T_b)\,dA          (1.43)
The heat transfer coefficient h is an empirical number and depends strongly on the
fluid flow speed relative to the system boundary. Table 2.1 of [1] indicates values of
h in the range of 2-25 W/(K·m2) for gases in free convection and in the range of 25-
250 W/(K·m2) for gases in forced convection. Achievable values are orders of
magnitude higher with liquids.
Energy transfer by radiation takes place between solids, liquids and gases.
Calculation of its rate is very interesting, while rather complicated, but it goes
beyond the scope of this course. However, the foundation of these calculations is
simple enough.
All matter can emit and absorb electromagnetic radiation – in thermodynamics it is
called thermal radiation. The rate at which energy is emitted from a surface element
dA at a temperature Tb is quantified by a modified Stefan-Boltzmann law

d\dot{Q}_e = \varepsilon\,\sigma\,T_b^4\,dA          (1.44)
The emissivity \varepsilon of the surface depends on surface properties. The Stefan-
Boltzmann constant \sigma = 5.67\times 10^{-8}\,\mathrm{W/(m^2\,K^4)}. The complications in the
calculation of the net heat transfer arise from various causes, such as directional
variation of radiation, highly variable reflection or absorption of emissions from
other (visible) surfaces of varying temperatures and spectral influence on emissivity,
reflectivity and absorptivity. See for example [2] for further reading and Appendix 1
in [7] where the present writer documents the whole dynamical simulation model for
an industrial brazing furnace with predominantly radiated heat transfer.
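
To get a feeling for the relative magnitudes of the three mechanisms, the following MATLAB lines compare them across 1 m2 for a 10 K temperature difference. The chosen wall thickness, convection coefficient and emissivity are illustrative assumptions within the ranges quoted above:

A = 1;  dT = 10;                            % area [m^2] and temperature difference [K]
kappa = 240;  L = 0.01;                     % aluminium thermal conductivity [W/(K*m)], assumed 10 mm plate
Q_conduction = kappa*A*dT/L                 % steady one-dimensional form of eq. (1.42): 240 kW
h = 10;                                     % assumed free-convection coefficient [W/(K*m^2)]
Q_convection = h*A*dT                       % eq. (1.43): 100 W
sigma = 5.67e-8;  emis = 1;                 % Stefan-Boltzmann constant, assumed black surfaces
Q_radiation = emis*sigma*A*(310^4 - 300^4)  % net exchange from eq. (1.44) between 310 K and 300 K: about 64 W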
Finally, we need to mention the second law of thermodynamics. Actually, there isn't
one. There are two popular explanations of the so-called second law – the Clausius
statement of the second law and the Kelvin-Planck statement of the second law.
Experts say they are equivalent. Hence, only one should suffice here as an
introduction.
The primary motive for the second law seems to have been the scientists' inability to
predict the direction of processes. The knowledge of the first law alone does not
(necessarily) determine the direction of a process. For example: will kinetic energy be
converted into potential energy, or the other way around; will a system store internal
energy on the basis of work transfer into the system, or will internal energy be spent
on the system transferring work to other systems. It seems engineers do not have such
doubts.
Be that as it may, electrical engineers should be puzzled by the lack of evidence for
spontaneous heat flow against temperature gradient – there is no such lack of
evidence of electrical current flowing spontaneously against electric potential
gradient. For example, an alternating current transformer is perfectly able to raise
the voltage difference from the input to the output purely by passive means – there
is no need to add work into this process/system. In well designed transformers, the
losses are negligible – relatively speaking. Mechanical engineers are envious,
because they have not been able to find a 'transformer' that would raise temperature
differences without irreversibly adding work and losing very significant amounts of
energy.
We quote from p. 177 of [1]. "The Clausius statement of the second law asserts that:
It is impossible for any system to operate in such a way that the sole result
would be an energy transfer by heat from a cooler to a hotter body."
The most obvious two systems that relate directly to this formulation of the second
law are refrigerators and heat pumps. Both transfer energy from cooler to hotter
bodies. This does not violate the second law because both systems receive work from
outside the system. Their compressors are driven usually by electric motors or
directly by mechanical motors. Less commonly, electro-thermal effects, such as
Seebeck or Peltier, are used. In all known cases, there is some other effect that
accompanies heat transfer when energy is transferred in the direction of the
temperature gradient.

One of the most important engineering corollaries to the second law relates to
systems that produce work cyclically between two thermal reservoirs. One is at the
hot temperature THot and the other is at the cold temperature TCold. QHot and QCold are
heats transferred to the system from the hot reservoir and from the system to the
cold reservoir respectively during one cycle. The work produced during the cycle is
W_{cycle} = Q_{Hot} - Q_{Cold}. If we are interested in a machine that uses thermal energy to
produce work, then the thermal (or energy) efficiency is defined as

\eta = \frac{W_{cycle}}{Q_{Hot}} = 1 - \frac{Q_{Cold}}{Q_{Hot}}          (1.45)

For refrigerators and heat pumps, the coefficients of performance are defined as
COP_R = \frac{Q_{Cold}}{W_{cycle}} and COP_{HP} = \frac{Q_{Hot}}{W_{cycle}} respectively.
From the second law of thermodynamics, the theoretically maximum efficiencies and
COPs of thermal cycles are

\eta_{max} = 1 - \frac{T_{Cold}}{T_{Hot}}, \qquad COP_{R,max} = \frac{T_{Cold}}{T_{Hot} - T_{Cold}}, \qquad COP_{HP,max} = \frac{T_{Hot}}{T_{Hot} - T_{Cold}}          (1.46)
Since the cold side temperature is mostly just the environmental temperature of
about 300 K, the only way to improve an efficiency is by maximising the hot side
temperature. That means, for example, burning fuel in turbines and car motors at
maximum possible temperature – which is limited by materials used in such
turbines or motors.

Example: COP of a refrigerator.


Assume the food needs to be kept at 7 °C in a hot summer day where the
environmental temperature is 37 °C. The theoretically possible COP is 280/30 = 9.3.
In practice, much smaller numbers are achieved because of various losses.
End of example.
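
A one-line MATLAB check of eq. (1.46) with the temperatures of this example:

TCold = 280;  THot = 310;                % food and environment temperatures [K]
COP_fridge_max = TCold/(THot - TCold)    % about 9.3, as in the example
eta_max = 1 - TCold/THot                 % Carnot efficiency of a work-producing cycle between the same reservoirs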

II General Law of Conservation

1. Definition of system and its boundary.


A system can be anything or any region in space, but for the purposes of this
course, it has to have a well defined boundary that separates the inside (the system)
and outside (the environment of the system). The boundary does not have to be
physically different from the system or from its environment. It just needs to be well
enough defined. The system may be infinitesimally small or it may have significant
size.

2. Quantity and its species – conservation thereof.


On the basis of current general knowledge, there are only three fundamental
physical quantities in the world: mass, energy and electrical charge.
In classical physics, all three are indestructible, cannot be created, and cannot be
transformed from one to the other. The last observation is not critical at all to the
following theory and may even be incorrect. The relativity theoretical equivalence of
mass and energy may be deeply related to the question about the difference between
inertial and gravitational mass – which does not seem to have been answered
properly. Einstein ‘simply’ postulated them to be the same – there was no counter
evidence apparently.
In some situations, it may be convenient to define space as the fourth quantity. One
can arbitrarily define non-physical quantities. One such example is wealth (or its
subspecies money) in hypothetical economic systems – in reality it is not a quantity
because it can be created and destroyed by governments or other actors. On the
other hand, it may be convenient to model wealth as being – approximately or over
short periods of time – a quantity.
In describing real systems, it may be useful or necessary to distinguish between
different species of the quantities. For example one may be interested in the
dynamically changing mass of water vapour in the total mass of air. One may be
interested in the kinetic energy as one species of energy. In classical physics and chemistry, a species of
mass can be transformed from one species of mass to another species of the same
quantity.
In conclusion, the quantities are conserved. The species of the quantities need not
be conserved individually.

Example: Was the world created?


Believers of ‘creation' should not have any conflict with the law of conservation.
Various flavours of bible do not state that god created the world from nothing. God
allegedly created the world from something called 'tohu-va-bohu'. Clearly the
statement alleges transformation instead of creation. The meaning of the pre-world
substance is not explained in the written evidence.
End of example.

3. Rate of change of quantity in a system.


Introductory level thermodynamics deals with mass and energy conservation. Only
transfer across the system boundary was considered in these differential
equations – such as (1.12), (1.38), or (1.41) above.
Here we shall deal with species of various quantities. These species too are
quantities for the purpose of a general theory of dynamical system modelling. If Q is

a quantity then the generalised statement of conservation of such quantity can be
written as follows.

\frac{dQ}{dt} = \dot{Q}_{transfer} + \dot{Q}_{transform}          (2.1)
'Transfer' means transfer into the system across its boundary. 'Transform' means
transformation from another species of the quantity within the system boundary.
This equation is helpful in explaining why such properties of systems as
temperature and pressure are not quantities.

4. Examples of lumped systems from various domains: mechanical systems, power systems,
electrical systems and circuits, economics, and others.

Example: Domestic freezer.


4 kg of vegetables and 2 kg of meat are at a room temperature of 25 °C. Both are
predominantly water. All 6 kg of food is put into a freezer and frozen to –10 °C. We
shall calculate the shortest possible time that it takes to freeze the food from 25 °C
to –10 °C. We shall solve this problem without differential equations – it is that
simple.
The freezer is connected to 240 V AC supply via a 10 A circuit breaker. We assume
that the freezer has a COP of 4.
The electrical motor uses electrical power to transfer work to the freezer and thereby
remove internal energy from the 6 kg of food. The maximum power available is
240×10 W = 2.4 kW.
The specific heat of food/water is about 4.2 kJ/(kg·K) and that of ice is about 2.1
kJ/(kg·K). The latent heat of ice melting is about 336 kJ/kg. In total, the freezer
needs to remove 6×(4.2×25 + 336 + 2.1×10) kJ = 2772 kJ of energy. A quarter of that
amount is required from electricity supply: 2772/4 kJ = 693 kJ.
Hence the minimum duration of this freezing is 693/2.4 s = 4 minutes and 49
seconds.
 Which thermodynamic part of this process lasts the longest – by far – and
what is its duration in minutes and seconds?
End of example.
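
The same energy bookkeeping can be scripted in MATLAB; all numbers are those given in the example:

m = 6;                                        % mass of food [kg], treated as water
c_water = 4.2;  c_ice = 2.1;                  % specific heats [kJ/(kg*K)]
h_freeze = 336;                               % latent heat of freezing [kJ/kg]
Q = m*(c_water*25 + h_freeze + c_ice*10)      % internal energy to be removed [kJ]: 2772 kJ
W = Q/4;                                      % electrical energy with COP = 4 [kJ]
P = 240*10/1000;                              % maximum electrical power [kW]
t_min = W/P                                   % minimum freezing time [s]: about 289 s (4 min 49 s)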

Example: Speed of a motorcar.


A small motorcar of 1300 kg laden mass and with a motor capable of delivering up
to 50 kW of power is accelerated on an even and straight road from 50 km/h. What
is the shortest time it takes for this car to reach 100 km/h? We want this answer to
be realistic.
An over-simplified answer would simply consider the given data above. Since power
P is the scalar product between the force and velocity vectors and both vectors are
aligned, according to the road related assumption, we deal with scalar force and
speed and P = F\,v. Obviously, the fastest acceleration is obtained at the maximum
force at any speed: F_{max} = P_{max}/v. The corresponding acceleration is dv/dt = P_{max}/(m\,v)
and hence m\,v\,dv = P_{max}\,dt. Separating variables yields v\,dv = (P_{max}/m)\,dt.
Integrating yields

\frac{v_2^2 - v_1^2}{2} = \frac{P_{max}}{m}\,T_{min}          (2.2)
Solving for T_{min} and substituting numbers yields

T_{min} = \frac{m\,(v_2^2 - v_1^2)}{2\,P_{max}} = \frac{1300\,(27.8^2 - 13.9^2)}{2\cdot 50000}\,\mathrm{s} \approx 7.5\,\mathrm{s}          (2.3)
This is a good starting point but not very realistic for the given 'under-powered'
passenger vehicle. (The corresponding zero-to-hundred time would be 10 seconds –
check it!) One of the problems is that car motors cannot be operated at a constant
(maximum) power delivery mode, which is motor speed dependent, among other
things. Here we shall consider another shortcoming of the above calculation – it
ignores entirely the losses between the power input and the car acceleration.
In order to keep it as simple as possible, we consider friction losses dominated by air
friction. Friction in the air is approximately proportional to the square of the speed:

F_f = k_f\,v^2          (2.4)
We begin by considering the conservation of the kinetic energy of the laden car, Ek.
Its rate of change is

\frac{dE_k}{dt} = P - F_f\,v          (2.5)

Substitute E_k = \frac{m\,v^2}{2} and F_f = k_f\,v^2 into eq. (2.5):

m\,v\,\frac{dv}{dt} = P - k_f\,v^3          (2.6)

Let P = P_{max} = 50\,\mathrm{kW} and k_f = 0.8\,\mathrm{kg/m} (the value consistent with the maximum speed
calculated below). Substitute these and the other given values into
the differential equation (2.6):

1300\,v\,\frac{dv}{dt} = 50000 - 0.8\,v^3          (2.7)

The initial condition is v(0) = 50\,\mathrm{km/h} \approx 13.9\,\mathrm{m/s} and the end-condition is
v(T_{min}) = 100\,\mathrm{km/h} \approx 27.8\,\mathrm{m/s}. I
do not know of an analytical method to solve this non-linear differential equation.
The numerical solution of this problem needs to wait until much later in this course.
However, this car's maximum (steady) speed can be calculated presently. From eq.
(2.7) we obtain it as 143 km/h.
End of example.
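
Although the systematic treatment of numerical integration comes later in the course, eq. (2.6) is already easy to integrate with a simple explicit Euler loop in MATLAB. The friction coefficient kf = 0.8 kg/m is an assumption back-calculated from the stated 143 km/h maximum speed:

m  = 1300;  P = 50e3;  kf = 0.8;     % mass [kg], power [W], assumed friction coefficient [kg/m]
v  = 50/3.6;  vEnd = 100/3.6;        % 50 km/h and 100 km/h in m/s
dt = 1e-3;  t = 0;                   % step size [s] and elapsed time [s]
while v < vEnd
    dvdt = P/(m*v) - kf*v^2/m;       % right-hand side of eq. (2.6)
    v = v + dt*dvdt;                 % explicit Euler step
    t = t + dt;
end
t                                    % acceleration time [s], somewhat above the lossless 7.5 s
vmax = (P/kf)^(1/3)*3.6              % steady maximum speed [km/h], about 143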

Example: Charge transfer between capacitors.


The circuit consists of two capacitors connected by a resistor.

Figure 2.1: Charge transfer through resistor. The switch is closed at t = t0.
Just before the switch is closed at the initial time t0, each capacitor has an initial
electrical charge – q01 and q02.
If we are interested in the entire circuit of Figure 2.1 as a system, then there is
nothing much of interest happening – the total charge q = q01 + q02 is conserved and
constant. The mass is not changing and there is no apparent energy transfer either
(we shall return to this later). Everything happens within the system. If this is of
interest, then we need to model at least one of the capacitors as a system (or a sub-

system). However, it is usually more straight-forward to model each capacitor as a
separate system – but not always.
Now we need to decide which fundamental quantities are of interest here. Mass is
not changing. Hence, there is no need to write mass conservation equations.
Electrical charge and energy in a capacitor are related algebraically:

E = \frac{q^2}{2\,C} = \frac{C\,u^2}{2}, \qquad q = C\,u          (2.8)
Hence only one of them needs to be modelled – the other can always be calculated
from the model. We shall model the charges. Charge transfer rate is current i. Let us
define the current direction through R as positive when it flows from C1 to C2. Hence
the two charge conservation equations are

\frac{dq_1}{dt} = -i, \qquad \frac{dq_2}{dt} = i          (2.9)
In order to solve these differential equations we need to define the current i. Before
that, we need to calculate the voltages at the two capacitors: u_1 = q_1/C_1 and u_2 = q_2/C_2. Hence,

i = \frac{u_1 - u_2}{R} = \frac{1}{R}\left(\frac{q_1}{C_1} - \frac{q_2}{C_2}\right)          (2.10)
The resulting model can be written as

\frac{dq_1}{dt} = -\frac{1}{R}\left(\frac{q_1}{C_1} - \frac{q_2}{C_2}\right), \qquad \frac{dq_2}{dt} = \frac{1}{R}\left(\frac{q_1}{C_1} - \frac{q_2}{C_2}\right)          (2.11)
Since this course is taught to electrical engineering students, an alternative
formulation of the same model is preferable, because electrical engineers do not
usually measure charges. Scientists may do so but not engineers. Hence, we shall
substitute q_1 = C_1 u_1 and q_2 = C_2 u_2 into the left-hand side of eq. (2.9). Notice that only for constant
capacitances do we obtain the simple differential equations for voltages,

\frac{du_1}{dt} = -\frac{u_1 - u_2}{R\,C_1}, \qquad \frac{du_2}{dt} = \frac{u_1 - u_2}{R\,C_2}          (2.12)
We shall deal with solving such systems of linear differential equations a little later.
Presently, we return to the equation of current by noting that at all times the total
charge q = q_1 + q_2 remains constant and q_2 = q - q_1. Hence, only one of the two
differential equations in eq. (2.11) needs to be solved to determine all transients in
this particular special case:

\frac{dq_1}{dt} = -\frac{1}{R}\left(\frac{q_1}{C_1} - \frac{q - q_1}{C_2}\right)          (2.13)
The solution to this first order differential equation is

q_1(t) = q_{1\infty} + (q_{01} - q_{1\infty})\,e^{-(t - t_0)/\tau}, \qquad \tau = R\,\frac{C_1 C_2}{C_1 + C_2}          (2.14)
After long enough time, the final charge distribution on the capacitors will be

q_{1\infty} = \frac{C_1}{C_1 + C_2}\,q, \qquad q_{2\infty} = \frac{C_2}{C_1 + C_2}\,q          (2.15)

Clearly, the total charge in the combined system cannot change in this example, but
what about energy? Let us start with the electrical energy in the two capacitors. At
the initial time, the total electrical energy is given as

E_e(t_0) = \frac{q_{01}^2}{2\,C_1} + \frac{q_{02}^2}{2\,C_2}          (2.16)
The final total electrical energy is obtained from eq. (2.15) as

E_e(\infty) = \frac{q_{1\infty}^2}{2\,C_1} + \frac{q_{2\infty}^2}{2\,C_2} = \frac{q^2}{2\,(C_1 + C_2)}          (2.17)
It is straightforward to show that the final energy is mostly less than the initial
energy by the following amount.

\Delta E_e = E_e(t_0) - E_e(\infty) = \frac{C_1 C_2}{2\,(C_1 + C_2)}\left(\frac{q_{01}}{C_1} - \frac{q_{02}}{C_2}\right)^2          (2.18)
If, for example, C1 = C2 and only one of the two capacitors is initially charged, then
exactly one half of the electrical energy is ‘lost’ – compare equations (2.16) and
(2.17). Where does it go?
Well, students are taught that current flowing through a resistor heats this resistor.
That means energy is transferred from the capacitor subsystems to the resistor
subsystem. The current transient is obtained from eq. (2.10) by substituting q1 from
eq. (2.14):

i(t) = \frac{1}{R}\left(\frac{q_{01}}{C_1} - \frac{q_{02}}{C_2}\right) e^{-(t - t_0)/\tau}          (2.19)

The heating rate is classically given as R\,i^2. Integrating it gives the total energy
transfer to the resistor:

\int_{t_0}^{\infty} R\,i^2(t)\,dt = \frac{C_1 C_2}{2\,(C_1 + C_2)}\left(\frac{q_{01}}{C_1} - \frac{q_{02}}{C_2}\right)^2 = \Delta E_e          (2.20)
That seems satisfactory – at first sight. All of the lost electrical energy went to
heating the resistor. Curiously, this is entirely independent of the value of the
resistance, R!
Careful observers of electrical experiments should now object to something!
Something is not right here. The reason for my raising this point is the following.
If one connects two charged capacitors in parallel by closing a switch, or simply
bringing the capacitor leads into electrical contact, then one cannot fail to notice a
flash of light and an explosive sound. We assume negligible resistance and we
assume the voltages of the two capacitors are different before closing of the switch.
The light flash is electromagnetic radiation that removes some energy from the
combined system. Also, sound waves transport energy from this system. Both
amounts of energy must come from the difference in the electrical energy in eq.
(2.18). If nothing was heated – when there is no resistance – then all of the energy
difference is radiated away by electromagnetic field and by sound waves in a
medium. How the electrical energy loss is distributed between heating,
electromagnetic radiation, sound waves, or something else, I do not know precisely –
it would be very complicated to calculate this distribution from all available
theories/equations. However, the sum of these losses must be given by eq. (2.18) if
we believe in thermodynamics and hence in energy conservation.
In this circuit, with or without a resistor, charge transfer from one capacitor to another
is energy inefficient. Is this necessarily so? To answer this question in the negative,
one needs to find ways of reversing the lost energy difference in eq. (2.18). A thermal
loss is almost always only partially recoverable, if at all. However, electromagnetic
field loss is recoverable if it can be sufficiently contained near the capacitor circuit.
One such containment is achieved when the resistor R in Figure 1 is replaced by a
variably configured inductor. The inductor stores/contains energy in its magnetic
field and is capable of transferring practically all of it to a capacitor. A portion of
energy may be radiated away, but that can be limited by avoiding too high frequency
transients. This is the foundation for energy efficient switched-mode power supplies
of buck-boost type.
End of example.

Example: Personal wealth.


Money is something that is instinctively used in economics as a quantity – after all
money is counted. The history of money is interesting, but for more than half a century the world has been using ‘fiat money’. Unfortunately, it can be created from nothing and can be destroyed without transformation or transfer. However, for much of private use, it can be modelled as if it were a quantity. In fact, all economic
value is measured in the units of money. I do not know of a state – apart from Israel
– where the change in the purchasing power of money is used to regularly
recalculate the worth of loans or other monetary obligations. For the sake of this
simplified example, macro-economic inflation and devaluation are ignored. This has
been well justified in Israel from at least 2011 until now (2018).
Wealth can be defined in many ways. Let us define it as the price of everything a
person owns and can sell in the somewhat hypothetical market. Personal
consumables are therefore not a significant part of wealth. On average, the most
significant part of a person’s wealth in monetary terms is real estate – land and
buildings. Machines and expertise are parts of wealth, so are farm animals. It goes
far beyond this engineering course to even attempt a description of what determines
the market price of the various components of wealth from one day to another – let
us ignore the market volatility or other changes of the price.
Denote a person’s wealth with w (measured in shekels). The growth rate of this
wealth is equal to the wealth acquisition rate plus wealth transformation rate. In
case of a working person with no ownership of production facilities or other forms of
investment, the model is simple.

(2.21)
A richer person may own productive capital, which depreciates at some rate that is
specific to the type of capital. Depreciation could be considered as destruction of
wealth, or it could be considered as transformed into the product that is sold to
create income – closer analysis reveals that there is some of both in reality. The
model then becomes

(2.22)
Do not assume that the additional negative wealth rate makes the rich person
poorer than the working person. While this is true in many cases, successful rich people arrange their earnings to be much larger than those of a working person
– because they can.
Equation (2.22) is a macro-economic model – it does not show the interactions
between the many persons that collaborate and compete in the same market, state,
or world. A more realistic model has to include many such differential equations and
additional equations that describe the interactions between these differential
equations. I think this should suffice as a starter. For example, progressive taxation of income, property and most other things could be considered part of the expenses, but their calculation can be very complex.
End of example.

Example: Three-phase turbo-generator.


The system of interest is a three-phase electrical generator and that section of the
turbine that is mechanically coupled to the generator axis. If we ignore the changes
in the internal energy of the turbo-generator, then the only relevant part is the
rotational kinetic energy . J is the effective rotational inertia of the turbo-
generator and Ω is the angular speed of rotation – the supply frequency is
proportional to it.
Energy transfer rate is power. The turbine receives some of the fuel power Pfuel that is
combusted (burnt). The fraction of the fuel power that is coupled to the generator
can be called the efficiency η. This efficiency may vary with operating conditions,
including with the speed of rotation. The generator delivers electrical power Pel.
Thus, the energy balance equation can be written as

(2.23)
If U and I denote the root mean square (RMS) line voltage and line current
respectively, then the three-phase generator delivers electrical power to a balanced
three-phase load according to

(2.24)

where is the well-known power factor. is the phase angle between the
sinusoidal phase-voltage and phase-current and it is equal to zero for ‘real’ or
resistive loads. Much engineering effort is put into designing and operating
electricity grids near unit power factor. Hence, is assumed in this example.
Now, we restrict attention to the case where a single turbo-generator unit supplies
electrical energy to a balanced resistive three-phase load. In addition, we consider
the load to be star-connected (Y-connected). If each phase resistance of the load is
denoted with Rp then the supply line current is related to the line voltage as

(2.25)

Hence, and

(2.26)
Inside the generator, the generator speed is related to the generated voltage through
the magnetic field. Assuming negligible armature winding impedance, this relationship
is approximated by

(2.27)
where B denotes the (rotating) magnetic field flux density in the air gap between the
rotor and stator poles:

(2.28)
Ix is a DC current that may be supplied to the rotor coils via slip rings. Power
systems practitioners tell us that the excitation current is proportional to the
excitation voltage:

(2.29)
With this understanding, equations (2.27) to (2.29) can be combined to

(2.30)
In real generators, kx varies with the generator’s speed of rotation. Substitution into
eq. (2.26) yields
(2.31)
With constant inertia J, we obtain the final version of this model

(2.32)
In power systems engineering, there seems to exist a belief that field excitation
controls the generator voltage. However, eq. (2.26) does not allow constant rotor
speed in case of (transient) power imbalances and, in steady state, the excitation
cannot affect the voltage in eq. (2.30). Instead, it controls the steady state generator
speed.
In steady state, the generator rotates at a constant speed Ω and delivers electrical
power at the line voltage of

(2.33)

Substituting into eq. (2.32) and solving for Ω yields

(2.34)
End of example.

III State Equations of lumped systems

1. Definition of state.
Using the conservation law of quantities leads quite naturally to a set of first order
differential equations that describe both the transient and the steady state
behaviour of the system of interest. Thermodynamics and system theory define the
state of a system differently, but the longer one studies both, the more similar these definitions look. The rest of this Chapter III follows closely what I wrote almost
30 years ago in a book that deals with optimal estimation, [3].
Any variable that is differentiated on the left hand side of the conservation equations
is a state variable. We combine all these variables into a state vector. The state
vector defines the state of the system. We standardise the denotation as follows.
 x is a state variable and x is a state (column-) vector with n variables xi as its
components.
 u is an input variable and u is an input (column-) vector with p variables ui
as its components.
 y is an output variable and y is an output (column-) vector with q variables
yi as its components.
 n is the order of the system.
For example, in eq. (2.32) the generator rotational speed is the state. The excitation
voltage and fuel power are inputs. Anything else of interest can be defined as
output. For example the supply frequency or the supply voltage can be outputs if
they interest us. Both are calculated from the state and inputs – usually in an
algebraic equation.
Using this standard denotation we combine all equations into two vector equations.

2. State differential equations.

(3.1)

The solution to the state differential equation is defined for any given input
by the initial condition . The dot above x indicates the time derivative.

3. Output equations.
The second equation is the output equation.

(3.2)

4. State equations – initial value problem.


Both of these equations together are called the state equations. Mathematically, they
constitute an initial value problem because, in this course, the initial state is
defined. In optimisation problems, often the same state equations constitute a final
value problem, or a boundary value problem – depending on whether the final state
is given alone or a combination of initial and final values is given.

(3.3)
The state equations are generally non-linear and all variables are generally functions
of time. Sometimes, it is convenient to show explicitly the system’s dependence on
some parameters that are considered as separate from the inputs. If these
parameters are combined into a parameter vector p and they too depend on time t
then we get a more detailed form of a general model of a dynamical system:

(3.4)
For engineering design of dynamical systems, there are not many good design
techniques that are applicable or effective on non-linear models. Fortunately, often a
good enough linear approximation to eq. (3.3) can be found. See below.

Example: Motorcar.
Let us consider the motorcar equation (2.6).

(3.5)
Since we may be interested in the location of the car along a given path, we add the
trivial ‘conservation of space’ equation that indicates how the travel distance s
changes over time:

(3.6)
The state vector contains the speed v and distance s in arbitrary order. Let us define

(3.7)
We choose the variable motor power as the input u. Let the distance travelled, s, and
the travelling speed be the outputs of interest:

(3.8)
The complete state equations for this example are:

(3.9)
The system parameters are kfrict and m.
End of example.
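As a small illustration, the state derivative of eq. (3.9) can be coded as a MATLAB function. Since eq. (2.6) is not reproduced in this section, the power and drag terms below are only an assumed placeholder form (with parameter values borrowed from Summative Exercise 1); the actual right-hand side should be taken from eq. (2.6).

function fx = motorcar_sde(x, u)
% motorcar_sde: sketch of the state derivative for the motorcar model (3.9).
% x(1) = speed v [m/s], x(2) = distance s [m], u = motor power P [W].
% The force balance below (P/v driving force, quadratic drag) is an assumption.
m = 1300;                             % vehicle mass [kg] (from Exercise 1)
kfrict = 0.8;                         % drag coefficient (from Exercise 1)
fx = [ (u/x(1) - kfrict*x(1)^2)/m ;   % dv/dt - assumed power/drag balance
       x(1) ];                        % ds/dt = v, the 'conservation of space' eq. (3.6)
end

Such a function can then be driven by any of the integration methods of Chapter V; for example, one explicit Euler step is x = x + dt*motorcar_sde(x, 50e3).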

IV Linear systems

1. Linearisation of state equations.

All of the non-linearity in eq. (3.3) is contained in the algebraic functions


and . Sometimes one is interested in a range of the independent variables, in
which one can replace the curved hyper-surfaces fi and gj with planes that
approximate the curved surfaces in some sense. One of the most popular
approximations, however, is obtained by concentrating on some vicinity of an
operating condition. The operating condition may be varying in time and is defined as

(4.1)
Equation (4.1) defines an operating trajectory. If the operating condition is constant,
then it is called an operating point. We assume that the first derivatives of
and exist at and . If these derivatives do not exist at the operating
condition then the system cannot be approximated with a linear system at this
trajectory or point. Now, the idea is to expand both non-linear functions in the
Taylor expansion. The Taylor series does not need to converge – we shall ignore the
second and higher order terms. Formally, we write

(4.2)
HOT stands for ‘Higher Order Terms’. It is customary to denote the four Jacobian
matrices as follows.

(4.3)
For small enough deviations from the operating condition, we obtain a linear
approximation for the state equations:

(4.4)
This is not what is found in the systems or control textbooks used by professors in
universities all over the world. The ‘standard’ formulation of the textbook state
equations is obtained by two additional assumptions:
 The operating condition is a steady state operating point.
 The state equations are written for the (small) deviations from this steady
state operating point.

Steady state is understood literally – the state . (Usually, this
requires that also – a deeper analysis of this exceeds the scope of this
text.) It follows that . The deviations are denoted by ,
, and . Now, the approximate linear model with constant
coefficients is obtained from eq. (4.4) as

(4.5)
Equation (4.5) is not really an equation that can be solved for the state vector ∆x – it
merely states that there is some approximation between the left and right hand
sides. It is customary to replace the approximation sign with the equality sign and to
drop the deviation sign ‘∆’. With this ‘sleight of hand’, we obtain the standard form of
state equations:

(4.6)
Note that the solution of eq. (4.6) does not give us the original x or ∆x – it is an
approximation only. Among all the engineering approximations, this is usually not
among the most important considerations.
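Where analytic differentiation is inconvenient, the Jacobian matrices of eq. (4.3) can also be estimated numerically by finite differences. The MATLAB sketch below is a minimal illustration for a two-state, single-input system; the function handle and the operating condition are arbitrary assumptions.

% Numerical linearisation sketch (central finite differences).
f  = @(x,u) [ -x(1) + x(1)*x(2) + u ;  -2*x(2) + x(1)^2 ];  % illustrative f(x,u)
x0 = [1; 0.5];  u0 = 0.5;             % assumed operating condition
n  = numel(x0);  h = 1e-6;            % perturbation size
A  = zeros(n);
for k = 1:n
    dx = zeros(n,1);  dx(k) = h;
    A(:,k) = (f(x0+dx,u0) - f(x0-dx,u0))/(2*h);   % k-th column of df/dx
end
B = (f(x0,u0+h) - f(x0,u0-h))/(2*h);              % df/du for the single input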

Example: … of linearisation.
A system is defined as:

(4.7)

Derive the linear state equations around the steady state for . The solution
follows.
In steady state,

(4.8)
It follows from eq. (4.3):

(4.9)
These matrices are to be substituted into eq. (4.6).
A comparison between the non-linear and linear system output transients is shown
in the following two figures. In both figures, the systems start from the same steady
state operating condition. In Figure 4.1, the deviations are ‘small’ and the linear model is a good match. In Figure 4.2, the deviations are evidently not ‘small’ and the linear model step response deviates significantly from the response of the original (non-linear) system. Calculation of these results is based on material that is
presented in Chapter V, further ahead.

[Time series plot: ‘non-linear output’ and ‘linear output’ versus Time (seconds); vertical axis: data.]

Figure 4.1: Small deviations from the steady-state operating condition. The input
deviation is a step of +0.05 at t = 5.

[Time series plot: ‘non-linear output’ and ‘linear output’ versus Time (seconds); vertical axis: data.]

Figure 4.2: Significant deviations from the steady-state operating condition. The
input deviation is a step of +0.5 at t = 5.
End of example.

 Linearise the motorcar model (2.6) around 80 km/h. Extend this linearisation to the more complete model (3.9) – this requires addition of a parameter vector to the conventional LTI state differential equation.

2. Transient behaviour of the state and system.

In the time domain analysis, the matrix exponential is of crucial importance. It
is defined by taking the Taylor series of the scalar exponential and replacing a
with A:

(4.10)

This series converges for any (square) A and t – hence is analytic.


Some important properties of the matrix exponential are:

 (4.11)
 (4.12)

 (4.13)
In other words, the inverse of the matrix exponential always exists – the matrix exponential is never singular.

 can be differentiated and integrated in its series form:

(4.14)

(4.15)

 (4.16)
Now the (transient) time-domain solution of the state differential equation is found
as follows.

(4.17)

(4.18)

(4.19)

Change the variable and integrate from to . In linear system theory, it is convenient to define the initial time to be to the left of . Thus an impulse input
at is unambiguously after the initial time.

(4.20)

(4.21)
Thus, the transient solution of the state differential equation and the corresponding
output are given as

(4.22)
The matrix exponential is so prominent in the above solution that it is called a transition matrix . Its actual evaluation is not needed in this or in most other courses. If someone needs it, the Laplace transform can be used – see eq. (4.28) further below. For numerical simulation, it is important to note that

(4.23)
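Numerically, MATLAB evaluates the matrix exponential with expm. The sketch below computes the zero-input part of eq. (4.22) on a time grid; the system matrix and initial state are arbitrary illustrative values.

% Zero-input transient x(t) = e^(A t) x(0) via expm (illustrative values).
A  = [0 1; -2 -3];                    % assumed asymptotically stable matrix
x0 = [1; 0];                          % assumed initial state
t  = 0:0.05:5;
x  = zeros(numel(x0), numel(t));
for k = 1:numel(t)
    x(:,k) = expm(A*t(k))*x0;         % transition matrix applied to x(0)
end
plot(t, x); grid on; xlabel('time'); ylabel('state variables')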
 Calculate the time to accelerate from 50 to 100 km/h from the linearised
motorcar model.

3. Asymptotic stability of the state.


The linear time-invariant (LTI) system in eq. (4.6) is asymptotically stable iff . An equivalent alternative statement is: the LTI system is asymptotically stable iff all n eigenvalues of the system matrix A have negative real parts.

Example: … continuation.
The system matrix of the previous example is

(4.24)
The characteristic equation is

(4.25)
The eigenvalues are -2 and ±2j. The system is not asymptotically stable.
End of example.
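In MATLAB, this check amounts to inspecting the eigenvalues returned by eig; the matrix below is an arbitrary illustration, not the system matrix of the example above.

A   = [0 1; -5 -2];                   % illustrative system matrix (assumption)
lam = eig(A)                          % eigenvalues: here -1 +/- 2j
all(real(lam) < 0)                    % returns true iff asymptotically stable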

4. Transfer functions, poles and zeros.


Equation (4.6) is a time-domain representation of an LTI system. An alternative is to
represent the same system in the complex frequency domain. Without any serious
restriction of generality, we assume and we apply the single-sided Laplace
transform to the SDE in eq. (4.6):

(4.26)
Solving for X yields

(4.27)
Comparing equations (4.22) and (4.27) we must conclude that

(4.28)
From the same comparison we find that the Laplace transform of a convolution
integral is equal to the product of the Laplace transforms of the two convolved
matrices:

(4.29)
In classical systems and control theory, one is interested in the transfer
characteristics of systems. That means we assume . Now eliminate X from eq.
(4.27):

(4.30)
Whence the transfer matrix is defined as

(4.31)
For a single-input-single-output (SISO) system, this matrix is a scalar and it is
called the transfer function.

When the complex variable s is replaced by the imaginary variable , we obtain the system ‘frequency response’ , or . is the frequency measured in radians per suitable unit of time. However, the frequency response is not the system’s response to frequency. Rather, it indicates the system’s response gain to a sinusoidal input at the frequency . (This will be taught thoroughly in our
Signals and Systems course.)

Example: … continuation.
From eq. (4.31):

(4.32)
Matrix inversion requires more calculations than solution of the associated linear
system of equations (in this case). Define

(4.33)

Solve for B* by using Gauss elimination or whatever you prefer. Here,

(4.34)
The transfer function results as

(4.35)
End of example.
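Numerically, G(s) – or the frequency response G(jω) – can be evaluated directly from the state-space matrices by solving a linear system at each frequency, which avoids both symbolic algebra and explicit matrix inversion. The matrices in the sketch below are arbitrary illustrative values, not those of the example above.

% Frequency response from state-space matrices (illustrative SISO values).
A = [0 1; -2 -3];  B = [0; 1];  C = [1 0];  D = 0;
w = logspace(-1, 2, 200);             % frequency grid [rad/s]
G = zeros(size(w));
for k = 1:numel(w)
    G(k) = C*((1i*w(k)*eye(2) - A)\B) + D;   % eq. (4.31) evaluated at s = j*w(k)
end
loglog(w, abs(G)); grid on; xlabel('\omega [rad/s]'); ylabel('|G(j\omega)|')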

State equations of order n yield a rational transfer function , where N(s) and D(s) are the numerator and denominator polynomials of s respectively. It is not difficult to show that D(s) is the characteristic polynomial of the system matrix A. The order of D(s) is n and the order of N(s) is m ≤ n; m < n iff D = 0 in the state equations.
Both polynomials can be factored into n and m multiplicative terms respectively:

(4.36)
The complex-valued  and β are the poles and zeros of the transfer function. The
real-valued g is a gain. In much of the linear system theory, the transfer function is
defined after all identical pole-zero pairs have been cancelled in eq. (4.36).
Accordingly, the order of the transfer function may be less than the order of the
system’s state equations.

V Discrete-time models of systems –
numerical integration, numerical simulation.

1. Finite difference approximation – difference equations.


Now we return to the original non-linear model of the dynamical system – the state
differential equation in eq. (3.4). However, for the purposes of this chapter, the
system input and parameter vectors are mostly of no particular interest. Hence, the
right hand side of the SDE is abbreviated as follows

(5.1)
We focus on the SDE here, because the OE is algebraic and explicitly defines the
output from the known input and the solution of the SDE. The terms ‘numerical
simulation’, ‘numerical solution of SDE’, and ‘numerical integration’ (of the right
hand side of eq. (5.1)) are used synonymously here.
Recall from mathematics that the derivative can be defined by the following limiting
process, if it converges:

(5.2)

If this limit exists, then there is a small enough positive , for which the following
is a good approximation.

(5.3)
The same finite difference expression on the right hand side of eq. (5.3) approximates
the derivative at any other time point close to the interval between and
. Thus, also,

(5.4)

Let us continue from eq. (5.3) at first. Solve for and substitute the state
derivative from eq. (5.1):

(5.5)
Equation (5.5) tells us that the new value of the state is approximated by the
expression on the right hand side. A recursive algorithm is obtained by replacing the
approximation sign with the equality sign – but then we no longer get the true
sequence of ! The strictly correct way to do this is by introducing a new
sequence

(5.6)
… and calculating the new sequence from the following algorithm

(5.7)
This is called the Euler method. For reasons that will become obvious later, here it is
called the explicit Euler method. From the point of view of system theory, eq. (5.7) is
a state difference equation. It defines the state in discrete-time as opposed to the
state differential equation that defines the state in continuous-time.
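A minimal MATLAB realisation of the recursion (5.7), for an arbitrary illustrative scalar test function, is sketched below; a fuller two-state version for a stiff example appears in Laboratory 9.

f  = @(x,t) -2*x + sin(t);            % illustrative right-hand side of eq. (5.1)
dt = 0.05;  t = 0:dt:10;              % assumed constant step size and time grid
x  = zeros(size(t));  x(1) = 1;       % initial condition
for i = 1:numel(t)-1
    x(i+1) = x(i) + dt*f(x(i), t(i)); % explicit Euler step, eq. (5.7)
end
plot(t, x)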
The accuracy of any numerical method depends primarily on the method itself, on
the chosen step size that may be varied from step to step, and on the differential
equation (5.1).
The contribution of the method to error is commonly characterised by its order. The
order of the method is equal to the order m of the error in the following equation:

(5.8)
For simplicity, a constant step-size is assumed for the error analysis. If eq. (5.8) is
true then the single-step error must be [4]:

(5.9)
For proof see for example Lambert (1973). The order of single-step numerical
methods can be determined by expanding in a Taylor series until we
find the first non-zero term. The order of the method is defined as one less than the
order of the first non-zero term in this Taylor expansion – or the highest order of the
initial (low order) zero-terms.

Example: Order of the explicit Euler method.


The first three Taylor terms of the solution are

(5.10)
The Taylor expansion of the explicit Euler method in eq. (5.7) is already in the form
of a truncated Taylor expansion. Substitution into eq. (5.9) yields

(5.11)
Conclusion: the explicit Euler method is a first order numerical method, because the

global error in eq. (5.8) converges to 0 as fast as .


End of example.
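The first order can also be checked numerically: if the global error behaves as stated in eq. (5.8), it should roughly halve whenever the step size is halved. The MATLAB sketch below performs this check on an illustrative scalar equation with a known exact solution.

% Order check for the explicit Euler method (illustrative test equation).
lam = -1;  x0 = 2;  xexact = x0*exp(lam*1);   % dx/dt = lam*x, exact value at t = 1
for dt = [0.1 0.05 0.025]
    x = x0;
    for i = 1:round(1/dt)
        x = x + dt*lam*x;                     % explicit Euler step, eq. (5.7)
    end
    fprintf('dt = %6.3f   global error = %9.2e\n', dt, abs(x - xexact));
end
% The printed error is roughly halved each time dt is halved: first order.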

Let us now turn our attention to eq. (5.4). Repeating the steps above, we arrive at the
implicit Euler method/algorithm:

(5.12)

Here, the next state is given implicitly – the non-linear algebraic equation still
needs to be solved for the unknown . This is often a very difficult task. The
important question is: is this effort worth it? The brief answer is: no and yes. The
longer version will have to wait a little longer. For now, we shall only prove its order.

Example: Order of the implicit Euler method.

The first three Taylor terms of the solution are given in eq. (5.10) above. The Taylor
expansion of the implicit Euler method in eq. (5.12) requires some effort. Note that

, hence :

(5.13)
Substitution of equations (5.10) and (5.13) into eq. (5.9) yields

(5.14)
Conclusion: both Euler methods are first order numerical methods, because the

global error in eq. (5.8) converges to 0 as fast as . From the point of view of the
accuracy of the method, clearly the explicit version is preferable. It is as accurate as
the implicit version but much simpler to evaluate – that is the reason for the ‘no’
part in the brief answer above.
End of example.

2. Stability of linear difference equations.


For an LTI SDE there is a more direct discretisation option. We assume for simplicity a
constant step size and apply eq. (4.22) from to :

(5.15)
The pure state transition part in eq. (5.15) is exact and entirely discrete in time. The
transfer of the input, however, is entirely continuous-time. An LTI state difference
equation needs to have a discrete input term too:

(5.16)
This model cannot be exact for continuous-time systems – unless the shape of the input is fixed and known between the time-points and , or we accept that it is an approximation. It is indeed exact when the analog system input is formed by, for example, a pulse amplitude modulating digital-to-analog converter (DAC) at the output of a computer, micro-controller, or similar. In that case, the system input is constant between the DAC conversion instants and eq. (5.15) becomes (5.16)
with

(5.17)
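For a genuinely piecewise-constant input, Ad and Bd can be computed together from one matrix exponential of an augmented matrix. The MATLAB sketch below uses arbitrary illustrative A and B; Control System Toolbox users can obtain the same matrices with c2d.

% Zero-order-hold discretisation of an LTI system (illustrative matrices).
A  = [0 1; -2 -3];  B = [0; 1];       % assumed continuous-time system
dt = 0.1;  n = size(A,1);  p = size(B,2);
M  = expm([A B; zeros(p, n+p)]*dt);   % exponential of the augmented matrix
Ad = M(1:n, 1:n);                     % Ad = e^(A dt)
Bd = M(1:n, n+1:end);                 % Bd: the input integral of eq. (5.17)
abs(eig(Ad))                          % all magnitudes < 1 here: stable discrete model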
A discrete-time system is asymptotically stable, iff all eigenvalues of Ad have a
magnitude less than one. This applies to eq. (5.15) as well as to eq. (5.16) and is
carried over from continuous-time systems via the following observation.

The matrix exponential is by definition a polynomial of A – albeit seemingly of infinite order. It is proven by mathematicians that the infinite order Taylor expansion can be replaced by an n-th order polynomial. There is a general theorem in mathematics which proves that the eigenvalue of a matrix polynomial is the same polynomial of the eigenvalue of the matrix.

Let be an eigenvalue of A and let be the corresponding eigenvalue of . It follows that

(5.18)

3. Stability of explicit and implicit Euler methods.


The stability of a numerical simulation – just like its accuracy – depends primarily
on the method itself, on the chosen step size that may be varied from step to
step, and on the differential equation (5.1). It is customary to define the stability of a
numerical method by the stability of the difference equation that arises from the
application of this numerical method with a constant step-size to an asymptotically
stable homogeneous LTI differential equation . However, this writer prefers to
work with the corresponding non-homogeneous equation

(5.19)
Lambert [4] defines two types of numerical method stability as follows.
A numerical method is said to be A-stable if, applied to an [asymptotically] stable equation , it yields , where all the eigenvalues of are inside the unit circle for . The method is said to be L-stable when, in addition, as .
Eitelberg [7] re-defined in 1983 the above two types of numerical method stability as
A0-stable and A1-stable respectively. More rigorously, Eitelberg [7] considers
numerical methods that, when applied to eq. (5.19), yield

(5.20)

with and rational functions of A, or functions that can be


expressed as rational functions of A (such as ). Explicit and implicit Runge-
Kutta type and linear multi-step methods satisfy this assumption – the Euler
methods are the simplest explicit and implicit Runge-Kutta methods. The following
definition is slightly modified from that given in Eitelberg [7], but it is equivalent.
Definition. A method is Ak-stable, if applied to an asymptotically stable test
equation (5.19) it delivers an asymptotically stable eq. (5.20) for all and

(5.21)

(5.22)
Equation (5.22) implies and characterises single step convergence to steady state for
large step size. For the Runge-Kutta type of numerical methods considered in this
work, it follows from eq. (5.21) and does not need separate verification for each
method. (This may have been proven in one of my early publications.)
Applying the explicit Euler method (5.7) to the test equation (5.19) yields

(5.23)
Here,

(5.24)
Obviously, this simulation method is unstable with any when the step size is
too large. We calculate the geometric domain of for which the simulation is stable
with :

(5.25)
That means that the simulation is stable iff all eigenvalues are inside a circle with
the radius of and centred on . See Figure 5.1. The explicit Euler method
is not A-stable – or A0-stable.

Figure 5.1: Stability range of the explicit and implicit Euler methods.
Applying the implicit Euler method (5.12) to the test equation (5.19) yields

(5.26)
Here,

(5.27)
We calculate the geometric domain of for which the simulation is stable with
:

(5.28)
That means that the simulation is stable iff all eigenvalues are outside a circle
with the radius of and centred on . See Figure 5.1.
That means the implicit Euler method simulates any stable LTI system stably. It is
obviously A0-stable. Since an asymptotically stable system matrix A is not singular,

it follows that in eq. (5.27) and the implicit Euler method is also A1-
stable – eq. (5.26) delivers the correct steady state in a single long step. Note,
however, that for large enough step size, an unstable system will deceptively yield a
stable simulation result.
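The practical difference between the two stability domains of Figure 5.1 is easy to demonstrate on a scalar test equation of the form dx/dt = λx + u. In the MATLAB sketch below (all numbers are illustrative), Δt·λ = -5 lies far outside the explicit Euler circle but well inside the implicit Euler stability domain:

lam = -50;  u = 1;  dt = 0.1;         % dt*lam = -5
N = 40;  xe = zeros(1,N+1);  xi = xe; % xe: explicit Euler, xi: implicit Euler
for k = 1:N
    xe(k+1) = xe(k) + dt*(lam*xe(k) + u);     % explicit step: grows without bound here
    xi(k+1) = (xi(k) + dt*u)/(1 - dt*lam);    % implicit step: decays to -u/lam = 0.02
end
plot(0:N, xe, 'r', 0:N, xi, 'b'); legend('explicit Euler', 'implicit Euler')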

4. Further examples of single-step simulation methods: Trapezoidal (Tustin), Eitelberg, Runge-Kutta, …

The trapezoidal method is also known as Tustin’s method in the British
literature. It can be derived by averaging the approximations in eq. (5.3) and eq.
(5.4):

(5.29)
From here the (implicit) trapezoidal method is obtained as

(5.30)
The state increments of the explicit (eq. (5.7)) and implicit (eq. (5.12)) Euler
methods are averaged. Hence, the order of the method can be found by averaging
the respective Taylor expansions in eq. (5.11) and in eq. (5.14). It is not proven here,
but the trapezoidal method is second order accurate. That is an advantage over the
Euler methods, but the uniqueness of the trapezoidal method lies in its stability
characteristics.
Applying the trapezoidal method (5.30) to the LTI test equation (5.19) yields

(5.31)
Hence,

(5.32)

(5.33)
That means, with the trapezoidal method:
 all stable systems are simulated stably with any positive step size;
 all unstable system simulations are unstable with any positive step size.

The trapezoidal method is A0-stable, but it is not A1-stable. As , .


This is observed as a persistent numerical oscillation around the steady state of the
solution for large enough simulation time step size.
 Show this oscillation by using eq. (5.31) and derive its average value.

In my youth, I developed what I called linearly implicit numerical methods. I


discovered that the stability of a method did not require solution of nonlinear
algebraic equations – the stability could be achieved by linearising the non-linear state differential equations along the solution trajectory. That required the evaluation of the full system Jacobian and the solution of associated linear equations. In my youth, computer time was very expensive and
solution of large systems of linear equations could become costly – albeit not nearly
as costly as the solution of the nonlinear versions of them. Hence, I developed
linearly implicit methods whose accuracy order did not depend on the accuracy of
the Jacobian or of its use at all. See my papers [5], [6], and [7]. In my industrial
work, I used certain dominant blocks of the full Jacobian and ignored all non-
dominant elements of the Jacobian – I called this splitting of the Jacobian and
denoted the resulting matrix with . At the two extremes, and
. It is elementary to show that the following method is of first order
independently of .

(5.34)

It is the explicit Euler method with and it is the linearly implicit Euler
method with . The latter is A1-stable.
The papers [6] and [7] give examples of second order multi-stage Runge-Kutta type
methods that are independent of the Jacobian splitting or approximation, are
linearly implicit and are A-stable with . In the paper [5] of 1979, I defined a
general form of the 3-stage algorithms as follows.

(5.35a)

(5.35b)

(5.35c)

(5.35d)

(5.35e)
In the same paper, I admitted that I was not able to systematically calculate suitable
parameters a, b, c, and d. By trial and error, I found a set of parameters for a 2-
stage algorithm with a3 = 0. But I rejected it and, until very recently, published a number of different parameter sets for 3 stages – these gave me advantages for simulation of very complex and stiff power plant steam generators. These advantages included automatic step size control and A2 stability. The reader is
referred to the three publications [5], [6] and [7] for specific details and some useful
choices of parameters. Here, I present a previously unpublished new formulation.

In January 2023, I took a new look at this class of linearly implicit methods.
After about 45 years, I noticed that there was a more convenient and equivalent
formulation of the class of algorithms in equations (5.35) – replace akbk with ak, and
ckrbr with ckr. Furthermore, I no longer have a need for A2 stability and I think that
two stages are sufficient for a good A1 stable method with the property that its 2nd
order is independent of the use of the Jacobian matrix of the system. With only two
stages, eqs. (5.35) are now converted to

(5.36a)

(5.36b)

(5.36c)

(5.36d)
The two stages suffice to achieve 2nd order accuracy – see below. It is not shown
here, but two stages are not enough to reach 3rd order accuracy. The first three
terms of the Taylor expansion of the exact solution of a state differential equation
around a known x(ti) are given in eq. (5.10). To determine the conditions for the
parameters a, b, c, and d for the accuracy order of 2, we need to start with the same
xi = x(ti). We start by writing the Taylor expansions for the individual stages ∆x1(∆t)
and ∆x2(∆t). It is elementary that ∆x1(0) = ∆x2(0) = 0. For the first and second order
terms in the Taylor expansion, we need the first two derivatives of ∆xk(∆t) at ∆t = 0.
For the first stage (k = 1), we rewrite eq. (5.36b) as

(5.37)
Differentiate both sides of this equation twice with respect to ∆t:

(5.38)
At ∆t = 0:

(5.39)
Hence, the Taylor expansion can be written as

(5.40)
For the second stage (k = 2), we rewrite eq. (5.36c) as

(5.41)
Proceed as above:

(5.42)
At ∆t = 0:

(5.43)
Hence, the Taylor expansion can be written as

(5.44)
Substitute equations (5.40) and (5.44) into eq. (5.36a) to yield the Taylor expansion
of xi+1(∆t).

(5.45)
Now, compare the numerical solution in eq. (5.45) to the exact solution in eq. (5.10).
For first order accuracy, the only condition is a1 + a2 = 1. The conditions for second
order accuracy depend on how one intends to use the Jacobian matrix J in the
algorithm (5.36). If S(J) = J then we have a set of three conditions:

(5.46a)

(5.46b)

(5.46c)
However, my goal in this work has always been the independence of the second
order accuracy from the choice or use of S(J). Hence, eq. (5.46b) has to be split in
two. The term with the ‘arbitrary’ S(J) in eq. (5.45) is neutralised by the condition
a1b1 + a2b2 = 0. Hence, I shall continue with the following set of four conditions for
second order accuracy.

(5.47a)

(5.47b)
(5.47c)

(5.47d)
Note that eq. (5.47b) makes the second order of the method independent of the
Jacobian ‘approximation’ – this is equivalent to using the explicit form of the
algorithm (5.36) by setting b1 = b2 = 0. The explicit form’s discrete-time eigenvalue
equation – with – is:

(5.48)
 Prove eq. (5.48).

The stability bound is obtained from eq. (5.48) by setting and solving
this equation for .

(5.49)
This stability bound is shown in Figure 5.2 and it does not depend on the actual
values of the parameters – as long as they satisfy equations (5.47a) and (5.47c).

[Figure: stability bound curve in the complex plane; horizontal axis approximately -3 to 1, vertical axis approximately -1.5 to 1.5.]

Figure 5.2: Stability range of the complex-valued for any explicit version of
the 2nd order method (5.36).
To derive the implicit form’s discrete-time eigenvalue equation, equations (5.36) with
the full Jacobian are applied to the linear test equation and yield the
eigenvalues of Ad according to

(5.50a)

(5.50b)

(5.50c)
The simplifications in equations (5.50b) and (5.50c) are valid for both sets of second
order conditions expressed in equations (5.46) or (5.47).
The above 2nd order class of methods is …

 A0-stable if

(5.51)
 A1-stable if, in addition to eq. (5.51),

(5.52)
There are many coefficients that satisfy these equations and yield an A0 stable
simulation method – the reader is challenged to discover some good ones and
compare them. In the meantime, I shall consider A1 stable methods further. Hence
equations (5.47) need to be augmented with N2 = 0. The combined and re-ordered
equations for an A1 stable method are now

(5.53a)

(5.53b)

(5.53c)

(5.53d)

(5.53e)
For A0 and A1 stability, it is necessary that μJ in eq. (5.50a) has no pole in the left
half of the complex plane, hence b1 > 0 and b2 > 0. It then follows from eq. (5.53b)
that either a1 or a2 must be negative. If c21 > 0 in eq. (5.53e) then a1 < 0, a2 > 0 is the
only option.
Start by solving the three equations (5.53a-c) for the four unknowns a1, a2, b1, and
b2. Select a value for b1. Then from eq. (5.53c)

(5.54)
Substitute this b2 into eq. (5.53b):

(5.55)
The solution is

(5.56)
To summarise, the algorithm in equations (5.36) is 2nd order accurate with any
dimensionally correct matrix S when a value of b1 is selected from either of the two
ranges

(5.57a)
Then

(5.57b)
(5.57c)

(5.57d)

(5.57e)

(5.57f)
In the fully implicit form, S = J and

(5.58)
A cursory analysis indicates that the fully implicit version is A1 stable with any value of b1 in eq. (5.57a). The lower range of b1 values yields a significantly larger single stability bound in the right half of the complex plane than the higher range of b1 values. The higher range creates a small stability bound inside the instability range, while the lower range does not. The reader is encouraged to prove any or all of these observations. The stability bounds will be calculated for a number of b1 values in Laboratory 11.
Presently, my favourite choice is based on the lower range value of b1 = 2/5:

(5.59)
The corresponding implicit version’s stability bound is shown in Figure 5.3.

[Figure: stability bound for b1 = 0.4; real axis Re(Δt) from -2 to 16, imaginary axis Im(Δt) from -10 to 10, with the ‘stable’ regions surrounding a marked ‘instability range’.]

Figure 5.3: Stability bound of the complex-valued for the implicit version of
the two-stage method in eq. (5.36) with parameters from eq. (5.59).

The well-known second-order Heun (a.k.a. predictor-corrector) method can be written in the form of the general Runge-Kutta methods as follows:

(5.60)
 Show that its stability bound is the same as shown in Figure 5.2.
It is a very simple explicit method that fits neatly into the format of equations (5.36)
with the parameters

(5.61)
These parameters satisfy the 2nd order accuracy conditions (5.46) or (5.47). No such
2nd order accurate linearly implicit A0 stable version with nonzero b1 or b2 is known
to me.
 Try to find suitable nonzero values of b1 and b2 in eq. (5.61) for A0 stability
while retaining the 2nd order of the method. Alternatively, you may try to
prove that A0 stability is impossible without changes in the other parameters
of the Heun method.
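For reference, one step of the Heun method in the predictor-corrector reading of eq. (5.60) can be written as a small MATLAB function; it applies to any state-derivative function f(x,t) of the form (5.1).

function xnew = heun_step(f, x, t, dt)
% heun_step: one explicit second-order Heun (predictor-corrector) step.
k1   = f(x, t);                       % slope at the current point (Euler predictor)
k2   = f(x + dt*k1, t + dt);          % slope at the predicted point
xnew = x + dt*(k1 + k2)/2;            % corrector: average of the two slopes
end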

Example: A graphical comparison of the single-step accuracy between a few useful methods.
Let us consider the following homogeneous example

(5.62)
It has a solution

(5.63)

The first four terms of the Taylor expansion around are

(5.64)
Apply the explicit Euler method, in eq. (5.7), to the differential equation (5.62). This
yields, after the first step of the size Δt, an approximation on the initial tangent
of the solution in eq. (5.63). It is equal to the first two terms of the Taylor expansion
in eq. (5.64):

(5.65)
Application of the Heun method (5.60) to the differential equation (5.62) yields, after
the first step of the size Δt, an approximation :

(5.66)
Note that it is a third-order polynomial in Δt. Yet, the third-order term does not match
the corresponding term in the Taylor expansion (5.64). (Of course, this mismatch
was to be expected – it is only a second order method.)
The graphical comparison between the exact solution (shown with solid black line)
and the Euler and Heun approximations (shown with solid red and green lines
respectively) is shown in Figure 5.4. In addition, they are compared to the explicit
and implicit forms of the second order A1 stable method of Eitelberg in eqs. (5.36)
with parameters from eq. (5.59) (shown with dash-dotted and dashed blue lines
respectively). The explicit version is labelled with ‘explEit’ and the implicit version is
labelled with ‘implEit’.

[Figure: panels ‘Single Step Solution’ and ‘Single Step Solution Detail’ comparing the exact x(t) with the single-step results labelled Euler, Heun, trapezoidal, explEit and implEit; horizontal axis: time (step size).]

Figure 5.4: Comparison of various calculated single step approximations to the exact solution in eq. (5.63). Here, .
It is a coincidence that the implEit (blue dashed) curve is practically
indistinguishable from the exact solution. However, the fact that all four 2nd order
methods are significantly more accurate than the 1st order Euler method for small
enough step sizes is not a coincidence. Neither is the fact that the implEit (blue
dashed) curve is incomparably more accurate than any of the presented alternatives
for large step sizes.
 Show that the trapezoidal method (5.30) cannot be applied to the example
(5.62) for (very) large values of . Hence, compare the trapezoidal method to
the analytic solution for . In other words, prove that the
(magenta and cyan) dotted lines in Figure 5.4 are correct.
End of example.

5. Step size control.


For small enough ∆t, the difference between a second order and a first order
accurate xi+1 approximates the step error of the first order calculation. One can set
some experimentally determined bounds for this step error estimate. When this
bound is exceeded, the step size needs to be reduced by some factor and the
calculation needs to be repeated until the step error bounds are not exceeded. When
the error estimate is sufficiently below the set bound, the step size may be increased
by some factor.
Even though the above relates to estimating the first order method’s error, one can
still use the second order calculation, although one does not know how accurate
it is – it is assumed to be more accurate than the first order result. I have used these
ideas very successfully in the published and unpublished work in my youth. It
seems reasonable to assume that some of these ideas are used in the variable step
size algorithms in the Simulink toolbox of Matlab. In earlier versions of Simulink,
the step size control was not always reliable. Therefore, I generally use fixed step
size in Simulink and have found no need to rely on the currently undisclosed
method(s) of step size control.
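A possible MATLAB sketch of this idea – the explicit Euler result as the first order value, the Heun result as the second order value – is given below. The test equation, the error bound and the halving/doubling factors are illustrative choices only, not a prescription from this text.

% Simple step-size control sketch (Euler/Heun error estimate, illustrative values).
f = @(x,t) -2*x + sin(5*t);           % assumed test equation
t = 0;  x = 1;  dt = 0.2;  tol = 1e-3;
T = 0;  X = x;
while t < 10
    x1 = x + dt*f(x,t);                               % first order (explicit Euler)
    x2 = x + dt*(f(x,t) + f(x + dt*f(x,t), t+dt))/2;  % second order (Heun)
    err = abs(x2 - x1);                               % step error estimate
    if err > tol
        dt = dt/2;                    % bound exceeded: repeat with a smaller step
    else
        t = t + dt;  x = x2;          % accept the (assumed more accurate) Heun value
        T(end+1) = t;  X(end+1) = x;  % store the transient
        if err < tol/4, dt = 2*dt; end    % comfortably accurate: try a larger step
    end
end
plot(T, X)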

Laboratories (in parallel with lectures)

MATLAB originally denoted MATrix LABoratory, for use on early personal computers. It was created by a number of people, some of whom – such as Cleve Moler – were involved in the creation of two packs of FORTRAN subroutines for matrix and vector operations for use on physically huge mainframe computers – LINPACK (linear operations) and EISPACK (eigenvalue and eigenvector related). I used these subroutines in the 1970s, when neither personal computing nor MATLAB were known. My programs were coded on huge packs of cardboard cards and I do
not miss that technology. These programs (packs) were fatally sensitive to being
dropped on the ground.
While MATLAB allows the use of looped code, it is strongly discouraged. Instead, the use of vectorisation and array operations is strongly encouraged.
This course is not meant to ‘teach’ MATLAB. Instead, necessary MATLAB skills are
learned by doing whatever it takes to legally get the required results or answers –
copying is not permitted. A starting base is given in each laboratory assignment.
However, students need to prepare for the laboratory by looking for additional
information from the web or other sources.

1. MATLAB arrays, array operations, graphics.


Prepare for the practical by self-study:
>> t1=0:0.5:10;
>> t2=linspace(0,10,21);
>> t3=logspace(-2,1,21);
>> plot(t1,t2,'.')
>> plot(t1,t3,'.')
>> semilogy(t1,t3,'.')

Study in the laboratory:


>> y=3-70*t1+20*t1.^2-1.3*t1.^3;
>> plot(t1,y)
Find, by using suitable MATLAB commands, the t1 values that correspond to the
maximum and minimum values of y.

Use
>> size(t3)
to find out the number of elements in the various arrays.

Use
>> help size
to learn about MATLAB functions and operators.

Add grid lines to your plots and learn to add lines to existing plots. Add various
titles and text to your plots – programmatically and interactively.

2. Relative humidity as a function of temperature.
Create a graphical representation of relative humidity from the analytic relationship
(1.31):

(Prac.1)
T (temperature) and Td (the dew point temperature) are measured in ºC. RH results in %. It is defined to be between 0 and 100% for .


Calculate and plot two or more relative humidity graphs as functions of T, for the
dew point temperature values that include the last digit in your ID number and 15
degrees higher. You are welcome to add graphs for other dew-point values.
Write your code in a script file. Learn to use ^ in your strings to write superscript –
such as ‘^oC’ to get ºC or ‘m^2’ to get m2. Learn to use \ to write Greek letters – such
as ‘\omega*t’ to get ω*t.
{The underlying theory for this task is not yet covered. Treat it as a purely
mathematical problem. The connection to physics is shown in a later lecture.}

3. MATLAB Matrix and vector operations.


Prepare for the practical by self-study:
Consider the Magnus equation (1.28), assume A = 0.6 kPa and determine the
remaining parameters B and C from tabulated saturation pressure and temperature
as follows.
Transform eq. (1.28) into a system of linear equations for the unknown B and C:

(Prac.2)
Introduce the matrix M and the vectors b and x as follows.

(Prac.3)
This results in a set of simultaneous linear equations that can only be satisfied
approximately:

(Prac.4)
In the laboratory: select a spread of suitable saturation data between -40 °C and
160 °C. Then find the linear least squares solution to this equation. Explain why
A = 0.6 is a good choice for the Magnus equation in a temperature range that
includes 0 °C!
If you do not know the solution to this class of problems, you may use the simplified
summary from Eitelberg (1991) below.

An approximate equation in the form has a linear least squares solution

if we give no preference to any individual equation or equations.


Compare graphically the analytical pressure values from the Magnus equation with
the original data that you selected from tables. Show the analytical relations with
smooth solid lines and the original data with markers.

4. Eigenvalue and stability calculations.
Let

(Prac.5)

Denote the eigenvalues of A by . Verify numerically that all four

equal the 4 eigenvalues of .


Show this graphically for an array of between 0.1 and 1. Use eye(size(A)) to define
the identity matrix I. In the final graphical presentation, label the four functions
with the corresponding numerical values of . Students are required to
complete the task in two separate styles. Plot the two sets of four (real) eigenvalues
as functions of by connecting the points with some coloured line styles. In case
two or more lines coincide, start plotting with a solid line style (such as ‘r-‘ or ‘-b’)
and then using line styles with gaps (such as ‘r-.‘ or ‘--b’). This requires creating
whole arrays before plotting.
Some students may get unexpected results that may require replacing some lines
with marker sequences (something like ‘rx’ or ‘ob’). Some and have singular
points: (a) avoid division with 0 by adjusting the values very slightly; (b) do not
display graphically very large values of or .
(This practical confirms some of the trapezoidal method stability theory in eq.
(5.32).)

5. Introduction to Simulink, sources, sinks and other blocks.


This is an introductory training session. No specific preparation is prescribed, but
tasks will be defined in the laboratory. Transfer function blocks (integrator with and
without negative feedback – such as ) and their step responses will
be studied. Numerical instability will be demonstrated. Use blocks from sources and
sinks (scope).
Students must solve the second order ordinary differential equation
– at first with zero initial conditions. Non-zero initial values will
be given to students in the laboratory.

6. Simulation of state models with initial conditions (given method and step-size).
Implement the motorcar example in eq. (2.6) in Simulink. Use the Euler method
and, initially, a step size of 0.1 s. You may have to modify the step size.
One of the tasks is to measure the time it takes to accelerate at maximum power to
100 kph from 50 kph. Show transients on a scope. Can you simulate from 0 initial
speed?
Extend the model to eq. (3.8). Show the speed and distance transients on the same
scope. How far does the car travel until it reaches 100 kph?
Pay close attention to physical SI and non-SI units.

7. Time-delay modules and sub-systems in Simulink.
Build the following model in the laboratory:

[Simulink block diagram: step references r1 and r2, PI controllers PI1 and PI2, a Transport Delay block, the 2x2 plant subsystem P (In1, In2 → Out1, Out2), and scopes yScope and uScope.]

The 2x2 subsystem P (the plant) is to be built as

[Simulink block diagram of the 2x2 plant subsystem P: inputs In1 and In2, outputs Out1 (y1) and Out2 (y2), with an internal Transport Delay block.]

The background for this example will be explained in the laboratory. Keep the step
times apart by about 5 seconds in the two step blocks. Switch the disturbance step
alternately to Out1 or to Out2 and observe what happens to the undisturbed other
output from the subsystem P.
Add a slider from the Dashboard sub-library to your model. Link it to the delay of
the control system and vary the delay time during your slowed-down simulation.

8. Output to work-space and transient plotting.


Build the following model in the laboratory.
This is the same plant as in practical number 7, but with an inconsiderate control
scheme.
The emphasis here is on accessing the simulation results in the MATLAB workspace
from the command window and creating professional-looking plots. Create sub-plots
and learn to modify properties of graphical objects – such as colour, linewidth, or
markers. Use the command ‘get’ to learn about the properties you can change and
use the command ‘set’ to change them appropriately. Learn to mix your own colours
– such as orange or turquoise – from the primary R, G, and B.

[Simulink block diagram: step references r1 and r2, PI controllers PI1 and PI2, the plant subsystem P, scopes yScope and uScope, plus a Clock and workspace outputs ‘time’ and ‘u&y’.]

9. Coding the explicit Euler and Heun methods for a stiff example.
In preparation for the practical: analyse the following function and script files.
The function file calculates the right hand side of the model differential
equation. The script file evaluates the explicit Euler method. Understand the
differences between script files and function files. Write the state differential
equations for this system.

function [ fx ] = msturbogen1( x,u )


% msturbogen1: generator speed derivative in fx(1) and
% excitation voltage in fx(2). u is the vector of fuel flow between
% 13 and 20 l/h and excitation voltage between 3 and 10 V
% respectively.
% The excitation voltage dynamics is very fast compared to gen
% speed. Hence this is a stiff system.
a2=10; fx=[0;0];
fx(1) = 1000*u(1)/x(1) - 0.003*x(1)*x(2)*x(2);
fx(2) = a2*(-x(2)+u(2));
end

% msturbostate1
x=[1000;7]; %initial state: generator rpm and excitation voltage [V]
u=[15;6]; % input: fuel flow [l/h] and excitation voltage demand [V]
endtime=29; % end time of simulation
dt=0.05; % time step [s]
nst=round(endtime/dt); % number of steps
tt(nst+1)=zeros;
xx(2,nst+1)=zeros; xx(:,1)=x;
for i=1:nst
xx(:,i+1)=xx(:,i)+dt*msturbogen1(xx(:,i),u);
tt(i+1)=tt(i)+dt;
end
subplot(2,1,1); plot(tt,xx(1,:))
subplot(2,1,2); plot(tt,xx(2,:))

In the laboratory, you will adapt this code to the Heun method and compare both by
simulation with various values of the time step dt and time-varying excitation
voltages, such as .

10. Coding the linearly implicit A1 stable Eitelberg methods for a stiff example.

In preparation for the practical: Derive the Jacobian matrix


for the system that was simulated in practical number 9.
During the practical, you will modify the previous script file to evaluate the implicit
Eitelberg method with much larger step sizes. You shall compare these results with
those obtained with the explicit Euler and Heun methods. You will learn to use only
parts of the Jacobian – meaning .

11. Plotting of the stability bounds of Eitelberg’s A stable methods.


In preparation for the practical: Investigate various ideas of finding the stability
bound in Figure 5.3. Study the Matlab commands/functions ‘fzero’ and ‘roots’.
In addition to b1 = 0.4 from eq. (5.59), select at least one additional value from the upper
range in eq. (5.57a) and a few outside the prescribed ranges. Plot your bound(s) and
indicate all parameter values in or next to your plot.
During the practical, calculate and draw the bounds. Determine on which side of the
bounds the simulation is stable or unstable.

Tutorials (in parallel with lectures)

The purpose of these weekly tutorials is to reinforce the material that is taught in
the lectures. Generally, discussions must be initiated by students. Some homework
will be assigned in advance and it must be solved by the students before it can be
discussed.
At this time, the weekly tutorial topics are not fully fixed and tutorial work is not
formally assessed. Some tentative topics are given below.
Week 1: Familiarise with some sources, specifically the water tables. Linear
interpolation: https://www.youtube.com/watch?v=Cvc-XalN_kk .
Week 2: Swing – consider also heat transfer and free fall. Compare hfg and ufg.
Week 3: Evaluation of cν and cp from tabulated data and their use in calculations:
superheated steam (compressible) and liquid water (incompressible).
Week 4: Humid air by the sea from p. 11 – without preparation.
Week 5: Water saturation temperature from ideal gas equation above 60 °C.
Week 6: Consider an apartment of 100 m2 with a ceiling height of 2.8 m. Initially,
the 80% humid air in the apartment has a temperature of 35 °C. Then the split unit
air conditioner is switched on and the apartment is cooled down to 25 °C. The
air conditioner operates with a refrigerant temperature of 7 °C – verify this in a
literature search.
What is the final air humidity and how much condensate water is removed from the
airtight apartment in this cooling process? {Should be nearly 7 litres – test this at
your home!}
Week 7: Extension of Tutorial 6: Let the initial and final temperatures be, as in
tutorial 6, 35 °C and 25 °C respectively. However, the air conditioner now operates with a refrigerant temperature of 14 °C. Let the initial humidity RH35 vary between 10% and 80%. Draw a graph of the final humidity RH25 as a function of the initial
humidity. You can do the relevant calculations manually, but a (sophisticated)
MATLAB code would be preferable.
Week 8: Discussion of examples, homework, or summative exercises.
Week 9: A 9 m3 solid vessel is filled with a mixture of hydrogen and oxygen gases at
a common temperature of 300 K. Their masses are 20 grams and 160 grams
respectively. The mixture is ignited and then cooled down to the initial temperature
of 300 K.
Calculate the pressure in the vessel before and after the hydrogen oxidation
reaction. Do you expect this reaction to be explosive? Do you expect any liquid water
after the cooling and why? What if there is more than 20 grams of hydrogen
initially? {The pressures should differ by a factor of 1.5.}
Week 10: Linearise the three-phase turbo-generator example (2.32) in section 4 of
Chapter II above and calculate its transfer matrix. Let the operating condition be
4000 rpm. The rotational inertia of the system is , the excitation
coefficient is , the operating value of excitation voltage is , the
fuel power transfer efficiency is , and the phase resistance of the load is
. Define the deviations from the operating values of fuel energy flow rate
and the excitation voltage as the two system inputs and
calculate its steady state value. Let the corresponding deviations from the operating
values of rotational speed and the line voltage be the two
outputs.
Week 11: Trapezoidal method with large or summative exercises.
Week 12: Discussion of examples, homework, or summative exercises.
Week 13: Discussion of examples, homework, or summative exercises.

Bibliography:
1. Michael J. Moran; Howard N. Shapiro: Fundamentals of Engineering Thermodynamics. Wiley, Chichester, 5th ed., 2006.
2. Frank P. Incropera; David P. DeWitt: Fundamentals of Heat and Mass Transfer. Wiley, New York, 4th ed., 1996.
3. Ed. Eitelberg: Optimal Estimation for Engineers. NOYB Press, Durban, 1991.
4. J.D. Lambert: Computational Methods in Ordinary Differential Equations. Wiley, New York, 1973.
5. Ed. Eitelberg: Numerical simulation of stiff systems with a diagonal splitting method. Mathematics and Computers in Simulation, XXI, 1979, pp. 109-115.
6. Ed. Eitelberg: Parameter studies of a class of robust L-stable integration methods. Proceedings of the 10th IMACS World Congress, Volume 1, 1982, Montreal, Canada, pp. 22-24.
7. Ed. Eitelberg: A simple A2-stable numerical method for state space models with stiff oscillations. Mathematics and Computers in Simulation, XXV, 1983, pp. 346-355.
8. Ed. Eitelberg: Control Engineering. NOYB Press, Durban, 2000.
9. Mark G. Lawrence: The relationship between relative humidity and the dew-point temperature in moist air. Bulletin of the American Meteorological Society (BAMS), February 2005, pp. 225-233.
10. https://www.academia.edu/31361586/THERMODYNAMICS_TABLES_BOOK_e_MORAN_AND_SHAPIRO_ALL_Fundamentals_of_Engineering_Thermodynamics_7TH_ED (added on the 26th of May 2020).

Summative Exercises:

Exercise 1: Motorcar on sloping road.


Modify the motorcar model in eq. (2.6) by allowing the road to slope at an angle β
with respect to the horizontal: β = 0 for a level road, β < 0 for a downhill road section
and β > 0 for an uphill road section. Write a general model like that in eq. (2.6), but
with an additional term that depends on the road angle β. Assume the following
numerical data: P ≤ 50 kW, m = 1300 kg, kfrict = 0.8 kg·s/m, and g = 9.8 m/s².
(a) Assume this car is travelling on road number 87 between the Kinneret and
Katzrin. Calculate the approximate angle β of a hypothetical straight road
between the junction with road number 92 at the bottom and the Katzrin
turn-off to road number 9088 at the top. You may take the relevant data from
topographical maps of the southern Golan – it does not have to be terribly
accurate.
(b) Find the steady state speed of this car for P = 50 kW, when it is travelling up
this road. Give your answer in km/h and compare it to the value calculated
after eq. (2.7).
(c) Find the steady state speed of this car for P = 0 W, when it is travelling down
this road. Give your answer in km/h and compare it to the value calculated
after eq. (2.7).
(d) Simulate this car's speed and distance – in Simulink or in a MATLAB code –
when it is coasting down from the Katzrin junction with P = 0 W, starting at
v = 0 m/s. Does it reach the steady state speed that was calculated in (c)
before reaching the junction with road 92? (A sketch of one possible MATLAB
implementation follows after part (e).)
(e) Compare the potential energy loss to the kinetic energy gain between the top
and bottom positions of the car in part (d). Explain the difference.
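Since eq. (2.6) is not reproduced here, the quadratic-drag model form used below is an assumption, and the road angle and length are placeholders that must be replaced with your own results from part (a). This is only a minimal explicit-Euler sketch of part (d), not the required solution.

% Exercise 1(d) sketch: coasting downhill with P = 0 W, explicit Euler.
m = 1300; kfrict = 0.8; g = 9.8;       % exercise data
beta = -0.03;                          % rad, placeholder - use your own angle from (a)
L    = 15e3;                           % m, placeholder road length from (a)

h = 0.1;                               % step size in s
v = 0; s = 0; t = 0;                   % start from rest at the top
vv = v; ss = s; tt = t;                % stored trajectories
while s < L
    dvdt = -(kfrict/m)*v^2 - g*sin(beta);   % assumed model form with P = 0
    v = v + h*dvdt;
    s = s + h*v;
    t = t + h;
    vv(end+1) = v; ss(end+1) = s; tt(end+1) = t;
end
plot(tt, 3.6*vv), grid on, xlabel('t in s'), ylabel('v in km/h')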

Exercise 2: Domestic electric water heater.


(You can find technical data, for example, from the Israeli company Chromagen.)
Here we assume a 100 litre tank with a 230 V electrical power supply and a
through-flow rate of up to 4 litres per minute. The inlet temperature varies between
15 °C and 35 °C. Assume that the water temperature T is uniform in the tank.
Derive the state equations that describe this system and simulate the outflow
temperature in Simulink.
The outflow temperature is kept around a reference temperature Tref by controlling
the electrical current with a ‘digital thermostat’ according to the following logic:
% Digital thermostat (relay with hysteresis)
On = 1; Off = 0; HalfSpan = 2;        % half-width of the switching band in degrees C
if T > (Tref + HalfSpan)
    Switch = Off;                     % too hot: heating off
elseif T < (Tref - HalfSpan)
    Switch = On;                      % too cold: heating on
else
    % inside the band: keep the previous state, inferred from the last current
    if Current < 10, Switch = Off; else, Switch = On; end
end
Current = Switch*20;                  % Ampere
This can be called pseudocode – its implementation in a real code depends on the
software environment and may require modifications or additions, such as memory
for the last current value. It describes what is also known as a relay with hysteresis,
which is available in Simulink. Add the thermostat feedback to the Simulink model
and simulate the outflow temperature under variable through-flow rate and inflow
temperature conditions. One way of adding the required memory is sketched below.
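The following is a sketch only, not the required implementation: a MATLAB function with a persistent switching state; the function name thermostat is illustrative.

function Current = thermostat(T, Tref)
% Relay with hysteresis: the switching state is remembered between calls,
% so nothing changes inside the +/- HalfSpan dead band.
persistent Switch
if isempty(Switch), Switch = 0; end    % start with the heater off
HalfSpan = 2;                          % degrees C
if T > Tref + HalfSpan
    Switch = 0;
elseif T < Tref - HalfSpan
    Switch = 1;
end
Current = Switch*20;                   % Ampere
end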

Exercise 3: Temperature controlled filling of a bathtub.
A bathtub is filled with hot and cold water. The two flows mix to a uniform
temperature – with negligible delay. Ignore heat transfer between the bathtub body
and its content. You may assume that this water is incompressible and hence that
the specific internal energy is a function of the temperature alone.
Derive two state differential equations: one for the mass of water in the bathtub and
one for its temperature. The variable mass flow rates and temperatures of the hot
and cold water are the four inputs. Build this process in Simulink and add a
relay-based temperature regulation system – as in Exercise 2 – that varies the hot
water flow rate to maintain the bath temperature close to a desired reference
temperature. You are welcome to replace the relay controller with a proportional
gain controller. (One possible form of the two state equations is sketched below.)
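A sketch of one possible right-hand side for the two state equations follows. It assumes perfect mixing, negligible flow work so that the specific internal energy is c·T with a constant specific heat, and no outflow during the filling; the function and variable names are illustrative only.

function dxdt = bathtub(~, x, mdot_h, mdot_c, Th, Tc)
% States: x(1) = water mass m in kg, x(2) = water temperature T in degC.
% Mass and energy balances for a perfectly mixed bathtub with no outflow.
m = x(1); T = x(2);
dmdt = mdot_h + mdot_c;                            % mass balance
dTdt = (mdot_h*(Th - T) + mdot_c*(Tc - T))/m;      % energy balance with u = c*T
dxdt = [dmdt; dTdt];
end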

Exercise 4: Pressure cooking on a gas cooker.


This exercise goes beyond the intended level of this undergraduate course. Most
students should not spend much effort on it. Even the top students at the Braude
College should not necessarily expect to solve it completely. It is meant to push you
to your limits and perhaps to give you a glimpse of what else you may have to learn
in order to solve and understand such a seemingly simple everyday domestic
process. If you do attempt this exercise, then I advise you to carefully and
purposefully simplify the problem by neglecting a number of “small terms”.
Some water is poured into a 10 litre pressure cooker and the lid is closed. This water
and the pot are heated on a domestic gas cooker. Model the liquid water mass and
temperature from the initial volume of 1 litre and 20 °C to complete evaporation.
Assume a domestic gas burning rate of 0.2 litres per second. Find a typical calorific
value of domestic gas in the literature and assume that 70% of this energy is
transferred to the pressure cooker and the water. The pressure cooker is made of
aluminium and has a mass of 1 kg.
The pressure cooker’s operating pressure is regulated to the usual 100 kPa gauge
pressure – that is, to 100 kPa above the local air pressure. Assume an ideal
pressure-regulating valve: ignore the dead zone between the opening and closing
pressures of the valve and assume a sufficiently large valve opening.
Assume that the temperatures of aluminium, liquid water and water vapour inside
the cooker are all equal. Ignore heat transfer between the pressure cooker and the
surrounding air.
Derive the state equations that describe this system and simulate this system with a
MATLAB code. Display the pressure cooker temperature and pressure, among other
variables.

Last update: November 2021.


Karmiel

