Rheinisch-Westfälische Technische Hochschule Aachen

Lecture Notes

Introduction to Simulation Techniques
Lehrstuhl für Prozesstechnik
Dr.-Ing. Ralph Schneider

Version 1.2 Copyright: R. Schneider 2006

Copyright 2006 by Wolfgang Marquardt
Lehrstuhl für Prozesstechnik
RWTH Aachen University
Templergraben 55
D-52056 Aachen
Germany

Tel.: +49 (0) 241 80-94668
Fax: +49 (0) 241 80-92326
E-mail: secretary@lpt.rwth-aachen.de
WWW: http://www.lpt.rwth-aachen.de


The copyright of these lecture notes is reserved. Copies may only be made for use within the lecture "Introduction to Simulation Techniques" at RWTH Aachen University. Any further use requires written permission. In these lecture notes, materials of other authors are used for educational purposes. This does not imply that these materials are free of copyright. These notes for the lecture "Introduction to Simulation Techniques" have been prepared to the best of the authors' knowledge. However, neither the correctness of the given information nor the absence of typing errors can be guaranteed. The scope of the examinations in "Introduction to Simulation Techniques" is defined by the presentations in the lectures and exercises, not by these notes.

Preface
This manuscript accompanies the lecture "Introduction to Simulation Techniques", which may be attended by students of the master programme "Simulation Techniques in Mechanical Engineering", students of the Lehramtsstudiengang "Technische Informatik", students of Mechanical Engineering whose major course of study is "Grundlagen des Maschinenbaus", as well as students taking it as a third technical elective course in Mechanical Engineering. This lecture was offered for the first time in the summer semester 2001.

The manuscript aims at minimizing the effort of taking notes during the lecture as far as possible and tries to present the basics of simulation techniques in a compact manner. The topics treated in the manuscript are very extensive and can therefore be discussed only in a summarizing way in a one-term lecture. Well-known material from other lectures is only covered briefly. It is presupposed that the reader is familiar with the basics of numerics, mechanics, thermodynamics, and programming.

Above all, Martin Schlegel contributed to the success of this manuscript, both with critical remarks and helpful comments as well as with the continuous revision of the text and the figures. Aidong Yang did a lot of work in polishing the first English version of this manuscript. Beyond that, Ngamraht Kitiphimon and Sarah Jones have to be mentioned, who provided the first German and English versions of the manuscript, respectively. My thanks to all of them.

The lecture is based on the lecture "Simulationstechnik" offered by Professor M. Zeitz at the University of Stuttgart. I would like to express cordial thanks to him for his permission to use his lecture notes.

Despite repeated and careful revision of the manuscript, incorrect representations cannot be excluded. I am grateful for every hint about (possible) errors, gaps in the material selection, didactical weaknesses, or unclear representations, in order to improve the manuscript further. These lecture notes are also offered on the homepage of Process Systems Engineering (http://www.lpt.rwth-aachen.de), where they can be downloaded by any interested reader. I hope that with the publication on the internet a faster correction of errors can be achieved. I would like to ask the readers to submit suggestions for changes and corrections by email (simulationtechniques@lpt.rwth-aachen.de). Each email will be answered.

Aachen, in July 2004

Ralph Schneider

Contents

1 Introduction
  1.1 What is Simulation?
  1.2 Simulation Procedure
  1.3 Introductory Example for the Simulation Procedure
    1.3.1 Problem
    1.3.2 Abstraction
    1.3.3 Mathematical Model
    1.3.4 Simulation Model
    1.3.5 Graphical Representation
    1.3.6 Analysis of the Model
    1.3.7 Numerical Solution
    1.3.8 Simulation
    1.3.9 Applications of Simulators

2 Representation of Dynamic Systems
  2.1 State Representation of Linear Dynamic Systems
  2.2 The State Space
  2.3 State Representation of Nonlinear Dynamic Systems
    2.3.1 Example
    2.3.2 Generalized Representation
  2.4 Block-Oriented Representation of Dynamic Systems
    2.4.1 Block-Oriented Representation of Linear Systems
    2.4.2 Block-Oriented Representation of Nonlinear Systems

3 Model Analysis
  3.1 Lipschitz continuity
  3.2 Solvability
  3.3 Stationary States
  3.4 Jacobian Matrix
  3.5 Linearization of Real Functions
  3.6 Linearization of a Dynamic System around the Stationary State
  3.7 Eigenvalues and eigenvectors
  3.8 Stability
    3.8.1 One state variable
    3.8.2 System matrix with real eigenvalues
    3.8.3 Complex eigenvalues of a 2 × 2 system matrix
    3.8.4 General case
  3.9 Time Characteristics
  3.10 Problem: Stiff Differential Equations
  3.11 Problem: Discontinuous Right-Hand Side of a Differential Equation

4 Basic Numerical Concepts
  4.1 Floating Point Numbers
  4.2 Rounding Errors
  4.3 Conditioning

5 Numerical Integration of Ordinary Differential Equations
  5.1 Principles of Numerical Integration
    5.1.1 Problem Definition and Terminology
    5.1.2 A Simple Integration Method
  5.2 One-Step Methods
    5.2.1 Explicit Euler Method (Euler Forward Method)
    5.2.2 Implicit Euler Method (Euler Backward Method)
    5.2.3 Semi-Implicit Euler Method
    5.2.4 Heun's Method
    5.2.5 Runge-Kutta Method of Fourth Order
    5.2.6 Consistency Condition for One-Step Methods
  5.3 Multiple-Step Methods
    5.3.1 Predictor-Corrector Method
  5.4 Step Length Control

6 Algebraic Equation Systems
  6.1 Linear Equation Systems
    6.1.1 Solution Methods for Linear Equation Systems
  6.2 Nonlinear Equation Systems
    6.2.1 Solvability of the Nonlinear Equation System
    6.2.2 Solution Methods for Nonlinear Equation Systems
      Newton's Method for Scalar Equations
      Newton-Raphson Method for Equation Systems
    6.2.3 Convergence Problems of the Newton-Raphson Method

7 Differential-Algebraic Systems
  7.1 Depiction of Differential-Algebraic Systems
    7.1.1 General Nonlinear Implicit Form
    7.1.2 Explicit Differential-Algebraic System
    7.1.3 Linear Differential-Algebraic System
  7.2 Numerical Methods for Solving Differential-Algebraic Systems
  7.3 Solvability of Differential-Algebraic Systems

8 Partial Differential Equations
  8.1 Basic Concepts
  8.2 Representation of Partial Differential Equations
  8.3 Numerical Solution Methods
  8.4 Summary

9 Discrete Event Systems
  9.1 Models for Discrete Event Systems
  9.2 Simulation Tools
  9.3 Graph Theory
  9.4 Automaton Models
  9.5 Petri Nets

10 Parameter Identification

11 Summary
  11.1 Problem Definition
  11.2 Modeling
  11.3 Numerics
  11.4 Simulators
  11.5 Parameter Identification
  11.6 Use of Simulators
  11.7 Potentials and Problems

Bibliography

1 Introduction

1.1 What is Simulation?

Simulation ("virtual reality", "the experiment on the computer") is also called the third pillar of science next to theory and experiment. We all know examples of simulation techniques from the area of computer games, e.g. the flight simulator (see Fig. 1.1). In this example the reality is represented in the form of a mathematical model. The model equations are solved with a numerical algorithm. Finally, the results can be visually displayed.

Figure 1.1: Flight simulator as an example of a simulator.

A more rigorous definition of (computer aided) simulation can be found in Shannon (1975, p. 2): Simulation is the process of designing a model of a real system and conducting experiments with this model for the purpose either of understanding the behavior of the system and its underlying causes or of evaluating various designs of an artificial system or strategies for the operation of the system.

The VDI guideline 3633 (Verein Deutscher Ingenieure, 1996) defines simulation in the following way:

Simulation is the process of emulating a system with its dynamic processes in an experimental model in order to receive some knowledge, which is transferable to the reality. In a broader sense, simulation means the preparation, the execution, and the evaluation of aimed experiments by means of a simulation model.

With the help of simulation the temporal behavior of complex systems can be discovered (simulation method). Simulation became well-known in connection with the book of ?, which presented and interpreted a world model in the seventies. The obtained simulation results predicted that, with the continuation of those days' economy and population growth, only a few decades were needed to lead to the exhaustion of raw material resources, environmental destruction, world wide undernourishment and pollution, and thereby to a dramatic population breakdown.

Examples of application areas where simulation studies are used are:

• flight simulators,
• chemical processes,
• power plants,
• flexible manufacturing,
• weather forecast,
• stock market,
• war gaming,
• software development.

Figure 1.2: Modeling as an abstraction.

Reasons for this are that (computer) simulations (also called simulation experiments) are usually

• simpler,
• faster,
• less dangerous to people,
• less harmful for the environment,
• and much more economical than real experiments.

As Fig. 1.2 shows, it is not reality that is represented on the computer, but solely an approximation! This is because modeling is an intended simplification of reality through abstraction. According to the approximation used, different models are obtained. Essentially, you should be aware of the differences between reality and its representation by the computer.

Although reality is not completely reproducible, models are useful. This becomes clear through the definition of a model by Minsky (1965) and is illustrated in Fig. 1.3: To an observer B, an object M is a model of an object A to the extent that B can use M to answer questions that interest him about A.

Figure 1.3: Models – definition by Minsky (1965).

For the significance of modeling and simulation, the following two quotations should be mentioned:

Karl Ganzhorn, IBM (IBM Nachrichten, 1982): "Models and simulation techniques are considered the most important area of future information processing in technology, economy, society, administration, and politics."

Ralph P. Schlenker, Exxon (Conference of Foundations of Computer-Aided Process Operations, 1987): "Modeling and simulation technology are keys to achieve manufacturing excellence and to assess risk in unit operation. [...] As we make our plant operations more flexible to respond to business opportunities, efficient modeling and simulation techniques will become commonly used tools."

1.2 Simulation Procedure

The simulation procedure (see Fig. 1.4) serves us as a guideline through this course. Simulation techniques are not subject to any concrete field of application. Rather, they concern a methodology which can be widely used in many applications. The examples in the lecture and the tutorials will illustrate this interdisciplinary character.

After this introductory chapter, different kinds of simulation models, for example steady-state models, dynamic models with lumped and distributed parameters, as well as discrete time and discrete event models, will be discussed. The different model types will be introduced with the help of examples, and a short survey of the software tools available today for these system classes will be given. Afterwards, the numerical treatment will be briefly discussed. We will also deal with methods of parameter estimation (identification, optimization) in the context of adjusting models to experiments.

Figure 1.4: Simulation procedure.

1.3 Introductory Example for the Simulation Procedure

For illustration purposes, the following example is presented, in which the main steps in the simulation procedure are mentioned.

1.3.1 Problem

At first, the real world problem has to be formulated, for which we want to find answers with the help of simulation. This example deals with the determination of the vertical movement of a motor vehicle during a period of time through a ride over a wavy road (see Fig. 1.5).

Figure 1.5: Depiction of the vertical movement of a motor vehicle.

1.3.2 Abstraction

Generally it is not our intention or within our capability to capture all aspects of the real world problem. For this reason, an appropriate selection of the effects to be considered and practical simplifications of the reality have to be made. In our example, you can simplify the problem e.g. through regarding both the wheels and the car body as two mass points. In Fig. 1.6 this simplified model of the vehicle is depicted.

1.3.3 Mathematical Model

Based on this revised picture, you state a mathematical description of the problem with physical (and possibly chemical) equations. In our example you use the equation of motion

(Newton's law) for the wheel:

m ÿ = Σ forces,    (1.1)

m ÿ = −d ẏ − c_A y − c_R (y − y_s),    (1.2)

or

m ÿ + d ẏ + (c_A + c_R) y = c_R y_s,   t > t0,    (1.3)

with the abbreviations c := c_A + c_R and k(t) := c_R y_s(t). Equation (1.3) represents a second-order linear differential equation with the initial conditions

y(t0) = y0,   ẏ(t0) = ẏ0    (1.4)

for the distance y and the velocity ẏ at time t0. The differential equation and the initial conditions form the model of the problem.

Figure 1.6: Depiction of the simplified motor vehicle model.

The simulation experiment is defined through the following known quantities:

• the model structure (equation (1.3)),
• the initial conditions y(t0) = y0, ẏ(t0) = ẏ0,
• the values of the parameters (m, d, c) – which may be time-variant in general! –, and
• the (time-variant!) input k(t).

The wanted quantities are y(t), ẏ(t), ÿ(t) for t > t0.

1.3.4 Simulation Model

For simulation purposes, you often do not rely on the model as it has been built through abstraction (which are in our example the equations (1.3) and (1.4)). Rather, you do certain conversions on it. In this example, the conversion would result in a system of first-order differential equations (so-called state description). We define the variables

x1 = y   distance,    (1.5)
x2 = ẏ   velocity,    (1.6)

which leads to the following differential equations:

ẋ1 = dx1/dt = ẏ = x2,   x1(t0) = y0,    (1.7)
ẋ2 = dx2/dt = ÿ = (1/m)(−d x2 − c x1 + k(t)),   x2(t0) = ẏ0.    (1.8)

This is a time continuous, dynamic system of order n (here n = 2). The states correspond to energy storages, characterized by the according initial conditions. So the "equation-oriented" description of the model is:

ẋ1 = x2,   x1(t0) = y0,    (1.9)
ẋ2 = (1/m)(−d x2 − c x1 + k(t)),   x2(t0) = ẏ0.    (1.10)

1.3.5 Graphical Representation

The model (1.9), (1.10) can be represented in a graphical "block-oriented" way (see Fig. 1.7).

Figure 1.7: Block-oriented representation of the motor vehicle model.
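To make the state description concrete, the following minimal sketch writes the right-hand side of (1.9) and (1.10) as a small function. The numerical values chosen for m, d, c and the excitation k(t) are assumptions for illustration only; they are not taken from the lecture.

```python
import numpy as np

# Assumed (illustrative) parameter values, not from the lecture notes
m, d, c = 250.0, 0.0, 1.0e5      # mass, damping, combined stiffness c = cA + cR

def k(t):                        # road excitation k(t) = cR * ys(t), here simply constant
    return 1.0e4

def f(x, t):
    """Right-hand side of the state description (1.9)-(1.10); x = [x1, x2] = [y, y_dot]."""
    x1, x2 = x
    return np.array([x2, (-d * x2 - c * x1 + k(t)) / m])

x0 = np.array([0.0, 0.0])        # initial conditions x1(t0) = y0, x2(t0) = y0_dot
print(f(x0, 0.0))                # time derivatives of the states at t0
```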

1.3.6 Analysis of the Model

For preparation of a reasonable simulation, an analysis of the model is necessary, so that you can see the consequences of the different influential parameters, e.g. in view of the following points:

a. Wellposedness

The problem must have definite solutions for all meaningful values of parameters, inputs, and initial conditions (see existence theorem for first-order differential equation systems in Section 3.2).

b. Simulation Time

The model must at least be simulated over the time T (t ∈ [t0, t0 + T]). T can be determined by means of dynamic parameters (see Fig. 1.8). You obtain these for instance through approximations, process knowledge (experiment with the real examination object), or trial simulations with guessed quantities. In our example, you can perform the following approximations:

d = 0,   k(t) = const.    (1.11)

With this and equation (1.8), you obtain the oscillation differential equation

ÿ + (c/m) y = (1/m) k = const,   with ω² := c/m,    (1.12)

with the frequency f = ω/(2π) = (1/(2π)) √(c/m) and the period Θ = 1/f = 2π √(m/c).

Figure 1.8: Dynamic parameters.

A model is characterized through the time parameters Tmax and Tmin:

Tmax = max(period, excitation),    (1.13)
Tmin = min(period, excitation),    (1.14)

where "excitation" denotes a typical time for the characterization of k(t). In our example, the simulation time T can be determined by the empirical formula

T = 5 · Tmax.    (1.15)

c. Time Step Length for Integrator

For many numerical integration algorithms you need a clue on how to choose an appropriate time step length ∆t (see Fig. 1.9). As an empirical formula, the following choice of a time step length is valid:

∆t = min(Tmin/10, T/200),    (1.16)

where Tmin/10 is motivated by accuracy and T/200 by graphics.

Figure 1.9: Time step length ∆t.

d. Expected Solution Space

For the visualization of the solution, an imagination of its order of magnitude is useful. In our example, you are able to determine e.g. the extreme values of the wheel movement. You can get this through approximative calculations or on the basis of process knowledge. For a constant k(t) = k, we can calculate the steady-state solution of the system. The steady state is characterized by the fact that no time-dependencies of the state variables x1 = y and x2 = ẏ occur, i.e. it can be determined with the conditions

ẋ1 = ẏ = 0,   ẋ2 = ÿ = 0.    (1.17)
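As a small numerical illustration of these rules of thumb, the sketch below computes the period, the simulation time, and the step length; the values of m and c are assumptions chosen only for illustration.

```python
import math

# Assumed (illustrative) values for mass and combined stiffness
m, c = 250.0, 1.0e5

omega = math.sqrt(c / m)           # natural frequency, omega^2 = c/m (cf. (1.12))
theta = 2.0 * math.pi / omega      # period Theta = 2*pi*sqrt(m/c)

T_max = theta                      # here the period is the dominant dynamic parameter
T = 5.0 * T_max                    # simulation time, empirical rule (1.15)
T_min = theta
dt = min(T_min / 10.0, T / 200.0)  # step length, empirical rule (1.16)

print(f"period = {theta:.3f} s, simulation time = {T:.3f} s, step length = {dt:.4f} s")
```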

Then equation (1.3) yields an expression for the steady-state solution ȳ:

c ȳ = k,   hence   ȳ = k/c.    (1.18)

The dynamic solution of equation (1.3) is given by (assuming d = 0):

y(t) = x1(t) = ȳ (1 − cos(ωt)),    (1.19)
ẏ(t) = x2(t) = ȳ ω sin(ωt),    (1.20)

therefore

x1,max = 2 ȳ,   x1,min = 0,   x2,max = ω ȳ,   x2,min = −ω ȳ.    (1.21)–(1.23)

1.3.7 Numerical Solution

The analysis of the model is followed by its numerical solution. The formulated differential equations have to be integrated with respect to time. In the following, we proceed based on the assumption that x is a scalar, not a vector variable. The same ideas, however, can also be applied to vectors. A model shall be defined through

ẋ = f(x),   x(0) = x0.    (1.24)

In Fig. 1.10 this is depicted symbolically with an integration block.

Figure 1.10: Integration block.

The time integration provides:

x(t) = x(0) + ∫₀ᵗ f(x(τ)) dτ.    (1.25)
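The analytic solution (1.19), (1.20) can be checked directly against the expected extreme values. The parameter values in the sketch below are again assumptions for illustration only.

```python
import numpy as np

m, c, k = 250.0, 1.0e5, 1.0e4           # assumed values, not from the lecture
omega = np.sqrt(c / m)
y_bar = k / c                           # steady-state solution (1.18)

t = np.linspace(0.0, 5 * 2 * np.pi / omega, 2001)
y = y_bar * (1.0 - np.cos(omega * t))   # analytic solution (1.19) for d = 0
y_dot = y_bar * omega * np.sin(omega * t)

print(y.max(), 2 * y_bar)               # x1,max should be close to 2*y_bar, cf. (1.21)
print(y_dot.max(), omega * y_bar)       # x2,max should be close to omega*y_bar, cf. (1.23)
```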

The time axis will now be subdivided into an equidistant time grid (see Fig. 1.11). With this, equation (1.25) can be rewritten as:

x(tk+1) = x(0) + ∫₀^{tk+1} f(x(τ)) dτ
        = x(0) + ∫₀^{tk} f(x(τ)) dτ + ∫_{tk}^{tk+1} f(x(τ)) dτ
        = x(tk) + ∫_{tk}^{tk+1} f(x(τ)) dτ.    (1.26)

Figure 1.11: Numerical integration.

The second term in (1.26) can be approximated (see Fig. 1.11). An example for this is the (explicit) Euler method:

xk+1 = xk + h · fk    (1.27)

with

xk+1 ≈ x(tk+1),    (1.28)
xk ≈ x(tk),    (1.29)
fk = f(xk),    (1.30)
h = tk+1 − tk.    (1.31)

The numerical method is coded in a computer program (see Fig. 1.12).
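A minimal, self-contained sketch of such a calculation program for the vehicle model (1.9), (1.10), using the explicit Euler formula (1.27); all numerical values are assumptions chosen only for illustration, not values from the lecture.

```python
import numpy as np

# Assumed (illustrative) parameters and constant excitation
m, d, c, k = 250.0, 0.0, 1.0e5, 1.0e4

def f(x):
    """Right-hand side of the state description (1.9)-(1.10); x = [y, y_dot]."""
    return np.array([x[1], (-d * x[1] - c * x[0] + k) / m])

h = 0.001                       # step length
n_steps = 5000
x = np.array([0.0, 0.0])        # initial conditions
trajectory = [x.copy()]

for _ in range(n_steps):        # explicit Euler: x_{k+1} = x_k + h * f(x_k), cf. (1.27)
    x = x + h * f(x)
    trajectory.append(x.copy())

trajectory = np.array(trajectory)
print("max. wheel displacement:", trajectory[:, 0].max())   # compare with 2*k/c from (1.21)
```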

Figure 1.12: Numerical integration as a calculation program.

1.3.8 Simulation

Up to now, the solution of simulation problems has been discussed. It is important to compare the results of these simulations with real experiments in order to be able to make statements on e.g. whether model simplifications are acceptable and whether the numerical solution method applied is suitable: the simulation experiment yields simulated distances and velocities, the real experiment yields measured distances and velocities on a test track, and both are matched through the test conditions.

If relevant deviations occur (Fig. 1.13) between simulation and experiment (after examination of comparable experimental conditions), then the simulation has to be modified through

• parameter adaptation (c, d, m), or
• model modification (friction, sophisticated vehicle model).

Often only an iterative model adjustment leads to a satisfactory correspondence between simulation and reality.

1.3.9 Applications of Simulators

If a sufficiently accurate model of the reality is finally constructed, the simulation can be used for problem solving. Examples for applications of the simulation models in our example are:

a. Analysis
e.g. of the spring behavior: |ÿ_R| < M_R (safety), |ÿ_A| < M_A (comfort)

b. Synthesis
e.g. an active spring c(y, ẏ)

c. "Hardware in the loop"
e.g. an active spring as a component in a simulated suspension model

d. Training
e.g. education

e. Predictive simulation
e.g. prediction

Figure 1.13: Deviations between simulation and real experiment.

These different applications can be divided into online and off-line applications.

Figure 1.14: Time axes in the simulation and real time.

1. (1. time compression.32) 14 .  = 1 T t  β= ∗ = ∗ <1 T t   >1 simulation can be divided with regard to its time scales: real time.1 Introduction On the basis of Fig.14. time extension.

2 Representation of Dynamic Systems

2.1 State Representation of Linear Dynamic Systems

The example of the vertical movement of a motor vehicle in the previous chapter led to a time continuous dynamic system (with lumped parameters), which is described through ordinary differential equations with time as the independent variable. In this chapter we will generalize this system representation. Starting with one or more linear differential equations of higher order, we are going to implement a transformation such that we get a system of differential equations of first order (Zeitz, 1999). This alternative representation of the model is useful for determining certain characteristics of the system in consideration, for instance its stability. Furthermore, common numerical integration methods require a system of differential equations of first order, as we will see in Chapter 5.

In Figure 2.1 a linear transfer system is depicted (comparable to systems in control engineering).

Figure 2.1: Linear transfer system.

The model for this system is given through a generic linear differential equation of order n:

a0 y + a1 ẏ + ... + an y^(n) = b0 u + b1 u̇ + b2 ü + ... + bm u^(m).    (2.1)

The initial conditions for y(0), ẏ(0), ..., y^(n−1)(0) and u(t) as well as all derivatives of u(t) for t ≥ 0 have to be known. Note that for real systems, the condition m ≤ n always holds: the system state and the output depend only on previous states and inputs. Attention has to be paid to the time derivatives of the input variables!

In Fig. 2.2, a more detailed model of the vertical movement of a motor vehicle depicted in Section 1.3 is given. Its mathematical representation is

m ÿ = −c1 y − d1 ẏ + c2 (u − y) + d2 (u̇ − ẏ)   for t > 0.    (2.2)

It becomes clear that the differential equation of this model is a special case of the generic equation (2.1) with n = 2 and m = 1:

1 · ÿ + ((d1 + d2)/m) ẏ + ((c1 + c2)/m) y = (c2/m) u + (d2/m) u̇ (+ 0 · ü),    (2.3)

i.e. the coefficients of (2.1) are a2 = 1, a1 = (d1 + d2)/m, a0 = (c1 + c2)/m, b0 = c2/m, b1 = d2/m, and b2 = 0. The initial conditions at t = 0 are

y(0) = y0,   ẏ(0) = ẏ0.    (2.4)

Figure 2.2: Modeling of the vertical movement of a vehicle.

After a simple transformation of equation (2.3), one obtains the representation

ÿ + a1 ẏ + a0 y = b0 u + b1 u̇ + b2 ü,   y(0) = y0,   ẏ(0) = ẏ0.    (2.5)

Note that in favor of a more general solution, we have introduced the term b2 ü; in our example, this term equals zero. The representation (2.5) is unsuitable for simulation purposes because in general, the inputs u(t) may not be differentiable. Indeed, with a step change of u at t∗ as depicted in Figure 2.3, the derivatives u̇(t∗) and ü(t∗) are not defined.

Figure 2.3: Step change on u(t).

As such discontinuities cause difficulties in simulation (i.e. in the numerical methods), equations like (2.1) are transformed through successive integration. As we want to eliminate all derivatives u̇, ü, ..., u^(m), m integrations are required:

∫ ... ∫ (differential equation) dτ ... dτ   (m integrals).    (2.6)

Inspect, for instance, the general case n = m = 2 as in equation (2.5). By solving this equation with respect to the highest order derivative in y you obtain

ÿ = b0 u − a0 y + b1 u̇ − a1 ẏ + b2 ü    (2.7)

with the initial conditions y(0) = y0, ẏ(0) = ẏ0. As equation (2.7) is of second order in u, two integrations are required to obtain a representation free of derivatives of u. After the first integration, we get

ẏ(t) = ∫ (b0 u − a0 y) dτ + b1 u − a1 y + b2 u̇,   with x1 := ∫ (b0 u − a0 y) dτ,    (2.8)

hence

ẏ(t) = x1 + b1 u − a1 y + b2 u̇,

and the second integration yields

y(t) = ∫ (x1 + b1 u − a1 y) dτ + b2 u,   with x2 := ∫ (x1 + b1 u − a1 y) dτ,    (2.11)

hence

y(t) = x2 + b2 u.

We get the following equation system, consisting of two differential equations – the definitions of ẋ1 and ẋ2 introduced above – and one algebraic equation:

ẋ1 = b0 u − a0 y,    (2.13)
ẋ2 = x1 + b1 u − a1 y,    (2.14)
y = x2 + b2 u.    (2.15)

As y is an output variable of the system, it is reasonable to replace y in (2.13) and (2.14) by inserting equation (2.15):

ẋ1 = −a0 x2 + (b0 − a0 b2) u,    (2.16)
ẋ2 = x1 − a1 x2 + (b1 − a1 b2) u,    (2.17)
y = x2 + b2 u.    (2.18)

This form is called the state representation of the linear system (2.7). In this representation, the temporal derivatives of the state variables x1 and x2, i.e. the evolution of the system, are given as functions of the current state of the system (the xi) and the input variable u.

The solution of the equation system (2.16)–(2.18) requires the initial conditions x1(0) and x2(0) at time t = 0. They can be obtained with the help of the initial conditions y(0) = y0 and ẏ(0) = ẏ0. Applying (2.18) at t = 0, you obtain

y(0) = x2(0) + b2 u(0)   ⇒   x2(0) = y0 − b2 u(0).    (2.19), (2.20)

Differentiation of equation (2.18), together with (2.17) and (2.20), leads to

ẏ(0) = x1(0) + b1 u(0) − a1 y(0) + b2 u̇(0),    (2.21)

hence

x1(0) = ẏ0 + a1 y0 − b1 u(0) − b2 u̇(0).    (2.22)

So the initial conditions are known, if u(t), t ≥ 0 is known and if lim_{t↓0} u(t) exists. For instance, if the input u(t) is a step function

u(t) = 0 for t < 0,   u(t) = 1 for t ≥ 0,    (2.23)

then lim_{t↓0} u(t) = 1.    (2.24)

A generalization of the state representation (2.16)–(2.18) renders the following representation in matrix notation:

x = [x1, x2, ..., xn]ᵀ ∈ Rⁿ   (state vector),    (2.25)
u = [u1, u2, ..., um]ᵀ ∈ Rᵐ   (input vector),    (2.26)
y = [y1, y2, ..., yp]ᵀ ∈ Rᵖ   (output vector).    (2.27)

In our example this means:

[ẋ1; ẋ2] = [0 −a0; 1 −a1] [x1; x2] + [b0 − a0 b2; b1 − a1 b2] u,    (2.28)
y = [0 1] [x1; x2] + b2 u,    (2.29)

where the 2×2 matrix is the system matrix A (n×n), the column vector multiplying u is the input matrix B (n×m), the row vector [0 1] is the output matrix C (p×n), and b2 is the transmission matrix D (p×m). Consequently, you obtain a generic linear state representation in the following form:

ẋ = A x + B u,   x(0) = x0,    (2.30)
y = C x + D u,   t ≥ t0.    (2.31)

Therefore, the simulation task is to solve a system of linear differential equations of first order.

2.2 The State Space

The state vector x(t) describes the solution of the model. For a fixed t it represents a vector in the state space. For variation of t, it describes a state curve or trajectory starting from x(t0) (Föllinger and Franke, 1982). This is illustrated in Fig. 2.4.

Figure 2.4: Trajectory in the state space (according to Föllinger and Franke, 1982).

For given inputs u(t), t ≥ t0, the trajectory is definitely determined through the initial conditions x(t0) = x0. For different u(t) with the same x0, you obtain different trajectories. The special case n = 2 (two-dimensional state space) is illustrated in Fig. 2.5, left. Fig. 2.5, right, shows in contrast a time domain depiction. This illustration is also called phase plot.
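A small sketch of how the matrices of the state representation (2.28)–(2.31) can be assembled and evaluated; the coefficient values are assumptions for illustration only.

```python
import numpy as np

# Assumed (illustrative) coefficients of the second-order model (2.5)
a0, a1 = 4.0, 0.5
b0, b1, b2 = 4.0, 0.2, 0.0

A = np.array([[0.0, -a0],
              [1.0, -a1]])         # system matrix, cf. (2.28)
B = np.array([[b0 - a0 * b2],
              [b1 - a1 * b2]])     # input matrix
C = np.array([[0.0, 1.0]])         # output matrix, cf. (2.29)
D = np.array([[b2]])               # transmission matrix

x = np.array([[0.0], [0.0]])       # state vector x(0) = x0
u = np.array([[1.0]])              # constant input

x_dot = A @ x + B @ u              # state equation (2.30)
y = C @ x + D @ u                  # output equation (2.31)
print(x_dot.ravel(), y.ravel())
```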

Figure 2.5: Phase plot and time domain depiction.

2.3 State Representation of Nonlinear Dynamic Systems

Most real systems cannot be sufficiently represented with linear relations. In the following, we extend the state representation to the nonlinear case (Föllinger and Franke, 1982; Zeitz, 1999).

2.3.1 Example

Now we study a common example from ecology: the predator-prey model (Lotka, 1925; Volterra, 1926). In Fig. 2.6 the corresponding ecology system is depicted schematically. We make the following assumptions:

• As an input quantity we only use the catch; there is no immigration or migration.
• "1" is the prey, "2" is the predator.
• birth rate: for prey ∼ number of prey, for predator ∼ number of prey and predator.
• death rate ∼ number of prey or predator.

• death rate of prey caused by predator ∼ number of prey and predator.

Figure 2.6: Predator-prey model.

Considering the balance for the temporal change of the number of animals in the ecology system, you get:

temporal change of the animals = increase through birth − loss through death − loss through catch.

Hence, for the prey:

ẋ1 = a1 x1 − b1 x1 x2 − c1 x1 − v1 u,   x1(0) = x10,    (2.32)

and for the predator:

ẋ2 = b2 x1 x2 − c2 x2 − v2 u,   x2(0) = x20.    (2.33)

As this state space model includes the term x1 x2, it is nonlinear. Henceforth, the question comes up, what does the solution for x1(t) and x2(t) look like? A formula representation of x1(t) and x2(t) is hardly possible.

We simplify the model (2.32), (2.33) by assuming that no catch takes place (v1 = v2 = 0). Now we can determine the stationary states (equilibrium states) x̄1 and x̄2, for which the conditions ẋ1 = 0 and ẋ2 = 0 hold (see also Section 3.3):

(a1 − b1 x̄2 − c1) x̄1 = 0,    (2.34)
(b2 x̄1 − c2) x̄2 = 0.    (2.35)

Besides the trivial solution x̄1 = x̄2 = 0 you obtain:

x̄1 = c2 / b2   (with equation (2.35)),    (2.36)
x̄2 = (a1 − c1) / b1   (with equation (2.34)).    (2.37)
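Before working out the trajectories, a small numerical sketch can integrate (2.32), (2.33) without catch and relate the result to the stationary state (2.36), (2.37). The coefficient values are arbitrarily assumed for illustration and are not values from the lecture.

```python
import numpy as np
from scipy.integrate import odeint

# Assumed (illustrative) coefficients of the predator-prey model
a1, b1, c1 = 1.0, 0.1, 0.2
b2, c2 = 0.02, 0.5

def f(x, t):
    """Right-hand side of (2.32), (2.33) with v1 = v2 = 0 (no catch)."""
    x1, x2 = x
    return [a1 * x1 - b1 * x1 * x2 - c1 * x1,
            b2 * x1 * x2 - c2 * x2]

t = np.linspace(0.0, 100.0, 5001)
x = odeint(f, [30.0, 5.0], t)              # trajectory for some initial populations

x1_bar, x2_bar = c2 / b2, (a1 - c1) / b1   # stationary state (2.36), (2.37)
print("stationary state:", x1_bar, x2_bar)
print("time averages:   ", x.mean(axis=0)) # the closed orbits circle around the stationary state
```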

In the following step the non-stationary solution trajectory has to be worked out. In this case you can use a trick, where you divide equation (2.33) by equation (2.32) and then separate the variables x1 and x2:

dx2/dx1 = (dx2/dt) / (dx1/dt) = [x2 (b2 x1 − c2)] / [x1 (a1 − b1 x2 − c1)],    (2.38)

((a1 − c1)/x2 − b1) dx2 = (b2 − c2/x1) dx1,    (2.39), (2.40)

(a1 − c1) ln|x2| − b1 x2 = b2 x1 − c2 ln|x1| + C.    (2.41)

C is an integration parameter. With equation (2.41) you obtain a trajectory bundle of closed curves. By which of the curves in Fig. 2.7 a particular system is described depends on the initial conditions x10 and x20.

Figure 2.7: State trajectories of the predator-prey model.

Fig. 2.7 shows an illustration in the state space. The trajectories run through in the arrow direction with increasing time. We observe steady-state oscillations with different amplitudes and frequencies. The representation in the state space already gives an insight into the influences of external effects on the system. Fig. 2.7 also shows the behavior of the system for the two special cases x10 = 0 and x20 = 0. If x10 = 0, i.e. no prey at t = 0, the number of predator fish decreases, and the system converges towards the stationary state [0, 0]ᵀ. If x20 = 0 and x10 > 0, i.e. no predator, but some prey at t = 0, the amount of prey fish increases; the system converges towards [∞, 0]ᵀ. For example, you can assume that the prey is a harmful insect, which

.49) can be transformed into an autonomous one with the additional variable xn+1 = t and the supplementary equation xn+1 = 1. An alternative would be a continuous intervention u(t). xn (0) xn0     y1 g1 (x. However. . Using chemicals to reduce the predator and the prey from p1 to p1 has the consequence that later a much greater amount of prey will appear. u). this would lead to the question of determining the optimal trajectory of u(t). u(t)) = f (x..2 Generalized Representation The general nonlinear state space model can be described in the following way: ˙ x = f (x. x(0) = x0 . e.g. this leads to a so-called nonlinear. u). = .= . xn ˙ fn (x. .44) (2. u)     x1 (0) x10  . .   . So a better time for combating would be a decisive moment with a large amount of prey. . Every non-autonomous system ˙ x = f (x. u) .47) If the time t does not appear explicitly or implicitly through u(t) in the functions f and g. 2. control affine state space model: ˙ x = f 1 (x) + f 2 (x)u.46) .  . u(t)) = g(x. x(0) = x0 . .3 State Representation of Nonlinear Dynamic Systems shall be reduced through the use of chemicals. t) (2.   . . .2. . ˙ 23 ..  . xn or     x1 ˙ f1 (x. u) fn (2.3. (2. these systems are called autonomous. (2. .    . t). x =  . . . . y = g(x. Note that you deal with a vector model:     x1 f1  . y = g(x.   . p2 .48) (2. u)  . xn+1 (0) = 0.42) (2. yp gp (x.  =  . y = g 1 (x) + g 2 (x)u.45) So the simulation task consists of solving a system of nonlinear differential equations. f =  .43) If f and g are both linear in the input quantity u. . (2.


Example

For the non-autonomous system

ẏ = t y,   t > 0;   y(0) = y0,    (2.50)

we get with x1 := y, x2 := t:

[ẋ1; ẋ2] = [x1 x2; 1],   t > 0;   [x1(0); x2(0)] = [y0; 0].    (2.51)

2.4 Block-Oriented Representation of Dynamic Systems
2.4.1 Block-Oriented Representation of Linear Systems
The state space model can also be illustrated graphically. For this purpose you use a graph, which consists of nodes (vertices) and arcs (edges) (cf. outline of graph theory in section 9.3). In the graphical representation, an edge is assigned a direction, which corresponds to a signal flux. The nodes of the graph represent the functional elements, which modify the signals according to some defined rules. Such function blocks may have one or more inputs and outputs. Normally, you associate an edge in a graph with a scalar quantity, although generalization to a vector quantity is possible and also common. In this case, a functional element has to be interpreted as a vector, which converts a vector input into a vector output. The most important function blocks for illustrating linear dynamic systems are

• the integrator,
• the summarizer,
• the branch, and
• the gain.
These blocks are depicted graphically in Fig. 2.8. Additionally, the corresponding elements in the commercial software Simulink (MathWorks, 2002) are shown. An application of the block-oriented representation to the example of the motor vehicle is shown in Fig. 2.9. To improve clarity, some parts of a graph can be aggregated into a (structured) function element, which converts inputs into outputs according to the rules given by the aggregated parts. This procedure is exemplarily represented for the instance of our wheel model in Fig. 2.10.


2.4 Block-Oriented Representation of Dynamic Systems

¢

¢

 

¡

+ =

= = =
© ¨ §

£

= ⋅

¦

¥

+ + −

Figure 2.8: Block-oriented representation of dynamic systems.


Figure 2.9: Block-oriented representation of the wheel model.   




Figure 2.10: Aggregation of model parts.

2.4.2 Block-Oriented Representation of Nonlinear Systems
Nonlinear systems can also be illustrated graphically. The generalization of the depiction of linear systems is easily to handle by introducing a “nonlinear gain”. For this gain the output quantities are nonlinear functions of the input quantities of the function block. The block diagram of the predator-prey model (with a0 = a1 + c1 ) is shown in Fig. 2.11. It contains a nonlinear multiplier block, which multiplies x1 and x2 .

Figure 2.11: Block-oriented representation of the predator-prey model.


 

3 Model Analysis

Before solving the model equations a model analysis should be performed (see simulation procedure, Fig. 1.4) in order to support the implementation of the model in a simulator. In the following sections, different aspects of the model analysis will be briefly introduced. We look at an autonomous system of nonlinear differential equations of the form

dx/dt = f(x),   x(0) = x0.    (3.1)

The output equations do not have to be considered because they are an explicit function of states and inputs; the outputs can also be determined after the simulation (which is normally not done for practical reasons).

3.1 Lipschitz continuity

Before we discuss a criterion for the solvability of autonomous differential equations, we will introduce the term Lipschitz continuity. A function f: Rⁿ → Rᵐ is called Lipschitz continuous if a constant L > 0 exists such that for all x1, x2 ∈ Rⁿ

‖f(x1) − f(x2)‖ ≤ L ‖x1 − x2‖.    (3.2)

Here ‖·‖ may be an arbitrary vector norm for Rᵐ or Rⁿ respectively, e.g. the maximum norm

‖x‖∞ = max_{1≤i≤n} |xi|    (3.3)

or the Euclidean norm

‖x‖₂ = √(x1² + ... + xn²).    (3.4)

In general, continuity of a function is a necessary condition for Lipschitz continuity. For differentiable functions, the following theorem is valid: a function f is Lipschitz continuous if and only if its derivative is bounded, i.e. a constant L ∈ R exists such that

∀x ∈ Rⁿ:  ‖f′(x)‖ ≤ L.    (3.5)

Figure 3.1: Lipschitz-continuous and non-Lipschitz-continuous functions: (a) f(x) = e⁻ˣ – not Lipschitz continuous; (b) not continuous – not Lipschitz continuous; (c) f(x) = 1 + |x| – not differentiable, but Lipschitz continuous.

Example

a. Consider the function f: R → R, f(x) = 1 + |x| (Fig. 3.1(c)). By applying the definition above, we will show that f is Lipschitz continuous. For any x1, x2 ∈ R, the following relations hold (the last transformation is a direct result of the triangle inequality):

|f(x1) − f(x2)| = |(1 + |x1|) − (1 + |x2|)| = ||x1| − |x2|| ≤ |x1 − x2|.    (3.6)

With the definition above, it follows immediately that f is Lipschitz continuous; L = 1 is a Lipschitz constant of f.

b. The function f: R → R, f(x) = e⁻ˣ (Fig. 3.1(a)) is not Lipschitz continuous, because its derivative f′(x) = −e⁻ˣ is not bounded. Note that boundedness of the derivative is a sufficient, but not a necessary condition (see example a).

c. As the function in Figure 3.1(b) is even not continuous, it is not Lipschitz continuous.

3.2 Solvability

We consider an ordinary differential equation of the form

ẋ = f(x)   for t > 0,   x(0) = x0.    (3.7)

If f is Lipschitz continuous, then the differential equation has a unique solution x(t).
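A tiny numerical sketch of the Lipschitz estimate in example a: sample random pairs and check that the difference quotient never exceeds L = 1. This is purely illustrative and not part of the original argument.

```python
import numpy as np

f = lambda x: 1.0 + np.abs(x)

rng = np.random.default_rng(0)
x1 = rng.uniform(-10.0, 10.0, 100000)
x2 = rng.uniform(-10.0, 10.0, 100000)

mask = x1 != x2                                  # avoid division by zero
ratios = np.abs(f(x1[mask]) - f(x2[mask])) / np.abs(x1[mask] - x2[mask])
print(ratios.max())                              # stays below the Lipschitz constant L = 1, cf. (3.6)
```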

3.11) = e − 1. (3. x b.1.2: Unique solution x(t). which can be calculated analytically: dx = e−x . we have seen that f (x) = e−x is not Lipschitz continuous. Nevertheless.2(a)).2(b)).2 Solvability x(t) 1 1 x(t) 0. 3. for t ∈ (ln 2. a. ln 2] .5 0.5 (a) x = e ˙ −x 1 . x(0) = −1 ˙ Figure 3.10) (3. As f (x) = 1 + |x| is Lipschitz continuous (cf.9) (3.1). 3. The calculation of the analytical solution x(t) = 1 − 2e−t et−ln 2 − 1 for t ∈ [0.13) has a unique solution (Fig.5 0 0 0.5 ln 2 1 1. ˙ x(0) = −1 (3.8) has a unique solution (Fig. 29 . In Section 3. the differential equation x = 1 + |x| for t > 0 . t=e −e and therefore x(t) = ln(1 + t) for t ≥ 0 .5 0 −0. x(0) = 0 1. dt dt = ex dx = d(ex ) . x(0) = 0 (3.5 t (b) x = 1 + |x|.12) x x(0) (3. the differential equation x = e−x ˙ for t > 0 .5 t −1 0 0.14) is left as an exercise to the reader. Section 3. ∞) (3.

3.3 Stationary States

In addition to the possible existence of a dynamic, time-dependent solution, the question of whether a system provides one or more stationary (steady) states is of interest. By definition, a stationary state is characterized by the fact that the state variables x ∈ Rⁿ of a system

ẋ = f(x, u)   for t > 0,   x(0) = x0    (3.15), (3.16)

are constant with respect to time:

ẋ = 0.    (3.17)

Therefore, for a given input u, the stationary states x̄ of a system can be calculated by solving the equation

f(x̄, u) = 0.    (3.18)

This condition shows that the stationary states of a system do not depend on initial values; they are a property of the system itself. Nevertheless, in general it depends on the initial values x(0) to which stationary state a system converges and whether it converges at all.

Example

In Section 2.3.1 we have seen that the predator-prey model has two stationary states:

x̄1 = [0, 0]ᵀ,   x̄2 = [c2/b2, (a1 − c1)/b1]ᵀ.    (3.19)

• If x(0) = x̄1 or x(0) = x̄2, then the system will remain in the respective stationary state.
• If x1(0) = 0 and x2(0) > 0, then the system will converge towards x̄1.
• If x2(0) = 0 and x1(0) > 0, then the system will converge towards [∞, 0]ᵀ, but it will never reach this state.
• For all other x(0) with x1(0) > 0, x2(0) > 0, the system does not converge towards a stationary state, but it follows the trajectories shown in Fig. 2.7.

For the calculation of stationary states, the linear and the nonlinear case are to be distinguished:

• linear case (ẋ = Ax + Bu):

A x̄ = −B u.    (3.20)

If the matrix has full rank, rank(A) = n, i.e. det A ≠ 0, then for any input u a solution x̄ such that A x̄ + B u = 0 exists.

• nonlinear case (ẋ = f(x, u)): In the nonlinear case no sufficient conditions exist. As a necessary condition the implicit function theorem

det(∂f/∂x) ≠ 0   if ‖x − x̄‖ < ε    (3.21)

has to be satisfied, where ∂f/∂x is the Jacobian matrix. The Jacobian matrix must have full rank for all x in the neighborhood of the searched stationary state x̄.

3.4 Jacobian Matrix

The Jacobian matrix of a differential equation system is defined in the following way:

∂f/∂x = (∂fi/∂xj), i, j = 1, ..., n.    (3.22)

Its calculation can be done either

• analytically, e.g. manually or by a computer algebra system such as Maple or Mathematica, or
• numerically:

∂fi/∂xj ≈ [fi(..., xj + ∆xj, ...) − fi(..., xj − ∆xj, ...)] / (2 ∆xj).    (3.23)
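A small sketch, for the predator-prey right-hand side with assumed coefficients (not values from the lecture): the stationary state is obtained by handing condition (3.18) to a root-finding routine, and the Jacobian at that point is approximated with the central-difference formula (3.23).

```python
import numpy as np
from scipy.optimize import fsolve

a1, b1, c1, b2, c2 = 1.0, 0.1, 0.2, 0.02, 0.5   # assumed coefficients

def f(x):
    """Right-hand side of the predator-prey model (2.32), (2.33) without catch."""
    return np.array([a1 * x[0] - b1 * x[0] * x[1] - c1 * x[0],
                     b2 * x[0] * x[1] - c2 * x[1]])

def jacobian(f, x, dx=1e-6):
    """Central-difference approximation of the Jacobian matrix, cf. (3.23)."""
    n = len(x)
    J = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = dx
        J[:, j] = (f(x + e) - f(x - e)) / (2.0 * dx)
    return J

x_bar = fsolve(f, [20.0, 5.0])       # stationary state from f(x) = 0, cf. (3.18)
print(x_bar)                         # should match [c2/b2, (a1 - c1)/b1] from (3.19)
print(jacobian(f, x_bar))            # Jacobian evaluated at the stationary state
```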

x (x) = ex + x d x e x=x · (x − x) dx = ex + ex · (x − x) = e · (1 − x + x) .. Fig.28) (3. = f (x) + Note that a linearization of a function requires differentiability of the function at the respective point.25) dx x=x 2 dx2 x=x If the Taylor expansion is truncated after the second (linear) term. we get the a linear approximation of f at x: df flin.27) (3. . a function cannot be linearized at saltuses or breakpoints (cf.x (x) ≈ f (x). Among other reasons.3 Model Analysis The pattern of the Jacobian-Matrix reflects the coupling structure of the differential equation system. 0 × × 0 × f5 ↓ ↓ ∂fi ∂xj x x x x x ∂f ∂x =0 =0 In this example. 3.26) dx x=x If ∆x := x − x is sufficiently small.5 Linearization of Real Functions The Taylor expansion of a function f : R → R at x is1 ∞ f (x) = i=0 1 di f i! dxi x=x · (x − x)i (3. In particular. . (3. (3.24) df 1 d2 f (x − x) + (x − x)2 + . (3.x (x) = f (x) + (x − x) . Take a look at the following example. then flin. . . The linearization of f at x is flin. this is important for the storage as well as for the numerical evaluation of the matrices (sparse numerical methods). You distinguish between dense (much more non-zero than zero entries) and sparse (much more zero entries) matrices. 3.29) . a non-zero entry is marked by ×:  1 2 3 4 5 0 × 0 × 0 f1 0 × × 0 0 f2   = . .3).. Example Let f (x) = ex . the function f1 depends only on x2 and x4 .

1 df .

dx x=x is the value of f = df dx at x. 32 .

31) (3. m ∈ N can be linearized by truncating its Taylor expansion: f lin. The linearization at x = [0. a function f : Rn → Rm with n.x (x) = f (x) + ∂f ∂x x=x (x − x) .4). the linearization is flin.34) Let f (x) = f lin (x) = = 33 . =:∆x (3. in the neighborhood of x = 0 (see Figure 3.33) (3.32) Example x1 + x2 + exp x1 2 . 1]T is 4 − sin x1 0 + 12 + exp 0 1 + exp x1 2x2 + ∆x 4 − sin 0 − cos x1 0 2 2 2 + ∆x .30) Similarly.3: Possibilities of linearization.0 = 1 + x . and in the neighborhood of x = 1.5 Linearization of Real Functions Figure 3. we get flin. For instance.3.1 = ex . (3. 4 −1 0 (3.

If we assume that the system is and remains in the neighborhood of a stationary state x — which in general depends on the stationary input u — then we can replace f by its linearization with respect to x and u at the stationary state: ˙ x = f (x. =:∆u (3.3 Model Analysis 4 f (x ) = e x 3. u) = 0.35) x=x.37) 34 . 3. as x is a constant that d ˙ does not depend on t.0 (x ) = 1+x x Figure 3. we have always f (x. Thus.5 1 0.4: Linearization of f (x) = ex at x = 0 and x = 1.1 (x ) = ex f lin. Furthermore.6 Linearization of a Dynamic System around the Stationary State Consider a dynamic system ˙ x = f (x. u) with x ∈ Rn and u ∈ Rm .u=u (u − u) .36) At a stationary state.u=u ∂f ∂u x=x.5 2 1.5 3 2.5 0 -2 -1 0 1 2 f lin. we have x = dx = dt (∆x + x) = d∆x . we get dt dt d∆x = A∆x + B∆u dt (3. u) + ∂f ∂x (x − x) + =:∆x (3.

 ∂f1 ∂xn     ∂fn  ∂xn x=x.u=u A= ∂f ∂x x=x.  ∂f1 ∂um     ∂fn  ∂um x=x.42) Example Let   1 0 −1 A = 0 1 −1 .38) .  ∂fn ∂u1  ∂f1 ∂u2 ∂fn ∂u2 .43) 35 . (3. and ∂f1  ∂u1  = .  . . we transform equation (3. equation (3.. 0 1 1 (3..u=u (3.39) ..7 Eigenvalues and eigenvectors with ∂f1  ∂x1  = . .41) holds if and only if det(λI − A) = 0 .. (3. λ ∈ C and v ∈ Cn .3. I denotes the n-dimensional identity matrix. (3.u=u B= ∂f ∂u x=x.40) Here.40): (λI − A)v = 0 . In order to calculate the eigenvalues of a matrix.7 Eigenvalues and eigenvectors For a given matrix A ∈ Cn×n . The system matrix A is an n × n-matrix. det(λI − A) is the characteristic polynomial of A. As v = 0. the input matrix B is an n × m-matrix.. v = 0 are called an eigenvalue and an eigenvector of A if the following equation holds: Av = λv .u=u .. 3...  ∂fn ∂x1  ∂f1 ∂x2 ∂fn ∂x2 .41) (3.  .

n = 1 and A = [k]. (3. dt (3.e. Then equation (3.45) A ∈ Rn×n is an n × n-matrix. In order to make statements about the stability of a system.1 One state variable In the simplest case.e. Av = λv .47) (3. (3.37) becomes ˙ x = Ax . we will solve the homogeneous differential equation system (3. (3. i.49) 36 .48) (3. ˙ x(0) = x0 .46) It becomes obvious that the eigenvalues of A provide valuable information about the dynamic behavior of the system. i. x(0) = x0 . i. and λ3 = 1 − i. λ2 = 1 + i. and its roots are λ1 = 1.e. The system is described by the scalar differential equation x = kx . ∆u = 0.37) under the assumption that there are no perturbations of the input variables.3 Model Analysis The characteristic polynomial of A is   λ−1 0 1 λ−1 1  det(λI − A) = det  0 0 −1 λ − 1 = (λ − 1)3 + (λ − 1) = (λ − 1) [ (λ − 1)2 + 1 ] . because d λt ve = λveλt = Aveλt . Furthermore. Note that k is the eigenvalue of A.44) 3. we simplify the notation by writing x instead of ∆ x.8. Assume that v ∈ Cn is an eigenvector of A and that λ is the corresponding eigenvalue. 3.8 Stability A further important issue in the model analysis is the stability of a system after a perturbation ∆x0 of a stationary state x. the system has only one state variable x = x1 . It is evident that x(t) = veλt ˙ is a solution of x = Ax. where n is the number of state variables xi .

3.8 Stability

which has the solution x(t) = x0 ekt . (3.50)

According to the value of k, the following statements about the system’s stability can be made:

ˆ If k < 0, then lim x(t) = 0. Hence, the system will return to its stationary state after a (small) perturbation. The system is stable. ˆ If k > 0 (and x = 0), then |lim x(t)| = ∞. The system will not return to the
t→∞ 0 t→∞

ˆ For k = 0, the system is said to be “on the limit of stability”.
3.8.2 System matrix with real eigenvalues

stationary state; rather, the absolute value of the state variable x grows towards infinity. The system is unstable.

Now let A ∈ Rn×n be a square matrix whose eigenvalues λi are all real, λi ∈ R ∀i = {1, ..., n} . (3.51)

It is known from linear algebra that a matrix T ∈ Rn×n , det(T ) = 0, exists, such that   λ1   .. T −1 AT = diag λi =  (3.52)  = Λ. . λn Λ is diagonal matrix whose entries are the eigenvalues of A. Now, the differential equation (3.45) can be transformed as follows: ˙ x = Ax = A T T −1 x , T
−1

(3.53) (3.54)

˙ x=T

−1

AT T

identity −1

x = ΛT −1 x .

We apply a coordinate transformation to the state vector x and get a new state vector z: z = T −1 x ⇔ x = Tz . (3.55)

In terms of the transformed state vector, equation (3.54) yields the following differential equation: ˙ z = Λz or z i = λ i zi ˙ ∀ i ∈ {1, ..., n} . (3.57) (3.56)

These are n independent differential equations, one for each of the transformed state variables zi . Hence, for each zi the argumentation in Section 3.8.1 is valid. We get the following criteria for the stability of the system:

37

3 Model Analysis

ˆ If all eigenvalues are negative, i.e. λ < 0∀ i ∈ {1, ..., n}, then the system is stable. ˆ If there is at least one positive eigenvalue, i.e. ∃i ∈ {1, ..., n} : λ > 0, then the
i i

system is unstable.

3.8.3 Complex eigenvalues of a 2 × 2 system matrix
Let A ∈ R2×2 be a matrix of the form A= a −b , b a a, b = 0 . (3.58)

The eigenvalues are the roots of the characteristic polynomial 0 = det(A − λI) = (a − λ)2 + b2 , hence λ+ = a + bi , λ− = a − bi . (3.60) (3.59)

By equation (3.47) we get that the two components x1 and x2 of the solution x(t) are linear combinations of eλ and eλ
−t +t

= e(a+bi)t = eat [ cos(bt) + i sin(bt) ] = e(a−bi)t = eat [ cos(bt) − i sin(bt) ] .

(3.61)

(3.62)

Therefore, the real parts of x1 and x2 are oscillations of the frequency b/2π, which are damped (for a < 0) or excited (for a > 0) by the factor eat . For the stability of the system, this means that

ˆ for a < 0, the system is stable, and ˆ for a > 0, the system is unstable.

3.8.4 General case
Let A ∈ Rn×n be a matrix with k ∈ N0 real eigenvalues λreal and l ∈ N0 pairs of conjugate i complex eigenvalues λcom + = aj + bj i, λcom − = aj − bj i. Then a matrix T ∈ Rn×n , j j det(T ) = 0, exists, such that   real λ1   ..   .   real   λk     a1 −b1 −1 −1   (3.63) T AT =   b1 a1     ..   .    al −bl  bl al

38

3.8 Stability

Hence, by a similar consideration as in Section 3.8.2, we can reduce the general case to the one-dimensional case with a real eigenvalue in Section 3.8.1 and the two-dimensional case with complex eigenvalues in Section 3.8.3. ˙ As a condition for the stability of a system x = Ax, we get:

ˆ If all real eigenvalues of A are negative and if the real parts of all complex eigenvalues of A are negative, then the system is stable. ˆ If there is at least one positive real eigenvalue or at least one complex eigenvalue with a positive real part, then the system is unstable. ˆ If A has pairs of conjugate complex eigenvalues a ± bi, then the system shows oscillations with frequency b/2π. Figure 3.5 shows a graphical representation of these criteria. ˙ The solution of x = Ax takes the form of a linear combination
n

x(t) =
i=1

eλi t ci .

(3.64)

The values of the ci depend on the initial conditions x(0) = x0 . The addends eλi t ci are also called the eigenmodes of the system. Example Consider the differential equation system   1 0 −1 ˙ x = Ax = 0 1 −1 x , 0 1 1 x(0) = [1, 2, 0]T . In Section 3.7 we have calculated the eigenvalues λ1 = 1 , λ2 = 1 + i , λ3 = 1 − i . (3.67)

(3.65) (3.66)

As there is at least one eigenvalue with a positive real part – in fact all real parts are positive – the system is unstable. Furthermore, the system shows an oscillation with frequency 2π. Although this is rarely done in practice, we apply the coordinate transformation described above for illustrative purposes. The matrix Λ of the eigenvalues is     λ1 0 0 1 0 0 Λ =  0 Re(λ2 ) − Im(λ2 ) = 0 1 −1 , (3.68) 0 Im(λ2 ) Re(λ2 ) 0 1 1

39

system stable λ λ ©   eigenmode with respect to λ1 trends with growing t to ∞ ⇒ system instable   ¤ ¦ λ  §   λ λ λ ¢   ¡ λ λ λ £ conjugate complex eigenvalues λ1. negative eigenvalues ⇒ aperiodical damping.2 ⇒ damped oscillation.3 Model Analysis ¤ ¥   § ¦ λ λ λ λ λ λ ¡ ©  ¨  real. system stable ¥ conjugate complex eigenvalues λ1. 40 ¨   £ ¢ .5: Influence of the eigenvalues on the stability of a system.2 ⇒ undamped oscillation system instable Figure 3.

we get the solution for x = T z: x1 (t) = z1 (t) + z2 (t) x2 (t) = z2 (t) x3 (t) = z3 (t) 2 3 = et (−1 + 2 cos t) ..75) z3 (t) = c3 e(1+i)t + c4 e(1−i)t . Hint: Solve the linear equation system for c1 . t t (3.81) (3. c4 that consists of the initial conditions for z2 and z3 as well as two further equations resulting from the differential equations z2 = z2 − z3 and z3 = z2 + z3 . z2 (t) = c1 e (1+i)t (3.74) (3.3.65) becomes ˙ z = Λz . 0 0 1 T −1   1 −1 0 = 0 1 0 .83) The calculation of a matrix T fulfilling the given condition is not the subject of this course.. If we define z = T −1 x . 0] .76) (3. ˙ ˙ 41 .72) The solution in terms of z is z1 (t) = −et .77) = −et .79) (3. z(0) = T −1 (3. c3 = −i and thus z1 (t) z2 (t) = e (1+i)t 3 c2 = 1 .80) = 2e cos t . 2. = 2e sin t . t z3 (t) = −ie(1+i)t + ie(1−i)t for the latter transformation. . = 2e cos t . = 2et sin t .82) (3. 0 0 1 (3. c4 = i . T (3. +e (1−i)t (3.73) + c2 e (1−i)t . (3.8 Stability and for2   1 1 0 T = 0 1 0 . The calculation of the coefficients ci is left as an exercise to the reader..69) we have Λ = T −1 AT . (3. we have used Euler’s formula eiφ = cos φ + i sin φ. The coefficients ci are c1 = 1 . Finally. then equation (3.71) x(0) = [−1.70) (3.78) (3.

= com ± |bi | | Im(λi )| (3.87) The minimal and the maximal time characteristic of the entire system are Tmin = min(Titime constant .84) real t 1 |λreal | i ˆ For pairs of imaginary eigenvalues λ Tioscillation = 2π |λimag ± | = 2π |bi | are a measure for the increase or decrease of the corresponding eigenmode eλi imag ± i ci . = ai ± bi i ∈ C with ai .3 Model Analysis 3. if the system is stable. . . bi = 0. you choose as simulation time tsim = 5·Tmax . Otherwise. Tioscillation ) and Tmax = max(Titime constant . 20 5 h ∈ Tmin 42 . You choose the time step length h = ∆t ≤ α · Tmin with the time step index α = 1 1 .9 Time Characteristics The time characteristics of a linearized system can be determined by means of its eigenvalues: ˆ For real eigenvalues λ Titime constant = real i ∈ R.88) As already introduced. . the corresponding eigenmodes show damped or excited oscillations (depending on the sign of ai ) with the time constant 1 1 = com ± |ai | | Re(λi )| (3. Tioscillation ) (3. leading to (5 . tsim depends on an upper limit M : |x(t)| ≤ M or the problem formulation. (3.89) (3.86) and the oscillation period Tioscillation = 2π 2π . = ±bi i ∈ Ri. 20) · h = Tmin .85) imag ± ˆ For complex eigenvalues λ Titime constant = is the oscillation period of the eigenmode eλi com ± i tc i = [ cos(bi t) ± i sin(bi t) ]ci . the time constants (3.

3.10 Problem: Stiff Differential Equations

Example For the example in Section 3.8, we have found the real eigenvalue λ1 = 1 with
time constant T1 =

1 =1 |1|

(3.90)

and the complex eigenvalues λ2,3 = 1 ± i with
time T2,3 constant = oscillation T2,3 =

1 = 1, |1|

(3.91) (3.92)

2π = 2π . |i|

The maximal time characteristic of the system is 2π, and its minimal time characteristic is 1.

3.10 Problem: Stiff Differential Equations
Differential equations with very different time characteristics (time constants, periods, eigenvalues) are called stiff differential equations. Examples for these are given in Fig. 3.6.

ω
  ¢ ¤

§

ω

¡

£

Figure 3.6: Stiff systems: very different time constants. The stiffness measure SM makes use of the comparison of the minimal to the maximal time constant: SM = Tmax > 103 . . . 107 Tmin stiff (3.93)

tsim 5 · Tmax 5 = = SM . Because of h α · Tmin α this, stiff systems require long computation times. This problem can be solved by using integration methods with variable time steps hk (k = 0, 1, . . . ) as it will be described in Chapter 5. The computation time can be considered as ∼

¦

¥

43

3 Model Analysis

3.11 Problem: Discontinuous Right-Hand Side of a Differential Equation
Discontinuous right-hand sides of a differential equation can cause problems with the integration method (examples are given in Fig. 3.7).
¡ £  

Figure 3.7: Discontinuities in u and f . An improvement can be achieved with an integration method which detects the discontinuity and solves the differential equations piecewisely with the corresponding new initial conditions.

44

¢

4 Basic Numerical Concepts
In this chapter, a few basic concepts of numerical mathematics, which are of interest for the simulation techniques, will be reviewed very briefly. For detailed presentations the well-known fundamental books (e.g. Press et al. (1990), Engeln-M¨ller and Reutter u (1993)) and lecture notes (e.g. Dahmen (1994)) are recommended. The requirements on numerical methods can be summarized in these key words:

ˆ robustness (the solution should be found), ˆ reliability (high accuracy), and ˆ efficiency (short computation time).
This chapter deals with major issues regarding possible numerical errors and their effects. Generally, you can distinguish the following kinds of errors:

ˆ model errors, ˆ data errors, ˆ rounding errors, and ˆ method errors.
4.1 Floating Point Numbers
Numbers are saved in computers as floating point numbers in the normalized floating point representation. A well-known example for this representation of real numbers is the scientific notation: 12 = 1.20E1 = 1.20 · 101 , 0.375 = 3.75E -1 = 3.75 · 10−1 . (4.1)

In these examples, number are represented in the form f · 10e , where f is a three-digit number with 1 ≤ f < 10. The exponent e is an integer. This notation can be generalized in several respects. For a particular floating point representation, we have to choose:

ˆ a base b ∈ N, b ≥ 2,
45

34 · 10−1 . As x = rd(x) for x ∈ A.e. (4. and the exponent e is an integer with r ≤ e ≤ R.375 = 1 · 2−2 + 1 · 2−3 = (1 · 20 + 1 · 2−1 ) · 2−2 = 1.2) where the mantissa f is a number with d digits. Note that all remarks are in principle valid for any choice of the base b. ˆ and the number of digits for the representation of a number. and d = 3. the examples in equation (4.3) Therefore. d = 3) 3. For instance. i. In a binary representation. not all real numbers x ∈ R can be written in floating point notation. Therefore.1 · 211 . R = 5. we consider the calculation of the sum 1 + 6. 1 For instance.2 Rounding Errors For any choice of b. 4.33 · 10−1 < 1 < 3. this rounding function fulfils the condition |x − rd(x) machine number | ≤ |x − g| for all g ∈ A. and R. we have (b = 10. binary numbers are very well suited for the representation of numbers in computers: The two digits can be mapped to the two states “voltage” and “no voltage” in an electrical circuit. d ∈ N. numeric calculations with floating point numbers contain rounding / errors. We choose the 3 base b = 10 and the mantissa size d = 3. then we get the following representations for the examples above: 12 = 1 · 23 + 1 · 22 = (1 · 20 + 1 · 2−1 ) · 23 = 1. if we choose b = 10. a floating point number x can be written x = f · be . If we choose the base b = 2. we introduce a rounding function rd : R → A that maps a real number x to the “nearest” floating point number.4) Consequently. a floating point number x∗ represents all real x in within an interval of length ∆x∗ (see Fig. the set A of the floating point numbers f · be is finite. Binary numbers are marked with . In general. r. only the two digits “0” and “1” are needed. 3 (4.4 Basic Numerical Concepts ˆ a minimum exponent r ∈ Z and a maximum exponent R ∈ Z with r ≤ R. d.1 · 2−10 .1). (4. we will use b = 10 in the following discussion. As an example. Note that ∆x∗ depends on the absolute value of x∗ . 0.789. r = −5. for x = 3 . 4. 46 . Due to this limitation. for any x ∈ R. As the reader is assumed to be much more familiar with decimal numbers.1) are valid floating point numbers. 1 ≤ f < b.

. . x∗ )) 1 n 1 n f = f + ∆f.6) (4.79 · 100 .8) (4. Computation errors.789 − 6.13) 47 . The input values xi of a calculation must be represented as floating point numbers x∗ .. + −∗ = ¢ ¡ (4.5) ∆ ≤ ¢ ∆   ∗ In the example..33 · 10−1 + 6.7) b....001. x∗ ) = rd(f (x∗ . a.33 · 10−1 . In general.. f = rd(f ) = 7.1: Representation of a number on the computer.003.12 = 0.123 − 7. x∗ ) is n 1 no valid floating point number and must be rounded: f ∗ (x∗ .4.12) The deviation between a real number x and its floating point representation x∗ is called relative rounding error δx. the precise result of a calculation f (x∗ .9) ∗ £ ¤ ≤ ∗   Example: f = 3. x (4. . ∗ ∗ 0 ¡ (4. 3 3000 ∆x2 = 6. referring to the true value: δx = x − x∗ .12 · 10 .789) = 6. ∆x1 = (4. we have 1 rd( ) = 3.333 = ≈ 0.79 · 100 = 7..11) (4.123 · 100 . ∆f = f − f = 7. 1 1 − 0. Input errors. 3 rd(6.10) (4. i ∗ £ (4..000333. The application of the rounding function causes an input error i ∆xi : xi = x∗ + ∆xi . .79 = −0.2 Rounding Errors ∆ ∗ ∗ ∆ ∗ ∆ ∗ = ( ) Figure 4.

As an example for effects of rounding errors. (−1)k (2k)! =1− ∞ = k=0 (4.14159265 ≈ π and for x = 31. k! 0! 1! 2! 3! x2 x4 x6 + − + ...15) In a computer-program (written e.0 term = 1. Its series expansion is the following: ∞ cos x = k=0 cos(k) (0) k cos 0 0 sin 0 1 cos 0 2 sin 0 3 x = x − x − x + x + .. eps) # initialization cosine = 0. you obtain for x = 3. we will study the calculation of the cosine function.0 k = 0 xx = x*x # loop DO WHILE cosine k term END DO (ABS(term) > ABS(cosine)* eps) = cosine+term = k+2 = (-term*xx)/(k*(k-1)) return cosine If you apply this program.4 Basic Numerical Concepts The machine accuracy ε (resolution of the computer) is an upper limit for the relative rounding error: δx ≤ ε for all x.593623E+04.14) For an ordinary PC.000000E+00 48 . 2! 4! 6! x2k . the value cosine(x) = −1. this value is 10−16 . in Fortran or C) the calculation can be realized for instance in the following way: function cosine(x. (4.4159265 ≈ 10π the value cosine(x) = −8..g.

2. √ For a given input value x. affect the solution. (from the third digit on all numerals get lost) 4.0962506E+12 + term60 = −0. As an introductory example. the approximation ∆f df ≈ ∆x dx (4. Because the size of the mantissa is limited to d = 7.g. let f = x be the precise output value. rounding errors.3 Conditioning The conditioning of a problem indicates how strongly errors in the input data. Therefore.2: Effect of an input error in square root calculation. (4. 60th sum term: term60 = −8.16) f(x) f+∆f f slope: 1 2 x x x+∆x x Figure 4.0962506 · 1012 . Furthermore. As shown in Figure 4. you obtain: term30 = −3.3 Conditioning The cause for this effect are so-called cancellation errors: 30th sum term: term30 = −3.0963317E+12 The solution of the series expansion depends on the summation order. we consider the calculation of the square root of an input value x: f (x) = √ x. an input error ∆x causes an output error ∆f .0000811E+12 −3. e. it is sensible to change the summation order and to calculate the small terms first.4. for small ∆x.17) x=x 49 .1062036 · 107 .

x2 and. an error in each of the input values affects the output.1. . . (4. 4. an output value f can depend on n input values xj . The amplification factor or relative condition number Kj is a measure for the contribution of an error in the corresponding input value. Hence.23) For the multiplication. x2 ) = x1 x2 . K2 = 1. The smaller the absolute values of the amplification factors are. 50 . .18) x=x For the relative errors δx and δf . x2 ) x1 x2 for x1 . the better the conditioning of the problem is. by analogy.25) x1 . (4. This influence can be calculated using a Taylor expansion of f which is truncated after the first-order term: n f (x + ∆x) = f (x) + j=1 ∂f ∂xj ∆xj + . Consequently.22) The relative error δxj of each input value contributes to the relative output error.20) For the relative error δf . a problem is ill-conditioned with respect to an input xj if |Kj | is big (> 1). 2 x 2 (4. 2 1 K= . The conditioning of the arithmetic operations is given in Table. we get the relation ∆f δf = ≈ f ∆x √ 2 x √ x = 1 ∆x 1 · = δx. K1 = ∂f ∂x1 x1 x1 = x2 · =1 f (x1 . we obtain under the conditions f (x) = 0 and ∀j ∈ {1. . . 2 (4. hence ∆f ≈ ∆x df dx ∆x = √ 2 x (4. n} : xj = 0 the expression f (x + ∆x) − f (x) δf = = f (x) n n j=1 ∂f ∂xj x=x Kj ∆xj xj · . x=x (4. We have already determined the amplification factor for the calculation of the square root (equation (4.24) (4. f (x) xj δxj (4.19) In general. we get: f (x1 . x2 = 0. .21) δf = j=1 Kj δxj .4 Basic Numerical Concepts is valid. .19)): 1 δ√x = δx.

x1 − (x2 + ∆x2 ) = 0. x2 > 0. This is called an error damping: x1 = 1 . thus the errors δx1 and/or δx2 become −x −x amplified. a perturbation of the input value x2 by 1% results in a cancellation error of 99% with respect to the correct result! 51 . Especially the case of x1 x2 and δx1 δx2 (and vice versa) yields a relatively small value of δx1 +x2 . in particular.3 Conditioning Operation Addition Subtraction f (x1 . (x1 + ∆x1 ) + x2 = 1011 .4. and square root calculation are not “dangerous” operations. dividing. ∆x2 = 0. ∆x1 = 10 . ⇒ x1 + x2 = 1001 . δx1 −x2 = 1 ⇒ In this example.99 . because the relative errors of the input errors do not propagate into the results in an amplified way.1: Conditioning of some mathematical operations (x1 . 1011 − 1001 ≈ 0. because x1x1 2 and x1x2 2 lie in a range +x +x between 0 and 1. x1 = x2 − x1x2 2 −x 1 −1 Multiplication Division Square root 1 1 1 2 δx1 + δx2 δx1 − δx2 1 2 δx1 x1 > 0 Table 4.01 = 0. the Kj are constant.99 . δx1 +x2 = 1001 ⇒ For substraction is/are x1x1 2 and/or x1x2 2 > 1. Addition is also a well-conditioned operation. δx1 = 10 x2 = 1000 . δx2 = 0.01 .01 . 1 − 0. x2 ) x1 + x2 x1 − x2 x1 · x2 x1 /x2 √ x1 K1 x1 x1 +x2 x1 x1 −x2 K2 x2 x1 +x2 δf x1 x1 +x2 δx1 x1 x1 −x2 δx1 Restrictions + − x2 x1 +x2 δx2 x2 x1 −x2 δx2 x1 . x2 > 0 x1 .01 x2 = 99 . which causes a cancellation error during the calculation of x − y: x1 = 100 . ⇒ x1 − x2 = 1 . x2 = 0) Multiplying. A big amplification occurs if x ≈ y.

4 Basic Numerical Concepts 52 .

1: List of different integration methods available in Simulink (MathWorks.¡   ‡ ø Å ù Æ û ¦ú È ¦Ç üý ÉÊ Ë Ì Í ©  “” ’‘ s t u  qr §¨ Ž ¦ p ¦i ¥ ¦¤ Œ h £ ‹ g ¢ e ˆ Š i‰ ‹Œ pq r s ¡t u ¢£ vw ‘’ ¤ ÐÑ "Ò Ó š› Ô Õ œ ž R Ö× ¡Ÿ   Ø ÙÚÛ ¢£ () 01 ”• ¦§ "– — àá ¨ â © ª ¬« 7 ­® °b¯ ç è é ±² G "F êë ´ 4³ kj H Q R IP ö÷ Ñ Ó dÒ S TU ¦ Ø× ¨ ª…© ‘ ’ • ” ‚“ «¬ Ý ¨Ü ® ­ ¯ Þß e fcd á ¨à ã ⠁ µ ¶´ æ yäå ¸ · „… † "‡ ˆ Šf‰ ç é¨è » h¹º z |¨{ X ` ¨Y x yvw VW pq ± ° ²³ u t U s¨r klm o¨n P R Q TS GH I ÙÚÛ XY wx yz { | } ~  € a b` § ji x ‚y € q rs ¿ tuv WV ü ÿ‚ý þ úû ¨Ž Ô hÕ Ö  g hf ¥¨¤ e vw B D¨C F&E ù ¨ø @A Š Œ¨‹ ™ d˜ u ¨t  ! #¨" % &$ — s lm o pn ¼ ½» º ¡ £¨¢ 7 9 8   Ð –ÎÏ ˆ‰ q r   yŸ • –“” 6 ì "í î ¶bµ ¹ R·¸ ó ò õ¨ô p i   g h i ñ d𠝞 ‡† h 5 4  © Ë Í¨Ì  ’¨‘ f g îï „…‚ƒ f È Ê¨É À ÂÁ   š œ¨› ‡ ‰¨ˆ 1 3¨2 ¥ §¨¦ "d e @89 ìí € cde Ç ¤Æ ™ ¤˜ † ¤… 0¤) £ ¤¢ ˜™ ¾¿ ë ê Å — ~¨} „ ( ¡ 65 ½ ¨¼ b ¨a 3 42 Ä – ƒ '   ¥¤ Ý ‘ “ b’ Ü ‰ ' "‡ ˆ & "% …† #$ "‚ƒ „ ! " —™"˜ xy €     –• "Î Ï wv  f h ig 5. x. Simulink (MathWorks.1). 2002) often have accommodated various numerical integration procedures for selection (compare Fig. x(t0 ) = x0 . (5.g. Ž ¡  ¡ÿ þ  © ” ©“ ¥¦ ¨©§  –— ƒ„ ™˜ š › ¡œ ¡ˆ ‰  žŸ ‘’ ¡ s  “ ”• – ˜— "ã ä ¡x y • €‚ ©  … † ‡ ¡   !" $¡# & % '( 0) åæ "Þ ß 0 = F (x.2) 53 . In this chapter the basics of numerical integration will be discussed and different integration methods will be introduced.3. 2002).1. ˙   ¢£ ¥x¤ § s¦ ª ¨© fd ™ e 1 32 46©5 7 ±² k l ¡m n op AC¡B D E F H¡G õ ½ö ÷ ÀÁ Ä fàò "ó ô ¾ ï ð ñ A B DEC f¬ « ­ gh ® ¯° 9©8 @ ¡i j ´³ · ¶ †©µ r sq tu wxv z sy {| } ¡~  € ‚ …†©ƒ „ a c db X `©Y ‚ ƒ IP Q S¡R U T VW The simulation of processes which are described by differential equations (see the vehicle example in Section 1. 5.1 Problem Definition and Terminology 5. Simulation tools such as e. t) ∀ t ≥ t0 .1) (5.3) requires methods for numerical integration.1 Principles of Numerical Integration 5 Numerical Integration of Ordinary Differential Equations The general (implicit) form of a scalar ordinary differential equation of first order is Figure 5.

k = 1. it is possible to transform a DE in implicit form into the explicit form. and therefore the set {tk |k = 1. (5. 54 . Rather.. N such that we get an equidistant time grid. tk+1 = tk + hk . The time step length between tk and tk+1 is denoted hk .5 Numerical Integration of Ordinary Differential Equations Example 0 = 2x + x ∀ t ≥ 0 .. . We will denote this exact solution as x(t).2 we have discussed a sufficient criterion for the existence of a unique solution.5) (5.2) are fulfilled.8) (5. t) ∀ t ≥ t0 . ˙ x(t0 ) = x0 .10) To simplify the following discussion. Example The example above can easily be transformed into x 2 x(0) = 1 . Each tk represents a discrete time point. ˙ x(0) = 1 . Sometimes.. x(t) cannot be calculated analytically. i. we will assume a constant time step length h = hk . Note that in this example.1) as well as the initial condition (5.. there is no explicit dependency of F on t.. More precisely. (5.3) (5. xk ) such that xk ≈ x(tk ) (5.4) We will restrict our presentation to differential equations that are given in the explicit form x = f (x.6) Solving a differential equation (DE) means to find a function x : R → R such that the differential equation (5. . (5.7) (5. we must apply a numerical integration method in order to get an approximation of x(t).e. In Section 3. a numerical method yields a set of N pairs (tk .9) for all k. x=− ˙ ∀t ≥ 0. In general. N } is called a discrete time grid..

13) and the initial condition x(t0 ) = x0 into equation (5.12) x(t1 ) = x(t0 ) + f fk ∆xk+1 h · fk tk tk+1 tk f dt = h · fk + ∆xk+1 tk+1 t Figure 5. t0 (5.2 A Simple Integration Method An integration of equation (5. t1 ) .2: t0 +h f (x. t0 (5. t0 ) .1.2 – with the product h · f (x0 .12) is fulfilled by the solution x(t).5. an approximation x2 for x(t2 ) is calculated: x2 = x1 + h · f (x1 .13) The error of the approximation corresponds to the area between the curve and the rectangle.12). t0 ) – represented by the rectangle in Figure 5. By inserting equation (5. but it cannot be used to calculate x(t1 ) as an exact evaluation of the integral is not possible. we get an approximation x1 for x(t1 ): x1 = x0 + h · f (x0 . A simple strategy is to replace the integral – represented by the dashed area in Figure 5. t)dτ ≈ h · f (x0 . t)dτ . t0 ) . (5.14) 55 . In a similar way.1 Principles of Numerical Integration 5.11) (5.5) from t0 to t1 = t0 + h yields t0 +h x(t1 ) − x(t0 ) = t0 t0 +h f (x. t)dτ .2: Illustration of the rectangle rule. Equation (5. Numerical integration methods are based on the idea to approximate the integral term. f (x.15) (5.

we get the integration rule xk+1 = xk + h · f (xk . can be approximated (tangent rule) f (xk . tk ) = x(tk ) ≈ ˙ hence x(tk+1 ) ≈ x(tk ) + hf (xk . in addition to the error of the integral approximation. tk ) . h (5.t t. Section 5. (5.19) 56 . is xk+1 = xk + h · f (xk . } } + +   + ⋅ Figure 5.3): The derivative of x(t) w. evaluated at tk .r. The general form of this integration method. tk ) . we use the approximation x1 for x(t2 ).3: Illustration of the tangent rule. tk ) . known as the explicit Euler method (cf.1). Therefore.16) The accuracy of each xk depends on the accuracy of its predecessor and of the accuracy of the approximation for the integral.2. (5. ˙ 2 x(0) = 1 .20) (5.5 Numerical Integration of Ordinary Differential Equations Note that for the calculation of x2 ≈ x(t2 ). (5. Example We apply the explicit Euler method to solve the differential equation x x=− .17) By replacing the exact values of x(tk ) and x(tk+1 ) with the corresponding approximations. an error of x1 influences the accuracy of x2 . The explicit Euler method can also be deduced by means of an alternative consideration (see Figure 5.18) x(tk+1 ) − x(tk ) .

24) h  f (x0 .23) 1. and ˆ extrapolation methods (which are not subject of the lecture). ∆(x0 . ˆ explicit and ˆ implicit methods.22) (5. t0 . These methods can be further divided into 5.0 1.0 exact solution e−t/2 0.5 0 0 0.5 explicit Euler with h = 0.3 Consistency The difference quotient of the exact solution x(t) between t0 and t1 = t0 + h (see Figure 5.0) ≈ x2 = x1 = 0.5.0 Figure 5. The application of equation (5.5) is   x(t0 + h) − x0 if h = 0 .5 1.21) 2 2 If we choose the step size h = 0. t) = − x . h) = (5. (5.5) ≈ x1 = x0 = 0.5 2. Numerical integration methods can be divided into ˆ one-step methods.16) to this problem leads 2 to xk h xk+1 = xk + hf (xk . tk ) = xk − h = (1 − )xk . t0 ) if h = 0 .e.4): 3 x(t0 + 1 · h) = x(0. 57 .4: Illustration of the explicit Euler method.5625 .1.5.1 Principles of Numerical Integration i. t0 = 0.75 4 3 x(t0 + 2 · h) = x(1. ˆ multiple-step methods. x0 = 1. we get the following approximations of the correct result x(t) = e−t/2 for t1 = t0 + 1 · h and t2 = t0 + 2 · h (see Figure 5. Numerical integration methods differ in how the area/gradient is determined. f (x. 4 (5.

h (5. t0 ) . h) = f (x0 .25) Φ depends on the integration method by which x1 is calculated. h) = ∆(x0 . we demand that the local discretization error is small for a small step size h: h→0 lim τ (x0 . For a reasonable integration method. h (5. we get Φ(x0 . (5. t0 . h) = x1 (x0 .5 Numerical Integration of Ordinary Differential Equations x x0 = x(t0 ) ∆ x(t1 ) x1 t0 t1 t Φ x(t) Figure 5. the method is consistent. (5.27) can be rewritten h→0 lim Φ(x0 . h) . t0 ) − x0 = f (x0 .29) As limh→0 Φ(x0 .28) Example We show that the explicit Euler method is consistent. h) = x0 + hf (x0 . By inserting equation (5. t0 . h) = f (x0 . t0 .5: Difference quotient of exact solution and approximation. t0 . As limh→0 ∆(x0 . 58 . we say that the method is consistent. t0 . t0 . The local discretization error is defined as τ (x0 . t0 ). h) − x0 .16) into (5. t0 ).25). t0 . h) − Φ(x0 . h) = 0 .26) τ is a measure for the deviation of the numerical approximation x1 from the exact solution x(t1 ). t0 ) . this is why we have defined ∆ and Φ for k = 0 – we know the exact value x0 . h) = f (x0 .27) If this equation holds for an integration method. equation (5. whereas the difference quotient of a numerical approximation of x(t) is given by Φ(x0 . (5. t0 . t0 . The term local error emphasizes that τ describes the error of one integration step under the precondition that the calculation of xk+1 is based on an exact value xk = x(tk ). t0 .

h) ∈ O(h) . K. m m k = 0. t0 . . we define the order of consistency p as the error order of the local discretization error: τ (x0 . wi = 1. t0 ) . h) = 1 [x1 − x0 ] h 1 = [x0 + hf (x0 . t0 . Hereby. you determine an average gradient s(xk . Therefore. (5. The general one-step method reads: xk+1 = xk + h s(xk .31) = f (x0 . i. we say that the integration method itself is of order p. . xk+1 . tk+1 ].2 One-Step Methods Consistency is a condition for a reasonable integration method. all admissible f . with i=1 (5. tk . 59 .2 One-Step Methods One-step methods are also called Runge-Kutta methods. t0 . The explicit Euler method is of order 1. h) over an interval [tk . t0 ) + O(h) .34) (5. h) = ∆(x0 . xk+1 . (5. t0 . but the definition above does not allow for the comparison of different integration methods. . tk . h) − Φ(x0 . h) ∈ O(hp ) .e. . and τ (x0 . we have ∆(x0 . Φ(x0 . Example For the explicit Euler method.30) If an integration method is of consistency order p for all problems. and t0 . x0 . h).32) (5.5. t0 ) − x0 ] h = f (x0 . h) = 1 [ x(t0 + h) − x0 ] h 1 dx = x(t0 ) + · h + O(h2 ) − x0 h dt t=t0 = dx dt t=t0 + O(h) (5. t0 . t0 .33) 5.35) xk+1 = xk + h i=1 wi si .

ˆ Accuracy: For a high accuracy small step lengths are required.36)   +   # + "        § ! ¡ ¤ + +  ¦   £ Figure 5. The characteristics of the explicit Euler method are: ˆ Efficiency: Quick calculation of one step.5 Numerical Integration of Ordinary Differential Equations x 4 3 2 1 h=6 h=1 0 2 −1 −2 4 6 8 10 h=4 12 t Figure 5. 5. This leads to stability problems for stiff systems.7) is the simplest integration method.1 Explicit Euler Method (Euler Forward Method) The explicit Euler method (see Fig.6: Explicit Euler method: Influence of step size h. ¨ (5. 60  ¢ ¥ © . 5.7: Explicit Euler method.2. The algorithmic equation reads: xk+1 = xk + h · fk .

2 One-Step Methods 5.  ¢ ¥ © (5.38) has to be solved. Therefore an iteration or an approximation is required. that fk+1 = f (xk+1 ) . This increases the computational effort significantly. The characteristics of an implicit Euler method are: (5. 5.2. 5.8) uses the function value at the digit k + 1 instead of the function value at the digit k.5.3 Semi-Implicit Euler Method The implicit Euler method. ˆ Accuracy: For a high accuracy small step lengths are required. xk+1 = xk + hf (xk+1 ) . However. Therefore the algorithmic equation reads: xk+1 = xk + h · fk+1 . The semi-implicit Euler method is based on the idea to approximate f (xk+1 ) be means of a linearization of f at xk : f (xk+1 ) = f (xk ) + ∂f ∂x x=xk ∆xk + O( ∆xk 2 ) . the stability of the algorithm is ensured. The method is called implicit due to the fact. (5.40) 61 .39) requires the solution of a non-linear equation system for xk+1 because f (xk+1 ) is unknown.8: Implicit Euler method. has to be solved for xk+1 . for which the nonlinear equation 0 = xk+1 − xk − h · f (xk+1 ).37)   +   # + "        ¡ § ! ¤ + +  ¦   £ Figure 5. ¨ (5.2.2 Implicit Euler Method (Euler Backward Method) The implicit Euler method (see Fig.38) ˆ Efficiency: For every step the nonlinear equation (5.

= xk + h 2 (∗) ¢£ &  (5. tk ). 5. Thus. 2 (5. equation (5.9).39) becomes xk+1 = xk + h f (xk ) + ∂f ∂x · (xk+1 − xk ) . which is also called Runge-Kutta method of second order. tk + h).43) In the subsequent corrector step the gradient is averaged with this value and the initial value: xk+1 fk + f (xp ) k+1 .44) s1 = f (xk . (5. h xk+1 = xk + (s1 + s2 ).2.4 Heun’s Method Heun’s Method (see Fig.     + = ( + ) ! "  ¥¦  ¡  ¤ ¨ ©§ + + #$ xp = xk + hfk .5 Numerical Integration of Ordinary Differential Equations where ∆xk = xk+1 − xk .41) x=xk This is a linear equation for xk+1 .42) 5. s2 = f (xk + hs1 . 5.46) (5.47) The term (∗) corresponds to an average of two gradients (compare Fig. In a predictor step you obtain a first approximation of the value xp : k+1 (5. It is a two-stage one-step method.9) is a so-called predictor-corrector method. 62 .45) (5. (5. Its solution is xk+1 = xk + h I −h ∂f ∂x −1 x=xk · f (xk ) . k+1 % Figure 5.9: Heun’s method.

53) As an example this is shown for two one-step methods: 63 .10). §  + = ( + ⋅ + ⋅  + ) "    % $ # ¨ © ¦ ¥  ¡  ¤  ! +   £ Figure 5. tk + ) 2 2 h h s3 = f (xk + s2 .50) (5.51) (5.2. 5.5.2 One-Step Methods 5.48) (5. which are determined at specific nodes. s1 = f (xk . xk+1 .2. it must be ensured that the method converges for an infinitely small step length towards the solution of the differential equation: h→0 lim xk+1 − xk ! = lim s(xk . h) = f (x(tk ). tk ) h h s2 = f (xk + s1 .52) Through the averaging the accuracy of the approximation (of the gradient) is strongly increased. tk .6 Consistency Condition for One-Step Methods In order to guarantee the consistency of an integration procedure.10: Runge-Kutta method of fourth order.49) (5. tk + ) 2 2 s4 = f (xk + hs3 .5 Runge-Kutta Method of Fourth Order This method averages four gradients (see Fig. Therefore it is called a four-stage one-step method. tk ) = x(tk ) ˙ h→0 h (5. 5. tk + h) h xk+1 = xk + (s1 + 2s2 + 2s3 + s4 ) 6 averaged gradients ¢   (5.

The general form of a multiple-step method reads: m m xk+1 = i=1 αi xk+1−i + h i=0 βi fk+1−i k = m − 1. Hereby you proceed in the following way (see Fig. αi = 0: Adams-Moulton method) The following points have to be noted: 64 ( ) 0 % 21 (5. £   − − ¤ ¥ +  © Figure 5. α1 = 1.g. . . . fk . α1 = 1. . n ≥ 1 for the determination of xk+1 .11): a. .g. c. . h) = lim h→0 f (xk ) + f (xk + hfk ) 2 = f (x(tk )) = x(tk ) ˙ (5. αi = 0: Adams-Bashforth method) β0 = 0: implicit multiple-step method (e. 5. fk−n+1 .5 Numerical Integration of Ordinary Differential Equations a) Heun h→0 lim s(xk .55) 5. Calculation of xk+1 = xk + Fp . m.54) b) Runge-Kutta of fourth order h→0 lim 1 1 [s1 + 2s2 + 2s3 + s4 ] = [fk + 2fk + 2fk + fk ] = x(tk ) ˙ 6 6 (5. Interpolation of a polynomial pn (t) of order n through fk−n . tk+1 b. ! "  ¨ #$ Hereby you distinguish again between implicit and explicit methods: β0 = 0: explicit multiple-step method (e.3 Multiple-Step Methods Multiple-step methods use an interpolation polynomial for x(t) and/or x(t) = f (x) over ˙ more than one interval [tk−n .11: Multiple-step method. Determination of the area F p = tk ¦§ pn (t) dt. .56) = ¡¢ ()  &' +      . tk+1 ].

the Adams-Moulton-Bashforth method (3.4 Step Length Control ˆ k = 0. is stored interim. you enlarge the step length. . E. m − 1 requires a start-up calculation. . Moreover.g. An initial value can be determined with the help of an explicit method (predictor). if fk−1 . . 5.g.58) 5. this increases significantly the organizational effort.5. with a one-step method with the same accuracy. 3 3 3 The consistency of this method is ensured: 1 1 2 xk+1 − xk 3 xk − 3 xk−1 + 3 hfk+1 lim = lim h→0 h→0 h h xk − xk−1 2 = lim + fk+1 h→0 3h 3 1 2 = x(tk ) + x(tk ) ˙ ˙ 3 3 = x(tk ) ˙ (5. e. you reduce the step length.. However. the local error of xk in calculation of xk are estimated with two methods of different accuracy. If the error is (too) small. ˆ The computational effort for explicit multi-step methods is only 1 ∗ f -evaluation k per step. generally fk+1 has to be determined iteratively. fk−2 . You proceed in the following way: At ∼ first. . .4 Step Length Control Methods with step length control pursue the goal of obtaining the desired accuracy within the least possible number of integration steps. implicit method): 4 1 2 xk+1 = xk − xk−1 + hfk+1 .3. . 65 . As an example we will investigate the Gear-Method of second order (β = 0. If this error is big.1 Predictor-Corrector Method For the use of implicit multiple-step methods./4. order) comprises ˆ the Adams-Bashforth method of third order (polynomial of second order) as a predicator and ˆ the Adams-Moulton method of fourth order (polynomial of third order) as a corrector. .57) (5. with multiple-step methods the order m can be adapted.

5 Numerical Integration of Ordinary Differential Equations 66 .

3) (6. KAC xA − xC = 0. in algebraic equation systems no temporal dependencies of the state variables appear. (6. In this chapter.5) . ! ! (6. the total mole balance can be set up: xA + xB + xC = 1. Thus. we have three equations (6.2) Solely component A exists at time t0 . 6.6) In terms of the mole fractions. xC have to be found. rAC = k3 cA − k4 cB = 0. (6. The mole fractions xA .7) 67 .1) (6.(6.1 Linear Equation Systems An example from chemistry serves us as an introductory example.6 Algebraic Equation Systems In contrast to differential equation systems.4) The equilibrium constants Ki are defined as: KAB = KAC xB xA xC = xA or or KAB xA − xB = 0. In the equilibrium the reaction rates ri are dependent on the reaction rate constants ki and the concentrations ci : rAB = k1 cA − k2 cB = 0. linear and nonlinear equation systems as well as the corresponding solution approaches will be introduced. We consider two parallel equilibrium reactions of first order describing the production of the components B and C from the component A: A A k1 k2 k3 k4 B C (6.5) (6. xB .7) for the three unknown variables. which are present in an equilibrium.

1. KAC 0 −1   xA x = xB  .1 Solution Methods for Linear Equation Systems The solution of the linear equation system (6. xC    1 b = 0 . Backward substitution Rx = z ¢ = ¡   ¢ 68 ¡ .6 Algebraic Equation Systems The general form of a linear equation system can be written as: Ax = b (6. or the determinant det A = 0. 6. The equation system is solvable.10) = © ¨   § ¤ ¡£ ¥ ¡ ¢ ¦ =  2.9) This equation system is solvable because det A = 1 + KAC + KAB = 0. This comprises three steps: 1. In our example this means:  1 1 1 A = KAB −1 0  . b ∈ Rn . if the rank A = n. x can be calculated by means of the Gauß elimination.8) with A : (n×n)-matrix and x.8) can be represented as follows: x = A−1 b. 0 (6. Triangulation of A = L · R (6. Forward substitution Lz = b ⇒ z =   3.

6.13 b) we obtain: (1 + KAB )xB + (6.15) This solution is then inserted into (6. 2000)). 6. ! ! (6. rACD = k3 cA − k4 cC cD = 0. 1 + KAB + KAC (a) (b) (c) (6. which are often implemented into standard subroutines of commercial software (e.g.2 Nonlinear Equation Systems In our example this means: 1 KAB KAC 1 −1 0 1 0 −1 1 ·(−KAB ) ·(−KAC ) 0 0 (6.12) (6.19) (6.11) 1 1 1 1 0 −(1 + KAB ) −KAB −KAB ·(−KAC ) 0 −KAC −(1 + KAC ) −KAC ·(1 + KAB ) 1 1 1 1 0 −(1 + KAB ) −KAB −KAB 0 0 −(1 + KAB + KAC ) −KAC (6.16) The Gauß elimination is considered to be the most important algorithm in the numerical mathematics.17) (6. 1 + KAB + KAC KAB · KAC = KAB .2 Nonlinear Equation Systems We consider again the example of Section 6.13 a): xA = 1 .14) By inserting the above into (6.1 and enlarge it in a way such that in addition to the components B and C a new component D is involved in the reactions: k1 A A k2 k3 B+D C +D (6.13 c) gives: xc = KAC . There are many numerically “robust” modifications. 1 + KAB + KAC (6. MATLAB (MathWorks.13) (6.18) k4 In the equilibrium the following equations hold: rABD = k1 cA − k2 cB cD = 0.20) 69 .

2. xD have to be found.(6.21) . .   . the molar fractions xA . xB . also a necessary condition. xB . The general form of a nonlinear equation can be written as:     f1 (x1 . . .28) (6. The total mole balance for the four components is: xA + xB + xC + xD = 1. An alternative formulation for the solvability of the nonlinear equation system. xB . f2 (xA . (6. if exactly one variable xj can be assigned to every equation fi = 0.24) (6. xB . xn )  0     or f (x) = 0. = 0. xD ) = xA + xB + xC + xD − 1= 0. . reads: The nonlinear equation system is only solvable. x2 . xD ) = xB + xC − xD f3 (xA .   = . .1 Solvability of the Nonlinear Equation System The necessary condition for the solvability of a nonlinear equation system is: det ∂f ∂x =0 ∀ x. . . xC . the mole balance of the component D is given by: xD = xB + xC . KACD xA = xC xD (6. xB .22) (6. . .27) (6. xC . xC .26) (6.6 Algebraic Equation Systems Again. = 0. x2 . which appear in the equilibrium. xD ) = KABD xA − xB xD f4 (xA . so that this equation can be used for calculating the assigned variable under the pre-requisite that all other variables are known. fn (x1 . .29) 0 (6. . xC . . xn ) In our example this means: f1 (xA . xC . Furthermore. xn ) 0  f2 (x1 . x2 .24) for the four unknown mole fractions.25) 6. (6. .23) (6. .21) you obtain four equations (6. xD ) = KACD xA − xC xD = 0. the problem is that in many cases the Jacobian matrix can only determined with high computational effort. Together with the two nonlinear equations for the equilibrium constant Ki : KABD = KACD xB xD xA xC xD = xA or or KABD xA = xB xD .30) Jacobian matrix Here. 70 . . .

1 Newton’s Method for Scalar Equations First.1.7 (x4 − 5) − 8 2 =0 =0 =0 (6. dx (6. In the incidence matrix.31) (6. (6.33) (6. An iteration procedure is sought to determine the zero(s) x∗ satisfying f (x∗ ) = 0.34) (6. one expands a Taylor series for f (xi+1 ) = 0 at xi .37) This method is called Newton’s method for scalar equations and is graphically illustrated in Fig 6.2. we study the case of a scalar equation f (x). . In order to achieve this.32) (6.2. 1.2 Nonlinear Equation Systems For an illustration of this necessary condition.36) You truncate it after the first-order terms: xi+1 = xi − f (xi ) fx (xi ) i = 0.2 Solution Methods for Nonlinear Equation Systems 6. it is indicated which equation fi depends on which variable xi (marked by ×).6. as in our example. x1 f1 f2 f3 f4 f5 × × × × × x2 x3 x4 × × × × x5 One then tries to assign to each equation exactly one variable as its solution (marked by ). 6.35) x4 − 3x1 + 6 = 0 x1 x3 − x5 + 4 = 0 We make use of the incidence matrix which helps us to determine information on the structural regularity of the problem. 71 . with i as the iteration index: f (xi+1 ) = f (xi ) + df i i+1 (x )(x − xi ) + · · · = 0. . . we consider the following example: f1 : f2 : f3 : f4 : f5 : x1 + x4 − 10 2 x2 x3 x4 − x5 − 6 x1 x1. If this succeeds.2. the structural regularity is given.

. . starting with a sensible initial value x0 . . . ∂x  . ∂x2 . . . . . ∂x2  ∂f1 i (x ) ∂xn  ∂f2 i  (x )  ∂xn  . 1. . .   ∂fn (xi ) ∂x1  ∂f1 i (x ) . is referred to as Newton-Raphson method: ∂f i (x )∆xi = −f (xi ).  ∂fn i  (x ) ∂xn (6. ∂x (6.  . .6 Algebraic Equation Systems f(x) x* x 3 x2 x1 x Figure 6. . . as shown below.38) ∂f1 i  ∂x1 (x )   ∂f2 i (x )  ∂f i (x ) =  ∂x1  . . . .40) This is a linear equation system of the form A∆xi = b and contains n linear equations for ∆xi .41) 72 .2 Newton-Raphson Method for Equation Systems In the vector case f (x) = 0 the procedure is similar to the scalar case. i0 < imax (6.2. ∂fn i (x ) . After their solution (see Section 6. ∂x2 ∂f2 i (x ) . an update (called Newton step or Newton correction) of xi+1 is carried out: xi+1 = xi + ∆xi i = 0.1: Newton’s method for scalar equations.39) The solution procedure for the determination of the zero vector x.1). 6. One also expands a Taylor series for f (xi+1 ) = 0 for the vector xi with the iteration index i: f (xi+1 ) = f (xi ) + ∂f i (x ) ∂x Jacobian matrix (xi+1 − xi ) + · · · = 0 ∆xi (6. .2.

51) With this solution you carry out the second iteration step f (x1 ) = . f2 (x ) = 0.45) (6.50) As new approximation you obtain: x1 = x0 + ∆x0 = 2 1 4 . (6.44) f1 (x1 .46) The Jacobian matrix reads: ∂f = ∂x 3x2 − 2x1 x2 −3x2 − x1 1 2 .43) (6. we want to consider the first steps of the solution procedure in a simple example. (6.49) . (6. 2x1 2x2 (6. (6. The example contains two nonlinear equations: x3 − x3 − x2 x2 1 2 1 x2 1 + x2 2 = = 7. (6. 0 ∂f 0 (x ) = ∂x 12 −4 . 73 . . 4 0 (6.47) As initial value we choose x0 = x0 1 x0 2 = 2 .42) In order to come to a better understanding of the Newton-Raphson method. 4.6. 0.2 Nonlinear Equation Systems The solution of the linear equation system and the following correction continue until either the maximum index imax or the absolute and/or relative accuracy ε is reached: ∆xi0 < εabs + εrel xi0 .48) The linear equation to be solved is: 12 −4 4 0 with the solution ∆x0 = 0 1 4 ∆x0 1 ∆x0 2 = 1 0 (6. x2 ) = x2 1 − x2 2 −4 = = 0. . x2 ) = x3 − x3 − x2 x2 − 7 1 2 1 f1 (x1 . 0 Inserted for the first iteration step it results in: f1 (x0 ) = 1.

that the search direction leads into an area from which the method can no longer reach the roots. 74 .2) The zero x∗ can 2 only be found. the method cannot converge to the root. Difficult Problem (see Fig 6. because the reciprocal values of the gradient of the function tend towards infinity. if the initial value x0 is chosen on the right of the pole. If the initial value is chosen outside this range.2. x2 . Convergence Range in the Case of Multiple Solutions (see Fig 6. This is due to the fact. the solution converges towards one of the two outer zeros.6 Algebraic Equation Systems 6. The Newton-Raphson method is basically not suitable for such functions. The method leads then to the proximity of the extreme value and fails there.3) The zero x∗ of the function can only be found. if the initial value x0 is chosen between the two extreme values of the function.2. In the following we study graphical interpretations of various problems. that the convergence range of the Newton-Raphson method is problem specific. The method has to be aborted due to numerical singularity.4) In this example. f(x) x* 1 x* 2 x* 3 x Figure 6.2: Convergence range in the case of multiple solutions. divergence appears in spite of an initial value x0 in the proximity of the roots x1 . Divergence and Singularity (see Fig 6.3 Convergence Problems of the Newton-Raphson Method Convergence problems within the Newton-Raphson method can occur for various reasons. If it is chosen on the left-hand side. That means.

75 .3: Divergence and singularity.6. Figure 6.2 Nonlinear Equation Systems Figure 6.4: Difficult problem.

6 Algebraic Equation Systems 76 .

u2 . which were both introduced in the previous chapters. R2 ). Ohm’s law and ¨   77 . The introductory example comes from the area of electrical engineering. 7.1). R2 . 7. The electrical currents i0 . a combination of both can also appear: the so-called differentialalgebraic (equation) systems (DAE systems). uC . il . The voltage source U0 as well as R1 . L and C are assumed to be known. iC and the voltages u1 .1: Electrical circuit example 1. £ ¢ $ # "  ¥  ¤ !   + %   § ¦ &   ¡   © Figure 7. i2 . ul are to be found. i1 .7 Differential-Algebraic Systems Besides differential equation systems and algebraic systems.1 Depiction of Differential-Algebraic Systems Example 1 We observe an electrical circuit consisting of two resistors (R1 . a coil with the inductivity L and a capacitor with the capacity C (see Fig.

d) f ∈ Rn−k .1) (7.7 Differential-Algebraic Systems Kirchhoff’s loop rule yields the model equations: u1 = R1 · i1 u2 = R2 · i2 diL uL = L · dt duC iC = C · dt U0 = u1 + uc uC = u2 uL = u1 + u2 i0 = i1 + iL i1 = i2 + iC (7.1. which consists of two differential equations (7. 7.13) (7. z.4).14) (7. g ∈ Rk (7. (7. d).8) (7.14) comprises a system of (non-) linear algebraic equations.13) comprises a system of ordinary differential equations.10) state vector. (7. (7. For the representation of differential-algebraic systems several different characteristic classes can be distinguished. and seven algebraic equations. it is called a differential-algebraic system.12) The system can be transformed to: ˙ 0 = f (z.4) (7. vector of the input quantities and parameter.3).7) (7. Therefore. z.11) (7.5) (7. 78 .9) This leads to an equation system.1 General Nonlinear Implicit Form The general implicit form of a differential-algebraic system is: ˙ 0 = F (z.6) (7.3) (7. d) 0 = g(z. with z ∈ Rn : d∈R : d (7.2) (7.

15) (7.g. Many control engineering and system theoretical concepts are based on systems of the form (7.g. i2 . uL ] . linear differential-algebraic systems with constant coefficients play a special role: ˙ M z = Az + Bd.1. in chemical engineering: ˙ x = f (x. d). uC ]T . (7. y : vector of the algebraic state variable. y. u2 . d). x : vector of the differential state variable. through linearization of nonlinear implicit or explicit differentialalgebraic systems. y = [i0 . u1 .17) (7. d).1. which are not differentiated.1.18) (7.16) y consists of variables. y. z = x.20) You obtain such models e.3 Linear Differential-Algebraic System In addition to nonlinear differential-algebraic systems.1 Depiction of Differential-Algebraic Systems 7.1 and occurs often e.19) 7. Example 1 is an example for such an explicit DAE system. (7. i1 . y. Therefore. This can be easily seen after the following transformations: diL 1 = uL dt L 1 duC = iC dt C g1 : g2 : g3 : g4 : g5 : g6 : g7 : with x = [iL . 79 .2 Explicit Differential-Algebraic System The explicit differential-algebraic system is a special case of 7. y. 0 = g(x. T T      ˙ x = f (x. (7.20). R1 .7. L. R2 . C] . iC . d = [U0 . the vector z can be divided into differential and algebraic variables. d). y T . 0 = u1 − R1 · i1 0 = u2 − R2 · i2 0 = U0 − u1 − uC 0 = uC − u2 0 = uL − u1 − u2 0 = i0 − i1 − iL 0 = i1 − i2 − iC                          0 = g(x.

(7. In this example. This is not always the case. which was already introduced in Section 6.14) a solution exists. 7. this means that the rank [gy ] = k or ∂y necessary condition is given by the analysis of the structural rank of the incidence matrix. For the differential-algebraic system (7. Example 2 In example 2 a third resistor R3 is introduced replacing the capacitor in example 1 (see Fig.. which were described in detail in the previous chapters. From this it follows that iC can be determined from g7 . the solution is even unique. that the electrical current i0 can only be determined with equation g6 . 80 . In case of example 1. A of an explicit DAE system. the incidence matrix looks as follows: i0 g1 g2 g3 g4 g5 g6 g7 1 i1 5 i2 iC u1 × u2 uL 4 × 6 7 × × × × 2 × 3 It can be seen. In the case ∂g is regular. as the following example shows.1.7 Differential-Algebraic Systems 7.2). i1 from g1 .13). if the system is of differential index 1.3 Solvability of Differential-Algebraic Systems Differential-Algebraic systems are not necessarily solvable.2. i2 from g2 . The structural integrity is given. 7. u1 from g3 and u2 from g4 .2 Numerical Methods for Solving Differential-Algebraic Systems Differential-algebraic systems are often solved through a combination of differential equation solvers and nonlinear equation solvers. uL from g5 .

32) The incidence matrix has the following appearance: 81 . y = [i0 . U0 = u1 + u3 . T T (7. i2 . i0 = i1 + iL . u2 = R2 · i2 . dt L uL = u1 + u2 . R3 . & $    (7. i1 . d = [U0 . diL 1 = · uL . In addition to the already known equations u1 = R1 · i1 .7.25) (7.24) (7. uL ] .26) you obtain u3 = R3 · i3 . L] .22) (7. (7.23) (7. R2 .31) (7. which consists of one differential equation and eight algebraic equations: x = [iL ].21) (7.27) (7. u2 . i1 = i2 + i3 . u3 . u1 .30) (7.28) (7. u3 = u2 . i3 . R1 .2: Electrical circuit example 2.29) This is an explicit differential-algebraic system.3 Solvability of Differential-Algebraic Systems  £    ©  ¢ ¨  " ¡  + # § ¦ %   ! ¥ ¤ ' Figure 7.

remedy can be achieved through a simple transformation (see Fig. Models with algebraic loops run slower than models without. 7.3(a)) we consider the algebraic loop in y = A(x − By) Here. E. In contrast to example 1. u3 can be determined from g3 . The problem of algebraic loops is only solvable iteratively.7 Differential-Algebraic Systems i0 g1 g2 g3 g4 g5 g6 g7 g8 1 i1 × i2 i3 u1 4 u2 u3 uL 6 × × 8 3 × 2 × × 5 × × × 7 × i0 can only be calculated through equation g6 . but the problem is still structurally solvable. with which variable and equation you want to proceed further. In Simulink a solution method based on Newton’s method is used. Now you have to decide.3(b)): (1 + AB)y = Ax (7. algebraic loops should be avoided. The decision to calculate u3 with g3 leads to the problem of algebraic loops: u3 = g3 (u1 ) u1 = g1 (i1 )      i1 = g7 (i2 .g. Doing so. . you see here that no unique solution exists. As example (see Fig. . i2 from g2 . i1 from g7 . you decide that u1 has to be determined from g1 . uL only with g5 . il from g8 and u2 from g4 . If possible.33) 82 . ) The output variable u3 is at the same time an input variable. i3 )     i3 = g8 (u3 ) ⇒ u3 = g(u3 .34) (7. 7. .

i1 = i2 + iL .43) (7.40) (7.35) (7.7. In addition to the already known model equations u1 = R1 · i1 .39) (7. diL uL = L · . dt you obtain U0 = u1 + uL .36) (7.3: Algebraic loop. Example 3 We consider a further example 3 (see Fig.41) (7. (7.38) $  ¦    83 . uL = u2 .4). u2 = R2 · i2 . £ ¢    ! ¦ §¥ ¥  § ¤   + & ©  # "  ¨  ¡   % Figure 7.4: Electrical circuit example 3. uC = u1 + u2 .37) (7. dt duC iC = C · . i0 = i1 + iC .42) (7. 7. where capacitor and coil from example 1 are exchanged.3 Solvability of Differential-Algebraic Systems   ¡ ¡ £ ¢£¤ ¨© (b) transformed ¢ (a) original Figure 7.

84 . y and d correspond to those of example 1. Therefore. the differential-algebraic system is not solvable in this form. The vectors x.7 Differential-Algebraic Systems The model includes two differential equations and seven algebraic equations. This is referred to as structural singularity. that for the calculation of i0 and iC at each case only the equation g6 is available. The incidence matrix looks like the following: i0 g1 g2 g3 g4 g5 g6 g7 × × × × × × i1 × × × × × i2 iC u1 × × × × u2 uL It can be seen.

8.1). .1 Introductory Example For the introduction of partial differential equations (PDE) we consider the example of a heat conductor. q : heat flux density . ∂x You insert this equation into (8. t) ] (8. t) + ∂x (x. t) ∂t ∂T (x.1: Schematic representation of a heat conductor.4) temperature conductivity 85 . . ¢ ¨ ¡ § +  ¤ ¥£   ¦ Figure 8. t) c ∂x2 β (8. The temperature T of the heat conductor is a function of the position in space and time. t) . t) ∂t = A [q(x. In the heat conductor there exists a heat flux from regions of higher temperature to regions of lower temperature (see Fig. t) − q(x + dx.1) and obtain: q(x. The heat flux density q can be represented as follows according to Fourier’s law: ∂T (x. λ : heat conductivity. t) ∂t = Aλ = ∂T 2 (x. t)dx + .3) (8. c : specific heat capacity. t) dx ∂x2 λ ∂T 2 (x. The differential heat balance for a volume element with the area A and the thickness dx reads: A dx c ∂T (x.1)  with : density.  (8. t) = −λ A dx c ∂T (x.8 Partial Differential Equations 8.2)  © Taylor series: ∂q q(x.

zx . an initial condition: T (x. a(·). t) ∂x ∂T (l. x.7) (8. For the time t the first derivative is ∂2T . b(·). t) λ ∂x = 0 4 = ε(Ta − T (l. t) corresponds to the order of its highest derivative. For a complete mathematical model additional boundary conditions have to be determined.2 Representation of Partial Differential Equations The highest order of derivative occurring in a (partial) differential equation is called the order of the differential equation. t) = T2 (t) b) Neumann conditions ∂T (0.4) is the heat conductivity equation. t) ∂x = 0 = 0 (8. c(·) constant or function of t and x. ∂T . c(·) function of t. because it contains derivatives with respect to space as well as derivatives with respect to time. Therefore one boundary condition for t is ∂t needed. 86 . t) = T1 (t) T (l. b(·). x occurs in the second derivative ∂x2 Examples for these are: a) Dirichlet conditions T (0.11) You distinguish between linear. t)) reads: a(·)ztt + 2b(·)ztx + c(·)zxx + d(·)zt + e(·)zx + f (·)z + g(·) = 0 (8. in all other cases. z. So two boundary conditions are needed for x.8) (8.5) (8. The number of the conditions for every independent variable (x. quasi-linear and nonlinear partial differential equations: linear: quasi-linear: nonlinear: a(·). e. t)4 ) (8. t0 ) = T0 (x).g. the temperature profile along the heat conductor at time t0 .8 Partial Differential Equations (8.10) 8. The general form of a partial differential equation of second order (z = z(x. zt . This is a partial differential equation.6) c) Robbins conditions ∂T (0. t) ∂x ∂T (l.9) (8.

t) = T (l.8. b2 − ac = 0.13) (8. The heat conductivity equation (8. In our example we choose: β = 1 πx T (x.3 Numerical Solution Methods Linear partial differential equations of second order can be further distinguished into hyperbolic.16) 8.3. one retains the time coordinate and discretizes the spatial coordinate. t) = 0 (8. b2 − ac < 0. l (8.1 Method of Lines The idea of the method of lines (Schiesser. The solution of vector differential equations can be obtained accordingly. 87 . 1991) is the parameterization of the solution function in such a way that it depends only on one continuous coordinate.12) is a linear. because a = b = 0 and c = β = const and so b2 − ac = 0. Usually. the partial differential-algebraic system is transformed into a differential-algebraic system with lumped parameters which can be solved according to Chapter 7. 0) = sin l T (0. elliptic partial differential equation of second order. in most cases an analytical solution of partial differential equations is not possible.15) The analytical solution reads then: 0 21 π At −@ 2 l T (x. 8. t) = e sin πx .4) Tt − βTxx = 0 (8. In that way.14) (8. The choice of a suitable numerical solution method strongly depends on the type of the partial differential equation. parabolic and elliptic differential equations: hyperbolic: parabolic: elliptic: b2 − ac > 0. Except for linear partial differential equations that only contain a small number of variables.3 Numerical Solution Methods The explanation of the numerical solution methods is based on scalar partial differential equations.

8 Partial Differential Equations The evaluation is done at N+1 selected nodes xk (see Fig.25) In the case of a first-oder approximation. t).1. . zN (t)] T (8. N. one solves this equation with respect to dzk zk+1 1 d2 zk zk+1 − zk = − ∆x = − O1 (∆x) 2 dx ∆x 2 dx ∆x 88 .18) (8. 8. x) ≈ [z0 (t). (8. 8.19) = = Figure 8.20) (8. . . dx 2 dx2 (8. . the equations are: Tt |xk − βTxx |xk k 0 = 0 k T0 .2): k l k = 0. . k = 1. k or Ttk − βTxx = 0. 8. . For this purpose you use the method of finite differences. N. .1 Finite Differences First of all. . . This is illustrated in Fig. . . xk ) xk = z(t.23) T (t = t0 ) = n k = 1. zk+1 is expanded into a Taylor series: zk+1 = zk + dzk 1 d 2 zk ∆x + ∆x2 + . T (t) = T2 (t). .17) (8.22) (8. . . Inserting the above into the partial differential equation. For the use of the method of lines the differentials with respect to the spatial coordinate have to be determined. In the example of the heat conduction equation.24) dzk : dx (8. N N zk (t) = z(t.3.21) (8. T (t) = T1 (t). .3. z1 (t). one receives n differential equations which have to be solved at the points xk . . . .2: Discretization of the solution function z(x.

Admittedly, this is a bad approximation. An approximation of second order is better, in which both z_{k+1} and z_{k+2} are expanded into a Taylor series:

   z_{k+1} = z_k + (dz_k/dx) ∆x + (1/2)(d²z_k/dx²) ∆x² + ...,                 (8.26)
   z_{k+2} = z_k + (dz_k/dx)(2∆x) + (1/2)(d²z_k/dx²)(2∆x)² + ...              (8.27)

By multiplying (8.26) by four and subtracting (8.27) from this equation, a linear equation system in dz_k/dx and d²z_k/dx² is obtained:

   4 z_{k+1} − z_{k+2} = 3 z_k + 2 ∆x (dz_k/dx) + ...                         (8.28)

In this way, you obtain:

   dz_k/dx = (−3 z_k + 4 z_{k+1} − z_{k+2}) / (2∆x) + O_2(∆x).                (8.29)

In the approximation by finite differences, there are the following degrees of freedom:

ˆ number of nodes,
ˆ selection of nodes:
   – forward differences: as in (8.29), only points on the right-hand side are considered (compare with Table 8.1),
   – backward differences: only points on the left-hand side are considered,
   – central differences: both sides are considered (compare with Table 8.2).

Figure 8.3: Illustration of the method of lines.
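The improvement of the second-order formula (8.29) over (8.25) is easy to verify numerically. The following short sketch (an added illustration; z(x) = sin x and the evaluation point are arbitrary choices) compares both approximations of dz/dx:

import numpy as np

z, dz_exact = np.sin, np.cos          # arbitrary smooth test function
xk = 1.0                              # arbitrary evaluation point
for dx in (0.1, 0.05, 0.025):
    d1 = (z(xk + dx) - z(xk)) / dx                              # first order, (8.25)
    d2 = (-3*z(xk) + 4*z(xk + dx) - z(xk + 2*dx)) / (2*dx)      # second order, (8.29)
    print(f"dx = {dx:5.3f}   error O(dx): {abs(d1 - dz_exact(xk)):.2e}"
          f"   error O(dx^2): {abs(d2 - dz_exact(xk)):.2e}")

Halving ∆x roughly halves the first error and quarters the second, in line with the stated approximation orders.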

Table 8.1: One-sided differences (coefficients of z_k, z_{k+1}, ..., z_{k+6}, in that order).

   ∆x ∂z/∂x|_{x_k}:
      2-point (error order 1):  −1, 1
      3-point (error order 2):  −3/2, 2, −1/2
      4-point (error order 3):  −11/6, 3, −3/2, 1/3
      5-point (error order 4):  −25/12, 4, −3, 4/3, −1/4
      6-point (error order 5):  −137/60, 5, −5, 10/3, −5/4, 1/5
      7-point (error order 6):  −49/20, 6, −15/2, 20/3, −15/4, 6/5, −1/6

   ∆x² ∂²z/∂x²|_{x_k}:
      3-point (error order 1):  1, −2, 1
      4-point (error order 2):  2, −5, 4, −1
      5-point (error order 3):  35/12, −26/3, 19/2, −14/3, 11/12
      6-point (error order 4):  15/4, −77/6, 107/6, −13, 61/12, −5/6
      7-point (error order 5):  203/45, −87/5, 117/4, −254/9, 33/2, −27/5, 137/180

8.3.1.2 Problem of the Boundaries

At the boundaries of the defined range of the spatial coordinate x problems may arise in using finite differences, if points are not available which are necessary for the evaluation of the differences. In the central five-point formula, this applies for example to z_{k−1} and z_{k−2}. Among others there are the following solutions:

ˆ Extrapolation method (see Fig. 8.4):
   – Extrapolation points are introduced outside of the defined range.
   – These points are then used in the approximation formulas.

ˆ Sliding differences (see Fig. 8.5): The center point is shifted successively.

However, both methods reduce the approximation order at the boundaries.

Table 8.2: Central differences (coefficients of z_{k−3}, ..., z_k, ..., z_{k+3}, in that order).

   h ∂z/∂x|_{x_k}:
      3-point (error order 2):  −1/2, 0, 1/2
      5-point (error order 4):  1/12, −2/3, 0, 2/3, −1/12
      7-point (error order 6):  −1/60, 3/20, −3/4, 0, 3/4, −3/20, 1/60

   h² ∂²z/∂x²|_{x_k}:
      3-point (error order 2):  1, −2, 1
      5-point (error order 4):  −1/12, 4/3, −5/2, 4/3, −1/12
      7-point (error order 6):  1/90, −3/20, 3/2, −49/18, 3/2, −3/20, 1/90

Figure 8.4: Extrapolation method.

8.3.2 Method of Weighted Residuals

The method of weighted residuals (Lapidus and Pinder, 1982) approximates the solution z(x, t) with a finite function series:

   z̃(x, t) = Σ_{i=0}^{N} α_i(t) ϕ_i(x)                                        (8.30)

with ϕ_i the known basis functions of the function system and α_i the coefficients. The quality of the approximation depends on the dimension N and the choice of the basis functions ϕ_i(x).

Figure 8.5: Sliding differences.

The task is now to determine the coefficients α_i. In the example of the heat conduction equation you proceed as follows:

   ∂T/∂t − β ∂²T/∂x² = 0,                                                     (8.31)
   T(x, t_0) − T_0(x) = 0,                                                    (8.32)
   T(0, t) − T_1(t) = 0,                                                      (8.33)
   ∂T(l, t)/∂x − T_2(t) = 0.                                                  (8.34)

This corresponds to a representation in "residual" form, because only zeros occur on the right-hand side. The sought solution T is approximated by applying (8.30):

   T̃(x, t) = Σ_{i=0}^{N} α_i(t) ϕ_i(x).                                       (8.35)

This is inserted into the heat conduction equation. Because only an approximation of the real solution is used, this leads to equation residuals R_i:

   ∂T̃/∂t − β ∂²T̃/∂x² = R_PDGL(x, t, α, dα/dt) = 0,                           (8.36)
   T̃(x, t_0) − T_0(x) = R_AB(x) = 0,                                          (8.37)
   T̃(0, t) − T_1(t) = R_RB0(t) = 0,                                           (8.38)
   ∂T̃(l, t)/∂x − T_2(t) = R_RBl(t) = 0.                                       (8.39)

An approximation z̃ is considered acceptable, if the suitably weighted equation residual R disappears in the mean over the considered range:

   ∫_0^l R(α, dα/dt, x) w_i(x) dx = 0,        i = 1, ..., N.                  (8.40)

The choice of the weights w_i characterizes the weighted residuals method. The three most important methods are briefly introduced in the following sections (see Fig. 8.6).

8.3.2.1 Collocation Method

In the collocation method Dirac impulses are used as weighting functions:

   ∫_0^l R(α, dα/dt, x) δ(x − x_i) dx = 0,    i = 1, ..., N.                  (8.41)

As a consequence, you do not have to solve an integral system, but rather only an algebraic equation system:

   R(α, dα/dt, x_i) = 0,                       i = 1, ..., N.                  (8.42)

8.3.2.2 Control Volume Method

The control volume method uses weights w_i only in an interval between two active nodes:

   w_i(x) = 1  if x_{i−1} < x < x_i,   0  elsewhere.                          (8.43)

You obtain as the resulting equation system:

   ∫_{x_{i−1}}^{x_i} R(α, dα/dt, x) dx = 0,    i = 1, ..., N.                  (8.44)

8.3.2.3 Galerkin Method

The Galerkin method uses as weighting functions the sensitivity of the approximation function with respect to the parameters to be determined. In other words, the weighting functions are the basis functions:

   w_i(x) = ∂z̃(x, t)/∂α_i = ϕ_i(x),            i = 1, ..., N.                  (8.45)

8.3.2.4 Example

The method of weighted residuals shall be illustrated with the example of an ordinary differential equation of first order, Newton's cooling law. An object with the temperature T is exposed to its environment with the temperature T_u and cools down:

   dT/dt + k (T − T_u) = 0,        T(0) = 1.                                  (8.46)

k is a proportionality constant. Let k = 2, T_u = 1/2, and the considered time interval shall be normalized to one.

As basis function ϕ(t) the so-called "hat" function is used (see Fig. 8.7):

   ϕ_i(t) = (t − t_{i−1}) / (t_i − t_{i−1})    for t_{i−1} ≤ t ≤ t_i,
            (t_{i+1} − t) / (t_{i+1} − t_i)    for t_i ≤ t ≤ t_{i+1}.         (8.47)

N is selected to be three and you receive the approximation for T̃:

   T̃ = Σ_{j=1}^{3} T_j ϕ_j(t).                                                (8.48)

Figure 8.6: Different weighting functions (according to Lapidus and Pinder, 1982).
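The hat functions (8.47) with the nodes t = 0, 0.5, 1 are easy to evaluate; the following small sketch (added here for illustration) builds them with numpy and checks that they form a partition of unity on the interval:

import numpy as np

nodes = np.array([0.0, 0.5, 1.0])          # t_1, t_2, t_3 of the example

def hat(i, t):
    """Hat function phi_i: piecewise linear, 1 at nodes[i], 0 at the other nodes."""
    values = np.zeros(len(nodes))
    values[i] = 1.0
    return np.interp(t, nodes, values)

t = np.linspace(0.0, 1.0, 5)
print(hat(0, t), hat(1, t), hat(2, t))
print(hat(0, t) + hat(1, t) + hat(2, t))   # the hats sum to one everywhere

ϕ_1 evaluated this way is exactly the function 1 − 2t used in the Galerkin calculation that follows.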

Figure 8.7: Hat function.

T_j is the value of the temperature T̃ at the node j. The condition for the weighted residuals reads:

   ∫_0^1 R(t) w_i dt = 0,        i = 1, 2, 3,                                 (8.49)

with

   R(t) = dT̃/dt + k (T̃ − T_u).                                                (8.50)

With (8.48) you obtain

   ∫_0^1 [ Σ_{j=1}^{3} T_j (dϕ_j/dt + k ϕ_j) − k T_u ] w_i(t) dt = 0,    i = 1, 2, 3.      (8.51)

This equation is to be solved both with the Galerkin method and the collocation method:

Galerkin method

With the Galerkin method you obtain the following expression as a result of the weighting with the basis functions:

   Σ_{j=1}^{3} T_j ∫_0^1 (dϕ_j/dt + k ϕ_j) ϕ_i dt = ∫_0^1 k T_u ϕ_i dt,    i = 1, 2, 3.     (8.52)

In matrix notation this yields

   A T = b    with    A_ij = ∫_0^1 (dϕ_j/dt + k ϕ_j) ϕ_i dt,    b_i = ∫_0^1 k T_u ϕ_i dt,    T = [T_1, T_2, T_3]^T.      (8.53)

Because of the local support of the hat functions, each integral actually extends only over one or two subintervals. For instance, with ϕ_1 = 1 − 2t the first integral is

   ∫_0^{1/2} (dϕ_1/dt + k ϕ_1) ϕ_1 dt = ∫_0^{1/2} [−2 + 4t + k (1 − 2t)²] dt
                                      = [−2t + 2t² − (k/6)(1 − 2t)³]_0^{1/2} = −1/2 + k/6.

Evaluating all integrals in the same way and inserting the initial condition T_1 = 1 leads (with k = 2 and T_u = 1/2) to the three equations

   (a)    (2/3) T_2              = 5/12,
   (b)    (2/3) T_2 + (2/3) T_3  = 5/6,
   (c)   −(1/3) T_2 + (5/6) T_3  = 1/4.

Therefore, we have three equations for two unknown variables.

Solving pairs of these equations yields

   (a), (b):   [T_1, T_2, T_3] = [1, 0.625, 0.625],                           (8.54)
   (b), (c):   [T_1, T_2, T_3] = [1, 0.678, 0.571],                           (8.55)
   (a), (c):   [T_1, T_2, T_3] = [1, 0.625, 0.550].                           (8.56)

The exact analytical solution of (8.46) reads:

   T(t) = T(0) e^(−kt) + T_u (1 − e^(−kt)),                                    (8.57)

leading to: [T_1, T_2, T_3] = [1, 0.684, 0.568].

If you compare the exact solution with the numerically calculated ones, you can see that the best solution in this case is (8.55). Generally speaking, in the Galerkin method one should delete rows and columns with known coefficients.

Collocation Method

Applying the collocation method to our example, you receive an equation system without the integrals of the equation residuals (due to the use of Dirac impulses for weighting):

   ∫_0^1 [ Σ_{j=1}^{3} T_j (dϕ_j/dt + k ϕ_j) − k T_u ] δ(t − t_i) dt = 0,    i = 1, 2, 3.    (8.58)

The collocation points can be chosen as desired, e.g.: t_1 = 0 (initial condition), t_2 = 0.25, t_3 = 0.75. With this you obtain from (8.58), the first row expressing the initial condition T_1 = 1 and the other two the residuals at t_2 and t_3:

   [ 1                            0                            0                         ] [T_1]   [ 1     ]
   [ (dϕ_1/dt + kϕ_1)|_{0.25}     (dϕ_2/dt + kϕ_2)|_{0.25}     0                         ] [T_2] = [ k T_u ]      (8.59)
   [ 0                            (dϕ_2/dt + kϕ_2)|_{0.75}     (dϕ_3/dt + kϕ_3)|_{0.75}  ] [T_3]   [ k T_u ]

which evaluates to

   [ 1           0           0        ] [T_1]   [ 1     ]
   [ −2 + k/2    2 + k/2     0        ] [T_2] = [ k T_u ]                                                         (8.60)
   [ 0           −2 + k/2    2 + k/2  ] [T_3]   [ k T_u ]

From that we can derive the approximated solution:

   T_1 = 1,    T_2 = 0.667,    T_3 = 0.555.

8.4 Summary

ˆ In collocation methods only the equation residuals have to be determined, but not their integrals, which are frequently not analytically evaluable. If an analytical evaluation is not possible, the computational effort increases drastically. However, the choice of the location of the collocation points is crucial for a high approximation quality.
ˆ The approximation quality of the method of weighted residuals is strongly dependent on the choice of the basis function.
ˆ In the differences method many degrees of freedom exist (order, centering, boundary treatment).
ˆ There are no general statements on the most suitable method, because the best choice is strongly problem dependent.
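The small collocation system above can be reproduced directly. The following sketch (added for illustration) assembles the evaluated matrix for k = 2 and T_u = 1/2, solves it, and evaluates the exact solution (8.57) at the nodes for comparison:

import numpy as np

k, Tu = 2.0, 0.5
A = np.array([[1.0,       0.0,      0.0     ],   # initial condition T_1 = 1
              [-2 + k/2,  2 + k/2,  0.0     ],   # residual at t = 0.25
              [0.0,      -2 + k/2,  2 + k/2 ]])  # residual at t = 0.75
b = np.array([1.0, k*Tu, k*Tu])
T = np.linalg.solve(A, b)
print("collocation:", T)                          # approx. [1, 0.667, 0.556]

t_nodes = np.array([0.0, 0.5, 1.0])
T_exact = 1.0*np.exp(-k*t_nodes) + Tu*(1 - np.exp(-k*t_nodes))
print("exact      :", T_exact)                    # approx. [1, 0.684, 0.568]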


9 Discrete Event Systems

In discrete event systems (DES) the system state only changes at discrete time moments at which events occur. Input signals are the discrete events. Fig. 9.1 shows a comparison of trajectories of discrete event systems and continuous systems. In continuous systems, the states change continuously in time and (usually) have continuous input variables (Fig. 9.1 a). In discrete event systems, the states change at unpredictable moments of time, which only depend on the events which cause the state change (Fig. 9.1 b, d). A discrete time system (Fig. 9.1 c) only approximates a continuous time system.

Figure 9.1: Trajectories for continuous and time discrete systems (according to Kiencke, 1997): a) time and value continuous, b) time continuous and value discrete, c) time discrete and value continuous, d) time and value discrete.

The ability decreases with the alcohol content (continuous variable) in the drivers blood.1. ‡ > < Figure 9. Discrete events occur either as natural discrete events or through the reduction of a continuous state transition. but are captured only at discrete moments. This transition can be (artificially) defined as a discrete event. 100 . ˆ Natural discrete events Examples: Quantities are defined as discrete events. if the transition process is relatively short or can even be neglected. 9. – Signal change in traffic controls – Number of trains in a railway station neglecting the in and out driving trains – Floors during an elevator trip ˆ Reduction of the transition A state passes into a successive state through a continuous transition. The variables still have continuous values.1 Classification of Discrete Event Models Discrete event models are going to be characterized in this chapter on the basis of different criteria (Kiencke. As an example we observe the driving ability of a motor vehicle driver.2. if for example only qualitative changes are of interest. Thus the ability to drive is a continuous variable. The law declares that one loses the ability to drive with an alcohol content of > 0.2: State graph of the driving ability.5 .1 Representation Form Discrete event models can be either represented as mathematical or as graphical models. 9.9 Discrete Event Systems continuous time system. 9. 1997). A graphical illustration as a discrete state graph is shown in Fig.

no time basis is available in discrete time models. The advantage of this representation is the vividness. Every graphical model can be transformed into a mathematical model (e. .2 Time Basis In opposite to time continuous models.9. τ with x = [x1 . ΦEX .2 State Model ˆ Mathematical model ˆ Graphical model The objects are represented through mathematical variables as well as input and output functions. . . . .2). The input and output functions are executed through graphical operations (see Section 9.2 State Model A discrete event system is described with a model M . xk ]T z = [z1 . The model analysis is carried out for a logical sequence.1. z. z2 . y.g. Thereby. Ω.1. graphical symbols often cannot completely represent a real process. On the other hand. ΦIN . y2 . . The variables are depicted as graphical symbols. . with matrix operations). 9. which can be represented by a 7-tuple: M = x. the state transitions take place with conditional probabilities.3 States and State Transitions You distinguish between deterministic and stochastic states and state transitions. . Stochastic states only appear with a specific probability.1) vector of inputs (external events). x2 .   ΦIN : z → z    mathematical ΦEX : z × x → z functions  Ω:z×x→y    τ : z → R+ state transition function for internal events state transition function for external events output function residence time function 101 . vector of states . 9.3). . . yn ] T T (9. 9. . zm ] y = [y1 . vector of outputs. (see Section 9. .

1 Example We consider a liquid vessel as an illustrative example. (9. ΦIN .2) 9. M = z. and the residence time function τ are dependent on time. Three process states (filling. Ω. This characteristic eases the mathematical analysis. A new state will be reached after the end of the residence time: z[t + τ (zi )] = ΦIN (zi ) State transition function for external events The system is in the state z(t). Dependency of the the state transitions on the system’s previous history A model is called “memoryless”. x(t)] Input variables x A model is called autonomous. τ Time dependency If the state transition functions ΦIN . one calls the system time-variant. emptying. x(t)] Output function y(t) = Ω[z(t). the output function Ω. ΦEX .9 Discrete Event Systems State transition function for internal events The system is in the state zi with its corresponding residence time τ (zi ). Because of an external event x(t) it switches to the new state z z (t) = ΦEX [z(t). if the conditional transition probability for the successive state is only dependent on the immediate state zi .3) (9.3). y. v2 = 0 ˙ ˙ =0 =0 (filling) (emptying) (equilibrium) if v1 = v2 ˙ ˙ 102 . v2 = 0 ˙ ˙ v1 = 0. 9.2. The filling and emptying of the vessel is supposed to be realized by means of a digital feedforward control (see Fig. otherwise timeinvariant. equilibrium) are considered in the model: ˙ h > 0 if ˙ h < 0 if ˙ h=0 v1 = 0.4) (9. if it works without being influenced by the environment (without inputs x).

which are well-suited for the description and the analysis of discrete event systems.5) (9. £ 103 . 2 The process states can be represented with the Boolean variables x1 and x2 in the following way: “filling”: x1 x2 x1 x2 “emptying”: “equilibrium”: x1 x2 + x1 x2 The state vector z must contain two components z1 and z2 . equilibrium. For these we have: ˙ ˙ xi = 1 if vi = 0 ˙ 0 if vi = 0 ˙ i = 1.3 Graph Theory The graph theory provides methods.6) 9.3 Graph Theory ¡   ¢ ¤ Figure 9.9. because three states have to be represented: z = z = z = 1 0 0 1 0 0 filling. emptying. This yields to the following state equations: z1 = z 1 x1 x2 + z1 z 2 x1 x2 .3: Model of a liquid vessel. The inputs are x1 = v1 und x2 = v2 . z2 = z 1 x1 x2 + z1 z 2 x1 x2 (9.

ˆ Directed graph A directed graph (see Fig.9). Figure 9. 9. 9.4).6). if they have the same starting and ending vertex (see Fig. The set V of the vertices and the set E of the edges are called graph G (see Fig. ˆ Predecessor and successor ˆ Sink and source ˆ Parallel arrows ˆ Loop 104 Vertices.4: Illustration of a graph. 9.9 Discrete Event Systems 9.7). . are called predecessor and successor of k. A directed edge is called arrow. which are directly connected with an arrow to another vertex k. 9. Sinks are vertices without any successor (see Fig. 9. respectively (see Fig.5: Directed graph. In the following some important concepts of the graph theory will be explained. Two arrows are called parallel.5) is a graph. Sources are vertices without any predecessor.1 Basic Concepts The elements of a system are called vertices. 9. A degenerate edge (arrow) of a graph which joins a vertex to itself is called loop (see Fig. Two vertices get related to each other through a connecting edge. Figure 9. in which all edges are directed.3.8).

7: Sink and source.9: Loop.9.10). Figure 9.6: Predecessor and successor.8: Parallel arrows. Fig. 105 . 9. Figure 9. ˆ Simple graph A graph which is permitted to contain loops and multiple edges is called a simple or general graph (see.3 Graph Theory Figure 9. Figure 9.

ˆ Finite graph ˆ Digraph A simple and finite graph is called a digraph. Thereby.10: Simple graph.3. .   . one distinguishes between the adjacency matrix and the incidence matrix.  . A vertex w of a digraph is called reachable from a vertex V . ˆ Reachability A graph is called finite.9 Discrete Event Systems Figure 9.7) an1 an2 · · · n is the number of vertices. . . This sequence of arrows is called reachability graph 9. The adjacency matrix  a11 a12  a21 a22  A= . if there exists a path from the starting vertex V to the ending vertex w. . if the sum of vertices as well as the sum of arrows or edges of this graph are finite. . . The rows correspond to the vertices. . . A represents the edges between the single vertices:  · · · a1n · · · a2n   . (9.  . . . For the elements of the adjacency matrix..2 Representation of Graphs and Digraphs with Matrices One has to model mathematically discrete event systems in a matrix representation in order to simulate them.. in1 in2 · · · inm 106 .  . and the columns to the edges:   i11 i12 · · · i1m  i21 i22 · · · i2m    I= . the following holds: aij = 1 0 if the edge from vertex i to j exists otherwise In the incidence matrix I the n vertices and m edges of a graph are represented. .  . . .8) . ann (9.

11) The column sum equals zero because every edge starts at one vertex and leads to another one. the directions of the edges are distinguished:  1  ikl = −1   0 if el is positive incident with vk (edge el leaves from the vertex vk ) if el is negative incident with vk ( edge el leads to the vertex vk ) otherwise (9.. .3 Graph Theory The edge el is called incident with the vertex vk ..11): ¡ ©   ¤ £  ¨ §   ¥ Figure 9. n (9. if it starts at vk or if it ends at vk ..9. .. . m (9. n l = 1.. This characteristic can be used for error checking.11: Example of a graph. The above introduced matrices shall be illustrated in the following example (see Fig. .12) 107 . . . 9.9) In a digraph.. The adjacency matrix reads: 1  1 0 2 0  3 0 A=  4 0  5 0 6 0 2 1 0 0 0 0 1 3 0 1 0 0 0 0 4 0 1 1 0 0 0 5 0 0 1 1 0 0 6  0 0  0  0  1 0 ¦  ¢ (9.10) In a digraph the elements of the incidence matrix satisfy: n ikl = 0 k=1 l = 1. ikl = 1 0 if el is incident with vk otherwise k = 1.
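As a small added illustration (using a hypothetical four-vertex digraph rather than the example of Fig. 9.11), the following sketch stores a digraph as an adjacency matrix, builds the corresponding incidence matrix and checks that every column of the incidence matrix sums to zero, since each edge leaves one vertex and enters another; powers of the adjacency matrix indicate which vertices are reachable in a given number of steps.

import numpy as np

# hypothetical digraph: edges 1->2, 2->3, 3->4, 4->2 (0-based indices below)
n = 4
edges = [(0, 1), (1, 2), (2, 3), (3, 1)]

A = np.zeros((n, n), dtype=int)            # adjacency matrix
for i, j in edges:
    A[i, j] = 1

Inc = np.zeros((n, len(edges)), dtype=int)  # incidence matrix
for e_idx, (i, j) in enumerate(edges):
    Inc[i, e_idx] = 1                       # edge leaves vertex i
    Inc[j, e_idx] = -1                      # edge enters vertex j

print(Inc.sum(axis=0))                      # every column sums to zero
print(np.linalg.matrix_power(A, 2))         # entry (i, j) > 0: j reachable from i in two steps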

HPSIM ˆ simulation systems for special DES: Simple++ for manufacturing systems 9. A time basis does not exist. C(++). therefore the residence time function τ is not needed.4 Automaton Models Examples for automaton models are: pay phones. The input xi at the state zi definitely determines the successive state zi+1 and the output yi . washing machines.14) (9. ΦEX (zi . y. xi ) = zi+1 Ω(zi .1 Models for Discrete Event Systems The following model types are described in more detail in the following sections: (9. One characteristic of an automaton is.9 Discrete Event Systems The incidence matrix reads correspondingly: a b c d e f g h  0 0 0 0 0 0 0 1 1 1 0 0 0 0 −1 2 −1 1   0 0 −1 1 0 1 0 0 3  I=  0 0 0 4  0 −1 0 −1 1  0 0 0 −1 −1 1 0 5 0 0 0 0 0 0 −1 1 6 0  9.2 Simulation Tools Various tools exist for the simulation of discrete event systems: ˆ common programming languages: FORTRAN.3. Thus. Because the system reacts exclusively on inputs. money transaction machines.3. a sequence of discrete time events from outside is present. SIMAN. PASCAL ˆ simulation languages for general DES: GPSS. xi ) = yi Therefore. z. Ω . also no state transition function is needed for the internal events ΦIN .15) 108 . that it is operated from outside.2.2. ΦEX . an automaton model is a 5-tuple x. (9.13) ˆ automaton models ˆ Petri net models 9.

. First of all we define the inputs. input x1 . . states z1 . .13 shows the resulting state graph of the soda machine model. The state transition in the automaton model (state graph) is illustrated in Fig. . output drink 1 y3 . output coin Fig.9. . insert coin choose drink 1 choose drink 2 recall coin 2. x2 . After inserting a coin you can choose between two possible drinks or you can recall the coin. the states. output y1 . x4 . output drink 2 y4 . 9.12.13: State graph of the soda machine model. soda machine ready z2 . . A soda machine serves us an example of an automaton. . . . .12: State graph in an automaton model. .4 Automaton Models = ¡ ¡ = Ω( ) Φ ( + ¤ )         Figure 9. ¥ ¨¦ ¤§   © ' ) 2 0 &$ % 1 ( − ¢£  ¡ Figure 9. . amount of money sufficient 3. Upon maloperation a signal sounds. and the outputs of the model: 1. . . . . x3 . . signal sounds y2 . . . . 9.    9 B@ 4 75 8A 36 " ¢£ # ! 109 .

between places and places as well as connections of transitions with transitions are not allowed.5.5 Petri Nets A Petri net is a special graph (Hanisch. Tokens are produced. which are stored in the places. 9. 9. Figure 9. ˆ Continuous time models: Time Petri nets predict additionally when the events occur.9 Discrete Event Systems 9. Thereby.1 Discrete Time Petri Nets The mathematical representation of Petri nets is given in matrix form. you distinguish the matrices for the pre.14): ˆ Places are represented by circles and describe (possible) states. deleted or moved by the firing of the transitions. One distinguishes between two Petri net models: ˆ Discrete time (causal) models: Petri nets describe logically what happens in which sequence. The elements of Petri nets are (see Fig. The dynamic flow of a Petri net is represented by the transitions between the marked states (token distribution in all places). ˆ Places and transitions are connected with directed edges.and the post-weights: W − : matrix for pre-weights (connection of a place with a transition) W + : matrix for post-weights (connection of a transition with a place) 110 .14: Graphical elements of a Petri net. Thereby. ˆ Transitions describe (possible) events and are represented as bars. connections ˆ The actual system states are represented through tokens. It is a directed digraph with two kinds of nodes. namely places and transitions. 1992).

A transmitter sends a message to a receiver and waits for an answer (see Fig. ¦ § § ¨ © ¡ ¢   © ¨ ¦ ¢   ¡ ¤ ¥ Figure 9. The incidence matrices W of this Petri net reads: t 1 1 0  0 = 0  0 0 t2 0 1 0 1 0 0 t3 0 0 1 0 0 0 t4  0 P1 0P2  0P3  0P4  1P5 1 P6 t 1 0 1  0 = 0  0 1 t2 0 0 1 0 0 0 t3 0 0 0 1 1 0 t4  1 P1 0P2  0P3  0P4  0P5 0 P6 W− W+ £ ¤ ¥ (9.15).9.16) W = W+ − W− t t 2 t3 t4   1 −1 0 0 1 P1  1 −1 0 0 P2   0 1 −1 0 P3   = 0 P4   0 −1 1 0 0 1 −1P5 1 0 0 −1 P6 (9. As an example we will study the message exchange between a message receiver and a message transmitter. With the matrices W − and W + the incidence matrix W of a Petri net can be calculated: W = W + − W − .5 Petri Nets The weights state how many tokens can be taken away or enter according to the capacity restriction of a node. 9.17) 111 .15: Petri net model for message exchange.

.26) (9. n ˆ Transitions Thereby... For this purpose the following vectors are defined: ˆ Marking (state) vector m: vector of the numbers of tokens m in all n places p at i i the discrete time point r ˆ Capacities k of the places p i m(r) = [m1 (r). the following restriction holds: mi (r) ≤ ki t = [t1 . the following holds: − + mi (r) = mi (r − 1) − wij + wij ≤ ki (9.20) In order to activate transitions. .. In all pre-places a sufficient amount of tokens has to be available: − ∀pi ∈ utj : mi (r − 1) ≥ wij − wij = 0 (9... kn ] T (9.18) k = [k1 ..23) An individual verification of both activation conditions is a necessary condition. .28) 112 .25) The new marking vector can be described by the introduction of a switching vector u at the time moment r: u(r) = [u1 (r). which leads to a change of the marking vector (m(r −1) → m(r)).24) In the following.. . 2. u2 (r). um (r)]T m(r) = m(r − 1) − W u(r) + W u(r) m(r) = m(r − 1) + W u(r) − + (9.19) i = 1. The capacity in all post-places is not allowed to exceed the maximal capacity after the firing of the transition: − + ∀pi ∈ tj u : mi (r − 1) ≤ ki + wij − wij (9. t2 . the information concerning the position of the tokens.. we assume that no conflicts occur.. the following conditions have to be fulfilled: 1. . k2 .22) After the transition. mn (r)]T i (9.27) (9..21) 2.5. m2 (r).2 Simulation of Petri Nets In order to simulate Petri nets. the capacity restrictions as well as the firing conditions have to be represented.9 Discrete Event Systems 9. . . . . Then we obtain the new state with: − + mi (r) = mi (r − 1) − wij uj (r) + wij uj (r) (9.. tm ]T (9. As the result you obtain the activation function uj for the transition tj : uj (r) = 1 0 ∀ tj activated otherwise (9.

Therefore. which transports M0 into Ml . 5 & # $  © 113 . ( )= [ § 9 8 ] A ( )= [  ]   @ ¡ ¤  ¢£ ¥¦  ¨  7 C ( )=[ 4 ] ( )=[ % 6  ] B  ' !  " () 01 23 Figure 9. 9. We consider again the slightly modified example of the messages exchange (see Fig.3 Deadlock A deadlock in a Petri net is a transition (or a set of transitions) which cannot fire.9.3.16: Reachability graph for the message exchange model.3 Characteristics of Petri Nets One advantage of Petri nets lies in the fact that many characteristics of control systems can be analyzed with the help of Petri nets. one can avoid predecessors of dangerous states by the use of suitable design methods. 9.5.5. One sees clearly that all states can be reached from any starting state. 9. 9.5.5 Petri Nets 9. In this case the Petri net design has to be improved. Through the introduction of the new transitions t5 and t6 deadlocks appear. if there is a firing sequence.3. In the example of the message exchange the reachability graph is given in Fig. One uses the reachability graph for instance to review. This reachability graph is the basis for many analysis methods. we want to explain the most important characteristics of Petri nets in the following section.3. 9.16. Furthermore.5.2 Boundedness and Safety A Petri net is said to be bounded if in no single place pi more than a certain maximal amount of ki tokens are present. whether desired states are not and undesired states are reachable.17). In the case of ki = 1 the Petri net is also said to be safe.1 Reachability A state Ml is called reachable for a starting state M0 .

a Petri net should be live. Fig. this does not hold true in the inverse case.3. ¦   £ ¤ £ ¤ ¥   ¡ © ( )=[ ]  ( )=[ ¢ ] § ( )=[ ] ( )=[ ] ¨ Figure 9. However. 9.17: Deadlock in the message exchange model. For practical uses.19 shows a live variant of the 114 ¨ ¨ ¢ ¥ . A live Petri net is always deadlock free. Fig.18: Non-live Petri net.5. 9.18 shows an example of a deadlock free but non-live Petri net including the reachability graph. 9.9 Discrete Event Systems ¦ ¦ § ¨ ©   ¢ ¡ £   © ¨ § ¢   ¡ ¤ ¥ Figure 9.4 Liveness Transitions that cannot be longer activated are said to be non-live.

1997).5.19: Live message exchange model. One possibility is a temporal restriction of the token permeability of the Petri net (see Fig.9. ¢ £ ¢ £ τ ¡     ¢ £ τ τ ¢   £ § τ τ ¡ Figure 9.5 Petri Nets Petri net for the modified example of the messages exchange. 9.   ¡ ¤ ¥ 115 .4 Continuous Time Petri Nets Continuous time Petri nets offer the possibility of representing the state-dependent and the time-dependent behavior in only one model. 9. ¢ ¡ £ ¢   ¡ ¤ ¥ ¦ ¦   £ § Figure 9.20: Token in a timed place (according to Kiencke.20). The temporal behavior is represented by timed Petri net elements.

the transition fires delayed after the delay time τ . 1997). τ τ +τ   Figure 9.21: Delayed flow of the tokens (according to Kiencke. 9.9 Discrete Event Systems Another possibility is a delay in the flow of the tokens (see Fig. Hereby.21). 116   .

which finally determine the concrete behavior of the model. which are obtained during a fermentation.1: Measured values for a high cell density fermentation of Escherichia coli. still have to be defined and/or identified. Shortly afterwards. This system can be solved according to the methods introduced in Chapter 7.g.10 Parameter Identification 10. balance equations).4). The parameters of the model. This task is performed according to the simulation procedure (see Fig. The structure of a simulation model is determined by setting up basic mathematical relations (e. which delivers as much product as possible at the end of the fermentation. 50 40 glucose [g/l] 30 20 10 0 0 10 20 30 time [h] 40 biomass glucose product addition inductor 125 100 75 50 25 0 biomass [g/l]. It concerns the fermentation of Escherichia coli. After a so-called lag phase the biomass grows exponentially. substrate (glucose). The goal of the simulation task is to determine an optimal feed profile. 1999) as the model structure. one obtains a differential-algebraic system which consists of ten coupled nonlinear differential equations of first order and two nonlinear algebraic equations. the product generation stops and the biomass dissolves (dies). and the product. A so-called compartment model is used (Schneider. The compartment model contains 30 parameters   117 .1 Example We consider the simulation of a bioprocess as an introductory example. As the result of the model building in this case. product [%] Figure 10.1 the measured values for biomass. Through the addition of an inductor (IPTG) the product generation is started. In Fig. are represented. 10. 1.

In order to evaluate the deviations between the real outputs (measurements) and the simulated outputs (output quantities). . If the deviations are significant. . The equation of the real system is yk = a xk + nk k = 1. the parameters are changed until a good fit is obtained. an objective function is formulated. There are 15 parameters which can be determined using preliminary considerations and biological knowledge. .2: Parameter identification procedure. 10. The task of the parameter identification is to determine the remaining parameters of the simulation model on the basis of measurements. N (10.10 Parameter Identification (20 model parameters and 10 initial conditions). .1) 118 . Figure 10. An extension to multiple-variable systems (MIMO systems) and multiple parameters is possible. The degrees of freedom for parameter identification are the choice of the objective function and the calculation rule for the determination of the model parameters. 10. With the help of identification methods the parameters are determined in such a way that the objective function is minimized.2 (Norton. 1997).2 Least Squares Method We consider a linear single-input single-output system (SISO system). The measurements are compared to the simulated values. The general parameter identification procedure is depicted in Fig.

.5) (10. . a= ˆ (10. . y ˆ ˆ = [e1 . In the vector form you obtain the following representation: xT y T T T = [x1 .2 Least Squares Method with y : x : n : a : output or measurement. In case of the least squares method the objective function has the following form: N N N Q= k=1 e2 k = k=1 (yk − yk ) = ˆ k=1 2 (yk − axk )2 ˆ (10. . . .6) (10. y2 . parameter. . . . eN ].8) y ˆ e The objective function arises accordingly as follows: Q = eT e = (y − ax)T · (y − ax) = y T y − 2ˆy T x + a2 xT x.10) ∂ˆ a The sufficient condition is always satisfied.3) (10. The objective function evaluates the deviations between the simulated and the measured values. y2 . The model equation reads: yk = a · xk . yN ].12) 119 . xN ]. the sought parameter a is given by: ˆ yT x . = [ˆ1 . ˆ ˆ a ˆ (10. input. The error is counted quadratically in the objective function in order to prevent the positive and negative deviations of the measurements from the model predictions from balancing themselves. The necessary condition for the minimum of Q is: ∂Q = −2y T x + 2ˆxT x = 0. = [y1 . ˆ ˆ The error between the real system and the model is given by ek = yk − yk . because of ∂2Q = 2xT x > 0. .2) The aim of the parameter identification is the determination of the parameter a in a way ˆ that the objective function Q is minimized. . ˆ (10.4) N denotes the number of measurements.10. .9) We study the valid necessary and sufficient conditions for a minimum of Q.11) (10. .7) (10. xT x This equation is called regression equation. yN ]. e2 . . disturbance. . x2 . . a (10. (10. ∂ˆ2 a Therefore.

17) (10.18) (10.3 Method of Weighted Least Squares The method of least squares does not use any information about the disturbance n. ··· σ 2 (nN ) (10. Information about the disturbance can be available e.  . (10.13) Λ is a weighting matrix. λk k (10.19) (10. if Λ is the covariance matrix of the disturbance.14) (10. this variance matrix is a sensible choice for Λ. a ∂ˆ a The sought parameter a arises as follows: ˆ a= ˆ y T C −1 x .g. .10 Parameter Identification 10. .21) 120 .20) This leads to a simplification of the objective function Q= 1 T e e. E[ni nT ] j = δij C(ti ).15) With this you obtain as the objective function Q = (y − ax)T C −1 (y − ax) = y T C −1 y − 2ˆy T C −1 x + a2 xT C −1 x ˆ ˆ a ˆ The necessary condition for the minimum of the objective function leads to: ∂Q = −2y T C −1 x + 2ˆxT C −1 x = 0. Therefore. . . The variance matrix has the  2 σ (n1 ) 0  0 σ 2 (n2 )  C= . If such information exists. 0 0 following form:  ··· 0 ··· 0   .. . .16) This identification method is called Markov estimation. . σ2 (10. xT C −1 x (10. it can be used for an improvement of the result of the parameter identification. In many cases the expectation E can be characterized by E[n(ti )] = E[n] = 0. in form of the variance matrix C. .  . Λ = C. The idea of the method of weighted least squares consists of weighting the errors e with factors λk : N Q= k=1 1 2 e = eT Λ−1 e.

= . yN The model equations read: y = X · a.4 Multiple Inputs and Parameters 10. am nN (10.. 10.  . . .28) (10. . the method of recursive ˆ regression is applied.  +  . . .24) k = 1. With one output and N measurements the following equation is obtained: yk = a1 x1 + a2 x2 + · · · + am xm + nk k k k In vector form the equations are:    1 x1 y1  y2   x1    2 y = Xa + n or  . The goal is the determination of new estimates with the help of the old ones and the updated measurements.  .g. ˆ ˆ Correspondingly. the parameter ak+1 is estimated on the basis of each new measured value yk+1 using previously calculated quantities.  . ˆ (10. . . . ˆ ˆ ˆ ˆ The necessary condition ∂Q = −2X T y + 2X T X a = 0 ˆ ∂ˆ a leads to the normal equation X T X a = X T y.26) (10. you will have numerical problems inverting the matrix.27) (10. In this method. This possibility arises. (10. . if the parameter estimation can be carried out in parallel with consecutive measuring (e. . for adaptive control).22) x2 1 ··· . N. the objective function is: ˆ Q = eT e = (y − y )T · (y − y ) = (y − X a)T · (y − X a) ˆ ˆ ˆ = y T y − aT X T y − y T X a + aT X T X a. ˆ with which the sought parameters a can be calculated: ˆ a = (X T X)−1 X T y. 121 .25) (10.  .4 Multiple Inputs and Parameters The introduced methods can be extended for the case of m inputs x and parameters a.5 Recursive Regression If one wants to (quickly) carry out online parameter identifications. x2 N ··· ··· .   . .10. . .23) If (X T X)−1 is ill-conditioned.   . ··· x1 N xm N     xm n1 a1 1 m  a   n2  x2   2    .

xm k    | x1 k+1   2    | xk+1    ·  = X T y k + xk+1 yk+1 . .34) We now define the matrix S k as S k = (X T X k )−1 k and obtain ˆ ˆ ak+1 = ak + S k+1 reads S k+1 = (X T X k + xk+1 xT )−1 = (S −1 + xk+1 xT )−1 . k+1 ˆ error (10. xm xm · · · 1 2  x1 k x1 k . . .   | .35) ˆ S k+1 S −1 − I ak + S k+1 xk+1 yk+1 .33) (10. (10. k+1 k k+1 The estimation equation of the recursive regression is ˆ ˆ ak+1 = ak + S k+1 xk+1 (yk+1 − xT ak ). .38) ˆ a0 = 0. k k The (k + 1)-th estimation is: ˆ ak+1 = (X T X k+1 )−1 X T y k+1 . k (10.28)): ˆ ak = (X T X k )−1 X T y k . k k+1 k+1 k therefore S −1 can be written as k S −1 = S −1 − xk+1 xT . .40) 122 .  (10.36) (10. . .29) can be transformed to ˆ X T X k ak = X T y k .37) (10. (10.10 Parameter Identification The k-th estimation (up to the present k measurements are available) is given as (see (10. k k+1 (10.  . .39) with S k+1 (because of the matrix inversion lemma): S k+1 = S k − 1 1+ xT S k xk+1 k+1 S k xk+1 xT S k .30) (10. k k This equation is inserted into (10..32) (10.29) X T y k+1 k+1 (10.31) you obtain ˆ ak+1 = (X T X k+1 )−1 (X T y k + xk+1 yk+1 ). k+1 S 0 = I.  yk   —  | xm k+1 yk+1 y1 y2 . k+1 k+1 With  x1 x2 · · · 1 1  x1 x2 · · · 2  2 = . .   k . a k+1 k+1 extension (10.32): ˆ ˆ ˆ ak+1 = (X T X k+1 )−1 Xk T X k ak + (X T X k+1 )−1 xk+1 yk+1 +ˆ k − ak . .

or cannot be made available at all. Once a model exists which represents well the measured values (for example after a parameter identification). as briefly introduced in the following. 10. p. ti ).1 Search Methods In search methods the optimum is determined simply through an evaluation of the objective function.6 General Parameter Estimation Problem In contrast to the previously considered problems.10. This offers advantages especially for such cases in which the derivatives are very difficult to calculate. p. p In the following sections the search methods are briefly discussed. x ˆ ˆ ˆ Q = h(y. search methods are less efficient than gradient or Newton methods. in the lecture only the socalled hill-climbing methods shall be considered. can be used for different purposes. one has to clarify the connection between the sought parameters and the objective function. p ˆ Gradient methods: The objective function Q(ˆ) and the derivative (the gradient) p Q (ˆ ) are evaluated.g. Out of the wide field of mathematical optimization methods. one can use optimization methods e. The lines with the same value of the objective function (level curves) can be e. ellipses.42) (10. p ˆ Newton methods: The objective function Q(ˆ). An optimization method is an algorithm which changes the sought parameters until the value of objective function is as small as possible. Hereby. as well. p). rather it involves a nonlinear optimization problem which has to be solved with optimization methods. (compare with simulation procedure → use of simulator). Optimization methods. A direct solution ˆ for p is not possible. ˆ ˆ p x(0) = x0 (ˆ ). 123 . In this example the minimum is in Q∗ . For this reason we consider in Fig. In general. y .3 the representation of an objective function in a two-dimensional space with respect to the sought parameters. we now consider the general case of a ˆ nonlinear estimation problem. the gradient Q (ˆ) and the second p p p p derivative Qpp (ˆ ) are evaluated.g. Λ. Different hill-climbing methods can be distinguished (Hoffmann and Hofmann. for design improvement etc.41) (10.6 General Parameter Estimation Problem 10.6.43) The already introduced quadratic objective function is used here. ˆ x ˆ ˆ y (ti ) = g(ˆ . In order to understand these methods. also a parameter vector p is sought which minimizes a given objective function Q: ˙ x = f (ˆ . t). 10. On such a level curve the values of the objective function remain the same independently of the combination of the parameters. (10. 1971): ˆ Search methods: Only the objective function Q(ˆ) is evaluated.

1. j j = 1. . ¡¢  ¦ ¨ ©§  ¥£ ¤      % & (' ! " $# Figure 10.3: Objective function in the two-dimensional space. . n. The new parameter vector is given by: pj+1 = pj + λ∗ v j .4: Successive variation of variables.1 Successive Variation of Variables The method of the successive variation of variables starts at an initial point p(1) .4).6. Then again p1 is varied and so forth.44) The search direction v j is determined through the choice of the coordinate system. The change of the objective function determines the length λ of the vector v. Then one varies p2 (with the p new (constant) value for p1 ) until the objective function reaches a minimum (see Fig. One varies p1 (for constant p2 ) until Q(ˆ ) reaches a minimum.10 Parameter Identification ¡ ¢  ¦ ¨ ©§     Figure 10. (10. 10. . 124 ¤ ¥£     . 10. .

For these points the objective function is evaluated.5) the objective function is evaluated and again the point with the worst objective function is reflected. £ ¢  " !     ¨ © § ¦ ¥ ¤   ¡ Figure 10. For this new parameter combination (point 4 in Fig. 2. 3). 10. and so on.   125 . at the beginning n + 1 points are determined (in Fig. £ ¢  " !     ¨ © § ¦ ¥ ¤   # $   ¡ Figure 10.5: Simplex method. 10.5 the points 1.2 Simplex Methods For the simplex method.1.6 General Parameter Estimation Problem 10. defining an n-dimensional simplex. The point with the worst objective function value is reflected in the centroid of the remaining n points (reflection).6. For the use of such optimization methods it is not guaranteed that the global optimum is always found.6: Problem within the simplex method.10.

other possibilities of determining new parameter combinations are considered (see Fig.1.8 the simulation results calculated with the optimal parameter set is compared to the measurements. 50 40 glucose [g/l] 30 20 10 0 0 10 20 30 time [h] 40 biomass glucose addition inductor 125 8 acetate.10 Parameter Identification In Fig.7: Nelder-Mead method. ¥ ¤ ¡   ¢ ¡ £ ¢ ( )< ( ) ¤ ¤ ¡ ¥ § £ £ § ¢ ¡ ( )≥ ( ) ¨ £ ( )≤ ( ) ( )> ( ) ¨ ¨ ¢ Figure 10.6. With the Nelder-Mead method 15 parameters of the compartment model of the introductory example are determined. Because of the structure of the level curves the simplex method remains in a local minimum.6 such a case is depicted for the simplex method. 10. In Fig. nitrogen [g/l] 100 biomass [g/l] 75 50 25 0 acetate nitrogen product addition inductor 1 0.7). 10.5 0 © £ 6 ¡ ¦ 4 2 0 0 10 20 time [h] 30 40 Figure 10. a search is performed into one direction as long as the objective function improves. 10.8: Simulated trajectories after parameter identification. 126 product [−] ¡       . In doing so. In addition to the reflection used in the simplex method. 10.3 Nelder-Mead Method The Nelder-Mead method comprises a modified simplex method. the global optimum is not found.

A system is a set with interrelated parts (components). 1991) If a system component cannot be further decomposed. 1. The set of all values of the state variables forms the state of the system.1 Problem Definition In order to define the simulation problem. These system elements have specific characteristics (attributes).11 Summary This chapter gives a summary of the course oriented along the simulation procedure (see Fig.4) and Page (1991). The problem definition corresponds to the definition of the system’s borders.1: Basic concepts in systems theory (according to Page. Therefore in the following we are going to define the term system in more detail. Figure 11. it is called a system element. 11. 11.1). we have to start with the definition of the questions of interest and the goals we want to reach with the help of simulation. You can distinguish between the following systems: 127 . Changes in these characteristics correspond to changes in the state variables. which interact to reach a common goal (see Fig.

3). 128 . 1991). Therefore. according to their state transition (see Fig. they are going to be discussed and classified in more detail in this section. or according to the application of the model (see Fig. dynamic (graphical) mathematical models play an important role in simulation techniques.2). 11. Classification of Dynamic Mathematical Models As illustrated in the course. models can be classified according to their transformation and their analysis method (see Fig 11.2: Classification of models according to the used transformation and analysis method (according to Page.11 Summary open system at least one input or output static system ↔ closed system no inputs or outputs dynamic system time dependent change of the state ↔ cybernetic system ↔ non-cybernetic system with without feedback loops 11.2 Modeling Among other things. 11.4). Figure 11.

2 Modeling Figure 11. Examples. 1991).g. which comprise not only derivatives of the time but also of the space coordinates. In the course.3: Classification of models according to the type of state transition (according to Page. Figure 11. Discrete event systems are a special case of time discrete systems. 1991). The graphical mathematical representation is carried out by e. Petri nets. lumped systems include a finite number of states. the example of a heat conductor has been studied. The time steps during the process are determined by incidentally occurring external events or by functions of the actual values of the state variables. Their mathematical representation can be given by systems of ordinary differential equations of first order or algebraic equation systems or a combination of both (DAE system). The mathematical representation of distributed systems is given by partial differential equations. are the vertical movement of a motor vehicle and an electrical circuit. ˆ Time continuous and time discrete systems: ˆ Discrete event systems: In time continuous systems the state variables are continuous functions of time. which have been discussed during the course.4: Classification of models according to their application (according to Page. ˆ Lumped and distributed systems: ˆ Time invariant and time variant systems: 129 . In the course we studied the examples of a soda machine and a traffic light circuit.11. In contrast to distributed systems.

The numerical methods treated in the course shall be listed in note form in this section: ˆ equation systems – linear: Gauß elimination ˆ ordinary differential equation systems – implicit/explicit method – nonlinear: Newton-Raphson method – one-step method/multi-step method ˆ differential-algebraic systems: solvability ˆ partial differential equations – method of lines.4 Simulators Simulation software can be distinguished according to different criteria: 130 . method of differences ˆ discrete event systems – method of weighted residuals – graphical representation – mathematical representation 11. the model parameters are time independent as well. Therefore. An example is a chemical reaction system. In contrast to dynamic models.3 Numerics The predator-prey model has been introduced as an example for a nonlinear model. the behavior of steady-state models is time independent.1) ˆ Steady-state and dynamic models: 11.11 Summary ˆ Linear and nonlinear systems: In time invariant systems the systems function is independent of time (autonomous system). In linear systems the superposition principle holds true: f (u1 + u2 ) = f (u1 ) + f (u2 ) (11.

4.1 Application Level ˆ Software on the level of programming languages: ˆ Software on the level of models: compilers or interpreters of a programming language.11. the complexity of these modules in different simulation systems varies significantly.g. compiler for general high-level programming languages ˆ simulation specific software: software especially developed for the purpose of simulation (e. graphical user interface. results analysis.3 Language Level 11.g. Simulink) ˆ software oriented along specific classes of problems and specific model types ˆ software oriented along the problem definition of one application domain ˆ general programming languages e. FORTRAN. Matlab) completely implemented models. only the input parameters are freely selectable (e. Not all of these modules have to be present in a simulation system. and simulation data base can be distinguished.4. Thereby. flight simulator) integrated.4 Simulators 11. the modules modeling component. 131 . interactive working environment (e.4 Structure of a Simulation System Fig.g. model and method data base.g. Furthermore. which is suitable for the implementation of simulation models (e. 11. Simulink) ˆ Software on the level of support systems: 11. PASCAL ˆ Simulation packages collection (library) of procedures and functions ˆ Simulation languages – low-level simulation languages – high-level simulation languages: implementation of models is eased – systems-oriented simulation language: direct modeling of specific system classes 11.g.4. C(++). experiment control.4.5 shows the general structure of a simulation system.2 Level of Problem Orientation ˆ universally usable software: e.g.

132 . Simulation environments can be classified according to the following criteria and characteristics: ˆ with or without programming environment ˆ which model classes are going to be supported ˆ complexity of the model library ˆ complexity of the numerical methods ˆ on-line or offline graphics ˆ possible simulation experiments ˆ open or closed structure (interfaces) ˆ carrier language: FORTRAN.and block-oriented simulators are illustrated by the following example.5: Structure of a simulation system (according to Page..11 Summary ⋅ Figure 11. 1991). ˆ compiler or interpreter ˆ platform: PC. .. VAX.. . UNIX.. C.or block-oriented The differences between equation. ˆ equation.

x12 ) y21 = g21 (x21 ) y31 = g31 (x31 ) y32 = g32 (x31 ) y41 = g41 (x41 .8) (11. y41 ) = 0 The connections between the model blocks are described by: x12 − y31 = 0 x21 − y11 = 0 x31 − y21 = 0 x42 − y32 = 0 (11. A block-oriented simulator solves each block separately in a specific order.4) (11.2) (11. y11 ) = 0 f2 (x21 . 11.6.(11. The equations of all model blocks are given by: f1 (x11 .5) An equation-oriented simulator solves (11. x12 .11. y31 . an equation-oriented simulator has no graphical representation (input) of the model structure.2) . For this reason. Example In Fig. There is no block structure identifiable in the equations anymore.11) (11.7) (11.6: Block-oriented representation of a simulation model. From (11.14) The known inputs of the system are: x11 and x41 .6) (11.4 Simulators @ 89 1   ( 0 ) 4 ¨©  ' ¡ £ &   ¥  ¢  ¤ 3 2    " ! 7 § % Figure 11.5) the equations of the block-oriented simulator can be derived: y11 = g11 (x11 .2) (11. x42 . the block-oriented representation of a simulation model is given. y32 ) = 0 f4 (x41 .13) (11. x42 ) (11.10) (11.12) (11. y21 ) = 0 f3 (x31 .9) (11.9) simultaneously. One possible solution strategy for the equations is the following: # $ ¦ 56 133 .3) (11.

two cases can be distinguished: ˆ direct solution: (weighted) least squares method ˆ indirect solution using optimization methods 11. that the differences between simulated and measured values are going to be minimized. ˆ The experiment costs are too high.5 Parameter Identification In parameter identification the parameters of a model are calculated in such a way. Calculate y11 . . . . Estimate x12 2.11 Summary 1. x12 − y31 =? a) = 0: calculate y41 b) = 0: set x12 = y31 and repeat step 2 11.6 Use of Simulators The major reasons for the use of simulators are: ˆ The system is not available. then y21 . ˆ There are disturbances within the real system. 134 . ˆ The time constants of the system are too large. y32 3. Thereby. y31 . these differences have to be squared and summarized in an objective function. . ˆ The variables of interest are difficult to measure or not measurable at all. Therefore. With the use of some calculation algorithm one tries to minimize this objective function. ˆ Experiments within the system are too dangerous.

1991). Therefore. 11. Figure 11.7 Potentials and Problems 11. 135 .11.7 Potentials and Problems In spite of the many different possibilities and chances offered by simulation.7.7: Potentials and problems of modeling and simulation (according to Page. the potentials and problems of modeling and simulation are summarized in Fig. you have to assume that the use of simulation for a specific problem ist justified (and affordable).

11 Summary 136 .

Bachmann. Breitenecker. (1995).. L.Automatisierungstechnik 48: A57–A60. at Automatisierungstechnik 48: A37–A40. (1992). F. C. (1993). 12th European Simulation Multiconference. Simulationstechnik. C. (2000b). F. Teil 13. in D. Bischoff.Bibliography Astr¨m. H. B. Clauß. J. (1967). and Schwarz. u u Institut f¨r Geometrie und Praktische Mathematik. 10. and Zehnder. P. (1992).. H. number 1 in AIChE Continuing Education Series. Bratley. Teil 10. and Schwarz. Elmqvist. and Mattsson. (1994). Prentice Hall. K.7. Simulation Fundamentals. pp. C. New York. and Wiesmann. u Engeln-M¨ller. Teil 15. at . Troch. Atherton and P. B. (1976). P. Numerik-Algorithmen mit Fortran 77u Programmen. Manchester. Borne (eds). Beater. Braunschweig. P. Braunschweig. Bossel. Kohlas. Simulation modelling formalism: Ordinary differential equations. B. 137 .00. Cellier. American Institute of Chemical Engineers. I. Teil 14. Schneider. (1998). and Kopacek. F. New York. and Schrage. (1987). S. BI Wissenschaftsverlag. A... G. Objektorientierte Modellierung Physikalischer Systeme. Objektorientierte Modellierung Physikalischer Systeme. A Guide to Simulation. Clauß. Objektorientierte Modellierung Physikalischer Systeme.Automatisierungstechnik 48: A53–A56.. (2000). Oxford. Modellbildung und Simulation. D. K. Dahmen. Fox. Evolution of continuous-time modeling o and simulation. Springer. Springer. K. ESM’ 98. P. RWTH Aachen. Schneider. pp. Simulationstechnik. W. Mannheim.. A. (2000a). Fundamentals of Process Analysis and Simulation. and Himmelblau. 1–10. and Reutter. 420–423.Automatisierungstechnik 48: A49–A52. at . Vieweg & Sohn Verlagsgesellschaft mbH. at . Bauknecht. H. Pergamon Press. bestellt. Einf¨hrung in die numerische Mathematik (f¨r Maschinenbauer). (2000). Concise Encyclopedia of Modelling & Simulation. (eds) (1990). Bennett. Objektorientierte Modellierung Physikalischer Systeme. Berlin. P. Vieweg.

