© Attribution Non-Commercial (BY-NC)

Contents

1 On models 1

1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . 1

1.2 Classification of chemical process models . . . . . . . . . 4

1.3 Lumped parameter, steady state models . . . . . . . . . . 5

1.3.1 Example of a stagewise separation process . . . . 5

1.3.2 Process flow sheet simulation . . . . . . . . . . . . 9

1.3.3 Example of a multicomponent flash . . . . . . . . . 11

1.3.4 Example of a phenomenological model . . . . . . . 13

1.3.5 Example of reactors in series . . . . . . . . . . . . . 14

1.4 Lumped parameter, dynamic models . . . . . . . . . . . . 15

1.4.1 Example of cooling a molten metal . . . . . . . . . 15

1.4.2 Ozone decomposition . . . . . . . . . . . . . . . . . 16

1.5 Distributed parameter, steady state models . . . . . . . . 17

1.5.1 Heat transfer through a tapered fin . . . . . . . . 17

1.6 Distributed parameter, dynamic models . . . . . . . . . . 20

1.6.1 Heat transfer through a tapered fin . . . . . . . . 20

2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . 22

2.2 Bisection method . . . . . . . . . . . . . . . . . . . . . . . . 24

2.3 Regula-falsi method . . . . . . . . . . . . . . . . . . . . . . 29

2.4 Secant method . . . . . . . . . . . . . . . . . . . . . . . . . . 29

2.5 Newton’s method . . . . . . . . . . . . . . . . . . . . . . . . 31

2.6 Muller’s method . . . . . . . . . . . . . . . . . . . . . . . . . 32

2.7 Fixed point iteration . . . . . . . . . . . . . . . . . . . . . . 34

2.8 Error analysis and convergence acceleration . . . . . . . . 37

2.8.1 Convergence of Newton scheme . . . . . . . . . . . 40

2.9 Deflation technique . . . . . . . . . . . . . . . . . . . . . . . 41

2.10 Parameter continuation . . . . . . . . . . . . . . . . . . . . 43

2.10.1 Euler-Newton continuation . . . . . . . . . . . . . . 43

2.10.2 Homotopy continuation . . . . . . . . . . . . . . . . 43



2.11.1 MATLAB . . . . . . . . . . . . . . . . . . . . . . . . . 43

2.11.2 Mathematica . . . . . . . . . . . . . . . . . . . . . . . 44

2.12 Exercise problems . . . . . . . . . . . . . . . . . . . . . . . 45

2.12.1 Multicomponent, isothermal flash model . . . . . . 45

2.12.2 Compressible flow in a pipe . . . . . . . . . . . . . 46

2.12.3 A model for separation processes . . . . . . . . . . 47

2.12.4 Peng-Robinson Equation of State . . . . . . . . . . . 49

2.12.5 Transient heat conduction in semi-infinite slab . . 50

2.12.6 Turbulent flow in a parallel pipeline system . . . . 51

3.1 Matrix notation . . . . . . . . . . . . . . . . . . . . . . . . . 54

3.1.1 Basic operations . . . . . . . . . . . . . . . . . . . . . 54

3.2 Matrices with special structure . . . . . . . . . . . . . . . . 57

3.3 Determinant . . . . . . . . . . . . . . . . . . . . . . . . . . . 57

3.3.1 Laplace expansion of the determinant . . . . . . . 59

3.4 Direct methods . . . . . . . . . . . . . . . . . . . . . . . . . 60

3.4.1 Cramer's rule . . . . . . . . . . . . . . . . . . . . . . . 60

3.4.2 Matrix inverse . . . . . . . . . . . . . . . . . . . . . . 61

3.4.3 Gaussian elimination . . . . . . . . . . . . . . . . . . 64

3.4.4 Thomas algorithm . . . . . . . . . . . . . . . . . . . 70

3.4.5 Gaussian elimination - Symbolic representation . . 71

3.4.6 LU decomposition . . . . . . . . . . . . . . . . . . . 75

3.5 Iterative methods . . . . . . . . . . . . . . . . . . . . . . . . 78

3.5.1 Jacobi iteration . . . . . . . . . . . . . . . . . . . . . 80

3.5.2 Gauss-Seidel iteration . . . . . . . . . . . . . . . . . 81

3.5.3 Successive over-relaxation (SOR) scheme . . . . . . 82

3.5.4 Iterative refinement of direct solutions . . . . . . . 84

3.6 Gram-Schmidt orthogonalization procedure . . . . . . . . 85

3.7 The eigenvalue problem . . . . . . . . . . . . . . . . . . . . 86

3.7.1 Wei-Prater analysis of a reaction system . . . . . . 86

3.8 Singular value decomposition . . . . . . . . . . . . . . . . 86

3.9 Generalized inverse . . . . . . . . . . . . . . . . . . . . . 86

3.10 Power iteration . . . . . . . . . . . . . . . . . . . . . . . . . 86

3.11 Software tools . . . . . . . . . . . . . . . . . . . . . . . . . . 86

3.11.1 Lapack, Eispack library . . . . . . . . . . . . . . . . . 86

3.11.2 MATLAB . . . . . . . . . . . . . . . . . . . . . . . . . 86

3.11.3 Mathematica . . . . . . . . . . . . . . . . . . . . . . . 86

3.12 Exercise problems . . . . . . . . . . . . . . . . . . . . . . . 86

3.12.1 Laminar flow through a pipeline network . . . . . 86


4 Nonlinear equations 89

4.1 Newton’s method . . . . . . . . . . . . . . . . . . . . . . . . 90

4.2 Euler-Newton continuation . . . . . . . . . . . . . . . . . . 96

4.3 Arc-length continuation . . . . . . . . . . . . . . . . . . . . 96

4.4 Quasi-Newton methods . . . . . . . . . . . . . . . . . . . . 96

4.4.1 Levenberg-Marquardt method . . . . . . . . . . . . . 96

4.4.2 Steepest descent method . . . . . . . . . . . . . . . 96

4.4.3 Broyden’s method . . . . . . . . . . . . . . . . . . . 96

4.5 Exercise problems . . . . . . . . . . . . . . . . . . . . . . . 96

4.5.1 Turbulent flow through a pipeline network . . . . 96

5 Functional approximations 98

5.1 Approximate representation of functions . . . . . . . . . 99

5.1.1 Series expansion . . . . . . . . . . . . . . . . . . . . 99

5.1.2 Polynomial approximation . . . . . . . . . . . . . . 99

5.2 Approximate representation of data . . . . . . . . . . . . . 104

5.2.1 Least squares approximation . . . . . . . . . . . . . 105

5.3 Difference operators . . . . . . . . . . . . . . . . . . . . . . 107

5.3.1 Operator algebra . . . . . . . . . . . . . . . . . . . . 108

5.3.2 Newton forward difference approximation . . . . . 109

5.3.3 Newton backward difference approximation . . . . 112

5.4 Inverse interpolation . . . . . . . . . . . . . . . . . . . . . . 116

5.5 Lagrange polynomials . . . . . . . . . . . . . . . . . . . . . 118

5.6 Numerical differentiation . . . . . . . . . . . . . . . . . . . 122

5.6.1 Approximations for first order derivatives . . . . . 122

5.6.2 Approximations for second order derivatives . . . 124

5.6.3 Taylor series approach . . . . . . . . . . . . . . . . . 126

5.7 Numerical integration . . . . . . . . . . . . . . . . . . . . . 126

5.7.1 Romberg Extrapolation . . . . . . . . . . . . . . . . 129

5.7.2 Gaussian quadratures . . . . . . . . . . . . . . . . . 132

5.7.3 Multiple integrals . . . . . . . . . . . . . . . . . . . . 132

5.8 Orthogonal functions . . . . . . . . . . . . . . . . . . . . . 132

5.9 Piecewise continuous functions - splines . . . . . . . . . . 132

6.1 Model equations and initial conditions . . . . . . . . . . . 133

6.1.1 Higher order differential equations . . . . . . . . . 134

6.2 Taylor series expansion . . . . . . . . . . . . . . . . . . . . 135

6.2.1 Alternate derivation using interpolation polynomials 135

6.2.2 Stability limits . . . . . . . . . . . . . . . . . . . . . . 137

6.2.3 Stiff differential equations . . . . . . . . . . . . . . 139

6.3 Multistep methods . . . . . . . . . . . . . . . . . . . . . . . 141


6.3.2 Implicit schemes . . . . . . . . . . . . . . . . . . . . 142

6.3.3 Automatic stepsize control . . . . . . . . . . . . . . 143

6.4 Runge-Kutta Methods . . . . . . . . . . . . . . . . . . . . . 145

6.4.1 Explicit schemes . . . . . . . . . . . . . . . . . . . . 146

6.4.2 Euler formula revisited . . . . . . . . . . . . . . . . 147

6.4.3 A two-stage (v = 2) Runge-Kutta scheme . . . . . . 147

6.4.4 Semi-implicit & implicit schemes . . . . . . . . . . . 150

6.4.5 Semi-Implicit forms of Rosenbrock . . . . . . . . . 151

6.5 Dynamical systems theory . . . . . . . . . . . . . . . . . . 152

7.1 Model equations and boundary conditions . . . . . . . . . 153

7.2 Finite difference method . . . . . . . . . . . . . . . . . . . . 156

7.2.1 Linear problem with constant coefficients . . . . . 156

7.2.2 Linear problem with variable coefficients . . . . . . 157

7.2.3 Nonlinear problem . . . . . . . . . . . . . . . . . . . 159

7.3 Quasilinearization of nonlinear equations . . . . . . . . . 161

7.4 Control volume method . . . . . . . . . . . . . . . . . . . . 163

7.5 Shooting method . . . . . . . . . . . . . . . . . . . . . . . . 163

7.6 Collocation methods . . . . . . . . . . . . . . . . . . . . . . 163

7.7 Method of weighted residuals . . . . . . . . . . . . . . . . 163

A.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . A.1

A.2 Userid . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A.5

A.3 Overview of the network . . . . . . . . . . . . . . . . . . . . A.5

A.4 The client-server model . . . . . . . . . . . . . . . . . . . . A.8

A.4.1 File servers . . . . . . . . . . . . . . . . . . . . . . . . A.8

A.4.2 License servers . . . . . . . . . . . . . . . . . . . . . A.9

A.4.3 X servers . . . . . . . . . . . . . . . . . . . . . . . . . A.11

A.4.4 News server . . . . . . . . . . . . . . . . . . . . . . . A.13

A.4.5 Gopher service . . . . . . . . . . . . . . . . . . . . . A.15

A.4.6 World wide web service . . . . . . . . . . . . . . . . A.16

A.5 Communication . . . . . . . . . . . . . . . . . . . . . . . . . A.16

A.5.1 Software tools . . . . . . . . . . . . . . . . . . . . . . A.16

A.5.2 Connection to the AIX machines from OS/2 machines A.17

A.5.3 Connection to the AIX machines from a home computer . . . A.19

A.5.4 Connection to the AIX machine from a DOS machine using tn3270 . . . A.20

A.5.5 File transfer with Kermit . . . . . . . . . . . . . . . A.21


A.5.7 File transfer from AIX to OS/2 machines . . . . . . A.23

A.6 Operating systems . . . . . . . . . . . . . . . . . . . . . . . A.23

A.6.1 How to signon and logout . . . . . . . . . . . . . . . A.23

A.6.2 Customizing your environment - the .profile file . A.24

A.6.3 File management . . . . . . . . . . . . . . . . . . . . A.24

A.6.4 How to get online help . . . . . . . . . . . . . . . . . A.25

A.7 Editors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A.26

A.7.1 Emacs - the ultimate in editors . . . . . . . . . . . . A.26

A.8 Fortran compilers . . . . . . . . . . . . . . . . . . . . . . . . A.30

A.9 Debuggers . . . . . . . . . . . . . . . . . . . . . . . . . . . . A.31

A.10 Application programs . . . . . . . . . . . . . . . . . . . A.32

A.10.1 ASPEN PLUS . . . . . . . . . . . . . . . . . . . . . . . A.32

A.10.2 Xmgr . . . . . . . . . . . . . . . . . . . . . . . . . . . A.32

A.10.3 TEX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A.33

A.10.4 pine - the mail reader . . . . . . . . . . . . . . . . . A.33

A.10.5 tin - the news reader . . . . . . . . . . . . . . . . . . A.34

A.11 Distributed Queueing System . . . . . . . . . . . . . . . A.34

A.11.1 Example of a CPU intensive MATLAB job . . . . . . A.35

A.11.2 Example of a FLOW3D job . . . . . . . . . . . . . . . A.36

A.12 Printing reports, graphs etc. . . . . . . . . . . . . . . A.36

A.12.1 Using the network printer from AIX machines . . A.38

A.12.2 Using the network printer from OS/2 machines . . A.39

A.13 Anonymous ftp service . . . . . . . . . . . . . . . . . . A.39

B.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . B.1

B.2 Starting a MATLAB session . . . . . . . . . . . . . . . . . . B.2

B.2.1 Direct access on AIX machines . . . . . . . . . . . . B.2

B.2.2 Remote access from OS/2 machines or home computer . . . B.3

B.3 MATLAB basics . . . . . . . . . . . . . . . . . . . . . . . . . B.4

B.3.1 Using built in HELP, DEMO features . . . . . . . . . B.5

B.3.2 Data entry, line editing features of MATLAB . . . . B.8

B.3.3 Linear algebra related functions in MATLAB . . . . B.13

B.3.4 Root finding . . . . . . . . . . . . . . . . . . . . . . . B.15

B.3.5 Curve fitting . . . . . . . . . . . . . . . . . . . . . . . B.16

B.3.6 Numerical integration, ordinary differential equations B.16

B.3.7 Basic graphics capabilities . . . . . . . . . . . . . . B.17

B.3.8 Control System Toolbox . . . . . . . . . . . . . . . . B.19

B.3.9 Producing printed output of a MATLAB session . . B.21

B.3.10 What are m-files? . . . . . . . . . . . . . . . . . . . . B.23


B.3.12 Debugging tools . . . . . . . . . . . . . . . . . . . . . B.26

C.1 Introduction to the shell and the desktop . . . . . . . . . C.1

C.1.1 The ".profile" file . . . . . . . . . . . . . . . . . . . . C.2

C.2 Managing files . . . . . . . . . . . . . . . . . . . . . . . . . . C.4

C.2.1 Making sense of the directory listing - the "ls" command . . . C.4

C.2.2 Changing permission on files - the "chmod" command C.6

C.2.3 Moving files . . . . . . . . . . . . . . . . . . . . . . . C.6

C.2.4 Copying files . . . . . . . . . . . . . . . . . . . . . . C.7

C.2.5 Changing ownership of files - the "chown" command C.8

C.2.6 Compressing files - the "compress" command . . C.8

C.2.7 Removing files - the "rm" command . . . . . . . . C.8

C.3 Managing processes . . . . . . . . . . . . . . . . . . . . . . C.9

C.3.1 Examining jobs or processes - the "ps" command C.9

C.3.2 Suspending jobs or processes . . . . . . . . . . . . C.10

C.3.3 Terminating jobs or processes - the "kill" command C.10

C.3.4 Initiating background jobs - the "nohup" command C.10

C.3.5 Script files & scheduling jobs - the "at" command C.11

C.4 List of other useful AIX commands . . . . . . . . . . . C.12

Bibliography R.1

List of Tables

2.2 Feed composition & equilibrium ratio of a natural gas mixture . . . 46

5.2 Saturation temperature vs. pressure from steam tables . 105

5.3 Inverse interpolation . . . . . . . . . . . . . . . . . . . . . . 118

5.4 Summary of difference approximations for derivatives . 124

network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A.7

A.2 List of software on the chemical and materials engineering network . . . A.12

A.3 List of ftp commands . . . . . . . . . . . . . . . . . . . . . A.22

A.4 List of frequently used emacs functions and their key bindings . . . A.29

B.2 General purpose MATLAB Ver 4.0 commands . . . . . . . B.7

B.3 Linear algebra related functions in MATLAB . . . . . . . . B.14

B.4 Graphics related function in MATLAB . . . . . . . . . . . . B.18

B.5 List of functions from the Control System Toolbox . . B.22

B.6 Program control related help topics . . . . . . . . . . . . . B.27


List of Figures

1.2 Three stage separation process . . . . . . . . . . . . . . . . 5

1.3 Linear and nonlinear equilibrium model . . . . . . . . . . 8

1.4 Example of material balance equations in a process flow sheet . . . 10

1.5 Multicomponent, isothermal flash process . . . . . . . . . 12

1.6 Reactors in series . . . . . . . . . . . . . . . . . . . . . . . . 14

1.7 Heat transfer from a molten metal . . . . . . . . . . . . . 16

1.8 Heat transfer through a fin . . . . . . . . . . . . . . . . . . 18

2.2 Graphical representation of some simple root finding algorithms . . . 25

2.3 MATLAB implementation of the bisection algorithm . . . 27

2.4 MATLAB implementation of the secant algorithm . . . . 30

2.5 Graphical representation of Muller’s scheme . . . . . . . 33

2.6 Graphical representation of fixed point scheme . . . . . . 36

2.7 Graphical illustration of mean value theorem . . . . . . . 38

2.8 Graphical illustration of the deflation technique . . . . 42

2.9 Multicomponent flash function . . . . . . . . . . . . . . . . 45

2.10 Turbulent flow in a parallel pipe . . . . . . . . . . . . . . . 51

matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65

3.2 Naive Gaussian elimination scheme . . . . . . . . . . . 67

3.3 MATLAB implementation of naive Gaussian elimination . 68

3.4 Thomas algorithm . . . . . . . . . . . . . . . . . . . . . . . 72

3.5 MATLAB implementation of Thomas algorithm . . . . . . 73

3.6 MATLAB implementation of LU decomposition algorithm 77

3.7 MATLAB implementation of Gauss-Seidel algorithm . . . 83

3.8 Laminar flow in a pipe network . . . . . . . . . . . . . . . 86



4.2 CSTR in series example - function & Jacobian evaluation 95

different levels of truncation . . . . . . . . . . . . . . . . . 100

5.2 MATLAB implementation illustrating steps of functional approximation . . . 103

5.3 Structure of Newton forward difference table for m equally spaced data . . . 111

5.4 Example of a Newton forward difference table . . . . . . 112

5.5 Structure of Newton backward difference table for 5 equally spaced data . . . 114

5.6 Example of a Newton backward difference table . . . . . 115

5.7 Example of an inverse interpolation . . . . . . . . . . . 116

5.8 Structure of divided difference table for 4 unequally spaced data . . . 120

5.9 MATLAB implementation of Lagrange interpolation polynomial . . . 121

5.10 Illustration of Romberg extrapolation . . . . . . . . . . . . 131

5.11 MATLAB implementation of quadrature evaluation . . . . 132

6.2 Stepsize control strategies for multistep methods . . . . 144

6.3 MATLAB implementation of ozone decomposition model 145

6.4 Results of ozone decomposition model show stiff system behavior . . . 146

data points . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156

7.2 One dimensional control volume discretization . . . . . . 163

A.2 Logical dependencies between various machines in the network . . . A.6

A.3 Network File System server and clients . . . . . . . . . . . A.9

A.4 Network Information Service . . . . . . . . . . . . . . . . . A.10

A.5 X-Windows client-server model . . . . . . . . . . . . . . . . A.14

A.6 Anatomy of an emacs window . . . . . . . . . . . . . . . . A.27

To see a World in a Grain of Sand,

And a Heaven in a Wild Flower,

Hold Infinity in the palm of your hand,

And Eternity in an hour

— WILLIAM BLAKE

Chapter 1

On models

1.1 Introduction

Chemical engineering may be broadly defined as the profession whose goal is to explore and understand the physical and chemical processes involved in converting a raw material into a useful product, and to use this knowledge in designing, constructing and operating chemical process plants. This definition is as arbitrary as anything else that one might propose. In fact, if one substitutes the word chemical in the above definition with any other (such as mechanical or electrical), it would remain equally valid. This is because the basic principles, and the scientific methodology we use to uncover such principles, remain the same in any field of engineering or science. A broader, although highly personal, view of our attempt to understand the physical world, describe it in the language of mathematics, and finally investigate its consequences by means of analytical, graphical or numerical methods is shown in Figure 1.1.

A mathematical model is at best an approximation to the physical world. Such models are constructed based on certain conservation principles and/or empirical observations. Those curious about the nature of physical laws should read a delightful little book by Feynman (1967) on the character of physical laws. As a matter of convenience, mathematical models can be classified as linear or non-linear, steady state or dynamic, lumped or distributed parameter models; examples of each class are provided later in this chapter.

[Figure 1.1: A view of the modelling process. Observation of the physical world (process plants involving transport, reaction and equilibrium processes), aided by thought experiments and physical experiments, leads to concepts such as force and energy, to conservation laws (e.g. mass, energy) and to phenomenological models (e.g. equations of state, curve-fitted relationships y = f(x) between observed and manipulated variables). The resulting mathematical model, an approximation to the physical world, is classified (I) as linear or non-linear (in the light of ignorance we tend to assume linearity, dating back to the flat earth model; in chemical process models nonlinearity arises primarily in the description of phase behavior, reaction rates and fluid flow) and (II) as steady state (lumped: algebraic equations; distributed: ordinary differential equations of the boundary value type or elliptic partial differential equations) or dynamic (lumped: ordinary differential equations of the initial value type; distributed: parabolic partial differential equations). The numerical model, in turn an approximation to the mathematical model, maps inputs to outputs; examples include ASPEN, HYSIM, PROCESS, FLOW3D, HCOMP, SPEEDUP and DREM. Before the computer era (before about 1950) graphical methods were often used to solve such problems; they have largely been replaced by computer simulation tools.]

In general, non-linearity is found to occur quite naturally and frequently in nature; it is also very difficult to analyse non-linear models without the aid of a computer.

A numerical model (or a computer simulation tool) is an approximation to the mathematical model. Although the importance of mathematical modelling in chemical engineering has been recognized since the early 1920s, it was the text by Bird et al. (1960) on Transport Phenomena that proved to be the major inspiration in exploring the link between the physical world and the mathematical one for transport processes involving momentum, heat and mass transfer. Since then a number of outstanding texts have appeared that explore this relationship for reaction and equilibrium processes as well. While such studies form the core of the chemical engineering curriculum, the importance of sharpening our mathematical abilities, and the need to incorporate this as part of the curriculum, was recognized and aided by the appearance of the early textbooks by Amundson (1966) and Jenson & Jeffreys (1963), which dealt specifically with mathematical applications in chemical engineering. The texts by Lapidus (1962) and Rosenbrock (1966) served a similar purpose in introducing digital computational methods into the analysis of chemical processes.

We are now at a new threshold; computers have become quite pervasive. Significant advances have been made in our ability to analyse non-linear systems. The advances in both hardware and software technology have been revolutionary. As a result of these advances, computer aided design and analysis has become a standard tool, as evidenced by the success of several commercial packages such as ASPEN PLUS, PROCESS, HYSIM (steady state process simulators), FLOW3D, FLUENT (fluid dynamics simulators) and HCOMP, DREM (multiphase and transient pipeline flow simulators). In addition to such simulators, which are specific to certain classes of problems, general purpose mathematical tools such as MATLAB (for matrix linear algebra functions) and Mathematica and MAPLE (for symbolic computation) provide easy access to a vast array of mathematical functions and the ability to process them both symbolically and numerically. Such packaged tools tend to accomplish the following: (i) codify the most advanced algorithms, (ii) assemble a vast database (in the case of physical properties) or knowledge base (in the case of mathematical functions, as in MAPLE and Mathematica), and (iii) make these accessible to the end user through an intuitive interface. While this puts a lot of power in the hands of the end user, in order to use these tools wisely and interpret the results correctly, users are expected to have a sound knowledge of the relationship between the physical world and the mathematical model, and of that between the mathematical model and the numerical approximation. One is well served to remember the cliche: garbage in, garbage out!

In this course we examine the link between the mathematical and the numerical model. In so doing, modern computational tools are used quite liberally. Concepts of a networked computer environment are discussed in Appendix A. A basic introduction to MATLAB can be found in Appendix B. MATLAB is used throughout in illustrating various algorithms.

1.2 Classification of chemical process models

Chemical process models are typically built by keeping track of the mass and energy of process streams from the raw material stage to the finished product state. The state of a stream is characterized by the concentrations of the various species that it carries and by its temperature, pressure and flow rate. Applying the laws of conservation of mass, energy and momentum allows us to track changes in the state of the system. Typically we subject the raw material streams either to physical treatment, to add or remove chemical species by exploiting such property differences as density, solubility, volatility, diffusivity etc. (transport and equilibrium processes), or to chemical treatment, to alter the chemical structure (reaction processes).

If the state variables are assumed to be independent of time and spatial position, then we often have a lumped parameter, steady state model, resulting in a set of coupled algebraic equations. If they are assumed to have no spatial variation but are time dependent, then we have lumped parameter, dynamic models, which result in ordinary differential equations of the initial value type. If there is no time dependence, but there is a spatial variation restricted to one dimension (for reasons of symmetry or scale), then we have ordinary differential equations of the boundary value type. If both spatial and time dependence are important, then we end up with partial differential equations, which are further classified into parabolic, elliptic and hyperbolic equations. The classification outlined in this paragraph is illustrated with specific examples in the next sections. The examples are drawn from transport, equilibrium and reaction processes. The objective is to sensitize you to the model building process, in the hope that you will begin to appreciate the relationship between the physical world and the mathematical model that represents it.

1.3 Lumped parameter, steady state models

1.3.1 Example of a stagewise separation process

[Figure 1.2: Three stage separation process. Liquid enters stage 1 at rate L with pollutant mole fraction x0, flows through stages 1, 2 and 3, and leaves with composition x3; gas enters stage 3 at rate V with pollutant mole fraction y4, flows counter-currently, and leaves stage 1 with composition y1.]

Consider a stagewise separation process shown in Figure 1.2. Suppose we wish to process a gas stream at a rate of V kmol/s containing a pollutant at a mole fraction of y4. We wish to remove the pollutant by scrubbing it with a solvent in a counter-current 3-stage separation device. The liquid rate is, say, L kmol/s, and it contains a pollutant concentration of x0 (which may be zero for a pure solvent stream). Only the pollutant transfers from the gas phase to the liquid phase, and we make use of the solubility differences between the inert carrier gas and the pollutant. So far we have made an attempt to describe a physical world. Is the description adequate to formulate a mathematical model? How do we know that such a model should result in a steady state, lumped parameter model? The answer is no, we don't! We need to further define and refine the problem statement. For a process engineer this is the most important step - viz. understand the objective of the task and the nature of the process (the physical world) in order to formulate a mathematical model. Let us continue with the description of the problem.

The state variables in this problem are (L, V, x0, x1, x2, x3, y1, y2, y3, y4). By focusing only on the steady state operation, we remove the dependence of the state variables on time. Such a model clearly cannot answer any questions concerning start up or shutdown of this process. Next, we assume that in each stage the gas and liquid are mixed thoroughly, so that there is no spatial variation of concentration within the equipment. This is the so-called lumped parameter approximation.

We further assume that the streams leaving a stage are in thermodynamic equilibrium. This implies that, for given inlet streams, no matter what we do inside the process equipment the exit concentrations cannot be changed, as they have reached an invariant state. To state it another way, there is a unique relationship, y = f(x), between the exit concentrations; such equilibrium relationships are typically measured in the laboratory and entered into a database. Often this relation is expressed as y = Kx, where K is called the equilibrium ratio. At extremely low concentrations K may be assumed to be a constant (which results in a linear model), while at higher concentrations the equilibrium ratio may itself be a function of concentration, K(x) (which results in a non-linear model). While experience and experimentation suggest that such relationships do exist, the study of equilibrium thermodynamics takes this one step further in attempting to construct predictive models for the function y = f(x) by examining the equilibrium process at a molecular level. Let us continue with the assumption that the equilibrium model is given by

    y_n = K x_n,    n = 1, 2, 3                      (1.1)

and that this relationship is valid for each stage of the separation process. This yields three equations, but recall that the state of this 3-stage separation train is described by ten variables: (L, V, x0, x1, x2, x3, y1, y2, y3, y4). At this stage we ask ourselves if there are other laws or principles that this system should obey. Conservation laws, such as mass, momentum and energy conservation, should come to mind. In the present case our objective has been narrowly focused on tracking the concentration of the pollutant in each of the three stages. In particular, we have not concerned ourselves with flow and heat effects. Let us speculate briefly about what these effects might be! Heat transfer effects might include the heat of absorption, while flow effects might include imperfect mixing in a stage. The latter in fact has serious consequences, negating two of our earlier assumptions: viz. that the system is lumped parameter (i.e. concentration is spatially uniform in each stage) and that the exit streams are in thermodynamic equilibrium. Nevertheless we still proceed with the assumption of perfect mixing; a model description that takes into account such realities often becomes intractable. Neglecting heat and flow effects, we are left with the mass conservation principle. Applying it to the pollutant species around each of the three stages, we obtain

    Stage 1:   V (y2 - y1) = L (x1 - x0)
    Stage 2:   V (y3 - y2) = L (x2 - x1)             (1.2)
    Stage 3:   V (y4 - y3) = L (x3 - x2)

Note that in each of these equations, the left hand side represents the

amount of pollutant that has been removed from the gas phase and the

right hand side represents the same amount of material absorbed into

1.3. LUMPED PARAMETER, STEADY STATE MODELS 7

the liquid phase. Now we have a total of six equations, but still ten

variables. Hence we conclude that we have four degrees of freedom.

This implies that we can choose four of the variables and the remaining

six variables must be determined by satisfying the six equations.

One can also write an overall balance, taking all three stages as one

group:

Overall: V (y4 − y1 ) = L(x3 − x0 ) (1.3)

It is easy to verify that summing the three equations in (1.2) produces equation (1.3); thus equation (1.3) is not an independent equation. This will be used later in introducing con-

cepts of linear independence and rank of matrices.

Let us assume that we pick the four variables associated with the inlet

streams to be specified, viz. (L, V , x0 , y4 ). Defining S = L/KV (a known

value) and eliminating variables (y1 , y2 , y3 ) from equations (1.2) we get

the following system of three equations

$$\begin{bmatrix} (1+S) & -1 & 0 \\ -S & (1+S) & -1 \\ 0 & -S & (1+S) \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} S x_0 \\ 0 \\ y_4/K \end{bmatrix} \tag{1.4}$$

This can be represented in a compact matrix form as

T x = b    (1.5)

where

$$T = \begin{bmatrix} d_1 & c_1 & 0 \\ a_1 & d_2 & c_2 \\ 0 & a_2 & d_3 \end{bmatrix} = \begin{bmatrix} (1+S) & -1 & 0 \\ -S & (1+S) & -1 \\ 0 & -S & (1+S) \end{bmatrix}$$

and

$$x = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}, \qquad b = \begin{bmatrix} b_1 \\ b_2 \\ b_3 \end{bmatrix} = \begin{bmatrix} S x_0 \\ 0 \\ y_4/K \end{bmatrix}$$

Once the model is cast in the compact matrix form of equation (1.5), we can generalize it to any number of stages, n:

[Figure 1.3: The equilibrium relationship y = f(x): (a) a linear model, y = Kx with K constant; (b) a nonlinear model, y = K(x)x, where K(x) is the slope of the chord, which varies with x.]

$$T = \begin{bmatrix} d_1 & c_1 & 0 & \cdots & 0 \\ a_1 & d_2 & c_2 & \cdots & 0 \\ & \ddots & \ddots & \ddots & \\ 0 & 0 & a_{n-2} & d_{n-1} & c_{n-1} \\ 0 & \cdots & 0 & a_{n-1} & d_n \end{bmatrix} \qquad x = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_{n-1} \\ x_n \end{bmatrix} \qquad b = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_{n-1} \\ b_n \end{bmatrix}$$

where a_i = −S, d_i = (1 + S), c_i = −1. Efficient algorithms for solving such systems will be developed in Chapter 3.
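Tridiagonal systems of this form can in fact be solved in O(n) operations by forward elimination and back substitution, the classical Thomas algorithm. Ahead of the fuller treatment in Chapter 3, and noting that this text's own codes are in MATLAB, here is a minimal Python sketch (NumPy assumed; the function name is illustrative):

```python
import numpy as np

def thomas(a, d, c, b):
    """Solve T x = b for tridiagonal T with sub-diagonal a (length n-1),
    diagonal d (length n) and super-diagonal c (length n-1)."""
    n = len(d)
    dp = np.array(d, dtype=float)   # working copies so inputs are untouched
    bp = np.array(b, dtype=float)
    for i in range(1, n):           # forward elimination
        w = a[i - 1] / dp[i - 1]
        dp[i] -= w * c[i - 1]
        bp[i] -= w * bp[i - 1]
    x = np.empty(n)
    x[-1] = bp[-1] / dp[-1]
    for i in range(n - 2, -1, -1):  # back substitution
        x[i] = (bp[i] - c[i] * x[i + 1]) / dp[i]
    return x
```

For the absorber every stage has a_i = −S, d_i = 1 + S and c_i = −1, so assembling the three input vectors takes only a few lines.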

If the K values are constant as in figure 1.3a, the system remains linear; if they are found to be dependent on the concentrations x, then we have a nonlinear system of equations. One can then

interpret the K(x) values to be the slopes of the chord as shown in fig-

ure 1.3b, which is no longer a constant, but depends on x. This implies

that, S(x) = L/K(x)V and hence T becomes a function of x. Thus the

elements in T cannot be determined without knowing the solution x.

An intuitive approach to resolving this dilemma in solving such systems, T(x)x = b, might be to make an initial guess x^(old), use this guess to evaluate K(x^(old)), S(x^(old)) and hence T(x^(old)), and obtain a new solution x^(new) by solving the linearized system T(x^(old)) x^(new) = b. One can repeat this procedure until the difference between the new and the old values of x becomes vanishingly small. Although there are numer-

ous variations on this scheme, a large class of non-linear problems is solved within the conceptual framework of (i) estimating an initial guess, (ii) devising an algorithm to improve the estimate, and (iii) checking for convergence of the result.
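The three-step framework just outlined can be sketched compactly for the system T(x)x = b. The sketch below is in Python (NumPy assumed) rather than the MATLAB used elsewhere in this text, and the function names are illustrative; T and b are supplied as functions of x so that a concentration-dependent K(x) can be accommodated:

```python
import numpy as np

def successive_substitution(Tfun, bfun, x0, tol=1e-10, maxit=100):
    """Solve the nonlinear system T(x) x = b(x) by repeatedly freezing
    the coefficients at the old iterate and solving the linear system."""
    x_old = np.asarray(x0, dtype=float)   # (i) initial guess
    for _ in range(maxit):
        # (ii) improve the estimate: solve the linearized system
        x_new = np.linalg.solve(Tfun(x_old), bfun(x_old))
        # (iii) check for convergence
        if np.max(np.abs(x_new - x_old)) < tol:
            return x_new
        x_old = x_new
    raise RuntimeError("successive substitution did not converge")
```

With a constant K, T and b do not depend on x and the loop terminates after verifying the first linear solve; with a varying K(x), each pass rebuilds T before solving.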

ified viz. (L, V , x0 , y4 ). This would be typical for performance analysis

problems where the output of an existing process is desired, given its

inlet conditions. A design engineer, who gets into this game at an ear-

lier stage, might face an alternate problem. For example, environmental

regulations might dictate that the exit concentration of the pollutant y1

be below a certain acceptable level. Thus the four degrees of freedom

might be used up in specifying (V , x0 , y4 , y1 ). Assuming once again a

linear equilibrium model (K constant), the system of equations (1.2) in

the unknown set (L, x2 , x3 ) can be written as:

f1(L, x2; V, y1, x0) := KV(x2 − y1/K) − L(y1/K − x0) = 0

f2(L, x2, x3; V, y1, x0) := KV(x3 − x2) − L(x2 − y1/K) = 0    (1.6)

f3(L, x2, x3; V, y4) := V(y4 − Kx3) − L(x3 − x2) = 0

Note that these equations are non-linear! Why? Although the mathematical model has remained the same, for some specifications of the free variables we have a nice tridiagonal matrix structure, while no such structure exists for others.

1.3.2 Process flow sheet simulation

Consider the flow sheet shown in figure 1.4. It is an extremely simple

unit consisting of a reactor and a separator. We are given the mole frac-

tions of components in the recycle stream and the exit stream from the

reactor. We are asked to determine the molar rates of CO and H2 in the

inlet stream, the recycle rate R and the product rate, P. In analysing this

problem we setup a series of material balance equations. Focusing on

the reactor loop (loop 1) shown in dashed line in figure 1.4, we can write

the following three material balance equations, one for each atomic species (C, H2 and O, counting each CH3OH molecule as one C, one O and two H2):

C-balance: x + (0.302 + 0.004)R = 275 (0.274 + 0.095)

or

x + 0.306R = 101.475

[Figure 1.4: A simple reactor-separator flow sheet. The fresh feed (CO at rate x, H2 at rate y) is mixed with the recycle stream R (mole fractions: H2 0.694, CO 0.302, CH3OH 0.004) and sent to the reactor; the reactor effluent, F = 275 mol/min (mole fractions: H2 0.631, CO 0.274, CH3OH 0.095), passes to a condenser that withdraws the product P (CH3OH) and returns the remainder as recycle. Loop 1 (dashed line) encloses the mixing point and the reactor; loop 2 encloses the full flow sheet.]

H2-balance: y + (0.694 + 2 × 0.004)R = 275 (0.631 + 2 × 0.095)

or

y + 0.702R = 225.775

O-balance: x + (0.302 + 0.004)R = 275 (0.274 + 0.095)

or

x + 0.306R = 101.475

Clearly the O balance merely reproduces the C balance; in the parlance of

linear algebra, these three equations do not form a linearly independent

set of equations. So we proceed to construct additional equations by

examining material balance around the full flow sheet (loop 2). These

give rise to:

C-balance: x=P

H2 -balance: y = 2P

Collecting all five equations in matrix form, we have

$$\begin{bmatrix} 1 & 0 & 0.306 & 0 \\ 0 & 1 & 0.702 & 0 \\ 1 & 0 & 0.306 & 0 \\ 1 & 0 & 0 & -1 \\ 0 & 1 & 0 & -2 \end{bmatrix} \begin{bmatrix} x \\ y \\ R \\ P \end{bmatrix} = \begin{bmatrix} 101.475 \\ 225.775 \\ 101.475 \\ 0 \\ 0 \end{bmatrix} \tag{1.7}$$

Recognizing the redundancy between the first and third equations and

also combining equations four and five to eliminate P , we can write the

above set in an alternate form as

$$\begin{bmatrix} 1 & 0 & 0.306 \\ 0 & 1 & 0.702 \\ -2 & 1 & 0 \end{bmatrix} \begin{bmatrix} x \\ y \\ R \end{bmatrix} = \begin{bmatrix} 101.475 \\ 225.775 \\ 0 \end{bmatrix} \tag{1.8}$$
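As a quick numerical check, the reduced system (1.8) can be solved directly; a Python sketch (NumPy assumed):

```python
import numpy as np

# The reduced 3x3 system (1.8) in the unknowns x (CO feed rate),
# y (H2 feed rate) and R (recycle rate), all in mol/min.
A = np.array([[ 1.0, 0.0, 0.306],
              [ 0.0, 1.0, 0.702],
              [-2.0, 1.0, 0.0  ]])
b = np.array([101.475, 225.775, 0.0])

x_co, y_h2, R = np.linalg.solve(A, b)
P = x_co                     # overall C-balance: x = P
print(x_co, y_h2, R, P)
```

This gives a fresh feed of about 23.87 mol/min of CO and 47.74 mol/min of H2 (exactly twice the CO rate, as the stoichiometry demands), a recycle rate of about 253.6 mol/min, and hence a product rate P = x.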

1.3.3 Example of a multicomponent flash

The multicomponent flash process also results in a lumped, steady state model description. It is also an example of how a potentially large system of algebraic

equations can be reduced to a single equation in one unknown through

clever manipulations. Thus root finding algorithms could be used effi-

ciently to solve this system. A sketch of the process is shown in figure

1.5. A feed stream of known flow rate, F , composition (mole fractions),

{zi |i = 1 · · · N}, temperature, TF and pressure, PF is flashed into a drum

maintained at a temperature and pressure of (T , P ), respectively. Under

right conditions, the feed will split into a vapor phase and a liquid phase.

The objective is to predict the flow rate and compositions of the vapor,

(V , yi ) and the liquid (L, xi ) phases. Each exit stream contains (N + 1)

unknowns. The assumptions are that the process is operating under

steady conditions, perfect mixing takes place inside the drum (lumped

approximation) and the exit streams are in thermodynamic equilibrium.

The model equations are as follows:

$$y_i = K_i x_i, \qquad i = 1 \cdots N \tag{1.9}$$

$$F z_i = V y_i + L x_i, \qquad i = 1 \cdots N \tag{1.10}$$

Observe the nonlinear terms in equation (1.10): viz. the products of the unknowns V and y_i, and L and x_i.

[Figure 1.5: (a) A flash drum: the feed (F, z_i, i = 1..N, at T_F, P_F) splits into a vapor stream (V, y_i) and a liquid stream (L, x_i); (b) a sketch of the flash function f(ψ), showing a real solution inside (0, 1) and a non-physical solution outside it.]

F =V +L (1.11)

$$\sum_{i=1}^{N} y_i = \sum_{i=1}^{N} x_i = 1 \tag{1.12}$$

These constitute (2N + 3) equations for the (2N + 2) unknowns; but it is easy to verify that summing equations (1.10) over all components and using the mole fraction constraint (1.12),

results in equation (1.11). Thus, equation (1.11) is not an independent

one. Although these equations could be solved as a system of nonlinear

algebraic equations, a much more efficient scheme is to eliminate all except one variable and reduce the system to a single equation. First

eliminate yi from equation (1.10) using (1.9) to obtain

$$x_i = \frac{F z_i}{K_i V + L}$$

Next, subtracting the two sums in the mole fraction constraint (1.12),

$$\sum_{i=1}^{N} (x_i - y_i) = 0 \quad \text{or} \quad \sum_{i=1}^{N} (1 - K_i) x_i = 0$$

Substituting for x_i,

$$\sum_{i=1}^{N} \frac{(1 - K_i) F z_i}{K_i V + L} = 0$$

Finally, using equation (1.11) to eliminate L = F − V and defining ψ = V/F, the fraction of the feed that leaves as vapor, we obtain the final form of the flash equation as

$$\sum_{i=1}^{N} \frac{(1 - K_i) z_i}{(K_i - 1)\psi + 1} = 0 \tag{1.13}$$

In general, the number of roots that a nonlinear equation possesses cannot be known a priori. A

possible sketch of the function is shown in figure 1.5b. Since ψ is defined

as the fraction of feed that appears as vapor, (V /F ), the physical world

dictates that it must lie between (0, 1) and it is sufficient if the search

for the root is limited to this range. The flash equation (1.13) may possess other roots outside the range of interest (0, 1). While such roots are valid mathematical solutions of the problem, they are not physically relevant.

1.3.4 Example of a phenomenological model

In the previous two examples, models were built based on conservation laws. Models based on empirical observations are also quite common.

The Pressure-Volume-Temperature (PVT) behavior of gases, for exam-

ple, could be modeled by the ideal gas law viz. P V = nRT . A more

refined model, called the Peng-Robinson equation of state is used widely

in chemical engineering literature. It is given by the following equations:

$$P = \frac{RT}{V - b} - \frac{a(T)}{V(V + b) + b(V - b)} \tag{1.14}$$

where

$$a(T) = 0.45724\,\frac{R^2 T_c^2}{P_c}\,\alpha(T_r, \omega)$$

$$b = 0.0778\,\frac{R T_c}{P_c}$$

$$\alpha^{1/2} = 1 + m\left(1 - \sqrt{T_r}\right)$$

$$m = 0.37464 + 1.54226\,\omega - 0.26992\,\omega^2$$

Here Tc , Pc are the critical temperature and pressure of the component

and ω is the acentric factor. These are properties of a component.

[Figure 1.6: A train of N continuously stirred tank reactors in series: the feed enters at rate F with concentration a_0; reactor i has volume V and exit concentration a_i.]

Let Z = PV/RT be the compressibility factor. Equation (1.14) can be rearranged as a cubic equation in Z as follows:

$$Z^3 - (1 - B)Z^2 + (A - 3B^2 - 2B)Z - (AB - B^2 - B^3) = 0 \tag{1.15}$$

where A = aP/(RT)² and B = bP/RT. Once the pressure and temperature (P, T) are given and the material is iden-

tified ( i.e., Tc , Pc , ω are known), the coefficients (A, B) in equation (1.15)

can be calculated and hence the cubic equation can be solved to find the

roots, Z. This allows the determination of the volume (or density) from

Z = P V /RT .
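A sketch of this computation in Python (NumPy assumed), using the cubic of equation (1.15) in its standard form with A = aP/(RT)² and B = bP/RT; the function name and the conditions in the usage note are illustrative:

```python
import numpy as np

def pr_compressibility(T, P, Tc, Pc, omega, R=8.314):
    """Return the real roots Z of the Peng-Robinson cubic, sorted."""
    m = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
    Tr = T / Tc
    alpha = (1.0 + m * (1.0 - np.sqrt(Tr)))**2
    a = 0.45724 * R**2 * Tc**2 / Pc * alpha
    b = 0.0778 * R * Tc / Pc
    A = a * P / (R * T)**2
    B = b * P / (R * T)
    # Z^3 - (1 - B) Z^2 + (A - 3B^2 - 2B) Z - (AB - B^2 - B^3) = 0
    coeffs = [1.0, -(1.0 - B), A - 3.0 * B**2 - 2.0 * B,
              -(A * B - B**2 - B**3)]
    roots = np.roots(coeffs)
    return np.sort(roots[np.abs(roots.imag) < 1e-9].real)
```

For example, for a gas such as CO2 (Tc ≈ 304.2 K, Pc ≈ 73.8 bar, ω ≈ 0.225) at moderate pressure, the gas-phase root lies close to Z = 1, the ideal gas value.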

1.3.5 Example of reactors in series

An example from chemical reaction engineering that gives rise to a system of nonlinear equations is that of continuously stirred tank reactors in series. A sketch is shown in figure 1.6. Consider an isother-

mal, irreversible second order reaction. The composition in each reactor

is assumed to be spatially homogeneous due to thorough mixing. The

reaction rate expression is given by,

$$r = k V a_i^2$$

where

k is the reaction rate constant and V is the volume of the reactor. A

material balance under steady state conditions on the i − th reactor

results in,

$$k V a_i^2 = F(a_{i-1} - a_i) \tag{1.16}$$

Letting β = kV/F, we have the following N simultaneous nonlinear equations:

$$\beta a_i^2 + a_i - a_{i-1} = 0, \qquad i = 1, 2, \ldots, N \tag{1.17}$$

1.4. LUMPED PARAMETER, DYNAMIC MODELS 15

While we have constructed N equations, there are (N + 2) variables in

total. They are [a0 · · · aN β]. Hence we have two degrees of freedom. In

analysing an existing reactor train, for example, one might regard (β, a0 )

to be known and solve for the remaining N variables including the exit

concentration aN (and hence the conversion). In a design situation one

might wish to achieve a specific conversion and hence regard (a0, aN) as known and solve for the remaining N variables including β (and hence the volume V).
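In the performance-analysis case, with (β, a0) known, the N equations need not even be solved simultaneously: equation (1.16) for reactor i involves only a_{i−1} and a_i, so one can march down the train solving one quadratic per stage and keeping its positive root. A Python sketch (the values in the example call are illustrative):

```python
import math

def cstr_train(beta, a0, N):
    """March through N reactors in series: each stage solves the
    quadratic beta*a_i^2 + a_i - a_{i-1} = 0 for its exit
    concentration a_i, taking the positive root."""
    a = [a0]
    for _ in range(N):
        a_prev = a[-1]
        a.append((-1.0 + math.sqrt(1.0 + 4.0 * beta * a_prev))
                 / (2.0 * beta))
    return a

# e.g. three reactors with beta = 1 and feed concentration a0 = 1
print(cstr_train(1.0, 1.0, 3))
```

The concentrations decrease monotonically down the train, and the exit value a_N gives the overall conversion directly.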

1.4 Lumped parameter, dynamic models

Lumped parameter, dynamic models arise typically when the spatial variation of the state variables can be ignored for some reason, but the time variation cannot. Let us consider an example from heat transfer.

1.4.1 Example of cooling a molten metal

A sample of molten metal at an initial temperature of Ti is placed in a

crucible (at an initial temperature of T∞ ) and allowed to cool by convec-

tion. A sketch is shown in figure 1.7. Let T1 (t) be the temperature of

the molten metal at any time t and T2 (t) be the temperature of the cru-

cible. The argument used to justify neglecting spatial variation is that

the thermal conductivity of the two materials are sufficiently large to

keep the temperature of each material uniform within its boundaries.

The conservation law statement is:

{rate of accumulation} = {rate in} - {rate out}+ {rate of generation}

Applying this first to the molten metal,

$$\frac{d}{dt}(m_1 C_{p1} T_1) = \underbrace{-h_1 A_1 (T_1 - T_2)}_{\text{heat loss from 1 to 2}} \tag{1.18}$$

Energy balance on the crucible results in,

$$\frac{d}{dt}(m_2 C_{p2} T_2) = \underbrace{h_1 A_1 (T_1 - T_2)}_{\text{heat gain by 2 from 1}} - \underbrace{h_2 A_2 (T_2 - T_\infty)}_{\text{heat loss from 2 to } \infty} \tag{1.19}$$

These two equations can be arranged in the matrix form

$$\frac{d\theta}{dt} = A\theta + b$$

[Figure 1.7: A sample of molten metal (temperature T1) cooling in a crucible (temperature T2) that is surrounded by insulation; the ambient is at T∞.]

where

$$\theta = \begin{bmatrix} T_1 \\ T_2 \end{bmatrix}, \qquad A = \begin{bmatrix} -\dfrac{h_1 A_1}{m_1 C_{p1}} & \dfrac{h_1 A_1}{m_1 C_{p1}} \\ \dfrac{h_1 A_1}{m_2 C_{p2}} & -\dfrac{h_1 A_1 + h_2 A_2}{m_2 C_{p2}} \end{bmatrix}, \qquad b = \begin{bmatrix} 0 \\ \dfrac{h_2 A_2 T_\infty}{m_2 C_{p2}} \end{bmatrix}$$

The initial condition is

$$\theta(t = 0) = \begin{bmatrix} T_i \\ T_\infty \end{bmatrix}$$

Here A1 is the heat transfer area at the metal-crucible interface, A2 is the area at the crucible-air interface, (h1, h2) are the corresponding

heat transfer coefficients, (m1 , m2 ) are the corresponding mass of the

materials, (Cp1 , Cp2 ) are the specific heats of the two materials. Since all

of these are assumed to be known constants, the problem is linear.
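Since all the coefficients are constant, the system dθ/dt = Aθ + b can be handed to any standard ODE integrator; a Python sketch (SciPy assumed) using made-up parameter values, which are not from the text:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative (made-up) lumped parameters: h1*A1, h2*A2, m1*Cp1, m2*Cp2
h1A1, h2A2, m1Cp1, m2Cp2 = 10.0, 5.0, 100.0, 50.0
Tinf, Ti = 300.0, 900.0

A = np.array([[-h1A1 / m1Cp1,            h1A1 / m1Cp1],
              [ h1A1 / m2Cp2, -(h1A1 + h2A2) / m2Cp2]])
b = np.array([0.0, h2A2 * Tinf / m2Cp2])

sol = solve_ivp(lambda t, th: A @ th + b, [0.0, 500.0],
                [Ti, Tinf], rtol=1e-8, atol=1e-8)
print(sol.y[:, -1])   # both temperatures relax toward T_inf
```

Both temperatures relax to the ambient value T∞, which is the steady state of the system (the solution of Aθ + b = 0).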

1.4.2 Ozone decomposition

Several reaction mechanisms have been proposed to model the decomposition of ozone in the atmosphere. Let us consider a simple two-step model:

O3 + O2 ⇌ O + 2 O2

O3 + O → 2 O2

1.5. DISTRIBUTED PARAMETER, STEADY STATE MODELS 17

This model played a role in identifying the mechanisms of ozone depletion in the atmosphere. For lack

of better data, the compositions were assumed to be spatially homoge-

neous in the atmosphere, although we know now that there can be spatial

variations. For the present purpose we will assume the compositions to

be spatially uniform. Let y1 be the composition of O3 and y2 be that of

O. The model equations are,

$$\frac{dy_1}{dt} = f_1(y_1, y_2) = -y_1 - y_1 y_2 + \kappa y_2 \tag{1.20}$$

$$\epsilon\,\frac{dy_2}{dt} = f_2(y_1, y_2) = y_1 - y_1 y_2 - \kappa y_2 \tag{1.21}$$

The initial compositions are y(t = 0) = [1.0, 0.0]. The parameters are ε = 1/98 and κ = 3.0. This is a system of two non-linear ordinary dif-

ferential equations. It is an interesting problem in the limit of ε → 0. In

the reaction analysis literature, the consequence of this limit is known

as the quasi-steady-state-approximation. The physical interpretation is

that the second reaction is much faster than the first one, so that it can be assumed to have reached the equilibrium state at every instant of time. The second equation becomes an algebraic one. In the applied mathematics literature it is called a singular perturbation problem. From the computational point of view this limit gives rise to a phenomenon called stiffness. We will explore these features further in later

chapters.
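A sketch of integrating this system in Python with SciPy's stiff BDF integrator, writing equation (1.21) as dy2/dt = f2/ε (this placement of ε is an assumption consistent with the singular perturbation discussion above):

```python
import numpy as np
from scipy.integrate import solve_ivp

eps, kappa = 1.0 / 98.0, 3.0

def rhs(t, y):
    y1, y2 = y
    f1 = -y1 - y1 * y2 + kappa * y2
    f2 = y1 - y1 * y2 - kappa * y2
    return [f1, f2 / eps]          # the 1/eps factor makes the system stiff

sol = solve_ivp(rhs, [0.0, 10.0], [1.0, 0.0], method="BDF",
                rtol=1e-8, atol=1e-10)
y1f, y2f = sol.y[:, -1]
print(y1f, y2f)
```

At long times y2 tracks the quasi-steady value y1/(y1 + κ) obtained by setting f2 = 0, which is exactly the quasi-steady-state approximation described in the text.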

1.5 Distributed parameter, steady state models

1.5.1 Heat transfer through a tapered fin

Let us examine an example from transport processes. Consider the use

of a fin to enhance the rate of heat transfer. Basically, a fin provides a

large heat transfer surface in a compact design. In the design and perfor-

mance analysis of fins one might be interested in a variety of questions

such as what is the efficiency of the fin? (as a corollary what is a useful

definition of fin efficiency?), How many fins are required to dissipate a

certain heat load?, What is the optimal shape of the fin that maximizes

the heat dissipation for minimum weight of fin material? How long does

it take for the fin to reach a steady state? etc. You will learn to develop

answers to these questions in a heat transfer course. Our interest at this

stage is to develop a feel for the model building process. A sketch of a

fin is shown in figure 1.8. Let us first examine the steady state behavior

[Figure 1.8: (a) A planar fin of length L, width W and thickness t0, with base temperature T0 and ambient temperature T∞; an elemental control volume of thickness δx receives conduction flux qA|x, loses qA|x+δx, and loses heat by convection at the rate hPδx(T − T∞); (b) a circular fin.]

of a planar fin shown in figure 1.8a. The base of the fin is maintained

at a uniform temperature of T0 and the ambient temperature is T∞ . The

state variable that we are interested in predicting is the temperature of

the fin, T. In general it might be a function of all three spatial positions, i.e., T(x, y, z). (Note that time, t, is eliminated by assuming steady state.) If

we know something about the length scales of the fin and the material

property of the fin, we make further assumptions that will reduce the

complexity of the problem. Let us also assume that the fin is made of a homogeneous material, i.e., its thermal conductivity, k, is independent of position. (What types of materials might violate this assumption?) If the length, L, of the fin is much larger than the thickness, t0,

then we might argue that the temperature variation in the y direction

will be smaller than that in the x direction. Thus we can assume T to

be uniform in the y direction. Next, we examine what happens in the

z direction? This argument is somewhat subtle as it is based on sym-

metries in the system. The basic premise here is that symmetric causes

produce symmetric effects. An excellent and easily accessible exposition on this topic can be found in Golubitsky and Stewart (1993). First

we assume that the ends of the fin in the z direction are at infinity (or

W >> L) so that the end effects can be neglected. Since the temperature

gradient within the fin is caused by the driving force T0 and T∞ which

are independent of the z direction, we can expect the fin to respond in a manner that is also independent of z. An end effect, however small it may be, is always present in a planar fin. By making W

large compared to L we reduce the error caused by the two-dimensional

effect near the ends. On the other hand the azimuthal symmetry in the

circular fin (figure 1.8b) makes the problem truly one dimensional with

temperature varying only in the radial direction. Now that we have a bet-

ter feel for what kinds of arguments or assumptions make this problem

one-dimensional, let us proceed with constructing the model equation.

Since the temperature variation is present only in the x direction, we

take an elemental control volume of thickness δx and identify the input

and output sources of energy into this control volume. See figure 1.8.

Energy enters by conduction mechanism at a rate of (qA)|x through the

left boundary at x and leaves at a rate of (qA)|x+δx through the right

boundary at x + δx. Heat is also lost by convection through the upper

and lower boundaries, which is represented by hP δx(T − T∞ ). Here q

is the heat flux (J/s·m²) by conduction. This is given by another phenomenological model called the Fourier law: q = −k dT/dx. Here k in the Fourier law is a material property called the thermal conductivity (J/m·s·°C).

A is the cross-sectional area which is allowed to be a function of x (ta-

pered fin), h is the heat transfer coefficient (J/s · m2 ·o C). P = 2W is

the perimeter (m). The conservation law statement is:

{rate of accumulation} = {rate in} - {rate out}+ {rate of generation}

In symbolic terms, it is given by,

0 = (qA)|x − (qA)|x+δx − hP δx(T − T∞ )

Dividing by δx and taking the limit of δx → 0, we obtain,

$$0 = -\frac{d(qA)}{dx} - hP(T - T_\infty)$$

Using the Fourier law to replace q in terms of T ,

$$\frac{d}{dx}\left(kA\frac{dT}{dx}\right) - hP(T - T_\infty) = 0 \tag{1.22}$$

Equation (1.22) is a second order differential equation, and the physical description dictates that two conditions be specified at the two ends of the fin, viz.

$$T(x = 0) = T_0$$

$$T(x = L) = T_\infty \quad \text{or} \quad \left.\frac{dT}{dx}\right|_{x=L} = 0 \tag{1.23}$$

(the latter condition at x = L corresponding to an insulated fin tip).

1.6. DISTRIBUTED PARAMETER, DYNAMIC MODELS 20

This problem can be solved to obtain T (x) provided the geometrical pa-

rameters {A(x), P }, the material property, k and the heat transfer envi-

ronment {h, T∞, T0} are known. The problem is nonlinear if the thermal conductivity is a function of temperature, k(T). In order to determine

the effectiveness of the fin, one is interested in the total rate of heat

transfer, Q, through the fin. This can be computed in one of two ways

as given by,

$$Q = \int_0^L hP\,[T(x) - T_\infty]\,dx = -k\,A(x=0)\left.\frac{dT}{dx}\right|_{x=0}$$
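For the special case of a uniform cross-section A and an insulated tip (dT/dx = 0 at x = L), equation (1.22) has the classical closed-form solution T(x) = T∞ + (T0 − T∞) cosh[m(L − x)]/cosh(mL) with m = √(hP/kA), and the base-flux expression for Q reduces to kAm(T0 − T∞) tanh(mL). A Python sketch (NumPy assumed, made-up parameter values) verifying that the two ways of computing Q agree:

```python
import numpy as np

# Illustrative (made-up) parameters for a uniform fin, SI units
h, P, k, A = 25.0, 0.2, 200.0, 1.0e-4
L, T0, Tinf = 0.05, 400.0, 300.0

m = np.sqrt(h * P / (k * A))          # fin parameter, 1/m
theta0 = T0 - Tinf

def T(x):
    """Closed-form profile for a uniform fin with an insulated tip."""
    return Tinf + theta0 * np.cosh(m * (L - x)) / np.cosh(m * L)

# Q from the total convective loss over the fin surface (trapezoid rule)
x = np.linspace(0.0, L, 2001)
fx = h * P * (T(x) - Tinf)
dx = x[1] - x[0]
Q_conv = dx * (0.5 * (fx[0] + fx[-1]) + fx[1:-1].sum())

# Q from the conduction flux entering at the base, -kA dT/dx at x = 0
Q_base = k * A * m * theta0 * np.tanh(m * L)

print(Q_conv, Q_base)   # the two estimates agree
```

The agreement of the two estimates is a useful consistency check on any numerical solution of (1.22) as well.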

1.6 Distributed parameter, dynamic models

1.6.1 Heat transfer through a tapered fin

As an example of a distributed, dynamic model let us re-examine the fin

problem, but during the early transient phase. Let the fin be initially at

the uniform ambient temperature of T∞ . At time t = 0 suppose the base

of the fin at x = 0 is brought to a temperature of T0. One might ask questions like: how long will it take for the fin to reach a steady state? What will be the temperature at the tip of the fin at a given time? etc.

Now we have the temperature as a function of both position and time,

i.e., T(x, t). We can represent the rate of accumulation within a control volume symbolically as d(ρA δx Cp T)/dt, where the term in parentheses is the energy (J) at any time within the control volume, Cp is the specific heat of the fin material (J/kg·°C) and ρ is the density of the material. Thus

the transient energy balance becomes,

$$\frac{\partial(\rho A\,\delta x\,C_p T)}{\partial t} = (qA)|_x - (qA)|_{x+\delta x} - hP\,\delta x\,(T - T_\infty)$$

Dividing by δx and taking the limit of δx → 0, we obtain,

$$\frac{\partial(\rho A C_p T)}{\partial t} = -\frac{\partial(qA)}{\partial x} - hP(T - T_\infty)$$

Finally, using Fourier's law to replace q in terms of T,

$$\frac{\partial(\rho A C_p T)}{\partial t} = \frac{\partial}{\partial x}\left(kA\frac{\partial T}{\partial x}\right) - hP(T - T_\infty) \tag{1.24}$$

Now the temperature is a function of both (x, t) and equation (1.24) is a partial differential equation. In addition to the boundary conditions specified in equation (1.23), we need an initial condition. The complete set of conditions is

IC:  T(x, t = 0) = T∞,

BC1: T(x = 0, t) = T0,    (1.25)

BC2: T(x = L, t) = T∞

Detection is, or ought to be, an exact science, and

should be treated in the same cold and unemotional

manner. You have attempted to tinge it with ro-

manticism, which produces much the same effect

as if you worked a love-story or an elopement into

the fifth proposition of Euclid.

Chapter 2

Single nonlinear algebraic equation

2.1 Introduction

In this chapter we develop algorithms for finding the root of a single nonlinear algebraic equation of the form,

f(x) = 0    (2.1)

Such an equation is satisfied only at selected values of x = r, called the roots. The equation can be a

simple polynomial as we saw with the Peng-Robinson equation of state in

section 1.3.4 or a more complicated function as in the multicomponent

flash example discussed in section 1.3.3. If the equation depends on

other parameters, as is often the case, we will represent them as

f (x; p) = 0 (2.2)

An obvious first approach is to graph the function f vs. x for a range of values of x. The objective

of such an exercise is to graphically locate the values of x = r where

the function crosses the x axis. While such a graphical approach has

an intuitive appeal, it is difficult to generalize such methods to higher


2.1. INTRODUCTION 23

[Figure 2.1: Examples of root-finding problems: (a) f(x) := sin(x), f(r) = 0 at r = nπ, n = 0, 1, ...; (b) f(x) := x³ + 4x² + x − 6, f(r) = 0 at r = −3, −2, 1; (c) f(x) := x³ + 3x² − 4, f(r) = 0 at r = −2, −2, 1; (d) f(x) := x³ − x² + x − 1, f(r) = 0 at r = 1.]

dimensional systems. Hence we focus on constructing computational algorithms that can be generalized and refined successively to handle a large class of nonlinear problems.

Before we embark on such a task, some of the potential problems are

illustrated with specific examples. In the general nonlinear case, there

is no way to know a priori, how many values of r can be found that will

satisfy the equation, particularly if the entire range of x in (−∞, ∞) is

considered. For example, the simple equation

f(x) := sin(x) = 0 has an infinite number of roots, r = nπ. The graph of the function shown in figure 2.1a illustrates this clearly.

Often, the physical description of the problem that gave rise to the math-

ematical function, will also provide information on the range of values

of x that are of interest. For example, in the multicomponent flash equa-

tion, the problem was so formulated that the dependent variable had the

physical interpretation of fraction of feed in vapor; hence this fraction

must be between (0, 1). Although the mathematical equation may have

many other roots, outside of this range, they would lack any physical

meaning and hence would not be of interest.

Algebraic theory tells us that the total number of roots of a polyno-

mial is equal to the degree of the polynomial, but not all of them may be

real roots. Furthermore, if the coefficients of the polynomial are all real,

2.2. BISECTION METHOD 24

then any complex roots must occur in conjugate pairs. Consider the three cubic

equations given below:

f(x) := x³ + 4x² + x − 6 = 0    r = −3, −2, 1

f(x) := x³ + 3x² − 4 = 0    r = −2, −2, 1

f(x) := x³ − x² + x − 1 = 0    r = 1

These are graphed in figures 2.1b,c,d respectively. In the first case there

are three distinct roots. The function has a non-zero slope at each value

of the root and such roots are called simple roots. In the second case

we have a degeneracy or a non-simple root at r = −2. This problem

manifests itself in a graphical representation with a zero slope of the

function at the multiple root, r = −2. If the coefficients of the polyno-

mial were slightly different, the curve could have moved slightly upward

giving rise to two distinct roots or downwards yielding no roots in this

region. Algebraically, we can see that the root is a multiple one with a

multiplicity of 2 by factoring the function into (x + 2)(x + 2)(x − 1). In

the third case there is only a single real root.

We begin by constructing some simple algorithms that have an in-

tuitive appeal. They are easy to represent graphically and symbolically

so that one can appreciate the connection between the two represen-

tations. Subsequently we can refine the computational algorithms to

meet the challenges posed by more difficult problems, while keeping the

graphical representation as a visual aid.

There are essentially three key steps in any root finding algorithm. They are:

1. make one or more initial guesses for the root,

2. devise an iterative scheme that produces a better estimate of the root from the current guess(es), and

3. check whether step 2 produced a result of desired accuracy.

The crux of the algorithm is often in the second step, and the objective in devising various clever schemes is to get from the initial guess to the final result as quickly as possible.

2.2 Bisection method

A graphical illustration of the bisection algorithm is shown in figure 2.2a. In step 1, we make two guesses x1 and x2, and evaluate the function at these two points. If the

[Figure 2.2: Graphical illustration of root finding algorithms: (a) bisection, x3 = (x1 + x2)/2, keeping the two values that bracket the root r; (b) regula-falsi, x3 = x1 − f1(x2 − x1)/(f2 − f1), again keeping the two values that bracket the root; (c) secant, the same update formula but keeping the two most recent values; (d) Newton, x2 = x1 − f1/f'1, using the slope f'1 of the function at x1.]

function values have opposite signs, it implies that it passes through

zero somewhere between x1 and x2 and hence we can proceed to the

second step of producing a better estimate of the root, x3 . If there is

no sign change, it might imply that there is no root between (x1, x2). (What else might it imply?) So we have to make a set of alternate guesses. The scheme for producing a

better estimate is also an extremely simple one of using the average of

the two initial guesses, viz.

$$x_3 = \frac{x_1 + x_2}{2} \tag{2.3}$$

To keep the root bracketed still, we store the value of x3 in the variable x2 so that we are poised to repeat step 2. If the situation were such as the one shown in figure 2.2b, we would instead store the value of x3 in the variable x1 and repeat step 2. In either case, (x1, x2) will hold better guesses than the original values.

The final step is to check if we are close enough to the desired root

r so that we can terminate the repeated application of step 2. One test

might be to check if the absolute difference between two successive val-

ues of x is smaller than a specified tolerance ε, i.e.,

$$|x_{i+1} - x_i| \le \epsilon$$

An alternate test is to check whether the absolute value of the function at the end of every iteration is below a certain tolerance, i.e.,

$$|f(x_i)| \le \epsilon$$

Until one of these criteria is satisfied, step 2 is repeated. A MATLAB function, constructed in the form of an m-file, is shown in figure 2.3.

Let us test the bisection algorithm on the flash equation developed in section §1.3.3. The equation is

$$\sum_{i=1}^{N}\frac{(1 - K_i)z_i}{(K_i - 1)\psi + 1} = 0 \tag{2.4}$$

where ψ, the vapor fraction, is the unknown variable. Implementation of this function in MATLAB is shown below as an m-file. (See Appendix B for an introduction to MATLAB.)


function r=bisect(Fun,x,tol,trace)
%BISECT find the root of "Fun" using bisection scheme
% Fun - the name of the external function
% x - vector of length 2, (initial guesses)
% tol - error criterion
% trace - print intermediate results
%
% Usage bisect('flash',[0,1])
% flash is the name of the external function.
% [0,1] is the initial guess

%Check inputs
if nargin < 4, trace=0; end
if nargin < 3, tol=eps; trace=0; end
if (length(x) ~= 2)
   error('Please provide two initial guesses')
end
f = feval(Fun,x);            %Cal. f at the initial guesses
if sign(f(1)) == sign(f(2))  %Initial guesses must bracket a root
   error('No sign change - no roots')
end
for i = 1:100                %Limit the number of iterations
   x3 = (x(1) + x(2))/2;     %Update the guess (step 2)
   f3 = feval(Fun,x3);       %Cal. f(x3)
   if sign(f(1))*sign(f3) < 0  %Keep the pair that brackets the root
      x(2)=x3; f(2)=f3;
   else
      x(1)=x3; f(1)=f3;
   end
   if trace, fprintf(1,'%3i %12.5f %12.5f\n', i,x3,f3); end
   if abs(f3) < tol, r=x3; return; end  %Check convergence (step 3)
end
error('Exceeded maximum number of iterations')


function f=flash(psi)

% K is a vector of any length of equil ratios.

% z is the feed composition (same length as K)

% K, z are defined as global in main

% psi is the vapor fraction.

global K z

if ( length(K) ~= length(z) )

error(’Number of K values & compositions do not match’)

end

n=length(psi);

for i = 1:n

f(i)=sum( ((K-1).*z) ./ (1+(K-1)*psi(i)) );

end

Observe that this function, while being concise, is fairly general to han-

dle any number of components N and a vector of guesses psi of any

length and return a vector of function values, one corresponding to each

element in the guessed variable psi. Assuming that you have such a

function defined in a file named flash.m, you are encouraged to work

through the following exercise using MATLAB.

» global K z
» z=[.25 .25 .25 .25] %define a 4-component system
» K=[2 1.5 0.5 0.1]   %define equilibrium values
» bisect('flash',[0,.1]) %find the root using bisect
ans =
    0.0434
» x=0:0.05:1; %create a vector of equally spaced data
» y=flash(x); %evaluate the function
» plot(x,y)   %plot the function

In summary, we have implemented the bisection algorithm as a function in MATLAB and used it to solve an example prob-

lem from multicomponent flash. The function bisect can be used to

solve any other root finding problem as long as you define the problem

you want to solve as another MATLAB function along the lines of the

example function flash.m.

While we managed to solve the problem, we did not concern ourselves with questions such as, (i) how many iterations did it take to converge? and (ii) can we improve the iteration scheme in step 2 to reduce the number of iterations? By design, the bisection scheme will always converge to a root, provided the two initial guesses bracket one. (Is it always possible to find such a guess? Consider the pathological case of r = −2 in figure 2.1c!)

2.3. REGULA-FALSI METHOD 29

This method, however, converges rather slowly and we attempt to devise

algorithms that improve the rate of convergence.

2.3 Regula-falsi method

Instead of using the average of the two initial guesses as we did with

the bisection scheme, we can attempt to approximate the function f (x)

by straight line (a linear approximation) since we know two points on

the function f (x). This is illustrated graphically in figure 2.2b with the

dashed line approximating the function. We can then determine the root,

x3 of this linear function, f˜(x̃). The equation for the dashed straight

line in figure 2.2b is

$$\tilde{f}(\tilde{x}): \qquad \frac{\tilde{x} - x_1}{\tilde{f} - f_1} = \frac{x_2 - x_1}{f_2 - f_1}$$

The improved estimate x3 is obtained by setting f̃ = 0 in the above equation. This results in,

$$x_3 = x_1 - f_1\,\frac{x_2 - x_1}{f_2 - f_1} \tag{2.5}$$

The value of the original function at x3, f(x3), clearly will not be zero (unless the scheme

has converged) as shown in figure 2.2b; but x3 will be closer to r than

either x1 or x2 . We can then retain x3 and one of x1 or x2 in such a

manner that the root r remains bracketed. This is achieved by following

the same logic as in the bisection algorithm to discard the x value that

does not bracket the root.

The MATLAB function bisect can be easily adapted to implement

Regula-Falsi method by merely replacing equation (2.3) with equation

(2.5) for step 2.

2.4 Secant method

The secant method uses the same linear approximation as the regula-falsi method, but differs by retaining the two most recent values

of x viz. x2 and x3 and always discarding the oldest value x1 . This

simple change from the Regula-Falsi scheme produces a dramatic dif-

ference in the rate of convergence. A MATLAB implementation of the

2.4. SECANT METHOD 30

function r=secant(Fun,x,tol,trace)
%SECANT find the root of a function "Fun" using secant scheme
% Fun - the name of the external function
% x - vector of length 2, (initial guesses)
% tol - error criterion
% trace - print intermediate results
%
% Usage secant('flash',[0,1])
% Here flash is the name of the external function.
% [0,1] is the initial guess

%Check inputs
if nargin < 4, trace=0; end
if nargin < 3, tol=eps; trace=0; end
if (length(x) ~= 2)
   error('Please provide two initial guesses')
end
f = feval(Fun,x);            %Cal. f at the two initial guesses
for i = 1:100                %Limit the number of iterations
   x3 = x(1) - f(1)*(x(2)-x(1))/(f(2)-f(1)); %Update (step 2)
   f3 = feval(Fun,x3);       %Cal. f(x3)
   x(1) = x(2); f(1) = f(2); %Keep the two most recent values
   x(2) = x3;  f(2) = f3;
   if trace, fprintf(1,'%3i %12.5f %12.5f\n', i,x3,f3); end
   if abs(f3) < tol, r=x3; return; end  %Check convergence (step 3)
end
error('Exceeded maximum number of iterations')

2.5. NEWTON’S METHOD 31

secant method is shown in figure 2.4. The convergence rate of the se-

cant method was analyzed by Jeeves (1958).

2.5 Newton's method

The Newton method is by far the most powerful and widely used algo-

rithm for finding the roots of nonlinear equations. A graphical repre-

sentation of the algorithm is shown in figure 2.2d. This algorithm also

relies on constructing a linear approximation of the function, but this is
achieved by taking the tangent to the function at a given point. Hence
this scheme requires only one initial guess, x1 . The linear function f̃(x̃),
shown by the dashed line in figure 2.2d, satisfies

( f̃ − f1 ) / ( x̃ − x1 ) = f1′

i.e., it is the line through (x1 , f1 ) with slope f1′ . Setting f̃ = 0 at
x̃ = x2 in this condition, one can solve the above equation for x2 as,

x2 = x1 − f1 /f1′                    (2.6)

which forms the iterative process (step 2). Note that this algorithm re-

quires that the derivative of the function be evaluated at every iteration,

which can be a computationally expensive operation.

While we have relied on the geometrical interpretation so far in con-

structing the algorithms, we can also derive Newton’s scheme from a

Taylor series expansion of a function. This is an instructive exercise,

for it will enable us to generalize the Newton’s scheme to higher dimen-

sional ( i.e., more than two equations) systems as well as provide some

information on the rate of convergence of the iterative scheme.

The Taylor series representation of a function around a reference

point, xi is,

f (xi + δ) = Σ_{k=0}^{∞} [ f (k) (xi ) / k! ] δ^k                    (2.7)

where f (k) (xi ) is the k-th derivative of the function at xi and δ is a small
displacement from xi . While the infinite series expansion is an exact
representation of the function, it requires all the higher order derivatives
of the function at the reference point. We can construct various levels of
approximation by truncating the series at finite terms. For example a
three-term expansion (k = 0, 1, 2) is

f̃(xi + δ) = f (xi ) + f ′ (xi )δ + f ″ (xi ) δ²/2! + O(δ³)

where the symbol O(δ3 ) stands as a reminder of the higher order terms

(three and above in this case) that have been neglected. The error in-

troduced by such omission of higher order terms is called truncation

error. In fact to derive the Newton scheme, we neglect the quadratic

term O(δ2 ) also. In figure 2.2d, taking the reference point to be xi = x1 ,

the displacement to be δ = xi+1 − xi , and recognising that f˜(xi + δ) = 0

we can rewrite the truncated two-term series as,

xi+1 = xi − f (xi )/f ′ (xi )
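As a sketch, the Newton iteration above can be written in a few lines of Python; the function, its derivative, and the tolerance below are illustrative:

```python
def newton(f, fprime, x, tol=1e-12, maxit=50):
    """Newton iteration x_{i+1} = x_i - f(x_i)/f'(x_i)."""
    for _ in range(maxit):
        fx = f(x)
        if abs(fx) < tol:
            return x
        # Each step requires one derivative evaluation
        x = x - fx / fprime(x)
    raise RuntimeError("exceeded maximum number of iterations")
```

Applied to f (x) = x² − x − 6 with f ′(x) = 2x − 1 and x1 = 5, it converges rapidly to the root r = 3.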

2.6 Muller's method

Muller's method carries this idea one step further
and is important at least from the point of illustrating such generalizations.
Instead of making two guesses and constructing an approximate
linear function as we did with the secant method, we can choose three
initial guesses and construct a quadratic approximation to the original

function and find the roots of the quadratic. A graphical representation

of this is shown in figure 2.5. The three initial guesses are (x0 , x1 , x2 )

and the corresponding function values are represented by (f0 , f1 , f2 )
respectively. We construct a second degree polynomial as,

p2 (v) = av 2 + bv + c

of a new independent variable v, which is merely a translation of the

original independent variable x by x0 . An alternate view is to regard

v as the distances measured from reference point x0 , so that v = 0

at this new origin. (a, b, c) are the coefficients of the quadratic that

must be determined in such a way that p2 (v) passes through the three

[Figure 2.5: Graphical illustration of Muller's scheme. A quadratic p2 (v) is fitted through the three points (x2 , f2 ), (x0 , f0 ) and (x1 , f1 ) of f (x), with h1 = x1 − x0 and h2 = x0 − x2 ; r is the root.]

Defining h1 = (x1 − x0 ) and
h2 = (x0 − x2 ) and requiring that the polynomial pass through the three
points, we get,

p2 (0)  = c = f0
p2 (h1 ) = ah1² + bh1 + c = f1
p2 (−h2 ) = ah2² − bh2 + c = f2

The reason for coordinate shift should be clear by now. This enables c

to be found directly. Solving the remaining two equations we obtain a

and b as follows:

a = [ γf1 − f0 (1 + γ) + f2 ] / [ γh1² (1 + γ) ]

b = ( f1 − f0 − ah1² ) / h1

where γ = h2 /h1 . So far we have only constructed an approximate rep-

resentation of the original function, f (x) ≈ p2 (v). The next step is to

find the roots of this approximate function, p2 (v) = 0. These are given

by,

v = r̃ − x0 = [ −b ± √(b² − 4ac) ] / (2a)

which, upon rationalizing the numerator, can be written as,

r̃ = x0 − 2c / [ b ± √(b² − 4ac) ]                    (2.8)

Since p2 (v) is a quadratic, there are clearly two roots r̃ in equation (2.8).
In order to take the root closest to x0 we choose the larger denominator
in equation (2.8). In summary, the sequential procedure for implementing
Muller's scheme is as follows:

• Guess (x0 , x1 , x2 ).

• Compute c = f (x0 ), then a and b from the expressions above.

• Compute the new root estimate r̃ from equation (2.8).

• Discard the oldest of the three guesses, retain
the new root in its place and repeat.

Note that Muller’s method converges almost quadratically (as does New-

ton’s scheme), but requires only one additional function evaluation at

every iteration which is comparable to the computational load of the se-

cant method. In particular derivative evaluation is not required, which is

a major advantage as compared to Newton’s method. Also, this scheme

can converge to complex roots even while starting with real initial guesses

as long as provision is made for handling complex arithmetic in the com-

puter program. MATLAB handles complex arithmetic quite naturally.
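The scheme above can be sketched in Python using the standard-library cmath module for complex arithmetic; the function names and guesses below are illustrative:

```python
import cmath

def muller(f, x0, x1, x2, tol=1e-10, maxit=50):
    """Muller's method: fit a quadratic through three points and step
    to the root of the quadratic closest to x0, equation (2.8)."""
    for _ in range(maxit):
        h1, h2 = x1 - x0, x0 - x2
        f0, f1, f2 = f(x0), f(x1), f(x2)
        gamma = h2 / h1
        a = (gamma * f1 - f0 * (1 + gamma) + f2) / (gamma * h1**2 * (1 + gamma))
        b = (f1 - f0 - a * h1**2) / h1
        c = f0
        disc = cmath.sqrt(b * b - 4 * a * c)   # may be complex
        # choose the larger denominator, i.e. the root closest to x0
        den = b + disc if abs(b + disc) > abs(b - disc) else b - disc
        root = x0 - 2 * c / den
        if abs(f(root)) < tol:
            return root
        x2, x1, x0 = x1, x0, root              # discard the oldest point
    raise RuntimeError("exceeded maximum number of iterations")

# Real starting guesses, yet the scheme lands on a complex root of x^2 + 1
root = muller(lambda x: x * x + 1, 0.5, 1.0, 0.0)
```

Note that even with purely real initial guesses the returned root is complex, illustrating the remark above.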

2.7 Fixed point iteration

Another class of iterative schemes is constructed by rearranging
the given equation f (x) = 0 into a form,

x = g(x)


Then, starting with a guess xi , we can evaluate g(xi ) from the right hand
side of the above equation and the result itself is regarded as a better
estimate of the root, i.e.,

xi+1 = g(xi )                    (2.9)

There is more than one way of rearranging f (x) = 0 into this form, so nothing makes
this process unique. For example, we can always let g(x) = x + f (x).

Such an iterative scheme need not always converge. Let us examine the

possible behavior of the iterates with a specific example. In particular we

will illustrate that different choices of g(x) lead to different behavior.

Consider the function

f (x) = x² − x − 6 = 0                    (2.10)

whose roots are r = 3 and r = −2. One rearrangement of it is,

x = √(x + 6)

so that g(x) = √(x + 6). The root can then be viewed as the intersection of two
curves, y = x (the left hand side) and y = √(x + 6) (the right hand side).
See figure 2.6 for a graphical illustration. Starting with an initial guess,
say x0 = 4, we compute x1 = g(x0 ) = √(x0 + 6). This is tantamount to

stepping between the y = x and y = g(x) curves as shown in figure

2.6a. It is clear that the sequence will converge monotonically to the

root r = 3. Table 2.1 shows the first ten iterates, starting with an
initial guess of x0 = 5.

Observe that the slope of the function at the root is g′ (r = 3) = 1/6 <
1. We will show shortly that the condition for convergence is indeed
|g′ (r )| < 1. As an alternate formulation consider rewriting equation

(2.10) as x = g(x) = 6/(x − 1). Now, g(x) has a singularity at x = 1. A

graphical illustration is shown in figure 2.6b. Using this new g(x), but

starting at the same initial guess x0 = 4 the sequence diverges initially in

an oscillatory fashion around the root r = 3, but eventually is attracted

to the other root at r = −2, also in an oscillatory fashion. Observe

that the slopes at the two roots are: g 0 (3) = −3/2 and g 0 (−2) = −2/3.

Both are negative and hence the oscillatory behavior. The one with abso-

lute magnitude greater than unity diverges and the other with absolute

magnitude less than unity converges. Finally consider the formulation

x = g(x) = (x 2 − 6). The behavior for this case is shown in figure 2.6c.

For reasons indicated above, the sequence will not converge to either

root! The following shows the MATLAB implementation for generating

the iterative sequence for the first case. Enter this into a file called g.m.

[Figure 2.6: Fixed point iteration for three rearrangements of equation (2.10): (a) g(x) = √(x + 6), with g′(r = 3) = 1/6 < 1 (convergent); (b) g(x) = 6/(x − 1), with |g′(r = 3)| = 3/2 > 1 and |g′(r = −2)| = 2/3 < 1; (c) g(x) = x² − 6, with g′(r = 3) = 6 > 1 (divergent).]


Iteration Number    Iterate
x1                  5.0000000
x2                  3.3166248
x3                  3.0523147
x4                  3.0087065
x5                  3.0014507
x6                  3.0002418
x7                  3.0000403
x8                  3.0000067
x9                  3.0000011
x10                 3.0000002

Table 2.1: The first ten iterates of xi+1 = √(xi + 6) starting with x0 = 5

function x=g(x)
for i=1:10
   fprintf(1,'%2i %12.5e\n',i,x); %print the iterates
   x=sqrt(x+6);  %also try x=6/(x-1) and x=(x^2-6) here
end

Invoke this function from within MATLAB with various initial guesses,

e.g., try initial guess of 5 by entering,

» g(5)
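The same experiment can be sketched in Python; the iteration count below is illustrative:

```python
import math

def fixed_point(g, x, n=25):
    """Iterate x_{i+1} = g(x_i) a fixed number of times."""
    for _ in range(n):
        x = g(x)
    return x

# g(x) = sqrt(x + 6): |g'(3)| = 1/6 < 1, so the iterates converge to r = 3.
# (The rearrangements g(x) = 6/(x - 1) and g(x) = x**2 - 6 behave very
# differently, as discussed above; the latter diverges from x0 = 5.)
root = fixed_point(lambda x: math.sqrt(x + 6), 5.0)
```

Starting from x0 = 5, the iterates reproduce the sequence of Table 2.1 and settle on r = 3.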

2.8 Error analysis and convergence acceleration

A simple error analysis can be developed for the fixed point iterative
scheme, which will provide not only a criterion for convergence, but also

clues for accelerating convergence with very little additional computa-

tional effort. We are clearly moving away from the realm of intuition to

the realm of analysis! Consider the fixed point iteration xi+1 = g(xi ).

After convergence to the root r we will have r = g(r ). Subtracting the

two equations we get,

xi+1 − r = g(xi ) − g(r ) = [ (g(xi ) − g(r )) / (xi − r ) ] (xi − r )

where, by the mean value theorem, the slope of the
chord can be replaced by the tangent to the curve at some suitable value
of x = ξ between r and xi , i.e.,

(g(xi ) − g(r )) / (xi − r ) = g′ (ξ),        r < ξ < xi

[Figure 2.7: The mean value theorem: the chord slope (g(xi ) − g(r ))/(xi − r ) equals the tangent slope g′ (ξ) at some ξ with r < ξ < xi .]

Defining the error at step i as ei = (xi − r ), the above equation can be written as,

ei+1 = g′ (ξ) ei                    (2.11)

This construction is illustrated graphically in figure 2.7. From equation (2.11), it is clear the error will decrease with every

iteration if the slope |g 0 (ξ)| < 1; otherwise the error will be amplified

at every iteration. Since the error in the current step is proportional to

that of the previous step, we conclude that the rate of convergence of

the fixed point iteration is linear. The development has been reasonably

rigorous so far. We now take a more pragmatic step and assume that

g′ (ξ) = K is a constant in the neighbourhood of the root r . (Will K
be the same constant at every iteration?) Then we have the sequence,

e2 = Ke1 ,    e3 = Ke2 = K²e1 ,    e4 = Ke3 = K³e1 ,    · · ·

or, in general,

en = K^(n−1) e1    or    xn = r + K^(n−1) e1                    (2.12)


It should be clear now that en → 0 as n → ∞ only if |K| < 1. We refer
to equation (2.12) as the error propagation solution since it provides a
means of estimating the error at any step n, provided the error at the
first step, e1 , and K are known. (Of course, if we knew the error in the
first step, none of this analysis would be necessary; r = x1 − e1 would
do it!)

We can develop a convergence acceleration scheme using the error
solution (2.12) to estimate the three unknowns (r , K, e1 ) in the second
form of equation (2.12). Once we have generated three iterates, (xn , xn+1 , xn+2 ),

we can use equation (2.12) to write down,

xn   = r + K^(n−1) e1
xn+1 = r + K^n e1                    (2.13)
xn+2 = r + K^(n+1) e1

These three equations can be solved for the three unknowns to
estimate r (and K, e1 as well). If K were to remain a true constant with

every iteration, r would be the correct root; since K is not a constant

in general, r is only an estimate of the root, hopefully a better estimate

than any of (xn , xn+1 , xn+2 ). Now let us proceed to construct a solution

for r from the above three equations. We will define a first order forward

difference operator ∆ as,

∆xn = xn+1 − xn

Like the derivative operator d/dx, the operator ∆ defines a rule: when
∆ operates on xn , the result is computed using the rule shown on the
right hand side. Now, if we apply the operator ∆ to xn+1 we should have,

∆xn+1 = xn+2 − xn+1

and applying ∆ twice in succession (in the spirit of second
order derivatives), we should get,

∆²xn = ∆(∆xn ) = ∆xn+1 − ∆xn = xn+2 − 2xn+1 + xn

You can verify that using equation (2.13) in the above definitions, we get,

(∆xn )² / ∆²xn = K^(n−1) e1 = xn − r

Solving for r ,

r = xn − (∆xn )² / ∆²xn = xn − (xn+1 − xn )² / (xn+2 − 2xn+1 + xn )

Thus the three iterates (xn , xn+1 , xn+2 ) can be plugged into the right

hand side of the above equation to get a better estimate of r .

Let us illustrate this acceleration scheme using the first three
iterates of table 2.1.

x1 = 5.0000000

x2 = 3.3166248

x3 = 3.0523147

∆x1 = (x2 − x1 ) = −1.6833752
∆x2 = (x3 − x2 ) = −0.26431010
∆²x1 = (∆x2 − ∆x1 ) = 1.41906510

r = x1 − (∆x1 )²/∆²x1 = 5.0000000 − (−1.6833752)²/1.41906510 = 3.0030852

Compare this with the fourth iterate produced in the original sequence,
x4 = 3.0087065.
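The acceleration formula above is trivial to script; here is a Python sketch that reproduces the worked example (the helper name is illustrative):

```python
def aitken(x0, x1, x2):
    """Accelerated estimate r = x0 - (dx0)**2 / d2x0 from three iterates."""
    dx0, dx1 = x1 - x0, x2 - x1
    return x0 - dx0 ** 2 / (dx1 - dx0)   # (dx1 - dx0) is the second difference

# The first three iterates of Table 2.1
r_acc = aitken(5.0000000, 3.3166248, 3.0523147)
```

The accelerated estimate r_acc ≈ 3.00309 is already closer to the root than the fourth plain iterate 3.0087065.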

The Newton scheme can itself be viewed as a special case of the fixed
point iteration scheme where g(x) has been specified in a special manner
as,

xn+1 = xn − f (xn )/f ′ (xn ) = g(xn )

Hence,

g′ (x) = 1 − [ (f ′ )² − f f ″ ] / (f ′ )² = f f ″ / (f ′ )² < 1

Since f (r ) = 0 at the root, we have g′ (r ) = 0 (provided we do not have
a pathological situation such as f ′ (r ) = 0) and the inequality should hold near the root r .

Thus the Newton method is guaranteed to converge as long as we have

a good initial guess. Having progressed this far, we can take the next


step and ask the question about the rate of convergence of the Newton

method. A Taylor series expansion of g(x) around r is,

g(xn ) = g(r ) + g′ (r )(xn − r ) + (g″ (r )/2)(xn − r )² + · · ·

Recognizing that en+1 = xn+1 − r = g(xn ) − r , en = (xn − r ) and

g 0 (r ) = 0, the truncated Taylor series expansion can be rearranged as,

en+1 = (g″ (r )/2) en²

which shows that the error at any step goes down as the square of the

previous step - i.e., quadratically! This manifests itself in the form of

doubling accuracy at every iteration.
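This error-squaring behavior is easy to observe numerically; the Python sketch below tracks |xi − r| for a few Newton steps (the test function and starting point are illustrative):

```python
def newton_errors(f, fprime, x, r, n=4):
    """Record the error |x_i - r| over a few Newton steps."""
    errs = [abs(x - r)]
    for _ in range(n):
        x = x - f(x) / fprime(x)
        errs.append(abs(x - r))
    return errs

# Newton on f(x) = x^2 - x - 6 from x = 5; the known root is r = 3
errs = newton_errors(lambda x: x * x - x - 6, lambda x: 2 * x - 1, 5.0, 3.0)
```

Each error is roughly the square of the previous one (here e_{n+1} ≈ 0.2 e_n², since g″(r)/2 = f″/(2f′) = 0.2 at this root), so the number of correct digits approximately doubles per step.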

2.9 Deflation technique

Once a root r has been found, to locate additional
roots of f (x) = 0, we can start with a different initial guess and

hope that the new initial guess lies within the region of attraction of a

root different from r . Choosing a different initial guess does not guar-

antee that the iteration scheme will not be attracted to the root already

discovered. In order to ensure that we stay away from the known root, r ,

we can choose to deflate the original function by constructing a modified

function,

g(x) = f (x)/(x − r )

which does not have r as a root. For a single equation the concepts are

best illustrated with a graphical example. Consider the illustration in

figure 2.8a where the original function,

f (x) := (x − 2) sin(2x) e^(−0.8x)

is plotted, with a root already located at r = 2. The corresponding
deflated function, g(x) = f (x)/(x − 2), is shown in figure 2.8b. Since

r = 2 turns out to be a simple root of f (x) = 0, the deflated function

g(x) = 0 does not contain the already discovered root at x = 2. Hence

starting with a different initial guess and applying an iterative method

like the secant or Newton scheme on the function g(x) will result in

convergence to another root. This process can obviously be repeated by

deflating successively found roots. For example if we know two roots r1

and r2 then a new function can be constructed as

h(x) = f (x) / [ (x − r1 )(x − r2 ) ].
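The construction is easy to sketch in Python for the example function of figure 2.8; the helper name deflate is illustrative:

```python
import math

def deflate(f, known_roots):
    """Return the deflated function g(x) = f(x) / prod(x - r_j)."""
    def g(x):
        denom = 1.0
        for r in known_roots:
            denom *= (x - r)
        return f(x) / denom
    return g

# The example function of figure 2.8a, with its simple root at x = 2
f = lambda x: (x - 2) * math.sin(2 * x) * math.exp(-0.8 * x)
g = deflate(f, [2.0])   # x = 2 is no longer a root of g
```

Evaluating g very close to x = 2 gives sin(4)e^(−1.6) ≈ −0.153 rather than zero, so an iterative scheme applied to g cannot be attracted back to the already-discovered root. (Evaluating g exactly at a deflated root is a 0/0 form, one of the computational hazards alluded to in the margin.)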

[Figure 2.8: Deflation examples. (a) f (x) = (x − 2) sin(2x)e^(−0.8x) with a simple root at x = 2; (b) its deflated function g(x) = f (x)/(x − 2); (c) f (x) = (x − 2)² sin(2x)e^(−0.8x) with a double root at x = 2; (d) its deflated function g(x) = f (x)/(x − 2), which still has x = 2 as a simple root.]

A drawback of deflation is the propagation of round off errors. (Can
you think of a way to alleviate this problem?) For example if the roots
r1 , r2 are known to only a few significant digits, then the definition of
the deflated function h(x) will inherit these errors and hence the roots
of h(x) = 0 will not be as accurate as those of the original equation
f (x) = 0.

Some care is also needed with repeated roots. A sketch of the function,

f (x) := (x − 2)² sin(2x) e^(−0.8x)

is shown in figure 2.8c. Here x = 2 is a double root - i.e., it occurs with
a multiplicity of two. Hence the deflated function g(x) = f (x)/(x − 2)
still has r = 2 as a simple root, as seen in figure 2.8d. (What
computational problem might this pose?)

2.10 Parameter continuation

2.10.2 Homotopy continuation

2.11 Software tools

2.11.1 MATLAB

The MATLAB function for determining roots of a polynomial is called

roots. You can invoke it by entering,

» roots(c)

where c is a vector containing the coefficients of the polynomial in the
form,

pn (x) = c1 x^n + c2 x^(n−1) + · · · + cn x + cn+1 .

Let us consider the factored form of the polynomial p3 (x) = (x + 2)(x +

i)(x − i) so that we know the roots are at (−2, ±i). To check whether

MATLAB can find the roots of this polynomial we need to construct the

coefficients of the expanded polynomial. This can be done with the
convolution function conv(f1,f2) as follows.

» f1 = [1 2]  %Here we define coeff of (x+2) as [1 2]
» f2 = [1 i]  %Here we define coeff of (x+i) as [1 i]
» f3 = [1 -i] %Here we define coeff of (x-i) as [1 -i]
» c=conv(conv(f1,f2),f3) % c contains coeff of polynomial
» r=roots(c)  %returns roots of polynomial defined by c

Note that the function roots finds all of the roots of a polynomial,

including complex ones.

The MATLAB function for finding a real root of any real, single non-

linear algebraic equation (not necessarily a polynomial) is called fzero.

You can invoke it by entering,

» fzero(’fn’,x)

where fn is the name of a m-file that defines the function, x is the initial

guess for the root. This fzero is not based on a very robust algorithm.

If the function you want to solve has singularities, or multiple roots, the

scheme fails to converge, often without producing any appropriate er-

ror or warning messages. Hence use with caution. After it produces an

2.11. SOFTWARE TOOLS 44

the root. As an example try the multicomponent flash problem consid-

ered previously. You are encouraged to try the following steps during a

MATLAB session.

»global K z;              % define K,z to be global
»K=[2 1.5 0.5 0.2];       % define K values
»z=[.25 .25 .25 .25];     % define z values
»root=fzero('flash',.5)   % solve x0=0.5,   => ans=0.0949
»flash(root)              % check solution
»root=fzero('flash',-.85) % solve x0=-0.85, => ans=-1.0000
»flash(root)              % check solution
»root=fzero('flash',1.85) % solve x0=1.85,  => ans=1.6698
»flash(root)              % check solution
»root=fzero('flash',1.90) % solve x0=1.90,  => ans=2.0
»flash(root)              % check solution

Note that the desired result is root=0.0949203, but starting with
different initial guesses, MATLAB produces different results! Why? Try
plotting the function over the range ψ ∈ [−5, 5] in MATLAB and see if
you can understand MATLAB's behavior! (Clue: sign change)

2.11.2 Mathematica

Mathematica is another powerful software package for mathematical
analysis including symbolic processing. It also has an interactive
environment; i.e., commands and functions are executed as soon as you enter
them. If you invoke Mathematica using a Graphical User Interface (GUI)
then the plotting functions will display the graphs. Otherwise you are
limited to the computational features of Mathematica. A complete
reference to Mathematica can be found in the book by Wolfram (1988).

The relevant Mathematica function here is
Solve. In MATLAB we constructed a m-file to define a function. In

Mathematica this is done in one line within the workspace. Anything

delimited by (* *) is treated as comments and ignored by Mathematica.

You may wish to work through the following exercise.

Observe that the first letter of all Mathematica functions is in
uppercase.

(* Define the multicomponent flash function f[psi]
   for a 4-component system.
   K[[i]], z[[i]] are called lists in Mathematica.
   Treat them as arrays. The Sum is carried out
   over the index range {1,N} *)
f[psi_]:=Sum[(K[[i]]-1)*z[[i]]/((K[[i]]-1)*psi+1), {i,4}]
K={2, 1.5, 0.5, 0.2}   (* Define K values *)
z={.25, .25, .25, .25} (* Define z values *)
Solve[f[x] == 0, x ]   (* Solve f(x)=0 *)
Plot[f[x],{x,-5,5}, Frame -> True] (* Plot f[x] over -5 to 5 *)
Plot[f[x],{x,0,.5}, Frame -> True] (* Plot f[x] over 0 to 0.5 *)

[Figure 2.9: The flash function f (ψ) plotted by Mathematica over ψ ∈ [−5, 5] (left) and ψ ∈ [0, 0.5] (right).]

Note that no initial guess is needed. Mathematica finds all the three

roots. They are:

{{x -> -1.57727}, {x -> 0.0949203}, {x -> 1.66984}}

Of course, only the second root is of interest; the others have no
physical relevance. Note also that while plotting functions, Mathematica
samples the function at a sufficient number of data points ( i.e., x-values)

to provide a smooth function. Graphs of the flash equation produced by

Mathematica are shown in figure 2.9.

2.12 Exercise problems

A sketch of a multicomponent flash process is shown in figure 1.5. The

following equation, which was derived in Chapter 1, models the multi-

component flash process. This is a single non-linear algebraic equation


Component          i    zi       Ki
Carbon dioxide     1    0.0046   1.650
Methane            2    0.8345   3.090
Ethane             3    0.0381   0.720
Propane            4    0.0163   0.390
Isobutane          5    0.0050   0.210
n-Butane           6    0.0074   0.175
Pentanes           7    0.0287   0.093
Hexanes            8    0.0220   0.065
Heptanes+          9    0.0434   0.036

Table 2.2: Feed composition & equilibrium ratio of a natural gas mixture

in the unknown ψ, which represents the fraction of feed that goes into

the vapor phase.

Σ_{i=1}^{N} (1 − Ki )zi / [ (Ki − 1)ψ + 1 ] = 0                    (2.14)

This equation has several roots, but not all of them have physical
meaning. Only the root for ψ ∈ [0, 1] is of interest. (For a bonus point
you may choose to find the number of roots for the test data above!)
The test data in Table 2.2 (taken from Katz et al.) relate to the flashing
of a natural gas stream at 1600 psia and 120°F. Determine the fraction
ψ using the secant algorithm given in figure 2.4 and another root finding
function that is provided in MATLAB named fzero.
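As a sketch of what such a computation looks like, here is equation (2.14) with the Table 2.2 data solved by the secant update in Python; the tolerance and the starting guesses 0.5 and 0.9 are illustrative, and only the root in [0, 1] is physically meaningful:

```python
# Feed composition z and equilibrium ratios K from Table 2.2
z = [0.0046, 0.8345, 0.0381, 0.0163, 0.0050, 0.0074, 0.0287, 0.0220, 0.0434]
K = [1.650, 3.090, 0.720, 0.390, 0.210, 0.175, 0.093, 0.065, 0.036]

def flash(psi):
    """Left hand side of equation (2.14) for the Table 2.2 mixture."""
    return sum((1 - Ki) * zi / ((Ki - 1) * psi + 1) for zi, Ki in zip(z, K))

def secant(f, x1, x2, tol=1e-12, maxit=200):
    """Secant scheme: retain the two most recent iterates."""
    f1, f2 = f(x1), f(x2)
    for _ in range(maxit):
        x3 = x1 - f1 * (x2 - x1) / (f2 - f1)
        f3 = f(x3)
        if abs(f3) < tol:
            return x3
        x1, f1, x2, f2 = x2, f2, x3, f3
    raise RuntimeError("exceeded maximum number of iterations")

psi = secant(flash, 0.5, 0.9)
```

Since f (0) < 0 and f (1) > 0 for this mixture, a physically meaningful vapor fraction lies in (0, 1), and the iteration settles on it.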

In a fluid mechanics course you might come across the Weymouth equa-

tion, which is used for relating the pressure drop vs. flow rate in a

pipeline carrying compressible gases. It is given by,

Qo = 433.54 (To /Po ) [ (P1² − P2²) / (L σ T ) ]^0.5 d^2.667 η                    (2.15)

where
To is the standard temperature = 520°R
Po is the standard (base) pressure, psia
P1 is the upstream pressure, (?), psia
P2 is the downstream pressure, (21.7), psia
L is the length of pipe = 0.1894 miles
σ is the specific gravity of gas (air = 1) = 0.7
T is the actual gas temperature = 530°R
d is the diameter of the pipe, (?) inches
η is the efficiency = 0.7 (a fudge factor!)

First, determine the unknown upstream
pressure using the secant (initial guess of [5, 45]) and fzero (initial
guess of 25) functions. Compare the flops, which stands for the
floating point operations.

Next, suppose the upstream pressure is
24.7 psia. Other conditions remaining the same as in the previous part,
determine the diameter of the pipe that should be used, using the
secant (initial guess of [4, 8]) and fzero (initial guess of 6) functions.
Compare the flops.

Consider a stagewise separation process shown in Figure 1.2. A model

for this process was developed in Chapter 1. The variables of interest

are (L, V , x0 , x1 , x2 , · · · , xn , y1 , y2 , y3 , · · · , yn+1 ). Under the assump-

tion of linear equilibrium model, yi = Kxi it is possible to successively

eliminate all of the variables and obtain the following single, analytical

expression relating the input, (x0 , yn+1 ), the output, xn , the separation

factor S = L/KV and the number of stages n.

(x0 − xn ) / (x0 − yn+1 /K) = [ (1/S)^(n+1) − (1/S) ] / [ (1/S)^(n+1) − 1 ]                    (2.16)

This is a single equation relating six variables, viz. (x0 , xn , yn+1 , K, S, n). Given

any five of these variables, we can solve for the 6th one. Your task is to

formulate this problem as a root finding problem of the type

f (x; p) = 0

where x is the chosen unknown and p is the parameter
set consisting of the remaining five known values. Write an m-file to
evaluate,

f (x; p) = (x0 − xn )/(x0 − yn+1 /K) − [ (1/S)^(n+1) − (1/S) ] / [ (1/S)^(n+1) − 1 ] = 0        (2.17)

1. In a typical design problem you might be given the flow rates, (say

L = 10, V = 10), the inlet compositions (say, x0 = 0.8, yn+1 = 0)

and a specified recovery (xn = 0.1615). Your task is to determine

the number of stages (n) required to meet the specifications. Take

the equilibrium ratio to be K = 0.8. Here the unknown variable x

is associated with n and the others form the parameter set. Solve

for n using secant and bisection methods using initial guesses of

[10, 30]. Report the number of iterations required for convergence
to the MATLAB built-in convergence tolerance of eps ≈ 10^−16 . You

can use the secant.m and bisect.m algorithms outlined in figures

2.4,2.3. You must construct a m-file to define the function rep-

resented by equation (2.17) in terms of the unknown. Here is a

sample function for the first case.

function f=KBS1(x)
% Kremser-Brown-Souders equation
% number of stages is unknown i.e. solve for x=n
x0 = 0.8; ynp1 = 0; xn = 0.1615;     %Known values
L = 10; V = 10; K = 0.8; S = L/K/V;  %Separation factor
m = length(x);
for i = 1:m
   n = x(i);
   f(i) = (x0-xn)/(x0-ynp1/K) - ( (1/S)^(n+1) - (1/S)) ...
          / ( (1/S)^(n+1) - 1);
end

Make sure that you understand what the above function does! In

the next two parts you will have to modify this function to solve

for a different unknown! Create a file named KBS1.m and enter the

above function. Then to solve the problem from within MATLAB

enter

» secant(’KBS1’,[10,30],eps,1)


You may wish to plot the function to graphically locate the root

using

» x=10:1:30;
» y=KBS1(x);
» plot(x,y)

or, in a single line,

» plot([10:1:30],KBS1(10:1:30))

2. In a rating problem you may need to simulate an existing
process with a known number of stages (say, n = 10). Suppose
x0 = 0.8, yn+1 = 0, L = 10, xn = 0.01488. Find the amount of gas

V that can be processed. Use an initial guess of [5, 20] and [5, 30]

with both secant and bisect algorithms. Record and comment on

your observations.

3. In another variation, the amount
of gas to be processed (V = 10) may be given. You will have to
determine the exit composition xn . Take n = 20, x0 = 0.8, yn+1 =

0, L = 10. Try initial guesses of [0, .2] and [0, 1] on both bisection

and secant algorithms. Record and comment on your observations.

The phase behavior of fluids can be predicted with the help of equations

of state. The one developed by Peng & Robinson is particularly well

tuned, accurate and hence is widely used. The equation is given below.

P = RT /(V − b) − a(T ) / [ V (V + b) + b(V − b) ]                    (2.18)

where

a(T ) = 0.45724 (R²Tc² /Pc ) α(Tr , ω),    b = 0.0778 (RTc /Pc ),    √α = 1 + m(1 − √Tr )

m = 0.37464 + 1.54226ω − 0.26992ω² ,    Tr = T /Tc ,    and    Z = P V /RT .

Equation (2.18) can be rearranged into a
cubic equation in Z,

Z³ − (1 − B)Z² + (A − 3B² − 2B)Z − (AB − B² − B³) = 0                    (2.19)

where A = aP /(RT )² and B = bP /(RT ).

Use equation (2.19) to compute the density of CO2 in gmole/lit at
P = 20.684 MPa and T = 299.82 K. The critical properties required
for CO2 are Tc = 304.2 K, Pc = 7.3862 MPa and ω = 0.225; R =
8314 Pa m³/(kmol K).

a) Use the function roots(c) in MATLAB to find all the roots of the cubic

equation (2.19) in terms of Z. In MATLAB, how does the function

roots differ from the function fzero?

b) Use the secant method to find the real roots of the above equation.

c) Taking the root found by either of these methods,
convert it into molar density and compare with the experimental
value of 20.814 gmole/lit.

d) Consider the case where you are given (P , V ) and you are asked to

find T . Develop and implement the Newton iteration to solve for

this case. Use the above equation to compute the temperature of

CO2 in K at P = 20.684 × 10^6 Pa and V = 0.04783 lit/gmole.

Compare the number of iterations required to obtain a solution to
a tolerance of ||f || < 10^−15 using an initial guess of T = 250 by the

Newton method with that required by the secant method with an

initial guess of [200,310].

e) Suppose that you are given (T , V ) and you are asked to find P ,

which form of equation will you choose? Eqn. (2.18) or Eqn.(2.19)?

What method of solution would you recommend?

Many engineering problems can be cast in the form of determining the

roots of a nonlinear algebraic equation. One such example arises in de-

termining the time required to cool a solid body at a given point to a

predetermined temperature level.

Consider a semi-infinite solid, initially at a temperature of Ti = 200°C,
one side of which is suddenly exposed to an ambient temperature of
Ta = 70°C. The heat transfer coefficient between the solid and
surroundings is h = 525 W/(m²·°C). The thermal conductivity of the solid
is k = 215 W/(m·°C) and the thermal diffusivity of the solid is α = 8.4 ×

[Figure 2.10: Parallel pipe system between points A and B, with total flow rate QT = 0.045 m³/s.]

10^−5 m²/s. Determine the time required to cool the solid at a distance

of x = 4 cm measured from the exposed surface, to T = 120°C. The

temperature profile as a function of time and distance is given by the

following expression.

θ = 1 − erf(ξ) − e^(hx/k + τ) [ 1 − erf(ξ + √τ) ]

where the dimensionless temperature is θ = (T − Ti )/(Ta − Ti ), ξ = x/(2√(αt))
and τ = h²αt/k² ; t is the time, x is the distance and erf is the error function.

Consider the flow of an incompressible (ρ = 1000 kg/m³), Newtonian
fluid (μ = 0.001 Pa·s) in a parallel pipe system shown in figure 2.10.

The lengths, diameters, roughness for the pipes as well as the total flow

rate are as shown in figure 2.10. Your task is to determine the individual

flow rates in each of the pipe segments 1 and 2. The equation to be

satisfied is obtained based on the fact that the pressure drop between

points A and B is the same. The equation is

f1 (v1 ) L1 v1² / (2D1 ) = f2 (v2 ) L2 v2² / (2D2 )                    (2.20)

where v1 , v2 are the velocities in the two pipes and f1 , f2 are the friction

factors given by the Churchill equation.

fi (vi ) = 8 [ (8/Rei )^12 + 1/(A + B)^1.5 ]^(1/12)

where

A = [ 2.457 ln( 1 / [ (7/Rei )^0.9 + 0.27(εi /Di ) ] ) ]^16 ,    B = (37530/Rei )^16    and    Rei = Di vi ρ/μ

In addition, the overall mass balance must be satisfied,

(π/4) (D1² v1 + D2² v2 ) = QT

This problem can be formulated as two equations in two unknowns (v1 , v2 ),
but your task is to pose it as a single equation in one unknown, v1 , by
rearranging equation 2.20 as,

F (v1 ) = f1 (v1 ) L1 v1² / (2D1 ) − f2 (v2 ) L2 v2² / (2D2 ) = 0

i.e., for a given guess of v1 , write a m-file that will calculate F (v1 ). Then
carry out the following calculations.

• Solve the problem using secant algorithm with initial guess of [4.5,5.5].

[Ans:v1 = 4.8703]

• Suppose the total flow rate is increased to 0.09 m3 /s, what will be

the new velocities in the pipes.

0.09, other values being the same. Is there a flow rate QT for

which the velocities in both pipes will be the same? If so what is

it? [Ans:QT = 0.0017]

of QT .

• Discuss the pros and cons of implementing the Newton method for

this problem.

Mathematics, rightly viewed, possesses not only

truth, but supreme beauty – a beauty cold and aus-

tere, like that of sculpture.

— BERTRAND RUSSELL

Chapter 3

Systems of linear algebraic equations

Topics from linear algebra form the core of numerical analysis. Almost
every conceivable problem, be it curve fitting, optimization, simulation
of flow sheets or simulation of distributed parameter systems requiring
solution of differential equations, requires at some stage the solution of a
system (often a large system!) of algebraic equations. MATLAB (acronym

for MATrix LABoratory) was in fact conceived as a collection of tools to

aid in the interactive learning and analysis of linear systems and was

derived from a well known core of linear algebra routines written in

FORTRAN called LINPACK.

In this chapter we review basic concepts from linear
algebra. We make frequent reference to the MATLAB implementation of
various concepts throughout this chapter. The reader is encouraged to try

out these interactively during a MATLAB session. For a more complete

treatment of topics in linear algebra see Hager (1985) and Barnett (1990).

The text by Amundson (1966) is also an excellent source with specific

examples drawn from Chemical Engineering. For a more rigorous, axiomatic
introduction within the framework of linear operator theory

see Ramakrishna and Amundson (1985).


3.1 Matrix notation

We have already used matrix notation to represent systems of linear
algebraic equations in a compact form in sections §1.3.1 and §1.3.2.

While a matrix, as an object, is represented in bold face, its constituent

elements are represented in index notation or as subscripted arrays in

programming languages. For example the following are equivalent.

A = [aij ],    i = 1, · · · , m;   j = 1, · · · , n

where aij represents the element of the matrix A
in row i and column j position. A vector can be thought of as an object

with a single row or column. A row vector is represented by,

x = [x1  x2  · · ·  xn ]

while a column vector is represented by,

    ⎡ y1 ⎤
    ⎢ y2 ⎥
y = ⎢  ⋮ ⎥
    ⎣ ym ⎦

These elements can be real or complex.

Having defined objects like vectors and matrices, we can extend the
notions of basic arithmetic operations between scalar numbers to higher
dimensional objects like vectors and matrices. The reasons for doing so

are many. It not only allows us to express a large system of equations

in a compact symbolic form, but a study of the properties of such ob-

jects allows us to develop and codify very efficient ways of solving and

analysing large linear systems. Packages like MATLAB and Mathematica

present to us a vast array of such codified algorithms. As an engineer

you should develop a conceptual understanding of the underlying
principles and the skills to use such packages. But the most important task
is to identify each element of a vector or a matrix, which is tied closely
to the physical description of the problem.

The arithmetic opeartions are defined both in symbolic form and using

index notation. The later actually provides the algorithm for implement-

ing the rules of operation using any programing language. The addition

operation between two matrices is defined as,

addition: C = A + B  ⇒  c_ij = a_ij + b_ij

Clearly all the matrices involved must have the same dimension. Note

that the addition operation is commutative as with its scalar counter

part. i.e.,

A+B=B+A

Matrix addition is also associative, i.e., independent of the order in which

it is carried out, e.g.,

A + B + C = (A + B) + C = A + (B + C)

Multiplication of a matrix A by a scalar k is defined as multiplication of every element of the matrix by the scalar, i.e.,

k A = [k a_ij],   i = 1, ···, m;  j = 1, ···, n

We can extend the multiplication rules as follows. The product of two matrices B (of dimension n × m) and C (of dimension m × r) is defined as,

multiplication: A = B C  ⇒  a_ij = Σ_{k=1}^{m} b_ik c_kj

(The MATLAB syntax for the product operator between matrices is A=B*C.)

and the resultant matrix has the dimension n × r . The operation in-

dicated in the index notation is carried out for each value of the free

indices i = 1 · · · n and j = 1 · · · r . The product is defined only if the

dimensions of B, C are compatible - i.e., number of columns in B should

equal the number of rows in C. This implies that while the product B

C may be defined, the product C B may not even be defined! Even when

they are dimensionally compatible, in general

B C ≠ C B
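The index-notation rule translates directly into code. The following Python sketch (illustrative only; in MATLAB the product is simply B*C) also demonstrates that the product is not commutative:

```python
def matmul(B, C):
    """Matrix product A = BC via a_ij = sum_k b_ik * c_kj."""
    n, m, r = len(B), len(C), len(C[0])
    assert len(B[0]) == m, "columns of B must equal rows of C"
    return [[sum(B[i][k] * C[k][j] for k in range(m)) for j in range(r)]
            for i in range(n)]

B = [[1, 2], [3, 4]]
C = [[0, 1], [1, 0]]
print(matmul(B, C))  # [[2, 1], [4, 3]]
print(matmul(C, B))  # [[3, 4], [1, 2]] -- BC differs from CB
```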

Multiplying a scalar number by unity leaves it unchanged. Extension

of this notion to matrices results in the definition of the identity matrix,

    [ 1  0  ···  0 ]
I = [ 0  1  ···  0 ]   ⇒   δ_ij = { 1   i = j
    [ ⋮       ⋱  ⋮ ]               { 0   i ≠ j
    [ 0  0  ···  1 ]

(The MATLAB function for producing an identity matrix of size N is I=eye(N).)

Multiplying a matrix with the identity matrix of compatible dimension leaves the original matrix unchanged, i.e.,

AI = A

There is no direct extension of the division operation to matrices. Division can instead be thought of as the inverse of the multiplication operation. For example, given a number, say 2, we can define its inverse x in such a way that the product of the two numbers produces unity, i.e., 2 × x = 1 or x = 2^{-1}. In a similar way, given a matrix A, can we define an inverse matrix B such that

A B = I   or   B = A^{-1}  ?

(The MATLAB function for finding the inverse of a matrix A is B=inv(A).) The question of whether such an inverse exists, and how to find it, will be addressed later in this chapter.

For a square matrix, powers of a matrix A can be defined as,

A² = A A,   A³ = A A A = A² A = A A²

Note that A^p A^q = A^{p+q} for positive integers p and q. (The MATLAB operator for producing the n-th power of a matrix A is Aˆn, while the syntax for producing element-by-element powers is A.ˆn. Make sure that you understand the difference between these two operations!)

Having extended the definition of powers, we can extend the definition of the exponential from scalars to square matrices as follows. For a scalar α it is,

e^α = 1 + α + α²/2 + ··· = Σ_{k=0}^{∞} α^k / k!

For a matrix A the exponential matrix can be defined as,

e^A = I + A + A²/2 + ··· = Σ_{k=0}^{∞} A^k / k!

(The MATLAB function exp(A) evaluates the exponential element-by-element, while expm(A) evaluates the true matrix exponential.)

One operation that does not have a direct counterpart in the scalar world is the transpose of a matrix. It is defined as the result of exchanging the rows and columns of a matrix, i.e.,

B = A′  ⇒  b_ij = a_ji

It is easy to verify that

(A + B)′ = A′ + B′

Something that is not so easy to verify, but nevertheless true, is

(A B)′ = B′ A′
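The distinction between the element-by-element exponential and the true matrix exponential can be checked numerically. A Python sketch using the truncated series (for illustration only; production codes such as MATLAB's expm use more robust algorithms than direct summation):

```python
import math

def mat_exp(A, terms=25):
    """Matrix exponential e^A = I + A + A^2/2! + ... via a truncated series."""
    n = len(A)
    term = [[float(i == j) for j in range(n)] for i in range(n)]  # k = 0 term: I
    total = [row[:] for row in term]
    for k in range(1, terms):
        # next series term: term <- term * A / k
        term = [[sum(term[i][p] * A[p][j] for p in range(n)) / k
                 for j in range(n)] for i in range(n)]
        total = [[total[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return total

A = [[0.0, 1.0], [0.0, 0.0]]   # nilpotent: A^2 = 0, so e^A = I + A exactly
print(mat_exp(A))               # [[1.0, 1.0], [0.0, 1.0]]
print([[math.exp(x) for x in row] for row in A])  # element-wise exp differs
```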

3.2 Matrices with special structure

A diagonal matrix D has non-zero elements only on the diagonal,

    [ d11  0    ···  0   ]
D = [ 0    d22  ···  0   ]
    [ ⋮          ⋱   ⋮   ]
    [ 0    0    ···  dnn ]

A lower triangular matrix L has non-zero elements on or below the diagonal,

    [ l11  0    ···  0   ]
L = [ l21  l22  ···  0   ]
    [ ⋮          ⋱   ⋮   ]
    [ ln1  ln2  ···  lnn ]

An upper triangular matrix U has non-zero elements on or above the diagonal,

    [ u11  u12  ···  u1n ]
U = [ 0    u22  ···  u2n ]
    [ ⋮          ⋱   ⋮   ]
    [ 0    0    ···  unn ]

A tridiagonal matrix T has non-zero elements on the diagonal and one off-diagonal row on each side of the diagonal,

    [ t11  t12  0    ···       0         ]
    [ t21  t22  t23  ···       0         ]
T = [ ⋮    ⋱    ⋱    ⋱         ⋮         ]
    [ 0    ···  t_{n-1,n-2}  t_{n-1,n-1}  t_{n-1,n} ]
    [ 0    ···  0    t_{n,n-1}  t_{n,n}  ]

A sparse matrix does not have a specific structure such as the above, but has only a small number (typically 10 to 15%) of non-zero elements.

3.3 Determinant

The determinant is defined only for a square matrix. It is a scalar value associated with the matrix that does not change with certain row or column operations - i.e., it is one of the scalar invariants of the matrix. In the context of solving a system of linear equations, the determinant of the coefficient matrix indicates whether the system is solvable uniquely. It is formed by summing all possible products formed

by choosing one and only one element from each row and column of the

matrix. The precise definition, taken from Amundson (1966), is

det(A) = |A| = Σ (−1)^h (a_{1l1} a_{2l2} ··· a_{nln})      (3.1)

where the sum is taken over all products whose second subscripts {l1, l2, ···, ln} are chosen

such that only one element appears from each row and column. The

summation involves a total of n! terms accounted for as follows: for the

first element l1 in the product there are n choices, followed by (n − 1)

choices for the second element l2 , (n − 2) choices for the third element

l3 etc. resulting in a total of n! choices for a particular product. Note

that in this way of counting, the set of second subscripts {l1 , l2 , · · · ln }

will contain all of the numbers in the range 1 to n, but they will not be in

their natural order {1, 2, ···, n}. Hence, h is the number of permutations required to arrange {l1, l2, ···, ln} in their natural order. (The MATLAB function for computing the determinant of a square matrix is det(A).)

This definition is neither intuitive nor computationally efficient. But it is instructive in understanding the following properties of determinants.

1. The determinant of a diagonal matrix D is simply the product of

all the diagonal elements, i.e.,

det(D) = Π_{k=1}^{n} d_kk

2. A little thought should convince you that it is the same for lower

or upper triangular matrices as well, viz.

det(L) = Π_{k=1}^{n} l_kk

3. It should also be clear that if all the elements of any row or column

are zero, then the determinant is zero.

4. If every element of any row or column of a matrix is multiplied

by a scalar, it is equivalent to multiplying the determinant of the

original matrix by the same scalar, i.e.,

| k a11  k a12  ···  k a1n |   | a11  a12  ···  k a1n |
| a21    a22    ···  a2n   | = | a21  a22  ···  k a2n | = k det(A)
| ⋮                   ⋮    |   | ⋮                ⋮   |
| an1    an2    ···  ann   |   | an1  an2  ···  k ann |

5. Replacing any row (or column) with a linear combination of that row (or column) and another row (or column) leaves the determinant unchanged.

6. If two rows (or two columns) of a matrix are identical, the determinant is zero.

7. Interchanging any two rows (or two columns) of a matrix results in a sign change of the determinant.

A definition of determinant that you might have seen in an earlier linear

algebra course is

det(A) = |A| = Σ_{k=1}^{n} a_ik A_ik   (for any row i)
             = Σ_{k=1}^{n} a_kj A_kj   (for any column j)      (3.2)

where A_ik = (−1)^{i+k} M_ik is the cofactor and M_ik is the minor of A obtained by deleting the i-th row and k-th column of A. Note that the expansion in equation (3.2) can be carried out along any row i or column j of the original matrix A.

Example

Consider the matrix derived in Chapter 1 for the recycle example, viz.

equation (1.8). Let us calculate the determinant of the matrix using the

Laplace expansion algorithm around the first row.

         | 1   0  0.306 |
det(A) = | 0   1  0.702 |
         | −2  1  0     |

       = 1 × | 1  0.702 | + (−1)^{1+2} × 0 × | 0   0.702 | + (−1)^{1+3} × 0.306 × | 0   1 |
             | 1  0     |                    | −2  0     |                        | −2  1 |

       = 1 × (−0.702) + 0 + 0.306 × 2 = −0.09
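The cofactor expansion above is easy to check numerically. A recursive Python sketch of the Laplace expansion along the first row (fine for small matrices, though far less efficient than the LU-based approach used by MATLAB's det):

```python
def det(A):
    """Determinant by Laplace expansion along the first row."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0.0
    for k in range(n):
        # minor: delete row 1 and column k+1
        minor = [row[:k] + row[k+1:] for row in A[1:]]
        total += (-1) ** k * A[0][k] * det(minor)
    return total

A = [[1, 0, 0.306], [0, 1, 0.702], [-2, 1, 0]]
print(det(A))   # approximately -0.09
```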

The result can be verified with MATLAB:

»A=[1 0 0.306; 0 1 0.702; -2 1 0]; % define the matrix of equation (1.8)
»det(A) % calculate the determinant

3.4 Direct methods

Consider a 2 × 2 system of equations,

" #" # " #

a11 a12 x1 b1

=

a21 a22 x2 b2

det(A) x1 = det(A(1))

where the matrix A(1) is obtained from A after replacing the first column

with the vector b. i.e.,

b a

1 12

A(1) =

b2 a22

det(A(1)) det(A(k)) det(A(n))

x1 = , ··· xk = , ··· xn = .

det(A) det(A) det(A)

where A(k) is an n × n matrix obtained from A by replacing the kth

column with the vector b. It should be clear from the above that, in order

to have a unique solution, the determinant of A should be non-zero. If

the determinant is zero, then such matrices are called singular.

Example

Continuing with the recycle problem (equation (1.8) of Chapter 1), solu-

tion using Cramer’s rule can be implemented with MATLAB as follows:

            [ 1   0  0.306 ] [ x1 ]   [ 101.48 ]
A x = b  ⇒  [ 0   1  0.702 ] [ x2 ] = [ 225.78 ]
            [ −2  1  0     ] [ x3 ]   [ 0      ]

»A=[1 0 0.306; 0 1 0.702; -2 1 0]; % Define the matrix A
»b=[101.48 225.78 0]’ % Define right hand side vector b

»A1=[b, A(:,[2 3])] % Define A(1)

»A2=[A(:,1),b, A(:, 3)] % Define A(2)

»A3=[A(:,[1 2]), b ] % Define A(3)

»x(1) = det(A1)/det(A) % solve for component x(1)
»x(2) = det(A2)/det(A) % solve for component x(2)
»x(3) = det(A3)/det(A) % solve for component x(3)

»norm(A*x’-b) % Check residual
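The same calculation can be sketched in Python (the determinant helper is a Laplace expansion, used here only because the matrices are small):

```python
def det(A):
    """Determinant by Laplace expansion along the first row."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** k * A[0][k] * det([row[:k] + row[k+1:] for row in A[1:]])
               for k in range(len(A)))

def cramer(A, b):
    """Solve Ax = b by Cramer's rule: x_k = det(A(k)) / det(A)."""
    d = det(A)
    # A(k): replace column k of A with the vector b
    return [det([row[:k] + [b[i]] + row[k+1:] for i, row in enumerate(A)]) / d
            for k in range(len(A))]

A = [[1, 0, 0.306], [0, 1, 0.702], [-2, 1, 0]]
b = [101.48, 225.78, 0.0]
print(cramer(A, b))   # approximately [23.892, 47.784, 253.556]
```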

Earlier we defined the inverse B of a matrix A as one that, when multiplied by A, produces the identity matrix - i.e., AB = I; but we did not develop a scheme for finding B. We can do so now by combining Cramer's rule and Laplace expansion for a determinant as follows. Using Laplace expansion of the determinant of A(k) around column k,

det(A(k)) = b1 A_1k + b2 A_2k + ··· + bn A_nk

where Aik are the cofactors of A. The components of the solution vector,

x are,

x1 = (b1 A11 + b2 A21 + · · · + bn An1 )/det(A)

The right hand side of this system of equations can be written as a vector

matrix product as follows,

[ x1 ]            [ A11  A21  ···  An1 ] [ b1 ]
[ x2 ]      1     [ A12  A22  ···  An2 ] [ b2 ]
[ ⋮  ]  = ------  [ ⋮               ⋮  ] [ ⋮  ]
[ xn ]    det(A)  [ A1n  A2n  ···  Ann ] [ bn ]

or

x = B b

where the matrix B is the inverse of A,

               1      [ A11  A21  ···  An1 ]
B = A^{-1} = ------   [ A12  A22  ···  An2 ]  =  adj(A) / det(A)
             det(A)   [ ⋮               ⋮  ]
                      [ A1n  A2n  ···  Ann ]

The above equation can be thought of as the definition for the adjoint

of a matrix. It is obtained by simply replacing each element with its

cofactor and then transposing the resulting matrix.

For example, the inverse of a diagonal matrix,

    [ d11  0    ···  0   ]
D = [ 0    d22  ···  0   ]
    [ ⋮          ⋱   ⋮   ]
    [ 0    0    ···  dnn ]

is given by,

         [ 1/d11  0      ···  0     ]
D^{-1} = [ 0      1/d22  ···  0     ]
         [ ⋮              ⋱   ⋮     ]
         [ 0      0      ···  1/dnn ]

which is easily verified from the definition D D^{-1} = I.

Similarly, if U is an upper triangular matrix, then the elements of V = U^{-1} can be found sequentially in an efficient manner by simply using the definition U V = I. This equation, in expanded form, is

[ u11  u12  ···  u1n ] [ v11  v12  ···  v1n ]   [ 1  0  ···  0 ]
[ 0    u22  ···  u2n ] [ v21  v22  ···  v2n ] = [ 0  1  ···  0 ]
[ ⋮          ⋱   ⋮   ] [ ⋮               ⋮  ]   [ ⋮       ⋱  ⋮ ]
[ 0    0    ···  unn ] [ vn1  vn2  ···  vnn ]   [ 0  0  ···  1 ]

We can develop the algorithm ( i.e., find out the rules) by simply carry-

ing out the matrix multiplication on the left hand side and equating it

First let us establish that V is also upper triangular, i.e.,

v_ij = 0,   i > j      (3.3)

Consider the (n, 1) element of the product, formed by summing the product of each element of the n-th row of U (consisting mostly of zeros!) with the corresponding element of the 1st column of V. The only non-zero term in this product is

unn vn1 = 0

Since u_nn ≠ 0 it is clear that v_n1 = 0. Carrying out similar arguments in a sequential manner it is easy to verify equation (3.3) and thus establish that V is also upper triangular.

The non-zero elements of V can also be found in a sequential manner

as follows. For each of the diagonal elements (i, i) summing the product

of each element of i-th row of U with the corresponding element of the

i-th column of V , the only non-zero term is,

v_ii = 1 / u_ii,   i = 1, ···, n      (3.4)

Next, for each of the upper elements (i, j) summing the product of

each element of i-th row of U with the corresponding element of the j-th

column of V , we get,

u_ii v_ij + Σ_{r=i+1}^{j} u_ir v_rj = 0

which can be solved for v_ij as,

v_ij = −(1/u_ii) Σ_{r=i+1}^{j} u_ir v_rj,   j = 2, ···, n;  i = j−1, j−2, ···, 1      (3.5)

These equations must be evaluated in a specific order; otherwise, the right hand side may involve unknown elements v_rj. First, all of the diagonal elements of V (viz. v_ii) must be calculated from equation (3.4), as they are needed on the right hand side of equation (3.5). Next the order indicated in equation (3.5), viz. increasing j from 2 to n and, for each j, decreasing i from (j − 1) to 1, should be obeyed to avoid having unknown values appear on the right hand side of (3.5).
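The same sequencing can be sketched in Python. This is an illustrative translation of equations (3.4) and (3.5), not the book's MATLAB routine:

```python
def invu(u):
    """Invert an upper triangular matrix by the sequential scheme (3.4)-(3.5)."""
    n = len(u)
    v = [[0.0] * n for _ in range(n)]
    for i in range(n):                    # diagonal elements first, eq. (3.4)
        v[i][i] = 1.0 / u[i][i]
    for j in range(1, n):                 # then each column j = 2..n, eq. (3.5)
        for i in range(j - 1, -1, -1):    # with i decreasing from j-1 down to 1
            s = sum(u[i][r] * v[r][j] for r in range(i + 1, j + 1))
            v[i][j] = -s / u[i][i]
    return v

U = [[1, 2, 3, 4], [0, 2, 3, 1], [0, 0, 1, 2], [0, 0, 0, 4]]
V = invu(U)
# the product U V should reproduce the identity matrix
```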

A MATLAB implementation is shown in figure 3.1 to illustrate precisely the order of the calculations. Note that the built-in, general purpose MATLAB inverse function (viz. inv(U)) does not take into account the special structure of a triangular matrix and hence is computationally more expensive than the invu function of figure 3.1.

This is illustrated with the following example.

Example

Consider the upper triangular matrix,

    [ 1  2  3  4 ]
U = [ 0  2  3  1 ]
    [ 0  0  1  2 ]
    [ 0  0  0  4 ]

Let us find its inverse using both the built-in MATLAB function inv(U) and the function invu(U) of figure 3.1 that is applicable specifically to an upper triangular matrix. You can also compare the floating point operation count for each of the algorithms. Work through the following example using MATLAB. Make sure that the function invu of figure 3.1 is in the search path of MATLAB.

»U=[1 2 3 4; % Definition of matrix U begins
»0 2 3 1;

»0 0 1 2;

»0 0 0 4]; % Definition of matrix A complete

»flops(0); % initialize flop count

»inv(U) % MATLAB built-in function

»flops % examine flops for MATLAB built-in function (ans: 131)

»flops(0); % initialize flop count

»invu(U) % inverse of upper triangular matrix

»flops % examine flops for gauss solver (ans: 51)

3.4.3 Gaussian elimination

Gaussian elimination is one of the most efficient algorithms for solving

a large system of linear algebraic equations. It is based on a systematic

generalization of a rather intuitive elimination process that we routinely

apply to a small, say (2 × 2), system, e.g.,

10x1 + 2x2 = 4


function v=invu(u)
% Invert upper triangular matrix
% u - (nxn) matrix
% v - inv(u)
n=length(u); % get dimension of u
for i=2:n
for j=1:i-1
v(i,j)=0;
end
end
for i=1:n
v(i,i)=1/u(i,i);
end
for j=2:n
for i=j-1:-1:1
v(i,j) = -1/u(i,i)*sum(u(i,i+1:j)*v(i+1:j,j));
end
end

Figure 3.1: MATLAB implementation of the algorithm to invert an upper triangular matrix


x1 + 4 x2 = 3

In the first phase, we solve the first equation for x1 = (4 − 2 x2)/10 and use it to eliminate the variable x1 from the second equation, viz. (4 − 2 x2)/10 + 4 x2 = 3, which is solved to get x2 = 0.6842. In the second phase, the value of x2 is back substituted into the first equation and we get x1 = 0.2632. We could have reversed the order and eliminated x1 from the first equation after rearranging the second equation as x1 = (3 − 4 x2).

Thus there are two phases to the algorithm: (a) forward elimination of

one variable at a time until the last equation contains only one unknown;

(b) back substitution of variables. Also, note that we have used two rules

during the elimination process: (i) two equations (or two rows) can be

interchanged as it is merely a matter of bookkeeping and it does not in

any way alter the problem formulation, (ii) we can replace any equation

with a linear combination of itself with another equation. A conceptual

description of a naive Gaussian elimination algorithm is shown in figure

3.2. All of the arithmetic operations needed to eliminate one variable at

a time are identified in the illustration. Study that carefully.

The naive algorithm requires that none of the diagonal elements be zero, although this is not a requirement for the existence of a solution. The reason for avoiding zeros on the diagonal is to avoid division by zero in step 2 of the illustration in figure 3.2. If there are zeros on the diagonal, we can interchange two equations in such a way that the diagonals do not contain zeros. This process is called pivoting. Even if

agonals do not contain zeros. This process is called pivoting. Even if

we organize the equations in such a way that there are no zeros on the

diagonal, we may end up with a zero on the diagonal during the elimina-

tion process (likely to occur in step 3 of illustration 3.2). If that situation

arises, we can continue to exchange that particular row with another one

to avoid division by zero. If the matrix is singular we will eventually end

up with an unavoidable zero on the diagonal. This situation will arise if

the original set of equations is not linearly independent; in other words

the rank of the matrix A is less than n. Due to the finite precision of

computers, the floating point operation in step 3 of illustration 3.2 will

not result usually in an exact zero, but rather a very small number. Loss

of precision due to round off errors is a common problem with direct

methods involving large system of equations since any error introduced

at one step corrupts all subsequent calculations.

A MATLAB implementation of the naive Gaussian elimination algorithm is given in figure 3.3 through the function gauss.m. Note that it is merely for illustrating the concepts involved in the elimination process; the MATLAB backslash operator, \, provides a much more elegant solution to solve Ax = b within MATLAB.


[Figure 3.2: Illustration of the naive Gaussian elimination scheme.

STEP 1: Form the augmented matrix by appending b as column (n+1) of A.

STEP 2: Make the diagonal element of row i into 1.0:
for j=i+1:n+1;
a(i,j)=a(i,j)/a(i,i);
end

STEP 3: Make all elements in column i below the diagonal into 0:
for j=i+1:n
for k=i+1:n+1;
a(j,k)=a(j,k)- a(j,i)*a(i,k);
end
end

Steps 2 and 3 are repeated for i=1:n, leaving an upper triangular system with unit diagonal.

STEP 4: Back substitution:
a(n,n+1) = a(n,n+1)
for j=n-1:-1:1;
a(j,n+1) = a(j,n+1) - a(j,j+1:n)*a(j+1:n,n+1);
end

The solution is stored in column (n+1).]


function x=gauss(a,b)

% Naive Gaussian elimination. Cannot have zeros on diagonal

% a - (nxn) matrix

% b - column vector of length n

n=length(b); % get length of b
m=size(a,1); % get number of rows of a
if (m ~= n)

error(’a and b do not have the same number of rows’)

end

a(:,n+1)=b;

for i=1:n

%Step 2: make diagonal elements into 1.0

a(i,i+1:n+1) = a(i,i+1:n+1)/a(i,i);

for j=i+1:n

a(j,i+1:n+1) = a(j,i+1:n+1) - a(j,i)*a(i,i+1:n+1);

end

end

%Step 4: begin back substitution

for j=n-1:-1:1

a(j,n+1) = a(j,n+1) - a(j,j+1:n)*a(j+1:n,n+1);

end

%return solution

x=a(:,n+1);

Figure 3.3: MATLAB implementation of the naive Gaussian elimination algorithm


Example

Let us continue with the recycle problem (equation (1.8) of Chapter 1).

First we obtain solution using the built-in MATLAB linear equation solver

( viz. x = A\b and record the floating point operations (flops). Then

we solve with the Gaussian elimination function gauss and compare the

flops. Note that in order to use the naive Gaussian elimination function,

we need to switch the 2nd and 3rd equations to avoid division by zero.

»A=[1 0 0.306; % Definition of matrix A begins
»-2 1 0;

»0 1 0.702]; % Definition of matrix A complete

»b=[101.48 0 225.78]’ ;% Define right hand side column vector b

»flops(0) ; % initialize flop count

»A\b % solution is : [23.8920 47.7840 253.5556]

»flops % examine flops for MATLAB internal solver (ans: 71)

»flops(0) ; % initialize flop count

»gauss(A,b) % solution is, of course : [23.8920 47.7840 253.5556]

»flops % examine flops for gauss solver (ans: 75)

»flops(0) ; % initialize flop count

»inv(A)*b % obtain solution using matrix inverse

»flops % examine flops for MATLAB internal solver (ans: 102)
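For comparison, here is a Python sketch of the naive elimination of figure 3.2 applied to the same row-switched system; it is illustrative only and, like gauss.m, performs no pivoting:

```python
def gauss(A, b):
    """Naive Gaussian elimination (no pivoting; diagonal must stay non-zero)."""
    n = len(b)
    a = [row[:] + [b[i]] for i, row in enumerate(A)]   # augmented matrix [A | b]
    for i in range(n):
        piv = a[i][i]
        for j in range(i, n + 1):          # Step 2: make diagonal element 1.0
            a[i][j] /= piv
        for j in range(i + 1, n):          # Step 3: zero the column below it
            f = a[j][i]
            for k in range(i, n + 1):
                a[j][k] -= f * a[i][k]
    for j in range(n - 2, -1, -1):         # Step 4: back substitution
        a[j][n] -= sum(a[j][k] * a[k][n] for k in range(j + 1, n))
    return [a[j][n] for j in range(n)]

A = [[1, 0, 0.306], [-2, 1, 0], [0, 1, 0.702]]   # rows 2 and 3 switched
b = [101.48, 0.0, 225.78]
print(gauss(A, b))   # approximately [23.892, 47.784, 253.556]
```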

The need for pivoting can be illustrated with the following simple exam-

ple.

ε x1 + x2 = 1
x1 + x2 = 2

where ε is a small number, or in matrix form,

[ ε  1 ] [ x1 ]   [ 1 ]
[ 1  1 ] [ x2 ] = [ 2 ]

we first make the diagonal into unity, which results in

x1 + (1/ε) x2 = 1/ε


Next we eliminate the variable x1 from the 2nd equation, which results in,

(1 − 1/ε) x2 = 2 − 1/ε

Rearranging this and using back substitution we finally get x2 and x1 as,

x2 = (2 − 1/ε) / (1 − 1/ε)

x1 = 1/ε − x2/ε

The problem in computing x1 as ε → 0 should be clear now. As ε crosses

the threshold of finite precision of the computation (hardware or soft-

ware), taking the difference of two large numbers of comparable magni-

tude, can result in significant loss of precision. Let us solve the problem

once again after rearranging the equations as,

x1 + x2 = 2
ε x1 + x2 = 1

and apply Gaussian elimination once again. Since the diagonal element

in the first equation is already unity, we can eliminate x1 from the 2nd

equation to obtain,

(1 − ε) x2 = 1 − 2ε    or    x2 = (1 − 2ε) / (1 − ε)

Back substitution yields,

x1 = 2 − x2

Both these computations are well behaved as ε → 0.

We can actually demonstrate this using the MATLAB function gauss

shown in figure 3.3 and compare it with the MATLAB built-in function

A\b, which does use pivoting to rearrange the equations and minimize the loss of precision. The results are compared in table 3.1 for ε in the range of 10^{−15} to 10^{−17}. Since MATLAB uses double precision, this range of ε is the threshold for loss of precision. Observe that the naive Gaussian elimination produces incorrect results for ε ≤ 10^{−16}.
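The loss of precision can be reproduced in a few lines of Python (IEEE double precision, the same arithmetic MATLAB uses):

```python
eps = 1e-17   # below the double precision threshold (machine epsilon ~ 2.2e-16)

# Without pivoting:  eps*x1 + x2 = 1,  x1 + x2 = 2
x2 = (2 - 1/eps) / (1 - 1/eps)   # rounds to 1.0
x1 = (1 - x2) / eps              # catastrophic cancellation gives 0.0
naive = [x1, x2]

# With the equations interchanged (pivoting):  x1 + x2 = 2,  eps*x1 + x2 = 1
x2 = (1 - 2*eps) / (1 - eps)
x1 = 2 - x2
pivoted = [x1, x2]

print(naive, pivoted)   # [0.0, 1.0] [1.0, 1.0]
```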

Table 3.1: Results of Gaussian elimination with and without pivoting

ε             gauss(A,b) (without pivoting)    MATLAB A\b (with pivoting)
1 × 10^{−15}      [1 1]                            [1, 1]
1 × 10^{−16}      [2 1]                            [1, 1]
1 × 10^{−17}      [0 1]                            [1, 1]

Many problems, such as the stagewise separation problem we saw in section §1.3.1 or the solution of differential equations that we will see in later chapters, result in a tridiagonal matrix structure.

    [ d1   c1   0    ···  0    ]        [ x1      ]        [ b1      ]
    [ a1   d2   c2   ···  0    ]        [ x2      ]        [ b2      ]
T = [ ⋮    ⋱    ⋱    ⋱    ⋮    ]    x = [ ⋮       ]    b = [ ⋮       ]
    [ 0    0    a_{n−2}  d_{n−1}  c_{n−1} ]  [ x_{n−1} ]   [ b_{n−1} ]
    [ 0    ···  0    a_{n−1}  d_n  ]    [ xn      ]        [ bn      ]

Since we know where the zero elements are, we do not have to carry out

the elimination steps on those entries of the matrix T ; but the essential

steps in the algorithm remain the same as in the Gaussian elimination

scheme and are illustrated in figure 3.4. MATLAB implementation is

shown in figure 3.5.

Given a square matrix A of dimension n × n it is possible to write it as the product of two matrices B and C, i.e., A = BC. This process is called factorization and is in fact not at all unique - i.e., there are infinitely many possibilities for B and C. This is clear with a simple counting of

the unknowns - viz. there are 2×n2 unknown elements in B and C while

only n2 equations can be obtained by equating each element of A with

the corresponding element from the product BC.

The extra degrees of freedom can be used to specify any specific

structure for B and C. For example we can require B = L be a lower

triangular matrix and C = U be an upper triangular matrix.This process

is called LU factorization or decomposition. Since each triangular matrix

has n × (n + 1)/2 unknowns, we still have a total of n² + n unknowns. The extra n degrees of freedom are often used in one of three ways:

[Figure 3.4: Illustration of the Thomas algorithm for a tridiagonal system with lower diagonal a, diagonal d, upper diagonal c and right hand side b.

STEP 1: Eliminate the lower diagonal elements:
for j=2:n
d(j) = d(j) - {a(j-1)/d(j-1)}*c(j-1)
b(j) = b(j) - {a(j-1)/d(j-1)}*b(j-1)
end

STEP 2: Back substitution:
b(n) = b(n)/d(n)
for i=n-1:-1:1
b(i) = {b(i) - c(i)*b(i+1)}/d(i);
end

The solution is stored in b.]

function x=thomas(a,b,c,d)

% Thomas algorithm for tridiagonal systems

% d - diagonal elements, n

% b - right hand side forcing term, n

% a - lower diagonal elements, (n-1)

% c - upper diagonal elements, (n-1)

na=length(a); % get length of a
nb=length(b); % get length of b
nc=length(c); % get length of c
nd=length(d); % get length of d
if (nd ~= nb | na ~= nc | (nd-1) ~= na)

error(’array dimensions not consistent’)

end

n=length(d);

%Step 1: forward elimination

for i=2:n

fctr=a(i-1)/d(i-1);

d(i) = d(i) - fctr*c(i-1);

b(i) = b(i) - fctr*b(i-1);

end

%Step 2: back substitution
b(n) = b(n)/d(n);

for j=n-1:-1:1

b(j) = (b(j) - c(j)*b(j+1))/d(j);

end

%return solution

x=b;

Figure 3.5: MATLAB implementation of the Thomas algorithm for tridiagonal systems
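An equivalent Python sketch of the Thomas algorithm, exercised on a small tridiagonal test system (the test data are illustrative, not from the text):

```python
def thomas(a, b, c, d):
    """Thomas algorithm: lower diag a, rhs b, upper diag c, diagonal d."""
    n = len(d)
    b, d = b[:], d[:]                 # work on copies
    for i in range(1, n):             # Step 1: forward elimination
        f = a[i - 1] / d[i - 1]
        d[i] -= f * c[i - 1]
        b[i] -= f * b[i - 1]
    b[n - 1] /= d[n - 1]              # Step 2: back substitution
    for j in range(n - 2, -1, -1):
        b[j] = (b[j] - c[j] * b[j + 1]) / d[j]
    return b

# -1/2/-1 tridiagonal system whose exact solution is x = [1, 2, 3]
a = [-1.0, -1.0]; c = [-1.0, -1.0]; d = [2.0, 2.0, 2.0]
rhs = [0.0, 0.0, 4.0]
print(thomas(a, rhs, c, d))   # approximately [1.0, 2.0, 3.0]
```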

1. The diagonal elements of L are required to be unity.

2. The diagonal elements of U are required to be unity.

3. The diagonal elements of L are required to equal those of U - i.e., lii = uii.

While the above count establishes that it is possible to factorize a matrix into a product of lower and upper triangular matrices, it does not tell us how to find the unknown elements.

Revisiting the Gaussian elimination method from a different perspective will show the connection between LU factorization and Gaussian elimination. Note that the algorithm outlined in section §3.4.3 is the most computationally efficient scheme for implementing Gaussian elimination. The method to be outlined below is not computationally efficient, but it is a useful conceptual aid in showing the connection between Gaussian elimination and LU factorization. Steps 2 and 3 of figure 3.2, which involve making the diagonal into unity and all the elements below the diagonal into zero, are equivalent to pre-multiplying A by L1 - i.e., L1 A = U1, or,

[ 1/a11     0  ···  0 ] [ a11  a12  ···  a1n ]   [ 1  a12^(1)  ···  a1n^(1) ]
[ −a21/a11  1  ···  0 ] [ a21  a22  ···  a2n ] = [ 0  a22^(1)  ···  a2n^(1) ]
[ ⋮             ⋱     ] [ ⋮               ⋮  ]   [ ⋮                ⋮       ]
[ −an1/a11  0  ···  1 ] [ an1  an2  ···  ann ]   [ 0  an2^(1)  ···  ann^(1) ]

In a similar manner a second matrix L2 can be constructed to normalize the second diagonal element and clear the entries below it - i.e., L2 U1 = U2 or, in expanded form,

[ 1  0                  0  ···  0 ] [ 1  a12^(1)  a13^(1)  ···  a1n^(1) ]   [ 1  a12^(1)  a13^(1)  ···  a1n^(1) ]
[ 0  1/a22^(1)          0  ···  0 ] [ 0  a22^(1)  a23^(1)  ···  a2n^(1) ] = [ 0  1        a23^(2)  ···  a2n^(2) ]
[ 0  −a32^(1)/a22^(1)   1  ···  0 ] [ 0  a32^(1)  a33^(1)  ···  a3n^(1) ]   [ 0  0        a33^(2)  ···  a3n^(2) ]
[ ⋮                         ⋱     ] [ ⋮                          ⋮      ]   [ ⋮                          ⋮      ]
[ 0  −an2^(1)/a22^(1)   0  ···  1 ] [ 0  an2^(1)  an3^(1)  ···  ann^(1) ]   [ 0  0        an3^(2)  ···  ann^(2) ]

Repeating this process for each of the n columns produces the sequence,

L1 A = U1
L2 U1 = U2
L3 U2 = U3
···
Ln Un−1 = Un = U


Note that each Lj is a lower triangular matrix with non-zero elements on

the j-th column and unity on other diagonal elements. Eliminating all of

the intermediate Uj we obtain,

(Ln Ln−1 ··· L2 L1) A = U

Since the product of lower triangular matrices is yet another lower triangular matrix, we can write the above equation as,

L A = U,   where L = Ln Ln−1 ··· L2 L1

The inverse of a lower triangular matrix is also lower triangular - i.e., L̂ = L^{−1} is lower triangular. Hence a given square matrix A can be factored into a product of a lower and an upper triangular matrix as,

A = L^{−1} U = L̂ U

While the above development provides a scheme for constructing both L̂ and U, it is quite inefficient. A more direct and efficient algorithm is developed next in section §3.4.6.

3.4.6 LU decomposition

Consider the product of L and U as shown in the expanded form below. All of the elements of L and U are unknown. By carrying out the matrix product on the left hand side and equating element-by-element to the right hand side, we can develop a sufficient number of equations to find all of the unknown elements on the left hand side. The trick, however, is (as we did with inverting a triangular matrix) to carry out the calculations in a particular sequence, so that no more than one unknown appears in each equation.

[ l11  0    0    ···  0   ] [ 1  u12  u13  ···  u1n ]   [ a11  a12  ···  a1n ]
[ l21  l22  0    ···  0   ] [ 0  1    u23  ···  u2n ]   [ a21  a22  ···  a2n ]
[ l31  l32  l33  ···  0   ] [ 0  0    1    ···  u3n ] = [ a31  a32  ···  a3n ]
[ ⋮               ⋱   ⋮   ] [ ⋮            ⋱    ⋮   ]   [ ⋮               ⋮  ]
[ ln1  ln2  ln3  ···  lnn ] [ 0  0    0    ···  1   ]   [ an1  an2  ···  ann ]

Carrying out the matrix multiplication and equating element-by-element, the first column of L is obtained from,

l_i1 = a_i1,   i = 1, ···, n      (3.6)

Next, equating the first row, the elements of U are,

u_1j = a_1j / l_11,   j = 2, ···, n      (3.7)

(Note that it would be inefficient to proceed next to the 2nd column of L. Why?)

In general, alternating between a column of L and a row of U, the expression for any element i in column j of L is,

l_ij = a_ij − Σ_{k=1}^{j−1} l_ik u_kj,   j = 2, ···, n;  i = j, ···, n      (3.8)

and that for element i in row j of U is,

u_ji = [ a_ji − Σ_{k=1}^{j−1} l_jk u_ki ] / l_jj,   j = 2, ···, n;  i = j + 1, ···, n      (3.9)

In order to illustrate the implementation of equations (3.6)-(3.9) as an algorithm, a MATLAB function called LU.m is shown in figure 3.6. Note that MATLAB provides a built-in function for LU decomposition called lu(A).

Recognizing that A can be factored into the product LU, one can im-

plement an efficient scheme for solving a system of linear algebraic equa-

tions Ax = b repeatedly, particularly when the matrix A remains un-

changed, but different solutions are required for different forcing terms

on the right hand side, b. The equation

Ax = b

can be written as

L U x = b   ⇒   U x = L^{−1} b = b′

and hence

x = U^{−1} b′

Note that both triangular factors are stored in the LU factored matrix and, as we saw earlier, it is relatively efficient to invert triangular matrices. Hence two additional vector-matrix products provide a solution for each new value of b.
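The reuse of factors can be sketched in Python. The factorization below follows equations (3.6)-(3.9) (unit diagonal on U); the solve phase is one forward and one back substitution per right hand side:

```python
def lu_factor(A):
    """LU factorization with unit diagonal on U, per equations (3.6)-(3.9)."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[float(i == j) for j in range(n)] for i in range(n)]
    for j in range(n):
        for i in range(j, n):         # column j of L, equation (3.8)
            L[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(j))
        for i in range(j + 1, n):     # row j of U, equation (3.9)
            U[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(j))) / L[j][j]
    return L, U

def lu_solve(L, U, b):
    """Solve LUx = b: forward substitution for b' = inv(L) b, then back substitution."""
    n = len(b)
    y = [0.0] * n
    for i in range(n):
        y[i] = (b[i] - sum(L[i][k] * y[k] for k in range(i))) / L[i][i]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = y[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))
    return x

A = [[1, 0, 0.306], [0, 1, 0.702], [-2, 1, 0]]
L, U = lu_factor(A)
print(lu_solve(L, U, [101.48, 225.78, 0.0]))  # approximately [23.892, 47.784, 253.556]
```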


function [L,U]=LU(a)

% Naive LU decomposition

% a - (nxn) matrix

% L,U - are (nxn) factored matrices

% Usage [L,U]=LU(A)
n=size(a,1); % get dimension of a
L(:,1)=a(:,1);
U(1,:)=a(1,:)/L(1,1);
for j=2:n

for i = j:n

L(i,j) = a(i,j) - sum(L(i,1:j-1)’.*U(1:j-1,j));

end

U(j,j) = 1;

for i=j+1:n

U(j,i)=(a(j,i) - sum(L(j,1:j-1)’.*U(1:j-1,i) ) )/L(j,j);

end

end

Figure 3.6: MATLAB implementation of LU factorization based on equations (3.6)-(3.9)


Example

Work through the following exercise in MATLAB to compare the built-in MATLAB implementation of LU factorization with that given in figure 3.6. Before you work through the exercise make sure that the file

LU.m that contains the function illustrated in figure 3.6 is in the MATLAB

path. Also, be aware of the distinction between the upper case function LU of figure 3.6 and the lower case lu, which is the built-in function.

»A=[1 0 0.306; 0 1 0.702; -2 1 0]; % Define the matrix A
»b=[101.48 225.78 0]’ % Define right hand vector b

»flops(0) % initialize flop count

»x=A\b % solve using built-in solver

»flops % flops = 74

»flops(0) % re-initialize flop count

»[l,u]=LU(A) %Use algorithm in figure 3.6

»flops % flops = 24

»x=inv(u)*(inv(l)*b) % Solve linear system

»flops % Cumulative flops = 213

»flops(0) % re-initialize flop count

»[L,U]=lu(A) %use built-in function

»flops % flops = 9

»x=inv(U)*(inv(L)*b) % Solve linear system

»flops % Cumulative flops = 183

3.5 Iterative methods

The direct methods discussed in section §3.4 have the advantage of producing the solution in a finite number of calculations. They suffer, however, from loss of precision due to accumulated round off errors. This problem is particularly severe in large dimensional systems (more than

10,000 equations). Iterative methods, on the other hand, produce the

result in an asymptotic manner by repeated application of a simple al-

gorithm. Hence the number of floating point operations required to pro-

duce the final result cannot be known a priori. But they have the natural

ability to eliminate errors at every step of the iteration. For an author-

itative account of iterative methods for large linear systems see Young

(1971).

Iterative methods rely on the concepts developed in Chapter 2. They

are extended naturally from a single equation (one-dimensional system)

to a system of n equations. The development here parallels that of section §2.7 on fixed point iteration schemes. Given an equation of the form A x = b, we can rearrange it into the form,

x^(p+1) = G(x^(p))

and view the above equation as an iterative map that maps a point x^(p) into

another point x (p+1) in the n-dimensional vector space. Starting with an

initial guess x (0) we calculate successive iterates x (1) , x (2) · · · until the

sequence converges. The only difference from chapter 2 is that the above

iteration is applied to a higher dimensional system of (n) equations. Note

that G(x) is also vector. Since we are dealing with a linear system, G will

be a linear function of x which is constructed from the given A matrix.

G can typically be represented as,

G(x^(p)) = T x^(p) + c      (3.11)

Recall that a given scalar equation f(x) = 0 can be rearranged into the form x = g(x) in several different ways. In a similar manner,

a given equation Ax = b can be rearranged into the form x (p+1) =

G(x^(p)) in more than one way. Different choices of G result in different iterative methods. In section §2.7 we also saw that the condition for convergence of the sequence x_{i+1} = g(x_i) is |g′(r)| < 1. Recognizing

that the derivative of G(x^(p)) with respect to x^(p) is a matrix, G′ = T,

a convergence condition similar to that found for the scalar case must

depend on the properties of the matrix T . Another way to demonstrate

this is as follows. Once the sequence x (1) , x (2) · · · converges to, say, r

equation (3.11) becomes,

r = T r + c.

Subtracting equation (3.11) from the above and defining the error at iteration level p as ε^(p) = x^(p) − r, we have,

ε^(p+1) = T ε^(p)

Thus, the error at step (p + 1) depends on the error at step (p). If the

matrix T has the property of amplifying the error at any step, then the

iterative sequence will diverge. The property of the matrix T that de-

termines this feature is called the spectral radius, defined as the largest of the magnitudes of the eigenvalues of T. For convergence of the iterative sequence the spectral radius of T should be less than unity,

ρ(T) = max_i |λ_i(T)| < 1

The Jacobi iteration rearranges the given equations in the form,

x1^(p+1) = (b1 − a12 x2^(p) − a13 x3^(p) − ··· − a1n xn^(p)) / a11
⋮
xj^(p+1) = [ bj − Σ_{k=1}^{j−1} ajk xk^(p) − Σ_{k=j+1}^{n} ajk xk^(p) ] / ajj      (3.13)
⋮
xn^(p+1) = (bn − an1 x1^(p) − an2 x2^(p) − ··· − a_{n,n−1} x_{n−1}^(p)) / ann

where the variable xj has been extracted from the j-th equation and

expressed as a function of the remaining variables. The above set of

equations can be applied repetitively to update each component of the

unknown vector x = (x1, x2, ···, xn), provided an initial guess is known

for x. The above equation can be written in matrix form as,

L x^(p) + D x^(p+1) + U x^(p) = b

where the matrices D, L, U are defined in terms of the components of A as follows. (Note that the MATLAB functions diag, tril and triu are useful in extracting parts of a given matrix A.)

    [ a11  0    ···  0   ]
D = [ 0    a22  ···  0   ]
    [ ⋮          ⋱   ⋮   ]
    [ 0    0    ···  ann ]

    [ 0    0    ···  0 ]        [ 0  a12  ···  a1n ]
L = [ a21  0    ···  0 ]    U = [ 0  0    ···  a2n ]
    [ ⋮          ⋱   ⋮ ]        [ ⋮            ⋮  ]
    [ an1  an2  ···  0 ]        [ 0  0    ···  0   ]

which can be rearranged as,

x^(p+1) = −D^{−1} (L + U) x^(p) + D^{−1} b      (3.14)

and hence G(x^(p)) = −D^{−1}(L + U) x^(p) + D^{−1} b and G′ = T = −D^{−1}(L + U). This method has been shown to be convergent as long as the original matrix A is diagonally dominant, i.e.,

|a_ii| ≥ Σ_{j=1, j≠i}^{n} |a_ij|,   i = 1, ···, n

Also observe that none of the diagonal elements can be zero. If any is found to be zero, one can simply exchange the positions of two equations to avoid this problem. Equation (3.13)

is used in actual computational implementation, while the matrix form

of the equation (3.14) is useful for conceptual description and conver-

gence analysis. Note that each element in the equation set (3.13) can be

updated independently of the others in any order because the right hand

side of equation (3.13) is evaluated at the p-th level of iteration. This

method requires that x (p) and x (p+1) be stored as two separate vectors

until all elements of x (p+1) have been updated using equation (3.13). A

minor variation of the algorithm which uses a new value of the element

in x (p+1) as soon as it is available is called the Gauss-Seidel method. It

has the dual advantage of faster convergence than the Jacobi iteration

as well as reduced storage requirement for only one array x.
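The storage contrast between the two sweeps can be seen in a small Python sketch (an illustration added here, not part of the original notes; the 3×3 test system is arbitrary):

```python
import numpy as np

def jacobi_step(A, b, x):
    # Jacobi sweep, eq. (3.13): the right hand side uses only the old
    # iterate x^(p), so x^(p+1) must be accumulated in a second array.
    n = len(b)
    x_new = np.empty(n)
    for j in range(n):
        s = A[j, :j] @ x[:j] + A[j, j+1:] @ x[j+1:]
        x_new[j] = (b[j] - s) / A[j, j]
    return x_new

def gauss_seidel_step(A, b, x):
    # Gauss-Seidel sweep, eq. (3.15): newly updated components are used
    # immediately, so a single working array suffices.
    x = x.copy()
    n = len(b)
    for j in range(n):
        s = A[j, :j] @ x[:j] + A[j, j+1:] @ x[j+1:]
        x[j] = (b[j] - s) / A[j, j]
    return x

def iterate(step, A, b, x0, tol=1e-10, maxit=500):
    # Repeat a sweep until the residual norm drops below tol.
    x = np.asarray(x0, dtype=float)
    for _ in range(maxit):
        x = step(A, b, x)
        if np.linalg.norm(A @ x - b) < tol:
            break
    return x
```

For a diagonally dominant test matrix both sweeps converge to the direct solution, with Gauss-Seidel typically needing fewer sweeps.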

The Gauss-Seidel scheme takes the form,

    x_1^(p+1) = ( b_1 − a_12 x_2^(p) − a_13 x_3^(p) − ... − a_1n x_n^(p) ) / a_11

    x_j^(p+1) = ( b_j − Σ_{k=1}^{j−1} a_jk x_k^(p+1) − Σ_{k=j+1}^{n} a_jk x_k^(p) ) / a_jj        (3.15)

    x_n^(p+1) = ( b_n − a_n1 x_1^(p+1) − a_n2 x_2^(p+1) − ... − a_{n,n−1} x_{n−1}^(p+1) ) / a_nn

Observe that known values of the elements in x (p+1) are used on the

right hand side of the above equations (3.15) as soon as they are available

within the same iteration. We have used the superscripts p and (p + 1)

explicitly in equation (3.15) to indicate where the newest values occur.

In a computer program there is no need to assign separate arrays for

p and (p + 1) levels of iteration. Using just a single array for x will

automatically propagate the newest values as soon as they are updated.

The above equation can be written symbolically in matrix form as,

    L x^(p+1) + D x^(p+1) + U x^(p) = b

Solving for x^(p+1) we get,

    x^(p+1) = (L + D)^(−1) ( b − U x^(p) )        (3.16)

and hence G(x (p) ) = −(L + D)−1 Ux (p) + (L + D)−1 b and G0 = T =

−(L + D)−1 U. Thus the convergence of this scheme depends on the

spectral radius of the matrix, T = −(L + D)−1 U. This method has

also been shown to be convergent as long as the original matrix A is

diagonally dominant.

MATLAB implementation of the Gauss-Seidel algorithm is shown in

figure 3.7.

The relaxation scheme can be thought of as a convergence acceleration

scheme that can be applied to any of the basic iterative methods like

Jacobi or Gauss-Seidel schemes. We introduce an extra parameter, ω

often called the relaxation parameter and choose its value in such a way

that we can either speed up convergence by using ω > 1 (called over-

relaxation ) or in some difficult problems with poor initial guess we can

attempt to enlarge the region of convergence using ω < 1 (called under-

relaxation). Let us illustrate the implementation with the Gauss-Seidel

scheme. The basic Gauss-Seidel scheme is:

    t := x_j^(p+1) = ( b_j − Σ_{k=1}^{j−1} a_jk x_k^(p+1) − Σ_{k=j+1}^{n} a_jk x_k^(p) ) / a_jj        (3.17)

Instead of accepting the value of x_j^(p+1) computed from the above formula as the current value, we store it in a temporary variable t and form a better (or accelerated) estimate of x_j^(p+1) from,

    x_j^(p+1) = x_j^(p) + ω [ t − x_j^(p) ]

For ω = 1 we recover the basic Gauss-Seidel scheme. For ω > 1, the difference between two successive iterates (the term in the square brackets) is amplified and added to the current value x_j^(p).

The above operations can be written in symbolic matrix form as,

    D x^(p+1) = D x^(p) + ω { b − L x^(p+1) − D x^(p) − U x^(p) }

where the term in braces represents the Gauss-Seidel scheme. After extracting x^(p+1) from the above equation, it can be cast in the standard form,


function x=GS(a,b,x,tol,max)
% Gauss-Seidel iteration
% a   - (nxn) matrix
% b   - column vector of length n
% x   - initial guess vector x
% tol - convergence tolerance
% max - maximum number of iterations
% Usage x=GS(A,b,x)
m=size(a,1);                % get number of rows of a
n=length(b);                % get length of b
if (m ~= n)
   error('a and b do not have the same number of rows')
end
if nargin < 5, max=100; end
if nargin < 4, max=100; tol=eps; end
if nargin == 2
   error('Initial guess is required')
end
count=0;
while (norm(a*x-b) > tol & count < max)
   x(1) = ( b(1) - a(1,2:n)*x(2:n) )/a(1,1);
   for i=2:n-1
      x(i) = ( b(i) - a(i,1:i-1)*x(1:i-1) - ...
               a(i,i+1:n)*x(i+1:n) )/a(i,i);
   end
   x(n) = ( b(n) - a(n,1:n-1)*x(1:n-1) )/a(n,n);
   count=count+1;
end
if count >= max
   fprintf(1,'Maximum iteration %3i exceeded\n',count)
   fprintf(1,'Residual is %12.5e\n',norm(a*x-b))
end


    x^(p+1) = (D + ωL)^(−1) [(1 − ω)D − ωU] x^(p) + ω (D + ωL)^(−1) b        (3.18)

Thus the convergence of the relaxation method depends on the spectral

radius of the matrix T(ω) := (D + ωL)^(−1) [(1 − ω)D − ωU]. Since this

matrix is a function of ω we have gained a measure of control over the

convergence of the iterative scheme. It has been shown (Young, 1971)

that the SOR method is convergent for 0 < ω < 2 and that there is an

optimum value of ω which results in the maximum rate of convergence.

The optimum value of ω is very problem dependent and often difficult

to determine precisely. For linear problems, typical values in the range

of ω ≈ 1.7 ∼ 1.8 are used.
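The relaxed update is a one-line change to the Gauss-Seidel sweep; a Python sketch (illustrative only — the default ω = 1.2 is an arbitrary choice, not an optimum):

```python
import numpy as np

def sor(A, b, x0, omega=1.2, tol=1e-10, maxit=1000):
    # Gauss-Seidel value t from eq. (3.17), then the accelerated
    # estimate x_j <- x_j + omega*(t - x_j).
    x = np.asarray(x0, dtype=float).copy()
    n = len(b)
    for sweeps in range(1, maxit + 1):
        for j in range(n):
            s = A[j, :j] @ x[:j] + A[j, j+1:] @ x[j+1:]
            t = (b[j] - s) / A[j, j]
            x[j] += omega * (t - x[j])
        if np.linalg.norm(A @ x - b) < tol:
            break
    return x, sweeps
```

Setting omega = 1 recovers plain Gauss-Seidel; values outside (0, 2) cause divergence.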

We have seen that solutions obtained with direct methods are prone

to accumulation of round-off errors, while iterative methods have the

natural ability to remove errors. In an attempt to combine the best of

both worlds, one might construct an algorithm that takes the error-prone

solution from a direct method as an initial guess to an iterative method

and thus improve the accuracy of the solution.

Let us illustrate this concept as applied to improving the accuracy of

a matrix inverse. Suppose B is an approximate (error-prone) inverse of a

given matrix A obtained by one of the direct methods outlined in section

§3.4. If δB is the error in B, then

    A(B + δB) = I    or    A δB = (I − AB)

We do not, of course, know δB and our task is to attempt to estimate

it approximately. Premultiplying above equation by B, and recognizing

BA ≈ I, we have

    δB ≈ B(I − AB)

Observe carefully that we have used the approximation BA ≈ I on the

left hand side where products of order unity are involved and not on the

right hand side where difference between numbers of order unity are

involved. Now we have an estimate of the error δB which can be added to the approximate result B to obtain,

    B + δB = B + B(I − AB) = B(2I − AB)

Hence the iterative sequence should be,

    B^(k+1) = B^(k) ( 2I − A B^(k) ),    k = 0, 1, 2, ...
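The sketch below checks this refinement numerically in Python (an added illustration; it assumes the initial inverse is accurate enough that ||I − AB|| < 1, otherwise the sequence diverges):

```python
import numpy as np

def refine_inverse(A, B, sweeps=3):
    # B <- B(2I - AB): each sweep roughly squares the residual I - AB,
    # so a few sweeps drive a mildly perturbed inverse to machine accuracy.
    I = np.eye(A.shape[0])
    for _ in range(sweeps):
        B = B @ (2.0 * I - A @ B)
    return B
```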

3.6. GRAM-SCHMIDT ORTHOGONALIZATION PROCEDURE 85

Given a set of n linearly independent vectors {x_i | i = 1, ..., n}, our objective is to produce an orthonormal set of vectors {u_i | i = 1, ..., n}.

We begin by normalizing the x_1 vector using the norm ||x_1|| = √(x_1 · x_1) and call it u_1,

    u_1 = x_1 / ||x_1||

The remaining vectors are processed in turn, removing from each one the components along the directions already constructed. For example we construct u′_2 by subtracting from x_2 its component along u_1 in such a way that u′_2 contains no component of u_1 - i.e.,

    u′_2 = x_2 − c_0 u_1,    c_0 = (u_1 · x_2)

Similarly we have,

u03 = x 3 − c1 u1 − c2 u2

In general we have,

    u′_s = x_s − Σ_{j=1}^{s−1} (u_j^T x_s) u_j,        u_s = u′_s / ||u′_s||,    s = 2, ..., n        (3.20)
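Equation (3.20) maps directly onto code; a Python sketch (classical Gram-Schmidt, shown for illustration — in floating point the modified variant is usually preferred):

```python
import numpy as np

def gram_schmidt(X):
    # Columns of X are x_1, ..., x_n; returns orthonormal columns u_1, ..., u_n.
    U = np.zeros_like(X, dtype=float)
    for s in range(X.shape[1]):
        v = X[:, s].astype(float).copy()
        for j in range(s):
            v -= (U[:, j] @ X[:, s]) * U[:, j]   # subtract the u_j component
        U[:, s] = v / np.linalg.norm(v)          # normalize, eq. (3.20)
    return U
```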

3.7. THE EIGENVALUE PROBLEM 86

[Figure 3.8: Flow network with nodes 1-6 connected by pipe elements 12, 23, 24, 34, 35 and 46; flow enters at node 1 (pressure p1) and leaves at nodes 5 and 6 (pressures p5, p6).]


Consider laminar flow through the network shown in figure 3.8. The gov-

erning equations are the pressure drop equations for each pipe element

i − j and the mass balance equation at each node.

The pressure drop between nodes i and j is given by,

3.12. EXERCISE PROBLEMS 87

    p_i − p_j = α_ij v_ij    where    α_ij = 32 μ l_ij / d_ij²        (3.21)

The mass balance at node 2, for example, equates the volumetric flow into the node to the flow out of it, and similar equations apply at nodes 3 and 4. Let the unknown vector be x = [v12, v23, v24, v34, v35, v46, p2, p3, p4].

There will be six momentum balance equations, one for each pipe ele-

ment, and three mass balance (for incompressible fluids volume balance)

equations, one at each node. Arrange them as a system of nine equations

in nine unknowns and solve the resulting set of equations. Take the vis-

cosity of the fluid, µ = 0.1P a · s. The dimensions of the pipes are given

below.

Table 1

Element no    12     23     24     34     35     46
l_ij (m)      1000   800    800    900    1000   1000

The specified pressures are p1 = 300 kPa and p5 = p6 = 100 kPa. You need to assemble the

system of equations in the form A x = b. Report flops. When

reporting flops, report only for that particular operation - i.e., ini-

tialize the counter using flops(0) before every operation.

• Compute the LU factor of A using built-in function lu. Report

flops. What is the structure of L? Explain. The function LU

provided in the lecture notes will fail on this matrix. Why?

• Compute the solution using inv(A)*b. Report flops.

• Compute the rank of A. Report Flops.

• Since A is sparse ( i.e., mostly zeros) we can avoid unnecessary

operations, by using sparse matrix solvers. MATLAB Ver4.0

(not 3.5) provides such a facility. Sparse matrices are stored as triplets (ii, jj, s), where ii and jj give the row and column indices of each nonzero entry in the matrix and s its corresponding value. The MATLAB

function find(A) examines A and returns the triplets. Use,

» [ii,jj,s]=find(A)

» S=sparse(ii,jj,s)

Then to solve using the sparse solver and keep the flops count, use

» flops(0); x = S\b, flops

To graphically view the structure of the sparse matrix, use

» spy(S)

windows for any graphics display of results!

• Compute the determinant of the sparse matrix, S (should be

the same as the full matrix!). Report and compare flops.

changed to 150kP a.

• Denoting the right hand side vector of part (a) as b1 and that of part (b) as b2, report and explain the difference in flops for the fol-

lowing two ways of obtaining the two solutions using the sparse

matrix. Note that in the first case both solutions are obtained si-

multaneously.

» flops(0); x12 = S\[b1, b2], flops
» flops(0); x1 = S\b1; x2 = S\b2, flops

Repeat the above experiment with the full matrix, A and report

flops.

• Recompute the flow rates if a valve on line 34 is shut so that there is no flow in that line.

Let knowledge grow from more to more,

But more of reverence in us dwell;

That mind and soul, according well,

May make one music as before.

— ALFRED TENNYSON

Chapter 4

Nonlinear algebraic equations

This chapter is concerned with finding the roots of a system of nonlinear algebraic equations of the

form,

f (x) = 0 (4.1)

where f (x) is a vector function of x - i.e., there are n equations which

can be written in expanded or component form as,

f1 (x1 , x2 , · · · , xn ) = 0

f2 (x1 , x2 , · · · , xn ) = 0

···

fn (x1 , x2 , · · · , xn ) = 0

As with the scalar case, the equation is satisfied only at selected values of

x = r = [r1 , r2 , · · · , rn ], called the roots. The separation process model

discussed in section §1.3.1 (variations 2 and 3, in particular) and the

reaction sequence model of section §1.3.5 are two of the many exam-

ples in chemical engineering that give rise to such non-linear system of

equations. As with the scalar case, the equations often depend on other

parameters, and we will represent them as

f (x; p) = 0 (4.2)


4.1. NEWTON’S METHOD 90

where p is a vector of known parameters. In some applications we may be required to construct solution families for ranges of values of

p - i.e., x(p). This task is most efficiently achieved using continuation

methods. For some of the recent developments on algorithms for non-

linear equations see Ortega and Rheinboldt (1970), Rabinowitz (1970),

Scales (1985) and Drazin (1992).

For a scalar equation, a geometric interpretation of Newton's scheme is easy to develop as shown in figure 2.2d. This is difficult to visualize for higher dimensional systems. The algorithm developed in section §2.5 can, however, be generalized easily to higher dimensional systems.

The basic concept of linearizing a nonlinear function remains the same

as with a scalar case. We need to make use of the multivariate form

of the Taylor series expansion. We will illustrate the concepts with a

two-dimensional system of equations written in component form as,

f1 (x1 , x2 ) = 0

f2 (x1 , x2 ) = 0

Thus the vectors f (x) = [f1 (x1 , x2 ), f2 (x1 , x2 )] and x = [x1 , x2 ] con-

tain two elements. Let the roots be represented by r = [r1 , r2 ] - i.e.,

f (r) = 0.

Let x^(0) be some known initial guess for the solution vector x

and let the root be at a small displacement δ from x (0) - i.e.,

r = x (0) + δ

The objective is to determine the displacement δ; because the determination involves an approximation, the process must be applied repeatedly to get closer to the root r. Variations in the function value

f1 (x1 , x2 ) can be caused by variations in either components x1 or x2 .

Recognizing this, a bi-variate Taylor series expansion around x (0) can be

written as,

    f_1(x_1^(0) + δ_1, x_2^(0) + δ_2) = f_1(x_1^(0), x_2^(0)) + (∂f_1/∂x_1)|_(x^(0)) δ_1 + (∂f_1/∂x_2)|_(x^(0)) δ_2 + O(δ²)

    f_2(x_1^(0) + δ_1, x_2^(0) + δ_2) = f_2(x_1^(0), x_2^(0)) + (∂f_2/∂x_1)|_(x^(0)) δ_1 + (∂f_2/∂x_2)|_(x^(0)) δ_2 + O(δ²)

where, in each expansion, the first derivative term represents the variation due to x_1 and the second the variation due to x_2.

We now neglect the O(δ²) terms in the above equations, and this step is the essence of the linearization process. Since x^(0) + δ = r and f(r) = 0, the left hand sides of the

above equations are zero. Thus we get,

    0 = f_1(x_1^(0), x_2^(0)) + (∂f_1/∂x_1)|_(x^(0)) δ_1 + (∂f_1/∂x_2)|_(x^(0)) δ_2

    0 = f_2(x_1^(0), x_2^(0)) + (∂f_2/∂x_1)|_(x^(0)) δ_1 + (∂f_2/∂x_2)|_(x^(0)) δ_2

These are two linear equations in two unknowns [δ1 , δ2 ]. Note that the

two functions [f_1, f_2] and their four partial derivatives are required to be evaluated at the guess value [x_1^(0), x_2^(0)]. The above equations can

be arranged into matrix form as,

" # " # " #

∂f1 ∂f1

0 f1 δ1

= + ∂x1

∂f2

∂x2

∂f2

0 f2 δ2

∂x1 ∂x2

or in symbolic form

0 = f (0) + J (0) δ

where J (0) is called the Jacobian matrix and the superscript is a reminder

that quantities are evaluated using the current guess value of x (0) . Thus,

the displacement vector δ is obtained by solving the linear system,

    J^(0) δ = −f^(0)    or equivalently    δ = −(J^(0))^(−1) f^(0)

In general then, given x (0) , the algorithm consists of (i) evaluating the

function and the Jacobian at the current iterate x (k) , (ii) solving the linear

system for the displacement vector δ(k) and (iii) finding the new estimate

for the iterate x^(k+1) from the equation x^(k+1) = x^(k) + δ^(k).


Convergence check

The final step is to check if we are close enough to the desired root r

so that we can terminate the iteration. One test might be to check if the

absolute difference between two successive values of x is smaller than

a specified tolerance. This can be done by computing the norm of δ and requiring

    ||δ|| ≤ ε

An alternate test is to check whether the norm of the function vector f at the end of every iteration is below a certain tolerance. Since we are dealing with vectors, once again a norm of f must be calculated,

    ||f|| ≤ ε    where    ||f|| = sqrt( Σ_{i=1}^{n} f_i² ) / n

where n is the dimension of the system.

In addition to the above convergence tests, we might wish to place a

limit on the number of times the iteration is repeated. A MATLAB func-

tion, constructed in the form of an m-file, is shown in figure 4.1. Note that

this MATLAB function requires an initial guess as well as two external

functions for computing the function values and the Jacobian.

As an example, consider the equations that model a series of continuously stirred tank reactors. A sketch is shown in figure 1.6. Recall that the model equations are given

by,

    f_i := β a_i² + a_i − a_{i−1} = 0,    i = 1, ..., n        (4.4)

As discussed in section §1.3.5, there are n equations and (n + 2)

variables in total. Hence we have two degrees of freedom. We consider

a design situation where inlet and outlet concentrations are specified

as, say, a0 = 5.0, an = 0.5 mol/lit. The unknown vector consists of n

variable elements,

x = {a1 , a2 · · · an−1 , β}

We are required to determine the volume of each reactor for a given

number n. The volume is given by the expression β = kV /F . The rate

constant is k = 0.125 lit/(mol min) and the feed rate is F = 25 lit/min.


function x=newton(Fun,Jac,x,tol,trace)
% Newton method for a system of nonlinear equations
% Fun   - name of the external function to compute f
% Jac   - name of the external function to compute J
% x     - vector of initial guesses
% tol   - error criterion
% trace - print intermediate results
%
% Usage newton('Fun','Jac',x)
%Check inputs
if nargin < 5, trace=0; end
if nargin < 4, tol=eps; trace=0; end
max=25;
n=length(x);
count=0;
f=1;
while (norm(f) > tol & count < max),    %check convergence
   f = feval(Fun,x);                    %evaluate the function
   J = feval(Jac,x);                    %evaluate Jacobian
   x = x - J\f;                         %update the guess
   count=count+1;
   if trace,
      fprintf(1,'Iter.# = %3i Resid = %12.5e\n', count,norm(f));
   end
end
if count >= max
   fprintf(1,'Maximum iteration %3i exceeded\n',count)
   fprintf(1,'Residual is %12.5e\n',norm(f))
end


Note that in addition to β we also get all of the intermediate concentrations. Before we can invoke the

algorithm of figure 4.1 we need to write two functions for evaluating the

function values and the Jacobian. The Jacobian is given by,

        | ∂f_1/∂x_1      0          0       ···   ∂f_1/∂x_n |
        | ∂f_2/∂x_1  ∂f_2/∂x_2      0       ···   ∂f_2/∂x_n |
    J = |     0      ∂f_3/∂x_2  ∂f_3/∂x_3   ···   ∂f_3/∂x_n |
        |     ⋮          ⋱          ⋱        ⋱        ⋮     |
        |     0         ···         0   ∂f_n/∂x_{n−1}  ∂f_n/∂x_n |

        | 2x_n x_1 + 1       0             0        ···  x_1² |
        |     −1        2x_n x_2 + 1       0        ···  x_2² |
      = |      0            −1        2x_n x_3 + 1  ···  x_3² |        (4.5)
        |      ⋮             ⋱             ⋱         ⋱     ⋮  |
        |      0            ···            0        −1   a_n² |

The m-files for evaluating the function in equation (4.4) and the Jacobian in equation (4.5) are shown in figure 4.2. Work through the following example after ensuring that the three m-files newton.m, cstrF.m and cstrJ.m are on the MATLAB search path.

»r=newton(’cstrF’,’cstrJ’,x,1.e-10,1) % call Newton scheme

Iter.# = 1 Resid = 4.06325e+00

Iter.# = 2 Resid = 1.25795e+01

Iter.# = 3 Resid = 2.79982e+00

Iter.# = 4 Resid = 4.69658e-01

Iter.# = 5 Resid = 2.41737e-01

Iter.# = 6 Resid = 4.74318e-03

Iter.# = 7 Resid = 1.61759e-06

Iter.# = 8 Resid = 1.25103e-12

r =

2.2262

1.2919

0.8691

0.6399

0.5597


function f=cstrF(x)
% Reactor in series model, the function
% x=[a(1),a(2), .., a(n-1),beta]
% f(i) = beta a(i)^2 + a(i) - a(i-1)
n=length(x);
a0=5.0; an=0.5;              %define parameters in equation
f(1)= x(n)*x(1)^2 + x(1) - a0;
for i = 2:n-1
   f(i)= x(n)*x(i)^2 + x(i) - x(i-1);
end
f(n) = x(n)*an^2 + an - x(n-1);
f=f';

function J=cstrJ(x)
% Reactor in series model, the Jacobian
% x=[a(1),a(2), .., a(n-1),beta]
n=length(x);
a0=5.0; an=0.5;              %define parameters in equation
J(1,1) = x(n)*2*x(1) + 1;
J(1,n) = x(1)^2;
for i = 2:n-1
   J(i,i)   = x(n)*2*x(i) + 1;
   J(i,i-1) = -1;
   J(i,n)   = x(i)^2;
end
J(n,n)   = an^2;
J(n,n-1) = -1;


»x=[1:-.1:.1]’ % repeat solution for n=10

»r=newton(’cstrF’,’cstrJ’,x,1.e-10,1) % call Newton scheme

»V=r(10)*25/.125 % compute V (ans:44.9859)

In the above example, observe first that the number of reactors is defined

implicitly by the length of the vector x. Secondly observe the quadratic

convergence of the iteration sequence - viz. the residual goes down from

10−3 in iteration number 6 to 10−6 in iteration number 7 and 10−12 in

iteration number 8. In other words the number of significant digits of

accuracy doubles with every iteration once the iteration reaches close to

the root.
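The doubling of significant digits can be reproduced on any small smooth system; here is a Python sketch of the same Newton loop applied to an illustrative two-equation system (not the CSTR model):

```python
import numpy as np

def newton(fun, jac, x, tol=1e-12, maxit=25):
    # x <- x + delta, where J(x) delta = -f(x); record residual norms
    resid = []
    for _ in range(maxit):
        f = fun(x)
        resid.append(np.linalg.norm(f))
        if resid[-1] < tol:
            break
        x = x + np.linalg.solve(jac(x), -f)
    return x, resid

# intersect the unit circle with the line x1 = x2
f = lambda x: np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]])
J = lambda x: np.array([[2*x[0], 2*x[1]], [1.0, -1.0]])
root, resid = newton(f, J, np.array([1.0, 0.5]))
```

Printing resid shows the characteristic quadratic decay once the iterates are close to the root.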

Consider turbulent flow through the network shown in figure 3.8. The

governing equations are the pressure drop equations for each pipe ele-

ment i − j and the mass balance equation at each node. The pressure

drop between nodes i and j is given by,

    p_i − p_j = α_ij v_ij²    where    α_ij = 2 f ρ l_ij / d_ij        (4.6)

In general the friction factor f is given by the Moody chart or its equiv-

alent Churchill correlation. In fully developed turbulent flow it is rel-

atively insensitive to changes in Re. Hence take it to be a constant

f = 0.005.

4.5. EXERCISE PROBLEMS 97

There will be six momentum balance equations, one for each pipe ele-

ment, and three mass balance (for incompressible fluids volume balance)

equations, one at each node. Arrange them as a system of nine equations

in nine unknowns and solve the resulting set of equations. Take the vis-

cosity of the fluid, µ = 0.1P a · s and the density as ρ = 1000kg/m3 . The

dimensions of the pipes are given below.

Table 1

Element no 12 23 24 34 35 46

lij (m) 1000 800 800 900 1000 1000

a) Use MATLAB to solve this problem using Newton method for the

specified pressures of p1 = 300kP a and p5 = p6 = 100kP a. Re-

port the number of iterations and the flops.

number of iterations and the flops.

the number of iterations and the flops.

lem. Report the number of iterations and the flops.

changed to 150kP a.

Recompute the flow rates if a valve on line 34 is shut so that there is no flow in that line.

The chess-board is the world; the pieces are the phe-

nomena of the universe; the rules of the game are

what we call the laws of Nature. The player on the

other side is hidden from us. We know that his play

is always fair, just, and patient. But also we know,

to our cost, that he never overlooks a mistake, or

makes the smallest allowance for ignorance.

— T.H. HUXLEY

Chapter 5

Functional approximations

In the previous chapters we developed methods for solving systems of linear and nonlinear algebraic equations. Before we undertake the

development of algorithms for differential equations, we need to develop

some basic concepts of functional approximations. In this respect the

present chapter is a bridge between the realms of lumped parameter

models and distributed and/or dynamic models.

There are two classes of functional approximation problems that we encounter frequently. In the first class of problem, a known

function f (x) is approximated by another function, Pn (x) for reasons

of computational necessity or expediency. As modelers of physical phe-

nomena, we often encounter a second class of problem in which there

is a need to represent an experimentally observed, discrete set of data

of the form {xi , fi |i = 1, · · · n} as a function of the form f (x) over the

domain of the independent variable x.


5.1. APPROXIMATE REPRESENTATION OF FUNCTIONS 99

As an example of the first class of problem, consider the evaluation of

the error function given by,

    erf(x) = (2/√π) ∫_0^x e^(−ξ²) dξ

Since the integral does not have a closed form expression, we have to use a series expansion for the integrand,

    e^(−ξ²) = Σ_{k=0}^{∞} (−1)^k ξ^(2k) / k!

which can be integrated term-by-term to obtain,

    erf(x) = (2/√π) Σ_{k=0}^{∞} (−1)^k x^(2k+1) / ( (2k+1) k! )

Truncating the series after the first (n + 1) terms gives the polynomial approximation

    erf(x) ≈ P_(2n+1)(x) = (2/√π) Σ_{k=0}^{n} (−1)^k x^(2k+1) / ( (2k+1) k! )

with erf(x) = P_(2n+1)(x) + R(x). The error introduced by truncating such a series is called the truncation error and the magnitude

of the residual function, R(x) represents the magnitude of the truncation

error. For x close to zero a few terms of the series (small n) are adequate.

The convergence of the series is demonstrated in Table 5.1. It is clear

that as we go farther away from x = 0, more terms are required for

P2n+1 (x) to represent er f (x) accurately. The error distribution, defined

as (x, n) := |er f (x) − P2n+1 (x)|, is shown in figure 5.1. It is clear

from figure 5.1a, that for a fixed number of terms, say n = 8, the error

increases with increasing values of x. For larger values of x, more terms

are required to keep the error small. For x = 2.0, more than 10 terms

are required to get the error under control.
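The truncated series is simple to evaluate directly; a Python sketch (for n = 2 and x = 2.0 it reproduces the entry 2.85856 of Table 5.1):

```python
import math

def erf_series(x, n):
    # P_{2n+1}(x) = (2/sqrt(pi)) * sum_{k=0}^{n} (-1)^k x^(2k+1) / ((2k+1) k!)
    s = sum((-1)**k * x**(2*k + 1) / ((2*k + 1) * math.factorial(k))
            for k in range(n + 1))
    return 2.0 / math.sqrt(math.pi) * s
```

Near x = 0 a few terms suffice, while at x = 2 the low-order truncations are badly in error, exactly as the table shows.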

In the above example we chose to construct an approximate function to

represent f(x) = erf(x) by expanding f(x) in a Taylor series around x = 0.


Table 5.1: Convergence of the truncated series P_(2n+1)(x) to erf(x)

  n        x = 0.5    x = 1.0     x = 2.0
  2        0.5207     0.865091    2.85856
  4        0.5205     0.843449    2.09437
  6        0.5205     0.842714    1.33124
  8        0.5205     0.842701    1.05793
 10        0.5205     0.842701    1.00318
 20        0.5205     0.842701    0.995322
Exact      0.5205     0.842701    0.995322

[Figure 5.1: Error distribution ε(x, n) := |erf(x) − P_(2n+1)(x)| for different levels of truncation: (a) error vs. x for n = 2, 4, 6, 8; (b) error vs. n at x = 0.5; (c) error vs. n at x = 1.0; (d) error vs. n at x = 2.0.]


Also, since the expansion was around x = 0, the approximation fails

increasingly as x moves away from zero. In another kind of functional

approximation we can attempt to get a good representation of a given

function f (x) over a range x ∈ [a, b]. We do this by choosing a set of

n basis functions, {φi (x)|i = 1 · · · n} that are linearly independent and

representing the approximation as,

    f(x) ≈ P_n(x) = Σ_{i=1}^{n} a_i φ_i(x)

Here the basis functions φi (x) are known functions, chosen with care

to form a linearly independent set and ai are unknown constants that

are to be determined in such a way that we can make Pn (x) as good an

approximation to f (x) as possible - i.e., we can define an error as the

difference between the exact function and the approximate representa-

tion,

    ε(x; a_i) = |f(x) − P_n(x)|

and devise a scheme to select a_i such that the error is minimized.

Example

So far we have outlined certain general concepts, but left open the choice

of a specific basis functions φi (x), the definition of the norm |.| in the

error or the minimization procedure to get ai .

Let the basis functions be

    φ_i(x) = x^(i−1),    i = 1, ..., n

Hence the approximate function will be a polynomial of degree (n − 1)

of the form,

    P_(n−1)(x) = Σ_{i=1}^{n} a_i x^(i−1)

One way to determine the coefficients a_i is to force the approximation to match f(x) exactly at n selected points in the range of interest x ∈ [a, b]. We choose n points

{xk |k = 1, · · · n} because we have introduced n degrees of freedom

(unknowns) in ai . A naive choice would be to space these collocation

points equally in the interval [a, b] - i.e.,

    x_k = a + (k − 1) (b − a) / (n − 1),    k = 1, ..., n


At each of these collocation points we then require the error to be zero - i.e.,

    ε(x_k; a_i) = f(x_k) − P_(n−1)(x_k) = 0

or

    Σ_{i=1}^{n} a_i x_k^(i−1) = f(x_k),    k = 1, ..., n        (5.1)

in matrix form

Pa = f

where the elements of matrix P are given by, Pk,i = xki−1 and the vectors

are a = [a1 , · · · , an ] and f = [f (x1 ), · · · , f (xn )]. Thus we have re-

duced the functional approximation problem to one of solving a system

of linear algebraic equations and tools of chapter 3 become useful!
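The same collocation construction takes only a few lines in Python (an added sketch; np.vander assembles the matrix P of equation (5.1)):

```python
import math
import numpy as np

n = 5
x = np.linspace(0.1, 0.5, n)             # equally spaced collocation points
P = np.vander(x, n, increasing=True)     # P[k, i] = x_k**i: columns 1, x, x^2, ...
f = np.array([math.erf(xk) for xk in x])
a = np.linalg.solve(P, f)                # polynomial coefficients a_1, ..., a_n

def poly(a, t):
    # evaluate P_{n-1}(t) = sum_i a_i t^(i-1)
    return sum(ai * t**i for i, ai in enumerate(a))
```

By construction the quartic reproduces erf exactly at the five collocation points and stays close in between.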

Let us be even more specific now and focus on approximating the

error function f (x) = er f (x) over the interval x ∈ [0.1, 0.5]. Let us

also choose n = 5 - i.e., a quartic polynomial. This will allow us to write

out the final steps of the approximation problem explicitly. The equally

spaced collocation points are,

x_k = 0.1, 0.2, 0.3, 0.4 and 0.5, and the matrix P becomes,

        | 1  x_1  x_1²  x_1³  x_1⁴ |   | 1.0  0.10  0.010  0.0010  0.0001 |
        | 1  x_2  x_2²  x_2³  x_2⁴ |   | 1.0  0.20  0.040  0.0080  0.0016 |
    P = | 1  x_3  x_3²  x_3³  x_3⁴ | = | 1.0  0.30  0.090  0.0270  0.0081 |
        | 1  x_4  x_4²  x_4³  x_4⁴ |   | 1.0  0.40  0.160  0.0640  0.0256 |
        | 1  x_5  x_5²  x_5³  x_5⁴ |   | 1.0  0.50  0.250  0.1250  0.0625 |

A MATLAB function implementing this procedure for a specified degree of polynomial n is given in figure 5.2. Recall that we had made a comment earlier that the basis function φ_i(x) = x^(i−1), i = 1, ..., n is a poor choice. We can understand why this is so,


function a=erf_apprx(n)
% Illustration of functional (polynomial) approximation
% fits error function in (0.1, 0.5) to a
% polynomial of degree n
%define interval
a = 0.1; b=0.5;
x=a + [0:(n-1)]*(b-a)/(n-1);
f=erf(x);                    %Note that erf is a MATLAB function
for k=1:n
   P(k,:) = x(k).^[0:n-1];
end
fprintf(1,'Det. of P for deg. %2i is = %12.5e\n', n,det(P));
a=P\f';

Figure 5.2: MATLAB function for the polynomial approximation of the error function

5.2. APPROXIMATE REPRESENTATION OF DATA 104

by using the function shown in figure 5.2 for increasing degree of poly-

nomials. The matrix P becomes poorly scaled and nearly singular with increasing degree of polynomial, as evidenced by computing the determi-

nant of P. For example the determinant of P is 1.60000 × 10−2 for n = 3

and it goes down rapidly to 1.21597 × 10−12 for n = 6. Selecting cer-

tain orthogonal polynomials such as Chebyshev polynomials and using

the roots of such polynomials as the collocation points results in well

conditioned matrices and improved accuracy. More on this in section

§5.8.
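This deterioration is easy to verify; a Python sketch reproducing the determinant values quoted above:

```python
import numpy as np

def colloc_matrix(n, lo=0.1, hi=0.5):
    # P[k, i] = x_k**i for n equally spaced collocation points on [lo, hi]
    x = np.linspace(lo, hi, n)
    return np.vander(x, n, increasing=True)

d3 = np.linalg.det(colloc_matrix(3))   # about 1.6e-2
d6 = np.linalg.det(colloc_matrix(6))   # about 1.2e-12
```

The condition number grows accordingly, which is what makes the monomial basis a poor choice.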

Note that MATLAB has a function called polyfit(x,y,n) that will

accept a set of pairwise data {xk , yk = f (xk ) | k = 1, · · · , m} and pro-

duce a polynomial fit of degree n (which can be different from m) using a

least-squares minimization. Try using the function polyfit(x,y,n) for

the above example and compare the polynomial coefficients a produced

by the two approaches.

»y=erf(x) % Calculate the function at Collocation Points

»a=polyfit(x,y,4)% Fit 4th degree polynomial. Coefficients in a

»polyval(a,x) % Evaluate the polynomial at collocation pts.

»erf(x) % Compare with exact values at the same pts.

We examined ideas of functional approximation in section §5.1.2 in the context of constructing approximate representations of

complicated functions (such as the error function). We will develop and

apply these ideas further in later chapters for solving differential equa-

tions. Let us briefly explore the problem of constructing approximate

functions for representing a discrete set of m pairs of data points

{(xk , fk ) | k = 1, · · · , m}

As a specific example, consider the saturation temperature vs. pressure data taken from steam tables and

shown in Table 5.2. Here the functional form that we wish to construct

is to represent pressure as a function of temperature, P (T ) over the tem-

perature range T ∈ [220, 232]. A number of choices present themselves.


Table 5.2: Saturation pressure of steam

T (°F)      P (psia)
220.0000    17.1860
224.0000    18.5560
228.0000    20.0150
232.0000    21.5670

• We can fit a single cubic polynomial that passes through each of the four data points over the temperature range T ∈ [220, 232]. This will be considered as a global polynomial as it covers the entire range of interest in T.

• We can fit piecewise polynomials of lower degree with a limited range of applicability. For example, we can take the first three data points and fit a quadratic polynomial, and the last three points and fit a different quadratic polynomial.

• We can construct a least-squares fit with a polynomial of degree less than three, that will not pass through any of the given data points, but will produce a function that minimizes the error over the entire range of T ∈ [220, 232].

The collocation procedure of the previous section applies directly to the first two choices and hence they warrant no further discussion. Hence we develop the algorithm only for the third choice dealing with the least-squares minimization concept.

Suppose we are given m data points (m = 4 in the above example) and we wish to fit a global polynomial of degree n (n < m). We define the error at every observation point as,

    ε_k = P_(n−1)(x_k) − f_k,    k = 1, ..., m

The basis functions are still the set, {x i−1 | i = 1, · · · n} and the poly-

nomial is

    P_(n−1)(x) = Σ_{i=1}^{n} a_i x^(i−1)


Next we construct an objective function which is the sum of squares of the error at every observation point - viz.

    J(a) = (1/m) Σ_{k=1}^{m} ε_k² = (1/m) Σ_{k=1}^{m} ( P_(n−1)(x_k) − f_k )² = (1/m) Σ_{k=1}^{m} ( Σ_{i=1}^{n} a_i x_k^(i−1) − f_k )²

The scalar objective function J(a) is a function of the n unknowns a_i. From elementary calculus, the condition for the function J(a) to have a minimum is,

    ∂J(a)/∂a_i = 0,    i = 1, ..., n

This condition provides n linear equations of the form P a = b that can be solved to obtain a. The expanded form of the equations is,

    | Σ 1          Σ x_k        Σ x_k²       ···  Σ x_k^(n−1)   | | a_1 |   | Σ f_k           |
    | Σ x_k        Σ x_k²       Σ x_k³       ···  Σ x_k^n       | | a_2 |   | Σ f_k x_k       |
    | Σ x_k²       Σ x_k³       Σ x_k⁴       ···  Σ x_k^(n+1)   | | a_3 | = | Σ f_k x_k²      |
    |    ⋮            ⋮            ⋮          ⋱        ⋮        | |  ⋮  |   |       ⋮         |
    | Σ x_k^(n−1)  Σ x_k^n      Σ x_k^(n+1)  ···  Σ x_k^(2(n−1))| | a_n |   | Σ f_k x_k^(n−1) |

where each sum runs over k = 1, ..., m.

Observe that the equations are not only linear, but the matrix is also

symmetric. Work through the following example using MATLAB to gen-

erate a quadratic, least-squares fit for the data shown in Table 5.2. Make

sure that you understand what is being done at each stage of the calculation. This example illustrates a cubic fit that passes through each of

the four data points, followed by use of the cubic fit to interpolate data

at intermediate temperatures of T = [222, 226, 230]. In the last part the

least squares solution is obtained using the procedure developed in this

section. Finally MATLAB’s polyfit is used to generate the same least

squares solution!

»x=[220,224,228,232] % Define temperatures
»f=[17.186,18.556,20.015,21.567] % Define pressures

»a3=polyfit(x,f,3) % Fit a cubic. Coefficients in a3

»polyval(a3,x) % Check cubic passes through pts.

»xi=[222,226,230] % Define interpolation points

»polyval(a3,xi) % Evaluate at interpolation pts.

»%get ready for least square solution!

»x2=x.ˆ2 % Evaluate x 2

»x3=x.ˆ3 % Evaluate x 3

»x4=x.ˆ4 % Evaluate x 4

5.3. DIFFERENCE OPERATORS 107

»P=[4, sum(x), sum(x2); ...
sum(x), sum(x2), sum(x3); ...
sum(x2), sum(x3), sum(x4) ] % Assemble the normal equations
»b=[sum(f), f*x', f*x2'] % Define right hand side
»a = P\b' % ans: (82.0202,-0.9203,0.0028)

»c=polyfit(x,f,2) % Let MATLAB do it! compare c & a

»norm(f-polyval(a3,x)) % error in cubic fit 3.3516 × 10−14

»norm(f-polyval(c,x)) % error in least squares fit 8.9443 × 10−4
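The same normal-equations construction in Python (an added sketch; as a precaution not needed in the small MATLAB session above, the temperature variable is centered before forming the sums, since raw powers of T ≈ 230 make the normal equations badly conditioned):

```python
import numpy as np

# Steam-table data of Table 5.2
T = np.array([220.0, 224.0, 228.0, 232.0])
p = np.array([17.186, 18.556, 20.015, 21.567])

t = T - T.mean()                         # center the independent variable
n = 3                                    # quadratic fit: three coefficients
P = np.array([[np.sum(t**(i + j)) for j in range(n)] for i in range(n)])
b = np.array([np.sum(p * t**i) for i in range(n)])
a = np.linalg.solve(P, b)                # normal equations P a = b
fit = a[0] + a[1]*t + a[2]*t**2          # fitted pressures at the data points
```

The curvature coefficient a[2] is unchanged by the centering, and the fitted values agree with those from polyfit.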

The approximation problems of the previous sections were posed in such a way that they required a solution of a system of linear algebraic equations. For uniformly spaced data, the introduction of difference operators and difference tables allows us to solve the same polynomial approximation problem in a more elegant manner without the need for solving a system of algebraic equations. This difference operator ap-

proach also lends itself naturally to recursive construction of higher

degree polynomials with very little additional computation as well as

extension to numerical differentiation and integration of discrete set of

data.

Consider the set of data {(xi , fi ) | i = 1, · · · , m} where the independent variable x is varied uniformly, generating equally spaced data - i.e.,

xi+1 = xi + h,  i = 1, · · · , m − 1   or   xi = x1 + (i − 1)h

On such data the forward difference operator is defined by,

∆fi = fi+1 − fi

Similarly, we can define backward difference and shift operators as shown below.

Backward difference operator

∇fi = fi − fi−1

Shift operator

Efi = fi+1

We can also add the differential operator to the above list.

Differential operator

Df(x) = df(x)/dx = f′(x)    (5.6)

The difference operators are nothing but rules of calculation, just like the differential operator defines a rule for differentiation. Clearly these rules can be applied repeatedly to obtain higher order differences. For example, a second order forward difference with respect to reference point i is,

∆²fi = ∆(∆fi) = ∆(fi+1 − fi) = fi+2 − 2fi+1 + fi

Having introduced some new definitions of operators, we can discover some interesting relationships between various operators, such as the following. Note that

∆fi = fi+1 − fi   and   Efi = fi+1

Combining these two we can write,

∆fi = Efi − fi = (E − 1)fi

Since the operand fi is the same on both sides of the equation, the op-

erators (which define certain rules and hence have certain effects on the

operand fi ) have an equivalent effect given by,

∆ = (E − 1) or E = (1 + ∆) (5.7)

Equation (5.7) can then be applied on any other operand like fi+k . All

of the operators satisfy the distributive, commutative and associative

rules of algebra. Also, repeated application of the operation can be rep-

resented by,

E α = (1 + ∆)α


Note that E α f (x) simply implies that the function f is evaluated after

shifting the independent variable by α - i.e.,

E α f (x) = f (x + αh)

The backward difference operator can be related to the shift operator in a similar way. We have

∇fi = fi − fi−1   and   E⁻¹fi = fi−1

where we have introduced the inverse of the shift operator E to shift backwards. Combining these we can write,

∇fi = fi − E⁻¹fi = (1 − E⁻¹)fi

Once again recognizing that the operand fi is the same on both sides of

the equation, the operators are related by,

∇ = (1 − E⁻¹)   or   E⁻¹ = (1 − ∇)   or   E = (1 − ∇)⁻¹    (5.8)

Yet another relation between the shift operator E and the differential

operator D can be developed by considering the Taylor series expansion

of f (x + h),

f(x + h) = f(x) + hf′(x) + (h²/2!)f″(x) + · · ·

which can be written in operator notation as,

Ef(x) = [1 + hD + (h²D²/2!) + · · ·] f(x)

or,

E = e^{hD}    (5.9)

While this game of discovering relationships between various operators can be played indefinitely, let us turn to developing some useful algorithms from these.

Our objective is to construct a polynomial representation for the discrete

set of data {(xi , fi ) | i = 1, · · · , m} using an alternate approach from

that of section §5.1.2. Assuming that there is a function f(x) representing the given data (is such an assumption always valid?), we can write it as the sum of a polynomial and a residual error, f(x) = Pm−1(x) + R(x). Given a set of m data points, we know at least one way (section §5.1.2) to construct a polynomial of degree (m − 1). Now

let us use the power of operator algebra to develop an alternate way to

construct such a polynomial and in the process, also learn something

about the residual function R(x). Applying equation (5.7) repeatedly, α times, on f(x) we get,

f(x + αh) = E^α f(x) = (1 + ∆)^α f(x)

Now for integer values of α the right hand side is the binomial expansion

while for any real number, it yields an infinite series. Using such an

expansion the above equation can be written as,

f(x + αh) = [1 + α∆ + (α(α − 1)/2!)∆² + (α(α − 1)(α − 2)/3!)∆³ + · · · + (α(α − 1)(α − 2) · · · (α − n + 1)/n!)∆ⁿ + · · ·] f(x)    (5.10)

Up to this point in our development we have merely used tricks of

operator algebra. We will now make the direct connection to the given,

discrete set of data {(xi , fi ) | i = 1, · · · , m}. Taking x1 as the reference

point, the transformation

x = x1 + αh

makes α the new independent variable; for integer values of α = 0, 1, · · · , (m − 1) we retrieve the equally spaced data set {x1 , x2 , · · · , xm } and for non-integer (real) values of α we can reach the other values of x ∈ (x1 , xm ). Splitting equation (5.10) into two parts,

f(x1 + αh) = [1 + α∆ + (α(α − 1)/2!)∆² + (α(α − 1)(α − 2)/3!)∆³ + · · · + (α(α − 1)(α − 2) · · · (α − m + 2)/(m − 1)!)∆^{m−1}] f(x1) + R(x)

we can recognize the terms in the square brackets as a polynomial of degree (m − 1) in the transformed variable α. We still need to determine

the numbers {∆f (x1 ), ∆2 f (x1 ), · · · , ∆(m−1) f (x1 )}. These can be com-

puted and organized as a forward difference table shown in figure 5.3.

Since forward differences are needed for constructing the polynomial, it is called the Newton forward difference polynomial. The differences are computed and organized in a table of the following structure:

x1      f1
                ∆f1
x2      f2              ∆²f1
                ∆f2             ∆³f1
x3      f3              ∆²f2            ∆⁴f1
                ∆f3             ∆³f2
x4      f4              ∆²f3                    · · ·   ∆^{m−1}f1
                ∆f4
x5      f5
 ·       ·
 ·       ·                      ∆⁴fm−4
                        ∆³fm−3
xm−1    fm−1    ∆²fm−2
                ∆fm−1
xm      fm

Figure 5.3: Structure of the forward difference table for equally spaced data

The polynomial is given by,

Pm−1(x1 + αh) = [1 + α∆ + (α(α − 1)/2!)∆² + (α(α − 1)(α − 2)/3!)∆³ + · · · + (α(α − 1)(α − 2) · · · (α − m + 2)/(m − 1)!)∆^{m−1}] f(x1) + O(h^m)    (5.11)

The polynomial in equation (5.11) will pass through the given data set

{(xi , fi ) | i = 1, · · · , m} - i.e., for integer values of α = 0, 1, · · · (m − 1)

it will return values of {f1 , f2 , · · · fm }. This implies that the residual

function R(x) will have roots at the data points {xi | i = 1, · · · , m}. For

a polynomial of degree (m − 1), shown in equation (5.11), the residual at

other values of x is typically represented as R(x) ≈ O(hm ) to suggest

that the leading term in the truncated part of the series is of order m.

Example

A set of five (m = 5) equally spaced data points and the forward difference table for the data are shown in figure 5.4. For this example, clearly h = 1 and x = x1 + α. We can take the reference point as x1 = 2 and construct the following linear, quadratic and cubic polynomials, respectively. (Note that P4(2 + α) = P3(2 + α) for this case! Why?)

x1 = 2    f1 = 8
                    ∆f1 = 19
x2 = 3    f2 = 27             ∆²f1 = 18
                    ∆f2 = 37            ∆³f1 = 6
x3 = 4    f3 = 64             ∆²f2 = 24         ∆⁴f1 = 0
                    ∆f3 = 61            ∆³f2 = 6
x4 = 5    f4 = 125            ∆²f3 = 30
                    ∆f4 = 91
x5 = 6    f5 = 216

Figure 5.4: Forward difference table for the example data

P1(2 + α) = (8) + α(19) + O(h²)

P2(2 + α) = (8) + α(19) + (α(α − 1)/2!)(18) + O(h³)

P3(2 + α) = (8) + α(19) + (α(α − 1)/2!)(18) + (α(α − 1)(α − 2)/3!)(6) + O(h⁴)

You can verify easily that P1 (2 + α) passes through {x1 , x2 }, P2 (2 + α)

passes through {x1 , x2 , x3 } and P3 (2+α) passes through {x1 , x2 , x3 , x4 }.

For finding the interpolated value of f(x = 3.5), for example, first determine the value of α at x = 3.5 from the equation x = x1 + αh. It is α = (3.5 − 2)/1 = 1.5. Using this value in the cubic polynomial,

P3(2 + 1.5) = (8) + 1.5(19) + (1.5(0.5)/2!)(18) + (1.5(0.5)(−0.5)/3!)(6) = 42.875

As another example, by taking x3 = 4 as the reference point we can

construct the following quadratic polynomial

P2(4 + α) = (64) + α(61) + (α(α − 1)/2!)(30)

which will pass through the data set {x3 , x4 , x5 }. This illustration should show that once the difference table is constructed, a variety of polynomials of varying degrees can be constructed quite easily.
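The construction of the difference table and the evaluation of the Newton forward polynomial are easy to automate. The following Python sketch (not part of the original MATLAB material) rebuilds the table of figure 5.4 and reproduces the interpolated value 42.875.

```python
import numpy as np
from math import factorial

# Data from figure 5.4: f(x) = x^3 sampled at x = 2..6, h = 1
x = np.arange(2.0, 7.0)
f = x**3

# Build the forward difference table: table[k] holds the column Delta^k f
m = len(f)
table = [f.copy()]
for k in range(1, m):
    table.append(np.diff(table[-1]))   # Delta^k f_i = Delta^{k-1} f_{i+1} - Delta^{k-1} f_i

def newton_forward(alpha, degree, ref=0):
    """Evaluate P_degree(x_ref + alpha*h) from differences at row `ref`."""
    p, coeff = 0.0, 1.0
    for k in range(degree + 1):
        p += coeff * table[k][ref] / factorial(k)
        coeff *= (alpha - k)           # builds alpha*(alpha-1)*...*(alpha-k)
    return p

# Interpolate f(3.5): alpha = (3.5 - 2)/1 = 1.5, with the cubic polynomial
print(newton_forward(1.5, 3))   # 42.875, matching the worked example
```

Changing `ref` reproduces the quadratic around x3 = 4 as well, illustrating how one table serves many polynomials.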

An equivalent class of polynomials using the backward difference operator based on equation (5.8) can be developed. Applying equation (5.8) repeatedly, E^α f(x) = (1 − ∇)^{−α} f(x), and expanding in a binomial series, we get,


f(x + αh) = [1 + α∇ + (α(α + 1)/2!)∇² + (α(α + 1)(α + 2)/3!)∇³ + · · · + (α(α + 1)(α + 2) · · · (α + n − 1)/n!)∇ⁿ + · · ·] f(x)    (5.12)

As with the Newton forward formula, the above equation (5.12) terminates at a finite number of terms for integer values of α, while for non-integer values it will always be an infinite series which must be truncated, thus sustaining a truncation error.

In making the precise connection to a given discrete data set {(xi , fi ) | i =

0, −1 · · · , −n}, typically the largest value of x (say, x0 ) is taken as the

reference point. The transformation

x = x0 + αh

makes α the new independent variable and for negative integer values

of α = −1, · · · − n we retrieve the equally spaced data set {x−1 , · · · x−n }

and for non-integer (real) values of α we can reach the other values of

x ∈ (x−n , x0 ). Splitting equation (5.12) into two parts,

f(x0 + αh) = [1 + α∇ + (α(α + 1)/2!)∇² + (α(α + 1)(α + 2)/3!)∇³ + · · · + (α(α + 1)(α + 2) · · · (α + n − 1)/n!)∇ⁿ] f(x0) + R(x)

we can recognize the terms in the square brackets as a polynomial of

degree n in the transformed variable α. We still need to determine

the numbers {∇f (x0 ), ∇2 f (x0 ), · · · , ∇n f (x0 )}. These can be computed

and organized as a backward difference table shown in figure 5.5. Since

backward differences are needed for constructing the polynomial, it is

called the Newton backward difference polynomial and it is given by,

Pn(x0 + αh) = [1 + α∇ + (α(α + 1)/2!)∇² + (α(α + 1)(α + 2)/3!)∇³ + · · · + (α(α + 1)(α + 2) · · · (α + n − 1)/n!)∇ⁿ] f(x0) + O(h^{n+1})    (5.13)

The polynomial in equation (5.13) will pass through the given data set

{(xi , fi ) | i = 0, · · · , −n} - i.e., for integer values of α = 0, −1, · · · − n it

will return values of {f0 , f−1 , · · · f−n }. At other values of x the residual

will be of order O(hn+1 ).


x−4     f−4
               ∇f−3
x−3     f−3             ∇²f−2
               ∇f−2             ∇³f−1
x−2     f−2             ∇²f−1           ∇⁴f0
               ∇f−1             ∇³f0
x−1     f−1             ∇²f0
               ∇f0
x0      f0

Figure 5.5: Structure of the backward difference table for equally spaced data

x−4 = 2     f−4 = 8
                        ∇f−3 = 19
x−3 = 3     f−3 = 27              ∇²f−2 = 18
                        ∇f−2 = 37           ∇³f−1 = 6
x−2 = 4     f−2 = 64              ∇²f−1 = 24          ∇⁴f0 = 0
                        ∇f−1 = 61           ∇³f0 = 6
x−1 = 5     f−1 = 125             ∇²f0 = 30
                        ∇f0 = 91
x0 = 6      f0 = 216

Figure 5.6: Backward difference table for the example data. (The original figure boxes three sets of entries: those used by the Newton backward formula around x−2, by the Newton forward formula around x−2, and by the Newton backward formula around x0.)

Example

A set of five (n = 4) equally spaced data points and the backward difference table for the data are shown in figure 5.6. This is the same example as used in the previous section! It is clear that h = 1 and x = x0 + α. In the previous case we constructed linear, quadratic and cubic polynomials, the last quadratic with x3 = 4 as the reference point. In the present case let us use the same reference point, but it is now labelled x−2 = 4. A quadratic backward difference polynomial in α is,

P2(4 + α) = (64) + α(37) + (α(α + 1)/2!)(18) + O(h³)

which passes through the points (x−2 , f−2 ), (x−3 , f−3 ) and (x−4 , f−4 ) for α = 0, −1, −2, respectively. Recall that the forward difference polynomial around the same point was,

P2(4 + α) = (64) + α(61) + (α(α − 1)/2!)(30)

(Calculate the interpolated value of f(4.5) from these two polynomials.)

which passes through the three forward points for α = 0, 1, 2. Although they are based on the same reference point, these are two different polynomials passing through different sets of data points.

As a final example, let us construct a quadratic backward difference polynomial around x0 = 6. It is,

P2(6 + α) = (216) + α(91) + (α(α + 1)/2!)(30)

(Is this polynomial different from the NFF, P2(4 + α), constructed above?)
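The two quadratics around x = 4, and the margin exercise of interpolating f(4.5), can be checked with a few lines of Python (a sketch, not from the original text):

```python
# Quadratic Newton forward (NFF) and backward (NBF) polynomials around
# the same reference point x = 4, using the table entries from the text
def nff_4(alpha):
    # forward differences at x = 4: f = 64, Delta f = 61, Delta^2 f = 30
    return 64 + alpha*61 + alpha*(alpha - 1)/2 * 30

def nbf_4(alpha):
    # backward differences at x = 4: f = 64, nabla f = 37, nabla^2 f = 18
    return 64 + alpha*37 + alpha*(alpha + 1)/2 * 18

# The margin exercise: interpolate f(4.5), i.e. alpha = 0.5
print(nff_4(0.5), nbf_4(0.5))   # 90.75 89.25 (true value: 4.5**3 = 91.125)
```

The two estimates differ because the quadratics pass through different triples of points, even though they share a reference point.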

5.4. INVERSE INTERPOLATION 116

[Figure 5.7: The quadratic approximation P2(4 + α) plotted against the actual function f(x) = x³ over x ∈ (−2, 6). The horizontal line f(x = ?) = 100 intersects the polynomial at the desired root x = 4.64637 and at a spurious root x = 0.286962.]

The interpolation polynomials developed so far enable us to evaluate the function f(x) at values of x other than the ones in the discrete set of given data points (xi , fi ). The objective of inverse interpolation is to determine the independent variable x for a given value of f using a given discrete data set (xi , fi ). If the xi are equally spaced, we can combine two of the tools (polynomial curve fitting and root finding) to meet this objective, although this must be done with caution.

We illustrate this with the example data shown in figure 5.4. Suppose

we wish to find the value of x where f = 100. Using the three data points

in the neighbourhood of f = 100 in figure 5.4 viz. (x3 , x4 , x5 ), and using

a quadratic polynomial fit, we have,

P2(4 + α) = (64) + α(61) + (α(α − 1)/2!)(30)

A graph of this polynomial approximation P2 (4 + α) and the actual func-

tion f (x) = x 3 used to generate the data given in figure 5.4 are shown

in figure 5.7. It is clear that the polynomial approximation is quite good

in the range of x ∈ (4, 6), but becomes a poor approximation for lower

values of x. Note, in particular, that if we solve the inverse interpolation

problem by setting

P2(4 + α) − 100 = 0   or   (64) + α(61) + (α(α − 1)/2!)(30) − 100 = 0

we will find two roots: the one at α = 0.64637 or x = 4.64637 is the desired root, while the other at α = −3.71304 or x = 0.286962 is a spurious one. This problem can become compounded as we use higher degree polynomials in an effort to improve accuracy.

In order to achieve high accuracy, but stay close to the desired root,

we can generate an initial guess from a linear interpolation, followed by

constructing a fixed point iteration scheme on the polynomial approxi-

mation of the desired accuracy. Convergence is generally fast as shown

in Dahlquist and Björck (1974). Suppose we wish to find x corresponding

to f (x) = d, the desired function value. We first construct a polynomial

of degree (m − 1) to represent the tabular data.

Pm−1(x1 + αh) = [1 + α∆ + (α(α − 1)/2!)∆² + (α(α − 1)(α − 2)/3!)∆³ + · · · + (α(α − 1)(α − 2) · · · (α − m + 2)/(m − 1)!)∆^{m−1}] f(x1)

Then we let f(x1 + αh) = d and rearrange the polynomial in the form of a fixed point iteration,

αi+1 = g(αi),   i = 0, 1, 2, · · ·

where

g(α) = (1/∆f1) [d − f1 − (α(α − 1)/2!)∆²f1 − (α(α − 1)(α − 2)/3!)∆³f1 − · · ·]

and the initial guess is obtained by truncating the polynomial after the linear term,

α0 = (d − f1)/∆f1

Example

Continuing with the task of finding x where f (x) = 100 for the data

shown in figure 5.4, the fixed point iterate is,

α0 = (d − f1)/∆f1 = (100 − 64)/61 = 0.5902

The first ten iterates, produced from the m-file given below,

5.5. LAGRANGE POLYNOMIALS 118

i αi

1 .59016393

2 .64964028

3 .64613306

4 .64638815

5 .64636980

6 .64637112

7 .64637102

8 .64637103

9 .64637103

10 .64637103

function a=g(a)
  for i=1:10
    fprintf(1,'%2i %12.7e\n',i,a);
    a=(100 - 64 - 15*a*(a-1))/61;
  end
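The same fixed point iteration, transcribed into Python for convenience (a sketch of the scheme above, not original course material):

```python
# Fixed point iteration for inverse interpolation: find x where f(x) = 100,
# using the quadratic built around the reference point x = 4 of figure 5.4
# (reference value f = 64, with forward differences 61 and 30)
d, fref, dfref, d2fref = 100.0, 64.0, 61.0, 30.0

alpha = (d - fref) / dfref              # initial guess from the linear term
for i in range(15):
    alpha = (d - fref - d2fref/2 * alpha*(alpha - 1)) / dfref

x = 4 + alpha                           # transform back: x = x_ref + alpha*h
print(alpha, x)                         # alpha -> 0.64637103..., x -> 4.64637...
```

Convergence is rapid because |g′(α)| is small near the desired root, just as the iterate table shows.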

The Newton difference polynomials of the previous sections require equally spaced data in x. For a data set {xi , fi | i = 0, · · · , n} that contains unequally spaced data in the independent variable x, we can construct the Lagrange interpolation formula as follows.

Pn(x) = Σ_{i=0}^{n} fi δi(x)    (5.14)

where

δi(x) = Π_{j=0, j≠i}^{n} (x − xj)/(xi − xj)

Note that

δi(xj) = 0 if j ≠ i,   and   δi(xj) = 1 if j = i

and hence it follows from equation (5.14) that Pn(xj) = fj - i.e., the polynomial passes through the data points (xj , fj ).

An alternate way to construct the Lagrange polynomial is based on

introducing the divided difference and constructing a divided difference

table. The polynomial itself is written in the form

Pn(x) = Σ_{i=0}^{n} ai Π_{j=1}^{i} (x − xj−1)    (5.15)
      = a0 + a1(x − x0) + a2(x − x0)(x − x1) + · · · + an(x − x0) · · · (x − xn−1)

The advantage of writing it in the form shown in equation (5.15) is that the unknown coefficients ai can be constructed recursively or found directly

from the divided difference table. The first divided difference is defined

by the equation,

f[x0, x1] = (f1 − f0)/(x1 − x0)

Similarly the second divided difference is defined as,

f[x0, x1, x2] = (f[x1, x2] − f[x0, x1])/(x2 − x0)

With these definitions, we return to the task of finding the coefficients ai in equation (5.15). For example, the first coefficient a0 is,

Pn (x0 ) = a0 = f [x0 ] = f0

The second coefficient, a1 , is obtained from,

Pn (x1 ) = a0 + a1 (x1 − x0 ) = f1

which can be rearranged as,

a1 = (f1 − f0)/(x1 − x0) = f[x0, x1]

The third coefficient is obtained from,

Pn (x2 ) = a0 + a1 (x2 − x0 ) + a2 (x2 − x0 )(x2 − x1 ) = f2

The only unknown here is a2 , which after some rearrangement becomes,

a2 = (f[x1, x2] − f[x0, x1])/(x2 − x0) = f[x0, x1, x2]

In general the n-th coefficient is the n-th divided difference.

an = f [x0 , x1 , · · · , xn ]


x0 = 1.0    f0 = 1.000
                         f[x0, x1] = 3.6400
x1 = 1.2    f1 = 1.728                        f[x0, x1, x2] = 3.700
                         f[x1, x2] = 5.4900                         f[x0, x1, x2, x3] = 1.000
x2 = 1.5    f2 = 3.375                        f[x1, x2, x3] = 4.300
                         f[x2, x3] = 7.2100
x3 = 1.6    f3 = 4.096

Figure 5.8: Divided difference table for the example data

Example

Consider the example data and the divided difference table shown in figure 5.8. If we wish to construct a quadratic polynomial passing through (x0 , f0 ), (x1 , f1 ), (x2 , f2 ), for example using equation (5.14), it will be

P2(x) = f0 (x − x1)(x − x2)/[(x0 − x1)(x0 − x2)] + f1 (x − x0)(x − x2)/[(x1 − x0)(x1 − x2)] + f2 (x − x0)(x − x1)/[(x2 − x0)(x2 − x1)]

      = 1.000 (x − 1.2)(x − 1.5)/[(1 − 1.2)(1 − 1.5)] + 1.728 (x − 1)(x − 1.5)/[(1.2 − 1)(1.2 − 1.5)] + 3.375 (x − 1)(x − 1.2)/[(1.5 − 1)(1.5 − 1.2)]

The same polynomial using equation (5.15) and the difference table shown in figure 5.8 will be written as,

P2(x) = 1.000 + 3.64(x − 1) + 3.70(x − 1)(x − 1.2)

If we wish to include an additional data point (x3 , f3 ), the Lagrange polynomial based on equation (5.14) requires a complete reconstruction of the equation, while that based on equation (5.15) merely requires adding the extra term

f[x0, x1, x2, x3](x − x0)(x − x1)(x − x2) = 1(x − 1)(x − 1.2)(x − 1.5)

A MATLAB function that implements the Lagrange interpolation formula shown in equation (5.14) is given in figure 5.9. This function accepts a table of values (xt, ft), constructs the highest degree Lagrange polynomial that is possible and finally evaluates and returns the interpolated values of the function at specified values of x.

function f=LagrangeP(xt,ft,x)
% (xt,ft) are the table of unequally spaced values
% x is where interpolated values are required
% f, the interpolated values, are returned
m=length(x);
nx=length(xt);
ny=length(ft);
if (nx ~= ny),
  error(' (xt,ft) do not have the same # values')
end
for k=1:m
  sum = 0;
  for i=1:nx
    delt(i)=1;
    for j=1:nx
      if (j ~= i),
        delt(i) = delt(i)*(x(k)-xt(j))/(xt(i)-xt(j));
      end
    end
    sum = sum + ft(i) * delt(i);
  end
  f(k)=sum;
end

Figure 5.9: MATLAB implementation of the Lagrange interpolation polynomial

»xt=[1.0 1.2 1.5 1.6] % Define xt, the sample locations
»ft=[1 1.728 3.375 4.096] % Define ft, the function values
»x=[1.0:0.1:1.6] % x locations for interpolation
»f=LagrangeP(xt,ft,x) % interpolated f values
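Both constructions can be sketched in Python (a translation for readers without MATLAB; the in-place divided-difference loop is a standard formulation, not taken from the text). It reproduces the coefficients 1.000, 3.64, 3.70, 1.000 of figure 5.8.

```python
import numpy as np

# Data of figure 5.8 (f(x) = x^3 at unequally spaced points)
xt = np.array([1.0, 1.2, 1.5, 1.6])
ft = np.array([1.0, 1.728, 3.375, 4.096])

def lagrange(x, xt, ft):
    """Evaluate the Lagrange polynomial of equation (5.14) at x."""
    total = 0.0
    for i in range(len(xt)):
        delta = 1.0
        for j in range(len(xt)):
            if j != i:
                delta *= (x - xt[j]) / (xt[i] - xt[j])
        total += ft[i] * delta
    return total

# Divided-difference coefficients a_i of the Newton form (5.15),
# computed column by column, in place
coeffs = ft.copy()
for k in range(1, len(xt)):
    coeffs[k:] = (coeffs[k:] - coeffs[k-1:-1]) / (xt[k:] - xt[:-k])

print(coeffs)                  # [1.0, 3.64, 3.70, 1.0], as in figure 5.8
print(lagrange(1.3, xt, ft))   # ~ 2.197 (= 1.3**3): the cubic is recovered
```

Adding a data point extends `coeffs` by one entry; the Lagrange sum, by contrast, must be rebuilt from scratch, just as the text notes.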

Having constructed polynomial approximations by the methods of sections §5.3.2 or §5.5, we can proceed to construct algorithms for approximate representations of derivatives.

Consider the Newton forward formula given in equation (5.11)

f(x) ≈ Pm−1(x1 + αh) = [1 + α∆ + (α(α − 1)/2!)∆² + (α(α − 1)(α − 2)/3!)∆³ + · · · + (α(α − 1) · · · (α − m + 2)/(m − 1)!)∆^{m−1}] f(x1) + O(h^m)

that passes through the given data set {(xi , fi ) | i = 1, · · · , m}. Note

that the independent variable x has been transformed into α using x =

x1 + αh, hence dx/dα = h. Now, the first derivative is obtained as,

f′(x) = df/dx ≈ dPm−1/dx = (dPm−1/dα)(dα/dx)
      = (1/h) [∆ + ((α + (α − 1))/2)∆² + ((α(α − 1) + (α − 1)(α − 2) + α(α − 2))/6)∆³ + · · ·] f(x1)    (5.16)

Equation (5.16) forms the basis of deriving a class of approximations for

first derivatives from a tabular set of data. Note that the equation (5.16)

is still a function in α and hence it can be used to evaluate the derivative

at any value of x = x1 + αh. Also, the series can be truncated after

any number of terms. Thus, a whole class of successively more accurate

representations for the first derivative can be constructed from equation (5.16) by truncating the series at higher order terms. For example, setting α = 0 to evaluate the derivative at the reference point x1, equation (5.16) reduces to,

f′(x1) = (1/h) [∆ − (1/2)∆² + (1/3)∆³ − (1/4)∆⁴ · · · ± (1/(m − 1))∆^{m−1}] f(x1) + O(h^{m−1})

This equation can also be obtained directly using equation (5.9) as,

E = e^{hD}   or   hD = ln E = ln(1 + ∆)

so that

hD = ∆ − ∆²/2 + ∆³/3 − ∆⁴/4 + · · ·

Operating with both sides on f(x1) (i.e., using x1 as the reference point), we get,

Df(x1) = f′(x1) = (1/h) [∆ − ∆²/2 + ∆³/3 − ∆⁴/4 + · · ·] f(x1)    (5.17)

Truncating the series after the first term (m = 2),

f′(x1) = (1/h)[∆f(x1)] + O(h) = (1/h)[f2 − f1] + O(h)

which is a 2-point, first order accurate, forward difference approximation

for first derivative at x1 . Truncating the series after the first two terms

(m = 3),

f′(x1) = (1/h) [∆f(x1) − (1/2)∆²f(x1)] + O(h²)
       = (1/h) [(f2 − f1) − (1/2)(f1 − 2f2 + f3)] + O(h²)
       = (1/2h) [−3f1 + 4f2 − f3] + O(h²)

which is the 3-point, second order accurate, forward difference approx-

imation for the first derivative at x1 . Clearly both are approximate rep-

resentations of the first derivative at x1 , but the second one is more

accurate since the truncation error is of the order h2 .

Note that while equation (5.17) is evaluated at the reference point

on both sides of the equation, the earlier equation (5.16) is a polynomial

that is constructed around the reference point x1, but can be evaluated at any other value of x as well. A number of useful difference approximations for first and second derivatives, together with their truncation errors, are collected in Table 5.4.

Table 5.4: Finite difference approximations of derivatives at xi

f′(xi) ≈ (fi+1 − fi)/h                          error O(h)
f′(xi) ≈ (fi − fi−1)/h                          error O(h)
f′(xi) ≈ (−3fi + 4fi+1 − fi+2)/2h               error O(h²)
f′(xi) ≈ (3fi − 4fi−1 + fi−2)/2h                error O(h²)
f′(xi) ≈ (fi+1 − fi−1)/2h                       error O(h²)
f′(xi) ≈ (fi−2 − 8fi−1 + 8fi+1 − fi+2)/12h      error O(h⁴)
f″(xi) ≈ (fi+1 − 2fi + fi−1)/h²                 error O(h²)
f″(xi) ≈ (fi+2 − 2fi+1 + fi)/h²                 error O(h)
f″(xi) ≈ (−fi−3 + 4fi−2 − 5fi−1 + 2fi)/h²       error O(h²)
f″(xi) ≈ (−fi+3 + 4fi+2 − 5fi+1 + 2fi)/h²       error O(h²)

As an example, let us construct an approximation for the first derivative at x = x2, i.e., at α = 1. A two-term truncation of equation (5.16) yields,

f′(x2) = (1/h) [∆ + (1/2)∆²] f(x1) + O(h²)

or

f′(x2) = (1/2h) [f3 − f1] + O(h²)

which is a 3-point, second order accurate, central difference approxima-

tion for the first derivative at x2 .

Going through a similar exercise as above with the Newton backward

difference formula (5.13), truncating the series at various levels and using different reference points, one can easily develop a whole class of

approximations for first order derivatives. Some of the useful ones are

summarized in Table 5.4.

The second derivative of the polynomial approximation is obtained by

taking the derivative of equation (5.16) one more time - viz.

f″(x) ≈ (d/dα)[dPm−1/dα] (dα/dx)² = (1/h²) [∆² + (({α + (α − 1)} + {(α − 1) + (α − 2)} + {α + (α − 2)})/6)∆³ + · · ·] f(x1) + O(h^{m−2})    (5.18)

5.6. NUMERICAL DIFFERENTIATION 125

Evaluating this at the reference point x1 (i.e., α = 0) gives,

f″(x1) = (1/h²) [∆² − ∆³ + (11/12)∆⁴ − (5/6)∆⁵ + (137/180)∆⁶ − · · ·] f(x1)    (5.19)

This equation can also be obtained directly using equation (5.9) as,

(hD)² = [∆ − ∆²/2 + ∆³/3 − ∆⁴/4 + ∆⁵/5 − · · ·]²
      = ∆² − ∆³ + (11/12)∆⁴ − (5/6)∆⁵ + (137/180)∆⁶ − (7/10)∆⁷ + (363/560)∆⁸ − · · ·

Operating with both sides on f(x1) (i.e., using x1 as the reference point), we get,

D²f(x1) = f″(x1) = (1/h²) [∆² − ∆³ + (11/12)∆⁴ − · · ·] f(x1)

Truncating after one term,

f″(x1) = (1/h²) [∆²f(x1)] = (1/h²)(f1 − 2f2 + f3) + O(h)

Truncating after two terms,

f″(x1) = (1/h²) [∆²f(x1) − ∆³f(x1)] = (1/h²)(2f1 − 5f2 + 4f3 − f4) + O(h²)

Evaluating equation (5.18) at α = 1 (i.e., at x = x2), on the other hand, yields

f″(x2) = (1/h²) [∆² + 0 · ∆³ + · · ·] f(x1) + O(h²)

Note that the third order term turns out to be zero and hence this for-

mula turns out to be more accurate. This is a 3-point, second order ac-

curate central difference approximation for the second derivative given

as,

f″(x2) = (1/h²) [∆²f(x1)] = (1/h²)(f1 − 2f2 + f3) + O(h²)

5.7. NUMERICAL INTEGRATION 126

One can derive finite difference approximations from Taylor series ex-

pansion also. Consider the following expansions around xi .

f(xi + h) = f(xi) + hf′(xi) + (h²/2)f″(xi) + (h³/3!)f‴(xi) + · · ·

f(xi − h) = f(xi) − hf′(xi) + (h²/2)f″(xi) − (h³/3!)f‴(xi) + · · ·

Subtracting the second from the first equation, and extracting f 0 (xi ), we

get,

f′(xi) = (f(xi + h) − f(xi − h))/(2h) − (h²/6)f‴(xi) + · · ·

or,

f′(xi) = (fi+1 − fi−1)/(2h) + O(h²)

which is a central difference formula for the first derivative that we de-

rived in the last section §5.6. Adding the two Taylor series expansions

above, we get,

fi+1 + fi−1 = 2fi + h²f″(xi) + (h⁴/12)f⁗(xi) + · · ·

or,

f″(xi) = (1/h²) [fi+1 + fi−1 − 2fi] + O(h²)

which is a central difference formula for the second derivative that we

derived in the last section §5.6.
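The central difference formulas derived here are easy to verify numerically; the following Python sketch (with sin(x) as an arbitrary test function of our own choosing) confirms the O(h²) behavior.

```python
import math

# f(x) = sin(x) at x = 1: the exact derivatives are cos(1) and -sin(1)
x = 1.0
errors = []
for h in (0.1, 0.05, 0.025):
    d1 = (math.sin(x + h) - math.sin(x - h)) / (2*h)                  # f'
    d2 = (math.sin(x + h) - 2*math.sin(x) + math.sin(x - h)) / h**2   # f''
    errors.append((abs(d1 - math.cos(x)), abs(d2 + math.sin(x))))

# Halving h cuts both errors by roughly a factor of 4, confirming O(h^2)
for (e1a, e2a), (e1b, e2b) in zip(errors, errors[1:]):
    print(e1a / e1b, e2a / e2b)
```

The printed ratios hover near 4, exactly what a second order truncation error predicts.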

Numerical integration is needed either (i) when the function to be integrated is complicated and hence is not easy to integrate analytically, or (ii) when the data is given in equally spaced, tabular form. In either

case the starting point is to use the functional approximation methods

seen in earlier sections followed by the integration of the approximate

function. By doing this formally, with the Newton forward polynomials,

we can develop a class of integration formulas. Consider the integral,

∫ₐᵇ f(x) dx    (5.20)


If f(x) is approximated by a polynomial of degree n with an error of order h^{n+1}, we can use this approximation to carry out the integration. We first divide the interval x ∈ [a, b] into n subdivisions as shown in the sketch below; hence there will be (n + 1) equally spaced points at which the function values are known.

[Sketch: the interval from x = a (i = 0) to x = b (i = n) divided into n subdivisions, with function values f0, f1, f2, · · · , fi, · · · , fn at the grid points.]

h = (b − a)/n,   x = x0 + αh,   dx = h dα

Consider first a single interval x ∈ [x0 , x1], with a linear polynomial P1 constructed over it. We have

∫_{x0}^{x1} f(x) dx ≈ ∫₀¹ P1(x0 + αh) h dα + ∫₀¹ O(h²) h dα

or,

∫_{x0}^{x1} f(x) dx ≈ ∫₀¹ [1 + α∆] f0 h dα + O(h³)

∫_{x0}^{x1} f(x) dx ≈ (h/2) [f0 + f1] + O(h³)    (5.21)

where the last term is the local error.

This formula is the well known trapezoidal rule for numerical integra-

tion. The geometrical interpretation is that it represents the shaded area

under the curve. Note that while numerical differentiation, as developed

in equation (5.16), lowers the order of the truncation error by one due to

the term dα/dx = 1/h numerical integration increases the order of the


truncation error by one due to the term dx = hdα. In the above formula

the truncation error is of order O(h3 ). It is called the local truncation

error since it is the error in integrating over one interval x ∈ (x0 , x1 ).

To obtain the complete integral over the interval x ∈ [a, b] we apply

equation (5.21) repeatedly over each of the subdivisions as,

∫ₐᵇ f(x) dx = Σ_{i=1}^{n} ∫_{xi−1}^{xi} f(x) dx = (h/2) Σ_{i=1}^{n} [fi−1 + fi] + Σ_{i=1}^{n} O(h³)

where the last sum is the global error,

which, since n = (b − a)/h, is of order O(h²). Thus the trapezoidal rule has a local truncation error

of order O(h3 ) and a global truncation error of order O(h2 ) and the

equation is,

∫ₐᵇ f(x) dx = (h/2) Σ_{i=1}^{n} [fi−1 + fi] + O(h²)    (5.22)

where the O(h²) term is the global error.
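A minimal Python sketch of the composite trapezoidal rule (5.22); the integrand is the fifth degree polynomial that will reappear in the Romberg example of equation (5.26), so the n = 1, 2, 4 results can be compared against the first column of figure 5.10.

```python
# Composite trapezoidal rule, equation (5.22)
def f(x):
    return 0.2 + 25*x - 200*x**2 + 675*x**3 - 900*x**4 + 400*x**5

def trapezoid(f, a, b, n):
    h = (b - a) / n
    return h * (0.5*(f(a) + f(b)) + sum(f(a + i*h) for i in range(1, n)))

for n in (1, 2, 4, 8):
    print(n, trapezoid(f, 0.0, 0.8, n))
# n = 1, 2, 4 give 0.1728, 1.0688, 1.4848 -- the k = 1 column of figure 5.10
```

Each doubling of n cuts the remaining error by roughly a factor of 4, consistent with the O(h²) global error.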

By repeating this development with a quadratic polynomial P2(x0 + αh) integrated over the range of x ∈ [x0 , x2], we obtain Simpson's rule.

∫_{x0}^{x2} f(x) dx ≈ ∫₀² P2(x0 + αh) h dα + ∫₀² O(h³) h dα

or,

∫_{x0}^{x2} f(x) dx ≈ ∫₀² [1 + α∆ + (α(α − 1)/2)∆²] f0 h dα + O(h⁴)

∫_{x0}^{x2} f(x) dx ≈ (h/3) [f0 + 4f1 + f2] + O(h⁴)    (5.23)

where the last term is the local error.

Note that the next neglected term in the polynomial P2(x0 + αh), the one corresponding to the O(h³) term, viz.

∫₀² (α(α − 1)(α − 2)/3!) ∆³f0 h dα

turns out to be exactly zero, thus making the local truncation error in Simpson's rule actually of order O(h⁵). Repeated application of equation (5.23) over successive pairs of subdivisions of the interval x ∈ [a, b] gives the composite rule,

∫ₐᵇ f(x) dx = (h/3) [f0 + 4f1 + 2f2 + 4f3 + 2f4 + · · · + fn] + O(h⁴)    (5.24)

where the O(h⁴) term is the global error.

Note that in applying Simpson’s rule repeatedly over the interval x ∈

[a, b], we must have an even number of intervals (n even) or equivalently

an odd number of points.
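Composite Simpson's rule (5.24), with the even-interval requirement enforced, can be sketched as follows (same illustrative integrand as before):

```python
# Composite Simpson's rule, equation (5.24); n must be even
def f(x):
    return 0.2 + 25*x - 200*x**2 + 675*x**3 - 900*x**4 + 400*x**5

def simpson(f, a, b, n):
    if n % 2:
        raise ValueError("Simpson's rule needs an even number of intervals")
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i*h) for i in range(1, n, 2))   # odd points, weight 4
    s += 2 * sum(f(a + i*h) for i in range(2, n, 2))   # even points, weight 2
    return h * s / 3

print(simpson(f, 0.0, 0.8, 2))    # 1.3674667 with just two intervals
print(simpson(f, 0.0, 0.8, 32))   # converges toward the exact 1.64053334
```

Note how much better the two-interval Simpson estimate is than the two-interval trapezoidal one, reflecting the higher order truncation error.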

An idea similar to that used in section §2.8 to accelerate convergence

is the notion of extrapolation to improve accuracy of numerical integra-

tion. The basic idea is to estimate the truncation error by evaluating the

integral on two different grid sizes, h1 and h2 . Let us apply this idea on

the trapezoidal rule which has a global truncation error of O(h2 ). Let

the exact integral be represented as,

I = I(h1 ) + E(h1 )

where I(h1 ) is the approximate estimate of the integral using grid size

of h1 and E(h1 ) is the error. Similarly we have,

I = I(h2 ) + E(h2 )

Since the trapezoidal rule has a global truncation error of order O(h²), the two errors are related (to leading order) by

E(h1)/E(h2) = h1²/h2²

or,

I(h1) + (h1²/h2²) E(h2) = I(h2) + E(h2)

which can be solved to obtain E(h2 ) as,

E(h2) = (I(h1) − I(h2)) / (1 − (h1/h2)²)

Hence an improved estimate of the integral is,

I = I(h2) + [I(h2) − I(h1)] / ((h1/h2)² − 1)

If h2 = h1 /2 then we have,

I = I(h2) + [I(h2) − I(h1)] / (2² − 1)

Since we have estimated and eliminated the O(h²) term, the above equation will have an error of order O(h⁴), which

is the next leading term in the Taylor series expansion. By repeated

application of the above approach to estimate and eliminate successively

higher order terms, we can arrive at the following general formula for

Romberg extrapolation.

Ij,k = (4^{k−1} Ij+1,k−1 − Ij,k−1) / (4^{k−1} − 1)    (5.25)

Here Ij,1 denotes the trapezoidal estimate at grid level j and k the level of extrapolation. As an example, consider the integral

∫₀^{0.8} (0.2 + 25x − 200x² + 675x³ − 900x⁴ + 400x⁵) dx = 1.64053334    (5.26)

which has the exact value as shown. A sketch of the function f (x) and

the Romberg extrapolation results are shown in figure 5.10. It is clear

from this example that by combining three rather poor estimates of the

integral on grids of h = 0.8, 0.4 and 0.2, a result accurate to eight sig-

nificant digits has been obtained! For example, I2,2 is obtained by using

j = 2 and k = 2 which results in,

I2,2 = (4 I3,1 − I2,1)/(4 − 1) = (4(1.4848) − 1.0688)/3 = 1.6234667

Similarly, using j = 1 and k = 3,

I1,3 = (4² I2,2 − I1,2)/(4² − 1) = (16(1.6234667) − 1.3674667)/15 = 1.64053334

[Figure 5.10: The integrand f(x) of equation (5.26) plotted over x ∈ [0, 0.8], together with the Romberg extrapolation results:]

                 O(h²)      O(h⁴)       O(h⁶)
                 (k = 1)    (k = 2)     (k = 3)
j = 1   h = 0.8  0.1728     1.3674667   1.64053334
j = 2   h = 0.4  1.0688     1.6234667   1.64053334
j = 3   h = 0.2  1.4848     1.6394667
j = 4   h = 0.1  1.6008
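The table above can be regenerated with a short Romberg routine. In this Python sketch the loop indices are zero-based, so the factor 4**k plays the role of 4^{k−1} in equation (5.25).

```python
# Romberg extrapolation, equation (5.25), for the integral of equation (5.26)
def f(x):
    return 0.2 + 25*x - 200*x**2 + 675*x**3 - 900*x**4 + 400*x**5

def trapezoid(f, a, b, n):
    h = (b - a) / n
    return h * (0.5*(f(a) + f(b)) + sum(f(a + i*h) for i in range(1, n)))

levels = 4
I = [[trapezoid(f, 0.0, 0.8, 2**j)] for j in range(levels)]  # k = 1 column
for k in range(1, levels):                                   # extrapolate
    for j in range(levels - k):
        I[j].append((4**k * I[j+1][k-1] - I[j][k-1]) / (4**k - 1))

print(I[0])   # 0.1728, 1.3674667, 1.64053334, ... (row j = 1 of figure 5.10)
```

Three poor trapezoidal estimates combine into an essentially exact answer, just as the text observes.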

5.8. ORTHOGONAL FUNCTIONS 132

function f=int_ex(x)
%defines a 5th degree polynomial
m=length(x);
for i=1:m
  f(i) = 0.2 + 25*x(i) - 200*x(i)^2 + ...
         675*x(i)^3 - 900*x(i)^4 + 400*x(i)^5;
end

Figure 5.11: m-file implementing the integrand of equation (5.26)

MATLAB example

The built-in MATLAB function quad implements an adaptive, recursive Simpson's quadrature. You must, of course, define the function through an m-file which should accept a vector of input arguments and return the corresponding function values; an m-file that implements the integrand of equation (5.26) is shown in figure 5.11. After creating such a file, work through the following example during an interactive session.

»quad('int_ex',0,0.8,1e-5,1) % Display results graphically
»quad('int_ex',0,0.8,1e-8) % Note the warning messages!

5.7.3 Multiple integrals

Education has produced a vast population able to read but unable to distinguish what is worth reading.

— G.M. TREVELYAN

Chapter 6

Ordinary differential equations - Initial value problems

In this chapter we consider numerical methods for nonlinear ordinary differential equations of the initial value type. Such

models arise in describing lumped parameter, dynamic models. Entire

books (Lapidus & Seinfeld, 1971; Lambert, 1973) are devoted to the de-

velopment of algorithms for such problems. We will develop only ele-

mentary concepts of single and multistep methods, implicit and explicit

methods, and introduce concepts of numerical stability and stiffness.

General purpose routines such as LSODE, which implement several advanced features such as automatic step size control, error control etc., are available from NETLIB.

Such problems can be represented generically as a system of first order equations of the form,

dy/dt = f(y, t)    (6.1)

y(t = t0) = y0    (6.2)

and our objective is to construct an approximate representation of the function y(t) over some interval of interest t ∈ [t0 , tf ] that satisfies equation (6.1) together with the initial condition (6.2).

6.1. MODEL EQUATIONS AND INITIAL CONDITIONS 134

If the functions f (y, t) depend on t explicitly, then the equations are

called non-autonomous; otherwise they are called an autonomous sys-

tem of equations.

A higher order differential equation (say of order n) can be converted into

an equivalent system of (n) first order equations. Consider the equation,

an (dⁿθ/dtⁿ) + an−1 (d^{n−1}θ/dt^{n−1}) + · · · + a1 (dθ/dt) + a0 θ = b    (6.3)

subject to a set of n initial conditions at t0 of the form,

d^{n−1}θ/dt^{n−1} |t0 = cn−1
d^{n−2}θ/dt^{n−2} |t0 = cn−2
        ·
        ·
dθ/dt |t0 = c1
θ |t0 = c0    (6.4)

Since all of these conditions are given at t0 , this remains an initial value

problem. Equation (6.3) can be recast into a system of n first order equa-

tions of the form (6.1) as follows. Let us define θ and all of its (n − 1)

successive higher derivatives as

y1(t) = θ(t),   y2(t) = dθ/dt,   y3(t) = d²θ/dt²,   · · · ,   yn(t) = d^{n−1}θ/dt^{n−1}

Then we have,

dy1/dt = y2,        y1(t0) = c0
dy2/dt = y3,        y2(t0) = c1
        ·                   ·
dyn−1/dt = yn,      yn−1(t0) = cn−2
dyn/dt = (1/an)[b − a0 y1 − a1 y2 − · · · − an−1 yn],    yn(t0) = cn−1    (6.5)

6.2. TAYLOR SERIES EXPANSION 135

where the last equation has been obtained from the n-th order equation

(6.3). Also shown in equations (6.5), are the transformed initial condi-

tions from equation (6.4) in terms of the new variable set y.

Note that the coefficients {a0 , a1 , · · · an , b} in equation (6.3) can in

general be nonlinear functions of θ and its derivatives. This nonlinearity

will reflect in equations (6.5), since the coefficients {a0 , a1 , · · · an , b} will

be functions of the transformed variables {y1 , y2 · · · yn }.
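As a concrete illustration (not from the text), here is a minimal Python sketch of this reduction for a second order equation a2 θ'' + a1 θ' + a0 θ = b with constant, arbitrarily chosen coefficients; the function returns the right hand side of the equivalent first order system (6.1).

```python
import numpy as np

def second_order_as_system(t, y, a2=1.0, a1=3.0, a0=2.0, b=1.0):
    """Right hand side of the first order system equivalent to
    a2*theta'' + a1*theta' + a0*theta = b,
    with y[0] = theta and y[1] = dtheta/dt."""
    y1, y2 = y
    return np.array([y2, (b - a0 * y1 - a1 * y2) / a2])

# initial conditions theta(t0) = c0 = 0, theta'(t0) = c1 = 0
y0 = np.array([0.0, 0.0])
```

Any standard initial value solver can then be applied to this two-component system directly.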

6.2 Taylor series expansion

Given the autonomous equation

dy/dt = f(y),   y(t0) = y0    (6.6)

we wish to construct an approximate solution to y(t) at a discrete set of points {tn | n =
0, 1, · · ·}. We can achieve this by constructing a Taylor series expansion
for y(t) around tn with a step size of h as,

y(tn + h) = y(tn) + y′(tn) h + y′′(tn) h²/2 + · · ·

Truncating after the linear term and recognizing that y′(tn) = f(yn),
we have the Euler scheme for generating the solution sequence,

yn+1 = yn + h f(yn) + O(h²),   n = 0, 1, · · ·    (6.7)

where the O(h²) term is the local error. The local truncation error is of order
O(h²). It is called a single-step method because it requires only the value

at yn to predict the value at next step yn+1 . It is explicit because the right

hand side terms [yn + hf (yn )] can be computed explicitly using known

value of yn .

An alternative route is to integrate equation (6.6) over one step,

∫_{yn}^{yn+1} dy = ∫_{tn}^{tn+1} f(y) dt    (6.8)

Depending on how we approximate the function f(y), we can recover not only the Euler scheme,
but also develop a mechanism for obtaining a whole class of implicit and multistep
methods. First let us use the m-th degree Newton forward polynomial from equation (5.11), viz.

f(y) ≈ Pm(tn + αh) = [1 + α∆ + (α(α − 1)/2!) ∆² + · · · + (α(α − 1)(α − 2) · · · (α − m + 1)/m!) ∆^m] fn + O(h^{m+1})

where tn has been used as the reference point, fn means f (yn ) and h

is the step size. Since t = tn + αh we have dt = hdα. Using a one term

expansion in equation (6.8), ( i.e., m = 0) results in,

yn+1 − yn = ∫_{tn}^{tn+1} P0(tn + αh) dt + ∫_{tn}^{tn+1} O(h) dt     (the last term is the local truncation error)
          = ∫_0^1 P0(tn + αh) h dα + ∫_0^1 O(h) h dα
          = ∫_0^1 fn h dα + O(h²)
          = fn h + O(h²)

which is the same equation as (6.7). This approach, however, lends itself

naturally to further development of higher order methods. For example

a two-term expansion ( i.e., m = 1) results in,

yn+1 − yn = ∫_{tn}^{tn+1} P1(tn + αh) dt + ∫_{tn}^{tn+1} O(h²) dt     (the last term is the local truncation error)
          = ∫_0^1 P1(tn + αh) h dα + ∫_0^1 O(h²) h dα
          = ∫_0^1 [fn + α ∆fn] h dα + O(h³)
          = h [α fn + (α²/2) ∆fn]_0^1 + O(h³)
          = h [fn + (1/2)(fn+1 − fn)] + O(h³).


Hence we have the final form of the modified Euler scheme as,

yn+1 = yn + (h/2)[fn + fn+1] + O(h³),   n = 0, 1, · · ·    (6.9)

where O(h³) is the local error.

Both the Euler method given in equation (6.7) and the modified Euler

scheme given by equation (6.9) are single-step methods since only yn

is required to predict yn+1 . The modified Euler method is an implicit

scheme since we need to compute fn+1 which depends on yn+1 . Note

that implicit schemes require the solution of a nonlinear algebraic equation
at every time step. Thus to calculate yn+1 from equation (6.9) we

need to use an iterative method that involves providing an initial guess

for yn+1 and using equation (6.9) as a fixed point iteration scheme until

yn+1 converges to the desired accuracy. At first glance, this might appear

to be a disadvantage of the implicit schemes. However, implicit schemes

have the ability to anticipate sharp changes in the solution between yn

and yn+1 and hence are suitable (in fact required) for solving the so

called stiff differential equations.

This initial guess could be provided by the Euler method ( viz. equa-

tion (6.7)). When an explicit scheme is combined with an implicit scheme

in this manner, we have the so called predictor-corrector scheme. The

Euler and modified Euler predictor-corrector pair is,

y^P_{n+1} = yn + h f(yn)    and    y^C_{n+1} = yn + (h/2)[f(yn) + f(y^P_{n+1})]    (6.10)

where the superscript P represents the predicted value from an explicit

scheme and C represents the corrected value from an implicit scheme.
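One step of this predictor-corrector pair can be sketched in a few lines of Python (the function names here are illustrative):

```python
import math

def predictor_corrector_step(f, yn, h):
    """One Euler (predict) / modified Euler (correct) step, as in (6.10)."""
    yp = yn + h * f(yn)                     # predictor: explicit Euler
    return yn + 0.5 * h * (f(yn) + f(yp))   # corrector: trapezoidal rule

# integrate dy/dt = -y from y(0) = 1 to t = 1 with h = 0.1
y = 1.0
for _ in range(10):
    y = predictor_corrector_step(lambda v: -v, y, 0.1)
```

Note that only one corrector evaluation is made per step here; iterating the corrector to convergence would recover the fully implicit modified Euler scheme.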

It should be clear that extending the Newton forward polynomial to

a three-term expansion will not be fruitful, since that would involve not

only fn+1 , but also fn+2 . We can, however, use Newton backward poly-

nomials to develop higher order methods as will be done in section §6.3.

But, let us explore first the reason for and the circumstances under which

implicit schemes are useful.

Let us consider a model, linear equation,

dy/dt = λ y,   y(t = 0) = 1

whose exact solution is

y(t) = e^{λt}

For λ < 0 the exact solution decreases monotonically to zero as t → ∞.

Let us examine the sequence {yn |n = 0, 1, · · ·} generated by the explicit,

Euler scheme and the implicit, modified Euler scheme. Note that in this

model problem the function f (y) = λy. The Euler equation is,

yn+1 = yn + hf (yn ) = yn + hλyn = [1 + hλ]yn

Thus the sequence is,

y1 = [1 + hλ] y0
y2 = [1 + hλ] y1 = [1 + hλ]^2 y0
y3 = [1 + hλ] y2 = [1 + hλ]^3 y0
· · ·

leading to the general solution,

yn = [1 + hλ]^n y0

When the step size h is chosen to be too large (more specifically |hλ| > 2

in this case), the sequence will diverge, while the exact solution remains

bounded. This phenomenon is called numerical instability caused by the

discretization. Explicit methods in general have such a stability bound

on the step size h.

Let us examine the behavior of an implicit scheme - viz. the modified

Euler scheme.

yn+1 = yn + (h/2)[fn + fn+1] = yn + (h/2)[λ yn + λ yn+1]

Note that yn+1 appears on both sides. Solving for yn+1 we get,

yn+1 = [(1 + hλ/2)/(1 − hλ/2)] yn

Thus the sequence is,

y1 = [(1 + hλ/2)/(1 − hλ/2)] y0
y2 = [(1 + hλ/2)/(1 − hλ/2)] y1 = [(1 + hλ/2)/(1 − hλ/2)]^2 y0
y3 = [(1 + hλ/2)/(1 − hλ/2)] y2 = [(1 + hλ/2)/(1 − hλ/2)]^3 y0
· · ·

[Figure 6.1: spring and dash pot model; the sketch shows the fast time scale solution e^{−kt/m} together with the slow response.]

and in general,

yn = [(1 + hλ/2)/(1 − hλ/2)]^n y0

It is clear that for λ < 0, the ratio |(1 + hλ/2)/(1 − hλ/2)| < 1 for any choice of step

size h. Thus the implicit scheme is absolutely stable. Hence, for explicit

schemes, the choice of h is governed by both stability and truncation er-

ror considerations while for implicit schemes only truncation error con-

siderations dictate the choice of step size, h.
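This can be checked directly: the per-step amplification factor of the modified Euler scheme has magnitude below one for every positive step size when λ < 0 (Python sketch):

```python
lam = -1000.0   # a very stiff decay rate

def amplification(h):
    """Growth factor per step of the modified Euler scheme on dy/dt = lam*y."""
    return (1.0 + 0.5 * h * lam) / (1.0 - 0.5 * h * lam)

gains = [abs(amplification(h)) for h in (1e-3, 1e-1, 10.0)]
```

Even for step sizes enormously larger than the characteristic time 1/|λ|, the sequence remains bounded.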

The reciprocal 1/|λ| represents the characteristic time scale of the problem. For a second

order equation (or equivalently a system of two first-order equations),

there will be two such time scales λ1 and λ2 . If the time scales are widely

separated in magnitude then we have a stiff system of differential equa-

tions. Consider the spring and dash pot model shown in figure 6.1. The

displacement x is modelled by the force balance equation,

m d²x/dt² + k dx/dt + c x = 0

where m, k and c are constants and x is the displacement. Let us assume that it is subject to the initial

conditions x(t = 0) = 0 and x 0 (t = 0) = constant. We can write the

characteristic equation as,

(m/k) λ² + λ + (c/k) = 0

with roots

λ = [−1 ± √(1 − 4mc/k²)] / (2m/k)

In the limit 4mc/k² ≪ 1, the two roots are approximately

λ1 = −k/m    and    λ2 = −c/k

Then |λ1| ≫ |λ2|, and this limit corresponds to the stiff behavior of the

solution. In general the stiffness ratio is defined as the ratio of the largest

to the smallest eigenvalues. In this example the stiffness ratio is (k2 /mc)

and it becomes large as m is made small. The solution satisfying the first

initial condition is,

−kt/m ct/k

x(t) = A1 [e

| {z } − |e {z }]

f ast slow

where the fast and slow response terms are as shown. The sketch in

figure 6.1 also shows the fast and slow response solutions. Note that if

m = 0, the order of the differential equation drops by one and λ1 is the

only time scale for the problem. This kind of phenomenon also occurs

in a number of chemical reaction systems, where some of the reactions

can occur on a rapid time scale while others take place on a longer time

scale. The ozone decomposition model discussed in section §1.4.2 is

another example of stiff differential equations.

It should now be clear that the stiffness phenomenon corresponds to large
eigenvalues and fast response regions where the solution changes rapidly.

In a system of n first order equations there will be n characteristic roots

or eigenvalues. If λmax is the largest eigenvalue, then explicit schemes

will typically have a numerical stability limit of the form |hλmax | <

constant. Hence explicit schemes require that extremely small step

size h be used in regions where the system responds very rapidly; oth-

erwise the integration sequence will diverge. Implicit schemes that are

absolutely stable have no such restrictions. The integration sequence

using implicit schemes will remain bounded. The choice of step size is

determined only by the desired accuracy of the solution. Stability analysis
for a variety of explicit and implicit methods is discussed in greater
detail by Lapidus and Seinfeld (1971).

6.3 Multistep methods

Consider approximating the function f (y) in equation (6.8) by the fol-

lowing m-th degree Newton backward polynomial from equation (5.13),

f(y) ≈ Pm(tn + αh) = [1 + α∇ + (α(α + 1)/2!) ∇² + · · · + (α(α + 1)(α + 2) · · · (α + m − 1)/m!) ∇^m] fn + O(h^{m+1})

Here, tn has been used as the reference point. Since this polynomial in-

volves only points at earlier times such as {fn , fn−1 , fn−2 · · ·}, we can de-

velop a class of explicit schemes of high orders. These are called Adams-

Bashforth schemes. Consider a three-term expansion ( i.e., m = 2). Equa-

tion (6.8) becomes,

yn+1 − yn = ∫_{tn}^{tn+1} P2(tn + αh) dt + ∫_{tn}^{tn+1} O(h³) dt     (the last term is the local truncation error)
          = ∫_0^1 P2(tn + αh) h dα + ∫_0^1 O(h³) h dα
          = ∫_0^1 [fn + α ∇fn + (α(α + 1)/2!) ∇²fn] h dα + O(h⁴)
          = h [α fn + (α²/2) ∇fn + (1/2!)(α³/3 + α²/2) ∇²fn]_0^1 + O(h⁴)
          = h [fn + (1/2)(fn − fn−1) + (5/12)(fn − 2fn−1 + fn−2)] + O(h⁴).

which can be rearranged into the form,

yn+1 = yn + (h/12)[23 fn − 16 fn−1 + 5 fn−2] + O(h⁴),   n = 2, 3, 4, · · ·    (6.11)

where O(h⁴) is the local error.

The following points should be observed about the above equation:

- Values of f at the three points tn, tn−1 and tn−2 are used to predict yn+1.
- The scheme is not self-starting: for an initial value problem, we know only y0. Hence y1 and y2 must be generated using some other single-step method before switching to the multistep scheme.
- The scheme is explicit, since only known quantities appear on the right hand side.
- Being explicit, it has a stability bound on the step size and is hence not suitable for stiff differential equations.
- The local truncation error is of order O(h⁴).
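A Python sketch of the scheme (6.11); the start-up values y1 and y2 are generated here with modified Euler (predictor-corrector) steps, and all names are illustrative:

```python
import math

def ab3(f, y0, h, nsteps):
    """Integrate dy/dt = f(y) with the Adams-Bashforth scheme (6.11)."""
    ys = [y0]
    for _ in range(2):                       # start-up: two modified Euler steps
        yp = ys[-1] + h * f(ys[-1])
        ys.append(ys[-1] + 0.5 * h * (f(ys[-1]) + f(yp)))
    for n in range(2, nsteps):               # multistep phase
        ys.append(ys[n] + h / 12.0 *
                  (23 * f(ys[n]) - 16 * f(ys[n - 1]) + 5 * f(ys[n - 2])))
    return ys

ys = ab3(lambda y: -y, 1.0, 0.05, 20)        # dy/dt = -y up to t = 1
```

Note that each multistep update costs only one new function evaluation, since f at the two earlier points can be stored and reused.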

The next higher order scheme can be developed from a four-term expansion
( i.e., m = 3). This is called the 5-th order Adams-Bashforth scheme, viz.

yn+1 − yn = ∫_{tn}^{tn+1} P3(tn + αh) dt + ∫_{tn}^{tn+1} O(h⁴) dt     (the last term is the local truncation error)
          = ∫_0^1 P3(tn + αh) h dα + ∫_0^1 O(h⁴) h dα
          = ∫_0^1 [fn + α ∇fn + (α(α + 1)/2!) ∇²fn + (α(α + 1)(α + 2)/3!) ∇³fn] h dα + O(h⁵)
          = h [fn + (1/2) ∇fn + (5/12) ∇²fn + (3/8) ∇³fn] + O(h⁵)

which can be rearranged into the form,

yn+1 = yn + (h/24)[55 fn − 59 fn−1 + 37 fn−2 − 9 fn−3] + O(h⁵),   n = 3, 4, · · ·    (6.12)

where O(h⁵) is the local error.

In order to construct implicit schemes we need to construct backward

polynomial approximations with tn+1 as the reference point. viz.

f(y) ≈ Pm(tn+1 + αh) = [1 + α∇ + (α(α + 1)/2!) ∇² + · · · + (α(α + 1)(α + 2) · · · (α + m − 1)/m!) ∇^m] fn+1 + O(h^{m+1})


In this manner fn+1 is introduced on the right hand side. This class

of implicit schemes are called Adams-Moulton schemes. We are still

integrating one step from tn to tn+1 . Since t = tn+1 + αh, the limits
of integration in α become (−1, 0). A four-term expansion results in,

yn+1 − yn = ∫_{tn}^{tn+1} P3(tn+1 + αh) dt + ∫_{tn}^{tn+1} O(h⁴) dt     (the last term is the local truncation error)
          = ∫_{−1}^{0} P3(tn+1 + αh) h dα + ∫_{−1}^{0} O(h⁴) h dα
          = ∫_{−1}^{0} [fn+1 + α ∇fn+1 + (α(α + 1)/2!) ∇²fn+1 + (α(α + 1)(α + 2)/3!) ∇³fn+1] h dα + O(h⁵)
          = h [fn+1 − (1/2) ∇fn+1 − (1/12) ∇²fn+1 − (1/24) ∇³fn+1] + O(h⁵)

which can be rearranged into the form,

yn+1 = yn + (h/24)[9 fn+1 + 19 fn − 5 fn−1 + fn−2] + O(h⁵),   n = 2, 3, · · ·    (6.13)

where O(h⁵) is the local error.

The pair of explicit-implicit schemes given by (6.12,6.13) respectively can

be used as a predictor-corrector pair.
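One step of that pair can be sketched in Python (the bookkeeping list ys holds the four most recent values; names are illustrative):

```python
import math

def abm4_step(f, ys, h):
    """Adams-Bashforth (6.12) predictor / Adams-Moulton (6.13) corrector.
    ys = [y_{n-3}, y_{n-2}, y_{n-1}, y_n]."""
    fn3, fn2, fn1, fn = (f(v) for v in ys)
    yp = ys[-1] + h / 24.0 * (55 * fn - 59 * fn1 + 37 * fn2 - 9 * fn3)
    yc = ys[-1] + h / 24.0 * (9 * f(yp) + 19 * fn - 5 * fn1 + fn2)
    return yp, yc

f, h = (lambda y: -y), 0.1
ys = [math.exp(-i * h) for i in range(4)]    # exact starting values y0..y3
while len(ys) < 11:                          # advance to t = 1
    ys.append(abm4_step(f, ys[-4:], h)[1])
```

The difference |yp − yc| gives the convenient truncation error estimate used for step size control in the next section.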

Some of the start up and step size control issues are illustrated in figure

6.2 for the 5-th order Adams schemes developed in the last section. Note

that only y0 is given and hence (y1 , y2 , y3 ) must be generated using

some other single step methods with a step size of h before the 5-th

order Adams scheme given by equations (6.12,6.13) can be used. In doing

so, it is important to realize that any error introduced in these three steps

is likely to be propagated during the transient phase of the simulation.

Hence if lower order schemes are used to generate (y1 , y2 , y3 ), then

smaller step sizes must be used. The difference between the predicted

and corrected values could be used as a measure of the truncation error.

If this error is below an acceptable tolerance, then we can choose to

double the next step size. But this can begin only after y6 has been

computed, because we need four previous values at equal intervals of

(2h) - i.e., (y0 , y2 , y4 , y6 ).

[Figure 6.2: start-up and step size control for the 5-th order Adams scheme — y0 is given; y1, y2, y3 are generated using other methods; the Adams scheme begins thereafter; doubling the step size becomes possible after y6; halving requires generating the intermediate values y4.5 and y5.5.]

If at any time during the integration process,
the difference between the predicted and corrected values is above the
tolerance, then we must halve the step size and repeat the calculation

for that step. In so doing, we need to generate intermediate values at

intervals of (h/2). For example if the result for y7 does not meet the

tolerance, then we repeat the calculation from y6 with a step size of

h/2. We need to generate intermediate values at y4.5 and y5.5 . This can

be done using the Newton backward interpolation polynomials; but the

truncation errors in the interpolating polynomials should be of the same

order as that of the Adams scheme. Specifically, the interpolation rules are:

y_{n−1/2} = (1/128)[35 yn + 140 yn−1 − 70 yn−2 + 28 yn−3 − 5 yn−4]

y_{n−3/2} = (1/64)[−yn + 24 yn−1 + 54 yn−2 − 16 yn−3 + 3 yn−4]
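As a quick check (not in the text), both rules come from degree-four interpolation and therefore reproduce a cubic sampled at unit spacing exactly:

```python
def y_half(y):        # y = [y_{n-4}, y_{n-3}, y_{n-2}, y_{n-1}, y_n]
    """Interpolate at t_{n-1/2} using the first rule."""
    return (35*y[4] + 140*y[3] - 70*y[2] + 28*y[1] - 5*y[0]) / 128.0

def y_three_half(y):
    """Interpolate at t_{n-3/2} using the second rule."""
    return (-y[4] + 24*y[3] + 54*y[2] - 16*y[1] + 3*y[0]) / 64.0

samples = [t**3 for t in (0, 1, 2, 3, 4)]   # cubic sampled at t = 0..4
```

Here t_n = 4, so the two rules should return the cubic evaluated at t = 3.5 and t = 2.5 respectively.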

Example

MATLAB has several built-in functions for solving initial value problems.

The functions named ADAMS and GEAR use multistep methods. All of

the ODE solvers in MATLAB are part of the SIMULINK toolbox. Hence

the m-files that define the problem must have a special structure. In

this section we illustrate how to use these functions to solve the ozone

decomposition model. Recall that the equations are,

dy1/dt = f1(y1, y2) = −y1 − y1 y2 + κ ε y2

dy2/dt = f2(y1, y2) = (y1 − y1 y2 − κ ε y2)/ε

The initial compositions are y(t = 0) = [1.0, 0.0]. The parameters are
ε = 1/98 and κ = 3.0. The equations are clearly nonlinear. The m-file

6.4. RUNGE-KUTTA METHODS 145

function [ydot,y0]=ozone(t,y,u,flag)

k=3.0;epsilon=1/98;

if abs(flag) == 1

ydot(1) = -y(1) - y(2)*y(1) + k*epsilon*y(2);

ydot(2) = (y(1)-y(1)*y(2)-epsilon*k*y(2))/epsilon;

elseif flag == 0

ydot=[2,0,0,0,0,0]; %first element=number of equations

y0=[1 0]; %initial conditions

else

ydot=[];

end

must be written in such a way that it returns the derivatives y′(t) in the variable
ydot when flag==1, and returns the number of equations and
the initial condition when flag==0, as shown in figure 6.3.

To use the GEAR and ADAMS functions and integrate the ozone model

to a final time of 3.0 do the following during an interactive MATLAB ses-

sion.

»type ozone                  % test that ozone.m exists
»tf = 3.0;                   % final time
»[t,y]=gear('ozone',tf)      % Integrate using GEAR
»[t,y]=adams('ozone',tf)     % Integrate using ADAMS

The independent variable t and the solution y at the same time values

are returned in the corresponding variables. These results are shown

graphically in figure 6.4. Observe that y2 (t) increases rapidly from an

initial condition of zero and hence the system is very stiff during the

early times.
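The GEAR/ADAMS interface shown above belongs to an old MATLAB release. As a language-neutral alternative, the same stiff model can be integrated with a simple backward Euler scheme, solving the nonlinear step equation by Newton iteration; the following Python sketch is illustrative only, not a substitute for a production stiff solver such as LSODE.

```python
import numpy as np

k, eps = 3.0, 1.0 / 98.0                      # model parameters from the text

def f(y):
    """Ozone decomposition model right hand side."""
    return np.array([-y[0] - y[0]*y[1] + k*eps*y[1],
                     (y[0] - y[0]*y[1] - k*eps*y[1]) / eps])

def jac(y):
    """Analytical Jacobian of f."""
    return np.array([[-1.0 - y[1],        -y[0] + k*eps],
                     [(1.0 - y[1]) / eps, (-y[0] - k*eps) / eps]])

def backward_euler(y0, h, t_end):
    y, t = np.array(y0, dtype=float), 0.0
    while t < t_end - 1e-12:
        ynew = y.copy()
        for _ in range(20):                   # Newton iteration per step
            r = ynew - y - h * f(ynew)
            if np.linalg.norm(r) < 1e-12:
                break
            ynew = ynew - np.linalg.solve(np.eye(2) - h * jac(ynew), r)
        y, t = ynew, t + h
    return y

y_final = backward_euler([1.0, 0.0], h=0.01, t_end=3.0)
```

Because the scheme is implicit, the step size is limited by accuracy only, not by the fast time scale of order ε.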

[Figure 6.4: computed transient behavior of the ozone decomposition model — y1(t) and y2(t) for 0 ≤ t ≤ 3.]

6.4 Runge-Kutta methods

While multistep methods gain efficiency by making efficient use of previously generated results, Runge-Kutta methods achieve

the same goal in a single step, but at the expense of requiring many

function evaluations per step. Being single-step schemes, they are self-

starters. They are also classified as explicit, semi-implicit and implicit

schemes. Implicit schemes require solution of a set of non-linear alge-

braic equations at every time step, but they are suitable for stiff differ-

ential equations.

The most general explicit form of a v-stage Runge-Kutta scheme can be written as,

yn+1 = yn + Σ_{i=1}^{v} wi ki    (6.14)

ki = h f( tn + ci h, yn + Σ_{j=1}^{i−1} aij kj ),    c1 = 0,    i = 1, 2, · · · v

Constructing a specific scheme entails determining the best possible values for these
constants by matching the expansion of this formula with a Taylor series
expansion. Often these parameter values are given in tabular form as,

6.4. RUNGE-KUTTA METHODS 147

0
c2    a21
c3    a31   a32
c4    a41   a42   a43
      w1    w2    w3    w4

or, compactly,

c  |  A
   |  w

Let us consider v = 1 in equation (6.14) and write the equations explicitly

as,

yn+1 = yn + w1 k1 (6.15)

k1 = h f (tn , yn )

or

yn+1 = yn + w1 h f (tn , yn )

The procedure to determine w1 is to match the above equation with the
Taylor series expansion,

yn+1 = yn + h y′n + (h²/2) y′′n + (h³/3!) y′′′n + · · ·    (6.16)

The first term on the right hand side is the same in both equations.

Recognizing that y′ = f, the second term will also match if we make
w1 = 1, and this results in recovering the Euler scheme developed in

equation (6.7). We cannot match with any higher order terms and hence

the local truncation error is of order O(h2 ).

Let us consider v = 2 in equation (6.14) and write the equations explicitly

as,

yn+1 = yn + w1 k1 + w2 k2 (6.17)

k1 = h f (tn , yn )

k2 = h f (tn + c2 h, yn + a21 k1 )


This scheme has four unknown parameters {w1 , w2 , c2 , a21 } which must
be determined by matching the Taylor series expansion of equation (6.17)
with the equation (6.16). Expanding equation (6.17) we get,

yn+1 = yn + w1 h fn + w2 h [ f + (c2 h) ∂f/∂t + a21 (h f) ∂f/∂y ]n + · · ·    (6.18)

Substituting for y 0 and its higher derivatives in terms of f in equation

(6.16) and expanding, we get,

yn+1 = yn + h fn + (h²/2) (df/dt) + O(h³)

or

yn+1 = yn + h fn + (h²/2) [ ∂f/∂t + (∂f/∂y)(∂y/∂t) ] + O(h³)    (6.19)

Now comparing the fn terms between equations (6.18) and (6.19) we
require that

w1 + w2 = 1

for the two equations to match. Next comparing the ∂f/∂t terms, we require,

w2 c2 = 1/2

Finally, comparing the (∂f/∂y)(∂y/∂t) terms, we require,

w2 a21 = 1/2

Thus, we have matched all terms of order O(h²), leaving a truncation error
of order O(h³). In the process we have developed 3 constraint equations
on the four unknowns {w1 , w2 , c2 , a21 } appearing in the 2-stage
Runge-Kutta scheme (6.17). Any choice of values for {w1 , w2 , c2 , a21 }

that satisfies the above three constraints will result in a 3-rd order, 2-

stage Runge-Kutta scheme. Since there are four variables and only three

equations, we have one extra degree of freedom. Hence the solution is

not unique. Two sets of results are:

w1 = 2/3, w2 = 1/3, c2 = 3/2, a21 = 3/2

and

w1 = 1/2, w2 = 1/2, c2 = 1, a21 = 1


The latter is equivalent to the predictor-corrector pair using the Euler and

modified Euler schemes developed in equations (6.10). In summary this

scheme is a 2-stage RK method since it requires two function evaluations

per step. It is explicit and has a local truncation error of O(h3 ).

Using the first set of parameter values in equation (6.17), we have,

yn+1 = yn + (2/3) k1 + (1/3) k2

k1 = h f(tn , yn )

k2 = h f(tn + (3/2) h, yn + (3/2) k1 )

or in tabular form,

0
3/2    3/2
       2/3    1/3

Higher order schemes can be developed by carrying the matching process with the Taylor series expansion to higher order
terms. An explicit fourth-order form that matches with the Taylor series
to h⁴ terms (and hence has a truncation error of O(h⁵)) is,

0

1/2 1/2

1/2 0 1/2

1 0 0 1

1/6 2/6 2/6 1/6

yn+1 = yn + (1/6)[k1 + 2 k2 + 2 k3 + k4 ]    (6.20)

k1 = h f(tn , yn )
k2 = h f(tn + h/2, yn + k1 /2)
k3 = h f(tn + h/2, yn + k2 /2)
k4 = h f(tn + h, yn + k3 )
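A compact Python sketch of the scheme (6.20):

```python
import math

def rk4_step(f, tn, yn, h):
    """One classical fourth-order Runge-Kutta step, eq (6.20)."""
    k1 = h * f(tn, yn)
    k2 = h * f(tn + h / 2, yn + k1 / 2)
    k3 = h * f(tn + h / 2, yn + k2 / 2)
    k4 = h * f(tn + h, yn + k3)
    return yn + (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

# integrate dy/dt = -y from y(0) = 1 to t = 1 with h = 0.1
t, y, h = 0.0, 1.0, 0.1
for _ in range(10):
    y = rk4_step(lambda tt, v: -v, t, y, h)
    t += h
```

Four function evaluations are made per step, in exchange for a scheme that is self-starting.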


Embedded forms

Embedded Runge-Kutta schemes use a common set of function evaluations to produce two estimates of
the solution at yn+1 . Typically a lower order scheme is embedded within
a higher order scheme. The motivation for developing such schemes is
to have a convenient estimate of the local truncation error at every time
step, which can then be used to develop a step size control strategy. A
popular scheme, called RK45, is given below.

k1    0      |
k2    1/4    |  1/4
k3    3/8    |  3/32         9/32
k4    12/13  |  1932/2197    −7200/2197    7296/2197
k5    1      |  439/216      −8            3680/513      −845/4104
k6    1/2    |  −8/27        2             −3544/2565    1859/4104     −11/40

yn+1     :   25/216    0    1408/2565     2197/4104     −1/5
y(5)n+1  :   16/135    0    6656/12825    28561/56430   −9/50     2/55

An implementation is available in a MATLAB function called rk45.m and an m-file called ode45.m.

In the general form given in equation (6.14), we fixed c1 = 0 and A to be

lower triangular. These constraints on the parameters ensure that each

of the ki could be computed explicitly without the need for iterative

solution. Let us now relax these constraints and write the general form

as,

yn+1 = yn + Σ_{i=1}^{v} wi ki    (6.21)

ki = h f( tn + ci h, yn + Σ_{j=1}^{v} aij kj ),    i = 1, 2, · · · v

For a two-stage scheme (v = 2) this takes the form,

yn+1 = yn + w1 k1 + w2 k2    (6.22)
k1 = h f(tn + c1 h, yn + a11 k1 + a12 k2 )
k2 = h f(tn + c2 h, yn + a21 k1 + a22 k2 )

with the corresponding tableau,

c1    a11    a12
c2    a21    a22
      w1     w2

Since k1 and k2 now appear implicitly, we need
to solve two sets of nonlinear algebraic equations simultaneously. On
the positive side of such schemes, the fully implicit nature of the algorithm

results in numerically stable schemes making them suitable for stiff dif-

ferential equations. Also, a two-stage scheme (v = 2) has eight param-

eters, and hence we can match these equations with the Taylor series

expansion to higher order terms. Hence more accurate formulas can be

constructed. An example of a fully implicit, 2-stage, 4-th order accurate

scheme is the Gauss form given by,

(3 − √3)/6     1/4              (3 − 2√3)/12
(3 + √3)/6     (3 + 2√3)/12     1/4
               1/2              1/2

Further details on such schemes can be found in Lapidus and Seinfeld (1971).

While the fully-implicit forms of the last section §6.4.4 have desirable

stability properties, they are computationally demanding since a system

of non-linear algebraic equations must be solved iteratively at every time

step. In an effort to reduce the computational demand while retaining

the stability characteristics, Rosenbrock (1963) proposed a special form

of the algorithm. These are suitable for autonomous system of equations

of the form,

dy/dt = f(y),    y(t = t0) = y0

A 2-stage, 3-rd order scheme is shown below.

yn+1 = yn + w1 k1 + w2 k2    (6.23)
k1 = h [I − h a1 J(yn )]^{−1} f(yn )
k2 = h [I − h a2 J(yn + c1 k1 )]^{−1} f(yn + b1 k1 )


where

a1 = 1 + √6/6,    a2 = 1 − √6/6
w1 = −0.41315432,    w2 = 1.41315432
b1 = c1 = (−6 − √6 + √(58 + 20√6)) / (6 + 2√6)

Here J = ∂f/∂y is the Jacobian, which must be evaluated at every time

step. Note that the main advantage in using equation (6.23) is that k1 , k2

could be computed without the need for iteration, although it requires

two matrix inverse computations per step.
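A Python sketch of the scheme (6.23) using the constants quoted above; applying it to the linear test problem dy/dt = −y reproduces the exact solution closely, which is a useful check on the tabulated constants.

```python
import math
import numpy as np

a1 = 1.0 + math.sqrt(6.0) / 6.0
a2 = 1.0 - math.sqrt(6.0) / 6.0
w1, w2 = -0.41315432, 1.41315432
b1 = c1 = ((-6.0 - math.sqrt(6.0) + math.sqrt(58.0 + 20.0 * math.sqrt(6.0)))
           / (6.0 + 2.0 * math.sqrt(6.0)))

def rosenbrock_step(f, jac, y, h):
    """One 2-stage Rosenbrock step; each stage solves one linear system."""
    I = np.eye(len(y))
    k1 = h * np.linalg.solve(I - h * a1 * jac(y), f(y))
    k2 = h * np.linalg.solve(I - h * a2 * jac(y + c1 * k1), f(y + b1 * k1))
    return y + w1 * k1 + w2 * k2

# test problem dy/dt = -y, y(0) = 1, integrated to t = 1
y = np.array([1.0])
for _ in range(10):
    y = rosenbrock_step(lambda v: -v, lambda v: -np.eye(1), y, 0.1)
```

In practice the two matrix factorizations per step are still far cheaper than a full Newton iteration on a nonlinear system.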

He who can, does.

He who cannot, teaches.

Chapter 7

Boundary value problems

In this chapter we develop numerical methods for solving systems of (linear or nonlinear) ordinary differential equations of the boundary value
type. Such equations arise in describing distributed, steady state models in one spatial dimension. The differential equations are transformed

into systems of (linear and nonlinear) algebraic equations through a dis-

cretization process. In doing so, we use the tools and concepts devel-

oped in Chapter 5. In particular we will develop (i) finite difference meth-

ods using the difference approximations given in Table 5.4, (ii) shooting

methods based on methods for initial value problems seen in chapter 6

and (iii) the method of weighted residuals using notions of functional

approximation developed in Chapter 5.

We will conclude this chapter with an illustration of a powerful soft-

ware package called COLSYS that solves a system of multi-point bound-

ary value problems using the collocation method, cubic splines and adap-

tive mesh refinement. It is available from NETLIB.

7.1 Model equations and boundary conditions

Consider the model for heat transfer through a fin developed in section
§1.5.1. We will consider three specific variations on this model equation
(1.22) and the associated boundary conditions. First let us scale the
problem by introducing the following dimensionless temperature and
distance variables,

θ = (T − T∞)/(T0 − T∞),    ξ = x/L

Using these definitions, equation (1.22) can be rewritten as,

d/dξ [ kA dθ/dξ ] − hPL² θ = 0    (7.1)

θ(ξ = 0) = 1,    θ(ξ = 1) = 0

where the objective is to find the continuous function θ(ξ) over the

domain of interest, viz. ξ ∈ [0, 1] for a prescribed set of parameters

{k, A, h, P , L}. All of these parameters can be constants, or some of them

might be dependent on the position ξ ( e.g., A(ξ)) or on the unknown

temperature itself ( e.g., k(θ)). We examine each case next.

For constant area, A and thermal conductivity, k, equation (7.1) re-

sults in a second order linear differential equation with constant coeffi-

cients for which an analytical solution is possible. But we focus only on

developing methodologies for obtaining numerical solutions.

d²θ/dξ² − (hPL²/kA) θ = 0    (7.2)

θ(ξ = 0) = 1,    θ(ξ = 1) = 0

Since the values of θ at both boundaries are specified, the boundary conditions are called Dirichlet boundary
conditions.

In a variation of the above model, if we let the area be a variable

A(ξ) ( i.e., tapered fin), but keep the thermal conductivity, k, constant,

we obtain a variable coefficient, linear boundary value problem, still of

second order.

A(ξ) d²θ/dξ² + (dA/dξ)(dθ/dξ) − (hPL²/k) θ = 0    (7.3)

θ(ξ = 0) = 1,    dθ/dξ |_{ξ=1} = 0

Note that a derivative condition now serves as the second boundary condition at ξ = 1. Such boundary conditions, where the derivatives
are specified, are called Neumann boundary conditions. The temperature

value at ξ = 1 is an unknown and must be found as part of the solution

procedure.

In yet another variation, consider the case where the thermal conduc-

tivity is a function of temperature, viz. k(θ) = α + βθ 2 where α and β

are experimentally determined constants. Let the area, A be a constant.

The equation is nonlinear and must be solved numerically.

k(θ) d²θ/dξ² + (dk/dθ)(dθ/dξ)² − (hPL²/A) θ = 0    (7.4)

θ(ξ = 0) = 1,    [ k dθ/dξ + h θ ]_{ξ=1} = 0

At ξ = 1, the mixed or Robin boundary condition has been used. Once again, the

temperature value at x = L is an unknown and must be found as part of

the solution procedure.

All of these problems can be represented symbolically as,

D θ = f    on Ω
B θ = g    on ∂Ω

where D is the differential operator, B the boundary operator, Ω the domain
and ∂Ω represents its boundary. Our task is to obtain an approximate

solution, θ̃ to the above problem. The approximation consists in con-

structing a discrete version of the differential equations which results in

a system of algebraic equations. If the differential equations are linear (as

in equations (7.2,7.3), then the resulting discrete, algebraic equations will

also be linear of the type Aθ̃ = b and methods of Chapter 3 can be used

to obtain the final approximate solution. If the differential equations

are nonlinear (as in equation (7.4)) then the resulting discrete, algebraic

equations will also be nonlinear of the type F(θ̃) = 0 and methods of

Chapter 4 can be used to obtain the final approximate solution.

In the following sections we develop various schemes for construct-

ing approximate solutions.

7.2 Finite difference method


Figure 7.1: One dimensional finite difference grid of equally spaced data

points

Let us consider equation (7.2) which is a linear problem subject to Dirich-

let boundary conditions. In solving equation (7.2) by the finite difference

method, we divide the domain Ω = ξ ∈ [0, 1] into (n + 1) equally spaced

subdivisions as shown in figure 7.1. The distance between the two grid

points is denoted by ∆ξ. The grid spacing ∆ξ and the value of the inde-

pendent variable ξ at the nodal point i are given by,

∆ξ = (1 − 0)/(n + 1),    ξi = i ∆ξ,    i = 0, 1, · · · (n + 1)

Next, instead of attempting to find a solution θ(ξ) as a continuous func-

tion of ξ that satisfies the differential equation (7.2) exactly at every ξ, we

content ourselves with finding an approximate solution at the selected

nodal points shown in the figure - i.e., {θ(ξi ) = θ̃i |i = 1, 2, · · · n}, where

n is the number of interior grid points. Note that for the Dirichlet type

of boundary conditions θ0 and θn+1 are known. Hence, it remains to

determine only n unknowns at the interior points. We obtain n equa-

tions by evaluating the differential equation (7.2) at the interior nodal

points. In doing this we replace all the derivatives by the corresponding

finite difference approximation from Table 5.4. Clearly, we have several

choices; but it is important to match the truncation error in every term to

be of the same order. We illustrate this process using central difference

approximations.

Using the central difference approximation for the second derivative
in equation (7.2), we obtain,

[ (θ̃i−1 − 2θ̃i + θ̃i+1)/(∆ξ)² + O((∆ξ)²) ] − (hPL²/kA) θ̃i = 0,    i = 1, 2, · · · n

The term in the square brackets is the central difference approximation

for the second derivative and O((∆ξ)2 ) is included merely to remind us

of the order of the truncation error. We now have a system of n linear

algebraic equations. Let us write these out explicitly for n = 4.

θ̃0 − 2θ̃1 + θ̃2 − (hPL²/kA)(∆ξ)² θ̃1 = 0

θ̃1 − 2θ̃2 + θ̃3 − (hPL²/kA)(∆ξ)² θ̃2 = 0

θ̃2 − 2θ̃3 + θ̃4 − (hPL²/kA)(∆ξ)² θ̃3 = 0

θ̃3 − 2θ̃4 + θ̃5 − (hPL²/kA)(∆ξ)² θ̃4 = 0

θ̃0 in the first equation and θ̃5 in the last equation are known from the

Dirichlet boundary conditions. The above equations can be expressed in

matrix notation as, T θ̃ = b,

−(2 + α) 1 0 0 θ̃1 −θ̃0

1 −(2 + α) 1 0 θ̃2

= 0

0

0 1 −(2 + α) 1 θ̃

3

0 0 1 −(2 + α) θ̃4 −θ̃5

(7.5)

hP L2 2

where α = kA (∆ξ) . Note that the boundary values θ̃0 and θ̃5 appear

as forcing terms on the right hand side. Equation (7.5) is the discrete

version of equation (7.2). Once the structure is apparent, we can increase

n to reduce (∆ξ) and hence reduce the truncation error. In the limit of

∆ξ → 0 the solution to equations (7.5) will approach that of equation

(7.2). The matrix size will increase with decreasing ∆ξ and increasing n.

The matrix T is tridiagonal and hence the Thomas algorithm developed

in section §3.4.4 can be used to get the solution.
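A self-contained Python sketch of this solution path (the value of the dimensionless group β = hPL²/kA is arbitrary here), with the tridiagonal system (7.5) solved by the Thomas algorithm:

```python
import math
import numpy as np

def solve_fin(beta, n):
    """Solve theta'' - beta*theta = 0, theta(0) = 1, theta(1) = 0,
    on n interior points, using central differences and the
    Thomas algorithm for the resulting tridiagonal system."""
    dxi = 1.0 / (n + 1)
    alpha = beta * dxi**2
    a = np.ones(n)                      # sub-diagonal
    d = -(2.0 + alpha) * np.ones(n)     # main diagonal
    c = np.ones(n)                      # super-diagonal
    b = np.zeros(n)
    b[0] = -1.0                         # -theta_0, with theta_0 = 1
    for i in range(1, n):               # forward elimination
        m = a[i] / d[i - 1]
        d[i] -= m * c[i - 1]
        b[i] -= m * b[i - 1]
    theta = np.zeros(n)                 # back substitution
    theta[-1] = b[-1] / d[-1]
    for i in range(n - 2, -1, -1):
        theta[i] = (b[i] - c[i] * theta[i + 1]) / d[i]
    return theta

theta = solve_fin(beta=1.0, n=99)       # grid spacing 0.01
```

The computed profile can be compared with the exact solution sinh(√β(1 − ξ))/sinh(√β); the agreement improves as O((∆ξ)²) with grid refinement.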

Next we consider equation (7.3), which is also a linear problem, subject
to a Dirichlet condition at ξ = 0 and a Neumann boundary condition at
ξ = 1; the area A(ξ) is now a known
function of ξ. The discretization procedure remains the same as seen in

the previous section §7.2.1, with the exception that θ̃n+1 is now included

in the unknown set

θ̃ = {θ̃1 , θ̃2 , · · · , θ̃n+1 }.

But we have the Neuman boundary condition as an extra condition that

will provide the additional equation. Using the central difference ap-

proximations for both the second and first derivatives in equation (7.3),

we obtain,

A(ξi) [ (θ̃i−1 − 2θ̃i + θ̃i+1)/(∆ξ)² + O((∆ξ)²) ] + A′(ξi) [ (θ̃i+1 − θ̃i−1)/(2∆ξ) + O((∆ξ)²) ] − (hPL²/k) θ̃i = 0,    i = 1, 2, · · · n+1

Once again the truncation error term has been retained at this stage, only

to emphasize that it should preferably be of the same order for every

derivative that has been replaced by a difference approximation; other-

wise the effective error is of the same order as the term with the lowest

order truncation error. Multiplying throughout by (∆ξ)2 and collecting

like terms together, we get,

[ A(ξi) − A′(ξi)(∆ξ)/2 ] θ̃i−1 + [ −2A(ξi) − hPL²(∆ξ)²/k ] θ̃i + [ A(ξi) + A′(ξi)(∆ξ)/2 ] θ̃i+1 = 0,    i = 1, 2, · · · n+1

Letting

ai = A(ξi) − A′(ξi)(∆ξ)/2
di = −2A(ξi) − hPL²(∆ξ)²/k        i = 1, · · · n+1
ci = A(ξi) + A′(ξi)(∆ξ)/2

we can rewrite the equation as,

ai θ̃i−1 + di θ̃i + ci θ̃i+1 = 0,    i = 1, 2, · · · n+1

Observe that the coefficients {ai , di , ci } in the above equations are known.

However, unlike in equation (7.5), they vary with the grid point location

i. Also, for i = 1, θ̃0 on the left boundary is known through the Dirichlet
boundary condition. The equation for i = n + 1 requires special attention
since it contains the unknown θ̃n+2 which lies outside the domain

of interest. So far we have not used the Neuman boundary condition at

the right boundary ξn+1 = 1. Using the central difference approximation

for the first derivative to discretize the Neuman boundary condition we

get,

\[
\left.\frac{d\theta}{d\xi}\right|_{\xi_{n+1}=1}\approx
\frac{\tilde{\theta}_{n+2}-\tilde{\theta}_n}{2\Delta\xi}=0
\]

which implies θ̃n+2 = θ̃n . This can be used to eliminate θ̃n+2 from the

last equation, which becomes,

\[
(a_{n+1}+c_{n+1})\,\tilde{\theta}_n+d_{n+1}\,\tilde{\theta}_{n+1}=0.
\]

For n = 4, as an example, we get the following five equations.

\[
\underbrace{\begin{bmatrix}
d_1 & c_1 & 0 & 0 & 0\\
a_2 & d_2 & c_2 & 0 & 0\\
0 & a_3 & d_3 & c_3 & 0\\
0 & 0 & a_4 & d_4 & c_4\\
0 & 0 & 0 & a_5+c_5 & d_5
\end{bmatrix}}_{T}
\begin{bmatrix}\tilde{\theta}_1\\ \tilde{\theta}_2\\ \tilde{\theta}_3\\ \tilde{\theta}_4\\ \tilde{\theta}_5\end{bmatrix}=
\begin{bmatrix}-a_1\tilde{\theta}_0\\ 0\\ 0\\ 0\\ 0\end{bmatrix}\tag{7.6}
\]

Equation (7.6) is the discrete version of equation (7.3). Once the structure

is apparent, we can increase n to reduce (∆ξ) and hence reduce the

truncation error. In the limit of ∆ξ → 0 the solution to equations (7.6)

will approach that of equation (7.3). The matrix size will increase with

decreasing ∆ξ and increasing n. The matrix T is tridiagonal and hence

the Thomas algorithm developed in section §3.4.4 can be used to get the

solution.
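As an illustration, the sketch below assembles the coefficients {ai, di, ci} and solves the resulting tridiagonal system with the Thomas algorithm. The linear taper A(ξ) = 1 − ξ/2 and the value hPL²/k = 1 are assumptions made purely for this example; θ̃0 = 1 is the Dirichlet value and the Neumann condition is folded into the last row as in equation (7.6).

```python
import numpy as np

def solve_fin(n, A, dA, H):
    """Assemble the coefficients {a_i, d_i, c_i} of the discretized fin
    equation and solve the tridiagonal system (7.6) with the Thomas
    algorithm.  The n+1 unknowns are theta_1 .. theta_{n+1}; theta_0 = 1
    (Dirichlet) and d(theta)/d(xi) = 0 at xi = 1 (Neumann, with the
    ghost point eliminated).  H stands for the group h*P*L^2/k."""
    dxi = 1.0 / (n + 1)
    xi = dxi * np.arange(1, n + 2)           # grid points xi_1 .. xi_{n+1}
    a = A(xi) - dA(xi) * dxi / 2.0           # sub-diagonal coefficients a_i
    d = -2.0 * A(xi) - H * dxi**2            # diagonal coefficients d_i
    c = A(xi) + dA(xi) * dxi / 2.0           # super-diagonal coefficients c_i
    rhs = np.zeros(n + 1)
    rhs[0] = -a[0] * 1.0                     # Dirichlet value theta_0 = 1
    a[-1] = a[-1] + c[-1]                    # Neumann: theta_{n+2} = theta_n

    # Thomas algorithm: forward elimination ...
    for i in range(1, n + 1):
        m = a[i] / d[i - 1]
        d[i] -= m * c[i - 1]
        rhs[i] -= m * rhs[i - 1]
    # ... followed by back substitution
    theta = np.zeros(n + 1)
    theta[-1] = rhs[-1] / d[-1]
    for i in range(n - 1, -1, -1):
        theta[i] = (rhs[i] - c[i] * theta[i + 1]) / d[i]
    return xi, theta

# hypothetical linear taper A(xi) = 1 - xi/2 with hPL^2/k = 1
xi, theta = solve_fin(50, lambda x: 1.0 - 0.5 * x,
                      lambda x: -0.5 * np.ones_like(x), 1.0)
```

Refining the grid (increasing n) should drive this solution toward that of equation (7.3), as discussed above.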

Conceptually there is no difference in discretizing a linear or a non-linear

differential equation. The process of constructing a grid and replacing

the differential equations with the difference equations at each grid point

is the same. The main difference lies in the choice of solution technique

available for solving the nonlinear algebraic equations. Let us consider

the nonlinear model represented by equation (7.4). In this case we have

a Robin condition at ξ = 1 and hence θ̃n+1 is unknown. Thus the un-
knowns on the discrete grid consist of θ̃ = {θ̃1 , θ̃2 , · · · , θ̃n+1 }.

Discretizing equation (7.4) at a typical grid point i, we obtain the

following (n + 1) nonlinear algebraic equations.

" #

θ̃i−1 − 2θ̃i + θ̃i+1

fi (θ̃) := k(θ̃i ) +

(∆ξ)2

" #2

0 θ̃i+1 − θ̃i−1 hP L2

k (θ̃i ) − θ̃i = 0 i = 1, 2, · · · n + 1

2(∆ξ) A

In this set of equations i = 1 and i = (n+1) require special consideration

to incorporate the boundary conditions. Thus, making use of the left

boundary condition, θ̃0 = 1, f1 (θ̃) becomes,

" # " #2

1 − 2θ̃1 + θ̃2 0 θ̃2 − 1 hP L2

f1 (θ̃1 , θ̃2 ) := k(θ̃1 ) + k ( θ̃ 1 ) − θ̃1 = 0

(∆ξ)2 2(∆ξ) A

At the right boundary, we discretize the Robin boundary condition as,

" #

θ̃n+2 − θ̃n

k(θ̃n+1 ) + hθ̃n+1 = 0

2(∆ξ)

which can be rearranged as,

" ! #

2(∆ξ)h h i

θ̃n+2 = θ̃n − θ̃n+1 = θ̃n − βθ̃n+1

k(θ̃n+1 )

This can be used in the equation fn+1 (θ̃) to eliminate θ̃n+2 .

" #

θ̃n − 2θ̃n+1 + [θ̃n − βθ̃n+1 ]

fn+1 (θ̃n , θ̃n+1 ) := k(θ̃n+1 ) +

(∆ξ)2

" #2

0 [θ̃n − βθ̃n+1 ] − θ̃n hP L2

k (θ̃n+1 ) − θ̃n+1 = 0

2(∆ξ) A

The above equations f1 = 0, f2 = 0, · · · fn+1 = 0 can be represented

symbolically as a system of (n + 1) nonlinear equations of the form,

F(θ̃) = 0. These can be solved most effectively by the Newton method

for the unknowns θ̃ = {θ̃1 , θ̃2 , · · · , θ̃n+1 }.

\[
\tilde{\theta}^{p+1}=\tilde{\theta}^{p}-J^{-1}F(\tilde{\theta}^{p}),\qquad p=0,1,\cdots
\]

The Jacobian matrix, J = ∂F/∂θ̃, has the following tridiagonal structure,
shown here for four unknowns:

\[
J=\begin{bmatrix}
\dfrac{\partial f_1}{\partial\tilde{\theta}_1} & \dfrac{\partial f_1}{\partial\tilde{\theta}_2} & 0 & 0\\[2mm]
\dfrac{\partial f_2}{\partial\tilde{\theta}_1} & \dfrac{\partial f_2}{\partial\tilde{\theta}_2} & \dfrac{\partial f_2}{\partial\tilde{\theta}_3} & 0\\[2mm]
0 & \dfrac{\partial f_3}{\partial\tilde{\theta}_2} & \dfrac{\partial f_3}{\partial\tilde{\theta}_3} & \dfrac{\partial f_3}{\partial\tilde{\theta}_4}\\[2mm]
0 & 0 & \dfrac{\partial f_4}{\partial\tilde{\theta}_3} & \dfrac{\partial f_4}{\partial\tilde{\theta}_4}
\end{bmatrix}
\]
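The Newton iteration above can be sketched as follows. The conductivity law k(θ̃) = 1 + θ̃/2 and the parameter values hPL²/A = 1, h = 1 are assumptions for illustration only; for brevity the Jacobian is approximated column by column by finite differences and stored as a dense matrix, rather than assembled analytically in the tridiagonal form shown above.

```python
import numpy as np

def residual(theta, n, k, dk, H, h):
    """Residuals f_1 .. f_{n+1} of the discretized nonlinear fin equation.
    theta holds the unknowns theta_1 .. theta_{n+1}; theta_0 = 1 (Dirichlet),
    and the Robin condition at xi = 1 eliminates the ghost value through
    theta_{n+2} = theta_n - beta*theta_{n+1}, beta = 2*dxi*h/k(theta_{n+1}).
    H stands for the group h*P*L^2/A (assumed value here)."""
    dxi = 1.0 / (n + 1)
    beta = 2.0 * dxi * h / k(theta[-1])
    full = np.concatenate(([1.0], theta, [theta[-2] - beta * theta[-1]]))
    c = full[1:-1]
    return (k(c) * (full[:-2] - 2.0 * c + full[2:]) / dxi**2
            + dk(c) * ((full[2:] - full[:-2]) / (2.0 * dxi))**2
            - H * c)

def newton(n, k, dk, H, h, tol=1e-9, maxit=100):
    """Newton's method with a forward-difference Jacobian (dense for
    brevity; in practice one would exploit the tridiagonal structure)."""
    theta = np.full(n + 1, 0.5)              # initial guess
    eps = 1e-7
    for _ in range(maxit):
        F = residual(theta, n, k, dk, H, h)
        if np.max(np.abs(F)) < tol:
            break
        J = np.empty((n + 1, n + 1))
        for j in range(n + 1):
            tp = theta.copy()
            tp[j] += eps
            J[:, j] = (residual(tp, n, k, dk, H, h) - F) / eps
        theta = theta - np.linalg.solve(J, F)
    return theta

# assumed conductivity law k = 1 + theta/2 and unit parameter values
theta = newton(20, lambda t: 1.0 + 0.5 * t, lambda t: 0.5, H=1.0, h=1.0)
```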

7.3 Quasilinearization of nonlinear equations

Consider a system of nonlinear algebraic equations:

f (x) = 0 (7.7)

A solution is sought by linearizing the equations using

x = xa + δ (7.8)

Hence the original equation becomes,

\[
f(x_a+\delta)=f(x_a)+\left.\frac{\partial f}{\partial x}\right|_{x_a}\delta+O(\delta^2)\tag{7.9}
\]

Neglecting second and higher order terms, one gets the linearized equa-

tion,

\[
\left.\frac{\partial f}{\partial x}\right|_{x_a}\delta=-f(x_a)\tag{7.10}
\]
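For a single equation, the linearize-and-correct cycle of equations (7.8)-(7.10) reduces to the familiar scalar Newton iteration. A minimal sketch, using f(x) = x² − 2 as an illustrative example:

```python
def linearize_and_correct(f, fprime, x, tol=1e-12, maxit=50):
    """Repeatedly solve the linearized equation f'(x_a)*delta = -f(x_a)
    (equation 7.10) and apply the correction x_a <- x_a + delta."""
    for _ in range(maxit):
        delta = -f(x) / fprime(x)
        x += delta
        if abs(delta) < tol:
            break
    return x

# illustrative example: f(x) = x^2 - 2, whose positive root is sqrt(2)
root = linearize_and_correct(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
```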

Similar concepts can be applied to a (system of) differential equations

as well. Consider for example the equation given below

" #

d dθ

k(θ)ξ =0 (7.11)

dξ dξ

be written as,

" #2

d2 θ dθ dk dθ

f (θ) := k(θ)ξ + k(θ) +ξ =0 (7.12)

dξ 2 dξ dθ dξ

A solution is constructed by iteration from an initial guess θa (ξ). Let the actual solution be written

as

θ(ξ) = θa (ξ) + δ(ξ) (7.13)

where δ(ξ) is a small correction to be obtained by solving a linearized

form of equation (7.12). The linearized equation is derived by substitut-

ing equation (7.13) in equation (7.12), expanding the resulting expression

and neglecting quadratic and higher order terms in δ(ξ) in the expan-

sion.


" #2

d2 (θa + δ) d(θa + δ) dk

d(θ a + δ)

k(θa +δ)ξ +k(θa +δ) +ξ =0

dξ 2 dξ dθ θa +δ dξ

(7.14)

Note that each nonlinear term must now be expanded in Taylor series,

e.g.

\[
k(\theta_a+\delta)\approx k(\theta_a)+\left.\frac{dk}{d\theta}\right|_{\theta_a}\delta+\cdots
\]
\[
\left.\frac{dk}{d\theta}\right|_{\theta_a+\delta}\approx\left.\frac{dk}{d\theta}\right|_{\theta_a}+\left.\frac{d^2k}{d\theta^2}\right|_{\theta_a}\delta+\cdots\tag{7.15}
\]

" #

d 2 θa d2 δ dk

δ d 2θ

a dk

δ d 2δ

ξ k(θa ) + k(θa ) 2 + + + ···

dξ 2 dξ dθ θa dξ 2 dθ θa dξ 2

dθa dδ dk

δ dθa + dk δ dδ + · · ·

k(θa ) + k(θa ) +

dξ dξ dθ θa dξ dθ θa dξ

( ) ( ) ( )

dk

d 2k dθ

2

dδ

2

dθ dδ

+ a a

ξ δ + ··· + +2 = (7.16)

0

dθ θa dθ 2 θa dξ dξ dξ dξ

Now, terms of order δ2 and higher are neglected since δ is small. The

terms that are evaluated at the current guess θa are moved to the right

hand side. Hence we get,

\[
k(\theta_a)\,\xi\,\frac{d^2\delta}{d\xi^2}+\xi\left.\frac{dk}{d\theta}\right|_{\theta_a}\frac{d^2\theta_a}{d\xi^2}\,\delta+k(\theta_a)\frac{d\delta}{d\xi}+\left.\frac{dk}{d\theta}\right|_{\theta_a}\frac{d\theta_a}{d\xi}\,\delta+
\xi\left.\frac{d^2k}{d\theta^2}\right|_{\theta_a}\left(\frac{d\theta_a}{d\xi}\right)^2\delta+2\xi\left.\frac{dk}{d\theta}\right|_{\theta_a}\frac{d\theta_a}{d\xi}\frac{d\delta}{d\xi}
\]
\[
=-\left[k(\theta_a)\,\xi\,\frac{d^2\theta_a}{d\xi^2}+k(\theta_a)\frac{d\theta_a}{d\xi}+\xi\left.\frac{dk}{d\theta}\right|_{\theta_a}\left(\frac{d\theta_a}{d\xi}\right)^2\right]\tag{7.17}
\]

Note that the right hand side of equation (7.17) is the same as equation

(7.12) evaluated at θa and when it is zero the problem is solved! The

left hand side of equation (7.17) is linear in δ, subject to the homogeneous
boundary conditions δ(ξ0 ) = 0 and δ(1) = 0. Hence the solution (or

correction) δ(ξ) will be zero when the iteration is converged. Now one

can discretize equation (7.17) and solve for the correction iteratively.

Equation (7.17) can also be written in operator form as,

Dθ δ = −f (θa )

[Figure: the finite difference grid — the function θ(ξ) is sampled at grid points 1, 2, · · · , i, · · · , n spaced ∆ξ apart on ξ ∈ [0, 1], with boundary points i = 0 at ξ = 0 and i = n + 1 at ξ = 1.]

where the linear operator Dθ, called the Fréchet derivative, is given by

\[
D_\theta(\cdot)=k(\theta_a)\,\xi\,\frac{d^2(\cdot)}{d\xi^2}+
\left[k(\theta_a)+2\xi\left.\frac{dk}{d\theta}\right|_{\theta_a}\frac{d\theta_a}{d\xi}\right]\frac{d(\cdot)}{d\xi}+
\left[\xi\left.\frac{dk}{d\theta}\right|_{\theta_a}\frac{d^2\theta_a}{d\xi^2}+\left.\frac{dk}{d\theta}\right|_{\theta_a}\frac{d\theta_a}{d\xi}+\xi\left.\frac{d^2k}{d\theta^2}\right|_{\theta_a}\left(\frac{d\theta_a}{d\xi}\right)^2\right](\cdot)
\]
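A quasilinearization pass can be sketched as follows. The conductivity k(θ) = 1 + θ (so that d²k/dθ² = 0 and that term of equation (7.17) vanishes), the domain ξ ∈ [0.5, 1], and the boundary values θ(0.5) = 1, θ(1) = 0 are all assumptions for illustration. Each pass discretizes the left hand side of equation (7.17) with central differences, solves for the correction δ (with δ = 0 at both ends), and updates θa:

```python
import numpy as np

def quasilinearize(n, k, dk, xi0=0.5, maxit=30, tol=1e-10):
    """Solve d/dxi[k(theta)*xi*dtheta/dxi] = 0 on [xi0, 1] with
    theta(xi0) = 1, theta(1) = 0 by repeatedly solving the linearized
    equation (7.17) for the correction delta.  k is assumed linear in
    theta, so the d2k/dtheta2 term of (7.17) drops out."""
    x = np.linspace(xi0, 1.0, n + 2)
    dx = x[1] - x[0]
    theta = np.linspace(1.0, 0.0, n + 2)     # initial guess obeying the BCs
    for _ in range(maxit):
        t = theta
        tp = (t[2:] - t[:-2]) / (2.0 * dx)               # d(theta_a)/d(xi)
        tpp = (t[2:] - 2.0 * t[1:-1] + t[:-2]) / dx**2   # d2(theta_a)/d(xi)2
        xi = x[1:-1]
        ka, kpa = k(t[1:-1]), dk(t[1:-1])
        c2 = ka * xi                                  # coefficient of delta''
        c1 = ka + 2.0 * xi * kpa * tp                 # coefficient of delta'
        c0 = xi * kpa * tpp + kpa * tp                # coefficient of delta
        rhs = -(ka * xi * tpp + ka * tp + xi * kpa * tp**2)   # -f(theta_a)
        # assemble the tridiagonal system for delta (delta = 0 at both ends)
        J = np.zeros((n, n))
        lower = c2 / dx**2 - c1 / (2.0 * dx)
        diag = -2.0 * c2 / dx**2 + c0
        upper = c2 / dx**2 + c1 / (2.0 * dx)
        J[np.arange(n), np.arange(n)] = diag
        J[np.arange(1, n), np.arange(n - 1)] = lower[1:]
        J[np.arange(n - 1), np.arange(1, n)] = upper[:-1]
        delta = np.linalg.solve(J, rhs)
        theta[1:-1] += delta
        if np.max(np.abs(delta)) < tol:
            break
    return x, theta

# assumed k(theta) = 1 + theta on xi in [0.5, 1]
x, theta = quasilinearize(50, lambda t: 1.0 + t, lambda t: np.ones_like(t))
```

As noted above, the correction δ shrinks to zero as the iteration converges; for this k the problem also has a closed form (θ + θ²/2 = C ln ξ) against which the iterate can be checked.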

Where observation is concerned, chance favours

only the prepared mind.

— LOUIS PASTEUR

Appendix A

Networked computing

environment within the

chemical and materials

engineering department

A.1 Introduction

This appendix describes the networked computing envi-
ronment within the department of chemical and materials engineering at

the University of Alberta. You can choose to work with a variety of work-

stations such as the IBM RS/6000 which use the AIX operating system

(a version of Unix), SUN workstations using SUNOS operating system, or

the personal computers (Prospec 486) which use OS/2 and/or Win95 with

DOS emulation. Henceforth the computers will be identified merely by

the operating system used to run them viz. AIX, SunOS, OS/2 or WIN95

machines. Computers that are available for student use are distributed

as follows in the Chemical and Mineral Engineering (CME) building.

AIX machines Nine IBM RS 6000 model 220 in room CME 244, plus
one IBM RS 6000 model 550 as server. Additional
IBM RS 6000 machines are located in staff offices
and rooms CME 473 and 475. These machines belong
to individual researchers; but all staff and graduate
students can have access provided the priority based
on ownership is honored.

You are expected to be familiar with the basic concepts of operating sys-

tems, file systems, editors and structured programming and debugging

principles. I will focus on the concepts that are unique to the networked

environment and discuss briefly the software tools that are available for

various tasks. This document is intended merely as an introduction to

get you started and to identify the hardware and software resources that

are available on the network. Once you are aware of the art of the possi-

ble, you are on your own to make the proper choice of machine, software

tools and implement solution strategies for solving a variety of problems.
Advanced users should consult the original documentation on

each software. Online help is available for most of the software.

In a heterogeneous, networked environment, such as the one we have,

the range of services such as file space, printer, plotter capabilities, avail-

ability of specialized software etc. are distributed on a variety of ma-

chines, often matching the software needs with hardware capabilities.

For example, graphics packages and word processors on PC’s are ade-

quate for most document processing needs. Powerful editors are avail-

able on all of the machines; you may already have your own favorite

editor on one of these machines. Use it! Data processing, visualization,

and simulation of flow and chemical processes are numerically inten-

sive tasks and are best carried out on powerful workstations. MAPLE

(a symbolic computation package) and ASPEN (a process simulator) are

made available only on one IBM RS 6000 machine for licensing reasons.

Hence it is essential to develop the skills to navigate through the network,

transfer data between computers and select the machine best suited for

a task.

Workstations using AIX or SUNOS operating system support multi-

tasking and multiple users. Multi-tasking implies that the computer can

handle several tasks (or processes) at the same time using time sharing

principles. In addition, AIX (and all other flavors of UNIX) can handle
multiple users, each with a separate userid and account in-
formation. Thus, you need to get a userid and some disk space to store

your own files in your home directory.

On the other hand OS/2 supports multi-tasking, but not multiple

users. This implies that whenever you use an OS/2 machine, you can edit

a program in one window and compose a letter in another ( i.e., the multi-
tasking feature), but you are responsible for keeping your personal files
separately on a floppy disk. If you leave them on the hard drive, there is no

guarantee that they will be available to you the next day! But you do not

need a userid to use OS/2 machines.

All of the machines (AIX, SunOS and OS/2) provide a friendly Graph-

ical User Interface (GUI) to interact with the operating system. The basic

element of a GUI is a window and a desktop. AIX uses Motif window man-

ager (a standard that is getting wide acceptance) and OS/2 uses the Pre-

sentation Manager (or PM). The figure A.1 illustrates the basic anatomy

of a window and how to manipulate its size and location. A typical AIX

window on IBM RS 6000 and an OS/2 window on Prospec 486, provide

a shell through which you can enter commands to the operating system.

Well designed applications avoid command line based interaction with

the computer; instead you have to merely (double) click on the icon to

start an application. These icons, of course, have to be made available

to the user and the concept of a desktop is used in both OS/2 and AIX

to organize and present file systems and groups of applications to the

user. Since AIX is a multiuser environment, each user can customize the

desktop. This information is saved for each user and during subsequent

logins you are presented with your own customized desktop. Under

OS/2 there is only one standard desktop administered by the computer

support staff - do not mess with it as it will make it difficult for subse-

quent users to access programs in a standard manner.

While illustrating a dialogue with the computer, the following conventions
are adopted throughout this document.

bold font will indicate a command that you should enter

exactly as shown

italics font will indicate a parameter like a file name, directory

name etc. that should be substituted with the real name.

Unlike DOS, UNIX is case sensitive and most of the UNIX commands are

in lower case.

[Figure A.1: anatomy of a window — clicking the left mouse button at the top-left corner gives a pull-down menu; buttons at the top-right minimize the window (make it into an icon; click the icon to recover the window) or maximize it (click again to recover the original size). Press a mouse button on the title bar and drag to move the window; click in a window to make it the current window with the keyboard focus, i.e., anything you enter from the keyboard goes into that window; click the left mouse button on a corner or a side bar and drag to resize the window. When the cursor is on the X-windows background screen, clicking the left or right mouse button offers a choice of additional menus.]


A.2 Userid

All students get a userid on the General Purpose Unix servers (or GPU)

maintained by the Computing and Networking Services (CNS). If you also

purchase the Netsurf 97 CD-ROM from CNS (for about $10), you get a

powerful set of software for connecting your home computer to the IN-

TERNET through the University. For more information about computing

and network services see their Web site at http://www.ualberta.ca/CNS.

Once you get the userid on the machines gpu.srv.ualberta.ca you can en-

able access to the machines maintained by the department of chemical

and materials engineering by (a) signing on to gpu.srv.ualberta.ca and

(b) running the following script:

[userid@gpu]> /afs/ualberta.ca/labs/cmel244/bin/register-244

You need to do this only once at the beginning of the year. If you

encounter any problems see one of the DACS center staff (Mr. Bob Barton

or Mr. Jack Gibeau).

Figure A.2 provides a conceptual framework for introducing the network
structure. A variety of workstations and personal computers are

connected by ethernet. This local area network (LAN) is a subnet of the

larger university wide ethernet network as well as the world wide INTER-

NET network. The underlying communication protocol is called TCP/IP

(Transmission Control Protocol/Internet Protocol) and has been widely

accepted as the standard.

In addition to providing some basic connectivity between machines,

a network enables sharing of hardware and software resources as well

as sharing of information and communication on a world wide basis.

We will focus on the departmental subnet to illustrate various concepts

which can then be easily extended to national and international level

networks. The figure A.2 illustrates the logical dependencies for the

purpose of sharing resources between various machines and not the

physical connections. As a user, we need not concern ourselves with

the network hardware connections. It is sufficient to realize that any

machine on the network can address any other machine, much like tele-

phone connections. This immediately requires that each machine on

the INTERNET have a unique IP number and a host name. For exam-

A.3. OVERVIEW OF THE NETWORK A.6

U of A servers provide

HOME NEWS, ftp, Gopher, e-mail

services

DIAL>telnet machine.eche.ualberta.ca IBM RS/6000 550

AIX 3.2.3

NIS server, NFS server,

do p,

in ft

te

s

X-server/clients

w

w t,

x- lne

x- lne

te

in ft

FORTRAN, emacs, gnuplot do p,

w

Room: CME 244 s

cmel31.ucs.ualberta.ca

to Room: CME 244

cmel39.ucs.ualberta.ca cmel11.ucs.ualberta.ca

IBM RS/6000 220 to

AIX 3.2.3 cmel28.ucs.ualberta.ca

NIS client to ugrads,

X-server/clients Prospec 486

OS/2

MATLAB, FORTRAN, emacs, X-clients

gnuplot, xmgr, khoros, tex

DOS/Windows emulation,

Lotus 123, Wordperfect

ravana.eche.ualberta.ca

do p,

in ft

s

w

w t,

x- lne

AIX 3.2.3

te

do p,

in ft

s

w

X-server/clients

w t,

x- lne

te

FLOW3D

Room: CME 475

six Any

IBM RS/6000 machine on

AIX 3.2.3 INTERNET.

NIS client to ravana, Staff offices,

X-server/clients rooms CME 473, 475

FORTRAN, C++, emacs, various labs in CME

gnuplot, xmgr, khoros, tex, building

FLOW3D

work

A.3. OVERVIEW OF THE NETWORK A.7

system

for undergraduate teaching and course related work

ugrads.labs.ualberta.ca 129.128.44.50 IBM RS 6000 - 550 AIX 4.1

ugrads1.labs.ualberta.ca 129.128.44.51 Sun Sparc Ultra2 SunOS 5.5.1

ugrads2.labs.ualberta.ca 129.128.44.52 Sun Sparc Ultra2 SunOS 5.5.1

cmel31 .ucs.ualberta.ca 129.128.44.31 IBM RS 6000 - 220 AIX 4.1

··· ··· ··· ···

cmel39 .ucs.ualberta.ca 129.128.44.39 IBM RS 6000 - 220 AIX 4.1

cmel11 .ucs.ualberta.ca 129.128.44.11 Prospec 486 OS 2

··· ··· ··· ···

cmel28 .ucs.ualberta.ca 129.128.44.29 Prospec 486 OS 2

for graduate research and faculty use

ravana.eche.ualberta.ca 129.128.56.102 IBM RS 6000 - 320 AIX 4.1

prancer.eche.ualberta.ca 129.128.56.23 IBM RS 6000 - 320 AIX 4.1

comet.eche.ualberta.ca 129.128.56.19 IBM RS 6000 - 375 AIX 4.1

dancer.eche.ualberta.ca 129.128.56.11 IBM RS 6000 - 375 AIX 4.1

cupid.eche.ualberta.ca 129.128.56.82 IBM RS 6000 - 355 AIX 4.1

dina.eche.ualberta.ca 129.128.56.70 IBM RS 6000 - 320 AIX 4.1

jhm2.eche.ualberta.ca 129.128.56.74 IBM RS 6000 - 375 AIX 4.1

chopin.eche.ualberta.ca 129.128.56.11 IBM RS 6000 - 320 AIX 4.1

poincare.eche.ualberta.ca 129.128.56.15 IBM RS 6000 - ??? AIX 4.1

handel.eche.ualberta.ca 129.128.56.54 IBM RS 6000 - 3AT AIX 4.1

brahms.eche.ualberta.ca 129.128.56.13 IBM RS 6000 - 320 AIX 4.1

network

ple the server within the department of chemical and materials engi-

neering for undergraduate student use has the IP number 129.128.44.50

and the host name ugrads.labs.ualberta.ca. The part of the IP number

"129.128.56." represents the department of chemical and materials engi-

neering subnet and we can connect up to 254 computers to this subnet.

Similarly the last part of the host name, viz. eche.ualberta.ca, also called

domain name represents the chemical and materials engineering do-

main. Different machines on this domain will have different names like,

prancer.eche.ualberta.ca, ravana.eche.ualberta.ca. Table A.1 lists com-

puters that are generally available for chemical and materials engineer-

ing students and staff. To determine your eligibility to have access to

any of these resources see one of the DACS center staff. Typically under-

graduate students can expect to have access to ugrads.labs.ualberta.ca

and the associated machines, while graduate students and staff will have
access to the machines on the eche.ualberta.ca domain. One can
signon to these machines through a variety of means including remote
connections from home. These procedures will be outlined later in this
document.

A.4 The client-server model

The client-server model serves a very useful role in both providing a variety of services on

the network and in sharing the limited resources with maximum flexibil-

ity. From a user's point of view, understanding the client-server concept

helps in (i) navigating through the network and using the resources ef-

ficiently and (ii) diagnosing likely sources of problems when things do

not work as anticipated.

In simple terms a server provides a service to a client authorized to

request such a service. Typically, both the server and the client are pro-

grams that communicate over the network. The server is always running,

listening for requests over the network. Such programs that are running

all the time are also called daemons. When the client program initiates a

request, the server program checks for authentication and provides the

service. Some of the common services are explained below.

Let us take an example of Network File System, often called NFS. It is a set

of protocols developed by Sun Microsystems and it has been accepted

widely as a standard. It uses TCP/IP for communication, but provides a

higher level function in terms of sharing files between various machines

in a heterogeneous environment.

In figure A.3, the computer chopin.eche.ualberta.ca has a local file

system called /usr/local which contains a variety of very useful programs

like emacs, gnuplot, TeX, xmgr, pine etc. This file system is quite large

with about 450 Megabytes of programs and data that can be used by

other similar machines. So this file system is "exported" to group of

machines within the network. Thus chopin.eche.ualberta.ca acts as the

NFS server to this group of machines. The client machines then "mount"

this file system and make it appear as a local file system. In this way

the user sees the same file structure on each of the machines and the

administrator has to install and maintain only one copy of the software.

[Figure A.3: NFS in action — the file system /usr/local resides physically on chopin.eche.ualberta.ca but has been exported to a group of machines; NFS clients such as ravana, prancer and comet mount chopin:/usr/local as /usr/local, so that logically it appears as a local file system, although any directories and files under /usr/local are actually fetched from chopin.]

df        - Reports information about space on file systems
showmount - Displays a list of clients that have remotely mounted file systems
exportfs  - Exports and unexports directories to NFS clients
mount     - Makes a file system available for use
du        - Summarizes disk usage

In a similar fashion, the authorization to signon to a group of ma-
chines ( i.e., user ids and passwords) is also maintained in a central loca-
tion and served by the NIS (Network Information Service) server machine.

Figure A.4 shows such a setup.

The University of Alberta supports sharing of the same file space from a

variety of machines on campus. There is a single home directory for

each userid from any UNIX machine on campus. CNS provides lim-

ited amount of disk space for each user (about 10Meg). In addition

the department of chemical and materials engineering provides addi-

tional space to undergraduate students under a directory called labdisk
and for graduate students under a directory called chemeng. The

relevant directories will be found as a subdirectory under the home

directory of each user. To learn more about AFS go to the web site

http://www.ualberta.ca/HELP/afs/afs1.html.

[Figure A.4: NIS in action — ravana.eche.ualberta.ca acts as the NIS server, serving information on user ids and passwords to NIS clients such as jhm2, dina and comet; when you attempt to signon to a client, it verifies with the NIS server that your user id is a valid one. The file system /home, which contains the home directories of users, has been exported to the other machines, so your home directory is also made available on each client.]

passwd - Changes a user's password [it may take a while to propagate to other clients]
rlogin - Connects the local host with a remote host
rsh    - Executes the specified command at the remote host

The client-server model is also used to manage software licenses. Due
to financial constraints, unrestricted licenses are rarely purchased; typically
a license covers only a limited number of users or machines. We also
rely heavily on a large collection of freely available software.
Two of the common licensing arrangements are floating network

license and node locked license. The former allows a fixed number of

concurrent users of a software on any machine in the network, while the

latter allows any number of users on a single machine.

Typically a license server (which is a small program) runs on one of

the machines in the network as a daemon. Let us use FLOW3D and ASPEN

PLUS as examples. FLOW3D is a commercial software package for use

in fluid dynamics simulations and ASPEN PLUS is a very powerful steady

state process flow sheet simulator. For managing FLOW3D, the license

server (two programs named lmgrd and CFDSd) runs all the time on
bluejay.ucs.ualberta.ca. Any time FLOW3D is started from any of the clients

within the network, the client contacts the server to check whether it

has the permission to start a new copy of the program. The server can

either grant the permission or deny it because that particular client is

not allowed to run that software or all of the available licenses are in use

at that time. For FLOW3D we have a license for 100 concurrent users.

For ASPEN PLUS, the department of chemical and materials engineer-

ing has a multiuser, node locked license. This implies that any number

of users can use the program at the same time, but the software can

run on only one machine. In our case ASPEN PLUS runs on ugrads1 or

ugrads2.

For MATLAB the university has a site license for up to 150 concurrent

users. The most recent version of MATLAB 5.0 is available on ugrads1

or ugrads2.

Table A.2 lists all of the software available on the undergraduate and

graduate research network and the licensing status of each software.

A.4.3 X servers

X-windows is a sophisticated communication/window management soft-

ware developed at MIT. It uses TCP/IP for communication. It allows you

to interact with the local workstation through windows, menus, dialog

boxes, icons etc., in an intuitive way and minimizes the need for interac-

tion through command lines. One can view this as the next logical step

in the historical progression of the way humans have interacted with

computers. Originally it was accomplished through batch processing,

followed by interactive computing using commands which is being re-

placed currently by more intuitive interaction through a Graphical User

Interface (GUI) using menus and icons. One can expect this to be fol-

lowed by voice level interaction, leading ultimately to interaction with

Table A.2: Software available on the departmental network (an X in the third column marks X-windows based programs).

ASPEN PLUS   ugrads1             X   process simulator; floating, multiuser license
MAPLE        num.srv             X   symbolic computation package; floating, multiuser license
MATLAB       ugrads1, prancer,   X   powerful linear algebra package with numerous
             cmel31 - cmel39,        tool boxes; floating, multiuser license
             comet
FORTRAN      all AIX machines        IBM HESC license agreement
xde          all AIX machines    X   powerful debugger; IBM HESC license agreement
emacs        all AIX machines        a powerful editor; GNU public license
gnuplot      all AIX machines        a simple 2-D graphics package; GNU public license
tgif         all AIX machines    X   a powerful drawing package; free to use license
pine         all AIX machines        a standard e-mail program; free to use license
tin          all AIX machines        a standard news reader; free to use license
TeX          all AIX machines        a powerful typesetter; free to use license
123          OS/2                    Windows 3.1 spread sheet
Freelance    OS/2                    Windows 3.1 graphics
Wordperfect  OS/2                    Windows 3.1 word processor
HYSIM        OS/2                    DOS process simulator

While Microsoft (Windows, Windows NT), Apple (Macintosh), NeXT

(NeXTSTEP), IBM (OS2-PM) and others have their own Graphical User In-

terface to interact with their operating system and application programs,

X-Windows goes one step further, in making the GUI software indepen-

dent of the underlying hardware and operating system. How this is ac-

complished is of interest only to programmers. From a user's point of
view we should understand the concepts and be able to reap the benefits
of these features. In particular, X-windows makes the following possible:

• You can signon to a remote machine and run programs there; the
local and remote machines can be either of the same make or completely
different, e.g., a 486 running under OS/2 can be the local machine
and an IBM RS 6000 can be the remote machine.

• You can run programs such as ASPEN PLUS, FLOW3D etc. on a
remote machine, but have the X-windows display routed to your local
machine. In other words you can interact with a remote computer,
located as far away as in another building or another continent, with
full window capabilities! (One needs a high speed link to interact with
a computer in another continent, of course.)

The commands required to establish this interaction between a local
and a remote computer using X-windows are illustrated in figure A.5.
Note that under AIX the server program is called xinit and the client
program is called aixterm. Under SunOS, the client program is called xterm.

A worldwide exchange of ideas takes place through organized and moderated news groups.

It is more like a conference on thousands of topics. You can choose to

participate in topics of your choice. The topics are as wide ranging as

the ones on classical music (rec.music.classical), MATLAB (comp.soft-

sys.matlab), the AIX operating system (comp.unix.aix), dynamical sys-

tems theory (comp.theory.dynamic-sys) etc. The server is maintained

and administered by Computing and Network Services. The client pro-

gram is called a news reader and several versions of news readers exist.

[Figure A.5: establishing an X-windows session — the remote machine runs the X client (aixterm); it need not even have a monitor attached to it, and merely runs the applications and sends the display to the local machine. The local machine runs the X server (xinit) and must have a monitor, keyboard and mouse. The two machines communicate over the ethernet.]

Command sequence                What they do
1. login                        signon to the local machine with user id and password
2. xinit                        start the X-server
3. xhost remote                 permit remote machine to send display to local
4. rlogin remote                signon to the remote machine from local
5. aixterm -display local:0 &   start X-client on remote and redirect the display to local

NOTE: Substitute the actual name of the machine for local and remote. Use fully
qualified domain names if the machines are on different subnets.

xinit   - starts the X-server on the local machine
xhost   - permits a remote host to send its display to the local machine
aixterm - starts an X-terminal (or X-client) and sends the display to the
          machine local

If the following files are found in your home directory, they are used in
controlling rlogin between machines and in customizing your X-window
environment:

.rhosts    - controls rlogin between machines
.xinitrc   - customizes the X-server
.Xdefaults - customizes the window appearance and functionality
.mwmrc     - customizes the Motif window manager

On the AIX machines, the news reader can be invoked with the command
(the string user@machine:dir below is a prompt from the computer):

user@machine:dir> tin

This program maintains a local database of the news groups that you

have subscribed. It fetches the latest articles from the server and allows

you to read and, optionally respond to articles. If you post a response,

it goes to the whole internet community. This service is a privilege and

should be used responsibly. Any abuse of the system will result in denial

of access to the entire network services.

While the News service allows two-way communication and thus active

participation, Gopher is a worldwide one-way information retrieval ser-

vice. A number of Universities and organizations participate in provid-

ing this service. The University of Alberta is one of the participants. The

Gopher server is maintained by Computing and Network Services on one

of their workstations. Any client on the campus ethernet (or even from

a home computer with a modem) can connect to this server and browse

through a wealth of information on such topics as

1. What is CWIS?/

2. What’s New on CWIS?/

3. Student Information and Services/

4. Libraries on the Network/

5. University Faculties and Departments/

6. Administrative Policies and Procedures (MAPPS)/

7. Campus Phone Directory <CSO>

8. Computing Resources/

9. International CWIS Systems/

10. Magazines and Publications/

11. University Services Directory/

12. Weather Report.

Selecting the International CWIS Systems entry gives access to other Gopher
services around the world. There are also nodes that link you to the Library

of Congress and other university libraries. To connect to Gopher from

any computer on campus with telnet feature enter,


0024, CWIS will be provided as a menu option and selecting it will con-

nect you to the Gopher server. When you are logged on to ugrads.ucs.ualberta.ca,

you can also access gopher by entering

user@machine:dir> gopher

When you are logged on to gpu.srv.ualberta.ca, you can also access go-

pher or world wide web by entering

user@machine:dir> lynx

The World Wide Web is a high-end information sharing tool (like Gopher)
using windows and hypertext links established between computers on the

internet on a global level. It also operates on a client-server model.

Netscape is available on all of the AIX machines. Anyone with some

information to share can operate a server on one of their workstations

and there are thousands of such servers on the INTERNET. To access

information on WWW, start the client on your local workstation by en-

tering the following after you have started X-windows.

user@machine:dir> netscape

Your client should automatically know where the nearest server is lo-

cated. Typically there will be a home page from where you can begin your

exploration. Have fun! Watch the time you spend on this one!! It is a

time sink!!!

A.5 Communication

Several programs are available for communicating from one computer to another. The following are available for use on the Chemical And Materials Engineering network.

kermit Use this to connect a home computer to the Chemical And Materials Engineering network via a modem. It allows


for VT100 emulation ( i.e., full screen use of editors like emacs,

news reader tin, e-mail program pine and gopher) and tek-

tronix emulation ( i.e., plotting programs like gnuplot can dis-

play graphs on your home computer in color!) and file trans-

fer between home computer and the university computer. You

can get a copy of MS-KERMIT Ver 3.10 for your PC from DACS

center staff. It can be distributed freely without any licensing

problems. Additional details can be found in Gianone (1991).

telnet Use this to connect OS/2 machines to UNIX machines over ethernet. It supports VT100 emulation, but has no graphic or file transfer support. Only command line interaction with the remote machine is possible.

ftp Preferred for file transfer over ethernet. Both AIX and OS/2

provide support for ftp.

tn3270 Use this to connect PCs (in room CME 473, staff offices etc.) to workstations. It provides VT100 and tektronix emulation as well as ftp support.

X-windows Use this to connect OS/2 machines to AIX machines. It supports full GUI features with the remote machine.

If you are using a 486 PC under OS/2 in room CME 244, the following

steps will establish an X-windows connection to a remote IBM RS 6000

running AIX.

• Make sure that the X-server is running under OS/2. This server is

called presentation manager X or PMX server and, under normal

operating conditions, it should be active and running. To verify

that it is running, look for the PMX icon on the screen. If you do

not find one, the key sequence ctrl Esc will give a list of windows.

Check if PMX is listed. If you do not find it, you can start an X-server

as follows:

To start the PMX server, enter

[C:\] xinit

If the PMX server is already running, you will be informed of that fact.


Remember that you can choose any one of the 9 AIX machines in

the network. You will see the same home directory and other soft-

ware resources from each of the nine machines. Hence choose a

random number between 31 and 39, so that no one machine is overloaded all the time.

login: userid

Password: password

• Start the aixterm client on the remote machine, redirecting the display to the local machine by entering

user@machine:dir> open-Xpc nn

where nn is the number of your local OS/2 workstation. This effectively tells the AIX machine (in this example cme134) to open an X-window and display the window on the local workstation (i.e. cme1nn). Alternatively, you can enter the underlying command yourself; open-Xpc is nothing but a script that executes that command for you! The computer prompt user@machine:dir> identifies your userid, the machine name and the present directory.

• Exit from the AIX machine and from the OS/2 Full Screen Window, i.e. exit twice!

user@machine:dir> exit

[C:\] exit

• After some delay, an AIX window will open on your local 486 sta-

tion! Now you are connected to the AIX machine and typically you

will be in a shell called “ksh”. You can start any application like

MATLAB, xmgr etc. by simply entering the name of the program

e.g.,


user@machine:dir> matlab

It might open up other windows, dialogue boxes etc. and also pro-

vide online help windows. To start ASPEN PLUS enter

user@machine:dir> mmg

• Before you leave the OS/2 station, either logout from the remote

machine with the logout command as follows

ctrl-D key to logout

or close the aixterm window.

Caution: If you leave the OS/2 station without completing this last

step, the next person using that OS/2 station will have access to

your account on the AIX machine!

Use of Kermit is recommended. You also need a modem. Kermit has an

initialization file which can be setup in such a way that it automatically

dials the telephone number. The available phone numbers to connect

to the University of Alberta network server are : 492-4811 or 492-0024

(2400 Baud) , 492-0096 (9600 Baud) or 492-3214 (high speed modem).

Sample initialization files can be obtained from DACS center staff. You

start the kermit program by entering

[C:\] kermit -f mskermit.ini

If no file by name mskermit.ini exists in your directory and

you do not specify explicitly the name of an initialization file as shown

above with the -f flag, then the Kermit program will start on the PC, but

you will be left with the prompt

MS-KERMIT>

At this stage you can ask for additional help on kermit by entering the

kermit command help or ?. But the connection to the University network

must be done manually with the following commands


MSKERMIT>set port com2 ;this sets the port

MSKERMIT>OUTPUT ATDT4924811 ;this dials the number

MSKERMIT>connect ;this connects you to the Univ. network

These commands can also be placed in the initialization file. Note that anything following a semi-colon is treated as a comment by MS-KERMIT. Whichever procedure you use, you

should get the following prompt if everything has gone well up to this

stage.

DIAL>

telnet, 129.128.44.50

or

telnet, ugrads.eche.ualberta.ca

to get connected to the AIX machine or any other valid internet number

(or name) for which you have a valid userid.
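For reference, an initialization file built from the three MS-KERMIT commands above might look as follows. This is only a sketch: the set speed line is an added assumption, and the port, speed and phone number must match your own setup (actual sample files are available from the DACS center staff).

```
; mskermit.ini -- a sample MS-KERMIT initialization file (sketch only)
set port com2          ; the serial port your modem is on
set speed 2400         ; line speed; must match the number you dial
output ATDT4924811     ; dial the university 2400-baud number
connect                ; connect to the University network
```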

tn3270

All the PCs in room CME 473 and those in staff offices that have ethernet

cards are connected to INTERNET. The communication program tn3270

is also available on each of these machines. You can start the connection by entering the tn3270 command followed by the actual machine name. You should be connected directly to the remote AIX machine. This communication program is quite powerful. It provides full VT100 emulation

and hence allows full screen use of editors (emacs, vi), news reader (tin)

etc. It also supports tektronix emulation and has facilities for capturing

graphics screen images as Postscript files on the local PC. Use the Alt-H

key sequence to get brief online help.


A client-server concept is used once again to transfer files between the

remote IBM RS 6000 and the local home computer. The MS-KERMIT on

your PC is the client. You must start a server on the IBM RS 6000, called

the C-KERMIT. Once you are logged in to the IBM RS 6000, enter

user@machine:dir> kermit

C-Kermit>

Enter,

C-Kermit> server

to put C-KERMIT in server mode (this command is for C-Kermit). Then escape back to the local MS-KERMIT (typically Ctrl-] C is the escape sequence). The prompt should be,

MS-Kermit>

Any command you enter now is acted upon by the local PC. To fetch a file from the AIX machine (in your home directory, of course) to the local PC, enter,

MS-Kermit> get fname

To send a file from your local PC to the AIX machine (in your home directory, of course) enter,

MS-Kermit> send fname

Then enter,

MS-Kermit> fin

to signal the server that the file transfer session is finished (fin is an MS-Kermit command; it tells C-Kermit to terminate its server mode). Then enter,

MS-Kermit> C

to re-connect to the AIX machine; the C command re-establishes the direct connection. Note that the C-Kermit is still running on the AIX; it has only terminated its server mode. Finally enter


Table A.3: FTP subcommands

help                         ask for help on FTP subcommands
get remotefname localfname   get a file from the remote machine
mget remote-pattern          get multiple files matching the pattern
put localfname remotefname   put a file on the remote machine
mput local-pattern           send multiple files matching the pattern
bin                          enable file transfer in binary mode
ascii                        enable file transfer in ascii mode (this is the default)
ls                           list files in the current directory on the remote machine
dir                          also list files in the current directory on the remote machine
cd                           change the directory on the remote machine
lcd                          change the directory on the local machine
pwd                          display present working directory on remote machine
quit                         terminate the FTP session

C-Kermit> quit

to terminate the C-KERMIT program on the AIX machine and to return to the shell level (this last command is acted on by C-Kermit).

The file transfer program called ftp allows transfer of files between ma-

chines on ethernet. This is also implemented on the client-server model.

From a local machine connect to the remote machine as follows:

user@machine:dir> ftp machinename

On the remote machine the ftp daemon called ftpd acts as the server.

You will be prompted for userid and password. Once the connection is

established you can use the commands shown in Table A.3 to transfer

files.

The FTP procedure summarized above is essentially the same on

most of the machines. Be bold and try them out and observe how fast

the file transfer is compared to KERMIT through a serial line.


The ftp feature is also available under OS/2. If you want to make a copy of your personal files on the AIX machine to a floppy disk, do the following.

• Make sure that you have a writable, formatted floppy disk in drive A: of an OS/2 machine.

• Enter the ftp command with the actual machine name.

• Signon as usual.

• Change the local directory to the floppy disk:

lcd A:

• get each file you want to copy, giving the actual file name.

A.6 Operating systems

The first time you signon to an AIX machine you will be forced to change your password. On subsequent signons, you will be given an informational message regarding your previous signon time and date and any unsuccessful attempts to signon to your id. Change your passwords periodically. Make sure that you always logout before you leave your workstation. The following commands summarize the syntax for logging in and out of AIX systems and for changing passwords.

Login procedure (enter your actual userid and password):

login: userid
password: password


Logout procedure:

ctrl-D key to logout

Changing password (the passwd command prompts for the old and new passwords):

user@machine:dir> passwd

A.6.2 Customizing your environment - the .profile file

Once you login successfully to an AIX machine, you will always start the

session in your home directory. There should be a file in your home

directory named .profile. Every time you login, the contents of this file

are executed once. Hence one can use this file to customize the working environment on an AIX machine. There is

also a system wide profile file, which is used to control such things as

the default search path for finding executable programs, controlling the

prompt string, identifying the terminal type, and to define a number of

environment variables that other application programs might need.

Feel free to copy my version of this file from /u/kumar/.profile and adapt it to your needs. My version of this file enables the command

line editing features - i.e., all the commands that you enter during a ses-

sion are stored in a buffer and you can scroll back and forth to retrieve

previous commands with Ctrl-p for previous and Ctrl-n for next. After

retrieving a previous command you can edit it with the cursor movement

keys Ctrl-b for backward, Ctrl-f forward and Ctrl-d for deleting the cur-

rent character. In effect it supports the same editing capabilities on the

command line that the editor emacs supports for a file. More on emacs

later. You can also set aliases for the most frequently used commands.

For example you can set

alias dir=’ls -al’

so that when you enter dir you will get the directory listing. Another

useful alias is

alias rm=’rm -i’

which prompts you for confirmation before removing (deleting) files.
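As a quick illustration, the two aliases above can be tried in a throw-away script (a sketch; the shopt line is only needed for bash, which does not expand aliases in scripts by default):

```shell
#!/bin/sh
# Enable alias expansion when run under bash; harmless elsewhere.
shopt -s expand_aliases 2>/dev/null || true

alias dir='ls -al'   # 'dir' now gives a long directory listing
alias rm='rm -i'     # 'rm' now asks for confirmation before deleting

alias dir            # print the definition to confirm it took effect
```

In an interactive login session the same two alias lines would simply go in your .profile.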

If you are familiar with the DOS directory and file structure, it should be

equally easy to work with the file systems of OS/2 and AIX. While DOS file names are restricted to 12 characters, AIX allows very long file names.

But the mechanisms for creating directories and navigating up and down

the directory tree structure are essentially the same. Frequently used

commands that relate to file management are tabulated below.

Task                        AIX (UNIX)       OS/2             DOS
seek online help            man command      help command     help command
list the directory          ls -al           dir              dir
list contents of a file     cat fname        type fname       type fname
list one page at a time     more fname       more fname       more < fname
create a file               touch fname
erase a file                rm fname         erase fname      erase fname
copy a file                 cp fn1 fn2       copy fn1 fn2     copy fn1 fn2
append fn1 to fn2           cat fn1 >> fn2   copy fn2+fn1
rename a file               mv fn1 fn2       rename fn1 fn2   rename fn1 fn2
create a directory          mkdir dirname    mkdir dirname    mkdir dirname
change directory            cd dirname       cd dirname       cd dirname
delete an empty directory   rmdir dirname    rmdir dirname    rmdir dirname
present working directory   pwd              cd               prompt $p$g
disk usage summary          du -s            chkdsk
status of system            ps -l            pstat

Each of these commands, particularly in UNIX, can take several optional

parameters, flags etc. that further identify any specific features of the

command that you want to enable. This command list is by no means

complete. They are the more frequently used commands. On UNIX addi-

tional details on each of the command can be found with the command

These are called manual pages or man pages for short. Try

to get started! Although the man pages provide only limited help, they

are always available from any type of terminal. Much more exhaustive

online help using hypertext is available when you are connected to an

AIX machine via X-windows. To access this enter

user@machine:dir> info
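The UNIX column of the table above can be explored safely with a short session in a scratch directory; for example (the file names are arbitrary):

```shell
#!/bin/sh
# A quick tour of the UNIX file-management commands, run inside a
# scratch directory so none of your own files are touched.
mkdir scratch              # create a directory
cd scratch                 # change directory
touch report.txt           # create an (empty) file
cp report.txt backup.txt   # copy a file
mv backup.txt old.txt      # rename a file
ls                         # list the directory: old.txt report.txt
cd ..                      # move back up one level
rm scratch/report.txt scratch/old.txt   # erase the files
rmdir scratch              # delete the now-empty directory
```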


A.7 Editors

The functions that one expects from a good editor have several common

features. These can be broadly grouped into the following.

delete text by characters, words, lines, blocks, etc.

insert text by characters, words, lines, blocks, etc.

move text by characters, words, lines, blocks, etc.

copy text by characters, words, lines, blocks, etc.

locate text by strings perhaps with regular expressions

search & replace text1 by text2

Each editor provides commands or key sequences that invoke functions to carry out the above tasks. On AIX there are

several editors. I prefer emacs, as I find it to be extremely powerful; it

also has built in online help and hence you can learn at your own pace

and grow with it. OS/2 also has a full screen visual editor. I am sure that

you will have your own favorite editor. If you have one use it and ignore

this section.

If you want to learn the ultimate in editors (my biased view of course!)

try emacs. To start this editor in AIX use the command,

user@machine:dir> emacs

The anatomy of the emacs editor screen is shown in Figure A.6. This

is also a full screen, visual editor. It works under both X-windows and

VT100 emulation.

Buffers

Emacs uses the concept of a buffer to keep a temporary copy of the file

that you are editing. You can edit any number of files at a time and each


[Figure A.6: Anatomy of the emacs editor screen. The status line (in reverse video) shows the buffer name, the emacs mode and the relative file position; ** in that line indicates that the file has been modified. Commands to emacs are entered in the minibuffer. The cursor is moved with C-p (up), C-n (down), C-b (back) and C-f (forward).]


file is kept in a separate buffer. There is a one line command buffer called

minibuffer at the bottom of the screen and a text buffer at the top. A

status line in reverse video separates the minibuffer from the text buffer.

The key sequence C-x b allows you to cycle through the buffers.

Key sequences

Being the most powerful editor, emacs provides a large number of edit-

ing functions. These functions are accessed either by entering the func-

tion name in the command minibuffer or by directly entering a key se-

quence. This process of attaching key sequences to functions is called

key bindings. Actually when you enter a key sequence like C-f, it invokes

a function called forward-char. Some of the frequently used functions

and their key bindings are listed in Table A.4. In these, C-x means: keeping the control key pressed, enter the character x, while M-x implies a two-character key sequence with the Esc key as the first one followed

by the character x. If you get into trouble or become confused with this

editor at any time enter the key sequence C-g C-g to discontinue what

you started. This means that while keeping the control key down, enter

the letter g a couple of times.

Command completion

If you cannot remember the key bindings, but rather remember only the function names (and that too only vaguely!),

then you can enter M-x followed by the first few letters of the function

name and the space-bar key. A list of all possible functions beginning

with those few letters will be displayed in a lower window. This process

is called function completion. Try

M-x sa space-bar

You will see that the function name will be completed till save- and

stop, as an ambiguity exists at that point. Another space-bar will show all

functions that begin with save- and you can continue to identify the one

you want - e.g., b space-bar will identify the function as save-buffer.

Recall that this function is bound to the key C-x C-s.

This concept of completion is also used in selecting a file. For exam-

ple, the key sequence C-x C-f will get a new file into a buffer for editing.

Once you execute the key sequence, the current directory will be dis-

played in the minibuffer. Entering the first few characters of a file name

followed by space-bar will complete the file name till the next point of

ambiguity.


ask for help C-h C-h help-for-help

ask for tutorial C-h t help-with-tutorial

ask for information C-h i info

to end emacs session C-x C-c save-buffers-kill-emacs

abort if you get into trouble C-g keyboard-quit

insert a new file at cursor position C-x i insert-file

switch to a different buffer C-x b switch-to-buffer

save the current buffer into the file C-x C-s save-buffer

save the current buffer as a new file C-x C-w write-file

move one character forward C-f forward-char

move to previous line C-p previous-line

move to next line C-n next-line

delete the current character C-d delete-char

undo C-x u advertised-undo

move to end of line C-e end-of-line

open a new line for typing C-o open-line

kill from cursor to end of line C-k kill-line

yank it back (explore kill ring!) C-y yank

move to beginning of buffer C-x [ backward-page

move to end of buffer C-x ] forward-page

page backward to previous screen M-v scroll-down

redisplay current screen C-l recenter

incremental search backward C-r isearch-backward

search & replace M-% query-replace

to cancel search C-g C-g

end recording keystrokes C-x ) end-kbd-macro

replay/execute recorded keystrokes C-x e call-last-kbd-macro

NOTE: C-h means keeping the control key down, enter the key “h”.

M-v means press and release the Meta key or the Esc key, then press the

key “v”.

C-x [ means keeping the control key down, enter the key “x”, then press

the key “[”

Table A.4: List of frequently used emacs functions and their key bindings


Emacs contains many more functions than listed in Table A.4 and not all functions have key bindings. You are encouraged to go through the online tutorial, which can be invoked with the keystrokes C-h t. Online documentation is available via C-h i. The key sequence C-h b

will show all the key bindings within emacs in a buffer called *Help*

after splitting the screen into two windows. You can switch to the lower

window with the key sequence C-x o which is the same as invoking the

function other-window. Repeating the key sequence C-x o will cycle you

through the various windows. To expand the current buffer into full

screen use the key sequence C-x 1.

This is only a minimal introduction to emacs. This editor is not for

the uninitiated user. You need patience to master this editor, but if you

persist the rewards in terms of increased productivity are great. With

this introduction you should be able to explore emacs deeper and deeper!

If you are bored while using emacs, chat with the doctor - use M-x doctor

- have fun.

A.8 FORTRAN compilers

FORTRAN compilers are available on the AIX machines. The FORTRAN compiler on the AIX server is named xlf. To compile your code use the command

user@machine:dir> xlf -O pgm.f

This is adequate for any self-contained FORTRAN program. The -O option invokes optimization of the code. The executable code from the

compiler will automatically be stored in a file named a.out. To execute

your program use,

user@machine:dir> a.out

If you read data from unit 5 and write data to unit 6 within your FOR-

TRAN program, these I/O will be redirected, by default, to your terminal.

If you want to use files connected to these units, then you can use I/O redirection on the command line:

user@machine:dir> a.out < input.dat > output.dat

Another way to use these files is to OPEN them explicitly within your FORTRAN program.
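The redirection idea can be tried with any program that reads standard input and writes standard output; in this sketch cat stands in for your compiled a.out, and the file names input.dat and output.dat are arbitrary examples:

```shell
#!/bin/sh
# Demonstrate unit 5 / unit 6 redirection, with 'cat' standing in
# for a.out (both read standard input and write standard output).
printf '1.0 2.0\n' > input.dat   # data the program would READ on unit 5
cat < input.dat > output.dat     # like: a.out < input.dat > output.dat
cat output.dat                   # the unit-6 output ends up in output.dat
```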

If your program calls any library routines like the nswc math library,

then you must specify the location of the library as follows in the compilation step:

user@machine:dir> xlf -O pgm.f -Ldirectory -lnswc

Here the -l parameter identifies the name of the file containing the library

and the -L parameter identifies the directory where the library file re-

sides. Note that the library file will be named libnswc.a. By convention,

however, the prefix lib and the postfix .a need not be specified in the

parameter -l.

Use man xlf to find out about other parameters the xlf compiler can accept.

A.9 Debuggers

A source level debugger allows one to step through the program, exe-

cuting one line at a time, set break points at pre-selected lines, display

values of variables and change values of variables. There is an excellent

X-Windows based front end called xde to the debugger (dbx) on the RISC

server that is quite easy to use. Find out more about it using man xde. If

you want to use this debugger, however, you should access the computer

through X-Windows.

In order to enable the debugging features you should compile the program with the -g option as follows:

user@machine:dir> xlf -g pgm.f

The a.out file so produced will contain all the symbol table information which is needed by the debugger. To invoke the debugger use,

user@machine:dir> xde a.out

This will open up several windows, one of which will show the source

listing. Other windows provide menus and buttons that you can use

to set break points, begin execution, display variables etc. There is also

online help explaining the various features of xde.

For the real computer hacks, a command line version of the debugger

called dbx is available. You can use this to debug FORTRAN, C and

PASCAL programs. Venturing into this is recommended only if you know

the language C quite well. This can be accessed, however, without X-

windows.


A.10 Application programs

A number of application programs are installed; they are all located under the directory "/usr/local". The procedure for starting several useful applications is given in the next few sections.

Online help or online documentation is often adequate to learn more

about the software.

A.10.1 ASPEN PLUS

ASPEN PLUS is a powerful steady state process flow sheet simulator. It is

useful for carrying out rigorous, steady state mass and energy balance

calculations. It has a fairly extensive thermodynamic data base. The X-

window based front end, called the model manager allows you to define

the problem interactively. The flow sheet can be constructed graphi-

cally by grabbing modules and connecting them up. Operating condi-

tions can be defined interactively by filling out forms. The expert sys-

tem interface ensures that all required input parameters are specified

before the simulation is started. ASPEN PLUS is licensed to run only on

ugrads.eche.ualberta.ca.

• Establish an X-window connection to ugrads from any OS/2 machine (follow section A.5.2) or from other AIX machines (see figure A.5).

• Enter,

user@machine:dir> mmg

Advanced users of ASPEN PLUS can use the simulator directly from the

command line using

user@machine:dir> aspen

The input file containing the commands to ASPEN PLUS must be pre-

pared by the user using any standard editor. One can think of the Model

Manager as a front end that enables you to build the command file for

ASPEN PLUS in a GUI environment.

A.10.2 Xmgr

This is a powerful 2-D plotting and data analysis package. You can con-

trol every facet of the graph with this program. It runs only under X-windows. To start the program:

• Establish an X-window connection to ugrads from any OS/2 machine (follow section A.5.2) or from other AIX machines (see figure A.5).

• Enter,

user@machine:dir> xmgr

A.10.3 TEX

TEX is a powerful typesetting package that is widely available on several

platforms and it has very few licensing restrictions. This document, in

fact, was typeset using TEX. It is available on most of the AIX machines.

The authoritative document on TEX is by Knuth (Knuth, 1984) and on

LATEX is by Lamport (Lamport, 1986). The steps for compiling, previewing

and printing a TEX document are outlined here. See the above references

on how to create a TEX document. To compile a TEX file use,

user@machine:dir> tex fname.tex

This produces fname.dvi, a device independent file. The contents of this "dvi" file can then be used to either preview the document on the screen or send it to a printer. To preview use,

user@machine:dir> xdvi fname.dvi

Note that on the AIX machine, you must be using X-windows to use the

previewer. A program called dvips takes the "dvi" file and sends it to

a postscript printer. Since printer configurations vary, see one of the

support staff to find out the exact procedure for printing a document.

A.10.4 Pine

Pine is the standard mail reader on our AIX network. To start it simply enter


user@machine:dir> pine

This program will work under VT100 emulation and offer full screen

support. Online help can be accessed with the question mark, ?. Note

that e-mail addresses are formed as userid@machine.eche.ualberta.ca.

Here userid is your signon name or id on the AIX machine, machine is the

name of the machine. The rest of the e-mail address, eche.ualberta.ca

refers to the chemical and materials engineering subnet domain name.

If you have accounts on several machines it is recommended that you

select and consistently use one machine as your primary e-mail system.

A.10.5 Tin

Tin is the standard news reader on our AIX network. To start it simply enter

user@machine:dir> tin

This program will work under VT100 emulation. Online help can be

accessed with the "h" key.

A.11 Distributed Queueing System

The Distributed Queueing System (DQS) allows long jobs to be executed on a machine that is relatively free within the graduate network, while at the same time allowing faster keyboard/terminal response to interactive users. Any task that takes longer than 5 min of CPU time should be submitted through the DQS; otherwise the jobs will be terminated automatically. The commands that you should be familiar with are qsub (to submit a job) and qstat (to examine the queue).

For the experts, online documentation via man pages is available on each

of the above commands. The following examples illustrate how to con-

struct a script file that you would submit using qsub.


Construct a script file containing the following lines.

#!/bin/csh

# make the current directory the CWD

#$ -cwd

# lets put STDOUT/STDERR in the file "gaga"

#$ -eo gaga

# i’d like to know when she fires

#$ -mu user@address.ualberta.ca

#$ -mb

# and when she finishes

#$ -me

matlab >out <<'eof' #the shell starts matlab; output goes to "out"

secant(’ass3a’,[0,1],eps,1) %matlab acts on this line and runs secant

fzero(’ass3a’,0.5,eps,1) %matlab acts on this line and runs fzero

quit %matlab acts on this line and quits

eof

ps -ael #back in the shell; output goes to "gaga"

ls -al #The shell acts on this too!

In the above file any line beginning with "#$" is a command for the

queueing system and any line that begins with "#" is a comment line.

The command

#$ -eo gaga

sends all of the output that normally appears on the screen during an in-

teractive session to a file named "gaga". [Change the file name to some-

thing different from "gaga"!] The command

#$ -mu user@address.ualberta.ca

sends mail to the user when the job starts. Use your correct e-mail ad-

dress here. Similarly the command

#$ -me

sends mail when the job finishes.

After the preamble you can put any sequence of commands that you

would normally enter during an interactive session and these will be ex-

ecuted in sequential order. In the above example the command

matlab >out <<'eof'

starts MATLAB, redirects MATLAB output to a file named out and takes

the input to MATLAB from the following lines until "eof", the end-of-file

marker. The lines following "eof" should make sense to the shell, as it

interprets these lines. If you understand these principles you can construct any complicated script using the full programming capabilities of the shell.
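The here-document mechanism itself can be tried outside DQS; in this sketch cat stands in for matlab, so the "input" lines are simply copied through to the output file:

```shell
#!/bin/sh
# Everything between <<'eof' and eof is fed to the program's standard
# input, and >out sends the program's output to the file "out".
# Here 'cat' stands in for matlab, so the input lines are copied to "out".
cat >out <<'eof'
secant('ass3a',[0,1],eps,1)
quit
eof
cat out    # back in the shell: lines after eof are ordinary shell commands
```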


The machines are grouped into "High", "Medium" and "Low" groups based

on the hardware capabilities. Here "-G" identifies the group as "Medium"

and only one machine is requested for the job with "1". Note that DQS

supports "parallel virtual machine" for jobs that can be executed in par-

allel on more than a single machine. In such cases you should request

the number of machines by replacing "1" with "n", where "n" can be between 1 and 9 since we have only nine machines. The default group is

"Medium" and the default number of machines is "1", so these parameters may simply be omitted. You can examine the status of your jobs with

user@machine:dir> qstat

Construct a script file named, say m01.bat, containing the following lines.

#!/bin/csh

#$ -cwd

#$ -eo gaga

#$ -mu user@address.ualberta.ca

#$ -mb

#$ -me

runf3d -fort m02.f -command m02.fc -release 3.2.1 -geom m01.geo

A.12 Printing reports, graphs etc.

In room CME 244 there are several old Epson printers connected directly to the OS/2 machines. Each printer serves two OS/2 machines. In room

CME 473 each of the DOS machines is connected directly to a HP LaserJet

printer.

Since AIX and OS/2 have network support, printing on these ma-

chines can be done using network printers. Hence there are no printers

connected directly to each of these machines. There are two network


LaserJet printers, one located in room CME 244 and the other in CME

473. The network printers also operate on the client-server model. It

is important that you understand how the print servers operate and

use this facility in a responsible manner. If the facility is abused this

service will be discontinued and you will have to take your print jobs to

the Micro DEMO center at the book store!

• If you send a print job to a remote location, the job can be deleted

by the system administrator if it interferes with other queued print

jobs.

• Be prepared to attend to the printer immediately after you send a print job to the printer - i.e., feed paper, watch for paper jams, collect your output etc.

For personal needs take your print job to the Micro DEMO center.

Most applications running under OS/2, AIX or DOS can generate printed

output for a number of different types of printers. Each type of printer/plotter

understands a particular set of instructions. For example the HP LaserJet uses PCL (Printer Command Language) format, while HP plotters use HPGL

(HP Graphics Language). Another widely used page description language

is called Postscript. Application programs use something called a printer

driver to generate the output suitable for a particular output device.

If the output is generated on a computer that is connected directly

to a printer (like most DOS machines to HP LaserJet in CME 473 or OS/2

machines to Epson in CME 244) then you can send the print job directly

to the printer, typically through the printer port LPT1:

If the output is generated on a computer that does not have a printer

connected directly to it, or you want LaserJet output from an OS/2 machine, then you must use one of the network printers. In this case you

must first save the output from the application program into a file. Most

well designed application programs will give you the option to select the

output device and to save the printed output into a file. Use the follow-

ing convention in naming the output files:


filename.ps postscript

filename.eps Encapsulated postscript

(Useful for merging graphics with text)

filename.pcl HP Laser Jet

filename.hgl HP plotters using HPGL instruction set

filename.dot Epson dot matrix printers

Note that such files contain specific instructions that are understood by

specific printers/plotters. Hence they must be sent to the appropriate

printers. If you send a PCL file to an Epson printer you will get garbled

output that will make no sense!

The simplest approach is to use the script named prnt on AIX machines

available within the chemical and materials engineering department. It

is a script written by Bob Barton. Hence it is available only on machines

maintained by Bob Barton.

Unix experts may want to try the basic set of unix commands lpr, lpq,

lprm to submit, monitor and manage a print task. On the AIX machines,

the command to submit a PCL file to a network printer is

user@machine:dir> lpr -Pprinter name filename.pcl

where the printer name is either LJ 244 (LaserJet printer in room CME

244), LJ 473 (LaserJet printer in room CME 473) or PS 475 (the postscript

printer in room CME 475). Your job is then queued and you can examine

the status of the queue with the command,

user@machine:dir> lpq -Pprinter name

Note that PS 475 is available only on the graduate network, while

LJ 244 and LJ 473 are available on the undergraduate network.

You can remove your print jobs from the queue with the command,

user@machine:dir> lprm -Pprinter name job number

If your application program does not support the HP Laser Jet ( i.e.,

PCL format) output device, you can select the postscript output device

and save the file as fname.ps. A public domain utility program called

A.13. ANONYMOUS FTP SERVICE A.39

"ghostscript" (gs) can display such a file on the monitor of an AIX machine under X-windows or convert the file to PCL

format for printing on a Laser Jet printer. To use this conversion pro-

gram enter,

If you are using X-windows and want to preview the contents of the file

fname.ps enter,

user@machine:dir> DISPLAY=hostname:0

user@machine:dir> gs fname.ps

where hostname is the name of the X-client. To find out more about

"ghostscript" options use one of

user@machine:dir> gs -?

user@machine:dir> man gs

Important Note: Once you have printed the document delete the output

files from your home directory.

Within the application program you should select the output device to be HP LaserJet and

save the output into a file as discussed in the previous section. This file

will be on the local computer in the directory that you select. You can

print such an output file as follows:

Many sites on the internet offer a public software distribution service called anonymous-ftp-service. You can signon to

their machines as an anonymous user and retrieve (or download) soft-

ware. You should use anonymous as the userid and your e-mail address


as the password. Recall from section §A.10.4, that the e-mail addresses

have the general form userid@machine.eche.ualberta.ca. You must be

courteous in using such services. Normally you are requested to down-

load software only during off-peak hours. You must also conform to all

the software licensing conditions. A sample ftp session is given below.

Connected to hobbes.NMSU.Edu.

220 hobbes FTP server (Version 5.1 (NeXT 1.0) Tue Jul 21, 1992)

ready.

Name (ftp-os2.nmsu.edu:kumar): anonymous

331 Guest login ok, send ident as password.

Password: userid@machine.eche.ualberta.ca

230 Guest login ok, access restrictions apply.

ftp>

129.128.44.nnn ( i.e., from room CME 244). Some well known sites and

the type of software that they serve are listed below.

MS-DOS related software 192.88.110.20 WSMR-SIMTEL20.ARMY.MIL

MS-DOS related software 128.252.135.4 wuarchive.wustl.edu

AIX related software 128.97.2.211 Harpo.SEAS.UCLA.EDU

dynamical systems theory 132.239.86.10 lyapunov.ucsd.edu

TeX related software 192.92.115.8 Niord.SHSU.edu

TeX related software 134.173.4.23 ymir.claremont.edu

NeXT related software 128.210.15.30 sonata.cc.purdue.edu

OS/2 related software 128.123.35.151 ftp-os2.nmsu.edu

U of A ftp server 129.128.76.12 ftp.srv.ualberta.ca

numerical analysis software 192.20.225.2 research.att.com

The reasonable man adapts himself to the world:

the unreasonable one persists in trying to adapt the

world to himself. Therefore all progress depends on

the unreasonable man.

Appendix B

An introduction to MATLAB

B.1 Introduction

MATLAB is an interactive software package designed for the numerical computations normally encountered in engineering and science. It is available

on several platforms including personal computers using DOS, worksta-

tions, mainframes and supercomputers using UNIX. It brings together a

large collection of powerful numerical algorithms (from LINPACK, EIS-

PACK etc) for solving a variety of linear algebra problems and makes

them available to the user through an interactive and easy-to-use inter-

face. Very little programming effort is required to use many of its stan-

dard functions. Yet, an experienced programmer can write advanced

functions and even develop entire tool boxes for specific applications

like control system design and signal processing. In fact several such

tool boxes already exist.

MATLAB is available in the micro computer lab in room CME 244 on

nine of the IBM RS/6000-220 workstations and on the SunOS servers

ugrads1.labs and ugrads2.labs running under SunOS operating system.

These machines are on the internet and hence are accessible through a

variety of means. Note that a student edition of MATLAB is available

from the Book store. If you have a PC at home the software and the

manual is a great buy.

Since MATLAB is interactive, you are encouraged to try out the ex-

amples as you read this manual. After each step, observe the outcome

carefully. Since computers are programmed to respond in predictable ways, such experimentation is an effective way to learn.


B.2. STARTING A MATLAB SESSION B.2

Familiarity with the basic concepts of the operating system and the

networked environment is assumed. In these notes you will be intro-

duced to some of the basic numerical and graphical capabilities of

MATLAB. In particular the following will be explored.

Solving problems in

> root finding

> curve fitting

> numerical integration

> integration of initial value problems

> nonlinear equations and optimization

> basic plotting capabilities

> Writing MATLAB functions and scripts - the m-file

For the adventurous, here are some of its advanced features. Explore

them on your own! The package provides features (or tool boxes) for sig-

nal processing, control system design, identification and optimization

through what are called m-files. The graphics support includes

3D and contour plotting as well as device drivers for a variety of out-

put devices including Postscript and meta file capabilities for producing

high quality plots (not just screen dumps!). It also provides facilities

for developing one's own tool boxes as well as facilities for interfacing

with other high level languages such as FORTRAN or C and invoking such

routines from within MATLAB.

Find a free station in room CME 244 in the range of numbers 31 to 39.

These are the AIX machines. Signon by entering your userid and pass-

word, i.e.,


password: password

user@machine:dir> xinit

You may copy the files ".mwmrc" and ".Xdefaults" from the directory

"/afs/ualberta.ca/home/k/u/kumar/". These files customize the X-windows

environment when the X-server is started with the command "xinit". Sev-

eral windows will be started up and one of them will be named "Aixterm".

Normally this would be the shell "ksh" and all the paths to application

program will be setup correctly for you to start running application pro-

grams. To start MATLAB simply enter

user@machine:dir> matlab

If MATLAB does not start, seek help from the system administrator!

The procedure for connecting to an AIX machine via X-windows was

outlined in section A.5.2. This will provide a full X-

window based access to MATLAB. If you want to use MATLAB from a

home computer, only VT100 based emulation support is available, un-

less you have X-windows client on your home computer. Follow the steps

outlined in section A.5.3 to connect to an AIX machine from home using

kermit. In either case, after a successful connection has been estab-

lished, enter,

user@machine:dir> matlab

MATLAB has advanced 2-D and 3-D graphics capabilities. The graphics features,

however, rely heavily on X-windows. Hence you must invoke MATLAB

under X-windows in order to see the graphs and images on the screen.

If you start MATLAB from a home computer, or a VT100 terminal on

campus, you are limited to seeing only the textual output on the screen.

You can still generate graphs with appropriate MATLAB commands, save

them on to a file or print them, but you cannot see them on the screen.

B.3. MATLAB BASICS B.4

Once you start MATLAB successfully, you should see the following prompt

on your screen.

< M A T L A B (tm) >

(c) Copyright 1984-92 The MathWorks, Inc.

All Rights Reserved

Version 4.0a

Dec 11 1992

Commands for more information: help, whatsnew, info, subscribe

»

This provides you with an interactive workspace in which you can define

any number of variables and invoke any function. To exit MATLAB at

any time enter

» quit

The commands that you enter within MATLAB are acted upon immedi-

ately. As soon as you enter a line like,

» fname

MATLAB first checks whether it is a variable or a built-in func-

tion. If so it will be executed immediately. If not, MATLAB searches the

path to look for an external function or a file by the name "fname.m". Such

a file is called a m-file, as its file extension is "m". If such a file is found

it will execute the contents of that file. If not, MATLAB will generate an

appropriate error message. m-files can be either scripts (i.e. a series of

valid MATLAB commands that are executed often and hence stored in

a file) or they can be used to define entirely new MATLAB functions of

your own. More on m-files later.

While in MATLAB, if you have the need to execute a UNIX shell com-

mand, you can do so with the escape character ! - e.g., try

» !emacs

to invoke the emacs editor (when you exit the editor using ctrl-x

ctrl-c, you will return to MATLAB), or


» !ls -al

MATLAB provides extensive online help using commands like help, demo,

type, lookfor, whatsnew. They are not only useful for checking the syn-

tax of a particular function, but also for exploring and learning about new

topics. Since the help command often generates lots of text that tends to

scroll by very quickly, it is useful to enable a feature called “more” with

the command,

» more on

When this is enabled, you will be shown one screen full of information

at a time. Note that this is also a UNIX feature that you can use with any

program that generates lots of scrolling text. To get started with the

online help, first get a list of help topics using

» help

Table B.1 provides a list of help topics which should give you some idea

about the broad scope of MATLAB. You can obtain a list of functions

under each topic (or directory) by entering help topic. For example to

get a listing of general purpose commands (the first item in the above

table) enter,

» help general

and browse through the listed material. Many of the functions that will be useful in a numerical methods

course are listed in subsequent sections of this chapter. One way to

become proficient in MATLAB is to use this HELP feature liberally - i.e.,

when ever you are in doubt call on the HELP!

There is also a built in DEMO feature. To invoke this feature simply

enter

» demo

Try some graphics demos. In Version 4.0, this works only under X-windows.

It will provide you with a menu of items. Select the ones that interest

you most. You can also search by keywords using the command lookfor.

Try,


matlab/general General purpose commands

matlab/ops Operators and special characters

matlab/lang Language constructs and debugging

matlab/elmat Elementary matrices and matrix manipulation

matlab/specmat Specialized matrices

matlab/elfun Elementary math functions

matlab/specfun Specialized math functions

matlab/matfun Matrix functions & numerical linear algebra

matlab/datafun Data analysis and Fourier transform functions

matlab/polyfun Polynomial and interpolation functions

matlab/funfun Function functions & nonlinear numerical methods

matlab/sparfun Sparse matrix functions

matlab/plotxy Two dimensional graphics

matlab/plotxyz Three dimensional graphics

matlab/graphics General purpose graphics functions

matlab/color Color control and lighting model functions

matlab/sounds Sound processing functions

matlab/strfun Character string functions

matlab/iofun Low-level file I/O functions

matlab/demos Demonstrations and samples

toolbox/control Control System Toolbox

toolbox/ident System Identification Toolbox

toolbox/local Local function library

toolbox/optim Optimization Toolbox

toolbox/signal Signal Processing Toolbox

simulink/simulink SIMULINK model analysis and construction functions

simulink/blocks SIMULINK block library

simulink/simdemos SIMULINK demonstrations and samples


Managing commands and functions

help On-line documentation

what Directory listing of M-, MAT- and MEX-files

type List M-file

lookfor Keyword search through the HELP entries

which Locate functions and files

demo Run demos

path Control MATLAB’s search path

Managing variables and the workspace

who List current variables

whos List current variables, long form

load Retrieve variables from disk

save Save workspace variables to disk

clear Clear variables and functions from memory

pack Consolidate workspace memory

size Size of matrix

length Length of vector

disp Display matrix or text

Working with files and the operating system

cd Change current working directory

dir Directory listing

delete Delete file

getenv Get environment value

! Execute operating system command

unix Execute operating system command & return result

diary Save text of MATLAB session

Controlling the command window

cedit Set command line edit/recall facility parameters

clc Clear command window

home Send cursor home

format Set output format

echo Echo commands inside script files

more Control paged output in command window

Starting and quitting from MATLAB

quit Terminate MATLAB

startup M-file executed when MATLAB is invoked

matlabrc Master startup M-file


» lookfor inverse

which will scan for and print out the names of functions which have the

keyword "inverse" in their help information. The result is reproduced

below.

ACOS Inverse cosine.

ACOSH Inverse hyperbolic cosine.

ASIN Inverse sine.

ASINH Inverse hyperbolic sine.

ATAN Inverse tangent.

ATAN2 Four quadrant inverse tangent.

ATANH Inverse hyperbolic tangent.

ERFINV Inverse of the error function.

INVERF Inverse Error function.

INV Matrix inverse.

PINV Pseudoinverse.

IFFT Inverse discrete Fourier transform.

IFFT2 Two-dimensional inverse discrete Fourier transform.

UPDHESS Performs the Inverse Hessian Update.

In MATLAB a vector and a

scalar are special cases of a general matrix data structure. Similarly

MATLAB handles complex variables and numbers in a natural way. Real

variables, then, are special cases. Note that MATLAB is case sensitive.

• MATLAB remembers the previous command lines that you have en-

tered. You can recall them by simply using the up and down arrow

keys (or ctrl-p and ctrl-n key combinations) and then edit them

and reenter the edited command as a new command. Basically,

it supports the following emacs key definitions for command line

editing.


Previous line ctrl-p

Next line ctrl-n

One character left ctrl-b

One character right ctrl-f

One word left esc b, ctrl-l

One word right esc f, ctrl-r

Cursor to beginning of line ctrl-a

Cursor to end of line ctrl-e

Cancel line ctrl-u

Delete character ctrl-d

Insert toggle ctrl-t

Delete to end of line ctrl-k

You can define a matrix, for example,

» A = [1 2 3; 4 5 6; 7 8 9]

defines a 3 × 3 matrix. Note that you need not declare the dimension

of an array. Since MATLAB is case sensitive,

you have defined only "A" and "a" remains undefined. Similarly

» x=[2+4*i, 3+5*i]

defines a complex vector of two elements. To add another element enter

» x(4)=5+6*i

and the length of x is automatically increased to 4. (What would the

value of x(3) be?) Observe that the square brackets are used in forming

vectors and matrices. Semicolon is used to separate rows in a ma-

trix. Comma is used to separate individual elements of a vector (or

matrix). Parentheses are used to identify individual array elements.

(Try help punct and help paren)

Try the following exercise and make sure you understand the result.

» A(2:3,1:2)

Observe the use of () and : to select a sub block of A. (What might

happen if the size of sub-blocks are different?) Next, try

» B(4:5,2:3)=A(2:3,1:2)

which extracts a sub-block of A and assigns it to another sub-block of B.

Next, try, (What would be the value of x after you execute this

command? Why?)

» x(4:-1:1)

which reverses the order of elements of x. Next, try the command,

This example also demonstrates a powerful way of selecting spe-

cific elements of a vector. This is easily extended to matrices also.

Well, try,

variable. All the variables that you define during a MATLAB session

are stored in the workspace ( i.e., in computer memory) and they

remain available for all subsequent calculations during the entire

MATLAB session i.e., until you “quit” MATLAB.

» global A

then all those functions share the same value. To check if a variable

is global use,

» isglobal(A)

and the attributes of those variables, use one of the two commands

"who" and "whos".

» whos


To generate a vector of equally spaced elements, follow the example below:

» x = 0 : 0.05 : 1.0

will generate x = [0 0.05 0.1 0.15 0.2 · · · 1.0]. (Try help colon).

• To suppress the automatic echoing of any line that you enter from

keyboard, terminate such a line with a semi-colon ";". For example

» x = 0 : 0.05 : 1.0;

will define x as before, but will not echo its value. (Try help punct).

• To continue the entry of a long statement onto the next line use an

ellipsis consisting of three or more dots at the end of a line to be

continued. For example

» −1/8 + 1/9 − 1/10 + 1/11

Blank spaces around operators are optional; for exam-

ple the following is a valid command line.

To display more significant digits, use

» format long

• You can save the contents of a workspace with the "save" command.

Try,

» save jnk

Try the command !ls jnk* and observe that the

extension .mat has been added.

In the next few statements examine the currently defined variables,

clear the workspace and load a previously saved workspace.

» whos

» clear


» whos

» load jnk

» whos

use help on each of them to find out more precise information on

them.

− subtraction, e.g., C = A − B ⇒ Cij = Aij − Bij

∗ matrix multiplication, e.g., C = A ∗ B ⇒ Cij = Σk Aik Bkj

ˆ Matrix power. Z = Xˆy is X to the y power if y is a scalar and

X is square. If y is an integer greater than one, the power

is computed by repeated multiplication. For other values of

y the calculation involves eigenvalues and eigenvectors. (try

help arith).

' Matrix transpose. X' is the complex conjugate transpose of X.

X.' is the non-conjugate transpose. (try help punct).

\ left division. A\B is the matrix division of A into B, which is

roughly the same as inv(A)*B , except it is computed in a dif-

ferent way. If A is an N-by-N matrix and B is a column vector

with N components, or a matrix with several such columns,

then X = A\B is the solution to the equation A ∗ X = B com-

puted by Gaussian elimination. (try help slash)

/ right division. B/A is the matrix division of A into B, which is

roughly the same as B*inv(A).

the above operations to be valid; if you attempt matrix operations

between incompatible matrices an appropriate error message is

generated.
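Conceptually, the backslash operator solves A ∗ X = B by elimination rather than by forming an explicit inverse. The following is a minimal sketch of that idea in plain Python (the 3 × 3 system is a made-up example, not from this manual):

```python
# Minimal Gaussian elimination with partial pivoting, illustrating
# conceptually what MATLAB's backslash (A\b) does for a square system.
# The example system below is hypothetical.

def solve(A, b):
    n = len(A)
    # Augment A with b so row operations act on both at once.
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        # Partial pivoting: bring the largest pivot into row k.
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    # Back substitution on the resulting upper-triangular system.
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        s = sum(M[k][c] * x[c] for c in range(k + 1, n))
        x[k] = (M[k][n] - s) / M[k][k]
    return x

A = [[2.0, 1.0, -1.0],
     [-3.0, -1.0, 2.0],
     [-2.0, 1.0, 2.0]]
b = [8.0, -11.0, -3.0]
print(solve(A, b))  # the exact solution of this system is [2, 3, -1]
```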

Try help relop for additional details.

< Less than

> Greater than

<= Less than or equal

>= Greater than or equal


== equal

˜= not equal

These operate on matrices of the same size, producing a resultant matrix consisting of 0’s and 1’s.

The element-by-element operations are defined as follows:

.∗ C = A.*B ⇒ Cij = Aij Bij

.ˆ C = A.ˆB ⇒ Cij = Aij^Bij

./ C = A./B ⇒ Cij = Aij/Bij

.\ C = A.\B ⇒ Cij = Bij/Aij
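These element-by-element operations have direct analogues in most languages; here is a small Python sketch of .*, ./ and .\ on two illustrative vectors (the numbers are arbitrary):

```python
# Element-wise product and quotients, mirroring MATLAB's .*, ./ and .\
A = [1.0, 2.0, 3.0]
B = [4.0, 5.0, 6.0]

times = [a * b for a, b in zip(A, B)]    # A.*B
rdivide = [a / b for a, b in zip(A, B)]  # A./B
ldivide = [b / a for a, b in zip(A, B)]  # A.\B

print(times)  # [4.0, 10.0, 18.0]
```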

A list of all advanced matrix related functions in MATLAB is given in

Table B.3. Use the help command on each of these functions to find out

more about the function and its exact syntax.

Work through the following exercise to become familiar with the us-

age of some of the linear algebra functions and refresh some of the re-

sults from a first year linear algebra course.

» b = [0.369*275;0.821*275;0];

Observe the two different ways semicolon has been used here. What

are they? (Why was the earlier entry echoed on the screen, while "b"

was not? Is "b" a row or a column vector?)

• Solve the equation Ax = b using,

» x = A\b

» A*x - b


Matrix analysis

cond Matrix condition number

norm Matrix or vector norm

rcond LINPACK reciprocal condition estimator

rank Number of linearly independent rows or columns

det Determinant

trace Sum of diagonal elements

null Null space

orth Orthogonalization

rref Reduced row echelon form

Linear equations

\ and / Linear equation solution; use "help slash"

chol Cholesky factorization

lu Factors from Gaussian elimination

inv Matrix inverse

qr Orthogonal-triangular decomposition

qrdelete Delete a column from the QR factorization

qrinsert Insert a column in the QR factorization

nnls Non-negative least-squares

pinv Pseudoinverse

lscov Least squares in the presence of known covariance

Eigenvalues and singular values

eig Eigenvalues and eigenvectors

poly Characteristic polynomial

hess Hessenberg form

qz Generalized eigenvalues

rsf2csf Real block diagonal form to complex diagonal form

cdf2rdf Complex diagonal form to real block diagonal form

schur Schur decomposition

balance Diagonal scaling to improve eigenvalue accuracy

svd Singular value decomposition

Matrix functions

expm Matrix exponential

expm1 M-file implementation of expm

expm2 Matrix exponential via Taylor series

expm3 Matrix exponential via eigenvalues and eigenvectors

logm Matrix logarithm

sqrtm Matrix square root

funm Evaluate general matrix function


» norm(A*x - b)

» rank(A)

• Find the LU factors of A using

» [L,U]=lu(A)

Does L appear to be lower triangular?

• Calculate the determinant of A using

» det(A)

» det(L)*det(U)

» [v,d]=eig(A)

» prod(diag(d))

Can you explain this result?

» c1=poly(A)

» roots(c1)

» prod(ans)

Can you explain these results?
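The results above rest on the identity that det(A) equals the product of the eigenvalues of A, and that the roots of the characteristic polynomial poly(A) are those eigenvalues. A self-contained check in Python, using an arbitrary 2 × 2 matrix whose eigenvalues follow from the quadratic formula:

```python
import math

# For a 2x2 matrix the characteristic polynomial is
#   lambda^2 - trace*lambda + det = 0
# so the eigenvalues come from the quadratic formula.
A = [[4.0, 1.0],
     [2.0, 3.0]]

tr = A[0][0] + A[1][1]                       # trace = 7
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]  # det = 10

disc = math.sqrt(tr * tr - 4.0 * det)
lam1 = (tr + disc) / 2.0
lam2 = (tr - disc) / 2.0

print(lam1, lam2)        # eigenvalues 5 and 2
print(lam1 * lam2, det)  # their product equals det(A)
```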

fsolve finds the zeros of a multivariable function (i.e., it solves a system of nonlinear equations).

fun(x) is a function that you should write to

evaluate f(x) - i.e., you define your problem in

an m-file.

x0 is the initial guess for the root. [There is

obviously more to it than I can describe here!

Read the manual or try help fsolve].


fzero finds the root of a single nonlinear equation.

fun(x) is the external function describing your

problem that you should write in a m-file.

x0 is the initial guess.

poly(V) returns the coefficients of the polynomial

with roots determined by V.

i.e., roots and poly are inverse functions of each

other.
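Internally, building the coefficients from the roots amounts to multiplying out the factors (x − ri) one at a time, i.e., repeated polynomial convolution. A hedged Python sketch of that idea (the function names mirror MATLAB's but are re-implemented here, not calls to MATLAB):

```python
# Build polynomial coefficients from roots by repeated convolution,
# the same idea behind MATLAB's poly (coefficients in descending powers).

def conv(p, q):
    # Polynomial product = discrete convolution of coefficient lists.
    out = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def poly(roots):
    c = [1.0]
    for r in roots:
        c = conv(c, [1.0, -r])  # multiply by the factor (x - r)
    return c

print(poly([1.0, 2.0, 3.0]))  # x^3 - 6x^2 + 11x - 6 -> [1, -6, 11, -6]
```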

Here the polynomial coefficients are in c, i.e., Pn (x) = c1 x^n +

c2 x^(n−1) + · · · + c(n+1).

polyfit fits a polynomial of given degree to data (x, y); the co-

efficients in descending powers of x are returned in c.

polyval evaluates the polynomial with coefficients in c at locations determined by s.

interp1 takes the data vectors (x, y) and then computes a vector of in-

terpolated values of yi at xi.

vector x.

The other functions of possible interest are fmin, fmins, residue,

conv, table1.

quad integrates a given function over the interval (a,b) using adaptive recursive Simpson’s rule.

fun(x) is an external function that you must

provide in a m-file.

tol is the acceptable global error. trace is an

optional flag to monitor the integration pro-

cess.
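The core idea of quad, approximating the integrand by parabolic segments, can be illustrated with a plain composite Simpson's rule in Python; the adaptive, recursive error control of the real routine is omitted from this sketch:

```python
# Composite Simpson's rule: integrate f over [a, b] using n (even) panels.
def simpson(f, a, b, n=100):
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        # interior points alternate weights 4, 2, 4, 2, ...
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3.0

# Example: the integral of x^2 over [0, 1] is exactly 1/3.
print(simpson(lambda x: x * x, 0.0, 1.0))
```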


ode45 integrates ordinary differ-

ential equations of the form dy/dt = f (y) us-

ing 4th and 5th order Runge-Kutta methods.

fun(y) is the external function which defines your

problem. You must provide this via a m-file.

(t0,y0) is the initial condition.

tf is the final point at which you want to stop

the integration.

tol is the acceptable global error in the solu-

tion. trace is the optional flag to print

intermediate results.
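The classical 4th order Runge-Kutta formula at the heart of such routines can be sketched as follows (a fixed-step Python illustration; real ODE suites add an embedded 5th order estimate and adaptive step-size control):

```python
import math

# One fixed-step classical RK4 integrator for dy/dt = f(t, y).
def rk4(f, t0, y0, tf, n=100):
    h = (tf - t0) / n
    t, y = t0, y0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h * k1 / 2)
        k3 = f(t + h / 2, y + h * k2 / 2)
        k4 = f(t + h, y + h * k3)
        y += (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

# Test problem: dy/dt = -y, y(0) = 1, whose exact solution is exp(-t).
y1 = rk4(lambda t, y: -y, 0.0, 1.0, 1.0)
print(y1, math.exp(-1.0))  # RK4 result vs the exact value
```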

MATLAB ver 4.0 maintains separate graphics windows and a text win-

dow. Your interactive dialogue takes place on the text window. When

you enter any graphics command, MATLAB plots that graph immedi-

ately on the graphics window. It can open several graphics windows.

So, clearly commands are needed to select a specific window to be the

current one. The list of graphics related commands are given in Table

B.4. Work through the following exercise interactively and observe the

computer response in order to understand the basic graphic capabilities

of MATLAB. Make sure that you are running MATLAB under X-windows.

Text following the percent sign (%) is an explanatory comment. You need

not enter it.

»figure(1) % open a graphics window labeled Figure 1

»figure(2) % open a graphics window labeled Figure 2

»x=0:0.1:2*pi; % define x first (the range is assumed here)

»plot(x,sin(x)) % plot sin(x)

»hold % keep the graph of sin(x)

»plot(x,cos(x),’go’) % add graph of cos(x) with line type ’go’

»title(’My first plot’) % put some title

»xlabel(’x-axis’) % label the x-axis

»ylabel(’y-axis’) % label the y-axis

»print -deps fig1.eps% produce a postscript copy in file fig1.eps

»!ls -al fig1.eps % verify that the figure is saved in a file

»!xpreview fig1.eps % use the postscript previewer of AIX (optional)


Figure window creation and control

figure Create Figure (graph window)

gcf Get handle to current figure

clf Clear current figure

close Close figure

Axis creation and control

subplot Create axes in tiled positions

axes Create axes in arbitrary positions

gca Get handle to current axes

cla Clear current axes

axis Control axis scaling and appearance

caxis Control pseudocolor axis scaling

hold Hold current graph

Handle Graphics objects

figure Create figure window

axes Create axes

line Create line

text Create text

patch Create patch

surface Create surface

image Create image

uicontrol Create user interface control

uimenu Create user interface menu

Handle Graphics operations

set Set object properties

get Get object properties

reset Reset object properties

delete Delete object

drawnow Flush pending graphics events

newplot M-file preamble for NextPlot property

Hardcopy and storage

print Print graph or save graph to file

printopt Configure local printer defaults

orient Set paper orientation

capture Screen capture of current figure

Movies and animation

moviein Initialize movie frame memory

getframe Get movie frame

movie Play recorded movie frames

Miscellaneous

ginput Graphical input from mouse

ishold Return hold state

whitebg Set graphics window defaults for white background

graymon Set graphics window defaults for gray-scale monitors


»figure(1) % make figure 1 the current figure

»close(1) % close window 1

»gcf % get current figure (should be 2)

»close(2) % close window 2

In this exercise you produced the data from within MATLAB. If you have

columns of data in a file, you can read them into MATLAB and plot them

as above. The postscript file produced in the above example can be

merged with other documents or printed on a postscript printer. Use

help print to find out about support for other types of printers and plot-

ters.

The Control system toolbox, which uses MATLAB matrix functions, was

built to provide specialized functions in control engineering. The Con-

trol system toolbox is a collection of algorithms, expressed in m-files,

that implement common control system design, analysis, and modeling

techniques.

Dynamic systems can be modeled as transfer functions or in state-

space form. Both continuous-time and discrete-time systems are han-

dled. Conversions between various model representations are possible.

Time responses, frequency responses, and root-locus measures can be

computed and plotted. Other functions allow pole-placement, optimal

control, and estimation.

The following example shows the use of some of the control system

design and analysis tools available in MATLAB.

Example

G = 1 / [(s + 1)(s + 2)(s + 3)]

Enter the numerator and the denominator coefficients separately as follows:

» num = 1;

» den1 = [1 1];

» den2 = [1 2];


» den3 = [1 3];

The function for convolution, conv, is used to obtain the polynomial product:

» den = conv(den1,conv(den2,den3));

To obtain the step response, the function step can be used:

» y = step(num,den,t); % t: equally spaced data in the range 0-5

» plot(t,y,’*’); % generate step response and plot

The frequency response can be obtained by defining a frequency vector

w (logarithmically spaced data in the range 10^−1 to 10^1) and then

using the function bode:

» [mag,phase] = bode(num,den,w);

The bode plots for amplitude ratio and phase can be obtained by typing:

» loglog(w,mag)

» semilogx(w,phase)

The root locus can be obtained by defining a range of gains k and then using the function rlocus:

» y = rlocus(num,den,k); % k: gains in the range 20-70

» plot(y,’*’) % generate and plot the root-locus

The closed-loop transfer function can be represented by:

Y/Ysp = Gc Gp / (1 + Gc Gp)

This closed-loop system can then be analyzed using the same functions used for the open-loop system.

Discretization can only be done through the state-space model represen-

tation. Therefore, it is necessary to transform transfer function models

to state-space models. The transfer function model can easily be trans-

formed into the state-space model by using the function tf2ss:


» [A,B,C,D] = tf2ss(num,den);

where A, B, C, D are matrices in the differential equations dx/dt = Ax + Bu

and y = Cx + Du. To obtain a discretized model, the function c2d is

used:

» [ad,bd] = c2d(A,B,Ts);

This discretizes the model with sampling time Ts, assuming a zero-order

hold on the input. To obtain a step response

on the discretized model, the function dstep can be used:

» y = dstep(ad,bd,C,D,1,100);

» plot(y),title(’step response’);

The available functions are listed in Table B.5. The online help screen should be referred to for

information on how to use these tools. The function what can be used

to find out what other functions are available.

To keep a record of your work, you can log the entire session in a file

with the diary command. The command

» diary file

will start recording every keyboard entry and most of the computer's tex-

tual response (not graphics) in file. To suspend the recording, use

» diary off

and to resume recording, use

» diary on

The file contains simple text (ASCII) and can be printed on the network

printer.


Functions for model conversion

[num, den] = ss2tf(a, b, c, d, iu) State-space to transfer function

[z, p, k] = ss2zp(a, b, c, d, iu) State-space to zero-pole

[a, b, c, d] = tf2ss(num, den) Transfer function to state-space

[z, p, k] = tf2zp(num, den) Transfer function to zero-pole

[a, b, c, d] = zp2ss(z, p, k) Zero-pole to state-space

[num, den] = zp2tf(z, p, k) Zero-pole to transfer function

[ad, bd] = c2d(a, b, Ts) Continuous to discrete

[a, b] = d2c(ad, bd, Ts) Discrete to continuous

Functions for modeling

append Append system dynamics

connect System interconnection

parallel Parallel system connection

series Series system connection

ord2 Generate A,B,C,D for a second order system

Continuous time and frequency domain analysis

impulse impulse response

step Step response

lsim Simulation with arbitrary inputs

bode Bode and Nichols plots

nyquist Nyquist plots

Discrete time and frequency domain analysis

dimpulse Unit sample response

dstep Step response

dlsim Simulation with arbitrary inputs

dbode Discrete Bode plots


MATLAB derives its strength and wide popularity from being extensi-

ble through the m-file facility. Extensibility means that using a core set

of built-in functions, users can extend the capabilities of MATLAB by

writing their own functions. The functions are stored in files with the

extension “.m”. Any file with the extension “.m” in the search path of

MATLAB is treated as a MATLAB m-file. To find out the current path of

MATLAB enter,

» path

You can list the contents of a m-file with the type command. While the

help command produces only documentation on the function, the type

command produces a complete listing of the function. Try,

» type sin

» help sin

» type erf

» help erf

Note that “sin” is a built-in function and hence no code is listed. On the

other hand “erf” is the error function implemented as a m-file and hence

a complete listing is produced.

The m-files can take two forms - viz. (i) a script file and (ii) files that

define entirely new functions. Such files should be in the MATLAB search

path.

In a script file, you can put any MATLAB commands that you would

normally enter in an interactive session. Simply entering the name of

the file would then execute the contents of that file. For example to

enter a large matrix, create a file called "A.m" in your home directory

using your favorite editor. This file should contain the following text.

B = [ 1 2 3 4 5 6 7 8 9;

2 3 4 5 6 7 8 9 0;

3 4 5 6 7 8 9 0 1;

4 5 6 7 8 9 0 1 2;

5 6 7 8 9 0 1 2 3]

b=sin(B)

Save the file and then, at the MATLAB prompt, enter

»A

Note that a matrix variable "B" of size (5 × 9) has been defined in your

workspace and the variable "b" contains the values of sin(B).

In a script file you can include any such sequence of valid MATLAB

commands, including program flow control commands like for, if,

while loops etc. However a script file is not a function file and hence

you cannot pass any arguments to the script. Also, when you execute

a script file from the workspace, all of the variables defined in a script

file become global variables. In contrast any variable defined within a

function file is local to that function and only the results returned by the

function become global in nature.

Consider, as an example, the isothermal flash equation, given by,

f(ψ) := Σ_{i=1}^{N} z_i (1 − K_i) / (1 + ψ(K_i − 1)) = 0

In this equation, (z_i, K_i) are known vectors of length N and ψ is the

unknown scalar variable. So we should like to write a function, say,

flash(psi) that would return the value of f (ψ). This function should be

saved in a file named "flash.m" in your home directory. Such a function

might look as follows:

function f=flash(psi)

% Calculates the flash equation. Usage: f=flash(psi)

% K is a vector of any length of equilibrium ratios

% z is the feed composition (same length as K)

% K, z are known.

% psi is the vapor fraction

% This is the last line of help. Notice the blank line below!

global K z

f=((1-K).*z) ./ (1+(K-1)*psi);

f=sum(f);

Let us understand the anatomy of this function. The first line should

always contain the keyword "function" in order to identify it as a function definition and not a script file. Then a list of values computed and

returned by the function should appear - in the present case only "f" is

being returned. If you have more variables being returned you would

list them as "[f1, f2, f3]" etc. Next, the equal sign is followed by the

name of the function. Then the list of input variables is given in parentheses. (Note that the file name is constructed by appending ".m" to the function name; in the above example the file name will be flash.m.) The next several lines begin with the percent sign (%) and hence are treated as comments. Here is the place to put the documentation on what the function does and how to use it. This is also the part that is printed out when a user asks for help on this function. A blank line signifies the end of the help message. The actual code follows the blank line. Notice the use of element-by-element multiplication of two vectors, which avoids the use of do loops. How elegant!

Assuming that you have created a file called "flash.m" containing

the above lines, work through the following steps.

» help flash

» type flash

» global K z

» z=[.25 .25 .25 .25]

» K=[1 .5 .4 .1]

» whos

» flash(0.1)

Next, let us modify the function so that it will take in a vector of ψ values and return the corresponding function values!

If you know any one high level programming language such as FORTRAN,

C or even BASIC, you should have no difficulty in understanding the ele-

mentary program flow control features of MATLAB. A list of help topics

is given in Table B.6. Let us take the example of "flash.m" and illustrate

the use of "if" and "for" constructs. First we check if the length of vec-

tors K, z are the same; if not we generate an error message. Note that

length and error are built-in MATLAB functions. In the next section

we determine the length of input vector "x" and build a loop to calculate

the function for each element of "x" and store it in the corresponding

element of "f". Use "help relop" and "help lang" to find out more about

relational operators and programming language features.


Example

function f=flash(x)

% K is a vector of any length of equil ratios.

% z is the feed composition (same length as K)

% K, z are defined as global in main

% x is the vapor fraction

% The following is the isothermal flash eqn.

global K z

if ( length(K) ~= length(z) )

error(’Number of K values & compositions do not match’)

end

n=length(x); % determine the number of elements in x

for i = 1:n %set up a loop for each element of x

t=((1-K).*z) ./ (1+(K-1)*x(i));

t=sum(t);

f(i) = t;

end

Version 4.0 of MATLAB provides for the first time some debugging tools.

If you are familiar with the debugging concepts, use of this facility should

be straightforward. Basically it is a tool for debugging new functions

that a user develops. It provides tools for the following:

setting breakpoints within a function so that execution is suspended and the control returned to the user (dbstop functionname), clearing breakpoints (dbclear), resuming or single-stepping through execution (dbcont, dbstep), and examining the values of local variables while execution is suspended.


MATLAB as a programming language

script About MATLAB scripts and M-files

function Add new function

eval Execute string with MATLAB expression

feval Execute function specified by string

global Define global variable

nargchk Validate number of input arguments

Control flow

if Conditionally execute statements

else Used with IF

elseif Used with IF

end Terminate the scope of FOR, WHILE and IF statements

for Repeat statements a specific number of times

while Repeat statements an indefinite number of times

break Terminate execution of loop

return Return to invoking function

error Display message and abort function

Interactive input

input Prompt for user input

keyboard Invoke keyboard as if it were a Script-file

menu Generate menu of choices for user input

pause Wait for user response

uimenu Create user interface menu

uicontrol Create user interface control

Debugging commands

dbstop Set breakpoint

dbclear Remove breakpoint

dbcont Resume execution

dbdown Change local workspace context

dbstack List who called whom

dbstatus List all breakpoints

dbstep Execute one or more lines

dbtype List M-file with line numbers

dbup Change local workspace context

dbquit Quit debug mode


Since all the variables within a function are treated as local variables,

their values are not available in the workspace. To examine their values

within a function, you have to be able to stop the execution at a spec-

ified line within a function and examine the values of local variables.

The following exercise illustrates the debugging process using the flash

function developed earlier. So we assume that a "flash.m" file exists in

your current directory.

»z=[.25 .25 .25 .25]; % define z

»global K z % define K z to be global variables

»whos % examine current variables

»dbstop flash % set up break point in flash

»psi=[.2:.2:.8] % define psi (a vector)

»flash(psi) % begin execution of flash; it will stop at the first executable line (line 8 here)

8 global K z

K»dbtype flash % list the function with line numbers

1 function f=flash(x)

2 % K is a vector of any length of equil ratios.

3 % z is the feed composition (same length as K)

4 % K, z are defined as global in main

5 % x is the vapor fraction

6 % The following is the isothermal flash eqn.

7

8 global K z

9 if ( length(K) ~= length(z) )

10 error(’Number of K values & compositions do not match’)

11 end

12 n=length(x)

13 for i = 1:n

14 t=((1-K).*z) ./ (1+(K-1)*x(i));

15 t=sum(t);

16 f(i) = t;

17 end

K»dbstop 12 % set up a new break point at line 12

K»dbcont % resume execution from line 8; stops before executing line 12

12 n=length(x)

K»x % examine the value of x


K»dbstep % execute one line and stop at line 13

13 for i = 1:n

K»n % now display value of n (should be 4)

K»dbstatus flash % display the break points.

K»K % display the value of K

K»z % display the value of z

K»dbcont % resume execution till the end of function.

K»dbquit % terminate debugging of flash

This exercise shows how you can examine the values of variables and step through execution one line at a time.

Angling may be said to be so like the mathematics,

that it can never be fully learnt.

— IZAAK WALTON

So is UNIX.

— K. Nandakumar

Appendix C

In a command-line oriented environment, the shell (which is a program or a process) accepts a command from the keyboard,

passes it to the operating system for execution, prints out any error or in-

formational messages generated by the command and displays a prompt

string indicating the readiness to accept another command. There are

several shells available under AIX. The Korn shell or ksh is one of the

most powerful shells and is the default shell on the AIX machines main-

tained by the department of chemical engineering.

In the GUI oriented environment, the equivalent of a command shell

is the desktop which organizes various tools and application programs

such as file manager, program manager, printer manager, etc. as objects

accessible via icons. The interaction takes place through dialogue boxes

and forms that must be completed. Program execution begins simply by

double clicking on the appropriate icons.

If you have a good reason to change your default shell to something

other than ksh, you can do so with the chsh command,

user@machine:dir> chsh

This command will display your current shell, and prompt you for the


C.1. INTRODUCTION TO THE SHELL AND THE DESKTOP C.2

name of the new shell. The change takes effect when you login the next

time. At any time you can invoke a new shell, different from the login

shell, e.g.,

user@machine:dir> csh

invokes a C-shell.

You can invoke a desktop at any time on AIX machines by entering

user@machine:dir> xdt3

Since the use of the desktop is supposed to be rather intuitive, you are encouraged to explore its features on your own!

The ".profile" file is used to customize the shell environment for ksh.

The following is a typical example of a ".profile" file.

PATH=$PATH:$HOME/bin:$KHOROS_HOME/bin

HOSTNAME=‘hostname‘

PS1=’ $LOGNAME@$HOSTNAME:$PWD>’

EDITOR=emacs

alias ls=’ls -al’

alias rm=’rm -i’

export PATH HOSTNAME PS1 EDITOR TERM

In this example several environment variables have been defined. The concept of the environment is like a bulletin

board. You can post definitions of any number of variables there. Ap-

plication programs that expect specific variables can look for them and

use their values, if they are defined. Note that when you set the value of

a variable ( i.e., left hand side of an equal sign), the name is used without

any prefix. When you want to retrieve the value the $ prefix is used. For

example try,

user@machine:dir> echo $PATH

In the first line of the example above, the variable $PATH, which is already defined, is redefined to add additional

paths, such as $HOME/bin separated by colon. Observe the ’$’ prefix to

the name of the environment variable. $HOME itself is an environment

variable, containing the value of the home directory. In the 2nd line the

variable HOSTNAME is defined to contain the name of the workstation.

This name is actually retrieved by the program ‘hostname‘. The 3rd line

redefines the prompt string PS1 using other variables such as LOGNAME,

HOSTNAME and PWD. These variables contain, respectively, the values of

the userid, machine name and present working directory. If the variable

EDITOR is set to emacs, then command line editing features using emacs

keys are enabled.

You can also define aliases for certain commands. In the example

above, the "ls" string is defined to be ’ls -al’ - so when you enter "ls" at

the prompt, the command ’ls -al’ is executed. To examine all the cur-

rently defined aliases, enter,

user@machine:dir> alias

By default, the "rm" command removes files without prompting you for

confirmation which could result in accidental deletion of files. The alias

defined above, assigns ’rm -i’ to "rm". The keyword "-i" stands for inter-

active mode and hence you will always be prompted before removing a

file.

The variables defined in a shell environment are available only to that

shell environment and not to other shells that you may start from the

current one. The export command is used to export the variables to all

subsequent shells. The last line in the above example exports several

environment variables.
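The difference between a plain shell variable and an exported one can be seen directly at the prompt. A minimal sketch (the variable name GREETING is arbitrary):

```shell
GREETING=hello                            # a plain shell variable, local to this shell
sh -c 'echo "child sees: [$GREETING]"'    # a child shell does not see it: prints "child sees: []"
export GREETING                           # post it on the environment "bulletin board"
sh -c 'echo "child sees: [$GREETING]"'    # now the child inherits it: prints "child sees: [hello]"
```

Only after the export does the variable reach commands started from this shell.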

To look at all of the environment variables defined in the current ksh

shell, enter,

user@machine:dir> set

user@machine:dir> DUMMY=junk

C.2. MANAGING FILES C.4

The shell also allows programming flow control features. Thus one can write quite

powerful scripts to execute a complex sequence of commands. A script

is nothing but a set of AIX instructions placed in a file. By enabling the

execute permission for this file, and entering its name from the command

line you can cause the instructions in that file to be executed.

In managing your files and directories, you need to be able to list the

contents of a directory or file, copy and move files, compress and uncom-

press files, create and delete files and directories, control the ownership

and access to files etc. Commands to carry out these tasks are illustrated

below with specific examples. Try them out at a terminal. To get a com-

plete description of each command use the man pages i.e.,

user@machine:dir> man command

The ls command produces a listing of all the files in the current direc-

tory. In its most useful form, you will use the “-al” keywords, i.e.,

user@machine:dir> ls -al

Typically, files that begin with the “.” ( e.g., .profile) are treated as hidden files. The keyword “-a” however lists all of the files including the hidden ones. The keyword “-l” produces the long listing, a sample of which

is shown in figure C.1. This listing provides information on file access

control, ownership, size, time stamp etc. Each line contains information

for a file or directory. The first character identifies whether it is a file (-),

a directory (d) or a symbolic link (l). A symbolic link is a pointer to some

other file (think of it as an alias). The next set of nine characters iden-

tify the file access control, in groups of three. Since AIX is a multiuser

environment, users can control ownership and access of their files to

others. The possible access modes are: read (r), write (w) execute (x) or

none(-). These modes apply to (user, group, others). The groups are es-

tablished by the system administrator. The owner and group names are

listed next, followed by file size in bytes, the time stamp for last change

and the file name.


Figure C.1: A sample long listing produced by “ls -al”. Each line shows the permission control, the owner, the group, the file size, the time stamp and the file name.

drwxr-sr-x 27 kumar sys 1536 May 24 23:14 .

drwxr-sr-x 59 sys sys 1536 May 13 08:52 ..

-rw-r--r-- 1 kumar others 1937 Jan 07 11:47 .Xdefaults

drwx------ 2 kumar others 512 Jul 21 1992 .elm

-rw-r--r-- 1 kumar sys 2504 May 19 12:08 .mwmrc

-rwxr-xr-x 1 kumar sys 610 May 04 12:36 .profile

-rw------- 1 kumar sys 348 May 14 12:22 .rhosts

drwxr-xr-x 3 kumar others 512 Jul 21 1992 .tin

-rw-r--r-- 1 kumar sys 136 May 11 14:11 .xdt3

-rw-r----- 1 kumar others 1222 Jan 19 1992 Ass1.m

drwxr-xr-x 2 kumar others 512 May 19 13:12 CHEM2

drwx------ 2 kumar others 512 May 27 1992 Mail


r read permission

w write permission 1st set applies to owner

x execute permission 2nd set applies to group

- no permission 3rd set applies to all

d indicates a directory

l indicates a symbolic link

Examples:

The file .profile has (read,write,execute) permission for owner (kumar in this case) and

(read,execute) permission for both the group (sys in this case) and everyone.

The command

chmod g+r file

will give read access to group for file, while

chmod o-w file

takes away write permission from others for file

Other related Unix commands

ls -al - detailed listing of directory such as the above

chmod - change permission on files and directories

chown - change ownership of files and directories

rm - remove or delete a file

rmdir - remove or delete a directory

mkdir - create a new directory


The chmod command allows you to modify the access control of files and

directories.

Examples

To give the group read access to a directory and everything below it, use,

user@machine:dir> chmod -R g+r dir

Note that the "-R" flag stands for recursive use of the command for all files in all subdirectories. To be able to access the files in a directory, the execute permission at the directory level must be set.
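The effect of such mode changes can be verified with "ls -l". A small sketch in a scratch directory (the file name demo.txt is arbitrary):

```shell
cd "$(mktemp -d)"              # work in a throwaway directory
touch demo.txt
chmod 644 demo.txt             # read/write for owner, read-only for group and others
chmod g-r demo.txt             # take read permission away from the group
ls -l demo.txt                 # the mode field now reads -rw----r--
chmod g+r demo.txt             # restore it
```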

The mv (move) command moves files and directories from one directory

to another, or renames a file or directory. You cannot move a file onto

itself.

Warning: The mv command can overwrite many existing files unless you

specify the -i flag. The -i flag prompts you to confirm before it overwrites

a file.

Examples

user@machine:dir> mv oldname newname

This renames the file oldname to newname. If the file newname already exists, its contents are replaced with those of oldname.

user@machine:dir> mv olddir newdir

This moves all files and directories under olddir to the directory named newdir, if newdir exists. Otherwise, the directory olddir is renamed to newdir.
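Both behaviors of mv (renaming and moving into an existing directory) can be checked in a scratch directory; the file and directory names below are arbitrary:

```shell
cd "$(mktemp -d)"
echo data > oldname
mv oldname newname             # renames the file; oldname no longer exists
cat newname                    # prints: data
mkdir somedir
mv newname somedir             # moves the file into the existing directory
ls somedir                     # prints: newname
```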

The cp command creates a copy of the contents of the file or directory

from a source to a target. If the file specified as the target exists, the

copy writes over the original contents of the file. If you are copying more

than one source file, the target must be a directory.

Examples

user@machine:dir> cp file.old file.new

If file.new does not already exist, then the cp command creates it. If it does exist, then the cp command replaces it with a copy of the file.old file.

user@machine:dir> cp file1 file2 /home/user/dir2

This copies the files file1 and file2 into the directory /home/user/dir2/. As a variant, explore the "-R" flag to copy not only all of the files, but also all of the subdirectories.
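A sketch of the "-R" flag in action (the directory names src and dst are arbitrary):

```shell
cd "$(mktemp -d)"
mkdir -p src/sub               # build a small directory tree
echo data > src/file1
cp -R src dst                  # copies the files and the subdirectories recursively
ls dst                         # shows both file1 and sub
```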


The chown command changes the owner of the file specified by the File

parameter to the user specified by the Owner parameter. The Owner

parameter can be specified either as a user ID or as a login name found

in the /etc/passwd file. Optionally, a group can also be specified. The

group can be specified either as a group ID or as a group name found in

the /etc/group file. The syntax is,

user@machine:dir> chown Owner[:Group] File

The compress command reduces the size of files using adaptive Lempel-

Zev coding. Each original file specified by the file parameter is replaced

by a compressed file with a ".Z" appended to its name. The compressed

file retains the same ownership, modes, and access and modification

times of the original file. If compression does not reduce the size of a

file, a message is written to standard error and the original file is not

replaced. The syntax is,

user@machine:dir> compress file(s)

Also try the GNU version of compress utility called gzip and gunzip -

they are more efficient in both speed and size.
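A quick round trip with the GNU utilities mentioned above (assuming gzip is installed; the file name is arbitrary):

```shell
cd "$(mktemp -d)"
seq 1000 | sed 's/.*/some repetitive text/' > big.txt   # make a highly compressible file
gzip big.txt                   # replaces big.txt with big.txt.gz
ls                             # only big.txt.gz remains
gunzip big.txt.gz              # restores the original file and name
wc -l big.txt                  # still 1000 lines
```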

The rm command removes the entries for the specified file or files from a

directory. If an entry is the last link to a file, the file is then deleted. You

do not need read or write permission for the file you want to remove.

However, you must have write permission for the directory containing

that file.

Examples

C.3. MANAGING PROCESSES C.9

user@machine:dir> rm myfile

If there is another link to this file, then the file remains under that

name, but the name myfile is removed. If myfile is the only link,

the file itself is deleted. Caution: You are not asked for confir-

mation before deleting the file. It is useful to set an alias in your

".profile" file to redefine "rm" as

alias rm=’rm -i’

After each file name is displayed, enter "y" to delete the file, or

press the Enter key to keep it.
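The link behavior described above can be demonstrated with the ln command (the file names are arbitrary):

```shell
cd "$(mktemp -d)"
echo keep > myfile
ln myfile otherlink            # create a second hard link to the same file
rm myfile                      # removes only the name "myfile"
cat otherlink                  # the data survives under the other name: prints "keep"
```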

AIX is a multitasking operating system, i.e., several processes can be running at the same time. So we need a set of tools to monitor the

currently running processes and the resources they consume, suspend

or terminate specific processes, set priority for certain tasks or schedule

some tasks for execution at specified times. Commands to accomplish

these tasks are illustrated next.

The ps command displays a set of currently running tasks. In its sim-

plest and most useful form, the command is,

user@machine:dir> ps -ael

This provides a long listing of all the currently running processes in-

cluding all of the daemons started by the root at the time of booting the

computer. A typical sample output might look like,

200001 R 21 15101 17095 13 66 20 196d 116 pts/0 0:00 ps

240801 S 21 17095 16070 3 61 20 1dce 108 pts/0 0:00 ksh

260801 S 0 3637 3112 0 60 20 b25 260 - 0:00 sendmail

260801 S 0 12169 1 0 60 20 8a5 152 hft/0 0:00 lmgrd

222801 S 0 12938 12169 0 60 20 16aa 352 hft/0 0:05 CFDSd

40801 S 0 10342 8542 0 60 20 357a 196 - 0:11 nfsd

The process name (or the command name) is shown in the last col-

umn. Other useful parameters are the process identification number

(PID), the nice value (NI) which determines the priority of the process,


and the cpu time (TIME) used up by the task. In the above example

listing, sendmail is the mail program, lmgrd is the license manager dae-

mon, CFDSd is the license server for FLOW3D program, nfsd is the NFS

daemon; all of these tasks are run by root with a user identification num-

ber (UID) of 0. Note that the ps command itself is a task.

If you started a process like "emacs" or "matlab" and you want to sus-

pend that task and return to the shell you can do so with the key se-

quence

user@machine:dir> ctrl-z

The PID number is displayed at that time. Even if you did not note it

down, you can find a list of all suspended jobs with the command

user@machine:dir> jobs

To resume a suspended job, enter

user@machine:dir> fg %n

where n is the job number produced by the jobs command (and not the

PID number!). The "fg" command brings a job to the foreground.

If you started a process in error and want to terminate it, you can use the

"kill" command. You need to find out the PID number of the process

using the "ps" command.

Except for the super user (or root), one can terminate only those pro-

cesses that belong to (or initiated by) individual users.
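A sketch of finding and terminating one of your own processes, here a harmless sleep started in the background:

```shell
sleep 60 &                     # start a background process
pid=$!                         # $! holds the PID of the last background job
ps -p "$pid"                   # confirm that it is running
kill "$pid"                    # send it the default terminate signal
```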

The nohup command runs a command or a script ignoring all hangups

and QUIT signals. Use this command to run programs in the background

after logging off. To run a nohup command in the background, add an

& (ampersand) to the end of the command. Its syntax is:

user@machine:dir> nohup command &

When used in its simplest form as above, any output that would normally

appear on the screen will be saved in a file named nohup.out in the

current directory. Wait before logging off because the nohup command

takes a moment to start the command or script you specified. If you

log off too quickly, your command or script may not run at all. Once

your command or script starts, logging off does not affect it. Note that

in order to run a script, the script file must have execute permission.
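A minimal sketch of the whole procedure (script and log names are arbitrary; the output is redirected explicitly rather than relying on the default nohup.out):

```shell
cd "$(mktemp -d)"
printf 'echo "job started"\nsleep 1\necho "job finished"\n' > job.sh
chmod +x job.sh                          # the script must have execute permission
nohup ./job.sh > job.log 2>&1 &          # keeps running even after you log off
wait                                     # (here we simply wait for it to finish)
cat job.log                              # shows both messages
```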

To schedule a job for execution at a later time (say, at night), use the at command. The job should be constructed in the form

of a script file. For example a file named test.bat contains the following

lines and has execute permission enabled with the chmod command.

matlab > out <<eof

secant(’ass3a’,[0,1],eps,1)

fzero(’ass3a’,0.5,eps,1)

quit

eof

In the above script we start MATLAB in the first line and redirect any

output generated by MATLAB for the standard output ( i.e., screen during

an interactive session) to a file named out. During an interactive session,

MATLAB expects commands from the standard input ( i.e., the keyboard).

Such inputs are now taken from the script file itself as seen in the next

few lines where we execute some MATLAB functions and finally quit

MATLAB.
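The redirection pattern used in test.bat is the generic shell "here document": any program that reads commands from standard input can be driven this way. A sketch using the shell itself in place of MATLAB:

```shell
sh <<eof > out                 # feed the lines up to "eof" to sh; capture output in "out"
echo "hello from a here document"
exit
eof
cat out                        # prints: hello from a here document
```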

The contents of such a script file can be executed interactively while

logged in to a machine by simply entering the file name as

user@machine:dir> test.bat

It is possible to write very sophisticated scripts in the Korn shell or any

other shell. When you invoke MATLAB, for example, with the command

matlab, you are actually executing a powerful script. Browse through

the matlab script file using

user@machine:dir> pg /usr/local/matlab/bin/matlab

C.4. LIST OF OTHER USEFUL AIX COMMAND C.12

Once a script file has been constructed, you can schedule it to be ex-

ecuted at a specified time using the at command as follows

user@machine:dir> at 21:00 < test.bat

which will begin executing the script file at 21:00 hours.

listing of all the jobs scheduled use,

user@machine:dir> at -l

To remove a job that you have accidentally submitted, you can use,

user@machine:dir> at -r jobnumber

A list of less frequently used AIX commands is given in Table C.1. You

can use either the man page feature with

user@machine:dir> man command

or the

user@machine:dir> info

command which starts the InfoExplorer to find out about the syntax and

usage of these and other commands. The directory /usr/bin contains

all of the Unix commands.


command Function

at to schedule a task to start at a given time

cat to list a file

cd to change directory

diff compare two files

dosformat formats a floppy diskette using MS-DOS standards

dosread copies a DOS file from a floppy

doswrite copies a unix file to a DOS formatted floppy

find find a file

info InfoExplorer - online documentation

ksh start a Korn shell

make a powerful UNIX make facility

mail read mail

mkdir create a directory

man display online manual pages

logout logout of current AIX session

lpq list the queue of print jobs

lpr send a print job to a network printer

lprm remove a print job from a queue

nice control job priority

nohup Don’t kill a process upon logout

pg display a file, one page at a time

ping to check if another machine is alive

pwd display present working directory

rlogin remote login to another machine

rcp remote copy files from one host to other

need to have ".rhosts" file setup

rm remove (delete) files

rmdir remove directories

rsh execute a command on a remote machine

need to have ".rhosts" file setup

rusers list remote users in the local area network

script logs a terminal session to a file

talk talk to another user currently signed on

tar archive files

telnet connect to remote hosts

whoami find out the current user

xinit start X-server

xlc c-compiler

xlC c++ compiler

xlf Fortran compiler
