Requirements and Policies
Requirements
➢ Mathematical background
➢ Programming skill
Policies
➢ Homework must be submitted on time
➢ Personal project must be done by yourself
➢ Altering your final report is prohibited
Evaluation Methods
Points of the Report Graded Contents
40 points Assignments
30 points Personal Project
30 points Group Project
Course Program
Weeks/dates Lecture Contents
Depending on the system being analyzed and the mathematical model used, the
governing equations may be a set of linear or nonlinear algebraic equations, a set of
transcendental equations, a set of ordinary or partial differential equations, a set of
homogeneous equations leading to an eigenvalue problem, or an equation involving
integrals or derivatives.
We may or may not be able to find the solution of a governing equation analytically.
If the solution can be represented in the form of a closed-form mathematical
expression, it is called an analytical solution.
Analytical solutions denote exact solutions that can be used to study the behavior of
the system with varying parameters.
Unfortunately, very few practical systems lead to analytical solutions, and hence
analytical solutions are of limited use (nonlinear governing equations, complex boundary geometry, and …).
The integral shown does not have a closed-form (analytical) solution; it can only be
evaluated numerically.
Since the integral is the same as the area under the curve f(x), its value can be
estimated by breaking the area under the curve into small rectangles and adding the
areas of the rectangles (see Fig. 1.1), i.e., by discretizing the function f(x) into a data table.
The abacus, developed in ancient China and Egypt about 3000 years ago, represents one
of the earliest computers.
The first systematic attempt to organize information processing resulted in the
development of logarithmic and trigonometric tables in the 16th and 17th centuries. The
slide rule, developed in 1654 by Robert Bissaker, was used for multiplication and
divisions, as well as for the evaluation of square roots, logarithms, and trigonometric
functions.
Stimulated by the industrial revolution, the French philosopher and mathematician Blaise
Pascal developed the first mechanical adding machine in 1642. Later, Gottfried
Wilhelm von Leibniz, a German philosopher and mathematician, built a mechanical
calculator in 1694. In 1804, Joseph Jacquard, a French loom designer, developed an
automatic pattern loom whose sequence of operations was controlled by punched cards.
The loom was used to produce intricate patterns and paved the way for the development of
mechanical computers. The British mathematician Charles Babbage designed an automatic
digital computer around 1833, but the machine, called the analytical engine, was never
built.
The basic ideas of Babbage were implemented in the electromechanical Automatic-
Sequence-Controlled Calculator (ASCC), also known as MARK I, which was developed
as a joint project between Harvard University and IBM. The first entirely electronic
universal calculator was built in 1945 at the University of Pennsylvania with the support of the
U.S. Army's Ballistic Research Laboratory. It used vacuum tubes (~1950) and was called
the Electronic Numerical Integrator And Computer (ENIAC). In 1950, there were
approximately 20 automatic calculators and computers in the United States, with a total
value of nearly $1 million. The first-generation computers, developed between 1950 and
1959, included machines such as the UNIVAC I and 1103, the IBM 701 and 704, and the ERA 1101.
The second-generation computers, produced between 1959 and 1963, were based on
ferrite-core memories and transistors (~1960) as circuit elements. CDC 1604 and 3600,
IBM 1401, 1620, 7040 and 7094, and PDP 1 represent some of the second-generation
computers.
The most powerful computer systems, also known as supercomputers, represent the
fifth-generation computers and were developed in the 1980s. Although mainframe
computers were popular, they were too expensive for individual professionals to acquire.
The development of integrated circuits (~1970; IC, LSI, VLSI) consisting of thousands
of transistors on tiny silicon chips led to the invention of the personal computer (PC), or
microcomputer (~1980). The PC has dominated the computer
industry and has slowly become part of everyday life.
As far back as January 1983, Time magazine selected the computer as its "Machine of the Year."
time.com/time/covers/0,16641,19830103,00.html
1.2.2 Hardware and Software
The user software is developed using high-level languages. High-level languages
permit the programmer to write instructions for the computer in a form that is similar to
ordinary English and algebra. If instructions are written in a high-level language, the
resulting program will be brief and independent of the machine. Thus, programs written
in high-level languages are portable.
The user programs, also called source codes, are not executed directly. A compiler,
which is part of the system software, translates the source code into a binary code
(machine language), which is then loaded into computer memory and executed. If
compilation of a source code is successful, it means that the translator understood the
structure and syntax used in the source code; it does not imply that the instructions given
are correct or can be executed. The binary code can be saved and used at a later time
without a need to recompile, but it can be executed only on the specific type
of computer on which it was compiled. On the other hand, source code written in a
particular high-level language can be compiled and executed on any computer that has a
compiler for that language.
1.3 Computer Programming Languages
Once the particular numerical method to be used for the solution of a given problem is
selected, the method is to be transformed into computer code, or a program, for
implementation on a specific computer using a high-level language. Some of the
high-level languages are
FORTRAN, COBOL, BASIC, Pascal, C, Ada, LISP, APL, and FORTH.
Pascal was developed in 1971 by the Swiss computer scientist Niklaus Wirth. The
language was named after the French mathematician Blaise Pascal (1623-1662), who
in 1642 attempted to construct a mechanical device to perform simple calculations (the
first mechanical adding machine). Pascal is a high-level language that is powerful and
easy to learn, and its syntax and organization tend to lead programmers to develop good
programs. The deficiencies of Pascal include a primitive I/O system and the
nonexistence of built-in interfaces to control the computer. To encourage compatibility
between compilers, Pascal was standardized by ANSI in cooperation with the International
Organization for Standardization (ISO) and the Institute of Electrical and Electronics Engineers
(IEEE).
Ada is named in honor of Augusta Ada Byron (1815-1852), the Countess of Lovelace
and the daughter of the English poet Lord Byron. Ada was the assistant of the
mathematician, Charles Babbage, who invented the calculating machine called the
Analytical Engine. She wrote a program for computing Bernoulli numbers for the
Analytical Engine (published in 1843). Because of this effort, Ada may be considered the
world's first computer programmer. Ada was developed under the initiative of the
United States Department of Defense with the aim of standardizing military software. It
is based on Pascal and was developed by a team of computer scientists led by Jean
Ichbiah of CII Honeywell Bull in 1980. Ada enforces rules that lead to the development
of more readable, portable, reliable, modular, maintainable, and efficient programs. Its
popularity is increasing due to the availability of a variety of software developed for the
military.
A variable, on the other hand, can be assigned different values (redefined) during
program execution. A variable, to which a value is assigned, can be of integer,
real, double-precision, complex, or logical type, or a character string. A variable
name in Fortran can consist of one to six alphabetic or numeric characters, and the
first character must be alphabetic.
1.4.3 Nonnumeric Data
Nonnumeric, or character, data consist of one or more characters that include the
letters 'A' through 'Z', the digits 0 through 9, symbols such as +, -, *, /, ., (, ), =, $, and the
blank space.
There are two universal coding systems (each character is coded as an integer):
In the Extended Binary Coded Decimal Interchange Code (EBCDIC), used by IBM,
8 bits (256 codes) are used to store a single character.
In the American Standard Code for Information Interchange (ASCII), used by
most other computing systems (PCs and workstations), a single character is stored in 7
bits (128 codes).
1.4.1 Numeric Data
Numeric data, in the form of numbers, are used to compute, count, and label.
The numbers can be real (floating-point) numbers, i.e., whole numbers with a fractional part,
  (a_1 a_2 ... a_n . b_1 b_2 ... b_m)_b,
which is equivalent to
  a_1 b^(n-1) + a_2 b^(n-2) + ... + a_n b^0 + b_1 b^(-1) + b_2 b^(-2) + ... + b_m b^(-m).
The subscript b denotes the number base, and the digits a_j can take any of the
integer values 0, 1, 2, ..., b-1: the digits a_j can take values from 0 to 1 for binary numbers (b = 2) and from 0 to 9
for decimal numbers (b = 10).
For example,
  I = (10010)_2 = (1)2^4 + (0)2^3 + (0)2^2 + (1)2^1 + (0)2^0 = 18,
and
  I = (1234)_5 = (1)5^3 + (2)5^2 + (3)5^1 + (4)5^0 = 194.
1.4.2 Conversion of a Decimal (base 10) Number to a Binary (base 2) Number
First, the number I is expressed as the sum of twice another number P_0 and a
constant a_0:
  I = 2 P_0 + a_0   (a_0 is 0 or 1).   (1.9)
The number P_0 is then written as the sum of twice another number P_1 and a
constant a_1:
  P_0 = 2 P_1 + a_1   (a_1 is 0 or 1).   (1.10)
The process is continued until a number P_{m-1} = 0 is obtained.
Thus, the procedure yields the sequences of numbers P_0, P_1, P_2, ..., P_{m-1} and
a_0, a_1, a_2, ..., a_{m-1}:
  I   = 2 P_0 + a_0;
  P_0 = 2 P_1 + a_1;
  P_1 = 2 P_2 + a_2;   (1.11)
  ...
Similar to Eq. (1.13), a binary fraction is given by the sum of negative powers of 2.
Thus, a fractional number R, 0 <= R < 1, can be expressed as
  R = (0.b_1 b_2 ... b_m)_2 = b_1 2^(-1) + b_2 2^(-2) + ... + b_m 2^(-m),   (1.14)
where b_1, b_2, ..., b_m are 0 or 1, and m is the number of digits required to represent
the number R.
To convert a decimal fraction R, 0 <= R < 1, into a binary fraction, we use Eq. (1.14):
  (R)_10 = (0.b_1 b_2 ... b_m)_2 = b_1 2^(-1) + b_2 2^(-2) + ... + b_m 2^(-m),   b_1, b_2, ..., b_m = ?
For this, the number R is doubled and separated into integer and fractional parts as
  2R = b_1 + (b_2 2^(-1) + ... + b_m 2^(-m+1)) = b_1 + f_1,   0 <= f_1 < 1,   (1.15)
where b_1 = integer part of (2R) = intg(2R) and f_1 = fractional part of (2R) = frac(2R).
The fractional part is again doubled and the result expressed as the sum of an integer
b_2 = intg(2 f_1) and a fraction f_2 = frac(2 f_1):
  2 f_1 = intg(2 f_1) + frac(2 f_1) = b_2 + f_2.   (1.16)
This process is continued until the fractional part f_i becomes zero. The process can
be summarized as follows:
  2R        = intg(2R)         + frac(2R)         = b_1 + f_1
  2 f_1     = intg(2 f_1)      + frac(2 f_1)      = b_2 + f_2
  ...                                                            (1.17)
  2 f_{i-1} = intg(2 f_{i-1})  + frac(2 f_{i-1})  = b_i + f_i   (until f_i = 0)
A 4-byte real number is stored internally as binary digits, as shown in Fig. 1.4, where
  bit 31 (1 bit) is used to store the sign (S = 1 if negative and S = 0 if positive),
  bits 24 through 30 (7? bits) are used to store the exponent of 16? increased by 64?,
  bits 0 through 23 (24? bits) are used to denote the magnitude or the fractional part,
so that the stored value reads
  x = +/- (0.d_1 d_2 d_3 ... d_p)_16? x 16?^(e - 64?)   [sign | exponent | fraction].
Here "?" marks quantities that are system dependent (the base-16, bias-64 scheme was the one
used on IBM mainframes; base-2 schemes are described below).
Numeric Model in Languages
For example, the real number indicated in Fig. 1.5 can be converted to its decimal
equivalent as follows:
  sign: +  (sign bit = 0)
  exponent: e = (10000001)_2 = 2^7 + 2^0 = 129
  fraction part: f = 0.5  (2^(-1), with f_1 = 1 for normalization, plus 0)
  interpretation: x = +0.5 x 2^(129-126) = +4.0

4-byte INTEGER (4):
  (0 0000000000000000000000000000100)_2 = 2^2 = 4
4-byte REAL (1082130432.0):
  (0 10011101 00000010000000000000000) -> x = f x 2^(e-126) = (2^(-1) + 2^(-8)) x 2^(157-126) = 1082130432.0

  |-- the 4th byte --| |-- the 3rd byte --| |-- the 2nd byte --| |-- the 1st byte --|
  31               24  23               16  15                8  7                 0
  S  e7 e6 e5 e4 e3 e2 e1 e0 | f2 f3 f4 f5 f6 f7 f8 f9 f10 f11 ... f24
  |-- 8-bit exponent --|  |------- 23 bits used to store the fraction part -------|
Sign bit:
  the 31st bit: value 0 for positive, value 1 for negative.
Exponent part (the values 255 and 0 are special):
  11111111: x is Infinite (all fraction bits = 0)    11111110: 254 - 126 = +128
  00000000: x is Zero     (all fraction bits = 0)    00000001: 001 - 126 = -125
  01111111: 127 - 126 = +1      01111110: 126 - 126 = 0
  01111101: 125 - 126 = -1      10101010: 170 - 126 = +44

4-byte, 32-bit (sign 1 bit, precision 23 bits, range 8 bits) REAL: x = +/- b^(e-126) f,
  b = 2;  1 <= e (8 bits, unsigned) <= 254, i.e., e_min = -125 <= e - 126 <= +128 = e_max;
  f = 0.5 + sum_{k=2}^{p} f_k b^(-k) with p = 24 and f_1 = 1 for normalization, so b^(-1) <= f <= 1 - b^(-p).

16-byte, 128-bit (sign 1 bit, precision 112 bits, range 15 bits) REAL: x = +/- b^(e-16382) f,
  1 <= e (15 bits, unsigned) <= 32766, i.e., e_min = -16381 <= e - 16382 <= +16384 = e_max;
  f = 0.5 + sum_{k=2}^{p} f_k b^(-k) with p = 113 and f_1 = 1 for normalization, so b^(-1) <= f <= 1 - b^(-p).
For some numbers R, infinitely many digits are required for representation as a
binary fraction. For example, the fractions 1/6 and 2/3 can be expressed as
  1/6 = (0.1666...)_10 = (0.0010101...)_2 = 2^(-3) + 2^(-5) + 2^(-7) + 2^(-9) + ...
  2/3 = (0.6666...)_10 = (0.101010...)_2 = 2^(-1) + 2^(-3) + 2^(-5) + 2^(-7) + ...
where the group of digits 01 (or 10) is repeated forever.
Note that even such nice terminating decimals as 0.1 do NOT have a terminating binary
representation. Example 1.2:
  (0.6)_10 = (0.10011001...)_2   (the group of four digits 1001 repeats forever),
  1/10 = (0.1)_10 = (0.000110011...)_2 = 2^(-4) + 2^(-5) + 2^(-8) + 2^(-9) + ...
1.5 Program Structure
For example, consider the Taylor’s series expansion of the function ln(1 x ) :
1
i 1
??? x 2 x3 x 4 x5 x6
y(x) ln(1 x ) =
i 1 i
x x ...;
i
2 3 4 5 6
x 1. (1.29)
x2 x3 xn (n)
y ( x ) y (0) xy (0) y (0) y (0) ... y (0) ...
2! 3! n!
For simplicity, let the function y ( x ) be approximated by the first four terms of
the Taylor’s series expansion.
The resulting discrepancy between the exact function y(x) and the approximate
1 1 1
function, y(x) x x 2 x 3 x 4 , is called the truncation error.
2 3 4
Truncation error E y(x) y(x
)
1.6.5 Round-off (Rounding) Error (Computer: REAL Representation)
Since only a finite number of digits can be stored in a computer, the actual
numbers may undergo chopping or rounding of the last digit. For example, let a number
in decimal form be given by
  x = 0.b_1 b_2 ... b_i b_{i+1} b_{i+2} ...,   where 0 <= b_j <= 9 for j >= 1.   (1.30)
(4) During addition or subtraction, the rounding off of the final result is done such that
the position of the last retained digit is the same as that of the most significant last
retained digit in the original numbers
that were added or subtracted.
Example 1.8
(5) During multiplication or division, the round-off of the final result is done such that
the number of significant digits is equal to the smallest number of significant digits
used in the original numbers.
Example 1.9
(6) During multiple arithmetic operations, the operations are performed one at a time
as indicated by the parentheses:
(multiplication or division) (multiplication or division)
(addition or subtraction) (addition or subtraction)
In each step of the operation, the results are rounded as indicated in guidelines 4
and 5 before proceeding to the next operation, instead of only rounding the final
result.
The propagation error is the error in the output of a procedure due to the error in the
input data.
To find the propagation error, the output of a procedure (f) is considered as a
function of the input parameters (x_1, x_2, ..., x_n):
  f = f(x_1, x_2, ..., x_n) = f(X),   (1.21)
where X = [x_1, x_2, ..., x_n]^T is the vector of input parameters.
If the true inputs x_i are perturbed to x_i*, a Taylor expansion gives
  f(x_1*, x_2*, ..., x_n*) = f(x_1, x_2, ..., x_n) + (df/dx_1)(X)(x_1* - x_1) + (df/dx_2)(X)(x_2* - x_2)
      + ... + (df/dx_n)(X)(x_n* - x_n) + higher order derivative terms.   (1.22)
By neglecting the higher order derivative terms, the error in the output can be
expressed as
  Delta f = f(x_1*, x_2*, ..., x_n*) - f(x_1, x_2, ..., x_n).   (1.23)
Denoting the errors in the input parameters as
  Delta x_i = x_i* - x_i,   i = 1, 2, ..., n,   (1.24)
we can estimate the propagation error (Delta f) as
  Delta f ~= sum_{i=1}^{n} (df/dx_i)(X) Delta x_i.   (1.25)
In relative form,
  Delta f / f ~= sum_{i=1}^{n} [ (x_i / f(X)) (df/dx_i)(X) ] (Delta x_i / x_i),   (1.26)
where eps_{x_i} is the relative error in x_i:
  eps_{x_i} = Delta x_i / x_i,   i = 1, 2, ..., n.   (1.27)
The quantity
  c_i = (x_i / f(X)) (df/dx_i)(X)   (1.28)
is called the amplification factor, or the condition number, of the relative input error eps_{x_i}.
The study of propagation error due to input errors is called error analysis. The
numerical procedure is said to be well conditioned if the condition numbers
(c_i) are reasonably bounded.
Round-Off (Machine) Errors: Representation/Computation/ Propagation
Real numbers like pi = 3.141592653589... and 1/3 = 0.33333333...,
which do not have terminating decimal (human) representations,
do not have terminating binary (computer) representations either, and thus
cannot be stored exactly using a finite number of bits.
Even such nice terminating decimals as 0.1 do not have a terminating binary
representation [(0.1)_10 = (0.000110011...)_2]. (program ROUND_1)
In fact, of all ten reals of the form 0.d_1, where d_1 is a digit, only 0.0 (d_1 = 0)
and 0.5 (d_1 = 5) can be represented exactly; 0.1, 0.2, 0.3, 0.4, 0.6, 0.7, 0.8, and
0.9 cannot:
  (0.1)_10 = (0.000110011...)_2,  (0.6)_10 = (0.100110011...)_2,  (0.9)_10 = (0.111001100...)_2
Only 4 of the 100 two-digit reals of the form 0.d_1 d_2 can be represented exactly,
namely 0.00, 0.25, 0.50, and 0.75; the remaining 96 two-digit reals cannot:
  (0.55)_10 = (0.100011...)_2,  (0.65)_10 = (0.101001...)_2,  (0.95)_10 = (0.111100...)_2
In general, the only real numbers that can be represented exactly in the computer's
memory are those that can be written in the form m / 2^k.
Round-Off Errors
The errors intrinsic to the nature of the computer itself happen because any
computer has finite precision (finite bits to store the digits).
Many floating-point (real) numbers cannot be represented exactly when the
representation uses number base 2 on digital computers.
As a result, these values must be approximated by one of the nearest
representable values;
the difference is known as the machine (computer) round-off error.

Cancellation-prone expressions can often be rewritten before evaluation, e.g.,
  1 - cos(x) = sin^2(x) / (1 + cos(x))   for x ~ 0.0,
and  -b + sqrt(b^2 - 4ac)  can be rationalized when b^2 >> 4ac.
Round-Off Errors
Consider the quadratic equation x^2 + 2ax - eps = 0, which has the two solutions
  x_1 = -a - sqrt(a^2 + eps)  and  x_2 = -a + sqrt(a^2 + eps).
If a > 0 and eps is small compared with a (eps << a), the root x_2 is expressed as
the difference between two almost equal numbers, and a considerable amount of
significance is lost. Instead, if we write
  x_2 = -a + sqrt(a^2 + eps)
      = (-a + sqrt(a^2 + eps)) (a + sqrt(a^2 + eps)) / (a + sqrt(a^2 + eps))
      = eps / (a + sqrt(a^2 + eps)),
we obtain the root, approximately eps/(2a), without loss of significance.
Suppose that, for a fairly large value of x, we know that cosh(x) = a and sinh(x) = b,
and that we want to compute e^(-x). Clearly,
  e^(-x) = cosh(x) - sinh(x) = a - b
leads to a dangerous (subtractive) cancellation while, on the other hand,
  e^(-x) = 1 / (cosh(x) + sinh(x)) = 1 / (a + b)
gives a very accurate result.
Round-Off Errors
An ill-conditioned (sensitive to round-off errors) linear system Ax = b:
  Math: small changes in A or b may produce large changes in x.
  Computation: difficult to solve accurately by a direct method;
  verifying the computed solution gives little information about its quality.
Verify the computed solution (x = 0.524659, y = 0.524780):
  (+3902.0)x + (-3903.0)y = -1.0        Exact solution is
  (-3903.0)x + (+3904.0)y = +1.0        x = 1.0 and y = 1.0.
Substituting,
  +(3902.0 x 0.524659) - (3903.0 x 0.524780) = 2047.219418 - 2048.216340 ~ -1.00
  -(3903.0 x 0.524659) + (3904.0 x 0.524780) = -2047.744077 + 2048.741120 ~ +1.00
so the residuals match the right-hand sides (Yes???) even though the computed
solution is far from the exact one.
Subtractive cancellation error is the major source of round-off errors.
Error Propagations
Consider the evaluation of the integral
  I_n = (1/e) int_0^1 x^n e^x dx.
A little manipulation (integration by parts) yields the recurrence formula (a marching problem)
  I_0 = 1 - 1/e,   and   I_n = 1 - n I_{n-1},   n = 1, 2, ...
Note that the integrand is always positive within the range of integration and that
the area under the curve, and hence the value of I_n, decreases monotonically with
n. Thus, for all n, we may deduce that I_n > 0 and I_n < I_{n-1}.
1.7 Numerical Methods Considered
The behavior of any physical or engineering system can be described by one or more
mathematical equation(s).
If the mathematical equations are simple (linear, simple geometric shape), the exact
(analytic) solution can be found in closed form.
Although closed-form solutions are most desirable, for most engineering problems
the equations are too complex for the exact solution to be found.
In such cases, numerical methods can be used to solve the mathematical equations
using arithmetic operations in order to understand the behavior of the system
(the solution is approximated in discrete form).
The various types of numerical methods discussed in this book are summarized next.
1.7.1 Solution of Nonlinear Equations (Chapter 2)
Many engineering problems involve the solution of one or more nonlinear equations. A
nonlinear equation may be in the form of an algebraic, transcendental, or polynomial
equation.
For example, the determination of the natural frequencies of a vibrating system, the
temperature of a heated body from an energy balance, the friction factor corresponding
to a turbulent fluid flow, and the transient current in an electrical circuit lead to
different types of nonlinear equations.
The analysis of many engineering systems, such as the vibration of structures and
machines, buckling of columns, and the dynamic response of electrical systems, requires the
solution of a set of homogeneous linear algebraic equations.
In many physical and engineering problems, numeric data are collected to understand
physical phenomena [discrete samples (x_1, x_2, x_3, ...), not a continuous function x(t)].
Then the principles of statistics and probability are used to analyze the data, develop
models, and predict the behavior of the system.
For example, if several values of an uncertain quantity, such as the wind load acting on
a building, are measured as (x_1, x_2, x_3, ...), they can be used to develop a histogram
and a probability distribution of the wind load, as shown in Fig. 1.11.
1.7.4 Curve Fitting and Interpolation (Chapter 5)
Given discrete (x, y) data points sampled from a function f(x), a polynomial P(x) is
fitted through (or near) the points; the polynomial can then stand in for the function, so that
  df(x)/dx ~= dP(x)/dx   and   int f(x) dx ~= int P(x) dx.
That is, after discretization, the function is rebuilt by a polynomial or a truncated Taylor series.
1.7.6 Numerical Differentiation (Chapter 7)
Ordinary differential equations arise in the study of many physical phenomena such as
dynamics, heat and mass transfer, current flow in electrical circuits, and chemical
reactions. In some cases, partial differential equations can be transformed to ordinary
differential equations.
In all these cases, the solution of a set of
one or more ordinary differential
equations is required under specified
initial (9) or boundary (10) conditions.
Although numerical solutions cannot provide an immediate insight into the behavior of
the simplified physical system, they can be used to
study the behavior of the true physical system.
Analytic solution: an exact, continuous mathematical function in closed form.
Numerical solution: approximate, discrete numeric data at specified points of the
independent variable.
1.7.9 Solution of Partial Differential Equations (Chapter 11)
For example, a partial differential equation involving the spatial coordinates x and y
as independent variables ("partial" means two or more independent variables) is given by
  d2T/dx2 + d2T/dy2 = f(x, y),
solved for the discrete unknowns T_ij,
  i = 1, 2, 3, ..., m;   j = 1, 2, 3, ..., n.
These equations represent a system of
linear algebraic equations that can be
solved easily.
1.7.10 Optimization (Chapter 12)
The analysis, design, and operation of many engineering systems involve the
determination of certain variables so as to minimize an objective (cost) function while
satisfying certain functional and economical constraints. The solution of such problems
requires the use of analytical or numerical optimization techniques.
If the equations for the objective and constraint
functions are available in closed form and are
simple, analytic methods of optimization can be
used for the solution of the problem. On the other
hand, if the objective and constraint equations
are complex or not available in closed form,
numerical methods of optimization can be used
for the solution of the problem.
For example, to find the minimum of a function
f(X) subject to the constraints g_j(X) <= 0,
j = 1, 2, ..., m, first a starting vector X_1 is
assumed. The vector is then iteratively improved
to find X_2, X_3, ..., and ultimately the optimum
vector X*, as shown in Fig. 1.15.
1.7.11 Finite-Element Method (Chapter 13)
In many engineering systems, the governing
equations will be in the form of partial differential
equations, and the solution domain will not be
regular.
These problems can be solved conveniently by
replacing the solution domain by several regular
subdomains or geometric figures, known as finite
elements, and assuming a simple solution in each
finite element.
When equilibrium and compatibility conditions are
enforced, the process leads to a system of algebraic
(or a system of ordinary differential) equations that
can be solved easily.
For example, the stresses induced in a plate with a
hole (Fig. 1.16(a)) can be found by modeling the
plate by a number of triangular elements as shown
in Fig. 1.16(b).
1.8 Software for Numerical Analysis
Only Maple is considered in this book for demonstrating the symbolic or numerical
solution of different types of practical engineering problems. The basic concepts of
Maple are summarized in Appendix C. (www.maplesoft.com/)
MathCAD is another software package that can be used for the solution of mathematical
problems, both numerically and symbolically. It also provides two- and
three-dimensional plots. A brief outline of the concepts of MathCAD is given in
Appendix E. (www.mathsoft.com/)
1.9 Use of Software Packages
To illustrate the use of the software packages MatLab, Maple, and MathCAD through a
simple example, the multiplication of the following matrices is considered:

        | 2   3  4 |         |  8  0 |
  A  =  | 1  -5  6 | ,  B =  |  2  7 | .
                             | -1  4 |
The result is

  C = A B = | 18   37 |
            | -8  -11 | .
Fortran and C programs are given for the multiplication of the following matrices:

        | 2   3  4 |         |  8  0 |
  A  =  | 1  -5  6 | ,  B =  |  2  7 | .
                             | -1  4 |
The result is

  C = A B = | 18   37 |
            | -8  -11 | .
1.10.2 C Program
1. Definitions
2. Determinant of a Matrix
3. Rank of a Matrix
4. Inverse Matrix
5. Basic Matrix Operations
1. Definitions
Row matrix: A matrix with 1 row and n columns is known as a row matrix or simply a
row vector. A row vector b with n elements is denoted as b = [b_1 b_2 ... b_n].
Identity matrix: If all the elements of a diagonal matrix are equal to 1, unity, then the
matrix is known as an identity matrix or unit matrix and is denoted as I .
Zero matrix: If all the elements of a matrix are equal to zero, then the matrix is called a
zero or null matrix and is denoted as 0 .
Transpose of a matrix: The transpose of an m x n matrix A is defined as the n x m
matrix obtained by interchanging the rows and columns of A, and is denoted as A^T.
For example,

        | 2  4  5 |            | 2  4 |
  A  =  | 4  1  8 | ,   A^T =  | 4  1 |   (6)
                               | 5  8 |

It can be noted that the transpose of a column matrix (vector) is a row matrix (vector), and
vice versa.
Trace: The trace of a square matrix is defined as the sum of the elements in the main
diagonal. For example, the trace of the n x n matrix A = [a_ij] is given by
  Trace A = a_11 + a_22 + ... + a_nn.   (7)
Symmetric matrix: A square matrix for which the lower left half can be obtained by
flipping the upper right half about the main diagonal is called a symmetric matrix. Thus, a
symmetric matrix A = [a_ij] satisfies the relation A = A^T, i.e., a_ij = a_ji.
For example,
        | r  a  b |
  A  =  | a  s  c |
        | b  c  t |
An antisymmetric (skew-symmetric) matrix satisfies A = -A^T (a_ij = -a_ji, with zeros on
the diagonal). For example,
        |  0   a  b |
  A  =  | -a   0  c |
        | -b  -c  0 |
Matrix decomposition: Any square matrix can be expressed as the sum of symmetric
and antisymmetric parts. Write A = B + C with B = (1/2)(A + A^T) and C = (1/2)(A - A^T). Then

                       | a_11                (1/2)(a_12 + a_21)  ...  (1/2)(a_1n + a_n1) |
  B = (1/2)(A + A^T) = | (1/2)(a_12 + a_21)  a_22                ...  (1/2)(a_2n + a_n2) |
                       | ...                                                             |
                       | (1/2)(a_1n + a_n1)  (1/2)(a_2n + a_n2)  ...  a_nn               |

which is symmetric, and C = (1/2)(A - A^T), with entries (1/2)(a_ij - a_ji) and zeros on
the diagonal, which is antisymmetric.
The value of a determinant can be determined in terms of its minors and cofactors.
The minor of the element aij of the determinant A of order n is a determinant of
order n 1 obtained by deleting the row i and column j of the original determinant.
The minor of aij is denoted as M ij .
It can be seen from Eq. (13) that there are 2n different ways (expansion along any of the
n rows or any of the n columns) in which the determinant of a matrix can be computed.
For a 2 x 2 matrix,
  det A = | a11  a12 | = a11 alpha11 + a12 alpha12 = a11 a22 + a12 (-a21) = a11 a22 - a12 a21.   (12)
          | a21  a22 |
(2) If all the elements of a row (or a column) are zero, the value of the determinant is
zero.
(3) If any two rows (or two columns) are interchanged, the value of the determinant is
multiplied by -1.
(4) If all the elements of one row (or one column) are multiplied by the same constant a ,
the value of the new determinant is a times the value of the original determinant.
(5) If the corresponding elements of two rows (or two columns) of a determinant are
proportional (linearly dependent), the value of the determinant is zero. For example,

  det A = | 4  6  8 |
          | 3  4  6 | = 0,
          | 2  2  4 |

since the third column is twice the first column.
Operation count for solving the linear system Ax = b involving n = 81 unknowns (A is 81 x 81):
  By Cramer's rule [about (n - 1)(n + 1)! operations; with n = 81 this is of the order of
  82!], which is a very large number indeed (82! ~ 4.75 x 10^122), dwarfing even the national debt.
  By LU decomposition (or Gauss elimination) [about n^3/3 operations], for comparison:
  81^3/3 = 177,147 (1.77 x 10^5) operations.
If we solve this problem using a machine capable of performing 100 million floating-point
operations per second (100 megaflops; e.g., a Pentium III 933 MHz, year 2000):
  Using Cramer's rule would require about 3.20 x 10^101 years. This is not worth
  waiting for!
  Using LU decomposition (or Gauss elimination) would require only a fraction of
  a second on the same machine.
(~3 gigaflops for a Pentium 4 3.4 GHz, 2004; ~48 gigaflops for a Core 2 Duo E6700, 2008.)
Cramer’s rule should never be used for more than about three unknowns, since it
rapidly becomes very inefficient as the number of unknowns increases
Determinant of the triangular factor U: |U| = u_11 u_22 ... u_nn.
Determinant of the unit triangular factor L (ones on the diagonal): |L| = 1 x 1 x ... x 1 = 1.
3. Rank of a Matrix
For an n n matrix A, consider all possible square submatrices that can be formed by
deleting rows and columns. The rank of A is then defined as the size of the highest
order nonsingular submatrix. This implies that a square matrix of order n is
nonsingular if and only if its rank is n .
4. Inverse Matrix
The inverse A^(-1) of a nonsingular square matrix A satisfies A^(-1) A = A A^(-1) = I,
where A^(-1) A, for example, denotes the product of the matrix A^(-1) and A. It is given by
  A^(-1) = (adjoint A) / (det A),   (15)
where adjoint A is the adjoint matrix of A, and det A, the determinant of A,
is assumed to be nonzero.
Adjoint Matrix
The adjoint matrix of a square matrix A = [a_ij] is defined as the matrix obtained by
replacing each element a_ij by its cofactor alpha_ij and then transposing.
Thus,

                              | alpha11  alpha21  . . .  alphan1 |
  Adjoint A = [alpha_ij]^T =  | alpha12  alpha22  . . .  alphan2 |
                              | .  .  .  .  .  .  .  .  .  .  .  |   (16)
                              | alpha1n  alpha2n  . . .  alphann |

For a 3 x 3 matrix

        | a11  a12  a13 |
  A  =  | a21  a22  a23 | ,   the inverse of A is  A^(-1) = (adjoint matrix of A) / (determinant of A).
        | a31  a32  a33 |
Equality of matrices: Two matrices A and B , having the same order, are equal if
and only if aij bij for every i and j .
Addition and subtraction of matrices: The sum of the two matrices A and B ,
having the same order, is given by the sum of the corresponding elements. Thus, if
C A B B A, then cij aij bij for every i and j .
Similarly, the difference of two matrices A and B of the same order, is given by
D A B with dij aij bij for every i and j .
Multiplication of matrices: The product of two matrices A and B is defined only
if they are conformable (i.e., if the number of columns of A is equal to the number of
rows of B). If A is of order m x n and B is of order n x p, then the product
C = A B is of order m x p and is defined by C = [c_ij], with

  c_ij = sum_{k=1}^{n} a_ik b_kj;   i = 1, 2, ..., m;   j = 1, 2, ..., p.   (17)

This means that c_ij is the quantity obtained by multiplying the i-th row of A by the j-th
column of B element by element and summing these products.
(2) The transpose of a matrix product is given by the product of the transposes of the
separate matrices in reverse order. Thus, if C = A B, then C^T = B^T A^T.
(3) The inverse of a matrix product is given by the product of the inverses of the
separate matrices in reverse order. Thus, if C = A B, then C^(-1) = B^(-1) A^(-1).
• On page 75, please explain the difference between the sums
  sum_{k=100, step -1}^{1} 1/k   and   sum_{k=1, step +1}^{100} 1/k
in their numerical results.