Matrix
PAPER
Arranged By:
FACULTY OF ECONOMICS
DEVELOPMENT ECONOMICS
OCTOBER 2019
PREFACE
Praise be to God Almighty, by whose grace the authors were given the
opportunity to complete this paper, entitled Matrix, as expected. This paper
was prepared to fulfill the requirements of the Mathematics for Advanced
Economics course.
The authors encountered many obstacles in writing this paper, but with the
encouragement of several parties, to whom they are grateful, it could be
completed properly.
The authors are aware that the preparation of this paper is far from perfect,
and they apologize for any errors it may contain. Hopefully this paper will be
useful for the reader.
Authors
TABLE OF CONTENTS
PREFACE
TABLE OF CONTENTS
CHAPTER I
INTRODUCTION
1.1 Background of The Paper
1.2 Formulation of The Problem
1.3 Purpose of The Paper
CHAPTER II
THEORY AND DISCUSSION
2.1 Basic Matrix Operations
2.2 Gauss Method
2.3 Matrix Determinant
2.4 Laplace Expansion
2.5 Adjoint Matrix
2.6 Cramer's Rule
2.7 Determinant of Hessian
CLOSING
3.1 Conclusion
BIBLIOGRAPHY
CHAPTER I
INTRODUCTION
1.3.2 To understand the concept of depreciation
CHAPTER II
THEORY AND DISCUSSION
Using row operations to convert a matrix into reduced row echelon form is
sometimes called Gauss–Jordan elimination. Some authors use the term Gaussian
elimination to refer only to the process up to upper triangular, or
(unreduced) row echelon, form. For computational reasons, when solving systems
of linear equations, it is sometimes preferable to stop row operations before
the matrix is completely reduced.
Example Question:
Solve the system of linear equations below using the Gauss method.
4𝑥1 + 4𝑥2 + 8𝑥3 = 36
𝑥1 + 2𝑥2 + 3𝑥3 = 14
4𝑥1 + 𝑥2 + 𝑥3 = 9
Answer:
Write the system as an augmented matrix [A | b]:

[ 4  4  8 | 36 ]
[ 1  2  3 | 14 ]
[ 4  1  1 |  9 ]

Multiply the first row by 1/4 to get a 1 in position a11:

[ 1  1  2 |  9 ]
[ 1  2  3 | 14 ]
[ 4  1  1 |  9 ]

Subtract the first row from the second row, and subtract 4 times the first
row from the third row:

[ 1  1  2 |   9 ]
[ 0  1  1 |   5 ]
[ 0 -3 -7 | -27 ]

Subtract the second row from the first row, and add 3 times the second row to
the third row:

[ 1  0  1 |   4 ]
[ 0  1  1 |   5 ]
[ 0  0 -4 | -12 ]

Multiply the third row by -1/4 to get a 1 in position a33:

[ 1  0  1 |  4 ]
[ 0  1  1 |  5 ]
[ 0  0  1 |  3 ]

Subtract the third row from the first row and from the second row:

[ 1  0  0 |  1 ]
[ 0  1  0 |  2 ]
[ 0  0  1 |  3 ]

Solution obtained:
x1 = 1, x2 = 2, x3 = 3
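As an illustration, the row-reduction steps above can be sketched in a short
program. This is a minimal Gauss–Jordan routine written for this paper's
example, not a library function:

```python
def gauss_jordan(aug):
    """Reduce an augmented matrix [A | b] to reduced row echelon form
    and return the solution column."""
    n = len(aug)
    for col in range(n):
        # Find a row with a nonzero entry in this column and swap it up.
        pivot = next(r for r in range(col, n) if aug[r][col] != 0)
        aug[col], aug[pivot] = aug[pivot], aug[col]
        # Scale the pivot row so the pivot entry becomes 1.
        p = aug[col][col]
        aug[col] = [v / p for v in aug[col]]
        # Eliminate the pivot column from every other row.
        for r in range(n):
            if r != col:
                factor = aug[r][col]
                aug[r] = [v - factor * w for v, w in zip(aug[r], aug[col])]
    return [row[-1] for row in aug]

# The example system: 4x1 + 4x2 + 8x3 = 36, x1 + 2x2 + 3x3 = 14, 4x1 + x2 + x3 = 9
solution = gauss_jordan([[4, 4, 8, 36],
                         [1, 2, 3, 14],
                         [4, 1, 1, 9]])
print(solution)  # [1.0, 2.0, 3.0]
```

The routine performs exactly the three kinds of row operations used in the
worked example: row swaps, scaling a row, and adding a multiple of one row to
another.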
2.4 LAPLACE EXPANSION
The Laplace expansion expresses the determinant of an n × n matrix B as a
weighted sum of determinants of submatrices (or minors) of B, each of size
(n − 1) × (n − 1). The Laplace expansion is of didactic interest for its
simplicity and as one of several ways to view and compute the determinant. For
large matrices, however, it quickly becomes inefficient compared with methods
such as Gaussian elimination.
Theorem.
Suppose B = [bij] is an n × n matrix and fix any i, j ∈ {1, 2, ..., n}.
Then its determinant |B| is given by:

|B| = bi1 ci1 + bi2 ci2 + ... + bin cin   (expansion along row i)
    = b1j c1j + b2j c2j + ... + bnj cnj   (expansion along column j)

where cij = (−1)^(i+j) mij is the cofactor of bij, and the minor mij is the
determinant of the submatrix obtained by deleting row i and column j of B (see
example below).
Example:
Consider the matrix

B = [ 1  2  3 ]
    [ 4  5  6 ]
    [ 7  8  9 ]

The determinant of this matrix can be computed by using the Laplace expansion
along any one of its rows or columns. For instance, an expansion along the
first row yields:

|B| = 1 · (5·9 − 6·8) − 2 · (4·9 − 6·7) + 3 · (4·8 − 5·7)
    = 1(−3) − 2(−6) + 3(−3) = 0

Laplace expansion along the second column yields the same result:

|B| = −2 · (4·9 − 6·7) + 5 · (1·9 − 3·7) − 8 · (1·6 − 3·4)
    = −2(−6) + 5(−12) − 8(−6) = 0
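The theorem translates directly into a recursive routine. The following is a
small sketch (expanding along the first row each time), meant to make the
definition concrete rather than to be efficient:

```python
def det_laplace(m):
    """Determinant by Laplace (cofactor) expansion along the first row."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for j in range(n):
        # Minor: delete row 0 and column j.
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        # Cofactor sign alternates as (-1)^(0+j).
        total += (-1) ** j * m[0][j] * det_laplace(minor)
    return total

B = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]
print(det_laplace(B))  # 0
```

Because the recursion expands every minor in turn, the running time grows like
n!, which is why the text notes that the expansion is impractical for large
matrices.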
2.5 ADJOINT MATRIX
The adjugate of A is the transpose of the cofactor matrix C of A,
𝑎𝑑𝑗(𝐴) = 𝐶 𝑇
In more detail, suppose R is a commutative ring and A is an n × n matrix with
entries from R. The (i,j)-minor of A, denoted Mij, is the determinant of the (n − 1)
× (n − 1) matrix that results from deleting row i and column j of A. The cofactor
matrix of A is the n × n matrix C whose (i, j) entry is the (i, j) cofactor of A,
which is the (i, j)-minor times a sign factor:

Cij = (−1)^(i+j) Mij
Example:
For the 3 × 3 matrix

A = [ 2  0  5 ]
    [ 0  1  1 ]
    [ 1  2  3 ]

find the cofactor matrix C, the adjoint of A, and the determinant of A.
Replacing each element Aij by its cofactor Cij, according to the sign rule for
cofactors, gives:

C11 = +(1·3 − 1·2) = 1     C12 = −(0·3 − 1·1) = 1     C13 = +(0·2 − 1·1) = −1
C21 = −(0·3 − 5·2) = 10    C22 = +(2·3 − 5·1) = 1     C23 = −(2·2 − 0·1) = −4
C31 = +(0·1 − 5·1) = −5    C32 = −(2·1 − 5·0) = −2    C33 = +(2·1 − 0·0) = 2

so that

C = [  1   1  −1 ]
    [ 10   1  −4 ]
    [ −5  −2   2 ]
The adjoint of A is the transpose of C:

Adj A = C^T = [  1  10  −5 ]
              [  1   1  −2 ]
              [ −1  −4   2 ]

The determinant of the matrix, expanded along each row in turn, is:
|A| = 2(1) + 0(1) + 5(−1) = −3   [row 1]
|A| = 0(10) + 1(1) + 1(−4) = −3  [row 2]
|A| = 1(−5) + 2(−2) + 3(2) = −3  [row 3]
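The cofactor and adjoint computations above can be sketched in code. This
3 × 3 version is written only to mirror the example (a general routine would
work for any n):

```python
def minor(a, i, j):
    """Submatrix of a with row i and column j removed."""
    return [row[:j] + row[j + 1:] for r, row in enumerate(a) if r != i]

def det2(m):
    """Determinant of a 2x2 matrix."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def adjugate3(a):
    """Adjugate (adjoint) of a 3x3 matrix: transpose of the cofactor matrix."""
    cof = [[(-1) ** (i + j) * det2(minor(a, i, j)) for j in range(3)]
           for i in range(3)]
    return [[cof[j][i] for j in range(3)] for i in range(3)]  # transpose

A = [[2, 0, 5],
     [0, 1, 1],
     [1, 2, 3]]
adj = adjugate3(A)
print(adj)  # [[1, 10, -5], [1, 1, -2], [-1, -4, 2]]
```

A useful check is the identity A · Adj A = |A| · I; for this example the
product is −3 times the identity matrix, matching the determinant computed in
the text.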
2.6 CRAMER'S RULE
Cramer's rule implemented in a naïve way is computationally inefficient
for systems of more than two or three equations. In the case of n equations
in n unknowns, it requires computation of n + 1 determinants, while Gaussian
elimination produces the result with the same computational complexity as the
computation of a single determinant. Cramer's rule can also be numerically
unstable even for 2×2 systems. However, it has recently been shown that Cramer's
rule can be implemented in O(n3) time, which is comparable to more common
methods of solving systems of linear equations, such as Gaussian
elimination (consistently requiring 2.5 times as many arithmetic operations for all
matrix sizes), while exhibiting comparable numeric stability in most cases.
Example
Solve the system of linear equations below using Cramer's rule.

x + 2y + 4z = 32
4x + 2y = 30
2x + 3y + z = 13

In matrix form AX = B:

[ 1  2  4 ] [ x ]   [ 32 ]
[ 4  2  0 ] [ y ] = [ 30 ]
[ 2  3  1 ] [ z ]   [ 13 ]

with determinant (by the rule of Sarrus):

|A| = [(1·2·1) + (2·0·2) + (4·4·3)] − [(4·2·2) + (1·0·3) + (2·4·1)]
    = 50 − 24 = 26

Replacing the first column of A with B gives A1:

A1 = [ 32  2  4 ]
     [ 30  2  0 ]
     [ 13  3  1 ]

|A1| = [(32·2·1) + (2·0·13) + (4·30·3)] − [(4·2·13) + (32·0·3) + (2·30·1)]
     = 424 − 164 = 260

x = |A1| / |A| = 260 / 26 = 10

Replacing the second column of A with B gives A2:

A2 = [ 1  32  4 ]
     [ 4  30  0 ]
     [ 2  13  1 ]

|A2| = [(1·30·1) + (32·0·2) + (4·4·13)] − [(4·30·2) + (1·0·13) + (32·4·1)]
     = 238 − 368 = −130

y = |A2| / |A| = −130 / 26 = −5

Replacing the third column of A with B gives A3:

A3 = [ 1  2  32 ]
     [ 4  2  30 ]
     [ 2  3  13 ]

|A3| = [(1·2·13) + (2·30·2) + (32·4·3)] − [(32·2·2) + (1·30·3) + (2·4·13)]
     = 530 − 322 = 208

z = |A3| / |A| = 208 / 26 = 8
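The same column-replacement procedure can be sketched as a short program for
the 3 × 3 case, using the rule of Sarrus for each determinant as in the worked
example:

```python
def det3(m):
    """3x3 determinant by the rule of Sarrus."""
    return (m[0][0] * m[1][1] * m[2][2]
            + m[0][1] * m[1][2] * m[2][0]
            + m[0][2] * m[1][0] * m[2][1]
            - m[0][2] * m[1][1] * m[2][0]
            - m[0][0] * m[1][2] * m[2][1]
            - m[0][1] * m[1][0] * m[2][2])

def cramer3(A, b):
    """Solve a 3x3 system by Cramer's rule: x_i = |A_i| / |A|."""
    d = det3(A)
    sol = []
    for i in range(3):
        # A_i: A with column i replaced by the right-hand side b.
        Ai = [row[:i] + [b[r]] + row[i + 1:] for r, row in enumerate(A)]
        sol.append(det3(Ai) / d)
    return sol

A = [[1, 2, 4],
     [4, 2, 0],
     [2, 3, 1]]
b = [32, 30, 13]
print(cramer3(A, b))  # [10.0, -5.0, 8.0]
```

For larger systems, as the discussion above notes, Gaussian elimination is the
more practical choice; this sketch only mirrors the hand calculation.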
|A| = 16 − 4 = 12    |A1| = 560 − 380 = 180    |A2| = 380 − 140 = 240

Obtained: q1 = 180/12 = 15 and q2 = 240/12 = 20.
b) Testing the second-order condition by taking the second-order partial
derivatives to form the Hessian determinant:

π11 = π12 = π21 = −2    π22 = −8

|H| = π11 π22 − π12 π21 = 16 − 4 = 12 > 0, and π11 < 0, so the critical point
(q1, q2) = (15, 20) is a maximum.
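These numbers can be reproduced in code. The profit function itself is not
shown above, so the sketch assumes a quadratic profit of the form
π(q1, q2) = 70·q1 + 190·q2 − q1² − 2·q1·q2 − 4·q2², chosen only because its
first- and second-order partials match the values quoted; it is an
illustration, not the paper's original function:

```python
# First-order conditions of the assumed profit function:
#   dpi/dq1 = 70 - 2*q1 - 2*q2 = 0
#   dpi/dq2 = 190 - 2*q1 - 8*q2 = 0
# i.e. the linear system [[2, 2], [2, 8]] [q1, q2]^T = [70, 190]^T.
A = [[2, 2],
     [2, 8]]
b = [70, 190]
detA = A[0][0] * A[1][1] - A[0][1] * A[1][0]   # |A|  = 16 - 4   = 12
detA1 = b[0] * A[1][1] - A[0][1] * b[1]        # |A1| = 560 - 380 = 180
detA2 = A[0][0] * b[1] - b[0] * A[1][0]        # |A2| = 380 - 140 = 240
q1, q2 = detA1 / detA, detA2 / detA
print(q1, q2)  # 15.0 20.0

# Second-order condition: Hessian of the assumed profit function.
H = [[-2, -2],
     [-2, -8]]
detH = H[0][0] * H[1][1] - H[0][1] * H[1][0]   # 16 - 4 = 12 > 0
print(H[0][0] < 0 and detH > 0)  # True: the critical point is a maximum
```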
The inverse of a matrix can then be obtained from its adjoint and determinant:

A⁻¹ = (1/|A|) Adj A
2.9 DETERMINANT OF THE 3X3 ORDER
2.10 HESSIAN CONSTRAINED OPTIMIZATION
A company produces two different types of product (Q1 and Q2) with the
intention of maximizing the revenue from these products.

… objective function

To answer this question there are various ways to proceed, and one of them is
the Lagrange method:

ax + by = c … constraint function

… Lagrange function
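Since the paper's own objective and Lagrange functions are not reproduced
here, the following sketch uses a hypothetical problem (maximize f(x, y) = x·y
subject to x + y = 10) to show how the Lagrange conditions and the bordered
Hessian test fit together; all the functions in it are assumptions for
illustration:

```python
# Lagrange function: Z = x*y + lam*(10 - x - y).
# First-order conditions: y - lam = 0, x - lam = 0, x + y = 10,
# which give x = y = lam = 5.
x = y = lam = 5

# Bordered Hessian for one constraint g(x, y) = x + y:
#   | 0    g_x   g_y  |   | 0 1 1 |
#   | g_x  f_xx  f_xy | = | 1 0 1 |
#   | g_y  f_yx  f_yy |   | 1 1 0 |
Hb = [[0, 1, 1],
      [1, 0, 1],
      [1, 1, 0]]
det_Hb = (Hb[0][0] * (Hb[1][1] * Hb[2][2] - Hb[1][2] * Hb[2][1])
          - Hb[0][1] * (Hb[1][0] * Hb[2][2] - Hb[1][2] * Hb[2][0])
          + Hb[0][2] * (Hb[1][0] * Hb[2][1] - Hb[1][1] * Hb[2][0]))
print(det_Hb)  # 2: positive, so (5, 5) is a constrained maximum
```

For a two-variable problem with one constraint, a positive bordered-Hessian
determinant at the critical point indicates a constrained maximum.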
CLOSING
3.1 Conclusion
BIBLIOGRAPHY