
Module 3

Sparsity Technique

3.1 Sparse matrices


A sparse matrix is a matrix in which most (or at least a significant number) of the elements are zero.
In the context of power system analysis, the matrices associated with the power flow solution are sparse.
For example, let us consider the YBUS matrix. As we have already seen, the off-diagonal elements of the
YBUS matrix signify the connectivity between the nodes. To be more precise, the element (i, j) of the
YBUS matrix is non-zero if there is a direct connection between node 'i' and node 'j', while it is zero if
there is no direct connection between these two nodes. Now, in most power systems, any bus is generally
connected directly to only 3-4 other buses. Therefore, in a 100-bus system (say), there would
be at most 4-5 non-zero terms (including the diagonal) in any row of the YBUS matrix, the rest of the
elements being zero. Therefore, out of (100 × 100) = 10,000 elements, only about 500 would
be non-zero and the remaining elements would be zero. Thus, in this case, the YBUS matrix is
about 95 percent sparse. For any larger system, the percentage of sparsity of the associated YBUS
matrix would be even higher.
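As a quick check of this arithmetic, the short Python sketch below reproduces the count for the hypothetical 100-bus system assumed above (the figure of 5 non-zero entries per row is the assumption from the text, not data from a real system):

```python
# Hypothetical 100-bus system: each row of the YBUS matrix is assumed to
# hold the diagonal entry plus about 4 off-diagonal entries for the
# directly connected buses.
n_bus = 100
nonzeros_per_row = 5                        # diagonal + ~4 direct connections
total_elements = n_bus * n_bus              # 10,000
nonzero_elements = n_bus * nonzeros_per_row
sparsity = 100.0 * (total_elements - nonzero_elements) / total_elements
print(f"{nonzero_elements} non-zero out of {total_elements} "
      f"elements -> {sparsity:.0f}% sparse")
# prints: 500 non-zero out of 10000 elements -> 95% sparse
```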
Because of the sparsity of the YBUS matrix, the Jacobian matrix for the load flow solution is also
sparse. To see this, please consider equations (2.48) - (2.55). From these equations it can be seen
that every element of the Jacobian matrix depends on the corresponding element Yij. Therefore, if the element
Yij is zero, the corresponding elements of the Jacobian matrix are also zero. As most of the
elements (Yij) of the YBUS matrix are zero, it immediately follows that most of the elements of the
Jacobian matrix are also zero, thereby making the Jacobian matrix quite sparse as well.
Now, in each iteration of the NRLF technique (we are considering the polar form here), the
correction vector (∆X) is computed by inverting the Jacobian matrix and thereafter multiplying
the inverse of the Jacobian matrix with the mismatch vector (∆M) (please see equation (2.45)).
However, even though the Jacobian matrix is sparse, its inverse is a full matrix. Hence, computing
the direct inverse of this sparse matrix involves a large computational burden. It would therefore
be much less intensive if equation (2.45) could be solved by exploiting the sparse nature of the Jacobian
matrix. Apart from this, storing all the elements of a highly sparse matrix also consumes memory
unnecessarily. Therefore, if only the non-zero elements are stored in an appropriate fashion, a lot of
memory can be freed. Of course, with the storage of only the non-zero elements, the complexity of
programming increases. However, for any general purpose load flow program, which is expected
to handle power systems of any size, the added programming complexity is often a
small cost as compared to the advantage of optimized memory utilization.
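One simple way of storing only the non-zero elements is the coordinate scheme, sketched below in Python: parallel lists of row indices, column indices and values. The class name and interface are our own illustration, not a scheme prescribed by the text; the storage schemes actually used are discussed later in this module.

```python
class SparseMatrixCOO:
    """Coordinate storage: keep only the non-zero elements of a matrix,
    each together with its row and column index."""

    def __init__(self, dense):
        self.n_rows = len(dense)
        self.n_cols = len(dense[0])
        self.rows, self.cols, self.vals = [], [], []
        for i, row in enumerate(dense):
            for j, value in enumerate(row):
                if value != 0.0:
                    self.rows.append(i)
                    self.cols.append(j)
                    self.vals.append(value)

    def matvec(self, x):
        """Multiply the stored matrix by a vector, skipping all zeros."""
        y = [0.0] * self.n_rows
        for i, j, value in zip(self.rows, self.cols, self.vals):
            y[i] += value * x[j]
        return y


# A small 4x4 example: only 8 of the 16 elements need to be stored.
Y = [[2.0, -1.0, 0.0, 0.0],
     [-1.0, 2.0, 0.0, 0.0],
     [0.0, 0.0, 3.0, -1.5],
     [0.0, 0.0, -1.5, 3.0]]
sparse_Y = SparseMatrixCOO(Y)
print(sparse_Y.matvec([1.0, 1.0, 1.0, 1.0]))  # [1.0, 1.0, 1.5, 1.5]
```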
Below we will discuss some schemes for solving a set of linear equations (note that equation (2.45)
is a set of linear equations) by utilizing the sparse nature of the Jacobian matrix, and also some schemes
for storing a sparse matrix. We will start with the Gaussian elimination method for solving a set of
linear equations.

3.2 Gaussian elimination technique


Let us consider a linear system of equations:

$$Ax = b \tag{3.1}$$

where x and b are both (n × 1) vectors and A is an (n × n) co-efficient matrix. The most obvious
method of solving equation (3.1) is to invert matrix A, that is, $x = A^{-1}b$. However, equation
(3.1) can also be solved indirectly by converting the matrix A into an upper triangular form, with the
corresponding changes reflected in the vector b, and then performing back substitution. To illustrate the basic
procedure, let us consider a 4th-order system as shown in equations (3.2)-(3.5).

$$a_{11}x_1 + a_{12}x_2 + a_{13}x_3 + a_{14}x_4 = b_1 \tag{3.2}$$

$$a_{21}x_1 + a_{22}x_2 + a_{23}x_3 + a_{24}x_4 = b_2 \tag{3.3}$$

$$a_{31}x_1 + a_{32}x_2 + a_{33}x_3 + a_{34}x_4 = b_3 \tag{3.4}$$

$$a_{41}x_1 + a_{42}x_2 + a_{43}x_3 + a_{44}x_4 = b_4 \tag{3.5}$$

The Gaussian elimination proceeds in certain sequential steps as described below:

Step 1:
a) Equation (3.2) is divided throughout by $a_{11}$:

$$x_1 + \frac{a_{12}}{a_{11}}x_2 + \frac{a_{13}}{a_{11}}x_3 + \frac{a_{14}}{a_{11}}x_4 = \frac{b_1}{a_{11}} \tag{3.6}$$

b) Multiply equation (3.6) by $a_{21}$, $a_{31}$ and $a_{41}$ (one by one) and subtract the resulting expressions
from equations (3.3), (3.4) and (3.5) respectively to yield:

$$\left(a_{22} - \frac{a_{12}a_{21}}{a_{11}}\right)x_2 + \left(a_{23} - \frac{a_{13}a_{21}}{a_{11}}\right)x_3 + \left(a_{24} - \frac{a_{14}a_{21}}{a_{11}}\right)x_4 = b_2 - \frac{b_1 a_{21}}{a_{11}} \tag{3.7}$$

$$\left(a_{32} - \frac{a_{12}a_{31}}{a_{11}}\right)x_2 + \left(a_{33} - \frac{a_{13}a_{31}}{a_{11}}\right)x_3 + \left(a_{34} - \frac{a_{14}a_{31}}{a_{11}}\right)x_4 = b_3 - \frac{b_1 a_{31}}{a_{11}} \tag{3.8}$$

$$\left(a_{42} - \frac{a_{12}a_{41}}{a_{11}}\right)x_2 + \left(a_{43} - \frac{a_{13}a_{41}}{a_{11}}\right)x_3 + \left(a_{44} - \frac{a_{14}a_{41}}{a_{11}}\right)x_4 = b_4 - \frac{b_1 a_{41}}{a_{11}} \tag{3.9}$$
Equations (3.6) to (3.9) can be written more compactly as,

$$x_1 + \frac{a_{12}}{a_{11}}x_2 + \frac{a_{13}}{a_{11}}x_3 + \frac{a_{14}}{a_{11}}x_4 = \frac{b_1}{a_{11}} \tag{3.10}$$

$$a^{(1)}_{22}x_2 + a^{(1)}_{23}x_3 + a^{(1)}_{24}x_4 = b^{(1)}_2 \tag{3.11}$$

$$a^{(1)}_{32}x_2 + a^{(1)}_{33}x_3 + a^{(1)}_{34}x_4 = b^{(1)}_3 \tag{3.12}$$

$$a^{(1)}_{42}x_2 + a^{(1)}_{43}x_3 + a^{(1)}_{44}x_4 = b^{(1)}_4 \tag{3.13}$$

where, in equations (3.10) - (3.13),

$$a^{(1)}_{jk} = a_{jk} - \frac{a_{j1}a_{1k}}{a_{11}} \quad \text{for } j,k = 2,3,4 \tag{3.14}$$

and, similarly, $b^{(1)}_j = b_j - \dfrac{b_1 a_{j1}}{a_{11}}$ for $j = 2,3,4$.

Step 2: In this step we will work with equations (3.11) - (3.13).


a) Equation (3.11) is divided throughout by $a^{(1)}_{22}$:

$$x_2 + \frac{a^{(1)}_{23}}{a^{(1)}_{22}}x_3 + \frac{a^{(1)}_{24}}{a^{(1)}_{22}}x_4 = \frac{b^{(1)}_2}{a^{(1)}_{22}} \tag{3.15}$$

b) Multiplying equation (3.15) by $a^{(1)}_{32}$ and $a^{(1)}_{42}$ (one by one) and subtracting the resulting
expressions from equations (3.12) and (3.13) respectively, one can obtain:

$$\left[a^{(1)}_{33} - \frac{a^{(1)}_{23}a^{(1)}_{32}}{a^{(1)}_{22}}\right]x_3 + \left[a^{(1)}_{34} - \frac{a^{(1)}_{24}a^{(1)}_{32}}{a^{(1)}_{22}}\right]x_4 = b^{(1)}_3 - \frac{b^{(1)}_2}{a^{(1)}_{22}}\,a^{(1)}_{32} \tag{3.16}$$

$$\left[a^{(1)}_{43} - \frac{a^{(1)}_{23}a^{(1)}_{42}}{a^{(1)}_{22}}\right]x_3 + \left[a^{(1)}_{44} - \frac{a^{(1)}_{24}a^{(1)}_{42}}{a^{(1)}_{22}}\right]x_4 = b^{(1)}_4 - \frac{b^{(1)}_2}{a^{(1)}_{22}}\,a^{(1)}_{42} \tag{3.17}$$

Similar to step 1, equations (3.15) - (3.17) are re-written as:

$$x_2 + \frac{a^{(1)}_{23}}{a^{(1)}_{22}}x_3 + \frac{a^{(1)}_{24}}{a^{(1)}_{22}}x_4 = \frac{b^{(1)}_2}{a^{(1)}_{22}} \tag{3.18}$$

$$a^{(2)}_{33}x_3 + a^{(2)}_{34}x_4 = b^{(2)}_3 \tag{3.19}$$

$$a^{(2)}_{43}x_3 + a^{(2)}_{44}x_4 = b^{(2)}_4 \tag{3.20}$$

where, in equations (3.19) - (3.20),

$$a^{(2)}_{jk} = a^{(1)}_{jk} - \frac{a^{(1)}_{j2}a^{(1)}_{2k}}{a^{(1)}_{22}} \quad \text{for } j,k = 3,4 \tag{3.21}$$

and, similarly, $b^{(2)}_j = b^{(1)}_j - \dfrac{b^{(1)}_2 a^{(1)}_{j2}}{a^{(1)}_{22}}$ for $j = 3,4$.
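Comparing equations (3.14) and (3.21), every elimination step follows the same pattern. As a compact restatement (our own, for a general $n$-th order system, with $a^{(0)}_{jk} = a_{jk}$ and $b^{(0)}_j = b_j$), step $m$ computes:

$$a^{(m)}_{jk} = a^{(m-1)}_{jk} - \frac{a^{(m-1)}_{jm}\,a^{(m-1)}_{mk}}{a^{(m-1)}_{mm}}, \qquad b^{(m)}_{j} = b^{(m-1)}_{j} - \frac{b^{(m-1)}_{m}\,a^{(m-1)}_{jm}}{a^{(m-1)}_{mm}}, \qquad j,k = m+1,\dots,n$$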

Step 3: In this step we will work with equations (3.19) and (3.20).

a) Equation (3.19) is divided throughout by $a^{(2)}_{33}$:

$$x_3 + \frac{a^{(2)}_{34}}{a^{(2)}_{33}}x_4 = \frac{b^{(2)}_3}{a^{(2)}_{33}} \tag{3.22}$$

b) Multiplying equation (3.22) by $a^{(2)}_{43}$ and subtracting it from equation (3.20), one can obtain:

$$\left[a^{(2)}_{44} - \frac{a^{(2)}_{34}a^{(2)}_{43}}{a^{(2)}_{33}}\right]x_4 = b^{(2)}_4 - \frac{b^{(2)}_3}{a^{(2)}_{33}}\,a^{(2)}_{43} \tag{3.23}$$

Equation (3.23) contains only one unknown, $x_4$. Therefore, the value of $x_4$ can be calculated
from this equation. With the value of $x_4$ thus calculated, $x_3$ can be calculated from equation (3.22).
Going back in this manner, $x_2$ can be calculated from equation (3.18) (with the known values of $x_3$
and $x_4$) and lastly, the value of $x_1$ can be calculated from equation (3.10) (with the known values
of $x_2$, $x_3$ and $x_4$).
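The whole procedure of equations (3.6)-(3.23), followed by the back substitution just described, can be written as a short routine. The Python sketch below is our own minimal illustration using NumPy, not an optimized solver, and it does not yet exploit sparsity (that is taken up later in this module). It normalizes each pivot row, eliminates the entries below it, and then substitutes backwards:

```python
import numpy as np

def gaussian_elimination(A, b):
    """Solve Ax = b by Gaussian elimination with back substitution.

    Assumes the system is non-singular.
    """
    A = np.array(A, dtype=float)
    b = np.array(b, dtype=float)
    n = len(b)

    # Forward elimination: reduce A to upper-triangular form.
    for k in range(n):
        # If the pivot is zero, interchange this row with a lower row
        # whose entry in column k is non-zero (see the note on pivots).
        if A[k, k] == 0.0:
            r = next(r for r in range(k + 1, n) if A[r, k] != 0.0)
            A[[k, r]], b[[k, r]] = A[[r, k]], b[[r, k]]
        # Normalize the pivot row, as in equations (3.6), (3.15), (3.22).
        pivot = A[k, k]
        A[k, k:] /= pivot
        b[k] /= pivot
        # Subtract multiples of the pivot row from the rows below it,
        # as in equations (3.7)-(3.9), (3.16)-(3.17) and (3.23).
        for j in range(k + 1, n):
            factor = A[j, k]
            A[j, k:] -= factor * A[k, k:]
            b[j] -= factor * b[k]

    # Back substitution, starting from the last row of the
    # upper-triangular system (compare equation (3.30)).
    x = np.zeros(n)
    for k in range(n - 1, -1, -1):
        x[k] = b[k] - A[k, k + 1:] @ x[k + 1:]
    return x

# A 4th-order example; the result matches np.linalg.solve(A, b).
A = [[2.0, 1.0, 1.0, 0.0],
     [4.0, 3.0, 3.0, 1.0],
     [8.0, 7.0, 9.0, 5.0],
     [6.0, 7.0, 9.0, 8.0]]
b = [1.0, 2.0, 4.0, 5.0]
print(gaussian_elimination(A, b))
```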

The steps described in equations (3.6)-(3.23) can easily be expressed in terms of standard matrix
operations. To see this, let us represent equations (3.2)-(3.5) in matrix notation as shown in equation
(3.24). In this equation, it is assumed that $a_{11} \neq 0$.

$$\begin{bmatrix} a_{11} & a_{12} & a_{13} & a_{14} \\ a_{21} & a_{22} & a_{23} & a_{24} \\ a_{31} & a_{32} & a_{33} & a_{34} \\ a_{41} & a_{42} & a_{43} & a_{44} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} = \begin{bmatrix} b_1 \\ b_2 \\ b_3 \\ b_4 \end{bmatrix} \tag{3.24}$$

Starting with this matrix, the various steps for Gaussian elimination are as follows.

Step M1

On equation (3.24), the operation R1/$a_{11}$ (where 'R1' is the first row of the co-efficient matrix
of equation (3.24)) is carried out to obtain equation (3.6) and the resulting matrix equation is shown
in equation (3.25).
$$\begin{bmatrix} 1 & a_{12}/a_{11} & a_{13}/a_{11} & a_{14}/a_{11} \\ a_{21} & a_{22} & a_{23} & a_{24} \\ a_{31} & a_{32} & a_{33} & a_{34} \\ a_{41} & a_{42} & a_{43} & a_{44} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} = \begin{bmatrix} b_1/a_{11} \\ b_2 \\ b_3 \\ b_4 \end{bmatrix} \tag{3.25}$$

Step M2

On equation (3.25), the operations (R2 − R1 ∗ $a_{21}$), (R3 − R1 ∗ $a_{31}$) and (R4 − R1 ∗ $a_{41}$) are
carried out (where 'Ri' denotes the i-th (i = 1, 2, 3, 4) row of the co-efficient matrix of equation (3.25))
to obtain equations (3.10)-(3.13) and the resulting matrix equation is shown in equation (3.26). In
this equation, it is assumed that $a^{(1)}_{22} \neq 0$.

$$\begin{bmatrix} 1 & a_{12}/a_{11} & a_{13}/a_{11} & a_{14}/a_{11} \\ 0 & a^{(1)}_{22} & a^{(1)}_{23} & a^{(1)}_{24} \\ 0 & a^{(1)}_{32} & a^{(1)}_{33} & a^{(1)}_{34} \\ 0 & a^{(1)}_{42} & a^{(1)}_{43} & a^{(1)}_{44} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} = \begin{bmatrix} b_1/a_{11} \\ b^{(1)}_2 \\ b^{(1)}_3 \\ b^{(1)}_4 \end{bmatrix} \tag{3.26}$$

Step M3

On equation (3.26), the operation R2/$a^{(1)}_{22}$ is carried out (corresponding to equation (3.15)) to
obtain the resulting matrix equation shown in equation (3.27).

$$\begin{bmatrix} 1 & a_{12}/a_{11} & a_{13}/a_{11} & a_{14}/a_{11} \\ 0 & 1 & a^{(1)}_{23}/a^{(1)}_{22} & a^{(1)}_{24}/a^{(1)}_{22} \\ 0 & a^{(1)}_{32} & a^{(1)}_{33} & a^{(1)}_{34} \\ 0 & a^{(1)}_{42} & a^{(1)}_{43} & a^{(1)}_{44} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} = \begin{bmatrix} b_1/a_{11} \\ b^{(1)}_2/a^{(1)}_{22} \\ b^{(1)}_3 \\ b^{(1)}_4 \end{bmatrix} \tag{3.27}$$
Step M4

On equation (3.27), the operations (R3 − R2 ∗ $a^{(1)}_{32}$) and (R4 − R2 ∗ $a^{(1)}_{42}$) are carried out,
corresponding to equations (3.18)-(3.21), and the resulting matrix equation is shown in equation
(3.28). In this equation, it is assumed that $a^{(2)}_{33} \neq 0$.

$$\begin{bmatrix} 1 & a_{12}/a_{11} & a_{13}/a_{11} & a_{14}/a_{11} \\ 0 & 1 & a^{(1)}_{23}/a^{(1)}_{22} & a^{(1)}_{24}/a^{(1)}_{22} \\ 0 & 0 & a^{(2)}_{33} & a^{(2)}_{34} \\ 0 & 0 & a^{(2)}_{43} & a^{(2)}_{44} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} = \begin{bmatrix} b_1/a_{11} \\ b^{(1)}_2/a^{(1)}_{22} \\ b^{(2)}_3 \\ b^{(2)}_4 \end{bmatrix} \tag{3.28}$$
Step M5

On equation (3.28), the operation R3/$a^{(2)}_{33}$ is carried out to obtain the matrix equation shown
in equation (3.29).
$$\begin{bmatrix} 1 & a_{12}/a_{11} & a_{13}/a_{11} & a_{14}/a_{11} \\ 0 & 1 & a^{(1)}_{23}/a^{(1)}_{22} & a^{(1)}_{24}/a^{(1)}_{22} \\ 0 & 0 & 1 & a^{(2)}_{34}/a^{(2)}_{33} \\ 0 & 0 & a^{(2)}_{43} & a^{(2)}_{44} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} = \begin{bmatrix} b_1/a_{11} \\ b^{(1)}_2/a^{(1)}_{22} \\ b^{(2)}_3/a^{(2)}_{33} \\ b^{(2)}_4 \end{bmatrix} \tag{3.29}$$
Step M6

Lastly, on equation (3.29), the operation (R4 − R3 ∗ $a^{(2)}_{43}$) is carried out to obtain the matrix
equation shown in equation (3.30).

$$\begin{bmatrix} 1 & a_{12}/a_{11} & a_{13}/a_{11} & a_{14}/a_{11} \\ 0 & 1 & a^{(1)}_{23}/a^{(1)}_{22} & a^{(1)}_{24}/a^{(1)}_{22} \\ 0 & 0 & 1 & a^{(2)}_{34}/a^{(2)}_{33} \\ 0 & 0 & 0 & a^{(3)}_{44} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} = \begin{bmatrix} b_1/a_{11} \\ b^{(1)}_2/a^{(1)}_{22} \\ b^{(2)}_3/a^{(2)}_{33} \\ b^{(3)}_4 \end{bmatrix} \tag{3.30}$$

In equation (3.30), $a^{(3)}_{44} = a^{(2)}_{44} - \dfrac{a^{(2)}_{34}a^{(2)}_{43}}{a^{(2)}_{33}}$ and $b^{(3)}_4 = b^{(2)}_4 - \dfrac{b^{(2)}_3}{a^{(2)}_{33}}\,a^{(2)}_{43}$. From this equation, the
unknowns can easily be solved by back substitution, starting from the last row of the final co-efficient
matrix in equation (3.30). Thus, Gaussian elimination enables us to solve for the unknown quantities
in a systematic manner without inverting the co-efficient matrix. Therefore, by adopting the same
procedure, the correction vector (∆X) can be computed from equation (2.45) without having to
invert the Jacobian matrix. When a large power system is analyzed, adopting Gaussian elimination
reduces the computational burden to a large extent (as compared to inversion of the Jacobian matrix).
In the above procedure, the variables $a_{11}$, $a^{(1)}_{22}$ and $a^{(2)}_{33}$ have been assumed to be non-zero. These
variables, by which the rows of the co-efficient matrix are divided, are called the 'pivot variables'.
However, during the elimination process, it is not guaranteed that the pivot variables will always be
non-zero. If any pivot variable turns out to be zero at any intermediate step, then the corresponding
row is interchanged with the next row so that the new pivot variable is non-zero and the elimination
process can continue.
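As a minimal sketch of this row interchange (our own illustration, with made-up numbers), suppose the first pivot of a 3rd-order system is zero:

```python
# The first pivot A[0][0] is zero, so row 0 is interchanged with the next
# row whose entry in column 0 is non-zero (together with the matching
# entries of b) before the elimination proceeds.
A = [[0.0, 2.0, 1.0],
     [1.0, 1.0, 0.0],
     [3.0, 0.0, 4.0]]
b = [3.0, 2.0, 7.0]

k = 0
if A[k][k] == 0.0:
    for r in range(k + 1, len(A)):
        if A[r][k] != 0.0:
            A[k], A[r] = A[r], A[k]   # swap the two rows of A
            b[k], b[r] = b[r], b[k]   # swap the matching entries of b
            break
# The new pivot is A[0][0] = 1.0 and the elimination can continue.
```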
We will look into an example of the Gaussian elimination procedure in the next lecture.

