
Solution Techniques of Power System Analysis

Mohammed Ahmed
Solution Techniques
• Power system computations can be classified into two types:
  – sparse matrix computation;
  – graph theoretic computation.
• During a steady-state analysis (e.g., load flow, optimal power flow, etc.), one typically needs to solve a system of linear equations, each equation involving only a few variables.
• The solution of differential and algebraic equations is required in time-domain simulations such as transient stability.
• Graph theoretic computations are required to evaluate the structure (connectivity) of a network for observability, islands, loops, etc.
• Graph theoretic computations are also required to optimize computations in a sparse linear system solver.
Sparse Matrix
• A sparse matrix is defined as a matrix which has very few nonzero elements.
• But, in fact, a matrix can be termed sparse whenever special techniques can be utilized to take advantage of the large number of zero elements and their locations.
• These sparse matrix techniques begin with the idea that the zero elements need not be stored.
• One of the key issues is to define data structures for these matrices that are well suited for efficient implementation of standard solution methods, whether direct or iterative.
Cont’d
• The techniques presented so far apply to any arbitrary linear system.
• The next step is to see the effect of triangularization schemes on a sparse matrix.
• For a sparse system, only the nonzero elements need to be stored in the computer, since no arithmetic operations are performed on the zeros.
• The triangularization scheme is adapted to solve sparse systems in such a way as to preserve the sparsity as much as possible.
Linear System Solution: Introduction
• A problem that occurs in many fields is the solution of linear systems:

      Ax = b

  where: A is an n by n matrix with elements aij,
         x is an n-vector with elements xi, and
         b is an n-vector with elements bi
• In power systems, we are particularly interested in systems where n is relatively large and A is sparse
  – how large "large" is keeps changing
  – a matrix is sparse if a large percentage of its elements have zero values
• The goal is to understand the computational issues (including complexity) associated with the solution of these systems
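As a quick MATLAB illustration (a minimal sketch; the built-in tridiagonal test matrix is an arbitrary stand-in, not a power system case), the same system can be solved in sparse or full storage, and the sparse solve exploits the zero structure automatically:

    n = 1000;
    A = gallery('tridiag', n);    % sparse tridiagonal test matrix
    b = ones(n, 1);
    x_sparse = A \ b;             % backslash exploits the sparse storage
    x_full   = full(A) \ b;       % same system solved as a full matrix
    norm(x_sparse - x_full)       % the two solutions agree (up to rounding)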
Inverse of a Sparse Matrix
• The inverse of a sparse matrix is NOT in general a sparse matrix
• We never explicitly invert a sparse matrix
• Individual columns of the inverse of a sparse matrix can be obtained by solving Ax = b with b set to all zeros except for a single nonzero in the position of the desired column
• If only a few elements of A⁻¹ are needed (such as the diagonal values), they can usually be computed quite efficiently using sparse vector methods
• We can’t invert a singular matrix (whether sparse or not)
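A minimal MATLAB sketch of the column idea (the small tridiagonal matrix is just a placeholder): column k of A⁻¹ is obtained by solving with the k-th unit vector as the right-hand side:

    A = gallery('tridiag', 5);    % any nonsingular sparse matrix
    k = 3;                        % desired column of inv(A)
    ek = zeros(5, 1);
    ek(k) = 1;                    % all zeros except a single 1 in position k
    col_k = A \ ek;               % k-th column of the inverse, no explicit inversion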
Full Matrix and Sparse Matrix Storage
• Full matrices are easily stored in arrays with just one variable needed to store each value, since the value’s row and column are implicitly available from its matrix position
• With sparse matrices, two or three elements are needed to store each value:
  – the value itself, its row number and its column number
  – the zero values are not explicitly stored
• Storage can be reduced by storing all the elements in a particular row or column together
• Because large matrices are often quite sparse, the total storage is still substantially reduced
Storage Methods for Sparse Matrix
• The adoption of sparse formats may affect the speed of certain operations
• For example, with a sparse format we cannot access or search for a particular element (or group of elements) directly, using the two indices i and j to determine where entry Aij is located in memory
• On the other hand, even if the operation of accessing an entry of a matrix in sparse format turns out to be less efficient, by adopting a sparse format we will nevertheless access only nonzero elements, thus executing only useful operations
• Hence, in general, the sparse format is preferable in terms of storage as well as in terms of computing time, as long as the matrix is sufficiently sparse
Sparse Matrix Storage Schemes
• In order to take advantage of the large number of zero elements, special schemes are required to store sparse matrices.
• The main goal is to represent only the nonzero elements, and still be able to perform the common matrix operations.
• A general approach for storing a sparse matrix, known as the coordinate (COO) format, uses three vectors/arrays, each dimensioned to the number of nonzero elements:
  1. AA: a real array containing all the real (or complex) values of the nonzero elements of A, in any order;
  2. JR: an integer array containing their row indices; and
  3. JC: a second integer array containing their column indices.
• If the elements are unsorted, then both row and column indices are needed
• New elements can easily be added, but deletion is costly
Cont’d..
• Example 1. The matrix

      A = [  1.   0.   0.   2.   0.
             3.   4.   0.   5.   0.
             6.   0.   7.   8.   9.
             0.   0.  10.  11.   0.
             0.   0.   0.   0.  12. ]

  will be represented (for example) by

      AA = [ 12.  9.  7.  5.  1.  2.  11.  3.  6.  4.  8.  10. ]
      JR = [  5   3   3   2   1   1   4   2   3   2   3   4 ]
      JC = [  5   5   3   4   1   4   4   1   1   2   4   3 ]

• In the example, the elements are listed in an arbitrary order.
• If the elements were listed by row, the array JC, which contains redundant information, might be replaced by an array which points to the beginning of each row instead.
• This would involve non-negligible savings in storage.
Cont’d…
• The new data structure has three arrays with the following functions:
  – A real array AA contains the real values aij stored row by row, from row 1 to n. The length of AA is Nz (the number of nonzero elements).
  – An integer array JA contains the column indices of the elements aij as stored in the array AA. The length of JA is Nz.
  – An integer array IA contains the pointers to the beginning of each row in the arrays AA and JA.
• Thus, the content of IA(i) is the position in arrays AA and JA where the i-th row starts.
• The length of IA is n + 1, with IA(n + 1) containing the number IA(1) + Nz, i.e., the address in AA and JA of the beginning of a fictitious row n + 1.
This format, called the Compressed Sparse Row (CSR) format, is probably the most popular for storing general sparse matrices.
Compressed Sparse Row(CSR) Storage
• If the elements are stored in order, row by row (as in the CSR format just described), storage can be further reduced by noting that we do not need to store each row number repeatedly
• The CSR format reduces the storage requirements by needing only one pointer entry per row instead of a row index for every nonzero element
• The CSR format also has advantages for computation when using cache, since during matrix operations we often go through the vectors sequentially
• The CSR format uses 8 × Nz + 4 × (Nz + n + 1) bytes (assuming 8-byte reals and 4-byte integers)
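As a concrete MATLAB sketch (1-based indexing, matching the definitions above), here are the CSR arrays for the 5×5 matrix of Example 1, together with a row-oriented product y = A*x that touches only the nonzeros:

    AA = [1 2 3 4 5 6 7 8 9 10 11 12];   % nonzero values, stored row by row
    JA = [1 4 1 2 4 1 3 4 5 3 4 5];      % column index of each value
    IA = [1 3 6 10 12 13];               % start of each row in AA and JA

    x = ones(5, 1);
    y = zeros(5, 1);
    for i = 1:5
        for k = IA(i):IA(i+1)-1          % nonzeros of row i
            y(i) = y(i) + AA(k) * x(JA(k));
        end
    end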
Compressed Sparse Column (CSC) Scheme
• The most obvious variation of CSR is to store the columns instead of the rows; the scheme is identical, except that values are stored column by column
• It is difficult to add values
• It is often combined with the linked-list approach (a collection of nodes where each node contains a data field and a reference (link) to the next node in the list), which makes matrix manipulation simpler
• It is easy to extract a column, as opposed to rows
• The roles of the vectors IA and JA are exchanged compared with the CSR format
• When performing matrix-vector multiplication with a sparse matrix in CSC format, it is preferable to compute the result as a linear combination of the columns of the matrix
Reading Assignment: the linked-list approach for sparse matrix storage techniques
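A minimal MATLAB sketch of the column-oriented product in CSC form, using the same 5×5 example matrix; here JA holds the column pointers and IA the row indices (one common convention for the exchanged roles):

    AA = [1 3 6 4 7 10 2 5 8 11 9 12];   % nonzero values, stored column by column
    IA = [1 2 3 2 3 4 1 2 3 4 3 5];      % row index of each value
    JA = [1 4 5 7 11 13];                % start of each column in AA and IA

    x = ones(5, 1);
    y = zeros(5, 1);
    for j = 1:5
        for k = JA(j):JA(j+1)-1          % nonzeros of column j
            y(IA(k)) = y(IA(k)) + AA(k) * x(j);   % add x(j) times column j
        end
    end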
Modified Sparse Row (MSR) Scheme
• It has only two arrays:
  – a real array AA, and
  – an integer array JA.
• The first n positions in AA contain the diagonal elements of the matrix, in order. The unused position n + 1 of the array AA may sometimes carry some information concerning the matrix.
• Starting at position n + 2, the nonzero entries of AA, excluding the diagonal elements, are stored by row. For each element AA(k), the integer JA(k) represents its column index in the matrix.
• The first n + 1 positions of JA contain the pointers to the beginning of each row in AA and JA. Thus, for the above example, the two arrays will be as follows:

      AA = [ 1.  4.  7.  11.  12.   *   2.  3.  5.  6.  8.  9.  10. ]
      JA = [ 7   8   10  13   14   14   4   1   4   1   4   5   3 ]

  (the * marks the unused position n + 1)
Modified Sparse Row (MSR) Scheme
• The MSR (Modified Sparse Row) format is a special version of CSR for square matrices, exploiting the facts that:
  – the diagonal elements of many matrices are usually nonzero (e.g., matrices generated by finite elements), and
  – the diagonal elements are accessed more often than the rest of the elements
• Diagonal entries can be stored in one single array, since their indices are implicitly known from their position in the array
• The MSR format turns out to be very efficient in terms of memory
  – it is one of the most compact formats for sparse matrices
  – it is used in several linear algebra libraries for large problems
• The drawback is that it only applies to square matrices
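A minimal MATLAB sketch of y = A*x in MSR form, using the AA and JA arrays shown above (the function name msr_matvec is an illustrative choice):

    function y = msr_matvec(AA, JA, x)
    % Product y = A*x for a square matrix stored in MSR format.
    n = length(x);
    y = zeros(n, 1);
    for i = 1:n
        y(i) = AA(i) * x(i);           % diagonal contribution
        for k = JA(i):JA(i+1)-1        % off-diagonal nonzeros of row i
            y(i) = y(i) + AA(k) * x(JA(k));
        end
    end
    end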
Diagonally structured matrix
• These are matrices whose nonzero elements are located along a small number of diagonals. These diagonals can be stored in a rectangular array DIAG(1:n, 1:Nd), where Nd is the number of diagonals.
• The offsets of each of the diagonals with respect to the main diagonal must be known. These are stored in an array IOFF(1:Nd).
• Thus, the element a(i, i+IOFF(j)) of the original matrix is located in position (i, j) of the array DIAG, i.e., DIAG(i, j) = a(i, i+IOFF(j)).
• The order in which the diagonals are stored in the columns of DIAG is generally unimportant, though if several more operations are performed with the main diagonal, storing it in the first column may be slightly advantageous.
• Note also that all the diagonals except the main diagonal have fewer than n elements, so there are positions in DIAG that will not be used.
• Example 3: The matrix

      A = [  1.   0.   2.   0.   0.
             3.   4.   0.   5.   0.
             0.   6.   7.   0.   8.
             0.   0.   9.  10.   0.
             0.   0.   0.  11.  12. ]

  has three diagonals (offsets −1, 0, and 2) and is stored as

      DIAG = [   *   1.   2.
                 3.   4.   5.
                 6.   7.   8.
                 9.  10.   *
                11.  12.   * ]        IOFF = [ -1  0  2 ]

  where * marks padded, unused positions.
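A minimal MATLAB sketch of y = A*x in this diagonal format, using the DIAG and IOFF arrays of Example 3 (n = 5, Nd = 3):

    y = zeros(n, 1);
    for j = 1:Nd
        off = IOFF(j);
        for i = max(1, 1-off):min(n, n-off)   % rows where diagonal j exists
            y(i) = y(i) + DIAG(i, j) * x(i + off);
        end
    end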
Ellpack-Itpack format
• A more general scheme, popular on vector machines, is the Ellpack-Itpack format. The assumption in this scheme is that there are at most Nd nonzero elements per row, where Nd is small.
• Then two rectangular arrays of dimension n × Nd each are required:
  – The first, COEF, is similar to DIAG and contains the nonzero elements of A. The nonzero elements of each row of the matrix can be stored in a row of the array COEF(1:n, 1:Nd), completing the row by zeros as necessary.
  – The second, an integer array JCOEF(1:n, 1:Nd), must be stored which contains the column positions of each entry in COEF.
Cont’d…
• Example 4: Thus, for the matrix of the previous example, the Ellpack-Itpack storage scheme is

      COEF = [  1.   2.   0.          JCOEF = [ 1  3  1
                3.   4.   5.                    1  2  4
                6.   7.   8.                    2  3  5
                9.  10.   0.                    3  4  4
               11.  12.   0. ]                  4  5  5 ]

• A column number must be chosen for each of the zero elements that must be added to pad the shorter rows of A, i.e., rows 1, 4, and 5.
• In the example, those integers are selected to be equal to the row numbers, as can be seen in the JCOEF array.
• This is somewhat arbitrary, and in fact, any integer between 1 and n would be acceptable.
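One reason this format suits vector machines: the product y = A*x reduces to Nd element-wise (vectorized) operations. A MATLAB sketch using the COEF and JCOEF arrays above (n = 5, Nd = 3); the padded zeros in COEF contribute nothing:

    x = ones(5, 1);
    y = zeros(5, 1);
    for j = 1:Nd
        y = y + COEF(:, j) .* x(JCOEF(:, j));   % one vector operation per column
    end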
Sparsity Techniques in Power Systems
• In large power systems, each bus is connected to only a small number of other buses.
• Therefore, the bus admittance matrix of a large power system is very sparse, i.e., the number of nonzero elements Nz is much smaller than the total number of elements.
• This characteristic feature gives a considerable reduction in computer storage and computation time.
• This sparsity feature of the Ybus matrix also extends to the Jacobian matrix. Sparsity can be simply defined to indicate the absence of certain problem interconnections.
• Mathematically, the sparsity of an n × n matrix is given as

      Sparsity = (number of zero elements) / n² = (n² − Nz) / n²

• Though Ybus is sparse, Zbus is full.
Cont’d…
• The sparsity technique is employed to ensure that only the nonzero elements are stored while the full characteristic of the original matrix is not lost.
• e.g.: Assume a 100 bus electrical power system in which each bus is on average connected to 1.5 other buses. We then get a 100 by 100 matrix with 100 · 100 elements.
• With the above assumption, the Ybus matrix has 100 diagonal elements ≠ 0, 150 elements above the diagonal ≠ 0, and 150 elements below the diagonal ≠ 0.
• The sparsity of this matrix will therefore be:

      Sparsity = (100 · 100 − 150 − 150 − 100) / (100 · 100) = 96.0 %

• We can say that the matrix is 4% full.
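The same calculation as a trivial MATLAB check of the figures above:

    n  = 100;
    nz = 100 + 150 + 150;            % diagonal + upper + lower nonzero elements
    sparsity = (n^2 - nz) / n^2      % = 0.96, i.e., the matrix is 4% full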
Cont’d…
• Similarly, for a 1000 bus system, we get a matrix with 1000 · 1000 elements.
• With the same assumption about how each bus is interconnected, the Ybus matrix has 1000 diagonal elements ≠ 0, 1500 elements above the diagonal ≠ 0, and 1500 elements below the diagonal ≠ 0. The sparsity of this matrix will be:

      Sparsity = (1000 · 1000 − 1500 − 1500 − 1000) / (1000 · 1000) = 99.60 %

• We can say that the matrix is 0.4% full.
Sparsity of the Jacobian Matrix
• The load flow Jacobian consists of four submatrices, each with diagonal and off-diagonal terms:
  – the H-matrix (∂P/∂θ terms),
  – the N-matrix (∂P/∂V terms),
  – the J-matrix (∂Q/∂θ terms), and
  – the L-matrix (∂Q/∂V terms).
• The elements of Ybus also appear in the Jacobian submatrices: in particular, each off-diagonal term contains the corresponding element of Ybus, and is therefore zero wherever that Ybus element is zero.
• Conclusion: If Ybus is a sparse matrix, the Jacobian matrix is also sparse.
Sparse Direct Solution Methods
• Most direct methods for sparse linear systems perform an LU factorization of the original matrix and try to reduce cost by minimizing fill-ins.
• A typical sparse direct solver for positive definite matrices consists of four phases:
  1. Preordering is applied to reduce fill-in. Two popular methods are used: minimum degree ordering and nested-dissection ordering.
  2. A symbolic factorization is performed. This means that the factorization is processed only symbolically, i.e., without numerical values.
  3. The numerical factorization, in which the actual factors L and U are formed, is processed.
  4. The forward and backward triangular sweeps are executed for each different right-hand side.
Reading Assignment: minimum degree ordering and nested-dissection ordering
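A small MATLAB sketch of the effect of phase 1 (the test matrix here is MATLAB’s built-in five-point Laplacian on a square grid, not a power system matrix):

    A  = delsq(numgrid('S', 30));    % sparse symmetric positive definite matrix
    L1 = chol(A, 'lower');           % factor in the natural ordering
    p  = symamd(A);                  % approximate minimum degree preordering
    L2 = chol(A(p, p), 'lower');     % factor the reordered matrix
    [nnz(L1), nnz(L2)]               % the reordered factor has much less fill-in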
Solution Methods for Linear Equations Based on Numerical Analysis
• Direct methods
  – Gaussian elimination
  – Gauss-Jordan elimination
  – LU factorization
  – Cholesky factorization (Reading Assignment)
• Iterative methods
  – Jacobi iteration
  – Gauss-Seidel iteration
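For contrast with the direct methods developed next, here is a minimal MATLAB sketch of the Jacobi iteration (convergence assumes, e.g., a diagonally dominant A; the function name jacobi_solve is an illustrative choice):

    function x = jacobi_solve(A, b, tol, maxit)
    % Jacobi iteration for Ax = b: split A = D + R and iterate
    % x_new = D \ (b - R*x) until the update is small.
    D = diag(diag(A));               % diagonal part of A
    R = A - D;                       % off-diagonal part of A
    x = zeros(size(b));
    for k = 1:maxit
        xnew = D \ (b - R * x);
        if norm(xnew - x) < tol
            x = xnew;
            return
        end
        x = xnew;
    end
    end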
Gaussian Elimination
Gauss Elimination Transforms the Matrix to the Following Form:

      U = [ u11  u12  ...  u1n
             0   u22  ...  u2n
             :    :    .    :
             0    0   ...  unn ]

• This matrix is U, an “upper” (or, if the diagonal is normalized to ones, “unit upper”) triangular matrix.
• The elements are calculated iteratively, where each step is an “elementary row operation”.
• It is then possible to solve the system with a simple “back substitution”, dividing each row by its diagonal element.
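A minimal MATLAB sketch of the forward elimination (no pivoting, so nonzero pivots are assumed; a practical code would add partial pivoting):

    % Reduce the system A*x = b to upper triangular form
    n = length(b);
    for k = 1:n-1                    % current pivot column
        for i = k+1:n                % rows below the pivot
            m = A(i, k) / A(k, k);   % elimination multiplier
            A(i, k:n) = A(i, k:n) - m * A(k, k:n);
            b(i) = b(i) - m * b(k);
        end
    end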
Back Substitution
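A matching MATLAB sketch of the back substitution applied to the triangularized system (A now holds U):

    x = zeros(n, 1);
    x(n) = b(n) / A(n, n);
    for i = n-1:-1:1                 % work upward from the last row
        x(i) = (b(i) - A(i, i+1:n) * x(i+1:n)) / A(i, i);
    end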
LU Decomposition
If solving a set of linear equations [A][X] = [b],
and [A] = [L][U], then [L][U][X] = [b].
Multiplying by [L]⁻¹ gives [L]⁻¹[L][U][X] = [L]⁻¹[b].
Remember [L]⁻¹[L] = [I], which leads to [I][U][X] = [L]⁻¹[b],
and since [I][U] = [U], to [U][X] = [L]⁻¹[b].
Now, let [G] = [L]⁻¹[b], i.e., [L][G] = [b]   (1)
which ends with [U][X] = [G]   (2)

LU Decomposition: Given [A][X] = [b]
Steps: 1. Decompose [A] into [L] and [U]
       2. Solve [L][G] = [b] for [G] (forward substitution)
       3. Solve [U][X] = [G] for [X] (back substitution)
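The same three steps in MATLAB (the built-in lu pivots for stability, so a permutation matrix P is carried along):

    [L, U, P] = lu(A);     % step 1: decompose, with P*A = L*U
    G = L \ (P * b);       % step 2: forward substitution for [G]
    X = U \ G;             % step 3: back substitution for [X]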
Matlab and Sparse Matrices
• MATLAB never creates sparse matrices automatically
• A representation of the sparsity pattern is given by the command spy
• You must determine if a matrix contains a large enough percentage of zeros to benefit from sparse techniques
• The density of a matrix is the number of nonzero elements divided by the total number of matrix elements
  – for a matrix A, this would be nnz(A)/prod(size(A)) or nnz(A)/numel(A)
• Matrices with very low density are often good candidates for use of the sparse format
Material for Matlab Work
https://it.mathworks.com/help/matlab/math/constructing-sparse-matrices.html
Converting Full to Sparse
• You can convert a full matrix to sparse storage using the sparse function with a single argument:

      S = sparse(A)

• For example, given the matrix A:

      A = [0 0 0 5
           0 2 0 0
           1 3 0 0
           0 0 4 0];

  S = sparse(A) produces:

      S =
         (3,1)   1
         (2,2)   2
         (3,2)   3
         (4,3)   4
         (1,4)   5

• Output: the nonzero elements of S, with their row and column indices
• The elements are sorted by columns
Creating Sparse Matrices Directly
• You can create a sparse matrix from a list of nonzero elements using the sparse function with five arguments:

      S = sparse(i,j,s,m,n)

  where
  – i and j are vectors of row and column indices, respectively, for the nonzero elements of the matrix
  – s is a vector of nonzero values whose indices are specified by the corresponding (i,j) pairs
  – m is the row dimension of the resulting matrix
  – n is the column dimension
• The matrix S of the previous example can be generated with:

      S = sparse([3 2 3 4 1],[1 2 2 3 4],[1 2 3 4 5],4,4)
