
Kirchhoff's theorem

In the mathematical field of graph theory, Kirchhoff's theorem or Kirchhoff's matrix tree theorem, named after Gustav Kirchhoff, is
a theorem about the number of spanning trees in a graph, showing that this number can be computed in polynomial time as
the determinant of a matrix derived from the graph. It is a generalization of Cayley's formula, which provides the number of spanning
trees in a complete graph.

Kirchhoff's theorem relies on the notion of the Laplacian matrix of a graph, which is equal to the difference between the graph's degree
matrix (a diagonal matrix with the vertex degrees on the diagonal) and its adjacency matrix (a (0,1)-matrix with 1's at places
corresponding to entries where the vertices are adjacent and 0's otherwise).

For a given connected graph G with n labeled vertices, let λ1, λ2, ..., λn−1 be the non-zero eigenvalues of its Laplacian matrix. Then
the number of spanning trees of G is

t(G) = (1/n) · λ1 λ2 ⋯ λn−1.

Equivalently, the number of spanning trees is equal to any cofactor of the Laplacian matrix of G.

An example using the matrix-tree theorem


First, construct the Laplacian matrix Q for the example kite graph G (the original figure is omitted here; one consistent labeling has
vertices 1, 2, 3, 4 with edges {1,2}, {1,3}, {2,3}, {2,4}, {3,4}):

Q = [  2  −1  −1   0 ]
    [ −1   3  −1  −1 ]
    [ −1  −1   3  −1 ]
    [  0  −1  −1   2 ]

Next, construct a matrix Q* by deleting any row and any column from Q. For example, deleting row 1 and column 1 yields

Q* = [  3  −1  −1 ]
     [ −1   3  −1 ]
     [ −1  −1   2 ]

Finally, take the determinant of Q* to obtain t(G), which is 8 for the kite graph. (Notice that t(G) is the (1,1)-cofactor of Q in this
example.)
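
This computation is easy to reproduce numerically. Here is a minimal Python sketch, assuming the vertex labeling and edge set
described above; it builds Q, deletes the first row and column, and takes the determinant.

# Minimal check of the example above, assuming vertices 1..4 with
# edges {1,2}, {1,3}, {2,3}, {2,4}, {3,4}.
import numpy as np

# Laplacian Q = degree matrix minus adjacency matrix.
Q = np.array([[ 2, -1, -1,  0],
              [-1,  3, -1, -1],
              [-1, -1,  3, -1],
              [ 0, -1, -1,  2]])

# Delete row 1 and column 1 (index 0) to form Q*.
Q_star = np.delete(np.delete(Q, 0, axis=0), 0, axis=1)

# The determinant of Q* is the number of spanning trees t(G).
t = round(np.linalg.det(Q_star))
print(t)  # expected: 8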

Proof outline
First notice that the Laplacian has the property that the sum of its entries across any row and any column is 0. Thus we can
transform any minor into any other minor by adding rows and columns to other rows and columns, swapping them, and multiplying a row or a column by −1.
Hence the cofactors are all equal up to sign, and it can be verified that, in fact, they have the same sign.

We proceed to show that the determinant of the minor M_{11} counts the number of spanning trees. Let n be the number of vertices of
the graph, and m the number of its edges. The incidence matrix E is an n-by-m matrix, which may be defined as follows: suppose
that (i, j) is the kth edge of the graph, and that i < j. Then E_{ik} = 1, E_{jk} = −1, and all other entries in column k are 0 (see
oriented incidence matrix for understanding this modified incidence matrix E). For the preceding example (with n = 4 and m = 5,
taking the edges in the order {1,2}, {1,3}, {2,3}, {2,4}, {3,4}):

E = [  1   1   0   0   0 ]
    [ −1   0   1   1   0 ]
    [  0  −1  −1   0   1 ]
    [  0   0   0  −1  −1 ]

Recall that the Laplacian L can be factored into the product of the incidence matrix and its transpose, i.e., L = EE^T.
Furthermore, let F be the matrix E with its first row deleted, so that FF^T = M_{11}.

Now the Cauchy–Binet formula allows us to write

det(M_{11}) = det(FF^T) = Σ_S det(F_S) det(F_S^T) = Σ_S det(F_S)²

where S ranges across subsets of [m] of size n − 1, and F_S denotes the (n − 1)-by-(n − 1) matrix whose columns are
those of F with index in S. Then every S specifies n − 1 edges of the original graph, and it can be shown that those edges
induce a spanning tree if and only if the determinant of F_S is +1 or −1, and that they do not induce a spanning tree if and only if the
determinant is 0. Since each spanning tree therefore contributes exactly 1 to the sum and every other subset contributes 0, the sum
equals the number of spanning trees. This completes the proof.
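
The Cauchy–Binet step can be checked numerically for the same example. The sketch below assumes the edge ordering {1,2},
{1,3}, {2,3}, {2,4}, {3,4}; it compares det(FF^T) with the sum of det(F_S)² over all column subsets of size n − 1.

# Verify the Cauchy-Binet step for the kite graph example, assuming
# the edge ordering {1,2}, {1,3}, {2,3}, {2,4}, {3,4}.
from itertools import combinations
import numpy as np

# Oriented incidence matrix E (rows = vertices, columns = edges).
E = np.array([[ 1,  1,  0,  0,  0],
              [-1,  0,  1,  1,  0],
              [ 0, -1, -1,  0,  1],
              [ 0,  0,  0, -1, -1]])

F = E[1:, :]                       # drop the first row
cofactor = round(np.linalg.det(F @ F.T))

# Sum det(F_S)^2 over all (n-1)-element column subsets S.
n_minus_1, m = F.shape
total = sum(round(np.linalg.det(F[:, list(S)])) ** 2
            for S in combinations(range(m), n_minus_1))

print(cofactor, total)             # both should equal 8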

Particular cases and generalizations


Cayley's formula
Main article: Cayley's formula

Cayley's formula follows from Kirchhoff's theorem as a special case, since every vector with 1 in one place, −1 in another place, and
0 elsewhere is an eigenvector of the Laplacian matrix of the complete graph, with the corresponding eigenvalue being n. These
vectors together span a space of dimension n − 1, so there are no other non-zero eigenvalues. The theorem then gives
n^(n−1)/n = n^(n−2) spanning trees.

Alternatively, note that as Cayley's formula counts the number of distinct labeled trees of the complete graph Kn, we need to compute
any cofactor of the Laplacian matrix of Kn. The Laplacian matrix in this case is the n × n matrix with n − 1 in every diagonal entry
and −1 in every off-diagonal entry (that is, n·I_n − J_n, where J_n is the all-ones matrix).

Any cofactor of the above matrix is n^(n−2), which is Cayley's formula.
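
A quick numerical sketch of this fact, checking that any cofactor of n·I_n − J_n equals n^(n−2) for a few small n:

# Check that any cofactor of the Laplacian of K_n equals n^(n-2).
import numpy as np

for n in range(2, 8):
    L = n * np.eye(n) - np.ones((n, n))   # n-1 on the diagonal, -1 elsewhere
    cofactor = round(np.linalg.det(L[1:, 1:]))
    print(n, cofactor, n ** (n - 2))      # the last two columns agree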

Kirchhoff's theorem for multigraphs

Kirchhoff's theorem holds for multigraphs as well; the matrix Q is modified as follows (a small numerical check follows the list):

 if vertex i is adjacent to vertex j in G, q_{i,j} equals −m, where m is the number of edges between i and j;
 when counting the degree of a vertex, all loops are excluded.
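
As a small illustrative check (the two-vertex multigraph here is only an example): take two vertices joined by three parallel edges,
which has exactly 3 spanning trees (choose any one of the edges); the modified matrix is [[3, −3], [−3, 3]], and any cofactor gives 3.

# Multigraph example: two vertices joined by three parallel edges.
import numpy as np

Q = np.array([[ 3, -3],
              [-3,  3]])                 # degree 3 on the diagonal, -3 off-diagonal
print(round(np.linalg.det(Q[1:, 1:])))   # any cofactor: prints 3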

Explicit enumeration of spanning trees


Kirchhoff's theorem can be strengthened by altering the definition of the Laplacian matrix. Rather than merely counting edges
emanating from each vertex or connecting a pair of vertices, label each edge with an indeterminate and let the (i, j)-th entry of
the modified Laplacian matrix be the negative of the sum of the indeterminates corresponding to edges between the i-th and j-th vertices
when i does not equal j, and the sum of all indeterminates corresponding to edges emanating from the i-th vertex
when i equals j.

Any cofactor of this matrix is then a homogeneous polynomial (the Kirchhoff polynomial) in the indeterminates corresponding to
the edges of the graph. After collecting terms and performing all possible cancellations, each monomial in the resulting
expression represents a spanning tree consisting of the edges corresponding to the indeterminates appearing in that
monomial. In this way, one can obtain an explicit enumeration of all the spanning trees of the graph simply by computing the
determinant.
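
A small sketch of this construction for a triangle on vertices 1, 2, 3, using sympy symbols x12, x13, x23 as the edge indeterminates;
the three monomials of the resulting Kirchhoff polynomial correspond to the three spanning trees.

# Kirchhoff polynomial of a triangle, with one indeterminate per edge.
import sympy as sp

x12, x13, x23 = sp.symbols('x12 x13 x23')

# Modified Laplacian: diagonal = sum of incident edge labels,
# off-diagonal = minus the label of the connecting edge.
L = sp.Matrix([[x12 + x13, -x12,       -x13      ],
               [-x12,       x12 + x23, -x23      ],
               [-x13,      -x23,        x13 + x23]])

# Any cofactor, e.g. delete row 1 and column 1:
print(sp.expand(L[1:, 1:].det()))
# x12*x13 + x12*x23 + x13*x23  -- one monomial per spanning tree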

Matroids
The spanning trees of a graph form the bases of a graphic matroid, so Kirchhoff's theorem provides a formula to count the
number of bases in a graphic matroid. The same method may also be used to count the number of bases in regular matroids,
a generalization of the graphic matroids (Maurer 1976).
See also

 Prüfer sequences
 Minimum spanning tree
 List of topics related to trees

References

 Harris, John M.; Hirst, Jeffry L.; Mossinghoff, Michael J. (2008), Combinatorics and Graph Theory, Undergraduate Texts
in Mathematics (2nd ed.), Springer.
 Maurer, Stephen B. (1976), "Matrix generalizations of some theorems on trees, cycles and cocycles in graphs", SIAM
Journal on Applied Mathematics 30 (1): 143–148, doi:10.1137/0130017, MR 0392635.
 Tutte, W. T. (2001), Graph Theory, Cambridge University Press, p. 138, ISBN 978-0-521-79489-3.

External links

 A proof of Kirchhoff's theorem


 A discussion on the theorem and similar results

http://fedelebron.com/an-introduction-to-incidence-matrices

An introduction to incidence matrices


Last time we saw some algebraic properties of adjacency matrices. Today I'd like to discuss
another related representation of graphs using matrices: the incidence matrix. The incidence
matrix is defined as follows:

Definition. Given a graph G = (V, E), with |V| = n, |E| = m, and V = {v_1, ⋯, v_n}, E = {e_1, ⋯, e_m},
we define the incidence matrix of G, B, as the {0,1}^{n×m} matrix such that B_{i,j} = 1 ⟺ v_i ∈ e_j. In
other words, B_{i,j} = 1 if and only if edge e_j is incident to node v_i.
To help in understanding, let us see an example. Suppose G = (V, E) is the following graph (the original figure is omitted; it
consists of two triangles sharing the vertex v_3, with edges e_1 = {v_1, v_2}, e_2 = {v_2, v_3}, e_3 = {v_1, v_3},
e_4 = {v_3, v_4}, e_5 = {v_4, v_5}, e_6 = {v_3, v_5}).
Since we have n = 5 nodes and m = 6 edges, the incidence matrix B of G will be an
element of {0,1}^{5×6}. Specifically, we will have the following as its incidence matrix:

B = [ 1  0  1  0  0  0 ]
    [ 1  1  0  0  0  0 ]
    [ 0  1  1  1  0  1 ]
    [ 0  0  0  1  1  0 ]
    [ 0  0  0  0  1  1 ]
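
A minimal sketch of how one might build this incidence matrix from an edge list, assuming the edge ordering used in the matrix above:

# Build the incidence matrix of the example graph from its edge list.
import numpy as np

edges = [(1, 2), (2, 3), (1, 3), (3, 4), (4, 5), (3, 5)]  # e1 .. e6
n, m = 5, len(edges)

B = np.zeros((n, m), dtype=int)
for j, (u, v) in enumerate(edges):
    B[u - 1, j] = 1      # B[i, j] = 1  iff  v_i is an endpoint of e_j
    B[v - 1, j] = 1

print(B)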

Unlike the adjacency matrix, the incidence matrix is rarely used as a data structure in
programming. There are several factors which make it unattractive in many cases:

 It uses Θ(VE) space as opposed to Θ(V²) for the adjacency matrix. Unless E ∈ o(V), this is not an improvement.
 Checking whether a node is adjacent to some other node is O(E) (can you see why?), so it's worse than an adjacency matrix for this.
 Traversing a node's adjacencies is O(E), so it's worse than an adjacency list for this.
For all its faults, however, the incidence matrix does have some very interesting properties. In
some sense, it is giving more of a personality to edges, as opposed to considering them
“something that may happen between nodes”. We will see why this intuition makes sense.

As an initial property, I’d like to show a very common one that relates incidence matrices to
adjacency matrices. It was somewhat unexpected for me :)

A connection between incidence and adjacency matrices


Since we have these two ways to represent graphs, it is natural to ask oneself if they are
related in some algebraic way. The answer is yes, and it is a rather simple relationship:

Lemma. Let G = (V, E) be a graph, n = |V|, m = |E|, A(G) its adjacency matrix,
and B = B(G) its incidence matrix. Then there exists a diagonal matrix Δ ∈ ℕ^{n×n} such that

BB^t = A(G) + Δ

Proof

First, let's check that the dimensions match. B ∈ {0,1}^{n×m}, so BB^t ∈ ℕ^{n×n}.
Since A ∈ {0,1}^{n×n} and Δ ∈ ℕ^{n×n}, the dimensions match. For convenience, let's say V = {v_1,
⋯, v_n}, E = {e_1, ⋯, e_m}. Let us also write A(G) simply as A.
Now let's see what happens to a given entry of BB^t. Let's call the ith row of B B_i. Then we
have

(BB^t)_{i,j} = ∑_{k=1}^{m} b_{i,k} b^t_{k,j} = ∑_{k=1}^{m} b_{i,k} b_{j,k} = ⟨B_i, B_j⟩
So it is the inner product of the ith and jth rows. Since the elements in this inner product are
either 0 or 1, we can look at when both b_{i,k} and b_{j,k} are nonzero. This will be the case
whenever edge k connects nodes v_i and v_j, in other words, when e_k = (v_i, v_j). This happens
at most once. In fact, if i ≠ j, it is 1 if and only if A_{i,j} = 1. So for all i ≠ j, (BB^t)_{i,j} = A_{i,j}.
If i = j, then what we have is

(BB^t)_{i,i} = ⟨B_i, B_i⟩ = ∑_{k=1}^{m} b_{i,k}²

b_{i,k} will be 0 if e_k is not incident to v_i, and 1 otherwise. Hence, this summation is the same
as deg(v_i).
Then, if we call Δ_{i,j} = δ_{i,j} deg(v_i), we have that

BB^t = A + Δ
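
A quick numerical check of the lemma for the example graph above, assuming the same vertex and edge ordering:

# Verify BB^t = A + Delta for the example graph (two triangles sharing v3).
import numpy as np

edges = [(1, 2), (2, 3), (1, 3), (3, 4), (4, 5), (3, 5)]
n = 5

B = np.zeros((n, len(edges)), dtype=int)
A = np.zeros((n, n), dtype=int)
for j, (u, v) in enumerate(edges):
    B[u - 1, j] = B[v - 1, j] = 1
    A[u - 1, v - 1] = A[v - 1, u - 1] = 1

Delta = np.diag(B.sum(axis=1))             # diagonal matrix of degrees
print(np.array_equal(B @ B.T, A + Delta))  # True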

A connection between G and its line graph L(G)


Lemma. Given a graph G and its incidence matrix B, call A(L(G)) the adjacency matrix of
its line graph, L(G). Then

B^t B = 2 I_m + A(L(G))

with I_m the identity matrix in {0,1}^{m×m}. The proof is straightforward.


Proof

Call n = |V|, m = |E|. B ∈ {0,1}^{n×m}, so B^t B ∈ ℕ^{m×m} (its diagonal entries will turn out to be 2). Let 1 ≤ i, j ≤ m.

(B^t B)_{i,j} = ∑_{k=1}^{n} b^t_{i,k} b_{k,j} = ∑_{k=1}^{n} b_{k,i} b_{k,j} = ⟨B_i, B_j⟩

where we denote by B_i the ith column of B.
Suppose i = j. Then ⟨B_i, B_j⟩ = ⟨B_i, B_i⟩ = ∑_{k=1}^{n} b_{k,i}² = 2, since e_i is incident to exactly two nodes.
Suppose i ≠ j. For b_{k,i} b_{k,j} to be nonzero, both e_i and e_j must be incident to v_k. Clearly, two
edges e_i and e_j can both be incident to at most one node, so for all other k′ which are
not that specific v_k (if it exists), b_{k′,i} b_{k′,j} = 0, and b_{k,i} b_{k,j} = 1. Hence ⟨B_i, B_j⟩ ∈ {0,1}.
Consider the line graph of G, L(G). L(G) has, as nodes, the edges of G. Two nodes
of L(G) are connected by an edge in L(G) if and only if the edges of G they represent
share an incidence to a vertex in G (which we saw above is unique if it exists). Hence, if we
take the adjacency matrix A(L(G)) of L(G), then
for 1 ≤ i, j ≤ m, A(L(G))_{i,j} = 1 ⟺ e_i and e_j share an incidence to some node v_k. For i ≠ j this is
exactly ⟨B_i, B_j⟩, and the diagonal entries of B^t B, all equal to 2, are supplied by the 2 I_m term.
Then B^t B = 2 I_m + A(L(G)).
From this, we can extract some useful information. For instance, we can tell that every
eigenvalue of a line graph will be at least −2. The proof is simple: B^t B is positive semidefinite, so all of its
eigenvalues are non-negative, and hence the eigenvalues of A(L(G)) = B^t B − 2 I_m are all at least −2.
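
For concreteness, a numerical check of the lemma and of the eigenvalue bound, again assuming the example graph and edge ordering from before:

# Verify B^t B = 2 I_m + A(L(G)) and the eigenvalue bound for the example graph.
import numpy as np

edges = [(1, 2), (2, 3), (1, 3), (3, 4), (4, 5), (3, 5)]
n, m = 5, len(edges)

B = np.zeros((n, m), dtype=int)
for j, (u, v) in enumerate(edges):
    B[u - 1, j] = B[v - 1, j] = 1

# Adjacency matrix of the line graph: e_i ~ e_j iff they share a vertex.
AL = np.zeros((m, m), dtype=int)
for i in range(m):
    for j in range(m):
        if i != j and set(edges[i]) & set(edges[j]):
            AL[i, j] = 1

print(np.array_equal(B.T @ B, 2 * np.eye(m, dtype=int) + AL))  # True
eigs = np.linalg.eigvalsh(AL)
print(eigs.min())   # smallest eigenvalue, approximately -2, consistent with the bound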

Ranks and determinants of incidence matrices


This relation between an incidence matrix and a line graph is already an interesting algebraic
property, and we will make use of it in future expositions. I’d now like to introduce longer, but
maybe more interesting results regarding incidence matrices.

Theorem. Let G = (V, E) be a graph, and let B be its incidence matrix. Call rk_K(B) the rank
of B over the field K. Then

rk_{GF(2)}(B) = |V| − 1 ⟺ G is connected.

Proof

⇐) We will first show that the n rows of B are linearly dependent, and then that any proper subset
of the rows is linearly independent. The result follows.
For 1 ≤ i ≤ n, take v_i to be the ith row of B. We have that ∑_{i=1}^{n} v_i = 0, since each entry j of
the result will be ∑_{i=1}^{n} B_{i,j}, the sum of the entries of the jth column of the matrix B, and
every one of those columns has exactly two 1s. Hence, every entry in this vector will be even
(in particular, 2), and since we are working over GF(2), this is equivalent to the 0 vector.
Hence the rows are linearly dependent, and the matrix B does not have full row rank
over GF(2). Then rk_{GF(2)}(B) ≤ n − 1.
To see that the rank over GF(2) is n − 1, let us consider any proper subset S of the rows. So
let's say we have d ≤ n − 1 rows S = {v_{r_1}, ⋯, v_{r_d}}, and suppose x = ∑_{i=1}^{d} v_{r_i} = 0. This is a linear
combination of the rows in S equal to 0, so if it exists, it proves that S is linearly
dependent over GF(2). Since G is connected, there is at least one edge e_j that connects
one of the d nodes represented by these d rows to some other node not among them.
Then the jth entry of x must be nonzero, because only one of that column's two 1 entries is taken
into account when adding. But then x was nonzero, which is absurd.
Therefore, any subset of d ≤ n − 1 rows of the matrix B is linearly independent over GF(2),
and thus rk_{GF(2)}(B) ≥ n − 1. Since we also had rk_{GF(2)}(B) ≤ n − 1, the result follows.
⇒) We know that rk_{GF(2)}(B) = |V| − 1, and we want to prove that G is connected. As always,
call n = |V|. Because rk_{GF(2)}(B) = n − 1, there are no d < n rows v_1, ⋯, v_d such
that ∑_{i=1}^{d} v_i = 0. So if we take any proper subset v_1, ⋯, v_d of the nodes, there must always be at
least one edge connecting it to the other nodes: if there were not, every edge touching these nodes
would have both endpoints among them, each entry of the sum of the corresponding rows would be
0 or 2, and over GF(2) that sum would be the 0 vector, contradicting the rank. Thus, no set of d < n
nodes can be separated from the rest of the graph, meaning G is connected.
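
As a final sketch, the theorem can be checked by computing the rank of B over GF(2) with Gaussian elimination mod 2; the
connected edge list below is the running example, and the disconnected one is illustrative.

# Rank of the incidence matrix over GF(2) for a connected vs. disconnected graph.
import numpy as np

def incidence(n, edges):
    B = np.zeros((n, len(edges)), dtype=int)
    for j, (u, v) in enumerate(edges):
        B[u - 1, j] = B[v - 1, j] = 1
    return B

def rank_gf2(M):
    M = M.copy() % 2
    rank, col = 0, 0
    rows, cols = M.shape
    while rank < rows and col < cols:
        pivot = next((r for r in range(rank, rows) if M[r, col]), None)
        if pivot is not None:
            M[[rank, pivot]] = M[[pivot, rank]]       # move pivot row up
            for r in range(rows):
                if r != rank and M[r, col]:
                    M[r] = (M[r] + M[rank]) % 2       # eliminate mod 2
            rank += 1
        col += 1
    return rank

connected    = [(1, 2), (2, 3), (1, 3), (3, 4), (4, 5), (3, 5)]
disconnected = [(1, 2), (2, 3), (1, 3), (4, 5)]

print(rank_gf2(incidence(5, connected)))     # 4  (= |V| - 1)
print(rank_gf2(incidence(5, disconnected)))  # 3  (< |V| - 1)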
