1 Department of Computer Science, Rensselaer Polytechnic Institute, Troy, NY, USA
2 Department of Computer Science, Universidade Federal de Minas Gerais, Belo Horizonte, Brazil
Zaki & Meira Jr. (RPI and UFMG) Data Mining and Machine Learning Chapter 4: Graph Data 1 / 48
Graphs
Undirected and Directed Graphs
[Figure: an undirected graph (left) and a directed graph (right) on vertices v1–v8]
Degree Distribution
The degree of a node vi ∈ V is the number of edges incident with it, and is denoted as d(vi) or just di.
The degree sequence of a graph is the list of the degrees of the nodes sorted in non-increasing order.
Let Nk denote the number of vertices with degree k. The degree frequency distribution of a graph is given as
(N0, N1, . . . , Nt)
where t is the maximum degree for a node in G.
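The definitions above can be sketched in Python; the adjacency-list dictionary representation used here is an assumption for illustration:

```python
from collections import Counter

def degree_stats(adj):
    """Compute the degree sequence and degree frequency distribution
    (N_0, ..., N_t) from an adjacency list {node: set(neighbors)}
    of a simple undirected graph."""
    degrees = {v: len(nbrs) for v, nbrs in adj.items()}
    # Degree sequence: degrees sorted in non-increasing order.
    seq = sorted(degrees.values(), reverse=True)
    # N_k = number of vertices with degree k, for k = 0, ..., t.
    counts = Counter(degrees.values())
    t = max(degrees.values())
    freq = [counts.get(k, 0) for k in range(t + 1)]
    return seq, freq

# Toy graph: a path 1 - 2 - 3 plus an isolated node 4.
adj = {1: {2}, 2: {1, 3}, 3: {2}, 4: set()}
seq, freq = degree_stats(adj)
print(seq)   # [2, 1, 1, 0]
print(freq)  # (N_0, N_1, N_2) = [1, 2, 1]
```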
Degree Distribution
[Figure: example graph on vertices v1–v8 used to illustrate the degree distribution]
Adjacency Matrix
An undirected graph G = (V, E), with |V| = n, can be represented by an n × n, symmetric, binary adjacency matrix A, with A(i, j) = 1 if (vi, vj) ∈ E and A(i, j) = 0 otherwise.
Graphs from Data Matrix
Many datasets that are not in the form of a graph can still be converted into one.
Let D = {xi}, i = 1, . . . , n, with xi ∈ R^d, be a dataset. Define a weighted graph G = (V, E), with one node per point and edge weight
w_ij = sim(x_i, x_j)
where sim(x_i, x_j) denotes the similarity between points x_i and x_j.
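A minimal Python sketch of this construction, assuming the Gaussian similarity sim(x_i, x_j) = exp(−‖x_i − x_j‖² / 2σ²) used on the next slide (the function name and data are illustrative):

```python
import math

def similarity_graph(X, sigma, threshold):
    """Build a weighted graph from data points using the Gaussian
    similarity sim(x_i, x_j) = exp(-||x_i - x_j||^2 / (2 sigma^2)),
    keeping an edge (i, j) only when w_ij >= threshold."""
    n = len(X)
    edges = {}
    for i in range(n):
        for j in range(i + 1, n):
            d2 = sum((a - b) ** 2 for a, b in zip(X[i], X[j]))
            w = math.exp(-d2 / (2 * sigma ** 2))
            if w >= threshold:
                edges[(i, j)] = w
    return edges

# Two nearby points and one far-away point.
X = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0)]
E = similarity_graph(X, sigma=1 / math.sqrt(2), threshold=0.777)
print(sorted(E))  # [(0, 1)]: only the near pair survives the threshold
```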
Iris Similarity Graph: Gaussian Similarity
σ = 1/√2; edge exists iff w_ij ≥ 0.777
order: |V| = n = 150; size: |E| = m = 753
[Figure: Iris similarity graph; circle, triangle, and square markers denote the three Iris classes]
Topological Graph Attributes
Graph attributes are local if they apply to only a single node (or an edge), and
global if they refer to the entire graph.
Degree: The degree of a node vi ∈ G is defined as
d_i = Σ_j A(i, j)
The corresponding global attribute for the entire graph G is the average degree:
μ_d = (Σ_i d_i) / n
Iris Graph: Degree Distribution
[Histogram: degree k (1–43) on the x-axis versus relative frequency f(k) on the y-axis, ranging up to about 0.09; the distribution has a long right tail]
Iris Graph: Path Length Histogram
[Histogram: path length k (0–11) on the x-axis versus frequency on the y-axis; frequencies range up to 1044]
Eccentricity, Radius and Diameter
The eccentricity of a node vi is the maximum distance from vi to any other node in the graph:
e(vi) = max_j { d(vi, vj) }
The radius of a connected graph, denoted r(G), is the minimum eccentricity of any node: r(G) = min_i { e(vi) }.
The diameter, denoted d(G), is the maximum eccentricity of any vertex in the graph:
d(G) = max_i { e(vi) } = max_{i,j} { d(vi, vj) }
For a disconnected graph, values are computed over the connected components of the graph.
The diameter of a graph G is sensitive to outliers. The effective diameter is more robust; it is defined as the minimum number of hops for which a large fraction, typically 90%, of all connected pairs of nodes can reach each other.
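For unweighted graphs these quantities reduce to breadth-first search; a minimal sketch, assuming an adjacency-list representation:

```python
from collections import deque

def bfs_dist(adj, src):
    """Hop distances from src in an unweighted graph (adjacency list)."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def eccentricity(adj, v):
    """e(v) = max distance from v to any reachable node."""
    return max(bfs_dist(adj, v).values())

def diameter(adj):
    """d(G) = maximum eccentricity over all nodes."""
    return max(eccentricity(adj, v) for v in adj)

# Path graph 0 - 1 - 2 - 3: e(1) = 2, e(0) = 3, diameter 3.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(eccentricity(path, 1))  # 2
print(diameter(path))         # 3
```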
Clustering Coefficient
The clustering coefficient of a node vi is a measure of the density of edges in the neighborhood of vi.
Let Gi = (Vi, Ei) be the subgraph induced by the neighbors of vertex vi. Note that vi ∉ Vi, as we assume that G is simple.
Let |Vi| = ni be the number of neighbors of vi, and |Ei| = mi the number of edges among the neighbors of vi. The clustering coefficient of vi is defined as
C(vi) = (no. of edges in Gi) / (maximum number of edges in Gi) = mi / (ni choose 2) = 2·mi / (ni(ni − 1))
The clustering coefficient of a graph G is simply the average clustering coefficient over all the nodes, given as
C(G) = (1/n) Σ_i C(vi)
C(vi) is well defined only for nodes with degree d(vi) ≥ 2; thus we define C(vi) = 0 if di < 2.
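The two definitions can be sketched directly, assuming an adjacency-list representation with comparable node labels:

```python
def clustering_coefficient(adj, v):
    """C(v) = 2*m_v / (n_v*(n_v - 1)), where n_v is the number of
    neighbors of v and m_v the number of edges among them; defined
    as 0 when deg(v) < 2."""
    nbrs = adj[v]
    n_v = len(nbrs)
    if n_v < 2:
        return 0.0
    # Count each neighbor pair (u, w) once, checking if the edge exists.
    m_v = sum(1 for u in nbrs for w in nbrs if u < w and w in adj[u])
    return 2.0 * m_v / (n_v * (n_v - 1))

def graph_clustering(adj):
    """C(G) = average of C(v) over all nodes."""
    return sum(clustering_coefficient(adj, v) for v in adj) / len(adj)

# Triangle 0-1-2 with a pendant node 3 attached to 2.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(round(clustering_coefficient(adj, 2), 2))  # 0.33: 1 edge among 3 neighbors
```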
Transitivity and Efficiency
The transitivity of the graph G is defined as
T(G) = 3 × (no. of triangles in G) / (no. of connected triples in G)
where the subgraph composed of the edges (vi, vj) and (vi, vk) is a connected triple centered at vi, and a connected triple centered at vi that includes (vj, vk) is called a triangle (a complete subgraph of size 3).
The efficiency for a pair of nodes vi and vj is defined as 1/d(vi, vj). If vi and vj are not connected, then d(vi, vj) = ∞ and the efficiency is 1/∞ = 0.
The efficiency of a graph G is the average efficiency over all pairs of nodes, whether connected or not, given as
(2 / (n(n − 1))) Σ_i Σ_{j>i} 1/d(vi, vj)
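Transitivity can be computed by enumerating triples, a sketch under the same adjacency-list assumption as before (O(Σ deg²), fine for small graphs):

```python
from itertools import combinations

def transitivity(adj):
    """T(G) = 3*(#triangles)/(#connected triples). Counting closed
    triples directly already includes the factor of 3, since each
    triangle is a closed triple centered at each of its 3 nodes."""
    closed = 0
    triples = 0
    for v in adj:
        for u, w in combinations(sorted(adj[v]), 2):
            triples += 1            # connected triple centered at v
            if w in adj[u]:
                closed += 1         # the triple is closed: a triangle
    return closed / triples

# Triangle 0-1-2 with a pendant node 3: 1 triangle, 5 triples.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(transitivity(adj))  # 3*1/5 = 0.6
```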
Clustering Coefficient
Subgraph induced by node v4, with neighbors {v1, v3, v5, v7}:
[Figure: the full graph on v1–v8 and the induced subgraph on v1, v3, v5, v7, which contains 2 edges]
The clustering coefficient of v4 is
C(v4) = 2 / (4 choose 2) = 2/6 = 0.33
Centrality Analysis
Betweenness Centrality
The betweenness centrality measures how many shortest paths between all pairs of
vertices include vi . It gives an indication as to the central “monitoring” role played
by vi for various pairs of nodes.
Let ηjk denote the number of shortest paths between vertices vj and vk, and let ηjk(vi) denote the number of such paths that include or contain vi.
The fraction of paths through vi is denoted as
γjk(vi) = ηjk(vi) / ηjk
The betweenness centrality of vi is the sum of these fractions over all pairs of nodes not involving vi:
c(vi) = Σ_{j≠i} Σ_{k≠i, k>j} γjk(vi)
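A brute-force sketch of this definition, counting shortest paths with BFS (Brandes' algorithm is the standard efficient alternative; the adjacency-list representation is assumed):

```python
from collections import deque
from itertools import combinations

def num_shortest_paths(adj, s):
    """BFS from s returning (dist, sigma): hop distances and the
    number of shortest paths from s to every reachable node."""
    dist, sigma = {s: 0}, {s: 1}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                sigma[v] = 0
                q.append(v)
            if dist[v] == dist[u] + 1:
                sigma[v] += sigma[u]
    return dist, sigma

def betweenness(adj, v):
    """Sum over pairs (j, k), j != v != k, of eta_jk(v) / eta_jk."""
    score = 0.0
    dist_v, sig_v = num_shortest_paths(adj, v)
    for s, t in combinations([u for u in adj if u != v], 2):
        dist_s, sig_s = num_shortest_paths(adj, s)
        if t not in dist_s:
            continue  # pair not connected
        # v lies on a shortest s-t path iff the s->v and v->t legs
        # add up to the shortest s-t distance.
        if v in dist_s and t in dist_v and dist_s[v] + dist_v[t] == dist_s[t]:
            score += sig_s[v] * sig_v[t] / sig_s[t]
    return score

# Path graph 0 - 1 - 2 - 3: nodes 1 and 2 each sit on two shortest paths.
g = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(betweenness(g, 1))  # 2.0
```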
Centrality Values
[Figure: example graph on vertices v1–v8]

Centrality        v1     v2     v3     v4     v5     v6     v7     v8
Degree             4      3      2      4      4      1      2      2
Eccentricity     0.5   0.33   0.33   0.33    0.5   0.25   0.25   0.33
  e(vi)            2      3      3      3      2      4      4      3
Closeness      0.100  0.083  0.071  0.091  0.100  0.056  0.067  0.071
  Σj d(vi,vj)     10     12     14     11     10     18     15     14
Betweenness      4.5      6      0      5    6.5      0   0.83   1.17
Prestige or Eigenvector Centrality
Let p(u) be a positive real number, called the prestige score for node u. Intuitively, the more prestigious the links that point to a given node, the higher its prestige:
p(v) = Σ_u A(u, v) · p(u) = Σ_u A^T(v, u) · p(u)
In matrix form, p′ = A^T p. Starting from an initial prestige vector p_0, after k iterations we have p_k = (A^T)^k p_0. It is well known that the vector p_k converges to the dominant eigenvector of A^T.
Computing Dominant Eigenvector: Power Iteration
The dominant eigenvector of A^T and the corresponding eigenvalue can be computed using the power iteration method. It starts with an initial vector p_0, and in each iteration multiplies on the left by A^T, scaling the intermediate vector p_k by dividing it by its maximum entry p_k[i] to prevent numeric overflow. The ratio of the maximum entry in iteration k to that in iteration k − 1, given as λ = p_k[i] / p_{k−1}[i], yields an estimate for the eigenvalue. The iterations continue until the difference between successive eigenvector estimates falls below some threshold ε > 0.

PowerIteration (A, ε):
1  k ← 0 // iteration
2  p_0 ← 1 ∈ R^n // initial vector
3  repeat
4      k ← k + 1
5      p_k ← A^T p_{k−1} // eigenvector estimate
6      i ← arg max_j { p_k[j] } // maximum value index
7      λ ← p_k[i] / p_{k−1}[i] // eigenvalue estimate
8      p_k ← (1 / p_k[i]) · p_k // scale vector
9  until ‖p_k − p_{k−1}‖ ≤ ε
10 p ← (1 / ‖p_k‖) · p_k // normalize eigenvector
11 return p, λ
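A direct Python translation of the PowerIteration pseudocode, run on the 5-node example adjacency matrix of the following slides (assumed here for the demonstration):

```python
def power_iteration(A, eps=1e-6, max_iter=1000):
    """Power iteration on A^T: multiply by A^T each step and rescale
    by the maximum entry to avoid numeric overflow."""
    n = len(A)
    At = [[A[j][i] for j in range(n)] for i in range(n)]  # transpose
    p = [1.0] * n                                         # p_0 = 1
    lam = 0.0
    for _ in range(max_iter):
        p_new = [sum(At[i][j] * p[j] for j in range(n)) for i in range(n)]
        i_max = max(range(n), key=lambda i: p_new[i])     # arg max entry
        lam = p_new[i_max] / p[i_max]                     # eigenvalue estimate
        p_new = [x / p_new[i_max] for x in p_new]         # scale vector
        diff = sum((a - b) ** 2 for a, b in zip(p_new, p)) ** 0.5
        p = p_new
        if diff <= eps:
            break
    norm = sum(x * x for x in p) ** 0.5
    return [x / norm for x in p], lam

# 5-node example graph (adjacency matrix as on the next slide).
A = [[0, 0, 0, 1, 0],
     [0, 0, 1, 0, 1],
     [1, 0, 0, 0, 0],
     [0, 1, 1, 0, 1],
     [0, 1, 0, 0, 0]]
p, lam = power_iteration(A)
print(round(lam, 3))  # 1.466
```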
Power Iteration for Eigenvector Centrality: Example
[Figure: example graph on vertices v1–v5]

A =
0 0 0 1 0
0 0 1 0 1
1 0 0 0 0
0 1 1 0 1
0 1 0 0 0

A^T =
0 0 1 0 0
0 0 0 1 1
0 1 0 1 0
1 0 0 0 0
0 1 0 1 0
Power Method via Scaling
Starting from p_0 = (1, 1, 1, 1, 1)^T, the scaled vectors p_k and eigenvalue estimates λ are:

k:      1      2      3      4      5      6      7
p_k:  0.5   0.67   0.75   0.67   0.67   0.69   0.68
        1      1      1      1      1      1      1
        1      1      1      1      1      1      1
      0.5   0.33    0.5    0.5   0.44   0.46   0.47
        1      1      1      1      1      1      1
λ:      2    1.5   1.33    1.5    1.5  1.444  1.462
Convergence of the Ratio to Dominant Eigenvalue
[Plot: eigenvalue estimate λ versus iteration number (0–16); the ratio oscillates initially and converges to λ = 1.466]
PageRank
PageRank is based on (normalized) prestige combined with a random jump
assumption. The PageRank of a node v recursively depends on the PageRank of
other nodes that point to it.
Normalized Prestige: Define N as the normalized adjacency matrix
N(u, v) = 1/od(u) if (u, v) ∈ E, and 0 if (u, v) ∉ E
where od(u) is the out-degree of node u.
Normalized prestige is given as
p(v) = Σ_u N^T(v, u) · p(u)
Random Jumps: Let N_r denote the matrix for the fully random surfer, with each entry equal to 1/n. Combining normalized prestige and random jumps, with α the probability of a random jump, we have:
p′ = (1 − α) N^T p + α N_r^T p
   = ((1 − α) N^T + α N_r^T) p
   = M^T p
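The update p′ = M^T p can be iterated directly; a minimal sketch (the uniform treatment of dangling nodes with out-degree 0 is an assumption here, a common convention rather than something stated on the slide):

```python
def pagerank(A, alpha=0.15, iters=100):
    """Iterate p' = ((1 - alpha) N^T + alpha N_r^T) p. N is the
    out-degree-normalized adjacency matrix; N_r is the random-jump
    matrix with every entry 1/n. Since p sums to 1, the jump term
    contributes exactly alpha/n to every component."""
    n = len(A)
    # Normalized adjacency matrix; dangling rows (od = 0) get 1/n.
    N = []
    for row in A:
        od = sum(row)
        N.append([x / od if od else 1.0 / n for x in row])
    p = [1.0 / n] * n
    for _ in range(iters):
        p = [(1 - alpha) * sum(N[u][v] * p[u] for u in range(n)) + alpha / n
             for v in range(n)]
    return p

# Directed 3-cycle: by symmetry every node gets PageRank 1/3.
A = [[0, 1, 0], [0, 0, 1], [1, 0, 0]]
print([round(x, 3) for x in pagerank(A)])  # [0.333, 0.333, 0.333]
```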
Hub and Authority Scores (HITS)
The authority score of a page is analogous to PageRank or prestige, and it depends on
how many “good” pages point to it. The hub score of a page is based on how many
“good” pages it points to. In other words, a page with high authority has many hub
pages pointing to it, and a page with high hub score points to many pages that have
high authority.
Let a(u) be the authority score and h(u) the hub score of node u. We have:
a(v) = Σ_u A^T(v, u) · h(u)
h(v) = Σ_u A(v, u) · a(u)
Recursively, we have:
a_k = A^T h_{k−1} = A^T (A a_{k−1}) = (A^T A) a_{k−1}
h_k = A a_k = A (A^T h_{k−1}) = (A A^T) h_{k−1}
The authority score converges to the dominant eigenvector of A^T A, whereas the hub score converges to the dominant eigenvector of A A^T.
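A minimal sketch of the HITS iteration, normalizing by the maximum entry each step to keep the scores bounded (a graph with at least one edge is assumed, so the maxima are nonzero):

```python
def hits(A, iters=100):
    """Iterate a <- A^T h and h <- A a on a directed adjacency
    matrix, rescaling each vector by its maximum entry."""
    n = len(A)
    a = [1.0] * n
    h = [1.0] * n
    for _ in range(iters):
        # Authority: sum of hub scores of in-neighbors.
        a = [sum(A[u][v] * h[u] for u in range(n)) for v in range(n)]
        a = [x / max(a) for x in a]
        # Hub: sum of authority scores of out-neighbors.
        h = [sum(A[v][u] * a[u] for u in range(n)) for v in range(n)]
        h = [x / max(h) for x in h]
    return a, h

# Nodes 0 and 1 both point to node 2: 2 is the authority, 0 and 1 are hubs.
A = [[0, 0, 1], [0, 0, 1], [0, 0, 0]]
a, h = hits(A)
print(a, h)  # [0.0, 0.0, 1.0] [1.0, 1.0, 0.0]
```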
Small World Property
Real-world graphs exhibit the small-world property that there is a short path
between any pair of nodes. A graph G exhibits small-world behavior if the average
path length µL scales logarithmically with the number of nodes in the graph, that
is, if
µL ∝ log n
Scale-free Property
In many real-world graphs it has been observed that the empirical degree
distribution f (k) exhibits a scale-free behavior captured by a power-law
relationship with k, that is, the probability that a node has degree k satisfies the
condition
f (k) ∝ k −γ
which is the equation of a straight line in the log-log plot of k versus f (k), with
−γ giving the slope of the line.
A power-law relationship leads to a scale-free or scale invariant behavior because
scaling the argument by some constant c does not change the proportionality.
Clustering Effect
Real-world graphs often also exhibit a clustering effect, that is, two nodes are
more likely to be connected if they share a common neighbor. The clustering
effect is captured by a high clustering coefficient for the graph G .
Let C (k) denote the average clustering coefficient for all nodes with degree k;
then the clustering effect also manifests itself as a power-law relationship between
C (k) and k:
C (k) ∝ k −γ
In other words, a log-log plot of k versus C (k) exhibits a straight line behavior
with negative slope −γ.
Degree Distribution: Human Protein Interaction Network
|V | = n = 9521, |E | = m = 37060
[Log-log plot: degree log2 k versus probability log2 f(k); the points follow a straight line with slope −γ = −2.15]
Cumulative Degree Distribution
F c (k) = 1 − F (k) where F (k) is the CDF for f (k)
[Log-log plot: degree log2 k versus log2 F^c(k); the tail follows a straight line with slope −(γ − 1) = −1.85]
Average Clustering Coefficient
Average Clustering Coefficient: log2 C (k)
−2 −γ = −0.55
bC
bC bC bC bC
bC
bC bC Cb Cb Cb Cb bC bC
bC
bC bC bC Cb bC bC Cb Cb Cb bC bC
Cb bC bC Cb
Cb bC bC bC Cb bC bC bC
bC
bC bC bC
bC Cb Cb Cb Cb bC Cb Cb bC Cb bC
−4 bC
C b
bbC C bC bC bC bC
C b
CbC b
bC bC bC Cb C b C b bC Cb bC Cb bC
bC bC bC bC Cb bC Cb bC Cb bC
Cb
bC bC bC bC bC bC Cb bC bC Cb bC bC bC bC
bC bC Cb
bC Cb
bC bC bC Cb bC bC
bC bC
bC C b C b
bC Cb Cb
−6 bC
bC bC
bC
bC bC
−8 bC
1 2 3 4 5 6 7 8
Degree: log2 k
Zaki & Meira Jr. (RPI and UFMG) Data Mining and Machine Learning Chapter 4: Graph Data 33 / 48
Erdős–Rényi Random Graph Model
The ER model specifies a collection of graphs G(n, m) with n nodes and m edges, such that each graph G ∈ G has equal probability of being selected:
P(G) = 1 / (M choose m) = (M choose m)^{−1}
where M = (n choose 2) = n(n − 1)/2 is the maximum number of edges, and (M choose m) is the number of possible graphs with m edges (with n nodes).
Random Graph Generation: Randomly select two distinct vertices vi, vj ∈ V, and add an edge (vi, vj) to E, provided the edge is not already in the graph G. Repeat the process until exactly m edges have been added to the graph.
Let X be a random variable denoting the degree of a node for G ∈ G. Let p denote the probability of an edge in G:
p = m/M = m / (n choose 2) = 2m / (n(n − 1))
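The generation procedure above translates almost line-for-line into Python:

```python
import random

def er_random_graph(n, m, seed=None):
    """G(n, m) Erdős–Rényi model: repeatedly pick a random pair of
    distinct vertices and add the edge if not already present,
    until exactly m edges exist (requires m <= n*(n-1)/2)."""
    rng = random.Random(seed)
    edges = set()
    while len(edges) < m:
        i, j = rng.sample(range(n), 2)       # two distinct vertices
        edges.add((min(i, j), max(i, j)))    # canonical undirected edge
    return edges

E = er_random_graph(10, 15, seed=42)
print(len(E))  # 15
```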
Random Graphs: Average Degree
µd = E [X ] = (n − 1)p
Random Graphs: Degree Distribution
E[X] = (n − 1)p ≃ np as n → ∞
var(X) = (n − 1)p(1 − p) ≃ np as n → ∞ and p → 0
Thus, as n → ∞ and p → 0, the degree distribution approaches a Poisson distribution:
f(k) = (λ^k e^{−λ}) / k!
where λ = np represents both the expected value and variance of the distribution.
Thus, ER random graphs do not exhibit a power-law degree distribution.
Random Graphs: Clustering Coefficient and Diameter
Clustering Coefficient: Consider a node vi with degree k. Since p is the probability of an edge, the expected number of edges mi among the neighbors of vi is simply
mi = p k(k − 1) / 2
The clustering coefficient is
C(vi) = 2mi / (k(k − 1)) = p
which implies that C(G) = (1/n) Σ_i C(vi) = p. Since for sparse graphs we have p → 0, ER random graphs essentially exhibit no clustering effect.
Diameter: The diameter of ER random graphs scales as
d(G) ∝ log n
so they do exhibit small-world behavior.

Watts–Strogatz Regular Graph
[Figure: ring lattice on vertices v1–v7, each node connected to its nearest neighbors on the ring]
WS Regular Graph: Clustering Coefficient and Diameter
The clustering coefficient of a node v in the WS regular graph is
C(v) = mv / Mv = (3k − 3) / (4k − 2)
and its diameter is
d(G) = n/(2k) if n is even; (n − 1)/(2k) if n is odd
The regular graph has a diameter that scales linearly in the number of nodes, and thus it is not small-world.
Random Perturbation of Regular Graph
Edge Rewiring: For each edge (u, v ) in the graph, with probability r , replace v
with another randomly chosen node avoiding loops and duplicate edges.
The WS regular graph has m = kn total edges, so after rewiring, rm of the edges
are random, and (1 − r )m are regular.
Edge Shortcuts: Add a few shortcut edges between random pairs of nodes, with
r being the probability, per edge, of adding a shortcut edge.
The total number of random shortcut edges added to the network is mr = knr .
The total number of edges in the graph is m + mr = (1 + r )m = (1 + r )kn.
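The shortcut-edge variant can be sketched as follows; the ring lattice connects each node to its k nearest neighbors on each side, and the shortcut count round(r·m) is an assumption for making the expected value concrete:

```python
import random

def ws_graph(n, k, r, seed=None):
    """Watts-Strogatz sketch via shortcut edges: start from a ring
    lattice where each node connects to its k nearest neighbors on
    each side (m = kn edges), then add about r*m random shortcut
    edges between node pairs, avoiding loops and duplicates."""
    rng = random.Random(seed)
    edges = set()
    for i in range(n):
        for d in range(1, k + 1):
            j = (i + d) % n
            edges.add((min(i, j), max(i, j)))   # ring-lattice edge
    m = len(edges)                              # kn regular edges
    shortcuts = round(r * m)
    added = 0
    while added < shortcuts:                    # assumes a sparse graph
        i, j = rng.sample(range(n), 2)
        e = (min(i, j), max(i, j))
        if e not in edges:
            edges.add(e)
            added += 1
    return edges

E = ws_graph(20, 3, 0.1, seed=0)
print(len(E))  # (1 + r) * k * n = 66 edges
```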
Watts–Strogatz Graph: Shortcut Edges
n = 20, k = 3
Properties of Watts–Strogatz Graphs
Degree Distribution: Let X denote the random variable denoting the number of shortcuts for each node. Then the probability of a node having j shortcut edges is given as
f(j) = P(X = j) = (n′ choose j) p^j (1 − p)^{n′−j}
with E[X] = n′p = 2kr and p = 2kr/(n − 2k − 1) = 2kr/n′.
The expected degree of each node in the network is therefore 2k + E[X] = 2k(1 + r).
The degree distribution is not a power law.
Diameter: Small values of the shortcut edge probability r are enough to reduce the diameter from O(n) to O(log n).
Watts-Strogatz Model: Diameter (circles) and Clustering
Coefficient (triangles)
[Plot: edge probability r (0–0.20) versus diameter d(G) (circles) and clustering coefficient C(G) (triangles); the diameter drops rapidly from 167 as r increases, while the clustering coefficient decreases only gradually]
Barabási–Albert Scale-free Model
Barabási–Albert Graph
n0 = 3, q = 2, t = 12
[Figure: BA graph after t = 12 time steps]
Preferential attachment probability distribution at the first time step:
π1(v0) = π1(v3) = 2/10 = 0.2
π1(v1) = π1(v2) = 3/10 = 0.3
Properties of the BA Graphs
Degree Distribution: The degree distribution for BA graphs is given as
f(k) = [(q + 2)(q + 1)q / ((k + 2)(k + 1)k)] · [2 / (q + 2)] = 2q(q + 1) / (k(k + 1)(k + 2))
For constant q and large k, the degree distribution scales as
f(k) ∝ k^{−3}
The BA model yields a power-law degree distribution with γ = 3, especially for large degrees.
Diameter: The diameter of BA graphs scales as
d(Gt) = O(log nt / log log nt)
suggesting that they exhibit ultra-small-world behavior, when q > 1.
Clustering Coefficient: The expected clustering coefficient of the BA graphs scales as
E[C(Gt)] = O((log nt)^2 / nt)
which is only slightly better than for random graphs.
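A minimal sketch of BA-style generation, assuming the standard growth process (start from a small clique, then attach each new node with q edges chosen with probability proportional to current degree; the repeated-targets list trick is an implementation convenience, not from the slides):

```python
import random

def ba_graph(n0, q, t, seed=None):
    """Barabási-Albert sketch: start from a clique on n0 nodes; at
    each of t time steps add one node with q edges to existing
    nodes, chosen roughly proportionally to their current degree."""
    rng = random.Random(seed)
    edges = set()
    targets = []                      # each node appears once per unit of degree
    for i in range(n0):               # initial clique on v0, ..., v_{n0-1}
        for j in range(i + 1, n0):
            edges.add((i, j))
            targets += [i, j]
    for step in range(t):
        new = n0 + step
        chosen = set()
        while len(chosen) < q:        # q distinct degree-biased picks
            chosen.add(rng.choice(targets))
        for v in chosen:
            edges.add((v, new))
            targets += [v, new]
    return edges

E = ba_graph(3, 2, 12, seed=0)
print(len(E))  # 3 initial edges + 2 per step = 27
```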
Barabási–Albert Model: Degree Distribution
n0 = 3, t = 997, q = 3
[Log-log plot: degree log2 k versus probability log2 f(k); the fitted line has slope −γ = −2.64]
Data Mining and Machine Learning:
Fundamental Concepts and Algorithms
dataminingbook.info