
0.1. The Example for Illustration


To gain a better understanding of graph neural networks, let us work through a technical example on a specific graph with a specific feature vector at each vertex/node.
Example 1. Let $G$ be a graph of order five with vertex set $V(G) = \{v_1, v_2, v_3, v_4, v_5\}$ and edge set $E(G) = \{v_1v_2, v_1v_3, v_2v_3, v_3v_4, v_3v_5, v_4v_5\}$. Suppose the node features are given by
$$H^0_{v_i} = \begin{bmatrix} 0.30 & 0.90 & 0.43 \\ 0.55 & 0.80 & 0.39 \\ 0.17 & 0.48 & 0.48 \\ 0.25 & 0.46 & 0.24 \\ 0.33 & 0.36 & 0.10 \end{bmatrix}$$

Obtain the node embeddings using one hidden layer with one neuron, such that the loss function is minimized.

Solution. From the graph above, we can determine the adjacency, identity, and loop-adjacency matrices as follows.

$$A(G) = \begin{bmatrix} 0 & 1 & 1 & 0 & 0 \\ 1 & 0 & 1 & 0 & 0 \\ 1 & 1 & 0 & 1 & 1 \\ 0 & 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 1 & 0 \end{bmatrix}, \quad I = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \end{bmatrix}, \quad B = A + I = \begin{bmatrix} 1 & 1 & 1 & 0 & 0 \\ 1 & 1 & 1 & 0 & 0 \\ 1 & 1 & 1 & 1 & 1 \\ 0 & 0 & 1 & 1 & 1 \\ 0 & 0 & 1 & 1 & 1 \end{bmatrix}$$
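To make the construction concrete, here is a minimal numpy sketch that builds $A(G)$, $I$, and $B$ from the edge list. The variable names and the 0-based vertex indexing ($v_1 \mapsto 0, \dots, v_5 \mapsto 4$) are our own convention, not from the text.

```python
import numpy as np

# Edges of G with 0-based indices (v1 -> 0, ..., v5 -> 4).
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (2, 4), (3, 4)]

n = 5
A = np.zeros((n, n), dtype=int)
for u, v in edges:
    A[u, v] = A[v, u] = 1   # undirected graph: symmetric adjacency

I = np.eye(n, dtype=int)    # identity matrix
B = A + I                   # loop-adjacency matrix (adds self-loops)
```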

By using Equation ?? and Equation ??, we can start the technical calculation by initializing the learning weight $W^1 = [\,0.2\ \ 0.2\ \ 0.2\,]$, a $(1,3)$-matrix. The first iteration of Equation ?? can be described as follows:
$$m^l_{v_i} = W^l \cdot H^{l-1}_{v_i}, \quad \text{where } i = 1, 2, 3, 4, 5$$

$$\begin{aligned}
m^1_{v_1} &= W^1 \cdot H^0_{v_1} = [\,0.2\ \ 0.2\ \ 0.2\,] \cdot [\,0.30\ \ 0.90\ \ 0.43\,] = [\,0.326\,]\\
m^1_{v_2} &= W^1 \cdot H^0_{v_2} = [\,0.2\ \ 0.2\ \ 0.2\,] \cdot [\,0.55\ \ 0.80\ \ 0.39\,] = [\,0.348\,]\\
m^1_{v_3} &= W^1 \cdot H^0_{v_3} = [\,0.2\ \ 0.2\ \ 0.2\,] \cdot [\,0.17\ \ 0.48\ \ 0.48\,] = [\,0.226\,]\\
m^1_{v_4} &= W^1 \cdot H^0_{v_4} = [\,0.2\ \ 0.2\ \ 0.2\,] \cdot [\,0.25\ \ 0.46\ \ 0.24\,] = [\,0.190\,]\\
m^1_{v_5} &= W^1 \cdot H^0_{v_5} = [\,0.2\ \ 0.2\ \ 0.2\,] \cdot [\,0.33\ \ 0.36\ \ 0.10\,] = [\,0.158\,]
\end{aligned}$$


Thus, we have $m^1_{v_i}$:
$$m^1_{v_i} = \begin{bmatrix} 0.326 \\ 0.348 \\ 0.226 \\ 0.190 \\ 0.158 \end{bmatrix}$$
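The five dot products above amount to a single matrix-vector product. A sketch, reusing the assumed numpy setup:

```python
import numpy as np

H0 = np.array([[0.30, 0.90, 0.43],
               [0.55, 0.80, 0.39],
               [0.17, 0.48, 0.48],
               [0.25, 0.46, 0.24],
               [0.33, 0.36, 0.10]])
W1 = np.array([0.2, 0.2, 0.2])  # initial learning weight

m1 = H0 @ W1                    # one scalar message per node
print(np.round(m1, 3))          # [0.326 0.348 0.226 0.19  0.158]
```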
By considering the matrix $B$, we include only the entries of $m^1_{v_i}$ that correspond to nonzero elements of row $i$ of $B$; thus we have
$$m^1_{v_1} = \begin{bmatrix} 0.326 \\ 0.348 \\ 0.226 \end{bmatrix},\ m^1_{v_2} = \begin{bmatrix} 0.326 \\ 0.348 \\ 0.226 \end{bmatrix},\ m^1_{v_3} = \begin{bmatrix} 0.326 \\ 0.348 \\ 0.226 \\ 0.190 \\ 0.158 \end{bmatrix},\ m^1_{v_4} = \begin{bmatrix} 0.226 \\ 0.190 \\ 0.158 \end{bmatrix},\ m^1_{v_5} = \begin{bmatrix} 0.226 \\ 0.190 \\ 0.158 \end{bmatrix}$$
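This masking step can be expressed with boolean indexing on the rows of $B$; a brief sketch, again with our assumed 0-based arrays:

```python
import numpy as np

B = np.array([[1, 1, 1, 0, 0],
              [1, 1, 1, 0, 0],
              [1, 1, 1, 1, 1],
              [0, 0, 1, 1, 1],
              [0, 0, 1, 1, 1]])
m1 = np.array([0.326, 0.348, 0.226, 0.190, 0.158])

# Row i of B selects the messages of v_i and its neighbours (plus itself).
masked = [m1[B[i] == 1] for i in range(5)]
print(masked[0])   # [0.326 0.348 0.226]
print(masked[3])   # [0.226 0.19  0.158]
```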

Taking the sum of the elements of each node's embedding gives $h^1_{v_1} = 0.9$, $h^1_{v_2} = 0.9$, $h^1_{v_3} = 1.248$, $h^1_{v_4} = 0.574$, $h^1_{v_5} = 0.574$. Thus, we have the first iteration of aggregation
$$h^1_{v_i} = \begin{bmatrix} 0.9 \\ 0.9 \\ 1.248 \\ 0.574 \\ 0.574 \end{bmatrix}, \quad \text{where } i = 1, 2, 3, 4, 5.$$
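Summing each masked vector is exactly the matrix-vector product $B\,m^1$, so the whole aggregation collapses to one line; a sketch:

```python
import numpy as np

B = np.array([[1, 1, 1, 0, 0],
              [1, 1, 1, 0, 0],
              [1, 1, 1, 1, 1],
              [0, 0, 1, 1, 1],
              [0, 0, 1, 1, 1]])
m1 = np.array([0.326, 0.348, 0.226, 0.190, 0.158])

h1 = B @ m1                 # aggregate over each closed neighbourhood
print(np.round(h1, 3))      # [0.9   0.9   1.248 0.574 0.574]
```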

The loss ($e$) can be calculated as
$$e^1 = \frac{|h^1_{v_1} - h^1_{v_2}| + |h^1_{v_1} - h^1_{v_3}| + |h^1_{v_2} - h^1_{v_3}| + |h^1_{v_3} - h^1_{v_4}| + |h^1_{v_3} - h^1_{v_5}| + |h^1_{v_4} - h^1_{v_5}|}{|E(G)|} = 0.1087$$
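A direct transcription of the displayed loss formula, which averages the absolute embedding differences over the edges of $G$; the 0-based edge list is our convention:

```python
import numpy as np

edges = [(0, 1), (0, 2), (1, 2), (2, 3), (2, 4), (3, 4)]
h1 = np.array([0.9, 0.9, 1.248, 0.574, 0.574])

# Mean absolute difference of neighbouring embeddings over E(G).
e1 = sum(abs(h1[u] - h1[v]) for u, v in edges) / len(edges)
```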
In the second iteration, we first need to update $H^{l-1}_{v_i}$:
$$H^{l-1}_{v_i} = \frac{H^{l-2}_{v_i}}{\sum \left( H^{l-2}_{v_i} \right)} \times h^{l-1}_{v_i}, \quad \text{where } i = 1, 2, 3, 4, 5$$
$$\begin{aligned}
H^1_{v_1} &= \frac{[\,0.30\ \ 0.90\ \ 0.43\,]}{\sum [\,0.30\ \ 0.90\ \ 0.43\,]} \times 0.9 = [\,0.166\ \ 0.497\ \ 0.237\,]\\
H^1_{v_2} &= \frac{[\,0.55\ \ 0.80\ \ 0.39\,]}{\sum [\,0.55\ \ 0.80\ \ 0.39\,]} \times 0.9 = [\,0.284\ \ 0.414\ \ 0.212\,]\\
H^1_{v_3} &= \frac{[\,0.17\ \ 0.48\ \ 0.48\,]}{\sum [\,0.17\ \ 0.48\ \ 0.48\,]} \times 1.248 = [\,0.188\ \ 0.530\ \ 0.530\,]\\
H^1_{v_4} &= \frac{[\,0.25\ \ 0.46\ \ 0.24\,]}{\sum [\,0.25\ \ 0.46\ \ 0.24\,]} \times 0.574 = [\,0.151\ \ 0.278\ \ 0.145\,]\\
H^1_{v_5} &= \frac{[\,0.33\ \ 0.36\ \ 0.10\,]}{\sum [\,0.33\ \ 0.36\ \ 0.10\,]} \times 0.574 = [\,0.240\ \ 0.262\ \ 0.073\,]
\end{aligned}$$
Thus, we have $H^1_{v_i}$:
$$H^1_{v_i} = \begin{bmatrix} H^1_{v_1} \\ H^1_{v_2} \\ H^1_{v_3} \\ H^1_{v_4} \\ H^1_{v_5} \end{bmatrix} = \begin{bmatrix} 0.166 & 0.497 & 0.237 \\ 0.284 & 0.414 & 0.212 \\ 0.188 & 0.530 & 0.530 \\ 0.151 & 0.278 & 0.145 \\ 0.240 & 0.262 & 0.073 \end{bmatrix}$$
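The row-wise normalization and rescaling can be vectorized; a sketch of the update rule above (the broadcasting layout is our own choice):

```python
import numpy as np

H0 = np.array([[0.30, 0.90, 0.43],
               [0.55, 0.80, 0.39],
               [0.17, 0.48, 0.48],
               [0.25, 0.46, 0.24],
               [0.33, 0.36, 0.10]])
h1 = np.array([0.9, 0.9, 1.248, 0.574, 0.574])

# Normalize each feature row to unit sum, then rescale it by the node's
# aggregated embedding h^1.
H1 = H0 / H0.sum(axis=1, keepdims=True) * h1[:, None]
print(np.round(H1, 3))
```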

Now, we need to update the learning weight. Given the learning rate $\alpha$, we can update the weight $W$ as
$$W^{k+1}_j = W^k_j + \alpha \times z_j \times e^k, \quad \text{where } j = 1, 2, 3.$$
For $k = 1$ and $\alpha = 0.1$ (the values $z_1 = 0.2058$, $z_2 = 0.3962$, $z_3 = 0.2394$ are the column means of $H^1$):
$$\begin{aligned}
W^2_1 &= W^1_1 + \alpha \times z_1 \times e^1 = 0.2 + 0.1 \times 0.2058 \times 0.1087 = 0.2022\\
W^2_2 &= W^1_2 + \alpha \times z_2 \times e^1 = 0.2 + 0.1 \times 0.3962 \times 0.1087 = 0.2043\\
W^2_3 &= W^1_3 + \alpha \times z_3 \times e^1 = 0.2 + 0.1 \times 0.2394 \times 0.1087 = 0.2027
\end{aligned}$$
Thus, we have $W^2$:
$$W^2 = [\,W^2_1\ \ W^2_2\ \ W^2_3\,] = [\,0.2022\ \ 0.2043\ \ 0.2027\,]$$
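Since the quoted $z_1, z_2, z_3$ coincide with the column means of $H^1$, the update can be sketched as follows; taking $z$ as the column means is our reading, as the text does not define $z$ explicitly:

```python
import numpy as np

H1 = np.array([[0.166, 0.497, 0.237],
               [0.284, 0.414, 0.212],
               [0.188, 0.530, 0.530],
               [0.151, 0.278, 0.145],
               [0.240, 0.262, 0.073]])
W1 = np.array([0.2, 0.2, 0.2])
e1, alpha = 0.1087, 0.1      # first-iteration loss and learning rate

z = H1.mean(axis=0)          # [0.2058 0.3962 0.2394]
W2 = W1 + alpha * z * e1     # matches W^2 above up to the last digit
print(np.round(W2, 4))
```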

With this new $W^2$ in hand, we can calculate the second iteration as follows.


$$m^l_{v_i} = W^l \cdot H^{l-1}_{v_i}, \quad \text{where } i = 1, 2, 3, 4, 5$$
$$\begin{aligned}
m^2_{v_1} &= W^2 \cdot H^1_{v_1} = [\,0.2022\ \ 0.2043\ \ 0.2027\,] \cdot [\,0.166\ \ 0.497\ \ 0.237\,] = [\,0.1831\,]\\
m^2_{v_2} &= W^2 \cdot H^1_{v_2} = [\,0.2022\ \ 0.2043\ \ 0.2027\,] \cdot [\,0.284\ \ 0.414\ \ 0.212\,] = [\,0.1849\,]\\
m^2_{v_3} &= W^2 \cdot H^1_{v_3} = [\,0.2022\ \ 0.2043\ \ 0.2027\,] \cdot [\,0.188\ \ 0.530\ \ 0.530\,] = [\,0.2538\,]\\
m^2_{v_4} &= W^2 \cdot H^1_{v_4} = [\,0.2022\ \ 0.2043\ \ 0.2027\,] \cdot [\,0.151\ \ 0.278\ \ 0.145\,] = [\,0.1167\,]\\
m^2_{v_5} &= W^2 \cdot H^1_{v_5} = [\,0.2022\ \ 0.2043\ \ 0.2027\,] \cdot [\,0.240\ \ 0.262\ \ 0.073\,] = [\,0.1169\,]
\end{aligned}$$
Thus, we have $m^2_{v_i}$:
$$m^2_{v_i} = \begin{bmatrix} 0.1831 \\ 0.1849 \\ 0.2538 \\ 0.1167 \\ 0.1169 \end{bmatrix}$$
By considering the matrix $B$, we include only the entries of $m^2_{v_i}$ that correspond to nonzero elements of row $i$ of $B$; thus we have
$$m^2_{v_1} = \begin{bmatrix} 0.1831 \\ 0.1849 \\ 0.2538 \end{bmatrix},\ m^2_{v_2} = \begin{bmatrix} 0.1831 \\ 0.1849 \\ 0.2538 \end{bmatrix},\ m^2_{v_3} = \begin{bmatrix} 0.1831 \\ 0.1849 \\ 0.2538 \\ 0.1167 \\ 0.1169 \end{bmatrix},\ m^2_{v_4} = \begin{bmatrix} 0.2538 \\ 0.1167 \\ 0.1169 \end{bmatrix},\ m^2_{v_5} = \begin{bmatrix} 0.2538 \\ 0.1167 \\ 0.1169 \end{bmatrix}$$

Taking the sum of the elements of each node's embedding gives $h^2_{v_1} = 0.6218$, $h^2_{v_2} = 0.6218$, $h^2_{v_3} = 0.8554$, $h^2_{v_4} = 0.4874$, $h^2_{v_5} = 0.4874$. Thus, we have the second iteration of aggregation
$$h^2_{v_i} = \begin{bmatrix} 0.6218 \\ 0.6218 \\ 0.8554 \\ 0.4874 \\ 0.4874 \end{bmatrix}, \quad \text{where } i = 1, 2, 3, 4, 5.$$
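The whole second iteration then reuses the same two matrix products; a sketch with the updated quantities hard-coded from above, agreeing with the text up to rounding:

```python
import numpy as np

B = np.array([[1, 1, 1, 0, 0],
              [1, 1, 1, 0, 0],
              [1, 1, 1, 1, 1],
              [0, 0, 1, 1, 1],
              [0, 0, 1, 1, 1]])
H1 = np.array([[0.166, 0.497, 0.237],
               [0.284, 0.414, 0.212],
               [0.188, 0.530, 0.530],
               [0.151, 0.278, 0.145],
               [0.240, 0.262, 0.073]])
W2 = np.array([0.2022, 0.2043, 0.2027])

m2 = H1 @ W2                 # second-iteration messages
h2 = B @ m2                  # aggregate over closed neighbourhoods
print(np.round(h2, 4))       # ~[0.6218 0.6218 0.8554 0.4873 0.4873]
```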
The loss ($e$) can be calculated as
$$e^2 = \frac{|h^2_{v_1} - h^2_{v_2}| + |h^2_{v_1} - h^2_{v_3}| + |h^2_{v_2} - h^2_{v_3}| + |h^2_{v_3} - h^2_{v_4}| + |h^2_{v_3} - h^2_{v_5}| + |h^2_{v_4} - h^2_{v_5}|}{|E(G)|} = 0.0448$$
