[Figure: a graph G on five vertices v_1, ..., v_5 (edges v_1v_2, v_1v_3, v_2v_3, v_3v_4, v_3v_5, v_4v_5), each vertex labelled with its initial feature vector: H^0_{v_1} = (0.30, 0.90, 0.43), H^0_{v_2} = (0.55, 0.80, 0.39), H^0_{v_3} = (0.17, 0.48, 0.48), H^0_{v_4} = (0.25, 0.46, 0.24), H^0_{v_5} = (0.33, 0.36, 0.10).]
Obtain the node embeddings using one hidden layer with one neuron, such that the loss function is minimized.
Solution. From the above graph, we can determine the adjacency, identity, and loop-adjacency matrices as follows:
\[
A(G) = \begin{pmatrix}
0 & 1 & 1 & 0 & 0 \\
1 & 0 & 1 & 0 & 0 \\
1 & 1 & 0 & 1 & 1 \\
0 & 0 & 1 & 0 & 1 \\
0 & 0 & 1 & 1 & 0
\end{pmatrix}, \quad
I = \begin{pmatrix}
1 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 1
\end{pmatrix}, \quad
B = A + I = \begin{pmatrix}
1 & 1 & 1 & 0 & 0 \\
1 & 1 & 1 & 0 & 0 \\
1 & 1 & 1 & 1 & 1 \\
0 & 0 & 1 & 1 & 1 \\
0 & 0 & 1 & 1 & 1
\end{pmatrix}.
\]
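As a quick sanity check, these three matrices can be built with NumPy; a minimal sketch, with the vertex ordering v_1, ..., v_5 matching the rows above:

```python
import numpy as np

# Adjacency matrix of G: edges v1v2, v1v3, v2v3, v3v4, v3v5, v4v5
A = np.array([
    [0, 1, 1, 0, 0],
    [1, 0, 1, 0, 0],
    [1, 1, 0, 1, 1],
    [0, 0, 1, 0, 1],
    [0, 0, 1, 1, 0],
])
I = np.eye(5, dtype=int)   # identity matrix
B = A + I                  # loop-adjacency matrix: a self-loop at every vertex

print(B)
```

Summing A and halving counts each edge once, confirming |E(G)| = 6, the denominator used in the loss below.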
By using Equation ?? and Equation ??, we can start the technical calculation by initializing the learning weight as the (1, 3)-matrix W^1 = [0.2 0.2 0.2]. The first iteration of Equation ?? can be described as follows:
\[
m^l_{v_i} = W^l \cdot H^{l-1}_{v_i}, \quad i = 1, 2, 3, 4, 5,
\]
which for l = 1 gives
\[
m^1_{v_i} = W^1 \cdot H^0_{v_i} = \begin{pmatrix}
0.326 \\ 0.348 \\ 0.226 \\ 0.190 \\ 0.158
\end{pmatrix},
\]
where the column stacks the values for i = 1, ..., 5.
By considering the matrix B, we only include the nonzero elements of m^1_{v_i}; thus we have
\[
m^1_{v_1} = m^1_{v_2} = \begin{pmatrix} 0.326 \\ 0.348 \\ 0.226 \end{pmatrix}, \quad
m^1_{v_3} = \begin{pmatrix} 0.326 \\ 0.348 \\ 0.226 \\ 0.190 \\ 0.158 \end{pmatrix}, \quad
m^1_{v_4} = m^1_{v_5} = \begin{pmatrix} 0.226 \\ 0.190 \\ 0.158 \end{pmatrix}.
\]
Taking the sum of the elements of each node's embedding, we have
\[
h^1_{v_1} = 0.9, \quad h^1_{v_2} = 0.9, \quad h^1_{v_3} = 1.248, \quad h^1_{v_4} = 0.574, \quad h^1_{v_5} = 0.574.
\]
Thus, the first iteration of the aggregation is
\[
h^1_{v_i} = \begin{pmatrix} 0.9 \\ 0.9 \\ 1.248 \\ 0.574 \\ 0.574 \end{pmatrix}, \quad i = 1, 2, 3, 4, 5.
\]
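The message and aggregation steps above amount to one matrix-vector product each; a sketch in NumPy, where H0 collects the feature vectors from the figure and multiplying by B sums each node's messages over its closed neighbourhood:

```python
import numpy as np

# Initial features H^0_{v_i}, one row per vertex (read off the figure)
H0 = np.array([
    [0.30, 0.90, 0.43],
    [0.55, 0.80, 0.39],
    [0.17, 0.48, 0.48],
    [0.25, 0.46, 0.24],
    [0.33, 0.36, 0.10],
])
W1 = np.array([0.2, 0.2, 0.2])     # initial learning weight, a (1, 3)-matrix

# Loop-adjacency matrix B = A + I
B = np.array([
    [1, 1, 1, 0, 0],
    [1, 1, 1, 0, 0],
    [1, 1, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
])

m1 = H0 @ W1                       # messages m^1_{v_i} = W^1 . H^0_{v_i}
h1 = B @ m1                        # aggregation over closed neighbourhoods

print(np.round(m1, 3))
print(np.round(h1, 3))
```

Because row i of B selects exactly v_i and its neighbours, `B @ m1` reproduces the "keep only the nonzero positions, then sum" step in one multiplication.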
The loss (e^1) can be calculated as
\[
e^1 = \frac{|h^1_{v_1} - h^1_{v_2}| + |h^1_{v_1} - h^1_{v_3}| + |h^1_{v_2} - h^1_{v_3}| + |h^1_{v_3} - h^1_{v_4}| + |h^1_{v_3} - h^1_{v_5}| + |h^1_{v_4} - h^1_{v_5}|}{|E(G)|} = 0.1087.
\]
In the second iteration, we first need to update H^{l-1}_{v_i}:
\[
H^{l-1}_{v_i} = \frac{H^{l-2}_{v_i}}{\sum \left( H^{l-2}_{v_i} \right)} \times h^{l-1}_{v_i}, \quad i = 1, 2, 3, 4, 5,
\]
where the sum in the denominator runs over the entries of H^{l-2}_{v_i}.
For l = 2 this gives
\[
H^1_{v_i} = \begin{pmatrix} H^1_{v_1} \\ H^1_{v_2} \\ H^1_{v_3} \\ H^1_{v_4} \\ H^1_{v_5} \end{pmatrix}
= \begin{pmatrix}
0.166 & 0.497 & 0.237 \\
0.284 & 0.414 & 0.212 \\
0.188 & 0.530 & 0.530 \\
0.151 & 0.278 & 0.145 \\
0.240 & 0.262 & 0.073
\end{pmatrix}.
\]
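The normalize-and-rescale update can be checked numerically; a minimal sketch reusing the initial features and the first-iteration aggregation (by construction, each updated row sums back to h^1_{v_i}):

```python
import numpy as np

# Initial features H^0_{v_i} and first-iteration aggregation h^1_{v_i}
H0 = np.array([
    [0.30, 0.90, 0.43],
    [0.55, 0.80, 0.39],
    [0.17, 0.48, 0.48],
    [0.25, 0.46, 0.24],
    [0.33, 0.36, 0.10],
])
h1 = np.array([0.9, 0.9, 1.248, 0.574, 0.574])

# H^1_{v_i} = H^0_{v_i} / sum(H^0_{v_i}) * h^1_{v_i}
H1 = H0 / H0.sum(axis=1, keepdims=True) * h1[:, None]

print(np.round(H1, 3))
```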
Now, we need to update the learning weight. Given the learning rate α, we can update the weight W as
\[
W^{k+1}_j = W^k_j + \alpha \times z_j \times e^k, \quad j = 1, 2, 3,
\]
for k = 1 and α = 0.1; here the values z_1 = 0.2058, z_2 = 0.3962, z_3 = 0.2394 coincide with the column means of H^1. Then
\[
W^2_1 = W^1_1 + \alpha \times z_1 \times e^1 = 0.2 + 0.1 \times 0.2058 \times 0.1087 = 0.2022,
\]
\[
W^2_2 = W^1_2 + \alpha \times z_2 \times e^1 = 0.2 + 0.1 \times 0.3962 \times 0.1087 = 0.2043,
\]
\[
W^2_3 = W^1_3 + \alpha \times z_3 \times e^1 = 0.2 + 0.1 \times 0.2394 \times 0.1087 = 0.2027.
\]
Thus, we have
\[
W^2 = \left[\, W^2_1 \;\; W^2_2 \;\; W^2_3 \,\right] = \left[\, 0.2022 \;\; 0.2043 \;\; 0.2027 \,\right].
\]
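The text does not restate where the factors z_k come from; numerically they coincide with the column means of H^1, so the sketch below assumes that definition:

```python
import numpy as np

# Updated embeddings H^1_{v_i} (worked values from the text)
H1 = np.array([
    [0.166, 0.497, 0.237],
    [0.284, 0.414, 0.212],
    [0.188, 0.530, 0.530],
    [0.151, 0.278, 0.145],
    [0.240, 0.262, 0.073],
])
W1 = np.array([0.2, 0.2, 0.2])     # current weight
alpha, e1 = 0.1, 0.1087            # learning rate and first-iteration loss

z = H1.mean(axis=0)                # assumed: z_j = mean of the j-th column of H^1
W2 = W1 + alpha * z * e1           # W^{k+1}_j = W^k_j + alpha * z_j * e^k

print(np.round(z, 4))              # [0.2058 0.3962 0.2394]
print(np.round(W2, 4))
```

The computed z matches the three values used above exactly, and W2 agrees with [0.2022 0.2043 0.2027] up to rounding in the last digit.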
Repeating the first step with the updated weight, the second iteration of the message is
\[
m^2_{v_i} = W^2 \cdot H^1_{v_i} = \begin{pmatrix}
0.1831 \\ 0.1849 \\ 0.2538 \\ 0.1167 \\ 0.1169
\end{pmatrix},
\]
again stacking the values for i = 1, ..., 5.
By considering the matrix B, we only include the nonzero elements of m^2_{v_i}; thus we have
\[
m^2_{v_1} = m^2_{v_2} = \begin{pmatrix} 0.1831 \\ 0.1849 \\ 0.2538 \end{pmatrix}, \quad
m^2_{v_3} = \begin{pmatrix} 0.1831 \\ 0.1849 \\ 0.2538 \\ 0.1167 \\ 0.1169 \end{pmatrix}, \quad
m^2_{v_4} = m^2_{v_5} = \begin{pmatrix} 0.2538 \\ 0.1167 \\ 0.1169 \end{pmatrix}.
\]
Taking the sum of the elements of each node's embedding, we have
\[
h^2_{v_1} = 0.6218, \quad h^2_{v_2} = 0.6218, \quad h^2_{v_3} = 0.8554, \quad h^2_{v_4} = 0.4874, \quad h^2_{v_5} = 0.4874.
\]
Thus, the second iteration of the aggregation is
\[
h^2_{v_i} = \begin{pmatrix} 0.6218 \\ 0.6218 \\ 0.8554 \\ 0.4874 \\ 0.4874 \end{pmatrix}, \quad i = 1, 2, 3, 4, 5.
\]
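With the updated weight in hand, the whole second iteration is one more message-plus-aggregation pass; a sketch reusing the worked values above:

```python
import numpy as np

# Updated embeddings H^1_{v_i} and weight W^2 (worked values from the text)
H1 = np.array([
    [0.166, 0.497, 0.237],
    [0.284, 0.414, 0.212],
    [0.188, 0.530, 0.530],
    [0.151, 0.278, 0.145],
    [0.240, 0.262, 0.073],
])
W2 = np.array([0.2022, 0.2043, 0.2027])
B = np.array([
    [1, 1, 1, 0, 0],
    [1, 1, 1, 0, 0],
    [1, 1, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
])

m2 = H1 @ W2                       # messages with the updated weight
h2 = B @ m2                        # second-iteration aggregation

print(np.round(m2, 4))
print(np.round(h2, 4))
```

Both vectors agree with the worked values above to within rounding of the intermediate quantities.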
The loss (e^2) can be calculated as
\[
e^2 = \frac{|h^2_{v_1} - h^2_{v_2}| + |h^2_{v_1} - h^2_{v_3}| + |h^2_{v_2} - h^2_{v_3}| + |h^2_{v_3} - h^2_{v_4}| + |h^2_{v_3} - h^2_{v_5}| + |h^2_{v_4} - h^2_{v_5}|}{|E(G)|} = 0.0448.
\]