
Expert Systems With Applications 241 (2024) 122685


Dynamic link prediction by learning the representation of node-pair via graph neural networks

Hu Dong a, Longjie Li a,b,*, Dongwen Tian a, Yiyang Sun a, Yuncong Zhao a

a School of Information Science and Engineering, Lanzhou University, Lanzhou 730000, China
b Key Laboratory of Media Convergence Technology and Communication, Gansu Province, Lanzhou 730000, China

ARTICLE INFO

Keywords:
Link prediction
Dynamic networks
Graph neural networks
Representation learning

ABSTRACT

Many real-world networks are dynamic: their structure keeps changing over time. Link prediction, which foretells the emergence of future links, is a crucial task in dynamic network analysis. Compared to link prediction in static networks, prediction in dynamic networks is more challenging and complicated due to their dynamic nature. On the other hand, effective use of the information carried by dynamic networks can enhance prediction accuracy. In this study, we present a new end-to-end solution for dynamic link prediction, in which the representations of node-pairs are effectively learned via an improved graph neural network and a nonlinear function, leveraging the structural information of individual snapshots, historical features from network evolution, and global knowledge of the collapsed network. The proposed method can effectively cope with the challenges of dynamic link prediction. Extensive experiments are conducted on several dynamic networks to assess the prediction performance of the proposed method. The results demonstrate that our method achieves superior effectiveness compared to the baselines in most cases.

1. Introduction

Many complicated systems in the real world can be naturally represented by complex networks (Boccaletti et al., 2006), in which entities and the relationships between them are modeled as nodes and links, respectively. Link prediction (Liben-Nowell & Kleinberg, 2007; Martínez et al., 2017), which aims to uncover missing links and foretell new connections in a network, is a crucial task of complex network analysis. Link prediction plays a critical role in a host of applications, such as friendship recommendation in online social networks (Adamic & Adar, 2003; Li et al., 2020), product recommendation on e-commerce websites (Xie et al., 2015), and protein interaction prediction in biological networks (Hamilton et al., 2017). To date, researchers have developed numerous methods to handle the link prediction problem by computing the similarities of nodes or training prediction models (Daud et al., 2020; Zhou, 2021). However, the majority of these techniques are developed for static networks and do not take the evolution of networks into account.

Actually, most real-world networks keep evolving with time. For instance, the connections between individuals in a social network usually vary dynamically in accordance with the behaviors of their social partners (Ibrahim & Chen, 2015). These kinds of networks are called dynamic networks, which can provide more accurate descriptions of complex systems (Holme & Saramäki, 2012; Yang et al., 2020). Compared to link prediction in static networks, link prediction in a dynamic network (i.e., dynamic link prediction) is a challenging and complex process because not only the current structure of the network but also its historical evolution affects the formation of links (Divakaran & Mohan, 2020; Selvarajah et al., 2020).

To address the problem of dynamic link prediction, one simple approach is to summarize a dynamic network into a collapsed network (Lei et al., 2019; Yang et al., 2020) and then to predict links on the collapsed network (Liben-Nowell & Kleinberg, 2007; Sharan & Neville, 2008). However, such approaches ignore the evolution information of consecutive snapshots, which leads to undesirable prediction performance. To overcome this weakness, Wu et al. (2020) introduced a similarity-based approach that uses node ranking to determine the similarity of two nodes and predicts their future similarity using the historical similarity series. Chiu and Zhan (2018) proposed a deep learning-based method, in which the features of a node-pair are represented using weak estimators and conventional similarity indexes. In recent years, node embedding approaches like DeepWalk (Perozzi et al., 2014) and node2vec (Grover & Leskovec, 2016) have been designed to project the representation of a node into a low-dimensional vector space. Subsequently, some

∗ Corresponding author at: School of Information Science and Engineering, Lanzhou University, Lanzhou 730000, China.
E-mail addresses: dongh20@lzu.edu.cn (H. Dong), ljli@lzu.edu.cn (L. Li), tiandw21@lzu.edu.cn (D. Tian), sunyy20@lzu.edu.cn (Y. Sun),
zhaoyc20@lzu.edu.cn (Y. Zhao).

https://doi.org/10.1016/j.eswa.2023.122685
Received 19 June 2023; Received in revised form 12 October 2023; Accepted 19 November 2023
Available online 23 November 2023
0957-4174/© 2023 Elsevier Ltd. All rights reserved.

researchers (De Winter et al., 2018; Tripathi et al., 2022) proposed to predict links in dynamic networks by learning node embeddings in historical snapshots. However, in these methods, the representations of nodes are not learned from the downstream task; as a result, their performance may not be satisfactory. Due to their ability to extract effective node representations according to downstream tasks, graph neural networks (GNNs) (Zhou et al., 2020) have been adopted to handle link prediction (Yang et al., 2020; Zhang & Chen, 2018). Moreover, the long short-term memory network (LSTM) (Hochreiter & Schmidhuber, 1997) has been used to learn temporal features from the historical snapshots of a dynamic network (Chen et al., 2021; Selvarajah et al., 2020).

To address the problem of dynamic link prediction, this work designs a new method, named DLP-LRN (Dynamic Link Prediction by Learning the Representation of Node-pair). The DLP-LRN method first learns multiple representations of a node-pair from different snapshots via an improved GNN, and then aggregates these representations using a nonlinear function. Moreover, DLP-LRN also learns the representation of the node-pair from the collapsed network to replenish some global information. Then, the final representation of the node-pair is obtained by fusing the above two parts and is sent to a Multi-Layer Perceptron (MLP) to determine whether the node-pair will have a link. In summary, we make the following main contributions:

(1) We propose a new dynamic link prediction method, in which the representation of a node-pair is learned from both multiple snapshots and the collapsed network.
(2) We design an improved GNN to learn the representation of a node-pair from each snapshot and adopt a nonlinear function to aggregate the representations of the node-pair from multiple snapshots.
(3) We validate the effectiveness of the proposed method via extensive experiments on six dynamic networks.

The rest of this paper is structured as follows. Section 2 briefly introduces the related work. Section 3 defines the problem of dynamic link prediction. The detailed description of the proposed method is given in Section 4. The comparison with baselines and the ablation study are discussed in Section 5. At last, Section 6 concludes the paper.

2. Related work

We briefly sum up the research on deep learning-based dynamic link prediction in this section. Chiu and Zhan (2018) proposed a deep learning-based method to perform dynamic link prediction, which generated the feature vectors of node-pairs using static similarity metrics and weak estimators, and trained a prediction model via a deep neural network. However, the features of node-pairs in this approach are handcrafted, thereby limiting its generalization ability. In this regard, Tripathi et al. (2022) developed an embedding-based dynamic link prediction method, where they learned node embeddings using a biased random walk and the skip-gram model, and derived edge embeddings via a max aggregator. Likewise, De Winter et al. (2018) learned node embeddings using node2vec (Grover & Leskovec, 2016) and computed the Hadamard product of node embeddings to get the embeddings of node-pairs. Then, the embeddings of a node-pair from multiple snapshots were integrated with simple concatenation. At last, a classifier was trained to complete the dynamic link prediction task. Hao et al. (2020) presented a link prediction model that combines node vector evolution and local neighborhood representation. In their model, node vectors are learned by some node embedding algorithm, and edge representations are obtained according to node vectors and local neighborhood information.

However, the above algorithms are not end-to-end ones, i.e., the learning of embeddings is independent of link prediction. Owing to their ability to capture critical characteristics of networks for downstream tasks, GNNs have been applied in link prediction. Chen et al. (2021) designed an end-to-end dynamic link prediction paradigm, named GC-LSTM, using the Graph Convolution Network (GCN) (Kipf & Welling, 2017) embedded in LSTM (Hochreiter & Schmidhuber, 1997). In this model, GCN is embedded in the LSTM cell to better learn spatio-temporal features. In order to conduct link prediction in weighted dynamic networks, Lei et al. (2019) presented the GCN-GAN model, which utilizes GCN, LSTM, and the generative adversarial network (GAN) (Goodfellow et al., 2014) to exploit multiple kinds of information extracted from weighted dynamic networks. Li et al. (2022) proposed the TSAM model to deal with dynamic link prediction in directed networks, where graph attentional layers and graph convolutional layers are used to capture neighborhood structural features and motif features, respectively; moreover, temporal features are learned via a graph recurrent unit layer with self-attention. Jiao et al. (2021) developed a temporal network embedding approach for link prediction, where the essential features of temporal networks are learned by a variational autoencoder.

Research gap: Although the above studies have handled the dynamic link prediction problem with different techniques and designs, they still have some shortcomings. The features of node-pairs used in Chiu and Zhan (2018) are handcrafted, which may fail to express some complex nonlinear patterns that actually determine link formation (Zhang & Chen, 2018). Hao et al. (2020), Tripathi et al. (2022) and De Winter et al. (2018) represented node-pairs based on node embeddings that were learned according to the topological structure of a network. However, the embeddings of nodes are learned independently of the downstream task; as a result, the key characteristics that determine link formation may not be captured. The methods in Chen et al. (2021), Jiao et al. (2021), Lei et al. (2019) and Li et al. (2022) are end-to-end ones, which learn node embeddings using GNNs. Essentially, link prediction is a binary classification problem that predicts the label of a node-pair. Therefore, it is better to directly learn the features of node-pairs rather than the embeddings of nodes. In addition, all of these methods forecast links using only the information obtained from the snapshots; none of them further considers learning global information from the collapsed network.

3. Problem definition

A dynamic network is described by a series of network snapshots G = {G_1, G_2, ..., G_T}, in which G_t = (V, E_t) is the snapshot of the dynamic network at time t. For simplicity, we assume that the nodes remain unchanged across all snapshots. In this work, V specifies all nodes in G and E_t denotes the set of links (edges) that appear in snapshot G_t. We can use the adjacency matrix A_t = [a_{t;i,j}]_{N×N} to depict the topological structure of G_t, where N = |V| is the number of nodes in the dynamic network. If nodes v_i and v_j are linked in G_t, then a_{t;i,j} = 1; otherwise, a_{t;i,j} = 0.

In a static network, the purpose of link prediction is to uncover the links that actually exist but are unknown, based on the observed network structure. In contrast, link prediction in a dynamic network predicts the future structure of the network according to the information obtained from previous snapshots. Given l + 1 consecutive snapshots, denoted as {G_{T-l}, G_{T-l+1}, ..., G_T}, the task of dynamic link prediction is to forecast the network structure at time T + 1, i.e., G_{T+1}, according to the information of the previous l + 1 snapshots. Mathematically, dynamic link prediction can be described as

G_{T+1} = f(G_{T-l}, G_{T-l+1}, ..., G_T),   (1)

where f(·) is a function that maps the input snapshots to the graph G_{T+1}.


Fig. 1. The overall framework of the DLP-LRN method.

Fig. 2. Subgraph extraction and feature vector initialization.

4. The proposed method

4.1. Model overview

The motivation of this study is to design an end-to-end link prediction method for dynamic networks, in which we are able to learn the features of node-pairs directly, rather than learning the embeddings of nodes individually. Specifically, in order to learn more accurate representations of node-pairs, the method should leverage not only the information from consecutive snapshots but also the overall information from the collapsed network.

To this end, we propose a new method, namely DLP-LRN, to address the problem of dynamic link prediction. Fig. 1 presents the framework of DLP-LRN, which is made up of the following three components:

(1) Feature learning from snapshots (Part A in Fig. 1). Given a target node-pair, l + 1 subgraphs around this node-pair are extracted from different snapshots of the dynamic network. Then, the feature representation of the node-pair with respect to each subgraph is learned via a GNN. Subsequently, all feature representations of the node-pair are aggregated with a nonlinear function.

(2) Feature learning from the collapsed network (Part B in Fig. 1). In this component, we first combine the l + 1 snapshot networks into a collapsed network. Next, a subgraph around the target node-pair is extracted, and then the feature representation of the node-pair with respect to this subgraph is learned.

(3) Feature fusion and link prediction (Part C in Fig. 1). Both kinds of feature representations of the target node-pair are combined in this component. Then, we feed the combined feature vector into an MLP to predict whether the link of the node-pair exists.

4.2. Feature learning from snapshots

4.2.1. Subgraph extraction and feature vector initialization

To gauge whether a link exists between two nodes, the topological structure surrounding them plays a key role. As a consequence, some studies, such as SEAL (learning from Subgraphs, Embeddings and Attributes for Link prediction) (Zhang & Chen, 2018), SHFF (Subgraph Hierarchy Feature Fusion) (Liu et al., 2020), and DLP-LES (Dynamic network Link Prediction by Learning Effective Subgraphs) (Selvarajah et al., 2020), extract subgraphs for target node-pairs and then learn their features from the subgraphs. Inspired by the same idea, in this work we also extract subgraphs for node-pairs to learn their feature representations. Given a target node-pair (v_i, v_j) and a snapshot G_t in the network G, this component first extracts an h-hop enclosing subgraph around (v_i, v_j) from G_t, marked as G^h_{t;i,j}. Fig. 2 shows the process of this step. In this study, we employ the method used in SEAL (Zhang & Chen, 2018) to extract the enclosing subgraph. The h-hop neighborhood of (v_i, v_j) is defined as

Γ^h_t(v_i, v_j) = {v_k | v_k ∈ G_t, min(d(v_k, v_i), d(v_k, v_j)) ≤ h},   (2)

where d(v_k, v_i) is the distance between v_k and v_i. G^h_{t;i,j} is the subgraph induced by Γ^h_t(v_i, v_j) ∪ {v_i, v_j}.

Next, we generate the feature vectors for all nodes in subgraph G^h_{t;i,j}, which are regarded as their initial features. The feature vector of a node in G^h_{t;i,j} is composed of two parts. The first part is the information associated with the node. If there is no initial information associated with the nodes in the network, we can employ some embedding method, such as LINE (Large-scale Information Network Embedding) (Tang et al., 2015) or node2vec (Grover & Leskovec, 2016), to extract a representation for each node. The second part is the structural label of the node, denoted by a one-hot vector. The purpose of node labeling is to distinguish the positions of nodes in an enclosing subgraph. Here, we use the DRNL (Double Radius Node Labeling) method (Zhang & Chen, 2018) to obtain the node labels in the subgraph.
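The h-hop extraction of Eq. (2) and the DRNL labeling can be illustrated with a small plain-Python sketch (toy adjacency lists; this is not the authors' implementation, and the label formula follows our reading of the SEAL paper):

```python
from collections import deque

def bfs_dist(adj, src):
    """Hop distances from src in an adjacency-list graph (dict node -> set)."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def enclosing_subgraph(adj, vi, vj, h=1):
    """Eq. (2): keep nodes whose distance to vi or vj is at most h,
    add the targets themselves, then take the induced subgraph."""
    di, dj = bfs_dist(adj, vi), bfs_dist(adj, vj)
    keep = {u for u in adj
            if min(di.get(u, float("inf")), dj.get(u, float("inf"))) <= h}
    keep |= {vi, vj}
    return {u: adj[u] & keep for u in keep}

def drnl_label(di, dj):
    """Double Radius Node Labeling for a node at distances (di, dj) from
    the two target nodes, per the formula given in the SEAL paper."""
    d = di + dj
    return 1 + min(di, dj) + (d // 2) * ((d // 2) + (d % 2) - 1)

# Toy snapshot: path 0-1-2-3 plus the edge 1-3.
adj = {0: {1}, 1: {0, 2, 3}, 2: {1, 3}, 3: {1, 2}}
sub = enclosing_subgraph(adj, 0, 2, h=1)
```

The integer labels would then be one-hot encoded and concatenated to the node embeddings to form the initial feature matrix X of the subgraph.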


Fig. 3. Feature learning from subgraph.

4.2.2. Feature learning from subgraph

After obtaining the enclosing subgraph G^h_{t;i,j} and the initial feature vectors of its nodes, the representation of node-pair (v_i, v_j) with respect to snapshot G_t is learned in this step. The process is depicted in Fig. 3. Since the enclosing subgraph G^h_{t;i,j} centers on (v_i, v_j), we believe that not only is the representation of (v_i, v_j) critical to the link prediction task, but the representations of the other nodes are also beneficial to it. To this end, we improve the Deep Graph Convolutional Neural Network (DGCNN) (Zhang et al., 2018) to achieve our goal. DGCNN is a GNN architecture for graph classification, and hence it outputs the embedding of the subgraph. In addition to the embedding of the subgraph, our method also uses the embeddings of both v_i and v_j. To be specific, the process of this step is described below.

(1) At first, we use multiple graph convolution layers to learn the embeddings of the nodes in subgraph G^h_{t;i,j}. Given an enclosing subgraph, denoted by adjacency matrix A, the form of the graph convolution layer in DGCNN is

Z = f(D̃^{-1} Ã X W),   (3)

where Ã is the adjacency matrix of the enclosing subgraph with self-connections, Ã = A + I (I is an identity matrix), D̃ is the diagonal degree matrix with D̃_{ii} = Σ_j Ã_{ij}, W is a trainable parameter matrix, and f is a nonlinear activation function. X is the feature matrix, each row of which is the feature vector of a node in the subgraph. Z is the embedding matrix of the nodes learned by this graph convolution layer.

(2) Next, we take out the embeddings of nodes v_i and v_j with respect to G^h_{t;i,j}, denoted as Z^t_i and Z^t_j, respectively. Then, we use the Hadamard product of Z^t_i and Z^t_j, i.e., Z^t_i ⊙ Z^t_j, to capture the structural interaction features of (v_i, v_j). The Hadamard product takes two matrices of the same dimensions and returns the matrix of their element-wise products; in particular, the Hadamard product of two embeddings is their element-wise multiplication.

(3) As mentioned above, besides the features of (v_i, v_j), the embedding of the subgraph is also considered when learning the representation of (v_i, v_j). In this regard, we employ the SortPooling layer (Zhang et al., 2018) to obtain the subgraph's embedding. Afterward, we use two 1-D convolution layers to reduce the dimension and enhance the expressiveness of the subgraph's embedding. For the sake of convenience, we adopt the notation Z^t_G to indicate the output of the 1-D convolution layers.

(4) Finally, we obtain the representation of (v_i, v_j) with respect to G^h_{t;i,j} as

Z^t_{ij} = (Z^t_i ⊙ Z^t_j) ∥ Z^t_G,   (4)

where ∥ represents concatenation. According to Eq. (4), Z^t_{ij} is composed of two parts (as shown in Fig. 3). The first part is the Hadamard product of Z^t_i and Z^t_j, which highlights the importance of the target node-pair. The second is the embedding of the subgraph G^h_{t;i,j} (i.e., Z^t_G), which supplements some structural information of the subgraph to the target node-pair. To generate Z^t_{ij}, we combine both parts with the concatenation operator, with the purpose of incorporating but not mixing them.

4.2.3. Feature aggregation from snapshots

Since we have l + 1 snapshots, we get l + 1 representations of node-pair (v_i, v_j) from the last step, which are Z^{T-l}_{ij}, Z^{T-l+1}_{ij}, ..., Z^T_{ij}. In this step, we introduce an exponential function to aggregate the l + 1 representations, which is defined as

Z^{(s)}_{ij} = Σ_{t=T-l}^{T} (1 - β)^{T-t} · Z^t_{ij},   (5)

where β is a learnable weight within the range [0, 1], and Z^{(s)}_{ij} denotes the representation of node-pair (v_i, v_j) learned from the l + 1 snapshots.

4.3. Feature learning from collapsed network

In the proposed DLP-LRN model, we learn the representations of node-pairs not only from individual snapshots but also from the general structure of the network. Therefore, we summarize the multiple snapshots as a collapsed network (Lei et al., 2019; Yang et al., 2020), and learn the representation of a node-pair from this collapsed network.

We are given a dynamic network with l + 1 snapshots, G = {G_{T-l}, G_{T-l+1}, ..., G_T}, in which G_t = (V, E_t), t ∈ [T - l, T]. The collapsed network of G is denoted as G_{(c)} = (V, E_{(c)}), where E_{(c)} = ∪_{t=T-l}^{T} E_t.

Given the collapsed network G_{(c)} and the target node-pair (v_i, v_j), we learn the representation of (v_i, v_j) from G_{(c)} using the same approach as in the last component. Specifically, we extract an h-hop enclosing subgraph around (v_i, v_j) from G_{(c)}, denoted as G^h_{(c);i,j}, and then initialize the representation of each node in G^h_{(c);i,j}. Subsequently, the representation of (v_i, v_j) with respect to G^h_{(c);i,j} is learned via our improved DGCNN. Suppose Z^{(c)}_i and Z^{(c)}_j are the embeddings of nodes v_i and v_j, and Z^{(c)}_G is the embedding of G^h_{(c);i,j} according to the improved DGCNN; then the representation of (v_i, v_j) with respect to G^h_{(c);i,j} is computed as

Z^{(c)}_{ij} = (Z^{(c)}_i ⊙ Z^{(c)}_j) ∥ Z^{(c)}_G.   (6)

4.4. Feature fusion and link prediction

After obtaining the representations Z^{(s)}_{ij} and Z^{(c)}_{ij} of node-pair (v_i, v_j), DLP-LRN gets the final representation of (v_i, v_j) using the following concatenation operation:

Z_{ij} = Z^{(s)}_{ij} ∥ Z^{(c)}_{ij}.   (7)

Then, DLP-LRN employs an MLP to compute the class of (v_i, v_j). The MLP used in our method includes three layers; it takes Z_{ij} as the input and outputs 0 or 1 as the class of (v_i, v_j). Here, 1 means a link exists between v_i and v_j, whereas 0 indicates no link between v_i and v_j. In our model, we choose the binary cross-entropy function to compute the loss:

L = -(1/|D|) Σ_{(v_i,v_j)∈D} [y_{ij} ln(ŷ_{ij}) + (1 - y_{ij}) ln(1 - ŷ_{ij})],   (8)

where y_{ij} is the label of (v_i, v_j), ŷ_{ij} is the predicted result for (v_i, v_j), and |D| is the number of node-pair samples in the sample set D.

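As a toy numeric sketch of Eqs. (4), (5) and (7) (plain Python lists stand in for learned tensors, and beta is fixed here rather than learned, so this is an illustration of the arithmetic only, not the authors' implementation):

```python
# Eq. (4): per-snapshot node-pair representation = Hadamard product of the
# two node embeddings, concatenated with the subgraph embedding.
def hadamard(u, v):
    return [a * b for a, b in zip(u, v)]

def pair_rep(z_i, z_j, z_g):
    return hadamard(z_i, z_j) + z_g  # list "+" is concatenation

# Eq. (5): exponentially decayed sum, reps ordered from t = T - l to t = T,
# so the most recent snapshot gets weight (1 - beta)^0 = 1.
def aggregate(reps, beta):
    T = len(reps) - 1
    out = [0.0] * len(reps[0])
    for t, z in enumerate(reps):
        w = (1 - beta) ** (T - t)
        out = [o + w * x for o, x in zip(out, z)]
    return out

# Three snapshots (l = 2), 2-dim node embeddings, 1-dim subgraph embedding.
reps = [pair_rep([1, 0], [1, 1], [0.5]),
        pair_rep([0, 1], [1, 1], [0.2]),
        pair_rep([1, 1], [1, 1], [0.9])]
z_s = aggregate(reps, beta=0.5)          # snapshot branch, Eq. (5)
z_c = pair_rep([1, 1], [0, 1], [0.4])    # collapsed-network branch, Eq. (6)
z_final = z_s + z_c                      # Eq. (7): concatenation
```

The fused vector z_final is what would be fed to the three-layer MLP classifier trained with the loss of Eq. (8).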

Table 1
Topological information of the six networks. #Nodes and #Edges denote the union of nodes and edges of all snapshots.

                 Enron    Rado     Forum    Mess     Call     Digg
#Nodes           150      167      899      1899     6416     30398
#Edges           1588     3251     7046     15737    7250     86312
Density          0.1421   0.2345   0.0175   0.0087   0.0004   0.0002
Average degree   21.17    38.93    15.58    16.57    2.26     5.68
#Snapshots       8        9        6        6        7        5

Table 2
AUC results of DLP-LRN and baselines. For each benchmark, the optimal value is presented in bold and the suboptimal value is underlined.

Data       Enron    Rado     Forum    Mess     Call     Digg     Mean
JC         0.8996   0.8579   0.6420   0.7055   0.5162   0.5284   0.6916
AA         0.9035   0.9148   0.6190   0.7315   0.5165   0.5284   0.7023
Node2vec   0.8105   0.8335   0.8264   0.8137   0.9025   0.5326   0.7865
SEAL       0.8769   0.8896   0.8683   0.9065   0.9869   0.8489   0.8966
LinkVec    0.8849   0.8573   0.8794   0.9376   0.9156   0.7152   0.8650
Winter's   0.9142   0.9206   0.9568   0.9183   0.9739   0.8061   0.9150
GC-LSTM    0.9131   0.9278   0.9281   0.9475   0.9875   0.8322   0.9227
Hao's      0.9213   0.9293   0.9465   0.9325   0.9844   0.8364   0.9250
DLP-LRN    0.9364   0.9338   0.9506   0.9757   0.9987   0.8701   0.9442

Table 3
AP results of DLP-LRN and baselines. For each benchmark, the optimal value is presented in bold and the suboptimal value is underlined.

Data       Enron    Rado     Forum    Mess     Call     Digg     Mean
JC         0.8936   0.8315   0.5773   0.6332   0.5050   0.5244   0.6608
AA         0.8997   0.9076   0.6584   0.7277   0.5204   0.5281   0.7070
Node2vec   0.8053   0.8255   0.7949   0.9009   0.8912   0.5431   0.7935
SEAL       0.8742   0.8857   0.8803   0.8909   0.9747   0.8482   0.8923
LinkVec    0.8730   0.8446   0.8671   0.9155   0.9136   0.7028   0.8528
Winter's   0.9117   0.9088   0.9555   0.9110   0.9703   0.8007   0.9097
GC-LSTM    0.9075   0.9205   0.9269   0.9457   0.9851   0.8437   0.9215
Hao's      0.9285   0.9218   0.9428   0.9376   0.9812   0.8428   0.9258
DLP-LRN    0.9412   0.9346   0.9532   0.9750   0.9987   0.8728   0.9459

5. Experiments

In order to demonstrate the effectiveness of our DLP-LRN method, multiple experiments are carried out on a collection of dynamic networks in this section. We adopt both AUC (the area under the receiver operating characteristic curve) (Provost & Fawcett, 1997) and AP (average precision) (Aslam et al., 2005) as the evaluation metrics.

For our DLP-LRN method, the hop h of enclosing subgraphs is set to 1 because larger values of h bring no significant performance improvement but are more time consuming. The learning rate is 0.0001 and the patience is 10 in the early stopping strategy. The dimensions of the three graph convolution layers in the improved DGCNN are (128 + max_label, 32), (32, 32), and (32, 1), where max_label is the longest vector dimension formed by the node labels in historical snapshots. The detailed settings of DLP-LRN can be found in our code.1
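Both metrics can be computed directly from the predicted scores of labeled node-pairs; the following is a minimal pure-Python sketch of their textbook definitions (not the evaluation code used in the paper):

```python
def auc(labels, scores):
    """AUC via the rank-sum formulation: the probability that a random
    positive sample is scored above a random negative one, ties 1/2."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def average_precision(labels, scores):
    """AP: mean of precision@k over the ranks k where a positive occurs."""
    order = sorted(zip(scores, labels), key=lambda x: -x[0])
    hits, precisions = 0, []
    for k, (_, y) in enumerate(order, start=1):
        if y == 1:
            hits += 1
            precisions.append(hits / k)
    return sum(precisions) / len(precisions)

labels = [1, 0, 1, 0]
scores = [0.9, 0.8, 0.7, 0.1]
```

For the toy example above, the second-ranked sample is a false positive, so AUC is 0.75 and AP is (1 + 2/3)/2.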
5.1. Benchmark networks

In our experiments, six dynamic networks are utilized. Table 1 outlines their basic statistics and the following provides a succinct overview of each.

Enron: This network is generated from the email communication among employees of the Enron company (Rossi & Ahmed, 2015).

Rado: This is another email network, among the staff of a mid-sized manufacturing company (Rossi & Ahmed, 2015).

Forum: This network was compiled in 2004 from a student social networking forum at the University of California, Irvine (Opsahl, 2013). It records student activities in the forum between May and October 2004.

Mess: This network records the messages sent by students of the University of California, Irvine in an online community from April to October 2004 (Opsahl & Panzarasa, 2009).

Call: This network is composed of the mobile phone calls between a group of students at MIT from September 2004 to January 2005 (Eagle & (Sandy) Pentland, 2006). The data were gathered by the Reality Mining experiment performed in 2004.

Digg: This network is compiled from the social website Digg (De Choudhury et al., 2009). In the network, users are modeled as nodes and the replies between users are denoted as edges. The dataset covers the period from October 29 to November 13, 2008.

In our experiments, the edges in the last snapshot are partitioned into a training set, a validation set, and a testing set, containing 70%, 10%, and 20% of the edges, respectively. We also sample the same number of unconnected node-pairs (negative samples) for each of the three sets to keep the classes balanced.

5.2. Comparison with baselines

The baseline methods are the Jaccard Coefficient (JC) (Liben-Nowell & Kleinberg, 2007), the Adamic-Adar (AA) index (Adamic & Adar, 2003), node2vec (Grover & Leskovec, 2016), SEAL (Zhang & Chen, 2018), LinkVec (Tripathi et al., 2022), Winter's method (De Winter et al., 2018), GC-LSTM (Chen et al., 2021), and Hao's method (Hao et al., 2020). Among these methods, JC and AA are two similarity-based ones that compute the similarities of node-pairs from local structures. Node2vec is a node embedding method; we use the Hadamard product of the embeddings of two nodes to obtain the representation of the node-pair, and then adopt logistic regression to gauge the linking probability of the node-pair. SEAL is an end-to-end link prediction method based on a GNN. Since these four methods are designed for static networks, we apply them to the collapsed network. LinkVec, Winter's method, GC-LSTM, and Hao's method are dynamic link prediction methods. LinkVec (Tripathi et al., 2022) uses the skip-gram model and a max aggregator to learn edge embeddings for link prediction. In Winter's method (De Winter et al., 2018), the embedding of a node-pair in a snapshot is obtained based on node2vec, and the embeddings of the node-pair in different snapshots are aggregated with simple concatenation. GC-LSTM (Chen et al., 2021) is an end-to-end approach with GCN embedded in LSTM. Hao et al. (2020) use a gated recurrent unit network to learn node representations, and train a binary classifier to forecast links.

The experimental results in terms of AUC and AP are reported in Tables 2 and 3, respectively. For the AUC results in Table 2, the proposed DLP-LRN method outperforms all baselines on all networks except Forum, on which it achieves the suboptimal score. JC and AA perform the worst on most networks because they only consider local structural information. Furthermore, Winter's method, GC-LSTM, and Hao's method achieve higher AUC scores in most cases than the other baselines. The reason is that these three methods are able to capture the dynamic characteristics of networks, whereas the other baselines are inadequate for dynamic link prediction because they are designed for static networks. Moreover, in terms of the average AUC score shown in the last column of Table 2, our proposed method surpasses all baselines. A similar phenomenon can be observed from the AP results presented in Table 3. As a consequence, the proposed DLP-LRN method achieves the best prediction performance compared to these baselines.

1 https://github.com/ljlilzu/DLP-LRN
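The 70/10/20 edge split with balanced negative sampling described in Section 5.1 can be sketched as follows (hypothetical helper names, plain Python; the paper's pipeline may differ in details such as how non-edges are enumerated):

```python
import random

def split_edges(edges, non_edges, seed=0):
    """70/10/20 train/val/test split of the positive edges of the last
    snapshot, with an equal number of sampled non-edges (label 0) added
    to each set to keep the classes balanced."""
    rng = random.Random(seed)
    pos = list(edges)
    rng.shuffle(pos)
    n = len(pos)
    cut1, cut2 = int(0.7 * n), int(0.8 * n)
    neg = rng.sample(list(non_edges), n)  # one negative per positive
    sets = {}
    for name, lo, hi in [("train", 0, cut1),
                         ("val", cut1, cut2),
                         ("test", cut2, n)]:
        sets[name] = ([(e, 1) for e in pos[lo:hi]] +
                      [(e, 0) for e in neg[lo:hi]])
    return sets

edges = [(i, i + 1) for i in range(10)]       # toy positives
non_edges = [(i, i + 5) for i in range(10)]   # toy negatives
sets = split_edges(edges, non_edges)
```

Fixing the random seed keeps the split reproducible across runs, which matters when comparing against baselines on the same node-pairs.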


Fig. 4. Performance of DLP-LRN with different ways for aggregating historical information.

Table 4
Performance of DLP-LRN with different ways for representation learning of node-pair. H_ij means the Hadamard product of nodes, Z_G indicates the embedding of the subgraph, and H_ij ∥ Z_G is the way proposed in this work, i.e., the concatenation of the Hadamard product of nodes and the embedding of the subgraph.

Metric   Way          Enron    Rado     Forum    Mess     Call     Digg
AUC      H_ij         0.9257   0.9231   0.9463   0.9769   0.9955   0.8436
         Z_G          0.9234   0.9229   0.9317   0.9400   0.9932   0.8573
         H_ij ∥ Z_G   0.9364   0.9338   0.9506   0.9757   0.9987   0.8701
AP       H_ij         0.9320   0.9242   0.9429   0.9725   0.9952   0.8501
         Z_G          0.9299   0.9240   0.9238   0.9472   0.9904   0.8591
         H_ij ∥ Z_G   0.9412   0.9346   0.9532   0.9750   0.9987   0.8728

5.3. Ablation study

5.3.1. Influence of the way for representation learning

The DLP-LRN method learns the representation of a node-pair based on its enclosing subgraph via an improved DGCNN. Because DGCNN (Zhang et al., 2018) is designed for graph classification, we improved its architecture as in Fig. 3 to learn the representation of a node-pair. In our method, the representation of a node-pair, say (v_i, v_j), is obtained by concatenating the Hadamard product of the embeddings of v_i and v_j, which emphasizes the importance of the node-pair, with the embedding of the subgraph, which incorporates the information of the neighbors. Here, we conduct an ablation experiment to confirm the efficacy of the representation learning way designed in this study. Table 4 reports the results, in which three different ways are compared: the Hadamard product of nodes, the embedding of the subgraph, and the concatenation of both (i.e., the proposed way). The results show that DLP-LRN attains the best performance in most cases under the proposed way, which proves its efficacy for link prediction. The main reason is that the proposed way not only pays close attention to the embeddings of the target nodes but also leverages the structural features of the subgraph around them.

5.3.2. Influence of the way for aggregating historical information

To aggregate the representations of a node-pair learned from different snapshots, we introduce an exponential function in this study (see

than the earlier ones. On the other hand, it is easy to see that the aggregation operation based on the exponential function is much faster than that based on LSTM. Therefore, the exponential function is appropriate for our method.

5.3.3. Influence of the collapsed network

In our DLP-LRN method, we introduce the collapsed network of a dynamic network to describe its summarized structure. Previous works mainly focused on how to better capture the structural and evolutionary characteristics of a network from its consecutive snapshots, but ignored the global information hidden in its collapsed network. In our opinion, the collapsed network can complement the representations of nodes and node-pairs with information from a global perspective, and thus boost the performance of the downstream task. Accordingly, the DLP-LRN method generates the final representation of a node-pair by combining the outcomes learned from both the consecutive snapshots and the collapsed network. To investigate the influence of the collapsed network on the prediction accuracy of DLP-LRN, we compare the results of DLP-LRN with and without incorporating the information of the collapsed network. From the results reported in Fig. 5, one can see that, when using the collapsed network, DLP-LRN attains superior performance, especially on the Forum and Mess networks. Besides, although DLP-LRN already achieves very high performance on the Call network without the collapsed network, its performance can be further improved when the collapsed network is used. As a consequence, the collapsed network has a positive effect on the performance of the proposed DLP-LRN method.

5.4. Discussion

From the above experimental results, we conclude that our DLP-LRN method has the following advantages. (1) DLP-LRN outperforms the baseline methods on networks with diverse characteristics, indicating
Eq. (5)), which is effective and efficient. This function combines the that it works very well for dynamic link prediction. (2) By combining
representations of a node-pair learned from different historical snap- two parts of information, DLP-LRN enhances the representations of
shots to generate a summarized representation of the node-pair. In the node-pairs, and consequently benefits the downstream link prediction
literature, LSTM has always been employed to aggregate information task. (3) By introducing the collapsed network, which can comple-
from historical snapshots (Chen et al., 2021; Selvarajah et al., 2020;
ment some global information to the representations of node-pairs, the
Yang et al., 2020). Here, we investigate the influence of LSTM and
performance of DLP-LRN is further improved.
our exponential function, marked as EF, to the prediction accuracy of
the proposed DLP-LRN method through an experiment. Fig. 4 presents Besides, there are two main limitations in our DLP-LRN method.
the experimental results. One can see from Fig. 4 that the accuracy of (1) DLP-LRN assumes that the nodes do not change across different
DLP-LRN under EF is better than that under LSTM on all benchmark snapshots. Actually, in a dynamic network, not only links but also nodes
networks except Mess, on which two versions of DLP-LRN achieve very keep changing over time. (2) When generating the collapsed network,
similar results. The reason why the proposed method performs better DLP-LRN does not take into account the appearance time of a link,
under EF is that this function assigns more weight to the later snapshots which may be important to predict new links.
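As a concrete illustration of the representation scheme ablated in Section 5.3.1, the proposed 𝐇𝑖𝑗 ∥ 𝐙𝐺 construction can be sketched in NumPy as follows. The function name and the embedding dimensions are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def node_pair_representation(h_i, h_j, z_g):
    """Concatenate the Hadamard product of two node embeddings (H_ij)
    with the subgraph embedding (Z_G), i.e., H_ij || Z_G."""
    return np.concatenate([h_i * h_j, z_g])

rng = np.random.default_rng(0)
h_i = rng.normal(size=32)   # embedding of node v_i (dimension assumed)
h_j = rng.normal(size=32)   # embedding of node v_j
z_g = rng.normal(size=64)   # embedding of the enclosing subgraph
rep = node_pair_representation(h_i, h_j, z_g)
print(rep.shape)  # (96,)
```

The first block of the resulting vector focuses on the target nodes themselves, while the second carries the structural features of their surrounding subgraph, matching the intuition given above.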


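The exponential aggregation compared against LSTM in Section 5.3.2 can be sketched as below. Since Eq. (5) is not reproduced in this excerpt, the exact weighting used here (a normalized exponential that favors later snapshots, with a hypothetical decay parameter `theta`) is an assumption that only mirrors the described behavior:

```python
import numpy as np

def aggregate_history(reps, theta=1.0):
    """Exponentially weighted aggregation of per-snapshot node-pair
    representations, with larger weights on later snapshots.
    reps: array of shape (T, d), ordered from oldest to newest.
    NOTE: illustrative weighting; not the paper's exact Eq. (5)."""
    T = reps.shape[0]
    w = np.exp(theta * (np.arange(T) - (T - 1)))  # newest snapshot: weight 1
    w /= w.sum()                                  # normalize the weights
    return w @ reps                               # summarized vector, shape (d,)

# Toy input: 5 snapshots, each a constant 4-dimensional vector equal to its index.
reps = np.stack([np.full(4, t, dtype=float) for t in range(5)])
agg = aggregate_history(reps)
```

Because the weights grow with the snapshot index, the aggregate is pulled toward the most recent snapshots; a uniform average of the toy input would be 2, whereas this weighting yields a value above 3. Unlike an LSTM aggregator, this is a single weighted sum with no learned recurrence, which is why it is much cheaper.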
Fig. 5. Performance of DLP-LRN with (w) and without (w/o) the collapsed network.
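The collapsed network examined in Section 5.3.3 and Fig. 5 can be understood as the time-agnostic union of the snapshots' edge sets; this also makes the second limitation concrete, since link appearance times are discarded. A minimal sketch (representing snapshots as plain edge lists is our assumption):

```python
def collapse_snapshots(snapshots):
    """Build the collapsed network as the union of the edge sets of all
    snapshots, discarding when each link appeared.
    snapshots: iterable of edge lists, each edge a (u, v) pair."""
    collapsed = set()
    for edges in snapshots:
        for u, v in edges:
            collapsed.add((min(u, v), max(u, v)))  # undirected, deduplicated
    return collapsed

snaps = [[(1, 2), (2, 3)], [(2, 1), (3, 4)], [(4, 5)]]
print(sorted(collapse_snapshots(snaps)))
# [(1, 2), (2, 3), (3, 4), (4, 5)]
```

Note that the edge (1, 2), which appears in two snapshots, contributes only once: the collapsed network keeps global connectivity but loses both multiplicity and timing information.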

6. Conclusion and future work

This work studied the problem of link prediction in dynamic networks, which is more challenging and complicated because dynamic networks continue to evolve over time. To deal with this problem, we presented a new link prediction paradigm, namely DLP-LRN. Our method can effectively cope with the challenge of dynamic link prediction because it learns the representations of node-pairs from both historical and global perspectives, via an improved GNN and an exponential function, for the downstream task. Experimental results on six dynamic networks demonstrate the superior performance of DLP-LRN compared to eight baselines. Moreover, ablation studies show the effectiveness of the three critical operations in the proposed method.

In future work, we will extend DLP-LRN to the more challenging link prediction problem in dynamic networks with unfixed node sets. We will also explore how to preserve and utilize the appearance times of links in the collapsed network. Moreover, we plan to apply DLP-LRN to practical application scenarios in real network systems, such as social networks and traffic networks.

CRediT authorship contribution statement

Hu Dong: Methodology, Software, Writing – original draft. Longjie Li: Conceptualization, Formal analysis, Writing – review & editing, Funding acquisition. Dongwen Tian: Methodology, Software, Writing – original draft. Yiyang Sun: Methodology, Validation, Resources. Yuncong Zhao: Formal analysis, Writing – review & editing.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Data availability

Data will be made available on request.

Acknowledgments

This work was supported in part by the Science and Technology Program of Gansu Province (Nos. 21JR7RA458 and 21ZD8RA008), and the Supercomputing Center of Lanzhou University.

References

Adamic, L. A., & Adar, E. (2003). Friends and neighbors on the Web. Social Networks, 25, 211–230. http://dx.doi.org/10.1016/S0378-8733(03)00009-1.
Aslam, J. A., Yilmaz, E., & Pavlu, V. (2005). The maximum entropy method for analyzing retrieval measures. In Proceedings of the 28th annual international ACM SIGIR conference on research and development in information retrieval (pp. 27–34). http://dx.doi.org/10.1145/1076034.1076042.
Boccaletti, S., Latora, V., Moreno, Y., Chavez, M., & Hwang, D. U. (2006). Complex networks: Structure and dynamics. Physics Reports, 424, 175–308. http://dx.doi.org/10.1016/j.physrep.2005.10.009.
Chen, J., Wang, X., & Xu, X. (2021). GC-LSTM: Graph convolution embedded LSTM for dynamic network link prediction. Applied Intelligence, 52, 7523–7528. http://dx.doi.org/10.1007/s10489-021-02518-9.
Chiu, C., & Zhan, J. (2018). Deep learning for link prediction in dynamic networks using weak estimators. IEEE Access, 6, 35937–35945. http://dx.doi.org/10.1109/ACCESS.2018.2845876.
Daud, N. N., Hamid, S. H. A., Saadoon, M., Sahran, F., & Anuar, N. B. (2020). Applications of link prediction in social networks: A review. Journal of Network and Computer Applications, 166, Article 102716. http://dx.doi.org/10.1016/j.jnca.2020.102716.
De Choudhury, M., Sundaram, H., John, A., & Seligmann, D. D. (2009). Social synchrony: Predicting mimicry of user actions in online social media. In Proceedings of the 2009 international conference on computational science and engineering (pp. 151–158). http://dx.doi.org/10.1109/CSE.2009.439.
De Winter, S., Decuypere, T., Mitrović, S., Baesens, B., & De Weerdt, J. (2018). Combining temporal aspects of dynamic networks with node2vec for a more efficient dynamic link prediction. In Proceedings of the 2018 IEEE/ACM international conference on advances in social networks analysis and mining (pp. 1234–1241). http://dx.doi.org/10.1109/ASONAM.2018.8508272.
Divakaran, A., & Mohan, A. (2020). Temporal link prediction: A survey. New Generation Computing, 38, 213–258. http://dx.doi.org/10.1007/s00354-019-00065.
Eagle, N., & (Sandy) Pentland, A. (2006). Reality mining: Sensing complex social systems. Personal and Ubiquitous Computing, 10, 255–268. http://dx.doi.org/10.1007/s00779-005-0046-3.
Goodfellow, I. J., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., & Bengio, Y. (2014). Generative adversarial nets. In Proceedings of the 27th international conference on neural information processing systems - vol. 2 (pp. 2672–2680).
Grover, A., & Leskovec, J. (2016). Node2vec: Scalable feature learning for networks. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining (pp. 855–864). http://dx.doi.org/10.1145/2939672.2939754.
Hamilton, W. L., Ying, R., & Leskovec, J. (2017). Inductive representation learning on large graphs. In Proceedings of the 31st international conference on neural information processing systems (pp. 1025–1035).
Hao, X., Lian, T., & Wang, L. (2020). Dynamic link prediction by integrating node vector evolution and local neighborhood representation. In Proceedings of the 43rd international ACM SIGIR conference on research and development in information retrieval (pp. 1717–1720). http://dx.doi.org/10.1145/3397271.3401222.
Hochreiter, S., & Schmidhuber, J. (1997). Long short-term memory. Neural Computation, 9, 1735–1780. http://dx.doi.org/10.1162/neco.1997.9.8.1735.
Holme, P., & Saramäki, J. (2012). Temporal networks. Physics Reports, 519, 97–125. http://dx.doi.org/10.1016/j.physrep.2012.03.001.
Ibrahim, N. M. A., & Chen, L. (2015). Link prediction in dynamic social networks by integrating different types of information. Applied Intelligence, 42, 738–750. http://dx.doi.org/10.1007/s10489-014-0631-0.
Jiao, P., Guo, X., Jing, X., He, D., Wu, H., Pan, S., Gong, M., & Wang, W. (2021). Temporal network embedding for link prediction via VAE joint attention mechanism. IEEE Transactions on Neural Networks and Learning Systems, 1–14. http://dx.doi.org/10.1109/TNNLS.2021.3084957.
Kipf, T. N., & Welling, M. (2017). Semi-supervised classification with graph convolutional networks. In Proceedings of the 5th international conference on learning representations.


Lei, K., Qin, M., Bai, B., Zhang, G., & Yang, M. (2019). GCN-GAN: A non-linear temporal link prediction model for weighted dynamic networks. In Proceedings of the IEEE INFOCOM 2019 - IEEE conference on computer communications (pp. 388–396). http://dx.doi.org/10.1109/INFOCOM.2019.8737631.
Li, J., Peng, J., Liu, S., Weng, L., & Li, C. (2022). Temporal link prediction in directed networks based on self-attention mechanism. Intelligent Data Analysis, 26, 173–188. http://dx.doi.org/10.3233/IDA-205524.
Li, S., Song, X., Lu, H., Zeng, L., Shi, M., & Liu, F. (2020). Friend recommendation for cross marketing in online brand community based on intelligent attention allocation link prediction algorithm. Expert Systems with Applications, 139, Article 112839. http://dx.doi.org/10.1016/j.eswa.2019.112839.
Liben-Nowell, D., & Kleinberg, J. (2007). The link-prediction problem for social networks. Journal of the American Society for Information Science and Technology, 58, 1019–1031. http://dx.doi.org/10.1002/asi.20591.
Liu, Z., Lai, D., Li, C., & Wang, M. (2020). Feature fusion based subgraph classification for link prediction. In Proceedings of the 29th ACM international conference on information and knowledge management (pp. 985–994). http://dx.doi.org/10.1145/3340531.3411966.
Martínez, V., Berzal, F., & Cubero, J.-C. (2017). A survey of link prediction in complex networks. ACM Computing Surveys, 49, 1–33. http://dx.doi.org/10.1145/3012704.
Opsahl, T. (2013). Triadic closure in two-mode networks: Redefining the global and local clustering coefficients. Social Networks, 35, 159–167. http://dx.doi.org/10.1016/j.socnet.2011.07.001.
Opsahl, T., & Panzarasa, P. (2009). Clustering in weighted networks. Social Networks, 31, 155–163. http://dx.doi.org/10.1016/j.socnet.2009.02.002.
Perozzi, B., Al-Rfou, R., & Skiena, S. (2014). DeepWalk: Online learning of social representations. In Proceedings of the 20th ACM SIGKDD international conference on knowledge discovery and data mining (pp. 701–710). http://dx.doi.org/10.1145/2623330.2623732.
Provost, F., & Fawcett, T. (1997). Analysis and visualization of classifier performance: Comparison under imprecise class and cost distributions. In Proceedings of the third international conference on knowledge discovery and data mining (pp. 43–48).
Rossi, R. A., & Ahmed, N. K. (2015). The network data repository with interactive graph analytics and visualization. In Proceedings of the 29th AAAI conference on artificial intelligence (pp. 4292–4293).
Selvarajah, K., Ragunathan, K., Kobti, Z., & Kargar, M. (2020). Dynamic network link prediction by learning effective subgraphs using CNN-LSTM. In Proceedings of the 2020 international joint conference on neural networks (pp. 1–8). http://dx.doi.org/10.1109/IJCNN48605.2020.9207301.
Sharan, U., & Neville, J. (2008). Temporal-relational classifiers for prediction in evolving domains. In Proceedings of the 2008 eighth IEEE international conference on data mining (pp. 540–549). http://dx.doi.org/10.1109/ICDM.2008.125.
Tang, J., Qu, M., Wang, M., Zhang, M., Yan, J., & Mei, Q. (2015). LINE: Large-scale information network embedding. In Proceedings of the 24th international conference on world wide web (pp. 1067–1077). http://dx.doi.org/10.1145/2736277.2741093.
Tripathi, S. P., Yadav, R. K., & Rai, A. K. (2022). Network embedding based link prediction in dynamic networks. Future Generation Computer Systems, 127, 409–420. http://dx.doi.org/10.1016/j.future.2021.09.024.
Wu, X., Wu, J., Li, Y., & Zhang, Q. (2020). Link prediction of time-evolving network based on node ranking. Knowledge-Based Systems, 195, Article 105740. http://dx.doi.org/10.1016/j.knosys.2020.105740.
Xie, F., Chen, Z., Shang, J., Feng, X., & Li, J. (2015). A link prediction approach for item recommendation with complex number. Knowledge-Based Systems, 81, 148–158. http://dx.doi.org/10.1016/j.knosys.2015.02.013.
Yang, M., Liu, J., Chen, L., Zhao, Z., Chen, X., & Shen, Y. (2020). An advanced deep generative framework for temporal link prediction in dynamic networks. IEEE Transactions on Cybernetics, 50, 4946–4957. http://dx.doi.org/10.1109/TCYB.2019.2920268.
Zhang, M., & Chen, Y. (2018). Link prediction based on graph neural networks. In Proceedings of the 32nd international conference on neural information processing systems (pp. 5171–5181).
Zhang, M., Cui, Z., Neumann, M., & Chen, Y. (2018). An end-to-end deep learning architecture for graph classification. In Proceedings of the 32nd AAAI conference on artificial intelligence (pp. 4438–4445).
Zhou, T. (2021). Progresses and challenges in link prediction. iScience, 24, Article 103217. http://dx.doi.org/10.1016/j.isci.2021.103217.
Zhou, J., Cui, G., Hu, S., Zhang, Z., Yang, C., Liu, Z., Wang, L., Li, C., & Sun, M. (2020). Graph neural networks: A review of methods and applications. AI Open, 1, 57–81. http://dx.doi.org/10.1016/j.aiopen.2021.01.001.
