
Soft Computing (2020) 24:4427–4439

https://doi.org/10.1007/s00500-019-04205-x

METHODOLOGIES AND APPLICATION

Learning path combination recommendation based on the learning networks

Hong Liu1 · Xiaojun Li2

Published online: 8 July 2019

© Springer-Verlag GmbH Germany, part of Springer Nature 2019

Abstract
Discovering useful hidden learning behavior patterns in the data of online learning platforms is valuable in educational technology. Studies on learning path recommendation, which recommend appropriate resources to different users, are particularly important for the development of advanced online education. However, recommendation quality may be low for beginners or learners with low participation. In order to improve recommendation quality, a learning path combination recommendation method based on the learning network (LPCRLN) is proposed. LPCRLN introduces complex network techniques: based on the characteristics of courses and learners, a course network and a learner network are constructed, and learners are divided into three types. Recommendations are then made in different scenarios according to the learner's learning records. In this study, a series of experiments has been carried out. The experimental results indicate that the proposed method makes sound recommendations of appropriate courses for different types of learners, with significant improvement in terms of accuracy and efficiency.

Keywords Learning path combination recommendation · Learning network · MOOC · Scenarios

Communicated by V. Loia.

Hong Liu
LLH@mail.zjgsu.edu.cn

1 Computer and Information Engineering College, Zhejiang Gongshang University, Hangzhou, China
2 School of Management and E-Business, Zhejiang Gongshang University, Hangzhou, China

1 Introduction

In recent years, domestic and overseas universities and enterprises have launched a large number of high-quality video open courses, resource-sharing courses and large-scale open online courses, accompanied by the emergence of web-based learning platforms such as the massive open online course (MOOC). Due to the recent unprecedented proliferation of information and communication technologies, digital learning resources have become plentiful, and more and more people can obtain information through online courses. However, surveys of MOOCs report low pass rates and high dropout rates (Jordan 2014), for the following reasons (Hua and Zhang 2014): (1) course resources on the MOOC platform are very abundant, but the courses have no obvious prerequisite relations, so the learner's selection is quite random; (2) the diversity of learners' knowledge patterns and backgrounds makes it hard to balance the degree of difficulty of the learning courses. Therefore, how to meet the learning needs of different learners' professional types and learning characteristics, as well as the general learning needs of learners, and how to help learners find their own learning resources, have become important research aims of online learning recommendation.

The sequence and path of learning activities is called a learning path (Hwang et al. 2010). According to the learning objectives, content, environment and basis, a series of learning activities are combined to form the learning path under the guidance of a certain learning strategy. At present, a lot of research work has been done on online learning path recommendation, presenting several ways to solve this type of problem. One way of choosing a learning path is based on graph theory. Brusilovsky and Maybury (2003) proposed the first learning object sorting method, which considered the learner's knowledge level and learning objectives. Many later studies further improved and supplemented this method.


Alian and Jabri (2009) defined the best learning path as the learning process with the least time and effort, and their research discussed how to select the shortest learning paths in the course graph to learn the target knowledge. Essalmi et al. (2010) suggested finding a suitability function to select a more appropriate path. Durand et al. (2013) proposed a learning path recommendation model based on graph theory, which considered the interdependence relations between learning objects and recommended learning paths from the community's viewpoint.

Another way of choosing a learning path is based on artificial intelligence with the application of soft computing methods. Berg et al. (2006) suggested a swarm intelligence-based learning path recommendation method. The core idea is that if a learning path is frequently used, other learners are more likely to adopt the same learning path. The method did not directly measure the matching degree between learning objects and the learner's demands but used the information of the reference group to generate a recommendation. Chen et al. (2006) proposed a personalized web information search system, which constructed a suitable learning path based on item response theory. Chen (2008) used a genetic algorithm to realize personalized learning path recommendation, which considered the learner's ability, the difficulty coefficient of learning objects and conceptual continuity. Chen et al. (2009) adopted an ontology concept map to realize personalized learning path recommendation. At first, the method constructs the learner's concept map based on the results of previous tests, then the relationship between concepts is determined by conceptual relationship measurement and a fuzzy clustering method, and finally learning path recommendation is realized by applying a genetic algorithm based on the concept map. Yang and Wu (2009) classified learners' learning styles and measured the frequency with which a certain learning path was learned by users of a particular learning style, so as to improve the efficiency and accuracy of the swarm intelligence algorithm. Carchiolo et al. (2010) studied the best personalized learning path in a trusted and recommendation-aware environment. This approach searched for trusted peers because they provide useful resources that meet similar problems and preferences. Cheng (2011) proposed using an extended ant colony algorithm to solve the problem of learning path recommendation, which made comprehensive reference to learners' learning path evaluations and to the knowledge level and learning style of target users. Tam et al. (2014) presented an explicit semantic analysis, followed by enhancing the ontology analysis through concept clustering, and applied an optimizer to find an optimal learning path of involved concepts or modules. Tseng et al. (2016) constructed a concept map for adaptive learning and provided an educational recommender for individual students. Yang and Dong (2017) proposed a learning path model that allows learning activities and the assessment criteria of their learning outcomes to be explicitly formulated by Bloom's taxonomy for learners. Dwivedi et al. (2017) presented an effective learning path recommendation system (LPRS) for e-learners through a variable-length genetic algorithm (VLGA) by considering learners' learning styles and knowledge levels. Bendahmane et al. (2017) presented a competence-based approach (CBA) derived from learning data, learners' characteristics and their expectations, in which learners were clustered and traced to obtain the proper learning paths. Zhao et al. (2017) proposed an approach for recommending micro-learning paths based on an improved ant colony optimization algorithm, in which the learner's transitions of knowledge level, knowledge area and learning goal can be detected according to the learner's operations. Zhu et al. (2018) presented a new multi-constraint learning path recommendation algorithm based on a knowledge map, which divided the e-learning process into four different learning scenarios and included eight kinds of constrained learning paths and their corresponding constraint factors. Wan and Niu (2018) incorporated a learning object (LO)-oriented recommendation mechanism into learner-oriented recommender systems and proposed an LO self-organization-based recommendation approach. Other examples of static learning path recommendation using the swarm intelligence approach can be found in Wang et al. (2008), Yang and Wu (2009), Wan (2011), Abualigah and Hanandeh (2015) and Abualigah and Khader (2017).

What's more, for MOOCs and e-learning, filtering-based approaches are often used in learning path research. Salehi and Kamalabadi (2014) proposed a new material recommender system framework based on sequential pattern mining and multidimensional attribute-based collaborative filtering (CF). Ding et al. (2016) presented a group recommender system for online course study, which used historical information about course ratings and former learners. Hou et al. (2016) proposed a big data-supported, context-aware online learning-based course recommender system, which took personal preferences into consideration, making the recommendation suitable for people with different backgrounds. Other works on course recommendation utilize adaptive learning (Li et al. 2005; Alzaghoul and Tovar 2016; Qazdar et al. 2015).


The above-mentioned existing methods considered factors such as the learning style (Yang and Wu 2009; Cheng 2011) and the frequency with which a certain learning path is learned (Berg et al. 2006; Yang and Wu 2009). However, due to insufficient consideration of the characteristics of online learning, such as the diversity of learning styles, learning randomness and the lack of obvious prerequisite relations between learning resources, the above methods cannot precisely match the learners' needs, especially for beginners or learners with low participation. Therefore, there is a need to create an effective recommendation approach that considers the different learning scenarios, including those of beginners, learners with low participation and active learners.

This paper proposes the learning path combination recommendation method based on the learning network (LPCRLN), which takes an objective view of learning information such as learning frequency and learner participation. The model first constructs a course–course network according to the learners' current course records, which represents the abstract relationship between the learners of any two arbitrary courses. For a specific course, we construct a learner–learner network to represent the abstract relationship between two arbitrary learners. Second, we adopt complex network theory to analyze the characteristics of the network structure. Finally, we propose a learning path combination recommendation algorithm to recommend learning paths for learners in different scenarios.

The rest of this paper is organized as follows: Sect. 2 introduces the construction of the learning network. The related definitions and the description of the LPCRLN method are presented in Sect. 3. Experiments and discussions are described in Sect. 4. Finally, this paper is concluded in Sect. 5.

2 The construction of the learning network

With the continuous increase in online learning resources, people gradually realize that quickly locating the appropriate resources for their study is the key factor in improving learning efficiency. Based on learning history and learners' similarity, the LPCRLN method constructs the learning network. For courses, the relationship between course and learner is extracted. For learners, the similarity between learners is computed to guide the learning path recommendation. Figure 1 shows the basic framework of the LPCRLN method. First, we collect the learning data from the online learning platform; second, we preprocess these data to construct the course–course network and the learner–learner network; finally, based on these two networks, we present the learning path combination recommendation algorithm to recommend appropriate courses to different types of learners. The following sections describe each component of the framework and define the relevant concepts.

Fig. 1 Framework of LPCRLN method

2.1 Data acquisition and processing

This paper uses a web robot to crawl the learning records on a MOOC platform. The original online learning-related information, such as course name, category and evaluation, may contain noise, such as spelling errors, content repetition and information asymmetry. Therefore, preprocessing is necessary, such as web page cleaning to remove noisy pages and automatic filtering to delete repeated resources and save storage space. After preprocessing, the information is transformed into relational data structures and stored in a local database. The specific data structure for this experiment includes: course information (course category, course id, course name in English, course name in Chinese, number of learners concerned, course interestingness), learner information (learner id, education level, learner name, number of comments, number of notes, number of posts, number of favorites, number of topics), learning information (learner id, course id, learner URL, course pass) and learner evaluation (learner id, course id, course mark).

2.2 The definition of the learning network

After data acquisition and processing, two kinds of weighted networks can be built: the course–course network (CCN) and the learner–learner network (LLN), of which the LLN mainly cooperates with the CCN to recommend learning paths. In this subsection, we first give the formal descriptions of the CCN and LLN.
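As an illustration of how the relational records described in Sect. 2.1, from which these networks are built, might be organized, the following is a minimal Python sketch; the field names are assumptions chosen for readability rather than the exact schema used in the experiments:

from dataclasses import dataclass

@dataclass
class Course:
    course_id: int
    category: str
    name_en: str
    name_zh: str
    num_followers: int        # number of learners concerned
    interestingness: float

@dataclass
class Learner:
    learner_id: int
    education_level: int      # e.g., encoded as an integer in [0, 6]
    name: str
    num_comments: int
    num_notes: int
    num_posts: int
    num_favorites: int
    num_topics: int

@dataclass
class LearningRecord:
    learner_id: int
    course_id: int
    learner_url: str
    passed: bool              # course pass flag

@dataclass
class Evaluation:
    learner_id: int
    course_id: int
    course_mark: float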


Fig. 2 Architecture of learning path combination recommendation algorithm based on the learning network

Definition 1 Course–course network (CCN): The CCN structure can be defined as CCN = (V, E), where V = (C1, C2, C3, …, Cn) is the set of course nodes, and E = {(Ci, Cj) | Ci, Cj ∈ V} is the set of undirected edges denoting the learners' learning relations for two arbitrary courses, i.e., if a learner Lk (defined in Definition 2) learns courses Ci and Cj, there will be an edge between the nodes Ci and Cj. The weight of each undirected edge reflects whether the learners passed the courses, and the network can be represented by the adjacency matrix:

$$\varphi = \left\{ \varphi_{ij} = \begin{cases} w_{ij} = \sum_{k=1}^{m} l_{ki} \cdot l_{kj}, & (C_i, C_j) \in E \\ 0, & \text{otherwise} \end{cases} \right\} \quad (1)$$

where φ represents the n*n adjacency matrix, n = |V| is the total number of nodes and m = |E| is the total number of edges in the network. If φij = 0, there is no edge between Ci and Cj. Otherwise, the weight φij takes into account all learners who learned both courses Ci and Cj. lki ∈ {0.5, 1} represents the learning situation of learner k: if learner k passes course i, then lki = 1; otherwise, lki is set to 0.5.

Definition 2 Learner–learner network (LLN): The LLN structure can be defined as LLN = (V′, E′), where V′ = (L1, L2, L3, …, Ln) is the set of learner nodes and E′ is the set of undirected edges denoting the relations of any two learners learning the same course, i.e., if both learner Li and learner Lj learned course Ci, there will be an edge between the nodes Li and Lj. The weight of each undirected edge denotes the course-learning relation between the learners, and the network can be represented by the adjacency matrix:

$$\varphi' = \left\{ \varphi'_{ij} = \begin{cases} w', & (L_i, L_j) \in E' \\ 0, & \text{otherwise} \end{cases} \right\} \quad (2)$$

where φ′ represents an n′*n′ matrix, and n′ = |V′| is the number of network nodes. If φ′ij = w′, then learners Li and Lj have learned the same courses, and the number of such courses is w′.
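To make the construction concrete, the following is a minimal Python sketch, under the assumption that the learning records are available as (learner_id, course_id, passed) tuples, of how the adjacency matrices of Eqs. (1) and (2) could be built; the function and variable names are illustrative only and not taken from the paper:

import numpy as np

def build_ccn(records, num_courses, num_learners):
    # l[k, i] = 1 if learner k passed course i, 0.5 if learned without passing, 0 otherwise
    l = np.zeros((num_learners, num_courses))
    for learner_id, course_id, passed in records:
        l[learner_id, course_id] = 1.0 if passed else 0.5
    # Eq. (1): phi_ij = sum_k l_ki * l_kj, nonzero only for courses co-learned by some learner
    phi = l.T @ l
    np.fill_diagonal(phi, 0.0)          # no self-loops
    return phi

def build_lln(records, num_courses, num_learners):
    # binary learner-course incidence: 1 if learner k learned course i at all
    b = np.zeros((num_learners, num_courses))
    for learner_id, course_id, _ in records:
        b[learner_id, course_id] = 1.0
    # Eq. (2): phi'_ij = number of courses learned by both L_i and L_j
    phi_l = b @ b.T
    np.fill_diagonal(phi_l, 0.0)
    return phi_l

Usage would then be phi = build_ccn(records, n_courses, n_learners), with records drawn from the learning information table described in Sect. 2.1.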


2.3 The characteristics of learning network nodes

The research of complex networks has become a heated topic recently. By granting different weights to each network link, a weighted network is formed to provide a more comprehensive description of network structure and behavior. In this paper, complex network techniques based on the improved node weight indicator defined below are used to analyze the characteristics of the CCN and to mine the learning relations of single courses and the learning relations between courses.

Table 1 Scenarios of LPCRLN

    Learning scenarios
1   The learner hasn't learned any courses
2   The learner has learned only one course Ci
3   The learner has learned n (n ≥ 2) courses

Definition 3 Node weight (S): In a weighted complex network, the node weight is generally defined as the sum of the associated edge weights. In the CCN, there are two types of nodes: isolated nodes and non-isolated nodes. This paper sets the weight of an isolated node Ci to the sum of lki. The detailed definition is expressed as:

$$s_i = \begin{cases} \sum_{j \in N} w_{ij}, & C_i \text{ is a non-isolated node and } (C_i, C_j) \in E \\ \sum_{k \in N'} l_{ki}, & C_i \text{ is an isolated node}, L_k \text{ is the } k\text{th learner} \end{cases} \quad (3)$$

Definition 4 Learners' similarity: For beginners, the courses of similar learners are recommended. The similarity of any two learners is obtained mainly through their characteristics, including basic attributes and behavior features. The vector space model (Salton et al. 1975) is the most well-known method for similarity calculation and is used to calculate the similarity of any two learners in this paper. Since each feature corresponds to one dimension, an N-dimensional space vector can be constructed:

$$F_i = (f_{1,i}, f_{2,i}, \ldots, f_{N,i}) \quad (4)$$

where Fi is the feature vector of learner i, N represents the number of learner features, and fj,i is the quantized value of feature j. After the construction of the learners' vector space, the cosine coefficient is used to represent the similarity between learners:

$$\mathrm{Sim}(L_i, L_j) = \mathrm{Cos}(L_i, L_j) = \frac{\sum_{k=1}^{|F|} f_{k,i} \cdot f_{k,j}}{\sqrt{\sum_{k=1}^{|F|} f_{k,i}^2 \cdot \sum_{k=1}^{|F|} f_{k,j}^2}} \quad (5)$$

where Sim(Li, Lj) represents the similarity between learners Li and Lj, Cos(Li, Lj) is the cosine coefficient between learners Li and Lj, F is the feature set and |F| refers to the number of features.

Definition 5 Learning path: A learning path is a sequence of multiple courses, between which there is no explicit prerequisite relationship, denoted by p = {Ci, Cj, …, Cn}.

Algorithm 1 Learning path combination recommendation algorithm based on the learning network.
p = RecommendPath(C, L, Li, Clearn, k, len)
Input:
  C: the course set;
  L: the learner set;
  Li: the target learner;
  Clearn: the course set learned by Li;
  k: the number of courses on the recommended learning paths;
  len: the number of nodes on the path;
Output:
  the learning path combination recommendation p = {Crem};
Apply the function G(C) = GetCNN(C) to generate the adjacency matrix of the course–course network using Eq. (1);
Apply the function G(L) = GetLNN(L) to generate the adjacency matrix of the learner–learner network using Eq. (2);
Apply the function num = GetCNumofL(G(L), Li) to calculate the number of courses learned by Li;
Apply the function Clearn = GetCourses(Li) to get all the courses learned by Li;
if num == 0 then
  call the function p = RemPathofNewL(G(C), G(L), Li, k);
else if num == 1 then
  call the function p = RemPathofOneCourseL(G(C), G(L), Li, Clearn, k);
else
  call the function p = RemPathofOtherL(G(C), Li, Clearn, k, len);
end if
return the learner's recommended learning path p.

Algorithm 2 Learning path recommendation for newly registered learners.
p = RemPathofNewL(G(C), G(L), Li, k)
Input:
  G(C): the course–course network;
  G(L): the learner–learner network;
  Li: the target learner;
  k: the number of courses on the recommended learning paths;
Output:
  the learning path combination recommendation p = {Crem};
Apply the function S(Li) = GetSimilarity(G(L), Li) to compute the similarity between Li and the other learners using Eq. (5), and find the nearest-neighbor similar learner Lj;
Apply the function C(Lj) = GetCourses(Lj) to get all the courses learned by Lj, and sort them in descending order by node weight;
Apply the function p(Crem) = RecommendPath(G(C), k, C(Lj)) to choose the node-weight top-k courses in the CCN;
return p(Crem) = all the k recommended courses.
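As an illustration of the GetSimilarity step used in Algorithm 2, the following is a small Python sketch of the cosine similarity of Eq. (5); it assumes the learner features have already been quantized into numeric vectors, and the names are illustrative:

import numpy as np

def cosine_similarity(fi, fj):
    # Eq. (5): cosine coefficient between two learner feature vectors
    fi, fj = np.asarray(fi, dtype=float), np.asarray(fj, dtype=float)
    denom = np.sqrt((fi ** 2).sum() * (fj ** 2).sum())
    return float(fi @ fj / denom) if denom > 0 else 0.0

def nearest_neighbor(target_features, other_features):
    # index of the most similar learner; other_features is assumed to exclude the target
    sims = [cosine_similarity(target_features, f) for f in other_features]
    return int(np.argmax(sims))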


Algorithm 3 Learning path recommendation for the learner who has learned a single course.
p = RemPathofOneCourseL(G(C), G(L), Li, Clearn, k)
Input:
  G(C): the course–course network;
  G(L): the learner–learner network;
  Li: the target learner;
  Clearn = {Ci}: the course set learned by Li;
  k: the number of courses on the recommended learning paths;
Output:
  the learning path combination recommendation p = {Crem};
for all the Ci, i = 0…m do
  if φij == 0, j = 0…n then
    apply the function L(Ci) = GetLearner(G(L), Ci) to get the learners learning course Ci;
    apply the function ML(Li) = GetL(G(L), L(Ci)) to choose the learner Lj related to Li with the maximum node weight;
    apply the function p(Crem) = RecommendPath(G(C), k, C(Lj)) to choose the courses of Lj in the CCN and get the node-weight top-k courses;
  else
    apply the function C(Ci) = GetCourses(Ci) to get all the courses relevant to Ci;
    apply the function p(Crem) = RecommendPath(G(C), k, C(Ci)) to choose the edge-weight top-k courses in the CCN;
  end if
end for
if Crem == ∅ then
  apply the function p(Crem) = RecommendPath(G(C), k, C) to choose the node-weight top-k courses in the CCN;
end if
return p(Crem) = all the k recommended courses.

Algorithm 4 Learning path recommendation for the learner who has learned more than one course.
p = RemPathofOtherL(G(C), Li, Clearn, k, len)
Input:
  G(C): the course–course network;
  Li: the target learner;
  Clearn = {Cj}: the course set learned by Li;
  k: the number of courses on the recommended learning paths;
  len (len >= 3): the number of nodes on the path;
Output:
  the learning path combination recommendation p = {Crem};
while choosing any node combination (Cj, Ck) from Clearn do
  apply the function Ctemp(len) = GetCofAllPath(G(C), Cj, Ck, len) to get all the courses on the paths for which Cj is the start point, Ck is the end point, and the path length is len, or len-1, …, or 3, and categorize Ctemp according to the path length;
  for m = 3, m <= len do
    apply the function p(Crem) = RecommendPath(G(C), k, Ctemp(m)) to choose the edge-weight top-k courses in the CCN;
  end for
end while
return p(Crem) = all the k recommended courses.

3 Learning path combination recommendation algorithm

According to the model proposed in Sect. 2, we design and realize the learning path combination recommendation algorithm based on the learning network, which is described in Algorithm 1; Fig. 2 shows the architecture of the LPCRLN method.

(1) Use two adjacency matrices to represent the CCN and LLN
    According to the learning records crawled from the MOOC platform, we use two adjacency matrices to represent the course learning network, denoted as G(C), and the learner learning network, denoted as G(L). The network weights are given in Eqs. (1) and (2).
(2) The scenarios of LPCRLN
    The scenarios of LPCRLN can be divided into three types, as shown in Table 1 (see the dispatch sketch after this list).
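The following is a minimal Python sketch of this top-level scenario dispatch, mirroring Algorithm 1 and Table 1. The three helper functions are placeholders standing in for Algorithms 2–4; all names are assumptions introduced for illustration:

def recommend_for_new_learner(ccn, lln, learner, k):
    ...   # placeholder for Algorithm 2: nearest-neighbor learner's top-k courses by node weight

def recommend_for_one_course(ccn, lln, learner, learned, k):
    ...   # placeholder for Algorithm 3: neighbors of the single learned course by edge weight

def recommend_for_many_courses(ccn, learner, learned, k, path_len):
    ...   # placeholder for Algorithm 4: courses on bounded-length paths between learned courses

def recommend_path(ccn, lln, learner, learned_courses, k, path_len):
    # top-level dispatch by scenario (Table 1)
    if len(learned_courses) == 0:        # Scenario 1: new learner, no course learned
        return recommend_for_new_learner(ccn, lln, learner, k)
    if len(learned_courses) == 1:        # Scenario 2: exactly one course learned
        return recommend_for_one_course(ccn, lln, learner, learned_courses, k)
    return recommend_for_many_courses(ccn, learner, learned_courses, k, path_len)  # Scenario 3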


For newly registered learners, who have not learned any courses, LPCRLN first selects their nearest-neighbor similar learners, then the courses learned by these learners are sorted in descending order by node weight, and finally the top-k courses to be recommended to the new learners are chosen; the realization is shown in Algorithm 2.

If the learner has learned only one course Ci, LPCRLN distinguishes two cases: (1) Ci is an isolated node in the CCN, which has no adjacent nodes; (2) Ci is a non-isolated node in the CCN. The specific process is shown in Fig. 3 and the realization is shown in Algorithm 3.

For learners who have already learned n (n ≥ 2) courses, the problem of learning path recommendation is how to find the network paths in the CCN containing all these nodes; the specific process is shown in Fig. 4 and the realization is shown in Algorithm 4.

4 Experimental evaluations

This section describes our experiment, its design and results to verify the hypotheses we have proposed and to evaluate the feasibility and effectiveness of LPCRLN. We obtained the online learning records on the MOOC platform (http://mooc.guokr.com) from 2014 to 2017 as the experimental subjects. According to the data, the number of learners is 5769 and the number of courses is 2312. Python 3.6 is used as the analytical tool and SQL Server 2008 R2 is used for data storage.

4.1 Node weight analysis of courses

The data obtained are processed to construct the CCN, which has 2312 nodes, including 1061 non-isolated nodes and 1251 isolated nodes, and 74,692 edges, while the edge weight w is within the range [0.25, 17.25]. If we draw the CCN graph based on all nodes and edges, the graph is very dense, so we plot only the nodes and edges with a weight greater than or equal to 3, as shown in Fig. 5, in which isolated nodes appear (i.e., nodes not associated with any other nodes), such as nodes 364, 1260, 698 and 572.

In order to analyze Scenario 1, the improved node weight indicator is adopted to analyze the node characteristics in the CCN. Figure 6 shows the proportion of courses with a given node weight. It is found that most courses have small node weights: 83.6% of them have a weight less than or equal to 50, the average node weight is 56.73, and only 12.16% of nodes exceed the average. This shows that most learners concentrate on a few courses on the platform, while the majority of courses are learned by few learners.
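A small Python sketch of how the node weight indicator of Eq. (3) could be computed from the CCN adjacency matrix and the learning-situation matrix is given below; it is a sketch only, and the variable names follow the earlier CCN construction example rather than the paper:

import numpy as np

def node_weights(phi, l):
    # phi: CCN adjacency matrix (Eq. 1); l: learner-course learning-situation matrix, l_ki in {0, 0.5, 1}
    s = phi.sum(axis=1)                        # non-isolated nodes: sum of associated edge weights
    isolated = (s == 0)                        # isolated nodes have no incident edges
    s[isolated] = l[:, isolated].sum(axis=0)   # isolated nodes: sum of l_ki over learners (Eq. 3)
    return s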

Fig. 3 Processing flow for Scenario 2
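As a concrete illustration of the non-isolated case in Scenario 2 (Fig. 3, Algorithm 3), the following Python sketch recommends the top-k neighbors of the single learned course by edge weight in the CCN. It reuses the adjacency matrix from the earlier construction sketch and is an assumption-laden simplification rather than the exact procedure:

import numpy as np

def recommend_from_one_course(phi, course_id, k):
    # edge weights between the learned course and every other course in the CCN
    weights = phi[course_id].copy()
    weights[course_id] = 0.0                   # exclude the course itself
    ranked = np.argsort(weights)[::-1]         # neighbors sorted by descending edge weight
    return [int(c) for c in ranked[:k] if weights[c] > 0]

For an isolated course, Algorithm 3 instead falls back to the node-weight top-k courses of a learner related to the target learner.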


Fig. 4 Processing flow for Scenario 3
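The core step of Scenario 3 (Fig. 4, Algorithm 4) is collecting the courses that lie on paths of bounded length between two learned courses. A minimal depth-first enumeration in Python might look as follows; GetCofAllPath is the name used in the pseudocode, and this sketch is only one plausible realization of it:

def courses_on_paths(phi, start, end, max_len):
    # Collect, grouped by path length (number of nodes), the intermediate courses on all
    # simple paths from start to end with 3..max_len nodes (cf. GetCofAllPath in Algorithm 4).
    n = len(phi)
    found = {m: set() for m in range(3, max_len + 1)}

    def dfs(node, visited):
        for nxt in range(n):
            if phi[node][nxt] <= 0 or nxt in visited:
                continue
            if nxt == end:
                m = len(visited) + 1                 # nodes on the completed path
                if 3 <= m <= max_len:
                    found[m].update(visited[1:])     # keep intermediate courses only
            elif len(visited) + 1 < max_len:         # room left to reach end within max_len
                dfs(nxt, visited + [nxt])

    dfs(start, [start])
    return found

The edge-weight top-k courses within each length group can then be chosen, as Algorithm 4 describes.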

4.2 Experimental results

In this paper, the precision and recall indicators are used to analyze the effectiveness of the LPCRLN method; they are widely used evaluation metrics for the quality of recommendations (Sarwar et al. 2000). The precision and recall indicators are defined as:

$$P = \frac{n_{\mathrm{correct}}}{n_{\mathrm{recall}}} \times 100\%, \qquad R = \frac{n_{\mathrm{correct}}}{n_{\mathrm{all}}} \times 100\% \quad (6)$$

where ncorrect is the number of correctly recommended courses, nrecall is the number of all recommended courses and nall is the number of all sample courses.
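A short Python sketch of Eq. (6), under the assumption that the recommended courses and the correct courses are available as collections of course ids, is:

def precision_recall(recommended, correct, n_all):
    # Eq. (6): P = correct hits / all recommended, R = correct hits / all sample courses
    hits = len(set(recommended) & set(correct))
    p = 100.0 * hits / len(recommended) if recommended else 0.0
    r = 100.0 * hits / n_all if n_all else 0.0
    return p, r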


In the experiments, we use part of the courses a learner has learned as the correct courses to recommend, i.e., if a learner has learned two courses C1 and C2, we choose either of the two courses C1 and C2 as the recommended correct course and then apply Algorithm 3 to obtain the recommended course.

In the experiment, the CCN is constructed based on all the data. For Scenario 1, this paper selected the following learner characteristics: education level and the numbers of comments, notes, postings, collections and topics. For education level, integers in the range [0, 6] are used to represent others, middle school, associate degree, bachelor, master, Ph.D. and blank.

Fig. 5 CCN structure of nodes and edges with w > 3

Fig. 6 Node weight of courses

We first run experiments for different scenarios, data sizes and k values. Every experiment is run ten times and the average result is taken. The courses recommended by the LPCRLN method are then compared with the courses actually learned. In the experiments, the learners who have learned only one course are used to test Scenario 1, the learners who have learned two courses are used to test Scenario 2, and the other learners are used to test Scenario 3; in all cases we randomly choose a course as the testing result. The experimental results of LPCRLN are shown in Table 2, in which the data size takes values in {50, 100, 150, 200}, the k value is in [1, 3] and len is set to 3.

Based on the above results, LPCRLN results with a variety of learner numbers and k values are combined. Figure 7a shows how P and R vary across scenarios with the number of learners, and Fig. 7b shows how P varies with the number of learners for different k values.

From the table, the recommendation under Scenario 3 is better. For example, when the k value is 1 and the learner number is 100, the precision of the recommendation is 93.33%. The recommendation under Scenario 1 is relatively worse because only one of the learner characteristics, education level, is used to compute the learner similarity. If the new learner has more characteristics, the recommendation result under Scenario 1 will improve. From this, the LPCRLN method can solve the cold start problem to some extent.

In order to evaluate the effectiveness of LPCRLN, experiments are performed to compare different methods, including LPCRLN, top-k (Mukund and George 2004), learner-based CF and course-based CF (Sarwar et al. 2000). In Fig. 8a–e, experiments with various data sizes but the same k value are performed to examine the impact on performance, in which the training data are increased from 150 to 500 with an increment of 150. From these, we can see that LPCRLN is more robust than the other three algorithms for the same training set size and k value. Figure 8f presents the precision and recall curves for the four algorithms, and the corresponding data results are shown in Table 3. In these experiments, 300 data are randomly selected as a testing set, which means the learner numbers of Scenario 1, Scenario 2 and Scenario 3 are each 100, while the k values are in the range [1, 3] and the path length is set to 3. The results show that the proposed LPCRLN performs better than the other three algorithms because the top-k algorithm only takes the weight of nodes in the CCN into account, and the learner-based CF and course-based CF only take the learners' learning and passing of courses into account, while LPCRLN combines network analysis, learner analysis and learners' learning information. Overall, the proposed LPCRLN method outperforms the other algorithms in terms of accuracy.


Table 2 LPCRLN results of the different scenarios, different data sizes and different k values

                        Scenario 1          Scenario 2           Scenario 3
k   Learner number      P        R          P         R          P         R
1   50                  3.13%    2.00%      23.53%    5.97%      75.00%    8.11%
1   100                 6.06%    2.00%      12.90%    3.28%      93.33%    10.61%
1   150                 9.09%    2.00%      13.33%    3.59%      88.89%    10.32%
1   200                 17.65%   3.00%      12.50%    3.59%      85.19%    11.86%
2   50                  1.92%    2.00%      17.86%    7.46%      40.00%    8.11%
2   100                 3.70%    2.00%      21.43%    7.38%      72.73%    12.12%
2   150                 5.56%    2.00%      19.64%    6.59%      69.23%    11.61%
2   200                 10.71%   3.00%      23.88%    8.21%      79.41%    13.92%
3   50                  2.74%    4.00%      17.50%    10.45%     40.00%    10.81%
3   100                 6.58%    5.00%      23.53%    9.84%      75.00%    20.45%
3   150                 7.89%    4.00%      27.27%    10.78%     72.50%    18.71%
3   200                 11.39%   4.50%      34.62%    13.85%     85.42%    21.13%

Bold indicates the best P or R values in experiments with different numbers of learners under the same k value in the same scenario

Fig. 7 LPCRLN results with different learner numbers and k values

5 Conclusions

From different angles, this paper analyzes the online learning relations between courses and learners and constructs the course–course network and the learner–learner network, which describe the relationships between any two arbitrary courses or learners, respectively. With complex network theory, the characteristics of these two networks are mined and a learning path combination recommendation method is proposed. The method considers three learning scenarios and recommends appropriate learning paths for different learners, and the experimental analysis of real MOOC data verified the feasibility and effectiveness of the proposed method. What's more, the method accounts for the facts that newly enrolled learners do not have any online learning experience and that the relationships among courses on the online platform are not obvious.

Additionally, there are some limitations that require further improvement. We will continue our research work in the following two aspects: (1) we will apply the LPCRLN method to more real data so as to further verify its feasibility and validity; (2) we will try to improve its effectiveness by combining the LPCRLN method with existing recommendation methods and considering more characteristics of learners and courses.


Fig. 8 Performance comparisons using LPCRLN, top-k, learner-based CF and course-based CF in terms of the precision and recall


Table 3 Data results corresponding to Fig. 8f

     LPCRLN              Top-k               Learner-based CF     Course-based CF
k    P        R          P        R          P        R           P        R
1    25.32%   5.65%      1.00%    0.85%      2.27%    1.98%       0.57%    0.28%
2    22.88%   7.63%      3.33%    2.82%      2.14%    3.67%       0.63%    0.57%
3    26.99%   12.43%     10.00%   8.47%      1.84%    4.24%       0.90%    1.13%

Bold indicates best P or R values in experiments with a different number of learners under the same k value in the same scenario

Acknowledgments This work is supported by the National Social Sciences Foundation Project (Grant Nos. 17BTQ069, 18BGL101) and the Zhejiang Natural Science Foundation Project (Grant No. LY19F020007).

Compliance with ethical standards

Conflict of interest The authors declare that they have no conflicts of interest to this work.

Ethical approval This article does not contain any studies with human participants or animals performed by any of the authors.

References

Abualigah LMQ, Hanandeh ES (2015) Applying genetic algorithms to information retrieval using vector space model. Int J Comput Sci Eng Appl 5(1):19–28
Abualigah LM, Khader AT (2017) Unsupervised text feature selection technique based on hybrid particle swarm optimization algorithm with genetic operators for the text clustering. J Supercomput 73(11):4773–4795
Alian M, Jabri R (2009) A shortest adaptive learning path in eLearning systems: mathematical view. J Am Sci 5(6):32–42
Alzaghoul A, Tovar E (2016) A proposed framework for an adaptive learning of massive open online courses (MOOCs). In: International conference on remote engineering and virtual instrumentation, pp 127–132
Bendahmane M, Falaki BE, Benattou M (2017) Individualized learning path through a services-oriented approach. Springer, Berlin. https://doi.org/10.1007/978-3-319-46568-5_10
Berg BVD, Tattersall C, Janssen J, Brouns F, Kurvers H, Koper R (2006) Swarm-based sequencing recommendations in e-learning. Int J Comput Sci Appl 3(3):1–11
Brusilovsky P, Maybury MT (2003) From adaptive hypermedia to the adaptive web. Commun ACM 45(5):30–33
Carchiolo V, Longheu A, Malgeri M (2010) Reliable peers and useful resources: searching for the best personalized learning path in a trust and recommendation-aware environment. Inf Sci 180(10):1893–1907
Chen CM (2008) Intelligent web-based learning system with personalized learning path guidance. Comput Educ 51(2):787–814
Chen CM, Liu CY, Chang MH (2006) Personalized course sequencing utilizing modified item response theory for web-based instruction. Expert Syst Appl 30(2):378–396
Chen CM, Peng CJ, Shiue JY (2009) Ontology-based concept map for planning a personalized learning path. Br J Edu Technol 40(6):1028–1058
Cheng Y (2011) A method of swarm intelligence-based learning path recommendation for online learning. J Syst Manag 20(2):232–237
Ding YH, Wang DQ, Zhang YX, Li L (2016) A group recommender system for online course study. In: International conference on information technology in medicine and education, pp 318–320
Durand G, Belacel N, Laplante F (2013) Graph theory based model for learning path recommendation. Inf Sci 251(4):10–21
Dwivedi P, Kant V, Bharadwaj KK (2017) Learning path recommendation based on modified variable length genetic algorithm. Educ Inf Technol 6:1–18
Essalmi F, Ayed LJB, Jemni M, Kinshuk, Graf S (2010) A fully personalization strategy of E-learning scenarios. Comput Hum Behav 26(4):581–591
Hou YF, Zhou P, Wang T, Hu YC (2016) Context-aware online learning for course recommendation of MOOC big data. arXiv:1610.03147v2
Hua YF, Zhang LG (2014) Research on construction and application of multiplex and concentric learning analytics model based on MOOCs. J Distance Educ 5:104–112
Hwang GJ, Kuo FR, Yin PY, Chuang KH (2010) A heuristic algorithm for planning personalized learning paths for context-aware ubiquitous learning. Comput Educ 54(2):404–415
Jordan K (2014) Initial trends in enrolment and completion of massive open online courses. Int Rev Res Open Distance Learn 15(1):133–160
Li YH, Zhao B, Gan JH (2005) Make adaptive learning of the MOOC: the CML model. In: International conference on computer science and education (ICCSE), pp 1001–1004
Mukund D, George K (2004) Item-based top-N recommendation algorithms. ACM Trans Inf Syst (TOIS) 22(1):143–177
Qazdar A, Cherkaoui C, Battou A, Mezouary AEE (2015) A model of adaptation in online learning environments (LMSs and MOOCs). In: International conference on intelligent systems: theories and applications (SITA), pp 1–6
Salehi M, Kamalabadi IN (2014) Personalized recommendation of learning material using sequential pattern mining and attribute based collaborative filtering. Educ Inf Technol 19(4):713–735
Salton G, Wong A, Yang CS (1975) A vector space model for automatic indexing. Commun ACM 18(11):613–620
Sarwar B, Karypis G, Konstan J, Riedl J (2000) Analysis of recommendation algorithms for e-commerce. In: Proceedings of the second ACM conference on electronic commerce, pp 158–167
Tam V, Lam EY, Fung ST (2014) A new framework of concept clustering and learning path optimization to develop the next-generation e-learning systems. J Comput Educ 1(4):335–352
Tseng HC, Chiang CF, Su JM, Hung JL, Shelton BE (2016) Building an online adaptive learning and recommendation platform. In: International symposium on emerging technologies for education. SETE 2016: emerging technologies for education, pp 428–432
Wan FH (2011) Personalized recommendation for web-based learning based on ant colony optimization with segmented-goal and meta-control strategies. IEEE Int Conf Fuzzy Syst 39(7):2054–2059
Wan SS, Niu ZD (2018) An e-learning recommendation approach based on the self-organization of learning resource. Knowl Based Syst 160:71–87
Wang TI, Wang KT, Huang YM (2008) Using a style-based ant colony system for adaptive learning. Expert Syst Appl 34(4):2449–2464
Yang F, Dong ZH (2017) Learning path construction in E-learning. Lecture notes in educational technology. https://doi.org/10.1007/978-981-10-1944-9
Yang YJ, Wu C (2009) An attribute-based ant colony system for adaptive learning object recommendation. Expert Syst Appl 36(2):3034–3047
Zhao Q, Zhang YQ, Chen J (2017) An improved ant colony optimization algorithm for recommendation of micro-learning path. In: IEEE international conference on computer and information technology, pp 190–196
Zhu HP, Tian F, Wu K, Shah N, Chen Y, Ni YF, Zhang XH, Chao KM, Zheng QH (2018) A multi-constraint learning path recommendation algorithm based on knowledge map. Knowl Based Syst 143:102–144

Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
