ABSTRACT Millions of alarms may appear in the optical layer of optical transport networks every month, which brings great challenges to network operation, administration and maintenance. In this paper, we address this problem and propose a method of alarm pre-processing and correlation analysis for such networks. During the alarm pre-processing, we use a combination of time series segmentation and a time sliding window to extract the alarm transactions, and then we use a combined K-means and back propagation neural network algorithm to evaluate the alarm importance quantitatively. During the alarm correlation analysis, we modify a classic rule mining algorithm, the Apriori algorithm, into a Weighted Apriori (W-Apriori) to find the high-frequency chain alarm sets among the alarm transactions. We conducted experiments on actual alarm records from the optical layer of a provincial backbone of China Telecom, and the results show that our method performs alarm compression, alarm correlation, and chain alarm mining effectively. By parameter adjustment, the alarm compression rate can vary from 60% to 90% while the average fidelity of chain alarm mining stays around 84%. The results show that our approach is promising for trivial alarm identification, chain alarm mining, and root fault location in existing optical networks.
INDEX TERMS Alarm pre-processing, K-means, back propagation neural network, alarm compression, alarm correlation analysis, optical network
This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/.
This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI 10.1109/ACCESS.2019.2929872, IEEE Access
original alarms, the performance of the algorithm will be degraded significantly. Moreover, the original alarms from actual optical networks always suffer from several problems (i.e., information redundancy, time asynchrony, and ambiguous importance of alarm attributes, etc.), which are detrimental to alarm analysis and compression. Therefore, alarm pre-processing is required.

The common methods of quantitative evaluation of the alarm importance rely on experienced network experts to determine the relative importance of alarms [8-10]. Different network experts or researchers may derive different weights for the alarms in the same network. However, the alarm number rises to tens of thousands, and the network is becoming more and more dynamic. It becomes impossible to determine the relative importance of all alarms accurately by manual operation or experts alone. To address the problem of a huge number of alarms with various attributes and of uncertain importance in the dynamic optical network, an effective and objective method of alarm importance evaluation is necessary. In an optical network, an alarm usually contains many attributes, and each attribute having a wide range of values makes the alarm importance ambiguous. Therefore, it is necessary to give weights to these attributes and thus evaluate the alarm importance quantitatively. Recently, machine learning has been playing an increasingly important role in optical communication research, and has been applied in diverse areas such as predicting equipment failure in optical networks [11], reducing nonlinear phase noise [12,13], compensating physical impairments [14], monitoring optical performance [15,16], adaptive nonlinear decision at the receivers [17], adaptive demodulation [18], and traffic-aware bandwidth assignment [19].

In this paper, we propose the TKBW method for alarm pre-processing and alarm association rule mining. The time required for different faults to trigger a series of chain alarms is different and, therefore, we use the combined time series segmentation and time sliding window (TT) method to divide the original alarms into alarm sequences and thus extract the alarm transactions.

Given its present huge scale in trans-provincial backbones and metropolitan networks, there are a large number of original alarms in the existing optical network. Moreover, the original alarm data are facing the problems of information redundancy, time asynchrony, and the unclear IAAs. We use the TT method to divide all the original alarms into alarm sequences. Then the alarm synchronization, redundancy removal, and alarm transaction extraction are performed for each alarm sequence via a time sliding window. But there are many alarm attributes and their IAAs are unknown, so we use the KB method to evaluate the IAAs quantitatively. Firstly, the alarm attributes are quantized, and then the K-means algorithm is used to classify the alarms. Then, the back propagation neural network (BP-NN) algorithm is used to infer each IAA weight.

According to the network management logs in an existing provincial OTN, the original alarms usually contain the following attributes: equipment name, equipment type, equipment address, network element type, alarm level, alarm name, alarm type, alarm time, subnet, and so on. Among these attributes, the equipment name can be determined by the equipment address. The network element type and its subnet are unified within a certain network segment. Therefore, as shown in Fig. 1, to start with, we extract the six attributes, the equipment address (Ea), the alarm name (An), the alarm level (Al), the alarm type (At), the equipment type (Et) and the alarm time (t), to form a new alarm, which is marked as a 6-tuple {Ea, An, Al, At, Et, t}.

FIGURE 1. Extraction of the new alarm attributes {Ea, An, Al, At, Et, t} from the original alarm fields (Alarm_num, Equip_name, Equip_type, Equip_address, Alarm_level, Alarm_name, Alarm_time, Alarm_type, Interface_type, Subnet, Alarm_parameter).
SI(t) = Σ_{i=1}^{k} I(t_i) = Σ_{i=1}^{k} Σ_{t∈t_i} dist(t, c_i)    (2)

where c_i represents the midpoint of the time period t_i.

The inter-segment similarity function SO(t) is defined as the sum of the time intervals between the midpoints of the time periods, as follows:

SO(t) = Σ_{1≤i<j≤k} dist(c_i, c_j)    (3)

Then the sum of the squared error (SSE) is adopted as the objective function and also the index to measure the division of the time window, as given by Eq. (4):

SSE = Σ_{i=1}^{k} Σ_{t∈t_i} |t − c_i|²    (4)

where the optimal result is the one that obtains the minimal SSE. The main flow of the algorithm is attached as Algorithm I in the Appendix.

After the time segmenting, we use the time sliding window method [21] to perform time synchronization, redundancy removal, and alarm transaction extraction for the alarms in each time period. Time synchronization means that the alarms appearing in the same time window are regarded as concurrent alarms and are extracted into the same alarm transaction. In addition, if an alarm appears several times in a short interval, it is recorded only once in the same time window in order to eliminate redundant alarms. The alarm transaction refers to the collection of alarms appearing in the same time period.

Regarding the time sliding window method, we give the following definitions:

Definition 1. Time window. The given time window width is V, and the time window is used to slide from the beginning of a time period until the end of the time period.

Definition 2. The window slide step h, which is the length of each movement of the time window.

Figure 2 shows an example in which the total alarms {A, A, B, C, ..., C, B} are divided into several alarm sequences (e.g., D1, D2, ..., Dk) according to the time series similarity calculation. Then, given a window width V of 5 and a sliding step size h of 2, the time sliding window method is used to perform alarm transaction extraction for each alarm sequence. If, for example, alarms B(23) and B(27) occur in the same window, the alarm B is recorded only once. The times when the alarm occurs are recorded as 23 s and 27 s, which is expressed as B(23,27):1. Similarly, if alarms A(28) and A(28) are reported multiple times at the same time, alarm A is recorded once. The alarm occurs at 28 s, which is expressed as A(28):1.

FIGURE 2. Example of alarm transaction extraction using the time sliding window.

Table I gives an example of the alarm transactions extracted from the time period alarm sequence. The alarm transaction refers to the set of alarms collected in a given time window, such as {A, B, C}, {B, C, E}, {C, E, A, B}, {E, A, B}, {B, D}, and we treat the total alarm transactions as an alarm transaction database TD1 = {{A, B, C}, {B, C, E}, {C, E, A, B}, {E, A, B}, {B, D}}.

TABLE I
Example of the alarm transaction database TD1
Transaction Name | Items
T1 | A(1), B(3), C(5)
T2 | B(3), C(5), E(7)
T3 | C(5), E(7), A(8), B(9)
T4 | E(7), A(8), B(9)
T5 | B(9), D(10)

2) QUANTITATIVE EVALUATION OF ALARM IMPORTANCE
To evaluate the alarm importance quantitatively, it is necessary to assign IAA weights (Fig. 4).

We apply the KB method, where the alarm attributes are taken as the input and the alarms with similar attributes are classified into the same class by the K-means algorithm. Then both the alarm attributes and the classifications are taken as the input of the BP-NN to train the connection weights between the neurons, so that the connection weights map the information of the alarm attributes and thus yield the IAA weights.

According to the advice of experienced network administrators and by observation of the alarm sets and the fault sets in the actual network logs, we select three alarm attributes that are most closely related to the faults as the initial sample input, i.e., A = {Al, At, Et} in Fig. 1. Each attribute in the collection has several values that indicate the relative importance. Here we use the K-means algorithm to automatically classify similar alarm samples, and get the comprehensive alarm classifications after fully considering these three alarm attributes.

We rewrite Eq. (1) as Eq. (5), where x_i and x_j represent the attribute values, and D represents the number of attributes:

dist(x_i, x_j) = sqrt( Σ_{d=1}^{D} (x_{i,d} − x_{j,d})² )    (5)

All alarms are divided into K classes, denoted as C1, C2, ..., CK. Then, we take A = {Al, At, Et} as the input and the K alarm classes as the labels to train a stable BP-NN model, owing to the strong self-learning and fitting ability of the neural network. Thus, via the method in [22], the connection weights of the neurons are mapped to the IAA weights, as given by Eq. (6):

Feature_X = Σ_{Y∈Hidden} ω_{XY} · ω_{Y}    (6)

where X is the weight matrix between the input layer neurons and the hidden layer neurons, and Y is the weight matrix between the output layer neurons and the hidden layer neurons.

Finally, we give the score of the alarm importance as Eq. (7), where N is the number of inputs:

W = (1/N) (Feature_1 · Al + Feature_2 · At + Feature_3 · Et)    (7)

B. W-APRIORI BASED ALARM CORRELATION ANALYSIS
The score of the alarm importance is helpful for setting a threshold and thus discarding the trivial alarms and false alarms.
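The mapping from connection weights to an importance score can be sketched as follows. This is a minimal sketch: the weight values and the threshold are invented for illustration, and Eq. (6) is read here as the connection-weight method of [22], i.e., a sum over the hidden neurons of products of input-hidden and hidden-output weights.

```python
# Hypothetical trained BP-NN weights (values invented for illustration).
# X[i][j]: weight between input attribute i (Al, At, Et) and hidden neuron j;
# Y[j]:    weight between hidden neuron j and the output layer.
X = [[0.8, -0.2, 0.5],
     [0.1,  0.6, -0.4],
     [-0.3, 0.4,  0.2]]
Y = [0.7, -0.5, 0.9]

# Eq. (6), read as the connection-weight method of [22]: the importance of
# input attribute i is the sum over hidden neurons of the product of its
# input-hidden weight and the corresponding hidden-output weight.
feature = [sum(X[i][j] * Y[j] for j in range(len(Y))) for i in range(len(X))]

def alarm_score(al, at, et, n=3):
    # Eq. (7): importance score W of one alarm from its quantized attributes
    return (feature[0] * al + feature[1] * at + feature[2] * et) / n

# Quantized attributes of one alarm, e.g. Al=1 (emergency), At=2, Et=3
w = alarm_score(1, 2, 3)
THRESHOLD = 0.5   # example threshold: alarms scoring below it are discarded
print(w, w < THRESHOLD)
```

With real data, X and Y would come from the trained BP-NN of the KB step, and the threshold would be tuned against the operator's OAM requirements.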
For the remaining alarms, correlation analysis is still needed to obtain chain alarms.

The existing correlation analysis methods are not designed for optical networks and do not consider the difference in alarm importance. For example, the typical Apriori algorithm treats different alarms as equal, and there is no difference in the alarm attributes. However, in an actual optical network, an alarm usually consists of many attributes, and different attribute combinations indicate different alarm severities.

When mining the alarm correlation, we improve the Apriori algorithm into W-Apriori, in which the score of the alarm importance is used as the weight.

Given a database with a set of alarm transactions D = {T1, T2, ..., Tm}, Tj (j = 1, 2, ..., m) is the set of alarms collected within a time window, I = {i1, i2, ..., in} is the set of all alarms in the database, and each alarm transaction set Tj is a subset of I (i.e., Tj ⊂ I). The set of alarm scores corresponding to the alarm set I is W = {W1, W2, ..., Wn}, and Wj is the score of the alarm ij.

In traditional Apriori, an association rule is an expression of the form X⇒Y, where X and Y are disjoint item sets (i.e., X∩Y = Ø). The strength of the association rule is measured by its support and confidence. The support measures how frequently the rule occurs in the given data set, and the confidence determines how frequently Y appears in transactions that contain X. The W-Apriori algorithm redefines these two measures.

Definition 3. Weighted Support. The traditional support of a pattern X (X⊆I) is denoted as Support(X) = Count(X)/|D|, where Count(X) = |{Tj | X⊆Tj, Tj∈D}| is the number of transactions in which item set X appears in D, and |D| is the total number of transactions in D. Then the weighted support (wSupport) of X is defined as Eq. (8), where n is the number of items in X. Meanwhile, minwSup is used to represent the minimum weighted support threshold, which is used to evaluate the minimum limit of the transaction frequency:

wSupport(X) = Support(X) · ( Σ_{i_j∈X, j=1}^{n} W_j ) / n    (8)

Definition 4. Weighted Confidence. The traditional confidence of a rule X⇒Y is Confidence(X⇒Y) = Support(X∪Y)/Support(X). Then the weighted confidence (wConfidence) of a rule X⇒Y is defined as Eq. (9), where n is the number of items in the union of X and Y, and m is the number of items in X. Meanwhile, minwConf is used to represent the minimum weighted confidence threshold, which is used to evaluate the minimum limit of the transaction association:

wConfidence(X⇒Y) = [ Support(X∪Y) · ( Σ_{i_j∈X∪Y, j=1}^{n} W_j ) / n ] / [ Support(X) · ( Σ_{i_j∈X, j=1}^{m} W_j ) / m ]    (9)

Definition 5. Weighted Frequent Itemsets. Given a database D and a weighted support threshold minwSup, if a pattern X satisfies wSupport(X) ≥ minwSup, then X is a weighted frequent itemset.

Therefore, the purpose of the W-Apriori algorithm is to find all the association rules whose wSupport and wConfidence satisfy the conditions wSupport ≥ minwSup and wConfidence ≥ minwConf in the given alarm database D, respectively. The description of the W-Apriori algorithm is given as Algorithm II (see the Appendix).
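The definitions above can be exercised on the database TD1 of Table I. The sketch below is one plausible reading of Eq. (8), support scaled by the mean score of the items; with all scores set to 1 it reduces to plain support, and mining 2-itemsets at minwSup = 50% then recovers the chain alarms {A, B}, {B, C}, {B, E} of the Figure 3 example.

```python
from itertools import combinations

# Alarm transaction database TD1 from Table I
TD1 = [{"A", "B", "C"}, {"B", "C", "E"}, {"C", "E", "A", "B"},
       {"E", "A", "B"}, {"B", "D"}]
# Importance scores W_j; unit scores here, so wSupport equals plain support
W = {"A": 1.0, "B": 1.0, "C": 1.0, "D": 1.0, "E": 1.0}

def support(x, db):
    # Definition 3: fraction of transactions that contain itemset x
    return sum(1 for t in db if x <= t) / len(db)

def wsupport(x, db, w):
    # Eq. (8), read as support scaled by the mean score of the items in x
    return support(x, db) * sum(w[i] for i in x) / len(x)

min_wsup = 0.5
items = sorted(set().union(*TD1))
frequent2 = {frozenset(p) for p in combinations(items, 2)
             if wsupport(set(p), TD1, W) >= min_wsup}
print(frequent2)
```

With non-unit scores, itemsets built from low-importance alarms fall below minwSup even when they are frequent, which is exactly the effect W-Apriori is after.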
Figure 3 is an example to illustrate the implementation of the W-Apriori algorithm. We use the alarm transaction database TD1 given in Table I, which contains 5 alarm transaction sets: {A, B, C}, {B, C, E}, {C, E, A, B}, {E, A, B}, {B, D}, where minwSup = 50% is used as the minimum support threshold. The W-Apriori algorithm uses an iterative, layer-by-layer search strategy: in the k-th cycle, the frequent k-itemsets are generated through a combination of the transaction database and the candidate k-itemsets, and then the new candidate (k+1)-itemsets are generated based on the k-itemsets. The algorithm stops when the maximal itemset of a cycle is empty. Finally, we filter out the high-frequency chain alarms {A, B}, {B, C}, and {B, E}.

C. IMPLEMENTATION OF TKBW ALGORITHM
The application scenario of the alarm analysis scheme is shown in Fig. 4. In the OTN, the alarm data in the optical layer are collected through the network management system or the controller in a software defined optical network (SDON), and then the TKBW method is used for alarm analysis.
Figure 5 illustrates the overall principle of the TKBW method. TKBW consists of two modules, i.e., an alarm pre-processing module and an alarm correlation analysis module. The alarm pre-processing module extracts the alarm transactions, evaluates the alarm weights, and converts the alarm data into alarm transactions suitable for correlation analysis. The alarm correlation analysis module is used to mine the association rules from the alarm transactions and thus find out the chain alarms. The procedure of the TKBW method is as follows.

Step 1. Collect the alarms and select the useful alarm attributes;
Step 2. Extract the alarm transactions by using the TT method;
Step 3. Score the alarm importance by using the KB method;
Step 4. Analyze the alarm correlation with the W-Apriori algorithm and find out the chain alarms.

III. EXPERIMENTS AND DISCUSSIONS
Experiments were conducted with the alarm data from the network log of optical layer equipment in a provincial backbone of China Telecom that contains 441 OTN nodes. We collected 5,100,000 original alarms within 30 days. The proposed TKBW method was developed in Python and implemented on a computer with the Windows 7 operating system and an Intel(R) Core(TM) i5-4345 processor with 2 GB main memory.
FIGURE 6. Time consumption of extracting alarm transactions by TT method and UT method, versus the alarm transaction number.

B. QUANTITATIVE ANALYSIS OF ALARM IMPORTANCE
In step 3, according to the advice of experienced network administrators, we selected three alarm attributes that are the most closely related to the faults, namely, alarm level Al, alarm type At, and alarm equipment type Et. These three attributes were initialized. For example, for the alarm level Al, we initialized emergency as 1, importance as 2, secondary as 3, and prompt as 4. Then, the alarm transactions with initialized attributes were taken as the input of the K-means algorithm, where the alarms with similar attributes were divided into the same class. As shown in Fig. 7, after about 15 iterations, the mean squared error of classification converged to 1.5, indicating that the K-means algorithm is trained.

FIGURE 8. Connection weights vary with the iteration epoch of BP-NN.

In the above, we marked the output classifications (C1 to C4) by K-means as four different values, but these values are in fact only tags of the classes and they are not necessarily the only representations. Therefore, there are 24 possible combinations of the quantized class values.
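The attribute initialization described in this subsection, together with the distance of Eq. (5), can be sketched as follows. Only the Al mapping (emergency 1, importance 2, secondary 3, prompt 4) is taken from the text; the quantizations of At and Et and the sample alarms are assumptions for illustration.

```python
import math

# Quantization of the alarm level Al, as given in the text
AL = {"emergency": 1, "importance": 2, "secondary": 3, "prompt": 4}

def quantize(alarm, at_map, et_map):
    # Turn one alarm into the numeric sample (Al, At, Et) fed to K-means
    return (AL[alarm["Al"]], at_map[alarm["At"]], et_map[alarm["Et"]])

def dist(xi, xj):
    # Eq. (5): Euclidean distance over the D quantized attributes
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(xi, xj)))

# Hypothetical quantizations of alarm type and equipment type
at_map = {"communication": 1, "equipment": 2, "processing": 3}
et_map = {"OTU": 1, "ODU": 2, "OSC": 3}

a1 = quantize({"Al": "emergency", "At": "communication", "Et": "OTU"},
              at_map, et_map)
a2 = quantize({"Al": "secondary", "At": "equipment", "Et": "OTU"},
              at_map, et_map)
print(a1, a2, dist(a1, a2))
```

K-means then clusters these 3-dimensional samples under exactly this distance; since the class tags are arbitrary, any of the permutations of the tag values should lead to the same IAA weights, which is what the permutation experiment checks.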
FIGURE 9. IAA weights for different quantitative combinations.

C. ALARM CORRELATION ANALYSIS AND ALARM COMPRESSION
After the preliminary alarm compression, we analyzed the correlation among the remaining alarms and explored ways to compress the alarms in terms of the alarm compression rate and the fidelity.

First, the time consumptions of the traditional Apriori and the modified W-Apriori were compared, as shown in Fig. 10, where the number of alarm transaction sets is 100,000 and minwSup changes from 0.1 to 0.6. As minwSup increases, the time consumptions of both algorithms decrease gradually. The main reason is that when the threshold minwSup is small, more alarm transaction sets need to be processed. In addition, the W-Apriori takes less time than the Apriori does.

FIGURE 11. Time consumption varies with alarm number.

Figure 12 presents the comparison of the compressed alarm collection numbers of both algorithms, with various minwSups and with a fixed alarm transaction number of 10,000. To some extent, the same minwSup means that the same compression fidelity can be obtained. Fig. 12 indicates that, with the same compression fidelity, W-Apriori works more effectively and leads to fewer alarm collections.
FIGURE 12. Compressed alarm collection number of W-Apriori and Apriori.

When minwSup increases from 0.1 to 0.2 and then to 0.3, the alarm compression rate rises from 67% to 82% and then to 89%, respectively. That is, a larger minwSup yields a higher compression rate. In other words, with a larger minwSup, more alarms will be removed as false alarms. Moreover, given a fixed minwSup, the compression rate remains almost stable, independent of both the training samples in the first 20 days and the application samples in the last 10 days, which indicates that the W-Apriori works stably and has little deviation between the training samples and the application samples.
60%, which means that in this case the alarm compression is not very believable.

Moreover, from Figs. 14 and 15, it can be seen that when minwSup is 0.2, the obtained alarm compression rate is 15% larger than when minwSup is 0.1, which indicates that when minwSup is 0.2, the original alarms can be compressed to a larger extent while a relatively high compression fidelity is kept. Therefore, considering both the alarm compression rate and the fidelity, it may be appropriate to select the minwSup threshold as 0.2. However, in an actual optical network, the minwSup threshold should be set to a particular optimal range of values according to the network scale. If the threshold is set too small, many redundant alarms (e.g., false alarms) will be retained, which is not conducive to the subsequent alarm correlation analysis and alarm compression; if the threshold is set too large, some important alarms (e.g., substantial alarms) will be removed. Therefore, in practical operation, it is necessary to set the threshold according to the actual OAM requirements and operational effects.

FIGURE 15. Compression fidelity with different minwSups.

According to the experimental results, the TKBW method can effectively perform alarm compression, alarm correlation analysis, and chain alarm mining, implementing the compression of alarms and obtaining the high-frequency chain alarms. This means that we can use the high-frequency chain alarms to locate the root fault nodes. Moreover, the method is not complicated to implement. Therefore, the proposed method is easy to set up in the controller. However, since the experimental data have come from the actual alarm monitoring records in the optical layer of a provincial backbone of China Telecom, the number of alarm attributes of the fiber layer equipment is small and the attributes are relatively coarse. We cannot fully guarantee that our method is applicable to all OTNs. If there were more data from different OTNs, a more complete analysis method could be proposed and the universality of the method could be verified. In addition, we extracted six important and valid attributes in the experiment. If we change the number of alarm attributes (e.g., reduce the number of alarm attributes) and repeat the experiment, whether we get the same results will require further testing and analysis. Whatever the outcome, the proposed scheme offers a promising model, which is interesting and needs further research.

IV. CONCLUSION
To deal with the problem of coping with the huge number of alarms in OTNs, we have proposed the TKBW method, which offers alarm compression and high-frequency chain alarm mining. By taking actual data of network logs from China Telecom, we have conducted experiments, the results of which show that the TKBW method is able to give a score to indicate the importance of each alarm, and that the score is independent of the initial quantization values. In addition, the TKBW method is able to mine the chain alarms and compress the alarms to a rate on demand by adjusting parameters such as minwSup. The results are promising and helpful for false alarm identification and root failure location.

APPENDIX
ALGORITHM I
Time Division Method based on Similarity of Time Series
Input: k: the number of time segments
       n: database containing n alarm events
Output: k time intervals with the minimum sum of squared errors
1: Select initial cluster center points for the k time segments;
2: for (j = 1; j <= n; j++) {
3:   Assign alarm event t_j to the time segment with the nearest center c_i;
4: }
5: Update the center c_i of each time segment;
6: Compute SSE = Σ_{i=1}^{k} Σ_{t∈t_i} |t − c_i|²;
7: Repeat until the SSE converges.

ALGORITHM II
W-Apriori Algorithm
Ck: candidate itemsets of size k
Lk: frequent itemsets of size k
Input: D: transaction database
       minwSup: minimum weighted support threshold
Output: L: frequent itemsets of D
1: L1 = FindFrequent_1_itemsets(D);
2: // generate the frequent 1-itemsets
3: for (k = 2; Lk-1 ≠ ∅; k++)
4:   Ck = GenerateCandidates(Lk-1, minwSup);
5:   // generate the new candidate itemsets
6:   for each transaction t ∈ D
7:     Ct = subset(Ck, t);
8:     // find out all candidate k-itemsets contained in transaction t
9:   end for
10:  Lk = {c ∈ Ck | wSupport(c) ≥ minwSup};
11: end for
12: return L = ∪k Lk;

ACKNOWLEDGMENT
We would like to acknowledge the China Telecom Corporation for providing the network data, and the
support from the Fundamental Research Funds for the Central Universities (2019RC12).

REFERENCES
[1] N. Amani, M. Fathi, and M. Dehghan, "A case-based reasoning method for alarm filtering and correlation in telecommunication networks," in Proceedings of the Canadian Conference on Electrical and Computer Engineering, 2005, pp. 2182-2186.
[2] M. Klemettinen, H. Mannila, and H. Toivonen, "Rule discovery in telecommunication alarm data," Journal of Network and Systems Management, 7(4): 395-423, 1999.
[3] J. Wu and X. Li, "An effective mining algorithm for weighted association rules in communication networks," Journal of Computers, 3(10), 2008.
[4] L. Yan and C. Li, "Incorporating pageview weight into an association-rule-based web recommendation system," in Australian Conference on Artificial Intelligence, 2006, pp. 577-586.
[5] R. Agrawal and R. Srikant, "Fast algorithms for mining association rules," in Proceedings of the 20th VLDB Conference, 1994, pp. 487-499.
[6] G. Grahne and J. Zhu, "High performance mining of maximal frequent itemsets," in Proceedings of the Sixth SIAM International Workshop on High Performance Data Mining, 2003, pp. 135-143.
[7] T. Wang and P. L. He, "Database encoding and an anti-Apriori algorithm for association rules mining," in Proceedings of the Fifth International Conference on Machine Learning and Cybernetics, 2006, pp. 1195-1198.
[8] J. Han, J. Pei, Y. Yin, and R. Mao, "Mining frequent patterns without candidate generation: a frequent-pattern tree approach," Data Mining and Knowledge Discovery, 8(1): 3-87, 2004.
[9] J. W. Han and Y. J. Fu, "Discovery of multiple-level association rules from large databases," in Proceedings of the 1995 International Conference on Very Large Data Bases, pp. 420-431.
[10] C. H. Cai, A. W. C. Fu, C. H. Cheng, and W. W. Kwong, "Mining association rules with weighted items," in Proceedings of the IEEE International Database Engineering and Applications Symposium, 1998, pp. 68-77.
[11] Z. Wang, M. Zhang, D. Wang, C. Song, M. Liu, J. Li, L. Lou, and Z. Liu, "Failure prediction using machine learning and time series in optical network," Opt. Express, 25: 18553-18565, 2017.
[12] D. Wang, M. Zhang, Z. Li, Y. Cui, J. Liu, Y. Yang, and H. Wang, "Nonlinear decision boundary created by a machine learning-based classifier to mitigate nonlinear phase noise," in Proceedings of the European Conference on Optical Communication (ECOC 2015), pp. 1-3.
[13] D. Wang, M. Zhang, Z. Li, C. Song, M. Fu, J. Li, and X. Chen, "System impairment compensation in coherent optical communications by using a bio-inspired detector based on artificial neural network and genetic algorithm," Opt. Commun., 399: 1-12, 2017.
[14] D. Wang, M. Zhang, M. Fu, Z. Cai, Z. Li, H. Han, Y. Cui, and B. Luo, "Nonlinearity mitigation using a machine learning detector based on k-nearest neighbors," IEEE Photonics Technol. Lett., 28(19): 2102-2105, 2016.
[15] D. Wang, M. Zhang, J. Li, Y. Xin, J. Li, M. Wang, and X. Chen, "Intelligent optical spectrum analyzer using support vector machine," in IEEE Photonics Society Summer Topical Meeting Series (SUM), 2018, pp. 239-240.
[16] D. Wang, M. Zhang, J. Li, Z. Li, J. Li, C. Song, and X. Chen, "Intelligent constellation diagram analyzer using convolutional neural network-based deep learning," Opt. Express, 25(15): 17150-17166, 2017.
[17] Y. Yuan, M. Zhang, P. Luo, Z. Ghassemlooy, D. Wang, X. Tang, and D. Han, "SVM detection for superposed pulse amplitude modulation in visible light communications," in Proceedings of Communication Systems, Networks and Digital Signal Processing (CSNDSP 2016), pp. 1-5.
[18] J. Li, M. Zhang, and D. Wang, "Adaptive demodulator using machine learning for orbital angular momentum shift keying," IEEE Photon. Technol. Lett., 25: 1455-1458, 2017.
[19] C. Song, M. Zhang, X. Huang, Y. Zhan, D. Wang, M. Liu, and Y. Rong, "Machine learning enabling traffic-aware dynamic slicing for 5G optical transport networks," in Conference on Lasers and Electro-Optics (CLEO), 2018, pp. 1-2.
[20] E. Keogh, S. Chu, D. Hart, and M. Pazzani, "Segmenting time series: A survey and novel approach," Work, 57: 1-21, 2003.
[21] M. Hauptmann, J. H. Lubin, P. Rosenberg, et al., "The use of sliding time windows for the exploratory analysis of temporal effects of smoking histories on lung cancer risk," Statistics in Medicine, 19(1): 2185-2194, 2000.
[22] J. D. Olden, M. K. Joy, and R. G., "An accurate comparison of methods for quantifying variable importance in artificial neural networks using simulated data," Ecological Modelling, 178(3): 389-397, 2004.