
[Figure]

A. Distributed testing scheme (Ts1–Ts4). The 𝑘th data stream 𝐵𝑘 arrives without labels; the ensemble network (𝐸𝑁), consisting of 𝑀 base classifiers ℱ1, …, ℱ𝑚, …, ℱ𝑀 and their voting weights 𝛽1, …, 𝛽𝑀, is used to infer 𝐵𝑘. Ts1 (data partition): 𝐵𝑘 is split into 𝑃 data groups 𝐵𝑘^1, …, 𝐵𝑘^𝑃 across 𝑁0 computing nodes. Ts2 (distributed testing/inference): each data group is scored by a base classifier ℱ𝑖, giving the 𝑃 inference outputs ℱ𝑖(𝐵𝑘^𝑝). Ts3 (concatenation of the 𝑃 outputs, repeated over 𝑀 loops): the per-group outputs are concatenated into the single-base-classifier predictions ℱ1(𝐵𝑘), ℱ2(𝐵𝑘), …, ℱ𝑀(𝐵𝑘). Ts4 (weight voting): the 𝑀 predictions are combined through the voting weights into the final ensemble output 𝐸𝑁𝑘−1(𝐵𝑘).
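As a compact illustration of panel A, the sketch below mimics the partition, inference, concatenation, and weight-voting pipeline in Python. It is a minimal sketch under stated assumptions rather than the paper's implementation: the base classifiers are assumed to expose a scikit-learn-style predict_proba, and the names distributed_test, ensemble, weights, and n_partitions are illustrative.

import numpy as np

def distributed_test(ensemble, weights, batch, n_partitions):
    """Panel A: weighted-voting inference over a partitioned, unlabeled batch."""
    # Ts1: split the k-th batch B_k into P data groups, one per computing node.
    partitions = np.array_split(batch, n_partitions)

    votes = None
    for clf, beta in zip(ensemble, weights):
        # Ts2: every node scores its own data group with base classifier F_i.
        per_node = [clf.predict_proba(part) for part in partitions]
        # Ts3: concatenating the P partial outputs restores F_i(B_k).
        full = np.concatenate(per_node, axis=0)
        # Ts4: weight voting accumulates each prediction scaled by beta_i.
        votes = beta * full if votes is None else votes + beta * full

    # Final ensemble output EN_{k-1}(B_k): the class with the largest weighted vote.
    return votes.argmax(axis=1)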
B. Distributed training scheme (Tr1–Tr4). Tr1 (data annotation and enrichment): the accumulated 𝑘th data stream 𝐵𝑘, carrying only limited labels, is processed by the 𝑫𝑨𝟑 module into the annotated, augmented, and labeled data 𝐵𝑘′. Tr2 (data partition using the Spark platform): 𝐵𝑘′ is split into 𝑃 data groups 𝐵𝑘′^1, …, 𝐵𝑘′^𝑃 across 𝑁0 computing nodes. Tr3 (distributed training): each node trains with ℱ𝑤𝑖𝑛, the winning base classifier, producing the 𝑃 local sub-models ℒ1, …, ℒ𝑃. Tr4 (model fusion): the sub-models are aggregated into ℒ𝐴𝐺𝐺, i.e., the new base classifier ℱ𝑛𝑒𝑤. ℱ𝑛𝑒𝑤 then modifies the ensemble network (𝐸𝑁): if drift occurs, ℱ𝑛𝑒𝑤 is stacked as a new member of 𝐸𝑁; otherwise, ℱ𝑛𝑒𝑤 replaces the winning base classifier ℱ𝑤𝑖𝑛.
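Panel B admits a similarly short sketch of the train-then-fuse step. The fusion rule here is plain averaging of per-partition parameter vectors, a deliberately simple stand-in for the paper's model fusion module; get_param_vector and set_param_vector are hypothetical accessors, and all names are illustrative.

import copy
import numpy as np

def distributed_train(f_win, X_labeled, y_labeled, n_partitions):
    """Panel B: train P local sub-models from F_win and fuse them into F_new."""
    # Tr2: partition the annotated batch B'_k into P data groups.
    X_parts = np.array_split(X_labeled, n_partitions)
    y_parts = np.array_split(y_labeled, n_partitions)

    # Tr3: each node trains its own copy of the winning classifier F_win,
    # yielding the local sub-models L_1, ..., L_P.
    sub_models = []
    for X_p, y_p in zip(X_parts, y_parts):
        local = copy.deepcopy(f_win)
        local.fit(X_p, y_p)
        sub_models.append(local)

    # Tr4: model fusion into the aggregated model L_AGG, i.e. F_new.
    # Parameter averaging is illustrative only; get_param_vector and
    # set_param_vector are hypothetical accessors, not the paper's API.
    f_new = copy.deepcopy(f_win)
    avg = np.mean([m.get_param_vector() for m in sub_models], axis=0)
    f_new.set_param_vector(avg)
    return f_new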

C. WeScatterNet's large-scale distributed prequential test-then-train scenario in the classification task, over the data stream 𝐵 = 𝐵1, 𝐵2, 𝐵3, …, 𝐵𝑘, …, 𝐵𝐾, …; 1 ≤ 𝑘 ≤ 𝐾. In the testing phase, 𝐵𝑘 is partitioned and inferred by the 𝑀 base classifiers (𝑀 loops, concatenating the partition outputs of each base classifier ℱ𝑚); the ensemble output 𝐸𝑁𝑘 is obtained through the voting weights; each voting weight 𝛽 is updated from 𝛽(𝑘−1) based on the one sigma rule assessment and then normalized; the base learner pruning mechanism may remove members, and if any base classifiers are removed, 𝐸𝑁 is updated; the winning base classifier ℱ𝑤𝑖𝑛 is the member with the highest voting weight; global drift detection uses information from the winning base classifier's output, with pseudolabels taken from the ensemble output. In the training phase, data annotation and enrichment is performed by the 𝑫𝑨𝟑 module on 𝐵𝑘; the data are partitioned using the Spark platform; distributed training uses the winning base classifier ℱ𝑤𝑖𝑛; model fusion creates the new base classifier ℱ𝑛𝑒𝑤; in the case of drift, ℱ𝑛𝑒𝑤 + 𝛽𝑛𝑒𝑤 is attached to 𝐸𝑁𝑘; the next prequential process then runs on 𝐵𝑘+1.

C.1. At the node level during testing (Fig. C.1), a base classifier ℱ𝑚 infers a data group of unlabelled data 𝐵𝑘^𝑃, producing ℱ𝑚(𝐵𝑘^𝑃), and each sample is assessed with the one sigma rule for the classifier weight update (penalty and reward mechanism).

C.2. At the node level during training (Fig. C.2), the base learner's learning (evolving) scheme applies local drift handling (rule growing and pruning) and FWRLS (consequent parameter estimation) while the winning model ℱ𝑤𝑖𝑛 fits a data group 𝐵𝑘′^𝑃, producing ℱ𝑤𝑖𝑛(𝐵𝑘′^𝑃) and a partition model ℒ𝑝 to be merged.

In summary: (A) distributed testing scheme; (B) distributed training scheme; (C) prequential test-then-train scenario of WeScatterNet, which makes use of the distributed training and testing schemes.
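Combining the two sketches above gives a rough picture of panel C's control flow: test first, update and normalize the voting weights, train on the now-annotated batch, then either stack the new classifier on drift or replace the winner. The drift test and the one sigma rule are reduced to placeholders here (drift_detected and a simple accuracy-driven reward/penalty factor); all names are illustrative, and the sketch reuses distributed_test and distributed_train from above.

import numpy as np

def prequential_loop(stream, ensemble, weights, drift_detected, n_partitions):
    """Panel C: prequential test-then-train over the stream B_1, B_2, ..., B_K."""
    for X_k, y_k in stream:                    # the k-th batch B_k
        # Test first (panel A): distributed inference and ensemble output.
        y_hat = distributed_test(ensemble, weights, X_k, n_partitions)

        # Reward/penalty update of every voting weight (a stand-in for the
        # one sigma rule), followed by voting-weight normalization.
        for i, clf in enumerate(ensemble):
            acc = float(np.mean(clf.predict(X_k) == y_k))
            weights[i] *= 0.5 + 0.5 * acc
        total = sum(weights)
        weights[:] = [w / total for w in weights]

        # Then train (panel B): the winner is the highest-weighted member.
        win = int(np.argmax(weights))
        f_new = distributed_train(ensemble[win], X_k, y_k, n_partitions)

        if drift_detected(y_hat, y_k):
            ensemble.append(f_new)             # stack F_new as a new EN member
            weights.append(1.0 / len(ensemble))
        else:
            ensemble[win] = f_new              # F_new replaces F_win

The pruning of low-weight members shown in the figure is omitted here for brevity.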
