TIBCO whitepaper | 3
Manufacturing Defects
In manufacturing, finding anomalies manually is a laborious,
reactive process, and building a machine-learning model for
each part of the system is difficult. Therefore, some companies
continuously monitor sensor data from manufactured
components with an autoencoder model. As the model scores
new data, defects (anomalies) are detected quickly and
preventive action can be taken.
Other Examples
Beyond the most common use cases already described,
anomaly detection can be applied across a wide variety of
other industries.
Examples of Unsupervised
Learning Algorithms
Autoencoders
Autoencoders are unsupervised neural networks trained to
replicate their input, but only approximately; a model that
could replicate any input exactly would learn nothing useful
to apply to new data. The approximation is enforced by
restricting the number of nodes in the hidden layers. When
the model scores new data, it produces a reconstruction
error, defined as the difference between the model's output
and the new input data. The higher the reconstruction error,
the more likely the data point is an anomaly.
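As an illustrative sketch (not TIBCO's implementation), the scoring idea above can be shown with the simplest possible autoencoder: a linear one with a one-node bottleneck, which can be computed in closed form with an SVD rather than trained. The data and variable names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical "normal" training data: 2-D points near the line y = 2x
t = rng.normal(size=(200, 1))
X_train = np.hstack([t, 2 * t]) + 0.05 * rng.normal(size=(200, 2))

# Linear autoencoder with a 1-node bottleneck, solved in closed form via SVD
mean = X_train.mean(axis=0)
_, _, Vt = np.linalg.svd(X_train - mean, full_matrices=False)
W = Vt[:1]  # the single "hidden node" direction

def reconstruction_error(X):
    Z = (X - mean) @ W.T   # encode: project onto the bottleneck
    X_hat = Z @ W + mean   # decode: map back to the input space
    return np.linalg.norm(X - X_hat, axis=1)

normal_point = np.array([[1.0, 2.0]])   # lies on the learned structure
anomaly = np.array([[2.0, -1.0]])       # far off the learned structure
```

The anomaly's reconstruction error is large because the bottleneck cannot represent directions the training data never exhibited; in practice a threshold on this score (for example, a high percentile of training-set errors) separates normal points from anomalies.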
Clustering
In this technique, the analyst assigns each data point to
one of a pre-defined number of clusters by minimizing the
within-cluster variance. Models such as K-means clustering,
K-nearest neighbors, and others are used for this purpose.
A K-means or a K-NN model serves the purpose effectively
because data points that do not resemble the normal data
end up far from every cluster center, effectively in a cluster
of their own. In general, this technique is most useful when
the data is well separated into natural clusters, a requirement
that should be checked.
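A minimal sketch of the clustering approach, assuming hypothetical 2-D sensor data with two well-separated normal clusters: fit k-means (Lloyd's algorithm, written out here rather than taken from a library), then score new points by their distance to the nearest centroid.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical "normal" data: two well-separated clusters
X = np.vstack([rng.normal(loc=[0.0, 0.0], scale=0.3, size=(100, 2)),
               rng.normal(loc=[5.0, 5.0], scale=0.3, size=(100, 2))])

# Lloyd's algorithm for k-means with k = 2; for reproducibility,
# seed one centroid from each cluster rather than picking at random
k = 2
centroids = X[[0, 100]].copy()
for _ in range(20):
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    centroids = np.array([X[labels == i].mean(axis=0) for i in range(k)])

def anomaly_score(points):
    """Distance to the nearest cluster centroid; large = suspicious."""
    d = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
    return d.min(axis=1)

# One possible (assumed) cutoff: the worst score seen on training data
threshold = anomaly_score(X).max()
```

A point falling between the two clusters scores well above this threshold, while points near either cluster center score below it; the choice of cutoff is a judgment call that depends on the tolerable false-alarm rate.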
Autoencoders Explained
Autoencoders are unsupervised neural networks that are
both similar to and different from a traditional feed-forward
neural network. They are similar in that they use the same
principles (for example, backpropagation) to build a model.
They are different in that they do not need a labeled dataset
containing a target variable. Instead, they take a training
dataset and attempt to reproduce it at the output while
passing it through restricted hidden layers/nodes.
[Figure: A feed-forward autoencoder hW,b(X). Input layer L1 (x1-x6 plus a +1 bias node) feeds hidden layer L2 (plus a +1 bias node), which feeds output layer L3 (x1-x6). The L1-to-L2 half is the encoder; the L2-to-L3 half is the decoder.]
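To make the description concrete, here is a toy autoencoder of the same shape as the figure (input, one restricted hidden layer, output), trained by backpropagation written out by hand. The data, layer sizes, and learning rate are all hypothetical, chosen only so the example runs in a few seconds; this is a sketch, not a production training loop.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical training set: 4-D points on a 2-D subspace
# (features 3 and 4 duplicate features 1 and 2)
t = 0.5 * rng.normal(size=(256, 2))
X = np.hstack([t, t])

n_in, n_hid = 4, 2                      # 2 hidden nodes = the bottleneck
W1 = rng.normal(scale=0.1, size=(n_in, n_hid)); b1 = np.zeros(n_hid)
W2 = rng.normal(scale=0.1, size=(n_hid, n_in)); b2 = np.zeros(n_in)

lr = 0.05
for _ in range(5000):
    H = np.tanh(X @ W1 + b1)            # encoder (layer L2)
    X_hat = H @ W2 + b2                 # decoder (layer L3, linear output)
    err = X_hat - X                     # reconstruction residual
    # Backpropagate the mean-squared reconstruction error
    gW2 = H.T @ err / len(X); gb2 = err.mean(axis=0)
    dH = (err @ W2.T) * (1.0 - H ** 2)  # chain rule through tanh
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

def reconstruction_error(x):
    h = np.tanh(x @ W1 + b1)
    return np.linalg.norm(h @ W2 + b2 - x, axis=-1)
```

After training, a point that respects the learned structure (features 3 and 4 matching features 1 and 2) reconstructs well, while a point that violates it reconstructs poorly and is flagged as an anomaly by its higher reconstruction error.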
Global Headquarters
3307 Hillview Avenue
Palo Alto, CA 94304
+1 650-846-1000 TEL
+1 800-420-8450
+1 650-846-1005 FAX
www.tibco.com

TIBCO fuels digital business by enabling better decisions and faster, smarter actions through the TIBCO Connected Intelligence Cloud. From APIs and systems to devices and people, we interconnect everything, capture data in real time wherever it is, and augment the intelligence of your business through analytical insights. Thousands of customers around the globe rely on us to build compelling experiences, energize operations, and propel innovation. Learn how TIBCO makes digital smarter at www.tibco.com.

©2019, TIBCO Software Inc. All rights reserved. TIBCO and the TIBCO logo are trademarks or registered trademarks of TIBCO Software Inc. or its subsidiaries in the United States and/or other countries. All other product and company names and marks in this document are the property of their respective owners and mentioned for identification purposes only.

07Nov2019