
UNIVERSIDAD DE CHILE

FACULTAD DE CIENCIAS FÍSICAS Y MATEMÁTICAS


DEPARTAMENTO DE INGENIERÍA INDUSTRIAL
DEPARTAMENTO DE CIENCIAS DE LA COMPUTACIÓN

PHISHING CLASSIFICATION USING ADVERSARIAL DATA MINING
AND GAMES WITH INCOMPLETE INFORMATION

THESIS SUBMITTED FOR THE DEGREE OF
MASTER IN OPERATIONS MANAGEMENT

DISSERTATION SUBMITTED FOR THE PROFESSIONAL TITLE OF
CIVIL ENGINEER IN COMPUTER SCIENCE

GASTÓN ANDRÉS LHUILLIER CHAPARRO

ADVISOR:
RICHARD WEBER HAAS

COMMITTEE MEMBERS:
ALEJANDRO HEVIA ANGULO
NICOLÁS FIGUEROA GONZÁLEZ
SEBASTIÁN RÍOS PÉREZ

SANTIAGO, CHILE
MAY 2010
If you know the enemy and know yourself,
you need not fear the result of a hundred battles.
If you know yourself but not the enemy,
for every victory gained you will also suffer a defeat.
If you know neither the enemy nor yourself,
you will succumb in every battle.
Sun Tzu, The Art of War (600 B.C.) [78].
Acknowledgments

This work closes a stage that began in March 2002. After taking part as a student in several study programs (Biotechnology, Computer Science, and Industrial Engineering), and having had the pleasure of meeting many brave souls of Beauchef, the cycle ends with a solid formation in values that, together with all the relationships forged during this period, will be the foundation of my personal and professional future.

First, I want to thank Professor Richard Weber for his wisdom, advice, and unconditional support in the research carried out both for this thesis and for the other research topics we have developed together. I thank Alejandro Hevia for his comments on security and social engineering, and for his enthusiastic support; his perspective was key to defining the application on which this thesis was built. Many thanks to Nicolás Figueroa, whose comments from a microeconomic perspective and experience in game theory made it possible to define several theoretical components of this work. I also thank Sebastián Ríos, whose comments, advice, and ideas strengthened the development of this research.

Thanks to Meisy Ortega, my great companion and future wife, who since freshman year has stood by me in the good moments and, especially, in the hard ones. Thank you for all the patience and love you have given me during the seven years we have been together. Without her, none of this would have been possible.

Many thanks to everyone who took part in those long study sessions and parties in which a great friendship was forged, in particular my dear old friends from freshman year: José Luis Lobato, Fernanda Bravo, Sebastián Court, Rodrigo Taboada, Nicolás Diban, Francisco Aguilera, Paulina Herrera, and Daniel Garrido. From Biotechnology: Felipe Vásquez, Pablo Osorio, and Ornella Comminetti. From Computer Science: Felipe Bravo, Marcos Himmer, Iván Videla, Claudio Millán, Francisco Casals, Rodrigo Cánovas, Francisco Echeverría, Francisca Varela, Daniel Valenzuela, and in particular the XP-force team: José Miguel Vives, Sergio Pola, Carlos Reveco, Javier Campos, and Emilio Ponce. From Industrial Engineering: Cristian Bravo, Sebastián Maldonado, Víctor Rebolledo, Felipe Castro, Fernando Alarcón, all the members of the DOCODE team, and professor and friend Juan Velásquez.

Finally, I want to thank my parents, Gastón and Patricia, who throughout all these years gave me the opportunity to study and offered unconditional support for achieving my goals. Thanks to my siblings Nicole, Tomás, and Juan Pablo, who always remind me that things as simple as a good record, a good beer, sharing with those close to you, and laughing at life are the key to happiness.

Gastón A. LHuillier Ch.


May 2010

Executive Summary

Nowadays, email fraud has become a problem affecting global security and the economy, and its detection through traditional email filters has been recognized as ineffective. Although filters specific to this type of email have been developed, no studies have been presented that explicitly consider the adversarial behaviour of the messages to be classified. In general, in adversarial systems the quality of a classifier decreases as the adversary learns how to defeat it. To address this, adversarial data mining has recently been proposed as a solution, in which the interaction between an adversary and the classifier is defined through a game between two agents.

This thesis comprises the design and development of a methodology to classify fraudulent email messages considering their adversarial behaviour. The interaction between the fraudster and the classifier was modeled using dynamic games of incomplete information and adversarial data mining. On this basis, four new classification algorithms are presented, determined through an approximation of the sequential equilibrium for games with incomplete information. Each of them incrementally updates its parameters in order to improve its predictive power in an online learning environment.

According to the proposed methodology, it is necessary to consider the components that describe the interaction between the agents, such as their strategies, types, and utility functions. To determine them, it is necessary to define properties, establish assumptions, and analyse the available data associated with the application of interest. Different techniques, both qualitative and quantitative, can be used to define the strategy profiles, the types to consider, and the utility functions. These elements are, however, the exclusive responsibility of the modeler, and may vary significantly depending on the application.

The methodology presented in this work was applied to an email database with fraudulent and regular messages, frequently used by researchers of this type of fraud. To characterize the emails, latent semantic analysis techniques were used to strengthen the identification of elements close to social engineering, which is widely present in this type of fraud.

Regarding the experimental results, the proposed characterization method showed better classification performance than other characterization techniques found in the literature. In terms of the proposed classification algorithms, the experimental results indicate that the adversarial interaction between the agents is captured satisfactorily. Finally, the results obtained in the sensitivity analysis of the proposed algorithms support the robustness of the previous results.

This work opens the door to future challenges, mainly related to theoretical extensions of the proposed framework and to applications of the developed methodology in other fields. Moreover, this thesis defines a framework that can be adapted to the study of other complex interactions between adversarial agents.

Summary

Recently, fraud through malicious email messages has become a serious threat to global security and the economy, and traditional spam filtering techniques have proven ineffective against it. Although several classifiers have been developed for this type of fraud, none of them considers the adversarial behaviour of incoming messages. In general, the performance of a classifier in adversarial systems decreases after it is deployed, as the adversary learns to defeat it. Adversarial data mining has been introduced as a solution to this problem, in which classification is treated as a game between two agents.

This thesis presents the design and development of a methodology to classify fraudulent email messages considering their adversarial behaviour. The interaction between the classifier and the fraudster was modeled using dynamic games of incomplete information and adversarial data mining. On this basis, four new adversary-aware classification algorithms are introduced, based on equilibrium strategies approximated by sequential equilibria for incomplete information games. The proposed classifiers update their parameters incrementally in order to enhance their predictive power in an online learning environment.

The proposed methodology requires considering the components that describe the interaction between the agents, such as the strategies, types, and utility functions associated with each of them. To determine these components, properties must be defined, assumptions established, and the data available for a given application analysed. In this context, modeling and inference techniques can be used to determine strategy profiles, agent types, and utility functions. These elements are, however, the exclusive responsibility of the modeler, and may vary depending on the application domain.

The proposed methodology was applied to a well-known corpus of fraudulent and regular email messages, commonly used by researchers of this type of fraud. The extracted features were enhanced using latent semantic analysis, which captured the social engineering components commonly present in this type of fraud. Experimental results show that the proposed characterization outperforms previously published feature extraction methods for email fraud. In terms of the proposed classification algorithms, experimental results show that the adversarial interaction between agents is successfully captured. Finally, for each adversary-aware algorithm, the sensitivity analysis supports the robustness of the proposed models.

This work suggests future developments in both theoretical and applied extensions of the proposed framework. In terms of theory, the framework can be extended to more complex interactions between agents, whereas on the applied side, new developments can be proposed for other domain tasks where an adversarial environment can be modeled.

Contents

Acknowledgments i

Executive Summary iii

Summary iv

Contents v

List of Tables ix

List of Figures x

1 INTRODUCTION 1
1.1 General Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2.1 General Objective . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2.2 Specific Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.3 Methodology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.4 Thesis Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

2 PREVIOUS WORK 6
2.1 The Phishing Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2.1.1 Social Engineering and Email Fraud . . . . . . . . . . . . . . . . . . . . . . . 7
2.1.2 Phishing Countermeasures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.1.3 Machine Learning for the Phishing Problem . . . . . . . . . . . . . . . 10
2.2 Adversarial Machine Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11

2.2.1 Adversarial Classification Framework . . . . . . . . . . . . . . . . . . . . . . . 11


2.2.2 Adversarial Classification Extensions . . . . . . . . . . . . . . . . . . . . . . . 13
2.2.3 Other Game Theoretic Frameworks . . . . . . . . . . . . . . . . . . . . . . . . 16

3 ADVERSARIAL CLASSIFICATION USING SIGNALING GAMES 18


3.1 Problem Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
3.1.1 The Learning Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
3.1.2 The Strategic Interaction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
3.2 Sequential Equilibria in Adversarial Classification . . . . . . . . . . . . . . . . . . . . 22
3.3 Quantal Response Equilibrium Classifiers . . . . . . . . . . . . . . . . . . . . . . . . 24
3.3.1 Utility Function Modeling and Payoffs . . . . . . . . . . . . . . . . . . . . . . 25
3.3.2 Auxiliary Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
3.3.3 Static Quantal Response Equilibrium Classifier . . . . . . . . . . . . . . . . . 27
3.3.4 Incremental Quantal Response Equilibrium Classifier . . . . . . . . . . . . . . 28
3.3.5 Quantal Response Equilibrium Perceptron Classifier . . . . . . . . . . . . . . 29
3.4 Adversary Aware Online Support Vector Machines . . . . . . . . . . . . . . . . . . . 30
3.4.1 Utility Function Modeling and Payoffs . . . . . . . . . . . . . . . . . . . . . . 30
3.4.2 Classifier's Optimal Strategy . . . . . . . . . . . . . . . . . . . . . . . 33
3.4.3 AAO-SVM Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37

4 PHISHING FEATURES, STRATEGIES, AND TYPES 38


4.1 Corpus Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
4.1.1 Structural Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
4.2 Text-Mining Techniques for Feature Extraction . . . . . . . . . . . . . . . . . . . . . 39
4.2.1 Document Model Representation and Keyword Extraction . . . . . . . . . . . 41
4.2.2 Singular Value Decomposition . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
4.2.3 Probabilistic Topic Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
4.3 Feature Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
4.4 Adversarial Types Extraction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45

5 EXPERIMENTAL SETUP 46

5.1 Benchmark Algorithms for Batch Learning . . . . . . . . . . . . . . . . . . . . . . . . 47


5.1.1 Support Vector Machines Classifier . . . . . . . . . . . . . . . . . . . . . . . . 47
5.1.2 Naïve Bayes Classifier . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
5.1.3 Logistic Regression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
5.2 Benchmark Algorithms for Online Learning . . . . . . . . . . . . . . . . . . . . . . . 49
5.2.1 Relaxed Online Support Vector Machines . . . . . . . . . . . . . . . . . . . . 50
5.3 Experimental Setup for Feature Extraction and Selection . . . . . . . . . . . . . . . 50
5.4 Types Extraction Experimental Setup . . . . . . . . . . . . . . . . . . . . . . . . . . 51
5.5 Online Learning Classifiers Experimental Setup . . . . . . . . . . . . . . . . . . . . . 51
5.6 Evaluation Criteria . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
5.7 Sensitivity Analysis and Robust Testing . . . . . . . . . . . . . . . . . . . . . . . . . 53

6 RESULTS AND ANALYSIS 55


6.1 Feature Extraction and Selection Results . . . . . . . . . . . . . . . . . . . . . . . . . 55
6.1.1 Content-based and Topics Model Features . . . . . . . . . . . . . . . . . . . . 56
6.1.2 Feature Selection and Dimensionality Reduction . . . . . . . . . . . . . . . . 60
6.1.3 Feature Sets Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
6.2 Types Extraction Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
6.3 Online Classification Algorithms Performance . . . . . . . . . . . . . . . . . . . . . . 64
6.3.1 Proposed Classification Algorithms Performance . . . . . . . . . . . . . . . . 64
6.3.2 Benchmark Algorithms Performance . . . . . . . . . . . . . . . . . . . . . . . 65
6.4 Sensitivity Analysis and Robust Testing . . . . . . . . . . . . . . . . . . . . . . . . . 67
6.4.1 Static-QRE Sensitivity Analysis . . . . . . . . . . . . . . . . . . . . . . . . . 67
6.4.2 Incremental-QRE Sensitivity Analysis . . . . . . . . . . . . . . . . . . . . . . 68
6.4.3 QRE-Perceptron Sensitivity Analysis . . . . . . . . . . . . . . . . . . . . . . . 68
6.4.4 AAO-SVM Sensitivity Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . 69
6.4.5 AAO-SVM and I-QRE Incremental Parameter . . . . . . . . . . . . . . . . . 69

7 CONCLUSIONS AND FUTURE WORK 71


7.1 Main and Specific Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72

7.2 Game Theoretical Framework . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73


7.3 Phishing Filtering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
7.4 Future Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
7.4.1 Theoretical Extensions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
7.4.2 Applied Extensions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76

REFERENCES 79

Appendix A ALGORITHMS AND METHODS 87


A.1 Computing Sequential Equilibria . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
A.1.1 Quantal Response Equilibria . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
A.1.2 The Complexity of Sequential Equilibria . . . . . . . . . . . . . . . . . . . . . 89
A.1.3 The Beer-Quiche Game Sequential Equilibria Numerical Approximation . . . 89
A.1.4 The Adversarial Signaling Game Sequential Equilibria Numerical Approxi-
mation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
A.2 From Learning to Playing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
A.2.1 Confusion Matrix and Classification Learning . . . . . . . . . . . . . . . . . . 97
A.2.2 Complete Information Interaction . . . . . . . . . . . . . . . . . . . . . . . . 97
A.2.3 Incomplete Information Interaction . . . . . . . . . . . . . . . . . . . . . . . . 98
A.2.4 AAO-SVM Utility-based Example . . . . . . . . . . . . . . . . . . . . . . . . 98
A.3 K-Means Clustering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
A.4 Sequential Minimal Optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101

Appendix B PHISHING FEATURE EXTRACTION RESULTS 104


B.1 Email Keyword Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
B.2 Latent Dirichlet Allocation Features . . . . . . . . . . . . . . . . . . . . . . 107

List of Tables

Table 5.1 Confusion Matrix for binary classification problems. . . . . . . . . . . . . . . . 52

Table 6.1 Five most relevant stemmed words for each of the 13 clusters extracted with
the content-based keyword algorithm over the phishing corpus. . . . . . . . . . 57
Table 6.2 Ten most relevant stemmed words for five topics extracted by using the LDA
topic-model over the phishing corpus. . . . . . . . . . . . . . . . . . . . . . . . 58
Table 6.3 Experimental results for the benchmark batch machine learning algorithms. . . 62
Table 6.4 Frequency associated to phishing and ham emails for each cluster. . . . . . . . 64
Table 6.5 Experimental results for the benchmark online machine learning algorithms. . . 65

Table A.1 QRE approximation for the Beer-Quiche game. . . . . . . . . . . . . . . . . . 90


Table A.2 QRE approximation for the Adversarial Signaling game. . . . . . . . . . . . . 93
Table A.3 Cost Matrix for phishing email filtering. . . . . . . . . . . . . . . . . . . . . . . 97

Table A.4 Confusion matrix and payoffs in a complete information game for phishing
emails filtering. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98

Table A.5 Confusion matrix and payoffs for non-malicious messages in phishing filtering
with incomplete information. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98

Table A.6 Confusion matrix and payoffs for malicious messages in phishing filtering with
incomplete information. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99

Table B.1 Top 15 keywords extracted for k = 13 clusters. . . . . . . . . . . . . . . . . . . 104


Table B.2 30 words with highest probability to appear in a document, given a topic z
and the model parameters (25 topics). . . . . . . . . . . . . . . . . . . . . . 107

List of Figures

Figure 2.1 The spam game [3]. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17

Figure 3.1 Qualitative relationship between proposed algorithms, based on the strategic
interaction between agents, the incremental interaction, and the level of com-
putational intelligence used for the game modeling. . . . . . . . . . . . . . . . 19
Figure 3.2 Dynamic interaction between the Classifier and the Adversary. . . . . . 20
Figure 3.3 Extensive-form representation of the signaling game between the Classifier
and the Adversary. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22

Figure 4.1 Feature extraction flow diagram process for phishing messages. . . . . . . . . . 40

Figure 6.1 F-Measure evaluated over the benchmark algorithms for different number of
clusters (k ∈ [2, 20]) in the keyword finding algorithm. . . . . . . . . . . . . . 57
Figure 6.2 F-measure evaluated over the benchmark machine learning algorithms for in-
creasing number of topics in the feature set defined by LDA model. . . . . . . 58
Figure 6.3 F-Measure evaluation criteria for SVMs algorithm using different feature set
strategies. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
Figure 6.4 F-Measure for all benchmark machine learning algorithms evaluated over the
different feature sets. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
Figure 6.5 Davies-Bouldin index for types clustering. . . . . . . . . . . . . . . . . . . . . 63
Figure 6.6 Accumulated error evaluated every 500 messages in the online eval-
uation setup for S-QRE, I-QRE, QRE-P, and AAO-SVM classification algo-
rithms, using RO-SVM as a benchmark. . . . . . . . . . . . . . . . . . . . . . 65

Figure 6.7 Accumulated error evaluated every 500 messages for I-QRE, AAO-SVM,
RO-SVM, batch SVM, naïve Bayes, and incremental naïve Bayes classification
algorithms. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
Figure 6.8 Zoom on the accumulated error evaluated every 500 messages for I-QRE,
AAO-SVM, RO-SVM, batch SVM, naïve Bayes, and incremental naïve Bayes
classification algorithms. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
Figure 6.9 Accumulated error evaluated for the S-QRE classifier for = {1, 100, 1000, 1000}. 67
Figure 6.10 Accumulated error evaluated for the I-QRE classifier for = {1, 100, 1000, 1000}. 68
Figure 6.11 Accumulated error evaluated for the QRE-P classifier for = {1, 100, 1000, 1000}. 69
Figure 6.12 Accumulated error evaluated for the AAO-SVM classifier for values presented
in section 5.7. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
Figure 6.13 Accumulated Error evaluated for the AAO-SVM and I-QRE classifiers for
Gp = {10, 500, 1000}. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70

Figure A.1 Beer-Quiche game. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90


Figure A.2 Dynamic interaction between the Classifier and the Adversary. . . . . 99

List of Algorithms

3.3.1 QRE1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26

3.3.2 QRE2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26

3.3.3 SQRE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27

3.3.4 IQRE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28

3.3.5 QRE-P . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29

3.4.1 Adversary Aware Online SVM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37

5.2.1 Relaxed Online Support Vector Machines . . . . . . . . . . . . . . . . . . . . . . . . . 50

A.3.1 k-means . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100

A.4.1 Sequential Minimal Optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . 103

Chapter 1

Introduction

In this chapter, a general background on phishing fraud detection and adversarial environments is
presented, followed by the general and specific objectives of this thesis. Then, the methodology used
for the development of the thesis is discussed. Finally, the thesis structure is presented, with a brief
introduction to each chapter.

1.1 General Background

In security applications, such as phishing fraud detection or intrusion detection systems, threats are
becoming more effective as adversaries adapt to, and evolve against, current security systems. In
many domains, such as fraud, phishing, spam, unauthorized access, and other malicious activities,
there is a permanent race between adversaries and classifiers. The evolution of the initial problem
is driven by a rational change in the adversaries' behaviour; in this context, one of the major
challenges for a classifier is to account for this adversarial concept drift. Recent studies on this
topic [75, 85, 89] have mainly considered the incremental characteristics of these applications,
leaving the adversarial behaviour as an open question in most of the previously mentioned domain
tasks.

Nowadays, in the cyber-crime context, one of the most common social engineering threats is
phishing fraud. This malicious activity consists of sending email scams in which attackers ask
for personal information in order to break into any site where victims may store valuable private
information, such as financial institutions, e-commerce sites, and web 2.0 sites. Through these
methods, millions of dollars are stolen every year, and this number is likely to keep increasing as
internet penetration in our everyday life grows [1].

In security, social engineering is a methodology that defines how to manipulate a person, exploiting
mostly anxiety, vanity, and fear, into performing actions that he or she would not perform in a
regular state of mind. Similar to deceptive queries or deceit, this concept is applied in order to
gather classified information, commit fraud, or gain access to restricted information systems. As
stated by Kevin Mitnick in The Art of Deception [55], a good social engineer plans an attack like a
chess game, always anticipating several moves beyond the opponent's actions. Therefore, one of this
work's hypotheses is that counter-social-engineering tactics are possible, by anticipating the
adversary's possible actions.

There are important issues when using data mining to build a classifier for the phishing detection
task. First, the correct definition of the feature set directly affects whether hidden patterns are
discovered or not. Second, the right classifier to use is unclear: whether an online or batch learning
structure should be used, or even whether a single model or an ensemble is the right choice. Finally,
the training data needs to be pre-processed for the selected classification algorithms. Regarding
feature determination, several feature extraction methodologies for phishing detection have been
developed [2, 8, 30], and different data mining algorithms have been used for the classification
task [6, 8, 30, 44, 47, 48]. It is important to highlight that text content based features can be
extended to a large range of applications, given that phishers' preferences may also include
deception through images or text messages.

A simple phishing fraud message can be defined as a word, or a set of words, used by malicious
agents in their fraud message to deceive a service user, whereas a generic message can be considered
a group of words sent or delivered by a user for others to read. Under these definitions, in any
medium through which a message can be delivered there exists a possibility to commit fraud. It
all depends on who receives the message, whether the social engineering techniques used affect the
receiver directly, and whether the deceptive message can trigger a naïve response from the end-users.
Considering today's security threats, the automatic filtering of malicious messages has become of
great importance to the data mining and cyber security communities, where fraud detection and
prevention using text mining techniques for enterprise security applications have been extensively
discussed.
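This word-based view of a message can be sketched as a minimal feature extractor. The keyword list below is hypothetical, chosen only for illustration; it is not the feature space derived later in this thesis:

```python
import re
from collections import Counter

# Hypothetical social-engineering keywords, for illustration only; the thesis
# derives its actual features with text mining techniques.
SUSPICIOUS_KEYWORDS = ("verify", "account", "password", "urgent", "click")

def extract_features(message):
    """Map a raw message to keyword-count features (a bag-of-words view)."""
    words = Counter(re.findall(r"[a-z]+", message.lower()))
    return {kw: words[kw] for kw in SUSPICIOUS_KEYWORDS}

features = extract_features("URGENT: click here to verify your account password.")
print(features)  # {'verify': 1, 'account': 1, 'password': 1, 'urgent': 1, 'click': 1}
```

A real filter would, of course, learn which words matter from labeled data rather than fixing them by hand.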

Accordingly, in phishing and many other adversarial classification tasks, the interaction between
different agents must deal with the uncertainty of classifying malicious or regular activities without
information about the real intention of the message. This can be modeled as an incomplete
information game, in which the classifier must decide on a strategy, using only the revealed set of
features, without knowing the adversary's real type: whether the message was malicious or merely
a regular message that resembles a malicious one. Here, the dynamic behaviour of signaling games
shares common elements with online learning theory: in both cases the final outcome, whether a
sequential equilibrium or a hypothesis classifier, is determined by incremental events in the
environment. As the nature of email filtering is determined by a large stream of messages, online
algorithms, as well as generative learning algorithms, have been considered as a means to minimize
the computational cost, even if their predictive power is lower than that obtained by discriminative
learning algorithms [58].
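The incremental nature of such online algorithms can be illustrated with a generic mistake-driven perceptron update, processing one labeled message at a time. This is a textbook sketch of online learning in general, not one of the adversary-aware classifiers proposed in this thesis:

```python
def perceptron_step(w, b, x, y, lr=1.0):
    """One online update: predict with the current hyperplane, then correct it
    on a mistake. x is a feature vector and y is a label in {-1, +1}."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    pred = 1 if score >= 0 else -1
    if pred != y:  # mistake-driven: parameters change only on errors
        w = [wi + lr * y * xi for wi, xi in zip(w, x)]
        b = b + lr * y
    return w, b, pred

# A small stream of (features, label) pairs arriving one at a time.
w, b = [0.0, 0.0], 0.0
for x, y in [([1.0, 0.0], 1), ([0.0, 1.0], -1), ([1.0, 0.0], 1)]:
    w, b, _ = perceptron_step(w, b, x, y)
print(w, b)  # [1.0, -1.0] 0.0
```

The appeal for email streams is that each message is touched once and then discarded, keeping the computational cost per message constant.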

Signaling games, a special representation of dynamic games of incomplete information (also known
as Bayesian games), have been used by economists to explain agent behaviour that previously
seemed unreasonable. Incomplete information signals, such as a monopolist setting lower-than-optimal
prices, and other apparently irrational actions, are a credible means to transmit private
information. Our proposal is that this concept can be extended to phishing: sending deceptive
email messages that could barely yield fraud benefits (irrational behaviour) is the key strategy for
sending messages that might not be classified as malicious. Then the real intention to commit
fraud, the private information, might work on naïve internet users, breaking down the phishing
filter.
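The classifier's side of such a signaling game rests on a belief update: from a prior over the sender's type and the type-conditional probabilities of emitting the observed signal, Bayes' rule gives the posterior belief on which an action is chosen. The numbers below are hypothetical and serve only to illustrate the update:

```python
def posterior_malicious(prior_mal, p_sig_given_mal, p_sig_given_reg):
    """Belief that the sender is malicious after observing a signal s:
    P(mal | s) = P(s | mal) P(mal) / P(s), with P(s) by total probability."""
    p_sig = p_sig_given_mal * prior_mal + p_sig_given_reg * (1.0 - prior_mal)
    return p_sig_given_mal * prior_mal / p_sig

# Hypothetical numbers: 10% of senders are malicious; a deceptive-looking
# signal is emitted by 60% of malicious senders but only 5% of regular ones.
belief = posterior_malicious(0.10, 0.60, 0.05)
print(round(belief, 3))  # 0.571
```

Even a moderately informative signal thus shifts the belief well above the prior, and it is on such beliefs that the classifier's equilibrium strategy acts.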

The aim of this work is to use data mining and game theory together at two different levels,
summarized in the following points:

To design and develop a game-theoretic data mining framework using dynamic games of incomplete
information for the adversarial classification problem. For this purpose, a mechanism is proposed
to model a signaling game between an adversary and a classifier, where the game's equilibria are
used to build strategic classifiers that detect phishing emails.

To characterize the game using text mining techniques and information retrieval methods,
extracting the main features for improving the classification of phishing emails.

1.2 Objectives

1.2.1 General Objective

The main objective of this thesis is to design and develop a methodology to classify phishing emails,
modeling the interaction between the classifier and malicious agents using data mining and game
theory.

1.2.2 Specific Objectives

1. To design a game theoretic framework that represents the interaction between a phishing
filter and the adversaries. This interaction will be modeled under the assumption of incomplete
information between the agents, characterizing the agents of the system together with their
respective types, strategies, and payoffs.
2. To build an incremental algorithm that enables the classification of malicious phishing activities,
with results competitive against state-of-the-art classification algorithms.
3. To characterize phishing activities using text mining techniques, identifying the most
representative feature space for each agent in the game definition.
4. To develop portable and extensible computational tools that can be used by phishing detection
communities and extended to other types of malicious messages.

Chapter 1. INTRODUCTION

1.3 Methodology

The main tool to define the adversarial game between the malicious agents and the classifier is
analytical mathematical modeling, complemented by simple numerical simulations to quantify the
analytical results. Then, once the adversarial game is defined, the knowledge discovery in databases
(KDD) methodology [29] will be used to define all parameters needed for the game, and to construct the
phishing email filter. Finally, performance measures will be used to determine the predictive power
of the proposed classifier against benchmark algorithms.

The methodology used for the development of this thesis is structured in the following steps:

1. Phishing problem definition and previous work


The phishing classification problem will be reviewed for this research, considering the state of
the art of filtering algorithms, national and international anti-phishing entities, and different
social engineering deceptive strategies used in email fraud.
2. Design and development of game theoretic models
Game theoretical models will be used in this work as a way to model the interaction between
malicious agents and a classifier. For this, a review on previous work on these topics will be
used to determine the most qualified model to represent the underlying strategic interaction
between agents. The phishing problem can then be technically analysed, so as to propose the
most adequate model.
3. Data exploration, feature extraction, and feature selection
A phishing corpus will be analysed using state-of-the-art tools for text mining, identifying
the main characteristics of the phishing phenomenon. Together with this, feature extraction
methodologies and feature selection algorithms will be developed, in order to obtain a com-
plete characterization of fraudulent email messages. In particular, the available dataset consists of
an English-language phishing email corpus [56] (4,450 phishing messages) and the SpamAssassin
collection, from the Apache SpamAssassin project (6,951 ham messages). The message
processing is based on a new combination and selection of features generated by well-known
text mining algorithms based on probabilistic topic modeling, such as Latent Dirichlet Allocation
(LDA) [12, 13, 14, 88] and Singular Value Decomposition (SVD) [23, 61], together with keyword
finding techniques [82, 83, 84].
4. Design and development of the model
Firstly, using game theory [39], the interaction between a classifier and a malicious agent will
be modeled. The resulting model will include a complete characterization of the agents, their
types, strategies and payoffs. Afterwards, the state of the art in incremental data mining [68]
and classification algorithms for phishing filtering [2, 6] must be taken into account. Once this
previous work is reviewed, a data mining and game theoretic methodology will be proposed,
where an incremental classification algorithm will be used on the available data
to test the algorithms [21].
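As an illustration of the kind of feature extraction described in step 3, the following sketch combines tf-idf weights reduced via SVD with LDA topic proportions into a single feature matrix. The library choice (scikit-learn), the toy emails, and all parameter values are assumptions for illustration only; the thesis does not prescribe a specific implementation.

```python
# Sketch of the step-3 feature pipeline: tf-idf term weights reduced via
# SVD, and LDA topic proportions, concatenated into one feature matrix.
# Library choice (scikit-learn) and all parameter values are illustrative.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.decomposition import TruncatedSVD, LatentDirichletAllocation

emails = [
    "verify your account immediately or it will be suspended",
    "meeting rescheduled to friday please confirm attendance",
    "click this link to claim your prize and update your password",
    "quarterly report attached let me know if you have questions",
]

# tf-idf + SVD (latent semantic features)
X_tfidf = TfidfVectorizer().fit_transform(emails)
svd_feats = TruncatedSVD(n_components=2, random_state=0).fit_transform(X_tfidf)

# LDA topic proportions (LDA expects raw term counts, not tf-idf)
counts = CountVectorizer().fit_transform(emails)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda_feats = lda.fit_transform(counts)

# Concatenate into the characterization a downstream classifier would use
features = np.hstack([svd_feats, lda_feats])
print(features.shape)  # (4, 4): 2 SVD dimensions + 2 topic proportions
```

Note that the two representations are complementary: the SVD features capture correlated term usage, while each LDA row is a probability distribution over topics (its entries sum to one).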


5. Application and Evaluation of the Proposed Model


The proposed model will be applied on the preprocessed database, considering all assumptions
defined in previous steps. Simultaneously, other classification techniques for phishing email
will be evaluated to determine the appropriate benchmark for classification.

6. Results analysis and conclusions


Once the proposed model is developed, an analysis of the results will be carried out over
both batch learning and online learning settings. The benchmark comparison between
classification algorithms will be analysed to evaluate the predictive power of the proposed
model against other classification algorithms, for both batch and online learning settings.
Finally, we present our conclusions for all the presented steps.

1.4 Thesis Structure

In the next chapter, a bibliographic review is presented where the state of the art in social engineering
and phishing fraud strategies, data mining algorithms for phishing classification and game theoretic
frameworks for adversarial environments are extensively reviewed. Furthermore, previous work on
game theory and data mining models used for detection of malicious activities are presented.

In chapter 3, the main contribution of this thesis is presented, where the classification
problem definition, the game theoretic modeling, and the analytical solution for adversarial
classification under incomplete information games are extensively described.

Then, in chapter 4, the data preprocessing, feature extraction, and feature selection algo-
rithms that match the game theoretic requirements are presented. Here, we describe the real-world
email corpus, as well as the text mining techniques used to obtain the representation of the data
needed by the proposed game theoretical model.

In chapter 5, the experimental settings for the application of the theoretical model are described.
Together with this, the evaluation criteria and the measures considered for both batch learning and
online learning methods are discussed. The most suitable experimental settings for the proposed
game theoretical model are determined through sensitivity analysis and robustness testing over the
model parameters.

Afterwards, in chapter 6, the main results for the proposed classification algorithms and other
benchmark techniques are presented and analysed, according to the evaluation criteria defined in
the previous chapter.

Finally, in chapter 7, the main conclusions are presented, including our main findings
and contributions, as well as future work and lines of research.

Chapter 2

Previous Work

In this chapter, we review the state of the art and previous work. Firstly, different studies on social
engineering strategies and today's phishing countermeasures are discussed. Secondly, a review of the
adversarial classification framework and its extensions is presented, where different game theoretical
models used together with data mining for adversarial environments are explained. Thirdly, the
most recent game theoretic frameworks applied to cyber-security issues are discussed. Finally, the
main data mining applications for phishing and unwanted email filtering are extensively reviewed,
highlighting their main contributions and differences amongst each other.

2.1 The Phishing Problem

The origin of the word phishing is unclear. The term has been associated with the use of "ph" as a
replacement for the letter "f" in the hacker communities, used in several hacker contexts in the '70s
and '80s for phone phreaking, built from the words {phone + hacker + freak}, whose objective
was to alter or hack into equipment and systems connected to the public telephone networks.
Other origins of the word phishing are closely related to terms constructed by joining
the words {password + hacking + fishing} or {password + harvesting + fishing}. The first
term refers to breaking someone's user and password information, whereas the second term
refers to harvesting several passwords that could be useful for phishers to break into a specific
account. By the year 2004, this concept had been extended to obtaining from internet users, by
sending millions of email messages, their user names and passwords for their bank accounts, or for
any other website that could store information useful for committing fraud.

By the year 2006, many important phishing attacks had been perpetrated against eBay and
PayPal customers through unwanted emails sent by phishers. Here, the basic strategy was
to alert users that they needed to send their personal information, or their accounts would be in
danger of being closed by the company because of maintenance of its information systems. Nowadays,
phishing attacks have evolved into many other sophisticated attacks. Spear phishing, whaling,
and vishing, among other fraud techniques yet to be revealed, are based on and inspired by
social engineering methods for making victims believe that they need to share information with
someone.

In this section, the origins of email fraud or phishing emails and their basic deceptive strategies,
together with recent developments in countermeasures, are presented.

2.1.1 Social Engineering and Email Fraud

Social engineering is the act of gaining unauthorized access or information from
an information system by deceiving the humans that have knowledge about the target, originally
by using charm or fear. As a human element is involved, it represents a significant threat to
even the most secure systems. Phishing emails have been using social engineering tactics to claim
ever more victims. In this context, spam emails (members of the unwanted email family as well)
have a different objective, as they are used for massive marketing and distribution strategies for
different products and services, mostly related to the pornographic industry, which would likely be
openly criticized by our society if a traditional campaign were used.

One of the first and most popular letter scams is the Nigerian 419 scam¹, developed since the
'70s and popularized by electronic mail. Its origin is traced to the early 1900s [45], when
letters were sent all over the world, impersonating a military general, a dictator, his wife, sons,
ministers, or anyone who seemed to have lots of money. These letters described an exotic situation,
where the sender needed to save his money from a dictatorship, extract diamonds from an unknown
secret mine, or any situation that could generate a considerable amount of money. In order to
prove the worthiness of the receiver, the scammers ask for money or bank account information.
Afterwards, they proceed to steal money, or to send back to the victim small sums of money to
engage the victim's greed, asking back for more money, and so on. These messages, known as the
advance fee fraud, have been considered one of the first kinds of phishing emails, and it is estimated
that the scam provides Nigeria with 250,000 stable jobs in the present times [45].

According to [55], "A company may have purchased the best security technologies that
money can buy, trained their people so well that they lock up all their secrets before going home at
night, and hired building guards from the best security firm in the business. That company is still
totally vulnerable." In addition, this vulnerability is associated with simple technological
elements through which we share information with others in our everyday life, like our email, or
web sites like Twitter, Facebook or MySpace. In these environments, the vulnerability is associated with
impersonating some official-like person or organization, exploiting human perception or requesting
action, which usually involves opening the email, clicking a link, logging into some server,
or sending personal information back via web or email.
¹ These fraud activities violate section 419 of the Nigerian criminal code.


As previously introduced, the migration of phishing strategies to novel web 2.0 websites is
directly related to the opportunity to deliver a message that can be read by hundreds without
revealing the real identity of the sender. Here, as in the email case, any kind of message can
be delivered in order to fool the victims. In this context, the main problem for countermeasures
and evasion tactics is that the human element is related to the naive behaviour of new users, who
join the web 2.0 websites every day, and whose lack of experience with such websites makes them
easy bait for phishers.

As our society is involved in the everyday usage of applications where both malicious and
regular messages are delivered, the phishing filtering problem is becoming more difficult every day.
Human behaviour studies, mental models for phishing identification, and security skins for graphical
user interfaces have even been proposed to enhance human skills for detecting phishing messages,
particularly emails and web sites [26, 27, 28, 72, 71]. While client-side phishing filtering techniques
have been developed by large software companies [28, 36, 50, 90], server-side filtering techniques have
been a large research focus [2, 6, 7, 8, 30, 44]. Most of this work is based on machine learning
approaches to determine the relevant features to extract from phishing emails, and on data mining
techniques to discover hidden patterns in the relationships between the extracted features.

Phishing fraud has been extensively analysed, and its strategies are no mystery.
Following [55], some of the main psychological triggers or strategies used by social engineers
are analysed below:

Authority: People have a tendency to comply when a request is made by a person in authority.
"... a person can be convinced to comply with a request if he or she believes the requester is
a person in authority or a person who is authorized to make such a request." Attackers will
often include a legitimate company name that an end user is likely to have a financial account
with, such as PayPal, eBay or Visa. These scams are related to the Account Denial of Service
(ADS) threat, which has several believable elements from a regular user's point of view, and is
a reason why many attacks are based on this strategy.

Reciprocation: "We may automatically comply with a request when we have been given
or promised something of value. ... When someone has done something for you, you feel an
inclination to reciprocate. This strong tendency to reciprocate exists even in situations where
the person receiving the gift has not asked for it." As the premise of phishing emails is to
request information of some kind, most of the time using a false pretext, the requester must
provide some benefit to the end user. Here, the strategy works by using false congratulations or
the empty promise of a great gift to be delivered, asking in return for the user's information,
possibly to help in the process of getting them their gift.

Consistency: People have the tendency to comply after having made a public commitment
or endorsement for a cause. The change/update-the-account type of message is a common
threat in the phishing context. Here, the attacker will falsely cite rules or terms of an account,
asking users to verify their information in order to comply with rules or terms that they have
already committed to, publicly or not. The idea of a suspended account or limitation will make
most people do what they can to stand by the guidelines that they may have signed
up for, even if they did not read the terms of service when they created their account.

Social Validation: People have the tendency to comply when doing so appears to be in line
with what others are doing. The action of others is accepted as validation that the behaviour
in question is the correct and appropriate action. This element appears in phishing emails
when, for example, a security audit is cited as the reason for the verification of the user's data.
Explaining a procedure where the user was randomly selected implies that other users
are selected too and that it happens all the time, leading the user to send their valuable data.

Scarcity: People have the tendency to comply when it is believed that the object sought is
in short supply and others are competing for it, or that it is available only for a short period
of time. The concept of limited time or supply is a trick that every salesman often exploits.
The main reason to do so is to put the victims in a state of mind where they have to make
a decision immediately, rather than think it out. This strategy, combined with others, can
create a more successful fraud message: for example, telling a person that their account will
be shut off if they do not comply is in general scary enough, but telling them that it will be
shut off if they do not comply within 24 hours makes the emotional shock of losing account
access stronger and more present in one's mind, since they now have little time to make their
decision.

One of the premises of this thesis is that, by using advanced text mining techniques, messages
that use these social engineering strategies can be identified. Furthermore, in the email context,
some other strategies try to exploit human perception, making victims see what they want to
see rather than what is actually there. Using deceptive links to false websites, false company logos,
and even false sender mailing addresses, the end user is fooled into several phishing traps. However,
through the use of scripting languages and text parsing, some of these strategies can be identified and
used as characterization elements for phishing email messages.

2.1.2 Phishing Countermeasures

Spam filtering has been discussed over the last years, and many filtering techniques have been
described [35]. Phishing classification differs in many aspects from the spam case, where most
spam emails just want to inform about some product. In phishing there is a more complex
interaction between the message and the receiver, like following malicious links, filling in deceptive
forms, or replying with useful information, which is essential for the message to succeed. Also,
there is a clear difference among many phishing techniques, where the two main message categories
are the popularly known deceptive phishing and malware phishing. While malware phishing is used
to spread malicious software installed on victims' machines, deceptive phishing, according to
[7], can be divided into the following six categories: social engineering, mimicry, email spoofing,
URL hiding, invisible content, and image content. For each of these subcategories, specific
feature extraction techniques have been proposed by [7] to help phishing classifiers use the right
characterization of these malicious messages.


Among the countermeasures used against phishing, three main alternatives have been used
(see [7] for a complete review of countermeasures): blacklisting and whitelisting, network- and
encryption-based countermeasures, and content-based filtering.

The first alternative, in general terms, consists of using public lists of malicious phishing web-
sites (black lists) and lists of legitimate non-malicious websites (white lists) [47, 70]. The
idea is that every link in a message must be checked against both lists. The main problem of this
countermeasure is that phishing websites do not persist long enough to be included on time
in the black lists, making it difficult to keep an up-to-date list of malicious websites.

The second alternative is based on email authentication methods [59, 64], where the trans-
action time could impose a considerable computational cost. Besides, a special technological
infrastructure is needed for this countermeasure [7].

Previous work on content-based phishing filtering [2, 6, 7, 8, 30, 44] focused on the extraction
of a large number of features and the usage of popular machine learning techniques for
classification. These approaches for automatic phishing filtering have shown promising results
on setting the relative importance of features.

In web page phishing classification, information retrieval and text mining representations,
such as Term Frequency - Inverse Document Frequency (tf-idf), have been previously used in the
Carnegie Mellon ANTI-phishing and Network Analysis tool (CANTINA) project, developed by the
CMU CUPS anti-phishing group [91]. Also, online classification algorithms have been proposed for
large dataset classification [47, 48], where the need to update the classifier incrementally as objects
are presented to the models is of crucial importance, as the classifier's performance decreases rapidly
over time. This fact is directly related to the update strategies and mechanisms defined by the
message filter, as well as the size of the training sets, and to whether new features are to be considered.
As stated by previous authors [7, 48], if new features are considered in the extraction procedures, the
classification algorithms must be retrained from scratch.
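The incremental-update setting described above can be sketched as follows. The use of scikit-learn's SGDClassifier with partial_fit, and the synthetic, already-featurized batches, are illustrative assumptions, not the actual filter discussed in this thesis.

```python
# Sketch of incremental (online) filter updates: as long as the feature
# space stays fixed, each new labeled batch refines the model without a
# full retrain; adding new features would force retraining from scratch.
# Classifier choice and synthetic data are illustrative only.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
clf = SGDClassifier(random_state=0)
classes = np.array([-1, 1])               # -1 = ham, +1 = phishing
shift = np.array([1.0, 0.0, 0.0, 0.0])    # class separation direction

for batch in range(5):
    # Stand-in for a batch of newly observed, already-featurized emails:
    # phishing (+1) centered at +shift, ham (-1) centered at -shift.
    y = rng.choice(classes, size=20)
    X = rng.normal(size=(20, 4)) + np.outer(y, shift)
    clf.partial_fit(X, y, classes=classes)   # incremental update, no retrain

# Score a point far on the phishing side of the learned boundary
print(clf.predict(np.array([[5.0, 0.0, 0.0, 0.0]])))
```

The key property is that each `partial_fit` call costs only as much as the new batch, in contrast to the retrain-from-scratch scenario triggered by a change of feature space.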

2.1.3 Machine Learning for the Phishing Problem

Different text mining techniques for phishing filtering have been proposed. Abu-Nimeh et al.
[2] used logistic regression, Support Vector Machines (SVM) and random forests to estimate
classifiers for the correct labeling of email messages, obtaining their best results with an F-measure²
of 90%. In [30], using a list of improved features directly extracted from email messages, the author
proposed an SVM-based model which obtained an F-measure of 97.64% on a different phishing
corpus.
² F-measure is a machine learning quality measure given by the harmonic mean between precision and recall.
For further details, see chapter 5.
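Since the F-measure recurs throughout this section, a minimal computation from raw confusion counts may help; the counts below are invented for illustration.

```python
# F-measure (footnote 2): harmonic mean of precision and recall,
# computed from true positives (tp), false positives (fp), and
# false negatives (fn). The counts below are invented for illustration.
def f_measure(tp, fp, fn):
    precision = tp / (tp + fp)   # flagged emails that really are phishing
    recall = tp / (tp + fn)      # phishing emails that were caught
    return 2 * precision * recall / (precision + recall)

# 90 phishing caught, 10 ham wrongly flagged, 10 phishing missed:
print(round(f_measure(tp=90, fp=10, fn=10), 6))  # 0.9
```

Because it is a harmonic mean, the F-measure is dragged down by whichever of precision or recall is worse, which is why it is preferred over plain accuracy on the imbalanced ham/phishing corpora cited here.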


By using more sophisticated text mining techniques, Bergholz et al. [8] proposed a
novel characterization of emails using a Class-Topic model and, with an SVM model, obtained an F-
measure of 99.46% on an updated version of the previously used phishing corpus. Later, Bergholz et al.
[7] proposed an improved list of features to extract from emails, which could characterize most of
the phishing tactics described by the same authors. Using an SVM model, an F-measure of 99.89% was
obtained on a new benchmark database.

As work extending the research developed in this thesis, a game theoretic model was proposed
in [44], where the dynamics of social engineering strategies are considered in an incomplete infor-
mation game between the classifier and phishers, whose interaction, determined by the sequential
equilibria between the agents, is updated as phishing messages are presented.

2.2 Adversarial Machine Learning

2.2.1 Adversarial Classification Framework

As described by Dalvi et al. in [19], an adversarial game can be represented as a game between two
players: a malicious agent whose adversarial activity yields benefits, and a classifier whose
main objective is to identify as many malicious activities as possible, maximizing its expected utility.
The malicious agent tries to avoid detection by changing its features (hence its behaviour), inducing
a high false-negative rate in the classifier. The adversary is aware that changing to a non-adversarial
behaviour might not increase its benefit. Considering this, the adversary tries to maximize
its benefit while minimizing the cost of changing its features. This framework, based on a single-shot
game of complete information, was initially tested in a spam detection domain, where the adversary-
aware naïve Bayes classifier had significantly fewer false positives and false negatives than the plain
version of the classifier. Then a repeated version of the game was tested, where results showed
that the adversary-aware classifier consistently outperformed the adversary-unaware naïve Bayes
classifier.

Consider the set of variables $x = (x_1, \ldots, x_A)$, where $x_a$ represents the $a$-th feature of a given
object. Let $F_{x_a}$ be the set of all possible values for $x_a$, let $F_x = \prod_{x_a \in x} F_{x_a}$, and let $F_y = \{-1, +1\}$.
Let $y \in F_y$ be the random variable of class labels, where $y = +1$ represents a malicious criminal
instance and $y = -1$ represents a regular criminal instance³. An instance $i$ is determined by the
pair $(x_i, y_i)$. The training set $T_r$ and the testing set $T_e$ are defined by partitions over the
whole set of observations, $\mathcal{P}(\{(x_i, y_i)\}_{i=1}^{T})$.

Definition 1 (Adversarial Classification Game). Adversarial classification [19] is defined
as a game between two players: the Classifier, who learns from $T_r$ the classification function

³ A regular criminal instance refers to an object that could potentially be associated with a crime, but has a regular,
innocent behaviour.


$C(x) \in \{+1, -1\}$, which is used to predict the behaviour of adversaries in $T_e$; and an Adversary,
whose objective is that malicious instances in $T_e$ be misclassified, modifying the feature vector $x_i$
into $x_j = \mathcal{A}(x_i)$, where $\mathcal{A}$ denotes the adversary's transformation.

On the one hand, the Classifier is characterized by a set of parameters $V_a$, associated
with the cost of measuring a given feature $x_a$, and by $U_C(C(x), y)$, which represents the utility
of classifying as $C(x)$ an instance whose real class is $y$. Generally, it can be considered that
$U_C(+,-) < 0$ and $U_C(-,+) < 0$, given the cost incurred by the classifier when mistakes are
committed, while $U_C(-,-) > 0$ and $U_C(+,+) > 0$. The Classifier's objective is to maximize its
expected utility, represented by the following expression:

$$E(U_C) = \sum_{(x,y) \in F_x \times F_y} P(x,y) \left[ U_C(C(\mathcal{A}(x)), y) - \sum_{x_a \in x} V_a \right] \qquad (2.1)$$

On the other hand, the Adversary is characterized by the set of parameters $W_a$, which
represents the cost of changing feature $x_a$ from its value in $x$ to its value in $x'$; clearly, not modifying
the feature vector does not incur any cost, so $W(x, x) = 0$. Furthermore, the Adversary
is represented by its utility function $U_A(C(x), y)$, which measures the utility obtained when the
prediction $C(x)$ is assigned to an instance whose real class is $y$. In general, $U_A(-,+) > 0$,
$U_A(+,+) < 0$, and $U_A(-,-) = U_A(+,-) = 0$. Like the Classifier, the Adversary seeks to
maximize its expected utility, represented by the following expression:

$$E(U_A) = \sum_{(x,y) \in F_x \times F_y} P(x,y) \left[ U_A(C(\mathcal{A}(x)), y) - W(x, \mathcal{A}(x)) \right] \qquad (2.2)$$
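To make expressions (2.1) and (2.2) concrete, the following toy instantiation evaluates both expected utilities over a single binary feature. All probabilities, utilities, and costs are invented for illustration, and the feature-flipping transformation stands in for the adversary's modification function.

```python
# Toy instantiation of equations (2.1)-(2.2) with one binary feature.
# All probabilities, utilities and costs are invented for illustration.
from itertools import product

F_x = [0, 1]          # feature values
F_y = [-1, +1]        # -1 = regular, +1 = malicious
P = {(0, -1): 0.45, (0, +1): 0.05, (1, -1): 0.05, (1, +1): 0.45}

V = {0: 0.01}         # cost of measuring the single feature (V_a)
U_C = {(+1, +1): 1, (-1, -1): 1, (+1, -1): -1, (-1, +1): -1}
U_A = {(-1, +1): 1, (+1, +1): -1, (-1, -1): 0, (+1, -1): 0}

def C(x):             # a naive classifier: flag feature value 1
    return +1 if x == 1 else -1

def A(x):             # adversary transformation: flip the feature
    return 1 - x

def W(x, x_mod):      # cost of changing the feature vector
    return 0.0 if x == x_mod else 0.3

E_UC = sum(P[x, y] * (U_C[C(A(x)), y] - sum(V.values()))
           for x, y in product(F_x, F_y))
E_UA = sum(P[x, y] * (U_A[C(A(x)), y] - W(x, A(x)))
           for x, y in product(F_x, F_y))
print(round(E_UC, 3), round(E_UA, 3))  # -0.81 0.1
```

In this toy game the flipping adversary is worthwhile (positive expected utility) precisely because the naive classifier ignores the transformation, which is the situation the adversary-aware extensions below are designed to fix.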

The complexity of computing the optimal strategies depends on the type of classifier, the feature
space, the modifications the adversary may apply to disguise malicious patterns, and the respective
costs. This framework was applied in [19] to a spam filtering task, using a naïve Bayes classifier
and synonym attacks as adversarial strategies. Furthermore, the framework's assumption of complete
information between the two players is unlikely to hold in practice, for which different approaches
are proposed by the authors.

This approach is a simple case in a complete information context, for which different exten-
sions have been developed since its original proposal [19]. In further applications, such as
fraud detection, the agents' information might not be common knowledge, for which extensions
to incomplete information games [44] or Stackelberg games [41] must be considered. Also, different
interaction mechanisms were proposed to enhance the modeling of different adversarial
applications [73], and even theoretical discussions on the future of adversarial machine
learning were put forward [5].


2.2.2 Adversarial Classification Extensions

The adversarial classification framework has been extended to different adversarial settings, chang-
ing mainly the game theoretic modeling of the strategic interaction between agents. Despite
the changes in the game theoretic modeling, the adversarial properties of spam classification have been
widely used for the empirical evaluation of these extensions. In the following, recent developments
and extensions of this framework are presented.

Stackelberg Games Extension

An interesting approach, proposed in [41], considers an adversarial Stackelberg game model to define
the interaction between the classifier and the adversary. This model is based on the assumption
that all players act rationally throughout the game. For the Stackelberg game, this implies that the
second player will respond with the action that maximizes its utility given the action of the first
player. The assumption of rational acting at every stage of the game eliminates Nash equilibria
with non-credible threats, so that a sub-game perfect equilibrium is determined. In addition, the
authors assume a perfect information game, where each player knows the other player's utility function.

In this setting, the sub-game perfect equilibrium is determined using stochastic optimization,
where Monte Carlo simulations on mixture models and linear adversarial transformations were
tested over spam data sets, showing promising results.

Zero-Sum Games Extension

Another contribution to adversarial classification, developed by Sonmenz [73], defines the adver-
sarial game using an adversary-aware, utility-based classifier, and different adversarial strategies
such as worst-case, goal-based, and utility-based scenarios.

For the classifier, the first case, denoted the worst-case scenario, is defined as a two-player
zero-sum game, where an adversary-aware classifier is determined by jointly solving a Linear Support
Vector Machine (L-SVM) and a minimax optimization problem, finding the optimal equilibrium and
the optimal hyperplane simultaneously. In the general-sum game, called the utility-based scenario,
a Nash equilibrium is determined using Nikaido-Isoda-type functions to define the optimal hyperplanes
in an L-SVM context.

The adversary in the goal-based scenario tries to maximize the false-negative classification
error. This goal makes sense since, in real-world applications such as spam filtering and intrusion
detection, the main aim of the adversary is to defeat the classifier. In another scenario described for
the adversary, the worst-case scenario, the goal is to maximize the total classification error of the
classifier. Finally, a third kind of adversary (the utility-based adversary) tries to maximize the false-
negative classification error, but considering a different utility gain for the adversary in each sample.

Can Machine Learning be Secure?

Several studies have addressed the possibility that a classifier is maliciously trained, or that its optimal
strategies could be revealed, in an adaptive adversarial environment. In [5], open
questions on this matter were analysed and extensively discussed. More specifically, these questions
were related to the following list of topics:

1. Can the adversary manipulate a learning system to permit a specific attack? For example,
can an attacker leverage knowledge about the classifier to bypass the filtering?

2. Can an adversary degrade the performance of a learning system to the extent that system
administrators are forced to disable the IDS? For example, could the attacker confuse the
classifier and cause a valid e-mail rejection?

3. What defences exist against adversaries manipulating or attacking learning systems?

4. Finally, what is the potential impact from a security point of view of using machine learning
on a system? Can an attacker exploit its properties to disrupt the classifier?

In [4], the whole spectrum of open problems in the security of learning is presented. Here,
the authors suggest three directions for research on secure learning:

1. The first one is related to the bounds on adversarial influence. Here, open questions
about the adversary's influence on the learner's performance and on the reverse engineering of its
state are analysed [4].

2. The second direction is associated to the value of adversarial capabilities. Topics related
to identifying adversarial capabilities, characterizing tolerable capabilities and the defender
information are discussed [4].

3. The third direction is about technologies for secure learning. In this context, questions
about detecting malicious training instances, the design of security sensitive learners and
orthogonal experts are reviewed [4].

Exploiting a Spam Classifier

Also in the adversarial machine learning community, Barreno et al. [57] present a novel
strategy on how to exploit a spam classifier to render it useless. Here, different attacking strategies,
described as an indiscriminate attack, a focused attack, and an attack under optimal attacking functions,
were used to maliciously train a naïve Bayes classifier. All of these attacking strategies were aware
of the complete set of parameters and the structure of the classifier, and the classifier was not
developed in an adversary-aware context. Results show that the false-positive rate was increased
by the attacking strategies, in comparison with a regular set of instances.

Adversarial Learning Theory

Adversarial Learning, introduced in [46], states that any adversary that is aware of some properties
of a classifier can reconstruct the classifier based on reasonable assumptions and reverse engineering
algorithms. Here, the classifier learns in the presence of an unknown adversary who maliciously
changes the observations used for the learning step. However, the authors do not assume
that the classifier's information is completely known by the adversary. Given this, the adversary tries
to model the classifier's parameters with reverse engineering techniques. This problem, which is
different from the original adversarial classification problem, is introduced as the Adversarial Clas-
sifier Reverse Engineering (ACRE) problem. Furthermore, the adversary's objective is to identify
high-quality instances that the classifier does not label as malicious, rather than to model the
classifier itself perfectly.

Randomization Methods Against Adversarial Learning

Biggio et al. in [10] present a promising alternative: randomizing the classifier's decision function in
order to hide the classifier's strategy from the adversary, thereby limiting adversarial learning
and the adversary's ability to mis-train the classifier. In this work, the authors provide formal support for
designing hard-to-evade classifiers in an adversarial context. Based on the adversarial classification
framework [19], the possibility of introducing some randomness in the placement of the classification
boundary is discussed, which permits hiding information about the classifier from the adversary.
A discussion of possible implementations in multiple classifier systems and an experimental validation
of the proposed method over the SpamAssassin spam corpus are also presented in [10].

Multi-Classifier Systems for Adversarial Classification

There is significant work on using ensembles or multi-classifier systems to deal with the concept
drift present in the adversarial classification problem. In [11] a multi-classifier system is presented to
defeat adversarial behaviour in the spam context, where an ensemble of classifiers automatically
deals with the adversarial classification problem. First, using mutual agreement between
classifiers, possible changes in classifier accuracy are determined; this pairwise agreement measures
how often pairs of classifiers assign the same class label. Using real data from the blog domain, changes
in mutual agreement are considered as indicators of decreased classification accuracy. Then, the
output of the ensemble is used as a proxy for the true label of new instances, leading to a retraining
of individual classifiers identified as potentially weak via mutual agreement.

2.2.3 Other Game Theoretic Frameworks

Other strategic frameworks have been developed for security applications such as spam [3] or credit
card fraud detection [81], where game theory has been proposed to model the interaction between
a classifier and malicious players in an adversarial environment.

Spam Games

Androutsopoulos et al. in [3] model the interaction between spammers and e-mail readers
as a two-player game. Here, the e-mail readers decide to read or delete an e-mail after it is labeled
as legitimate or spam by the spam filter, while the spam author decides whether or not to
insert spam content into a legitimate e-mail. Furthermore, mixed strategies for both parties
are introduced in order to find an appropriate equilibrium for the game.

Figure 2.1 shows the interaction between spammers and e-mail users when modeled as
a two-player game. This interaction is repeated whenever a user requests an e-mail message from
the inbox I, which represents an incoming stream of e-mails. Here, the spammer is the first to
play and has two options: introducing a spam message into the incoming stream
of messages with action S, or doing nothing (action L), which causes a
non-spam message to be obtained from the incoming stream. These actions reflect the fact
that the community of spammers controls the ratio of spam on the Web, and thereby the rate at
which spam filters learn about their evolving strategies.

Then, e-mails (both spam and legitimate) are classified as spam (S) or legitimate (L) by a
given spam filter. This interaction is extended as the end user also decides to read (R) or
delete (D) the e-mails labeled by the spam filter. Finally, the whole interaction is represented by utility
functions over the players' strategies, determined by a set of cost and benefit parameters.

As mixed strategies can be adopted by both players, each player's probabilities of choosing
any given action are updated whenever the game is repeated. Finally, after defining
the two-player game and the actions of the players (spammer and e-mail reader), the result of this game
can be approximated using game theoretic approaches, which are analytically determined by the authors
of this research. For example, a possible scenario in which this mechanism can be used is that
of Internet Service Provider (ISP) regulations: how they influence the spam senders' expected
parameters in order to minimize the expected percentage of spam messages.


Figure 2.1: The spam game [3].

Game theory for Credit-Card Fraud Detection

Another main application of game theoretic models in the security of banking applications
is credit card fraud detection, as proposed in [81]. In this context, it is well known that intru-
sion prevention mechanisms are largely insufficient for protecting databases against information
warfare attacks by authorized users. This has drawn interest towards internal intrusion detection
and game theoretical applications to mitigate credit card fraud committed by authorized insiders.
In this work, the authors model the interaction between an attacker and a detection system as a
multi-stage game between two players, where each player tries to maximize its expected utility over
the possible strategies. They propose a two-layered interaction: the first layer determines an
initial score of fraudulent activity, and the second layer takes this score as input and, using
game theoretical parameters, produces a classifier for the malicious activities.

Chapter 3

Adversarial Classification Using Signaling Games

In the following chapter, we present the main contribution of this thesis. The problem definition and
the interaction between a classifier and adversarial agents are proposed, and the game theoretical
basis used to solve the model is reviewed. For the adversarial learning, four novel algorithms are
introduced: the Static Quantal Response Equilibria (S-QRE), the Incremental Quantal Response
Equilibria (I-QRE), the Quantal Response Equilibria perceptron (QRE-P), and the Adversary Aware
Online Support Vector Machines (AAO-SVM). As presented in Figure 3.1, each of these algorithms
represents a different level of strategic interaction between agents.

Firstly, S-QRE is a static signaling game where no parameters are updated and all messages
are evaluated using an equilibrium calculated over the game primitives.

Secondly, I-QRE is an incremental representation of the S-QRE algorithm, for which messages
are evaluated using mixed strategies updated in periods of arriving messages.

Thirdly, QRE-P is a completely dynamic algorithm in which all game-theoretical parameters
and equilibrium strategies evolve as each new message is presented.

Finally, AAO-SVM defines a machine learning algorithm based on the RO-SVM optimization
problem [68], in whose constraints game theoretical parameters are used as prior knowledge
in the evaluation of every message to be classified.

As a benchmark for the previous algorithms, both batch and online machine learning techniques
were evaluated over the data-set of messages. The techniques considered were SVMs, naïve Bayes,
and logistic regression in the batch learning case, and Relaxed Online SVMs and an incremental
version of naïve Bayes in the online learning setting.


Figure 3.1: Qualitative relationship between proposed algorithms, based on the strategic interaction
between agents, the incremental interaction, and the level of computational intelligence used for the
game modeling.

Figure 3.1 shows that S-QRE considers the computation of decision strategies based on
a complete representation of the data-set, with no incremental interaction or learning steps.
Likewise, I-QRE and QRE-P are related in that their learning procedure is based on
a perceptron-like algorithm, but they differ slightly in the number of observations considered for their
strategic and incremental interaction. Furthermore, AAO-SVM and RO-SVM are closely related
in their SVM-based algorithm and incremental interaction, but their fundamental difference is the
strategic interaction considered in the AAO-SVM.

3.1 Problem Definition

In what follows, the phishing classification learning problem and the strategic interaction between
an adversary and the classifier are introduced. This sub-section aims to define the symbols and math-
ematical notation used in this thesis.

3.1.1 The Learning Problem

Consider a message arriving at time τ represented by the feature vector x_τ = (x_{τ,1}, ..., x_{τ,A}), where
x_{τ,a} is the a-th feature of object x_τ, a ∈ {1, ..., A}, τ ∈ {1, ..., T}, and x_τ ∈ Ω, where Ω is
the space of all possible objects x. Each message can belong to one of two classes: positive (or malicious)
messages, and negative (or regular) messages. This is represented by the variable y_τ, whose possible
values are y_τ ∈ {+1, −1}. The problem data-set and target labels are represented by the matrix X and
the vector Y respectively, where X = (x_1, ..., x_T)^T and Y = (y_1, ..., y_T)^T. Therefore, the learning
problem data-set T is defined by the matrix X and the vector Y, T = (X, Y).

This learning problem is known as a supervised online learning problem [68], for which
a training set and a test set are available to build a classifier represented by a function f from a set
of possible classification functions F. The training set, represented by T_r, is used to estimate the
classifier, and T_e is used to evaluate its performance. The complete database is represented by
T = T_r ∪ T_e. This split is made in order to eliminate potential bias in the evaluation
step, as the elements evaluated were not used to estimate the classifier. Once the classification function
f ∈ F is determined, the final decision for a new object is determined by the evaluation of the
object's features,

f : Ω → {+1, −1}
x_i ↦ y_i

where for a given object x_i with A features in its feature space, its class y_i is predicted by
the classification function f.
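The train/test protocol above can be sketched in a few lines of Python. Everything below is a synthetic placeholder: the data, the labeling rule, and the threshold classifier stand in for the actual phishing features and classification function of this thesis.

```python
import numpy as np

# Toy data-set T = (X, Y): T messages with A features each,
# labels y in {+1, -1} (+1 = malicious, -1 = regular).
rng = np.random.default_rng(0)
T, A = 100, 5
X = rng.random((T, A))
Y = np.where(X[:, 0] > 0.5, +1, -1)   # synthetic labeling rule

# Split into a training set T_r and a test set T_e, T = T_r u T_e.
split = int(0.7 * T)
X_tr, Y_tr = X[:split], Y[:split]
X_te, Y_te = X[split:], Y[split:]

# Placeholder classification function f : Omega -> {+1, -1},
# here a threshold on the first feature "estimated" from T_r only.
theta = X_tr[Y_tr == +1][:, 0].min()

def f(x):
    return +1 if x[0] >= theta else -1

# Evaluate on T_e only, so no bias from reusing training objects.
accuracy = float(np.mean([f(x) == y for x, y in zip(X_te, Y_te)]))
```

Evaluating only on held-out objects is exactly the bias-elimination argument made above.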

3.1.2 The Strategic Interaction

We define adversarial classification under a dynamic game of incomplete information as a sig-
naling game between an Adversary, which attempts to defeat a Classifier by not revealing
information about its real type, modifying x^i, a message of type i, into a message of type j by
α(x^i) = x^j. As shown in Figure 3.2, this dynamic interaction between agents forces the
Classifier to update as the adversaries modify their behaviour.

Figure 3.2: Dynamic interaction between the Classifier and the Adversary.

Furthermore, in Figure 3.2, two-dimensional objects (regular and malicious) are classified
by a hyperplane. Here, (a) describes the original setup of the Classifier, (b) shows the change in the
adversaries' behaviour over their features, and (c) represents the updated classifier considering the new
behaviour of the adversaries.


Consider the incomplete information game, as defined by Harsanyi in [39],

Γ_b = (N, (A_n)_{n∈N}, (T_n)_{n∈N}, (p_n)_{n∈N}, (U_n)_{n∈N})

where N = {1, ..., N} is the set of players and A_n is the set of possible actions for player n,
∀n ∈ N. T_n is the set of possible types of the n-th player, ∀n ∈ N. p_n is a probability function p_n : T_n → [0, 1]
which assigns a probability distribution over ×_{j∈N} T_j to each possible player type in T_n, ∀n ∈ N.
Finally, the utility function of player n is denoted by U_n : (×_{j∈N} A_j) × (×_{j∈N} T_j) → R, which
corresponds to the payoff of player n as a function of the actions of all players (A_n) and their
types (T_n).

Based on the previous scheme, as described in [32, 34], dynamic games of incomplete infor-
mation can be modeled as a signaling game. The proposed model of incomplete information for
the adversarial classification between an Adversary (A) and a Classifier (C), i.e. N = {A, C},
behaves as the following sequence of events:

1. Nature at time τ draws a type t_i for the Adversary, where type t_i = (θ, κ) is composed of
θ ∈ {R, M} and κ ∈ {1, ..., k}, from the set of types T_A = {t_{R,κ}}_{κ=1}^k ∪ {t_{M,κ}}_{κ=1}^k. According to
this, type t_i states whether the Adversary is Regular (R) or Malicious (M), and defines the
initial message of type κ, x^i. Nature draws according to the probability distribution p_A(t_i),
where p_A(t_i) > 0, ∀i ∈ {R, M} × {1, ..., k}, and Σ_{i∈T_A} p_A(t_i) = 1. In this context, k is the
total number of message types that can be drawn by Nature from Ω.

2. The Adversary at time τ observes its type t_i, which can be either t_{R,κ} or t_{M,κ}, κ ∈ {1, ..., k},
and chooses a message x^j from its set of actions A_A = {α(x^i) = x^j}_{j=1}^k, where x^i is defined
from the type t_{R,κ} or t_{M,κ}, κ ∈ {1, ..., k}. The function α : Ω → Ω transforms a feature
vector at time τ from x^i into x^j, the message whose class the Classifier has to decide.
A non-malicious adversary has no incentive to modify its behaviour, so α(x^i) = x^i
when its type is t_{R,κ}, κ ∈ {1, ..., k}.

3. The Classifier at time τ observes x^j (but not t_i) and chooses an action C(x^j) from its set of
actions, defined over the simplex A_C = {p = (p_{+1}, p_{−1}) ∈ R^2 : p_{+1} + p_{−1} = 1, p_{+1}, p_{−1} ≥ 0},
whose corners are the pure strategies {+1, −1}. It is important to notice that the Classifier is
a single-type player, so its type is common knowledge and need not be mentioned further.

4. Finally, payoffs are revealed to each agent by U_A(t_i, α(x^i), C(α(x^i))) and
U_C(t_i, α(x^i), C(α(x^i))).

The extensive-form game that represents the signaling game between the Adversary and the
Classifier is presented in Figure 3.3, where for a given time τ, x^j is defined by α(x^i) = x^j and I_j
is the j-th information set at which the Classifier has to decide C(x^j). All intermediate nodes between
Nature and the information sets represent the strategy nodes for the Adversary, where α(x^i) = x^j is
decided. In Appendix A.2 a numerical example for a simple case is presented, with further details
ranging from the classification learning interaction to the strategic interaction of the signaling game
between agents.

Figure 3.3: Extensive-form representation of the signaling game between the Classifier and the
Adversary.

3.2 Sequential Equilibria in Adversarial Classification

In order to analyse the optimal strategies for the Classifier in the proposed mechanism, special re-
quirements and assumptions beyond the traditional Bayesian Nash equilibrium must be considered. These
requirements must be satisfied in order to refer to the perfect Bayesian equilibrium (PBE) refinement concept
in the adversarial classification signaling game.

In terms of the game Γ_b, at each information set the Classifier has a belief about the
probabilities with which a node has been reached by the Adversary. These beliefs define the basics of the
proposed interaction, for which the sequential rationality definition is stated [34, 43].

Definition 2 (Sequential Rationality): Given its beliefs, the Classifier's strategies must be
sequentially rational.


Sequential rationality insists that the Classifier have beliefs and act optimally given these
beliefs, but it is necessary to make assumptions in order to discard unreasonable beliefs. In an
extensive-form game, information sets are on the equilibrium path if they are reached with
positive probability when the game is played according to the equilibrium strategies. At
information sets on the equilibrium path, beliefs are determined by Bayes' rule and the players' equilib-
rium strategies. Kreps and Wilson formalized the concept of sequential rationality in [43], where it
is stated that the equilibrium no longer consists of just optimal strategies for each agent, but also
includes a belief for each agent at each information set at which the agent has to make a move.

Definition 3 (Bayesian Updating): For each x^j ∈ A_A, if there exists¹ t_i ∈ T such that σ_A(x^j|t_i) > 0,
then the Classifier's belief μ(t_i|x^j) at the information set I_j corresponding to x^j must follow from Bayes'
rule and the Adversary's strategy:

μ(t_i|x^j) = σ_A(x^j|t_i) p(t_i) / Σ_{t_r∈T} σ_A(x^j|t_r) p(t_r)    (3.1)

If Σ_{t_r∈T} σ_A(x^j|t_r) p(t_r) = 0, μ(t_i|x^j) can be defined as any probability distribution.
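Equation (3.1) is a direct application of Bayes' rule. The following sketch illustrates the update with made-up type priors and sending probabilities (the type and message names are purely illustrative), including the uniform fallback allowed in the zero-denominator case:

```python
# Bayesian updating of the Classifier's belief mu(t_i | x^j),
# as in equation (3.1). All numbers are illustrative.
types = ["t_R", "t_M"]
prior = {"t_R": 0.7, "t_M": 0.3}                 # p(t_i)
# sigma_A[t][x]: probability that type t sends message x.
sigma_A = {"t_R": {"x1": 0.9, "x2": 0.1},
           "t_M": {"x1": 0.2, "x2": 0.8}}

def belief(x):
    """Return mu(t_i | x) for every type; when the denominator is
    zero (an off-equilibrium message) any distribution is allowed,
    so a uniform one is returned."""
    denom = sum(sigma_A[t][x] * prior[t] for t in types)
    if denom == 0:
        return {t: 1.0 / len(types) for t in types}
    return {t: sigma_A[t][x] * prior[t] / denom for t in types}

mu = belief("x2")
# Observing x2 makes the malicious type far more likely:
# mu(t_M | x2) = 0.8*0.3 / (0.1*0.7 + 0.8*0.3), roughly 0.774
```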

In order to satisfy Bayesian Updating and Sequential Rationality, the following signaling
requirements are described.

Definition 4 (Signaling requirement 1 (S1)): After observing a message x^j from A_A, the
Classifier has a belief about which types could have sent x^j. Denote this belief by the probability
distribution μ(t_i|x^j), where μ(t_i|x^j) ≥ 0, ∀t_i ∈ T, and Σ_{t_i∈T} μ(t_i|x^j) = 1.
j j

Definition 5 (Signaling requirement 2 (S2C)): For each x^j ∈ A_A, the Classifier's optimal
strategy, defined as the probability distribution σ*_C over the Classifier's actions C(x^j) ∈ A_C, must
maximize the Classifier's expected utility, given the beliefs μ(t_i|x^j) about which types could have
sent x^j. That is,

∀x^j,  σ*_C(·|x^j) ∈ arg max_{σ_C} Σ_{t_i∈T} μ(t_i|x^j) U_C(t_i, x^j, σ_C)    (3.2)

where,

U_C(t_i, x^j, σ_C(·|x^j)) = Σ_{C(x^j)∈A_C} σ_C(C(x^j)|x^j) U_C(t_i, x^j, C(x^j))    (3.3)

¹ The set of types of the Adversary will be considered the only set of types, T_A = T, as the Classifier has only
one type of agent.


Definition 6 (Signaling requirement 3 (S2A)): For each t_i ∈ T, the Adversary's optimal
message x^j = α(x^i), defined by the probability distribution σ*_A, must maximize the Adversary's
utility function, given the Classifier's strategy σ*_C. That is,

∀t_i,  σ*_A(·|t_i) ∈ arg max_{σ_A} U_A(t_i, σ_A, σ*_C)    (3.4)

where

U_A(t_i, σ_A, σ*_C) = Σ_{x^j∈A_A} σ_A(x^j|t_i) U_A(t_i, x^j, σ*_C(·|x^j))

and

U_A(t_i, x^j, σ*_C(·|x^j)) = Σ_{C(x^j)∈A_C} σ*_C(C(x^j)|x^j) U_A(t_i, x^j, C(x^j))

Assumption 1 (Signaling game refinements): The dynamic game of incomplete information
between the Classifier and the Adversary can be modeled by a signaling game which satisfies
the signaling requirements for sequential rationality.

A sequential equilibrium, a subset of perfect Bayesian equilibrium (PBE) in the adversarial signaling
game, is a pair of mixed strategies σ*_A, σ*_C and a belief μ(t_i|x^j) satisfying signaling requirements S1,
S2C, S2A, and Bayesian Updating. It is clear, by construction of the mechanism, that requirement
S1 and Bayesian Updating are satisfied by the adversarial classification game. Nevertheless, signal-
ing requirement S2A will be considered satisfied as a first approach, which is a strong assumption in
this work's game development. Whether adversarial behaviour strategies, as described by Dalvi et
al. in [19], could represent a more reliable interaction is left as an open problem for future work.

3.3 Quantal Response Equilibrium Classifiers

In this section, the strategic classifiers proposed in this thesis are introduced. First, the modeling
of the utility functions for the Adversary and the Classifier, which are used in all QRE classifiers,
is presented. Then, the static QRE, incremental QRE, and QRE perceptron algorithms are intro-
duced. As previously stated in this chapter, these algorithms exhibit different levels of interaction and
dynamism between the Classifier and the Adversary: the static QRE can be interpreted
as a strategic classifier, the incremental QRE re-evaluates the strategy profiles in periods as the
game theoretical parameters change, and finally the QRE perceptron re-evaluates the strategy
profiles whenever a classification mistake is made. See Appendix A.1 for further details on how
sequential equilibria can be numerically approximated using the logit QRE algorithm [52, 77].

It is important to notice that, as a strong assumption, the only elements known a priori
in the following algorithms are the Adversary's types, empirically determined using clustering
analysis over the set of messages in the data-set T. However, dynamic clustering techniques [60]
could be used to determine whether new types appear as the game evolves. In the present case, the
analysis over the complete data-set permits extracting all possible types; updating the type
probabilities by their frequencies then determines the incremental relevance of each one of them.
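The a-priori extraction of types can be sketched as a clustering pass over the message set. Here a minimal k-means implementation (numpy only, on synthetic two-cluster data) stands in for the clustering analysis, and the type probabilities are estimated by cluster frequencies:

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain k-means: returns the centroids {c_i} and cluster labels."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        # Distance of every message to every centroid, then reassign.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for i in range(k):
            if np.any(labels == i):
                centroids[i] = X[labels == i].mean(axis=0)
    return centroids, labels

# Synthetic messages: two well-separated clusters standing in for
# two adversary types.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (30, 2)),
               rng.normal(1.0, 0.1, (20, 2))])
k = 2
centroids, labels = kmeans(X, k)

# Type probabilities p(t_i) estimated by cluster frequencies.
p = np.bincount(labels, minlength=k) / len(X)
```

The frequency estimate of p(t_i) is exactly the update described above; a dynamic clustering scheme would additionally allow k itself to grow as new types appear.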

3.3.1 Utility Function Modeling and Payoffs

For the QRE-P algorithm, the utility functions for an incoming message of type j at time τ are
determined by

U_C(t_i, x^j_τ, C(x^j_τ)) = ψ_C(t_i, x^j_τ, C(x^j_τ)) · Ū_C    (3.5)
U_A(t_i, x^j_τ, C(x^j_τ)) = ψ_A(x^j_τ, C(x^j_τ)) · (Ū_A − λ_A ||c_i − x^j_τ||)    (3.6)

For the Classifier, the term ψ_C(t_i, x^j_τ, C(x^j_τ)) in equation (3.5) is a sign function which takes
positive values if no mistake is committed for message x^j_τ under decision C(x^j_τ), in terms of the malicious
and regular types t_i. If an error is committed, the sign function takes negative values. Ū_C is a parameter
which represents a baseline value for the Classifier's utility.

For the Adversary, the utility function is determined by equation (3.6). Here, ψ_A(x^j_τ, C(x^j_τ))
is a sign function which takes positive values if the classification decision is −1 and negative values if
the classification decision is +1, regardless of the Adversary's type. The value λ_A represents
a factor which amplifies the distance between the centroid c_i of type t_i and the
action α(x^i) = x^j taken by the Adversary. Finally, Ū_A is a parameter which represents the
baseline value for the Adversary's utility.

Following the same idea, the utility functions must be redefined for S-QRE, I-QRE, and AAO-SVM.
Given that historical information on incremental data-sets is considered for the
strategic update, the utility function for the Adversary is determined by centroids
computed at time τ (equation 3.8), while for the Classifier the information considered is
still associated with message x^j_τ (equation 3.7).

U_C(t_i, x^j_τ, C(x^j_τ)) = ψ_C(t_i, x^j_τ, C(x^j_τ)) · Ū_C    (3.7)

U_A(t_i, x^j_τ, C(x^j_τ)) = ψ_A(x^j_τ, C(x^j_τ)) · (Ū_A − λ_A ||c_i − c_τ||)    (3.8)
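A minimal sketch of the utility functions (3.5)-(3.6) follows. The names used here (the sign functions ψ, the baselines, and the distance factor λ_A) are notational assumptions of this sketch, and all numeric values are illustrative:

```python
import numpy as np

# Baseline utilities and the distance amplification factor
# (all values illustrative).
U_C_BASE, U_A_BASE, LAMBDA_A = 1.0, 1.0, 0.5

def utility_classifier(correct):
    """Eq. (3.5): psi_C times the baseline -- positive when the
    decision matches the true type, negative on a mistake."""
    return U_C_BASE if correct else -U_C_BASE

def utility_adversary(decision, x_j, c_i):
    """Eq. (3.6): psi_A is +1 when the decision is -1 (the message
    passes as regular) and -1 otherwise; the Adversary pays
    lambda_A * ||c_i - x_j|| for moving the message away from its
    type centroid c_i."""
    sign = 1.0 if decision == -1 else -1.0
    return sign * (U_A_BASE - LAMBDA_A * np.linalg.norm(c_i - x_j))

c_i = np.array([0.0, 0.0])           # centroid of the Adversary's type
x_j = np.array([1.0, 0.0])           # modified message, at distance 1
u = utility_adversary(-1, x_j, c_i)  # 1.0 - 0.5*1.0 = 0.5
```

The variant (3.7)-(3.8) replaces the term ||c_i − x_j|| with a distance between centroids, leaving the rest of the structure unchanged.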

3.3.2 Auxiliary Methods

In this section, the QRE algorithm used to determine the mixed strategies, as well as the classifica-
tion hypothesis and its evaluation, are presented. These methods are common to all QRE-based
algorithms.


QRE Algorithm

An algorithm to determine the mixed strategies for both the Adversary and the Classifier, based on
the QRE logit equilibria [77], is presented. In simple terms, this algorithm takes a data-set
T and a number of types k as input; using clustering techniques, the centroids {c_i}_{i=1}^k are
determined. Then, the information sets {I_i}_{i=1}^k and type probabilities {p(t_i)}_{i=1}^k are determined based
on the frequency of message types in T. Next, the utilities for the Adversary and the Classifier are
computed. Finally, using all of the above as input for the traceZeroes procedure², the mixed
strategies σ*_C for the Classifier are determined.

Algorithm 3.3.1: QRE1

Data: T, k, x_τ
Result: σ*_C
1 Initialize {c_i}_{i=1}^k = k-Means(T);
2 {I_i}_{i=1}^k = build k information sets;
3 p(t_i) = |(t_i ∈ T)| / |T|, ∀i ∈ {1, ..., k};
4 set U_A = {U_A(t_i, x_τ, +)}_{i=1}^k ∪ {U_A(t_i, x_τ, −)}_{i=1}^k according to (3.6);
5 set U_C = {U_C(t_i, x_τ, +)}_{i=1}^k ∪ {U_C(t_i, x_τ, −)}_{i=1}^k according to (3.5);
6 σ*_C = traceZeroes(U_A, U_C, {I_i}_{i=1}^k, {p(t_i)}_{i=1}^k, {c_i}_{i=1}^k);
7 return σ*_C;

The main difference between QRE1 (algorithm 3.3.1) and QRE2 (algorithm 3.3.2) is the
utility function evaluation. This accommodates the per-message evaluation property of the QRE-P
perceptron, whereas for the rest of the algorithms the evaluation is a batch incremental evaluation
of the data-set.

Algorithm 3.3.2: QRE2

Data: T, k
Result: σ*_C
1 Initialize {c_i}_{i=1}^k = k-Means(T);
2 {I_i}_{i=1}^k = build k information sets;
3 p(t_i) = |(t_i ∈ T)| / |T|, ∀i ∈ {1, ..., k};
4 foreach x_τ ∈ T do
5   set U_A ← U_A + {U_A(t_i, x_τ, +)}_{i=1}^k ∪ {U_A(t_i, x_τ, −)}_{i=1}^k according to (3.8);
6   set U_C ← U_C + {U_C(t_i, x_τ, +)}_{i=1}^k ∪ {U_C(t_i, x_τ, −)}_{i=1}^k according to (3.7);
7 σ*_C = traceZeroes(U_A, U_C, {I_i}_{i=1}^k, {p(t_i)}_{i=1}^k, {c_i}_{i=1}^k);
8 return σ*_C;

² This procedure is based on the input parameters defined in the Gambit software tool for Quantal Response
Equilibria computation [77].
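The traceZeroes step relies on Gambit's logit-QRE computation, which traces the logit equilibrium branch from λ = 0. The following toy iteration for a 2x2 zero-sum game (illustrative matching-pennies payoffs, damped logit best responses; not Gambit's actual path-following procedure) conveys the idea of a logit quantal response fixed point:

```python
import numpy as np

# Illustrative 2x2 zero-sum game (matching pennies).
U_row = np.array([[1.0, -1.0], [-1.0, 1.0]])   # row player's payoffs
U_col = -U_row                                  # column player's payoffs

def logit(u, lam):
    """Logit choice probabilities proportional to exp(lam * utility)."""
    z = np.exp(lam * (u - u.max()))             # numerically stable
    return z / z.sum()

def qre(lam, iters=500):
    """Damped iteration of mutual logit responses to a fixed point."""
    p = np.array([0.5, 0.5])                    # row mixed strategy
    q = np.array([0.5, 0.5])                    # column mixed strategy
    for _ in range(iters):
        p_new = logit(U_row @ q, lam)           # logit response to q
        q_new = logit(U_col.T @ p, lam)         # logit response to p
        p, q = 0.5 * (p + p_new), 0.5 * (q + q_new)
    return p, q

p0, _ = qre(0.0)           # lambda = 0: uniformly random play
p_star, q_star = qre(5.0)  # larger lambda: closer to best responses
```

At λ = 0 the logit response is uniform regardless of payoffs; as λ grows, play approaches a Nash equilibrium, which is the branch that the Gambit-based procedure follows.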


Classification Hypothesis and Evaluation

In algorithms 3.3.3, 3.3.4, and 3.3.5, the function flipCoin : [0, 1] → {+1, −1} takes the value of
σ*_C(+1|·) and flips a biased coin to produce the classification hypothesis h_τ. If h_τ = +1, the Classifier's
decision is +1 (a malicious message), and if h_τ = −1, the Classifier's decision is −1 (a regular
message). The performance of the classifier is determined by the cumulative classification error,
calculated by the function Err.
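A minimal sketch of flipCoin and of the cumulative error Err (the fixed seed and the toy label stream are illustrative):

```python
import random

rng = random.Random(0)   # fixed seed, purely for reproducibility

def flip_coin(sigma_c_plus):
    """Biased-coin hypothesis: +1 (malicious) with probability
    sigma_C*(+1|.), otherwise -1 (regular)."""
    return +1 if rng.random() < sigma_c_plus else -1

class Err:
    """Cumulative classification error over the message stream."""
    def __init__(self):
        self.mistakes = 0
        self.total = 0

    def __call__(self, h, y):
        self.total += 1
        self.mistakes += int(h != y)
        return self.mistakes / self.total

err = Err()
rate = 0.0
for y in [+1, -1, +1, -1]:   # toy label stream
    h = flip_coin(0.5)
    rate = err(h, y)
```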

3.3.3 Static Quantal Response Equilibrium Classifier

The Static Quantal Response Equilibrium Classifier (S-QRE) is a one-shot representation of the
whole interaction between the Classifier and the Adversary in a signaling game. Here, the objective
is to determine whether, in a completely strategic interaction, the probability distribution σ*_C over the
Classifier's actions (its mixed strategies) in the sequential equilibrium can be used for the decision
of labeling a given message x^j ∈ T as malicious (C(x^j) = +1) or regular (C(x^j) = −1).

S-QRE Algorithm

The static QRE algorithm is stated as follows,

Algorithm 3.3.3: SQRE

Data: T_r, T_e, k
Result: {H_{T_e} = h_τ, ∀x_τ ∈ T_e}, Err
1 Initialize σ*_C = QRE2(T_r, k);
2 foreach (x_τ, y_τ) ∈ T_e do
3   h_τ = flipCoin(σ*_C);
4   Err(h_τ, y_τ);
5 return {h_τ}, ∀x_τ ∈ T_e;

This algorithm represents the simplest case of the interaction between the Classifier and
the Adversary. The decision function is based on the mixed strategy profile determined by the
QRE over the whole training set T_r. Afterwards, the test is performed over a testing data-set T_e,
on which the classification performance is evaluated with the function Err. This method represents the
simplest case of strategic analysis, where no dynamism is considered and the evaluation of messages
is static. In this case, intuition suggests that the classification error will grow over
time, given the incremental properties of the game between agents.


3.3.4 Incremental Quantal Response Equilibrium Classifier

In the signaling game between agents, their online behaviour needs to be considered as the
game develops. The main idea of this algorithm is that the game starts when the mixed strategies
of both agents, σ*_C and σ*_A, are calculated using a training set T_r. This set enables the definition of
an initial equilibrium, which is incrementally improved as new messages are presented over periods
of time. The update of the Classifier's strategies is determined by a game period parameter Gp,
which indicates that every Gp messages the sequential equilibrium is updated by running QRE over the
incremental data-set S. As the game develops, new information about the adversaries arises, which
is used by the Classifier by forcing it to infer new strategies, as presented in algorithm 3.3.4.

I-QRE Algorithm

The incremental QRE (I-QRE) algorithm is stated as follows,

Algorithm 3.3.4: IQRE

Data: T_r, T_e, Gp, k
Result: {H_{T_e} = h_τ, ∀x_τ ∈ T_e}, Err
1 Initialize S := {T_r}; σ*_C = QRE2(T_r, k);
2 foreach (x_τ, y_τ) ∈ T_e do
3   h_τ = flipCoin(σ*_C);
4   Err(h_τ, y_τ);
5   if |S \ T_r| mod Gp = 0 then
6     σ*_C = QRE2(S, k);
7   S.add(x_τ, y_τ);
8 return {h_τ}, ∀x_τ ∈ T_e;

This algorithm initially computes the mixed strategies σ*_C with QRE using the training data-set
T_r. Afterwards, as new messages arrive, the Classifier decides whether a message is malicious
or regular using the probabilities σ*_C. Then, every Gp messages, the type probabilities and beliefs
are updated, with the aim of improving the Classifier's performance. We feel this approach
captures the strategic interaction better than the static algorithm S-QRE (3.3.3). This statement is
based on the incremental consideration of the agents' interaction, in which the mixed strategies are
updated every Gp cycles (considered as learning cycles). Improvements to this algorithm could be
obtained by sensitivity and robustness testing over the algorithm's parameters.
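The Gp-periodic update loop of I-QRE can be sketched as follows. Here qre_update is a deliberately trivial stand-in for the QRE2 call (it just re-estimates P(malicious) from label frequencies), so only the control flow of the algorithm is being illustrated:

```python
def qre_update(S):
    """Trivial proxy for QRE2: P(malicious) from label frequencies."""
    if not S:
        return 0.5
    return sum(1 for _, y in S if y == +1) / len(S)

def i_qre(train, test, Gp):
    S = list(train)
    sigma_plus = qre_update(S)               # initial equilibrium proxy
    hypotheses, mistakes = [], 0
    for i, (x, y) in enumerate(test):
        h = +1 if sigma_plus >= 0.5 else -1  # deterministic variant
        hypotheses.append(h)
        mistakes += int(h != y)
        S.append((x, y))                     # grow the data-set S
        if (i + 1) % Gp == 0:                # every Gp messages: update
            sigma_plus = qre_update(S)
    return hypotheses, mistakes / len(test)

train = [(0, -1)] * 6 + [(1, +1)] * 4        # mostly regular history
test = [(1, +1)] * 10                        # adversaries shift behaviour
hyps, err_rate = i_qre(train, test, Gp=3)
```

With these toy streams, the strategy adapts at the first Gp-update and the mistakes stop: the error settles at 3 mistakes out of 10 messages.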


3.3.5 Quantal Response Equilibrium Perceptron Classifier

For the QRE-perceptron algorithm³ (QRE-P), the game starts when both agents (Classifier and
Adversary) are first introduced to each other with the first message, whose type is unknown. The
Classifier, which does not have any information about the game, can only consider a 50/50 probability
when classifying the message as malicious or regular. Then, as the game develops, new information about
the adversaries arises, forcing the Classifier to dynamically infer the mixed strategies σ*_C for the game.

QRE-P Algorithm

Initially, the mixed strategies and type probabilities are taken as uniform distributions over the
corresponding sets; that is, σ*_C(+1|·) = 0.5. The QRE perceptron algorithm is stated as
follows,

Algorithm 3.3.5: QRE-P

Data: T, k
Result: {H_T = h_τ, ∀x_τ ∈ T}, Err
1 Initialize S := {}; σ*_C = (0.5, 0.5);
2 foreach (x_τ, y_τ) ∈ T do
3   h_τ = flipCoin(σ*_C);
4   Err(h_τ, y_τ);
5   if h_τ ≠ y_τ then
6     σ*_C = QRE1(S, k, x_τ);
7   S.add(x_τ, y_τ);
8 return {h_τ}, ∀x_τ ∈ T;

In this case, the algorithm updates all game theoretical values as the interaction between
agents evolves. The main idea is that these parameters are updated only if the Classifier commits
a mistake in its decision h_τ, lowering the time complexity of the Classifier's evaluation
over the data-set. Intuitively, this approach does not fully represent the strategic interaction, as
the Classifier is forced to decide at each x_τ without enough information to compute the mixed
strategies. However, this perceptron-like algorithm represents the extreme case of the incremental
interaction, and for this reason it is considered relevant to evaluate it as a benchmark strategic algorithm.

³ Based on the dynamic update of parameters on classification mistakes, originally proposed in [65].
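The mistake-driven control flow of QRE-P can be sketched in the same style. Again, the strategy recomputation is a trivial frequency proxy standing in for the QRE1 call, so only the perceptron-like update schedule is illustrated:

```python
def strategy_from(S):
    """Trivial proxy for QRE1: P(malicious) from label frequencies."""
    if not S:
        return 0.5
    return sum(1 for _, y in S if y == +1) / len(S)

def qre_perceptron(stream):
    """Start at sigma_C*(+1|.) = 0.5 and recompute the strategy
    only when the hypothesis h differs from the label."""
    S, sigma_plus, mistakes = [], 0.5, 0
    for x, y in stream:
        h = +1 if sigma_plus >= 0.5 else -1
        if h != y:                           # update only on mistakes,
            mistakes += 1                    # as in a perceptron
            sigma_plus = strategy_from(S + [(x, y)])
        S.append((x, y))
    return mistakes

stream = [(0, -1)] * 5 + [(1, +1)] * 5       # toy message stream
total_mistakes = qre_perceptron(stream)
```

On this toy stream the first regular message and every malicious message trigger an update, giving 6 mistakes in total; the point is that the (expensive) strategy computation runs only on those 6 steps rather than on all 10.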


3.4 Adversary Aware Online Support Vector Machines

Another approach to the strategic interaction between the Adversary and the Classifier is now
presented. Here, game theoretical parameters are included in a machine learning algorithm through
the utility function modeling of the classifier. Furthermore, sequential equilibrium properties
are used to include new constraints in a Support Vector Machine (SVM) classifier, whose
online evaluation leads to the proposed Adversary Aware Online SVM algorithm.

3.4.1 Utility Function Modeling and Payoffs

Consider a data-set characterized by just one feature x_1, and suppose that this feature represents
the usage of a phishing-like word, e.g. "paypal". Therefore, if "paypal" is used in a message, then
x_1 = 1, and x_1 = 0 otherwise. Suppose the classification function is defined by y = w·x_1 + b. If
y ≥ θ, for a given threshold θ, the message is classified as phishing (or malicious), and if y < θ
it is classified as regular (or non-malicious). As a strong assumption, the construction and modeling
of the utility function considers that every feature with a value different from 0 corresponds to a
strategy closer to a phishing-type message. In the following, the utility cost function construction is
based on the classification cost matrix presented in Table A.3, where the ω_M, ω_R, δ_M and δ_R
parameters are introduced.

Suppose the case where the real type of a given message is malicious. If the malicious feature
x_1 is used in the message, then the value of the classification function is y = w + b, so it is
likely that w + b > θ, and therefore the message is classified as malicious. The utility of the Classifier,
if the message is classified as malicious, is proposed to be U_C = α_M (w + b), the
maximum reward to the classifier for performing well. Furthermore, if the Classifier
decides that the message was not malicious, even if x_1 = 1, it must be penalized with its maximum
cost U_C = −β_M (w + b), where β_M > 1 is a miss-classification cost parameter. If x_1 = 0, then the
classifier is rewarded (or penalized) with a lower value, as it manages to classify a message without
phishing properties as malicious, or miss-classifies a malicious message without phishing properties,
respectively. Based on this idea, the Classifier's payoffs for the malicious and regular cases are
defined as follows:



U_C^M(x_1, C(x_1)) =
    α_M (w + b)     if x_1 = 1, C(x_1) = +1
    α_M b           if x_1 = 0, C(x_1) = +1
    −β_M (w + b)    if x_1 = 1, C(x_1) = −1
    −β_M b          if x_1 = 0, C(x_1) = −1

In case the real type of a given message is regular, the payoffs behave differently.
Here, the maximum payoff is achieved when the Classifier decides the message is not phishing, when it is not
using a phishing feature. The minimum payoff is achieved when the regular message does not use a
phishing feature but is classified as malicious. Here, the α_R and β_R parameters are introduced to
differentiate the payoffs from the previous case.



U_C^R(x_1, C(x_1)) =
    α_R (w + b)     if x_1 = 0, C(x_1) = −1
    α_R b           if x_1 = 1, C(x_1) = −1
    −β_R (w + b)    if x_1 = 0, C(x_1) = +1
    −β_R b          if x_1 = 1, C(x_1) = +1

For the general case with A features, following the same idea as the simple case where A = 1,
the classification function is defined by y = w^T x + b, where dim(x) = dim(w) = A. Recalling that
T = {t_{R,κ}}_{κ=1}^k ∪ {t_{M,κ}}_{κ=1}^k, the general case for the Classifier's utility functions can be separated
into two cases. The first case is where an originally regular type of message ({t_{R,κ}}_{κ=1}^k) is classified
as malicious or regular. The second case is when an originally malicious type of message ({t_{M,κ}}_{κ=1}^k)
is classified as malicious or regular. These cases can be modeled by the following utility functions,

U_C(t_{R,κ}, x, C(x) = +1) = −β_R (w^T (e − x) + b)    (3.9)

U_C(t_{R,κ}, x, C(x) = −1) = α_R (w^T (e − x) + b)     (3.10)

U_C(t_{M,κ}, x, C(x) = −1) = −β_M (w^T x + b)          (3.11)

U_C(t_{M,κ}, x, C(x) = +1) = α_M (w^T x + b)           (3.12)

where e is a ones vector of dimension A. It is important to consider that the previous
utility function representation is specifically designed for binary features that only capture malicious
strategies (i.e. if all features are set to 1, the message is likely to be malicious, and if all features are
set to 0, it is likely to be regular). Furthermore, these cases comply with the simple-case modeling:
for a given message τ whose type is malicious, if a feature is not present (x_{τ,ℓ} = 0),
then its contribution to the utility function is α_M b and −β_M b for the +1 and −1 labeling respectively. For
the case when its type is regular, if x_τ = 0 then (e − x) evaluates to e, so the result is
α_R (w^T e + b) and −β_R (w^T e + b) for the −1 and +1 labeling respectively.
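Equations 3.9–3.12 can be evaluated directly for binary feature vectors. The following is a minimal sketch in Python; the α/β default values are illustrative placeholders, not the calibrated costs of Table A.3.

```python
import numpy as np

def classifier_utility(t_regular, x, label, w, b,
                       alpha_M=1.0, beta_M=2.0, alpha_R=1.0, beta_R=2.0):
    """Classifier payoff U_C of equations 3.9-3.12 for a binary feature
    vector x and a label in {+1, -1}."""
    e = np.ones_like(x, dtype=float)
    if t_regular:                          # regular type: payoff uses e - x
        margin = w @ (e - x) + b
        return alpha_R * margin if label == -1 else -beta_R * margin
    margin = w @ x + b                     # malicious type: payoff uses x
    return alpha_M * margin if label == +1 else -beta_M * margin
```

With a single feature (A = 1) this reproduces the piecewise payoffs of the simple case above.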

For the Adversary's utility function modeling, it is relevant to consider that payoffs are
related to the distance from the message type t to the emitted message x^j of type t_j at time τ.
This can be determined by the distance from the message type to the centroids which determine
the different types, using the distance function d(·,·) : R^A × R^A → R. In the simple case, when the
Adversary is regular, its utility function properties are the same as the Classifier's. This is
because in this case both agents share the same objective, and are rewarded in similar ways. That
is,



U_A^R(x_1, A(x_1)) =
    α_R (w + b)     if x_1 = 0, C(x_1) = −1
    α_R b           if x_1 = 1, C(x_1) = −1
    −β_R (w + b)    if x_1 = 0, C(x_1) = +1
    −β_R b          if x_1 = 1, C(x_1) = +1

Then, when the Adversary is malicious, the utility function modeling must consider dif-
ferent evaluation scenarios. First, when the classification is +1 and the original message type was
changed in order to disable the malicious feature, i.e. x_1 = 0, the utility is more negative than in the
case when the original type is not changed. Then, when the classification label is −1, the utility
takes positive values, considering differences depending on whether the malicious feature was used or not. In case the
feature is used, the distance between the original type and the one chosen by the Adversary must be
taken into consideration.



U_A^M(x_1, C(x_1)) =
    −β_M (w (1 + 1) + b)    if x_1 = 0, C(x_1) = +1
    −β_M (w + b)            if x_1 = 1, C(x_1) = +1
    α_M b                   if x_1 = 0, C(x_1) = −1
    α_M (w + b − 1)         if x_1 = 1, C(x_1) = −1

Following the same idea used for the general case in the Classifier's utility function mod-
eling, for the Adversary the utility functions considered in this case are the following:

U_A(t_{R,κ}, x, C(x) = +1) = −β_R (w^T (e − x) + b)                      (3.13)

U_A(t_{R,κ}, x, C(x) = −1) = α_R (w^T (e − x) + b)                       (3.14)

U_A(t_{M,κ}, x, C(x) = +1) = −β_M (w^T ((e − x) + d(e − x, c_κ)) + b)    (3.15)

U_A(t_{M,κ}, x, C(x) = −1) = α_M (w^T x + b − d(x, c_κ))                 (3.16)

where c_κ represents the centroid of the cluster associated to type t_{M,κ}.


3.4.2 Classifier's Optimal Strategy

From the signaling requirement S2C, it can be shown that the maximization problem 3.17 can be
developed as follows,


∀ x^j,   σ*_C(·|x^j) ∈ arg max_{C(x^j) ∈ A_C}  Σ_{t_i ∈ T} μ(t_i|x^j) σ_C(C(x^j)|x^j) U_C(t_i, x^j, C(x^j))    (3.17)

As mentioned before, the Classifier's optimal strategies are defined by the set of actions
A_C = {+1, −1}. Considering the case where the expected utility achieves its maximum value
for C(x^j) = +1, the previous expression can be extended into equation 3.18,

Σ_{t_i ∈ T} μ(t_i|x^j) σ_C(+1|x^j) U_C(t_i, x^j, +1)  >  Σ_{t_i ∈ T} μ(t_i|x^j) σ_C(−1|x^j) U_C(t_i, x^j, −1)    (3.18)

The types set T can be partitioned into two new subsets, T = T_M ∪ T_R, where T_M = {t_{M,κ}}_{κ=1}^k
and T_R = {t_{R,κ}}_{κ=1}^k are the sets that contain all types of messages that can be sent by the malicious
and regular Adversary respectively. The following development is obtained by using this
partitioning of the Adversary's types,

Σ_{t_{R,κ} ∈ T_R} μ(t_{R,κ}|x^j) σ_C(+1|x^j) U_C(t_{R,κ}, x^j, +1) + Σ_{t_{M,κ} ∈ T_M} μ(t_{M,κ}|x^j) σ_C(+1|x^j) U_C(t_{M,κ}, x^j, +1)
    > Σ_{t_{M,κ} ∈ T_M} μ(t_{M,κ}|x^j) σ_C(−1|x^j) U_C(t_{M,κ}, x^j, −1) + Σ_{t_{R,κ} ∈ T_R} μ(t_{R,κ}|x^j) σ_C(−1|x^j) U_C(t_{R,κ}, x^j, −1)

Then,

Σ_{t_{M,κ} ∈ T_M} μ(t_{M,κ}|x^j) [σ_C(+1|x^j) U_C(t_{M,κ}, x^j, +1) − σ_C(−1|x^j) U_C(t_{M,κ}, x^j, −1)]
    > Σ_{t_{R,κ} ∈ T_R} μ(t_{R,κ}|x^j) [σ_C(−1|x^j) U_C(t_{R,κ}, x^j, −1) − σ_C(+1|x^j) U_C(t_{R,κ}, x^j, +1)]

where ΔU_{C,M}^{t_{M,κ}}(x^j) and ΔU_{C,R}^{t_{R,κ}}(x^j) are defined as,


ΔU_{C,M}^{t_{M,κ}}(x^j) ≡ σ_C(+1|x^j) U_C(t_{M,κ}, x^j, +1) − σ_C(−1|x^j) U_C(t_{M,κ}, x^j, −1)    (3.19)

ΔU_{C,R}^{t_{R,κ}}(x^j) ≡ σ_C(−1|x^j) U_C(t_{R,κ}, x^j, −1) − σ_C(+1|x^j) U_C(t_{R,κ}, x^j, +1)    (3.20)

In the following, for the sake of simplicity, the message of type j emitted at time τ, x^j_τ, will be considered
as a generic message x^j. Using equation 3.19 and equation 3.20, the final expression in
equation 3.21 is deduced,

Σ_{t_{M,κ} ∈ T_M} μ(t_{M,κ}|x^j) ΔU_{C,M}^{t_{M,κ}}(x^j)  >  Σ_{t_{R,κ} ∈ T_R} μ(t_{R,κ}|x^j) ΔU_{C,R}^{t_{R,κ}}(x^j)    (3.21)
tM, TM tM, TR

where μ(t_i|x^j) is defined by equation 3.1, and the values for ΔU_{C,M}^{t_{M,κ}}(x^j) and ΔU_{C,R}^{t_{R,κ}}(x^j)
must be determined by using the general-case utility function modeling for the Classifier.

Given the previously obtained results, it is relevant to see that the Classifier's optimal strategy
σ*_C(x^j) is obtained by the following conditional statement,

σ*_C(x^j) =
    +1    if condition 3.21 is satisfied
    −1    otherwise                        (3.22)

Furthermore, by evaluating the general-case variational utility functions of the Classi-
fier (equation 3.19 and equation 3.20) with the previously defined utility function modeling, the following
development can be determined. In this case, α_M, α_R, β_M and β_R must be defined based on as-
sumptions of the game modeling, such as the cost matrix proposed in Table A.3, and e is a vector
of ones whose dimension is A.

ΔU_{C,M}^{t_{M,κ}}(x^j) = σ_C(+1|x^j) U_C(t_{M,κ}, x^j, +1) − σ_C(−1|x^j) U_C(t_{M,κ}, x^j, −1)
                      = σ_C(+1|x^j) α_M (w^T x^j + b) − σ_C(−1|x^j) (−β_M) (w^T x^j + b)
                      = (α_M σ_C(+1|x^j) + β_M σ_C(−1|x^j)) (w^T x^j + b)    (3.23)

Following the same idea, when the message's real type is regular, it can be shown that,

ΔU_{C,R}^{t_{R,κ}}(x^j) = σ_C(−1|x^j) U_C(t_{R,κ}, x^j, −1) − σ_C(+1|x^j) U_C(t_{R,κ}, x^j, +1)
                      = σ_C(−1|x^j) α_R (w^T (e − x^j) + b) − σ_C(+1|x^j) (−β_R) (w^T (e − x^j) + b)
                      = (α_R σ_C(−1|x^j) + β_R σ_C(+1|x^j)) (w^T (e − x^j) + b)    (3.24)


Finally, replacing equation 3.23 and equation 3.24 into equation 3.21,

Σ_{t_{M,κ} ∈ T_M} μ(t_{M,κ}|x^j) ΔU_{C,M}^{t_{M,κ}}(x^j)  >  Σ_{t_{R,κ} ∈ T_R} μ(t_{R,κ}|x^j) ΔU_{C,R}^{t_{R,κ}}(x^j)

Σ_{t_{M,κ} ∈ T_M} μ(t_{M,κ}|x^j) (α_M σ_C(+1|x^j) + β_M σ_C(−1|x^j)) (w^T x^j + b)
    > Σ_{t_{R,κ} ∈ T_R} μ(t_{R,κ}|x^j) (α_R σ_C(−1|x^j) + β_R σ_C(+1|x^j)) (w^T (e − x^j) + b)

and, using w^T (e − x^j) + b = (w^T e + 2b) − (w^T x^j + b),

(w^T x^j + b) Σ_{t_{M,κ} ∈ T_M} μ(t_{M,κ}|x^j) (α_M σ_C(+1|x^j) + β_M σ_C(−1|x^j))
    > ((w^T e + 2b) − (w^T x^j + b)) Σ_{t_{R,κ} ∈ T_R} μ(t_{R,κ}|x^j) (α_R σ_C(−1|x^j) + β_R σ_C(+1|x^j))

(w^T x^j + b) Σ_{t_{M,κ} ∈ T_M} μ(t_{M,κ}|x^j) (α_M σ_C(+1|x^j) + β_M σ_C(−1|x^j))
    + (w^T x^j + b) Σ_{t_{R,κ} ∈ T_R} μ(t_{R,κ}|x^j) (α_R σ_C(−1|x^j) + β_R σ_C(+1|x^j))
    > (w^T e + 2b) Σ_{t_{R,κ} ∈ T_R} μ(t_{R,κ}|x^j) (α_R σ_C(−1|x^j) + β_R σ_C(+1|x^j))    (3.25)

Then, grouping by (w^T x^j + b) and factorizing by Σ_{t_{R,κ} ∈ T_R} μ(t_{R,κ}|x^j) (α_R σ_C(−1|x^j) + β_R σ_C(+1|x^j)),
the following expression is obtained,

(w^T x^j + b) > (w^T e + 2b) / ( [Σ_{t_{M,κ} ∈ T_M} μ(t_{M,κ}|x^j) (α_M σ_C(+1|x^j) + β_M σ_C(−1|x^j))] / [Σ_{t_{R,κ} ∈ T_R} μ(t_{R,κ}|x^j) (α_R σ_C(−1|x^j) + β_R σ_C(+1|x^j))] + 1 )    (3.26)

where the expressions Θ(x^j) and Λ(x^j) can be defined as


Θ(x^j) ≡ (1 + Λ(x^j)) / (Σ_{k=1}^A w_k + 2b)    (3.27)

Λ(x^j) ≡ [Σ_{t_{M,κ} ∈ T_M} μ(t_{M,κ}|x^j) (α_M σ_C(+1|x^j) + β_M σ_C(−1|x^j))] / [Σ_{t_{R,κ} ∈ T_R} μ(t_{R,κ}|x^j) (α_R σ_C(−1|x^j) + β_R σ_C(+1|x^j))]    (3.28)
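The two expressions above can be evaluated numerically for a single message. The following is a minimal Python sketch; since σ_C does not depend on the type inside each sum, the belief sums factor out. All α/β default values are illustrative placeholders, not the thesis's calibrated costs.

```python
import numpy as np

def lambda_theta(mu_M, mu_R, sigma_plus, w, b,
                 alpha_M=1.0, beta_M=2.0, alpha_R=1.0, beta_R=2.0):
    """Evaluate the ratio Lambda(x) of equation 3.28 and the scaling term
    Theta(x) of equation 3.27 for one message x. mu_M and mu_R hold the
    beliefs mu(t|x) for malicious and regular types; sigma_plus is
    sigma_C(+1|x)."""
    sigma_minus = 1.0 - sigma_plus
    lam = (np.sum(mu_M) * (alpha_M * sigma_plus + beta_M * sigma_minus)
           / (np.sum(mu_R) * (alpha_R * sigma_minus + beta_R * sigma_plus)))
    theta = (1.0 + lam) / (np.sum(w) + 2.0 * b)   # equation 3.27
    return lam, theta
```

When beliefs and costs are symmetric between the two roles, Λ(x) = 1 and Θ(x) reduces to 2/(Σ_k w_k + 2b).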

The previous game-theoretic result (condition 3.21) can be used as a prior-knowledge con-
straint in a classification problem, associated with the regularized risk minimization from the statis-
tical learning theory proposed by Vapnik in [80]. For this, when the Classifier's optimal strategy
is to classify a given message at time τ as f(x_τ) = w^T x_τ + b = +1, the hyperplane evaluation
w^T x_τ + b must hold condition 3.26. Now, for a given message at time τ, its type j must be deter-
mined with arg min_j d(x_τ, c_j), for a given set of clusters with centroids c_j, j ∈ {1, . . . , k}; as a
strong assumption of this constraint, the type is considered to be maintained throughout the
evaluation of the hyperplane for every τ ≤ T. Considering a slack
variable ξ_τ for the non-separable classification case, the classification constraint can be expressed as
condition 3.29,

y_τ (w^T x_τ + b) Θ(x_τ) ≥ (1 − ξ_τ)    (3.29)

This game theoretic learning result can be incorporated into the following quadratic opti-
mization problem,

min_{w,b,ξ}   (1/2) Σ_{i=1}^A w_i² + C Σ_{i=1}^T ξ_i

subject to   y_i (w^T x_i + b) Θ(x_i) ≥ (1 − ξ_i),   ∀ i ∈ {1, . . . , T}    (3.30)
             ξ_i ≥ 0,   ∀ i ∈ {1, . . . , T}
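The primal above differs from the standard soft-margin SVM only in the Θ(x_i)-scaled margin. As an illustration (not the prior-knowledge SMO solver used in this thesis), the same objective can be approximately minimized by stochastic subgradient descent on the equivalent hinge-loss form:

```python
import numpy as np

def fit_theta_svm(X, y, theta, C=1.0, lr=0.01, epochs=200, seed=0):
    """Minimize 0.5*||w||^2 + C * sum_i max(0, 1 - y_i*Theta_i*(w.x_i + b))
    by stochastic subgradient descent; theta holds Theta(x_i) per message."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        for i in rng.permutation(n):
            margin = y[i] * theta[i] * (X[i] @ w + b)
            gw, gb = w.copy(), 0.0            # gradient of 0.5*||w||^2
            if margin < 1.0:                  # hinge term is active
                gw -= C * y[i] * theta[i] * X[i]
                gb -= C * y[i] * theta[i]
            w -= lr * gw
            b -= lr * gb
    return w, b
```

With theta set to all ones this reduces to an ordinary linear soft-margin SVM.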

The dual representation of the classification problem in equation 3.30 is given by the
following expression,

max_α   Σ_{i=1}^T α_i − (1/2) Σ_{i=1}^T Σ_{j=1}^T y_i Θ(x_i) y_j Θ(x_j) α_i α_j x_i^T x_j

subject to   C ≥ α_i ≥ 0,   ∀ i ∈ {1, . . . , T}    (3.31)

             Σ_{i=1}^T y_i Θ(x_i) α_i = 0

This proposed classifier can be considered cost-sensitive [24] over the cost matrix for the
phishing filtering problem presented in Table A.3. Although problem 5.2 has been considered as
a batch learning algorithm, its formulation can be solved incrementally by online learning algo-
rithms. To solve this problem, an online learning technique is proposed in this thesis, inspired
by the development of D. Sculley in [68]. This algorithm is based on solving the dual formulation
using the Sequential Minimal Optimization (SMO) algorithm described in [63]. The SMO algorithm is used
to train SVMs by breaking up the large Quadratic Programming (QP) representation of the dual into
a series of small QP problems, which are solved analytically. Small changes to the
SMO algorithm, such as those explained in previous work on prior-knowledge inclusion in SVMs [87], were
considered. For further details on how the SMO algorithm is used to solve the dual formulation of
SVMs (equation 5.2), see Appendix A.4.

3.4.3 AAO-SVM Algorithm

Based on previous work on Online Support Vector Machines algorithms described by [33] and later
by [68], the proposed adversary-aware classifier is stated as follows,

Algorithm 3.4.1: Adversary Aware Online SVM

Data: T, m, ε, Gp, C, k
Result: f(x_τ) = w_τ^T x_τ + b_τ, ∀τ
1   Initialize w_0 := 0, b_0 := 0, S := {};
2   foreach (x_τ, y_τ) ∈ T do
3       Classify x_τ using f(x_τ) = w_{τ−1}^T x_τ + b_{τ−1};
4       if y_τ (w_{τ−1}^T x_τ + b_{τ−1}) Θ(x_τ) < ε then
5           Find w′, b′ with prior-knowledge SMO on S, with w_{τ−1} and b_{τ−1} as seed hypothesis,
            and Θ(x_τ);
6           set w_τ := w′ and b_τ := b′;
7       if |S| > m then
8           remove oldest example from S;
9       if τ mod Gp = 1 then
10          σ_C = QRE2(S, k);
11          update Θ(x_i), ∀ i ∈ S;
12      S.add(x_τ, y_τ);
13  return {w_τ, b_τ}_{τ=1}^T;

Algorithm 3.4.1 presents the online learning algorithm Adversary-Aware Online SVM (AAO-
SVM). Based on the Classifier's beliefs and sequential equilibrium strategies, the hyperplane
parameters are updated, incorporating the game-theoretic results as prior-knowledge constraints.
The main idea of the algorithm is that given an incoming message x_τ, a label is assigned using
the classification function f(x_τ) = w_{τ−1}^T x_τ + b_{τ−1}. If the Classifier's optimal strategy is not
satisfied (equation 3.21), the hyperplane parameters are updated using a modified version of the
SMO algorithm over the seen messages (the set S). A memory parameter m sets the number
of messages kept in S. Then, every Gp periods, all game-theoretical parameters are updated: type
probabilities and strategies are recalculated using the logit QRE.
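The control flow of Algorithm 3.4.1 can be sketched as follows. Only the loop structure is shown; `retrain`, `qre_update` and `theta_fn` are hypothetical callables standing in for the prior-knowledge SMO step, the logit-QRE step, and Θ(·) respectively.

```python
import numpy as np
from collections import deque

def aao_svm_loop(stream, retrain, qre_update, theta_fn, m=100, Gp=10, eps=1.0):
    """Control-flow skeleton of Algorithm 3.4.1 (components mocked)."""
    S = deque(maxlen=m)              # sliding memory of at most m messages
    w, b = None, None
    hyperplanes = []
    for tau, (x, y) in enumerate(stream, start=1):
        violated = w is None or y * (w @ x + b) * theta_fn(x) < eps
        if violated:                 # Classifier's condition not satisfied
            w, b = retrain(list(S), w, b, theta_fn(x))
        if tau % Gp == 1:            # periodic game-parameter update
            qre_update(list(S))
        S.append((x, y))
        hyperplanes.append((w.copy(), b))
    return hyperplanes
```

The `deque(maxlen=m)` implements the memory parameter m: appending the (m+1)-th example silently drops the oldest one, matching steps 7-8 of the algorithm.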

Chapter 4

Phishing Features, Strategies, and Types

In this chapter, the phishing and ham corpus is described. Then, a brief review of the text-
mining techniques used in this thesis for feature extraction is given, together with the definition of
the feature space used to characterize malicious messages. Finally, the procedure to determine the
types of messages for the game-theoretic modeling is introduced.

4.1 Corpus Description

To test the proposed methodology in a malicious context, an English-language phishing
and ham email corpus was used, built from Jose Nazario's phishing corpus [56] and the SpamAssassin ham
collection. The phishing corpus¹ consists of 4450 emails manually retrieved from November 27, 2004
to August 7, 2007. The SpamAssassin collection, from the Apache SpamAssassin Project², consists
of 6951 ham email messages. The email collection was saved in the Unix mbox email
format and was processed using Perl scripts.

4.1.1 Structural Features

As initially described in [30] and later by [7] and [6], the extraction of basic content-based features is
needed for a minimal representation of phishing emails. These features are associated with structural
properties of the email, such as link analysis, programming elements and the output of spam
¹ Available at http://monkey.org/~jose/wiki/doku.php?id=PhishingCorpus
² Available at http://spamassassin.apache.org/publiccorpus/

Chapter 4. PHISHING FEATURES, STRATEGIES, AND TYPES

filters. The extension of these features can be determined for a given web-based technology. For the
email-based technology, these features can be determined as follows:

- Structural properties are proposed as four binary features defined by the MIME (Multi-
purpose Internet Mail Extensions) standard, related to the possible number of email formats,
stating information about the total number of different body parts (all body parts, discrete
body parts, composite body parts, and alternative body parts).

- Link analysis provides seven binary features related to the properties of every link in an email
message: the existence of links in a message, whether the number of internal links is greater than
one, whether the number of external links is greater than or equal to one, whether the number of links with
IP numbers is greater than one, whether the number of deceptive links³ is greater than one, whether the
number of links behind images is greater than one, and whether the total number of dots in all links
of a message is greater than 10.

- Programming elements are defined as binary features representing whether HTML, JavaScript
and forms are used in a message.

- Finally, the SpamAssassin filter's output score for an email was used, indicating with a binary
feature whether the score was greater than 5.0, the recommended spam threshold.

It is important to notice that all previously mentioned features (a total of 15 features, which
represent the set of basic structural features) are directly extracted from the email message, and the extension of such basic
features to applications other than email filtering is subject to the technology used by those
applications.
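A few of the link-analysis features above can be sketched directly. The following Python snippet is illustrative only (the thesis used Perl scripts over mbox files); the regex-based URL extraction and the function name are assumptions, and a real extractor would parse the HTML properly.

```python
import re

def link_features(html, sender_domain="example.com"):
    """Compute a subset of the binary link-analysis features, using the
    thresholds listed above (IP links > 1, total dots > 10, ...)."""
    urls = re.findall(r'href=["\'](https?://[^"\']+)["\']', html, re.I)
    ip_links = [u for u in urls
                if re.match(r'https?://\d+\.\d+\.\d+\.\d+', u)]
    external = [u for u in urls if sender_domain not in u]
    dots = sum(u.count('.') for u in urls)
    return {
        'has_links': int(len(urls) > 0),
        'external_links': int(len(external) >= 1),
        'ip_links': int(len(ip_links) > 1),
        'many_dots': int(dots > 10),
    }
```

Each returned value is a binary feature, so the output can be concatenated directly into the structural part of the feature vector.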

4.2 Text-Mining Techniques for Feature Extraction

In this thesis, the applicability of topic-based features for malicious message filtering is driven by
data mining methodologies. Different text mining techniques will be reviewed and modified from
their original formulations for the characterization of malicious messages. In the following, malicious
messages will be considered in the context of phishing email messages.

The general characterization of malicious and regular messages is described in Figure 4.1.
Here, the input dataset is represented by an initial message corpus, composed of the training set with
regular and malicious messages. Over this corpus, three main feature extraction methodologies are
applied. First, a document tokenization step is needed to extract every word used in the message.
This step differs for each messaging technology. For example, for email messages or
HTML-based technologies (like web sites), the parsing of HTML code and several scripting methods
is needed. In the case of malicious messages presented in Short Message Service (SMS) or Really Simple
³ Links whose real URL is different from the URL presented to the email reader.


Syndication (RSS), where the message itself is written differently, novel parsing components and
Natural Language Processing (NLP) techniques have to be considered. Afterwards, a cleaning step
over each tokenized representation of the message is needed: a stop-word removal process and
the stemming of the message's words are performed.

Figure 4.1: Feature extraction flow diagram process for phishing messages.

Then, different feature extraction methodologies are executed. Among these, keyword
extraction, latent semantic analysis and latent Dirichlet allocation features were considered.
Furthermore, the extracted features are combined in order to obtain the keywords and
content-topics that fully characterize a given message. Depending on the context in which messages are
being analysed (SMS, RSS, HTML⁴, mbox⁵, amongst others), technology-dependent features, referred
to in this research as basic structural features, can be extracted from the messages. A proposed list of
basic structural features for email phishing messages is presented in section 4.1. Finally, all features
(keywords, content-topics and basic structural features) represent the feature space associated
to a given message. Here, feature selection methodologies are required in order to minimize the
complexity of the classification task.

Let the set of features determined by the keyword finding algorithm be K, the set of features
determined by latent semantic indexing be L, the set of features determined by latent Dirichlet
⁴ Hypertext Markup Language
⁵ Electronic mail message file format


allocation be D, and the set of basic structural features be B; then the final set of features F that is
analysed in the feature extraction step is given by,

F = ((K ∩ L) ∪ (K ∩ D)) ∪ B = (K ∩ (L ∪ D)) ∪ B    (4.1)

As shown in equation 4.1, the final set of features is a combination of the basic structural
features B, which are independent from the other content-based feature sets K, L and D. However,
these latter sets of features are not independent from each other. They are represented by binary features,
indicating whether a keyword or topic is present in a given message, and the intersection between
these features describes the final set of features. Our hypothesis, tested in chapter 6, is that F is more
likely to represent a training set of messages in the classification task.
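Equation 4.1 is a plain set computation; a minimal sketch (with set names K, L, D, B standing in for the symbols of the equation) is:

```python
def combine_features(K, L, D, B):
    """Combine the keyword (K), LSI (L), LDA (D) and structural (B)
    feature sets as in equation 4.1."""
    return (K & (L | D)) | B   # equals ((K & L) | (K & D)) | B
```

The two forms of equation 4.1 are equal by distributivity of intersection over union, which the sketch exploits.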

4.2.1 Document Model Representation and Keyword Extraction

The limitations and strengths of word-content-based filtering built on the Vector Space Model (VSM)
and the Term Frequency - Inverse Document Frequency (tf-idf) representation of documents have
been extensively discussed [66, 69, 84]. On the one hand, content-based features cannot be
extended directly to other languages, given that words are language-dependent features. Also, this
methodology is not able to capture semantic content by itself, so the information gathered
by the VSM model needs to be complemented with more sophisticated methodologies.

On the other hand, the VSM is easy to implement and has a low computational cost. Further-
more, the information needed as input (word data) is easy to determine, and is common to every
medium through which a phishing message can be sent (email, Twitter, Facebook, etc.). Therefore, content-
based detection can be extended to any word-message domain.

Let R be the total number of different words in the complete collection of documents (mes-
sages), and Q the total number of messages. In the following, the representation of a corpus is
given by M = (m_ij), i ∈ {1, ..., R}, j ∈ {1, ..., Q}, where m_ij is the weight expressing whether a
given word is more important than another in a document. As shown in equation 4.2, the m_ij
weights considered in this research are defined as an improvement of the tf-idf term [66].

m_ij = f_ij (1 + sw(i)) log(Q / n_i)    (4.2)

where f_ij is the frequency of the i-th word in the j-th document, sw(i) is a relevance factor
associated with word i in a set of words, and n_i is the number of documents containing word i. In
this case, sw(i) = w_email^i / TE, where w_email^i is the frequency of word i over all documents, and TE is the
total number of emails. The tf-idf term is a weighted representation of the importance of a given
word in a document that belongs to a collection of documents. The term frequency indicates the


weight of each word in a document, while the inverse document frequency states whether the word
is frequent or uncommon in the collection, setting a lower or higher weight respectively.
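Equation 4.2 can be implemented directly over tokenized documents. A minimal sketch, assuming sw(i) is the corpus frequency of word i divided by the total number of messages as defined above:

```python
import math

def tfidf_matrix(docs):
    """Build the weights of equation 4.2 for tokenized documents:
    m_ij = f_ij * (1 + sw(i)) * log(Q / n_i)."""
    Q = len(docs)
    vocab = sorted({w for d in docs for w in d})
    n = {w: sum(w in d for d in docs) for w in vocab}      # document frequency
    sw = {w: sum(d.count(w) for d in docs) / Q for w in vocab}
    M = [[d.count(w) * (1 + sw[w]) * math.log(Q / n[w]) for d in docs]
         for w in vocab]
    return vocab, M
```

Note that a word present in every document gets weight zero (log(Q/Q) = 0), the usual idf behaviour.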

Based on the previous tf-idf representation, a clustering technique must be considered for
the segmentation of the whole collection of phishing emails. k-Means clustering, with the cosine
between documents as the distance function, was used (see equation 4.3).

cos(m_i, m_j) = Σ_{k=1}^R m_ki m_kj / ( sqrt(Σ_{k=1}^R m_ki²) · sqrt(Σ_{k=1}^R m_kj²) )    (4.3)

In k-Means [49], centers are initially chosen at random, and in each iteration the similarity
between objects is determined within each cluster, after which each center is updated with the geometric
mean of its corresponding objects. The optimal number of clusters was determined using as
stopping rules the minimization of the within-cluster distance and the maximization of the
between-cluster distance. A frequently used index for determining such a number is the Davies-
Bouldin index [22]. (See Appendix A.3 for further details on the k-Means algorithm.)

Cw(i) = ( Π_{p∈Ω} m_ip )^{1/|Ω|}    (4.4)

for i ∈ {1, ..., R}, where Cw is a vector containing the geometric mean of each word's weights
within the messages contained in a given cluster. Here, Ω is the set of documents in each cluster,
whose cardinality is |Ω|, and m_ip is given by equation 4.2. Finally, the most important words
for each cluster can be determined by ordering the weights of vector Cw. This procedure is based
on previous work described in [83], where the application was web mining and the clustering
algorithm used was Self-Organizing Feature Maps (SOFM) [42].
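The per-cluster keyword scoring of equation 4.4 can be sketched as follows; the input layout (a mapping from word to its weights over the cluster's documents) is an assumption for illustration.

```python
import math

def cluster_keywords(cluster_weights, top=3):
    """Score every word of a cluster by the geometric mean of its m_ip
    weights over the cluster's documents (equation 4.4) and return the
    top-scoring words. cluster_weights maps word -> [m_ip for p in Omega]."""
    Cw = {}
    for word, ws in cluster_weights.items():
        if all(w > 0 for w in ws):
            # |Omega|-th root of the product, computed in log space
            Cw[word] = math.exp(sum(math.log(w) for w in ws) / len(ws))
        else:
            Cw[word] = 0.0
    return sorted(Cw, key=Cw.get, reverse=True)[:top]
```

Computing the geometric mean in log space avoids overflow/underflow when a cluster contains many documents.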

4.2.2 Singular Value Decomposition

Using the tf-idf matrix [66], a singular value decomposition (SVD) of this matrix reduces the
dimensionality of the term-by-document space. SVD considers a new representation of the feature space,
where the underlying semantic relationship between terms and documents is revealed. Let matrix A
be an n × p tf-idf representation of documents and k an appropriate number for the dimensionality
reduction and term projection. Given U_k = (u_1, ..., u_k), an n × k matrix, the singular values matrix
D_k = diag(d_1, ..., d_k), where {d_i}_{i=1}^k are the singular values of A (the square roots of the
eigenvalues of AA^T), and V_k = (v_1, ..., v_k), a p × k matrix, then the rank-k SVD decomposition of A is
represented by

A_k = U_k D_k V_k^T    (4.5)


where the expression V_k D_k can be used as the final representation of the documents.
As described in [61], SVD preserves the relative distances in the VSM matrix, while projecting
it into a Semantic Space Model, which has a lower dimensionality. This allows keeping just the
minimum information needed to define an appropriate representation of the dataset.
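The truncated decomposition and the document projection V_k D_k can be sketched with NumPy's SVD routine:

```python
import numpy as np

def lsa_project(A, k):
    """Rank-k SVD of a terms-by-documents tf-idf matrix A; the rows of
    V_k D_k are the k-dimensional semantic representations of documents."""
    U, d, Vt = np.linalg.svd(A, full_matrices=False)
    Dk, Vk = np.diag(d[:k]), Vt[:k].T
    return Vk @ Dk                  # one k-dimensional row per document
```

When k equals the rank of A, the pairwise inner products between documents are preserved exactly, which is the distance-preservation property mentioned above.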

4.2.3 Probabilistic Topic Models

A topic model can be considered a probabilistic model that relates documents and words through
variables which represent the main topics inferred from the text itself. In this context, a document
can be considered a mixture of topics, represented by probability distributions which can generate
the words in a document given these topics. The inference of the latent variables, or topics,
is the key component of this model, whose main objective is to learn from text data the distribution
of the underlying topics in a given corpus of text documents.

One of the main such topic models is Latent Dirichlet Allocation (LDA) [14, 12,
40]. LDA is a model where the latent topics of documents are inferred from probability
distributions estimated over the training dataset. The key idea of LDA is that every topic is modeled as a
probability distribution over the set of words represented by the vocabulary, and every document
as a probability distribution over a set of topics. These distributions are sampled from multinomial
Dirichlet distributions.

In the following, let R define the vocabulary to be used: a word, considered the basic unit
of discrete data, is indexed by {1, ..., R}. A message is a sequence of S words
defined by W = (w_1, ..., w_S), where w_s represents the s-th word in the message. Finally, a corpus is
defined by a collection of Q messages denoted by C = (W_1, ..., W_Q).

As described by [14], the latent Dirichlet allocation model can be represented as a proba-
bilistic generative process described by the following sequence of events:

1. Choose the number of words S (S ∼ Poisson(ξ)), which represents the amount of words
in a given message.

2. Choose θ ∼ Dir(α).

3. For each of the S words w_s:

(a) Choose a topic z_s ∼ Multinomial(θ).

(b) Choose a word w_s from p(w_s | z_s, β), a multinomial probability conditioned on the topic
z_s.
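The generative process above can be sampled directly. A minimal sketch with NumPy, using the standard parameter names of [14] (ξ for the Poisson rate, α for the Dirichlet prior, β for the topic-word probabilities):

```python
import numpy as np

def generate_document(alpha, beta, xi=8.0, seed=0):
    """Sample one synthetic message from the LDA generative process:
    S ~ Poisson(xi), theta ~ Dir(alpha), then one topic and one word per
    position; beta is a (topics x vocabulary) matrix of word probabilities."""
    rng = np.random.default_rng(seed)
    S = max(1, rng.poisson(xi))          # number of words in the message
    theta = rng.dirichlet(alpha)         # document's topic mixture
    words = []
    for _ in range(S):
        z = rng.choice(len(alpha), p=theta)                 # z_s ~ Mult(theta)
        words.append(rng.choice(beta.shape[1], p=beta[z]))  # w_s ~ p(w|z,beta)
    return words
```

Fitting LDA inverts this process: the inference methods discussed below estimate θ and β from observed messages rather than sampling from them.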

For LDA, given the smoothing parameters β ∈ R_+^{T×R} and α ∈ R_+^T, and a joint distribution of
a topic mixture θ, the idea is to determine the probability distribution that generates, from a set of T
topics z, a message composed of a set of S words W,

p(θ, z, W | α, β) = p(θ | α) Π_{n=1}^S p(z_n | θ) p(w_n | z_n, β)    (4.6)

where p(z_n | θ) can be represented by the random variable θ_i, the i-th component such that topic
z_n is present in document i (z_n^i = 1). A final expression can be deduced by integrating
equation 4.6 over the random variable θ and summing over the topics z. Given this, the marginal
distribution of a message can be defined as follows:

p(W | α, β) = ∫ p(θ | α) ( Π_{n=1}^S Σ_{z_n} p(z_n | θ) p(w_n | z_n, β) ) dθ    (4.7)

The final goal of LDA is to estimate the previously stated distributions in order to build a generative
model for a given corpus of messages. Several methods have been developed for making inference
over these probability distributions, such as variational expectation-maximization [14], a variational
discrete approximation of equation 4.7 empirically used by [88], and a Gibbs sampling Markov
chain Monte Carlo model [38], which has been efficiently implemented and applied by [62].

4.3 Feature Selection

Information-theoretic measures have been widely used for feature selection [74, 31], where the
mutual information between each feature and the dependent variable indicates its relevance and
leads to the definition of the most information-rich features. As the following expression states, the
dependency between the i-th feature x_i and the target variable y is determined by their probability
densities p(x_i) and p(y) respectively.

I(i) = ∫_{x_i} ∫_y p(x_i, y) log [ p(x_i, y) / (p(x_i) p(y)) ] dx dy    (4.8)

In most text-mining applications, data is represented by discrete values. Given this,
continuous probability densities cannot be reliably estimated, so the probabilities can be approximated
by frequency tables and counting techniques. The following expression represents the information
gain for the i-th feature,

I(i) = Σ_{x_i} Σ_y P(x_i, y) log [ P(x_i, y) / (P(x_i) P(y)) ]    (4.9)


Using this operator, the optimal set of features can be determined by sorting the mutual
information values I(i) in decreasing order for all i. Afterwards, the final set is determined by
selecting those features which exceed a given threshold.
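The frequency-table estimate of equation 4.9 can be sketched directly over a feature column and the label vector:

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Discrete mutual information of equation 4.9, estimated from the
    joint frequency table of one feature column xs and the labels ys."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))           # joint counts
    px, py = Counter(xs), Counter(ys)    # marginal counts
    # (c/n) * log( (c/n) / ((px/n)*(py/n)) ) simplifies to the form below
    return sum((c / n) * math.log(c * n / (px[x] * py[y]))
               for (x, y), c in pxy.items())
```

Ranking features by this value and keeping those above a threshold implements the selection rule described above.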

4.4 Adversarial Types Extraction

After the feature selection algorithm, the Adversary's types t_i ∈ T are determined using k-Means
clustering (see Appendix A.3) over the whole collection of emails (phishing and ham). Therefore,
the number of clusters over the whole set of features, K_features, represents the total number of
types for the Adversary player, and for each type t_i there is an associated type of message
x_i, i ∈ {1, . . . , K_features}. For each message x, its type is determined by

t_i = arg min_i d(x, C_i)    (4.10)

where C_i is the centroid of cluster i, and the function d : R^A × R^A → R represents the distance
between two vectors of dimension A. The distance function used in this research is the cosine
distance similarity function (equation 4.3). Each type is directly related to the different strategies
(clusters of words or other phishing elements used) in each message proposed by the Adversary.
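The type assignment of equation 4.10 with the cosine distance of equation 4.3 can be sketched as:

```python
import numpy as np

def assign_type(x, centroids):
    """Assign message x to the type whose cluster centroid is closest
    under the cosine distance (equations 4.3 and 4.10)."""
    def cos_dist(a, b):
        return 1.0 - (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return int(np.argmin([cos_dist(x, c) for c in centroids]))
```

The same function serves both at training time (to label each corpus message with its type) and online (to type an incoming message before evaluating the game-theoretic constraint).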

The fact that the types are determined over the whole phishing and ham corpus is relevant to
the game modeling. In this way, it is possible to determine which messages are closest to a malicious
type or a regular type, directly affecting Nature's probabilities of choosing a given type, and hence
the beliefs calculated by the Classifier.

Chapter 5

Experimental Setup

The classification of phishing emails is a natural extension of text mining, where the most promising
classification algorithms are Support Vector Machines, naïve Bayes and Random Forest, among other
text categorization algorithms [69]. A considerable difficulty is the online setting associated with the
nature of the email inbox, where messages arrive from a non-deterministic distribution and from an
unbounded space of messages.

In this context, the following experimental settings are defined to provide the right
benchmark for the proposed feature extraction technique against previous results and batch
learning algorithms. A further objective of the experimental setting is to compare the accuracy and
effectiveness of different online classification algorithms and the proposed adversary-aware classi-
fier. On the one hand, batch learning algorithms are used to validate that the extracted features (or the Adver-
sary's strategies) characterize and offer enough information to separate phishing messages from
ham messages. On the other hand, online learning algorithms are used to benchmark the proposed
QRE-based and AAO-SVM algorithms, which are developed to be evaluated incrementally.

It is important to understand that the batch evaluation of the whole corpus is completely
independent from the online learning task. Here, the prior evaluation of features and classification
performance on the whole dataset serves to set up an information-rich evaluation environment, in
order to test the game-theoretic algorithms in the best possible way. This could be considered an
infringement of the online learning paradigm, in which there is initially no information about the
system and the classifier must evolve by its own means, but it is necessary to establish the evaluation
setup for this first approach to evaluating the game-theoretic algorithms.


5.1 Benchmark Algorithms for Batch Learning

Several text classification approaches have previously been used in email and message classification
[8, 30, 68, 69, 47], where the automation of both the learning and the classification process has been of
main interest for researchers. For this problem, several algorithms have been developed; those
most used for email message classification in batch supervised learning are Support Vector Machines
[67], associated with the regularized risk minimization of the statistical learning theory proposed
by Vapnik [80], and the naïve Bayes classification algorithm [25].

5.1.1 Support Vector Machines Classifier

Support Vector Machines (SVMs) are based on the Structural Risk Minimization (SRM) principle
[80, 16] from statistical learning theory. Vapnik stated that the fundamental problem when
developing a classification model does not concern the number of parameters to be estimated, but
rather the flexibility of the model, given by the VC-dimension, introduced by V. Vapnik and A.
Chervonenkis [15, 79, 80] to measure the learning capacity of the model. Thus, a classification model
should not be characterized by its number of parameters, but by its flexibility or capacity, which is
related to how complex the model is. The higher the VC-dimension, the more flexible the classifier.

The main idea of SVMs is to find the optimal hyperplane that separates objects belonging
to two classes in a feature space, maximizing the margin between these classes. The feature
space is considered to be a Hilbert space defined by a dot product, known as the kernel function,
$k(x, x') := (\Phi(x) \cdot \Phi(x'))$, where $\Phi$ is the mapping defined to translate an input vector into
the feature space. The objective of the SVM algorithm is to find the optimal hyperplane $w^T x + b$
defined by the following optimization problem,

$$\min_{w, \xi, b} \; \frac{1}{2} \sum_{i=1}^{A} w_i^2 + C \sum_{i=1}^{T} \xi_i \qquad (5.1)$$

$$\text{subject to } y_i \left( w^T x_i + b \right) \ge 1 - \xi_i \quad \forall i \in \{1, \dots, T\}$$

$$\xi_i \ge 0 \quad \forall i \in \{1, \dots, T\}$$

The objective function includes the training errors $\xi_i$ while obtaining the maximum-margin
hyperplane, adjusted by the parameter C. Its dual formulation is defined by the following expression,
known as the Wolfe dual formulation.


$$\max_{\alpha} \; \sum_{i=1}^{T} \alpha_i - \frac{1}{2} \sum_{i,j=1}^{T} \alpha_i \alpha_j y_i y_j k(x_i, x_j) \qquad (5.2)$$

$$\text{subject to } \alpha_i \ge 0 \quad \forall i \in \{1, \dots, T\}$$

$$\sum_{i=1}^{T} \alpha_i y_i = 0$$

Finally, after determining the optimal parameters $\alpha$, for a given message $x_\tau$ the continuous
output is represented by

$$g(x_\tau) = \sum_{i=1}^{T} \alpha_i y_i k(x_i, x_\tau) + b \qquad (5.3)$$

The final classification for element $x_\tau$ is given by $f(x_\tau) = \mathrm{sign}(g(x_\tau))$.
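The relation between the continuous output of equation 5.3 and the final sign decision can be checked with any SVM library; the following sketch uses scikit-learn's `SVC` on toy data, an assumption for illustration only (the thesis experiments used the libSVM library and Weka):

```python
import numpy as np
from sklearn.svm import SVC

# Toy two-class data, standing in for the phishing/ham feature vectors.
X = np.array([[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 1.1]])
y = np.array([-1, -1, 1, 1])

clf = SVC(kernel="rbf", C=1.0)   # C trades off margin width against training errors (eq. 5.1)
clf.fit(X, y)

g = clf.decision_function(X)     # continuous output g(x) of equation 5.3
f = np.sign(g)                   # final label f(x) = sign(g(x))
```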

5.1.2 Naïve Bayes Classifier

The naïve Bayes classifier has been used intensively for email message filtering [69]. Its main
objective is to determine the probability that an object described by the feature vector $x_\tau$ has a given
label $y_\tau \in \{+1, -1\}$. Using Bayes' theorem, and considering the features present in $x_\tau$ to be
independent of each other, this probability can be calculated by the following expression,

$$P(y_\tau \mid x_\tau) = \frac{P(y_\tau)\, P(x_\tau \mid y_\tau)}{P(x_\tau)} = \frac{P(y_\tau)}{P(x_\tau)} \prod_{j=1}^{A} P(x_{\tau,j} \mid y_\tau) \qquad (5.4)$$

It is easy to prove that, considering the conditional probabilities over the labels, the previous
equation can be extended to the following expression,

$$\ln \frac{P(y_\tau = +1 \mid x_\tau)}{P(y_\tau = -1 \mid x_\tau)} = \ln \frac{P(y_\tau = +1)}{P(y_\tau = -1)} + \sum_{j=1}^{A} \ln \frac{P(x_{\tau,j} \mid y_\tau = +1)}{P(x_{\tau,j} \mid y_\tau = -1)} \qquad (5.5)$$


The final decision is given by the following cases, subject to a given threshold $\theta$.

$$f(x_\tau) = \begin{cases} +1 & \text{if } \ln \frac{P(y_\tau = +1 \mid x_\tau)}{P(y_\tau = -1 \mid x_\tau)} > \theta \\ -1 & \text{otherwise} \end{cases} \qquad (5.6)$$

It is important to notice that the previous expression can be maintained by continuous
updates of the probabilities of the labels $y_\tau$ given the features $x_\tau$, computed over the frequencies
of the features present in the incoming stream of messages. The threshold $\theta$ can be determined
by a cost-sensitive analysis of the classification task [25], minimizing the expected cost.
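Equations 5.5 and 5.6 can be sketched with smoothed frequency counts. This is a simplified presence-only variant with Laplace smoothing and toy counts, all assumptions of the sketch rather than the Weka implementation used in the experiments:

```python
import numpy as np

def nb_log_odds(x, pos_counts, neg_counts, n_pos, n_neg, alpha=1.0):
    # ln P(y=+1|x)/P(y=-1|x) as in equation 5.5, with Laplace smoothing alpha.
    p_pos = (pos_counts + alpha) / (n_pos + 2 * alpha)
    p_neg = (neg_counts + alpha) / (n_neg + 2 * alpha)
    present = x.astype(bool)          # only features present in the message contribute
    prior = np.log(n_pos / n_neg)
    return prior + np.sum(np.log(p_pos[present]) - np.log(p_neg[present]))

def nb_classify(x, pos_counts, neg_counts, n_pos, n_neg, theta=0.0):
    # Decision rule of equation 5.6: +1 if the log-odds exceed the threshold theta.
    return 1 if nb_log_odds(x, pos_counts, neg_counts, n_pos, n_neg) > theta else -1

# Toy per-feature document counts: feature 0 typical of phishing, feature 1 of ham.
pos_counts = np.array([8.0, 1.0])
neg_counts = np.array([1.0, 8.0])
```

Updating `pos_counts`, `neg_counts`, `n_pos`, and `n_neg` as messages stream in gives exactly the continuous-update behaviour described above.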

5.1.3 Logistic Regression

The idea of this classification algorithm is to determine, for a given object $x_\tau$, the posterior
probabilities of the labels $y_\tau$. For this, logistic regression estimates an intercept $\beta_0$ and a weight
vector $\beta$ for a linear regression over the set of features, which is mapped into the interval [0, 1] so
that its output has the properties of a probability distribution. In general, the model has the form

$$P(y_\tau \mid x_\tau) = \frac{\exp(\beta_0 + \beta^T x_\tau)}{1 + \exp(\beta_0 + \beta^T x_\tau)} \qquad (5.7)$$

The model parameters are determined either by maximizing the conditional log-likelihood on
the training set, $\sum_{\tau=1}^{T} \log P(y_\tau \mid x_\tau; \beta_0, \beta)$, or by minimizing the class loss over the training set.
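Equation 5.7 and the conditional log-likelihood can be written out directly; `beta0` and `beta` below stand for the intercept and weight vector, a notational choice of this sketch since the original symbols did not survive extraction:

```python
import numpy as np

def logistic_posterior(x, beta0, beta):
    # P(y = +1 | x) under the logistic model of equation 5.7.
    z = beta0 + np.dot(beta, x)
    return np.exp(z) / (1.0 + np.exp(z))

def conditional_log_likelihood(X, y, beta0, beta):
    # Sum over the training set of log P(y_t | x_t); maximizing this fits the model.
    ll = 0.0
    for x_t, y_t in zip(X, y):
        p = logistic_posterior(x_t, beta0, beta)
        ll += np.log(p if y_t == 1 else 1.0 - p)
    return ll
```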

5.2 Benchmark Algorithms for Online Learning

Recalling the learning problem described in section 3.1.1, let us consider a message arriving at
time $\tau$, represented by the feature vector $x_\tau = (x_{\tau,1}, \dots, x_{\tau,i}, \dots, x_{\tau,A})$, where $x_{\tau,i}$ is the $i$-th feature
of message $x_\tau$. Each message can belong to one of two classes: positive (or malicious) messages, and
negative (or regular) messages. Perceptron-like algorithms are the main alternative for the previously
stated learning settings.

For these types of learning algorithms, the main idea is to update the classifier as errors appear
in the evaluation step. If the classification task is considered as an error minimization
problem, the main difference between perceptron-like algorithms is directly related to the way in
which they update the classifier's optimal parameters. In the scope of this research, a classic
perceptron-like algorithm [65] and the Relaxed Online SVM [68] are used as benchmark algorithms in the
online learning setting.


5.2.1 Relaxed Online Support Vector Machines

Previous work developed in [21, 68] is used as a benchmark algorithm in this thesis. The Relaxed
Online Support Vector Machine (RO-SVM) model was originally proposed for spam filtering,
a task not far from phishing classification. This work was inspired by previous work on incremental
algorithms [? ], where some of the main characteristics of incremental classifiers are discussed.

Algorithm 5.2.1: Relaxed Online Support Vector Machines

Data: T, C, m, p
Result: weights w = (w_1, ..., w_A)
Initialize w := 0, b := 0, S := {};
foreach (x_τ, y_τ) ∈ T do
    h := sign(w · x_τ + b);
    if y_τ · h < m then
        find w' and b' using SMO (A.4.1) on S, using w and b as seed hypothesis;
        set w := w';
        set b := b';
        if |S| > p then
            remove the oldest example from S;
        S.add(x_τ, y_τ);
return w;

As described in Algorithm 5.2.1, the main idea remains in the same context as perceptron-like
algorithms, where the re-evaluation step and the update of the classifier are performed only when a
misclassification occurs.
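The control flow of Algorithm 5.2.1 can be sketched as below. As a simplifying assumption, the SMO re-optimization over the pool S is replaced by perceptron-style gradient steps on the pooled examples, so this illustrates only the update-on-margin-error logic, not Sculley and Wachman's actual solver:

```python
import numpy as np
from collections import deque

def relaxed_online_update(stream, dim, m=1.0, p=100, lr=0.1):
    w, b = np.zeros(dim), 0.0
    S = deque(maxlen=p)                       # pool of recent examples; oldest dropped at size p
    for x, y in stream:
        if y * (np.dot(w, x) + b) < m:        # margin violation: store and re-train on S
            S.append((x, y))
            for xs, ys in S:                  # stand-in for SMO seeded with (w, b)
                if ys * (np.dot(w, xs) + b) < m:
                    w += lr * ys * xs
                    b += lr * ys
    return w, b

# Toy separable stream standing in for the message feature vectors.
stream = [(np.array([2.0, 0.0]), 1), (np.array([-2.0, 0.0]), -1)] * 50
w, b = relaxed_online_update(stream, dim=2)
```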

5.3 Experimental Setup for Feature Extraction and Selection

A 10 times 10 cross-validation learning schema was developed using the benchmark machine learning
algorithms on the complete database characterized with different sets of features. The learning
algorithms were implemented using open source machine learning tools: the SVMs were
constructed using the libSVM library [17], while the naïve Bayes model and the logistic
regression were implemented in the Weka open source data mining tool [86].

For each benchmark algorithm, different feature sets were evaluated using the F-Measure
performance criterion. The list of training/test sets used is the following:

1. Structural features represented by the feature set (see section 4.1.1)


2. SVD features represented by the feature set (see section 4.2.2)

3. Content-topic features represented by the feature set (see section 4.2.3)

4. Keywords features represented by the feature set (see section 4.2.1)

5. SVD, content-topic and Keyword features intersected, represented by the feature set F =
( ) ( ). (see equation 4.1)

6. All features, considering F = F (see equation 4.1)

7. All features F preprocessed by the mutual information feature selection algorithm (see sec-
tion 4.3)

All these experimental sets were evaluated with SVM, naïve Bayes, and logistic regression.
Additionally, the results for SVMs were compared with those obtained by [8] and [30], whose
experiments were evaluated over the same phishing and ham corpus considered in this research.

Finally, all machine learning benchmark algorithms were evaluated over the set defined by
the feature selection procedure applied to the F set. Here, all performance evaluation criteria were
computed in order to compare the classification performance against the results obtained in [8].

5.4 Types Extraction Experimental Setup

In terms of the game-theoretic characterization, types for the Adversary agent must be extracted.
Using the phishing features extracted and then selected by mutual information, as previously
mentioned in section 4.4, the k-means clustering algorithm is proposed to group the complete corpus
of phishing and ham messages.

The experimental setup for this step is determined by evaluating the Davies-Bouldin
index for different numbers of clusters, which states whether a given segmentation parameter k yields
a stable relation between the distances among centroids and the distance from each object to its
closest centroid (see appendix A.3 for further details).
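The k-selection procedure can be sketched as follows; scikit-learn's `KMeans` and `davies_bouldin_score` are stand-ins assumed for illustration (a lower index indicates tighter, better-separated clusters), and the toy blob data replaces the real message vectors:

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import davies_bouldin_score

def best_k_by_davies_bouldin(X, k_values):
    # Fit k-means for each candidate k and keep the k minimizing the Davies-Bouldin index.
    scores = {}
    for k in k_values:
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
        scores[k] = davies_bouldin_score(X, labels)
    return min(scores, key=scores.get), scores

# Toy data with three well-separated groups, standing in for the message vectors.
X, _ = make_blobs(n_samples=150, centers=3, cluster_std=0.5, random_state=0)
best_k, scores = best_k_by_davies_bouldin(X, range(2, 7))
```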

5.5 Online Learning Classifiers Experimental Setup

For the online setting, the Relaxed Online SVM (RO-SVM) proposed by Sculley and Wachman in
[68] was used; in addition, an incremental evaluation of naïve Bayes and the perceptron algorithm [65]
were considered as benchmark techniques for the proposed game-theoretic classification models
(S-QRE, I-QRE, QRE-P, and AAO-SVM). The algorithms' performance was evaluated every 500
observations, where each classifier's performance regarding all evaluation criteria was calculated
over all the observed messages.

For the implementation of the proposed game-theoretic models, the approximation of the
sequential equilibria was computed using the Gambit command-line tool [52] (gambit-logit);
the S-QRE, I-QRE, and QRE-P were implemented in the Perl programming language, and the
AAO-SVM classifier was implemented in C++, extending D. Sculley's Online SVM implementation
[68]. For the benchmark online learning classifiers, the RO-SVM was deployed from its original
implementation [68], and the incremental naïve Bayes classifier was implemented in the Weka open
source data mining tool [86] using the NaiveBayesUpdateable version of the algorithm.

Furthermore, to test the database drift concept in terms of Adversarial changes in the
incoming email messages, batch learning algorithms were trained on a small subset of the corpus
(30%), and then tested over the remaining part of the data (70%). This evaluation was executed
with the same instance order used for the online algorithms. Likewise, the evaluation of the
Error (1 − Accuracy) was done every 500 observations. The implementations used in this case were
the same as those considered for the batch learning process in the feature extraction and selection
evaluation step.
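The drift evaluation just described (train on the first 30% of the stream, replay the remaining 70% in the same order, record the accumulated error every 500 observations) can be sketched as follows; the synthetic stream and the scikit-learn classifier are assumptions of the sketch:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def accumulated_error_curve(clf, X, y, train_frac=0.3, step=500):
    split = int(len(X) * train_frac)
    clf.fit(X[:split], y[:split])             # batch training on the small initial subset
    errors, curve = 0, []
    for i, (x, y_true) in enumerate(zip(X[split:], y[split:]), start=1):
        errors += int(clf.predict(x.reshape(1, -1))[0] != y_true)
        if i % step == 0:
            curve.append(errors / i)          # Error = 1 - Accuracy over messages seen so far
    return curve

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 2))
y = (X[:, 0] > 0).astype(int) * 2 - 1         # labels in {-1, +1}
curve = accumulated_error_curve(LogisticRegression(), X, y)
```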

5.6 Evaluation Criteria

As presented in Table 5.1, the resulting confusion matrix of this binary classification task can be
described using four possible outcomes: correctly classified phishing messages or True Positives
(TP), correctly classified ham messages or True Negatives (TN), ham messages wrongly classified as
phishing or False Positives (FP), and phishing messages wrongly classified as ham or False Negatives
(FN).

Table 5.1: Confusion Matrix for binary classification problems.

              y = +1    y = -1
  C(x) = +1     TP        FP
  C(x) = -1     FN        TN

The evaluation criteria considered are common machine learning measures [7], constructed
from the classification outcomes mentioned above.

The False Positive Rate (FP-Rate) and the False Negative Rate (FN-Rate) are the proportions
of wrongly classified ham and phishing email messages, respectively.

$$\text{FP-Rate} = \frac{FP}{FP + TN} \qquad (5.8)$$


$$\text{FN-Rate} = \frac{FN}{FN + TP} \qquad (5.9)$$

Precision, which states the degree to which messages identified as phishing are indeed malicious.

$$\text{Precision} = \frac{TP}{TP + FP} \qquad (5.10)$$

Recall, which states the percentage of phishing messages that the classifier manages to classify
correctly. It can be interpreted as the classifier's effectiveness.

$$\text{Recall} = \frac{TP}{TP + FN} \qquad (5.11)$$

F-measure, the harmonic mean between precision and recall.

$$\text{F-measure} = \frac{2 \cdot \text{Precision} \cdot \text{Recall}}{\text{Precision} + \text{Recall}} \qquad (5.12)$$

Accuracy, the overall percentage of correctly classified email messages.

$$\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN} \qquad (5.13)$$

Accumulated Error, the overall percentage of incorrectly classified email messages at time $\tau$.

$$\text{Accumulated Error}(\tau) = 1 - \frac{TP(\tau) + TN(\tau)}{TP(\tau) + TN(\tau) + FP(\tau) + FN(\tau)} \qquad (5.14)$$
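All criteria of equations 5.8-5.13 follow directly from the four confusion-matrix counts; the helper below is a hypothetical sketch, using the standard FN-rate denominator FN + TP:

```python
def classification_metrics(tp, tn, fp, fn):
    # Evaluation criteria of equations 5.8-5.13 from the confusion-matrix counts.
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return {
        "fp_rate": fp / (fp + tn),
        "fn_rate": fn / (fn + tp),
        "precision": precision,
        "recall": recall,
        "f_measure": 2 * precision * recall / (precision + recall),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
    }

metrics = classification_metrics(tp=90, tn=95, fp=5, fn=10)
```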

5.7 Sensitivity Analysis and Robust Testing

To evaluate the parameters of the proposed classification algorithms, performance measures were
obtained under different scenarios for their parameters. The S-QRE, I-QRE, and QRE-Perceptron
algorithms were evaluated over the set of values {1, 100, 1000, 10000} for the cost parameter directly
associated with their utility function modeling. The $U_A$ and $U_C$ baseline utilities were not considered
in the classifiers' evaluation. It is important to highlight that the economic interpretation of these
values is associated with how difficult it is for the Adversary to change from one type of message
into another, by activating and deactivating the features used in the email message. Analysing
equation 3.5, it is easy to see that a value of 1 establishes in the utility function that the cost to
change the type of message is very low, while a value of 10000 makes it extremely high.

For the I-QRE and the AAO-SVM classification algorithms, different incremental scenarios
were evaluated. Here, the set of values proposed for the incremental step is defined as
Gp = {50, 500, 1000}. It is relevant to recall that, given their structure, the QRE-P and I-QRE
classification algorithms do not require sensitivity testing over any further parameters.

Finally, for the sensitivity analysis of the AAO-SVM classifier's utility functions, five different
scenarios of variations on its parameters were proposed, as shown in Table A.3.

The first scenario corresponds to the case where the parameters are equally distributed over the
cost matrix of the classification problem: both the ratio of malicious to regular miss-classification
costs and the ratio of regular to malicious correct-classification payoffs are equal to 1.

The second scenario is based on the same evaluation proposed for Figure A.2, where the
miss-classification cost ratio is 4 and the correct-classification payoff ratio is 2, which states that
miss-classifying, as well as correctly classifying, regular messages returns the best payoffs.

The third scenario is based on the idea that miss-classifying, as well as correctly classifying,
malicious messages returns the best payoffs; this is achieved with ratios of 1/4 and 1/2, respectively.

The fourth scenario reflects the fact that miss-classifying ham messages is extremely costly in
comparison with miss-classifying a phishing message (a cost ratio of 100), while correctly classifying
phishing messages reports by far the highest benefits (a payoff ratio of 1/100).

The fifth case is the opposite of the latter: correctly classifying ham messages is better
rewarded, and miss-classifying phishing messages is much more costly than miss-classifying ham
messages, represented by a cost ratio of 1/100 and a payoff ratio of 100.

The range of the proposed values was determined by preliminary sensitivity testing over
the utility function and the classification performance of the proposed algorithms. In this case,
small variations on these parameters (100) showed better performance than large variations
(1000), for which reason small ranges for the parameters were selected in this work.

Chapter 6

Results and Analysis

In this chapter, results for both feature extraction and classification algorithms are presented.
Firstly, all feature extraction and selection results are discussed. Secondly, experimental results
for text-based game theoretic parameters used in the proposed algorithms are presented. Thirdly,
proposed online algorithms are evaluated and compared against benchmark algorithms for online
learning and incremental drift evaluations of learning classifiers. Finally, robust testing and sensi-
tivity analysis evaluation setup for the proposed classification algorithms is presented.

Unlike previous chapters, the feature extraction and selection experimental results are presented
ahead of the classification algorithms' results. This follows the traditional sequence of events in the
KDD process, where data mining techniques are evaluated after the definition and selection of the
data to be used.

6.1 Feature Extraction and Selection Results

The experimental setting for evaluating the different feature extraction techniques and their
respective results is presented. In the first place, the evaluation of all topic-based and keyword
features for different classification algorithms is presented, followed by a brief social engineering
analysis of the extracted features. Then, the feature selection procedures were evaluated and their
experimental results are presented. Finally, the evaluation of batch learning algorithms over different
sets of extracted features is presented and discussed for all benchmark algorithms.


6.1.1 Content-based and Topics Model Features

Following the content-based feature extraction methodology described in section 4.2, a large set of
features that define malicious messages was obtained. Next, all results associated with this step of
the process are presented. First, content-based keyword finding results are discussed, followed by
topic modeling results, and finally a social engineering analysis of these results.

Content-based Keyword Features

As described in section 4.2.1, one of the main difficulties is to determine the number of clusters
into which the overall set of words will be grouped. Recalling this method, the proposed clustering
strategy is k-means, where k (the number of clusters) is a parameter that must be determined
a priori. Afterwards, once the number of clusters is defined, the number of relevant features per
cluster is determined by the geometric mean of each word's weights within the messages contained
in a given cluster (see equation 4.4). In this work, the performance of the selected number of clusters
was determined by evaluating the F-measure for different classification algorithms.

In the following, the evaluation of three machine learning algorithms (SVM, naïve Bayes, and
logistic regression) for $k \in [2, 20]$ is presented. At first, only for evaluation purposes, the number
of words, now considered as features, is fixed to 500. Thus, for even numbers of clusters the first
500/k words of each cluster were selected, and for odd numbers the first $\lfloor 500/k \rfloor$ words of each
cluster were selected. To illustrate the procedure, for 2 clusters (k = 2), the 250 most relevant words
of each cluster were selected. Then, for 3 clusters (k = 3), the first 166 words were selected. And
so on, until for 20 clusters (k = 20), the 25 most relevant words of each cluster were selected as the
final set of features.

As shown in Figure 6.1, the F-measure is maximal for each of the benchmark classification
algorithms when k = 13, except for naïve Bayes with k = 14. Then, the 30 most relevant words
of each cluster were selected as features (a total of 390 features, which represents the corresponding
feature set). The first five relevant words of each cluster are presented in Table 6.1. For the complete
list of content-based keyword features, see Appendix B.1.
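The per-cluster keyword ranking described above can be sketched as follows. The toy tf-idf matrix, the vocabulary, and the epsilon guard against zero weights are all assumptions of this sketch of equation 4.4:

```python
import numpy as np

def top_words_per_cluster(tfidf, labels, vocab, k, n_words):
    # Rank the words of each cluster by the geometric mean of their tf-idf
    # weights over the cluster's messages, and keep the top n_words.
    eps = 1e-12                                   # keeps zero weights from collapsing the product
    selected = {}
    for c in range(k):
        rows = tfidf[labels == c]
        gmean = np.exp(np.mean(np.log(rows + eps), axis=0))
        top = np.argsort(gmean)[::-1][:n_words]
        selected[c] = [vocab[j] for j in top]
    return selected

vocab = ["account", "secur", "free", "click"]
tfidf = np.array([[0.9, 0.8, 0.0, 0.1],
                  [0.8, 0.9, 0.1, 0.0],
                  [0.0, 0.1, 0.9, 0.8],
                  [0.1, 0.0, 0.8, 0.9]])
labels = np.array([0, 0, 1, 1])                   # toy k-means assignment
selected = top_words_per_cluster(tfidf, labels, vocab, k=2, n_words=2)
```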

Topic Model Features

As described in section 4.2.3, the features determined by Latent Dirichlet Allocation (LDA) lead to
a latent semantic enhancement of the words to be used as features. LDA is a topic-modeling
technique, based on a Bayesian network representation of the interaction between words, topics, and
documents. Here, the main objective is to determine whether a given word fits a given topic,
where the number of topics must be handled as an a-priori parameter of this method. As with the
content-based keywords, the number of topics was determined by using benchmark machine learning

Figure 6.1: F-Measure evaluated over the benchmark algorithms for different number of clusters
(k [2, 20]) in the keyword finding algorithm.

Table 6.1: Five most relevant stemmed words for each of the 13 clusters extracted with the content-
based keyword algorithm over the phishing corpus.
Cluster Word 1 Word 2 Word 3 Word 4 Word 5
1 credit password understand inconveni safeti
2 follow bill communiti violat sell
3 ebay secur bank access confirm
4 payment error info sensit internet
5 vector image fromfeatur doubl subform
6 paypal account inform updat protect
7 signin offer marketplac trade purchas
8 amazon never maintain keep cash
9 googl polici help agreement question
10 login respons yahoo attempt believ
11 mobil solut signup wireless debt
12 use card repli review verif
13 union nation answer googl barclay

algorithms, by evaluating their classification performance (Accuracy) for feature sets built using the
30 most representative words of 5, 10, 15, 20, 25 topics.

As shown in Figure 6.2, the F-measure evaluated using the benchmark algorithms applied over
the feature set increases as the number of topics grows. An analytical and qualitative review


Figure 6.2: F-measure evaluated over the benchmark machine learning algorithms for increasing
number of topics in the feature set defined by LDA model.

over the set of topics was developed. Topic models with over 30 topics were not clearly associated
with email fraud and social engineering attacks in the phishing email context. Given this, a maximum
of 25 topics was considered. Finally, the content-topic feature set (see section 5.3) is determined
by the 30 most relevant words for each of the 25 extracted topics (a total of 750 features). In Table 6.2,
the 10 most relevant words selected for topics 1, 2, 5, 15, 20, and 25 are presented. For the complete
list of LDA features, separated into their respective topics, see Appendix B.2.

Table 6.2: Ten most relevant stemmed words for six topics extracted by using the LDA topic-model
over the phishing corpus.
Topic 1 Topic 2 Topic 5 Topic 15 Topic 20 Topic 25
paypal account account grupo bank click
account messag fraudul imagen account visa
secur suspend bank cuenta bankofamerica card
password inform thank para america receiv
protect termin suspend click wellsfargo free
inform warn fraud cliente well credit
verifi legal login googl fargo usernam
click agreement secur bancaria barclay success
access liabil notif nuestro huntington want
assist resolv regard dato client wish


Social Engineering Analysis From Extracted Features

Amongst the 13 clusters presented in section 6.1.1, it can be concluded from Table 6.1 that clusters
1 and 12 are directly related to credit card fraud messages, and clusters 2, 4, and 7 are associated with
e-commerce fraud messages. Clusters 3 and 6 are related to eBay and PayPal fraud messages
respectively, a widely used strategy in the phishing community. Furthermore, cluster 5 is associated
with HTML and JavaScript code, commonly used in fraud messages, and clusters 8, 9, and 10 are
related to Amazon, Google and Yahoo account fraud techniques, respectively. Finally, cluster 11
represents mobile device and wireless connection types of fraud, and cluster 13 is related to a
specific bank account type of phishing fraud.

Table 6.2 presents the 10 most relevant words for six arbitrarily chosen topics. Here, topic 1
can be associated with PayPal fraud phishing messages, where confidential information about the
user is demanded by directing the user to a fraudulent link. Then, topic 2 represents a general
account message warning about the suspension of a given service, and topic 5 represents a specific
bank account. Topic 15 groups Spanish words associated with clients' bank accounts. Topic 20 lists
several banks that could be used in a fraud message. Finally, topic 25 is associated with credit card
fraud messages, widely used in phishing strategies. The rest of the topics are determined by
combinations of these strategies and those determined by the keyword extraction algorithm. For
more detail on which terms define each topic, refer to Table B.2.

From the previously stated feature extraction methodologies, the following social engineering
strategies are proposed. These strategies are directly associated with a set of strategy profiles
determined by the usage of a certain set of words in a message, represented in their respective feature
space. However, the phishing corpus presents small variations and combinations of each of these
main strategies.

1. Security compromise of an account, presented when an adversary claims maliciously that a
user's account may be hacked or compromised.

(a) Words related to a user account.
(b) A corporation name.
(c) Threat of denial of service (DoS) or suspension of the user's account. This may appear
as "We will suspend your account until you verify your information."
(d) Any type of request for information, like asking a user to follow a link or to reply with
their account information.

2. Business Opportunity, presented as a monetary offer sent to the user.

(a) Congratulatory language.
(b) Money language, dollar amounts, the word "free".
(c) Corporation name and request for information, as described in Security compromise of
an account.


3. Change/Update to Account, presented when an account verification is requested due to
a given change, refreshment or modification.

(a) Words stating that an account is being updated or changed somehow.
(b) Corporation name and request for information, as described in Security compromise of
an account.

4. General Opportunity, similar to Business Opportunity, but not restricted to money. It could
be associated with stock option tips, vacations or credit card offers.

(a) Words related to credit cards.
(b) Congratulatory language, corporation names, and request for information, as in the
previous social engineering tactics.

6.1.2 Feature Selection and Dimensionality Reduction

The overall set of features to be considered for the strategy profiles (content-based keywords and
topic models) was presented in the previous section. However, as it presents a high dimensionality
(1113 features), a two-step dimensionality reduction strategy was evaluated. In the first step, a
Singular Value Decomposition (SVD) (described in section 4.2.2) of the complete tf-idf representation
of the corpus is evaluated. Then, the extracted feature set and the SVD results are intersected
to filter out those words that do not carry further semantic relevance. Finally, a weight-based
feature selection step was performed, where the mutual information of each word was evaluated as
the filtering criterion.

Consider the feature set represented by the set of words in the outcome of the SVD matrix
decomposition algorithm. In this research, using Matlab 7.8.0 (R2009a), the rank of the matrix
M was determined as 1780 (less than the total number of words, R = 25205, and documents, Q = 4450
messages), which theoretically represents the total number of semantically relevant features for
the evaluated phishing corpus. The intersection of the extracted feature sets with the SVD set, a
dimensionality reduction strategy that keeps only the semantically relevant words in the final feature
set, yielded 1002 features. Then, by adding all structural features, the number of features in the
final set (F) to be evaluated by the mutual information procedure was 1017.

The weight-based feature selection by mutual information is based on the information-theoretic
criterion measured between a given variable and the target label. After all features
are evaluated, those that comply with a given threshold are selected.
In this research, a total of 532 features were selected as the final set of features to
characterize both phishing and ham messages. Mutual information was calculated using the
weka.attributeSelection.InfoGainAttributeEval module implemented in the Weka open source data
mining tool [86]. In this case, the threshold used to discard features was an information gain of
0.005. With this, about 50% of the feature set was filtered, reducing significantly the feature space
and thus the complexity of the classification problem.
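The threshold-based filter can be sketched as follows; scikit-learn's `mutual_info_classif` is used here as a stand-in for Weka's InfoGainAttributeEval, and the synthetic data is an assumption of the sketch:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def select_by_mutual_information(X, y, threshold=0.005):
    # Keep only the feature columns whose mutual information with the label
    # exceeds the threshold (the weight-based filter described above).
    mi = mutual_info_classif(X, y, random_state=0)
    keep = np.where(mi > threshold)[0]
    return X[:, keep], keep

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 400)
informative = y + 0.05 * rng.normal(size=400)     # nearly determines the label
noise = rng.normal(size=400)                      # carries no label information
X = np.column_stack([informative, noise])
X_sel, keep = select_by_mutual_information(X, y, threshold=0.05)
```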


6.1.3 Feature Sets Evaluation

Recalling section 5.3, the different benchmark algorithms were evaluated over different feature sets,
using the F-measure performance criterion. The list of evaluated feature sets is the following:

1. Structural features represented by the feature set (see section 4.1.1)


2. SVD features represented by the feature set (see section 4.2.2)
3. Content-topic features represented by the feature set (see section 4.2.3)
4. Keywords features represented by the feature set (see section 4.2.1)
5. SVD, content-topic and Keyword features intersected, represented by the feature set F =
( ) ( ). (see equation 4.1)
6. All features, considering F = F (see equation 4.1)
7. All features F preprocessed by the mutual information feature selection algorithm (see sec-
tion 4.3)

Figure 6.3: F-Measure evaluation criteria for SVMs algorithm using different feature set strategies.

In Figure 6.3, the F-measure evaluation of the SVM algorithm over the previous feature sets
is presented. According to this, the Feature Selection set (feature set F evaluated with the
weight-based feature selection) outperformed all other feature sets considered, including the results
reported by Bergholz et al. in [8] and Fette et al. in [30], with F-measures of 99.46% and 97.64%,
respectively.

Furthermore, the topic-model features (LDA Features) were superior by 3.80% in the overall
evaluation of the F-measure, in comparison with the next best set of features, Keywords Features

Table 6.3: Experimental results for the benchmark batch machine learning algorithms.

  Model                  Accuracy  FP-Rate  FN-Rate  Precision  Recall  F-measure
  Bergholz's SVM [8]     99.52%    1.11%    0.07%    99.89%     99.89%  99.89%
  Logistic Regression    98.83%    0.71%    1.55%    98.45%     99.41%  98.93%
  Naïve Bayes            98.69%    0.59%    1.89%    98.11%     99.53%  98.81%
  SVM                    99.54%    0.28%    0.61%    99.39%     99.77%  99.58%

( set). However, all features combined, Feature Selection (F set), reported an improvement of
3.13% in the overall evaluation of this performance measure.

As presented in Figure 6.4, all benchmark machine learning algorithms indicate that feature selection over the feature set F is the best experimental setup, as all three algorithms achieved their maximum F-measure values there. It is important to notice that, amongst all individually evaluated feature sets, the best performance was obtained with the topic-model feature set.

Figure 6.4: F-Measure for all benchmark machine learning algorithms evaluated over the different
feature sets.

Likewise, Figure 6.4 shows that the feature selection algorithm is relevant for improving the performance measures. For the SVM algorithm, the F-measure increases from 99.01% to 99.58%; for naïve Bayes it increases from 97.23% to 98.81%; and for logistic regression it rises from 98.22% to 98.93%. These results are far from the initial evaluation on the simplest feature set (the structural features), where the reported F-measures for SVMs, naïve Bayes, and logistic regression are 88.24%, 84.12%, and 86.30%, respectively.
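The mutual information feature selection step referenced here (section 4.3) ranks features by their statistical dependence with the class label. A minimal sketch for binary features, with function names of our own (the thesis's actual implementation was in Perl):

```python
from math import log2

def mutual_information(xs, ys):
    """Empirical mutual information I(X;Y) in bits between a binary
    feature column xs and binary labels ys (1 = phishing)."""
    n = len(xs)
    mi = 0.0
    for x in (0, 1):
        px = sum(1 for v in xs if v == x) / n
        for y in (0, 1):
            py = sum(1 for v in ys if v == y) / n
            pxy = sum(1 for u, v in zip(xs, ys) if u == x and v == y) / n
            if pxy > 0:
                mi += pxy * log2(pxy / (px * py))
    return mi

def select_top_features(columns, ys, k):
    """Rank feature columns by mutual information with the label
    and keep the indices of the k highest-scoring ones."""
    scored = sorted(range(len(columns)),
                    key=lambda j: mutual_information(columns[j], ys),
                    reverse=True)
    return scored[:k]
```

A feature identical to the label scores 1 bit; a feature independent of the label scores 0, so the ranking discards it first.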

Table 6.3 presents the evaluation of the dataset with all features F, preprocessed by the mutual information feature selection algorithm. In this case, SVMs (99.54%) showed slightly better accuracy than that obtained by Bergholz et al. in [8] for the same phishing and spam corpus (99.52%). In terms of precision, recall, and F-measure, the SVM algorithm proved highly competitive against [8], achieving 99.39%, 99.77%, and 99.58%, respectively.
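The performance measures reported in Tables 6.3 and 6.5 follow the standard confusion-matrix definitions; a minimal sketch (the function name and example counts are ours, with phishing as the positive class):

```python
def binary_metrics(tp, fp, tn, fn):
    """Compute the performance measures used in Tables 6.3 and 6.5
    from confusion-matrix counts (positive class = phishing)."""
    accuracy  = (tp + tn) / (tp + fp + tn + fn)
    fp_rate   = fp / (fp + tn)          # ham wrongly flagged as phishing
    fn_rate   = fn / (fn + tp)          # phishing that slipped through
    precision = tp / (tp + fp)
    recall    = tp / (tp + fn)
    f_measure = 2 * precision * recall / (precision + recall)
    return {"accuracy": accuracy, "fp_rate": fp_rate, "fn_rate": fn_rate,
            "precision": precision, "recall": recall, "f_measure": f_measure}

# Example with synthetic counts:
m = binary_metrics(tp=980, fp=10, tn=990, fn=20)
```

Note that the F-measure is the harmonic mean of precision and recall, which is why it is the preferred single criterion throughout this chapter.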

6.2 Types Extraction Results

As previously stated in section 3.1.2, the types are a key element of the game modeling, whose definition is fundamental for the strategic interaction between the Classifier and the Adversary agents. To the best of our knowledge, the extraction of types from a real-world dataset for an incomplete information game model has never before been performed using data mining.

Types extraction was performed by applying cluster analysis over the whole phishing and ham dataset. As described in appendix A.3, the obtained clusters were evaluated using the Davies-Bouldin index, which measures whether the clusters are well separated from each other (distance between centroids) and compact within themselves (distance from objects to their closest centroids). Fuzzy clustering, such as fuzzy c-means [9], and other clustering methods, such as Kohonen Self-Organizing Maps [42], were not considered in this first approach to types extraction. Further experiments using more complex clustering methods could improve the results obtained for this game theoretical component.

Figure 6.5: Davies-Bouldin index for types clustering.

As presented in Figure 6.5, the Davies-Bouldin index indicates that the optimal number of clusters is 7, where a maximum value of the 1 − DB index was obtained. The decision criterion for this index is that better clusters are obtained as its value decreases (or, equivalently, as 1 − DB increases). The evaluation of the best clustering must not consider small values of the k parameter, as small values often yield good partitions of the dataset but lack relevant information. An interesting value was also obtained for 13 clusters, although its index value is lower than the one obtained for 7 clusters.
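The Davies-Bouldin criterion used in Figure 6.5 can be computed as follows; a minimal sketch assuming the k-means centroids and assignments are already available (the function name is ours):

```python
import math

def davies_bouldin(points, labels, centroids):
    """Davies-Bouldin index: the average, over clusters, of the worst
    ratio (S_i + S_j) / d(c_i, c_j), where S_i is the mean distance from
    cluster i's points to its centroid. Lower means better-separated,
    more compact clusters."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    k = len(centroids)
    # S_i: within-cluster scatter
    scatter = []
    for i in range(k):
        members = [p for p, l in zip(points, labels) if l == i]
        scatter.append(sum(dist(p, centroids[i]) for p in members) / len(members))
    db = 0.0
    for i in range(k):
        db += max((scatter[i] + scatter[j]) / dist(centroids[i], centroids[j])
                  for j in range(k) if j != i)
    return db / k
```

Sweeping k and plotting 1 − DB, as in Figure 6.5, then amounts to choosing the k with the smallest index while discarding trivially small k.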

Table 6.4: Frequency associated to phishing and ham emails for each cluster.
Cluster Phishing Ham
1 1454 (91%) 148 (9%)
2 932 (96%) 37 (4%)
3 886 (20%) 3571 (80%)
4 621 (86%) 105 (14%)
5 295 (82%) 63 (18%)
6 284 (16%) 1474 (84%)
7 87 (5%) 1553 (95%)

Table 6.4 presents an analysis of the frequencies of phishing and ham messages for the 7 types extracted with the k-means algorithm. In this case, clusters 1, 2, 4, and 5 are strongly related to phishing, while clusters 3, 6, and 7 are mostly related to ham messages.
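The per-cluster proportions in Table 6.4 follow from a simple tally of labels over the k-means assignments; a sketch with illustrative names:

```python
from collections import Counter

def cluster_class_table(assignments, labels):
    """For each cluster id, count how many phishing (label 1) and ham
    (label 0) messages it received, with row percentages as in Table 6.4."""
    table = {}
    for c in sorted(set(assignments)):
        counts = Counter(l for a, l in zip(assignments, labels) if a == c)
        total = counts[0] + counts[1]
        table[c] = {"phishing": counts[1], "ham": counts[0],
                    "phishing_pct": round(100 * counts[1] / total),
                    "ham_pct": round(100 * counts[0] / total)}
    return table
```

Clusters dominated by one class (e.g. 91% phishing in cluster 1) are the ones that can be read as Adversary or honest-sender types.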

6.3 Online Classification Algorithms Performance

In this section, the performance criteria evaluation for the proposed QRE-based classification algorithms is presented, as well as for the benchmark algorithms. In this work, the QRE-based algorithms are S-QRE, I-QRE, QRE-P, and AAO-SVM. The benchmark algorithms considered are RO-SVM, incremental naïve Bayes, and the perceptron, which were evaluated according to the experimental setup previously explained in section 5.5.

6.3.1 Proposed Classification Algorithms Performance

As presented in Figure 6.6, the results show a decreasing error for all incremental algorithms, whereas for the static version of the QRE classifier the error increased. Both I-QRE and QRE-P initially present (over the first 500 messages evaluated) errors of 78.22% and 74.13%, respectively, and by the final evaluation at message 11500, errors of 33.13% and 57.01%, respectively. For the two SVM-based algorithms, AAO-SVM and RO-SVM, the initial error was 59.21% and 57.88%, respectively, and the final error was 13.37% and 14.8%, respectively. These results indicate that the adversary-aware online SVM presented slightly better results than the adversary-unaware online SVM, although the overall evaluation of both indicates a similar behaviour. Finally, QRE-P presented the smallest error-rate difference between the first and the final evaluation, followed by I-QRE and then by AAO-SVM.
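The accumulated-error curves of Figure 6.6 correspond to the running error rate of each online classifier, sampled every 500 messages; a minimal sketch (the function name is ours):

```python
def accumulated_error_curve(predictions, labels, step=500):
    """Running error rate of an online classifier over a message stream,
    sampled every `step` messages (the 500-message checkpoints of
    Figure 6.6)."""
    curve = []
    mistakes = 0
    for i, (p, y) in enumerate(zip(predictions, labels), start=1):
        if p != y:
            mistakes += 1
        if i % step == 0:
            curve.append(mistakes / i)
    return curve
```

Because the denominator is the number of messages seen so far, early mistakes weigh heavily at the start of the curve and are gradually diluted, which is why the incremental algorithms' curves fall over time.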

As shown in Table 6.5, the differences in classification performance among the proposed algorithms can be identified: the only one that is highly competitive with the benchmark method is AAO-SVM, followed by I-QRE.

Figure 6.6: Accumulated error evaluated every 500 messages in the online evaluation setup for the S-QRE, I-QRE, QRE-P, and AAO-SVM classification algorithms, using RO-SVM as a benchmark.

Table 6.5: Experimental results for the benchmark online machine learning algorithms.
Model Accuracy FP-Rate FN-Rate Precision Recall F-measure
S-QRE 32.12% 63.75% 74.06% 25.94% 21.36% 23.43%
QRE-P 42.09% 54.17% 62.01% 37.99% 39.08% 38.53%
I-QRE 66.88% 42.01% 23.11% 76.89% 61.90% 68.59%
AAO-SVM 86.63% 9.43% 16.35% 83.65% 92.12% 87.68%
RO-SVM 85.52% 10.19% 17.87% 82.13% 91.05% 86.36%

6.3.2 Benchmark Algorithms Performance

Identifying the online property of learning algorithms is not an easy task. In this work, as a first approach, the applicability and accuracy of the proposed algorithms were tested using the previously mentioned classification performance measures. The results showed for RO-SVM an F-measure of 86.01% with an accuracy of 86.01%. For online naïve Bayes, the reported F-measure is 85.20% and the accuracy 81.18%. The AAO-SVM algorithm reported an F-measure of 87.69% and an accuracy of 86.63%, slightly better results than the benchmark online classification algorithms on these performance criteria.
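The incremental naïve Bayes benchmark keeps running counts that are updated one message at a time, so no retraining pass over old data is ever needed. A hedged sketch for binary features with Laplace smoothing (a generic formulation, not the thesis's exact implementation):

```python
from math import log

class IncrementalNaiveBayes:
    """Online naive Bayes for binary features and binary labels
    (1 = phishing). Counts are updated per message."""
    def __init__(self, n_features):
        self.class_count = [0, 0]
        # feature_count[y][j] = how often feature j was 1 in class y
        self.feature_count = [[0] * n_features, [0] * n_features]

    def update(self, x, y):
        self.class_count[y] += 1
        for j, v in enumerate(x):
            if v:
                self.feature_count[y][j] += 1

    def predict(self, x):
        scores = []
        total = sum(self.class_count) + 2          # Laplace-smoothed prior
        for y in (0, 1):
            s = log((self.class_count[y] + 1) / total)
            for j, v in enumerate(x):
                p1 = (self.feature_count[y][j] + 1) / (self.class_count[y] + 2)
                s += log(p1 if v else 1 - p1)
            scores.append(s)
        return 0 if scores[0] >= scores[1] else 1
```

In the online setup, `update` is called after each message's true label is revealed, mirroring the evaluation protocol of section 5.5.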

Figure 6.8 presents a close-up of the final portion of the evaluation for all classification algorithms (from message 8000 to 11500).

Figure 6.7: Accumulated error evaluated every 500 messages for the I-QRE, AAO-SVM, RO-SVM, batch SVM, naïve Bayes, and incremental naïve Bayes classification algorithms.

Figure 6.8: Zoom over the accumulated error evaluated every 500 messages for the I-QRE, AAO-SVM, RO-SVM, batch SVM, naïve Bayes, and incremental naïve Bayes classification algorithms.

In general, as exhibited in Figure 6.7 and Figure 6.8, the accumulated error of all incremental algorithms diminishes as more messages are presented. However, the rate at which the algorithms improve differs: both AAO-SVM and RO-SVM performed better, outperforming the incremental version of naïve Bayes. The I-QRE evaluation was not competitive enough with its respective benchmark algorithms, but the fact that its classification performance improves over time using only game-theoretic properties of the game is remarkable. Among the batch learning algorithms, the naïve Bayes error increased over the evaluation, whereas the batch version of SVMs presented a similar behaviour but with only a very slight positive slope in its error curve. Evaluating the SVM on a small percentage of the dataset was enough to maintain an appropriate classification performance.

6.4 Sensitivity Analysis and Robust Testing

As discussed in section 3.3, all proposed algorithms (S-QRE, I-QRE, QRE-P, and AAO-SVM) are evaluated under strong assumptions on the game modeling parameters, especially those related to the utility function used to determine the game equilibria. Likewise, the evaluation-period parameters for which the classifiers were incrementally updated must be evaluated. In the following, a sensitivity analysis and robust testing for each adversary-aware classification algorithm are presented and discussed. All of the following evaluations were based on the experimental setup described in section 5.7.

6.4.1 Static-QRE Sensitivity Analysis

The sensitivity analysis for the S-QRE algorithm was performed in terms of the penalization parameter of the Adversary's utility function. Here, the baseline utility parameter was considered constant (UA = 100). As shown in Figure 6.9, the incremental properties of the evaluation yield increasing values of the binary classification error, and different scenarios were observed for the proposed utility function evaluation over the parameter values {1, 100, 1000, 10000}.

Figure 6.9: Accumulated error evaluated for the S-QRE classifier for penalization parameter values {1, 100, 1000, 10000}.

The first case (penalization 1) corresponds to the situation where changing from the original strategy to a new one carries no additional cost. As shown in Figure 6.9, its initial error evaluation was higher than for the other utility function variations. The classification error also showed small variations across the incremental evaluation of messages, reflecting stability in the measured error rate. The cases with penalization 100 and 10000 were similar in terms of initial and final error and overall variation across the whole dataset. Surprisingly, the scenario with penalization 1000 showed better results in the initial error evaluation and over almost the entire evaluation; only in the final percentage of the dataset did the scenario with penalization 10000 obtain better classification results.

6.4.2 Incremental-QRE Sensitivity Analysis

Following the same evaluation as for the previous algorithm, the sensitivity analysis for the I-QRE algorithm was performed in terms of the penalization parameter of the Adversary's utility function. As in the previous case, the baseline utility parameter was considered constant (UA = 100).

As shown in Figure 6.10, the evaluation with penalization 1000 presented the lowest classification error over the whole corpus. For penalization 1, the evaluation was similar to that with penalization 100 for messages 500 to 6500, but thereafter the descent gradient of its error curve remained constant. For penalization 100, the error dropped significantly, outperforming the 10000 and 1 cases. Finally, penalization 10000 presented small variations over the whole corpus evaluation, but its results were not better than those of the 100 and 1000 cases.

Figure 6.10: Accumulated error evaluated for the I-QRE classifier for penalization parameter values {1, 100, 1000, 10000}.

6.4.3 QRE-Perceptron Sensitivity Analysis

The QRE-P algorithm was evaluated in terms of the penalization parameter of the Adversary's utility function. As in the previous sensitivity analyses for S-QRE and I-QRE, the baseline utility parameter was considered constant (UA = 100).

Figure 6.11 presents the evaluation for the different values of the utility function's penalization parameter ({1, 100, 1000, 10000}), for which the best results were obtained with the value 1000. In this case, the classification error is well behaved from approximately message 3000 onwards, obtaining better results than the other cases until the end of the evaluation. The classifiers built for penalizations 1, 100, and 10000 showed a similar behaviour, with the worst results obtained for 10000.

Figure 6.11: Accumulated error evaluated for the QRE-P classifier for penalization parameter values {1, 100, 1000, 10000}.

6.4.4 AAO-SVM Sensitivity Analysis

Finally, as explained in section 5.7, for the AAO-SVM algorithm the sensitivity analysis was performed considering the relationship between the different parameters of the Classifier's and the Adversary's utility functions. The analysis was then extended to the incremental parameters of the algorithm, such as Gp and m.

Figure 6.12 shows the classification error over the messages evaluated. The overall evaluation over the corpus indicates a constantly decreasing error over time in scenarios 5 and 3. For scenarios 1, 2, and 4 the learning curve behaviour was similar (particularly for scenarios 1 and 4), and the best overall results were obtained for scenario 2, which outperformed all other scenarios over almost the whole corpus.

Figure 6.12: Accumulated error evaluated for the AAO-SVM classifier for the values presented in section 5.7.

6.4.5 AAO-SVM and I-QRE Incremental Parameter

Figure 6.13 presents the evaluation for Gp = {10, 500, 1000}, where both AAO-SVM and I-QRE are evaluated together. Here, the results show that in both cases Gp = 500 yields the best accumulated-error performance. When Gp = 10, the I-QRE results are similar to those of QRE-P evaluated with penalization 1000, which is explained by the fact that both classifiers share the same parameter properties. Finally, when Gp = 1000, the results were not very different from those for Gp = 500, but were slightly worse in terms of the accumulated error.

Figure 6.13: Accumulated error evaluated for the AAO-SVM and I-QRE classifiers for Gp = {10, 500, 1000}.
Chapter 7

Conclusions and Future Work

The search for static security - in the law and elsewhere - is misguided. The fact is security can
only be achieved through constant change, adapting old ideas that have outlived their usefulness to
current facts.
William O. Douglas, US Supreme Court Justice (1898-1980)

As stated by William O. Douglas in the quote above, security applications must be considered as dynamically learning schemes, since the constant changes in our everyday life challenge our security systems, and old ideas must be adapted into new ones. Furthermore, in adversarial applications, as stated by Sun Tzu in The Art of War [78], in order to increase the probability of succeeding in a given battle you need to know both the enemy and yourself. If this does not hold, for every victory gained you will also suffer a defeat, or, in the worst case, you will succumb in every battle.

The main conclusion of this work is that, given an adversarial application where the available data permit the definition of agents, strategy profiles, agent types, and their utility functions, different classification algorithms can be proposed to capture the adversarial behaviour. This can be achieved by using data mining techniques on the game-theoretic parameters, as well as by using adversarial data mining to build a machine learning algorithm based on the strategic interaction. In this case, the strategic interaction was modeled by signaling games, and this may vary depending on how the adversarial system is defined by the modeler.

It can also be concluded that counter-measures for adversarial systems can be proposed, and that their effectiveness is limited exclusively by the modeling of the system. As adversarial patterns for phishing classification were extracted, both for social engineering features and for the dynamic adversarial interaction between the phishing fraudster and the classifier, this work could be extended to further adversarial domains.

Finally, in terms of enhancing data mining with game theory, there are different levels of interaction that can be achieved. On the one hand, data mining can be used to determine certain properties and game-theoretical parameters that must be used as input for the game to be evaluated. In this case, the usage of unsupervised techniques, such as clustering methods, can supply missing information that the game needs. On the other hand, adversarial data mining can be used to build a machine learning model with game-theoretic parameters related exclusively to the proposed game modeling. In this case, strategy profiles were used as input to the learning step, and given the incremental properties of the adversarial interaction, the game-theoretic parameters in the algorithm changed according to the evolution of the interaction between the agents.

In the rest of this chapter, the main conclusions and highlights of this thesis are presented. Firstly, the research hypothesis and research objectives are discussed. Secondly, we discuss the game theoretical framework proposed in chapter 3, together with the main research hypothesis exposed in chapter 1 and the results presented in chapter 6. Thirdly, the phishing filtering problem and the proposed solution are reviewed and analysed in terms of the highlights obtained for feature extraction, the experimental setup for adversary-aware classifiers, and the results (presented in chapter 4, chapter 5, and chapter 6, respectively). Finally, future work and further developments are discussed from both theoretical and practical points of view of game theory, data mining, and adversarial applications.

7.1 Main and Specific Objectives

In the present thesis, the main objective and the specific objectives were successfully accomplished. The following list briefly discusses how each of the initially stated objectives was fulfilled, and how the development of this thesis relates to each one of them.

1. As shown in chapter 3, chapter 4, and chapter 6, and as will be discussed extensively in this chapter, a methodology to classify phishing emails was developed, modeling the interaction between the classifier and malicious agents using data mining and game theory.

2. As presented in chapter 3, a game theoretic framework that represents the interaction between
a phishing filter and the adversaries was designed. The interaction was modeled under the
assumption of incomplete information between agents, characterizing agents of the system
together with their respective types, strategies and payoffs.

3. As presented in chapter 3 and chapter 4, an incremental algorithm that enables the classifi-
cation of malicious phishing activities was built, whose results were competitive against state
of the art classification algorithms, as presented in chapter 6.

4. As presented in chapter 4, the phishing activities were characterized using text mining tech-
niques, where the most representative feature space for each agent in the game definition was
evaluated as presented in chapter 5, and whose results were discussed in chapter 6.

5. A portable and extensible computational tool was implemented: S-QRE, I-QRE, and QRE-P in the Perl programming language, and AAO-SVM in the C++ programming language. Furthermore, the feature extraction procedures were implemented in Perl. All of the code of the present thesis will be released under the terms of the GNU General Public License version 3.0 (GPLv3), the same license under which the Gambit tool1 was released. This will enable the developed code to be used freely by phishing filtering communities.

7.2 Game Theoretical Framework

In this work, an extension of the Adversarial Classification framework for Adversarial Data Mining was presented, considering dynamic games of incomplete information, or signaling games, as a new approach to make classifiers improve their performance in adversarial environments. This approach relies on strong assumptions about the Adversary's strategies, the utility function modeling for the Classifier and the Adversary, and the experimental setup related to the database processing. As a first approach, interesting empirical results were obtained for the proposed S-QRE, I-QRE, QRE-P, and AAO-SVM classifiers.

Feature extraction is a key component of the game strategies and types in the proposed game theoretical framework. Results showed that the proposed feature extraction and selection techniques are highly competitive in comparison with previous feature extraction work. Future work could consider a mixture of previous and present feature extraction techniques. This could yield a better estimate of the strategy space for the Adversary, and therefore of the Adversary's types. This is an important topic that directly affects the definition of the signaling game between the Adversary and the Classifier, and hence the Classifier's performance.

QRE-based classification algorithms showed good results in the online learning experimental setup. The main results showed that, for the incremental algorithms, classification performance improved as messages were incrementally evaluated. In this way, these classifiers, which were based entirely on game theoretical parameters with no machine learning interaction, learned about the Adversary as the game developed. In contrast, the batch learning version of the strategic classifier (S-QRE) presented a good initial evaluation on the dataset, but its performance diminished over time. This behaviour was expected, as the benchmark batch classification algorithms also lost performance in the incremental evaluation of the corpus.
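For reference, the logit quantal response underlying the QRE (approximated with the Gambit tool in this thesis) mixes over strategies in proportion to exp(λ · utility), so higher-utility strategies are played more often but never exclusively. A minimal illustrative sketch for a single player (the payoffs and function names are ours, not the thesis's game):

```python
from math import exp

def logit_response(utilities, lam):
    """Logit quantal response: choice probabilities proportional to
    exp(lam * u). lam -> 0 gives uniform play; large lam approaches
    a best response."""
    weights = [exp(lam * u) for u in utilities]
    z = sum(weights)
    return [w / z for w in weights]
```

A QRE is a fixed point of such mutual logit responses; Gambit computes this fixed point on the full signaling game, which is what the QRE-based classifiers consume at each game period.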

In terms of the sensitivity analysis for QRE-based classification algorithms, the best results were obtained for penalization value 1000 and Gp = 500. This configuration means that the Adversary's utility function penalizes changing its strategies, so the equilibrium strategies of both players become more stable; hence the Classifier's strategies follow a better classification criterion than for small values of the penalization parameter. An interesting result was obtained for high values of the penalization parameter: for the value 10000, lower classification performance was obtained. This could be explained by the mechanism forcing the classifier to remain static, because no changes are observed for the Adversary. In other words, the fact that the interaction between the agents remains static could degrade classification performance, because no dynamism was presented by either agent. However, these conclusions rest on the game-theoretic assumptions of the mechanism; further experiments on the evolution of the obtained equilibria could help to explain these results quantitatively.

1 This tool was used in this thesis to approximate the QRE for a given signaling game.

In terms of the game-period parameter (Gp), for both I-QRE and AAO-SVM the intermediate value Gp = 500 presented the best results. This was expected, as the lower value (Gp = 10) and the higher value (Gp = 1000) were, respectively, too small and too large to capture enough information about the development of the game. An interesting result was obtained for the I-QRE algorithm when Gp = 10, because of its resemblance to the evaluation of the QRE-P algorithm with penalization 1000. This occurred because in the QRE-P algorithm the agents' strategies are updated whenever a classification mistake is observed, which happened at a rate of roughly every 10 messages.
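The two update schedules contrasted above can be sketched as follows (the function and mode names are ours, not from the thesis code):

```python
def should_update(t, mistake_observed, mode, gp=500):
    """Decide whether to recompute the game equilibrium at message t.
    I-QRE and AAO-SVM refresh every Gp messages ("period" mode);
    QRE-P refreshes whenever a classification mistake is observed
    (perceptron-style "mistake" mode)."""
    if mode == "period":
        return t % gp == 0
    return mistake_observed
```

When mistakes occur at roughly one in every 10 messages, the mistake-driven schedule behaves much like a period schedule with Gp = 10, which explains the observed resemblance.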

The proposed Adversary Aware Online Support Vector Machines classifier (AAO-SVM), whose core is the SVM model, considers a signaling game where beliefs, mixed strategies, and probabilities for the message types are updated and incorporated as a game-theoretic parameter into the optimization problem as new messages are presented. This enables the classifier to change the margin error dynamically as the game evolves, embedding awareness of the adversarial environment. More specifically, this parameter enters the misclassification constraint of the SVM optimization problem, given the utility function modeling for the Classifier. The results showed a promising interaction with the dataset for the proposed classifiers compared with the benchmark algorithms used in this experimental setup and in previous work. Although the underlying assumptions can be debated and/or improved, the AAO-SVM classifier obtained slightly better results than the plain online SVM algorithm.
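The idea of a game-dependent misclassification penalty can be illustrated with a stochastic sub-gradient pass for a linear SVM in which the penalty C_t changes per game period. This is a simplified stand-in for AAO-SVM: in the thesis C_t is derived from the equilibrium beliefs, whereas here it is just a supplied schedule, and the names are ours:

```python
def online_svm_pass(stream, n_features, lr=0.1, penalty_schedule=None):
    """One online pass of hinge-loss sub-gradient updates for a linear
    SVM whose misclassification penalty C_t may change over time."""
    w = [0.0] * n_features
    b = 0.0
    for t, (x, y) in enumerate(stream):        # y in {-1, +1}
        c = penalty_schedule(t) if penalty_schedule else 1.0
        margin = y * (sum(wj * xj for wj, xj in zip(w, x)) + b)
        if margin < 1:                         # hinge loss is active
            w = [wj + lr * c * y * xj for wj, xj in zip(w, x)]
            b += lr * c * y
        # (a full implementation would also shrink w for regularization)
    return w, b

# Illustrative schedule: tighten the penalty every 500-message game period.
schedule = lambda t: 1.0 + 0.5 * (t // 500)
```

A larger C_t makes margin violations costlier, so the classifier reacts more aggressively to messages arriving in periods where the game signals a more active Adversary.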

7.3 Phishing Filtering

Content-based methods are a promising approach that can be applied to a wide range of potential threats, but are most readily used to add security elements to electronic mail. In this thesis, all major features considered were extracted directly from the content of the malicious messages, reporting outstanding results. In addition, structural features associated with the medium in which messages are transmitted contributed to enhancing the results obtained.

In terms of the characterization of malicious messages, the proposed feature sets reported different levels of performance across the benchmark classification algorithms. The combination of topic-model features and keyword features, restricted to the SVD-selected words and filtered by mutual information feature selection, presented slightly better results than state-of-the-art feature extraction methods for phishing messages. This characterization is fundamentally based on latent semantic analysis over the email message corpus, where clustering techniques for determining topics and keywords are enhanced by an SVD reduction of the tf-idf representation of the corpus. Among the individually evaluated feature sets, the topic-model features reported the highest F-measure values and the structural features the lowest.
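The latent semantic characterization rests on an SVD reduction of the tf-idf representation; a hedged numpy sketch of that pipeline (the thesis's actual implementation was in Perl, tokenization details are omitted, and the function names are ours):

```python
import numpy as np
from math import log

def tfidf_matrix(docs):
    """Build a dense tf-idf matrix (documents x terms) from tokenized docs."""
    vocab = sorted({t for d in docs for t in d})
    idx = {t: j for j, t in enumerate(vocab)}
    n = len(docs)
    tf = np.zeros((n, len(vocab)))
    for i, d in enumerate(docs):
        for t in d:
            tf[i, idx[t]] += 1
    df = (tf > 0).sum(axis=0)          # document frequency per term
    idf = np.log(n / df)
    return tf * idf, vocab

def lsa_embed(docs, k):
    """Project documents onto the top-k singular directions (LSA)."""
    X, vocab = tfidf_matrix(docs)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U[:, :k] * s[:k]            # k-dimensional document coordinates
```

Topic and keyword clustering then operates on these low-dimensional document coordinates instead of the sparse raw term space.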

The machine learning approach to malicious message classification reported interesting results according to the evaluation criteria of the benchmark algorithms. The classification performance evaluated over the different sets of features indicates that the SVM algorithm, based on structural risk minimization, outperforms other machine learning algorithms, such as generative models represented by naïve Bayes and discriminative classifier models represented by logistic regression. This supports the fact that SVMs are preferred over other classification algorithms for classification tasks, especially in text-mining applications.

In this way, it is shown that a filter based on social engineering features can indeed be defined. Furthermore, as presented in section 6.1.1, the feature extraction and selection procedures used in this work yield a feature space that is closely related to the social engineering tactics present in the email corpus used for the evaluation of this thesis.

7.4 Future Work

This work's extensions fall into two development areas. The first is related to theoretical extensions of the proposed game-theoretical framework, as well as possible data mining extensions. The second is associated with further developments in phishing and spam classification, and with developments in other data-rich environments with adversarial characteristics. In what follows, all extensions concerning this thesis are discussed as future work.

7.4.1 Theoretical Extensions

As mentioned before, theoretical extensions can be associated with both the data mining and the game theory aspects of the proposed game-theoretic data mining framework. As presented in the following list, different strategic interactions between agents can be taken into consideration, each raising its own data mining concerns.

1. Determining the actual concept drift of the game is an important open question. In this thesis, as a strong assumption for the Classifier, the Adversary's strategies are observed from a given probability distribution defined over the observed data, which is assumed to be intentionally modified by a centralized adversarial agent. An experimental setup for the Classifier showing the impact of including new Adversary strategies within an already defined set of strategies (or features) might contribute to answering this question.

2. Another possible extension is related to game theoretical assumptions, such as the Quantal Response Equilibrium used for the approximation of the strategies. Different refinements of incomplete information games can be considered for a better approximation in different game modeling scenarios.

3. The Adversary can strategically adapt its behaviour using a strategy-finding algorithm (e.g. Adversarial Learning, proposed in [46]). Here, the data mining issues related to the adversarial learning problem must be considered, which, to the best of our knowledge, have only been addressed under a complete information game model.

4. The Adversary, whose only strategic reaction is based on utility function maximization assumptions, must deal with multiple strategic Classifiers in an incomplete information interaction. As of today, only complete information modeling has been considered for a multi-classifier strategic system [11], and incomplete information multi-classifier strategic interaction remains an open question.

5. An Adversary that can adapt its behaviour through a strategy-finding algorithm, playing against multiple strategic Classifiers in an incomplete information interaction, has not been considered yet. However, this extension's difficulty can be mitigated by first developing the two items of future work suggested above.

6. Another interesting future development is the case where one strategic Classifier must interact with multiple strategic Adversaries. Here, several complexities arise in the game modeling, since the assumption of one central strategic Adversary controlled by Nature doesn't hold. Given this, a further revision of game theory modeling for incomplete information games must be considered.

7. Finally, the natural extension of all the previous problems is the case where all agents, with no assumption of a central agent on either the Adversary or the Classifier side, strategically modify their behaviour using strategy-finding algorithms in an incomplete information interaction.

7.4.2 Applied Extensions

Different future developments can be considered for real-world security-related data mining applications, from improving the characterization of phishing strategies to spam classification applications and other adversarial-environment problems where the interaction between agents can be modeled. In this context, it is important to note that for any application, the problem must be modeled according to the game interaction proposed in this framework, that is, considering the utility functions and the sets of strategies for both the Adversary and the Classifier, and the Adversary's types.

1. In the phishing classification context, the feature extraction analysis can be extended to other languages such as Spanish or French, among other widely used languages. However, this is directly constrained by the development of new benchmark datasets for malicious message classification, whose construction is not an easy task. Ideally, a hand-crafted collection of malicious messages should be built so that the messages can be labeled by an expert, and the expert's labels must be double-checked for unforced errors made in the labeling step. In addition, the legal implications of using collected messages must be taken into account.

2. Social Network Analysis (SNA) applications using incomplete information interaction between related agents. Here, game theoretic extensions must be considered for combining adversarial data mining and SNA in applications where the interaction between the network components (e.g. players (nodes) or relations (links)) could extend adversarial classification tasks, such as counter-terrorism and homeland security applications.

3. There are several potential applications whose data generation process can be modeled as an
adversarial scenario. Our framework could be extended and applied to the following:

(a) Intrusion Detection Systems, which must classify adversarial interactions with potential
attackers whose objective is to access a given computer system or network.
(b) Fraud-related problems, such as banking, tax, identity theft, health insurance, investment
and other fraud schemes, for which the underlying interaction between Adversaries and
Classifiers must be modeled and adjusted to the considerations of this thesis.
(c) Crime modeling, such as burglary or robbery, over a spatio-temporal database in which
strategies can be defined for adversaries, types can be extracted from their interactions,
and utility function assumptions can be stated.
(d) In marketing applications, adversarial clients could decide their strategies against a given
firm. One example is when firms maximize their utility by using revenue management
and clients, aware of this, modify their behaviour for their own benefit.

4. Further developments in adversarial information retrieval with incomplete information games.
In this case, malicious strategic agents use different resources available online (mainly from
search engines), and their activities can be classified by adversary-aware classifiers.


Conferences and Workshops

The main results of this thesis were presented at different national and international con-
ferences and workshops. The support from the Chilean Instituto Sistemas Complejos de In-
geniería (ICM: P-05-004-F, CONICYT: FBO16; www.sistemasdeingenieria.cl), the Chilean
Anillo project ACT87 "Quantitative Methods in Security" (www.ceamos.cl), the Chilean Com-
puter Emergency Response Team (www.clcert.cl), and the Master in Operations Management
program of the University of Chile is greatly acknowledged.

[1] G. L'Huillier, R. Weber, and N. Figueroa. Online phishing classification using adversarial data
mining and signaling games. In CSI-KDD '09: Proceedings of the ACM SIGKDD Workshop on
CyberSecurity and Intelligence Informatics, pages 33–42, New York, NY, USA, 2009. ACM.

[2] G. L'Huillier, R. Weber, and N. Figueroa. Malicious Activities Classification Using Adversarial
Data Mining and Games of Incomplete Information. EURO '09: 23rd European Conference on
Operations Research. Bonn, Germany, 2009.

[3] G. L'Huillier, R. Weber, N. Figueroa. Phishing Classification Using Adversarial Data Mining
and Incomplete Information Games. In Optima '09: VIII Congreso Chileno de Investigación
Operativa. Termas de Chillán, Chile, 2009.

[4] G. L'Huillier, R. Weber, N. Figueroa, A. Hevia. Clasificación de Phishing usando Minería de
Datos Adversarial y Juegos con Información Incompleta. JCC '09: Encuentro de Tesistas,
Jornadas Chilenas de Computación 2009. Santiago, Chile, 2009.

[5] G. L'Huillier, R. Weber, N. Figueroa. An Incremental Algorithm for Sequential Equilibria in
Adversarial Data Mining with Signaling Game. BAO '10: First Business Analytics and
Optimization Workshop. Santiago, Chile, 2010.

[6] G. L'Huillier, R. Weber, A. Hevia, S. Ríos. Latent Semantic Analysis and Keyword Extraction
for Phishing Classification. ISI '10: IEEE International Conference on Intelligence and Security
Informatics. Vancouver, Canada, 2010.

[7] G. L'Huillier, R. Weber, and N. Figueroa. Sequential Equilibria Algorithms for Adversarial
Data Mining with Signaling Games. ALIO-INFORMS '10: Joint International Meeting. Buenos
Aires, Argentina, 2010.

[8] G. L'Huillier, R. Weber, and N. Figueroa. Dealing with Multiple Equilibria in Game Theoretic
Data Mining Applications. ALIO-INFORMS '10: Joint International Meeting. Buenos Aires,
Argentina, 2010.

REFERENCES

[1] Anti-Phishing Working Group, http://www.antiphishing.org.

[2] Saeed Abu-Nimeh, Dario Nappa, Xinlei Wang, and Suku Nair. A comparison of machine
learning techniques for phishing detection. In eCrime '07: Proceedings of the Anti-Phishing
Working Group's 2nd Annual eCrime Researchers Summit, pages 60–69, New York, NY, USA,
2007. ACM.

[3] Ion Androutsopoulos, Evangelos F. Magirou, and Dimitrios K. Vassilakis. A game theoretic
model of spam e-mailing. In CEAS, 2005.

[4] Marco Barreno, Peter L. Bartlett, Fuching Jack Chi, Anthony D. Joseph, Blaine Nelson, Ben-
jamin I. P. Rubinstein, Udam Saini, and J. D. Tygar. Open problems in the security of learning.
In AISec '08: Proceedings of the 1st ACM Workshop on AISec, pages 19–26, New
York, NY, USA, 2008. ACM.

[5] Marco Barreno, Blaine Nelson, Russell Sears, Anthony D. Joseph, and J. D. Tygar. Can
machine learning be secure? In ASIACCS '06: Proceedings of the 2006 ACM Symposium on
Information, Computer and Communications Security, pages 16–25, New York, NY, USA, 2006.
ACM.

[6] Ram Basnet, Srinivas Mukkamala, and Andrew H. Sung. Detection of Phishing Attacks: A
Machine Learning Approach, Studies in Fuzziness and Soft Computing, pages 373–383.
Springer Berlin / Heidelberg, 2008.

[7] André Bergholz, Jan De Beer, Sebastian Glahn, Marie-Francine Moens, Gerhard Paass, and
Siehyun Strobel. New filtering approaches for phishing email. Journal of Computer Security,
2009. Accepted for publication.

[8] André Bergholz, Jeong-Ho Chang, Gerhard Paass, Frank Reichartz, and Siehyun Strobel. Im-
proved phishing detection using model-based features. In Fifth Conference on Email and
Anti-Spam, CEAS 2008, 2008.

[9] James C. Bezdek and Joseph C. Dunn. Optimal fuzzy partitions: A heuristic for estimating
the parameters in a mixture of normal distributions. IEEE Trans. Computers, 24(8):835–838,
1975.

[10] Battista Biggio, Giorgio Fumera, and Fabio Roli. Adversarial pattern classification using mul-
tiple classifiers and randomisation. In SSPR/SPR, pages 500–509, 2008.

[11] Battista Biggio, Giorgio Fumera, and Fabio Roli. Multiple classifier systems for adversarial
classification tasks. In Jon Atli Benediktsson, Josef Kittler, and Fabio Roli, editors, MCS,
volume 5519 of Lecture Notes in Computer Science, pages 132–141. Springer, 2009.

[12] István Bíró, Dávid Siklósi, Jácint Szabó, and András A. Benczúr. Linked latent Dirichlet
allocation in web spam filtering. In AIRWeb '09: Proceedings of the 5th International Workshop
on Adversarial Information Retrieval on the Web, pages 37–40, New York, NY, USA, 2009.
ACM.

[13] István Bíró, Jácint Szabó, and András A. Benczúr. Latent Dirichlet allocation in web spam
filtering. In AIRWeb '08: Proceedings of the 4th International Workshop on Adversarial Infor-
mation Retrieval on the Web, pages 29–32, New York, NY, USA, 2008. ACM.

[14] David M. Blei, Andrew Y. Ng, and Michael I. Jordan. Latent Dirichlet allocation. J. Mach.
Learn. Res., 3:993–1022, 2003.

[15] Anselm Blumer, A. Ehrenfeucht, David Haussler, and Manfred K. Warmuth. Learnability and
the Vapnik-Chervonenkis dimension. J. ACM, 36(4):929–965, 1989.

[16] Bernhard E. Boser, Isabelle M. Guyon, and Vladimir N. Vapnik. A training algorithm for
optimal margin classifiers. In COLT '92: Proceedings of the Fifth Annual Workshop on Compu-
tational Learning Theory, pages 144–152, New York, NY, USA, 1992. ACM.

[17] Chih-Chung Chang and Chih-Jen Lin. LIBSVM: a library for support vector machines, 2001.

[18] In-Koo Cho and David M. Kreps. Signaling games and stable equilibria. The Quarterly Journal
of Economics, 102(2):179–221, May 1987.

[19] Nilesh Dalvi, Pedro Domingos, Mausam, Sumit Sanghai, and Deepak Verma. Adversarial classi-
fication. In Proceedings of the Tenth International Conference on Knowledge Discovery and
Data Mining, volume 1, pages 99–108, Seattle, WA, USA, 2004. ACM Press.

[20] Constantinos Daskalakis, Paul W. Goldberg, and Christos H. Papadimitriou. The complexity
of computing a Nash equilibrium. In STOC '06: Proceedings of the Thirty-Eighth Annual ACM
Symposium on Theory of Computing, pages 71–78, New York, NY, USA, 2006. ACM.

[21] D. Sculley. Advances in Online Learning-Based Spam Filtering. PhD thesis, Tufts
University, 2008.

[22] D. L. Davies and D. W. Bouldin. A cluster separation measure. IEEE Transactions on Pattern
Analysis and Machine Intelligence, 1:224–227, 1979.

[23] Scott Deerwester, Susan T. Dumais, George W. Furnas, Thomas K. Landauer, and Richard
Harshman. Indexing by latent semantic analysis. Journal of the American Society for Infor-
mation Science, 41:391–407, 1990.

[24] Pedro Domingos. MetaCost: a general method for making classifiers cost-sensitive. In KDD
'99: Proceedings of the Fifth ACM SIGKDD International Conference on Knowledge Discovery
and Data Mining, pages 155–164, New York, NY, USA, 1999. ACM.

[25] Pedro Domingos and Michael Pazzani. On the optimality of the simple Bayesian classifier under
zero-one loss. Machine Learning, 29(2-3):103–130, 1997.

[26] Julie S. Downs, Mandy Holbrook, and Lorrie Faith Cranor. Behavioral response to phishing
risk. In eCrime '07: Proceedings of the Anti-Phishing Working Group's 2nd Annual eCrime
Researchers Summit, pages 37–44, New York, NY, USA, 2007. ACM.

[27] Julie S. Downs, Mandy B. Holbrook, and Lorrie Faith Cranor. Decision strategies and suscep-
tibility to phishing. In SOUPS '06: Proceedings of the Second Symposium on Usable Privacy
and Security, pages 79–90, New York, NY, USA, 2006. ACM.

[28] Serge Egelman, Lorrie Faith Cranor, and Jason Hong. You've been warned: an empirical study
of the effectiveness of web browser phishing warnings. In CHI '08: Proceedings of the Twenty-
Sixth Annual SIGCHI Conference on Human Factors in Computing Systems, pages 1065–1074,
New York, NY, USA, 2008. ACM.

[29] Usama M. Fayyad, Gregory Piatetsky-Shapiro, and Padhraic Smyth. From data mining to
knowledge discovery: an overview. Pages 1–34, 1996.

[30] Ian Fette, Norman Sadeh, and Anthony Tomasic. Learning to detect phishing emails. In WWW
'07: Proceedings of the 16th International Conference on World Wide Web, pages 649–656, New
York, NY, USA, 2007. ACM.

[31] François Fleuret. Fast binary feature selection with conditional mutual information. Journal
of Machine Learning Research, 5:1531–1555, 2004.

[32] D. Fudenberg and J. Tirole. Game Theory. MIT Press, October 1991.

[33] Claudio Gentile. A new approximate maximal margin classification algorithm. Journal of
Machine Learning Research, 2:213–242, December 2001.

[34] R. Gibbons. Game Theory for Applied Economists. Princeton University Press, 1992.

[35] Joshua Goodman, Gordon V. Cormack, and David Heckerman. Spam and the ongoing battle
for the inbox. Commun. ACM, 50(2):24–33, 2007.

[36] Google. Google Toolbar, 2009.

[37] Georg Gottlob, Gianluigi Greco, and Toni Mancini. Complexity of pure equilibria in Bayesian
games. In IJCAI '07: Proceedings of the 20th International Joint Conference on Artificial In-
telligence, pages 1294–1299, San Francisco, CA, USA, 2007. Morgan Kaufmann Publishers
Inc.

[38] T. Griffiths. Finding scientific topics. In Proceedings of the National Academy of Sciences,
number 101, pages 5228–5235, 2004.


[39] John C. Harsanyi. Games with incomplete information played by Bayesian players. The basic
probability distribution of the game. Management Science, 14(7):486–502, 1968.

[40] G. Heinrich. Parameter estimation for text analysis. Technical report, 2004.

[41] M. Kantarcioglu, B. Xi, and C. Clifton. A game theoretic framework for adversarial learning.
In CERIAS 9th Annual Information Security Symposium, 2008.

[42] T. Kohonen, M. R. Schroeder, and T. S. Huang, editors. Self-Organizing Maps. Springer-Verlag
New York, Inc., Secaucus, NJ, USA, 2001.

[43] David M. Kreps and Robert Wilson. Sequential equilibria. Econometrica, 50(4):863–894, July
1982.

[44] Gastón L'Huillier, Richard Weber, and Nicolás Figueroa. Online phishing classification using
adversarial data mining and signaling games. In CSI-KDD '09: Proceedings of the ACM
SIGKDD Workshop on CyberSecurity and Intelligence Informatics, pages 33–42, New York,
NY, USA, 2009. ACM.

[45] Rachael Lininger and Russell Dean Vines. Phishing: Cutting the Identity Theft Line. John
Wiley & Sons, 2005.

[46] Daniel Lowd and Christopher Meek. Adversarial learning. In KDD '05: Proceedings of the
Eleventh ACM SIGKDD International Conference on Knowledge Discovery in Data Mining, pages
641–647, New York, NY, USA, 2005. ACM.

[47] Justin Ma, Lawrence K. Saul, Stefan Savage, and Geoffrey M. Voelker. Beyond blacklists:
learning to detect malicious web sites from suspicious URLs. In KDD '09: Proceedings of the
15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages
1245–1254, New York, NY, USA, 2009. ACM.

[48] Justin Ma, Lawrence K. Saul, Stefan Savage, and Geoffrey M. Voelker. Identifying suspicious
URLs: an application of large-scale online learning. In ICML '09: Proceedings of the 26th
Annual International Conference on Machine Learning, pages 681–688, New York, NY, USA,
2009. ACM.

[49] J. MacQueen. Some methods for classification and analysis of multivariate observations. In
Proceedings of the 5th Berkeley Symposium on Mathematical Statistics and Probability, University
of California, volume 1, pages 281–297, 1967.

[50] McAfee. SiteAdvisor, 2009.

[51] Richard McKelvey and Thomas Palfrey. Quantal response equilibria for extensive form games.
Experimental Economics, 1(1):9–41, 1998.

[52] Richard D. McKelvey, Andrew M. McLennan, and Theodore L. Turocy. Gambit: Software
tools for game theory, version 0.2007.01.30, 2007.

[53] Richard D. McKelvey and Thomas R. Palfrey. Quantal response equilibria for normal form
games. Games and Economic Behavior, 10(1):6–38, 1995.


[54] Peter Bro Miltersen and Troels Bjerre Sørensen. Computing sequential equilibria for two-
player games. In SODA '06: Proceedings of the Seventeenth Annual ACM-SIAM Symposium on
Discrete Algorithms, pages 107–116, New York, NY, USA, 2006. ACM.

[55] Kevin D. Mitnick and William L. Simon. The Art of Deception: Controlling the Human Element
of Security. John Wiley & Sons, Inc., New York, NY, USA, 2003.

[56] Jose Nazario. Phishing corpus, 2004–2007.

[57] Blaine Nelson, Marco Barreno, Fuching Jack Chi, Anthony D. Joseph, Benjamin I. P. Rubin-
stein, Udam Saini, Charles Sutton, J. D. Tygar, and Kai Xia. Exploiting machine learning to
subvert your spam filter. In LEET '08: Proceedings of the 1st USENIX Workshop on Large-Scale
Exploits and Emergent Threats, pages 1–9, Berkeley, CA, USA, 2008. USENIX Association.

[58] Andrew Y. Ng and Michael I. Jordan. On discriminative vs. generative classifiers: A comparison
of logistic regression and naive Bayes. In Thomas G. Dietterich, Suzanna Becker, and Zoubin
Ghahramani, editors, NIPS, pages 841–848. MIT Press, 2001.

[59] Yutaka Oiwa, Hiromitsu Takagi, Hajime Watanabe, and Hirofumi Suzuki. PAKE-based mutual
HTTP authentication for preventing phishing attacks. In WWW '09: Proceedings of the 18th
International Conference on World Wide Web, pages 1143–1144, New York, NY, USA, 2009.
ACM.

[60] Mahamed G. Omran, Ayed A. Salman, and Andries Petrus Engelbrecht. Dynamic clustering
using particle swarm optimization with application in image segmentation. Pattern Anal. Appl.,
8(4):332–344, 2006.

[61] Christos H. Papadimitriou, Hisao Tamaki, Prabhakar Raghavan, and Santosh Vempala. Latent
semantic indexing: a probabilistic analysis. In PODS '98: Proceedings of the Seventeenth ACM
SIGACT-SIGMOD-SIGART Symposium on Principles of Database Systems, pages 159–168,
New York, NY, USA, 1998. ACM.

[62] X.-H. Phan and C.-T. Nguyen. GibbsLDA++, 2008.

[63] J. Platt. Sequential minimal optimization: A fast algorithm for training support vector ma-
chines. In B. Schölkopf, C. Burges, and A. Smola, editors, Advances in Kernel Methods -
Support Vector Learning. MIT Press, 1998.

[64] Qiong Ren, Yi Mu, and Willy Susilo. Mitigating phishing with ID-based online/offline authenti-
cation. In AISC '08: Proceedings of the Sixth Australasian Conference on Information Security,
pages 59–64, Darlinghurst, Australia, 2008. Australian Computer Society, Inc.

[65] Frank Rosenblatt. Principles of Neurodynamics: Perceptrons and the Theory of Brain Mecha-
nisms. Spartan Books, 1962.

[66] G. Salton, A. Wong, and C. S. Yang. A vector space model for automatic indexing. Commun.
ACM, 18(11):613–620, 1975.

[67] Bernhard Schölkopf and Alexander J. Smola. Learning with Kernels: Support Vector Machines,
Regularization, Optimization, and Beyond. MIT Press, Cambridge, MA, USA, 2001.

[68] D. Sculley and Gabriel M. Wachman. Relaxed online SVMs for spam filtering. In SIGIR
'07: Proceedings of the 30th Annual International ACM SIGIR Conference on Research and
Development in Information Retrieval, pages 415–422, New York, NY, USA, 2007. ACM.

[69] Fabrizio Sebastiani. Text categorization. In Alessandro Zanasi, editor, Text Mining and its
Applications to Intelligence, CRM and Knowledge Management, pages 109–129. WIT Press,
Southampton, UK, 2005.

[70] S. Sheng, B. Wardman, G. Warner, L. Cranor, J. Hong, and C. Zhang. Empirical analysis of
phishing blacklists. In Proceedings of the Conference on Email and Anti-Spam 2009, 2009.

[71] Steve Sheng, Bryant Magnien, Ponnurangam Kumaraguru, Alessandro Acquisti, Lorrie Faith
Cranor, Jason Hong, and Elizabeth Nunge. Anti-Phishing Phil: the design and evaluation of a
game that teaches people not to fall for phish. In SOUPS '07: Proceedings of the 3rd Symposium
on Usable Privacy and Security, pages 88–99, New York, NY, USA, 2007. ACM.

[72] Rachna Dhamija and J. D. Tygar. The battle against phishing: Dynamic Security Skins. In
SOUPS '05: Proceedings of the 2005 Symposium on Usable Privacy and Security, pages 77–88.
ACM Press, 2005.

[73] Orhan Sonmez. Learning game theoretic model parameters applied to adversarial classification.
Master's thesis, Saarland University, 2008.

[74] Kari Torkkola. Feature extraction by non-parametric mutual information maximization. Jour-
nal of Machine Learning Research, 3:1415–1438, 2003.

[75] Alexey Tsymbal, Mykola Pechenizkiy, Pádraig Cunningham, and Seppo Puuronen. Dynamic
integration of classifiers for handling concept drift. Inf. Fusion, 9(1):56–68, 2008.

[76] Theodore L. Turocy. A dynamic homotopy interpretation of the logistic quantal response
equilibrium correspondence. Games and Economic Behavior, 51(2):243–263, May 2005.

[77] Theodore L. Turocy. Using quantal response to compute Nash and sequential equilibria. Eco-
nomic Theory, 42(1), 2010.

[78] Sun Tzu. The Art of War. 600 B.C.

[79] V. Vapnik and A. Chervonenkis. On the uniform convergence of relative frequencies of events
to their probabilities. Theory of Probability and its Applications, 16:264–280, 1971.

[80] Vladimir N. Vapnik. The Nature of Statistical Learning Theory (Information Science and
Statistics). Springer, 1999.

[81] Vishal Vatsa, Shamik Sural, and Arun K. Majumdar. A game-theoretic approach to credit
card fraud detection. In ICISS, pages 263–276, 2005.


[82] J. D. Velásquez and V. Palade. Adaptive Web Sites: A Knowledge Extraction from Web Data
Approach. IOS Press, 2008.

[83] J. D. Velásquez, Sebastián A. Ríos, Alejandro Bassi, Hiroshi Yasuda, and Terumasa Aoki.
Towards the identification of keywords in the web site text content: A methodological approach.
International Journal of Web Information Systems, 1(1):53–57, 2005.

[84] J. D. Velásquez, H. Yasuda, T. Aoki, and R. Weber. A new similarity measure to understand
visitor behavior in a web site. IEICE Transactions on Information and Systems, Special Issue
on Information Processing Technology for Web Utilization, E87-D(2):389–396, 2004.

[85] Haixun Wang, Wei Fan, Philip S. Yu, and Jiawei Han. Mining concept-drifting data streams
using ensemble classifiers. In KDD '03: Proceedings of the Ninth ACM SIGKDD International
Conference on Knowledge Discovery and Data Mining, pages 226–235, New York, NY, USA,
2003. ACM.

[86] Ian H. Witten and Eibe Frank. Data Mining: Practical Machine Learning Tools and Techniques.
Morgan Kaufmann, San Francisco, 2nd edition, 2005.

[87] Xiaoyun Wu and Rohini Srihari. Incorporating prior knowledge with weighted margin support
vector machines. In KDD '04: Proceedings of the Tenth ACM SIGKDD International Conference
on Knowledge Discovery and Data Mining, pages 326–333, New York, NY, USA, 2004. ACM.

[88] Dongshan Xing and Mark Girolami. Employing latent Dirichlet allocation for fraud detection
in telecommunications. Pattern Recognition Letters, 28(13):1727–1734, 2007.

[89] Peng Zhang, Xingquan Zhu, and Yong Shi. Categorizing and mining concept drifting data
streams. In KDD '08: Proceedings of the 14th ACM SIGKDD International Conference on
Knowledge Discovery and Data Mining, pages 812–820, New York, NY, USA, 2008. ACM.

[90] Yue Zhang, S. Egelman, Lorrie F. Cranor, and Jason Hong. Phinding phish: Evaluating anti-
phishing tools. In Proceedings of the 14th Annual Network and Distributed System Security
Symposium (NDSS 2007), 28 February - 2 March 2007.

[91] Yue Zhang, Jason I. Hong, and Lorrie F. Cranor. CANTINA: a content-based approach to
detecting phishing web sites. In WWW '07: Proceedings of the 16th International Conference
on World Wide Web, pages 639–648, New York, NY, USA, 2007. ACM.

Appendix

Appendix A

Algorithms and Methods

A.1 Computing Sequential Equilibria

Recently, a numerical approximation of sequential equilibrium refinements has been proposed by
Turocy in [77], based on previous work defined in [53, 76], using a transformation of the logit
Quantal Response Equilibrium (QRE) correspondence. In the following, the QRE approximation
is reviewed, a brief reference on the computational complexity of sequential equilibria
is introduced, and computational solutions are shown for the Beer-Quiche game and the proposed
adversarial signaling game.

A.1.1 Quantal Response Equilibria

As originally stated by [51], the set of logit Agent Quantal Response Equilibria (logit-AQRE) is a
correspondence mapping denoted by

F : (0, \infty) \to [0, 1]^{A(h)}, \quad a \in A(h), \; h \in H

Theorem 1 (logit-AQRE and Sequential Equilibria). For every finite extensive form game, every
limit point of a sequence of logit-AQREs with \lambda going to infinity corresponds to the strategy of a
sequential equilibrium assessment of the game [51].

Theorem 1 states that the principal branch of the QRE correspondence represents the
sequential equilibria, as extensively described in [77]. Furthermore, the logit-AQRE is introduced
in [51] as a statistical version of sequential rationality.

In [77], an implementation of Quantal Response Equilibria (QRE) has been proposed for
approximating sequential equilibria in signaling games [18]. The main idea in QRE is to approximate
the strategy probabilities with a logit function by tracing the zeroes of a system of equations
H. Mixed strategies are expressed in terms of a parameter \lambda, as stated in the following equation:

\pi_a = \frac{e^{\lambda U_A(a)}}{\sum_{b \in I(a)} e^{\lambda U_A(b)}}  (A.1)

Here, U_A(a) is the utility for agent A of committing action a when information set I(a)
is reached. Also, \pi denotes the behaviour strategy profile: for each action a, \pi_a is the
probability that action a is played when its information set is reached.

The system of equations H = \{H_{ab}, H_h\}, defined for every information set h, every a \in h,
and every b \in h with b \neq a, is constructed from

\frac{\pi_a}{\pi_b} = e^{\lambda (U_a(\pi) - U_b(\pi))}

\ln(\pi_a) - \ln(\pi_b) = \lambda (U_a(\pi) - U_b(\pi))

\sum_{b \in h} \pi_b = 1

Therefore, the final system of equations H is defined by

H_{ab}(\lambda, \pi) \equiv \ln(\pi_a) - \ln(\pi_b) - \lambda (U_a(\pi) - U_b(\pi)) = 0 (A.2)

and

H_h(\lambda, \pi) \equiv \sum_{b \in h} \pi_b - 1 = 0 (A.3)

Using these expressions, beliefs can be accurately computed thanks to the logarithmic
properties of the equation system whose zeroes are traced. This procedure can be done using a
predictor-corrector method [77]: the predictor step uses the Jacobian of the system with respect
to (\lambda, \pi), which is then evaluated at a point (\lambda^*, \pi^*), determined
iteratively, forming a logit-AQRE.

Let p_n(\pi) be the probability that a node n is reached if strategy profile \pi is played. The
most delicate step in computing the sequential equilibria is to compute the beliefs accurately
for information sets for which p_n(\pi) \to 0 along the branch of the logit correspondence.
For this, the logarithmic implementation of action probabilities is ideal in terms of numerical
convergence. Finally, the values for the sequential equilibria are approximated as \lambda \to \infty. This
numerical algorithm has been implemented in Gambit [52], an open-source project for computing
equilibria of finite games. Further details on this topic and on numerical experiments are
presented in [77].
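To make the \lambda-tracing idea tangible, the sketch below iterates the logit response at an increasing sequence of \lambda values for a small symmetric 2x2 game, warm-starting each solution from the previous one. The payoff matrix is hypothetical (a prisoner's-dilemma shape, so the branch should converge to the dominant action), and simple fixed-point iteration stands in for the predictor-corrector continuation of [77]:

```python
import math

# Hypothetical symmetric 2x2 payoffs for the row player: a prisoner's
# dilemma, so the logit branch should converge to "D" as lam grows.
PAYOFF = {("C", "C"): 3.0, ("C", "D"): 0.0,
          ("D", "C"): 5.0, ("D", "D"): 1.0}
ACTIONS = ["C", "D"]

def logit(utils, lam):
    """Logit choice probabilities (softmax over utilities)."""
    m = max(utils.values())
    w = {a: math.exp(lam * (u - m)) for a, u in utils.items()}
    s = sum(w.values())
    return {a: w[a] / s for a in utils}

def expected_utils(opponent):
    """Expected payoff of each own action against a mixed opponent."""
    return {a: sum(opponent[b] * PAYOFF[(a, b)] for b in ACTIONS)
            for a in ACTIONS}

def qre(lam, start, iters=500):
    """Fixed-point iteration of the logit response (symmetric play)."""
    p = dict(start)
    for _ in range(iters):
        p = logit(expected_utils(p), lam)
    return p

# Trace the correspondence: warm-start each lam from the previous solution.
p = {"C": 0.5, "D": 0.5}
for lam in [0.0, 0.5, 1.0, 2.0, 5.0, 10.0, 50.0]:
    p = qre(lam, p)
    print(f"lam={lam:5.1f}  P(D)={p['D']:.4f}")
```

As \lambda grows, P(D) approaches 1, the unique Nash equilibrium of this game. Gambit's implementation instead follows the zeroes of the system H with a predictor-corrector method, which stays accurate precisely where naive iteration in probability space would lose precision; the sketch only conveys the warm-started \lambda schedule.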

An alternative method for computing sequential equilibria in two-player games was developed
in [54]. However, this method is designed only for games with two strategic agents, a limited
number of types, and zero-sum payoffs. Here, the main idea is to solve the optimization problem
associated with finding the equilibrium by means of linear programming, with which the sequential
equilibrium can be found in polynomial time.

A.1.2 The Complexity of Sequential Equilibria

In recent years, developments in computational complexity theory have shown that computing
a Nash equilibrium is PPAD-complete^1 [20]. As signaling games have their own Nash equilibrium
concept related to the dynamic interaction between agents, their equilibrium refinements have
been studied in [37], where deciding the existence of a pure perfect Bayesian equilibrium (PBE) is
shown to be NP-complete. However, as stated by [20], whether computing a sequential equilibrium
in a strategic-form game is in PPAD is left open.

A.1.3 The Beer-Quiche Game Sequential Equilibria Numerical Approximation

Figure A.1 represents the Beer-Quiche game, proposed by Cho and Kreps in [18]. This signaling
game has been widely used in game theory courses to explain the basic components of these games,
such as the Bayesian updating of beliefs and the sequential rationality equilibrium refinement.

Here, this game was solved using the QRE algorithm to approximate the sequential equilibria
as \lambda \to \infty (in this case, \lambda = 1000). Numerical results obtained for each evaluation of \lambda are
presented in Table A.1. Using this notation, and as described in the original formulation [18],
the players' strategies are the following:
^1
The reduction can be demonstrated through the problem of finding an approximate fixed point of a Brouwer
function, a common problem in game theory when solving for Nash equilibria.


Figure A.1: Beer-Quiche game.

1:1:1 A wimpy player 1 chooses to order a quiche.

1:1:2 A wimpy player 1 chooses to order a beer.

1:2:1 A surly player 1 chooses to order a quiche.

1:2:2 A surly player 1 chooses to order a beer.

2:1:1 Player 2 chooses to duel against a quiche-type player 1.

2:1:2 Player 2 chooses not to duel against a quiche-type player 1.

2:2:1 Player 2 chooses to duel against a beer-type player 1.

2:2:2 Player 2 chooses not to duel a beer-type player 1.
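Since the payoffs of Figure A.1 are not reproduced here, the check below assumes the textbook numbers commonly used for this game (prior 0.1 on the wimpy type; player 1 earns 2 for avoiding the duel plus 1 for the preferred breakfast; player 2 earns 1 for duelling a wimpy type or for not duelling a surly one). Under those assumptions, a short script can verify that pooling on beer, supported by a pessimistic off-path belief after quiche, is sequentially rational:

```python
from fractions import Fraction

# Assumed (textbook) parameters; the actual payoffs of Figure A.1 are not
# reproduced in this appendix, so treat these numbers as illustrative.
PRIOR = {"wimpy": Fraction(1, 10), "surly": Fraction(9, 10)}
PREFERRED = {"wimpy": "quiche", "surly": "beer"}

def u1(ptype, message, duel):
    """Player 1: 2 for avoiding the duel, plus 1 for the preferred breakfast."""
    return (0 if duel else 2) + (1 if message == PREFERRED[ptype] else 0)

def u2(ptype, duel):
    """Player 2: 1 for duelling a wimpy type or sparing a surly one, else 0."""
    return 1 if duel == (ptype == "wimpy") else 0

def best_response_2(belief_wimpy):
    """Player 2 duels iff the posterior on 'wimpy' makes duelling better."""
    def eu(duel):
        return (belief_wimpy * u2("wimpy", duel)
                + (1 - belief_wimpy) * u2("surly", duel))
    return eu(True) > eu(False)

# Pooling on beer: the on-path belief after "beer" equals the prior (Bayes),
# while the off-path belief after "quiche" may put high weight on "wimpy".
duel_after_beer = best_response_2(PRIOR["wimpy"])     # posterior 0.1: no duel
duel_after_quiche = best_response_2(Fraction(9, 10))  # hypothetical off-path belief

# Neither type gains by deviating from "beer" to "quiche".
for t in PRIOR:
    assert u1(t, "beer", duel_after_beer) >= u1(t, "quiche", duel_after_quiche), t
print("pooling on beer is sequentially rational under the assumed payoffs")
```

This is consistent with the limiting behaviour of the QRE trace in Table A.1: both types end up playing beer with probability 1, player 2 duels after quiche and does not duel after beer.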

Table A.1: QRE approximation for the Beer-Quiche game (selected values of \lambda).

lambda    1:1:1     1:1:2     1:2:1     1:2:2     2:1:1     2:1:2     2:2:1     2:2:2

0.000000  0.5       0.5       0.5       0.5       0.5       0.5       0.5       0.5
0.018463  0.504615  0.495385  0.495384  0.504616  0.496323  0.503677  0.496292  0.503708
0.038772  0.509689  0.490311  0.490305  0.509695  0.492315  0.507685  0.49218   0.50782
0.061111  0.515263  0.484737  0.484717  0.515283  0.487952  0.512048  0.487616  0.512384
0.085682  0.521379  0.478621  0.478564  0.521436  0.483212  0.516788  0.482551  0.517449
0.112707  0.528083  0.471917  0.471789  0.528211  0.478071  0.521929  0.476929  0.523071
0.142428  0.535418  0.464582  0.464324  0.535676  0.472509  0.527491  0.470686  0.529314
0.175113  0.543428  0.456572  0.456094  0.543906  0.466508  0.533492  0.463755  0.536245
0.211054  0.552152  0.447848  0.447015  0.552985  0.460057  0.539943  0.456062  0.543938
0.250570  0.561623  0.438377  0.43699   0.56301   0.453151  0.546849  0.447527  0.552473
0.294007  0.571865  0.428135  0.425911  0.574089  0.445796  0.554204  0.438067  0.561933
0.341745  0.582883  0.417117  0.413657  0.586343  0.438016  0.561984  0.427593  0.572407
0.394188  0.594664  0.405336  0.400089  0.599911  0.429855  0.570145  0.41602   0.58398
0.451772  0.607161  0.392839  0.385054  0.614946  0.42139   0.57861   0.403263  0.596737
0.514950  0.620284  0.379716  0.368386  0.631614  0.412743  0.587257  0.389251  0.610749
0.584186  0.63389   0.36611   0.349911  0.650089  0.404099  0.595901  0.373933  0.626067
0.659916  0.647755  0.352245  0.329456  0.670544  0.395732  0.604268  0.357294  0.642706
0.742501  0.661554  0.338446  0.306873  0.693127  0.388046  0.611954  0.339375  0.660625
0.832110  0.674817  0.325183  0.28208   0.71792   0.38163   0.61837   0.320306  0.679694
0.928519  0.686877  0.313123  0.255122  0.744878  0.377328  0.622672  0.300346  0.699654
1.136786  0.696799  0.303201  0.226274  0.773726  0.376314  0.623686  0.279938  0.720062
1.243068  0.703323  0.296677  0.196163  0.803837  0.380108  0.619892  0.259754  0.740246
1.345293  0.704906  0.295094  0.165834  0.834166  0.390418  0.609582  0.240657  0.759343
1.439842  0.699943  0.300057  0.136619  0.863381  0.408715  0.591285  0.223514  0.776486
1.525384  0.687117  0.312883  0.109769  0.890231  0.435714  0.564286  0.208886  0.791114
1.603312  0.665656  0.334344  0.0860978 0.913902  0.471117  0.528883  0.196826  0.803174
1.676991  0.635356  0.364644  0.0659017 0.934098  0.513779  0.486221  0.186937  0.813063
1.750705  0.596521  0.403479  0.049124  0.950876  0.562006  0.437994  0.17858   0.82142
1.828895  0.55      0.45      0.0355445 0.964456  0.613747  0.386253  0.171058  0.828942
1.915730  0.49726   0.50274   0.0248729 0.975127  0.666727  0.333274  0.16373   0.83627
2.014918  0.44034   0.55966   0.0167694 0.983231  0.718641  0.281359  0.15606   0.84394
2.129665  0.381621  0.618379  0.0108513 0.989149  0.767418  0.232582  0.147644  0.852356
2.262708  0.323483  0.676517  0.0067115 0.993288  0.811455  0.188546  0.138234  0.861766
2.416360  0.267996  0.732004  0.0039492 0.996051  0.849774  0.150226  0.127739  0.872261
2.592565  0.216751  0.783249  0.0021992 0.997801  0.882053  0.117947  0.116223  0.883777
2.793009  0.170847  0.829153  0.0011523 0.998848  0.908533  0.0914667 0.10389   0.89611
3.019328  0.130954  0.869047  0.0005647 0.999435  0.929847  0.0701528 0.0910492 0.908951
3.273371  0.0973809 0.902619  0.0002572 0.999743  0.946806  0.0531939 0.0780741 0.921926
3.557438  0.0701057 0.929894  0.0001082 0.999892  0.960218  0.0397817 0.0653593 0.934641
...       ...       ...       ...       ...       ...       ...       ...       ...
596.620117 ~0       1         ~0        1         1         ~0        ~0        1


656,291683 7,78E-255 1 0 1 1 7,78E-255 5,16E-203 1

721,930406 9,47E-281 1 0 1 1 9,47E-281 9,57E-224 1

794,133001 0 1 0 1 1 0 1,5E-246 1

873,555856 0 1 0 1 1 0 1,23E-271 1

960,920996 0 1 0 1 1 0 3,13E-299 1

1057,02265 0 1 0 1 1 0 0 1

Lambda 1:1:1 1:1:2 1:2:1 1:2:2 2:1:1 2:1:2 2:2:1 2:2:2

A.1.4 The Adversarial Signaling Game Sequential Equilibria Numerical Approximation

The adversarial signaling game was solved using the QRE algorithm, which approximates the sequential equilibria as λ → ∞ (in this case, λ = 1000). Numerical results obtained for each evaluation of λ are presented in Table A.2. Following the notation described in Section A.2, players' strategies are denoted as follows:

1:1:1 A regular player chooses to send a fraud type of message.

1:1:2 A malicious player chooses to send a fraud type of message.

1:2:1 A malicious player chooses to send a non-fraud type of message, changing its initial strategy by transforming the message xi into xj.

1:2:2 A regular player chooses to send a non-fraud type of message.

2:1:1 The Classifier labels a fraud type of message as malicious (C(xj) = +1).

2:1:2 The Classifier labels a fraud type of message as regular (C(xj) = -1).

2:2:1 The Classifier labels a non-fraud type of message as malicious (C(xj) = +1).

2:2:2 The Classifier labels a non-fraud type of message as regular (C(xj) = -1).
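For reference, the logit quantal response that underlies this approximation assigns each action a probability proportional to exp(λ·u); as λ grows, play converges to a best response, which is how the sequential equilibrium limit is approached in Table A.2. The following is a minimal sketch with hypothetical utility values, not the actual payoffs of this game:

```python
import math

def logit_response(utilities, lam):
    """Logit quantal response: P(action a) is proportional to exp(lam * u(a)).

    lam = 0 yields uniform mixing; lam -> infinity approaches
    a best response, i.e. the sequential equilibrium limit.
    """
    m = max(lam * u for u in utilities)          # stabilize the exponentials
    weights = [math.exp(lam * u - m) for u in utilities]
    total = sum(weights)
    return [w / total for w in weights]

# Hypothetical utilities for the Classifier's two labels at one
# information set: C(x) = +1 vs. C(x) = -1.
u = [1.0, 0.4]
print(logit_response(u, 0.0))     # lam = 0: uniform play [0.5, 0.5]
print(logit_response(u, 1000.0))  # large lam: near-pure best response
```

Tracing this response for increasing λ, starting from the uniform mixing at λ = 0, is precisely how the homotopy method of the QRE approximation follows the equilibrium branch.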

Table A.2: QRE approximation for the Adversarial Signaling game.

Lambda 1:1:1 1:1:2 1:2:1 1:2:2 2:1:1 2:1:2 2:2:1 2:2:2

0 1 0,5 0,5 1 0,5 0,5 0,5 0,5

0,000577 1 0,501354 0,498646 1 0,500102 0,499898 0,489487 0,510513

0,001212 1 0,502803 0,497197 1 0,500226 0,499774 0,477934 0,522066


0,001909 1 0,504347 0,495653 1 0,500377 0,499623 0,465262 0,534738

0,002674 1 0,505985 0,494015 1 0,500559 0,499441 0,451394 0,548606

0,003513 1 0,507711 0,492289 1 0,500776 0,499224 0,436262 0,563738

0,004431 1 0,509521 0,490479 1 0,501035 0,498965 0,419811 0,580189

0,005436 1 0,511404 0,488596 1 0,50134 0,49866 0,402006 0,597994

0,006533 1 0,513348 0,486652 1 0,501698 0,498302 0,382839 0,617161

0,007729 1 0,515336 0,484664 1 0,502114 0,497886 0,36234 0,63766

0,00903 1 0,517352 0,482648 1 0,502594 0,497406 0,34058 0,65942

0,010442 1 0,519375 0,480625 1 0,503143 0,496857 0,317681 0,682319

0,011972 1 0,521384 0,478616 1 0,503765 0,496235 0,293819 0,706181

0,013625 1 0,523362 0,476638 1 0,504466 0,495534 0,269224 0,730776

0,01541 1 0,525291 0,474709 1 0,505249 0,494751 0,244173 0,755827

0,017333 1 0,527161 0,472839 1 0,506119 0,493881 0,218987 0,781013

0,019403 1 0,528967 0,471033 1 0,507081 0,492919 0,194015 0,805985

0,021629 1 0,530711 0,469289 1 0,508142 0,491858 0,16962 0,83038

0,024025 1 0,532406 0,467594 1 0,50931 0,49069 0,14616 0,85384

0,026603 1 0,534069 0,465931 1 0,510597 0,489403 0,123974 0,876026

0,02938 1 0,535728 0,464272 1 0,512019 0,487981 0,103363 0,896637

0,032375 1 0,537413 0,462587 1 0,513596 0,486404 0,0845767 0,915423

0,03561 1 0,539161 0,460839 1 0,515355 0,484645 0,0678001 0,9322

0,039111 1 0,541008 0,458992 1 0,517325 0,482675 0,0531451 0,946855

0,042907 1 0,542984 0,457016 1 0,519545 0,480455 0,0406458 0,959354

0,047031 1 0,545116 0,454884 1 0,522057 0,477943 0,030259 0,969741

0,05152 1 0,547418 0,452582 1 0,524905 0,475095 0,0218691 0,978131

0,056415 1 0,549891 0,450109 1 0,528139 0,471861 0,0152995 0,984701

0,061763 1 0,552518 0,447482 1 0,531807 0,468193 0,0103274 0,989673

0,067614 1 0,555263 0,444737 1 0,535954 0,464046 0,0067024 0,993298

0,074025 1 0,558072 0,441928 1 0,54062 0,45938 0,0041657 0,995834

0,081057 1 0,560871 0,439129 1 0,545833 0,454167 0,0024688 0,997531

0,088778 1 0,563571 0,436429 1 0,551609 0,448391 0,0013885 0,998611

0,097264 1 0,566073 0,433927 1 0,557948 0,442052 0,0007373 0,999263

0,106596 1 0,568267 0,431733 1 0,564829 0,435171 0,0003674 0,999633

0,116867 1 0,570045 0,429955 1 0,572208 0,427792 0,0001708 0,999829

0,128176 1 0,571303 0,428697 1 0,580018 0,419982 7,35415 0,999926

0,140636 1 0,571949 0,428051 1 0,588164 0,411836 2,91038 0,999971

0,154368 1 0,571911 0,428089 1 0,596534 0,403466 1,04976 0,99999

0,169508 1 0,571142 0,428858 1 0,60499 0,39501 0,341923 0,999997

0,186205 1 0,569625 0,430375 1 0,613389 0,386611 0,0099547 0,999999

0,20462 1 0,567381 0,432619 1 0,62158 0,37842 0,0256163 1

0,224931 1 0,564461 0,435539 1 0,629421 0,370579 0,0057548 1

0,247332 1 0,56095 0,43905 1 0,636789 0,363211 0,0011135 1

0,272032 1 0,556955 0,443045 1 0,643585 0,356415 0,0001828 1

0,299262 1 0,552599 0,447401 1 0,649743 0,350257 0,000025 1

0,329272 1 0,548007 0,451993 1 0,655228 0,344772 0,0000028 1


0,362335 1 0,543299 0,456701 1 0,660036 0,339964 0,0000003 1

0,398752 1 0,538582 0,461418 1 0,664191 0,335809 0 1

0,438852 1 0,533945 0,466055 1 0,667733 0,332267 0 1

0,482997 1 0,529459 0,470541 1 0,670716 0,329284 0 1

0,531586 1 0,525175 0,474825 1 0,6732 0,3268 0 1

0,585058 1 0,521129 0,478871 1 0,675248 0,324752 0 1

0,643896 1 0,517339 0,482661 1 0,67692 0,32308 0 1

0,708632 1 0,513815 0,486185 1 0,678273 0,321727 0 1

0,779854 1 0,510556 0,489444 1 0,679357 0,320643 0 1

0,858206 1 0,507557 0,492443 1 0,680217 0,319783 0 1

0,944398 1 0,504806 0,495194 1 0,680893 0,319107 0 1

1,039215 1 0,50229 0,49771 1 0,681418 0,318582 0 1

1,143515 1 0,499994 0,500006 1 0,681819 0,318181 0 1

1,258246 1 0,497902 0,502098 1 0,682121 0,317879 0 1

1,384451 1 0,495999 0,504001 1 0,682344 0,317656 0 1

1,523276 1 0,494269 0,505731 1 0,682502 0,317498 0 1

1,675983 1 0,492698 0,507302 1 0,68261 0,31739 0 1

1,843959 1 0,491272 0,508728 1 0,682679 0,317321 0 1

2,028731 1 0,489977 0,510023 1 0,682717 0,317283 0 1

2,231979 1 0,488803 0,511197 1 0,68273 0,31727 0 1

2,455551 1 0,487738 0,512262 1 0,682726 0,317274 0 1

2,701477 1 0,486772 0,513228 1 0,682709 0,317291 0 1

2,971994 1 0,485895 0,514105 1 0,682681 0,317319 0 1

3,269562 1 0,485101 0,514899 1 0,682647 0,317353 0 1

3,596884 1 0,48438 0,51562 1 0,682608 0,317392 0 1

3,956938 1 0,483726 0,516274 1 0,682566 0,317434 0 1

4,352995 1 0,483132 0,516868 1 0,682523 0,317477 0 1

4,788656 1 0,482594 0,517406 1 0,682479 0,317521 0 1

5,267882 1 0,482105 0,517895 1 0,682436 0,317564 0 1

5,79503 1 0,481662 0,518338 1 0,682394 0,317606 0 1

6,374891 1 0,481259 0,518741 1 0,682353 0,317647 0 1

7,012737 1 0,480894 0,519106 1 0,682314 0,317686 0 1

7,714366 1 0,480563 0,519437 1 0,682277 0,317723 0 1

8,486158 1 0,480261 0,519739 1 0,682241 0,317759 0 1

9,335128 1 0,479988 0,520012 1 0,682208 0,317792 0 1

10,268994 1 0,47974 0,52026 1 0,682177 0,317823 9,88131e-324 1

11,296246 1 0,479514 0,520486 1 0,682148 0,317852 0 1

12,426223 1 0,47931 0,52069 1 0,682121 0,317879 0 1

13,669196 1 0,479124 0,520876 1 0,682096 0,317904 0 1

15,036467 1 0,478955 0,521045 1 0,682073 0,317927 0 1

16,540463 1 0,478801 0,521199 1 0,682051 0,317949 0 1

18,194859 1 0,478662 0,521338 1 0,682032 0,317968 0 1

20,014695 1 0,478535 0,521465 1 0,682013 0,317987 0 1

22,016513 1 0,47842 0,52158 1 0,681997 0,318003 0 1


24,218513 1 0,478316 0,521684 1 0,681981 0,318019 0 1

26,640712 1 0,478221 0,521779 1 0,681967 0,318033 0 1

29,305132 1 0,478134 0,521866 1 0,681954 0,318046 0 1

32,235992 1 0,478056 0,521944 1 0,681942 0,318058 0 1

35,459939 1 0,477984 0,522016 1 0,681931 0,318069 0 1

39,00628 1 0,47792 0,52208 1 0,681921 0,318079 0 1

42,907255 1 0,477861 0,522139 1 0,681912 0,318088 0 1

47,198327 1 0,477807 0,522193 1 0,681904 0,318096 0 1

51,918507 1 0,477759 0,522241 1 0,681896 0,318104 0 1

57,110704 1 0,477714 0,522286 1 0,681889 0,318111 0 1

62,82212 1 0,477674 0,522326 1 0,681883 0,318117 0 1

69,104678 1 0,477638 0,522362 1 0,681877 0,318123 0 1

76,015492 1 0,477604 0,522396 1 0,681872 0,318128 0 1

83,617387 1 0,477574 0,522426 1 0,681867 0,318133 0 1

91,979472 1 0,477547 0,522453 1 0,681863 0,318137 0 1

101,177765 1 0,477522 0,522478 1 0,681859 0,318141 0 1

111,295887 1 0,477499 0,522501 1 0,681855 0,318145 0 1

122,425821 1 0,477479 0,522521 1 0,681852 0,318148 0 1

134,668749 1 0,47746 0,52254 1 0,681849 0,318151 0 1

148,135969 1 0,477443 0,522557 1 0,681846 0,318154 0 1

162,949911 1 0,477427 0,522573 1 0,681843 0,318157 0 1

179,245247 1 0,477413 0,522587 1 0,681841 0,318159 0 1

197,170117 1 0,477401 0,522599 1 0,681839 0,318161 0 1

216,887474 1 0,477389 0,522611 1 0,681837 0,318163 0 1

238,576567 1 0,477378 0,522622 1 0,681835 0,318165 0 1

262,434569 1 0,477369 0,522631 1 0,681834 0,318166 0 1

288,678371 1 0,47736 0,52264 1 0,681832 0,318168 0 1

317,546553 1 0,477352 0,522648 1 0,681831 0,318169 0 1

349,301553 1 0,477345 0,522655 1 0,68183 0,31817 0 1

384,232054 1 0,477338 0,522662 1 0,681829 0,318171 0 1

422,655604 1 0,477332 0,522668 1 0,681828 0,318172 0 1

464,921509 1 0,477327 0,522673 1 0,681827 0,318173 0 1

511,414005 1 0,477322 0,522678 1 0,681826 0,318174 0 1

562,555751 1 0,477318 0,522682 1 0,681826 0,318174 0 1

618,811671 1 0,477313 0,522687 1 0,681825 0,318175 0 1

680,693183 1 0,47731 0,52269 1 0,681824 0,318176 0 1

748,762846 1 0,477306 0,522694 1 0,681824 0,318176 0 1

823,639475 1 0,477303 0,522697 1 0,681823 0,318177 0 1

906,003768 1 0,477301 0,522699 1 0,681823 0,318177 0 1

996,604489 1 0,477298 0,522702 1 0,681822 0,318178 0 1

Lambda 1:1:1 1:1:2 1:2:1 1:2:2 2:1:1 2:1:2 2:2:1 2:2:2


A.2 From Learning to Playing

In the following, the general modeling path from classification learning over adversarial data to the strategic interaction between the Classifier and the Adversary is presented. First, the confusion matrix is detailed in a learning schema. Then, a complete information model derived from this learning schema is exposed, in which the extension of the confusion matrix leads to a competitive interaction between agents. Finally, two scenarios for the incomplete information interaction derived from the confusion matrix are presented: using the utility function modeling for the AAO-SVM algorithm, a numerical example shows how the game is settled as a two-type signaling game. This example can be directly extended to the utility function modeling for QRE-based algorithms.

A.2.1 Confusion Matrix and Classification Learning

In order to settle some basics for the inclusion of strategic interaction into machine learning, consider the confusion matrix presented in Table A.3 for a general binary classification task over phishing email. Here, if a message x is classified as malicious (C(x) = +1) and the real class is y = Malicious or y = Regular, then the Classifier obtains a utility M or incurs a cost -R, respectively. Conversely, if the message is classified as regular (C(x) = -1) and the real class is y = Malicious or y = Regular, then the Classifier incurs a cost -M or obtains a utility R, respectively.

Table A.3: Cost matrix for phishing email filtering.

            y = Malicious    y = Regular
C(x) = +1        M               -R
C(x) = -1       -M                R

In this setting, only the Classifier's side of the interaction between Adversary and Classifier is represented in terms of costs and benefits, so it is not possible to capture the Adversary's strategies.
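As a minimal numerical illustration of this cost structure (all values below — the payoffs M and R, the class prior, and the detection rates — are hypothetical), the Classifier's expected payoff under the Table A.3 scheme can be computed as:

```python
def expected_utility(p_mal, tpr, fpr, M, R):
    """Expected Classifier payoff: utility M for flagging a malicious
    message, cost -M for missing one, cost -R for flagging a regular
    message, utility R for correctly passing one.

    tpr = P(C(x) = +1 | y = Malicious), fpr = P(C(x) = +1 | y = Regular).
    """
    malicious = p_mal * (tpr * M - (1 - tpr) * M)
    regular = (1 - p_mal) * (fpr * (-R) + (1 - fpr) * R)
    return malicious + regular

# Hypothetical filter: flags 90% of malicious and 5% of regular mail,
# with 20% of the traffic malicious and unit payoffs.
print(expected_utility(0.2, tpr=0.90, fpr=0.05, M=1.0, R=1.0))  # about 0.88
```

Maximizing this expectation is the learning-only view; the game-theoretic sections that follow let the Adversary react to the chosen (tpr, fpr) operating point.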

A.2.2 Complete Information Interaction

In this strategic interaction, an expansion of the confusion matrix must be considered to account for the Adversary's costs and benefits associated with the Classifier's actions. The payoff matrix is presented in Table A.4. The malicious Adversary (y = Malicious) has an opposite perception of wins and losses, since the Classifier is trying to defeat it; its payoffs can be considered as M and -M for C(x) = -1 and C(x) = +1, respectively. The regular Adversary, in turn, shares the Classifier's perception of wins and losses, with payoffs -R and R for mistakes and correct decisions, respectively.


Table A.4: Confusion matrix and payoffs in a complete information game for phishing email filtering.

            y = Malicious    y = Regular
C(x) = +1      M, -M           -R, -R
C(x) = -1     -M, M             R, R

A.2.3 Incomplete Information Interaction

Furthermore, in an incomplete information game, the Adversary's types take the previous payoff matrix to a new level of complexity. In a simple case with two types, a payoff matrix is needed for each combination of types. In the incomplete information game definition, let type 1 represent malicious message senders and type 2 represent regular message senders. One confusion matrix is associated with types whose message turns out to be non-malicious, and the other with types whose message turns out to be malicious.

A.2.4 AAO-SVM Utility-based Example

In this case, for C(x) = +1, payoffs are (M_M, -M_M) for the Classifier and the Adversary, respectively, when a malicious type sends a malicious message, and (-R_M, -R_M) when a regular type sends a malicious message. When C(x) = -1, payoffs for the Classifier and the Adversary are (-M_M, M_M) and (R_M, R_M) for a malicious type sending malicious messages and a regular type sending malicious messages, respectively.

Table A.5: Confusion matrix and payoffs for non-malicious messages in phishing filtering with incomplete information.

            Type malicious sends non-malicious message    Type regular sends non-malicious message
C(x) = +1      M_NM, -M_NM                                   -R_NM, -R_NM
C(x) = -1     -M_NM, M_NM                                     R_NM, R_NM

Summarizing, the signaling game is presented in Figure A.2; using a given set of payoffs, the mixed strategies of the game were determined. Here, optimal strategies are computed(2) based on an initial set of values for the previously explained payoffs(3).

Table A.6: Confusion matrix and payoffs for malicious messages in phishing filtering with incomplete information.

            Type malicious sends malicious message    Type regular sends malicious message
C(x) = +1      M_M, -M_M                                 -R_M, -R_M
C(x) = -1     -M_M, M_M                                   R_M, R_M

Figure A.2: Dynamic interaction between the Classifier and the Adversary.

As shown in Figure A.2, there are two types of adversaries and a Classifier. The Adversary's types are characterized by their intention of committing fraud or not, by sending malicious or regular messages. The type probabilities are distributed as 1/20 for regular senders whose intention is to commit fraud, 4/20 for malicious senders whose intention is to commit fraud, and 15/20 for regular senders whose intention is not to commit fraud. Payoffs are then estimated following Table A.5 and Table A.6, assigning values to these parameters. Once the game develops, the mixed strategies are determined: if the Classifier sees a non-fraud type of message(4), it must classify C(x) = -1; if it infers that the message is from the fraud type, it decides with probability 15/22 that it is a malicious message and with probability 7/22 that it is a regular message. See Appendix A.1.4 for further details on the QRE approximation for this signaling game.

2 Further details on computing equilibria are presented in Section A.1.
3 The numerical values for the payoffs used in this example are explained in detail in Section 3.4.1.
4 This does not imply that the Classifier identifies a non-fraud message; rather, it infers from the message's features that the message type is non-fraud.


A.3 K-Means Clustering

The k-means algorithm [49] has been widely used for unsupervised learning in many data mining and Knowledge Discovery in Databases applications. The pseudo-code for its implementation is presented in Algorithm A.3.1.

Algorithm A.3.1: k-means

Data: T, k, εk
Result: Centroids {cj}, j = 1, . . . , k
1 Initialize cj randomly, j = {1, . . . , k};
2 while D > εk do
3     Assign each xi, i = {1, . . . , N}, to its nearest centroid cj;
4     Update each cj as the mean of all xi assigned to it;
5     Calculate D = Σ_{i=1}^{N} min_j d(xi, cj);
6 return {cj}, j = 1, . . . , k;

It is important to notice that one of the algorithm's main characteristics is the distance function d(·, ·), which must be chosen according to the type of segmentation problem to be solved. In text mining, the most common choice is the cosine similarity function (Equation 4.3).

In k-means, a frequently used index for determining the number of clusters is the Davies-Bouldin index [22], defined as

    Davies-Bouldin = (1/k) Σ_{i=1}^{k} max_{j≠i} [ (Sk(Qi) + Sk(Qj)) / S(Qi, Qj) ]        (A.4)

where k is the number of clusters, Sk refers to the mean distance between the objects of a cluster and their centroid, and S(Qi, Qj) is the distance between centroids. According to this index, the best clustering parameter k is the one that yields the smallest index value.
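A minimal Python sketch of Algorithm A.3.1 with the cosine distance follows, together with the Davies-Bouldin index of Equation A.4. The toy dataset is hypothetical, and the stopping rule used here (improvement of D below a small threshold) is a common variant of the D > εk test:

```python
import math
import random

def cosine_dist(a, b):
    """Cosine distance, the usual choice for text mining (Equation 4.3)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / norm

def kmeans(T, k, eps=1e-6, init=None, seed=0):
    """k-means; stops when D improves by less than eps between iterations."""
    centroids = [list(c) for c in init] if init else random.Random(seed).sample(T, k)
    prev_D = float("inf")
    while True:
        clusters = [[] for _ in range(k)]
        D = 0.0
        for x in T:                                  # assignment step
            j = min(range(k), key=lambda j: cosine_dist(x, centroids[j]))
            clusters[j].append(x)
            D += cosine_dist(x, centroids[j])
        for j, pts in enumerate(clusters):           # update step: mean of members
            if pts:
                centroids[j] = [sum(col) / len(pts) for col in zip(*pts)]
        if prev_D - D < eps:
            return centroids, clusters
        prev_D = D

def davies_bouldin(centroids, clusters):
    """Davies-Bouldin index (Equation A.4); smaller values are better."""
    k = len(centroids)
    S = [sum(cosine_dist(x, c) for x in pts) / len(pts)
         for c, pts in zip(centroids, clusters)]
    return sum(max((S[i] + S[j]) / cosine_dist(centroids[i], centroids[j])
                   for j in range(k) if j != i)
               for i in range(k)) / k

# Toy 2-D "documents": two groups of nearly collinear vectors.
docs = [[1.0, 0.05], [1.0, 0.1], [0.05, 1.0], [0.1, 1.0]]
centroids, clusters = kmeans(docs, k=2, init=[[1.0, 0.0], [0.0, 1.0]])
print(davies_bouldin(centroids, clusters))  # small value: well-separated clusters
```

In practice the index would be evaluated for a range of k values, keeping the k with the smallest index.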


A.4 Sequential Minimal Optimization

The SMO algorithm was originally proposed by Platt in [63], and since then it has been used to compute the solution of the SVM quadratic problem in its dual form. The main idea is that, given a pair of objects, the dual optimization problem is solved analytically, and this solution is then compared against previously calculated pairs to decide which pair maximizes the dual formulation. The pseudo-code is presented in Algorithm A.4.1.

Recalling the optimization problem for SVMs to determine the decision function f(x) = wT x + b, in its dual form (see Equation 5.2), the Karush-Kuhn-Tucker (KKT) conditions for convergence to an optimal point are:

    αi = 0        =>  yi(wT xi + b) ≥ 1
    αi = C        =>  yi(wT xi + b) ≤ 1
    0 < αi < C    =>  yi(wT xi + b) = 1

All of the above conditions are considered for i = {1, . . . , N}, and the optimization algorithm iterates until they are satisfied. The idea is to select two parameters (αi and αj) and optimize the objective function analytically for both values. Then, the parameter b is adjusted based on the new α values. This process is repeated until the α's converge. The first step is to find bounds L and H such that L ≤ αj ≤ H, in order to satisfy the constraint 0 ≤ αj ≤ C.
These constraints are given by the following:

    If yi ≠ yj:   L = max(0, αj − αi),       H = min(C, C + αj − αi)        (A.5)

    If yi = yj:   L = max(0, αi + αj − C),   H = min(C, αi + αj)            (A.6)

Then, the idea is to find the αj which maximizes the objective function. If this value lies outside the bounds L and H, it is clipped to satisfy the range constraint. Furthermore, it can be shown that the optimal αj is

    αj := αj − yj(Ei − Ej)/η        (A.7)

where

    Ek = f(xk) − yk                                    (A.8)
    η = 2⟨xi, xj⟩ − ⟨xi, xi⟩ − ⟨xj, xj⟩                (A.9)


After the value of αj is determined, the clipping procedure is given by the following cases:

    αj = H    if αj > H
    αj = αj   if L ≤ αj ≤ H        (A.10)
    αj = L    if αj < L

The expression for αi is given by

    αi := αi + yi yj (αj_old − αj)        (A.11)

For the threshold b, after optimizing αi and αj, it is calculated as

    b = b1             if 0 < αi < C
    b = b2             if 0 < αj < C        (A.12)
    b = (b1 + b2)/2    otherwise

where b1 and b2, which satisfy the KKT conditions, are defined by the following equations:

    b1 = b − Ei − yi(αi − αi_old)⟨xi, xi⟩ − yj(αj − αj_old)⟨xi, xj⟩        (A.13)

    b2 = b − Ej − yi(αi − αi_old)⟨xi, xj⟩ − yj(αj − αj_old)⟨xj, xj⟩        (A.14)

Given all these expressions, the SMO algorithm iteratively determines the best values for α and b, the fundamental parameters for determining the classification function. A simple version of this algorithm is presented as follows (for further details refer to [63]):

Algorithm A.4.1: Sequential Minimal Optimization

Data: T, C, tol, max
Result: α ∈ R^N, b ∈ R
1  Initialize αi = 0 for all i, passes = 0;
2  while passes < max do
3      num_changed_alphas = 0;
4      for i ← 1 to N do
5          Ei = f(xi) − yi;
6          if (yi Ei < −tol and αi < C) or (yi Ei > tol and αi > 0) then
7              select j ≠ i randomly;
8              Ej = f(xj) − yj;
9              αi_old = αi, αj_old = αj;
10             compute L and H by Equation A.5 and Equation A.6;
11             if L == H then
12                 next i;
13             compute η by Equation A.9;
14             if η ≥ 0 then
15                 next i;
16             compute and clip αj using Equation A.7 and Equation A.10;
17             if |αj − αj_old| < 10^−5 then
18                 next i;
19             compute αi using Equation A.11;
20             compute b using Equation A.12;
21             num_changed_alphas++;
22     if num_changed_alphas == 0 then
23         passes++;
24     else
25         passes = 0;
26 return α, b;
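A compact Python sketch of this simplified procedure with a linear kernel follows. The fully random choice of j and the toy data below are illustrative simplifications; Platt's original pair-selection heuristics are not reproduced here.

```python
import random

def smo_linear(X, y, C=1.0, tol=1e-3, max_passes=10, seed=0):
    """Simplified SMO with a linear kernel; X is a list of feature
    vectors and y a list of labels in {-1, +1}. Returns (alpha, b)."""
    rng = random.Random(seed)
    n = len(X)
    alpha, b = [0.0] * n, 0.0
    # Precompute the linear kernel matrix K[i][j] = <x_i, x_j>.
    K = [[sum(a * c for a, c in zip(X[i], X[j])) for j in range(n)]
         for i in range(n)]

    def f(i):  # decision value for training point i
        return sum(alpha[k] * y[k] * K[k][i] for k in range(n)) + b

    passes = 0
    while passes < max_passes:
        changed = 0
        for i in range(n):
            Ei = f(i) - y[i]
            # KKT violation check, as in lines 5-6 of Algorithm A.4.1.
            if (y[i] * Ei < -tol and alpha[i] < C) or (y[i] * Ei > tol and alpha[i] > 0):
                j = rng.choice([k for k in range(n) if k != i])
                Ej = f(j) - y[j]
                ai_old, aj_old = alpha[i], alpha[j]
                if y[i] != y[j]:                       # bounds (A.5)/(A.6)
                    L, H = max(0.0, aj_old - ai_old), min(C, C + aj_old - ai_old)
                else:
                    L, H = max(0.0, ai_old + aj_old - C), min(C, ai_old + aj_old)
                if L == H:
                    continue
                eta = 2 * K[i][j] - K[i][i] - K[j][j]  # (A.9)
                if eta >= 0:
                    continue
                aj = aj_old - y[j] * (Ei - Ej) / eta   # (A.7)
                aj = min(H, max(L, aj))                # clipping (A.10)
                if abs(aj - aj_old) < 1e-5:
                    continue
                alpha[j] = aj
                alpha[i] = ai_old + y[i] * y[j] * (aj_old - aj)  # (A.11)
                b1 = (b - Ei - y[i] * (alpha[i] - ai_old) * K[i][i]
                      - y[j] * (aj - aj_old) * K[i][j])          # (A.13)
                b2 = (b - Ej - y[i] * (alpha[i] - ai_old) * K[i][j]
                      - y[j] * (aj - aj_old) * K[j][j])          # (A.14)
                if 0 < alpha[i] < C:
                    b = b1
                elif 0 < alpha[j] < C:
                    b = b2
                else:
                    b = (b1 + b2) / 2                  # (A.12)
                changed += 1
        passes = passes + 1 if changed == 0 else 0
    return alpha, b

# Toy separable data: positives on the right, negatives on the left.
X = [[2.0, 0.0], [3.0, 1.0], [-2.0, 0.0], [-3.0, -1.0]]
y = [1, 1, -1, -1]
alpha, b = smo_linear(X, y)
print(alpha, b)
```

The learned weight vector is recovered as w = Σ αi yi xi, and new points are classified by the sign of wT x + b.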

Appendix B

Phishing Feature Extraction Results

In this appendix, the keyword features and topic modeling features extracted from the phishing corpus are presented. Keyword features are listed with their respective term frequency, and topic model features with their estimated probability of appearing in a given message.

B.1 Email Keyword Features

The following list of features was obtained by keyword extraction from the email messages, as described in Chapter 4 and originally proposed by [82, 83]. For each cluster, the top 15 words are presented together with their term frequency, calculated over the respective cluster.

Table B.1: Top 15 keywords extracted for k = 13 clusters.

Word Frequency Cluster Word Frequency Cluster


limit 3071 1 address 2868 2
credit 2158 1 follow 1919 2
dot 1685 1 bill 1197 2
password 1656 1 communiti 940 2
recent 1502 1 violat 694 2
provid 1500 1 sell 664 2
invoic 1328 1 buyer 483 2
restor 1152 1 immedi 427 2
understand 1120 1 know 357 2
inconveni 1108 1 browser 331 2
make 1108 1 advanc 318 2
depart 1033 1 forward 316 2


safeti 978 1 inappropri 298 2


apolog 931 1 gray 297 2
report 896 1 current 283 2
ebay 23801 3 chase 2263 4
secur 8040 3 payment 960 4
bank 6159 3 error 764 4
access 5018 3 ncua 740 4
user 3797 3 label 712 4
mail 3434 3 note 710 4
thank 3147 3 parti 698 4
custom 3116 3 info 684 4
ebayisapi 2697 3 collaps 639 4
time 2449 3 sensit 628 4
confirm 2369 3 hour 614 4
master 2327 3 internet 602 4
postdirect 2322 3 featur 582 4
verifi 2255 3 begin 568 4
file 2065 3 caus 568 4
vector 425 5 paypal 21832 6
imagen 146 5 account 21630 6
linkcircl 112 5 ding 19859 6
invis 100 5 messag 10432 6
fromfeatur 85 5 target 7685 6
primapaginafcu 85 5 inform 7448 6
doubl 55 5 updat 5563 6
ibuynew 44 5 protect 4262 6
modem 44 5 onlin 4034 6
mrkt 44 5 click 3903 6
ssiiggnniinn 44 5 servic 3809 6
subform 44 5 logo 3734 6
veri 44 5 member 3148 6
instantcashtransf 43 5 dear 2540 6
telecomitalia 43 5 notif 2043 6
signin 2217 7 amazon 1759 8
list 1334 7 never 962 8
partnerid 1193 7 maintain 456 8
siteid 1119 7 ommand 436 8
offer 1039 7 mfcisa 428 8
puserid 975 7 world 405 8
home 943 7 tip 365 8
marketplac 854 7 runam 311 8
direct 825 7 keep 300 8
bshowgif 823 7 ruproduct 298 8
errmsg 768 7 ruparam 287 8


icon 747 7 cash 258 8


trade 658 7 subject 258 8
state 611 7 legal 244 8
behalf 568 7 product 241 8
ebaystat 11365 9 login 3563 10
email 10946 9 line 1514 10
page 6646 9 respons 1311 10
polici 5461 9 window 1246 10
help 5023 9 yahoo 1127 10
link 4073 9 ident 974 10
item 3415 9 region 959 10
privaci 3090 9 status 863 10
name 2859 9 blue 849 10
sent 2733 9 irow 797 10
agreement 2720 9 attempt 792 10
receiv 2251 9 unusu 775 10
question 2100 9 white 765 10
respond 1904 9 element 659 10
chang 1761 9 addit 625 10
area 400 11 use 2313 12
demo 246 11 sidebar 1914 12
hidden 193 11 card 1765 12
expens 138 11 repli 1629 12
theimag 93 11 review 1622 12
mobil 81 11 verif 1255 12
move 79 11 assist 1177 12
solut 74 11 decor 975 12
arrow 63 11 third 884 12
pager 56 11 bankofamerica 810 12
south 52 11 sign 790 12
cccc 50 11 enter 663 12
navcontain 48 11 america 592 12
thin 42 11 alert 550 12
iraq 40 11 select 549 12
union 1015 13 spcell 97 13
nation 986 13 deutsch 96 13
answer 785 13 signon 78 13
googl 403 13 forum 63 13
barclay 342 13 postbank 53 13
tander 194 13 australia 49 13
situat 100 13 keybank 49 13
sprow 98 13


B.2 Latent Dirichlet Analysis Features

The following are the results obtained with the LDA algorithm over the phishing corpus, listing the 30 most probable words for each topic. These features were obtained using the GibbsLDA++ implementation of LDA, developed by [62].
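The probability reported for each word in Table B.2 is the smoothed topic-word distribution estimated from the Gibbs sampling counts. A small sketch of how P(w | z) is obtained and ranked, using a hypothetical vocabulary and a made-up count matrix rather than the thesis corpus:

```python
def top_words(counts, vocab, beta=0.1, n=5):
    """counts: per-topic word-count rows n_{z,w} from a Gibbs sampler.

    Returns, for each topic z, the n words with the highest smoothed
    probability P(w | z) = (n_{z,w} + beta) / (n_z + V * beta).
    """
    V = len(vocab)
    ranked_topics = []
    for row in counts:
        nz = sum(row)
        probs = [(c + beta) / (nz + V * beta) for c in row]
        ranked = sorted(zip(vocab, probs), key=lambda t: -t[1])[:n]
        ranked_topics.append(ranked)
    return ranked_topics

vocab = ["paypal", "account", "secur", "invoic", "field"]
counts = [[50, 30, 15, 3, 2],   # hypothetical counts for topic 1
          [2, 10, 3, 60, 25]]   # hypothetical counts for topic 2
for z, words in enumerate(top_words(counts, vocab, n=3), start=1):
    print(z, [(w, round(p, 3)) for w, p in words])
```

Table B.2 is simply this ranking with n = 30, computed over the 25 estimated topics.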

Table B.2: 30 words with the highest probability of appearing in a document, given a topic z and the α, β parameters (25 topics).

Word (w) Probability1 Topic (z) Word (w) Probability2 Topic (z)
paypal 0.192848 1 account 0.032193 2
ding 0.080856 1 messag 0.032039 2
account 0.054404 1 microsoft 0.027419 2
email 0.028494 1 suspend 0.026032 2
secur 0.022040 1 user 0.024903 2
password 0.019156 1 schema 0.024030 2
sidebar 0.018948 1 inform 0.022080 2
protect 0.016765 1 provid 0.019153 2
inform 0.016032 1 action 0.016227 2
verifi 0.013451 1 servic 0.012582 2
never 0.012845 1 caus 0.012274 2
logo 0.012463 1 termin 0.011094 2
mail 0.011650 1 section 0.011042 2
click 0.010917 1 warn 0.010170 2
login 0.010726 1 legal 0.009964 2
dot 0.010710 1 agreement 0.009708 2
link 0.010583 1 financi 0.009605 2
target 0.010535 1 believ 0.009143 2
access 0.010216 1 period 0.009143 2
address 0.009642 1 liabil 0.009092 2
page 0.009403 1 normal 0.008938 2
choos 0.008782 1 take 0.008886 2
repli 0.008511 1 issu 0.008835 2
receiv 0.008160 1 verifi 0.008681 2
websit 0.008065 1 temporarili 0.008630 2
help 0.007905 1 authent 0.008322 2

1 P(w|z, α, β), with α = 0.05, β = 0.1.
2 Idem.


use 0.007905 1 immedi 0.008219 2


assist 0.007523 1 loss 0.008065 2
updat 0.007125 1 resolv 0.007911 2
sure 0.006360 1 problem 0.007911 2
amazon 0.068392 3 invoic 0.080197 4
inform 0.041702 3 field 0.030923 4
updat 0.021543 3 error 0.027602 4
account 0.019433 3 currenc 0.020054 4
sign 0.019393 3 break 0.019631 4
file 0.017405 3 label 0.019510 4
card 0.014525 3 emphasi 0.018484 4
area 0.013430 3 input 0.015585 4
coord 0.013146 3 readon 0.014015 4
rect 0.013106 3 line 0.012566 4
exec 0.013065 3 collaps 0.011600 4
shape 0.013025 3 note 0.011177 4
obido 0.012092 3 control 0.010392 4
ding 0.011078 3 float 0.009849 4
person 0.010875 3 word 0.009668 4
error 0.010591 3 longsidebar 0.008158 4
credit 0.010429 3 dot 0.008037 4
target 0.010023 3 long 0.007313 4
secur 0.009252 3 evalu 0.006286 4
mail 0.008603 3 deutsch 0.005803 4
rec 0.008441 3 inlineblu 0.005561 4
yahoo 0.008279 3 larg 0.005441 4
follow 0.008238 3 gray 0.005380 4
verifi 0.007711 3 ein 0.005320 4
login 0.007427 3 green 0.005320 4
bill 0.007224 3 grey 0.005320 4
home 0.006981 3 ccddee 0.005199 4
order 0.006940 3 highlight 0.005139 4
logo 0.006819 3 blue 0.005018 4
return 0.006453 3 uuml 0.004958 4
account 0.057253 5 messag 0.216498 6
confirm 0.020133 5 ding 0.045904 6
fraudul 0.019903 5 account 0.036889 6
bank 0.019444 5 paypal 0.036328 6
wamu 0.018717 5 dot 0.025067 6
thank 0.018526 5 name 0.016052 6
verif 0.018258 5 sidebar 0.015458 6
use 0.016383 5 target 0.013972 6
suspend 0.016306 5 aaaaaa 0.012915 6
blockquot 0.015426 5 ppem 0.012882 6


fraud 0.013359 5 outset 0.012585 6


login 0.012862 5 line 0.011825 6
matter 0.012135 5 frontpag 0.011693 6
purpos 0.011943 5 updat 0.011363 6
verifi 0.011829 5 generat 0.011330 6
mail 0.011637 5 notif 0.010537 6
secur 0.011446 5 microsoft 0.009481 6
custom 0.010757 5 logo 0.009217 6
collaps 0.010681 5 limit 0.008655 6
requir 0.009877 5 button 0.008424 6
complet 0.009839 5 secur 0.008424 6
cite 0.009533 5 narrow 0.008292 6
depart 0.009265 5 equiv 0.008127 6
sent 0.008997 5 activ 0.007698 6
onlin 0.008614 5 mail 0.007631 6
notif 0.008538 5 label 0.007103 6
regard 0.008423 5 ersidebar 0.007103 6
answer 0.008308 5 addit 0.007070 6
mutual 0.008193 5 dummi 0.007004 6
washington 0.008040 5 ingeoa 0.006971 6
ebay 0.177700 7 chase 0.087492 8
target 0.032008 7 onlin 0.024050 8
ebaystat 0.027677 7 imag 0.021499 8
ding 0.026458 7 vector 0.016434 8
page 0.025658 7 decor 0.015043 8
help 0.021109 7 link 0.014076 8
polici 0.020690 7 return 0.013690 8
account 0.019598 7 servic 0.013380 8
user 0.018379 7 bank 0.013032 8
inform 0.014340 7 custom 0.012800 8
return 0.013703 7 survey 0.012182 8
ebayisapi 0.012739 7 true 0.010751 8
secur 0.011884 7 chaseonlin 0.010713 8
agreement 0.011410 7 part 0.010326 8
trademark 0.010974 7 ding 0.009939 8
privaci 0.010956 7 jpmorgan 0.009901 8
signin 0.010846 7 status 0.009669 8
click 0.009081 7 repeat 0.009514 8
design 0.007808 7 inform 0.009089 8
chang 0.007553 7 underlin 0.008780 8
owner 0.007498 7 ccpmweb 0.008509 8
respect 0.007244 7 time 0.008006 8
brand 0.006898 7 java 0.007427 8
properti 0.006898 7 window 0.007349 8


onclick 0.006825 7 take 0.006808 8


number 0.006425 7 common 0.006769 8
copi 0.006279 7 echaseweb 0.006615 8
reserv 0.006097 7 share 0.006421 8
site 0.005988 7 secur 0.006344 8
regist 0.005879 7 hspace 0.006305 8
email 0.084617 9 link 0.017638 10
paypal 0.078259 9 detail 0.017577 10
master 0.064000 9 main 0.015049 10
postdirect 0.063863 9 payment 0.014371 10
ding 0.027831 9 sheet 0.013323 10
account 0.025850 9 name 0.011905 10
jjjcue 0.016986 9 icon 0.011535 10
inform 0.014344 9 card 0.011042 10
begin 0.011233 9 money 0.011042 10
secur 0.009444 9 equiv 0.010363 10
free 0.009031 9 emphasi 0.009562 10
link 0.007930 9 input 0.009315 10
blue 0.007572 9 send 0.008822 10
click 0.007187 9 form 0.008144 10
profil 0.006912 9 account 0.006541 10
triangletran 0.006774 9 transact 0.006541 10
protect 0.005618 9 java 0.006418 10
dotlinevert 0.005618 9 credit 0.006233 10
purchas 0.005535 9 window 0.006171 10
notif 0.005480 9 repeat 0.005986 10
privaci 0.005453 9 display 0.005863 10
verifi 0.005425 9 valu 0.005431 10
offer 0.005370 9 paypal 0.005308 10
logo 0.005260 9 block 0.005123 10
learn 0.005123 9 dot 0.004938 10
ident 0.005095 9 check 0.004692 10
xpterrorbox 0.005012 9 good 0.004507 10
help 0.004985 9 bill 0.004445 10
copi 0.004875 9 auto 0.004383 10
money 0.004847 9 trebuchet 0.004322 10
updat 0.074128 11 account 0.053933 12
paypal 0.063439 11 justifi 0.026968 12
account 0.045897 11 inform 0.018882 12
inform 0.025584 11 attempt 0.015067 12
servic 0.023696 11 bank 0.014089 12
onlin 0.018915 11 notic 0.013730 12
continu 0.018093 11 commit 0.013078 12
bill 0.016692 11 request 0.011676 12


login 0.016235 11 custom 0.011089 12


logo 0.015017 11 receiv 0.010665 12
result 0.012763 11 person 0.010600 12
target 0.012185 11 violat 0.010404 12
failur 0.012185 11 recent 0.010209 12
privaci 0.011941 11 servic 0.010144 12
normal 0.011301 11 login 0.009850 12
dear 0.011027 11 secur 0.009687 12
member 0.010845 11 enforc 0.009068 12
onc 0.010723 11 protect 0.008742 12
problem 0.010449 11 holder 0.008448 12
futur 0.010449 11 theft 0.008057 12
attent 0.010296 11 author 0.008024 12
interrupt 0.010205 11 confirm 0.007861 12
follow 0.009901 11 fraud 0.007698 12
user 0.009840 11 click 0.007666 12
come 0.009474 11 unauthor 0.007633 12
link 0.009109 11 capit 0.007568 12
agreement 0.008987 11 name 0.007470 12
click 0.008530 11 relat 0.007437 12
suspens 0.008500 11 link 0.007405 12
minut 0.008378 11 ident 0.007144 12
gold 0.009397 13 time 0.040970 14
adob 0.004121 13 roman 0.035161 14
unionplant 0.003500 13 irow 0.030068 14
third 0.003345 13 endif 0.021732 14
fifth 0.003267 13 bidi 0.019242 14
long 0.002103 13 name 0.018375 14
catref 0.002025 13 line 0.015470 14
fsearch 0.001870 13 lastrow 0.013961 14
febay 0.001560 13 firstrow 0.013471 14
fclk 0.001560 13 tahoma 0.013169 14
live 0.001482 13 microsoft 0.012792 14
insid 0.001327 13 languag 0.011999 14
fell 0.001327 13 fareast 0.011962 14
gwgw 0.001249 13 list 0.011962 14
import 0.001249 13 auto 0.011811 14
port 0.001249 13 word 0.010981 14
ygcmail 0.001249 13 section 0.010377 14
enum 0.001249 13 offic 0.010340 14
photoshop 0.001249 13 schema 0.009585 14
jfif 0.001172 13 page 0.009359 14
kijmxvjkm 0.001094 13 pagin 0.009019 14
give 0.001094 13 widow 0.008755 14
bank 0.001094 13 orphan 0.008718 14
price 0.001094 13 lang 0.008567 14
flag 0.001094 13 charact 0.008227 14
ball 0.001017 13 link 0.008227 14
burn 0.001017 13 behavior 0.008227 14
fsofocus 0.001017 13 default 0.008152 14
sbrftog 0.001017 13 level 0.008114 14
butt 0.001017 13 ansi 0.007888 14
tander 0.015254 15 courier 0.055597 16
grupo 0.014075 15 address 0.047271 16
imagen 0.011403 15 email 0.040807 16
cuenta 0.010146 15 paypal 0.036877 16
pagead 0.009596 15 element 0.034032 16
para 0.009517 15 account 0.029171 16
adurl 0.009203 15 normal 0.025241 16
iclk 0.009124 15 ad 0.016967 16
nlqh 0.008888 15 assist 0.016036 16
aedkeuifovd 0.008888 15 click 0.015002 16
ycnqz 0.008888 15 anchor 0.014640 16
boagyqor 0.008888 15 link 0.014433 16
fxbjgsiqlu 0.008888 15 frame 0.014071 16
edsauehkarnhtwzau 0.008810 15 yahoo 0.013140 16
gbazuccapccqkcxu 0.008810 15
fmqwgjlkqaxgfkag 0.008731 15
client 0.007788 15 login 0.011434 16
googl 0.006766 15 page 0.010710 16
ust 0.005116 15 repli 0.009262 16
bancaria 0.004880 15 use 0.008745 16
banamex 0.004880 15 thank 0.008590 16
fals 0.004330 15 paragraph 0.008538 16
wmmessag 0.004252 15 wrap 0.008176 16
ding 0.004173 15 respons 0.008072 16
nuestro 0.004016 15 rule 0.007917 16
common 0.003937 15 vertic 0.007917 16
support 0.003937 15 receiv 0.007762 16
dato 0.003937 15 column 0.007710 16
oacut 0.003780 15 help 0.007659 16
qwgs 0.003466 15 horizont 0.007607 16
account 0.064524 17 file 0.028748 18
access 0.046712 17 function 0.018901 18
credit 0.039600 17 document 0.016419 18
limit 0.038479 17 status 0.014498 18
union 0.030085 17 imag 0.013377 18
ncua 0.023710 17 java 0.011616 18
secur 0.019994 17 outimag 0.010655 18
inform 0.019545 17 overimag 0.010095 18
restor 0.015989 17 name 0.009534 18
ding 0.013811 17 theimag 0.007293 18
protect 0.013747 17 navig 0.006733 18
provid 0.012465 17 load 0.006733 18
servic 0.012433 17 layer 0.006252 18
complet 0.012113 17 globpop 0.006172 18
inconveni 0.011248 17 form 0.005932 18
feder 0.011056 17 locat 0.005852 18
reason 0.011024 17 open 0.005772 18
understand 0.010928 17 return 0.005692 18
sensit 0.010479 17 window 0.005612 18
featur 0.009614 17 imagenum 0.005612 18
like 0.008813 17 true 0.005372 18
remov 0.007083 17 resiz 0.004491 18
recent 0.006699 17 substr 0.004411 18
apolog 0.006667 17 scrollbar 0.004331 18
review 0.006571 17 length 0.004171 18
help 0.006571 17 list 0.004091 18
file 0.006539 17 inam 0.003931 18
primari 0.006539 17 browser 0.003771 18
possibl 0.006282 17 hide 0.003530 18
concern 0.006122 17 strform 0.003370 18
ebay 0.055697 19 bank 0.080334 20
partnerid 0.043715 19 onlin 0.048987 20
siteid 0.040638 19 account 0.026347 20
signin 0.037743 19 secur 0.023379 20
puserid 0.035691 19 bankofamerica 0.019596 20
bshowgif 0.030158 19 updat 0.018913 20
item 0.028436 19 america 0.016425 20
errmsg 0.028143 19 custom 0.013401 20
ding 0.027301 19 wellsfargo 0.013118 20
page 0.022317 19 well 0.013005 20
target 0.021621 19 logo 0.011253 20
ebaystat 0.015466 19 fargo 0.011055 20
ebayisapi 0.014147 19 link 0.009698 20
messag 0.013194 19 inform 0.009670 20
runam 0.011399 19 barclay 0.008313 20
nofollow 0.011289 19 confirm 0.008030 20
ruproduct 0.010922 19 alert 0.007917 20
member 0.010812 19 login 0.007465 20
ruparam 0.010483 19 document 0.007408 20
favoritenav 0.010373 19 huntington 0.007041 20
view 0.008431 19 demo 0.006674 20
unpaid 0.008064 19 client 0.006645 20
receiv 0.007002 19 dear 0.006391 20
respond 0.006892 19 reserv 0.005967 20
seller 0.006672 19 servic 0.005769 20
login 0.006452 19 hous 0.005656 20
ommand 0.006196 19 profil 0.005515 20
eeeef 0.006159 19 set 0.005402 20
ebayisapidllsignifavoritenav 0.006086 19
iteid 0.006049 19 desktop 0.005091 20
account 0.108365 21 bank 0.044692 22
paypal 0.057606 21 servic 0.034466 22
access 0.041338 21 custom 0.034466 22
secur 0.033969 21 busi 0.033997 22
limit 0.031532 21 client 0.033997 22
protect 0.020217 21 region 0.027398 22
review 0.016593 21 form 0.026054 22
activ 0.015767 21 corpor 0.024146 22
system 0.015264 21 citi 0.020518 22
measur 0.012968 21 nation 0.019861 22
restor 0.012646 21 fffff 0.019861 22
team 0.011841 21 mail 0.017266 22
inform 0.011418 21 confirm 0.013795 22
inconveni 0.010311 21 onlin 0.013138 22
possibl 0.010069 21 procedur 0.013044 22
remain 0.009989 21 dear 0.012262 22
help 0.009546 21 start 0.011949 22
understand 0.009546 21 equiv 0.011793 22
unusu 0.009022 21 complet 0.010949 22
ensur 0.009002 21 request 0.010636 22
apolog 0.008881 21 support 0.010417 22
compromis 0.008640 21 access 0.010104 22
login 0.007452 21 obligatori 0.010073 22
result 0.007230 21 need 0.009885 22
fraud 0.007210 21 thank 0.009479 22
issu 0.007150 21 generat 0.009479 22
user 0.007130 21 hyperlink 0.009416 22
screen 0.006807 21 respond 0.009197 22
time 0.006807 21 updat 0.009197 22
thank 0.006767 21 choos 0.009135 22
ebay 0.106097 23 bank 0.013773 24
ebaystat 0.080799 23 secur 0.010288 24
ding 0.074090 23 money 0.008222 24
polici 0.030848 23 fund 0.006673 24
email 0.030032 23 deposit 0.006501 24
page 0.024783 23 contact 0.006372 24
messag 0.021820 23 inform 0.006243 24
target 0.020523 23 hsbc 0.006071 24
item 0.020454 23 email 0.006071 24
help 0.016494 23 peopl 0.006028 24
question 0.013814 23 privat 0.005856 24
ebayisapi 0.013745 23 compani 0.005813 24
sent 0.013693 23 line 0.005641 24
privaci 0.012628 23 custom 0.005555 24
member 0.012534 23 know 0.005167 24
respond 0.011434 23 made 0.004823 24
user 0.010532 23 aspx 0.004565 24
agreement 0.010257 23 send 0.004264 24
secur 0.008763 23 number 0.004221 24
trademark 0.008230 23 come 0.004178 24
regist 0.008058 23 corpor 0.004178 24
learn 0.007663 23 busi 0.004178 24
marketplac 0.007302 23 receiv 0.004135 24
section 0.006942 23 market 0.004006 24
includ 0.006813 23 mail 0.003963 24
safeti 0.006048 23 instruct 0.003920 24
protect 0.005945 23 make 0.003920 24
respons 0.005610 23 investig 0.003791 24
white 0.005378 23 direct 0.003748 24
feedback 0.005069 23 loyalti 0.003705 24
click 0.025023 25 credit 0.007086 25
email 0.023974 25 domain 0.006928 25
powersel 0.022715 25 ebay 0.006561 25
visa 0.021981 25 usernam 0.006509 25
card 0.017103 25 unsubscrib 0.006404 25
mail 0.015425 25 yahoo 0.006299 25
receiv 0.011544 25 join 0.005775 25
checkoutv 0.011229 25 list 0.005722 25
ebayv 0.011177 25 success 0.005355 25
microsoft 0.009551 25 messag 0.005355 25
schema 0.009498 25 star 0.005040 25
program 0.008554 25 icon 0.004935 25
powersellernew 0.008082 25 want 0.004778 25
offer 0.007715 25 wish 0.004673 25
free 0.007663 25 member 0.004568 25