Project Report Final
A PROJECT REPORT
Submitted by
Gunn Soni(20BCS3148)
Manas Singh(21BCS8192)
Jyotirnob Sharma(21BCS8061)
Mrinank Chandna(20BCS3146)
Prince Kumar Singh(21BCS11257)
BACHELOR OF ENGINEERING
IN
Chandigarh University
May 2024
BONAFIDE CERTIFICATE
Certified that this project report “AI-driven Cybersecurity Threat” is the bonafide
work of “Manas Singh(21BCS8192), Jyotirnob Sharma(21BCS8061), Prince
Kumar Singh(21BCS11257), Gunn Soni(20BCS3148), Mrinank
Chandna(20BCS3146)” who carried out the project work under my/our
supervision.
SIGNATURE SIGNATURE
We would like to express our gratitude and appreciation to all those who made it possible for us to complete this report. Special thanks are due to our supervisor, Er. Jyoti Ma'am, whose help, stimulating suggestions, and encouragement supported us throughout the development process and in writing this report. We also sincerely thank her for the time spent proofreading and correcting our many mistakes. Many thanks go to all the lecturers and supervisors who gave their full effort in guiding the team towards its goal, as well as their encouragement to keep our progress on track. Our profound thanks go to all our classmates, especially our friends, for spending their time helping and supporting us whenever we needed it.
ABSTRACT
ABBREVIATION
5.1. Conclusion
REFERENCES
APPENDIX
List of Figures
List of Tables
ABSTRACT
Looking to the future, the report explores emerging trends and technologies in AI-
driven cybersecurity and discusses the evolving nature of cyber threats that
organizations are likely to face. By embracing AI as a powerful tool in their
cybersecurity arsenal, organizations can better defend against the ever-changing
landscape of cyber threats.
सारांश (Abstract in Hindi)
GRAPHICAL ABSTRACT
ABBREVIATION
Sr.no. Abbreviation Full Form
1. AI Artificial Intelligence
2. SIEM Security Information and Event Management
3. IDS Intrusion Detection System
4. SVM Support Vector Machine
5. DNN Deep Neural Network
Chapter -1. INTRODUCTION
1.4. Integration and Analysis:
In today's dynamic cybersecurity landscape, organizations must strategically navigate a
myriad of challenges to safeguard their digital assets effectively. One crucial aspect is the
integration of AI-driven cybersecurity solutions into existing infrastructure to fortify threat
detection capabilities. By seamlessly incorporating AI algorithms into security frameworks,
organizations can enhance their ability to identify and mitigate evolving threats in real-time.
o Detection Across Various Data Sources: Anomalies can occur at different levels within a
network, including at the network layer (e.g., unusual traffic volume or unexpected network
protocols), the system layer (e.g., abnormal system log entries or errors), and the user layer (e.g.,
suspicious user login attempts or unauthorized access to resources). An effective anomaly
detection system should be capable of monitoring and analyzing data from multiple sources to
detect anomalies comprehensively.
o Early Warning System: Anomaly detection serves as an early warning system for cybersecurity
incidents, allowing organizations to proactively identify and respond to potential threats before
they escalate into security breaches or significant disruptions. Timely detection of anomalies
enables security teams to investigate and mitigate security incidents promptly, minimizing the
impact on business operations and data security.
o Continuous Monitoring and Analysis: Anomaly detection is not a one-time process but rather
a continuous monitoring and analysis of network data in real-time or near real-time. By
continuously monitoring network traffic, system logs, and user activities, organizations can detect
anomalies as they occur and take immediate action to address security threats and vulnerabilities.
o Adaptive and Context-Aware Detection: Effective anomaly detection systems should be
adaptive and context-aware, meaning they can adapt to evolving threats and changing network
conditions. By incorporating contextual information and historical data, anomaly detection
systems can better differentiate between benign anomalies and malicious activities, reducing false
positives and false negatives.
These vulnerabilities leave countless networks exposed to several dangers, including:
o Data Breaches: sensitive data such as financial records, personal information, and intellectual
property can be stolen and misused, causing financial losses, reputational damage, and legal
repercussions.
o Ransomware Attacks: systems can be encrypted and held hostage, disrupting operations and
halting business processes until a ransom is paid.
o Business Disruption: attacks can cripple critical infrastructure, causing outages, service
disruptions, and productivity losses.
(Table of team roles, partially recovered: 5. Jyotirnob Sharma — Research paper, Documentation)
1.7. Timeline:
The development and deployment of AI-driven cybersecurity threat detection involve various
stages, and timelines can vary based on the complexity of the project, the size of the organization,
and the specific requirements. Below is a generalized timeline for implementing AI-driven
cybersecurity threat detection:
Week 1: Research and Data Collection
During this phase, the research team will identify relevant literature and gather data from
various sources, including network traffic logs, system logs, and user activities. This data will serve
as the foundation for developing the anomaly detection system using Random Forests.
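As a hedged sketch of the plan above, a Random Forest classifier can be trained on labeled traffic features. The feature set, synthetic data, and hyperparameters below are illustrative assumptions, not the project's actual dataset or configuration:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for extracted traffic features:
# e.g. packet rate, byte rate, flow duration, failed-login count.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + X[:, 1] > 1.5).astype(int)  # illustrative "attack" label

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
accuracy = model.score(X_test, y_test)
print(f"held-out accuracy: {accuracy:.2f}")
```

In practice the features would come from the collected network, system, and user-activity logs rather than random numbers.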
Table 1.2 Identification of Tasks
CHAPTER 1 – Introduction
Background: Briefly introduce the importance of cybersecurity threat detection.
Highlight the evolving nature of cyber threats and the need for advanced detection mechanisms.
Objectives: Clearly state the objectives of implementing AI-driven cybersecurity threat detection.
Scope: Define the scope of the report, including the systems, data, and threats covered.
Current State of Cyber Threats: Provide an overview of the current cybersecurity threat
landscape. Discuss prevalent attack vectors and types of cyber threats.
AI Model Selection and Training: Explain the criteria for selecting AI models and algorithms.
Detail the training process, parameters, and techniques used.
Integration and Implementation: Outline the strategy for integrating AI-driven threat detection
into existing cybersecurity systems.
Provide insights into the implementation process.
Results: Present the results of the AI model evaluation, highlighting strengths and areas for
improvement.
Chapter -2 LITERATURE REVIEW/BACKGROUND STUDY
2019: Machine learning algorithms began gaining traction in network security, showing promising results in anomaly detection. Researchers investigated the application of supervised, unsupervised, and semi-supervised learning approaches to identifying atypical behavior within network traffic. Early efforts were made to address the challenges of data scarcity and class imbalance through techniques such as data augmentation and ensemble learning.
detection. Attention shifted towards the development of anomaly detection models capable of identifying subtle and evolving threats in real time.
2024 (Present): Ongoing research emphasizes the need for robust anomaly detection solutions able to address the evolving threat landscape. Integration of anomaly detection with artificial intelligence (AI) and machine learning (ML) continues to be a focal point, with efforts aimed at improving detection accuracy and reducing false positives.
o Rule-based Systems:
Rule-based systems operated on predefined rules or heuristics to identify atypical behavior
within network traffic. These rules were typically derived from expert knowledge or historical data.
While rule-based systems provided a degree of customization and flexibility, they often lacked
scalability and struggled to adapt to dynamic and complex network environments.
o Statistical Methods:
Statistical anomaly detection methods analyzed network traffic data to identify deviations from
normal behavior based on statistical measures such as mean, standard deviation, or frequency
distributions.
While relatively simple and computationally efficient, statistical methods were prone to high
false positive rates and struggled to distinguish between genuine anomalies and benign
fluctuations in network traffic.
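A minimal sketch of such a statistical check, flagging readings more than three standard deviations from a baseline mean (the metric, window, and threshold are illustrative assumptions):

```python
from statistics import mean, stdev

# Baseline window of a per-minute traffic metric (illustrative values).
baseline = [100, 102, 98, 101, 99, 103, 97, 100]
mu, sigma = mean(baseline), stdev(baseline)

def is_anomalous(value, k=3.0):
    """Flag a reading more than k standard deviations from the baseline mean."""
    return abs(value - mu) > k * sigma

print(is_anomalous(101))  # False: within normal variation
print(is_anomalous(250))  # True: sudden spike
```

Fixed thresholds like this are simple, but as noted above they are prone to false positives when benign traffic fluctuates.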
o Deep Learning Methods:
Deep learning, especially convolutional neural networks (CNNs) and recurrent neural
networks (RNNs), offered breakthroughs in anomaly detection by automatically learning
hierarchical representations of network traffic data.
CNNs were adept at capturing spatial dependencies in network traffic data, whereas
RNNs excelled at modeling temporal dependencies over time.
These deep learning models enabled the development of highly accurate and scalable
anomaly detection systems capable of handling large volumes of network traffic data.
Key Highlights:
Frequently Examined Features: Through bibliometric analysis, it becomes evident that certain
features are consistently explored in the context of anomaly detection in network security
research. These include packet headers, payload content, flow metrics, network protocols, and
communication patterns. Understanding these features is fundamental to devising effective
anomaly detection strategies.
Advanced Techniques: In addition to traditional features, the analysis reveals the prevalence of
advanced techniques such as data dimensionality reduction methods, feature selection
algorithms, and ensemble learning approaches. These advanced techniques play a significant
role in enhancing the accuracy and efficiency of anomaly detection systems.
Effectiveness:
Evaluation Metrics: The effectiveness of anomaly detection methods is rigorously evaluated
using a variety of performance metrics. These metrics include detection accuracy, false positive
rate, true positive rate, precision, recall, and F1-score. Each metric provides valuable insights into
the performance of anomaly detection algorithms across different scenarios.
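The listed metrics all derive directly from confusion-matrix counts; the counts below are made up for illustration, not results from this project:

```python
# Illustrative confusion-matrix counts (not results from this project).
tp, fp, fn, tn = 90, 10, 5, 895

accuracy  = (tp + tn) / (tp + fp + fn + tn)
precision = tp / (tp + fp)
recall    = tp / (tp + fn)        # a.k.a. true positive rate / detection rate
fpr       = fp / (fp + tn)        # false positive rate
f1        = 2 * precision * recall / (precision + recall)

print(f"precision={precision:.3f} recall={recall:.3f} f1={f1:.3f} fpr={fpr:.3f}")
```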
Insights from Bibliometric Analysis: By leveraging bibliometric analysis, researchers gain insights
into the relative effectiveness of various anomaly detection algorithms and approaches across
diverse network environments. Understanding the effectiveness of these methods is crucial for
selecting the most appropriate techniques for specific deployment scenarios.
Downsides:
High False Positive Rates: Despite advancements, many anomaly detection techniques still
suffer from high false positive rates. These false alarms can lead to alert fatigue among security
personnel and undermine the effectiveness of anomaly detection systems.
Lack of Interpretability: Advanced machine learning algorithms, particularly deep learning models,
often lack interpretability, making it challenging to understand the rationale behind anomaly
detection decisions. Interpretable models are essential for building trust and facilitating decision-
making in security operations.
Scalability Issues: As network traffic data volumes continue to grow exponentially, scalability
becomes a critical concern for anomaly detection systems. Scalability issues can hamper real-time
deployment and limit the effectiveness of anomaly detection solutions in dynamic network
environments.
Data Scarcity and Class Imbalance: Limited availability of labeled training data and imbalanced
class distributions pose significant challenges for training accurate anomaly detection models.
Addressing data scarcity and class imbalance is essential for building robust and reliable anomaly
detection systems capable of detecting emerging threats.
2.4. Review Summary
o Advancements in Machine Learning:
The literature review delves into the multitude of advancements in machine learning techniques
applied to anomaly detection in network security. Traditional methods, such as rule-based and
statistical approaches, have paved the way for more sophisticated algorithms like Support Vector
Machines (SVM), k-Nearest Neighbors (k-NN), and Random Forests. These algorithms have shown
promise in effectively identifying anomalies in network traffic by learning patterns and deviations
from normal behavior. Moreover, the advent of deep learning has revolutionized anomaly detection,
with techniques like Convolutional Neural Networks (CNNs) and Recurrent Neural Networks
(RNNs) demonstrating remarkable performance in capturing intricate patterns and dependencies in
network data.
o Application-specific Considerations:
The literature emphasizes the importance of tailoring anomaly detection methods to the specific
characteristics and requirements of different network environments. For instance, in Wireless
Sensor Networks (WSNs), where resource constraints and dynamic topology are prevalent,
lightweight anomaly detection algorithms that minimize computational overhead are favored.
Conversely, in Industrial Control Systems (ICS), where real-time response and critical
infrastructure protection are paramount, anomaly detection techniques must be robust and resilient
to cyber threats. Similarly, in Software-Defined Networks (SDNs), where network control is
centralized and programmable, anomaly detection mechanisms must adapt to the dynamic nature
of network configurations and policies.
may become overwhelmed. Furthermore, the scarcity of labeled data for training and evaluating
anomaly detection models remains a significant obstacle, particularly in detecting novel and
previously unseen threats.
To achieve these objectives, the following approaches will be employed:
o Utilization of Machine Learning Algorithms: Leveraging supervised, unsupervised, and semi-
supervised learning techniques to train anomaly detection models on labeled and unlabeled
network traffic data, enabling the system to detect both known and novel threats.
o Integration with Advanced Analytics: Incorporating advanced analytics methods, including
deep learning models such as convolutional neural networks (CNNs) and recurrent neural
networks (RNNs), to extract meaningful patterns and features from network traffic data for
more accurate anomaly detection.
o Collaborative Research and Experimentation: Engaging in collaborative efforts with industry
experts and academic researchers to explore novel techniques and evaluate the effectiveness of
proposed anomaly detection methods through rigorous experimentation and validation.
While addressing the issue at hand, it is essential to be mindful of the following considerations:
o Avoiding Overfitting: Ensuring that anomaly detection models generalize well to unseen data
and do not overfit to specific training datasets, which may lead to reduced detection
performance in real-world scenarios.
o Upholding Privacy and Security: Maintaining the privacy and security of sensitive network
data throughout the anomaly detection process, adhering to established data protection
protocols and regulatory requirements.
o Transparent and Interpretable Solutions: Striving to develop anomaly detection systems that
are transparent and interpretable, enabling stakeholders to understand the rationale behind
anomaly findings and facilitating informed decision-making.
2.6 Goals/Objectives:
Create a Baseline Anomaly Detection Model:
Build a baseline anomaly detection model using supervised learning techniques,
achieving a minimum detection accuracy of 85% on a standard benchmark dataset.
Investigate Unsupervised Learning Approaches:
Explore the effectiveness of unsupervised learning algorithms, such as k-means clustering and
isolation forests, in identifying anomalies within network traffic data,
achieving a false positive rate below 5%.
Validate Performance in Real-world Scenarios:
Validate the performance of the anomaly detection system in real-world network environments,
collaborating with industry partners to deploy and evaluate the system's effectiveness in detecting
real cyber threats.
DESIGN FLOW/PROCESS
o Packet Headers: Features extracted from packet headers, such as source and destination IP
addresses, port numbers, protocol types, and packet length, provide valuable information for
identifying anomalies in network traffic.
o Payload Content: Analyzing the payload content of network packets, including HTTP
requests, DNS queries, and payload size, can reveal suspicious patterns indicative
of malicious activity, such as command-and-control communications or data
exfiltration.
o Flow Statistics: Flow-based features, such as flow duration, packet rate, byte rate, and
inter-arrival time, offer insights into the behavior of network flows and enable the
detection of anomalies such as denial-of-service (DoS) attacks or port-scanning
activities.
o Network Protocols: Features related to network protocols, including the presence of
specific protocols (e.g., HTTP, FTP, SSH) and protocol anomalies (e.g.,
protocol violations, unusual protocol behavior), help in identifying anomalous
network behavior and potential security threats.
o Communication Patterns: Analyzing communication patterns, such as the frequency of
interactions between network entities, temporal dependencies, and traffic volume variations,
helps identify deviations from normal network behavior and potential indicators of
compromise.
o Data Dimensionality Reduction Techniques: Methods for reducing the
dimensionality of network traffic data, such as principal component analysis (PCA)
or t-distributed stochastic neighbor embedding (t-SNE), enable the extraction of essential
features while mitigating the curse of dimensionality and improving detection
performance.
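As a brief sketch (the component count and data are assumptions, not project parameters), PCA can compress high-dimensional flow features like so:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 10))   # 10 raw features per network flow

pca = PCA(n_components=3)        # keep the 3 strongest principal components
X_reduced = pca.fit_transform(X)
print(X_reduced.shape)           # (200, 3)
```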
o Feature Selection Methods: Feature selection methods, including filter, wrapper,
and embedded approaches, facilitate the identification of the most discriminative
features for anomaly detection, improving model interpretability and reducing
computational complexity.
o Ensemble Learning Approaches: Ensemble learning methods, such as bagging, boosting,
and random forests, combine multiple anomaly detection models to improve
detection accuracy and robustness, leveraging the diversity of individual models to mitigate
false positives and false negatives.
o Deep Learning Architectures: Deep learning architectures, including convolutional neural
networks (CNNs) and recurrent neural networks (RNNs), enable the automatic extraction
of hierarchical representations from network traffic data, capturing complex patterns
and improving detection performance.
o Temporal and Spatial Context:
Incorporating temporal and spatial context features, such as session duration, sequence of
events, and spatial relationships between network entities, improves the understanding
of network behavior and facilitates the detection of anomalous activities.
o Regarding financial variables, several considerations are crucial:
Cost-effectiveness: Organizations need to assess not only initial investments but also
ongoing operational costs like maintenance, upgrades, and training. Effective resource
allocation ensures optimal fund utilization and maximizes ROI within budget constraints.
Resource Allocation: Proper allocation involves prioritizing aspects such as hardware,
software, personnel, and training to achieve desired outcomes efficiently.
o Concerning environmental impact:
Sustainability: Organizations can minimize environmental footprints through energy-
efficient hardware, eco-friendly manufacturing practices, and reducing carbon emissions.
This aligns with corporate social responsibility and contributes to long-term
sustainability.
Green Practices: Incorporating renewable energy sources, energy-efficient algorithms,
and optimizing hardware utilization are strategies to reduce environmental impact.
o In health and security:
Safety Protocols: Implementing safety protocols protects personnel involved in system
deployment and operation. This includes training on handling sensitive equipment and
risk mitigation strategies.
Threat Mitigation: Identifying and mitigating potential health risks associated with the
system, such as exposure to electromagnetic radiation, ensures a safe working
environment.
o In manufacturability:
Scalability: Designing for scalability ensures long-term viability and adaptability with
changing network requirements.
Ease of Deployment: Simplifying installation procedures and providing user-friendly
interfaces streamline deployment and minimize downtime.
Modularity: Modular design allows for easy maintenance, upgrades, and customization,
reducing implementation complexity.
o Professional ethics entail:
Integrity: Upholding integrity in data handling and decision-making processes fosters
trust among stakeholders.
Confidentiality: Safeguarding sensitive information and respecting user privacy are
fundamental ethical principles.
Accountability: Establishing mechanisms ensures transparency and responsibility in
case of system failures or breaches, promoting ethical conduct.
o Considering social and political issues:
Cultural Sensitivity: Recognizing cultural nuances ensures that system design respects
diverse backgrounds and values.
Stakeholder Engagement: Engaging with stakeholders fosters collaboration and
addresses concerns effectively.
Geopolitical Considerations: Understanding geopolitical dynamics helps navigate legal
and political challenges associated with system deployment across borders.
o In cost considerations:
ROI Analysis: Conducting a comprehensive cost-benefit analysis assesses financial
viability and impact on organizational goals.
Total Cost of Ownership (TCO): Considering TCO provides a holistic view of financial
implications over the system's lifecycle.
Risk Management: Identifying and mitigating financial risks through proactive
strategies enhances project success and financial sustainability.
unauthorized access attempts, privilege escalation), file characteristics (e.g., suspicious file
extensions, file entropy), and behavioral anomalies (e.g., deviations from normal user
behavior).
Feature Engineering: Transform raw data into actionable features through techniques such
as data preprocessing, extraction, and transformation. For example, you might derive
features such as packet flow statistics, frequency of access to critical resources, or temporal
patterns of system events.
Feature Selection: Employ methods such as statistical analysis, machine learning
algorithms, or domain expertise to select the most informative features for detecting
cybersecurity threats. Prioritize features that exhibit high discriminatory power and are
robust against noise.
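One hedged way to realize the selection step is univariate feature scoring, e.g. scikit-learn's SelectKBest with an ANOVA F-test; the data and the number of kept features below are illustrative assumptions:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 8))
y = (X[:, 2] > 0).astype(int)    # only feature 2 carries signal here

# Score each feature against the label and keep the 3 highest-scoring ones.
selector = SelectKBest(f_classif, k=3).fit(X, y)
print(selector.get_support())    # boolean mask of the 3 kept features
```

Wrapper and embedded approaches (e.g. recursive feature elimination, tree-based importances) follow the same pattern with different scoring criteria.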
understanding and enable stakeholders to interpret the rationale behind threat detection
outcomes.
By conducting thorough feature analysis and finalizing features within the specified
constraints, the AI-driven cybersecurity threat detection system can effectively identify and mitigate
a wide range of cyber threats while meeting operational, regulatory, and privacy requirements.
3.3.2. Pre-processing:
Upon data acquisition, the dataset undergoes pre-processing to prepare it for analysis. Tasks
include data cleaning, normalization, and handling missing values. Pre-processing ensures data
quality and consistency before feature extraction.
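A minimal sketch of these steps, imputing missing values and then normalizing features (the library choices and tiny dataset are assumptions for illustration):

```python
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler

# Tiny illustrative dataset with one missing value.
X = np.array([[1.0, 200.0],
              [2.0, np.nan],
              [3.0, 220.0]])

X_clean  = SimpleImputer(strategy="mean").fit_transform(X)   # fill NaN with column mean
X_scaled = StandardScaler().fit_transform(X_clean)           # zero mean, unit variance
print(np.isnan(X_scaled).any())   # False: no missing values remain
```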
3.3.4. Dataset after Feature Selection:
Following feature engineering, the dataset is subjected to feature selection to focus on the
most informative attributes while reducing dimensionality. This optimized dataset serves as input
for training and testing the anomaly detection model.
Fig. 3.1 Design Flow
Ensemble Learning:
Ensemble learning is a machine learning approach that combines multiple models to produce better
predictive performance than any individual model. In the context of anomaly detection, ensemble
learning can enhance the system's ability to detect anomalies in network traffic by aggregating the
predictions of multiple decision trees.
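The idea of aggregating many decision trees can be sketched with bagging, where each tree sees a bootstrap sample and the ensemble votes; the synthetic data and tree count are illustrative assumptions:

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(7)
X = rng.normal(size=(400, 5))
y = (X[:, 0] * X[:, 1] > 0).astype(int)   # synthetic nonlinear label

# Bagging: each tree is trained on a bootstrap sample; predictions are aggregated.
ensemble = BaggingClassifier(DecisionTreeClassifier(), n_estimators=25,
                             random_state=7)
ensemble.fit(X, y)
print(len(ensemble.estimators_))   # 25 trees vote on each prediction
```

A Random Forest is this same idea plus random feature subsampling at each split.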
3.4.1. IDS Dataset Acquisition:
The process begins with the acquisition of an Intrusion Detection System (IDS) dataset
containing labeled instances of network traffic. This dataset serves as the training data for building
the ensemble of decision trees.
3.5.2. Data Preprocessing and Model Development:
Data Preparation: Preprocessed the network traffic data, including feature extraction,
normalization, and transformation, to prepare it for analysis by the machine learning models.
Model Development: Implemented the Autoencoder, One-Class SVM, and Isolation Forest
models for unsupervised anomaly detection based on the extracted features, ensuring robust
and efficient anomaly detection capabilities.
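Two of the detectors named above can be sketched in a few lines; the features, hyperparameters, and the obvious outlier below are assumptions for illustration, not RADIANT's actual configuration:

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(3)
normal_traffic = rng.normal(0.0, 1.0, size=(300, 2))  # baseline feature vectors
outlier = np.array([[8.0, 8.0]])                      # obvious anomaly

# Both models are fit only on "normal" data and then score new points.
iso = IsolationForest(random_state=3).fit(normal_traffic)
svm = OneClassSVM(nu=0.05).fit(normal_traffic)

print(iso.predict(outlier))  # [-1] means "anomaly"
print(svm.predict(outlier))  # [-1]
```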
3.5.6. Performance Evaluation:
Benchmarking: Evaluated the performance of the RADIANT system using benchmark
datasets such as NSL-KDD and UNSW-NB15 to assess accuracy, false positive rate, detection rate,
and computational efficiency, providing insights into its effectiveness.
Comparative Analysis: Compared the performance of the RADIANT system with existing
approaches, such as conventional neural networks, highlighting its advantages in real-time anomaly
detection and demonstrating its superiority in detecting and mitigating network threats.
RESULTS ANALYSIS AND VALIDATION
Fig. 4.1 Functional architecture for anomaly detection in Radiant.
o Data Flow: Network traffic data flows through these modules, undergoing feature
extraction and analysis by the anomaly detection models. Real-time insights and alerts
are then generated.
o Technologies: Leverages technologies like streaming analytics for real-time processing,
machine learning libraries for anomaly detection, and visualization tools for presenting
results.
4.3. Result and Analysis:
This section rigorously evaluates the proposed RADIANT framework for real-time anomaly
detection in network traffic.
Fig. 4.2 Evaluation of Anomaly Detection Techniques on NSL-KDD and UNSW-NB15
The AUC for RADIANT consistently surpassed benchmarks across both datasets,
highlighting its strength in differentiating normal and anomalous traffic patterns.
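An AUC comparison like this can be computed from anomaly scores; the labels and scores below are made up to show the mechanics, not RADIANT's results:

```python
from sklearn.metrics import roc_auc_score

# Made-up ground-truth labels (1 = anomalous) and detector scores.
y_true   = [0, 0, 0, 1, 1, 1, 0, 1]
y_scores = [0.1, 0.3, 0.2, 0.8, 0.7, 0.9, 0.4, 0.6]
print(roc_auc_score(y_true, y_scores))  # 1.0: scores rank every anomaly above every normal point
```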
4.3.3. Comparative Analysis
The evaluation results reveal several advantages of RADIANT:
o High Accuracy: It achieves a significant rate of correctly identifying anomalies in real-world
network traffic.
o Improved F1-score: Compared to baselines, it exhibits a better balance between precision and
recall, leading to more robust detection performance.
o Reduced False Positives: It maintains high precision, minimizing unnecessary alerts for
normal network activity.
o Adaptability: By leveraging unsupervised learning and continuous retraining, it can
effectively adapt to evolving network patterns and emerging threats.
These findings suggest that RADIANT has the potential to be a valuable tool for real-time
anomaly detection in network security, offering improved accuracy, adaptability, and efficiency
compared to existing methods.
CONCLUSION AND FUTURE WORK
5.1. Conclusion
In conclusion, the development and implementation of the RADIANT system for real-time
anomaly detection in network traffic represent a significant advancement in cybersecurity defense
mechanisms. By leveraging unsupervised machine learning techniques such as Autoencoders,
One-Class SVM, and Isolation Forest, RADIANT offers a proactive and adaptable approach to
identifying anomalous patterns in network traffic without relying on pre-defined attack signatures.
Overall, the RADIANT system offers a comprehensive and efficient framework for real-time
anomaly detection in network traffic, providing enhanced security, automated analysis, and
adaptability to emerging threats. Its successful implementation signifies a significant step forward in
bolstering cyber defenses and safeguarding critical infrastructure, sensitive data, and user privacy in
the digital age.
APPENDIX
Plagiarism Report
Code