THE AI REVOLUTION IN
NETWORKING, CYBERSECURITY,
AND EMERGING TECHNOLOGIES
Many of the designations used by manufacturers and sellers to distinguish their products are claimed as trademarks. Where those designations appear in this book, and the publisher was aware of a trademark claim, the designations have been printed with initial capital letters or in all capitals.

The authors and publisher have taken care in the preparation of this book, but make no expressed or implied warranty of any kind and assume no responsibility for errors or omissions. No liability is assumed for incidental or consequential damages in connection with or arising out of the use of the information or programs contained herein.

For information about buying this title in bulk quantities, or for special sales opportunities (which may include electronic versions; custom cover designs; and content particular to your business, training goals, marketing focus, or branding interests), please contact our corporate sales department at corpsales@pearsoned.com or (800) 382-3419.

For government sales inquiries, please contact governmentsales@pearsoned.com.

For questions about sales outside the U.S., please contact intlcs@pearson.com.

Editor-in-Chief: Mark Taub
Director, ITP Product Management: Brett Bartow
Executive Editor: James Manly
Managing Editor: Sandra Schroeder
Development Editor: Christopher A. Cleveland
Production Editor: Mary Roth
I would like to dedicate this book to my lovely wife, Jeannette, and my
two beautiful children, Hannah and Derek, who have inspired and supported me
throughout the development of this book.
—Omar Santos
Contents
Summary . . . 30
References . . . 31
Index . . . 185
Preface
The AI Revolution in Networking, Cybersecurity, and Emerging Technologies offers an immersive journey
into the world of artificial intelligence and its profound impact on key domains of technology. This
manuscript demystifies AI’s emergence, growth, and current impact, shedding light on its revolu-
tionary applications in computer networking, cybersecurity, collaboration technologies, IoT, cloud
computing, and other emerging technologies.
From explaining AI’s role in managing and optimizing networks to its integral part in securing the
digital frontier, the book offers a wealth of insights. It explores how AI is building robust bridges in
collaboration tools and turning IoT into a super-intelligent network of devices. The reader will also
discover how AI is transforming the cloud into a self-managing, secure, and ultra-efficient environ-
ment and propelling other technologies towards unprecedented advancements.
Our motivation is for this book to serve as a comprehensive guide that bridges the gap between
the complex world of artificial intelligence and its practical implications in the field of IT. We aim to
make the profound impacts and potential of AI in various technology sectors not only understand-
able but also tangible for a wide spectrum of readers. Additionally, part of our vision is to create an
essential resource that empowers readers to understand, navigate, and address the opportunities,
complex challenges, and responsibilities associated with AI technologies. This book will empower
readers, whether they are IT professionals, tech enthusiasts, business leaders, or students, with the
necessary knowledge and insights into how AI is reshaping the IT landscape. By providing a clear,
in-depth exploration of AI’s role in computer networking, cybersecurity, IoT, cloud computing, and
more, we aim to equip readers to harness the power of AI in their respective fields. Ultimately, our
motive is for this book to not only educate but also inspire—serving as a catalyst that propels
individuals and organizations into the future of AI-integrated technology.
This book is highly relevant for a range of audiences, given its exploration of various aspects of
artificial intelligence and technology:
• IT Professionals: Those who work in fields related to information technology, network
management, cybersecurity, cloud computing, IoT, and autonomous systems could benefit
from understanding how AI is revolutionizing their respective fields.
• Tech Enthusiasts: Individuals with an interest in emerging technologies and future trends
might find this book interesting due to its examination of AI’s influence on various domains.
• Business Leaders & Managers: This book would be useful for executives, managers, and
decision-makers who need to understand the implications of AI on business processes and
strategies, particularly those related to IT.
• Academics and Students: Professors, researchers, and students in fields related to
computer science, information technology, and AI would find the book useful for research
and educational purposes.
• Policy Makers: Given the increasing impact of AI on society and the economy, policymakers
could also gain valuable insights from this book.
• AI Professionals: People working in the field of AI might use this book to understand the
broader context and applications of their work.
Register your copy of The AI Revolution in Networking, Cybersecurity, and Emerging Technologies
on the InformIT site for convenient access to updates and/or corrections as they become
available. To start the registration process, go to informit.com/register and log in or create an
account. Enter the product ISBN (9780138293697) and click Submit.
Acknowledgments
We would like to thank the technical editor, Petar Radanliev, for his time and technical expertise.
Additionally, our appreciation goes to the dedicated Pearson team, with special mentions to James
Manly and Christopher Cleveland, for their amazing support.
About the Authors
Omar Santos is a cybersecurity thought leader with a passion for driving industry-wide initiatives
to enhance the security of critical infrastructures. Omar is the lead of the DEF CON Red Team Village,
the chair of the Common Security Advisory Framework (CSAF) technical committee, the founder
of OpenEoX, and board member of the OASIS Open standards organization. Omar’s collaborative
efforts extend to numerous organizations, including the Forum of Incident Response and Security
Teams (FIRST) and the Industry Consortium for Advancement of Security on the Internet (ICASI).
Omar is a recognized expert in ethical hacking, vulnerability research, incident response, and AI
security. He employs his deep understanding of these disciplines to help organizations stay ahead
of emerging threats. His dedication to cybersecurity has made a significant impact on businesses,
academic institutions, law enforcement agencies, and other entities striving to bolster their security
measures.
With over 20 books, video courses, white papers, and technical articles under his belt, Omar’s exper-
tise is widely recognized and respected. Omar is a Distinguished Engineer at Cisco, focusing on AI
security research, incident response, and vulnerability disclosure. Omar is a frequent speaker at
many conferences, including RSA, Black Hat, DEF CON, and more, where he shares his cybersecurity
and AI security insights with the global community. You can follow Omar on Twitter @santosomar.
Samer Salam is a technology architect and engineering leader in the computer networking indus-
try with over two decades of experience. In his role as Distinguished Engineer at Cisco Systems,
he focuses on identifying, incubating, and mainstreaming disruptive technologies, in addition to
defining and driving the system and software architecture for networking products. His work spans
the areas of Intent Based Networking, Artificial Intelligence, Natural Language Processing, Machine
Reasoning, Semantic Technologies and Immersive Visualization. Previously at Cisco, he held mul-
tiple technical leadership and software development positions working on IoT, Layer 2 VPN, Metro
Ethernet, OAM protocols, network resiliency, system scalability, software quality, multi-service edge,
broadband, MPLS, and dial solutions.
Samer was awarded the International Society of Service Innovation Professionals (ISSIP) 2022
Excellence in Service Innovation Award for the “Impact to Innovation” category. He holds over 99
US and international patents, and is coauthor of The Internet of Things From Hype to Reality: The Road
to Digitization. He has authored fourteen IETF RFCs and multiple articles in academic and industry
journals. He is also a speaker at Cisco Live, and blogs on networking technology. Samer holds an
M.S. degree in Computer Engineering from the University of Southern California in Los Angeles
and a B.Eng. in Computer and Communications Engineering, with Distinction, from the American
University of Beirut.
Hazim Dahir is a Distinguished Engineer at the Cisco Technology Enablement and Acceleration
Office. He is working to define and influence next-generation digital transformation architectures
across multiple technologies and industry verticals. Hazim started his Cisco tenure in 1996 as a soft-
ware engineer and subsequently moved into the services organization, focusing on large-scale and
Chapter 1: Introducing the Age of AI
and vast storage it needs. Finally, in Chapter 7, “Impact of AI in Other Emerging Technologies,” we
expand our view to encompass the broader technological horizon. We’ll see how AI is breathing
life into other cutting-edge technologies, from autonomous vehicles and personalized medicine to
quantum computing and beyond.
Together, these chapters weave the narrative of the ongoing AI revolution. The journey won’t
be easy; it’s complex, uncertain, even daunting. But it is also exhilarating, rich with potential and
opportunities. Join me, along with my co-authors Hazim and Samer, as we embark on this journey.
headlines, or what you hear in podcasts and YouTube videos, but also thrive in the form of myriad
incremental advancements that might seem small on their own but collectively represent a significant
transformation. The landscape of AI we explore today might differ from the one that exists tomorrow.
However, instead of viewing this inability to keep up as a shortcoming, consider it a testament to
the true potential and pace of the AI field. Don’t think about this book as just a static record of AI’s
current state, but rather view it as a compass, pointing toward the broader impact. It’s designed to
provide a framework, a lens through which to make sense of this ongoing revolution, and to help
navigate the future developments that, at this moment, we can only begin to imagine.
The AI field has already seen an incredible number of important milestones, many of which have led
to the advancements we're seeing today. Figure 1-1 provides a timeline of the most popular historical
milestones of AI.
These milestones, among many others, represent key moments in the development of AI, each
marking a significant stride forward in the technology’s capabilities. Let’s explore the milestones
illustrated in Figure 1-1.
In 1950, Alan Turing proposed a test to measure a machine’s ability to exhibit intelligent behav-
ior equivalent to or indistinguishable from that of a human. This test, known as the Turing Test,
remains an important concept in the field of AI. The first organized gathering to discuss AI was
the Dartmouth Conference, held in 1956. This is where the term “artificial intelligence” was coined.
The Dartmouth Conference initiated active research in the field. Three years later, John McCarthy
and Marvin Minsky established the Artificial Intelligence Laboratory at the Massachusetts Institute
of Technology (MIT), signifying the formalization of AI as an academic field of study. Joseph
Weizenbaum at MIT later created ELIZA, one of the first AI programs that could simulate conversa-
tion with a human being. In 1972, MYCIN was developed at Stanford University; it was one of the
first expert systems designed to help doctors diagnose bacterial infections and recommend treat-
ments. IBM’s supercomputer, Deep Blue, defeated world chess champion Garry Kasparov in 1997.
This event showcased the potential for AI to outperform humans in complex tasks. Watson, another
IBM creation, won the game show Jeopardy in 2011, demonstrating a significant leap in AI’s natural
language processing and understanding capabilities.
Developed by Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton, AlexNet won the ImageNet
Large Scale Visual Recognition Challenge in 2012, highlighting the effectiveness of deep learning
and convolutional neural networks in image recognition tasks. Two years later, Microsoft launched
Xiaoice, a social chatbot that could carry on conversations, paving the way for the development of
advanced conversational AI systems.
In 2015, Google’s AlphaGo, developed by DeepMind, defeated world Go champion Lee Sedol,
showcasing the power of AI in mastering a game far more complex than chess. Elon Musk co-
founded OpenAI, a research organization committed to ensuring artificial general intelligence
(AGI) can be aligned with human values and widely distributed. OpenAI’s GPT models have
marked significant progress in generative AI. DeepMind’s AI, AlphaFold, made a breakthrough in
biology by solving the protein folding problem, demonstrating the potential for AI to accelerate
scientific discovery.
Figure 1-1 Historical AI Development Milestones (1950: Turing Test; 1956: The Dartmouth Conference; 1997: Deep Blue defeats Garry Kasparov; 2015: AlphaGo defeats the Go champion; 2022–2023: GPT-4)

The AI Black Box Problem and Explainable AI
Nowadays, dozens of AI models and applications are being released at a very rapid pace. But what’s
next? We can extrapolate, based on trends and projected advancements, that AGI is very near. Such
AI systems will possess the ability to understand, learn, and apply knowledge across a wide array of
tasks at a human level.
Meanwhile, quantum computing is in an emerging stage. Its integration with AI will open up new
possibilities for data processing and machine learning. How quantum computers are programmed
and networked will be vastly different from the way traditional computers are.
As AI systems grow more complex, the demand for transparency also increases. In particular, we
will need to see significant advancements in “explainable AI” (XAI). We must design these systems to
provide clear, understandable explanations for their decisions and actions.
NOTE Feature engineering is the process in which the data scientist specifies which aspects of the data the
model should pay attention to.
Traditional models tend to be task-specific. This implies that a new model must be trained from
scratch for each unique problem, with little to no transfer of knowledge occurring from one task to
another.
LLMs introduce a different approach, known as Transformer-based models. These models leverage
deep learning and natural language processing (NLP) to understand and generate human-like text.
They are pretrained on a massive corpus of text data, learning patterns, structures, and even some
facts about the world from billions of sentences. Unlike traditional models, these LLMs are general-
ists. Once trained, they can be fine-tuned for a wide range of tasks, such as translation, question-
answering, summarization, and more, all within the same model architecture. The ability to transfer
knowledge across tasks is one of their key strengths.
TIP In the context of AI, parameters are the internal variables that the model learns through training. They
are the part of the model that is learned from historical training data and enable the model to make predic-
tions or decisions. In a simple linear regression model, the parameters are the slope and the y-intercept. In
a deep learning model such as a neural network, the parameters are the weights and biases in the network.
These parameters are initially set to random values and then iteratively adjusted based on the feedback
signal (loss or error) that the model gets as it trains on the data.
In the case of LLMs like GPT, the parameters are the weights in the numerous layers of the Transformer
architecture that the model uses. As an example, the legacy GPT-3 has 175 billion parameters, which means
the model has an equal number of weights that it can adjust to learn from the data it’s trained on. GPT-4 is
speculated to have 1.76 trillion parameters, though some sources suggest it is a combination of different
models (the exact details have not been disclosed by OpenAI).
Their total number of parameters allows these models to capture and represent very complex patterns and
relationships in the data. In turn, that is part of what enables them to generate such remarkably human-like
text.
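To make the idea of parameters concrete, here is a minimal sketch (with invented training data) of the simple linear regression case described above: the only parameters are the slope w and the intercept b, which start at arbitrary values and are iteratively nudged by the loss signal. This is the same mechanism that LLMs scale up to billions of weights.

```python
# Training data generated from y = 2x + 1 (hypothetical), so the learned
# parameters should approach w = 2.0 and b = 1.0.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2 * x + 1 for x in xs]

w, b = 0.0, 0.0   # the model's parameters, starting at arbitrary values
lr = 0.02         # learning rate

for _ in range(5000):
    # Gradients of the mean squared error with respect to w and b:
    # this is the feedback signal (error) described in the TIP above.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    # Iteratively adjust each parameter to reduce the error
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # converges close to 2.0 and 1.0
```

An LLM does nothing conceptually different; it simply has many more parameters (weights across Transformer layers) and a more elaborate loss, adjusted by the same kind of gradient-based feedback.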
Today's LLMs vs. Traditional Machine Learning
Unlike traditional machine learning, large language models do not rely on explicit feature
engineering. They learn to represent and understand the data automatically through their training
process, which involves adjusting millions, billions, or even trillions of parameters to minimize the
difference between their predictions and the actual outcomes.
Table 1-1 compares the traditional machine learning models with the newer, more advanced
models such as LLMs, Transformer-based models, and generative AI models.
Transfer learning is a machine learning technique in which a model developed for a given task is
reused as the starting point for a model on a second task. Basically, you take a pretrained model
(a model trained on a large dataset) and adapt it for a different (but related) problem. Transfer
learning is useful when you have a small dataset for the problem you are interested in solving but
also have access to a much larger, related dataset.
For example, suppose you have a convolutional neural network (CNN) model that has been trained
to recognize 1000 types of objects. This model has already learned useful features from the images
it has seen, such as edges, corners, and textures. Now, you have a new task where the system needs
to recognize only a few types of network topology devices (e.g., routers, switches, firewalls, serv-
ers, desktops). Instead of training a new model from scratch, you can use the pretrained model and
slightly modify its architecture to suit your specific task. In this way, you can leverage the learned
features without having to start the training process from scratch.
Several different types of transfer learning are possible:
• Feature extraction: The pretrained model acts as a feature extractor. You remove the output
layer and add new layers that are specific to your task. The pretrained layers are usually
“frozen” during training (i.e., their weights are not updated).
• Fine-tuning: You not only replace the output layer but also continue to train the entire
network, sometimes at a lower learning rate, to allow the pretrained model to adapt to the
new task.
• Task-specific models: Sometimes, certain layers of the pretrained model may be replaced or
adapted to make the model better suited to the new task.
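The feature-extraction variant above can be sketched without any ML framework. In this hypothetical example, a fixed function stands in for the frozen pretrained layers (their weights are never updated), and only a small new output layer is trained for the new task; the toy data, feature maps, and layer sizes are all invented for illustration.

```python
def extract_features(x):
    # Stand-in for the frozen pretrained layers: fixed feature maps
    # (aggregate activations) whose "weights" are never updated.
    return [x[0] + x[1], x[2] + x[3], sum(x)]

# New task: a tiny binary classification problem (imagine "router" vs.
# "switch" inputs, here replaced by 4-dimensional toy vectors).
data = [([2, 1, 0, 0], 1), ([0, 0, 2, 1], 0),
        ([1, 2, 0, 1], 1), ([0, 1, 1, 2], 0)]

head = [0.0, 0.0, 0.0]   # the new, trainable output layer
bias = 0.0

# Train ONLY the head with simple perceptron updates; the extractor
# stays frozen throughout, which is exactly feature extraction.
for _ in range(200):
    for x, label in data:
        f = extract_features(x)
        pred = 1 if sum(h * fi for h, fi in zip(head, f)) + bias > 0 else 0
        err = label - pred
        head = [h + 0.1 * err * fi for h, fi in zip(head, f)]
        bias += 0.1 * err

correct = sum(
    (1 if sum(h * fi for h, fi in zip(head, extract_features(x))) + bias > 0
     else 0) == y
    for x, y in data
)
print(f"{correct}/{len(data)} training examples classified correctly")
```

Fine-tuning would differ only in that the extractor's weights would also be updated, typically at a lower learning rate, rather than held fixed.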
TIP Training a model from scratch can be computationally expensive and time-consuming. Transfer learn-
ing can significantly speed up this training process. When you have a small dataset, training a model from
scratch may result in overfitting. Transfer learning helps in such cases by leveraging a pretrained model. A
pretrained model has generalized features that can enhance its performance on the new task, even if the
new task is significantly different from the original task. Transfer learning has seen successful applications in
multiple domains, including NLP, computer vision, and reinforcement learning.
Feature engineering is the process of selecting, transforming, or creating new input variables
(features) to improve the performance of a machine learning model. The quality and relevance of
the features used can significantly impact the model’s ability to learn the underlying patterns in the
data, and consequently its performance on unseen data.
The feature engineering process typically involves a combination of domain expertise, data analysis,
and experimentation, and can include steps such as variable transformation, feature extraction, and
feature construction. Variable transformation is explained in Figure 1-2.
Figure 1-2 Mechanics of Variable Transformation
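As an illustration of variable transformation, the following sketch (with invented values for something like bytes transferred per network flow) applies a log transform to compress a long right tail and then standardizes the result to z-scores, two of the most common transformations in practice.

```python
import math

# Hypothetical skewed raw feature values (e.g., bytes per flow)
raw = [120, 340, 560, 1200, 4500, 98000]

# Step 1: log transform compresses the long right tail
logged = [math.log10(v) for v in raw]

# Step 2: standardize to zero mean and unit variance (z-scores),
# so the model sees a roughly symmetric, zero-centered input
mean = sum(logged) / len(logged)
std = math.sqrt(sum((v - mean) ** 2 for v in logged) / len(logged))
z = [(v - mean) / std for v in logged]

print([round(v, 2) for v in z])
```

Which transformation is appropriate depends on the feature's distribution and the model family; tree-based models, for instance, are largely insensitive to monotonic transformations, while linear models and neural networks usually benefit from them.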