SEPTEMBER 2022 | Sponsored by Deep Instinct

How Machine Learning, AI & Deep Learning Improve Cybersecurity

Machine intelligence is influencing all aspects of cybersecurity. Here’s what security teams need to consider to make AI work in their organizations.

INSIDE:
How Machine Learning, AI & Deep Learning Improve Cybersecurity >>
How Can AI Help in the Fight Against Ransomware? >>
Criminals Use Deepfake Videos to Interview for Remote Work >>
EU Debates AI Act to Protect Human Rights, Define High-Risk Uses >>
Deep Instinct Perspectives: Harnessing the Power of Deep Learning to Speed Up Threat Detection >>
FEATURE

How Machine Learning, AI & Deep Learning Improve Cybersecurity

Machine intelligence is influencing all aspects of cybersecurity. Here’s what security teams need to consider to make AI work in their organizations.

By George Hulme, Contributing Writer, Dark Reading

Enterprise security teams are strained. They’re striving to defend ever-more entryways as attackers’ sophistication and motivation grow exponentially. At the same time, enterprise environments are becoming increasingly complex, with more servers, endpoints, cloud systems, and applications coming online daily.

To even have a chance at reducing risks, enterprises have begun deploying security solutions at a breakneck pace, installing (by some estimates) an average of 76 security tools and spending upward of 14% of their annual IT budgets on security. The situation is untenable. “Enterprise security teams are just deluged with threats and priorities,” says Scott Crawford, an analyst at 451 Research. “A lot of the tools that they are turning to are going unused because they don’t have the staff to manage them.”

That staffing shortage is unlikely to let up any time soon. With an estimated gap of 2.7 million skilled cybersecurity professionals needed to fill current demand, one should expect enterprise security teams to remain strained for some time to come. However, the rise of artificial intelligence (AI) and machine learning (ML) to augment human security efforts offers a glimmer of hope at alleviating the pain.

How Can AI/ML Help?
Advancements in AI technology and ML algorithms promise to transform enterprise security by giving IT security teams enhanced tools to detect and respond to attacks faster than before. Before security teams can use these tools, they need to know what is possible with different types of learning and how AI can best address today’s complex cybersecurity challenges. Although it is common to hear these terms used interchangeably, security teams need to understand the differences between machine learning and deep learning, and how to evaluate and choose the right methods for success.

If AI/ML can do anything, it can help human defenders become more efficient. “AI systems can support human defenders by saving time, cutting costs, and reducing potential for human error,” says Jason Rader, vice president and chief information security officer at services provider Insight. “AI’s ability to speed up processes that would take hours — or perhaps days or months — for humans to complete is substantial.”

According to Ed Bowen, AI leader and managing director at Deloitte’s AI Center of Excellence, AI/ML can, using techniques such as reinforcement learning, help improve functions from vulnerability testing through identity and access management. “We are also seeing exciting developments in the ability of AI models and tools to increase the efficacy of cyber-threat detection and response, and to expedite cyber operators’ reaction time to the greatest threats,” says Bowen. AI/ML’s ability to reduce noise in alerts and data should enable human defenders to spend more time focused on high-value security efforts.

DJ Singh, cybersecurity architect at Capgemini Americas, echoes Bowen’s observations and says customers are seeking ways to implement AI-based security that will analyze event data using ML models that identify attack patterns. “These specialized tools help narrow down the events to those most likely to be actual risks, resulting in faster, near-real-time detection of adversarial activity,” Singh says.

Ultimately, AI/ML proponents say these technologies significantly reduce the time to detect breaches, enhance cybersecurity decision-making through boosted data analysis, improve overall security defenses through enhanced attack and user behavioral analysis, and even increase effective automation. “Direct business benefits can include reduced analyst workload, decreased risk exposure through better coverage, and significantly lower security costs. AI/ML-based security tools also enable better breach recovery times with an automated response capability,” says Singh.

For those reasons, analysts expect the AI/ML market to continue to grow more rapidly than the overall enterprise technology market. Market intelligence firm Meticulous Research expects the AI-in-cybersecurity market to continue to grow at 24% annually through 2029, when it is expected to reach $66 billion.

How AI/ML Is Used in Cybersecurity
How are enterprises predominantly deploying AI/ML? Are they aggregating their own data and assigning their homegrown algorithms to assess that data, or is the intelligence embedded within the products and services they are using? Experts say most enterprises are turning to AI/ML within the tool sets they acquire and deploy.

“I do see the more advanced security teams applying data science and machine learning to their own datasets,” says Allie Mellen, an analyst at Forrester Research. “This is a very hard thing to do because of the specialized staffing required. You need data scientists and people who understand the threats that are in the environment. You need to understand the environment yourself. It gets very costly and very resource intensive very quickly,” she adds. Mellen and other experts advise that security teams usually should not take the homegrown AI/ML path unless they are really committed to having a large supporting organization with a lot of committed resources.

451’s Crawford agrees: “Building one’s own AI/ML is something that’s not very common and something we see more among the security one-percenters.”

Increasingly, security vendors and service providers are incorporating AI/ML capabilities within their technologies. “Security vendors have significantly more resources to provide to those with data science skills, such as better salaries, better benefits, and the ability to provide more opportunities for career advancement. Vendors can bring the resources needed to succeed in the role, and that’s not always an option for security teams within the enterprise,” Mellen says.

“We very much need the human element in cybersecurity to make it successful,” Mellen says. “Most of the use cases that I’ve seen around machine learning are on detection, particularly anomaly detection, and finding patterns and being able to identify when an activity is happening that shouldn’t be happening.”

Security teams must understand the various types of AI/ML available, how to vet vendor AI/ML claims, and how to deploy AI/ML technologies effectively.

Assessing the Vendor’s AI/ML Claims
When CISOs and others evaluate the marketing claims of security vendors boasting about their AI/ML capabilities, it’s tough to tell what’s real and what’s not. “That’s really the rub there,” says Crawford. “When someone is putting forward an approach to AI/ML as part of their value proposition, it becomes difficult for the person who isn’t a data scientist or isn’t familiar with the techniques to know if the claims are real.”

“One of the biggest bait-and-switches in AI vending is the sale of rules-based engines in place of AI/ML programs,” says Adam Kamor, co-founder and head of engineering at synthetic data provider Tonic.ai. “AI learns from data while rules-based engines do not. This would allow an AI program to handle unexpected outcomes because it has seen the unexpected event in past data. An AI model is less likely to overlook or miss small patterns in data that someone engineering a rules-based engine may [miss].”

To avoid such bait-and-switches and promises vendors cannot deliver, understanding the terms is crucial. What’s the difference between AI and ML? That depends on whom you ask, but many contend that machine learning isn’t AI because it doesn’t actively interact with the environment. Rather, ML collects and categorizes data, and then makes observations and predictions on the data it ingested using mathematical modeling. Actual AIs, however, actively interact with environments and attempt to understand their environments well enough to make useful and accurate decisions, much like a human would. Machine learning is a subset of the study and implementation of artificial intelligence.

There are also different types of learning, specifically supervised and unsupervised learning. Supervised learning trains machine learning systems on data inputs that have already been labeled. With unsupervised learning, the data inputs are not previously labeled, and the machine intelligence finds patterns and useful information within the data without human help. Most enterprise security teams need AI/ML systems that can identify conditions humans cannot, provide accurate analysis of surprising (unexpected) situations, and improve that analysis over time.

Although today’s security challenges and enterprise environments are more sophisticated than ever, some use cases are relatively straightforward to predict with enough historical data, such as many types of fraudulent financial transactions and end-user behavioral analysis. AI/ML is also useful for analyzing vulnerability assessments much more rapidly and accurately than humans, or for spotting flaws in application code that developers miss. Conversely, some attacks can be quite complex, and the events and techniques an attacker uses to move laterally may not be what one would expect. Here, unsupervised learning systems can identify causal connections human analysts may miss, and ideally, this would occur before attacks are successful.
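The distinction is easier to see in code than in prose. Below is a minimal sketch using scikit-learn (an assumption, since the article names no tooling), with invented network-event features: a supervised classifier learns from human-labeled events, while an unsupervised anomaly detector is handed the same events with no labels at all.

```python
# Minimal sketch: supervised vs. unsupervised learning on hypothetical
# network-event features [bytes sent, session duration (s), destination port].
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier

rng = np.random.default_rng(0)
normal = rng.normal(loc=[500, 30, 443], scale=[100, 10, 1], size=(500, 3))
attacks = rng.normal(loc=[9000, 2, 4444], scale=[500, 1, 1], size=(20, 3))
events = np.vstack([normal, attacks])

# Supervised: every training event carries a human-assigned label.
labels = np.array([0] * 500 + [1] * 20)  # 0 = benign, 1 = malicious
clf = RandomForestClassifier(random_state=0).fit(events, labels)
print(clf.predict([[8800, 3, 4444]]))  # -> [1]: resembles the labeled attacks

# Unsupervised: no labels; the model flags statistical outliers on its own.
iso = IsolationForest(contamination=0.05, random_state=0).fit(events)
print(iso.predict([[8800, 3, 4444]]))  # -> typically [-1], i.e., an anomaly
```

The supervised model can only recognize what its labels describe; the unsupervised one can surface oddities no one thought to label, which is why it suits the lateral-movement cases described above.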
To help discern whether tools are AI or simple rules-based analysis, Kamor suggests CISOs ask their vendors how their product or service handles unexpected outcomes and whether accuracy improves without human involvement. “One can automate AI models to learn from new data as it comes in, which cannot be done with rule-based systems. At the end of the day, an AI model is making predictions based on data it has seen previously. The gain here comes from the automation of the AI model learning from new data that comes in to make predictions on even newer data,” Kamor advises.

That means, ultimately, enterprise teams must test vendor claims and the capabilities of their AI/ML systems. Typically, explains Kamor, that type of quality-assurance testing is conducted in production environments. However, before being deployed into production, the AI/ML needs to be tested through training and validation phases that rely on synthetic and non-production data. “The first stage of testing requires an evaluation of training data after it’s been put through a training environment. During this testing, engineers need to watch for biases that will skew results, produce patterns, and create false correlations,” Kamor advises.

Capgemini Americas’ Singh says each organization is unique and will be quite different from any test or training data provided by a vendor to teach the AI/ML models. Singh advises enterprises to find a methodical approach that will help them qualify the AI/ML vendor’s offering based on measurable business benefits. He recommends that any assessment address the following questions:

•  Does the AI/ML have out-of-the-box integrations with data sources — for instance, identity, endpoint, and network events?
•  Does the AI/ML have out-of-the-box integrations with security information and event management and incident response tooling?
•  How granular is the application programming interface? Does it allow additional data modeling using tools such as GraphQL?
•  Can the AI model explain why it has made a certain decision, or is it a “black box”?
•  Can the AI model provide enough details to help with forensics?

Many of the experts we interviewed suggest security teams would do best to ignore vendor claims regarding AI/ML capabilities altogether. “In most cases, security teams won’t be vetting various types of AI/ML — they will be seeking improved outcomes,” says 451’s Crawford. That means they’ll focus on whether they see a noticeable improvement in the signal-to-noise ratio within the data, so that human analysts get more actionable insights.

“That’s a practical measure that people can use, and to the extent that tools deliver that, then practitioners will see benefit, regardless of the underlying technologies. It’s really about the efficacy and effectiveness of the technology,” he says.
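One way to make that “practical measure” concrete is to track alert precision and volume across comparable windows before and after a tool is introduced. A minimal sketch, assuming analysts’ dispositions (confirmed threat vs. false alarm) are tallied per period; all numbers are illustrative:

```python
# Illustrative signal-to-noise check: compare alert volume and precision
# before and after deploying an AI/ML-assisted triage tool.
def alert_precision(true_positives: int, false_positives: int) -> float:
    """Fraction of raised alerts that analysts confirmed as real threats."""
    return true_positives / (true_positives + false_positives)

# Hypothetical analyst dispositions over two comparable 30-day windows.
before = {"true_positives": 40, "false_positives": 1960}  # 2,000 alerts raised
after = {"true_positives": 38, "false_positives": 342}    #   380 alerts raised

for period, counts in (("before", before), ("after", after)):
    total = counts["true_positives"] + counts["false_positives"]
    print(f"{period}: {total} alerts, precision {alert_precision(**counts):.1%}")
# before: 2000 alerts, precision 2.0%
# after: 380 alerts, precision 10.0%
```

Roughly flat confirmed detections at a fraction of the alert volume is exactly the outcome-based evidence Crawford describes, and it can be measured without ever opening the vendor’s black box.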
Sohrob Kazerounian, AI research lead at Vectra AI, agrees. “The bottom line is the bottom line,” Kazerounian says. “If AI/ML is not materially improving some business objective, then it’s a distraction, not a solution. AI/ML vendors need to have clear outcome-based value propositions, and CISOs need to hold those vendors’ feet to the fire to prove them.”

Kazerounian advises that CISOs focus their testing efforts on outcomes. “Tests should positively validate that successful outcomes are material improvements over whatever they’re replacing. While it isn’t always feasible to test in a production environment, some kind of live-fire exercise will inevitably provide the best results,” he says.

Implementing AI/ML in the Real World
Just as tests of vendors’ AI/ML claims need clear outcomes, so do the pilots and deployments. Tonic.ai’s Kamor says, “Every AI program needs to have a clear directive, and too often, enterprises dive in without a clear vision.”

Capgemini’s Singh agrees that any AI/ML implementation should have clearly defined success criteria — detailed in business terms, not technical terms — such as reduction in manual effort, detection of advanced attacks, reduction in risk, cost avoidance due to early detection, fidelity of event data, or savings in security information and event management (SIEM) storage costs.

And before any implementation, the security team should have clear plans for how the AI/ML deployment will benefit the entire organization, as well as the security team, and should plan for the AI/ML’s ongoing support. “The organization should create a detailed execution plan, understand what is achievable, and have the budget to continue the program — not just budget for the deployment,” Crawford says.

“Be prepared to see what needs adjustment in the model. It’s always exciting for innovators and developers to come up with new approaches, new models, but where they really demonstrate their value is in their proper tuning to a given use case and over time,” Crawford says. For detection, that means tuning to improve attack identification and response, but it also applies to other areas, such as improving entitlement management, vulnerability assessments, and even attack simulations.

Insight’s Rader advises looking at the addition of AI/ML strategically, not tactically. “Does it integrate into my current platform? Is there an API integration? Does this work across my on-prem and cloud workloads? What are the security implications of the product itself? What are the automation capabilities? It’s very probable that the AI/ML features of that shiny, new product are abstracted from the operators on the security team, so the usability and integration features into our current security operations, and the time to value, are key factors,” Rader says.

“Enterprise security teams must also remember that AI/ML systems rely on consistency of data. At times, security vendors make changes that affect the granularity of event data. AI/ML is not a ‘set it and forget it’ tool. It requires re-testing and its own qualifications when data changes,” Capgemini’s Singh says.

Singh says that new tools and conditions can impact an AI/ML model’s learning behavior, and to mitigate the risks associated with such changes, teams should consider implementing AI/ML with a nod to the modern developer’s playbook: implement phases of small changes, then test and release to a staging environment. “This incremental approach not only helps the AI/ML model to learn the new data source, but also helps fine-tune the data sources one at a time. And it creates a good test environment, including tooling to simulate complex attacks, as well as test data that is a representative sample of a real-life production environment,” he adds.
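Singh’s warning about data consistency can be partly automated. A minimal sketch, assuming you retain a reference sample of each event field from the last time the model was qualified (the distributions and threshold here are illustrative): a two-sample Kolmogorov–Smirnov test flags when a vendor-side change has shifted a field’s distribution enough to warrant re-testing.

```python
# Minimal drift check: compare a live feature sample against the reference
# sample captured when the AI/ML model was last qualified.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
reference = rng.lognormal(mean=6.0, sigma=1.0, size=5000)  # field at qualification time
live = rng.lognormal(mean=6.4, sigma=1.0, size=5000)       # same field after a vendor change

statistic, p_value = ks_2samp(reference, live)
if p_value < 0.01:
    print(f"Drift detected (KS={statistic:.3f}): re-qualify the model before trusting it.")
else:
    print("No significant drift detected; the last qualification still holds.")
```

Running a check like this per data source, per release, is one lightweight way to operationalize the “small changes, then test” playbook Singh describes.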

Experts offer other advice for enterprises aiming to avoid missteps. “Organizations that don’t have a mature model development life cycle and an AI risk and governance program can run into AI implementation challenges at various points,” Deloitte’s Bowen says.

Singh points out areas in which organizations may face challenges when implementing AI without a mature model:

•  Insufficient data profiling.
•  Not assembling representative datasets for training models.
•  Not testing multiple model architectures to determine the best performance for a given problem.
•  Using inappropriate performance metrics (for example, trying to measure the “accuracy” of the model when the dataset is highly imbalanced).
•  Skipping the best practice of sufficient quality control and peer review throughout the model development life cycle.

Going forward, as Forrester’s Mellen points out, the investments made by security services providers today in AI/ML will eventually make their way into the products enterprise security teams use. That’s because cybersecurity vendors, especially those with a services arm, are increasingly using AI/ML to determine ways to improve the effectiveness and quality of their analyst response by analyzing the data and determining what data should be brought to the attention of analysts so that they can perform their jobs better.

“We’ve yet to see that really cascade down to the product side of things. I think it’s very interesting because being able to provide things like recommendations to security analysts on what responses they should take could be a very compelling use case that we haven’t explored yet because we’ve been so caught up in the detection aspects,” Mellen says.

Such a future should strike most security professionals as good news: They need as much help — human or machine — as they can get when it comes to making sense of their environments and attack techniques, and figuring out how to make the most effective responses possible. And that help can’t get here soon enough.

About the Author: For more than 20 years, George V. Hulme has written about business, technology, and cybersecurity topics. He currently focuses on cybersecurity and digital innovation. Previously, he was senior editor at InformationWeek magazine, and he has freelanced for many trade and general-interest publications.
ANALYSIS

How Can AI Help in the Fight Against Ransomware?

Less than a quarter of organizations believe they are fully prepared for a ransomware attack, threatening data privacy.

By Maxine Holt, Research Director, Omdia

Individuals are increasingly aware of the importance of data privacy, and governments continue to implement and tighten associated regulations. How successfully are organizations dealing with data privacy? It varies wildly; there are all-too-frequent reports of data privacy failures, often associated with ransomware. A Dark Reading poll that ended in December 2021 found that less than a quarter of organizations believe they are fully prepared for a ransomware attack, leaving the remaining three-quarters highly susceptible, which in turn threatens data privacy.

Ransomware will continue to be a hugely successful method of attack that organizations must defend against, with data privacy regulations a significant part of the equation. Focusing on the information life cycle (create, process, store, transmit, destroy) will help organizations understand what data requires protection and where it resides. Furthermore, classifying data appropriately is important because all data is not equal: Some data will require strong protection, and other data will not. By understanding these nuances, organizations can begin exploring more advanced approaches to ransomware, such as the use of artificial intelligence (AI) to see unseen patterns in the data that may point to a potential incursion or threat.

Enter Deep Learning
Attackers using malware can block access to data and/or systems, encrypt and lock data, or even move company data off-site. Attacks that take place over a keyboard can be particularly difficult to detect and mitigate, as they can dwell over time, appearing innocuous at first as attackers use trusted routes of ingress while moving laterally through a target network. AI techniques such as unsupervised deep learning (DL) can help organizations understand attack targets and vectors by encouraging observability across the data life cycle.
If an organization can detect the wake of activity created by a potential wrongdoer, it stands a good chance of blocking or diverting an incursion before systems can be locked or data encrypted.

Here, AI offers many helpful tools for dealing with malware. Statistical and mathematical machine learning (ML) algorithms like “k-nearest neighbor” and “decision trees” can identify malware payloads and known attack patterns, for example.
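As a deliberately tiny illustration of that statistical approach (the features, values, and labels below are invented for the example), a decision tree learns thresholds over hand-engineered file features that humans have already labeled:

```python
# Toy decision-tree malware classifier over hand-engineered file features:
# [file entropy, count of suspicious API imports, is_packed flag].
from sklearn.tree import DecisionTreeClassifier

features = [
    [4.1, 2, 0], [4.5, 1, 0], [5.0, 3, 0],    # benign samples
    [7.6, 14, 1], [7.9, 11, 1], [7.2, 9, 1],  # known malware samples
]
labels = [0, 0, 0, 1, 1, 1]  # human-assigned: 0 = benign, 1 = malicious

tree = DecisionTreeClassifier(max_depth=2).fit(features, labels)
print(tree.predict([[7.8, 12, 1]]))  # -> [1]: matches known attack patterns
```

The ceiling is visible in the sketch: the tree can only redraw boundaries that the engineered features and labels already express, which is why the next step matters.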
Where AI really steps into the spotlight, however, is with DL neural networks. Unlike statistical and mathematical ML technologies that use known rules (e.g., “this is or is not a piece of malware”) to identify a potential attack, DL technologies can actually deduce the rules themselves. Popular DL algorithms — including convolutional neural networks (CNNs), recurrent neural networks (RNNs), and long short-term memory (LSTM) networks — can parse huge amounts of disparate data to build an understanding of the patterns in that data, patterns that may turn out to represent an attack.
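As a hedged sketch of what that looks like in practice (the article names no framework, so Keras is an assumption here, and the data is synthetic), an LSTM can read sequences of event IDs and learn which sequences precede an incident, with no hand-written rules:

```python
# Minimal LSTM sketch: classify sequences of event IDs (e.g., process or
# syscall events) as benign or attack-related, learning patterns end to end.
import numpy as np
import tensorflow as tf

VOCAB, SEQ_LEN = 1000, 50  # hypothetical event-ID vocabulary and window size

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=VOCAB, output_dim=32),
    tf.keras.layers.LSTM(64),                        # learns temporal structure itself
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P(sequence is malicious)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Synthetic stand-in data; real training would use logged event sequences.
x = np.random.randint(0, VOCAB, size=(256, SEQ_LEN))
y = np.random.randint(0, 2, size=(256, 1))
model.fit(x, y, epochs=1, verbose=0)
print(model.predict(x[:1], verbose=0))  # risk score for one event sequence
```

Nothing in the model spells out what a malicious sequence looks like; given enough representative data, the network derives those rules, which is precisely the property described above.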

Data Governance Is Key
IT and security practitioners considering investing in AI as a means of fighting ransomware must first build an understanding of their entire data landscape as it pertains to data security and privacy. This means building solid metadata defining ownership, access, privacy exposure, locality, and so on. On top of this, the organization must establish a set of governance requirements that span the full information life cycle (create, process, store, transmit, destroy). Fortunately, both within and beyond the confines of the security industry, technology providers are presently laser-focused on helping companies build a consistent view of company operational, system, and analytical data using the concept of a data fabric.

Over time, Omdia expects these metadata efforts to more closely align between security and business practices. At that time, companies will likely provision an AI-capable malware tool in the same way they provision any cloud-native service, by specifying data sources and flipping the “on” switch. Until then, organizations without an existing investment in a data fabric may find themselves somewhat handicapped without the ability to “observe” the entirety of the system of resources they’re seeking to protect. In other words, fighting malware, just like fighting data privacy risks, demands a high degree of data literacy, domain expertise, and governance.

About the Author: Maxine Holt leads Omdia’s cybersecurity research, developing a comprehensive research program to support vendor, service provider, and enterprise clients. Maxine is a regular speaker at events and writes a monthly Computer Weekly article covering various aspects of information security.
ANALYSIS

Criminals Use Deepfake Videos to Interview for Remote Work

The latest evolution in social engineering could put fraudsters in a position to commit insider threats.

By Ericka Chickowski, Contributing Writer, Dark Reading

Security experts are on the alert for the next evolution of social engineering in business settings: deepfake employment interviews. The latest trend offers a glimpse into the future arsenal of criminals who use convincing, faked personae against business users to steal data and commit fraud.

The concern follows a recent advisory from the FBI Internet Crime Complaint Center (IC3), which warned of increased activity from fraudsters trying to game the online interview process for remote-work positions. The advisory said that criminals are using a combination of deepfake videos and stolen personal data to misrepresent themselves and gain employment in a range of work-from-home positions, including information technology, computer programming, database maintenance, and software-related job functions.

Federal law-enforcement officials said in the advisory that they’ve received a rash of complaints from businesses.

“In these interviews, the actions and lip movement of the person seen interviewed on-camera do not completely coordinate with the audio of the person speaking,” the advisory said. “At times, actions such as coughing, sneezing, or other auditory actions are not aligned with what is presented visually.”

The complaints also noted that criminals were using stolen personally identifiable information (PII) in conjunction with these fake videos to better impersonate applicants, with later background checks digging up discrepancies between the individual who interviewed and the identity presented in the application.

Potential Motives of Deepfake Attacks
While the advisory didn’t specify the motives for these attacks, it did note that the positions applied for by these fraudsters were ones with some level of corporate access to sensitive data or systems.
Thus, security experts believe one of the most obvious goals in deepfaking one’s way through a remote interview is to get a criminal into a position to infiltrate an organization for anything from corporate espionage to common theft.

“Notably, some reported positions include access to customer PII, financial data, corporate IT databases and/or proprietary information,” the advisory said.

“A fraudster that hooks a remote job takes several giant steps toward stealing the organization’s data crown jewels or locking them up for ransomware,” says Gil Dabah, co-founder and CEO of Piiano. “Now they are an insider threat and much harder to detect.”

Additionally, short-term impersonation might also be a way for applicants with a “tainted personal profile” to get past security checks, says DJ Sampath, co-founder and CEO of Armorblox. “These deepfake profiles are set up to bypass the checks and balances to get through the company’s recruitment policy,” he says.

There’s potential that, in addition to gaining access to steal information, foreign actors could be attempting to deepfake their way into US firms to fund other hacking enterprises.

“This FBI security warning is one of many that have been reported by federal agencies in the past several months. Recently, the US Treasury, State Department, and FBI released an official warning indicating that companies must be cautious of North Korean IT workers pretending to be freelance contractors to infiltrate companies and collect revenue for their country,” explains Stuart Wells, CTO of Jumio. “Organizations that unknowingly pay North Korean hackers potentially face legal consequences and violate government sanctions.”

What This Means for CISOs
A lot of the deepfake warnings of the past few years have been primarily around political or social issues. However, this latest evolution in the use of synthetic media by criminals points to the increasing relevance of deepfake detection in business settings.

“I think this is a valid concern,” says Dr. Amit Roy-Chowdhury, professor of electrical and computer engineering at the University of California at Riverside. “Doing a deepfake video for the duration of a meeting is challenging and relatively easy to detect. However, small companies may not have the technology to be able to do this detection and hence may be fooled by the deepfake videos. Deepfakes, especially images, can be very convincing and, if paired with personal data, can be used to create workplace fraud.”

Sampath warns that one of the most disconcerting parts of this attack is the use of stolen PII to help with the impersonation.

“As the prevalence of the Dark Net with compromised credentials continues to grow, we should expect these malicious threats to continue in scale,” he says. “CISOs have to go the extra mile to upgrade their security posture when it comes to background checks in recruiting. Very often these processes are outsourced, and a tighter procedure is warranted to mitigate these risks.”

Future Deepfake Concerns
Prior to this, the most public examples of criminal use of deepfakes in corporate settings have been as a tool to support business email compromise (BEC) attacks. For example, in 2019 an attacker used deepfake software to impersonate the voice of a German company’s CEO to convince another executive at the company to urgently send a wire transfer of $243,000 in support of a made-up business emergency. More dramatically, last fall a criminal used deepfake audio and forged email to convince an employee of a United Arab Emirates company to transfer $35 million to an account owned by the bad guys, tricking the victim into thinking it was in support of a company acquisition.

According to Matthew Canham, CEO of Beyond Layer 7 and a faculty member at George Mason University, attackers are increasingly going to use deepfake technology as a creative tool in their arsenals to help make their social engineering attempts more effective. “Synthetic media like deepfakes is going to just take social engineering to another level,” says Canham, who last year at Black Hat presented research on countermeasures to combat deepfake technology.

The good news is that researchers like Canham and Roy-Chowdhury are making headway on detection and countermeasures for deepfakes. In May, Roy-Chowdhury’s team developed a framework for detecting manipulated facial expressions in deepfaked videos with unprecedented levels of accuracy.
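The advisory’s observation that lip movement fails to track the audio also hints at how automated screening can work. The following toy sketch is purely illustrative and is not the FBI’s or any researcher’s published method: given two precomputed per-frame signals, mouth opening from a face tracker and audio energy from the call recording, a weak correlation between them is a crude red flag.

```python
# Toy audio-visual sync check: correlate per-frame mouth opening with
# per-frame audio energy; live speech tends to correlate, crude fakes may not.
import numpy as np

def av_sync_score(mouth_opening: np.ndarray, audio_energy: np.ndarray) -> float:
    """Pearson correlation between two per-frame signals from the same video."""
    return float(np.corrcoef(mouth_opening, audio_energy)[0, 1])

rng = np.random.default_rng(42)
audio = np.abs(np.sin(np.linspace(0, 20, 300))) + rng.normal(0, 0.05, 300)
real_mouth = audio + rng.normal(0, 0.1, 300)  # lips track the speech envelope
fake_mouth = rng.random(300)                  # lips move independently of audio

print(f"genuine interview: {av_sync_score(real_mouth, audio):.2f}")  # close to 1.0
print(f"suspect deepfake:  {av_sync_score(fake_mouth, audio):.2f}")  # near 0.0
```

Production detectors are far more sophisticated than a single correlation, but the sketch shows why the advisory singles out audio-visual mismatch as a tell.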
He believes that new methods of detection like this can be put into use relatively quickly by the cybersecurity community. “I think they can be operationalized in the short term — one or two years — with collaboration with professional software development that can take the research to the software product phase,” he says.

About the Author: Ericka Chickowski specializes in coverage of information technology and business innovation. She has focused on information security for the better part of a decade and regularly writes about the security industry as a contributor to Dark Reading.
ANALYSIS

EU Debates AI Act to Protect Human Rights, Define High-Risk Uses

The commission argues that legislative action is needed to ensure a well-functioning market for AI systems that balances benefits and risks.

By Nathan Eddy, Contributing Writer, Dark Reading

The European Commission (EC) is currently debating new rules and actions for trust and accountability in artificial intelligence (AI) technology through a legal framework called the EU AI Act. Its aim is to promote the development and uptake of AI while addressing potential risks some AI systems can pose to safety and fundamental rights.

While most AI systems will pose low to no risk, the EU says, some create dangers that must be addressed. For example, the opacity of many algorithms may create uncertainty and hamper effective enforcement of existing safety and rights laws. The EC argues that legislative action is needed to ensure a well-functioning internal market for AI systems where both benefits and risks are adequately addressed.

“The EU AI Act aims to be a human-centric legal-ethical framework that intends to safeguard and protect human rights and fundamental freedoms from violations of these rights and freedoms by algorithms and smart machines,” says Mauritz Kop, Transatlantic Technology Law Forum Fellow at Stanford Law School and strategic intellectual property lawyer at AIRecht.

The right to know whether you are dealing with a human or a machine — which is becoming increasingly difficult as AI grows more sophisticated — is part of that vision, he explains.
Kop notes that AI is now mostly unregulated, except for a few sector-specific rules. The act aims to address the legal gaps and loopholes by introducing a product safety regime for AI. “The risks are too high for nonbinding self-regulation by companies alone,” he says.

Effects on AI Innovation
Kop admits that regulatory conformity and legal compliance will be a burden, especially for early AI startups developing high-risk AI systems. He points to empirical research showing that the GDPR — while preserving privacy, data protection, and data security — had a negative effect on innovation.

Risk classification for AI is based on the intended purpose of the system, in line with existing EU product safety legislation. Classification depends on the function the AI system performs and on the specific purpose and modalities for which the system is used.

“The legal uncertainty surrounding [regulation] and the lack of budget to hire specialized lawyers or multidisciplinary teams still are significant barriers for a flourishing AI startup and scale-up ecosystem,” Kop says. “The question remains whether the AI Act will improve or worsen the startup climate in the EU.”

The EC will determine which AI gets classified as “high risk” using criteria that are still under debate, creating a list of examples of high-risk systems to help guide judgment.

“It will be a dynamic list that contains various types of AI applications used in certain high-risk industries, which means the rules get stricter for riskier AI in healthcare and defense than they are for AI apps in tourism,” Kop says. “For instance, medical AI is [classified as] high risk to prevent direct harm to patients due to AI errors.”

He notes there is still controversy about the criteria and definition of AI that the draft uses. Some commentators argue it should be more technology-specific, aimed at certain riskier types of machine learning, such as deep unsupervised learning or deep reinforcement learning. “Others focus more on the intent of the system, such as social credit scoring, instead of potential harmful results, such as neuro-influencing,” Kop adds. “A more detailed classification of what ‘risk’ entails would thus be welcome in the final version of the act.”

Facial Recognition as a High-Risk Technology
Joseph Carson, chief security scientist and advisory CISO at Delinea, participated in several of the talks around the act, including as a subject matter expert on the use of AI in law enforcement, articulating concerns around security and privacy.

The EU AI Act, he says, will mainly affect organizations that already collect and process personally identifiable information; it will therefore impact how they use advanced algorithms in processing that data.

“It is important to understand the risks if no regulation or act is in place and what the possible impact [is] if organizations abuse the combination of sensitive data and algorithms,” Carson says. “The future of the Internet is a scary place, and the enforcement of the EU AI Act allows us to embrace the future of the Internet using AI with both responsibility and accountability.”

Regarding facial recognition, he says the technology should be regulated and controlled. “It has many amazing uses in society, but it must be something you opt in and agree to use; citizens must have a choice,” he says. “If no act is in place, we will see a significant increase in deepfakes that will spiral out of control.”

Malin Strandell-Jansson, senior knowledge expert at McKinsey & Co., says facial recognition is one of the most debated issues in the draft act, and the final outcome is not yet clear.

In its draft format, the AI Act strictly prohibits the use of real-time remote biometric identification in publicly accessible spaces for law enforcement purposes, as it poses particular risks for fundamental rights — notably human dignity, respect for private and family life, protection of personal data, and nondiscrimination.

Strandell-Jansson points out a few exceptions, including use for law enforcement purposes in the targeted search for specific potential victims of crime, including missing children; the response to the imminent threat of a terror attack; or the detection and identification of perpetrators of serious crimes.

“Regarding private businesses, the AI Act considers all emotion recognition and biometric categorization systems to be high-risk applications if they fall under the use cases identified as such — for example, in the areas of employment, education, law enforcement, migration, and border control,” she explains. As such, potential providers would have to subject such AI systems to transparency and conformity obligations before putting them on the market in Europe.

The Time to Act on AI Is Now
Dr. Sohrob Kazerounian, AI research lead at Vectra, an AI cybersecurity company, says the need to create a regulatory framework for AI has never been more pressing.

“AI systems are rapidly being integrated into products and services across wide-ranging markets,” he says. “Yet the trustworthiness and interpretability of these systems can be rather opaque, with poorly understood risks to users and society more broadly.”
EU DEBATES AI ACT TO PROTECT HUMAN RIGHTS, DEFINE HIGH-RISK USES

protections may be relevant, applications that use AI are as well as to their manipulation,” she says. “In addition, cations of statistical models and machine learning sys-
sufficiently different enough from traditional consumer there are serious doubts as to the scientific nature and tems within these areas should receive heavy regulatory
products that they necessitate fundamentally new legal reliability of such systems.” oversight,” he adds.
mechanisms, he adds. The bill would require people to be notified when they
The overarching goal of the bill is to anticipate and miti- encounter deepfakes, biometric recognition systems, or Possible Groundwork for Similar US Law
gate the most critical risks resulting from the use and failure AI applications that claim to be able to read their emo- It is unclear what this act could mean for a similar law in
of AI, with actions ranging from banning systems deemed tions. Although that’s a promising step, it raises a couple the US, Kazerounian says, noting that it has now been
to have “unacceptable risk” altogether to heavy regulation of potential issues. more than half a decade since the passing of GDPR, the
of “high-risk” systems. Another, albeit less-noted, conse- Overall, Kazerounian says it is “undoubtedly” a good EU law on data regulation, without any similar federal
quence of the framework is that it could provide clarity and start to require increased visibility for consumers when laws following in the US — yet.
certainty to markets about what regulations will exist and they are being classified by biometric data and when they “Nevertheless, GDPR has undoubtedly influenced the
how they will be applied. are interacting with AI-generated content rather than real behavior of multinational corporations, which have ei-
“As such, the regulatory framework may in fact result in humans or real content. ther had to fracture their policies around data protec-
increased investment and market participation in the AI “Unfortunately, the AI act specifies a set of application tions for EU and non-EU environments or simply have a
sector,” Kazerounian says. areas within which the use of AI would be considered single policy based on GDPR applied globally,” he says.
high-risk, without necessarily discussing the risk-based “In any case, if the US decides to propose legislation on
Limits for Deepfakes and Biometric criteria that could be used to determine the status of fu- the regulation of AI, at a minimum it will be influenced by
Recognition ture applications of AI,” he says. “As such, the seemingly the EU act.”
By addressing specific AI use cases, such as deepfakes ad-hoc decisions about which application areas are con-
and biometric or emotional recognition, the AI Act is hop- sidered high-risk simultaneously appear to be too specif- About the Author: Nathan Eddy is a freelance journalist and
ing to ameliorate the heightened risks such technologies ic and too vague.” award-winning documentary filmmaker specializing in IT security,
pose, such as violation of privacy, indiscriminate or mass Current high-risk areas include certain types of biomet- autonomous vehicle technology, customer experience technology,
surveillance, profiling and scoring of citizens, and manip- ric identification, operation of critical infrastructure, em- and architecture and urban planning. A graduate of Northwestern
ulation, Strandell-Jansson says. ployment decisions, and some law enforcement activi- University’s Medill School of Journalism, Nathan currently lives in
“Biometrics for categorization and emotion recognition ties, he explains. Berlin, Germany.
have the potential to lead to infringements of people’s “Yet it isn’t clear why only these areas were considered
privacy and their right to the protection of personal data high-risk and furthermore doesn’t delineate which appli-

September 2022 16
Previous Next
SPONSORED CONTENT

DEEP INSTINCT PERSPECTIVES

Harnessing the Power of Deep Learning to Speed Up Threat Detection

AI is the future of security — but not all AI is created equal.

By Chuck Everette, Director of Cybersecurity Advocacy, Deep Instinct

The pace of the cybersecurity arms race is accelerating. Attacks are increasing exponentially, and breaches are on the rise. As soon as a defensive measure becomes mainstream, the threat actors develop a new workaround.

It’s becoming clear that artificial intelligence (AI) is the future of cybersecurity tools. But to stay ahead of the attack groups, AI-based solutions need to rapidly evolve and be positioned to predict and prevent attacks rather than react to them. A form of AI known as deep learning (DL) has emerged as the most effective way of delivering a prevention-first approach to take back the advantage from attackers.

So, what is deep learning, and how does it differ from other forms of AI?

AI Defined
The term AI has become an overused sales buzzword, muddying the waters for enterprises seeking the best solution for their needs. While often applied indiscriminately, AI is an umbrella term that can be split into three distinct types: basic AI, reactive machine learning (ML), and proactive DL.
In its simplest form, basic AI involves human-created algorithms relying on the use of signatures and scripts. This form requires constant human intervention to stay up to date and can be applied only to recognizing and flagging known threat patterns and behaviors. Basic AI is commonly deployed in tools such as antivirus software, and it’s prone to a high false-positive rate and critical delays due to its need to check threat signatures against a cloud database.

The Strengths and Weaknesses of Reactive Machine Learning
ML takes a step beyond basic AI by enabling greater behavioral analysis, pattern recognition, and event correlation, and when applied to cybersecurity solutions it most often results in a reactive security posture.

With ML, the algorithms are trained on only 1% to 2% of all available data, which must be continually updated with manually engineered datasets that enable them to examine behaviors on the endpoint after malware execution. The fully human-led training process used for standard machine learning requires a high degree of expertise, can be quite time-intensive, and can potentially introduce human bias.

ML-powered tools have come to be particularly useful in reactive, post-execution threat detection and analytics, as they can do the heavy lifting for security operations center teams and comb through large volumes of data far faster than any human.

However, the technology still has some serious shortcomings. Manual training can only use currently available data from known attacks and behaviors. Machine learning requires samples of existing threats so that humans can manually label them and train the algorithm to detect and respond to threats. This is ML’s Achilles’ heel: The vast majority of newer threats are missed because ML has not trained on them in the past — an extremely dangerous blind spot. Furthermore, false-positive rates are usually around 1% to 2% — and this becomes a huge problem when there are tens of thousands of alerts a day.
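Those numbers compound quickly. A back-of-the-envelope sketch (the 20,000-alerts-a-day volume is illustrative, standing in for the “tens of thousands” cited above):

```python
# Why a 1%-2% false-positive rate hurts at SOC scale (volumes illustrative).
daily_alerts = 20_000
for fp_rate in (0.02, 0.01, 0.001):  # 2%, 1%, and 0.1% for contrast
    false_alarms = daily_alerts * fp_rate
    print(f"{fp_rate:.1%} FP rate -> {false_alarms:,.0f} false alarms per day")
# 2.0% FP rate -> 400 false alarms per day
# 1.0% FP rate -> 200 false alarms per day
# 0.1% FP rate -> 20 false alarms per day
```

Hundreds of false alarms a day must each be triaged by a human analyst, which is how alert fatigue takes hold.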
But, most crucially, ML is a firmly reactive application of AI. Threats need to have actively executed within the network environment before they can be detected. Considering that the fastest ransomware can begin spreading and damaging the network in a matter of seconds, this reactive stance is increasingly not fast enough to stop most attacks before the attackers get inside an environment and start to move laterally, thereby creating backdoors, stealing data, and causing other damage.

The Deep Learning Difference
As the most advanced form of AI, deep learning (DL) is extremely fast at detecting and preventing these threats. DL takes a proactive approach to security, focusing on identifying incoming threats the moment they hit the environment and blocking them before they can execute.
This proactive approach is facilitated by a more sophisticated application of AI. Rather than being trained on data that has already been manually labeled as benign or malicious, DL solutions are fed 100% raw data, and they identify the correct patterns independently.

This results in a large, multilayered model that utilizes all available data rather than a human-determined subset. A properly trained DL solution is, therefore, less affected by bias, and it gains far greater pattern recognition than a standard ML solution. The more organic training process results in a greater degree of accuracy and renders deep learning more likely to identify attack techniques and tactics missed by ML models.
tactics missed by ML models. For more, visit http://www.deepinstinct.com.
Crucially, this means a much greater ability to identify
zero-day threats. Even a DL solution that has not been
updated for the better part of a year can identify and block
a previously unknown threat.
Alongside this, DL-based cybersecurity solutions can op-
erate at a speed and with accuracy that can’t be touched
by a ML-based solution. The best DL-powered solutions
can identify threats in less than 20 milliseconds, far faster
than even the swiftest known ransomware can execute.
The false-positive rate can also be as little as 0.1%.
As the pace of the cyber arms race continues to accel-
erate, proactive, prevention-first security will be increas-

September 2022 19
Previous

You might also like