
A Model for Human and Machine Interaction: Human-Machine Teaming Grows Up



Security operations centers (SOCs) are struggling to keep up with attackers, and artificial intelligence
(AI) has failed to deliver significant improvements. The industry has been successful at applying AI to
malware detection and user and entity behavior analytics (UEBA) using deep neural networks and
anomaly detection. But other core SOC jobs such as monitoring, triage, scoping, and remediation
remain highly manual. Some repetitive and low-value tasks can be assisted with automation, but
tasks that require analysis and creativity are hard to capture in code. Even worse: Imagine trying to
automate the investigation of an undiscovered attack technique.

Automation and current AI solutions depend on a human first observing and understanding a threat, then building a model or writing code. That time gap between the human observing a phenomenon and the machine helping is the reason attackers often have the upper hand. To get ahead, we need AI systems that learn from and interact directly with practitioners in the SOC.

The idea behind human-machine teaming (HMT; see [1] and [2]) is to put the human in the AI algorithm's loop. In a SOC context, the human has the intuition to spot a new attack technique and the creativity to investigate it using the organization's tools. Using the human's input, the machine gathers information and presents it back as a summary to manage the human's cognitive workload. As a result of this interaction, the machine learns how to proceed in new scenarios, while the human continues to adapt, focusing on higher-value tasks.
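
To make this loop concrete, here is a minimal sketch in Python. It is not any product's implementation; every name in it (Evidence, CaseMemory, gather_evidence, summarize, and the sample data sources) is a hypothetical stand-in for whatever telemetry, summarization, and learning components a SOC platform provides.

```python
# A hypothetical sketch of the human-machine teaming loop described above.
# None of these names correspond to a real product API; they only illustrate
# the division of labor between analyst and machine.

from dataclasses import dataclass, field


@dataclass
class Evidence:
    source: str
    finding: str


@dataclass
class CaseMemory:
    """Stand-in for whatever the machine uses to learn from past interactions."""
    feedback_log: list = field(default_factory=list)

    def record(self, hypothesis: str, evidence: list, verdict: str) -> None:
        # A real system would update a model here; the sketch just stores it.
        self.feedback_log.append((hypothesis, [e.finding for e in evidence], verdict))


def gather_evidence(hypothesis: str, data_sources: dict) -> list:
    """Machine: query each tool for data relevant to the analyst's hypothesis."""
    return [Evidence(name, query(hypothesis)) for name, query in data_sources.items()]


def summarize(evidence: list) -> str:
    """Machine: condense raw findings to manage the analyst's cognitive workload."""
    return "; ".join(f"{e.source}: {e.finding}" for e in evidence)


# Usage: the human supplies the hypothesis and the verdict, the machine does the rest.
memory = CaseMemory()
data_sources = {
    "edr": lambda h: f"3 hosts show activity matching '{h}'",
    "proxy": lambda h: f"no outbound traffic consistent with '{h}'",
}

hypothesis = "lateral movement via remote service creation"          # human intuition
evidence = gather_evidence(hypothesis, data_sources)                 # machine gathers
print(summarize(evidence))                                           # machine summarizes
memory.record(hypothesis, evidence, verdict="needs deeper scoping")  # machine learns
```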

Research shows that unsupervised anomaly detection can be improved by asking the human to
examine alerts when classification confidence is low. This approach improves detection by 4X and
reduces false positives by 5X [3]. More importantly, the system teaches itself to address adversaries’
changing tactics.
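
As a rough illustration of that idea, loosely in the spirit of [3] but not a reproduction of the authors' system, the sketch below ranks events with an unsupervised detector, asks a simulated analyst to label the most anomalous ones, and then keeps routing the supervised model's least-confident predictions back to the human for labeling and retraining. The data, the ask_analyst stub, and all thresholds are invented for illustration.

```python
# Toy sketch of confidence-driven human labeling, loosely in the spirit of [3].
# The data, the ask_analyst stub, and the thresholds are all invented.

import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier

rng = np.random.default_rng(0)
benign = rng.normal(size=(480, 8))                 # stand-in for normal event features
malicious = rng.normal(loc=4.0, size=(20, 8))      # stand-in for attack-like events
X = np.vstack([benign, malicious])
truth = np.array([0] * 480 + [1] * 20)             # hidden ground truth for the stub


def ask_analyst(i: int) -> int:
    """Stand-in for the human analyst's verdict (1 = malicious, 0 = benign)."""
    return int(truth[i])


# 1. Unsupervised pass: rank events by anomaly score and have the analyst
#    label only the most anomalous ones (the expensive human step).
iso = IsolationForest(random_state=0).fit(X)
ranked = np.argsort(iso.score_samples(X))          # lowest score = most anomalous
labeled = [int(i) for i in ranked[:40]]
y = {i: ask_analyst(i) for i in labeled}

# 2. Supervised pass: train on the analyst's labels, then repeatedly send the
#    least-confident predictions back to the human and retrain.
for _ in range(3):
    clf = RandomForestClassifier(random_state=0).fit(X[labeled], [y[i] for i in labeled])
    p_malicious = clf.predict_proba(X)[:, 1]
    uncertain = np.argsort(np.abs(p_malicious - 0.5))[:10]
    for i in (int(j) for j in uncertain):
        if i not in y:
            y[i] = ask_analyst(i)
            labeled.append(i)

print(f"{len(labeled)} events labeled by the analyst out of {len(X)}")
```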

Our assessment of the current SOC tools landscape shows that several products put the human in the loop, but very few empower the human to perform high-order cognitive tasks. To understand where we stand as an industry and where the gap is, we clustered tools into four groups.

Figure: The HMT maturity model. Most cybersecurity products today deliver HMT1 and HMT2 capabilities; McAfee Investigator delivers HMT3, and our engineers are working toward HMT4.

On the vertical axis, we have ascending levels of the cognitive tasks that humans bring to the team, while on the horizontal axis we have machine capabilities. An assumption of this model is that a human cannot exercise high-order tasks if she also has to perform low-level functions, much like Maslow's hierarchy of needs in psychology. As the machine starts to interact with the human at a higher level of cognition, the team becomes more effective and the degree of human-machine teaming increases from HMT0 to HMT4.

Most of the products in the industry today revolve around the first two levels of human-machine teaming, HMT1 and HMT2. At these levels, humans interact with products by analyzing data and giving explicit orders on how to drill down and gather additional data. In some products, humans can elevate their work by receiving insights and applying their intuition and context to them.
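
For reference, the sketch below restates the levels in code. The HMT1 through HMT4 descriptions paraphrase this paragraph and the ones that follow; the HMT0 wording is an assumption, since the article names that level without defining it.

```python
# A compact restatement of the maturity levels. Descriptions paraphrase this
# article; the HMT0 wording is an assumption (the level is named but not defined).

from enum import Enum


class HMTLevel(Enum):
    HMT0 = "No teaming: monitoring and triage are performed manually (assumed)"
    HMT1 = "Human analyzes data and gives the machine explicit drill-down orders"
    HMT2 = "Machine surfaces insights; human applies intuition and context to them"
    HMT3 = "Machine accepts directional feedback, e.g. 'find evidence of lateral movement'"
    HMT4 = "Machine learns by observing the human, e.g. which alerts get dismissed"


for level in HMTLevel:
    print(f"{level.name}: {level.value}")
```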

What is clearly missing are products that can take directional feedback such as “Get me evidence that supports potential lateral movement on this case.” Also missing are products that can learn by observing the human at work, for instance by learning to dismiss the alerts that humans have investigated and dismissed in the past.
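
As one illustration of that second gap, the sketch below trains a classifier on alerts an analyst has already triaged and uses it to down-rank new alerts that resemble ones the analyst has historically dismissed. It is a toy example, not a description of any shipping product; the alert features, sample data, and routing threshold are all invented.

```python
# Toy sketch of "learning by observing the human": train on past triage
# decisions and down-rank new alerts that look like previously dismissed ones.
# The features, sample data, and 0.9 threshold are invented for illustration.

from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

# Past alerts with the analyst's recorded outcome (1 = dismissed, 0 = escalated).
history = [
    ({"rule": "psexec_use", "severity": 3, "host_role": "admin_jump"}, 1),
    ({"rule": "psexec_use", "severity": 3, "host_role": "workstation"}, 0),
    ({"rule": "rare_dns", "severity": 2, "host_role": "server"}, 1),
    ({"rule": "rare_dns", "severity": 2, "host_role": "workstation"}, 1),
    ({"rule": "mimikatz_sig", "severity": 5, "host_role": "workstation"}, 0),
    ({"rule": "mimikatz_sig", "severity": 5, "host_role": "server"}, 0),
]

vec = DictVectorizer()
X = vec.fit_transform([features for features, _ in history])
y = [outcome for _, outcome in history]
model = LogisticRegression().fit(X, y)

# For a new alert: if the model is confident the analyst would dismiss it,
# route it to a low-priority queue instead of interrupting the human.
new_alert = {"rule": "rare_dns", "severity": 2, "host_role": "server"}
p_dismiss = model.predict_proba(vec.transform([new_alert]))[0, 1]
queue = "low-priority review" if p_dismiss > 0.9 else "analyst triage"
print(f"P(dismiss) = {p_dismiss:.2f} -> route to {queue}")
```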

At McAfee, we are using this HMT maturity model as a guide to building better features and tools for the SOC. We recently launched McAfee Investigator [4] to help triage alerts faster and more effectively. Investigator, which uses a question-answering approach to leverage expert knowledge [5], can take directional feedback from the human to pivot an investigation (HMT3). Our goal is to develop Investigator to the point where it can learn directly from practitioners (HMT4).

Learn more about human-machine teaming here.

[1] S. Grobman, “Why Human-Machine Teaming Will Lead to Better Security Outcomes,” 13 July 2013. [Online]. Available:
https://securingtomorrow.mcafee.com/executive-perspectives/human-machine-teaming-will-lead-better-security-outcomes/

[2] B. Kay, “News from Black Hat: Humans Collaborate and Team with Machines to Work Smarter,” 25 July 2017. [Online]. Available: https://securingtomorrow.mcafee.com/business/news-black-hat-humans-collaborate-team-machines-work-smarter/

[3] K. Veeramachaneni, I. Arnaldo and V. Korrapati, “AI^2: Training a big data machine to defend,” IEEE 2nd International Conference on Big Data Security on Cloud, 2016.

[4] “McAfee Investigator,” [Online]. Available: https://www.mcafee.com/us/products/investigator.aspx

[5] F. M. Cuenca-Acuna and I. Valenzuela, “The Need for Investigation Playbooks at the SOC,” 2017. [Online]. Available:
https://www.sans.org/summit-archives/file/summit-archive-1496695240.pdf

McAfee does not control or audit third-party benchmark data or the websites referenced in this document. You should visit
the referenced website and confirm whether referenced data is accurate.

McAfee technologies’ features and benefits depend on system configuration and may require enabled hardware, software,
or service activation. Learn more at mcafee.com. No computer system can be absolutely secure.


Article Link: https://securingtomorrow.mcafee.com/business/model-human-machine-interaction-human-machine-teaming-grows/
