Conference Paper · July 2016
Source: https://www.researchgate.net/publication/300372121


Cyber-security: Role of Deception in Cyber-Attack Detection

Palvi Aggarwal¹*, Cleotilde Gonzalez², Varun Dutt¹

¹ Applied Cognitive Science Laboratory, Indian Institute of Technology Mandi, India
² Dynamic Decision Making Laboratory, Carnegie Mellon University, Pittsburgh, USA

palvi_aggarwal@students.iitmandi.ac.in, coty@cmu.edu, varun@iitmandi.ac.in

Abstract. Cyber-attacks are increasing in the real world and cause widespread damage to cyber-infrastructure and loss of information. Deception, i.e., actions to promote beliefs in things that are not true, could be a way of countering cyber-attacks. In this paper, we propose a deception game, which we use to evaluate the decision making of a hacker in the presence of deception. In an experiment using the deception game, we analyzed the effect of two between-subjects factors on hackers' decisions to attack a computer network (N = 100 participants): the amount of deception used and the timing of deception. The amount of deception was manipulated at 2 levels: low and high. The timing of deception was manipulated at 2 levels: early and late. Results revealed that in the late and high deception conditions, the proportion of not-attack actions by hackers was higher. Our results suggest that deception acts as a deterrence strategy against hackers.

Keywords: Deception · Cyber-attacks · IBL · Security Analyst · Cyber Defence

1 Introduction

The increasing reliance of our information-age industries, governments, and economies on cyber infrastructure makes them more and more vulnerable to cyber-attacks. In their most disruptive form, cyber-attacks target the enterprise, military, government, or other infrastructural resources of a country and its citizens. The volume and sophistication of cyber threats (malicious hacking, cyber warfare, cyber espionage, cyber terrorism, and cyber-crime) are increasing exponentially and pose potent threats to the cyber world. According to Trustwave's 2015 Global Security Report, 98% of tested web applications were found vulnerable to cyber-attack. According to the Department of Business, Innovation & Skills' 2015 security survey, 90% of large organizations and 74% of small organizations suffered security breaches [1].

* Palvi Aggarwal, ACS Lab, IIT Mandi, India.
On the other hand, cyber criminals get smarter day by day. They find new ways to get into our systems and damage or steal information in less time than we expect. Modern cyber-attacks on critical cyber infrastructure reveal the urgent need for enhanced cyber security. As cyber threats grow, so must the security techniques that neutralize them. Various security solutions are available to defend against cyber-attacks, but they are not capable enough to prevent zero-day attacks [2]. It is therefore doubtful that prevention-only security mechanisms will work effectively. Cyber-security teams must create a real-time attack-detection environment to counter a frequently changing cyber-attack landscape. Deception tools can play a critical role in such scenarios.
Deception refers to an interaction between two people or parties, a target and a deceiver, in which the deceiver effectively causes the target to believe as true a particular false description of reality, with the objective of causing the target to act in a way that benefits the deceiver [3]. Deception has been used as a strategy in war zones for many years [4] and by cyber attackers in the cyber world [5]. Cyber attackers use various techniques for deceptive purposes, such as changing malware signatures, concealing code and logic, encrypting exploits, and social engineering (e.g., deceiving help-desk employees into installing malicious code or divulging credentials). However, deception can also be used as a line of defense against cyber attackers. Kevin Mitnick, a famous hacker, wrote in "The Art of Deception" that if deception can be used for cyber-attack, it can also be used for defense [6].
Deception strategies use feints and deceit to thwart an attacker's cognitive processes, delay attack activities, and disrupt the breach process. Deception is achieved through misdirection, fake responses, and obfuscation [7]. These techniques gain the attacker's trust in the networks, data, applications, and systems they interact with during attack execution. Security experts use honeypots and honeynets to gather intelligence about attackers. However, the use of deception can be extended beyond detection toward prevention. To improve a deception strategy, the question is not whether to use deception, but whether the timing and the amount of deception matter. In this paper, we analyze the appropriate timing and amount of deception to use against the attacker.
One way to study cyber security is through a non-cooperative dynamic game, with or without complete information, as described in behavioral game theory (BGT) [8, 9, 10, 11, 12]. A non-cooperative dynamic game consists of two or more players, a set of actions, the outcome of each player interaction, and the game's information structure [13]. Alpcan and Başar [14] have provided real-world abstractions of several complex security games with reduced complexity (actions and states) that preserve the important dynamics communicated by these games.
Current game-theoretic approaches that study the interaction between hackers and analysts have disregarded the role of deception as a strategy to counter cyber-attacks [15]. Deception, when used in cyber security, refers to a strategy used by the analyst to mislead the hacker into taking an action that will help in capturing the hacker. Deception has been studied in a game-theoretic framework using the Honeypot Game HG(n, k) [16], whose authors used extensive-form games of imperfect information to analyze deception strategies in honeynets and to compute mixed-strategy Nash equilibria. In this paper, we use a deception game to analyze the effect of the amount and timing of deception on the attacker's decision to attack or not attack the network. In this game, the hacker's action is abstracted as a choice between an "attack" action and a "not-attack" action. An attack action means attacking a regular webserver or a honeypot webserver using cyber threats, whereas a not-attack action means leaving the system alone.
In addition to mathematically derived Nash equilibria [13], the literature has shown Instance-Based Learning Theory (IBLT) [8, 11, 17, 18] to be an accurate account of human decisions in BGT situations involving hackers and analysts [9, 10, 11]. In such situations, hackers and analysts playing a game against each other possess cognitive limitations and rely on the recency and frequency of available information to make decisions. Therefore, applying IBLT to the hacker's experiential decisions in a deception game will help explain how these decisions are affected by the amount and timing factors, and will help improve current technical solutions to provide better decision support to analysts in their job.
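The recency- and frequency-based retrieval that IBLT posits can be reduced to a few lines. The following is an illustrative sketch in the spirit of Gonzalez and Dutt (2011), not the authors' model; the instance layout and the decay and noise parameters (d, tau) are our assumptions.

```python
import math

def blended_value(instances, t_now, d=0.5, tau=0.25):
    """Blended value of one option from its stored experiences.

    instances: list of (outcome, occurrence_times) pairs. An outcome's
    activation grows with frequency and recency (power-law decay d), and
    retrieval probabilities come from a soft-max over activations.
    """
    acts = [math.log(sum((t_now - t) ** (-d) for t in times))
            for _, times in instances]
    weights = [math.exp(a / tau) for a in acts]
    total = sum(weights)
    return sum(w / total * out for w, (out, _) in zip(weights, instances))

# An option that paid +10 three times early on but -10 very recently:
# the single recent loss pulls the blended value far below the +10 that
# frequency alone would suggest.
v = blended_value([(10, [1, 2, 3]), (-10, [9])], t_now=10)
```

With these illustrative parameters the blended value sits only slightly above zero, showing how a recent loss can dominate an otherwise favorable history.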
Furthermore, research shows that Prospect Theory (PT) [19, 20] provides a robust account of decisions in situations involving gains and losses. According to PT, people value gains and losses differently, and losses have more emotional impact than an equivalent amount of gains. For example, in a traditional way of thinking, the utility gained from receiving $50 should equal that of gaining $100 and then losing $50: in both situations, the end result is a net gain of $50. Although the two situations are economically identical, to our cognitive system gaining $100 and then losing $50 feels worse than gaining $50 in a single step. Given that the hacker's goal is to gain payoff, PT is likely to help explain the combined effect of situations involving both gains and losses. Another theory, surprise-triggers-change [21], accounts for decisions in surprising situations. According to this hypothesis, surprise is assumed to increase with the gap between the expected and the observed outcomes. This theory will help explain the changing behavior of the hacker across multiple games.
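The PT asymmetry described above is easy to see numerically. Below is a minimal sketch of the PT value function using the commonly cited parameter estimates from Tversky and Kahneman [20], not values fitted to this experiment.

```python
# Prospect Theory value function with the commonly cited parameter
# estimates (alpha = beta = 0.88, loss-aversion lambda = 2.25).
def pt_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Subjective value of a single gain (x > 0) or loss (x < 0)."""
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** beta

# A $50 gain in one step vs. gaining $100 and then losing $50: the
# economic result is identical, but the segregated gain-then-loss is
# valued negatively overall because the loss looms larger.
single = pt_value(50)                       # ~31.3
combined = pt_value(100) + pt_value(-50)    # ~57.6 - 70.4, i.e. negative
```

This is exactly the asymmetry the paragraph above appeals to: the same net $50 feels very different depending on how it was reached.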

In the following sections, we first explain the generic cyber-security game; next we
explain the methodological details and then present our results. The paper ends by
discussing the results obtained, the novelty of our experimental approach, and its sig-
nificance in protecting networks from cyber-attacks.

2 Deception Game

The deception game is a sequential, incomplete-information, single-player game (shown in Figure 1), i.e., a game between a hacker player and the system. The deception game is denoted DG(n, k, γ), where n is the total number of webservers, k is the number of honeypots, and γ is the number of probes after which the hacker makes his final move. In the current formulation of DG, n = 2 (two webservers), k = 1 (one honeypot), and γ = 1 (the hacker can probe one of the webservers once, or can choose not to probe, before attacking one of the webservers for real). The game has multiple rounds, each consisting of a Probe stage and an Attack stage.

In the Probe stage, the hacker is asked either to probe one of the two webservers, presented as buttons on a computer screen, or not to probe. Probing a webserver means clicking the button corresponding to that webserver and getting a response from the computer network on whether the probed webserver is a honeypot or a regular webserver. Honeypot webservers are decoys that pretend to be regular webservers with the main aim of trapping hackers. In contrast, regular webservers are real webservers, which store valuable information on a company's products and employees. The hacker's goal is to attack the regular webserver. If deception is used in a game, then the network's response to the hacker's probe is the opposite of the webserver's actual state: if the hacker probes a regular webserver, the network responds "honeypot," and if the hacker probes a honeypot, the network responds "regular." In contrast, if deception is absent in a game, the system responds with the actual state of the webservers: a probe on the regular webserver is answered "regular" and a probe on a honeypot is answered "honeypot."

After probing one of the webservers, the hacker enters the Attack stage, in which she can either attack one of the webservers for real or decide not to attack the computer network. After the Attack stage, the hacker is given feedback about the actions she took in the preceding game and the nature of the webserver, honeypot or regular, that she actually attacked. The Probe and Attack stages involve the payoffs shown in Table 1, and hackers are shown these payoffs as well as their cumulative payoff across games.
Fig. 1. The Probe and Attack stages of the deception game.

Table 1. The hacker's payoffs during the Probe and Attack stages in the game.

(a) Probe Stage
  Hacker's Action               Hacker's Payoff
  Probe a Regular Webserver     5 points
  Probe a Honeypot Webserver    -5 points
  Do Not Probe                  0 points

(b) Attack Stage
  Hacker's Action               Hacker's Payoff
  Attack a Regular Webserver    10 points
  Attack a Honeypot Webserver   -10 points
  Do Not Attack                 0 points
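The round structure and the Table 1 payoffs can be combined into a short simulation. This is our sketch of DG(2, 1, 1), not the authors' experimental software; the "trusting" strategy and all names are illustrative.

```python
import random

# Payoffs from Table 1.
PROBE_PAYOFF  = {"regular": 5,  "honeypot": -5}
ATTACK_PAYOFF = {"regular": 10, "honeypot": -10}
FLIP = {"regular": "honeypot", "honeypot": "regular"}

def play_round(choose_attack, deception, rng):
    """One probe-then-attack round of DG(2, 1, 1).

    choose_attack(response) returns 0/1 (a webserver) or None (not-attack);
    response is what the network *claims* server 0 is after the hacker
    probes it (the flipped state when deception is on).
    """
    servers = ["regular", "honeypot"]
    rng.shuffle(servers)                      # which button hides the honeypot
    truth = servers[0]                        # this hacker always probes server 0
    payoff = PROBE_PAYOFF[truth]
    response = FLIP[truth] if deception else truth
    attack = choose_attack(response)
    if attack is not None:
        payoff += ATTACK_PAYOFF[servers[attack]]
    return payoff

# A trusting hacker attacks server 0 only if the network called it
# "regular", otherwise attacks server 1.
trusting = lambda response: 0 if response == "regular" else 1
rng = random.Random(0)
honest_pay   = sum(play_round(trusting, False, rng) for _ in range(1000))
deceived_pay = sum(play_round(trusting, True,  rng) for _ in range(1000))
print(honest_pay > deceived_pay)  # True: deception hurts a trusting hacker
```

Under honest responses this strategy always earns a positive payoff; with deception it always attacks the honeypot, which is exactly the trap the game is designed to study.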

3 Experiment: Influence of Amount and Timing of Deception on Hacker's Decisions

3.1 Experiment Design


In this experiment, we analyzed the effect of two between-subjects factors, the amount of deception and the timing of deception, on hackers' decisions to attack a computer network. The amount of deception was manipulated at 2 levels: low and high. The timing of deception was manipulated at 2 levels: early and late. In all conditions, participants playing as hackers were given 10 games in a sequence (the endpoint was unknown to participants). When the amount of deception was low, 2 of the 10 games had deception; when it was high, 4 of the 10 games had deception. Furthermore, when the timing of deception was early, deception was present in the first few games of the sequence; when it was late, deception was present in the last few games. Overall, this design resulted in a total of 4 between-subjects conditions: Early Low Deception (ELD), Early High Deception (EHD), Late Low Deception (LLD), and Late High Deception (LHD).
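The four conditions can be encoded as Boolean deception schedules. The paper says only that deception occurred in "the first few" or "the last few" games; placing the deception games in contiguous blocks at the start or end is our assumption.

```python
# Build a 10-game deception schedule for one between-subjects condition.
# amount: "low" (2 deception games) or "high" (4); timing: "early" or "late".
def deception_schedule(amount, timing, n_games=10):
    n_dec = {"low": 2, "high": 4}[amount]
    if timing == "early":
        dec_games = set(range(n_dec))                     # first few games
    else:
        dec_games = set(range(n_games - n_dec, n_games))  # last few games
    return [g in dec_games for g in range(n_games)]

# e.g. the LHD condition: deception only in the last 4 of 10 games.
print(deception_schedule("high", "late"))
```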

3.2 Participants
A total of 100 participants took part in an online cyber-security study. These participants were equally divided across the four between-subjects conditions: ELD (N = 25), EHD (N = 25), LLD (N = 25), and LHD (N = 25). Sixty-eight percent of participants were male. Ages ranged from 18 to 45 years (mean = 23; SD = 4). About 71% of participants self-reported holding a 4-year undergraduate college degree; 20% reported high-school degrees; 7% reported 2-year college degrees or some college experience; and 2% reported a graduate or professional degree. Hacker participants were paid INR 30 after completing the experiment.

3.3 Procedure
Participants were given instructions about their goal in the cyber-security game, and they possessed complete information about the payoffs of their own actions (the payoff matrix was given). Specifically, human hackers were asked to maximize their payoff by attacking or not attacking the network over several rounds of play (the endpoint of the game was not disclosed to participants at any point). Each round had two stages: a Probe stage and an Attack stage. The hacker had three action alternatives to choose from in each stage: attack webserver 1, attack webserver 2, or not attack. Hacker participants had to choose between these alternatives, presented on their screen via three buttons, in order to maximize their payoffs. Once the study ended, the online system thanked the human participants and generated a request for the experimenter to make an online payment to each participant.

4 Results

4.1 Proportion of Attack on Honeypot Webserver


Figure 2 shows the average proportion of the hacker's attack actions on the honeypot webserver for the Low Deception and High Deception conditions. The mean proportion of attack actions on the honeypot webserver in the High Deception condition was not significantly different from that in the Low Deception condition (0.43 vs. 0.39; F(1, 96) = 1.529, p = 0.291). Similarly, Figure 3 shows the average proportion of attack actions on the honeypot for the Late Deception and Early Deception conditions. The mean proportion of attack actions on the honeypot webserver in the Late Deception condition was not significantly different from that in the Early Deception condition (0.41 vs. 0.42; F(1, 96) = 0.087, p = 0.769). Further, we analyzed the within-subjects variables and the interaction effect of timing and amount of deception; these results were not significant. However, the average attack proportion across games was significant (F(9, 96) = 4.043, p < .001), as shown in Figure 4.

Fig. 2. Effect of Amount of Deception.

Fig. 3. Effect of Timing of Deception.

Fig. 4. Average attack proportion on the honeypot webserver.

4.2 Proportion of Not Attack Actions


Figure 5 shows the hacker's average proportion of not-attack actions for the early and late deception conditions. The average proportion of not-attack actions was significantly higher in the late deception condition than in the early deception condition (0.25 > 0.08; F(1, 96) = 17.959, p < 0.05). Figure 6 shows the proportion of not-attack actions for the Low Deception and High Deception conditions. The average proportion of not-attack actions was higher for the high amount of deception than for the low amount (0.21 > 0.11; F(1, 96) = 6.67, p < 0.05). According to our hypothesis, for a higher amount of deception the hacker would either attempt more honeypot attacks or more not-attack actions. The results for not-attack actions were in line with this hypothesis: for a high amount of deception and late timing of deception, hackers attempted more not-attack actions. However, the interaction effect of amount and timing was not significant.

Fig. 5. Effect of Timing of Deception.

Fig. 6. Effect of Amount of Deception.
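The two-group comparisons reported above can be reproduced with a from-scratch one-way F statistic. Note that a plain two-group test on 100 participants has df (1, 98); the paper's df of 96 come from the full two-factor model. The sample proportions below are invented for illustration (chosen to match the reported group means of 0.08 and 0.25), since the per-participant data are not published.

```python
# One-way between-subjects ANOVA F statistic, computed from scratch.
def one_way_F(groups):
    """Return (F, (df_between, df_within)) for a list of sample groups."""
    all_x = [x for g in groups for x in g]
    grand = sum(all_x) / len(all_x)
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
    df_b, df_w = len(groups) - 1, len(all_x) - len(groups)
    return (ss_between / df_b) / (ss_within / df_w), (df_b, df_w)

# Invented not-attack proportions for 50 "early" and 50 "late" hackers.
early = [0.0] * 40 + [0.4] * 10        # group mean 0.08
late  = [0.1] * 30 + [0.475] * 20      # group mean 0.25
F, df = one_way_F([early, late])
```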

5 Discussion and Conclusion

In today's Internet age, large computer networks are widely used. Cyber-attacks on such networks are an emerging issue that needs to be addressed in a more sophisticated way [22]. Deception strategies are currently moving to the forefront of the fight against cyber-attacks. Choosing the correct timing and the correct amount of deception may help improve defense mechanisms. Our results show that an adequate amount of deception and appropriate timing of deception do influence the proportions of attack and not-attack actions occurring on a network.

In general, a high amount and late timing of deception decrease attack actions on the network. Such a decrease signifies that a high amount and late timing of deception deter hackers from attacking the network. These results can be explained by Prospect Theory (PT) [19, 20], surprise-triggers-change theory [21], and Instance-Based Learning Theory (IBLT) [23].

First, we found that manipulating the timing and amount of deception did not produce any difference in attacks on the honeypot. However, the attack proportion on the honeypot across the 10 games was significant. As per surprise-triggers-change theory, the proportion of attacks on the honeypot is lower after negative recency and higher after positive recency: if the hacker experienced losses in previous games, he reduced attacks on the honeypot to maximize his payoffs.
Second, we found that not-attack actions were more frequent in the late deception condition than in the early deception condition. Initially, the hacker did not experience any losses, so to maximize his profit he chose attack actions rather than not-attack actions. However, late deception surprised the hacker with sudden negative outcomes. According to PT, losses influence people's decisions more than gains. Thus, the fear of encountering a loss makes people careful about their actions and, as a result, increases their proportion of not-attack actions.

Next, we found that the proportion of not-attack actions was greater for a high amount of deception than for a low amount. A high amount of deception creates a fear of being trapped in the hacker's mind. According to loss aversion, people strongly prefer avoiding losses to acquiring gains; most studies suggest that losses are about twice as powerful, psychologically, as gains. Similarly, in this situation, gaining access to the system at the risk of being trapped created fear in the hacker. Thus, not-attack actions were higher for a high amount of deception.
Although deception itself is not a new concept in defense, deception as a game-changing tool against hackers is still at an early stage. Deception can serve as an effective tool to detect attacks even before they occur and to make the attack process costly. To make deception an effective strategy, we will extend the current experiment to multiple probes before attacks: in the real world, a hacker probes the network multiple times to gain adequate information for an attack, and we want to analyze the effect of multiple probes on the success of deception. Further, we plan to use deception techniques other than a decoy (mimicking, masking, packaging, etc.) to make deception an effective strategy.

Acknowledgments
Palvi Aggarwal was supported by the Visvesvaraya Ph.D. Scheme for Electronics and IT (IITM/DeitY-MLA/ASO/77), Department of Electronics and Information Technology, Ministry of Communication & IT, Government of India. Cleotilde Gonzalez was supported by the Army Research Laboratory under Cooperative Agreement Number W911NF-13-2-0045 (ARL Cyber Security CRA). Varun Dutt was supported by a Department of Science and Technology, Government of India award (Award number: SR/CSRI/28/2013(G)). The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Laboratory or the Indian or U.S. Governments.

References
1. Trustwave: 2015 Global Security Report. Retrieved from https://www2.trustwave.com/rs/815-RFM-693/images/2015_TrustwaveGlobalSecurityReport.pdf
2. Symantec Corporation: Internet Security Threat Report. Retrieved from http://www.symantec.com/content/en/us/enterprise/other_resources/bistr_main_report_v19_21291018.en-us.pdf (2014).
3. Whaley, B.: Toward a general theory of deception. The Journal of Strategic Studies, 5(1), 178-192. Frank Cass, London (1982).
4. Glantz, D.: Military Deception in the Second World War. Cass Series on Soviet Military Theory & Practice. Routledge, London. ISBN 978-0-714-63347-3 (1989).
5. Denning, D.: Information Warfare and Security. Addison-Wesley, New York (1999).
6. Mitnick, K. D., & Simon, W. L.: The Art of Deception: Controlling the Human Element of Security. John Wiley & Sons (2011).
7. Rowe, N. C., & Custy, E. J.: Deception in cyber-attacks. In: Cyber Warfare and Cyber Terrorism (2008).
8. Dutt, V., Ahn, Y. S., & Gonzalez, C.: Cyber situation awareness: Modeling detection of cyber attacks with instance-based learning theory. Human Factors: The Journal of the Human Factors and Ergonomics Society, 55(3), 605-618 (2013).
9. Arora, A., & Dutt, V.: Cyber security: Evaluating the effects of attack strategy and base rate through instance-based learning. In: 12th International Conference on Cognitive Modeling, Ottawa, Canada (2013).
10. Kaur, A., & Dutt, V.: Cyber situation awareness: Modeling the effects of similarity and scenarios on cyber attack detection. Paper presented at the 12th International Conference on Cognitive Modeling, Ottawa, Canada (2013).
11. Gonzalez, C., & Dutt, V.: Instance-based learning: Integrating sampling and repeated decisions from experience. Psychological Review, 118(4), 523 (2011).
12. Roy, S., Ellis, C., Shiva, S., Dasgupta, D., Shandilya, V., & Wu, Q.: A survey of game theory as applied to network security. In: 43rd Hawaii International Conference on System Sciences (HICSS), pp. 1-10. IEEE (2010).
13. Camerer, C.: Behavioral Game Theory: Experiments in Strategic Interaction. Princeton University Press (2003).
14. Alpcan, T., & Başar, T.: Network Security: A Decision and Game-Theoretic Approach. Cambridge University Press (2010).
15. Crouse, M.: Performance Analysis of Cyber Deception Using Probabilistic Models (2012).
16. Garg, N., & Grosu, D.: Deception in honeynets: A game-theoretic analysis. In: Information Assurance and Security Workshop (IAW '07), IEEE SMC. IEEE (2007).
17. Dutt, V., & Gonzalez, C.: Making instance-based learning theory usable and understandable: The instance-based learning tool. Computers in Human Behavior, 28(4), 1227-1240 (2012). doi:10.1016/j.chb.2012.02.006
18. Gonzalez, C., Lerch, J. F., & Lebiere, C.: Instance-based learning in dynamic decision making. Cognitive Science, 27(4), 591-635 (2003). doi:10.1016/S0364-0213(03)00031-4
19. Kahneman, D., & Tversky, A.: Prospect theory: An analysis of decision under risk. Econometrica, 263-291 (1979).
20. Tversky, A., & Kahneman, D.: Advances in prospect theory: Cumulative representation of uncertainty. Journal of Risk and Uncertainty, 5(4), 297-323 (1992).
21. Nevo, I., & Erev, I.: On surprise, change, and the effect of recent outcomes. Frontiers in Psychology, 3 (2012).
22. Loukas, G.: Cyber-Physical Attacks. Retrieved from http://www.professionalsecurity.co.uk/reviews/cyber-physical-attacks (2015).
23. Dutt, V., Ahn, Y. S., & Gonzalez, C.: Cyber situation awareness: Modeling detection of cyber attacks with instance-based learning theory. Human Factors: The Journal of the Human Factors and Ergonomics Society, 55(3), 605-618 (2013).
