Varun Dutt
Indian Institute of Technology Mandi
All content following this page was uploaded by Palvi Aggarwal on 01 August 2016.
1 Introduction
The increasing reliance of our information-age industries, governments, and economies on cyber infrastructure makes them more and more vulnerable to cyber-attacks.
In their most disruptive form, cyber-attacks target the enterprise, military, government, or other infrastructural resources of a country and its citizens. The volume and sophistication of cyber threats (malicious hacking, cyber warfare, cyber espionage, cyber terrorism, and cyber-crime) are increasing exponentially, and they pose potent threats to the cyber world. According to Trustwave's 2015 Global Security Report, 98% of tested web applications were found vulnerable to cyber-attack. According to the Department of Business, Innovation & Skills' 2015 security survey, 90% of large organizations and 74% of small organizations suffered security breaches [1].
* Palvi Aggarwal, ACS Lab, IIT Mandi, India.
On the other hand, cyber criminals get smarter day by day. They find new ways to get into our systems and damage or steal information in less time than we expect. Modern cyber-attacks on critical cyber infrastructure reveal the urgent need for enhanced cyber security. As cyber threats grow, so must the security techniques that neutralize them. Various security solutions are available to defend against cyber-attacks, but they are not capable enough to prevent zero-day cyber-attacks [2]. Thus, it is doubtful that prevention-only security mechanisms will work effectively. Cyber security teams must create a real-time attack-detection environment against a frequently changing cyber-attack landscape. Deception tools can play a critical role in such scenarios.
Deception refers to an interaction between two people or parties, a target and a deceiver, in which the deceiver effectively causes the target to accept as true a specific false version of reality, with the objective of causing the target to act in a way that benefits the deceiver [3]. Deception has been used as a strategy in war zones [4] for many years, and by cyber attackers in the cyber world [5]. Cyber attackers use various techniques for deceptive purposes, such as changing malware signatures, concealing code and logic, encrypting exploits, and social engineering (e.g., deceiving help-desk employees into installing malicious code or revealing credentials). However, deception can also be used as a line of defense against cyber attackers. Kevin Mitnick, a famous hacker, wrote in "The Art of Deception": "If deception can be used for cyber-attack, can it also be used for defense?" [6].
Deception strategies use feints and deceit to thwart an attacker's cognitive processes, delay attack activities, and disrupt the breach process. Deception is achieved through misdirection, fake responses, and obfuscation [7]. These techniques gain the attacker's trust in the network, data, applications, and systems they interact with during attack execution. Security experts already use honeypots and honeynets for gathering intelligence about attackers. However, the use of deception can be extended beyond detection toward prevention. To improve a deception strategy, the question is not whether to use deception, but what timing and amount of deception to use. In this paper, we analyze the appropriate timing and amount of deception to use against the attacker.
One way to study cyber security is through a non-cooperative dynamic game, with and without complete information, as described in behavioral game theory [8, 9, 10, 11, 12]. A non-cooperative dynamic game consists of two or more players, a set of actions, the outcome of each player interaction, and the game's information structures [13]. Alpcan and Başar [14] have provided real-world abstractions of several complex security games with reduced complexity (actions and states) that preserve the important dynamics communicated by these games.
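To make these components concrete, the sketch below writes down players, action sets, and a payoff function for a toy two-player security game and checks for pure-strategy equilibria. The 2x2 payoff values are invented for illustration and are not taken from this paper or from [14]; notably, this particular toy game has no pure-strategy equilibrium, which is one reason mixed strategies arise in security games.

```python
from itertools import product

# Components of a non-cooperative game: players, action sets, and payoffs.
# The payoff numbers below are illustrative assumptions only.
ACTIONS = {"hacker": ["attack", "not-attack"],
           "analyst": ["deceive", "not-deceive"]}

# PAYOFFS[(hacker_action, analyst_action)] = (hacker_payoff, analyst_payoff)
PAYOFFS = {
    ("attack", "deceive"):        (-5, 5),
    ("attack", "not-deceive"):    (10, -10),
    ("not-attack", "deceive"):    (0, -2),
    ("not-attack", "not-deceive"): (0, 0),
}

def pure_nash_equilibria():
    """Joint actions from which no single player gains by deviating alone."""
    eqs = []
    for h, a in product(ACTIONS["hacker"], ACTIONS["analyst"]):
        best_h = all(PAYOFFS[(h, a)][0] >= PAYOFFS[(h2, a)][0]
                     for h2 in ACTIONS["hacker"])
        best_a = all(PAYOFFS[(h, a)][1] >= PAYOFFS[(h, a2)][1]
                     for a2 in ACTIONS["analyst"])
        if best_h and best_a:
            eqs.append((h, a))
    return eqs

print(pure_nash_equilibria())  # [] : no pure equilibrium; players must mix
```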
Current game-theoretic approaches, which study the interaction between hackers and analysts, have disregarded the role of deception as a strategy to counter cyber-attacks [15]. Deception, when used in cyber security, refers to a strategy used by the analyst to mislead the hacker into taking an action that will help in capturing the hacker. Deception in a game-theoretic setting has been studied using the honeypot game HG(n, k) [16], in which extensive-form games of imperfect information were used to analyze deception strategies in honeynets and to compute mixed-strategy Nash equilibria. In this paper, we use a deception game to analyze the effect of the amount and timing of deception on the attacker's decisions to attack or not attack the network. In this game, the hacker's action is abstracted as a choice between an "attack" action and a "not-attack" action. An attack action means attacking a regular webserver or a honeypot webserver using cyber threats, whereas a not-attack action means not attacking the system.
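This abstraction can be sketched in a few lines. The payoff values below are illustrative assumptions (the game's actual payoff table is not reproduced in this excerpt); the point is only that the expected value of attacking falls as the proportion of honeypots, i.e., the amount of deception, rises.

```python
import random

# Illustrative payoffs for the abstracted deception game (assumed values,
# not this paper's Table 1).
PAYOFFS = {
    "regular": 10,    # gain from attacking a regular webserver
    "honeypot": -5,   # loss from attacking a honeypot (hacker is trapped)
    "not-attack": 0,  # no gain, no loss
}

def play_round(action, p_honeypot, rng=random):
    """Return the hacker's payoff for one round.

    action: 'attack' or 'not-attack'
    p_honeypot: probability that the attacked webserver is a honeypot
    """
    if action == "not-attack":
        return PAYOFFS["not-attack"]
    server = "honeypot" if rng.random() < p_honeypot else "regular"
    return PAYOFFS[server]

def expected_attack_payoff(p_honeypot):
    """Expected value of the 'attack' action."""
    return (p_honeypot * PAYOFFS["honeypot"]
            + (1 - p_honeypot) * PAYOFFS["regular"])

# With a low amount of deception, attacking pays in expectation; with a
# high amount, not-attacking (payoff 0) becomes the better choice.
print(expected_attack_payoff(0.2))  # ≈ 7.0
print(expected_attack_payoff(0.8))  # ≈ -2.0
```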
In addition to mathematically derived Nash equilibria [13], the literature has shown Instance-Based Learning Theory (IBLT) [8, 11, 17, 18] to be an accurate account of human decisions in behavioral game-theory (BGT) situations involving hackers and analysts [9, 10, 11]. In such situations, hackers and analysts, playing a game against each other, possess cognitive limitations and rely on the recency and frequency of available information to make decisions. Therefore, applying IBLT to the hacker's experiential decisions in a deception game will help explain how these decisions are affected by the amount and timing factors, and will help improve current technical solutions to provide better decision support to analysts in their job.
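The recency-and-frequency mechanism at the heart of IBLT can be sketched as follows. This is a minimal, generic instance-based learner in the ACT-R tradition, not the model used in this paper; the parameter values (decay, noise) and the example outcomes are illustrative assumptions.

```python
import math
import random

# Minimal instance-based learning (IBL) sketch: outcomes experienced often
# and recently get higher activation and thus more weight in the blend.
DECAY = 0.5                   # d: memory decay rate (assumed)
NOISE = 0.25                  # sigma: activation noise (assumed)
TEMP = math.sqrt(2) * NOISE   # Boltzmann temperature for retrieval weights

def activation(timestamps, now, rng):
    """Activation grows with frequency (many timestamps) and recency."""
    base = math.log(sum((now - t) ** (-DECAY) for t in timestamps))
    return base + rng.gauss(0, NOISE)

def blended_value(instances, now, rng):
    """Blend stored outcomes, each weighted by its retrieval probability."""
    acts = {o: activation(ts, now, rng) for o, ts in instances.items()}
    total = sum(math.exp(a / TEMP) for a in acts.values())
    return sum(o * math.exp(a / TEMP) / total for o, a in acts.items())

rng = random.Random(1)
# Outcomes experienced for 'attack': gains of 10 seen early (rounds 1-2),
# honeypot losses of -5 seen recently (rounds 8-9).
attack_instances = {10: [1, 2], -5: [8, 9]}
v_attack = blended_value(attack_instances, now=10, rng=rng)
v_not_attack = 0.0  # 'not-attack' always yielded 0
choice = "attack" if v_attack > v_not_attack else "not-attack"
print(v_attack, choice)
```

Because the recent losses dominate the blend, the learner tends to shift toward not-attacking, which is the qualitative pattern IBLT predicts here.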
Furthermore, research shows that Prospect Theory (PT) [19, 20] provides a robust account of decisions in situations involving gains and losses. According to PT, people value gains and losses differently, and losses have more emotional impact than an equivalent amount of gains. For example, in a traditional way of thinking, the amount of utility gained from receiving $50 should equal that of a situation in which you gained $100 and then lost $50: in both situations, the end result is a net gain of $50. Although the two situations are identical in outcome, to our cognitive system, gaining $100 and then losing $50 feels more like a loss than gaining $50 in a single step. Given that the hacker's goal is gaining payoff, PT is likely to help explain the combined effect of situations involving both gains and losses. Another theory, surprise-triggers-change [21], accounts for decisions during surprising situations. According to this hypothesis, surprise is assumed to increase with the gap between the expected and the observed outcomes. This theory will help explain the changing behavior of hackers over multiple games.
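The $50 example can be worked through with the standard PT value function, alongside a simple gap-based surprise measure. The parameter values are Tversky and Kahneman's (1992) median estimates, used here purely for illustration; the absolute-gap surprise measure is one simple reading of the hypothesis in [21], not that paper's exact formulation.

```python
# Prospect-theory value function with loss aversion; alpha, beta, lambda
# are Tversky & Kahneman's (1992) median estimates.
ALPHA, BETA, LAMBDA = 0.88, 0.88, 2.25

def pt_value(x):
    """Losses loom larger than equivalent gains (loss aversion)."""
    if x >= 0:
        return x ** ALPHA
    return -LAMBDA * (-x) ** BETA

def surprise(expected, observed):
    """Surprise grows with the gap between expected and observed outcomes."""
    return abs(observed - expected)

# Gaining $50 outright vs. gaining $100 then losing $50: the same net $50
# feels worse in the second case because the $50 loss is over-weighted.
outright = pt_value(50)
gain_then_loss = pt_value(100) + pt_value(-50)
print(outright > gain_then_loss)  # True: the single gain is preferred

# A sudden honeypot loss when a gain was expected is highly surprising.
print(surprise(expected=10, observed=-5))  # 15
```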
In the following sections, we first explain the generic cyber-security game; next we
explain the methodological details and then present our results. The paper ends by
discussing the results obtained, the novelty of our experimental approach, and its sig-
nificance in protecting networks from cyber-attacks.
2 Deception Game
Table 1. The Hacker’s payoffs during the Probe and Attack stages in the game.
3.2 Participants
A total of 100 participants took part in an online cyber-security study. These participants were equally divided across four between-subjects conditions: ELD (N=25), EHD (N=25), LLD (N=25), and LHD (N=25). Sixty-eight percent of participants were male. The age of participants ranged from 18 to 45 years (Mean = 23; SD = 4). About 71% of participants self-reported possessing a 4-year undergraduate college degree; 20% reported high-school degrees; 7% reported 2-year college degrees or some college experience; and 2% reported either a graduate or a professional degree. Hacker participants were paid INR 30 after completing the experiment.
3.3 Procedure
Participants were given instructions about their goal in the cyber-security game, and they possessed complete information about their own actions' payoffs (the payoff matrix was given). Specifically, human hackers were asked to maximize their payoff by attacking or not attacking the network over several rounds of play (the endpoint of the game was not disclosed to participants at any point in the game). Each round had two stages: a Probe stage and an Attack stage. The hacker had three action alternatives to choose from in each stage: attack webserver 1, attack webserver 2, and not-attack. Hacker participants had to choose between these alternatives, presented on their screen via three buttons, in order to maximize their payoffs. Once the study ended, the online system thanked the human participants and raised a request for the experimenter to make an online payment to them.
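The two-stage round structure described above can be sketched as a simple loop. The payoff values and the random player are placeholders for illustration only; the actual task used human choices and the payoffs of Table 1.

```python
import random

# Sketch of the task flow: each round has a probe stage then an attack
# stage, with one of three button choices per stage. Payoffs are invented.
ACTIONS = ["attack webserver 1", "attack webserver 2", "not-attack"]

def run_round(payoff_fn, choose, history):
    """Play the probe stage and the attack stage of one round."""
    round_result = {}
    for stage in ("probe", "attack"):
        action = choose(stage, history)           # pick one of the buttons
        outcome = payoff_fn(stage, action)
        history.append((stage, action, outcome))  # feedback shown on screen
        round_result[stage] = outcome
    return round_result

def random_player(stage, history):
    """Placeholder for a human participant: picks a button at random."""
    return random.choice(ACTIONS)

def toy_payoffs(stage, action):
    """Illustrative payoffs: attacks may hit a regular server or a honeypot."""
    if action == "not-attack":
        return 0
    return random.choice([10, -5])

history = []
for _ in range(5):  # endpoint unknown to participants; 5 rounds for the demo
    run_round(toy_payoffs, random_player, history)
print(len(history))  # 10: two stage entries per round
```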
4 Results
In today's Internet age, large computer networks are widely used. Cyber-attacks on such networks are an emerging issue that needs to be addressed in a more sophisticated way [22]. Deception strategy is currently moving to the forefront of the fight against cyber-attacks. Choosing the correct timing and the correct amount of deception may help improve defense mechanisms. Our results show that an adequate amount of deception and appropriate timing of deception do influence the proportion of attack and not-attack actions occurring on a network.
In general, a high amount and late timing of deception decrease attack actions on the network. Such a decrease in attack actions signifies that a high amount and late timing of deception create deterrence for hackers attacking the network. These results can be explained in terms of motivation by Prospect Theory (PT) [19, 20], the surprise-triggers-change theory [21], and Instance-Based Learning Theory (IBLT) [23].
First, we found that the manipulation of the timing and amount of deception did not produce any difference in attacks on the honeypot. However, the variation in the proportion of attacks on the honeypot across the 10 games was significant. As per the surprise-triggers-change theory, the proportion of attacks on the honeypot was lower after negative recency and higher after positive recency: if hackers experienced losses in previous games, they reduced their attacks on the honeypot to maximize their payoffs.
Second, we found that not-attack actions were more frequent in the late-deception condition than in the early-deception condition. Initially, hackers did not experience any losses, so in order to maximize their profit they chose attack actions more often than not-attack actions. However, late deception surprised hackers with sudden negative outcomes. According to PT, losses influence people's decisions more than gains. Thus, the fear of encountering a loss makes people careful about their actions and, as a result, increases their proportion of not-attack actions.
Next, we found that the proportion of not-attack actions was greater for a high amount of deception than for a low amount of deception. A high amount of deception creates a fear of being trapped in the hacker's mind. According to loss aversion, people tend to strongly prefer avoiding losses over acquiring gains; most studies suggest that losses are, psychologically, about twice as powerful as gains. Similarly, in this situation, gaining access to the system at the cost of being trapped created fear in the hacker. Thus, not-attack actions were higher for a high amount of deception.
Although deception itself is not a new concept in defense, deception as a game-changing tool against hackers is still in its early stages. Deception can serve as an effective tool to detect attacks even before they occur, and it can make the attack process costly. To make deception an effective strategy, we will extend the current experiment to multiple probes before attacks. In the real world, a hacker probes the network multiple times to gain adequate information for an attack. We want to analyze the effect of multiple probes on the success of deception. Further, we plan to use deception techniques other than a decoy (mimicking, masking, packaging, etc.) to make deception an effective strategy.
Acknowledgments
Palvi Aggarwal was supported by the Visvesvaraya Ph.D. Scheme for Electronics and
IT (IITM/DeitY-MLA/ASO/77), Department of Electronics and Information
Technology, Ministry of Communication & IT, Government of India. Cleotilde
Gonzalez was supported by the Army Research Laboratory under Cooperative
Agreement Number W911NF-13-2-0045 (ARL Cyber Security CRA) to Cleotilde
Gonzalez. Varun Dutt was supported by the Department of Science and Tech-
nology, Government of India award (Award number: SR/CSRI/28/2013(G)) to
Varun Dutt. The views and conclusions contained in this document are
those of the authors and should not be interpreted as representing the official
policies, either expressed or implied, of the Army Research Laboratory or the
Indian or U.S. Government.
References
1. Trustwave Global Security Report. Retrieved from: https://www2.trustwave.com/rs/815-RFM-693/images/2015_TrustwaveGlobalSecurityReport.pdf
2. Symantec Corporation: Internet security threat report. Retrieved from http://www.symantec.com/content/en/us/enterprise/other_resources/bistr_main_report_v19_21291018.en-us.pdf (2014).
3. Whaley, B.: Toward a General Theory of Deception. The Journal of Strategic Studies, Frank Cass, London, 5(1):178-192, March 1982.
4. Glantz, D.: Military Deception in the Second World War. Cass Series on Soviet Military Theory & Practice. London: Routledge. ISBN 978-0-714-63347-3 (1989).
5. Denning, D.: Information warfare and security. New York: Addison-Wesley (1999).
6. Mitnick, K. D., & Simon, W. L.: The art of deception: Controlling the human element of security. John Wiley & Sons (2011).
7. Rowe, N. C., & Custy, E. J.: Deception in cyber-attacks. In: Cyber warfare and cyber terrorism (2008).
8. Dutt, V., Ahn, Y. S., & Gonzalez, C.: Cyber situation awareness: modeling detection of cyber-attacks with instance-based learning theory. Human Factors: The Journal of the Human Factors and Ergonomics Society, 55(3), 605-618 (2013).
9. Arora, A., & Dutt, V.: Cyber Security: Evaluating the Effects of Attack Strategy and Base Rate through Instance Based Learning. In 12th International Conference on Cognitive Modeling. Ottawa, Canada (2013).
10. Kaur, A., & Dutt, V.: Cyber Situation Awareness: Modeling the Effects of Similarity and Scenarios on Cyber Attack Detection. In 12th International Conference on Cognitive Modeling. Ottawa, Canada (2013).
11. Gonzalez, C., & Dutt, V.: Instance-based learning: Integrating sampling and repeated decisions from experience. Psychological Review, 118(4), 523 (2011).
12. Roy, S., Ellis, C., Shiva, S., Dasgupta, D., Shandilya, V., & Wu, Q.: A survey of game theory as applied to network security. In System Sciences (HICSS), 2010 43rd Hawaii International Conference on (pp. 1-10). IEEE (2010).
13. Camerer, C.: Behavioral game theory: Experiments in strategic interaction. Princeton University Press (2003).
14. Alpcan, T., & Başar, T.: Network security: A decision and game-theoretic approach. Cambridge University Press (2010).
15. Crouse, M.: Performance Analysis of Cyber Deception Using Probabilistic Models (2012).
16. Garg, N., and Daniel G.: Deception in honeynets: A game-theoretic analysis. In: Information Assurance and Security Workshop, IAW '07. IEEE SMC. IEEE (2007).
17. Dutt, V., & Gonzalez, C.: Making Instance-based Learning Theory Usable and Understandable: The Instance-based Learning Tool. Computers in Human Behavior, 28(4), 1227-1240. doi: 10.1016/j.chb.2012.02.006 (2012).
18. Gonzalez, C., Lerch, J. F., & Lebiere, C.: Instance-based learning in dynamic decision making. Cognitive Science, 27(4), 591-635. doi:10.1016/S0364-0213(03)00031-4 (2003).
19. Kahneman, D., & Tversky, A.: Prospect theory: An analysis of decision under risk. Econometrica, 263-291 (1979).
20. Tversky, A., & Kahneman, D.: Advances in prospect theory: Cumulative representation of uncertainty. Journal of Risk and Uncertainty, 5(4), 297-323 (1992).
21. Nevo, I., and Ido E.: On surprise, change, and the effect of recent outcomes. Frontiers in Psychology, 3 (2012).
22. George L.: Cyber-Physical Attacks. Retrieved from http://www.professionalsecurity.co.uk/reviews/cyber-physical-attacks (2015).
23. Dutt, V., Ahn, Y. S., & Gonzalez, C.: Cyber situation awareness: modeling detection of cyber-attacks with instance-based learning theory. Human Factors: The Journal of the Human Factors and Ergonomics Society, 55(3), 605-618 (2013).