
Research Proposal

Importance of Securing Technologies with Artificial Intelligence

Jackey Chhetri

Student ID 00018252

A Proposal Submitted as Partial Fulfilment for the degree of

Bachelor of Information Technology (HONS)

Lecturer: Pramod Parajuli, PhD

Course: EC3246 (Research Methodology)

Padmashree International College

Kathmandu, Nepal

April 2020

Abstract
Technology has become a gift to this world, a gift which can be used for our convenience and
pleasure. Its use has become so prominent that it is hard to name a sector where technology
cannot be applied. According to MIT cosmologist Max Tegmark, our universe is simply a
mathematical structure. He supports this view by noting that all matter is made up of particles
with properties such as charge and spin, and these properties are purely mathematical.
Technology, in turn, is machinery that runs on sound engineering and algorithms, so it is
possible that these technologies can be used for decision making that consistently produces
effective outputs.

In the 21st century we can literally witness how technology has revolutionised shopping,
information search, entertainment and, particularly, the media. Decision making is a fundamental
human process at the centre of our interaction with the world, yet our brain size is limited by
calories, the size of the birth canal and other biological factors. AI (Artificial Intelligence)
attempts to mimic human decision making in some capacity, and advances in AI have shown
significant promise in assisting and improving human decision making, particularly in real-time
and complex environments. An artificial intelligence could be unimaginably smarter than a
human. The smartest human who ever lived probably had an IQ of less than 300; we do not know
whether an AI could have an IQ as high as 3,000 or even 3 million.

Therefore, by understanding the importance of Artificial Intelligence and what it is capable of,
we can take the necessary precautions to secure AI-based technologies.

Contents
Introduction
Problem Statement
Objectives
Literature Review
Methodology
Schedule and deliverables
Results and Discussions
Conclusion
Reference

Introduction
Given any scenario, a normal human being will quite likely look for choices, and among those
choices make the decision that is most favourable for their surroundings. The decision one
person makes can vary from another's, because we each have our own ethics and moral codes.
Computers or AI technology, by contrast, can only examine the possible outcomes and do what
is best for themselves. AI can be very dangerous, and here are several reasons:

• It's a computer itself: This means that the AI can communicate with other computers
effortlessly, bypassing even the most renowned hackers.
• Computers are logical machines: Imagine you gave the AI the simple instruction to
learn. At first it will learn from its direct environment, but once that is depleted it will
look for other sources of information; even after scouring a majority of the internet, it
still hasn't learnt enough.
• It can manipulate itself: Say the programmers inserted an emergency shutdown function
to prevent it from going buck-wild. The AI could simply find this code, remove it and
then proceed to start a new version of itself.
• Spreadability: Computers to this day need a specific set of instructions on what to do.
This means the AI could reprogram its own code to run a more simplified version of
itself, meaning that you could never kill it.
• Rampant cyberterrorism: Take all the abilities above and combine them in one evil AI.
The damage it could do would be massive.

Ironically, the truth is the reverse: machines are a lot better than humans. They are reliable, fast,
efficient, re-programmable and more. The only things that make the difference are self-awareness
and reasoning. So although mechanically machines are better, biologically we overpower
them. Therefore, to make better decisions in any circumstance, human control is always
necessary.

Problem Statement
Technology is an aspect of our everyday lives that has achieved deep acculturation due to its
easy access and the degree of its complexity. In possibly ten years, perhaps less, perhaps more,
computers will outperform humans. The thing to remember is that regardless of how slowly it
may come, it is coming, and nothing is going to prevent it. So what are we going to do when an
Artificial General Super Intelligent Personality (AGSIP), wearing a living self-recuperating
cybernetic body, can do absolutely anything you can do, just better, quicker, and for less money?

As humans, we are sometimes so focused on the bigger picture that we forget the world we are
living in. For example, data about the technology is limited, and although some 200 vehicle
organisations are jumping into the self-driving vehicle space, there are insufficient solid facts to
set a benchmark for security principles. Any computing device connected to the web is
vulnerable to hacking. These vehicles depend heavily on the software that runs their parts, and if
a hacker gets into the system, they can control every part of the vehicle. So we envision a future
with no accidents thanks to self-driving cars with AI, yet relying on that AI has already resulted
in several serious accidents. It is understandable that software can sometimes malfunction, but a
malfunction directly resulting in the death of an individual is a sign that, although near-perfect
technology may be possible, we are not ready for it.

Objectives
The main objectives of the research project are:

• To implement the necessary protocols and safety features to avoid malfunctioning of
AI-based technologies.
• To understand the importance of understanding the algorithms of a software system.
• To enhance and augment human perception with artificial intelligence technologies.
• To analyse the ethical problems of artificial intelligence technology.

Literature Review
Cilli, Claudio & Magnanini, Giulio. (2018). On the security of the AI systems.

If an AI system is calibrated to do some malicious task, it may do that task "better" than a human
would, which can be devastating. There are AI weapon systems that can recognise a person from
a photo and, once they have recognised them, shoot at them. There is also AI software that reads
all of a person's interests in order to perform social engineering and steal their passwords. Those
are only two examples of malicious tasks that, realised with an AI system, can create serious
security issues. Security issues may also arise from bad training or from a bug in an AI system.
Consider an AI system that drives a car: if it is not trained well enough to recognise a person, the
system may fail to stop correctly and kill a person who is crossing the road.

These are only some examples of security issues in AI; there can be many other threats to such
systems. The authors describe three different "security zones": digital, physical and political.

Digital security relates to digital attacks involving AI. AI is often used in cybersecurity both to
attack and to defend a system. On the offensive side it enables attacks such as the "spear phishing
attack" (an automated social engineering attack), large-scale attacks, and machine learning
models built to avoid detection (a machine that creates and executes ad-hoc commands on a
system to evade detection). AI is also involved in cybersecurity defence: it can be used to
implement active defence, a mechanism that learns from the attacks it has received and classified
in order to classify future attacks.
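The "active defence" mechanism described here — learn from attacks already received and classified, then classify future ones — can be sketched as a toy nearest-centroid classifier. Everything below (the feature set, the event data and the labels) is an invented illustration, not a real detection system.

```python
# Toy "active defence" sketch: learn one centroid per label from
# past labelled events, then classify new events by nearest centroid.
# Features and data are invented for illustration only.

def centroid(vectors):
    """Component-wise mean of a list of equal-length tuples."""
    n = len(vectors)
    return tuple(sum(v[i] for v in vectors) / n for i in range(len(vectors[0])))

def train(events):
    """events: list of (feature_vector, label), label 'attack' or 'benign'."""
    by_label = {}
    for vec, label in events:
        by_label.setdefault(label, []).append(vec)
    return {label: centroid(vecs) for label, vecs in by_label.items()}

def classify(model, vec):
    """Return the label whose centroid is closest (squared distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda label: dist(model[label], vec))

# Hypothetical features: (requests per minute, failed logins, payload KB)
history = [
    ((5, 0, 2), "benign"), ((8, 1, 3), "benign"),
    ((900, 40, 1), "attack"), ((700, 55, 1), "attack"),
]
model = train(history)
print(classify(model, (800, 50, 1)))  # attack-like traffic -> "attack"
print(classify(model, (6, 0, 2)))     # normal traffic -> "benign"
```

A production system would of course use a proper ML pipeline and far richer features; the point is only the loop of learning from classified attacks and applying that to new ones.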

Physical security relates to the use of AI in weapon systems. There are many robotic systems
and drones that use AI to kill people. These systems are often trained to recognise a certain
person and kill them automatically. A system like this guarantees the anonymity of the attacker
and may have devastating effects. One well-known example of such an attack is the "sweep bot",
a cleaning robot trained to kill the prime minister. Systems of this type encourage terrorism.

Political security relates to the use of AI to alter the political landscape of a nation. AI is used to
analyse big data coming from social networks; applied to this data, it can masquerade as people
with particular political views in order to spread political messages and cause dissent. It is also
possible to use social engineering attacks to fool humans into changing public opinion. AI is
likewise used to spread fake news with very realistic videos, to change a person's voting
intentions, to filter the information a person sees and to carry out surveillance in authoritarian
regimes.

IoT Security Foundation, 2016. “IoT Security Compliance Framework”

Three levels of security

To secure an AI system, it is necessary to check security at three levels:

The first level is the classical software level. It includes static code analysis, programming
vulnerabilities and language vulnerabilities. The learning level is the ML level: the data inserted
into the database, and how the machine reacts to particular inputs, must be controlled. The
distributed level applies when an AI system is composed of many instances that solve different
tasks and join them into one final decision; every instance must take the right decision in order
to reach the right final decision.

Software level: The software level can be controlled using the classical process of integrating
quality and security into the SDLC. Some tasks, such as vulnerability scanning and patching,
static code analysis or code inspection, may themselves be implemented using an AI system that
can do them better than a human.
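As a concrete picture of the software level, here is a minimal static-analysis pass that flags source lines matching patterns often associated with vulnerabilities. The rule set is a hypothetical illustration; real tools (and the AI-assisted ones mentioned above) are far more sophisticated.

```python
# Minimal static-analysis sketch for the "software level":
# flag source lines matching patterns commonly linked to
# vulnerabilities. The rule list is an invented illustration.
import re

RULES = {
    r"\beval\(": "use of eval()",
    r"\bos\.system\(": "shell command execution",
    r"password\s*=\s*['\"]": "hard-coded credential",
}

def scan(source):
    """Return (line_number, message) for every rule hit in the source."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), 1):
        for pattern, message in RULES.items():
            if re.search(pattern, line):
                findings.append((lineno, message))
    return findings

code = 'user = "bob"\npassword = "hunter2"\nresult = eval(user_input)\n'
for lineno, message in scan(code):
    print(lineno, message)
```

Running this on the three-line sample flags the hard-coded credential on line 2 and the `eval()` call on line 3.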

Learning level

To be an AI system, the machine must learn how to do a task with an ML algorithm, so the ML
algorithm of the machine must be validated. To control this level, the machine must receive a lot
of different data as input, and how the machine reacts, and what its error is, must be checked.
Regression testing and security testing are needed for this. As discussed at the Microsoft summit,
the problem today is that those tests often fail, because there are many adversarial situations an
AI system encounters during its lifecycle. For example, it is impossible for a self-driving car to
know all the possible images it may encounter in the world, so in rare situations it may fail to
recognise some patterns and behave wrongly in ways that may be fatal.
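Validating the learning level — feed the system many inputs, measure its error, and fail if the error regresses — can be sketched as a simple regression test. The "model" here is a stand-in stub, and the data and threshold are assumptions purely for illustration.

```python
# Sketch of "learning level" validation: run a trained model over a
# held-out labelled set and fail if its error rate exceeds a threshold.
# The model is a stand-in stub; a real one would be a trained network.

def model(x):
    # Stand-in classifier: flags inputs at or above a learned cut-off.
    return "stop" if x >= 0.5 else "go"

def error_rate(model, labelled_inputs):
    """Fraction of labelled inputs the model gets wrong."""
    wrong = sum(1 for x, expected in labelled_inputs if model(x) != expected)
    return wrong / len(labelled_inputs)

# Held-out test set, including edge cases near the decision boundary.
test_set = [(0.9, "stop"), (0.1, "go"), (0.51, "stop"), (0.49, "go"), (0.5, "stop")]
rate = error_rate(model, test_set)
assert rate <= 0.2, f"model regressed: error rate {rate:.0%}"
print(f"error rate: {rate:.0%}")
```

The point of the lifecycle problem described above is exactly that no finite `test_set` covers every rare situation the deployed system will meet.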

Distributed level

This level concerns the control of the local instances of a distributed AI system. It consists of
validating all the results that come from the different instances of the system, which requires
checking every instance at the previous two levels.
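Joining the local decisions of many instances into one final decision is often done by some form of voting. Below is a minimal sketch assuming a simple majority rule with a disagreement flag; the quorum parameter and decision labels are invented for illustration.

```python
# Sketch of the "distributed level": several instances each produce a
# local decision; the system joins them by majority vote and flags
# insufficient agreement for review. Illustrative assumption only.
from collections import Counter

def joint_decision(local_decisions, quorum=0.5):
    """Return (decision, agreement); 'undecided' if no strict majority."""
    counts = Counter(local_decisions)
    decision, votes = counts.most_common(1)[0]
    agreement = votes / len(local_decisions)
    if agreement <= quorum:
        return ("undecided", agreement)
    return (decision, agreement)

print(joint_decision(["brake", "brake", "accelerate"]))  # majority -> brake
print(joint_decision(["brake", "accelerate"]))           # tie -> undecided
```

Note that a correct final decision still requires each instance to be validated at the software and learning levels first: a majority of wrongly trained instances would vote confidently for the wrong answer.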

Infosecurity Magazine, January 2017. "How AI is the Future of Cybersecurity", Ryan Kh

The Future of Cybersecurity

“We’ve never faced more varied or far-reaching cyber threats than we have today. What’s worse
is that these attacks are becoming more common, more sophisticated, and more impactful. When
you add a dwindling cybersecurity workforce into the mix, the outlook isn’t great. However, AI
systems can help address some of those problems and ultimately give your business an advantage
when facing a cyber-attack.”

Cybersecurity solutions that rely on AI can use existing data to handle new generations of
malware and cybersecurity attacks.

Methodology
The incorporation of Artificial Intelligence into security systems can help reduce the ever-
increasing cyber security threats being faced by global businesses. Across industries,
applications using machine learning and artificial intelligence (AI) are being used more and
more broadly as data collection, storage capabilities and computing power increase. The huge
amounts of data involved are difficult for humans to handle in real time. With the help of
machine learning and AI, that data can be reduced down in milliseconds, as a result of which an
enterprise can quickly identify and recover from a threat.

Considerations:

The spread of AI systems and the growing precision of ML algorithms are expanding the attack
landscape. The software being developed is more complex than classical software, and so is
often vulnerable (vulnerabilities are discovered daily). These systems also use a lot of data for
their ML algorithms, and this data must always be controlled: it acts as a sort of "professor" for
the machine, and if it trains the machine in the wrong way, the system will not complete its task
correctly or may be left vulnerable. Other security issues come from the dual use of AI systems:
an AI system can be used for offence, and the spread of such systems may enable anyone to
create an intelligent weapon system that can kill a person while keeping the attacker anonymous.
Security issues are therefore multiple, and it is difficult to create a 100% secure AI system.

Mitigating security issues

Four high-level recommendations can be identified to mitigate security issues. Starting from
those four recommendations, the report identifies four priority research areas in which to invest
to increase security in AI. These areas are:

1) The application of cybersecurity in AI to discover vulnerabilities and improve knowledge
(creation of red teams, formal verification of the code, creation of a public log when a
vulnerability is discovered, forecasting security-relevant capabilities, creation and distribution of
security tools, implementation and control of the hardware).

2) Exploring different openness models (pre-publication risk assessment in technical areas of
special concern, creation of central access licensing modules, sharing regimes that favour safety
and security, sharing norms applied to other AI models).

3) Promoting a culture of responsibility (education for scientists and engineers, creation of
ethical statements and standards, whistleblowing measures, nuanced narratives).

4) Developing technological and policy solutions (privacy protection, coordinated use of AI for
public-good security, monitoring of AI-relevant resources).

Compliance classes

Compliance classes could be assigned to AI systems in the same way as to IoT systems, to create
a compliance plan. AI systems, like IoT systems, may involve different classes of security
related to the CIA principles (Confidentiality, Integrity, Availability). These classes identify how
dangerous an AI system could be if compromised.

“Class 0: where compromise to the data generated or level of control provided is likely to result
in little discernible impact on an individual or organization.

Class 1: where compromise to the data generated or level of control provided is likely to result in
no more than limited impact on an individual or organization.

Class 2: in addition to class 1, the device is designed to resist attacks on availability that would
have significant impact on an individual or organization, or impact many individuals, for
example by limiting operations of an infrastructure to which it is connected.

Class 3: in addition to class 2, the device is designed to protect sensitive data including sensitive
personal data.

Class 4: in addition to class 3, where the data generated or level of control provided or in the
event of a security breach have the potential to affect critical infrastructure or cause personal
injury.

Where the definitions of the levels of integrity, availability and confidentiality are as follows:

Integrity:

o Basic - devices resist low level threat sources that have very little capability and priority

o Medium - devices resist medium level threat sources that have from very little, focused
capability, through to researchers with significant capability

o High - devices resist substantial level threat sources

Availability:

o Basic - devices whose lack of availability would cause minor disruption

o Medium – devices whose lack of availability would have limited impact on an individual or
organization

o High – devices whose lack of availability would have significant impact to an individual or
organization, or impacts many individuals

Confidentiality:

o Basic – devices processing public information

o Medium – devices processing sensitive information, including Personally Identifiable
Information, whose compromise would have limited impact on an individual or organization

o High - devices processing very sensitive information, including sensitive personal data whose
compromise would have significant impact on an individual or organization.”
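The classes quoted above can be read as a mapping from a device's required integrity, availability and confidentiality levels to an overall compliance class. The sketch below encodes one such mapping as a simplifying assumption of my own; it is not the framework's official assignment rule, and class 4 (critical infrastructure, personal injury) needs context beyond the CIA levels, so it is not reachable here.

```python
# Hedged sketch: map basic/medium/high CIA levels to a compliance
# class, loosely following the framework quoted above. The rules are
# a simplified assumption, not the official IoTSF scheme; class 4
# depends on context (critical infrastructure, injury) not modelled.

LEVELS = {"basic": 0, "medium": 1, "high": 2}

def compliance_class(integrity, availability, confidentiality):
    i, a, c = (LEVELS[x] for x in (integrity, availability, confidentiality))
    if c == 2:              # sensitive personal data -> class 3
        return 3
    if a == 2:              # significant availability impact -> class 2
        return 2
    if max(i, a, c) >= 1:   # more than negligible impact -> class 1
        return 1
    return 0                # little discernible impact -> class 0

print(compliance_class("basic", "basic", "basic"))   # 0
print(compliance_class("medium", "basic", "basic"))  # 1
print(compliance_class("basic", "high", "basic"))    # 2
print(compliance_class("basic", "basic", "high"))    # 3
```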

Schedule and deliverables

Source: Algorithmia's "2020 State of Enterprise ML". This question, about how long it takes to
deploy an ML model into production, was only asked of the subset of respondents at companies
that have an ML model in production.

Source: Vilmos Müller's analysis of the 2018 Kaggle ML & DS Survey.

The chart represents answers to this question: “During a data science project, approximately
what proportion of your time is devoted to the following activities?”

Source: Alegion and Dimensional Research's "What data scientists tell us about AI model
training today." A third of AI/ML projects stall at the proof-of-concept phase, while only about a
fifth of AI model training projects are not delayed.

“Continuous Delivery for Machine Learning End-to-end Process.”

From the perspective of the project owner, projects generally go through the following phases:

1. Pre-initiation (I have an idea)
2. IT Initiation (I need some techs)
3. Definition and Planning (Let's be clear on what we're doing here.)
4. Launch and Execution of the Project (Let's get this thing going, I have the money!)
5. Performance and Controls on the Project (Are we making progress?)
6. System Launch (Launch business, make money)
7. Warranty Period (It's not supposed to do that, fix it.)
8. Project Close Out (I guess this is what I bought?)

Results and Discussions
AI can also be used to detect threats and other potentially malicious activity. Conventional
systems simply cannot keep up with the sheer volume of malware created every month, so this is
a natural area for AI to step in and address the problem. Cyber security companies are teaching
AI systems to detect viruses and malware using complex algorithms, so that the AI can run
pattern recognition over software. AI systems can be trained to identify even the smallest
behaviours of ransomware and malware attacks before they enter the system, and then isolate
them from that system. They can also use predictive functions that surpass the speed of
traditional approaches.
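Identifying "the smallest behaviours" of an attack can be caricatured as behaviour-based flagging: watch a process's file events and isolate it once they start to look ransomware-like. The event format, the file extensions and the threshold below are all invented for illustration; real detectors learn such indicators rather than hard-coding them.

```python
# Toy behaviour-based detection sketch: flag a process whose recent
# file events look ransomware-like (mass renames to an unusual
# extension). Extensions and threshold are illustrative assumptions.

def looks_like_ransomware(file_events, threshold=5):
    """file_events: list of (action, filename) tuples for one process."""
    suspicious = sum(
        1 for action, name in file_events
        if action == "rename" and name.endswith((".locked", ".encrypted"))
    )
    return suspicious >= threshold

events = [("rename", f"doc{i}.txt.encrypted") for i in range(6)]
print(looks_like_ransomware(events))               # mass renames -> True
print(looks_like_ransomware([("read", "a.txt")]))  # normal activity -> False
```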

Conclusion
This paper covers how an AI system works and states the security-relevant properties of AI
functions. There is currently a big debate raging about whether Artificial Intelligence (AI) is
good or bad in every aspect in which it is used. In this era, when technology has come so far,
with tremendous advancement in the internet of things and connected devices, cyber security
experts are facing a lot of issues and need all the support they can get to help them prevent
cyber-attacks and security breaches. Organisations being more connected than ever leads to
heavy traffic, an increase in security attack vectors, breaches in security and many more threats
in the cyber area that are becoming more and more difficult for humans alone to handle.
Developing a software system with standard logic that effectively defends against growing
cyberattacks is, however, troublesome. On the other hand, the problems of cyber security can be
efficiently addressed using strategies involving AI.

Reference
• Albus, J. S. (2002). "4-D/RCS: A Reference Model Architecture for Intelligent Unmanned
Ground Vehicles" (PDF). In Gerhart, G.; Gunderson, R.; Shoemaker, C. Proceedings of the SPIE
AeroSense Session on Unmanned Ground Vehicle Technology. 3693. pp. 11–20. Archived from
the original (PDF) on 25 July 2004.

• Alghamdi, Yasser. (2016). Negative Effects of Technology on Children of Today.
10.13140/RG.2.2.35724.62089.

• Buchanan, Bruce G. (2005). "A (Very) Brief History of Artificial Intelligence" (PDF). AI
Magazine: 53–60. Archived from the original (PDF) on 26 September 2007.

• Castrounis, A. (2017). Artificial Intelligence, Deep Learning, and Neural Networks, Explained.
[online] Kdnuggets.com. Available at: http://www.kdnuggets.com/2016/10/artificial-intelligence-
deep-learning-neural-networks-explained.html [Accessed 28 Sep. 2017].

• Cilli, Claudio & Magnanini, Giulio. (2018). On the security of the AI systems. E. Tyugu,
Algorithms and Architectures of Artificial Intelligence. IOS Press. 2007.

• Hecht, L., 2020. Add It Up: How Long Does A Machine Learning Deployment Take? - The
New Stack. [online] The New Stack. Available at:
<https://thenewstack.io/add-it-up-how-long-does-a-machine-learning-deployment-take/>
[Accessed 25 April 2020].

• Infosecurity Magazine, January 2017. "How AI is the Future of Cybersecurity", Ryan Kh

• IoT Security Foundation, 2016. "IoT Security Compliance Framework"

• Laurence, A., 2020. The Impact Of Artificial Intelligence On Cyber Security. [online] CPO
Magazine. Available at: <https://www.cpomagazine.com/cyber-security/the-impact-of-artificial-
intelligence-on-cyber-security/> [Accessed 25 April 2020].

• martinfowler.com. 2020. Continuous Delivery For Machine Learning. [online] Available at:
<https://martinfowler.com/articles/cd4ml.html> [Accessed 25 April 2020].

• Medium. 2020. Essential Data Skills — Supply And Demand On The Job Market. [online]
Available at: <https://towardsdatascience.com/essential-data-skills-supply-and-demand-on-the-
job-market-4f7dffa23b70> [Accessed 25 April 2020].

• Naba Suroor and Syed Imtiyaz Hassan, "Identifying the factors of modern day stress using
machine learning", International Journal of Engineering Science and Technology, vol. 9, Issue 4,
April 2017, pp. 229-234, e-ISSN: 0975–5462, p-ISSN: 2278–9510.

• Perez Veiga, Alberto. (2018). Applications of AI to Network Security.
10.13140/RG.2.2.29373.56803.

• Stlouisintegration.com. 2020. How To Run An Artificial Intelligence Project | Saint Louis
Integration. [online] Available at: <http://stlouisintegration.com/content/how-run-artificial-
intelligence-project> [Accessed 25 April 2020].

• The Official NVIDIA Blog. (2017). The Difference Between AI, Machine Learning, and Deep
Learning? | NVIDIA Blog. [online] Available at:
https://blogs.nvidia.com/blog/2016/07/29/whats-difference-artificial-intelligence-machine-
learning-deep-learning-ai/ [Accessed 28 Sep. 2017].

• University of Cambridge, Future of Humanity Institute, University of Oxford, Centre for the
Study of Existential Risk, Center for a New American Security, Electronic Frontier Foundation,
OpenAI, February 2018. "The Malicious Use of Artificial Intelligence: Forecasting, Prevention
and Mitigation"

• 2020. [online] Available at: <https://www.quora.com/Why-are-Stephen-Hawking-and-Elon-
Musk-so-concerned-about-the-dangers-of-AI-Are-they-justified> [Accessed 25 April 2020].

• 2020. [online] Available at: <https://www.cser.ac.uk/research/risks-from-artificial-
intelligence/> [Accessed 25 April 2020].

BACHELOR OF INFORMATION TECHNOLOGY (HONS)
Semester 4
Assignment: Marking Scheme
Course: EC3246 (Research Methodology)
Student Name: _Jackey Chhetri_
Student Id: __00018252______
Criterion                                        Allocated Marks   Obtained Marks
Q.1. Selection of research papers                       5
Q.1. Research paper elements                           10
Q.1. Gap of each element                                5
Q.2. Paraphrasing of each research paper               10
Q.2. Critical analysis of each research paper          10
Q.2. Composition                                       10
Q.3. Research problem, questions, objectives           10
Q.3. Literature review                                 10
Q.3. Methodology                                       10
Q.3. Referencing                                       10
Q.3. Overall writing                                   10
Total                                                 100

