
Textbook on

COMPUTER SYSTEM SECURITY


Basic Concepts and Applications

Rakesh Kumar Yadav


Ph.D. (IIT), M. Tech. & B.Tech
Director
KCC Institute of Technology & Management, Greater Noida

Dolly Sharma
Ph.D., M. Tech. & B.Tech
Associate Professor
Amity School of Engineering and Technology, Amity University, Noida

Vanshika Rastogi
M.Tech & B.Tech
Assistant Professor
KCC Institute of Technology & Management, Greater Noida

Prabhakar Sharma
M.Tech (CDAC-Mohali) & B. Tech
Assistant Professor
ITS Engineering College, Greater Noida
PREFACE
We are extremely happy to come out with this book entitled “Computer System
Security – Basic Concepts and Applications”. This book is for B.Tech (CSE/IT)
and MCA students of Dr. A.P.J. Abdul Kalam Technical University, Uttar Pradesh,
Lucknow. The book can also be used by Engineering and MCA students of
other universities. Postgraduates and researchers will benefit equally.
This book will help them to comprehend the basic concepts of Cyber Security.
Technology is a set of tools to improve the productivity, quality, and joy that people
get from their work. Security is an essential part of today’s cyber world. Everything,
from chat to shopping, is on the internet today, and thus whatever we do online
needs to be protected and made secure. In this book, we have tried to emphasise
the importance of security, the various types of attacks, threats, vulnerabilities and
security at different levels. The topics have been presented in a form that is easy to
read and understand.
This book addresses the following questions related to computer
security:
• What are the different security architectures?
• What are the marketplaces for vulnerabilities?
• How can we defend against Control Hijacking?
• What is the Confinement principle?
• What are the various Access Control mechanisms?
• How can we have a secure web environment?
• What is Cryptography?
• How do we provide security at various levels of networking?
The book is divided into small chapters to make the concepts clear. After each
chapter, an exercise has been given to apply the knowledge gained. Each
chapter contains exemplary problems and images to aid understanding.
We hope the book gives you good knowledge about the world of Computer
System Security. We have tried to make every concept easy to read and understand.
SYLLABUS
KNC-301: COMPUTER SYSTEM SECURITY

UNIT-I
Computer System Security Introduction:
Introduction, What is computer security and what to learn?, Sample Attacks, The Marketplace
for vulnerabilities, Error 404 Hacking digital India part 1 chase.
Hijacking & Defense:
Control Hijacking, More Control Hijacking attacks integer overflow, More Control Hijacking
attacks format string vulnerabilities, Defense against Control Hijacking - Platform Defenses,
Defense against Control Hijacking - Run-time Defenses, Advanced Control Hijacking attacks.

UNIT-II
Confidentiality Policies:
Confinement Principle, Detour Unix user IDs process IDs and privileges, More on confinement
techniques, System call interposition, Error 404 digital Hacking in India part 2 chase, VM
based isolation, Confinement principle, Software fault isolation, Rootkits, Intrusion Detection
Systems

UNIT-III
Secure architecture principles, isolation and least privilege:
Access Control Concepts, Unix and windows access control summary, Other issues in access
control, Introduction to browser isolation.
Web security landscape:
Web security definitions goals and threat models, HTTP content rendering. Browser isolation.
Security interface, Cookies frames and frame busting, Major web server threats, Cross site
request forgery, Cross site scripting, Defenses and protections against XSS, Finding
vulnerabilities, Secure development.

UNIT-IV
Basic cryptography:
Public key cryptography, RSA public key crypto, Digital signature Hash functions, Public
key distribution, Real world protocols, Basic terminologies, Email security certificates,
Transport Layer security TLS, IP security, DNS security.

UNIT-V
Internet Infrastructure:
Basic security problems, Routing security, DNS revisited, Summary of weaknesses of internet
security, Link layer connectivity and TCP IP connectivity, Packet filtering firewall, Intrusion
detection.
SYLLABUS
KNC-301: COMPUTER SYSTEM SECURITY

CHAPTER-I
Computer System Security: Introduction, Computer Security and Architecture, The OSI
Architecture, Security Attacks, The Marketplace for Vulnerabilities, Common Computer
Security Vulnerabilities, Causes and Harms of Computer Security Vulnerabilities, A Model
for Network Security, Case Study: Hacking Digital India Part 1 Chase.

CHAPTER-II
Hijacking & Defense: Control Hijacking, Buffer Overflow Attacks, Control Hijacking Attacks
from String Vulnerabilities, Defense against Control Hijacking, Platform Defenses, Run-Time
Defenses.

CHAPTER-III
Confidentiality Policies: Confinement Principle, Detour Unix User IDs Process IDs and
Privileges, Basic Concepts of UNIX User IDs, Basic Permission Bits on Files and Directories,
UNIX-Access Control, System Call Interposition, Initial Implementation-JANUS, ptrace,
systrace, VM Based Isolation, Software Fault Isolation, Root Kits, Intrusion Detection Systems.

CHAPTER-IV
Secure Architecture Principles, Isolation and Least Privilege: Access Control Concepts, Discretionary
Access Control (DAC), Mandatory Access Control (MAC), Role-Based Access Control
(RBAC), Issues in Access Control, Browser Isolation, How does it work, Advantages, CASE
STUDY- Access Control in Unix and Windows.

CHAPTER-V
Web security Landscape: Overview, HTTP, Features of HTTP, Architecture, Cookies, Major
Web Server Threats, Cross Site Request Forgery, Cross Site Scripting, Defenses and
Protections against XSS, Finding Vulnerabilities, Secure Development.

CHAPTER-VI
Basic Cryptography: Public key Cryptography, RSA Public Key Cryptography, Generation of
RSA Key Pair, RSA Encryption, RSA Decryption, Digital Signature, Model of Digital Signature,
Encryption with Digital Signature, Importance of Digital Signature, Public Key Distribution, Email
Security Certificates, How does it work, Advantages, Transport Layer Security, TLS Advantages,
TLS Disadvantages, Working of TLS, IP Security, Components of IP Security, Working of IP
Security, Advantages, DNS Security, Types of Attacks, Measures against DNS Attacks.

CHAPTER-VII
Internet Infrastructure: Basic Security Problems, Routing Security, Weakness of Internet
Security, Common Security Problems, Means of Protection, Firewalls, Types of Firewalls,
How Firewalls work?
ACKNOWLEDGEMENTS
We are obliged to Almighty for everything that we have. We wish to express our profound
thanks to all those who helped in making this book a reality.
We are greatly thankful to Prof. J.P. Saini, Vice Chancellor (NSUT-Delhi), Prof. KV Arya,
Professor (IIITM-Gwalior), Prof. DS Yadav, Professor (IET-Lucknow), Prof. Raghuraj Singh,
Professor (HBTU-Kanpur), Prof. Vinay Rishiwal, Professor (Rohilkhand University, Bareilly),
Prof. P.K. Bharti, Vice Chancellor (SVU-Gajraula), Prof. K.P. Yadav, Vice Chancellor (Sangam
University-Bhilwara) and Prof. Deepak Garg, Professor (Bennett University-Greater Noida)
for their guidance while writing this book.
We would like to give our sincere thanks to Mr. Deepak Gupta (Chairman of KCC Institutes)
for his support.
We are thankful to Dr. R.K. Jain and all other staff members of JBC Press for making this
book a great reality. Best wishes to our students, enjoy learning “Computer System Security”.
We are Thankful to
Prof. R S Nirjar EX-Chairman, AICTE, Delhi
Prof. R P Yadav EX-VC, RTU, Kota
Prof. Brahmjit Singh Professor, NIT, Kurukshetra
Prof. G S Yadava EX-Pro VC, Lingaya University, Faridabad
Prof. Sushrut Das Associate Professor, IIT, Dhanbad
Prof. Sudhir Kumar Director, Greater Noida College of Technology, Greater Noida
Prof. Ashish Gupta Professor, ITS Engg. College, Greater Noida
Prof. Sunil Kumar Director, IEC, Greater Noida
Prof. R L Yadav Professor, Galgotias College of Engineering Technology, Greater Noida
Prof. Pankaj Jha Associate Professor, IIMT, Greater Noida
Prof. C S Yadav Professor, NIET, Greater Noida
Prof. Kamlesh Rana Director, Accurate ITM, Greater Noida
Prof. R K Raghuvanshi Director, JIIMS, Greater Noida
Prof. Ghazala Naaz Professor, NIET, Greater Noida
Prof. Parmanand Dean, Sharda University
Prof. Rohit Garg Director, MIT, Moradabad
Prof. Ajay Kumar Director, Graphic Era University, Dehradun
Prof. Sanjay Singh Professor, ABES, Ghaziabad
Prof. Sapna Katiyar Professor, ABIT, Ghaziabad
Prof. Dharmendra Kumar Asst. Prof., Delhi Technical Campus, Greater Noida
Prof. Shivani Kaul Asst. Prof., KCC ITM, Greater Noida
Prof. Huma Khan Asst. Prof., KCC ITM, Greater Noida
Prof. Srinivas Aruonda Asst. Prof., KCC ITM, Greater Noida
Prof. Monu Singh Asst. Prof., KCC ITM, Greater Noida
Prof. Seema Srivastava Asst. Prof., KCC ITM, Greater Noida
Prof. Ravi B Singh Asst. Prof., KCC ITM, Greater Noida
Prof. Abhishek Swami Associate Professor, SGT University, Gurugram
Prof. Dilip Yadav Associate Professor, Galgotias University, Greater Noida

And all others who taught us, suggested us and helped us directly or indirectly.
CONTENTS

CHAPTER 1
COMPUTER SYSTEM SECURITY 1–12
1.1 INTRODUCTION 1
1.2 COMPUTER SECURITY AND ARCHITECTURE 2
1.2.1 The OSI Security Architecture 2
1.2.2 Threat 2
1.2.3 Attack 3
1.2.4 Security Attacks, Services and Mechanisms 3
1.3 SECURITY ATTACKS 4
1.3.1 Passive attack 4
1.3.2 Active attacks 6
1.4 THE MARKETPLACE FOR VULNERABILITIES 8
1.4.1 Common Computer Security Vulnerabilities 8
1.4.2 Causes and Harms of Computer Security Vulnerabilities 8
1.5 A MODEL FOR NETWORK SECURITY 9
1.6 CASE STUDY: Hacking Digital India Part 1 Chase 10
Exercise 11

CHAPTER 2
HIJACKING AND DEFENSE 13–20
2.1 CONTROL HIJACKING 13
2.1.1 Buffer Overflow attacks 13
2.1.2 Control Hijacking attack format string vulnerabilities 14
2.2 DEFENSE AGAINST CONTROL HIJACKING 15
2.2.1 Platform Defenses 16
2.2.2 Run-time Defenses 17
Exercise 19

CHAPTER 3
CONFIDENTIALITY POLICIES 21–34
3.1 CONFINEMENT PRINCIPLE 22
3.2 DETOUR OF UNIX USER IDs, PROCESS IDs AND PRIVILEGES 22
3.2.1 Basic concepts of UNIX IDs 23
3.2.2 Basic Permission Bits on Files and Directories 23
3.2.3 UNIX-Access Control 23
3.3 SYSTEM CALL INTERPOSITION 26
3.3.1 Initial implementation-JANUS 27
3.3.2 Ptrace 29
3.3.3 Systrace 29
3.4 VIRTUAL MACHINE BASED ISOLATION 30
3.5 SOFTWARE BASED FAULT ISOLATION 31
3.6 ROOTKITS 31
3.7 INTRUSION DETECTION SYSTEM 32
Exercise 33

CHAPTER 4
SECURE ARCHITECTURE (PRINCIPLES, ISOLATION AND LEAST PRIVILEGE) 35–42
4.1 ACCESS CONTROL CONCEPTS 35
4.1.1 Discretionary Access Control (DAC) 35
4.1.2 Mandatory Access Control (MAC) 36
4.1.3 Role-Based Access Control (RBAC) 36
4.2 ISSUES IN ACCESS CONTROL 36
4.2.1 Appropriate role-based access 36
4.2.2 Poor password management 37
4.2.3 Poor user education 37
4.3 BROWSER ISOLATION 37
4.3.1 How does it work? 37
4.3.2 Advantages 38
4.4 CASE STUDY - ACCESS CONTROL IN UNIX AND WINDOWS 39
Exercise 42

CHAPTER 5
WEB SECURITY LANDSCAPE 43–56
5.1 OVERVIEW 43
5.2 HTTP 44

5.2.1 Features of HTTP 44


5.2.2 Architecture 45
5.3 COOKIES 45
5.4 MAJOR WEB SERVER THREATS 47
5.5 CROSS SITE REQUEST FORGERY 49
5.6 CROSS SITE SCRIPTING 51
5.7 DEFENSES AND PROTECTION AGAINST XSS 52
5.7.1 Escaping 53
5.7.2 Validating Input 53
5.7.3 Sanitizing 54
5.8 FINDING VULNERABILITY 54
5.9 SECURE DEVELOPMENT 55
Exercise 56

CHAPTER 6
BASIC CRYPTOGRAPHY 57–74
6.1 PUBLIC KEY CRYPTOGRAPHY 57
6.1.1 Components of Public Key Encryption 58
6.1.2 Weakness of the Public Key Encryption 59
6.1.3 Applications 59
6.2 RSA PUBLIC KEY CRYPTOGRAPHY 59
6.2.1 Generation of RSA Key Pair 60
6.2.2 Generate the RSA modulus(n) 60
6.2.3 Find Derived Number(e) 60
6.2.4 Form the public key 61
6.2.5 Generate the private key 61
6.2.6 RSA Encryption 61
6.2.7 RSA Decryption 61
6.3 DIGITAL SIGNATURE 62
6.3.1 Model of Digital Signature 62
6.3.2 Encryption with Digital Signature 63
6.3.3 Importance of Digital Signature 64
6.4 PUBLIC KEY DISTRIBUTION 65
6.4.1 Public Announcement of Public Keys 65
6.4.2 Publicly Available Directory 65
6.4.3 Public-Key Authority 66
6.4.4 Public Key Certificates 66

6.5 E-MAIL SECURITY CERTIFICATES 67


6.5.1 How does it work? 67
6.5.2 Advantages 67
6.6 TRANSPORT LAYER SECURITY 68
6.6.1 TLS advantages 68
6.6.2 TLS disadvantages 68
6.6.3 Working of TLS 69
6.7 IP SECURITY 69
6.7.1 Components of IP Security 70
6.7.2 Working of IP Security 71
6.7.3 Advantages 71
6.8 DNS SECURITY 72
6.8.1 Types of Attacks 72
6.8.2 Measures against DNS attacks 73
Exercise 73

CHAPTER 7
INTERNET INFRASTRUCTURE 75–84
7.1 BASIC SECURITY PROBLEMS 75
7.1.1 Code Injection 75
7.1.2 Data Breach 76
7.1.3 Malware Infection 76
7.1.4 Distributed Denial Service of attack 76
7.2 ROUTING SECURITY 76
7.3 WEAKNESS OF INTERNET SECURITY 80
7.3.1 Common Security Problems 80
7.3.2 Means of protection 81
7.4 FIREWALLS 82
7.4.1 Types of Firewalls 82
7.4.2 How Firewalls work? 83
Exercise 84
Appendix 85
Model Question Paper 89
Glossary 91
References 93
1
Computer System Security
Learning Objective
• Computer Security
• CIA Triangle
• OSI Security Architecture
• Security Attacks - Categories
• Active and Passive Attacks
• Common computer system vulnerabilities
• Causes and harm of vulnerabilities
• A network security model
• Case Study: Hacking Digital India

We live in a digital era which understands that our private information is more
vulnerable than ever before. We all live in a world which is networked together, from
internet banking to government infrastructure, where data is stored on computers
and other devices. A portion of that data can be sensitive information, whether that is
intellectual property, financial data, personal information, or other types of data for
which unauthorized access or exposure could have negative consequences.

1.1 INTRODUCTION
Until the media began publicizing data security, computer security was largely confined
to physical security. Traditionally, computer facilities have been protected for
the following reasons:
i. To avoid theft or damage of the hardware.
ii. To avoid disruption of service.
iii. To avoid theft of the information.
Computer data often travels from one computer to another, leaving the safety of its
protected physical surroundings. Once the data is out of hand, people with bad
intentions could modify or forge your data, either for amusement or for their own
benefit. Computer security is basically the protection of computer systems and
information from harm, theft, and unauthorized use. It is the process of preventing
and detecting unauthorized use of your computer system. It is the protection offered
to an automated information system in order to attain the applicable objectives.
Computer security is the protection of computing
systems and the data that they store or access.

1.2 COMPUTER SECURITY AND ARCHITECTURE


When we talk about “computer security”, we mean that we are addressing three
very important aspects of any computer-related system (Fig. 1.1):
• Confidentiality: controls who may access information (by reading, viewing, printing, knowing of its existence, etc.).
• Integrity: controls how information may be modified (modification includes writing, changing, changing status, deleting, and creating).
• Availability: ensures that assets can be used when needed; its violation is denial of service.
[Figure: a triangle with Confidentiality, Integrity and Availability at its corners and Information Security at its centre.]

Fig. 1.1 The CIA triangle

1.2.1 The OSI Security Architecture


In any organization, the manager is responsible for the security measures and thus
requires some systematic way of defining the requirements of security and
characterizing the approaches to satisfy those requirements. The use of LANs, WANs
and centralized data processing environments makes this a tough task. The OSI
architecture helps the managers with a way of organizing the task of providing
security. The Open Systems Interconnection (OSI) security architecture was defined
by the ITU-T (International Telecommunication Union - Telecommunication
Standardization Sector). The OSI security architecture was developed in the context of the OSI protocol
architecture.
The OSI Security Architecture is a framework that provides a systematic way of
defining the requirements for security and characterizing the approaches to satisfying
those requirements. However, for our purposes in this chapter, an understanding
of the OSI protocol architecture is not required. For our purposes, the OSI security
architecture provides a useful, if abstract, overview of many of the concepts. The
OSI security architecture focuses on security attacks, mechanisms, and services.
These can be defined briefly as follows:

1.2.2 Threat
A computer system threat is anything that leads to loss or corruption of data or
physical damage to the hardware and/or infrastructure. It has a potential for violation
of security, which exists when there is a circumstance, capability, action, or event
that could breach security and cause harm. That is, a threat is a possible danger
that might exploit a vulnerability. Threats can put individuals’ computer systems and
business computers at risk, so vulnerabilities have to be fixed so that attackers
cannot infiltrate the system and cause damage.
Threats can include everything from viruses, trojans, back doors to outright attacks
from hackers. Often, the term blended threat is more accurate, as the majority of
threats involve multiple exploits. For example, a hacker might use a phishing attack
to gain information about a network and break into a network.

1.2.3 Attack
An assault on system security that derives from an intelligent threat; that is, an
intelligent act that is a deliberate attempt (especially in the sense of a method or
technique) to evade security services and violate the security policy of a system.

1.2.4 Security Attacks, Services and Mechanisms


To assess the security needs of an organization effectively, the manager responsible
for security needs some systematic way of defining the requirements for security
and characterization of approaches to satisfy those requirements. One approach is
to consider three aspects of information security:
Security attack – Any action that compromises the security of information owned
by an organization. It includes exploitation of computer systems and networks. It
uses malicious code to alter computer code, logic or data and lead to cybercrimes,
such as information and identity theft.
Security mechanism – A mechanism that is designed to detect, prevent or recover
from a security attack. The most widely used security mechanisms are based on
cryptographic techniques. Encryption or encryption-like transformations of
information are the most common means of providing security. Some of the
mechanisms are:
i. Encipherment
ii. Digital Signature
iii. Access Control
Security service – A service that enhances the security of the data processing
systems and the information transfers of an organization. The services are intended
to counter security attacks and they make use of one or more security mechanisms
to provide the service.
The classification of security services is as follows:
i. Confidentiality: Ensures that the information in a computer system and
transmitted information are accessible only for reading by authorized parties.
e.g., printing, displaying and other forms of disclosure.
ii. Authentication: Ensures that the origin of a message or electronic document
is correctly identified, with an assurance that the identity is not false.
iii. Integrity: Ensures that only authorized parties are able to modify computer
system assets and transmitted information. Modification includes writing,
changing status, deleting, creating, and delaying or replaying of transmitted
messages.
iv. Access control: Requires that access to information resources may be
controlled by or for the target system.
v. Availability: Requires that computer system assets be available to authorized
parties when needed.

1.3 SECURITY ATTACKS


There are four general categories of attack which are listed below:
i. Interruption
An asset of the system is destroyed or becomes unavailable or unusable. This is an
attack on availability.
e.g., destruction of a piece of hardware, cutting of a communication line or disabling
of the file management system.
ii. Interception
An unauthorized party gains access to an asset. This is an attack on confidentiality.
The unauthorized party could be a person, a program or a computer, e.g., wire tapping
to capture data in the network or illicit copying of files.
iii. Modification
An unauthorized party not only gains access to but tampers with an asset. This is
an attack on integrity.
e.g., changing values in data file, altering a program, modifying the contents of
messages being transmitted in a network.
iv. Fabrication
An unauthorized party inserts counterfeit objects into the system. This is an attack
on authenticity.
e.g., insertion of spurious message in a network or addition of records to a file.
A useful categorization of these attacks is in terms of
i. Passive attacks
ii. Active attacks

1.3.1 Passive attack


Passive attacks are in the nature of eavesdropping on, or monitoring of,
transmissions. The goal of the opponent is to obtain information that is being
transmitted. Passive attacks are of two types:


i. Release of message contents: A telephone conversation, an e-mail message
and a transferred file may contain sensitive or confidential information. We
would like to prevent the opponent from learning the contents of these
transmissions. The process is shown in Fig. 1.2.

[Figure: Bob reads the content of a message that Lily sends to John over the Internet.]

Fig. 1.2 Release of Message Content


ii. Traffic analysis: If we had encryption protection in place, an opponent might
still be able to observe the pattern of the message. The opponent could
determine the location and identity of communication hosts and could observe
the frequency and length of messages being exchanged, as shown in Fig 1.3.
This information might be useful in guessing the nature of communication
that was taking place.

[Figure: Bob observes the pattern of messages exchanged between Lily and John over the Internet.]

Fig. 1.3 Traffic Analysis



Passive attacks are very difficult to detect because they do not involve any alteration
of data. However, it is feasible to prevent the success of these attacks.

1.3.2 Active attacks


These attacks involve some modification of the data stream or the creation of a
false stream.
These attacks can be classified into four categories:
i. Masquerade – One entity pretends to be a different entity, as shown in Fig. 1.4.

[Figure: Bob pretends to be Lily when communicating with John over the Internet.]

Fig. 1.4 BOB Pretending to be Lily


ii. Replay – Involves passive capture of a data unit and its subsequent retransmission
to produce an unauthorized effect, as shown in Fig. 1.5.

[Figure: Bob captures a message from Lily and resends it to John, possibly multiple times.]

Fig. 1.5 BOB Resending the Message


iii. Modification of messages – Some portion of a message is altered, or the
messages are delayed or reordered, to produce an unauthorized effect, as shown
in Fig. 1.6.

[Figure: Bob modifies Lily's message and sends the altered version on to John.]

Fig. 1.6 Modified Message Sent


iv. Denial of Service – Prevents or inhibits the normal use or management of
communication facilities (Fig. 1.7). Another form of service denial is the
disruption of an entire network, either by disabling the network or overloading
it with messages so as to degrade performance.

[Figure: Bob overloads the server with false requests, denying service to Lily and John.]

Fig. 1.7 Denial of Service


It is quite difficult to prevent active attacks absolutely, because to do so would
require physical protection of all communication facilities and paths at all times.
Instead, the goal is to detect them and to recover from any disruption or delays
caused by them.

1.4 THE MARKETPLACE FOR VULNERABILITIES


Computer system vulnerability is a flaw or weakness in a system or network that
could be exploited to cause damage, or allow an attacker to manipulate the system
in some way.
This is different from a “cyber threat” in that while a cyber threat may involve an
outside element, computer system vulnerabilities exist on the network asset
(computer) to begin with. Additionally, they are not usually the result of intentional
effort by an attacker—though cybercriminals will leverage these flaws in their
attacks, leading some to use the terms interchangeably.
It is possible for network personnel and computer users to protect computers from
vulnerabilities by regularly updating software security patches. These patches are
capable of solving flaws or security holes found in the initial release. Network
personnel and computer users should also stay informed about current vulnerabilities
in the software they use and look out for ways to protect against them.

1.4.1 Common Computer Security Vulnerabilities


The most common computer vulnerabilities include:
i. Bugs
ii. Weak passwords
iii. Software that is already infected with a virus
iv. Missing data encryption
v. OS command injection
vi. SQL injection
vii. Buffer overflow
viii. Missing authorization
ix. Use of broken algorithms
x. URL redirection to untrusted sites
xi. Path traversal
xii. Missing authentication for critical function
xiii. Unrestricted upload of dangerous file types
xiv. Dependence on untrusted inputs in a security decision
xv. Cross-site scripting and forgery
xvi. Download of codes without integrity checks

1.4.2 Causes and Harms of Computer Security Vulnerabilities


Computer system vulnerabilities exist because programmers fail to fully understand
the inner workings of their programs. While designing and programming, programmers do not
take into account all aspects of computer systems and this, in turn, causes computer
system vulnerability. Some programmers program in an unsafe and incorrect way,
which worsens computer system vulnerability.
The harm caused by computer system vulnerabilities can be seen in several aspects, for
example, the disclosure of confidential data, the widespread propagation of Internet viruses and
hacker intrusion, all of which can cause great harm to enterprises and individual users
by bringing about major economic loss. With the steady improvement of the degree
of informatization, very severe computer system vulnerabilities can become a threat to
national security in the aspects of economy, politics, and military.
Computer security vulnerabilities can harm five kinds of system security:
reliability, confidentiality, entirety, usability, and undeniableness.
i. Reliability: This refers to reducing incorrect behaviour and false alarms in the operation of a
computer system and enhancing the efficiency of the computer system.
ii. Confidentiality: This refers to protecting users’ information from disclosure
to, and acquisition by, unauthorized third parties.
iii. Entirety: This system security requires that information or programs should
not be forged, tampered, deleted or inserted deliberately in the process of
storing, operation and communication. In other words, information or programs
cannot be lost or destroyed.
iv. Usability: This ensures that users can enjoy the services offered by computers
and information networks.
v. Undeniableness: This security refers to guaranteeing that information actors
are responsible for their behavior.
Software security tools and services for transferring large data sets can help users find
architectural weaknesses and stay up to date with reliable data tracking and measuring.

1.5 A MODEL FOR NETWORK SECURITY


A message is to be transferred from one party to another across some sort of internet.
The two parties, who are the principals in this transaction, must cooperate for the
exchange to take place. A logical information channel is established by defining a
route through the internet from source to destination and by the cooperative use of
communication protocols (e.g., TCP/IP) by principals.
Using this model (Fig. 1.8) requires us to:
i. Design a suitable algorithm for the security transformation
ii. Generate the secret information (keys) used by the algorithm
iii. Develop methods to distribute and share the secret information
iv. Specify a protocol enabling the principals to use the transformation and secret
information for a security service.
[Figure: a sender and a recipient exchange messages over an information channel; each applies a
security-related transformation using secret information, a trusted third party (e.g., an arbiter or
distributor of secret information) may supply that information, and an opponent may attack the channel.]

Fig. 1.8 Model for Network Security


The general model illustrated above reflects a concern for protecting an
information system from unwanted access. Most readers are familiar with the
concerns caused by the existence of hackers, who attempt to penetrate systems that
can be accessed over a network. The hacker can be someone who, with no malign
intent, simply gets satisfaction from breaking and entering a computer system. Or,
the intruder can be a disgruntled employee who wishes to do damage, or a criminal
who seeks to exploit computer assets for financial gain. A model for network security
access is shown in Fig. 1.9.
[Figure: an opponent (human, e.g., a cracker, or software, e.g., a virus or worm) tries to reach the
computing resources, data, processes and software of an information system through an access channel;
a gatekeeper function and internal security controls guard the system.]

Fig. 1.9 A Model for Network Security Access
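To make the four steps listed above concrete, the following toy sketch (our own illustration, not part of the original model) uses a simple XOR transformation with a shared secret key. XOR is not secure and a real system would use a proper cipher such as AES, but the structure is the same: a transformation algorithm, secret information shared by both principals, and a protocol that applies the transformation at each end.
#include <stdio.h>
#include <string.h>

/* Step i: a (toy) security transformation. XOR is NOT secure; it only
   shows where a real cipher would sit in the model. */
static void transform(char *msg, size_t len, const char *key, size_t klen) {
    for (size_t i = 0; i < len; i++)
        msg[i] ^= key[i % klen];
}

int main(void) {
    /* Step ii: secret information (the key) shared by sender and recipient.
       Step iii, distributing it, is assumed to happen over a trusted channel. */
    const char key[] = "s3cr3t";
    char message[] = "transfer 100 to account 42";
    size_t len = strlen(message);

    transform(message, len, key, sizeof(key) - 1);   /* applied by the sender    */
    /* ... the message crosses the information channel ... */
    transform(message, len, key, sizeof(key) - 1);   /* applied by the recipient */

    /* Step iv: the protocol is simply "apply the transformation at both ends". */
    printf("recovered: %s\n", message);
    return 0;
}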

1.6 CASE STUDY: HACKING DIGITAL INDIA PART 1 CHASE


Security on internet is an absolute myth in today’s world. Every bit of information
which we possess is vulnerable today. Computer attacks today are usually done on
a large scale and you might become its victim unknowingly. India is a huge country
and we are slowly but steadily realizing our dreams of “Digital India”. This means
that each one of us becomes vulnerable the moment we get connected to the Internet; if
you are connected 24×7, then you are vulnerable the whole day. Today, in
our world, absolute security is virtually non-existent. What was secure yesterday is
not secure today and what is secure today will definitely not be secure tomorrow.
With every passing minute of our lives we are constantly adding data to the
internet; there is absolutely no denying that each one of us is connected to the Internet,
and today we are a part of it as well. Back in 2015 India had just 15 million smart
phones, but today we are one of the biggest smart phone markets, with over 250
million devices in the country. Realizing the dream of IoT (Internet of Things) is
making the machines smarter day by day, as we give away all our personal
and professional information to the machines for our own convenience.
The hackers create a Trojan file, typically an Android APK file, and make it available on the
internet. Whoever downloads this file gets his phone hacked. These files are usually
attached to games like Candy Crush, Mini Militia and Clash of Clans. The
user is not aware that the file may contain a backdoor that can harm their data.
Once the user of the Android phone downloads the file, the hacker gets the OS version
along with access to everything on the device.
Such attacks make use not only of Trojan horses but also of phishing and ransomware. Phishing
is a method of trying to gather personal information using deceptive e-mails and
websites. The goal is to trick the email recipient into believing that the message is
something they want or need — a request from their bank, for instance, or a note
from someone in their company — and to click a link or download an attachment.
Ransomware is a form of malware that encrypts a victim’s files. The attacker then
demands a ransom from the victim, promising to restore access to the data upon payment.

Exercise
1. Explain the importance of computer security.
2. What is a CIA triad? Explain.
3. “What was secure yesterday is not secure today and what is secure today will
definitely not be secure tomorrow.” Justify the statement.
4. Differentiate between active and passive attacks.
5. Explain the various harms of computer system vulnerability.
6. Are threats and attacks the same? Justify.
7. Explain how we can avoid passive attacks.
8. What are the various features of computers that are harmed by the computer
vulnerabilities?
9. Do authentication and authorization mean the same? If not, justify.
10. Explain the OSI security architecture.


11. Explain the usage of TCP/IP in network security model.
12. Define the following terms:
i. Trojan Horse
ii. Ransomware
iii. Phishing
13. Explain the concept of interception and modification.
14. Explain the role of the gatekeeper function in the model for network security
access.
OR
State and explain the significance of the gatekeeper function in the model for
network security access.
15. Why is it difficult to detect a passive attack?
16. Name any 5 common computer system vulnerabilities.
17. Explain denial of service.
18. State the various causes for computer system vulnerabilities.
19. Differentiate between undeniableness and denial of service.
20. State and explain the various passive attacks.
2
Hijacking and Defense
Learning Objective
• Control hijacking
• Buffer Overflow
• String vulnerabilities
• Defense against Control Hijacking
• Platform Defense
• Run-time defense

When a program executes, it has a trajectory: the program runs from one instruction to
another. When a programmer writes a program, he has a control flow in mind, and the
attacker tries to change the flow of control towards something that was not intended.
In most cases, this is done to take over the system or inject malicious code.

2.1 CONTROL HIJACKING


Hijacking is a type of network security attack in which the attacker takes control of a
communication - just as an airplane hijacker takes control of a flight - between two
entities and masquerades as one of them.

2.1.1 Buffer Overflow attacks


Buffer overflow is the most common vulnerability; it had its first major exploit in 1988 in the Internet
Worm. It is one of the biggest challenges even today: about 20% of vulnerabilities on the
internet are still buffer vulnerabilities. Buffer overflow errors are characterized by
the overwriting of memory fragments of the process, which should never have been
modified intentionally or unintentionally. Overwriting values of the IP (Instruction
Pointer), BP (Base Pointer) and other registers causes exceptions, segmentation faults,
and other errors to occur. Usually these errors end execution of the application in an
unexpected way. Buffer overflow errors typically occur when we operate on buffers of char
type. Buffer overflows can consist of overflowing the stack or overflowing the heap.
We do not distinguish between these two in this chapter, to avoid confusion.
Attackers exploit buffer overflow issues by overwriting the memory of an
application. This changes the execution path of the program, triggering a response
that damages files or exposes private information. For example, an attacker may
introduce extra code, sending new instructions to the application to gain access to
IT systems. If attackers know the memory layout of a program, they can intentionally
feed input that the buffer cannot store, and overwrite areas that hold executable
code, replacing it with their own code. For example, an attacker can overwrite a
pointer (an object that points to another area in memory) and point it to an exploit
payload, to gain control over the program.
For example, consider the following code:
#include <stdlib.h>
#include <unistd.h>
#include <stdio.h>

int main(int argc, char **argv) {
    /* volatile keeps the compiler from optimizing away the later check */
    volatile int modified;
    char buffer[64];

    modified = 0;
    gets(buffer);                  /* unsafe: gets() performs no bounds checking */

    if (modified != 0) {
        printf("'modified' variable changed");
    } else {
        printf("\nTry again");
    }
    return 0;
}
If, on execution, the value of modified has changed even though the program never assigns
to it after initialization, a buffer overflow in gets() has overwritten it.
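One common fix, sketched below on the assumption that silently truncating over-long input is acceptable, is to replace gets() with fgets(), which takes the buffer size and never writes past the end of the buffer:
#include <stdio.h>
#include <string.h>

int main(void) {
    volatile int modified = 0;
    char buffer[64];

    /* fgets() writes at most sizeof(buffer) - 1 characters plus a '\0',
       so 'modified' can no longer be overwritten by this call. */
    if (fgets(buffer, sizeof(buffer), stdin) == NULL)
        return 1;
    buffer[strcspn(buffer, "\n")] = '\0';   /* strip the trailing newline, if present */

    if (modified != 0)
        printf("'modified' variable changed");
    else
        printf("\nTry again");
    return 0;
}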
C and C++ are two languages that are highly susceptible to buffer overflow attacks,
as they do not have built-in safeguards against overwriting or accessing data in
their memory. Mac OS X, Windows, and Linux all use code written in C and C++.
Languages such as Perl, Java, JavaScript, and C# use built-in safety mechanisms
that minimize the likelihood of buffer overflow.
Types of Buffer Overflow Attacks
• Stack-based buffer overflows are more common, and leverage stack memory
that only exists during the execution time of a function.
• Heap-based attacks are harder to carry out and involve flooding the memory
space allocated for a program beyond memory used for current runtime operations.

2.1.2 Control Hijacking attack format string vulnerabilities


Consider the code given below. This program uses the %n format specifier. Wherever it
appears, printf stores the number of characters written so far into the corresponding
pointer argument. So, when you execute this program, the output is as expected.
#include <stdio.h>
#include <stdlib.h>

int main()
{
    int A = 5, B = 7, count_one, count_two;

    /* Example of the %n format specifier: it stores the number of characters
       written so far into the int pointed to by the matching argument. */
    printf("The number of bytes written up to this point X %n is being "
           "stored in count_one, and the number of bytes up to here X %n is "
           "being stored in count_two.\n", &count_one, &count_two);

    printf("count_one: %d\n", count_one);
    printf("count_two: %d\n", count_two);

    /* Stack example: every conversion specifier has a matching argument. */
    printf("A is %d and is at %08x. B is %x.\n", A, &A, B);

    exit(0);
}
But if the code is modified:
#include <stdio.h>
#include <stdlib.h>

int main()
{
    int A = 5, B = 7, count_one, count_two;

    /* Example of the %n format specifier. */
    printf("The number of bytes written up to this point X %n is being "
           "stored in count_one, and the number of bytes up to here X %n is "
           "being stored in count_two.\n", &count_one, &count_two);

    printf("count_one: %d\n", count_one);
    printf("count_two: %d\n", count_two);

    /* Stack example: the argument for the final %x is missing, so printf
       reads whatever happens to be next on the stack. */
    printf("A is %d and is at %08x. B is %x.\n", A, &A);

    exit(0);
}
Now, when this code is executed, you get the correct address for A, but for B you also
get the address of A: since no argument was supplied for the final %x, printf reads
whatever happens to be on the stack. This kind of issue leads to the attack. Any function that
accepts a format string can be a vulnerable function; printf, fprintf, sprintf, vprintf
and vfprintf are all vulnerable if they are handed an attacker-controlled format string.
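The usual defense, sketched below, is never to pass user-controlled data as the format string itself; the untrusted data is passed only as an argument to a fixed format such as "%s":
#include <stdio.h>

void log_message(const char *user_input) {
    /* Vulnerable pattern (do NOT do this): the user controls the format string.
         printf(user_input);                                                     */

    /* Safe pattern: the format string is a constant; user data is only data. */
    printf("%s\n", user_input);
}

int main(void) {
    log_message("100%n of users");   /* the %n here is printed literally, not interpreted */
    return 0;
}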

2.2 DEFENSE AGAINST CONTROL HIJACKING


So far, we have seen the vulnerabilities and the different control hijacking strategies,
so now let us see how we can defend against them. There are different ways to do it:
• Fix the bugs: For fixing the bugs, we usually rely on software auditing.
This can be done using automated tools like Coverity and PREfast/PREfix. But
these tools cannot give you a hundred percent assurance of the defenses. These
tools are also expensive to buy and thus may not be affordable for small
companies. The other way is to rewrite the software in a type-safe
language like Java or ML.
• Platform Defenses: The other approach is to prevent code execution
beyond the point where control hijacking takes place.
• One can also add run-time code to detect overflow exploits. This addition of
run-time code needs to be done automatically.

2.2.1 Platform Defenses


The attack can be prevented by marking the stack and heap as non-executable.
This process is called DEP (Data Execution Prevention). This can be implemented
in Linux (via the PaX project) and in Windows. While working in Windows, one can
easily turn on DEP. It is a security feature that can help prevent damage to your
computer from viruses and other security threats. Harmful programs can try to
attack Windows by attempting to run (also known as execute) code from system
memory locations reserved for Windows and other authorized programs. Unlike a
firewall or antivirus program, DEP does not help prevent harmful programs from
being installed on your computer. Instead, it monitors your programs to determine
if they use system memory safely. To do this, DEP software works alone or with
compatible microprocessors to mark some memory locations as “non-executable”.
If a program tries to run code, malicious or not, from a protected location, DEP
closes the program and notifies you.
• Open System by clicking the Start button, right-clicking Computer, and then
clicking Properties.
• Click Advanced system settings. If you’re prompted for an administrator password
or confirmation, type the password or provide confirmation. (Fig. 2.1)
• Under Performance, click Settings.

Fig. 2.1 Advanced System Settings



• Click the Data Execution Prevention tab, and then click Turn on DEP for
all programs and services except those selected. (Fig. 2.2).
• To turn off DEP for an individual program, select the check box next to the
program that you want to turn off DEP for, and then click OK.
• If the program is not in the list, click Add. Browse to the Program Files folder,
find the executable file for the program (it will have an .exe file name
extension), and then click Open.
• Click OK, in the System Properties dialog box if it appears, and then click OK
again. You might need to restart your computer for the changes to take effect.

Fig. 2.2 Data Execution Prevention Tab


However, this process does have limitations: sometimes the heap cannot be made
non-executable. Also, this approach does not defend against Return Oriented
Programming (ROP).

2.2.2 Run-time Defenses


There are a number of different solutions for run-time checking; for this, you
need to insert some code into the program that performs checks at run-time. There are
different automated methods for doing this, and StackGuard is one of them.
StackGuard basically works by inserting a small value known as a canary between
the stack variables (buffers) and the function return address. When a stack-buffer
overflows into the function return address, the canary is overwritten. During function
return, the canary value is checked and, if the value has changed, the program is
terminated, thus reducing the attack from code execution to a mere denial of service. The
performance cost of inserting and checking the canary is very small for the benefit
it brings, and can be reduced further if the compiler detects that no local buffer
variables are used by the function so the canary can be safely omitted.
Compilers implement this feature by selecting appropriate functions, storing the
stack canary during the function prologue, checking the value in the epilogue, and
invoking a failure handler if it was changed. For example consider the following
code:
void function1(const char* str) {
    char buffer[16];
    strcpy(buffer, str);    /* no bounds check: a classic stack-buffer overflow */
}
StackGuard automatically converts this code to:

extern uintptr_t __stack_chk_guard;
noreturn void __stack_chk_fail(void);

void function1(const char* str) {
    uintptr_t canary = __stack_chk_guard;   /* prologue: store the canary */
    char buffer[16];
    strcpy(buffer, str);
    /* epilogue: if the canary no longer matches the reference value, abort */
    if ( (canary = canary ^ __stack_chk_guard) != 0 )
        __stack_chk_fail();
}
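In practice this transformation is not written by hand; recent GCC and Clang releases apply it when a program is compiled with their stack-protector options (for example, -fstack-protector or -fstack-protector-strong), which is an easy way to experiment with the behaviour described above.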

2.2.2.1 Canaries
There are two types of canaries which are supported by StackGuard:

Terminator canaries
Most buffer overflow attacks are based on certain string operations which end at
string terminators. A terminator canary contains NULL(0x00), CR (0x0d), LF
(0x0a), and EOF (0xff), four characters that should terminate most string operations,
rendering the overflow attempt harmless. This prevents attacks using strcpy() and
other methods that stop copying at a null character; the undesirable side effect
is that the canary value is known.
This type of protection can be bypassed by an attacker who overwrites the canary with
its known value and the return address with a specially crafted value, resulting in
code execution. This can happen when non-string functions are used to copy buffers
and both the buffer contents and the length of the buffer are attacker controlled.

Random canaries
A random canary is chosen at random at the time the program executes. With this
method, the attacker could not learn the canary value prior to the program start by
searching the executable image. The random value is taken from /dev/urandom if
available, and created by hashing the time of day if /dev/urandom is not supported.
This randomness is sufficient to prevent most prediction attempts. If there is an
information leak flaw in the application, which can be used to read the canary
value, this kind of protection could be bypassed.
Though StackGuard may be effective in preventing stack-buffer overflow attacks
it has certain limitations as well:
• An information disclosure flaw in a different part of the program could disclose
the global __stack_chk_guard value. This would allow an attacker to write
the correct canary value and overwrite the function return address.
• Not all buffer overflows are on stack. StackGuard cannot prevent heap-based
buffer overflows.
• While StackGuard effectively prevents most stack buffer overflows, some
out-of-bounds write bugs can allow the attacker to write to the stack frame
after the canary, without overwriting the canary value itself.
• If a function has multiple local data structures and pointers to functions, these
are allocated on the stack as well, before the canary value. If there is a buffer
overflow in any one of these structures, the attacker can use this to overwrite
adjacent buffers/pointers which could result in arbitrary code execution. This
really depends on the arrangement of data on the stack.
• On some architectures, multi-threaded programs store the reference canary
__stack_chk_guard in Thread Local Storage, which is located a few kb after
the end of the thread’s stack. In these circumstances, a sufficiently large
overflow can overwrite both canary and __stack_chk_guard to the same value,
causing the detection to incorrectly fail.

Exercise
1. What is Control Hijacking?
2. List out the different control attacks.
3. What are the intentions of the attacker while hijacking a system?
4. What are the different types of Buffer Overflow attacks?
5. Explain stack and heap based attacks.
6. Explain the various string vulnerabilities.
7. List out the various ways to defend against control hijacking.
8. How does fixing the bugs help in defending against control hijacking?
9. Explain Data Execution Prevention.
10. How does StackGuard help in run-time defense against control hijacking?
OR
Explain the use of Stack Guard in defense against control hijacking.
11. What are canaries?
12. Briefly explain the different types of canaries supported by Stack Guard.
13. State the disadvantages of Stack Guard.
14. When is a random canary chosen?
15. Why do C and C++ tend to be more vulnerable towards control hijacking?
3
Confidentiality Policies
Learning Objective
• Discretionary Access Control
• Mandatory Access Control
• Confinement Principle
• UNIX-User IDs, Access, Privileges
• System Call Interposition
• Virtual Machine Based Isolation
• Software Based Fault Isolation
• Rootkits

Certain information is intended to be kept secret, so that it can be protected from
unauthorized access. The principle of confidentiality specifies that only the
sender and the intended recipient should be able to access the content of the message.
Confidentiality policies aim at providing the protection of confidentiality. Such a
policy is also called an Information Flow Policy and prevents the unauthorized disclosure of
information. There are multiple acts by the government to prevent unauthorized
access and thus achieve confidentiality.
The information flow is managed by multiple access control mechanisms.
• Discretionary Access Control (DAC). It is basically a mechanism where a user
sets the access control to allow or deny access to an object. It is also referred to
as identity based access control. It is a traditional access control mechanism
that is implemented by operating systems like UNIX. It is based on the identity
of the user and on ownership. Programs are free to change access to the user’s
objects. It supports trusted admins as well as untrusted ordinary users. The problem
with DAC is that it is difficult to enforce a system-wide security policy, and it
becomes difficult to classify users if there are more than two categories
of users. Another problem with DAC is that it is based only on the user’s identity
and ownership, and ignores other security-related issues such as the function of the
program, the sensitivity of the program, or its integrity.
• Mandatory Access Control (MAC). It is a mechanism where the system controls
access to an object and a user cannot alter that access. It supports a wide variety
of users and has a separation of security domains. It keeps track of program
integrity and limits the privileges of the user.

3.1 CONFINEMENT PRINCIPLE


There are times when we need to run buggy or untrusted code: for example, programs
from untrusted internet sites, including apps, extensions and plugins. One needs to be
careful while running such code, as it can harm your system security. The main goal is:
if an application misbehaves, kill that application. For this, many antivirus companies
make use of honeypots. A honeypot is a computer or computer system intended to mimic
likely targets of cyber-attacks. It can be used to detect attacks or deflect them from a
legitimate target. It can also be used to gain information about how cybercriminals
operate. The principle behind them is simple: don’t go looking for attackers. Prepare
something that would attract their interest — the honeypot — and then wait for the
attackers to show up.
The confinement principle ensures that a misbehaving application cannot harm the rest of
the system. This can be done through the use of virtual machines and system call
interposition.

3.2 DETOUR OF UNIX USER IDs, PROCESS IDs AND PRIVILEGES


In any system which is based on the notion of authorization or authentication, there
is a notion of subjects and principals.
In a security context, a subject is any entity that requests access to an object. These
are generic terms used to denote the thing requesting access and thing against
which request is made. In other words, object is anything on which subject can
perform some action. When you log onto an application you are the subject and the
application is the object. When someone knocks on your door, the visitor is the
subject requesting access and your home is the object of which access is requested.
Sometimes, subjects can also behave like objects, with operations like kill, suspend
and resume.
A subset of subjects that is represented by an account, role or other unique identifier
is the principal. When we get to the level of implementation details, principals are
the unique keys we use in access control lists. They may represent human users,
automation, applications, connections, etc.
User is a subset of principal usually referring to a human operator. The distinction
is blurring over time because the words “user” or “user ID” are commonly
interchanged with “account”. However, when you need to make the distinction
between the broad class of things that are principals and the subset of these that
are interactive operators driving transactions in a non-deterministic fashion, “user”
is the right word.
User is more specific than subject or principal in that it usually refers to an interactive
operator. That is why we have a graphical User Interface and not a Graphical Principal
Interface. A user is an instance of subject that resolves to a principal. A single user
may resolve to any number of principals but any principal is expected to resolve to a
single user (assuming people observe the requirement not to share IDs).

3.2.1 Basic concepts of UNIX IDs


Every user in a UNIX-like operating system is identified by a distinct integer number;
this unique number is called the UserID (UID). There are three types of UID defined for a
process, which can be changed dynamically as per the privilege of the task:
• Real UserID: It is the account of the owner of this process. It defines which files
this process has access to.
• Effective UserID: It is normally the same as the Real UserID, but it is sometimes
changed to enable a non-privileged user to access files that can only be accessed
by root.
• Saved UserID: It is used when a process running with elevated privileges
(generally root) needs to do some under-privileged work; this can be achieved
by temporarily switching to a non-privileged account.
Normally, there is a one-to-many mapping from a user to principals, but each
principal needs to be associated with only one user. This approach ensures the
accountability of the user's actions. In UNIX, all objects are modeled as files and
are arranged in a hierarchy. These files exist in directories, and a directory is also
considered a file. Each object in UNIX has an owner, a group and 12 permission
bits.
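As a small illustration, the sketch below simply prints the current IDs using the standard getuid() and geteuid() calls; on a set-user-ID executable the two values would differ, and the effective UID can be switched with seteuid() while the saved UID keeps the original value available.
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>

int main(void) {
    uid_t ruid = getuid();    /* real UID: the account that started the process */
    uid_t euid = geteuid();   /* effective UID: the identity used for permission checks */

    printf("real uid      = %d\n", (int)ruid);
    printf("effective uid = %d\n", (int)euid);

    /* A set-user-ID root program can temporarily drop privileges by switching
       its effective UID to the real UID; the saved UID allows it to switch back. */
    if (euid == 0 && seteuid(ruid) == 0)
        printf("effective uid is now %d\n", (int)geteuid());
    return 0;
}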

3.2.2 Basic Permission Bits on Files and Directories


The basic permission bits control reading the contents of a file, changing the contents
of a file, and loading the file into memory and executing it.
In terms of directories, the permission bits allow one to list the file names in a
directory, traverse a directory, and create or delete files in a directory.
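As a small sketch (assuming a file named notes.txt exists in the current directory; the name is only an example), the program below reads a file's permission bits with stat() and then removes the group and other write bits with chmod():
#include <stdio.h>
#include <sys/stat.h>

int main(void) {
    struct stat st;
    const char *path = "notes.txt";           /* hypothetical example file */

    if (stat(path, &st) != 0) {
        perror("stat");
        return 1;
    }
    /* st_mode carries the 12 permission bits: rwx for owner, group and other,
       plus the setuid, setgid and sticky bits. */
    printf("current mode: %o\n", st.st_mode & 07777);

    /* Remove write permission for group and others. */
    if (chmod(path, (st.st_mode & 07777) & ~(S_IWGRP | S_IWOTH)) != 0) {
        perror("chmod");
        return 1;
    }
    return 0;
}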

3.2.3 UNIX-Access Control


UNIX uses access control lists. A user logs into UNIX and has a right to start
processes that make requests. A process is “bigger” than a subject; many domains
may correspond to a single process. Each process has an identity (uid). This uid is
obtained from the file that stores user accounts and passwords: /etc/passwd. An entry
in /etc/passwd may look like:
fbs : abcdefg : 100 : 5 : Schneider, F.B. : /usr/fbs : /bin/sh
The fields are, in order: the account name, the encrypted password, the uid, the group id,
the user’s name “in real life”, the directory where the user’s files start, and the shell
program that starts on login.
Every process inherits its uid based on which user starts the process. Every process
also has an effective uid, also a number, which may be different from the uid.
Finally, each UNIX process is a member of some groups. In the original UNIX
every user was a member of one group. Currently, users can be members of more
than one group. Group information can be taken from /etc/passwd or from a file
/etc/groups. System administrators control the latter file. An entry in /etc/groups
may look like:
Staff : " : 17 : fbs, Idzhou, ulfar
The fields are: the group name, the group password (here the " signifies that one cannot
log in to this account with a password), the group id, and the list of userids that are
members of this group.
When a process is created, associated with it is the list of all the groups it is in.
Some UNIX variants use a notion of an additional access control list, and not just mode
bits, to handle access control. In this case, each file has mode bits as we have been
discussing and also extended permissions. The extended permissions provide
exceptions to the mode bits as follows:
• Specify: for example, “r-- u:harry” means that user harry has read-only access.
• Deny: for example, “-w- g:acsu” means remove write access from the group
acsu.
• Permit: for example, “rw- u:bill, g:swe” means give read and write access to
bill if bill is also a member of the group swe. The comma is a conjunction.
With extended permissions it’s possible to force a user to enter a particular group
before being allowed access to a file.
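The account and group records described above can also be read programmatically. The sketch below, which uses the example names "fbs" and "staff" purely as placeholders, looks them up with the standard getpwnam() and getgrnam() calls:
#include <stdio.h>
#include <pwd.h>
#include <grp.h>

int main(void) {
    struct passwd *pw = getpwnam("fbs");     /* entry from /etc/passwd */
    struct group  *gr = getgrnam("staff");   /* entry from /etc/group  */

    if (pw != NULL)
        printf("user %s: uid=%d gid=%d home=%s shell=%s\n",
               pw->pw_name, (int)pw->pw_uid, (int)pw->pw_gid,
               pw->pw_dir, pw->pw_shell);

    if (gr != NULL) {
        printf("group %s (gid=%d) members:", gr->gr_name, (int)gr->gr_gid);
        for (char **m = gr->gr_mem; m != NULL && *m != NULL; m++)
            printf(" %s", *m);
        printf("\n");
    }
    return 0;
}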

3.2.3.1 Principle of Fail-Safe Defaults


This principle restricts how privileges are initialized when a subject or object is
created. The principle of fail-safe defaults states that, unless a subject is given explicit
access to an object, it should be denied access to that object.
This principle requires that the default access to an object is none. Whenever access,
privileges, or some security-related attribute is not explicitly granted, it should be
denied. Moreover, if the subject is unable to complete its action or task, it should
undo those changes it made in the security state of the system before it terminates.
This way, even if the program fails, the system is still safe.
For example, if the mail server is unable to create a file in the spool directory, it
should close the network connection, issue an error message, and stop. It should
not try to store the message elsewhere or to expand its privileges to save the message
in another location, because an attacker could use that ability to overwrite other
files or fill up other disks (a denial of service attack). The protections on the mail
spool directory itself should allow create and write access only to the mail server
and read and delete access only to the local server. No other user should have
access to the directory.
In practice, most systems will allow an administrator access to the mail spool directory.
By the principle of least privilege, that administrator should be able to access only
the subjects and objects involved in mail queuing and delivery. As we have seen, this
constraint minimizes the threats if that administrator’s account is compromised. The
mail system can be damaged or destroyed, but nothing else can be.
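A minimal sketch of the default-deny idea (illustrative only; the permission table and the names in it are made up): any right that has not been explicitly granted falls through to a denial.

```python
# Fail-safe defaults: access is denied unless explicitly granted.
GRANTED = {
    ("mailserver", "/var/spool/mail"): {"create", "write"},
    ("localserver", "/var/spool/mail"): {"read", "delete"},
}

def is_allowed(subject, obj, right):
    # Anything not explicitly present in GRANTED falls through to "deny".
    return right in GRANTED.get((subject, obj), set())

print(is_allowed("mailserver", "/var/spool/mail", "write"))   # True
print(is_allowed("mailserver", "/var/spool/mail", "read"))    # False (default deny)
print(is_allowed("randomuser", "/var/spool/mail", "write"))   # False (default deny)
```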

3.2.3.2 Principle of Least Privilege


The principle restricts how privileges are granted. The principle of least privilege
states that a subject should be given only those privileges that it needs in order to
complete its task.
If a subject does not need an access right, the subject should not have that right.
Furthermore, the function of the subject (as opposed to its identity) should control
the assignment of rights. If a specific action requires that a subject’s access rights
be augmented, those extra rights should be relinquished immediately on completion
of the action. This is the analogue of the “need to know” rule: if the subject does
not need access to an object to perform its task, it should not have the right to
access that object. More precisely, if a subject needs to append to an object, but not
to alter the information already contained in the object, it should be given append
rights and not write rights.
In practice, most systems do not have the granularity of privileges and permissions
required to apply this principle precisely. The designers of security mechanisms
then apply this principle as best they can. In such systems, the consequences of
security problems are often more severe than the consequences for systems that
adhere to this principle.
The UNIX operating system does not apply access controls to the user root. That
user can terminate any process and read, write, or delete any file. Thus, users who
create backups can also delete files. The administrator account on Windows has
the same powers. This principle requires that processes be confined to as
small a protection domain as possible.
For example, a mail server accepts mail from the Internet and copies the messages
into a spool directory; a local server will complete delivery. The mail server needs
the rights to access the appropriate network port, to create files in the spool directory,
and to alter those files (so it can copy the message into the file, rewrite the delivery
address if needed, and add the appropriate “Received” lines). It should surrender
the right to access the file as soon as it has finished writing the file into the spool
directory, because it does not need to access that file again. The server should not
be able to access any user’s files, or any files other than its own configuration files.
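The usual UNIX pattern behind this principle can be sketched as follows (illustrative only, not from the text; the port number and the unprivileged uid/gid are placeholders): a daemon started as root acquires the one privileged resource it needs, then permanently drops to an unprivileged identity before doing any other work.

```python
# Least privilege: acquire the privileged port, then drop privileges for good.
import os
import socket

PORT = 25 if os.getuid() == 0 else 8025   # port 25 needs root; fall back for a dry run
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind(("0.0.0.0", PORT))
sock.listen(5)

UNPRIV_UID = 1001                         # placeholder uid of an unprivileged account
UNPRIV_GID = 1001                         # placeholder gid

if os.getuid() == 0:
    os.setgroups([])                      # drop supplementary groups first
    os.setgid(UNPRIV_GID)                 # then the group id
    os.setuid(UNPRIV_UID)                 # finally the user id; cannot be undone

# From here on the process can no longer read users' private files,
# even though it still holds the already-open listening socket.
print("serving on port", PORT, "as uid", os.getuid())
```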

3.2.3.3 Principle of Complete Mediation


This principle restricts the caching of information, which often leads to simpler
implementations of mechanisms.
The principle of complete mediation requires that all accesses to objects be checked
to ensure that they are allowed.
Whenever a subject attempts to read an object, the operating system should mediate
the action. First, it determines if the subject is allowed to read the object. If so, it
provides the resources for the read to occur. If the subject tries to read the object
again, the system should check that the subject is still allowed to read the object.
Most systems would not make the second check. They would cache the results of
the first check and base the second access on the cached results. When a UNIX
process tries to read a file, the operating system determines if the process is allowed
to read the file. If so, the process receives a file descriptor encoding the allowed
access. Whenever the process wants to read the file, it presents the file descriptor
to the kernel, and the kernel allows the access. If the owner of the file revokes
the process's permission to read the file after the file descriptor is issued, the kernel
still allows access. This scheme violates the principle of complete mediation, because
the second access is not checked: the cached value is used, and the later denial
of access is therefore ineffective.
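This behaviour can be observed directly. The short sketch below (illustrative, not from the text, and assuming it is run as an ordinary non-root user) opens a throw-away file, removes all of its permission bits, and shows that the already-issued file descriptor keeps working while a fresh open is denied.

```python
# Complete mediation violated by caching: a file descriptor keeps working
# even after the file's permissions are removed.
import os

path = "/tmp/mediation_demo.txt"       # throw-away example file
with open(path, "w") as f:
    f.write("secret\n")

fd = os.open(path, os.O_RDONLY)        # access check happens here, once
os.chmod(path, 0o000)                  # owner removes all permission bits

print(os.read(fd, 100))                # still succeeds: the old check was cached
os.close(fd)

try:
    os.open(path, os.O_RDONLY)         # a fresh open is re-checked and fails
except PermissionError as e:           # (assuming the script is not run as root)
    print("new open denied:", e)

os.chmod(path, 0o600)                  # clean up
os.remove(path)
```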
For example, the Domain Name Service (DNS) caches information mapping host
names into IP addresses. If an attacker is able to “poison” the cache by implanting
records associating a bogus IP address with a name, one host will route connections
to another host incorrectly.

3.3 SYSTEM CALL INTERPOSITION


System call interposition is a powerful method for regulating and monitoring
application behavior.A system call is a way for programs to interact with the
operating system. A computer program makes a system call when it makes a request
to the operating system’s kernel. System call provides the services of the operating
system to the user programs via Application Program Interface(API). It provides
an interface between a process and operating system to allow user-level processes
to request services of the operating system. System calls are the only entry points
into the kernel system. All programs needing resources must use system calls.
In recent years, a wide variety of security tools have been developed that use this
technique. This approach brings with it a host of pitfalls for the unwary implementer
that if overlooked can allow this tool to be easily circumvented.

3.3.1 Initial implementation-JANUS


Janus can be thought of as a firewall that sits between an application and the
operating system, regulating which system calls are allowed to pass. This is
analogous to the way that a firewall regulates what packets are allowed to pass.
Another way to think about Janus is as an extension of the OS reference monitor
that runs at user level. Janus’s main feature is to use system call interposition to
contain untrusted applications. The implementation of Janus requires its own kernel
module to support the functionality it needs (mainly syscall monitoring), and it
requires that the user use the Janus CLI to run the executable to be contained.
Janus consists of mod_janus, a kernel module that provides a mechanism for secure
system call interposition, and janus, a user-level program that interprets a user-
specified policy in order to decide which system calls to allow or deny. To gain a
better understanding of Janus’s basic operating model we will look at the lifetime
of a program being run under Janus:
• At startup, Janus reads in a policy file that specifies which files and network
resources it will allow access to.
• Janus then forks, the child process relinquishes all of its resources (closes all
of its descriptors, etc.), and the parent attaches to the child with the tracing
interface provided via mod_janus. At the user level, this consists of attaching
a file descriptor to the child process. Janus then selects on this descriptor and
waits to be notified of any interesting events.
• The child execs the sandboxed application.
• All accesses to new resources (via open, bind, etc.) are first screened by Janus,
which decides whether to allow the application access to the descriptor for the resource.
• The program continues to run under Janus’s supervision until it voluntarily ends
its execution or is explicitly killed by Janus for a policy violation. If a sandboxed
process forks, its new children will have new descriptors attached to them, and
will be subjected to the security policy of their parents by the Janus process.
Along with the benefits, Janus does come with some restrictions:
• Portability
Using the tracing facilities provided by an OS means poor portability. Janus was
originally implemented on Solaris, and the Linux port was difficult. Worse, the system call
interposition approach won't work on Windows at all: UNIX has on the order of 100
syscalls, while Windows has thousands.
• Incorrectly Replicating the OS
In order to make policy decisions, Janus must obtain and interpret OS state
associated with the application it is monitoring. Achieving this can lead to
replicating the OS in two ways. First, we may try to replicate OS state.
Necessarily, we must keep around some state in order to track what processes
we are monitoring. This state overlaps with state managed by the OS. In order
to interpret application behavior (e.g. the meaning of a system call) we must
replicate OS functionality. In both cases, replication introduces the possibility
of inconsistency that can lead to incorrect policy decisions.
• Incorrectly Mirroring OS State
Janus often needs OS state in order to make a policy decision. For example, if
we observe that a process wants to call ioctl on a descriptor, we might want
to know more about that descriptor. Is it open read-only, or read-write? Is it
associated with a file or a socket? Does it have the O_SYNC flag set? One
solution to this problem is to infer current OS state, by observing past
application behavior. This option is certainly attractive in some ways. Inferring
state means we don’t need to modify the OS if this information is not readily
available. It also eliminates the system call overhead of querying the OS.
Unfortunately, trying to infer even the most trivial information can be error-
prone, as we discovered in the course of building Janus. Janus needs to know
the protocol type of IP sockets in order to decide whether or not to let a monitored
process bind them. System call interposition in Janus is shown in Fig. 3.1.

[Figure: an application's open(“foo”) call is intercepted by mod_janus at the kernel's system call entry and forwarded to the user-space janus policy engine, which decides to allow or deny the call before the kernel proper executes it and returns the result.]

Fig. 3.1 System Call Interposition in Janus



3.3.2 ptrace
The ptrace() system call provides a means by which a parent process may observe
and control the execution of another process, and examine and change its core
image and registers. It is primarily used to implement breakpoint debugging and
system call tracing. The ptrace() system call shall enable a process to observe and
control the execution of another process, as well as examine and change certain
attributes of that process. This function operates via requests, which act on the
traced process using the other parameters in ways unique to each request type.
The tracing process must initiate tracing, either via the PTRACE_TRACEME or
PTRACE_ATTACH requests, before other requests may be performed. Except for
PTRACE_TRACEME and PTRACE_KILL, all requests must be performed on a
traced process that has been stopped. All signals, except one, delivered to the traced
process cause it to stop, irrespective of its registered signal handling, and cause an
event to be delivered to the tracing process which can be detected using the wait(2)
system call. The exception is the SIGKILL signal, which is delivered immediately
and performs its usual specified behavior.
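A very small, Linux-only sketch of the tracing handshake described above (illustrative, not from the text): the child requests tracing with PTRACE_TRACEME and execs a program, the parent observes the resulting stop and lets the child continue. It reaches ptrace(2) through libc with Python's ctypes, and the request numbers used are the standard Linux values.

```python
# Minimal ptrace sketch (Linux): child asks to be traced, parent supervises it.
import ctypes
import os

libc = ctypes.CDLL(None, use_errno=True)
PTRACE_TRACEME = 0      # standard Linux request numbers
PTRACE_CONT = 7

pid = os.fork()
if pid == 0:
    libc.ptrace(PTRACE_TRACEME, 0, None, None)     # "trace me, parent"
    os.execv("/bin/echo", ["/bin/echo", "hello from traced child"])
else:
    os.waitpid(pid, 0)                             # child stops when it execs
    print("tracer: child stopped, resuming it")
    libc.ptrace(PTRACE_CONT, pid, None, None)      # let the child run
    os.waitpid(pid, 0)                             # reap it after it exits
```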

3.3.3 systrace
Systrace is a computer security utility which limits an application’s access to the
system by enforcing access policies for system calls. This can mitigate the effects
of buffer overflows and other security vulnerabilities. It was developed by
Niels Provos and runs on various Unix-like operating systems.
Systrace is particularly useful when running untrusted or binary-only applications
and provides facilities for privilege elevation on a system call basis, helping to
eliminate the need for potentially dangerous setuid programs. It also includes
interactive and automatic policy generation features, to assist in the creation of a
base policy for an application.
Systrace supports the following features:
• Confines untrusted binary applications: An application is allowed to make
only those system calls specified as permitted in the policy. If the application
attempts to execute a system call that is not explicitly permitted, an alarm gets
raised.
• Interactive policy generation with graphical user interface: Policies can
be generated interactively via a graphical frontend to Systrace. The frontend
shows system calls and their parameters not currently covered by policy and
allows the user to refine the policy until it works as expected.
• Supports different emulations: GNU/Linux, BSDI, etc.
• Non-interactive policy enforcement: Once a policy has been trained,
automatic policy enforcement can be used to deny all system calls not covered
by the current policy. All violations are logged to Syslog. This mode is useful
when protecting system services like a web server.
• Remote monitoring and intrusion detection: Systrace supports multiple
frontends; by using a frontend that communicates over the network, very advanced
features such as remote monitoring and intrusion detection are possible.
• Privilege elevation: Using Systrace’s privilege elevation mode, it’s possible
to get rid of setuid binaries. A special policy statement allows selected system
calls to run with higher privileges, for example, creating a raw socket.

3.4 VIRTUAL MACHINE BASED ISOLATION


A virtual machine (VM) is an operating system or application environment that is
installed on software, which imitates dedicated hardware. The end user has the
same experience on a virtual machine as they would have on dedicated
hardware. Specialized software, called a hypervisor, emulates the PC client or
server’s CPU, memory, hard disk, network and other hardware resources completely,
enabling virtual machines to share the resources. The hypervisor can emulate
multiple virtual hardware platforms that are isolated from each other, allowing
virtual machines to run Linux and Windows Server operating systems on the same
underlying physical host. Virtualization limits costs by reducing the need for physical
hardware systems. Virtual machines more efficiently use hardware, which lowers
the quantities of hardware and associated maintenance costs, and reduces power
and cooling demand. They also ease management because virtual hardware does
not fail. Administrators can take advantage of virtual environments to simplify
backups, disaster recovery, new deployments and basic system administration tasks.
Virtual machines do not require specialized, hypervisor-specific hardware.
Virtualization does, however, require more bandwidth, storage and processing
capacity than a traditional server or desktop if the physical hardware is going to
host multiple running virtual machines. VMs can easily move, be copied and
reassigned between host servers to optimize hardware resource utilization.
There are two types of VMs, process and system:
• A process VM is a virtual platform created for an individual process and
destroyed once the process terminates. Virtually all operating systems provide
a process VM for each one of the applications running, but the more interesting
process VMs are those which support binaries compiled on a different
instruction set.

• A system VM supports an OS together with many user processes. When the
VM runs under the control of a normal OS and provides a platform-independent
host for a single application we have an application VM, e.g., Java Virtual
Machine (JVM).

3.5 SOFTWARE BASED FAULT ISOLATION


Nowadays, most software runs multiple components or threads within the same process,
so the question arises whether we can isolate them from one another.
When protecting a computer system, it is often necessary to isolate an untrusted
component into a separate protection domain and provide only controlled interaction
between the domain and the rest of the system. Software-based Fault Isolation
(SFI) establishes a logical protection domain by inserting dynamic checks before
memory and control-transfer instructions. Compared to other isolation mechanisms,
it enjoys the benefits of high efficiency (with less than 5% performance overhead),
being readily applicable to legacy native code, and not relying on special hardware
or OS support. SFI has been successfully applied in many applications, including
isolating OS kernel extensions, isolating plug-ins in browsers, and isolating native
libraries in the Java Virtual Machine.
The SFI approach partitions the process memory into segments.

[Figure: the process address space divided into alternating code and data segments, one code/data pair per isolated fault domain.]
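To make the segment idea concrete, here is a miniature sketch of the classic SFI address-sandboxing check (illustrative only; real SFI rewrites machine code rather than running Python, and the segment constants are made up): every address the untrusted component uses is masked into its own data segment before the access is permitted.

```python
# Software-based fault isolation, in miniature: force every address into the
# untrusted component's own data segment before using it.
SEGMENT_BASE = 0x2000_0000      # example segment identifier (upper bits)
SEGMENT_MASK = 0x0FFF_FFFF      # offset bits the component may choose freely

def sandbox_address(addr):
    # Keep only the offset bits, then force the segment bits to the
    # component's own segment. Even a corrupted pointer cannot escape.
    return SEGMENT_BASE | (addr & SEGMENT_MASK)

print(hex(sandbox_address(0x2000_1234)))   # in-segment address, left unchanged
print(hex(sandbox_address(0x7FFF_0042)))   # out-of-segment address, redirected
```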

3.6 ROOTKITS
Rootkits are a kind of malware designed to remain hidden on the computer.
The user might not notice them, but they are active.
Rootkits help cyber criminals or attackers remotely control your system.
They contain a number of tools, ranging from programs that help the hackers
steal your passwords to modules that make it easy for them to steal your bank-related
information. Rootkits can even help the attackers by disabling your security
software and by recording the keys you type on the keyboard.
Rootkits are really difficult to detect, which lets the malware reside on your
system for a long time and keep causing harm. Sometimes, the only solution left is to
erase the operating system from your computer and reinstall it. Now, the question
is: how do rootkits get onto the computer system? Consider opening an email or
downloading a file that looks perfectly safe but actually carries a virus. Rootkits may
even be downloaded through a mobile app.

There are five different types of rootkits:


• Hardware or Firmware rootkit
This type of malware may affect the hard drive, the BIOS or even the firmware
installed in a memory chip on the motherboard. It can even harm routers.
• Bootloader rootkit
The bootloader is responsible for loading the operating system when the
machine is turned on. A bootloader rootkit attacks the system and replaces
the original bootloader with a hacked one.
• Memory rootkit
This resides in the computer's memory, that is, RAM (Random Access
Memory). These can do real harm, but they have a short
life span: they disappear as soon as you reboot the system.
• Application rootkit
It replaces standard computer files with rootkit files. This kind of
rootkit may infect applications such as Word, Paint or Notepad. Every time
such an application runs, there is a chance for the hackers to attack.
• Kernel mode rootkit
These rootkits target the core of the computer's operating system. The hackers
can use this kind of rootkit to change the functionality of the operating system
by adding their own code into the original kernel code.
Rootkits are dangerous and very difficult to detect, so the best defense
is to be careful while downloading anything from the internet. No single
measure can fully protect your system from rootkits. Keeping your computer's
applications, operating system, and anti-virus software up to date helps protect against
rootkits, and keeping a check on phishing e-mails is another way of protecting
yourself from them.

3.7 INTRUSION DETECTION SYSTEM


An Intrusion Detection System (IDS) is a device or software application used to
keep a check on a network for malicious activity. If any malicious activity is
detected, an alert is immediately issued. Any malicious activity or
violation of policy is immediately reported to the administrator or collected
centrally using a security information and event management (SIEM) system. This
SIEM system integrates the output from various sources and makes use of alarm
filtering techniques to differentiate the malicious activity from the false alarm.

The Intrusion Detection System may be classified as:


• Network Intrusion Detection System (NIDS):
These systems are set up at a planned point within the network so
that they can keep a check on the traffic from the various devices on the
network. They observe the traffic passing on the subnet and match it
against a collection of known attacks.
An example of NIDS use is installing it on the subnet where the firewalls are
located, in order to detect attempts to crack the firewall.
• Host Intrusion Detection System (HIDS):
These systems run on independent hosts or devices on the network. A
HIDS monitors the incoming and outgoing packets from the device only and
alerts the administrator if it detects some malicious activity. It takes a snapshot
of the existing system files and compares it with a previous snapshot. If
some modification is detected, an alert is issued to the administrator.
Typical hosts for a HIDS are mission-critical machines, whose configuration is not
expected to change.
• Protocol Based Intrusion Detection System (PIDS):
It comprises a system or an agent that would consistently reside at the front
end of the server, trying to control and interpret the protocol between the user
and server. It tries to secure the web server by keeping a check on the HTTPS
protocol.
• Application Protocol-Based Intrusion Detection System (APIDS):
It is a system that resides within a group of servers. It identifies or discovers
an intrusion by monitoring and interpreting the communication on application
specific protocols.
• Hybrid Intrusion Detection System :
It is a combination of two or more intrusion detection systems. Here, the host
agent or the system data is combined with the network information to develop
a complete view of the network system.
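As a toy illustration (not from the text) of the signature-matching idea used by a NIDS, the sketch below scans packet payloads against a small, made-up list of known attack patterns.

```python
# Toy signature-based detection: flag any payload containing a known pattern.
# The signatures and payloads below are made-up examples.
SIGNATURES = {
    "sql-injection": b"' OR '1'='1",
    "path-traversal": b"../../etc/passwd",
}

def inspect(payload):
    return [name for name, pattern in SIGNATURES.items() if pattern in payload]

for packet in [b"GET /index.html", b"GET /download?file=../../etc/passwd"]:
    hits = inspect(packet)
    if hits:
        print("ALERT:", hits, "in", packet)
```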

Exercise
1. Why is confidentiality important in computer security?
2. Differentiate between DAC and MAC.
3. State the confinement principle.
4. What are honeypots?

5. Explain the terms Subject and Principal with respect to UNIX.


6. What are the different user ids supported by UNIX?
7. Briefly explain the access control in UNIX.
8. What is System call Interposition?
9. Write a short note on Janus.
10. Elaborate the drawbacks of Janus.
11. Explain Virtual Machine based isolation.
12. Explain how software based fault isolation partitions the memory.
OR
Explain the importance, goal and approach of Software Based Fault Isolation.
13. What are Rootkits?
14. Explain the importance of Rootkits.
15. List the different types of rootkits.
16. Explain the Intrusion Detection System.
4
Secure Architecture
(Principles, Isolation and Leas)
Learning Objective
• Access Control – types, issues
• Access Control in UNIX
• Browser Isolation – working and advantages
• Access Control in WINDOWS

When it comes to protecting your home or business, as well as the building’s
occupants, access control is one of the best ways for you to achieve peace of mind.
But, access control is much more than just allowing people to access your building,
access control also helps you effectively protect your data from various types of
intruders and it is up to your organization’s access control policy to address which
method works best for your needs.

4.1 ACCESS CONTROL CONCEPTS


Access control is a method of guaranteeing that users are who they say they are
and that they have the appropriate access to company data. At a high level, access
control is a selective restriction of access to data. It consists of two main
components: authentication and authorization.
Access control is used to identify an individual who does a specific job, authenticate
them, and then proceed to give that individual only the key to the door or workstation
that they need access to and nothing more. Authentication is a technique used to
verify that someone is who they claim to be. Any organization whose employees
connect to the internet—in other words, every organization today—needs some
level of access control in place. Access control systems come in three variations:

4.1.1 Discretionary Access Control (DAC)


Discretionary Access Control is a type of access control system that holds the
business owner responsible for deciding which people are allowed in a specific
location, physically or digitally. DAC is the least restrictive compared to the other
systems, as it essentially allows an individual complete control over any objects
they own, as well as the programs associated with those objects. The drawback to
Discretionary Access Control is that it gives the end user complete control
to set security-level settings for other users, and the permissions given to the end
user are inherited by other programs they use, which could potentially lead to
malware being executed without the end user being aware of it. A common criticism
of DAC systems is a lack of centralized control.

4.1.2 Mandatory Access Control (MAC)


Mandatory Access Control is more commonly utilized in organizations that require
an elevated emphasis on the confidentiality and classification of data (i.e. military
institutions). MAC doesn’t permit owners to have a say in the entities having access
in a unit or facility, instead, only the owner and custodian have the management of
the access controls. MAC will typically classify all end users and provide them
with labels which permit them to gain access through security with established
security guidelines.

4.1.3 Role-Based Access Control (RBAC)


RBAC, which is sometimes confused with Rule-Based Access Control, is among the most
in-demand access control systems. Not only is it in high demand among households, RBAC
has also become highly sought-after in the business world. In RBAC systems,
access is assigned by the system administrator and is stringently based on the
subject’s role within the household or organization and most privileges are based
on the limitations defined by their job responsibilities. So, rather than assigning an
individual as a security manager, the security manager position already has access
control permissions assigned to it. RBAC makes life much easier because rather
than assigning multiple individuals particular access, the system administrator only
has to assign access to specific job titles.
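A minimal sketch of the role-based idea (illustrative only; all names are made up): permissions are attached to roles, and a user obtains a permission only through a role assigned to them.

```python
# Role-based access control in miniature: users -> roles -> permissions.
ROLE_PERMISSIONS = {
    "security_manager": {"view_cameras", "edit_access_rules"},
    "employee":         {"open_front_door"},
}
USER_ROLES = {
    "alice": {"security_manager", "employee"},
    "bob":   {"employee"},
}

def allowed(user, permission):
    return any(permission in ROLE_PERMISSIONS.get(role, set())
               for role in USER_ROLES.get(user, set()))

print(allowed("alice", "edit_access_rules"))   # True, via the security_manager role
print(allowed("bob", "edit_access_rules"))     # False
```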

4.2 ISSUES IN ACCESS CONTROL


While authentication and access control are great ways to keep data and systems
secure, there are often simple errors that are made that compromise the security of
your business and prevent access control measures from being effective.

4.2.1 Appropriate role-based access


Users should only be given access to systems that they need to access, and at a
level that’s appropriate to their role. Good practice is to ensure that access privileges
(and changes) are approved by a sufficiently senior Director or Manager. New
employees or those changing role should have approved/documented access rights
and they should be revoked across all systems for any leavers without delay. Finally,
access privileges should be reviewed regularly and amended as part of a process of
security governance.

4.2.2 Poor password management


Password management is one of the most common mistakes when it comes to
access control. When there are a lot of different systems that require a password to
access then it’s not uncommon for employees and even business owners to use the
same password across the board. Even when employees are required to change
their password regularly though, there’s still the problem of using passwords that
are weak and easy to crack. Using the same password across multiple systems is
something that many people are guilty of. It’s logical why people would do this
since remembering multiple passwords can often be impractical.

4.2.3 Poor user education


One of the most important aspects of improving the security of company data is
educating your employees about risk. Your employees could easily be doing things
that are putting your data at risk. For example, people will often try to find a quicker
and easier way to accomplish something, often not being aware of the risk they
could be creating. This is why good training about risk is vital. Human error is
always one of the biggest security risks for any company so you should be very
aware of this and take any steps you can to educate your employees, including
risk-training programs.

4.3 BROWSER ISOLATION


Browser isolation is a cyber security model used to physically isolate an internet
user's web browser and browsing activity away from the local machine and
network; it is the underlying model and technology that supports a remote browsing
platform. This isolation may occur locally on the computer or remotely on a server.
Browser Isolation technology provides malware protection for day to day browsing
by eliminating the opportunity for malware to ever get on the end user’s device.
It essentially secures a computer/network from web-based threats by executing all
browsing activity in an isolated virtual environment, so that any threats are contained
in this environment and can’t infiltrate the user’s entire ecosystem (their computer’s
hard-drive, the other devices on the network, etc.). Even though Browser Isolation
is gaining traction as an IT security solution, there is still a lot of misinformation
on what Browser Isolation is.

4.3.1 How does it work?


Browser Isolation moves browsing activity away from end users' devices and onto
a remote server. This server can be on-premises, but not connected to the company's
regular IT infrastructure, or it can be delivered as a cloud based service. This allows
the user to continue to surf the web as they normally would, but because the remote
browser has been isolated away from the physical desktop and network, they are
no longer at risk from web based threats.
There are multiple technologies that deliver browser isolation. The most common
way of delivering Browser Isolation is Server-Side Browser Isolation. Server Side
Browser Isolation delivers literal isolation of browsing activity, by physically
isolating malware and cyber-attacks away from your networks and user machines.
Server-Side models deliver a remote browser to their users, which are hosted on a
physically isolated server built to handle cyber risks. This means that end users can
continue to use the web without disruption, able to view dynamic web pages as
they normally would, and use controls such as copy, paste and print. They normally
do not require any endpoint clients or software to be installed.

4.3.2 Advantages
4.3.2.1 Reduces Web Based Threats
Isolation stops the delivery of active code to the user’s local browser and device.
This means it blocks web-based infections such as ransomware and malicious advertising
from reaching user devices and business networks. The majority of threats facing
organizations come from the internet, and so by isolating browsing activity,
organizations greatly reduce the risks of attacks.

4.3.2.2 Saves Time


Isolation has benefits over more traditional web filtering solutions in that it is less
time consuming and requires less oversight after initial set-up. Traditional solutions
usually require admins to whitelist safe web pages and blacklist unsafe ones
for end users to visit. Admins may also have to deal with requests and web based
alerts when users have attempted to visit a site that is potentially unsafe.
Browser Isolation remediates this issue by allowing users to access all websites,
without needing to worry about threats, as they are isolated away from the user.
Like traditional systems, Browser Isolation vendors such as Menlo Security still
do offer website classification, so that admins can control the types of pages users
can visit. This allows them to set policies around what controls users have on
unsafe pages, which saves admins time from having to deal with requests and
alerts to investigate.

4.3.2.3 Increases Productivity


Browser Isolation helps to increase productivity, as it allows users to view the web
for research, communication and cloud productivity completely as normal. Using
traditional web security approaches, users can find using the web limited by websites
being blocked. Using Isolation, users can be more productive by using the web
completely as they usually would, without impact on their user experience, while
still remaining fully protected from web based threats.
Employees can view PDFs and Microsoft Office files as they normally would,
with many Browser Isolation vendors displaying a render of the original file in a
‘safe-mode’ that prevents any threats from being downloaded to the local network.
Once a document has been verified as safe, and according to admin policies, users
can then download the files and use them as they normally would.

4.3.2.4 Phishing Protection


Isolation can help businesses to deal with damaging phishing attacks. The majority
of phishing attacks originate via emails, often containing links to malicious phishing
websites, or malicious downloads. Some Isolation vendors integrate with email
networks to scan these links and attachments and display safe renders to the user,
which greatly reduces the risk that even a sophisticated email threat will be successful.
When a user clicks on a file in a phishing email, the Browser Isolation technology
will show them a safe render, while anti-virus engines will determine whether or
not the original file should be downloaded. If a link within an email is opened, and
it goes to a potentially dangerous website, Browser Isolation solutions such as
Menlo Security will display a safe ‘read-only’ version of the page, which does not
allow users to enter any account details which would compromise their data.

4.4 CASE STUDY- ACCESS CONTROL IN UNIX AND WINDOWS


UNIX uses access control lists. A user logs into UNIX and has a right to start
processes that make requests. A process is “bigger” than a subject, many domains
may correspond to a single process. Each process has an identity(uid). This uid is
obtained from the file that stores user passwords: /etc/passwd. An entry in /etc/
passwd may look like:
fbs : abcdefg : 100 : 5 : Schneider, F.B. : /usr/fbs : /bin/sh
The fields are, in order: the account name, the encrypted password, the uid, the group id,
the user's name “in real life”, the home directory where the user's files start, and the
shell program that starts on login.
Every process inherits its uid based on which user starts the process. Every process
also has an effective uid, also a number, which may be different from the uid.
Finally, each UNIX process is a member of some groups. In the original UNIX
every user was a member of one group. Currently, users can be members of more
than one group. Group information can be gotten from /etc/passwd or from a file /
etc/groups. System administrators control the latter file. An entry in /etc/groups
may look like:
staff : * : 17 : fbs, ldzhou, ulfar
The fields are: the group name, the password (the “*” signifies that one cannot log in
with a password to this account), the group id, and the list of userids that are members
of this group.
When a process is created, associated with it is the list of all the groups it is in.
Recall that groups are a way to shorten access control lists. They are useful in
other ways as well.
All of the above implements a form of authentication, knowing the identity of the
subject running commands. Objects in UNIX are files. UNIX attempts to make
everything look like a file. (E.g., one can think of “writing” to a process as equivalent
to sending a message, etc.) Because of this, we will only worry about files,
recognizing that just about every resource can be cast as a file.
Here is a high-level overview of the UNIX file system. A directory is a list of pairs:
(filename, i-node number). Running the command ‘ls’ will produce a list of
filenames from this list of pairs for the current working directory. An i-node contains
a lot of information, including:
• where the file is stored — necessary since the directory entry is used to access
the file,
• the length of the file — necessary to avoid reading past the end of the file,
• the last time the file was read,
• the last time the file was written,
• the last time the i-node was read,
• the last time the i-node was written,
• the owner — a uid, generally the uid of the process that created the file,
• a group — a gid of a group that the process that created the file is a member of,
• 12 mode bits to encode protection privileges — equivalent to encoding a set
of access rights.
Nine of the 12 mode bits are used to encode access rights. These access bits can be
thought of as the protection matrix entry. They are divided into three groups of three:
u g o
rwx rwx rwx
The first triplet (u) is for the user, the second (g) for the group and the third (o) for
anyone else. If a particular bit is on, then the named set of processes have the
corresponding access privileges (r:read, w:write, x:execute).


There are some subtleties however. In order to access a file, it is necessary to utter
that object’s name. Names are always relative to some directory. Directories are
just files themselves, but in the case of directories:
• The “r” (read) bit controls the ability to read the list of files in a directory. If
“r” is set, you can use “ls” to look at the directory.
• The “x” (search) bit controls the ability to use that directory to construct a
valid pathname. If the “x” bit is set, you can look at a file contained in the
directory.
Windows NT supports multiple file systems, but the protection issues we will
consider are only associated with one: NTFS. In NT there is the notion of an item,
which can be a file or a directory. Each item has an owner. An owner is usually the
thing that created the item. It can change the access control list, allow other accounts
to change the access control list and allow other accounts to become owner. Entries
in the ACL are individuals and groups. Note that NT was designed for groups of
machines on a network, thus, a distinction is made between local groups (defined
on a particular workstation) and global groups (domain wide). A single name can
therefore mean multiple things.
NTFS is structured so that a file is a set of properties, the contents of the file being
just one of those properties. An ACL is a property of an item. The ACL itself is a
list of entries: (user or group, permissions). NTFS permissions are closer to extended
permissions in UNIX than to the 9 mode bits. The permissions offer a rich set of
possibilities:
• R — read
• W — write
• X — execute
• D — delete
• P — modify the ACL
• O — make current account the new owner (“take ownership”)
The owner is allowed to change the ACL. A user with permission P can also change
the ACL. A user with permission O can take ownership. There is also a packaging
of privileges known as permissions sets:
• no access
• read — RX
• change — RWXO
• full control — RWXDPO
NT access control is richer than UNIX, but not fundamentally different.

Exercise
1. State the importance of access control.
2. Why are authentication and authorization important?
OR
State and explain the difference between authentication and authorization.
3. What are the various ways to provide access control?
4. Write short note on:
i. DAC
ii. MAC
iii. RBAC
5. Why is appropriate role-based access important?
6. What are the various challenges faced while implementing access control?
7. What is browser isolation?
8. State the advantages of browser isolation.
9. Explain the working of Browser isolation.
10. Briefly explain the access mechanism in UNIX and WINDOWS.

5
Web Security Landscape
Learning Objective
• HTTP – features, architecture
• Cookies
• Web Server Threats
• Cross Site Request Forgery
• Cross Site Scripting (XSS)
• Defense against XSS
• Finding vulnerabilities
• Secure development

You’ve launched your website and done all you can to ensure its success, but you
may have overlooked a critical component: website security. Cyberattacks cause
costly clean-up, damage your reputation, and discourage visitors from coming back.
Fortunately, you can prevent it all with effective website security.

5.1 OVERVIEW
Web security is also known as “Cybersecurity”. Website security can be a complex
(or even confusing) topic in an ever-evolving landscape. Website security is the
measures taken to secure a website from cyberattacks. In this sense, website security
is an ongoing process and an essential part of managing a website. It basically
means protecting a website or web application by detecting, preventing and
responding to cyber threats.
Websites and web applications are just as prone to security breaches as physical
homes, stores, and government locations. Unfortunately, cybercrime happens every
day, and great web security measures are needed to protect websites and web
applications from becoming compromised.
That’s exactly what web security does – it is a system of protection measures and
protocols that can protect your website or web application from being hacked or
entered by unauthorized personnel. This integral division of Information Security is
vital to the protection of websites, web applications, and web services. Anything that
is applied over the Internet should have some form of web security to protect it.
Website security protects your website from:
• DDoS attacks. These attacks can slow or crash your site entirely, making it
inaccessible to visitors.
• Malware. Short for “malicious software,” malware is a very common threat
used to steal sensitive customer data, distribute spam, allow cybercriminals
to access your site, and more.
• Blacklisting. Your site may be removed from search engine results and flagged
with a warning that turns visitors away if search engines find malware.
• Vulnerability exploits. Cybercriminals can access a site and data stored on it
by exploiting weak areas in a site, like an outdated plugin.
• Defacement. This attack replaces your website’s content with a cybercriminal’s
malicious content.
Website security protects your visitors from:
• Stolen data. From email addresses to payment information, cybercriminals
frequently go after visitor or customer data stored on a site.
• Phishing schemes. Phishing doesn’t just happen in email – some attacks take
the form of web pages that look legitimate but are designed to trick the user
into providing sensitive information.
• Session hijacking. Some cyber attacks can take over a user’s session and
force them to take unwanted actions on a site.
• Malicious redirects. Certain attacks can redirect visitors from the site they
intended to visit to a malicious website.
• SEO Spam. Unusual links, pages, and comments can be put on a site to confuse
your visitors and drive traffic to malicious websites.

5.2 HTTP
HTTP means HyperText Transfer Protocol. HTTP is the underlying protocol used
by the World Wide Web and this protocol defines how messages are formatted and
transmitted, and what actions Web servers and browsers should take in response to
various commands.
For example, when you enter a URL in your browser, this actually sends an HTTP
command to the Web server directing it to fetch and transmit the requested Web
page. The other main standard that controls how the World Wide Web works is
HTML, which covers how Web pages are formatted and displayed.

5.2.1 Features of HTTP


• HTTP is connectionless: The HTTP client, i.e., a browser initiates an HTTP
request and after a request is made, the client waits for the response. The
server processes the request and sends a response back, after which the client
disconnects the connection. So the client and server know about each other during
the current request and response only. Further requests are made on a new connection,
as if client and server were new to each other.
• HTTP is media independent: It means, any type of data can be sent by HTTP
as long as both the client and the server know how to handle the data content.
It is required for the client as well as the server to specify the content type
using appropriate MIME-type.
• HTTP is stateless: As mentioned above, HTTP is connectionless and it is a
direct result of HTTP being a stateless protocol. The server and client are
aware of each other only during a current request. Afterwards, both of them
forget about each other. Due to this nature of the protocol, neither the client
nor the server can retain information between different requests across
web pages; the sketch below illustrates this.
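As an illustration (not part of the original text), the sketch below makes two independent HTTP requests with Python's standard http.client module. Because HTTP is connectionless and stateless, the second request carries no memory of the first unless something such as a cookie is added explicitly; the host example.com is just an example.

```python
# Two independent HTTP requests: the server treats each as if from a stranger.
import http.client

for i in range(2):
    conn = http.client.HTTPConnection("example.com", 80)   # a new connection each time
    conn.request("GET", "/")
    resp = conn.getresponse()
    print("request", i + 1, "->", resp.status, resp.reason)
    conn.close()
```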

5.2.2 Architecture
HTTP is a request/response protocol based on a client-server architecture, where
the user requests information through a web browser from the web server,
which then responds with the requested data (as shown in Fig. 5.1).
Web Client: The client of this client-server architecture sends a request to a
specific server over HTTP (on a TCP/IP connection), in the form of a request method
and a URL. The request also contains a MIME-like message that carries request
modifiers and client information.
Web Server: The server accepts the request and responds with a status line,
including the version of the message's protocol and a success or error
code, followed by a MIME-like message containing server information, some metadata,
and possibly the entity-body content holding the requested information.

[Figure: the client sends an HTTP Request to the server, and the server returns an HTTP Response.]

Fig. 5.1 HTTP

5.3 COOKIES
A computer “cookie” is more formally known as an HTTP cookie, a web cookie,
an Internet cookie or a browser cookie. The name is a shorter version of “magic
cookie,” which is a term for a packet of data that a computer receives and then
sends back without changing or altering it. A computer cookie consists of
information. When you visit a website, the website sends the cookie to your
computer. Your computer stores it in a file located inside your web browser.
Cookies are files that contain small pieces of data — like a username and password
— that are exchanged between a user’s computer and a web server to identify
specific users and improve their browsing experience.
For example, cookies let websites recognize users and recall their individual login
information and preferences, such as sports news versus politics.
Shopping sites use cookies to track items users previously viewed, allowing the
sites to suggest other goods they might like and keep items in shopping carts while
they continue shopping.
Cookies are created when users visit a new website, and the web server sends a
short stream of information to their web browsers. That cookie is only sent when
the server wants the web browser to save the cookie. In that case, it will remember
the string name=value and send it back to the server with each follow-on request. If
a user returns to that site in the future, the web browser returns that data to the web
server in the form of a cookie.
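The name=value round trip can be sketched with Python's standard http.cookies module (illustrative only; the cookie name and attribute are examples).

```python
# What the server sends, and what the browser later sends back.
from http.cookies import SimpleCookie

# Server side: build a Set-Cookie header for the response.
outgoing = SimpleCookie()
outgoing["session_id"] = "abc123"
outgoing["session_id"]["httponly"] = True
print(outgoing.output())             # Set-Cookie: session_id=abc123; HttpOnly

# Browser side: on the next request the stored pair comes back in a Cookie header.
incoming = SimpleCookie()
incoming.load("session_id=abc123")
print(incoming["session_id"].value)  # abc123
```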
The name “cookie” comes from “magic cookies,” a term coined by web browser
programmer Lou Montulli. The term refers to packets of information that are sent
and received without changes. The analogy to the munchable baked good is
coincidental, although appropriate.
There are three different types of cookies:
• Session Cookies – these are mainly used by online shops and allow you to
keep items in your basket when shopping online. These cookies expire after a
specific time or when the browser is closed.
• Permanent Cookies – these remain in operation, even when you have closed
the browser. They remember your login details and password so you don’t
have to type them in every time you use the site. It is recommended that you
delete this type of cookie after a specific time.
• Third-Party Cookies – these are installed by third parties for collecting certain
information. For example: Google Maps.
Cookies carry a security risk, but as with most online activities it’s possible to
negate and reduce these risks. To protect yourself from the more dangerous aspects
of cookies, make sure you do the following:
• Always be careful when sharing personal information. Cookies can transmit
this information, so tread carefully. And if you’re using a public computer
then do not send any personal information.
• Disable the storage of cookies in your internet browser. This reduces the amount
of information being shared and can be adjusted in your browser’s privacy
settings.
• Browser add-ons are available that block third-party software such as cookie
trackers and ensure that your browsing habits remain private.
• Always make sure you have anti-malware software installed on your PC as
malware can often disguise itself as harmless cookies or infiltrate advertising
networks.
• If a website asks you to accept cookies and you’re unsure of its legitimacy
then leave the website immediately.

5.4 MAJOR WEB SERVER THREATS


Websites are hosted on web servers. Web servers are themselves computers running
an operating system; connected to the back-end database, running various
applications. Any vulnerability in the applications, Database, Operating system or
in the network will lead to an attack on the web server. The common vulnerabilities
are shown in Fig. 5.2.

[Figure: the vulnerability stack of a web server, from top to bottom: custom web applications (business logic flaws, technical vulnerabilities), third-party web applications (open source / commercial), the web server itself (Apache / Microsoft IIS), the database (Oracle / MySQL / Db2), other applications (open source / commercial), the operating system (Windows / Linux / OS X), and the network (routers / firewalls).]

Fig. 5.2 Vulnerability Stack of Web Server



The most common web security threats are as follows:


i. Computer Virus: Computer viruses are pieces of software that are designed
to be spread from one computer to another. They’re often sent as email
attachments or downloaded from specific websites with the intent to infect
your computer — and other computers on your contact list — by using systems
on your network. Viruses are known to send spam, disable your security
settings, corrupt and steal data from your computer including personal
information such as passwords, even going as far as to delete everything on
your hard drive.
ii. Rogue Security Software: Rogue security software is malicious software
that misleads users into believing there is a computer virus installed on their
computer or that their security measures are not up to date. It then offers to
install or update users’ security settings. They’ll either ask you to download
their program to remove the alleged viruses, or to pay for a tool. Both cases
lead to actual malware being installed on your computer.
iii. Trojan Horse: Metaphorically, a “Trojan horse” refers to tricking someone
into inviting an attacker into a securely protected area. In computing, it holds
a very similar meaning — a Trojan horse, or “Trojan,” is a malicious bit of
attacking code or software that tricks users into running it willingly, by hiding
behind a legitimate program.
They spread often by email; it may appear as an email from someone you
know, and when you click on the email and its included attachment, you’ve
immediately downloaded malware to your computer. Trojans also spread when
you click on a false advertisement. Once inside your computer, a Trojan horse
can record your passwords by logging keystrokes, hijacking your webcam,
and stealing any sensitive data you may have on your computer.
iv. Adware and Spyware: By “adware” we consider any software that is designed
to track data of your browsing habits and, based on that, show you
advertisements and pop-ups. Adware collects data with your consent — and
is even a legitimate source of income for companies that allow users to try
their software for free, but with advertisements showing while using the
software. The adware clause is often hidden in related User Agreement docs,
but it can be checked by carefully reading anything you accept while installing
software. The presence of adware on your computer is noticeable only in
those pop-ups, and sometimes it can slow down your computer’s processor
and internet connection speed. When adware is downloaded without consent,
it is considered malicious.
Spyware works similarly to adware, but is installed on your computer without
your knowledge. It can contain keyloggers that record personal information
including email addresses, passwords, even credit card numbers, making it
dangerous because of the high risk of identity theft.
v. Computer Worm: Computer worms are pieces of malware programs that
replicate quickly and spread from one computer to another. A worm spreads
from an infected computer by sending itself to all of the computer’s contacts,
then immediately to the contacts of the other computers. Interestingly, they are
not always designed to cause harm; there are worms that are made just to
spread. Transmission of worms is also often done by exploiting software
vulnerabilities.
vi. Denial-of-Service attack: A denial-of-service attack is a security event that
occurs when an attacker prevents legitimate users from accessing specific
computer systems, devices, services or other IT resources. An attacker may
cause a denial of service by sending numerous service request packets that
overwhelm the servicing capability of the web server, or by exploiting a
programming error in the application.
vii. Phishing: Phishing is a method of social engineering with the goal of
obtaining sensitive data such as passwords, usernames, and credit card
numbers. The attacks often come in the form of instant messages or phishing
emails designed to appear legitimate. The recipient of the email is then tricked
into opening a malicious link, which leads to the installation of malware on
the recipient’s computer. It can also obtain personal information by sending
an email that appears to be sent from a bank, asking to verify your identity by
giving away your private information.
viii. SQL Injection attack: SQL injection attacks are designed to target data-driven
applications by exploiting security vulnerabilities in the application’s software.
They use malicious code to obtain private data, change and even destroy that
data, and can go as far as to void transactions on websites. It has quickly
become one of the most dangerous privacy issues for data confidentiality; a
small illustration follows below.
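To make item (viii) concrete, the sketch below (illustrative only, using Python's built-in sqlite3 module with a throw-away in-memory database) shows how attacker-controlled input pasted into the query text changes the meaning of the query, and how a parameterized query avoids the problem.

```python
# SQL injection in miniature: the classic ' OR '1'='1 trick.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, password TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 's3cret')")

attacker_input = "anything' OR '1'='1"

# Vulnerable: attacker input is pasted straight into the SQL text.
unsafe = f"SELECT * FROM users WHERE name = 'alice' AND password = '{attacker_input}'"
print(db.execute(unsafe).fetchall())       # returns alice's row without the password

# Safer: the value is passed as a parameter and cannot alter the query structure.
safe = "SELECT * FROM users WHERE name = ? AND password = ?"
print(db.execute(safe, ("alice", attacker_input)).fetchall())   # returns []
```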

5.5 CROSS SITE REQUEST FORGERY


Cross-site request forgery is a web security vulnerability that allows an attacker to
induce users to perform actions that they do not intend to perform. It allows an
attacker to partly circumvent the same origin policy, which is designed to prevent
different websites from interfering with each other.
Cross site request forgery (CSRF), also known as XSRF, Sea Surf or Session Riding,
is an attack vector that tricks a web browser into executing an unwanted action in
an application to which a user is logged in.

A successful CSRF attack can be devastating for both the business and user. It can
result in damaged client relationships, unauthorized fund transfers, changed
passwords and data theft—including stolen session cookies. CSRFs are typically
conducted using malicious social engineering, such as an email or link that tricks
the victim into sending a forged request to a server. Because the unsuspecting user is
authenticated by their application at the time of the attack, it is impossible to
distinguish a legitimate request from a forged one.
The process of CSRF is shown in Fig. 5.3

[Figure: (1) the perpetrator forges a request for a fund transfer to a website; (2) the perpetrator embeds the request into a hyperlink and sends it to visitors who may be logged into the site; (3) a visitor clicks on the link, inadvertently sending the request to the website; (4) the website validates the request and transfers funds from the visitor's account to the perpetrator.]

Fig. 5.3 Cross Site Request Forgery


For a CSRF attack to be possible, three key conditions must be in place:
• A relevant action. There is an action within the application that the attacker
has a reason to induce. This might be a privileged action (such as modifying
permissions for other users) or any action on user-specific data (such as
changing the user’s own password).
• Cookie-based session handling. Performing the action involves issuing one
or more HTTP requests, and the application relies solely on session cookies
to identify the user who has made the requests. There is no other mechanism
in place for tracking sessions or validating user requests.
• No unpredictable request parameters. The requests that perform the action
do not contain any parameters whose values the attacker cannot determine or
guess. For example, when causing a user to change their password, the function
is not vulnerable if an attacker needs to know the value of the existing password.
A number of effective methods exist for both prevention and mitigation of CSRF attacks.
From a user's perspective, prevention is a matter of safeguarding login credentials and
denying unauthorized actors access to applications; on the server side, the usual defense
is an unpredictable request parameter such as an anti-CSRF token (a sketch follows the
list below). Best practices for users include:

• Logging off web applications when not in use


• Securing usernames and passwords
• Not allowing browsers to remember passwords
• Avoiding browsing other sites simultaneously while logged into an application
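From the application's side, the most common defense is to include an unpredictable, per-session anti-CSRF token in every state-changing request, which removes the "no unpredictable request parameters" condition described earlier. The following is a minimal, framework-agnostic sketch in Python; the function names and the dictionary standing in for a server-side session store are illustrative assumptions, not part of any particular framework.

import hmac
import secrets

def issue_csrf_token(session):
    # Generate a random token once per session and store it server-side.
    # The application embeds this token in a hidden field of each form.
    token = secrets.token_urlsafe(32)
    session["csrf_token"] = token
    return token

def is_request_allowed(session, submitted_token):
    # Reject a state-changing request unless the token submitted with the
    # form matches the one stored in the user's session.
    expected = session.get("csrf_token")
    if expected is None or submitted_token is None:
        return False
    # Constant-time comparison avoids leaking information through timing.
    return hmac.compare_digest(expected, submitted_token)

# Example usage with a plain dictionary acting as the session store:
session = {}
token = issue_csrf_token(session)
print(is_request_allowed(session, token))      # True  - legitimate form post
print(is_request_allowed(session, "forged"))   # False - forged cross-site request

A forged request sent from another site cannot include the correct token, because the same origin policy prevents the attacking page from reading it, so the check above fails for CSRF attempts.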

5.6 CROSS SITE SCRIPTING


Cross-site scripting (also known as XSS) is a web security vulnerability that allows
an attacker to compromise the interactions that users have with a vulnerable
application. It allows an attacker to circumvent the same origin policy, which is
designed to segregate different websites from each other. Cross-site scripting
vulnerabilities normally allow an attacker to masquerade as a victim user, to carry
out any actions that the user is able to perform, and to access any of the user’s data.
If the victim user has privileged access within the application, then the attacker
might be able to gain full control over all of the application’s functionality and
data.
Cross-site scripting works by manipulating a vulnerable web site so that it returns
malicious JavaScript to users. When the malicious code executes inside a victim’s
browser, the attacker can fully compromise their interaction with the application.
The process of XSS is shown in Fig. 5.4.
Cross-Site Scripting (XSS) attacks occur when:
• Data enters a Web application through an untrusted source, most frequently a
web request.
• The data is included in dynamic content that is sent to a web user without
being validated for malicious content.
The malicious content sent to the web browser often takes the form of a segment
of JavaScript, but may also include HTML, Flash, or any other type of code that
the browser may execute. The variety of attacks based on XSS is almost limitless,
but they commonly include transmitting private data, like cookies or other session
information, to the attacker, redirecting the victim to web content controlled by the
attacker, or performing other malicious operations on the user’s machine under the
guise of the vulnerable site.
There are three main types of XSS attacks. These are:
• Reflected XSS, where the malicious script comes from the current HTTP
request.
• Stored XSS, where the malicious script comes from the website’s database.
• DOM-based XSS, where the vulnerability exists in client-side code rather
than server-side code.

Fig. 5.4 XSS

5.7 DEFENSES AND PROTECTION AGAINST XSS


When we discuss vulnerabilities in applications, there are different categories that
we come across. Some vulnerabilities are extremely common yet allow for little or
no damage should an attacker discover and exploit them, while others are incredibly
rare but can have major, lasting impact on the organizations behind the attacked
application. Then, there’s the third category: Common and deadly. Cross-Site
Scripting, commonly shortened to XSS, is one of the most common vulnerabilities
found in applications, and can cause serious damage given the right time and the
right attacker.
XSS vulnerabilities are common enough to have graced applications as big and
popular as Facebook, Google, and PayPal, and XSS has been a mainstay on the
OWASP Top 10 list since its inception. XSS vulnerabilities are especially dangerous
because an attacker exploiting an XSS attack can gain the ability to do whatever
the user can do, and to see what the user sees – including passwords, payment and
financial information, and more. Worse – the victims, both the user and the
vulnerable application, often won’t be aware they’re being attacked.
XSS attacks, in essence, trick an application into sending malicious script through
the browser, which believes the script is coming from the trusted website. Each
time an end user accesses the affected page, their browser will download and run
the malicious script as if it was part of the page. In the majority of XSS attacks, the
attacker will try to hijack the user’s session by stealing their cookies and session
tokens, or will use the opportunity to spread malware and malicious JavaScript.
XSS vulnerabilities are difficult to prevent simply because there are so many vectors
where an XSS attack can be used in most applications. In addition, unlike other
vulnerabilities such as SQL injection or OS command injection, XSS affects only the
user of the website, making these flaws more difficult to catch and even harder to fix. Also
unlike SQL injection, which can be eliminated with the proper use of prepared statements,
there’s no single standard or strategy to preventing cross-site scripting attacks.
There are two main types of cross-site scripting attacks: Stored (or persistent) XSS,
which is when malicious script is injected directly into the vulnerable application,
and reflected XSS, which involves ‘reflecting’ malicious script into a link on a
page, which will activate the attack once the link has been clicked.
There are three main ways to protect against XSS attacks:

5.7.1 Escaping
The first method you can and should use to prevent XSS vulnerabilities from
appearing in your applications is by escaping user input. Escaping data means
taking the data an application has received and ensuring it’s secure before rendering
it for the end user. By escaping user input, key characters in the data received by a
web page will be prevented from being interpreted in any malicious way. In essence,
you’re censoring the data your web page receives in a way that will disallow the
characters – especially < and > characters – from being rendered, which otherwise
could cause harm to the application and/or users.
If your page doesn’t allow users to add their own code to the page, a good rule of
thumb is to then escape any and all HTML, URL, and JavaScript entities. However,
if your web page does allow users to add rich text, such as on forums or post
comments, you have a few choices. You'll either need to carefully choose which
HTML entities you will escape and which you won't, or use a replacement format
for raw HTML, such as Markdown, which in turn allows you to continue escaping
all HTML.

5.7.2 Validating Input


Validating input is the process of ensuring an application is rendering the correct
data and preventing malicious data from doing harm to the site, database, and
users. While whitelisting and input validation are more commonly associated with
SQL injection, they can also be used as an additional method of prevention for
XSS. Whereas blacklisting, or disallowing certain, predetermined characters in
user input, disallows only known bad characters, whitelisting only allows known
good characters and is a better method for preventing XSS attacks as well as others.
Input validation is especially helpful and good at preventing XSS in forms, as it
prevents a user from adding special characters into the fields, instead refusing the
request. However, as OWASP maintains, input validation is not a primary prevention
method for vulnerabilities such as XSS and SQL injection, but instead helps to
reduce the effects should an attacker discover such a vulnerability.
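A brief illustration of whitelisting: instead of enumerating bad characters, the check below accepts a username only if every character belongs to a known-good set and refuses anything else. The exact policy (length limits and allowed characters) is an assumption made for the example.

import re

# Whitelist: 3 to 20 characters, letters, digits, underscore and hyphen only.
USERNAME_PATTERN = re.compile(r"^[A-Za-z0-9_-]{3,20}$")

def validate_username(value):
    # Returns True only for input made up entirely of known-good characters.
    return bool(USERNAME_PATTERN.fullmatch(value))

print(validate_username("alice_01"))                    # True
print(validate_username("<script>alert(1)</script>"))   # False - request refused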

5.7.3 Sanitizing
A third way to prevent cross-site scripting attacks is to sanitize user input. Sanitizing
data is a strong defense, but should not be used alone to battle XSS attacks. It’s
totally possible you’ll find the need to use all three methods of prevention in working
towards a more secure application. Sanitizing user input is especially helpful on
sites that allow HTML markup: by scrubbing the received data clean of potentially
harmful markup and changing unacceptable user input into an acceptable format, it
ensures the data can do no harm to users or to your database.
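For pages that must accept limited HTML, such as forum posts, sanitization is normally done with a well-tested library rather than hand-written string manipulation. The sketch below assumes the third-party Python package bleach; the list of allowed tags is an example choice, not a recommendation of this text.

import bleach  # third-party package: pip install bleach

dirty = 'Nice post! <b>thanks</b> <script>steal(document.cookie)</script>'

# Keep only a small set of harmless formatting tags; anything else, including
# the <script> element, is stripped or neutralised.
clean = bleach.clean(dirty,
                     tags=["b", "i", "em", "strong", "a"],
                     attributes={"a": ["href"]},
                     strip=True)
print(clean)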

5.8 FINDING VULNERABILITY


Mistakes happen, even in the process of building and coding technology. What’s
left behind from these mistakes is commonly referred to as a bug. While bugs
aren’t inherently harmful (except to the potential performance of the technology),
many can be taken advantage of by nefarious actors—these are known as
vulnerabilities. Vulnerabilities can be leveraged to force software to act in ways
it’s not intended to, such as gleaning information about the current security defenses
in place. A vulnerability scanner scans and compares your environment against a
vulnerability database, or a list of known vulnerabilities; the more information the
scanner has, the more accurate its performance. Once a team has a report of the
vulnerabilities, developers can use penetration testing as a means to see where the
weaknesses are, so the problem can be fixed and future mistakes can be avoided.
When employing frequent and consistent scanning, you’ll start to see common
threads between the vulnerabilities for a better understanding of the full system.
A Security Vulnerability is a weakness, flaw, or error found within a security system
that has the potential to be leveraged by a threat agent in order to compromise a
secure network.
There are a number of Security Vulnerabilities, but some common examples are:
• Broken Authentication: When authentication credentials are compromised,
user sessions and identities can be hijacked by malicious actors to pose as the
original user.
• SQL Injection: As one of the most prevalent security vulnerabilities, SQL
injections attempt to gain access to database content via malicious code
injection. A successful SQL injection can allow attackers to steal sensitive
data, spoof identities, and participate in a collection of other harmful activities.
• Cross-Site Scripting: Much like an SQL Injection, a Cross-site scripting (XSS)
attack also injects malicious code into a website. However, a Cross-site
scripting attack targets website users, rather than the actual website itself,
which puts sensitive user information at risk of theft.

• Cross-Site Request Forgery: A Cross-Site Request Forgery (CSRF) attack


aims to trick an authenticated user into performing an action that they do not
intend to do. This, paired with social engineering, can deceive users into
accidentally providing a malicious actor with personal data.
• Security Misconfiguration: Any component of a security system that can be
leveraged by attackers due to a configuration error can be considered a
“Security Misconfiguration.”

5.9 SECURE DEVELOPMENT


A software development life cycle (SDLC) is a framework that defines the process
used by organizations to build an application from its inception to its decommission.
Over the years, multiple standard SDLC models have been proposed (waterfall,
iterative, agile, etc.) and used in various ways to fit individual circumstances. It is,
however, safe to say that in general, SDLCs include the following phases:
• Planning and requirements
• Architecture and design
• Test planning
• Coding
• Testing and results
• Release and maintenance
In the past, it was common practice to perform security-related activities only as
part of testing. This after-the-fact technique usually resulted in a high number of
issues discovered too late (or not discovered at all). It is a far better practice to
integrate activities across the SDLC to help discover and reduce vulnerabilities
early, effectively building security in.
The primary advantages of pursuing a secure SDLC approach are:
• More secure software as security is a continuous concern
• Awareness of security considerations by stakeholders
• Early detection of flaws in the system
• Cost reduction as a result of early detection and resolution of issues
• Overall reduction of intrinsic business risks for the organization
A secure SDLC is set up by adding security-related activities to an existing
development process. For example, writing security requirements alongside the
collection of functional requirements, or performing an architecture risk analysis
during the design phase of the SDLC. Many secure SDLC models have been

proposed. Here are a few of them:


• MS Security Development Lifecycle (MS SDL): One of the first of its kind,
the MS SDL was proposed by Microsoft in association with the phases of a
classic SDLC.
• NIST 800-64: Provides security considerations within the SDLC. Standards
were developed by the National Institute of Standards and Technology to be
observed by US federal agencies.
• OWASP CLASP (Comprehensive, Lightweight Application Security Process):
Simple to implement and based on the MS SDL. It also maps the security
activities to roles in an organization.

Exercise
1. How does web security protect your web site?
2. What is SEO spam?
3. State and explain the features of HTTP.
4. Explain HTTP request and HTTP response.
5. What are cookies?
6. How do you protect your system from the effect of cookies?
7. Briefly explain the web server threats.
8. Explain the working of CSRF.
9. What are the various Cross site scripting attacks?
10. How do you protect your system against XSS?
11. Why is it important to have security as a part of the SDLC?
OR
“SDLC should have security as an important part”. Justify.
12. State and explain the common security vulnerabilities.
6
Basic Cryptography
Learning Objective
• Cryptography
• Public Key Encryption-RSA
• Digital Signature
• Public Key Distribution
• E-Mail Security Certificates
• Transport Layer Security
• IP Security
• DNS Security

Cryptography is the science of hiding information in plain sight, in order to conceal


it from unauthorized access. It is a technique of storing and transmitting data in a
particular form so that only those for whom it is intended can read and process
it. Cryptography makes secure websites and safe electronic transmissions possible.
For a website to be secure, all of the data transmitted between the computers where
the data is kept and where it is received must be encrypted. Because of the large number
of commercial transactions on the internet, cryptography is essential to ensuring
the security of those transactions.

6.1 PUBLIC KEY CRYPTOGRAPHY


When two parties communicate with each other, the intelligible or
sensible message to be transferred, referred to as plaintext, is converted into apparently random
nonsense, referred to as cipher text, for security purposes. The process of changing
the plaintext into the cipher text is referred to as encryption. The encryption process
consists of an algorithm and a key. The key is a value independent of the plaintext.
The security of conventional encryption depends on two major factors:
1. The Encryption algorithm
2. Secrecy of the key
The algorithm will produce a different output depending on the specific key being
used at the time. Changing the key changes the output of the algorithm.
Once the cipher text is produced, it may be transmitted. Upon reception, the cipher
text can be transformed back to the original plaintext by using a decryption algorithm
and the same key that was used for encryption.

The process of changing the cipher text back to the plaintext is known as
decryption.
An asymmetric cryptosystem is one in which encryption and decryption are
performed using different keys: a Public key (known to everyone) and a Private Key
(secret key). This is known as Public Key Encryption.
Characteristics of Public Key Encryption:
• Public key Encryption is important because it is infeasible to determine the
decryption key given only the knowledge of the cryptographic algorithm and
encryption key.
• Either of the two keys (Public or Private key) can be used for encryption, with the
other key used for decryption.
• In a public key cryptosystem, public keys can be freely shared, allowing
users an easy and convenient method for encrypting content and verifying
digital signatures, while private keys are kept secret, ensuring that only the
owners of the private keys can decrypt content and create digital signatures.
• The most widely used public-key cryptosystem is RSA (Rivest–Shamir–
Adleman). The difficulty of finding the prime factors of a composite number
is the backbone of RSA.
To make the working of Public Key Encryption clearer, follow Fig. 6.1.
Plain Text → Encryption Algorithm (using the Public Key) → Cipher Text → Decryption Algorithm (using the Private Key) → Plain Text
Fig. 6.1 Public Key Encryption

6.1.1 Components of Public Key Encryption:


• Plain Text: This is the message which is readable or understandable. This
message is given to the Encryption algorithm as an input.
• Cipher Text: The cipher text is produced as an output of Encryption algorithm.

We cannot simply understand this message.


• Encryption Algorithm: The encryption algorithm is used to convert plain
text into cipher text.
• Decryption Algorithm: It accepts the cipher text as input along with the matching
key (Private Key or Public key) and produces the original plain text.
• Public and Private Key: One key, either the Private key (secret key) or the Public
Key (known to everyone), is used for encryption and the other is used for decryption.

6.1.2 Weakness of the Public Key Encryption:


• Public key Encryption is vulnerable to Brute-force attack.
• The algorithm also fails if a user loses his private key; in that case, Public
Key Encryption becomes highly vulnerable.
• Public Key Encryption is also weak against man-in-the-middle attacks. In this
attack, a third party can intercept the public key communication and then modify
the public keys.
• If a private key used for certificate creation higher in the PKI (Public Key
Infrastructure) server hierarchy is compromised, or accidentally disclosed,
then a "man-in-the-middle attack" is also possible, making any subordinate
certificate wholly insecure. This is also a weakness of Public Key Encryption.

6.1.3 Applications:
• Confidentiality can be achieved using Public Key Encryption. Here, the plain
text is encrypted using the receiver's public key. This ensures that no one other
than the holder of the receiver's private key can decrypt the cipher text.
• A digital signature serves the purpose of sender authentication. Here, the sender
encrypts the plain text using his own private key. This step ensures the
authentication of the sender, because the receiver can decrypt the cipher text using
the sender's public key only.
• This algorithm can be used both for key management and for the secure transmission of
data.

6.2 RSA PUBLIC KEY CRYPTOGRAPHY


RSA encryption is a system that solves what was once one of the biggest problems
in cryptography: How can you send someone a coded message without having an
opportunity to previously share the code with them? Under RSA encryption,
messages are encrypted with a code called a public key, which can be shared openly.
Due to some distinct mathematical properties of the RSA algorithm, once a message

has been encrypted with the public key, it can only be decrypted by another key,
known as the private key. Each RSA user has a key pair consisting of their public
and private keys. As the name suggests, the private key must be kept secret.
Public key encryption schemes differ from symmetric-key encryption, where both
the encryption and decryption process use the same private key. These differences
make public key encryption like RSA useful for communicating in situations where
there has been no opportunity to safely distribute keys beforehand.
RSA encryption is often used in combination with other encryption schemes, or
for digital signatures which can prove the authenticity and integrity of a message.
It isn’t generally used to encrypt entire messages or files, because it is less efficient
and more resource-heavy than symmetric-key encryption.
To make things more efficient, a file will generally be encrypted with a symmetric-
key algorithm, and then the symmetric key will be encrypted with RSA encryption.
Under this process, only an entity that has access to the RSA private key will be able
to decrypt the symmetric key. Without being able to access the symmetric key, the
original file can’t be decrypted. This method can be used to keep messages and files
secure, without taking too long or consuming too many computational resources.
RSA encryption can be used in a number of different systems. It can be implemented
in OpenSSL, wolfCrypt, cryptlib and a number of other cryptographic libraries. As
one of the first widely used public-key encryption schemes, RSA laid the foundations
for much of our secure communications. It was traditionally used in TLS and was
also the original algorithm used in PGP encryption. RSA is still seen in a range of
web browsers, email, VPNs, chat and other communication channels. RSA is also
often used to make secure connections between VPN clients and VPN servers.
Under protocols like OpenVPN, TLS handshakes can use the RSA algorithm to
exchange keys and establish a secure channel.

6.2.1 Generation of RSA Key Pair


Each person or a party who desires to participate in communication using encryption
needs to generate a pair of keys, namely public key and private key. The process
followed in the generation of keys is described below:-

6.2.2 Generate the RSA modulus (n)


• Select two large primes, p and q.
• Calculate n = p × q. For strong encryption, let n be a large number,
typically a minimum of 512 bits (in current practice, 2048 bits or more).

6.2.3 Find Derived Number (e)


• Number e must be greater than 1 and less than (p – 1)(q – 1).

• There must be no common factor for e and (p – 1)(q – 1) except for 1. In other
words two numbers e and (p – 1)(q – 1) are coprime.

6.2.4 Form the public key


• The pair of numbers (n, e) form the RSA public key and is made public.
• Interestingly, though n is part of the public key, the difficulty of factorizing a
large number ensures that an attacker cannot find, in any feasible time, the two
primes (p & q) used to obtain n. This is the strength of RSA.

6.2.5 Generate the private key


• Private Key d is calculated from p, q, and e. For given n and e, there is a unique
number d.
• Number d is the inverse of e modulo (p – 1)(q – 1). This means that d is the
number less than (p – 1)(q – 1) such that when multiplied by e, it is equal to 1
modulo (p – 1)(q – 1).
• This relationship is written mathematically as follows:-
ed = 1 mod (p – 1)(q – 1)
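As a quick numeric check, using the small values assumed here to match the worked example later in this section, take p = 7 and q = 13, so (p – 1)(q – 1) = 6 × 12 = 72, and let e = 5. Then d = 29, because 5 × 29 = 145 = 2 × 72 + 1, i.e. 5 × 29 = 1 mod 72. These choices give the values n = 91, e = 5 and d = 29 used in the encryption and decryption examples below.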

6.2.6 RSA Encryption


• Suppose the sender wishes to send some text message to someone whose public
key is (n, e).
• The sender then represents the plaintext as a series of numbers less than n.
• To encrypt the first plaintext block P, which is a number modulo n, the encryption
process is the simple mathematical step:
C = P^e mod n
• In other words, the ciphertext C is equal to the plaintext P multiplied by itself
e times and then reduced modulo n. This means that C is also a number less
than n.
• Returning to our key generation example with plaintext P = 10, we get
ciphertext C:
C = 10^5 mod 91 = 82

6.2.7 RSA Decryption


• The decryption process for RSA is also very straightforward. Suppose that
the receiver, who owns the key pair (n, e), has received a ciphertext C.
• The receiver raises C to the power of his private key d. The result modulo n will
be the plaintext P.
Plaintext = C^d mod n
• Returning again to our numerical example, the ciphertext C = 82 would get
decrypted to number 10 using private key 29:
Plaintext = 82^29 mod 91 = 10
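The whole numeric example can be reproduced with a few lines of Python (version 3.8 or later for the modular inverse). This is a toy sketch using the tiny primes p = 7 and q = 13, which are assumed here because they give n = 91; real RSA uses primes hundreds of digits long together with padding schemes, so this code must never be used for actual security.

# Toy RSA with the textbook numbers; for illustration only, not secure.
p, q = 7, 13
n = p * q                  # 91, the RSA modulus
phi = (p - 1) * (q - 1)    # 72
e = 5                      # public exponent, coprime with phi
d = pow(e, -1, phi)        # modular inverse of e mod phi -> 29 (Python 3.8+)

P = 10                     # plaintext represented as a number less than n
C = pow(P, e, n)           # encryption: C = P^e mod n -> 82
recovered = pow(C, d, n)   # decryption: P = C^d mod n -> 10

print(n, e, d)             # 91 5 29
print(C, recovered)        # 82 10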

6.3 DIGITAL SIGNATURE


A digital signature is a mathematical technique which validates the authenticity
and integrity of a message, software or digital documents. It allows us to verify the
author name, date and time of signatures, and authenticate the message contents.
The digital signature offers far more inherent security and is intended to solve the
problem of tampering and impersonation (intentionally copying another person's
characteristics) in digital communications.
Computer-based authentication of business information involves both
technology and the law. It also calls for cooperation between people of different
professional backgrounds and areas of expertise. Digital signatures are different
from other electronic signatures not only in terms of process and result; these differences
also make digital signatures more serviceable for legal purposes. Some electronic
signatures that are legally recognizable as signatures may not be as secure as digital
signatures and may lead to uncertainty and disputes.

6.3.1 Model of Digital Signature


As stated previously, the digital signature scheme is based on public key
cryptography. The model of the digital signature scheme is depicted in the following
illustration (as shown in Fig. 6.2):
(Figure: on the signer's side, the data is hashed and the hash is signed with the signer's private signature key; the signature travels with the data. On the verifier's side, the received data is hashed again and the verification algorithm, using the signer's public verification key, checks whether the two results are equal.)
Fig. 6.2 Model of Digital Signature



The following points clarify the entire procedure in detail (a short code sketch follows the list):


• Each person adopting this scheme has a public-private key pair.
• Usually, the key pairs used for encryption/decryption and signing/verifying
are different. The private key used for signing is referred to as the
signature key and the public key as the verification key.
• The signer feeds the data to the hash function and generates the hash of the data.
• The hash value and the signature key are then provided to the signature algorithm,
which creates the digital signature on the given hash. The signature is appended to the
data and then both are sent to the verifier.
• The verifier feeds the digital signature and the verification key into the verification
algorithm. The verification algorithm produces some value as output.
• The verifier also runs the same hash function on the received data to generate a hash value.
• For verification, this hash value and the output of the verification algorithm are
compared. Based on the comparison result, the verifier decides whether the digital
signature is valid.
• Since the digital signature is created with the signer's private key, which no one else
can have, the signer cannot repudiate signing the data in the future.
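As a concrete illustration of the signing and verification steps listed above, the sketch below uses the third-party Python cryptography package (an assumption; any comparable library would serve) to hash data, sign the hash with the private signature key, and verify the result with the matching public verification key.

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Signer's key pair: the private key acts as the signature key,
# the public key as the verification key.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

data = b"transfer 100 to account 42"

# Signer: the library hashes the data and signs the hash with the private key.
signature = private_key.sign(
    data,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# Verifier: the hash is recomputed and checked against the signature using the
# public key; verify() raises InvalidSignature if data or signature was altered.
public_key.verify(
    signature,
    data,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)
print("signature valid")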

6.3.2 Encryption with Digital Signature


In many digital communications, it is necessary to exchange encrypted messages
rather than plaintext to attain privacy. In a public key encryption scheme, the public
(encryption) keys are available in the open domain, and hence anybody can
spoof the sender's identity and send any encrypted message to the receiver.
This makes it vital for users employing PKC for encryption to add digital signatures
alongside the encrypted data, to be assured of message authentication and non-
repudiation.
This can be achieved by combining digital signatures with the encryption scheme. Let us
briefly discuss how to attain this requirement. There are two possibilities: sign-
then-encrypt and encrypt-then-sign.
The cryptosystem based on sign-then-encrypt can be exploited by the receiver
to spoof the identity of the sender and send that data to a third party. Therefore,
this method is not favored. The procedure of encrypt-then-sign is more dependable
and broadly adopted. This is illustrated in Fig. 6.3:

Sener’s Side

Encrypted Data
Data Encryption using +
Receiver’s Public Key Digital Signature

Hashing
Function

Digital Signature with


Hash Sender’s Private Key

Fig. 6.3 Digital Signature using Hash Function


The receiver, after getting the encrypted data and the signature on it, first verifies the
signature using the sender's public key. After confirming the validity of the signature,
he then retrieves the data through decryption using his own private key.

6.3.3 Importance of Digital Signature


Out of all cryptographic primitives, the digital signature using public key
cryptography is considered a very important and useful tool to achieve information
security.
Apart from the ability to offer non-repudiation of the message, the digital signature also
offers message authentication and data integrity. Let us briefly see how this is
attained by the digital signature:
• Message authentication – When the verifier authenticates the digital signature
using the public key of the sender, he is assured that the signature has been generated
only by the sender, who owns the matching secret private key, and no one else.
• Data Integrity – In case an attacker has access to the data and modifies it,
the digital signature verification at the receiver's end fails. The hash of the modified
data and the output delivered by the verification algorithm will not match.
Therefore, the receiver can safely reject the message, assuming that data integrity
has been breached.
• Non-repudiation – Since it is assumed that only the signer has knowledge of the
signature key, only he can create a unique signature on given data. Thus the receiver
can present the data and the digital signature to a third party as evidence if any
dispute arises in the future.

6.4 PUBLIC KEY DISTRIBUTION


Several techniques have been proposed for the distribution of public keys. Virtually
all these proposals can be grouped into the following general schemes:

6.4.1 Public Announcement of Public Keys


On the face of it, the point of public key encryption is that the public key is public.
Thus, if there is some broadly accepted public-key algorithm, such as RSA, any
participant can send his or her public key to any other participant or broadcast the
key to the community at large. For example, because of the growing popularity of
PGP (pretty good privacy), which makes use of RSA, many PGP users have adopted
the practice of appending their public key to messages that they send to public
forums, such as USENET newsgroups and Internet mailing lists. Although this
approach is convenient, it has a major weakness. Anyone can forge such a public
announcement. That is, some user could pretend to be user A and send a public key
to another participant or broadcast such a public key. Until such time as user A
discovers the forgery and alerts other participants, the forger is able to read all
encrypted messages intended for A and can use the forged keys for authentication.

6.4.2 Publicly Available Directory


A greater degree of security can be achieved by maintaining a publicly available
dynamic directory of public keys. Maintenance and distribution of the public directory
would have to be the responsibility of some trusted entity or organization (Fig. 6.4). Such a
scheme would include the following elements:
• The authority maintains a directory with a {name, public key} entry for each
participant.
• Each participant registers a public key with the directory authority. Registration
would have to be in person or by some form of secure authenticated
communication.
• A participant may replace the existing key with a new one at any time, either
because of the desire to replace a public key that has already been used for a
large amount of data, or because the corresponding private key has been compromised
in some way.

(Figure: participants A and B each obtain the other's public key, PUa and PUb, from a central public-key directory.)
Fig. 6.4 Public Available Directory


• Participants could also access the directory electronically. For this purpose,
secure, authenticated communication from the authority to the participant is
mandatory.
This scheme is clearly more secure than individual public announcements but
still has vulnerabilities. If an adversary succeeds in obtaining or computing
the private key of the directory authority, the adversary could authoritatively
pass out counterfeit public keys and subsequently impersonate any participant
and eavesdrop on messages sent to any participant. Another way to achieve
the same end is for the adversary to tamper with the records kept by the
authority.

6.4.3 Public-Key Authority


Stronger security for public-key distribution can be achieved by providing tighter
control over the distribution of public keys from the directory. As before, the
scenario assumes that a central authority maintains a dynamic directory of public
keys of all participants. In addition, each participant reliably knows a public key
for the authority, with only the authority knowing the corresponding private key.

6.4.4 Public Key Certificates


This approach uses certificates that participants can use to exchange keys without contacting
a public-key authority, in a way that is as reliable as if the keys were obtained
directly from a public-key authority. In essence, a certificate consists of a public
key, an identifier of the key owner, and the whole block signed by a trusted third
party. Typically, the third party is a certificate authority, such as a government
agency or a financial institution that is trusted by the user community. A user can
present his or her public key to the authority in a secure manner and obtain a
certificate. The user can then publish the certificate. Anyone needing this user’s

public key can obtain the certificate and verify that it is valid by way of the attached
trusted signature. A participant can also convey its key information to another by
transmitting its certificate. Other participants can verify that the certificate was
created by the authority.

6.5 E-MAIL SECURITY CERTIFICATES


When you send an email through conventional email platforms such as Outlook,
Gmail, or Yahoo, the information could be visible to people who know how to
look. Emails are bounced around through a series of servers and across the internet.
As such, they’re not secure without having encryption or other protective
mechanisms in place. This means that they can be “read” by hackers, putting your
company's (and customers') sensitive data at risk. This not only opens your company
up to the financial and reputational costs associated with a data breach, but to
exorbitant regulatory fines due to noncompliance as well. So, how can you secure
email communication in a time when phishing and data breaches are on the rise?
An email certificate is a digital file that is installed to your email application to
enable secure email communication. These certificates are known by many names
— email security certificates, email encryption certificates, S/MIME certificates,
etc. S/MIME, which stands for “secure/multipurpose internet mail extension,” is a
certificate that allows users to digitally sign their email communications as well as
encrypt the content and attachments included in them. Not only does this authenticate
the identity of the sender to the recipient, but it also protects the integrity of the
email data before it is transmitted across the internet.

6.5.1 How does it work?


An S/MIME email certificate allows you to:
• Encrypt your emails so that only your intended recipient can access the content
of the message.
• Digitally sign your emails so the recipient can verify that the email was, in
fact, sent by you and not a phisher posing as you.
The way that an email encryption certificate works is by using asymmetric
encryption. It uses a public key to encrypt the email and send it so that the recipient,
who has the matching private key, can decrypt the entire message (and any
attachments) automatically. Asymmetric encryption is also what’s behind the SSL/
TLS protocol as well as cryptocurrencies.

6.5.2 Advantages
• Email Security Helps to Protect Your Business and Build Customer Trust
• Email Security Helps to Prevent Noncompliance

6.6 TRANSPORT LAYER SECURITY


Computers send packets of data around the Internet using the TCP/IP protocols.
These packets are like letters in an envelope: an onlooker can easily read the data
inside them. If that data is public information like a news article, that’s not a big
deal. But if that data is a password, credit card number, or confidential email, then
it’s risky to let just anyone see that data.
The Transport Layer Security (TLS) protocol adds a layer of security on top of the
TCP/IP transport protocols. It takes advantage of both symmetric encryption and
public key encryption for securely sending private data, and adds additional security
features like authentication and message tampering detection.

6.6.1 TLS advantages


• Encryption: TLS/SSL can help to secure transmitted data using encryption.
• Interoperability: TLS/SSL works with most web browsers, including Microsoft
Internet Explorer and on most operating systems and web servers.
• Algorithm flexibility: TLS/SSL provides options for the authentication
mechanisms, encryption algorithms and hashing algorithms that are used during
the secure session.
• Ease of Deployment: Many applications use TLS/SSL transparently on a Windows
Server 2003 operating system.
• Ease of Use: Because we implement TLS/SSL beneath the application layer,
most of its operations are completely invisible to the client.

6.6.2 TLS disadvantages


• Higher latency compared to other secure encryption protocols. A StackPath
study revealed that connections encrypted by TLS have a 5 ms latency compared
to those that have not been encrypted. Furthermore, the machines on which
the 'stress tests' were conducted showed a 2% CPU spike when processing
TLS-encrypted comms.
• Older TLS versions are still vulnerable to MiM attacks. TLS versions 1.0 through
1.2 have still been found to be susceptible to Man-in-the-Middle attacks, as well as
other forms of cyber aggression: POODLE, DROWN, and SLOTH.
• Few platforms support TLS 1.3. Only a handful of platforms support
the latest TLS version: Chrome (version 67+), Firefox (version 61+), and
Apple's macOS 10.13 (iOS 11). Microsoft is still struggling with the
implementation process.

6.6.3 Working of TLS


The client connects to the server (using TCP) and sends a number of specifications:
1. Which version of SSL/TLS it supports.
2. Which cipher suites and compression methods it wants to use.
The server checks what the highest SSL/TLS version is that is supported by them
both, picks a cipher suite from one of the client's options (if it supports one) and
optionally picks a compression method. After this the basic setup is done and the server
provides its certificate. This certificate must be trusted either by the client itself or
a party that the client trusts. Having verified the certificate and being certain this
server really is who he claims to be (and not a man in the middle), a key is exchanged.
This can be a public key, a "Premaster Secret", or simply nothing, depending upon the
cipher suite.
Both the server and the client can now compute the key for symmetric encryption. The
handshake is finished and the two hosts can communicate securely. To close the
connection properly, a closing message is exchanged; if the connection is instead terminated
by simply closing the TCP connection, both sides will know the connection was
improperly terminated. The connection cannot be compromised by this, though,
merely interrupted. TLS is used for many forms of secure communication on the
Internet, like secure email sending and secure file upload. However, it's most well-
known for its use in secure website browsing (HTTPS).
TLS provides a secure layer on top of TCP/IP, thanks to its use of both public key
and symmetric encryption, and is increasingly necessary to secure the private data
flying across the Internet.
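The following sketch shows what establishing a TLS channel looks like from a client's point of view, using Python's standard ssl module; the handshake described above (version and cipher negotiation, certificate verification, key exchange) happens inside wrap_socket. The host name is only an example and is assumed to be reachable over HTTPS.

import socket
import ssl

hostname = "www.example.com"              # example host, assumed reachable
context = ssl.create_default_context()    # verifies the server certificate

with socket.create_connection((hostname, 443)) as raw_sock:
    # The TLS handshake is performed here.
    with context.wrap_socket(raw_sock, server_hostname=hostname) as tls_sock:
        print("negotiated protocol:", tls_sock.version())   # e.g. TLSv1.3
        print("cipher suite:", tls_sock.cipher()[0])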

6.7 IP SECURITY
Internet Protocol Security (IPSec) is a framework of open standards for ensuring
private, secure communications over Internet Protocol (IP) networks, through the
use of cryptographic security services. IPSec is a suite of cryptography-based
protection services and security protocols. Because it requires no changes to
programs or protocols, you can easily deploy IPSec for existing networks.
The driving force for the acceptance and deployment of secure IP is the need for
business and government users to connect their private WAN/ LAN infrastructure
to the Internet for providing access to Internet services and use of the Internet as a
component of the WAN transport system. As we all know, users need to isolate
their networks and at the same time send and receive traffic over the Internet. The
authentication and privacy mechanisms of secure IP provide the basis for a security
strategy for us.

IPsec protects one or more paths between a pair of hosts, a pair of security gateways,
or a security gateway and a host. A security gateway is an intermediate device,
such as a switch or firewall, that implements IPsec. Devices that use IPsec to protect
a path between them are called peers.
On some network devices, IPsec uses a PCI Accelerator Card (PAC) to provide hardware data compression
and encryption. A PAC is a hardware processing unit that the device's CPU controls.
IPsec provides the following security services for traffic at the IP layer:
• Data origin authentication—identifying who sent the data.
• Confidentiality (encryption)—ensuring that the data has not been read en route.
• Connectionless integrity—ensuring the data has not been changed en route.
• Replay protection—detecting packets received more than once to help protect
against denial of service attacks.

6.7.1 Components of IP Security


• Encapsulating Security Payload (ESP)
It provides data integrity, encryption, authentication and anti-replay protection. It also
provides authentication for the payload.
• Authentication Header (AH)
It also provides data integrity, authentication and anti-replay protection, but it does not
provide encryption. The anti-replay protection protects against unauthorized
retransmission of packets. It does not protect the data's confidentiality. It is shown
in Fig. 6.5.

IP HDR | AH | TCP | DATA

Fig. 6.5 Authentication Header


• Internet Key Exchange (IKE)
It is a network security protocol designed to dynamically exchange encryption
keys and negotiate a Security Association (SA) between 2 devices. The
Security Association (SA) establishes shared security attributes between 2
network entities to support secure communication. The Internet Security
Association and Key Management Protocol (ISAKMP) provides a
framework for authentication and key exchange. ISAKMP defines how
Security Associations (SAs) are set up and how direct connections are established
between two hosts that are using IPsec.
Internet Key Exchange (IKE) (as shown in Fig. 6.6) provides message content
protection and also an open framework for implementing standard algorithms such
as SHA and MD5. The algorithms IPsec uses produce a unique identifier
for each packet. This identifier then allows a device to determine whether a
packet is authentic or not. Packets which are not authorized are discarded
and not given to the receiver.
Original packet: IP HDR | TCP | DATA
ESP packet: IP HDR | ESP HDR | TCP | Data | ESP Trailer | ESP Authentication
(Encryption covers the TCP segment, data and ESP trailer; authentication covers the ESP header through the ESP trailer.)
Fig. 6.6 Internet Key Exchange format

6.7.2 Working of IP Security


• The host checks whether the packet should be transmitted using IPsec or not. Such
packet traffic triggers the security policy for itself; the system sending the
packet applies the appropriate encryption. The host also checks that incoming
packets are encrypted properly.
• Then the IKE Phase 1 starts in which the 2 hosts (using IPsec) authenticate
themselves to each other to start a secure channel. It has 2 modes: the Main
mode, which provides greater security, and the Aggressive mode, which
enables the hosts to establish an IPsec circuit more quickly.
• The channel created in the last step is then used to securely negotiate the way
the IP circuit will encrypt data across the IP circuit.
• Now, the IKE Phase 2 is conducted over the secure channel in which the two
hosts negotiate the type of cryptographic algorithms to use on the session and
agreeing on secret keying material to be used with those algorithms.
• Then the data is exchanged across the newly created IPsec encrypted tunnel.
These packets are encrypted and decrypted by the hosts using IPsec SAs.
• When the communication between the hosts is completed or the session times
out, the IPsec tunnel is terminated and both hosts discard the keys.

6.7.3 Advantages
When IPSec is implemented in a firewall or router, it provides strong security
that applies to all traffic crossing this perimeter. Traffic within a company
or workgroup does not incur the overhead of security-related processing.


IPSec is below the transport layer (TCP, UDP), and is thus transparent to
applications. There is no need to change software on a user or server system when
IPSec is implemented in the firewall or router.
Even if IPSec is implemented in end systems, upper layer software, including
applications is not affected. IPSec can be transparent to end users.
There is no need to train users on security mechanisms, issue keying material on a
per-user basis, or revoke keying material when users leave the organization. IPSec
can provide security for individual users if needed. This feature is useful for offsite
workers and also for setting up a secure virtual subnetwork within an organization
for sensitive applications.

6.8 DNS SECURITY


The Domain Name System (DNS) is a prominent building block of the Internet. It was developed
as a system to convert alphabetical names into IP addresses, allowing users to
access websites and exchange e-mails. DNS is organized into a tree-like
infrastructure where the first level contains topmost domains, such as .com and
.org. The second level nodes contain general, traditional domain names. The ‘leaf’
nodes on this tree are known as hosts.
DNS works similarly to a database which is accessed by millions of computer systems
in trying to identify which address is most likely to solve a user’s query.
In DNS attacks, hackers will sometimes target the servers which contain the domain
names. In other cases, these attackers will try to determine vulnerabilities within
the system itself and exploit them for their own good.

6.8.1 TYPES OF ATTACKS


• Denial of service (DoS)
An attack where the attacker renders a computer useless (inaccessible) to the
user by making a resource unavailable or by flooding the system with traffic.
• Distributed denial of service (DDoS)
The attacker controls an overwhelming amount of computers (hundreds or
thousands) in order to spread malware and flood the victim’s computer with
unnecessary and overloading traffic. Eventually, unable to harness the power
necessary to handle the intensive processing, the systems will overload and
crash.
• DNS spoofing (also known as DNS cache poisoning)
The attacker drives traffic away from real DNS servers and redirects it
to a "pirate" server, unbeknownst to the users. This may result in the corruption/
theft of a user's personal data.
• Fast flux
An attacker will typically spoof his IP address while performing an attack.
Fast flux is a technique to constantly change location-based data in order to
hide where exactly the attack is coming from. This will mask the attacker’s
real location, giving him the time needed to exploit the attack. Flux can be
single or double or of any other variant. A single flux changes the address of the
web server, while a double flux changes both the address of the web server and the
names of the DNS servers.
• Reflected attacks
Attackers will send thousands of queries while spoofing their own IP address
and using the victim’s source address. When these queries are answered, they
will all be redirected to the victim himself.
• Reflective amplification DoS
When the size of the answer is considerably larger than the query itself, a flux
is triggered, causing an amplification effect. This generally uses the same
method as a reflected attack, but this attack will overwhelm the user’s system’s
infrastructure further.

6.8.2 Measures against DNS attacks


• Use digital signatures and certificates to authenticate sessions in order to protect
private data.
• Update regularly and use the latest software versions, such as BIND. BIND is
an open source software that resolves DNS queries for users. It is widely used
by a good majority of the DNS servers on the Internet.
• Install appropriate patches and fix faulty bugs regularly.
• Replicate data in a few other servers, so that if data is corrupted/lost in one
server, it can be recovered from the others. This also helps prevent a single
point of failure.
• Block redundant queries in order to prevent spoofing.
• Limit the number of possible queries.

Exercise
1. Differentiate between public key and private key.
2. Explain the process of Public-Key encryption.

3. Give and explain the RSA algorithm.


4. State the significance of Digital Signatures.
5. Explain the various ways for Public Key distribution.
6. Explain the E-mail security certificates.
7. Why is DNS security important?
OR
Explain DNS Security.
8. Explain the working of IPSec.
9. Give and explain the Internet Key Exchange format.
10. Give the advantages and disadvantages of Transport Layer Security.
7
Internet Infrastructure
Learning Objective
• Security Problems
• Routing Security
• Internet Security-Weakness
• Firewalls-working, types

One of the greatest things about the Internet is that nobody really owns it. It is a global
collection of networks, both big and small. These networks connect together in many
different ways to form the single entity that we know as the Internet. In fact, the very
name comes from this idea of interconnected networks. The heart of the Internet exists
between this telecommunications component and the content that users send to each
other across those wires. That is what we call the Internet’s ‘infrastructure’.
Internet infrastructure is the physical hardware, transmission media, and software
used to interconnect computers and users on the Internet. Internet infrastructure is
responsible for hosting, storing, processing, and serving the information that makes
up websites, applications, and content.

7.1 BASIC SECURITY PROBLEMS


Cybersecurity is a daily concern in our personal and professional lives. When you
go online, whether it’s to shop, connect with friends and colleagues, or access an
account, you worry about who might be tracking you or breaking into your files.
Some of the basic security problems are as follows:

7.1.1 Code Injection


Hackers are sometimes able to exploit vulnerabilities in applications to insert
malicious code. Often the vulnerability is found in a text input field for users, such
as for a username, where an SQL statement is entered, which runs on the database,
in what is known as an SQL Injection attack. Other kinds of code injection attacks
include shell injection, operating system command attacks, script injection, and
dynamic evaluation attacks. Attacks of this type can lead to stolen credentials,
destroyed data, or even loss of control over the server.
There are two ways to prevent code injection: avoiding vulnerable code and filtering

input. Applications can guard against vulnerable code by keeping data separate
from commands and queries, such as by using a safe API with parameterized queries.
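As a brief illustration of a parameterized query, the sketch below uses Python's built-in sqlite3 module; the table, column and values are invented for the example. Because the user-supplied value is passed as a bound parameter, the database treats it purely as data, so it cannot change the meaning of the SQL command.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cr3t')")

attacker_input = "alice' OR '1'='1"

# Unsafe (shown only as a comment): building the query by string concatenation
# would let the input rewrite it, e.g. "... WHERE username = 'alice' OR '1'='1'".

# Safe: the placeholder keeps the input as data, never as SQL.
rows = conn.execute("SELECT secret FROM users WHERE username = ?",
                    (attacker_input,)).fetchall()
print(rows)   # [] - the injected text matches no user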

7.1.2 Data Breach


The cost of data breaches is well documented. They are often caused by
compromised credentials, but other common causes include software
misconfiguration, lost hardware, or malware (more on that below). Data breach
prevention requires a range of good practices. Site traffic and transactions should
be encrypted with SSL, permissions should be carefully set for each group of users,
and servers should be scanned. Employees should be trained in how to avoid being
caught by phishing attacks, and how to practice good password hygiene. The
principle of least privilege is worth noting here, as well.

7.1.3 Malware Infection


Most businesses are aware on some level of the security threat posed by malware, yet
many people are unaware that email spam is still the main vector of malware attack.
Because malware comes from a range of sources, several different tools are needed
for preventing infection. A robust email scanning and filtering system is necessary,
as are malware and vulnerability scans. Like breaches, which are often caused by
malware infection, employee education is vital to keep businesses safe from
malware.
Any device or system infected with malware must be thoroughly scrubbed, which
means identifying the hidden portions of code and deleting all infected files before
they replicate. This is practically impossible by hand, so requires an effective
automated tool.

7.1.4 Distributed Denial of Service Attack


A Distributed Denial of Service (DDoS) attack generally involves a group of
computers being harnessed together by a hacker to flood the target with traffic.
One of the most worrying aspects of DDoS attacks for businesses is that without
even being targeted, the business can be affected just by using the same server,
service provider, or even network infrastructure.
If your business is caught up in a DDoS attack, put your disaster recovery plan into
effect, and communicate with employees and customers about the disruption. A
security tool such as a WAF is used to close off the port or protocol being saturated,
in a process which will likely have to be repeated as attackers adjust their tactics.

7.2 ROUTING SECURITY


Routing is fundamental to how the Internet works. Routing protocols direct the

movement of packets between your computer and any other computers it is


communicating with. By ensuring that packets go where they are supposed to,
routing has a central role in the reliable function of the Internet. It ensures that
emails reach the right recipients, e-commerce sites remain operational, and e-
government services continue to serve citizens. The security of the global routing
system is crucial to the Internet’s continued growth and to safeguard the
opportunities it provides for all users.
Every year, thousands of routing incidents occur, each with the potential to harm
user trust and handicap the Internet’s potential. These routing incidents can also
create real economic harms. Key services may become unreachable, disrupting the
ability of companies and users to participate in e-commerce. Or packets may get
diverted through malicious networks, providing an opportunity to spy on them.
While known security measures can address many of these routing incidents,
misaligned incentives limit their use.
All stakeholders including policymakers, must take steps to strengthen the security
of the global routing system. This can only be done while also preserving the vital
aspects of the routing system that have enabled the Internet to be so ubiquitous and
improving their security. Through leading by example in their own networks,
strengthening communication, and helping realign incentives to favor stronger
security, policymakers can help improve the routing security ecosystem.
There are three major types of routing incidents:
• Route/prefix hijacking, where a network operator or attacker impersonates
another network operator, pretending that it is the correct path to the server or
network being sought on the Internet.
• Route leaks are the propagation of routing announcements beyond their
intended scope (in violation of their policies).
• IP spoofing, where someone creates IP packets with a false source IP address
to hide the identity of the sender or impersonate another system.
These incidents can create a serious strain on infrastructure, result in dropped traffic,
provide the means for traffic inspection, or even be used to perform domain name
server (DNS) amplification attacks, or other reflective amplification (RA) attacks.
The Mutually Agreed Norms for Routing Security (MANRS) is a set of visible,
baseline practices for network operators to improve the security of the global routing
system.
Despite the availability of solutions to common routing incidents, ecosystem
challenges limit their use.
• Routing incidents are hard to address far from the source and must instead
be addressed collectively. Wherever a threat is coming from, the networks

closest to its origin are best positioned to address the threat (e.g. adjacent
networks can refuse to accept false announcements). When a network is
impacted further from the source of a routing incident, it can only attempt to
mitigate the impact. It must rely on other networks closer to the source of the
routing incident to fully address the problem.
• Economic externalities. Any network can be the source of an incident and
the insecurity of one network can impact all other networks. However, even if
a routing incident originates from one’s own network, the impact is most likely
to be felt on another network. Network operators are less likely to spend
resources on better routing security since the benefits will mostly go to other
networks, not their own.
• Routing security is not a market differentiator. Good routing security is
currently not an effective marketing tool for network operators. It is difficult
for network operators to communicate their level of routing security to their
customers. Users have limited understanding of the global routing system and
how their network’s routing security practices will impact them.
To improve routing security, we should:
i. Lead by Example. All stakeholders, including governments, should
improve infrastructure reliability and security by adopting best practices
in their own networks.
• All networks providing internet connectivity, including enterprise or
government networks, should use filtering, alongside IP source
validation, to help prevent and mitigate the impact of incidents.
• In addition, influential market players, such as large enterprises or
governments, should, where feasible, require compliance with routing
security baselines, such as the one documented by MANRS, for
procurement contracts with Internet service providers. MANRS, through
its MANRS Observatory, will provide measurements that can serve as a
valuable third-party assessment of a network operator’s security practices.
These assessments can help inform procurement decisions.
ii. Facilitate/encourage the adoption of common practices for routing
security. Industry associations, in close collaboration with governments
and other stakeholders, should promote a common baseline for routing
security.
• A common baseline for network operators provides an industry standard
for routing security and promotes greater information sharing among
network operators. It also gives network operators a way to signal their
level of security to prospective customers.
• All stakeholders can contribute to the adoption and development of a
common baseline and industry practices for routing security by
participating in the development process and, where feasible, through
funding.
iii. Support efforts to develop new, or strengthen existing, routing security
tools. To further improve the security of the global routing system,
partnerships with the research community could help develop the next
generation of routing security tools and practices.
• Where feasible, stakeholders, including governments and the private
sector, can increase funding for research, development and experimental
deployment of the next generation of Internet protocols, including those
improving routing security.
• Researchers can develop technical guidance on performing IP source
validation, effective filtering, and global validation. Guidance should
also encourage network operators to implement BGPsec and RPKI (a
simplified origin-validation sketch follows this list).
iv. Encourage the use of security as a competitive differentiator. To make
routing security a competitive differentiator, stakeholders should support
public awareness of the importance of routing security and encourage
improved signaling of routing security between industry and customers.
• For Internet service providers, routing security is a core component of
their overall security posture. Signaling their attitude towards routing
security reflects strongly on their overall posture, which can differentiate
their services from competition.
• Enterprises will pay more for better routing security; however, they need
ways to distinguish good routing security from bad. In a 2017 survey,
94% of enterprises indicated that they would be willing to pay more, in
a competitive situation, for a vendor who was a MANRS member. The
same research also found that awareness of MANRS among enterprises
was marginal before the survey.
• Industry, consumer groups, governments and other stakeholders should work
together to promote the use of routing security baselines, such as MANRS,
as a competitive differentiator. In addition, they should support efforts to
educate local enterprises about routing security and existing best practices.
v. Strengthen communication and cooperation between network
operators and other stakeholders. Stakeholders should support the
development of better mechanisms for information sharing, engage in
information sharing on routing security, and collaborate with stakeholders
to address routing security threats.
• The private sector, governments, civil society, academia and others can
support the development of new computer security incident response
teams (CSIRTs) or strengthen existing ones. CSIRTs play an important role in
information sharing and coordination in response to routing incidents
and threats.
vi. Identify and address legal barriers to information sharing, the
implementation of routing security technologies and research on
routing incidents and threats. Legal barriers can impede security
researchers and disincentivize network operators from deploying routing
security solutions and sharing information with one another.
• Identifying and eliminating legal and regulatory barriers can improve
information sharing and responses to routing incidents. Stakeholders,
particularly security researchers, may worry that disclosing routing
security incidents or threats could place them in legal jeopardy. Legal
barriers can also impede the development and deployment of routing
security technologies. In developing solutions to identified barriers,
stakeholders must pay close attention to their potential impact on the
privacy of individuals.
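As referenced in point iii above, the sketch below illustrates the idea behind RPKI
route origin validation. It is a simplification in Python: the ROA table, prefixes and
AS numbers are made up for illustration, and a real validator would fetch signed
ROAs from the RPKI repositories rather than use a hard-coded list.

    import ipaddress

    # Hypothetical Route Origin Authorizations: (prefix, maximum length, authorized origin AS).
    ROAS = [
        (ipaddress.ip_network("203.0.113.0/24"), 24, 64500),
        (ipaddress.ip_network("198.51.100.0/22"), 24, 64501),
    ]

    def validate_origin(prefix: str, origin_as: int) -> str:
        """Classify a BGP announcement as valid, invalid or not-found."""
        announced = ipaddress.ip_network(prefix)
        covered = False
        for roa_prefix, max_length, roa_as in ROAS:
            if announced.subnet_of(roa_prefix):
                covered = True
                if announced.prefixlen <= max_length and origin_as == roa_as:
                    return "valid"
        return "invalid" if covered else "not-found"

    print(validate_origin("203.0.113.0/24", 64500))  # valid
    print(validate_origin("203.0.113.0/24", 64666))  # invalid - possible prefix hijack
    print(validate_origin("192.0.2.0/24", 64500))    # not-found - no ROA covers this prefix

A router that drops announcements classified as invalid prevents many accidental
hijacks from propagating further.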

7.3 WEAKNESS OF INTERNET SECURITY
Internet security is a broad term that refers to the various steps individuals and
companies take to protect computers or computer networks that are connected to
the Internet. One of the basic truths behind Internet security is that the Internet
itself is not a secure environment. The Internet was originally conceived as an
open, loosely linked computer network that would facilitate the free exchange of
ideas and information. Data sent over the Internet—from personal e-mail messages
to online shopping orders—travel through an ever-changing series of computers
and network links. As a result, unscrupulous hackers and scam artists have ample
opportunities to intercept and change the information. It would be virtually
impossible to secure every computer connected to the Internet around the world,
so there will likely always be weak links in the chain of data exchange.
Due to the growth in Internet use, the number of computer security breaches
experienced by businesses has increased rapidly in recent years. At one time, 80
percent of security breaches came from inside the company. But this situation has
changed as businesses have connected to the Internet, making their computer
networks more vulnerable to access from outside troublemakers or industry spies.

7.3.1 Common Security Problems
Hackers have two main methods of causing problems for businesses’ computer
systems: they either find a way to enter the system and then change or steal
information from the inside, or they attempt to overwhelm the system with
information from the outside so that it shuts down. One way a hacker might enter
a small business’s computer network is through an open port, or an Internet
connection that remains open even when it is not being used. They might also
attempt to appropriate passwords belonging to employees or other authorized users
of a computer system. Many hackers are skilled at guessing common passwords,
while others run programs that locate or capture password information.
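As a rough illustration of the open-port problem, the sketch below (a minimal Python
example; the host name and port list are placeholders, and it should only be run
against systems you are authorized to test) checks which commonly targeted ports
accept TCP connections:

    import socket

    HOST = "example.com"                         # placeholder - use a host you are authorized to scan
    COMMON_PORTS = [21, 22, 23, 80, 443, 3389]   # FTP, SSH, Telnet, HTTP, HTTPS, RDP

    def open_ports(host, ports):
        """Return the subset of ports that accept a TCP connection."""
        found = []
        for port in ports:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.settimeout(1.0)                    # fail fast on filtered or closed ports
                if s.connect_ex((host, port)) == 0:  # 0 means the connection succeeded
                    found.append(port)
        return found

    print(open_ports(HOST, COMMON_PORTS))

Ports that show up in the output but are not needed for business purposes are
candidates for closing or placing behind a firewall.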
Another common method of attack used by hackers is e-mail spoofing. This method
involves sending authorized users of a computer network fraudulent e-mail that
appears as if it were sent by someone else, most likely a customer or someone else
the user would know. Then the hacker tries to trick the user into divulging his or
her password or other company secrets. Finally, some hackers manage to shut down
business computer systems with denial of service attacks. These attacks involve
bombarding a company’s Internet site with thousands of messages so that no
legitimate messages can get in or out.

7.3.2 Means of Protection
Computer experts have developed ways to help small businesses protect themselves
against the most common security threats. For example, most personal computers
sold today come equipped with virus protection. A wide variety of antivirus software
is also available for use on computer networks. In addition, many software
companies and Internet Service Providers put updates online to cover newly
emerging viruses.
One of the most effective ways to protect a computer network that is connected to
the Internet from unauthorized outside access is a firewall. A firewall is a hardware
or software security barrier installed between a computer network and the Internet.
It routes legitimate traffic between the two but blocks external users from accessing
the internal computer system. Of course, a firewall cannot protect information once it
leaves the network. A common method of preventing third parties from capturing
data while it is being transmitted over the Internet is encryption. Encryption programs
put data into a scrambled form that cannot be read without a key.
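As a small illustration of key-based scrambling, the sketch below uses the third-party
Python cryptography package (assumed to be installed; Fernet is its symmetric
authenticated-encryption recipe) to encrypt data before transmission and decrypt it
with the same key:

    from cryptography.fernet import Fernet

    # The key is generated once and shared only with authorized parties.
    key = Fernet.generate_key()
    cipher = Fernet(key)

    order = b"card=4111-xxxx-xxxx-xxxx; amount=49.99"
    token = cipher.encrypt(order)       # scrambled form, safe to transmit
    print(token)

    # Without the key the token is unreadable; with the key, the data comes back intact.
    assert cipher.decrypt(token) == order

In practice, web traffic is protected the same way by TLS (HTTPS), which negotiates
the keys automatically between browser and server.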
There are several methods available to help small businesses prevent unauthorized
access to their computer systems. One of the most common methods is authentication
of users through passwords. Since passwords can be guessed or stolen, some
companies use more sophisticated authentication technologies, such as coded ID
cards, voice recognition software, retinal scanning systems, or handprint recognition
systems. All of these systems verify that the person seeking access to the computer
network is an authorized user. They also make it possible to track computer activity
82 Computer System Security

and hold users accountable for their use of the system. Digital signatures can be
used to authenticate e-mails and other outside documents. This technology provides
proof of the origin of documents and helps prevent e-mail spoofing.
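To show how a digital signature provides proof of origin, this hedged sketch (again
using the third-party Python cryptography package, with an Ed25519 key pair standing
in for a real signing identity) signs a message and verifies it; a forged or altered
message fails verification:

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    private_key = Ed25519PrivateKey.generate()  # kept secret by the sender
    public_key = private_key.public_key()       # distributed to recipients

    message = b"Invoice #1042: pay to account 12-345"
    signature = private_key.sign(message)

    # The recipient checks origin and integrity with the sender's public key.
    try:
        public_key.verify(signature, message)
        print("signature valid - message is authentic")
        public_key.verify(signature, b"Invoice #1042: pay to account 99-999")
    except InvalidSignature:
        print("signature invalid - message was forged or altered")

Real e-mail signing systems such as S/MIME and DKIM wrap the same primitive in
message formats and key distribution, but the verification idea is identical.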

7.4 FIREWALLS
A firewall is a system designed to prevent unauthorized access to or from a private
network. You can implement a firewall in either hardware or software form, or a
combination of both. Firewalls prevent unauthorized internet users from accessing
private networks connected to the internet, especially intranets. All messages
entering or leaving the intranet (the local network to which you are connected) or
WAN must pass through the firewall (Fig. 7.1), which examines each message and
blocks those that do not meet the specified security criteria.

Fig. 7.1: Illustration of a firewall placed between the LAN and the WAN
Firewalls need to be able to perform the following tasks:
• Defend resources
• Validate access
• Manage and control network traffic
• Record and report on events
• Act as an intermediary

7.4.1 Types of Firewalls
i. Packet filtering: The system examines each packet entering or leaving the
network and accepts or rejects it based on user-defined rules. Packet filtering
is fairly effective and transparent to users, but it is difficult to configure. In
addition, it is susceptible to IP spoofing.
ii. Circuit-level gateway implementation: This process applies security
mechanisms when a TCP or UDP connection is established. Once the
connection has been made, packets can flow between the hosts without further
checking.
iii. Acting as a proxy server: A proxy server is a type of gateway that hides the
true network address of the computer(s) connecting through it. A proxy server
connects to the internet, makes the requests for pages, connections to servers,
etc., and receives the data on behalf of the computer(s) behind it. The firewall
capabilities lie in the fact that a proxy can be configured to allow only certain
types of traffic to pass (for example, HTTP files, or web pages). A proxy
server has the potential drawback of slowing network performance, since it
has to actively analyse and manipulate traffic passing through it.
iv. Web application firewall: A web application firewall is a hardware appliance,
server plug-in, or some other software filter that applies a set of rules to an
HTTP conversation. Such rules are generally customized to the application
so that many attacks can be identified and blocked.

7.4.2 How Firewalls Work
A firewall matches network traffic against the rule set defined in its table. Once a
rule is matched, the associated action is applied to that traffic. For example, one
rule might state that no employee from the HR department may access data on the
code server, while another rule allows the system administrator to access data from
both the HR and technical departments. Rules are defined on the firewall according
to the needs and security policies of the organization.
From the perspective of a server, network traffic is either outgoing or incoming, and
the firewall maintains a distinct set of rules for each case. Outgoing traffic, which
originates from the server itself, is usually allowed to pass. Still, setting rules on
outgoing traffic improves security and prevents unwanted communication.
Incoming traffic is treated differently. Most traffic reaching the firewall uses one of
three major protocols: TCP, UDP or ICMP. All of these carry a source address and a
destination address; TCP and UDP additionally carry port numbers, while ICMP uses
a type code, which identifies the purpose of the packet, in place of a port number.
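The matching process just described can be sketched as a small rule table evaluated
top to bottom until the first match, with a default-deny policy for incoming traffic.
The following Python sketch is illustrative only; the addresses, ports and rules are
made-up examples:

    import ipaddress
    from dataclasses import dataclass

    @dataclass
    class Rule:
        protocol: str   # "tcp", "udp" or "icmp"
        src: str        # allowed source prefix, "*" for any
        dst_port: int   # destination port, 0 for any (ICMP carries no ports)
        action: str     # "allow" or "deny"

    # Illustrative incoming-traffic rule set, evaluated in order.
    RULES = [
        Rule("tcp", "10.0.0.0/8", 22, "allow"),  # SSH only from the internal network
        Rule("tcp", "*", 443, "allow"),          # HTTPS from anywhere
        Rule("udp", "*", 53, "allow"),           # DNS from anywhere
    ]

    def decide(protocol: str, src_ip: str, dst_port: int) -> str:
        """Return the action of the first matching rule, or deny by default."""
        addr = ipaddress.ip_address(src_ip)
        for rule in RULES:
            if rule.protocol != protocol:
                continue
            if rule.src != "*" and addr not in ipaddress.ip_network(rule.src):
                continue
            if rule.dst_port not in (0, dst_port):
                continue
            return rule.action
        return "deny"   # default policy: drop anything that matches no rule

    print(decide("tcp", "10.1.2.3", 22))      # allow
    print(decide("tcp", "203.0.113.9", 22))   # deny - SSH from outside is blocked
    print(decide("tcp", "203.0.113.9", 443))  # allow

Production firewalls add stateful connection tracking on top of this stateless match,
so that replies to already-approved connections are allowed automatically.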
Exercise
1. Why is cyber security important?
2. What are the various problems faced in cyber security world?
3. State the importance of routing security.
4. What are the basic steps to improve the routing security?
5. State the problems or the weakness of Internet Security.
6. How can we protect our transactions in the world of cybercrime?
7. Why are Firewalls implemented?
8. Give examples of few Firewalls.
9. Explain the working of firewalls.
10. What are the different types of firewalls?
APPENDIX A
SECURITY IN CLOUD COMPUTING ENVIRONMENT
Cloud-based IT services have been gaining popularity as they do not require big
investments. The new business model of on-demand services brought in by the cloud
lets customers choose what they want and how much they want, and pay only for
those services. This has been made possible by the shared use of resources such as
storage, servers, applications and services, leading to potential cost savings. Even
though cloud technology is not yet mature, there is increased confidence in its
adoption by businesses. Though cloud computing, just like any other technological
development, was aimed at mass convenience and adoption,
there are those who see these advances as platforms for fraud and abuse. Some of
the security challenges in the cloud environment are traditional computing
challenges and some are unique to cloud computing. In addition to the traditional
computing security concerns such as hardware and software malfunction, the
complex cloud model, the way of service delivery and access to shared resources
pose a great deal of threats. Each layer in the cloud model can be a potential point
of attack. Major cloud service providers (CSPs) have data centres around the world,
replicated in multiple locations to maintain service continuity in case of a failure.
On one hand, global dispersion provides operational efficiency; on the other hand,
it raises jurisdiction issues for law enforcement agencies (LEAs) in case of an investigation, since each country
has unique laws on data usage and privacy.

Digital Investigation: Traditional Computing Environment versus Cloud Computing Environment
Although the process of forensics investigation would remain the same whether it
is in traditional computing environment or the cloud, the challenge will be to identify
where the relevant data are actually stored and how they can be obtained and
analyzed for forensic investigation. In a traditional computing environment, the
investigation starts with collecting digital evidence in the form of
documents, photographs, spreadsheets, Internet history, email, etc., which are stored
locally on the suspect or victim’s computer. To do so, the hard disk is acquired,
verified and analysed. The main source of information is the suspect’s or victim’s
computer. Not only is it easy to get hold of the computers but it is also easy to
analyze recently opened files and data that are stored locally on the suspect’s
computer. In the case of cloud computing, these files are no longer stored locally;
hence, it is difficult to collect evidence. Since the data are not stored locally but are
on cloud, direct evidence is not available for the investigators. For evidence, the
investigator is dependent on the CSP who may be out of investigators’ jurisdiction.
Acquiring servers in the cloud environment will affect services to multiple
customers, which may raise a liability issue for the service provider. In the case of
an incident at the CSP end, the CSP will be more interested in restoring the service
than preserving the evidence. The CSP may not report the incident for the sake of
reputation or may start its own investigation without proper measures to preserve
evidence.

Security Threats for Cloud Computing Environment
i. Cryptojacking is a fairly new form of cyber attack, and it is also one that can
very easily go under the radar. It centers on the popular practice of mining for
cryptocurrencies like Bitcoin. Cryptojacking can be very tricky to spot and
deal with. The major issue is that when hackers use computing resources from
your cloud system, your operations will be slowed down but (crucially) will
continue to work. This means that it can seem as if nothing malicious is
happening and that perhaps the computers are just struggling with their
processing power.
ii. Perhaps the most common threat to cloud computing is the issue of leaks or
loss of data through data breaches. A data breach typically occurs when a
business is attacked by cybercriminals who are able to gain unauthorized access
to the cloud network or utilize programs to view, copy, and transmit data. If
you use cloud computing services, a data breach can be extremely damaging,
but it can happen relatively easily. Losing data can violate the General Data
Protection Regulation (GDPR), which could cause your business to face heavy
fines.
iii. One of the most damaging threats to cloud computing is a denial of service
(DoS) attack. These can shut down your cloud services and make them
unavailable both to your users and customers, but also to your staff and business
as a whole. Cybercriminals can flood your system with a very large amount of
web traffic that your servers are not able to cope with. This means that the
servers cannot buffer the requests, and nothing can be accessed. If the whole of your
system runs on the cloud, this can then make it impossible for you to manage
your business.
iv. When we think of cyber security challenges, we often consider the concept of
malicious criminals hacking into our systems and stealing data – however,
sometimes the problem originates from the inside of the company. In fact,
recent statistics suggest that insider attacks could account for more than 43
percent of all data breaches. Insider threats can be malicious – such as members
of staff going rogue – but they can also be due to negligence or simple human
error. It is important, then, to provide your staff with training, and also ensure
that you are tracking the behaviour of employees to ensure that they cannot
commit crimes against the business.
v. If a criminal can gain access to your system through a staff account, they
could potentially have full access to all of the information on your servers
without you even realizing any crime has taken place. Cybercriminals use
techniques such as password cracking and phishing emails in order to gain
access to accounts – so once again, the key here is to provide your team with
the training to understand how to minimize the risk of their account being
hijacked.
vi. Sometimes it can be the case that your own system is highly secure, but you
are let down by external applications. Third-party services, such as applications,
can present serious cloud security risks, and you should ensure that your team
or cyber-security experts take the time to establish whether the application is
suitable for your network before they have it installed. Discourage staff from
taking matters into their own hands and downloading any application that
they think might be useful. Instead, you should make it necessary for the IT
team to approve any application before it is installed on the system. While this
might seem like a lengthy step to put in place, it can effectively take away the
risk of insecure applications.
vii. Most cybersecurity threats come in the form of outsider attacks, but this issue
is one caused by a problem inside the company. And this problem is in failing
to take the threat of cybercrime seriously. It is essential to invest in training on
the risks of cyberattacks – not just for your IT team, but for every member of
staff. Your team is your first line of defense against any kind of data breach or
cyberattack, so they need to be prepared with the latest information on relevant
threats to businesses like yours. Allocate time and budget for staff training,
and also make sure that this training is regularly updated so that your staff is
being taught about issues that are genuinely affecting organizations.
MODEL QUESTION PAPER
TOTAL MARKS:100
1. Attempt all questions in brief. 2×10 = 20
a) What is Computer Security Problem? What factors contribute to it?
b) What are the principles of secure design?
c) What is Encryption and Decryption?
d) What is the difference between HTTPS, SSL and TLS?
e) Explain System Call Interposition.
f) What are the differences between Discretionary Access Control and
Mandatory Access Control?
g) What is Web Security?
h) Give three benefits of IPSec.
i) What is SQL Injection?
j) What is the problem of covert channel in VMM Security?
2. Attempt any three of the following: 3×10 = 30
a) What is an Intrusion Detection System? What are the difficulties in Anomaly
detection?
b) Why is security hard?
c) What is Access Control List and also define the technologies used in access
control?
d) What is Cross Site Request forgery and what are the defences against it?
e) Explain SSL Encryption. What are the steps involved in SSL: server
authentication?
3. Attempt any one of the following: 1×10 = 10
a) What are Asymmetric Algorithms? Give their advantages and disadvantages.
b) Why do cyber criminals want to own machines?
4. Attempt any one of the following: 1×10 = 10
a) Write a short note on DES.
b) Write short notes on Software Fault Isolation:
i. Goal and Solution
ii. SFI approach
5. Attempt any one of the following: 1×10 = 10
a) Write a short summary of the IP Protocol.
b) Explain Control Hijacking with an example. Explain the term Buffer
Overflow in control hijacking.
6. Attempt any one of the following: 1×10 = 10
a) Write a short note on Secret Key Cryptography.
b) Explain Routing Security.
7. Attempt any one of the following: 1×10 = 10
a) Explain Domain Name System Security.
b) Write short notes on the following:
i. Cross Site Scripting (XSS)
ii. Why HTTPS is not used for all web traffic
GLOSSARY
Computer Security: It addresses three very important aspects of any computer-
related system namely, confidentiality, integrity and availability.
Computer System Threat: Anything that leads to loss or corruption of data or
physical damage to the hardware and/or infrastructure
Security Attack : Any action that compromises the security of information owned
by an organization
Security Mechanism : A mechanism that is designed to detect, prevent or recover
from a security attack
Security Service : A service that enhances the security of the data processing
systems and the information transfers of an organization
Passive Attack: These attacks involve eavesdropping on, or monitoring of,
transmissions.
Active Attacks : These attacks involve some modification of the data stream or
the creation of a false stream.
Hijacking : It is a type of network security attack in which the attacker takes
control of a communication
Discretionary Access Control (DAC) : It is basically a mechanism where a user
sets the access control to allow or deny access to any object.
Mandatory Access Control (MAC) : It is a mechanism where system controls
access to an object and a user cannot alter that access.
Virtual Machine : It is an operating system or application environment that is
installed on software, which imitates dedicated hardware
Rootkits : They are a kind of malware that are designed in a way that they can
remain hidden in the computer.
Intrusion Detection System : IDS is a device or software application used to
keep a check on the network for malicious activity
Browser Isolation : It is a cyber security model used to physically isolate an Internet
user's web browser and their browsing activity away from the local machine and
network
Malware : Short for “malicious software,” malware is a very common threat used
to steal sensitive customer data, distribute spam, and allow cybercriminals to access
your site, and more.
HTTP : HTTP is the underlying protocol used by the World Wide Web
Magic Cookie : It is a term for a packet of data that a computer receives and then
sends back without changing or altering it
Trojan Horse : It is a malicious bit of attacking code or software that tricks users
into running it willingly, by hiding behind a legitimate program
Phishing : Phishing is a method of social engineering with the goal of obtaining
sensitive data such as passwords, usernames, and credit card numbers
Cipher Text : The cipher text is produced as the output of an encryption algorithm
Digital Signature : A digital signature is a mathematical technique which validates
the authenticity and integrity of a message, software or digital documents
Internet Protocol Security : IPSec is a framework of open standards for ensuring
private, secure communications over Internet Protocol (IP) networks, through the
use of cryptographic security services
Distributed Denial of Service: DDoS attack generally involves a group of
computers being harnessed together by a hacker to flood the target with traffic
IP Spoofing : It is a process in which someone creates IP packets with a false
source IP address to hide the identity of the sender or impersonate another system
Firewall : A firewall is a system designed to prevent unauthorized access to or
from a private network
REFERENCES
1. William Stallings, Network Security Essentials: Applications and Standards,
Prentice Hall, 4th edition, 2010.
2. Michael T. Goodrich and Roberto Tamassia, Introduction to Computer Security,
Addison Wesley, 2011.
3. Alfred J. Menezes, Paul C. van Oorschot and Scott A. Vanstone, Handbook
of Applied Cryptography, CRC Press, 2001.
4. https://ict.iitk.ac.in/product/computer-system-security/
5. https://searchsecurity.techtarget.com/definition/cryptography
6. https://www.infoblox.com/dns-security-resource-center/security-faq/
7. https://searchsecurity.techtarget.com/definition
8. https://phoenixnap.com/blog/cyber-security-
9. https://blog.netwrix.com/2018/05/15/top-10-most-common-types-of-cyber-attacks/
10. https://cloudacademy.com/blog/key-cybersecurity-threats