
ASSIGNMENT-1

NETWORK SECURITY

MBA.TECH SEM VII OPEN ELECTIVE

1. Explain the below-mentioned threat models in detail, with their advantages
and disadvantages:
a. Attack Tree
Using attack trees to model threats is one of the oldest and most widely applied
techniques, used on cyber-only systems, cyber-physical systems, and purely physical
systems. Attack trees were initially applied as a stand-alone method and have since been
combined with other methods and frameworks.
Attack trees are diagrams that depict attacks on a system in tree form. The tree root is the
goal for the attack, and the leaves are ways to achieve that goal. Each goal is represented
as a separate tree. Thus, the system threat analysis produces a set of attack trees.

In the case of a complex system, attack trees can be built for each component instead of
for the whole system. Administrators can build attack trees and use them to inform
security decisions, to determine whether the systems are vulnerable to an attack, and to
evaluate a specific type of attack.

In recent years, this method has often been used in combination with other techniques
and within frameworks such as STRIDE, CVSS, and PASTA.
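The root/leaf structure described above can be sketched in code. The sketch below is illustrative only: the node names, the feasibility flags, and the `Node` class itself are invented for this example, not taken from any real tool or system. Inner nodes combine their children with OR (any path works) or AND (all steps needed) semantics, as attack trees commonly do.

```python
# A minimal attack-tree sketch. The root is the attacker's goal; leaves are
# concrete ways to achieve it. OR nodes succeed if any child succeeds;
# AND nodes need every child to succeed.

class Node:
    def __init__(self, name, kind="OR", children=None, feasible=False):
        self.name = name
        self.kind = kind              # "OR" or "AND" for inner nodes
        self.children = children or []
        self.feasible = feasible      # leaf: can the attacker do this step?

    def achievable(self):
        if not self.children:
            return self.feasible
        results = [c.achievable() for c in self.children]
        return any(results) if self.kind == "OR" else all(results)

root = Node("Read user's email", "OR", [
    Node("Guess password", feasible=False),
    Node("Compromise mail server", "AND", [
        Node("Find unpatched vulnerability", feasible=True),
        Node("Exploit it remotely", feasible=True),
    ]),
])

print(root.achievable())  # True: the AND branch is fully feasible
```

An administrator would read this bottom-up: fixing either leaf of the AND branch (e.g. patching the vulnerability) makes the whole goal unachievable, which is exactly the kind of security decision attack trees are meant to inform.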
b. PASTA
The Process for Attack Simulation and Threat Analysis (PASTA) is a risk-centric threat-
modeling framework developed in 2012. It contains seven stages, each with multiple
activities:
1. Define the objectives
2. Define the technical scope
3. Decompose the application
4. Analyze the threats
5. Perform vulnerability and weakness analysis
6. Conduct attack modeling and simulation
7. Perform risk and impact analysis
PASTA aims to bring business objectives and technical requirements together. It uses a
variety of design and elicitation tools in different stages. This method elevates the threat-
modeling process to a strategic level by involving key decision makers and requiring
security input from operations, governance, architecture, and development. Widely
regarded as a risk-centric framework, PASTA employs an attacker-centric perspective to
produce an asset-centric output in the form of threat enumeration and scoring.

2. What is Fuzzing? Explain various types of fuzzing techniques.


Fuzzing is a software testing methodology that can be used from either a black or white box
perspective and predominantly consists of providing deliberately malformed inputs to an
application to identify errors such as unhandled exceptions, memory spikes, thread hangs, read
access violations or buffer overruns that could lead to further compromise of a system.
Fuzzing relies on the assumption that all software has bugs just waiting to be found. Hence,
given enough time, a methodical approach should find these previously unknown bugs. Fuzzing
can provide an additional avenue for bug identification alongside common testing techniques,
due to its mechanical approach and the limited amount of effort needed to carry it out.
Types of fuzzers
Broadly speaking, fuzzers can be split into two categories based on how they create input
to programs – mutation-based and generation-based. This section details those categories
as well as offering a brief description of a more advanced technique called Evolutionary
Fuzzing.
Mutation
Mutation-based fuzzers are arguably one of the easier types to create. This technique
suits dumb fuzzing but can be used with more intelligent fuzzers as well. With mutation,
samples of valid input are mutated randomly to produce malformed input.
A dumb mutation fuzzer can simply select a valid sample input and alter parts of it
randomly. For many programs, this can provide a surprising amount of mileage, as the
mutated inputs often remain similar enough to valid input. This means good code
coverage can be achieved without the need for further intelligence.
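The "select a valid sample and alter parts of it randomly" idea can be sketched in a few lines. Everything here is illustrative: the sample input, the flip count, and the `mutate` helper are all invented for the example, and a real fuzzer would additionally feed each mutant to the target program and watch for crashes.

```python
# A minimal "dumb" mutation fuzzer sketch: take a valid sample input and
# randomly corrupt a few of its bytes. XOR-ing with a nonzero value
# guarantees the chosen byte actually changes.

import random

def mutate(sample, n_flips=3, seed=None):
    rng = random.Random(seed)            # seeded for reproducible mutants
    data = bytearray(sample)
    for _ in range(n_flips):
        pos = rng.randrange(len(data))
        data[pos] ^= rng.randrange(1, 256)
    return bytes(data)

valid = b'{"user": "alice", "age": 30}'
for i in range(3):
    print(mutate(valid, seed=i))  # mostly-valid JSON with a few corrupted bytes
```

Because the mutants stay close to the valid sample, most of them still get deep into the parser before something breaks, which is the code-coverage advantage the text describes.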

Generation
Generation-based fuzzers actually generate input from scratch rather than mutating
existing input. They usually require some level of intelligence to construct input that
makes at least some sense to the program, although generating completely random data
would also technically be generation.
Generation fuzzers often split a protocol or file format into chunks, which they build
up in a valid order, and then randomly fuzz some of those chunks independently. This
creates inputs that preserve their overall structure but contain inconsistent data within
them. The granularity of these chunks and the intelligence with which they're constructed
define the level of intelligence of the fuzzer. Mutation-based fuzzing can have a similar
effect over time (as mutations are randomly applied without completely breaking the
input's structure), but generating inputs ensures this will be so.
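The chunk-based generation idea can be sketched against a made-up file format. The "IMG1" magic number, the length field, and the format itself are all hypothetical, invented purely to show chunks being assembled in a valid order while one chunk is fuzzed.

```python
# A generation-based fuzzer sketch: build an input from scratch out of chunks
# of a hypothetical format -- magic header, big-endian length field, payload --
# then fuzz one chunk while keeping the overall structure intact.

import random
import struct

def generate(seed=None):
    rng = random.Random(seed)
    payload = bytes(rng.randrange(256) for _ in range(rng.randrange(1, 16)))
    header = b"IMG1"                          # magic-number chunk, kept valid
    length = struct.pack(">I", len(payload))  # length chunk
    if rng.random() < 0.5:
        # Fuzz exactly one chunk: lie about the payload length.
        length = struct.pack(">I", rng.randrange(2**32))
    return header + length + payload

sample = generate(seed=42)
print(sample[:4], len(sample))  # magic still intact; length may be inconsistent
```

Because the magic number survives, the target program accepts the input and proceeds to the length-handling code, where the deliberately inconsistent chunk can trigger bugs, exactly the "valid structure, inconsistent data" effect described above.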

Evolutionary
Evolutionary fuzzing is an advanced technique, described briefly here. It allows the
fuzzer to use feedback from each test case to learn the format of the input over time. For
example, by measuring the code coverage of each test case, the fuzzer can work out
which properties of the test case exercise a given area of code, and gradually evolve a set
of test cases that cover the majority of the program code. Evolutionary fuzzing often
relies on techniques such as genetic algorithms and may require some form of
binary instrumentation to operate correctly.
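The feedback loop can be shown with a toy. Everything below is simulated: the `target` function reports which branch IDs it executed, standing in for the real binary instrumentation the text mentions, and the mutation dictionary is invented for the example. The core idea, keeping mutants that reach new coverage as seeds for further mutation, is the same one used by coverage-guided fuzzers.

```python
# A toy coverage-guided (evolutionary) fuzzing loop with simulated coverage.

import random

def target(data):
    """Toy target: returns the set of branch IDs it executed."""
    covered = set()
    for i, b in enumerate(b"FUZ"):            # magic-prefix check, byte by byte
        if i < len(data) and data[i] == b:
            covered.add(i)
        else:
            return covered
    if len(data) > 3 and data[3] == 0xFF:
        covered.add(3)                        # "deep" branch random input rarely hits
    return covered

INTERESTING = [0x00, 0xFF] + list(b"FUZ")     # small mutation dictionary

def evolve(rounds=5000, seed=0):
    rng = random.Random(seed)
    corpus, seen = [b"AAAA"], set()
    for _ in range(rounds):
        child = bytearray(rng.choice(corpus))
        child[rng.randrange(len(child))] = rng.choice(INTERESTING)
        cov = target(bytes(child))
        if not cov <= seen:                   # new coverage: keep child as a seed
            seen |= cov
            corpus.append(bytes(child))
    return seen, corpus

seen, corpus = evolve()
print(sorted(seen))   # branches discovered incrementally over the rounds
```

Note how the deep branch is only reachable because intermediate mutants ("F...", "FU...") each earned new coverage and were kept; a purely random fuzzer would almost never hit all four bytes at once.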

3. Explain various terms:


a. Information Security
Information Security is not only about securing information from unauthorized access.
It is the practice of preventing unauthorized access, use, disclosure, disruption,
modification, inspection, recording, or destruction of information. Information can be
physical or electronic: your personal details, your profile on social media, the data on
your mobile phone, your biometrics, and so on. Information Security therefore spans
many research areas, such as cryptography, mobile computing, cyber forensics, and
online social media.
b. Data Security
Organizations around the globe are investing heavily in information technology (IT)
cyber security capabilities to protect their critical assets. Whether an enterprise needs to
protect a brand, intellectual capital, and customer information, or to provide controls for
critical infrastructure, the means for incident detection and response used to protect
organizational interests have three common elements: people, processes, and technology.
c. Network Security
Network security is a broad term that covers a multitude of technologies, devices and
processes. In its simplest terms, it is a set of rules and configurations designed to protect
the integrity, confidentiality and accessibility of computer networks and data using both
software and hardware technologies. Every organization, regardless of size, industry or
infrastructure, requires some degree of network security in place to protect it from
the ever-growing landscape of cyber threats in the wild today.
d. CIA Model
The CIA triad broken down:
Confidentiality
It's crucial in today's world for people to protect their sensitive, private information from
unauthorized access.
Protecting confidentiality depends on being able to define and enforce certain access
levels for information. In some cases, doing this involves separating information into
various collections organized by who needs access to the information and how
sensitive that information actually is, i.e. the amount of damage that would be suffered
if confidentiality were breached.
Some of the most common means used to manage confidentiality include access control
lists, volume and file encryption, and Unix file permissions.
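Of the controls just listed, Unix file permissions are easy to demonstrate. The sketch below (a minimal example, using only the standard library) restricts a file to its owner so that other users on the system cannot read it.

```python
# Confidentiality via Unix file permissions: set a file to mode 0o600
# (rw-------), so only the owning user can read or write it.

import os
import stat
import tempfile

fd, path = tempfile.mkstemp()        # create a scratch file
os.close(fd)
os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)   # owner read+write only

mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))   # 0o600 -> group and others have no access at all
os.remove(path)
```

The same access-level idea scales up to the access control lists mentioned above, which express per-user and per-group rules rather than the fixed owner/group/other triple.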
Integrity
Data integrity is what the "I" in CIA Triad stands for. This is an essential component of
the CIA Triad and is designed to protect data from deletion or modification by any
unauthorized party; it also ensures that when an authorized person makes a change that
should not have been made, the damage can be reversed.
Availability
This is the final component of the CIA Triad and refers to the actual availability of your
data. Authentication mechanisms, access channels and systems all have to work properly
to protect information and ensure it is available when it is needed.
High availability systems are the computing resources that have architectures that are
specifically designed to improve availability. Based on the specific HA system design,
this may target hardware failures, upgrades or power outages to help improve availability,
or it may manage several network connections to route around various network outages.
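The "route around outages" behavior of an HA design can be sketched as a simple failover loop. The replica functions below are stand-ins for real network calls; in practice the caught exceptions would be timeouts and connection errors, and replica selection would be handled by a load balancer or DNS.

```python
# Availability via failover: try replicas of a service in order and fall back
# to the next one when a call fails, so one outage doesn't take service down.

def failover(replicas):
    errors = []
    for call in replicas:
        try:
            return call()
        except Exception as e:         # real code: catch timeout/connection errors
            errors.append(e)
    raise RuntimeError(f"all {len(errors)} replicas failed")

def down():
    raise ConnectionError("replica unreachable")

def up():
    return "200 OK"

print(failover([down, down, up]))   # succeeds despite two dead replicas
```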
e. AAA Server
An AAA server is a server program that handles user requests for access to computer
resources and, for an enterprise, provides authentication, authorization, and accounting
(AAA) services.
Authentication is the process of identifying an individual, usually based on a username
and password. Authentication is based on the idea that each individual user will have
unique information that sets him or her apart from other users.
Authorization is the process of granting or denying a user access to network resources
once the user has been authenticated through the username and password. The amount of
information and the amount of services the user has access to depend on the user's
authorization level.
Accounting is the process of keeping track of a user's activity while accessing the
network resources, including the amount of time spent in the network, the services
accessed while there and the amount of data transferred during the session. Accounting
data is used for trend analysis, capacity planning, billing, auditing and cost allocation.
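The three services can be tied together in a toy flow. All names, passwords, and roles below are made up for illustration; a real AAA server (e.g. RADIUS or TACACS+) would back this with a user store, a policy engine, and hashed credentials rather than the plaintext used here for brevity.

```python
# A toy AAA flow: authenticate (who are you?), authorize (what may you do?),
# account (what did you do, and when?).

import time

USERS = {"alice": {"password": "s3cret", "role": "admin"}}   # plaintext: sketch only
PERMISSIONS = {"admin": {"configure", "view"}, "guest": {"view"}}
LOG = []   # accounting records

def authenticate(user, password):
    u = USERS.get(user)
    return u is not None and u["password"] == password

def authorize(user, action):
    return action in PERMISSIONS.get(USERS[user]["role"], set())

def account(user, action):
    LOG.append((time.time(), user, action))   # timestamped activity record

if authenticate("alice", "s3cret") and authorize("alice", "configure"):
    account("alice", "configure")

print(LOG)   # one accounting record for alice's configure action
```

The ordering matters: authorization only makes sense after authentication succeeds, and accounting records are what later feed the billing, auditing, and trend analysis mentioned above.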

4. What are the basic objectives of Network Security?


When considering networks, you'll view them from different perspectives. For instance, senior
management might view the network as a business tool to facilitate the goals of the company.
Network technicians (at least some) might consider their networks to be the center of the
universe. End users might consider the network to be just a tool to get their job done, or
possibly as a source of recreation. Not all users appreciate their role in keeping data safe, and
unfortunately the users of the network represent a significant vulnerability, in that they have
usernames and passwords (or other credentials, such as one-time password token generators)
that allow them access to the network. If a user is compromised, or an unauthorized individual
gains access to data, applications, or devices to which they should not have access, the security
of the network can fail as a result, even after all other security concepts have been applied. So,
a crucial point to remember is that users' behavior poses a security risk, and that training users
is a key part of a comprehensive security policy.

5. Explain the Working of Sandboxing?


Sandbox testing proactively detects malware by executing, or detonating, code in a safe and
isolated environment to observe that code's behavior and output activity. Traditional security
measures are reactive and based on signature detection, which works by looking for patterns
identified in known instances of malware. Because that approach detects only previously
identified threats, sandboxes add another important layer of security. Moreover, even if an
initial security defense utilizes artificial intelligence or machine learning (signatureless
detection), these defenses are only as good as the models powering them; there is still a need
to complement such solutions with advanced malware detection such as sandboxing.
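A tiny flavor of "detonating" code in isolation can be sketched with the standard library. This only shows the observe-don't-trust idea: run the suspect code in a separate process with a strict timeout and capture its behavior. Real sandboxes add far more, filesystem and network isolation, syscall filtering, and instrumented virtual machines; the sample script below is an invented stand-in.

```python
# Run an untrusted snippet in a separate process, capture its output and exit
# status under a timeout, and inspect the results rather than trusting them.

import subprocess
import sys

UNTRUSTED = "print('hello')"    # stand-in for a suspect sample

result = subprocess.run(
    [sys.executable, "-c", UNTRUSTED],
    capture_output=True, text=True, timeout=5,
)
print("exit:", result.returncode)            # observed behavior
print("stdout:", result.stdout.strip())     # captured, not executed in-process
```

A real sandbox report would extend this observation to file writes, registry changes, spawned processes, and network connections made during detonation.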
6. Explain the 8 tips required to work with Legacy Code
1. Test the Code
One way to understand the code is to create characterization tests and unit tests. You can also use
a code quality tool — like a static code analyzer — over your code to identify potential problems.
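The characterization-test idea can be sketched as follows. `legacy_price` and its discount quirk are hypothetical stand-ins for inherited code; the point of a characterization test is to pin down what the code *currently* does, quirks included, so later refactoring cannot change behavior silently.

```python
# Characterization tests: record the legacy function's existing behavior
# before touching it, without judging whether that behavior is "right".

import unittest

def legacy_price(qty, unit):
    # odd legacy rule: quantities over 10 silently get a 5% discount
    total = qty * unit
    return total * 0.95 if qty > 10 else total

class CharacterizationTests(unittest.TestCase):
    def test_small_order(self):
        self.assertEqual(legacy_price(2, 10.0), 20.0)

    def test_discount_quirk_is_preserved(self):
        # the quirk may be a bug, but it must not change *accidentally*
        self.assertAlmostEqual(legacy_price(20, 10.0), 190.0)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(CharacterizationTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("ok:", result.wasSuccessful())
```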
2. Review Documentation
Reviewing documentation of the original requirements will help you understand where the code
came from.
3. Only Rewrite Code When It’s Necessary
Rewriting an inherited codebase can be tempting. But it’s usually a mistake.
4. Try Refactoring Instead
It's better to refactor legacy code rather than rewrite it. And it's best to do so gradually.
Refactoring is the process of changing the structure of the code — without changing its
functionality.
This cleans the code and makes it easier to understand. It also eliminates potential errors.
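A micro-example of "same functionality, clearer structure": both functions below are invented for illustration, and the test at the end demonstrates the defining property of a refactoring, that behavior is unchanged.

```python
# Refactoring sketch: the "after" version behaves identically to the "before"
# version but removes the redundant comparison and string concatenation loop.

# Before: legacy style, harder to read
def report_before(users):
    out = ""
    for u in users:
        if u["active"] == True:
            out = out + u["name"] + "\n"
    return out

# After: identical functionality, clearer structure
def report_after(users):
    active = (u["name"] for u in users if u["active"])
    return "".join(name + "\n" for name in active)

users = [{"name": "alice", "active": True}, {"name": "bob", "active": False}]
print(report_before(users) == report_after(users))   # True: behavior unchanged
```

This equality check is exactly what the characterization tests from tip 1 automate: they let you make structural changes like this with confidence.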
5. Make Changes in Different Review Cycles
Don’t make too many changes at once. It’s a bad idea to refactor in the same review cycle as
functional changes.
6. Collaborate with Other Developers
You may not know the codebase very well, but some of your fellow developers probably do.
It's much faster to ask questions of those who know the codebase best.
7. Keep New Code Clean
There’s a way to avoid making the code more problematic. And that’s by ensuring the new code
is clean. It ought to be written to adhere to best practices.
8. Do Further Research
Working with an inherited codebase gets easier with time. A junior developer may not understand
why a codebase hasn’t been refactored (and may be keen to refactor it). But a senior developer
will know when to leave it alone.

7. Explain Control Hijacking in detail with any one attack of your choice.
Cyber hijacking, or computer hijacking, is a type of network security attack in which the attacker
takes control of computer systems, software programs and/or network communications. A wide
range of cyber attacks rely on hijacking in one form or another, and, similar to other
hijackings, such as an airplane hijacking or criminals seizing control of an armored
transport vehicle, cyber hijacking is often, but not always, highly illegal, with severe
consequences for both the attacker and the victim.
There are several different kinds of cyber hijacking, among them:
• browser hijacking
• session hijacking
• domain hijacking
• clipboard hijacking
• domain name system (DNS) hijacking
• Internet Protocol (IP) hijacking
• page hijacking

Browser hijacking is a tactic used by hackers and unscrupulous online advertisers to take control
of a web browser. In practice, browser hijacking is most often used to redirect web traffic, alter
default browser settings or force a victim to click advertisements. However, there are also
instances where hackers use hijacked browsers to intercept sensitive information and even
make unwitting victims download additional malware.
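Of the hijacking types listed above, session hijacking is easy to illustrate: it often starts with guessable session tokens. The sketch below (with an invented counter-based scheme as the weak example) contrasts a predictable token with a cryptographically random one from the standard library's `secrets` module, the usual mitigation.

```python
# Session hijacking enabler vs. mitigation: a sequential token lets an
# attacker predict the next victim's session ID; a CSPRNG token does not.

import secrets

counter = 1000
def weak_token():
    global counter
    counter += 1
    return str(counter)          # next victim's token is simply counter + 1

def strong_token():
    return secrets.token_hex(16)   # 128 random bits from the OS CSPRNG

print(weak_token(), weak_token())   # sequential: trivially predictable
print(strong_token())               # unguessable, different on every call
```

Real session-hijacking defenses pair unguessable tokens with transport encryption (so the token cannot simply be sniffed) and token rotation on privilege changes.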
