
Hardware and Peripheral Devices


A peripheral is a piece of computer hardware that is added to a computer in order to expand
its abilities. The term peripheral is used to describe those devices that are optional in nature,
as opposed to hardware that is always required in principle. There are many different kinds of
peripherals you can add to your computer. The main distinction among peripherals is the way
they are connected to your computer: they can be connected internally or externally.

Buses
A bus is a subsystem that transfers data between computer components inside a computer or
between computers. Unlike a point-to-point connection, a bus can logically connect several
peripherals over the same set of wires. Each bus defines its set of connectors to physically
plug devices, cards or cables together. There are two types of buses: internal and external.
Internal buses are connections to various internal components. External buses are connections
to various external components. There are different kinds of slots that internal and external
devices can connect to.

Internal
Types of Slots

There are many different kinds of internal buses, but only a handful of popular ones.
Different computers come with different kinds and numbers of slots. It is important to know
what kind and number of slots you have on your computer before you go out and buy a card
that matches up to a slot you don't have.

PCI

PCI (Peripheral Component Interconnect) is common in modern PCs. This kind of bus is
being succeeded by PCI Express. Typical PCI cards used in PCs include: network cards,
sound cards, modems, extra ports such as USB or serial, TV tuner cards and disk controllers.
Video cards have outgrown the capabilities of PCI because of their higher bandwidth
requirements.

PCI Express

PCI Express was introduced by Intel in 2004. It was designed to replace the general-purpose
PCI expansion bus and the AGP graphics card interface. PCI Express is not a bus but instead
a point-to-point connection of serial links called lanes. PCI Express cards have higher
bandwidth than PCI cards, which makes them better suited for high-end video cards.

PCMCIA

PCMCIA (also referred to as PC Card) is the type of bus used for laptop computers. The
name PCMCIA comes from the group who developed the standard: the Personal Computer
Memory Card International Association. PCMCIA was originally designed for computer
memory expansion, but the existence of a usable general standard for notebook peripherals
led to many kinds of devices being made available in this form. Typical devices include
network cards, modems, and hard disks.

AGP

AGP (Accelerated Graphics Port) is a high-speed point-to-point channel for attaching a
graphics card to a computer's motherboard, primarily to assist in the acceleration of 3D
computer graphics. AGP has been replaced over the past couple of years by PCI Express.
AGP cards and motherboards are still available to buy, but they are becoming less common.

Types Of Cards
Video Card

A video card (also known as a graphics card) is an expansion card whose function is to
generate and output images to a display. Some video cards offer added functions, such as
video capture, a TV tuner adapter, the ability to connect multiple monitors, and others. If the
video card is integrated into the motherboard, it may use the computer's RAM. If not, it
will have its own video memory, called Video RAM. This kind of memory can range from
128MB to 2GB.

Sound Card

A sound card is an expansion card that facilitates the input and output of audio signals
to/from a computer under the control of computer programs. Typical uses for sound cards
include providing the audio component for multimedia applications such as music
composition, editing video or audio, presentation/education, and entertainment. Many
computers have sound capabilities built in, while others require additional expansion cards
to provide audio capability.

Network Card

A network card is an expansion card that allows computers to communicate over a computer
network. It allows users to connect to each other either by using cables or wirelessly.

External
Types of Connections
USB

USB (Universal Serial Bus) is a serial bus standard to interface devices. USB was designed to
allow many peripherals to be connected using a single standardized interface socket and to
improve the plug-and-play capabilities by allowing devices to be connected and disconnected
without rebooting the computer.

Firewire

Firewire is a serial bus interface standard for high-speed communications and isochronous
real-time data transfer, frequently used in personal computers. Almost all modern digital
camcorders include this connection.

PS/2

The PS/2 connector is used for connecting some keyboards and mice to a PC compatible
computer system.

Devices
Removable Storage

The same kinds of CD and DVD drives that can come built into your computer can also
be attached externally. You might only have a CD-ROM drive built into your computer but
need a CD writer to burn CDs. You can buy an external CD writer that connects to your
USB port and acts the same way as if it were built into your computer. The same is true for
DVD writers, Blu-ray drives, and floppy drives. Flash drives have become a very popular
form of removable storage, especially as their price decreases and their possible capacity
increases. Flash drives usually connect over USB, either as USB sticks or as very small,
portable devices. USB flash drives are small, fast, removable, rewritable, and long-lasting.
Storage capacities range from 64MB to 32GB or more. Unlike a hard drive, a flash drive has
no mechanically driven parts, which usually makes it more durable and smaller.

Non-removable Storage

Non-removable storage can be a hard drive that is connected externally. External hard drives
have become very popular for backups, shared drives among many computers, and simply
expanding the amount of hard drive space you have beyond your internal hard drive. External
hard drives come in many shapes and sizes, like flash drives do. An external hard drive is
usually connected by USB, but you can also have a networked hard drive that connects to
your network, allowing all computers on that network to access it.

Input

Input devices are absolutely crucial to computers. The most common input devices are mice
and keyboards, which nearly every computer has. A newly popular pointing device that may
eventually replace the mouse is the touch screen, which you can get on some tablet notebooks.
Other popular input devices include microphones, webcams, and fingerprint readers, which
can also be built into modern laptops and desktops. A scanner is another popular input
device that might be built into your printer.

Output

There are many different kinds of output devices that you can get for your computer. The
most common external output device is a monitor. Other very popular output devices
are printers and speakers. There are many different kinds of printers and different sizes of
speakers for your computer. Monitors are usually connected through the HD-15 connector on
your video card. Printers are usually connected through a USB port. Speakers have their own
audio-out port built into the sound card.

Operating System Security


Every computer system and software design must handle all security risks and implement the
necessary measures to enforce security policies. At the same time, it's critical to strike a
balance, because strong security measures might increase costs while also limiting the
system's usability, utility, and smooth operation. As a result, system designers must ensure
efficient performance without compromising security.

What is Operating System Security?


The process of ensuring OS availability, confidentiality, and integrity is known as operating
system security. OS security refers to the processes or measures taken to protect the operating
system from dangers, including viruses, worms, malware, and remote hacker intrusions.
Operating system security comprises all preventive-control procedures that protect any
system assets that could be stolen, modified, or deleted if OS security is breached.

What is application security?

Application security may include hardware, software, and procedures that identify or
minimize security vulnerabilities. A router that prevents anyone from viewing a computer’s
IP address from the Internet is a form of hardware application security. But security measures
at the application level are also typically built into the software, such as an application
firewall that strictly defines what activities are allowed and prohibited. Procedures can entail
things like an application security routine that includes protocols such as regular testing.

Application security is the process of developing, adding, and testing security features within
applications to prevent security vulnerabilities against threats such as unauthorized access
and modification.

Why application security is important

Application security is important because today's applications are often available over
various networks and connected to the cloud, increasing vulnerabilities to security threats and
breaches. There is increasing pressure and incentive to not only ensure security at the
network level but also within applications themselves. One reason for this is that hackers
are going after apps with their attacks more today than in the past. Application security
testing can reveal weaknesses at the application level, helping to prevent these attacks.

Types of application security


Different types of application security features include authentication, authorization,
encryption, logging, and application security testing. Developers can also code applications to
reduce security vulnerabilities.

· Authentication: When software developers build procedures into an application to
ensure that only authorized users gain access to it. Authentication procedures ensure that a
user is who they say they are. This can be accomplished by requiring the user to provide a
user name and password when logging in to an application. Multi-factor authentication
requires more than one form of authentication—the factors might include something you
know (a password), something you have (a mobile device), and something you are (a thumb
print or facial recognition).

· Authorization: After a user has been authenticated, the user may be authorized to
access and use the application. The system can validate that a user has permission to access
the application by comparing the user’s identity with a list of authorized users. Authentication
must happen before authorization so that the application matches only validated user
credentials to the authorized user list.

· Encryption: After a user has been authenticated and is using the application, other
security measures can protect sensitive data from being seen or even used by a cybercriminal.
In cloud-based applications, where traffic containing sensitive data travels between the end
user and the cloud, that traffic can be encrypted to keep the data safe.

· Logging: If there is a security breach in an application, logging can help identify who
got access to the data and how. Application log files provide a time-stamped record of which
aspects of the application were accessed and by whom.

· Application security testing: A necessary process to ensure that all of these security
controls work properly (a minimal sketch combining several of these controls follows this list).
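
Below is a minimal sketch of how the authentication, authorization, and logging controls
above fit together. The user store, permission names, and password are invented for
illustration and are not from any particular framework:

    import hashlib
    import hmac
    import logging
    import os

    logging.basicConfig(level=logging.INFO,
                        format="%(asctime)s %(levelname)s %(message)s")

    def _hash(password: str, salt: bytes) -> bytes:
        return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

    # Hypothetical user store: salted password hashes plus permissions.
    _salt = os.urandom(16)
    USERS = {"alice": {"salt": _salt, "pw_hash": _hash("s3cret", _salt),
                       "perms": {"reports.read"}}}

    def authenticate(username: str, password: str) -> bool:
        """Authentication: confirm the user is who they say they are."""
        user = USERS.get(username)
        ok = user is not None and hmac.compare_digest(
            _hash(password, user["salt"]), user["pw_hash"])
        # Logging: time-stamped record of the attempt and its outcome.
        logging.info("login %s for user %r", "ok" if ok else "FAILED", username)
        return ok

    def authorize(username: str, permission: str) -> bool:
        """Authorization: checked only after authentication succeeds."""
        user = USERS.get(username)
        return user is not None and permission in user["perms"]

    if authenticate("alice", "s3cret") and authorize("alice", "reports.read"):
        print("access granted")
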
Virtualization
Virtualization is a process that allows for more efficient utilization of physical computer
hardware and is the foundation of cloud computing.

Virtualization uses software to create an abstraction layer over computer hardware that allows
the hardware elements of a single computer—processors, memory, storage and more—to be
divided into multiple virtual computers, commonly called virtual machines (VMs). Each VM
runs its own operating system (OS) and behaves like an independent computer, even though
it is running on just a portion of the actual underlying computer hardware.

It follows that virtualization enables more efficient utilization of physical computer hardware
and allows a greater return on an organization’s hardware investment.

Today, virtualization is a standard practice in enterprise IT architecture. It is also the
technology that drives cloud computing economics. Virtualization enables cloud providers to
serve users with their existing physical computer hardware; it enables cloud users to purchase
only the computing resources they need when they need them, and to scale those resources
cost-effectively as their workloads grow.

Benefits of virtualization

Virtualization brings several benefits to data center operators and service providers:

• Resource efficiency: Before virtualization, each application server required its own
dedicated physical CPU—IT staff would purchase and configure a separate server for
each application they wanted to run. (IT preferred one application and one operating
system (OS) per computer for reliability reasons.) Invariably, each physical server
would be underused. In contrast, server virtualization lets you run several
applications—each on its own VM with its own OS—on a single physical computer
(typically an x86 server) without sacrificing reliability. This enables maximum
utilization of the physical hardware’s computing capacity.

• Easier management: Replacing physical computers with software-defined VMs
makes it easier to use and manage policies written in software. This allows you to
create automated IT service management workflows. For example, automated
deployment and configuration tools enable administrators to define collections of
virtual machines and applications as services, in software templates (see the sketch
after this list). This means that they can install those services repeatedly and
consistently without cumbersome, time-consuming, and error-prone manual setup.
Admins can use virtualization security policies to mandate certain security
configurations based on the role of the virtual machine. Policies can even increase
resource efficiency by retiring unused virtual machines to save on space and
computing power.

• Minimal downtime: OS and application crashes can cause downtime and disrupt user
productivity. Admins can run multiple redundant virtual machines alongside each
other and failover between them when problems arise. Running multiple redundant
physical servers is more expensive.
• Faster provisioning: Buying, installing, and configuring hardware for each
application is time-consuming. Provided that the hardware is already in place,
provisioning virtual machines to run all your applications is significantly faster. You
can even automate it using management software and build it into existing workflows.
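
As a rough illustration of defining services as software templates, the sketch below declares
a hypothetical template as plain data and stamps out identically configured VMs from it. The
field names and the provision function are invented stand-ins for a real virtualization
management API:

    # Hypothetical service template: every VM provisioned from it gets
    # the same image, sizing, and role-based security configuration.
    WEB_TIER_TEMPLATE = {
        "image": "ubuntu-22.04",
        "vcpus": 2,
        "memory_mb": 4096,
        "security_profile": "web-dmz",
    }

    def provision(name: str, template: dict) -> dict:
        """Stand-in for a call to a real virtualization management API."""
        return {"name": name, **template}

    # Repeatable, consistent deployment: three identical web-tier VMs,
    # with no cumbersome manual setup per machine.
    fleet = [provision(f"web-{i}", WEB_TIER_TEMPLATE) for i in range(3)]
    for vm in fleet:
        print(vm["name"], vm["security_profile"])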

Types of virtualization

• Desktop virtualization
• Network virtualization
• Storage virtualization
• Data virtualization
• Application virtualization
• Data center virtualization
• CPU virtualization
• GPU virtualization
• Linux virtualization
• Cloud virtualization

Desktop virtualization

Desktop virtualization lets you run multiple desktop operating systems, each in its own VM
on the same computer.

There are two types of desktop virtualization:

• Virtual desktop infrastructure (VDI)
• Local desktop virtualization

Network virtualization

Network virtualization uses software to create a “view” of the network that an administrator
can use to manage the network from a single console.

It takes hardware elements and functions (e.g., connections, switches, routers) and
abstracts them into software running on a hypervisor. The network administrator can modify
and control these elements without touching the underlying physical components, which
dramatically simplifies network management.

Types of network virtualization include:

• Software-defined networking (SDN)
• Network function virtualization (NFV)

Storage virtualization

Storage virtualization enables all the storage devices on the network—whether they're
installed on individual servers or standalone storage units—to be accessed and managed as a
single storage device.

Data virtualization

Modern enterprises store data from multiple applications, using multiple file formats, in
multiple locations, ranging from the cloud to on-premise hardware and software systems.
Data virtualization lets any application access all of that data—irrespective of source, format,
or location.

Data virtualization tools create a software layer between the applications accessing the data
and the systems storing it. The layer translates an application’s data request or query as
needed and returns results that can span multiple systems.

Application virtualization

Application virtualization runs application software without installing it directly on the user’s
OS. This differs from complete desktop virtualization (mentioned above) because only the
application runs in a virtual environment—the OS on the end user’s device runs as usual.
There are three types of application virtualization:

• Local application virtualization
• Application streaming
• Server-based application virtualization

What is threat modeling?

Threat modeling or threat assessment is the process of reviewing the threats to an enterprise
or information system and then formally evaluating the degree and nature of the threats.
Threat modeling is one of the first steps in application security and usually includes the
following five steps:

1. rigorously defining enterprise assets;

2. identifying what each application does or will do with respect to these assets;

3. creating a security profile for each application;

4. identifying and prioritizing potential threats; and

5. documenting adverse events and the actions taken in each case.
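
One way to capture the output of these five steps is a simple threat register. The record
layout below is only an illustration of what such a security profile might hold:

    # Illustrative threat-register entry tying an asset to an application,
    # its security profile, and a prioritized threat (steps 1-4), with a
    # place to document adverse events (step 5).
    threat_register = [
        {
            "asset": "customer database",        # step 1: enterprise asset
            "application": "billing-portal",     # step 2: what touches it
            "security_profile": {"authn": "MFA", "transport": "TLS 1.3"},
            "threat": "SQL injection",           # step 4: identified threat
            "priority": "high",
            "incidents": [],                     # step 5: adverse events
        },
    ]

    # Review highest-priority threats first.
    for entry in sorted(threat_register,
                        key=lambda e: e["priority"] != "high"):
        print(entry["asset"], "-", entry["threat"], f"({entry['priority']})")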


Systems Monitoring and Auditing
Systems monitoring and auditing, at (^Company^), must be performed to determine when a
failure of the information system security, or a breach of the information system itself, has
occurred, and the details of that breach or failure.

Purpose
System monitoring and auditing is used to determine if inappropriate actions have occurred
within an information system.

System monitoring is used to look for these actions in real time while system auditing looks
for them after the fact.

This policy applies to all information systems and information system components of
(^Company^). Specifically, it includes:

1. Mainframes, servers, and other devices that provide centralized computing capabilities
2. Devices that provide centralized storage capabilities
3. Desktops, laptops, and other devices that provide distributed computing capabilities
4. Routers, switches, and other devices that provide network capabilities
5. Firewall, Intrusion Detection/Prevention (IDP) sensors, and other devices that provide
dedicated security capabilities

Policy Details

Information systems will be configured to record login/logout and all administrator activities
into a log file. Additionally, information systems will be configured to notify administrative
personnel if inappropriate, unusual, and/or suspicious activity is noted. Inappropriate,
unusual, and/or suspicious activity will be fully investigated by appropriate administrative
personnel and findings reported to the VP of IT or COO.

Information systems are to be provided with sufficient primary (on-line) storage to retain 30
days' worth of log data and sufficient secondary (off-line) storage to retain one year's worth
of data. If primary storage capacity is exceeded, the information system will be configured to
overwrite the oldest logs. In the event of other logging system failures, the information
system will be configured to notify an administrator.

System logs shall be manually reviewed weekly. Inappropriate, unusual, and/or suspicious
activity will be fully investigated by appropriate administrative personnel and findings
reported to appropriate security management personnel.

System logs are considered confidential information. As such, all access to system logs and
other system audit information requires prior authorization and strict authentication. Further,
access to logs or other system audit information will be captured in the logs.

Introduction to Physical Security
It is very important to remember that software is not your only weapon when it comes to
cyber security. Physical Cyber Security is another tier in your line of defense.

Physical security is the protection of personnel, hardware, software, networks and data
from physical actions and events that could cause serious loss or damage to an
enterprise, agency or institution.

Physical security is critical, especially for a small business that does not have as many
resources to devote to security personnel and tools as larger firms do. When it comes to
physical security, the same principles apply:

· Identify and classify your assets and resources.
· Identify plausible threats.
· Identify probable vulnerabilities that threats may exploit.
· Identify the expected cost in case an attack occurs.

Factors on which Physical Security Depends
1. How many workplaces, buildings, or sites are there in an organization?
2. How large are the organization's buildings?
3. How many employees are employed in the organization?
4. How many entry and exit points are there in the organization?
5. Where are data centers and other confidential information placed?

Layers of Physical Security

Layers in physical security are implemented at the perimeter, moving inward toward an
asset. The layers are as follows:

1. Deterrence

The goal of deterrence methods is to convince a potential attacker that a successful attack is
not possible due to strong defenses. For example, by placing your keys inside a highly secure
key control system made of heavy metal such as steel, you can help prevent attackers from
gaining access to assets. Deterrence methods are classified into 4 categories:

· Physical Barriers: These include fences, walls, vehicle barriers, etc. They also act as a
psychological deterrent by defining the perimeter of the facility and making intrusion seem
more difficult.

· Combination Barriers: These are designed to defeat defined threats. This is a part of
building codes as well as fire codes.

· Natural Surveillance: In this approach, architects seek to build places that are more open
and visible to authorized users and security personnel, so that attackers are unable to perform
unauthorized activity without being seen. For example, decreasing the amount of dense and
tall vegetation.
· Security Lighting: Doors, gates, and other means of entrance should be well lit, as
intruders are less likely to enter well-lit areas. Keep in mind to place lighting in a manner
that is difficult to tamper with.

2. Detection

If you are using a manual key control system, you have no way of knowing the exact
timestamp of when an unauthorized user requested a key or exceeded its time limit.
Detection methods can be of the following types:

· Alarm Systems and Sensors: Alarm systems can be installed to alert security
personnel in case of an attempt at unauthorized access. They consist of sensors like perimeter
sensors, motion sensors, etc.

· Video Surveillance: Surveillance cameras can be used for detection if an attack has
already occurred and a camera is placed at the point of attack. Recorded video can then be
used to review the incident.

3. Access Control

These methods are used to monitor and control the traffic through specific access points.
Access Control includes the following methods:

· Mechanical Access Control Systems: These include gates, doors, locks, etc.

· Electronic Access Control: These systems are used to monitor and control larger
populations, managing user life cycles, dates, and individual access points.

· Identification Systems and access policies: These include the use of policies,
procedures, and processes to manage access into the restricted area.

4. Security Personnel

They play a central role in all layers of security. They perform many functions like:

· Administering electronic access control.

· Responding to alarms.

· Monitoring and analyzing video footage, and many more.

Countermeasures and Protection Techniques

1. Protection against Dumpster Diving

Dumpster Diving is the process of finding useful information about a person or business
from the trash, which can later be used for hacking purposes. Since the information is in
the trash, it is no longer useful to the owner but may be deemed useful to the picker. To
protect against it, you need to follow certain measures:
· Ensure all important documents are shredded so that they remain secure after disposal.

· Destroy any CDs/ DVDs containing personal data.

· Make sure that nobody can walk into your building and simply steal your garbage, and
have a safe disposal policy in place.

· Firewalls can be used to prevent suspicious users from accessing the discarded data.

2. Site Access Control

Lack of access control can be highly devastating if the wrong person gets in and gains access
to sensitive information. Fortunately, nowadays you have a number of modern tools that will
help you optimize your access control.

3. Secure Network-Enabled Printers

Network printers are a very convenient option, allowing anyone in the office to get connected
without the need for extra wiring. Unfortunately, they also carry underlying security risks.
Sometimes, due to default settings, they offer open WiFi access, thus allowing anyone to get
in and open vulnerabilities in the process.

· Only connect those printers to the Internet that actually need to be.
· Remote access is not necessary in scenarios where only people from your office use the printer.
· You can add passwords to the connection if necessary.

4. Securing Your Backups

Physical backups are critical for business continuity, helping you prevent data loss in the
event of disasters, outages, and more. Most businesses secure their servers, but they forget
that backups are equally important. Backups hold the same level of sensitive data as servers.
Treat your backups as you treat your sensitive information and secure them.

7. Building Secure Guest Wifi

Guest WiFi is a natural solution when you have guests or visitors, as it isolates guest traffic
from your internal devices and data.

8. Locking up your Servers

Any area in your organization that stores data needs to be secured. Lock doors and make
sure the server area gets extra protection.

9. Accounting for Loss or Stolen Devices

As devices become more mobile, the chances of them being stolen or falling out of
someone's pocket become greater. Mobile Device Management can help you manage such
situations and take the necessary precautions. The best solution in such cases is to simply
lock down, and potentially wipe, any lost or stolen devices remotely.
10. Implementing video systems

To achieve a more secure premises, it is advisable to use a Video Surveillance system.

· The mere presence of cameras can deter potential attackers.
· The availability of video footage allows you to have continuous monitoring over the entire
premises.
· If an attack happens, you can check the recorded video, easily reconstruct the sequence of
events, and catch the perpetrator.

Policy vs Procedure: What Are Policies and Procedures?


In the information security industry, policies and procedures refer to the documentation that
describes how your business is run.

What is a policy?
The definition of a policy is a set of rules or guidelines for your organization and employees
to follow in order to achieve a specific goal (e.g., compliance).

Policies answer questions about what employees do and why they do it.

An effective policy should outline what employees must do or not do, directions, limits,
principles, and guidance for decision making.

Policies answer questions like: What? Why?

What is a procedure?
A procedure is the counterpart to a policy; it is the instruction on how a policy is followed.

It is the step-by-step instruction for how the policies outlined above are to be achieved.

A policy defines a rule, and the procedure defines who is expected to do it and how they are
expected to do it.

Procedures answer questions like: How? When? Where?

Risk Analysis
Risk Assessment – Risk management is a recurrent activity; risk assessment, on the other
hand, is executed at discrete points and is valid until the performance of the next assessment.
Risk assessment is the process of evaluating known and postulated threats and vulnerabilities
to determine expected loss.

It also includes establishing the degree of acceptability to system operations.

Risk assessment receives its input from the context establishment phase; its output is a list of
assessed risks, where risks are given priorities as per risk evaluation criteria.

1. Risk Identification – In this step, we identify the following:

· assets
· threats
· existing and planned security measures
· vulnerabilities
· consequences
· related business processes
· a list of assets and related business processes with the associated list of threats and existing
and planned security measures
· a list of vulnerabilities unrelated to any identified threats
· a list of incident scenarios with their consequences

2. Risk Estimation – There are 2 methods for Risk Assessment:

1. Quantitative Risk Assessment – This methodology is not widely used by organizations,
except for financial institutions and insurance companies. Quantitative risk is mathematically
expressed as Annualised Loss Expectancy (ALE).

ALE = SLE * ARO

ALE is the monetary loss that can be expected for an asset due to a risk being realised over a
one-year period.

Single Loss Expectancy (SLE) is the value of a single loss of the asset.

Annualised Rate of Occurrence (ARO) is how often the loss is expected to occur per year.
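
For a quick worked example, with illustrative figures (the dollar amount and frequency below
are assumptions, not benchmarks):

    # Worked example with illustrative figures.
    SLE = 25_000     # dollars lost in a single incident
    ARO = 0.2        # incident expected once every five years
    ALE = SLE * ARO  # annualised loss expectancy
    print(f"ALE = ${ALE:,.0f} per year")  # ALE = $5,000 per year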

2. Qualitative Risk Assessment – Qualitative risk assessment defines likelihood, impact
values, and risk in subjective terms, keeping in mind that likelihood and impact values are
highly uncertain.

Qualitative risk assessments typically give risk results of “High”, “Moderate” and “Low”.
Following are the steps in Qualitative Risk Assessment:

1. Identifying Threats: Threats and threat-sources must be identified. Threats should include
the threat-source to ensure accurate estimation. It is important to compile a list of all
possible threats that are present across the organization and use this list as the basis for all
risk management activities. Some examples of threats and threat-sources are:
· Natural Threats- floods, earthquakes, etc.

· Human Threats- viruses, worms, etc.

· Environmental Threats- power failure, pollution, etc.

2. Identifying Vulnerabilities: Vulnerabilities are identified by numerous means. Some of
the tools are:

a. Vulnerability Scanners – This is software that compares the operating system or
application code against a database of flaw signatures in order to find flaws.

b. Penetration Testing – A human security analyst will exercise threats against the
system, including operational vulnerabilities such as social engineering.

c. Audit of Operational and Management Controls – Operational and management
controls are reviewed by comparing the current documentation to best practices (for example,
ISO 17799) and by comparing actual practices against current documented processes.

3. Relating Threats to Vulnerabilities: This is the most difficult, and mandatory, activity in
risk assessment. The threat-vulnerability (T-V) pair list is established by reviewing the
vulnerability list and pairing each vulnerability with every threat that applies, then by
reviewing the threat list and ensuring that all the vulnerabilities that a threat-action/threat
can act against have been identified.

4. Defining Likelihood: Likelihood is the probability that a threat caused by a threat-source
will occur against a vulnerability. Sample likelihood definitions can be:

Low – 0-30% chance of successful exercise of the threat during a one-year period
Moderate – 31-70% chance of successful exercise of the threat during a one-year period
High – 71-100% chance of successful exercise of the threat during a one-year period

These are just sample definitions. Organizations can use their own definitions, such as
Very Low, Low, Moderate, High, Very High.

5. Defining Impact: Impact is best defined in terms of impact upon confidentiality,
integrity, and availability.

A) Assessing Risk: Assessing risk is the process of determining the likelihood of the
threat being exercised against the vulnerability and the resulting impact from a
successful compromise (a sketch of a simple risk matrix follows below).

B) Risk Evaluation – The risk evaluation process receives as input the output of the risk
analysis process. It first compares each risk level against the risk acceptance criteria
and then prioritises the risk list with risk treatment indications.
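
The likelihood and impact ratings above are often combined in a simple risk matrix. The
mapping below is one illustrative convention, not a prescribed standard:

    # Illustrative qualitative risk matrix: (likelihood, impact) -> risk.
    RISK_MATRIX = {
        ("Low", "Low"): "Low",           ("Low", "Moderate"): "Low",
        ("Low", "High"): "Moderate",     ("Moderate", "Low"): "Low",
        ("Moderate", "Moderate"): "Moderate",
        ("Moderate", "High"): "High",    ("High", "Low"): "Moderate",
        ("High", "Moderate"): "High",    ("High", "High"): "High",
    }

    def assess(likelihood: str, impact: str) -> str:
        """Assess one threat-vulnerability pair against the matrix."""
        return RISK_MATRIX[(likelihood, impact)]

    # A threat with a 31-70% chance of success and a high impact:
    print(assess("Moderate", "High"))  # -> High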

3. Risk Mitigation/Management – Risk mitigation involves prioritizing, evaluating, and
implementing the appropriate risk-reducing controls recommended by the risk assessment
process. Since eliminating all risk in an organization is close to impossible, it is the
responsibility of senior management and functional and business managers to use the least-
cost approach and implement the most appropriate controls to decrease risk to an acceptable
level. As per the NIST SP 800-30 framework, there are 6 steps in risk mitigation.
1. Risk Assumption: This means to accept the risk and continue operating the system, but
at the same time try to implement controls to lower the risk to an acceptable level.

2. Risk Avoidance: This means to eliminate the risk cause or consequence in order to
avoid the risk, for example, shutting down the system if a risk is identified.

3. Risk Limitation: To limit the risk by implementing controls that minimize the adverse
impact of a threat's exercising a vulnerability (e.g., use of supporting, preventive, and
detective controls).

4. Risk Planning: To manage risk by developing a risk mitigation plan that prioritizes,
implements, and maintains controls

5. Research and Acknowledgement: This step involves acknowledging the vulnerability
or flaw and researching controls to correct the vulnerability.

6. Risk Transference: This means to transfer the risk in order to compensate for the loss,
for example by purchasing insurance, which guarantees not 100% recovery in all cases but
at least some recovery from the loss.

4. Risk Communication – The main purpose of this step is to communicate and give an
understanding of all aspects of risk to all the stakeholders of an organization. Establishing a
common understanding is important, since it influences the decisions to be taken.

5. Risk Monitoring and Review – Security measures are regularly reviewed to ensure they
work as planned and that changes in the environment don't make them ineffective. With
major changes in the work environment, security measures should also be updated. Business
requirements, vulnerabilities, and threats can change over time. Regular audits should be
scheduled and should be conducted by an independent party.

6. IT Evaluation and Assessment – Security controls should be validated. Technical
controls are systems that need to be tested and verified. Vulnerability assessments and
penetration tests are used to verify the status of security controls. Monitoring system events
according to a security monitoring strategy, an incident response plan, and security validation
and metrics are fundamental activities to assure that an optimal level of security is obtained.
It is important to keep a check on new vulnerabilities and apply procedural and technical
controls, for example, regularly updating software.

What are Network Firewalls?
Network firewalls are security devices used to stop or mitigate unauthorized access to private
networks connected to the Internet, especially intranets. The only traffic allowed on the
network is defined via firewall policies — any other traffic attempting to access the network
is blocked.

Network firewalls sit at the front line of a network, acting as a communications liaison
between internal and external devices.

A network firewall can be configured so that any data entering or exiting the network has to
pass through it — it accomplishes this by examining each incoming message and rejecting
those that fail to meet the defined security criteria. When properly configured, a firewall
allows users to access any of the resources they need while simultaneously keeping out
unwanted users, hackers, viruses, worms or other malicious programs trying to access the
protected network.

Software vs. hardware firewalls

Firewalls can be either hardware or software. In addition to limiting access to a protected
computer and network, a firewall can log all traffic coming into or leaving a network, and
manage remote access to a private network through secure authentication certificates and
logins.

• Hardware firewalls: These firewalls are released either as standalone products for corporate
use, or more often, as a built-in component of a router or other networking device. They are
considered an essential part of any traditional security system and network configuration.
Hardware firewalls will almost always come with a minimum of four network ports that allow
connections to multiple systems. For larger networks, a more expansive networking firewall
solution is available.
• Software firewalls: These are installed on a computer, or provided by an OS or network
device manufacturer. They can be customized, and provide a smaller level of control over
functions and protection features. A software firewall can protect a system from standard
control and access attempts, but may have trouble with more sophisticated network breaches.

A firewall is considered an endpoint protection technology. In protecting private information,
a firewall can be considered a first line of defense, but it cannot be the only defense.

Firewall types

Firewalls are relied upon to secure home and corporate networks. A simple firewall program
or device will sift through all information passing through the network — this process can
also be customized depending on the needs of the user and the capabilities of the firewall.
There are a number of major firewall types that prevent harmful information from passing
through the network:

• Application-layer Firewalls: This is a hardware appliance, software filter, or server plug-in.
It layers security mechanisms on top of defined applications, such as FTP servers, and defines
rules for HTTP connections. These rules are built for each application, to help identify and
block attacks to a network.
• Packet Filtering Firewalls: This filter examines every packet that passes through the
network — and then accepts or denies it as defined by rules set by the user (a minimal sketch
of this rule evaluation appears after this list). Packet filtering can be very helpful, but it can
be challenging to properly configure. Also, it's vulnerable to IP spoofing.
• Circuit-level Firewalls: This firewall type applies a variety of security mechanisms once a UDP
or TCP connection has been made. Once the connection is established, packets are
exchanged directly between hosts without further oversight or filtering.
• Proxy Server Firewalls: This version will check all messages that enter or leave a network,
and then hide the real network addresses from any external inspection.
• Next Generation Firewalls (NGFW): These work by filtering traffic moving through a
network — the filtering is determined by the applications or traffic types and the ports they
are assigned to. These features comprise a blend of a standard firewall with additional
functionality, to help with greater, more self-sufficient network inspection.
• Stateful Firewalls: Sometimes referred to as third generation firewall technology, stateful
filtering accomplishes two things: traffic classification based on the destination port, and
packet tracking of every interaction between internal connections. These newer technologies
increase usability and assist in expanding access control granularity — interactions are no
longer defined by port and protocol. A packet’s history in the state table is also measured.
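
To make the packet-filtering model concrete, here is a minimal first-match-wins rule
evaluator. The rule format is invented for illustration; real filters such as iptables or pf
have their own syntax and far more matching criteria:

    from dataclasses import dataclass

    @dataclass
    class Packet:
        src_ip: str
        dst_port: int
        protocol: str   # "tcp" or "udp"

    # First-match-wins rule list; None acts as a wildcard field.
    RULES = [
        {"action": "deny",  "src_ip": "203.0.113.9", "dst_port": None, "protocol": None},
        {"action": "allow", "src_ip": None, "dst_port": 443, "protocol": "tcp"},
        {"action": "allow", "src_ip": None, "dst_port": 53,  "protocol": "udp"},
    ]

    def filter_packet(pkt: Packet) -> str:
        for rule in RULES:
            if all(rule[f] in (None, getattr(pkt, f))
                   for f in ("src_ip", "dst_port", "protocol")):
                return rule["action"]
        return "deny"   # default policy: block anything not explicitly allowed

    print(filter_packet(Packet("198.51.100.7", 443, "tcp")))  # allow
    print(filter_packet(Packet("198.51.100.7", 25,  "tcp")))  # deny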

All of these network firewall types are useful for power users, and many firewalls will allow
for two or more of these techniques to be used in tandem with one another.

Why Network Firewalls are Important


Without firewalls, if a computer has a publicly addressable IP — for instance, if it is directly
connected via ethernet — then any network service that is currently running on that device
may become accessible to the outside world. Any computer network that is connected to the
internet is also potentially at risk for an attack. Without a firewall, these networks become
vulnerable to malicious attacks. For example:

• If your network is connected to the internet, some types of malware find ways to divert
portions of your hardware's bandwidth for their own purposes.
• Some types of malware are designed to gain access to your network to use sensitive
information such as credit card info, bank account numbers or other proprietary data like
customer information.
• Other types of malware are designed to simply destroy data or bring networks down.

For full-spectrum security, firewalls should be placed between any network that has a
connection to the internet, and businesses should establish clear computer security plans, with
policies on external networks and data storage.

In the cloud era, network firewalls can do more than secure a network. They can also help
ensure that you have uninterrupted network availability and robust access to cloud-hosted
applications.

Transport layer firewalls
The Transport Layer is the second layer in the TCP/IP model and the fourth layer in the OSI
model. It is an end-to-end layer used to deliver messages to a host. It is termed an end-to-end
layer because it provides a point-to-point connection, rather than hop-to-hop, between the
source host and destination host to deliver the services reliably.

The unit of data encapsulation in the Transport Layer is a segment.

Working of Transport Layer:


The transport layer takes services from the Network layer and provides services to
the Application layer.

At the sender's side: The transport layer receives data (a message) from the Application layer
and then performs segmentation, dividing the actual message into segments, adds the source
and destination port numbers into the header of each segment, and transfers the message to
the Network layer.

At the receiver’s side: The transport layer receives data from the Network layer, reassembles
the segmented data, reads its header, identifies the port number, and forwards the message to
the appropriate port in the Application layer.

Responsibilities of a Transport Layer:


Process to process delivery:

While Data Link Layer requires the MAC address (48 bits address contained inside the
Network Interface Card of every host machine) of source-destination hosts to correctly
deliver a frame and the Network layer requires the IP address for appropriate routing of
packets, in a similar way Transport Layer requires a Port number to correctly deliver the
segments of data to the correct process amongst the multiple processes running on a
particular host. A port number is a 16-bit address used to identify any client-server program
uniquely.
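
A short, runnable illustration of port-based, process-to-process delivery using Python's
standard socket module; the loopback address and port 9999 are arbitrary choices for the
example:

    import socket

    # Receiver: a process claims UDP port 9999 on this host.
    receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    receiver.bind(("127.0.0.1", 9999))

    # Sender: the destination port in each datagram tells the receiving
    # host's transport layer which process should get the payload.
    sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sender.sendto(b"hello", ("127.0.0.1", 9999))

    data, addr = receiver.recvfrom(1024)
    print(f"got {data!r} from {addr}")  # delivered to the process on port 9999
    sender.close()
    receiver.close()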

End-to-end Connection between hosts:

The transport layer is also responsible for creating the end-to-end connection between hosts,
for which it mainly uses TCP and UDP. TCP is a secure, connection-oriented protocol that
uses a handshake protocol to establish a robust connection between two end hosts. TCP
ensures reliable delivery of messages and is used in various applications. UDP, on the other
hand, is a stateless and unreliable protocol that ensures best-effort delivery. It is suitable for
applications that have little concern with flow or error control and require sending bulk data,
like video conferencing. It is often used in multicasting protocols.
Congestion Control:

Congestion is a situation in which too many sources over a network attempt to send data and
the router buffers start overflowing, due to which loss of packets occurs. As a result,
retransmission of packets from the sources increases the congestion further. In this situation,
the Transport layer provides congestion control in different ways. It uses open-loop
congestion control to prevent congestion and closed-loop congestion control to remove
congestion in a network once it has occurred.

Data integrity and Error correction:

The transport layer checks for errors in the messages coming from the application layer by
using error detection codes. By computing checksums, it checks whether the received data is
corrupted, and it uses the ACK and NACK services to inform the sender whether or not the
data has arrived, thus checking the integrity of the data.
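
As a concrete example of checksum-based error detection, below is the classic 16-bit
ones'-complement Internet checksum used in TCP and UDP headers, simplified to assume an
even-length message:

    def internet_checksum(data: bytes) -> int:
        """16-bit ones'-complement sum, as used in TCP/UDP headers.
        Assumes len(data) is even for brevity."""
        total = 0
        for i in range(0, len(data), 2):
            total += (data[i] << 8) | data[i + 1]      # 16-bit words
            total = (total & 0xFFFF) + (total >> 16)   # fold carry back in
        return ~total & 0xFFFF

    msg = b"hell"
    cksum = internet_checksum(msg)
    # The receiver recomputes over message + checksum; a zero result
    # means no detectable corruption, so no retransmission is needed.
    check = internet_checksum(msg + cksum.to_bytes(2, "big"))
    print(hex(cksum), hex(check))  # check == 0x0 for an intact message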

Flow control:

The transport layer provides a flow control mechanism between the adjacent layers of the
TCP/IP model. TCP also prevents data loss due to a fast sender and slow receiver by
imposing flow control techniques. It uses the sliding window protocol, in which the receiver
sends a window size back to the sender, informing it of how much data it can receive.
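
A toy model of the sliding window idea: the sender never has more unacknowledged segments
in flight than the window the receiver advertised. The window size and segment data are made
up for the demonstration:

    # Toy model of a receiver-advertised sliding window.
    segments = [f"seg-{i}" for i in range(8)]   # data queued at the sender
    window = 3                                  # size advertised by the receiver

    base = 0        # oldest unacknowledged segment
    next_seq = 0    # next segment to send
    while base < len(segments):
        # Send while the window has room (in-flight segments < window).
        while next_seq < len(segments) and next_seq - base < window:
            print("send", segments[next_seq])
            next_seq += 1
        # Receiver ACKs the oldest segment, sliding the window forward.
        print("ack ", segments[base])
        base += 1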

Protocols of Transport Layer:


· TCP(Transmission Control Protocol)
· UDP (User Datagram Protocol)
· SCTP (Stream Control Transmission Protocol)
· DCCP (Datagram Congestion Control Protocol)
· ATP (AppleTalk Transaction Protocol)
· FCP (Fibre Channel Protocol)
· RDP (Reliable Data Protocol)
· RUDP (Reliable User Data Protocol)
· SST (Structured Stream Transport)
· SPX (Sequenced Packet Exchange)

What is a Web Application Firewall?


A Web Application Firewall protects a web application by filtering, monitoring, and blocking
any malicious HTTP/S traffic that might penetrate the web application.

In simple words, a Web Application Firewall acts as a shield between a web application and
the Internet. This shield protects the web application from different types of attacks.

Working of Web Application Firewall
· According to the OSI model, a WAF is a protocol layer seven (application layer) defense.

· When a WAF is deployed in front of a web application, a shield is created between the
web application and the Internet.

· The advantage of a WAF is that it functions independently of the application, yet it can
constantly adapt to changes in the application's behavior.

· The clients are passed through the WAF before reaching the server in order to protect
the server from exposure.

· A WAF can be set to various levels of examination, usually in a range from low to high,
which allows the WAF to provide a better level of security.

Types of Web Application Firewall:


· Network-based WAFs are usually hardware-based. They provide latency reduction
due to local installation. Network-based WAFs are the most expensive and also require the
storage and maintenance of physical equipment.

· Host-based WAFs may be completely integrated into an application's software. They
exist as modules for a web server. This is a cheaper solution compared to hardware-based
WAFs and is suited to small web applications. The disadvantage of a host-based WAF is
the consumption of local server resources, because of which performance may degrade.

· Cloud-based WAFs are low-cost and have fewer resources to manage. The cloud-based
solution is a good choice when an organization doesn't want to be restricted by performance
capabilities. Service providers can offer a virtually unlimited hardware pool, but after a
certain point the service fees might increase.

Importance of Web Application Firewall:


There are many hackers out there ready to execute malicious attacks. The most common
attacks, such as XSS and SQL injection, can be prevented with the help of a WAF, as
discussed further below. The purpose of a WAF is to protect your web application from such
malicious attacks. The WAF constantly monitors for potential attacks, blocking those found
to be malicious in any way.

Policy in Web Application Firewall:


· The set of rules through which a WAF operates is called a policy.

· The purpose of these policies is to protect against the vulnerabilities in the application
by filtering out malicious traffic.
· The value of a WAF comes in part from the speed and efficiency with which policy
modifications can be implemented.

Types of Attacks a Web Application Firewall Can Prevent:


· DDoS Attacks aim to target a particular web application/website/server with fake
traffic.

· Cross-Site Scripting (XSS) Attacks target the users of vulnerable web
applications/websites in order to gain access to and control of their browsers.

· SQL Injection Attacks: Malicious SQL code is injected in the form of requests or
queries through the user input boxes of the web application.

· Man-in-the-middle attacks take place when the perpetrators position themselves
between the application and the legitimate users in order to extract confidential details.

· Zero-day attacks are unexpected attacks; the organization learns of the existence of
vulnerabilities in the hardware/software only once the attack has taken place.

Blocklist and Allowlist in Web Application Firewalls:


· Blocklist: A WAF that is based on a blocklist protects against known attacks.
Visualize a blocklist WAF as a college security guard instructed to deny admittance to
students who don't bring their ID cards.

· Allowlist: A WAF based on an allowlist only admits traffic that has been pre-approved.
This is like the college security guard who only admits people who are on the list.

Both blocklists and allowlists have their advantages and disadvantages, which is why many
WAFs offer a hybrid security model that implements both.
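
A minimal sketch of the two models, and of a hybrid combining them. The patterns and paths
are illustrative only; production WAF policies are far richer:

    import re

    # Blocklist model: deny requests matching known-bad patterns
    # (crude signatures for SQL injection and XSS), allow the rest.
    BLOCK_PATTERNS = [
        re.compile(r"(?i)\bunion\s+select\b"),   # SQL injection signature
        re.compile(r"(?i)<script\b"),            # reflected XSS signature
    ]

    # Allowlist model: admit only pre-approved paths, deny the rest.
    ALLOWED_PATHS = {"/", "/login", "/search"}

    def hybrid_waf(path: str, query: str) -> str:
        """Hybrid policy: allowlist the path, then blocklist the payload."""
        if path not in ALLOWED_PATHS:
            return "deny"
        if any(p.search(query) for p in BLOCK_PATTERNS):
            return "deny"
        return "allow"

    print(hybrid_waf("/search", "q=shoes"))                      # allow
    print(hybrid_waf("/search", "q=1' UNION SELECT password"))   # deny
    print(hybrid_waf("/admin", ""))                              # deny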

What is business continuity?


Business continuity is a business’s level of readiness to maintain critical functions after an
emergency or disruption. These events can include:

· Security breaches
· Natural disasters
· Power outages
· Equipment failures
· Sudden staff departures

Why business continuity is important

Leading organizations make business continuity a top priority because maintaining critical
functions after an emergency or disruption can be the difference between the success and
failure of a business. If key business capabilities fail, a quick recovery time to bring systems
back up is crucial. Getting a business continuity strategy in place before disaster hits can save
a tremendous amount of time and money. The plan for recovery needs to include roles and
responsibilities, as well as which systems need to be recovered in which order. There are
many aspects of business continuity to consider and test, which is another reason to plan
ahead. For instance, large data sets can take an excruciatingly long time to restore from a
backup, so failover to a remote data center might be a better solution for businesses with a
large amount of data.

When resiliency and recovery plans fail, or when an unforeseen event occurs, a contingency
plan can act as a last resort. A contingency plan includes a practiced strategy and plan for
last-resort needs. These needs could range from asking third-party vendors for help to finding
a second location for emergency office space or remote back-up servers.

What does business continuity include?


A business continuity and risk management plan usually involves three considerations:

· Resiliency

· Recovery

· Contingency

There are many international standards and policies to guide the development of disaster
recovery and business continuity plans.

Business continuity tools


There are a wide variety of business continuity tools to choose from, which all perform
slightly different functions:

· Back-up: Backing up data is one of the simplest ways to ensure business continuity.
Storing data off site or on a remote drive provides some business continuity, but other tools
are needed to back up the IT infrastructure and keep it functioning in the event of a disaster.

· Backup as a Service: Backup as a Service is similar to backing up data at a remote
location, but a third-party provider performs the backup. Again, only the data is backed up,
not the IT infrastructure.

· Point-in-time and Instant Recovery Copies: Point-in-time copies, or snapshots, copy
the entire database at regular intervals. Similarly, instant recovery copies take snapshots of
entire virtual machines. If these copies are stored off-site or on a virtual machine that is
unaffected by the disaster, data can be restored from them.

· Cold Site: Businesses can set up a basic infrastructure in a second facility known as a
cold site, where employees can work after a natural disaster or fire. A cold site can help
business operations to continue, but it must be combined with other methods of disaster
recovery that protect or enable recovery of important data.

· Hot Site: A hot site is a second business location that functions like a cold site and also
maintains an up-to-date copy of data at all times. Hot sites dramatically reduce downtime, but
they are more expensive than cold sites and more time-consuming to set up.

· Disaster Recovery as a Service (DRaaS): A disaster recovery as a service (DRaaS)
provider moves an organization's computer processing to its own cloud infrastructure in
the event of a disaster. Businesses pay for this service through a subscription or a pay-per-use
model. One advantage of DRaaS is that businesses can continue to operate seamlessly from
the vendor's location, even if their own servers are down. Choosing a local DRaaS provider
will reduce latency, but if the vendor's servers are too close to the disaster location, they
may be affected by the same disaster.

· Physical Tools: Physical disaster recovery tools can mitigate the effects of certain
types of disasters, though not cyber attacks. Physical elements that can support business
continuity include fire suppression tools to help data and computer equipment survive a fire,
and a backup power source that supports businesses through short-term power outages.

What is Disaster Recovery?


Disaster recovery is an organization's method of regaining access and functionality to its IT
infrastructure after events like a natural disaster, cyber attack, or even business disruptions
related to the COVID-19 pandemic. A variety of disaster recovery (DR) methods can be part
of a disaster recovery plan. DR is one aspect of business continuity.

How does disaster recovery work?


Disaster recovery relies upon the replication of data and computer processing in an off-
premises location not affected by the disaster. When servers go down because of a natural
disaster, equipment failure or cyber attack, a business needs to recover lost data from a
second location where the data is backed up. Ideally, an organization can transfer its
computer processing to that remote location as well in order to continue operations.

5 top elements of an effective disaster recovery plan


1. Disaster recovery team: This assigned group of specialists will be responsible for
creating, implementing and managing the disaster recovery plan. This plan should define
each team member’s role and responsibilities. In the event of a disaster, the recovery team
should know how to communicate with each other, employees, vendors, and customers.

2. Risk evaluation: Assess potential hazards that put your organization at risk.
Depending on the type of event, strategize what measures and resources will be needed to
resume business. For example, in the event of a cyber attack, what data protection measures
will the recovery team have in place to respond?

3. Business-critical asset identification: A good disaster recovery plan includes
documentation of which systems, applications, data, and other resources are most critical for
business continuity, as well as the necessary steps to recover data.

4. Backups: Determine what needs backup (or to be relocated), who should perform
backups, and how backups will be implemented. Include a recovery point objective (RPO)
that states the frequency of backups and a recovery time objective (RTO) that defines the
maximum amount of downtime allowable after a disaster (a small worked example follows
this list). These metrics create limits to guide the choice of IT strategy, processes, and
procedures that make up an organization's disaster recovery plan. The amount of downtime
an organization can handle and how frequently the organization backs up its data will inform
the disaster recovery strategy.

5. Testing and optimization: The recovery team should continually test and update its
strategy to address ever-evolving threats and business needs. Continually verifying that the
company is prepared for worst-case disaster scenarios makes it far more likely to navigate a
real one successfully. In planning how to respond to a cyber attack, for example, organizations
should continually test and optimize their security and data protection strategies and have
protective measures in place to detect potential security breaches.
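
As promised in item 4 above, here is a minimal sketch of how RPO and RTO can be checked in monitoring code. The four-hour and two-hour targets, the function names, and the timestamps are all hypothetical, for illustration only.

    from datetime import datetime, timedelta

    # Hypothetical targets -- real values come from business impact analysis.
    RPO = timedelta(hours=4)   # maximum tolerable data loss window
    RTO = timedelta(hours=2)   # maximum tolerable downtime

    def rpo_violated(last_backup: datetime, now: datetime) -> bool:
        # True if data written since the last backup exceeds the RPO window.
        return now - last_backup > RPO

    def rto_violated(outage_start: datetime, now: datetime) -> bool:
        # True if an ongoing outage has exceeded the allowable downtime.
        return now - outage_start > RTO

    now = datetime(2024, 1, 1, 12, 0)
    print(rpo_violated(datetime(2024, 1, 1, 6, 0), now))   # True: backup is 6h old
    print(rto_violated(datetime(2024, 1, 1, 11, 0), now))  # False: down for 1h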

What are the types of disaster recovery?


Businesses can choose from a variety of disaster recovery methods, or combine several:

· Back-up: This is the simplest type of disaster recovery and entails storing data off site
or on a removable drive. However, just backing up data provides only minimal business
continuity help, as the IT infrastructure itself is not backed up.

· Cold Site: In this type of disaster recovery, an organization sets up a basic infrastructure in a second, rarely used facility that provides a place for employees to work after a natural disaster or fire. It can help with business continuity because business operations can continue, but it does not provide a way to protect or recover important data, so a cold site must be combined with other methods of disaster recovery.

· Hot Site: A hot site maintains up-to-date copies of data at all times. Hot sites are time-
consuming to set up and more expensive than cold sites, but they dramatically reduce
downtime.

· Back Up as a Service: Similar to backing up data at a remote location; with Back Up
as a Service, a third-party provider backs up an organization’s data, but not its IT
infrastructure.

· Data center disaster recovery: The physical elements of a data center can protect data
and contribute to faster disaster recovery in certain types of disasters. For instance, fire
suppression tools will help data and computer equipment survive a fire. A backup power
source will help businesses sail through power outages without grinding operations to a halt.
Of course, none of these physical disaster recovery tools will help in the event of a cyber
attack.

· Point-in-time copies: Point-in-time copies, also known as point-in-time snapshots, make a copy of the entire database at a given time. Data can be restored from this backup, but only if the copy is stored off site or on a virtual machine that is unaffected by the disaster. A minimal snapshot sketch appears after this list.

· Instant recovery: Instant recovery is similar to point-in-time copies, except that instead of copying a database, instant recovery takes a snapshot of an entire virtual machine.
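
To make the snapshot idea concrete, here is a minimal point-in-time copy using Python’s built-in sqlite3 module, whose Connection.backup() method copies a whole database as it exists at that moment. The file names are hypothetical; production systems would use their database vendor’s snapshot facilities and ship the copy off site.

    import sqlite3

    # Hypothetical file names, for illustration only.
    source = sqlite3.connect("orders.db")          # live database
    snapshot = sqlite3.connect("orders-snap.db")   # point-in-time copy

    source.backup(snapshot)   # copies the entire database at this instant

    snapshot.close()
    source.close()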

What is Vulnerability Assessment?


An information system is an integrated set of components for collecting, storing,
processing and communicating information. Building an information system involves various
phases, one of which is a review of system security. All systems are prone to attacks like
cross-site scripting (XSS) and SQL injection, so it is important that the organization reviews
the system for possible threats beforehand. This helps in identifying the vulnerabilities and
weaknesses of the system. This kind of systematic review of a system is called vulnerability
assessment.

How does Vulnerability Assessment help?

It helps an organization safeguard itself from cyber attacks by identifying loopholes in
advance. Here are some threats that vulnerability assessment can help prevent (a minimal
injection example follows this list):

· Injection attacks like XSS and SQL injection

· Authentication faults that lead to unauthorized access to important data

· Insecure settings and weak defaults
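
As a concrete illustration of the injection item above, this sketch contrasts a query built by string concatenation (vulnerable to SQL injection) with a parameterized query, using Python’s built-in sqlite3 module. The table and data are hypothetical.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

    name = "x' OR '1'='1"  # malicious input

    # Vulnerable: the input is spliced into the SQL text, so the attacker's
    # quotes change the meaning of the query and it returns every row.
    rows = conn.execute(
        "SELECT * FROM users WHERE name = '" + name + "'").fetchall()
    print(rows)  # [('alice', 's3cret')] -- data leaked despite the wrong name

    # Safe: a parameterized query treats the input as data, never as SQL.
    rows = conn.execute(
        "SELECT * FROM users WHERE name = ?", (name,)).fetchall()
    print(rows)  # [] -- no user is literally named "x' OR '1'='1"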

What are the different types of Vulnerability Assessments?

Vulnerability assessments can be of different types depending on the needs and the type of a
system.

· Host Vulnerability Assessment: Applications and information systems often rely on
backend servers, and many attackers use these servers to inject threats into the system. Thus,
it is important to test servers and review them for vulnerabilities.

· Database Vulnerability Assessment: The database is one of the most important aspects of any information system; it is where crucial user data is stored. A breach of a database system can lead to heavy losses. Thus, it is important to make sure that no outsider can access, alter, or destroy the data. This can be done by assessing the database for possible threats and vulnerabilities.

· Network Vulnerability Assessment: Private as well as public networks are prone to injection attacks. Checking a network for possible issues helps prevent large-scale data loss.

· Application Scan Vulnerability Assessment: Most applications can be divided into two parts:

• The frontend
• The backend

Both of these parts have their own source code, which must be analyzed both statically and
dynamically for possible vulnerabilities. This assessment is often done through automated
scans of the source code; a minimal static-analysis sketch follows.
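
As a toy example of the static side of such a scan, this sketch uses Python’s built-in ast module to flag calls to eval, a classic code-injection risk. Real scanners (the open-source Bandit tool for Python, for instance) apply hundreds of rules; this single check and the sample snippet are illustrative only.

    import ast
    import textwrap

    # Hypothetical application source to be scanned.
    source = textwrap.dedent("""
        def run(user_input):
            return eval(user_input)
    """)

    tree = ast.parse(source)
    for node in ast.walk(tree):
        # Flag any direct call to the built-in eval().
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == "eval"):
            print(f"line {node.lineno}: eval() call -- possible code injection")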

The Process of Vulnerability Assessment:

The process of Vulnerability Assessment is divided into four stages. Let us discuss them one
by one.

· Testing or Vulnerability Identification: All aspects of a system, such as networks, servers, and databases, are checked for possible threats, weaknesses, and vulnerabilities. The goal of this step is to get a list of all possible loopholes in the security of the system. The testing is done both with automated tools and manually, keeping all parameters in mind.

· Analysis: The first step produces a list of vulnerabilities, which are then analyzed in
detail. The goal of this analysis is to identify where things went wrong so that rectification
can be done easily; this step aims at finding the root cause of each vulnerability.

· Risk Assessment: When there are many vulnerabilities, it becomes important to classify them on the basis of the risks they might cause. The main objective of this step is to prioritize vulnerabilities on the basis of the data and systems they might affect; it also gauges the severity of potential attacks and the damage they could cause. A minimal prioritization sketch follows this list.

· Rectification: Once we have a clear picture of the risks, their root causes, and their
severity, we can start making corrections in the system. This fourth step aims at closing the
gaps in security by introducing new security tools and measures.
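
Following on from the risk assessment step, a common way to prioritize findings is a simple risk score, often severity multiplied by likelihood. This sketch ranks a few hypothetical findings that way; real programs typically use CVSS scores and asset criticality rather than these made-up numbers.

    # Hypothetical findings: (description, severity 1-5, likelihood 1-5).
    findings = [
        ("Weak default admin password", 5, 4),
        ("Outdated TLS configuration", 3, 3),
        ("Verbose error messages", 2, 5),
    ]

    # Risk score = severity x likelihood; remediate the highest scores first.
    ranked = sorted(findings, key=lambda f: f[1] * f[2], reverse=True)
    for description, severity, likelihood in ranked:
        print(f"{severity * likelihood:2d}  {description}")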

Tools for Vulnerability Assessment:


Manually testing an application for possible vulnerabilities can be tedious. There are
tools that can automatically scan a system for vulnerabilities. A few such tools
include:

· Simulation tools that test web applications.

· Scanners that test network services and protocols.

· Network scanners that identify malicious packets and suspicious IP addresses (a minimal port-scan sketch follows).
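
As a minimal sketch of what a network scanner does at its simplest, the snippet below probes a handful of TCP ports on a host using Python’s standard socket module. The target address and port list are placeholders; scan only systems you are authorized to test.

    import socket

    HOST = "127.0.0.1"            # placeholder target -- scan only with permission
    PORTS = [22, 80, 443, 3306]   # a few well-known service ports

    for port in PORTS:
        # connect_ex returns 0 when the TCP handshake succeeds (port open).
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.5)
            state = "open" if s.connect_ex((HOST, port)) == 0 else "closed"
            print(f"{HOST}:{port} {state}")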

What is Vulnerability Testing?


Vulnerability testing, or vulnerability assessment, is the process of identifying and reducing
the security loopholes in an application that are prone to attack. It is a software testing
technique that is crucial for applications that demand high security and are likely targets of
attack or unauthorized access.

For example, point-of-sale (POS) applications, banking applications, and the like have a high
chance of malicious attack because they deal with money. These applications must go through
vulnerability testing to ensure they are safe to use and protect customers’ confidential data.

Various tools and techniques are available for vulnerability testing, including Intruder,
Acunetix, and Nessus. Vulnerabilities fall into the following types:

1. Data-based.

2. Host-based.

3. Network-based.

Vulnerabilities can arise for the following reasons:

· Internal design issues.

· Failure to properly follow a secure development process.

· Flawed design architecture.

· Test failures.

· Uncovered test scenarios.

Why Vulnerability Testing?


Vulnerability testing exposes security loopholes, which helps developers close them
and safeguard an application. Some key reasons for doing vulnerability testing are:

· Security: To make a system more secure and reliable, so that there is no unauthorized
access and no hacker attack. Vulnerability testing examines the system to identify security
loopholes and reduces them by referring them to the concerned development team.

· Design issues: In vulnerability testing, the operating system, application software, and
network are scanned to identify security leaks. This helps reveal drawbacks in the
application’s design and lets developers know the vulnerable areas and redesign
them.

· Prioritize the security issues: Vulnerability testing identifies insecure design
issues and helps developers prioritize them by severity.

· Password strengthening: Passwords are among the most important security controls; testers
validate that password handling is strong enough that attackers cannot crack it (a minimal
strength-check sketch follows this list).
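
As a minimal sketch of the password point above, the function below applies a few common strength rules (length and character classes). The 12-character minimum and the rules themselves are an assumed baseline, not a complete policy; real testers also examine hashing, rate limiting, and breached-password lists.

    import string

    def weak_reasons(password: str) -> list:
        # Return the strength rules this password fails (empty list = passes).
        reasons = []
        if len(password) < 12:
            reasons.append("shorter than 12 characters")
        if not any(c in string.ascii_lowercase for c in password):
            reasons.append("no lowercase letter")
        if not any(c in string.ascii_uppercase for c in password):
            reasons.append("no uppercase letter")
        if not any(c in string.digits for c in password):
            reasons.append("no digit")
        if not any(c in string.punctuation for c in password):
            reasons.append("no symbol")
        return reasons

    print(weak_reasons("password1"))      # fails several rules
    print(weak_reasons("T0ugh&Br1ght!"))  # [] -- passes this minimal check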

Types of Vulnerability Scanners

Vulnerability scanners are automated tools that scan all IT assets on the network to disclose
vulnerable areas. These tools may be paid or freely available. There are five types of
vulnerability scanners:

1. Host-based: A host is a server that connects to and communicates with other servers on
the internet. A host-based scanner identifies vulnerabilities in workstations, the OS platform,
and other related areas. It also estimates the damage that unauthorized access could do to the
system. The host-based vulnerability scanner identifies vulnerable areas, gauges the damage
level, and helps resolve the detected damage.

2. Network-based: This scanner identifies possible vulnerable areas over the network, since
the application interacts with the internet to provide services to users. It tries to identify
security attacks on wired or wireless networks by scanning the application on the network, and
it scans all devices and software operating over the network to identify security loopholes.

3. Database-based: A database is most prone to hacker attacks because it contains an organization’s confidential information. An attack on the database affects the brand value, revenue, and trust of customers. Scanning an application’s database uncovers the weak areas that are vulnerable to attack or insecure and finds ways to cover them.

4. Application-based: These scanners examine an application to identify vulnerabilities, for example those introduced by application updates. Cyberattacks are the most common security attacks on an application; they inject malicious data into the website’s original data, which breaks the customer’s trust. A vulnerability scanner helps in determining new and existing vulnerabilities along with the amount of damage reported in an application.

5. Wireless-based: Wireless scanners scan the ports and identify security issues in an
application’s network. After identifying the security weak points, the scanner reports to the
team, and the developers try to strengthen security through encryption or other means.

Tools for Vulnerability Testing

1. Intruder: Intruder aims to find security weaknesses before any hacker does. It is an online
vulnerability scanner that identifies the security drawbacks of an application. It is a paid
scanner and provides a free demo. Its features include:

· Automatically scans an application to find loopholes.

· Alerts the team when new ports are exposed or new changes are made in the application.

2. Acunetix: Acunetix is a vulnerability scanner for websites, web applications, and APIs. It is
a paid scanner, and you can use its demo version to learn more about it. Features of Acunetix
include:

· It is automated and can detect around 7,000 kinds of vulnerabilities across all vulnerable
areas.

· It helps in identifying true vulnerabilities, reducing false positives.

3. Frontline: Frontline is a popular network vulnerability scanner with a 4.5 rating. Along
with finding vulnerable areas, it also suggests remedies. The features of Frontline include:

· It is user-friendly.

· It fixes some vulnerabilities with a single click.

4. Nexus: Nexus is a highly popular vulnerability scanner with around 2 million downloads. It
is freely available and was developed by Sonatype to identify security loopholes. Some of the
features of Nexus include:

· It offers ways to cover highly vulnerable areas.

· It identifies security risks at an early stage.

5. Nessus: Nessus is freely available for non-enterprise use and charges a minimal fee for
enterprise use; it is sold by Tenable. It alerts the testing team when it finds vulnerable
areas and provides mitigation measures. Some of the features of Nessus include:

· It identifies malicious attacks and quickly pinpoints vulnerable areas.

· High-speed discovery of IT assets.
