Buses
A bus is a subsystem that transfers data between computer components inside a computer or
between computers. Unlike a point-to-point connection, a bus can logically connect several
peripherals over the same set of wires. Each bus defines its set of connectors to physically
plug devices, cards or cables together. There are two types of buses: internal and external.
Internal buses are connections to various internal components. External buses are connections
to various external components. There are different kinds of slots that internal and external
devices can connect to.
Internal
Types of Slots
There are many different kinds of internal buses, but only a handful of popular ones.
Different computers come with different kinds and numbers of slots. It is important to know
what kind and number of slots you have on your computer before you go out and buy a card
that matches up to a slot you don't have.
PCI
PCI (Peripheral Component Interconnect) is common in modern PCs. This kind of bus is
being succeeded by PCI Express. Typical PCI cards used in PCs include: network cards,
sound cards, modems, extra ports such as USB or serial, TV tuner cards and disk controllers.
Video cards have outgrown the capabilities of PCI because of their higher bandwidth
requirements.
PCI Express
PCI Express was introduced by Intel in 2004. It was designed to replace the general-purpose
PCI expansion bus and the AGP graphics card interface. PCI Express is not a bus but instead
a point-to-point connection of serial links called lanes. PCI Express cards have higher
bandwidth than PCI cards, which makes them better suited for high-end video cards.
PCMCIA
PCMCIA (also referred to as PC Card) is the type of bus used for laptop computers. The
name PCMCIA comes from the group who developed the standard: Personal Computer
Memory Card International Association. PCMCIA was originally designed for computer
memory expansion, but the existence of a usable general standard for notebook peripherals
led to many kinds of devices being made available in this form. Typical devices include
network cards, modems, and hard disks.
AGP
AGP (Accelerated Graphics Port) is a dedicated point-to-point channel that was used to
attach a video card directly to the motherboard; it has since been superseded by PCI Express.
Types Of Cards
Video Card
A video card (also known as graphics card) is an expansion card whose function is to
generate and output images to a display. Some video cards offer added functions, such as
video capture, TV tuner adapter, ability to connect multiple monitors, and others. If the video
card is integrated into the motherboard, it may use the computer's RAM. If it is not, it
will have its own video memory called Video RAM. This kind of memory can range from
128MB to 2GB.
Sound Card
A sound card is an expansion card that facilitates the input and output of audio signals
to/from a computer under control of computer programs. Typical uses for sound cards include
providing the audio component for multimedia applications such as music composition,
editing video or audio, presentation/education, and entertainment. Many computers have
sound capabilities built in, while others require additional expansion cards to provide
audio capability.
Network Card
A network card is an expansion card that allows computers to communicate over a computer
network. It allows users to connect to each other either by using cables or wirelessly.
External
Types of Connections
USB
USB (Universal Serial Bus) is a serial bus standard to interface devices. USB was designed to
allow many peripherals to be connected using a single standardized interface socket and to
improve the plug-and-play capabilities by allowing devices to be connected and disconnected
without rebooting the computer.
Firewire
Firewire is a serial bus interface standard for high-speed communications and isochronous
real-time data transfer, frequently used in personal computers. Almost all modern digital
camcorders include this connection.
PS/2
The PS/2 connector is used for connecting some keyboards and mice to a PC compatible
computer system.
Devices
Removable Storage
The same kinds of CD and DVD drives that could come built-in on your computer can also
be attached externally. You might only have a CD-ROM drive built-in to your computer but
you need a CD writer to burn CDs. You can buy an external CD writer that connects to your
USB port and acts the same way as if it was built-in to your computer. The same is true for
DVD writers, Blu-ray drives, and floppy drives. Flash drives have become a very popular
form of removable storage, especially as their price decreases and their possible capacity
increases. Flash drives usually connect over USB, either as USB sticks or as other very
small, portable devices. USB flash drives are small, fast, removable, rewritable, and
long-lasting. Storage capacities range from 64MB to 32GB or more. Unlike a hard drive, a
flash drive has no mechanically driven parts, which usually makes it more durable and
smaller.
Non-removable Storage
Non-removable storage can be a hard drive that is connected externally. External hard drives
have become very popular for backups, shared drives among many computers, and simply
expanding the amount of hard drive space you have beyond your internal hard drive. External
hard drives come in many shapes and sizes, like flash drives do. An external hard drive is
usually connected by USB, but you can also have a networked hard drive that connects to
your network, allowing all computers on that network to access it.
Input
Input devices are absolutely crucial to computers. The most common input devices are mice
and keyboards, which nearly every computer has. A popular new pointing device that may
eventually replace the mouse is the touch screen, which you can get on some tablet notebooks.
Other popular input devices include microphones, webcams, and fingerprint readers which
can also be built in to modern laptops and desktops. A scanner is another popular input
device that might be built-in to your printer.
Output
There are lots of different kinds of output devices that you can get for your computer. The
absolute most common external output device is a monitor. Other very popular output devices
are printers and speakers. There are lots of different kinds of printers and different sizes of
speakers for your computer. Monitors are usually connected through the HD-15 connector on
your video card. Printers are usually connected through a USB port. Speakers have their own
audio-out port built in to the sound card.
Application Security
Application security may include hardware, software, and procedures that identify or
minimize security vulnerabilities. A router that prevents anyone from viewing a computer’s
IP address from the Internet is a form of hardware application security. But security measures
at the application level are also typically built into the software, such as an application
firewall that strictly defines what activities are allowed and prohibited. Procedures can entail
things like an application security routine that includes protocols such as regular testing.
Application security is the process of developing, adding, and testing security features within
applications to prevent security vulnerabilities against threats such as unauthorized access
and modification.
Why application security is important
Application security is important because today’s applications are often available over
various networks and connected to the cloud, increasing vulnerabilities to security threats and
breaches. There is increasing pressure and incentive to not only ensure security at the
network level but also within applications themselves. One reason for this is because hackers
are going after apps with their attacks more today than in the past. Application security
testing can reveal weaknesses at the application level, helping to prevent these attacks.
· Authentication: Procedures that ensure a user is who they claim to be, for example by
requiring a username and password (and possibly additional factors) when logging in to an
application.
· Authorization: After a user has been authenticated, the user may be authorized to
access and use the application. The system can validate that a user has permission to access
the application by comparing the user’s identity with a list of authorized users. Authentication
must happen before authorization so that the application matches only validated user
credentials to the authorized user list.
· Encryption: After a user has been authenticated and is using the application, other
security measures can protect sensitive data from being seen or even used by a cybercriminal.
In cloud-based applications, where traffic containing sensitive data travels between the end
user and the cloud, that traffic can be encrypted to keep the data safe.
· Logging: If there is a security breach in an application, logging can help identify who
got access to the data and how. Application log files provide a time-stamped record of which
aspects of the application were accessed and by whom.
· Application security testing: A necessary process to ensure that all of these security
controls work properly.
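As a minimal sketch of the authentication-then-authorization ordering described above (the user store, credential check, and all names here are invented for illustration, not a real API):

```python
# Illustrative user store and credentials -- assumptions for this sketch.
AUTHORIZED_USERS = {"alice", "bob"}                  # list of authorized users
CREDENTIALS = {"alice": "s3cret", "bob": "hunter2"}  # identity records

def authenticate(username, password):
    """Validate the user's identity. This must happen first."""
    return CREDENTIALS.get(username) == password

def authorize(username):
    """Compare a validated identity against the authorized-user list."""
    return username in AUTHORIZED_USERS

def can_access(username, password):
    # Authentication precedes authorization, so only validated
    # credentials are ever matched against the authorized list.
    return authenticate(username, password) and authorize(username)
```

Note how `can_access` short-circuits: an unauthenticated user is rejected before the authorization list is ever consulted.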
Virtualization
Virtualization is a process that allows for more efficient utilization of physical computer
hardware and is the foundation of cloud computing.
Virtualization uses software to create an abstraction layer over computer hardware that allows
the hardware elements of a single computer—processors, memory, storage and more—to be
divided into multiple virtual computers, commonly called virtual machines (VMs). Each VM
runs its own operating system (OS) and behaves like an independent computer, even though
it is running on just a portion of the actual underlying computer hardware.
It follows that virtualization enables more efficient utilization of physical computer hardware
and allows a greater return on an organization’s hardware investment.
Benefits of virtualization
Virtualization brings several benefits to data center operators and service providers:
• Resource efficiency: Before virtualization, each application server required its own
dedicated physical CPU—IT staff would purchase and configure a separate server for
each application they wanted to run. (IT preferred one application and one operating
system (OS) per computer for reliability reasons.) Invariably, each physical server
would be underused. In contrast, server virtualization lets you run several
applications—each on its own VM with its own OS—on a single physical computer
(typically an x86 server) without sacrificing reliability. This enables maximum
utilization of the physical hardware’s computing capacity.
• Minimal downtime: OS and application crashes can cause downtime and disrupt user
productivity. Admins can run multiple redundant virtual machines alongside each
other and failover between them when problems arise. Running multiple redundant
physical servers is more expensive.
• Faster provisioning: Buying, installing, and configuring hardware for each
application is time-consuming. Provided that the hardware is already in place,
provisioning virtual machines to run all your applications is significantly faster. You
can even automate it using management software and build it into existing workflows.
Types of virtualization
• Desktop virtualization
• Network virtualization
• Storage virtualization
• Data virtualization
• Application virtualization
• Data center virtualization
• CPU virtualization
• GPU virtualization
• Linux virtualization
• Cloud virtualization
Desktop virtualization
Desktop virtualization lets you run multiple desktop operating systems, each in its own VM
on the same computer.
Network virtualization
Network virtualization uses software to create a “view” of the network that an administrator
can use to manage the network from a single console.
It abstracts hardware elements and functions (e.g., connections, switches, routers) into
software running on a hypervisor. The network administrator can modify and control these
elements without touching the underlying physical components, which dramatically simplifies
network management.
Storage virtualization
Storage virtualization enables all the storage devices on the network—whether they're
installed on individual servers or standalone storage units—to be accessed and managed as a
single storage device.
Data virtualization
Modern enterprises store data from multiple applications, using multiple file formats, in
multiple locations, ranging from the cloud to on-premise hardware and software systems.
Data virtualization lets any application access all of that data—irrespective of source, format,
or location.
Data virtualization tools create a software layer between the applications accessing the data
and the systems storing it. The layer translates an application’s data request or query as
needed and returns results that can span multiple systems.
Application virtualization
Application virtualization runs application software without installing it directly on the user’s
OS. This differs from complete desktop virtualization (mentioned above) because only the
application runs in a virtual environment—the OS on the end user’s device runs as usual.
There are three types of application virtualization: local application virtualization,
application streaming, and server-based application virtualization.
Threat modeling or threat assessment is the process of reviewing the threats to an enterprise
or information system and then formally evaluating the degree and nature of the threats.
Threat modeling is one of the first steps in application security and usually includes the
following five steps:
2. identifying what each application does or will do with respect to these assets;
Purpose
System monitoring and auditing is used to determine if inappropriate actions have occurred
within an information system.
System monitoring is used to look for these actions in real time while system auditing looks
for them after the fact.
This policy applies to all information systems and information system components of
(^Company^). Specifically, it includes:
1. Mainframes, servers, and other devices that provide centralized computing capabilities
2. Devices that provide centralized storage capabilities
3. Desktops, laptops, and other devices that provide distributed computing capabilities
4. Routers, switches, and other devices that provide network capabilities
5. Firewall, Intrusion Detection/Prevention (IDP) sensors, and other devices that provide
dedicated security capabilities
Policy Details
Information systems will be configured to record login/logout and all administrator activities
into a log file. Additionally, information systems will be configured to notify administrative
personnel if inappropriate, unusual, and/or suspicious activity is noted. Inappropriate,
unusual, and/or suspicious activity will be fully investigated by appropriate administrative
personnel and findings reported to the VP of IT or COO.
Information systems are to be provided with sufficient primary (on-line) storage to retain
30 days' worth of log data and sufficient secondary (off-line) storage to retain one year's
worth of data. If primary storage capacity is exceeded, the information system will be
configured to
overwrite the oldest logs. In the event of other logging system failures, the information
system will be configured to notify an administrator.
System logs shall be manually reviewed weekly. Inappropriate, unusual, and/or suspicious
activity will be fully investigated by appropriate administrative personnel and findings
reported to appropriate security management personnel.
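A hedged sketch of what part of that weekly review might look like if automated (the log-line format, event names, and threshold are assumptions, not a prescribed standard):

```python
from collections import Counter

# Assumed log format: "<timestamp> <event> <username>"
LOG_LINES = [
    "2024-01-01T10:00:00 LOGIN_FAIL admin",
    "2024-01-01T10:00:05 LOGIN_FAIL admin",
    "2024-01-01T10:00:09 LOGIN_FAIL admin",
    "2024-01-01T10:01:00 LOGIN_OK alice",
]

def flag_suspicious(lines, threshold=3):
    """Flag users with repeated failed logins for investigation."""
    fails = Counter(
        line.split()[2] for line in lines if "LOGIN_FAIL" in line
    )
    return [user for user, n in fails.items() if n >= threshold]
```

A pass like this supplements, rather than replaces, the manual weekly review required by the policy.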
System logs are considered confidential information. As such, all access to system logs and
other system audit information requires prior authorization and strict authentication. Further,
access to logs or other system audit information will be captured in the logs.
Introduction to Physical Security
It is very important to remember that software is not your only weapon when it comes to
cyber security. Physical Cyber Security is another tier in your line of defense.
Physical security is the protection of personnel, hardware, software, networks and data
from physical actions and events that could cause serious loss or damage to an
enterprise, agency or institution.
Physical security is critical, especially for small businesses that do not have as many
resources to devote to security personnel and tools as larger firms. When it comes to
physical security, the same principles apply:
Layers in physical security are implemented starting at the perimeter and moving inward
toward an asset. The layers are as follows:
1. Deterrence
The goal of Deterrence methods is to convince a potential attacker that a successful attack is
not possible due to strong defenses. For example: By placing your keys inside a highly secure
key control system made up of heavy metal like steel, you can help prevent attackers from
gaining access to assets. Deterrence methods are classified into 4 categories:
· Physical Barriers: These include fences, walls, vehicle barriers, etc. They also act as a
Psychological deterrent by defining the perimeter of the facility and making intrusion seem
more difficult.
· Combination Barriers: These are designed to defeat defined threats. This is a part of
building codes as well as fire codes.
· Natural Surveillance: Here, architects seek to build places that are more open and
visible to authorized users and security personnel, so that attackers are unable to perform
unauthorized activity without being seen. For example, decreasing the amount of dense and
tall vegetation.
· Security Lighting: Doors, gates, and other means of entrance should be well lit, as
intruders are less likely to enter well-lit areas. Keep in mind to place lighting in a
manner that is difficult to tamper with.
2. Detection
If you are using a manual key control system, you have no way of knowing the exact
timestamp of when an unauthorized user requested a key or exceeded its time limit.
Detection methods can be of the following types:
· Alarm Systems and Sensors: Alarm systems can be installed to alert security
personnel in case of an attempt of unauthorized access. They consist of sensors like perimeter
sensors, motion sensors, etc.
· Video Surveillance: Surveillance cameras can be used for detection if an attack has
already occurred and a camera is placed at the point of attack. Recorded video can be used
to review and investigate the incident after the fact.
3. Access Control
These methods are used to monitor and control the traffic through specific access points.
Access Control includes the following methods:
· Mechanical Access Control Systems: These include gates, doors, locks, etc.
· Electronic Access Control: These are used to monitor and control larger populations,
controlling for user life cycles, dates and individual access points.
· Identification Systems and access policies: These include the use of policies,
procedures, and processes to manage access into the restricted area.
4. Security Personnel
They play a central role in all layers of security. They perform many functions like:
· Responding to alarms.
Dumpster diving is the process of finding useful information about a person or business in
the trash that can later be used for hacking purposes. Although the information in the trash
is no longer useful to its owner, it can be valuable to the picker. To protect against it,
you need to follow certain measures:
· Ensure all important documents are shredded before disposal.
· Make sure that nobody can walk into your building and simply steal your garbage, and
have a safe disposal policy in place.
· Firewalls can be used to prevent suspicious users from accessing discarded digital data.
Lack of access control can be highly devastating if the wrong person gets in and gains
access to sensitive information. Fortunately, nowadays there are a number of modern tools
that will help you optimize your access control.
Network printers are a very convenient option, allowing anyone in the office to get
connected without the need for extra wiring. Unfortunately, they also carry underlying
security risks. Sometimes, due to default settings, they offer open WiFi access, allowing
anyone to get in and open vulnerabilities in the process.
Physical backups are critical for business continuity, helping you prevent data loss in the
event of disasters, outages, and more. Most businesses secure their servers but forget that
backups are equally important: they hold the same level of sensitive data as the servers.
Treat your backups as you treat your sensitive information and secure them.
Guest WiFi is a natural solution when you have guests or visitors as it isolates Guest WiFi
from your internal devices and data.
Any area in your organization that stores data needs to be secured. Lock doors and make
sure the server area gets extra protection.
As devices become more mobile, the chances of them being stolen or falling out of
someone's pocket increase. Mobile Device Management can help you manage such situations and
take the necessary precautions. The best solution in such cases is to lock down, and
potentially wipe, any lost or stolen organization devices remotely.
Finally, implementing video systems throughout the facility extends the detection and
deterrence layers described above.
What is a policy?
The definition of policy is a set of rules or guidelines for your organization and employees to
follow in order to achieve a specific goal (i.e. compliance).
Policies answer questions about what employees do and why they do it.
An effective policy should outline what employees must do or not do, directions, limits,
principles, and guidance for decision making.
What is a procedure?
A procedure is the counterpart to a policy; it is the instruction on how a policy is followed.
It is the step-by-step instruction for how the policies outlined above are to be achieved.
A policy defines a rule, and the procedure defines who is expected to do it and how they are
expected to do it.
RISK ANALYSIS:
Risk Assessment – Risk management is a recurrent activity; risk assessment, on the other
hand, is executed at discrete points in time and remains valid until the next assessment is
performed. Risk assessment is the process of evaluating known and postulated threats and
vulnerabilities to determine expected loss.
Risk assessment receives its input from the context-establishment phase, and its output is
the list of assessed risks, where risks are given priorities as per the risk evaluation
criteria.
Risk identification considers:
· assets
· threats
· existing and planned security measures
· vulnerabilities
· consequence
· related business processes
Its outputs include:
· list of assets and related business processes with associated lists of threats and
existing and planned security measures
· list of vulnerabilities unrelated to any identified threats
· list of incident scenarios with their consequences
ALE (Annualized Loss Expectancy) is the monetary loss that can be expected for an asset due
to a risk being realised over a one-year period.
Single Loss Expectancy (SLE) is the value of a single loss of the asset.
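These two definitions combine in the standard quantitative formulas SLE = asset value × exposure factor and ALE = SLE × ARO, where the ARO (annualized rate of occurrence) is how often the risk is expected to be realised per year. The figures below are invented for illustration:

```python
def single_loss_expectancy(asset_value, exposure_factor):
    """SLE: the value lost in a single realisation of the risk."""
    return asset_value * exposure_factor

def annualized_loss_expectancy(sle, aro):
    """ALE: expected loss for the asset over a one-year period."""
    return sle * aro

# Example: a $100,000 asset, 25% damaged per incident, twice a year.
sle = single_loss_expectancy(100_000, 0.25)   # 25,000
ale = annualized_loss_expectancy(sle, 2)      # 50,000
```

The ALE figure is what a quantitative assessment compares against the cost of a proposed control.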
Qualitative risk assessments typically give risk results of “High”, “Moderate” and “Low”.
Following are the steps in Qualitative Risk Assessment:
a. Vulnerability Scanners – This is software that compares the operating system or
code against a database of flaw signatures.
b. Penetration Testing – A human security analyst exercises threats against the
system, including operational vulnerabilities such as social engineering.
3. Relating Threats to Vulnerabilities: This is the most difficult and mandatory activity in
Risk Assessment. T-V pair list is established by reviewing the vulnerability list and pairing a
vulnerability with every threat that applies, then by reviewing the threat list and ensuring that
all the vulnerabilities that threat-action/threat can act against have been identified.
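The pairing procedure above can be sketched as follows (the threat and vulnerability lists, and the mapping of which threats apply, are invented for illustration):

```python
# Illustrative inputs to the T-V pairing step.
threats = ["phishing", "sql_injection"]
vulnerabilities = {
    "untrained staff": ["phishing"],
    "unsanitized input": ["sql_injection"],
}

def tv_pairs(threats, vulnerabilities):
    """Pair each vulnerability with every threat that can act against it."""
    pairs = []
    for vuln, applicable in vulnerabilities.items():
        for threat in threats:
            if threat in applicable:
                pairs.append((threat, vuln))
    return pairs
```

The second review pass described in the text (checking each threat against all vulnerabilities it can exploit) is what catches pairs missed when working from the vulnerability list alone.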
A) Assessing Risk: Assessing risk is the process to determine the likelihood of the
threat being exercised against the vulnerability and the resulting impact from a
successful compromise.
B) Risk Evaluation – The risk evaluation process receives as input the output of the risk
analysis process. It first compares each risk level against the risk acceptance criteria
and then prioritizes the risk list with risk treatment indications.
2. Risk Avoidance: This means eliminating the risk cause or consequence in order to
avoid the risk, for example shutting down the system if the risk is identified.
3. Risk Limitation: To limit the risk by implementing controls that minimize the adverse
impact of a threat’s exercising a vulnerability (e.g., use of supporting, preventive, detective
controls)
4. Risk Planning: To manage risk by developing a risk mitigation plan that prioritizes,
implements, and maintains controls
6. Risk Transference: This means transferring the risk to compensate for the loss, for
example by purchasing insurance, which guarantees at least some recovery from the loss,
though not 100% in all cases.
5. Risk Monitoring and Review – Security measures are regularly reviewed to ensure they
work as planned and that changes in the environment do not make them ineffective. With
major changes in the work environment, security measures should also be updated. Business
requirements, vulnerabilities, and threats can change over time. Regular audits should be
scheduled and conducted by an independent party.
Network firewalls sit at the front line of a network, acting as a communications liaison
between internal and external devices.
A network firewall can be configured so that any data entering or exiting the network has to
pass through it — it accomplishes this by examining each incoming message and rejecting
those that fail to meet the defined security criteria. When properly configured, a firewall
allows users to access any of the resources they need while simultaneously keeping out
unwanted users, hackers, viruses, worms or other malicious programs trying to access the
protected network.
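A toy sketch of that examine-and-reject behavior (the rule format is an assumption; real firewalls match on far more fields, such as source/destination address, protocol, and connection state):

```python
# First-match-wins rule table: allow web traffic, deny everything else.
RULES = [
    {"action": "allow", "port": 443},
    {"action": "allow", "port": 80},
    {"action": "deny",  "port": None},   # default: reject everything else
]

def filter_packet(packet):
    """Check each incoming message against the defined security criteria."""
    for rule in RULES:
        if rule["port"] is None or rule["port"] == packet["port"]:
            return rule["action"]
    return "deny"   # fail closed if no rule matched
```

The default-deny rule at the end is the key design choice: traffic that meets no explicit criterion is rejected rather than admitted.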
• Hardware firewalls: These firewalls are released either as standalone products for corporate
use, or more often, as a built-in component of a router or other networking device. They are
considered an essential part of any traditional security system and network configuration.
Hardware firewalls will almost always come with a minimum of four network ports that allow
connections to multiple systems. For larger networks, a more expansive networking firewall
solution is available.
• Software firewalls: These are installed on a computer, or provided by an OS or network
device manufacturer. They can be customized, and provide a smaller level of control over
functions and protection features. A software firewall can protect a system from standard
control and access attempts, but may have trouble with more sophisticated network breaches.
Firewall types
Firewalls are relied upon to secure home and corporate networks. A simple firewall program
or device will sift through all information passing through the network — this process can
also be customized depending on the needs of the user and the capabilities of the firewall.
There are a number of major firewall types that prevent harmful information from passing
through the network:
All of these network firewall types are useful for power users, and many firewalls will allow
for two or more of these techniques to be used in tandem with one another.
• If your network is connected to the internet, some types of malware find ways to divert
portions of your hardware's bandwidth for their own purposes.
• Some types of malware are designed to gain access to your network to use sensitive
information such as credit card info, bank account numbers or other proprietary data like
customer information.
• Other types of malware are designed to simply destroy data or bring networks down.
For full-spectrum security, firewalls should be placed between any network that has a
connection to the internet, and businesses should establish clear computer security plans, with
policies on external networks and data storage.
In the cloud era, network firewalls can do more than secure a network. They can also help
ensure that you have uninterrupted network availability and robust access to cloud-hosted
applications.
Transport layer firewalls
The Transport Layer is the second layer in the TCP/IP model and the fourth layer in the OSI
model. It is an end-to-end layer used to deliver messages to a host. It is termed an
end-to-end layer because it provides a point-to-point connection, rather than hop-to-hop,
between the source host and destination host to deliver its services reliably.
At the sender’s side: The transport layer receives data (message) from the Application layer
and then performs Segmentation, divides the actual message into segments, adds source and
destination’s port numbers into the header of the segment, and transfers the message to the
Network layer.
At the receiver’s side: The transport layer receives data from the Network layer, reassembles
the segmented data, reads its header, identifies the port number, and forwards the message to
the appropriate port in the Application layer.
While Data Link Layer requires the MAC address (48 bits address contained inside the
Network Interface Card of every host machine) of source-destination hosts to correctly
deliver a frame and the Network layer requires the IP address for appropriate routing of
packets, in a similar way Transport Layer requires a Port number to correctly deliver the
segments of data to the correct process amongst the multiple processes running on a
particular host. A port number is a 16-bit address used to identify any client-server program
uniquely.
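Because a port is just a 16-bit field, the source and destination ports sit in the first four bytes of a TCP or UDP header and can be unpacked directly (the example port values are arbitrary):

```python
import struct

def ports_from_header(header_bytes):
    """Extract the 16-bit source and destination ports from the first
    four bytes of a TCP/UDP header (network byte order, '!' = big-endian)."""
    src, dst = struct.unpack("!HH", header_bytes[:4])
    return src, dst

# e.g. a segment from an ephemeral port to HTTPS (port 443)
header = struct.pack("!HH", 52100, 443)
```

The 16-bit width is also why port numbers range from 0 to 65535.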
The transport layer is also responsible for creating the end-to-end connection between
hosts, for which it mainly uses TCP and UDP. TCP is a reliable, connection-oriented protocol
that uses a handshake to establish a robust connection between two end hosts. TCP ensures
in-order delivery of messages and is used in various applications. UDP, on the other hand,
is a stateless protocol that provides only best-effort delivery. It is suitable for
applications that have little need for flow or error control and that send bulk data, such
as video conferencing. It is often used in multicasting protocols.
Congestion Control:
Congestion is a situation in which too many sources on a network attempt to send data and
the router buffers start overflowing, causing packets to be lost. Retransmission of packets
from the sources then increases the congestion further. In this situation, the transport
layer provides congestion control in different ways: it uses open-loop congestion control
to prevent congestion and closed-loop congestion control to remove congestion from the
network once it has occurred.
Error control:
The transport layer checks for errors in the messages coming from the application layer by
using error-detection codes and computing checksums: it verifies that the received data is
not corrupted, uses ACK and NACK services to inform the sender whether the data has
arrived, and checks the integrity of the data.
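One common form of that checksum, the 16-bit ones'-complement Internet checksum used by TCP and UDP, can be sketched as follows (a simplified version; a real TCP checksum also covers a pseudo-header with the IP addresses):

```python
def internet_checksum(data: bytes) -> int:
    """16-bit ones'-complement sum of 16-bit words, then complemented.
    The receiver recomputes this to detect corrupted segments."""
    if len(data) % 2:
        data += b"\x00"                              # pad odd-length input
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]        # add each 16-bit word
        total = (total & 0xFFFF) + (total >> 16)     # fold carry back in
    return ~total & 0xFFFF
```

A useful property: summing the data together with its own checksum yields all ones, so the verification result is zero when the segment arrived intact.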
Flow control:
The transport layer provides a flow control mechanism between the adjacent layers of the
TCP/IP model. TCP also prevents data loss due to a fast sender and a slow receiver by
imposing flow control techniques. It uses a sliding-window protocol, in which the receiver
sends a window size back to the sender indicating how much data it can accept.
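A tiny simulation of that receiver-advertised window (a deliberate simplification: it assumes each chunk is acknowledged before the next is sent, and the window size never changes):

```python
def send_all(data: bytes, window: int):
    """Return the chunks a sender may put on the wire, never exceeding
    the window size advertised by the receiver."""
    sent = []
    offset = 0
    while offset < len(data):
        chunk = data[offset:offset + window]   # at most `window` bytes in flight
        sent.append(chunk)
        offset += len(chunk)                   # receiver ACKs; window slides
    return sent
```

In real TCP the receiver can shrink or grow the advertised window with each ACK, throttling a fast sender dynamically.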
In simple words, a Web Application Firewall acts as a shield between a web application and
the Internet. This shield protects the web application from different types of attacks.
Working of Web Application Firewall
· According to the OSI model, WAF is a protocol layer seven defense.
· When a WAF is deployed in front of a web application, a shield is created between the
web application and the Internet.
· The advantage of a WAF is that it functions independently from the application, yet it
can constantly adapt to changes in the application's behavior.
· The clients are passed through the WAF before reaching the server in order to protect
the server from exposure.
· WAF can be set to various levels of examinations, usually in a range from low to high,
which allows the WAF to provide a better level of security.
· Cloud-based WAFs are low-cost and have fewer resources to manage. A cloud-based solution is a good choice when an organization does not want to be restricted by performance capacity: service providers can supply an effectively unlimited hardware pool, although beyond a certain point the service fees may increase.
· The purpose of these policies is to protect against the vulnerabilities in the application
by filtering out malicious traffic.
· The value of a WAF comes, in part, from the speed and efficiency with which policy modifications can be implemented.
· Cross-Site Scripting (XSS) attacks target users of vulnerable web applications or websites in order to gain access to, and control of, their browsers.
· SQL Injection Attacks: malicious SQL code is injected, in the form of requests or queries, through the input boxes of the web application the user is using.
· Zero-day attacks are unexpected attacks: the organization learns of the vulnerability in its hardware or software only after the attack has taken place.
· Allowlist: A WAF based on an allow list only admits traffic that has been pre-
approved. This is like the college security guard who only admits people who are on the list.
Both blocklists and allowlists have their own advantages and disadvantages, which is why many WAFs offer a hybrid security model that implements both.
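The hybrid allowlist/blocklist idea can be sketched as a toy filter. The IP ranges and attack signatures below are made-up examples, not real WAF rules: pre-approved sources pass immediately, while all other traffic is screened against known-bad patterns.

```python
import ipaddress

ALLOWLIST = [ipaddress.ip_network("10.0.0.0/8")]    # pre-approved sources (example range)
BLOCKLIST_SIGNATURES = ["<script>", "' or 1=1"]     # known-bad payload patterns (examples)

def waf_decision(source_ip: str, payload: str) -> str:
    """Return 'allow' or 'block' for one request, hybrid-model style."""
    ip = ipaddress.ip_address(source_ip)
    # Allowlist half: trusted sources are admitted, like the guard's guest list.
    if any(ip in net for net in ALLOWLIST):
        return "allow"
    # Blocklist half: everyone else is screened for known attack patterns.
    if any(sig in payload.lower() for sig in BLOCKLIST_SIGNATURES):
        return "block"
    return "allow"

print(waf_decision("203.0.113.9", "name=<script>alert(1)</script>"))  # → block
print(waf_decision("10.1.2.3", "name=alice"))                         # → allow
```

A real WAF inspects full HTTP requests and uses far richer rule sets, but the allow-first, screen-the-rest structure is the same.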
· Security breaches
· Natural disasters
· Power outages
· Equipment failures
· Sudden staff departure
Why business continuity is important
Leading organizations make business continuity a top priority because maintaining critical
functions after an emergency or disruption can be the difference between the success and
failure of a business. If key business capabilities fail, a quick recovery time to bring systems
back up is crucial. Getting a business continuity strategy in place before disaster hits can save
a tremendous amount of time and money. The plan for recovery needs to include roles and
responsibilities, as well as which systems need to be recovered in which order. There are
many aspects of business continuity to consider and test, which is another reason to plan
ahead. For instance, large data sets can take an excruciatingly long time to restore from a
backup, so failover to a remote data center might be a better solution for businesses with a
large amount of data.
When resiliency and recovery plans fail, or when an unforeseen event occurs, a contingency
plan can act as a last resort. A contingency plan includes a practiced strategy and plan for
last-resort needs. These needs could range from asking third-party vendors for help to finding
a second location for emergency office space or remote back-up servers.
· Resiliency
· Recovery
· Contingency
There are many international standards and policies to guide the development of disaster
recovery and business continuity plans.
· Back-up: Backing up data is one of the simplest ways to ensure business continuity.
Storing data off site or on a remote drive provides some business continuity, but other tools
are needed to back up the IT infrastructure and keep it functioning in the event of a disaster.
· Cold Site: Businesses can set up a basic infrastructure in a second facility known as a
cold site, where employees can work after a natural disaster or fire. A cold site can help
business operations to continue, but it must be combined with other methods of disaster
recovery that protect or enable recovery of important data.
· Hot Site: A hot site is a second business location that, like a cold site, lets employees keep working, but it also maintains an up-to-date copy of data at all times. Hot sites dramatically reduce downtime, but they are more expensive than cold sites and more time-consuming to set up.
· Physical Tools: Physical disaster recovery tools can mitigate the effects of certain types of disasters, though not cyber attacks. Physical elements that support business continuity include fire suppression tools, which help data and computer equipment survive a fire, and a back-up power source that carries businesses through short-term power outages.
2. Risk evaluation: Assess potential hazards that put your organization at risk.
Depending on the type of event, strategize what measures and resources will be needed to
resume business. For example, in the event of a cyber attack, what data protection measures
will the recovery team have in place to respond?
4. Backups: Determine what needs backup (or to be relocated), who should perform
backups, and how backups will be implemented. Include a recovery point objective (RPO)
that states the frequency of backups and a recovery time objective (RTO) that defines the
maximum amount of downtime allowable after a disaster. These metrics create limits to guide
the choice of IT strategy, processes and procedures that make up an organization’s disaster
recovery plan. The amount of downtime an organization can handle and how frequently the
organization backs up its data will inform the disaster recovery strategy.
5. Testing and optimization: The recovery team should continually test and update its
strategy to address ever-evolving threats and business needs. By continually ensuring that a
company is ready to face the worst-case scenarios in disaster situations, it can successfully
navigate such challenges. In planning how to respond to a cyber attack, for example, it’s
important that organizations continually test and optimize their security and data protection
strategies and have protective measures in place to detect potential security breaches.
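The RPO/RTO limits described in step 4 lend themselves to a simple programmatic check. The numbers below are hypothetical targets, not recommendations; the function name is illustrative.

```python
RPO_HOURS = 4    # hypothetical target: at most 4 hours of data may be lost
RTO_HOURS = 2    # hypothetical target: systems must be restored within 2 hours

def meets_objectives(backup_interval_hours: float, restore_time_hours: float) -> bool:
    """A backup plan meets its objectives when backups run at least every
    RPO hours (bounding data loss) and restores finish within RTO hours."""
    return backup_interval_hours <= RPO_HOURS and restore_time_hours <= RTO_HOURS

print(meets_objectives(backup_interval_hours=6, restore_time_hours=1))    # → False (too much potential data loss)
print(meets_objectives(backup_interval_hours=3, restore_time_hours=1.5))  # → True
```

Checks like this make it easy to see when a growing data set pushes restore times past the RTO, which is exactly the situation where failover to a remote data center becomes the better option.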
· Back-up: This is the simplest type of disaster recovery and entails storing data off site
or on a removable drive. However, just backing up data provides only minimal business
continuity help, as the IT infrastructure itself is not backed up.
· Hot Site: A hot site maintains up-to-date copies of data at all times. Hot sites are time-consuming to set up and more expensive than cold sites, but they dramatically reduce downtime.
· Back Up as a Service: Similar to backing up data at a remote location, with Back Up
as a Service, a third party provider backs up an organization’s data, but not its IT
infrastructure.
· Data center disaster recovery: The physical elements of a data center can protect data
and contribute to faster disaster recovery in certain types of disasters. For instance, fire
suppression tools will help data and computer equipment survive a fire. A backup power
source will help businesses sail through power outages without grinding operations to a halt.
Of course, none of these physical disaster recovery tools will help in the event of a cyber
attack.
It helps any organization safeguard itself from cyber attacks by identifying the loopholes in
advance. Here are some threats that we can prevent if we use vulnerability assessment.
Vulnerability assessments can be of different types depending on the need and type of a
system.
· Host Vulnerability Assessment: Applications and information systems often rely on servers at the backend. Many attackers use these servers to inject threats into the system, so it is important to test servers and review them for vulnerabilities.
• The frontend
• The backend
Both of these parts have their own source code which must be statically as well as
dynamically analyzed for possible vulnerabilities. This assessment is often done through
automated scans of the source code.
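An automated scan of source code can be sketched very roughly as pattern matching over each line. The patterns below are illustrative examples only, far simpler than real analyzers such as Bandit or SonarQube.

```python
import re

# Illustrative risky patterns: real scanners use large, curated rule sets.
RISKY_PATTERNS = {
    r"\beval\(": "use of eval() on possibly untrusted input",
    r"execute\(.*%s.*%": "SQL built by string formatting (injection risk)",
    r"password\s*=\s*['\"]": "hard-coded credential",
}

def scan_source(source: str):
    """Return (line_number, finding) pairs for lines matching a risky pattern."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, message in RISKY_PATTERNS.items():
            if re.search(pattern, line):
                findings.append((lineno, message))
    return findings

sample = 'user = input()\ncursor.execute("SELECT * FROM t WHERE n=%s" % user)\n'
print(scan_source(sample))  # flags line 2 as an injection risk
```

Static scans like this inspect the code without running it; the dynamic half of the analysis exercises the running application to confirm which findings are actually exploitable.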
The process of Vulnerability Assessment is divided into four stages. Let us discuss them one
by one.
· Analysis: The first step produces a list of vulnerabilities, which must then be analyzed in detail. The goal of this analysis is to identify where things went wrong so that rectification can be done easily; this step aims at finding the root cause of each vulnerability.
· Rectification: Once we have a clear layout of the risks, their root causes, and their severity, we can start making corrections in the system. This fourth step aims at closing the gaps in security by introducing new security tools and measures.
For example, POS applications, banking applications, etc., have a high chance of attracting malicious attacks because they deal with money. These applications must go through vulnerability testing to ensure they are safe to use and protect customers' confidential data.
There are various tools and techniques available for vulnerability testing; some of them are Intruder, Acunetix, Nessus, etc. Vulnerabilities fall into the following types:
1. Data-based.
2. Host-based.
3. Network-based.
· Design architecture.
· Test failure.
· Security: To make a system more secure and reliable, so that there is no unauthorized access and no hacker attack. Vulnerability testing examines the system to identify its security loopholes and reduces them by referring them to the concerned development team.
· Design issues: In vulnerability testing, the operating system, application software, and
network are scanned to identify the security leakage that helps in identifying the drawbacks in
designing the application and helps a developer to know the vulnerable areas and redesign
them.
· Prioritize the security issues: Vulnerability testing identifies the insecure design
issues and helps the developer to prioritize them as per severity.
· Password strengthening: The password is the most important security option; testers validate that passwords are strong enough not to be cracked by attackers.
Vulnerability scanners are automated tools that scan all IT assets on the network to disclose vulnerable areas. These tools may be paid or freely available. There are five types of vulnerability scanners:
1. Host-based: A host is a web server that connects and communicates with other servers on the internet. A host-based scanner identifies vulnerabilities in the workstation, OS platform, and other related areas. It also determines the level of damage caused to the system by unauthorized access and helps resolve the detected damage.
2. Network-based: It identifies the possible vulnerable areas over the network as the
application interacts with the internet to provide services to users. It tries to identify security
attacks on wired or wireless networks by scanning the application on the network. It scans all
devices and software working over the network to identify security loopholes.
5. Wireless-based: Wireless scanners scan the ports and identify security issues in an application's network. After identifying the security weak points, the scanner reports them to the team, and the developers strengthen security using encryption or other means.
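A network-based scanner of the kind described above can be reduced, at its core, to a minimal TCP connect scan. The host and ports below are placeholders, and such scans should only ever be run against systems you are authorized to test.

```python
import socket

def scan_ports(host: str, ports, timeout: float = 0.5):
    """Return the subset of `ports` on `host` that accept a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising an exception
            if sock.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    # Placeholder target: scan a few common ports on the local machine.
    print(scan_ports("127.0.0.1", [22, 80, 443, 8080]))
```

Real network scanners add service fingerprinting and vulnerability matching on top of this basic reachability check, but an open port is always the starting point.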
Tools for Vulnerability Testing
1. Intruder: It aims to find security weaknesses before any hacker does. It is an online vulnerability scanner that identifies the security drawbacks of an application. It is a paid scanner and provides a free demo. Its features are:
· Alerts you when new ports are opened or new changes are made to an application.
2. Acunetix: It is a vulnerability scanner for websites, web applications, and APIs. It is a paid
scanner and you can use its demo version to know more about it. Features of Acunetix are:
· It is automated and can detect around 7,000 vulnerabilities across a wide range of vulnerable areas.
3. Frontline: A popular network vulnerability scanner with a 4.5 rating. Along with finding vulnerable areas, it also suggests remedies. The features of Frontline are:
· It is user-friendly.
4. Nexus: A highly in-demand vulnerability scanner with around 2 million downloads. It is a freely available scanner developed by Sonatype to identify security loopholes. Some of its features are:
5. Nessus: It is freely available for non-enterprise use, with a minimal charge for enterprise use, and is sold by Tenable Security. It alerts the testing team when it finds vulnerable areas and provides mitigation measures. Some of its features are: