Department of ICT
In this article we discuss five endpoint threats, how they impact organizations, and how IT security departments can deal with them.
1. Phishing Attacks
How do you ensure proper endpoint protection against phishing attacks?
The Technical Solution
A technical solution would include subscribing to services that provide threat intelligence and research on phishing IP addresses and web pages; examples of such services include Cymon and Firehol. Some organizations may decide to implement machine learning, acquiring datasets from websites such as PhishTank and Alexa, processing the raw data, and extracting meaningful features that identify fraudulent domains. These datasets can then be used to train the company's models with well-established algorithms such as decision trees.
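To make the decision-tree idea concrete, here is a minimal sketch assuming a labeled URL dataset and an available scikit-learn installation; the lexical features, toy URLs, and labels below are illustrative stand-ins for real PhishTank/Alexa data, not a production feature set.

```python
# Minimal sketch: classifying URLs as phishing/legitimate with a decision tree.
# Features and training examples are hypothetical stand-ins for real datasets
# such as PhishTank (phishing) and Alexa (legitimate).
from sklearn.tree import DecisionTreeClassifier

def url_features(url: str) -> list:
    """Extract simple lexical features from a URL."""
    return [
        len(url),                       # long URLs are more often phishing
        url.count("."),                 # many subdomains can be suspicious
        int("@" in url),                # '@' can hide the real host
        int(url.startswith("https")),   # lack of HTTPS is a weak signal
        sum(c.isdigit() for c in url),  # digit-heavy URLs are suspicious
    ]

# Toy labeled data: 1 = phishing, 0 = legitimate (hypothetical examples).
urls = [
    "http://paypa1-secure-login.example.ru/verify?id=8812",
    "http://192.168.3.7/account@update/confirm",
    "https://www.wikipedia.org/",
    "https://github.com/openssl/openssl",
]
labels = [1, 1, 0, 0]

clf = DecisionTreeClassifier(max_depth=3, random_state=0)
clf.fit([url_features(u) for u in urls], labels)

print(clf.predict([url_features("http://secure-bank-login.example.tk/9931")]))
```

A real deployment would train on many thousands of labeled URLs and validate on held-out data, but the pipeline shape (extract features, fit, predict) is the same.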
Phishing attacks can also be stopped by analyzing inbound traffic for malware and sandboxing inbound email, deploying endpoint security solutions that check the safety of messages when users click to read them.
The Educational Solution
Organizations that perform regular awareness training are in a better position to detect
phishing attacks than those that do not. You could train your employees using solutions
such as SecurityIQ, which allows you to execute a test phishing campaign. Conducting
such training ensures that user behavior within the company remains in line with policy
and that employees understand what to do when they encounter malicious emails.
2. Unpatched Vulnerabilities
Think, for example, of the damage that followed the recent British Airways data breach
in which hackers walked away with customers’ financial data. The hackers that
perpetrated this attack now have a range of options, from making fake cards to making
online purchases with the victims’ money. Think of all the negative implications this could have, not only for British Airways but for your company as well. Such attacks can be controlled by having the right endpoint security in place.
Organizations will also find it important to have regular assessments conducted within
their environments and infrastructure in order to detect unpatched vulnerabilities
before they are exploited. This can be done by conducting penetration testing,
vulnerability assessments, source code reviews and red team assessments.
3. Malvertising
Once a company’s endpoint security has been breached, malvertising reduces productivity. Employees who constantly see adverts popping up on their screens during work hours will have their productivity affected by unnecessary redirection and distracting content.
A search engine can also flag your website for distributing malicious software. If this happens, you can file an appeal only after removing the affected ads.
4. Drive-By Downloads
1. Using updated software: Up-to-date software ensures that you are protected against the vulnerabilities that permit drive-by downloads within your systems. This is effective in thwarting some drive-by download attacks.
2. Removing unnecessary plugins: Some software is no longer supported after a while, meaning it is no longer updated against the latest threats. Removing software that is no longer supported will go a long way toward improving your endpoint security and preventing potential attacks.
3. Installing an ad blocker: Most drive-by downloads propagate by means of infected ads. An ad blocker keeps you from being redirected to sites that host drive-by malware, ultimately protecting you from such attacks.
4. Installing a host-based firewall: Host-based firewalls will help detect malicious
links where infections reside and block you and your employees from accessing
the sites. A good host-based firewall is Comodo Firewall by Comodo.
Other good data protection practices include:
• Securing sensitive data in locked, secure storage that is safe from theft and cyberattacks. Your company will be able to salvage sensitive information even after a security breach (as long as the secured data is intact).
• Maintaining backups of your data. This is especially effective against ransomware attacks, since paying the ransom is discouraged.
• Properly disposing of outdated data and information. If outdated data is obtained by hackers, it may still have the same undesired effect on your customers.
• Securely accessing your data through encrypted media, which is a step toward thwarting hackers.
Malware is intrusive software that is designed to damage and destroy computers and computer systems. Malware is a contraction of “malicious software.” Examples of common malware include viruses, worms, Trojans, spyware, adware, and ransomware.
Types of malware
Virus
Viruses are a subgroup of malware. A virus is malicious software attached to a
document or file that supports macros to execute its code and spread from host to
host. Once downloaded, the virus will lay dormant until the file is opened and in use.
Viruses are designed to disrupt a system’s ability to operate. As a result, viruses can
cause significant operational issues and data loss.
Worms
Worms are malicious software that rapidly replicates and spreads to any device
within the network. Unlike viruses, worms do not need host programs to
disseminate. A worm infects a device via a downloaded file or a network connection
before it multiplies and disperses at an exponential rate. Like viruses, worms can
severely disrupt the operations of a device and cause data loss.
Trojan virus
Trojan viruses are disguised as helpful software programs. But once the user downloads one, the Trojan virus can gain access to sensitive data and then modify,
block, or delete the data. This can be extremely harmful to the performance of the
device. Unlike normal viruses and worms, Trojan viruses are not designed to self-
replicate.
Spyware
Spyware is malicious software that runs secretly on a computer and reports back to a
remote user. Rather than simply disrupting a device’s operations, spyware targets
sensitive information and can grant remote access to predators. Spyware is often
used to steal financial or personal information. A specific type of spyware is a
keylogger, which records your keystrokes to reveal passwords and personal
information.
Adware
Adware is malicious software used to collect data on your computer usage and
provide appropriate advertisements to you. While adware is not always dangerous, in
some cases adware can cause issues for your system. Adware can redirect your
browser to unsafe sites, and it can even contain Trojan horses and spyware.
Additionally, significant levels of adware can slow down your system noticeably.
Because not all adware is malicious, it is important to have protection that constantly
and intelligently scans these programs.
Ransomware
Ransomware is malicious software that gains access to sensitive information within a
system, encrypts that information so that the user cannot access it, and then
demands a financial payout for the data to be released. Ransomware is commonly
part of a phishing scam. By clicking a disguised link, the user downloads the
ransomware. The attacker proceeds to encrypt specific information that can only be
opened by a mathematical key they know. When the attacker receives payment, the
data is unlocked.
Fileless malware
Fileless malware is a type of memory-resident malware. As the term suggests, it is
malware that operates from a victim’s computer’s memory, not from files on the
hard drive. Because there are no files to scan, it is harder to detect than traditional
malware. It also makes forensics more difficult because the malware disappears
when the victim computer is rebooted. In late 2017, the Cisco Talos threat
intelligence team posted an example of fileless malware that they called
DNSMessenger.
To put it simply, endpoint security management is an issue because laptops and other
wireless devices serve as potential entry points to the network, but are typically not
equipped with adequate security measures. They tend to be exposed to more risks than
a regular workstation, but face lower IT standards due to their nature as mobile,
temporarily connected devices.
This makes endpoints appealing to hackers as easy targets for many types of malware. If
these devices have full access to the internal network, it’s all too easy for threats to
spread throughout the business. In addition, because they are mobile, it’s possible that
the devices—and the data they have access to—could easily fall into the wrong hands.
MSPs need to implement tools that provide comprehensive management solutions for
these endpoints. Helping ensure endpoint security and adequate network protection
includes:
Patches and updates: It can be difficult to enforce software updates across the
network, let alone enforce updates on endpoints. There must be a process in
place to ensure that endpoint users aren’t using insecure or out-of-date versions
of applications. You can also consider whitelisting certain applications and not
others.
Device policies: Policies are coded rules that allow you to specify and control how endpoints connect to the network. These policies will ideally be standard for mobile devices across the network, and endpoints must prove compliance before they are granted network access (a minimal compliance check is sketched after this list).
Access and control: Network access control is a crucial method for protecting
your network and helping ensure no unauthorized devices are given access. This
can mean that users must enter a username and password to gain entry. You can
also restrict access to network data, control user behavior (by blocking USB use
or file access, for instance), and implement specific anti-threat initiatives like
antivirus software. This is especially important for managing guest devices.
Threat detection: There are a number of reasons to check endpoints for threats.
Most importantly, you want to make sure threats don’t spread from these
devices to your internal network. But endpoints are also rich sources of threat
data you can use to improve network protection more generally.
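As promised under "device policies" above, here is a minimal sketch of a coded device policy check; the attribute names, thresholds, and policy format are hypothetical rather than drawn from any particular NAC or MDM product.

```python
# Hypothetical endpoint compliance check against a coded device policy.
# Attribute names and thresholds are illustrative, not from any product.
POLICY = {
    "min_os_version": (10, 0),
    "disk_encryption_required": True,
    "antivirus_required": True,
}

def is_compliant(device: dict) -> bool:
    """Return True only if the endpoint satisfies every policy rule."""
    if tuple(device["os_version"]) < POLICY["min_os_version"]:
        return False
    if POLICY["disk_encryption_required"] and not device["disk_encrypted"]:
        return False
    if POLICY["antivirus_required"] and not device["antivirus_running"]:
        return False
    return True

laptop = {"os_version": (10, 3), "disk_encrypted": True, "antivirus_running": False}
print(is_compliant(laptop))  # False: antivirus not running, so no network access
```

In practice a check like this runs in an agent before the device is admitted to the network, which is exactly the "prove compliance before access" idea described above.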
The term endpoint detection was coined by Anton Chuvakin of Gartner, who in 2013
decided that “endpoint threat detection and response,” otherwise known as endpoint
detection and response or EDR, was an appropriate name for the emerging problem of
detecting suspicious activities on endpoints. Since then, EDR has become a popular
concept for professionals seeking to protect networks and minimize the risk that
endpoints continue to pose.
The purpose of EDR is to gain insights into the threats that could occur or have already
occurred. That means MSPs detect potential or existing threats and take appropriate
measures to prevent attacks or mitigate harm. Of course, this requires high-quality
monitoring of endpoint systems and how they are used. But to effectively protect a
customer’s network, simply looking for endpoint threats is not enough. The additional
capabilities that MSPs need for effective endpoint detection include:
Endpoint visibility means having meaningful insight into all managed devices. MSPs are
already tasked with collecting data across challenging environments like cloud platforms
and virtual machines, but it’s also important to collect data from endpoints like mobile
phones and laptops. By gathering and centralizing the right kind of data about individual
endpoints, MSPs can quickly answer key visibility questions that help ensure overall network security, such as which devices are connected, what software and versions they are running, and whether each device complies with policy.
Essentially, endpoint monitoring is about tracking activity and risks on all the mobile
devices that join your network. The term describes the ongoing, continuous process of
managing a dynamic array of endpoints on a business network. For that, you need
endpoint visibility and access, as well as the ability to detect (and automatically address)
threats. Information can be gathered in a central database to help ensure further
analysis, comparison, reporting, and alerting.
As an MSP, your customers expect that you’ll be able to keep them protected from the
rising number of security threats in today’s digital era. With RMM, the endpoint
detection and response feature allows even the busiest MSPs to stay ahead of potential
threats to offer customers effective endpoint protection. The tool lets you create
custom policies to manage endpoint agents and constantly analyzes files to detect
threats. Behavioral AI engines power data point analysis, meaning RMM is well-
equipped to help protect against ransomware, zero-day attacks, and the evolving threat
landscape. And when threats do occur, you don’t have to be online to take care of them
—the tool responds with automatic rollbacks (and sends an instant alert to keep you
informed).
A mature software asset management (SAM) program includes:
• A software procurement process that aligns solutions with core business needs.
• Evaluation / audit of how the deployed solution is used, by whom, and whether it is effective.
• Optimizing licensing based on use and necessity.
• Monitoring for and administering software updates.
• Retiring outdated solutions.
One of the key indicators of a mature SAM program is that it moves from being reactive to predictive.
Once a SAM Maturity Assessment has been completed, you can use a SAM maturity model to better understand your place and where you need to go. Gartner has developed a SAM Maturity model of their own that many have chosen to use, including us here at SoftwareONE. These are the five components within this model:
#1 – Chaotic
When an organization has chaotic SAM, they are dealing with audits on an ad-hoc basis.
They’re often dealing with multiple help desks, undocumented assets, and minimal IT
operations.
#2 – Reactive
Once an organization has reached a reactive level, they are working with a step up from the bare minimum. They do not have much control over which of their IT assets are being used and where, and taking care of an audit feels more like putting out a fire. These
organizations typically lack or have incomplete policies, procedures, resources, and
tools.
#3 – Proactive
At this point, an organization would be tracking assets and able to analyze trends. The
organization has moved from reacting to audits to predicting them. Mature problem, configuration, change, asset, and performance management is fully in place.
#4 – Service
This is the stage at which an organization should begin working with IT as a service provider. Tools,
procedures, and policies are being utilized daily to manage the software asset life-cycle.
An organization will be able to define services, classes, and other pricing.
#5 – Value
An organization is fully optimized at this level of maturity and partnering with IT as a
strategic business partner. An organization would now have enough accurate
information to manage assets to business targets. Alignment can be done in nearly real
time, so a program can adapt to changing business needs.
The biggest thing to remember here is that SAM Maturity is about continued
development. You must design an actionable roadmap that is easy to carry out and easy
to maintain. Otherwise, you will not find your organization crossing the threshold from
reactive to predictive. To make sure your plan is doable, set smaller goals that can be completed in a shorter amount of time, and use them to establish where you can gain control.
Common authentication techniques include:
a. Passwords.
b. Personal identification numbers.
c. Shared secrets.
d. Digital certificates and signatures.
e. Smart cards and tokens.
f. Biometrics.
g. Strong authentication.
Multi-factor authentication
Multi-factor authentication is, in contrast, when several factors are required to perform a successful identification. The most widespread methods are RFID+password, RFID/password+biometrics (also called 1:1 verification), or multiple biometric factors. The number of possible combinations is rather high. Multi-factor authentication can give higher security levels even with individually lower-quality methods (e.g. a simple password combined with biometrics is stronger than a very hard-to-guess password alone), as people with malicious intent have to go the extra mile to obtain all the information and/or samples before attempting to spoof the system. If we consider this
further, the level of security is determined by how difficult it is to obtain the hardest-to-
obtain factor. This means that if, for example, a system uses a PIN code and a vein
pattern, both have to be acquired for a successful identification. Alone, neither is
enough to produce a successful identification, so that is why the hardest factor
determines the overall security (of course, only from this standpoint - if the IT
background or the devices themselves are vulnerable, that will adversely affect the
whole system, but that is another question). And, as you might have suspected, there is
a tradeoff between security and throughput: the more factors that are required, the slower it will be to pass through an access point with the particular configuration. It will also require more user cooperation, which means that aside from the cases where the individual voluntarily starts to use multiple factors for his/her own benefit, companies will have to push users into multi-factor authentication. So use
multi-factor authentication where security is more important than throughput (or user
experience, for that matter).
A special case of multi-factor authentication is when two or more biometric features are
used to perform identification. Here, the lines that separate the pros and cons of single-
and multi-factor authentication start to get blurred. There are features that can be checked within the same process, at the same time (e.g. fingerprints and finger veins -
or palm veins - depending on the configuration), which gives the process speed akin to
single factor authentication while retaining the security level of multi-factor
authentication. Extending this idea, if a biometric factor needs cooperation, and during
that a different factor can be examined with little to no further cooperation (e.g. palm
veins and face recognition together), identification will be almost as convenient as with
their single factor counterparts. Note that this case might be considered a single factor method by some, as the multiple factors are of the same general type (biometric, that is). This is really on the edge of both realms, drawing the positive aspects from both while trying to mitigate the negative ones.
Authorization Principles
Access must be granted based on personnel roles and the security principles of
clearance, need to know, separation of duties, and least privilege.
Clearances
For personnel without appropriate clearances or background investigations, access is
restricted to temporary information services. Managers must use eAccess to request
access authorization for individuals who do not have the appropriate clearance and are
responsible for the access activities of those individuals.
Need to Know
For sensitive-enhanced, sensitive, and critical information resources access must be
limited in a manner that is sufficient to support approved business functions. Access to
sensitive-enhanced and sensitive Postal Service information resources must be limited
to personnel who need to know the information to perform their duties.
Separation of Duties
Only authorized personnel are approved for access to Postal Service information
resources. This approval must be specific to an individual’s roles and responsibilities in
the performance of his or her duties and must specify the type of access (e.g., read,
write, delete, and execute); specific resources and information; and time periods for
which the approval is valid. Separation of duties and responsibilities are considered
when defining roles. For special situations where additional control is required, dual
authorization can be implemented.
Least Privilege
Access must be limited to the minimum information resources and privileges that personnel need to perform their assigned duties.
Identity Defined
A good electronic identity is something that is verifiable and difficult to reproduce. It must also be easy to use: an identity that is difficult to use, along with any related service or application, simply goes unused.
On the other end of the identity effectiveness spectrum might be a solution that
provides nearly 100% probability of a subject’s identity but is frustrating and close to
unusable. For example, a combination of a personal certificate, token, password, and a
voice print to access a financial application is a waste of resources and a path to security
team unemployment. Identity verification process cost and complexity should mirror
the risk associated with unauthorized access and still make sense at the completion of a
cost-benefit analysis.
Effective and Reasonable Identity Solution Characteristics
Identity verification, like any other control, is stronger when supported by other
controls. For example, risk of account ID and password access is mitigated by strong
enforcement of separation of duties, least privilege, and need-to-know. Depending on
the data involved, this might be enough. For more restricted data classifications, we can
use a little probability theory to demonstrate the effectiveness of layered controls. First,
however, let us take a look at one of the most common multi-layer solutions: multi-
factor authentication.
Multi-factor Authentication (MFA)
MFA uses two of three dimensions, or factors: something you know (e.g., a password), something you have (e.g., a token or smart card), and something you are (e.g., a fingerprint or voice print).
Alex’s security director decided to take a middle path. The director believes strong
passwords cause more problems than they prevent: a view supported by business
management. He also believes that lowering biometrics false rejection rates is necessary
to maintain employee acceptance and maintain productivity levels. Instead of using only
one less than optimum authentication factor, he decided to layer two: passwords (something Alex knows) and biometrics (something Alex is).
The probability of someone masquerading as Alex is very low. We can model this by
applying probability theory to our example. As you might recall from our discussion of
attack tree analysis, when two conditions must exist in order to achieve a desired state,
we multiply the probability of one condition with that of the other. In this case, the
desired condition is access to the patient database. The two conditions are knowledge
of Alex’s password and counterfeiting her fingerprint. Consequently, the probability of
an unauthorized person accessing the database as Alex is (.30 x .20) = .06, or six percent.
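The same arithmetic extends to any number of layered controls. The sketch below simply restates the calculation from the example; the independence of the factors is an assumption carried over from the attack-tree discussion.

```python
# Probability that an attacker defeats ALL independent layered controls:
# multiply the per-control success probabilities (an attack-tree AND node).
def breach_probability(*p_controls: float) -> float:
    result = 1.0
    for p in p_controls:
        result *= p
    return result

# Example from the text: guessing Alex's password (0.30) AND
# counterfeiting her fingerprint (0.20).
print(breach_probability(0.30, 0.20))  # 0.06, i.e., six percent
```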
AUTHENTICATOR MANAGEMENT
End user authenticator management is the responsibility of the deployer. Intra-platform authenticator management is the responsibility of BOSH, Ops Manager, and CredHub. Rotating intra-system authenticators in PAS is a supported procedure. It is not yet fully automated, but may be accomplished through manual intervention. It is a deployer responsibility to align organizational policy and operational procedures to supplement native PCF capabilities if and as needed.
Validation of end user PKI credentials is delegated to the enterprise IdM.
Validation of intra-platform PKI credentials uses the deployer-configured CA trust chain.
However, there is no OCSP or CRL checking for intra-platform PKI credentials.
The strategy for avoiding reliance on a compromised credential is based upon frequent rotation of short-lived credentials.
Control Description
The organization manages information system authenticators by:
Verifying, as part of the initial authenticator distribution, the identity of the individual,
group, role, or device receiving the authenticator;
Establishing initial authenticator content for authenticators defined by the organization;
Ensuring that authenticators have sufficient strength of mechanism for their intended
use;
Establishing and implementing administrative procedures for initial authenticator
distribution, for lost/compromised or damaged authenticators, and for revoking
authenticators;
Changing default content of authenticators prior to information system installation;
Establishing minimum and maximum lifetime restrictions and reuse conditions for
authenticators;
Changing/refreshing authenticators [Assignment: organization-defined time period by
authenticator type];
Protecting authenticator content from unauthorized disclosure and modification;
Requiring individuals to take, and having devices implement, specific security safeguards
to protect authenticators; and
Changing authenticators for group/role accounts when membership to those
accounts changes.
b. What are the different key considerations required for an access control schema?
What is access control?
Access control is a method of guaranteeing that users are who they say they are and
that they have the appropriate access to company data.
At a high level, access control is a selective restriction of access to data. It consists of
two main components: authentication and authorization, says Daniel Crowley, head of
research for IBM’s X-Force Red, which focuses on data security.
Authentication is a technique used to verify that someone is who they claim to be.
Authentication isn’t sufficient by itself to protect data, Crowley notes. What’s needed is
an additional layer, authorization, which determines whether a user should be allowed
to access the data or make the transaction they’re attempting.
Without authentication and authorization, there is no data security, Crowley says. “In
every data breach, access controls are among the first policies investigated,” notes Ted
Wagner, CISO at SAP National Security Services, Inc. “Whether it be the inadvertent
exposure of sensitive data improperly secured by an end user or the Equifax breach,
where sensitive data was exposed through a public-facing web server operating with a
software vulnerability, access controls are a key component. When not properly
implemented or maintained, the result can be catastrophic.”
Any organization whose employees connect to the internet—in other words, every
organization today—needs some level of access control in place. “That’s especially true
of businesses with employees who work out of the office and require access to the
company data resources and services,” says Avi Chesla, CEO of cybersecurity firm
empow.
Put another way: If your data could be of any value to someone without proper
authorization to access it, then your organization needs strong access control, Crowley
says.
Another reason for strong access control: Access mining
The collection and selling of access descriptors on the dark web is a growing problem.
For example, a new report from Carbon Black describes how one cryptomining botnet,
Smominru, mined not only cryptocurrency, but also sensitive information including
internal IP addresses, domain information, usernames and passwords. The Carbon Black
researchers believe it is "highly plausible" that this threat actor sold this information on
an "access marketplace" to others who could then launch their own attacks by remote
access.
These access marketplaces "provide a quick and easy way for cybercriminals to purchase
access to systems and organizations.... These systems can be used as zombies in large-
scale attacks or as an entry point to a targeted attack," said the report's authors. One
access marketplace, Ultimate Anonymity Services (UAS), offers 35,000 credentials with
an average selling price of $6.75 per credential.
The Carbon Black researchers believe cybercriminals will increase their use of access
marketplaces and access mining because they can be "highly lucrative" for them. The
risk to an organization goes up if its compromised user credentials have higher privileges
than needed.
“Adding to the risk is that access is available to an increasingly large range of devices,”
Chesla says, including PCs, laptops, smart phones, tablets, smart speakers and other
internet of things (IoT) devices. “That diversity makes it a real challenge to create and
secure persistency in access policies.”
In the past, access control methodologies were often static. “Today, network access
must be dynamic and fluid, supporting identity and application-based use cases,” Chesla
says.
Enterprises must assure that their access control technologies “are supported
consistently through their cloud assets and applications, and that they can be smoothly
migrated into virtual environments such as private clouds,” Chesla advises. “Access
control rules must change based on risk factor, which means that organizations must
deploy security analytics layers using AI and machine learning that sit on top of the
existing network and security configuration. They also need to identify threats in real-
time and automate the access control rules accordingly.”
7. Explain the following access control mechanisms with suitable examples:
a. Rule based access control b. Role based access control
c. Mandatory access control d. Discretionary access control
A. Rule Based Access Control
Rule Based Access Control (RBAC) introduces acronym ambiguity by using the same four-letter abbreviation (RBAC) as Role Based Access Control.
Under Rules Based Access Control, access is allowed or denied to resource objects based
on a set of rules defined by a system administrator. As with Discretionary Access
Control, access properties are stored in Access Control Lists (ACL) associated with each
resource object. When a particular account or group attempts to access a resource, the
operating system checks the rules contained in the ACL for that object.
Examples of Rules Based Access Control include situations such as permitting access for
an account or group to a network connection at certain hours of the day or days of the
week.
As with MAC, access control cannot be changed by users. All access permissions are
controlled solely by the system administrator.
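A minimal sketch of such a time-of-day rule follows; the resource name, group name, and rule format are hypothetical, chosen only to mirror the example above.

```python
# Hypothetical rules-based ACL: access allowed only during permitted hours,
# as defined by the system administrator.
from datetime import datetime

ACL_RULES = {
    "network_share": {"group": "contractors", "allowed_hours": range(8, 18)},
}

def rule_allows(resource: str, group: str, now: datetime) -> bool:
    """Check the administrator-defined rule attached to this resource."""
    rule = ACL_RULES.get(resource)
    if rule is None or rule["group"] != group:
        return False
    return now.hour in rule["allowed_hours"]

print(rule_allows("network_share", "contractors", datetime(2024, 5, 6, 9)))   # True
print(rule_allows("network_share", "contractors", datetime(2024, 5, 6, 22)))  # False
```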
Role Based Access Control (RBAC), also known as Non-Discretionary Access Control,
takes more of a real world approach to structuring access control. Access under RBAC is
based on a user's job function within the organization to which the computer system
belongs.
Roles differ from groups in that while users may belong to multiple groups, a user under
RBAC may only be assigned a single role in an organization. Additionally, there is no way
to provide individual users additional permissions over and above those available for
their role. An accountant, for example, gets the same permissions as all other accountants, nothing more and nothing less.
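A minimal sketch of that role-to-permission mapping; the roles, users, and permission names are hypothetical.

```python
# Hypothetical RBAC: permissions attach to roles, and each user holds exactly
# one role, so every accountant gets identical permissions.
ROLE_PERMISSIONS = {
    "accountant": {"ledger:read", "ledger:write"},
    "auditor": {"ledger:read"},
}
USER_ROLE = {"alice": "accountant", "bob": "auditor"}

def rbac_allows(user: str, permission: str) -> bool:
    """Grant access only through the user's single assigned role."""
    return permission in ROLE_PERMISSIONS.get(USER_ROLE.get(user, ""), set())

print(rbac_allows("alice", "ledger:write"))  # True: accountants may write
print(rbac_allows("bob", "ledger:write"))    # False: auditors can only read
```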
Mandatory Access Control (MAC) is the strictest of all levels of control. MAC was designed for, and is primarily used by, the government.
Under MAC, every resource object carries a security label made up of a classification and a set of categories. Similarly, each user account on the system also has classification and category properties from the same set of properties applied to the resource objects. When a user
attempts to access a resource under Mandatory Access Control the operating system
checks the user's classification and categories and compares them to the properties of
the object's security label. If the user's credentials match the MAC security label
properties of the object access is allowed. It is important to note that both the
classification and categories must match. A user with top secret classification, for
example, cannot access a resource if they are not also a member of one of the required
categories for that object.
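A minimal sketch of that label comparison; the classification levels and categories are hypothetical, and the "clearance at least as high" check is one common way to model a matching classification.

```python
# Hypothetical MAC check: both the classification AND at least one required
# category must match before access is granted.
CLASSIFICATION_LEVELS = {"confidential": 1, "secret": 2, "top_secret": 3}

def mac_allows(user: dict, obj: dict) -> bool:
    """The OS compares the user's label against the object's security label."""
    high_enough = (CLASSIFICATION_LEVELS[user["classification"]]
                   >= CLASSIFICATION_LEVELS[obj["classification"]])
    in_category = bool(set(user["categories"]) & set(obj["categories"]))
    return high_enough and in_category

analyst = {"classification": "top_secret", "categories": {"finance"}}
report = {"classification": "secret", "categories": {"operations"}}
# Denied despite top secret clearance: the category does not match.
print(mac_allows(analyst, report))  # False
```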
Mandatory Access Control is by far the most secure access control environment but
does not come without a price. Firstly, MAC requires a considerable amount of planning
before it can be effectively implemented. Once implemented it also imposes a high
system management overhead due to the need to constantly update object and account
labels to accommodate new data, new users and changes in the categorization and
classification of existing users.
Unlike Mandatory Access Control (MAC) where access to system resources is controlled
by the operating system (under the control of a system administrator), Discretionary
Access Control (DAC) allows each user to control access to their own data. DAC is
typically the default access control mechanism for most desktop operating systems.
Instead of a security label in the case of MAC, each resource object on a DAC based
system has an Access Control List (ACL) associated with it. An ACL contains a list of users and groups to which the owner has permitted access, together with the level of access for
each user or group. For example, User A may provide read-only access on one of her
files to User B, read and write access on the same file to User C and full control to any
user belonging to Group 1.
It is important to note that under DAC a user can only set access permissions for
resources which they already own. A hypothetical User A cannot, therefore, change the
access control for a file that is owned by User B. User A can, however, set access
permissions on a file that she owns. Under some operating systems it is also possible for
the system or network administrator to dictate which permissions users are allowed to
set in the ACLs of their resources.
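A minimal sketch mirroring the User A / User B / User C / Group 1 example above; the names and permission sets are hypothetical.

```python
# Hypothetical DAC: the resource OWNER sets the ACL for their own file.
# Mirrors the example: A grants B read-only, C read/write, Group 1 full control.
acl = {
    "owner": "user_a",
    "entries": {
        "user_b": {"read"},
        "user_c": {"read", "write"},
        "group_1": {"read", "write", "delete"},
    },
}
GROUP_MEMBERS = {"group_1": {"user_d", "user_e"}}

def dac_allows(user: str, action: str) -> bool:
    if user == acl["owner"]:
        return True  # the owner controls, and retains, access to their data
    perms = set(acl["entries"].get(user, set()))
    for group, members in GROUP_MEMBERS.items():
        if user in members:
            perms |= acl["entries"].get(group, set())
    return action in perms

print(dac_allows("user_b", "write"))   # False: B was granted read-only
print(dac_allows("user_d", "delete"))  # True: via Group 1 membership
```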
Implement a regular review of systems. At least annually, review all users on each application to ensure that they still need access to the application and still require their existing level of access. You should review high-risk systems more frequently, such as quarterly. The reviews should be documented and approved, especially if a decentralized system is in place. During this review, verify that terminated employees have been removed, access rights and administrative functions are still warranted, and service, system-level, and vendor accounts are still required. Some applications allow you to create reports showing the last login date for accounts. This could help you identify accounts that are no longer required.

Implement alerting and log review. Can you configure applications and systems so that appropriate personnel receive an email alert any time a change is made or access is adjusted? Consider reviewing a change log weekly to identify changes that should not have been made. This type of ongoing monitoring can help make your overall user access management process more effective.

Implementation and follow-through on these steps will help your organization establish a solid user administration process and maintain clean user access for all your applications.
The four key components of identity and access management (IDAM) are:
• authentication
• authorisation
• user management
• central user repository (commonly referred to as 'directory services')
Each of these components, along with the capabilities within it, is described below.
Authentication
This covers functionality that enables a user to provide sufficient credentials to gain
controlled access to a system / resource / asset. In short, once a user is authenticated
(they are who they claim to be), a session is created and referenced throughout the
interaction between the user and the end resource until such time as the session is
terminated (e.g. via a timeout, logging off, etc.). Very often, authentication takes the
form of providing a set of credentials, such as a username and password. The central
maintenance of a session allows, for example, the ability to enable single sign-on i.e. no
further login is required by the user in order to gain access to multiple resources and
assets.
Authorization
This functionality follows on from authentication, i.e. once you are confirmed as ‘you’
(authentication), this is what are you allowed to do (authorization). This is typically
performed by analyzing the access request (such as a web Uniform Resource Identifier, or URI) against predefined policies stored within the IDAM policy store. Role-based access
(RBAC) is key functionality to enable this; it can be overlaid by controls that may
additionally look at attributes associated with the user such as groups, user roles, nature
of action taken, channels, time, resource types, business rules, security policies,
compliance and regulatory requirements etc. to determine the level of access. (Note:
an alternative to RBAC is the use of Access Control Lists (ACL), and this, in turn, can be
translated to XACML (eXtensible Access Control Markup Language) - but that’s another
story.)
User management
User provisioning (and deprovisioning), password management, and role / group
management are some of the functions of this area. The focus of this capability within
IDAM is primarily administrative in nature and involves the lifecycle of an identity:
creation, propagation, maintenance, de-provisioning, etc. Some functionality can be
centralised, some can be delegated to end users (or groups), e.g. self-service, password
resets. In practice, delegation can often improve accuracy of data within the IDAM
primarily due to the end user becoming ‘closer’ to the system; trust between the user and the enterprise is also increased.
2 - The identity is authenticated (i.e. the individual is who they say they are) and authorized (i.e. what they are allowed to access)
3 - The individual may be allowed to access Self-Service to manage a given subset of
credentials themselves, and may choose to delegate their role (for example, to a
personal assistant)
4 - The password associated with the identity may be changed / updated /reset during
the course of its life
5 - The access associated with an identity may vary depending on their role(s) within an
organization which may evolve through the lifecycle
7 - As part of managing the lifecycle, reporting and analytics are essential (for example,
for security and audit purposes)
Like “machine learning” and “AI,” Zero Trust has become one of cybersecurity’s latest
buzzwords. With all the noise out in the market, it’s imperative to understand what Zero
Trust is, as well as what Zero Trust isn’t.
Zero Trust is a strategic initiative that helps prevent successful data breaches by
eliminating the concept of trust from an organization’s network architecture. Rooted in
the principle of “never trust, always verify,” Zero Trust is designed to protect modern
digital environments by leveraging network segmentation, preventing lateral
movement, providing Layer 7 threat prevention, and simplifying granular user-access
control.
Zero Trust was created by John Kindervag, during his tenure as a vice president and
principal analyst for Forrester Research, based on the realization that traditional security
models operate on the outdated assumption that everything inside an organization’s
network should be trusted. Under this broken trust model, it is assumed that a user’s
identity is not compromised and that all users act responsibly and can be trusted. The
Zero Trust model recognizes that trust is a vulnerability. Once on the network, users –
including threat actors and malicious insiders – are free to move laterally and access or
exfiltrate whatever data they are not limited to. Remember, the point of infiltration of
an attack is often not the target location.
According to The Forrester Wave™: Privileged Identity Management, Q4 2018, this trust model continues to be abused through compromised credentials.1 Zero Trust is not about making a system trusted, but instead about eliminating trust.
A Zero Trust Architecture
In Zero Trust, you identify a “protect surface.” The protect surface is made up of the
network’s most critical and valuable data, assets, applications and services – DAAS, for
short. Protect surfaces are unique to each organization. Because it contains only what’s
most critical to an organization’s operations, the protect surface is orders of magnitude
smaller than the attack surface, and it is always knowable.
With your protect surface identified, you can identify how traffic moves across the
organization in relation to the protect surface. Understanding who the users are, which
applications they are using and how they are connecting is the only way to determine
and enforce policy that ensures secure access to your data. Once you understand the
interdependencies between the DAAS, infrastructure, services and users, you should put
controls in place as close to the protect surface as possible, creating a microperimeter
around it. This microperimeter moves with the protect surface, wherever it goes. You
can create a microperimeter by deploying a segmentation gateway, more commonly
known as a next-generation firewall, to ensure only known, allowed traffic or legitimate
applications have access to the protect surface.
The segmentation gateway provides granular visibility into traffic and enforces
additional layers of inspection and access control with granular Layer 7 policy based on
the Kipling Method, which defines Zero Trust policy based on who, what, when, where,
why and how. The Zero Trust policy determines who can transit the microperimeter at
any point in time, preventing access to your protect surface by unauthorized users and
preventing the exfiltration of sensitive data. Zero Trust is only possible at Layer 7.
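To illustrate the Kipling Method, here is a minimal sketch of a single policy statement evaluated as a default-deny rule; every field value below is hypothetical and stands in for the who/what/when/where/why/how a real segmentation gateway would enforce.

```python
# Hypothetical Kipling Method rule: a Zero Trust policy statement answers
# who, what, when, where, why, and how before traffic may cross the
# microperimeter around a protect surface (a DAAS element).
RULE = {
    "who": "claims_app_service_account",  # asserted user identity
    "what": "patient_db",                 # the DAAS element being accessed
    "when": range(0, 24),                 # hours during which access is allowed
    "where": "clinical_segment",          # allowed source segment
    "why": "claims_processing",           # sanctioned business purpose
    "how": "tls_application_traffic",     # required access method
}

def transit_allowed(request: dict) -> bool:
    """Deny by default; allow only if every Kipling field matches the rule."""
    return (request["who"] == RULE["who"]
            and request["what"] == RULE["what"]
            and request["hour"] in RULE["when"]
            and request["where"] == RULE["where"]
            and request["why"] == RULE["why"]
            and request["how"] == RULE["how"])

req = {"who": "claims_app_service_account", "what": "patient_db", "hour": 14,
       "where": "clinical_segment", "why": "claims_processing",
       "how": "tls_application_traffic"}
print(transit_allowed(req))  # True only when all six questions check out
```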
Once you’ve built your Zero Trust policy around your protect surface, you continue to monitor and maintain it in real time, looking for things like what should be included in the
protect surface, interdependencies not yet accounted for, and ways to improve policy.
Zero Trust is not dependent on a location. Users, devices and application workloads are
now everywhere, so you cannot enforce Zero Trust in one location – it must be
proliferated across your entire environment. The right users need to have access to the
right applications and data.
Users are also accessing critical applications and workloads from anywhere: home,
coffee shops, offices and small branches. Zero Trust requires consistent visibility,
enforcement and control that can be delivered directly on the device or through the
cloud. A software-defined perimeter provides secure user access and prevents data loss,
regardless of where the users are, which devices are being used, or where your
workloads and data are hosted (i.e. data centers, public clouds or SaaS applications).
Workloads are highly dynamic and move across multiple data centers and public,
private, and hybrid clouds. With Zero Trust, you must have deep visibility into the
activity and interdependencies across users, devices, networks, applications and data.
Segmentation gateways monitor traffic, stop threats and enforce granular access across
north-south and east-west traffic within your on-premises data center and multi-cloud
environments.
Achieving Zero Trust is often perceived as costly and complex. However, Zero Trust is
built upon your existing architecture and does not require you to rip and replace existing
technology. There are no Zero Trust products. There are products that work well in Zero
Trust environments and those that don't. Zero Trust is also quite simple to deploy,
implement and maintain using a simple five-step methodology. This guided process
helps identify where you are and where to go next.
Creating a Zero Trust environment – consisting of a protect surface that contains a single
DAAS element protected by a microperimeter enforced at Layer 7 with Kipling Method
policy by a segmentation gateway – is a simple and iterative process you can repeat one
protect surface/DAAS element at a time.
There are eight main identity and access management (IAM) challenges associated with
adopting and deploying cloud and SaaS applications, as well as best practices for
addressing each of them.
1. User Password Fatigue
Although the SaaS model initially makes it easier for users to access their applications,
complexity quickly increases with the number of applications. Each application has
different password requirements and expiration cycles. The variety of requirements
multiplied by the variety of expiration cycles equals diminished user productivity and
increased user frustration as they spend time trying to reset, remember, and manage
these constantly changing passwords and URLs across all of their applications.
Perhaps of even greater concern are the security risks caused by the same users who
react to this “password fatigue” by using obvious or reused passwords written down on
Post-it notes or saved in Excel files on laptops.
Cloud-based IAM services can alleviate these concerns by providing single sign-on (SSO)
across all of these applications, giving users a central place to access all of their
applications with a single user name and password. Better yet, a cloud-based identity
management system can also enable various departments to manage identities for both
on-demand and on-premises applications.
The majority of enterprises use Microsoft Active Directory (AD) as the authoritative user
directory that governs access to basic IT services such as email and file sharing. AD is
often also used to control access to a broader set of business applications and IT
systems. The right on-demand IAM solution should leverage Active Directory, and allow
users to continue using their AD credentials to access SaaS applications; this increases
the likelihood that users will find the newest and best SaaS applications their company
provides them.
2. Failure-Prone Manual Provisioning and De-Provisioning Process
When a new employee starts at a company, IT often provides the employee with access
to the corporate network, file servers, email accounts, and printers. Since many SaaS
applications are managed at department level (Sales Operations manages
Salesforce.com, Accounting manages QuickBooks, Marketing manages Marketo, etc.),
access to these applications is often granted separately by the specific application’s
administrator, rather than by a single person in IT.
Given their on-demand architecture, SaaS apps should be easy to centrally provision. A
real cloud identity and access management service should be able to automate the
provisioning of new SaaS applications as a natural extension of the existing on-boarding
process. When a user is added to the core directory service (such as Active Directory),
their membership in particular security groups should ensure that they are
automatically provisioned with the appropriate applications and given the access
permissions they need.
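A minimal sketch of that group-driven provisioning; the group names and application mappings are hypothetical, and a real implementation would call the directory service and each SaaS provider's API rather than print.

```python
# Hypothetical sketch: deriving SaaS entitlements from directory security
# groups, so on-boarding a user into a group provisions their applications
# and removing them drives de-provisioning through the same mapping.
GROUP_TO_APPS = {
    "sales": ["salesforce"],
    "marketing": ["marketo"],
    "finance": ["quickbooks"],
}

def provision(user: str, groups: list) -> list:
    """Return the SaaS applications this user should be provisioned with."""
    apps = []
    for group in groups:
        apps.extend(GROUP_TO_APPS.get(group, []))
    print(f"provisioning {user}: {apps}")
    return apps

# Adding a new hire to the 'sales' group grants Salesforce automatically.
provision("new_hire", ["sales"])
```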
Almost certainly, an employee termination is a bigger concern. IT can centrally revoke
access to email and corporate networks, but they have to rely on external application
administrators to revoke the terminated employee’s access to each SaaS application.
This leaves the company vulnerable, in that critical business applications and data are in
the hands of potentially disgruntled former employees and auditors looking for holes in
your deprovisioning solution.
A cloud-based IAM service should not only enable IT to automatically add new
applications, but it should also provide:
• Automated user de-provisioning across all on-premises and all cloud based
applications.
• Deep integration with Active Directory.
• Clear audit trails.
The IAM service should provide organizations with the peace of mind that once an
employee has left the company, the company’s data hasn’t left with them.
3. Compliance Visibility: Who Has Access to What?
It’s important to understand who has access to applications and data, where they are
accessing it, and what they are doing with it. This is particularly true when it comes to
cloud services. However, only the most advanced offerings like Salesforce.com offer any
compliance-like reporting, and even then, it’s siloed for just one application.
To answer auditors who ask you which employees have access to your applications and
data, you need central visibility and control across all your systems. Your IAM service
should enable you to set access rights across services, and provide centralized
compliance reports across access rights, provisioning and de-provisioning, and user and
administrator activity.
4. Siloed User Directories for Each Application
Most enterprises have made a significant investment in a corporate directory (such as
Microsoft Active Directory) to manage access to on-premises network resources. As
organizations adopt cloud-based services, they need to leverage that investment and
extend it to the cloud, rather than create a parallel directory and access management
infrastructure just for those new SaaS applications.
A best-of-breed cloud-based IAM solution should provide centralized, out-of-the-box
integration into your central Active Directory or LDAP directory so you can seamlessly
leverage and extend that investment to these new applications—without on-premises
appliances or firewall modifications required. As you add or remove users from that
directory, access to cloud-based applications should be modified automatically, via industry standards like SAML, without any network or security configuration changes. Just
set and forget.
5. Managing Access across an Explosion of Browsers and Devices
One of the great benefits of cloud applications is that access is available from any device
that is connected to the Internet. But more apps means more URLs and passwords, and
the rise of mobile devices introduces yet another access point to manage and support.
IT departments must facilitate access across multiple devices and platforms without
compromising security—a difficult feat with existing IAM systems.
A cloud-based IAM solution should help both users and administrators solve the
“anywhere, anytime, from any device” access challenge. It should not only provide
browser-based SSO to all user applications, but it should also enable access to those
same services from the user’s mobile device of choice.
6. Keeping Application Integrations Up to Date
Truly centralizing single sign-on and user management requires building integrations
with numerous applications and keeping track of the maintenance requirements for
new versions of each application. For the vast majority of organizations, having their IT
department maintain its own collection of “connectors” across that constantly changing
landscape is unrealistic and inefficient.
Today’s enterprise cloud applications are built with cutting-edge, Internet-optimized
architectures. The modern web technologies underlying these applications provide
excellent choices for vendors to develop their service and its associated interfaces.
Unfortunately for the IT professionals, that also means that every new vendor may
require a new approach when it comes to integration, particularly concerning user
authentication and management.
In addition, like on-premises applications, SaaS apps change over time. A good cloud-
based IAM solution should keep up with these changes and ensure that the application
integration, and thus your access, is always up to date and functional. Your IAM service
should mediate all the different integration technologies and approaches, making these
challenges transparent for IT. And as the various services’ APIs change and multiply, the
cloud IAM provider should manage these programmatic interfaces, offloading the
technological heavy-lifting away from your IT department, so they no longer have to
track dependencies between connectors and application versions.
This should also make adding a new application into your network as easy as adding a
new app to your iPhone. With only minimal, company-specific configuration, you should
be able to integrate new SaaS applications with SSO and user management capability
within minutes.
7. Different Administration Models for Different Applications
As cloud applications become easier and less expensive to get up and running,
companies are adopting more point SaaS solutions every day. These solutions are often
managed by the corresponding functional area in a company, such as the Sales
Operations group in the case of Salesforce.com. This can benefit IT (because it leaves
application administration to others and frees up time), but it can also create a new
problem because there is no central place to manage users and applications, or provide
reports and analytics.
A cloud IAM service should provide IT with central administration, reporting, and user
and access management across cloud applications. In addition, the service should
include a built-in security model to provide the right level of access to your individual
application administrators, so they can manage their specific users and applications
within the same IAM system.
8. Sub-Optimal Utilization, and Lack of Insight into Best Practices
One reason for the rise of cloud applications is that monthly subscription models have
replaced the upfront lump sum of the old, on-premises software license purchase. CFOs
clearly prefer to pay for the services that employees use as they go. With no centralized
insight into usage, however, IT and financial managers cannot manage these
subscription purchases and have little idea whether they are paying for more than they
actually use.
A cloud-based IAM service should provide accurate visibility into seat utilization and
help IT optimize SaaS subscription spend. Managers should have real-time access to
service utilization reports. In addition, by overlaying access trends for various applications across top-performing employees, corporate executives should be able to
use a centralized user management service to record and evangelize employee best
practices.
10. a. Explain common techniques of identity theft.
How do identity thieves put their hands on your personal information? There are any
number of ways, from sophisticated technological attacks to simply being at the right
place at the right time. Here are seven ways your PII can land in the wrong hands,
possibly leading to identity theft, and a “best bet” that may help protect your
information:
1. Data breaches
Data breaches often make headlines, so this is one method you’ve likely heard
about before. Accidental or intentional, they can cause problems—for the
organizations that suffer them and the individuals whose information is exposed.
An accidental data breach might occur when an organization’s employee leaves a
work computer—containing PII or a way to access it—in a vulnerable place,
allowing someone to steal it. An intentional breach usually involves criminals
finding a way to access an organization’s computer network so that they can steal
PII. The criminals might deploy a sophisticated technical attack or simply trick an
employee into clicking on a link that creates an attack opening to be exploited.
Regardless of how it happens, a data breach can, in one fell swoop, expose the PII
of millions of unwitting victims.
Best bet: The less you share your PII, the better. In this digital era, though,
sharing your personal information is a regular part of life. But you can try to be
smart about it. If someone requests your Social Security number, ask yourself if
they really need it and, if you decide they do, ask them how they’ll protect it.
When shopping online, stick with familiar and trusted companies rather than
ones you’ve never heard of. A well-known company is more likely to invest in the
security measures to protect its business and your data. Of course, as we’ve seen,
even respected entities can fall short. And once you give your PII to a company, you often can’t be sure that company won’t sell or share it with another.
2. Phishing
Why target employees? One industry official says criminals consider employees
the low-hanging fruit that attackers can try to manipulate to get into the system.
But be aware that phishing attacks can also target individuals outside a business
or government agency.
4. Mail theft
Even in this digital era, identity thieves stick with what works. And grabbing mail
from an unsecured mailbox is a tried-and-true method to steal someone’s PII. It’s
one thing if they’re grabbing only junk mail, but they could also grab bank or
credit card statements or, worse yet, tax forms that include your Social Security
number
Best bet: If it’s not already, make sure your mailbox is locked or otherwise secure
from everyone except you and the mail carrier.
5. Dumpster diving
Like mail theft, dumpster diving is a time-tested way criminals can put their hands
on PII. Identity thieves are not above digging through your trash to find financial
statements, tax documents or other information that might help them steal your
identity.
Imagine losing your wallet with your Social Security card and driver’s license. An
identity thief who found it would have your full name, address, birthdate and, of
course, Social Security number. You might as well have tied a ribbon around it
with a card that said, “Please steal my identity!”
Best bet: Don’t carry your Social Security card around with you. You seldom need
it, and given its importance to identity thieves, you want to keep the card—and
any documents that include your Social Security number—safe and secure.
Data protection may sound like a strictly digital term, but it has an analog
counterpart. If you invite strangers—or near-strangers—into your home, you
should keep this in mind. Could an appliance repair person, housecleaner or dog
walker come across information that you prefer to keep secret?
Best bet: Make sure documents and other items containing personal information
are safe and secure, not easily accessible by any visitor to your home.
1. SQL injection
An unauthorized user gains access to the entire database of an application by
inserting malicious code into a standard SQL query. Often used to attack websites, SQL injection can be avoided by not using dynamically generated SQL in the code and relying on parameterized queries (prepared statements) instead. It is also advisable to remove stored procedures that are rarely used and to assign the least possible privileges to users who have permission to access the database.
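To make the parameterized-query advice concrete, here is a minimal Python sketch using the standard sqlite3 module; the users table and its columns are hypothetical, and the same placeholder style works with most database drivers (only the token varies: ?, %s, :name):

```python
import sqlite3

def find_user(conn: sqlite3.Connection, username: str):
    # UNSAFE (illustration only): concatenating input lets a value like
    # "x' OR '1'='1" rewrite the query and dump the whole table.
    # query = "SELECT id, email FROM users WHERE name = '" + username + "'"

    # SAFE: the ? placeholder makes the driver treat input strictly as data.
    cur = conn.execute("SELECT id, email FROM users WHERE name = ?", (username,))
    return cur.fetchall()
```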
2. Guest-hopping attack
In guest-hopping attacks, an attacker exploits the failure of separation between shared infrastructure to gain access to a virtual machine by penetrating another virtual machine hosted on the same hardware. One possible mitigation is to use forensics and VM debugging tools to observe any attempt to compromise the virtual machine. Another solution is to use the High Assurance Platform (HAP), which provides a high degree of isolation between virtual machines.
3. Side-channel attack
An attacker opens a side-channel attack by placing a malicious virtual machine on the
same physical machine as the victim machine. Through this, the attacker gains access to
all confidential information on the victim machine. The countermeasure to eliminate the risk of side-channel attacks in a virtualized cloud environment is to ensure that no legitimate user VMs reside on the same hardware as other users’ VMs.
4. Malicious insider
A malicious insider can be a current or former employee or business associate who
maliciously and intentionally abuses system privileges and credentials to access and
steal sensitive customer information within the network of an organization. Strict
privilege planning and security auditing can minimize this security risk that originates
from within an organization.
5. Cookie poisoning
Cookie poisoning means gaining unauthorized access to an application or a webpage by modifying the contents of a cookie. In a SaaS model, cookies contain user identity credential information that allows the application to authenticate the user’s identity. Cookies can be forged to impersonate an authorized user. A solution is to clean up cookies regularly and to encrypt (or at least cryptographically sign) the cookie data.
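As one sketch of the sign-or-encrypt advice, the snippet below uses Python’s standard hmac module to make cookie tampering detectable. The secret key and cookie format are illustrative assumptions; a real application would also encrypt sensitive values and set the Secure and HttpOnly flags:

```python
import base64
import hashlib
import hmac
from typing import Optional

SECRET_KEY = b"replace-with-a-random-server-side-secret"  # hypothetical key

def sign_cookie(value: str) -> str:
    # Append an HMAC tag so any modification of the value is detectable.
    tag = hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).digest()
    return value + "." + base64.urlsafe_b64encode(tag).decode()

def verify_cookie(cookie: str) -> Optional[str]:
    # Return the original value only if the tag checks out.
    try:
        value, b64tag = cookie.rsplit(".", 1)
        expected = hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).digest()
        if hmac.compare_digest(expected, base64.urlsafe_b64decode(b64tag)):
            return value
    except (ValueError, TypeError):
        pass
    return None
```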
6. Backdoor and debug option
A backdoor is a hidden entrance to an application, created intentionally or unintentionally by developers while coding. A debug option is a similar entry point, often used by developers to facilitate troubleshooting in applications. The problem is that hackers can use these hidden doors to bypass security policies, enter the website, and access sensitive information. To prevent this kind of attack, developers should disable the debugging option before release.
7. Cloud browser security
A web browser is a universal client application that uses Transport Layer Security (TLS)
protocol to facilitate privacy and data security for Internet communications. TLS
encrypts the connection between web applications and servers, such as web browsers
loading a website. Web browsers rely only on TLS encryption and TLS signatures, which are not secure enough on their own to defend against malicious attacks. One solution is to use TLS together with XML-based cryptography in the browser core.
8. Cloud malware injection attack
A malicious virtual machine or service implementation module such as SaaS or IaaS is
injected into the cloud system, making it believe the new instance is valid. If the injection succeeds, user requests are automatically redirected to the new instance, where the malicious code is executed. The mitigation is to perform an integrity check of the service instance before using it for incoming requests in the cloud system.
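A minimal sketch of such an integrity check, assuming for illustration that the service image is a file and that a known-good SHA-256 digest is kept in a protected manifest:

```python
import hashlib

def file_sha256(path: str) -> str:
    # Stream the file in chunks so large images don't exhaust memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical known-good value, stored outside the instance itself.
TRUSTED_DIGEST = "0f3a..."

def instance_is_valid(image_path: str) -> bool:
    return file_sha256(image_path) == TRUSTED_DIGEST
```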
9. ARP poisoning
Address Resolution Protocol (ARP) poisoning occurs when an attacker exploits weaknesses in the ARP protocol to map a network IP address to a malicious MAC address and then update the ARP cache with that malicious MAC address. Using static ARP entries can minimize this attack. This tactic can work for small networks such as personal clouds, but on large-scale clouds it is easier to use other strategies, such as port security features, to lock a single port (or network device) to a particular IP address.
10. Network-level security attacks
Cloud computing largely depends on existing network infrastructure such as LAN, MAN, and WAN, which exposes it to security attacks originating from users outside the cloud or from a malicious insider. In this section, let’s focus on network-level security attacks and their possible countermeasures.
11. Domain Name System (DNS) attacks
A DNS attack is an exploit in which an attacker takes advantage of vulnerabilities in the Domain Name System (DNS), which converts hostnames into corresponding Internet Protocol (IP) addresses using a distributed database scheme. DNS servers are subject to various kinds of attacks, since DNS is used by nearly all networked applications – including email, Web browsing, eCommerce, Internet telephony, and more. Common examples include TCP SYN flood attacks, UDP flood attacks, spoofed source address/LAND attacks, cache poisoning attacks, and man-in-the-middle attacks.
12. Domain hijacking
Domain hijacking is defined as changing a domain’s name without the owner or
creator’s knowledge or permission. Domain hijacking enables intruders to obtain
confidential business data or perform illegal activities such as phishing, where a domain
is substituted by a similar website containing private information. One way to avoid
domain hijacking is to force a waiting period of 60 days between a change in registration
and a transfer to another registrar. Another approach is to use the Extensible
Provisioning Protocol (EPP), which utilizes a domain-registrant-only authorization key as a protection measure to prevent unauthorized name changes.
13. IP Spoofing
In IP spoofing, an attacker gains unauthorized access to a computer by pretending that
the traffic has originated from a legitimate computer. IP spoofing is used to enable other threats such as Denial of Service and Man-in-the-Middle attacks:
a. Denial of service attacks (DoS)
It is a type of attack that tries to make a website or network resource unavailable.
The attacker floods the host with a massive number of packets in a short amount of
time that require extra processing. It makes the targeted device waste time waiting
for a response that never comes. The target is kept so busy dealing with malicious packets that it does not respond to routine incoming requests, denying service to legitimate users.
An attacker can coordinate hundreds of devices across the Internet to send an
overwhelming amount of unwanted packets to a target. Therefore, tracking and
stopping DoS is very difficult. TCP SYN flooding is an example of a DoS attack in which
the intruder sends a flood of spoofed TCP SYN packets to the victim machine. This
attack exploits the limitations of the three-way handshake in maintaining half-open
connections.
b. Man In The Middle Attack (MITM)
A man-in-the-middle attack (MITM) is an intrusion in which the intruder relays, and possibly alters, messages between two entities that think they are communicating directly with each other. The intruder uses network packet sniffing, filtering, and transmission techniques to gain access to network traffic. A MITM attack exploits the real-time processing of transactions, conversations, or transfers of other data. The risk can be reduced using packet filtering by a firewall, strong encryption, and origin-authentication techniques.
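One practical origin-authentication measure is to insist on verified TLS for every connection. In Python, ssl.create_default_context() validates the server’s certificate chain and hostname, which defeats a simple MITM presenting a forged certificate (the hostname below is illustrative):

```python
import socket
import ssl

def open_verified_tls(host: str, port: int = 443) -> ssl.SSLSocket:
    # The default context loads the system CA store and enables
    # certificate and hostname verification.
    context = ssl.create_default_context()
    sock = socket.create_connection((host, port))
    return context.wrap_socket(sock, server_hostname=host)

tls = open_verified_tls("example.com")  # raises ssl.SSLError on a bad certificate
print(tls.version())                    # e.g. "TLSv1.3"
tls.close()
```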
14. End-user/host-level attacks
Cloud end-user or host-level attacks include phishing, an attempt to steal user identity information such as usernames, passwords, and credit card details. Phishing typically involves sending the user an email containing a link to a fake website that looks like a real one. When the user logs in to the fake website, their username and password are sent to the attacker, who can use them to attack the cloud.
B. Write a short note on key concerns of Cloud Security Alliance v3.0
Despite the impressive array of benefits provided by cloud security services such as
dynamic scalability, virtually unlimited resources, and greater economies of scale that
exist with lower or no cost of ownership, there are concerns about security in the cloud
environment. Some security concerns are around compliance, multi-tenancy, and
vendor lock-in. While these are being cited as inhibitors to the migration of security into
the cloud, these same concerns exist with traditional data centers.
Security concerns in the cloud environment often stem from the worry that a lack of visibility into the security controls implemented means systems are not locked down as well as they are in traditional data centers, and that personnel lack the proper credentials and background checks. Security as a Service providers recognize the fragility of the
relationship and often go to extreme lengths to ensure that their environment is locked
down as much as possible. They often run background checks on their personnel that
rival even the toughest government background checks, and they run them often.
Physical and personnel security is one of the highest priorities of a Security as a Service
provider.
Compliance has been raised as a concern given the global regulatory environment.
Security as a Service providers have also recognized this and have gone to great efforts
to demonstrate their ability to not only meet but exceed these requirements or to
ensure that it is integrated into a client’s network. Security as a Service providers should
be cognizant of the geographical and regional regulations that affect the services and
their consumers, and this can be built into the offerings and service implementations.
The most prudent Security as a Service providers often enlist mediation and legal
services to preemptively resolve the regulatory needs of the consumer with the regional
regulatory requirements of a jurisdiction. When deploying Security as a Service in a
highly regulated industry or environment, agreement on the metrics defining the service
level required to achieve regulatory objectives should be negotiated in parallel with the
SLA documents defining service.
As with any cloud service, multi-tenancy presents concerns of data leakage between
virtual instances. While customers are concerned about this, the Security as a Service
providers are also highly concerned in light of the litigious nature of modern business.
As a result, a mature offering may take significant precautions to ensure data is highly
compartmentalized and any data that is shared is anonymized to protect the identity
and source. This applies equally to the data being monitored by the SecaaS provider and
to the data held by them such as log and audit data from the client’s systems (both
cloud and non-cloud) that they monitor.
Another approach to the litigious nature of multi-tenant environments is increased
analytics coupled with semantic processing. Resource descriptors and applied
jurimetrics, a process through which legal reasoning is interpreted as high-level
concepts and expressed in a machine-readable format, may be employed proactively to
resolve any legal ambiguity regarding a shared resource.
When utilizing a Security as a Service vendor, an enterprise places some, many or all
security logging, compliance, and reporting into the custody of a provider that might
sometimes have proprietary standards. In the event the enterprise seeks a new
provider, they must concern themselves with an orderly transition and somehow find a
way for the existing data and log files to be translated correctly and in a forensically
sound manner.
It is important to note that other than multi-tenancy, each of these concerns is not
“cloud unique” but are problems faced by both in-house models and outsourcing
models. For this reason, non-proprietary unified security controls, such as those
proposed by the Cloud Security Alliance Cloud Control Matrix, are needed to help
enterprises and vendors benefit from the Security as a Service environment.
#1 Consumers Have Reduced Visibility and Control. The actual shift of responsibility depends on the cloud service model(s) used, leading to
a paradigm shift for agencies in relation to security monitoring and logging.
Organizations need to perform monitoring and analysis of information about
applications, services, data, and users, without using network-based monitoring and
logging, which is available for on-premises IT.
#2 On-Demand Self Service Simplifies Unauthorized Use. CSPs make it very easy to
provision new services. The on-demand self-service provisioning features of the cloud
enable an organization's personnel to provision additional services from the agency's
CSP without IT consent. The practice of using software in an organization that is not
supported by the organization's IT department is commonly referred to as shadow IT.
Due to the lower costs and ease of implementing PaaS and SaaS products, the
probability of unauthorized use of cloud services increases. However, services
provisioned or used without IT's knowledge present risks to an organization. The use of
unauthorized cloud services could result in an increase in malware infections or data
exfiltration since the organization is unable to protect resources it does not know about.
The use of unauthorized cloud services also decreases an organization's visibility and
control of its network and data.
#5 Data Deletion is Incomplete. Threats associated with data deletion exist because the
consumer has reduced visibility into where their data is physically stored in the cloud
and a reduced ability to verify the secure deletion of their data. This risk is concerning
because the data is spread over a number of different storage devices within the CSP's
infrastructure in a multi-tenancy environment. In addition, deletion procedures may
differ from provider to provider. Organizations may not be able to verify that their data
was securely deleted and that remnants of the data are not available to attackers. This
threat increases as an agency uses more CSP services.
The following are risks that apply to both cloud and on-premises IT data centers that organizations need to address.
#6 Credentials are Stolen. If an attacker gains access to a user's cloud credentials, the
attacker can have access to the CSP's services to provision additional resources (if
credentials allowed access to provisioning), as well as target the organization's assets.
The attacker could leverage cloud computing resources to target the organization's
administrative users, other organizations using the same CSP, or the CSP's
administrators. An attacker who gains access to a CSP administrator's cloud credentials
may be able to use those credentials to access the agency's systems and data.
Administrator roles vary between a CSP and an organization. The CSP administrator has
access to the CSP network, systems, and applications (depending on the service) of the
CSP's infrastructure, whereas the consumer's administrators have access only to the
organization's cloud implementations. In essence, the CSP administrator has
administration rights over more than one customer and supports multiple services.
#7 Vendor Lock-In Complicates Moving to Other CSPs. Vendor lock-in becomes an issue
when an organization considers moving its assets/operations from one CSP to another.
The organization discovers the cost/effort/schedule time necessary for the move is
much higher than initially considered due to factors such as non-standard data formats,
non-standard APIs, and reliance on one CSP's proprietary tools and unique APIs.
This issue increases in service models where the CSP takes more responsibility. As an
agency uses more features, services, or APIs, the exposure to a CSP's unique
implementations increases. These unique implementations require changes when a
capability is moved to a different CSP. If a selected CSP goes out of business, it becomes
a major problem since data can be lost or cannot be transferred to another CSP in a
timely manner.
#8 Increased Complexity Strains IT Staff. Key management and encryption services become more complex in the cloud. The
services, techniques, and tools available to log and monitor cloud services typically vary
across CSPs, further increasing complexity. There may also be emergent threats/risks in
hybrid cloud implementations due to technology, policies, and implementation
methods, which add complexity. This added complexity leads to an increased potential
for security gaps in an agency's cloud and on-premises implementations.
#9 Insiders Abuse Authorized Access. Insiders, such as staff and administrators for both
organizations and CSPs, who abuse their authorized access to the organization's or CSP's
networks, systems, and data are uniquely positioned to cause damage or exfiltrate
information.
The impact is most likely worse when using IaaS due to an insider's ability to provision
resources or perform nefarious activities that require forensics for detection. These
forensic capabilities may not be available with cloud resources.
#10 Stored Data is Lost. Data stored in the cloud can be lost for reasons other than
malicious attacks. Accidental deletion of data by the cloud service provider or a physical
catastrophe, such as a fire or earthquake, can lead to the permanent loss of customer
data. The burden of avoiding data loss does not fall solely on the provider's shoulders. If
a customer encrypts its data before uploading it to the cloud but loses the encryption
key, the data will be lost. In addition, inadequate understanding of a CSP's storage
model may result in data loss. Agencies must consider data recovery and be prepared
for the possibility of their CSP being acquired, changing service offerings, or going
bankrupt.
This threat increases as an agency uses more CSP services. Recovering data on a CSP
may be easier than recovering it at an agency because an SLA designates
availability/uptime percentages. These percentages should be investigated when the
agency selects a CSP.
#11 CSP Supply Chain is Compromised. If the CSP outsources parts of its infrastructure,
operations, or maintenance, these third parties may not satisfy/support the
requirements that the CSP is contracted to provide with an organization. An
organization needs to evaluate how the CSP enforces compliance and check to see if the
CSP flows its own requirements down to third parties. If the requirements are not being
levied on the supply chain, then the threat to the agency increases.
This threat increases as an organization uses more CSP services and is dependent on
individual CSPs and their supply chain policies.
It is important to remember that CSPs use a shared responsibility model for security.
The CSP accepts responsibility for some aspects of security. Other aspects of security are
shared between the CSP and the consumer. Finally, some aspects of security remain the
sole responsibility of the consumer. Effective cloud security depends on knowing and
meeting all consumer responsibilities. Consumers' failure to understand or meet their
responsibilities is a leading cause of security incidents in cloud-based systems.
In this blog post, we have identified five cloud-unique and seven cloud and on-premises
threats that organizations face as they consider migrating their data and assets to the
cloud. In the next post in this series, we will explore a series of best practices aimed at
helping organizations securely move data and applications to the cloud.
1. Data breaches
A data breach can be the main goal of an attack, through which sensitive information such as health, financial, personal identity, intellectual property, and other related information is viewed, stolen, or used by an unauthorized user.
Remediation:
Analyze data protection during design and run time.
Organizations must restrict access to data and maintain adherence to industry
standards and compliance.
Implementation of strong API access control.
The environment and infrastructure should be designed to restrict access and
monitor traffic.
Organizations must encrypt and protect data in transit.
Implement backup and retention strategies.
2. Insufficient identity, credential and access management
Security threats may occur due to inadequate protection of credentials. An unauthorised user might read, modify, and delete data or release malicious software.
Remediation:
Security awareness should be provided to contractors, third-party users and
employees.
Two-factor authentication should be implemented to secure accounts.
Organizations must audit identity and access rights to detect violations.
Segregate accounts based on business needs.
The data owner should restrict the internal corporate or customer (tenant) user-
account credentials.
3. Insecure interfaces and APIs
Cloud service providers expose a set of software user interfaces or application
programming interfaces (APIs) that organizations use to manage and interact with the
cloud services. Moreover, customers and third-party users often offer services to their
customers through these interfaces.
An unauthorized user may access and reuse these APIs or the credentials behind them. They may then transmit content, obtain authorizations, and abuse logging capabilities (see the sketch after the remediation list below).
Remediations:
Use a good security model of software interfaces.
Practise strong authentication methods and limit access with encrypted
transmission.
Use standard API frameworks.
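As a small illustration of strong API authentication, the sketch below validates a presented API key using a constant-time comparison; the key store and client ID are hypothetical, and a production system would store only hashes of issued keys:

```python
import hmac

# Hypothetical server-side key store (real systems would store key hashes).
VALID_KEYS = {"client-a": "9f2c7d51e8a04b6f"}

def authorize(client_id: str, presented_key: str) -> bool:
    expected = VALID_KEYS.get(client_id)
    if expected is None:
        return False
    # compare_digest avoids leaking key prefixes through timing differences.
    return hmac.compare_digest(expected, presented_key)
```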
4. System vulnerability
Security breaches may occur due to exploitable bugs in programs that stay within a
system. This allows a bad actor to infiltrate and get access to sensitive information or
crash the service operations.
Remediations:
Customer access grants must be implemented using a need-to-know, need-to-
access protocol.
Organizations must regularly run assessments to detect unauthorized data disclosure, alteration, or destruction.
Privileges should be separated between business-as-usual systems-level access,
and escrowed credential access for sensitive root or system accounts.
Check the quality and integrity of systems and services frequently.
5. Account or service hijacking – using stolen passwords
Account or service hijacking can be done to gain access and abuse highly privileged
accounts. Attack methods like fraud, phishing, and exploitation of software vulnerabilities are mostly carried out using stolen passwords.
Remediations:
Use strong two-factor authentication techniques where possible (a minimal TOTP sketch follows this list).
The organization needs to take proper steps to verify identity, restrict access and
maintain adherence to industry standards and compliance.
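The TOTP sketch mentioned above, using only Python’s standard library (RFC 6238 with the common defaults of a 30-second step and six digits; the base32 secret is a well-known demo value, not a real credential):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    # Time-based one-time password (RFC 6238), the scheme used by most
    # authenticator apps. secret_b32 is the shared base32 secret.
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", int(time.time()) // interval)
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # hypothetical demo secret
```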
6. Malicious insider
A malicious insider can access sensitive data of the system administrator or may even
get control over the cloud services at greater levels with little or no risk of detection. A
malicious insider may affect an organization through brand damage, financial impact
and productivity loss.
Remediations:
Organizations must understand the practices performed by cloud providers, how
to grant access to employees, and set compliance policies.
There should be security and privacy awareness programs to understand,
recognize and report any suspicious activity.
Organizations should automate their processes and use technologies that scan
frequently for misconfigured resources and remediate unknown activity in real time.
7. Data loss
The data loss threat occurs in the cloud due to interaction with risks within the cloud or the architectural characteristics of the cloud application. Unauthorized parties may access data to delete or alter an organization’s records.
Remediations:
Cloud service providers should provide adequate security controls to customers
as well as specify backup and retention strategies to them.
Use strong API access control.
Encrypt data in transit.
8. Lack of due diligence
Most cloud providers develop a good strategy for due diligence when evaluating cloud technologies. Enterprises that choose providers without analysing the technologies or performing due diligence expose themselves to risks.
Remediations:
Organizations must know what certifications the cloud provider itself has in
place.
Clear protocols must be defined related to accountability and responsibility of
management support and involvement.
Use strong passwords with Multi-Factor Authentication (MFA) tokens.
9. Abuse and nefarious use of cloud services
This threat refers to attackers leveraging the resources of cloud computing to target
users, enterprises, and other cloud providers. Examples include launching DDoS attacks,
phishing, email spam, gaining access to credential databases, and more.
Remediations:
Organizations must use strong IDS/IPS.
Organizations must use firewalls that can inspect incoming and outgoing traffic.
The integration of cloud services must not be left to individuals or informal groups to implement. An organization must choose its storage vendors wisely; the process must be handled by corporate IT or the security team only.
10. Shared technology vulnerabilities
Cloud providers deliver their services by sharing applications or infrastructure. Sometimes, the components that make up the infrastructure for a cloud as-a-service offering are not designed to offer strong isolation properties for a multi-tenant cloud service. This may lead to vulnerabilities in shared technology that can be attacked in almost all delivery models.
Remediations:
Sensitive data should be protected via encryption.
Data should be segmented and protected according to sensitivity levels.
Organizations must conduct vulnerability scanning and configuration audits
regularly.
The rise of cloud computing as an evolving technology brings with it concerns for every
business on cloud security threats. Moving critical applications and data to the cloud
does not make them more secure, and cloud providers should not simply be blamed here. Organizations must outline a good roadmap for evaluating cloud technologies and service providers. In addition, the IT and security teams within an organization must design corrective controls as part of a disaster recovery plan, including penetration testing, regular system updates, and security awareness training.
Covertly obtains information by transmitting data from the hard drive (spyware)
Phishing
Phishing is the practice of sending fraudulent communications that appear to come
from a reputable source, usually through email. The goal is to steal sensitive data like
credit card and login information or to install malware on the victim’s machine. Phishing
is an increasingly common cyberthreat.
Man-in-the-middle attack
Man-in-the-middle (MitM) attacks, also known as eavesdropping attacks, occur when
attackers insert themselves into a two-party transaction. Once the attackers interrupt
the traffic, they can filter and steal data.
1. On unsecured public Wi-Fi, attackers can insert themselves between a visitor’s device and the network. Without knowing it, the visitor passes all information through the attacker.
2. Once malware has breached a device, an attacker can install software to process all of
the victim’s information.
Denial-of-service attack
A denial-of-service attack floods systems, servers, or networks with traffic to exhaust
resources and bandwidth. As a result, the system is unable to fulfill legitimate requests.
Attackers can also use multiple compromised devices to launch this attack. This is known
as a distributed-denial-of-service (DDoS) attack.
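Rate limiting is one standard building block for absorbing such floods. Below is an illustrative token-bucket limiter in Python; a real deployment would enforce a bucket per client, typically at the network edge, alongside upstream filtering:

```python
import time

class TokenBucket:
    """Minimal rate limiter: requests beyond the sustained rate are
    rejected instead of being allowed to exhaust server resources."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate          # tokens replenished per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=5, capacity=10)  # ~5 requests/second, bursts of 10
```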
SQL injection
A Structured Query Language (SQL) injection occurs when an attacker inserts malicious
code into a server that uses SQL and forces the server to reveal information it normally
would not. An attacker could carry out a SQL injection simply by submitting malicious
code into a vulnerable website search box.
Zero-day exploit
A zero-day exploit hits after a network vulnerability is announced but before a patch or
solution is implemented. Attackers target the disclosed vulnerability during this window
of time. Zero-day vulnerability threat detection requires constant awareness.
DNS Tunneling
DNS tunneling utilizes the DNS protocol to communicate non-DNS traffic over port 53. It
sends HTTP and other protocol traffic over DNS. There are various, legitimate reasons to
utilize DNS tunneling. However, there are also malicious uses, such as DNS tunneling VPN services, which can disguise outbound traffic as DNS and conceal data that is typically shared through an internet connection. For malicious use, DNS requests are
manipulated to exfiltrate data from a compromised system to the attacker’s
infrastructure. It can also be used for command and control callbacks from the
attacker’s infrastructure to a compromised system.
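Detection often starts with simple heuristics on query names, since tunneled payloads tend to produce long, random-looking labels. Here is a rough Python sketch; the length and entropy thresholds are illustrative, not tuned values:

```python
import math
from collections import Counter

def shannon_entropy(label: str) -> float:
    # Entropy in bits per character of the label.
    counts = Counter(label)
    total = len(label)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_like_tunnel(qname: str, max_len: int = 52, max_entropy: float = 4.0) -> bool:
    # Tunneled payloads tend to produce very long, high-entropy leftmost labels.
    label = qname.split(".")[0]
    return len(label) > max_len or (len(label) > 0 and shannon_entropy(label) > max_entropy)

# A 56-character leftmost label trips the length check.
print(looks_like_tunnel("dGhpcyBpcyBleGZpbHRyYXRlZCBkYXRh0a1b2c3d4e5f6a7b8c9d0e1f.evil.example"))  # True
```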
When it comes to the safety of your data and technology systems, it’s vital that
your organization recognizes the reality of the “information security lifecycle.” By
its very name, the info security lifecycle indicates that true information security is
a process, not a “one and done” solitary project. Information security has no end-
point, and your operational framework should always strive to acknowledge that
fact.
At Vala Secure, we use a lifecycle model that serves as a useful baseline to help
build a solid foundation for any security program across any type of organization
and industry focus. Using the lifecycle model can provide you with a road map to
ensure that your information security is continually being improved.
As with any other aspect of your security program, implementing the security
lifecycle requires certain policies and standards. The Vala Secure lifecycle model
differs depending on the type of process framework that your organization uses,
but, in general, it adheres to the COBIT model (Control Objectives for Information
and Related Technology) created by the Information Systems Audit and Control
Association (ISACA).
The information security lifecycle is broken down into four key phases:
Planning and Organization
The first step in an effective information security framework is to understand what
exactly your organization is trying to protect.
You can start by thoroughly mapping out your network. Identify all of its key
components, including individual servers, the networking infrastructure that
connects those servers and the software running on them.
To get the most out of your mapping efforts, start at the highest level and drill
down. Begin by identifying the basic function of the various sectors in your
network, such as areas related primarily to software development, or areas
dedicated to e-commerce support. Don’t forget purely internally focused systems,
like those used by your human resources or accounting departments.
Next, start drilling down into each node of your network. Gather information
related to the operating systems being run by each individual computer, then the
applications and other software run on those computers. You should even go down
deep enough to identify and list what software versions are being run on individual
systems, and what specific patches and software hot fixes have been applied on
each PC.
Don’t forget to include mobile technology in your efforts. Each smartphone, tablet or other device that interacts with your systems should be recorded and categorized just like your desktops and standalone boxes; a small inventory sketch follows below.
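As a tiny illustration of the per-host facts this mapping step calls for, the Python sketch below records the hostname, OS string, and installed Python package versions; a real inventory would also capture applied OS patches and non-Python software:

```python
import platform
import socket
from importlib import metadata

def host_snapshot() -> dict:
    # Collect per-host facts: identity, OS, and package versions.
    return {
        "hostname": socket.gethostname(),
        "os": platform.platform(),           # e.g. "Linux-6.1..." or "Windows-10-..."
        "python": platform.python_version(),
        "packages": {d.metadata["Name"]: d.version for d in metadata.distributions()},
    }

print(host_snapshot())
```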
Acquire and Implement
Once you’ve thoroughly mapped and identified your organization’s technology
landscape, it’s time to begin preparing to implement your security measures.
The first step is to perform a thorough vulnerability assessment that covers the
entirety of your systems, from the highest level to the lowest. You should focus
especially on any areas that your previous mapping efforts identified as having
risk-factors, such as very outdated software versions or obsolete hardware
technology.
Next, begin acquiring all of the tools that will be necessary to implement your
security measures. Those tools may be security programs like firewalls, new
network hardware or software that will be used to monitor and maintain your
security routines once they’re in place.
This is also the time to examine how your servers are configured. Ensure their
configurations match with your new policies, and especially that they are
compliant with all regulatory and legal requirements.
Deliver and Support
In this phase, you’ll begin deploying your new hardware, software and policy
routines. Pay special attention to mitigating all of the risks identified during the
first two phases of the lifecycle. Whenever possible, work with a “most to least”
philosophy, focusing on the most important and/or vulnerable areas first, then
working down towards the least important and/or vulnerable areas.
Configure and update every system, hardware and network component, providing
technical support when and where it’s necessary. You should also make sure that
your security is strengthened in every aspect to comply with corporate policy,
especially when it comes to password security. This is especially true when it
comes to personnel, who will need to be educated about the new policies and how
they will be expected to support and maintain the new security regime.
Monitor and Evaluate
In this final phase, the cyclical nature of your information security comes to the
forefront. Computer systems are dynamic, and are continually being updated and
modified by administrators, developers and every other user who has access to
your network. Accordingly, there’s no way to call a system “secure” unless it is being continuously monitored. In this phase, you should deploy your monitoring
routines, which typically involve automated systems calibrated to detect problems,
along with hands-on oversight from network administrators.
In the early phases of monitoring, your goal is to evaluate how effective all of your
information security systems are by examining them according to your new
procedures. Always strive to work with the “most to least” rubric discussed above,
where the most critical and/or vulnerable systems receive the highest degree of
scrutiny. Also, don't forget to have a qualified firm test and/or audit your
environment to ensure that controls and processes are working as expected.
Once you’ve implemented the information security lifecycle, your organization will
find itself even better prepared for future concerns, and will be able to adapt to
changes in the security landscape much more effectively.
14.A. Explain the essentiality of security governance
IT security governance is the system by which an organization directs and controls IT
security (adapted from ISO 38500). IT security governance should not be confused with
IT security management. IT security management is concerned with making decisions to
mitigate risks; governance determines who is authorized to make decisions. Governance
specifies the accountability framework and provides oversight to ensure that risks are
adequately mitigated, while management ensures that controls are implemented to
mitigate risks. Management recommends security strategies. Governance ensures that
security strategies are aligned with business objectives and consistent with regulations.
Enterprise security governance results from the duty of care owed by leadership
towards fiduciary requirements. This position is based on judicial rationale and
reasonable standards of care [1]. The five general governance areas are:
1. Govern the operations of the organization and protect its critical assets
2. Protect the organization's market share and stock price (perhaps not appropriate
for education)
3. Govern the conduct of employees (educational AUP and other policies that may
apply to use of technology resources, data handling, etc.)
4. Protect the reputation of the organization
5. Ensure compliance requirements are met
B. What is the focus of corporate governance?
Corporate governance is one key element in improving economic efficiency and growth
as well as enhancing investor confidence. Corporate governance involves a set of
relationships between a company's management, its board, its shareholders and other
stakeholders. Corporate governance also provides the structure through which the
objectives of the company are set, and the means of attaining those objectives and
monitoring performance are determined. Good corporate governance should provide
proper incentives for the board and management to pursue objectives that are in the
interests of the company and its shareholders and should facilitate effective monitoring.
This definition shows that corporate governance is not just focused on the interests of
the organization and its shareholders; rather, that it considers the relationship with all
stakeholders as well as their interests. It also shows that corporate governance at the
organizational level results in improving economic conditions of the market. Corporate
governance is a system that defines how the organization should be directed and
controlled. It has to be understood that corporate governance is not simply an internal-
looking regulatory function; rather that it involves consideration for external
stakeholders, such as the market, as well as the industry standards.
From a principal–agent relationship perspective, corporate governance focuses on
streamlining the relationship between the principal and the agent through monitoring
mechanisms, such as transparency, compliance, and reporting, as well as board and
shareholder composition.
Corporate governance is also about allowing managers to drive the company forward; however, this freedom operates within a framework of control mechanisms, accountability, decision-making processes, and a clear distribution of power.
In addition, the legitimate rights of the shareholders, in particular, and the stakeholders,
in general, should be clearly defined within the governance framework. Chau (2011)
summarizes corporate governance framework by mentioning that "The heart of all
instruments and mechanisms should be directed to proper stewardship, integrity,
openness, transparency and accountability without excessive surveillance and
bureaucracy" (p. 10).
Types of spoofing
Email Spoofing Email spoofing occurs when an attacker uses an email message to trick a
recipient into thinking it came from a known and/or trusted source. These emails may
include links to malicious websites or attachments infected with malware, or they may
use social engineering to convince the recipient to freely disclose sensitive information.
Sender information is easy to spoof.
Website Spoofing
Website spoofing refers to when a website is designed to mimic an existing site known
and/or trusted by the user. Attackers use these sites to gain login and other personal
information from users.
IP Spoofing
Attackers may use IP (Internet Protocol) spoofing to disguise a computer IP address,
thereby hiding the identity of the sender or impersonating another computer system.
One purpose of IP address spoofing is to gain access to networks that authenticate users based on IP addresses.
ARP Spoofing
Address Resolution Protocol (ARP) is a protocol that resolves IP addresses to Media
Access Control (MAC) addresses for transmitting data. ARP spoofing is used to link an
attacker’s MAC to a legitimate network IP address so the attacker can receive data
meant for the owner associated with that IP address. ARP spoofing is commonly used to
steal or modify data but can also be used in denial-of-service and man-in-the-middle
attacks or in session hijacking.
A. Phishing
1. Keep Informed About Phishing Techniques – New phishing scams are being
developed all the time. Without staying on top of these new phishing techniques, you
could inadvertently fall prey to one. Keep your eyes peeled for news about new phishing
scams. By finding out about them as early as possible, you will be at much lower risk of
getting snared by one. For IT administrators, ongoing security awareness training and simulated phishing for all users is highly recommended to keep security top of mind throughout the organization.
2. Think Before You Click! – It’s fine to click on links when you’re on trusted sites.
Clicking on links that appear in random emails and instant messages, however, isn’t such
a smart move. Hover over links that you are unsure of before clicking on them. Do they
lead where they are supposed to lead? A phishing email may claim to be from a
legitimate company and when you click the link to the website, it may look exactly like
the real website. The email may ask you to fill in information, but it may not address you by name. Most phishing emails start with “Dear Customer,” so you should
be alert when you come across these emails. When in doubt, go directly to the source
rather than clicking a potentially dangerous link.
3. Install an Anti-Phishing Toolbar – Most popular Internet browsers can be customized
with anti-phishing toolbars. Such toolbars run quick checks on the sites that you are
visiting and compare them to lists of known phishing sites. If you stumble upon a
malicious site, the toolbar will alert you about it. This is just one more layer of
protection against phishing scams, and it is completely free.
4. Verify a Site’s Security – It’s natural to be a little wary about supplying sensitive
financial information online. As long as you are on a secure website, however, you
shouldn’t run into any trouble. Before submitting any information, make sure the site’s URL begins with “https” and that a closed lock icon appears near the address bar.
Check for the site’s security certificate as well. If you get a message stating a certain
website may contain malicious files, do not open the website. Never download files
from suspicious emails or websites. Even search engines may show certain links which
may lead users to a phishing webpage which offers low cost products. If the user makes
purchases at such a website, the credit card details will be accessed by cybercriminals.
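Beyond checking for the lock icon, a site’s certificate can also be inspected programmatically. In this Python sketch (hostname illustrative), the handshake itself raises ssl.SSLCertVerificationError if the certificate does not validate:

```python
import socket
import ssl

def certificate_summary(host: str, port: int = 443) -> dict:
    # Validation happens during the handshake, before any data is exchanged.
    context = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    return {
        "subject": cert.get("subject"),
        "issuer": cert.get("issuer"),
        "notAfter": cert.get("notAfter"),  # expiry date string
    }

print(certificate_summary("example.com"))
```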
5. Check Your Online Accounts Regularly – If you don’t visit an online account for a
while, someone could be having a field day with it. Even if you don’t technically need to,
check in with each of your online accounts on a regular basis. Get into the habit of
changing your passwords regularly too. To prevent bank phishing and credit card
phishing scams, you should personally check your statements regularly. Get monthly
statements for your financial accounts and check each and every entry carefully to
ensure no fraudulent transactions have been made without your knowledge.
6. Keep Your Browser Up to Date – Security patches are released for popular browsers
all the time. They are released in response to the security loopholes that phishers and
other hackers inevitably discover and exploit. If you typically ignore messages about
updating your browsers, stop. The minute an update is available, download and install
it.
7. Use Firewalls – High-quality firewalls act as buffers between you, your computer and
outside intruders. You should use two different kinds: a desktop firewall and a network
firewall. The first option is a type of software, and the second option is a type of
hardware. When used together, they drastically reduce the odds of hackers and
phishers infiltrating your computer or your network.
8. Be Wary of Pop-Ups – Pop-up windows often masquerade as legitimate components
of a website. All too often, though, they are phishing attempts. Many popular browsers
allow you to block pop-ups; you can allow them on a case-by-case basis. If one manages
to slip through the cracks, don’t click on the “cancel” button; such buttons often lead to
phishing sites. Instead, click the small “x” in the upper corner of the window.
9. Never Give Out Personal Information – As a general rule, you should never share
personal or financially sensitive information over the Internet. This rule spans all the
way back to the days of America Online, when users had to be warned constantly due to
the success of early phishing scams. When in doubt, go visit the main website of the
company in question, get their number and give them a call. Most of the phishing emails
will direct you to pages where entries for financial or personal information are required.
An Internet user should never make confidential entries through the links provided in
the emails. Never send an email with sensitive information to anyone. Make it a habit to
check the address of the website. A secure website always starts with “https”.
10. Use Antivirus Software – There are plenty of reasons to use antivirus software.
Special signatures that are included with antivirus software guard against known
technology workarounds and loopholes. Just be sure to keep your software up to date.
New definitions are added all the time because new scams are also being dreamed up
all the time. Anti-spyware and firewall settings should be used to prevent phishing
attacks and users should update the programs regularly. Firewall protection prevents
access to malicious files by blocking the attacks. Antivirus software scans every file
which comes through the Internet to your computer. It helps to prevent damage to your
system.
Strong passwords are usually the first defense against password attacks. The latest NIST
guidelines recommend easy-to-remember, hard-to-guess passwords. A good mix of upper- and lowercase characters, numbers, and special characters can help. Better still, avoid common words and common phrases, and definitely avoid site-specific words (for instance, including the name of the app you’re logging into in the password). NIST also recommends checking passwords against a dictionary of known poor passwords, as sketched below.
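A minimal sketch of that screening step, in the spirit of NIST SP 800-63B; the minimum length and the tiny banned list are illustrative, and a real deployment would load a large breached-password dictionary:

```python
def password_is_acceptable(password: str, banned: set, min_len: int = 12) -> bool:
    # Enforce a minimum length and reject known-poor or breached values.
    if len(password) < min_len:
        return False
    if password.lower() in banned:
        return False
    return True

# Tiny illustrative sample of a banned-password dictionary.
BANNED = {"password", "123456", "qwerty", "letmein"}

print(password_is_acceptable("correct horse battery staple", BANNED))  # True
print(password_is_acceptable("qwerty", BANNED))                        # False
```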
Employee education is also important. One of the best defenses against social
engineering tactics is teaching users the techniques hackers use and how to recognize
them.
Strong passwords and education really aren’t enough these days, though. Computing
power allows cyber criminals to run sophisticated programs to obtain or try massive
numbers of credentials. That’s why NIST also recommends not relying on passwords
alone. Specifically, companies should adopt tools like single sign-on (SSO) and multi-
factor authentication (MFA), also known as two-factor authentication.
SSO helps eliminate passwords by letting employees log in to all their apps and sites with just one set of credentials. Users need only remember one strong password. MFA
requires an additional piece of information when the user logs in, such as a pin
generated by an application like OneLogin Protect or fingerprint authentication. This
additional piece of information makes it far more difficult for cyber criminals to
impersonate a user.
C. 7 Best Practices for Preventing DDoS attacks
1. Develop a Denial of Service Response Plan.
Develop a DDoS prevention plan based on a thorough security assessment. Unlike smaller companies, larger businesses may require complex infrastructure and may need to involve multiple teams in DDoS planning.
When DDoS hits, there is no time to think about the best steps to take. They need to be defined in advance to enable prompt reactions and minimize impact.
Developing an incident response plan is the critical first step toward a comprehensive defense strategy. Depending on the infrastructure, a DDoS response plan can get quite
exhaustive. The first step you take when a malicious attack happens can define how it
will end. Make sure your data center is prepared, and your team is aware of their
responsibilities. That way, you can minimize the impact on your business and save
yourself months of recovery.
The key elements remain the same for any company, and they include:
Systems checklist. Develop a full list of the assets and tools you should implement to ensure that advanced threat identification, assessment, and filtering, as well as security-enhanced hardware- and software-level protection, are in place.
Form a response team. Define responsibilities for key team members to ensure
organized reaction to the attack as it happens.
Define notification and escalation procedures. Make sure your team members
know exactly whom to contact in case of the attack.
Include the list of internal and external contacts that should be informed about
the attack. You should also develop communication strategies with your customers,
cloud service provider, and any security vendors.
2. Secure Your Network Infrastructure.
This includes advanced intrusion prevention and threat management systems, which
combine firewalls, VPN, anti-spam, content filtering, load balancing, and other layers of
DDoS defense techniques. Together they enable constant and consistent network
protection to prevent a DDoS attack from happening. This includes everything from identifying possible traffic inconsistencies with the highest level of precision to blocking the attack.
Most of the standard network equipment comes with limited DDoS mitigation options,
so you may want to outsource some of the additional services. With cloud-based
solutions, you can access advanced mitigation and protection resources on a pay-per-
use basis. This is an excellent option for small and medium-sized businesses that may
want to keep their security budgets within projected limits.
In addition to this, you should also make sure your systems are up-to-date. Outdated
systems are usually the ones with most loopholes. Denial of Service attackers find holes.
By regularly patching your infrastructure and installing new software versions, you can
close more doors to the attackers.
Given the complexity of DDoS attacks, there’s hardly a way to defend against them
without appropriate systems to identify anomalies in traffic and provide instant
response. Backed by secure infrastructure and a battle-plan, such systems can minimize
the threat. More than that, they can bring the needed peace of mind and confidence to everyone from the system admin to the CEO.
Engaging in strong security practices can keep business networks from being
compromised. Secure practices include complex passwords that change on a regular
basis, anti-phishing methods, and secure firewalls that allow little outside traffic. These
measures alone will not stop DDoS, but they serve as a critical security foundation.
Moreover, the nature of the cloud means it is a diffuse resource, so cloud-based apps can absorb harmful or malicious traffic before it ever reaches its intended destination. Finally, cloud-based services are operated by software engineers whose job consists of monitoring the Web for the latest DDoS tactics.
Deciding on the right environment for data and applications will differ between
companies and industries. Hybrid environments can be convenient for achieving the
right balance between security and flexibility, especially with vendors providing tailor-
made solutions.
7. Consider DDoS-as-a-Service.
At the same time, it ensures that all the security infrastructure components meet the highest security standards and compliance requirements. The key benefit of this model is the ability to tailor the security architecture to the needs of a particular company, making high-level DDoS protection available to businesses of any size.
D. Best Practices to Prevent Man-in-the-Middle Attacks
Use a Virtual Private Network (VPN)
VPNs can be used to create a secure environment for sensitive information within a
local area network. They use key-based encryption to create a subnet for secure
communication. This way, even if an attacker happens to get on a network that is
shared, he will not be able to decipher the traffic in the VPN.
Force HTTPS
HTTPS can be used to securely communicate over HTTP using public-private key
exchange. This prevents an attacker from having any use of the data he may be
sniffing. Websites should only use HTTPS and not provide HTTP alternatives. Users
can install browser plugins to enforce always using HTTPS on requests.
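As one way to implement this server-side, here is a small sketch using Flask, a hypothetical framework choice (any web framework offers equivalent hooks). It redirects plain-HTTP requests and sets an HSTS header so browsers refuse HTTP on future visits:

```python
from flask import Flask, redirect, request

app = Flask(__name__)

@app.before_request
def force_https():
    # Send any plain-HTTP request to its HTTPS equivalent.
    if not request.is_secure:
        return redirect(request.url.replace("http://", "https://", 1), code=301)

@app.after_request
def add_hsts(response):
    # HSTS: browsers will refuse plain HTTP for this site for one year.
    response.headers["Strict-Transport-Security"] = "max-age=31536000; includeSubDomains"
    return response
```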
Malvertising is more likely to end up on ad networks with lax security standards and
poor monitoring practices. When choosing an ad network, consider only reputable and
Google-certified options. You can also review each network’s client list or website to see
if it works with any well-known companies. Vetting partners might not prevent
malvertising completely, but it can help reduce the risk.
A content security policy, or CSP, can control which domains are able to host content on
your website. It will prevent unauthorized scripts from running, which means users
won’t unknowingly download malware from your site. Google’s guide can help you
understand what a CSP is and how to implement one.
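A sketch of what setting such a policy can look like, again using Flask as a hypothetical framework; the allowed ad domain is made up, and Google’s guide covers the directive syntax in detail:

```python
from flask import Flask

app = Flask(__name__)

@app.after_request
def add_csp(response):
    # Only our own origin and one vetted (hypothetical) ad network may
    # supply scripts; injected inline scripts are blocked by the browser.
    response.headers["Content-Security-Policy"] = (
        "default-src 'self'; script-src 'self' https://ads.example-network.com"
    )
    return response
```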
Educated employees are valuable assets within your business because they can act like
human firewalls. You can curb future attacks by training your employees to identify the
signs of malvertising. You’ll also want to explain the consequences of malvertising,
which should encourage everyone to prevent the infection of company devices and
avoid phishing and ransomware attacks.
Install anti-virus software on local machines to identify and block malvertising attacks.
Then, remove browser plug-ins and make sure the operating system is updated on each
machine. It’s also a good idea to install ad-blocking software on company computers to
reduce the risk of employees clicking on malicious ads.
G. To Protect Your Computer Against Rogue Threats And Other Malware Attacks:
o Familiarize yourself with common phishing scams and attacks.
o Secure your PC with legitimate security programs — antivirus,
antispyware, firewall, etc…
o Make sure all security programs installed on your computer are up-to-date and always turned on.
o Think before you click on links on a website/email.
o Do a Google search for the product name before installing it on your
computer.
o Do not click on ads that look scary. If the product name is not in the ad, or if the ad tries to provoke fear, never click on it.
o Do not open email attachments that you were not expecting.
o Be careful while searching for security tools.
o Always download programs from their official sources.