Host-Based Security System (HBSS) and Network-Based Security System
What is operation security (OPSEC)?
OPSEC Five Step Process
Physical security and issues in Information Security?
What is access control?
types of access control:
What is Information Flow Control?
Legal and Ethical Issues in Information Security
Identification and Authentication in information security?
Risk Assessment
symmetric Encryption Algorithms(private Encryption Key):
block cipher :
stream ciphers:
Triple Data Encryption Standard (3DES):
Asymmetric Encryption Algorithms(Public Encryption Key):
Encryption Standards
Brute Force Attack :
Differential cryptanalytics (DC)
Linear cryptanalytics (LC)
Davies Attack (DA)
Hill Cipher
Encrypt BSCS using hill cipher:
What is DMZ?
What is a denial-of-service attack?
What is the Risk Rating Matrix?
how a hacker might discover information and gain access to a network: explain the anatomy:
Active attacks:
Masquerade –
Masquerade Attack
Modification of messages –
Repudiation –
Replay –
Denial of Service –
Passive attacks:
The release of message content –
Traffic analysis –
Confidentiality
– means information is not disclosed to unauthorized individuals, entities
and processes. For example, suppose I have a password for my Gmail account
and someone saw it while I was logging in to Gmail. In that case
my password has been compromised and confidentiality has been
breached.
Integrity
– means maintaining accuracy and completeness of data. This means data
cannot be edited in an unauthorized way. For example, if an employee
leaves an organisation, the data for that employee in all
departments, such as accounts, should be updated to reflect the status JOB LEFT
so that the data is complete and accurate. In addition, only an authorized
person should be allowed to edit employee data.
Availability
– means information must be available when needed. For example, if one
needs to access information about a particular employee to check whether
the employee has exceeded the number of leaves, that requires
collaboration from different organizational teams like network operations,
development operations, incident response and policy/change
management.
A denial of service attack is one of the factors that can hamper the availability
of information.
Apart from this there is one more principle that governs information security programs.
This is non-repudiation.
Non repudiation
– means one party cannot deny receiving a message or a transaction, nor
can the other party deny sending a message or a transaction. For example,
in cryptography it is sufficient to show that the message matches the digital
signature signed with the sender’s private key, that the sender could have
sent the message, and that nobody else could have altered it in transit. Data
integrity and authenticity are prerequisites for non-repudiation.
Authenticity
– means verifying that users are who they say they are and that each input
arriving at the destination is from a trusted source. This principle, if followed,
guarantees a valid and genuine message received from a trusted source
through a valid transmission. For example, taking the example above, the sender
sends the message along with a digital signature which was generated using
the hash value of the message and the private key. At the receiver side this
digital signature is decrypted using the public key, generating a hash value,
and the message is hashed again to generate a second hash value. If the two values
match, then it is known as a valid transmission with an authentic, or we say
genuine, message received at the recipient side.
Accountability
– means that it should be possible to trace actions of an entity uniquely to
that entity. For example, as we discussed in the Integrity section, not every
employee should be allowed to make changes to other employees' data. For
this there is a separate department in an organization that is responsible for
making such changes. When they receive a request for a change, that
letter must be signed by a higher authority, for example the Director of the college, and
the person who is allotted that change will be able to make it after verifying
his biometrics; thus a timestamp with the details of the user making the change gets
recorded. If a change is handled like this, it will be possible
to trace the actions uniquely to an entity.
Advantages of information security:
1. Improved security: By identifying and classifying sensitive information,
organizations can better protect their most critical assets from unauthorized
access or disclosure.
2. Compliance: Many regulatory and industry standards, such as HIPAA and
PCI-DSS, require organizations to implement information classification and
data protection measures.
3. Improved efficiency: By clearly identifying and labeling information,
employees can quickly and easily determine the appropriate handling and
access requirements for different types of data.
4. Better risk management: By understanding the potential impact of a data
breach or unauthorized disclosure, organizations can prioritize resources
and develop more effective incident response plans.
5. Cost savings: By implementing appropriate security controls for different
types of information, organizations can avoid unnecessary spending on
security measures that may not be needed for less sensitive data.
6. Improved incident response: By having a clear understanding of the
criticality of specific data, organizations can respond to security incidents in
a more effective and efficient manner.
Disadvantages of information security:
1. Complexity: Developing and maintaining an information classification
system can be complex and time-consuming, especially for large
organizations with a diverse range of data types.
2. Cost: Implementing and maintaining an information classification system
can be costly, especially if it requires new hardware or software.
3. Resistance to change: Some employees may resist the implementation of
an information classification system, especially if it requires them to change
their usual work habits.
4. Inaccurate classification: Information classification is often done by
humans, so it is possible that some information may be misclassified, which
can lead to inadequate protection or unnecessary restrictions on access.
5. Lack of flexibility: Information classification systems can be rigid and
inflexible, making it difficult to adapt to changing business needs or new
types of data.
6. False sense of security: Implementing an information classification system
may give organizations a false sense of security, leading them to overlook
other important security controls and best practices.
7. Maintenance: Information classification should be reviewed and updated
frequently; if not, it can become outdated and ineffective.
Network security is now of the utmost importance in today's environment, when practically
everything is connected via networks. Cyberattacks are a persistent threat to networks,
and the results can be disastrous. Network breaches can seriously harm both people and
organizations through identity theft and financial losses. To ensure network security, there
are a number of protective techniques that can be applied.
1. Firewalls: The first line of protection against network threats is the firewall. A
firewall is a network security device that monitors and restricts network traffic
based on predefined security rules. Firewalls, which may be deployed as hardware
or software, can prevent illegal network access.
2. Virtual Private Networks (VPNs): VPNs enable secure network access through
the internet. VPNs utilize encryption to safeguard data sent over the network,
making it more difficult for attackers to intercept and steal data. VPNs are
especially beneficial for remote workers who need to access the network from
locations other than the office.
3. Intrusion Detection and Prevention Systems (IDPS): IDPSs are intended to
identify and prevent network attacks. To identify and block suspicious network
traffic, they employ a variety of approaches such as signature-based detection,
anomaly detection, and heuristic analysis. IDPSs can also be programmed to
respond to attacks automatically by restricting traffic or shutting down vulnerable
services.
4. Antivirus and Anti-Malware Software: Antivirus and anti-malware software are
critical for keeping viruses and other harmful software out of the network. These
applications monitor incoming and outgoing data for known risks and prevent them
from causing harm. Antivirus software should be updated on a regular basis to
guarantee that it can detect the most recent threats.
5. Patch Management: Keeping all software up to date is critical for network security.
Many software updates include security patches that fix vulnerabilities that can be
exploited by attackers. Regularly applying these patches can prevent attackers
from exploiting known vulnerabilities.
6. Access Controls: Access restrictions are used to limit network access to only
authorized users. Passwords, biometric authentication, and other authentication
techniques can be used to accomplish this. Access restrictions should be
evaluated and modified on a regular basis to ensure that only authorized users
have access to the network.
7. Employee Education and Training: Staff education and training are critical to
network security. Workers should be educated on the necessity of network security
and the dangers of network breaches. They should also be taught how to detect
and respond to suspicious network behavior.
Finally, both people and companies must defend their networks against cyberattacks.
Applying the aforementioned security measures can help to guarantee network security.
It's vital to remember that network security is a never-ending process that requires
ongoing monitoring and tweaking to keep up with new threats.
Network security is crucial in today's interconnected world since networks connect almost
everything. The process of preventing unauthorized access, usage, or destruction of
computer networks is known as network security. It comprises taking proactive steps to
avert cyberattacks and other illicit conduct that might affect the confidentiality, integrity,
and availability of network resources.
Security Kernel?
Data encryption converts data into a different form (code) that can only be accessed by
people who have a secret key (formally known as a decryption key) or password. Data
that has not been encrypted is referred to as plaintext, and data that has been encrypted
is referred to as ciphertext. Encryption is one of the most widely used and successful
data protection technologies in today’s corporate world.
Encryption is a critical tool for maintaining data integrity, and its importance cannot be
overstated. Almost everything on the internet has been encrypted at some point.
Data encryption rules:
10. Consider Data Classification: Classify data based on sensitivity, and apply
encryption selectively. Not all data may require the same level of encryption, so
tailor encryption practices to the specific needs of different types of data.
11. Regularly Audit and Monitor: Conduct regular audits and monitoring to ensure
the effectiveness of encryption implementations. Detect and respond to any
anomalies or security incidents promptly.
12. Compliance with Regulations: Ensure that encryption practices align with
relevant legal and regulatory requirements. Different industries and regions may
have specific data protection standards that organizations need to follow.
1. Symmetric Encryption
2. Asymmetric Encryption
Encryption is frequently used in one of two ways i.e. with a symmetric key or with an
asymmetric key.
Symmetric Key Encryption:
Symmetric Encryption
There are a few strategies used in cryptography algorithms. For encryption and
decryption processes, some algorithms employ a single key. In such operations, the
key must be kept secret, since any system or person who knows the key is
fully able to decode the message and read it. This approach is known as
“symmetric encryption” in the field of network encryption.
Asymmetric Key Encryption:
Asymmetric Encryption
Some cryptography methods employ one key for data encryption and another key for
data decryption. As a result, anyone who has access to such a communication over a
public channel will be unable to decode or read it without the decryption key. This
type of cryptography, known as “public-key” encryption, is used in the majority of
internet security protocols. The term “asymmetric encryption” is used to describe
this type of encryption.
1. If the password or key is lost, the user will be unable to open the encrypted
file. Using simpler keys in data encryption, on the other hand, makes the data
insecure, and anybody may access it at any time.
2. Data encryption is a valuable data security approach that necessitates a lot
of resources, such as data processing, time consumption, and the use of
numerous encryption and decryption algorithms. As a result, it is a somewhat
costly approach.
3. Data protection solutions might be difficult to utilize when the user layers them
for contemporary systems and applications. This might have a negative
influence on the device’s normal operations.
4. If a company fails to realize any of the restrictions imposed by encryption
techniques, it is possible to set arbitrary expectations and requirements that
might undermine data encryption protection.
What is Hashing
Hashing is the procedure of translating a given key into a code. A hash function can be
used to substitute the data with a newly generated hash code. Hash algorithms are
generally used to offer a digital fingerprint of a file’s contents, often used to show that
the file has not been changed by an intruder or virus. Hash functions are also employed
by some operating systems to encrypt passwords. Hash functions support a measure of
the integrity of a file.
Hashing makes use of algorithms that convert blocks of information from a file into a much
shorter value or key of a constant length that represents those strings. The resulting hash
value is a sort of concentrated summary of each string inside a given file, and must
change even when an individual byte of data in that file is changed (avalanche
effect).
This gives hashing a major advantage in terms of data compression. While hashing
is not compression, it can work very much like file compression in that it takes a larger
data set and shrinks it into a more manageable form.
A good hash function for security goals should be a unidirectional process that uses a
one-way hashing algorithm. Otherwise, hackers could simply reverse engineer the hash to
transform it back to the original data, defeating the goals of the encryption in the first
place.
To increase the uniqueness of the outputs, random information can be added
to the input of a hash function. This technique is called “salting” and guarantees unique
output even in the case of identical inputs.
An attacker who can do either of these things might, for instance, use them to
substitute an unauthorized message for an authorized one. Conceptually, it must not even
be feasible to discover two messages whose digests are substantially the same; nor would
one want an attacker to be able to learn anything useful about a message given
only its digest. The attacker learns at least one piece of information, the digest itself,
which for instance gives the attacker the ability to identify the same message should
it appear again.
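The avalanche effect and salting described above, as an added example that is not part of the original notes. It measures how many SHA-256 output bits change when one input bit is flipped, and stores a password with a random salt using PBKDF2-HMAC-SHA256 (a salted, iterated construction chosen for this illustration; the notes name only "salting"). The function names, iteration count and sample strings are assumptions made for the example.

```python
import hashlib
import hmac
import os
from typing import Optional, Tuple

ITERATIONS = 100_000   # example work factor, an assumption for this illustration
SALT_BYTES = 16


def changed_bits(data: bytes) -> int:
    """Flip one bit of the input and count how many SHA-256 output bits change."""
    if not data:
        raise ValueError("need at least one byte to flip")
    flipped = bytes([data[0] ^ 0x01]) + data[1:]
    before = hashlib.sha256(data).digest()
    after = hashlib.sha256(flipped).digest()
    return sum(bin(x ^ y).count("1") for x, y in zip(before, after))


def hash_password(password: str, salt: Optional[bytes] = None) -> Tuple[bytes, bytes]:
    """Return (salt, digest). A fresh random salt is drawn unless one is supplied."""
    if salt is None:
        salt = os.urandom(SALT_BYTES)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, ITERATIONS)
    return salt, digest


def verify_password(password: str, salt: bytes, expected: bytes) -> bool:
    """Recompute the digest with the stored salt and compare in constant time."""
    _, candidate = hash_password(password, salt)
    return hmac.compare_digest(candidate, expected)


if __name__ == "__main__":
    sample = b"constructed sample file contents"
    print("output bits changed by a one-bit input change:",
          changed_bits(sample), "of 256")

    # Unsalted: identical inputs always give identical digests, so anyone who
    # sees the digest twice knows the same input appeared twice.
    print("unsalted digests equal:",
          hashlib.sha256(b"hunter2").digest() == hashlib.sha256(b"hunter2").digest())

    # Salted: the same password stored for two users yields different digests.
    salt_a, digest_a = hash_password("hunter2")
    salt_b, digest_b = hash_password("hunter2")
    print("salted digests equal:", digest_a == digest_b)
    print("correct password verifies:", verify_password("hunter2", salt_a, digest_a))
    print("wrong password verifies:", verify_password("hunter3", salt_a, digest_a))
```

The salt is stored next to the digest; it does not need to be secret, only different for every stored password.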
Digital signatures can provide evidence of origin, identity and status of electronic
documents, transactions or digital messages. Signers can also use them to
acknowledge informed consent. In many countries, including the U.S., digital signatures
are considered legally binding in the same way as traditional handwritten document
signatures.
Digital signatures work through public key cryptography's two mutually authenticating
cryptographic keys. For encryption and decryption, the person who creates the digital
signature uses a private key to encrypt signature-related data. The only way to decrypt
that data is with the signer's public key.
If the recipient can't open the document with the signer's public key, that indicates
there's a problem with the document or the signature. This is how digital signatures are
authenticated.
Digital certificates, also called public key certificates, are used to verify that the public
key belongs to the issuer. Digital certificates contain the public key, information about its
owner, expiration dates and the digital signature of the certificate's issuer. Digital
certificates are issued by trusted third-party certificate authorities (CAs), such as
DocuSign or GlobalSign, for example. The party sending the document and the person
signing it must agree to use a given CA.
Digital signature technology requires all parties trust that the person who creates the
signature image has kept the private key secret. If someone else has access to the
private signing key, that party could create fraudulent digital signatures in the name of
the private key holder.
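The sign-and-verify flow described in the Authenticity section and in the digital signature paragraphs above, as an added example that is not part of the original notes. It uses textbook RSA without padding and a toy key built from known Mersenne primes, so it shows the hash comparison only and is not a usable signature scheme; the key values, names and sample messages are constructed for the example.

```python
import hashlib
from typing import NamedTuple


class KeyPair(NamedTuple):
    n: int   # modulus, part of both the public and the private key
    e: int   # public exponent
    d: int   # private exponent, known only to the signer


def make_key(p: int, q: int, e: int = 65537) -> KeyPair:
    """Build a textbook RSA key pair from two primes (toy sizes, no padding)."""
    phi = (p - 1) * (q - 1)
    return KeyPair(n=p * q, e=e, d=pow(e, -1, phi))


def digest_int(message: bytes, n: int) -> int:
    """SHA-256 of the message as an integer, reduced to fit the toy modulus."""
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % n


def sign(message: bytes, key: KeyPair) -> int:
    """Sender side: hash the message, then transform the hash with the private key."""
    return pow(digest_int(message, key.n), key.d, key.n)


def verify(message: bytes, signature: int, n: int, e: int) -> bool:
    """Receiver side: recover the hash with the public key and compare it with
    a fresh hash of the received message."""
    recovered = pow(signature, e, n)
    return recovered == digest_int(message, n)


# Constructed toy keys from known Mersenne primes. Real keys use large random
# primes and a padding scheme; these exist only to keep the example short.
SENDER = make_key(2**127 - 1, 2**89 - 1)
OTHER = make_key(2**107 - 1, 2**61 - 1)

if __name__ == "__main__":
    message = b"pay 100 to account 42"          # constructed sample message
    signature = sign(message, SENDER)

    print("genuine message:",
          verify(message, signature, SENDER.n, SENDER.e))
    print("message altered in transit:",
          verify(b"pay 900 to account 42", signature, SENDER.n, SENDER.e))
    print("signed with a different private key:",
          verify(message, sign(message, OTHER), SENDER.n, SENDER.e))
```

Only the holder of `d` can produce a value that verifies under the matching public key, which is why the secrecy of the private key carries the whole scheme.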
What are the benefits of digital signatures?
Digital signatures offer the following benefits:
Security audits are often used to determine compliance with regulations such as
the Health Insurance Portability and Accountability Act, the Sarbanes-Oxley Act and the
California Security Breach Information Act that specify how organizations must deal with
information.
These audits are one of three main types of security diagnostics, along with vulnerability
assessments and penetration testing. Security audits measure an information system's
performance against a list of criteria. A vulnerability assessment is a comprehensive
study of an information system, seeking potential security weaknesses. Penetration
testing is a covert approach in which a security expert tests to see if a system can
withstand a specific attack. Each approach has inherent strengths and using two or
more in conjunction may be the most effective approach.
abnormal behavior is observed, the alert can be sent to the administrator. An
example of a NIDS is installing it on the subnet where firewalls are located
in order to see if someone is trying to crack the firewall.
• Protocol-based Intrusion Detection System (PIDS): A protocol-based
intrusion detection system (PIDS) comprises a system or agent that
consistently resides at the front end of a server, controlling and interpreting
the protocol between a user/device and the server. It tries to secure the
web server by regularly monitoring the HTTPS protocol stream and
accepting the related HTTP protocol. Because HTTPS is unencrypted only just
before it enters the web presentation layer, this system needs to
reside at that interface in order to inspect the HTTPS traffic.
• Application Protocol-based Intrusion Detection System (APIDS): An
application Protocol-based Intrusion Detection System (APIDS) is a system
or agent that generally resides within a group of servers. It identifies the
intrusions by monitoring and interpreting the communication on application-
specific protocols. For example, this would monitor the SQL protocol
explicitly to the middleware as it transacts with the database in the web
server.
• Hybrid Intrusion Detection System: Hybrid intrusion detection system is
made by the combination of two or more approaches to the intrusion
detection system. In the hybrid intrusion detection system, the host agent or
system data is combined with network information to develop a complete
view of the network system. The hybrid intrusion detection system is more
effective in comparison to the other intrusion detection system. Prelude is an
example of Hybrid IDS.
Database Security
Security of databases refers to the array of controls, tools, and procedures designed to
ensure and safeguard confidentiality, integrity, and accessibility. This tutorial will
concentrate on confidentiality because it is the component that is most at risk in data security
breaches.
Security for databases must cover and safeguard the following aspects:
Security of databases is a complicated and challenging task that requires all aspects of
security practices and technologies. This is inherently at odds with the accessibility of
databases. The more usable and accessible the database is, the more susceptible it
is to security threats. The more it is hardened against attacks and threats, the more
difficult it is to access and utilize.
This highlights the problem that this system is less capable and prone to being compromised by
a cyber-attack. Moreover, it needs additional computing power to work correctly.
• The signatures are usually outdated, not advanced, and fail to detect zero-day
attacks
• Packet inspection is blind to encrypted traffic. Besides, it is
difficult to upgrade
• Network monitoring cannot see any host activity or any new processes carried
out by the customer
• Removable media cannot be detected
• They are not capable of handling switched networks
• Network monitoring fails in the department of “log collection”
Network security is a system made solely to target all the traffic passing from the
Internet to the LAN and vice versa to create a secure infrastructure. It filters all the users
and is ideal for defending the underlying networking structure from illegal
access, misuse, or theft. For enhanced security of devices, applications,
and customers, it guards your data against intrusions and cyber threats.
OPSEC gets information technology (IT) and security managers to view their operations
and systems as potential attackers would. OPSEC includes analytical activities and
processes, such as social media monitoring, behavior monitoring and security best
practices.
OPSEC was developed as a methodology during the Vietnam War when U.S. Navy
Admiral Ulysses S. Grant Sharp, commander in chief of the U.S. Pacific Command,
established the Purple Dragon team to find out how the enemy obtained information on
military operations before those operations took place.
OPSEC Five Step Process:
OPSEC works in five steps in order to find the risks, determine which information
requires protection, and identify the measures needed to protect it:
1. What information needs to be protected –
This includes the sensitive information that needs to be protected.
2. Who is my enemy –
The second step is finding the enemy. This enemy can be hackers or
other organizations or firms that want to steal your information.
3. System Vulnerability –
In this step, the focus is on finding the weaknesses we might have.
4. Threat Level –
Here the threat level on the system is determined: low, medium or
high.
5. Elimination of threat –
The last step includes the various methods that can be employed to remove
these threats.
Physical security and issues in Information Security?
A physical threat is a potential cause of an incident that can result in loss or physical harm
to the computer systems. Physical security is defined as the security of personnel,
hardware, programs, networks, and data from physical situations and events that can
cause severe losses or harm to an enterprise, department, or organization. This
includes protection from fire, natural disasters, robbery, theft, destruction, and terrorism.
A physical threat to a computer system can result from loss of the entire computer
system, damage to hardware, damage to the computer software, theft of the computer
system, vandalism, or natural disaster including flood, fire, war, earthquakes etc. Acts of
terrorism, such as the attack on the World Trade Center, are also among the major threats to
computers that can be defined as physical threats.
Natural hazards, civil unrest and terrorism all fall under disaster. A disaster is
defined as a sudden misfortune that is catastrophic to an undertaking. This means
that there is little time to react at the time of the misfortune.
Unauthorized Access − One of the most common security risks regarding computerized
information systems is the hazard of unauthorized access to confidential information. The
main concern comes from unwanted intruders, or hackers, who use current
technology and their skills to break into supposedly secure computers or to exhaust them.
A person who gains access to a data system for malicious reasons is often termed a cracker
instead of a hacker.
itself, therefore continuing to spread. Some viruses do little but duplicate; others can
cause severe harm or adversely influence the programs and implementation of the system.
Accidents − Accidental misuse or damage will be influenced over time by the attitude
and disposition of the staff in addition to the environment. Human errors have a higher
impact on information system security than do manmade threats caused by purposeful
attacks. But most accidents that are serious threats to the security of information systems
can be diminished.
Policies may address virtually any “who, what, where, when, how, or why” parameters,
including who can access resources, when, from where, using what devices or
software, what the user can and cannot do once access is granted, for how long, and
with what auditing or monitoring. Policies may also address more technical interactions
or requirements such as protocols to accept, ports to use, or connection timeouts.
Why Is Policy Enforcement Important?
Organizations create policies to control, manage, and sometimes monetize their assets
and services. Network policy enforcement helps to automate data and asset security
measures, including BYOD requirements. It can enable a service provider, for instance,
to create differential rates for specific services or times of use. It also can be used to
help enforce enterprise ethical standards (such as use of company equipment and time
for personal ends) and to better understand and manage network use.
How Does Policy Enforcement Work?
Policy enforcement is typically handled by software or hardware serving as a gateway,
proxy, firewall, or other centralized point of control in the network. Policies must first be
defined, along with one or more actions that will be taken if a violation occurs. Once
policies are defined, the software or hardware becomes a policy enforcement point in
the network—a nexus where policy enforcement occurs in three parts:
• A resulting action, ranging from serving the request to disconnection or
denylisting.
For instance, a policy may identify known malicious IP addresses and specify that any
and all traffic from those addresses be rejected. More complex policies may allow a
specific user to connect to some applications but not others, or to perform some actions
once connected that will incur a higher fee than others (such as using a streaming
service on a high-resolution device). In this way, authentication, authorization, and
accounting (AAA) systems are a form of policy enforcement.
Network policy enforcement may require compliance with more sophisticated and
granular parameters such as the presence of unexpired certificates, the type or version
of a device or browser being used to connect, or an absence of patterns of behavior
associated with attacks.
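A small policy enforcement point built from the cases mentioned above (known malicious source addresses, unexpired certificates, per-user application access, and actions ranging from serving the request to denylisting), as an added example that is not F5 code or configuration. The class names, the three-strike escalation rule, the users and the documentation-range addresses are all assumptions constructed for the illustration.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from ipaddress import ip_address, ip_network
from typing import Callable, Dict, List, Set, Tuple


@dataclass(frozen=True)
class Request:
    src_ip: str
    user: str
    app: str
    cert_not_after: datetime   # expiry of the client certificate presented


@dataclass(frozen=True)
class Policy:
    name: str
    violated: Callable[[Request, datetime], bool]


# Constructed sample policy data (documentation address range, made-up users).
KNOWN_BAD = [ip_network("203.0.113.0/24")]
ALLOWED_APPS: Dict[str, Set[str]] = {"alice": {"mail", "wiki"}, "bob": {"wiki"}}
STRIKE_LIMIT = 3               # assumed threshold: violations before denylisting

POLICIES: List[Policy] = [
    Policy("known-malicious-source",
           lambda r, now: any(ip_address(r.src_ip) in net for net in KNOWN_BAD)),
    Policy("expired-certificate",
           lambda r, now: r.cert_not_after <= now),
    Policy("application-not-permitted",
           lambda r, now: r.app not in ALLOWED_APPS.get(r.user, set())),
]


class EnforcementPoint:
    """Evaluates policies in order; the first violation decides the action."""

    def __init__(self, policies: List[Policy]) -> None:
        self.policies = policies
        self.strikes: Dict[str, int] = {}
        self.denylist: Set[str] = set()
        self.audit: List[Tuple[str, str, str, str]] = []

    def handle(self, req: Request, now: datetime) -> Tuple[str, str]:
        if req.src_ip in self.denylist:
            decision = ("reject", "source-denylisted")
        else:
            hit = next((p.name for p in self.policies if p.violated(req, now)), None)
            if hit is None:
                decision = ("serve", "no-violation")
            else:
                count = self.strikes.get(req.src_ip, 0) + 1
                self.strikes[req.src_ip] = count
                if count >= STRIKE_LIMIT:
                    # Repeated violations escalate from rejection to denylisting.
                    self.denylist.add(req.src_ip)
                    decision = ("denylist", hit)
                else:
                    decision = ("reject", hit)
        # Accounting: every decision is recorded, whether served or not.
        self.audit.append((req.src_ip, req.user, req.app, decision[0]))
        return decision


if __name__ == "__main__":
    now = datetime(2024, 1, 1, tzinfo=timezone.utc)
    valid = datetime(2025, 1, 1, tzinfo=timezone.utc)
    pep = EnforcementPoint(POLICIES)
    for req in [Request("198.51.100.7", "alice", "mail", valid),
                Request("203.0.113.9", "alice", "mail", valid),
                Request("198.51.100.8", "bob", "mail", valid)]:
        print(req.src_ip, req.user, req.app, "->", pep.handle(req, now))
```

The ordering of `POLICIES` matters: the cheapest and most decisive checks come first, and the audit list is what an accounting (the third "A" of AAA) function would consume.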
Multiple F5 products can serve as gateways or full proxies that enable granular control
over policy creation and enforcement from a single, centralized point of control. In
particular, BIG-IP Policy Enforcement Manager (PEM) provides sophisticated controls
for service providers looking to monetize services and improve network performance.
For enterprises, BIG-IP Access Policy Manager (APM) delivers context-based
management of access to applications with a graphical user interface called Visual
Policy Editor (VPE) that makes it easy to create, edit, and manage identity aware,
context-based policies.
What is access control?
Access control is a fundamental component of data security that dictates who’s allowed
to access and use company information and resources. Through authentication and
authorization, access control policies make sure users are who they say they are and
that they have appropriate access to company data. Access control can also be applied
to limit physical access to campuses, buildings, rooms, and datacenters.
There are four main types of access control. Organizations typically choose the method
that makes the most sense based on their unique security and compliance
requirements. The four access control models are:
1. Discretionary access control (DAC): In this method, the owner or
administrator of the protected system, data, or resource sets the policies for who
is allowed access.
2. Mandatory access control (MAC): In this nondiscretionary model, people are
granted access based on an information clearance. A central authority regulates
access rights based on different security levels. This model is common in
government and military environments.
3. Role-based access control (RBAC): RBAC grants access based on defined
business functions rather than the individual user’s identity. The goal is to provide
users with access only to data that’s been deemed necessary for their roles
within the organization. This widely used method is based on a complex
combination of role assignments, authorizations, and permissions.
4. Attribute-based access control (ABAC): In this dynamic method, access is
based on a set of attributes and environmental conditions, such as time of day
and location, assigned to both users and resources.
Information Flow Control (IFC) is a new idea in which a system may track data movement
from one location to another and stop it if it isn't wanted. It's a security technique that
keeps track of information flow between a system and the rest of the world, also known
as the Internet. Users want their credentials to remain private; thus, IFC employs type
systems and enforces this through compile-time type checking.
The biggest information security issues that result in legal and ethical concerns are
mainly insider threats. Employees of organizations must be well schooled on both the
implications and the preventative measures that are involved in work ethics. By
providing the masses with the knowledge required to sustain acceptable behavior,
financial cost from both internal sources and external parties can be vastly limited.
Employee training on an organization’s security policies and how they can affect their
daily work behavior is helpful in streamlining core business values with personal and
corporate activities. One important but often neglected aspect of ethics in the working
environment is the cultural factor. With the world being a global village and people
travelling from the developing world to the West, the issues of culture and the
automated business environment have brought major concerns. This diversity in
workforce backgrounds further complicates the questions of what is ethical and what is
not. For instance, in the eyes of Western culture, most Africans and Asians use
software that has been pirated. With the Asians, their traditions of collective ownership
conflict with intellectual property laws. The African National Reporter (2011) mentioned,
“The commercial value of unlicensed software installed on personal computers in
Eastern and Southern Africa (ESA), which excludes South Africa, reached $109 million
in 2010 as 83 per cent of software deployed on PCs during the year was pirated.” A
propelling factor to this high rate of piracy is that most African countries are relaxed with
intellectual property copy restrictions. For instance, in Zambia it is commonplace to
find pirated application and system software that functions as a result of illegal
reverse engineering tweaks that bypass any registrations or subscriptions for their
activation. Zambian regulatory bodies such as the Government’s Information
Technology related Ministry, the Zambia Information and Communication Technology
Authority (ZICTA) and the Computer Society of Zambia have all failed in ensuring that
the vice of piracy is eliminated on a national level. As a result, the few people who are IT
literate have illegitimate software that is used for corporate purposes. This creates a
problem when such people find themselves working in the diaspora, where stringent
consequences for personal and corporate violations are in place.
Identification and Authentication in information security?
Authentication is any procedure by which a system can verify that someone is who they
claim to be. This generally involves a username and a password, but can involve some other
method of demonstrating identity, including a smart card, retina scan, voice recognition,
or fingerprints. Authentication is similar to showing a driver's license at the ticket
counter at the airport.
Authorization is finding out whether the person, once identified, is allowed to have the
resource. This is generally decided by finding out whether that person is a member of a
specific group, has paid admission, or has a specific level of security clearance.
Authorization is similar to checking the guest list at an exclusive party, or checking for
a ticket at the opera.
Finally, access control is a more general way of talking about controlling access to a web
resource. Access can be granted or denied based on a broad variety of criteria, including
the network address of the user, the time of day, the phase of the moon, or the browser
which the visitor is using.
Access control is similar to locking the gate at closing time, or only letting people onto the
ride who are taller than 48 inches: it is controlling entrance by some arbitrary condition
which may or may not have anything to do with the attributes of the specific visitor.
Because these three approaches are so closely associated in most real applications, it is
difficult to talk about them independently of one another. In particular, authentication and
authorization are, in most actual implementations, inextricable.
Deciding whether a user is authorized to use an IT system involves the distinct phases
of identification and authentication. Identification concerns the manner in which a user
provides a unique identity to the IT system. The identity can be a name (e.g., first or
last) or a number (e.g., account number). The identity should be unique so that the system
can distinguish between multiple users. Depending on operational requirements, one
“identity” can describe one individual, more than one individual, or one (or more) individuals
only part of the time.
Authentication is the process of associating an individual with the unique identity, that is, the
manner in which the individual establishes the validity of the claimed identity. There are three
basic authentication means by which an individual can authenticate his identity.
A Security Risk Assessment (or SRA) is an assessment that involves identifying the
risks in the company, the technology and the processes, to verify that controls are in place
to safeguard against security threats. Security risk assessments are generally required
by compliance standards, including PCI-DSS standards for payment card security.
Security Risk Assessments are performed by a security assessor who will evaluate all
elements of the company's systems to identify areas of risk. These can be as simple
as a system that allows weak passwords, or can be more complex problems, including
insecure business processes. The assessor will generally review everything from HR
policies to firewall configurations while working to identify potential risks.
For example, during the discovery phase an assessor will identify all databases
containing any sensitive data, an asset. That database is connected to the internet, a
vulnerability. To protect that asset, a control must be in place, and in this
case it would be a firewall.
A Security Risk Assessment identifies critical assets, vulnerabilities and controls in
the company to ensure that risks have been properly mitigated. A Security Risk
Assessment is important in protecting the company from security risks.
A security risk assessment provides us with a blueprint of the risks that exist in the
environment and provides vital information about how critical each issue is.
Understanding where to start when improving security allows you to maximize
your IT resources and budget, saving time and money.
Security Risk Assessments are deep-dive evaluations of the company, or of a
specific IT project or even a company department. During the assessment, the objective
is to find problems and security holes before the bad guys do.
Two main symmetric encryption methods exist: block ciphers and stream ciphers.
block cipher :
A block cipher divides data into fixed-size blocks for encryption, making it suitable for
structured data like files. Block ciphers are predictable but may have vulnerabilities if
not used correctly. AES is a well-known block cipher.
stream ciphers:
Stream ciphers encrypt data bit by bit and are ideal for real-time streams like voice or
video. They are efficient but require synchronization between the sender and receiver to
avoid data errors.
The strengths of 3DES lie in its backward compatibility with the original DES, making it
an easy upgrade for legacy systems. However, its main weakness is its relatively slow
processing speed due to the multiple encryption rounds, making it less efficient than
more modern encryption systems.
Consequently, the encryption community shifted its preference to AES in the early
2000s due to its improved security, rendering 3DES gradually obsolete in contemporary
data encryption practices.
symmetric-key block cipher, emerged as the winner and became the foundation for
AES.
AES encryption offers different security levels based on encryption key length, with 128-
bit, 192-bit, and 256-bit keys. The 128-bit key is suitable for most applications. The 192-
bit and 256-bit keys offer even higher security, making them ideal for more sensitive and
critical data protection.
Today, the AES symmetric algorithm protects various apps and systems, from securing
sensitive data communications over the internet to encrypting sensitive information in
storage. Its combination of security and efficiency has made it a cornerstone of modern
encryption methods, ensuring the confidentiality and integrity of data from e-commerce
transactions to secure sensitive data transmission.
Blowfish
Blowfish is a block cipher symmetric encryption method designed by Bruce Schneier in
1993. It gained popularity for its simplicity, speed, and security features. Blowfish is
suited for applications requiring fast encryption and decryption, such as securing data
on disk drives or network communications.
The algorithm is flexible, accommodating key sizes ranging from 32 to 448 bits. While
Blowfish has demonstrated robust security over the years, its smaller block size is a
potential vulnerability in some cases.
In today’s landscape, where more advanced encryption algorithms with larger block
sizes are available, Blowfish’s limited block size can be seen as a limitation for some
security-critical applications. As a result, other algorithms like AES have gained a
reputation for their wider adoption in high-security environments.
Twofish
Twofish also employs a well-regarded key whitening technique, making it more resistant
to specific attacks. Its security and adaptability have made it an attractive choice for
various encryption applications, especially when users need to strike a balance between
security and performance.
While Twofish is a solid encryption method with many strengths, its adoption remains
limited due to its relatively complex implementation compared to more straightforward
alternatives like AES encryption.
Format-Preserving Encryption (FPE)
Format-preserving encryption (FPE) is a technique that encrypts data while
preserving its original format, such as credit card numbers, dates, or social security
numbers. FPE is used in industries like finance and healthcare to maintain data format
integrity during encryption, making it compatible with existing systems and processes.
Unlike traditional data encryption methods that often produce ciphertext with longer or
significantly altered formats, FPE ensures that the encrypted information retains the
same data type, length, and structural characteristics.
FPE protects sensitive data like medical record numbers or birthdates, maintaining the
same format for seamless integration with electronic health records.
Now let’s talk about the different types of asymmetric encryption algorithms.
Asymmetric Encryption Algorithms (Public Encryption Key):
As you already know, in asymmetric encryption, we use two keys. The public key is for
everyone to know, while the private key is kept secret. The private key is the only key
that can decrypt messages encrypted with the public key.
RSA excels in secure data encryption and digital signatures. It’s widely adopted and
compatible. However, it requires periodic key size increases due to advancing
computing power, and key management is crucial for security.
RSA’s security relies on the difficulty of factoring large numbers, and future quantum
computers may pose a threat. Inadequate key management can also lead to breaches.
ECC offers significant advantages, particularly in resource-constrained environments.
Unlike traditional methods like RSA encryption, it provides strong security with much
shorter key lengths compared to
Since it’s more efficient for computation and bandwidth, ECC is ideal for devices with
limited processing power and memory, such as mobile phones and IoT devices.
Diffie-Hellman
The Diffie-Hellman algorithm, created by Diffie and Hellman in 1976, allows two
parties to create a shared secret over an unsecured channel. They agree on prime
numbers, compute public keys, and use them to get a shared secret for secure
communication without transmitting it over an unsecured channel.
Primary use cases for Diffie-Hellman include establishing secure channels in encrypted
communications, such as SSL/TLS, which secures data transmission on the Internet.
VPNs and messaging apps also use it.
A vulnerability in Diffie-Hellman is the man-in-the-middle attack. Mitigation strategies
include digital certificates and protocols like Internet Key Exchange (IKE) for
authentication. Long prime numbers also strengthen security.
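The exchange and the man-in-the-middle weakness described above, as an added example that is not part of the original document. The modulus is a constructed demo parameter (the prime 2^255 - 19, not a standardized Diffie-Hellman group), and a pre-shared HMAC key stands in for the certificate or IKE authentication the text mentions; all function names are hypothetical.

```python
import hashlib
import hmac
import secrets

# Constructed demo parameters: a known prime and a small generator.
# Assumption for illustration only; deployments use vetted standardized groups.
P = 2**255 - 19
G = 2


def keypair():
    """Pick a private exponent and compute the public value G^priv mod P."""
    priv = secrets.randbelow(P - 3) + 2          # private exponent in [2, P-2]
    return priv, pow(G, priv, P)


def shared_key(priv, peer_pub):
    """Derive the session key from our private exponent and the peer's public value."""
    if not 1 < peer_pub < P - 1:                 # reject degenerate values 0, 1, P-1
        raise ValueError("peer public value out of range")
    secret = pow(peer_pub, priv, P)
    return hashlib.sha256(secret.to_bytes(32, "big")).digest()


def tag(auth_key, sender, pub):
    """Authenticate a public value by binding it to the sender's identity."""
    msg = sender.encode() + b"|" + pub.to_bytes(32, "big")
    return hmac.new(auth_key, msg, hashlib.sha256).digest()


def accept(auth_key, sender, pub, received_tag):
    """Constant-time check that the public value really came from `sender`."""
    return hmac.compare_digest(tag(auth_key, sender, pub), received_tag)


def run_exchange(substituted=False):
    """Simulate one exchange; returns (alice_key, bob_key, bob_accepts).

    With substituted=True, the public value Bob receives is not the one
    Alice sent, which is what an unauthenticated channel permits.
    """
    auth_key = b"constructed-preshared-auth-key"  # constructed sample value
    a_priv, a_pub = keypair()
    b_priv, b_pub = keypair()
    a_tag = tag(auth_key, "alice", a_pub)         # sent alongside a_pub

    seen_by_bob = a_pub
    if substituted:
        _, seen_by_bob = keypair()                # value swapped in transit

    bob_accepts = accept(auth_key, "alice", seen_by_bob, a_tag)
    alice_key = shared_key(a_priv, b_pub)
    bob_key = shared_key(b_priv, seen_by_bob)
    return alice_key, bob_key, bob_accepts


if __name__ == "__main__":
    for flag in (False, True):
        a, b, ok = run_exchange(substituted=flag)
        print("substituted:", flag, "| keys match:", a == b, "| tag valid:", ok)
```

Without the tag check, Bob would have no way to notice that his key is no longer shared with Alice; with it, he rejects the exchange before any data is sent.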
DSA creates digital signatures (known as digital seals) with private keys for message
authenticity. Recipients use public keys to verify these signatures, ensuring the
message’s integrity and source. Unlike RSA encryption, which focuses on
confidentiality, DSA concentrates on the integrity and authenticity of data.
DSA secures email exchanges, software updates, and digital signatures in government,
finance, and security-focused applications. Its concerns include the risk of private key
compromise and potential efficiency issues.
Encryption Standards
Encryption plays a vital role in safeguarding information, with several standards and
methods employed based on use cases. The following are standards for
encryption:
File Encryption:
This involves encrypting individual files or directories directly by the file system, often
called file and folder encryption.
Disk Encryption:
Email Encryption:
Email encryption protects emails from unauthorized eyes. Typically, emails are sent in
the clear, making them vulnerable. Encryption methods for emails include Transport
Layer Security (TLS) and end-to-end encryption, with popular options being PGP and
S/MIME protocols. These methods secure the content and may validate the sender’s
and recipient’s authenticity.
Davies Attack (DA)
Davies's attack was first suggested by Donald Davies in the 1980s and is a
specialized attack that applies only to DES, whereas LC and DC are
general attacks applicable to many other algorithms. Davies said that
breaking DES requires 2^50 known plaintexts, with a success rate of
fifty-one percent.
Hill Cipher
Hill cipher is a polygraphic substitution cipher based on linear algebra. Each letter is
represented by a number modulo 26. Often the simple scheme A = 0, B = 1, …, Z = 25
is used, but this is not an essential feature of the cipher. To encrypt a message, each
block of n letters (considered as an n-component vector) is multiplied by an invertible
n × n matrix, modulo 26. To decrypt the message, each block is multiplied by
the inverse of the matrix used for encryption.
The matrix used for encryption is the cipher key, and it should be chosen randomly
from the set of invertible n × n matrices (modulo 26).
Examples:
The message ‘ACT’ is written as vector:
Decryption
To decrypt the message, we turn the ciphertext back into a vector, then simply multiply
by the inverse matrix of the key matrix (IFKVIVVMI in letters). The inverse of the matrix
used in the previous example is:
For the previous Ciphertext ‘POH’:
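The encryption and decryption steps described above, written out as an added example that is not part of the original document. It starts from the inverse matrix the text gives in letters (IFKVIVVMI), recovers the encryption key by inverting it modulo 26, and round-trips the text's ACT/POH pair; the function names are hypothetical, and the inversion works for any n × n key, which goes beyond the 3 × 3 case in the text.

```python
from math import gcd, isqrt

MOD = 26  # A = 0, B = 1, ..., Z = 25, the scheme used in the text


def to_numbers(text):
    """Map letters to 0..25, rejecting anything outside A-Z."""
    text = text.upper()
    if not text.isalpha() or not text.isascii():
        raise ValueError("only the letters A-Z are supported")
    return [ord(c) - ord("A") for c in text]


def to_matrix(letters):
    """Read n*n letters row by row into an n x n matrix."""
    n = isqrt(len(letters))
    if n * n != len(letters):
        raise ValueError("key must contain n*n letters")
    nums = to_numbers(letters)
    return [nums[r * n:(r + 1) * n] for r in range(n)]


def to_letters(matrix):
    return "".join(chr(v + ord("A")) for row in matrix for v in row)


def minor(m, i, j):
    """Matrix m with row i and column j removed."""
    return [row[:j] + row[j + 1:] for k, row in enumerate(m) if k != i]


def det(m):
    """Integer determinant by cofactor expansion (fine for small keys)."""
    if not m:
        return 1
    return sum((-1) ** j * m[0][j] * det(minor(m, 0, j)) for j in range(len(m)))


def is_valid_key(m):
    """A key is usable only if its determinant is coprime to 26."""
    return gcd(det(m) % MOD, MOD) == 1


def inverse_mod(m):
    """Inverse modulo 26 via the adjugate: det^-1 * adj(m)."""
    if not is_valid_key(m):
        raise ValueError("matrix is not invertible modulo 26")
    d_inv = pow(det(m) % MOD, -1, MOD)
    n = len(m)
    return [[(d_inv * (-1) ** (i + j) * det(minor(m, j, i))) % MOD
             for j in range(n)] for i in range(n)]


def transform(matrix, text):
    """Multiply each n-letter block (as a column vector) by the matrix mod 26."""
    n = len(matrix)
    nums = to_numbers(text)
    if len(nums) % n:
        raise ValueError("text length must be a multiple of the block size")
    out = []
    for start in range(0, len(nums), n):
        block = nums[start:start + n]
        out.extend(sum(k * v for k, v in zip(row, block)) % MOD for row in matrix)
    return "".join(chr(v + ord("A")) for v in out)


DECRYPT_KEY = to_matrix("IFKVIVVMI")      # inverse matrix quoted in the text
ENCRYPT_KEY = inverse_mod(DECRYPT_KEY)    # recovered by inverting it again

if __name__ == "__main__":
    print("encryption key:", to_letters(ENCRYPT_KEY))
    print("ACT ->", transform(ENCRYPT_KEY, "ACT"))
    print("POH ->", transform(DECRYPT_KEY, "POH"))
```

The `is_valid_key` check is the practical meaning of "invertible modulo 26": a matrix whose determinant shares a factor with 26 has no inverse, so messages encrypted with it cannot be decrypted uniquely.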
What is DMZ?
Demilitarized zones, or DMZ for short, are used in cybersecurity. DMZs separate
internal networks from the internet and are often found on corporate networks. A DMZ
is typically created on a company’s internal network to isolate the company from external
threats. While the name might sound negative, a DMZ can be a helpful tool for network
security.
The DMZ is a network barrier between the trusted and untrusted networks, sitting between
a company’s private and public networks. The DMZ acts as a protection layer through which
outside users cannot access the company’s data. The DMZ receives requests from outside
users or public networks to access a company's information or website. For such
requests, the DMZ arranges sessions on the public network. It cannot initiate a session on
the private network. If anyone tries to perform malicious activity on the DMZ, the web pages
are corrupted, but other information remains safe.
The goal of a DMZ is to provide access to the untrusted network while ensuring the security
of the private network. A DMZ is not mandatory, but using one together with a firewall is a
better approach.
Restarting a system will usually fix an attack that crashes a server, but flooding attacks
are more difficult to recover from. Recovering from a distributed DoS (DDoS) attack in
which attack traffic comes from a large number of sources is even more difficult.
What is the Risk Rating Matrix?
A risk rating matrix, or risk assessment matrix, is a visual tool used to evaluate and
prioritize risks based on their likelihood and potential impact. It serves as a framework
for categorizing and quantifying the potential hazard of each risk, allowing organizations
to allocate resources efficiently and effectively.
Risk matrices are versatile tools that can be employed both in extensive and more
localized contexts. This method of risk analysis can be implemented on a smaller scale,
such as within individual projects, or on a broader scale encompassing the entire
enterprise.
How a hacker might discover information and gain access to a network: explain
the anatomy:
• Identify weaknesses and vulnerabilities in the target systems. This may
involve using automated tools to scan for known vulnerabilities in
software, operating systems, or misconfigurations.
5. Exploitation:
• Exploit identified vulnerabilities to gain unauthorized access. This could
involve using known exploits, custom-crafted malware, or social
engineering techniques to trick individuals into revealing sensitive
information or executing malicious code.
6. Privilege Escalation:
• After gaining initial access, the hacker seeks to escalate their privileges
within the system. This involves attempting to gain administrative or
higher-level access to expand control and access more sensitive data.
7. Maintaining Access:
• Install backdoors, rootkits, or other malicious tools to ensure continued
access to the compromised system. This allows the hacker to return even
if the initial vulnerability is patched.
8. Covering Tracks:
• Hackers attempt to erase or obfuscate evidence of their activities. This
may include modifying logs, deleting files, or manipulating timestamps to
avoid detection.
9. Exfiltration:
• Once access is established, hackers may attempt to steal or exfiltrate
sensitive information. This could involve copying files, accessing
databases, or intercepting communication.
10. Post-Exploitation Activities:
• Beyond data theft, hackers may use the compromised system for various
malicious purposes, such as launching attacks on other systems,
participating in botnets, or conducting further reconnaissance.
It's crucial for organizations and individuals to implement strong cybersecurity measures
to mitigate these risks. This includes regular system updates, security audits, employee
training on social engineering, and the use of intrusion detection systems to monitor for
suspicious activities. Ethical hacking, conducted by professionals in a controlled
environment, can also help identify and address vulnerabilities before malicious actors
exploit them.
Active attacks:
Active attacks are a type of cybersecurity attack in which an attacker attempts to
alter, destroy, or disrupt the normal operation of a system or network. Active
attacks involve the attacker taking direct action against the target system or network,
and can be more dangerous than passive attacks, which involve simply
monitoring or eavesdropping on a system or network.
Types of active attacks are as follows:
• Masquerade
• Modification of messages
• Repudiation
• Replay
• Denial of Service
Masquerade –
Masquerade is a type of cybersecurity attack in which an attacker pretends to be
someone else in order to gain access to systems or data. This can involve
impersonating a legitimate user or system to trick other users or systems into
providing sensitive information or granting access to restricted areas.
There are several types of masquerade attacks, including:
Username and password masquerade: In a username and password masquerade
attack, an attacker uses stolen or forged credentials to log into a system or
application as a legitimate user.
IP address masquerade: In an IP address masquerade attack, an attacker spoofs
or forges their IP address to make it appear as though they are accessing a
system or application from a trusted source.
Website masquerade: In a website masquerade attack, an attacker creates a fake
website that appears to be legitimate in order to trick users into providing
sensitive information or downloading malware.
Email masquerade: In an email masquerade attack, an attacker sends an email that
appears to be from a trusted source, such as a bank or government agency,
in order to trick the recipient into providing sensitive information or downloading
malware.
[Figure: Masquerade attack]
Modification of messages –
It means that some portion of a message is altered or that message is delayed or
reordered to produce an unauthorized effect. Modification is an attack on the integrity
of the original data. It basically means that unauthorized parties not only gain access
to data but also spoof the data by triggering denial-of-service attacks, such as altering
transmitted data packets or flooding the network with fake data. Manufacturing is an
attack on authentication. For example, a message meaning “Allow JOHN to read
confidential file X” is modified as “Allow Smith to read confidential file X”.
[Figure: Modification of messages]
Repudiation –
Repudiation attacks are a type of cybersecurity attack in which an attacker attempts to
deny or repudiate actions that they have taken, such as making a transaction or
sending a message. These attacks can be a serious problem because they can make
it difficult to track down the source of the attack or determine who is responsible for a
particular action.
There are several types of repudiation attacks, including:
Message repudiation attacks: In a message repudiation attack, an attacker sends
a message and then later denies having sent it. This can be done by using spoofed
or falsified headers or by exploiting vulnerabilities in the messaging system.
Transaction repudiation attacks: In a transaction repudiation attack, an attacker
makes a transaction, such as a financial transaction, and then later denies having
made it. This can be done by exploiting vulnerabilities in the transaction processing
system or by using stolen or falsified credentials.
Data repudiation attacks: In a data repudiation attack, an attacker modifies or
deletes data and then later denies having done so. This can be done by exploiting
vulnerabilities in the data storage system or by using stolen or falsified credentials.
Replay –
It involves the passive capture of a message and its subsequent transmission to
produce an unauthorized effect. In this attack, the basic aim of the attacker is to save a
copy of the data originally present on that particular network and later on use this data
for personal uses. Once the data is corrupted or leaked it is insecure and unsafe for
the users.
[Figure: Replay]
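One way a receiver can detect both the modification of messages and the replay described above, as an added example that is not part of the original document. It assumes a shared secret key, a per-sender message counter and in-order delivery; the class names, packet layout and key are constructed for illustration, and the message text is the one used in the modification example.

```python
import hashlib
import hmac
import struct

MAC_LEN = 32   # HMAC-SHA256 output size
HDR_LEN = 8    # 64-bit big-endian message counter


class Sender:
    def __init__(self, key):
        self.key = key
        self.counter = 0

    def seal(self, text):
        """Build counter || message || HMAC(key, counter || message)."""
        self.counter += 1
        header = struct.pack(">Q", self.counter)
        body = text.encode()
        mac = hmac.new(self.key, header + body, hashlib.sha256).digest()
        return header + body + mac


class Receiver:
    def __init__(self, key):
        self.key = key
        self.last_counter = 0   # highest counter accepted so far

    def open(self, packet):
        """Return (verdict, text); text is None unless the packet is accepted."""
        if len(packet) < HDR_LEN + MAC_LEN:
            return "malformed", None
        header, body, mac = packet[:HDR_LEN], packet[HDR_LEN:-MAC_LEN], packet[-MAC_LEN:]
        expected = hmac.new(self.key, header + body, hashlib.sha256).digest()
        if not hmac.compare_digest(mac, expected):
            return "modified", None          # integrity check failed
        (counter,) = struct.unpack(">Q", header)
        if counter <= self.last_counter:
            return "replayed", None          # genuine packet, but already seen
        self.last_counter = counter
        return "accepted", body.decode()


def demo():
    """Run the four cases and return the receiver's verdicts in order."""
    key = b"constructed-demo-shared-key"     # constructed sample value
    sender, receiver = Sender(key), Receiver(key)
    genuine = sender.seal("Allow JOHN to read confidential file X")

    # Same counter and MAC, but the name in the body was altered in transit.
    altered = genuine[:HDR_LEN] + b"Allow Smith to read confidential file X" + genuine[-MAC_LEN:]

    verdicts = [
        receiver.open(altered)[0],           # modification of the message
        receiver.open(genuine)[0],           # first delivery of the real message
        receiver.open(genuine)[0],           # captured copy sent a second time
        receiver.open(sender.seal("Revoke access to file X"))[0],  # next message
    ]
    return verdicts


if __name__ == "__main__":
    print(demo())
```

The MAC covers the counter as well as the body, so a captured packet cannot be given a fresh counter without failing the integrity check.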
Denial of Service –
Denial of Service (DoS) is a type of cybersecurity attack that is designed to make a
system or network unavailable to its intended users by overwhelming it with traffic or
requests. In a DoS attack, an attacker floods a target system or network with traffic or
requests in order to consume its resources, such as bandwidth, CPU cycles, or
memory, and prevent legitimate users from accessing it.
There are several types of DoS attacks, including:
Flood attacks: In a flood attack, an attacker sends a large number of packets or
requests to a target system or network in order to overwhelm
its resources.
Amplification attacks: In an amplification attack, an attacker uses a third-party
system or network to amplify their attack traffic and direct it towards the
target system or network, making the attack more effective.
To prevent DoS attacks, organizations can implement several measures, such
as:
1. Using firewalls and intrusion detection systems to monitor network traffic and block
suspicious activity.
2. Limiting the number of requests or connections that can be made to a system or
network.
3. Using load balancers and distributed systems to distribute traffic across multiple
servers or networks.
4. Implementing network segmentation and access controls to limit the impact of a
DoS attack.
[Figure: Denial of Service]
Passive attacks:
A Passive attack attempts to learn or make use of information from the system but
does not affect system resources. Passive Attacks are in the nature of eavesdropping
on or monitoring transmission. The goal of the opponent is to obtain information that is
being transmitted. Passive attacks involve an attacker passively monitoring or
collecting data without altering or destroying it. Examples of passive attacks include
eavesdropping, where an attacker listens in on network traffic to collect sensitive
information, and sniffing, where an attacker captures and analyzes data packets to
steal sensitive information.
Types of Passive attacks are as follows:
• The release of message content
• Traffic analysis
The release of message content –
Telephonic conversation, an electronic mail message, or a transferred file may contain
sensitive or confidential information. We would like to prevent an opponent from
learning the contents of these transmissions.
[Figure: Passive attack]
Traffic analysis –
Suppose that we had a way of masking (encryption) information, so that the attacker,
even after capturing the message, could not extract any information from it.
The opponent could still determine the location and identity of the communicating hosts
and could observe the frequency and length of messages being exchanged. This
information might be useful in guessing the nature of the communication that was
taking place.
The most useful protection against traffic analysis is encryption of SIP traffic. To do
this, an attacker would have to access the SIP proxy (or its call log) to determine who
made the call.
[Figure: Traffic analysis]