1. The Firewall
A firewall is a system designed to prevent unauthorized access to or from a private network. You
can implement a firewall in either hardware or software form, or a combination of both. Firewalls
prevent unauthorized internet users from accessing private networks connected to the internet,
especially intranets. All messages entering or leaving the intranet (i.e., the local network to which you
are connected) must pass through the firewall, which examines each message and blocks those that do
not meet the specified security criteria.
Note: In protecting private information, a firewall is considered a first line of defence; it cannot,
however, be considered the only such line. Firewalls are generally designed to protect network traffic
and connections, and therefore do not attempt to authenticate individual users when determining who
can access a particular computer or network.
A firewall is a system or group of systems that enforces an access control policy between two
or more networks. The actual means by which this is accomplished varies widely, but in principle, the
firewall can be thought of as a pair of mechanisms: one which exists to block traffic, and the other
which exists to permit traffic. Some firewalls place a greater emphasis on blocking traffic, while
others emphasize permitting traffic. Probably the most important thing to recognize about a firewall is
that it implements an access control policy. If you don't have a good idea of what kind of access you
want to allow or to deny, a firewall really won't help you. It's also important to recognize that the
firewall's configuration, because it is a mechanism for enforcing policy, imposes its policy on
everything behind it. Administrators for firewalls managing the connectivity for a large number of
hosts therefore have a heavy responsibility.
Some firewalls permit only email traffic through them, thereby protecting the network against
any attacks other than attacks against the email service. Other firewalls provide less strict protections,
and block services that are known to be problems.
Generally, firewalls are configured to protect against unauthenticated interactive logins from
the ``outside'' world. This, more than anything, helps prevent vandals from logging into machines on
your network. More elaborate firewalls block traffic from the outside to the inside, but permit users on
the inside to communicate freely with the outside. The firewall can protect you against any type of
network-borne attack if you unplug it.
Firewalls are also important since they can provide a single ``choke point'' where security and
audit can be imposed. Unlike in a situation where a computer system is being attacked by someone
dialing in with a modem, the firewall can act as an effective ``phone tap'' and tracing tool. Firewalls
provide an important logging and auditing function; often they provide summaries to the
administrator about the kinds and amounts of traffic that passed through them, how many attempts
there were to break in, etc.
Because of this, firewall logs are critically important data. They can be used as evidence in a
court of law in most countries. You should safeguard, analyze and protect your firewall logs
accordingly.
This is an important point: providing this ``choke point'' can serve the same purpose on your
network as a guarded gate can for your site's physical premises. That means anytime you have a
change in ``zones'' or levels of sensitivity, such a checkpoint is appropriate. A company rarely has
only an outside gate and no receptionist or security staff to check badges on the way in. If there are
layers of security on your site, it's reasonable to expect layers of security on your network.
Firewalls can't protect against attacks that don't go through the firewall. Many corporations
that connect to the Internet are very concerned about proprietary data leaking out of the company
through that route. Unfortunately for those concerned, a magnetic tape, compact disc, DVD, or USB
flash drive can just as effectively be used to export data. Many organizations that are terrified (at a
management level) of Internet connections have no coherent policy about how dial-in access via
modems should be protected. It's silly to build a six-foot thick steel door when you live in a wooden
house, but there are a lot of organizations out there buying expensive firewalls and neglecting the
numerous other back-doors into their network. For a firewall to work, it must be a part of a consistent
overall organizational security architecture. Firewall policies must be realistic and reflect the level of
security in the entire network. For example, a site with top secret or classified data doesn't need a
firewall at all: they shouldn't be hooking up to the Internet in the first place, or the systems with the
really secret data should be isolated from the rest of the corporate network.
Another thing a firewall can't really protect you against is traitors or idiots inside your
network. While an industrial spy might export information through your firewall, he's just as likely to
export it through a telephone, FAX machine, or Compact Disc. CDs are a far more likely means for
information to leak from your organization than a firewall. Firewalls also cannot protect you against
stupidity. Users who reveal sensitive information over the telephone are good targets for social
engineering; an attacker may be able to break into your network by completely bypassing your
firewall, if he can find a ``helpful'' employee inside who can be fooled into giving access to a modem
pool. Before deciding this isn't a problem in your organization, ask yourself how much trouble a
contractor has getting logged into the network or how much difficulty a user who forgot his password
has getting it reset. If the people on the help desk believe that every call is internal, you have a
problem that can't be fixed by tightening controls on the firewalls.
Firewalls can't protect against tunneling over most application protocols to trojaned or poorly
written clients. There are no magic bullets and a firewall is not an excuse to not implement software
controls on internal networks or ignore host security on servers. Tunneling ``bad'' things over HTTP,
SMTP, and other protocols is quite simple and trivially demonstrated. Security isn't ``fire and forget''.
Lastly, firewalls can't protect against bad things being allowed through them. For instance,
many Trojan Horses use the Internet Relay Chat (IRC) protocol to allow an attacker to control a
compromised internal host from a public IRC server. If you allow any internal system to connect to
any external system, then your firewall will provide no protection from this vector of attack.
Design Issues
There are a number of basic design issues that should be addressed by the lucky person who
has been tasked with the responsibility of designing, specifying, and implementing or overseeing the
installation of a firewall.
The first and most important decision reflects the policy of how your company or
organization wants to operate the system: is the firewall in place explicitly to deny all services except
those critical to the mission of connecting to the Net, or is the firewall in place to provide a metered
and audited method of ``queuing'' access in a non-threatening manner? There are degrees of paranoia
between these positions; the final stance of your firewall might be more the result of a political than
an engineering decision.
The second is: what level of monitoring, redundancy, and control do you want? Having
established the acceptable risk level (i.e., how paranoid you are) by resolving the first issue, you can
form a checklist of what should be monitored, permitted, and denied. In other words, you start by
figuring out your overall objectives, and then combine a needs analysis with a risk assessment, and
sort the almost always conflicting requirements out into a laundry list that specifies what you plan to
implement.
The third issue is financial. We can't address this one here in anything but vague terms, but
it's important to try to quantify any proposed solutions in terms of how much it will cost either to buy
or to implement. For example, a complete firewall product may cost between $100,000 at the high
end, and free at the low end. The free option, doing some fancy configuring on a Cisco or similar
router, will cost nothing but staff time and a few cups of coffee. Implementing a high-end firewall
from scratch might cost several man-months, which may equate to $30,000 worth of staff salary and
benefits. The systems management overhead is also a consideration. Building a home-brew is fine, but
it's important to build it so that it doesn't require constant (and expensive) attention. It's important, in
other words, to evaluate firewalls not only in terms of what they cost now, but continuing costs such
as support.
On the technical side, there are a couple of decisions to make, based on the fact that for all
practical purposes what we are talking about is a static traffic routing service placed between the
network service provider's router and your internal network. The traffic routing service may be
implemented at an IP level via something like screening rules in a router, or at an application level via
proxy gateways and services.
The decision to make is whether to place an exposed stripped-down machine on the outside
network to run proxy services for telnet, FTP, news, etc., or whether to set up a screening router as a
filter, permitting communication with one or more internal machines. There are benefits and
drawbacks to both approaches, with the proxy machine providing a greater level of audit and,
potentially, security in return for increased cost in configuration and a decrease in the level of service
that may be provided (since a proxy needs to be developed for each desired service). The old trade-off
between ease-of-use and security comes back to haunt us with a vengeance.
Types of firewalls
1. Network layer
2. Application layer
3. Hybrids
They are not as different as you might think, and the latest technologies are blurring the distinction to
the point where it's no longer clear whether one is ``better'' or ``worse.'' As always, you need to be
careful to pick the type that meets your needs.
The distinction depends on what mechanisms the firewall uses to pass traffic from one security zone to another.
The International Standards Organization (ISO) Open Systems Interconnect (OSI) model for
networking defines seven layers, where each layer provides services that ``higher-level'' layers depend
on. In order from the bottom, these layers are physical, data link, network, transport, session,
presentation, application.
The important thing to recognize is that the lower-level the forwarding mechanism, the less
examination the firewall can perform. Generally speaking, lower-level firewalls are faster, but are
easier to fool into doing the wrong thing.
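The difference in what each layer can examine can be illustrated with a short sketch. This is a toy model, not any vendor's implementation: the packet fields and helper names are invented for illustration, and real firewalls parse raw bytes rather than dictionaries.

```python
# Toy packet: a network-layer filter sees only addresses and ports,
# while an application-layer firewall can also parse the payload.
packet = {
    "src_ip": "203.0.113.9",
    "dst_ip": "192.0.2.10",
    "dst_port": 80,
    "payload": "GET /admin HTTP/1.1\r\nHost: example.com\r\n\r\n",
}

def network_layer_allows(pkt):
    # Network-layer view: this looks like ordinary web traffic.
    return pkt["dst_port"] in (80, 443)

def application_layer_allows(pkt):
    # Application-layer view: parse the request and apply policy to it.
    request_line = pkt["payload"].split("\r\n")[0]
    method, path, _ = request_line.split(" ")
    return method in ("GET", "POST") and not path.startswith("/admin")

print(network_layer_allows(packet))      # True  (port 80 looks fine)
print(application_layer_allows(packet))  # False (blocked: /admin request)
```

The same packet passes the network-layer check but fails the application-layer one, which is exactly why lower-level firewalls are faster yet easier to fool.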
These days, most firewalls fall into the ``hybrid'' category, which do network filtering as well as
some amount of application inspection. The amount changes depending on the vendor, product,
protocol and version, so some level of digging and/or testing is often necessary.
Example Network layer firewall: In the figure above, a network layer firewall called a ``screened host firewall'' is represented. In a
screened host firewall, access to and from a single host is controlled by means of a router operating at
a network layer. The single host is a bastion host; a highly-defended and secured strong-point that
(hopefully) can resist attack.
Example Network layer firewall: In the figure above, a network layer firewall called a ``screened subnet
firewall'' is represented. In a screened subnet firewall, access to and from a whole network is
controlled by means of a router operating at a network layer. It is similar to a screened host, except
that it is, effectively, a network of screened hosts.
Application layer firewalls generally are hosts running proxy servers, which permit no traffic directly between
networks, and which perform elaborate logging and auditing of traffic passing through them. Since
the proxy applications are software components running on the firewall, it is a good place to do lots of
logging and access control. Application layer firewalls can be used as network address translators,
since traffic goes in one ``side'' and out the other, after having passed through an application that
effectively masks the origin of the initiating connection. Having an application in the way in some
cases may impact performance and may make the firewall less transparent. Early application layer
firewalls such as those built using the TIS firewall toolkit, are not particularly transparent to end users
and may require some training. Modern application layer firewalls are often fully transparent.
Application layer firewalls tend to provide more detailed audit reports and tend to enforce more
conservative security models than network layer firewalls.
Example Application layer firewall: In the figure above, an application layer firewall called a
``dual homed gateway'' is represented. A dual homed gateway is a highly secured host that runs proxy
software. It has two network interfaces, one on each network, and blocks all traffic passing through it.
Most firewalls now lie someplace between network layer firewalls and application layer
firewalls. As expected, network layer firewalls have become increasingly ``aware'' of the information
going through them, and application layer firewalls have become increasingly ``low level'' and
transparent. The end result is that now there are fast packet-screening systems that log and audit data
as they pass through the system. Increasingly, firewalls (network and application layer) incorporate
encryption so that they may protect traffic passing between them over the Internet. Firewalls with
end-to-end encryption can be used by organizations with multiple points of Internet connectivity to
use the Internet as a ``private backbone'' without worrying about their data or passwords being sniffed.
3. Acting as a proxy server: A proxy server is a type of gateway that hides the true network
address of the computer(s) connecting through it. A proxy server connects to the internet,
makes the requests for pages, connections to servers, etc., and receives the data on behalf of
the computer(s) behind it. The firewall capabilities lie in the fact that a proxy can be
configured to allow only certain types of traffic to pass (e.g., HTTP files, or web pages). A
proxy server has the potential drawback of slowing network performance, since it has to
actively analyze and manipulate traffic passing through it.
4. Web application firewall: A web application firewall is a hardware appliance, server plug-in, or
some other software filter that applies a set of rules to an HTTP conversation. Such rules
are generally customized to the application so that many attacks can be identified and
blocked.
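The web application firewall idea, applying a set of rules to an HTTP conversation, can be sketched in a few lines. The patterns below are toy examples invented for illustration; real rule sets (such as the OWASP Core Rule Set) are far larger and context-aware.

```python
import re

# Toy WAF rules: regexes matched against the raw request text,
# each paired with the reason for blocking.
RULES = [
    (re.compile(r"(?i)\bunion\b.+\bselect\b"), "SQL injection attempt"),
    (re.compile(r"(?i)<script\b"), "cross-site scripting attempt"),
    (re.compile(r"\.\./"), "path traversal attempt"),
]

def inspect(http_request: str):
    """Return the reason a request is blocked, or None if it is allowed."""
    for pattern, reason in RULES:
        if pattern.search(http_request):
            return reason
    return None

print(inspect("GET /search?q=shoes HTTP/1.1"))                     # None
print(inspect("GET /item?id=1 UNION SELECT password FROM users"))  # SQL injection attempt
```

Because the rules are customized to the application's traffic, a WAF can block attacks that a port-level filter would happily pass through as ordinary HTTP.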
In practice, many firewalls use two or more of these techniques in concert. In Windows and Mac
OS X, firewalls are built into the operating system. Third-party firewall packages also exist, such as
Zone Alarm, Norton Personal Firewall, Tiny, Black Ice Protection, and McAfee Personal Firewall.
Many of these offer free versions or trials of their commercial versions.
In addition, many home and small office broadband routers have rudimentary firewall capabilities
built in. These tend to be simply port/protocol filters, although models with much finer control are
available.
Implementation
Firewall Rules
As mentioned above, network traffic that traverses a firewall is matched against rules to determine if
it should be allowed through or not. An easy way to explain what firewall rules look like is to show a
few examples, so we'll do that now.
Suppose you have a server with this list of firewall rules that apply to incoming traffic:
1. Accept new and established incoming traffic to the public network interface on ports 80 and
443 (HTTP and HTTPS web traffic)
2. Drop incoming traffic from IP addresses of the non-technical employees in your office to port
22 (SSH)
3. Accept new and established incoming traffic from your office IP range to the private network
interface on port 22 (SSH)
Note that the first word in each of these examples is either "accept", "reject", or "drop". This specifies
the action that the firewall should do in the event that a piece of network traffic matches a
rule. Accept means to allow the traffic through, reject means to block the traffic but reply with an
"unreachable" error, and drop means to block the traffic and send no reply. The rest of each rule
consists of the condition that each packet is matched against.
As it turns out, network traffic is matched against a list of firewall rules in a sequence, or chain, from
first to last. More specifically, once a rule is matched, the associated action is applied to the network
traffic in question. In our example, if an accounting employee attempted to establish an SSH
connection to the server they would be rejected based on rule 2, before rule 3 is even checked. A
system administrator, however, would be accepted because they would match only rule 3.
Default Policy
It is typical for a chain of firewall rules to not explicitly cover every possible condition. For this
reason, firewall chains must always have a default policy specified, which consists only of an action
(accept, reject, or drop).
Suppose the default policy for the example chain above was set to drop. If any computer outside of
your office attempted to establish an SSH connection to the server, the traffic would be dropped
because it does not match the conditions of any rules.
If the default policy were set to accept, anyone, except your own non-technical employees, would be
able to establish a connection to any open service on your server. This would be an example of a very
poorly configured firewall because it only keeps a subset of your employees out.
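The first-match semantics and default policy described above can be sketched in a few lines of Python. This is a toy model: the office IP prefix and the "non-technical employee" address are invented, and real firewalls match on many more fields (interface, protocol, connection state).

```python
# Toy firewall chain with first-match-wins semantics.
# Rules are (action, predicate) pairs mirroring the three example rules.
OFFICE_RANGE = "198.51.100."           # assumed office IP prefix
NON_TECH_IPS = {"198.51.100.23"}       # assumed non-technical employee

rules = [
    ("accept", lambda p: p["port"] in (80, 443)),                        # rule 1: web traffic
    ("drop",   lambda p: p["src"] in NON_TECH_IPS and p["port"] == 22),  # rule 2: block SSH
    ("accept", lambda p: p["src"].startswith(OFFICE_RANGE)
                         and p["port"] == 22),                           # rule 3: office SSH
]

def evaluate(packet, default_policy="drop"):
    for action, matches in rules:
        if matches(packet):
            return action            # first matching rule wins
    return default_policy            # nothing matched: apply the default

print(evaluate({"src": "198.51.100.7",  "port": 22}))  # accept (rule 3)
print(evaluate({"src": "198.51.100.23", "port": 22}))  # drop   (rule 2, checked first)
print(evaluate({"src": "203.0.113.50",  "port": 22}))  # drop   (default policy)
```

Note how the accounting employee's packet is dropped by rule 2 before rule 3 is ever consulted, and how the outside host falls through the whole chain to the default policy.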
As network traffic, from the perspective of a server, can be either incoming or outgoing, a firewall
maintains a distinct set of rules for either case. Traffic that originates elsewhere, incoming traffic, is
treated differently than outgoing traffic that the server sends. It is typical for a server to allow most
outgoing traffic because the server is usually, to itself, trustworthy. Still, the outgoing rule set can be
used to prevent unwanted communication in the case that a server is compromised by an attacker or a
malicious executable.
In order to maximize the security benefits of a firewall, you should identify all of the ways you want
other systems to interact with your server, create rules that explicitly allow them, then drop all other
traffic. Keep in mind that the appropriate outgoing rules must be in place so that a server will allow
itself to send outgoing acknowledgements to any appropriate incoming connections. Also, as a server
typically needs to initiate its own outgoing traffic for various reasons—for example, downloading
updates or connecting to a database—it is important to include those cases in your outgoing rule set as
well.
Suppose our example firewall is set to drop outgoing traffic by default. This means our
incoming accept rules would be useless without complementary outgoing rules.
To complement the example incoming firewall rules (1 and 3), from the Firewall Rules section, and
allow proper communication on those addresses and ports to occur, we could use these outgoing
firewall rules:
1. Accept established outgoing traffic to the public network interface on port 80 and 443 (HTTP
and HTTPS)
2. Accept established outgoing traffic to the private network interface on port 22 (SSH)
Note that we don't need to explicitly write a rule for incoming traffic that is dropped (incoming rule 2)
because the server doesn't need to establish or acknowledge that connection.
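The role of "established" in the outgoing rules can be illustrated with a minimal connection-tracking sketch. Connection state is reduced to a simple set of (client, port) pairs here; real stateful firewalls track full five-tuples, sequence numbers, and timeouts.

```python
# Toy sketch of why "established" outgoing rules matter: a server that
# accepts an incoming connection must also be allowed to reply on it.
established = set()   # (client, port) pairs the server has accepted

def handle_incoming(client, port):
    if port in (80, 443):            # incoming rule 1: accept web traffic
        established.add((client, port))
        return "accept"
    return "drop"

def handle_outgoing(client, port):
    # Outgoing rule: permit only replies on established connections.
    return "accept" if (client, port) in established else "drop"

handle_incoming("203.0.113.5", 443)
print(handle_outgoing("203.0.113.5", 443))  # accept: reply to an accepted connection
print(handle_outgoing("203.0.113.9", 443))  # drop: no such connection was established
```

This also shows why incoming rule 2 needs no outgoing counterpart: a dropped connection never enters the established set, so there is nothing to reply to.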
Now that we've gone over how firewalls work, let's take a look at common software packages that can
help us set up an effective firewall. While there are many other firewall-related packages, these are
effective and are the ones you will encounter the most.
1. Iptables
Iptables is a standard firewall included in most Linux distributions by default (a modern successor,
nftables, is gradually replacing it). It is actually a front end to the kernel-level netfilter hooks that can
manipulate the Linux network stack. It works by matching each packet that crosses the networking
interface against a set of rules to decide what to do.
2. UFW
UFW, which stands for Uncomplicated Firewall, is an interface to iptables that is geared towards
simplifying the process of configuring a firewall.
3. FirewallD
FirewallD is the default firewall management tool on Red Hat-based distributions such as CentOS and Fedora; it provides a dynamically managed firewall with support for network zones.
4. Fail2ban
Fail2ban is intrusion prevention software that can automatically configure your firewall to block
brute-force login attempts and DDoS attacks.
2. The Intrusion Detection System (IDS)
An Intrusion Detection System (IDS) is a monitoring technology that inspects network or host activity for signs of malicious behavior and alerts administrators when it finds them.
An IDS needs only to detect threats and as such is placed out-of-band on the network
infrastructure, meaning that it is not in the true real-time communication path between the sender and
receiver of information. Rather, IDS solutions will often take advantage of a TAP or SPAN port to
analyze a copy of the inline traffic stream (thus ensuring that the IDS does not impact inline network
performance).
IDS was originally developed this way because at the time the depth of analysis required for
intrusion detection could not be performed at a speed that could keep pace with components on the
direct communications path of the network infrastructure.
As explained, the IDS is also a listen-only device. The IDS monitors traffic and reports its results
to an administrator, but cannot automatically take action to prevent a detected exploit from taking
over the system. Attackers are capable of exploiting vulnerabilities very quickly once they enter the
network, rendering the IDS inadequate as a prevention device.
1. Network Intrusion Detection System (NIDS): This analyzes traffic on a whole subnet and
matches the traffic passing by against a library of known attacks.
2. Network Node Intrusion Detection System (NNIDS): This is similar to NIDS, but the traffic
is only monitored on a single host, not a whole subnet.
3. Host Intrusion Detection System (HIDS): This takes a “picture” of an entire system’s file set
and compares it to a previous picture. If there are significant differences, such as missing
files, it alerts the administrator.
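The HIDS ``picture and compare'' idea can be sketched with file hashes. This is a minimal illustration; real HIDS tools such as Tripwire also track permissions, owners, and timestamps, and protect the baseline snapshot itself from tampering.

```python
import hashlib
import os

def snapshot(paths):
    """Take a 'picture' of a file set: map each path to a SHA-256 of its contents."""
    picture = {}
    for path in paths:
        if os.path.exists(path):
            with open(path, "rb") as f:
                picture[path] = hashlib.sha256(f.read()).hexdigest()
    return picture

def compare(old, new):
    """Report files that went missing or changed since the previous picture."""
    missing = [p for p in old if p not in new]
    changed = [p for p in old if p in new and old[p] != new[p]]
    return missing, changed
```

Run `snapshot` once to establish a baseline, then periodically re-run it and `compare` the two pictures; any file in `missing` or `changed` is grounds for an alert to the administrator.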
Types of IDS
Intrusion detection software systems can be broken into two broad categories: host-based and
network-based; those two categories speak to where sensors for the IDS are placed (on a
host/endpoint or on a network).
Some experts segment the market even further, also listing perimeter IDS, VM-based IDS,
stack-based IDS, signature-based IDS and anomaly-based IDS (with similar abbreviations
corresponding to the IDS’ descriptive prefixes).
Whatever the type, analysts said the technology generally works the same, with the system designed
to detect intrusions at the points where the sensors are placed and to alert security analysts to its
findings.
Compare that to firewalls, which block out known malware, and to intrusion prevention system
(IPS) technology, which, as the name describes, also blocks malicious traffic.
Although an IDS doesn’t stop malware, cybersecurity experts said the technology still has a
place in the modern enterprise.
“The functionality of what it does is still critically important,” said Eric Hanselman, chief
analyst with 451 Research. “The IDS piece itself is still relevant because at its core it’s detecting an
active attack.”
However, cybersecurity experts said organizations usually don’t buy and implement IDS as a
standalone solution as they once did. Rather, they buy a suite of security capabilities or a security
platform that has intrusion detection as one of many built-in capabilities.
Rob Clyde, vice chair of the board of directors at ISACA, an association for IT governance
professionals, and executive chair for the board at White Cloud Security Inc., agreed that intrusion
detection is still a critical capability. But he said companies need to understand that an intrusion
detection system requires maintenance and consider whether, and how, they’ll support an IDS if they
opt for it.
“Once you’ve gone down the path to say we’re going to keep track of what’s going on in our
environment, you need someone to respond to alerts and incidents. Otherwise, why bother?” he said.
Given the work an IDS takes, he said smaller companies should have the capability but only
as part of a larger suite of functions so they’re not managing the IDS in addition to other standalone
solutions. They should also consider working with a managed security service provider for their
overall security requirements, as the provider, thanks to its scale, can respond to alerts more efficiently.
“They’ll use machine learning or maybe AI and human effort to alert your staff to an incident or
intrusion you truly have to worry about,” he said.
“And at mid-size and larger companies, where you really need to know if someone is inside
the network, you do want to have the additional layer, or additional layers, than just what’s built into
your firewall,” he said.
1. False positives (i.e., generating alerts when there is no real problem). “IDSs are notorious for
generating false positives,” Rexroad said, adding that alerts are generally sent to a
secondary analysis platform to help contend with this challenge. This challenge also puts
pressure on IT teams to continually update their IDSs with the right information to detect
legitimate threats and to distinguish those real threats from allowable traffic.
It’s no small task, experts said.
“IDS systems must be tuned by IT administrators to analyze the proper context and reduce
false-positives. For example, there is little benefit to analyzing and providing alerts on
internet activity for a server that is protected against known attacks. This would generate
thousands of irrelevant alarms at the expense of raising meaningful alarms. Similarly, there
are circumstances where perfectly valid activities may generate false alarms simply as a
matter of probability,” Rexroad said, noting that organizations often opt for a secondary
analysis platform, such as a Security Incident & Event Management (SIEM) platform, to help
with investigating alerts.
2. Staffing. Given the requirement for understanding context, an enterprise has to be ready to
make any IDS fit its own unique needs, experts advised.
“What this means is that an IDS cannot be a one-size-fits-all configuration to operate
accurately and effectively. And, this requires a savvy IDS analyst to tailor the IDS for the
interests and needs of a given site. And, knowledgeable trained system analysts are scarce,”
Novak added.
3. Missing a legitimate risk. “The trick with IDS is that you have to know what the attack is to
be able to identify it. The IDS has always had the patient zero problem: You have to have
found someone who got sick and died before you can identify it,” Hanselman said.
IDS technology can also have trouble detecting malware with encrypted traffic, experts said.
Additionally, the speed and distributed nature of incoming traffic can limit the effectiveness of an
intrusion detection system in an enterprise.
“You might have an IDS that can handle 100 megabits of traffic but you might have 200 megabits
coming at it or traffic gets distributed, so your IDS only sees one out of every three or four packets,”
Hanselman said.
Hanselman said those limitations still don’t invalidate the value of an IDS as a function.
“No security tool is perfect. Different products have different blind spots, so the challenge is
knowing those blind spots,” he explained. “I continue to think that IDS will be with us for a long time
to come. There’s still that basic value in being able to identify specific hostile traffic on the wire.”
However, experts said this has some organizations rethinking the need for an IDS – even
though today implementing the technology remains a security best practice.
“This tuning and analysis requires a significant amount of effort based on the number of alerts
received. An organization may not have the resources to manage all devices in this capacity. Other
organizations may conduct a more comprehensive threat assessment and decide not to implement IDS
devices,” Rexroad said, adding that the high number of IDS false positives has some organizations
opting against implementing IPSs as well for fear of blocking legitimate business transactions.
He said other organizations may decide to focus on more advanced protections at the internet
gateway or use flow analysis from network devices in conjunction with log analysis from systems and
applications to identify suspect events instead of using an IDS.
3. The Intrusion Prevention System (IPS)
An Intrusion Prevention System (IPS) is a network security/threat prevention technology that
examines network traffic flows to detect and prevent vulnerability exploits. Vulnerability exploits
usually come in the form of malicious inputs to a target application or service that attackers use to
interrupt and gain control of an application or machine. Following a successful exploit, the attacker
can disable the target application (resulting in a denial-of-service state), or can potentially gain access
to all the rights and permissions available to the compromised application.
Prevention
The IPS often sits directly behind the firewall and provides a complementary layer of analysis that
negatively selects for dangerous content. Unlike its predecessor the Intrusion Detection System (IDS)
—which is a passive system that scans traffic and reports back on threats—the IPS is placed inline (in
the direct communication path between source and destination), actively analyzing and taking
automated actions on all traffic flows that enter the network. Specifically, these actions include:
1. Sending an alarm to the administrator (as would be seen in an IDS)
2. Dropping the malicious packets
3. Blocking traffic from the source address
4. Resetting the connection
As an inline security component, the IPS must work efficiently to avoid degrading network
performance. It must also work fast because exploits can happen in near real-time. The IPS must also
detect and respond accurately, so as to eliminate threats and false positives (legitimate packets
misread as threats).
Detection
The IPS has a number of detection methods for finding exploits, but signature-based detection
and statistical anomaly-based detection are the two dominant mechanisms.
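The two mechanisms can be contrasted with a toy sketch: signature-based detection looks for known bad patterns, while statistical anomaly-based detection flags traffic that deviates sharply from a learned baseline. The signatures, the requests-per-minute feature, and the three-sigma threshold below are all invented for illustration.

```python
from statistics import mean, stdev

SIGNATURES = ["/etc/passwd", "cmd.exe"]   # toy known-bad substrings

def signature_alert(payload: str) -> bool:
    """Signature-based: alert if any known-bad pattern appears in the payload."""
    return any(sig in payload for sig in SIGNATURES)

def anomaly_alert(baseline, value, threshold=3.0) -> bool:
    """Anomaly-based: alert if `value` is more than `threshold` standard
    deviations from the baseline mean (a crude stand-in for profiling)."""
    mu, sigma = mean(baseline), stdev(baseline)
    return abs(value - mu) > threshold * sigma

requests_per_min = [60, 55, 62, 58, 61, 57]   # learned normal traffic rate
print(signature_alert("GET /../../etc/passwd HTTP/1.1"))  # True
print(anomaly_alert(requests_per_min, 900))               # True: rate spike
print(anomaly_alert(requests_per_min, 59))                # False
```

The trade-off follows directly: the signature check can never fire on an attack it has no pattern for, while the anomaly check can flag novel attacks but will also flag unusual-yet-legitimate traffic, which is one source of the false positives discussed earlier.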
IPS was originally built and released as a standalone device in the mid-2000s. This, however,
preceded today’s implementations, which are now commonly integrated
into Unified Threat Management (UTM) solutions (for small and medium size companies)
and next-generation firewalls (at the enterprise level).
For protection against intrusions to be effective, an IPS must have a system that keeps the file
that contains the identifiers of intrusions constantly up-to-date.
1. Network IPS:
1. These aim to protect the network segments or zones which they can access.
2. They capture network traffic (sniffers) and analyze them for patterns that could be some
type of attack.
3. If they are correctly installed in the network, they can analyze large networks and
generally have a minimum impact on traffic.
4. They use a network device configured in promiscuous mode. This means that they can
intercept and analyze all the packets in a network segment, even if they are not addressed
to a specific computer.
5. They usually analyze traffic in real time.
6. They not only work at TCP/IP level, but can also operate in the application layer.
7. A network IPS can be located in the network segments exposed to external networks
(WAN and the Internet) in the zone that hosts the services and public servers (DMZ), or
they can simply inspect traffic in the internal network. The optimum solution for
detecting intruders from untrustworthy networks is to place the IPS and the firewall in the
same device.
2. Host IPS:
1. These were the first IDS (Intrusion Detection System) developed by the IT security
industry.
2. They protect a single computer.
3. They monitor a large number of events and activities, accurately determining which processes and users are involved in a certain action.
4. They collect system information, such as files, log files and resources to then analyze it
locally for possible incidents in the system.
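The local log analysis a host IPS performs can be illustrated with a minimal sketch. The log line format and the failure threshold below are assumptions for the demonstration, not the format of any specific product.

```python
import re
from collections import Counter

# Illustrative pattern a host IPS might flag in an authentication log.
FAILED_LOGIN = re.compile(r"Failed password for (?:invalid user )?(\S+) from (\S+)")

def scan_auth_log(lines, max_failures: int = 3):
    """Count failed logins per source address and report the sources that
    reach the allowed number of failures -- a possible brute-force attempt."""
    failures = Counter()
    for line in lines:
        m = FAILED_LOGIN.search(line)
        if m:
            failures[m.group(2)] += 1
    return [src for src, n in failures.items() if n >= max_failures]

log = [
    "sshd[1]: Failed password for root from 203.0.113.9",
    "sshd[2]: Failed password for root from 203.0.113.9",
    "sshd[3]: Failed password for invalid user admin from 203.0.113.9",
    "sshd[4]: Accepted password for alice from 198.51.100.7",
]
print(scan_auth_log(log))  # ['203.0.113.9']
```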
Intrusion prevention systems work by scanning network traffic as it crosses the network; unlike an intrusion detection system, which only detects and reports malicious events, an intrusion prevention system is intended to stop attacks as they are happening. There are a number of different attack types that can be prevented using an IPS, including (among others):
1. Denial of Service
2. Distributed Denial of Service
3. Exploits (Various types)
4. Worms
5. Viruses
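The detect-versus-prevent distinction can be sketched as follows; the packet structure and the blocklisted payloads are illustrative assumptions.

```python
# A minimal sketch of the IDS/IPS distinction: the IDS only reports, while the
# inline IPS decides per packet whether to forward or drop.
BLOCKED_PAYLOADS = (b"\x90\x90\x90\x90", b"cmd.exe")

def ids_inspect(packet: dict, alerts: list) -> None:
    """Passive IDS: observe a copy of the traffic and raise an alert."""
    if any(sig in packet["payload"] for sig in BLOCKED_PAYLOADS):
        alerts.append(f"alert: suspicious payload from {packet['src']}")

def ips_forward(packet: dict):
    """Inline IPS: the packet only reaches the network if this returns it."""
    if any(sig in packet["payload"] for sig in BLOCKED_PAYLOADS):
        return None  # dropped before it can do harm
    return packet

alerts: list = []
bad = {"src": "203.0.113.9", "payload": b"run cmd.exe now"}
good = {"src": "198.51.100.7", "payload": b"GET /index.html"}

ids_inspect(bad, alerts)          # IDS: records an alert, traffic already passed
print(ips_forward(bad) is None)   # True  (IPS dropped it)
print(ips_forward(good) == good)  # True  (clean traffic forwarded)
```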
It is also important to understand that, like an IDS, an IPS is limited to the signatures it is configured to look for. As of this writing, the IOS IPS system has protection for over 3700 different signatures. These signatures are updated by Cisco constantly, but if they are not loaded onto the configured equipment they do little to help against new threats. The IOS IPS feature was also designed to work with other IOS-based features, including IOS Firewall, control-plane policing and other IOS security protection features.
Packet Flow
A very important piece of the security configuration of an IOS device is being able to
understand which feature is allowed to process traffic and in what order. Figure 1 shows the general
order that is used to process packets as they come into a device.
Figure 1
There can also be some confusion when reading through Cisco documentation. Within the last couple of IOS releases, there has been a transition from the Version 4.x Signature Format to the Version 5.x Signature Format for the Intrusion Prevention System. With this transition came a significant change from the use of .SDF files to .pkg files; this can be further complicated when looking through the documentation available on the Cisco website, as some of it refers to the Version 4.x Signature Format and other documentation refers to the Version 5.x Signature Format. This article reviews the use of the newer .pkg files and signature format.
IOS IPS relies on a number of different signature micro-engines (SMEs); each of these engines is used to process a different category of signatures. It is important to be familiar with these categories because IOS IPS cannot load all of the available signatures at the same time; instead, IOS IPS must be configured to load only the categories of signatures that are specific to the configured device and its purpose.
Two of these categories are intended especially for use with IOS IPS devices: the ios_basic and ios_advanced categories. A third category specific to IOS IPS, called ‘IOS IPS Default’, was introduced in IOS 15.0(1)M and currently has the same signatures as the ios_advanced category.
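A category-based configuration along these lines might look like the sketch below: all signatures are retired first, then only the basic category is unretired and the resulting rule is applied to an interface. The rule name MYIPS, the flash location and the interface are placeholders, and exact syntax varies by IOS release, so treat this as an illustration rather than a verified configuration.

```
! Store signature files on flash and define an IPS rule
ip ips config location flash:ips retries 1
ip ips name MYIPS
! Retire everything, then load only the ios_ips basic category
ip ips signature-category
 category all
  retired true
 category ios_ips basic
  retired false
!
! Apply the rule to inbound traffic on the outside interface
interface GigabitEthernet0/0
 ip ips MYIPS in
```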
Signature Actions
When a signature is downloaded from Cisco, it is automatically assigned a specific action that will occur should the event be detected. There are a total of five available actions:
1. deny-attacker-inline: block all traffic from the attacker's address
2. deny-connection-inline: drop traffic on the offending connection
3. deny-packet-inline: drop the offending packet
4. produce-alert: generate an alert/event message
5. reset-tcp-connection: send TCP resets to tear down the session
Any of these five actions can be combined and customized for individual signatures on the IOS IPS device. In the past, these actions could be customized with the Security Device Manager (SDM); however, with IOS version 12.4(11)T and later, SDM has been deprecated, and the use of Cisco Configuration Professional (CCP) (a single device), Cisco Security Manager (CSM) (up to 5 devices) or direct IOS CLI tuning is now required.
IOS IPS Logging, Monitoring and Alarming
When a signature is detected on an IOS IPS device, there are two methods that can be used for logging, monitoring and alarming:
1. syslog (the standard IOS logging mechanism)
2. Secure Device Event Exchange (SDEE)
Both the CCP and CME can be used to collect these events on smaller implementations; with larger deployments, the use of the Cisco Security Monitoring, Analysis, and Response System (MARS) is required.
The following table summarizes the differences in technology intrinsic to IPS and IDS deployments:
4. The Virtual Private Network (VPN)
A VPN serves several functions:
Hides your IP address: Connecting to a Virtual Private Network often conceals your real IP
address.
Changes your IP address: Using a VPN will almost certainly result in getting a different IP
address.
Encrypts data transfers: A Virtual Private Network will protect the data you transfer over
public WiFi.
Masks your location: With a Virtual Private Network, users can choose the country of origin
for their Internet connection.
Accesses blocked websites: Get around websites blocked by governments with a VPN.
How does a virtual private network (VPN) work?
A VPN extends a corporate network through encrypted connections made over the Internet.
Because the traffic is encrypted between the device and the network, traffic remains private as it
travels. An employee can work outside the office and still securely connect to the corporate network.
Even smartphones and tablets can connect through a VPN.
Secure remote access provides a safe, secure way to connect users and devices remotely to a
corporate network. It includes VPN technology that uses strong methods to authenticate the user or device. VPN technology is also available to check whether a device meets certain requirements, known as a device's posture, before it is allowed to connect remotely.
Traffic on the virtual network is sent securely by establishing an encrypted connection
across the Internet known as a tunnel. VPN traffic from a device such as a computer, tablet, or
smartphone is encrypted as it travels through this tunnel. Offsite employees can then use the virtual
network to access the corporate network.
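The tunnel concept described above can be sketched as follows: the inner packet is encrypted, authenticated, and wrapped in an outer frame that travels over the public Internet. The cipher here is a toy keystream built from SHA-256, used purely to keep the example self-contained; a real VPN would use a vetted cipher suite such as AES-GCM, and the frame layout is an assumption.

```python
import hashlib
import hmac
import os

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy CTR-style keystream derived from SHA-256 (illustration only)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encapsulate(key: bytes, inner_packet: bytes) -> bytes:
    """Encrypt the inner packet and wrap it in an authenticated tunnel frame."""
    nonce = os.urandom(16)
    ct = bytes(a ^ b for a, b in zip(inner_packet, keystream(key, nonce, len(inner_packet))))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    return nonce + tag + ct  # the outer frame travels over the public net

def decapsulate(key: bytes, frame: bytes) -> bytes:
    """Verify and decrypt a tunnel frame at the far end of the tunnel."""
    nonce, tag, ct = frame[:16], frame[16:48], frame[48:]
    if not hmac.compare_digest(tag, hmac.new(key, nonce + ct, hashlib.sha256).digest()):
        raise ValueError("frame tampered with in transit")
    return bytes(a ^ b for a, b in zip(ct, keystream(key, nonce, len(ct))))

key = os.urandom(32)
packet = b"GET /payroll HTTP/1.1"    # traffic that must stay private
frame = encapsulate(key, packet)
assert packet not in frame           # plaintext never appears on the wire
assert decapsulate(key, frame) == packet
```

The HMAC tag plays the role of the integrity check a real tunnel protocol provides: an eavesdropper who modifies the frame in transit is detected at decapsulation.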
VPN protocols
There are several different protocols used to secure and encrypt users and corporate data:
IP security (IPsec) : A set of protocols developed by the IETF to support secure exchange of packets at the IP layer. IPsec has been deployed widely to implement VPNs and supports two encryption modes: transport and tunnel.
Secure Sockets Layer (SSL) and Transport Layer Security (TLS)
Point-To-Point Tunneling Protocol (PPTP) : The Point-to-Point Tunneling Protocol is a technology for creating VPNs, developed jointly by Microsoft, U.S. Robotics and several remote access vendor companies, known collectively as the PPTP Forum. PPTP has been around since the days of Windows 95, and its main selling point is that it can be set up simply on every major OS. In short, PPTP tunnels a point-to-point connection over the GRE protocol. Unfortunately, the security of the PPTP protocol has been called into question in recent years; it remains widely supported, but it is not the most secure option.
Layer 2 Tunneling Protocol (L2TP) : Layer Two (2) Tunneling Protocol is an extension to
the PPP protocol that enables ISPs to operate Virtual Private Networks (VPNs). L2TP over
IPsec is more secure than PPTP and offers more features. L2TP/IPsec is a way of
implementing two protocols together in order to gain the best features of each. For example,
the L2TP protocol is used to create a tunnel and IPsec provides a secure channel. These
measures make for an impressively secure package.
OpenVPN: OpenVPN is an SSL-based Virtual Private Network that continues to gain
popularity. The software used is open source and freely available. SSL is a mature encryption
protocol, and OpenVPN can run on a single UDP or TCP port, making it extremely flexible.
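Since OpenVPN secures its tunnel with SSL/TLS, the client side of such a handshake can be illustrated with Python's standard ssl module. The gateway hostname is a placeholder, so the actual connection is shown only as a comment.

```python
import socket
import ssl

# Build a TLS client context with certificate verification enabled -- the same
# handshake-and-verify pattern an SSL VPN client performs with its gateway.
context = ssl.create_default_context()
print(context.verify_mode == ssl.CERT_REQUIRED)  # True: peer must present a valid cert
print(context.check_hostname)                    # True: cert must match the hostname

# Wrapping a socket would then look like this ("vpn.example.com" is a placeholder):
# with socket.create_connection(("vpn.example.com", 443)) as raw:
#     with context.wrap_socket(raw, server_hostname="vpn.example.com") as tls:
#         tls.sendall(b"...tunneled traffic...")
```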
The most common types of VPNs are remote-access VPNs and site-to-site VPNs.
Remote-access VPN
A remote-access VPN securely connects an individual user's device to a private network over the internet, typically using client software that establishes an encrypted tunnel to a VPN gateway at the network's edge.
Site-to-site VPN
A site-to-site VPN uses a gateway device to connect the entire network in one location to the
network in another -- usually a small branch connecting to a data center. End-node devices in the
remote location do not need VPN clients because the gateway handles the connection. Most site-to-
site VPNs connecting over the internet use IPsec. It is also common to use carrier MPLS clouds,
rather than the public internet, as the transport for site-to-site VPNs. Here, too, it is possible to have
either Layer 3 connectivity (MPLS IP VPN) or Layer 2 (Virtual Private LAN Service, or VPLS)
running across the base transport.
VPNs can also be defined between specific computers, typically servers in separate data centers, when
security requirements for their exchanges exceed what the enterprise network can deliver.
Increasingly, enterprises also use VPN connections in either remote-access mode or site-to-site mode
to connect -- or connect to -- resources in a public infrastructure-as-a-service environment. Newer
hybrid-access scenarios put the VPN gateway itself in the cloud, with a secure link from the cloud
service provider into the internal network.
Consumers use a private VPN service, also known as a VPN tunnel, to protect their online
activity and identity. By using an anonymous VPN service, a user's Internet traffic and data
remain encrypted, which prevents eavesdroppers from sniffing Internet activity. VPN services are
especially useful when accessing public Wi-Fi hotspots because the public wireless services might not
be secure. In addition to public Wi-Fi security, a private VPN service also provides consumers with
uncensored Internet access and can help prevent data theft and unblock websites.
Companies and organizations will typically use a VPN to communicate confidentially over a
public network and to send voice, video or data. It is also an excellent option for remote workers and
organizations with global offices and partners to share data in a private manner.
One of the most common types of VPNs used by businesses is called a virtual private dial-up
network (VPDN). A VPDN is a user-to-LAN connection, where remote users need to connect to the
company LAN. Another type of VPN is commonly called a site-to-site VPN. Here the company would invest in dedicated hardware to connect multiple sites to their LAN through a public network, usually the Internet.
The benefit of using a secure VPN is that it ensures the appropriate level of security for the connected systems when the underlying network infrastructure alone cannot provide it. The
justification for using VPN access instead of a private network usually boils down to cost and
feasibility: It is either not feasible to have a private network -- e.g., for a traveling sales rep -- or it is
too costly to do so.
VPN performance can be affected by a variety of factors, among them the speed of users' internet
connections, the types of protocols an internet service provider may use and the type of encryption the
VPN uses. Performance can also be affected by poor quality of service and conditions that are outside
the control of IT.
5. Access Control
Access control is a security technique that can be used to regulate who or what can view or use
resources in a computing environment.
There are two main types of access control: physical and logical.
Physical access control limits access to campuses, buildings, rooms and physical IT assets.
Logical access limits connections to computer networks, system files and data.
You can make a strong argument that the entire field of cyber security rests almost completely on
identity verification and access control. Without those two functions, almost no other security
technique matters. Every other element of security depends on the system identifying the user and
validating their permissions to various objects.
Access control topologies in information technology span the digital and the physical realms. It’s
as important to secure a server room door with a lock as it is to secure the server itself with a
password.
And there is considerable crossover between digital and physical security in modern access
control systems, where entryways are often secured by RFID (Radio-frequency Identification),
keypad, or biometric readers that rely on electronic databases for identity verification and
authorization. In such cases, the controls are only as strong as the weakest link—a door can be
jimmied or a database hacked.
Identity management and access control are never far from the minds of cyber security teams.
Nonetheless, even major agencies with large information security teams occasionally fumble the
implementation of access control schemes. In 2016, a Government Accountability Office report found that four government agencies, including NASA and the Department of Homeland Security, had failed to put in place adequate access control schemes for sensitive information.
A full stack of access control hardware and software can manage employees, visitors and executives, and can also support auditing and control. Integrated with a cloud-based physical access system at the core, these components work even better together, helping you to grow your security strategy and practice.
As long as the combination of username and password is, in fact, a uniquely identifying signature,
this is no problem at all. But the relative weakness of most user-originated passwords and a long
history of successful cryptographic attacks on password mechanisms have raised concerns about how
much we can rely on the old user name/password combination.
An alternative verification mechanism to the password is the key fob token. Tokens are small
devices that generate a time-based key code that acts as an authentication mechanism. Unlike
password controls, a fob can only be in the physical possession of a single person at a time (this
ignores the problem of duplication, a concern dealt with by adequate cryptographic controls).
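The time-based key codes such fobs generate can be sketched along the lines of RFC 6238 (TOTP): the fob and the server share a secret, and both derive a short-lived code from it and the current 30-second window. The shared secret below is a placeholder.

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, timestamp: int, step: int = 30, digits: int = 6) -> str:
    """Time-based one-time code in the style of RFC 6238: HMAC the shared
    secret with the current time window, then truncate to a short code."""
    counter = timestamp // step                      # which 30-second window
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

secret = b"shared-fob-secret"   # placeholder; real fobs use a provisioned key
now = int(time.time())
print(totp(secret, now))        # the 6-digit code currently shown on the fob
```

Because both sides compute the same function of the same secret and clock, the server can verify the code without it ever crossing the network in reusable form; once the window passes, the code is worthless to an eavesdropper.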
But in the real world, keys are lost or stolen, leading to a similar weakness as the password
scheme. The upside is that users can more reliably detect when a physical object has been stolen
versus a password.
But even two-factor schemes are susceptible to attack. According to a June 2016 article in Engadget, civil rights activist DeRay McKesson had his Twitter account hijacked by hackers who used social engineering to redirect the text-based one-time login code from his phone to one of their own. Using the code, the hackers triggered a password reset and promptly owned the account.
More exotic authentication mechanisms rely on biometric data, personally identifying physical
characteristics like fingerprints and iris scans. This type of authentication relies on something the
user is and is much harder to spoof. But even biometric access control schemes are susceptible to hacking, with artificial fingers being used to fool early versions of fingerprint scanners, reverse-engineered irises passing muster with iris scanners, and even face masks made convincing enough to fool facial recognition technology.
The challenge for cyber security professionals in an unending arms race with hackers will be to
develop more reliable methods of user verification that are also simple enough to be practical. A full
FBI background check would, presumably, be reliable enough, but could also cause login attempts to
take three to six months.
Once a user has proven they are who they say they are to the system they are accessing, that
system must implement controls to ensure they are only allowed to access the parts of that system
they have permission to view or use.
This opens up the realm of access control. Access controls are the doors and walls of the system.
Just as there are various methods for authenticating identity, there are a number of techniques that can
be used for controlling access to resources:
Mandatory Access Control (MAC) is a rule-based system for restricting access, often used
in high-security environments. In MAC, users do not have much freedom to determine who
has access to their files. For example, security clearance of users and classification of data (as
confidential, secret or top secret) are used as security labels to define the level of trust.
Discretionary Access Control (DAC) allows users to manipulate access settings of objects
under their control. In DAC, the data owner determines who can access specific resources.
For example, a system administrator may create a hierarchy of files to be accessed based on
certain permissions.
Role-based Access Control (RBAC) is determined by system policy and user role
assignment. RBAC allows access based on the job title. For example, a human resources
specialist should not have permissions to create network accounts; this should be a role
reserved for network administrators.
Rule-Based Access Control: An example of this would be only allowing students to use the
labs during a certain time of the day.
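The differences among these models can be made concrete in a short sketch contrasting who makes the decision in each case. The clearance levels, owners and role tables below are illustrative assumptions, not any product's schema.

```python
# MAC: a system-wide label hierarchy decides; the data owner has no say.
CLEARANCE = {"unclassified": 0, "confidential": 1, "secret": 2, "top secret": 3}

def mac_allows(user_clearance: str, data_label: str) -> bool:
    """MAC: access requires clearance at or above the data's classification."""
    return CLEARANCE[user_clearance] >= CLEARANCE[data_label]

def dac_allows(user: str, resource: dict) -> bool:
    """DAC: the data owner maintains the resource's access list."""
    return user == resource["owner"] or user in resource["shared_with"]

# RBAC: permissions attach to job roles, not to individuals.
ROLE_PERMISSIONS = {
    "network_admin": {"create_account", "reset_password"},
    "hr_specialist": {"view_payroll"},
}

def rbac_allows(role: str, action: str) -> bool:
    """RBAC: a user may do only what their assigned role permits."""
    return action in ROLE_PERMISSIONS.get(role, set())

print(mac_allows("secret", "confidential"))                             # True
print(dac_allows("bob", {"owner": "alice", "shared_with": {"carol"}}))  # False
print(rbac_allows("hr_specialist", "create_account"))                   # False
```

The last call mirrors the example in the text: an HR specialist's role carries no account-creation permission, so the request is denied regardless of who the individual is.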
Implementing Policy-Based Access Controls
Of these, RBAC is probably the most common in today’s network settings. By establishing the
bounds and rights of various role-based archetypes in an organization, administrators can easily define
access permissions for a particular job function and then assign that role to everyone in the
organization that performs that function. This eliminates the laborious and time-consuming task of
reevaluating access for every individual.
The way in which these schemes are applied to data and services can further fall into one of two
basic categories:
ACLs (often pronounced like “hackles” without the “h”) rely on labeling each object in a system
with a set of permissions designating what level of access various groups should be allowed. These
permissions often have finite levels of discretion; one group may be able to read an object, for
instance, but only members of another group can change or delete it.
Capability-based models rely on something like a virtual key fob: a token that is bestowed on a user account after authentication and verification, allowing the account to perform certain functions for a limited amount of time. Although secure, capability-based schemes are cumbersome to manage and highly centralized.
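A capability token of this kind can be sketched as an HMAC-signed, time-limited grant: the issuer signs what the holder may do and until when, and any tampering or expiry invalidates it. The signing key, field layout and lifetime below are assumptions for illustration.

```python
import hashlib
import hmac
import time

SERVER_KEY = b"demo-signing-key"  # placeholder secret held only by the issuer

def issue_capability(user: str, action: str, now: float, ttl: int = 300) -> str:
    """Mint a token granting `user` the right to perform `action` until expiry."""
    expires = str(int(now) + ttl)
    payload = f"{user}:{action}:{expires}"
    sig = hmac.new(SERVER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}:{sig}"

def check_capability(token: str, action: str, now: float) -> bool:
    """Accept the token only if the signature is intact, the action matches,
    and the expiry time has not passed."""
    try:
        user, granted, expires, sig = token.split(":")
    except ValueError:
        return False  # malformed token
    payload = f"{user}:{granted}:{expires}"
    expected = hmac.new(SERVER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return (hmac.compare_digest(sig, expected)
            and granted == action
            and now < int(expires))

now = time.time()
token = issue_capability("alice", "read_report", now)
print(check_capability(token, "read_report", now))         # True
print(check_capability(token, "delete_report", now))       # False: wrong action
print(check_capability(token, "read_report", now + 3600))  # False: expired
```

The expiry field is what makes the scheme time-limited, as the text describes; the centralization cost shows up in the single SERVER_KEY every verification depends on.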
Selecting the proper combination of identity and access control schemes to secure any particular
system requires knowledge and experience. Information security specialists that understand how the
pieces fit together generally have a background that includes studying cybersecurity at the graduate
level.
Occasionally folks forget about covering the fundamentals of security and start off down a rabbit
hole following some shiny new technology that turns out to be just a rat hole. With today's limited
security budgets you need to be sure that you've adequately covered your highest risk areas before
moving on to other things. The high-risk areas are, of course, not the same for everyone and will
change on you fairly frequently. The bad guys are always mixing it up; the attacks we see prevalent
today are not those that we saw just a few years ago. Hence this article, which takes a look at the top 5 security solutions you can put in place today to cover the widest scope of current and emerging threats. In many respects these solutions are considered obvious "no brainers", but you'd be surprised by how many companies (big and small) don't have them in place. Many times it is the obvious that temporarily escapes us (or at least escapes those holding the purse strings ☺).
These 5 items working together will stop more cyber attacks on your data, network and users
than any other 5 items in the marketplace today. There are lots of other very useful security solutions on the market, but when it comes to picking the top five most effective and readily available ones, here are my choices: