
VPN

The next topic we look at is VPN concentrators. In today's world we have lots of users working from home, and lots of users working within other organizations who periodically need connectivity to their own internal network. The first practice we talk about is the use of a VPN, a virtual private network. When you must have remote connectivity to your internal network, you need to use a virtual private network. The VPN ensures that your traffic moving across the internet travels within a virtual tunnel, such that malicious persons on the internet cannot eavesdrop on your network communications, hence keeping those communications secure. Packets coming through the VPN will be encrypted. These packets cannot be processed by certain devices on your network. Say, for example, a user is printing: the printer cannot handle encrypted traffic, and some PCs cannot handle encrypted traffic either. Hence, we need a VPN concentrator. The VPN concentrator is a processor-robust, memory-intensive device that can do encryption and decryption of packets as they enter and leave the network. The VPN concentrator ensures that traffic coming in from the VPN is decrypted and that traffic going out to the VPN is encrypted. If you have multiple users, this device can handle that traffic without slowing down the traffic moving through to the network or to the internet. The function of the VPN concentrator is to facilitate encryption and decryption of network packets as they enter or leave the network.

NW ADMINISTRATION
Hello, my name is John Oyeleke, subject matter expert for the Security+ exam. Welcome to cybrary.IT. Today we will be talking about network administration principles. We will be looking at flood guards, 802.1X, rule-based management, firewall rules, loop protection, implicit deny, log analysis and a few other topics. We'll start off with rule-based management. Rule-based management typically applies to our firewalls, where we set up a set of rules to dictate what type of traffic should be allowed in or out of the network across the firewalls. The network administrators follow policy to configure the firewalls in such a way that only traffic that is meant to leave the network can leave the network and only traffic that should come into the network can come through the firewalls. Based on these rules, which we call the firewall rules, the firewalls will manage traffic coming in and out. These rules are typically either to allow the traffic or to deny the traffic. A rule might apply to traffic to particular ports, traffic to particular IP addresses, or the type of files or packets that are actually moving across the network. The allow or deny decisions constitute your rules. These are your firewall rules: to allow traffic or to deny traffic. Where you just
deploy a firewall on your network without proper configuration, the default settings could be to allow all traffic, and this could be disastrous for your network. Malicious persons out on the internet, or even internal users with malicious intent, could then move packets in and out of the network. With your firewall rules, using rule-based management, you are able to dictate what can traverse your firewalls and what cannot leave your network.
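
To make rule-based management concrete, here is a minimal sketch in Python of how an ordered rule list with a default deny (the implicit deny discussed later in this section) might be evaluated. The rule fields, ports and addresses are hypothetical and not any particular firewall's syntax.

```python
# Minimal sketch of rule-based management: an ordered list of allow/deny rules
# evaluated top-down, with a default (implicit) deny when nothing matches.
# The rule fields and sample addresses below are hypothetical.
import ipaddress

RULES = [
    {"action": "allow", "protocol": "tcp", "port": 443, "source": "any"},         # HTTPS in
    {"action": "allow", "protocol": "tcp", "port": 25,  "source": "10.0.0.0/8"},  # internal mail relay
    {"action": "deny",  "protocol": "tcp", "port": 23,  "source": "any"},         # block telnet explicitly
]

def matches(rule, packet):
    """Return True if the packet matches the rule's protocol, port and source."""
    if rule["protocol"] != packet["protocol"] or rule["port"] != packet["port"]:
        return False
    if rule["source"] == "any":
        return True
    return ipaddress.ip_address(packet["source"]) in ipaddress.ip_network(rule["source"])

def decide(packet):
    """First matching rule wins; anything unmatched is implicitly denied."""
    for rule in RULES:
        if matches(rule, packet):
            return rule["action"]
    return "deny"  # implicit deny

print(decide({"protocol": "tcp", "port": 443, "source": "203.0.113.7"}))  # allow
print(decide({"protocol": "udp", "port": 53,  "source": "203.0.113.7"}))  # deny: no rule matched
```
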
You must also do secure router management. Secure router management involves securing our routers with passwords so that the routing tables within the routers cannot be maliciously changed without proper authorization. By ensuring that the routers have proper passwords to lock them down, we can maintain the integrity of the routing tables.
Otherwise, some malicious person could connect to your router and alter the routing tables thereby redirecting your traffic through other systems on the
network. Secure router management ensures that your routers are secure before you deploy them on your networks. It is bad practice to deploy our
routers exactly as we receive them, in their default configurations. Their default names could even be the name of the device. Malicious persons could do a simple search of that name on the internet and discover the default logon credentials used to configure the device. We must practice secure router configuration. This locks down our routers to secure the network. Access control lists are lists generated within our
systems or our servers, and also on the firewalls, to determine which users or systems can have access. Basically, the access control list determines the capabilities of a user or a system when it gains access to the network. This way we can dictate or limit what users can and cannot do during the period in which they have access to the network or to a system. The access control list simply shows what capabilities every system or user with access to the network can carry out.
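
As a small illustration only, not any real operating system's ACL format, an access control list can be thought of as a mapping from users or systems to the operations they may carry out on a resource. The resource names, users and permissions below are made up.

```python
# Hypothetical illustration of an access control list: for each resource,
# which users may perform which operations. Names and permissions are made up.
ACL = {
    "payroll-server": {"alice": {"read"}, "bob": {"read", "write"}},
    "printer-01":     {"alice": {"print"}, "carol": {"print"}},
}

def is_permitted(resource: str, user: str, operation: str) -> bool:
    """A user may act only if the ACL explicitly grants that operation."""
    return operation in ACL.get(resource, {}).get(user, set())

print(is_permitted("payroll-server", "bob", "write"))    # True
print(is_permitted("payroll-server", "alice", "write"))  # False: not granted, so denied
```
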
Then we talk about port security. When we discuss port security, we have to talk about logical security and physical security. For physical security, you want to have your devices locked down in such a way that you
are able to restrict physical access to the ports. You could lock devices within a cabinet to restrict physical access to their ports. You could also ensure that rooms are locked so that not just anybody and everybody has physical access to systems and can connect devices to ports. Logical ports you could disable within the system. You could even disable physical ports within the system if you go into the BIOS (basic input/output system). Within the BIOS settings you could disable your USB ports, for example; these are physical ports. Best practice is also to disable logical ports that are not in use, because if these ports are left open, malicious persons will discover them and use them, and certainly not in your favor. You
could also implement 802.1X. 802.1X is a port-based authentication standard that ensures rogue devices do not connect to our networks. If you implement 802.1X, which is done on the switches, not just anybody can connect a device to the switch or to the ports on the wall. It ensures anybody connecting a device must authenticate; no malicious person could just sneak into your building, plug a device into the port on the wall or the switch and go sit in the car park hoping to capture your network traffic, because they must authenticate. 802.1X is a port-based authentication standard to secure your network so that rogue devices cannot function even if they connect to your networks. Now we will look at flood guards. Flood guards could be
standalone devices or devices that are built into your firewall to help keep your network safe. We could have different types of floods: a ping flood, a SYN flood or other types of floods. Some malicious persons try to overwhelm your devices and servers by flooding those servers with requests. Usually, our networks allow a ping; a ping tests for connectivity, so if your network is configured for it, somebody could ping your servers or systems on your network to see if they are available. However, malicious persons are able to flood a server with pings in such a way that the server becomes overwhelmed trying to process those ping requests, thereby causing a denial of service attack. Once the server is overwhelmed with all the pings, it is unable to cater to legitimate requests. The same is true of a SYN flood: malicious persons could craft SYN packets in such a way that the machine becomes overwhelmed trying to process them. Administrators should ensure that their flood guards are activated so that when a flood guard detects a flood, it blocks further traffic and the servers are not overwhelmed trying to process these packets. This is to ensure we secure our networks against denial of service attacks.
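
Real flood guards work inside the firewall or as dedicated hardware, but as a rough sketch of the idea, a device could count requests per source over a short time window and block sources that exceed a threshold. The window, threshold and traffic below are arbitrary and chosen only for illustration.

```python
# Rough sketch of flood-guard logic: count requests per source IP over a short
# time window and block sources that exceed a threshold. Values are arbitrary.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 1.0
THRESHOLD = 100           # more than 100 requests per second looks like a flood
recent = defaultdict(deque)
blocked = set()

def handle_request(source_ip, now=None):
    """Return True if the request is accepted, False if the source is blocked."""
    now = time.monotonic() if now is None else now
    if source_ip in blocked:
        return False
    q = recent[source_ip]
    q.append(now)
    while q and now - q[0] > WINDOW_SECONDS:   # drop timestamps outside the window
        q.popleft()
    if len(q) > THRESHOLD:                     # too many requests too fast: treat as a flood
        blocked.add(source_ip)
        return False
    return True

# Simulated ping/SYN flood from one source
for i in range(150):
    handle_request("198.51.100.9", now=i * 0.001)
print("198.51.100.9 blocked:", "198.51.100.9" in blocked)  # True
```
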
When we set up our routers and switches on the network, these devices have an algorithm within them to deal with loops, and we do not want loops on our networks. If we were to accidentally create a loop, the spanning tree protocol could be implemented to prevent looping. This breaks any loops that are created, perhaps as a result of a configuration error on our networks. The spanning tree protocol is usually triggered to prevent the loops. Loops are not good for our networks; they allow traffic to keep circulating on the network, and this could ultimately bring your network to a standstill. Using the spanning tree protocol we are able to prevent loops. If we detect a loop, a link can be broken to stop the loop and keep your network
secure. We could also implement something called implicit deny. Implicit deny is a network security strategy in which we say that all traffic, unless explicitly allowed, should be deemed suspect. Under the principle of implicit deny, everything should be deemed suspect: unless explicitly allowed, nothing should be allowed. If, for example, a firewall has been configured to allow a certain type of traffic, you will say, "Allow such-and-such traffic," but any other traffic that is not explicitly allowed should be denied. That is what the principle of implicit deny dictates. Everything should be deemed suspect unless explicitly allowed. If it is not stated that this traffic can go through, it should be denied. We
must also do log analysis. To secure our networks, we must do log analysis. Everything that transpires on a network is captured in a log. We have different types of logs: event logs, incident logs, success logs and failure logs. You must do log analysis to check what transpired on the network: what failed, what was successful, which logons were successful, which accesses were successful, access to the printer, access to the server, access to systems. It is not good practice to capture logs and never analyze them. Logs can be very useful. Logs can tell you what has happened. Logs can also tell you what is happening, and by reviewing your logs carefully you could infer what might happen. Your logs are good for the past, the present and possibly the future. By doing careful log analysis you would go to your servers and review the logs within them. Best practice is that our logs are secured on systems that are NTFS based so that we can ensure their integrity. If you review these logs, you can tell what transpired on your network. An incident could have occurred when nobody was there, or an incident could occur that nobody sees even when people are there, but by reviewing the logs, you can determine what happened. It is good
practice that you review your logs. A lot of organizations today use tools for this. We have security information and event managers (SIEM). These are tools that allow you to capture precisely the logs that are most important to you, because you could have millions of log entries for activities taking place on your systems. It is good practice to use a solution that can alert you precisely to the specific logs that are important. That way you are not left with tons and tons of logs to review, which becomes a burden nobody wants to carry. Using a security information and event manager, you are able to bring to one interface all the logs that are of high priority to you.
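
As a small illustration of the kind of filtering a SIEM automates at much larger scale, you could scan logs and surface only the failed logons that matter. The log format, entries and alert threshold below are invented for the example.

```python
# Small illustration of log analysis: pull failed logons out of a log and count
# them per user. The log format and entries are invented for this example.
from collections import Counter

LOG_LINES = [
    "2024-05-01 09:01:12 LOGON SUCCESS user=alice src=10.0.0.5",
    "2024-05-01 09:02:40 LOGON FAILURE user=bob   src=10.0.0.9",
    "2024-05-01 09:02:41 LOGON FAILURE user=bob   src=10.0.0.9",
    "2024-05-01 09:02:43 LOGON FAILURE user=bob   src=10.0.0.9",
    "2024-05-01 09:05:02 PRINT  SUCCESS user=carol printer=hr-01",
]

failures = Counter()
for line in LOG_LINES:
    if "LOGON FAILURE" in line:
        user = line.split("user=")[1].split()[0]
        failures[user] += 1

for user, count in failures.items():
    if count >= 3:                       # alert threshold, chosen arbitrarily
        print(f"ALERT: {count} failed logons for {user}")
```
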
The last topic within the network administration principles is unified threat management. When we talk about unified threat management, it means you are bundling together all the solutions to manage threats within one device. You unify your threat management within one device. A lot of organizations are building what we call next generation devices or next generation firewalls. You have devices that have the firewall, intrusion detection system and intrusion prevention system all in one. You unify your threat management capabilities such that when you buy one box you have bought all the solutions. This is a beautiful strategy, but there are some downsides to such a solution. If you have one device to take care of your intrusion detection system, intrusion prevention system and your firewalls, you could have something called a single point of failure. If you have a single point of failure and that one device goes down, your firewall is gone, your detection system is gone and your prevention system is gone. As much as it appears to be a beautiful solution, the downside is that you could have a single point of failure. You unify all your threat management capabilities within one device. You could do your content filtering, your malware inspection, intrusion detection, intrusion prevention and your firewall all within one device. We refer to that as unifying your threat management. These are unified threat management devices.

NW DESIGN
We will now be looking at section 1.3 of the syllabus. This has to do with explaining network design elements and components: what the design is, the strategy for the design, and what components we use to achieve these strategic designs. The first item we look at is the DMZ, the demilitarized zone. Organizations have their internal network, and then there is the public network. We implement firewalls as the first line of defense. On this network, the organization has a web server. It is bad practice to put the web server on the internal network, because someone coming in across the internet would have access to that server on the inside of your network, and possibly access to all of the other servers as well. What you do is put that server in a zone protected by firewalls. In this scenario, a user coming across the internet can only reach this web server. This could be an online business, for example: we need your money, but we don't trust you, so we will put the server in what is called the DMZ, the demilitarized zone. It is a zone you allow outside users into without granting them access to your internal network. The firewalls will prevent traffic from the public coming into the internal network. This helps make the server available to users online while they have no access to your internal network, and this is what we call the demilitarized zone. Some organizations will put their web servers in a DMZ, and their e-mail servers in a DMZ so staff working from home can also access the e-mail server. Internal staff could also have access to such servers. This is a DMZ. Another item we look at is
remote access. Periodically, some of our staff might be required to work from home, or to work from other organizations to which we offer service. These people will be connecting remotely to the internal systems on our networks, so if we must do remote access, we want to ensure that our communications travel via a VPN, a Virtual Private Network, across the internet. The VPN tunnel guarantees confidentiality for traffic as it moves across the internet. For people connecting remotely, we also want to have servers in place: remote access servers that do authentication. Authentication is the process by which the system verifies that a user is who they say they are. We want to authenticate all users connecting remotely so that we can be sure our users are the ones connecting to our networks. We don't want unauthorized persons connecting to our networks. So, for remote access we implement these technologies to control who has access to our networks. Another topic we look at in this
section is telephony. At some point, users internal to the organization might need to correspond with users or customers outside the organization, so telephones come into use. Best practice should be followed within organizations. If we have desktop phones, phones in conference rooms and phones on the desks of all our users, we should have secure codes such that people on the inside who need to make calls have to punch in particular codes. This way, we can monitor who has use of such resources. You don't want people coming in, picking up any phone and making a long distance call, so each user should have an access code required to dial out of the network. We also have VoIP solutions, Voice over Internet Protocol. Organizations are embracing this technology these days to lower their phone bills: we digitize our voice packets and move them on the same data networks we already have in place. The reliance on regular phone networks is then reduced, lowering our overheads. However, whether we are using desktop phones or VoIP solutions, best practice is that we do voice encryption. If we encrypt our data packets to guarantee confidentiality, it is also good practice to encrypt our voice packets. This way, malicious persons cannot eavesdrop on our voice communications. There are all sorts of attacks; one type of attack is called war dialing. Some
organizations connect to the internet via telephone lines: they use modems to modulate the traffic coming from their systems into analogue signals on the telephone lines, and demodulate it at the other end back to their computers. What malicious persons can do is use software loaded with banks of numbers to call numbers within an organization randomly. Numbers that get picked up, they know belong to a desktop phone. Numbers that are not picked up they reserve for a possible attack later, since these could be the numbers of some modems. When you notice a large set of phones suddenly starting to ring collectively, at the same time, what type of attack could be in place? It could be a war dialing attack. Malicious persons are calling to see which phones get picked up and which do not, and that is what we call a war dialing attack. They use this to identify modems on the network so that they can possibly attack that network. One strategy to control access to
our networks is to implement network access control. The idea behind this is that you want to monitor the state of health of all your machines. All the machines accessing your network should meet a specific baseline, so you set the baseline on a server such that any system attempting to log on is reviewed by that server to see whether it meets the baseline. Periodically, some users disconnect from the network, possibly to go work remotely, or because they are on vacation for some reason or other. While they are disconnected and before they return to the network, it could be that they have been infected, or that there has been an update to the applications, drivers or other solutions in use on the network. When these systems return to connect to the network, we want to check them to see whether they meet the baseline. If they don't, we fix them before they connect. If they do, we allow access. If we follow this diagram, we can illustrate a typical example where network access control is implemented. This is not a
one solution strategy, this is just an illustration to show how we could do network access control. We have over here, a health check server. On this
server, we dictate our baseline. Now let’s put down here the user PC that is logging on. Up here we put a remediation server. We all know that on a
network, authentication takes place on the domain controller. We will have specified on this health check server the baseline, what version of explorer,
what versions of all programs we are using. We name and we put in there all the programs we are running, their versions, all the drivers. Everything is
populated on this health check server. So, what we see here is, if this PC, User PC, attempts to log on to the network, the user PC is directed to the
health check server. The health check server will review the machine: is it lacking some updates? Is it missing some applications, or has something moved from one version to another? Possibly, the last time this system connected to the network, we were running Internet Explorer version 7. Now we are running Internet Explorer version 11. Maybe some vulnerabilities were discovered in version 7 and we've upgraded to
version 11. When this person attempts to log on, the system is redirected to the health check server, which scans the machine. If the machine does not meet the baseline, it is directed to the remediation server. As the name suggests, this is the server at which the PC gets fixed. The remediation server then routes the PC back to the health check server to be checked again. Once the health check server deems the PC fit, the machine is allowed to connect to the domain controller. The user is able to log on and join the network. With this strategy in place, every machine gets inspected by the health check server to guarantee the state of health of all the computers connecting to the network. This is what we call network access control: you are controlling access to your network for all the devices connecting to it.
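
A minimal sketch of the health-check idea: compare what a connecting PC reports against the baseline defined on the health check server, and send it to remediation if anything falls short. The baseline items, values and the client's reported state are hypothetical.

```python
# Minimal sketch of a NAC health check: compare what a connecting PC reports
# against the baseline defined on the health check server. Values are hypothetical.
BASELINE = {
    "browser_version": 11,            # e.g. we have moved from version 7 to 11
    "antivirus_signatures": 20240501,
    "os_patch_level": 42,
}

def check_health(reported):
    """Return the list of items that do not meet the baseline (empty = compliant)."""
    return [k for k, required in BASELINE.items() if reported.get(k, 0) < required]

client = {"browser_version": 7, "antivirus_signatures": 20240501, "os_patch_level": 42}

failing = check_health(client)
if failing:
    print("Redirect to remediation server, fix:", failing)   # then re-check before joining
else:
    print("Compliant: allow connection to the domain controller")
```
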
Now we talk about virtualization. In the past, we had one operating system running on a machine. We could only run XP on this computer because only XP was installed. Later on we learned to partition machines.
When you have a partition you could install one operating system into one partition. So in this system, we are said to be multi-booting. We have
multiple partitions and we have multiple operating systems, different operating systems installed on each partition. In this scenario, when the system
starts off it will tell you, it will advertise to you all the operating systems installed but then you make a choice. Do you want to run windows XP, 7 or
Ubuntu? You can only run one operating system at a time. Even though you have multiple operating systems installed, you can only run one at a time.
Then we discovered virtualization. With virtualization you have your host PC, on which you install a hypervisor. The hypervisor is the software environment within which you can build other computers. You can build one computer within another computer using the hypervisor. The hypervisor will share resources with your host computer: resources like the processor, the memory, the ports, et cetera. These are the most important. Not all systems support virtualization, and even on those that do, you must enable virtualization in the BIOS before anyone can do virtualization on that system. You install the hypervisor on the host machine, it will share
resources with the host machine, and then you create your virtual machines. The host machine could be running Windows 7, and the virtual machines could be XP, Windows 7, Windows Vista, Server 2003, Server 2008, Server 2012, Ubuntu, another XP; you could even have SUSE Linux on there. Provided you have enough memory and a robust processor, it is possible to install operating systems in each of these machines and run them all at the same time. It is a beautiful solution: rather than having to buy ten or more separate boxes, you only buy one box and, using the hypervisor, you build the virtual machines all in one. Organizations are able to
save cost this way. We have different types of hypervisors out there, but that is not the focus of the exam. All we need to know about the hypervisor is that it is the software environment within which we build the virtual machines. Examples of hypervisors include Microsoft Virtual PC, Windows Virtual PC, Hyper-V, VirtualBox, VMware Workstation and VMware Fusion. Virtualization is here to stay. A lot of organizations have started to use virtualization for their servers, so it is good practice to learn to use these hypervisors. Most of them are available
online for free. Virtualization does offer some benefits. It allows organizations to save costs: rather than having to buy 12 boxes, the organization can buy one very solid box and build the other machines within that box as virtual machines. Organizations save cost in terms of hardware. They also save cost in terms of overheads like electricity; you don't need electricity for seven or eight boxes anymore, only for one box. What about saving cost in terms of licensing? You could have virtual applications. Take an application like Microsoft Office. Assume a particular application suite costs $500 and you have three thousand users who all need parts of Microsoft Office. You are not going to spend $500 in 3000 different places. You install these virtual applications on a server, and when users need one, they connect, use it and disconnect. This allows you to buy only a limited number of licenses yet service 3000 users. Maybe we buy for 600 users, and as people need the application, they connect, use it and disconnect. Rather than installing it for users who probably use it for only two or five minutes a day, we install virtual applications to save cost on licensing.
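
Using the rough figures above ($500 per suite, 3000 users, 600 concurrent licenses, all hypothetical), the saving is easy to work out:

```python
# Worked example of the licensing saving above. All figures are hypothetical.
price_per_license = 500
total_users = 3000
concurrent_licenses = 600            # enough because users connect, use, disconnect

cost_per_seat = price_per_license * total_users               # $1,500,000 if everyone got a copy
cost_virtualized = price_per_license * concurrent_licenses    # $300,000 with virtual applications
print("Saving:", cost_per_seat - cost_virtualized)            # 1200000
```
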
The use of virtualization also allows us to maximize our hardware. In many cases we buy machines that are very robust but we only use a fraction of their capabilities. With virtualization, we are going to make these machines really do the work
they are designed for. We maximize the use of this hardware. Virtualization gives a lot of organizations the ability to test software. So you want to test,
how will this software react with XP? Test it on the virtual machine. You could have a test machine, a test server that is hosting multiple operating
systems. When you get your drivers, when you get your updates, don’t just deploy them straight away, test them on these different operating systems
to see how they perform. Once you are satisfied, then you can move this to your production networks. This is virtualization. There are some security
concerns with virtualization. When we do virtualization, all the configurations we do to protect our host machines should also be done on our virtual machines, because the virtual machines are hosted on those host machines. If we don't secure our virtual machines, malicious persons could attack the virtual machines via the internet, take over the hypervisor, and cripple other virtual machines or possibly even your host PC. It is also possible that some of your staff might want to run prohibited software within a virtual machine, and it is much more difficult to detect people running prohibited software within a virtual machine. How do you protect against this? Prohibited software, maybe gambling software, is run within the virtual machine to hide the fact that they are running it. Administrators will go into the BIOS and disable the use of virtualization so that the user cannot even build a virtual machine. Once you disable the use of virtualization in the BIOS, you then lock your configuration with the BIOS password. When you lock the configuration with the BIOS password, you then lock the box with a padlock, a physical lock. You can see here that we are applying multiple layers of defense. This is what we call defense in depth. Otherwise, if you disable virtualization in the BIOS but do not lock the box with a keyed padlock, anyone with access to that box can simply remove a jumper on the motherboard, a very tiny piece of plastic. Once the jumper is removed, the system will forget the passwords. That way, the attacker can change the configuration settings to re-enable virtualization. We must have several layers of defense to secure our networks. In the previous topic we talked about virtualization and how to prevent
several layers of defense to secure our networks. In the previous topics we talked about virtualization. How to protect users or rather how to prevent
users from setting up virtual machines. If we were to look at defense in depth, using that example, our systems or resources could be configured in
such a way that we have multiple layers of defense around them. The idea is that when you have multiple layers of defense, the malicious person will
have to go through several layers to get at your resource. These layers will be protected by different types of technologies such that no one technology
can compromise all the layers. Our resource is protected by the BIOS configuration that disallows users from setting up virtual machines. We then lock the box with the padlock, another layer. We ensure that there is a door lock on that room, another layer. We ensure that we have CCTV, closed-circuit television, another layer, and possibly have guards providing physical security at the perimeter. You can see how, by applying several layers, we can protect our resource. This is what we call layered security or defense in depth. You apply several layers of different technologies that an attacker would have to defeat before they can get to the resource at the core of the strategy.

TCP/IP

We will be talking about TCP/IP: Transmission Control Protocol / Internet Protocol. We look at the TCP portion of the name first and then discuss the IP part. The Transmission Control Protocol is the protocol that is largely used for transmitting packets across the internet. It is widely preferred for transmitting packets from one system to another across the internet. We have a few properties of TCP that make it a protocol of choice. It is what is called a connection-oriented protocol, meaning that it establishes a logical connection in what we call a three-way handshake. It establishes a negotiation: there is a SYN and there are acknowledgments between TCP on both sides as to how packets will be sent, in what size, at what frequency and at what speed the transmissions will happen. TCP also does proper sequencing. It does proper sequencing of the packets sent from one system to another. This is essential so that it can track what packets have been sent and know what needs to be re-sent. It also has something we call the sliding window. The sliding window allows TCP to check whether messages that have been sent have been received. If they were not received, the messages will be resent; TCP will not go past one sliding window until the message has been acknowledged as
received. It would check with the other side to say, "Hey, did you get that packet?" That is the acknowledgment, so essentially TCP does something we call guaranteed delivery. These are very good properties of TCP, and as a result we say TCP is a reliable protocol. It does guaranteed delivery and sequencing, it is connection oriented, and it is preferred as the protocol of choice for delivery across the internet.
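
As a quick illustration, when an application opens a TCP connection in Python, the operating system performs the three-way handshake, sequencing, acknowledgements and retransmission described above on its behalf. The host name is just an example; any reachable web server would do.

```python
# Opening a TCP connection: the OS performs the SYN / SYN-ACK / ACK handshake,
# sequencing, acknowledgements and retransmission for us. The host is an example.
import socket

with socket.create_connection(("example.com", 80), timeout=5) as sock:
    # By the time create_connection() returns, the three-way handshake is complete.
    sock.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
    reply = sock.recv(1024)   # TCP guarantees ordered, reliable delivery of these bytes
    print(reply.decode(errors="replace").splitlines()[0] if reply else "no reply")
```
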
The IP part of the name TCP/IP is the Internet Protocol. The sole purpose of IP is logical addressing. We need to know the devices from which packets are coming and
we need to know devices through which packets are moving on the network. When we talk about IP, we have internet protocol version 4, internet
protocol version 6. These are 2 types of protocol completely different from each other. Basically they do achieve the same purpose because they are
used for addressing. Let us look at IPv4 in some detail, internet protocol version 4, this is a 32 bit address. It is expressed in decimals. It has 4 octets
and each octet is 8 bits long. The octets are punctuated by period signs. If we were to look at an IP address we will show an IP address could be
written in this way. Remember it’s in decimals, so we would say maybe, 192.168.10.150. We can see one octet, 2nd octet, the 3rd octet, the 4th octet.
Each octet is 8 bits long. When we look at the IP address, we could tell different classes of IP addresses. This is where another chart needs to be
learnt. We need to understand that by looking at the value of the first octet we can determine the class of the IP address. We have class A, class B and class C. Anything between 1 and 126 in the first octet is a class A IP address. 128 to 191 is a class B. 192 all the way to 223 is a class C. Remember, to find the class of an IP address we consider the value of the first octet. So if you look at this IP address, it would be considered a class C IP address because it falls in that range. Someone might ask, what about 127? We reserve 127 for loopback testing. We reserve 127 for loopback testing, so the address is read in this fashion: 192.168.10.150. You can tell the different classes of IP address.
This way, remember that each octet is 8 bits long punctuated by the period sign. When we talk about IPv4 addresses, we have something called
private IPv4 addresses. These are addresses that can only be used on your local intranet, used within your organization. We also have 3 classes of
that: class A, class B and class C private addresses. Private addresses cannot go to the internet, and the table reads this way: for class A we have 10.0.0.0 all the way to 10.255.255.255; next would be class B, which is 172.16.0.0 to 172.31.255.255; the last class is class C, which is 192.168.0.0 all the way to 192.168.255.255. These are referred to as private addresses. Organizations will use private addresses for hosts on their intranet, configuring devices such that they can function on the network.
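
As a small sketch of the rules just described (class by first octet, plus the private and loopback ranges), Python's standard ipaddress module can be used to check an address:

```python
# Determine the class of an IPv4 address from its first octet, and check the
# private and loopback ranges described above, using the standard library.
import ipaddress

def ipv4_class(address):
    first_octet = int(address.split(".")[0])
    if 1 <= first_octet <= 126:
        return "A"
    if first_octet == 127:
        return "loopback (reserved)"
    if 128 <= first_octet <= 191:
        return "B"
    if 192 <= first_octet <= 223:
        return "C"
    return "outside classes A-C"

for addr in ("192.168.10.150", "10.1.2.3", "172.20.0.5", "127.0.0.1", "8.8.8.8"):
    ip = ipaddress.ip_address(addr)
    print(addr, "class", ipv4_class(addr), "| private:", ip.is_private, "| loopback:", ip.is_loopback)
```
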
We now discuss the methods by which we assign IP addresses. The first method to assign an IP address to a system is what we call the manual method. The manual method of assigning IP addresses is also
regarded as static addressing. When we assign IP addresses manually, the administrator has to visit the system. It's a very lengthy procedure: you click on Start, you move to Control Panel, then Network and Sharing Center. You have to click Change adapter settings on the left hand side, and on the system you select the local area connection. You right-click on it, select Properties, a page pops up, and on that page you scroll down until you see IPv4. You click on the name IPv4, select Properties on the right hand side, and another box shows up. In that box you have two radio buttons at the top: obtain an IP address automatically, or use the following IP address. If you are going to do manual assignment you click on use the following IP address. The page then becomes active for you to punch in the IP address. This is a very lengthy procedure; if you have to do it for one system or five systems it's okay, but imagine if you have to do it for 3000 systems. Oh my goodness! That would be too difficult. It is also a method that is prone to errors. Some people type very fast, or they have very big fingers and make a mistake. If you make a typo while you're putting in the IP address, the system cannot function effectively on the network. Because of these errors, and because the method does not scale, we move away from manual assignment for
large networks, we go automatic. When we do automatic, we call it dynamic. You can see that the names have changed now. If you’re doing it
manually we say it’s static. It is static because if you assign the IP address, it doesn’t change. You can come back, unless someone else has changed
the IP address, the IP address remains what was assigned. However when we do automatic addressing, the addresses could change periodically,
hence we call it dynamic. So how do we do automatic addressing? We install something called the DHCP server. Dynamic Host Configuration Protocol.
The Dynamic Host Configuration Protocol is installed on a server. Through your server manager, you install the role for DHCP on the server and you then go ahead and authorize your server on the network. One of the things you want to create is something called a DHCP scope. A DHCP scope is a range of available IP addresses from which the server will lease out IP addresses. The key word is lease, so addresses are leased out. Usually the administrators will configure the lease period; the default lease period is 8 days. Administrators can change this to suit policy. Addresses are leased out from the DHCP scope to devices on the network. Having created your DHCP scope, you should also create what is called a reservation. The purpose of the reservation is to set aside some particular IP addresses. You have some network devices whose addresses you never want to change, devices like printers and servers on your network. Every time they make a request for an IP address, specific IP addresses can be assigned to these devices based on their MAC address. Using the addresses that are kept in the reservation, you can configure the DHCP server to assign specific addresses to such devices every time they request an
address. Using the DHCP we can do automatic addressing. It is dynamic because the lease would expire after a certain number of days. An IP address
would then change on the machine. Suppose a machine on your network attempts to get an IP address but, for some reason, the DHCP server is unavailable to lease one out. What will the system do? At this point, the system will self-assign something called an APIPA address. APIPA is Automatic Private IP Addressing. How do we recognize it? We recognize APIPA if you see an address on the system in the range 169.254.0.1 all the way to 169.254.255.254. The machine could use any one of those numbers. Usually it will do a test to see if the number is already in use on the network; if not, it will assign
itself that APIPA address. APIPA addresses only enable the system to access the local network; they are not internet routable, so an APIPA address will not allow a system access to the internet. So when you troubleshoot and you find your system has an APIPA address, you know something is wrong with your DHCP server or with the system's connectivity to it.
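
As a tiny troubleshooting sketch, an address in the 169.254.0.0/16 link-local range can be recognised with the standard ipaddress module; the addresses below are examples.

```python
# Troubleshooting sketch: an address in 169.254.0.0/16 is an APIPA (link-local)
# address, which usually means the machine could not reach a DHCP server.
import ipaddress

for addr in ("169.254.17.42", "192.168.10.150"):
    if ipaddress.ip_address(addr).is_link_local:
        print(addr, "-> APIPA: check the DHCP server or the machine's connectivity")
    else:
        print(addr, "-> normally assigned address")
```
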
We now look at IPv6, another type of IP address: Internet Protocol version 6. The Internet Protocol version 6 address is a 128-bit address. It is expressed in hexadecimal, which means we are going to be seeing both numbers and letters in IPv6. It has 8 quartets. This is a very important property of IPv6 we need to know: it has 8 quartets and it is punctuated by colon signs. If we have an
IPv6 address, count the quartets: 1, 2, 3, 4, 5, 6, 7, 8. You can see it is represented in hexadecimal, so you have letters and you have numbers. There are some rules we need to know when we write down IPv6 addresses. We can shrink the address by following some standard rules. You can drop a leading zero, but you cannot drop a trailing zero. If you see a trailing zero here, you can't drop it, but you can drop a leading zero. Wherever you have leading zeros, you can shrink them. If you happen to have zeros in multiple places like this, you can also shrink them down. If we were to compress this address, we could have something like this: we dropped the leading zeros and shrunk them down. And if you have quartets of all zeros following each other, we could shrink this down some more, so the address could appear like this, collapsed down to a double colon. We have to be very careful with this. Say we have 3 or 4 sets of all-zero quartets, maybe 2 over there and another 2 over here: once you collapse on one side, you can't do it on the other side. You can only perform this collapse once within an IPv6 address. So looking at the address and knowing the rule that it has 8 quartets, you count 1, 2, 3, 4, 5, 6, 7, 8 quartets. That tells you this is one set
of 4 zeros and that's another set of 4 zeros. Assigning IPv6 addresses manually can also be a tedious process, so we can also assign IPv6 addresses via DHCP. If we are doing DHCP for IPv6, we denote it as DHCPv6. This lets us know that we are doing DHCP for IPv6, and this is it for IPv4 and IPv6.
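
The compression rules above (drop leading zeros, collapse a single run of all-zero quartets to a double colon) are the same rules the Python standard library applies, so a short sketch can show both forms. The address is an arbitrary example.

```python
# IPv6 compression: leading zeros in a quartet may be dropped and one run of
# all-zero quartets may be collapsed to "::" (only once per address). The
# standard library applies these rules for us. The address is an example.
import ipaddress

full = "2001:0db8:0000:0000:0000:0000:0042:8329"
addr = ipaddress.ip_address(full)

print(str(addr))        # compressed form: 2001:db8::42:8329
print(addr.exploded)    # full 8-quartet form with all zeros restored
```
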

WIRELESS NETWORK

Now we'll be discussing section 1.5 of the Security+ syllabus. This objective is: given a scenario, troubleshoot security issues related to wireless networking. When we implement a wireless network, we usually put in place a wireless access point. The wireless access point is the device at which users gain access to the network wirelessly. It puts the network's signals in the air so devices can connect to the network by picking up these signals. One of the first things we do is antenna placement. Where do we put the antenna so that there is a good signal spread to all the devices on the network? We also want to consider the location of the antenna to limit physical access to it, so someone doesn't take it and walk off with the network. We have to do a site survey. We have 2 types of site survey: a formal site survey and an informal site survey. In those
site surveys, what we do is temporarily mount the access point and then test from all the devices to see the signal strength. If you are not happy with the signal strength, you move the access point around the room and continuously test again. In certain instances, the very fabric of your environment could impede signal spread; you could have cubicles and infrastructure interfering with your signals. When you're happy with the position of the antenna, because it is able to give good signal strength to all the machines, then you permanently mount the antenna. You must do a site survey; otherwise some people will suffer poor signal all the time while others enjoy very good signal. When we place our access point on the network, we also want to limit physical access to the routers or the access point, and to deal with the default SSID, admin password and admin ID. These devices, when they ship to you, will ship with the manufacturer's defaults. Best practice
is that we change these items. Your SSID is the Service Set Identifier. In many cases it will ship as just the model number and the name of the device, say Linksys 425 or Sony 123. Anybody checking on Google can tell, if they see the default name, what the admin ID and password probably are, and that way they know how to compromise your device. So, best practice: change the SSID to a name that does not describe you, so people who scan for available networks can't say, "Oh, this network is for this person. Let's focus the attack on that person." You change the admin ID as well, make it some name only you know, and change the password. In many cases the default admin ID would just be Admin and the password: password. Users can easily get into it. Best practice is that we change those configuration settings. By default the SSID, the Service
Set Identifier is set to “broadcast” so that when you search for available networks, you can see what network is available. You can see your network as
well but as you can see, so can other people. They see your network is available. “Okay let me attack this network” so people can choose and pick
what they want to attack because they can see it. Some people would say, as a form of security, you should disable SSID broadcast. Now your service is there but it is not broadcasting anymore. Only people who know your service is there can attempt to connect to it. If they want to connect, their system will prompt them: what is the SSID? What network do you want me to connect to? So at that point they have to state the SSID. The access point will get the request: "Okay, yes, I see you want to connect to me; what is the password?" The user puts in the password and gains access. The user must know that the access point exists before they attempt to make a connection. Once they've made a connection they can keep that connection in place, but the beautiful thing is the SSID is not set to broadcast anymore. Anyone just randomly scouting for SSIDs out there cannot detect it. It is just one layer of security. However, some malicious people know how to
compromise that. They wait for a system to make a request, at which point they know that an access point is out there with that SSID. Best practice is that we should also always use passwords on our access points. Unless the access point is for a public place like a library, the airport, the metro or a restaurant, where you can leave it open, for private or organizational use best practice is that we put a password in place. If you have no password on your access point, anybody and everybody will use your access point, possibly for malicious purposes, and when the police or the FBI come knocking, they're knocking on your door. Best practice: protect your access point with a password. If you leave your access point open, some people will use it as a launching pad to upload child porn to the internet, terabytes of illicit material going onto the internet. If the police come knocking, they're knocking on your door and taking you away. It might take you a few days to explain "Oh, I didn't do it," but consider what could happen to your name in the community. You could be labelled so many things that you're not. You could lose goodwill in the community. We have something called war driving. War driving is when you have people in a vehicle with wireless equipment,
driving around the neighborhood, detecting wireless networks. With the use of mobile devices, they’re able to drive around the neighborhood detecting
wireless networks. Your SSID is broadcasting, is singing to everybody that cares to listen. “I am available connect to me, I am available connect to me”.
These people are driving around the neighborhood. They are detecting all these networks. When they detect the networks, they could then plan to
attack the networks later. Detecting wireless networks by driving around the neighborhood is called war driving. Having found these networks, they could decide to put chalk marks on the fence, the pavement or the building. This is to help them identify: "Okay, we have a network here, it is secured. We have another network there; it is weak but secured. We have a network here; it is strong but possibly unsecured." By doing that, they are war chalking, so that they know where they have access to networks, which networks are strong, which networks are weak and which networks are
secure or unsecure. When you do that, you are said to be war chalking. At our wireless access point, it is also possible to implement some access control. We can implement access control on our wireless access point using the MAC addresses of the devices that connect. We can limit access to the access point based on the MAC addresses, and when we do this, we are said to be doing MAC filtering. On the wireless access point, we can populate the allow list with the MAC addresses of the specific devices we want to connect. We can also populate the deny list with devices we do not want to connect. That way, the wireless access point will always implement access control, only letting in devices on the allow list and blocking devices that are specifically denied. If we limit access based on MAC addresses, we are said to be doing access control. The MAC address is a 48-bit address unique to each device that can connect to a network.
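
A minimal sketch of MAC filtering as just described: the access point checks the connecting device's MAC address against an allow list and a deny list. The MAC addresses below are made up.

```python
# Minimal sketch of MAC filtering on an access point: devices on the allow list
# may associate, devices on the deny list may not, everything else is refused.
# The MAC addresses below are made up.
ALLOW = {"aa:bb:cc:dd:ee:01", "aa:bb:cc:dd:ee:02"}   # known laptops, printers, etc.
DENY  = {"de:ad:be:ef:00:01"}                        # explicitly banned device

def may_associate(mac):
    mac = mac.lower()
    if mac in DENY:
        return False
    return mac in ALLOW            # anything not explicitly allowed is refused

print(may_associate("AA:BB:CC:DD:EE:01"))  # True
print(may_associate("12:34:56:78:9a:bc"))  # False: not on the allow list
```

Keep in mind that MAC addresses can be spoofed, so MAC filtering is a layer of access control rather than strong security on its own.
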
On our wireless access point, best practice is also that we do encryption. We need to do encryption to guarantee the confidentiality and integrity of the data being sent in and out of
our networks. How can we do this? We have WEP. WEP was the earliest form of encryption we had for our wireless access points. However, WEP depends on RC4, and it was easily compromised because it relied on a very small set of keys. The problem is that when you encrypt messages, you run out of keys and start to repeat them, and every time you repeat keys, malicious persons can start to see a pattern in your sequence of keys. It was very easy for malicious persons to crack WEP because it used a limited number of keys for its encryption. With WEP cracked, we moved to WPA. WPA uses TKIP, the Temporal Key Integrity Protocol. This was put in place to address the problems with WEP, but some people found a way to compromise it; in a short time WPA was compromised as well. We went back to the drawing board and came up with WPA2. WPA2 uses CCMP, and to date this is the strongest form of encryption we can have on our wireless access point. Best practice today: please do not use WEP. It is very weak and can easily be cracked. At the very least you should have WPA or WPA2. If your devices cannot support WPA2, you could then use WPA, but even that has been compromised, so the best encryption we have is WPA2, which relies on CCMP. It is possible that sometimes you are trying to connect 2 devices or 2 buildings wirelessly. In this scenario, you are
pushing the wireless signals from one building to another using antennas. Maybe the antennas are positioned such that the signals do not reach the next antenna. What do you do to push the signals further? You could increase the power level controls. By increasing the power level controls, you increase the spread of your signals. It could also be that the signals are going too far; to reduce the spread of the signals, you decrease the power level controls. Where the question asks what you do to alter your signal spread, the answer could be to increase or decrease the power level controls. When we look at all of these collectively, we are able to implement effective wireless networks. We must do antenna placement and a site survey. You change the default parameters to values known only to yourself, and you could disable your SSID broadcast as a layer of security. Disabling your SSID broadcast does not make the network bullet proof; it is just an extra layer of security. You should also use passwords, and war driving can be discouraged by disabling your SSID broadcast: if you disable it, somebody attempting war driving cannot see your SSID because it is no longer broadcasting. You could also do MAC filtering to limit what devices can connect, and your encryption best practice should be WPA2, which depends on CCMP. To increase or decrease your signal spread, you increase or decrease your power level controls, and this is it for section 1.5.

Now we will be discussing security issues related to wireless networking. With wireless networks, the most important thing is our wireless access point. The wireless access point is a device with which we gain access to the network or to the internet wirelessly, usually a device we set up in the facility to put the signals in the air. The first thing to consider is antenna placement. Where do
we place the antenna? We have to give considerations for physical security. You don’t want to put your antenna where somebody could just walk in,
grab the antenna and walk away. You want to have physical security for your antenna. You also want to ensure that all users of that antenna have
good signal coverage. So you must do what we call a site survey. You carry out a site survey to determine the best position possible such that all your
users have very good access and signal strength. Then you have to give consideration to the default parameters that ship with the access point.
Parameters like the SSID. The Service Set Identifier. This could be in many cases just the product name itself. You want to change that. You also want
to change the default ID and the password with which the router is managed. You want to change all these because if you leave those, somebody
scanning the environment to see available networks could simply learn the name of the device and check out online the default parameters for such a
device. You want to change your SSID, you want to change the account name, and you want to change the password. When you change the SSID, it is also good practice not to give it your name, so someone does not know it is yours and specifically target you. Some random names and numbers could be used. The SSID is the Service Set Identifier, a name to identify your service. When people search for available networks, best practice is to prevent them from scanning and determining your SSID. In many cases broadcast is enabled by default; we could disable the broadcast so that somebody scanning to see available networks does not see your SSID. If you want to increase the
spread of your signals or reduce the spread of your signals, then you have to alter your power level controls. The power level controls increase or
reduce the spread of the signals. You can increase the power level controls to increase the range your signals can travel, or reduce them to reduce the spread of the signals. You could also determine that you need to do access control for your access point. Then we do MAC filtering: the ability to limit access to the access point based on the MAC address. The MAC address is a unique set of numbers for every device that can connect to your wireless access point or network. You could enable MAC filtering based on the MAC address of the device to prohibit or limit which devices have access to your wireless access point. When users attempt to connect to your wireless access point, you could also have captive portals. This is where an interface is presented through the browser in which users must identify themselves and provide their credentials, their username and password, before they are able to have access to the network. We call these captive portals. Encryption for our wireless access point is also very important. We have several types of encryption: WEP, WPA and WPA2. WEP is the weakest form of encryption. It is not advised these days to use WEP; it can easily be cracked. WEP is vulnerable to the IV attack, where short, repeated initialization vectors make it easy to crack. Rather, we would prefer to
do WPA, and WPA depends on TKIP, the Temporal Key Integrity Protocol. However, some people have been able to compromise that, so we move over to WPA2. WPA2 depends on CCMP. To date, this is the strongest form of encryption we have, so if your devices can support either WPA or WPA2, it is better to have these than to have WEP. Whatever you do, you must ensure you have some form of encryption on your access point. Otherwise your access point could be used to launch attacks on the internet or to upload prohibited material to the internet.

RISK RELATED CONCEPT

Now we review risk calculations. The first topic we look at here is the mean time to repair. Periodically, machines might fail on the network. A machine might crash, or a machine might shut down because of mechanical failure or some other sort of failure. The mean time to repair considers how long it will take to fix the device and put it back in production. The mean time to repair is the measure of the downtime that can be tolerated by the organization for that computer. Best practice is that when organizations and their administrators give a mean time to repair, it should include the time it takes to fix the device and also to test the device. The mean time between failures is a measure of how long I can use this device before it fails. The mean time between failures lets us know how long we can use the device before it fails, and it is the measure for devices that can fail and be repaired. You want to know how long you can use the device before it fails. The mean time to failure is similar to the mean time between failures; however, the mean time to failure is for devices that you do not plan to repair. You want to know how long you can use the device before it fails, and that is the end of that device. The mean time between failures and the mean time to failure can only be given by the manufacturers. We usually use these for purchase decisions: "Why should I buy this device and not that device?" You want to know the mean time between failures. It is not just about looking at the cheaper option; the cheaper option might not always be the best option for you. You want to consider how long you can use it before it fails. You don't want a device you pay a cheaper price for but have to fix every other day. You are not using it if you are fixing it; you are only using it when it is doing what it is meant to do. You want devices that have a higher mean time between failures. This allows you to make purchase decisions. And when you take devices offline, you want to know the mean time to repair: how soon can they be back online, how soon will they be back in production? This is determined by your network
administrators and that would also include the amount of time required for testing. Next, we talk about the analyzed loss expectancy. This is the
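As a worked illustration of how these figures feed a purchase decision, expected availability can be estimated as MTBF divided by (MTBF + MTTR); this formula is not stated in the lecture, and the hour figures below are invented for the example.

# Worked illustration: availability estimated from MTBF and MTTR.
# Availability = MTBF / (MTBF + MTTR). The hour figures are invented examples.

def availability(mtbf_hours, mttr_hours):
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Device A: fails rarely, 8 hours to repair and test.
print(f"Device A: {availability(10_000, 8):.4%}")  # about 99.92%
# Device B: cheaper, fails five times as often, same repair and test time.
print(f"Device B: {availability(2_000, 8):.4%}")   # about 99.60%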
Next, we talk about the annualized loss expectancy (ALE). This is a measure of how much, in cost terms, would be lost if an incident were to happen: if an incident occurs, what do you expect to lose per year? The calculation is done to derive the annualized loss expectancy, the loss you expect annually. In many cases the figures you are given might cover two years; in that case the ALE would be that figure divided by two, so that you can state the expected loss for each year. Then we talk about the single loss expectancy (SLE). Within any network, threats could exploit vulnerabilities. If a threat exploits a vulnerability, what do you expect to lose, in dollars, from that single event? That is your single loss expectancy: each time it happens, what do we lose? The annualized rate of occurrence (ARO) tells us how many times this happens in a year: what is the expected rate of occurrence annually? You do the calculations to know how many times a year a threat could exploit the vulnerability, and that is the ARO.
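These three terms fit together in the standard formulas SLE = asset value x exposure factor and ALE = SLE x ARO. The minimal sketch below uses invented dollar figures and rates purely for illustration.

# SLE = asset value x exposure factor; ALE = SLE x ARO.
# All figures here are invented for the example.

asset_value = 50_000      # value of the asset at risk, in dollars
exposure_factor = 0.40    # fraction of the asset lost each time the incident happens
aro = 2                   # annualized rate of occurrence: expected incidents per year

sle = asset_value * exposure_factor   # single loss expectancy per incident
ale = sle * aro                       # annualized loss expectancy

print(f"SLE: ${sle:,.2f}")   # $20,000.00 lost each time it happens
print(f"ALE: ${ale:,.2f}")   # $40,000.00 expected loss per year

# As noted above, a figure quoted over two years is divided by two to get the annual value.
print(f"ALE from a 2-year loss figure of $90,000: ${90_000 / 2:,.2f}")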
When we calculate risk, we can do quantitative analysis or qualitative analysis. With quantitative analysis we are calculating risk based on numeric values: there is a dollar amount, and we use numeric values to calculate how much loss is experienced. Qualitative analysis, on the other hand, is analysis based on experience or individual opinion. Qualitative analysis can be very subjective, because the same threat could be assessed by different people with different results; what some people experience could be different from what others experience of the same incident. These are the two forms of analysis we could do. Always bear in mind that when we use numeric values we are doing quantitative analysis: at the end of the day there is a numeric value, say, "If this happens, we lose 400 million." A numeric value is used to measure the risk. Where you instead say, "How much did we lose? We lost a lot of information, we lost reputation," you can only measure that based on experience, and that can be very subjective; that is qualitative analysis. We also have to define vulnerabilities, threat agents and risk.
A vulnerability is defined as the absence or weakness of a control. If your controls are there but they are weak, you have a vulnerability; if your controls are missing entirely, you also have a vulnerability. An example could be a lock on a door that random keys can arbitrarily open: it is a lock, it is a control, but it is a weak control. It could also be that the control is not there at all, there is no lock. The best definition of a vulnerability is the absence or weakness of a control. Vulnerabilities on your network could also come from missing patches, or from people not following best practice procedures: they leave at the end of the day without logging off their systems, or they leave to use the restroom without locking their screens. Those are vulnerabilities within the network. Threat agents are any agents that could exploit vulnerabilities; any entity that can exploit a vulnerability is a threat. Risk is defined as the likelihood that something negative will happen: there is a probability of it happening, it might happen or it might not. All these factors need to be considered, because when you have vulnerabilities it becomes probable that threat agents will be able to exploit them. That is the risk when we look at the network or the facility.
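One common way to put these definitions together, shown here only as a hypothetical sketch rather than anything from the syllabus, is to score each risk as likelihood multiplied by impact on simple 1-to-5 scales; the scales and example ratings below are invented.

# Hypothetical sketch: risk scored as likelihood x impact on 1-5 scales.
# The scales and the example ratings are invented for illustration.

def risk_score(likelihood, impact):
    return likelihood * impact

# A missing patch with a known exploit: likely to be exploited, high impact.
print(risk_score(likelihood=4, impact=5))  # 20 - treat as a high-priority risk
# An unlocked screen inside a badge-controlled office: less likely, moderate impact.
print(risk_score(likelihood=2, impact=3))  # 6 - lower priority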
The next topic for this portion is risk response: how do we respond to risk? There are several methods. One is to mitigate the risk: you put controls in place to reduce the impact, so that should the risk materialize, the effect is limited by those controls. You could also decide to transfer the risk: you buy insurance or take up a policy so that some other party is responsible for dealing with the outcome. You could avoid the risk by backing out of a planned activity because of the risk involved in it; maybe you planned to set up a factory at a location, then learned that the location is prone to natural environmental disasters, and you decided not to go ahead with the project. You have avoided that risk. With risk deterrence, you put in controls to deter the threats, such as warning signs, so that malicious actors take their actions elsewhere. Finally, there is risk acceptance: sometimes the cost of further controls would outweigh the benefits derived from the asset, so we decide to accept the remaining risk and do nothing more. In some instances we put controls in place up to a certain point and then say, "We have spent enough on controls; at this level we accept any other risk that might come into play." You cannot reduce risk to zero; rather, you reduce it to an acceptable level and then let it be, because you cannot justify spending much more on protecting the asset. That way you accept the risk. These are the methods by which we could respond to risk: mitigate the risk, transfer the risk, avoid the risk, deter the risk or accept the risk.
What are some risks with cloud computing? What is cloud computing? Cloud computing means carrying out your business and computing operations across the internet on other people's computers. We have several models: infrastructure as a service, platform as a service, software as a service, network as a service and security as a service. In all of these, your operations run on somebody else's computers across the internet, and there are risks inherent in that. One is confidentiality. Some of us use email where the mailboxes sit on corporate servers elsewhere in the world. Can we guarantee confidentiality? No; we do not know who is at the server looking at the emails. It is possible somebody else could be reading your emails on that server. You hope the provider is following best practice, but what if malicious individuals there have access to the server and can glean the content of your emails? We also cannot guarantee availability of the server. If the server goes down, you can only wait for the provider to bring it back up. Yes, you could have service level agreements saying the server should never go down, but it can still happen: the server could crash or be taken down for one reason or another. You cannot restore availability yourself by walking into the server room and turning it back on; if it were on your premises you could, but this is someone else's server, and you can only hope they bring it back up in time. Control of your data is in other people's hands. What are they doing with it? Are they copying it? Are they backing it up? You can only hope they are. Even if you have a service level agreement requiring them to back up your data, you want to be certain they follow best practice and that access to the backup location is limited. Security for your backup location should be the same as security for your primary location, because someone in possession of your backup tapes is as good as someone standing in front of your server. These are some security concerns with cloud computing. Cloud computing offers a lot of benefits, but we should bear in mind that these concerns could create problems for data confidentiality, data integrity and availability.
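To put the availability concern into numbers, the sketch below shows how much downtime per year a given service level agreement still permits; the SLA percentages are just examples, not figures from the lecture.

# Even a strong SLA still permits some downtime each year.
# Allowed downtime = (1 - SLA) x hours in a year. The percentages are examples.

HOURS_PER_YEAR = 365 * 24  # 8,760 hours

def allowed_downtime_hours(sla_percent):
    return (1 - sla_percent / 100) * HOURS_PER_YEAR

for sla in (99.0, 99.9, 99.99):
    print(f"{sla}% uptime still allows about {allowed_downtime_hours(sla):.2f} hours of downtime per year")
# 99.0%  -> about 87.60 hours
# 99.9%  -> about 8.76 hours
# 99.99% -> about 0.88 hours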
In a previous video we talked about virtualization. Virtualization is technology that allows us to build multiple computers within a hypervisor, that is, within a software environment residing on another computer. This way, we can run multiple virtual machines on one physical machine. There are also risks associated with virtualization. First, you have orphan virtual machines. The word orphan describes a child with no parents, and a child with no parents is not well taken care of. The same applies to virtual machines: if a machine is no longer in use or has been decommissioned but is still on the network, nobody is giving it updates. What if somebody comes along and uses it? A machine that is lacking updates is a good point of attack on the network, so orphan virtual machines introduce a vulnerability. Then you have VM escape. Virtual machines, especially those with access to the internet, should have the same level of security as your host machines; otherwise a malicious person could attack a virtual machine and break out of it to take over the hypervisor. From there it is possible to kill other virtual machines or even take over the host PC. Your host operating system should be secure, and at the same time you need to secure the individual virtual machines: give them updates, harden their operating systems and block unnecessary services or ports, so that malicious persons cannot get into an individual virtual machine, take over the hypervisor or control the host PC. It is also possible that some of your personnel might want to run prohibited software, and running it inside a virtual machine makes that use difficult to detect. Administrators should disable virtualization on machines that are not meant to have it. Every virtual system should be documented and accounted for; the point of the documentation is that we know where these machines exist on the network, we give them their required updates, and we harden them exactly as we would harden our host PCs.
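As a hypothetical sketch of that documentation point, you could compare the virtual machines seen on the network against the documented inventory to spot orphans; both lists below are made up for the example.

# Hypothetical sketch: spot orphan virtual machines by comparing what is on
# the network against the documented inventory. Both sets are invented examples.

documented_vms = {"web-01", "db-01", "test-03"}
vms_seen_on_network = {"web-01", "db-01", "test-03", "old-hr-vm"}

orphans = vms_seen_on_network - documented_vms
for vm in sorted(orphans):
    print(f"Orphan VM found: {vm} - not in the inventory, likely unpatched; investigate or decommission it.")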
Finally, there is the risk of not following the best practices and standards that govern virtual machines. Because we are running virtual machines, we must still pay for licenses: best practice is that if you run software, you pay for the licenses for that software, and the fact that it runs inside a virtual machine does not change that. Organizations have individuals who may blow the whistle and disclose that unlicensed software is in use, and organizations can be fined if they do not follow these practices and standards. Some standards, such as the Payment Card Industry requirements for credit and debit cards, require that certain customer information collected about customers be stored on separate machines. Suppose you have two virtual machines, A and B, on the same host PC, and you store credit card numbers on one and customer addresses on the other. Those two machines are still within the same physical vessel, which does not meet the intent of the standard; even in different virtual machines, the data sits in the same box, and somebody attacking that host could gain access to both A and B, compromising the confidentiality of data that should be protected by the payment card industry standards.
The final topic for section 2.1 is the RTO and the RPO: the recovery time objective and the recovery point objective. The recovery time objective, RTO, is a measure of the time within which we must recover a device that is down; it is the length of time the organization can tolerate a device or server being down. When you want to measure downtime, you look at the recovery time objective: how long can these machines be down before it becomes a concern? You must know your recovery time objective so that you can evaluate, "This device has been down for so long; at what point does that become a major concern for the organization?" The recovery point objective is about the point you recover from. Say a user has accidentally deleted all the emails in their inbox and asks the administrator, "Please can you recover my emails?" The administrator will gladly say, "Yes, but from what point should I recover?" That is the recovery point objective. "Take me back to January." "January of this year? January of last year, or the year before?" You want to know the point in time the data should be recovered from, and that is what we refer to as the recovery point objective. In short, the recovery time objective deals with how long it will take to do the recovery from the backup, while the recovery point objective deals with how far back into the backups we should go to recover you.
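As a small illustration of the recovery point idea, the sketch below measures how much data would be lost if a failure happened now, given the time of the last good backup; the timestamps and the four-hour objective are invented for the example.

# Illustration of the recovery point objective: how far back is the last good
# backup compared with the moment of failure? Timestamps and the 4-hour RPO
# are invented examples.

from datetime import datetime, timedelta

rpo = timedelta(hours=4)                     # agreed recovery point objective
last_backup = datetime(2016, 1, 15, 2, 0)    # last good backup (example)
failure_time = datetime(2016, 1, 15, 9, 30)  # moment of failure (example)

data_loss_window = failure_time - last_backup
print(f"Potential data loss window: {data_loss_window}")  # 7:30:00
print("RPO met" if data_loss_window <= rpo else "RPO exceeded - back up more often")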
This is the end of section 2.1 of the Security+ syllabus.
