
Q.1) A telecommunication network is a collection of diverse media supporting
communication between end-points. Explain in detail.
Network Technology Diversity
Modern telecommunications network systems can be viewed as consisting of the following two basic
types of technologies:
Circuit-switched—this includes legacy, circuit-switched systems that support traditional plain old
telephone services (POTS) and related voice and data services. The public switched telephone
network (PSTN) is the most significant example of deployed circuit-switched technology.
Packet-switched—this includes more modern, packet-switched systems that support Internet Protocol (IP)
and related voice, data, and multimedia services. In addition to the Internet as the most obvious example
of packet switching, the signaling network controlling the PSTN is itself a packet-switched system.
For the most part, both logical and physical diversity naturally exist between these two types of services,
largely because the two technologies are not interoperable. That is, the vast majority of equipment,
software, processes, and related infrastructure for these services are fundamentally different. Packets
cannot accidentally or intentionally spill into circuits, and vice versa.
Circuit-switched and packet-switched systems automatically provide diversity when compared to one
another.
From a networking perspective, what this means is that a security event that occurs in one of these
technologies will generally not have any effect on the other. For example, if a network worm is unleashed
across the Internet, as the global community experienced so severely in the 2003–2004 time frame, then
the likelihood that this would affect traditional time-division multiplexed (TDM) voice and data services
is negligible. Such diversity is of significant use in protecting national infrastructure, because it becomes
so much more difficult for a given attack such as a worm to scale across logically separate technologies.
Even with the logical diversity inherent in these different technologies, one must be careful in drawing
conclusions. A more accurate view of diverse telecommunications, for example, might expose the fact
that, at lower levels, shared transport infrastructure might be present. For example, many
telecommunications companies use the same fiber for their circuit-switched delivery as they do for IP-
based services. Furthermore, different carriers often use the same right-of-way for their respective fiber
delivery. What this means is that in many locations such as bridges, tunnels, and major highways, a
physical disaster or targeted terrorist attack could affect networks that were designed to be carrier diverse.
Unfortunately, vulnerabilities will always be present in IP-based and circuit-switched systems.
While sharing of fiber and right-of-way routes makes sense from an operational implementation and cost
perspective, one must be cognizant of the shared infrastructure, because it does change the diversity
profile. As suggested, it complicates any reliance on a multivendor strategy for diversity, and it also
makes it theoretically possible for an IP-based attack, such as one producing a distributed denial of
service (DDoS) effect, to degrade non-IP-based transport through sheer traffic volume.
This has not happened in practical settings to date, but because so much fiber is shared, it is certainly a
possibility that must be considered.
A more likely scenario is that a given national service technology, such as modern 2G and 3G wireless
services for citizens and business, could see security problems stemming from either circuit- or packet-
switched-based attacks. Because a typical carrier wireless infrastructure, for example, will include both a
circuit- and packet-switched core, attacks in either area could cause problems. Internet browsing and
multimedia messaging could be hit by attacks at the serving and gateway systems for these types of
services; similarly, voice services could be hit by attacks on the mobile switching centers supporting this
functionality. So, while it might be a goal to ensure some degree of diversity in these technology
dependencies, in practice this may not be possible.
Diversity may not always be a feasible goal.
What this means from a national infrastructure protection perspective is that maximizing diversity will
help to throttle large-scale attacks, but one must be certain to look closely at the entire architecture. In
many cases, deeper inspection will reveal that infrastructure advertised as diverse might actually have
components that are not. This does not imply that sufficient mitigations are always missing in non-diverse
infrastructure, but rather that designers must take the time to check. When done properly, however,
network technology diversity remains an excellent means for reducing risk. Many a security officer will
report, for example, the comfort of knowing that circuit-switched voice services will generally survive
worms, botnets, and viruses on the Internet.

Q.2) Explain Certificates and PKI in detail.


Public Key Infrastructure
The most distinct feature of Public Key Infrastructure (PKI) is that it uses a pair of keys to achieve the
underlying security service. The key pair comprises a private key and a public key.
Since public keys are in the open domain, they are prone to abuse. It is, thus, necessary to establish
and maintain some kind of trusted infrastructure to manage these keys.

Key Management
It goes without saying that the security of any cryptosystem depends upon how securely its keys are
managed. Without secure procedures for the handling of cryptographic keys, the benefits of the use of
strong cryptographic schemes are potentially lost.
It is observed that cryptographic schemes are rarely compromised through weaknesses in their design.
However, they are often compromised through poor key management.
There are some important aspects of key management which are as follows −
 Cryptographic keys are nothing but special pieces of data. Key management refers to the secure
administration of cryptographic keys.
 Key management deals with entire key lifecycle as depicted in the following illustration.
 There are two specific requirements of key management for public key cryptography.
 Secrecy of private keys − Throughout the key lifecycle, private keys must remain secret, known
only to the parties who own them or are authorized to use them.
 Assurance of public keys − In public key cryptography, public keys are in the open domain and seen as
public pieces of data. By default there are no assurances of whether a public key is correct, with whom it
can be associated, or what it can be used for. Thus key management of public keys needs to focus much
more explicitly on the assurance of purpose of public keys.
The most crucial requirement of ‘assurance of public key’ can be achieved through the public-key
infrastructure (PKI), a key management system for supporting public-key cryptography.
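The assurance gap can be made concrete with a minimal Python sketch. The parameters below are toy values chosen only so the code runs; real systems use standardized groups or elliptic curves. The point is that anyone can generate a mathematically valid key pair, so a public key by itself proves nothing about identity.

```python
import secrets

# Toy parameters: 2**127 - 1 is a Mersenne prime, fine for illustration
# but far too simple for real use.
P = 2**127 - 1
G = 3

def generate_key_pair():
    """Return (private, public) with public = G**private mod P."""
    private = secrets.randbelow(P - 2) + 1
    public = pow(G, private, P)
    return private, public

# Anyone can generate a valid pair and claim any identity with it;
# nothing in the numbers binds the key to its owner. Closing that gap
# is exactly the job of the PKI described below.
```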
Public Key Infrastructure (PKI)
PKI provides assurance of public key. It provides the identification of public keys and their distribution.
A PKI comprises the following components.
 Public Key Certificate, commonly referred to as ‘digital certificate’.
 Private Key tokens.
 Certification Authority.
 Registration Authority.
 Certificate Management System.
Digital Certificate
By analogy, a certificate can be considered the ID card issued to a person. People use ID cards such
as a driver's license or passport to prove their identity. A digital certificate does the same basic thing in the
electronic world, but with one difference.
Digital certificates are not only issued to people; they can be issued to computers, software packages,
or anything else that needs to prove its identity in the electronic world.
Digital certificates are based on the ITU standard X.509 which defines a standard certificate format for
public key certificates and certification validation. Hence digital certificates are sometimes also referred
to as X.509 certificates.
The client's public key is stored in the digital certificate by the Certification Authority (CA),
along with other relevant information such as client details, expiration date, intended usage, issuer, etc.
The CA digitally signs this entire information and includes the digital signature in the certificate.
Anyone who needs assurance about the public key and associated information of a client carries out
the signature validation process using the CA's public key. Successful validation assures that the public key
given in the certificate belongs to the person whose details are given in the certificate. The process of
obtaining a digital certificate by a person/entity is depicted in the following illustration. The CA, after duly
verifying the identity of the client, issues a digital certificate to that client.
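The issue-and-verify flow can be sketched as follows. This is a toy model: an HMAC over the certificate fields stands in for the CA's real digital signature (an actual X.509 CA signs with an asymmetric private key, e.g. RSA or ECDSA), but the structure of the check is the same.

```python
import hashlib, hmac, json

def issue_certificate(ca_key: bytes, subject: str, public_key: str) -> dict:
    """Toy CA: bind a subject name to a public key, then 'sign' the binding.
    HMAC is a stand-in for the CA's digital signature."""
    cert = {"issuer": "Toy CA", "subject": subject, "public_key": public_key}
    payload = json.dumps(cert, sort_keys=True).encode()
    cert["signature"] = hmac.new(ca_key, payload, hashlib.sha256).hexdigest()
    return cert

def verify_certificate(ca_key: bytes, cert: dict) -> bool:
    """Recompute the signature over everything except the signature
    itself; tampering with any bound field breaks verification."""
    body = {k: v for k, v in cert.items() if k != "signature"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(ca_key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cert["signature"])
```

Successful verification assures that the subject and public key fields are exactly the ones the CA bound together; changing either one invalidates the signature.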
Certifying Authority (CA)
As discussed above, the CA issues a certificate to a client and assists other users in verifying the certificate. The
CA takes responsibility for correctly verifying the identity of the client asking for a certificate to be
issued, ensures that the information contained within the certificate is correct, and digitally signs it.
Key Functions of CA
The key functions of a CA are as follows −
Generating key pairs − The CA may generate a key pair independently or jointly with the client.
Issuing digital certificates − The CA could be thought of as the PKI equivalent of a passport agency −
the CA issues a certificate after the client provides the credentials to confirm his identity. The CA then signs
the certificate to prevent modification of the details contained in it.
Publishing Certificates − The CA needs to publish certificates so that users can find them. There are two
ways of achieving this. One is to publish certificates in the equivalent of an electronic telephone
directory. The other is to send the certificate out to those people who might need it by one means
or another.
Verifying Certificates − The CA makes its public key available in the environment to assist verification of
its signature on clients' digital certificates.
Revocation of Certificates − At times, the CA revokes a certificate it has issued for some reason, such as
compromise of the private key by the user or loss of trust in the client. After revocation, the CA maintains a
list of all revoked certificates that is available to the environment.
Classes of Certificates
There are four typical classes of certificate −
Class 1 − These certificates can be easily acquired by supplying an email address.
Class 2 − These certificates require additional personal information to be supplied.
Class 3 − These certificates can only be purchased after checks have been made about the requestor's
identity.
Class 4 − These may be used by governments and financial organizations needing very high levels of trust.
Registration Authority (RA)
A CA may use a third-party Registration Authority (RA) to perform the necessary checks on the person or
company requesting the certificate to confirm their identity. The RA may appear to the client as a CA, but
it does not actually sign the certificate that is issued.
Certificate Management System (CMS)
It is the management system through which certificates are published, temporarily or permanently
suspended, renewed, or revoked. Certificate management systems do not normally delete certificates
because it may be necessary to prove their status at a point in time, perhaps for legal reasons. A CA along
with associated RA runs certificate management systems to be able to track their responsibilities and
liabilities.
Private Key Tokens
While the public key of a client is stored in the certificate, the associated private key can be stored
on the key owner's computer. This method is generally not adopted: if an attacker gains access to the
computer, he can easily obtain the private key. For this reason, the private key is instead stored on a secure
removable storage token, access to which is protected by a password.
Different vendors often use different and sometimes proprietary storage formats for storing keys. For
example, Entrust uses the proprietary .epf format, while VeriSign, GlobalSign, and Baltimore use the
standard .p12 format.
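The password-protection step can be sketched with Python's standard library. This is a toy construction, PBKDF2 plus a simple XOR stream, chosen only so it runs without third-party packages; real tokens and keystore formats wrap the key with an authenticated cipher such as AES-GCM.

```python
import hashlib, secrets

ITERATIONS = 100_000  # illustrative PBKDF2 work factor

def protect_key(private_key: bytes, password: str) -> dict:
    """Derive a keystream from the password with PBKDF2 and XOR it
    over the private key before storing it on the token."""
    salt = secrets.token_bytes(16)
    stream = hashlib.pbkdf2_hmac("sha256", password.encode(), salt,
                                 ITERATIONS, dklen=len(private_key))
    blob = bytes(a ^ b for a, b in zip(private_key, stream))
    return {"salt": salt, "blob": blob}

def recover_key(protected: dict, password: str) -> bytes:
    """XOR is its own inverse, so re-deriving the stream recovers the
    key. A wrong password yields garbage rather than an error here;
    real formats also authenticate the result."""
    stream = hashlib.pbkdf2_hmac("sha256", password.encode(),
                                 protected["salt"], ITERATIONS,
                                 dklen=len(protected["blob"]))
    return bytes(a ^ b for a, b in zip(protected["blob"], stream))
```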
Hierarchy of CA
With vast networks and the requirements of global communications, it is practically not feasible to have only
one trusted CA from which all users obtain their certificates. Secondly, the availability of only one CA may
lead to difficulties if that CA is compromised.
In such cases, the hierarchical certification model is of interest, since it allows public key certificates to be
used in environments where two communicating parties do not have trust relationships with the same CA.
The root CA is at the top of the CA hierarchy and the root CA's certificate is a self-signed certificate.
The CAs, which are directly subordinate to the root CA (For example, CA1 and CA2) have CA
certificates that are signed by the root CA. The CAs under the subordinate CAs in the hierarchy (For
example, CA5 and CA6) have their CA certificates signed by the higher-level subordinate CAs.
Certificate authority (CA) hierarchies are reflected in certificate chains. A certificate chain traces a path of
certificates from a branch in the hierarchy to the root of the hierarchy. The following illustration shows a
CA hierarchy with a certificate chain leading from an entity certificate through two subordinate CA
certificates (CA6 and CA3) to the CA certificate for the root CA. Verifying a certificate chain is the
process of ensuring that a specific certificate chain is valid, correctly signed, and trustworthy. The
following procedure verifies a certificate chain, beginning with the certificate that is presented for
authentication −
 A client whose authenticity is being verified supplies its certificate, generally along with the
chain of certificates up to the Root CA.
 The verifier takes the certificate and validates it using the public key of the issuer. The issuer's
public key is found in the issuer's certificate, which is next to the client's certificate in the chain.
 If the higher CA that signed the issuer's certificate is trusted by the verifier, verification is
successful and stops here.
 Otherwise, the issuer's certificate is verified in the same manner as the client's certificate in the
steps above. This process continues until either a trusted CA is found along the way or the Root CA is reached.
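The chain-walking procedure above can be sketched in Python. This is a toy model: an HMAC stands in for real signature verification, and a dictionary lookup stands in for "the issuer's public key found in the next certificate in the chain," but the leaf-first walk and the stop-at-trusted-CA rule are the same.

```python
import hashlib, hmac, json

def _sig(key: bytes, body: dict) -> str:
    """Stand-in for a CA signature over the certificate body."""
    return hmac.new(key, json.dumps(body, sort_keys=True).encode(),
                    hashlib.sha256).hexdigest()

def make_cert(subject: str, issuer: str, issuer_key: bytes) -> dict:
    body = {"subject": subject, "issuer": issuer}
    return {**body, "signature": _sig(issuer_key, body)}

def verify_chain(chain, trusted_roots, issuer_keys) -> bool:
    """Walk the chain leaf-first. Each certificate must verify under
    its issuer's key; succeed as soon as a trusted CA is reached."""
    for cert in chain:
        body = {"subject": cert["subject"], "issuer": cert["issuer"]}
        expected = _sig(issuer_keys[cert["issuer"]], body)
        if not hmac.compare_digest(expected, cert["signature"]):
            return False  # a signature in the chain does not check out
        if cert["issuer"] in trusted_roots:
            return True   # reached a CA the verifier already trusts
    return False  # ran out of certificates before reaching a trusted CA
```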

Q.3) Explain Wireless Eavesdropping with examples.


Eavesdropping Attack
An eavesdropping attack, also known as a sniffing or snooping attack, is the theft of information as it is
transmitted over a network by a computer, smartphone, or another connected device.
The attack takes advantage of unsecured network communications to access data as it is being sent or
received by its user.
Key Takeaways
 Avoid public Wi-Fi networks.
 Keep your antivirus software updated.
 Use strong passwords.
Eavesdropping is a deceptively mild term. The attackers are usually after sensitive financial and business
information that can be sold for criminal purposes. There is also a booming trade in so-called spouseware,
which allows people to eavesdrop on their loved ones by tracking their smartphone use.
Understanding the Eavesdropping Attack
An eavesdropping attack can be difficult to detect because the network transmissions will appear to be
operating normally.
To be successful, an eavesdropping attack requires a weakened connection between a client and a server
that the attacker can exploit to reroute network traffic. The attacker installs network monitoring software,
the "sniffer," on a computer or a server to intercept data as it is transmitted.
Amazon Alexa and Google Home are vulnerable to eavesdropping, as are any internet-connected
devices. Any device in the network between the transmitting device and the receiving device is a point of
weakness, as are the initial and terminal devices themselves.
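A short Python sketch shows what a sniffer recovers from an unencrypted login. The payload below is simulated, not captured traffic; the point is that without encryption the credentials sit in the raw bytes and can be parsed straight out.

```python
from urllib.parse import parse_qs

# Simulated capture of an unencrypted HTTP login request.
captured_payload = (
    b"POST /login HTTP/1.1\r\n"
    b"Host: example.com\r\n"
    b"Content-Type: application/x-www-form-urlencoded\r\n"
    b"\r\n"
    b"username=alice&password=hunter2"
)

def extract_credentials(payload: bytes) -> dict:
    """What the 'sniffer' sees: split off the HTTP body and parse the
    form fields directly out of the plaintext."""
    body = payload.split(b"\r\n\r\n", 1)[1].decode()
    return {key: values[0] for key, values in parse_qs(body).items()}
```

With TLS or a VPN in place, the same capture would contain only ciphertext, which is why the defenses against eavesdropping all center on encryption.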
How to Foil an Eavesdropping Attack
Eavesdropping attacks can be prevented by using a personal firewall, keeping antivirus software updated,
and using a virtual private network (VPN).
Using a strong password and changing it frequently helps, too. And don't use the same password for every
site you log onto.
Public Wi-Fi networks such as those that are available free in coffee shops and airports should be
avoided, especially for sensitive transactions. They are easy targets for eavesdropping attacks. The
passwords for these public networks are readily available, so an eavesdropper can simply log on and,
using free software, monitor network activity and steal login credentials along with any data that other
users transmit over the network. If your Facebook or email account has been hacked lately, this is
probably how it happened.
Virtual Assistants Can Be Spied Upon
Virtual assistants such as Amazon Alexa and Google Home also are vulnerable to eavesdropping and
their "always-on" mode makes them difficult to monitor for security.
(Some reported incidents that make it appear that the companies carried out the snooping themselves
appear to have been accidents caused by mistakes in speech recognition.)
Avoid Dodgy Links
Another way to limit your vulnerability to an attack is to make sure your phone is running the most recent
version available of its operating system. However, its availability is up to the phone vendor, who may or
may not be efficient about offering the update.
Even if you do all of the above, you have to be careful from day to day. Avoid clicking on dodgy links.
The sites they link to may install malware on your device. Download apps only from the official Android
or Apple stores.

Q.4) Explain Recovering Language from Encrypted VoIP.


Voice over Internet Protocol (VoIP)
Voice over Internet Protocol (VoIP), also called IP telephony, is a method and group of technologies for
the delivery of voice communications and multimedia sessions over Internet Protocol (IP) networks, such
as the Internet. The terms Internet telephony, broadband telephony, and broadband phone
service specifically refer to the provisioning of communications services (voice, fax, SMS, voice-
messaging) over the public Internet, rather than via the public switched telephone network (PSTN), also
known as plain old telephone service (POTS).
Protocols
Voice over IP has been implemented with proprietary protocols and protocols based on open standards in
applications such as VoIP phones, mobile applications, and web-based communications.
A variety of functions are needed to implement VoIP communication. Some protocols perform multiple
functions, while others perform only a few and must be used in concert. These functions include:
Network and transport – Creating reliable transmission over unreliable protocols, which may involve
acknowledging receipt of data and retransmitting data that wasn't received.
Session management – Creating and managing a session (sometimes glossed as simply a "call"), which
is a connection between two or more peers that provides a context for further communication.
Signaling – Performing registration (advertising one's presence and contact information) and discovery
(locating someone and obtaining their contact information), dialing (including reporting call progress),
negotiating capabilities, and call control (such as hold, mute, transfer/forwarding, dialing DTMF keys
during a call [e.g. to interact with an automated attendant or IVR], etc.).
Media description – Determining what type of media to send (audio, video, etc.), how to encode/decode
it, and how to send/receive it (IP addresses, ports, etc.).
Media – Transferring the actual media in the call, such as audio, video, text messages, files, etc.
Quality of service – Providing out-of-band content or feedback about the media such as synchronization,
statistics, etc.
Security – Implementing access control, verifying the identity of other participants (computers or
people), and encrypting data to protect the privacy and integrity of the media contents and/or the control
messages.
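As a concrete illustration of the media function, the sketch below packs the 12-byte fixed header of RTP, the protocol most VoIP systems use to carry audio (field layout per RFC 3550; no extensions or CSRC list, and the payload itself is omitted).

```python
import struct

def rtp_header(payload_type: int, seq: int, timestamp: int, ssrc: int) -> bytes:
    """Pack the 12-byte fixed RTP header:
    byte 0: version (2), padding, extension, CSRC count (all zero here)
    byte 1: marker bit (0) and 7-bit payload type
    then 16-bit sequence number, 32-bit timestamp, 32-bit SSRC."""
    vpxcc = 2 << 6                 # version 2, P=0, X=0, CC=0
    m_pt = payload_type & 0x7F     # marker bit cleared
    return struct.pack("!BBHII", vpxcc, m_pt, seq, timestamp, ssrc)
```

The encoded audio frames would follow this header in each datagram; RTCP, the companion protocol, carries the out-of-band quality-of-service statistics mentioned above.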

VoIP Monitoring Systems


One of the crucial elements of disaster recovery is notification. How do you know when to kick your
backup plans into action? VoIP monitoring systems can help with that. There are several monitoring tools
available for on-premises SIP deployments. If you purchase cloud services from an external vendor, ask
them if they have such tools for your use. Even a few minutes of downtime can be costly if you do a lot of
business over the phone. Being proactive will only help you recover faster.
Automatic Failover
Reputable VoIP service providers will offer automatic failover protection. That means if calls cannot be
completed for any reason, the system will roll over to an alternate solution. If you don’t use hosted
services, you might need to create your own failsafe systems.
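A failsafe selector might be sketched like this. The endpoint names and the health-check callback are illustrative assumptions; in practice the probe would be something like a SIP OPTIONS ping against each endpoint.

```python
def select_endpoint(endpoints, is_healthy):
    """Automatic failover sketch: walk the endpoint list in priority
    order and return the first one whose health check passes. None
    means every route is down and the DR plan should kick in."""
    for endpoint in endpoints:
        if is_healthy(endpoint):
            return endpoint
    return None
```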
Do you automatically forward calls to your employees’ mobile devices? Not everyone likes to get
business calls on their personal phones. Will you allow your staff to use their personal VoIP services
instead? If yes, you will have to reimburse them for business calls. You may have to create DR plans in
conjunction with other policies such as BYOD, corporate expense reimbursement, storm evacuation, etc.

Q.5) Explain End-To-End Argument and Security.


The end-to-end principle is one of the central design principles of the Internet and is implemented in the
design of the underlying methods and protocols in the Internet Protocol Suite. It is also used in other
distributed systems. The principle states that, whenever possible, communications protocol operations
should be defined to occur at the end-points of a communications system, or as close as possible to the
resource being controlled.

According to the end-to-end principle, protocol features are only justified in the lower layers of a system
if they are a performance optimization, hence, Transmission Control Protocol (TCP) retransmission for
reliability is still justified, but efforts to improve TCP reliability should stop after peak performance has
been reached.

The concept and research of end-to-end connectivity and network intelligence at the end-nodes reaches
back to packet-switching networks in the 1970s, cf. CYCLADES. A 1981 presentation entitled End-to-
end arguments in system design by Jerome H. Saltzer, David P. Reed, and David D. Clark, argued that
reliable systems tend to require end-to-end processing to operate correctly, in addition to any processing
in the intermediate system. They pointed out that most features in the lowest level of a communications
system have costs for all higher-layer clients, even if those clients do not need the features, and are
redundant if the clients have to implement the features on an end-to-end basis.

This leads to the model of a dumb, minimal network with smart terminals, a completely different model
from the previous paradigm of the smart network with dumb terminals.

In 1995, the Federal Networking Council adopted a resolution defining the Internet as a “global
information system” that is logically linked together by a globally unique address space based on the
Internet Protocol (IP) or its subsequent extensions/follow-ons; is able to support communications using
the Transmission Control Protocol/Internet Protocol (TCP/IP) suite or its subsequent extensions/follow-
ons, and/or other IP-compatible protocols; and provides, uses or makes accessible, either publicly or
privately, high level services layered on this communications and related infrastructure.

In the Internet Protocol Suite, the Internet Protocol is a simple (“dumb”), stateless protocol that moves
datagrams across the network, and TCP is a smart transport protocol providing error detection,
retransmission, congestion control, and flow control end-to-end. The network itself (the routers) needs
only to support the simple, lightweight IP; the endpoints run the heavier TCP on top of it when needed.
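This division of labor can be sketched with a stop-and-wait retransmission loop: the endpoint, not the network, takes responsibility for reliability. The `unreliable_send` callback below is an illustrative stand-in for a lossy datagram channel that returns True only when an acknowledgment comes back.

```python
def send_reliably(message, unreliable_send, max_tries=10):
    """Stop-and-wait retransmission at the endpoint: keep resending
    until the peer acknowledges. The 'dumb' network in between is
    free to drop datagrams; correctness lives at the edges."""
    for attempt in range(1, max_tries + 1):
        if unreliable_send(message):
            return attempt  # number of tries it took to get through
    raise TimeoutError(f"no acknowledgment after {max_tries} tries")
```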

A second canonical example is that of file transfer. Every reliable file transfer protocol and file transfer
program should contain a checksum, which is validated only after everything has been successfully stored
on disk. Disk errors, router errors, and file transfer software errors make an end-to-end checksum
necessary. Consequently, there is a limit to how reliable the TCP checksum needs to be, because any
robust end-to-end application must implement its own checksum anyway.
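The file-transfer argument can be sketched directly: compute a digest at the source, re-read what was actually stored, and compare end to end. This is a minimal sketch using SHA-256; the argument holds for any checksum.

```python
import hashlib, os, tempfile

def transfer_and_verify(data: bytes) -> bool:
    """End-to-end check: hash the source bytes, 'transfer' them to
    disk, re-read what actually landed, and compare digests. Per-hop
    checks such as the TCP checksum cannot catch a disk or software
    error that happens after the network hands the data over."""
    expected = hashlib.sha256(data).hexdigest()
    fd, path = tempfile.mkstemp()
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)          # the 'store on disk' step
        with open(path, "rb") as f:
            stored = f.read()      # re-read the stored copy
        return hashlib.sha256(stored).hexdigest() == expected
    finally:
        os.remove(path)
```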

A third example (not from the original paper) is the EtherType field of Ethernet. An Ethernet frame does
not attempt to provide interpretation for the 16 bits of type in an original Ethernet packet. To add special
interpretation to some of these bits would reduce the total number of EtherTypes, hurting the scalability of
higher-layer protocols, i.e., all higher-layer protocols would pay a price for the benefit of just a few.
Attempts to add elaborate interpretation (e.g. IEEE 802 SSAP/DSAP) have generally been ignored by
most network designs, which follow the end-to-end principle.
