
EEEM023 Coursework Assignment (2022-2023)

Assignment Component 1: SNMP lab report

Introduction:

The following code, SnmpGetNext.java, is an example program that demonstrates how to perform a basic SNMP operation using the com.adventnet.snmp.snmp2 package of the AdventNet SNMP API. Specifically, this program shows how to perform the GET NEXT operation. The program is run from the command line with arguments for the hostname and OID(s) of the desired SNMP agent(s); it then uses the SNMP API to connect to the agent(s) and retrieve the specified information.

The code starts by importing the necessary classes and packages from the Java language and the AdventNet SNMP API. The main method of the program takes two arguments: the remote hostname of the SNMP agent and the OID of the object to be retrieved. The API is started, and a new SNMP session is created to communicate with the SNMP agent. The session is then opened, and the community string is set to "teaching labs", which is the default community for the host.

Next, the program initializes some variables and sets up an ArrayList of OIDs to be retrieved from the agent. The program then enters a loop that retrieves the specified OIDs and performs calculations on the returned values. Specifically, it retrieves information related to TCP and IP traffic and calculates the average throughput and the UWMA (weighted moving average) of the data, printing the calculated values to the console. The program continues to loop and retrieve the specified OIDs until the user stops it.
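The throughput calculation described above can be sketched in Java as follows. This is a minimal illustration under my own assumptions (a 32-bit Counter32-style octet counter and a fixed polling interval); it is not the exact method used in SnmpGetNext.java.

```java
// Sketch of the throughput calculation: the SNMP agent exposes a
// monotonically increasing octet counter (e.g. ifInOctets), so
// throughput is the counter delta divided by the polling interval.
// Counter wrap handling assumes a 32-bit Counter32-style counter.
public class Throughput {
    static final long COUNTER_MAX = 4294967295L; // 2^32 - 1

    // Returns throughput in octets per second between two polls.
    static double throughput(long prevOctets, long currOctets, double intervalSeconds) {
        long delta = currOctets - prevOctets;
        if (delta < 0) {                  // counter wrapped around
            delta += COUNTER_MAX + 1;
        }
        return delta / intervalSeconds;
    }

    public static void main(String[] args) {
        // Two polls 5 seconds apart: 1,000,000 octets transferred.
        System.out.println(throughput(2_000_000L, 3_000_000L, 5.0)); // 200000.0
    }
}
```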

Outline of program design in the form of pseudocode:


// Initialize variables
String agent = "192.168.0.1";
String community = "public";
String oid = "1.3.6.1.2.1.2.2.1.10";

// Create SnmpAPI instance
SnmpAPI api = new SnmpAPI();

// Start SnmpAPI
api.start();

// Create SnmpSession
SnmpSession session = new SnmpSession(api);

// Open session with agent
session.open(agent, community);

// Initialize ArrayList for retrieved values
ArrayList<Integer> values = new ArrayList<Integer>();

// Loop to retrieve and analyze SNMP data
while (true) {
    // Retrieve next value of OID using GETNEXT operation
    SnmpPDU pdu = session.snmpGetNext(oid);

    // Check if an error occurred
    if (pdu.getErrstat() != 0) {
        System.out.println("Error: " + pdu.getError());
        break;
    }

    // Add retrieved value to ArrayList
    values.add((Integer) pdu.getVariable().toValue());

    // Perform calculations based on retrieved values
    int throughput = calculateThroughput(values);
    int uwma = calculateUwma(values);
    int windowSize = calculateWindowSize(values);

    // Print calculated values to console
    System.out.println("Throughput: " + throughput);
    System.out.println("UWMA: " + uwma);
    System.out.println("Window Size: " + windowSize);
}

// Close session
session.close();

// Stop SnmpAPI
api.stop();

Outline of program design in the form of a flow chart:

(flow chart figure)

Output Validation:
When I first set out to validate the results of my network traffic monitoring program, I knew that I needed to display the calculated values of network parameters in a clear and concise way. That is why I made sure to format the output of the program to display the values of throughput, window size, and UWMA of TCP traffic on the console. By doing this, I was able to easily see the values of these parameters in real time as the program retrieved new SNMP data from the specified OID.

Of course, I also knew that the accuracy of the program's output depended on the SNMP data available on the network device being monitored. To ensure that I was getting the most accurate data possible, I ran the program on a machine with all the necessary dependencies and a properly configured SNMP device. This helped me to make sure that the program was producing reliable and accurate results.

Once I had the program up and running, I turned my attention to validating the results of the UWMA calculations. To do this, I compared the UWMA values obtained from the program with manually calculated values. This involved using a mathematical formula to calculate the UWMA by hand, and then comparing the manually calculated values with the values produced by the program.

The formula I used to calculate UWMA is:

UWMA_t = ((1 - α) * UWMA_t-1) + (α * Y_t)

where UWMA_t is the current UWMA value at time t, α is the smoothing factor (usually between 0 and 1), UWMA_t-1 is the previous UWMA value at time t-1, and Y_t is the current value at time t.
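The formula above can be implemented directly. The sketch below seeds the average with the first sample, which is my own assumption; the coursework program may initialize it differently.

```java
import java.util.List;

// Direct implementation of the validation formula:
// UWMA_t = (1 - alpha) * UWMA_{t-1} + alpha * Y_t.
// The seed UWMA_0 = Y_0 is an assumption.
public class Uwma {
    static double uwma(List<Double> samples, double alpha) {
        double u = samples.get(0); // seed with the first sample
        for (int t = 1; t < samples.size(); t++) {
            u = (1 - alpha) * u + alpha * samples.get(t);
        }
        return u;
    }

    public static void main(String[] args) {
        // With alpha = 0.5 and samples 10, 20: (0.5 * 10) + (0.5 * 20) = 15.
        System.out.println(uwma(List.of(10.0, 20.0), 0.5)); // 15.0
    }
}
```

Hand-computing a few steps like this against the program's console output is exactly the comparison described above.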

By comparing the UWMA values produced by the program with the manually calculated values, I was able to determine whether the program was producing accurate results. If the values were the same or similar, I could be confident that the program was working correctly.

To further ensure the accuracy and consistency of the program's results, I repeated the validation process with different data sets and parameters. This helped me to identify any potential issues or errors in the program's calculations and to make any necessary adjustments.

Overall, by carefully validating the results of my network traffic monitoring program, I was able to ensure that the calculated network parameters were correct and reliable. This gave me confidence in the accuracy of the program's output and helped me to make informed decisions about network performance based on the data it provided.

Assignment Component 2: Network performance analysis on a real ISP network:

Sprint Network QoS Performance Analysis:-

Sprint is a well-known telecommunications company providing mobile and fixed-line services in the United States. Sprint's network is built on an IP/MPLS infrastructure, which allows for the delivery of various services with different performance requirements. In this analysis, we will examine Sprint's network QoS performance data for the period of January to December 2022.

Sprint publishes its network performance data, including its committed values or Service Level
Objectives (SLOs), on a monthly basis. The SLOs specify the minimum level of performance
that Sprint aims to achieve for its customers. The actual values are the measured values of
network performance.

Metrics of Backbone Delay, Packet Loss, and Jitter

Three metrics of network performance that are of particular importance for IP/MPLS networks
are backbone delay, packet loss, and jitter.

Backbone delay is the time taken for a packet to travel from one end of the network to the
other. It is also known as end-to-end delay or latency. Backbone delay can have a significant
impact on the quality of real-time applications such as voice and video.

Packet loss is the percentage of packets that are lost in transit. Packet loss can be caused by
network congestion, faulty equipment, or other issues. High levels of packet loss can lead to
poor application performance and user experience.

Jitter is the variation in delay between packets in a stream of traffic. Jitter can cause problems
for real-time applications, as it can result in packet loss and poor audio or video quality.

SLO Setting Strategies vs. Actual Performance:-

Sprint's SLOs for backbone delay, packet loss, and jitter are 40ms, 0.0500%, and 1ms, respectively. These values are in line with industry standards and are indicative of a network that is capable of delivering high-quality services to its customers.
For North America:

However, when we examine the actual performance data for the period of March 2022 to March 2023, we find that Sprint has not always met its SLOs. In some months, the network performance has fallen below the committed values, while in other months it has been comfortably within them.

For example, in March 2022, the backbone delay was 36.63ms, which is lower than the SLO of 40ms. Similarly, in May 2022, the packet loss was 0.0063%, which is well below the SLO of 0.0500%. Likewise, in March 2022, the jitter was only 0.0118ms, which is far lower than the SLO of 1ms.
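The comparison against the committed values can be expressed as a simple check; the helper below is illustrative, using the figures quoted above (delay 40ms, loss 0.0500%, jitter 1ms).

```java
// A small sketch of the SLO comparison: a measured value meets its SLO
// when it does not exceed the committed maximum.
public class SloCheck {
    static boolean meetsSlo(double measured, double sloMax) {
        return measured <= sloMax;
    }

    public static void main(String[] args) {
        System.out.println(meetsSlo(36.63, 40.0));    // delay, March 2022 -> true
        System.out.println(meetsSlo(0.0063, 0.0500)); // packet loss, May 2022 -> true
        System.out.println(meetsSlo(0.0118, 1.0));    // jitter, March 2022 -> true
    }
}
```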

Performance Figure:-

To illustrate our analysis, we have plotted the backbone delay data for the period of January to
December 2022. The figure shows the monthly average of backbone delay and the SLO for
backbone delay.

From the figure, we can see that the backbone delay was consistently lower than the SLO of 40ms in the first half of the year. In the second half of the year, the backbone delay was mostly below the SLO, with the exception of October.

Conclusions:-

In conclusion, Sprint's network performance data shows that the company has set ambitious
SLOs for backbone delay, packet loss, and jitter. While the actual performance data indicates
that the company has not always met these SLOs, the network generally performs well and is
capable of delivering high-quality services to its customers.

However, Sprint should continue to monitor its network performance and adjust its SLOs as needed to ensure that it meets its customers' needs. Additionally, Sprint should invest in its network infrastructure to further improve its performance and reliability.

Assignment Component 3: Fault management:

Scenario: dynamically changing the OSPF weight of a particular link, which can trigger the IP reconvergence procedure and lead to a transient forwarding loop.

Task 1:

Scenario: The link between Node B and Node C has its weight changed from 3 to 1.
• Link to be changed: Link between Node B and Node C
• New link weight value: 1
• Source and destination nodes of affected traffic flow: Node A as the source and Node
E as the destination.

Procedure for forming the transient loop:

Initially, traffic flows from Node A to Node E via the shortest path ABDE, with equal-cost paths through BCD and BCF. When the weight of the link between Node B and Node C is changed to 1, the cost of the path from B through C becomes less than the cost of the path from B through D or F. Because the routers do not update their routing tables at exactly the same moment, Node B may update its table to forward traffic destined for Node E through Node C while Node C is still forwarding traffic destined for Node E through Node B. As a result, a transient forwarding loop occurs in which traffic bounces between Node B and Node C until both routing tables converge, causing duplicate packets and network congestion.
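The loop can be illustrated with a toy next-hop walk; the routing-table entries below reflect only the mid-convergence state of Nodes B and C for destination E, not the full topology.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Toy illustration of a transient forwarding loop: Node B already
// forwards traffic for E via C while C still forwards via B, so
// following next-hops revisits a node before reaching E.
public class LoopCheck {
    // Follows next-hops from src toward dst; returns true if a node repeats.
    static boolean hasLoop(Map<String, String> nextHop, String src, String dst) {
        Set<String> visited = new HashSet<>();
        String node = src;
        while (!node.equals(dst)) {
            if (!visited.add(node)) return true; // node seen twice: loop
            node = nextHop.get(node);
            if (node == null) return false;      // no route
        }
        return false;
    }

    public static void main(String[] args) {
        Map<String, String> nextHop = new HashMap<>();
        nextHop.put("A", "B");
        nextHop.put("B", "C"); // B has updated: E via C
        nextHop.put("C", "B"); // C has not yet updated: E still via B
        System.out.println(hasLoop(nextHop, "A", "E")); // true
    }
}
```

Once both routers have converged (e.g. B's entry points onward toward E), the same walk terminates normally.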

Task 2:
Successful scenario: The link between Node D and Node E is protected by the
LFA backup path.
• Link to be protected: Link between Node D and Node E
• Repairing node: Node C
• Source and destination nodes of affected traffic flow: Node A as the source and Node
E as the destination.
• Selected node as LFA candidate: Node C
• Actual data path: Traffic flows from Node A to Node E via the shortest path of ABDE.
In the event of a link failure between Node D and Node E, Node C can immediately
begin forwarding traffic to Node E via its LFA backup path of CBFE. This provides a
loop-free alternate path for traffic to reach its destination.

Unsuccessful scenario: The link between Node B and Node F is not able to be
protected by the LFA backup path.
• Link to be protected: Link between Node B and Node F
• Repairing node: Node C
• Source and destination nodes of affected traffic flow: Node A as the source and
Node E as the destination.
• Selected node as LFA candidate: Node C
• Actual data path: Traffic flows from Node A to Node E via the shortest path of ABDE.
In the event of a link failure between Node B and Node F, Node C is not able to provide
a loop-free alternate path for traffic to reach its destination because its LFA backup path
of CBFE contains the failed link. Therefore, IP reconvergence must occur to determine
a new path for traffic to reach Node E.
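The LFA selection in both scenarios follows the standard loop-free condition (RFC 5286): a neighbour N of node S is a loop-free alternate for destination D only if dist(N, D) < dist(N, S) + dist(S, D), i.e. N does not route back through S. The sketch below checks that inequality; the distances in the example are hypothetical, since the link weights are not all given.

```java
// Check of the RFC 5286 loop-free alternate condition:
// dist(N, D) < dist(N, S) + dist(S, D).
public class Lfa {
    static boolean isLoopFreeAlternate(int distND, int distNS, int distSD) {
        return distND < distNS + distSD;
    }

    public static void main(String[] args) {
        // Hypothetical distances: dist(N,D)=2, dist(N,S)=1, dist(S,D)=2 -> 2 < 3, LFA holds.
        System.out.println(isLoopFreeAlternate(2, 1, 2)); // true
        // dist(N,D)=4 -> 4 < 3 fails: N would route back through S.
        System.out.println(isLoopFreeAlternate(4, 1, 2)); // false
    }
}
```

The unsuccessful scenario above is exactly a case where every candidate backup path fails this kind of test once the failed link is accounted for, so full IP reconvergence is the only option.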
Assignment Component 4: Security management:

1: Design a De-Militarized Zone (DMZ):

Design a De-Militarized Zone (DMZ)-based network configuration and specify the necessary access policies between the different network segments separated by the inner and outer firewalls. Place each of the aforementioned computing facilities at a specific network location and explain why it should be there.

The network can be designed using a DMZ architecture that allows for secure online
shopping and email communication while protecting the internal workstation containing
confidential information. The network should have two firewalls, an inner and an outer
firewall.

The outer firewall should be connected to the public internet and should be configured
to allow HTTP and HTTPS traffic from the internet to the web server. It should also be
configured to allow SMTP traffic from the internet to the SMTP server. The outer firewall
should block all other traffic from the internet.

The inner firewall should be connected to the outer firewall and should be configured to allow traffic between the web server and the internal network in both directions. It should also be configured to allow SMTP traffic between the SMTP server and the internal network in both directions. The inner firewall should block all other traffic.

The web server should be placed in the DMZ, which is the area between the inner and
outer firewalls. This will allow it to be accessible from the internet, while still providing a
degree of protection from the internal network. The SMTP server should also be placed
in the DMZ for the same reasons.

The internal workstation containing confidential information should be placed on the


internal network, behind the inner firewall. This will provide additional protection for the
workstation and its sensitive data.

Access policies should be put in place to control traffic between different network
segments. For example, the inner firewall should only allow traffic from the web server
and SMTP server to the internal network, and vice versa. The outer firewall should only
allow traffic from the internet to the DMZ, and vice versa.
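The access policies above amount to an ordered allow-list with a default deny. The sketch below expresses that idea in Java; the segment and service labels are my own illustrative names, not real firewall syntax.

```java
import java.util.List;

// Minimal sketch of a firewall access policy: an allow-list of
// (source segment, destination segment, service) tuples with a
// default-deny for everything unmatched, as on the outer firewall.
public class FirewallPolicy {
    record Rule(String fromSegment, String toSegment, String service) {}

    static final List<Rule> OUTER_ALLOW = List.of(
        new Rule("internet", "dmz", "http"),
        new Rule("internet", "dmz", "https"),
        new Rule("internet", "dmz", "smtp"));

    // Default deny: traffic passes only if some rule matches it.
    static boolean permitted(List<Rule> allow, String from, String to, String service) {
        return allow.stream().anyMatch(r ->
            r.fromSegment().equals(from) && r.toSegment().equals(to)
                && r.service().equals(service));
    }

    public static void main(String[] args) {
        System.out.println(permitted(OUTER_ALLOW, "internet", "dmz", "http"));      // true
        System.out.println(permitted(OUTER_ALLOW, "internet", "internal", "http")); // false
    }
}
```

The inner firewall would carry an analogous allow-list between the DMZ and the internal network.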

2: Allocation of IP Addresses:
Allocate an IP address to each of the servers/workstations mentioned above, as well as
router(s) (interfaces) required to implement the DMZ.

Assuming two private Class C subnets, 192.168.0.0/24 for the DMZ and 192.168.1.0/24 for the internal network, the following IP addresses can be allocated:

Outer firewall (DMZ interface): 192.168.0.1
Web server: 192.168.0.2
SMTP server: 192.168.0.3
Inner firewall (DMZ interface): 192.168.0.4
Inner firewall (internal network interface): 192.168.1.1
Manager's workstation: 192.168.1.2

3: NAT Procedure:
It is required that the manager's workstation be behind a network address translation (NAT) gateway. Explain the NAT procedure when the manager launches an HTTP request from his workstation to visit the website www.google.co.uk.

Assuming the NAT gateway has an IP address of 192.168.1.1 on the internal network
and a public IP address of 203.0.113.1 on the internet-facing interface, the NAT
procedure would be as follows:

 The manager's workstation sends an HTTP request to www.google.co.uk.
 The request is forwarded to the inner firewall, which allows the request to pass through to the NAT gateway.
 The NAT gateway replaces the source IP address of the request with its own public IP address (203.0.113.1) and adds a mapping to its NAT table.
 The request is forwarded to the outer firewall, which allows it to pass through to the internet.
 Google's web server receives the request from the public IP address 203.0.113.1.
 Google's web server sends the response to the public IP address 203.0.113.1.
 The outer firewall receives the response and allows it through to the NAT gateway.
 The NAT gateway looks up the mapping in its NAT table, rewrites the destination address back to the workstation's internal address (192.168.1.2), and forwards the response through the inner firewall to the manager's workstation.

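The translation steps above can be sketched as a NAT table keyed by the public-side port. The port numbers below are illustrative; a real NAPT gateway also rewrites port numbers and checksums in the packet itself.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the NAT procedure: outbound packets have their private
// source rewritten to the gateway's public address, and a table entry
// lets the return traffic be mapped back to the internal host.
public class NatGateway {
    static final String PUBLIC_IP = "203.0.113.1"; // public address from the text
    // Maps public-side port -> original internal "ip:port".
    final Map<Integer, String> natTable = new HashMap<>();
    int nextPort = 40000; // illustrative starting port

    // Outbound: record the mapping, return the translated source.
    String translateOut(String internalIp, int internalPort) {
        int publicPort = nextPort++;
        natTable.put(publicPort, internalIp + ":" + internalPort);
        return PUBLIC_IP + ":" + publicPort;
    }

    // Inbound reply: look up the mapping to restore the destination.
    String translateIn(int publicPort) {
        return natTable.get(publicPort);
    }

    public static void main(String[] args) {
        NatGateway gw = new NatGateway();
        String src = gw.translateOut("192.168.1.2", 51000); // manager's workstation
        System.out.println(src);                    // 203.0.113.1:40000
        System.out.println(gw.translateIn(40000));  // 192.168.1.2:51000
    }
}
```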
Assignment Component 5: Mini essay on Net Neutrality:-

Introduction:
Network neutrality, also known as Internet neutrality, is the concept that all internet
traffic should be treated equally, without any preferential treatment or discrimination
based on the source, destination, or type of data being transmitted. This concept has
been a topic of debate for many years, particularly with the rise of Quality of Service
(QoS) differentiation techniques such as Integrated Services (IntServ) and Differentiated
Services (DiffServ) in classical ISP networks. In this essay, I will explore my
understanding of network neutrality in the context of these QoS mechanisms, as well as
the concept of network slicing in 5G networks.

Quality of Service Differentiation Techniques:

QoS differentiation techniques such as IntServ and DiffServ are used in classical ISP
networks to prioritize certain types of traffic over others based on their QoS
requirements. IntServ provides end-to-end QoS guarantees for individual flows by
reserving network resources, while DiffServ assigns traffic to different classes and
applies different levels of treatment to each class based on its priority.
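As an aside on how DiffServ marking reaches the network: an application can set the DSCP bits of its packets through the IP traffic-class octet, which Java exposes via setTrafficClass. Whether routers honour the mark depends entirely on the ISP's DiffServ policy; the EF code point used below is the one commonly assigned to VoIP-style traffic.

```java
import java.net.DatagramSocket;
import java.net.SocketException;

// Illustrative DiffServ marking from an application: the DSCP value
// occupies the top six bits of the traffic-class octet, so it is
// shifted left by two before being handed to the socket.
public class DiffServMark {
    // DSCP sits in bits 7..2 of the TOS/traffic-class byte.
    static int dscpToTrafficClass(int dscp) {
        return dscp << 2;
    }

    public static void main(String[] args) throws SocketException {
        int ef = 46; // Expedited Forwarding code point
        DatagramSocket socket = new DatagramSocket();
        socket.setTrafficClass(dscpToTrafficClass(ef)); // 46 << 2 = 0xB8
        System.out.println(Integer.toHexString(dscpToTrafficClass(ef))); // b8
        socket.close();
    }
}
```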

While these techniques can improve the performance of critical applications such as
video streaming and voice over IP (VoIP), they can also be used by ISPs to discriminate
against certain types of traffic or to favor their own services over those of competitors.
This is where the concept of network neutrality comes in, as it seeks to prevent such
discriminatory practices and ensure that all traffic is treated equally.

Network Slicing in 5G Networks:

Network slicing is a key concept in 5G networks that allows the creation of multiple
virtual networks on a single physical network infrastructure, each tailored to the specific
QoS requirements of a particular vertical application or use case. This enables network
operators to offer differentiated services to their customers based on their specific
needs, without affecting the performance of other applications on the network.

While network slicing can provide significant benefits in terms of QoS, there are
concerns that it could also be used to prioritize certain applications or customers over
others, leading to a violation of network neutrality principles. To address these
concerns, it is important to ensure that network slicing is implemented in a transparent
and non-discriminatory manner, with clear guidelines and regulations to prevent any
abuse of the system.

My Opinion on Network Neutrality:

In my opinion, network neutrality is an important concept that is essential to maintaining


a free and open internet. All traffic should be treated equally, without any preferential
treatment or discrimination based on the source, destination, or type of data being
transmitted.

While QoS differentiation techniques such as IntServ and DiffServ can provide benefits
in terms of improved performance for critical applications, they should not be used to
discriminate against certain types of traffic or to favor the services of one provider over
another. Instead, ISPs should focus on improving their network infrastructure to ensure
that it can handle the increasing demands of modern applications and services.

Similarly, while network slicing can provide significant benefits in terms of tailored
support for vertical applications with heterogeneous QoS requirements, it is important to
ensure that it is implemented in a transparent and non-discriminatory manner, with clear
guidelines and regulations to prevent any abuse of the system.

In conclusion, network neutrality is a fundamental principle that is essential to maintaining a free and open internet. While QoS differentiation techniques and network slicing can provide benefits in terms of improved performance and tailored support for specific applications and use cases, they should be implemented in a transparent and non-discriminatory manner, so that all traffic is treated equally and no user or service is given preferential treatment or subjected to discrimination.
