
Unit 1 - Introduction to Software Defined Networking (SDN)

1.1 Definition-

Fig.1 Overview of the SDN Security Survey

Software-Defined Networking (SDN) is a networking approach that uses software-based controllers or application programming interfaces (APIs) to communicate with the underlying hardware infrastructure and route traffic over the network. This model differs from traditional networks, which use dedicated hardware devices (such as routers and switches) to control network traffic. SDN can create and control virtual networks through software or control traditional hardware. SDN architecture offers distinct security advantages. For example, information generated from traffic analysis and anomaly detection within the network can be periodically sent to a central controller. The central controller can analyze and correlate this feedback from the network using the full network view supported by SDN. Based on this, new security policies can be propagated throughout the network to prevent attacks. The performance and programmability improvements of SDN, together with the network view, are expected to accelerate the control and containment of network security threats.
1.2 Need of SDN-

SDN is important because it offers network operators new ways of designing, building, and operating networks. SDN separates the control (brain) and forwarding (muscle) planes of a network, providing a centralized view of distributed networks for more efficient orchestration and automation of network services. The SDN controller platform an organization uses enables communication between otherwise isolated network tiers. NFV focuses on optimizing the network services themselves, separating network functions such as Domain Name System (DNS), caching, firewalling, routing, and load balancing from dedicated hardware devices. White-box devices such as switches and routers are based on commercial off-the-shelf (COTS) silicon network chipsets that anyone can buy, as opposed to proprietary silicon chips designed by a single network vendor. This means that specific software and network protocols can be applied and customized via SDN, with no constraint to deal with vendor-specific limitations in hardware. Software-defined cloud networking uses white-box devices. Cloud providers often use such generic hardware so they can easily make changes in the cloud data center while saving on capital and operating expenses (CapEx and OpEx).
1.3 History of Software Defined Networking (SDN)-

The history of SDN principles can be traced back to the separation of the control and data planes, first used in the public switched telephone network as a way to simplify provisioning and management, well before this architecture began to be used in data networks. The Internet Engineering Task Force (IETF) began considering various methods to decouple the control and forwarding functions in a proposed interface standard published in 2004, aptly named "Forwarding and Control Element Separation" (ForCES). The ForCES working group also proposed a companion SoftRouter architecture. Additional early standards from the IETF that pursued separating control from data include Linux Netlink as an IP Services Protocol [9] and a Path Computation Element (PCE)-Based Architecture [10]. These early attempts failed to gain traction for two reasons. One is that many in the Internet community viewed separating control from data as risky, especially owing to the potential for a failure in the control plane. Second, vendors were concerned that creating standard application programming interfaces (APIs) between the control plane and data plane would increase competition. The use of open-source software in a split control/data plane architecture dates back to the Ethane project in the Stanford University Computer Science Department. Ethane's simple switch design led to the development of OpenFlow. The OpenFlow API was first created in 2008. In the same year, NOX, an operating system for networks, was developed. Several patent applications were filed in 2007 by independent researchers describing practical applications: an operating system for networks [15], network infrastructure compute devices as a multi-core CPU, and a method for virtual network segmentation based on functionality. These applications became public in 2009 and have since been abandoned, rendering the information within prior art.

Research on SDN has covered emulators such as vSDNEmul, EstiNet, and Mininet.

Work on OpenFlow continued at Stanford, including the creation of testbeds to evaluate the use of the protocol in a single campus network, as well as across the WAN as a backbone for connecting multiple campuses. In academic settings there were some research and production networks based on OpenFlow switches from NEC and Hewlett-Packard, as well as on Quanta Computer whiteboxes, starting from approximately 2009.

Beyond academia, the first deployments were by Nicira in 2010 to control OVS from Onix, co-developed with NTT and Google. A notable deployment was Google's B4 deployment in 2012 [23][24]. Later, Google acknowledged their first OpenFlow-with-Onix deployments in their datacenters at the same time. Another known large deployment is at China Mobile.

The Open Networking Foundation was founded in 2011 to promote SDN and OpenFlow.

At the 2014 Interop and Tech Field Day, software-defined networking was demonstrated by Avaya using shortest path bridging (IEEE 802.1aq) and OpenStack as an automated campus, extending automation from the data center to the end device and removing manual provisioning from service delivery.
2. Fundamental Characteristics of SDN-

In this section, we begin our discussion by examining the characteristics of SDN in detail. These represent specific aspects of the SDN framework/architecture that can impact SDN security, either by introducing vulnerabilities or by enabling enhanced network security. The six characteristics are labeled on the diagram with the layers/interfaces/network elements they affect. Possible attacks are described in the next section.

2.1) Logical Centralized Management:

A fundamental feature of SDN is the logically centralized but physically distributed controller components. Controllers maintain a global network view of the underlying forwarding infrastructure and program forwarding entries based on policies defined by network services running on top of them. Early controller development (NOX, Beacon, Floodlight, etc.) was intended to serve as an OpenFlow driver, but various newer implementations (OpenDaylight, OpenContrail) provide the abstraction needed for network services and support multiple programming interfaces (NETCONF, XMPP, BGP, etc.) for managing transport devices.

Similarly, starting from single-controller designs, multiple distributed control (controller cluster) options have been proposed to meet scalability and reliability requirements, as shown in Figure 3. Distributed control with multiple controller instances is proposed in Onix, SoftCell, HyperFlow, and Kandoo; these approaches are described in the next section. ONOS and OpenDaylight implement distributed control with multiple instances forming a cluster. In either case, each individual controller instance is the exclusive master of a set of switches, and controllers are clustered into master/slave groups.
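As a rough illustration of this master/slave relationship, the following minimal Python sketch models a cluster that assigns each switch exactly one master and reassigns switches when an instance fails. All names (ControllerCluster, assign_master, fail_over) are hypothetical and only illustrate the clustering concept, not any particular controller's API.

# Hypothetical sketch of master/slave controller clustering in SDN.
class ControllerCluster:
    def __init__(self, controller_ids):
        self.controllers = list(controller_ids)
        self.master_of = {}  # switch_id -> id of its (single) master

    def assign_master(self, switch_id):
        # Naive balancing: hand the switch to the least-loaded instance.
        loads = {c: 0 for c in self.controllers}
        for master in self.master_of.values():
            if master in loads:
                loads[master] += 1
        master = min(self.controllers, key=lambda c: loads[c])
        self.master_of[switch_id] = master
        return master

    def fail_over(self, failed):
        # Reassign every switch mastered by the failed instance; the
        # surviving instances act as slaves until promoted here.
        self.controllers.remove(failed)
        for switch, master in list(self.master_of.items()):
            if master == failed:
                self.assign_master(switch)

cluster = ControllerCluster(["c1", "c2", "c3"])
for sw in ["s1", "s2", "s3", "s4"]:
    cluster.assign_master(sw)
cluster.fail_over("c2")
print(cluster.master_of)  # every switch still has exactly one live master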

Fig.2 SDN Characteristics


2.2) Open Programmable Interfaces:

Fig no.3

Unlike traditional network equipment, SDN physically separates control plane and data plane entities. The main purpose of this feature is to simplify the routing device and allow the network software in the controller to evolve independently. This capability provides potential for innovation and makes it easier to adopt new solutions. OpenFlow, a standardized programmable interface, has been adopted by the industry to program multiple types of forwarding devices (ASICs, FPGA-based devices, network processors, virtual switches, etc.), so the complexity of the underlying hardware is abstracted away. Figure 4 shows some of these interfaces: control-data interfaces (southbound APIs such as OpenFlow, OF-Config, OVSDB, NETCONF), application-control interfaces (northbound APIs such as REST APIs), and east-west interfaces between controllers. East-west interfaces specify bi-directional, lateral communication between SDN controllers. These controllers can belong to the same or different SDN control domains. An east/westbound API for this interface is described by Jarschel et al., who propose definitions for the east interface for communication between SDN controllers and the west interface for communication between SDN controllers and other, non-SDN control planes. However, interoperability between SDN and legacy control planes is outside the scope of this work.

(a)

(b)

Fig. 4. Distributed Control Frameworks for SDN (a) Controller Clustering, and (b) Hierarchical

2.3) Switch Management Protocol:


Companion interfaces to the above programmable interfaces are switch management protocols
(OF-Config, OVSDB [23], etc.). Such protocols are necessary to standardize the configuration and
management functions of programmable hardware. For example, the OF-Config protocol is used
to configure and manage OpenFlow-enabled switches and multiple logical switches that can be
instantiated on a device. Internally, the protocol uses NETCONF as a transport protocol that
defines a set of operations over the messaging layer (RPC) that exchange switch configuration
information between configuration points and packet forwarding entities.
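To make this concrete, here is a minimal sketch of talking NETCONF to a switch with the Python ncclient library, one common way to exercise the transport that OF-Config builds on. The host address and credentials are placeholder assumptions.

from ncclient import manager

with manager.connect(
    host="192.0.2.10",      # placeholder management address of the switch
    port=830,               # standard NETCONF-over-SSH port
    username="admin",
    password="admin",
    hostkey_verify=False,
) as m:
    # <get-config> is one of the base NETCONF RPC operations; it returns
    # the requested configuration datastore as XML.
    reply = m.get_config(source="running")
    print(reply.xml[:500])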

2.4) Third-Party Network Services:


SDN allows integration of third-party network services into the architecture. In monolithic SDN controller implementations (RYU, POX, NOX, etc.), these applications are compiled and run as part of the controller module, whereas in controllers like OpenDaylight, applications can be instantiated at runtime without rebuilding and restarting the controller module. It is similar to an operating system where software modules and libraries can be downloaded and integrated into the execution environment. From a deployment perspective, this encourages innovation, enables customization of services, provides flexibility throughout the architecture to accommodate new capabilities, and reduces the cost of proprietary services. Depending on the controller implementation, third-party services can communicate with the controller module using internal APIs supported by the controller or open northbound APIs such as REST APIs.
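As a sketch of the northbound-API path, the snippet below shows a third-party service calling a controller's REST interface with the Python requests library. The controller URL, endpoints, and JSON shapes are illustrative assumptions, not any specific controller's actual API.

import requests

CONTROLLER = "http://127.0.0.1:8181"  # placeholder controller address

# Read the controller's current view of the topology (assumed endpoint).
topology = requests.get(CONTROLLER + "/restconf/operational/topology",
                        auth=("admin", "admin"), timeout=5).json()

# Push a simple policy: drop traffic from one host (assumed endpoint/shape).
policy = {"match": {"ipv4_src": "10.0.0.7/32"}, "action": "drop"}
resp = requests.post(CONTROLLER + "/api/policies",
                     json=policy, auth=("admin", "admin"), timeout=5)
resp.raise_for_status()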
2.5) Virtualized Logical Networks:

Fig no.5

Virtualization of SDN components supports multi-tenant capabilities in the infrastructure. In a typical SDN network, multiple logical switches can be instantiated on a shared physical substrate, allowing each entity to represent an individual tenant/customer. The goal is to containerize SDN components to ensure customized performance, security, and quality of service (QoS) based on tenant requirements. As SDN evolves in the IT community, Network Functions Virtualization (NFV) is being developed by the telecom industry. NFV uses IT virtualization technology to virtualize network functions/services that were previously implemented in proprietary hardware appliances. This supports dynamic and agile delivery of network services. NFV and SDN are closely related, and both offer a software-based networking paradigm.

2.6) Central Monitoring Unit:

Although not unique to the SDN architecture, a centralized monitoring unit coordinates infrastructure analytics functions, creating feedback control loops with controllers to automate network function updates. For example, a TAP monitor can route traffic to a deep packet inspection (DPI) engine. The DPI engine evaluates traffic, identifies attack patterns, and programmatically updates forwarding tables to block attack traffic. (Note: for clarity, the monitoring unit in Figure 2 is separated from the controller; it is also possible to centralize this functionality in the controller.) SDN units may contain multiple monitoring functions internally, but typical network deployments would deploy a dedicated monitoring solution in the infrastructure. For example, the OpenFlow protocol provides statistics and status information about the switch and its internal state (such as flow state maintained in flow tables, port and connection status, and statistics about flows, ports, queues, and counters). These are inherent monitoring capabilities that are part of the underlying architectural components. As mentioned earlier, a real-world deployment can also use visualization tools and solutions such as sFlow, NetFlow, or a third-party visibility fabric integration for monitoring purposes. As part of the feedback loop, the monitoring logic is responsible for interpreting the information collected and updating the controller with any changes that need to be pushed to network devices. Simplifying the functionality in Figure 2 into a series of layers and interfaces, we can identify the challenges associated with each layer of the framework and the interfaces between them, as shown in Figure 4. This framework is used throughout the study to categorize both challenges and proposed solutions to SDN security.
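The feedback loop above can be sketched in a few lines of Python. Everything here is a hypothetical stand-in (the signature list, the tap source, and the controller's install_flow method), meant only to show the monitor-detect-reprogram cycle, not a real DPI engine or controller API.

SUSPICIOUS_PAYLOADS = (b"\x90\x90\x90\x90",)  # toy attack signature

def dpi_inspect(packet_bytes):
    # Very small stand-in for a DPI engine: signature matching only.
    return any(sig in packet_bytes for sig in SUSPICIOUS_PAYLOADS)

def monitoring_loop(controller, tap):
    # `tap` yields (src_ip, packet_bytes) pairs mirrored from the network.
    for src_ip, packet in tap:
        if dpi_inspect(packet):
            # Close the loop: program a drop rule through the controller,
            # which propagates it to the forwarding devices.
            controller.install_flow(match={"ipv4_src": src_ip},
                                    actions=[],  # empty action list = drop
                                    priority=1000)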

Fig No.6
SDN Functional Architecture illustrating the data, control and application layers and interfaces
3. Advantages of SDN

Benefits or advantages of SDN are:

➨ Network devices can be centrally managed.
➨ Useful for automation of network equipment.
➨ Provides improvements for end users.
➨ Offers flexibility, scalability, and efficiency compared to traditional networks.
➨ Widely used by social networking sites (Facebook, Twitter, Google Plus, etc.) and major search engines (Google, Yahoo, Ask, etc.).

4. Disadvantages of SDN

The disadvantages of SDN are:

➨ Implementing SDN protocols and SDN controllers requires changes to the entire network infrastructure. Therefore, a complete reconfiguration of the network is required, which increases the cost of reconfiguration.
➨ Personnel must be trained.
➨ New management tools must be procured, and everyone must be trained to use them.
➨ Security is a big issue for SDN.

5. Distributed Control Planes-


Fig. 7 Distributed control plane

In the distributed control plane architecture implemented here, unlike the centralized control plane architecture, each controller has a view only of the domain for which it is responsible and can make decisions for it [26]. This SDN control plane architecture not only increases the number of flow requests handled per second, but also reduces the setup time for each flow request [16][27-28][48]. This control plane architecture has been implemented in a 5G cellular network with some modifications. Figure 5 shows the use of many controllers that act as a centralized controller for a physically distributed but logically directly connected network. Additionally, all OF switches are connected to the Internet, as in the previous case. All controllers are connected to each other, but each OF switch is connected to only one controller. Each controller is responsible for handling different Internet traffic. As with the other control plane architectures, each switch connects directly to a single access point. Each access point can serve 10 users. All stations are mobile, with a minimum speed of 1.25 m/s and a maximum speed of 1.3 m/s, and this movement is in random directions, just as for mobile users.


Software-Defined Networking (SDN) is recognized as a promising solution for dealing with ever-growing mobile data traffic. SDN separates the data plane from the control plane, enabling network scalability and programmability. Early SDN deployments promoted a centralized architecture with a single controller managing the entire network. This design has proven unsuitable for today's large networks. Multi-controller architectures are growing in popularity, but they also bring new challenges. A key challenge is how to perform path computation efficiently in large networks, considering the large amount of computational resources required. In this context, DiSC has been proposed as a high-performance distributed control plane for path computation in large-scale SDN.

Realizing a communication middleware in a software-defined network can yield significant performance gains in terms of latency, throughput, and bandwidth efficiency. For example, filtering operations in an event-based middleware can be performed highly efficiently in the TCAM memory of switches, enabling line-rate forwarding of events. A key challenge in a software-defined network, however, is to ensure high responsiveness of the control plane to dynamically changing communication interactions. One proposed methodology addresses both vertical and horizontal scaling of the distributed control plane, improving responsiveness by enabling concurrent network updates in the presence of high dynamics while ensuring consistent changes to the data plane of a communication middleware. In contrast to existing scaling approaches that aim for a general-purpose distributed control plane, this approach uses knowledge of the application semantics that is already available in the design of the data plane of a communication middleware, e.g., subscriptions and advertisements in an event-based middleware. In the context of PLEROMA, an event-based middleware, such an application-aware control distribution methodology avoids synchronization bottlenecks, ensures consistency during concurrent network updates, and greatly improves the responsiveness of the control plane.
6. Load Balancing-

Fig. 8 Load Balancing

SDN load balancing stands for Software-Defined Networking load balancing. SDN-based load balancers physically separate the network control plane from the forwarding plane. When load balancing with SDN, multiple devices can be controlled simultaneously, and this global view results in better load balancing. Figure 8 shows SDN load balancing, with applications distributed across the data forwarding plane of physical servers and virtual machines under a single network control plane.

What is SDN load balancing? Software-defined networking (SDN) provides flexible control so organizations can respond more quickly to changing business needs. SDN load balancing separates the physical network control plane from the data plane. SDN-based load balancers can control multiple devices, and in this way the network becomes more agile. You can directly program the network controller to improve the responsiveness and efficiency of your application services.

Computing and storage have been revolutionized by virtualization and automation, but networking has lagged behind. With SDN load balancing, the network functions like a virtualized version of compute and storage.

How does load balancing work with SDN?


Software-defined networking (SDN) load balancing removes hardware-level protocols to improve network management and diagnostics. SDN controller load balancers make data-path control decisions without relying on algorithms baked into traditional network equipment. An SDN-based load balancer saves time by controlling an entire network of application servers and web servers. Load balancing with SDN finds the best path and server for the fastest delivery of requests.
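The server-selection logic such a controller-resident load balancer might run can be sketched as follows. This is a minimal, assumption-laden illustration: a least-connections policy over backend addresses, with the flow-installation call left as a hypothetical comment.

class LeastConnectionsBalancer:
    def __init__(self, servers):
        self.active = {s: 0 for s in servers}  # server -> open connections

    def pick_server(self):
        # Global view: choose the least-loaded backend across all devices.
        server = min(self.active, key=self.active.get)
        self.active[server] += 1
        return server

    def release(self, server):
        self.active[server] -= 1

lb = LeastConnectionsBalancer(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
for _ in range(5):
    backend = lb.pick_server()
    # controller.install_path(client_flow, backend)  # hypothetical call
    print("new flow ->", backend)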

Fig no. 9
7. Control Planes-

Fig. 10 Control Plane

At a very high level, the control plane creates local records that are used to build routing table entries. These records are used by the data plane to route traffic between the ingress and egress ports of the device. It does this using a view of the network topology called the Routing Information Base (RIB). The RIB is often kept consistent (that is, loop-free) by exchanging information with other control plane entities in the network. Forwarding table entries are commonly referred to as the Forwarding Information Base (FIB) and are often mirrored between the control and data planes of a typical device. Once the RIB is deemed consistent and stable, the FIB is programmed. To perform this task, the controller/program must create a view of the network: a topology that satisfies certain constraints. This view of the network can be programmed manually, learned by observation, or constructed from information gleaned through interaction with other instances of the control plane. This can be done using one or more routing protocols, manual programming, or a combination of both.

Notes:
1. As part of its development, the Open Networking Foundation has alternately associated its definition of SDN with OpenFlow, either tightly (that is, OpenFlow = SDN) or loosely (that is, OpenFlow is an important component of SDN). Either way, it is undeniable that OpenFlow's existence and the ONF's aggressive marketing have generated market/public discussion of and interest in SDN.
2. The management plane is responsible for configuring elements that affect local forwarding decisions (forwarding capabilities), such as access control lists (ACLs) and policy-based routing (PBR).
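To ground the RIB/FIB distinction, here is a small Python sketch: once the RIB (prefix to next-hop) is considered stable, the data plane answers longest-prefix-match lookups against it, which is the job a hardware FIB performs. The prefixes and next-hop names are illustrative only.

import ipaddress

rib = {
    ipaddress.ip_network("10.0.0.0/8"):  "next-hop-A",
    ipaddress.ip_network("10.1.0.0/16"): "next-hop-B",
    ipaddress.ip_network("0.0.0.0/0"):   "default-gw",
}

def fib_lookup(dst_ip):
    # Longest-prefix match, as a FIB performs in hardware.
    dst = ipaddress.ip_address(dst_ip)
    candidates = [net for net in rib if dst in net]
    best = max(candidates, key=lambda net: net.prefixlen)
    return rib[best]

print(fib_lookup("10.1.2.3"))   # next-hop-B (the /16 wins over the /8)
print(fib_lookup("192.0.2.9"))  # default-gw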
Figure 2-2 shows how the control plane and data plane work in a network of interconnected switches, with expanded control and data plane details for two of these switches (labeled A and B). In the diagram, a packet is received by Switch A on the far left and is eventually forwarded to Switch B on the right side of the diagram. Note that within each expansion, the control plane and data plane are separated, with the control plane running on its own processor/card and the data plane running on another processor/card, both contained in a single chassis. Later in this chapter, we discuss these and other variations on this theme of the physical layout of the control plane and data plane. In this diagram, a packet is received at the ingress port of the line card where the data plane resides. For example, when a packet is received that originates from an unknown MAC address, it is punted (redirected) (4) to the device's control plane, where it is learned, processed, and later forwarded. The same treatment applies to control traffic such as routing protocol messages (such as OSPF link-state advertisements). Once the packet is delivered to the control plane, the information it contains is processed, the RIB is changed, and additional messages may be sent to alert peers of this update (i.e., new routes are learned). Once the RIB stabilizes, both the control plane and data plane update their FIBs, and the forwarding entries are updated to reflect those changes. In this case, because the received packet had an as-yet-unlearned MAC address, the control plane returns the packet (C) to the data plane (2), and the data plane forwards the packet accordingly (3). If additional FIB programming is required, it is also done in step (C).

Fig. 11 Control plane

In reality, the control plane for the Internet just discussed is some combination of layer 2 and layer 3 control planes. As such, it should be no surprise that the same progression and evolution has taken place for both layer 2 and layer 3 networks and the protocols that made up these control planes. In fact, the progression of the Internet happened because these protocols evolved in functionality and because hardware vendors learned how to implement them in highly scalable and highly available ways. The layer 2 control plane focuses on hardware or physical-layer addresses such as IEEE MAC addresses, while the layer 3 control plane is built to facilitate network-layer addressing such as IP. In layer 2 networks, the behavior surrounding MAC address learning, the mechanisms used to ensure an acyclic graph (familiar to most readers as the Spanning Tree Protocol), and the flooding of BUM (broadcast, unknown unicast, and multicast) traffic expose unique scalability challenges and limits. There have been several iterations and generations of standards-based layer 2 control protocols aimed at addressing these and other issues, including the IEEE's SPB/802.1aq and the IETF's TRILL. However, as a generalization, layer 2 and layer 3 scaling concerns and the resulting control plane designs will eventually merge or hybridize. Layer 2 networks ultimately do not scale well due to the large number of end hosts. At the heart of these issues is how to deal with end hosts moving between networks, which causes significant changes to the forwarding tables and requires fast table updates so as not to disrupt traffic flow. In layer 2 networks, forwarding focuses on MAC address reachability, so layer 2 networks are primarily concerned with storing MAC addresses for forwarding purposes. Host MAC addresses can be numerous in large corporate networks, making them difficult to manage. Even worse, imagine having to manage all MAC addresses across multiple companies and the Internet.
In layer 3 networks, forwarding focuses on network address reachability. Reachability information for layer 3 networks is primarily concerned with the reachability of destination IP prefixes, including network prefixes across various address families for unicast and multicast. Most modern designs use layer 3 networking to segment or combine layer 2 domains in order to solve the layer 2 scaling problem. In particular, layer 2 bridges representing sets of IP subnets are typically connected to layer 3 routers. Layer 3 routers are connected together to form larger networks (or really, different subnetwork address ranges). Larger networks connect to other networks via gateway routers that often specialize in simply interconnecting large networks. However, in all of these cases, the router routes traffic between networks at layer 3 and will only forward packets at layer 2 when it knows the packet has arrived at the final destination layer 3 network, from which it must then be delivered to a specific host.

Some notable blurring of these lines occurs with the Multiprotocol Label Switching (MPLS) protocol, the Ethernet Virtual Private Network (EVPN) protocol, and the Locator/ID Separation Protocol (LISP). The MPLS protocol (really a suite of protocols) was formed by combining the best parts of layer 2 forwarding (or switching) with the best parts of layer 3 IP routing, forming a technology that shares the extremely fast packet forwarding that ATM invented with the very flexible and complex path-signaling techniques adopted from the IP world. The EVPN protocol is an attempt to solve the layer 2 network scaling problems mentioned above by effectively tunneling remote layer 2 bridges over an MPLS (or GRE) infrastructure, so that it does not pollute (or affect) the scale of the underlying layer 3 network. Reachability information between remote bridges is exchanged as data within a new BGP address family, without polluting the underlying network. There are also other adjustments that limit the amount of control information exchanged.

At a slightly lower level, there are adjunct control processes particular to certain network types that are used to augment the knowledge of the greater control plane. The services provided by these processes include verification/notification of link availability or quality information, neighbor discovery, and address resolution. Because some of these services have very tight performance loops (for short event detection times), they are almost invariably local to the data plane (e.g., OAM), regardless of the strategy chosen for the control plane. This is depicted in the figure by showing the various routing protocols as well as the RIB-to-FIB control that comprises the heart of the control plane. Note that we do not stipulate where the control and data planes reside, only that the data plane resides on the line card (shown in the LC box) and the control plane is situated on the route processor (denoted by the RP box).

Fig. 12 Control Plane

8. Evaluation of Networking Technologies-


Fig 13. Networking technology
Initially, there were various SDN controllers, most of which focused on OpenFlow and the accompanying Open vSwitch (OVS). However, some of them took a different approach and provided broader platforms to enable other protocols of interest (ONOS, OpenDaylight).

What made SDN really significant was the introduction of Open Virtual Network (OVN) for OpenStack around 2016. OpenStack, founded in 2010, demonstrated what SDN has to offer as an open-source networking stack, although it took a few years for that to materialize.

It also streamlined Network Functions Virtualization (NFV), making it the standard platform for the telecom industry to run vendor-supplied network functions. A lot has happened in the open-source community (and various standards bodies) since then. The Linux Foundation Networking (LFN) and Open Networking Foundation (ONF), to name two, have helped bring together vendors, operators, and businesses. Both host many projects that are important for momentum and adoption (ONOS, P4, ONAP, OpenDaylight, Open vSwitch, etc.).

Example: Using SNA to monitor research network assessments

(From 52 Weeks of BetterEvaluation, Week 8: Using Social Network Analysis for M&E, by Cris Sette)


In this example, the ILAC initiative used an SNA approach to develop a system to monitor the development of specific research networks commissioned by large research programs.

The project team developed a survey asking members of the newly formed research network to identify partners with whom they had collaborated over the past year or so. The survey also asked whether collaboration (formal or informal) was the result of the newly formed research networks. The information collected was processed using Excel and UCINET software.

Data and map analysis enabled the project team to develop a baseline to support the M&E strategy for the research program that commissioned the study.

The characteristics of the network, such as member characteristics, affiliations, areas of expertise, geographic distribution, work areas, and types of research conducted, may evolve over time as a result of coordination of research collaborations. The research program that commissioned the SNA survey uses the same questionnaires and methods on a regular basis to monitor network development.

Understanding the role of the World Bank Group in a crowded institutional environment

In this example, the Independent Evaluation Group (IEG, part of the World Bank Group) used network analysis to better understand the role of World Bank Group policy interventions in the health sector in Liberia in relation to many other organizations and interventions. The associated blog post introduced two network diagrams:

The first SNA chart shows the role of the World Bank Group as a financier of the Liberian health system in relation to other types of organizations. The color and size of the bubbles indicate the type of organization and the share of the country's annual healthcare budget. The second chart shows knowledge-leadership perceptions of different organizations in the healthcare sector.

9. An alternate approach to SDN network programmability


In its purest form, SDN advocates the decoupling of the control and data planes, each traditionally residing inside network devices such as switches and routers. Once such decoupling is achieved, control plane functions are removed from the network devices and placed on an x86 server platform, functioning as the controller.

The OpenFlow protocol is most commonly used for communication between the controller and the network devices, and this of course means that the network devices support OpenFlow APIs (exposed by an OpenFlow agent) to accept external control plane connections over the IP network.

Extracting control plane functions from individual devices potentially removes the complexities of running distributed control plane protocols, such as OSPF, BGP, or Spanning Tree, thus potentially simplifying overall network setup. Instead of distributed control plane protocols, the SDN controller computes the network topology in a centralized manner and programs the forwarding table entries directly into the network devices' forwarding ASICs.

SDN implementation through APIs refers to southbound APIs that configure and program the control plane active on the device. There are a number of legacy network device APIs in use that offer different levels of control (SNMP, CLI, TL1, RADIUS, TR-069, etc.) and a number of newer ones (NETCONF/YANG, REST, XMPP, BGP-LS, etc.) that offer different levels of control over the network devices, data plane, topology, etc., each having different benefits and disadvantages. I will not cover them in depth in this blog post, but I want to make sure everyone understands one key distinction between them and the Open SDN approach: OpenFlow is used to directly control the data plane, not just the configuration of the devices and the control plane.

The infrastructure layer includes diverse networking equipment, for instance, network switches, servers, or gateways, which form the underlying network that forwards network traffic to its destination.

The control layer is the middle layer that connects the infrastructure layer and the application layer. It represents the centralized SDN controller software and serves as the home of the control plane, where intelligent logic is connected to the application plane.

The application layer contains the network applications or functions that organizations use. There can be several applications related to network monitoring, network troubleshooting, network policies, and security. To communicate between the three layers of an SDN network, northbound and southbound application programming interfaces (APIs) are used. The northbound API enables communication between the application layer and the controller, while the southbound API allows the controller to communicate with the networking equipment.

9.1 SDN network types

Depending on how the controller layer is connected to the SDN devices, SDN networks can be divided into four different types, classified as follows:

9.1.1 Open SDN

Open SDN has a centralized control plane and uses OpenFlow as the southbound API between the SDN controller and the physical or virtual switches.

9.1.2 API SDN

API SDN works well with traditional switches, unlike Open SDN, which requires OpenFlow-enabled switches. SDN over existing APIs consists of using the ability to control devices over remote connections, including traditional methods such as SNMP and CLI and newer methods such as REST APIs. SDN over API is non-proprietary and open, but the individual APIs used in API SDN are specific to a particular vendor, and openness varies by provider.

9.1.3 Overlay model SDN

Overlay model SDN does not address the underlying physical network, but builds a virtual network on top of the current hardware. It works as an overlay network and provides tunnels with channels to data centers to solve data center connectivity issues.

9.1.4 Hybrid model SDN

The hybrid SDN model, also known as automation-based SDN, combines SDN capabilities with traditional network devices. It uses agents, automation tools such as Python, and components that support different types of operating systems. The hybrid model is often used as a phased-in approach to SDN.

9.2 Benefits of SDN networks

Different SDN models have their own advantages. Only the general benefits that SDN brings to the network are discussed here.

9.2.1 Centralized management

Centralization is one of the main advantages of SDN. SDN networks enable centralized management of the network using a single management tool available to data center administrators. This breaks down the barriers created by traditional systems and provides greater agility in deploying virtual and physical networks, all from a central location.

9.2.2 Security

SDN controllers provide network engineers with a central location for overall control of network security. Even though the trend toward virtualization makes it more difficult to protect networks from external threats, SDN offers significant advantages: SDN controllers ensure that security policies and information are implemented within the network, and SDN comes with a single management system that helps improve security.

9.2.3 Cost reduction

SDN networks offer users low operating costs and low investment costs. Traditionally, network availability has been ensured through additional device redundancy, which of course incurs additional costs. Compared to traditional methods, software-defined networking is much more efficient, without the need to purchase additional network switches. SDN also works well with virtualization, which helps reduce additional hardware costs.

9.2.4 Scalability

SDN offers users more scalability thanks to the OpenFlow agent and the SDN controller, which allow access to various network components through centralized management. Compared to traditional network setups, engineers have more options to change their network infrastructure on the fly, without manually purchasing and configuring resources.

9.3 What impact will SDN networks have on data centers?

With the trend toward virtualization and new demands on IT staff to support new applications and services such as cloud computing, BYOD, and big-data applications, data centers require greater flexibility, better performance, and stronger security.

Data centers are in fact starting to use SDN to meet these needs. Software-defined strategies are already employed in many hyperscale data centers, such as those of Amazon and Google. Using SDN simplifies communication with virtual machines and supports connectivity across multiple data centers. SDN helps manage bandwidth and low-cost data flows to maximize network resource optimization.

SDN will also support future data center multi-tenancy requirements. SDN helps data centers consolidate legacy networks and improve network performance without the need to manually change hardware configurations.

Finally, with the advent of cloud-based applications and cloud data centers, SDN helps with real-time monitoring and dynamic allocation of redundant resources.

9.4 SDN via Hypervisor-Based Overlays

What is actually happening is that the virtual switches establish communication tunnels among themselves using general IP addressing. As packets cross the physical infrastructure, as far as the virtual network is concerned they are passed directly from one virtual switch to another via virtual links. These virtual links are the tunnels we just described. The virtual ports of these virtual switches correspond to the virtual tunnel endpoints (VTEPs) defined earlier. The actual payload of the packets between these vSwitches is the original layer 2 frame being sent between the VMs. In Chapter 4 we defined this as MAC-in-IP encapsulation, which was depicted graphically there; a detailed discussion of MAC-in-IP encapsulation appears in a later chapter.

Much as we did for SDN via APIs, we restrict our definition of SDN via hypervisor-based overlays to those solutions that utilize a centralized controller. We acknowledge that there are some SDN overlay solutions that do not use a centralized controller (e.g., MidoNet, which has MidoNet agents located in individual hypervisors), but we consider those to be exceptions that cloud the argument that this alternative is, generally speaking, a controller-based approach.

Overlay Controller

Fig no.14

By our definition, SDN via hypervisor-based overlays utilizes a central controller, as do the other SDN
alternatives discussed thus far. The central controller keeps track of hosts and edge switches. The
overlay controller has knowledge of all hosts in its domain, along with networking information about
each of those hosts, specifically their IP and MAC addresses. These hosts will likely be virtual
machines in data center networks. The controller must also be aware of the edge switches that are
responsible for each host. Thus, there will be a mapping between each host and its adjacent edge
switch. These edge switches will most often be virtual switches associated with a hypervisor resident
in a physical server and attaching the physical server’s virtual machines to the network.

In an overlay environment, these edge switches act as the endpoints for the tunnels that carry the
traffic across the top of the physical network. These endpoints are the VTEPs mentioned above. The
hosts themselves are unaware that anything other than normal layer two Ethernet forwarding is
being used to transport packets throughout the network. They are unaware of the tunnels and
operate just as though there were no overlays involved whatsoever. It is the responsibility of the
overlay controller to keep track of all hosts and their connecting VTEPs so that the hosts need not
concern themselves with network details.
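A toy model of this host-tracking role is sketched below. The class and method names are hypothetical and purely illustrate the mapping the overlay controller maintains (host to VTEP), not any vendor's schema.

class OverlayController:
    def __init__(self):
        self.hosts = {}  # host MAC -> {"ip": host IP, "vtep": VTEP IP}

    def host_attached(self, host_mac, host_ip, vtep_ip):
        # Called when a hypervisor reports a new or migrated VM.
        self.hosts[host_mac] = {"ip": host_ip, "vtep": vtep_ip}

    def lookup_vtep(self, dst_mac):
        # A vSwitch asks: which tunnel endpoint serves this destination?
        entry = self.hosts.get(dst_mac)
        return entry["vtep"] if entry else None

ctrl = OverlayController()
ctrl.host_attached("02:00:00:00:00:0b", "10.0.0.11", vtep_ip="192.0.2.2")
print(ctrl.lookup_vtep("02:00:00:00:00:0b"))  # 192.0.2.2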

9.5 Overlay Operation

Fig no.15

Host A finds out the IP and MAC addresses of host B using the usual methods (DNS and ARP). Host A uses the ARP broadcast mechanism to resolve host B's MAC address, but there are various ways in which this layer 2 broadcast is translated into the tunneling system. The original VXLAN specification attempts to map layer 2 broadcasts directly onto the overlay model by flooding them using IP multicast; this allows virtual-network MAC addresses to be mapped to virtual-network IP addresses using Ethernet-style MAC address learning. There are also proprietary control plane implementations in which tunnel endpoints exchange VM-MAC-to-VTEP-IP mapping information. Another method is to use MP-BGP to transport MPLS VPN membership information between controllers, where MPLS is used as the tunneling mechanism. In general, learning virtual MAC addresses across the virtual network is where virtual network overlay solutions differ the most. We have chosen a few alternative approaches just to highlight these differences and do not attempt to provide authoritative reviews of the different approaches.

The five steps of the overlay operation are as follows:

1. Host A resolves host B's addresses as just described, creates an appropriate packet containing the MAC and IP addresses, and forwards it upstream to the local switch for forwarding.

2. The local switch receives the incoming packet from Host A, looks up the destination VTEP to which Host B is connected, and constructs an encapsulated packet.

3. The outer destination IP address is the destination VTEP and the outer source IP address is the local VTEP; the entire layer 2 frame originating from Host A is encapsulated in the IP payload of the new packet.

4. The encapsulated packet is sent to the target VTEP.

5. The destination VTEP receives the packet, strips off the outer encapsulation information, and forwards the original frame to Host B.

Figure 6.11 shows these five steps. From the perspective of the two hosts, the frames sent from one to the other are the original frames constructed by the originating host. However, as packets traverse the physical network, they are encapsulated by VTEPs and forwarded directly from one VTEP to another, where they are finally decapsulated and presented to the destination host.
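The encapsulation in steps 2 and 3 can be illustrated with scapy's VXLAN layer, as in the sketch below. The addresses and the VNI are made-up example values; this only demonstrates the MAC-in-IP packet structure, not a production VTEP.

from scapy.all import Ether, IP, UDP
from scapy.layers.vxlan import VXLAN

# Original frame as Host A sends it (inner, virtual-network addresses).
inner = (Ether(src="02:00:00:00:00:0a", dst="02:00:00:00:00:0b") /
         IP(src="10.0.0.10", dst="10.0.0.11"))

# Outer headers added by the local VTEP: the physical network only ever
# sees VTEP addresses, never the VM MAC/IP addresses.
outer = (Ether() /
         IP(src="192.0.2.1", dst="192.0.2.2") /  # local VTEP -> remote VTEP
         UDP(dport=4789) /                       # IANA-assigned VXLAN port
         VXLAN(vni=5000) /                       # tenant segment identifier
         inner)

outer.show()  # inspect the full encapsulated packet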

Examples of SDN via Hypervisor-Based Overlays

Prior to Nicira’s acquisition by VMware in 2012, Nicira’s Network Virtualization Platform (NVP) was a very popular SDN via hypervisor-based overlays offering in the market. NVP has been widely deployed in large data centers. It permits virtual networks and services to be created independently of the physical network infrastructure below. Since the acquisition, NVP has been bundled with VMware’s vCloud Network and Security (vCNS) and is marketed as VMware NSX, where it continues to enjoy considerable market success. The NVP system includes an open-source virtual switch, Open vSwitch (OVS), that works with the hypervisors in the NVP architecture. Open vSwitch has become the most popular open-source virtual switch implementation, even outside of NVP implementations. Interestingly, NVP uses OpenFlow as the southbound interface from its controller to the OVS switches that form its overlay network. Thus, it is both an SDN via hypervisor-based overlays and an Open SDN implementation.

Another important SDN via hypervisor-based overlays offering is IBM’s Software Defined Network for Virtual Environments (SDN VE). SDN VE is based on IBM’s established overlay technology, Distributed Overlay Virtual Ethernet (DOVE).

9.6 Ranking SDN with hypervisor-based overlays

A side note here: SDN with a hypervisor-based overlay is often touted as providing network virtualization over existing physical networks. At face value, this does not force simplification of the device; it only allows simplification. A customer may be very happy to keep a very complex switch even if there is another way to scrap that investment and move to a simpler switch. This actually applies to OpenFlow as well: many vendors add the OpenFlow protocol to existing complex switches to produce hybrid switches. So OpenFlow itself does not prescribe simple devices, but it does allow them.

Plane separation is more complicated. The virtual network's control plane is genuinely decoupled, in the sense that the physical network topology is abstracted and the virtual network only tunnels over this abstraction. However, the physical switches themselves still implement a traditional physical network control plane locally, so we give it a moderate score.

The criterion of excessive change is also difficult to classify here. On the one hand, you can typically continue to use the same physical network infrastructure, eliminating the need for forklift upgrades of equipment. On the other hand, shifting the network administrator's mindset to virtual networks is a game-changer, and it would be a mistake to underestimate the magnitude of the change it entails. So we classify this as medium.


SDN via a hypervisor-based overlay may or may not be based on the concept of a centralized controller, hence its N/A rank in the single-point-of-failure category. Similarly, we believe it deserves the same intermediate rank as Open SDN in terms of performance and scope. Because virtual switches can be freely implemented under the overlay paradigm, they can outperform Open SDN in terms of deep packet inspection and stateful flow recognition, and different flows between the same hosts can be mapped to different tunnels. This freedom comes from the fact that these virtual switches can implement their own features, such as deep packet inspection, and are not constrained by standards such as OpenFlow. While these features may be supported by proprietary overlay solutions, they are not currently supported by some of the major incumbent overlay vendors, so these two are rated moderate. SDN via hypervisor-based overlays scores at the top in the last two categories. The MAC forwarding table size issue is resolved because the physical network devices only handle the MAC addresses of VTEPs, a much smaller number than all of the VM MAC addresses. Similarly, because virtualization is achieved through tunnels with this technology, the system does not have to rely heavily on VLANs for virtualization.

In summary, SDN via hypervisor-based overlay networks was specifically designed for data centers and is not tailored for other environments such as campus, service provider, carrier, and transport networks. This allows it to directly address the needs of some critical data centers, but it has less application in other network environments. Overlay alternatives do not fundamentally change the underlying physical network. Looking at SDN use cases outside the data center in this chapter, we see that solutions often rely on changes in the underlying physical network; in such cases, the overlay approach is clearly not suitable. SDN via hypervisor-based overlays is a key solution that has attracted a lot of attention in data center environments, but it does not always offer the simplicity and openness of Open SDN devices.


10.1 Traditional Switch Architecture

The most common type of network, a traditional network, uses fixed, dedicated hardware and network devices such as switches and routers to control network traffic.

Scalability is a common problem in traditional networks. Most switching hardware and software is proprietary, and APIs are not commonly exposed for deployment.

Traditional networks typically work well with proprietary deployment software. Unfortunately, in traditional networks this software cannot be changed as needed, and its use can be very limited in hardware-centric networks. Traditional network functions and features are implemented in the following ways:

These functions are implemented by dedicated devices using switches, routers, and application delivery controllers.

This functionality is primarily implemented in application-specific integrated circuits (ASICs) and other specialized hardware.

Fig no.16 Traditional Switch Architecture.

A fundamental problem with traditional network architectures is that they are static. The user has little control, and the network can only be adapted to the user's current needs with difficulty. This means that the window for innovation around network control, virtualization, automation, scalability, programmability, etc. is too narrow.

A Software-Defined Network (SDN) is a network architecture that separates the control and forwarding functions. It allows network operators and administrators to easily and centrally configure networks across thousands of devices. One study develops and evaluates quality of service (QoS) performance between two networks using SDN-based and non-SDN-based architectures, with Mininet used as a software emulator for the data plane of the software-defined network. The study compares QoS values in networks based on software-defined networking and traditional networking when running tests from the source node under traffic load. Traffic loads from 20 Mbit/s to 100 Mbit/s are used. The results validate that the QoS of the software-defined network architecture outperforms the traditional network architecture: the software-defined network shows latency values of 0.019 to 0.084 ms and 0% packet loss when network traffic is offered at 10 to 100 Mbit/s.
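For reference, an experiment like the one above can be set up with a few lines of the Mininet Python API. This is a minimal sketch under assumed topology details (two hosts, one switch, 100 Mbit/s links), not the study's actual setup.

from mininet.net import Mininet
from mininet.link import TCLink

net = Mininet(link=TCLink)       # TCLink enables bandwidth/delay shaping
h1 = net.addHost("h1")
h2 = net.addHost("h2")
s1 = net.addSwitch("s1")
net.addLink(h1, s1, bw=100)      # 100 Mbit/s access link
net.addLink(h2, s1, bw=100, delay="1ms")
net.start()
net.pingAll()                    # basic reachability / latency check
net.iperf((h1, h2))              # offered-load throughput test
net.stop()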

10.1.1 Traditional Switch Architecture Roles:

The core of traditional network switches is built on custom silicon, either ASICs, FPGAs, or
NPUs. FPGAs and NPUs are field-modifiable, but still have limited functionality and are
relatively expensive. These hardware devices can forward packets at the required "wire
speed" based on Layer 2 and Layer 3 inputs. A notable trade-off is performance versus
flexibility. The control plane handles routing functions and computes packet forwarding
rules.

The following diagram shows a traditional non-SDN switch with the following components:

• Transceivers (TRX), the ports that transmit and receive communications over a medium (copper, optical, radio frequency, etc.).

• Application-specific integrated circuits (ASICs), which process incoming and outgoing data packets. ASICs are dedicated silicon devices that perform only a limited set of tasks, and in this particular case are very fast (up to 40 Gbps per port).

• Layer 2 and Layer 3 tables, the core building blocks on which the ASIC operates.

New features have been added to better isolate hosts or virtual machines and meet the needs of virtualized servers, such as virtual local area networks (VLANs) and virtual routing and forwarding (VRF) instances, along with access control, Quality of Service (QoS), port groups, etc. Most of these features must be manually configured, and each vendor's configuration differs.

Fig no 17. A traditional non-SDN switch

The traditional switch and the three planes

The following diagram shows what the switch looks like when the non-SDN switch components are
combined with the data, control, and management planes.

• Transceivers (TRX) and ASICs form the data plane.

• General purpose CPUs host both the control and management planes.

• The control plane handles routing functions and is also responsible for calculating forwarding rules.

• The management plane is used to set up and change network switch configurations.

Fig no.18 Data, Control, and Management Planes in a Switch.


10.1.2 New packet arrives at switch

When a packet enters the switch, the data plane looks up forwarding rules based on information in the packet header. If it matches, the packet is sent on its way. If there is no match, the packet is sent to the control plane, where the routing process (layer 3) takes place. The packet is then returned to the data plane and forwarded to the appropriate egress port. The control plane then adds a new forwarding rule to the layer 2 forwarding table so that subsequent similar packets are no longer exceptions and are forwarded at line speed.

When a new packet arrives:

1. A new packet arrives at the receive port and is buffered.

2. If the data plane has no rule matching this packet, the control plane needs to decide what to do.

3. The control plane receives the packet from the data plane and performs the routing function.

4. The control plane then saves the computed actions (egress port, etc.) in the forwarding table. Forwarding tables are stored in content-addressable memory, allowing fast and efficient lookups and matching.

5. At this point, the data plane can apply the rules stored in TCAM (Ternary Content-Addressable Memory).

6. The data plane forwards the packet to the egress port.

7. The egress port then sends the packet over the medium.

Requests to the control plane take longer because the actions must be computed and the control-plane CPU is slower than the dedicated forwarding hardware.
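To make this exception path concrete, the following is a minimal Python sketch of the lookup-and-punt logic described above. All names and the routing decision are invented for illustration; a real switch implements the fast path in ASIC/TCAM hardware rather than in software.

```python
# Minimal model of the data-plane lookup with a control-plane punt on a miss.
# All names are illustrative; real switches do the fast path in ASIC/TCAM.

forwarding_table = {}  # (src, dst) -> egress port; stands in for the TCAM


def control_plane_route(packet):
    """Slow path: run the routing function and compute an egress port."""
    egress_port = hash(packet["dst"]) % 48 + 1  # placeholder routing decision
    # Step 4: save the computed action in the forwarding table.
    forwarding_table[(packet["src"], packet["dst"])] = egress_port
    return egress_port


def data_plane_forward(packet):
    """Fast path: match the header against the stored rules."""
    key = (packet["src"], packet["dst"])
    if key in forwarding_table:          # match -> forward at line speed
        return forwarding_table[key]
    return control_plane_route(packet)   # miss -> punt to the control plane


# The first packet takes the slow path; subsequent ones hit the installed rule.
pkt = {"src": "10.0.0.1", "dst": "10.0.0.2"}
print(data_plane_forward(pkt))  # miss: control plane computes and installs
print(data_plane_forward(pkt))  # hit: forwarded from the table immediately
```

The second call returns straight from the table, which is exactly the behavior described for subsequent packets below.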

Fig no. 19 New Network Packet Arriving at Switch.

Subsequent packets arriving at the switch


The next packet that arrives with the same source and destination is forwarded based on the existing rule, as long as that rule is still valid and has not expired:

1. The packet arrives.

2. It is compared against the rules stored in TCAM.

3. The packet is then forwarded to the egress port.

4. The egress port sends the packet over the medium.

Fig no. 20 Network packets with known forwarding rules.

This section described the architecture and operation of traditional data switches, showing how the three planes are implemented and how they interact. The basic trade-off on offer is performance versus flexibility, and this compromise and the limitations that follow from it led to the emergence of SDN. The following sections describe the implementation and operation of software-defined switches and introduce the concept of the SDN controller. Finally, RFC 7426 is presented, highlighting the various layers of abstraction created to simplify complex data networks.
10.1.3 Separation of the data and control planes

Fig no. 21 Control & Data planes separation and SDN architecture.

To achieve this separation, the network must be able to send automated data requests, for example for third-party integrity verification of data held in cloud storage [11][12][13]. Where such additional requests must be sent across the whole network, implementation and experiment are needed to accurately measure the network response time and to determine how accurately the proposed method can provide optimized large-scale data sampling from big data on cloud servers, centered on a software-defined networking (SDN) infrastructure with fault-tolerant mechanisms.

The transmission of data from sensors or monitoring devices in electronic health, vehicle
informatics, or Internet of Things (IoT) networks faces the constant challenge of improving data
accuracy with relative efficiency. Previous studies have suggested using inference systems on sensor
devices to minimize data transmission frequency and data size, saving network usage and battery
resources. This was implemented using different algorithms for sampling and inference considering
the trade-off between accuracy and efficiency. This paper proposes to improve accuracy without
sacrificing efficiency by introducing a novel sampling algorithm via hybrid inference methods.
Experimental results show that accuracy can be significantly improved without compromising
efficiency. These algorithms help save operational and maintenance costs for data collection under constrained or limited computing and battery resources, e.g., in the wireless personal area networks that emerged alongside IoT.

10.2 Control plane

The control plane handles traffic destined for the network device itself:


1. For unicast traffic, the destination IPv4 or IPv6 field of traffic entering the device is set to the IPv4
or IPv6 address assigned to the network device.

2. For link-local multicast traffic, the destination IPv4 or IPv6 field of traffic entering the device is set
to the IPv4 or IPv6 address that the network device is listening on.

A practical example of control-plane traffic in this topology is ICMP traffic destined for the network device itself. When a network device receives an ICMP Echo Request packet destined for the IP address 192.168.10.1 (assigned to the network device), the data plane recognizes that the device itself owns this IP address and forwards the packet onward to the control plane over the in-band interface. This action is known as a "punt".

When the control plane receives this ICMP echo request packet over the inband interface, it
examines it and "forwards" it to the ICMP software process for proper handling by the ICMP process.

The ICMP software process then generates an ICMP Echo Reply packet, which is sent to the in-band interface of the control plane, dequeued by the data plane, and forwarded to the host out of Ethernet1/1.

Other common examples of control plane traffic are routing protocol traffic (such as OSPF, EIGRP,
BGP, or PIM packets) and Layer 2 protocols (such as Spanning Tree Protocol, LACP, CDP, or LLDP
frames).
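As an illustrative sketch (not any vendor's implementation), the punt decision can be modeled as a simple membership test on the destination address; the addresses below are placeholders, with the device address taken from the example above.

```python
# Illustrative punt decision: traffic addressed to the device itself (or to a
# link-local multicast group it listens on) goes to the control plane.
DEVICE_ADDRESSES = {"192.168.10.1"}               # addresses assigned to the device
LISTENING_MULTICAST = {"224.0.0.5", "224.0.0.6"}  # e.g. OSPF all-routers groups


def should_punt(dst_ip: str) -> bool:
    """Return True if the packet is control-plane traffic to be punted."""
    return dst_ip in DEVICE_ADDRESSES or dst_ip in LISTENING_MULTICAST


print(should_punt("192.168.10.1"))   # True: punted over the in-band interface
print(should_punt("192.168.10.50"))  # False: transit traffic, forwarded in hardware
```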

10.3 Management plane

Fig no. 22 Control plane tables: routing, ARP, and forwarding.

Above you can see the control plane where we use routing protocols like OSPF and EIGRP and some
static routing. The best routes are installed in the routing table. Another table that the router has to
build is the ARP table.
Information from the routing and ARP table is then used to build the forwarding table. When the
router receives an IP packet, it will be able to forward it quickly since the forwarding table has
already been built.
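A minimal Python sketch of this table-building step follows, using only the standard ipaddress module; the prefixes, next hops, MAC addresses, and interface names are made up for illustration.

```python
# Sketch of building a forwarding table from the routing and ARP tables,
# with longest-prefix match; all addresses and ports are illustrative.
import ipaddress

routing_table = {          # prefix -> next-hop IP (from OSPF/EIGRP/static)
    "10.1.0.0/16": "192.168.1.2",
    "0.0.0.0/0":   "192.168.1.1",
}
arp_table = {              # next-hop IP -> (MAC address, egress interface)
    "192.168.1.1": ("aa:bb:cc:00:00:01", "Ethernet1/1"),
    "192.168.1.2": ("aa:bb:cc:00:00:02", "Ethernet1/2"),
}

# Pre-compute the forwarding table so per-packet lookups are cheap.
forwarding_table = []
for prefix, next_hop in routing_table.items():
    mac, interface = arp_table[next_hop]
    forwarding_table.append((ipaddress.ip_network(prefix), mac, interface))
# Longest prefix first, mirroring how a forwarding table is ordered.
forwarding_table.sort(key=lambda entry: entry[0].prefixlen, reverse=True)


def forward(dst_ip: str):
    """Return (next-hop MAC, egress interface) for a destination address."""
    addr = ipaddress.ip_address(dst_ip)
    for network, mac, interface in forwarding_table:
        if addr in network:
            return mac, interface
    return None  # no route


print(forward("10.1.2.3"))  # matches 10.1.0.0/16
print(forward("8.8.8.8"))   # falls through to the default route
```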

Management plane - responsible for monitoring, configuring, and maintaining network devices, e.g., determining the state of a network device. The management plane can be used to configure the forwarding plane, but this happens less frequently and through a more deliberate, comprehensive approach than in the control plane.

Management Plane

The management plane handles traffic sent to network devices in order to configure, manage, or monitor them. In other words, management-plane traffic reaches the device in the same way as control-plane traffic, but the purpose of the traffic is to configure, manage, or monitor the device.

A practical example of management-plane traffic in this topology is SSH traffic destined for the network device itself. When a network device receives an SSH packet destined for the IP address 192.168.10.1 (assigned to the network device), the data plane recognizes that the device itself owns this IP address and punts the packet to the control plane over the in-band interface.

When the control plane receives this SSH packet over the inband interface, it examines it and
forwards it to the SSH software process so that it can handle it appropriately.

The SSH software process then generates SSH traffic in response, which is sent to the in-band control-plane interface, dequeued by the data plane, and forwarded to the host out of Ethernet1/1.

Other common examples of management-plane traffic are SNMP traffic (which can be used to monitor network devices and to configure them), NETCONF traffic, and gRPC traffic (which can be used to monitor network devices through model-driven telemetry).

Some network devices have a dedicated out-of-band management port that can primarily send and
receive management plane traffic. This management port may be able to send and receive LLDP or
CDP, but rarely supports other types of control plane protocols (Spanning Tree Protocol, routing
protocols such as OSPF/EIGRP/BGP, etc.).
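As a hedged illustration of management-plane polling, the sketch below reads a device's sysDescr object over SNMPv2c using the open-source pysnmp library (4.x hlapi); the device address and the 'public' community string are lab placeholders, not recommended settings.

```python
# Minimal management-plane polling sketch using pysnmp's high-level API.
# Device address and community string are placeholders for a lab device.
from pysnmp.hlapi import (
    getCmd, SnmpEngine, CommunityData, UdpTransportTarget,
    ContextData, ObjectType, ObjectIdentity,
)

error_indication, error_status, error_index, var_binds = next(getCmd(
    SnmpEngine(),
    CommunityData('public', mpModel=1),        # SNMPv2c read community
    UdpTransportTarget(('192.0.2.1', 161)),    # management address of the device
    ContextData(),
    ObjectType(ObjectIdentity('SNMPv2-MIB', 'sysDescr', 0)),  # device description
))

if error_indication:
    print(error_indication)                    # e.g. timeout, unreachable device
else:
    for var_bind in var_binds:
        print(' = '.join(str(item) for item in var_bind))
```

Pointing this at a device's out-of-band management address keeps monitoring traffic off the in-band data path, as described above.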

10.4 SDN Management and Control

RFC 7426 focuses on four characteristics that distinguish SDN management from SDN control. The first is timescale: how fast each plane reacts and how fast it needs to react. The control plane reacts on a very short timescale, while the management plane does not necessarily need to react quickly to changes.
The second property is persistence. This refers to the period during which device state remains stable: control-plane state typically changes rapidly, while management-plane state can remain static for long periods. The third is locality: the control plane is typically distributed and attached to the device, while the management plane tends to be centralized and external to the device.

Finally, RFC 7426 invokes the CAP theorem, which states that a distributed system designer can guarantee at most two of three properties: consistency, availability, and partition tolerance. SDN proponents initially discussed centralized controllers, so CAP provides a useful tool for identifying the problems such centralization can cause.

10.5 SDN API’s

SDN implementation through APIs refers to southbound APIs that configure and manage the control plane active on the device. There are a number of legacy network device APIs in use that provide different levels of control (SNMP, CLI, TL1, RADIUS, TR-069, etc.) and a number of more modern ones (NETCONF/YANG, REST, XMPP, BGP-LS, etc.) that provide different levels of control over the network devices, data plane, topology, and so on, each with different advantages and disadvantages. They are not covered in depth here, but one key difference between them and the Open SDN approach must be understood:

OpenFlow is used to directly control the data plane, not just the configuration of the devices and the control plane.

The Open SDN approach described earlier has many technological and operational advantages, but it requires an enterprise, organization, or operator to replace old hardware with new hardware that supports the technology and, in some cases, new protocols such as OpenFlow.

Obviously, no enterprise is going to replace all of its hardware overnight, as that would involve significant expense and implementation and architectural challenges that, until resolved, could affect operations. In addition, there are plenty of non-technical issues, such as staff who know device X and network OS Y like the back of their hand and are not looking forward to the time it would take to learn a new technology and new processes.

When an enterprise decides to convert to a software-defined networking infrastructure, it may not get support from its existing network hardware vendor, which may have been enjoying hefty margins on network hardware sales and is not eager to push a technology that would make its expensive boxes replaceable by cheap, vendor-agnostic white boxes.

The left-hand figure shows an architectural view of a traditional network device (router, switch, etc.) with the software components and applications (upper rectangle) and the hardware components (lower rectangle), including the ASIC (application-specific integrated circuit for packet processing) and memory.
By adding a RESTful API interface we introduce a further abstraction layer and upgrade legacy devices, allowing them to be managed by an SDN controller using non-OpenFlow standards.
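As a sketch of such non-OpenFlow management, the following uses the open-source ncclient library to open a NETCONF session to a hypothetical device and fetch its running configuration; the host, port, and credentials are placeholders.

```python
# Hedged sketch: managing a legacy (non-OpenFlow) device over NETCONF with
# ncclient. The device must expose a NETCONF subsystem (usually TCP 830).
from ncclient import manager

with manager.connect(
    host="192.0.2.1",        # placeholder management address
    port=830,
    username="admin",        # placeholder credentials
    password="admin",
    hostkey_verify=False,    # lab use only; verify host keys in production
) as conn:
    # Advertised capabilities tell the controller what the device supports.
    for capability in list(conn.server_capabilities)[:5]:
        print(capability)

    # Retrieve the running configuration as XML-encoded data.
    running = conn.get_config(source="running")
    print(running)
```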

10.6 Northbound API and southbound API


A tech-savvy reader can easily understand what SDN is, its various branches, its core values, and so on, while a non-technical reader may be left in a pool of confusion trying to interpret the terminology. A quick overview:

Enterprise data center software-defined networking (SDN) is a comprehensive set of computing technologies aimed at making network infrastructures more flexible and agile, enabling IT administrators to respond quickly to changing business environments.

Southbound and northbound APIs are parts of SDN with different responsibilities; let's look at the difference between the two terms. Software defines the network through APIs, and the API is the control point for all components of the network: OpenFlow switches, SDN controllers, network management systems, and network analytics. Because HTTP is ubiquitous, API-based software can be written in many languages and iterated rapidly,
regardless of network hardware production and deployment cycles. Virtualization is a system model,
whereas API is an abstract model. Virtualization allows existing physical system descriptions to be
reused in a logical environment. The API allows a complete resource abstraction. Virtualization is
required to extend applications connected to physical or logical systems. For applications built using
APIs, resource bindings are fully dynamic, resolving to virtualized or abstracted resources during API
requests. This late binding can be done at the network layer rather than in application code (as was
envisioned and practiced in the early days of distributed computing). In Software Defined
Networking, binding is achieved by changing the network topology (packet routing). This allows the
network layer to mitigate the performance penalties associated with late binding.

SDN APIs - With the advent of software-defined networking (SDN), the industry has migrated toward open concepts in the hope of reducing, and ultimately eliminating, reliance on the various function-specific software tools developed and maintained by device vendors. One of the most important elements in the effort to accelerate the evolution toward open cloud networking and virtualization is the use of application programming interfaces (APIs) for development. An API is a set of routines, protocols, and tools for building software applications that specifies how software components and services interact. Ciena's white paper, Leveraging Rich APIs for New DCI Operational Paradigms, examines development APIs and their advantages for optical networking over traditional management protocols. Here are the five main APIs used today to realize the concept of SDN-based virtual networks.

1. Representational State Transfer (REST):

A generic application control interface that provides a mechanism for retrieving information from, or passing information to, network resources. It is the most widely used open API framework for products and web services and supports stand-alone events such as alarms.

2. REST Configuration (RESTCONF):

An HTTP-based protocol and network interface for managing applications using REST. It provides access to two datastores: configuration, which contains data injected through the controller, and operational, which contains data injected through the network.

3. Network Configuration Protocol (NETCONF):

A protocol for transferring and retrieving XML-encoded data between element management software (EMS) and network elements (NEs). It is designed to modify network configuration and provide a more functional management and configuration interface than its predecessors.

4. OpenFlow:

A networking approach that separates switch functions into a control plane and a data plane. It provides fine-grained, low-level control over data propagation and is managed by the Open Networking Foundation (ONF).

5. Google Remote Procedure Call (gRPC):

An open-source API initiative created by Google that aims to simplify building distributed applications and services by calling methods of server applications on other systems as if they were local objects.

APIs provide a simple and convenient way to manage network resources, enabling seamless integration with IT tools and efficient use of IT resources. For more information, see the Ciena white paper mentioned above.
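To illustrate item 2 above, here is a minimal RESTCONF sketch using Python's requests library. The device address and credentials are placeholders; the URL layout and media type follow RFC 8040 with the standard ietf-interfaces YANG model.

```python
# Minimal RESTCONF GET sketch; placeholders only, per RFC 8040 conventions.
import requests

DEVICE = "https://192.0.2.1"                       # placeholder device address
AUTH = ("admin", "admin")                          # placeholder credentials
HEADERS = {"Accept": "application/yang-data+json"}

# Retrieve the interface configuration from the device's RESTCONF datastore.
response = requests.get(
    f"{DEVICE}/restconf/data/ietf-interfaces:interfaces",
    auth=AUTH,
    headers=HEADERS,
    verify=False,  # lab use only; validate certificates in production
)
response.raise_for_status()
print(response.json())
```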

10.6.1 Two types of SDN API’s-

Northbound API –

The northbound software-defined networking application programming interface (SDN northbound API) is a RESTful SDN API used for communication between SDN controllers and the services and applications that typically run over the network. These APIs facilitate efficient network optimization and automation for different application needs through SDN network programmability.

How does the northbound API work? The northbound API is the link between applications and the SDN controller. Applications can tell the network what they need (data, disk space, bandwidth, etc.), and the network can provision or advertise those resources. These APIs support a wide variety of applications, which is probably why the SDN northbound API is one of the easiest components to design in the SDN environment. Different interfaces are provided at different places in the stack to control different types of applications through the SDN controller. Northbound APIs are also used to integrate the SDN controller with automation stacks such as Puppet, Chef, SaltStack, Ansible, and CFEngine, and with orchestration platforms such as OpenStack, VMware's vCloud Director, and Apache's open-source CloudStack. The goal is to abstract the inner workings of the network so that application developers can "plug into" the network and modify it to meet their application's needs.

One example of a northbound API for interacting with an SDN controller is found in Floodlight, an open-source controller based on OpenFlow. An open-source RESTful API runs above the controller and below the applications, acting as an interface between them (source: Project Floodlight). There are several open-source projects and groups involved in developing northbound and REST APIs; for example, the Linux Foundation's Open API Initiative aims to create open-source programmable APIs that can be used across a wide variety of programs, interfaces, and operating systems.
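As a concrete but simplified example of a northbound call, the sketch below queries a Floodlight controller for the switches it manages; the controller address is a placeholder, and the endpoint path reflects Floodlight's documented REST API, which can differ between versions.

```python
# Hedged sketch of a northbound REST call to a Floodlight controller.
import requests

CONTROLLER = "http://127.0.0.1:8080"  # placeholder; Floodlight's default REST port

# Ask the controller which switches it currently manages.
switches = requests.get(
    f"{CONTROLLER}/wm/core/controller/switches/json", timeout=5
).json()

for switch in switches:
    print(switch.get("switchDPID", switch))  # datapath ID of each switch
```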

Southbound API-

The southbound software-defined networking application programming interface (SDN southbound API) is used for communication between SDN controllers and the switches and routers in the network. Southbound APIs may be open source or proprietary.

How does the SDN southbound API work? With a southbound API, the controller can manage the network and dynamically change switch behavior to meet needs and requirements in real time. Developed by the Open Networking Foundation (ONF), OpenFlow is the first and most popular southbound interface. OpenFlow defines how SDN controllers interact with the forwarding layer so the network can better adapt to changing business needs. OpenFlow allows entries to be added to and removed from the internal flow tables of switches and routers, making the network more responsive to real-time traffic demands. Many companies create and sell their own APIs that complement their core products, including YouTube, Google, Facebook, and Amazon. Oxygen, the latest version of the OpenDaylight SDN controller platform, has been developed to be compatible with products from dozens of companies, including Google, Juniper Networks, and Cisco (source: OpenDaylight).

There are other SDN protocols for the southbound API that use other methods for the same tasks that OpenFlow addresses. The Network Configuration Protocol (NETCONF) uses Extensible Markup Language (XML) to communicate with switches and routers to make settings and configuration changes. LISP (Locator/ID Separation Protocol), also promoted by ONF, can be used to support flow mapping. Additionally, more established network protocols such as Open Shortest Path First (OSPF), MPLS, BGP, SPB, and IS-IS are finding ways to work in the SDN environment.
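To make the flow-table manipulation concrete, here is a minimal southbound sketch written as an application for the open-source Ryu OpenFlow controller framework: when a switch connects, it installs one OpenFlow 1.3 flow entry. The port numbers and priority are illustrative assumptions, not a prescribed configuration.

```python
# Minimal Ryu app: when a switch connects, install a flow entry that
# forwards traffic arriving on port 1 out of port 2 (illustrative ports).
# Run with: ryu-manager this_file.py
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class SimpleFlowInstaller(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_features_handler(self, ev):
        datapath = ev.msg.datapath
        ofproto = datapath.ofproto
        parser = datapath.ofproto_parser

        # Match packets entering on port 1; output them on port 2.
        match = parser.OFPMatch(in_port=1)
        actions = [parser.OFPActionOutput(2)]
        instructions = [parser.OFPInstructionActions(
            ofproto.OFPIT_APPLY_ACTIONS, actions)]

        # OFPFlowMod adds the entry to the switch's flow table.
        flow_mod = parser.OFPFlowMod(datapath=datapath, priority=10,
                                     match=match, instructions=instructions)
        datapath.send_msg(flow_mod)
```

This is the same add-and-remove flow-entry mechanism described above, just driven programmatically from the controller instead of by hand.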

10.7 SDN devices:

1) Cisco DNA Center - Cisco DNA Center is the network management system, foundational controller, and analytics platform at the heart of Cisco's intent-based network. DNA Center addresses the demands of digitization, cloud, IoT, and mobility by eliminating IT complexity.

2) Cisco ACI

Cisco Application Centric Infrastructure (ACI) is a network virtualization technology.

3) IBM Cloud Internet Services

IBM Cloud Internet Services is a set of edge network services for securing internet-facing applications from DDoS attacks, data theft, and bot attacks, as well as optimizing web apps and ensuring global responsiveness and the ongoing availability of internet-facing applications.

4) Cisco SD-Access

Cisco's Software-Defined Access (SD-Access) provides automated end-to-end segmentation


to separate user, device and application traffic without redesigning the network. Cisco SD-
Access automates user access policy so organizations can make sure the right policies are
established…

5) Cradlepoint NetCloud Engine

Cradlepoint, based in Boise, offers all-inclusive NetCloud Solution Packages for branch, mobile, and IoT networks that combine tailored NetCloud services with fit-for-purpose hardware and a comprehensive support plan.

6) IBM Networking Services for Software-Defined Networking

IBM Networking Services for Software-Defined Networks (SDN) transforms hardware- and device-centric networks into virtual software-defined networks, improving agility, security, and cost efficiency.

7) ExtremeCloud SD-WAN

Extreme Networks' wireless products have grown since it acquired Enterasys in 2013 and
then Aerohive. ExtremeCloud SD-WAN is Extreme Networks' solution for simplicity and
control that integrates all disparate elements and centralizes control of your network down
to the branch.

8) VMware NSX

VMware NSX is a network virtualization technology.

9) Contrail Networking

Juniper Networks supports SDN with Contrail Networking, a solution that provides end-to-end dynamic network policy and control for any cloud, workload, NFV, and deployment from a single pane of glass. It converts abstract workflows into specific guidelines and simplifies…

10) Speedify VPN

Speedify is a new breed of bonded VPN built from the ground up for speed, security and
reliability. The vendor says Speedify's bonding protocol lets it do things no other VPN can:

switching between Wi-Fi and Cellular without breaking sockets, and bonding connections
together…
11) Paragon Pathfinder

Paragon Pathfinder (formerly NorthStar Controller) is a cloud-native controller that


simplifies traffic engineering, making it easier for you to leverage benefits provided by
transport service paths, such as MPLS/RSVP, segment routing, and network slicing. It
enables operations teams…

12) Tempered Airwall

Tempered Networks is network security technology from the company of the same name in
Seattle, Washington.

13) Junos Space Network Management

Junos Space Network Management Platform works with Juniper Networks' management
applications to simplify and automate management of Juniper's switching, routing, and
security devices. The platform provides broad fault, configuration, accounting, performance,
and security management…

14) Nuage Networks Virtualized Services Platform

Nuage Networks, a Nokia company, offers the Nuage Networks Virtualized Services Platform
(VSP), which provides software-defined networking (SDN) and policy-based automation for
cloud deployments. Designed for large enterprises and service providers, the vendor boasts
supporting clouds…
