Centralized Control
An SDN controller is the application that acts as a strategic
control point in a software-defined network. Essentially, it is
the “brains” of the network.
OpenFlow
OpenFlow, an open standard supported by many vendors, was the first
software-defined networking (SDN) control protocol. It separates the control
plane (decision-making) from the forwarding plane (packet handling).
The SDN architecture enabled by OpenFlow separates the network into three distinct
layers (application, control, and infrastructure), connected via northbound and southbound APIs.
In traditional network design, each switch contains a routing table that it
uses to decide how to route each packet. This routing table is largely static; it
must be updated by the administrator individually on each device.
In OpenFlow, an SDN controller is the control plane. The SDN controller
contains the logic and makes the decisions about how network traffic
should flow between the switches. The SDN controller establishes a
connection to each switch to pass messages. This connection uses
Transmission Control Protocol (TCP) and is often encrypted with Transport
Layer Security (TLS). It uses TCP port 6653 (earlier versions used 6633).
The controller sends commands to the OpenFlow switches, which handle the
network data. These commands modify the switch's flow table, the
OpenFlow equivalent of the routing and MAC address forwarding
tables. It contains all the instructions for how the switch will handle
network traffic.
The flow table contains many rows of flow entries, which tell the switch how to
handle each packet. Flow entries can match on fields from multiple OSI layers
of a packet, including MAC address, IP address, protocol, or port. These
conditions can be combined into complex, multi-level rules.
This level of flexibility allows each OpenFlow switch to act as a
basic firewall as well. Packets that match no rule can be forwarded to the
SDN controller, which inspects them and installs a new flow rule for them.
Security
SDNs need efficient and powerful security mechanisms to avoid vulnerabilities across the data,
control, and application planes. A number of security mechanisms have been proposed for SDNs. A
security mechanism generally involves three phases: monitoring the network (which generates
extra traffic at both the control and data planes), detecting the security breach (which takes time for
algorithm execution), and recovery (once an attack is detected, this phase incurs both time delay
and traffic overhead in taking the proper countermeasures against it).
The scalability of existing security mechanisms is a major concern as the number of hosts, switches,
controllers, flows, and attackers increases. Existing approaches typically attempt to achieve
scalability by improving the performance of an individual phase. For example, Fawcett et al., in
"TENNISON: A distributed SDN framework for scalable network security" in this SI, reduce the traffic
overhead and the execution time of the monitoring phase. Similarly, the Athena approach [9] focuses on
avoiding security vulnerabilities in the data plane. The main open research challenge is to develop
holistic security approaches that improve the performance of all phases across all planes. These holistic
approaches should reduce execution time and increase accuracy as the numbers of
controllers and flows grow.
SDN architectures are often used in large-scale networks, such as data centers and cloud
computing environments, where managing network traffic can be a complex and time-consuming
task. By automating many of these processes through SDN monitoring software, SDN can help
reduce the workload on network administrators and improve overall network performance.
An API, or application programming interface, is a set of defined rules that enable different
applications to communicate with each other. It acts as an intermediary layer that processes data
transfers between systems, letting companies open their application data and functionality to
external third-party developers, business partners, and internal departments within their
companies.
The definitions and protocols within an API help businesses connect the many different
applications they use in day-to-day operations, which saves employees time and breaks down
silos that hinder collaboration and innovation. For developers, API documentation provides the
interface for communication between applications, simplifying application integration.
Types of APIs
Today most APIs are web APIs that expose an application's data and functionality over the
internet. Here are the four main types of web API:
Open APIs are publicly available application programming interfaces you can access with the
HTTP protocol. Also known as public APIs, they have defined API endpoints and request
and response formats.
Partner APIs connect strategic business partners. Typically, developers access these
APIs in self-service mode through a public API developer portal. Still, they need to
complete an onboarding process and get login credentials to access partner APIs.
Internal APIs remain hidden from external users. These private APIs aren't available for
users outside of the company and are instead intended to improve productivity and
communication across different internal development teams.
Composite APIs combine multiple data or service APIs. They allow programmers to
access several endpoints in a single call. Composite APIs are useful in microservices
architecture where performing a single task may require information from several
sources.
Common control-plane routing protocols include OSPF, RIP, and BGP.
Data Plane
The data plane is the network architecture layer that physically handles the traffic based
on the configurations supplied by the Control Plane.
- Network Virtualization
Network virtualization (NV) moves network resources from hardware into software. It
can integrate many physical networks into a single virtual, software-based
network, or divide a single physical network into separate, independent virtual networks.
Network virtualization software lets network managers move virtual machines between domains
without having to reconfigure the network. The software creates a network
overlay that allows many virtual network layers to run on top of a single
physical network fabric.
Network virtualization is changing the rules for service delivery, from the
software-defined data centre to the cloud and all the way to the edge. It
transforms networks from rigid and inefficient to dynamic and agile.
Modern networks must keep up with the demand for cloud-hosted, distributed apps,
and with the growing threats posed by attackers, while providing
the speed and agility required for a faster time to market. Network virtualization
eliminates the extra time spent setting up infrastructure: users can launch apps
and modify them in minutes, resulting in a quick time to value.
- Process of Network Virtualization
Network virtualization decouples network services from the underlying hardware and
enables virtual network deployment across an entire network. Networks can be
created, provisioned, and managed entirely in software, while the
underlying physical network acts as the packet-forwarding backplane.
Physical network resources such as switching, routing, firewalling, load
balancing, and VPNs are pooled and delivered as software, requiring only
Internet Protocol (IP) packet forwarding from the underlying physical network.
Network and security services are deployed in software to a virtual layer, which
can be understood as the hypervisors in the data centre. These services are then
attached to particular workloads, such as virtual machines or containers, in
line with the networking and security policies defined for each linked application.
When a workload moves to a new host, its network services and security policies
follow it; when new workloads are created to scale an application, the relevant
policies are applied to them as well, resulting in increased policy consistency
and network agility.