Chapter 5: Network layer control plane - our goals

understand principles behind the network control plane:
• traditional routing algorithms
• SDN controllers
• network management, configuration

instantiation, implementation in the Internet:
• OSPF, BGP
• OpenFlow, ODL and ONOS controllers
• Internet Control Message Protocol: ICMP
• SNMP, YANG/NETCONF
Computer Networking: A
Top‐Down Approach
8th edition
Jim Kurose, Keith Ross
Pearson, 2020
Network Layer: 5‐2
Network layer: "control plane" roadmap
• introduction
• routing protocols
  – link state
  – distance vector
• intra-ISP routing: OSPF
• routing among ISPs: BGP
• SDN control plane
• Internet Control Message Protocol
• network management, configuration: SNMP, NETCONF/YANG
Network-layer functions
• forwarding: move packets from a router's input to the appropriate router output (data plane)
• routing: determine the route taken by packets from source to destination (control plane)

Two approaches to structuring the network control plane:
• per-router control (traditional)
• logically centralized control (software-defined networking)

Per-router control plane: individual routing algorithm components in each and every router interact in the control plane; the data plane then forwards based on values in arriving packet headers.
11/29/2022
Software-Defined Networking (SDN) control plane
A remote controller computes and installs forwarding tables in routers. A control agent (CA) in each router communicates with the remote controller; the local data plane still forwards based on values in arriving packet headers.
Routing protocols
Routing protocol goal: determine "good" paths (equivalently, routes) from sending hosts to receiving host, through a network of routers
• path: sequence of routers packets traverse from a given initial source host to a final destination host
• "good": least "cost", "fastest", "least congested"
• routing: a "top-10" networking challenge!
(figure: path from a mobile network through a national or global ISP to datacenter and enterprise networks)

Graph abstraction: link costs
• graph: G = (N,E)
• N: set of routers = { u, v, w, x, y, z }
• E: set of links = { (u,v), (u,x), (v,x), (v,w), (x,w), (x,y), (w,y), (w,z), (y,z) }
• ca,b: cost of the direct link connecting a and b, e.g., cw,z = 5, cu,z = ∞
• cost defined by network operator: could always be 1, or inversely related to bandwidth, or inversely related to congestion
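The G = (N, E) abstraction maps directly onto a small data structure. A minimal sketch in Python (the `cost` helper is our name, not from the slides; link costs are read from the figure, including the direct u-w link of cost 5 that the later Dijkstra example uses):

```python
import math

# G = (N, E) with symmetric link costs, as on the slide
N = {"u", "v", "w", "x", "y", "z"}
E = {
    ("u", "v"): 2, ("u", "x"): 1, ("u", "w"): 5,
    ("v", "x"): 2, ("v", "w"): 3, ("x", "w"): 3,
    ("x", "y"): 1, ("w", "y"): 1, ("w", "z"): 5, ("y", "z"): 2,
}

def cost(a, b):
    """c_{a,b}: cost of the direct link between a and b; = infinity if a and b
    are not direct neighbors (e.g., cost('u', 'z') is inf)."""
    return E.get((a, b), E.get((b, a), math.inf))
```

For example, `cost("w", "z")` returns 5 and `cost("u", "z")` returns infinity, matching the slide's cw,z = 5 and cu,z = ∞.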
Routing algorithm classification

global or decentralized information?
• global: all routers have complete topology and link cost info ("link state" algorithms)
• decentralized: iterative process of computation, exchange of info with neighbors; routers initially only know link costs to attached neighbors ("distance vector" algorithms)

How fast do routes change?
• static: routes change slowly over time
• dynamic: routes change more quickly, with periodic updates or in response to link cost changes
Dijkstra's link-state routing algorithm
• centralized: network topology, link costs known to all nodes
  – accomplished via "link state broadcast"; all nodes have same info
• computes least cost paths from one node ("source") to all other nodes
  – gives forwarding table for that node
• iterative: after k iterations, know least cost path to k destinations

notation:
• cx,y: direct link cost from node x to y; = ∞ if not direct neighbors
• D(v): current estimate of cost of least-cost path from source to destination v
• p(v): predecessor node along path from source to v
• N': set of nodes whose least-cost path is definitively known

Initialization:
  N' = {u}                /* compute least cost paths from u to all other nodes */
  for all nodes v:
    if v adjacent to u:   /* u initially knows direct-path cost only to direct neighbors */
      D(v) = cu,v         /* but this may not be the minimum cost! */
    else:
      D(v) = ∞

Loop:
  find w not in N' such that D(w) is a minimum
  add w to N'
  update D(v) for all v adjacent to w and not in N':
    D(v) = min( D(v), D(w) + cw,v )
  /* new least-cost path to v is either the old least-cost path to v, or the
     known least-cost path to w plus the direct cost from w to v */
until all nodes in N'
Dijkstra's algorithm: an example

Step  N'       D(v),p(v)  D(w),p(w)  D(x),p(x)  D(y),p(y)  D(z),p(z)
 0    u        2,u        5,u        1,u        ∞          ∞
 1    ux       2,u        4,x                   2,x        ∞
 2    uxy      2,u        3,y                              4,y
 3    uxyv                3,y                              4,y
 4    uxyvw                                                4,y
 5    uxyvwz

Initialization (step 0): for all a: if a adjacent to u, then D(a) = cu,a

each iteration:
  find a not in N' such that D(a) is a minimum
  add a to N'
  update D(b) for all b adjacent to a and not in N': D(b) = min( D(b), D(a) + ca,b )

resulting least-cost-path tree from u gives the forwarding table in u:

  destination   outgoing link
  v             (u,v)    route from u to v directly
  x             (u,x)
  y             (u,x)    route from u to all
  w             (u,x)    other destinations via x
  z             (u,x)
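The pseudocode and the worked table above can be checked with a short Python implementation (function and variable names are ours; a heap stands in for the "find w not in N' with minimum D(w)" step):

```python
import heapq
import math

def dijkstra(graph, source):
    """Dijkstra's link-state algorithm. Returns (D, p), where D[v] is the
    least path cost from source to v and p[v] is v's predecessor."""
    D = {v: math.inf for v in graph}
    D[source] = 0
    p = {source: None}
    N_prime = set()                       # nodes whose least cost is definitively known
    pq = [(0, source)]
    while pq:
        _, w = heapq.heappop(pq)          # w not in N' with minimum D(w)
        if w in N_prime:
            continue
        N_prime.add(w)
        for v, c_wv in graph[w].items():  # update D(v) for neighbors v of w not in N'
            if D[w] + c_wv < D[v]:
                D[v] = D[w] + c_wv        # new least cost: via w
                p[v] = w
                heapq.heappush(pq, (D[v], v))
    return D, p

# graph from the example (symmetric link costs)
links = [("u","v",2), ("u","x",1), ("u","w",5), ("v","x",2), ("v","w",3),
         ("x","w",3), ("x","y",1), ("w","y",1), ("w","z",5), ("y","z",2)]
graph = {n: {} for n in "uvwxyz"}
for a, b, c in links:
    graph[a][b] = c
    graph[b][a] = c

D, p = dijkstra(graph, "u")
# D == {'u': 0, 'v': 2, 'w': 3, 'x': 1, 'y': 2, 'z': 4}, matching the table's final column values
```

Tracing predecessors from z (z ← y ← x ← u) reproduces the forwarding-table entry: u reaches z via link (u,x).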
Dijkstra's algorithm: another example

Step  N'       D(v),p(v)  D(w),p(w)  D(x),p(x)  D(y),p(y)  D(z),p(z)
 0    u        7,u        3,u        5,u        ∞          ∞
 1    uw       6,w                   5,u        11,w       ∞
 2    uwx      6,w                              11,w       14,x
 3    uwxv                                      10,v       14,x
 4    uwxvy                                                12,y
 5    uwxvyz

notes:
• construct the least-cost-path tree by tracing predecessor nodes
• ties can exist (can be broken arbitrarily)

Dijkstra's algorithm: discussion

algorithm complexity (n nodes):
• each of n iterations: need to check all nodes w not in N'
• n(n+1)/2 comparisons: O(n²) complexity
• more efficient implementations possible: O(n log n)

message complexity:
• each router must broadcast its link state information to the other n routers
• efficient (and interesting!) broadcast algorithms: O(n) link crossings to disseminate a broadcast message from one source
• each router's message crosses O(n) links: overall message complexity: O(n²)
Dijkstra's algorithm: oscillations possible
when link costs depend on traffic volume, route oscillations are possible

sample scenario:
• routing to destination a; traffic entering at d, c, e with rates 1, e (e < 1), 1
• link costs are directional, and volume-dependent

(figure: four successive snapshots; starting from the initial routing, each round of "given these costs, find new routing" shifts the traffic, "resulting in new costs" that reverse the next routing decision, and so on)
Distance vector algorithm
Based on the Bellman-Ford (BF) equation (dynamic programming):

Bellman-Ford equation
Let Dx(y) = cost of the least-cost path from x to y. Then:

  Dx(y) = minv { cx,v + Dv(y) }

where:
• the min is taken over all neighbors v of x
• cx,v is the direct cost of the link from x to v
• Dv(y) is v's estimated least-cost-path cost to y

Bellman-Ford example
Suppose that u's neighboring nodes x, v, w know that for destination z:
  Dv(z) = 5, Dx(z) = 3, Dw(z) = 3
The Bellman-Ford equation says:
  Du(z) = min { cu,v + Dv(z), cu,x + Dx(z), cu,w + Dw(z) }
        = min { 2 + 5, 1 + 3, 5 + 3 } = 4
The node achieving the minimum (x) is the next hop on the estimated least-cost path to the destination (z).
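The computation above, checked in Python (the `bf_update` helper name and dictionary layout are ours):

```python
def bf_update(c_u, D_neighbors, dest):
    """Bellman-Ford equation: D_u(dest) = min over neighbors v of c_{u,v} + D_v(dest).
    Returns (cost, next_hop): the minimizing neighbor is the next hop."""
    return min(((c_u[v] + D_neighbors[v][dest], v) for v in c_u),
               key=lambda t: t[0])

# u's direct link costs, and its neighbors' advertised distances to z (from the slide)
c_u = {"v": 2, "x": 1, "w": 5}
D_neighbors = {"v": {"z": 5}, "x": {"z": 3}, "w": {"z": 3}}

cost_to_z, next_hop = bf_update(c_u, D_neighbors, "z")
# cost_to_z == 4 (= min{2+5, 1+3, 5+3}), next_hop == "x"
```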
Distance vector algorithm

key idea:
• from time to time, each node sends its own distance vector estimate to its neighbors
• when x receives a new DV estimate from any neighbor, it updates its own DV using the B-F equation:
    Dx(y) ← minv { cx,v + Dv(y) }   for each node y ∊ N
• under minor, natural conditions, the estimates Dx(y) converge to the actual least costs dx(y)

each node:
  wait for (change in local link cost or msg from neighbor)
  recompute DV estimates using the DV received from the neighbor
  if the DV to any destination has changed, notify neighbors

iterative, asynchronous: each local iteration caused by:
• local link cost change
• DV update message from neighbor

distributed, self-stopping: each node notifies neighbors only when its DV changes
• neighbors then notify their neighbors, but only if necessary
• no notification received, no actions taken!
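A minimal synchronous sketch of this loop in Python, run on the three-node x, y, z example used in this chapter (cx,y = 2, cy,z = 1, cx,z = 7). Real DV is asynchronous; the round structure and names here are ours:

```python
import math

def dv_round(graph, D):
    """One synchronous DV round: every node applies the Bellman-Ford update
    to the distance vectors its neighbors sent in the previous round."""
    newD = {}
    for x in graph:
        newD[x] = {}
        for y in graph:
            newD[x][y] = 0 if x == y else min(
                (graph[x][v] + D[v][y] for v in graph[x]), default=math.inf)
    return newD

graph = {"x": {"y": 2, "z": 7}, "y": {"x": 2, "z": 1}, "z": {"x": 7, "y": 1}}

# each node initially knows only its direct link costs
D = {x: {y: (0 if x == y else graph[x].get(y, math.inf)) for y in graph}
     for x in graph}

# self-stopping: iterate until no node's DV changes
while (nxt := dv_round(graph, D)) != D:
    D = nxt
# D["x"] == {"x": 0, "y": 2, "z": 3}: x learns the cheaper 3-cost path to z via y
```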
Distance vector: Bellman-Ford example
three-node example, with cx,y = 2, cy,z = 1, cx,z = 7

at t=0, each node's table holds only its own DV (its direct link costs):

    Dx():  x y z       Dy():  x y z       Dz():  x y z
      x    0 2 7         y    2 0 1         z    7 1 0

all nodes send their local distance vector to their neighbors; each node's table now also holds its neighbors' DVs, e.g. at x:

    Dx():  from x: 0 2 7   from y: 2 0 1   from z: 7 1 0

each node applies the B-F update, and the estimates converge to the actual least costs:

    x: 0 2 3   y: 2 0 1   z: 3 1 0

e.g., Dx(z) = min{ cx,y + Dy(z), cx,z + Dz(z) } = min{ 2+1, 7+0 } = 3

Distance vector: Bellman-Ford, another example
grid of nodes a b c / d e f / g h i, with a few asymmetries: all link costs are 1, except the larger cost ca,b = 8, and the c-f link is missing

t=0: all nodes have distance estimates to nearest neighbors (only), e.g. the DV in a:
  Da(a)=0, Da(b)=8, Da(c)=∞, Da(d)=1, Da(e)=∞, Da(f)=∞, Da(g)=∞, Da(h)=∞, Da(i)=∞
all nodes send their local distance vector to their neighbors
Distance vector example: iteration
at each time step t = 1, 2, ..., all nodes:
• receive distance vectors from their neighbors
• compute their new local distance vector
• send their new local distance vector to their neighbors

(the slides animate this on the grid for t=1 and t=2: at each step every node computes in parallel, then the updated DVs are exchanged)
Distance vector example: iteration (continued)
... and so on: at each subsequent step, nodes again receive the DVs sent in the previous round, compute, and send.

Distance vector example: computation
Let's next take a look at the iterative computations at the nodes. At t=1, b receives DVs from a, c, e:

  DV in a: Da(a)=0, Da(b)=8, Da(c)=∞, Da(d)=1, Da(e)=∞, Da(f)=∞, Da(g)=∞, Da(h)=∞, Da(i)=∞
  DV in b: Db(a)=8, Db(c)=1, Db(d)=∞, Db(e)=1, Db(f)=∞, Db(g)=∞, Db(h)=∞, Db(i)=∞
  DV in c: Dc(a)=∞, Dc(b)=1, Dc(c)=0, Dc(d)=∞, Dc(e)=∞, Dc(f)=∞, Dc(g)=∞, Dc(h)=∞, Dc(i)=∞
  DV in e: De(a)=∞, De(b)=1, De(c)=∞, De(d)=1, De(e)=0, De(f)=1, De(g)=∞, De(h)=1, De(i)=∞

b then computes its new DV, e.g.:
  Db(a) = min{ cb,a + Da(a), cb,c + Dc(a), cb,e + De(a) } = min{ 8+0, 1+∞, 1+∞ } = 8
Distance vector: state information diffusion
Iterative communication and computation steps diffuse information through the network:
• t=0: c's state at t=0 is at c only
• t=1: c's state at t=0 has propagated to b, and may influence distance vector computations up to 1 hop away, i.e., at b
• t=2: c's state at t=0 may now influence distance vector computations up to 2 hops away, i.e., at b and now at a, e as well
• t=3: c's state at t=0 may influence distance vector computations up to 3 hops away, i.e., at b, a, e and now at d, f, h as well
• t=4: c's state at t=0 may influence distance vector computations up to 4 hops away, i.e., at b, a, e, d, f, h and now at g, i as well

Distance vector: link cost changes
link cost changes:
• node detects local link cost change
• updates routing info, recalculates local DV
• if DV changes, notify neighbors

(example: y's link to x drops in cost from 4 to 1; cy,z = 1, cx,z = 50)
"good news travels fast":
• t0: y detects the link-cost change, updates its DV, informs its neighbors
• t1: z receives the update from y, updates its table, computes a new least cost to x, sends its neighbors its DV
• t2: y receives z's update, updates its distance table; y's least costs do not change, so y does not send a message to z
Comparison of LS and DV algorithms

message complexity:
• LS: n routers, O(n²) messages sent
• DV: exchange between neighbors only; convergence time varies

speed of convergence:
• LS: O(n²) algorithm, O(n²) messages; may have oscillations
• DV: convergence time varies; may have routing loops; count-to-infinity problem

robustness: what happens if a router malfunctions, or is compromised?
• LS: a router can advertise an incorrect link cost; each router computes only its own table
• DV: a DV router can advertise an incorrect path cost ("I have a really low cost path to everywhere"): black-holing; each router's table is used by others, so errors propagate through the network

Routing algorithms: summary
1. Distance vector: distances to all nodes in the network are sent to neighbors. Small number of large messages.
2. Link state: costs of links to neighbors are sent to the entire network. Large number of small messages.
3. Dijkstra's algorithm is used to compute shortest paths using link state.
4. The Bellman-Ford algorithm is used to compute shortest paths using distance vectors.
5. Distance vector algorithms suffer from the count-to-infinity problem.
Making routing scalable
our routing study thus far: idealized
• all routers identical
• network "flat"
... not true in practice

scale: billions of destinations:
• can't store all destinations in routing tables!
• routing table exchange would swamp links!

administrative autonomy:
• Internet: a network of networks
• each network admin may want to control routing in its own network
Internet approach to scalable routing
aggregate routers into regions known as "autonomous systems" (ASes) (a.k.a. "domains")

intra-AS (aka "intra-domain"): routing among routers within the same AS ("network")
• all routers in an AS must run the same intra-domain protocol
• routers in different ASes can run different intra-domain routing protocols
• gateway router: at the "edge" of its own AS, has link(s) to router(s) in other ASes

inter-AS (aka "inter-domain"): routing among ASes
• gateways perform inter-domain routing (as well as intra-domain routing)

Interconnected ASes
forwarding tables are configured by both intra- and inter-AS routing algorithms:
• intra-AS routing determines entries for destinations within the AS
• inter-AS and intra-AS routing together determine entries for external destinations

(figure: AS1 (routers 1a-1d), AS2 (2a-2c), AS3 (3a-3c), each running its own intra-AS routing, connected by inter-AS routing)
Inter-AS routing: a role in intra-domain forwarding
suppose a router in AS1 receives a datagram destined outside of AS1:
• the router should forward the packet to a gateway router in AS1, but which one?

AS1 inter-domain routing must:
1. learn which destinations are reachable through AS2, and which through AS3
2. propagate this reachability info to all routers in AS1

Intra-AS routing: routing within an AS
most common intra-AS routing protocols:
• RIP: Routing Information Protocol [RFC 1723]
  – classic DV: DVs exchanged every 30 secs
  – no longer widely used
• EIGRP: Enhanced Interior Gateway Routing Protocol
  – DV based
  – formerly Cisco-proprietary for decades (became open in 2013 [RFC 7868])
• OSPF: Open Shortest Path First [RFC 2328]
  – link-state routing
  – the IS-IS protocol (ISO standard, not RFC standard) is essentially the same as OSPF
OSPF (Open Shortest Path First) routing
• "open": publicly available
• classic link-state:
  – each router floods OSPF link-state advertisements (directly over IP rather than using TCP/UDP) to all other routers in the entire AS
  – multiple link cost metrics possible: bandwidth, delay
  – each router has the full topology, uses Dijkstra's algorithm to compute its forwarding table
• security: all OSPF messages authenticated (to prevent malicious intrusion)

Hierarchical OSPF
two-level hierarchy: local area, backbone
• link-state advertisements flooded only within an area, or within the backbone
• each node has detailed area topology; it only knows the direction in which to reach other destinations

• area border routers: "summarize" distances to destinations in their own area, advertise into the backbone
• backbone router: runs OSPF, limited to the backbone
• boundary router: connects to other ASes
• local routers:
  – flood LS within their area only
  – compute routing within the area
  – forward packets to outside destinations via an area border router

(figure: backbone plus areas 1, 2, 3, each with internal routers)
Internet inter-AS routing: BGP
BGP (Border Gateway Protocol): the de facto inter-domain routing protocol
• "glue that holds the Internet together"
• allows a subnet to advertise its existence, and the destinations it can reach, to the rest of the Internet: "I am here, here is who I can reach, and how"

BGP provides each AS a means to:
• eBGP: obtain subnet reachability information from neighboring ASes
• iBGP: propagate reachability information to all AS-internal routers
• determine "good" routes to other networks based on reachability information and policy
eBGP, iBGP connections
gateway routers run both the eBGP and iBGP protocols
(figure: AS1, AS2, AS3 with eBGP connectivity between gateway routers and logical iBGP connectivity among routers within each AS)

BGP basics
BGP session: two BGP routers ("peers") exchange BGP messages over a semi-permanent TCP connection:
• advertising paths to different destination network prefixes (BGP is a "path vector" protocol)

when AS3 gateway 3a advertises path AS3,X to AS2 gateway 2c:
• AS3 promises to AS2 that it will forward datagrams towards X
(BGP advertisement: AS3, X)
Path attributes and BGP routes
BGP advertised route: prefix + attributes
• prefix: destination being advertised
• two important attributes:
  – AS-PATH: list of ASes through which the prefix advertisement has passed
  – NEXT-HOP: indicates the specific internal-AS router to the next-hop AS

policy-based routing:
• a gateway receiving a route advertisement uses an import policy to accept/decline the path (e.g., never route through AS Y)
• AS policy also determines whether to advertise the path to other neighboring ASes

BGP path advertisement
• AS2 router 2c receives path advertisement AS3,X (via eBGP) from AS3 router 3a
• based on AS2 policy, AS2 router 2c accepts path AS3,X and propagates it (via iBGP) to all other AS2 routers
• based on AS2 policy, AS2 router 2a advertises (via eBGP) path AS2,AS3,X to AS1 router 1c
BGP path advertisement (more)
a gateway router may learn about multiple paths to a destination:
• AS1 gateway router 1c learns path AS2,AS3,X from 2a
• AS1 gateway router 1c learns path AS3,X from 3a
• based on policy, AS1 gateway router 1c chooses path AS3,X and advertises that path within AS1 via iBGP

BGP messages
BGP messages are exchanged between peers over a TCP connection:
• OPEN: opens the TCP connection to the remote BGP peer and authenticates the sending BGP peer
• UPDATE: advertises a new path (or withdraws an old one)
• KEEPALIVE: keeps the connection alive in the absence of UPDATEs; also ACKs an OPEN request
• NOTIFICATION: reports errors in a previous msg; also used to close the connection
BGP path advertisement: populating forwarding tables
recall: 1a, 1b, 1d learn via iBGP from 1c: "path to X goes through 1c"
• at 1d: OSPF intra-domain routing says: to get to 1c, use interface 1
  – so at 1d: to get to X, use interface 1 (forwarding table at 1d: dest 1c → interface 1, dest X → interface 1)
• at 1a: OSPF intra-domain routing says: to get to 1c, use interface 2
  – so at 1a: to get to X, use interface 2 (forwarding table at 1a: dest 1c → interface 2, dest X → interface 2)
Why different intra- and inter-AS routing?
policy:
• inter-AS: admin wants control over how its traffic is routed, and over who routes through its network
• intra-AS: single admin, so policy is less of an issue
scale:
• hierarchical routing saves table size, reduces update traffic
performance:
• intra-AS: can focus on performance
• inter-AS: policy dominates over performance

Hot potato routing
• 2d learns (via iBGP) that it can route to X via 2a or 2c
• hot potato routing: choose the local gateway that has the least intra-domain cost (e.g., 2d chooses 2a, even though there are more AS hops to X): don't worry about inter-domain cost!
(figure: OSPF link weights 112, 201, 263 inside AS2)
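The hot potato choice is just a minimum over intra-domain costs. A sketch in Python: the dictionary layout is ours, and while the figure's OSPF weights are 112, 201, and 263, the per-gateway costs below are illustrative, not taken from the figure:

```python
def hot_potato_gateway(candidates):
    """Hot potato routing: among gateways that reach the destination, pick
    the one with the least intra-domain (e.g., OSPF) cost; the inter-domain
    portion of the path is deliberately ignored."""
    return min(candidates, key=lambda g: g["intra_cost"])

# 2d's view from the slide: X is reachable via gateway 2a or 2c
# (intra-domain costs here are illustrative)
candidates = [
    {"gateway": "2a", "intra_cost": 201, "as_path": ["AS1", "AS3", "X"]},
    {"gateway": "2c", "intra_cost": 263, "as_path": ["AS3", "X"]},
]

choice = hot_potato_gateway(candidates)
# choice["gateway"] == "2a", even though 2c's AS path to X is shorter
```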
BGP: achieving policy via advertisements
an ISP only wants to route traffic to/from its customer networks (it does not want to carry transit traffic between other ISPs: a typical "real world" policy)
• A, B, C are provider networks; x, w, y are customers (of provider networks)
• A advertises path Aw to B and to C
• B chooses not to advertise BAw to C!
  – B gets no "revenue" for routing CBAw, since none of C, A, w are B's customers
  – C does not learn about the CBAw path
• C will route CAw (not using B) to get to w

BGP: achieving policy via advertisements (more)
• x is dual-homed: attached to two provider networks
• policy to enforce: x does not want to carry traffic from B to C via x
  – so x will not advertise to B a route to C
BGP route selection
a router may learn about more than one route to a destination AS; it selects a route based on:
1. local preference value attribute: policy decision
2. shortest AS-PATH
3. closest NEXT-HOP router: hot potato routing
4. additional criteria
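This elimination sequence maps naturally onto a Python sort key. A sketch under stated assumptions: field names and the sample routes are ours, and real BGP implementations apply further tie-breakers beyond step 4:

```python
def bgp_select(routes):
    """BGP route selection sketch: highest local preference first, then
    shortest AS-PATH, then closest NEXT-HOP (hot potato, via IGP cost)."""
    return min(routes, key=lambda r: (-r["local_pref"],        # 1. policy
                                      len(r["as_path"]),       # 2. AS-PATH length
                                      r["igp_cost_to_next_hop"]))  # 3. hot potato

routes = [
    {"prefix": "X", "local_pref": 100, "as_path": ["AS2", "AS3"],
     "igp_cost_to_next_hop": 5},
    {"prefix": "X", "local_pref": 100, "as_path": ["AS3"],
     "igp_cost_to_next_hop": 9},
]

best = bgp_select(routes)
# best["as_path"] == ["AS3"]: equal local preference, so the shorter AS-PATH wins
```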
Software defined networking (SDN)
Internet network layer: historically implemented via a distributed, per-router control approach:
• monolithic router contains switching hardware, runs a proprietary implementation of Internet standard protocols (IP, RIP, IS-IS, OSPF, BGP) in a proprietary router OS (e.g., Cisco IOS)
• different "middleboxes" for different network layer functions: firewalls, load balancers, NAT boxes, ...
~2005: renewed interest in rethinking the network control plane

Why programmability?
• reduces the cost of services (cost-to-service)
• more flexibility (time to service)
• our scalability metric

Per-router control plane
Individual routing algorithm components in each and every router interact in the control plane to compute forwarding tables.
Software-Defined Networking (SDN) control plane
A remote controller computes and installs forwarding tables in routers.

Why a logically centralized control plane?
• easier network management: avoid router misconfigurations, greater flexibility of traffic flows
• table-based forwarding (recall the OpenFlow API) allows "programming" of routers
  – centralized "programming" is easier: compute tables centrally and distribute
  – distributed "programming" is more difficult: compute tables as the result of a distributed algorithm (protocol) implemented in each and every router
• open (non-proprietary) implementation of the control plane
  – fosters innovation: "let 1000 flowers bloom"
SDN analogy: mainframe to PC revolution
• before: vertically integrated, closed, proprietary; slow innovation, small industry: specialized applications on a specialized operating system on specialized hardware
• after: open interfaces: apps running on Windows, Linux, or Mac OS, over a commodity microprocessor: horizontal, open interfaces, rapid innovation, huge industry
(* slide courtesy: N. McKeown)

Traffic engineering: difficult with traditional routing
Q: what if the network operator wants u-to-z traffic to flow along uvwz, rather than uxyz?
A: need to redefine link weights so the routing algorithm computes routes accordingly (or need a new routing algorithm)!
link weights are the only control "knobs": not much control!
Traffic engineering: difficult with traditional routing (continued)
Q: what if the network operator wants to split u-to-z traffic along uvwz and uxyz (load balancing)?
A: can't do it (or need a new routing algorithm)

Q: what if w wants to route blue and red traffic differently from w to z?
A: can't do it (with destination-based forwarding, and LS, DV routing)

We learned in Chapter 4 that generalized forwarding and SDN can be used to achieve any routing desired.
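A toy sketch of what generalized (match-plus-action) forwarding buys here: a flow table at w that matches on source as well as destination, so blue and red traffic bound for z leave on different links. All names are hypothetical, and this is OpenFlow-style in spirit only, not the OpenFlow API:

```python
# hypothetical flow table for router w: match on (source, destination) rather
# than destination alone, so traffic from different sources to z is split
flow_table_w = [
    {"match": {"src": "blue_net", "dst": "z"}, "action": ("forward", "link_w_z")},
    {"match": {"src": "red_net",  "dst": "z"}, "action": ("forward", "link_w_y")},
    {"match": {},                              "action": ("drop", None)},  # table-miss entry
]

def apply_flow_table(table, pkt):
    """Return the action of the first entry whose match fields all equal pkt's
    header fields; an empty match dict matches every packet."""
    for entry in table:
        if all(pkt.get(field) == value for field, value in entry["match"].items()):
            return entry["action"]
```

With destination-based forwarding, both packets below would necessarily take the same output link; here they diverge:

```python
apply_flow_table(flow_table_w, {"src": "blue_net", "dst": "z"})  # ("forward", "link_w_z")
apply_flow_table(flow_table_w, {"src": "red_net",  "dst": "z"})  # ("forward", "link_w_y")
```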
Software defined networking (SDN): main components
1. generalized "flow-based" forwarding (e.g., OpenFlow)
   • data-plane switches: fast, simple, commodity switches implementing generalized data-plane forwarding (Section 4.4) in hardware
   • flow (forwarding) tables computed, installed under controller supervision
2. control, data plane separation
   • API for table-based switch control (e.g., OpenFlow): defines what is controllable, and what is not
   • protocol for communicating with the data plane (e.g., OpenFlow): the southbound API
3. control plane functions external to data-plane switches: SDN controller (network operating system), exposing a northbound API to applications
4. programmable control applications: routing, access control, load balance, ...
Software defined networking (SDN)
SDN controller (network OS):
• maintains network state information
• implemented as a distributed system, for fault tolerance and robustness

network-control apps:
• "brains" of control: implement control functions (routing, access control, load balance, ...) atop the SDN-controlled switches
Components of SDN controller
• interface layer to network-control apps: abstractions API (network graph, RESTful API, intent, ...)
• network-wide state management: state of network links, switches, services: a distributed database (link-state info, host info, switch info, statistics, flow tables, ...)
• communication: communicate between the SDN controller and controlled devices (OpenFlow, SNMP, ...)

OpenFlow protocol
• operates between controller and switch
• TCP used to exchange messages (optional encryption)
• three classes of OpenFlow messages:
  – controller-to-switch
  – asynchronous (switch to controller)
  – symmetric (misc.)
• distinct from the OpenFlow API, which is used to specify generalized forwarding actions
OpenFlow: controller-to-switch messages
Key controller-to-switch messages:
• features: controller queries switch features, switch replies
• configure: controller queries/sets switch configuration parameters
• modify-state: add, delete, modify flow entries in the OpenFlow tables
• packet-out: controller can send this packet out of a specific switch port

OpenFlow: switch-to-controller messages
Key switch-to-controller messages:
• packet-in: transfer packet (and its control) to the controller; see the packet-out message from the controller
• flow-removed: flow table entry deleted at the switch
• port status: inform the controller of a change on a port

Fortunately, network operators don't "program" switches by creating/sending OpenFlow messages directly. Instead, they use higher-level abstractions at the controller.
SDN: control/data plane interaction example
1. S1, experiencing link failure, uses the OpenFlow port status message to notify the controller
2. the SDN controller receives the OpenFlow message, updates link status info
3. Dijkstra's routing algorithm application has previously registered to be called whenever link status changes; it is called
4. Dijkstra's routing algorithm accesses network graph info and link state info in the controller, computes new routes
5. the link state routing app interacts with the flow-table-computation component in the SDN controller, which computes the new flow tables needed
6. the controller uses OpenFlow to install new tables in the switches that need updating
OpenDaylight (ODL) controller

(Figure: ODL layered architecture.)
• network applications (Traffic Engineering, Firewalling, Load Balancing, ...): control apps separate from controller
• Northbound API: REST/RESTCONF/NETCONF APIs
• Basic Network Functions: topology processing, switch mgr., stats mgr., forwarding rules mgr., host tracker, ...
• Enhanced Services: AAA, ...
• Service Abstraction Layer (SAL): interconnects internal and external applications and services; config. and operational data store
• Southbound API: OpenFlow, NETCONF, SNMP, OVSDB

ONOS controller

(Figure: ONOS layered architecture.)
• Network Orchestrations and Applications (Traffic Engineering, Firewalling, Load Balancing, ...)
• Northbound API: REST API, Intent
   • intent framework: high-level specification of service: what rather than how
• ONOS distributed core: hosts, paths, flow rules, topology; devices, links, statistics
   • considerable emphasis on distributed core: service reliability, replication, performance scaling
• Southbound API: device, link, host, flow, packet abstractions; OpenFlow, Netconf, OVSDB protocols
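ONOS's intent framework ("what rather than how") can be illustrated with a toy compiler that turns a connectivity intent into per-switch forwarding rules. The function names, link list, and rule format are invented for this sketch and are not the ONOS API.

```python
from collections import deque

def shortest_path(links, src, dst):
    """BFS over an undirected link list; returns a node path or None."""
    adj = {}
    for a, b in links:
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    prev, q = {src: None}, deque([src])
    while q:
        u = q.popleft()
        if u == dst:                      # reconstruct path back to src
            path = []
            while u is not None:
                path.append(u); u = prev[u]
            return path[::-1]
        for v in adj.get(u, []):
            if v not in prev:
                prev[v] = u; q.append(v)
    return None

def compile_intent(links, intent):
    """Turn the 'what' (connect two hosts) into the 'how':
    one match/forward rule per switch along a computed path."""
    path = shortest_path(links, intent["from"], intent["to"])
    if path is None:
        return []
    hops = path[1:-1]   # install rules only on the switches, not the end hosts
    return [{"switch": sw, "match": intent["to"], "forward_to": nxt}
            for sw, nxt in zip(hops, path[2:])]

# Hypothetical topology: h1 - s1 - s2 - h2, plus an unused branch to s3.
links = [("h1", "s1"), ("s1", "s2"), ("s2", "h2"), ("s1", "s3")]
rules = compile_intent(links, {"from": "h1", "to": "h2"})
```

The application states only the endpoints; path selection and rule generation are the controller's job, which is the point of the intent abstraction.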
SDN: selected challenges

• hardening the control plane: dependable, reliable, performance-scalable, secure distributed system
   • robustness to failures: leverage strong theory of reliable distributed systems for control plane
   • dependability, security: "baked in" from day one?
• networks, protocols meeting mission-specific requirements
   • e.g., real-time, ultra-reliable, ultra-secure
• Internet-scaling: beyond a single AS
• SDN critical in 5G cellular networks

SDN and the future of traditional network protocols

• SDN-computed versus router-computed forwarding tables: just one example of logically-centralized versus protocol-based computation
• one could imagine SDN-computed congestion control: controller sets sender rates based on router-reported (to controller) congestion levels
• how will implementation of network functionality (SDN versus protocols) evolve?
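The "SDN-computed congestion control" idea can be made concrete with a classic max-min fair allocation: given router-reported link capacities and which senders cross each link, a hypothetical controller computes per-sender rates. The progressive-filling algorithm below is a standard textbook technique, not something specified on the slides; link names, capacities, and sender sets are made up.

```python
def max_min_fair(link_capacity, senders_on_link):
    """Max-min fair rates via progressive filling.
    link_capacity: {link: capacity}
    senders_on_link: {link: iterable of senders crossing that link}"""
    rates = {}
    cap = dict(link_capacity)
    on_link = {l: set(s) for l, s in senders_on_link.items()}
    while any(on_link.values()):
        # fair share on every link that still carries unfixed senders
        share = {l: cap[l] / len(s) for l, s in on_link.items() if s}
        bottleneck = min(share, key=share.get)
        r = share[bottleneck]
        for sender in list(on_link[bottleneck]):
            rates[sender] = r              # sender's rate is now fixed
            for l, members in on_link.items():
                if sender in members:      # on every link it crosses,
                    members.remove(sender) # stop counting it and
                    cap[l] -= r            # reclaim its share of capacity
    return rates

# Hypothetical two-link report: y crosses both links, L2 is the bottleneck.
rates = max_min_fair({"L1": 10, "L2": 6}, {"L1": ["x", "y"], "L2": ["y", "z"]})
```

Senders y and z split the 6-unit bottleneck; x then gets everything L1 has left, illustrating how a logically centralized view can allocate rates that no single router could compute alone.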
Network layer: "control plane" roadmap

• introduction
• routing protocols
• intra-ISP routing: OSPF
• routing among ISPs: BGP
• SDN control plane
• Internet Control Message Protocol
• network management, configuration: SNMP, NETCONF/YANG

ICMP: internet control message protocol

• used by hosts and routers to communicate network-level information
   • error reporting: unreachable host, network, port, protocol
   • echo request/reply (used by ping)
• network-layer "above" IP: ICMP messages carried in IP datagrams
• ICMP message: type, code plus first 8 bytes of IP datagram causing error

Type  Code  Description
 0     0    echo reply (ping)
 3     0    dest. network unreachable
 3     1    dest. host unreachable
 3     2    dest. protocol unreachable
 3     3    dest. port unreachable
 3     6    dest. network unknown
 3     7    dest. host unknown
 4     0    source quench (congestion control - not used)
 8     0    echo request (ping)
 9     0    route advertisement
10     0    router discovery
11     0    TTL expired
12     0    bad IP header
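The "type, code" structure is easy to see in bytes: the sketch below packs an ICMP echo request (type 8, code 0) with the standard Internet checksum. The field layout follows RFC 792; the function names are our own.

```python
import struct

def internet_checksum(data: bytes) -> int:
    """16-bit one's-complement sum of 16-bit words, then complemented."""
    if len(data) % 2:
        data += b"\x00"                       # pad odd-length data
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:                        # fold carries back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def build_echo_request(ident: int, seq: int, payload: bytes) -> bytes:
    """ICMP echo request: type=8, code=0, checksum, identifier, sequence."""
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)
    csum = internet_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, csum, ident, seq) + payload

pkt = build_echo_request(0x1234, 1, b"ping!")
icmp_type, icmp_code = pkt[0], pkt[1]         # type 8, code 0
```

A receiver verifies the packet by checksumming the whole message, checksum field included; a well-formed packet sums to zero.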
Traceroute and ICMP

• source sends sets of UDP segments to destination
   • 1st set has TTL=1, 2nd set has TTL=2, etc.
   • 3 probes per set
• datagram in nth set arrives at nth router:
   • router discards datagram and sends source ICMP message (type 11, code 0)
   • ICMP message possibly includes name of router & IP address
• when ICMP message arrives at source: record RTTs
• stopping criteria:
   • UDP segment eventually arrives at destination host
   • destination returns ICMP "port unreachable" message (type 3, code 3)
   • source stops

Network layer: "control plane" roadmap

• introduction
• routing protocols
• intra-ISP routing: OSPF
• routing among ISPs: BGP
• SDN control plane
• Internet Control Message Protocol
• network management, configuration: SNMP, NETCONF/YANG
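Traceroute's TTL-stepping and stopping logic can be simulated without touching the network: a made-up path of routers replies with ICMP type 11, code 0 at each hop, and the destination host replies with type 3, code 3.

```python
def simulate_traceroute(path, destination):
    """Walk TTL=1,2,... over a hypothetical router path; return the
    (hop_address, icmp_type, icmp_code) triples a real traceroute would log.
    path: routers in order; destination: the final host."""
    hops = []
    for ttl in range(1, len(path) + 2):
        if ttl <= len(path):
            # datagram dies at the ttl-th router: TTL expired (type 11, code 0)
            hops.append((path[ttl - 1], 11, 0))
        else:
            # UDP segment reaches the host: port unreachable (type 3, code 3)
            hops.append((destination, 3, 3))
            break   # stopping criterion: source stops on type 3, code 3
    return hops

trace = simulate_traceroute(["r1", "r2", "r3"], "host")
```

A real traceroute additionally sends 3 probes per TTL and records the round-trip time of each ICMP reply; only the control logic is shown here.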
What is network management?

Traffic on network = Data + Control + Management
• Data: bytes/messages sent by users
• Control: bytes/messages added by the system to properly transfer the data (e.g., routing messages)
• Management: optional messages to ensure that the network functions properly and to handle issues arising from malfunction of any component

If all components function properly, control is still required but management is optional.

Examples:
• detecting failures of an interface card at a host or a router
• monitoring traffic to aid in resource deployment
• intrusion detection

Autonomous systems (aka "networks") comprise 1000s of interacting hardware/software components. Other complex systems requiring monitoring, configuration, and control: jet airplane, nuclear power plant, others?

"Network management includes the deployment, integration and coordination of the hardware, software, and human elements to monitor, test, poll, configure, analyze, evaluate, and control the network and element resources to meet the real-time, operational performance, and Quality of Service requirements at a reasonable cost."
Components of network management (FCAPS)

1. Fault management: detect, log, and respond to fault conditions
2. Configuration management: track and control which devices are on or off
3. Accounting management: monitor resource usage for records and billing
4. Performance management: measure, report, analyze, and control traffic, messages
5. Security management: enforce policy for access control, authentication, and authorization

How is the network managed?

• Management = initialization, monitoring, control
• key elements: manager, agents, and the Management Information Base (MIB)
Components of network management

• Managing server: application, typically with network managers (humans) in the loop, running in a managing server/controller
• Managed device: equipment with manageable, configurable hardware and software components
• Agent: process in each managed device that holds and reports the device's data
• Data: device "state": configuration data, operational data, device statistics
• Network management protocol: used by managing server to query, configure, and manage a device; used by devices to inform the managing server of exceptional events
Network operator approaches to management

• CLI (Command Line Interface): operator issues commands (types, scripts) directly to individual devices (e.g., via ssh)
• SNMP/MIB: operator queries/sets device data (MIB) using the Simple Network Management Protocol (SNMP)
• NETCONF/YANG: more abstract, network-wide, holistic
   • emphasis on multi-device configuration management
   • YANG: data modeling language
   • NETCONF: communicate YANG-compatible actions and data to/from/among remote devices

SNMP protocol

Two ways to convey MIB info and commands:
• request/response mode: managing server sends a request to the agent in a managed device; the agent returns a response with the requested data
• trap mode: agent sends an unsolicited trap message to the managing server, informing it of a change in device state
SNMP protocol: message types

Message type                                  Function
GetRequest, GetNextRequest, GetBulkRequest    manager-to-agent: "get me data" (a data instance, the next data item in a list, a block of data)

SNMP protocol: message formats

Get/set header: PDU type (0-3), Request ID, Error Status (0-5), Error Index
Variables to get/set: Name, Value, Name, Value, ...
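The get/set header and the GetNextRequest semantics can be sketched together: a plain record with the header fields shown above, and a get-next that walks a MIB-like table in lexicographic OID order, which is how SNMP traverses tables. A real PDU is ASN.1/BER-encoded on the wire; this record, the dict-based MIB, and the sample counter values are all illustrative.

```python
from dataclasses import dataclass

@dataclass
class SnmpPdu:
    """Logical layout of the get/set header (not the wire encoding)."""
    pdu_type: int       # 0-3
    request_id: int
    error_status: int   # 0-5
    error_index: int
    variables: list     # [(name, value), ...] pairs to get/set

def get_next(mib, oid):
    """GetNextRequest semantics: return the first MIB entry whose OID
    lexicographically follows the given one, or None at the end."""
    following = sorted(k for k in mib if k > oid)
    return (following[0], mib[following[0]]) if following else None

# A few UDP-group OIDs; counter values are made-up samples.
mib = {
    (1, 3, 6, 1, 2, 1, 7, 1): 42,   # udpInDatagrams
    (1, 3, 6, 1, 2, 1, 7, 2): 3,    # udpNoPorts
    (1, 3, 6, 1, 2, 1, 7, 4): 17,   # udpOutDatagrams
}

oid, value = get_next(mib, (1, 3, 6, 1, 2, 1, 7, 2))
```

Repeatedly issuing get-next starting from a table's root OID is exactly how a manager "walks" a MIB without knowing its contents in advance.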
SNMP: Management Information Base (MIB)

• managed device's operational (and some configuration) data gathered into device MIB module
• 400 MIB modules defined in RFCs; many more vendor-specific MIBs
• Structure of Management Information (SMI): data definition language

Example MIB variables for the UDP protocol:

Object ID        Name             Type            Comments
1.3.6.1.2.1.7.1  UDPInDatagrams   32-bit counter  total # datagrams delivered
1.3.6.1.2.1.7.2  UDPNoPorts       32-bit counter  # undeliverable datagrams (no application at port)
1.3.6.1.2.1.7.3  UDPInErrors      32-bit counter  # undeliverable datagrams (all other reasons)
1.3.6.1.2.1.7.4  UDPOutDatagrams  32-bit counter  total # datagrams sent
1.3.6.1.2.1.7.5  udpTable         SEQUENCE        one entry for each port currently in use

Network layer: summary

We've learned a lot!
• approaches to the network control plane
   • per-router control (traditional)
   • logically centralized control (software defined networking)
• traditional routing algorithms: implementation in the Internet: OSPF, BGP
• SDN controllers: implementation in practice: ODL, ONOS
• Internet Control Message Protocol
• network management

next stop: link layer!
Network layer, control plane: Done!

• introduction
• routing protocols
   • link state
   • distance vector
• intra-ISP routing: OSPF
• routing among ISPs: BGP
• SDN control plane
• Internet Control Message Protocol
• network management, configuration: SNMP, NETCONF/YANG