DAY ONE: CLOUD NATIVE ROUTING WITH cRPD

By Hitesh Mali, Melchior Aelmans, Arijit Paul, Vinay K Nallamothu, Praveena Kodali, Valentijn Flik, and Michel Tepper

Containerized RPD allows you to use the features of Junos® within today's modern software stack. Learn how with this fast-paced book of cRPD examples and tips.

This book focuses on the containerized Routing Protocol Daemon (cRPD) and covers its characteristics, advantages, use cases, and installation, while also providing some perspective on containerized networking. Test out the trial version and follow along: https://www.juniper.net/us/en/dm/crpd-trial/.

"Containerized RPD brings the scalability and stability of Junos that was not previously seen into the modern software stack. It truly bridges the gap between software engineering and networking. Best of all, there is no need to reinvest in tools and monitoring systems. cRPD opens the door for creative system designs, and many thanks to the Juniper engineering team for making this happen so seamlessly."
- Kai Ren, Comcast

"This power-packed book provides many great examples that are easy to follow and that can help you accelerate your transition towards cloud native networking. The authors provide an excellent overview of the concepts. Juniper's RPD is a world-class product and it's great to see how easily it can be used in cloud environments. Above all, it was great fun to read and it inspired me for future projects."
- Andree Toonk, Cloud Infrastructure Architect

"This Day One book gives meaningful and easy-to-understand insights into cRPD. The power of cRPD is its flexibility, which enables a variety of use cases across a range of customer segments, thanks to its Junos OS routing stack, automation and manageability stack, and its ability to program third-party data planes. The authors present complex concepts in easily understandable tutorials and I highly recommend it for anyone who wants to get started with this powerful Juniper technology."
- Dr. Martin Horneffer, Squad Lead Internet Backbone Architecture, Tier 1 Operator

ISBN: 978-1-7363160-5-4

Juniper Networks Books are focused on network reliability and efficiency.
© 2021 by Juniper Networks, Inc. All rights reserved. Juniper Networks and Junos are registered trademarks of Juniper Networks, Inc. in the United States and other countries. The Juniper Networks Logo and the Junos logo are trademarks of Juniper Networks, Inc. All other trademarks, service marks, registered trademarks, or registered service marks are the property of their respective owners. Juniper Networks assumes no responsibility for any inaccuracies in this document. Juniper Networks reserves the right to change, modify, transfer, or otherwise revise this publication without notice.

Published by Juniper Networks Books
Authors: Hitesh Mali, Melchior Aelmans, Arijit Paul, Vinay K Nallamothu, Praveena Kodali, Valentijn Flik, and Michel Tepper
Technical Reviewers: Marcel Wiget, Qasim Arham
Editor in Chief: Patrick Ames
Copyeditor: Nancy Koerbel

ISBN Paper: 978-1-7363160-5-4
ISBN Ebook: 978-1-7363160-4-7
Version History: v1, August 2021

About the Authors

Hitesh Mali is a Consulting Engineer at Juniper. He enjoys working on various emerging networking technologies and has spent the last decade helping Cable MSOs (Multiple System Operators) build their network solutions.

Melchior Aelmans is a Consulting Engineer at Juniper Networks, where he has been working with many operators on the design and evolution of their networks. He has over 15 years of experience in various operations and engineering positions with Cloud Providers, Data Centers, and Service Providers. Melchior enjoys evangelizing and discussing routing protocols, routing security, internet routing and peering, and data center architectures. He also participates in IETF and RIPE, is a regular attendee and presenter at conferences and meetings, is a member of the NANOG Program Committee, and a board member at the NLNOG foundation.

Arijit Paul is a DevOps and Cloud Solution engineer at Juniper Networks. He is passionate about network automation, open source, and cloud native technologies, and believes in building robust network solutions by consuming them. He has gained a wide range of experience in working with various networking technologies over the past two decades.

Vinay K Nallamothu is a Principal Engineer on Juniper's routing technology team, currently working in the areas of SDN, disaggregation, and cloud networking. He is closely involved in the design and implementation of cRPD. In the past, he worked on multicast routing, VPN, and security technologies. He enjoys working with customers to build robust, scalable, and programmable networks. He has been with Juniper Networks since 2005.

Praveena Kodali is an Information Development Engineer at Juniper Networks. Since 2018 she has been creating, writing, and developing technical documentation for networking applications with a focus on physical, virtualized, and containerized security and routing products.

Valentijn Flik is a Network/System architect at VPRO Broadcasting Company, where he has been working on the design and evolution of their networks. He has over 15 years of experience in various operations and network/systems engineering. Always interested in new and emerging network techniques and designs, he likes to visualize everything in Visio.

Michel Tepper is a Solution Architect for Nuvias in the Netherlands. He has also been a Juniper instructor since 2006, and a Juniper Ambassador, holding a number of Juniper certifications for different tracks.

Acknowledgments

The authors would like to thank Manish Gupta, Marcel Wiget, Qasim Arham, Rick Mur, and Patrick Ames for their valuable contributions, content, input, software, comments, and mental support.
Introduction to cRPD
A data center used to be a small room in an office building where a few servers that
hosted business applications were placed. Switches connected those servers togeth-
er in a Layer 2 domain, routers forwarded traffic to other networks, and firewalls
ensured that no traffic left or entered the network unless a policy determined
otherwise.
Later, the dependency on these applications increased as companies went through
digital transformation. To ensure better housing conditions—including
cooling, redundant power, and physical security—these systems were placed at
co-location facilities or in purpose-built rooms. A business could lose money every
second the network was not available. This begot a new generation of data center
networking hardware that introduced switches upgraded with Layer 3 routing fea-
tures able to accommodate more traffic, and switch and route that traffic at higher
speeds.
With the introduction of public cloud services like Amazon Web Services (AWS),
Microsoft Azure, Google GCP, Alibaba Cloud, and many others; data center net-
working now involves much more than just connecting a few routers and switches
together. At the same time, the scale of local (or co-located) data centers—now
referred to as private clouds—grew tremendously, and data center networking
designs had to change completely to accommodate the staggering growth of re-
quired resources. The biggest change in designing a data center network was that
these networks were growing too big to keep supporting stretched Layer 2 net-
works. It didn’t mean that Layer 2 networking was going away, but it no longer
provided the right foundation on which to build the data center infrastructure.
Now switching hardware is capable of IP routing at line-rate speeds. Today’s
modern data center infrastructure is based on Layer 3-routed IP networks and not
on Layer 2 VLANs.
As applications may still require Layer 2 connectivity (VMware VMotion and
other VM live migration technologies have long been the dominant use case for
this requirement), the option still needs to be available. The ability to run applica-
tions other than traffic forwarding on the network, the desire to create smaller
network segments to secure resources inside the data center and enable highly
scalable multi-tenancy (one very important requirement for any cloud type de-
ployment), and the requirement to build massively scalable data centers has
sparked the need for a new type of data center networking.
One of the defining concepts behind software-defined networking (SDN) is sepa-
rating the control plane from the data plane, thus separating traffic forwarding
from higher level connectivity requirements, ensuring that the data center net-
work transforms into a highly-scalable fabric architecture that supports many
different applications to run on top of it. We typically refer to this as the underlay
(the fabric) and the overlay (the applications).
With the advent of this evolution of SDN, vendors like Juniper Networks started
to disaggregate hardware and software. Previously these were combined, and one
would buy a box, a router, or a switch with software running on it. More or less
by definition they couldn’t be used separately, and software from vendor A
couldn’t run on vendor B’s box. Today, those purchasing networking equipment
have a wide variety of options to choose from with regard to hardware and its
control plane, the software. It’s not uncommon to run SONiC on a switch and
replace the routing stack in SONiC with that from another vendor. This way op-
erators can mix and match and tailor solutions to fit their networking demands.
This book focuses on the containerized routing protocol daemon (cRPD); it covers its
characteristics, advantages, use cases, and installation, and gives some perspective
on containerized networking. If you get as enthusiastic about cRPD as we did, we
invite you to test it out yourself. A cRPD trial version is free to download via:
https://www.juniper.net/us/en/dm/crpd-trial/.
Introduction to Linux Routing
Netlink
As stated, Netlink is a Linux kernel interface used for communication both between
the kernel and userspace processes, and between different userspace processes;
cRPD is an example of such a userspace process. Netlink communication by itself
cannot traverse host boundaries.
Netlink provides a standard socket-based interface for userspace processes, and a
kernel-side API for internal use by kernel modules.
When multiple cRPD instances are running on a system in the same namespace,
the containers are connected to the host network stack through bridges: the host
interfaces attach to a bridge, each cRPD container attaches to a bridge, and multiple
containers connected to the same bridge can communicate with one another. See
Figure 1.1 for reference.
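As a rough illustration of that bridging model, the commands below create a user-defined Docker bridge and attach two cRPD containers to it so they can reach each other and the host stack. This is a minimal sketch; the image tag and subnet are placeholders rather than values taken from this book's lab:
# docker network create --driver bridge --subnet 172.30.0.0/24 crpd-br0
# docker run --rm --detach --name crpd01 --privileged --network crpd-br0 -it crpd:latest
# docker run --rm --detach --name crpd02 --privileged --network crpd-br0 -it crpd:latest
# docker exec -it crpd01 cli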
Further Reading
Here are some great resources that take you deeper into the working of routing on
the Linux stack and Juniper Networks cRPD:
https://www.techrepublic.com/article/understand-the-basics-of-linux-routing/
https://opensource.com/business/16/8/introduction-linux-network-routing
https://www.linuxjournal.com/article/7356
https://www.juniper.net/documentation/us/en/software/crpd/crpd-deployment/topics/concept/understanding-crpd.html
https://www.juniper.net/documentation/us/en/software/crpd/crpd-deployment/index.html
https://containerlab.srlinux.dev/manual/kinds/crpd/
https://iosonounrouter.wordpress.com/2020/11/24/transform-your-server-into-a-junos-powered-router-with-crpd/
Underlay and Overlay Networking
Underlay Networking
The underlay network is a physical infrastructure over which an overlay network
is built. In data center environments, the role of the physical underlay network is
to provide Unicast IP connectivity from any physical device (server, storage device,
router, or switch) to any other physical device.
As the underlying network responsible for delivery of packets across networks, an
ideal underlay network provides low-latency, non-blocking, high-bandwidth con-
nectivity from any point in the network to any other point in the network.
Overlay Networking
Overlay networking, often referred to as SDN, is a method of using software to
create layers of network abstraction that can be used to run multiple, separate, dis-
crete, and virtualized network layers on top of the physical network, often provid-
ing new application and security benefits.
All nodes in an overlay network are connected to one another by means of logical
or virtual links, and each of these links corresponds to a path in the underlying
network.
The purpose of the overlay network is to add functionality without a complete
network redesign. Overlay network protocols include Virtual Extensible LAN
(VXLAN), Network Virtualization using Generic Encapsulation (NVGRE), ge-
neric routing encapsulation (GRE), IP multicast, network virtualization overlays 3
(NVO3), MPLSoUDP, MPLSoGRE, SRoMPLS, and SRv6.
Common examples of an overlay network are distributed systems such as peer-to-
peer networks and client-server applications, because their nodes run on top of the
internet.
The nodes of the overlay network are interconnected using virtual or logical links,
which form an overlay topology, although these nodes may be connected through
physical links in the underlying networks. From an application perspective the
overlay network ‘looks and feels’ as if it has a direct connection. In other words,
two nodes may be connected with a logical connection (overlay tunnel) despite
being several hops apart in the underlying network.
The overlay network interconnects all the application nodes and provides the con-
nectivity between them.
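To make the underlay/overlay split concrete, here is a minimal sketch of an overlay link built with plain iproute2, assuming two hosts whose underlay addresses are 192.0.2.1 and 192.0.2.2 (placeholder values, not taken from this book's labs). On the first host:
# Create a VXLAN interface riding on the underlay interface eth0
ip link add vxlan100 type vxlan id 100 local 192.0.2.1 remote 192.0.2.2 dstport 4789 dev eth0
# Give the overlay link its own addressing and bring it up
ip addr add 10.100.0.1/24 dev vxlan100
ip link set vxlan100 up
The 10.100.0.0/24 addresses exist only in the overlay; the underlay simply sees UDP port 4789 packets exchanged between 192.0.2.1 and 192.0.2.2.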
Table 1.1 quickly compares underlay and overlay networks.

Table 1.1   Underlay Versus Overlay Networks

                       Underlay Network                               Overlay Network
Traffic Flow           Transmits packets which traverse over          Transmits packets only along the virtual
                       network devices like switches and routers.     links between the overlay nodes.
Multipath Forwarding   Less scalable options of multipath             Support for multipath forwarding within
                       forwarding.                                    virtual networks.
MORE? This article published in the Internet Protocol Journal dives much
deeper into the different protocols used in data center networking and the authors
highly recommend it: https://ipj.dreamhosters.com/wp-content/uploads/2020/09/232-ipj-2.pdf.
Chapter 2
Junos cRPD
Figure 2.1 cRPD Architecture Components and How They Are Linked Together
BGP-LU
BGP Flowspec
Multi-topology routing
Kubernetes integration
NOTE The list above shows features supported in Junos cRPD as of the 20.4R1 release;
additional features will be added in later Junos cRPD releases.
This chapter shows you how to install Docker on different operating systems, how
to set up Docker networks, and how to spin up containers on the Docker platform.
Docker Installation
Let’s install Docker on a couple of different platforms.
Ubuntu 18.10 (Cosmic)
Docker CE is supported on x86_64 (or amd64), armhf, arm64, s390x (IBM Z),
and ppc64le (IBM Power) architectures. For more details, see https://docs.docker.
com/engine/install/ubuntu/.
To start, uninstall older versions of Docker named docker, docker.io, or docker-
engine. This can be done by running:
$ sudo apt-get remove docker docker-engine docker.io containerd runc
4. Verify that you now have the key with the fingerprint 9DC8 5822 9FC7 DD38
854A E2D8 8D81 803C 0EBF CD88, by searching for the last eight characters of
the fingerprint:
$ sudo apt-key fingerprint 0EBFCD88
pub rsa4096 2017-02-22 [SCEA] 9DC8 5822 9FC7 DD38 854A E2D8 8D81 803C 0EBF CD88
uid [ unknown] Docker Release (CE deb) <docker@docker.com>
sub rsa4096 2017-02-22 [S]
5. Use the following command to set up the stable repository. For x86_64/amd64,
you can run:
$ sudo add-apt-repository \
> "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
> $(lsb_release -cs) \
> stable"
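6. The remaining steps follow the standard Docker documentation; the usual final commands to install the engine from that repository are:
$ sudo apt-get update
$ sudo apt-get install docker-ce docker-ce-cli containerd.io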
Verification
After installing Docker on any OS, you can check the installed Docker version by
running the docker -v command at the terminal:
# docker -v
Docker version 19.03.2, build 6a30dfc
To remove an image:
# docker rmi [OPTIONS] IMAGE [IMAGE...]
Container-Related Commands
To create a container but not to start it:
# docker create [OPTIONS] IMAGE [COMMAND] [ARG...]
By default, the container stops after the command has finished execution. For
interactive processes (like a shell), you must use -i and -t together to allocate a
tty for the container process. To start a container in detached mode, use the -d
option. Detached mode means the container starts up and runs in the
background.
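For example (using busybox purely as a placeholder image), an interactive container and a detached container would be started like this:
# docker run -it --name demo-shell busybox sh
# docker run -d --name demo-bg busybox sleep 3600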
To list all Docker containers:
# docker ps -a
To stop a container:
# docker stop [OPTIONS] CONTAINER [CONTAINER...]
To delete a container:
# docker rm [OPTIONS] CONTAINER [CONTAINER...]
Network-Related Commands
To create a network:
# docker network create [OPTIONS] NETWORK
To remove a network:
# docker network rm NETWORK [NETWORK...]
The minimum system requirements for running a cRPD instance are:
1 CPU core
2 GB Memory
10 GB hard drive
1 Ethernet port
Download the cRPD software using the Juniper Docker Registry. Follow the in-
structions listed here: https://www.juniper.net/documentation/us/en/software/crpd21.1/crpd-deployment/topics/task/crpd-linux-server-install.html.
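As a rough sketch of what those instructions boil down to, the image is pulled from the Juniper registry and started as a privileged container with persistent volumes for configuration and logs. The registry path and release tag below are assumptions; substitute the values tied to your Juniper account and the release you downloaded:
# docker login hub.juniper.net
# docker pull hub.juniper.net/routing/crpd:21.1R1.11
# docker volume create crpd01-config
# docker volume create crpd01-varlog
# docker run --rm --detach --name crpd01 -h crpd01 --privileged --net=bridge \
    -v crpd01-config:/config -v crpd01-varlog:/var/log -it hub.juniper.net/routing/crpd:21.1R1.11
# docker exec -it crpd01 cli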
root@crpd01> configure
Entering configuration mode
[edit]
root@crpd01# show
## Last changed: 2019-02-13 19:28:26 UTC
version "19.2I20190125_1733_rbu-builder [rbu-builder]";
system {
root-authentication {
encrypted-password "$6$JEc/p$QOUpqi2ew4tVJNKXZYiCKT8CjnlP3SLu16BRIxvtz0CyBMc57WGu2oCyg/
lTr0iR8oJMDumtEKi0HVo2NNFEJ."; ## SECRET-DATA
}
}
routing-options {
forwarding-table {
export pplb;
}
router-id 90.90.90.20;
autonomous-system 100;
}
protocols {
bgp {
group test {
type internal;
local-address 90.90.90.20;
family inet {
unicast;
}
neighbor 90.90.90.10 {
bfd-liveness-detection {
minimum-interval 100;
}
}
neighbor 90.90.90.30 {
bfd-liveness-detection {
minimum-interval 100;
}
}
}
group test6 {
type internal;
local-address abcd::90:90:90:20;
family inet6 {
unicast;
}
neighbor abcd::90:90:90:10 {
bfd-liveness-detection {
minimum-interval 100;
}
}
neighbor abcd::90:90:90:30 {
bfd-liveness-detection {
minimum-interval 100;
}
}
}
}
isis {
level 1 disable;
interface all {
family inet {
bfd-liveness-detection {
minimum-interval 100;
}
}
family inet6 {
bfd-liveness-detection {
minimum-interval 100;
}
}
}
}
ospf {
area 0.0.0.0 {
interface all {
bfd-liveness-detection {
minimum-interval 100;
}
}
}
}
ospf3 {
area 0.0.0.0 {
interface all {
bfd-liveness-detection {
minimum-interval 100;
}
}
}
}
}
policy-options {
policy-statement pplb {
then {
load-balance per-packet;
}
}
}
interfaces {
lo0 {
unit 0 {
family iso {
address 49.0005.1111.2222.3333.00;
}
}
}
}
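Once this configuration is committed, the protocol and BFD session state can be verified from the cRPD CLI with the usual Junos operational commands, for example:
root@crpd01# run show bgp summary
root@crpd01# run show isis adjacency
root@crpd01# run show ospf neighbor
root@crpd01# run show bfd session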
[Table 2.1 (not reproduced here): License Model, Routing Stack for Host SKUs, and Use Case Examples or Solutions, such as Route Reflector.]

Next, Table 2.2 lists the licensing support with use case examples on cRPD.

Feature name                  Licenses used   Licenses installed   Licenses needed   Licenses available   Expiry
containerized-rpd-standard    0               10                   0                 10                   2019-12-01 23:59:00 UTC
Licenses installed:
License identifier: 6fc1139b-ed66-4923-b55b-8ccb99508714
License SKU: S-CRPD-S-3
License version: 1
Order Type: commercial
Customer ID: TEXAS STATE UNIV.-SAN MARCOS
License Lease End Date: 2019-12-01 23:59:00 UTC
Features:
containerized-rpd-standard - Containerized routing protocol daemon with standard features
date-based, 2019-10-02 00:00:00 UTC - 2019-12-01 23:59:00 UTC
The following example shows how to install the 'tree' Linux utility in the Linux shell
of cRPD:
root@node-2:~# apt-get install tree
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following package was automatically installed and is no longer required:
grub-pc-bin
Use 'apt autoremove' to remove it.
The following NEW packages will be installed:
tree
0 upgraded, 1 newly installed, 0 to remove and 23 not upgraded.
Need to get 40.7 kB of archives.
After this operation, 105 kB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu bionic/universe amd64 tree amd64 1.7.0-5 [40.7 kB]
Fetched 40.7 kB in 1s (65.0 kB/s)
Selecting previously unselected package tree.
(Reading database ... 137961 files and directories currently installed.)
Preparing to unpack .../tree_1.7.0-5_amd64.deb ...
Unpacking tree (1.7.0-5) ...
Setting up tree (1.7.0-5) ...
Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
root@node-2:~#
Below is the content of the runit-int.sh file; this file should be present in the same
directory as the Dockerfile when building a modified cRPD image. You can easily ex-
tend this idea to build various custom cRPD images per your deployment needs.
#!/bin/bash
export > /etc/envvars
exec /sbin/runit-init 0
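As an illustration, a minimal Dockerfile that layers an extra package on top of the base cRPD image and copies in the script above might look like the following sketch (the base image tag is an assumption; point it at the cRPD image you actually loaded):
# Sketch of a customized cRPD image build
FROM crpd:21.1R1.11
# Add whatever extra tooling the deployment needs
RUN apt-get update && apt-get install -y tree && rm -rf /var/lib/apt/lists/*
# Ship the helper script inside the image
COPY runit-int.sh /sbin/runit-int.sh
RUN chmod +x /sbin/runit-int.sh
It can then be built with docker build -t crpd-custom . from that directory.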
cRPD Installation in Kubernetes
NOTE It is assumed that the reader already has a working Kubernetes cluster. This
section shows how you can bring up cRPD in the Kubernetes cluster as an
application POD.
Please ensure that the Docker image for cRPD is available on each Kubernetes
cluster node. You can do that either by setting up a local registry for the cRPD image
or by individually copying the cRPD Docker image onto each Kubernetes node.
ports:
- containerPort: 179
securityContext:
privileged: true
volumeMounts:
- name: crpd-storage
mountPath: /var/log/crpd-storage
volumes:
- name: crpd-storage
persistentVolumeClaim:
claimName: crpd-pv-claim
Note the containerPort: 179 entry, which exposes the BGP port, and the privileged
security context, which allows cRPD to install routes in the kernel.
You can limit resource usage by cRPD by defining a resources section in the POD
manifest file, as sketched below.
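A sketch of such a resources section inside the cRPD container spec (the numbers are placeholders to be sized for your environment):
        resources:
          requests:
            memory: "256Mi"
            cpu: "500m"
          limits:
            memory: "2Gi"
            cpu: "1"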
If you don't want to lose data, for example the configuration if the cRPD POD
dies, you can define a persistent volume and attach it in the POD manifest file.
You can create Persistent Volume as shown here and attach it in the POD manifest
file under the volumes section listed above:
apiVersion: v1
kind: PersistentVolume
metadata:
name: crpd-pv-volume
labels:
type: local
spec:
storageClassName: manual
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: crpd-pv-claim
spec:
storageClassName: manual
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 3Gi
This manifest file is an example of how to define a cRPD deployment. The impor-
tant thing to note is that you can dynamically scale the number of cRPD PODs up
and down by changing the value of replicas defined in the YAML file. You also
need a Kubernetes Service abstraction to expose the deployment, as sketched
below.
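A minimal sketch of what such a Deployment and Service pair could look like follows; the object names, image tag, and label selector are assumptions and need to be aligned with your own manifests:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: crpd-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: crpd
  template:
    metadata:
      labels:
        app: crpd
    spec:
      containers:
      - name: crpd
        image: crpd:21.1R1.11
        securityContext:
          privileged: true
        ports:
        - containerPort: 179
---
apiVersion: v1
kind: Service
metadata:
  name: crpd-service
spec:
  selector:
    app: crpd
  ports:
  - name: bgp
    port: 179
    targetPort: 179
Scaling is then a matter of kubectl scale deployment crpd-deployment --replicas=4, or editing replicas in the YAML and re-applying it.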
Chapter 3
cRPD Use Cases
This chapter provides an overview of some of the aforementioned use cases where
cRPD can play a role. Some use cases will contain just an overview of an idea, oth-
ers go into great detail and cover complete solutions. You can tailor these cases to
your own situation as we suggest new directions and new ways to deploy cRPD.
Overlay / Underlay
The way data centers are designed is changing rapidly. Until recently a lot of Layer
2 designs were being used, and even though these designs are understood by most
network engineers, they come with some huge disadvantages, namely:
Loop prevention can become complex
Ethernet switching tables can grow rapidly and become a limiting factor
Figure 3.1 A Typical Use Case Where cRPD Uses BGP-LU to Signal Its Upstream Routers Which
Path to Take Further Up the Network
Requirements
This example uses the following hardware and software components in a topology
illustrated by Figure 3.2:
Linux VM (H0) that simulates the data center server and hosts the cRPD
Docker container
Linux server running:
Other nodes in the topology are vMX routers running Junos Release 19.2R2.2
R1 is a separate vMX that simulates the rest of the routers (ToR, RR, P, Peer,
U1) as logical systems
Figure 3.2 Topology Used in this Example with BGP Labeled Unicast Egress Peer Traffic Engineer-
ing on Ingress Using cRPD
Configuration
This section shows how to enable egress traffic engineering on the ASBR R0 and
demonstrates ingress functionality on the cRPD. Configuration of other routers is
generic and is omitted to focus on the details relevant to this example.
MORE? This example is based on a Juniper TechLibrary example that you can
use as a reference to configure the other routers in the network: Configuring
Egress Peer Traffic Engineering Using BGP Labeled Unicast.
3. Create policies that export the ARP routes that egress traffic engineering created
and apply them to the IBGP core in the labeled unicast family:
set policy-options policy-statement export_to_rrs term a from protocol arp
set policy-options policy-statement export_to_rrs term a from rib inet.3
set policy-options policy-statement export_to_rrs term a then next-hop self
set policy-options policy-statement export_to_rrs term a then accept
set policy-options policy-statement export_to_rrs term b from protocol arp
set policy-options policy-statement export_to_rrs term b from rib inet6.3
set policy-options policy-statement export_to_rrs term b then next-hop self
set policy-options policy-statement export_to_rrs term b then accept
set policy-options policy-statement export_to_rrs term c from protocol bgp
set policy-options policy-statement export_to_rrs term c then accept
set policy-options policy-statement export_to_rrs term default then reject
set protocols bgp group toRRs type internal
set protocols bgp group toRRs local-address 10.19.19.19
set protocols bgp group toRRs family inet labeled-unicast rib inet.3
set protocols bgp group toRRs family inet6 labeled-unicast rib inet6.3
set protocols bgp group toRRs export export_to_rrs
set protocols bgp group toRRs neighbor 10.6.6.6
set protocols bgp group toRRs neighbor 10.7.7.7
set protocols bgp group toRRs export export_to_rrs
4. Re-advertise Internet routes from external peers with the next hop unchanged:
set protocols bgp group toRRs family inet unicast add-path receive
set protocols bgp group toRRs family inet unicast add-path send path-count 6
set protocols bgp group toRRs family inet6 unicast add-path receive
set protocols bgp group toRRs family inet6 unicast add-path send path-count 6
set protocols bgp group toRRs export export_to_rrs
Configure the cRPD (Ingress Node) to Control EPE Decisions in the Network
1. On the Linux shell create the IP tunnel:
host@h0:~# ip tunnel add gre1 mode gre remote 10.19.19.19 local 10.20.20.20 ttl 255
host@h0:~# ip link set gre1 up
2. Enter the cRPD Junos CLI and configure the protocols. These will bring up
OSPF and BGP sessions at cRPD01. The routes installed on the cRPD will bring
up the GRE tunnel in Linux:
set policy-options prefix-list SvrV6Pfxes 2001:db8::172:16:88:1/128
set policy-options prefix-list SvrV4Pfxes 172.16.88.1/32
set policy-options prefix-list SvrV4lo 10.20.20.20/32
set policy-options policy-statement export_lo1 term a from prefix-list SvrV4lo
set policy-options policy-statement export_lo1 term a then accept
set policy-options policy-statement export_lo1 term def then reject
set policy-options policy-statement export_to_peers term a from prefix-list SvrV4Pfxes
set policy-options policy-statement export_to_peers term a then accept
set policy-options policy-statement export_to_peers term b from prefix-list SvrV6Pfxes
set policy-options policy-statement export_to_peers term b then accept
set policy-options policy-statement export_to_peers term def then reject
set routing-options router-id 10.20.20.20
set routing-options autonomous-system 19
set routing-options rib inet.3 static route 10.19.19.19/32 next-hop gre1
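The protocols stanza itself is not reproduced above. A sketch of what the OSPF and IBGP configuration on cRPD01 could look like follows; the interface name matches the OSPF output in the next step, while the group name and address families are assumptions modeled on the R0 configuration:
set protocols ospf area 0.0.0.0 interface ens3f1
set protocols bgp group toRRs type internal
set protocols bgp group toRRs local-address 10.20.20.20
set protocols bgp group toRRs family inet unicast
set protocols bgp group toRRs family inet6 unicast
set protocols bgp group toRRs family inet labeled-unicast rib inet.3
set protocols bgp group toRRs family inet6 labeled-unicast rib inet6.3
set protocols bgp group toRRs export export_to_peers
set protocols bgp group toRRs neighbor 10.6.6.6
set protocols bgp group toRRs neighbor 10.7.7.7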
Verification
1. On crpd01, verify that the routing protocol sessions are up:
host@crpd01> show ospf neighbor
Address Interface State ID Pri Dead
10.20.21.2 ens3f1 Full 8.8.8.8 128 36
2. On crpd01, verify that IPv4 routes for U1 are installed. You should see the BGP
routes with all available next hops: 192.168.0.1, 192.168.1.1, 192.168.2.1, and
192.168.3.1:
host@crpd01> show route 172.16.77.1/32
2001:db8::172:16:77:1/128
*[BGP/170] 00:09:45, localpref 100, from 10.6.6.6
AS path: 1 4 I, validation-state: unverified
> via gre1, Push 301072
[BGP/170] 00:09:45, localpref 100, from 10.6.6.6
AS path: 1 4 I, validation-state: unverified
> via gre1, Push 301088
[BGP/170] 00:09:45, localpref 100, from 10.6.6.6
AS path: 2 4 I, validation-state: unverified
> via gre1, Push 301104
[BGP/170] 00:09:45, localpref 100, from 10.6.6.6
AS path: 3 4 I, validation-state: unverified
> via gre1, Push 301120
6. On H0, verify that IPv4 routes are installed in the Linux FIB with MPLS
encapsulation:
host@h0:~# ip route | grep 172.16.77.1
7. On H0, verify that IPv6 routes are installed in the Linux FIB accordingly, with
MPLS encapsulation:
host@h0:~# ip -6 route | grep 172:16:77:1
8. Keep the ping running and monitor interface statistics at R3. Verify that traffic is
exiting toward Peer1:
host@10.53.33.247> monitor interface ge-0/0/0.0
9. Add the following configuration to install a static route at R0 for R6/32 destina-
tion, with next hop of Peer2, with the resolve option. This configuration simulates
a Controller API installed route to move the traffic to Peer2:
host@crpd01# edit routing-options
rib inet6.0 { ... }
rib inet6.3 {
    static {
        route 2001:db8::172:16:77:1/128 {
            next-hop 19:2::2;
            resolve;
        }
    }
}
static {
    route 172.16.77.1/32 {
        next-hop 192.168.2.1;
        resolve;
    }
}
10. On H0, observe that the routes in the Linux FIB change to encapsulate toward the
new next hop:
host@h0:~# ip route | grep 172.16.77.1
11. Run ping from R0 to R6. Traffic is steered toward Peer2, as directed by the
controller installed static route:
host@10.53.33.247> monitor interface ge-0/0/0.0
Delay: 1/0/150
Interface: ge-0/0/0.0, Enabled, Link is Up
Flags: SNMP-Traps 0x4000
Encapsulation: ENET2
VLAN-Tag [ 0x8100.11 ]
Local statistics: Current delta
Input bytes: 1043286 [37753]
Output bytes: 1298727 [46180]
Input packets: 14427 [527]
Output packets: 14447 [515]
Remote statistics:
Input bytes: 29000 (0 bps) [0]
Output bytes: 38398552 (0 bps) [10690144]
Input packets: 400 (0 pps) [0]
Output packets: 408337 (0 pps) [113731]
IPv6 statistics:
Input bytes: 22952 (640 bps) [0]
Output bytes: 21417856 (0 bps) [5911048]
Input packets: 292 (0 pps) [0]
Output packets: 206141 (0 pps) [56837]
Traffic statistics:
Input bytes: 1072286 [37753]
Output bytes: 39697279 [10736324]
Input packets: 14827 [527]
Output packets: 422784 [114246]
Protocol: inet, MTU: 1500, Flags: None
L3VPN in MSP
This use case discusses the configuration of Layer 3 VPN (VRF) on a cRPD in-
stance with an MPLS Segment Routing data plane. In edge cloud infrastructure,
application-specific PE routers are replaced by a cloud-native workload, such as
cRPD providing the routing function with a Linux forwarding plane, to deliver Layer 3
overlay services. The cRPD Layer 3 VPN feature leverages the support provided in
Linux for VRF and MPLS, and enables routing instance functionality along with the
Junos RPD multiprotocol BGP support to allow Layer 3 overlay functionality.
Using cRPD’s multiprotocol BGP functionality, the local prefixes and the learned
prefixes in the VRF are advertised and received. Corresponding MPLS routes are
added into the Linux MPLS forwarding table. The prefixes learned from remote
PEs get into the VRF routing table based on route-target policy.
Linux VRF
A VRF in the Linux kernel is represented by a network device handled by a VRF
driver. When a VRF device is created, it is associated with a routing table and net-
work interfaces are then assigned to a VRF device as shown in Figure 3.3.
Packets that come in through such devices to the VRF are looked up in the routing
table associated with the VRF device. Similarly egress routing rules are used to
send packets to the VRF driver before sending it out on the actual interface.
MPLS in Linux
The MPLS configuration is supported in cRPD, including SR-MPLS, for forward-
ing packets to the destination in the MPLS network. To leverage cRPD’s MPLS
capability, it’s advised to run the containers on a system running Linux kernel ver-
sion 4.8 or later. The MPLS kernel module mpls_router must be installed in the
container for MPLS to work in the Linux kernel.
The steps to load the MPLS module on Linux host are:
Load the MPLS modules on the host using modprobe:
lab@poc-server-20:~$ modprobe mpls_iptunnel
lab@poc-server-20:~$ modprobe mpls_router
After loading the mpls_router on the host, configure the following commands to
activate MPLS on the interface:
lab@poc-server-20:~$ sudo sysctl -w net.mpls.platform_labels=1048575
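In addition to platform_labels, MPLS input usually has to be enabled per interface on the host as well; for example, for the ens1f0 interface used later in this section:
lab@poc-server-20:~$ sudo sysctl -w net.mpls.conf.ens1f0.input=1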
Okay, now that’s done, the Day One topology for this use case is illustrated in
Figure 3.4.
Figure 3.4 Topology Used in this Example for L3VPN Using cRPD
Before you configure a Layer 3 VPN (VRF), MPLS modules need to be installed
on the host OS on which the cRPD instance is created. Here we will configure IGP
and MPLS protocols on cRPD.
When cRPD is launched in host networking mode, the server host interfaces and IP
addresses are visible to the cRPD container. You do not have to configure the inter-
face IP addresses in cRPD, but the IGP and MPLS protocols do need to be configured
on cRPD.
In this example IS-IS is used as the IGP and MPLS segment routing for the data plane:
root@crpd01# show |display set
set protocols isis interface ens1f0
set protocols isis interface lo.0 passive
set protocols isis source-packet-routing srgb start-label 8000
set protocols isis source-packet-routing srgb index-range 100000
set protocols isis source-packet-routing node-segment ipv4-index 10
set protocols isis source-packet-routing node-segment ipv6-index 11
set protocols isis level 1 disable
[edit]
root@crpd01#
Notice the cRPD learned the remote PE lo0 address 10.10.10.6 through IS-IS:
root@crpd01# run show route protocol isis 10.10.10.6/32
[edit]
root@crpd01#
On the Linux host the forwarding table is programmed with the correct
information:
lab@poc-server-20:~$ ip route | grep 10.10.10.6
10.10.10.6 via 1.1.1.73 dev ens1f0 proto 22
lab@poc-server-20:~$
Now let's verify MPLS. On a typical Junos router the family mpls configuration
is required on the interface. On the Linux host a family mpls configuration is
not required: once the MPLS modules have been loaded, MPLS is enabled on all
host interfaces that will be used for MPLS forwarding.
Junos uses the mpls.0 table for label routes, but cRPD by default does not create
the mpls.0 table, so it needs to be explicitly enabled in cRPD:
root@crpd01# set routing-options rib mpls.0
Now verify the MPLS SR and MPLS route table is on the cRPD:
root@crpd01# run show isis overview
Instance: master
Router ID: 10.10.10.10
Hostname: crpd01
Sysid: 0010.0010.0010
Areaid: 49.0010
Adjacency holddown: enabled
Maximum Areas: 3
LSP life time: 1200
Attached bit evaluation: enabled
SPF delay: 200 msec, SPF holddown: 5000 msec, SPF rapid runs: 3
IPv4 is enabled, IPv6 is enabled, SPRING based MPLS is enabled
Traffic engineering: enabled
Restart: Disabled
Helper mode: Enabled
Layer2-map: Disabled
Source Packet Routing (SPRING): Enabled
SRGB Config Range :
SRGB Start-Label : 8000, SRGB Index-Range : 100000
SRGB Block Allocation: Success
SRGB Start Index : 8000, SRGB Size : 100000, Label-Range: [ 8000, 107999 ]
Node Segments: Enabled
Ipv4 Index : 10, Ipv6 Index : 11
SRv6: Disabled
Post Convergence Backup: Disabled
Level 1
Internal route preference: 15
External route preference: 160
Prefix export count: 0
Wide metrics are enabled, Narrow metrics are enabled
Source Packet Routing is enabled
Level 2
Internal route preference: 18
External route preference: 165
[edit]
root@crpd01#
Okay, now let’s check the mpls.0 table on cRPD and the mpls.0 label route for the
remote PE 10.10.10.6 in the table. cRPD will push MPLS SR label 8034 to reach
the remote PE 10.10.10.6:
root@crpd01# run show route table mpls.0
Now that there is an IGP and MPLS path between cRPD and remote PE, you can
configure a Layer 3 VPN between them:
root@crpd01# show protocols bgp
group vpn {
type internal;
family inet-vpn {
unicast;
}
neighbor 10.10.10.6 {
local-address 10.10.10.10;
}
}
[edit]
root@crpd01#
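The VRF itself is configured as a routing instance that ties into the BGP VPN family shown above. A minimal sketch follows; the instance name, CE-facing interface, and route targets are assumptions, not values from this lab:
set routing-instances red instance-type vrf
set routing-instances red interface ens1f1
set routing-instances red route-distinguisher 10.10.10.10:100
set routing-instances red vrf-target target:64512:100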
DDoS Protection
Distributed Denial of Service (DDoS) is a common attack launched against net-
work services/infrastructure by directing many compromised hosts on the Inter-
net to send simultaneous requests toward a target site or service, typically through
the use of bots. The attack overwhelms the system’s resources or network band-
width and prevents legitimate requests from being serviced. DDoS attacks are often
hard to defend against as they originate from random IP addresses. To mitigate
these attacks, threat detection systems analyze the traffic and build lists of the IP
addresses and services to be blocked or redirected to traffic scrubbers using firewall
filter rules, but distributing and applying these rules throughout the network in real
time is challenging and error prone.
BGP flowspec provides a way to dynamically distribute firewall rules to all net-
work devices across the network. Supporting implementations can install these
filters into their forwarding planes and filter the packets. The advantage of using
BGP flowspec over other mechanisms to distribute the filters is that it is based on
interoperable standards and works in multivendor environments.
cRPD can be used as the centralized station to inject these rules into the network.
The cRPD CLI can be used to configure these filters statically, which can then be
advertised over BGP.
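For example, a static flowspec filter matching the sample flow shown in the Linux output later in this section could be configured along these lines (the route name is arbitrary and the match values are illustrative):
set routing-options flow route block-ddos match destination 2.2.2.2/32
set routing-options flow route block-ddos match source 1.1.1.1/32
set routing-options flow route block-ddos match protocol tcp
set routing-options flow route block-ddos match destination-port 1234
set routing-options flow route block-ddos then discard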
cRPD also provides JET APIs that a controller or DDoS protection application
can use to add these rules dynamically as shown in Figures 3.5 and 3.6. As a re-
ceiver of these flowspec rules, cRPD can also install them to the Linux kernel
where BGP flow rules are translated to equivalent Linux rules.
}
then accept;
}
}
chain __flowspec_default_inet__ {
ip length 1000 ip dscp { 1-2, 4-4, 8-8 } ip protocol { 1-10, 13-15, 30 } icmp type { echo-
reply, echo-request } icmp code { host-unreachable, 7 } meta l4proto @tcp/udp payload @th,0,16 { 1234,
4321 } meta l4proto @tcp/udp icmp checksum { 1234, 4321 } ip saddr 1.1.1.1 ip daddr 2.2.2.2 tcp flags
syn,ack counter packets 0 bytes 0 accept comment "2.2.2.2,1.1.1.1,proto>=1&<=10,>=13&<=15,=30,dstport
=1234,=4321,srcport=4321,=1234,icmp-type=0,=8,icmp-code=7,=1,tcp-flag:10,#I1"
}
}
In order to scale cRPD to the numbers in Table 3.1 Juniper uses the lab setup
shown in Figure 3.6. But we didn’t want to get bottlenecked by protocol imple-
mentation on commercial traffic generators, so we used cRPD wherever possible
to pump routes into the DUT, which was also a cRPD.
When deploying at this scale, customers will have multiple RRs sharing the load, but
it was a handy use case to validate cRPD's stability given enough CPU and
memory. We were able to bring it up to scale with a cRPD acting as the DUT with
this configuration for BGP:
protocols {
bgp {
path-selection external-router-id;
update-threading;
family inet {
unicast {
add-path {
send {
path-count 16;
multipath;
}
}
no-install;
}
}
group group1 {
type internal;
local-address 5.5.5.5;
peer-as 64512;
local-as 64512 loops 2;
neighbor 1.1.1.2;
neighbor 1.1.1.3;
neighbor 1.1.1.4;
neighbor 1.1.1.5;
neighbor 1.1.1.6;
neighbor 1.1.1.7;
neighbor 1.1.1.8;
neighbor 1.1.1.9;
neighbor 1.1.1.10;
neighbor 1.1.1.11;
neighbor 1.1.1.12;
neighbor 1.1.1.13;
neighbor 1.1.1.14;
neighbor 1.1.1.15;
neighbor 1.1.1.16;
neighbor 1.1.1.17;
}
.
.
.
.
local-address 5.5.5.5;
advertise-inactive;
mtu-discovery;
log-updown;
drop-path-attributes 128;
cluster 5.5.5.5;
multipath;
multipath-build-priority {
low;
}
}
2. Create the Docker volume for persistent storage for cRPD configuration and log
messages:
docker volume create dut-config
docker volume create dut-varlog
Figure 3.8 displays screenshots of traffic generators vIXIA and vSpirent after all
the sessions came up with DUT.
And Figure 3.9 captures output from the DUT after everything came up.
Whitebox/SONiC
Traditionally, networking gear is sold as a fully integrated product. Vendors often
use in-house designed ASICs, with software and hardware tightly coupled through
their proprietary network OS.
These proprietary networking products are often deployed across a wide variety of
networks catering to different complex use cases. So they must be feature rich, but
can also be expensive to build and develop. Given the number of features that they
pack, any new feature must go through long and rigorous qualification cycles to
ensure backwards compatibility.
The hyper-scale data centers use CLOS architecture for the data center network
fabric, so they typically don’t require many of the advanced routing features such
as MPLS and IGP routing protocols. Most of the deployments just use single-hop
EBGP to bring up the CLOS underlay. However, these network operators have the
challenge of rapidly changing requirements as new applications are onboarded.
Most often their requirements are so unique to their network and dynamic in na-
ture that traditional routing protocols are not suited and often require SDN con-
trollers to orchestrate the network. These network operators found that
proprietary closed systems are not suited for their rapidly changing requirements
and moved toward a model where they can better operate.
A whitebox router/switch is a networking product whose hardware and software
are disaggregated and made available by different vendors. This gives flexibility to
customers who don’t always want the rich features of a proprietary OS but still
allows them to build and run their own software.
SONiC is an open-source Network Operating System (NOS) for whitebox routing
and switching platforms. Initiated and open sourced by Microsoft, it is currently
developed under the Open Compute Project (OCP), with many players from the
networking, telco, and data center world participating in its development. It's
gaining popularity: originally intended for deployment in data centers as a ToR
device, it has since found its way into diverse roles in the spine and as a cell site
router in 5G RAN.
MORE? If you want to learn more about SONiC visit the following URLs:
https://azure.github.io/SONiC/
https://github.com/Azure/SONiC/wiki/Architecture
SONiC comes with the Free Range Routing (FRR) routing stack as the default
routing stack. cRPD can be used as an alternative routing stack for SONiC plat-
forms as shown in Figure 3.10.
Architecture
SONiC uses Docker containers to host various control-plane applications making
it possible to upgrade and replace individual components. SONiC maintains the
device and route state in a redis database. A pub/sub model is used where each ser-
vice consumes state from the database published by other components, and then it
publishes its own objects and state.
An interesting detail about SONiC architecture is that routes must be programmed
to two FIBs: a FIB in the Linux kernel to manage packet I/O for the control plane
and a FIB in the data plane. To publish route state to the SONiC database, the
routing application uses the FPM channel to push route state to fpmsyncd, which
then publishes it to the redis database.
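As a quick sanity check on the SONiC side, routes that made it through fpmsyncd should show up in the APPL_DB route table; assuming the standard sonic-db-cli utility is present, one way to spot-check this is:
admin@sonic:~$ sonic-db-cli APPL_DB keys "ROUTE_TABLE:*" | head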
Deployment
Juniper SONiC devices include cRPD as the default routing stack. It can also be
installed on any SONiC device with an x86-64 based control plane. SONiC uses
container-based architecture to host the control plane services and cRPD is a natu-
ral fit.
To deploy, transfer the cRPD image onto a SONiC device and perform these instal-
lation steps.
4. Start cRPD:
docker start crpd
Troubleshooting
To troubleshoot the FPM channel, these commands will come in useful.
To check the FPM channel state:
cli> show krt state channel fpm
To capture the FPM traffic with tcpdump:
tcpdump -nvi lo tcp port 2620
MORE? This chapter isn’t a full deep dive into the RIFT protocol so the authors
suggest reading all about RIFT in the following Day One book:
https://www.juniper.net/documentation/en_US/day-one-books/DO_RIFT.pdf. It covers not
only the Juniper RIFT implementation but also the open-source implementation,
the RIFT Wireshark dissector, and troubleshooting tips and tricks.
Work in IETF
Work on this new protocol started in IETF when the RIFT workgroup charter
was approved in February, 2018. As stated in the charter: “The Routing in Fat
Trees (RIFT) protocol addresses the demands of routing in Clos and Fat-Tree
networks via a mixture of both link-state and distance-vector techniques collo-
quially described as 'link-state towards the spine and distance vector towards the
leafs'. RIFT uses this hybrid approach to focus on networks with regular topolo-
gies with a high degree of connectivity, a defined directionality, and large scale.”
In the charter the protocol will:
Deal with automatic construction of fat-tree topologies based on detection
of links,
Minimize the amount of routing state held at each topology level,
Addressing Challenges
RIFT doesn’t just solve a few spot issues but addresses a whole set of problems.
Basically, instead of adjusting and modifying an existing protocol, one could say
that RIFT is designed with today’s requirements in mind.
Figure 3.11 summarizes the issues modified ISIS and BGP have, and shows how
RIFT solves them quite nicely. It also displays some unique challenges RIFT
solves which currently cannot be addressed in Link-State or Distance-Vector
protocols.
Figure 3.11 Comparing the Use of BGP, ISIS and RIFT for Data Center Underlay
Basic Operations
As briefly described earlier in this chapter, RIFT combines concepts from both
Link-State and Distance-Vector protocols. A Topology Information Element (TIE)
is used to carry topology and reachability information; this is like an OSPF LSA or
an IS-IS LSP. Figure 3.12 illustrates the advertisement of reachability and topology
information in RIFT.
Figure 3.12 RIFT North- and Southbound Behavior and Lateral “one layer bounce” Behavior in
Case of Link Failure
Okay, with those definitions and a drawing in our pocket, let’s look at how the
protocol works:
The topology in Figure 3.12 shows a 5-stage CLOS or a fabric with three
levels; Top of Fabric (ToF) in red, spines in green, and the leaf switches are
blue.
Looking at the leaf routing table one can observe, in normal operations, only
default routes towards the higher level in the fabric. This is somewhat compa-
rable to a Level 1 router’s routing table in IS-IS or a totally stubby area in
OSPF.
The higher up into the fabric, the more routes one will see in the respective
switches' routing tables, with the ToF holding every route in its routing table.
Hence, spines have partial, and the ToF has full, knowledge of the complete
fabric, and they propagate routes much as a distance-vector protocol would.
Routing information sent down into the fabric is only bounced up again one
layer. This is used for automatic disaggregation and flood reduction. We’ll get
to the specifics of that later on.
It’s interesting to note that in contrast to some other protocols, RIFT currently
doesn’t require anything new from a forwarding plane and ASIC perspective – it’s
a control plane protocol. Routes installed into forwarding are just like those we
are used to.
Operational Advantages
Authors from Cisco, Juniper, Yandex, and Comcast have worked on the specifica-
tions, hence RIFT is, just like IS-IS and BGP, an open standard. So it is by nature
multivendor and interoperable, and besides fixing existing routing protocol issues
it aims to add some very appealing features.
Automatic Disaggregation
In normal circumstances, devices other than those in the ToF stage rely on the de-
fault route to forward packets towards the ToF, while more specific routes are
available to forward packets towards the ToR switches. What happens in Figure
3.13 if the C3—>D2 link fails? B2, B6, and D6 will continue forwarding traffic
towards C3 based on the default route being advertised in the TIE flooded by C3,
but C3 will no longer have a route by which it can reach 2001:db8:3e8:100::/64—
so this traffic will be dropped at C3.
To resolve this problem RIFT uses the two-hop neighbors advertised to all routers
to automatically determine when there is a failed link and push the required routes
along with the default route down the fabric. In this case, C4 can determine D3
should have an adjacency with D2, but the adjacency does not exist. Because of the
failed adjacency, C4 can flood reachability to 2001:db8:3e8:100::/64 alongside the
default route it is already sending to all its neighbors. This more specific route will
draw any traffic destined to the 100::/64 route, so C3 no longer receives this traffic.
The default route will continue to draw traffic towards C3 for the other destina-
tions it can still reach.
RIFT also offers the ability to perform unequal-cost load balancing from the ToR
towards the ToF. Since each node has only the default route, and the stages closer
to the ToF have more complete routing information, it is not possible for the ToR
to cause a routing loop by choosing one possible path over another, or unequally
sharing traffic along its available links.
Security
From the RIFT daemon perspective, it doesn’t really matter whether it is running
on a network device (switch) or 'compute host.' The RIFT daemon will program
routes into the forwarding plane of whatever host it runs on, and it will advertise its
local routes upstream into the fabric. From the RIFT daemon perspective, the
‘compute host’ is “just a leaf.”
Hosts are usually more exposed to security vulnerabilities. So when routing on the
host is configured, extra steps are advised in order to increase the security (authen-
tication, message integrity and anti-replay). You can configure the interface pro-
tection (outer key) to secure RIFT with the host:
protocols {
rift {
authentication-key 12 {
key ""; ## SECRET-DATA
}
interface ge-0/0/0.0 {
lie-authentication strict;
allowed-authentication-keys 12; # (Example using key-id 12)
lie-origination-key 12;
}
}
}
The most interesting part of ROTH using RIFT is probably that it eliminates the
need for Layer 2 connectivity between the host and fabric because by nature RIFT
will advertise its routes on all available links and load-balance traffic over those
links. It will also receive default routes from its northbound tier and load-balance
egress traffic over those links. Hence there is no need for Layer 2 anymore.
MORE? The authors realize these few pages don't even come close to covering all
the aspects of the RIFT protocol. Hence, we strongly recommend reading
https://www.juniper.net/documentation/en_US/day-one-books/DO_RIFT.pdf, which gives
a full 360 degree view of the protocol.
cRPD as Cloud Native Routing Stack in 5G
If you break down Figure 3.14 further, you get the deployment scenario shown in
Figure 3.15.
You can see in Figure 3.15 that vRAN can be implemented using a Kubernetes
environment with DU, CU, and a router running as a CNF. There is a strict la-
tency requirement for the fronthaul, which is around 70 microseconds, and the
F2 link can terminate on a SmartNIC. The latency requirement for F1 link or
mid-haul is not that strict (in terms of milliseconds) and a DPDK forwarding
plane could be used in the implementation.
Let's do a lab demonstration where the vRAN part is implemented and:
The router is replaced by cRPD acting as the cloud native routing stack in the
Kubernetes deployment, alongside the DU and CU.
The DU and CU applications are simulated using Kubernetes pods running the
busybox application.
Instead of an enhanced data plane like DPDK, the Linux kernel is used for
the demonstration.
Calico is used as the primary Kubernetes CNI, but in principle any CNI could
be used.
The DU and CU application pods act as CE devices to the cRPD (which
is acting as a router).
Communication between DU and CU is facilitated by using SR-MPLS
transport.
The lab is capable of bringing up both IPv4 and IPv6 transport. Please keep in
mind that operators are mostly inclined towards IPv6 for vRAN infrastruc-
ture.
The lab topology is shown in Figure 3.16:
Figure 3.16 Sample Kubernetes Cluster with Three Nodes – Physical Connectivity
In this lab only two worker nodes are shown, but you can bring up as many
as you need.
2. Once the VMs are up and connected, install Kubernetes with one master and the
worker nodes. There are various ways to bring up the Kubernetes cluster (kubeadm,
kind, minikube, etc.). It is assumed that the Kubernetes setup is ready to
be used with a primary CNI of your choice.
3. You want to install cRPD to act as a cloud native routing stack for all the
PODs that want to use the transport network for communication and take care
of route exchange and installing forwarding state. Figure 3.17 depicts the
deployment.
Figure 3.17 Lab Deployment of oCU and oDU Application in a 3 Nodes Kubernetes Cluster
2. Add the cRPD license as a Kubernetes secret and append it to the manifest file:
kubectl create secret generic crpd-license -n kube-system \
  --from-literal=CRPDLICENSE="ENTIRE LICENSE STRING GOES HERE" \
  --dry-run=client -o yaml >> crpd_cni_secondary_asymmetric.yml
===>
Containerized Routing Protocols Daemon (CRPD)
Copyright (C) 2020, Juniper Networks, Inc. All rights reserved.
<===
root@kubernetes-master:/# cli
root@kubernetes-master> show bgp summary
Threading mode: BGP I/O
Default eBGP mode: advertise - accept, receive - accept
Groups: 2 Peers: 5 Down peers: 0
Unconfigured peers: 5
Table Tot Paths Act Paths Suppressed History Damp State Pending
bgp.l3vpn.0
0 0 0 0 0 0
bgp.l3vpn-inet6.0
1 1 0 0 0 0
Peer AS InPkt OutPkt OutQ Flaps Last Up/Dwn State|#Active/Received/Accepted/
Damped...
100.1.1.2 64512 17 16 0 0 7:45 Establ
bgp.l3vpn.0: 0/0/0/0
100.1.1.3 64512 18 17 0 0 6:48 Establ
bgp.l3vpn.0: 0/0/0/0
2001:db8:100:1::2 64512 17 18 0 0 7:43 Establ
bgp.l3vpn-inet6.0: 0/0/0/0
2001:db8:100:1::3 64512 18 17 0 0 6:47 Establ
bgp.l3vpn-inet6.0: 0/0/0/0
2001:db8:100:1::4 64512 5 3 0 0 30 Establ
bgp.l3vpn-inet6.0: 1/1/1/0
04:19:21.189652 52:54:00:0b:0a:5a > 52:54:00:4d:c5:09, ethertype MPLS unicast (0x8847), length 102:
MPLS (label 16, exp 0, [S], ttl 63) 10.1.3.2 > 10.1.2.2: ICMP echo request, id 178, seq 2, length 64
04:19:21.189879 52:54:00:4d:c5:09 > 52:54:00:0b:0a:5a, ethertype MPLS unicast (0x8847), length 102:
MPLS (label 16, exp 0, [S], ttl 63) 10.1.2.2 > 10.1.3.2: ICMP echo reply, id 178, seq 2, length 64
04:17:32.199816 52:54:00:0b:0a:5a > 52:54:00:4d:c5:09, ethertype MPLS unicast (0x8847), length 122:
MPLS (label 16, exp 0, [S], ttl 63) 2001:db8:3::2 > 2001:db8:2::2: ICMP6, echo request, seq 0, length 64
04:17:32.200059 52:54:00:4d:c5:09 > 52:54:00:0b:0a:5a, ethertype MPLS unicast (0x8847), length 122:
MPLS (label 16, exp 0, [S], ttl 63) 2001:db8:2::2 > 2001:db8:3::2: ICMP6, echo reply, seq 0, length 64
04:17:33.200038 52:54:00:0b:0a:5a > 52:54:00:4d:c5:09, ethertype MPLS unicast (0x8847), length 122:
MPLS (label 16, exp 0, [S], ttl 63) 2001:db8:3::2 > 2001:db8:2::2: ICMP6, echo request, seq 1, length 64
04:17:33.200406 52:54:00:4d:c5:09 > 52:54:00:0b:0a:5a, ethertype MPLS unicast (0x8847), length 122:
MPLS (label 16, exp 0, [S], ttl 63) 2001:db8:2::2 > 2001:db8:3::2: ICMP6, echo reply, seq 1, length 64
04:17:34.200203 52:54:00:0b:0a:5a > 52:54:00:4d:c5:09, ethertype MPLS unicast (0x8847), length 122:
MPLS (label 16, exp 0, [S], ttl 63) 2001:db8:3::2 > 2001:db8:2::2: ICMP6, echo request, seq 2, length 64
04:17:34.200488 52:54:00:4d:c5:09 > 52:54:00:0b:0a:5a, ethertype MPLS unicast (0x8847), length 122:
MPLS (label 16, exp 0, [S], ttl 63) 2001:db8:2::2 > 2001:db8:3::2: ICMP6, echo reply, seq 2, length 64
Chapter 4
Monitoring and Telemetry
cRPD supports various automation and telemetry frameworks such as JET, YANG-based model-driven streaming telemetry, OpenConfig, NETCONF, PyEZ, and gRPC.
This chapter shows you how to set up a monitoring solution using open-source tools. The entire setup can be defined using a single Docker Compose file. Let’s use a very simple network topology with three cRPDs, as shown in Figure 4.1.
The aim of this lab is to showcase how to set up a cloud-native monitoring frame-
work for cRPD using:
1. Grafana as a Dashboard
2. Prometheus as a collector for node-exporter and cAdvisor
3. Telegraf as a collector for cRPD Junos OpenConfig Telemetry
In this setup:
There are iBGP sessions among the three cRPDs: Test-cRPD, baseline-cRPD, and Internet-cRPD.
Test-cRPD is our device under test (DUT).
Internet-cRPD pulls the internet routing table from the lab and reflects it to Test-cRPD and baseline-cRPD.
To bring up the above network topology along with the monitoring infrastructure, let’s use this Docker Compose YAML file:
---
version: "3"
networks:
  tig:
services:
  influxdb:
    container_name: influxdb
    expose:
      - 8086
      - 8083
    networks:
      - tig
    image: influxdb:1.8.0
  telegraf:
    container_name: telegraf
    image: telegraf:1.15.1
    networks:
      - tig
    volumes:
      - $PWD/telegraf.conf:/etc/telegraf/telegraf.conf:ro
      - /var/run/docker.sock:/var/run/docker.sock
    depends_on:
      - influxdb
  grafana:
    container_name: grafana
    environment:
      GF_SECURITY_ADMIN_USER: jnpr
      GF_SECURITY_ADMIN_PASSWORD: jnpr
    ports:
      - "3000:3000"
    networks:
      - tig
    image: grafana/grafana:7.0.3
    volumes:
      - $PWD/datasource.yaml:/etc/grafana/provisioning/datasources/datasource.yaml:ro
    depends_on:
      - influxdb
      - prometheus
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    networks:
      - tig
    ports:
      - 9090:9090
    command:
      - --config.file=/etc/prometheus/prometheus.yml
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml:ro
    links:
      - cadvisor:cadvisor
      - nodeexporter:nodeexporter
  cadvisor:
    privileged: true
    image: gcr.io/google-containers/cadvisor:latest
    container_name: cadvisor
    networks:
      - tig
    ports:
      - 8080:8080
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:rw
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro
      - /dev/disk/:/dev/disk:ro
      - /sys/fs/cgroup:/sys/fs/cgroup:ro
  nodeexporter:
    privileged: true
    image: prom/node-exporter:v1.0.1
    container_name: nodeexporter
    networks:
      - tig
    ports:
      - 9100:9100
    volumes:
      - /proc:/host/proc:ro
      - /sys:/host/sys:ro
      - /:/rootfs:ro
      - ${PWD}/crpd_image_size:/host/crpd_image_size
    command:
      - '--path.procfs=/host/proc'
      - '--path.rootfs=/rootfs'
      - '--path.sysfs=/host/sys'
      - '--collector.filesystem.ignored-mount-points=^/(sys|proc|dev|host|etc)($$|/)'
      - '--collector.textfile.directory=/host/crpd_image_size'
    restart: unless-stopped
  testcrpd:
    privileged: true
    image: crpd:20.4I-20201227.0.1259
    container_name: crpdtest
    networks:
      - tig
    volumes:
      - ${PWD}/testcrpd:/config
      - ${PWD}/license:/config/license
  baselinecrpd:
    privileged: true
    image: crpd:20.3R1.4
    container_name: crpdbaseline
    networks:
      - tig
    volumes:
      - ${PWD}/baselinecrpd:/config
      - ${PWD}/license:/config/license
  internetcrpd:
    privileged: true
    image: crpd:20.3R1.4
    container_name: internetcrpd
    network_mode: bridge
    volumes:
      - ${PWD}/internetcrpd:/config
      - ${PWD}/license:/config/license
retention_policy = ""
write_consistency = "any"
timeout = "5s"
[[inputs.jti_openconfig_telemetry]]
## List of device addresses to collect telemetry from
servers = ["testcrpd:50051", "baselinecrpd:50051"]
sample_frequency = "2000ms"
sensors = [
"/network-instances/network-instance/protocols/protocol/bgp/neighbors",
"/bgp-rib",
]
retry_delay = "1000ms"
..
In this configuration file, 50051 is the gRPC port number that cRPD exposes for streaming telemetry. Here is the relevant configuration snippet from cRPD:
system {
..
    extension-service {
        request-response {
            grpc {
                clear-text {
                    address 0.0.0.0;
                    port 50051;
                }
                max-connections 8;
                skip-authentication;
            }
        }
        ##
        ## Warning: configuration block ignored: unsupported platform (crpd)
        ##
        notification {
            allow-clients {
                address 0.0.0.0/0;
            }
        }
    }
..
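The same gRPC stanza can also be entered in configuration mode with set commands equivalent to the snippet above:

set system extension-service request-response grpc clear-text address 0.0.0.0
set system extension-service request-response grpc clear-text port 50051
set system extension-service request-response grpc max-connections 8
set system extension-service request-response grpc skip-authentication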
Grafana: As discussed, Grafana is used to build the dashboard GUI. It uses InfluxDB and Prometheus as data sources; Grafana version 7.0.3 is used here. It exposes port 3000 for HTTP connections from a web browser, and the username and password for logging in through the browser are provided in the service definition. The datasource.yaml file supplies the credentials Grafana needs to automatically fetch data from InfluxDB and Prometheus.
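A minimal sketch of such a datasource.yaml, assuming the service names from the Docker Compose file above and an InfluxDB database written by Telegraf (the database name here is an assumption), could look like this:

apiVersion: 1
datasources:
  - name: InfluxDB
    type: influxdb
    access: proxy
    url: http://influxdb:8086
    database: telegraf        # assumed database name written by Telegraf
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus:9090
    isDefault: true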
Prometheus: Prometheus is a widely adopted, open-source, metrics-based monitoring and alerting system. It has a built-in TSDB and scrapes the endpoints defined in its configuration file for available metrics. Here the configuration file prometheus.yml gets mounted when the container comes up. An excerpt of the endpoint scraping configuration is given below:
..
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'
    # Override the global default and scrape targets from this job every 5 seconds.
    scrape_interval: 5s
    static_configs:
      - targets: ['prometheus:9090','cadvisor:8080','nodeexporter:9100','pushgateway:9091']
cAdvisor: This provides insights into the resource usage and performance characteristics of the containers running on a system.
Node-exporter: This provides server-level details about various system parameters such as memory and CPU, and can be used to visualize server health in the dashboard.
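Note that the node-exporter service above also mounts ${PWD}/crpd_image_size and enables the textfile collector, which reads *.prom files from that directory. A hypothetical way to populate it with the cRPD image size from the host shell would be:

# Hypothetical helper: export the cRPD image size (in bytes) in Prometheus text format
SIZE=$(docker image inspect crpd:20.3R1.4 --format '{{.Size}}')
echo "crpd_image_size_bytes $SIZE" > ${PWD}/crpd_image_size/crpd_image_size.prom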
The remainder of this chapter shows various cRPD container services used to
match the topology. Let's run this command to bring up the setup:
docker-compose -f docker-compose.yml up
root@server# docker ps
CONTAINER ID  IMAGE                                     COMMAND                  CREATED         STATUS                            PORTS                                                                         NAMES
53eb0284a9e0  grafana/grafana:7.0.3                     "/run.sh"                14 seconds ago  Up 12 seconds                     0.0.0.0:3000->3000/tcp                                                        grafana
1a3545a58f3d  prom/prometheus:latest                    "/bin/prometheus --c…"   16 seconds ago  Up 14 seconds                     0.0.0.0:9090->9090/tcp                                                        prometheus
e9e341da31e0  telegraf:1.15.1                           "/entrypoint.sh tele…"   17 seconds ago  Up 13 seconds                     8092/udp, 8125/udp, 8094/tcp                                                  telegraf
10a039df2b48  influxdb:1.8.0                            "/entrypoint.sh infl…"   23 seconds ago  Up 17 seconds                     8083/tcp, 8086/tcp                                                            influxdb
6b782911159b  crpd:20.4I-20201227.0.1259                "/sbin/runit-init.sh"    23 seconds ago  Up 21 seconds                     22/tcp, 179/tcp, 830/tcp, 3784/tcp, 4784/tcp, 6784/tcp, 7784/tcp, 50051/tcp  crpdtest
e1ea4c9a4747  prom/pushgateway                          "/bin/pushgateway"       23 seconds ago  Up 19 seconds                     0.0.0.0:9091->9091/tcp                                                        pushgateway
736ce19cbb8e  gcr.io/google-containers/cadvisor:latest  "/usr/bin/cadvisor -…"   23 seconds ago  Up 16 seconds (health: starting)  0.0.0.0:8080->8080/tcp                                                        cadvisor
5f36b4a45088  crpd:20.3R1.4                             "/sbin/runit-init.sh"    23 seconds ago  Up 22 seconds                     22/tcp, 179/tcp, 830/tcp, 3784/tcp, 4784/tcp, 6784/tcp, 7784/tcp, 50051/tcp  internetcrpd
697c750514fb  prom/node-exporter:v1.0.1                 "/bin/node_exporter …"   23 seconds ago  Up 18 seconds                     0.0.0.0:9100->9100/tcp                                                        nodeexporter
8f5a25d90c65  crpd:20.3R1.4                             "/sbin/runit-init.sh"    23 seconds ago  Up 20 seconds                     22/tcp, 179/tcp, 830/tcp, 3784/tcp, 4784/tcp, 6784/tcp, 7784/tcp, 50051/tcp  crpdbaseline
b237f0c58f09  932d15951c1e                              "/sbin/runit-init.sh"
Here is a configuration for pushing the OpenConfig telemetry from the baseline cRPD:
openconfig-network-instance:network-instances {
    network-instance default {
        config {
            type DEFAULT_INSTANCE;
        }
        protocols {
            protocol BGP bgp-1 {
                bgp {
                    global {
                        config {
                            as 100;
                            router-id 2.2.2.2;
                        }
                    }
                    neighbors {
                        neighbor 53.1.1.2 {
                            config {
                                peer-group qtest1;
                                neighbor-address 53.1.1.2;
                                peer-as 100;
                            }
                            route-reflector {
                                config {
                                    route-reflector-cluster-id 2;
                                    route-reflector-client true;
                                }
                            }
                            apply-policy {
                                config {
                                    export-policy ibgp-export-internet;
                                }
                            }
                        }
                        neighbor 52.1.1.2 {
                            config {
                                peer-group qtest2;
                                neighbor-address 52.1.1.2;
                                peer-as 100;
                            }
                            route-reflector {
                                config {
                                    route-reflector-cluster-id 2;
                                    route-reflector-client true;
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}
root@ubuntu:~/jtimon/jtimon# ./jtimon -h
Usage of ./jtimon:
--compression string Enable HTTP/2 compression (gzip)
--config strings Config file name(s)
--config-file-list string List of Config files
--consume-test-data Consume test data
--explore-config Explore full config of JTIMON and exit
--generate-test-data Generate test data
--json Convert telemetry packet into JSON
--log-mux-stdout All logs to stdout
--max-run int Max run time in seconds
--no-per-packet-goroutines Spawn per packet go routines
--pprof Profile JTIMON
--pprof-port int32 Profile port (default 6060)
--prefix-check Report missing __prefix__ in telemetry packet
--print Print Telemetry data
--prometheus Stats for prometheus monitoring system
--prometheus-host string IP to bind Prometheus service to (default "127.0.0.1")
root@ubuntu:~/jtimon/jtimon#
Make sure port 50051 is exposed on the local host. You can do that either by running cRPD in host networking mode or by mapping the cRPD telemetry port to local port 50051.
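For example, when running cRPD standalone with Docker, either approach could look like the following sketch (the container name is arbitrary and the image tag is the one used earlier):

# Option 1: host networking, so port 50051 is directly reachable on the host
docker run --rm -d --name crpd01 --privileged --network host crpd:20.3R1.4

# Option 2: publish the gRPC telemetry port to local port 50051
docker run --rm -d --name crpd01 --privileged -p 50051:50051 crpd:20.3R1.4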
5. Once set up, run the following command to check whether the BGP sensors are exported by cRPD:
root@hyderabad:~/jtimon/jtimon# ./jtimon --config ./test.json --print
2020/12/28 19:15:27 Version: v2.3.0-79d878c471c92ce67477c662c1d31fd49b74301f-master BuildTime 2020-
12-23T13:40:23-0800
2020/12/28 19:15:27 logging in for 172.17.0.1:50051 [periodic stats every 0 seconds]
Running config of JTIMON:
{
    "port": 50051,
    "host": "172.17.0.1",
    "user": "",
    "password": "",
    "cid": "",
    "meta": false,
    "eos": false,
    "grpc": {
        "ws": 1048576
    },
    "tls": {
        "clientcrt": "",
        "clientkey": "",
        "ca": "",
        "servername": ""
    },
    "influx": {
        "server": "",
        "port": 0,
        "dbname": "",
        "user": "",
        "password": "",
        "recreate": false,
        "measurement": "",
        "batchsize": 102400,
        "batchfrequency": 2000,
There you go! Figure 4.2 shows the final output. In the dashboard you can visualize, for the host/server:
Uptime for the server or host
Troubleshooting
Troubleshooting determines why something does not work as expected and how to resolve the problem. You can troubleshoot issues at the CLI, container, and Docker levels, investigate rpd crashes and core files, and use the log messages stored in /var/log/messages.
You may also need to run “apt-get update” prior to running any “apt install” command.
Also, please check “/etc/resolv.conf” to make sure DNS is properly configured both in the cRPD Linux shell and on the host, so that domain names are properly resolved.
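For example (the resolver address shown here is only an illustration, not a recommendation):

root@crpd:/# cat /etc/resolv.conf
nameserver 8.8.8.8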
NOTE Unlike Junos, core files generated by cRPD don’t include configuration files and other data; these should be collected separately.
Check whether a VRF device got created in the kernel corresponding to the configured routing instance:
root@crpd> show configuration routing-instances
VRF1 {
    instance-type vrf;
    interface veth91c08573;
    route-distinguisher 100.1.1.3:64512;
    vrf-target target:1:1;
    vrf-table-label;
}
root@crpd>
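From the cRPD Linux shell, the corresponding kernel state can be inspected with iproute2; a sketch, assuming the VRF device inherits the routing-instance name:

root@crpd:/# ip link show type vrf
root@crpd:/# ip route show vrf VRF1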
root@crpd:/# lldpcli
[lldpcli] $ show
-- Show running system information
neighbors Show neighbors data
interfaces Show interfaces data
chassis Show local chassis data
statistics Show statistics
configuration Show running configuration
running-configuration Show running configuration
[lldpcli] $ show neighbors
[edit]
root@internet#