[Version 3.11]
Contents
1 Hardware Requirements
2 Software Requirements
4 BGP Configuration
Introduction
This document is intended for network architects, network engineers, and any other technical staff
involved in deploying the Noction Intelligent Routing Platform (IRP) in a specific network. It
contains basic information on how IRP interconnects with the existing network elements.
Conventional network diagrams are used; components not related to the current scenario may be
omitted.
Noction can provide assistance and help plan the IRP deployment in your network. Noction will
sign an NDA if required.
1 Hardware Requirements
In production, a dedicated server for each IRP instance is strongly recommended. The system can
also be deployed on a virtual machine with matching specifications, provided that hardware-assisted
or paravirtualization (Xen, KVM, VMware) is used. OS-level virtualization (OpenVZ/Virtuozzo or similar)
is not supported.
In the case of KVM or VMware, please request a pre-built image with the OS and IRP packages from
your Sales Manager.
1. CPU
2. RAM
3. HDD
• SAS disks are recommended (SSDs are required only for 40Gbps+ networks);
• HDD partitioning:
– LVM is recommended;
– At least 100GB of disk space usable for /var, or a separate partition;
– At least 10GB of disk space usable for /tmp, or a separate partition. This is required for
manipulating large MySQL tables. More disk space might be required under heavy workload.
4. NIC
• if providing sFlow/NetFlow data - at least 1 x 1000Mbps NIC; two NICs are recommended
(one dedicated to management purposes).
• if providing raw traffic data by port mirroring - additional 10G interfaces are required for
each of the configured SPAN ports (Myricom 10G network cards with a Sniffer10G license are
recommended for high-pps networks). When configuring multiple SPAN ports, the same
number of additional CPU cores is needed to analyze the traffic.
In very large networks carrying hundreds of Gbps of traffic or more, and in IRP configurations
with very aggressive optimization settings, configurations with 2x or 4x CPUs are recommended. The
IRP servers in these cases should also be allocated double the recommended RAM and use SSD storage.
Noction can size, set up, and ship an appliance conforming to your needs. The appliance is delivered
with the OS installed (latest CentOS 7) and the IRP software deployed.
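As a rough illustration of the sizing guidance above, the rules can be sketched as a small helper. The 100 Gbps tier boundary and the 32GB base RAM figure are assumptions for illustration only, not official Noction sizing numbers:

```python
def recommend_specs(traffic_gbps, base_ram_gb=32):
    """Illustrative sizing rules distilled from the guidance above.

    The 100 Gbps threshold and base_ram_gb default are assumptions
    for illustration, not official Noction figures.
    """
    specs = {"cpus": 1, "ram_gb": base_ram_gb, "storage": "SAS"}
    if traffic_gbps >= 40:
        specs["storage"] = "SSD"           # SSDs required for 40Gbps+ networks
    if traffic_gbps >= 100:                # "hundreds of Gbps" tier (assumed cutoff)
        specs["cpus"] = 2                  # 2x (or 4x) CPUs recommended
        specs["ram_gb"] = base_ram_gb * 2  # double the recommended RAM
        specs["storage"] = "SSD"
    return specs

print(recommend_specs(10))   # modest network: single CPU, SAS disks
print(recommend_specs(200))  # very large network: 2x CPUs, double RAM, SSD
```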
2 Software Requirements
A clean Linux system with the latest CentOS 7 or Ubuntu Server LTS (x86_64 architecture) installed
on the server.
• The network is multi-homed.
The following is needed in order to deploy and configure IRP:
1. A network diagram including all horizontal (own), upstream (provider), and downstream
(customer) routers. Check whether the network topology is logically similar to one or more of
the samples listed below, for example the Flow export configuration.
2. List of prefixes announced by your AS that should be analyzed and optimized by IRP.
3. Review the output of the commands below from all edge routers:
The settings relating to BGP configuration, prefixes announced by your ASN, route maps,
routing policies, access control lists, and sFlow/NetFlow and related interface configurations are used
to set up similar IRP settings, or to determine which settings do not conflict with existing network
policies.
(a) Export sFlow, NetFlow (v1, v5, v9, v10 (IPFIX)), or jFlow data to the main server IP. Make sure
the IRP server gets both inbound and outbound traffic information.
Egress and ingress flow accounting should be enabled on each provider link or, if it is not
technically possible to enable egress accounting, ingress flow accounting should be enabled
on all interfaces facing both the internal network and the providers.
NetFlow is most suitable for high traffic volumes, or for a sophisticated network
infrastructure where port mirroring is not technically possible.
Recommended sampling rates:
i. For traffic up to 1Gbps: 1024
ii. For traffic up to 10Gbps: 2048
Configuration examples can be found in the official documentation, section 2.2.3.1 (Irpflowd
Configuration)
(b) Port mirroring (a partial traffic copy is enough). In this case, additional network interfaces
on the server are required - one for each mirrored port. See Figures 2 and 3.
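The sampling rates recommended in (a) can be checked with a quick back-of-the-envelope sketch of how many flow samples per second the collector would receive. The average packet size used here is an illustrative assumption; real averages vary by network:

```python
def sampled_pps(traffic_gbps, sampling_rate, avg_packet_bytes=700):
    """Estimate flow samples per second at a given 1-in-N sampling rate.

    avg_packet_bytes=700 is an illustrative assumption, not a measured value.
    """
    total_pps = traffic_gbps * 1e9 / 8 / avg_packet_bytes  # bits -> packets/s
    return total_pps / sampling_rate

# ~1 Gbps at 1-in-1024 sampling vs ~10 Gbps at 1-in-2048:
print(round(sampled_pps(1, 1024)))
print(round(sampled_pps(10, 2048)))
```

Higher sampling rates at higher traffic volumes keep the sample stream (and the load on the IRP collector) roughly constant.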
5. Policy-Based Routing (PBR) for IRP active probing (see Figure 4).
• Apart from the main server IP, please add one public IP address per provider to be
assigned as a probing alias IP, and configure PBR so that traffic originating from each of these IPs
is routed over a different provider respectively.
• No route maps should be enforced for the main server IP; traffic originating from it should
pass through the routers using the default routes.
• Define the Provider↔PBR IP routing map.
In specific complex scenarios, traffic from the IRP server must pass through multiple routers before
reaching the provider. If a separate probing VLAN cannot be configured across all routers, GRE tunnels
from IRP to the edge routers should be configured (one GRE tunnel per edge router), as
shown in Figure 5.
GRE tunnels are mainly used to avoid the additional overhead of route maps configured along the
whole IRP↔edge router path.
PBR and GRE configuration examples for major network equipment vendors can be found in the
IRP documentation, section 2.2.4.1 (Specific PBR configuration scenarios)
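The Provider↔PBR IP routing map mentioned in step 5 is essentially a table pairing each provider with its probing alias IP. A minimal sketch (provider names and addresses are made up for illustration; 198.51.100.0/24 is a documentation range):

```python
# Hypothetical Provider <-> PBR probing-IP map; names and addresses are examples.
pbr_map = {
    "ProviderA": "198.51.100.10",
    "ProviderB": "198.51.100.11",
    "ProviderC": "198.51.100.12",
}

def alias_for(provider):
    """Return the probing alias IP whose traffic PBR steers via `provider`."""
    return pbr_map[provider]

# Each alias must be unique so PBR can route each one over a different provider.
assert len(set(pbr_map.values())) == len(pbr_map)
print(alias_for("ProviderB"))
```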
6. Provide SNMP access (a read-only community is enough) for each provider link, and provide the following
information:
This information is required for report generation, for Commit Control decision-making, and to prevent
overloading a specific provider with an excessive number of improvements.
7. Provide the maximum throughput/interface capacity so that a load limit (90% of the capacity) can be
configured; IRP improves traffic to a provider only when the traffic load is below the specified
load limit value in Mbps. Additionally, to set up Commit Control or cost-related settings, provide
the cost per Mbps and the 95th percentile values for each provider (optional).
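For example, the load limit and a 95th percentile value can be derived as follows (the traffic samples are made up, and the simple nearest-rank percentile here is one of several common methods billing systems use):

```python
def load_limit(capacity_mbps, fraction=0.90):
    """90% of interface capacity: the load limit described above."""
    return capacity_mbps * fraction

def percentile_95(samples_mbps):
    """95th percentile of traffic samples (simple nearest-rank method)."""
    ranked = sorted(samples_mbps)
    idx = max(0, round(0.95 * len(ranked)) - 1)
    return ranked[idx]

print(load_limit(10_000))  # 10G link -> 9000.0 Mbps load limit

# Made-up 5-minute traffic samples in Mbps:
samples = [100, 250, 300, 420, 380, 510, 290, 330, 405, 612]
print(percentile_95(samples))
```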
The above is sufficient to proceed with a non-intrusive setup. To start the system in full Intrusive mode,
iBGP sessions between IRP and edge routers will be required. Please see the BGP Configuration section
for detailed information.
4 BGP Configuration
IRP interconnects with the edge router via an iBGP session. If there are multiple edge routers with
connections to the providers, an iBGP session to each router is required.
For each improved destination network, if running in Intrusive mode, the system announces back the
improved prefix with an updated next-hop. In order for the improved route to take precedence over other
routes received from external peers, a custom local-preference value is used. Depending on the existing
routers' configuration, other mechanisms may be used for prioritizing the routes announced by IRP
(e.g. weight, community).
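The local-preference mechanism can be illustrated with a toy best-path selection. The values below are examples only; the local-preference IRP announces is configurable:

```python
# Toy BGP-style best-path choice: for the same prefix, the route with the
# highest local-preference wins, so the IRP-announced route takes precedence.
routes = [
    {"prefix": "203.0.113.0/24", "next_hop": "10.0.0.1", "local_pref": 100},  # from external peer
    {"prefix": "203.0.113.0/24", "next_hop": "10.0.0.9", "local_pref": 200},  # IRP improvement
]

best = max(routes, key=lambda r: r["local_pref"])
print(best["next_hop"])  # the IRP-announced next-hop is preferred
```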
IRP active probing uses the following packet types:
• ICMP ECHO REQUEST (the ICMP payload contains the text "NOCTION" with some spaces)
• UDP, destination ports 33434 - 33484 (the data contains the text "NOCTION" with some spaces)
• TCP SYN, destination ports 33434 - 33484 (without any data text)
Probing packet source IPs belong to the Noction IRP server. Other fields, such as the ICMP Echo Request
ID, TCP/UDP source port, DSCP, and TTL, may vary.
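A sketch of what a UDP probe from this range might look like. The exact payload padding is an assumption based on the description above, and the datagram is only constructed here, never sent:

```python
import random
import struct

def build_udp_probe(payload=b"NOCTION  "):
    """Build a UDP datagram matching the probe description above (not sent).

    Destination port is drawn from 33434-33484; the payload carries the text
    "NOCTION" with some spaces (the exact padding here is an assumption).
    """
    dst_port = random.randint(33434, 33484)
    src_port = random.randint(49152, 65535)  # ephemeral; "may vary", per the text
    length = 8 + len(payload)                # UDP header is 8 bytes
    header = struct.pack("!HHHH", src_port, dst_port, length, 0)  # checksum 0
    return dst_port, header + payload

port, datagram = build_udp_probe()
print(port, len(datagram))
```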
Response packets must be able to reach back to the IRP server and can be any form of reaction from
the outside world to the above-mentioned packets: