
RFP

for
Setup of
Next Gen. Data Center Network
in
NDCSP
Purpose of RFP?
◦ To procure new switches to connect servers with 10G/25G
support, and simultaneously upgrade the backbone of the
network infrastructure in NDCSP.

◦ To build network infrastructure with next-gen technologies
capable of resolving existing issues:
A. Limited agility in network design.
i. Restricted placement of services (FW, LB, WAF, NAS, Backup) in
the network path.
ii. Limited traffic re-routing/redirection/bypass capabilities.
iii. MPLS/VPN connectivity.
iv. Multiple related issues: traffic congestion at the load balancer,
inter-tenant access control, change management, dynamic routing,
design limitations, NAS access, backup traffic, etc.
B. Physical restrictions on workload placement.
C. Monitoring, management, automation and integration of
network infrastructure.
Purpose of RFP?
◦ Hire a few experts to oversee the design, process,
management, automation and integration of the overall
network infrastructure.
Technology/Methodology used
 Layer-3 multi-stage CLOS topology: a well-proven and widely recommended way
of building today's cloud-scale data centers.

 Distributed service methodology for better scale: all Layer-2 and Layer-3
services will be provided through leaf switches only. Spines remain dumb
and provide only transport service. This is analogous to the MPLS P-PE design.

 VXLAN: an overlay/tunneling mechanism that runs over the Layer-3 CLOS
network. Analogous to MPLS.

 EVPN: a BGP extension that complements VXLAN to efficiently support Layer-2
and Layer-3 services. Analogous to BGP VPNv4.

 Centralized management and automation through a fabric controller: it will
use standards-based mechanisms like REST, NETCONF and SNMP to monitor,
manage and automate the network.

 Use of open standards: VXLAN, EVPN, open APIs, NETCONF, telemetry, etc.,
to avoid vendor lock-in and for better interoperability and agility.
Physical topology
Physical topology Highlights
 Multi-stage CLOS topology (hierarchical leaf-spine).
 A pair of core switches for north-south connectivity
and services.
 Each DC hall has 2x super-spines and multiple
PODs.
 Each POD contains 2x spines and can have up to
50x leaf switches connected in a leaf-spine topology.
 Dedicated inter-hall connectivity through
super-spines @ 3200 Gbps.
 Inter-POD bandwidth @ 1600 Gbps.
 All inter-switch links are multiples of 100 Gbps.
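The bandwidth figures above are consistent with bundles of 100 Gbps links. A quick sanity check is sketched below; note that the specific link counts (32 and 16) are inferred from the stated totals, not given in the RFP:

```python
LINK_GBPS = 100  # all inter-switch links are multiples of 100 Gbps

def bundle_bandwidth(num_links: int, link_gbps: int = LINK_GBPS) -> int:
    """Aggregate bandwidth of a bundle of equal-speed links, in Gbps."""
    return num_links * link_gbps

# Inferred link counts that reproduce the RFP's stated figures:
inter_hall = bundle_bandwidth(32)  # 32 x 100G = 3200 Gbps inter-hall
inter_pod = bundle_bandwidth(16)   # 16 x 100G = 1600 Gbps inter-POD
print(inter_hall, inter_pod)       # 3200 1600
```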
Network Management View
Orchestrator
 North-bound APIs: for integration with the orchestrator,
management by admins, or automation tasks.
Fabric Controller
 South-bound APIs: for monitoring, management
and automation.
Switches (fabric devices managed by the controller)

RFP highlights
 Empanelment based: procure as you require.
 Empanelment duration: 2 + 1 for devices.
 Warranty and O&M support for 5 years.
 No restriction on the number of bids by a single OEM.
 Number of expert-level manpower: 2.
 Includes one-time implementation services.
 Includes redundant inter-row overhead tray
installation work.
RFP highlights
 All devices must be from a single OEM.
 Device types:
◦ 1. Fabric controller
◦ 2. Leaf switch type-1 and type-2
◦ 3. Spine switch
◦ 4. Super-spine switch
◦ 5. Core switch
◦ 6. 12 types of transceivers and cables
 Also includes fabric specs, i.e. the expected
functionality of the network fabric as a whole.
Suggested logical Design
[Diagram: suggested logical design of services and VRFs]
 Services: DDoS, flow generator, WAF, LB, VPN, network
monitoring, FW+IPS, NAS, backup, object storage
 VRFs: Usr_Ext_Pri_VRF, Shrd_Ext_VRF, Usr_int_VRF,
Shrd_Srv_VRF, with route leaking (Route_Leak) between them
Missing pieces
 Requirement of an orchestrator for ease of management.
 VRF-aware physical firewall OR virtual firewall per
tenant with built-in IPS: this is specifically for a self-
managed, automated model, reducing firewall rule
management complexity.
 Automation capability on service appliances like
firewall, LB and WAF. No need for virtual appliances
for LB and WAF functions in this topology.
 A few requirements related to traffic forwarding from
service appliances: 1) one-arm mode support
with NAT; 2) feature compatibility with the P2P connect
model instead of the LAN connect model; 3) dynamic
routing preferred.
Missing pieces
 East-west access control.
◦ On switches
 Only zone-based basic access control
 Scalability issues with a large number of customers in the cloud
 No micro-segmentation
◦ On the virtual switch
 e.g. NSX distributed firewall
◦ On the host itself with a centralized manager
All the above models provide distributed firewalling, which
offers high throughput and is essential for this design.
All the above models are highly dependent on this network
design, which provides the first level of security between
tenants through VRF segregation.
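The two security levels described above can be sketched as a simple policy check. This is an illustration, not any product's API: VRF segregation is the coarse first level, and micro-segmentation rules (as in a distributed firewall) are the fine-grained second level. The VRF names come from the logical design slide; the app tiers and rule set are hypothetical.

```python
# Illustrative two-level access check: VRF segregation first, then
# per-workload micro-segmentation rules within a VRF.
def allowed(src, dst, rules):
    """src/dst are dicts like {'vrf': 'Usr_int_VRF', 'app': 'web'}."""
    # Level 1: VRF segregation. Different VRFs never talk unless a
    # route leak explicitly connects them (not modeled here).
    if src["vrf"] != dst["vrf"]:
        return False
    # Level 2: micro-segmentation. Only explicitly allowed app pairs pass.
    return (src["app"], dst["app"]) in rules

rules = {("web", "db")}  # hypothetical policy: web tier may reach db tier
web = {"vrf": "Usr_int_VRF", "app": "web"}
db = {"vrf": "Usr_int_VRF", "app": "db"}
ext = {"vrf": "Shrd_Ext_VRF", "app": "db"}
print(allowed(web, db, rules), allowed(db, web, rules), allowed(web, ext, rules))
# True False False
```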
