Data Center Topology

The MyTinyDataCenter testbed is designed to simulate commonly deployed data center topologies.

The testbed is based on a Leaf & Spine design, with OS6900s and OS6860s acting as Leaf switches.

The Leaf switches are named DC-EDGE-101 through DC-EDGE-1xx. Each Leaf switch acts as the default gateway for the subnets/VLANs present on it, and those subnets are isolated to that particular Leaf switch.
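To make the isolation concrete, here is a small Python sketch of how a per-leaf addressing plan could be derived. The 10.&lt;leaf&gt;.0.0/16 blocks and the VLAN IDs are my illustration, not the testbed's actual plan:

```python
import ipaddress

def leaf_subnets(leaf_id, vlan_count=4):
    """Derive the local subnets for a hypothetical DC-EDGE-1xx leaf.

    Assumed scheme: leaf N owns 10.N.0.0/16 and carves one /24 per
    local VLAN. Each subnet's default gateway lives on that leaf
    only; no other switch routes for it.
    """
    block = ipaddress.ip_network(f"10.{leaf_id}.0.0/16")
    subnets = list(block.subnets(new_prefix=24))[:vlan_count]
    return {100 + i: net for i, net in enumerate(subnets)}

for leaf in (101, 102):
    print(f"DC-EDGE-{leaf}:", leaf_subnets(leaf))
```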

However, some L2 domains, such as the FCoE VLANs and a couple of other VLANs used to test L2 multicast and broadcast, stretch across the entire network.

In general, it is good networking practice to isolate L2 broadcast domains to a single switch and use newer encapsulation technologies such as VXLAN/SPB to support expansion of tenant/customer domains. This also encourages us to think about and implement networks via the service model (think VXLAN VNID or SPB I-SID rather than VLAN/subnet).
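A minimal sketch of the service-model idea: the fabric identifies a tenant segment by a fabric-wide service ID, while the VLAN is only locally significant at the edge. The VNID/I-SID values below are made-up examples, not values from the testbed:

```python
# Hypothetical tenant-to-service mapping. Two tenants can even reuse
# the same edge VLAN, because the 24-bit VXLAN VNID (or SPB I-SID)
# is what disambiguates them across the fabric.
TENANTS = {
    "tenant-a-web": {"edge_vlan": 10, "vxlan_vnid": 10010, "spb_isid": 20010},
    "tenant-b-db":  {"edge_vlan": 10, "vxlan_vnid": 10020, "spb_isid": 20020},
}

def service_id(tenant, encap="vxlan"):
    """Return the fabric-wide ID for a tenant segment."""
    key = "vxlan_vnid" if encap == "vxlan" else "spb_isid"
    return TENANTS[tenant][key]

# Same edge VLAN, distinct services across the fabric.
assert service_id("tenant-a-web") != service_id("tenant-b-db")
print(service_id("tenant-a-web"), service_id("tenant-b-db", encap="spb"))
```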

Servers, iSCSI storage devices, and Fibre Channel switches are connected to the Leaf switches. Fibre Channel storage is connected to the FC switch. Remember, it is not a good idea to connect servers directly to the Core switches.

Each Leaf switch is connected via LACP link aggregation or fixed-port uplinks to the Spine switches DC-CORE-01 (an OS10K virtual chassis) and DC-CORE-02 (an OS6900-X72 + OS6900-X40 virtual chassis).

These connections are L3 point-to-point and can run OSPF, BGP, or IS-IS. In my testbed, all three adjacencies are present; it is only a matter of changing the route-redistribution scheme on each Leaf switch to carry its subnets across the core via a particular protocol. By default, redistribution is done via OSPF. Refer to Dr. Google for data center networks built with BGP (Microsoft, Yahoo, etc.).
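Conceptually, the redistribution knob is just a selector: the same connected subnets exist either way, and the choice only decides which protocol advertises them toward the spines. A toy model of that selection, with illustrative names and prefixes:

```python
# Toy model of a leaf's route-redistribution choice. All three
# adjacencies are up, but only one protocol carries the leaf's
# local subnets across the core at a time.
LOCAL_SUBNETS = ["10.101.1.0/24", "10.101.2.0/24"]  # hypothetical

def advertised_routes(redistribute_into="ospf"):
    """Return {protocol: routes} as the spines would learn them."""
    table = {proto: [] for proto in ("ospf", "bgp", "isis")}
    table[redistribute_into] = list(LOCAL_SUBNETS)
    return table

print(advertised_routes())       # default: subnets ride OSPF
print(advertised_routes("bgp"))  # flip the scheme: subnets ride BGP
```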

Since all three routing protocols support ECMP, traffic flows across all the Spine switches and remains fully redundant in case of a failure. Currently, the OS10K virtual chassis (DC-CORE-01) is configured as the rendezvous point for all PIM sparse-mode groups, and PIM-BiDir is used to build the distribution trees for VXLAN.
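The gist of ECMP forwarding fits in a few lines: a hash over the flow's 5-tuple picks one of the equal-cost next hops, so packets of a given flow stay on one path (no reordering) while different flows spread across all spines. The hash below is a stand-in; real ASICs use their own vendor-specific hash inputs and functions:

```python
import hashlib

SPINES = ["DC-CORE-01", "DC-CORE-02"]  # equal-cost next hops from a leaf

def ecmp_next_hop(src_ip, dst_ip, proto, sport, dport, paths=SPINES):
    """Pick a next hop by hashing the 5-tuple: deterministic per flow,
    roughly uniform across flows. Illustrative only."""
    key = f"{src_ip}|{dst_ip}|{proto}|{sport}|{dport}".encode()
    digest = hashlib.sha256(key).digest()
    return paths[int.from_bytes(digest[:4], "big") % len(paths)]

print(ecmp_next_hop("10.101.0.5", "10.102.0.9", "tcp", 49152, 80))
```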

In addition to routed traffic, there is also L2 traffic (FCoE, L2 broadcast test traffic, and L2 multicast). For those L2 VLANs that need to be propagated across multiple Leaf switches, the OS6900 VC (DC-CORE-02) is configured as the spanning-tree root bridge, and the OS10K (DC-CORE-01) as the backup root bridge.
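Root-bridge selection boils down to the numerically lowest bridge ID (priority first, MAC as tiebreaker), so giving DC-CORE-02 the lowest priority and DC-CORE-01 the next lowest reproduces the root/backup arrangement. A sketch of that election, with made-up priorities and MACs:

```python
# STP root election: lowest (priority, MAC) wins. Values below are
# invented to mirror the intent: DC-CORE-02 root, DC-CORE-01 backup.
BRIDGES = {
    "DC-CORE-02":  (4096,  "00:e0:b1:00:00:02"),
    "DC-CORE-01":  (8192,  "00:e0:b1:00:00:01"),
    "DC-EDGE-101": (32768, "00:e0:b1:00:01:01"),
}

def stp_root(bridges):
    """Order bridges by bridge ID: index 0 is the root, index 1 is
    the bridge that takes over if the root fails."""
    return sorted(bridges, key=lambda b: bridges[b])

print(stp_root(BRIDGES))  # ['DC-CORE-02', 'DC-CORE-01', 'DC-EDGE-101']
```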

Work in Progress (Dual-Stacking): currently the data center is predominantly IPv4. An IPv6 network is also created across the same physical topology. The IPv6 addressing scheme closely follows the IPv4 addressing scheme.
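One common way to make an IPv6 plan "closely follow" an IPv4 one is to reuse the IPv4 octets inside the IPv6 prefix. The 2001:db8::/32 base below is the documentation prefix, used purely as a placeholder, and the mapping convention is my assumption:

```python
import ipaddress

def v6_from_v4(v4_subnet, base="2001:db8"):
    """Map a 10.N.M.0/24 IPv4 subnet onto a parallel IPv6 /64.

    Hypothetical convention: write the 2nd and 3rd IPv4 octets
    verbatim as IPv6 groups (decimal digits reused as hex), e.g.
    10.101.3.0/24 -> 2001:db8:101:3::/64, so the two plans line
    up visually switch-for-switch.
    """
    net = ipaddress.ip_network(v4_subnet)
    o = net.network_address.packed
    return ipaddress.ip_network(f"{base}:{o[1]}:{o[2]}::/64")

print(v6_from_v4("10.101.3.0/24"))  # -> 2001:db8:101:3::/64
```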

A note about Redundancy

There are multiple levels of redundancy built in. Link aggregation with hashing ensures that traffic takes one of many member ports, and ECMP ensures that traffic takes one of many equal-cost paths, with sub-second convergence on failure.
The connections reflect test requirements; real data centers might not have both L2 and L3 redundancy, because the costs can scale up quite quickly.
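The two layers of hashing compose: ECMP picks a spine, then LACP hashing picks a member port within the chosen aggregate. Extending the earlier ECMP sketch, with made-up port names and a per-stage salt (real gear similarly perturbs the hash per stage to avoid both levels always picking the same index):

```python
import hashlib

# Hypothetical topology: each spine is reached over a 2-port LACP aggregate.
LAG_MEMBERS = {
    "DC-CORE-01": ["1/1/1", "1/1/2"],
    "DC-CORE-02": ["1/2/1", "1/2/2"],
}

def pick(items, stage, *fields):
    """Hash flow fields (plus a per-stage salt) to choose one item."""
    key = "|".join(map(str, (stage,) + fields)).encode()
    digest = hashlib.sha256(key).digest()
    return items[int.from_bytes(digest[:4], "big") % len(items)]

def egress(src_ip, dst_ip, sport, dport):
    flow = (src_ip, dst_ip, sport, dport)
    # Level 1: ECMP chooses the equal-cost spine for this flow.
    spine = pick(sorted(LAG_MEMBERS), "ecmp", *flow)
    # Level 2: LACP hashing chooses a member port of that aggregate.
    port = pick(LAG_MEMBERS[spine], "lag", *flow)
    return spine, port

print(egress("10.101.0.5", "10.102.0.9", 49152, 443))
```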

 

Resources:

http://bradhedlund.com/2012/01/25/construct-a-leaf-spine-design-with-40g-or-10g-an-observation-in-scaling-the-fabric/

https://www.nanog.org/meetings/nanog55/presentations/Monday/Lapukhov.pdf

https://en.wikipedia.org/wiki/Multitenancy
