Servers in the Data Center

I use Linux and open-source software for most of my testing needs. I installed Ubuntu 14.04.3 LTS with virtualization support because I was comfortable with its package-management options.

Here are the servers that I installed on my system. Contrary to prevalent opinion, I found the documentation for installing and setting up these services straightforward and easy to follow.

Authentication, Authorization, and Accounting (AAA) – I use both TACACS+ and FreeRADIUS for AAA; open-source options for both are available.

FreeRADIUS –

TACACS+ –

Domain Name System (DNS) / BIND –

sFlow – ntopng

Syslog Server (rsyslogd) –

Network Time Protocol Server (NTP) –

NFS Server –

DHCP Server – The OmniSwitch series of switches has built-in DHCP server support.
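For the syslog server above, rsyslogd only needs to be told to listen on the network. A minimal receiver sketch follows; the file name, port, and the 10.x management subnet are assumptions, not my actual lab values:

```
# /etc/rsyslog.d/30-switches.conf -- sketch; adjust port and paths to taste
module(load="imudp")               # accept syslog over UDP
input(type="imudp" port="514")

# write anything arriving from the lab management subnet to its own file
if $fromhost-ip startswith '10.' then /var/log/omniswitch.log
& stop
```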

In addition, I have other applications running:
Python with Paramiko for scripting support
<To be updated as I remember>
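Since Python with Paramiko does the scripting heavy lifting, here is a sketch of the kind of helper involved. The host name, credentials, and command are placeholders, not my actual lab values:

```python
def clean_output(raw):
    """Drop blank lines and trailing whitespace from raw CLI output."""
    return [line.rstrip() for line in raw.splitlines() if line.strip()]

def run_show(host, user, password, command="show system"):
    # Imported here so clean_output() stays usable without paramiko installed.
    import paramiko

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username=user, password=password, timeout=10)
    try:
        _stdin, stdout, _stderr = client.exec_command(command)
        return clean_output(stdout.read().decode())
    finally:
        client.close()

# Usage (hypothetical switch and credentials):
#   for line in run_show("dc-edge-101.lab", "admin", "switch"):
#       print(line)
```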


Data Center Topology

My Tiny Data Center

The testbed is designed to simulate commonly deployed Data Center topologies.

The testbed is based on a Leaf & Spine design, with OS6900s and OS6860s acting as Leaf switches.

The Leaf switches are named DC-EDGE-101 through DC-EDGE-1xx. They act as the default gateway for the subnets/VLANs present on them, and those subnets are isolated to their particular Leaf switch.

However, some L2 domains, such as the FCoE VLANs and a couple of other VLANs used to test L2 multicast and broadcast, stretch across the entire network.

In general, it is good networking practice to isolate L2 broadcast domains to a single switch and to use newer encapsulation technologies such as VXLAN/SPB to support expansion of tenant/customer domains. This also encourages us to think about and implement networks via the service model (think VXLAN VNID or SPB I-SID rather than VLAN/subnet).
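Part of why the service model scales is simple identifier-space arithmetic, which a couple of lines make concrete:

```python
# Identifier-space arithmetic behind the service model:
VLAN_BITS = 12   # 802.1Q VLAN ID field
VNI_BITS = 24    # VXLAN Network Identifier (VNID)
ISID_BITS = 24   # SPB 802.1ah service instance ID (I-SID)

print(2 ** VLAN_BITS)  # 4096 VLANs in a bridging domain
print(2 ** VNI_BITS)   # 16777216 possible VXLAN/SPB services
```

Roughly 16 million service instances per fabric versus 4096 VLANs per bridging domain.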

Servers, iSCSI storage devices, and Fibre Channel switches are connected to the Leaf switches; Fibre Channel storage is connected to the FC switch. Remember, it is not a good idea to connect servers directly to the Core switches.

Each Leaf switch is connected via LACP link aggregation or fixed-port uplinks to the Spine switches DC-CORE-01 (an OS10K virtual chassis) and DC-CORE-02 (an OS6900-X72 + X40 virtual chassis).

These connections are L3 point-to-point and can run OSPF, BGP, or IS-IS. In my testbed all three adjacencies are present; it is only a matter of changing the route-redistribution scheme on each Leaf switch to route its subnets across the core via a particular protocol. By default, redistribution is done via OSPF. Ask Dr. Google about Data Center networks built on BGP (Microsoft, Yahoo, etc.).

Since all three routing protocols support ECMP, traffic flows across all the Spine switches and remains fully redundant in case of a failure. Currently, the OS10K virtual chassis (DC-CORE-01) is configured as the rendezvous point for all PIM sparse-mode groups, and PIM-BiDir is used to create the distribution trees for VXLAN.
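The ECMP behavior can be sketched as a hash of the flow 5-tuple modulo the number of equal-cost next hops. This is a toy model only; real switch ASICs use their own hardware hash, not SHA-256:

```python
import hashlib

def pick_next_hop(src_ip, dst_ip, src_port, dst_port, proto, next_hops):
    """Toy ECMP path selection: hash the flow 5-tuple so every packet
    of a given flow takes the same equal-cost path, while different
    flows spread across the available paths."""
    key = "|".join(map(str, (src_ip, dst_ip, src_port, dst_port, proto)))
    digest = hashlib.sha256(key.encode()).digest()
    return next_hops[int.from_bytes(digest[:4], "big") % len(next_hops)]

spines = ["DC-CORE-01", "DC-CORE-02"]
# A given flow always lands on the same spine; reordering is avoided.
hop = pick_next_hop("10.1.101.5", "10.1.102.7", 49152, 80, "tcp", spines)
```

The per-flow stickiness is the important property: packets of one TCP session never straddle two paths, so they cannot arrive out of order.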

In addition to routed traffic, there is also L2 traffic (FCoE, L2 broadcast test traffic, and L2 multicast). For the L2 VLANs that need to be propagated across multiple Leaf switches, the OS6900 VC (DC-CORE-02) is configured as the spanning-tree root bridge, with the OS10K (DC-CORE-01) as the backup root bridge.

Work in progress (dual stacking) – Currently the Data Center is predominantly IPv4. An IPv6 network is also being created across the same physical topology; the IPv6 addressing scheme closely follows the IPv4 one.
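One way such a parallel scheme can work is to reuse the digits of the IPv4 third octet as the IPv6 subnet field, so both stacks read the same on paper. A sketch using Python's stdlib `ipaddress` module; the `2001:db8::` prefix and the mapping itself are illustrative assumptions, not my actual plan:

```python
import ipaddress

def v6_prefix_for(v4_subnet):
    """Hypothetical dual-stack mapping: carry the IPv4 third octet's
    digits into the IPv6 subnet field, so 10.1.101.0/24 pairs with
    2001:db8:0:101::/64 on paper."""
    net = ipaddress.ip_network(v4_subnet)
    octet = str(net.network_address).split(".")[2]  # e.g. "101"
    return ipaddress.ip_network(f"2001:db8:0:{octet}::/64")

print(v6_prefix_for("10.1.101.0/24"))  # 2001:db8:0:101::/64
```

Keeping the two schemes visually aligned makes troubleshooting a dual-stacked leaf much less error-prone.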

A note about Redundancy

There are multiple levels of redundancy built in. Link aggregation with hashing ensures that traffic takes one of many ports, and ECMP ensures that traffic takes one of many equal-cost paths with sub-second convergence.
The connections reflect test requirements; real Data Centers might not have both L2 and L3 redundancy, because the costs can scale up quickly.


Resources:


This blog outlines and describes some of the interesting work that I often get to do while implementing Data Center solutions in the lab.

The Data Center products that I work on are the OmniSwitch series of chassis-based and top-of-rack switches from Alcatel-Lucent Enterprise.

This blog is merely a collection of the notes I made over the test-and-development life cycle to help clarify concepts. They ended up being useful to me and something I could refer back to over time. In the same spirit, I do hope some of it is useful for those using the OmniSwitch product line.

The features and solutions that I am responsible for and/or contribute to are:

  • High-availability Data Center solutions based on L3 Leaf & Spine ECMP designs – mainly
    – VXLAN overlay networks (implementation of hardware VXLAN Tunnel Endpoints, virtual-machine snooping, and VM-aware QoS solutions)
    – SPB 802.1aq multipath networks.
  • Converged Storage Area Network solutions – mainly
    – FCoE Storage Area Networking and supporting protocols (N-Port ID Virtualization, E-Port tunnelling over Ethernet, FIP snooping, and other features).
    – iSCSI & NFS (interop with third-party vendors).
  • Data Center Bridging protocols – mainly
    – Priority-based Flow Control (IEEE 802.1Qbb)
    – Enhanced Transmission Selection (IEEE 802.1Qaz)
    – Quantized Congestion Notification (IEEE 802.1Qau)
    – Data Center Bridging Exchange Protocol (DCBX)
  • Shared-memory and virtual output queuing systems (1G, 10G, and 40G line cards).
  • Virtual Network Profiles.

Needless to say, the thoughts and opinions in this blog are mine alone.