Servers in the Data Center

I use Linux and open-source tools for most of my testing needs. I installed Ubuntu 14.04.3 LTS with virtualization support because I was comfortable with its package management options.

Here are the servers that I installed on my system. Contrary to prevalent opinion, I found the documentation for installing and setting up these services straightforward and fairly easy to follow.

Authentication, authorization, and accounting (AAA) – I use both TACACS+ and FreeRADIUS for AAA; open-source options for both are available.

FreeRadius – http://www.ubuntugeek.com/install-freeradius-on-ubuntu-15-04-server-and-manage-using-daloradius-freeradius-web-management-application.html

Tacacs – http://www.routingloops.co.uk/cisco/tacacs-on-ubuntu-14-04-lts/
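As a quick sanity check of the RADIUS side, a short Python probe can send an Access-Request straight to the FreeRADIUS box. This is only a sketch: the server address, shared secret, and test credentials are placeholders, and it assumes the pyrad package plus a local copy of a standard RADIUS dictionary file.

    # Minimal RADIUS Access-Request probe against the lab FreeRADIUS server.
    # The address, secret and credentials below are placeholders, not lab values.
    from pyrad.client import Client
    from pyrad.dictionary import Dictionary
    import pyrad.packet

    # Assumes a standard RADIUS "dictionary" file is present locally.
    srv = Client(server="192.168.1.10", secret=b"testing123",
                 dict=Dictionary("dictionary"))

    req = srv.CreateAuthPacket(code=pyrad.packet.AccessRequest,
                               User_Name="labuser")
    req["User-Password"] = req.PwCrypt("labpassword")

    reply = srv.SendPacket(req)
    if reply.code == pyrad.packet.AccessAccept:
        print("FreeRADIUS accepted the test user")
    else:
        print("FreeRADIUS rejected the test user (code %s)" % reply.code)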

Domain Name System (DNS) /Bind – https://help.ubuntu.com/lts/serverguide/dns.html
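To confirm that the Bind server is answering for the lab zone, a small dnspython query can be pointed directly at it instead of the system resolver. The nameserver address and hostname below are placeholders, and dnspython is assumed to be installed.

    # Query the lab Bind server directly rather than the system resolver.
    # The nameserver address and hostname are placeholders for my lab values.
    import dns.resolver

    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = ["192.168.1.20"]

    # resolve() is the dnspython 2.x call; older releases use query().
    answer = resolver.resolve("dc-edge-101.lab.local", "A")
    for record in answer:
        print("dc-edge-101.lab.local ->", record.address)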

sFlow (ntopng) – http://idroot.net/tutorials/how-to-install-ntopng-on-ubuntu-14-04/

Syslog Server (rsyslogd) – https://community.spiceworks.com/how_to/65683-configure-ubuntu-server-12-04-lts-as-a-syslog-server
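Once rsyslog is listening, a one-off test message from Python's standard logging module is an easy way to verify that UDP 514 on the collector is reachable; the collector address below is a placeholder.

    # Send a test message to the lab rsyslog collector over UDP port 514.
    # The collector address is a placeholder for my lab syslog server.
    import logging
    from logging.handlers import SysLogHandler

    logger = logging.getLogger("lab-test")
    logger.setLevel(logging.INFO)
    logger.addHandler(SysLogHandler(address=("192.168.1.30", 514)))

    logger.info("test message from the lab scripting host")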

Network Time Protocol Server (NTP) – http://ubuntuforums.org/showthread.php?t=862620
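A quick way to check that switches and servers can sync against the NTP server is to query it and look at the reported clock offset. This sketch assumes the ntplib package and uses a placeholder server address.

    # Query the lab NTP server and print the clock offset it reports.
    # The server address is a placeholder; ntplib is assumed to be installed.
    import ntplib

    client = ntplib.NTPClient()
    response = client.request("192.168.1.40", version=3)
    print("offset from NTP server: %.6f seconds" % response.offset)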

NFS Server – https://help.ubuntu.com/community/SettingUpNFSHowTo & https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nfs-mount-on-ubuntu-14-04

DHCP Server – The OmniSwitch series of switches has DHCP server support.

In addition, I have other applications running:

  • Python with Paramiko for scripting support (a short example follows this list)
  • GitHub/Golang
  • OpenSSH
  • TFTP
  • <To be updated as I remember>
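As a rough sketch of how Paramiko gets used here, the snippet below logs in to one of the leaf switches and collects the output of a show command. The management address, credentials, and the CLI command are placeholders, not the real lab values.

    # Collect output from a leaf switch over SSH using Paramiko.
    # The address, credentials and CLI command below are placeholders.
    import paramiko

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect("192.168.1.101", username="admin", password="switch")

    # Hypothetical show command; substitute the real AOS CLI command as needed.
    stdin, stdout, stderr = client.exec_command("show system")
    print(stdout.read().decode())

    client.close()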

Data Center Topology

MyTinyDataCenter (topology diagram)

The testbed is designed to simulate commonly deployed data center topologies.

The testbed is based on a Leaf & Spine design, with OS6900s and OS6860s acting as Leaf switches.

The Leaf switches are named DC-EDGE-101 through DC-EDGE-1xx. Each Leaf switch acts as the default gateway for the subnets/VLANs present on it, and those subnets are isolated to that particular Leaf switch.

However, some L2 domains, such as the FCoE VLANs and a couple of other VLANs used to test L2 multicast and broadcast, stretch across the entire network.

In general, it is good networking practice to isolate L2 broadcast domains to a single switch and to use newer encapsulation technologies such as VxLAN/SPB to support the expansion of tenant/customer domains. This also encourages us to think about and implement networks via the service model (think VxLAN VNID or SPB I-SID rather than VLAN/subnet).
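To make the service-model point concrete, here is a toy illustration of the mental shift: tenants are keyed by a service identifier (a VxLAN VNID or an SPB I-SID) rather than by a VLAN/subnet pair. The tenant names and ID values are made up purely for illustration.

    # Toy illustration of service-model thinking: tenants map to service IDs
    # (VxLAN VNIDs or SPB I-SIDs) instead of being tied to VLAN/subnet pairs.
    # All names and numbers below are made up for illustration.
    tenants = {
        "tenant-a": {"vnid": 10001, "isid": 20001},
        "tenant-b": {"vnid": 10002, "isid": 20002},
    }

    def service_id(tenant, encapsulation="vxlan"):
        """Return the service identifier used to carry a tenant across the fabric."""
        key = "vnid" if encapsulation == "vxlan" else "isid"
        return tenants[tenant][key]

    print(service_id("tenant-a"))          # VxLAN VNID
    print(service_id("tenant-b", "spb"))   # SPB I-SID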

Servers, iSCSI storage devices, and Fibre Channel switches are connected to the Leaf switches. Fibre Channel storage is connected to the FC switch. Remember, it is not a good idea to connect servers directly to the Core switch.

Each Leaf switch is connected via LACP link aggregation or fixed-port uplinks to the Spine switches DC-CORE-01 (an OS10K virtual chassis) and DC-CORE-02 (a virtual chassis of OS6900-X72 + X40).

These connections are L3 point-to-point links and can run OSPF, BGP, or IS-IS. In my testbed, all three adjacencies are present; it is only a matter of changing the route-redistribution scheme on each Leaf switch to carry its subnets across the core via a particular protocol. By default, route redistribution is done via OSPF. Ask Dr. Google about data center fabrics built with BGP (Microsoft, Yahoo, etc.).

Since all three routing protocols support ECMP, traffic flows across all the Spine switches and remains fully redundant in case of a failure. Currently, the OS10K virtual chassis (DC-CORE-01) is configured as the rendezvous point for all PIM sparse-mode groups, and PIM-BIDIR is used to create the distribution trees for VxLAN.

In addition to routed traffic, there is also L2 traffic (FCoE, L2 broadcast test traffic, and L2 multicast). For those L2 VLANs that need to be propagated across multiple Leaf switches, the OS6900 VC (DC-CORE-02) is configured as the spanning-tree root bridge, and the OS10K (DC-CORE-01) is configured as the backup root bridge.

Work in progress (dual stacking) – Currently the data center is predominantly IPv4. An IPv6 network has also been created across the same physical topology, and the IPv6 addressing scheme closely follows the IPv4 scheme.
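One way to keep the two schemes aligned is to derive each IPv6 prefix mechanically from its IPv4 counterpart. The sketch below reuses the IPv4 third octet as the subnet ID under a lab ULA prefix; the prefixes themselves are placeholders, not the real lab plan.

    # Derive a parallel IPv6 prefix from each IPv4 leaf subnet so the two
    # addressing schemes line up. The prefixes are placeholders, not the
    # actual lab plan.
    import ipaddress

    ipv4_leaf_subnets = ["10.1.101.0/24", "10.1.102.0/24", "10.1.103.0/24"]

    for subnet in ipv4_leaf_subnets:
        net = ipaddress.ip_network(subnet)
        third_octet = int(str(net.network_address).split(".")[2])
        # Reuse the IPv4 third octet as the IPv6 subnet ID under a lab ULA prefix.
        ipv6_prefix = ipaddress.ip_network("fd00:1:%x::/64" % third_octet)
        print("%-16s -> %s" % (subnet, ipv6_prefix))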

A note about Redundancy

There are multiple levels of redundancy built in. Link aggregation with hashing ensures that traffic takes one of many member ports, and ECMP ensures that traffic takes one of many equal-cost paths, with sub-second convergence on failure.
The connections reflect test requirements; real data centers might not have both L2 and L3 redundancy, because the costs can scale up quite quickly.
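To illustrate how hashing spreads traffic across links or paths, here is a simplified sketch: a flow's 5-tuple is hashed and the result is taken modulo the number of available uplinks, so packets of one flow always take the same member while different flows spread out. Real switch ASICs use their own hash functions; this only shows the idea, and the port names are made up.

    # Simplified picture of LAG/ECMP hashing: hash the flow's 5-tuple and pick
    # one of N uplinks. Real ASIC hash functions differ; this shows the idea.
    import hashlib

    uplinks = ["spine-1 port 1/1", "spine-1 port 1/2",
               "spine-2 port 1/1", "spine-2 port 1/2"]

    def pick_uplink(src_ip, dst_ip, proto, src_port, dst_port):
        """Map a flow's 5-tuple onto one of the available uplinks."""
        key = "%s-%s-%s-%s-%s" % (src_ip, dst_ip, proto, src_port, dst_port)
        digest = hashlib.md5(key.encode()).digest()
        return uplinks[int.from_bytes(digest[:4], "big") % len(uplinks)]

    # Packets of the same flow always hash to the same uplink; different flows spread out.
    print(pick_uplink("10.1.101.10", "10.1.102.20", "tcp", 40000, 80))
    print(pick_uplink("10.1.101.11", "10.1.102.20", "tcp", 40001, 80))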

 

Resources:

http://bradhedlund.com/2012/01/25/construct-a-leaf-spine-design-with-40g-or-10g-an-observation-in-scaling-the-fabric/

https://www.nanog.org/meetings/nanog55/presentations/Monday/Lapukhov.pdf

https://en.wikipedia.org/wiki/Multitenancy

Intro

This blog outlines and describes some of the interesting work that I often get to do while implementing data center solutions in the lab.

The data center products that I work on are the OmniSwitch series of chassis-based and top-of-rack switches from Alcatel-Lucent Enterprise.

This blog is merely a collection of the notes that I made over the test and development life cycle to help clarify concepts. They ended up being useful to me and something that I could refer back to over time. In the same spirit, I hope some of it is useful for those using the OmniSwitch product line.

The features and solutions that I am responsible for and/or contribute to are:

  • High-availability data center solutions based on L3 Leaf & Spine ECMP designs – mainly
    – VxLAN overlay networks (implementation of hardware VxLAN Tunnel Endpoints, Virtual Machine Snooping & VM-aware QoS solutions)
    – SPB 802.1aq multipath networks.
  • Converged Storage Area Network solutions – mainly
    – FCoE Storage Area Networking and supporting protocols [N-Port ID Virtualization, E-Port Tunnelling over Ethernet, FIP Snooping & other features].
    – iSCSI & NFS (interop with third-party vendors).
  • Data Center Bridging protocols – mainly
    – Priority Flow Control (IEEE 802.1Qbb)
    – Enhanced Transmission Selection (IEEE 802.1Qaz)
    – Quantized Congestion Notification (IEEE 802.1Qau)
    – Data Center Bridging Exchange Protocol (DCBx)
  • Shared Memory and Virtual Output Queuing systems [1G, 10G and 40G line cards].
  • Virtual Network Profiles.

Needless to say, the thoughts & opinions that are in this blog are mine alone.