A Layer 3 Underlay Architecture for VXLAN

L3 Routing Underlay

The benefits of restricting L2 broadcast domains to the top-of-rack (ToR) switches are well known. ("Route when you can, switch when you must.")
In a leaf/spine network, routing can be extended all the way down to the leaf (ToR) switches, with point-to-point routed links to each of the spines. Traffic is hashed across the uplinks via ECMP on a per-flow basis.
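Per-flow ECMP can be illustrated with a short sketch: hash the flow's 5-tuple and use the result to pick an uplink. The hash function and field choice here are assumptions for illustration; real switches use vendor-specific hardware hash algorithms.

```python
import hashlib

def ecmp_uplink(src_ip, dst_ip, proto, src_port, dst_port, num_uplinks):
    """Pick an uplink for a flow by hashing its 5-tuple.

    Every packet of a given flow hashes to the same value, so the flow
    always takes the same spine and packets are never reordered;
    different flows spread across all uplinks.
    """
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    digest = hashlib.md5(key).digest()
    return int.from_bytes(digest[:4], "big") % num_uplinks

# The same flow is always sticky to one spine:
path = ecmp_uplink("10.0.1.5", "10.0.2.9", 6, 49152, 443, 4)
assert path == ecmp_uplink("10.0.1.5", "10.0.2.9", 6, 49152, 443, 4)
```

Because load balancing is per-flow rather than per-packet, a single elephant flow can never be split across spines; that is the usual trade-off accepted to avoid reordering.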
There are some interesting demonstrations of deploying BGP in the data center as the IGP. Of course, OSPF and IS-IS can be used as well:

https://tools.ietf.org/html/draft-ietf-rtgwg-bgp-routing-large-dc-10

In the test setup that I use, VLANs are restricted to the top-of-rack switch. The ToR acts as the default gateway for the servers connected to it, so there is no VRRP or additional spanning-tree configuration.
These local VLANs are redistributed to the spine (core) via L3 routing adjacencies.
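The resulting addressing model can be sketched as follows: each rack gets its own subnet, the ToR owns the gateway address, and only the summary prefix is advertised into the underlay. The /16 base block and /24-per-rack carving are assumptions purely for illustration.

```python
import ipaddress

def rack_plan(rack_id, base=ipaddress.ip_network("10.128.0.0/16")):
    """Carve a /24 per rack out of an (assumed) base block.

    The ToR takes .1 as the servers' default gateway and redistributes
    the /24 toward the spines over its L3 adjacencies, so the core only
    ever sees routed prefixes, never the rack-local VLANs.
    """
    subnet = list(base.subnets(new_prefix=24))[rack_id]
    gateway = subnet.network_address + 1
    return {"advertise": str(subnet), "gateway": str(gateway)}

print(rack_plan(3))  # rack 3's prefix and its ToR gateway address
```

Since the gateway lives on the ToR itself, a ToR failure takes down only its own rack, which is what makes first-hop redundancy protocols like VRRP unnecessary here.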

Most overlay technologies (SPB, TRILL, VXLAN) have to solve the flooding-domain problem: how to create an E-LAN or distribution tree so that BUM (broadcast, unknown-unicast, multicast) traffic is flooded efficiently and only to those endpoints that need it.

Since VXLAN does not have a control plane (unlike SPB/TRILL, which use IS-IS), PIM-BIDIR is used; it is the most efficient fit for the (*,G) joins that gateways need to reflect dynamic VM presence.
There is another mode of running VXLAN, called head-end replication, which does not need a PIM deployment.
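The two flooding modes can be contrasted in a small sketch: in multicast mode the VTEP sends one copy toward a shared (*,G) tree derived from the VNI, while in head-end replication mode it unicasts one copy to each remote VTEP in the VNI. The VNI-to-group mapping and the 239.1.0.0 base address are assumptions; any admin-scoped multicast range and mapping scheme could be used.

```python
import ipaddress

def vni_to_group(vni, base="239.1.0.0"):
    """Illustrative mapping of a 24-bit VNI onto a multicast group
    (base prefix is an assumption, not a standard)."""
    return str(ipaddress.IPv4Address(base) + (vni & 0xFFFFFF))

def flood_bum(frame, vni, remote_vteps, use_multicast=False):
    """Return the (destination, payload) copies a VTEP sends for one BUM frame."""
    if use_multicast:
        # PIM-BIDIR mode: a single copy onto the shared (*,G) tree for this VNI;
        # the network replicates it only toward VTEPs that joined the group.
        return [(vni_to_group(vni), frame)]
    # Head-end replication: the ingress VTEP itself makes one unicast
    # copy per remote VTEP in the VNI -- no PIM in the underlay.
    return [(vtep, frame) for vtep in remote_vteps]

vteps = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
print(flood_bum(b"arp-request", 5000, vteps))        # three unicast copies
print(flood_bum(b"arp-request", 5000, vteps, True))  # one multicast copy
```

The trade-off is visible in the copy counts: head-end replication burns ingress bandwidth proportional to the number of remote VTEPs, while the multicast mode pushes that replication work into the underlay at the cost of running PIM-BIDIR.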

In future posts, I will demonstrate how the OmniSwitch OS6900 series of switches acts as a VXLAN gateway (hardware VXLAN tunnel endpoint, or VTEP).

These tunnel endpoints/VXLAN gateways are configured at the edges (leaf switches). The core does not need to support any new features beyond Layer 3 routing and, optionally, PIM-BIDIR, since it performs no tunnel termination or encapsulation.

The Use Cases I will demonstrate are:

a) Supporting multi-tenancy with VXLAN gateway configuration on the AOS switches
b) Interop with VMware ESXi 5.5 (over a converged network)
c) Interop with an IP-based storage (iSCSI/NFS) solution
d) Interop with software VXLAN tunnel endpoints [Open vSwitch (KVM/QEMU)]
e) Applying quality-of-service guarantees to a virtualized environment [including VM snooping and differentiated treatment based on tenant/VNID at the core]
f) User Network Profiles to apply network access control to devices based on matching characteristics

Stay Tuned.
