EVPN VXLAN Fundamentals: Building Modern Data Center Overlays with Arista and Cisco

Kunal Nagaria


Modern data centers demand networks that are flexible, scalable, and programmable. Traditional Layer 2 spanning tree designs simply cannot keep pace with the requirements of cloud-native applications, virtualization, and multi-tenant environments. Enter EVPN/VXLAN — a powerful combination that has become the de facto standard for building overlay networks in data centers across the industry, supported by leading vendors including Arista and Cisco.

This post breaks down the fundamentals of EVPN and VXLAN, explains how they work together, and walks through the key concepts you need to understand before deploying this technology in your environment.


What Is VXLAN?

[Figure: EVPN/VXLAN overlay fabric topology with interconnected leaf and spine switches]

VXLAN (Virtual Extensible LAN) is a network virtualization technology defined in RFC 7348. At its core, VXLAN solves a fundamental problem: how do you stretch Layer 2 networks across a Layer 3 routed infrastructure without the limitations of traditional VLANs?

Standard 802.1Q VLANs carry a 12-bit identifier, allowing at most 4,096 values (4,094 usable once the reserved values are excluded). In a large multi-tenant data center hosting hundreds of customers, each with their own isolated Layer 2 domains, this limit is reached quickly. VXLAN solves this by encapsulating Layer 2 Ethernet frames inside UDP/IP packets, effectively tunneling them across an IP fabric.

The VXLAN Header

A VXLAN-encapsulated packet includes:

  • Outer Ethernet header — for Layer 2 forwarding on the underlay
  • Outer IP header — identifies the source and destination VTEP
  • Outer UDP header — destination port 4789 (IANA assigned)
  • VXLAN header — contains the 24-bit VNI (VXLAN Network Identifier)
  • Original inner Ethernet frame — the tenant payload

The VNI (VXLAN Network Identifier) is the equivalent of a VLAN ID but offers a much larger namespace: 24 bits, or roughly 16.7 million unique segments. This is what makes VXLAN practical for multi-tenant cloud environments.
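For reference, the VXLAN header itself is only 8 bytes, with the VNI occupying 24 of those 64 bits (RFC 7348):

```
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|R|R|R|R|I|R|R|R|            Reserved                           |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                VXLAN Network Identifier (VNI) |   Reserved    |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
```

Together with the outer Ethernet, IP, and UDP headers, encapsulation adds roughly 50 bytes of overhead, which is why underlay links in VXLAN fabrics are typically configured with a larger MTU (often 9000+ bytes) to avoid fragmentation.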

VTEPs: The Tunnel Endpoints

A VTEP (VXLAN Tunnel Endpoint) is the device responsible for encapsulating and decapsulating VXLAN traffic. VTEPs can be:

  • Hardware VTEPs — physical switches like Arista 7050 series or Cisco Nexus 9000 series
  • Software VTEPs — hypervisors such as VMware ESXi or Linux with Open vSwitch

Each VTEP has an IP address on the underlay network. When a VM on one host wants to communicate with a VM on another host in the same VNI, the source VTEP encapsulates the frame and sends it to the destination VTEP IP address over the IP underlay.
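As a rough sketch, here is what a minimal hardware VTEP configuration can look like in Arista EOS-style syntax (interface names, addresses, and VNI numbers below are illustrative, and exact commands vary by platform and software version):

```
! Loopback used as the VTEP source address; must be reachable via the underlay
interface Loopback1
   ip address 10.0.0.11/32
!
! The logical VXLAN tunnel interface
interface Vxlan1
   vxlan source-interface Loopback1
   vxlan udp-port 4789
   ! Map locally significant VLAN 10 to fabric-wide VNI 10010
   vxlan vlan 10 vni 10010
```

The VLAN ID only needs to match between the local switch and its attached servers; the VNI is what identifies the segment fabric-wide.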


What Is EVPN?

VXLAN solves the encapsulation problem beautifully, but it leaves an important question unanswered: how do VTEPs learn about each other and about the MAC and IP addresses of endpoints in the overlay?

Early VXLAN deployments relied on data-plane flood-and-learn, delivering BUM (Broadcast, Unknown Unicast, Multicast) traffic either via underlay multicast or via head-end (ingress) replication. These approaches work but introduce scale limitations, operational complexity, and unnecessary traffic in the underlay.

EVPN (Ethernet VPN), defined in RFC 7432 and extended for VXLAN in RFC 8365, provides a control plane for VXLAN. Instead of flooding traffic to discover endpoints, EVPN uses BGP to distribute MAC and IP reachability information across the fabric in a scalable, controlled manner.

Why BGP?

BGP was chosen as the control plane for EVPN for good reason:

  • It is mature, well-understood, and highly scalable
  • It supports route policies and filtering
  • It already runs in most data center fabrics as the underlay routing protocol
  • It supports multiple address families, including the EVPN address family (L2VPN AFI 25, EVPN SAFI 70)

By using MP-BGP (Multiprotocol BGP), EVPN can carry Ethernet MAC addresses, IP prefixes, and VTEP reachability information alongside traditional IP routes — all within a single protocol.
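A minimal sketch of activating the EVPN address family toward a spine peer, again in EOS-style syntax (ASNs and addresses are examples):

```
router bgp 65101
   router-id 10.0.0.11
   ! Overlay peering to the spine, sourced from the loopback
   neighbor 10.0.0.1 remote-as 65001
   neighbor 10.0.0.1 update-source Loopback0
   neighbor 10.0.0.1 ebgp-multihop 3
   neighbor 10.0.0.1 send-community extended
   !
   address-family evpn
      neighbor 10.0.0.1 activate
```

Extended communities matter here: Route Targets, router MAC, and encapsulation type are all carried as BGP extended communities, so they must not be filtered between EVPN peers.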


EVPN Route Types

EVPN defines several route types, each serving a specific purpose. The most commonly used in data center VXLAN deployments are:

Route Type   Name                                     Purpose
Type 2       MAC/IP Advertisement                     Advertises MAC and IP address bindings
Type 3       Inclusive Multicast Ethernet Tag (IMET)  Announces VTEP membership in a VNI
Type 5       IP Prefix Route                          Advertises IP prefixes for inter-subnet routing

Type 2: MAC/IP Advertisement

This is the workhorse of EVPN. When a VTEP learns a new MAC address (for example, when a VM powers on), it generates a Type 2 route and advertises it to other VTEPs via BGP. The receiving VTEPs install this information directly into their MAC tables — no flooding required.

Type 2 routes can also carry IP address bindings, enabling ARP suppression and distributed IRB (Integrated Routing and Bridging), where every leaf switch can perform both Layer 2 switching and Layer 3 routing for tenant traffic.

Type 3: IMET Route

Type 3 routes are used to build the BUM tree for a VNI. When a VTEP joins a VNI, it advertises a Type 3 route. Other VTEPs receive this advertisement and add the advertising VTEP to their replication list for that VNI. This replaces the need for multicast in the underlay.

Type 5: IP Prefix Route

Used for advertising external prefixes or summarized routes into the EVPN fabric. This is particularly important when connecting the VXLAN overlay to external networks through border leaf switches.
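In VLAN-based service configurations, the per-VNI settings that drive Type 2 and Type 3 advertisements live under the BGP process. A hedged EOS-style example (RD and RT values are illustrative):

```
router bgp 65101
   ! MAC-VRF for VLAN 10 / VNI 10010
   vlan 10
      rd 10.0.0.11:10010
      route-target both 10010:10010
      ! Export locally learned MAC addresses as EVPN Type 2 routes;
      ! joining the VNI also triggers a Type 3 (IMET) advertisement
      redistribute learned
```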


The Underlay: Building a Solid Foundation

EVPN/VXLAN is an overlay technology, which means it depends entirely on a properly functioning underlay IP network. The underlay carries VXLAN-encapsulated packets between VTEPs and must provide:

  • IP reachability between all VTEPs — typically via loopback addresses
  • Low latency and high bandwidth — usually achieved with a spine-leaf topology
  • Fast convergence — to minimize tenant traffic disruption during failures

The Spine-Leaf Topology

Most modern data centers using EVPN/VXLAN adopt a spine-leaf (Clos) architecture:

  • Leaf switches — connect to servers, storage, and edge devices; act as VTEPs
  • Spine switches — provide high-bandwidth interconnects between leaves; in iBGP overlay designs they often also act as EVPN route reflectors

BGP is the most popular underlay routing protocol in this design, running as eBGP between spine and leaf layers. Both Arista EOS and Cisco NX-OS support this design natively, with extensive automation capabilities for large-scale deployments.
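A minimal underlay sketch for one leaf-to-spine link (EOS-style syntax; addressing and ASNs are examples, and many designs give each leaf its own private ASN while reusing one ASN across all spines):

```
! Routed point-to-point link toward spine1
interface Ethernet1
   no switchport
   ip address 10.1.1.1/31
!
router bgp 65101
   neighbor 10.1.1.0 remote-as 65001
   ! Advertise the VTEP loopback so remote VTEPs can reach it
   network 10.0.0.11/32
```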


Symmetric vs. Asymmetric IRB

When traffic needs to move between different subnets (Layer 3 routing) within the VXLAN fabric, there are two models to consider.

Asymmetric IRB

In asymmetric IRB, the ingress leaf performs both routing and bridging before sending the packet. The egress leaf only bridges. This is simpler to configure but requires every leaf to have all VLANs/VNIs configured locally — which can be a scaling concern.

Symmetric IRB

In symmetric IRB (the preferred model in most deployments), both the ingress and egress leaf perform routing. A special L3VNI (Layer 3 VNI) is used to carry traffic between leaf switches at Layer 3. This approach:

  • Scales much better because leaves only need the VNIs they locally host
  • Pairs with EVPN Type 2 routes carrying IP bindings and Type 5 routes for prefixes
  • Requires each VRF to have a dedicated L3VNI

Both Arista and Cisco recommend symmetric IRB with dedicated L3VNIs for scalable multi-tenant data center designs.
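A hedged EOS-style sketch of the symmetric IRB pieces for one tenant (VRF names, VNIs, and addresses are illustrative; details such as virtual MAC configuration vary by platform):

```
vrf instance TENANT_A
!
ip routing vrf TENANT_A
!
interface Vxlan1
   ! L3VNI: carries routed traffic for this VRF between leaves
   vxlan vrf TENANT_A vni 10001
!
! Distributed anycast gateway for the tenant subnet
interface Vlan10
   vrf TENANT_A
   ip address virtual 10.1.1.1/24
```

With `ip address virtual`, every leaf hosting VLAN 10 presents the same gateway IP, so workloads keep the same default gateway wherever they land in the fabric.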


Multi-Tenancy with VRFs and L3VNIs

One of the most powerful aspects of EVPN/VXLAN is its ability to support multi-tenancy at both Layer 2 and Layer 3.

Each tenant gets their own VRF (Virtual Routing and Forwarding) instance on the leaf switches. The VRF maps to a unique L3VNI, which is used to route traffic between subnets belonging to that tenant across the fabric. Tenants are completely isolated from one another — overlapping IP address spaces are fully supported.

For example:

  • Tenant A uses VRF TENANT_A, L3VNI 10001, with subnets 10.1.1.0/24 and 10.1.2.0/24
  • Tenant B uses VRF TENANT_B, L3VNI 10002, with subnets 10.1.1.0/24 and 10.1.3.0/24

Despite sharing the same IP ranges, these tenants are completely isolated. EVPN carries the VRF RT (Route Target) information in BGP to ensure routes are imported into the correct VRF on each leaf.
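The per-tenant isolation is enforced by Route Targets. A hedged EOS-style sketch (RD and RT values are illustrative):

```
router bgp 65101
   vrf TENANT_A
      rd 10.0.0.11:10001
      route-target import evpn 10001:10001
      route-target export evpn 10001:10001
   !
   vrf TENANT_B
      rd 10.0.0.11:10002
      route-target import evpn 10002:10002
      route-target export evpn 10002:10002
```

Because TENANT_A routes carry RT 10001:10001 and TENANT_B routes carry RT 10002:10002, each leaf imports a route only into the matching VRF, and the overlapping 10.1.1.0/24 prefixes never collide.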


Vendor Implementations: Arista and Cisco

Arista EOS

Arista’s EOS (Extensible Operating System) has strong native support for EVPN/VXLAN. Arista leverages eBGP as both underlay and EVPN overlay protocol, often using a BGP-only fabric design. Their CloudVision platform provides centralized telemetry and automation for large EVPN deployments.

Cisco NX-OS

Cisco’s Nexus 9000 series with NX-OS is another widely deployed platform for EVPN/VXLAN. Cisco supports both eBGP and OSPF underlay designs and has integrated EVPN into their ACI (Application Centric Infrastructure) and standalone NX-OS fabrics. NDFC (Nexus Dashboard Fabric Controller) provides automation and lifecycle management for these deployments.
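For comparison, a minimal NX-OS-style VTEP sketch uses the `nve` interface rather than EOS's `Vxlan1` (values are illustrative; exact feature sets vary by release):

```
feature bgp
feature nv overlay
feature vn-segment-vlan-based
nv overlay evpn
!
vlan 10
  vn-segment 10010
!
interface nve1
  no shutdown
  ! Use BGP EVPN (not flood-and-learn) for host reachability
  host-reachability protocol bgp
  source-interface loopback1
  member vni 10010
    ingress-replication protocol bgp
```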

Both vendors have reached high feature parity for standard EVPN/VXLAN use cases, and the choice often comes down to existing infrastructure, operational familiarity, and ecosystem integration.


Conclusion

EVPN/VXLAN has fundamentally changed how modern data centers are built. By combining VXLAN’s scalable encapsulation with EVPN’s BGP-based control plane, network engineers can build overlays that support millions of tenants, eliminate spanning tree from the data plane, and enable seamless workload mobility across a routed fabric.

Whether you are deploying on Arista or Cisco, the foundational concepts remain the same: a well-designed underlay, properly configured VTEPs, and a solid understanding of EVPN route types and IRB models will carry you far. Mastering these fundamentals is the essential first step toward building a network that can truly support the demands of today’s cloud-scale applications.
