
Configuring VxLAN in EVE-NG

Today we are going to configure VxLAN in both Flood & Learn and EVPN modes. I am using the Nexus 9000v in EVE-NG to lab it up.

For detailed understanding of VxLAN and BGP EVPN, please follow this blog: VXLAN And EVPN

This post will completely focus on configuration and packet captures.

Let me first describe the setup we are going to use to lab it up.


  • We have two spines and four leafs. Leaf-3 and Leaf-4 form a vPC pair.
  • The host port on Leaf-1 is in VLAN 10, and on Leaf-2 in VLANs 10 & 20.
  • The vPC pair has an L2 switch connected to its member ports and is passing VLANs 10, 20, and 30.


  • Configure basic IP reachability between the leaf and spine switches. I am using OSPF here.
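A minimal underlay sketch for one leaf is below. The process tag, interface, and addressing (loopback0 as the VTEP source, a /30 point-to-point link to Spine-1) are hypothetical placeholders, not taken from the lab file:

```
! Leaf-1 underlay sketch (hypothetical addresses and interface names)
feature ospf

router ospf UNDERLAY
  router-id 10.0.0.1

interface loopback0
  ip address 10.0.0.1/32
  ip router ospf UNDERLAY area 0.0.0.0

interface Ethernet1/1
  description to Spine-1
  no switchport
  ip address 10.1.1.1/30
  ip router ospf UNDERLAY area 0.0.0.0
```

The loopback is advertised into OSPF because it will later be the NVE source interface; every VTEP must be able to reach every other VTEP's loopback.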


  • Configure multicast between the leaf and spine switches. The best practice is to use PIM bidirectional mode. We can configure an anycast or phantom RP for spine redundancy. Make sure to permit the multicast group range that will be used for VxLAN BUM replication.
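A sketch of the multicast configuration on a leaf follows. The RP address and group range are hypothetical; for a phantom RP, the RP address would be an unused address in a subnet advertised by the spines:

```
! Leaf multicast sketch (hypothetical RP address and group range)
feature pim

! Bidir RP covering the groups the VNIs will use
ip pim rp-address 10.0.0.100 group-list 239.1.1.0/24 bidir

interface loopback0
  ip pim sparse-mode

interface Ethernet1/1
  ip pim sparse-mode
```

The same `ip pim rp-address … bidir` statement goes on the spines, with the RP loopback hosted there for redundancy.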


Flood and Learn (F&L)

As the name suggests, F&L uses the classic flood-and-learn procedure to discover end hosts. F&L works entirely in the data plane; there is no control-plane protocol for host reachability.

  • Configure “feature nv overlay” and “feature vn-segment-vlan-based”.
  • Configure the VLAN and associate a unique vn-segment (VNI) per VLAN.
  • Configure the “nve” interface with loopback0 as the source interface. Associate the VNI to the nve interface and assign the multicast group that the particular VNI should be part of.
  • Configure a common secondary IP on the loopback for the vPC peers. The nve peering will be done with the secondary IP only.
  • Configure the vPC nve VLAN on both vPC peers, with an SVI running OSPF on it. Make sure to increase the cost on this link so that it can't be used as transit for all traffic.
  • In F&L, no configuration is needed on the spines.
  • Once the nve interface is configured on all the leafs, you are ready to test the connectivity.
  • You will see (*,G) and (S,G) entries in the mroute table.
  • Below is the packet capture of the first ARP broadcast packet from host SW1 to host SW3.
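The F&L leaf steps above can be sketched as follows. The VNI number, multicast group, and secondary loopback address are hypothetical examples, not values from the lab file:

```
! F&L leaf sketch (hypothetical VNI, group, and addresses)
feature nv overlay
feature vn-segment-vlan-based

vlan 10
  vn-segment 10010

interface nve1
  no shutdown
  source-interface loopback0
  member vni 10010 mcast-group 239.1.1.10

! On the vPC pair only: shared secondary IP on the loopback,
! so both peers present one anycast VTEP address
interface loopback0
  ip address 10.0.0.3/32
  ip address 10.0.0.34/32 secondary
```

With this in place, each leaf joins the group for its VNI, and BUM traffic for VLAN 10 is flooded to 239.1.1.10 while remote MACs are learned from the encapsulated frames.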



BGP EVPN

  • Configure “nv overlay evpn” on all switches.
  • For the control plane, configure BGP from leaf to spine with the l2vpn evpn address family.
  • Under the nve interface, configure BGP as the protocol for host reachability.
  • Now map the VNI to EVPN and configure the RD/RT. Here we have the option of letting the device automatically generate the RD/RT values.
  • Configure the spines as route reflectors, with the leafs as their clients.
  • EVPN is data driven, which means you will see an nve peer and MAC addresses only when there is active traffic. So let's initiate traffic from Leaf-1 to Leaf-4.
  • Once an address is learned on a leaf, the BGP control plane will advertise it to the remote peers. For more detail on the route types, kindly refer to the VxLAN theory blog mentioned above.
  • Below is the packet capture of a MAC advertisement. There is a new EVPN NLRI for advertising/withdrawing the route.
  • So far we have seen forwarding only for the single VLAN 10. Let's configure another VLAN 20 the same way.
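The EVPN control-plane steps above can be sketched as follows. The AS number, neighbor addresses, and VNI are hypothetical placeholders:

```
! --- Leaf sketch (hypothetical AS 65000, spine loopback 10.0.0.101) ---
nv overlay evpn
feature bgp

router bgp 65000
  neighbor 10.0.0.101
    remote-as 65000
    update-source loopback0
    address-family l2vpn evpn
      send-community extended

interface nve1
  host-reachability protocol bgp

evpn
  vni 10010 l2
    rd auto
    route-target import auto
    route-target export auto

! --- Spine sketch: iBGP route reflector for the leafs ---
router bgp 65000
  neighbor 10.0.0.1
    remote-as 65000
    update-source loopback0
    address-family l2vpn evpn
      send-community extended
      route-reflector-client
```

With `rd auto` and `route-target … auto`, the switch derives the RD from the router ID and VNI, and the RT from the AS number and VNI, so per-VNI values don't have to be typed by hand.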

Inter Vlan Routing

  • For routing between VLAN 10 and VLAN 20, let's create another VLAN 500, which will act as the L3 VNI for inter-VLAN routing.
  • Create a VRF and assign a unique VNI and RD/RT to it.
  • Associate the L3 VNI with the nve interface.
  • Configure a common anycast-gateway MAC address to be shared by all leafs.
  • Configure SVIs for both VLANs – the end-host VLAN and the L3 VLAN.
  • Allow the end-host SVI to use the fabric anycast gateway MAC address. This will be the gateway for all hosts connecting to this VLAN. It is useful for host mobility, as the gateway MAC address is the same on all leaf switches.
  • The VRF L3VNI will host both the end-host SVI and the L3 VNI SVI.
  • Allow the SVI for the L3 VLAN to forward IP packets. There won't be any IP address on this SVI.
  • This VLAN should be allowed on the vPC peer-link.
  • Make sure the VNI is up in control-plane mode.
  • Now let's test the connectivity from SW1 to hosts in VLAN 10 and VLAN 20.
  • Below is the packet capture for ICMP traffic between hosts in the same VLAN.
  • Now ping a host in VLAN 20 from a host in VLAN 10. You will see the host route in VRF L3VNI, as depicted below:
  • Below is the inter-VLAN traffic packet capture, which has VNID 10500 (the L3 VNI) in the VNID field.
  • PCAP for the ARP broadcast:
  • ARP reply:
  • ICMP packet:
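Putting the inter-VLAN routing steps together, a leaf sketch looks like this. The VRF name L3VNI, VLAN 500, and VNID 10500 come from the post; the anycast-gateway MAC and SVI addressing are hypothetical:

```
! Inter-VLAN routing sketch (hypothetical MAC and IP addressing)
feature interface-vlan
feature fabric forwarding

! Same anycast gateway MAC on every leaf
fabric forwarding anycast-gateway-mac 0000.1111.2222

vlan 500
  vn-segment 10500

vrf context L3VNI
  vni 10500
  rd auto
  address-family ipv4 unicast
    route-target both auto
    route-target both auto evpn

! End-host SVI: distributed anycast gateway for VLAN 10
interface Vlan10
  no shutdown
  vrf member L3VNI
  ip address 192.168.10.1/24
  fabric forwarding mode anycast-gateway

! L3 VNI SVI: forwards IP packets but carries no IP address
interface Vlan500
  no shutdown
  vrf member L3VNI
  ip forward

interface nve1
  member vni 10500 associate-vrf
```

Routed traffic between VLAN 10 and VLAN 20 is encapsulated with the L3 VNI (10500) rather than either L2 VNI, which is exactly what the inter-VLAN packet capture above shows in the VNID field.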


Download link for the lab file: Downloads

Happy Labbing 🙂


Categories: Data Center, Switching, Vxlan


  1. Hi Yogita,

    Which Nx-os are you using for nexus on eve-ng?
    Are you able to run more than one nexus without issues?
    Can you share your server resources allocation?

    Many thanks.


    • Hi Angelo,

      I am using nxosv-final.7.0.3.I7.2.qcow2 on eve-ng. Yes, all the switches are running without any issues.
      Regarding the resource allocation, I have a remote server with 128GB RAM. Please let me know, if you are looking for some more specifications.


      • Hi Yogita,

        Thank you for your reply. I want to check the nxos version on my eve. If I have a different one, I'll try to download the same as yours.
        After that I'll try to reproduce your lab.
        Thank you very much.

        Have a good day


  2. Hi Yogita,
    Could you also tell us about the CPU specification of your environment? I am looking to buy a server for eve-ng. What is your suggestion regarding a server with 2x X5660 CPUs and 64GB RAM?

