Data Center

Virtual Port Channel (vPC)- Part 1

In our last post, we discussed FabricPath and its advantages over Spanning Tree Protocol. The core of the network was running in FabricPath mode, utilizing the full bandwidth of all the redundant links. Toward the end servers, however, we were still running classical Ethernet and hence STP.

So today's goal is to discuss how vPC helps utilize both links when servers are dual-homed to Nexus switches.

Virtual Port Channel Series: We are going to have a detailed discussion on vPC in this series of posts. Below are the topics covered in the different posts.
  1. Basics of vPC
  2. vPC Inconsistency and Control Plane: vPC – Part II
  3. vPC Failure Scenarios: vPC – Part III
  4. vPC with HSRP: vPC with HSRP – Part IV
  5. vPC Design Variations: vPC Design Variations

Consider the scenario below, where Server-1 is connected to switches SW-1 and SW-2. Under normal STP conditions, one of the links between the server and the switches will be blocked, and hence left completely unutilized unless a failure occurs on the other link.

Even Per-VLAN STP has this problem, as some of the VLANs will be blocked on one of the links. One way out is to bundle the interfaces and use the full capacity of both links.

But wait a minute: bundling only works between two devices. So how can the links from a server dual-homed to two NX-OS switches be bundled together?

vPC comes to the rescue here. It is a virtualization technology which allows both switches to appear as a single switch to the server, thus creating a port channel between the virtual switch and the server (the third device can be anything with link-aggregation capability).


vPC belongs to the Multi-Chassis EtherChannel (MCEC) family. Both switches should be identical from a hardware perspective.

This way, both links can be utilized based on the hashing or load-balancing algorithm used by the server. vPC also helps avoid a single point of node failure. Now you will ask: how can the connected device see them as a single switch when both switches send BPDUs with their own MAC addresses?


Here is the catch: only one switch sends the BPDUs. We will discuss this in detail in the Control Plane section.

vPC Components:

  • "feature vpc" is required to initiate the vPC process.
  • "feature lacp" is optional, but using LACP for the port channel is recommended.

Topology used:


1. vPC Domain: The vPC domain is a logical collection of vPC components, which includes both switches, the keepalive link, and the peer link. The vPC domain ID is unique per vPC switch pair. Most of the global configuration goes under the vPC domain. Each vPC pair shares a common system MAC generated from the domain ID: the well-known vPC system MAC is 00:23:04:ee:be:xx, where xx is derived from the domain ID.
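As a minimal sketch, enabling the features and creating the domain looks like this on both switches (domain ID 5 matches the system MAC seen in the LACP capture later in this post):

```
! Enable the required features on both peers
feature vpc
feature lacp

! The domain ID must match on both peers and be unique per vPC pair
vpc domain 5
```

With domain ID 5, the pair derives the shared system MAC 00:23:04:ee:be:05.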

2. Peer Keepalive: As the name suggests, the peer-keepalive link is used to exchange heartbeats with the peer device to confirm that the peer is alive. It is a Layer 3 link and uses UDP port 3200 to confirm the peer status. The default keepalive interval is 1 second and the timeout is 5 seconds.

The ToS byte used is 192, which corresponds to IP precedence 6 (CS6). The status can be verified using "show vpc peer-keepalive".
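A keepalive configuration over the mgmt0 interfaces might look like this (the addresses are hypothetical examples):

```
vpc domain 5
  ! 10.1.1.2 is the peer's mgmt0 address, 10.1.1.1 is the local one
  peer-keepalive destination 10.1.1.2 source 10.1.1.1 vrf management
```

"show vpc peer-keepalive" then reports the peer status along with the configured interval and timeout.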


Packet Capture of vPC Keepalive Message:


It is recommended to have a separate VRF for the keepalive link to keep the traffic isolated from the default VRF. There are three ways to configure the keepalive link:

  • Management Port:
    • The management port is used for Layer 3 out-of-band (OOB) connectivity.
    • The mgmt0 port has its own dedicated VRF, "management". In our demonstration, we are using the mgmt port.


  • Layer 3 Routed Port:
    • The keepalive can also be configured on a Layer 3 interface between the two switches.
    • A separate VRF, such as "vrf context keepalive", should be used.
    • This link only carries keepalives, so it will not consume much bandwidth; dedicating a Layer 3 routed port to it can therefore be a waste of resources.
  • SVI Interface on a Layer 2 Switched Port:
    • We can also configure a Layer 3 SVI for the keepalives between the two switches.
    • The VLAN needs to be allowed on the peer link between the peers.

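As a sketch of the SVI option (VLAN 900 and the addresses are hypothetical examples):

```
vlan 900

interface Vlan900
  ! SVI used only for the vPC keepalive
  no shutdown
  ip address 192.168.90.1/30

vpc domain 5
  peer-keepalive destination 192.168.90.2 source 192.168.90.1 vrf default
```

Note that if VLAN 900 is carried only on the peer link, the SVI fails together with the peer link, which is exactly the dependency discussed next.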
All these options have some advantages as well as disadvantages, which we will discuss in detail under Failure Section.

For instance, suppose we are using an SVI and the VLAN is allowed only on the peer link between the switches. If the peer link fails, the SVI will also go down, as there is no active interface left in that particular VLAN: the SVI depends on a physical interface being up in that VLAN. We will briefly demonstrate the complications of using L3 and SVI interfaces.

If three keepalives are missed, the switch assumes that the peer is down. However, there are some scenarios where a keepalive failure has no impact, which we will discuss in the Failure Scenarios section.

3. vPC Peer Link :

  • The peer link is the Layer 2 port channel between the two switches. It is also known as a Multi-Chassis EtherChannel trunk.
  • The peer link is responsible for all control-plane synchronization between the two switches, such as the MAC address table.
  • This synchronization of the control plane is what makes the two switches appear as one.
  • The peer link also carries broadcast and multicast traffic; however, unicast traffic should not normally traverse the peer link.
  • All control-plane traffic is processed by the primary peer; however, both switches are capable of forwarding data-plane traffic.
  • The configuration of the peer link should be identical on both switches, and all member VLANs should be allowed on it. If a VLAN is not allowed, we will see an inconsistency on the port and that particular VLAN will be pruned.
  • The "vpc peer-link" command is used to configure the port channel as the peer link; the port channel should be up before it is configured.
  • Bridge Assurance is enabled by default on the peer link.
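A peer-link configuration along these lines, on both switches (the port-channel number and member interfaces are examples):

```
interface Ethernet1/1-2
  channel-group 1 mode active

interface port-channel1
  switchport mode trunk
  ! All vPC member VLANs must be allowed on this trunk
  switchport trunk allowed vlan 10,20
  vpc peer-link
```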


4. vPC Member port :

  • The port that connects toward the classical Ethernet side, where the connected device is dual-homed to both switches, is a vPC member port.
  • The configuration of the member port should be identical on both switches. The member port is bundled into a port channel with "vpc <number>" configured on it.
  • The vpc number needs to be identical for a particular member port on both switches.
  • If the vpc number is not explicitly configured, the port-channel number is mapped to the vpc number by default. Matching the two is not compulsory, but it makes administration easier.
  • It is always recommended to use LACP for the EtherChannel due to its negotiation intelligence. Suppose a cable is mistakenly connected to a different port, so the CE device receives frames from two different switches; LACP will detect this and disable the port.
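A member-port configuration might look like this on both switches (the interface and port-channel numbers are examples):

```
interface Ethernet1/10
  channel-group 53 mode active

interface port-channel53
  switchport mode trunk
  switchport trunk allowed vlan 10,20
  ! The vpc number must match on both peers for this member
  vpc 53
```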

See here: the device has automatically inherited the vpc number "53" from port channel 53.



Below is a packet capture of LACP for the bundle between the vPC switches and Server-1. Note that the MAC address used for the vPC is the virtual MAC address (00:23:04:ee:be:05) instead of the peer's local MAC.


5. vPC VLAN: A VLAN which is allowed on the vPC peer link is a vPC VLAN. In the above snip, 10 and 20 are vPC VLANs.

6. Orphan Port: When a device is not dual-homed to both switches, the port to which that device is connected is known as an orphan port. During outages/failure scenarios, orphan ports are the only ports which get isolated from the network, as they are not part of the vPC and not dual-homed to the vPC peers.
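On NX-OS, the orphan ports on a switch can be listed directly:

```
! Lists ports in vPC VLANs that are not part of any vPC,
! i.e. the orphan ports on this switch
show vpc orphan-ports
```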

7. Non-vPC VLAN: A VLAN not allowed on the peer link is a non-vPC VLAN.

vPC Order of Operation

Now let's take a look at how vPC forms its adjacency and its order of operations. Do take care with the order of operations, as it is followed very religiously and every step depends on the previous one; otherwise vPC will not come up.

  1. Enable the vPC feature.
  2. Configure the vPC domain and the peer keepalive under it.
  3. Once the peer keepalive is up, configure the peer link on the port channel between the switches.
    1. The port channel should be up before enabling the peer link.
  4. Once the peer link is up, the vPC role election takes place. The switch with the lowest priority becomes the primary; in case of a tie, the lowest local system MAC address wins.
  5. After the role election, a consistency check is performed between the two switches. As they behave as a single switch, all the relevant configuration should be identical so that services are not impacted in failure situations. There are two types of inconsistencies, which we will discuss in the next post.
  6. After the consistency check, L3 SVIs (if any) are moved to the up/up state so that routing converges before traffic starts hitting the switches. No one wants traffic to be dropped, especially in data centers.
  7. Last but not least, the vPC member ports are brought up and start pushing data traffic.
  8. Now the vPC is up and operational.
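Each stage of the order of operations can be verified with the standard show commands:

```
! Stage 2/3: keepalive status
show vpc peer-keepalive

! Stage 4: primary/secondary role election
show vpc role

! Stage 5: global consistency check
show vpc consistency-parameters global

! Overall status: peer link, member ports, inconsistencies
show vpc brief
```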

** The configuration is already explained in the vPC Components section.
*** vPC follows a strict order of operations; if it is not followed, vPC will not come up.

  • A vPC member port will not come up until the vPC adjacency is formed.
  • The vPC adjacency will not come up until the vPC peer link is up.
  • The vPC peer link will not come up until the vPC keepalive link is up.

Categories: Data Center, Switching, vPC
