3rd July 2025

Hi guys! First-time poster, very long-time lurker here. This is going to be a long one, so thanks for taking the time to read and comment.

I’m managing a reasonably sized network with two datacenters. Currently we’re using stretched VLANs, which we would be glad to move away from to something more modern and robust. A third, greenfield datacenter is being built, and it seems logical to implement a spine-leaf fabric with VXLAN EVPN there. We’re a Cisco shop, which is something I have little influence over, but after doing the research on datacenter technology I believe Cisco is not bad, just maybe overpriced.

I have limited experience with VXLAN technologies and multicast, but I have been studying the technology for a while. I’d be very happy if somebody could help me validate a few things, as there are few examples in the documentation detailing how to move existing workloads onto this new technology.

Lab topology with link annotations

Design question 1: The simplest option for us seems to be to use the same OSPF area for the underlay that is in use for the rest of the network. We’re running single-area OSPF with ~45 L3 devices and around 1200 routes. A new private BGP AS would be created for datacenter use, with two route reflectors on the two new spine switches. Are these sane choices?
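
For reference, this is roughly what I have in mind on one of the spines. Treat it as a lab sketch: the AS number (65001), the loopback/interface addressing and the interface names are placeholders, not recommendations.

    feature ospf
    feature bgp
    nv overlay evpn

    router ospf UNDERLAY
      router-id 10.0.0.1

    interface loopback0
      ip address 10.0.0.1/32
      ip router ospf UNDERLAY area 0.0.0.0
      ip pim sparse-mode

    interface Ethernet1/1
      description to DC1-Leaf-1
      no switchport
      mtu 9216
      ip address 10.1.1.1/30
      ip ospf network point-to-point
      ip router ospf UNDERLAY area 0.0.0.0
      ip pim sparse-mode

    router bgp 65001
      router-id 10.0.0.1
      neighbor 10.0.0.11 remote-as 65001
        description DC1-Leaf-1
        update-source loopback0
        address-family l2vpn evpn
          send-community extended
          route-reflector-client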

Design question 2: For BUM traffic we would pick an unused multicast address range and configure Anycast-RP, with both spine switches sharing the RP role. The logical decision seems to be to use a separate multicast group for each L2VNI. Is this best practice?
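
Something like this is what I’m picturing: Anycast-RP on the two spines plus one group per L2VNI on the leaf NVE interfaces. The 239.1.1.0/24 range, the shared RP loopback and the VNI/group numbers are made up for the lab.

    feature pim

    ! on both spines
    interface loopback254
      description shared Anycast-RP address
      ip address 10.0.0.254/32
      ip router ospf UNDERLAY area 0.0.0.0
      ip pim sparse-mode

    ip pim rp-address 10.0.0.254 group-list 239.1.1.0/24
    ip pim anycast-rp 10.0.0.254 10.0.0.1
    ip pim anycast-rp 10.0.0.254 10.0.0.2

    ! on the leafs/VTEPs, one group per L2VNI
    interface nve1
      no shutdown
      host-reachability protocol bgp
      source-interface loopback1
      member vni 10100
        mcast-group 239.1.1.100
      member vni 10101
        mcast-group 239.1.1.101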

Design question 3: My idea is to build the new fabric and, as a first step, extend the old datacenter VLANs into one of the leaf switches while leaving the SVIs outside the fabric. From that leaf switch, build L2VNIs for each datacenter VLAN to the other leaf switches. The next step would be moving the workloads currently residing in the datacenter VLANs onto the new fabric. When that is done, I’d move the SVIs into the VXLAN domain (with L3 VNIs and anycast gateways) and remove the datacenter VLANs entirely from the rest of the network. For firewalling reasons we would use several VRFs interconnected by external firewalls. Is this an acceptable process, or is there a better approach?
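
On the leaf that temporarily bridges the old VLANs into the fabric, my understanding is that step one is just the VLAN-to-VNI mapping (the SVI stays outside), and only after the migration does the SVI come back as an anycast gateway inside a VRF. A rough sketch with placeholder VLAN/VNI/VRF numbers and addresses:

    feature vn-segment-vlan-based
    feature nv overlay
    feature interface-vlan
    feature fabric forwarding

    ! step 1: pure L2VNI, the SVI still lives outside the fabric
    vlan 100
      name OLD-DC-VLAN-100
      vn-segment 10100

    evpn
      vni 10100 l2
        rd auto
        route-target import auto
        route-target export auto

    ! step 2, after workloads have moved: SVI becomes an anycast gateway in a VRF
    fabric forwarding anycast-gateway-mac 0000.2222.3333

    vrf context TENANT-A
      vni 50000
      rd auto
      address-family ipv4 unicast
        route-target both auto
        route-target both auto evpn

    vlan 2500
      vn-segment 50000

    interface Vlan2500
      no shutdown
      vrf member TENANT-A
      ip forward

    interface Vlan100
      no shutdown
      vrf member TENANT-A
      ip address 192.168.100.1/24
      fabric forwarding mode anycast-gateway

    interface nve1
      member vni 50000 associate-vrf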

Design question 4: In addition to the new greenfield datacenter, we have two VXLAN-capable Nexus ToR switches already in production in one of the existing datacenters. These switches have no VXLAN configuration at the moment. It would be interesting for us to extend the VXLAN fabric into another physical location; however, these two devices are not and will not be part of a spine-leaf architecture. My idea was to extend the VXLAN fabric to these two ToR devices by configuring BGP EVPN, peering them with the spine switches (the route reflectors) and also peering them with each other. The thought process is that this keeps the VXLAN fabric working if the two ToR devices get isolated from the rest of the network. Our core network is multicast-aware and can transport jumbo frames between the new datacenter and these old Nexus devices. Is this considered a bad design idea?
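
The peering I had in mind on one of the remote ToRs would look something like this (same placeholder AS and loopback addressing as above; the two spine RRs are 10.0.0.1/.2 and the neighbouring ToR is 10.0.0.22):

    router bgp 65001
      router-id 10.0.0.21
      neighbor 10.0.0.1 remote-as 65001
        description DC1-Spine-1 (route reflector)
        update-source loopback0
        address-family l2vpn evpn
          send-community extended
      neighbor 10.0.0.2 remote-as 65001
        description DC1-Spine-2 (route reflector)
        update-source loopback0
        address-family l2vpn evpn
          send-community extended
      neighbor 10.0.0.22 remote-as 65001
        description DC2-TOR-2, direct peering for the isolation scenario
        update-source loopback0
        address-family l2vpn evpn
          send-community extended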

Design question 5: For multicast traffic I would add the two old Nexus ToR devices to the Anycast-RP set. The point would be that BUM traffic also keeps working between the two switches in case they get isolated from the rest of the network. Is adding them to the Anycast-RP considered a bad design idea?
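
Concretely, I’d just extend the Anycast-RP set so all four devices list each other. Same placeholder addressing as above, applied on every RP member:

    ! 10.0.0.1/.2 = spines, 10.0.0.21/.22 = remote ToRs (lab addressing)
    ip pim rp-address 10.0.0.254 group-list 239.1.1.0/24
    ip pim anycast-rp 10.0.0.254 10.0.0.1
    ip pim anycast-rp 10.0.0.254 10.0.0.2
    ip pim anycast-rp 10.0.0.254 10.0.0.21
    ip pim anycast-rp 10.0.0.254 10.0.0.22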

Problem: I configured the lab mostly in the way described above and have been struggling to extend an L2VNI from the spine-leaf datacenter to the separate ToR switches across the L3 core. The L2VNI works fine within the spine-leaf part of the fabric. There is underlay reachability, and the ToR switches have the endpoint MAC addresses in their MAC tables pointing to the correct NVE IP addresses, but pings do not go through. I think there is a multicast misconfiguration, so I set the “Core” device as a static RP for testing, but it did not solve the issue.
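
In case it helps point me in the right direction, these are the kinds of checks I’ve been running on the ToR and leaf switches (group/VNI/VLAN numbers are the lab placeholders from above):

    show nve peers
    show nve vni
    show bgp l2vpn evpn summary
    show l2route evpn mac all
    show mac address-table vlan 100
    show ip pim neighbor
    show ip pim rp
    show ip mroute 239.1.1.100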

Configuration of Core, DC2-TOR-1, DC1-Leaf-4, DC1-Leaf-2, DC1-Spine-1, DC1-Spine-2

MAC table of the ToR switch and reachability

Multicast group configured for the L2VNI on the TOR switch, Core switch, DC1-Leaf-4, DC1-Leaf-2, DC1-Leaf-1, DC1-Spine-2

I’m ready to put in all the time necessary to learn and lab up this new technology. It seems very interesting and useful, but it’s not widely adopted in my “professional bubble”. 🙂

Thanks again for reading and commenting.
