A Practice Guide to
vCNS and VXLAN
Technical Overview and Design Guide
Prasenjit Sarkar – VMware
Hongjun Ma – HP
Andy Grant – HP
Agenda
What we will focus on:
High-level overview of how VXLAN works
VXLAN implementation using vCNS, including:
• Infrastructure Components
• Packet Flow
Deployment Prerequisites
Network Considerations
• Multicast requirements
• Multicast implementation
VTEP Performance and Overhead
• HP Virtual Connect & load-balancing
VXLAN Introduction
Target Audience
Architects, Engineers, Consultants, Admins responsible for Data Center Infrastructure and
VMware virtualization technologies

What is VXLAN
VXLAN - Virtual eXtensible Local Area Network is a network overlay that encapsulates
layer 2 traffic within layer 3
• Submitted to the IETF by Cisco, VMware, Citrix, Red Hat, Broadcom, & Arista
• Coined 'network virtualization' or 'virtual wires' by VMware
Competing Solutions?
NVGRE - Network Virtualization using Generic Routing Encapsulation
• Submitted to IETF by Microsoft, Arista, Intel, Dell, HP, Broadcom, Emulex
STT - Stateless Transport Tunneling
• Submitted to IETF by Nicira (VMware)
VXLAN Introduction
Why VXLAN?
• Ability to manage overlapping addresses between multiple tenants
• Decoupling of the virtual topology provided by the tunnels from the physical topology of the network
• Support for virtual machine mobility independent of the physical network
• Support for essentially unlimited numbers of virtual networks (in contrast to VLANs, for example)
• Decoupling of the network service provided to servers from the technology used in the physical network (e.g. providing an L2 service over an L3 fabric)
• Isolating the physical network from the addressing of the virtual networks, thus avoiding issues such
as MAC table size in physical switches.
• VXLAN provides up to 16 million virtual networks, in contrast to the 4094 limit of VLANs
• Application agnostic, all work is performed in the ESXi host.
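The 16-million figure is straight bit arithmetic; a quick sanity check of the address-space claim (VXLAN's 24-bit segment ID versus the 12-bit 802.1Q VLAN ID, which reserves two values):

```python
# VXLAN carries a 24-bit segment ID (VNI); 802.1Q carries a 12-bit
# VLAN ID with values 0 and 4095 reserved.
vxlan_segments = 2 ** 24      # 16,777,216 virtual networks
vlan_segments = 2 ** 12 - 2   # 4094 usable VLANs

print(vxlan_segments, vlan_segments)  # 16777216 4094
```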

Where are we today?
• VXLAN is still in experimental status in the IETF
• Primarily targeted at vCloud environments, but a standalone product is available
VXLAN Introduction
How VXLAN?
• VMware vSphere ESXi 5.1 AND
– vCloud Networking Security 5.1 Edge
OR
– Cisco Nexus 1000V
VMware vCloud Networking and Security Edge
• Available vCNS deployment options
– Standalone (licensed per VM)
– AutoDeploy
• Deploying VXLAN through Auto Deploy
– vCloud Director 5.1 (licensed in vCloud Suite)
• Currently tested to support 5000 VXLAN segments
– vCloud Networking and Security 5.1 Edge configuration limits and throughput
Cisco Nexus 1000V
• Currently tested to support 2000 VXLAN segments
– Deploying the VXLAN Feature in Cisco Nexus 1000V Series Switches
Network Virtualization Conceptual View
Analogy between computer virtualization and network virtualization (overlay
transport)
vCloud Networking and Security - Edge
What is vCloud Networking and Security Edge?
Part of the VMware vCloud Networking and Security suite
• Previously known as the vShield suite.
• Provides gateway services including
– VPN
– DHCP
– DNS
– NAT
– Firewall (5 tuple)
– VXLAN & inter-VXLAN routing
– Load-Balancing (Advanced License)
– High Availability (Advanced License)

Licensing Options
– Standalone per-VM Standard or Advanced licensing
– Bundled with vCloud Suite
VXLAN: How it works
• Encapsulation
– Performed by a kernel module installed on ESXi host
• Acts as the Virtual Tunnel End Point or VTEP
– Adds a 24-bit identifier and 50 bytes to the packet size
– MAC in UDP + IP
• MAC in UDP + IP
– Why MAC in IP is better than vCNI (MAC in MAC)
• Multicast
– Where it is used, how this impacts scalability
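As a rough illustration of the encapsulation above, the sketch below (plain Python, stdlib only) packs the 8-byte VXLAN header with its 24-bit VNI and tallies the commonly cited 50-byte overhead for the IPv4, untagged-outer case. The `vxlan_header` helper is our own illustration, not a VMware API.

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header: flags byte (I bit set), 24-bit VNI."""
    assert 0 <= vni < 2 ** 24
    flags = 0x08                  # 'I' flag: VNI field is valid
    word1 = flags << 24           # flags byte + 24 reserved bits
    word2 = vni << 8              # 24-bit VNI + 8 reserved bits
    return struct.pack("!II", word1, word2)

# Encapsulation overhead on the wire (IPv4 outer header, no outer VLAN tag):
OUTER_ETH, OUTER_IPV4, OUTER_UDP, VXLAN = 14, 20, 8, 8
overhead = OUTER_ETH + OUTER_IPV4 + OUTER_UDP + VXLAN

hdr = vxlan_header(5001)
print(len(hdr), overhead)  # 8 50
```

The 8-byte header plus outer Ethernet, IP, and UDP headers is where the 50-byte figure on the slide comes from.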
vCNS + Edge + VXLAN: Prerequisites
vCNS Edge is part of the VMware vCloud Networking and Security suite (previously known as the vShield suite).
• Highly integrated with vCloud, but vCD is not necessary with standalone licenses.

VXLAN + vCNS Edge requires:
• Physical network components:
– MTU increase (1550 minimum)
– Multicast enabled (depending on topology; more to come)
• VMware components:
– vDS 5.1 (implies vSphere Enterprise Plus licensing & vCenter)
– A vCNS Manager
– A vCNS Edge

VMware recommends:
• a single vDS across all clusters
• isolating your VTEP traffic from VM VLANs
• EtherChannel or LACP to your host for the VXLAN transport Port Group
Multicast
What needs to be enabled on HP or Cisco switches?
What are the multicast design considerations?
• Limits of physical network hardware platforms using multicast
– Cisco Nexus 7000 supports 15,000 L2 IGMP entries (http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9402/ps9512/brochure_mulitcast_with_cisco_nexus_7000.pdf)
– Cisco Nexus 7000 supports 32,000 MC entries (15K vPC) (http://www.cisco.com/en/US/docs/switches/datacenter/sw/verified_scalability/b_Cisco_Nexus_7000_Series_NXOS_Verified_Scalability_Guide.html#reference_04BA8513CF3140D2A2A6C5E5B4E7C60C)
– Check HP gear limits.
– So what do these limits mean?
– VMware recommends one VXLAN 'virtual wire' per multicast segment, so can we only support up to 15K or 32K virtual wires?
• If we don't follow this recommendation, how does this impact a VM broadcast flooding other VTEPs with multicast traffic?
• Is it better to use IGMP snooping/querier (L2 topology) or PIM with an L3 topology?
– How does this impact Data Center Interconnects (DCI) and stretched VXLAN implementations?
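To make the 'one virtual wire per multicast group' recommendation concrete, here is a minimal sketch of a VNI-to-group mapping. The `vni_to_group` helper and the 239.1.0.0 base address are illustrative assumptions (239/8 is the administratively scoped range), not values vCNS itself uses.

```python
import ipaddress

def vni_to_group(vni: int, base: str = "239.1.0.0") -> str:
    """Map a VNI to a multicast group by offsetting into an
    administratively scoped (239/8) range. Base is an example value."""
    return str(ipaddress.IPv4Address(base) + vni)

# One group per virtual wire: a 15K hardware IGMP entry limit
# therefore caps the design at roughly 15K virtual wires.
print(vni_to_group(0), vni_to_group(5000))  # 239.1.0.0 239.1.19.136
```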
VXLAN Logical View
Packet flow across virtual wires on the same layer 2 VXLAN transport network

[Diagram: packet flow between VMs over the VXLAN fabric on a single vDS, Layer 2 transport]
• Multicast configuration options: IGMP snooping/querier
• List pros/cons here; explain how they work in the next slide
• Design considerations? E.g. broadcast storms
VXLAN Logical View
Packet flow across virtual wires on different layer 3 VXLAN transport networks

[Diagram: packet flow between VMs over the VXLAN fabric across different Layer 3 transport networks, single vDS]
• Multicast configuration options: PIM
• List pros/cons here; explain how they work in the next slide
• Design considerations? E.g. broadcast storms
High Level Physical Deployment

[Diagram: four ESXi hosts, each with a VTEP, on one vSphere Distributed Switch joined by the VXLAN fabric]
Solution Components
• vDS 5.1
• VXLAN virtual fabric
• VTEP (vmk adapter in a dedicated Port Group)
• vCNS Edge 5.1
• vCNS Manager 5.1
Physical Deployment – A Closer Look

[Diagram: two ESXi hosts with VTEPs on one vSphere Distributed Switch, joined by the VXLAN fabric]
• vCNS Manager manages the vCNS deployment and supports many Edge devices
• VTEP is a single vmkernel interface per host, automatically created on the VXLAN vDS Port Group
• LACP, EtherChannel or (static) failover are the only supported load-balancing methods
• VLAN 'trunking' or virtual switch tagging (VST) is not recommended; dedicate 'access' physical uplinks to VXLAN Port Groups
• vCNS Edge virtual appliance provides gateway services
Physical Deployment – Intra-Host Packet Flow

[Diagram: two ESXi hosts with VTEPs on one vSphere Distributed Switch]
VM Packet Flow
1. VM sends a packet to a destination VM on the same virtual wire
2. The packet hits the vDS and is forwarded to the destination VM on the same host (no VXLAN encapsulation required)
Physical Deployment – Inter-Host Packet Flow

[Diagram: two ESXi hosts with VTEPs on one vSphere Distributed Switch, joined by the VXLAN fabric]
VM Packet Flow
1. VM sends a packet to a remote destination on the same virtual wire
2. The destination VM is remote, so the packet must traverse the VXLAN network
3. The ESXi host encapsulates the packet and transmits it via the VTEP vmkernel adapter
4. The target ESXi host running the destination VM receives the packet on its VTEP, decapsulates it, and forwards it to the VM
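Behind step 4, each VTEP maintains a table mapping inner MAC addresses to remote VTEP IPs, learned from decapsulated traffic (flood-and-learn over the segment's multicast group when the MAC is unknown; this version of VXLAN has no control plane). A toy sketch with hypothetical names:

```python
class Vtep:
    """Toy flood-and-learn VTEP forwarding table, keyed per VNI."""
    def __init__(self):
        self.table = {}  # (vni, inner_src_mac) -> remote VTEP IP

    def learn(self, vni, src_mac, outer_src_ip):
        # On decapsulation, remember which VTEP hosts this inner MAC.
        self.table[(vni, src_mac)] = outer_src_ip

    def lookup(self, vni, dst_mac):
        # Known MAC -> unicast to its VTEP; unknown -> None, meaning
        # flood to the VNI's multicast group instead.
        return self.table.get((vni, dst_mac))

vtep = Vtep()
vtep.learn(5001, "00:50:56:aa:bb:cc", "10.0.0.2")
print(vtep.lookup(5001, "00:50:56:aa:bb:cc"))  # 10.0.0.2
print(vtep.lookup(5001, "00:50:56:dd:ee:ff"))  # None
```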
Physical Deployment – Routed Packet Flow

[Diagram: two ESXi hosts with VTEPs on one vSphere Distributed Switch, joined by the VXLAN fabric]
VM Packet Flow
1. VM transmits a packet to a remote destination
2. The VTEP kernel module in the ESXi host encapsulates the packet and transmits it on the VXLAN network
3. The ESXi host running the Edge device receives the packet and processes it through the rule engine
4. The packet is processed by firewall/NAT/routing rules and sent out the external interface of the Edge device
5. The packet hits the physical network infrastructure
Comparison of vSphere NIC Teaming
Load Distribution vs Load Balancing vs Active/Standby
vCNS Edge supports LACP & EtherChannel, or Failover (aka Active/Standby) NIC teaming options
Load Distribution (of IP flows), e.g. LACP: attempts to evenly distribute IP traffic flows (conversations); bandwidth is NOT a consideration (e.g. one link at 90% load, another at 20%)
Load Balancing (bandwidth), e.g. LBT: attempts to evenly distribute bandwidth capacity (e.g. 55% and 40% load)
Active/Standby: single active link (100% load) with an idle standby (0% load); no automatic load distribution/balancing
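The 'flows, not bandwidth' distinction matters because LACP/EtherChannel pick a link by hashing packet headers, so a single flow always lands on one link regardless of its size. A minimal sketch of IP-hash style selection (the `pick_uplink` helper and the CRC32 hash are illustrative; real switches use their own hash functions and may include ports or MACs):

```python
import zlib

def pick_uplink(src_ip: str, dst_ip: str, n_links: int) -> int:
    """IP-hash style link selection: a given flow always maps to the
    same link, so per-link load depends on the flow mix, not bandwidth."""
    key = f"{src_ip}-{dst_ip}".encode()
    return zlib.crc32(key) % n_links

# Two flows between different endpoints may or may not share a link:
print(pick_uplink("10.0.0.1", "10.0.0.9", 2),
      pick_uplink("10.0.0.1", "10.0.0.7", 2))
```

Because the mapping is deterministic per flow, a single elephant flow can saturate one link while the other sits idle, which is exactly the 90%/20% skew sketched above.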
VXLAN with HP Virtual Connect Interconnects
Virtual Connect Advantage
East/West fenced (VTEP) traffic stays in the VC domain using cross-connect or stacking links, reducing North/South bandwidth requirements.

Virtual Connect Disadvantage
Virtual Connect does not support downstream server EtherChannel or LACP connectivity.
• Limited to the vCNS Teaming Policy of 'Failover'
– Effectively an Active/Standby configuration
– Cuts North/South bandwidth efficiency in half due to the idle link
– This is not as bad as it sounds, due to the East/West traffic savings from cross-connects/stacking links

Possible Solutions?
• VC Tunnel Mode? – Does it pass link aggregation control traffic? Looks to be a NO
• Multiple Edge devices using alternating Active/Standby teaming on the VXLAN Port Group?
– Static load-distribution sucks!
• Other?
VXLAN Performance
Encapsulation Overhead
VXLAN introduces an additional layer of packet processing at the hypervisor level. For
each packet on the VXLAN network, the hypervisor needs to add protocol headers on the
sender side (encapsulation) and remove these headers (decapsulation) on the receiver
side. This adds CPU work for each packet.
Apart from this CPU overhead, some of the offload capabilities of the NIC cannot be used
because the inner packet is no longer accessible. The physical NIC hardware offload
capabilities (for example, checksum offloading and TCP segmentation offload (TSO)) have
been designed for standard (non-encapsulated) packet headers, and some of these
capabilities cannot be used for encapsulated packets. In such a case, a VXLAN enabled
packet will require CPU resources to perform a task that otherwise would have been done
more efficiently by physical NIC hardware. There are certain NIC offload capabilities that
can be used with VXLAN, but they depend on the physical NIC and the driver being used.
As a result, the performance may vary based on the hardware used when VXLAN is configured.
(http://www.vmware.com/files/pdf/techpaper/VMware-vSphere-VXLAN-Perf.pdf)
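The byte overhead alone is easy to bound (the CPU cost of lost offloads comes on top and varies by NIC, as noted above). A quick estimate, assuming a 1500-byte inner MTU and the 50-byte encapsulation overhead; `wire_efficiency` is an illustrative helper:

```python
def wire_efficiency(inner_mtu: int = 1500, overhead: int = 50) -> float:
    """Fraction of bytes on the transport link carrying the inner frame."""
    return inner_mtu / (inner_mtu + overhead)

# ~3.2% byte overhead at a 1500-byte inner MTU; jumbo frames shrink it.
print(round(wire_efficiency(), 4))          # 0.9677
print(round(wire_efficiency(9000, 50), 4))  # 0.9945
```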
VXLAN Isn’t Perfect
Compared to MAC in MAC encapsulation (vCNI), VXLAN (MAC in UDP) moves in the right direction for broadcast scalability
• Broadcasts on internal networks (“protected” with vCDNI) get translated into global
broadcasts. This behavior totally destroys scalability. In VLAN-based designs, the number of hosts
and VMs affected by a broadcast is limited by the VLAN configuration... unless you stretch VLANs all
across the data center (but then you ask for trouble). Ivan Pepelnjak

VXLAN fenced networks communicate via the VXLAN vmk adapter, which only uses a single NetQueue NIC queue. This limits scalability by concentrating CPU pressure for VXLAN traffic on a single pCPU.
The vCNS teaming policy in conjunction with Virtual Connect: VC has no downstream EtherChannel/LACP support, so VXLAN will always effectively be Active/Passive going out of the chassis. You will be limited to the bandwidth of a single upstream link per vCNS Edge device (typically per cluster).
The lack of control plane virtualization and reliance on the physical network for MAC
propagation introduces limits imposed by multicast.
– Multicast administrator expertise (not your typical data center protocol)
– Multicast segment support limits of physical network infrastructure
Thank you

More Related Content

What's hot

MP BGP-EVPN 실전기술-1편(개념잡기)
MP BGP-EVPN 실전기술-1편(개념잡기)MP BGP-EVPN 실전기술-1편(개념잡기)
MP BGP-EVPN 실전기술-1편(개념잡기)
JuHwan Lee
 
How VXLAN works on Linux
How VXLAN works on LinuxHow VXLAN works on Linux
How VXLAN works on Linux
Etsuji Nakai
 
Waris l2vpn-tutorial
Waris l2vpn-tutorialWaris l2vpn-tutorial
Waris l2vpn-tutorial
rakiva29
 

What's hot (20)

VPC PPT @NETWORKERSHOME
VPC PPT @NETWORKERSHOMEVPC PPT @NETWORKERSHOME
VPC PPT @NETWORKERSHOME
 
VXLAN
VXLANVXLAN
VXLAN
 
Bidirectional Forwarding Detection (BFD)
Bidirectional Forwarding Detection (BFD) Bidirectional Forwarding Detection (BFD)
Bidirectional Forwarding Detection (BFD)
 
Virtual Extensible LAN (VXLAN)
Virtual Extensible LAN (VXLAN)Virtual Extensible LAN (VXLAN)
Virtual Extensible LAN (VXLAN)
 
MP BGP-EVPN 실전기술-1편(개념잡기)
MP BGP-EVPN 실전기술-1편(개념잡기)MP BGP-EVPN 실전기술-1편(개념잡기)
MP BGP-EVPN 실전기술-1편(개념잡기)
 
How to configure vlan, stp, dtp step by step guide
How to configure vlan, stp, dtp step by step guideHow to configure vlan, stp, dtp step by step guide
How to configure vlan, stp, dtp step by step guide
 
CCNA CheatSheet
CCNA CheatSheetCCNA CheatSheet
CCNA CheatSheet
 
Demystifying EVPN in the data center: Part 1 in 2 episode series
Demystifying EVPN in the data center: Part 1 in 2 episode seriesDemystifying EVPN in the data center: Part 1 in 2 episode series
Demystifying EVPN in the data center: Part 1 in 2 episode series
 
Operationalizing EVPN in the Data Center: Part 2
Operationalizing EVPN in the Data Center: Part 2Operationalizing EVPN in the Data Center: Part 2
Operationalizing EVPN in the Data Center: Part 2
 
EVPN Introduction
EVPN IntroductionEVPN Introduction
EVPN Introduction
 
Huawei switch configuration commands
Huawei switch configuration commandsHuawei switch configuration commands
Huawei switch configuration commands
 
CCNP Security-Firewall
CCNP Security-FirewallCCNP Security-Firewall
CCNP Security-Firewall
 
Ccnp workbook network bulls
Ccnp workbook network bullsCcnp workbook network bulls
Ccnp workbook network bulls
 
How VXLAN works on Linux
How VXLAN works on LinuxHow VXLAN works on Linux
How VXLAN works on Linux
 
ccna cheat_sheet
ccna cheat_sheetccna cheat_sheet
ccna cheat_sheet
 
VPLS Fundamental
VPLS FundamentalVPLS Fundamental
VPLS Fundamental
 
MPLS L3 VPN Deployment
MPLS L3 VPN DeploymentMPLS L3 VPN Deployment
MPLS L3 VPN Deployment
 
Waris l2vpn-tutorial
Waris l2vpn-tutorialWaris l2vpn-tutorial
Waris l2vpn-tutorial
 
Deploying Carrier Ethernet features on ASR 9000
Deploying Carrier Ethernet features on ASR 9000Deploying Carrier Ethernet features on ASR 9000
Deploying Carrier Ethernet features on ASR 9000
 
Juniper mpls best practice part 1
Juniper mpls best practice   part 1Juniper mpls best practice   part 1
Juniper mpls best practice part 1
 

Viewers also liked

CCNA Basic Switching and Switch Configuration
CCNA Basic Switching and Switch ConfigurationCCNA Basic Switching and Switch Configuration
CCNA Basic Switching and Switch Configuration
Dsunte Wilson
 
CCNA Introducing Networks
CCNA Introducing NetworksCCNA Introducing Networks
CCNA Introducing Networks
Dsunte Wilson
 

Viewers also liked (6)

CCNA 1 Routing and Switching v5.0 Chapter 3
CCNA 1 Routing and Switching v5.0 Chapter 3CCNA 1 Routing and Switching v5.0 Chapter 3
CCNA 1 Routing and Switching v5.0 Chapter 3
 
CCNA 1 Routing and Switching v5.0 Chapter 2
CCNA 1 Routing and Switching v5.0 Chapter 2CCNA 1 Routing and Switching v5.0 Chapter 2
CCNA 1 Routing and Switching v5.0 Chapter 2
 
CCNA Basic Switching and Switch Configuration
CCNA Basic Switching and Switch ConfigurationCCNA Basic Switching and Switch Configuration
CCNA Basic Switching and Switch Configuration
 
CCNA Introducing Networks
CCNA Introducing NetworksCCNA Introducing Networks
CCNA Introducing Networks
 
CCNA 1 Routing and Switching v5.0 Chapter 4
CCNA 1 Routing and Switching v5.0 Chapter 4CCNA 1 Routing and Switching v5.0 Chapter 4
CCNA 1 Routing and Switching v5.0 Chapter 4
 
CCNA 1 Routing and Switching v5.0 Chapter 1
CCNA 1 Routing and Switching v5.0 Chapter 1CCNA 1 Routing and Switching v5.0 Chapter 1
CCNA 1 Routing and Switching v5.0 Chapter 1
 

Similar to VXLAN Practice Guide

NET4933_vDS_Best_Practices_For_NSX_Francois_Tallet_Shahzad_Ali
NET4933_vDS_Best_Practices_For_NSX_Francois_Tallet_Shahzad_AliNET4933_vDS_Best_Practices_For_NSX_Francois_Tallet_Shahzad_Ali
NET4933_vDS_Best_Practices_For_NSX_Francois_Tallet_Shahzad_Ali
shezy22
 
Atf 3 q15-4 - scaling the the software driven cloud network
Atf 3 q15-4 - scaling the the software driven cloud networkAtf 3 q15-4 - scaling the the software driven cloud network
Atf 3 q15-4 - scaling the the software driven cloud network
Mason Mei
 

Similar to VXLAN Practice Guide (20)

VMworld 2013: vSphere Networking and vCloud Networking Suite Best Practices a...
VMworld 2013: vSphere Networking and vCloud Networking Suite Best Practices a...VMworld 2013: vSphere Networking and vCloud Networking Suite Best Practices a...
VMworld 2013: vSphere Networking and vCloud Networking Suite Best Practices a...
 
NSX-MH
NSX-MHNSX-MH
NSX-MH
 
VMworld 2013: Operational Best Practices for NSX in VMware Environments
VMworld 2013: Operational Best Practices for NSX in VMware Environments VMworld 2013: Operational Best Practices for NSX in VMware Environments
VMworld 2013: Operational Best Practices for NSX in VMware Environments
 
NET4933_vDS_Best_Practices_For_NSX_Francois_Tallet_Shahzad_Ali
NET4933_vDS_Best_Practices_For_NSX_Francois_Tallet_Shahzad_AliNET4933_vDS_Best_Practices_For_NSX_Francois_Tallet_Shahzad_Ali
NET4933_vDS_Best_Practices_For_NSX_Francois_Tallet_Shahzad_Ali
 
PLNOG16: VXLAN Gateway, efektywny sposób połączenia świata wirtualnego z fizy...
PLNOG16: VXLAN Gateway, efektywny sposób połączenia świata wirtualnego z fizy...PLNOG16: VXLAN Gateway, efektywny sposób połączenia świata wirtualnego z fizy...
PLNOG16: VXLAN Gateway, efektywny sposób połączenia świata wirtualnego z fizy...
 
OVHcloud Hosted Private Cloud Platform Network use cases with VMware NSX
OVHcloud Hosted Private Cloud Platform Network use cases with VMware NSXOVHcloud Hosted Private Cloud Platform Network use cases with VMware NSX
OVHcloud Hosted Private Cloud Platform Network use cases with VMware NSX
 
VMworld 2013: Advanced VMware NSX Architecture
VMworld 2013: Advanced VMware NSX Architecture VMworld 2013: Advanced VMware NSX Architecture
VMworld 2013: Advanced VMware NSX Architecture
 
VMworld 2016: How to Deploy VMware NSX with Cisco Infrastructure
VMworld 2016: How to Deploy VMware NSX with Cisco InfrastructureVMworld 2016: How to Deploy VMware NSX with Cisco Infrastructure
VMworld 2016: How to Deploy VMware NSX with Cisco Infrastructure
 
VMware vSphere 6.0 - Troubleshooting Training - Day 3
VMware vSphere 6.0 - Troubleshooting Training - Day 3 VMware vSphere 6.0 - Troubleshooting Training - Day 3
VMware vSphere 6.0 - Troubleshooting Training - Day 3
 
VMware Advance Troubleshooting Workshop - Day 3
VMware Advance Troubleshooting Workshop - Day 3VMware Advance Troubleshooting Workshop - Day 3
VMware Advance Troubleshooting Workshop - Day 3
 
Atf 3 q15-4 - scaling the the software driven cloud network
Atf 3 q15-4 - scaling the the software driven cloud networkAtf 3 q15-4 - scaling the the software driven cloud network
Atf 3 q15-4 - scaling the the software driven cloud network
 
VMworld 2013: Designing Network Virtualization for Data-Centers: Greenfield D...
VMworld 2013: Designing Network Virtualization for Data-Centers: Greenfield D...VMworld 2013: Designing Network Virtualization for Data-Centers: Greenfield D...
VMworld 2013: Designing Network Virtualization for Data-Centers: Greenfield D...
 
VMworld 2013: NSX PCI Reference Architecture Workshop Session 1 - Segmentation
VMworld 2013: NSX PCI Reference Architecture Workshop Session 1 - SegmentationVMworld 2013: NSX PCI Reference Architecture Workshop Session 1 - Segmentation
VMworld 2013: NSX PCI Reference Architecture Workshop Session 1 - Segmentation
 
PLNOG15: Is there something less complicated than connecting two LAN networks...
PLNOG15: Is there something less complicated than connecting two LAN networks...PLNOG15: Is there something less complicated than connecting two LAN networks...
PLNOG15: Is there something less complicated than connecting two LAN networks...
 
Midokura OpenStack Meetup Taipei
Midokura OpenStack Meetup TaipeiMidokura OpenStack Meetup Taipei
Midokura OpenStack Meetup Taipei
 
Nexus 1000_ver 1.1
Nexus 1000_ver 1.1Nexus 1000_ver 1.1
Nexus 1000_ver 1.1
 
VMworld 2015: VMware NSX Deep Dive
VMworld 2015: VMware NSX Deep DiveVMworld 2015: VMware NSX Deep Dive
VMworld 2015: VMware NSX Deep Dive
 
VMworld 2015: VMware NSX Deep Dive
VMworld 2015: VMware NSX Deep DiveVMworld 2015: VMware NSX Deep Dive
VMworld 2015: VMware NSX Deep Dive
 
[OpenStack Day in Korea 2015] Track 3-6 - Archiectural Overview of the Open S...
[OpenStack Day in Korea 2015] Track 3-6 - Archiectural Overview of the Open S...[OpenStack Day in Korea 2015] Track 3-6 - Archiectural Overview of the Open S...
[OpenStack Day in Korea 2015] Track 3-6 - Archiectural Overview of the Open S...
 
VMworld 2014: vSphere Distributed Switch
VMworld 2014: vSphere Distributed SwitchVMworld 2014: vSphere Distributed Switch
VMworld 2014: vSphere Distributed Switch
 

Recently uploaded

Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers:  A Deep Dive into Serverless Spatial Data and FMECloud Frontiers:  A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
Safe Software
 

Recently uploaded (20)

Polkadot JAM Slides - Token2049 - By Dr. Gavin Wood
Polkadot JAM Slides - Token2049 - By Dr. Gavin WoodPolkadot JAM Slides - Token2049 - By Dr. Gavin Wood
Polkadot JAM Slides - Token2049 - By Dr. Gavin Wood
 
MINDCTI Revenue Release Quarter One 2024
MINDCTI Revenue Release Quarter One 2024MINDCTI Revenue Release Quarter One 2024
MINDCTI Revenue Release Quarter One 2024
 
Top 5 Benefits OF Using Muvi Live Paywall For Live Streams
Top 5 Benefits OF Using Muvi Live Paywall For Live StreamsTop 5 Benefits OF Using Muvi Live Paywall For Live Streams
Top 5 Benefits OF Using Muvi Live Paywall For Live Streams
 
Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024
 
TrustArc Webinar - Unlock the Power of AI-Driven Data Discovery
TrustArc Webinar - Unlock the Power of AI-Driven Data DiscoveryTrustArc Webinar - Unlock the Power of AI-Driven Data Discovery
TrustArc Webinar - Unlock the Power of AI-Driven Data Discovery
 
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
 
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
 
Manulife - Insurer Innovation Award 2024
Manulife - Insurer Innovation Award 2024Manulife - Insurer Innovation Award 2024
Manulife - Insurer Innovation Award 2024
 
🐬 The future of MySQL is Postgres 🐘
🐬  The future of MySQL is Postgres   🐘🐬  The future of MySQL is Postgres   🐘
🐬 The future of MySQL is Postgres 🐘
 
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
 
Exploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone ProcessorsExploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone Processors
 
Scaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organizationScaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organization
 
Boost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivityBoost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivity
 
GenAI Risks & Security Meetup 01052024.pdf
GenAI Risks & Security Meetup 01052024.pdfGenAI Risks & Security Meetup 01052024.pdf
GenAI Risks & Security Meetup 01052024.pdf
 
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers:  A Deep Dive into Serverless Spatial Data and FMECloud Frontiers:  A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
 
A Year of the Servo Reboot: Where Are We Now?
A Year of the Servo Reboot: Where Are We Now?A Year of the Servo Reboot: Where Are We Now?
A Year of the Servo Reboot: Where Are We Now?
 
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law DevelopmentsTrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
 
Apidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, Adobe
Apidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, AdobeApidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, Adobe
Apidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, Adobe
 
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time AutomationFrom Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
 
Partners Life - Insurer Innovation Award 2024
Partners Life - Insurer Innovation Award 2024Partners Life - Insurer Innovation Award 2024
Partners Life - Insurer Innovation Award 2024
 

VXLAN Practice Guide

  • 1. A Practice Guide to vCNS and VXLAN Technical Overview and Design Guide Prasenjit Sarkar – VMware Hongjun Ma – HP Andy Grant – HP
  • 2. Agenda What will we focus on High level overview how VXLAN works VXLAN implementation using vCNS including • Infrastructure Components • Packet Flow Deployment Prerequisites Network Considerations • Multicast requirements • Multicast implementation VTEP Performance and Overhead • HP Virtual Connect & load-balancing
  • 3. VXLAN Introduction Target Audience Architects, Engineers, Consultants, Admins responsible for Data Center Infrastructure and VMware virtualization technologies What is VXLAN VXLAN - Virtual eXtensible Local Area Network is a network overlay that encapsulates layer 2 traffic within layer 3 • Submitted it IETF by Cisco, VMware, Citrix, Red Hat, Broadcom, & Arista. • Coined network virtualization or ‘virtual wires’ by VMware Competing Solutions? NVGRE - Network Virtualization using Generic Routing Encapsulation • Submitted to IETF by Microsoft, Arista, Intel, Dell, HP, Broadcom, Emulex SST - Stateless Transport Tunneling • Submitted to IETF by Nicira (VMware)
  • 4. VXLAN Introduction Why VXLAN? • • • • • Ability to manage overlapping addresses between multiple tenants Decoupling of the virtual topology provided by the tunnels from the physical topology of the network Support for virtual machine mobility independent of the physical network Support for essentially unlimited numbers of virtual networks (in contrast to VLANs, for example) Decoupling of the network service provided to servers from the technology used in the physical network (e.g. providing an L2 service over an L3 fabric) • Isolating the physical network from the addressing of the virtual networks, thus avoiding issues such as MAC table size in physical switches. • VXLAN provides up to 16 million virtual networks in contrast to the 4094 limit of VLAN’s • Application agnostic, all work is performed in the ESXi host. Where are we today? • • VXLAN still in experimental status in IETF Primarily targeted in vCloud environments but standalone product available.
  • 5. VXLAN Introduction How VXLAN? • VMware vSphere ESXi 5.1 AND – vCloud Networking Security 5.1 Edge OR – Cisco Nexus 1000V VMware vCloud Networking and Security Edge • Available vCNS deployment options – Standalone (licensed per VM) – AutoDeploy • Deploying VXLAN through Auto Deploy – vCloud Director 5.1 (licensed in vCloud Suite) • Currently tested to support 5000 VXLAN segments – vCloud Networking and Security 5.1 Edge configuration limits and throughput Cisco Nexus 1000V • Currently tested to support 2000 VXLAN segments – Deploying the VXLAN Feature in Cisco Nexus 1000V Series Switches
  • 6. Network Virtualization Conceptual View Analogy between computer virtualization and network virtualization (overlay transport)
  • 7. vCloud Networking and Security - Edge What is vCloud Networking and Security Edge? Part of the VMware vCloud Networking and Security suite • Previously known as the vShield suite. • Provides gateway services including – VPN – DHCP – DNS – NAT – Firewall (5 tuple) – VXLAN & inter-VXLAN routing – Load-Balancing (Advanced License) – High Availability (Advanced License) Licensing Options – Standalone per-VM Standard or Advanced licensing – Bundled with vCloud Suite
  • 8. VXLAN: How it works What is vCloud Networking and Security Edge? Part of the VMware vCloud Networking and Security suite • Encapsulation – Performed by a kernel module installed on ESXi host • Acts as the Virtual Tunnel End Point or VTEP – Adds 24bit identifier and 50 bytes to packet size. – MAC in UDP + IP • MAC in UDP + IP – Why MAC in IP is better than vCNI (MAC in MAC) • Multicast – Where it is used, how this impacts scalability
  • 9. vCNS + Edge + VXLAN: Prerequisites What is vCloud Networking and Security Edge? Part of the VMware vCloud Networking and Security suite • Previously known as the vShield suite. • Highly integrated with vCloud but vCD is not necessary with standalone licenses. VXLAN + vCNS Edge requires; • Physical network components; • • • MTU increase (1550 MIN) Multicast enabled (depending on topology, more to come) VMware components; • • • • vDS 5.1 (implies vSphere Enterprise Plus licensing & vCenter) A vCNS Manager A vCNS Edge VMware recommends • • • a single vDS across all clusters. you isolate your VTEP traffic from VM VLAN’s Etherchannel or LACP to your host for the VXLAN transport Port Group
  • 10. Multicast What needs to be enabled on HP or Cisco switches? What are the multicast design considerations? • Limits of physical network hardware platforms using multicast – Cisco Nexus 7000 supports 15,000 L2 IGMP entries (http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9402/ps9512/brochure_mulitcast_w ith_cisco_nexus_7000.pdf) – Cisco Nexus 7000 supports 32,000 MC entries (15K vPC) (http://www.cisco.com/en/US/docs/switches/datacenter/sw/verified_scalability/b_Cisco_Nexus_700 0_Series_NXOS_Verified_Scalability_Guide.html#reference_04BA8513CF3140D2A2A6C5E5B4E7C60C) – Check HP gear limits. – So what do these limits mean? – VMware recommends one VXLAN ‘virtual wire’ per MC segment therefore we can only support this for up to 15K or 32K? • If we don’t follow this recommendation, how does this impact a VM broadcast flooding other VTEP’s w/ multicast traffic? • Is better to use IGMP snooping/querier (L2 topology) or PIM w/ L3 topology? – How does this impact Data Center Interconnects (DCI) and stretched VXLAN implementations?
  • 11. VXLAN Logical View Packet flow across virtual wires on the same layer 2 VXLAN transport network • • • VXLAN Fabric vDS • Layer 2 List Pro/Con’s here Multicast configuration options • IGMP snooping/querier Explain how they work in next slide Design considerations? • Eg. Broadcast storms?
• 12. VXLAN Logical View
Packet flow across virtual wires on different layer 3 VXLAN transport networks
• VXLAN fabric on the vDS, layer 3 transport
• List pros/cons here
• Multicast configuration options
  • PIM (explain how they work in next slide)
• Design considerations?
  • E.g. broadcast storms?
• 13. High Level Physical Deployment
(Diagram: four ESXi hosts, each with a VTEP, attached to a vSphere Distributed Switch in the VXLAN fabric)
Solution components
• vDS 5.1
• VXLAN virtual fabric
• VTEP (vmk adapter in a dedicated Port Group)
• vCNS Edge 5.1
• vCNS Manager 5.1
• 14. Physical Deployment – A Closer Look
• vCNS Manager manages the vCNS deployment and supports many Edge devices.
• VTEP is a single vmkernel interface per host, automatically created on the VXLAN vDS Port Group.
• LACP, EtherChannel, or (static) failover are the only supported load balancing methods.
• VLAN 'trunking' or virtual switch tagging (VST) is not recommended. Dedicate 'access' physical uplinks to VXLAN Port Groups.
• vCNS Edge virtual appliance provides gateway services.
• 15. Physical Deployment – Intra-Host Packet Flow
VM packet flow
1. VM sends packet to a destination on the same virtual wire and the same host
2. Packet hits the vDS and is forwarded to the destination VM
• 16. Physical Deployment – Inter-Host Packet Flow
VM packet flow
1. VM sends packet to a remote destination on the same virtual wire
2. Destination VM is remote, so the packet must traverse the VXLAN network
3. ESXi host encapsulates the packet and transmits it via the VTEP vmkernel adapter
4. Target ESXi host running the destination VM receives the packet on its VTEP and forwards it to the VM
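The encapsulation in step 3 prepends an 8-byte VXLAN header (a flags byte plus a 24-bit VNI identifying the virtual wire) before the outer UDP/IP/Ethernet headers. A minimal sketch in Python (the VNI value used below is hypothetical):

```python
import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned destination port for VXLAN

def vxlan_header(vni: int) -> bytes:
    """8-byte VXLAN header: flags byte with the 'I' (valid-VNI) bit set,
    3 reserved bytes, 24-bit VNI, 1 reserved byte."""
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits")
    return struct.pack("!B3s3sB", 0x08, b"\x00" * 3, vni.to_bytes(3, "big"), 0)

def encapsulate(inner_frame: bytes, vni: int) -> bytes:
    """Prepend the VXLAN header to the original L2 frame; the outer UDP,
    IP, and Ethernet headers are then added by the transport stack."""
    return vxlan_header(vni) + inner_frame
```

The 24-bit VNI is where the "16 million virtual networks" figure from the introduction comes from (2^24 vs the 12-bit VLAN ID's 4094).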
• 17. Physical Deployment – Routed Packet Flow
VM packet flow
1. VM transmits packet to a remote destination
2. VTEP kernel module in the ESXi host encapsulates the packet and transmits it on the VXLAN network
3. ESXi host running the Edge device receives the packet and processes it through the rule engine
4. Packet is processed by firewall/NAT/routing rules and sent out the external interface on the Edge device
5. Packet hits the physical network infrastructure
• 18. Comparison of vSphere NIC Teaming
Load Distribution vs Load Balancing vs Active/Standby
vCNS Edge supports LACP & EtherChannel or Failover ("aka Active/Standby") NIC teaming options.
• Load Distribution of IP flows (LACP): attempts to evenly distribute IP traffic flows (conversations); bandwidth is NOT a consideration (e.g. one link at 90% load, the other at 20%).
• Load Balancing of bandwidth (LBT): attempts to evenly distribute bandwidth capacity (e.g. links at 55% and 40% load).
• Active/Standby: single active link, no automatic load distribution/balancing (e.g. links at 100% and 0% load).
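The "distribution of IP flows" row can be illustrated with a toy hash-based uplink picker; this is an illustrative sketch, not vSphere's exact IP-hash algorithm:

```python
import ipaddress

def pick_uplink(src_ip: str, dst_ip: str, n_uplinks: int) -> int:
    """Hash the IP pair onto an uplink index. Every packet of a flow
    lands on the same link, which avoids reordering -- but actual link
    utilization is never consulted, so a few heavy flows can leave the
    team unevenly loaded (the 90%/20% situation on the slide)."""
    s = int(ipaddress.ip_address(src_ip))
    d = int(ipaddress.ip_address(dst_ip))
    return (s ^ d) % n_uplinks
```

Bandwidth-aware schemes like Load Based Teaming instead watch per-uplink utilization and migrate flows when a link stays saturated.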
• 19. VXLAN with HP Virtual Connect Interconnects
Virtual Connect advantage
• East/West fenced (VTEP) traffic stays in the VC domain using cross-connect or stacking links, reducing North/South bandwidth requirements.
Virtual Connect disadvantage
• Virtual Connect does not support downstream server EtherChannel or LACP connectivity.
  • Limited to the vCNS teaming policy of "Failover"
  • Effectively an Active/Standby configuration
  • Cuts North/South bandwidth efficiency in half due to the idle link
  • This is not as bad as it sounds, thanks to the East/West traffic savings from cross-connects/stacking links
Possible solutions?
• VC Tunnel Mode? Does it pass link aggregation control traffic? Looks to be a NO.
• Multiple Edge devices using alternating Active/Standby teaming on the VXLAN Port Group?
  • Static load-distribution sucks!
• Other?
• 20. VXLAN Performance
Encapsulation overhead
VXLAN introduces an additional layer of packet processing at the hypervisor level. For each packet on the VXLAN network, the hypervisor adds protocol headers on the sender side (encapsulation) and removes them on the receiver side (decapsulation), costing the CPU extra work per packet.
Beyond this CPU overhead, some NIC offload capabilities cannot be used because the inner packet is no longer accessible. Physical NIC hardware offloads (for example, checksum offloading and TCP segmentation offload (TSO)) were designed for standard, non-encapsulated packet headers, and some cannot be applied to encapsulated packets. In such cases a VXLAN packet consumes CPU resources for work that would otherwise have been done more efficiently in NIC hardware. Certain NIC offloads can still be used with VXLAN, but they depend on the physical NIC and driver, so performance may vary with the hardware in use when VXLAN is configured.
http://www.vmware.com/files/pdf/techpaper/VMware-vSphere-VXLAN-Perf.pdf
• 21. VXLAN Isn’t Perfect
Compared to MAC-in-MAC encapsulation (vCDNI), VXLAN (MAC-in-UDP) moves in the right direction for broadcast scalability:
"Broadcasts on internal networks ('protected' with vCDNI) get translated into global broadcasts. This behavior totally destroys scalability. In VLAN-based designs, the number of hosts and VMs affected by a broadcast is limited by the VLAN configuration... unless you stretch VLANs all across the data center (but then you ask for trouble)." – Ivan Pepelnjak
VXLAN-fenced networks communicate via the VXLAN vmk adapter, which uses only a single NetQueue NIC queue. This limits scalability by concentrating the host's encapsulation load on a single pCPU.
vCNS teaming policy in conjunction with Virtual Connect: VC has no downstream EtherChannel/LACP support, so VXLAN will always effectively be Active/Passive going out of the chassis. You are limited to the bandwidth of a single upstream link per vCNS Edge device (typically per cluster).
The lack of control plane virtualization and the reliance on the physical network for MAC propagation introduce limits imposed by multicast:
– Multicast administrator expertise (not your typical data center protocol)
– Multicast segment support limits of the physical network infrastructure
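The multicast limits above bite because each virtual wire (VNI) needs a multicast group for flooding unknown/broadcast traffic; when the switch supports fewer groups than you have wires, VNIs must share groups, and VTEPs then receive (and discard) flooded traffic for segments they don't host. A hypothetical many-to-one mapping sketch (the base address and pool size are illustrative, not a VMware default):

```python
import ipaddress

def vni_to_group(vni: int, base: str = "239.1.0.0", pool_size: int = 256) -> str:
    """Map a 24-bit VNI onto a limited pool of multicast groups.
    With pool_size smaller than the number of virtual wires, distinct
    VNIs collide onto the same group -- trading switch table space for
    unnecessary flooded traffic at uninvolved VTEPs."""
    base_int = int(ipaddress.ip_address(base))
    return str(ipaddress.ip_address(base_int + vni % pool_size))
```

A one-group-per-wire mapping (VMware's recommendation) avoids the collisions but runs into the 15K/32K hardware entry limits quoted on the multicast slide.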