VMworld 2016: How to Deploy VMware NSX with Cisco Infrastructure
1. How to Deploy VMware NSX with Cisco Infrastructure
Ron Fuller, VMware
Paul Mancuso, VMware
NET8364R
#NET8364R
2. Growing NSX Momentum
A rapid journey of customer adoption across industries
• 1,700+ customers
• 8 of VMware's top 10 deals in Q2 2016 included NSX
• 100% year-over-year growth, consistent through Q2 2016
3. NSX Customer Use Cases
• Security – inherently secure infrastructure: micro-segmentation, DMZ anywhere, secure end user
• Automation – IT at the speed of business: IT automating IT, multi-tenant infrastructure, developer cloud
• Application continuity – data center anywhere: disaster recovery, cross cloud, multi data center pooling
4. Disclaimer
• This presentation may contain product features that are currently under development.
• This overview of new technology represents no commitment from VMware to deliver these features in any generally available product.
• Features are subject to change and must not be included in contracts, purchase orders, or sales agreements of any kind.
• Technical feasibility and market demand will affect final delivery.
• Pricing and packaging for any new technologies or features discussed or presented have not been determined.
5. Session Abstract
Many enterprises rely on both VMware vSphere and Cisco Nexus/UCS to build the foundation of their data center infrastructure. While VMware NSX brings advanced network automation and security capabilities to vSphere on any network infrastructure, this session covers the NSX design considerations specific to environments using Cisco Nexus switches for the physical network and Cisco UCS for the vSphere compute resources. The session also shows how to run NSX while utilizing the underlay functionality of Cisco ACI, reviews the most important topics from the two published VMware NSX + Cisco Nexus/ACI design guides, and adds "lessons learned" from real deployments in the field since those publications.
6. Session Goals
• This session complements the NSX & Cisco design guide and the Deploying NSX with Cisco ACI design guide (see the NSX Design Guides slide at the end for links)
• Understand how to implement a Nexus- and ACI-based design with NSX
• Covers the Cisco Nexus 9K in both NX-OS mode and ACI mode
7. Ron Fuller
• Staff Systems Engineer – NSBU
• Cisco Certified Internetwork Expert (CCIE)
#5851 Emeritus (R&S/Storage)
• More than 20 years of networking experience
• 9 Years at Cisco – SE/CSE/Data Center
TSA/TME
• Cisco Press published author
• VMware Press published author
8. Paul Mancuso
Technical Product Manager – NSBU
• VCDX#159
• VCIX-NV
• VCI
• CCSI
• CCNP; CCNP Data Center
• MCSE
• CISSP
• 25 years of networking experience
• Author of 5 books
• Author of over 3 dozen courses on Cisco and Microsoft
14. VMkernel Networking – L2 or L3 Topology
The vSphere host (ESXi) connects over 802.1Q VLAN trunk uplinks; the uplinks are Layer 2 or Layer 3 depending on the topology. Example VMkernel addressing, with the default gateway on the ToR SVI:
VLAN 66 – Mgmt: 10.66.1.2/26, DGW 10.66.1.1 (SVI 66: 10.66.1.1/26)
VLAN 77 – vMotion: 10.77.1.2/26, DGW 10.77.1.1 (SVI 77: 10.77.1.1/26)
VLAN 88 – VXLAN: 10.88.1.2/26, DGW 10.88.1.1 (SVI 88: 10.88.1.1/26)
VLAN 99 – Storage: 10.99.1.2/26, DGW 10.99.1.1 (SVI 99: 10.99.1.1/26)
15. VLAN Requirements for NSX
• At minimum four VLANs for compute clusters
• At minimum two additional VLANs for N-S traffic, on edge hosts only
• P-V (physical-to-virtual) bridging VLANs as needed
• The VLAN ID used for VXLAN must be consistent across the transport zone
• Dedicated TCP/IP stacks are used for VXLAN and vMotion; the "default" stack is used for everything else
• Starting with vSphere 6.0, a routed VMkernel network for vMotion no longer requires an RPQ

Example VLAN plan (VLAN values shown are for example use; a ToR port sketch follows below):
Traffic Type | Function | VLAN ID | Cluster
Management | ESXi and NSX management | 100 | Compute and edge
vMotion | VM mobility | 101 | Compute and edge
IP Storage | Application and infrastructure datastore connectivity | 102 | Compute and edge
VXLAN | Guest VM traffic | 103 | Compute and edge
N-S Traffic | Route peering | 105/106 | Edge only
Bridging (optional) | Migration and non-IP traffic | 107 | Edge only
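To make the switch side concrete, here is a minimal NX-OS sketch of a ToR host-facing port carrying the example compute VLANs from the table above (the interface number and VLAN range are assumptions for illustration):

interface Ethernet1/10
  description trunk to ESXi host vmnic
  switchport mode trunk
  switchport trunk allowed vlan 100-103
  spanning-tree port type edge trunk
  mtu 9216
  no shutdown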
16. NSX VXLAN Capability – No Multicast Required
• Standard VXLAN requires multicast on the physical network
– L2 network – relatively easy: IGMP snooping & querier
– L3 network – additionally requires PIM
• NSX offers replication modes (unicast and hybrid) for which multicast configuration is not required
• The L2 frame is encapsulated at the hypervisor, reducing complex configuration on the physical network
• The VTEP (VXLAN Tunnel EndPoint) is a logical (VMkernel) interface that connects to the transport zone to encapsulate/decapsulate VXLAN traffic

Packet walk (shown between two Nexus 93XX switches in NX-OS mode):
1. The VM sends a standard L2 frame.
2. The source hypervisor (VTEP) encapsulates it with VXLAN, UDP & IP headers.
3. The physical network forwards the frame as a standard IP frame.
4. The destination hypervisor (VTEP) decapsulates the headers.
5. The original L2 frame is delivered to the VM.
Encapsulated frame layout: outer L2 | outer IP | UDP | VXLAN | original L2 frame (payload)
17. Transport Zone, VTEP, Logical Networks and VDS
• Transport zone: a collection of VXLAN-prepared ESXi clusters
• The VTEP is a VMkernel interface created dynamically during cluster VXLAN preparation
– Belongs to a specific VLAN-backed port group
– The vDS uplink configuration must be consistent across the vDS
– The VLAN ID for the VXLAN transport must be the same
• Only one vDS per ESXi cluster can be enabled for VXLAN
• A logical switch is represented as a dvPortgroup and may span multiple vDS
• The vDS is licensed with NSX on any vSphere version
– Enterprise Plus is required for non-NSX use of vSphere hosts
• The vDS is the ONLY vSwitch that supports NSX and MUST be managed by vSphere
• N1Kv is not supported, but may co-exist for non-NSX cluster usage
Figure: two vSphere hosts on the VXLAN transport network, with VTEP1/VTEP2 (10.20.10.10/.11) on host 1 and VTEP3/VTEP4 (10.20.10.12/.13) on host 2; VMs MAC1–MAC4 attach to VXLAN 5002 across a compute VDS and an edge VDS.
18. VDS Uplink Design
• NSX creates dvUplink port groups for VXLAN-enabled hosts; this uplink connectivity carries VXLAN traffic
• Must be consistent for all hosts belonging to the VDS
• Must carry the same teaming policy
– For VXLAN traffic, the choice of teaming mode depends on simplicity and bandwidth requirements
– The recommended teaming mode with standard rack servers depends on functional goals, as listed in the table
– The recommended teaming mode with UCS blade systems is "Route Based on Originating Port" (LACP is not possible from a UCS blade)
– Separate VDSs for compute and edge allow flexible teaming modes for uplink configuration

Teaming and Failover Mode | NSX Support | Multi-VTEP Support | Uplink Behavior (2 x 10G) | Nexus Port Configuration
Route Based on Originating Port | Yes | Yes | Both active | Standard
Route Based on Source MAC Hash | Yes | Yes | Both active | Standard
LACP | Yes | No | Flow based, both active | vPC port-channel, LACP (sketch below)
Route Based on IP Hash (static EtherChannel) | Yes | No | Flow based, both active | vPC port-channel, LACP mode off
Explicit Failover Order | Yes | No | Only one link active | Standard
Route Based on Physical NIC Load (LBT) | No | No | N/A | Standard
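Where the table calls for a vPC port-channel with LACP, a minimal NX-OS sketch on one switch of the vPC pair might look like the following (the domain ID, port-channel numbers, keepalive addresses, and VLAN range are assumptions for illustration):

feature lacp
feature vpc
vpc domain 10
  peer-keepalive destination 192.0.2.2 source 192.0.2.1
interface port-channel10
  description vPC peer-link
  switchport mode trunk
  vpc peer-link
interface port-channel11
  description vPC-attached LACP bundle toward the host/FI
  switchport mode trunk
  switchport trunk allowed vlan 100-103
  vpc 11
interface Ethernet1/10
  description member link of port-channel 11
  switchport mode trunk
  channel-group 11 mode active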
20. NSX Switching Overview and Requirements
NSX is AGNOSTIC to the underlay network topology: L2, L3, or any combination, regardless of the switched infrastructure.
Only two requirements:
1) MTU of 1600 (or greater)
2) IP connectivity
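The 1600-byte figure comes from VXLAN encapsulation overhead; a rough worked breakdown for an IPv4 underlay with an 802.1Q-tagged inner frame:

1500 (inner payload) + 14 (inner Ethernet) + 4 (inner 802.1Q tag)
+ 8 (VXLAN) + 8 (UDP) + 20 (outer IPv4) + 14 (outer Ethernet) = 1568 bytes

so an underlay MTU of 1600 or more leaves comfortable headroom.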
21. Jumbo MTU Considerations
• VXLAN-encapsulated traffic is a UDP frame of 1600 bytes or more
• The VDS maximum MTU is 9000 bytes
• Nexus 7xxx and 9xxx series
– L2 requires only global configuration
– L3 requires a per-interface MTU change
• Nexus 5xxx, 56xx, and 6xxx series
– MTU is changed with a policy map
– L3 requires a per-interface MTU change
• All links belonging to the fabric must be enabled for jumbo MTU

Nexus 7000 / 9000 series – Layer 2 interface:
system jumbomtu 9216   ! global configuration
interface Ethernet1/9
  description to esx-vmnic3-VMK
  switchport mode trunk
  switchport trunk allowed vlan 22-25
  spanning-tree port type edge trunk
  mtu 9216             ! Layer 2 MTU
  channel-group 9 mode active

Layer 3 interfaces:
interface Vlan151      ! SVI interface
  description VXLAN Transport Zone
  mtu 9216
  no ip redirects
  ip address 10.114.221.34/27
  hsrp 1
    ip 10.114.221.33
  no shutdown

interface Ethernet2/12 ! routed link to spine
  description L3 Link to Spine
  no switchport
  speed 40000
  duplex full
  mtu 9216
  ip address 10.114.211.117/31
  no shutdown
22. Jumbo MTU Considerations (continued)
Same considerations as the previous slide; configuration for the Nexus 5xxx and 6xxx series. All L2 interfaces take the jumbo MTU from a global policy map:

policy-map type network-qos jumbo
  class type network-qos class-default
    mtu 9216

Apply the policy map:
system qos
  service-policy type network-qos jumbo

Layer 3 interfaces (per interface, as on the previous slide):
interface Vlan151      ! SVI interface
  description VXLAN Transport Zone
  mtu 9216
  no ip redirects
  ip address 10.114.221.34/27
  hsrp 1
    ip 10.114.221.33
  no shutdown

interface Ethernet2/12 ! routed link to spine
  description L3 Link to Spine
  no switchport
  speed 40000
  duplex full
  mtu 9216
  ip address 10.114.211.117/31
  no shutdown
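Before enabling VXLAN it is worth validating jumbo MTU end to end. A minimal check from the switch side is a don't-fragment ping at a VXLAN-sized packet (addresses assumed from the examples above); from ESXi, a vmkping against the VXLAN netstack serves the same purpose:

ping 10.114.221.35 source 10.114.221.34 packet-size 1600 df-bit count 5
! from ESXi: vmkping ++netstack=vxlan -d -s 1572 <remote VTEP IP>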
23. Cisco DC Topologies – Pod Design – NSX is Agnostic
Figure: two pods (POD A and POD B), each with a pair of 95xx aggregation switches and 93xx ToRs (rack 1 … rack N) connected to an L3 core, with UCS B-Series compute clusters A and B. The slide shows two variants: one with the L2/L3 boundary at the 95xx pod aggregation, and one with the boundary at each 93xx ToR. VLAN IDs 100, 101 & 102 are scoped to POD A and VLAN IDs 200, 201 & 202 to POD B, while VXLAN VLAN ID 103 (the transport zone scope) extends across ALL pods/clusters.

VLANs & IP subnets defined at each ToR (R_ID = rack ID):
SVI Interface | VLAN ID | IP Subnet
Management | 100 | 10.100.R_ID.x/24
vMotion | 101 | 10.101.R_ID.x/24
Storage | 102 | 10.102.R_ID.x/24
VXLAN | 103 | 10.103.R_ID.x/24

VLANs & IP subnets defined at the 95xx for POD A:
SVI Interface | VLAN ID | IP Subnet
Management | 100 | 10.100.A.x/24
vMotion | 101 | 10.101.A.x/24
Storage | 102 | 10.102.A.x/24
VXLAN | 103 | 10.103.A.x/24

VLANs & IP subnets defined at the 95xx for POD B:
SVI Interface | VLAN ID | IP Subnet
Management | 200 | 10.200.B.x/24
vMotion | 201 | 10.201.B.x/24
Storage | 202 | 10.202.B.x/24
VXLAN | 103 | 10.103.B.x/24
24. Cisco DC Topologies – FabricPath / DFA – NSX is Agnostic
Figure: a FabricPath spine/leaf fabric with 70xx/77xx spines and 55xx/56xx/60xx leaves; UCS B-Series compute clusters A and B attach to the leaves, and a management/edge cluster attaches to the border leaves, which connect through an L3 spine to the management cluster, DC core, and Internet/DMZ. VLAN IDs 100, 101, and 102 are locally scoped, while VXLAN VLAN ID 103 (the transport zone scope) extends across ALL pods/clusters.

VLANs & IP subnets defined at each ToR:
SVI Interface | VLAN ID | IP Subnet
Management | 100 | 10.100.100.x/24
vMotion | 101 | 10.101.101.x/24
Storage | 102 | 10.102.102.x/24
VXLAN | 103 | 10.103.103.x/24
25. Cisco DC Topologies – ACI – NSX is Agnostic
Figure: an ACI spine/leaf fabric; UCS B-Series compute clusters A and B attach to the leaves, and a management/edge cluster attaches to the border leaves, which connect to the DC core and Internet/DMZ. VLAN IDs 100, 101, and 102 are locally scoped, while VXLAN VLAN ID 103 (the transport zone scope) extends across ALL pods/clusters.

VLANs & IP subnets defined at each ToR:
SVI Interface | VLAN ID | IP Subnet
Management | 100 | 10.100.100.x/24
vMotion | 101 | 10.101.101.x/24
Storage | 102 | 10.102.102.x/24
VXLAN | 103 | 10.103.103.x/24
26. NSX Component Mapping
• Separation of the compute, management, and edge functions provides the following design advantages
• Management cluster
– Can co-exist with the edge cluster in the same UCS chassis
– Minimum three hosts; more if needed
– LACP can be used on rack-mount servers
• Edge cluster
– Should be independent UCS C-Series servers
– Edge VMs for north-south traffic
– Active-standby control VM
– Can hold NSX Controllers if optimization of resources is desired
• Compute cluster
– VXLAN is enabled per cluster
– Can co-exist with physical bare-metal compute

Function | NSX Components | Recommended Cluster Designation
Management plane | NSX Manager & vCenter | Management cluster
Control plane | NSX Controller cluster | Management or edge cluster
Control plane | Logical router control VM | Edge cluster
Data plane, east-west | VXLAN forwarding (compute and edge): VDS kernel components & DLR (Distributed Logical Router) | Compute and edge clusters
Data plane, north-south | ECMP Edge or Edge Services | Edge cluster
Bridging traffic | DLR control VM | Edge cluster
28. NSX with Cisco Underlay in ACI Mode
NSX + Cisco ACI for the physical network underlay functions:
• ACI EPGs map to VLANs
• NSX creates the overlay networks and provides all network virtualization functions
Customer benefits:
• Customers can use APIC to program all switches and perform tasks like configuration and upgrades from APIC
• Leverage the fabric to enforce infrastructure traffic isolation via EPGs
• Full SDDC capabilities are available
Customer proof points:
• Some deployments already run ACI and NSX together
Figure: NSX provides the VXLAN overlay networks; each ACI EPG maps to a VLAN to enforce isolation for the overlay.
29. NSX Using Cisco Nexus Switches as Underlay – Recommended Design
• Full NSX stack over ACI to leverage the core network use case and security
• Cisco ACI-specific design highlights
– L2 fabric underlay with a minimum 1600-byte MTU
– EPG = BD = VLAN
– 4 EPGs for the fabric (compute and edge)
– 2 EPGs for Edge ECMP connectivity
– NSX Edge maps to the border leaf
– BGP to the ACI border leaf, active-active design
• Design guide: Deploying NSX on ACI Underlay
https://communities.vmware.com/docs/DOC-30849
30. NSX Edge Mapping to ACI Border Leaf
Figure: an edge cluster of hosts 1–4 attaches to the ACI border leaves (the L2/L3 boundary), below the ACI spine and DC core. ECMP edges E1 … E8 form routing adjacencies with the border leaves over EPG/VLAN 10 and EPG/VLAN 20, and connect back over a transit VXLAN to the DLR, whose control VM runs active/standby; web, app, and DB tiers sit on logical switches behind the DLR.
31. Active-Active Deployment
Application active on both sides: a GSLB fronts an active VIP in each site. vCenter-A hosts apps A, B, and C; vCenter-B hosts apps A, B, and E; the common applications form active-active application pairs across the two sites.
32. Multi-DC Heterogeneous DR
Figure: Site A runs an ACI fabric (ACI spine, border leaves at the L2/L3 boundary, hosts 1–4, DC core); Site B runs a non-ACI network with hosts 1–4.
Even with multi-DC:
• NSX is always agnostic to the underlay
• Zero requirement for identical hardware infrastructure at each site
• Use non-proprietary L3 connectivity between sites
• The same few underlay features are still required
– Jumbo MTU
– Stable L2/L3 IP infrastructure
34. UCS and NSX
• Fabric connectivity
– The Cisco Fabric Interconnect runs in end-host mode
– The uplinks from the Cisco UCS Fabric Interconnects have vPC connectivity to the Nexus switches to provide a loop-free topology
• Network connectivity
– vNICs (UCS logical interfaces) are either dedicated or shared based on bandwidth and isolation requirements
– VDS uplinks are equal to the number of VMNICs exposed per UCS blade for each VDS
– For Cisco UCS blade servers, multiple VTEPs are recommended, with each VTEP mapping to a different vNIC
– The VDS uplink teaming mode cannot use LACP (it would be an orphaned connection), since the Fabric Interconnect supports neither server-side LACP nor LACP pass-through; use the vSphere teaming mode based on source ID
– Fabric connectivity for the edge cluster (N-S traffic) is a separate straight-through connection
Figure: UCS compute behind Fabric Interconnects into a 93xx/95xx DC fabric; a UCS C-Series edge cluster attaches at the edge leaf (the L2/L3 boundary) and connects to the DC core and Internet/DMZ, with VPN services on the edges.
35. UCS Network Adapter & VXLAN
• Assumes a single UCS adapter *
• Two VTEPs per blade are recommended
– The VXLAN VDS is exposed with two uplink vNICs
– Improves VXLAN throughput
– Each VTEP is mapped to a unique vNIC, which in turn maps to fabric A or fabric B
• VXLAN traffic can be shared with other traffic on a single VDS
• On UCS B-Series, a dedicated VDS is recommended for the non-vPC transit VLAN used for routing N-S traffic
• * If more than one UCS network adapter is employed, the recommendation is still to expose two uplinks for the VDS that carries VXLAN traffic, which will utilize two VTEPs when using vSphere Src ID teaming
Figure: a UCS B-Series blade with vNIC 0 and vNIC 1 mapped to VMNIC 0 and VMNIC 1 as dvUplinks on VDS-1 (MGMT, vMotion, NFS, VXLAN & bridging); the VXLAN dvPortgroup carries VMkernel VXLAN VTEP-1 and VTEP-2 with SRC_ID teaming, one VTEP per vNIC/fabric.
36. vPC and Routing Peer Termination
Figure: two vSphere hosts running NSX Edges, each with a routing adjacency to the ToR pair over a VXLAN 5020 transit link; one host's uplink teaming mode uses non-vPC links, the other uses vPC, with the L2/L3 boundary at the ToRs.
• Peering over vPC: not supported on Cisco Nexus, with few exceptions (Nexus 7K with release 7.2; Nexus 3K with BGP)
• Peering over non-vPC links with non-LACP teaming: UCS C-Series
• Peering over non-vPC parallel links: UCS FI with vPC
37. NSX Edge Routing Design with UCS C-Series & Intel NICs
• A 1:1 mapping is created between Edge uplinks (VLAN-backed networks) and the routing neighbors on ToR A/B
– Map each of these VLANs (port groups) to a different dvUplink on the edge VDS to ensure distribution of N-S traffic across dvUplinks
– The VLANs used for dynamic routing are local to each router
– Edge uplink = host uplink = VLAN = adjacency
– Avoid using vPC to the ToR for route peering, due to vendor dependencies
– The teaming mode is "Route Based on Originating Port"
• Redundancy is handled by dynamic routing, as the Edges have adjacencies with both routers
• eBGP is used between the NSX ESG and routers A/B (a sketch of the switch side follows below)
– Equally applicable to OSPF
• The default route must follow the uplink status
Figure: the NSX Edge gateway (all ESGs use the same uplink configuration) has uplink A on VLAN 10 (vNIC1) and uplink B on VLAN 20 (vNIC2) on VDS-Edge; each edge uplink interface comes from its VLAN dvPortgroup with one vmnic active and the other unused. The VLAN 10 SVI lives on 9K-A only and the VLAN 20 SVI on 9K-B only, with eBGP peering over each. A default route is advertised downstream to the 9K ToRs; loss of both uplinks should withdraw all routes.
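A minimal NX-OS sketch of the 9K-A side of this eBGP peering (the AS numbers, addresses, and VLAN 10 subnet are assumptions for illustration):

feature bgp
feature interface-vlan
vlan 10
interface Vlan10
  description Edge peering VLAN 10 (9K-A only)
  mtu 9216
  ip address 10.10.10.1/29
  no shutdown
router bgp 65001
  router-id 10.10.10.1
  neighbor 10.10.10.2 remote-as 65010
    description NSX ESG uplink A
    address-family ipv4 unicast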
38. NSX Edge Routing Design with UCS B-Series
• Dedicated non-vPC links from the FI to the ToR carry the (pinned) VLANs used for Edge peering with the N9Ks (see the sketch below)
• A port-channel is used to maintain redundancy and scale bandwidth based on the number of Edge VMs deployed
• 1:1 mapping between Edge uplinks (VLAN-backed networks) and BGP neighbors on the N9Ks
• Redundancy with dynamic routing, as the Edges have adjacencies with both N9Ks
• Dedicated vNICs are created in UCS so that traffic for these VLANs is pinned to the FI uplinks
• eBGP is used between the NSX ESG and the N9Ks
• The default route must follow the uplink status
Figure: the NSX Edge gateway (all ESGs use the same uplink configuration) on VDS-Edge; edge uplink interface 1 comes from the VLAN 10 dvPortgroup and interface 2 from the VLAN 20 dvPortgroup (one vmnic active, the other unused). One vNIC maps to fabric A over a dedicated non-vPC link pinned for VLAN 10 (SVI on 9K-A only); another vNIC maps to fabric B, pinned for VLAN 20 (SVI on 9K-B only). A default route is advertised downstream to the 9K ToRs; loss of both uplinks should withdraw all routes.
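A minimal NX-OS sketch of the dedicated non-vPC pinned uplink on 9K-A (the interface number and VLAN ID are assumptions; 9K-B would carry VLAN 20 the same way):

interface Ethernet1/20
  description non-vPC link to UCS FI-A, pinned Edge peering VLAN
  switchport mode trunk
  switchport trunk allowed vlan 10
  spanning-tree port type edge trunk
  mtu 9216
  no shutdown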
39. VDS Design, Uplink & Traffic Mapping
• Recommend a minimum of two vDS for the edge cluster
– VDS 1 – all traffic except N-S routing (MGMT, vMotion, NFS, VXLAN & bridging)
– VDS 2 – N-S traffic routing VLANs
• External connectivity requires dedicated non-vPC links
• Recommend a VDS for the routing VLANs pinned on dedicated straight-through links
• Dual VTEP – each VTEP active on fabric A and fabric B respectively
• The teaming recommendation for the VTEPs is SRC_ID
• For the rest of the traffic, teaming is based on local requirements
• For a UCS C-Series edge cluster, a single VDS would suffice
Figure: a UCS B-Series blade with vNICs 1–4 through 2204 FEX modules to 6248 Fabric Interconnects (A and B), which uplink to Nexus 93XX switches in NX-OS mode. VDS-1 carries VMkernel VXLAN VTEP-1 and VTEP-2 (SRC_ID teaming), vMotion and IP storage (LBT), management (explicit failover), and a bridging port group; VDS-2 (routing) carries routing VLAN port groups 1 and 2 on dedicated dvUplinks.
40. Tuning Guideline for B-Series Blades
• MTU: 9K for line-rate performance, regardless of VLAN or VXLAN
– 9000 MTU on the fabric, VDS, and pNIC
– 8900 MTU on the VM vNIC
• Disable VXLAN offload
• Enable NetQueue
• Number of VTEPs: two
• Two UCS policies are used (shown as screenshots on the slide): the VMQ connection policy and the adapter policy
41. NSX Design Guides
• Reference Design: Deploying NSX with Cisco UCS and Nexus 9000 Infrastructure
https://communities.vmware.com/docs/DOC-29373
• Design Guide for VMware NSX Running with a Cisco ACI Underlay Fabric
https://communities.vmware.com/docs/DOC-30849
• VMware NSX for vSphere Network Virtualization Design Guide version 3.0
https://communities.vmware.com/docs/DOC-27683
• More design documents
https://www.vmware.com/support/pubs/vmware-validated-design-pubs.html
42. Related VMworld Sessions
• Introduction to VMware NSX [NET7834]
• VMware NSX Deep Dive [9152]
• Practical Path to NSX [8675]
• NSX on ACI Customer Panel [8729]
• Explore NSX feature labs
– VMware NSX Feature Tour, self-paced lab [SPL-1703-SDC-1]
– VMware NSX Feature Tour, expert-led workshop [ELW-1703-SDC-1]
43. NSX Partner Ecosystem
Dynamic insertion of partner services across four categories: physical infrastructure, security, application delivery, and operations and visibility.
44. Where to Get Started
Learn – Connect & Engage
• communities.vmware.com
• NSX product page & technical resources: vmware.com/products/nsx
• Network Virtualization blog: blogs.vmware.com/networkvirtualization
• VMware NSX on YouTube: youtube.com/user/vmwarensx
Experience
• 70+ unique NSX sessions: spotlights, breakouts, quick talks & group discussions
• Visit the VMware booth: use case demos, chat with NSX experts
• Visit the NSX technical partner booths: integration demos – EPSec & NetX, hardware VTEP, ops & visibility
• Test drive NSX with free Hands-on Labs, expert-led or self-paced: labs.hol.vmware.com
Use
• NSX Proactive Support Service: optimize performance based on data monitoring and analytics to help resolve problems, mitigate risk, and improve operational efficiency. vmware.com/consulting
Take
• Training and certification: several paths to professional certifications. Learn more at the Education & Certification Lounge. vmware.com/go/nsxtraining
47. How to Deploy VMware NSX with Cisco Infrastructure
Ron Fuller, VMware
Paul Mancuso, VMware
NET8364R
#NET8364R