Cormac Hogan
Andreas Scherr
STO1193BU
#STO1193BU
A Closer Look at vSAN
Networking Design and
Configuration
Considerations
• This presentation may contain product features that are currently under development.
• This overview of new technology represents no commitment from VMware to deliver these
features in any generally available product.
• Features are subject to change, and must not be included in contracts, purchase orders, or
sales agreements of any kind.
• Technical feasibility and market demand will affect final delivery.
• Pricing and packaging for any new technologies or features discussed or presented have not
been determined.
Disclaimer
2
Agenda
1 vSAN Networking Overview
2 Multicast and Unicast
3 NIC Teaming and Load Balancing
4 Network Topologies (incl. Stretched and 2-node)
5 Network Performance Considerations
3
Where should I begin? StorageHub!
• https://storagehub.vmware.com/#!/vmware-vsan/plan-and-design
4
vSAN Networking Overview
5
vSAN Networking – Major Software Components
• CMMDS (Cluster Monitoring, Membership, and Directory Service)
• Inter cluster communications and metadata exchange
– Multicast with <= vSAN 6.5
– Unicast with >= vSAN 6.6
– Heartbeat sent from master to all hosts every second
• Traffic is light in steady state
• RDT (Reliable Datagram Transport)
• Bulk of vSAN traffic
– Virtual Disk data distributed across cluster
– Replication/resync traffic
6
vSAN Networking – Ports and Firewalls
• ESXi Firewall considerations
– When vSAN is enabled or disabled on a cluster, all required ports are
opened or closed automatically; no admin action is required
• Ports
– CMMDS (UDP 12345, 23451, 12321)
– RDT (TCP 2233)
– VSANVP (TCP 8080)
– Witness Host (TCP port 2233 and UDP Port 12321)
– vSAN Encryption / KMS Server
• Communication between vCenter and KMS to obtain keys
• vSAN Encryption has special dynamic firewall rule opened on
demand on ESXi hosts
7
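The port list above can be kept as a small lookup table, for example when writing audit scripts. A minimal sketch; the dictionary values are taken directly from the slide, while the `VSAN_PORTS` name and structure are my own:

```python
# Quick-reference lookup of the vSAN ports listed above.
# Values are (protocol, ports) as given on the slide.
VSAN_PORTS = {
    "CMMDS":   ("UDP", [12345, 23451, 12321]),
    "RDT":     ("TCP", [2233]),
    "VSANVP":  ("TCP", [8080]),
    "Witness": ("TCP+UDP", [2233, 12321]),
}

proto, ports = VSAN_PORTS["RDT"]
print(proto, ports)  # TCP [2233]
```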
Network Connectivity – IPv6
• vSAN can operate in IPv6-only mode
– Available since vSAN 6.2
– All network communication is over the IPv6 network
• vSAN supports mixed IPv4 & IPv6 during upgrade only
– Do not run mixed mode in production
8
Minimum NIC requirements for vSAN Networking
9
Configuration                       | 10Gb support | 1Gb support | Comments
Hybrid Cluster                      | Y            | Y           | 10Gb min. recommended, but 1Gb supported; <1ms RTT
All-Flash Cluster                   | Y            | N           | All-flash requires 10Gb min.; 1Gb not supported; <1ms RTT
Stretched Cluster - Data to Data    | Y            | N           | 10Gb required between data sites*; <5ms RTT
Stretched Cluster - Witness to Data | Y            | Y           | 100Mbps connectivity required from data sites to witness; <200ms RTT
2-node Data to Data                 | Y            | Y           | 10Gb min. required for all-flash; 1Gb supported for hybrid, but 10Gb recommended
2-node Witness to Data              | Y            | Y           | 1.5Mbps bandwidth required; <500ms RTT
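The RTT limits in the table lend themselves to a simple validation helper, for example when sanity-checking `vmkping` measurements. A minimal sketch; the limits come from the table, while the function and topology labels are hypothetical names of my own:

```python
# Sketch: validate a measured round-trip time against the documented
# RTT limits (values in milliseconds, taken from the table above).
RTT_LIMITS_MS = {
    "single-site cluster": 1,        # hybrid and all-flash
    "stretched data-to-data": 5,
    "stretched witness-to-data": 200,
    "2-node witness-to-data": 500,
}

def rtt_ok(topology: str, measured_ms: float) -> bool:
    """True if the measured RTT is under the documented limit."""
    return measured_ms < RTT_LIMITS_MS[topology]

print(rtt_ok("stretched data-to-data", 3.2))     # True
print(rtt_ok("stretched witness-to-data", 250))  # False
```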
Distributed or Standard Switches?
10
• vSphere Standard Switch
• No management dependence on vCenter
• Recovery is simple
• Prone to misconfiguration in larger setups
• vSphere Distributed Switch
• Consistency
Avoids configuration skew
• Teaming and Failover
LACP/LAG/ether-channel
• Network I/O Control
Manage/allocate network bandwidth for
different vSphere traffic types
vSphere Distributed Switch is Free with vSAN
Network I/O Control (NIOC) Configuration Sample
• Single 10GbE physical adapter for simplicity
• The NIC handles traffic for vSAN, vMotion, virtual machines, and management
• If the adapter becomes saturated, Network I/O Control controls bandwidth allocation
• Sample configuration:
11
Traffic Type | Custom Shares Value | Bandwidth
vSAN | 100 | 5Gbps
vMotion | 50 | 2.5Gbps
Virtual Machine | 30 | 1.5Gbps
Management | 20 | 1Gbps
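The bandwidth column follows directly from the shares: under contention, each traffic type gets its share divided by the total. A minimal sketch reproducing the sample configuration above; the variable names are my own:

```python
# Sketch: how NIOC custom shares translate into bandwidth on a saturated
# 10GbE uplink. Shares only matter under contention; each type gets
# share/total of the link.
LINK_GBPS = 10
shares = {"vSAN": 100, "vMotion": 50, "Virtual Machine": 30, "Management": 20}

total = sum(shares.values())  # 200
for traffic, share in shares.items():
    print(f"{traffic}: {LINK_GBPS * share / total} Gbps")  # vSAN: 5.0 Gbps, ...
```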
NIC Teaming and Failover options
12
• Keep it simple folks!
• All Virtual Switches Support (vSS + vDS)
– Route based on IP Hash / Virtual Port ID
• Distributed Switch Only (vDS)
– Route based on Physical NIC Load (LBT)
• Distributed Switch + Physical Switch Only
– Physical switches that support LACP/LAG/ether-channel provide additional load balancing algorithms
Multi-chassis link aggregation capable switches
vSAN Multicast & Unicast
13
What is Multicast?
14
• vSAN 6.5 (and earlier) used multicast traffic as a discovery
protocol to find all other nodes trying to join a vSAN cluster.
• Multicast is a network communication technique used to send information simultaneously (one-to-many or many-to-many) to a group of destinations over an IP network.
• Multicast needs to be enabled on the switch/routers of the
physical network.
• Internet Group Management Protocol (IGMP) used within
an L2 domain for group membership (follow switch vendor
recommendations)
• Protocol Independent Multicast (PIM) used for routing
multicast traffic to a different L3 domain
Multicast added complexity to vSAN networking
IGMP Considerations
• Consideration with multiple vSAN clusters
– Prevent individual clusters from receiving all multicast streams
– Option 1 – Separate VLANs for each vSAN cluster
– Option 2 - When multiple vSAN clusters reside on the same layer 2 network, VMware
recommends changing the default multicast address
• See VMware KB 2075451
15
Multicast Group Address on vSAN
• The vSAN Master Group Multicast Address created is 224.1.2.3 – CMMDS updates.
• The vSAN Agent Group Multicast Address is 224.2.3.4 – heartbeats.
• The vSAN traffic service will assign the default multicast address settings to each host node.
16
# esxcli vsan network list
Interface
VmkNic Name: vmk2
IP Protocol: IP
Interface UUID: 26ce8f58-7e8b-062e-ba57-a0369f56deac
Agent Group Multicast Address: 224.2.3.4
Agent Group IPv6 Multicast Address: ff19::2:3:4
Agent Group Multicast Port: 23451
Master Group Multicast Address: 224.1.2.3
Master Group IPv6 Multicast Address: ff19::1:2:3
Master Group Multicast Port: 12345
Host Unicast Channel Bound Port: 12321
Multicast TTL: 5
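Output such as the listing above is easy to post-process in scripts. A minimal sketch that parses the `Key: Value` fields from a (shortened, IPv4-only) copy of the sample output; the `parse_vsan_network` helper is a hypothetical name of my own:

```python
# Sketch: parse `esxcli vsan network list`-style output into a dict.
# Splits on the first colon, so it covers the IPv4 fields shown above.
sample = """\
Interface
   VmkNic Name: vmk2
   Agent Group Multicast Address: 224.2.3.4
   Agent Group Multicast Port: 23451
   Master Group Multicast Address: 224.1.2.3
   Master Group Multicast Port: 12345
   Multicast TTL: 5
"""

def parse_vsan_network(text):
    """Return a dict of 'Key: Value' fields; lines without a colon are skipped."""
    fields = {}
    for line in text.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
    return fields

info = parse_vsan_network(sample)
print(info["Agent Group Multicast Address"])  # 224.2.3.4
print(info["Master Group Multicast Port"])    # 12345
```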
vSAN 6.6 introduces Unicast
in place of Multicast for vSAN
communication
17
vSAN and Unicast
• vSAN 6.6 now communicates using unicast for
CMMDS updates.
• A unicast transmission/stream sends IP packets to a
single recipient on a network.
• vCenter becomes the new source of truth for vSAN
membership.
– List of nodes is pushed to the CMMDS layer
• The Networking Mode (unicast/multicast) is not
configurable
18
vSAN 6.6 and above
Unicast
vSAN and Unicast
• The Cluster summary now shows if a vSAN cluster network mode is Unicast or Multicast:
19
Member Coordination with Unicast on vSAN 6.6
• Who tracks cluster membership if we no
longer have multicast?
• vCenter now becomes the source of truth for
vSAN cluster membership with unicast
• The vSAN cluster continues to operate in
multicast mode until all participating nodes are
upgraded to vSAN 6.6
• All hosts maintain a configuration generation
number in case vCenter has an outage.
– On recovery, vCenter checks the configuration
generation number to see if the cluster
configuration has changed in its absence.
20
New Unicast considerations
in vSAN 6.6
21
Upgrade / Mixed Cluster Considerations with unicast
22
vSAN Cluster Software Configuration | Disk Format Version(s)                  | CMMDS Mode | Comments
6.6 Only Nodes*                     | All Version 5                           | Unicast    | Permanently operates in unicast; cannot switch to multicast. Adding older nodes will partition the cluster.
6.6 Only Nodes*                     | All Version 3 or below                  | Unicast    | 6.6 nodes operate in unicast mode; switches back to multicast if a pre-6.6 node is added.
Mixed 6.6 and pre-6.6 Nodes         | Mixed Version 5 with Version 3 or below | Unicast    | 6.6 nodes with v5 disks operate in unicast mode; pre-6.6 nodes with v3 disks operate in multicast mode. *** This causes a cluster partition! ***
Mixed 6.6 and pre-6.6 Nodes         | All Version 3 or below                  | Multicast  | Cluster operates in multicast mode; all vSAN nodes must be upgraded to 6.6 to switch to unicast mode. *** Disk format v5 will make unicast mode permanent ***
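The decision table above boils down to two inputs: whether every node runs vSAN 6.6, and whether any disk group uses on-disk format v5. A minimal sketch of that logic; the function name and return labels are my own:

```python
# Sketch of the upgrade/mixed-cluster table: derive the CMMDS mode from
# node versions and disk-group on-disk format.
def cmmds_mode(all_nodes_66: bool, any_v5_disk_group: bool) -> str:
    if all_nodes_66:
        return "unicast"    # permanent once any v5 disk group exists
    if any_v5_disk_group:
        return "partition"  # pre-6.6 nodes cannot join a v5 unicast cluster
    return "multicast"      # mixed cluster, all v3-or-below disk groups

print(cmmds_mode(True, True))    # unicast
print(cmmds_mode(False, False))  # multicast
print(cmmds_mode(False, True))   # partition
```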
vSAN 6.6 only nodes – additional considerations with unicast
• All hosts running vSAN 6.6, cluster will communicate using unicast
– Even if disk groups are formatted with < version 5.0, e.g. version 3.0
• vSAN will revert to multicast mode if a non-vSAN 6.6 node is added to the 6.6 cluster
– But only if no disk group is formatted with version 5.0
• A vSAN 6.6+ cluster will only ever communicate in unicast if a version 5.0 disk group exists
• If a non-vSAN 6.6 node is added to a 6.6 cluster which contains at least one version 5.0 disk
group, this node will be partitioned and will not join the vSAN cluster
23
Considerations with Unicast
• Considerations with vSAN 6.6 unicast and DHCP
– vCenter Server deployed on a vSAN 6.6 cluster
– vSAN 6.6 nodes obtain IP addresses via DHCP
– If IP addresses change, vCenter VM may become unavailable
• Can lead to cluster partition as vCenter cannot update membership
– This is not supported unless DHCP reservations are used.
• Considerations with vSAN 6.6 unicast and IPv6
– IPv6 is supported with unicast communications in vSAN 6.6.
– However, IPv6 link-local addresses are not supported for
unicast communications on vSAN 6.6
• vSAN doesn’t use link local addresses to track membership
24
Query Unicast with esxcli
• vSAN cluster node now displays the CMMDS networking mode - unicast or multicast.
– esxcli vsan cluster get
25
Query Unicast with esxcli
• One can also check which vSAN cluster nodes are operating in unicast mode
– esxcli vsan cluster unicastagent list:
• Unicast info is also displayed in vSAN network details
– esxcli vsan network list
26
NIC Teaming and Load-Balancing
Recommendations
27
NIC Teaming – single vmknic, multiple vmnics (uplinks)
• Route based on originating virtual port
– Pros
• Simplest teaming mode, with minimal physical
switch configuration.
– Cons
• A single VMkernel interface cannot use more than a single
physical NIC's bandwidth.
• Route Based on Physical NIC Load
– Pros
• No physical switch configuration required.
– Cons
• With only one VMkernel port, the effectiveness of this
policy is limited.
• Minor overhead when ESXi re-evaluates the load
28
Load Balancing - single vmknic, multiple vmnics (uplinks)
• vSAN does not use NIC teaming for load
balancing
• vSAN has no load balancing mechanism
to differentiate between multiple vmknics.
• As such, the vSAN IO path chosen is not
deterministic across physical NICs
29
[Bar chart: KBps utilization per vmnic (vmnic0 vs. vmnic1) across Nodes 1-4 with multiple vmknics; y-axis 0-1,000,000 KBps]
NIC Teaming – LACP & LAG (***Preferred***)
• Pros
– Improves performance and bandwidth
– If a NIC fails and the link-state goes down, the
remaining NICs in the team continue to pass traffic.
– Many load balancing options
– Rebalancing of traffic after failures is automatic
– Based on 802.3ad standards.
• Cons
– Requires that physical switch ports be configured in
a port-channel configuration.
– More complex configuration and maintenance
30
Load Balancing – LACP & LAG (***Preferred***)
• More consistency compared to “Route
based on physical NIC load”
• More individual clients (VMs) further increase the
probability of a balanced load
31
[Bar chart: KBps utilization per vmnic (vmnic0 vs. vmnic1) across Nodes 1-4 with LACP; y-axis 0-500,000 KBps]
vSAN network on different subnets
• vSAN networks on 2 different subnets?
– If the subnets are routed and one host's NIC fails, the
host can communicate over the other subnet
– If the subnets are air-gapped and one host's NIC fails,
it cannot reach the other hosts via the other subnet
– The host with the failed NIC becomes isolated
– TCP timeout of 90 seconds on failure
32
Supported Network Topologies
33
Topologies
• Single site, multiple hosts
• Single site, multiple hosts with Fault Domains
• Multiple sites, multiple hosts with Fault Domains (campus cluster but not stretched cluster)
• Stretched Cluster
• ROBO/2-node
• Design considerations
– L2/L3
– Multicast/Unicast
– RTT (round-trip-time)
34
Simplest topology - Layer-2, Single Site, Single Rack
• Single site, multiple hosts, shared subnet/VLAN/L2 topology, multicast with IGMP
• No need to worry about routing the multicast traffic in pre-vSAN 6.6 deployments
• Layer-2 implementations are simplified even further with vSAN 6.6 and unicast; with such a
deployment, IGMP snooping is not required.
35
Layer-2, Single Site, Multiple Racks – pre-vSAN 6.6 (multicast)
• pre-vSAN 6.6 where vSAN traffic is multicast
• Vendor specific multicast configuration required (IGMP/PIM)
36
Layer-2, Single Site, Multiple Racks – 6.6 and later (unicast)
• vSAN 6.6 where vSAN traffic is unicast
• No need to configure IGMP/PIM on the switches
37
Stretch Cluster Topologies
38
Stretched Cluster – L2 for data, L3 to witness or L3 everywhere
• vSAN 6.5 and earlier: traffic between data sites is multicast (metadata) and unicast (IO).
• vSAN 6.6 and later, all traffic is unicast.
• In all versions of vSAN, the witness traffic between a data site and the witness site has always
been unicast.
39
Stretched Cluster - Why not L2 everywhere? (unsupported)
• Consider a situation where the link between Data Site 1 and Data Site 2 is broken
• Spanning Tree may discover a path between Data Site 1 and Data Site 2 exists via switch S1
• Possible performance decrease if data network traffic passes through a lower specification
witness site
40
2-Node (ROBO)
41
2-Node vSAN for Remote Locations
• Both hosts in remote office store data
• Witness in central office or 3rd site
stores witness data
• Unicast connectivity to witness
appliance
– <500ms RTT latency
– 1.5Mbps bandwidth from data site to witness
[Diagram: two vSAN hosts at the remote site, witness appliance at the central site; witness link: <500ms RTT, 1.5Mbps bandwidth]
42
2-node Direct Connect and Witness traffic separation
43
[Diagram: vSAN datastore across two directly connected hosts; 10GbE vSAN traffic via direct cable, management & witness traffic via the management network]
• Separates the vSAN data traffic from the witness traffic
• Data nodes can be connected directly using Ethernet cables
• Two cables between hosts for higher availability of the network
• Witness traffic uses the management network
Note: Witness Traffic Separation is NOT supported for Stretched Cluster at this time
vSAN and Performance
Network relevance
44
General Concept on Network Performance
• Understanding vSAN concepts and features
– Standard vSAN setup vs. Stretched Cluster, FTT=1 or RAID-5/6
• Understand network best practices for optimum performance – physical switch topology
– ISL trunks are not oversubscribed
– MTU size factor
– No errors/drops/pause frames on the Network switches
45
General Concept on Network Performance
• Understand Host communication
– No errors/drops/CRC/pause frames on the Network card
– Driver/firmware as per the VMware HCL
– Use SFP/GBIC modules certified by your hardware vendor
– Use NIOC to prioritize traffic when links are shared across traffic types (e.g. VM/vMotion/..)
46
DEMO: Adding 10ms network latency
47
Summary: Graphical interpretation IOPS vs. latency
48
[Line chart: IOPS vs. additional latency (ms)]
• Native = ~47000 IOPS
• +5ms latency = ~33000 IOPS
• +10ms latency = ~23100 IOPS
DEMO: Network 2% and 10% packet loss
49
Summary: Graphical interpretation IOPS vs. loss %
50
[Line chart: IOPS vs. packet loss %]
• Native = ~47000 IOPS
• 1% loss = ~42300 IOPS
• 2% loss = ~32000 IOPS
• 10% loss = ~3400 IOPS
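To make the steepness of the drop-off concrete, the measured figures above can be expressed as a percentage of native throughput. A minimal sketch using only the numbers from the chart; the variable names are my own:

```python
# Sketch: express the measured IOPS under packet loss as a percentage of
# native throughput (figures from the chart above).
native = 47000
measurements = {"1% loss": 42300, "2% loss": 32000, "10% loss": 3400}

for label, iops in measurements.items():
    pct = round(100 * iops / native)
    print(f"{label}: {iops} IOPS ({pct}% of native)")  # e.g. 10% loss -> 7% of native
```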
Nerd Out With These Key vSAN Activities at VMworld
#HitRefresh on your current data center and discover the possibilities!
Become a vSAN Specialist
• Earn VMware digital badges to showcase your skills
• New 2017 vSAN Specialist Badge
• Education & Certification Lounge: VM Village
• Certification Exam Center: Jasmine EFG, Level 3
Practice with Hands-on-Labs
• Learn from self-paced and expert-led hands-on labs
• vSAN Getting Started Workshop (expert led)
• VxRail Getting Started (self-paced)
• Self-paced lab available online 24x7
Visit SDDC Assessment Lounge
• Discover how to assess if your IT is a good fit for HCI
• Four Seasons Willow Room/2nd floor
• Open from 11am - 5pm Sun, Mon, and Tue
• Learn more at Assessing & Sizing in STO1500BU
3 Easy Ways to Learn More about vSAN
52
Hands-On Lab
• Live at VMworld
• Practical learning of vSAN, VxRail, and more
• 24x7 availability online - for free!
New vSAN Tools
• vSAN Sizer
• vSAN Assessment
Storage Hub Technical Library
• StorageHub.vmware.com
• Reference architectures, off-line demos, and more
• Easy search function
• And more!
Test drive vSAN for free today!
Cormac Hogan
@CormacJHogan
Andreas Scherr
@vsantester
 
Story boards and shot lists for my a level piece
Story boards and shot lists for my a level pieceStory boards and shot lists for my a level piece
Story boards and shot lists for my a level piececharlottematthew16
 
"Federated learning: out of reach no matter how close",Oleksandr Lapshyn
"Federated learning: out of reach no matter how close",Oleksandr Lapshyn"Federated learning: out of reach no matter how close",Oleksandr Lapshyn
"Federated learning: out of reach no matter how close",Oleksandr LapshynFwdays
 
Commit 2024 - Secret Management made easy
Commit 2024 - Secret Management made easyCommit 2024 - Secret Management made easy
Commit 2024 - Secret Management made easyAlfredo García Lavilla
 
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek SchlawackFwdays
 
Scanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL CertsScanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL CertsRizwan Syed
 
My INSURER PTE LTD - Insurtech Innovation Award 2024
My INSURER PTE LTD - Insurtech Innovation Award 2024My INSURER PTE LTD - Insurtech Innovation Award 2024
My INSURER PTE LTD - Insurtech Innovation Award 2024The Digital Insurer
 
Human Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR SystemsHuman Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR SystemsMark Billinghurst
 
Unleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubUnleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubKalema Edgar
 
AI as an Interface for Commercial Buildings
AI as an Interface for Commercial BuildingsAI as an Interface for Commercial Buildings
AI as an Interface for Commercial BuildingsMemoori
 
Developer Data Modeling Mistakes: From Postgres to NoSQL
Developer Data Modeling Mistakes: From Postgres to NoSQLDeveloper Data Modeling Mistakes: From Postgres to NoSQL
Developer Data Modeling Mistakes: From Postgres to NoSQLScyllaDB
 
Powerpoint exploring the locations used in television show Time Clash
Powerpoint exploring the locations used in television show Time ClashPowerpoint exploring the locations used in television show Time Clash
Powerpoint exploring the locations used in television show Time Clashcharlottematthew16
 
Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!Commit University
 
Connect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck PresentationConnect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck PresentationSlibray Presentation
 
CloudStudio User manual (basic edition):
CloudStudio User manual (basic edition):CloudStudio User manual (basic edition):
CloudStudio User manual (basic edition):comworks
 
Beyond Boundaries: Leveraging No-Code Solutions for Industry Innovation
Beyond Boundaries: Leveraging No-Code Solutions for Industry InnovationBeyond Boundaries: Leveraging No-Code Solutions for Industry Innovation
Beyond Boundaries: Leveraging No-Code Solutions for Industry InnovationSafe Software
 
"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr Bagan"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr BaganFwdays
 
DevoxxFR 2024 Reproducible Builds with Apache Maven
DevoxxFR 2024 Reproducible Builds with Apache MavenDevoxxFR 2024 Reproducible Builds with Apache Maven
DevoxxFR 2024 Reproducible Builds with Apache MavenHervé Boutemy
 

Recently uploaded (20)

New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
 
Search Engine Optimization SEO PDF for 2024.pdf
Search Engine Optimization SEO PDF for 2024.pdfSearch Engine Optimization SEO PDF for 2024.pdf
Search Engine Optimization SEO PDF for 2024.pdf
 
Vertex AI Gemini Prompt Engineering Tips
Vertex AI Gemini Prompt Engineering TipsVertex AI Gemini Prompt Engineering Tips
Vertex AI Gemini Prompt Engineering Tips
 
Story boards and shot lists for my a level piece
Story boards and shot lists for my a level pieceStory boards and shot lists for my a level piece
Story boards and shot lists for my a level piece
 
"Federated learning: out of reach no matter how close",Oleksandr Lapshyn
"Federated learning: out of reach no matter how close",Oleksandr Lapshyn"Federated learning: out of reach no matter how close",Oleksandr Lapshyn
"Federated learning: out of reach no matter how close",Oleksandr Lapshyn
 
Commit 2024 - Secret Management made easy
Commit 2024 - Secret Management made easyCommit 2024 - Secret Management made easy
Commit 2024 - Secret Management made easy
 
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
 
Scanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL CertsScanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL Certs
 
My INSURER PTE LTD - Insurtech Innovation Award 2024
My INSURER PTE LTD - Insurtech Innovation Award 2024My INSURER PTE LTD - Insurtech Innovation Award 2024
My INSURER PTE LTD - Insurtech Innovation Award 2024
 
Human Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR SystemsHuman Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR Systems
 
Unleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubUnleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding Club
 
AI as an Interface for Commercial Buildings
AI as an Interface for Commercial BuildingsAI as an Interface for Commercial Buildings
AI as an Interface for Commercial Buildings
 
Developer Data Modeling Mistakes: From Postgres to NoSQL
Developer Data Modeling Mistakes: From Postgres to NoSQLDeveloper Data Modeling Mistakes: From Postgres to NoSQL
Developer Data Modeling Mistakes: From Postgres to NoSQL
 
Powerpoint exploring the locations used in television show Time Clash
Powerpoint exploring the locations used in television show Time ClashPowerpoint exploring the locations used in television show Time Clash
Powerpoint exploring the locations used in television show Time Clash
 
Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!
 
Connect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck PresentationConnect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck Presentation
 
CloudStudio User manual (basic edition):
CloudStudio User manual (basic edition):CloudStudio User manual (basic edition):
CloudStudio User manual (basic edition):
 
Beyond Boundaries: Leveraging No-Code Solutions for Industry Innovation
Beyond Boundaries: Leveraging No-Code Solutions for Industry InnovationBeyond Boundaries: Leveraging No-Code Solutions for Industry Innovation
Beyond Boundaries: Leveraging No-Code Solutions for Industry Innovation
 
"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr Bagan"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr Bagan
 
DevoxxFR 2024 Reproducible Builds with Apache Maven
DevoxxFR 2024 Reproducible Builds with Apache MavenDevoxxFR 2024 Reproducible Builds with Apache Maven
DevoxxFR 2024 Reproducible Builds with Apache Maven
 

A Closer Look at vSAN Networking Design and Configuration Considerations

  • 1. Cormac Hogan Andreas Scherr STO1193BU #STO1193BU A Closer Look at vSAN Networking Design and Configuration Considerations
  • 2. • This presentation may contain product features that are currently under development. • This overview of new technology represents no commitment from VMware to deliver these features in any generally available product. • Features are subject to change, and must not be included in contracts, purchase orders, or sales agreements of any kind. • Technical feasibility and market demand will affect final delivery. • Pricing and packaging for any new technologies or features discussed or presented have not been determined. Disclaimer 2
  • 3. Agenda 1 vSAN Networking Overview 2 Multicast and Unicast 3 NIC Teaming and Load Balancing 4 Network Topologies (incl. Stretched and 2-node) 5 Network Performance Considerations 3
  • 4. Where should I begin? StorageHub! • https://storagehub.vmware.com/#!/vmware-vsan/plan-and-design 4
  • 6. vSAN Networking – Major Software Components • CMMDS (Cluster Monitoring, Membership, and Directory Service) • Inter cluster communications and metadata exchange – Multicast with <= vSAN 6.5 – Unicast with >= vSAN 6.6 – Heartbeat sent from master to all hosts every second • Traffic light in steady state • RDT (Reliable Datagram Transport) • Bulk of vSAN traffic – Virtual Disk data distributed across cluster – Replication /Resynch Traffic 6
  • 7. vSAN Networking – Ports and Firewalls • ESXi Firewall considerations – On enablement of vSAN on a given cluster, all required ports are enabled/disabled automatically; no admin action • Ports – CMMDS (UDP 12345, 23451, 12321) – RDT (TCP 2233) – VSANVP (TCP 8080) – Witness Host (TCP port 2233 and UDP Port 12321) – vSAN Encryption / KMS Server • Communication between vCenter and KMS to obtain keys • vSAN Encryption has special dynamic firewall rule opened on demand on ESXi hosts 7
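The port list on slide 7 can be captured as a small data table and queried, for example when auditing an external firewall between vSAN nodes. This is an illustrative sketch only; the port numbers come from the slide, while `VSAN_PORTS` and `ports_for` are hypothetical names, not a VMware API.

```python
# vSAN network ports from slide 7: service -> list of (transport, port).
# Illustrative data table; not a VMware API.
VSAN_PORTS = {
    "CMMDS": [("udp", 12345), ("udp", 23451), ("udp", 12321)],
    "RDT": [("tcp", 2233)],
    "VSANVP": [("tcp", 8080)],
    "Witness": [("tcp", 2233), ("udp", 12321)],
}

def ports_for(transport):
    """All distinct port numbers vSAN uses on the given transport."""
    return sorted({port for rules in VSAN_PORTS.values()
                   for t, port in rules if t == transport})

# ports_for("tcp") -> [2233, 8080]
```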
  • 8. Network Connectivity – IPv6 • vSAN can operate in IPv6-only mode – Available since vSAN 6.2 – All network communications are through IPv6 network • vSAN supports mixed IPv4 & IPv6 during upgrade only – Do not run mixed mode in production 8
  • 9. Minimum NIC requirements for vSAN Networking (10Gb support / 1Gb support / Comments)
       – Hybrid Cluster: 10Gb Y, 1Gb Y. 10Gb min. recommended, but 1Gb supported; <1ms RTT
       – All-Flash Cluster: 10Gb Y, 1Gb N. All-Flash requires 10Gb min.; 1Gb not supported; <1ms RTT
       – Stretched Cluster, Data to Data: 10Gb Y, 1Gb N. 10Gb required between data sites*; <5ms RTT
       – Stretched Cluster, Witness to Data: 10Gb Y, 1Gb Y. 100Mbps connectivity required from data sites to witness; <200ms RTT
       – 2-node, Data to Data: 10Gb Y, 1Gb Y. 10Gb min. required for All-Flash; 1Gb supported for hybrid, but 10Gb recommended
       – 2-node, Witness to Data: 10Gb Y, 1Gb Y. 1.5Mbps bandwidth required; <500ms RTT
  • 10. Distributed or Standard Switches?
       • vSphere Standard Switch
         – No management dependence on vCenter
         – Recovery is simple
         – Prone to misconfiguration in larger setups
       • vSphere Distributed Switch
         – Consistency: avoids configuration skew
         – Teaming and Failover: LACP/LAG/ether-channel
         – Network I/O Control: manage/allocate network bandwidth for different vSphere traffic types
       vSphere Distributed Switch is free with vSAN
  • 11. Network I/O Control (NIOC) Configuration Sample
       • Single 10GbE physical adapter for simplicity
       • NIC handles traffic for vSAN, vMotion, virtual machines, and management
       • If the adapter becomes saturated, Network I/O Control controls bandwidth allocation
       • Sample configuration (Traffic Type / Custom Shares Value / Bandwidth):
         – vSAN: 100 shares, 5Gbps
         – vMotion: 50 shares, 2.5Gbps
         – Virtual Machine: 30 shares, 1.5Gbps
         – Management: 20 shares, 1Gbps
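The sample NIOC configuration above divides a 10GbE link in proportion to the custom share values (100 + 50 + 30 + 20 = 200 shares). A quick sanity check of the table's math, using a hypothetical helper name:

```python
def nioc_allocation(shares, link_gbps=10.0):
    """Split link bandwidth proportionally to NIOC custom share values.
    (Shares only matter under contention; this just checks the table math.)"""
    total = sum(shares.values())
    return {name: link_gbps * value / total for name, value in shares.items()}

alloc = nioc_allocation({"vSAN": 100, "vMotion": 50,
                         "Virtual Machine": 30, "Management": 20})
# vSAN holds 100 of 200 total shares on a 10Gbps link -> 5.0 Gbps, as in the table
```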
  • 12. NIC Teaming and Failover options
       • Keep it simple folks!
       • All virtual switches (vSS + vDS)
         – Route based on IP Hash / Virtual Port ID
       • Distributed Switch only (vDS)
         – Route based on Physical NIC Load (LBT)
       • Distributed Switch + Physical Switch only
         – Physical switches that support LACP/LAG/ether-channel provide additional load balancing algorithms
         – Multi-chassis link aggregation capable switches
  • 13. vSAN Multicast & Unicast 13
  • 14. What is Multicast? 14 • vSAN 6.5 (and earlier) used multicast traffic as a discovery protocol to find all other nodes trying to join a vSAN cluster. • Multicast is a network communication technique utilized to send information simultaneously (one-to-many or many-to- many) to a group of destinations over an IP network. • Multicast needs to be enabled on the switch/routers of the physical network. • Internet Group Management Protocol (IGMP) used within an L2 domain for group membership (follow switch vendor recommendations) • Protocol Independent Multicast (PIM) used for routing multicast traffic to a different L3 domain Multicast added complexity to vSAN networking
  • 15. IGMP Considerations • Consideration with multiple vSAN clusters – Prevent individual clusters from receiving all multicast streams – Option 1 – Separate VLANs for each vSAN cluster – Option 2 - When multiple vSAN clusters reside on the same layer 2 network, VMware recommends changing the default multicast address • See VMware KB 2075451 15
  • 16. Multicast Group Address on vSAN
       • The vSAN Master Group Multicast Address created is 224.1.2.3 – CMMDS updates.
       • The vSAN Agent Group Multicast Address is 224.2.3.4 – heartbeats.
       • The vSAN traffic service will assign the default multicast address settings to each host node.
       # esxcli vsan network list
       Interface
          VmkNic Name: vmk2
          IP Protocol: IP
          Interface UUID: 26ce8f58-7e8b-062e-ba57-a0369f56deac
          Agent Group Multicast Address: 224.2.3.4
          Agent Group IPv6 Multicast Address: ff19::2:3:4
          Agent Group Multicast Port: 23451
          Master Group Multicast Address: 224.1.2.3
          Master Group IPv6 Multicast Address: ff19::1:2:3
          Master Group Multicast Port: 12345
          Host Unicast Channel Bound Port: 12321
          Multicast TTL: 5
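Output like the `esxcli vsan network list` listing above is line-oriented "Key: Value" text, so it is straightforward to post-process in a script. A hypothetical parser (not a VMware tool) that turns such output into a dictionary:

```python
def parse_esxcli_kv(text):
    """Parse 'Key: Value' lines, as printed by 'esxcli vsan network list'.
    Lines without a colon (e.g. the 'Interface' header) are skipped."""
    result = {}
    for line in text.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")  # split on first colon only,
            result[key.strip()] = value.strip()  # so IPv6 values survive intact
    return result

sample = """\
VmkNic Name: vmk2
Master Group Multicast Address: 224.1.2.3
Multicast TTL: 5"""
info = parse_esxcli_kv(sample)
# info["Master Group Multicast Address"] -> "224.1.2.3"
```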
  • 17. vSAN 6.6 introduces Unicast in place of Multicast for vSAN communication 17
  • 18. vSAN and Unicast • vSAN 6.6 now communicates using unicast for CMMDS updates. • A unicast transmission/stream sends IP packets to a single recipient on a network. • vCenter becomes the new source of truth for vSAN membership. – List of nodes is pushed to the CMMDS layer • The Networking Mode (unicast/multicast) is not configurable 18 vSAN 6.6 and above Unicast
  • 19. vSAN and Unicast • The Cluster summary now shows if a vSAN cluster network mode is Unicast or Multicast: 19
  • 20. Member Coordination with Unicast on vSAN 6.6 • Who tracks cluster membership if we no longer have multicast? • vCenter now becomes the source of truth for vSAN cluster membership with unicast • The vSAN cluster continues to operate in multicast mode until all participating nodes are upgraded to vSAN 6.6 • All hosts maintain a configuration generation number in case vCenter has an outage. – On recovery, vCenter checks the configuration generation number to see if the cluster configuration has changed in its absence. 20 vCenter
  • 22. Upgrade / Mixed Cluster Considerations with unicast (Software Configuration / Disk Format Version(s) / CMMDS Mode / Comments)
       – 6.6 Only Nodes*, all Version 5: Unicast. Permanently operates in unicast. Cannot switch to multicast. Adding older nodes will partition cluster.
       – 6.6 Only Nodes*, all Version 3 or below: Unicast. 6.6 nodes operate in unicast mode. Switches back to multicast if a < vSAN 6.6 node is added.
       – Mixed 6.6 and pre-6.6 Nodes, mixed Version 5 with Version 3 or below: Unicast. 6.6 nodes with v5 disks operate in unicast mode; pre-6.6 nodes with v3 disks operate in multicast mode. *** This causes a cluster partition! ***
       – Mixed 6.6 and pre-6.6 Nodes, all Version 3 or below: Multicast. Cluster operates in multicast mode. All vSAN nodes must be upgraded to 6.6 to switch to unicast mode. *** Disk format v5 will make unicast mode permanent ***
  • 23. vSAN 6.6 only nodes – additional considerations with unicast • All hosts running vSAN 6.6, cluster will communicate using unicast – Even if disk groups are formatted with < version 5.0, e.g. version 3.0 • vSAN will revert to multicast mode if a non-vSAN 6.6 node is added to the 6.6 cluster – But only if no disk group format == version 5.0 • A vSAN 6.6+ cluster will only ever communicate in unicast if a version 5.0 disk group exists • If a non-vSAN 6.6 node is added to a 6.6 cluster which contains at least one version 5.0 disk group, this node will be partitioned and will not join the vSAN cluster 23
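The mode rules on slides 22 and 23 can be condensed into a small decision function. This is an illustrative encoding of the behavior described on the slides, not vSAN's actual implementation, and the function name and signature are invented for the example:

```python
def cmmds_mode(node_versions, disk_format_versions):
    """Return the cluster's CMMDS networking mode per the vSAN 6.6 rules:
    - all nodes are 6.6+            -> unicast (permanent once any disk group is v5)
    - mixed nodes, a v5 group exists -> the older nodes are partitioned out
    - mixed or older nodes, no v5    -> multicast
    node_versions: (major, minor) tuples; disk_format_versions: ints."""
    all_66 = all(v >= (6, 6) for v in node_versions)
    has_v5 = any(fmt >= 5 for fmt in disk_format_versions)
    if all_66:
        return "unicast"
    if has_v5:
        return "partitioned"
    return "multicast"

# A pre-6.6 node joining a cluster with a v5 disk group gets partitioned:
# cmmds_mode([(6, 6), (6, 5)], [5, 3]) -> "partitioned"
```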
  • 24. Considerations with Unicast • Considerations with vSAN 6.6 unicast and DHCP – vCenter Server deployed on a vSAN 6.6 cluster – vSAN 6.6 nodes obtained IP addresses via DHCP – If IP addresses change, vCenter VM may become unavailable • Can lead to cluster partition as vCenter cannot update membership – This is not supported unless DHCP reservations are used. • Considerations with vSAN 6.6 unicast and IPv6 – IPv6 is supported with unicast communications in vSAN 6.6. – However IPv6 Link Local Addresses are not supported for unicast communications on vSAN 6.6 • vSAN doesn’t use link local addresses to track membership 24 vCenter
  • 25. Query Unicast with esxcli • vSAN cluster node now displays the CMMDS networking mode - unicast or multicast. – esxcli vsan cluster get 25
  • 26. Query Unicast with esxcli • One can also check which vSAN cluster nodes are operating in unicast mode – esxcli vsan cluster unicastagent list: • Unicast info is also displayed in vSAN network details – esxcli vsan network list 26
  • 27. NIC Teaming and Load-Balancing Recommendations 27
  • 28. NIC Teaming – single vmknic, multiple vmnics (uplinks)
       • Route based on originating virtual port
         – Pros: simplest teaming mode, with minimal physical switch configuration.
         – Cons: a single VMkernel interface cannot use more than a single physical NIC's bandwidth.
       • Route based on Physical NIC Load
         – Pros: no physical switch configuration required.
         – Cons: with only one VMkernel port, its effectiveness is limited; minor overhead when ESXi re-evaluates the load.
  • 29. Load Balancing – single vmknic, multiple vmnics (uplinks)
       • vSAN does not use NIC teaming for load balancing
       • vSAN has no load balancing mechanism to differentiate between multiple vmknics
       • As such, the vSAN IO path chosen is not deterministic across physical NICs
       [Chart: KBps utilization per vmnic (vmnic0 vs. vmnic1) across Nodes 1–4, multiple vmknics]
  • 30. NIC Teaming – LACP & LAG (***Preferred***)
       • Pros
         – Improves performance and bandwidth
         – If a NIC fails and the link state goes down, the remaining NICs in the team continue to pass traffic
         – Many load balancing options
         – Rebalancing of traffic after failures is automatic
         – Based on 802.3ad standards
       • Cons
         – Requires that physical switch ports be configured in a port-channel configuration
         – Complexity in configuration and maintenance
  • 31. Load Balancing – LACP & LAG (***Preferred***)
       • More consistency compared to “Route based on physical NIC load”
       • More individual clients (VMs) further increase the probability of a balanced load
       [Chart: KBps utilization per vmnic (vmnic0 vs. vmnic1) across Nodes 1–4, LACP setup]
  • 32. vSAN network on different subnets • vSAN networks on 2 different subnets? – If subnets are routed, and one host’s NIC fails, host will communicate on other subnet – If subnets are air-gapped, and one host’s NIC fails, it will not be able to communicate to the other hosts via other subnet – That host with failing NIC will become isolated – TCP timeout 90sec on failure 32
  • 34. Topologies • Single site, multiple hosts • Single site, multiple hosts with Fault Domains • Multiple sites, multiple hosts with Fault Domains (campus cluster but not stretched cluster) • Stretched Cluster • ROBO/2-node • Design considerations – L2/L3 – Multicast/Unicast – RTT (round-trip-time) 34
  • 35. Simplest topology - Layer-2, Single Site, Single Rack • Single site, multiple hosts, shared subnet/VLAN/L2 topology, multicast with IGMP • No need to worry about routing the multicast traffic in pre-vSAN 6.6 deployments • Layer-2 implementations are simplified even further with vSAN 6.6, and unicast. With such a deployment, IGMP snooping is not required. 35
  • 36. Layer-2, Single Site, Multiple Racks – pre-vSAN 6.6 (multicast) • pre-vSAN 6.6 where vSAN traffic is multicast • Vendor specific multicast configuration required (IGMP/PIM) 36
  • 37. Layer-2, Single Site, Multiple Racks – 6.6 and later (unicast) • vSAN 6.6 where vSAN traffic is unicast • No need to configure IGMP/PIM on the switches 37
  • 39. Stretched Cluster – L2 for data, L3 to witness or L3 everywhere • vSAN 6.5 and earlier, traffic between data sites is multicast (meta) and unicast (IO). • vSAN 6.6 and later, all traffic is unicast. • In all versions of vSAN, the witness traffic between a data site and the witness site has always been unicast. 39
  • 40. Stretched Cluster - Why not L2 everywhere? (unsupported) • Consider a situation where the link between Data Site 1 and Data Site 2 is broken • Spanning Tree may discover a path between Data Site 1 and Data Site 2 exists via switch S1 • Possible performance decrease if data network traffic passes through a lower specification witness site 40
  • 42. 2-Node vSAN for Remote Locations • Both hosts in remote office store data • Witness in central office or 3rd site stores witness data • Unicast connectivity to witness appliance – 500ms RTT Latency – 1.5Mbps bandwidth from Data Site to WitnessCluster vSphere vSAN vSphere vSAN vSphere vSAN Witness vSphere vSAN Witness 500ms RTT latency 1.5Mbps bandwidth 500ms RTT latency 1.5Mbps bandwidth 42
  • 43. 2-node Direct Connect and Witness traffic separation
       • Separates the vSAN data traffic from witness traffic
       • Ability to connect data nodes directly using Ethernet cables (10GbE vSAN traffic via direct cable)
       • Two cables between hosts for higher network availability
       • Witness traffic uses the management network
       Note: Witness Traffic Separation is NOT supported for Stretched Cluster at this time
  • 45. General Concepts on Network Performance
       • Understand vSAN concepts and features
         – Standard vSAN setup vs. Stretched Cluster, FTT=1 or RAID-5/6
       • Understand network best practices for optimum performance
         – Physical switch topology
         – ISL trunks are not oversubscribed
         – MTU size factor
         – No errors/drops/pause frames on the network switches
  • 46. General Concept on Network Performance • Understand Host communication – No errors/drops/CRC/pause frames on the Network card – Driver/Firmware as per our HCL – Use SFP/Gbic certified by your Hardware Vendor – Use of NIOC to optimize traffic on the protocol layer if links sharing traffic (Ex. VM/vMotion/..) 46
  • 47. DEMO: Adding 10ms network latency 47
  • 48. Summary: Graphical interpretation, IOPS vs. latency
       [Chart: IOPS vs. additional latency (ms)]
       – Native = ~47000 IOPS
       – +5ms latency = ~33000 IOPS
       – +10ms latency = ~23100 IOPS
  • 49. DEMO: Network 2% and 10% packet loss 49
  • 50. Summary: Graphical interpretation, IOPS vs. loss %
       [Chart: IOPS vs. packet loss %]
       – Native = ~47000 IOPS
       – 1% loss = ~42300 IOPS
       – 2% loss = ~32000 IOPS
       – 10% loss = ~3400 IOPS
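The two demo summaries (latency and packet loss) can be reduced to relative throughput loss against the ~47000 IOPS native baseline. A small helper to express each impairment as a percentage drop, using the figures read off the slides (the function itself is hypothetical, for illustration):

```python
def iops_drop_pct(baseline, degraded):
    """Percent of baseline IOPS lost under a network impairment."""
    return round(100.0 * (baseline - degraded) / baseline, 1)

BASELINE = 47000  # native IOPS from the demos
# From the slides: +10ms added latency -> ~23100 IOPS (about half the baseline),
# while 10% packet loss -> ~3400 IOPS (over 90% of throughput gone).
```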
  • 51. Nerd Out With These Key vSAN Activities at VMworld #HitRefresh on your current data center and discover the possibilities! Earn VMware digital badges to showcase your skills • New 2017 vSAN Specialist Badge • Education & Certification Lounge: VM Village • Certification Exam Center: Jasmine EFG, Level 3 Become a vSAN Specialist Learn from self-paced and expert led hands on labs • vSAN Getting Started Workshop (Expert led) • VxRail Getting Started (Self paced) • Self-Paced lab available online 24x7 Practice with Hands-on-Labs Discover how to assess if your IT is a good fit for HCI • Four Seasons Willow Room/2nd floor • Open from 11am – 5pm Sun, Mon, and Tue • Learn more at Assessing & Sizing in STO1500BU Visit SDDC Assessment Lounge
  • 52. 3 Easy Ways to Learn More about vSAN 52 • Live at VMworld • Practical learning of vSAN, VxRail and more • 24x7 availability online – for free! vSAN Sizer vSAN Assessment New vSAN Tools • StorageHub.vmware.com • Reference architectures, off-line demos and more • Easy search function • And More! Storage Hub Technical Library Hands-On Lab Test drive vSAN for free today!
  • 53.

Editor's Notes

  1. Go to storagehub. Select vSAN. Select Plan and Design.
  2. This section will describe the fundamentals behind vSAN’s architecture.
  3. There are a number of distinct parts to vSAN networking. First there is the communication that takes place between all of the ESXi hosts in the vSAN cluster, indicating that they are still actively participating in vSAN. This has traditionally been done via multicast traffic, and a heartbeat is sent from the master to all hosts once every second to ensure they are still active. However, since the release of vSAN 6.6, this communication is now done via unicast traffic. This is a significant change compared to previous versions of vSAN, and should make vSAN configuration much easier from a networking perspective Lastly, there is virtual machine disk I/O. This makes up the majority of the traffic on the vSAN network. Because VMs on the vSAN datastore are made up of a set of objects, these objects may be made up of one or more components. For example, a number of RAID-0 stripes or a number of RAID-1 mirrors. Invariably, a VMs compute and a VMs storage will be located on different ESXi hosts in the cluster. It may also transpire that if a VM has been configured to tolerate one or more failures, the compute may be on node 1, the first RAID-0 mirror may be on host 2 and the second RAID-0 mirror could be on host 3. In this case, disk reads and writes for this virtual machine will have to traverse the vSAN network. This is unicast traffic, and forms the bulk of the vSAN network traffic. RDT traffic has always been unicast. VSANVP (Virtual SAN VASA Provider – Storage Awareness APIs) Used for Storage Policy Based Management Each vSAN node registers a VASA provider to vCenter Server via TCP
  4. KMS server port varies from vendor to vendor Enabling encryption requires a rolling upgrade approach to write the DEKs (Disk Encryption Keys) to disk.
  5. Speaker notes: In some cases, 10GB may not be required for stretched cluster. It will depend on the number of components, and rebuild bandwidth. Details are in our documentation. In many cases, when the witness and the vSAN traffic are separated in 2-node, witness traffic is placed on the same network as the management traffic.
  6. Speaker notes. Standard switches become a pain in large configs where you have to ensure MTU, VLAN, subnet mask, and gateway settings are consistent on every host. Distributed switches, while a little more complex to configure, do offer greater advantages to vSAN customers. And the distributed switch is free with vSAN.
  7. Keep it simple! Use a single VMkernel port for vSAN and have the networking stack provide resiliency. Deploy what your networking team is most comfortable with! Designing vSAN networks – Dedicated or Shared Interfaces? https://blogs.vmware.com/virtualblocks/2017/01/20/designing-vsan-networks-dedicated-shared-interfaces/ IP hash balances on a per-host basis, so a connection between any two hosts will ONLY use one NIC. As a cluster grows, the opportunity for a host to use more than one NIC and balance flows grows. It should be noted that seeing 80% usage on uplink 1 and 20% on uplink 2 is not unheard of; this is the nature of LACP. vMotion uses a less-than-documented hash that includes source and destination port and opens multiple connections, so LACP (even if its benefit for vSAN is not significant) can help vMotion with large VM evacuations.
  8. This section will describe the fundamentals behind vSAN’s architecture.
  9. An IP Multicast address is called a Multicast Group (MG). Internet Group Management Protocol (IGMP) is a communication protocol used to dynamically add receivers to IP Multicast group membership. There are multiple versions V1, V2 , V3 Protocol Independent Multicast (PIM) is a family of Layer 3 multicast routing protocols that provide different communication techniques for IP Multicast traffic to reach receivers that are in different Layer 3 segments from the Multicast Groups sources. IP multicast sends source packets to multiple receivers as a group transmission, and provides an efficient delivery of data to a number of destinations with minimum network bandwidth consumption IGMP is a communication protocol used to dynamically add receivers to IP Multicast group membership. The IGMP operations are restricted within individual Layer 2 domains. IGMP allows receivers to send requests to the Multicast Groups they would like to join. Becoming a member of a Multicast Group allows routers to know to forward traffic that is destined for the Multicast Groups on the Layer 3 segment where the receiver is connected (switch port). This allows the switch to keep a table of the individual receivers that need a copy of the Multicast Group traffic.   IP Multicast is a fundamental requirement of vSAN prior to v6.6. Earlier vSAN versions depended on IP multicast communication for the process of joining and leaving cluster groups as well as other intra-cluster communication services. IP multicast must be enabled and configured in the IP Network segments that will carry the vSAN traffic service Some customers who do not wish to to use PIM for routing multicast traffic may consider encapsulating the multicast traffic in a VxLAN, or some other fabric overlay.
10. IGMP snooping is a mechanism to constrain multicast traffic to only the ports that have receivers attached. The mechanism adds efficiency because it enables a Layer 2 switch to selectively send out multicast packets on only the ports that need them. When a network/VLAN does not have a router that can take on the multicast router role and provide multicast router discovery on the switches, you can turn on the IGMP querier feature. The feature allows the Layer 2 switch to proxy for a multicast router and send out periodic IGMP queries in that network. This action causes the switch to consider itself a multicast router port. In a small Layer 2 environment where you have only one vSAN cluster per VLAN and nothing else goes on that VLAN besides VMkernel ports, you can use "flooding" and disable snooping for that VLAN (no significant overhead, as you are basically having the switch broadcast that traffic). In a larger environment you will want to keep IGMP snooping enabled.
  11. This is true even in vSAN 6.6, in case there is a ’revert’ to multicast from unicast. Port 23451 is used by the master for sending a heartbeat to each host in the cluster every second. Port 12345 is used for the CMMDS updates.
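A rough way to sanity-check RDT reachability from a remote host is a plain TCP connect probe against port 2233. This is a sketch only: the helper name is ours, it tells you the port is reachable, not that vSAN is healthy, and the demo uses a local listener as a stand-in for a vSAN VMkernel IP.

```python
import socket
from contextlib import closing

RDT_PORT = 2233  # vSAN RDT traffic runs over TCP 2233

def tcp_port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds.

    A crude connectivity probe -- the CMMDS ports (UDP 12345, 23451,
    12321) cannot be checked this way, since UDP is connectionless.
    """
    with closing(socket.socket(socket.AF_INET, socket.SOCK_STREAM)) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

# Demo against a local listener on an ephemeral port (stand-in for
# probing a vSAN VMkernel IP on TCP 2233 from another host).
with closing(socket.socket()) as srv:
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    _, port = srv.getsockname()
    print(tcp_port_open("127.0.0.1", port))  # True while listener is up
```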
  12. This section will describe the fundamentals behind vSAN’s architecture.
13. Speaker notes: vCenter Server and ESXi hosts must be 6.5.0d (EP2) or later. The vSAN cluster IP address list is maintained by vCenter and is pushed to each node. The following changes will trigger an update from vCenter: a vSAN cluster is formed; a new vSAN node is added to or removed from a vSAN-enabled cluster; an IP address change or vSAN UUID change on an existing node.
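The update triggers above can be modelled as a diff of the UUID-to-IP membership map. This is an illustrative model only, not vCenter's actual implementation; all names and addresses are hypothetical.

```python
def membership_changes(old: dict, new: dict) -> dict:
    """Diff two {node UUID: VMkernel IP} maps into the events that
    make vCenter push a fresh unicast member list to the cluster.
    (A UUID change shows up here as a remove plus an add.)"""
    return {
        "added": new.keys() - old.keys(),        # node joined the cluster
        "removed": old.keys() - new.keys(),      # node left the cluster
        "ip_changed": {u for u in old.keys() & new.keys()
                       if old[u] != new[u]},     # existing node re-addressed
    }

members = {"uuid-a": "172.16.10.11", "uuid-b": "172.16.10.12"}
print(membership_changes(members, {**members, "uuid-c": "172.16.10.13"}))
print(membership_changes(members, {**members, "uuid-a": "172.16.20.11"}))
```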
  14. This section will describe the fundamentals behind vSAN’s architecture.
15. Table for educational purposes – we are only really interested in the first three behaviours.
16. When vCenter Server recovers, vCenter (vSAN health) will attempt to reconcile its current list of unicast addresses with the vSAN cluster, and will push down stale unicast addresses to vSAN nodes. This may trigger a vSAN cluster partition, and vCenter may no longer be accessible (since it runs on that vSAN cluster). DHCP with reservations (i.e. assigned IP addresses that are bound to the MAC addresses of vSAN VMkernel ports) is supported, as is DHCP without reservations provided the managing vCenter resides outside of the vSAN cluster. IPv6 is supported with unicast communications in vSAN 6.6. A link-local address is an IPv6 unicast address that can be automatically configured on any interface using the link-local prefix FE80::/10 (1111 1110 10); link-local addresses are a special scope of address which can be used only within the context of a single Layer 2 domain. vSAN, by default, does not add a node's link-local address to other cluster nodes (as a neighbor). As a consequence, IPv6 link-local addresses are not supported for unicast communications in vSAN 6.6.
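The link-local restriction is easy to check with Python's `ipaddress` module. This is a simplified sketch, not the product's validation logic; the function name and addresses are ours.

```python
import ipaddress

def usable_for_vsan_unicast(addr: str) -> bool:
    """Illustrative filter: vSAN 6.6 unicast does not support IPv6
    link-local addresses (FE80::/10), which are valid only within a
    single Layer 2 domain."""
    return not ipaddress.ip_address(addr).is_link_local

print(usable_for_vsan_unicast("fe80::250:56ff:fe8a:1"))  # False: link-local
print(usable_for_vsan_unicast("fd00:10::11"))            # True: routable ULA
```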
  17. This section will describe the fundamentals behind vSAN’s architecture.
  18. Note: vSAN does not use NIC teaming for load balancing
19. In a simple I/O test performed in our labs – 120 VMs with a 70:30 read/write ratio and a 64K block size on a four-node all-flash vSAN cluster – we can clearly see that vSAN makes no attempt to balance the traffic.
20. Again using a simple I/O test performed in our labs: 120 VMs with a 70:30 read/write ratio and a 64K block size on a four-node all-flash vSAN cluster.
21. Note: this is contrary to what we had in our documentation. Not recommended if the subnet is not routed.
  22. This section will describe the fundamentals behind vSAN’s architecture.
23. Multiple ToR (top-of-rack) switches. Explain that IGMP is needed on all switches.
24. Multiple ToR (top-of-rack) switches. Explain that IGMP is needed on all switches.
  25. This section will describe the fundamentals behind vSAN’s architecture.
26. Multiple ToR (top-of-rack) switches. Explain that IGMP is needed on all switches. Between Data Site 1 and Data Site 2, VMware supports either a stretched L2 (switched) configuration or an L3 (routed) configuration; both topologies are supported. Between the data sites and the witness site, VMware supports an L3 (routed) configuration.
27. Multiple ToR (top-of-rack) switches. Explain that IGMP is needed on all switches.
  28. This section will describe the fundamentals behind vSAN’s architecture.
29. Key message/talk track: Remote offices and branch offices (ROBO) are a common geographic model for many organizations. vSAN makes it easy to run a 2-node vSAN cluster to provide all of the storage needs in these branch offices, while using the primary site as the location to house the witness appliances. This makes for fast, affordable, and flexible management and delivery of services in environments that require this type of topology. Additionally, vSAN ROBO Edition licensing has no host limit restriction; there is only a restriction on the number of VMs that may be run. A maximum of 25 VMs can be run in a single site, or across sites, and any multiple of 25 requires an additional 25-VM-pack license. There is no upgrade path from vSAN ROBO licenses to regular CPU or Desktop vSAN licenses. ---------------------------------- Overview: 2-node ROBO vSAN is designed for branch office scenarios (retail, etc.). Both hosts in the remote office store data, and each remote office is seen as a 2-node cluster. The witness VMs live in the primary site – one witness VM for each remote office. You can easily scale from 2 nodes to more by adding additional hosts and removing the vSAN witness. All sites are managed by one vCenter instance. Minimum requirements from the 2-node site to the location of the vSAN Witness Appliance: 500ms RTT latency, 1.5Mbps bandwidth.
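The 25-VM-pack arithmetic boils down to a ceiling division. A trivial sketch of the rule as described above (the function name is ours):

```python
import math

VMS_PER_PACK = 25  # vSAN ROBO licensing: VM count limit, no host limit

def robo_packs_needed(vm_count: int) -> int:
    """Number of 25-VM-pack ROBO licenses needed for a VM count."""
    return math.ceil(vm_count / VMS_PER_PACK)

print(robo_packs_needed(25))  # 1 pack covers up to 25 VMs
print(robo_packs_needed(26))  # 2: one VM over 25 requires another pack
```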
  30. This section will describe the fundamentals behind vSAN’s architecture.
31. 4K block size, 32 threads, 10 VMs, stretched cluster:
Native, no latency introduced: ~46,938 IOPS
1ms RTT latency: ~39,377 IOPS, ~83% of native
5ms RTT latency: ~32,679 IOPS, ~69% of native
10ms RTT latency: ~20,333 IOPS, ~43% of native
20ms RTT latency: ~11,828 IOPS, ~25% of native
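The percent-of-native figures can be recomputed from the raw IOPS numbers above (truncating the ratio, which matches the slide's rounding):

```python
# Recompute percent-of-native from the stretched-cluster latency test.
NATIVE_IOPS = 46938.13

latency_results = {  # RTT latency -> measured IOPS (from the notes)
    "1ms": 39377.07,
    "5ms": 32679.18,
    "10ms": 20333.63,
    "20ms": 11828.53,
}

for rtt, iops in latency_results.items():
    pct = int(100 * iops / NATIVE_IOPS)  # truncate, as the slide does
    print(f"{rtt} RTT: {iops:.0f} IOPS, ~{pct}% of native")
```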
32. Native: ~46K IOPS. First test (1% loss): ~44K IOPS; second test (2% loss): ~30K IOPS; third test (5% loss): ~8K IOPS; native again: ~46K IOPS.
0.1% loss: ~45,298 IOPS, ~97% of native
0.5% loss: ~44,689 IOPS, ~96% of native
1% loss: ~43,337 IOPS, ~93% of native
5% loss: ~9,373 IOPS, ~20% of native
10% loss: ~3,117 IOPS, ~6% of native
  33. Mention metrics in talk track