Mellanox Approach to NFV & SDN
Eran Bello, Director of Business Development
March 2014 | NFV & SDN Summit | Paris, France
© 2014 Mellanox Technologies 2
Leading Supplier of End-to-End Interconnect Solutions
Diagram: Virtual Protocol Interconnect spanning server/compute, switch/gateway, storage (front/back-end) and metro/WAN – 56G InfiniBand & FCoIB on one side, 10/40/56GbE & FCoE on the other.
Comprehensive end-to-end InfiniBand and Ethernet portfolio: ICs, adapter cards, switches/gateways, cables/modules, and host/fabric software.
© 2014 Mellanox Technologies 3
Virtual Protocol Interconnect (VPI) Technology
VPI Switch: 64 ports 10GbE, 36 ports 40/56GbE, or 48 x 10GbE + 12 x 40/56GbE; 36 ports InfiniBand up to 56Gb/s; 8 VPI subnets; switch OS layer; Unified Fabric Manager.
VPI Adapter (LOM, adapter card or mezzanine card, PCIe 3.0): Ethernet 10/40/56 Gb/s, InfiniBand 10/20/40/56 Gb/s, with acceleration engines serving networking, storage, clustering and management applications.
From data center to campus and metro connectivity.
© 2014 Mellanox Technologies 4
Ethernet Switch Portfolio
Highest capacity in 1RU
• From 12 QSFP to 36 QSFP 40/56Gb ports → 4.03 Tb/s
• 64 x 10GbE
• 48 x 10GbE plus 12 x 40/56Gb/s
Unique value proposition
• VPI 10/40/56Gb/s
• End-to-end solution
Latency
• 220ns L2 latency
• 330ns L3 latency
Power
• SX1036 – 83W
• SX1024 – 70W
• SX1016 – 64W
• 1W per 10Gb port, 2.3W per 40Gb port
Products
• SX1036 – the ideal 40GbE ToR/aggregation switch
• SX1024 – non-blocking 10GbE/40GbE ToR switch
• SX1016 – highest-density 10GbE ToR switch
• SX1012 – ideal storage/database 10/40GbE switch
© 2014 Mellanox Technologies 5
Classical Network Appliance Approach: independent software vendors supply dedicated boxes – BRAS, firewall, DPI, CDN, tester/QoE monitor, WAN acceleration, message router, radio network controller, carrier-grade NAT, session border controller, PE router, SGSN/GGSN.
Network Functions Virtualisation Approach: orchestrated, automatic remote install onto generic high-volume Ethernet switches, generic high-volume servers and generic high-volume storage.
Telecom and Security Network Functions Virtualization – from ATCA platforms to compute and storage platforms
Ideal platform for ETSI NFV: Network Functions Virtualization
• Consolidate network equipment onto standard servers, switches and storage
• Leverage Software Defined Networking
• Driven within ETSI, the European Telecommunications Standards Institute
The migration to x86-based platforms is the enabler
• 3G/4G network core
• Load balancing / traffic and policy enforcement
• Internet security gateways
• Network monitoring / VPN
• CDN / video processing and optimization / IPTV
© 2014 Mellanox Technologies 6
HP c7000 with Mellanox 40GbE Interconnect
Mezz adapter (HP PN 644161-B22, 2-port blade NFF):
• VPI ready: same hardware for both Ethernet and InfiniBand
• Highest capacity: 2-port 40GbE, PCIe 3.0 x8 lanes
• Lowest latency: RoCE (app to app) 1.3us
• Lowest power (40GbE): typical 2-port 40GbE 5.1W
Cables:
• 56Gb/s FDR IB / 40GbE QSFP
• QSA: QSFP to SFP+ adapter
Switch blade SX1018HP:
• VPI ready: same hardware for both Ethernet and InfiniBand
• Highest capacity: 2.72 Tb/s bandwidth, 16 internal 40/10GbE ports, 18 external 40/10GbE ports (QSFP+)
• Lowest latency: 220 nsec at 40GbE, 270 nsec at 10GbE
• Lowest power: 82W (typical power with passive cables)
• C-Class double-wide form factor; up to two SX1018HP switches per enclosure
© 2014 Mellanox Technologies 7
IBM PureFlex with Mellanox 40GbE Interconnect
Mellanox ConnectX-3 dual-port 40GbE NIC and switch released in Q3/2013
Dual Star architecture (IBM PureFlex System):
• 14 compute blades, each using a single EN6132 dual-port 40GbE NIC
• 2 switch blades, each an EN6131 SwitchX-2 32-port 40GbE switch
• Compute I/O: 1.12 Tb/s @ 40GbE
• Uplink I/O: up to 1.44 Tb/s @ 40GbE
Dual-Dual Star architecture:
• 14 compute blades, each using dual EN6132 dual-port 40GbE NICs
• 4 switch blades, each an EN6131 SwitchX-2 32-port 40GbE switch
• Compute I/O: 2.24 Tb/s @ 40GbE
• Uplink I/O: up to 2.24 Tb/s @ 40GbE
Diagram: ITE / blade servers connect to EN4093R (10GbE) and EN6131 (40GbE) switch blades, with 22 x 10GbE and 18 x 40GbE uplinks per switch.
Single-wide chassis: 14x ITEs / blade servers, supporting 2 adapters per server.
Double-wide chassis: 7x ITEs / blade servers, supporting 4 adapters per server.
© 2014 Mellanox Technologies 8
CloudNFV: The 1st ETSI ISG NFV Approved PoC
Partner roles shown on the slide (logos omitted): overall architecture; optimization, active virtualization and demo console; NFV orchestration; metro Ethernet switches; demo virtual functions; traffic telemetry and DPI as a service; servers, data center switching, lab facilities and systems integration; data path acceleration; high performance server and storage interconnect; active data center.
© 2014 Mellanox Technologies 9
Mellanox and 6WIND: ConnectX-3 NIC Driver for Intel® DPDK
The client's application software runs on 6WIND or Intel® DPDK (data plane libraries, optimized NIC drivers) on a multicore processor, alongside other poll mode drivers and add-ons (librte_crypto_nitrox, 6WIND add-ons, VMware, …).
High-performance packet processing solutions for:
• Gateways
• Security appliances
• UTMs
• Virtual appliances
• etc.
librte_pmd_mlx4:
• The librte_pmd driver is provided as an add-on to the DPDK (no need to patch the DPDK)
• Based on the generic librte_eal and librte_ether APIs of the DPDK
• Clean design: it works alongside the ibverbs framework
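A minimal sketch of how such a PMD is consumed, assuming a DPDK build with the mlx4 PMD enabled and one ConnectX-3 port visible to the application; it uses only the generic librte_eal / rte_ethdev APIs the slide refers to, so nothing here is mlx4-specific, and the port index, ring and pool sizes are illustrative:

```c
/* Minimal DPDK receive loop over the generic ethdev API; the mlx4 PMD is
 * selected automatically at rte_eal_init() time when the NIC is present. */
#include <stdlib.h>
#include <rte_eal.h>
#include <rte_lcore.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define RX_RING_SIZE 512
#define NUM_MBUFS    8191
#define MBUF_CACHE   250
#define BURST_SIZE   32

int main(int argc, char **argv)
{
    if (rte_eal_init(argc, argv) < 0)
        return EXIT_FAILURE;                    /* EAL / PMD init failed */

    uint16_t port = 0;                          /* first probed port (assumption) */
    struct rte_mempool *pool = rte_pktmbuf_pool_create(
        "MBUF_POOL", NUM_MBUFS, MBUF_CACHE, 0,
        RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());

    struct rte_eth_conf port_conf = {0};        /* default single-queue config */
    rte_eth_dev_configure(port, 1, 1, &port_conf);
    rte_eth_rx_queue_setup(port, 0, RX_RING_SIZE, rte_socket_id(), NULL, pool);
    rte_eth_tx_queue_setup(port, 0, RX_RING_SIZE, rte_socket_id(), NULL);
    rte_eth_dev_start(port);

    for (;;) {
        struct rte_mbuf *bufs[BURST_SIZE];
        uint16_t nb = rte_eth_rx_burst(port, 0, bufs, BURST_SIZE);
        for (uint16_t i = 0; i < nb; i++)
            rte_pktmbuf_free(bufs[i]);          /* real code would process the packet */
    }
    return 0;
}
```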
© 2014 Mellanox Technologies 10
Mellanox NIC SR-IOV with PMD for Intel® DPDK in the Guest VM
• OpenStack integration via the Neutron plug-in
• High performance 10/40/56Gb/s
• SR-IOV enabled
• OpenFlow-enabled eSwitch with hardware offload (alongside legacy software vSwitches in the hypervisor)
• PMD for DPDK in the guest VM: OS bypass
• Multi-core and RSS support
• Delivering bare-metal performance
As on the previous slide, the client's application software (gateways, security appliances, UTMs, virtual appliances, etc.) runs on 6WIND or Intel® DPDK data plane libraries and optimized NIC drivers (librte_pmd_mlx4, librte_crypto_nitrox, 6WIND add-ons, VMware, …) on a multicore processor, here inside the VM at 10/40/56Gb/s.
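The multi-core and RSS bullet above can be sketched as a small variation of the previous example; a hedged illustration assuming a DPDK release where ETH_MQ_RX_RSS and ETH_RSS_IP are defined, configuring four RX/TX queue pairs so the NIC (here a ConnectX-3 virtual function) hashes IP flows across cores in hardware:

```c
#include <rte_ethdev.h>
#include <rte_lcore.h>

/* Configure a port with 4 RX/TX queues and receive-side scaling on IP fields;
 * queue count and ring sizes are illustrative. */
static int setup_rss_port(uint16_t port, struct rte_mempool *pool)
{
    struct rte_eth_conf conf = {
        .rxmode      = { .mq_mode = ETH_MQ_RX_RSS },
        .rx_adv_conf = { .rss_conf = { .rss_key = NULL,     /* default hash key */
                                       .rss_hf  = ETH_RSS_IP } },
    };
    const uint16_t nb_queues = 4;

    if (rte_eth_dev_configure(port, nb_queues, nb_queues, &conf) < 0)
        return -1;
    for (uint16_t q = 0; q < nb_queues; q++) {
        rte_eth_rx_queue_setup(port, q, 512, rte_socket_id(), NULL, pool);
        rte_eth_tx_queue_setup(port, q, 512, rte_socket_id(), NULL);
    }
    return rte_eth_dev_start(port);   /* each lcore then polls its own queue */
}
```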
© 2014 Mellanox Technologies 11
Mellanox NIC Based I/O Virtualization Advantages
Legacy NICs with software-based vSwitches:
• Slow application performance
  - 1/10GbE
  - ~50us latency for VM-to-VM connectivity
  - Slow VM migration
  - Slow storage I/O
• Expensive and inefficient
  - High CPU overhead for I/O processing
  - Multiple adapters needed
• Limited isolation
  - Minimal QoS and security in software
Mellanox NICs with hardware offload + vSwitches:
• Fastest application performance
  - 10/40GbE with RDMA, 56Gb InfiniBand
  - Only ~2us for VM-to-VM connectivity
  - >3.5x faster VM migration
  - >6x faster storage access
• Superb efficiency
  - Offloads the hypervisor CPU, allowing more VMs
  - I/O consolidation
• Best isolation
  - Hardware-enforced security and QoS
© 2014 Mellanox Technologies 12
I/O Virtualization Future – NIC Based Switching
Diagram: VMs in the hypervisor attach to virtual NICs (vNICs) backed by eSwitches (embedded switches) in the NIC/HCA; the physical ports (pPorts) are bonded in a hardware "LAG".
• HW-based teaming
• HW-based VM switching
• vPorts with multi-level QoS and hardware-based congestion control
• vPort security filters, ACLs, and tunneling (EoIB/VXLAN/NVGRE)
• vPort priority tagging
• pPort QoS and DCB
• Controlled via SDN/OpenFlow
• eSwitch supported match fields:
  - Destination MAC address
  - VLAN ID
  - Ether type
  - Source/destination IP address
  - Source/destination UDP/TCP port
• eSwitch supported actions:
  - Drop
  - Allow
  - Count
  - Trap/mirror
  - Set priority (VLAN priority, egress queue & policer)
© 2014 Mellanox Technologies 13
Mellanox and Radware DefensePro: SDN Demo
The traditional way: bump-in-the-wire appliances.
A better way: SDN and OpenFlow with flow-based routing.
© 2014 Mellanox Technologies 14
ConnectX-3 Pro NVGRE and VXLAN Performance
The world's first NVGRE / VXLAN offloaded NIC – the foundation of Cloud 2.0.
Chart: NVGRE throughput, ConnectX-3 Pro 10GbE, with and without NVGRE offload, for 2/4/8/16 VM pairs (bandwidth in Gb/s).

CPU usage per Gbit/sec with VXLAN (CPU% per Gbit/s):
                      1 VM    2 VMs   3 VMs
VXLAN in software     3.50    3.33    4.29
VXLAN HW offload      0.90    0.89    1.19

Total VM bandwidth when using VXLAN (Gb/s):
                      1 VM    2 VMs   3 VMs
VXLAN in software     2       3       3.5
VXLAN HW offload      10      19      21
© 2014 Mellanox Technologies 15
6WIND Demonstration of 195 Gbps Accelerated Virtual Switch
6WINDGate architecture: a DPDK-based fast path (IP, IPsec, OVS acceleration, TCP, VLAN, GRE, MPLS, ACL, LAG, custom, GTP-U, NAT) runs on the multicore processor platform next to the Linux networking stack; shared memory holds the statistics and protocol tables, and 6WINDGate sync daemons keep the Linux kernel side (iproute2, iptables, Quagga) consistent with the fast-path statistics.
6WINDGate includes the Mellanox poll mode driver (PMD), providing direct access to the networking hardware (Linux OS bypass).
The demo includes 5 Mellanox ConnectX®-3 Pro cards, each with dual 40G ports.
© 2014 Mellanox Technologies 16
Managing the VM Networking via OpenFlow / SDN
Diagram: cloud management (an OpenStack manager with the Neutron plug-in) and an SDN controller running SDN applications drive a Neutron agent and an OpenFlow agent on the servers; VMs attach either para-virtually through tap devices or via SR-IOV directly to the NIC's embedded switch, over 10/40GbE or InfiniBand ports, and policy is created/deleted and configured per VM vNIC.
• OpenFlow control over switch and NIC
• Adapter hardware acceleration for OpenFlow and overlay functions
• Native integration with OpenStack and SDN controllers
The benefits of VM provisioning & fabric policy in hardware: isolation, performance & offload, simpler SDN.
© 2014 Mellanox Technologies 17
CYAN Blue Planet: Carrier-Grade SDN Orchestration Platform
• Allows service orchestration over the telecom WAN network
• Leverages OpenStack for the telecom datacenter
• Leverages the Mellanox Neutron plug-in to enable SR-IOV
  - Near bare-metal performance for the VMs
© 2014 Mellanox Technologies 18
Alcatel-Lucent CloudBand: Mellanox Solution Partner
Using CloudBand, service providers can create cloud services that offer virtually limitless growth and that capitalize on their broad range of distributed data center and network resources. By building their own carrier clouds, service providers can meet stringent service level agreements (SLAs) and deliver the performance, access and security that enterprises and consumers demand.
"Network Function Virtualization can provide service providers with significant gains in automation and reductions in costs. Working in conjunction with the Alcatel-Lucent CloudBand Ecosystem, Mellanox's industry-leading, end-to-end InfiniBand and Ethernet interconnect products with support for NFV provides cloud and telecommunications networks with best-in-class virtualization features, performance and efficiency."
© 2014 Mellanox Technologies 19
Calsoft Labs: Virtual B-RAS Solution
• High performance virtual B-RAS solution
• Addresses broadband service requirements
• Intel® DPDK optimized solution
• Powered by highly optimized data plane processing software from 6WIND
• Performance & capabilities accelerated by the Mellanox ConnectX-3 NIC in DELL servers
• Delivers 256K PPPoE tunnels on a 2U rack DELL server with Intel Sandy Bridge
• Can be integrated with the Calsoft Labs Cloud NOC™ orchestration framework or third-party NFV management systems
Key Features
• PPPoX termination with VRF support for multi-tenants
• DHCP support:
  o DHCP relay
  o DHCP server for IPv4/IPv6
• Tunneling:
  o L2TP and GRE with VRF support
  o IPsec/PPP interworking per VRF
• AAA (Authentication, Authorization, Accounting) – RADIUS
• Security:
  o IP address tracking
  o Centralized firewall
• QoS:
  o QoS per service
  o QoS per subscriber, hierarchical QoS
  o Dynamic bandwidth management
© 2014 Mellanox Technologies 20
RDMA/RoCE I/O Offload
RDMA over InfiniBand or Ethernet.
Diagram: with TCP/IP, data from the application buffer traverses the OS kernel and the NIC in each rack; with RDMA, the HCAs in Rack 1 and Rack 2 move the application buffers directly between user-space memory on the two hosts, bypassing the kernel (user, kernel and hardware layers shown).
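A minimal libibverbs sketch of the buffer-registration step that makes the zero-copy path above possible (resource setup only, no queue pairs or connection management; the device index, buffer size and access flags are illustrative assumptions):

```c
#include <stdio.h>
#include <stdlib.h>
#include <infiniband/verbs.h>

int main(void)
{
    struct ibv_device **devs = ibv_get_device_list(NULL);
    if (!devs || !devs[0]) { fprintf(stderr, "no RDMA device\n"); return 1; }

    struct ibv_context *ctx = ibv_open_device(devs[0]);   /* first HCA (assumption) */
    struct ibv_pd *pd = ibv_alloc_pd(ctx);

    /* Pin and register an ordinary application buffer with the HCA; the
     * returned lkey/rkey let local and remote sides DMA it directly. */
    size_t len = 4096;
    void *buf = malloc(len);
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);
    if (!mr) { perror("ibv_reg_mr"); return 1; }
    printf("registered %zu bytes, lkey=0x%x rkey=0x%x\n", len, mr->lkey, mr->rkey);

    ibv_dereg_mr(mr);
    free(buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}
```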
© 2014 Mellanox Technologies 21
Accelerating Cloud Performance
Storage (6X) – SCSI write example, Linux KVM, 64KB I/O size: iSER at 40GbE with 16 VMs writing reaches ~6200 MB/s, versus ~1200 MB/s over 10GbE and ~800 MB/s over 8Gb Fibre Channel.
Migration (3.5X) – migration of an active VM: ~38s over 10GbE versus ~10s over 40GbE.
Virtualization (20X) – VM-to-VM latency at 256-byte message size: TCP paravirtualization at roughly 40us versus RDMA direct access at roughly 2us.
© 2014 Mellanox Technologies 22
RDMA at 40GbE Enables Massive Cloud Savings for Microsoft Azure
"To make storage cheaper we use lots more network! How do we make Azure Storage scale? RoCE (RDMA over Ethernet) enabled at 40GbE for Windows Azure Storage, achieving massive COGS savings"
— Microsoft keynote at Open Networking Summit 2014 on RDMA (Albert Greenberg, Microsoft, SDN Azure Infrastructure)
© 2014 Mellanox Technologies 23
Faster Cloud Storage Access
• Uses OpenStack built-in components and management (Open-iSCSI, tgt target, Cinder); no additional software is required, RDMA is already inbox and used by our OpenStack customers
• Mellanox enables faster performance with much lower CPU utilization
• Next step: bypass the hypervisor layers and add NAS & object storage
Diagram: compute servers (KVM hypervisor, VMs, Open-iSCSI with iSER over the adapter) connect through the switching fabric to storage servers (iSCSI/iSER target (tgt), local disks, RDMA cache), managed by OpenStack (Cinder) and using RDMA to accelerate iSCSI storage.
Chart: write bandwidth (MB/s) versus I/O size (1–256KB) for iSER with 4/8/16 VMs against iSCSI with 8/16 VMs; iSER reaches the PCIe limit, roughly 6X the iSCSI results.
© 2014 Mellanox Technologies 24
Mellanox CloudX OpenCloud
Any Software
Open NIC, Open Switch, Open Server, Open Rack
© 2014 Mellanox Technologies 25
Server Attached and/or Network Attached HWA
Diagram: virtualized network functions (DPI, BRAS, SGSN, GGSN, PE router, firewall, CG-NAT, SBC, STB) run as VMs (A/B/C) on Platform 1 … Platform X, each platform with its own 40Gb/s fabric and hardware accelerators (HWA) attached per server or per network segment, interconnected by a fat-tree SDN switch network over 40GbE / 56Gb/s IB FDR Ethernet links.
Server attached and/or network attached HWA are non-scalable and lead back to the custom appliance based model.
© 2014 Mellanox Technologies 26
Remote HWA as a Service in NFV Cloud Model
Diagram: the same virtual functions (DPI, BRAS, SGSN, GGSN, PE router, firewall, CG-NAT, SBC, STB) run as VMs (A/B/C) on Platform 1 … Platform X; each platform reaches a shared HWA / signal processing fabric through an SX1024 Ethernet switch over Nx40Gb/s links using RDMA/RoCE, with all platforms interconnected by a fat-tree SDN switch network at 40GbE / 56Gb/s IB FDR.
© 2014 Mellanox Technologies 27
iSCSI SAN/NAS Storage Architecture in an NFV Cloud Model
Diagram: Rack 1 … Rack n each contain compute and storage nodes plus SAN/NAS storage behind a ToR Ethernet switch; racks connect at 10/40/100Gb/s (12 x 10/40/100Gb/s uplinks per rack) through an aggregation layer into a fat-tree SDN switch network, with RDMA/RoCE used throughout.
iSCSI SAN/NAS storage over a standard Ethernet network: a shared resource.
© 2014 Mellanox Technologies 28
The GPU as a Service Implementation
• GPUs as a network-resident service
  - Little to no overhead when using FDR InfiniBand
• Virtualize and decouple GPU services from CPU services
  - A new paradigm in cluster flexibility
  - Lower cost, lower power and ease of use with shared GPU resources
  - Removes the difficult physical requirements of the GPU for standard compute servers
Diagram: "GPUs in every server" (a GPU paired with each CPU) versus "GPUs as a Service" (CPUs see virtual GPUs backed by a shared pool of GPUs accessed over the network).
© 2014 Mellanox Technologies 29
Local and Remote GPU HWA Solutions
Application/GPU servers: the CUDA application runs next to a local CUDA driver + runtime and the network interface.
GPU as a Service with Mellanox GPUDirect™ 1.0: the application server side runs the application and the rCUDA library; the remote GPU side runs the rCUDA daemon with the CUDA driver + runtime. GPUDirect 1.0 enables remote access from every node to any GPU in the system with a single copy: the data path is staged through CPU memory between the network interface and the GPU HWA device.
GPU as a Service with Mellanox PeerDirect™: the same application-server / remote-GPU split, with a P2P plugin between the HCA driver and the peer (GPU) driver that exports peer device memory functions; the ib_umem_* functions are "tunneled" through the p2p plugin module. PeerDirect enables remote access from every node to any GPU in the system with zero copy: the data path goes directly from the network interface to the GPU HWA device.
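For context, a sketch of the ordinary CUDA-runtime call sequence an application issues; the point of rCUDA / GPU-as-a-Service is that these same unmodified calls are intercepted by the rCUDA library on the application server and executed by the daemon next to the remote GPU, so no rCUDA-specific API appears here (buffer size and contents are illustrative):

```c
#include <stdio.h>
#include <string.h>
#include <cuda_runtime.h>

int main(void)
{
    enum { LEN = 1 << 20 };                  /* 1 MiB example buffer */
    static char host[LEN];
    memset(host, 0xAB, LEN);

    void *dev = NULL;
    /* With a local GPU these calls hit the local driver; under rCUDA the same
     * calls are forwarded over the interconnect to the remote GPU server. */
    if (cudaMalloc(&dev, LEN) != cudaSuccess) {
        fprintf(stderr, "cudaMalloc failed\n");
        return 1;
    }
    cudaMemcpy(dev, host, LEN, cudaMemcpyHostToDevice);   /* push input to GPU */
    /* ... kernel launches would go here ... */
    cudaMemcpy(host, dev, LEN, cudaMemcpyDeviceToHost);   /* pull results back */
    cudaFree(dev);
    return 0;
}
```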
© 2014 Mellanox Technologies 30
Mellanox Interconnect Solutions
Ideal for cloud datacenters, data processing platforms and Network Functions Virtualization:
• Leading SerDes technology: high bandwidth, advanced process
• 10/40/56Gb VPI with PCIe 3.0 interface
• 10/40/56Gb high-bandwidth switch: 36 ports of 10/40/56Gb or 64 ports of 10Gb
• RDMA/RoCE technology: ultra-low-latency data transfer
• Software Defined Networking: SDN switch and control, end-to-end solution
• Cloud management: OpenStack integration
Paving the way to 100Gb/s interconnect:
• End-to-end network interconnect for compute/processing and switching
• Software Defined Networking
High bandwidth, low latency and lower TCO: $/port/Gb.
Mellanox Interconnect is your competitive advantage!
Thank You