This slide deck covers DPDK support for new hardware offloads. It introduces the Netronome Agilio SmartNIC, a programmable card with hardware accelerators that can offload tasks such as cryptography and flow processing, and shows how the card is used with OVS-DPDK for better performance than kernel OVS. Full offloading of OVS classification and actions to the NIC is proposed to reduce host CPU usage. eBPF/XDP offloading is also discussed, as is virtio offload with DPDK. Overall, the deck explores how DPDK can better leverage network hardware offloads.
1. DPDK Summit – San Jose – 2017
DPDK support for new hw offloads
Alejandro Lucero, Netronome
#DPDKSummit
2. Netronome Agilio SmartNIC: a highly programmable card designed for network packet/flow processing
● 120 Flow Processing cores
● Hardware accelerators: crypto, hash, queue, LB, TM
● Hardware offloads: checksum, VLAN, TSO, IPSec, …, OVS, eBPF, P4, Contrail vROUTER, virtio
● 10G, 25G, 40G, 100G
● Up to quad PCIe Gen3 x8
3. [Diagram] Kernel OVS datapath: VMs with virtio-net interfaces connect through vhost-net to OVS-kernel; below sit the NIC driver and the NIC in hardware, with an orchestrator managing OVS from userspace.
4. [Diagram] Kernel OVS offload on the NFP: each VF handed to a VM gets a representor netdev on the host, the NFP PF netdev carries wire traffic, and OVS flow rules are offloaded through TC into the NFP.
5. [Diagram] Kernel OVS offload plus (Netronome) XVIO: the same VF/representor-netdev layout with TC flow offload into the NFP, extended with XVIO-DPDK, whose PMDs and vhost-user ports let VMs keep virtio-net interfaces.
6. OVS-DPDK:
● Better performance than (kernel) OVS
● Consumes host CPU. Does it scale?
7. [Diagram] Plain OVS-DPDK: OVS-DPDK runs in host userspace with a PMD driving the NIC, serving the virtio-net VMs over vhost.
8. [Diagram] OVS-DPDK with SR-IOV: in addition to the vhost ports toward virtio-net VMs, OVS-DPDK attaches per-VF PMDs to the NIC's virtual functions.
9. OVS-DPDK: offload?
● Partial offload proposed on the OVS mailing list (classification only, giving action hints to OVS)
● Full offload (classification + actions)? Does it make sense?
○ VMs using SR-IOV (native NIC performance)
○ OVS-DPDK needs dedicated CPUs; with offload, CPUs are only needed for the slow path
○ Different tenants, different service: virtio AND SR-IOV
○ Security
○ Only experimental work done so far (Netronome)
10. [Diagram] OVS-DPDK full offload: classification and actions run on the NFP; VFs are passed through to the VMs, each backed by a representor PMD inside OVS-DPDK, while the NFP PF PMD handles wire traffic.
11. [Diagram] OVS-DPDK full offload as an optional path: the same VF/representor-PMD layout, with OVS-DPDK also serving a virtio-net VM and containers through vhost, so offloaded and software ports coexist.
12. [Diagram] Virtual-port (representor) packet delivery on the slow path: a multiplexed PF PMD (M-PMD) enqueues received packets in bursts onto one RTE ring per representor, demultiplexing on per-packet metadata; each representor PMD (V-PMD) implements rx_pkt_burst as a dequeue burst from its own RX ring (see the sketch below).
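To make the slow path concrete, here is a minimal C sketch of the ring-based delivery on slide 12, assuming the DPDK 17.05+ rte_ring API. The names repr_rxq, pf_demux_burst, repr_rx_pkt_burst and the metadata accessor pkt_get_port_meta() are hypothetical illustrations, not the actual NFP PMD code.

#include <rte_mbuf.h>
#include <rte_ring.h>

/* One RX ring per representor: the PF PMD ("M-PMD") fills it,
 * the representor PMD ("V-PMD") drains it. */
struct repr_rxq {
    struct rte_ring *ring;
};

/* Hypothetical accessor: assume the NIC reports the destination
 * representor in the mbuf's FDIR metadata field. */
static inline uint16_t
pkt_get_port_meta(const struct rte_mbuf *m)
{
    return (uint16_t)m->hash.fdir.hi;
}

/* PF PMD side: after a burst from the wire, route each mbuf to the
 * ring of the representor named by its per-packet metadata. */
static void
pf_demux_burst(struct rte_ring **repr_rings, struct rte_mbuf **pkts,
               uint16_t nb_pkts)
{
    for (uint16_t i = 0; i < nb_pkts; i++) {
        uint16_t port = pkt_get_port_meta(pkts[i]);
        if (rte_ring_enqueue(repr_rings[port], pkts[i]) != 0)
            rte_pktmbuf_free(pkts[i]); /* ring full: drop (slow path) */
    }
}

/* Representor side: the eth_dev rx_pkt_burst callback is simply a
 * burst dequeue from the representor's own ring. */
static uint16_t
repr_rx_pkt_burst(void *rx_queue, struct rte_mbuf **rx_pkts,
                  uint16_t nb_pkts)
{
    struct repr_rxq *rxq = rx_queue;
    return rte_ring_dequeue_burst(rxq->ring, (void **)rx_pkts,
                                  nb_pkts, NULL);
}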
13. OVS-DPDK offload: what is needed?
● Representor PMDs could be created inside the PF PMD, but…
○ hotplug/unplug: representors are not PCI devices
○ transparency: representor naming
○ who takes over the PF? A bifurcated driver?
● OVS flow rule offload?
○ Changes to OVS-DPDK? Using TC through the PF?
○ Is rte_flow expressive enough for OVS flow syntax? (see the sketch below)
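As a taste of what rte_flow (available since DPDK 17.02) can already express, below is a minimal sketch that offloads one OVS-style match/action: steer UDP packets to a given IPv4 destination into a chosen RX queue. offload_udp_flow() and its parameters are illustrative, not an existing OVS-DPDK interface; richer OVS actions (tunnels, connection tracking, output to another port) are exactly where the question above bites.

#include <rte_byteorder.h>
#include <rte_ethdev.h>
#include <rte_flow.h>

/* Build and install: eth / ipv4(dst = dst_ip) / udp(dst = dst_port)
 * -> queue <queue_idx>. Returns NULL and fills *err on failure. */
static struct rte_flow *
offload_udp_flow(uint16_t port_id, uint32_t dst_ip, uint16_t dst_port,
                 uint16_t queue_idx, struct rte_flow_error *err)
{
    struct rte_flow_attr attr = { .ingress = 1 };

    struct rte_flow_item_ipv4 ip_spec = {
        .hdr.dst_addr = rte_cpu_to_be_32(dst_ip),
    };
    struct rte_flow_item_ipv4 ip_mask = {
        .hdr.dst_addr = 0xffffffff,   /* full match on dst IP */
    };
    struct rte_flow_item_udp udp_spec = {
        .hdr.dst_port = rte_cpu_to_be_16(dst_port),
    };
    struct rte_flow_item_udp udp_mask = {
        .hdr.dst_port = 0xffff,       /* full match on dst port */
    };

    struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_ETH },
        { .type = RTE_FLOW_ITEM_TYPE_IPV4,
          .spec = &ip_spec, .mask = &ip_mask },
        { .type = RTE_FLOW_ITEM_TYPE_UDP,
          .spec = &udp_spec, .mask = &udp_mask },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };

    struct rte_flow_action_queue queue = { .index = queue_idx };
    struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };

    return rte_flow_create(port_id, &attr, pattern, actions, err);
}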
14. eBPF offload
BPF: Berkeley Packet Filter (tcpdump, libpcap, netfilter)
The kernel executes BPF programs in an in-kernel virtual machine
eBPF: extended BPF; socket filtering and tracing (since kernel 3.18)
Attaching eBPF programs to the kernel TC classifier (since kernel 4.1)
XDP: eXpress Data Path
15. XDP (eXpress Data Path) in the Linux kernel
Bare-metal packet processing at the lowest point in the software stack
It does not require any specialized hardware
It does not require kernel bypass
It does not replace the TCP/IP stack
It works in concert with the TCP/IP stack, with all the benefits of BPF (eBPF)
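For reference, a minimal XDP program illustrates the model on this slide: it runs at the driver's earliest receive point and returns a per-packet verdict, here dropping UDP and passing everything else up to the normal TCP/IP stack. This is a generic sketch in restricted C for the clang BPF target, not Netronome-specific code.

#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/in.h>
#include <linux/ip.h>

#define SEC(NAME) __attribute__((section(NAME), used))

/* Compile-time byte swap for the ethertype check; assumes a
 * little-endian host, as the usual bpf_endian.h helpers do. */
#define bpf_htons(x) __builtin_bswap16(x)

SEC("xdp")
int xdp_drop_udp(struct xdp_md *ctx)
{
    void *data = (void *)(long)ctx->data;
    void *data_end = (void *)(long)ctx->data_end;

    /* The verifier rejects any access not bounds-checked against
     * data_end, hence the explicit checks before each header read. */
    struct ethhdr *eth = data;
    if ((void *)(eth + 1) > data_end)
        return XDP_PASS;
    if (eth->h_proto != bpf_htons(ETH_P_IP))
        return XDP_PASS;

    struct iphdr *ip = (void *)(eth + 1);
    if ((void *)(ip + 1) > data_end)
        return XDP_PASS;

    /* Verdict: drop UDP in the driver, pass the rest up the stack. */
    return ip->protocol == IPPROTO_UDP ? XDP_DROP : XDP_PASS;
}

char _license[] SEC("license") = "GPL";

The same bytecode is what an offload-capable NIC such as the NFP can translate and run in hardware instead of on the host CPU, which is the comparison drawn on slide 19.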
17. XDP/eBPF & DPDK
Do we need XDP/eBPF in userspace networking? How would we do it?
Good for being "kernel compatible": executing eBPF/XDP programs, but…
Can eBPF-DPDK be eBPF-kernel compatible?
Likely useful for any DPDK-based network stack
Support at the PMD level, with an offload option
It is already possible (with limitations) to run eBPF in userspace
18. eBPF offload
XDP consumes host resources (CPU, PCIe bandwidth)
Netronome's NFP: packet processing through eBPF programs with hardware offload
IOvisor: eBPF to the extreme
19. [Diagram] Host XDP vs. NFP eBPF offload: on the host path, the eBPF program runs on the host CPU at the NIC driver's XDP hook, so DROP/FORWARD verdicts still cost CPU and PCIe bandwidth; with NFP offload, the eBPF program runs on the NIC and packets are dropped or forwarded before ever crossing PCIe.
20. virtio offload: a virtio-capable NIC
● VMs with SR-IOV (device passthrough) but presenting a virtio interface
○ Pros: VM provisioning, performance
○ Cons: VM migration, East-West traffic
● VM migration requires a migration-friendly NIC
● East-West traffic: memory vs. NIC
● DPDK: virtio changes (vhost), IOMMU changes?
● Another option: vDPA (vHost Data Path Acceleration)
21. Dataplane Acceleration Developer Day (DXDD)
▪ Date: December 11-12 (Monday & Tuesday)
▪ Time: 8:30 a.m. – 8:00 p.m.
▪ Location: Computer History Museum (Mountain View, CA)
▪ Why should you attend?
• Discussions of recent dataplane acceleration development
– P4-16 introduction
– TC offload introduction
– eBPF introduction
• Extensive hands-on training
– P4-14 labs
– TC labs
▪ Register: https://open-nfp.org/dxdd-2017