Intel DPDK
Step by Step Instructions

       Hisaki Ohara (@hisak)
Objectives

• Build/Execute sample applications
  (helloworld, L2fwd and L3fwd)
• Forward packets generated with Linux pktgen
Test Environment

[Diagram: three ESXi 5.1 hosts side by side — Sender VM, DPDK VM, and Receiver VM, each a CentOS 6.3 guest with an Intel 10G NIC (82599) passed through via VMDirectPath (VT-d) and driven by ixgbe; the DPDK VM has two ixgbe ports (port0/port1), the Sender and Receiver VMs one each (eth1)]

       - ESXi 5.1
          - CPU: Xeon 5600 Series
          - Guest OS: CentOS 6.3 x86_64
             - # of vCPUs: 2
          - 10G NIC (82599) is passed through via VMDirectPath (VT-d)
             - In-box ixgbe driver
Step0: Download source code
• Source code and relevant documents
 • http://www.intel.com/go/dpdk
Step1: Prepare Linux Kernel
      •   Add a boot option and an fstab entry for hugepages
# uname -a
Linux cent-dpdk 2.6.32-279.14.1.el6.x86_64 #1 SMP Tue Nov 6 23:43:09 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
# cat /boot/grub/grub.conf
<snip>
title CentOS (2.6.32-279.14.1.el6.x86_64)
         root (hd0,0)
         kernel /vmlinuz-2.6.32-279.14.1.el6.x86_64 ro root=/dev/mapper/vg_cent6-
lv_root rd_LVM_LV=vg_cent6/lv_swap rd_NO_LUKS LANG=en_US.UTF-8 rd_NO_MD
rd_LVM_LV=vg_cent6/lv_root SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us
rd_NO_DM rhgb quiet crashkernel=auto hugepages=256
         initrd /initramfs-2.6.32-279.14.1.el6.x86_64.img
<snip>
# mkdir /hugepages
# cat /etc/fstab
<snip>
hugetlbfs                /hugepages               hugetlbfs rw,mode=0777 0 0
# reboot

      •   Confirm hugepages are enabled
# cat /proc/meminfo
<snip>
HugePages_Total:     256
HugePages_Free:      256
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
<snip>
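
      •   (Not on the original slide) As an alternative sketch, hugepages can also be reserved and mounted at runtime without a reboot; note that allocating 256 contiguous 2 MB pages may fail on a fragmented system, in which case the boot-time option above is more reliable
# echo 256 > /proc/sys/vm/nr_hugepages
# mount -t hugetlbfs nodev /hugepages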
Step2: Build DPDK and samples

$ unzip INTELDPDK.L.1.2.3_3.zip
$ cd DPDK
$ make install T=x86_64-default-linuxapp-gcc
$ pwd
/home/dpdktest/DPDK
$ cd examples/helloworld
$ RTE_SDK=/home/dpdktest/DPDK make
  CC main.o
  LD helloworld
  INSTALL-APP helloworld
  INSTALL-MAP helloworld.map
Step3: helloworld sample
      •   Load the kernel modules required by DPDK Linux apps
# modprobe uio
# insmod /home/dpdktest/DPDK/x86_64-default-linuxapp-gcc/kmod/igb_uio.ko



      •   Execute helloworld sample
# ./build/helloworld -c 3 -n 2
EAL: coremask set to 3
EAL: Detected lcore 0 on socket 0
EAL: Detected lcore 1 on socket 0
EAL: Requesting 256 pages of size 2097152
EAL: Ask a virtual area of 0x20000000 bytes
EAL: Virtual area found at 0x7f4862c00000 (size = 0x20000000)
EAL: WARNING: Cannot mmap /dev/hpet! The TSC will be used instead.
EAL: Master core 0 is ready (tid=82dd0800)
EAL: Core 1 is ready (tid=621fe700)
hello from core 1
hello from core 0
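
      •   For reference, the helloworld sample is essentially the following (a simplified sketch, not the shipped source verbatim): rte_eal_init() parses the EAL options (-c 3 selects lcores 0 and 1 via the coremask, -n 2 is the number of memory channels) and sets up hugepage memory, then a hello function is launched on every other enabled lcore and run on the master core, which produces the "hello from core N" lines above.

/* simplified sketch of examples/helloworld; not the shipped source */
#include <stdio.h>
#include <rte_eal.h>
#include <rte_lcore.h>
#include <rte_launch.h>
#include <rte_debug.h>

/* runs on each lcore and prints its id */
static int
lcore_hello(void *arg)
{
        (void)arg;
        printf("hello from core %u\n", rte_lcore_id());
        return 0;
}

int
main(int argc, char **argv)
{
        /* parse -c/-n and the other EAL options, map hugepage memory */
        if (rte_eal_init(argc, argv) < 0)
                rte_panic("Cannot init EAL\n");

        unsigned lcore_id;
        /* launch lcore_hello() on every slave lcore in the coremask */
        RTE_LCORE_FOREACH_SLAVE(lcore_id)
                rte_eal_remote_launch(lcore_hello, NULL, lcore_id);

        /* ... and run it on the master core as well */
        lcore_hello(NULL);

        /* wait until all slave lcores have finished */
        rte_eal_mp_wait_lcore();
        return 0;
}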
Step4-1: L2fwd Sample Build
           [Diagram: Sender VM eth1 (10.0.10.11) -> DPDK VM port0 / port1 -> Receiver VM eth1 (10.0.10.22, MAC e.g. AA:BB:CC:DD:EE:FF); each VM runs CentOS 6.3 with ixgbe]

      •      The L2fwd/L3fwd samples are very simple
            •      One-way only; don't expect ping/pong replies
            •      The destination MAC address is hard-coded, so it is patched to the Receiver VM's MAC (see the note after the diff below)

$ cd examples/l2fwd
$ diff -up main.c.0 main.c
@@ -293,7 +293,7 @@ l2fwd_simple_forward(struct rte_mbuf *m,

    /* 00:09:c0:00:00:xx */
        tmp = &eth->d_addr.addr_bytes[0];
-   *((uint64_t *)tmp) = 0x000000c00900 + (dst_port << 24);
+   *((uint64_t *)tmp) = 0xFFEEDDCCBBAA; /* AA:BB:CC:DD:EE:FF */

    /* src addr */
    ether_addr_copy(&l2fwd_ports_eth_addr[dst_port], &eth->s_addr);
$ RTE_SDK=/home/dpdktest/DPDK make
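
      •      Why the constant 0xFFEEDDCCBBAA comes out on the wire as AA:BB:CC:DD:EE:FF: x86 is little-endian, so the 64-bit store puts the lowest byte (0xAA) into the first byte of the destination MAC field. A small standalone demo (not part of the sample):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
        /* same value the patched l2fwd writes into eth->d_addr */
        union { uint64_t u; uint8_t b[8]; } m = { .u = 0xFFEEDDCCBBAAULL };

        /* on a little-endian host the low byte 0xAA is stored first */
        printf("%02X:%02X:%02X:%02X:%02X:%02X\n",
               m.b[0], m.b[1], m.b[2], m.b[3], m.b[4], m.b[5]);
        /* prints AA:BB:CC:DD:EE:FF */
        return 0;
}

      •      The 8-byte store also touches the first two bytes of the source MAC field, but the sample overwrites the source address with ether_addr_copy() right afterwards (visible in the diff context), so no stray bytes remain.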
Step4-2: L2fwd Sample
                    [Diagram: Sender VM eth1 (10.0.10.11) -> DPDK VM port0 (11:22:33:44:55:66) / port1 -> Receiver VM eth1 (10.0.10.22, MAC AA:BB:CC:DD:EE:FF); each VM runs CentOS 6.3 with ixgbe]

DPDK VM:
# ./build/l2fwd -c 0x3 -n 2 -- -p 0x3

Receiver VM:
# ip address add 10.0.10.22/24 dev eth1
# vnstat -i eth1 -l

Sender VM:
# ip address add 10.0.10.11/24 dev eth1
# modprobe pktgen
# echo "rem_device_all" > /proc/net/pktgen/kpktgend_0
# echo "add_device eth1" > /proc/net/pktgen/kpktgend_0
# echo "count 10000000" > /proc/net/pktgen/eth1
# echo "clone_skb 1000000" > /proc/net/pktgen/eth1
# echo "pkt_size 60" > /proc/net/pktgen/eth1
# echo "delay 0" > /proc/net/pktgen/eth1
# echo "dst 10.0.10.22" > /proc/net/pktgen/eth1
# echo "dst_mac 11:22:33:44:55:66" > /proc/net/pktgen/eth1
# echo "start" > /proc/net/pktgen/pgctrl

Packets are dropped at the RX ports of the DPDK VM and the Receiver VM.
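
(Not on the original slide) To see the rate pktgen actually achieved, the per-device /proc entry can be read on the Sender VM once the run completes; the vnstat session on the Receiver VM shows the live RX rate on the other end.
# cat /proc/net/pktgen/eth1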
Step5-1: L3fwd Sample Build
           [Diagram: Sender VM eth1 (10.0.10.11) -> DPDK VM port0 / port1 -> Receiver VM eth1 (10.0.20.22, MAC e.g. AA:BB:CC:DD:EE:FF); each VM runs CentOS 6.3 with ixgbe]

      •      The L3fwd sample has two ways to determine the destination port
            •      [default] destination IP address (LPM-based; see the route sketch after the diff below)
            •      5-tuple hash (hash-based)
$ cd examples/l3fwd
$ diff -up main.c.0 main.c
@@ -282,6 +282,8 @@ static struct l3fwd_route l3fwd_route_ar
        {IPv4(6,1,1,0), 24, 5},
        {IPv4(7,1,1,0), 24, 6},
        {IPv4(8,1,1,0), 24, 7},
+       {IPv4(10,0,10,11), 24, 0},
+       {IPv4(10,0,20,22), 24, 1},
  };

 #define L3FWD_NUM_ROUTES 
@@ -475,7 +477,7 @@ l3fwd_simple_forward(struct rte_mbuf *m,

          /* 00:09:c0:00:00:xx */
          tmp = &eth_hdr->d_addr.addr_bytes[0];
-         *((uint64_t *)tmp) = 0x000000c00900 + (dst_port << 24);
+         *((uint64_t *)tmp) = 0xFFEEDDCCBBAA; /* AA:BB:CC:DD:EE:FF */

$ RTE_SDK=/home/dpdktest/DPDK make
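
      •      For orientation, each route entry is (IPv4 prefix, prefix depth, output port), so the two added lines route 10.0.10.0/24 to port0 and 10.0.20.0/24 to port1. The field names in the sketch below are an assumption based on the diff context; the snippet only illustrates how the entries are interpreted:

#include <stdint.h>
#include <stdio.h>

/* assumed, simplified stand-ins for the sample's route-table types */
#define IPv4(a, b, c, d) \
        (((uint32_t)(a) << 24) | ((uint32_t)(b) << 16) | ((uint32_t)(c) << 8) | (uint32_t)(d))

struct l3fwd_route {
        uint32_t ip;      /* destination prefix */
        uint8_t  depth;   /* prefix length used by the LPM lookup */
        uint8_t  if_out;  /* output port */
};

/* the two entries added by the diff above */
static struct l3fwd_route added_routes[] = {
        {IPv4(10, 0, 10, 11), 24, 0},  /* 10.0.10.0/24 -> port 0 */
        {IPv4(10, 0, 20, 22), 24, 1},  /* 10.0.20.0/24 -> port 1 */
};

int main(void)
{
        /* print each prefix masked to /24, which is what the LPM lookup matches on */
        for (unsigned i = 0; i < 2; i++) {
                uint32_t net = added_routes[i].ip & 0xFFFFFF00u;
                printf("%u.%u.%u.0/%u -> port %u\n",
                       (net >> 24) & 0xFFu, (net >> 16) & 0xFFu, (net >> 8) & 0xFFu,
                       (unsigned)added_routes[i].depth,
                       (unsigned)added_routes[i].if_out);
        }
        return 0;
}

      •      With the default LPM lookup, a packet from the Sender VM with destination IP 10.0.20.22 arriving on port0 matches 10.0.20.0/24, so it leaves on port1 with its destination MAC rewritten to the patched constant (AA:BB:CC:DD:EE:FF).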
Step5-2: L3fwd Sample
                    [Diagram: Sender VM eth1 (10.0.10.11) -> DPDK VM port0 (11:22:33:44:55:66) / port1 -> Receiver VM eth1 (10.0.20.22, MAC AA:BB:CC:DD:EE:FF); each VM runs CentOS 6.3 with ixgbe]

DPDK VM:
# ./build/l3fwd -c 0x3 -n 2 -- -p 0x3 --config="(0,0,0),(1,0,1)"

Receiver VM:
# ip address add 10.0.20.22/24 dev eth1
# vnstat -i eth1 -l

Sender VM:
# ip address add 10.0.10.11/24 dev eth1
# modprobe pktgen
# echo "rem_device_all" > /proc/net/pktgen/kpktgend_0
# echo "add_device eth1" > /proc/net/pktgen/kpktgend_0
# echo "count 10000000" > /proc/net/pktgen/eth1
# echo "clone_skb 1000000" > /proc/net/pktgen/eth1
# echo "pkt_size 60" > /proc/net/pktgen/eth1
# echo "delay 0" > /proc/net/pktgen/eth1
# echo "dst 10.0.20.22" > /proc/net/pktgen/eth1
# echo "dst_mac 11:22:33:44:55:66" > /proc/net/pktgen/eth1
# echo "start" > /proc/net/pktgen/pgctrl

Reliable measurement methods and tuning are needed to evaluate performance properly.
Notes on this experiment

• No guarantees, as usual
• No tuning effort has been made
• References:
 • http://www.intel.com/go/dpdk
 • For pktgen (in Japanese)
   • http://research.sakura.ad.jp/2010/10/08/infini01/
