Using Open vSwitch with DPDK
============================
Open vSwitch can use Intel(R) DPDK lib to operate entirely in
userspace. This file explains how to install and use Open vSwitch in
such a mode.
The DPDK support of Open vSwitch is considered experimental.
It has not been thoroughly tested.
This version of Open vSwitch should be built manually with `configure`
and `make`.

OVS needs a system with 1GB hugepages support.
Building and Installing:
------------------------
1. Configure build & install DPDK:

   1. Set `$DPDK_DIR`

      ```
      export DPDK_DIR=/usr/src/dpdk-1.8.0
      cd $DPDK_DIR
      ```
   2. Update `config/common_linuxapp` so that DPDK generates a single lib file.
      (This modification is also required for the IVSHMEM build.)

      `CONFIG_RTE_BUILD_COMBINE_LIBS=y`

      Then run `make install` to build and install the library.
      For default install without IVSHMEM:

      `make install T=x86_64-native-linuxapp-gcc`

      To include IVSHMEM (shared memory):

      `make install T=x86_64-ivshmem-linuxapp-gcc`

      For further details refer to http://dpdk.org/
2. Configure & build the Linux kernel:

   Refer to intel-dpdk-getting-started-guide.pdf for understanding
   DPDK kernel requirements.
3. Configure & build OVS:

   * Non IVSHMEM:

     `export DPDK_BUILD=$DPDK_DIR/x86_64-native-linuxapp-gcc/`

   * IVSHMEM:

     `export DPDK_BUILD=$DPDK_DIR/x86_64-ivshmem-linuxapp-gcc/`

   ```
   cd $OVS_DIR/openvswitch
   ./boot.sh
   ./configure --with-dpdk=$DPDK_BUILD
   make
   ```
   For better performance one can enable aggressive compiler optimizations and
   use special instructions (popcnt, crc32) that may not be available on all
   machines. Instead of typing `make`, type:

   `make CFLAGS='-O3 -march=native'`
Refer to [INSTALL.userspace.md] for general requirements of building userspace OVS.
Using the DPDK with ovs-vswitchd:
---------------------------------
1. Setup system boot:

   Add the following options to the kernel bootline:

   `default_hugepagesz=1GB hugepagesz=1G hugepages=1`
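   After rebooting, you can verify that the pages were actually reserved; this
   quick check is not part of the original steps, but /proc/meminfo reports
   the hugepage pool directly:

   ```
   # HugePages_Total should be 1 and Hugepagesize 1048576 kB
   grep Huge /proc/meminfo
   ```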
2. Setup DPDK devices:

   DPDK devices can be setup using either the VFIO (for DPDK 1.7+) or UIO
   modules. UIO requires inserting an out-of-tree driver igb_uio.ko that is
   available in DPDK. Setup for both methods is described below.

   * UIO:
     1. Insert uio.ko: `modprobe uio`
     2. Insert igb_uio.ko: `insmod $DPDK_BUILD/kmod/igb_uio.ko`
     3. Bind network device to igb_uio:
        `$DPDK_DIR/tools/dpdk_nic_bind.py --bind=igb_uio eth1`
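     To confirm the binding took effect, the same script has a status option
     (`eth1` here is just the example device used above):

     ```
     $DPDK_DIR/tools/dpdk_nic_bind.py --status
     ```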
   * VFIO:

     VFIO needs to be supported in the kernel and the BIOS. More information
     can be found in the [DPDK Linux GSG].
     1. Insert vfio-pci.ko: `modprobe vfio-pci`
     2. Set correct permissions on the vfio device: `sudo /usr/bin/chmod a+x /dev/vfio`
        and: `sudo /usr/bin/chmod 0666 /dev/vfio/*`
     3. Bind network device to vfio-pci:
        `$DPDK_DIR/tools/dpdk_nic_bind.py --bind=vfio-pci eth1`
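     If binding to vfio-pci fails, the IOMMU is usually not enabled. As a
     quick check (not part of the original steps; the exact bootline options
     depend on the platform, e.g. `intel_iommu=on` on Intel systems):

     ```
     # Look for IOMMU/DMAR initialization messages and bootline options
     dmesg | grep -i -e DMAR -e IOMMU
     grep iommu /proc/cmdline
     ```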
3. Mount the hugetlbfs filesystem:

   `mount -t hugetlbfs -o pagesize=1G none /dev/hugepages`
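   To make the mount persistent across reboots, you can optionally add an
   entry to /etc/fstab (a convenience, not required by the steps above):

   ```
   nodev /dev/hugepages hugetlbfs pagesize=1GB 0 0
   ```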
   Refer to http://www.dpdk.org/doc/quick-start for verifying DPDK setup.
4. Follow the instructions in [INSTALL.md] to install only the
   userspace daemons and utilities (via `make install`).
   1. First time only: create (or clear) the database:

      ```
      mkdir -p /usr/local/etc/openvswitch
      mkdir -p /usr/local/var/run/openvswitch
      rm -f /usr/local/etc/openvswitch/conf.db
      ovsdb-tool create /usr/local/etc/openvswitch/conf.db \
          /usr/local/share/openvswitch/vswitch.ovsschema
      ```
   2. Start ovsdb-server:

      ```
      ovsdb-server --remote=punix:/usr/local/var/run/openvswitch/db.sock \
          --remote=db:Open_vSwitch,Open_vSwitch,manager_options \
          --private-key=db:Open_vSwitch,SSL,private_key \
          --certificate=db:Open_vSwitch,SSL,certificate \
          --bootstrap-ca-cert=db:Open_vSwitch,SSL,ca_cert --pidfile --detach
      ```
   3. First time after db creation, initialize:

      `ovs-vsctl --no-wait init`
5. Start vswitchd:

   DPDK configuration arguments can be passed to vswitchd via the `--dpdk`
   argument. This needs to be the first argument passed to the vswitchd
   process. The dpdk argument -c is ignored by ovs-dpdk, but it is a required
   parameter for dpdk initialization.

   ```
   export DB_SOCK=/usr/local/var/run/openvswitch/db.sock
   ovs-vswitchd --dpdk -c 0x1 -n 4 -- unix:$DB_SOCK --pidfile --detach
   ```
   If you allocated more than one GB hugepage (as for IVSHMEM), set the
   amount and use NUMA node 0 memory:

   ```
   ovs-vswitchd --dpdk -c 0x1 -n 4 --socket-mem 1024,0 \
       -- unix:$DB_SOCK --pidfile --detach
   ```
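   At this point you can sanity-check that the daemon started and connected
   to the database (a quick check, not part of the original steps):

   ```
   ovs-appctl version   # talks to the running ovs-vswitchd
   ovs-vsctl show       # queries the database via ovsdb-server
   ```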
6. Add bridge & ports

   To use ovs-vswitchd with DPDK, create a bridge with datapath_type
   "netdev" in the configuration database. For example:

   `ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev`
   Now you can add dpdk devices. OVS expects DPDK device names to start with
   dpdk and end with a portid. vswitchd should print (in the log file) the
   number of dpdk devices found.
   ```
   ovs-vsctl add-port br0 dpdk0 -- set Interface dpdk0 type=dpdk
   ovs-vsctl add-port br0 dpdk1 -- set Interface dpdk1 type=dpdk
   ```
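   If a port fails to appear, check the vswitchd log. With a default
   --prefix=/usr/local build it is normally found at the path below (this
   may differ on your system):

   ```
   tail /usr/local/var/log/openvswitch/ovs-vswitchd.log
   ```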
   Once the first DPDK port is added to vswitchd, it creates a polling thread
   and polls the dpdk device in a continuous loop. Therefore CPU utilization
   for that thread is always 100%.
7. Add test flows:

   Test flow script across NICs (assuming ovs is in /usr/src/ovs):

   ```
   # Move to command directory
   cd /usr/src/ovs/utilities/

   # Clear current flows
   ./ovs-ofctl del-flows br0

   # Add flows between port 1 (dpdk0) and port 2 (dpdk1)
   ./ovs-ofctl add-flow br0 in_port=1,action=output:2
   ./ovs-ofctl add-flow br0 in_port=2,action=output:1
   ```
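   You can verify that the flows were installed with the same utility:

   ```
   ./ovs-ofctl dump-flows br0
   ```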
8. Performance tuning
   With pmd multi-threading support, OVS creates one pmd thread for each
   NUMA node by default. The pmd thread handles the I/O of all DPDK
   interfaces on the same NUMA node. The following two commands can be used
   to configure the multi-threading behavior.
   `ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=<hex string>`
   The command above asks for a CPU mask for setting the affinity of pmd
   threads. A set bit in the mask means a pmd thread is created and pinned
   to the corresponding CPU core. For more information, please refer to
   `man ovs-vswitchd.conf.db`.
   `ovs-vsctl set Open_vSwitch . other_config:n-dpdk-rxqs=<integer>`
   The command above sets the number of rx queues for each DPDK interface.
   The rx queues are assigned to pmd threads on the same NUMA node in a
   round-robin fashion. For more information, please refer to
   `man ovs-vswitchd.conf.db`.
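   For example, with two pmd threads on a NUMA node, two rx queues per
   interface give each thread one queue of every interface on that node
   (illustrative values only):

   ```
   ovs-vsctl set Open_vSwitch . other_config:n-dpdk-rxqs=2
   ```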
   Ideally for maximum throughput, the pmd thread should not be scheduled out,
   which temporarily halts its execution. The following affinitization methods
   will help.
   Let's pick cores 4,6,8,10 for pmd threads to run on. Also assume a dual
   8-core Sandy Bridge system with hyperthreading enabled, where CPU1 has
   cores 0,...,7 and 16,...,23 and CPU2 has cores 8,...,15 and 24,...,31.
   (A different CPU configuration could have different core mask
   requirements.)
   To the kernel bootline add the core isolation list for the chosen cores and
   their associated hyperthread siblings (e.g. isolcpus=4,20,6,22,8,24,10,26).
   Reboot the system for the isolation to take effect, then restart everything.
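   After the reboot you can confirm that the kernel picked up the isolation
   list (a quick check, not from the original text):

   ```
   grep isolcpus /proc/cmdline
   ```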
   Configure pmd threads on cores 4,6,8,10 using 'pmd-cpu-mask':

   `ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=00000550`
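   The mask is simply the sum of the bits for the chosen cores; for cores
   4,6,8,10 the arithmetic works out as follows:

   ```
   # bit 4    bit 6    bit 8    bit 10
   # 0x010  + 0x040  + 0x100  + 0x400  = 0x550
   ```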
   You should be able to check that the pmd threads are pinned to the correct
   cores via:

   ```
   top -p `pidof ovs-vswitchd` -H -d1
   ```
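   Alternatively, you can list the current processor of each thread directly
   (thread names may vary between OVS versions):

   ```
   ps -eLo tid,psr,comm | grep pmd
   ```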
   Note, the pmd threads on a NUMA node are only created if there is at least
   one DPDK interface from that NUMA node added to OVS.
   Note, core 0 is always reserved for non-pmd threads and should never be set
   in the cpu mask.
DPDK Rings:
-----------

Following the steps above to create a bridge, you can now add dpdk rings
as a port to the vswitch. OVS will expect the DPDK ring device name to
start with dpdkr and end with a portid.
`ovs-vsctl add-port br0 dpdkr0 -- set Interface dpdkr0 type=dpdkr`
DPDK rings client test application

Included in the test directory is a sample DPDK application for testing
the rings. This is from the base dpdk directory and modified to work
with the ring naming used within ovs.

Location: tests/ovs_client
To run the client:

```
cd /usr/src/ovs/tests/
ovsclient -c 1 -n 4 --proc-type=secondary -- -n "port id you gave dpdkr"
```
In the case of the dpdkr example above, the "port id you gave dpdkr" is 0.

It is essential to have `--proc-type=secondary`.
The application simply receives an mbuf on the receive queue of the
ethernet ring and then places that same mbuf on the transmit ring of
the ethernet ring. It is a trivial loopback application.
DPDK rings in VM (IVSHMEM shared memory communications)
--------------------------------------------------------
In addition to executing the client in the host, you can execute it within
a guest VM. To do so you will need a patched qemu. You can download the
patch and getting started guide at:

https://01.org/packet-processing/downloads
A general rule of thumb for better performance is that the client
application should not be assigned the same dpdk core mask "-c" as
the vswitchd.
Restrictions:
-------------

  - This support is for physical NICs; it has been tested with Intel NICs only.
  - Works with 1500 MTU; a few changes are needed in the DPDK lib to fix
    this issue.
  - Currently the DPDK port does not make use of any offload functionality.

  ivshmem:
  - The shared memory is currently restricted to the use of 1GB
    huge pages.
  - All huge pages are shared amongst the host, clients, virtual
    machines etc.
Please report problems to bugs@openvswitch.org.
[INSTALL.userspace.md]:INSTALL.userspace.md
[INSTALL.md]:INSTALL.md
[DPDK Linux GSG]: http://www.dpdk.org/doc/guides/linux_gsg/build_dpdk.html#binding-and-unbinding-network-ports-to-from-the-igb-uioor-vfio-modules