1 Using Open vSwitch with DPDK
2 ============================
Open vSwitch can use the Intel(R) DPDK library to operate entirely in
userspace. This file explains how to install and use Open vSwitch in
such a mode.
8 The DPDK support of Open vSwitch is considered experimental.
9 It has not been thoroughly tested.
This version of Open vSwitch should be built manually with `configure`
and `make`.
14 OVS needs a system with 1GB hugepages support.
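A quick way to check that the CPU supports 1GB pages is to look for the
`pdpe1gb` CPU flag, for example:

```
grep -o pdpe1gb /proc/cpuinfo | uniq
```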
16 Building and Installing:
17 ------------------------
19 Required: DPDK 16.04, libnuma
Optional (if building with vhost-cuse): `fuse`, `fuse-devel` (`libfuse-dev`
on Debian/Ubuntu)
1. Configure build & install DPDK:

Set `$DPDK_DIR`:

```
export DPDK_DIR=/usr/src/dpdk-16.04
cd $DPDK_DIR
```
Then run `make install` to build and install the library.
For a default install without IVSHMEM:
34 `make install T=x86_64-native-linuxapp-gcc DESTDIR=install`
36 To include IVSHMEM (shared memory):
38 `make install T=x86_64-ivshmem-linuxapp-gcc DESTDIR=install`
40 For further details refer to http://dpdk.org/
42 2. Configure & build the Linux kernel:
Refer to intel-dpdk-getting-started-guide.pdf for the DPDK kernel
requirements.
47 3. Configure & build OVS:
Non-IVSHMEM:

`export DPDK_BUILD=$DPDK_DIR/x86_64-native-linuxapp-gcc/`

IVSHMEM:

`export DPDK_BUILD=$DPDK_DIR/x86_64-ivshmem-linuxapp-gcc/`

Then configure and build OVS:

```
./configure --with-dpdk=$DPDK_BUILD [CFLAGS="-g -O2 -Wno-cast-align"]
make
```
64 Note: 'clang' users may specify the '-Wno-cast-align' flag to suppress DPDK cast-align warnings.
For better performance one can enable aggressive compiler optimizations and
use special instructions (popcnt, crc32) that may not be available on all
machines. Instead of typing `make`, type:
70 `make CFLAGS='-O3 -march=native'`
72 Refer to [INSTALL.userspace.md] for general requirements of building userspace OVS.
74 Using the DPDK with ovs-vswitchd:
75 ---------------------------------
1. Setup system boot:
Add the following options to the kernel bootline:
80 `default_hugepagesz=1GB hugepagesz=1G hugepages=1`
82 2. Setup DPDK devices:
DPDK devices can be set up using either the VFIO (for DPDK 1.7+) or UIO
modules. UIO requires inserting an out-of-tree driver, igb_uio.ko, which is
available in DPDK. Setup for both methods is described below.
UIO:

1. insert uio.ko: `modprobe uio`
90 2. insert igb_uio.ko: `insmod $DPDK_BUILD/kmod/igb_uio.ko`
91 3. Bind network device to igb_uio:
92 `$DPDK_DIR/tools/dpdk_nic_bind.py --bind=igb_uio eth1`
VFIO:

VFIO needs to be supported in the kernel and the BIOS. More information
97 can be found in the [DPDK Linux GSG].
99 1. Insert vfio-pci.ko: `modprobe vfio-pci`
100 2. Set correct permissions on vfio device: `sudo /usr/bin/chmod a+x /dev/vfio`
101 and: `sudo /usr/bin/chmod 0666 /dev/vfio/*`
102 3. Bind network device to vfio-pci:
103 `$DPDK_DIR/tools/dpdk_nic_bind.py --bind=vfio-pci eth1`
3. Mount the hugetlbfs filesystem
107 `mount -t hugetlbfs -o pagesize=1G none /dev/hugepages`
Refer to http://www.dpdk.org/doc/quick-start for verifying the DPDK setup.
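For example, a quick sanity check that the hugepages were allocated and that
hugetlbfs is mounted:

```
grep -i huge /proc/meminfo
mount | grep hugetlbfs
```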
111 4. Follow the instructions in [INSTALL.md] to install only the
112 userspace daemons and utilities (via 'make install').
1. First time only: create (or clear) the database:
```
mkdir -p /usr/local/etc/openvswitch
mkdir -p /usr/local/var/run/openvswitch
rm /usr/local/etc/openvswitch/conf.db
ovsdb-tool create /usr/local/etc/openvswitch/conf.db \
    /usr/local/share/openvswitch/vswitch.ovsschema
```
123 2. Start ovsdb-server
```
ovsdb-server --remote=punix:/usr/local/var/run/openvswitch/db.sock \
    --remote=db:Open_vSwitch,Open_vSwitch,manager_options \
    --private-key=db:Open_vSwitch,SSL,private_key \
    --certificate=db:Open_vSwitch,SSL,certificate \
    --bootstrap-ca-cert=db:Open_vSwitch,SSL,ca_cert --pidfile --detach
```
133 3. First time after db creation, initialize:
`ovs-vsctl --no-wait init`
5. Start vswitchd:

DPDK configuration arguments can be passed to vswitchd via the Open_vSwitch
other_config column. The recognized configuration options are listed below.
Defaults will be provided for all values not explicitly set.
* dpdk-init
  Specifies whether OVS should initialize and support DPDK ports. This is
  a boolean, and defaults to false.

* dpdk-lcore-mask
  Specifies the CPU cores on which dpdk lcore threads should be spawned.
  The DPDK lcore threads are used for DPDK library tasks, such as
  library internal message processing, logging, etc. Value should be in
  the form of a hex string (so '0x123') similar to the 'taskset' mask
  input.
  If not specified, the value will be determined by choosing the lowest
  CPU core from the initial cpu affinity list. Otherwise, the value will be
  passed directly to the DPDK library.
  For performance reasons, it is best to set this to a single core on
  the system, rather than allow lcore threads to float.

* dpdk-alloc-mem
  This sets the total memory to preallocate from hugepages regardless of
  processor socket. It is recommended to use dpdk-socket-mem instead.

* dpdk-socket-mem
  Comma separated list of memory to pre-allocate from hugepages on specific
  sockets.

* dpdk-hugepage-dir
  Directory where hugetlbfs is mounted.

* dpdk-extra
  Extra arguments to provide to DPDK EAL, as previously specified on the
  command line. Do not pass '--no-huge' to the system in this way. Support
  for running the system without hugepages is nonexistent.

* cuse-dev-name
  Option to set the vhost_cuse character device name.

* vhost-sock-dir
  Option to set the path to the vhost_user unix socket files.
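For illustration, two of the options above could be set as follows (the mask
and directory values are examples only; dpdk-init and dpdk-socket-mem are
shown in the commands further below):

```
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask=0x1
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-hugepage-dir=/dev/hugepages
```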
NOTE: Changing any of these options requires restarting the ovs-vswitchd
daemon.
186 Open vSwitch can be started as normal. DPDK will be initialized as long
187 as the dpdk-init option has been set to 'true'.
```
export DB_SOCK=/usr/local/var/run/openvswitch/db.sock
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
ovs-vswitchd unix:$DB_SOCK --pidfile --detach
```
If more than one GB hugepage was allocated (as for IVSHMEM), set the amount
to pre-allocate and take it from NUMA node 0 memory:
```
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem="1024,0"
ovs-vswitchd unix:$DB_SOCK --pidfile --detach
```
204 6. Add bridge & ports
206 To use ovs-vswitchd with DPDK, create a bridge with datapath_type
207 "netdev" in the configuration database. For example:
209 `ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev`
211 Now you can add dpdk devices. OVS expects DPDK device names to start with
212 "dpdk" and end with a portid. vswitchd should print (in the log file) the
213 number of dpdk devices found.
```
ovs-vsctl add-port br0 dpdk0 -- set Interface dpdk0 type=dpdk
ovs-vsctl add-port br0 dpdk1 -- set Interface dpdk1 type=dpdk
```
Once the first DPDK port is added to vswitchd, it creates a polling thread
that polls the DPDK devices in a continuous loop. Therefore the CPU
utilization for that thread is always 100%.
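This can be observed, for example, with a per-thread view in top (the polling
thread should report roughly 100% CPU):

```
top -H -p $(pidof ovs-vswitchd)
```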
Note: creating bonds of DPDK interfaces is slightly different to creating
bonds of system interfaces. For DPDK, the interface type must be explicitly
set. For example:

`ovs-vsctl add-bond br0 dpdkbond dpdk0 dpdk1 -- set Interface dpdk0 type=dpdk -- set Interface dpdk1 type=dpdk`
7. Add test flows

Test flow script across NICs (assuming ovs in /usr/src/ovs):
```
# Move to command directory
cd /usr/src/ovs/utilities/

# Clear current flows
./ovs-ofctl del-flows br0

# Add flows between port 1 (dpdk0) to port 2 (dpdk1)
./ovs-ofctl add-flow br0 in_port=1,action=output:2
./ovs-ofctl add-flow br0 in_port=2,action=output:1
```
8. QoS (Egress Policing) Example

Assuming you have a vhost-user port transmitting traffic consisting of
253 packets of size 64 bytes, the following command would limit the egress
254 transmission rate of the port to ~1,000,000 packets per second:
256 `ovs-vsctl set port vhost-user0 qos=@newqos -- --id=@newqos create qos
257 type=egress-policer other-config:cir=46000000 other-config:cbs=2048`
259 To examine the QoS configuration of the port:
261 `ovs-appctl -t ovs-vswitchd qos/show vhost-user0`
263 To clear the QoS configuration from the port and ovsdb use the following:
265 `ovs-vsctl destroy QoS vhost-user0 -- clear Port vhost-user0 qos`
For more details regarding egress-policer parameters please refer to the
vswitch.xml.
270 9. Ingress Policing Example
272 Assuming you have a vhost-user port receiving traffic consisting of
273 packets of size 64 bytes, the following command would limit the reception
274 rate of the port to ~1,000,000 packets per second:
276 `ovs-vsctl set interface vhost-user0 ingress_policing_rate=368000
277 ingress_policing_burst=1000`
279 To examine the ingress policer configuration of the port:
281 `ovs-vsctl list interface vhost-user0`
283 To clear the ingress policer configuration from the port use the following:
285 `ovs-vsctl set interface vhost-user0 ingress_policing_rate=0`
287 For more details regarding ingress-policer see the vswitch.xml.
Performance Tuning:
-------------------

1. PMD affinitization
294 A poll mode driver (pmd) thread handles the I/O of all DPDK
295 interfaces assigned to it. A pmd thread will busy loop through
296 the assigned port/rxq's polling for packets, switch the packets
297 and send to a tx port if required. Typically, it is found that
298 a pmd thread is CPU bound, meaning that the greater the CPU
299 occupancy the pmd thread can get, the better the performance. To
300 that end, it is good practice to ensure that a pmd thread has as
301 many cycles on a core available to it as possible. This can be
302 achieved by affinitizing the pmd thread with a core that has no
303 other workload. See section 7 below for a description of how to
304 isolate cores for this purpose also.
The following command can be used to specify the affinity of the pmd threads:
309 `ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=<hex string>`
By setting a bit in the mask, a pmd thread is created and pinned
to the corresponding CPU core, e.g. to run a pmd thread on core 1:
314 `ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=2`
316 For more information, please refer to the Open_vSwitch TABLE section in
318 `man ovs-vswitchd.conf.db`
Note that a pmd thread on a NUMA node is only created if there is
at least one DPDK interface from that NUMA node added to OVS.
323 2. Multiple poll mode driver threads
325 With pmd multi-threading support, OVS creates one pmd thread
326 for each NUMA node by default. However, it can be seen that in cases
327 where there are multiple ports/rxq's producing traffic, performance
328 can be improved by creating multiple pmd threads running on separate
329 cores. These pmd threads can then share the workload by each being
330 responsible for different ports/rxq's. Assignment of ports/rxq's to
331 pmd threads is done automatically.
The following command can be used to specify the affinity of the pmd threads:
336 `ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=<hex string>`
A set bit in the mask means a pmd thread is created and pinned
to the corresponding CPU core, e.g. to run pmd threads on cores 1 and 2:
341 `ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=6`
343 For more information, please refer to the Open_vSwitch TABLE section in
345 `man ovs-vswitchd.conf.db`
For example, when using dpdk and dpdkvhostuser ports in a bi-directional
VM loopback as shown below, spreading the workload over 2 or 4 pmd
threads shows significant improvements as there will be more total CPU
occupancy available.
352 NIC port0 <-> OVS <-> VM <-> OVS <-> NIC port 1
354 The following command can be used to confirm that the port/rxq assignment
355 to pmd threads is as required:
357 `ovs-appctl dpif-netdev/pmd-rxq-show`
359 This can also be checked with:
`taskset -p <pid_of_pmd>`
366 To understand where most of the pmd thread time is spent and whether the
367 caches are being utilized, these commands can be used:
```
# Clear previous stats
ovs-appctl dpif-netdev/pmd-stats-clear

# Check current stats
ovs-appctl dpif-netdev/pmd-stats-show
```
377 3. DPDK port Rx Queues
379 `ovs-vsctl set Interface <DPDK interface> options:n_rxq=<integer>`
The command above sets the number of rx queues for the specified DPDK interface.
382 The rx queues are assigned to pmd threads on the same NUMA node in a
383 round-robin fashion. For more information, please refer to the
384 Open_vSwitch TABLE section in
386 `man ovs-vswitchd.conf.db`
4. Exact Match Cache (EMC)

Each pmd thread contains one EMC. After initial flow setup in the
391 datapath, the EMC contains a single table and provides the lowest level
392 (fastest) switching for DPDK ports. If there is a miss in the EMC then
393 the next level where switching will occur is the datapath classifier.
394 Missing in the EMC and looking up in the datapath classifier incurs a
395 significant performance penalty. If lookup misses occur in the EMC
396 because it is too small to handle the number of flows, its size can
397 be increased. The EMC size can be modified by editing the define
398 EM_FLOW_HASH_SHIFT in lib/dpif-netdev.c.
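For example, a sketch of how the EMC size could be increased (the default
value of EM_FLOW_HASH_SHIFT may differ between OVS versions, so check the
source first; OVS must be rebuilt for the change to take effect):

```
# inspect the current value
grep -n 'define EM_FLOW_HASH_SHIFT' lib/dpif-netdev.c
# edit the define to a larger value, then rebuild and reinstall OVS
make && make install
```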
400 As mentioned above an EMC is per pmd thread. So an alternative way of
401 increasing the aggregate amount of possible flow entries in EMC and
402 avoiding datapath classifier lookups is to have multiple pmd threads
403 running. This can be done as described in section 2.
5. Compiler options

The default compiler optimization level is '-O2'. Changing this to
408 more aggressive compiler optimizations such as '-O3' or
409 '-Ofast -march=native' with gcc can produce performance gains.
411 6. Simultaneous Multithreading (SMT)
413 With SMT enabled, one physical core appears as two logical cores
414 which can improve performance.
416 SMT can be utilized to add additional pmd threads without consuming
417 additional physical cores. Additional pmd threads may be added in the
418 same manner as described in section 2. If trying to minimize the use
419 of physical cores for pmd threads, care must be taken to set the
420 correct bits in the pmd-cpu-mask to ensure that the pmd threads are
421 pinned to SMT siblings.
423 For example, when using 2x 10 core processors in a dual socket system
424 with HT enabled, /proc/cpuinfo will report 40 logical cores. To use
425 two logical cores which share the same physical core for pmd threads,
426 the following command can be used to identify a pair of logical cores.
428 `cat /sys/devices/system/cpu/cpuN/topology/thread_siblings_list`
430 where N is the logical core number. In this example, it would show that
431 cores 1 and 21 share the same physical core. The pmd-cpu-mask to enable
two pmd threads running on these two logical cores (one physical core) is:
435 `ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=100002`
437 Note that SMT is enabled by the Hyper-Threading section in the
438 BIOS, and as such will apply to the whole system. So the impact of
439 enabling/disabling it for the whole system should be considered
e.g. if workloads on the system can scale across multiple cores,
SMT may be very beneficial. However, if they do not and perform best
on a single physical core, SMT may not be beneficial.
444 7. The isolcpus kernel boot parameter
446 isolcpus can be used on the kernel bootline to isolate cores from the
447 kernel scheduler and hence dedicate them to OVS or other packet
forwarding related workloads. For example a Linux kernel boot-line could be:

```
GRUB_CMDLINE_LINUX_DEFAULT="quiet hugepagesz=1G hugepages=4
default_hugepagesz=1G 'intel_iommu=off' isolcpus=1-19"
```
456 8. NUMA/Cluster On Die
458 Ideally inter NUMA datapaths should be avoided where possible as packets
459 will go across QPI and there may be a slight performance penalty when
460 compared with intra NUMA datapaths. On Intel Xeon Processor E5 v3,
461 Cluster On Die is introduced on models that have 10 cores or more.
462 This makes it possible to logically split a socket into two NUMA regions
463 and again it is preferred where possible to keep critical datapaths
464 within the one cluster.
466 It is good practice to ensure that threads that are in the datapath are
467 pinned to cores in the same NUMA area. e.g. pmd threads and QEMU vCPUs
468 responsible for forwarding. If DPDK is built with
469 CONFIG_RTE_LIBRTE_VHOST_NUMA=y, vHost User ports automatically
470 detect the NUMA socket of the QEMU vCPUs and will be serviced by a PMD
from the same node provided a core on this node is enabled in the
pmd-cpu-mask.
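For example, the NUMA node of a candidate NIC and the host's NUMA layout can
be checked as follows (the PCI address below is illustrative):

```
cat /sys/bus/pci/devices/0000:06:00.0/numa_node
lscpu | grep -i numa
```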
474 9. Rx Mergeable buffers
476 Rx Mergeable buffers is a virtio feature that allows chaining of multiple
477 virtio descriptors to handle large packet sizes. As such, large packets
478 are handled by reserving and chaining multiple free descriptors
479 together. Mergeable buffer support is negotiated between the virtio
480 driver and virtio device and is supported by the DPDK vhost library.
481 This behavior is typically supported and enabled by default, however
482 in the case where the user knows that rx mergeable buffers are not needed
483 i.e. jumbo frames are not needed, it can be forced off by adding
484 mrg_rxbuf=off to the QEMU command line options. By not reserving multiple
485 chains of descriptors it will make more individual virtio descriptors
available for rx to the guest using dpdkvhost ports and this can improve
performance.
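For example, building on the `-device` line shown in the vhost-user section
above, mergeable buffers could be disabled like so:

```
-device virtio-net-pci,mac=00:00:00:00:00:01,netdev=mynet1,mrg_rxbuf=off
```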
489 10. Packet processing in the guest
Whether simply forwarding packets from one interface to another or
performing more complex packet processing in the guest, it is good
practice to ensure that the thread performing this work has as much
CPU occupancy as possible. For example, when the DPDK sample application
495 `testpmd` is used to forward packets in the guest, multiple QEMU vCPU
496 threads can be created. Taskset can then be used to affinitize the
497 vCPU thread responsible for forwarding to a dedicated core not used
498 for other general processing on the host system.
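For example, a sketch of pinning a forwarding vCPU thread (core 5 and the
thread id are illustrative):

```
# list QEMU threads to identify the vCPU thread id
ps -eLo pid,tid,comm | grep qemu
# pin that thread to a dedicated core
taskset -pc 5 <vcpu_tid>
```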
500 11. DPDK virtio pmd in the guest
502 dpdkvhostcuse or dpdkvhostuser ports can be used to accelerate the path
503 to the guest using the DPDK vhost library. This library is compatible with
504 virtio-net drivers in the guest but significantly better performance can
505 be observed when using the DPDK virtio pmd driver in the guest. The DPDK
506 `testpmd` application can be used in the guest as an example application
that forwards packets from one DPDK vhost port to another. An example of
508 running `testpmd` in the guest can be seen here.
```
./testpmd -c 0x3 -n 4 --socket-mem 512 -- --burst=64 -i --txqflags=0xf00 \
    --disable-hw-vlan --forward-mode=io --auto-start
```
See below for information on dpdkvhostcuse and dpdkvhostuser ports.
516 See [DPDK Docs] for more information on `testpmd`.
DPDK Rings:
-----------

Following the steps above to create a bridge, you can now add dpdk rings
522 as a port to the vswitch. OVS will expect the DPDK ring device name to
523 start with dpdkr and end with a portid.
525 `ovs-vsctl add-port br0 dpdkr0 -- set Interface dpdkr0 type=dpdkr`
527 DPDK rings client test application
529 Included in the test directory is a sample DPDK application for testing
530 the rings. This is from the base dpdk directory and modified to work
531 with the ring naming used within ovs.
Location: `tests/ovs_client`
To run the client:

```
cd /usr/src/ovs/tests/
ovsclient -c 1 -n 4 --proc-type=secondary -- -n "port id you gave dpdkr"
```
542 In the case of the dpdkr example above the "port id you gave dpdkr" is 0.
It is essential to have `--proc-type=secondary`.
546 The application simply receives an mbuf on the receive queue of the
547 ethernet ring and then places that same mbuf on the transmit ring of
548 the ethernet ring. It is a trivial loopback application.
550 DPDK rings in VM (IVSHMEM shared memory communications)
551 -------------------------------------------------------
553 In addition to executing the client in the host, you can execute it within
554 a guest VM. To do so you will need a patched qemu. You can download the
555 patch and getting started guide at :
557 https://01.org/packet-processing/downloads
A general rule of thumb for better performance is that the client
application should not be assigned the same dpdk core mask "-c" as
the vswitchd.
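For example (core assignments are illustrative only): if the vswitchd pmd
threads run on cores 1-2 (`pmd-cpu-mask=6`), the client could be started on
core 3 instead:

```
ovsclient -c 0x8 -n 4 --proc-type=secondary -- -n 0
```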
DPDK vhost:
-----------

DPDK 16.04 supports two types of vhost:

1. vhost-user
2. vhost-cuse
571 Whatever type of vhost is enabled in the DPDK build specified, is the type
572 that will be enabled in OVS. By default, vhost-user is enabled in DPDK.
573 Therefore, unless vhost-cuse has been enabled in DPDK, vhost-user ports
574 will be enabled in OVS.
Please note that support for vhost-cuse is intended to be deprecated in OVS
in a future release.

DPDK vhost-user:
----------------

The following sections describe the use of vhost-user 'dpdkvhostuser' ports
with OVS.
584 DPDK vhost-user Prerequisites:
585 -------------------------
1. DPDK 16.04 with vhost support enabled as documented in the "Building and
Installing" section above.
590 2. QEMU version v2.1.0+
592 QEMU v2.1.0 will suffice, but it is recommended to use v2.2.0 if providing
593 your VM with memory greater than 1GB due to potential issues with memory
594 mapping larger areas.
596 Adding DPDK vhost-user ports to the Switch:
597 --------------------------------------
599 Following the steps above to create a bridge, you can now add DPDK vhost-user
600 as a port to the vswitch. Unlike DPDK ring ports, DPDK vhost-user ports can
have arbitrary names, except that forward and backward slashes are prohibited
in the names.
604 - For vhost-user, the name of the port type is `dpdkvhostuser`
```
ovs-vsctl add-port br0 vhost-user-1 -- set Interface vhost-user-1
    type=dpdkvhostuser
```
611 This action creates a socket located at
612 `/usr/local/var/run/openvswitch/vhost-user-1`, which you must provide
613 to your VM on the QEMU command line. More instructions on this can be
614 found in the next section "DPDK vhost-user VM configuration"
615 - If you wish for the vhost-user sockets to be created in a sub-directory of
`/usr/local/var/run/openvswitch`, you may specify this directory in the
ovsdb like so:
619 `./utilities/ovs-vsctl --no-wait \
620 set Open_vSwitch . other_config:vhost-sock-dir=subdir`
622 DPDK vhost-user VM configuration:
623 ---------------------------------
624 Follow the steps below to attach vhost-user port(s) to a VM.
626 1. Configure sockets.
627 Pass the following parameters to QEMU to attach a vhost-user device:
```
-chardev socket,id=char1,path=/usr/local/var/run/openvswitch/vhost-user-1
-netdev type=vhost-user,id=mynet1,chardev=char1,vhostforce
-device virtio-net-pci,mac=00:00:00:00:00:01,netdev=mynet1
```
...where vhost-user-1 is the name of the vhost-user port added to the switch.
637 Repeat the above parameters for multiple devices, changing the
638 chardev path and id as necessary. Note that a separate and different
639 chardev path needs to be specified for each vhost-user device. For
example, if you have a second vhost-user port named 'vhost-user-2', you
641 append your QEMU command line with an additional set of parameters:
```
-chardev socket,id=char2,path=/usr/local/var/run/openvswitch/vhost-user-2
-netdev type=vhost-user,id=mynet2,chardev=char2,vhostforce
-device virtio-net-pci,mac=00:00:00:00:00:02,netdev=mynet2
```
649 2. Configure huge pages.
650 QEMU must allocate the VM's memory on hugetlbfs. vhost-user ports access
651 a virtio-net device's virtual rings and packet buffers mapping the VM's
652 physical memory on hugetlbfs. To enable vhost-user ports to map the VM's
memory into their process address space, pass the following parameters
to QEMU:

```
-object memory-backend-file,id=mem,size=4096M,mem-path=/dev/hugepages,
    share=on
-numa node,memdev=mem -mem-prealloc
```
662 3. Optional: Enable multiqueue support
663 The vhost-user interface must be configured in Open vSwitch with the
664 desired amount of queues with:
`ovs-vsctl set Interface vhost-user-2 options:n_rxq=<requested queues>`
670 QEMU needs to be configured as well.
The $q below should match the number of queues requested in OVS (if $q is
more, packets will not be received).
The $v is the number of vectors, which is '$q x 2 + 2' (e.g. for $q=2,
$v is 6).
```
-chardev socket,id=char2,path=/usr/local/var/run/openvswitch/vhost-user-2
-netdev type=vhost-user,id=mynet2,chardev=char2,vhostforce,queues=$q
-device virtio-net-pci,mac=00:00:00:00:00:02,netdev=mynet2,mq=on,vectors=$v
```
681 If one wishes to use multiple queues for an interface in the guest, the
682 driver in the guest operating system must be configured to do so. It is
683 recommended that the number of queues configured be equal to '$q'.
685 For example, this can be done for the Linux kernel virtio-net driver with:
`ethtool -L <DEV> combined <$q>`
691 A note on the command above:
`-L`: Changes the number of channels of the specified network device
695 `combined`: Changes the number of multi-purpose channels.
DPDK vhost-cuse:
----------------

The following sections describe the use of vhost-cuse 'dpdkvhostcuse' ports
with OVS.
703 DPDK vhost-cuse Prerequisites:
704 -------------------------
1. DPDK 16.04 with vhost support enabled as documented in the "Building and
Installing" section above.
708 As an additional step, you must enable vhost-cuse in DPDK by setting the
709 following additional flag in `config/common_base`:
711 `CONFIG_RTE_LIBRTE_VHOST_USER=n`
713 Following this, rebuild DPDK as per the instructions in the "Building and
714 Installing" section. Finally, rebuild OVS as per step 3 in the "Building
715 and Installing" section - OVS will detect that DPDK has vhost-cuse libraries
compiled and in turn will enable support for it in the switch and disable
vhost-user support.
2. Insert the Cuse module:

`modprobe cuse`
723 3. Build and insert the `eventfd_link` module:
```
cd $DPDK_DIR/lib/librte_vhost/eventfd_link/
make
insmod $DPDK_DIR/lib/librte_vhost/eventfd_link.ko
```
731 4. QEMU version v2.1.0+
733 vhost-cuse will work with QEMU v2.1.0 and above, however it is recommended to
734 use v2.2.0 if providing your VM with memory greater than 1GB due to potential
735 issues with memory mapping larger areas.
736 Note: QEMU v1.6.2 will also work, with slightly different command line parameters,
737 which are specified later in this document.
739 Adding DPDK vhost-cuse ports to the Switch:
740 --------------------------------------
742 Following the steps above to create a bridge, you can now add DPDK vhost-cuse
as a port to the vswitch. Unlike DPDK ring ports, DPDK vhost-cuse ports can have
arbitrary names.
746 - For vhost-cuse, the name of the port type is `dpdkvhostcuse`
```
ovs-vsctl add-port br0 vhost-cuse-1 -- set Interface vhost-cuse-1
    type=dpdkvhostcuse
```
753 When attaching vhost-cuse ports to QEMU, the name provided during the
754 add-port operation must match the ifname parameter on the QEMU command
755 line. More instructions on this can be found in the next section.
757 DPDK vhost-cuse VM configuration:
758 ---------------------------------
760 vhost-cuse ports use a Linux* character device to communicate with QEMU.
761 By default it is set to `/dev/vhost-net`. It is possible to reuse this
762 standard device for DPDK vhost, which makes setup a little simpler but it
763 is better practice to specify an alternative character device in order to
764 avoid any conflicts if kernel vhost is to be used in parallel.
766 1. This step is only needed if using an alternative character device.
768 The new character device filename must be specified in the ovsdb:
770 `./utilities/ovs-vsctl --no-wait set Open_vSwitch . \
771 other_config:cuse-dev-name=my-vhost-net`
In the example above, the character device to be used will be
`/dev/my-vhost-net`.
776 2. This step is only needed if reusing the standard character device. It will
conflict with the kernel vhost character device, so the user must first
remove it:
780 `rm -rf /dev/vhost-net`
782 3a. Configure virtio-net adaptors:
783 The following parameters must be passed to the QEMU binary:
```
-netdev tap,id=<id>,script=no,downscript=no,ifname=<name>,vhost=on
-device virtio-net-pci,netdev=net1,mac=<mac>
```
790 Repeat the above parameters for multiple devices.
The DPDK vhost library will negotiate its own features, so they
793 need not be passed in as command line params. Note that as offloads are
794 disabled this is the equivalent of setting:
796 `csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off`
3b. If using an alternative character device, it must also be explicitly
passed to QEMU using the `vhostfd` argument:

```
-netdev tap,id=<id>,script=no,downscript=no,ifname=<name>,vhost=on,
    vhostfd=<open_fd>
-device virtio-net-pci,netdev=net1,mac=<mac>
```
807 The open file descriptor must be passed to QEMU running as a child
808 process. This could be done with a simple python script.
```
import os, subprocess

fd = os.open("/dev/usvhost", os.O_RDWR)
subprocess.call("qemu-system-x86_64 .... -netdev tap,id=vhostnet0,"
                "vhost=on,vhostfd=" + str(fd) + " ...", shell=True)
```
816 Alternatively the `qemu-wrap.py` script can be used to automate the
817 requirements specified above and can be used in conjunction with libvirt if
818 desired. See the "DPDK vhost VM configuration with QEMU wrapper" section
821 4. Configure huge pages:
822 QEMU must allocate the VM's memory on hugetlbfs. Vhost ports access a
823 virtio-net device's virtual rings and packet buffers mapping the VM's
824 physical memory on hugetlbfs. To enable vhost-ports to map the VM's
memory into their process address space, pass the following parameters
to QEMU:
828 `-object memory-backend-file,id=mem,size=4096M,mem-path=/dev/hugepages,
829 share=on -numa node,memdev=mem -mem-prealloc`
831 Note: For use with an earlier QEMU version such as v1.6.2, use the
832 following to configure hugepages instead:
834 `-mem-path /dev/hugepages -mem-prealloc`
836 DPDK vhost-cuse VM configuration with QEMU wrapper:
837 ---------------------------------------------------
838 The QEMU wrapper script automatically detects and calls QEMU with the
839 necessary parameters. It performs the following actions:
841 * Automatically detects the location of the hugetlbfs and inserts this
842 into the command line parameters.
* Automatically opens file descriptors for each virtio-net device and
inserts these into the command line parameters.
845 * Calls QEMU passing both the command line parameters passed to the
846 script itself and those it has auto-detected.
848 Before use, you **must** edit the configuration parameters section of the
849 script to point to the correct emulator location and set additional
850 settings. Of these settings, `emul_path` and `us_vhost_path` **must** be
851 set. All other settings are optional.
853 To use directly from the command line simply pass the wrapper some of the
854 QEMU parameters: it will configure the rest. For example:
```
qemu-wrap.py -cpu host -boot c -hda <disk image> -m 4096 -smp 4 \
    --enable-kvm -nographic -vnc none -net none -netdev tap,id=net1,\
script=no,downscript=no,ifname=if1,vhost=on -device virtio-net-pci,\
netdev=net1,mac=00:00:00:00:00:01
```
863 DPDK vhost-cuse VM configuration with libvirt:
864 ----------------------------------------------
866 If you are using libvirt, you must enable libvirt to access the character
device by adding it to the controllers cgroup for libvirtd using the
following steps:
870 1. In `/etc/libvirt/qemu.conf` add/edit the following lines:
```
1) clear_emulator_capabilities = 0
2) user = "root"
3) group = "root"
4) cgroup_device_acl = [
       "/dev/null", "/dev/full", "/dev/zero",
       "/dev/random", "/dev/urandom",
       "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
       "/dev/rtc", "/dev/hpet", "/dev/net/tun",
       "/dev/<my-vhost-device>",
       "/dev/hugepages"]
```
885 <my-vhost-device> refers to "vhost-net" if using the `/dev/vhost-net`
device. If you have specified a different name in the database
using the "other_config:cuse-dev-name" parameter, please specify that
filename instead.
890 2. Disable SELinux or set to permissive mode
892 3. Restart the libvirtd process
893 For example, on Fedora:
895 `systemctl restart libvirtd.service`
897 After successfully editing the configuration, you may launch your
898 vhost-enabled VM. The XML describing the VM can be configured like so
899 within the <qemu:commandline> section:
901 1. Set up shared hugepages:
```
<qemu:arg value='-object'/>
<qemu:arg value='memory-backend-file,id=mem,size=4096M,mem-path=/dev/hugepages,share=on'/>
<qemu:arg value='-numa'/>
<qemu:arg value='node,memdev=mem'/>
<qemu:arg value='-mem-prealloc'/>
```
911 2. Set up your tap devices:
```
<qemu:arg value='-netdev'/>
<qemu:arg value='type=tap,id=net1,script=no,downscript=no,ifname=vhost0,vhost=on'/>
<qemu:arg value='-device'/>
<qemu:arg value='virtio-net-pci,netdev=net1,mac=00:00:00:00:00:01'/>
```
920 Repeat for as many devices as are desired, modifying the id, ifname
921 and mac as necessary.
923 Again, if you are using an alternative character device (other than
924 `/dev/vhost-net`), please specify the file descriptor like so:
926 `<qemu:arg value='type=tap,id=net3,script=no,downscript=no,ifname=vhost0,vhost=on,vhostfd=<open_fd>'/>`
928 Where <open_fd> refers to the open file descriptor of the character device.
Instructions on how to retrieve the file descriptor can be found in the
"DPDK vhost-cuse VM configuration" section.
931 Alternatively, the process is automated with the qemu-wrap.py script,
932 detailed in the next section.
934 Now you may launch your VM using virt-manager, or like so:
936 `virsh create my_vhost_vm.xml`
938 DPDK vhost-cuse VM configuration with libvirt and QEMU wrapper:
939 ----------------------------------------------------------
To use the qemu-wrapper script in conjunction with libvirt, follow the
942 steps in the previous section before proceeding with the following steps:
1. Place `qemu-wrap.py` in libvirtd's binary search PATH ($PATH),
ideally in the same directory in which the QEMU binary is located.
2. Ensure that the script has the same owner/group and file permissions
as the QEMU binary.
950 3. Update the VM xml file using "virsh edit VM.xml"
1. Set the VM to use the launch script.
Set the emulator path contained in the `<emulator></emulator>` tags.
For example, replace:

`<emulator>/usr/bin/qemu-kvm</emulator>`

with:

`<emulator>/usr/bin/qemu-wrap.py</emulator>`
962 4. Edit the Configuration Parameters section of the script to point to
963 the correct emulator location and set any additional options. If you are
using an alternative character device name, please set "us_vhost_path" to the
965 location of that device. The script will automatically detect and insert
966 the correct "vhostfd" value in the QEMU command line arguments.
968 5. Use virt-manager to launch the VM
970 Running ovs-vswitchd with DPDK backend inside a VM
971 --------------------------------------------------
973 Please note that additional configuration is required if you want to run
974 ovs-vswitchd with DPDK backend inside a QEMU virtual machine. Ovs-vswitchd
975 creates separate DPDK TX queues for each CPU core available. This operation
976 fails inside QEMU virtual machine because, by default, VirtIO NIC provided
977 to the guest is configured to support only single TX queue and single RX
978 queue. To change this behavior, you need to turn on 'mq' (multiqueue)
979 property of all virtio-net-pci devices emulated by QEMU and used by DPDK.
980 You may do it manually (by changing QEMU command line) or, if you use Libvirt,
981 by adding the following string:
983 `<driver name='vhost' queues='N'/>`
985 to <interface> sections of all network devices used by DPDK. Parameter 'N'
986 determines how many queues can be used by the guest.
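For example, a minimal libvirt sketch of where this string goes, assuming a
standard virtio-net interface definition (the interface type, source and
queue count are illustrative):

```
<interface type='network'>
  <source network='default'/>
  <model type='virtio'/>
  <driver name='vhost' queues='4'/>
</interface>
```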
Restrictions:
-------------

- DPDK ports only work with a 1500 MTU; a few changes are needed in the DPDK
  lib to fix this issue.
- Currently DPDK ports do not make use of any offload functionality.
- DPDK-vHost support works with 1G huge pages.

ivshmem:

- If you run Open vSwitch with smaller page sizes (e.g. 2MB), you may be
997 unable to share any rings or mempools with a virtual machine.
998 This is because the current implementation of ivshmem works by sharing
999 a single 1GB huge page from the host operating system to any guest
1000 operating system through the Qemu ivshmem device. When using smaller
1001 page sizes, multiple pages may be required to hold the ring descriptors
1002 and buffer pools. The Qemu ivshmem device does not allow you to share
1003 multiple file descriptors to the guest operating system. However, if you
1004 want to share dpdkr rings with other processes on the host, you can do
1005 this with smaller page sizes.
1007 Platform and Network Interface:
1008 - By default with DPDK 16.04, a maximum of 64 TX queues can be used with an
1009 Intel XL710 Network Interface on a platform with more than 64 logical
1010 cores. If a user attempts to add an XL710 interface as a DPDK port type to
1011 a system as described above, an error will be reported that initialization
1012 failed for the 65th queue. OVS will then roll back to the previous
1013 successful queue initialization and use that value as the total number of
1014 TX queues available with queue locking. If a user wishes to use more than
1015 64 queues and avoid locking, then the
1016 `CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_PF` config parameter in DPDK must be
1017 increased to the desired number of queues. Both DPDK and OVS must be
1018 recompiled for this change to take effect.
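For example, a sketch of raising that limit (assuming the parameter lives in
`config/common_base` as with the vhost option above; 128 is an example value):

```
sed -i 's/\(CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_PF=\).*/\1128/' $DPDK_DIR/config/common_base
cd $DPDK_DIR && make install T=x86_64-native-linuxapp-gcc DESTDIR=install
```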
Bug Reporting:
--------------

Please report problems to bugs@openvswitch.org.
[INSTALL.userspace.md]: INSTALL.userspace.md
[INSTALL.md]: INSTALL.md
[DPDK Linux GSG]: http://www.dpdk.org/doc/guides/linux_gsg/build_dpdk.html#binding-and-unbinding-network-ports-to-from-the-igb-uioor-vfio-modules
[DPDK Docs]: http://dpdk.org/doc