xen-netback: always fully coalesce guest Rx packets
authorDavid Vrabel <david.vrabel@citrix.com>
Tue, 20 Jan 2015 14:49:52 +0000 (14:49 +0000)
committerDavid S. Miller <davem@davemloft.net>
Sat, 24 Jan 2015 02:01:58 +0000 (18:01 -0800)
commit1650d5455bd2dc6b5ee134bd6fc1a3236c266b5b
treeade96080aac11eaf88a50f000a60c46ef633fb39
parentf4ac8292b09350868418983fc1b85a6c6e48a177

Always fully coalesce guest Rx packets into the minimum number of ring
slots.  Reducing the number of slots per packet has significant
performance benefits when receiving off-host traffic.

Results from XenServer's performance benchmarks:

                          Baseline    Full coalesce
 Interhost VM receive     7.2 Gb/s    11 Gb/s
 Interhost aggregate       24 Gb/s    24 Gb/s
 Intrahost single stream   14 Gb/s    14 Gb/s
 Intrahost aggregate       34 Gb/s    34 Gb/s

However, coalescing can increase the number of grant copy ops per
packet, which reduces backend (dom0) to VM throughput by ~10%
/unless/ grant copy has been optimized for adjacent ops with the same
source or destination (see "grant-table: defer releasing pages
acquired in a grant copy"[1], expected in Xen 4.6).

[1] http://lists.xen.org/archives/html/xen-devel/2015-01/msg01118.html

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
drivers/net/xen-netback/common.h
drivers/net/xen-netback/netback.c