virtio_net: add gro capability
author     Eric Dumazet <edumazet@google.com>
           Fri, 31 Jul 2015 16:25:17 +0000 (18:25 +0200)
committer  David S. Miller <davem@davemloft.net>
           Mon, 3 Aug 2015 21:22:53 +0000 (14:22 -0700)
Straightforward patch to add GRO processing to virtio_net.

Using napi_complete_done() allows more aggressive aggregation, which is
opted into by setting /sys/class/net/xxx/gro_flush_timeout.
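
For context, a simplified sketch of the deferral logic in
napi_complete_done() around this kernel version (paraphrased, not the
verbatim net/core/dev.c code and not part of this patch): a non-zero
gro_flush_timeout arms an hrtimer instead of flushing the GRO lists at
the end of every poll, so partially aggregated packets get more chances
to be merged.

void napi_complete_done(struct napi_struct *n, int work_done)
{
	if (n->gro_list) {
		unsigned long timeout = 0;

		/* Only defer the flush if this poll actually did work. */
		if (work_done)
			timeout = n->dev->gro_flush_timeout;

		if (timeout)
			/* Hold aggregated skbs; the hrtimer reschedules
			 * NAPI so the flush happens 'timeout' nsec later.
			 */
			hrtimer_start(&n->timer, ns_to_ktime(timeout),
				      HRTIMER_MODE_REL_PINNED);
		else
			napi_gro_flush(n, false);
	}
	/* ... NAPI state bookkeeping elided ... */
}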

Tested:

Setting /sys/class/net/xxx/gro_flush_timeout to 1000 nsec (a small
helper for applying such a setting is sketched below), Rick Jones
reported the following results.
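
A minimal userspace helper for writing the timeout (hypothetical, not
part of this patch; equivalent to echoing the value into the sysfs
file, e.g. echo 1000 > /sys/class/net/eth0/gro_flush_timeout):

#include <stdio.h>

int main(int argc, char **argv)
{
	char path[128];
	FILE *f;

	if (argc != 3) {
		fprintf(stderr, "usage: %s <ifname> <timeout_ns>\n",
			argv[0]);
		return 1;
	}
	/* Build the per-interface sysfs path for gro_flush_timeout. */
	snprintf(path, sizeof(path),
		 "/sys/class/net/%s/gro_flush_timeout", argv[1]);
	f = fopen(path, "w");
	if (!f) {
		perror(path);
		return 1;
	}
	fprintf(f, "%s\n", argv[2]);
	return fclose(f) ? 1 : 0;
}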

One VM of each kind on a pair of OpenStack compute nodes with E5-2650Lv3
CPUs and Intel 82599ES-based NICs, i.e. two "before" and two "after"
VMs. The OpenStack compute nodes were running OpenStack Kilo, with VxLAN
encapsulation used through OVS, so no GRO was coming up the host stack.
The compute nodes themselves ran a 3.14-based kernel.

Single-stream netperf; CPU utilizations, and thus service demands, are
based on intra-guest reported CPU.

Throughput in Mbit/s, bigger is better:

                        Min     Median  Average Max
4.2.0-rc3+              1364    1686    1678    1938
4.2.0-rc3+flush1k       1824    2269    2275    2647

Send Service Demand, smaller is better:

                        Min     Median  Average Max
4.2.0-rc3+              0.236   0.558   0.524   0.802
4.2.0-rc3+flush1k       0.176   0.503   0.471   0.738

Receive Service Demand, smaller is better:

                        Min     Median  Average Max
4.2.0-rc3+              1.906   2.188   2.191   2.531
4.2.0-rc3+flush1k       0.448   0.529   0.533   0.692

Signed-off-by: Eric Dumazet <edumazet@google.com>
Tested-by: Rick Jones <rick.jones2@hp.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 7fbca37a1adffe5d46d3603e9fd44d4dbd16d331..66f08f622dc6603026ceb7646878a6878b637d15 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -518,7 +518,7 @@ static void receive_buf(struct virtnet_info *vi, struct receive_queue *rq,
 
        skb_mark_napi_id(skb, &rq->napi);
 
-       netif_receive_skb(skb);
+       napi_gro_receive(&rq->napi, skb);
        return;
 
 frame_err:
@@ -756,7 +756,7 @@ static int virtnet_poll(struct napi_struct *napi, int budget)
        /* Out of packets? */
        if (received < budget) {
                r = virtqueue_enable_cb_prepare(rq->vq);
-               napi_complete(napi);
+               napi_complete_done(napi, received);
                if (unlikely(virtqueue_poll(rq->vq, r)) &&
                    napi_schedule_prep(napi)) {
                        virtqueue_disable_cb(rq->vq);