From patchwork Tue Apr 13 05:47:47 2021
Date: Tue, 13 Apr 2021 01:47:47 -0400
From: "Michael S. Tsirkin"
To: linux-kernel@vger.kernel.org
Cc: Jakub Kicinski, Jason Wang, Wei Wang, David Miller, netdev@vger.kernel.org,
 Willem de Bruijn, virtualization@lists.linux-foundation.org
Subject: [PATCH RFC v2 1/4] virtio: fix up virtio_disable_cb
Message-ID: <20210413054733.36363-2-mst@redhat.com>
References: <20210413054733.36363-1-mst@redhat.com>
In-Reply-To: <20210413054733.36363-1-mst@redhat.com>

virtio_disable_cb is currently a nop for the split ring with event index.
This is because it used to be called only from a callback, when we know
the device won't trigger more events until we update the index. However,
now that we run with interrupts enabled a lot of the time, we also poll
without a callback, and that changes things: disabling callbacks will
help reduce the number of spurious interrupts. Further, when using event
index with a packed ring, and when called from a callback, we actually
do disable interrupts, which is unnecessary.

Fix both issues by tracking whether we got a callback. If we did,
disabling interrupts with event index can be a nop; if we did not,
disable interrupts for real.

Note: with a split ring there's no explicit "no interrupts" value. For
now we write a fixed value, so our chance of triggering an interrupt is
1/ring size. It's probably better to write something related to the last
used index there to reduce the chance even further. For now I'm keeping
it simple.

Signed-off-by: Michael S. Tsirkin
---
 drivers/virtio/virtio_ring.c | 26 +++++++++++++++++++++++++-
 1 file changed, 25 insertions(+), 1 deletion(-)

diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index 71e16b53e9c1..88f0b16b11b8 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -113,6 +113,9 @@ struct vring_virtqueue {
 	/* Last used index we've seen. */
 	u16 last_used_idx;
+	/* Hint for event idx: already triggered no need to disable. */
+	bool event_triggered;
+
 	union {
 		/* Available for split ring */
 		struct {
@@ -739,7 +742,10 @@ static void virtqueue_disable_cb_split(struct virtqueue *_vq)
 
 	if (!(vq->split.avail_flags_shadow & VRING_AVAIL_F_NO_INTERRUPT)) {
 		vq->split.avail_flags_shadow |= VRING_AVAIL_F_NO_INTERRUPT;
-		if (!vq->event)
+		if (vq->event)
+			/* TODO: this is a hack. Figure out a cleaner value to write. */
+			vring_used_event(&vq->split.vring) = 0x0;
+		else
 			vq->split.vring.avail->flags =
 				cpu_to_virtio16(_vq->vdev,
 						vq->split.avail_flags_shadow);
@@ -1605,6 +1611,7 @@ static struct virtqueue *vring_create_virtqueue_packed(
 	vq->weak_barriers = weak_barriers;
 	vq->broken = false;
 	vq->last_used_idx = 0;
+	vq->event_triggered = false;
 	vq->num_added = 0;
 	vq->packed_ring = true;
 	vq->use_dma_api = vring_use_dma_api(vdev);
@@ -1919,6 +1926,12 @@ void virtqueue_disable_cb(struct virtqueue *_vq)
 {
 	struct vring_virtqueue *vq = to_vvq(_vq);
 
+	/* If device triggered an event already it won't trigger one again:
+	 * no need to disable.
+	 */
+	if (vq->event_triggered)
+		return;
+
 	if (vq->packed_ring)
 		virtqueue_disable_cb_packed(_vq);
 	else
@@ -1942,6 +1955,9 @@ unsigned virtqueue_enable_cb_prepare(struct virtqueue *_vq)
 {
 	struct vring_virtqueue *vq = to_vvq(_vq);
 
+	if (vq->event_triggered)
+		vq->event_triggered = false;
+
 	return vq->packed_ring ? virtqueue_enable_cb_prepare_packed(_vq) :
 				 virtqueue_enable_cb_prepare_split(_vq);
 }
@@ -2005,6 +2021,9 @@ bool virtqueue_enable_cb_delayed(struct virtqueue *_vq)
 {
 	struct vring_virtqueue *vq = to_vvq(_vq);
 
+	if (vq->event_triggered)
+		vq->event_triggered = false;
+
 	return vq->packed_ring ? virtqueue_enable_cb_delayed_packed(_vq) :
 				 virtqueue_enable_cb_delayed_split(_vq);
 }
@@ -2044,6 +2063,10 @@ irqreturn_t vring_interrupt(int irq, void *_vq)
 	if (unlikely(vq->broken))
 		return IRQ_HANDLED;
 
+	/* Just a hint for performance: so it's ok that this can be racy! */
+	if (vq->event)
+		vq->event_triggered = true;
+
 	pr_debug("virtqueue callback for %p (%p)\n", vq, vq->vq.callback);
 	if (vq->vq.callback)
 		vq->vq.callback(&vq->vq);
@@ -2083,6 +2106,7 @@ struct virtqueue *__vring_new_virtqueue(unsigned int index,
 	vq->weak_barriers = weak_barriers;
 	vq->broken = false;
 	vq->last_used_idx = 0;
+	vq->event_triggered = false;
 	vq->num_added = 0;
 	vq->use_dma_api = vring_use_dma_api(vdev);
 #ifdef DEBUG

From patchwork Tue Apr 13 05:47:52 2021
Date: Tue, 13 Apr 2021 01:47:52 -0400
From: "Michael S. Tsirkin"
To: linux-kernel@vger.kernel.org
Cc: Jakub Kicinski, Jason Wang, Wei Wang, David Miller, netdev@vger.kernel.org,
 Willem de Bruijn, virtualization@lists.linux-foundation.org
Subject: [PATCH RFC v2 3/4] virtio_net: move tx vq operation under tx queue lock
Message-ID: <20210413054733.36363-4-mst@redhat.com>
References: <20210413054733.36363-1-mst@redhat.com>
In-Reply-To: <20210413054733.36363-1-mst@redhat.com>

It's unsafe to operate a vq from multiple threads. Unfortunately this is
exactly what we do when invoking clean tx poll from rx napi. As a fix,
move everything that deals with the vq under the tx lock.

Signed-off-by: Michael S. Tsirkin
---
 drivers/net/virtio_net.c | 22 +++++++++++++++++++++-
 1 file changed, 21 insertions(+), 1 deletion(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 16d5abed582c..460ccdbb840e 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -1505,6 +1505,8 @@ static int virtnet_poll_tx(struct napi_struct *napi, int budget)
 	struct virtnet_info *vi = sq->vq->vdev->priv;
 	unsigned int index = vq2txq(sq->vq);
 	struct netdev_queue *txq;
+	int opaque;
+	bool done;
 
 	if (unlikely(is_xdp_raw_buffer_queue(vi, index))) {
 		/* We don't need to enable cb for XDP */
@@ -1514,10 +1516,28 @@ static int virtnet_poll_tx(struct napi_struct *napi, int budget)
 
 	txq = netdev_get_tx_queue(vi->dev, index);
 	__netif_tx_lock(txq, raw_smp_processor_id());
+	virtqueue_disable_cb(sq->vq);
 	free_old_xmit_skbs(sq, true);
+
+	opaque = virtqueue_enable_cb_prepare(sq->vq);
+
+	done = napi_complete_done(napi, 0);
+
+	if (!done)
+		virtqueue_disable_cb(sq->vq);
+
 	__netif_tx_unlock(txq);
 
-	virtqueue_napi_complete(napi, sq->vq, 0);
+	if (done) {
+		if (unlikely(virtqueue_poll(sq->vq, opaque))) {
+			if (napi_schedule_prep(napi)) {
+				__netif_tx_lock(txq, raw_smp_processor_id());
+				virtqueue_disable_cb(sq->vq);
+				__netif_tx_unlock(txq);
+				__napi_schedule(napi);
+			}
+		}
+	}
 
 	if (sq->vq->num_free >= 2 + MAX_SKB_FRAGS)
 		netif_tx_wake_queue(txq);
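
The event_triggered logic from patch 1/4 can be sketched as a tiny userspace
model. All `mock_*` names and fields below are hypothetical stand-ins for the
kernel structures, not the real driver API; the sketch only shows the state
machine: an interrupt sets the hint, disable_cb becomes a nop while the hint
is set, and enable_cb_prepare re-arms by clearing it.

```c
#include <stdbool.h>
#include <assert.h>

/* Hypothetical stand-in for struct vring_virtqueue (fields invented). */
struct mock_vq {
	bool event;           /* event index feature negotiated */
	bool event_triggered; /* hint: device already fired an event */
	int  disable_writes;  /* counts real writes to the ring state */
};

/* Interrupt handler: just a hint, so a racy write is acceptable. */
static void mock_vring_interrupt(struct mock_vq *vq)
{
	if (vq->event)
		vq->event_triggered = true;
}

/* disable_cb: a nop once the device has triggered, since it won't
 * trigger again until the event index is updated. */
static void mock_virtqueue_disable_cb(struct mock_vq *vq)
{
	if (vq->event_triggered)
		return;
	vq->disable_writes++; /* stands in for writing ring flags/event idx */
}

/* enable_cb_prepare: re-arm callbacks, clearing the hint. */
static void mock_virtqueue_enable_cb_prepare(struct mock_vq *vq)
{
	if (vq->event_triggered)
		vq->event_triggered = false;
}
```

Under these assumptions, two consecutive disable calls with an interrupt in
between touch the ring only once, which is the spurious-write saving the
commit message describes.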