From patchwork Fri Dec 8 00:52:32 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mina Almasry X-Patchwork-Id: 751878 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="XWkp78Jt" Received: from mail-yw1-x114a.google.com (mail-yw1-x114a.google.com [IPv6:2607:f8b0:4864:20::114a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id ECDBA1725 for ; Thu, 7 Dec 2023 16:52:57 -0800 (PST) Received: by mail-yw1-x114a.google.com with SMTP id 00721157ae682-5d8d3271ff5so18152147b3.2 for ; Thu, 07 Dec 2023 16:52:57 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1701996777; x=1702601577; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=hfCMsfp1IUoZ64EVVtr7gxYSAUWCUtGUA5R8s2RxKzs=; b=XWkp78Jt2QfvZg8KL7llBcvri5RjxOftzxLo0jx+7924tKzy++MPlqogZva2qqiyHY +bueHNbRUnR3T9ac90rBm7qI3/wDGbnJN/nP8rsklNS96u4ZWHZSFl+syYtRZ0VX77x3 4H+75nfKvExAlegJE/i8inw6F53Al++9/MEQy42vYTBYdujlyUcroYYZtih02yNAIzge 8n9USaAsYZMIKt5sMYTcgQ2a0AtrWPCAh6X7T6YCdxNsTCfWn+sGys4zJnbJwZRVveoz xBNWojBDl1810EOBaMMJh2y57X8lkYKiRr9DE5U1SXABZldjMU/S2aKk893QNxUF4YmV Pt0g== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1701996777; x=1702601577; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=hfCMsfp1IUoZ64EVVtr7gxYSAUWCUtGUA5R8s2RxKzs=; b=G+twMqKPtQ1FvK7uSK6ozdFSBWJ2E4upR4xGK9Pvf7qHKCysOQF25wtelwN5y/erKf np92T+MG2ahPFJrqA35ybTpbvNrHdqdo9cV6LUU9Wl5fINvTuxVQzheZcVhMGozWYgD2 08UDoNrASDm7AzPHq5bKqUNlH5Hpa5zvHlPLxnMs2O/0JPdofRf0plFdUq5yHWC/fe+k UWB11LBtDkeZvLPi+ie+Stw81cS/mODwkFr1OKjiwfBzyTHazAQDY5+2JCCzk18ILdZY 1JMyFU2FHeZ2Z83RCEyaVixB9a6P0Rdy+FD0CvrxpSEIZ2nNXyvJG7A9+6LYQNkZYK1V 6WBg== X-Gm-Message-State: AOJu0Yz50hSremc1DEnUCQ5QST03uMEBNAwFMJ59NE/Y/JVJrHzF1BjO rO6NnORCAk+MenvZtQIpYuVmiwS7c4nhvZVdGw== X-Google-Smtp-Source: AGHT+IEutjJnS2EH/ZQ+5x+IuH211lQgvDfZASfUKd2jgJtW5JenzTRObxK0Vp6bTrTGEljS7VIcMH18dMhy9n5y3A== X-Received: from almasrymina.svl.corp.google.com ([2620:15c:2c4:200:f1cf:c733:235b:9fff]) (user=almasrymina job=sendgmr) by 2002:a81:441f:0:b0:5d5:5183:ebdb with SMTP id r31-20020a81441f000000b005d55183ebdbmr57205ywa.10.1701996776836; Thu, 07 Dec 2023 16:52:56 -0800 (PST) Date: Thu, 7 Dec 2023 16:52:32 -0800 In-Reply-To: <20231208005250.2910004-1-almasrymina@google.com> Precedence: bulk X-Mailing-List: linux-kselftest@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20231208005250.2910004-1-almasrymina@google.com> X-Mailer: git-send-email 2.43.0.472.g3155946c3a-goog Message-ID: <20231208005250.2910004-2-almasrymina@google.com> Subject: [net-next v1 01/16] net: page_pool: factor out releasing DMA from releasing the page From: Mina Almasry To: Shailend Chand , netdev@vger.kernel.org, linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, linux-arch@vger.kernel.org, linux-kselftest@vger.kernel.org, bpf@vger.kernel.org, linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org Cc: Mina Almasry , "David S. 
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Jonathan Corbet , Jeroen de Borst , Praveen Kaligineedi , Jesper Dangaard Brouer , Ilias Apalodimas , Arnd Bergmann , David Ahern , Willem de Bruijn , Shuah Khan , Sumit Semwal , " =?utf-8?q?Christian_K=C3=B6nig?= " , Yunsheng Lin , Harshitha Ramamurthy , Shakeel Butt From: Jakub Kicinski Releasing the DMA mapping will be useful for other types of pages, so factor it out. Make sure compiler inlines it, to avoid any regressions. Signed-off-by: Jakub Kicinski Signed-off-by: Mina Almasry Reviewed-by: Shakeel Butt Reviewed-by: Ilias Apalodimas --- This is implemented by Jakub in his RFC: https://lore.kernel.org/netdev/f8270765-a27b-6ccf-33ea-cda097168d79@redhat.com/T/ I take no credit for the idea or implementation. This is a critical dependency of device memory TCP and thus I'm pulling it into this series to make it revewable and mergable. --- net/core/page_pool.c | 25 ++++++++++++++++--------- 1 file changed, 16 insertions(+), 9 deletions(-) diff --git a/net/core/page_pool.c b/net/core/page_pool.c index c2e7c9a6efbe..ca1b3b65c9b5 100644 --- a/net/core/page_pool.c +++ b/net/core/page_pool.c @@ -548,21 +548,16 @@ s32 page_pool_inflight(const struct page_pool *pool, bool strict) return inflight; } -/* Disconnects a page (from a page_pool). API users can have a need - * to disconnect a page (from a page_pool), to allow it to be used as - * a regular page (that will eventually be returned to the normal - * page-allocator via put_page). - */ -static void page_pool_return_page(struct page_pool *pool, struct page *page) +static __always_inline +void __page_pool_release_page_dma(struct page_pool *pool, struct page *page) { dma_addr_t dma; - int count; if (!(pool->p.flags & PP_FLAG_DMA_MAP)) /* Always account for inflight pages, even if we didn't * map them */ - goto skip_dma_unmap; + return; dma = page_pool_get_dma_addr(page); @@ -571,7 +566,19 @@ static void page_pool_return_page(struct page_pool *pool, struct page *page) PAGE_SIZE << pool->p.order, pool->p.dma_dir, DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_WEAK_ORDERING); page_pool_set_dma_addr(page, 0); -skip_dma_unmap: +} + +/* Disconnects a page (from a page_pool). API users can have a need + * to disconnect a page (from a page_pool), to allow it to be used as + * a regular page (that will eventually be returned to the normal + * page-allocator via put_page). 
+ */ +void page_pool_return_page(struct page_pool *pool, struct page *page) +{ + int count; + + __page_pool_release_page_dma(pool, page); + page_pool_clear_pp_info(page); /* This may be the last page returned, releasing the pool, so From patchwork Fri Dec 8 00:52:34 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mina Almasry X-Patchwork-Id: 751877 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="nKnwUW+O" Received: from mail-yw1-x1149.google.com (mail-yw1-x1149.google.com [IPv6:2607:f8b0:4864:20::1149]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id F1E0D1739 for ; Thu, 7 Dec 2023 16:53:01 -0800 (PST) Received: by mail-yw1-x1149.google.com with SMTP id 00721157ae682-5de8e375768so1622127b3.3 for ; Thu, 07 Dec 2023 16:53:01 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1701996781; x=1702601581; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=V1r8ZXlyktFk/hR6LvKbeClNyv61T/bWHclepM8Rp88=; b=nKnwUW+ODKImfMaS+HQrnvNEyZvj4JFMXrS8bZ2Yj2CEkAWfRhNkOcAbSNZryzCV40 IEP3lXnGr0DPXlGGLhlUv9I9HMPS4JPnRE7WCrvUzpjlrtvyTGHZ21wStkG0XrDx2NMW UxP6Ekq10yozd0+HCZ2rraY2EqsiLLQf7nZUnVP6fq27YMRIRrL3OzLJnG1+wUA+pcXO vd6SaMDoFYSUNiRcyaZnRvWOhdHS3Y3Lw/f3oaqITRv1LXh3YNJumLBbszFEM4o6mo34 MaFYf2i0lX6QKtI2a/1XI8KLfwQUm79xAeSyNffz3LXeA1Otv48DStwIYBlBzvrjFvl2 oJGw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1701996781; x=1702601581; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=V1r8ZXlyktFk/hR6LvKbeClNyv61T/bWHclepM8Rp88=; b=kOpGKjnCsufppTwO16+hXtmg5EBGNMaE/J94UvN9jHbea13VZHWPb3EdUbCw1BOFtB Jupai8R0EY7V4UC18b2W8zB0yn8yh3g/Q5T/JnAIfZLA1b3O8w/VxPEP7s6LyBJnJI0O QCa5Kne7fklfyjgiCPp/3QQhkt2uMACjvNNdOkBwhEgiMnefz01Ev+1WPSuwwlnjna9K NudgudyKoV4TWA2anZFTGbcKtjEme87a0uAv90rYrrxZWYl1vI+eUg047uiMOthw1sjD e413JEH4s6vjA3LPIgE29Q8zhZip1xjBxDRjCOcMCQYXy8P8Uw/KimMYv0mup4Nh8OtY ZfVQ== X-Gm-Message-State: AOJu0YwHaWfD/f8Q8V8woCNrIcaEoL8cVAh7EcR4fobxW+k9G3yxoLvW LqCuTYsb7iaIGh5m4NoXo4wXwYQVKyi1mnSoNA== X-Google-Smtp-Source: AGHT+IEj1VqfFnBiOaXdk3C7zQ8oyWSVZPqGPRw04iIGc/p6HTi+O3QytptJH8qJW9Cs/Vxkfvehw/gsxCl4ycls9Q== X-Received: from almasrymina.svl.corp.google.com ([2620:15c:2c4:200:f1cf:c733:235b:9fff]) (user=almasrymina job=sendgmr) by 2002:a25:cc4b:0:b0:db5:3bdf:ff55 with SMTP id l72-20020a25cc4b000000b00db53bdfff55mr39860ybf.6.1701996781080; Thu, 07 Dec 2023 16:53:01 -0800 (PST) Date: Thu, 7 Dec 2023 16:52:34 -0800 In-Reply-To: <20231208005250.2910004-1-almasrymina@google.com> Precedence: bulk X-Mailing-List: linux-kselftest@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20231208005250.2910004-1-almasrymina@google.com> X-Mailer: git-send-email 2.43.0.472.g3155946c3a-goog Message-ID: <20231208005250.2910004-4-almasrymina@google.com> Subject: [net-next v1 03/16] queue_api: define queue api From: Mina Almasry To: Shailend Chand , netdev@vger.kernel.org, linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, linux-arch@vger.kernel.org, linux-kselftest@vger.kernel.org, bpf@vger.kernel.org, linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org Cc: Mina Almasry , "David S. 
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Jonathan Corbet , Jeroen de Borst , Praveen Kaligineedi , Jesper Dangaard Brouer , Ilias Apalodimas , Arnd Bergmann , David Ahern , Willem de Bruijn , Shuah Khan , Sumit Semwal , " =?utf-8?q?Christian_K=C3=B6nig?= " , Yunsheng Lin , Harshitha Ramamurthy , Shakeel Butt This API enables the net stack to reset the queues used for devmem. Signed-off-by: Mina Almasry --- include/linux/netdevice.h | 24 ++++++++++++++++++++++++ 1 file changed, 24 insertions(+) diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h index 1b935ee341b4..316f7dee86ce 100644 --- a/include/linux/netdevice.h +++ b/include/linux/netdevice.h @@ -1432,6 +1432,20 @@ struct netdev_net_notifier { * struct kernel_hwtstamp_config *kernel_config, * struct netlink_ext_ack *extack); * Change the hardware timestamping parameters for NIC device. + * + * void *(*ndo_queue_mem_alloc)(struct net_device *dev, int idx); + * Allocate memory for an RX queue. The memory returned in the form of + * a void * can be passed to ndo_queue_mem_free() for freeing or to + * ndo_queue_start to create an RX queue with this memory. + * + * void (*ndo_queue_mem_free)(struct net_device *dev, void *); + * Free memory from an RX queue. + * + * int (*ndo_queue_start)(struct net_device *dev, int idx, void *); + * Start an RX queue at the specified index. + * + * int (*ndo_queue_stop)(struct net_device *dev, int idx, void **); + * Stop the RX queue at the specified index. */ struct net_device_ops { int (*ndo_init)(struct net_device *dev); @@ -1673,6 +1687,16 @@ struct net_device_ops { int (*ndo_hwtstamp_set)(struct net_device *dev, struct kernel_hwtstamp_config *kernel_config, struct netlink_ext_ack *extack); + void * (*ndo_queue_mem_alloc)(struct net_device *dev, + int idx); + void (*ndo_queue_mem_free)(struct net_device *dev, + void *queue_mem); + int (*ndo_queue_start)(struct net_device *dev, + int idx, + void *queue_mem); + int (*ndo_queue_stop)(struct net_device *dev, + int idx, + void **out_queue_mem); }; /** From patchwork Fri Dec 8 00:52:36 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mina Almasry X-Patchwork-Id: 751876 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="VeAzctvT" Received: from mail-yw1-x1149.google.com (mail-yw1-x1149.google.com [IPv6:2607:f8b0:4864:20::1149]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id F2A611981 for ; Thu, 7 Dec 2023 16:53:05 -0800 (PST) Received: by mail-yw1-x1149.google.com with SMTP id 00721157ae682-5d7e7e10231so9953147b3.1 for ; Thu, 07 Dec 2023 16:53:05 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1701996785; x=1702601585; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=aoV0tDbNZgYvivQmYBKAP9GCcy/xYSu5tcLB8R1UGG0=; b=VeAzctvT1kcS8w5gV/5/iAZvKWCS4Eh1NZboSist2gYXvzP+4bfJmEGNu2ljTmU8Tj lm83lQKNgFs1DAIIdJlTn3ETMug/o8n8k3a+BTug7cE5M7Vz8Tur0uQss6ZH2gaXNyDS 2cmS6jK/ovrzsVqicpl+EPe3FIFFlarY+RSAEMOhriuEy5vjjN4hrBmLzR4OYPOJY4vg 645Iq/RuFhqRFLMSqiTydphJicdLlH4je3gkmsB67fDI7fDhRnNLyzIQYNU2PIPHgp8r yrhHYqg97U1tlPMnk/owoAcSofETreXdj2MXh+rGAMOJ2BefNdiio+cU3RXmowHwDMm+ Pdjg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1701996785; x=1702601585; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to 
:date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=aoV0tDbNZgYvivQmYBKAP9GCcy/xYSu5tcLB8R1UGG0=; b=T1nINSABh9GTkzHuMDFLW4G10AvH1kFqLkEFJT41S8mycsaeucn+0TAnWvO95inKgC hPCeBA6HqmRbrXzXvsnxRAw8myYmPFmzAC9PZOjFIp0fpD7L30Fpt4vWdkeP0GeF/KlM RkKtzoL3ygGJnZRmi3iZQm3tQDUi35uIeFqGzXuclEGyTALV0EsXpdvIxTobGUN1+EFT qu3BOoXTVq2egZukDSsXj6VjmCER2G2C16vOWcycL3vJpkr455GgM1KcEWOY+pE95ubO Ew0ZP7ZU5LlAN+X2rhOwpvLMt1TxpBd7COxleJHrWhqhEIxzeee9Rji8ODP+urXSuDnp 0w9g== X-Gm-Message-State: AOJu0YwJFrjF3Odvcw9oqFcknYrCfshct1OElXl9xHC9W678QgIZE0Wi CMFA9GuHd8YlRCapPdQe6jMrSjkjJJSJAijPyg== X-Google-Smtp-Source: AGHT+IE5JC8UJbjHXiqOjZ3HxGGcNGRSVzlm9Sw4sjr3ZGx1DM1Mw9UG+uNTdTVqfEXvlPfcoYBx/QWrPWa8iYcuIw== X-Received: from almasrymina.svl.corp.google.com ([2620:15c:2c4:200:f1cf:c733:235b:9fff]) (user=almasrymina job=sendgmr) by 2002:a05:690c:c1c:b0:5d9:452e:d653 with SMTP id cl28-20020a05690c0c1c00b005d9452ed653mr2146ywb.5.1701996785141; Thu, 07 Dec 2023 16:53:05 -0800 (PST) Date: Thu, 7 Dec 2023 16:52:36 -0800 In-Reply-To: <20231208005250.2910004-1-almasrymina@google.com> Precedence: bulk X-Mailing-List: linux-kselftest@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20231208005250.2910004-1-almasrymina@google.com> X-Mailer: git-send-email 2.43.0.472.g3155946c3a-goog Message-ID: <20231208005250.2910004-6-almasrymina@google.com> Subject: [net-next v1 05/16] net: netdev netlink api to bind dma-buf to a net device From: Mina Almasry To: Shailend Chand , netdev@vger.kernel.org, linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, linux-arch@vger.kernel.org, linux-kselftest@vger.kernel.org, bpf@vger.kernel.org, linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org Cc: Mina Almasry , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Jonathan Corbet , Jeroen de Borst , Praveen Kaligineedi , Jesper Dangaard Brouer , Ilias Apalodimas , Arnd Bergmann , David Ahern , Willem de Bruijn , Shuah Khan , Sumit Semwal , " =?utf-8?q?Christian_K=C3=B6nig?= " , Yunsheng Lin , Harshitha Ramamurthy , Shakeel Butt , Stanislav Fomichev API takes the dma-buf fd as input, and binds it to the netdevice. The user can specify the rx queues to bind the dma-buf to. Suggested-by: Stanislav Fomichev Signed-off-by: Mina Almasry --- Changes in v1: - Add rx-queue-type to distingish rx from tx (Jakub) - Return dma-buf ID from netlink API (David, Stan) Changes in RFC-v3: - Support binding multiple rx rx-queues --- Documentation/netlink/specs/netdev.yaml | 52 +++++++++++++++++++++++++ include/uapi/linux/netdev.h | 19 +++++++++ net/core/netdev-genl-gen.c | 19 +++++++++ net/core/netdev-genl-gen.h | 2 + net/core/netdev-genl.c | 6 +++ tools/include/uapi/linux/netdev.h | 19 +++++++++ 6 files changed, 117 insertions(+) diff --git a/Documentation/netlink/specs/netdev.yaml b/Documentation/netlink/specs/netdev.yaml index f2c76d103bd8..df6a11d47006 100644 --- a/Documentation/netlink/specs/netdev.yaml +++ b/Documentation/netlink/specs/netdev.yaml @@ -260,6 +260,45 @@ attribute-sets: name: napi-id doc: ID of the NAPI instance which services this queue. type: u32 + - + name: queue-dmabuf + attributes: + - + name: type + doc: rx or tx queue + type: u8 + enum: queue-type + - + name: idx + doc: queue index + type: u32 + + - + name: bind-dmabuf + attributes: + - + name: ifindex + doc: netdev ifindex to bind the dma-buf to. + type: u32 + checks: + min: 1 + - + name: queues + doc: receive queues to bind the dma-buf to. 
+ type: nest + nested-attributes: queue-dmabuf + multi-attr: true + - + name: dmabuf-fd + doc: dmabuf file descriptor to bind. + type: u32 + - + name: dmabuf-id + doc: id of the dmabuf binding + type: u32 + checks: + min: 1 + operations: list: @@ -382,6 +421,19 @@ operations: attributes: - ifindex reply: *queue-get-op + - + name: bind-rx + doc: Bind dmabuf to netdev + attribute-set: bind-dmabuf + do: + request: + attributes: + - ifindex + - dmabuf-fd + - queues + reply: + attributes: + - dmabuf-id - name: napi-get doc: Get information about NAPI instances configured on the system. diff --git a/include/uapi/linux/netdev.h b/include/uapi/linux/netdev.h index 424c5e28f495..35d201dc4b05 100644 --- a/include/uapi/linux/netdev.h +++ b/include/uapi/linux/netdev.h @@ -129,6 +129,24 @@ enum { NETDEV_A_QUEUE_MAX = (__NETDEV_A_QUEUE_MAX - 1) }; +enum { + NETDEV_A_QUEUE_DMABUF_TYPE = 1, + NETDEV_A_QUEUE_DMABUF_IDX, + + __NETDEV_A_QUEUE_DMABUF_MAX, + NETDEV_A_QUEUE_DMABUF_MAX = (__NETDEV_A_QUEUE_DMABUF_MAX - 1) +}; + +enum { + NETDEV_A_BIND_DMABUF_IFINDEX = 1, + NETDEV_A_BIND_DMABUF_QUEUES, + NETDEV_A_BIND_DMABUF_DMABUF_FD, + NETDEV_A_BIND_DMABUF_DMABUF_ID, + + __NETDEV_A_BIND_DMABUF_MAX, + NETDEV_A_BIND_DMABUF_MAX = (__NETDEV_A_BIND_DMABUF_MAX - 1) +}; + enum { NETDEV_CMD_DEV_GET = 1, NETDEV_CMD_DEV_ADD_NTF, @@ -140,6 +158,7 @@ enum { NETDEV_CMD_PAGE_POOL_CHANGE_NTF, NETDEV_CMD_PAGE_POOL_STATS_GET, NETDEV_CMD_QUEUE_GET, + NETDEV_CMD_BIND_RX, NETDEV_CMD_NAPI_GET, __NETDEV_CMD_MAX, diff --git a/net/core/netdev-genl-gen.c b/net/core/netdev-genl-gen.c index be7f2ebd61b2..3384b1ae3f40 100644 --- a/net/core/netdev-genl-gen.c +++ b/net/core/netdev-genl-gen.c @@ -27,6 +27,11 @@ const struct nla_policy netdev_page_pool_info_nl_policy[NETDEV_A_PAGE_POOL_IFIND [NETDEV_A_PAGE_POOL_IFINDEX] = NLA_POLICY_FULL_RANGE(NLA_U32, &netdev_a_page_pool_ifindex_range), }; +const struct nla_policy netdev_queue_dmabuf_nl_policy[NETDEV_A_QUEUE_DMABUF_IDX + 1] = { + [NETDEV_A_QUEUE_DMABUF_TYPE] = NLA_POLICY_MAX(NLA_U8, 1), + [NETDEV_A_QUEUE_DMABUF_IDX] = { .type = NLA_U32, }, +}; + /* NETDEV_CMD_DEV_GET - do */ static const struct nla_policy netdev_dev_get_nl_policy[NETDEV_A_DEV_IFINDEX + 1] = { [NETDEV_A_DEV_IFINDEX] = NLA_POLICY_MIN(NLA_U32, 1), @@ -58,6 +63,13 @@ static const struct nla_policy netdev_queue_get_dump_nl_policy[NETDEV_A_QUEUE_IF [NETDEV_A_QUEUE_IFINDEX] = NLA_POLICY_MIN(NLA_U32, 1), }; +/* NETDEV_CMD_BIND_RX - do */ +static const struct nla_policy netdev_bind_rx_nl_policy[NETDEV_A_BIND_DMABUF_DMABUF_FD + 1] = { + [NETDEV_A_BIND_DMABUF_IFINDEX] = NLA_POLICY_MIN(NLA_U32, 1), + [NETDEV_A_BIND_DMABUF_DMABUF_FD] = { .type = NLA_U32, }, + [NETDEV_A_BIND_DMABUF_QUEUES] = NLA_POLICY_NESTED(netdev_queue_dmabuf_nl_policy), +}; + /* NETDEV_CMD_NAPI_GET - do */ static const struct nla_policy netdev_napi_get_do_nl_policy[NETDEV_A_NAPI_ID + 1] = { [NETDEV_A_NAPI_ID] = { .type = NLA_U32, }, @@ -124,6 +136,13 @@ static const struct genl_split_ops netdev_nl_ops[] = { .maxattr = NETDEV_A_QUEUE_IFINDEX, .flags = GENL_CMD_CAP_DUMP, }, + { + .cmd = NETDEV_CMD_BIND_RX, + .doit = netdev_nl_bind_rx_doit, + .policy = netdev_bind_rx_nl_policy, + .maxattr = NETDEV_A_BIND_DMABUF_DMABUF_FD, + .flags = GENL_CMD_CAP_DO, + }, { .cmd = NETDEV_CMD_NAPI_GET, .doit = netdev_nl_napi_get_doit, diff --git a/net/core/netdev-genl-gen.h b/net/core/netdev-genl-gen.h index a47f2bcbe4fa..a7ede514eccd 100644 --- a/net/core/netdev-genl-gen.h +++ b/net/core/netdev-genl-gen.h @@ -13,6 +13,7 @@ /* Common nested types */ extern const struct nla_policy 
netdev_page_pool_info_nl_policy[NETDEV_A_PAGE_POOL_IFINDEX + 1]; +extern const struct nla_policy netdev_queue_dmabuf_nl_policy[NETDEV_A_QUEUE_DMABUF_IDX + 1]; int netdev_nl_dev_get_doit(struct sk_buff *skb, struct genl_info *info); int netdev_nl_dev_get_dumpit(struct sk_buff *skb, struct netlink_callback *cb); @@ -26,6 +27,7 @@ int netdev_nl_page_pool_stats_get_dumpit(struct sk_buff *skb, int netdev_nl_queue_get_doit(struct sk_buff *skb, struct genl_info *info); int netdev_nl_queue_get_dumpit(struct sk_buff *skb, struct netlink_callback *cb); +int netdev_nl_bind_rx_doit(struct sk_buff *skb, struct genl_info *info); int netdev_nl_napi_get_doit(struct sk_buff *skb, struct genl_info *info); int netdev_nl_napi_get_dumpit(struct sk_buff *skb, struct netlink_callback *cb); diff --git a/net/core/netdev-genl.c b/net/core/netdev-genl.c index fd98936da3ae..0ed292d87ae0 100644 --- a/net/core/netdev-genl.c +++ b/net/core/netdev-genl.c @@ -469,6 +469,12 @@ int netdev_nl_queue_get_dumpit(struct sk_buff *skb, struct netlink_callback *cb) return skb->len; } +/* Stub */ +int netdev_nl_bind_rx_doit(struct sk_buff *skb, struct genl_info *info) +{ + return 0; +} + static int netdev_genl_netdevice_event(struct notifier_block *nb, unsigned long event, void *ptr) { diff --git a/tools/include/uapi/linux/netdev.h b/tools/include/uapi/linux/netdev.h index 424c5e28f495..35d201dc4b05 100644 --- a/tools/include/uapi/linux/netdev.h +++ b/tools/include/uapi/linux/netdev.h @@ -129,6 +129,24 @@ enum { NETDEV_A_QUEUE_MAX = (__NETDEV_A_QUEUE_MAX - 1) }; +enum { + NETDEV_A_QUEUE_DMABUF_TYPE = 1, + NETDEV_A_QUEUE_DMABUF_IDX, + + __NETDEV_A_QUEUE_DMABUF_MAX, + NETDEV_A_QUEUE_DMABUF_MAX = (__NETDEV_A_QUEUE_DMABUF_MAX - 1) +}; + +enum { + NETDEV_A_BIND_DMABUF_IFINDEX = 1, + NETDEV_A_BIND_DMABUF_QUEUES, + NETDEV_A_BIND_DMABUF_DMABUF_FD, + NETDEV_A_BIND_DMABUF_DMABUF_ID, + + __NETDEV_A_BIND_DMABUF_MAX, + NETDEV_A_BIND_DMABUF_MAX = (__NETDEV_A_BIND_DMABUF_MAX - 1) +}; + enum { NETDEV_CMD_DEV_GET = 1, NETDEV_CMD_DEV_ADD_NTF, @@ -140,6 +158,7 @@ enum { NETDEV_CMD_PAGE_POOL_CHANGE_NTF, NETDEV_CMD_PAGE_POOL_STATS_GET, NETDEV_CMD_QUEUE_GET, + NETDEV_CMD_BIND_RX, NETDEV_CMD_NAPI_GET, __NETDEV_CMD_MAX, From patchwork Fri Dec 8 00:52:37 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mina Almasry X-Patchwork-Id: 751875 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="q8qOBwMV" Received: from mail-yb1-xb49.google.com (mail-yb1-xb49.google.com [IPv6:2607:f8b0:4864:20::b49]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 4C8981986 for ; Thu, 7 Dec 2023 16:53:08 -0800 (PST) Received: by mail-yb1-xb49.google.com with SMTP id 3f1490d57ef6-daee86e2d70so2099613276.0 for ; Thu, 07 Dec 2023 16:53:08 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1701996787; x=1702601587; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=IJSREQV89hlugaW+sm/AZT6Rr4mGIq6Ap66sB5d+pEY=; b=q8qOBwMVDo+EhxalRraVbQIFxhroHw+/hJVeuCzPAxpTRkLGepzEHgOkjTc5r8+X/e dhZcdCs45zYqXAldsPXBTlt7GGMKqdGKVB8tjf7uoGE5jcezHR+goh9l8gM7wveJdeGx Gc3nnfU2CO7Uh8RsBYFuQYhxMVaiO/ObEcVuANEsywJ0Mf9Rb6ZrDaoCM9HmR5o5AnCl YtlYg2JXgA1korBkbGG0W3IVdZFRTmBdKy+NpKXBkiTOyamHGxrbnOcny3eM8qyKLfQ8 R7VzOJmyQX/FjXT1CpQUNwjomAAkW0m7Fboa46LtEHtECVTb6ZlPlDvjA2uYwi6bxVRW WjgA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; 
c=relaxed/relaxed; d=1e100.net; s=20230601; t=1701996787; x=1702601587; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=IJSREQV89hlugaW+sm/AZT6Rr4mGIq6Ap66sB5d+pEY=; b=CcjOmH60EY5aeyjeHdzU3ZFsbV5wjYoOvB9mY6k1keyI45uex0wqN5dsXEEsFng6/+ 2yt9lsJmiCL6XoY6DWnyxHWYdMZxbi/emGA0eQI889SaC9vg1eZLF2s9Hl1JmpGfprXe ijDa0qsIo0PLKr9U7GAuzcIoVm29VRjfOqqUMYYpl+jzlT6Hdf4Cqy9+xxc2r86QCTgx nEzZbMYobKymjqkGcMMX0rSzSeZYEhxgRJyBMS4/hjuAxPVUuu+EvGS5a7m5qHyQ0Gpb KuHK/iR4qiv/Is8SHD1nXuiDDHQCeLMcSrscjBrg0kFGK5S6yymemIhPSUB+UOoR6ikJ Nqig== X-Gm-Message-State: AOJu0YwiMRYTGay6By5XUNBHQePyKFP3gwWbWOW5fgOXgxylkISmSXM8 tVydt0wWg6nNObWXxGmqiNA1NdGR+DJPQ1AFtA== X-Google-Smtp-Source: AGHT+IE1O6D1lEfHIWYXNPTvaL4ziEYaae1mVfpQsRnwWu8BFA05JX6B+wAzkmf+4xuK8ojr7WtOk9hALNAjNl5NPw== X-Received: from almasrymina.svl.corp.google.com ([2620:15c:2c4:200:f1cf:c733:235b:9fff]) (user=almasrymina job=sendgmr) by 2002:a25:d84e:0:b0:db3:5b0a:f274 with SMTP id p75-20020a25d84e000000b00db35b0af274mr47808ybg.0.1701996787378; Thu, 07 Dec 2023 16:53:07 -0800 (PST) Date: Thu, 7 Dec 2023 16:52:37 -0800 In-Reply-To: <20231208005250.2910004-1-almasrymina@google.com> Precedence: bulk X-Mailing-List: linux-kselftest@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20231208005250.2910004-1-almasrymina@google.com> X-Mailer: git-send-email 2.43.0.472.g3155946c3a-goog Message-ID: <20231208005250.2910004-7-almasrymina@google.com> Subject: [net-next v1 06/16] netdev: support binding dma-buf to netdevice From: Mina Almasry To: Shailend Chand , netdev@vger.kernel.org, linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, linux-arch@vger.kernel.org, linux-kselftest@vger.kernel.org, bpf@vger.kernel.org, linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org Cc: Mina Almasry , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Jonathan Corbet , Jeroen de Borst , Praveen Kaligineedi , Jesper Dangaard Brouer , Ilias Apalodimas , Arnd Bergmann , David Ahern , Willem de Bruijn , Shuah Khan , Sumit Semwal , " =?utf-8?q?Christian_K=C3=B6nig?= " , Yunsheng Lin , Harshitha Ramamurthy , Shakeel Butt , Willem de Bruijn , Kaiyuan Zhang Add a netdev_dmabuf_binding struct which represents the dma-buf-to-netdevice binding. The netlink API will bind the dma-buf to rx queues on the netdevice. On the binding, the dma_buf_attach & dma_buf_map_attachment will occur. The entries in the sg_table from mapping will be inserted into a genpool to make it ready for allocation. The chunks in the genpool are owned by a dmabuf_chunk_owner struct which holds the dma-buf offset of the base of the chunk and the dma_addr of the chunk. Both are needed to use allocations that come from this chunk. We create a new type that represents an allocation from the genpool: page_pool_iov. We setup the page_pool_iov allocation size in the genpool to PAGE_SIZE for simplicity: to match the PAGE_SIZE normally allocated by the page pool and given to the drivers. The user can unbind the dmabuf from the netdevice by closing the netlink socket that established the binding. We do this so that the binding is automatically unbound even if the userspace process crashes. The binding and unbinding leaves an indicator in struct netdev_rx_queue that the given queue is bound, but the binding doesn't take effect until the driver actually reconfigures its queues, and re-initializes its page pool. 
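For illustration only (not part of this patch): a driver that implements the queue API from patch 03 might consult the binding roughly as in the minimal sketch below when it (re)creates a queue. The helper name and the page-pool wiring are assumptions made for this sketch; only the READ_ONCE() pairing with the core's WRITE_ONCE() is taken from the series.

static int hypothetical_driver_queue_start(struct net_device *dev, int rxq_idx)
{
	struct netdev_rx_queue *rxq = __netif_get_rx_queue(dev, rxq_idx);
	/* Pairs with the WRITE_ONCE() done under rtnl_lock() in
	 * netdev_bind_dmabuf_to_queue() / netdev_unbind_dmabuf().
	 */
	struct netdev_dmabuf_binding *binding = READ_ONCE(rxq->binding);

	if (binding) {
		/* Queue is devmem-bound: the driver is expected to set up its
		 * page pool so that allocations come from the dma-buf chunks
		 * (page_pool_iovs) rather than host pages. The details are
		 * driver specific and not shown here.
		 */
	}

	/* ... normal descriptor ring / page pool setup ... */
	return 0;
}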
The netdev_dmabuf_binding struct is refcounted, and releases its resources only when all the refs are released. Signed-off-by: Willem de Bruijn Signed-off-by: Kaiyuan Zhang Signed-off-by: Mina Almasry --- v1: - Introduce devmem.h instead of bloating netdevice.h (Jakub) - ENOTSUPP -> EOPNOTSUPP (checkpatch.pl I think) - Remove unneeded rcu protection for binding->list (rtnl protected) - Removed extraneous err_binding_put: label. - Removed dma_addr += len (Paolo). - Don't override err on netdev_bind_dmabuf_to_queue failure. - Rename devmem -> dmabuf (David). - Add id to dmabuf binding (David/Stan). - Fix missing xa_destroy bound_rq_list. - Use queue api to reset bound RX queues (Jakub). - Update netlink API for rx-queue type (tx/re) (Jakub). RFC v3: - Support multi rx-queue binding --- include/net/devmem.h | 96 ++++++++++++ include/net/netdev_rx_queue.h | 1 + include/net/page_pool/types.h | 27 ++++ net/core/dev.c | 276 ++++++++++++++++++++++++++++++++++ net/core/netdev-genl.c | 122 ++++++++++++++- 5 files changed, 520 insertions(+), 2 deletions(-) create mode 100644 include/net/devmem.h diff --git a/include/net/devmem.h b/include/net/devmem.h new file mode 100644 index 000000000000..29ff125f9815 --- /dev/null +++ b/include/net/devmem.h @@ -0,0 +1,96 @@ +/* SPDX-License-Identifier: GPL-2.0-or-later */ +/* + * Device memory TCP support + * + * Authors: Mina Almasry + * Willem de Bruijn + * Kaiyuan Zhang + * + */ +#ifndef _NET_DEVMEM_H +#define _NET_DEVMEM_H + +struct netdev_dmabuf_binding { + struct dma_buf *dmabuf; + struct dma_buf_attachment *attachment; + struct sg_table *sgt; + struct net_device *dev; + struct gen_pool *chunk_pool; + + /* The user holds a ref (via the netlink API) for as long as they want + * the binding to remain alive. Each page pool using this binding holds + * a ref to keep the binding alive. Each allocated page_pool_iov holds a + * ref. + * + * The binding undos itself and unmaps the underlying dmabuf once all + * those refs are dropped and the binding is no longer desired or in + * use. + */ + refcount_t ref; + + /* The portid of the user that owns this binding. Used for netlink to + * notify us of the user dropping the bind. + */ + u32 owner_nlportid; + + /* The list of bindings currently active. Used for netlink to notify us + * of the user dropping the bind. + */ + struct list_head list; + + /* rxq's this binding is active on. */ + struct xarray bound_rxq_list; + + /* ID of this binding. Globally unique to all bindings currently + * active. 
+ */ + u32 id; +}; + +#ifdef CONFIG_DMA_SHARED_BUFFER +void __netdev_dmabuf_binding_free(struct netdev_dmabuf_binding *binding); +int netdev_bind_dmabuf(struct net_device *dev, unsigned int dmabuf_fd, + struct netdev_dmabuf_binding **out); +void netdev_unbind_dmabuf(struct netdev_dmabuf_binding *binding); +int netdev_bind_dmabuf_to_queue(struct net_device *dev, u32 rxq_idx, + struct netdev_dmabuf_binding *binding); +#else +static inline void +__netdev_dmabuf_binding_free(struct netdev_dmabuf_binding *binding) +{ +} + +static inline int netdev_bind_dmabuf(struct net_device *dev, + unsigned int dmabuf_fd, + struct netdev_dmabuf_binding **out) +{ + return -EOPNOTSUPP; +} +static inline void netdev_unbind_dmabuf(struct netdev_dmabuf_binding *binding) +{ +} + +static inline int +netdev_bind_dmabuf_to_queue(struct net_device *dev, u32 rxq_idx, + struct netdev_dmabuf_binding *binding) +{ + return -EOPNOTSUPP; +} +#endif + +static inline void +netdev_dmabuf_binding_get(struct netdev_dmabuf_binding *binding) +{ + refcount_inc(&binding->ref); +} + +static inline void +netdev_dmabuf_binding_put(struct netdev_dmabuf_binding *binding) +{ + if (!refcount_dec_and_test(&binding->ref)) + return; + + __netdev_dmabuf_binding_free(binding); +} + +#endif /* _NET_DEVMEM_H */ diff --git a/include/net/netdev_rx_queue.h b/include/net/netdev_rx_queue.h index aa1716fb0e53..5dc35628633a 100644 --- a/include/net/netdev_rx_queue.h +++ b/include/net/netdev_rx_queue.h @@ -25,6 +25,7 @@ struct netdev_rx_queue { * Readers and writers must hold RTNL */ struct napi_struct *napi; + struct netdev_dmabuf_binding *binding; } ____cacheline_aligned_in_smp; /* diff --git a/include/net/page_pool/types.h b/include/net/page_pool/types.h index 0e9fa79a5ef1..44faee7a7b02 100644 --- a/include/net/page_pool/types.h +++ b/include/net/page_pool/types.h @@ -134,6 +134,33 @@ struct memory_provider_ops { bool (*release_page)(struct page_pool *pool, struct page *page); }; +/* page_pool_iov support */ + +/* Owner of the dma-buf chunks inserted into the gen pool. Each scatterlist + * entry from the dmabuf is inserted into the genpool as a chunk, and needs + * this owner struct to keep track of some metadata necessary to create + * allocations from this chunk. + */ +struct dmabuf_genpool_chunk_owner { + /* Offset into the dma-buf where this chunk starts. */ + unsigned long base_virtual; + + /* dma_addr of the start of the chunk. */ + dma_addr_t base_dma_addr; + + /* Array of page_pool_iovs for this chunk. 
*/ + struct page_pool_iov *ppiovs; + size_t num_ppiovs; + + struct netdev_dmabuf_binding *binding; +}; + +struct page_pool_iov { + struct dmabuf_genpool_chunk_owner *owner; + + refcount_t refcount; +}; + struct page_pool { struct page_pool_params_fast p; diff --git a/net/core/dev.c b/net/core/dev.c index 0432b04cf9b0..b8c8be5a912e 100644 --- a/net/core/dev.c +++ b/net/core/dev.c @@ -153,6 +153,10 @@ #include #include #include +#include +#include +#include +#include #include "dev.h" #include "net-sysfs.h" @@ -2041,6 +2045,278 @@ static int call_netdevice_notifiers_mtu(unsigned long val, return call_netdevice_notifiers_info(val, &info.info); } +/* Device memory support */ + +#ifdef CONFIG_DMA_SHARED_BUFFER +static void netdev_dmabuf_free_chunk_owner(struct gen_pool *genpool, + struct gen_pool_chunk *chunk, + void *not_used) +{ + struct dmabuf_genpool_chunk_owner *owner = chunk->owner; + + kvfree(owner->ppiovs); + kfree(owner); +} + +void __netdev_dmabuf_binding_free(struct netdev_dmabuf_binding *binding) +{ + size_t size, avail; + + gen_pool_for_each_chunk(binding->chunk_pool, + netdev_dmabuf_free_chunk_owner, NULL); + + size = gen_pool_size(binding->chunk_pool); + avail = gen_pool_avail(binding->chunk_pool); + + if (!WARN(size != avail, "can't destroy genpool. size=%lu, avail=%lu", + size, avail)) + gen_pool_destroy(binding->chunk_pool); + + dma_buf_unmap_attachment(binding->attachment, binding->sgt, + DMA_BIDIRECTIONAL); + dma_buf_detach(binding->dmabuf, binding->attachment); + dma_buf_put(binding->dmabuf); + xa_destroy(&binding->bound_rxq_list); + kfree(binding); +} + +static int netdev_restart_rx_queue(struct net_device *dev, int rxq_idx) +{ + void *new_mem; + void *old_mem; + int err; + + if (!dev || !dev->netdev_ops) + return -EINVAL; + + if (!dev->netdev_ops->ndo_queue_stop || + !dev->netdev_ops->ndo_queue_mem_free || + !dev->netdev_ops->ndo_queue_mem_alloc || + !dev->netdev_ops->ndo_queue_start) + return -EOPNOTSUPP; + + new_mem = dev->netdev_ops->ndo_queue_mem_alloc(dev, rxq_idx); + if (!new_mem) + return -ENOMEM; + + err = dev->netdev_ops->ndo_queue_stop(dev, rxq_idx, &old_mem); + if (err) + goto err_free_new_mem; + + err = dev->netdev_ops->ndo_queue_start(dev, rxq_idx, new_mem); + if (err) + goto err_start_queue; + + dev->netdev_ops->ndo_queue_mem_free(dev, old_mem); + + return 0; + +err_start_queue: + dev->netdev_ops->ndo_queue_start(dev, rxq_idx, old_mem); + +err_free_new_mem: + dev->netdev_ops->ndo_queue_mem_free(dev, new_mem); + + return err; +} + +/* Protected by rtnl_lock() */ +static DEFINE_XARRAY_FLAGS(netdev_dmabuf_bindings, XA_FLAGS_ALLOC1); + +void netdev_unbind_dmabuf(struct netdev_dmabuf_binding *binding) +{ + struct netdev_rx_queue *rxq; + unsigned long xa_idx; + unsigned int rxq_idx; + + if (!binding) + return; + + if (binding->list.next) + list_del(&binding->list); + + xa_for_each(&binding->bound_rxq_list, xa_idx, rxq) { + if (rxq->binding == binding) { + /* We hold the rtnl_lock while binding/unbinding + * dma-buf, so we can't race with another thread that + * is also modifying this value. However, the driver + * may read this config while it's creating its + * rx-queues. WRITE_ONCE() here to match the + * READ_ONCE() in the driver. 
+ */ + WRITE_ONCE(rxq->binding, NULL); + + rxq_idx = get_netdev_rx_queue_index(rxq); + + netdev_restart_rx_queue(binding->dev, rxq_idx); + } + } + + xa_erase(&netdev_dmabuf_bindings, binding->id); + + netdev_dmabuf_binding_put(binding); +} + +int netdev_bind_dmabuf_to_queue(struct net_device *dev, u32 rxq_idx, + struct netdev_dmabuf_binding *binding) +{ + struct netdev_rx_queue *rxq; + u32 xa_idx; + int err; + + rxq = __netif_get_rx_queue(dev, rxq_idx); + + if (rxq->binding) + return -EEXIST; + + err = xa_alloc(&binding->bound_rxq_list, &xa_idx, rxq, xa_limit_32b, + GFP_KERNEL); + if (err) + return err; + + /* We hold the rtnl_lock while binding/unbinding dma-buf, so we can't + * race with another thread that is also modifying this value. However, + * the driver may read this config while it's creating its * rx-queues. + * WRITE_ONCE() here to match the READ_ONCE() in the driver. + */ + WRITE_ONCE(rxq->binding, binding); + + err = netdev_restart_rx_queue(dev, rxq_idx); + if (err) + goto err_xa_erase; + + return 0; + +err_xa_erase: + xa_erase(&binding->bound_rxq_list, xa_idx); + WRITE_ONCE(rxq->binding, NULL); + + return err; +} + +int netdev_bind_dmabuf(struct net_device *dev, unsigned int dmabuf_fd, + struct netdev_dmabuf_binding **out) +{ + struct netdev_dmabuf_binding *binding; + static u32 id_alloc_next; + struct scatterlist *sg; + struct dma_buf *dmabuf; + unsigned int sg_idx, i; + unsigned long virtual; + int err; + + if (!capable(CAP_NET_ADMIN)) + return -EPERM; + + dmabuf = dma_buf_get(dmabuf_fd); + if (IS_ERR_OR_NULL(dmabuf)) + return -EBADFD; + + binding = kzalloc_node(sizeof(*binding), GFP_KERNEL, + dev_to_node(&dev->dev)); + if (!binding) { + err = -ENOMEM; + goto err_put_dmabuf; + } + binding->dev = dev; + + err = xa_alloc_cyclic(&netdev_dmabuf_bindings, &binding->id, binding, + xa_limit_32b, &id_alloc_next, GFP_KERNEL); + if (err < 0) + goto err_free_binding; + + xa_init_flags(&binding->bound_rxq_list, XA_FLAGS_ALLOC); + + refcount_set(&binding->ref, 1); + + binding->dmabuf = dmabuf; + + binding->attachment = dma_buf_attach(binding->dmabuf, dev->dev.parent); + if (IS_ERR(binding->attachment)) { + err = PTR_ERR(binding->attachment); + goto err_free_id; + } + + binding->sgt = dma_buf_map_attachment(binding->attachment, + DMA_BIDIRECTIONAL); + if (IS_ERR(binding->sgt)) { + err = PTR_ERR(binding->sgt); + goto err_detach; + } + + /* For simplicity we expect to make PAGE_SIZE allocations, but the + * binding can be much more flexible than that. We may be able to + * allocate MTU sized chunks here. Leave that for future work... 
+ */ + binding->chunk_pool = gen_pool_create(PAGE_SHIFT, + dev_to_node(&dev->dev)); + if (!binding->chunk_pool) { + err = -ENOMEM; + goto err_unmap; + } + + virtual = 0; + for_each_sgtable_dma_sg(binding->sgt, sg, sg_idx) { + dma_addr_t dma_addr = sg_dma_address(sg); + struct dmabuf_genpool_chunk_owner *owner; + size_t len = sg_dma_len(sg); + struct page_pool_iov *ppiov; + + owner = kzalloc_node(sizeof(*owner), GFP_KERNEL, + dev_to_node(&dev->dev)); + owner->base_virtual = virtual; + owner->base_dma_addr = dma_addr; + owner->num_ppiovs = len / PAGE_SIZE; + owner->binding = binding; + + err = gen_pool_add_owner(binding->chunk_pool, dma_addr, + dma_addr, len, dev_to_node(&dev->dev), + owner); + if (err) { + err = -EINVAL; + goto err_free_chunks; + } + + owner->ppiovs = kvmalloc_array(owner->num_ppiovs, + sizeof(*owner->ppiovs), + GFP_KERNEL); + if (!owner->ppiovs) { + err = -ENOMEM; + goto err_free_chunks; + } + + for (i = 0; i < owner->num_ppiovs; i++) { + ppiov = &owner->ppiovs[i]; + ppiov->owner = owner; + refcount_set(&ppiov->refcount, 1); + } + + virtual += len; + } + + *out = binding; + + return 0; + +err_free_chunks: + gen_pool_for_each_chunk(binding->chunk_pool, + netdev_dmabuf_free_chunk_owner, NULL); + gen_pool_destroy(binding->chunk_pool); +err_unmap: + dma_buf_unmap_attachment(binding->attachment, binding->sgt, + DMA_BIDIRECTIONAL); +err_detach: + dma_buf_detach(dmabuf, binding->attachment); +err_free_id: + xa_erase(&netdev_dmabuf_bindings, binding->id); +err_free_binding: + kfree(binding); +err_put_dmabuf: + dma_buf_put(dmabuf); + return err; +} +#endif + #ifdef CONFIG_NET_INGRESS static DEFINE_STATIC_KEY_FALSE(ingress_needed_key); diff --git a/net/core/netdev-genl.c b/net/core/netdev-genl.c index 0ed292d87ae0..b3323812d0b0 100644 --- a/net/core/netdev-genl.c +++ b/net/core/netdev-genl.c @@ -9,6 +9,7 @@ #include #include #include +#include #include "netdev-genl-gen.h" #include "dev.h" @@ -469,10 +470,94 @@ int netdev_nl_queue_get_dumpit(struct sk_buff *skb, struct netlink_callback *cb) return skb->len; } -/* Stub */ +static LIST_HEAD(netdev_rbinding_list); + int netdev_nl_bind_rx_doit(struct sk_buff *skb, struct genl_info *info) { - return 0; + struct nlattr *tb[ARRAY_SIZE(netdev_queue_dmabuf_nl_policy)]; + struct netdev_dmabuf_binding *out_binding; + u32 ifindex, dmabuf_fd, rxq_idx; + struct net_device *netdev; + struct sk_buff *rsp; + struct nlattr *attr; + int rem, err = 0; + void *hdr; + + if (GENL_REQ_ATTR_CHECK(info, NETDEV_A_DEV_IFINDEX) || + GENL_REQ_ATTR_CHECK(info, NETDEV_A_BIND_DMABUF_DMABUF_FD) || + GENL_REQ_ATTR_CHECK(info, NETDEV_A_BIND_DMABUF_QUEUES)) + return -EINVAL; + + ifindex = nla_get_u32(info->attrs[NETDEV_A_DEV_IFINDEX]); + dmabuf_fd = nla_get_u32(info->attrs[NETDEV_A_BIND_DMABUF_DMABUF_FD]); + + rtnl_lock(); + + netdev = __dev_get_by_index(genl_info_net(info), ifindex); + if (!netdev) { + err = -ENODEV; + goto err_unlock; + } + + err = netdev_bind_dmabuf(netdev, dmabuf_fd, &out_binding); + if (err) + goto err_unlock; + + nla_for_each_attr(attr, genlmsg_data(info->genlhdr), + genlmsg_len(info->genlhdr), rem) { + if (nla_type(attr) != NETDEV_A_BIND_DMABUF_QUEUES) + continue; + + err = nla_parse_nested(tb, + ARRAY_SIZE(netdev_queue_dmabuf_nl_policy) - 1, + attr, netdev_queue_dmabuf_nl_policy, + info->extack); + + if (err < 0) + goto err_unbind; + + rxq_idx = nla_get_u32(tb[NETDEV_A_QUEUE_DMABUF_IDX]); + + if (rxq_idx >= netdev->num_rx_queues) { + err = -ERANGE; + goto err_unbind; + } + + err = netdev_bind_dmabuf_to_queue(netdev, rxq_idx, out_binding); + if 
(err) + goto err_unbind; + } + + out_binding->owner_nlportid = info->snd_portid; + list_add(&out_binding->list, &netdev_rbinding_list); + + rsp = genlmsg_new(GENLMSG_DEFAULT_SIZE, GFP_KERNEL); + if (!rsp) { + err = -ENOMEM; + goto err_unbind; + } + + hdr = genlmsg_put(rsp, info->snd_portid, info->snd_seq, + &netdev_nl_family, 0, info->genlhdr->cmd); + if (!hdr) { + err = -EMSGSIZE; + goto err_genlmsg_free; + } + + nla_put_u32(rsp, NETDEV_A_BIND_DMABUF_DMABUF_ID, out_binding->id); + genlmsg_end(rsp, hdr); + + rtnl_unlock(); + + return genlmsg_reply(rsp, info); + +err_genlmsg_free: + nlmsg_free(rsp); +err_unbind: + netdev_unbind_dmabuf(out_binding); +err_unlock: + rtnl_unlock(); + return err; } static int netdev_genl_netdevice_event(struct notifier_block *nb, @@ -495,10 +580,37 @@ static int netdev_genl_netdevice_event(struct notifier_block *nb, return NOTIFY_OK; } +static int netdev_netlink_notify(struct notifier_block *nb, unsigned long state, + void *_notify) +{ + struct netlink_notify *notify = _notify; + struct netdev_dmabuf_binding *rbinding; + + if (state != NETLINK_URELEASE || notify->protocol != NETLINK_GENERIC) + return NOTIFY_DONE; + + rtnl_lock(); + + list_for_each_entry(rbinding, &netdev_rbinding_list, list) { + if (rbinding->owner_nlportid == notify->portid) { + netdev_unbind_dmabuf(rbinding); + break; + } + } + + rtnl_unlock(); + + return NOTIFY_OK; +} + static struct notifier_block netdev_genl_nb = { .notifier_call = netdev_genl_netdevice_event, }; +static struct notifier_block netdev_netlink_notifier = { + .notifier_call = netdev_netlink_notify, +}; + static int __init netdev_genl_init(void) { int err; @@ -511,8 +623,14 @@ static int __init netdev_genl_init(void) if (err) goto err_unreg_ntf; + err = netlink_register_notifier(&netdev_netlink_notifier); + if (err) + goto err_unreg_family; + return 0; +err_unreg_family: + genl_unregister_family(&netdev_nl_family); err_unreg_ntf: unregister_netdevice_notifier(&netdev_genl_nb); return err; From patchwork Fri Dec 8 00:52:41 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mina Almasry X-Patchwork-Id: 751874 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="tItI9ksJ" Received: from mail-yw1-x1149.google.com (mail-yw1-x1149.google.com [IPv6:2607:f8b0:4864:20::1149]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7BEC41730 for ; Thu, 7 Dec 2023 16:53:16 -0800 (PST) Received: by mail-yw1-x1149.google.com with SMTP id 00721157ae682-5d3911218b3so17599487b3.1 for ; Thu, 07 Dec 2023 16:53:16 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1701996795; x=1702601595; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=lPnDX1rYMAtCe+hQ8qdPonfWEnFnkE5EPZQcdSbkY2c=; b=tItI9ksJxKZZeSA4cIlz1awMH257FvktV3a5FQasmksOiVe6EDUTdc38DlVmY9AMUv 1yUbt++nhyY11JmQKVKMdqq/CKI60A8ETh2zdhXhofyLY1NvfvxI27vAS+LaBEfcsC0m dMAr4tkoEokr4qCBaTMo4v6nkfqhHxyI7j+wwuw3MRT2niwmzCRi6fGzm0jpDy+wGnvE PZvJaSLBlM5QaFpeBtM0oCVky9+C8Jg5N9L//cguRIlLlMuU524zOQUz7g3+6xyaACVK H4SLrLLPCGbX164C9j9lRBfetzidTop22PysOSNLg7487MDm0dggC/tegdH5rd8KCgry Sp0g== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1701996795; x=1702601595; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to 
:date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=lPnDX1rYMAtCe+hQ8qdPonfWEnFnkE5EPZQcdSbkY2c=; b=RZdtjtXl57UBDl610YFwHWMPMuCnhe5qNsB4lX0m/RBDT45XQSata1CS7uNKyxdYRI arP+EIyBfjdoQPg14AqKwX1FAzP/FxXV2zBAXmBXhc9s7sT+DR+iykpv1IMAvm3LADLh KpME7KLb478yyRelNBEmDsU/5lbSxVNJJhNDWmP7cv+GvjLDF21cV+UjsTECuOnEpd3l 7kBYPhAeHC9NXU/b1VOax+ENIOAlskEAQhu71IB9oJVB2BecAPCTphC+ERp1dH9Coh1i MTCEHRKzxjn7cmkuxQM2tWbzB+vzRy3NGLdNVoHDoE+yGtdCFxj/HpuA+FAyM/4MKu9a iqnA== X-Gm-Message-State: AOJu0YzhZ1AQ44YgZW7Jj+ZfriFrdUG9GJkwb7O1MRh72nJkRYW/Wo6m w0jXVfkz6YDVrTRCpNFg26WgxHX+K8wWGliWCw== X-Google-Smtp-Source: AGHT+IEKP0wg/owThAA3/mC6Vxrn1R6Pw24Bozp4eJkoXp2s6/m2Qgw2X1bYwO4di4FgizGGC3rWfu0KjgUDlNjaXA== X-Received: from almasrymina.svl.corp.google.com ([2620:15c:2c4:200:f1cf:c733:235b:9fff]) (user=almasrymina job=sendgmr) by 2002:a05:690c:4707:b0:5d4:ce2:e908 with SMTP id gz7-20020a05690c470700b005d40ce2e908mr53364ywb.3.1701996795620; Thu, 07 Dec 2023 16:53:15 -0800 (PST) Date: Thu, 7 Dec 2023 16:52:41 -0800 In-Reply-To: <20231208005250.2910004-1-almasrymina@google.com> Precedence: bulk X-Mailing-List: linux-kselftest@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20231208005250.2910004-1-almasrymina@google.com> X-Mailer: git-send-email 2.43.0.472.g3155946c3a-goog Message-ID: <20231208005250.2910004-11-almasrymina@google.com> Subject: [net-next v1 10/16] page_pool: don't release iov on elevanted refcount From: Mina Almasry To: Shailend Chand , netdev@vger.kernel.org, linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, linux-arch@vger.kernel.org, linux-kselftest@vger.kernel.org, bpf@vger.kernel.org, linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org Cc: Mina Almasry , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Jonathan Corbet , Jeroen de Borst , Praveen Kaligineedi , Jesper Dangaard Brouer , Ilias Apalodimas , Arnd Bergmann , David Ahern , Willem de Bruijn , Shuah Khan , Sumit Semwal , " =?utf-8?q?Christian_K=C3=B6nig?= " , Yunsheng Lin , Harshitha Ramamurthy , Shakeel Butt Currently the page_pool behavior is that a page is considered for recycling only once, the first time __page_pool_put_page() is called on it. This works because in practice the net stack only holds 1 reference to the skb frags. In that case, the page_pool recycling works as expected, as the skb frags will have 1 reference on the pages from the net stack when __page_pool_put_page() is called (if the driver is not holding extra references for recycling), and so the page will be recycled. However, this is not compatible with devmem TCP. For devmem TCP, the net stack holds 2 references for each frag, 1 reference is part of the SKB, and the second reference is for the user holding the frag until they call SO_DEVMEM_DONTNEED. This causes a bug in the page_pool recycling where, when the skb is freed, the reference count goes from 2->1, the page_pool sees a pending reference, releases the page, and so no devmem iovs get recycled. To fix this, don't release iovs on elevated refcount. 
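For illustration, the comment added in the hunk below implies a contract for drivers that do their own refcount-based recycling. A minimal sketch of that contract follows; the function name is hypothetical, while page_pool_put_full_page() is the existing page_pool API.

static void hypothetical_driver_release_rx_buf(struct page_pool *pool,
					       struct page *page)
{
	/* For devmem pages, don't rely on skb_mark_for_recycle(); hand the
	 * last reference straight back to the pool so it is considered for
	 * recycling there.
	 */
	page_pool_put_full_page(pool, page, false);
}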
Signed-off-by: Mina Almasry --- net/core/page_pool.c | 23 +++++++++++++++++++++++ 1 file changed, 23 insertions(+) diff --git a/net/core/page_pool.c b/net/core/page_pool.c index f0148d66371b..dc2a148f5b06 100644 --- a/net/core/page_pool.c +++ b/net/core/page_pool.c @@ -731,6 +731,29 @@ __page_pool_put_page(struct page_pool *pool, struct page *page, /* Page found as candidate for recycling */ return page; } + + if (page_is_page_pool_iov(page)) { + /* With devmem TCP and ppiovs, we can't release pages if the + * refcount is > 1. This is because the net stack holds + * 2 references: + * - 1 for the skb, and + * - 1 for the user until they call SO_DEVMEM_DONTNEED. + * Releasing pages for elevated refcounts completely disables + * page_pool recycling. Instead, simply don't release pages and + * the next call to napi_pp_put_page() via SO_DEVMEM_DONTNEED + * will consider the page again for recycling. As a result, + * devmem TCP incompatible with drivers doing refcnt based + * recycling unless those drivers: + * + * - don't mark skb_mark_for_recycle() + * - are sure to release the last reference with + * page_pool_put_full_page() to consider the page for + * page_pool recycling. + */ + page_pool_page_put_many(page, 1); + return NULL; + } + /* Fallback/non-XDP mode: API user have elevated refcnt. * * Many drivers split up the page into fragments, and some From patchwork Fri Dec 8 00:52:42 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mina Almasry X-Patchwork-Id: 751873 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="CARj6FzP" Received: from mail-yw1-x1149.google.com (mail-yw1-x1149.google.com [IPv6:2607:f8b0:4864:20::1149]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 672911987 for ; Thu, 7 Dec 2023 16:53:18 -0800 (PST) Received: by mail-yw1-x1149.google.com with SMTP id 00721157ae682-5d3a1e5f8d6so18285387b3.3 for ; Thu, 07 Dec 2023 16:53:18 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1701996797; x=1702601597; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=ffR5lRGBwRWz92voabMKfnUxOBX9ei9jc/R/TuPbdfY=; b=CARj6FzPCQ1sS6YCVmsJB2NuIjXU5lWVvtkUA/k/Ll427ms3YefKFgMH2SMQQf4uBm Y2ixiCLENlCGWmGmNRwE04NsaWWaoc4v0jHlbOrFkzgIyF6Pu+sLahVyJ5wZPPRlSi4L kLQRg9ZKrOYxNvgxMBRA/rmTJwWc4ML4fjSMjZRL50MXqbO50Ww/Kf4PZtBI4Q1sNKsS 7nI8xXGI/uXnpLQUYfGgpCCEL7LIrUJhauFjUzSAypMIPyixSiMFc2VlLXB8UKV9yYkX ZP3x0ifY9/bD9o8CH1cl3tnMc12Q65gmXLdDzgfdTONQe3SLK3C9tEsoS2nOSd2zRAxy g3Cw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1701996797; x=1702601597; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=ffR5lRGBwRWz92voabMKfnUxOBX9ei9jc/R/TuPbdfY=; b=FkCL4a/tDcMYP+pmsJv+WJOdTlPBSPyGYvrwHnbQOHFhs3dbbt1BL8Ui9Mfzwq1V1s IfSdgnMvoF4FEsbkUmmsk4qGFMRTRdp0Et12kE+6Cxk/k4XPiGYzqqIjIAdFNj6ULpXK qoxBba554Jv5bCa41kkLonI/FVR5p0WUKBV4cPHKehMU6z6kZ1QxRQD9mT9uf8HSHJIJ GxvRiCOLKfNESiYZ/+XQVzjabiWpdt8cCS4N6iSdzBK+IR5x9SY+wWvcNRGO75aKqdxe zrHvWX+mGo+Hj2oyIqnOYTAdZhmOqF4ypSXkBHaN4/gKkGpNENbVkEWXwJEOoIVBwX7O IQBA== X-Gm-Message-State: AOJu0YyRiAyMojjJ4ab5OnSmYXtG4WU4IZU/eDZmc1YGZT6+S9zg+96u c/4nIYh6dojFFE7Rj6X7OfI1s57yYQgv+11gNA== X-Google-Smtp-Source: 
AGHT+IH8QZVgyP79RbEsqH0j0RL1Mpin9iL/UFr2+o7Otvdv7pJ6u3sBW0Y9+P6SJHsoq2x+yqVIWLpUXWuYkWSTkw== X-Received: from almasrymina.svl.corp.google.com ([2620:15c:2c4:200:f1cf:c733:235b:9fff]) (user=almasrymina job=sendgmr) by 2002:a25:ccd5:0:b0:daf:6333:17c3 with SMTP id l204-20020a25ccd5000000b00daf633317c3mr42079ybf.1.1701996797568; Thu, 07 Dec 2023 16:53:17 -0800 (PST) Date: Thu, 7 Dec 2023 16:52:42 -0800 In-Reply-To: <20231208005250.2910004-1-almasrymina@google.com> Precedence: bulk X-Mailing-List: linux-kselftest@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20231208005250.2910004-1-almasrymina@google.com> X-Mailer: git-send-email 2.43.0.472.g3155946c3a-goog Message-ID: <20231208005250.2910004-12-almasrymina@google.com> Subject: [net-next v1 11/16] net: support non paged skb frags From: Mina Almasry To: Shailend Chand , netdev@vger.kernel.org, linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, linux-arch@vger.kernel.org, linux-kselftest@vger.kernel.org, bpf@vger.kernel.org, linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org Cc: Mina Almasry , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Jonathan Corbet , Jeroen de Borst , Praveen Kaligineedi , Jesper Dangaard Brouer , Ilias Apalodimas , Arnd Bergmann , David Ahern , Willem de Bruijn , Shuah Khan , Sumit Semwal , " =?utf-8?q?Christian_K=C3=B6nig?= " , Yunsheng Lin , Harshitha Ramamurthy , Shakeel Butt Make skb_frag_page() fail in the case where the frag is not backed by a page, and fix its relevant callers to handle this case. Correctly handle skb_frag refcounting in the page_pool_iovs case. Signed-off-by: Mina Almasry --- Changes in v1: - Fix illegal_highdma() (Yunsheng). - Rework napi_pp_put_page() slightly to reduce code churn (Willem). --- include/linux/skbuff.h | 42 +++++++++++++++++++++++++++++++++++------- net/core/dev.c | 3 ++- net/core/gro.c | 2 +- net/core/skbuff.c | 3 +++ net/ipv4/tcp.c | 3 +++ 5 files changed, 44 insertions(+), 9 deletions(-) diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h index b370eb8d70f7..851f448d2181 100644 --- a/include/linux/skbuff.h +++ b/include/linux/skbuff.h @@ -37,6 +37,8 @@ #endif #include #include +#include +#include /** * DOC: skb checksums @@ -3414,15 +3416,38 @@ static inline void skb_frag_off_copy(skb_frag_t *fragto, fragto->bv_offset = fragfrom->bv_offset; } +/* Returns true if the skb_frag contains a page_pool_iov. */ +static inline bool skb_frag_is_page_pool_iov(const skb_frag_t *frag) +{ + return page_is_page_pool_iov(frag->bv_page); +} + /** * skb_frag_page - retrieve the page referred to by a paged fragment * @frag: the paged fragment * - * Returns the &struct page associated with @frag. + * Returns the &struct page associated with @frag. Returns NULL if this frag + * has no associated page. */ static inline struct page *skb_frag_page(const skb_frag_t *frag) { - return frag->bv_page; + if (!page_is_page_pool_iov(frag->bv_page)) + return frag->bv_page; + + return NULL; +} + +/** + * skb_frag_page_pool_iov - retrieve the page_pool_iov referred to by fragment + * @frag: the fragment + * + * Returns the &struct page_pool_iov associated with @frag. Returns NULL if this + * frag has no associated page_pool_iov. 
+ */ +static inline struct page_pool_iov * +skb_frag_page_pool_iov(const skb_frag_t *frag) +{ + return page_to_page_pool_iov(frag->bv_page); } /** @@ -3433,7 +3458,7 @@ static inline struct page *skb_frag_page(const skb_frag_t *frag) */ static inline void __skb_frag_ref(skb_frag_t *frag) { - get_page(skb_frag_page(frag)); + page_pool_page_get_many(frag->bv_page, 1); } /** @@ -3453,13 +3478,13 @@ bool napi_pp_put_page(struct page *page, bool napi_safe); static inline void napi_frag_unref(skb_frag_t *frag, bool recycle, bool napi_safe) { - struct page *page = skb_frag_page(frag); - #ifdef CONFIG_PAGE_POOL - if (recycle && napi_pp_put_page(page, napi_safe)) + if (recycle && napi_pp_put_page(frag->bv_page, napi_safe)) return; + page_pool_page_put_many(frag->bv_page, 1); +#else + put_page(skb_frag_page(frag)); #endif - put_page(page); } /** @@ -3499,6 +3524,9 @@ static inline void skb_frag_unref(struct sk_buff *skb, int f) */ static inline void *skb_frag_address(const skb_frag_t *frag) { + if (!skb_frag_page(frag)) + return NULL; + return page_address(skb_frag_page(frag)) + skb_frag_off(frag); } diff --git a/net/core/dev.c b/net/core/dev.c index 30667e4c3b95..1ae9257df441 100644 --- a/net/core/dev.c +++ b/net/core/dev.c @@ -3709,8 +3709,9 @@ static int illegal_highdma(struct net_device *dev, struct sk_buff *skb) if (!(dev->features & NETIF_F_HIGHDMA)) { for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) { skb_frag_t *frag = &skb_shinfo(skb)->frags[i]; + struct page *page = skb_frag_page(frag); - if (PageHighMem(skb_frag_page(frag))) + if (page && PageHighMem(page)) return 1; } } diff --git a/net/core/gro.c b/net/core/gro.c index 0759277dc14e..42d7f6755f32 100644 --- a/net/core/gro.c +++ b/net/core/gro.c @@ -376,7 +376,7 @@ static inline void skb_gro_reset_offset(struct sk_buff *skb, u32 nhoff) NAPI_GRO_CB(skb)->frag0 = NULL; NAPI_GRO_CB(skb)->frag0_len = 0; - if (!skb_headlen(skb) && pinfo->nr_frags && + if (!skb_headlen(skb) && pinfo->nr_frags && skb_frag_page(frag0) && !PageHighMem(skb_frag_page(frag0)) && (!NET_IP_ALIGN || !((skb_frag_off(frag0) + nhoff) & 3))) { NAPI_GRO_CB(skb)->frag0 = skb_frag_address(frag0); diff --git a/net/core/skbuff.c b/net/core/skbuff.c index 07f802f1adf1..2ce64f57a0f6 100644 --- a/net/core/skbuff.c +++ b/net/core/skbuff.c @@ -2999,6 +2999,9 @@ static bool __skb_splice_bits(struct sk_buff *skb, struct pipe_inode_info *pipe, for (seg = 0; seg < skb_shinfo(skb)->nr_frags; seg++) { const skb_frag_t *f = &skb_shinfo(skb)->frags[seg]; + if (WARN_ON_ONCE(!skb_frag_page(f))) + return false; + if (__splice_segment(skb_frag_page(f), skb_frag_off(f), skb_frag_size(f), offset, len, spd, false, sk, pipe)) diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c index 70a1bafbefba..e22681c4bfac 100644 --- a/net/ipv4/tcp.c +++ b/net/ipv4/tcp.c @@ -2157,6 +2157,9 @@ static int tcp_zerocopy_receive(struct sock *sk, break; } page = skb_frag_page(frags); + if (WARN_ON_ONCE(!page)) + break; + prefetchw(page); pages[pages_to_map++] = page; length += PAGE_SIZE; From patchwork Fri Dec 8 00:52:43 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mina Almasry X-Patchwork-Id: 751872 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="gmqAQl+a" Received: from mail-yb1-xb49.google.com (mail-yb1-xb49.google.com [IPv6:2607:f8b0:4864:20::b49]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 5A34B171F for ; Thu, 7 Dec 2023 16:53:20 -0800 (PST) Received: by 
mail-yb1-xb49.google.com with SMTP id 3f1490d57ef6-db547d41413so1395977276.0 for ; Thu, 07 Dec 2023 16:53:20 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1701996799; x=1702601599; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=+q3Q0nSqUx5ks54zyV6WkT43wIv1KXasQh2KaO7Camc=; b=gmqAQl+aXWfBigYktGXkKzQ0WeVGdIx2AxomgDeKeLHHWTxCHEy7acwmd7fLUB9Tvw 6lsc27J39RpEpOQoU99ET3b+di0Oz+ghz9ASOJ3n6NRK5AXNwPTT6GiLUUTKzJb2dNOX sANS4MCFXy5wmjVaGVvDfK4bGFUwBQ0X8M3NiDFsRAANZNE6+hNiLVSpPL7iaVyYagbp SZ7/FPGhGcICK0j2zHEAgoQBxLtXpZRKCFGrEkX3VGHsuPOWRFEqEfVnRo9QaVA9YIwM JIJGVx5fgNVofvOD315vGHrmqJBWLLsB8BkHnO1d1uMp/+4YgBUvw/S8qHP6378KqUfQ AuFg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1701996799; x=1702601599; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=+q3Q0nSqUx5ks54zyV6WkT43wIv1KXasQh2KaO7Camc=; b=mPflO4xsPauya48NakFv56nJla51MMEW/UyZA6Ji7Qv99ojYJLW2okGZ2qEGMbP0kW V1p96tre0tdg5GO39XUVhwWrXv8QMmIcJSK0qn/TYEJQ+6L1/Y9eIsrzM8uinBLoewb/ oSNP3U2WxWpSsCwZ+Q0Bz+VW/D9TsqgsIUuD4EhaA/pmRoaoh3QUBgId/CV3HtKjch5c Dmt3IJkSC/a1xJuTdco/cJmjpK03y9P/wLTI6YEGWMTx0p2kakmQtB/NtNWLLafg/a1b u5AoAdtj8nWW4HhzuBkwYVQaOyu2OcyHx7aRJJBziZYn06t7FXVXOC58BFQ/EjVkdAfH NtbA== X-Gm-Message-State: AOJu0YxfoRFzF248HIzU9p44vvBmRpgLP2qFhom1vBomlu6tsEicP3GA Q/sKHqeuL1CmjmYeUlysJjaE35YaPxK1+crpMw== X-Google-Smtp-Source: AGHT+IEsLWSlO89pvMKWy39GrzK9f7qQ5gB+T3CnTNcZLTL3Ym09mzP2P3gjJnS5Yf/v+3KuJiN6VxqOjgGT2cM+Ww== X-Received: from almasrymina.svl.corp.google.com ([2620:15c:2c4:200:f1cf:c733:235b:9fff]) (user=almasrymina job=sendgmr) by 2002:a05:6902:14d:b0:db5:3aaf:5207 with SMTP id p13-20020a056902014d00b00db53aaf5207mr1545ybh.3.1701996799475; Thu, 07 Dec 2023 16:53:19 -0800 (PST) Date: Thu, 7 Dec 2023 16:52:43 -0800 In-Reply-To: <20231208005250.2910004-1-almasrymina@google.com> Precedence: bulk X-Mailing-List: linux-kselftest@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20231208005250.2910004-1-almasrymina@google.com> X-Mailer: git-send-email 2.43.0.472.g3155946c3a-goog Message-ID: <20231208005250.2910004-13-almasrymina@google.com> Subject: [net-next v1 12/16] net: add support for skbs with unreadable frags From: Mina Almasry To: Shailend Chand , netdev@vger.kernel.org, linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, linux-arch@vger.kernel.org, linux-kselftest@vger.kernel.org, bpf@vger.kernel.org, linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org Cc: Mina Almasry , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Jonathan Corbet , Jeroen de Borst , Praveen Kaligineedi , Jesper Dangaard Brouer , Ilias Apalodimas , Arnd Bergmann , David Ahern , Willem de Bruijn , Shuah Khan , Sumit Semwal , " =?utf-8?q?Christian_K=C3=B6nig?= " , Yunsheng Lin , Harshitha Ramamurthy , Shakeel Butt , Willem de Bruijn , Kaiyuan Zhang For device memory TCP, we expect the skb headers to be available in host memory for access, and we expect the skb frags to be in device memory and unaccessible to the host. We expect there to be no mixing and matching of device memory frags (unaccessible) with host memory frags (accessible) in the same skb. Add a skb->devmem flag which indicates whether the frags in this skb are device memory frags or not. 
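As a rough sketch (illustration only, not part of the patch), a stack path that needs the payload in host memory is expected to bail out when the frags are unreadable. skb_frags_readable() is the helper added by this patch; example_copy_payload() is a made-up stand-in for the real call sites converted below:

static int example_copy_payload(const struct sk_buff *skb, void *dst, int len)
{
	/* Device memory frags cannot be mapped or touched by the CPU, so
	 * report -EFAULT, as the call sites converted in this patch do.
	 */
	if (!skb_frags_readable(skb))
		return -EFAULT;

	/* All frags are host pages; skb_copy_bits() may walk them. */
	return skb_copy_bits(skb, 0, dst, len);
}
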
__skb_fill_page_desc() now checks frags added to skbs for page_pool_iovs, and marks the skb as skb->devmem accordingly. Add checks through the network stack to avoid accessing the frags of devmem skbs and avoid coalescing devmem skbs with non devmem skbs. Signed-off-by: Willem de Bruijn Signed-off-by: Kaiyuan Zhang Signed-off-by: Mina Almasry --- Changes in v1: - Rename devmem -> dmabuf (David). - Flip skb_frags_not_readable (Jakub). --- include/linux/skbuff.h | 14 +++++++- include/net/tcp.h | 5 +-- net/core/datagram.c | 6 ++++ net/core/gro.c | 5 ++- net/core/skbuff.c | 77 ++++++++++++++++++++++++++++++++++++------ net/ipv4/tcp.c | 3 ++ net/ipv4/tcp_input.c | 13 +++++-- net/ipv4/tcp_output.c | 5 ++- net/packet/af_packet.c | 4 +-- 9 files changed, 112 insertions(+), 20 deletions(-) diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h index 851f448d2181..61de32ab04ea 100644 --- a/include/linux/skbuff.h +++ b/include/linux/skbuff.h @@ -817,6 +817,8 @@ typedef unsigned char *sk_buff_data_t; * @csum_level: indicates the number of consecutive checksums found in * the packet minus one that have been verified as * CHECKSUM_UNNECESSARY (max 3) + * @dmabuf: indicates that all the fragments in this skb are backed by + * dmabuf. * @dst_pending_confirm: need to confirm neighbour * @decrypted: Decrypted SKB * @slow_gro: state present at GRO time, slower prepare step required @@ -1003,7 +1005,7 @@ struct sk_buff { #if IS_ENABLED(CONFIG_IP_SCTP) __u8 csum_not_inet:1; #endif - + __u8 dmabuf:1; #if defined(CONFIG_NET_SCHED) || defined(CONFIG_NET_XGRESS) __u16 tc_index; /* traffic control index */ #endif @@ -1778,6 +1780,12 @@ static inline void skb_zcopy_downgrade_managed(struct sk_buff *skb) __skb_zcopy_downgrade_managed(skb); } +/* Return true if frags in this skb are readable by the host. */ +static inline bool skb_frags_readable(const struct sk_buff *skb) +{ + return !skb->dmabuf; +} + static inline void skb_mark_not_on_list(struct sk_buff *skb) { skb->next = NULL; @@ -2480,6 +2488,10 @@ static inline void __skb_fill_page_desc(struct sk_buff *skb, int i, struct page *page, int off, int size) { __skb_fill_page_desc_noacc(skb_shinfo(skb), i, page, off, size); + if (page_is_page_pool_iov(page)) { + skb->dmabuf = true; + return; + } /* Propagate page pfmemalloc to the skb if we can. The problem is * that not all callers have unique ownership of the page but rely diff --git a/include/net/tcp.h b/include/net/tcp.h index 973555cb1d3f..0fbf198bdb55 100644 --- a/include/net/tcp.h +++ b/include/net/tcp.h @@ -1017,7 +1017,7 @@ static inline int tcp_skb_mss(const struct sk_buff *skb) static inline bool tcp_skb_can_collapse_to(const struct sk_buff *skb) { - return likely(!TCP_SKB_CB(skb)->eor); + return likely(!TCP_SKB_CB(skb)->eor && skb_frags_readable(skb)); } static inline bool tcp_skb_can_collapse(const struct sk_buff *to, @@ -1025,7 +1025,8 @@ static inline bool tcp_skb_can_collapse(const struct sk_buff *to, { return likely(tcp_skb_can_collapse_to(to) && mptcp_skb_can_collapse(to, from) && - skb_pure_zcopy_same(to, from)); + skb_pure_zcopy_same(to, from) && + skb_frags_readable(to) == skb_frags_readable(from)); } /* Events passed to congestion control interface */ diff --git a/net/core/datagram.c b/net/core/datagram.c index 103d46fa0eeb..f28472ddbaa4 100644 --- a/net/core/datagram.c +++ b/net/core/datagram.c @@ -426,6 +426,9 @@ static int __skb_datagram_iter(const struct sk_buff *skb, int offset, return 0; } + if (!skb_frags_readable(skb)) + goto short_copy; + /* Copy paged appendix. Hmm... 
why does this look so complicated? */ for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) { int end; @@ -638,6 +641,9 @@ int __zerocopy_sg_from_iter(struct msghdr *msg, struct sock *sk, if (msg && msg->msg_ubuf && msg->sg_from_iter) return msg->sg_from_iter(sk, skb, from, length); + if (!skb_frags_readable(skb)) + return -EFAULT; + frag = skb_shinfo(skb)->nr_frags; while (length && iov_iter_count(from)) { diff --git a/net/core/gro.c b/net/core/gro.c index 42d7f6755f32..26df48f1b355 100644 --- a/net/core/gro.c +++ b/net/core/gro.c @@ -390,6 +390,9 @@ static void gro_pull_from_frag0(struct sk_buff *skb, int grow) { struct skb_shared_info *pinfo = skb_shinfo(skb); + if (WARN_ON_ONCE(!skb_frags_readable(skb))) + return; + BUG_ON(skb->end - skb->tail < grow); memcpy(skb_tail_pointer(skb), NAPI_GRO_CB(skb)->frag0, grow); @@ -411,7 +414,7 @@ static void gro_try_pull_from_frag0(struct sk_buff *skb) { int grow = skb_gro_offset(skb) - skb_headlen(skb); - if (grow > 0) + if (grow > 0 && skb_frags_readable(skb)) gro_pull_from_frag0(skb, grow); } diff --git a/net/core/skbuff.c b/net/core/skbuff.c index 2ce64f57a0f6..50b1b7c2ef7b 100644 --- a/net/core/skbuff.c +++ b/net/core/skbuff.c @@ -1235,6 +1235,14 @@ void skb_dump(const char *level, const struct sk_buff *skb, bool full_pkt) struct page *p; u8 *vaddr; + if (skb_frag_is_page_pool_iov(frag)) { + printk("%sskb frag %d: not readable\n", level, i); + len -= frag->bv_len; + if (!len) + break; + continue; + } + skb_frag_foreach_page(frag, skb_frag_off(frag), skb_frag_size(frag), p, p_off, p_len, copied) { @@ -1812,6 +1820,9 @@ int skb_copy_ubufs(struct sk_buff *skb, gfp_t gfp_mask) if (skb_shared(skb) || skb_unclone(skb, gfp_mask)) return -EINVAL; + if (!skb_frags_readable(skb)) + return -EFAULT; + if (!num_frags) goto release; @@ -1982,8 +1993,12 @@ struct sk_buff *skb_copy(const struct sk_buff *skb, gfp_t gfp_mask) { int headerlen = skb_headroom(skb); unsigned int size = skb_end_offset(skb) + skb->data_len; - struct sk_buff *n = __alloc_skb(size, gfp_mask, - skb_alloc_rx_flag(skb), NUMA_NO_NODE); + struct sk_buff *n; + + if (!skb_frags_readable(skb)) + return NULL; + + n = __alloc_skb(size, gfp_mask, skb_alloc_rx_flag(skb), NUMA_NO_NODE); if (!n) return NULL; @@ -2309,14 +2324,16 @@ struct sk_buff *skb_copy_expand(const struct sk_buff *skb, int newheadroom, int newtailroom, gfp_t gfp_mask) { - /* - * Allocate the copy buffer - */ - struct sk_buff *n = __alloc_skb(newheadroom + skb->len + newtailroom, - gfp_mask, skb_alloc_rx_flag(skb), - NUMA_NO_NODE); int oldheadroom = skb_headroom(skb); int head_copy_len, head_copy_off; + struct sk_buff *n; + + if (!skb_frags_readable(skb)) + return NULL; + + /* Allocate the copy buffer */ + n = __alloc_skb(newheadroom + skb->len + newtailroom, gfp_mask, + skb_alloc_rx_flag(skb), NUMA_NO_NODE); if (!n) return NULL; @@ -2655,6 +2672,9 @@ void *__pskb_pull_tail(struct sk_buff *skb, int delta) */ int i, k, eat = (skb->tail + delta) - skb->end; + if (!skb_frags_readable(skb)) + return NULL; + if (eat > 0 || skb_cloned(skb)) { if (pskb_expand_head(skb, 0, eat > 0 ? 
eat + 128 : 0, GFP_ATOMIC)) @@ -2808,6 +2828,9 @@ int skb_copy_bits(const struct sk_buff *skb, int offset, void *to, int len) to += copy; } + if (!skb_frags_readable(skb)) + goto fault; + for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) { int end; skb_frag_t *f = &skb_shinfo(skb)->frags[i]; @@ -2996,6 +3019,9 @@ static bool __skb_splice_bits(struct sk_buff *skb, struct pipe_inode_info *pipe, /* * then map the fragments */ + if (!skb_frags_readable(skb)) + return false; + for (seg = 0; seg < skb_shinfo(skb)->nr_frags; seg++) { const skb_frag_t *f = &skb_shinfo(skb)->frags[seg]; @@ -3219,6 +3245,9 @@ int skb_store_bits(struct sk_buff *skb, int offset, const void *from, int len) from += copy; } + if (!skb_frags_readable(skb)) + goto fault; + for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) { skb_frag_t *frag = &skb_shinfo(skb)->frags[i]; int end; @@ -3298,6 +3327,9 @@ __wsum __skb_checksum(const struct sk_buff *skb, int offset, int len, pos = copy; } + if (!skb_frags_readable(skb)) + return 0; + for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) { int end; skb_frag_t *frag = &skb_shinfo(skb)->frags[i]; @@ -3398,6 +3430,9 @@ __wsum skb_copy_and_csum_bits(const struct sk_buff *skb, int offset, pos = copy; } + if (!skb_frags_readable(skb)) + return 0; + for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) { int end; @@ -3888,7 +3923,9 @@ static inline void skb_split_inside_header(struct sk_buff *skb, skb_shinfo(skb1)->frags[i] = skb_shinfo(skb)->frags[i]; skb_shinfo(skb1)->nr_frags = skb_shinfo(skb)->nr_frags; + skb1->dmabuf = skb->dmabuf; skb_shinfo(skb)->nr_frags = 0; + skb->dmabuf = 0; skb1->data_len = skb->data_len; skb1->len += skb1->data_len; skb->data_len = 0; @@ -3902,6 +3939,7 @@ static inline void skb_split_no_header(struct sk_buff *skb, { int i, k = 0; const int nfrags = skb_shinfo(skb)->nr_frags; + const int dmabuf = skb->dmabuf; skb_shinfo(skb)->nr_frags = 0; skb1->len = skb1->data_len = skb->len - len; @@ -3935,6 +3973,16 @@ static inline void skb_split_no_header(struct sk_buff *skb, pos += size; } skb_shinfo(skb1)->nr_frags = k; + + if (skb_shinfo(skb)->nr_frags) + skb->dmabuf = dmabuf; + else + skb->dmabuf = 0; + + if (skb_shinfo(skb1)->nr_frags) + skb1->dmabuf = dmabuf; + else + skb1->dmabuf = 0; } /** @@ -4170,6 +4218,9 @@ unsigned int skb_seq_read(unsigned int consumed, const u8 **data, return block_limit - abs_offset; } + if (!skb_frags_readable(st->cur_skb)) + return 0; + if (st->frag_idx == 0 && !st->frag_data) st->stepped_offset += skb_headlen(st->cur_skb); @@ -5784,7 +5835,10 @@ bool skb_try_coalesce(struct sk_buff *to, struct sk_buff *from, (from->pp_recycle && skb_cloned(from))) return false; - if (len <= skb_tailroom(to)) { + if (skb_frags_readable(from) != skb_frags_readable(to)) + return false; + + if (len <= skb_tailroom(to) && skb_frags_readable(from)) { if (len) BUG_ON(skb_copy_bits(from, 0, skb_put(to, len), len)); *delta_truesize = 0; @@ -5959,6 +6013,9 @@ int skb_ensure_writable(struct sk_buff *skb, unsigned int write_len) if (!pskb_may_pull(skb, write_len)) return -ENOMEM; + if (!skb_frags_readable(skb)) + return -EFAULT; + if (!skb_cloned(skb) || skb_clone_writable(skb, write_len)) return 0; @@ -6613,7 +6670,7 @@ void skb_condense(struct sk_buff *skb) { if (skb->data_len) { if (skb->data_len > skb->end - skb->tail || - skb_cloned(skb)) + skb_cloned(skb) || !skb_frags_readable(skb)) return; /* Nice, we can free page frag(s) right now */ diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c index e22681c4bfac..5a3135e93d3d 100644 --- a/net/ipv4/tcp.c +++ b/net/ipv4/tcp.c @@ 
-2140,6 +2140,9 @@ static int tcp_zerocopy_receive(struct sock *sk, skb = tcp_recv_skb(sk, seq, &offset); } + if (!skb_frags_readable(skb)) + break; + if (TCP_SKB_CB(skb)->has_rxtstamp) { tcp_update_recv_tstamps(skb, tss); zc->msg_flags |= TCP_CMSG_TS; diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c index 0548f0c12155..a47f98187656 100644 --- a/net/ipv4/tcp_input.c +++ b/net/ipv4/tcp_input.c @@ -5309,6 +5309,9 @@ tcp_collapse(struct sock *sk, struct sk_buff_head *list, struct rb_root *root, for (end_of_skbs = true; skb != NULL && skb != tail; skb = n) { n = tcp_skb_next(skb, list); + if (!skb_frags_readable(skb)) + goto skip_this; + /* No new bits? It is possible on ofo queue. */ if (!before(start, TCP_SKB_CB(skb)->end_seq)) { skb = tcp_collapse_one(sk, skb, list, root); @@ -5329,17 +5332,20 @@ tcp_collapse(struct sock *sk, struct sk_buff_head *list, struct rb_root *root, break; } - if (n && n != tail && mptcp_skb_can_collapse(skb, n) && + if (n && n != tail && skb_frags_readable(n) && + mptcp_skb_can_collapse(skb, n) && TCP_SKB_CB(skb)->end_seq != TCP_SKB_CB(n)->seq) { end_of_skbs = false; break; } +skip_this: /* Decided to skip this, advance start seq. */ start = TCP_SKB_CB(skb)->end_seq; } if (end_of_skbs || - (TCP_SKB_CB(skb)->tcp_flags & (TCPHDR_SYN | TCPHDR_FIN))) + (TCP_SKB_CB(skb)->tcp_flags & (TCPHDR_SYN | TCPHDR_FIN)) || + !skb_frags_readable(skb)) return; __skb_queue_head_init(&tmp); @@ -5383,7 +5389,8 @@ tcp_collapse(struct sock *sk, struct sk_buff_head *list, struct rb_root *root, if (!skb || skb == tail || !mptcp_skb_can_collapse(nskb, skb) || - (TCP_SKB_CB(skb)->tcp_flags & (TCPHDR_SYN | TCPHDR_FIN))) + (TCP_SKB_CB(skb)->tcp_flags & (TCPHDR_SYN | TCPHDR_FIN)) || + !skb_frags_readable(skb)) goto end; #ifdef CONFIG_TLS_DEVICE if (skb->decrypted != nskb->decrypted) diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c index eb13a55d660c..c8c0a1cbaca5 100644 --- a/net/ipv4/tcp_output.c +++ b/net/ipv4/tcp_output.c @@ -2343,7 +2343,8 @@ static bool tcp_can_coalesce_send_queue_head(struct sock *sk, int len) if (unlikely(TCP_SKB_CB(skb)->eor) || tcp_has_tx_tstamp(skb) || - !skb_pure_zcopy_same(skb, next)) + !skb_pure_zcopy_same(skb, next) || + skb_frags_readable(skb) != skb_frags_readable(next)) return false; len -= skb->len; @@ -3227,6 +3228,8 @@ static bool tcp_can_collapse(const struct sock *sk, const struct sk_buff *skb) return false; if (skb_cloned(skb)) return false; + if (!skb_frags_readable(skb)) + return false; /* Some heuristics for collapsing over SACK'd could be invented */ if (TCP_SKB_CB(skb)->sacked & TCPCB_SACKED_ACKED) return false; diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c index f92edba4c40f..33988106f237 100644 --- a/net/packet/af_packet.c +++ b/net/packet/af_packet.c @@ -2156,7 +2156,7 @@ static int packet_rcv(struct sk_buff *skb, struct net_device *dev, } } - snaplen = skb->len; + snaplen = skb_frags_readable(skb) ? skb->len : skb_headlen(skb); res = run_filter(skb, sk, snaplen); if (!res) @@ -2276,7 +2276,7 @@ static int tpacket_rcv(struct sk_buff *skb, struct net_device *dev, } } - snaplen = skb->len; + snaplen = skb_frags_readable(skb) ? 
skb->len : skb_headlen(skb); res = run_filter(skb, sk, snaplen); if (!res) From patchwork Fri Dec 8 00:52:47 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mina Almasry X-Patchwork-Id: 751871 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="Y7oNXzs9" Received: from mail-yw1-x114a.google.com (mail-yw1-x114a.google.com [IPv6:2607:f8b0:4864:20::114a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id BCDFF19AA for ; Thu, 7 Dec 2023 16:53:28 -0800 (PST) Received: by mail-yw1-x114a.google.com with SMTP id 00721157ae682-5d7e7e10231so9954787b3.1 for ; Thu, 07 Dec 2023 16:53:28 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1701996808; x=1702601608; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=8LAFahE4f6lW6U4R0EfMD5JqvdtDASlh+SiOPl8ncLM=; b=Y7oNXzs9yj0JOSbCMwO0P7k/0Gy/PW8g8Rdbsr9VG8Es5LP04F0RuLDkAFnvq/0hF6 zVuuJ1lsgTLvDykmNGZBKy8vDuYxwV7VtIu/2qOS3LQ0r7mOoEuc4jJySCMCOeaCw5ia qKhmoEraG3DdceIscMlsEI1rWDGpl5H9fj0eSfk6umEsmFVZMdphBnkCnBgFE2uorHHj Oz/Ogw7pO1Cv/m8u5MccDhdGyOR6my9NqA+VWeCUYeoWl9VFkFPwgfBVeXh5y6JP29W7 x3stEq1KceVAfMSEg3787QXnS4lET/iZt2t82Z0RP0YdBZGHMqO+k5ecVidDaNkQkAY4 PmNQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1701996808; x=1702601608; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=8LAFahE4f6lW6U4R0EfMD5JqvdtDASlh+SiOPl8ncLM=; b=bGt4nIHIYXDdQRUr0FYbkB0q+WY3q0+ApKCZqjUN26FwKvH50lLrA7ojS7/IoKYCGa NtBrvYI7jATlEz9QJvs87Z8ArTsqUt4Dnz6sPIYK7vV58LMqUi7wQFJf/GuyRT6vmwQI +jEPdmnR00cZulTqT5wvcmwsGtjlTehxMDBdZYFwOPXbV6rBnrBzzTY3pxNQ+f0f7bf2 vPiRzYMz8jF0NWhlLRosNYF/VoIf1UxCa6WtX3Dqr+qEg8+Yt50jSyvrNM3irGX3mR4/ XmNnIrd/bvEciVyUZkUKKnOAp0LZXkVcKeObft0/qLoVzv1ilDB06KnygEgZi7DF6aFM nhqA== X-Gm-Message-State: AOJu0Yy7Ta7wcEz75SIPfCozJMw9N3H3Ek6dfORdrWHhXXnH/wbqSQ2V O02ESQjn8ed7OE2ZN57zDLA5QUUzq4I5JGuCDg== X-Google-Smtp-Source: AGHT+IEM0Wm9DYVFZMhitDMwATBdyyiibL1KLXTVDq5QGJ9QOeVsrO6pcWSYZFnzLKl/9nrYJpl34dm3X4sSAh0HkQ== X-Received: from almasrymina.svl.corp.google.com ([2620:15c:2c4:200:f1cf:c733:235b:9fff]) (user=almasrymina job=sendgmr) by 2002:a05:6902:14d:b0:db5:3aaf:5207 with SMTP id p13-20020a056902014d00b00db53aaf5207mr1548ybh.3.1701996807958; Thu, 07 Dec 2023 16:53:27 -0800 (PST) Date: Thu, 7 Dec 2023 16:52:47 -0800 In-Reply-To: <20231208005250.2910004-1-almasrymina@google.com> Precedence: bulk X-Mailing-List: linux-kselftest@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20231208005250.2910004-1-almasrymina@google.com> X-Mailer: git-send-email 2.43.0.472.g3155946c3a-goog Message-ID: <20231208005250.2910004-17-almasrymina@google.com> Subject: [net-next v1 16/16] selftests: add ncdevmem, netcat for devmem TCP From: Mina Almasry To: Shailend Chand , netdev@vger.kernel.org, linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, linux-arch@vger.kernel.org, linux-kselftest@vger.kernel.org, bpf@vger.kernel.org, linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org Cc: Mina Almasry , "David S. 
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Jonathan Corbet , Jeroen de Borst , Praveen Kaligineedi , Jesper Dangaard Brouer , Ilias Apalodimas , Arnd Bergmann , David Ahern , Willem de Bruijn , Shuah Khan , Sumit Semwal , " =?utf-8?q?Christian_K=C3=B6nig?= " , Yunsheng Lin , Harshitha Ramamurthy , Shakeel Butt , Stanislav Fomichev ncdevmem is a devmem TCP netcat. It works similarly to netcat, but it sends and receives data using the devmem TCP APIs. It uses udmabuf as the dmabuf provider. It is compatible with a regular netcat running on a peer, or a ncdevmem running on a peer. In addition to normal netcat support, ncdevmem has a validation mode, where it sends a specific pattern and validates this pattern on the receiver side to ensure data integrity. Suggested-by: Stanislav Fomichev Signed-off-by: Mina Almasry --- Changes in v1: - Many more general cleanups (Willem). - Removed driver reset (Jakub). - Removed hardcoded if index (Paolo). RFC v2: - General cleanups (Willem). --- tools/testing/selftests/net/.gitignore | 1 + tools/testing/selftests/net/Makefile | 5 + tools/testing/selftests/net/ncdevmem.c | 489 +++++++++++++++++++++++++ 3 files changed, 495 insertions(+) create mode 100644 tools/testing/selftests/net/ncdevmem.c diff --git a/tools/testing/selftests/net/.gitignore b/tools/testing/selftests/net/.gitignore index 2f9d378edec3..b644dbae58b7 100644 --- a/tools/testing/selftests/net/.gitignore +++ b/tools/testing/selftests/net/.gitignore @@ -17,6 +17,7 @@ ipv6_flowlabel ipv6_flowlabel_mgr log.txt msg_zerocopy +ncdevmem nettest psock_fanout psock_snd diff --git a/tools/testing/selftests/net/Makefile b/tools/testing/selftests/net/Makefile index 14bd68da7466..d7a66563ffe7 100644 --- a/tools/testing/selftests/net/Makefile +++ b/tools/testing/selftests/net/Makefile @@ -5,6 +5,10 @@ CFLAGS = -Wall -Wl,--no-as-needed -O2 -g CFLAGS += -I../../../../usr/include/ $(KHDR_INCLUDES) # Additional include paths needed by kselftest.h CFLAGS += -I../ +CFLAGS += -I../../../net/ynl/generated/ +CFLAGS += -I../../../net/ynl/lib/ + +LDLIBS += ../../../net/ynl/lib/ynl.a ../../../net/ynl/generated/protos.a TEST_PROGS := run_netsocktests run_afpackettests test_bpf.sh netdevice.sh \ rtnetlink.sh xfrm_policy.sh test_blackhole_dev.sh @@ -92,6 +96,7 @@ TEST_PROGS += test_vxlan_nolocalbypass.sh TEST_PROGS += test_bridge_backup_port.sh TEST_PROGS += fdb_flush.sh TEST_PROGS += fq_band_pktlimit.sh +TEST_GEN_FILES += ncdevmem TEST_FILES := settings diff --git a/tools/testing/selftests/net/ncdevmem.c b/tools/testing/selftests/net/ncdevmem.c new file mode 100644 index 000000000000..7fbeee02b9a2 --- /dev/null +++ b/tools/testing/selftests/net/ncdevmem.c @@ -0,0 +1,489 @@ +// SPDX-License-Identifier: GPL-2.0 +#define _GNU_SOURCE +#define __EXPORTED_HEADERS__ + +#include +#include +#include +#include +#include +#include +#include +#define __iovec_defined +#include +#include +#include + +#include +#include +#include +#include +#include + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "netdev-user.h" +#include + +#define PAGE_SHIFT 12 +#define TEST_PREFIX "ncdevmem" +#define NUM_PAGES 16000 + +#ifndef MSG_SOCK_DEVMEM +#define MSG_SOCK_DEVMEM 0x2000000 +#endif + +/* + * tcpdevmem netcat. Works similarly to netcat but does device memory TCP + * instead of regular TCP. Uses udmabuf to mock a dmabuf provider. 
+ * + * Usage: + * + * On server: + * ncdevmem -s -c -f eth1 -d 3 -n 0000:06:00.0 -l \ + * -p 5201 -v 7 + * + * On client: + * yes $(echo -e \\x01\\x02\\x03\\x04\\x05\\x06) | \ + * tr \\n \\0 | \ + * head -c 5G | \ + * nc 5201 -p 5201 + * + * Note this is compatible with regular netcat. i.e. the sender or receiver can + * be replaced with regular netcat to test the RX or TX path in isolation. + */ + +static char *server_ip = "192.168.1.4"; +static char *client_ip = "192.168.1.2"; +static char *port = "5201"; +static size_t do_validation; +static int queue_num = 15; +static char *ifname = "eth1"; +static unsigned int ifindex = 3; +static char *nic_pci_addr = "0000:06:00.0"; +static unsigned int iterations; +static unsigned int dmabuf_id; + +void print_bytes(void *ptr, size_t size) +{ + unsigned char *p = ptr; + int i; + + for (i = 0; i < size; i++) + printf("%02hhX ", p[i]); + printf("\n"); +} + +void print_nonzero_bytes(void *ptr, size_t size) +{ + unsigned char *p = ptr; + unsigned int i; + + for (i = 0; i < size; i++) + putchar(p[i]); + printf("\n"); +} + +void validate_buffer(void *line, size_t size) +{ + static unsigned char seed = 1; + unsigned char *ptr = line; + int errors = 0; + size_t i; + + for (i = 0; i < size; i++) { + if (ptr[i] != seed) { + fprintf(stderr, + "Failed validation: expected=%u, actual=%u, index=%lu\n", + seed, ptr[i], i); + errors++; + if (errors > 20) + error(1, 0, "validation failed."); + } + seed++; + if (seed == do_validation) + seed = 0; + } + + fprintf(stdout, "Validated buffer\n"); +} + +static void reset_flow_steering(void) +{ + char command[256]; + + memset(command, 0, sizeof(command)); + snprintf(command, sizeof(command), "sudo ethtool -K %s ntuple off", + "eth1"); + system(command); + + memset(command, 0, sizeof(command)); + snprintf(command, sizeof(command), "sudo ethtool -K %s ntuple on", + "eth1"); + system(command); +} + +static void configure_flow_steering(void) +{ + char command[256]; + + memset(command, 0, sizeof(command)); + snprintf(command, sizeof(command), + "sudo ethtool -N %s flow-type tcp4 src-ip %s dst-ip %s src-port %s dst-port %s queue %d", + ifname, client_ip, server_ip, port, port, queue_num); + system(command); +} + +static int bind_rx_queue(unsigned int ifindex, unsigned int dmabuf_fd, + struct netdev_queue_dmabuf *queues, + unsigned int n_queue_index, struct ynl_sock **ys) +{ + struct netdev_bind_rx_req *req = NULL; + struct netdev_bind_rx_rsp *rsp = NULL; + struct ynl_error yerr; + + *ys = ynl_sock_create(&ynl_netdev_family, &yerr); + if (!*ys) { + fprintf(stderr, "YNL: %s\n", yerr.msg); + return -1; + } + + req = netdev_bind_rx_req_alloc(); + netdev_bind_rx_req_set_ifindex(req, ifindex); + netdev_bind_rx_req_set_dmabuf_fd(req, dmabuf_fd); + __netdev_bind_rx_req_set_queues(req, queues, n_queue_index); + + rsp = netdev_bind_rx(*ys, req); + if (!rsp) { + perror("netdev_bind_rx"); + goto err_close; + } + + if (!rsp->_present.dmabuf_id) { + perror("dmabuf_id not present"); + goto err_close; + } + + printf("got dmabuf id=%d\n", rsp->dmabuf_id); + dmabuf_id = rsp->dmabuf_id; + + netdev_bind_rx_req_free(req); + netdev_bind_rx_rsp_free(rsp); + + return 0; + +err_close: + fprintf(stderr, "YNL failed: %s\n", (*ys)->err.msg); + netdev_bind_rx_req_free(req); + ynl_sock_destroy(*ys); + return -1; +} + +static void create_udmabuf(int *devfd, int *memfd, int *buf, size_t dmabuf_size) +{ + struct udmabuf_create create; + int ret; + + *devfd = open("/dev/udmabuf", O_RDWR); + if (*devfd < 0) { + error(70, 0, + "%s: [skip,no-udmabuf: Unable to 
access DMA buffer device file]\n", + TEST_PREFIX); + } + + *memfd = memfd_create("udmabuf-test", MFD_ALLOW_SEALING); + if (*memfd < 0) + error(70, 0, "%s: [skip,no-memfd]\n", TEST_PREFIX); + + /* Required for udmabuf */ + ret = fcntl(*memfd, F_ADD_SEALS, F_SEAL_SHRINK); + if (ret < 0) + error(73, 0, "%s: [skip,fcntl-add-seals]\n", TEST_PREFIX); + + ret = ftruncate(*memfd, dmabuf_size); + if (ret == -1) + error(74, 0, "%s: [FAIL,memfd-truncate]\n", TEST_PREFIX); + + memset(&create, 0, sizeof(create)); + + create.memfd = *memfd; + create.offset = 0; + create.size = dmabuf_size; + *buf = ioctl(*devfd, UDMABUF_CREATE, &create); + if (*buf < 0) + error(75, 0, "%s: [FAIL, create udmabuf]\n", TEST_PREFIX); +} + +int do_server(void) +{ + char ctrl_data[sizeof(int) * 20000]; + struct netdev_queue_dmabuf *queues; + size_t non_page_aligned_frags = 0; + struct sockaddr_in client_addr; + struct sockaddr_in server_sin; + size_t page_aligned_frags = 0; + int devfd, memfd, buf, ret; + size_t total_received = 0; + socklen_t client_addr_len; + bool is_devmem = false; + char *buf_mem = NULL; + struct ynl_sock *ys; + size_t dmabuf_size; + char iobuf[819200]; + char buffer[256]; + int socket_fd; + int client_fd; + size_t i = 0; + int opt = 1; + + dmabuf_size = getpagesize() * NUM_PAGES; + + create_udmabuf(&devfd, &memfd, &buf, dmabuf_size); + + reset_flow_steering(); + configure_flow_steering(); + + sleep(1); + + queues = malloc(sizeof(*queues) * 1); + + queues[0]._present.type = 1; + queues[0]._present.idx = 1; + queues[0].type = NETDEV_QUEUE_TYPE_RX; + queues[0].idx = queue_num; + if (bind_rx_queue(ifindex, buf, queues, 1, &ys)) + error(1, 0, "Failed to bind\n"); + + buf_mem = mmap(NULL, dmabuf_size, PROT_READ | PROT_WRITE, MAP_SHARED, + buf, 0); + if (buf_mem == MAP_FAILED) + error(1, 0, "mmap()"); + + server_sin.sin_family = AF_INET; + server_sin.sin_port = htons(atoi(port)); + + ret = inet_pton(server_sin.sin_family, server_ip, &server_sin.sin_addr); + if (ret != 1) + error(79, 0, "%s: [FAIL, parse server_ip]\n", TEST_PREFIX); + + socket_fd = socket(server_sin.sin_family, SOCK_STREAM, 0); + if (socket_fd < 0) + error(errno, errno, "%s: [FAIL, create socket]\n", TEST_PREFIX); + + ret = setsockopt(socket_fd, SOL_SOCKET, SO_REUSEPORT, &opt, + sizeof(opt)); + if (ret) + error(errno, errno, "%s: [FAIL, set sock opt]\n", TEST_PREFIX); + + ret = setsockopt(socket_fd, SOL_SOCKET, SO_REUSEADDR, &opt, + sizeof(opt)); + if (ret) + error(errno, errno, "%s: [FAIL, set sock opt]\n", TEST_PREFIX); + + printf("binding to address %s:%d\n", server_ip, + ntohs(server_sin.sin_port)); + + ret = bind(socket_fd, &server_sin, sizeof(server_sin)); + if (ret) + error(errno, errno, "%s: [FAIL, bind]\n", TEST_PREFIX); + + ret = listen(socket_fd, 1); + if (ret) + error(errno, errno, "%s: [FAIL, listen]\n", TEST_PREFIX); + + client_addr_len = sizeof(client_addr); + + inet_ntop(server_sin.sin_family, &server_sin.sin_addr, buffer, + sizeof(buffer)); + printf("Waiting for connection on %s:%d\n", buffer, + ntohs(server_sin.sin_port)); + client_fd = accept(socket_fd, &client_addr, &client_addr_len); + + inet_ntop(client_addr.sin_family, &client_addr.sin_addr, buffer, + sizeof(buffer)); + printf("Got connection from %s:%d\n", buffer, + ntohs(client_addr.sin_port)); + + while (1) { + struct iovec iov = { .iov_base = iobuf, + .iov_len = sizeof(iobuf) }; + struct dmabuf_cmsg *dmabuf_cmsg = NULL; + struct dma_buf_sync sync = { 0 }; + struct cmsghdr *cm = NULL; + struct msghdr msg = { 0 }; + struct dmabuf_token token; + ssize_t ret; + + is_devmem =
false; + printf("\n\n"); + + msg.msg_iov = &iov; + msg.msg_iovlen = 1; + msg.msg_control = ctrl_data; + msg.msg_controllen = sizeof(ctrl_data); + ret = recvmsg(client_fd, &msg, MSG_SOCK_DEVMEM); + printf("recvmsg ret=%ld\n", ret); + if (ret < 0 && (errno == EAGAIN || errno == EWOULDBLOCK)) + continue; + if (ret < 0) { + perror("recvmsg"); + continue; + } + if (ret == 0) { + printf("client exited\n"); + goto cleanup; + } + + i++; + for (cm = CMSG_FIRSTHDR(&msg); cm; cm = CMSG_NXTHDR(&msg, cm)) { + if (cm->cmsg_level != SOL_SOCKET || + (cm->cmsg_type != SCM_DEVMEM_DMABUF && + cm->cmsg_type != SCM_DEVMEM_LINEAR)) { + fprintf(stdout, "skipping non-devmem cmsg\n"); + continue; + } + + dmabuf_cmsg = (struct dmabuf_cmsg *)CMSG_DATA(cm); + is_devmem = true; + + if (cm->cmsg_type == SCM_DEVMEM_LINEAR) { + /* TODO: process data copied from skb's linear + * buffer. + */ + fprintf(stdout, + "SCM_DEVMEM_LINEAR. dmabuf_cmsg->frag_size=%u\n", + dmabuf_cmsg->frag_size); + + continue; + } + + token.token_start = dmabuf_cmsg->frag_token; + token.token_count = 1; + + total_received += dmabuf_cmsg->frag_size; + printf("received frag_page=%llu, in_page_offset=%llu, frag_offset=%llu, frag_size=%u, token=%u, total_received=%lu, dmabuf_id=%u\n", + dmabuf_cmsg->frag_offset >> PAGE_SHIFT, + dmabuf_cmsg->frag_offset % getpagesize(), + dmabuf_cmsg->frag_offset, dmabuf_cmsg->frag_size, + dmabuf_cmsg->frag_token, total_received, + dmabuf_cmsg->dmabuf_id); + + if (dmabuf_cmsg->dmabuf_id != dmabuf_id) + error(1, 0, + "received on wrong dmabuf_id: flow steering error\n"); + + if (dmabuf_cmsg->frag_size % getpagesize()) + non_page_aligned_frags++; + else + page_aligned_frags++; + + sync.flags = DMA_BUF_SYNC_READ | DMA_BUF_SYNC_START; + ioctl(buf, DMA_BUF_IOCTL_SYNC, &sync); + + if (do_validation) + validate_buffer( + ((unsigned char *)buf_mem) + + dmabuf_cmsg->frag_offset, + dmabuf_cmsg->frag_size); + else + print_nonzero_bytes( + ((unsigned char *)buf_mem) + + dmabuf_cmsg->frag_offset, + dmabuf_cmsg->frag_size); + + sync.flags = DMA_BUF_SYNC_READ | DMA_BUF_SYNC_END; + ioctl(buf, DMA_BUF_IOCTL_SYNC, &sync); + + ret = setsockopt(client_fd, SOL_SOCKET, + SO_DEVMEM_DONTNEED, &token, + sizeof(token)); + if (ret != 1) + error(1, 0, + "SO_DEVMEM_DONTNEED not enough tokens"); + } + if (!is_devmem) + error(1, 0, "flow steering error\n"); + + printf("total_received=%lu\n", total_received); + } + + fprintf(stdout, "%s: ok\n", TEST_PREFIX); + + fprintf(stdout, "page_aligned_frags=%lu, non_page_aligned_frags=%lu\n", + page_aligned_frags, non_page_aligned_frags); + + fprintf(stdout, "page_aligned_frags=%lu, non_page_aligned_frags=%lu\n", + page_aligned_frags, non_page_aligned_frags); + +cleanup: + + munmap(buf_mem, dmabuf_size); + close(client_fd); + close(socket_fd); + close(buf); + close(memfd); + close(devfd); + ynl_sock_destroy(ys); + + return 0; +} + +int main(int argc, char *argv[]) +{ + int is_server = 0, opt; + + while ((opt = getopt(argc, argv, "ls:c:p:v:q:f:n:i:d:")) != -1) { + switch (opt) { + case 'l': + is_server = 1; + break; + case 's': + server_ip = optarg; + break; + case 'c': + client_ip = optarg; + break; + case 'p': + port = optarg; + break; + case 'v': + do_validation = atoll(optarg); + break; + case 'q': + queue_num = atoi(optarg); + break; + case 'f': + ifname = optarg; + break; + case 'd': + ifindex = atoi(optarg); + break; + case 'n': + nic_pci_addr = optarg; + break; + case 'i': + iterations = atoll(optarg); + break; + case '?': + printf("unknown option: %c\n", optopt); + break; + } + } + + for (; optind < 
argc; optind++) + printf("extra arguments: %s\n", argv[optind]); + + if (is_server) + return do_server(); + + return 0; +}
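
For reference, the shell pipeline in the usage comment above can be replaced by a small plain-TCP sender. The sketch below is not part of the patch; it only illustrates the byte pattern validate_buffer() expects when the server runs with -v 7 (0x01..0x06 followed by 0x00, repeating). The destination address, port and total length are assumptions matching the defaults in this file.

/* Hypothetical host-side sender, equivalent to the 'yes | tr | head | nc'
 * pipeline in the usage comment above. Build and run separately from
 * ncdevmem; not part of the selftest.
 */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
	struct sockaddr_in dst = { .sin_family = AF_INET, .sin_port = htons(5201) };
	unsigned char buf[4096];
	unsigned char seed = 1;	/* validate_buffer() starts its seed at 1 */
	size_t total, i;
	int fd;

	inet_pton(AF_INET, "192.168.1.4", &dst.sin_addr);	/* default server_ip */
	fd = socket(AF_INET, SOCK_STREAM, 0);
	if (fd < 0 || connect(fd, (struct sockaddr *)&dst, sizeof(dst)) < 0)
		return 1;

	for (total = 0; total < (1UL << 30); total += sizeof(buf)) {
		for (i = 0; i < sizeof(buf); i++) {
			buf[i] = seed++;
			if (seed == 7)	/* matches -v 7 on the receiver */
				seed = 0;
		}
		if (write(fd, buf, sizeof(buf)) < 0)
			break;
	}
	close(fd);
	return 0;
}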