From patchwork Wed Mar 31 06:12:17 2021
X-Patchwork-Submitter: "Loftus, Ciara"
X-Patchwork-Id: 413416
From: Ciara Loftus <ciara.loftus@intel.com>
To: netdev@vger.kernel.org, bpf@vger.kernel.org, magnus.karlsson@intel.com,
    bjorn@kernel.org, alexei.starovoitov@gmail.com
Cc: Ciara Loftus <ciara.loftus@intel.com>
Subject: [PATCH v4 bpf 2/3] libbpf: restore umem state after socket create failure
Date: Wed, 31 Mar 2021 06:12:17 +0000
Message-Id: <20210331061218.1647-3-ciara.loftus@intel.com>
In-Reply-To: <20210331061218.1647-1-ciara.loftus@intel.com>
References: <20210331061218.1647-1-ciara.loftus@intel.com>
X-Mailing-List: netdev@vger.kernel.org

If the call to xsk_socket__create fails, the user may want to retry the
socket creation using the same umem. Ensure that the umem is in the same
state on exit if the call fails by:
1. ensuring the umem _save pointers are unmodified.
2. not unmapping the set of umem rings that were set up with the umem
   during xsk_umem__create, since those mappings existed before the call
   to xsk_socket__create and should remain intact even in the event of
   failure.
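To illustrate the behaviour this is meant to enable, below is a minimal
retry sketch from the caller's point of view. It is not part of the patch;
the interface name "eth0", queue 0, the retry count and the helper name are
illustrative assumptions only.

/*
 * Hedged sketch (not part of the patch): retrying socket creation on the
 * same umem. Assumes the umem was created earlier with xsk_umem__create();
 * "eth0", queue 0 and the retry count are illustrative values only.
 */
#include <unistd.h>
#include <bpf/xsk.h>

static int create_xsk_with_retry(struct xsk_umem *umem,
				 struct xsk_ring_cons *rx,
				 struct xsk_ring_prod *tx,
				 struct xsk_socket **xsk)
{
	int i, err = -1;

	for (i = 0; i < 3; i++) {
		/*
		 * With this fix a failed attempt leaves the umem (its _save
		 * pointers and the fill/comp ring mappings) untouched, so a
		 * later attempt starts from the same state as the first one.
		 */
		err = xsk_socket__create(xsk, "eth0", 0, umem, rx, tx, NULL);
		if (!err)
			break;
		sleep(1); /* e.g. wait for the interface to come back up */
	}

	return err;
}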
Fixes: 2f6324a3937f ("libbpf: Support shared umems between queues and devices")
Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
---
 tools/lib/bpf/xsk.c | 41 +++++++++++++++++++++++------------------
 1 file changed, 23 insertions(+), 18 deletions(-)

diff --git a/tools/lib/bpf/xsk.c b/tools/lib/bpf/xsk.c
index 443b0cfb45e8..5098d9e3b55a 100644
--- a/tools/lib/bpf/xsk.c
+++ b/tools/lib/bpf/xsk.c
@@ -743,26 +743,30 @@ static struct xsk_ctx *xsk_get_ctx(struct xsk_umem *umem, int ifindex,
 	return NULL;
 }
 
-static void xsk_put_ctx(struct xsk_ctx *ctx)
+static void xsk_put_ctx(struct xsk_ctx *ctx, bool unmap)
 {
 	struct xsk_umem *umem = ctx->umem;
 	struct xdp_mmap_offsets off;
 	int err;
 
-	if (--ctx->refcount == 0) {
-		err = xsk_get_mmap_offsets(umem->fd, &off);
-		if (!err) {
-			munmap(ctx->fill->ring - off.fr.desc,
-			       off.fr.desc + umem->config.fill_size *
-			       sizeof(__u64));
-			munmap(ctx->comp->ring - off.cr.desc,
-			       off.cr.desc + umem->config.comp_size *
-			       sizeof(__u64));
-		}
+	if (--ctx->refcount)
+		return;
 
-		list_del(&ctx->list);
-		free(ctx);
-	}
+	if (!unmap)
+		goto out_free;
+
+	err = xsk_get_mmap_offsets(umem->fd, &off);
+	if (err)
+		goto out_free;
+
+	munmap(ctx->fill->ring - off.fr.desc, off.fr.desc + umem->config.fill_size *
+	       sizeof(__u64));
+	munmap(ctx->comp->ring - off.cr.desc, off.cr.desc + umem->config.comp_size *
+	       sizeof(__u64));
+
+out_free:
+	list_del(&ctx->list);
+	free(ctx);
 }
 
 static struct xsk_ctx *xsk_create_ctx(struct xsk_socket *xsk,
@@ -797,8 +801,6 @@ static struct xsk_ctx *xsk_create_ctx(struct xsk_socket *xsk,
 	memcpy(ctx->ifname, ifname, IFNAMSIZ - 1);
 	ctx->ifname[IFNAMSIZ - 1] = '\0';
 
-	umem->fill_save = NULL;
-	umem->comp_save = NULL;
 	ctx->fill = fill;
 	ctx->comp = comp;
 	list_add(&ctx->list, &umem->ctx_list);
@@ -854,6 +856,7 @@ int xsk_socket__create_shared(struct xsk_socket **xsk_ptr,
 	struct xsk_socket *xsk;
 	struct xsk_ctx *ctx;
 	int err, ifindex;
+	bool unmap = umem->fill_save != fill;
 
 	if (!umem || !xsk_ptr || !(rx || tx))
 		return -EFAULT;
@@ -994,6 +997,8 @@ int xsk_socket__create_shared(struct xsk_socket **xsk_ptr,
 	}
 
 	*xsk_ptr = xsk;
+	umem->fill_save = NULL;
+	umem->comp_save = NULL;
 	return 0;
 
 out_mmap_tx:
@@ -1005,7 +1010,7 @@ int xsk_socket__create_shared(struct xsk_socket **xsk_ptr,
 		munmap(rx_map, off.rx.desc +
 		       xsk->config.rx_size * sizeof(struct xdp_desc));
 out_put_ctx:
-	xsk_put_ctx(ctx);
+	xsk_put_ctx(ctx, unmap);
 out_socket:
 	if (--umem->refcount)
 		close(xsk->fd);
@@ -1071,7 +1076,7 @@ void xsk_socket__delete(struct xsk_socket *xsk)
 		}
 	}
 
-	xsk_put_ctx(ctx);
+	xsk_put_ctx(ctx, true);
 
 	umem->refcount--;
 	/* Do not close an fd that also has an associated umem connected

From patchwork Wed Mar 31 06:12:18 2021
X-Patchwork-Submitter: "Loftus, Ciara"
X-Patchwork-Id: 413415
From: Ciara Loftus <ciara.loftus@intel.com>
To: netdev@vger.kernel.org, bpf@vger.kernel.org, magnus.karlsson@intel.com,
    bjorn@kernel.org, alexei.starovoitov@gmail.com
Cc: Ciara Loftus <ciara.loftus@intel.com>
Subject: [PATCH v4 bpf 3/3] libbpf: only create rx and tx XDP rings when necessary
Date: Wed, 31 Mar 2021 06:12:18 +0000
Message-Id: <20210331061218.1647-4-ciara.loftus@intel.com>
In-Reply-To: <20210331061218.1647-1-ciara.loftus@intel.com>
References: <20210331061218.1647-1-ciara.loftus@intel.com>
X-Mailing-List: netdev@vger.kernel.org

Prior to this commit, xsk_socket__create(_shared) always attempted to
create the rx and tx rings for the socket. However, this causes an issue
when the socket being set up is the one that shares its fd with the UMEM.
If a previous call to this function failed for this socket after the rings
were set up, a subsequent call would always fail, because the rings are
not torn down after the first call and setting them up again returns an
error since they already exist. Solve this by remembering whether the
rings were already set up: introduce new bools to struct xsk_umem which
record the ring setup status and use them to determine whether or not to
set up the rings.
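Distilled into a standalone sketch (not the literal patched code), the
guard that the new flags provide looks like the following. The fd, ring
size and flag pointer stand in for the fields used inside xsk.c; the
helper name is illustrative.

/*
 * Simplified sketch of the idempotent ring setup this patch introduces.
 */
#include <errno.h>
#include <stdbool.h>
#include <sys/socket.h>
#include <linux/types.h>
#include <linux/if_xdp.h>

#ifndef SOL_XDP
#define SOL_XDP 283
#endif

static int setup_ring_once(int fd, int optname, __u32 size, bool *setup_done)
{
	if (*setup_done)
		return 0; /* ring survived an earlier failed create; reuse it */

	if (setsockopt(fd, SOL_XDP, optname, &size, sizeof(size)))
		return -errno;

	*setup_done = true;
	return 0;
}

Such a helper would be called once with XDP_RX_RING and once with
XDP_TX_RING. In the patched code the flag is only recorded when the
socket's fd is the umem's own fd, since that is the only fd whose rings
outlive a failed xsk_socket__create_shared() call.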
Fixes: 1cad07884239 ("libbpf: add support for using AF_XDP sockets")
Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
---
 tools/lib/bpf/xsk.c | 13 +++++++++++--
 1 file changed, 11 insertions(+), 2 deletions(-)

diff --git a/tools/lib/bpf/xsk.c b/tools/lib/bpf/xsk.c
index 5098d9e3b55a..d24b5cc720ec 100644
--- a/tools/lib/bpf/xsk.c
+++ b/tools/lib/bpf/xsk.c
@@ -59,6 +59,8 @@ struct xsk_umem {
 	int fd;
 	int refcount;
 	struct list_head ctx_list;
+	bool rx_ring_setup_done;
+	bool tx_ring_setup_done;
 };
 
 struct xsk_ctx {
@@ -857,6 +859,7 @@ int xsk_socket__create_shared(struct xsk_socket **xsk_ptr,
 	struct xsk_ctx *ctx;
 	int err, ifindex;
 	bool unmap = umem->fill_save != fill;
+	bool rx_setup_done = false, tx_setup_done = false;
 
 	if (!umem || !xsk_ptr || !(rx || tx))
 		return -EFAULT;
@@ -884,6 +887,8 @@ int xsk_socket__create_shared(struct xsk_socket **xsk_ptr,
 		}
 	} else {
 		xsk->fd = umem->fd;
+		rx_setup_done = umem->rx_ring_setup_done;
+		tx_setup_done = umem->tx_ring_setup_done;
 	}
 
 	ctx = xsk_get_ctx(umem, ifindex, queue_id);
@@ -902,7 +907,7 @@ int xsk_socket__create_shared(struct xsk_socket **xsk_ptr,
 	}
 	xsk->ctx = ctx;
 
-	if (rx) {
+	if (rx && !rx_setup_done) {
 		err = setsockopt(xsk->fd, SOL_XDP, XDP_RX_RING,
 				 &xsk->config.rx_size,
 				 sizeof(xsk->config.rx_size));
@@ -910,8 +915,10 @@ int xsk_socket__create_shared(struct xsk_socket **xsk_ptr,
 			err = -errno;
 			goto out_put_ctx;
 		}
+		if (xsk->fd == umem->fd)
+			umem->rx_ring_setup_done = true;
 	}
-	if (tx) {
+	if (tx && !tx_setup_done) {
 		err = setsockopt(xsk->fd, SOL_XDP, XDP_TX_RING,
 				 &xsk->config.tx_size,
 				 sizeof(xsk->config.tx_size));
@@ -919,6 +926,8 @@ int xsk_socket__create_shared(struct xsk_socket **xsk_ptr,
 			err = -errno;
 			goto out_put_ctx;
 		}
+		if (xsk->fd == umem->fd)
+			umem->tx_ring_setup_done = true;
 	}
 
 	err = xsk_get_mmap_offsets(xsk->fd, &off);