From patchwork Thu Jul 1 00:27:55 2021
X-Patchwork-Submitter: Kumar Kartikeya Dwivedi
X-Patchwork-Id: 469001
From: Kumar Kartikeya Dwivedi
To: netdev@vger.kernel.org
Cc: Kumar Kartikeya Dwivedi, Toke Høiland-Jørgensen, Alexei Starovoitov,
    Daniel Borkmann, Andrii Nakryiko, Jesper Dangaard Brouer,
    "David S. Miller", Jakub Kicinski, John Fastabend, Martin KaFai Lau,
    bpf@vger.kernel.org
Subject: [PATCH net-next v5 1/5] net: core: split out code to run generic XDP prog
Date: Thu, 1 Jul 2021 05:57:55 +0530
Message-Id: <20210701002759.381983-2-memxor@gmail.com>
In-Reply-To: <20210701002759.381983-1-memxor@gmail.com>
References: <20210701002759.381983-1-memxor@gmail.com>

This helper can later be utilized in code that runs cpumap and devmap
programs in generic redirect mode and adjusts the skb based on changes
made to the xdp_buff.

When returning XDP_REDIRECT/XDP_TX, it invokes __skb_push, so whenever a
generic redirect path invokes a devmap/cpumap prog (if set), it must
__skb_pull again, as we expect the mac header to be pulled.

It also drops the skb_reset_mac_len call after do_xdp_generic, as the
mac_header and network_header are advanced by the same offset, so the
difference (mac_len) remains constant.

Reviewed-by: Toke Høiland-Jørgensen
Signed-off-by: Kumar Kartikeya Dwivedi
---
 include/linux/netdevice.h |  2 +
 net/core/dev.c            | 84 ++++++++++++++++++++++++---------------
 2 files changed, 55 insertions(+), 31 deletions(-)

diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index be1dcceda5e4..90472ea70db2 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -3984,6 +3984,8 @@ static inline void dev_consume_skb_any(struct sk_buff *skb)
         __dev_kfree_skb_any(skb, SKB_REASON_CONSUMED);
 }
 
+u32 bpf_prog_run_generic_xdp(struct sk_buff *skb, struct xdp_buff *xdp,
+                             struct bpf_prog *xdp_prog);
 void generic_xdp_tx(struct sk_buff *skb, struct bpf_prog *xdp_prog);
 int do_xdp_generic(struct bpf_prog *xdp_prog, struct sk_buff *skb);
 int netif_rx(struct sk_buff *skb);
diff --git a/net/core/dev.c b/net/core/dev.c
index 991d09b67bd9..ad5ab33cbd39 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -4740,45 +4740,18 @@ static struct netdev_rx_queue *netif_get_rxqueue(struct sk_buff *skb)
         return rxqueue;
 }
 
-static u32 netif_receive_generic_xdp(struct sk_buff *skb,
-                                     struct xdp_buff *xdp,
-                                     struct bpf_prog *xdp_prog)
+u32 bpf_prog_run_generic_xdp(struct sk_buff *skb, struct xdp_buff *xdp,
+                             struct bpf_prog *xdp_prog)
 {
         void *orig_data, *orig_data_end, *hard_start;
         struct netdev_rx_queue *rxqueue;
-        u32 metalen, act = XDP_DROP;
         bool orig_bcast, orig_host;
         u32 mac_len, frame_sz;
         __be16 orig_eth_type;
         struct ethhdr *eth;
+        u32 metalen, act;
         int off;
 
-        /* Reinjected packets coming from act_mirred or similar should
-         * not get XDP generic processing.
-         */
-        if (skb_is_redirected(skb))
-                return XDP_PASS;
-
-        /* XDP packets must be linear and must have sufficient headroom
-         * of XDP_PACKET_HEADROOM bytes. This is the guarantee that also
-         * native XDP provides, thus we need to do it here as well.
-         */
-        if (skb_cloned(skb) || skb_is_nonlinear(skb) ||
-            skb_headroom(skb) < XDP_PACKET_HEADROOM) {
-                int hroom = XDP_PACKET_HEADROOM - skb_headroom(skb);
-                int troom = skb->tail + skb->data_len - skb->end;
-
-                /* In case we have to go down the path and also linearize,
-                 * then lets do the pskb_expand_head() work just once here.
-                 */
-                if (pskb_expand_head(skb,
-                                     hroom > 0 ? ALIGN(hroom, NET_SKB_PAD) : 0,
-                                     troom > 0 ? troom + 128 : 0, GFP_ATOMIC))
-                        goto do_drop;
-                if (skb_linearize(skb))
-                        goto do_drop;
-        }
-
         /* The XDP program wants to see the packet starting at the MAC
          * header.
          */
@@ -4833,6 +4806,13 @@ static u32 netif_receive_generic_xdp(struct sk_buff *skb,
                 skb->protocol = eth_type_trans(skb, skb->dev);
         }
 
+        /* Redirect/Tx gives L2 packet, code that will reuse skb must __skb_pull
+         * before calling us again on redirect path. We do not call do_redirect
+         * as we leave that up to the caller.
+         *
+         * Caller is responsible for managing lifetime of skb (i.e. calling
+         * kfree_skb in response to actions it cannot handle/XDP_DROP).
+         */
         switch (act) {
         case XDP_REDIRECT:
         case XDP_TX:
@@ -4843,6 +4823,49 @@ static u32 netif_receive_generic_xdp(struct sk_buff *skb,
                 if (metalen)
                         skb_metadata_set(skb, metalen);
                 break;
+        }
+
+        return act;
+}
+
+static u32 netif_receive_generic_xdp(struct sk_buff *skb,
+                                     struct xdp_buff *xdp,
+                                     struct bpf_prog *xdp_prog)
+{
+        u32 act = XDP_DROP;
+
+        /* Reinjected packets coming from act_mirred or similar should
+         * not get XDP generic processing.
+         */
+        if (skb_is_redirected(skb))
+                return XDP_PASS;
+
+        /* XDP packets must be linear and must have sufficient headroom
+         * of XDP_PACKET_HEADROOM bytes. This is the guarantee that also
+         * native XDP provides, thus we need to do it here as well.
+         */
+        if (skb_cloned(skb) || skb_is_nonlinear(skb) ||
+            skb_headroom(skb) < XDP_PACKET_HEADROOM) {
+                int hroom = XDP_PACKET_HEADROOM - skb_headroom(skb);
+                int troom = skb->tail + skb->data_len - skb->end;
+
+                /* In case we have to go down the path and also linearize,
+                 * then lets do the pskb_expand_head() work just once here.
+                 */
+                if (pskb_expand_head(skb,
+                                     hroom > 0 ? ALIGN(hroom, NET_SKB_PAD) : 0,
+                                     troom > 0 ? troom + 128 : 0, GFP_ATOMIC))
+                        goto do_drop;
+                if (skb_linearize(skb))
+                        goto do_drop;
+        }
+
+        act = bpf_prog_run_generic_xdp(skb, xdp, xdp_prog);
+        switch (act) {
+        case XDP_REDIRECT:
+        case XDP_TX:
+        case XDP_PASS:
+                break;
         default:
                 bpf_warn_invalid_xdp_action(act);
                 fallthrough;
@@ -5308,7 +5331,6 @@ static int __netif_receive_skb_core(struct sk_buff **pskb, bool pfmemalloc,
                         ret = NET_RX_DROP;
                         goto out;
                 }
-                skb_reset_mac_len(skb);
         }
 
         if (eth_type_vlan(skb->protocol)) {
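The calling convention described in the commit message (pull the mac header
before calling the helper, push it back on XDP_PASS, free the skb for
unhandled verdicts) can be summarized in a short sketch. This is illustrative
only, not part of the patch, and the function name is made up; patch 4/5 adds
the real devmap variant of this logic.

/* Illustrative only: run a map entry's prog on an skb in the generic
 * XDP path. bpf_prog_run_generic_xdp() expects the mac header to be
 * pulled, and pushes it back itself for XDP_REDIRECT/XDP_TX, so the
 * caller pulls first, restores the L2 header on XDP_PASS, and owns
 * freeing the skb for verdicts it does not handle.
 */
static u32 run_entry_prog_generic(struct sk_buff *skb, struct bpf_prog *prog,
                                  struct net_device *tx_dev)
{
        struct xdp_txq_info txq = { .dev = tx_dev };
        struct xdp_buff xdp;
        u32 act;

        __skb_pull(skb, skb->mac_len);
        xdp.txq = &txq;

        act = bpf_prog_run_generic_xdp(skb, &xdp, prog);
        if (act == XDP_PASS)
                __skb_push(skb, skb->mac_len); /* back to an L2 packet for TX */
        else
                kfree_skb(skb);                /* tracing/warnings elided */

        return act;
}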
From patchwork Thu Jul 1 00:27:56 2021
X-Patchwork-Submitter: Kumar Kartikeya Dwivedi
X-Patchwork-Id: 469449
From: Kumar Kartikeya Dwivedi
To: netdev@vger.kernel.org
Cc: Kumar Kartikeya Dwivedi, Toke Høiland-Jørgensen, Alexei Starovoitov,
    Daniel Borkmann, Andrii Nakryiko, Jesper Dangaard Brouer,
    "David S. Miller", Jakub Kicinski, John Fastabend, Martin KaFai Lau,
    bpf@vger.kernel.org
Subject: [PATCH net-next v5 2/5] bitops: add non-atomic bitops for pointers
Date: Thu, 1 Jul 2021 05:57:56 +0530
Message-Id: <20210701002759.381983-3-memxor@gmail.com>
In-Reply-To: <20210701002759.381983-1-memxor@gmail.com>
References: <20210701002759.381983-1-memxor@gmail.com>

cpumap needs to set, clear, and test the lowest bit in an skb pointer in
various places. To make these checks less noisy, add pointer-friendly
bitop macros that also do some typechecking to sanitize the argument.

These wrap the non-atomic bitops __set_bit, __clear_bit, and test_bit
for pointer arguments. The pointer's address has to be passed in, and it
is treated as an unsigned long *, since the width and representation of
a pointer and an unsigned long match on the targets Linux supports. The
macros are prefixed with a double underscore to indicate the lack of
atomicity.
Reviewed-by: Toke Høiland-Jørgensen
Signed-off-by: Kumar Kartikeya Dwivedi
---
 include/linux/bitops.h    | 50 +++++++++++++++++++++++++++++++++++++++
 include/linux/typecheck.h |  9 +++++++
 2 files changed, 59 insertions(+)

diff --git a/include/linux/bitops.h b/include/linux/bitops.h
index 26bf15e6cd35..5e62e2383b7f 100644
--- a/include/linux/bitops.h
+++ b/include/linux/bitops.h
@@ -4,6 +4,7 @@
 
 #include <asm/types.h>
 #include <linux/bits.h>
+#include <linux/typecheck.h>
 
 #include <uapi/linux/kernel.h>
 
@@ -253,6 +254,55 @@ static __always_inline void __assign_bit(long nr, volatile unsigned long *addr,
                 __clear_bit(nr, addr);
 }
 
+/**
+ * __ptr_set_bit - Set bit in a pointer's value
+ * @nr: the bit to set
+ * @addr: the address of the pointer variable
+ *
+ * Example:
+ *      void *p = foo();
+ *      __ptr_set_bit(bit, &p);
+ */
+#define __ptr_set_bit(nr, addr)                         \
+        ({                                              \
+                typecheck_pointer(*(addr));             \
+                __set_bit(nr, (unsigned long *)(addr)); \
+        })
+
+/**
+ * __ptr_clear_bit - Clear bit in a pointer's value
+ * @nr: the bit to clear
+ * @addr: the address of the pointer variable
+ *
+ * Example:
+ *      void *p = foo();
+ *      __ptr_clear_bit(bit, &p);
+ */
+#define __ptr_clear_bit(nr, addr)                         \
+        ({                                                \
+                typecheck_pointer(*(addr));               \
+                __clear_bit(nr, (unsigned long *)(addr)); \
+        })
+
+/**
+ * __ptr_test_bit - Test bit in a pointer's value
+ * @nr: the bit to test
+ * @addr: the address of the pointer variable
+ *
+ * Example:
+ *      void *p = foo();
+ *      if (__ptr_test_bit(bit, &p)) {
+ *              ...
+ *      } else {
+ *              ...
+ *      }
+ */
+#define __ptr_test_bit(nr, addr)                       \
+        ({                                             \
+                typecheck_pointer(*(addr));            \
+                test_bit(nr, (unsigned long *)(addr)); \
+        })
+
 #ifdef __KERNEL__
 
 #ifndef set_mask_bits
diff --git a/include/linux/typecheck.h b/include/linux/typecheck.h
index 20d310331eb5..46b15e2aaefb 100644
--- a/include/linux/typecheck.h
+++ b/include/linux/typecheck.h
@@ -22,4 +22,13 @@
         (void)__tmp; \
 })
 
+/*
+ * Check at compile time that something is a pointer type.
+ */
+#define typecheck_pointer(x) \
+({      typeof(x) __dummy; \
+        (void)sizeof(*__dummy); \
+        1; \
+})
+
 #endif /* TYPECHECK_H_INCLUDED */
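As a usage illustration (a sketch with made-up helper names, not part of the
patch), tagging and untagging the low bit of a pointer with these macros looks
like this, in the style cpumap uses them for skb pointers:

/* Stash a flag in bit 0 of an object pointer; slab-allocated objects are
 * aligned to at least the word size, so the low bits of the address are
 * free to carry state.
 */
static void *skb_ptr_mark(struct sk_buff *skb)
{
        __ptr_set_bit(0, &skb);         /* typecheck_pointer() rejects non-pointer arguments */
        return skb;
}

static struct sk_buff *skb_ptr_unmark(void *ptr)
{
        struct sk_buff *skb = ptr;

        if (__ptr_test_bit(0, &skb))
                __ptr_clear_bit(0, &skb);       /* recover the real pointer */

        return skb;
}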
From patchwork Thu Jul 1 00:27:58 2021
X-Patchwork-Submitter: Kumar Kartikeya Dwivedi
X-Patchwork-Id: 469448
From: Kumar Kartikeya Dwivedi
To: netdev@vger.kernel.org
Cc: Kumar Kartikeya Dwivedi, Toke Høiland-Jørgensen, Alexei Starovoitov,
    Daniel Borkmann, Andrii Nakryiko, Jesper Dangaard Brouer,
    "David S. Miller", Jakub Kicinski, John Fastabend, Martin KaFai Lau,
    bpf@vger.kernel.org
Subject: [PATCH net-next v5 4/5] bpf: devmap: implement devmap prog execution for generic XDP
Date: Thu, 1 Jul 2021 05:57:58 +0530
Message-Id: <20210701002759.381983-5-memxor@gmail.com>
In-Reply-To: <20210701002759.381983-1-memxor@gmail.com>
References: <20210701002759.381983-1-memxor@gmail.com>

This lifts the restriction on running devmap BPF progs in generic
redirect mode. To match native XDP behavior, the prog is invoked right
before generic_xdp_tx is called, and only the XDP_PASS/XDP_ABORTED/
XDP_DROP actions are supported.

We also return 0 even if the devmap program drops the packet, as
semantically the redirect has already succeeded and the devmap prog is
the last point before TX of the packet to the device, where it can
deliver a verdict on the packet. This also means it must take care of
freeing the skb, as xdp_do_generic_redirect callers only do that when an
error is returned.

Since devmap entry progs are now supported, remove the check in
generic_xdp_install entirely.

Reviewed-by: Toke Høiland-Jørgensen
Signed-off-by: Kumar Kartikeya Dwivedi
---
 include/linux/bpf.h |  1 -
 kernel/bpf/devmap.c | 49 ++++++++++++++++++++++++++++++++++++---------
 net/core/dev.c      | 18 ------------------
 3 files changed, 39 insertions(+), 29 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 095aaa104c56..4afbff308ca3 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1508,7 +1508,6 @@ int dev_map_generic_redirect(struct bpf_dtab_netdev *dst, struct sk_buff *skb,
 int dev_map_redirect_multi(struct net_device *dev, struct sk_buff *skb,
                            struct bpf_prog *xdp_prog, struct bpf_map *map,
                            bool exclude_ingress);
-bool dev_map_can_have_prog(struct bpf_map *map);
 
 void __cpu_map_flush(void);
 int cpu_map_enqueue(struct bpf_cpu_map_entry *rcpu, struct xdp_buff *xdp,
diff --git a/kernel/bpf/devmap.c b/kernel/bpf/devmap.c
index 2a75e6c2d27d..49f03e8e5561 100644
--- a/kernel/bpf/devmap.c
+++ b/kernel/bpf/devmap.c
@@ -318,16 +318,6 @@ static int dev_map_hash_get_next_key(struct bpf_map *map, void *key,
         return -ENOENT;
 }
 
-bool dev_map_can_have_prog(struct bpf_map *map)
-{
-        if ((map->map_type == BPF_MAP_TYPE_DEVMAP ||
-             map->map_type == BPF_MAP_TYPE_DEVMAP_HASH) &&
-            map->value_size != offsetofend(struct bpf_devmap_val, ifindex))
-                return true;
-
-        return false;
-}
-
 static int dev_map_bpf_prog_run(struct bpf_prog *xdp_prog,
                                 struct xdp_frame **frames, int n,
                                 struct net_device *dev)
@@ -499,6 +489,37 @@ static inline int __xdp_enqueue(struct net_device *dev, struct xdp_buff *xdp,
         return 0;
 }
 
+static u32 dev_map_bpf_prog_run_skb(struct sk_buff *skb, struct bpf_dtab_netdev *dst)
+{
+        struct xdp_txq_info txq = { .dev = dst->dev };
+        struct xdp_buff xdp;
+        u32 act;
+
+        if (!dst->xdp_prog)
+                return XDP_PASS;
+
+        __skb_pull(skb, skb->mac_len);
+        xdp.txq = &txq;
+
+        act = bpf_prog_run_generic_xdp(skb, &xdp, dst->xdp_prog);
+        switch (act) {
+        case XDP_PASS:
+                __skb_push(skb, skb->mac_len);
+                break;
+        default:
+                bpf_warn_invalid_xdp_action(act);
+                fallthrough;
+        case XDP_ABORTED:
+                trace_xdp_exception(dst->dev, dst->xdp_prog, act);
+                fallthrough;
+        case XDP_DROP:
+                kfree_skb(skb);
+                break;
+        }
+
+        return act;
+}
+
 int dev_xdp_enqueue(struct net_device *dev, struct xdp_buff *xdp,
                     struct net_device *dev_rx)
 {
@@ -614,6 +635,14 @@ int dev_map_generic_redirect(struct bpf_dtab_netdev *dst, struct sk_buff *skb,
         err = xdp_ok_fwd_dev(dst->dev, skb->len);
         if (unlikely(err))
                 return err;
+
+        /* Redirect has already succeeded semantically at this point, so we just
+         * return 0 even if packet is dropped. Helper below takes care of
+         * freeing skb.
+         */
+        if (dev_map_bpf_prog_run_skb(skb, dst) != XDP_PASS)
+                return 0;
+
         skb->dev = dst->dev;
         generic_xdp_tx(skb, xdp_prog);
 
diff --git a/net/core/dev.c b/net/core/dev.c
index 8521936414f2..c674fe191e8a 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -5656,24 +5656,6 @@ static int generic_xdp_install(struct net_device *dev, struct netdev_bpf *xdp)
         struct bpf_prog *new = xdp->prog;
         int ret = 0;
 
-        if (new) {
-                u32 i;
-
-                mutex_lock(&new->aux->used_maps_mutex);
-
-                /* generic XDP does not work with DEVMAPs that can
-                 * have a bpf_prog installed on an entry
-                 */
-                for (i = 0; i < new->aux->used_map_cnt; i++) {
-                        if (dev_map_can_have_prog(new->aux->used_maps[i])) {
-                                mutex_unlock(&new->aux->used_maps_mutex);
-                                return -EINVAL;
-                        }
-                }
-
-                mutex_unlock(&new->aux->used_maps_mutex);
-        }
-
         switch (xdp->command) {
         case XDP_SETUP_PROG:
                 rcu_assign_pointer(dev->xdp_prog, new);
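From user space, the visible effect is that a devmap carrying per-entry
programs can now be driven by a generic-mode (SKB-mode) XDP attach. Below is a
minimal sketch using the same libbpf calls as the selftests in the next patch;
the fd parameters are assumed to come from an object loaded elsewhere, and the
function name is made up.

#include <bpf/bpf.h>
#include <bpf/libbpf.h>
#include <linux/if_link.h>

/* Sketch only: redir_fd is an XDP prog that bpf_redirect_map()s into the
 * devmap, devprog_fd is a BPF_XDP_DEVMAP prog, and map_fd is a DEVMAP whose
 * value type is struct bpf_devmap_val. All three are assumed to have been
 * loaded already.
 */
static int attach_generic_devmap_redirect(int ifindex, int redir_fd,
                                          int devprog_fd, int map_fd)
{
        struct bpf_devmap_val val = {
                .ifindex = ifindex,
                .bpf_prog.fd = devprog_fd,
        };
        __u32 key = 0;
        int err;

        err = bpf_map_update_elem(map_fd, &key, &val, 0);
        if (err)
                return err;

        /* Before this patch, generic_xdp_install() rejected an SKB-mode
         * attach of a program using such a map with -EINVAL.
         */
        return bpf_set_link_xdp_fd(ifindex, redir_fd, XDP_FLAGS_SKB_MODE);
}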
From patchwork Thu Jul 1 00:27:59 2021
X-Patchwork-Submitter: Kumar Kartikeya Dwivedi
X-Patchwork-Id: 469000
From: Kumar Kartikeya Dwivedi
To: netdev@vger.kernel.org
Cc: Kumar Kartikeya Dwivedi, Toke Høiland-Jørgensen, Alexei Starovoitov,
    Daniel Borkmann, Andrii Nakryiko, Jesper Dangaard Brouer,
    "David S. Miller", Jakub Kicinski, John Fastabend, Martin KaFai Lau,
    bpf@vger.kernel.org
Subject: [PATCH net-next v5 5/5] bpf: tidy xdp attach selftests
Date: Thu, 1 Jul 2021 05:57:59 +0530
Message-Id: <20210701002759.381983-6-memxor@gmail.com>
In-Reply-To: <20210701002759.381983-1-memxor@gmail.com>
References: <20210701002759.381983-1-memxor@gmail.com>

Support for cpumap and devmap entry progs in the previous commits means
the tests need to be updated for the new semantics. Also take this
opportunity to convert them from CHECK macros to the new ASSERT macros.

Since xdp_cpumap_attach has no subtest, put the sole test inside the
test_xdp_cpumap_attach function.
Reviewed-by: Toke Høiland-Jørgensen
Signed-off-by: Kumar Kartikeya Dwivedi
---
 .../bpf/prog_tests/xdp_cpumap_attach.c        | 43 +++++++------------
 .../bpf/prog_tests/xdp_devmap_attach.c        | 39 +++++++----------
 2 files changed, 32 insertions(+), 50 deletions(-)

diff --git a/tools/testing/selftests/bpf/prog_tests/xdp_cpumap_attach.c b/tools/testing/selftests/bpf/prog_tests/xdp_cpumap_attach.c
index 0176573fe4e7..8755effd80b0 100644
--- a/tools/testing/selftests/bpf/prog_tests/xdp_cpumap_attach.c
+++ b/tools/testing/selftests/bpf/prog_tests/xdp_cpumap_attach.c
@@ -7,64 +7,53 @@
 
 #define IFINDEX_LO 1
 
-void test_xdp_with_cpumap_helpers(void)
+void test_xdp_cpumap_attach(void)
 {
         struct test_xdp_with_cpumap_helpers *skel;
         struct bpf_prog_info info = {};
+        __u32 len = sizeof(info);
         struct bpf_cpumap_val val = {
                 .qsize = 192,
         };
-        __u32 duration = 0, idx = 0;
-        __u32 len = sizeof(info);
         int err, prog_fd, map_fd;
+        __u32 idx = 0;
 
         skel = test_xdp_with_cpumap_helpers__open_and_load();
-        if (CHECK_FAIL(!skel)) {
-                perror("test_xdp_with_cpumap_helpers__open_and_load");
+        if (!ASSERT_OK_PTR(skel, "test_xdp_with_cpumap_helpers__open_and_load"))
                 return;
-        }
 
-        /* can not attach program with cpumaps that allow programs
-         * as xdp generic
-         */
         prog_fd = bpf_program__fd(skel->progs.xdp_redir_prog);
         err = bpf_set_link_xdp_fd(IFINDEX_LO, prog_fd, XDP_FLAGS_SKB_MODE);
-        CHECK(err == 0, "Generic attach of program with 8-byte CPUMAP",
-              "should have failed\n");
+        if (!ASSERT_OK(err, "Generic attach of program with 8-byte CPUMAP"))
+                goto out_close;
+
+        err = bpf_set_link_xdp_fd(IFINDEX_LO, -1, XDP_FLAGS_SKB_MODE);
+        ASSERT_OK(err, "XDP program detach");
 
         prog_fd = bpf_program__fd(skel->progs.xdp_dummy_cm);
         map_fd = bpf_map__fd(skel->maps.cpu_map);
         err = bpf_obj_get_info_by_fd(prog_fd, &info, &len);
-        if (CHECK_FAIL(err))
+        if (!ASSERT_OK(err, "bpf_obj_get_info_by_fd"))
                 goto out_close;
 
         val.bpf_prog.fd = prog_fd;
         err = bpf_map_update_elem(map_fd, &idx, &val, 0);
-        CHECK(err, "Add program to cpumap entry", "err %d errno %d\n",
-              err, errno);
+        ASSERT_OK(err, "Add program to cpumap entry");
 
         err = bpf_map_lookup_elem(map_fd, &idx, &val);
-        CHECK(err, "Read cpumap entry", "err %d errno %d\n", err, errno);
-        CHECK(info.id != val.bpf_prog.id, "Expected program id in cpumap entry",
-              "expected %u read %u\n", info.id, val.bpf_prog.id);
+        ASSERT_OK(err, "Read cpumap entry");
+        ASSERT_EQ(info.id, val.bpf_prog.id, "Match program id to cpumap entry prog_id");
 
         /* can not attach BPF_XDP_CPUMAP program to a device */
         err = bpf_set_link_xdp_fd(IFINDEX_LO, prog_fd, XDP_FLAGS_SKB_MODE);
-        CHECK(err == 0, "Attach of BPF_XDP_CPUMAP program",
-              "should have failed\n");
+        if (!ASSERT_NEQ(err, 0, "Attach of BPF_XDP_CPUMAP program"))
+                bpf_set_link_xdp_fd(IFINDEX_LO, -1, XDP_FLAGS_SKB_MODE);
 
         val.qsize = 192;
         val.bpf_prog.fd = bpf_program__fd(skel->progs.xdp_dummy_prog);
         err = bpf_map_update_elem(map_fd, &idx, &val, 0);
-        CHECK(err == 0, "Add non-BPF_XDP_CPUMAP program to cpumap entry",
-              "should have failed\n");
+        ASSERT_NEQ(err, 0, "Add non-BPF_XDP_CPUMAP program to cpumap entry");
 
 out_close:
         test_xdp_with_cpumap_helpers__destroy(skel);
 }
-
-void test_xdp_cpumap_attach(void)
-{
-        if (test__start_subtest("cpumap_with_progs"))
-                test_xdp_with_cpumap_helpers();
-}
diff --git a/tools/testing/selftests/bpf/prog_tests/xdp_devmap_attach.c b/tools/testing/selftests/bpf/prog_tests/xdp_devmap_attach.c
index 88ef3ec8ac4c..c72af030ff10 100644
--- a/tools/testing/selftests/bpf/prog_tests/xdp_devmap_attach.c
+++ b/tools/testing/selftests/bpf/prog_tests/xdp_devmap_attach.c
@@ -16,50 +16,45 @@ void test_xdp_with_devmap_helpers(void)
                 .ifindex = IFINDEX_LO,
         };
         __u32 len = sizeof(info);
-        __u32 duration = 0, idx = 0;
         int err, dm_fd, map_fd;
+        __u32 idx = 0;
 
         skel = test_xdp_with_devmap_helpers__open_and_load();
-        if (CHECK_FAIL(!skel)) {
-                perror("test_xdp_with_devmap_helpers__open_and_load");
+        if (!ASSERT_OK_PTR(skel, "test_xdp_with_devmap_helpers__open_and_load"))
                 return;
-        }
 
-        /* can not attach program with DEVMAPs that allow programs
-         * as xdp generic
-         */
         dm_fd = bpf_program__fd(skel->progs.xdp_redir_prog);
         err = bpf_set_link_xdp_fd(IFINDEX_LO, dm_fd, XDP_FLAGS_SKB_MODE);
-        CHECK(err == 0, "Generic attach of program with 8-byte devmap",
-              "should have failed\n");
+        if (!ASSERT_OK(err, "Generic attach of program with 8-byte devmap"))
+                goto out_close;
+
+        err = bpf_set_link_xdp_fd(IFINDEX_LO, -1, XDP_FLAGS_SKB_MODE);
+        ASSERT_OK(err, "XDP program detach");
 
         dm_fd = bpf_program__fd(skel->progs.xdp_dummy_dm);
         map_fd = bpf_map__fd(skel->maps.dm_ports);
         err = bpf_obj_get_info_by_fd(dm_fd, &info, &len);
-        if (CHECK_FAIL(err))
+        if (!ASSERT_OK(err, "bpf_obj_get_info_by_fd"))
                 goto out_close;
 
         val.bpf_prog.fd = dm_fd;
         err = bpf_map_update_elem(map_fd, &idx, &val, 0);
-        CHECK(err, "Add program to devmap entry",
-              "err %d errno %d\n", err, errno);
+        ASSERT_OK(err, "Add program to devmap entry");
 
         err = bpf_map_lookup_elem(map_fd, &idx, &val);
-        CHECK(err, "Read devmap entry", "err %d errno %d\n", err, errno);
-        CHECK(info.id != val.bpf_prog.id, "Expected program id in devmap entry",
-              "expected %u read %u\n", info.id, val.bpf_prog.id);
+        ASSERT_OK(err, "Read devmap entry");
+        ASSERT_EQ(info.id, val.bpf_prog.id, "Match program id to devmap entry prog_id");
 
         /* can not attach BPF_XDP_DEVMAP program to a device */
         err = bpf_set_link_xdp_fd(IFINDEX_LO, dm_fd, XDP_FLAGS_SKB_MODE);
-        CHECK(err == 0, "Attach of BPF_XDP_DEVMAP program",
-              "should have failed\n");
+        if (!ASSERT_NEQ(err, 0, "Attach of BPF_XDP_DEVMAP program"))
+                bpf_set_link_xdp_fd(IFINDEX_LO, -1, XDP_FLAGS_SKB_MODE);
 
         val.ifindex = 1;
         val.bpf_prog.fd = bpf_program__fd(skel->progs.xdp_dummy_prog);
         err = bpf_map_update_elem(map_fd, &idx, &val, 0);
-        CHECK(err == 0, "Add non-BPF_XDP_DEVMAP program to devmap entry",
-              "should have failed\n");
+        ASSERT_NEQ(err, 0, "Add non-BPF_XDP_DEVMAP program to devmap entry");
 
 out_close:
         test_xdp_with_devmap_helpers__destroy(skel);
@@ -68,12 +63,10 @@ void test_xdp_with_devmap_helpers(void)
 void test_neg_xdp_devmap_helpers(void)
 {
         struct test_xdp_devmap_helpers *skel;
-        __u32 duration = 0;
 
         skel = test_xdp_devmap_helpers__open_and_load();
-        if (CHECK(skel,
-                  "Load of XDP program accessing egress ifindex without attach type",
-                  "should have failed\n")) {
+        if (!ASSERT_EQ(skel, NULL,
+                       "Load of XDP program accessing egress ifindex without attach type")) {
                 test_xdp_devmap_helpers__destroy(skel);
         }
 }
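For readers less familiar with the selftest harness, the CHECK to ASSERT
conversion applied above follows a simple pattern. The fragment below is a
hypothetical illustration only (it assumes inclusion of test_progs.h and is
not part of the patch):

/* Old style: CHECK() fires when its condition is true and needs a
 * __u32 duration in scope, so "expected to fail" cases read inverted.
 * New style: ASSERT_*() states the expectation directly and evaluates
 * to true on success, so it can guard cleanup paths.
 */
static void example_check_vs_assert(int err)
{
        __u32 duration = 0;

        CHECK(err == 0, "Attach of BPF_XDP_DEVMAP program", "should have failed\n");

        if (!ASSERT_NEQ(err, 0, "Attach of BPF_XDP_DEVMAP program"))
                return;
}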