[net-next,0/4] Generic XDP improvements

Message ID: 20210620233200.855534-1-memxor@gmail.com

Message

Kumar Kartikeya Dwivedi June 20, 2021, 11:31 p.m. UTC
This small series makes some improvements to generic XDP mode and brings it
closer to native XDP. Patch 1 splits out generic XDP processing into reusable
parts, patch 2 implements generic cpumap support (details in the commit
message), and patch 3 allows devmap bpf prog execution before generic_xdp_tx
is called.

Patch 4 updates a couple of selftests to adapt to the change in behavior
(specifying a devmap/cpumap prog fd in generic mode is now allowed).
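
For context, the kind of program that patch 2 makes usable in generic (skb)
mode is an ordinary cpumap redirect, roughly like the sketch below. This is
illustrative only; map sizing, section names, and the chosen CPU index are not
taken from the series or its selftests:

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
	__uint(type, BPF_MAP_TYPE_CPUMAP);
	__uint(key_size, sizeof(__u32));
	__uint(value_size, sizeof(struct bpf_cpumap_val));
	__uint(max_entries, 8);
} cpu_map SEC(".maps");

SEC("xdp")
int redirect_to_cpu(struct xdp_md *ctx)
{
	/* Steer all packets to the kthread for CPU 1. The cpumap entry is
	 * populated from userspace and may also carry a prog fd in
	 * bpf_cpumap_val; this series allows that in generic mode too.
	 */
	return bpf_redirect_map(&cpu_map, 1, XDP_PASS);
}

char _license[] SEC("license") = "GPL";

When such a program is attached with XDP_FLAGS_SKB_MODE, the cpumap redirect
was previously not supported; with patch 2 the skb is instead queued to the
target CPU's cpumap kthread.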

Kumar Kartikeya Dwivedi (4):
  net: core: split out code to run generic XDP prog
  net: implement generic cpumap
  bpf: devmap: implement devmap prog execution for generic XDP
  bpf: update XDP selftests to not fail with generic XDP

 include/linux/bpf.h                           |   8 +
 include/linux/netdevice.h                     |   2 +
 include/linux/skbuff.h                        |  10 +-
 kernel/bpf/cpumap.c                           | 151 ++++++++++++++++--
 kernel/bpf/devmap.c                           |  42 ++++-
 net/core/dev.c                                |  86 ++++++----
 net/core/filter.c                             |   6 +-
 .../bpf/prog_tests/xdp_cpumap_attach.c        |   4 +-
 .../bpf/prog_tests/xdp_devmap_attach.c        |   4 +-
 9 files changed, 255 insertions(+), 58 deletions(-)

--
2.31.1

Comments

Kumar Kartikeya Dwivedi June 20, 2021, 11:31 p.m. UTC | #1
This change implements CPUMAP redirect support for generic XDP programs.
The idea is to reuse the cpu map entry's queue that is used to push
native xdp frames, for redirecting skbs to a different CPU. This matches
native XDP behavior (in that RPS is invoked again for packets reinjected
into the networking stack).

To be able to determine whether the incoming skb is from the driver or
the cpumap, we reuse the skb->redirected bit, which skips generic XDP
processing when it is set. To always be able to make use of this, the
CONFIG_NET_REDIRECT guard on it has been lifted, so the bit is now always
available.
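
To make the queueing part concrete, here is a rough sketch of the
redirect-side enqueue. It is illustrative only: the real helper in this
series is cpu_map_generic_redirect() (declared in the diff quoted below) and
its exact body may differ; the bit-tagging helpers are the __ptr_*_bit() ones
the patch adds, and the sketch's function name is made up.

/* Queue an skb to the remote CPU's kthread, tagging the pointer so the
 * consumer can tell it apart from an xdp_frame (sketch, not the patch).
 */
static int generic_cpumap_enqueue_sketch(struct bpf_cpu_map_entry *rcpu,
					 struct sk_buff *skb)
{
	int ret;

	/* Mark the skb so the generic XDP hook is skipped when the kthread
	 * reinjects it via netif_receive_skb_list().
	 */
	skb_set_redirected(skb, false);

	/* Lowest pointer bit set means "this is an sk_buff". */
	ret = ptr_ring_produce(rcpu->queue, __ptr_set_bit(skb, 0));
	if (ret < 0)
		kfree_skb(skb);

	return ret;
}
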
Toke Høiland-Jørgensen June 21, 2021, 3:43 p.m. UTC | #2
Kumar Kartikeya Dwivedi <memxor@gmail.com> writes:

> This change implements CPUMAP redirect support for generic XDP programs.
> The idea is to reuse the cpu map entry's queue that is used to push
> native xdp frames for redirecting skb to a different CPU. This will
> match native XDP behavior (in that RPS is invoked again for packet
> reinjected into networking stack).
>
> To be able to determine whether the incoming skb is from the driver or
> cpumap, we reuse skb->redirected bit that skips generic XDP processing
> when it is set. To always make use of this, CONFIG_NET_REDIRECT guard on
> it has been lifted and it is always available.
>
> From the redirect side, we add the skb to ptr_ring with its lowest bit
> set to 1.  This should be safe as skb is not 1-byte aligned. This allows
> kthread to discern between xdp_frames and sk_buff. On consumption of the
> ptr_ring item, the lowest bit is unset.
>
> In the end, the skb is simply added to the list that kthread is anyway
> going to maintain for xdp_frames converted to skb, and then received
> again by using netif_receive_skb_list.
>
> Bulking optimization for generic cpumap is left as an exercise for a
> future patch for now.
>
> Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
> ---
>  include/linux/bpf.h    |   8 +++
>  include/linux/skbuff.h |  10 +--
>  kernel/bpf/cpumap.c    | 151 +++++++++++++++++++++++++++++++++++++----
>  net/core/filter.c      |   6 +-
>  4 files changed, 154 insertions(+), 21 deletions(-)
>
> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> index f309fc1509f2..46e6587d3ee6 100644
> --- a/include/linux/bpf.h
> +++ b/include/linux/bpf.h
> @@ -1513,6 +1513,8 @@ bool dev_map_can_have_prog(struct bpf_map *map);
>  void __cpu_map_flush(void);
>  int cpu_map_enqueue(struct bpf_cpu_map_entry *rcpu, struct xdp_buff *xdp,
>  		    struct net_device *dev_rx);
> +int cpu_map_generic_redirect(struct bpf_cpu_map_entry *rcpu,
> +			     struct sk_buff *skb);
>  bool cpu_map_prog_allowed(struct bpf_map *map);
>  
>  /* Return map's numa specified by userspace */
> @@ -1710,6 +1712,12 @@ static inline int cpu_map_enqueue(struct bpf_cpu_map_entry *rcpu,
>  	return 0;
>  }
>  
> +static inline int cpu_map_generic_redirect(struct bpf_cpu_map_entry *rcpu,
> +					   struct sk_buff *skb)
> +{
> +	return -EOPNOTSUPP;
> +}
> +
>  static inline bool cpu_map_prog_allowed(struct bpf_map *map)
>  {
>  	return false;
> diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
> index b2db9cd9a73f..f19190820e63 100644
> --- a/include/linux/skbuff.h
> +++ b/include/linux/skbuff.h
> @@ -863,8 +863,8 @@ struct sk_buff {
>  	__u8			tc_skip_classify:1;
>  	__u8			tc_at_ingress:1;
>  #endif
> -#ifdef CONFIG_NET_REDIRECT
>  	__u8			redirected:1;
> +#ifdef CONFIG_NET_REDIRECT
>  	__u8			from_ingress:1;
>  #endif
>  #ifdef CONFIG_TLS_DEVICE
> @@ -4664,17 +4664,13 @@ static inline __wsum lco_csum(struct sk_buff *skb)
>  
>  static inline bool skb_is_redirected(const struct sk_buff *skb)
>  {
> -#ifdef CONFIG_NET_REDIRECT
>  	return skb->redirected;
> -#else
> -	return false;
> -#endif
>  }
>  
>  static inline void skb_set_redirected(struct sk_buff *skb, bool from_ingress)
>  {
> -#ifdef CONFIG_NET_REDIRECT
>  	skb->redirected = 1;
> +#ifdef CONFIG_NET_REDIRECT
>  	skb->from_ingress = from_ingress;
>  	if (skb->from_ingress)
>  		skb->tstamp = 0;
> @@ -4683,9 +4679,7 @@ static inline void skb_set_redirected(struct sk_buff *skb, bool from_ingress)
>  
>  static inline void skb_reset_redirect(struct sk_buff *skb)
>  {
> -#ifdef CONFIG_NET_REDIRECT
>  	skb->redirected = 0;
> -#endif
>  }
>  
>  static inline bool skb_csum_is_sctp(struct sk_buff *skb)
> diff --git a/kernel/bpf/cpumap.c b/kernel/bpf/cpumap.c
> index a1a0c4e791c6..f016daf8fdcc 100644
> --- a/kernel/bpf/cpumap.c
> +++ b/kernel/bpf/cpumap.c
> @@ -16,6 +16,7 @@
>   * netstack, and assigning dedicated CPUs for this stage.  This
>   * basically allows for 10G wirespeed pre-filtering via bpf.
>   */
> +#include <linux/bitops.h>
>  #include <linux/bpf.h>
>  #include <linux/filter.h>
>  #include <linux/ptr_ring.h>
> @@ -79,6 +80,29 @@ struct bpf_cpu_map {
>  
>  static DEFINE_PER_CPU(struct list_head, cpu_map_flush_list);
>  
> +static void *__ptr_set_bit(void *ptr, int bit)
> +{
> +	unsigned long __ptr = (unsigned long)ptr;
> +
> +	__ptr |= BIT(bit);
> +	return (void *)__ptr;
> +}
> +
> +static void *__ptr_clear_bit(void *ptr, int bit)
> +{
> +	unsigned long __ptr = (unsigned long)ptr;
> +
> +	__ptr &= ~BIT(bit);
> +	return (void *)__ptr;
> +}
> +
> +static int __ptr_test_bit(void *ptr, int bit)
> +{
> +	unsigned long __ptr = (unsigned long)ptr;
> +
> +	return __ptr & BIT(bit);
> +}

Why not put these into bitops.h instead?

>  static struct bpf_map *cpu_map_alloc(union bpf_attr *attr)
>  {
>  	u32 value_size = attr->value_size;
> @@ -168,6 +192,64 @@ static void put_cpu_map_entry(struct bpf_cpu_map_entry *rcpu)
>  	}
>  }
>  
> +static void cpu_map_bpf_prog_run_skb(struct bpf_cpu_map_entry *rcpu,
> +				    void **frames, int skb_n,
> +				    struct xdp_cpumap_stats *stats,
> +				    struct list_head *listp)
> +{
> +	struct xdp_rxq_info rxq = {};
> +	struct xdp_buff xdp;
> +	int err, i;
> +	u32 act;
> +
> +	xdp.rxq = &rxq;
> +
> +	if (!rcpu->prog)
> +		goto insert;
> +
> +	for (i = 0; i < skb_n; i++) {
> +		struct sk_buff *skb = frames[i];
> +
> +		rxq.dev = skb->dev;
> +
> +		act = bpf_prog_run_generic_xdp(skb, &xdp, rcpu->prog);
> +		switch (act) {
> +		case XDP_PASS:
> +			list_add_tail(&skb->list, listp);
> +			break;
> +		case XDP_REDIRECT:
> +			err = xdp_do_generic_redirect(skb->dev, skb, &xdp,
> +						      rcpu->prog);
> +			if (unlikely(err)) {
> +				kfree_skb(skb);
> +				stats->drop++;
> +			} else {
> +				stats->redirect++;
> +			}
> +			return;
> +		default:
> +			bpf_warn_invalid_xdp_action(act);
> +			fallthrough;
> +		case XDP_ABORTED:
> +			trace_xdp_exception(skb->dev, rcpu->prog, act);
> +			fallthrough;
> +		case XDP_DROP:
> +			kfree_skb(skb);
> +			stats->drop++;
> +			return;
> +		}
> +	}
> +
> +	return;
> +
> +insert:
> +	for (i = 0; i < skb_n; i++) {
> +		struct sk_buff *skb = frames[i];
> +
> +		list_add_tail(&skb->list, listp);
> +	}
> +}
> +
>  static int cpu_map_bpf_prog_run_xdp(struct bpf_cpu_map_entry *rcpu,
>  				    void **frames, int n,
>  				    struct xdp_cpumap_stats *stats)
> @@ -179,8 +261,6 @@ static int cpu_map_bpf_prog_run_xdp(struct bpf_cpu_map_entry *rcpu,
>  	if (!rcpu->prog)
>  		return n;
>  
> -	rcu_read_lock_bh();
> -
>  	xdp_set_return_frame_no_direct();
>  	xdp.rxq = &rxq;
>  
> @@ -227,17 +307,36 @@ static int cpu_map_bpf_prog_run_xdp(struct bpf_cpu_map_entry *rcpu,
>  		}
>  	}
>  
> +	xdp_clear_return_frame_no_direct();
> +
> +	return nframes;
> +}
> +
> +#define CPUMAP_BATCH 8
> +
> +static int cpu_map_bpf_prog_run(struct bpf_cpu_map_entry *rcpu,
> +				void **frames, int xdp_n, int skb_n,
> +				struct xdp_cpumap_stats *stats,
> +				struct list_head *list)
> +{
> +	int nframes;
> +
> +	rcu_read_lock_bh();
> +
> +	nframes = cpu_map_bpf_prog_run_xdp(rcpu, frames, xdp_n, stats);
> +
>  	if (stats->redirect)
> -		xdp_do_flush_map();
> +		xdp_do_flush();
>  
> -	xdp_clear_return_frame_no_direct();
> +	if (unlikely(skb_n))
> +		cpu_map_bpf_prog_run_skb(rcpu, frames + CPUMAP_BATCH, skb_n,
> +					 stats, list);
>  
> -	rcu_read_unlock_bh(); /* resched point, may call do_softirq() */
> +	rcu_read_unlock_bh();
>  
>  	return nframes;
>  }
>  
> -#define CPUMAP_BATCH 8
>  
>  static int cpu_map_kthread_run(void *data)
>  {
> @@ -254,9 +353,9 @@ static int cpu_map_kthread_run(void *data)
>  		struct xdp_cpumap_stats stats = {}; /* zero stats */
>  		unsigned int kmem_alloc_drops = 0, sched = 0;
>  		gfp_t gfp = __GFP_ZERO | GFP_ATOMIC;
> -		void *frames[CPUMAP_BATCH];
> +		int i, n, m, nframes, xdp_n, skb_n;
> +		void *frames[CPUMAP_BATCH * 2];

This double-sized array thing is clever, but it hurts readability. You'd
get basically the same code by having them as two separate arrays and
passing in two separate pointers to cpu_map_bpf_prog_run().

Or you could even just use 'list' - you're passing in that anyway, just
to have cpu_map_bpf_prog_run_skb() add the skbs to it; so why not just
add them right here in the caller, and have cpu_map_bpf_prog_run_skb()
remove them again if the rcpu prog doesn't return XDP_PASS?

-Toke
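
For reference, a rough sketch of the two-array variant suggested above (purely
illustrative; only the frames/skbs split differs from the quoted patch):

static int cpu_map_bpf_prog_run(struct bpf_cpu_map_entry *rcpu,
				void **frames, int xdp_n,
				void **skbs, int skb_n,
				struct xdp_cpumap_stats *stats,
				struct list_head *list)
{
	int nframes;

	rcu_read_lock_bh();

	nframes = cpu_map_bpf_prog_run_xdp(rcpu, frames, xdp_n, stats);

	if (stats->redirect)
		xdp_do_flush();

	if (unlikely(skb_n))
		cpu_map_bpf_prog_run_skb(rcpu, skbs, skb_n, stats, list);

	rcu_read_unlock_bh();

	return nframes;
}

with the kthread then declaring void *frames[CPUMAP_BATCH] and
void *skbs[CPUMAP_BATCH] side by side instead of one double-sized array.
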
Toke Høiland-Jørgensen June 21, 2021, 3:50 p.m. UTC | #3
Kumar Kartikeya Dwivedi <memxor@gmail.com> writes:

> This lifts the restriction on running devmap BPF progs in generic
> redirect mode. To match native XDP behavior, it is invoked right before
> generic_xdp_tx is called, and only supports XDP_PASS/XDP_ABORTED/
> XDP_DROP actions.
>
> We also return 0 even if devmap program drops the packet, as
> semantically redirect has already succeeded and the devmap prog is the
> last point before TX of the packet to device where it can deliver a
> verdict on the packet.
>
> This also means it must take care of freeing the skb, as
> xdp_do_generic_redirect callers only do that in case an error is
> returned.
>
> Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
> ---
>  kernel/bpf/devmap.c | 42 +++++++++++++++++++++++++++++++++++++++++-
>  1 file changed, 41 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/bpf/devmap.c b/kernel/bpf/devmap.c
> index 2a75e6c2d27d..db3ed8b20c8c 100644
> --- a/kernel/bpf/devmap.c
> +++ b/kernel/bpf/devmap.c
> @@ -322,7 +322,8 @@ bool dev_map_can_have_prog(struct bpf_map *map)
>  {
>  	if ((map->map_type == BPF_MAP_TYPE_DEVMAP ||
>  	     map->map_type == BPF_MAP_TYPE_DEVMAP_HASH) &&
> -	    map->value_size != offsetofend(struct bpf_devmap_val, ifindex))
> +	    map->value_size != offsetofend(struct bpf_devmap_val, ifindex) &&
> +	    map->value_size != offsetofend(struct bpf_devmap_val, bpf_prog.fd))
>  		return true;

With this you've basically removed the need for the check that calls
this, so why not just get rid of it entirely? Same thing for cpumap:
instead of updating cpu_map_prog_allowed(), just get rid of it...

-Toke
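
For reference, a rough sketch of the control flow described in the devmap
patch above. The helper name is made up and the real hunk may differ; per the
commit message, only XDP_PASS lets the skb continue to generic_xdp_tx(), and
a drop still counts as a successful redirect:

/* Sketch only: run the devmap entry's prog on the skb just before
 * generic_xdp_tx(). Anything other than XDP_PASS frees the skb here.
 */
static struct sk_buff *generic_devmap_prog_run_sketch(struct bpf_dtab_netdev *dst,
						      struct sk_buff *skb)
{
	struct xdp_rxq_info rxq = {};
	struct xdp_buff xdp;
	u32 act;

	if (!dst->xdp_prog)
		return skb;

	rxq.dev = skb->dev;
	xdp.rxq = &rxq;

	act = bpf_prog_run_generic_xdp(skb, &xdp, dst->xdp_prog);
	switch (act) {
	case XDP_PASS:
		return skb;
	default:
		bpf_warn_invalid_xdp_action(act);
		fallthrough;
	case XDP_ABORTED:
		trace_xdp_exception(skb->dev, dst->xdp_prog, act);
		fallthrough;
	case XDP_DROP:
		kfree_skb(skb);
		return NULL;	/* caller still reports the redirect as success */
	}
}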