From patchwork Sun May 31 21:46:46 2020
X-Patchwork-Submitter: Lorenzo Bianconi
X-Patchwork-Id: 218084
From: Lorenzo Bianconi
To: bpf@vger.kernel.org, netdev@vger.kernel.org
Cc: davem@davemloft.net, ast@kernel.org, brouer@redhat.com, toke@redhat.com,
    daniel@iogearbox.net, lorenzo.bianconi@redhat.com, dsahern@kernel.org,
    David Ahern
Subject: [PATCH bpf-next 1/6] net: Refactor xdp_convert_buff_to_frame
Date: Sun, 31 May 2020 23:46:46 +0200
Message-Id: <3dabeed5867243f624f5fb79c81dc9629a53677c.1590960613.git.lorenzo@kernel.org>

From: David Ahern

Move the guts of xdp_convert_buff_to_frame to a new helper,
xdp_update_frame_from_buff, so it can be reused, removing code
duplication.

Suggested-by: Jesper Dangaard Brouer
Co-developed-by: Lorenzo Bianconi
Signed-off-by: Lorenzo Bianconi
Signed-off-by: David Ahern
---
 include/net/xdp.h | 35 ++++++++++++++++++++++-------------
 1 file changed, 22 insertions(+), 13 deletions(-)

diff --git a/include/net/xdp.h b/include/net/xdp.h
index 609f819ed08b..ab1c503808a4 100644
--- a/include/net/xdp.h
+++ b/include/net/xdp.h
@@ -121,39 +121,48 @@ void xdp_convert_frame_to_buff(struct xdp_frame *frame, struct xdp_buff *xdp)
 	xdp->frame_sz = frame->frame_sz;
 }
 
-/* Convert xdp_buff to xdp_frame */
 static inline
-struct xdp_frame *xdp_convert_buff_to_frame(struct xdp_buff *xdp)
+int xdp_update_frame_from_buff(struct xdp_buff *xdp,
+			       struct xdp_frame *xdp_frame)
 {
-	struct xdp_frame *xdp_frame;
-	int metasize;
-	int headroom;
-
-	if (xdp->rxq->mem.type == MEM_TYPE_XSK_BUFF_POOL)
-		return xdp_convert_zc_to_xdp_frame(xdp);
+	int metasize, headroom;
 
 	/* Assure headroom is available for storing info */
 	headroom = xdp->data - xdp->data_hard_start;
 	metasize = xdp->data - xdp->data_meta;
 	metasize = metasize > 0 ? metasize : 0;
 	if (unlikely((headroom - metasize) < sizeof(*xdp_frame)))
-		return NULL;
+		return -ENOMEM;
 
 	/* Catch if driver didn't reserve tailroom for skb_shared_info */
 	if (unlikely(xdp->data_end > xdp_data_hard_end(xdp))) {
 		XDP_WARN("Driver BUG: missing reserved tailroom");
-		return NULL;
+		return -ENOMEM;
 	}
 
-	/* Store info in top of packet */
-	xdp_frame = xdp->data_hard_start;
-
 	xdp_frame->data = xdp->data;
 	xdp_frame->len = xdp->data_end - xdp->data;
 	xdp_frame->headroom = headroom - sizeof(*xdp_frame);
 	xdp_frame->metasize = metasize;
 	xdp_frame->frame_sz = xdp->frame_sz;
 
+	return 0;
+}
+
+/* Convert xdp_buff to xdp_frame */
+static inline
+struct xdp_frame *xdp_convert_buff_to_frame(struct xdp_buff *xdp)
+{
+	struct xdp_frame *xdp_frame;
+
+	if (xdp->rxq->mem.type == MEM_TYPE_XSK_BUFF_POOL)
+		return xdp_convert_zc_to_xdp_frame(xdp);
+
+	/* Store info in top of packet */
+	xdp_frame = xdp->data_hard_start;
+	if (unlikely(xdp_update_frame_from_buff(xdp, xdp_frame) < 0))
+		return NULL;
+
 	/* rxq only valid until napi_schedule ends, convert to xdp_mem_info */
 	xdp_frame->mem = xdp->rxq->mem;
From patchwork Sun May 31 21:46:48 2020
X-Patchwork-Submitter: Lorenzo Bianconi
X-Patchwork-Id: 218083
From: Lorenzo Bianconi
To: bpf@vger.kernel.org, netdev@vger.kernel.org
Cc: davem@davemloft.net, ast@kernel.org, brouer@redhat.com, toke@redhat.com,
    daniel@iogearbox.net, lorenzo.bianconi@redhat.com, dsahern@kernel.org
Subject: [PATCH bpf-next 3/6] cpumap: formalize map value as a named struct
Date: Sun, 31 May 2020 23:46:48 +0200
Message-Id: <02fcf47c1b0dcf37b108994ba6f44266ad89bee6.1590960613.git.lorenzo@kernel.org>

As has already been done for devmap, introduce 'struct bpf_cpumap_val'
to formalize the expected values that can be passed in for a CPUMAP.
Update the cpumap code to use the struct.

Signed-off-by: Lorenzo Bianconi
---
 kernel/bpf/cpumap.c | 30 ++++++++++++++++++------------
 1 file changed, 18 insertions(+), 12 deletions(-)

diff --git a/kernel/bpf/cpumap.c b/kernel/bpf/cpumap.c
index 27595fc6da56..57402276d8af 100644
--- a/kernel/bpf/cpumap.c
+++ b/kernel/bpf/cpumap.c
@@ -48,11 +48,15 @@ struct xdp_bulk_queue {
 	unsigned int count;
 };
 
+/* CPUMAP value */
+struct bpf_cpumap_val {
+	u32 qsize;	/* queue size */
+};
+
 /* Struct for every remote "destination" CPU in map */
 struct bpf_cpu_map_entry {
 	u32 cpu;    /* kthread CPU and map index */
 	int map_id; /* Back reference to map */
-	u32 qsize;  /* Queue size placeholder for map lookup */
 
 	/* XDP can run multiple RX-ring queues, need __percpu enqueue store */
 	struct xdp_bulk_queue __percpu *bulkq;
@@ -66,6 +70,8 @@ struct bpf_cpu_map_entry {
 	atomic_t refcnt; /* Control when this struct can be free'ed */
 	struct rcu_head rcu;
+
+	struct bpf_cpumap_val value;
 };
 
 struct bpf_cpu_map {
@@ -307,8 +313,8 @@ static int cpu_map_kthread_run(void *data)
 	return 0;
 }
 
-static struct bpf_cpu_map_entry *__cpu_map_entry_alloc(u32 qsize, u32 cpu,
-						       int map_id)
+static struct bpf_cpu_map_entry *
+__cpu_map_entry_alloc(struct bpf_cpumap_val *value, u32 cpu, int map_id)
 {
 	gfp_t gfp = GFP_KERNEL | __GFP_NOWARN;
 	struct bpf_cpu_map_entry *rcpu;
@@ -338,13 +344,13 @@ static struct bpf_cpu_map_entry *__cpu_map_entry_alloc(u32 qsize, u32 cpu,
 	if (!rcpu->queue)
 		goto free_bulkq;
 
-	err = ptr_ring_init(rcpu->queue, qsize, gfp);
+	err = ptr_ring_init(rcpu->queue, value->qsize, gfp);
 	if (err)
 		goto free_queue;
 
 	rcpu->cpu    = cpu;
 	rcpu->map_id = map_id;
-	rcpu->qsize  = qsize;
+	rcpu->value.qsize = value->qsize;
 
 	/* Setup kthread */
 	rcpu->kthread = kthread_create_on_node(cpu_map_kthread_run, rcpu, numa,
@@ -437,12 +443,12 @@ static int cpu_map_update_elem(struct bpf_map *map, void *key, void *value,
 			       u64 map_flags)
 {
 	struct bpf_cpu_map *cmap = container_of(map, struct bpf_cpu_map, map);
+	struct bpf_cpumap_val cpumap_value = {};
 	struct bpf_cpu_map_entry *rcpu;
-
 	/* Array index key correspond to CPU number */
 	u32 key_cpu = *(u32 *)key;
-	/* Value is the queue size */
-	u32 qsize = *(u32 *)value;
+
+	memcpy(&cpumap_value, value, map->value_size);
 
 	if (unlikely(map_flags > BPF_EXIST))
 		return -EINVAL;
@@ -450,18 +456,18 @@ static int cpu_map_update_elem(struct bpf_map *map, void *key, void *value,
 		return -E2BIG;
 	if (unlikely(map_flags == BPF_NOEXIST))
 		return -EEXIST;
-	if (unlikely(qsize > 16384)) /* sanity limit on qsize */
+	if (unlikely(cpumap_value.qsize > 16384)) /* sanity limit on qsize */
 		return -EOVERFLOW;
 
 	/* Make sure CPU is a valid possible cpu */
 	if (key_cpu >= nr_cpumask_bits || !cpu_possible(key_cpu))
 		return -ENODEV;
 
-	if (qsize == 0) {
+	if (cpumap_value.qsize == 0) {
 		rcpu = NULL; /* Same as deleting */
 	} else {
 		/* Updating qsize cause re-allocation of bpf_cpu_map_entry */
-		rcpu = __cpu_map_entry_alloc(qsize, key_cpu, map->id);
+		rcpu = __cpu_map_entry_alloc(&cpumap_value, key_cpu, map->id);
 		if (!rcpu)
 			return -ENOMEM;
 		rcpu->cmap = cmap;
@@ -523,7 +529,7 @@ static void *cpu_map_lookup_elem(struct bpf_map *map, void *key)
 	struct bpf_cpu_map_entry *rcpu =
 		__cpu_map_lookup_elem(map, *(u32 *)key);
 
-	return rcpu ? &rcpu->value.qsize : NULL;
+	return rcpu ? &rcpu->value.qsize : NULL;
 }
 
 static int cpu_map_get_next_key(struct bpf_map *map, void *key, void *next_key)
From patchwork Sun May 31 21:46:50 2020
X-Patchwork-Submitter: Lorenzo Bianconi
X-Patchwork-Id: 218082
From: Lorenzo Bianconi
To: bpf@vger.kernel.org, netdev@vger.kernel.org
Cc: davem@davemloft.net, ast@kernel.org, brouer@redhat.com, toke@redhat.com,
    daniel@iogearbox.net, lorenzo.bianconi@redhat.com, dsahern@kernel.org
Subject: [PATCH bpf-next 5/6] bpf: cpumap: implement XDP_REDIRECT for eBPF
 programs attached to map entries
Date: Sun, 31 May 2020 23:46:50 +0200
Message-Id: <605426d4fac4e5ae4e5d98afdafaf7e35625657c.1590960613.git.lorenzo@kernel.org>

Add XDP_REDIRECT support for eBPF programs attached to cpumap entries.

Signed-off-by: Lorenzo Bianconi
---
 include/trace/events/xdp.h | 12 ++++++++----
 kernel/bpf/cpumap.c        | 21 +++++++++++++++++----
 2 files changed, 25 insertions(+), 8 deletions(-)

diff --git a/include/trace/events/xdp.h b/include/trace/events/xdp.h
index 06ec557c6bf5..162ce06c6da0 100644
--- a/include/trace/events/xdp.h
+++ b/include/trace/events/xdp.h
@@ -177,9 +177,11 @@ DEFINE_EVENT(xdp_redirect_template, xdp_redirect_map_err,
 TRACE_EVENT(xdp_cpumap_kthread,
 
 	TP_PROTO(int map_id, unsigned int processed, unsigned int drops,
-		 int sched, unsigned int xdp_pass, unsigned int xdp_drop),
+		 int sched, unsigned int xdp_pass, unsigned int xdp_drop,
+		 unsigned int xdp_redirect),
 
-	TP_ARGS(map_id, processed, drops, sched, xdp_pass, xdp_drop),
+	TP_ARGS(map_id, processed, drops, sched, xdp_pass, xdp_drop,
+		xdp_redirect),
 
 	TP_STRUCT__entry(
 		__field(int, map_id)
@@ -190,6 +192,7 @@ TRACE_EVENT(xdp_cpumap_kthread,
 		__field(int, sched)
 		__field(unsigned int, xdp_pass)
 		__field(unsigned int, xdp_drop)
+		__field(unsigned int, xdp_redirect)
 	),
 
 	TP_fast_assign(
@@ -201,18 +204,19 @@ TRACE_EVENT(xdp_cpumap_kthread,
 		__entry->sched		= sched;
 		__entry->xdp_pass	= xdp_pass;
 		__entry->xdp_drop	= xdp_drop;
+		__entry->xdp_redirect	= xdp_redirect;
 	),
 
 	TP_printk("kthread"
 		  " cpu=%d map_id=%d action=%s"
 		  " processed=%u drops=%u"
 		  " sched=%d"
-		  " xdp_pass=%u xdp_drop=%u",
+		  " xdp_pass=%u xdp_drop=%u xdp_redirect=%u",
 		  __entry->cpu, __entry->map_id,
 		  __print_symbolic(__entry->act, __XDP_ACT_SYM_TAB),
 		  __entry->processed, __entry->drops,
 		  __entry->sched,
-		  __entry->xdp_pass, __entry->xdp_drop)
+		  __entry->xdp_pass, __entry->xdp_drop, __entry->xdp_redirect)
 );
 
 TRACE_EVENT(xdp_cpumap_enqueue,

diff --git a/kernel/bpf/cpumap.c b/kernel/bpf/cpumap.c
index 24ab0a6b9772..a45157627fbc 100644
--- a/kernel/bpf/cpumap.c
+++ b/kernel/bpf/cpumap.c
@@ -248,7 +248,7 @@ static int cpu_map_kthread_run(void *data)
 	 * kthread_stop signal until queue is empty.
 	 */
 	while (!kthread_should_stop() || !__ptr_ring_empty(rcpu->queue)) {
-		unsigned int xdp_pass = 0, xdp_drop = 0;
+		unsigned int xdp_pass = 0, xdp_drop = 0, xdp_redirect = 0;
 		gfp_t gfp = __GFP_ZERO | GFP_ATOMIC;
 		unsigned int drops = 0, sched = 0;
 		void *xdp_frames[CPUMAP_BATCH];
@@ -279,7 +279,7 @@ static int cpu_map_kthread_run(void *data)
 		n = ptr_ring_consume_batched(rcpu->queue, xdp_frames,
 					     CPUMAP_BATCH);
 
-		rcu_read_lock();
+		rcu_read_lock_bh();
 		prog = READ_ONCE(rcpu->prog);
 		for (i = 0; i < n; i++) {
@@ -315,6 +315,16 @@ static int cpu_map_kthread_run(void *data)
 					xdp_pass++;
 				}
 				break;
+			case XDP_REDIRECT:
+				err = xdp_do_redirect(xdpf->dev_rx, &xdp,
+						      prog);
+				if (unlikely(err)) {
+					xdp_return_frame(xdpf);
+					drops++;
+				} else {
+					xdp_redirect++;
+				}
+				break;
 			default:
 				bpf_warn_invalid_xdp_action(act);
 				/* fallthrough */
@@ -325,7 +335,10 @@ static int cpu_map_kthread_run(void *data)
 			}
 		}
 
-		rcu_read_unlock();
+		if (xdp_redirect)
+			xdp_do_flush_map();
+
+		rcu_read_unlock_bh();
 
 		m = kmem_cache_alloc_bulk(skbuff_head_cache, gfp, nframes, skbs);
@@ -354,7 +367,7 @@ static int cpu_map_kthread_run(void *data)
 		}
 		/* Feedback loop via tracepoint */
 		trace_xdp_cpumap_kthread(rcpu->map_id, n, drops, sched,
-					 xdp_pass, xdp_drop);
+					 xdp_pass, xdp_drop, xdp_redirect);
 
 		local_bh_enable(); /* resched point, may call do_softirq() */
 	}