From patchwork Wed Mar 31 12:28:33 2021
X-Patchwork-Submitter: Alexander Lobakin
X-Patchwork-Id: 413390
Date: Wed, 31 Mar 2021 12:28:33 +0000
To: Alexei Starovoitov , Daniel Borkmann
From: Alexander Lobakin
Cc: Xuan Zhuo , Björn Töpel , Magnus Karlsson , Jonathan Lemon ,
"David S. Miller" , Jakub Kicinski , Jesper Dangaard Brouer , John Fastabend , Andrii Nakryiko , Martin KaFai Lau , Song Liu , Yonghong Song , KP Singh , Alexander Lobakin , netdev@vger.kernel.org, bpf@vger.kernel.org, linux-kernel@vger.kernel.org
Reply-To: Alexander Lobakin
Subject: [PATCH v2 bpf-next 1/2] xsk: speed-up generic full-copy xmit
Message-ID: <20210331122820.6356-1-alobakin@pm.me>
In-Reply-To: <20210331122602.6000-1-alobakin@pm.me>
References: <20210331122602.6000-1-alobakin@pm.me>
X-Mailing-List: netdev@vger.kernel.org

A few things are known for sure at the time of copying:

 - the allocated skb is fully linear;
 - its linear space is long enough to hold the full buffer data.

So the out-of-line skb_put(), skb_store_bits() and the return-code
check can be replaced with a plain memcpy(__skb_put()) with no loss
of functionality.
Also align memcpy()'s len to sizeof(long) to improve its performance.

Signed-off-by: Alexander Lobakin
---
 net/xdp/xsk.c | 7 +------
 1 file changed, 1 insertion(+), 6 deletions(-)

-- 
2.31.1

diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
index a71ed664da0a..41f8f21b3348 100644
--- a/net/xdp/xsk.c
+++ b/net/xdp/xsk.c
@@ -517,14 +517,9 @@ static struct sk_buff *xsk_build_skb(struct xdp_sock *xs,
 			return ERR_PTR(err);
 
 		skb_reserve(skb, hr);
-		skb_put(skb, len);
 
 		buffer = xsk_buff_raw_get_data(xs->pool, desc->addr);
-		err = skb_store_bits(skb, 0, buffer, len);
-		if (unlikely(err)) {
-			kfree_skb(skb);
-			return ERR_PTR(err);
-		}
+		memcpy(__skb_put(skb, len), buffer, ALIGN(len, sizeof(long)));
 	}
 
 	skb->dev = dev;