From patchwork Sat Aug 15 07:41:02 2020
X-Patchwork-Submitter: "Jason A. Donenfeld"
X-Patchwork-Id: 247775
From: "Jason A. Donenfeld"
To: netdev@vger.kernel.org, bpf@vger.kernel.org
Cc: "Jason A. Donenfeld", Thomas Ptacek, Adhipati Blambangan, David Ahern, Toke Høiland-Jørgensen, Jakub Kicinski, Alexei Starovoitov, Jesper Dangaard Brouer, John Fastabend, Daniel Borkmann, "David S. Miller"
Subject: [PATCH net v6] net: xdp: account for layer 3 packets in generic skb handler
Date: Sat, 15 Aug 2020 09:41:02 +0200
Message-Id: <20200815074102.5357-1-Jason@zx2c4.com>
In-Reply-To: <20200814.135546.2266851283177227377.davem@davemloft.net>
References: <20200814.135546.2266851283177227377.davem@davemloft.net>
X-Mailing-List: netdev@vger.kernel.org

A user reported that packets from wireguard were possibly ignored by XDP [1]. Another user reported that modifying packets from layer 3 interfaces results in impossible-to-diagnose drops.

Apparently, the generic skb xdp handler path assumes that packets will always have an ethernet header, which really isn't always the case for layer 3 packets, which are produced by multiple drivers. This patch fixes the oversight.

If the mac_len is 0 and so is hard_header_len, then we know that the skb is a layer 3 packet, and in that case we prepend a pseudo ethhdr to the packet whose h_proto is copied from skb->protocol, which will have the appropriate v4 or v6 ethertype. This keeps XDP programs' assumption correct that packets always have that ethernet header, so that existing code doesn't break, while still allowing layer 3 devices to use the generic XDP handler.

We push on the ethernet header and then pull it right off, and set mac_len to the ethernet header size, so that the rest of the XDP code does not need any changes. That is, it makes it so that the skb has its ethernet header just before the data pointer, of size ETH_HLEN.
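The push-then-pull dance above is just pointer arithmetic on the skb's linear buffer. A minimal userspace sketch of that geometry — all names here (toy_skb, toy_push, toy_pull, demo_l3_geometry) are invented for illustration and only mimic the arithmetic of skb_push()/__skb_pull(); they are not kernel APIs:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define ETH_HLEN 14

/* Toy model of an skb's linear buffer: some headroom, a data pointer,
 * and a mac_len, enough to illustrate the patch's trick.
 */
struct toy_skb {
	uint8_t buf[256];
	uint8_t *data;		/* start of current headers/payload */
	size_t len;
	size_t mac_len;
};

static uint8_t *toy_push(struct toy_skb *skb, size_t n)
{
	skb->data -= n;		/* grow into headroom, like skb_push() */
	skb->len += n;
	return skb->data;
}

static void toy_pull(struct toy_skb *skb, size_t n)
{
	skb->data += n;		/* drop the header again, like __skb_pull() */
	skb->len -= n;
}

/* Returns 0 if, after push+pull, the geometry matches the l2 case:
 * data unchanged, a fake ethernet header in the ETH_HLEN bytes
 * just before it.
 */
int demo_l3_geometry(void)
{
	struct toy_skb skb;
	uint8_t *ip_start, *eth;

	skb.data = skb.buf + 64;	/* l3 packet: data points at the IP header */
	skb.len = 100;
	skb.mac_len = 0;
	ip_start = skb.data;

	eth = toy_push(&skb, ETH_HLEN);	/* prepend a fake mac header */
	memset(eth, 0, 12);		/* zeroed source and dest MACs */
	eth[12] = 0x08;			/* h_proto = ETH_P_IP, */
	eth[13] = 0x00;			/* network byte order */
	toy_pull(&skb, ETH_HLEN);	/* pull it right back off */
	skb.mac_len = ETH_HLEN;

	if (skb.data != ip_start)
		return -1;		/* data pointer must be unchanged */
	if (skb.data - skb.mac_len != eth)
		return -1;		/* mac header sits just before data */
	return 0;
}
```

Because the data pointer ends up exactly where it started, the rest of netif_receive_generic_xdp() can compute xdp->data = skb->data - mac_len identically for l2 and l3 skbs.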
Previous discussions have included the point that maybe XDP should just be intentionally broken on layer 3 interfaces, by design, and that layer 3 people should be using cls_bpf. However, I think there are good grounds to reconsider this perspective:

- Complicated deployments wind up applying XDP modifications to a variety of different devices on a given host, some of which are using specialized ethernet cards and other ones using virtual layer 3 interfaces, such as WireGuard. Being able to apply one codebase to each of these winds up being essential.

- cls_bpf does not support the same feature set as XDP, and operates at a slightly different stage in the networking stack. You may reply, "then add all the features you want to cls_bpf", but that seems to be missing the point, and would still result in there being two ways to do everything, which is not desirable for anyone actually _using_ this code.

- While XDP was originally made for hardware offloading, and while many look disdainfully upon the generic mode, it nevertheless remains a highly useful and popular way of adding bespoke packet transformations, and from that perspective, a difference between layer 2 and layer 3 packets is immaterial if the user is primarily concerned with transformations to layer 3 and beyond.

- It's not impossible to imagine layer 3 hardware (e.g. a WireGuard PCIe card) including eBPF/XDP functionality built-in. In that case, why limit XDP as a technology to only layer 2? Then, having generic XDP work for layer 3 would naturally fit as well.

[1] https://lore.kernel.org/wireguard/M5WzVK5--3-2@tuta.io/

Reported-by: Thomas Ptacek
Reported-by: Adhipati Blambangan
Cc: David Ahern
Cc: Toke Høiland-Jørgensen
Cc: Jakub Kicinski
Cc: Alexei Starovoitov
Cc: Jesper Dangaard Brouer
Cc: John Fastabend
Cc: Daniel Borkmann
Cc: David S. Miller
Signed-off-by: Jason A. Donenfeld
---
I had originally dropped this patch, but the issue kept coming up in user reports, so here's a v4 of it.
Testing of it is still rather slim, but hopefully that will change in the coming days.

Changes v5->v6:
- The fix to the skb->protocol changing case is now in a separate stand-alone patch, and removed from this one, so that it can be evaluated separately.

Changes v4->v5:
- Rather than tracking in a messy manner whether the skb is l3, we just do the check once, and then adjust the skb geometry to be identical to the l2 case. This simplifies the code quite a bit.
- Fix a preexisting bug where the l2 header remained attached if skb->protocol was updated.

Changes v3->v4:
- We now preserve the same logic for XDP_TX/XDP_REDIRECT as before.
- hard_header_len is checked in addition to mac_len.

 net/core/dev.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

-- 
2.28.0

diff --git a/net/core/dev.c b/net/core/dev.c
index 151f1651439f..79c15f4244e6 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -4630,6 +4630,18 @@ static u32 netif_receive_generic_xdp(struct sk_buff *skb,
 	 * header.
 	 */
 	mac_len = skb->data - skb_mac_header(skb);
+	if (!mac_len && !skb->dev->hard_header_len) {
+		/* For l3 packets, we push on a fake mac header, and then
+		 * pull it off again, so that it has the same skb geometry
+		 * as for the l2 case.
+		 */
+		eth = skb_push(skb, ETH_HLEN);
+		eth_zero_addr(eth->h_source);
+		eth_zero_addr(eth->h_dest);
+		eth->h_proto = skb->protocol;
+		__skb_pull(skb, ETH_HLEN);
+		mac_len = ETH_HLEN;
+	}
 	hlen = skb_headlen(skb) + mac_len;
 	xdp->data = skb->data - mac_len;
 	xdp->data_meta = xdp->data;
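One subtlety in the hunk is that the single assignment eth->h_proto = skb->protocol produces a valid header only because skb->protocol already holds the ethertype in network byte order (it is a __be16 in the kernel). A small userspace sketch of the three header-filling assignments — ethhdr_sketch and build_fake_eth are hypothetical stand-ins for the kernel's struct ethhdr and the hunk's eth_zero_addr()/h_proto lines:

```c
#include <string.h>
#include <stdint.h>
#include <arpa/inet.h>	/* htons() */

#define ETH_ALEN   6
#define ETH_HLEN   14
#define ETH_P_IP   0x0800
#define ETH_P_IPV6 0x86DD

/* Mirrors the wire layout of struct ethhdr. */
struct ethhdr_sketch {
	uint8_t  h_dest[ETH_ALEN];
	uint8_t  h_source[ETH_ALEN];
	uint16_t h_proto;	/* network byte order, like skb->protocol */
} __attribute__((packed));

/* Build the fake header exactly as the hunk does: zeroed MACs, and
 * h_proto copied verbatim from the skb's protocol field, which is
 * already big endian, so no byte swapping is needed.
 */
static void build_fake_eth(struct ethhdr_sketch *eth, uint16_t skb_protocol)
{
	memset(eth->h_source, 0, ETH_ALEN);
	memset(eth->h_dest, 0, ETH_ALEN);
	eth->h_proto = skb_protocol;
}
```

For an IPv4 skb, skb->protocol is htons(ETH_P_IP), so the two bytes at offset 12 of the fake header come out as 0x08 0x00 on the wire regardless of host endianness; an XDP program parsing the header sees exactly what it would see on an l2 device.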