From patchwork Mon Jun  1 17:55:02 2020
X-Patchwork-Submitter: Greg Kroah-Hartman
X-Patchwork-Id: 225032
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Minh Bùi Quang,
 Björn Töpel, Daniel Borkmann, Jonathan Lemon
Subject: [PATCH 5.6 164/177] xsk: Add overflow check for u64 division, stored into u32
Date: Mon, 1 Jun 2020 19:55:02 +0200
Message-Id: <20200601174101.876672341@linuxfoundation.org>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20200601174048.468952319@linuxfoundation.org>
References: <20200601174048.468952319@linuxfoundation.org>
User-Agent: quilt/0.66
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: stable-owner@vger.kernel.org
Precedence: bulk
List-ID: <stable.vger.kernel.org>
X-Mailing-List: stable@vger.kernel.org

From: Björn Töpel

commit b16a87d0aef7a6be766f6618976dc5ff2c689291 upstream.

The npgs member of struct xdp_umem is a u32 entity, and stores the
number of pages the UMEM consumes. The calculation of npgs

  npgs = size / PAGE_SIZE

can overflow.

To avoid overflow scenarios, the division is now first stored in a
u64, and the result is verified to fit into 32 bits.

An alternative would be storing npgs as a u64; however, this wastes
memory, and a UMEM that large would be an unrealistically large
packet area.
Fixes: c0c77d8fb787 ("xsk: add user memory registration support sockopt")
Reported-by: "Minh Bùi Quang"
Signed-off-by: Björn Töpel
Signed-off-by: Daniel Borkmann
Acked-by: Jonathan Lemon
Link: https://lore.kernel.org/bpf/CACtPs=GGvV-_Yj6rbpzTVnopgi5nhMoCcTkSkYrJHGQHJWFZMQ@mail.gmail.com/
Link: https://lore.kernel.org/bpf/20200525080400.13195-1-bjorn.topel@gmail.com
Signed-off-by: Greg Kroah-Hartman
---
 net/xdp/xdp_umem.c |    8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

--- a/net/xdp/xdp_umem.c
+++ b/net/xdp/xdp_umem.c
@@ -341,8 +341,8 @@ static int xdp_umem_reg(struct xdp_umem
 {
 	bool unaligned_chunks = mr->flags & XDP_UMEM_UNALIGNED_CHUNK_FLAG;
 	u32 chunk_size = mr->chunk_size, headroom = mr->headroom;
+	u64 npgs, addr = mr->addr, size = mr->len;
 	unsigned int chunks, chunks_per_page;
-	u64 addr = mr->addr, size = mr->len;
 	int err;
 
 	if (chunk_size < XDP_UMEM_MIN_CHUNK_SIZE || chunk_size > PAGE_SIZE) {
@@ -372,6 +372,10 @@ static int xdp_umem_reg(struct xdp_umem
 	if ((addr + size) < addr)
 		return -EINVAL;
 
+	npgs = div_u64(size, PAGE_SIZE);
+	if (npgs > U32_MAX)
+		return -EINVAL;
+
 	chunks = (unsigned int)div_u64(size, chunk_size);
 	if (chunks == 0)
 		return -EINVAL;
@@ -391,7 +395,7 @@ static int xdp_umem_reg(struct xdp_umem
 	umem->size = size;
 	umem->headroom = headroom;
 	umem->chunk_size_nohr = chunk_size - headroom;
-	umem->npgs = size / PAGE_SIZE;
+	umem->npgs = (u32)npgs;
 	umem->pgs = NULL;
 	umem->user = NULL;
 	umem->flags = mr->flags;
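
[Editor's note, not part of the upstream patch: the sketch below is a
minimal standalone userspace illustration of the truncation the patch
guards against. EXAMPLE_PAGE_SIZE, the 16 TiB size value, and main()
are assumptions chosen purely for the demonstration; in the kernel the
division is done with div_u64() and the failure path returns -EINVAL.]

/*
 * Illustrative sketch: a u64 division whose result is stored directly
 * into a u32 silently truncates once the quotient exceeds U32_MAX.
 * Checking the 64-bit quotient before narrowing, as the patch does,
 * rejects such requests instead of accepting a bogus page count.
 */
#include <stdint.h>
#include <stdio.h>

#define EXAMPLE_PAGE_SIZE 4096ULL	/* assumed 4 KiB pages for the example */

int main(void)
{
	uint64_t size = 1ULL << 44;			/* hypothetical 16 TiB UMEM request */
	uint64_t npgs64 = size / EXAMPLE_PAGE_SIZE;	/* 2^32 pages, does not fit in 32 bits */
	uint32_t npgs32 = (uint32_t)npgs64;		/* old behaviour: truncates to 0 */

	printf("npgs as u64: %llu\n", (unsigned long long)npgs64);
	printf("npgs as u32: %u\n", npgs32);

	/* the pattern the patch introduces: divide in 64 bits, reject overflow */
	if (npgs64 > UINT32_MAX)	/* UINT32_MAX plays the role of the kernel's U32_MAX */
		return 1;		/* the kernel returns -EINVAL here */

	return 0;
}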