From patchwork Thu Oct 29 11:03:34 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Ard Biesheuvel <ardb@kernel.org>
X-Patchwork-Id: 317450
From: Ard Biesheuvel <ardb@kernel.org>
To: linux-arm-kernel@lists.infradead.org
Cc: linus.walleij@linaro.org, linux@armlinux.org.uk,
 Ard Biesheuvel <ardb@kernel.org>, stable@vger.kernel.org
Subject: [PATCH] ARM: highmem: avoid clobbering non-page aligned memory
 reservations
Date: Thu, 29 Oct 2020 12:03:34 +0100
Message-Id: <20201029110334.4118-1-ardb@kernel.org>
X-Mailer: git-send-email 2.17.1
Precedence: bulk
List-ID: <stable.vger.kernel.org>
X-Mailing-List: stable@vger.kernel.org

free_highpages() iterates over the free memblock regions in high
memory, and marks each page as available for the memory management
system. However, as it rounds the beginning of each region downwards,
we may end up freeing a page that is memblock_reserve()d, resulting
in memory corruption. So align the beginning of the range to the next
page instead.
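
To make the rounding hazard concrete, here is a standalone sketch in
plain userspace C (not kernel code: the 4 KiB page macros below mirror
the kernel's definitions, and the example address is made up). It shows
how truncating a non-page-aligned range start yields a PFN whose page
still holds the tail of the preceding reservation, while aligning up
skips that page:

#include <stdio.h>
#include <stdint.h>

/* Assumed 4 KiB pages, mirroring the kernel macros. */
#define PAGE_SHIFT	12
#define PAGE_SIZE	(1UL << PAGE_SHIFT)
#define PAGE_MASK	(~(PAGE_SIZE - 1))
#define PAGE_ALIGN(addr)	(((addr) + PAGE_SIZE - 1) & PAGE_MASK)
#define PHYS_PFN(x)	((unsigned long)((x) >> PAGE_SHIFT))

int main(void)
{
	/* Hypothetical reservation ends mid-page at 0x60000a00, so the
	 * free range reported by memblock starts at that address. */
	uint64_t range_start = 0x60000a00;

	/* Truncation points the first "free" PFN at the page that
	 * still holds the reserved bytes up to 0x600009ff. */
	printf("rounded down: first freed PFN = 0x%lx\n",
	       PHYS_PFN(range_start));             /* 0x60000: clobbers */

	/* Rounding up starts freeing at the first fully free page. */
	printf("aligned up:   first freed PFN = 0x%lx\n",
	       PHYS_PFN(PAGE_ALIGN(range_start))); /* 0x60001: safe */
	return 0;
}

The end of a free range needs no such treatment: PHYS_PFN() already
truncates range_end downwards, so a page that is only partially free
is left alone, which is the safe direction.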
Cc: <stable@vger.kernel.org>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Mike Rapoport
---
 arch/arm/mm/init.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
index a391804c7ce3..d41781cb5496 100644
--- a/arch/arm/mm/init.c
+++ b/arch/arm/mm/init.c
@@ -354,7 +354,7 @@ static void __init free_highpages(void)
 	for_each_free_mem_range(i, NUMA_NO_NODE, MEMBLOCK_NONE,
 				&range_start, &range_end, NULL) {
-		unsigned long start = PHYS_PFN(range_start);
+		unsigned long start = PHYS_PFN(PAGE_ALIGN(range_start));
 		unsigned long end = PHYS_PFN(range_end);
 
 		/* Ignore complete lowmem entries */
 		if (end <= max_low)