From patchwork Thu Sep 16 15:56:28 2021
X-Patchwork-Submitter: Greg Kroah-Hartman
X-Patchwork-Id: 513019
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Mark Rutland,
    Anshuman Khandual, Ard Biesheuvel, Steve Capper, Will Deacon,
    Catalin Marinas
Subject: [PATCH 5.14 039/432] arm64: head: avoid over-mapping in map_memory
Date: Thu, 16 Sep 2021 17:56:28 +0200
Message-Id: <20210916155812.144477991@linuxfoundation.org>
In-Reply-To: <20210916155810.813340753@linuxfoundation.org>
References: <20210916155810.813340753@linuxfoundation.org>
X-Mailer: git-send-email 2.33.0
User-Agent: quilt/0.66
X-Mailing-List: stable@vger.kernel.org

From: Mark Rutland

commit 90268574a3e8a6b883bd802d702a2738577e1006 upstream.

The `compute_indices` and `populate_entries` macros operate on
inclusive bounds, and thus the `map_memory` macro which uses them also
operates on inclusive bounds.

We pass `_end` and `_idmap_text_end` to `map_memory`, but these are
exclusive bounds, and if one of these is sufficiently aligned (as a
result of kernel configuration, physical placement, and KASLR), then:

* In `compute_indices`, the computed `iend` will be in the page/block
  *after* the final byte of the intended mapping.

* In `populate_entries`, an unnecessary entry will be created at the
  end of each level of table. At the leaf level, this entry will map
  up to SWAPPER_BLOCK_SIZE bytes of physical addresses that we did not
  intend to map.

As we may map up to SWAPPER_BLOCK_SIZE bytes more than intended, we may
violate the boot protocol and map physical addresses past the
2MiB-aligned end address we are permitted to map. As we map these with
Normal memory attributes, this may result in further problems depending
on what these physical addresses correspond to.

The final entry at each level may require an additional table at that
level. As EARLY_ENTRIES() calculates an inclusive bound, we allocate
enough memory for this.

Avoid the extraneous mapping by having map_memory convert the exclusive
end address to an inclusive end address by subtracting one, and do
likewise in EARLY_ENTRIES() when calculating the number of required
tables.
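To make the off-by-one concrete, here is a minimal userspace C sketch
comparing the old and fixed EARLY_ENTRIES() arithmetic. This is not
code from the patch: the 2MiB block shift, the example addresses, the
entries_old()/entries_new() helpers, and EARLY_KASLR = 0 are all
assumptions chosen for illustration.

/* Sketch only: models the EARLY_ENTRIES() change in userspace. */
#include <stdio.h>
#include <stdint.h>

#define SHIFT        21          /* assumed 2MiB blocks (4K page config) */
#define EARLY_KASLR  0           /* assumed: KASLR disabled */

/* Old macro: treats the exclusive bound vend as if it were inclusive. */
static uint64_t entries_old(uint64_t vstart, uint64_t vend)
{
        return (vend >> SHIFT) - (vstart >> SHIFT) + 1 + EARLY_KASLR;
}

/* Fixed macro: converts vend to an inclusive bound by subtracting one. */
static uint64_t entries_new(uint64_t vstart, uint64_t vend)
{
        return ((vend - 1) >> SHIFT) - (vstart >> SHIFT) + 1 + EARLY_KASLR;
}

int main(void)
{
        uint64_t vstart = 0x40000000;                 /* invented start address */
        uint64_t vend   = vstart + 4 * (1ULL << SHIFT); /* exclusive, block-aligned */

        /* Old: 5 entries (one block past the end); new: the intended 4. */
        printf("old: %llu entries, new: %llu entries\n",
               (unsigned long long)entries_old(vstart, vend),
               (unsigned long long)entries_new(vstart, vend));
        return 0;
}

With a block-aligned exclusive end covering exactly 4 blocks, the old
computation yields 5 entries and the fixed one yields 4, matching the
extra SWAPPER_BLOCK_SIZE mapping described above.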
For clarity, comments are updated to more clearly document which
boundaries the macros operate on. For consistency with the other
macros, the comments in map_memory are also updated to describe
`vstart` and `vend` as virtual addresses.

Fixes: 0370b31e4845 ("arm64: Extend early page table code to allow for larger kernels")
Cc: <stable@vger.kernel.org> # 4.16.x
Signed-off-by: Mark Rutland
Cc: Anshuman Khandual
Cc: Ard Biesheuvel
Cc: Steve Capper
Cc: Will Deacon
Acked-by: Will Deacon
Link: https://lore.kernel.org/r/20210823101253.55567-1-mark.rutland@arm.com
Signed-off-by: Catalin Marinas
Signed-off-by: Greg Kroah-Hartman
---
 arch/arm64/include/asm/kernel-pgtable.h |    4 ++--
 arch/arm64/kernel/head.S                |   11 ++++++-----
 2 files changed, 8 insertions(+), 7 deletions(-)

--- a/arch/arm64/include/asm/kernel-pgtable.h
+++ b/arch/arm64/include/asm/kernel-pgtable.h
@@ -65,8 +65,8 @@
 #define EARLY_KASLR	(0)
 #endif

-#define EARLY_ENTRIES(vstart, vend, shift) (((vend) >> (shift)) \
-					- ((vstart) >> (shift)) + 1 + EARLY_KASLR)
+#define EARLY_ENTRIES(vstart, vend, shift) \
+	((((vend) - 1) >> (shift)) - ((vstart) >> (shift)) + 1 + EARLY_KASLR)

 #define EARLY_PGDS(vstart, vend) (EARLY_ENTRIES(vstart, vend, PGDIR_SHIFT))

--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -177,7 +177,7 @@ SYM_CODE_END(preserve_boot_args)
  * to be composed of multiple pages. (This effectively scales the end index).
  *
  * vstart:	virtual address of start of range
- * vend:	virtual address of end of range
+ * vend:	virtual address of end of range - we map [vstart, vend]
  * shift:	shift used to transform virtual address into index
  * ptrs:	number of entries in page table
  * istart:	index in table corresponding to vstart
@@ -214,17 +214,18 @@ SYM_CODE_END(preserve_boot_args)
  *
  * tbl:	location of page table
  * rtbl:	address to be used for first level page table entry (typically tbl + PAGE_SIZE)
- * vstart:	start address to map
- * vend:	end address to map - we map [vstart, vend]
+ * vstart:	virtual address of start of range
+ * vend:	virtual address of end of range - we map [vstart, vend - 1]
  * flags:	flags to use to map last level entries
  * phys:	physical address corresponding to vstart - physical memory is contiguous
  * pgds:	the number of pgd entries
  *
  * Temporaries:	istart, iend, tmp, count, sv - these need to be different registers
- * Preserves:	vstart, vend, flags
- * Corrupts:	tbl, rtbl, istart, iend, tmp, count, sv
+ * Preserves:	vstart, flags
+ * Corrupts:	tbl, rtbl, vend, istart, iend, tmp, count, sv
  */
 	.macro map_memory, tbl, rtbl, vstart, vend, flags, phys, pgds, istart, iend, tmp, count, sv
+	sub \vend, \vend, #1
 	add \rtbl, \tbl, #PAGE_SIZE
 	mov \sv, \rtbl
 	mov \count, #0
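Note that the over-map only occurs when the exclusive end is exactly
SWAPPER_BLOCK_SIZE aligned; otherwise vend and vend - 1 fall in the
same block and the old and fixed index computations agree, which is
why the bug depends on configuration, placement, and KASLR. A quick
sketch of the leaf-level `iend` computation shows the condition; the
addresses and the 2MiB block shift are again invented for illustration:

/* Sketch only: when does the old exclusive-bound iend over-count? */
#include <stdio.h>
#include <stdint.h>

#define BLOCK_SHIFT 21                    /* assumed SWAPPER_BLOCK_SHIFT */

int main(void)
{
        /* Two exclusive end addresses: one block-aligned, one not. */
        uint64_t ends[] = { 0x40600000, 0x40612345 };

        for (int i = 0; i < 2; i++) {
                uint64_t vend = ends[i];
                uint64_t iend_old = vend >> BLOCK_SHIFT;       /* old: exclusive end */
                uint64_t iend_new = (vend - 1) >> BLOCK_SHIFT; /* fixed: inclusive end */

                printf("vend=%#llx: old iend=%llu, new iend=%llu%s\n",
                       (unsigned long long)vend,
                       (unsigned long long)iend_old,
                       (unsigned long long)iend_new,
                       iend_old != iend_new ? "  <- one extra block mapped" : "");
        }
        return 0;
}

Only the aligned end address produces differing indices, i.e. one
extra leaf entry under the old code; the `sub \vend, \vend, #1` in
map_memory makes the two agree in all cases.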