From patchwork Fri Jan 8 10:59:44 2016
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 59341
Subject: Re: [PATCH] arm64: Honour !PTE_WRITE in set_pte_at() for kernel mappings
From: Ard Biesheuvel
To: Catalin Marinas, Andrey Ryabinin
Date: Fri, 8 Jan 2016 11:59:44 +0100
In-Reply-To: <1452182840-5120-1-git-send-email-catalin.marinas@arm.com>
References: <1452182840-5120-1-git-send-email-catalin.marinas@arm.com>
Cc: Will Deacon, "linux-arm-kernel@lists.infradead.org"

(+ Andrey)

On 7 January 2016 at 17:07, Catalin Marinas wrote:
> Currently, set_pte_at() only checks the software PTE_WRITE bit for user
> mappings when it sets or clears the hardware PTE_RDONLY accordingly. The
> kernel ptes are written directly without any modification, relying
> solely on the protection bits in macros like PAGE_KERNEL. However,
> modifying kernel pte attributes via pte_wrprotect() would be ignored by
> set_pte_at(). Since pte_wrprotect() does not set PTE_RDONLY (it only
> clears PTE_WRITE), the new permission is not taken into account.
>
> This patch changes set_pte_at() to adjust the read-only permission for
> kernel ptes as well. As a side effect, existing PROT_* definitions used
> for kernel ioremap*() need to include PTE_DIRTY | PTE_WRITE.
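To make that concrete, here is a small stand-alone sketch (illustrative bit
positions and helper names only, not the real arm64 definitions): clearing the
software PTE_WRITE bit, as pte_wrprotect() does, leaves the hardware
PTE_RDONLY bit untouched, so the write protection only takes effect once
set_pte_at() derives PTE_RDONLY from PTE_WRITE, which the patch now does for
kernel ptes as well.

#include <stdint.h>
#include <stdio.h>

/* Illustrative bit positions only; not the real arm64 pte layout. */
#define PTE_VALID	(1ULL << 0)
#define PTE_RDONLY	(1ULL << 7)	/* hardware read-only bit       */
#define PTE_DIRTY	(1ULL << 55)	/* software dirty bit           */
#define PTE_WRITE	(1ULL << 57)	/* software write bit           */

/* What pte_wrprotect() conceptually does: clear the software write bit. */
static uint64_t wrprotect(uint64_t pte)
{
	return pte & ~PTE_WRITE;
}

/* The fixup set_pte_at() now applies to all valid ptes, not just user ones. */
static uint64_t fixup_rdonly(uint64_t pte)
{
	if (!(pte & PTE_VALID))
		return pte;
	if ((pte & PTE_DIRTY) && (pte & PTE_WRITE))
		pte &= ~PTE_RDONLY;	/* writable and dirty: clear read-only */
	else
		pte |= PTE_RDONLY;	/* clean or write-protected: set read-only */
	return pte;
}

int main(void)
{
	uint64_t kernel_rw = PTE_VALID | PTE_DIRTY | PTE_WRITE;
	uint64_t wrprotected = wrprotect(kernel_rw);

	/* Clearing PTE_WRITE alone leaves the hardware bit unchanged... */
	printf("after pte_wrprotect()-style clear: RDONLY=%d\n",
	       !!(wrprotected & PTE_RDONLY));
	/* ...only the set_pte_at()-style fixup actually makes the pte read-only. */
	printf("after set_pte_at()-style fixup:    RDONLY=%d\n",
	       !!(fixup_rdonly(wrprotected) & PTE_RDONLY));
	return 0;
}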
>
> (additionally, white space fix for PTE_KERNEL_ROX)
>
> Signed-off-by: Catalin Marinas
> Reported-by: Ard Biesheuvel
> Cc: Will Deacon
> ---
>  arch/arm64/include/asm/pgtable.h | 21 ++++++++++-----------
>  1 file changed, 10 insertions(+), 11 deletions(-)
>
> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> index 63f52b55defe..8bdf47cd1bc3 100644
> --- a/arch/arm64/include/asm/pgtable.h
> +++ b/arch/arm64/include/asm/pgtable.h
> @@ -67,11 +67,11 @@ extern void __pgd_error(const char *file, int line, unsigned long val);
>  #define PROT_DEFAULT		(PTE_TYPE_PAGE | PTE_AF | PTE_SHARED)
>  #define PROT_SECT_DEFAULT	(PMD_TYPE_SECT | PMD_SECT_AF | PMD_SECT_S)
>
> -#define PROT_DEVICE_nGnRnE	(PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_ATTRINDX(MT_DEVICE_nGnRnE))
> -#define PROT_DEVICE_nGnRE	(PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_ATTRINDX(MT_DEVICE_nGnRE))
> -#define PROT_NORMAL_NC		(PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_ATTRINDX(MT_NORMAL_NC))
> -#define PROT_NORMAL_WT		(PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_ATTRINDX(MT_NORMAL_WT))
> -#define PROT_NORMAL		(PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_ATTRINDX(MT_NORMAL))
> +#define PROT_DEVICE_nGnRnE	(PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_DIRTY | PTE_WRITE | PTE_ATTRINDX(MT_DEVICE_nGnRnE))
> +#define PROT_DEVICE_nGnRE	(PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_DIRTY | PTE_WRITE | PTE_ATTRINDX(MT_DEVICE_nGnRE))
> +#define PROT_NORMAL_NC		(PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_DIRTY | PTE_WRITE | PTE_ATTRINDX(MT_NORMAL_NC))
> +#define PROT_NORMAL_WT		(PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_DIRTY | PTE_WRITE | PTE_ATTRINDX(MT_NORMAL_WT))
> +#define PROT_NORMAL		(PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_DIRTY | PTE_WRITE | PTE_ATTRINDX(MT_NORMAL))
>
>  #define PROT_SECT_DEVICE_nGnRE	(PROT_SECT_DEFAULT | PMD_SECT_PXN | PMD_SECT_UXN | PMD_ATTRINDX(MT_DEVICE_nGnRE))
>  #define PROT_SECT_NORMAL	(PROT_SECT_DEFAULT | PMD_SECT_PXN | PMD_SECT_UXN | PMD_ATTRINDX(MT_NORMAL))
> @@ -81,7 +81,7 @@ extern void __pgd_error(const char *file, int line, unsigned long val);
>
>  #define PAGE_KERNEL		__pgprot(_PAGE_DEFAULT | PTE_PXN | PTE_UXN | PTE_DIRTY | PTE_WRITE)
>  #define PAGE_KERNEL_RO		__pgprot(_PAGE_DEFAULT | PTE_PXN | PTE_UXN | PTE_DIRTY | PTE_RDONLY)
> -#define PAGE_KERNEL_ROX	__pgprot(_PAGE_DEFAULT | PTE_UXN | PTE_DIRTY | PTE_RDONLY)
> +#define PAGE_KERNEL_ROX		__pgprot(_PAGE_DEFAULT | PTE_UXN | PTE_DIRTY | PTE_RDONLY)
>  #define PAGE_KERNEL_EXEC	__pgprot(_PAGE_DEFAULT | PTE_UXN | PTE_DIRTY | PTE_WRITE)
>  #define PAGE_KERNEL_EXEC_CONT	__pgprot(_PAGE_DEFAULT | PTE_UXN | PTE_DIRTY | PTE_WRITE | PTE_CONT)
>
> @@ -153,6 +153,7 @@ extern struct page *empty_zero_page;
>  #define pte_write(pte)		(!!(pte_val(pte) & PTE_WRITE))
>  #define pte_exec(pte)		(!(pte_val(pte) & PTE_UXN))
>  #define pte_cont(pte)		(!!(pte_val(pte) & PTE_CONT))
> +#define pte_user(pte)		(!!(pte_val(pte) & PTE_USER))
>
>  #ifdef CONFIG_ARM64_HW_AFDBM
>  #define pte_hw_dirty(pte)	(pte_write(pte) && !(pte_val(pte) & PTE_RDONLY))
> @@ -163,8 +164,6 @@ extern struct page *empty_zero_page;
>  #define pte_dirty(pte)		(pte_sw_dirty(pte) || pte_hw_dirty(pte))
>
>  #define pte_valid(pte)		(!!(pte_val(pte) & PTE_VALID))
> -#define pte_valid_user(pte) \
> -	((pte_val(pte) & (PTE_VALID | PTE_USER)) == (PTE_VALID | PTE_USER))
>  #define pte_valid_not_user(pte) \
>  	((pte_val(pte) & (PTE_VALID | PTE_USER)) == PTE_VALID)
>
> @@ -262,13 +261,13 @@ extern void __sync_icache_dcache(pte_t pteval, unsigned long addr);
>  static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
>  			      pte_t *ptep, pte_t pte)
>  {
> -	if (pte_valid_user(pte)) {
> -		if (!pte_special(pte) && pte_exec(pte))
> -			__sync_icache_dcache(pte, addr);
> +	if (pte_valid(pte)) {
>  		if (pte_sw_dirty(pte) && pte_write(pte))
>  			pte_val(pte) &= ~PTE_RDONLY;
>  		else
>  			pte_val(pte) |= PTE_RDONLY;
> +		if (pte_user(pte) && pte_exec(pte) && !pte_special(pte))
> +			__sync_icache_dcache(pte, addr);
>  	}
>
>  	/*

This works, as far as I can tell. However, I still need the patch below
to make sure that the KAsan zero page is mapped read-only. (The reason is
that, depending on the alignment of the regions,
kasan_populate_zero_shadow() may never call zero_[pud|pmd|pte]_populate().)

Before this patch (and my change), the KAsan shadow regions look like this:

0xffffff8000000000-0xffffff8200800000    8200M RW NX SHD AF        UXN MEM/NORMAL
0xffffff8200800000-0xffffff8200c00000       4M RW NX SHD AF    BLK UXN MEM/NORMAL
0xffffff8200c00000-0xffffff8800000000   24564M RW NX SHD AF        UXN MEM/NORMAL
0xffffff8800000000-0xffffff8820200000     514M RW NX SHD AF    BLK UXN MEM/NORMAL

and after:

0xffffff8000000000-0xffffff8200800000    8200M ro NX SHD AF        UXN MEM/NORMAL
0xffffff8200800000-0xffffff8200c00000       4M RW NX SHD AF    BLK UXN MEM/NORMAL
0xffffff8200c00000-0xffffff8800000000   24564M ro NX SHD AF        UXN MEM/NORMAL
0xffffff8800000000-0xffffff8820200000     514M RW NX SHD AF    BLK UXN MEM/NORMAL

---------8<--------------

Acked-by: Catalin Marinas

diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index 72fe2978b38a..c3c14204d196 100644
--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -140,6 +140,7 @@ void __init kasan_init(void)
 {
 	u64 kimg_shadow_start, kimg_shadow_end;
 	struct memblock_region *reg;
+	int i;
 
 	kimg_shadow_start = round_down((u64)kasan_mem_to_shadow(_text),
 				       SWAPPER_BLOCK_SIZE);
@@ -185,6 +186,14 @@ void __init kasan_init(void)
 					 pfn_to_nid(virt_to_pfn(start)));
 	}
 
+	/*
+	 * KAsan may reuse the current contents of kasan_zero_pte directly, so we
+	 * should make sure that it maps the zero page read-only.
+	 */
+	for (i = 0; i < PTRS_PER_PTE; i++)
+		set_pte(&kasan_zero_pte[i],
+			pfn_pte(virt_to_pfn(kasan_zero_page), PAGE_KERNEL_RO));
+
 	memset(kasan_zero_page, 0, PAGE_SIZE);
 	cpu_replace_ttbr1(swapper_pg_dir);
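To illustrate the alignment point with a deliberately simplified, stand-alone
model (made-up names and granularities, not the real mm/kasan code): when the
shadow range happens to be block-aligned, a populate pass that only descends
to pte granularity at unaligned edges never touches the pte table, so the
early, writable kasan_zero_pte entries are reused as-is. That is why the
explicit read-only rewrite in the hunk above is still needed.

#include <stdbool.h>
#include <stdio.h>

#define BLOCK_SIZE	0x200000UL	/* illustrative block granularity */
#define NENTRIES	8		/* illustrative pte-table size    */

/* Toy pte table; pretend it was populated early in boot as writable. */
static bool pte_readonly[NENTRIES];	/* all false, i.e. writable */

/* Toy populate pass: only descends to pte level for unaligned edges. */
static void populate_zero_shadow(unsigned long start, unsigned long end)
{
	if ((start % BLOCK_SIZE) == 0 && (end % BLOCK_SIZE) == 0)
		return;		/* whole range covered at block level */

	for (int i = 0; i < NENTRIES; i++)
		pte_readonly[i] = true;	/* would map the zero page read-only */
}

int main(void)
{
	/* Block-aligned range: the pte table is never touched. */
	populate_zero_shadow(0x0, 4 * BLOCK_SIZE);
	printf("after populate pass:    entry 0 read-only? %d\n", pte_readonly[0]);

	/* The explicit loop from the kasan_init() patch is what fixes this. */
	for (int i = 0; i < NENTRIES; i++)
		pte_readonly[i] = true;
	printf("after explicit rewrite: entry 0 read-only? %d\n", pte_readonly[0]);
	return 0;
}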