From patchwork Wed Jan 13 16:10:31 2016
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 59671
From: Ard Biesheuvel
To: Laura Abbott
Cc: Catalin Marinas, Will Deacon, Mark Rutland, Kees Cook,
 linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Subject: Re: [PATCH] arm64: Allow vmalloc regions to be set with set_memory_*
Date: Wed, 13 Jan 2016 17:10:31 +0100
References: <1452635187-8057-1-git-send-email-labbott@fedoraproject.org>
X-Mailing-List: linux-kernel@vger.kernel.org

On 13 January 2016 at 15:03, Ard Biesheuvel wrote:
> On 12 January 2016 at 22:46, Laura Abbott wrote:
>>
>> The range of set_memory_* is currently restricted to the module address range
>> because of difficulties in breaking down larger block sizes. vmalloc
>> maps PAGE_SIZE pages, so it is safe to use as well. Update the
>> function ranges and add a comment explaining why the range is
>> restricted the way it is.
>>
>> Signed-off-by: Laura Abbott
>> ---
>> This should let the protections for eBPF work as expected; I don't
>> know if there is some sort of self test for that.
>
>
> This is going to conflict with my KASLR implementation, since it puts
> the kernel image right in the middle of the vmalloc area, and the
> kernel is obviously mapped with block mappings. In fact, I am
> proposing enabling huge-vmap for arm64 as well, since it seems an
> improvement generally, but it also specifically allows me to unmap
> the __init section using the generic vunmap code (remove_vm_area).
> But in general, I think the assumption that the whole vmalloc area
> is mapped using pages is not tenable.
>
> AFAICT, vmalloc still uses pages exclusively even with huge-vmap (but
> ioremap does not). So perhaps it would make sense to check for the
> VM_ALLOC bit in the VMA flags (which I will not set for the kernel
> regions either).
>

Something along these lines, perhaps?

>
>> ---
>>  arch/arm64/mm/pageattr.c | 25 +++++++++++++++++++++----
>>  1 file changed, 21 insertions(+), 4 deletions(-)
>>
>> diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
>> index 3571c73..274208e 100644
>> --- a/arch/arm64/mm/pageattr.c
>> +++ b/arch/arm64/mm/pageattr.c
>> @@ -36,6 +36,26 @@ static int change_page_range(pte_t *ptep, pgtable_t token, unsigned long addr,
>>          return 0;
>>  }
>>
>> +static bool validate_addr(unsigned long start, unsigned long end)
>> +{
>> +        /*
>> +         * This check explicitly excludes most kernel memory. Most kernel
>> +         * memory is mapped with a larger page size, and breaking down the
>> +         * larger page size without causing TLB conflicts is very difficult.
>> +         *
>> +         * If you need to call set_memory_* on a range, the recommendation is
>> +         * to use vmalloc, since that range is mapped with pages.
>> +         */
>> +        if (start >= MODULES_VADDR && start < MODULES_END &&
>> +            end >= MODULES_VADDR && end < MODULES_END)
>> +                return true;
>> +
>> +        if (is_vmalloc_addr((void *)start) && is_vmalloc_addr((void *)end))
>> +                return true;
>> +
>> +        return false;
>> +}
>> +
>>  static int change_memory_common(unsigned long addr, int numpages,
>>                                  pgprot_t set_mask, pgprot_t clear_mask)
>>  {
>> @@ -51,10 +71,7 @@ static int change_memory_common(unsigned long addr, int numpages,
>>                  WARN_ON_ONCE(1);
>>          }
>>
>> -        if (start < MODULES_VADDR || start >= MODULES_END)
>> -                return -EINVAL;
>> -
>> -        if (end < MODULES_VADDR || end >= MODULES_END)
>> +        if (!validate_addr(start, end))
>>                  return -EINVAL;
>>
>>          data.set_mask = set_mask;
>> --
>> 2.5.0
>>
>>
>> _______________________________________________
>> linux-arm-kernel mailing list
>> linux-arm-kernel@lists.infradead.org
>> http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index 3571c7309c5e..bda0a776c58e 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -13,6 +13,7 @@
 #include <linux/mm.h>
 #include <linux/module.h>
 #include <linux/sched.h>
+#include <linux/vmalloc.h>
 
 #include <asm/pgtable.h>
 #include <asm/tlbflush.h>
@@ -44,6 +45,7 @@ static int change_memory_common(unsigned long addr,
         unsigned long end = start + size;
         int ret;
         struct page_change_data data;
+        struct vm_struct *area;
 
         if (!PAGE_ALIGNED(addr)) {
                 start &= PAGE_MASK;
@@ -51,10 +53,14 @@ static int change_memory_common(unsigned long addr,
                 WARN_ON_ONCE(1);
         }
 
-        if (start < MODULES_VADDR || start >= MODULES_END)
-                return -EINVAL;
-
-        if (end < MODULES_VADDR || end >= MODULES_END)
+        /*
+         * Check whether the [addr, addr + size) interval is entirely
+         * covered by precisely one VM area that has the VM_ALLOC flag set.
+         */
+        area = find_vm_area((void *)addr);
+        if (!area ||
+            end > (unsigned long)area->addr + area->size ||
+            !(area->flags & VM_ALLOC))
                 return -EINVAL;
 
         data.set_mask = set_mask;
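
For illustration, the caller-side pattern that either version of the
check is meant to permit looks roughly like the hypothetical module
sketch below. It is not from any patch in this thread; it only assumes
the arm64 declarations of set_memory_ro()/set_memory_rw() in
asm/cacheflush.h as they stood around v4.4, plus the relaxed range
check under discussion. All names in it are invented for the example.

/*
 * Hypothetical sketch: write-protect a vmalloc'ed buffer, the way the
 * eBPF interpreter locks down its images.
 */
#include <linux/module.h>
#include <linux/mm.h>
#include <linux/vmalloc.h>
#include <linux/string.h>
#include <asm/cacheflush.h>	/* arm64 home of set_memory_* at this point */

static void *ro_buf;

static int __init ro_demo_init(void)
{
	int ret;

	/*
	 * vmalloc() backs the range with individual PAGE_SIZE ptes, so
	 * permissions can be changed without splitting block mappings
	 * (the TLB conflict problem described above).
	 */
	ro_buf = vmalloc(PAGE_SIZE);
	if (!ro_buf)
		return -ENOMEM;

	strncpy(ro_buf, "read-only from here on", PAGE_SIZE);

	/*
	 * Page-aligned start, a whole number of pages: this satisfies
	 * both validate_addr() and the find_vm_area()-based variant.
	 */
	ret = set_memory_ro((unsigned long)ro_buf, 1);
	if (ret)
		vfree(ro_buf);
	return ret;
}

static void __exit ro_demo_exit(void)
{
	set_memory_rw((unsigned long)ro_buf, 1);
	vfree(ro_buf);
}

module_init(ro_demo_init);
module_exit(ro_demo_exit);
MODULE_LICENSE("GPL");

With Laura's version the call succeeds because both ends of the range
pass is_vmalloc_addr(); with the find_vm_area() variant it succeeds
because the buffer lies entirely within a single VM_ALLOC area.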