From patchwork Mon Apr 18 16:04:57 2016
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 66045
Delivered-To: patch@linaro.org
From: Ard Biesheuvel
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org, akpm@linux-foundation.org, lftan@altera.com, jonas@southpole.se
Cc: will.deacon@arm.com, Ard Biesheuvel
Subject: [PATCH resend 3/3] mm: replace open coded page to virt conversion with page_to_virt()
Date: Mon, 18 Apr 2016 18:04:57 +0200
Message-Id: <1460995497-24312-4-git-send-email-ard.biesheuvel@linaro.org>
X-Mailer: git-send-email 2.5.0
In-Reply-To: <1460995497-24312-1-git-send-email-ard.biesheuvel@linaro.org>
References: <1460995497-24312-1-git-send-email-ard.biesheuvel@linaro.org>
X-Mailing-List: linux-kernel@vger.kernel.org

The open coded conversion from struct page address to virtual address in
lowmem_page_address() involves an intermediate conversion step to pfn
number/physical address.
Since the placement of the struct page array relative to the linear
mapping may be completely independent from the placement of physical RAM
(as is the case for arm64 after commit dfd55ad85e 'arm64: vmemmap: use
virtual projection of linear region'), the conversion to physical address
and back again should factor out of the equation. Unfortunately, the
shifting and pointer arithmetic involved prevent this from happening: the
resulting calculation essentially subtracts the address of the start of
physical memory and adds it back again, in a way that prevents the
compiler from optimizing it away. Since the start of physical memory is
not a build time constant on arm64, the resulting conversion involves an
unnecessary memory access, which we would like to get rid of.

So replace the open coded conversion with a call to page_to_virt(), and
use the open coded conversion as its default definition, to be overridden
by the architecture, if desired. The existing arch specific definitions
of page_to_virt are all equivalent to this default definition, so by
itself this patch is a no-op.

Acked-by: Will Deacon
Signed-off-by: Ard Biesheuvel
---
 include/linux/mm.h | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

-- 
2.5.0

diff --git a/include/linux/mm.h b/include/linux/mm.h
index a55e5be0894f..7d66dbba220f 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -72,6 +72,10 @@ extern int mmap_rnd_compat_bits __read_mostly;
 #define __pa_symbol(x)  __pa(RELOC_HIDE((unsigned long)(x), 0))
 #endif
 
+#ifndef page_to_virt
+#define page_to_virt(x)	__va(PFN_PHYS(page_to_pfn(x)))
+#endif
+
 /*
  * To prevent common memory management code establishing
  * a zero page mapping on a read fault.
@@ -948,7 +952,7 @@ static inline struct mem_cgroup *page_memcg(struct page *page)
 
 static __always_inline void *lowmem_page_address(const struct page *page)
 {
-	return __va(PFN_PHYS(page_to_pfn(page)));
+	return page_to_virt(page);
 }
 
 #if defined(CONFIG_HIGHMEM) && !defined(WANT_PAGE_VIRTUAL)