From patchwork Thu Apr 12 11:11:04 2018
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 133203
From: Mark Rutland
To: stable@vger.kernel.org
Cc: mark.brown@linaro.org, ard.biesheuvel@linaro.org, marc.zyngier@arm.com,
    will.deacon@arm.com, catalin.marinas@arm.com, ghackmann@google.com,
    shankerd@codeaurora.org
Subject: [PATCH v4.9.y 08/42] arm64: uaccess: Don't bother eliding access_ok checks in __{get,put}_user
Date: Thu, 12 Apr 2018 12:11:04 +0100
Message-Id: <20180412111138.40990-9-mark.rutland@arm.com>
In-Reply-To: <20180412111138.40990-1-mark.rutland@arm.com>
References: <20180412111138.40990-1-mark.rutland@arm.com>
List-ID: stable@vger.kernel.org

From: Will Deacon

commit 84624087dd7e3b482b7b11c170ebc1f329b3a218 upstream.

access_ok isn't an expensive operation once the addr_limit for the
current thread has been loaded into the cache. Given that the initial
access_ok check preceding a sequence of __{get,put}_user operations will
take the brunt of the miss, we can make the __* variants identical to
the full-fat versions, which brings with it the benefits of address
masking.

The likely cost in these sequences will be from toggling PAN/UAO, which
we can address later by implementing the *_unsafe versions.
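For readers unfamiliar with the masking referred to above: arm64's
uaccess_mask_ptr() clamps a user-supplied pointer to the user address
range without a conditional branch, so that a mispredicted access_ok()
cannot be used to speculatively dereference a kernel address (the
kernel does this in inline assembly, roughly a bics/csel pair followed
by a speculation barrier). Below is only a rough, portable C sketch of
the idea, not the kernel's code; mask_user_ptr and USER_ADDR_LIMIT are
invented names for illustration.

/*
 * Illustrative sketch only -- not the kernel implementation. Assumes
 * a 64-bit uintptr_t and an arithmetic right shift of signed values
 * (implementation-defined in C, but what common compilers do).
 */
#include <stdint.h>

#define USER_ADDR_LIMIT	0x0000ffffffffffffUL	/* stand-in for addr_limit */

static inline const void *mask_user_ptr(const void *ptr)
{
	uintptr_t addr = (uintptr_t)ptr;

	/*
	 * (addr - LIMIT - 1) is negative exactly when addr <= LIMIT, so
	 * the sign-extending shift yields all-ones (pointer preserved)
	 * for in-range addresses and all-zeroes (pointer forced to NULL)
	 * for out-of-range ones -- with no branch to mispredict.
	 */
	uintptr_t mask = (uintptr_t)((intptr_t)(addr - USER_ADDR_LIMIT - 1) >> 63);

	return (const void *)(addr & mask);
}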
Reviewed-by: Robin Murphy
Signed-off-by: Will Deacon
Signed-off-by: Catalin Marinas
Signed-off-by: Mark Rutland [v4.9 backport]
---
 arch/arm64/include/asm/uaccess.h | 62 +++++++++++++++++++++++-----------------
 1 file changed, 36 insertions(+), 26 deletions(-)
-- 
2.11.0

diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
index 900397e73fa6..0d4330c7d58d 100644
--- a/arch/arm64/include/asm/uaccess.h
+++ b/arch/arm64/include/asm/uaccess.h
@@ -209,30 +209,35 @@ do {									\
 			CONFIG_ARM64_PAN));				\
 } while (0)
 
-#define __get_user(x, ptr)						\
+#define __get_user_check(x, ptr, err)					\
 ({									\
-	int __gu_err = 0;						\
-	__get_user_err((x), (ptr), __gu_err);				\
-	__gu_err;							\
+	__typeof__(*(ptr)) __user *__p = (ptr);				\
+	might_fault();							\
+	if (access_ok(VERIFY_READ, __p, sizeof(*__p))) {		\
+		__p = uaccess_mask_ptr(__p);				\
+		__get_user_err((x), __p, (err));			\
+	} else {							\
+		(x) = 0; (err) = -EFAULT;				\
+	}								\
 })
 
 #define __get_user_error(x, ptr, err)					\
 ({									\
-	__get_user_err((x), (ptr), (err));				\
+	__get_user_check((x), (ptr), (err));				\
 	(void)0;							\
 })
 
-#define __get_user_unaligned __get_user
-
-#define get_user(x, ptr)						\
+#define __get_user(x, ptr)						\
 ({									\
-	__typeof__(*(ptr)) __user *__p = (ptr);				\
-	might_fault();							\
-	access_ok(VERIFY_READ, __p, sizeof(*__p)) ?			\
-		__p = uaccess_mask_ptr(__p), __get_user((x), __p) :	\
-		((x) = 0, -EFAULT);					\
+	int __gu_err = 0;						\
+	__get_user_check((x), (ptr), __gu_err);				\
+	__gu_err;							\
 })
 
+#define __get_user_unaligned __get_user
+
+#define get_user	__get_user
+
 #define __put_user_asm(instr, alt_instr, reg, x, addr, err, feature)	\
 	asm volatile(							\
 	"1:"ALTERNATIVE(instr "	" reg "1, [%2]\n",			\
@@ -277,30 +282,35 @@ do {									\
 			CONFIG_ARM64_PAN));				\
 } while (0)
 
-#define __put_user(x, ptr)						\
+#define __put_user_check(x, ptr, err)					\
 ({									\
-	int __pu_err = 0;						\
-	__put_user_err((x), (ptr), __pu_err);				\
-	__pu_err;							\
+	__typeof__(*(ptr)) __user *__p = (ptr);				\
+	might_fault();							\
+	if (access_ok(VERIFY_WRITE, __p, sizeof(*__p))) {		\
+		__p = uaccess_mask_ptr(__p);				\
+		__put_user_err((x), __p, (err));			\
+	} else {							\
+		(err) = -EFAULT;					\
+	}								\
 })
 
 #define __put_user_error(x, ptr, err)					\
 ({									\
-	__put_user_err((x), (ptr), (err));				\
+	__put_user_check((x), (ptr), (err));				\
 	(void)0;							\
 })
 
-#define __put_user_unaligned __put_user
-
-#define put_user(x, ptr)						\
+#define __put_user(x, ptr)						\
 ({									\
-	__typeof__(*(ptr)) __user *__p = (ptr);				\
-	might_fault();							\
-	access_ok(VERIFY_WRITE, __p, sizeof(*__p)) ?			\
-		__p = uaccess_mask_ptr(__p), __put_user((x), __p) :	\
-		-EFAULT;						\
+	int __pu_err = 0;						\
+	__put_user_check((x), (ptr), __pu_err);				\
+	__pu_err;							\
 })
 
+#define __put_user_unaligned __put_user
+
+#define put_user	__put_user
+
 extern unsigned long __must_check __arch_copy_from_user(void *to, const void __user *from, unsigned long n);
 extern unsigned long __must_check __arch_copy_to_user(void __user *to, const void *from, unsigned long n);
 extern unsigned long __must_check __copy_in_user(void __user *to, const void __user *from, unsigned long n);
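As a usage sketch of the new semantics (a hypothetical caller, not part
of the patch): after this change __get_user() performs the access_ok()
check and pointer masking itself, so a bad pointer fails safely with
-EFAULT rather than depending on the caller having called access_ok()
beforehand.

/* Hypothetical caller, for illustration only. */
long read_user_flag(const int __user *uptr)
{
	int flag;

	/*
	 * Before this patch __get_user() elided the access_ok() check
	 * and the caller was responsible for it; afterwards it behaves
	 * like get_user(): it checks access_ok(VERIFY_READ, ...), masks
	 * the pointer, and returns -EFAULT (zeroing 'flag') on an
	 * invalid address.
	 */
	if (__get_user(flag, uptr))
		return -EFAULT;

	return flag;
}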