From patchwork Fri May 4 17:39:37 2018
X-Patchwork-Submitter: Mark Rutland <mark.rutland@arm.com>
X-Patchwork-Id: 135015
From: Mark Rutland <mark.rutland@arm.com>
To: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org, aryabinin@virtuozzo.com,
    boqun.feng@gmail.com, catalin.marinas@arm.com, dvyukov@google.com,
    mark.rutland@arm.com, mingo@kernel.org, peterz@infradead.org,
    will.deacon@arm.com
Subject: [PATCH 6/6] arm64: instrument smp_{load_acquire,store_release}
Date: Fri, 4 May 2018 18:39:37 +0100
Message-Id: <20180504173937.25300-7-mark.rutland@arm.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20180504173937.25300-1-mark.rutland@arm.com>
References: <20180504173937.25300-1-mark.rutland@arm.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Our __smp_store_release() and __smp_load_acquire() macros use inline
assembly, which is opaque to kasan. This means that kasan can't catch
erroneous use of these.

This patch adds kasan instrumentation to both.

It might be better to turn these into __arch_* variants, as we do for
the atomics, but this works for the time being.
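As a rough illustration (not part of this patch; the function and
variable names below are made up), this is the sort of bug the new
checks let KASAN attribute to the barrier macro's call site:

  /*
   * Hypothetical example: with kasan_check_read() in
   * __smp_load_acquire(), a KASAN-enabled kernel can report the
   * use-after-free read below at the smp_load_acquire() call itself,
   * rather than missing it because the access happens inside opaque
   * inline assembly.
   */
  #include <linux/errno.h>
  #include <linux/slab.h>
  #include <asm/barrier.h>

  static int example_uaf(void)
  {
  	int *flag = kmalloc(sizeof(*flag), GFP_KERNEL);
  	int val;

  	if (!flag)
  		return -ENOMEM;

  	/* Publish a value with release semantics. */
  	smp_store_release(flag, 1);

  	kfree(flag);

  	/* Bug: read of freed memory; KASAN now flags this access. */
  	val = smp_load_acquire(flag);

  	return val;
  }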
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/include/asm/barrier.h | 22 ++++++++++++++--------
 1 file changed, 14 insertions(+), 8 deletions(-)

-- 
2.11.0

diff --git a/arch/arm64/include/asm/barrier.h b/arch/arm64/include/asm/barrier.h
index f11518af96a9..1a9c601619e5 100644
--- a/arch/arm64/include/asm/barrier.h
+++ b/arch/arm64/include/asm/barrier.h
@@ -20,6 +20,8 @@
 
 #ifndef __ASSEMBLY__
 
+#include <linux/kasan-checks.h>
+
 #define __nops(n)	".rept " #n "\nnop\n.endr\n"
 #define nops(n)		asm volatile(__nops(n))
 
@@ -68,31 +70,33 @@ static inline unsigned long array_index_mask_nospec(unsigned long idx,
 
 #define __smp_store_release(p, v)					\
 do {									\
+	typeof(p) __p = (p);						\
 	union { typeof(*p) __val; char __c[1]; } __u =			\
 		{ .__val = (__force typeof(*p)) (v) };			\
 	compiletime_assert_atomic_type(*p);				\
+	kasan_check_write(__p, sizeof(*__p));				\
 	switch (sizeof(*p)) {						\
 	case 1:								\
 		asm volatile ("stlrb %w1, %0"				\
-				: "=Q" (*p)				\
+				: "=Q" (*__p)				\
 				: "r" (*(__u8 *)__u.__c)		\
 				: "memory");				\
 		break;							\
 	case 2:								\
 		asm volatile ("stlrh %w1, %0"				\
-				: "=Q" (*p)				\
+				: "=Q" (*__p)				\
 				: "r" (*(__u16 *)__u.__c)		\
 				: "memory");				\
 		break;							\
 	case 4:								\
 		asm volatile ("stlr %w1, %0"				\
-				: "=Q" (*p)				\
+				: "=Q" (*__p)				\
 				: "r" (*(__u32 *)__u.__c)		\
 				: "memory");				\
 		break;							\
 	case 8:								\
 		asm volatile ("stlr %1, %0"				\
-				: "=Q" (*p)				\
+				: "=Q" (*__p)				\
 				: "r" (*(__u64 *)__u.__c)		\
 				: "memory");				\
 		break;							\
@@ -102,27 +106,29 @@ do {									\
 #define __smp_load_acquire(p)						\
 ({									\
 	union { typeof(*p) __val; char __c[1]; } __u;			\
+	typeof(p) __p = (p);						\
 	compiletime_assert_atomic_type(*p);				\
+	kasan_check_read(__p, sizeof(*__p));				\
 	switch (sizeof(*p)) {						\
 	case 1:								\
 		asm volatile ("ldarb %w0, %1"				\
 			: "=r" (*(__u8 *)__u.__c)			\
-			: "Q" (*p) : "memory");				\
+			: "Q" (*__p) : "memory");			\
 		break;							\
 	case 2:								\
 		asm volatile ("ldarh %w0, %1"				\
 			: "=r" (*(__u16 *)__u.__c)			\
-			: "Q" (*p) : "memory");				\
+			: "Q" (*__p) : "memory");			\
 		break;							\
 	case 4:								\
 		asm volatile ("ldar %w0, %1"				\
 			: "=r" (*(__u32 *)__u.__c)			\
-			: "Q" (*p) : "memory");				\
+			: "Q" (*__p) : "memory");			\
 		break;							\
 	case 8:								\
 		asm volatile ("ldar %0, %1"				\
 			: "=r" (*(__u64 *)__u.__c)			\
-			: "Q" (*p) : "memory");				\
+			: "Q" (*__p) : "memory");			\
 		break;							\
 	}								\
 	__u.__val;							\