From patchwork Mon Dec 17 16:03:22 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Masahiro Yamada
X-Patchwork-Id: 154017
From: Masahiro Yamada
To: x86@kernel.org, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	"H. Peter Anvin"
Cc: Richard Biener, Segher Boessenkool, Peter Zijlstra, Juergen Gross,
	Josh Poimboeuf, Kees Cook, Linus Torvalds, Masahiro Yamada,
	Alexey Dobriyan, linux-kernel@vger.kernel.org, Jan Beulich,
	Nadav Amit
Subject: [PATCH v3 07/12] Revert "x86/refcount: Work around GCC inlining bug"
Date: Tue, 18 Dec 2018 01:03:22 +0900
Message-Id: <1545062607-8599-8-git-send-email-yamada.masahiro@socionext.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1545062607-8599-1-git-send-email-yamada.masahiro@socionext.com>
References: <1545062607-8599-1-git-send-email-yamada.masahiro@socionext.com>
X-Mailing-List: linux-kernel@vger.kernel.org

This reverts commit 9e1725b410594911cc5981b6c7b4cea4ec054ca8.

The in-kernel workarounds will be replaced with GCC's new "asm inline"
syntax.

Resolved conflicts caused by 288e4521f0f6 ("x86/asm: 'Simplify'
GEN_*_RMWcc() macros").

Signed-off-by: Masahiro Yamada
---
 arch/x86/include/asm/refcount.h | 81 +++++++++++++++++------------------------
 arch/x86/kernel/macros.S        |  1 -
 2 files changed, 33 insertions(+), 49 deletions(-)

-- 
2.7.4

diff --git a/arch/x86/include/asm/refcount.h b/arch/x86/include/asm/refcount.h
index a8b5e1e..dbaed55 100644
--- a/arch/x86/include/asm/refcount.h
+++ b/arch/x86/include/asm/refcount.h
@@ -4,41 +4,6 @@
  * x86-specific implementation of refcount_t. Based on PAX_REFCOUNT from
  * PaX/grsecurity.
  */
-
-#ifdef __ASSEMBLY__
-
-#include
-#include
-
-.macro REFCOUNT_EXCEPTION counter:req
-	.pushsection .text..refcount
-111:	lea \counter, %_ASM_CX
-112:	ud2
-	ASM_UNREACHABLE
-	.popsection
-113:	_ASM_EXTABLE_REFCOUNT(112b, 113b)
-.endm
-
-/* Trigger refcount exception if refcount result is negative. */
-.macro REFCOUNT_CHECK_LT_ZERO counter:req
-	js 111f
-	REFCOUNT_EXCEPTION counter="\counter"
-.endm
-
-/* Trigger refcount exception if refcount result is zero or negative. */
-.macro REFCOUNT_CHECK_LE_ZERO counter:req
-	jz 111f
-	REFCOUNT_CHECK_LT_ZERO counter="\counter"
-.endm
-
-/* Trigger refcount exception unconditionally. */
-.macro REFCOUNT_ERROR counter:req
-	jmp 111f
-	REFCOUNT_EXCEPTION counter="\counter"
-.endm
-
-#else /* __ASSEMBLY__ */
-
 #include
 #include
 
@@ -50,12 +15,35 @@
  * central refcount exception. The fixup address for the exception points
  * back to the regular execution flow in .text.
  */
+#define _REFCOUNT_EXCEPTION				\
+	".pushsection .text..refcount\n"		\
+	"111:\tlea %[var], %%" _ASM_CX "\n"		\
+	"112:\t" ASM_UD2 "\n"				\
+	ASM_UNREACHABLE					\
+	".popsection\n"					\
+	"113:\n"					\
+	_ASM_EXTABLE_REFCOUNT(112b, 113b)
+
+/* Trigger refcount exception if refcount result is negative. */
+#define REFCOUNT_CHECK_LT_ZERO				\
+	"js 111f\n\t"					\
+	_REFCOUNT_EXCEPTION
+
+/* Trigger refcount exception if refcount result is zero or negative. */
+#define REFCOUNT_CHECK_LE_ZERO				\
+	"jz 111f\n\t"					\
+	REFCOUNT_CHECK_LT_ZERO
+
+/* Trigger refcount exception unconditionally. */
+#define REFCOUNT_ERROR					\
+	"jmp 111f\n\t"					\
+	_REFCOUNT_EXCEPTION
 
 static __always_inline void refcount_add(unsigned int i, refcount_t *r)
 {
 	asm volatile(LOCK_PREFIX "addl %1,%0\n\t"
-		"REFCOUNT_CHECK_LT_ZERO counter=\"%[counter]\""
-		: [counter] "+m" (r->refs.counter)
+		REFCOUNT_CHECK_LT_ZERO
+		: [var] "+m" (r->refs.counter)
 		: "ir" (i)
 		: "cc", "cx");
 }
@@ -63,32 +51,31 @@ static __always_inline void refcount_add(unsigned int i, refcount_t *r)
 static __always_inline void refcount_inc(refcount_t *r)
 {
 	asm volatile(LOCK_PREFIX "incl %0\n\t"
-		"REFCOUNT_CHECK_LT_ZERO counter=\"%[counter]\""
-		: [counter] "+m" (r->refs.counter)
+		REFCOUNT_CHECK_LT_ZERO
+		: [var] "+m" (r->refs.counter)
 		: : "cc", "cx");
 }
 
 static __always_inline void refcount_dec(refcount_t *r)
 {
 	asm volatile(LOCK_PREFIX "decl %0\n\t"
-		"REFCOUNT_CHECK_LE_ZERO counter=\"%[counter]\""
-		: [counter] "+m" (r->refs.counter)
+		REFCOUNT_CHECK_LE_ZERO
+		: [var] "+m" (r->refs.counter)
 		: : "cc", "cx");
 }
 
 static __always_inline __must_check
 bool refcount_sub_and_test(unsigned int i, refcount_t *r)
 {
 	return GEN_BINARY_SUFFIXED_RMWcc(LOCK_PREFIX "subl",
-					 "REFCOUNT_CHECK_LT_ZERO counter=\"%[var]\"",
+					 REFCOUNT_CHECK_LT_ZERO,
 					 r->refs.counter, e, "er", i, "cx");
 }
 
 static __always_inline __must_check bool refcount_dec_and_test(refcount_t *r)
 {
 	return GEN_UNARY_SUFFIXED_RMWcc(LOCK_PREFIX "decl",
-					"REFCOUNT_CHECK_LT_ZERO counter=\"%[var]\"",
+					REFCOUNT_CHECK_LT_ZERO,
 					r->refs.counter, e, "cx");
 }
 
@@ -106,8 +93,8 @@ bool refcount_add_not_zero(unsigned int i, refcount_t *r)
 
 		/* Did we try to increment from/to an undesirable state? */
 		if (unlikely(c < 0 || c == INT_MAX || result < c)) {
-			asm volatile("REFCOUNT_ERROR counter=\"%[counter]\""
-				     : : [counter] "m" (r->refs.counter)
+			asm volatile(REFCOUNT_ERROR
+				     : : [var] "m" (r->refs.counter)
 				     : "cc", "cx");
 			break;
 		}
@@ -122,6 +109,4 @@ static __always_inline __must_check bool refcount_inc_not_zero(refcount_t *r)
 	return refcount_add_not_zero(1, r);
 }
 
-#endif /* __ASSEMBLY__ */
-
 #endif
diff --git a/arch/x86/kernel/macros.S b/arch/x86/kernel/macros.S
index f1fe1d5..cee28c3 100644
--- a/arch/x86/kernel/macros.S
+++ b/arch/x86/kernel/macros.S
@@ -7,4 +7,3 @@
  */
 
 #include
-#include
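
For readers unfamiliar with the "asm inline" syntax mentioned in the commit
message: GCC 9 added an "inline" qualifier for asm statements which tells the
inliner to treat the asm body as having minimal size, so a function containing
a large-looking asm string (like the exception-table boilerplate restored
above) is no longer penalized in inlining decisions. The sketch below is
illustrative only and is not part of this patch; the ASM_INLINE macro and the
my_refcount_inc() helper are made-up names used purely to show the idea, not
the kernel's actual definitions.

/* Illustrative sketch only -- not part of this patch. */
#if defined(__GNUC__) && __GNUC__ >= 9
/* GCC 9+: the inliner counts this asm statement as minimum size. */
#define ASM_INLINE asm inline
#else
/* Older compilers: fall back to a plain asm statement. */
#define ASM_INLINE asm
#endif

static inline void my_refcount_inc(int *counter)
{
	/* x86: atomically increment the counter in memory. */
	ASM_INLINE volatile("lock incl %0"
			    : "+m" (*counter)
			    : /* no inputs */
			    : "cc");
}

Per the commit message, later patches in this series apply the same idea to
the asm string macros restored above, which is why the macros.S-based
workaround can be dropped.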