From patchwork Wed May 12 14:52:01 2021
X-Patchwork-Submitter: Greg KH
X-Patchwork-Id: 435682
Delivered-To: patch@linaro.org
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Arnd Bergmann,
    "Peter Zijlstra (Intel)", Jens Axboe, Nathan Chancellor
Subject: [PATCH 5.12 675/677] smp: Fix smp_call_function_single_async prototype
Date: Wed, 12 May 2021 16:52:01 +0200
Message-Id: <20210512144859.785179198@linuxfoundation.org>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210512144837.204217980@linuxfoundation.org>
References: <20210512144837.204217980@linuxfoundation.org>
User-Agent: quilt/0.66
X-Mailing-List: stable@vger.kernel.org

From: Arnd Bergmann

commit 1139aeb1c521eb4a050920ce6c64c36c4f2a3ab7 upstream.

As of commit 966a967116e6 ("smp: Avoid using two cache lines for
struct call_single_data"), the smp code prefers 32-byte aligned
call_single_data objects for performance reasons, but the block layer
includes an instance of this structure in the main 'struct request',
which is more sensitive to size than to performance here, see
4ccafe032005 ("block: unalign call_single_data in struct request").

The result is a violation of the calling conventions that clang
correctly points out:

block/blk-mq.c:630:39: warning: passing 8-byte aligned argument to
32-byte aligned parameter 2 of 'smp_call_function_single_async' may
result in an unaligned pointer access [-Walign-mismatch]
                        smp_call_function_single_async(cpu, &rq->csd);

It does seem that using call_single_data without cache-line alignment
should still be allowed by the smp code, so just change the function
prototype so it accepts both, but leave the default alignment
unchanged for the other users. This seems better to me than adding a
local hack to shut up an otherwise correct warning in the caller.
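For illustration, the mismatch clang reports can be reproduced outside
the kernel in a few lines of C. This is a minimal sketch, assuming
clang 12 or newer (pass -Walign-mismatch explicitly if it is not
already enabled); every demo_* name below is a hypothetical stand-in
for struct __call_single_data, call_single_data_t, struct request and
smp_call_function_single_async respectively:

/* Hypothetical stand-in for struct __call_single_data: naturally
 * 8-byte aligned on a 64-bit target. */
struct demo_csd {
	void (*func)(void *info);
	void *info;
};

/* Stand-in for call_single_data_t: the same struct, but the typedef
 * demands 32-byte alignment to keep one csd inside one cache line. */
typedef struct demo_csd demo_csd_t __attribute__((aligned(32)));

/* Stand-in for struct request: embeds the *plain* struct, trading
 * the alignment guarantee for a smaller containing structure, as
 * 4ccafe032005 did. */
struct demo_request {
	int tag;
	struct demo_csd csd;	/* only 8-byte aligned */
};

int call_async_old(int cpu, demo_csd_t *csd);		/* old prototype */
int call_async_new(int cpu, struct demo_csd *csd);	/* fixed prototype */

int kick(struct demo_request *rq)
{
	call_async_old(0, &rq->csd);		/* clang: -Walign-mismatch */
	return call_async_new(0, &rq->csd);	/* accepted silently */
}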
Signed-off-by: Arnd Bergmann
Signed-off-by: Peter Zijlstra (Intel)
Acked-by: Jens Axboe
Link: https://lkml.kernel.org/r/20210505211300.3174456-1-arnd@kernel.org
[nc: Fix conflicts]
Signed-off-by: Nathan Chancellor
Signed-off-by: Greg Kroah-Hartman
---
 include/linux/smp.h |    2 +-
 kernel/smp.c        |   20 ++++++++++----------
 kernel/up.c         |    2 +-
 3 files changed, 12 insertions(+), 12 deletions(-)

--- a/include/linux/smp.h
+++ b/include/linux/smp.h
@@ -73,7 +73,7 @@ void on_each_cpu_cond(smp_cond_func_t co
 void on_each_cpu_cond_mask(smp_cond_func_t cond_func, smp_call_func_t func,
 			   void *info, bool wait, const struct cpumask *mask);
 
-int smp_call_function_single_async(int cpu, call_single_data_t *csd);
+int smp_call_function_single_async(int cpu, struct __call_single_data *csd);
 
 #ifdef CONFIG_SMP
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -110,7 +110,7 @@ static DEFINE_PER_CPU(void *, cur_csd_in
 static atomic_t csd_bug_count = ATOMIC_INIT(0);
 
 /* Record current CSD work for current CPU, NULL to erase. */
-static void csd_lock_record(call_single_data_t *csd)
+static void csd_lock_record(struct __call_single_data *csd)
 {
 	if (!csd) {
 		smp_mb(); /* NULL cur_csd after unlock. */
@@ -125,7 +125,7 @@ static void csd_lock_record(call_single_
 	/* Or before unlock, as the case may be. */
 }
 
-static __always_inline int csd_lock_wait_getcpu(call_single_data_t *csd)
+static __always_inline int csd_lock_wait_getcpu(struct __call_single_data *csd)
 {
 	unsigned int csd_type;
 
@@ -140,7 +140,7 @@ static __always_inline int csd_lock_wait
  * the CSD_TYPE_SYNC/ASYNC types provide the destination CPU,
  * so waiting on other types gets much less information.
  */
-static __always_inline bool csd_lock_wait_toolong(call_single_data_t *csd, u64 ts0, u64 *ts1, int *bug_id)
+static __always_inline bool csd_lock_wait_toolong(struct __call_single_data *csd, u64 ts0, u64 *ts1, int *bug_id)
 {
 	int cpu = -1;
 	int cpux;
@@ -204,7 +204,7 @@ static __always_inline bool csd_lock_wai
  * previous function call. For multi-cpu calls its even more interesting
  * as we'll have to ensure no other cpu is observing our csd.
  */
-static __always_inline void csd_lock_wait(call_single_data_t *csd)
+static __always_inline void csd_lock_wait(struct __call_single_data *csd)
 {
 	int bug_id = 0;
 	u64 ts0, ts1;
@@ -219,17 +219,17 @@ static __always_inline void csd_lock_wai
 }
 
 #else
-static void csd_lock_record(call_single_data_t *csd)
+static void csd_lock_record(struct __call_single_data *csd)
 {
 }
 
-static __always_inline void csd_lock_wait(call_single_data_t *csd)
+static __always_inline void csd_lock_wait(struct __call_single_data *csd)
 {
 	smp_cond_load_acquire(&csd->node.u_flags, !(VAL & CSD_FLAG_LOCK));
 }
 #endif
 
-static __always_inline void csd_lock(call_single_data_t *csd)
+static __always_inline void csd_lock(struct __call_single_data *csd)
 {
 	csd_lock_wait(csd);
 	csd->node.u_flags |= CSD_FLAG_LOCK;
@@ -242,7 +242,7 @@ static __always_inline void csd_lock(cal
 	smp_wmb();
 }
 
-static __always_inline void csd_unlock(call_single_data_t *csd)
+static __always_inline void csd_unlock(struct __call_single_data *csd)
 {
 	WARN_ON(!(csd->node.u_flags & CSD_FLAG_LOCK));
 
@@ -276,7 +276,7 @@ void __smp_call_single_queue(int cpu, st
 * for execution on the given CPU. data must already have
 * ->func, ->info, and ->flags set.
 */
-static int generic_exec_single(int cpu, call_single_data_t *csd)
+static int generic_exec_single(int cpu, struct __call_single_data *csd)
 {
 	if (cpu == smp_processor_id()) {
 		smp_call_func_t func = csd->func;
@@ -542,7 +542,7 @@ EXPORT_SYMBOL(smp_call_function_single);
 * NOTE: Be careful, there is unfortunately no current debugging facility to
 * validate the correctness of this serialization.
 */
-int smp_call_function_single_async(int cpu, call_single_data_t *csd)
+int smp_call_function_single_async(int cpu, struct __call_single_data *csd)
 {
 	int err = 0;
 
--- a/kernel/up.c
+++ b/kernel/up.c
@@ -25,7 +25,7 @@ int smp_call_function_single(int cpu, vo
 }
 EXPORT_SYMBOL(smp_call_function_single);
 
-int smp_call_function_single_async(int cpu, call_single_data_t *csd)
+int smp_call_function_single_async(int cpu, struct __call_single_data *csd)
 {
 	unsigned long flags;
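
As a closing note on why changing only the prototype is sufficient:
call_single_data_t is a typedef of struct __call_single_data that
merely adds an alignment requirement, so a pointer to the aligned type
converts losslessly to a pointer to the plain struct, and the smp code
never relied on the stricter alignment for correctness. Continuing the
hypothetical demo_* sketch from above:

/* Both flavours now pass cleanly through the fixed prototype. */
static demo_csd_t aligned_csd;	/* 32-byte aligned, the common case */
static struct demo_request rq;	/* embeds an 8-byte aligned csd */

void demo(void)
{
	call_async_new(0, &aligned_csd);	/* extra alignment is harmless */
	call_async_new(0, &rq.csd);		/* no over-alignment promised */
}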