From patchwork Sun Aug 19 07:51:10 2018
X-Patchwork-Submitter: "Leizhen (ThunderTown)" <thunder.leizhen@huawei.com>
X-Patchwork-Id: 144527
From: Zhen Lei <thunder.leizhen@huawei.com>
To: Robin Murphy, Will Deacon, Joerg Roedel, linux-arm-kernel, iommu,
 linux-kernel
CC: Zhen Lei, LinuxArm, Hanjun Guo, Libin, John Garry
Subject: [PATCH v4 1/2] iommu/arm-smmu-v3: fix unexpected CMD_SYNC timeout
Date: Sun, 19 Aug 2018 15:51:10 +0800
Message-ID: <1534665071-7976-2-git-send-email-thunder.leizhen@huawei.com>
X-Mailer: git-send-email 1.9.5.msysgit.0
In-Reply-To: <1534665071-7976-1-git-send-email-thunder.leizhen@huawei.com>
References: <1534665071-7976-1-git-send-email-thunder.leizhen@huawei.com>

The break condition "(int)(VAL - sync_idx) >= 0" in
__arm_smmu_sync_poll_msi() requires that sync_idx values increase
monotonically, in the same order as the CMD_SYNCs in the cmdq. But
".msidata = atomic_inc_return_relaxed(&smmu->sync_nr)" is not protected
by the cmdq spinlock, so the following scenario can occur:

	cpu0			cpu1
	msidata=0
				msidata=1
				insert cmd1
	insert cmd0
				smmu execute cmd1
	smmu execute cmd0
				poll timeout, because msidata=1 is
				overridden by cmd0, that means VAL=0,
				sync_idx=1.

This is not a functional problem; it just makes the caller wait until
the timeout expires. It is also rare, because any other CMD_SYNC issued
during the waiting period will break the wait.

Fix this by assigning msidata inside the cmdq spinlock, and split the
building of the CMD_SYNC for SIG_IRQ (MSI) mode into its own function,
to keep that critical section short and the code readable.
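For illustration, here is a minimal userspace sketch of the wrap-safe
comparison at the heart of the problem (the helper name and the
userspace framing are hypothetical, not part of the patch; it models
only the break condition, not the SMMU itself):

#include <stdio.h>
#include <stdint.h>

/*
 * Wrap-safe "has VAL caught up with sync_idx?" test, modelling the
 * break condition of __arm_smmu_sync_poll_msi().
 */
static int sync_reached(uint32_t val, uint32_t sync_idx)
{
	return (int32_t)(val - sync_idx) >= 0;
}

int main(void)
{
	uint32_t val = 1;	/* cmd1's MSI write (msidata=1) arrives first */

	printf("in order:     %d\n", sync_reached(val, 1));	/* 1: loop breaks */

	val = 0;		/* cmd0's later MSI write overrides it */
	printf("out of order: %d\n", sync_reached(val, 1));	/* 0: spins to timeout */
	return 0;
}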
Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
---
 drivers/iommu/arm-smmu-v3.c | 26 +++++++++++++++-----------
 1 file changed, 15 insertions(+), 11 deletions(-)

-- 
1.8.3

diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
index 1d64710..ac6d6df 100644
--- a/drivers/iommu/arm-smmu-v3.c
+++ b/drivers/iommu/arm-smmu-v3.c
@@ -566,7 +566,7 @@ struct arm_smmu_device {
 
 	int				gerr_irq;
 	int				combined_irq;
-	atomic_t			sync_nr;
+	u32				sync_nr;
 
 	unsigned long			ias; /* IPA */
 	unsigned long			oas; /* PA */
@@ -775,6 +775,17 @@ static int queue_remove_raw(struct arm_smmu_queue *q, u64 *ent)
 	return 0;
 }
 
+static inline
+void arm_smmu_cmdq_build_sync_msi_cmd(u64 *cmd, struct arm_smmu_cmdq_ent *ent)
+{
+	cmd[0]	= FIELD_PREP(CMDQ_0_OP, ent->opcode);
+	cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_CS, CMDQ_SYNC_0_CS_IRQ);
+	cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSH, ARM_SMMU_SH_ISH);
+	cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSIATTR, ARM_SMMU_MEMATTR_OIWB);
+	cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSIDATA, ent->sync.msidata);
+	cmd[1]	= ent->sync.msiaddr & CMDQ_SYNC_1_MSIADDR_MASK;
+}
+
 /* High-level queue accessors */
 static int arm_smmu_cmdq_build_cmd(u64 *cmd, struct arm_smmu_cmdq_ent *ent)
 {
@@ -830,14 +841,9 @@ static int arm_smmu_cmdq_build_cmd(u64 *cmd, struct arm_smmu_cmdq_ent *ent)
 		cmd[1] |= FIELD_PREP(CMDQ_PRI_1_RESP, ent->pri.resp);
 		break;
 	case CMDQ_OP_CMD_SYNC:
-		if (ent->sync.msiaddr)
-			cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_CS, CMDQ_SYNC_0_CS_IRQ);
-		else
-			cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_CS, CMDQ_SYNC_0_CS_SEV);
+		cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_CS, CMDQ_SYNC_0_CS_SEV);
 		cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSH, ARM_SMMU_SH_ISH);
 		cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSIATTR, ARM_SMMU_MEMATTR_OIWB);
-		cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSIDATA, ent->sync.msidata);
-		cmd[1] |= ent->sync.msiaddr & CMDQ_SYNC_1_MSIADDR_MASK;
 		break;
 	default:
 		return -ENOENT;
@@ -947,14 +953,13 @@ static int __arm_smmu_cmdq_issue_sync_msi(struct arm_smmu_device *smmu)
 	struct arm_smmu_cmdq_ent ent = {
 		.opcode = CMDQ_OP_CMD_SYNC,
 		.sync	= {
-			.msidata = atomic_inc_return_relaxed(&smmu->sync_nr),
 			.msiaddr = virt_to_phys(&smmu->sync_count),
 		},
 	};
 
-	arm_smmu_cmdq_build_cmd(cmd, &ent);
-
 	spin_lock_irqsave(&smmu->cmdq.lock, flags);
+	ent.sync.msidata = ++smmu->sync_nr;
+	arm_smmu_cmdq_build_sync_msi_cmd(cmd, &ent);
 	arm_smmu_cmdq_insert_cmd(smmu, cmd);
 	spin_unlock_irqrestore(&smmu->cmdq.lock, flags);
 
@@ -2179,7 +2184,6 @@ static int arm_smmu_init_structures(struct arm_smmu_device *smmu)
 {
 	int ret;
 
-	atomic_set(&smmu->sync_nr, 0);
 	ret = arm_smmu_init_queues(smmu);
 	if (ret)
 		return ret;
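For context, the wait side that consumes msidata looked roughly like the
following in arm-smmu-v3.c of this era (a sketch reproduced from the
surrounding kernel source, not part of this diff; the constant name
ARM_SMMU_CMDQ_SYNC_TIMEOUT_US is quoted from memory and may differ):

static int __arm_smmu_sync_poll_msi(struct arm_smmu_device *smmu, u32 sync_idx)
{
	ktime_t timeout = ktime_add_us(ktime_get(), ARM_SMMU_CMDQ_SYNC_TIMEOUT_US);
	/*
	 * Spin until sync_count reaches sync_idx (wrap-safe) or the
	 * timeout expires. This terminates promptly only if msidata
	 * values land in the cmdq in increasing order, which the
	 * patch above now guarantees.
	 */
	u32 val = smp_cond_load_acquire(&smmu->sync_count,
					(int)(VAL - sync_idx) >= 0 ||
					!ktime_before(ktime_get(), timeout));

	return (int)(val - sync_idx) < 0 ? -ETIMEDOUT : 0;
}

Note the design choice: since cmdq.lock already serializes command
insertion, the plain "++smmu->sync_nr" under that lock adds no extra
synchronization cost compared with the atomic it replaces, while
restoring the in-queue ordering the comparison above depends on.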