From patchwork Tue Apr 14 17:02:44 2020
X-Patchwork-Submitter: Jean-Philippe Brucker
X-Patchwork-Id: 202080
From: Jean-Philippe Brucker
To: iommu@lists.linux-foundation.org, devicetree@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, linux-pci@vger.kernel.org,
	linux-mm@kvack.org
Cc: joro@8bytes.org, catalin.marinas@arm.com, will@kernel.org,
	robin.murphy@arm.com, kevin.tian@intel.com, baolu.lu@linux.intel.com,
	Jonathan.Cameron@huawei.com, jacob.jun.pan@linux.intel.com,
	christian.koenig@amd.com, zhangfei.gao@linaro.org, jgg@ziepe.ca,
	xuzaibo@huawei.com, Jean-Philippe Brucker, Suzuki K Poulose
Subject: [PATCH v5 16/25] iommu/arm-smmu-v3: Add SVA feature checking
Date: Tue, 14 Apr 2020 19:02:44 +0200
Message-Id: <20200414170252.714402-17-jean-philippe@linaro.org>
In-Reply-To: <20200414170252.714402-1-jean-philippe@linaro.org>
References: <20200414170252.714402-1-jean-philippe@linaro.org>

Aggregate all sanity-checks for sharing CPU page tables with the SMMU
under a single ARM_SMMU_FEAT_SVA bit. For PCIe SVA, users also need to
check FEAT_ATS and FEAT_PRI. For platform SVA, they will most likely
have to check FEAT_STALLS.

Cc: Suzuki K Poulose
Signed-off-by: Jean-Philippe Brucker
---
v4->v5: bump feature bit
---
 drivers/iommu/arm-smmu-v3.c | 72 +++++++++++++++++++++++++++++++++++++
 1 file changed, 72 insertions(+)

diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
index e7de8a7459fa4..d209d85402a83 100644
--- a/drivers/iommu/arm-smmu-v3.c
+++ b/drivers/iommu/arm-smmu-v3.c
@@ -657,6 +657,7 @@ struct arm_smmu_device {
 #define ARM_SMMU_FEAT_RANGE_INV		(1 << 15)
 #define ARM_SMMU_FEAT_E2H		(1 << 16)
 #define ARM_SMMU_FEAT_BTM		(1 << 17)
+#define ARM_SMMU_FEAT_SVA		(1 << 18)
 	u32				features;
 
 #define ARM_SMMU_OPT_SKIP_PREFETCH	(1 << 0)
@@ -3930,6 +3931,74 @@ static int arm_smmu_device_reset(struct arm_smmu_device *smmu, bool bypass)
 	return 0;
 }
 
+static bool arm_smmu_supports_sva(struct arm_smmu_device *smmu)
+{
+	unsigned long reg, fld;
+	unsigned long oas;
+	unsigned long asid_bits;
+
+	u32 feat_mask = ARM_SMMU_FEAT_BTM | ARM_SMMU_FEAT_COHERENCY;
+
+	if ((smmu->features & feat_mask) != feat_mask)
+		return false;
+
+	if (!(smmu->pgsize_bitmap & PAGE_SIZE))
+		return false;
+
+	/*
+	 * Get the smallest PA size of all CPUs (sanitized by cpufeature). We're
+	 * not even pretending to support AArch32 here.
+	 */
+	reg = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1);
+	fld = cpuid_feature_extract_unsigned_field(reg, ID_AA64MMFR0_PARANGE_SHIFT);
+	switch (fld) {
+	case 0x0:
+		oas = 32;
+		break;
+	case 0x1:
+		oas = 36;
+		break;
+	case 0x2:
+		oas = 40;
+		break;
+	case 0x3:
+		oas = 42;
+		break;
+	case 0x4:
+		oas = 44;
+		break;
+	case 0x5:
+		oas = 48;
+		break;
+	case 0x6:
+		oas = 52;
+		break;
+	default:
+		return false;
+	}
+
+	/* abort if MMU outputs addresses greater than what we support. */
+	if (smmu->oas < oas)
+		return false;
+
+	/* We can support bigger ASIDs than the CPU, but not smaller */
+	fld = cpuid_feature_extract_unsigned_field(reg, ID_AA64MMFR0_ASID_SHIFT);
+	asid_bits = fld ? 16 : 8;
+	if (smmu->asid_bits < asid_bits)
+		return false;
+
+	/*
+	 * See max_pinned_asids in arch/arm64/mm/context.c. The following is
+	 * generally the maximum number of bindable processes.
+	 */
+	if (IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0))
+		asid_bits--;
+	dev_dbg(smmu->dev, "%d shared contexts\n", (1 << asid_bits) -
+		num_possible_cpus() - 2);
+
+	return true;
+}
+
 static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
 {
 	u32 reg;
@@ -4142,6 +4211,9 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
 
 	smmu->ias = max(smmu->ias, smmu->oas);
 
+	if (arm_smmu_supports_sva(smmu))
+		smmu->features |= ARM_SMMU_FEAT_SVA;
+
 	dev_info(smmu->dev, "ias %lu-bit, oas %lu-bit (features 0x%08x)\n",
 		 smmu->ias, smmu->oas, smmu->features);
 	return 0;
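
As a usage illustration only, not part of the patch: per the commit message, a
PCIe-capable consumer would test ARM_SMMU_FEAT_SVA together with the existing
ARM_SMMU_FEAT_ATS and ARM_SMMU_FEAT_PRI bits before enabling SVA, while a
platform device would most likely test ARM_SMMU_FEAT_STALLS instead. A minimal
sketch, assuming the struct arm_smmu_device and feature bits defined in
arm-smmu-v3.c; the helper name is hypothetical:

	/*
	 * Hypothetical helper, for illustration only (not part of this patch):
	 * combine the SVA feature bit added above with the driver's existing
	 * ATS and PRI bits, as the commit message describes for PCIe SVA.
	 */
	static bool arm_smmu_supports_pci_sva(struct arm_smmu_device *smmu)
	{
		u32 want = ARM_SMMU_FEAT_SVA | ARM_SMMU_FEAT_ATS |
			   ARM_SMMU_FEAT_PRI;

		/* All three features must be present to share page tables over PCIe */
		return (smmu->features & want) == want;
	}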