From patchwork Thu Sep 4 16:50:02 2014
X-Patchwork-Submitter: Will Deacon <will.deacon@arm.com>
X-Patchwork-Id: 36749
From: Will Deacon <will.deacon@arm.com>
To: iommu@lists.linuxfoundation.org
Cc: Will Deacon <will.deacon@arm.com>, tchalamarla@cavium.com,
    robin.murphy@arm.com, joro@8bytes.org,
    linux-arm-kernel@lists.infradead.org
Subject: [PATCH 4/7] iommu/arm-smmu: use page shift instead of page size to avoid division
Date: Thu, 4 Sep 2014 17:50:02 +0100
Message-Id: <1409849405-17347-5-git-send-email-will.deacon@arm.com>
In-Reply-To: <1409849405-17347-1-git-send-email-will.deacon@arm.com>
References: <1409849405-17347-1-git-send-email-will.deacon@arm.com>
X-Mailer: git-send-email 2.1.0

Arbitrary integer division is not available on all ARM CPUs, so GCC may
spit out calls to helper functions which are not implemented in the
kernel.

This patch avoids these problems in the SMMU driver by using page shift
instead of page size, so that divisions by the page size (as required by
the vSMMU code) can be expressed as simple right shifts.
Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 drivers/iommu/arm-smmu.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/drivers/iommu/arm-smmu.c b/drivers/iommu/arm-smmu.c
index 7ab3cc4ffbb3..ecad700cd4f4 100644
--- a/drivers/iommu/arm-smmu.c
+++ b/drivers/iommu/arm-smmu.c
@@ -59,7 +59,7 @@
 
 /* SMMU global address space */
 #define ARM_SMMU_GR0(smmu)		((smmu)->base)
-#define ARM_SMMU_GR1(smmu)		((smmu)->base + (smmu)->pagesize)
+#define ARM_SMMU_GR1(smmu)		((smmu)->base + (1 << (smmu)->pgshift))
 
 /*
  * SMMU global address space with conditional offset to access secure
@@ -224,7 +224,7 @@
 
 /* Translation context bank */
 #define ARM_SMMU_CB_BASE(smmu)		((smmu)->base + ((smmu)->size >> 1))
-#define ARM_SMMU_CB(smmu, n)		((n) * (smmu)->pagesize)
+#define ARM_SMMU_CB(smmu, n)		((n) * (1 << (smmu)->pgshift))
 
 #define ARM_SMMU_CB_SCTLR		0x0
 #define ARM_SMMU_CB_RESUME		0x8
@@ -354,7 +354,7 @@ struct arm_smmu_device {
 
 	void __iomem		*base;
 	unsigned long		size;
-	unsigned long		pagesize;
+	unsigned long		pgshift;
 
 #define ARM_SMMU_FEAT_COHERENT_WALK	(1 << 0)
 #define ARM_SMMU_FEAT_STREAM_MATCH	(1 << 1)
@@ -1807,12 +1807,12 @@ static int arm_smmu_device_cfg_probe(struct arm_smmu_device *smmu)
 
 	/* ID1 */
 	id = readl_relaxed(gr0_base + ARM_SMMU_GR0_ID1);
-	smmu->pagesize = (id & ID1_PAGESIZE) ? SZ_64K : SZ_4K;
+	smmu->pgshift = (id & ID1_PAGESIZE) ? 16 : 12;
 
 	/* Check for size mismatch of SMMU address space from mapped region */
 	size = 1 << (((id >> ID1_NUMPAGENDXB_SHIFT) & ID1_NUMPAGENDXB_MASK) + 1);
-	size *= (smmu->pagesize << 1);
+	size *= 2 << smmu->pgshift;
 	if (smmu->size != size)
 		dev_warn(smmu->dev,
 			"SMMU address space size (0x%lx) differs from mapped region size (0x%lx)!\n",
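
For illustration, here is a minimal stand-alone sketch of the idiom the patch
relies on (user-space C, not driver code; the helper names pages_by_division()
and pages_by_shift() are invented for this example). Because the SMMU page
size is only discovered at probe time from ID1_PAGESIZE, the compiler cannot
fold a division by it into a shift on its own, whereas a right shift by the
stored pgshift always compiles to an ordinary shift instruction.

#include <assert.h>
#include <stdio.h>

/*
 * The page size is a run-time value here (cf. ID1_PAGESIZE in the last
 * hunk above), so the compiler must emit a real division; on ARM cores
 * without a hardware divider that means a call to a libgcc helper,
 * which the commit message notes is not implemented in the kernel.
 */
static unsigned long pages_by_division(unsigned long offset,
				       unsigned long pagesize)
{
	return offset / pagesize;
}

/* Storing the shift instead keeps the same calculation a plain shift. */
static unsigned long pages_by_shift(unsigned long offset,
				    unsigned long pgshift)
{
	return offset >> pgshift;
}

int main(void)
{
	unsigned long pgshift = 16;			/* 64K pages -> shift of 16 */
	unsigned long pagesize = 1UL << pgshift;	/* old 'pagesize' value */
	unsigned long offset = 5 * pagesize + 123;

	assert(pages_by_division(offset, pagesize) ==
	       pages_by_shift(offset, pgshift));

	/* Page-size multiples, as in ARM_SMMU_CB(smmu, n) and ARM_SMMU_GR1. */
	printf("cb(3)=0x%lx gr1 offset=0x%lx\n",
	       3 * (1UL << pgshift), 1UL << pgshift);
	return 0;
}

Applied to the driver, the same substitution is what turns the old
size *= (smmu->pagesize << 1) into size *= 2 << smmu->pgshift, and
(n) * (smmu)->pagesize into (n) * (1 << (smmu)->pgshift), in the hunks above.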