From patchwork Tue Aug 16 17:35:00 2016
X-Patchwork-Submitter: Christoffer Dall
X-Patchwork-Id: 74034
From: Christoffer Dall
To: vijay.kilari@gmail.com
Subject: [PATCH v2 1/2] KVM: arm/arm64: Factor out vgic_attr_regs_access functionality
Date: Tue, 16 Aug 2016 19:35:00 +0200
Message-Id: <20160816173501.21521-2-christoffer.dall@linaro.org>
X-Mailer: git-send-email 2.9.0
In-Reply-To: <20160816173501.21521-1-christoffer.dall@linaro.org>
References: <20160816173501.21521-1-christoffer.dall@linaro.org>
Cc: kvm@vger.kernel.org, Marc Zyngier, Andre Przywara, Eric Auger,
    Christoffer Dall, kvmarm@lists.cs.columbia.edu,
    linux-arm-kernel@lists.infradead.org

As we are about to deal with multiple data types and situations where
the vgic should not be initialized when doing userspace accesses on the
register attributes, factor out the functionality of
vgic_attr_regs_access into smaller bits which can be reused by a new
function later.

Signed-off-by: Christoffer Dall
---
 virt/kvm/arm/vgic/vgic-kvm-device.c | 100 ++++++++++++++++++++++++++----------
 1 file changed, 73 insertions(+), 27 deletions(-)

--
2.9.0

diff --git a/virt/kvm/arm/vgic/vgic-kvm-device.c b/virt/kvm/arm/vgic/vgic-kvm-device.c
index 1813f93..19fa331 100644
--- a/virt/kvm/arm/vgic/vgic-kvm-device.c
+++ b/virt/kvm/arm/vgic/vgic-kvm-device.c
@@ -233,6 +233,67 @@ int kvm_register_vgic_device(unsigned long type)
 	return ret;
 }
 
+struct vgic_reg_attr {
+	struct kvm_vcpu *vcpu;
+	gpa_t addr;
+};
+
+static int parse_vgic_v2_attr(struct kvm_device *dev,
+			      struct kvm_device_attr *attr,
+			      struct vgic_reg_attr *reg_attr)
+{
+	int cpuid;
+
+	cpuid = (attr->attr & KVM_DEV_ARM_VGIC_CPUID_MASK) >>
+		 KVM_DEV_ARM_VGIC_CPUID_SHIFT;
+
+	if (cpuid >= atomic_read(&dev->kvm->online_vcpus))
+		return -EINVAL;
+
+	reg_attr->vcpu = kvm_get_vcpu(dev->kvm, cpuid);
+	reg_attr->addr = attr->attr & KVM_DEV_ARM_VGIC_OFFSET_MASK;
+
+	return 0;
+}
+
+/* unlocks vcpus from @vcpu_lock_idx and smaller */
+static void unlock_vcpus(struct kvm *kvm, int vcpu_lock_idx)
+{
+	struct kvm_vcpu *tmp_vcpu;
+
+	for (; vcpu_lock_idx >= 0; vcpu_lock_idx--) {
+		tmp_vcpu = kvm_get_vcpu(kvm, vcpu_lock_idx);
+		mutex_unlock(&tmp_vcpu->mutex);
+	}
+}
+
+static void unlock_all_vcpus(struct kvm *kvm)
+{
+	unlock_vcpus(kvm, atomic_read(&kvm->online_vcpus) - 1);
+}
+
+/* Returns true if all vcpus were locked, false otherwise */
+static bool lock_all_vcpus(struct kvm *kvm)
+{
+	struct kvm_vcpu *tmp_vcpu;
+	int c;
+
+	/*
+	 * Any time a vcpu is run, vcpu_load is called which tries to grab the
+	 * vcpu->mutex.  By grabbing the vcpu->mutex of all VCPUs we ensure
+	 * that no other VCPUs are run and fiddle with the vgic state while we
+	 * access it.
+	 */
+	kvm_for_each_vcpu(c, tmp_vcpu, kvm) {
+		if (!mutex_trylock(&tmp_vcpu->mutex)) {
+			unlock_vcpus(kvm, c - 1);
+			return false;
+		}
+	}
+
+	return true;
+}
+
 /** vgic_attr_regs_access: allows user space to read/write VGIC registers
  *
  * @dev:      kvm device handle
@@ -245,15 +306,17 @@ static int vgic_attr_regs_access(struct kvm_device *dev,
 				 struct kvm_device_attr *attr,
 				 u32 *reg, bool is_write)
 {
+	struct vgic_reg_attr reg_attr;
 	gpa_t addr;
-	int cpuid, ret, c;
-	struct kvm_vcpu *vcpu, *tmp_vcpu;
-	int vcpu_lock_idx = -1;
+	struct kvm_vcpu *vcpu;
+	int ret;
 
-	cpuid = (attr->attr & KVM_DEV_ARM_VGIC_CPUID_MASK) >>
-		 KVM_DEV_ARM_VGIC_CPUID_SHIFT;
-	vcpu = kvm_get_vcpu(dev->kvm, cpuid);
-	addr = attr->attr & KVM_DEV_ARM_VGIC_OFFSET_MASK;
+	ret = parse_vgic_v2_attr(dev, attr, &reg_attr);
+	if (ret)
+		return ret;
+
+	vcpu = reg_attr.vcpu;
+	addr = reg_attr.addr;
 
 	mutex_lock(&dev->kvm->lock);
 
@@ -261,24 +324,11 @@ static int vgic_attr_regs_access(struct kvm_device *dev,
 	if (ret)
 		goto out;
 
-	if (cpuid >= atomic_read(&dev->kvm->online_vcpus)) {
-		ret = -EINVAL;
+	if (!lock_all_vcpus(dev->kvm)) {
+		ret = -EBUSY;
 		goto out;
 	}
 
-	/*
-	 * Any time a vcpu is run, vcpu_load is called which tries to grab the
-	 * vcpu->mutex.  By grabbing the vcpu->mutex of all VCPUs we ensure
-	 * that no other VCPUs are run and fiddle with the vgic state while we
-	 * access it.
-	 */
-	ret = -EBUSY;
-	kvm_for_each_vcpu(c, tmp_vcpu, dev->kvm) {
-		if (!mutex_trylock(&tmp_vcpu->mutex))
-			goto out;
-		vcpu_lock_idx = c;
-	}
-
 	switch (attr->group) {
 	case KVM_DEV_ARM_VGIC_GRP_CPU_REGS:
 		ret = vgic_v2_cpuif_uaccess(vcpu, is_write, addr, reg);
@@ -291,12 +341,8 @@ static int vgic_attr_regs_access(struct kvm_device *dev,
 		break;
 	}
 
+	unlock_all_vcpus(dev->kvm);
 out:
-	for (; vcpu_lock_idx >= 0; vcpu_lock_idx--) {
-		tmp_vcpu = kvm_get_vcpu(dev->kvm, vcpu_lock_idx);
-		mutex_unlock(&tmp_vcpu->mutex);
-	}
-
 	mutex_unlock(&dev->kvm->lock);
 	return ret;
 }
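
To illustrate how the factored-out helpers are meant to compose, here is a
minimal sketch of a caller that reuses parse_vgic_v2_attr(), lock_all_vcpus()
and unlock_all_vcpus(). The function name example_vgic_uaccess and the choice
of vgic_v2_dist_uaccess() as the access callback are illustrative assumptions
for this sketch, not part of the patch itself:

/*
 * Illustrative sketch only (not part of this patch): a hypothetical caller
 * showing how the new helpers compose.  The function name and the use of
 * vgic_v2_dist_uaccess() as the access callback are assumptions made for
 * the example.
 */
static int example_vgic_uaccess(struct kvm_device *dev,
				struct kvm_device_attr *attr,
				u32 *reg, bool is_write)
{
	struct vgic_reg_attr reg_attr;
	int ret;

	/* Translate attr->attr into a target vcpu and register offset. */
	ret = parse_vgic_v2_attr(dev, attr, &reg_attr);
	if (ret)
		return ret;

	mutex_lock(&dev->kvm->lock);

	/* Keep all VCPUs from running while the vgic state is accessed. */
	if (!lock_all_vcpus(dev->kvm)) {
		ret = -EBUSY;
		goto out;
	}

	ret = vgic_v2_dist_uaccess(reg_attr.vcpu, is_write,
				   reg_attr.addr, reg);

	unlock_all_vcpus(dev->kvm);
out:
	mutex_unlock(&dev->kvm->lock);
	return ret;
}

Compared to the previous open-coded locking, a failed mutex_trylock() no
longer requires vcpu_lock_idx bookkeeping in every caller: lock_all_vcpus()
backs out its own partial locking and the caller only sees a boolean.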