From patchwork Fri Jan 23 10:02:44 2015
X-Patchwork-Submitter: Christoffer Dall
X-Patchwork-Id: 43586
From: Christoffer Dall
To: Paolo Bonzini, kvmarm@lists.cs.columbia.edu,
	linux-arm-kernel@lists.infradead.org
Subject: [GIT PULL 15/36] arm/arm64: KVM: rework MPIDR assignment and add accessors
Date: Fri, 23 Jan 2015 11:02:44 +0100
Message-Id: <1422007385-14730-16-git-send-email-christoffer.dall@linaro.org>
In-Reply-To: <1422007385-14730-1-git-send-email-christoffer.dall@linaro.org>
References: <1422007385-14730-1-git-send-email-christoffer.dall@linaro.org>
Cc: Marc Zyngier, Andre Przywara, Christoffer Dall, kvm@vger.kernel.org

From: Andre Przywara

The virtual MPIDR registers (containing topology information) for the
guest are currently mapped linearly to the vcpu_id. Improve this
mapping for arm64 by using three levels to not artificially limit the
number of vCPUs.

To help this, change and rename the kvm_vcpu_get_mpidr() function to
mask off the non-affinity bits in the MPIDR register. Also add an
accessor to later allow easier access to a vCPU with a given MPIDR.
Use this new accessor in the PSCI emulation.

Signed-off-by: Andre Przywara
Reviewed-by: Christoffer Dall
Reviewed-by: Marc Zyngier
Signed-off-by: Christoffer Dall
---
 arch/arm/include/asm/kvm_emulate.h   |  5 +++--
 arch/arm/include/asm/kvm_host.h      |  2 ++
 arch/arm/kvm/arm.c                   | 13 +++++++++++++
 arch/arm/kvm/psci.c                  | 17 +++++------------
 arch/arm64/include/asm/kvm_emulate.h |  5 +++--
 arch/arm64/include/asm/kvm_host.h    |  2 ++
 arch/arm64/kvm/sys_regs.c            | 13 +++++++++++--
 7 files changed, 39 insertions(+), 18 deletions(-)

diff --git a/arch/arm/include/asm/kvm_emulate.h b/arch/arm/include/asm/kvm_emulate.h
index 66ce176..c528615 100644
--- a/arch/arm/include/asm/kvm_emulate.h
+++ b/arch/arm/include/asm/kvm_emulate.h
@@ -23,6 +23,7 @@
 #include
 #include
 #include
+#include
 
 unsigned long *vcpu_reg(struct kvm_vcpu *vcpu, u8 reg_num);
 unsigned long *vcpu_spsr(struct kvm_vcpu *vcpu);
@@ -167,9 +168,9 @@ static inline u32 kvm_vcpu_hvc_get_imm(struct kvm_vcpu *vcpu)
 	return kvm_vcpu_get_hsr(vcpu) & HSR_HVC_IMM_MASK;
 }
 
-static inline unsigned long kvm_vcpu_get_mpidr(struct kvm_vcpu *vcpu)
+static inline unsigned long kvm_vcpu_get_mpidr_aff(struct kvm_vcpu *vcpu)
 {
-	return vcpu->arch.cp15[c0_MPIDR];
+	return vcpu->arch.cp15[c0_MPIDR] & MPIDR_HWID_BITMASK;
 }
 
 static inline void kvm_vcpu_set_be(struct kvm_vcpu *vcpu)
diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index 7d07eb8..2fa5174 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -236,6 +236,8 @@ int kvm_perf_teardown(void);
 
 void kvm_mmu_wp_memory_region(struct kvm *kvm, int slot);
 
+struct kvm_vcpu *kvm_mpidr_to_vcpu(struct kvm *kvm, unsigned long mpidr);
+
 static inline void kvm_arch_hardware_disable(void) {}
 static inline void kvm_arch_hardware_unsetup(void) {}
 static inline void kvm_arch_sync_events(struct kvm *kvm) {}
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index 74603a0..a7b94ec 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -1075,6 +1075,19 @@ static void check_kvm_target_cpu(void *ret)
 	*(int *)ret = kvm_target_cpu();
 }
 
+struct kvm_vcpu *kvm_mpidr_to_vcpu(struct kvm *kvm, unsigned long mpidr)
+{
+	struct kvm_vcpu *vcpu;
+	int i;
+
+	mpidr &= MPIDR_HWID_BITMASK;
+	kvm_for_each_vcpu(i, vcpu, kvm) {
+		if (mpidr == kvm_vcpu_get_mpidr_aff(vcpu))
+			return vcpu;
+	}
+	return NULL;
+}
+
 /**
  * Initialize Hyp-mode and memory mappings on all CPUs.
  */
diff --git a/arch/arm/kvm/psci.c b/arch/arm/kvm/psci.c
index 58cb324..02fa8ef 100644
--- a/arch/arm/kvm/psci.c
+++ b/arch/arm/kvm/psci.c
@@ -22,6 +22,7 @@
 #include
 #include
 #include
+#include
 
 /*
  * This is an implementation of the Power State Coordination Interface
@@ -66,25 +67,17 @@ static void kvm_psci_vcpu_off(struct kvm_vcpu *vcpu)
 static unsigned long kvm_psci_vcpu_on(struct kvm_vcpu *source_vcpu)
 {
 	struct kvm *kvm = source_vcpu->kvm;
-	struct kvm_vcpu *vcpu = NULL, *tmp;
+	struct kvm_vcpu *vcpu = NULL;
 	wait_queue_head_t *wq;
 	unsigned long cpu_id;
 	unsigned long context_id;
-	unsigned long mpidr;
 	phys_addr_t target_pc;
-	int i;
 
-	cpu_id = *vcpu_reg(source_vcpu, 1);
+	cpu_id = *vcpu_reg(source_vcpu, 1) & MPIDR_HWID_BITMASK;
 	if (vcpu_mode_is_32bit(source_vcpu))
 		cpu_id &= ~((u32) 0);
 
-	kvm_for_each_vcpu(i, tmp, kvm) {
-		mpidr = kvm_vcpu_get_mpidr(tmp);
-		if ((mpidr & MPIDR_HWID_BITMASK) == (cpu_id & MPIDR_HWID_BITMASK)) {
-			vcpu = tmp;
-			break;
-		}
-	}
+	vcpu = kvm_mpidr_to_vcpu(kvm, cpu_id);
 
 	/*
 	 * Make sure the caller requested a valid CPU and that the CPU is
@@ -155,7 +148,7 @@ static unsigned long kvm_psci_vcpu_affinity_info(struct kvm_vcpu *vcpu)
 	 * then ON else OFF
 	 */
 	kvm_for_each_vcpu(i, tmp, kvm) {
-		mpidr = kvm_vcpu_get_mpidr(tmp);
+		mpidr = kvm_vcpu_get_mpidr_aff(tmp);
 		if (((mpidr & target_affinity_mask) == target_affinity) &&
 		    !tmp->arch.pause) {
 			return PSCI_0_2_AFFINITY_LEVEL_ON;
diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index a6fa2d2..b3f1def 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -27,6 +27,7 @@
 #include
 #include
 #include
+#include
 
 unsigned long *vcpu_reg32(const struct kvm_vcpu *vcpu, u8 reg_num);
 unsigned long *vcpu_spsr32(const struct kvm_vcpu *vcpu);
@@ -192,9 +193,9 @@ static inline u8 kvm_vcpu_trap_get_fault_type(const struct kvm_vcpu *vcpu)
 	return kvm_vcpu_get_hsr(vcpu) & ESR_EL2_FSC_TYPE;
 }
 
-static inline unsigned long kvm_vcpu_get_mpidr(struct kvm_vcpu *vcpu)
+static inline unsigned long kvm_vcpu_get_mpidr_aff(struct kvm_vcpu *vcpu)
 {
-	return vcpu_sys_reg(vcpu, MPIDR_EL1);
+	return vcpu_sys_reg(vcpu, MPIDR_EL1) & MPIDR_HWID_BITMASK;
 }
 
 static inline void kvm_vcpu_set_be(struct kvm_vcpu *vcpu)
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 012af6c..ff8ee3e 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -207,6 +207,8 @@ int handle_exit(struct kvm_vcpu *vcpu, struct kvm_run *run,
 int kvm_perf_init(void);
 int kvm_perf_teardown(void);
 
+struct kvm_vcpu *kvm_mpidr_to_vcpu(struct kvm *kvm, unsigned long mpidr);
+
 static inline void __cpu_init_hyp_mode(phys_addr_t boot_pgd_ptr,
 				       phys_addr_t pgd_ptr,
 				       unsigned long hyp_stack_ptr,
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 3d7c2df..136e679 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -252,10 +252,19 @@ static void reset_amair_el1(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
 
 static void reset_mpidr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
 {
+	u64 mpidr;
+
 	/*
-	 * Simply map the vcpu_id into the Aff0 field of the MPIDR.
+	 * Map the vcpu_id into the first three affinity level fields of
+	 * the MPIDR. We limit the number of VCPUs in level 0 due to a
+	 * limitation to 16 CPUs in that level in the ICC_SGIxR registers
+	 * of the GICv3 to be able to address each CPU directly when
+	 * sending IPIs.
 	 */
-	vcpu_sys_reg(vcpu, MPIDR_EL1) = (1UL << 31) | (vcpu->vcpu_id & 0xff);
+	mpidr = (vcpu->vcpu_id & 0x0f) << MPIDR_LEVEL_SHIFT(0);
+	mpidr |= ((vcpu->vcpu_id >> 4) & 0xff) << MPIDR_LEVEL_SHIFT(1);
+	mpidr |= ((vcpu->vcpu_id >> 12) & 0xff) << MPIDR_LEVEL_SHIFT(2);
+	vcpu_sys_reg(vcpu, MPIDR_EL1) = (1ULL << 31) | mpidr;
 }
 
 /* Silly macro to expand the DBG{BCR,BVR,WVR,WCR}n_EL1 registers in one go */
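
[Editor's note] For reference, the vcpu_id -> MPIDR mapping that the new
reset_mpidr() installs can be sketched as a small stand-alone C program.
This is only an illustration, not part of the patch: vcpu_id_to_mpidr()
and AFF_SHIFT() are made-up names, with AFF_SHIFT() standing in for the
arm64 MPIDR_LEVEL_SHIFT() macro under the assumption of 8 bits per
affinity level.

#include <stdint.h>
#include <stdio.h>

/* Stand-in for MPIDR_LEVEL_SHIFT(); assumes 8 bits per affinity level. */
#define AFF_SHIFT(level)	(8 * (level))

/*
 * Mirror of the mapping in reset_mpidr(): 4 bits of vcpu_id go to Aff0
 * (so GICv3 ICC_SGIxR can still address each CPU directly), the next
 * 8 bits to Aff1 and the following 8 bits to Aff2.
 */
static uint64_t vcpu_id_to_mpidr(uint32_t vcpu_id)
{
	uint64_t mpidr;

	mpidr  = (uint64_t)(vcpu_id & 0x0f) << AFF_SHIFT(0);
	mpidr |= (uint64_t)((vcpu_id >> 4) & 0xff) << AFF_SHIFT(1);
	mpidr |= (uint64_t)((vcpu_id >> 12) & 0xff) << AFF_SHIFT(2);

	return (1ULL << 31) | mpidr;	/* bit 31 is RES1, as in the patch */
}

int main(void)
{
	uint32_t id;

	/* vcpu 0..15 differ only in Aff0; vcpu 16 rolls over into Aff1. */
	for (id = 0; id < 18; id++)
		printf("vcpu %2u -> MPIDR 0x%09llx\n", id,
		       (unsigned long long)vcpu_id_to_mpidr(id));
	return 0;
}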