From patchwork Sun May 31 04:27:09 2015
X-Patchwork-Submitter: Zhichao Huang
X-Patchwork-Id: 49264
From: Zhichao Huang <zhichao.huang@linaro.org>
To: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	kvmarm@lists.cs.columbia.edu, christoffer.dall@linaro.org,
	marc.zyngier@arm.com, alex.bennee@linaro.org, will.deacon@arm.com
Cc: huangzhichao@huawei.com, Zhichao Huang
Subject: [PATCH v2 08/11] KVM: arm: implement dirty bit mechanism for debug registers
Date: Sun, 31 May 2015 12:27:09 +0800
Message-Id: <1433046432-1824-9-git-send-email-zhichao.huang@linaro.org>
In-Reply-To: <1433046432-1824-1-git-send-email-zhichao.huang@linaro.org>
References: <1433046432-1824-1-git-send-email-zhichao.huang@linaro.org>
The trapping code keeps track of the state of the debug registers,
allowing the switch code to implement a lazy switching strategy.

Signed-off-by: Zhichao Huang <zhichao.huang@linaro.org>
---
 arch/arm/include/asm/kvm_asm.h  |  3 +++
 arch/arm/include/asm/kvm_host.h |  3 +++
 arch/arm/kernel/asm-offsets.c   |  1 +
 arch/arm/kvm/coproc.c           | 32 +++++++++++++++++++++++++++++--
 arch/arm/kvm/interrupts_head.S  | 42 +++++++++++++++++++++++++++++++++++++++++
 5 files changed, 79 insertions(+), 2 deletions(-)

diff --git a/arch/arm/include/asm/kvm_asm.h b/arch/arm/include/asm/kvm_asm.h
index ba65e05..4fb64cf 100644
--- a/arch/arm/include/asm/kvm_asm.h
+++ b/arch/arm/include/asm/kvm_asm.h
@@ -64,6 +64,9 @@
 #define cp14_DBGDSCRext	65	/* Debug Status and Control external */
 #define NR_CP14_REGS	66	/* Number of regs (incl. invalid) */
 
+#define KVM_ARM_DEBUG_DIRTY_SHIFT	0
+#define KVM_ARM_DEBUG_DIRTY		(1 << KVM_ARM_DEBUG_DIRTY_SHIFT)
+
 #define ARM_EXCEPTION_RESET	0
 #define ARM_EXCEPTION_UNDEFINED	1
 #define ARM_EXCEPTION_SOFTWARE	2
diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index 3d16820..09b54bf 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -127,6 +127,9 @@ struct kvm_vcpu_arch {
 	/* System control coprocessor (cp14) */
 	u32 cp14[NR_CP14_REGS];
 
+	/* Debug state */
+	u32 debug_flags;
+
 	/*
 	 * Anything that is not used directly from assembly code goes
 	 * here.
diff --git a/arch/arm/kernel/asm-offsets.c b/arch/arm/kernel/asm-offsets.c
index 9158de0..e876109 100644
--- a/arch/arm/kernel/asm-offsets.c
+++ b/arch/arm/kernel/asm-offsets.c
@@ -185,6 +185,7 @@ int main(void)
 	DEFINE(VCPU_FIQ_REGS,	offsetof(struct kvm_vcpu, arch.regs.fiq_regs));
 	DEFINE(VCPU_PC,		offsetof(struct kvm_vcpu, arch.regs.usr_regs.ARM_pc));
 	DEFINE(VCPU_CPSR,	offsetof(struct kvm_vcpu, arch.regs.usr_regs.ARM_cpsr));
+	DEFINE(VCPU_DEBUG_FLAGS, offsetof(struct kvm_vcpu, arch.debug_flags));
 	DEFINE(VCPU_HCR,	offsetof(struct kvm_vcpu, arch.hcr));
 	DEFINE(VCPU_IRQ_LINES,	offsetof(struct kvm_vcpu, arch.irq_lines));
 	DEFINE(VCPU_HSR,	offsetof(struct kvm_vcpu, arch.fault.hsr));
diff --git a/arch/arm/kvm/coproc.c b/arch/arm/kvm/coproc.c
index eeee648..1cc74d8 100644
--- a/arch/arm/kvm/coproc.c
+++ b/arch/arm/kvm/coproc.c
@@ -220,14 +220,42 @@ bool access_vm_reg(struct kvm_vcpu *vcpu,
 	return true;
 }
 
+/*
+ * We want to avoid world-switching all the DBG registers all the
+ * time:
+ *
+ * - If we've touched any debug register, it is likely that we're
+ *   going to touch more of them. It then makes sense to disable the
+ *   traps and start doing the save/restore dance.
+ * - If debug is active (ARM_DSCR_MDBGEN set), it is then mandatory
+ *   to save/restore the registers, as the guest depends on them.
+ *
+ * For this, we use a DIRTY bit, indicating the guest has modified the
+ * debug registers, used as follows:
+ *
+ * On guest entry:
+ * - If the dirty bit is set (because we're coming back from trapping),
+ *   disable the traps, save host registers, restore guest registers.
+ * - If debug is actively in use (ARM_DSCR_MDBGEN set),
+ *   set the dirty bit, disable the traps, save host registers,
+ *   restore guest registers.
+ * - Otherwise, enable the traps
+ *
+ * On guest exit:
+ * - If the dirty bit is set, save guest registers, restore host
+ *   registers and clear the dirty bit. This ensures that the host can
+ *   now use the debug registers.
+ */
 static bool trap_debug32(struct kvm_vcpu *vcpu,
 			const struct coproc_params *p,
 			const struct coproc_reg *r)
 {
-	if (p->is_write)
+	if (p->is_write) {
 		vcpu->arch.cp14[r->reg] = *vcpu_reg(vcpu, p->Rt1);
-	else
+		vcpu->arch.debug_flags |= KVM_ARM_DEBUG_DIRTY;
+	} else {
 		*vcpu_reg(vcpu, p->Rt1) = vcpu->arch.cp14[r->reg];
+	}
 	return true;
 }
diff --git a/arch/arm/kvm/interrupts_head.S b/arch/arm/kvm/interrupts_head.S
index 35e4a3a..3a0128c 100644
--- a/arch/arm/kvm/interrupts_head.S
+++ b/arch/arm/kvm/interrupts_head.S
@@ -1,4 +1,6 @@
 #include
+#include
+#include
 #include
 
 #define VCPU_USR_REG(_reg_nr)	(VCPU_USR_REGS + (_reg_nr * 4))
@@ -396,6 +398,46 @@ vcpu	.req	r0		@ vcpu pointer always in r0
 	mcr	p15, 2, r12, c0, c0, 0	@ CSSELR
 .endm
 
+/* Assume vcpu pointer in vcpu reg, clobbers r5 */
+.macro skip_debug_state target
+	ldr	r5, [vcpu, #VCPU_DEBUG_FLAGS]
+	cmp	r5, #KVM_ARM_DEBUG_DIRTY
+	bne	\target
+1:
+.endm
+
+/* Compute debug state: If ARM_DSCR_MDBGEN or KVM_ARM_DEBUG_DIRTY
+ * is set, we do a full save/restore cycle and disable trapping.
+ *
+ * Assumes vcpu pointer in vcpu reg
+ *
+ * Clobbers r5, r6
+ */
+.macro compute_debug_state target
+	// Check the state of MDSCR_EL1
+	ldr	r5, [vcpu, #CP14_OFFSET(cp14_DBGDSCRext)]
+	and	r6, r5, #ARM_DSCR_MDBGEN
+	cmp	r6, #0
+	beq	9998f			// Nothing to see there
+
+	// If ARM_DSCR_MDBGEN bit was set, we must set the flag
+	mov	r5, #KVM_ARM_DEBUG_DIRTY
+	str	r5, [vcpu, #VCPU_DEBUG_FLAGS]
+	b	9999f			// Don't skip restore
+
+9998:
+	// Otherwise load the flags from memory in case we recently
+	// trapped
+	skip_debug_state \target
+9999:
+.endm
+
+/* Assume vcpu pointer in vcpu reg, clobbers r5 */
+.macro clear_debug_dirty_bit
+	mov	r5, #0
+	str	r5, [vcpu, #VCPU_DEBUG_FLAGS]
+.endm
+
 /*
  * Save the VGIC CPU state into memory
 *