From patchwork Sat May 16 04:45:49 2015
X-Patchwork-Submitter: Zhichao Huang
X-Patchwork-Id: 48586
From: Zhichao Huang <zhichao.huang@linaro.org>
To: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	kvmarm@lists.cs.columbia.edu, christoffer.dall@linaro.org,
	marc.zyngier@arm.com, alex.bennee@linaro.org
Cc: huangzhichao@huawei.com, Zhichao Huang
Subject: [PATCH 08/10] KVM: arm: implement dirty bit mechanism for debug registers
Date: Sat, 16 May 2015 12:45:49 +0800
Message-Id: <1431751551-4788-9-git-send-email-zhichao.huang@linaro.org>
In-Reply-To: <1431751551-4788-1-git-send-email-zhichao.huang@linaro.org>
References: <1431751551-4788-1-git-send-email-zhichao.huang@linaro.org>
The trapping code keeps track of the state of the debug registers,
allowing for the switch code to implement a lazy switching strategy.

Signed-off-by: Zhichao Huang <zhichao.huang@linaro.org>
---
 arch/arm/include/asm/kvm_asm.h  |  3 +++
 arch/arm/include/asm/kvm_host.h |  3 +++
 arch/arm/kernel/asm-offsets.c   |  1 +
 arch/arm/kvm/coproc.c           | 32 +++++++++++++++++++++++++++++--
 arch/arm/kvm/interrupts_head.S  | 42 +++++++++++++++++++++++++++++++++++++++++
 5 files changed, 79 insertions(+), 2 deletions(-)

diff --git a/arch/arm/include/asm/kvm_asm.h b/arch/arm/include/asm/kvm_asm.h
index ba65e05..4fb64cf 100644
--- a/arch/arm/include/asm/kvm_asm.h
+++ b/arch/arm/include/asm/kvm_asm.h
@@ -64,6 +64,9 @@
 #define cp14_DBGDSCRext	65	/* Debug Status and Control external */
 #define NR_CP14_REGS	66	/* Number of regs (incl. invalid) */
 
+#define KVM_ARM_DEBUG_DIRTY_SHIFT	0
+#define KVM_ARM_DEBUG_DIRTY		(1 << KVM_ARM_DEBUG_DIRTY_SHIFT)
+
 #define ARM_EXCEPTION_RESET	0
 #define ARM_EXCEPTION_UNDEFINED	1
 #define ARM_EXCEPTION_SOFTWARE	2
diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index 3d16820..09b54bf 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -127,6 +127,9 @@ struct kvm_vcpu_arch {
 	/* System control coprocessor (cp14) */
 	u32 cp14[NR_CP14_REGS];
 
+	/* Debug state */
+	u32 debug_flags;
+
 	/*
 	 * Anything that is not used directly from assembly code goes
 	 * here.
diff --git a/arch/arm/kernel/asm-offsets.c b/arch/arm/kernel/asm-offsets.c
index 9158de0..e876109 100644
--- a/arch/arm/kernel/asm-offsets.c
+++ b/arch/arm/kernel/asm-offsets.c
@@ -185,6 +185,7 @@ int main(void)
   DEFINE(VCPU_FIQ_REGS,		offsetof(struct kvm_vcpu, arch.regs.fiq_regs));
   DEFINE(VCPU_PC,		offsetof(struct kvm_vcpu, arch.regs.usr_regs.ARM_pc));
   DEFINE(VCPU_CPSR,		offsetof(struct kvm_vcpu, arch.regs.usr_regs.ARM_cpsr));
+  DEFINE(VCPU_DEBUG_FLAGS,	offsetof(struct kvm_vcpu, arch.debug_flags));
   DEFINE(VCPU_HCR,		offsetof(struct kvm_vcpu, arch.hcr));
   DEFINE(VCPU_IRQ_LINES,	offsetof(struct kvm_vcpu, arch.irq_lines));
   DEFINE(VCPU_HSR,		offsetof(struct kvm_vcpu, arch.fault.hsr));
diff --git a/arch/arm/kvm/coproc.c b/arch/arm/kvm/coproc.c
index cba81ed..49a30dd 100644
--- a/arch/arm/kvm/coproc.c
+++ b/arch/arm/kvm/coproc.c
@@ -220,14 +220,42 @@ bool access_vm_reg(struct kvm_vcpu *vcpu,
 	return true;
 }
 
+/*
+ * We want to avoid world-switching all the DBG registers all the
+ * time:
+ *
+ * - If we've touched any debug register, it is likely that we're
+ *   going to touch more of them. It then makes sense to disable the
+ *   traps and start doing the save/restore dance.
+ * - If debug is active (ARM_DSCR_MDBGEN set), it is then mandatory
+ *   to save/restore the registers, as the guest depends on them.
+ *
+ * For this, we use a DIRTY bit, indicating that the guest has modified
+ * the debug registers, used as follows:
+ *
+ * On guest entry:
+ * - If the dirty bit is set (because we're coming back from trapping),
+ *   disable the traps, save host registers, restore guest registers.
+ * - If debug is actively in use (ARM_DSCR_MDBGEN set),
+ *   set the dirty bit, disable the traps, save host registers,
+ *   restore guest registers.
+ * - Otherwise, enable the traps.
+ *
+ * On guest exit:
+ * - If the dirty bit is set, save guest registers, restore host
+ *   registers and clear the dirty bit. This ensures that the host can
+ *   now use the debug registers.
+ */
 static bool trap_debug32(struct kvm_vcpu *vcpu,
 			 const struct coproc_params *p,
 			 const struct coproc_reg *r)
 {
-	if (p->is_write)
+	if (p->is_write) {
 		vcpu->arch.cp14[r->reg] = *vcpu_reg(vcpu, p->Rt1);
-	else
+		vcpu->arch.debug_flags |= KVM_ARM_DEBUG_DIRTY;
+	} else {
 		*vcpu_reg(vcpu, p->Rt1) = vcpu->arch.cp14[r->reg];
+	}
 
 	return true;
 }
diff --git a/arch/arm/kvm/interrupts_head.S b/arch/arm/kvm/interrupts_head.S
index 35e4a3a..3a0128c 100644
--- a/arch/arm/kvm/interrupts_head.S
+++ b/arch/arm/kvm/interrupts_head.S
@@ -1,4 +1,6 @@
 #include
+#include
+#include
 #include
 
 #define VCPU_USR_REG(_reg_nr)	(VCPU_USR_REGS + (_reg_nr * 4))
@@ -396,6 +398,46 @@ vcpu	.req	r0		@ vcpu pointer always in r0
 	mcr	p15, 2, r12, c0, c0, 0	@ CSSELR
 .endm
 
+/* Assume vcpu pointer in vcpu reg, clobbers r5 */
+.macro skip_debug_state target
+	ldr	r5, [vcpu, #VCPU_DEBUG_FLAGS]
+	cmp	r5, #KVM_ARM_DEBUG_DIRTY
+	bne	\target
+1:
+.endm
+
+/* Compute debug state: If ARM_DSCR_MDBGEN or KVM_ARM_DEBUG_DIRTY
+ * is set, we do a full save/restore cycle and disable trapping.
+ *
+ * Assumes vcpu pointer in vcpu reg
+ *
+ * Clobbers r5, r6
+ */
+.macro compute_debug_state target
+	// Check the state of MDSCR_EL1
+	ldr	r5, [vcpu, #CP14_OFFSET(cp14_DBGDSCRext)]
+	and	r6, r5, #ARM_DSCR_MDBGEN
+	cmp	r6, #0
+	beq	9998f		// Nothing to see there
+
+	// If ARM_DSCR_MDBGEN bit was set, we must set the flag
+	mov	r5, #KVM_ARM_DEBUG_DIRTY
+	str	r5, [vcpu, #VCPU_DEBUG_FLAGS]
+	b	9999f		// Don't skip restore
+
+9998:
+	// Otherwise load the flags from memory in case we recently
+	// trapped
+	skip_debug_state \target
+9999:
+.endm
+
+/* Assume vcpu pointer in vcpu reg, clobbers r5 */
+.macro clear_debug_dirty_bit
+	mov	r5, #0
+	str	r5, [vcpu, #VCPU_DEBUG_FLAGS]
+.endm
+
 /*
  * Save the VGIC CPU state into memory
  *