From patchwork Tue Nov 25 16:10:04 2014
X-Patchwork-Submitter: Alex Bennée <alex.bennee@linaro.org>
X-Patchwork-Id: 41480
From: Alex Bennée <alex.bennee@linaro.org>
To: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	kvmarm@lists.cs.columbia.edu, christoffer.dall@linaro.org,
	marc.zyngier@arm.com, peter.maydell@linaro.org, agraf@suse.de
Cc: Catalin Marinas, Gleb Natapov, jan.kiszka@siemens.com, Will Deacon,
	open list, dahi@linux.vnet.ibm.com, r65777@freescale.com,
	pbonzini@redhat.com, bp@suse.de, Alex Bennée
Subject: [PATCH 6/7] KVM: arm64: re-factor hyp.S debug register code
Date: Tue, 25 Nov 2014 16:10:04 +0000
Message-Id: <1416931805-23223-7-git-send-email-alex.bennee@linaro.org>
X-Mailer: git-send-email 2.1.3
In-Reply-To: <1416931805-23223-1-git-send-email-alex.bennee@linaro.org>
References: <1416931805-23223-1-git-send-email-alex.bennee@linaro.org>

This is a precursor to sharing the code with the guest debug support.
It replaces the big macro that fishes data out of a fixed location with
a more general helper macro that restores a set of debug registers. The
helper uses macro substitution so it can be reused for both the debug
control and value registers. It does, however, rely on the debug
registers being 64-bit aligned (as they happen to be in the hyp ABI).
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
---

diff --git a/arch/arm64/kvm/hyp.S b/arch/arm64/kvm/hyp.S
index c0bc218..b38ce3d 100644
--- a/arch/arm64/kvm/hyp.S
+++ b/arch/arm64/kvm/hyp.S
@@ -490,65 +490,12 @@
 	msr	mdscr_el1, x25
 .endm
 
-.macro restore_debug
-	// x2: base address for cpu context
-	// x3: tmp register
-
-	mrs	x26, id_aa64dfr0_el1
-	ubfx	x24, x26, #12, #4	// Extract BRPs
-	ubfx	x25, x26, #20, #4	// Extract WRPs
-	mov	w26, #15
-	sub	w24, w26, w24	// How many BPs to skip
-	sub	w25, w26, w25	// How many WPs to skip
-
-	add	x3, x2, #CPU_SYSREG_OFFSET(DBGBCR0_EL1)
-
-	adr	x26, 1f
-	add	x26, x26, x24, lsl #2
-	br	x26
-1:
-	ldr	x20, [x3, #(15 * 8)]
-	ldr	x19, [x3, #(14 * 8)]
-	ldr	x18, [x3, #(13 * 8)]
-	ldr	x17, [x3, #(12 * 8)]
-	ldr	x16, [x3, #(11 * 8)]
-	ldr	x15, [x3, #(10 * 8)]
-	ldr	x14, [x3, #(9 * 8)]
-	ldr	x13, [x3, #(8 * 8)]
-	ldr	x12, [x3, #(7 * 8)]
-	ldr	x11, [x3, #(6 * 8)]
-	ldr	x10, [x3, #(5 * 8)]
-	ldr	x9, [x3, #(4 * 8)]
-	ldr	x8, [x3, #(3 * 8)]
-	ldr	x7, [x3, #(2 * 8)]
-	ldr	x6, [x3, #(1 * 8)]
-	ldr	x5, [x3, #(0 * 8)]
-
+.macro setup_debug_registers type
+	// x3: pointer to register set
+	// x4: number of registers to copy
+	// x5..x20/x26 trashed
 	adr	x26, 1f
-	add	x26, x26, x24, lsl #2
-	br	x26
-1:
-	msr	dbgbcr15_el1, x20
-	msr	dbgbcr14_el1, x19
-	msr	dbgbcr13_el1, x18
-	msr	dbgbcr12_el1, x17
-	msr	dbgbcr11_el1, x16
-	msr	dbgbcr10_el1, x15
-	msr	dbgbcr9_el1, x14
-	msr	dbgbcr8_el1, x13
-	msr	dbgbcr7_el1, x12
-	msr	dbgbcr6_el1, x11
-	msr	dbgbcr5_el1, x10
-	msr	dbgbcr4_el1, x9
-	msr	dbgbcr3_el1, x8
-	msr	dbgbcr2_el1, x7
-	msr	dbgbcr1_el1, x6
-	msr	dbgbcr0_el1, x5
-
-	add	x3, x2, #CPU_SYSREG_OFFSET(DBGBVR0_EL1)
-
-	adr	x26, 1f
-	add	x26, x26, x24, lsl #2
+	add	x26, x26, x4, lsl #2
 	br	x26
 1:
 	ldr	x20, [x3, #(15 * 8)]
@@ -569,116 +516,25 @@
 	ldr	x5, [x3, #(0 * 8)]
 
 	adr	x26, 1f
-	add	x26, x26, x24, lsl #2
-	br	x26
-1:
-	msr	dbgbvr15_el1, x20
-	msr	dbgbvr14_el1, x19
-	msr	dbgbvr13_el1, x18
-	msr	dbgbvr12_el1, x17
-	msr	dbgbvr11_el1, x16
-	msr	dbgbvr10_el1, x15
-	msr	dbgbvr9_el1, x14
-	msr	dbgbvr8_el1, x13
-	msr	dbgbvr7_el1, x12
-	msr	dbgbvr6_el1, x11
-	msr	dbgbvr5_el1, x10
-	msr	dbgbvr4_el1, x9
-	msr	dbgbvr3_el1, x8
-	msr	dbgbvr2_el1, x7
-	msr	dbgbvr1_el1, x6
-	msr	dbgbvr0_el1, x5
-
-	add	x3, x2, #CPU_SYSREG_OFFSET(DBGWCR0_EL1)
-
-	adr	x26, 1f
-	add	x26, x26, x25, lsl #2
+	add	x26, x26, x4, lsl #2
 	br	x26
 1:
-	ldr	x20, [x3, #(15 * 8)]
-	ldr	x19, [x3, #(14 * 8)]
-	ldr	x18, [x3, #(13 * 8)]
-	ldr	x17, [x3, #(12 * 8)]
-	ldr	x16, [x3, #(11 * 8)]
-	ldr	x15, [x3, #(10 * 8)]
-	ldr	x14, [x3, #(9 * 8)]
-	ldr	x13, [x3, #(8 * 8)]
-	ldr	x12, [x3, #(7 * 8)]
-	ldr	x11, [x3, #(6 * 8)]
-	ldr	x10, [x3, #(5 * 8)]
-	ldr	x9, [x3, #(4 * 8)]
-	ldr	x8, [x3, #(3 * 8)]
-	ldr	x7, [x3, #(2 * 8)]
-	ldr	x6, [x3, #(1 * 8)]
-	ldr	x5, [x3, #(0 * 8)]
-
-	adr	x26, 1f
-	add	x26, x26, x25, lsl #2
-	br	x26
-1:
-	msr	dbgwcr15_el1, x20
-	msr	dbgwcr14_el1, x19
-	msr	dbgwcr13_el1, x18
-	msr	dbgwcr12_el1, x17
-	msr	dbgwcr11_el1, x16
-	msr	dbgwcr10_el1, x15
-	msr	dbgwcr9_el1, x14
-	msr	dbgwcr8_el1, x13
-	msr	dbgwcr7_el1, x12
-	msr	dbgwcr6_el1, x11
-	msr	dbgwcr5_el1, x10
-	msr	dbgwcr4_el1, x9
-	msr	dbgwcr3_el1, x8
-	msr	dbgwcr2_el1, x7
-	msr	dbgwcr1_el1, x6
-	msr	dbgwcr0_el1, x5
-
-	add	x3, x2, #CPU_SYSREG_OFFSET(DBGWVR0_EL1)
-
-	adr	x26, 1f
-	add	x26, x26, x25, lsl #2
-	br	x26
-1:
-	ldr	x20, [x3, #(15 * 8)]
-	ldr	x19, [x3, #(14 * 8)]
-	ldr	x18, [x3, #(13 * 8)]
-	ldr	x17, [x3, #(12 * 8)]
-	ldr	x16, [x3, #(11 * 8)]
-	ldr	x15, [x3, #(10 * 8)]
-	ldr	x14, [x3, #(9 * 8)]
-	ldr	x13, [x3, #(8 * 8)]
-	ldr	x12, [x3, #(7 * 8)]
-	ldr	x11, [x3, #(6 * 8)]
-	ldr	x10, [x3, #(5 * 8)]
-	ldr	x9, [x3, #(4 * 8)]
-	ldr	x8, [x3, #(3 * 8)]
-	ldr	x7, [x3, #(2 * 8)]
-	ldr	x6, [x3, #(1 * 8)]
-	ldr	x5, [x3, #(0 * 8)]
-
-	adr	x26, 1f
-	add	x26, x26, x25, lsl #2
-	br	x26
-1:
-	msr	dbgwvr15_el1, x20
-	msr	dbgwvr14_el1, x19
-	msr	dbgwvr13_el1, x18
-	msr	dbgwvr12_el1, x17
-	msr	dbgwvr11_el1, x16
-	msr	dbgwvr10_el1, x15
-	msr	dbgwvr9_el1, x14
-	msr	dbgwvr8_el1, x13
-	msr	dbgwvr7_el1, x12
-	msr	dbgwvr6_el1, x11
-	msr	dbgwvr5_el1, x10
-	msr	dbgwvr4_el1, x9
-	msr	dbgwvr3_el1, x8
-	msr	dbgwvr2_el1, x7
-	msr	dbgwvr1_el1, x6
-	msr	dbgwvr0_el1, x5
-
-	ldr	x21, [x2, #CPU_SYSREG_OFFSET(MDCCINT_EL1)]
-	msr	mdccint_el1, x21
+	msr	\type\()15_el1, x20
+	msr	\type\()14_el1, x19
+	msr	\type\()13_el1, x18
+	msr	\type\()12_el1, x17
+	msr	\type\()11_el1, x16
+	msr	\type\()10_el1, x15
+	msr	\type\()9_el1, x14
+	msr	\type\()8_el1, x13
+	msr	\type\()7_el1, x12
+	msr	\type\()6_el1, x11
+	msr	\type\()5_el1, x10
+	msr	\type\()4_el1, x9
+	msr	\type\()3_el1, x8
+	msr	\type\()2_el1, x7
+	msr	\type\()1_el1, x6
+	msr	\type\()0_el1, x5
 .endm
 
 .macro skip_32bit_state tmp, target
@@ -929,8 +785,34 @@ __save_debug:
 	save_debug
 	ret
 
+/* Restore saved guest debug state */
 __restore_debug:
-	restore_debug
+	// x2: base address for cpu context
+	// x3: ptr to guest registers passed to setup_debug_registers
+	// x5..x20/x26: trashed
+
+	mrs	x26, id_aa64dfr0_el1
+	ubfx	x24, x26, #12, #4	// Extract BRPs
+	ubfx	x25, x26, #20, #4	// Extract WRPs
+	mov	w26, #15
+	sub	w24, w26, w24	// How many BPs to skip
+	sub	w25, w26, w25	// How many WPs to skip
+
+	mov	x4, x24
+	add	x3, x2, #CPU_SYSREG_OFFSET(DBGBCR0_EL1)
+	setup_debug_registers dbgbcr
+	add	x3, x2, #CPU_SYSREG_OFFSET(DBGBVR0_EL1)
+	setup_debug_registers dbgbvr
+
+	mov	x4, x25
+	add	x3, x2, #CPU_SYSREG_OFFSET(DBGWCR0_EL1)
+	setup_debug_registers dbgwcr
+	add	x3, x2, #CPU_SYSREG_OFFSET(DBGWVR0_EL1)
+	setup_debug_registers dbgwvr
+
+	ldr	x21, [x2, #CPU_SYSREG_OFFSET(MDCCINT_EL1)]
+	msr	mdccint_el1, x21
+
 	ret
 
 __save_fpsimd:
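
A note for readers unfamiliar with the two assembler tricks the commit
message leans on: GNU as macro substitution, where "\type\()" glues the
macro argument onto the rest of the system-register name (dbgbcr,
dbgbvr, dbgwcr, dbgwvr), and a computed branch into a run of fixed
4-byte instructions so the unimplemented high-numbered registers are
simply skipped. The saved values are read as an array of 8-byte slots,
which is why the commit message notes the reliance on 64-bit alignment.
Below is a minimal standalone sketch of the idea, not part of the
patch; the file name, symbol names, register choices and two-register
layout are invented purely for illustration.

/* restore_pair.S — illustrative sketch only; names and layout are made up */
	.macro	restore_pair type
	// x0: base of two saved 64-bit slots, x1: how many top slots to skip
	adr	x9, 1f
	add	x9, x9, x1, lsl #2	// every ldr below is 4 bytes wide
	br	x9
1:
	ldr	x5, [x0, #(1 * 8)]	// slot for \type\()1_el1
	ldr	x4, [x0, #(0 * 8)]	// slot for \type\()0_el1

	adr	x9, 1f
	add	x9, x9, x1, lsl #2	// same skip for the msr run
	br	x9
1:
	msr	\type\()1_el1, x5	// "\type\()1_el1" expands to e.g. dbgbcr1_el1
	msr	\type\()0_el1, x4
	.endm

	.text
	.globl	restore_two_debug_regs
restore_two_debug_regs:
	// x0: two control values followed by two value-register values
	// x1: how many of the top registers to skip (0..2)
	restore_pair dbgbcr		// breakpoint control registers
	add	x0, x0, #(2 * 8)	// advance to the value slots
	restore_pair dbgbvr		// breakpoint value registers
	ret

With x1 = 1, the computed branch jumps over the first ldr and the first
msr of each run, so only register 0 is touched; this mirrors how the
patch skips the breakpoints and watchpoints the CPU does not implement.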