From patchwork Fri May 10 07:23:50 2013
X-Patchwork-Submitter: Pranavkumar Sawargaonkar
X-Patchwork-Id: 16857
From: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
To: kvmarm@lists.cs.columbia.edu
Cc: linux-arm-kernel@lists.infradead.org, linaro-kernel@lists.linaro.org,
	patches@linaro.org, marc.zyngier@arm.com,
	Pranavkumar Sawargaonkar <pranavkumar@linaro.org>, Anup Patel
Subject: [PATCH V2] arm64: KVM: Fix HCR_EL2 and VTCR_EL2 configuration bits
Date: Fri, 10 May 2013 12:53:50 +0530
Message-Id: <1368170630-19567-1-git-send-email-pranavkumar@linaro.org>
X-Mailer: git-send-email 1.7.9.5

This patch makes the following fixes:

1. Define the HCR_* flags as unsigned long constants (using UL()).

   Reason: the compiler treats a plain numeric constant as a signed int,
   so a flag such as (1 << 31) is sign-extended when it is assigned to an
   unsigned 64-bit variable such as hcr_el2 (in the VCPU context). This
   accidentally sets HCR_ID and HCR_CD, making all guest memory accesses
   non-cacheable. On real hardware this breaks Stage-2 translation table
   walks and also breaks VirtIO. (A stand-alone sketch of this effect
   follows the diffstat below.)

2. Fix the values of the VTCR_EL2_ORGN0_WBWA and VTCR_EL2_IRGN0_WBWA
   macros: Write-Back Write-Allocate is field encoding 1, not 3.

Signed-off-by: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
Signed-off-by: Anup Patel
---
 arch/arm64/include/asm/kvm_arm.h |   73 +++++++++++++++++++-------------------
 1 file changed, 37 insertions(+), 36 deletions(-)
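[Illustration only, not part of the patch: a stand-alone user-space sketch
of the sign-extension effect described in point 1. The UL() stand-in, the
file name and the variable names are made up for this example; only the
(1 << 31) vs. UL(0x1) contrast mirrors the header change.]

/* sign_extend_demo.c - build with: gcc sign_extend_demo.c */
#include <stdio.h>
#include <stdint.h>

#define UL(x)		x##UL			/* stand-in for the kernel's UL() helper */

#define HCR_RW_OLD	(1 << 31)		/* signed int constant (old style)    */
#define HCR_RW_NEW	(UL(0x1) << 31)		/* unsigned long constant (new style) */

int main(void)
{
	/*
	 * Assigning the signed constant sign-extends bit 31 into bits 63:32,
	 * which also sets bit 32 (HCR_CD) and bit 33 (HCR_ID).
	 */
	uint64_t hcr_old = HCR_RW_OLD;
	uint64_t hcr_new = HCR_RW_NEW;

	printf("signed constant : 0x%016llx\n", (unsigned long long)hcr_old);
	printf("UL() constant   : 0x%016llx\n", (unsigned long long)hcr_new);
	return 0;
}

On a typical LP64 toolchain this prints 0xffffffff80000000 for the signed
constant and 0x0000000080000000 for the UL() one (shifting into the sign
bit is strictly undefined behaviour, but this is what the old definitions
compiled to), which is how the guest's HCR_EL2 ended up with HCR_CD and
HCR_ID set.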
diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
index 8ced0ca..14ead69 100644
--- a/arch/arm64/include/asm/kvm_arm.h
+++ b/arch/arm64/include/asm/kvm_arm.h
@@ -18,44 +18,45 @@
 #ifndef __ARM64_KVM_ARM_H__
 #define __ARM64_KVM_ARM_H__
 
+#include <asm/memory.h>
 #include <asm/types.h>
 
 /* Hyp Configuration Register (HCR) bits */
-#define HCR_ID		(1 << 33)
-#define HCR_CD		(1 << 32)
+#define HCR_ID		(UL(0x1) << 33)
+#define HCR_CD		(UL(0x1) << 32)
 #define HCR_RW_SHIFT	31
-#define HCR_RW		(1 << HCR_RW_SHIFT)
-#define HCR_TRVM	(1 << 30)
-#define HCR_HCD		(1 << 29)
-#define HCR_TDZ		(1 << 28)
-#define HCR_TGE		(1 << 27)
-#define HCR_TVM		(1 << 26)
-#define HCR_TTLB	(1 << 25)
-#define HCR_TPU		(1 << 24)
-#define HCR_TPC		(1 << 23)
-#define HCR_TSW		(1 << 22)
-#define HCR_TAC		(1 << 21)
-#define HCR_TIDCP	(1 << 20)
-#define HCR_TSC		(1 << 19)
-#define HCR_TID3	(1 << 18)
-#define HCR_TID2	(1 << 17)
-#define HCR_TID1	(1 << 16)
-#define HCR_TID0	(1 << 15)
-#define HCR_TWE		(1 << 14)
-#define HCR_TWI		(1 << 13)
-#define HCR_DC		(1 << 12)
-#define HCR_BSU		(3 << 10)
-#define HCR_BSU_IS	(1 << 10)
-#define HCR_FB		(1 << 9)
-#define HCR_VA		(1 << 8)
-#define HCR_VI		(1 << 7)
-#define HCR_VF		(1 << 6)
-#define HCR_AMO		(1 << 5)
-#define HCR_IMO		(1 << 4)
-#define HCR_FMO		(1 << 3)
-#define HCR_PTW		(1 << 2)
-#define HCR_SWIO	(1 << 1)
-#define HCR_VM		(1)
+#define HCR_RW		(UL(0x1) << HCR_RW_SHIFT)
+#define HCR_TRVM	(UL(0x1) << 30)
+#define HCR_HCD		(UL(0x1) << 29)
+#define HCR_TDZ		(UL(0x1) << 28)
+#define HCR_TGE		(UL(0x1) << 27)
+#define HCR_TVM		(UL(0x1) << 26)
+#define HCR_TTLB	(UL(0x1) << 25)
+#define HCR_TPU		(UL(0x1) << 24)
+#define HCR_TPC		(UL(0x1) << 23)
+#define HCR_TSW		(UL(0x1) << 22)
+#define HCR_TAC		(UL(0x1) << 21)
+#define HCR_TIDCP	(UL(0x1) << 20)
+#define HCR_TSC		(UL(0x1) << 19)
+#define HCR_TID3	(UL(0x1) << 18)
+#define HCR_TID2	(UL(0x1) << 17)
+#define HCR_TID1	(UL(0x1) << 16)
+#define HCR_TID0	(UL(0x1) << 15)
+#define HCR_TWE		(UL(0x1) << 14)
+#define HCR_TWI		(UL(0x1) << 13)
+#define HCR_DC		(UL(0x1) << 12)
+#define HCR_BSU		(UL(0x3) << 10)
+#define HCR_BSU_IS	(UL(0x1) << 10)
+#define HCR_FB		(UL(0x1) << 9)
+#define HCR_VA		(UL(0x1) << 8)
+#define HCR_VI		(UL(0x1) << 7)
+#define HCR_VF		(UL(0x1) << 6)
+#define HCR_AMO		(UL(0x1) << 5)
+#define HCR_IMO		(UL(0x1) << 4)
+#define HCR_FMO		(UL(0x1) << 3)
+#define HCR_PTW		(UL(0x1) << 2)
+#define HCR_SWIO	(UL(0x1) << 1)
+#define HCR_VM		(UL(0x1))
 
 /*
  * The bits we set in HCR:
@@ -111,9 +112,9 @@
 #define VTCR_EL2_SH0_MASK	(3 << 12)
 #define VTCR_EL2_SH0_INNER	(3 << 12)
 #define VTCR_EL2_ORGN0_MASK	(3 << 10)
-#define VTCR_EL2_ORGN0_WBWA	(3 << 10)
+#define VTCR_EL2_ORGN0_WBWA	(1 << 10)
 #define VTCR_EL2_IRGN0_MASK	(3 << 8)
-#define VTCR_EL2_IRGN0_WBWA	(3 << 8)
+#define VTCR_EL2_IRGN0_WBWA	(1 << 8)
 #define VTCR_EL2_SL0_MASK	(3 << 6)
 #define VTCR_EL2_SL0_LVL1	(1 << 6)
 #define VTCR_EL2_T0SZ_MASK	0x3f
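[Illustration only, not part of the patch: fix 2 follows from the
VTCR_EL2.ORGN0/IRGN0 field encoding, in which Write-Back Write-Allocate is
0b01; with the old value 0b11 the "WBWA" macros actually selected
Write-Back no Write-Allocate. A sketch of the encodings, with all names
invented for this note:]

/*
 * Sketch only: ORGN0/IRGN0 cacheability encodings used by VTCR_EL2.
 * The enum and macro names below are invented for this note.
 */
#define VTCR_SKETCH_ORGN0_SHIFT	10
#define VTCR_SKETCH_IRGN0_SHIFT	8

enum vtcr_rgn {
	VTCR_RGN_NC      = 0,	/* Non-cacheable                            */
	VTCR_RGN_WBWA    = 1,	/* Write-Back Read-Allocate Write-Allocate  */
	VTCR_RGN_WT      = 2,	/* Write-Through                            */
	VTCR_RGN_WB_NOWA = 3,	/* Write-Back, no Write-Allocate            */
};

/* Inner and outer Write-Back Write-Allocate for stage-2 walks: */
#define VTCR_SKETCH_CACHE_WBWA					\
	((VTCR_RGN_WBWA << VTCR_SKETCH_ORGN0_SHIFT) |		\
	 (VTCR_RGN_WBWA << VTCR_SKETCH_IRGN0_SHIFT))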