From patchwork Tue Dec 17 15:12:22 2013
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Peter Maydell <peter.maydell@linaro.org>
X-Patchwork-Id: 22592
From: Peter Maydell <peter.maydell@linaro.org>
To: qemu-devel@nongnu.org
Cc: patches@linaro.org, Michael Matz, Claudio Fontana, Dirk Mueller,
 Laurent Desnogues, kvmarm@lists.cs.columbia.edu, Richard Henderson,
 Alex Bennée, Christoffer Dall, Will Newton
Subject: [PATCH 19/21] target-arm: Widen exclusive-access support struct fields to 64 bits
Date: Tue, 17 Dec 2013 15:12:22 +0000
Message-Id: <1387293144-11554-20-git-send-email-peter.maydell@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1387293144-11554-1-git-send-email-peter.maydell@linaro.org>
References: <1387293144-11554-1-git-send-email-peter.maydell@linaro.org>

In preparation for adding support for A64 load/store exclusive
instructions, widen the fields in the CPU state struct that deal with
address and data values for exclusives from 32 to 64 bits. Although in
practice AArch64 and AArch32 exclusive accesses will generally be
separate, there are some odd theoretical corner cases (e.g. you should
be able to do the exclusive load in AArch32, take an exception to
AArch64 and successfully do the store exclusive there), and it's also
easier to reason about.

The changes in semantics for the variables are:
 exclusive_addr -> extended to 64 bits; -1ULL for "monitor lost",
   otherwise always < 2^32 for AArch32
 exclusive_val -> extended to 64 bits. 64 bit exclusives in AArch32 now
   use the high half of exclusive_val instead of a separate exclusive_high
 exclusive_high -> is no longer used in AArch32; extended to 64 bits as
   it will be needed for AArch64's pair-of-64-bit-values exclusives.
 exclusive_test -> extended to 64 bits, as it is an address. Since this is
   a linux-user-only field, in arm-linux-user it will always have the top
   32 bits zero.
 exclusive_info -> stays 32 bits, as it is neither data nor address, but
   simply holds register indexes etc. AArch64 will be able to fit all its
   information into 32 bits as well.

Note that the refactoring of gen_store_exclusive() coincidentally fixes
a minor bug where ldrexd would incorrectly update the first CPU register
even if the load for the second register faulted.
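As an editor's aside (not part of the submitted patch), the following
standalone C sketch illustrates the packing scheme described above: a
single 64-bit exclusive_val now carries both 32-bit words of an AArch32
LDREXD/STREXD pair, so the store-exclusive check becomes one 64-bit
compare. The helpers put_high32()/get_high32() are hypothetical,
open-coded stand-ins for QEMU's deposit64()/extract64() bitops.

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-ins for QEMU's deposit64()/extract64() bitops,
 * specialised to the 32-bit-into-64-bit case used here. */
static uint64_t put_high32(uint64_t val, uint32_t hi)
{
    return (val & 0xffffffffULL) | ((uint64_t)hi << 32);
}

static uint32_t get_high32(uint64_t val)
{
    return (uint32_t)(val >> 32);
}

int main(void)
{
    /* Example words an LDREXD might have loaded from addr and addr + 4. */
    uint32_t lo = 0x11223344;
    uint32_t hi = 0x55667788;

    /* Start from the low word, then deposit the high word, so one
     * 64-bit exclusive_val describes the whole pair (no exclusive_high). */
    uint64_t exclusive_val = lo;
    exclusive_val = put_high32(exclusive_val, hi);

    assert((uint32_t)exclusive_val == lo);
    assert(get_high32(exclusive_val) == hi);

    /* The store-exclusive check is then a single 64-bit comparison. */
    uint64_t reloaded = ((uint64_t)hi << 32) | lo;
    printf("monitor %s\n", reloaded == exclusive_val ? "matches" : "was lost");
    return 0;
}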
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 linux-user/main.c      | 25 +++++++++++--------
 target-arm/cpu.h       |  8 +++----
 target-arm/machine.c   | 12 +++++-----
 target-arm/translate.c | 65 ++++++++++++++++++++++++++++++--------------------
 4 files changed, 64 insertions(+), 46 deletions(-)

diff --git a/linux-user/main.c b/linux-user/main.c
index c0df8b5..20f9832 100644
--- a/linux-user/main.c
+++ b/linux-user/main.c
@@ -589,16 +589,21 @@ do_kernel_trap(CPUARMState *env)
 
 static int do_strex(CPUARMState *env)
 {
-    uint32_t val;
+    uint64_t val;
     int size;
     int rc = 1;
     int segv = 0;
     uint32_t addr;
     start_exclusive();
-    addr = env->exclusive_addr;
-    if (addr != env->exclusive_test) {
+    if (env->exclusive_addr != env->exclusive_test) {
         goto fail;
     }
+    /* We know we're always AArch32 so the address is in uint32_t range
+     * unless it was the -1 exclusive-monitor-lost value (which won't
+     * match exclusive_test above).
+     */
+    assert(extract64(env->exclusive_addr, 32, 32) == 0);
+    addr = env->exclusive_addr;
     size = env->exclusive_info & 0xf;
     switch (size) {
     case 0:
@@ -618,19 +623,19 @@ static int do_strex(CPUARMState *env)
         env->cp15.c6_data = addr;
         goto done;
     }
-    if (val != env->exclusive_val) {
-        goto fail;
-    }
     if (size == 3) {
-        segv = get_user_u32(val, addr + 4);
+        uint32_t valhi;
+        segv = get_user_u32(valhi, addr + 4);
         if (segv) {
             env->cp15.c6_data = addr + 4;
             goto done;
         }
-        if (val != env->exclusive_high) {
-            goto fail;
-        }
+        val = deposit64(val, 32, 32, valhi);
+    }
+    if (val != env->exclusive_val) {
+        goto fail;
     }
+
     val = env->regs[(env->exclusive_info >> 8) & 0xf];
     switch (size) {
     case 0:
diff --git a/target-arm/cpu.h b/target-arm/cpu.h
index 81c0b1c..744d1dd 100644
--- a/target-arm/cpu.h
+++ b/target-arm/cpu.h
@@ -278,11 +278,11 @@ typedef struct CPUARMState {
         float_status fp_status;
         float_status standard_fp_status;
     } vfp;
-    uint32_t exclusive_addr;
-    uint32_t exclusive_val;
-    uint32_t exclusive_high;
+    uint64_t exclusive_addr;
+    uint64_t exclusive_val;
+    uint64_t exclusive_high;
 #if defined(CONFIG_USER_ONLY)
-    uint32_t exclusive_test;
+    uint64_t exclusive_test;
     uint32_t exclusive_info;
 #endif
 
diff --git a/target-arm/machine.c b/target-arm/machine.c
index 74f010f..8f9e7d4 100644
--- a/target-arm/machine.c
+++ b/target-arm/machine.c
@@ -222,9 +222,9 @@ static int cpu_post_load(void *opaque, int version_id)
 
 const VMStateDescription vmstate_arm_cpu = {
     .name = "cpu",
-    .version_id = 13,
-    .minimum_version_id = 13,
-    .minimum_version_id_old = 13,
+    .version_id = 14,
+    .minimum_version_id = 14,
+    .minimum_version_id_old = 14,
     .pre_save = cpu_pre_save,
     .post_load = cpu_post_load,
     .fields = (VMStateField[]) {
@@ -253,9 +253,9 @@ const VMStateDescription vmstate_arm_cpu = {
         VMSTATE_VARRAY_INT32(cpreg_vmstate_values, ARMCPU,
                              cpreg_vmstate_array_len,
                              0, vmstate_info_uint64, uint64_t),
-        VMSTATE_UINT32(env.exclusive_addr, ARMCPU),
-        VMSTATE_UINT32(env.exclusive_val, ARMCPU),
-        VMSTATE_UINT32(env.exclusive_high, ARMCPU),
+        VMSTATE_UINT64(env.exclusive_addr, ARMCPU),
+        VMSTATE_UINT64(env.exclusive_val, ARMCPU),
+        VMSTATE_UINT64(env.exclusive_high, ARMCPU),
         VMSTATE_UINT64(env.features, ARMCPU),
         VMSTATE_TIMER(gt_timer[GTIMER_PHYS], ARMCPU),
         VMSTATE_TIMER(gt_timer[GTIMER_VIRT], ARMCPU),
diff --git a/target-arm/translate.c b/target-arm/translate.c
index 8bfe950..4387547 100644
--- a/target-arm/translate.c
+++ b/target-arm/translate.c
@@ -61,11 +61,10 @@ TCGv_ptr cpu_env;
 static TCGv_i64 cpu_V0, cpu_V1, cpu_M0;
 static TCGv_i32 cpu_R[16];
 static TCGv_i32 cpu_CF, cpu_NF, cpu_VF, cpu_ZF;
-static TCGv_i32 cpu_exclusive_addr;
-static TCGv_i32 cpu_exclusive_val;
-static TCGv_i32 cpu_exclusive_high;
+static TCGv_i64 cpu_exclusive_addr;
+static TCGv_i64 cpu_exclusive_val;
 #ifdef CONFIG_USER_ONLY
-static TCGv_i32 cpu_exclusive_test;
+static TCGv_i64 cpu_exclusive_test;
 static TCGv_i32 cpu_exclusive_info;
 #endif
 
@@ -96,14 +95,12 @@ void arm_translate_init(void)
     cpu_VF = tcg_global_mem_new_i32(TCG_AREG0, offsetof(CPUARMState, VF), "VF");
     cpu_ZF = tcg_global_mem_new_i32(TCG_AREG0, offsetof(CPUARMState, ZF), "ZF");
 
-    cpu_exclusive_addr = tcg_global_mem_new_i32(TCG_AREG0,
+    cpu_exclusive_addr = tcg_global_mem_new_i64(TCG_AREG0,
         offsetof(CPUARMState, exclusive_addr), "exclusive_addr");
-    cpu_exclusive_val = tcg_global_mem_new_i32(TCG_AREG0,
+    cpu_exclusive_val = tcg_global_mem_new_i64(TCG_AREG0,
         offsetof(CPUARMState, exclusive_val), "exclusive_val");
-    cpu_exclusive_high = tcg_global_mem_new_i32(TCG_AREG0,
-        offsetof(CPUARMState, exclusive_high), "exclusive_high");
 #ifdef CONFIG_USER_ONLY
-    cpu_exclusive_test = tcg_global_mem_new_i32(TCG_AREG0,
+    cpu_exclusive_test = tcg_global_mem_new_i64(TCG_AREG0,
         offsetof(CPUARMState, exclusive_test), "exclusive_test");
     cpu_exclusive_info = tcg_global_mem_new_i32(TCG_AREG0,
         offsetof(CPUARMState, exclusive_info), "exclusive_info");
@@ -6758,30 +6755,34 @@ static void gen_load_exclusive(DisasContext *s, int rt, int rt2,
     default:
         abort();
     }
-    tcg_gen_mov_i32(cpu_exclusive_val, tmp);
-    store_reg(s, rt, tmp);
+
     if (size == 3) {
         TCGv_i32 tmp2 = tcg_temp_new_i32();
+        TCGv_i32 tmp3 = tcg_temp_new_i32();
+
         tcg_gen_addi_i32(tmp2, addr, 4);
-        tmp = tcg_temp_new_i32();
-        gen_aa32_ld32u(tmp, tmp2, IS_USER(s));
+        gen_aa32_ld32u(tmp3, tmp2, IS_USER(s));
         tcg_temp_free_i32(tmp2);
-        tcg_gen_mov_i32(cpu_exclusive_high, tmp);
-        store_reg(s, rt2, tmp);
+        tcg_gen_concat_i32_i64(cpu_exclusive_val, tmp, tmp3);
+        store_reg(s, rt2, tmp3);
+    } else {
+        tcg_gen_extu_i32_i64(cpu_exclusive_val, tmp);
     }
-    tcg_gen_mov_i32(cpu_exclusive_addr, addr);
+
+    store_reg(s, rt, tmp);
+    tcg_gen_extu_i32_i64(cpu_exclusive_addr, addr);
 }
 
 static void gen_clrex(DisasContext *s)
 {
-    tcg_gen_movi_i32(cpu_exclusive_addr, -1);
+    tcg_gen_movi_i64(cpu_exclusive_addr, -1);
 }
 
 #ifdef CONFIG_USER_ONLY
 static void gen_store_exclusive(DisasContext *s, int rd, int rt, int rt2,
                                 TCGv_i32 addr, int size)
 {
-    tcg_gen_mov_i32(cpu_exclusive_test, addr);
+    tcg_gen_extu_i32_i64(cpu_exclusive_test, addr);
     tcg_gen_movi_i32(cpu_exclusive_info, size
                      | (rd << 4) | (rt << 8) | (rt2 << 12));
     gen_exception_insn(s, 4, EXCP_STREX);
@@ -6791,6 +6792,7 @@ static void gen_store_exclusive(DisasContext *s, int rd, int rt, int rt2,
                                 TCGv_i32 addr, int size)
 {
     TCGv_i32 tmp;
+    TCGv_i64 val64, extaddr;
     int done_label;
     int fail_label;
 
@@ -6802,7 +6804,11 @@ static void gen_store_exclusive(DisasContext *s, int rd, int rt, int rt2,
       } */
     fail_label = gen_new_label();
     done_label = gen_new_label();
-    tcg_gen_brcond_i32(TCG_COND_NE, addr, cpu_exclusive_addr, fail_label);
+    extaddr = tcg_temp_new_i64();
+    tcg_gen_extu_i32_i64(extaddr, addr);
+    tcg_gen_brcond_i64(TCG_COND_NE, extaddr, cpu_exclusive_addr, fail_label);
+    tcg_temp_free_i64(extaddr);
+
     tmp = tcg_temp_new_i32();
     switch (size) {
     case 0:
@@ -6818,17 +6824,24 @@ static void gen_store_exclusive(DisasContext *s, int rd, int rt, int rt2,
     default:
         abort();
     }
-    tcg_gen_brcond_i32(TCG_COND_NE, tmp, cpu_exclusive_val, fail_label);
-    tcg_temp_free_i32(tmp);
+
+    val64 = tcg_temp_new_i64();
     if (size == 3) {
         TCGv_i32 tmp2 = tcg_temp_new_i32();
+        TCGv_i32 tmp3 = tcg_temp_new_i32();
         tcg_gen_addi_i32(tmp2, addr, 4);
-        tmp = tcg_temp_new_i32();
-        gen_aa32_ld32u(tmp, tmp2, IS_USER(s));
+        gen_aa32_ld32u(tmp3, tmp2, IS_USER(s));
         tcg_temp_free_i32(tmp2);
-        tcg_gen_brcond_i32(TCG_COND_NE, tmp, cpu_exclusive_high, fail_label);
-        tcg_temp_free_i32(tmp);
+        tcg_gen_concat_i32_i64(val64, tmp, tmp3);
+        tcg_temp_free_i32(tmp3);
+    } else {
+        tcg_gen_extu_i32_i64(val64, tmp);
     }
+    tcg_temp_free_i32(tmp);
+
+    tcg_gen_brcond_i64(TCG_COND_NE, val64, cpu_exclusive_val, fail_label);
+    tcg_temp_free_i64(val64);
+
     tmp = load_reg(s, rt);
     switch (size) {
     case 0:
@@ -6856,7 +6869,7 @@ static void gen_store_exclusive(DisasContext *s, int rd, int rt, int rt2,
     gen_set_label(fail_label);
     tcg_gen_movi_i32(cpu_R[rd], 1);
     gen_set_label(done_label);
-    tcg_gen_movi_i32(cpu_exclusive_addr, -1);
+    tcg_gen_movi_i64(cpu_exclusive_addr, -1);
 }
 #endif