From patchwork Fri Aug 7 12:33:27 2015
X-Patchwork-Submitter: Peter Maydell
X-Patchwork-Id: 52040
From: Peter Maydell <peter.maydell@linaro.org>
To: qemu-devel@nongnu.org
Cc: patches@linaro.org, "Edgar E.
Iglesias" , =?UTF-8?q?Alex=20Benn=C3=A9e?= , Paolo Bonzini Subject: [PATCH 3/6] target-arm: Restrict AArch64 TLB flushes to the MMU indexes they must touch Date: Fri, 7 Aug 2015 13:33:27 +0100 Message-Id: <1438950810-28618-4-git-send-email-peter.maydell@linaro.org> X-Mailer: git-send-email 1.7.10.4 In-Reply-To: <1438950810-28618-1-git-send-email-peter.maydell@linaro.org> References: <1438950810-28618-1-git-send-email-peter.maydell@linaro.org> X-Removed-Original-Auth: Dkim didn't pass. X-Original-Sender: peter.maydell@linaro.org X-Original-Authentication-Results: mx.google.com; spf=pass (google.com: domain of patch+caf_=patchwork-forward=linaro.org@linaro.org designates 209.85.217.177 as permitted sender) smtp.mail=patch+caf_=patchwork-forward=linaro.org@linaro.org Precedence: list Mailing-list: list patchwork-forward@linaro.org; contact patchwork-forward+owners@linaro.org List-ID: X-Google-Group-Id: 836684582541 List-Post: , List-Help: , List-Archive: List-Unsubscribe: , Now we have the ability to flush the TLB only for specific MMU indexes, update the AArch64 TLB maintenance instruction implementations to only flush the parts of the TLB they need to, rather than doing full flushes. We take the opportunity to remove some duplicate functions (the per-asid tlb ops work like the non-per-asid ones because we don't support flushing a TLB only by ASID) and to bring the function names in line with the architectural TLBI operation names. Signed-off-by: Peter Maydell --- target-arm/helper.c | 172 +++++++++++++++++++++++++++++++++++++++------------- 1 file changed, 129 insertions(+), 43 deletions(-) diff --git a/target-arm/helper.c b/target-arm/helper.c index d5052c2..c53fecf 100644 --- a/target-arm/helper.c +++ b/target-arm/helper.c @@ -2240,65 +2240,151 @@ static CPAccessResult aa64_cacheop_access(CPUARMState *env, * Page D4-1736 (DDI0487A.b) */ -static void tlbi_aa64_va_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) +static void tlbi_aa64_vmalle1_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) { - /* Invalidate by VA (AArch64 version) */ ARMCPU *cpu = arm_env_get_cpu(env); - uint64_t pageaddr = sextract64(value << 12, 0, 56); + CPUState *cs = CPU(cpu); - tlb_flush_page(CPU(cpu), pageaddr); + if (arm_is_secure_below_el3(env)) { + tlb_flush_by_mmuidx(cs, ARMMMUIdx_S1SE1, ARMMMUIdx_S1SE0, -1); + } else { + tlb_flush_by_mmuidx(cs, ARMMMUIdx_S12NSE1, ARMMMUIdx_S12NSE0, -1); + } } -static void tlbi_aa64_vaa_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) +static void tlbi_aa64_vmalle1is_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) { - /* Invalidate by VA, all ASIDs (AArch64 version) */ - ARMCPU *cpu = arm_env_get_cpu(env); - uint64_t pageaddr = sextract64(value << 12, 0, 56); + bool sec = arm_is_secure_below_el3(env); + CPUState *other_cs; - tlb_flush_page(CPU(cpu), pageaddr); + CPU_FOREACH(other_cs) { + if (sec) { + tlb_flush_by_mmuidx(other_cs, ARMMMUIdx_S1SE1, ARMMMUIdx_S1SE0, -1); + } else { + tlb_flush_by_mmuidx(other_cs, ARMMMUIdx_S12NSE1, + ARMMMUIdx_S12NSE0, -1); + } + } } -static void tlbi_aa64_asid_write(CPUARMState *env, const ARMCPRegInfo *ri, - uint64_t value) +static void tlbi_aa64_alle1_write(CPUARMState *env, const ARMCPRegInfo *ri, + uint64_t value) { - /* Invalidate by ASID (AArch64 version) */ + /* Note that the 'ALL' scope must invalidate both stage 1 and + * stage 2 translations, whereas most other scopes only invalidate + * stage 1 translations. 
+     */
     ARMCPU *cpu = arm_env_get_cpu(env);
-    int asid = extract64(value, 48, 16);
-    tlb_flush(CPU(cpu), asid == 0);
+    CPUState *cs = CPU(cpu);
+
+    if (arm_is_secure_below_el3(env)) {
+        tlb_flush_by_mmuidx(cs, ARMMMUIdx_S1SE1, ARMMMUIdx_S1SE0, -1);
+    } else {
+        if (arm_feature(env, ARM_FEATURE_EL2)) {
+            tlb_flush_by_mmuidx(cs, ARMMMUIdx_S12NSE1, ARMMMUIdx_S12NSE0,
+                                ARMMMUIdx_S2NS, -1);
+        } else {
+            tlb_flush_by_mmuidx(cs, ARMMMUIdx_S12NSE1, ARMMMUIdx_S12NSE0, -1);
+        }
+    }
 }
 
-static void tlbi_aa64_va_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
+static void tlbi_aa64_alle2_write(CPUARMState *env, const ARMCPRegInfo *ri,
                                   uint64_t value)
 {
+    ARMCPU *cpu = arm_env_get_cpu(env);
+    CPUState *cs = CPU(cpu);
+
+    tlb_flush_by_mmuidx(cs, ARMMMUIdx_S1E2, -1);
+}
+
+static void tlbi_aa64_alle1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
+                                    uint64_t value)
+{
+    /* Note that the 'ALL' scope must invalidate both stage 1 and
+     * stage 2 translations, whereas most other scopes only invalidate
+     * stage 1 translations.
+     */
+    bool sec = arm_is_secure_below_el3(env);
+    bool has_el2 = arm_feature(env, ARM_FEATURE_EL2);
     CPUState *other_cs;
-    uint64_t pageaddr = sextract64(value << 12, 0, 56);
 
     CPU_FOREACH(other_cs) {
-        tlb_flush_page(other_cs, pageaddr);
+        if (sec) {
+            tlb_flush_by_mmuidx(other_cs, ARMMMUIdx_S1SE1, ARMMMUIdx_S1SE0, -1);
+        } else if (has_el2) {
+            tlb_flush_by_mmuidx(other_cs, ARMMMUIdx_S12NSE1,
+                                ARMMMUIdx_S12NSE0, ARMMMUIdx_S2NS, -1);
+        } else {
+            tlb_flush_by_mmuidx(other_cs, ARMMMUIdx_S12NSE1,
+                                ARMMMUIdx_S12NSE0, -1);
+        }
     }
 }
 
-static void tlbi_aa64_vaa_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
-                                   uint64_t value)
+static void tlbi_aa64_vae1_write(CPUARMState *env, const ARMCPRegInfo *ri,
+                                 uint64_t value)
+{
+    /* Invalidate by VA, EL1&0 (AArch64 version).
+     * Currently handles all of VAE1, VAAE1, VAALE1 and VALE1,
+     * since we don't support flush-for-specific-ASID-only or
+     * flush-last-level-only.
+     */
+    ARMCPU *cpu = arm_env_get_cpu(env);
+    CPUState *cs = CPU(cpu);
+    uint64_t pageaddr = sextract64(value << 12, 0, 56);
+
+    if (arm_is_secure_below_el3(env)) {
+        tlb_flush_page_by_mmuidx(cs, pageaddr, ARMMMUIdx_S1SE1,
+                                 ARMMMUIdx_S1SE0, -1);
+    } else {
+        tlb_flush_page_by_mmuidx(cs, pageaddr, ARMMMUIdx_S12NSE1,
+                                 ARMMMUIdx_S12NSE0, -1);
+    }
+}
+
+static void tlbi_aa64_vae2_write(CPUARMState *env, const ARMCPRegInfo *ri,
+                                 uint64_t value)
+{
+    /* Invalidate by VA, EL2
+     * Currently handles both VAE2 and VALE2, since we don't support
+     * flush-last-level-only.
+     */
+    ARMCPU *cpu = arm_env_get_cpu(env);
+    CPUState *cs = CPU(cpu);
+    uint64_t pageaddr = sextract64(value << 12, 0, 56);
+
+    tlb_flush_page_by_mmuidx(cs, pageaddr, ARMMMUIdx_S1E2, -1);
+}
+
+static void tlbi_aa64_vae1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
+                                   uint64_t value)
 {
+    bool sec = arm_is_secure_below_el3(env);
     CPUState *other_cs;
     uint64_t pageaddr = sextract64(value << 12, 0, 56);
 
     CPU_FOREACH(other_cs) {
-        tlb_flush_page(other_cs, pageaddr);
+        if (sec) {
+            tlb_flush_page_by_mmuidx(other_cs, pageaddr, ARMMMUIdx_S1SE1,
+                                     ARMMMUIdx_S1SE0, -1);
+        } else {
+            tlb_flush_page_by_mmuidx(other_cs, pageaddr, ARMMMUIdx_S12NSE1,
+                                     ARMMMUIdx_S12NSE0, -1);
+        }
     }
 }
 
-static void tlbi_aa64_asid_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
-                                    uint64_t value)
+static void tlbi_aa64_vae2is_write(CPUARMState *env, const ARMCPRegInfo *ri,
+                                   uint64_t value)
 {
     CPUState *other_cs;
-    int asid = extract64(value, 48, 16);
+    uint64_t pageaddr = sextract64(value << 12, 0, 56);
 
     CPU_FOREACH(other_cs) {
-        tlb_flush(other_cs, asid == 0);
+        tlb_flush_page_by_mmuidx(other_cs, pageaddr, ARMMMUIdx_S1E2, -1);
     }
 }
 
@@ -2437,59 +2523,59 @@ static const ARMCPRegInfo v8_cp_reginfo[] = {
     { .name = "TLBI_VMALLE1IS", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 0,
       .access = PL1_W, .type = ARM_CP_NO_RAW,
-      .writefn = tlbiall_is_write },
+      .writefn = tlbi_aa64_vmalle1is_write },
     { .name = "TLBI_VAE1IS", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 1,
       .access = PL1_W, .type = ARM_CP_NO_RAW,
-      .writefn = tlbi_aa64_va_is_write },
+      .writefn = tlbi_aa64_vae1is_write },
     { .name = "TLBI_ASIDE1IS", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 2,
      .access = PL1_W, .type = ARM_CP_NO_RAW,
-      .writefn = tlbi_aa64_asid_is_write },
+      .writefn = tlbi_aa64_vmalle1is_write },
     { .name = "TLBI_VAAE1IS", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 3,
       .access = PL1_W, .type = ARM_CP_NO_RAW,
-      .writefn = tlbi_aa64_vaa_is_write },
+      .writefn = tlbi_aa64_vae1is_write },
     { .name = "TLBI_VALE1IS", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 5,
       .access = PL1_W, .type = ARM_CP_NO_RAW,
-      .writefn = tlbi_aa64_va_is_write },
+      .writefn = tlbi_aa64_vae1is_write },
     { .name = "TLBI_VAALE1IS", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 7,
       .access = PL1_W, .type = ARM_CP_NO_RAW,
-      .writefn = tlbi_aa64_vaa_is_write },
+      .writefn = tlbi_aa64_vae1is_write },
     { .name = "TLBI_VMALLE1", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 0,
       .access = PL1_W, .type = ARM_CP_NO_RAW,
-      .writefn = tlbiall_write },
+      .writefn = tlbi_aa64_vmalle1_write },
     { .name = "TLBI_VAE1", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 1,
       .access = PL1_W, .type = ARM_CP_NO_RAW,
-      .writefn = tlbi_aa64_va_write },
+      .writefn = tlbi_aa64_vae1_write },
     { .name = "TLBI_ASIDE1", .state = ARM_CP_STATE_AA64,
      .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 2,
       .access = PL1_W, .type = ARM_CP_NO_RAW,
-      .writefn = tlbi_aa64_asid_write },
+      .writefn = tlbi_aa64_vmalle1_write },
     { .name = "TLBI_VAAE1", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 3,
       .access = PL1_W, .type = ARM_CP_NO_RAW,
-      .writefn = tlbi_aa64_vaa_write },
+      .writefn = tlbi_aa64_vae1_write },
     { .name = "TLBI_VALE1", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 5,
       .access = PL1_W, .type = ARM_CP_NO_RAW,
-      .writefn = tlbi_aa64_va_write },
+      .writefn = tlbi_aa64_vae1_write },
     { .name = "TLBI_VAALE1", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 7,
       .access = PL1_W, .type = ARM_CP_NO_RAW,
-      .writefn = tlbi_aa64_vaa_write },
+      .writefn = tlbi_aa64_vae1_write },
     { .name = "TLBI_ALLE1IS", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 3, .opc2 = 4,
       .access = PL2_W, .type = ARM_CP_NO_RAW,
-      .writefn = tlbiall_is_write },
+      .writefn = tlbi_aa64_alle1is_write },
     { .name = "TLBI_ALLE1", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 7, .opc2 = 4,
       .access = PL2_W, .type = ARM_CP_NO_RAW,
-      .writefn = tlbiall_write },
+      .writefn = tlbi_aa64_alle1_write },
 #ifndef CONFIG_USER_ONLY
     /* 64 bit address translation operations */
     { .name = "AT_S1E1R", .state = ARM_CP_STATE_AA64,
@@ -2715,15 +2801,15 @@ static const ARMCPRegInfo el2_cp_reginfo[] = {
     { .name = "TLBI_ALLE2", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 7, .opc2 = 0,
       .type = ARM_CP_NO_RAW, .access = PL2_W,
-      .writefn = tlbiall_write },
+      .writefn = tlbi_aa64_alle2_write },
     { .name = "TLBI_VAE2", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 7, .opc2 = 1,
       .type = ARM_CP_NO_RAW, .access = PL2_W,
-      .writefn = tlbi_aa64_vaa_write },
+      .writefn = tlbi_aa64_vae2_write },
     { .name = "TLBI_VAE2IS", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 4, .crn = 8, .crm = 3, .opc2 = 1,
       .type = ARM_CP_NO_RAW, .access = PL2_W,
-      .writefn = tlbi_aa64_vaa_write },
+      .writefn = tlbi_aa64_vae2is_write },
     REGINFO_SENTINEL
 };
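
For readers coming to this patch cold: the flush-by-MMU-index primitives used
above were added earlier in this series and take a -1-terminated list of MMU
index values, so each TLBI handler names exactly the translation regimes it
wants to drop. The sketch below restates that calling pattern outside the diff
context. It is a sketch only: the declarations are assumed to come from the
earlier patches in the series, and flush_el1_page_current_regime() is a
hypothetical helper used purely for illustration, not something this patch adds.

/* Sketch only, not part of the patch. Assumes the QEMU headers and the
 * variadic flush API introduced earlier in this series.
 */
#include "cpu.h"            /* CPUARMState, ARMCPU, ARMMMUIdx_*, arm_* helpers */
#include "exec/exec-all.h"  /* tlb_flush_by_mmuidx(), tlb_flush_page_by_mmuidx() */

/* Assumed prototypes (trailing arguments are MMU indexes, terminated by -1):
 *   void tlb_flush_by_mmuidx(CPUState *cpu, ...);
 *   void tlb_flush_page_by_mmuidx(CPUState *cpu, target_ulong addr, ...);
 */

/* Hypothetical helper: invalidate one page in the EL1&0 regime for the
 * current security state, leaving EL2 and stage 2 entries untouched.
 */
static void flush_el1_page_current_regime(CPUARMState *env, uint64_t value)
{
    CPUState *cs = CPU(arm_env_get_cpu(env));
    /* TLBI VA-style operands carry VA[55:12] in bits [43:0] */
    uint64_t pageaddr = sextract64(value << 12, 0, 56);

    if (arm_is_secure_below_el3(env)) {
        /* Secure EL1&0: only the S1SE1/S1SE0 indexes are flushed */
        tlb_flush_page_by_mmuidx(cs, pageaddr,
                                 ARMMMUIdx_S1SE1, ARMMMUIdx_S1SE0, -1);
    } else {
        /* Non-secure EL1&0: only the S12NSE1/S12NSE0 indexes are flushed */
        tlb_flush_page_by_mmuidx(cs, pageaddr,
                                 ARMMMUIdx_S12NSE1, ARMMMUIdx_S12NSE0, -1);
    }
}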