From patchwork Wed Nov 5 23:22:48 2014
From: Greg Bellows <greg.bellows@linaro.org>
To: qemu-devel@nongnu.org, serge.fdrv@gmail.com, edgar.iglesias@gmail.com,
    aggelerf@ethz.ch, peter.maydell@linaro.org
Cc: greg.bellows@linaro.org
Date: Wed, 5 Nov 2014 17:22:48 -0600
Message-Id: <1415229793-3278-2-git-send-email-greg.bellows@linaro.org>
In-Reply-To: <1415229793-3278-1-git-send-email-greg.bellows@linaro.org>
References: <1415229793-3278-1-git-send-email-greg.bellows@linaro.org>
Subject: [Qemu-devel] [PATCH v9 01/26] target-arm: extend async excp masking
This patch extends arm_excp_unmasked() to cover the ARMv8 and ARMv7
physical interrupt masking rules, using simplified logic based on the
exception's target EL. If EL3 is using AArch64, IRQ/FIQ masking is
ignored in all exception levels other than EL3 when SCR.{FIQ|IRQ} is
set to 1 (routed to EL3).

Signed-off-by: Greg Bellows <greg.bellows@linaro.org>

---

v8 -> v9
- Undo the use of tables for exception masking and instead go with
  simplified logic based on the target EL lookup.
- Remove the masking tables

v7 -> v8
- Add IRQ and FIQ exception masking lookup tables.
- Rewrite patch to use lookup tables for determining whether an
  exception is masked or not.

v5 -> v6
- Globally change Aarch# to AArch#
- Fixed comment termination

v4 -> v5
- Merge with v4 patch 10

---
 target-arm/cpu.h | 79 +++++++++++++++++++++++++++++++++++++-------------------
 1 file changed, 53 insertions(+), 26 deletions(-)

diff --git a/target-arm/cpu.h b/target-arm/cpu.h
index cb6ec5c..0ea8602 100644
--- a/target-arm/cpu.h
+++ b/target-arm/cpu.h
@@ -1247,39 +1247,51 @@ static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx)
     CPUARMState *env = cs->env_ptr;
     unsigned int cur_el = arm_current_el(env);
     unsigned int target_el = arm_excp_target_el(cs, excp_idx);
-    /* FIXME: Use actual secure state.  */
-    bool secure = false;
-    /* If in EL1/0, Physical IRQ routing to EL2 only happens from NS state.  */
-    bool irq_can_hyp = !secure && cur_el < 2 && target_el == 2;
-    /* ARMv7-M interrupt return works by loading a magic value
-     * into the PC.  On real hardware the load causes the
-     * return to occur.  The qemu implementation performs the
-     * jump normally, then does the exception return when the
-     * CPU tries to execute code at the magic address.
-     * This will cause the magic PC value to be pushed to
-     * the stack if an interrupt occurred at the wrong time.
-     * We avoid this by disabling interrupts when
-     * pc contains a magic address.
+    bool secure = arm_is_secure(env);
+    uint32_t scr;
+    uint32_t hcr;
+    bool pstate_unmasked;
+    int8_t unmasked = 0;
+    bool is_aa64 = arm_el_is_aa64(env, 3);
+
+    /* Don't take exceptions if they target a lower EL.
+     * This check should catch any exceptions that would not be taken but left
+     * pending.
      */
-    bool irq_unmasked = !(env->daif & PSTATE_I)
-        && (!IS_M(env) || env->regs[15] < 0xfffffff0);
-
-    /* Don't take exceptions if they target a lower EL. */
     if (cur_el > target_el) {
         return false;
     }
 
     switch (excp_idx) {
     case EXCP_FIQ:
-        if (irq_can_hyp && (env->cp15.hcr_el2 & HCR_FMO)) {
-            return true;
-        }
-        return !(env->daif & PSTATE_F);
+        /* If FIQs are routed to EL3 or EL2 then there are cases where we
+         * override the CPSR.F in determining if the exception is masked or
+         * not.  If neither of these are set then we fall back to the CPSR.F
+         * setting otherwise we further assess the state below.
+         */
+        hcr = (env->cp15.hcr_el2 & HCR_FMO);
+        scr = (env->cp15.scr_el3 & SCR_FIQ);
+
+        /* When EL3 is 32-bit, the SCR.FW bit controls whether the CPSR.F bit
+         * masks FIQ interrupts when taken in non-secure state.  If SCR.FW is
+         * set then FIQs can be masked by CPSR.F when non-secure but only
+         * when FIQs are only routed to EL3.
+         */
+        scr &= is_aa64 || !((env->cp15.scr_el3 & SCR_FW) && !hcr);
+        pstate_unmasked = !(env->daif & PSTATE_F);
+        break;
+
     case EXCP_IRQ:
-        if (irq_can_hyp && (env->cp15.hcr_el2 & HCR_IMO)) {
-            return true;
-        }
-        return irq_unmasked;
+        /* When EL3 execution state is 32-bit, if HCR.IMO is set then we may
+         * override the CPSR.I masking when in non-secure state.  The SCR.IRQ
+         * setting has already been taken into consideration when setting the
+         * target EL, so it does not have a further effect here.
+         */
+        hcr = is_aa64 || (env->cp15.hcr_el2 & HCR_IMO);
+        scr = false;
+        pstate_unmasked = !(env->daif & PSTATE_I);
+        break;
+
     case EXCP_VFIQ:
         if (!secure && !(env->cp15.hcr_el2 & HCR_FMO)) {
             /* VFIQs are only taken when hypervized and non-secure.  */
@@ -1291,10 +1303,25 @@ static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx)
             /* VIRQs are only taken when hypervized and non-secure.  */
             return false;
         }
-        return irq_unmasked;
+        return !(env->daif & PSTATE_I) &&
+            (!IS_M(env) || env->regs[15] < 0xfffffff0);
     default:
         g_assert_not_reached();
     }
+
+    /* Use the target EL, current execution state and SCR/HCR settings to
+     * determine whether the corresponding CPSR bit is used to mask the
+     * interrupt.
+     */
+    if ((target_el > cur_el) && (target_el != 1) && (scr || hcr) &&
+        (is_aa64 || !secure)) {
+        unmasked = 1;
+    }
+
+    /* The PSTATE bits only mask the interrupt if we have not overridden the
+     * ability above.
+     */
+    return unmasked || pstate_unmasked;
 }
 
 static inline CPUARMState *cpu_init(const char *cpu_model)