From patchwork Sat Sep 19 13:06:51 2015
X-Patchwork-Submitter: Peter Maydell
X-Patchwork-Id: 53953
From: Peter Maydell
To: qemu-devel@nongnu.org
Cc: patches@linaro.org, Pavel Dovgalyuk, Alex Bennée, Richard Henderson
Subject: [PATCH] target-arm/translate.c: Handle non-executable page-straddling Thumb insns
Date: Sat, 19 Sep 2015 14:06:51 +0100
Message-Id: <1442668011-5481-1-git-send-email-peter.maydell@linaro.org>
X-Mailer: git-send-email 1.7.10.4

When the memory we're trying to translate code from is not executable
we have to turn this into a guest fault. In order to report the correct
PC for this fault, and to make sure it is not reported until after any
other possible faults for instructions earlier in execution, we must
terminate TBs at the end of a page, in case the next instruction is in
a non-executable page.

This is simple for T16, A32 and A64 instructions, which are always
aligned to their size. However, T32 instructions may be 32 bits long
but only 16-bit aligned, so they can straddle a page boundary.

Correct the condition that checks whether the next instruction will
touch the following page, to ensure that if we're 2 bytes before the
boundary and this insn is T32 then we end the TB.

Reported-by: Pavel Dovgalyuk
Signed-off-by: Peter Maydell
Reviewed-by: Laurent Desnogues
---
The other way you could do this would be to check before each
'read halfword' in the decoder whether you were going to go off the
end of the page, and if so roll back anything you'd already generated,
but that sounds really painful. I'm glad I don't have to fix this bug
for x86 :-)

 target-arm/translate.c | 48 +++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 47 insertions(+), 1 deletion(-)

diff --git a/target-arm/translate.c b/target-arm/translate.c
index 84a21ac..d5cfe84 100644
--- a/target-arm/translate.c
+++ b/target-arm/translate.c
@@ -11167,6 +11167,38 @@ undef:
                        default_exception_el(s));
 }
 
+static bool insn_crosses_page(CPUARMState *env, DisasContext *s)
+{
+    /* Return true if the insn at dc->pc might cross a page boundary.
+     * (False positives are OK, false negatives are not.)
+     */
+    uint16_t insn;
+
+    if ((s->pc & 3) == 0) {
+        /* At a 4-aligned address we can't be crossing a page */
+        return false;
+    }
+
+    /* This must be a Thumb insn */
+    insn = arm_lduw_code(env, s->pc, s->bswap_code);
+
+    switch (insn >> 11) {
+    case 0x1d: /* 0b11101 */
+    case 0x1e: /* 0b11110 */
+    case 0x1f: /* 0b11111 */
+        /* First half of a 32-bit Thumb insn. Thumb-1 cores might
+         * end up actually treating this as two 16-bit insns (see the
+         * code at the start of disas_thumb2_insn()) but we don't bother
+         * to check for that as it is unlikely, and false positives here
+         * are harmless.
+         */
+        return true;
+    default:
+        /* 16-bit Thumb insn */
+        return false;
+    }
+}
+
 /* generate intermediate code in gen_opc_buf and gen_opparam_buf for
    basic block 'tb'. If search_pc is TRUE, also generate PC
    information for each intermediate instruction. */
@@ -11183,6 +11215,7 @@ static inline void gen_intermediate_code_internal(ARMCPU *cpu,
     target_ulong next_page_start;
     int num_insns;
     int max_insns;
+    bool end_of_page;
 
     /* generate intermediate code */
 
@@ -11404,11 +11437,24 @@ static inline void gen_intermediate_code_internal(ARMCPU *cpu,
          * Also stop translation when a page boundary is reached. This
          * ensures prefetch aborts occur at the right place. */
         num_insns ++;
+
+        /* We want to stop the TB if the next insn starts in a new page,
+         * or if it spans between this page and the next. This means that
+         * if we're looking at the last halfword in the page we need to
+         * see if it's a 16-bit Thumb insn (which will fit in this TB)
+         * or a 32-bit Thumb insn (which won't).
+         * This is to avoid generating a silly TB with a single 16-bit insn
+         * in it at the end of this page (which would execute correctly
+         * but isn't very efficient).
+         */
+        end_of_page = (dc->pc >= next_page_start) ||
+            ((dc->pc >= next_page_start - 3) && insn_crosses_page(env, dc));
+
     } while (!dc->is_jmp && !tcg_op_buf_full() &&
              !cs->singlestep_enabled &&
              !singlestep &&
              !dc->ss_active &&
-             dc->pc < next_page_start &&
+             !end_of_page &&
              num_insns < max_insns);
 
     if (tb->cflags & CF_LAST_IO) {
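
For anyone checking the "- 3" arithmetic, here is a minimal standalone
sketch (not part of the patch) of the end_of_page condition, assuming 4K
target pages; PAGE_SIZE, pc, next_page_start and first_hw are hypothetical
stand-ins for TARGET_PAGE_SIZE, dc->pc, the page-boundary variable and the
halfword that arm_lduw_code() would fetch.

/* Standalone illustration only -- not part of the patch above. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096u   /* assumed 4K target pages */

/* Mirrors the switch in insn_crosses_page(): the top five bits of the
 * first halfword are 0b11101, 0b11110 or 0b11111 for a 32-bit Thumb insn.
 */
static bool is_thumb2_first_halfword(uint16_t first_hw)
{
    return (first_hw >> 11) >= 0x1d;
}

int main(void)
{
    uint32_t next_page_start = 2 * PAGE_SIZE;  /* example boundary: 0x2000 */
    uint32_t pc = next_page_start - 2;         /* last halfword of the page */
    uint16_t first_hw = 0xf000;                /* e.g. first half of a Thumb BL */

    /* Stop the TB if the insn starts in the next page, or if it starts
     * in the last three bytes of this page and is the first half of a
     * 32-bit Thumb insn (so its second half lands in the next page).
     */
    bool end_of_page = (pc >= next_page_start) ||
        ((pc >= next_page_start - 3) && is_thumb2_first_halfword(first_hw));

    printf("pc=0x%x: %s the page boundary\n",
           (unsigned)pc, end_of_page ? "crosses" : "does not cross");
    return 0;
}

With pc two bytes below the boundary and a BL first halfword this reports
a crossing, which is exactly the case the new end_of_page check catches;
a 16-bit insn at the same pc would not end the TB.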