From patchwork Fri Jan 8 15:53:13 2016
X-Patchwork-Submitter: Alex Bennée
X-Patchwork-Id: 59380
From: Alex Bennée
To: qemu-devel@nongnu.org
Cc: Peter Crosthwaite, claudio.fontana@huawei.com,
    a.rigo@virtualopensystems.com, Paolo Bonzini, jani.kokkonen@huawei.com,
    Alex Bennée, Richard Henderson
Date: Fri, 8 Jan 2016 15:53:13 +0000
Message-Id: <1452268394-31252-2-git-send-email-alex.bennee@linaro.org>
In-Reply-To: <1452268394-31252-1-git-send-email-alex.bennee@linaro.org>
References: <1452268394-31252-1-git-send-email-alex.bennee@linaro.org>
Subject: [Qemu-devel] [RFC PATCH 1/2] softmmu_template: add smmu_helper, convert VICTIM_TLB_HIT

This lays the groundwork for a refactoring of the softmmu template
code. The patch introduces inline "smmu_helper" functions where common
(or almost common) code can be placed. Arguments that the compiler
picks up as constant can then be used to eliminate legs of code in the
inline fragments.

There is a minor wrinkle in that we need a unique name for each inline
fragment, as the template is included multiple times. For this the
smmu_helper macro does the appropriate glue magic.

I've tested the result with no change to functionality. Comparing the
objdump of cputlb.o shows minimal changes in probe_write; everything
else is identical.

TODO: explain probe_write changes

Signed-off-by: Alex Bennée
---
 softmmu_template.h | 75 +++++++++++++++++++++++++++++++++---------------------
 1 file changed, 46 insertions(+), 29 deletions(-)

-- 
2.6.4

diff --git a/softmmu_template.h b/softmmu_template.h
index 6803890..0074bd7 100644
--- a/softmmu_template.h
+++ b/softmmu_template.h
@@ -116,30 +116,47 @@
 # define helper_te_st_name helper_le_st_name
 #endif
 
-/* macro to check the victim tlb */
-#define VICTIM_TLB_HIT(ty)                                                    \
-({                                                                            \
-    /* we are about to do a page table walk. our last hope is the            \
-     * victim tlb. try to refill from the victim tlb before walking the      \
-     * page table. */                                                         \
-    int vidx;                                                                 \
-    CPUIOTLBEntry tmpiotlb;                                                   \
-    CPUTLBEntry tmptlb;                                                       \
-    for (vidx = CPU_VTLB_SIZE-1; vidx >= 0; --vidx) {                         \
-        if (env->tlb_v_table[mmu_idx][vidx].ty == (addr & TARGET_PAGE_MASK)) {\
-            /* found entry in victim tlb, swap tlb and iotlb */               \
-            tmptlb = env->tlb_table[mmu_idx][index];                          \
-            env->tlb_table[mmu_idx][index] = env->tlb_v_table[mmu_idx][vidx]; \
-            env->tlb_v_table[mmu_idx][vidx] = tmptlb;                         \
-            tmpiotlb = env->iotlb[mmu_idx][index];                            \
-            env->iotlb[mmu_idx][index] = env->iotlb_v[mmu_idx][vidx];         \
-            env->iotlb_v[mmu_idx][vidx] = tmpiotlb;                           \
-            break;                                                            \
-        }                                                                     \
-    }                                                                         \
-    /* return true when there is a vtlb hit, i.e. vidx >=0 */                 \
-    vidx >= 0;                                                                \
-})
+/* Inline helper functions for SoftMMU
+ *
+ * These functions help reduce code duplication in the various main
+ * helper functions. Constant arguments (like endian state) will allow
+ * the compiler to skip code which is never called in a given inline.
+ */
+
+#define smmu_helper(name) glue(glue(glue(_smmu_helper_, SUFFIX), MMUSUFFIX), name)
+
+static inline int smmu_helper(victim_tlb_hit) (const bool is_read, CPUArchState *env,
+                                               unsigned mmu_idx, int index,
+                                               target_ulong addr)
+{
+    /* we are about to do a page table walk. our last hope is the
+     * victim tlb. try to refill from the victim tlb before walking the
+     * page table. */
+    int vidx;
+    CPUIOTLBEntry tmpiotlb;
+    CPUTLBEntry tmptlb;
+    for (vidx = CPU_VTLB_SIZE-1; vidx >= 0; --vidx) {
+        bool match;
+        if (is_read) {
+            match = env->tlb_v_table[mmu_idx][vidx].ADDR_READ == (addr & TARGET_PAGE_MASK);
+        } else {
+            match = env->tlb_v_table[mmu_idx][vidx].addr_write == (addr & TARGET_PAGE_MASK);
+        }
+
+        if (match) {
+            /* found entry in victim tlb, swap tlb and iotlb */
+            tmptlb = env->tlb_table[mmu_idx][index];
+            env->tlb_table[mmu_idx][index] = env->tlb_v_table[mmu_idx][vidx];
+            env->tlb_v_table[mmu_idx][vidx] = tmptlb;
+            tmpiotlb = env->iotlb[mmu_idx][index];
+            env->iotlb[mmu_idx][index] = env->iotlb_v[mmu_idx][vidx];
+            env->iotlb_v[mmu_idx][vidx] = tmpiotlb;
+            break;
+        }
+    }
+    /* return true when there is a vtlb hit, i.e. vidx >=0 */
+    return vidx >= 0;
+}
 
 #ifndef SOFTMMU_CODE_ACCESS
 static inline DATA_TYPE glue(io_read, SUFFIX)(CPUArchState *env,
@@ -185,7 +202,7 @@ WORD_TYPE helper_le_ld_name(CPUArchState *env, target_ulong addr,
             cpu_unaligned_access(ENV_GET_CPU(env), addr, READ_ACCESS_TYPE,
                                  mmu_idx, retaddr);
         }
-        if (!VICTIM_TLB_HIT(ADDR_READ)) {
+        if (!smmu_helper(victim_tlb_hit)(true, env, mmu_idx, index, addr)) {
             tlb_fill(ENV_GET_CPU(env), addr, READ_ACCESS_TYPE,
                      mmu_idx, retaddr);
         }
@@ -269,7 +286,7 @@ WORD_TYPE helper_be_ld_name(CPUArchState *env, target_ulong addr,
             cpu_unaligned_access(ENV_GET_CPU(env), addr, READ_ACCESS_TYPE,
                                  mmu_idx, retaddr);
         }
-        if (!VICTIM_TLB_HIT(ADDR_READ)) {
+        if (!smmu_helper(victim_tlb_hit)(true, env, mmu_idx, index, addr)) {
             tlb_fill(ENV_GET_CPU(env), addr, READ_ACCESS_TYPE,
                      mmu_idx, retaddr);
         }
@@ -389,7 +406,7 @@ void helper_le_st_name(CPUArchState *env, target_ulong addr, DATA_TYPE val,
             cpu_unaligned_access(ENV_GET_CPU(env), addr, MMU_DATA_STORE,
                                  mmu_idx, retaddr);
         }
-        if (!VICTIM_TLB_HIT(addr_write)) {
+        if (!smmu_helper(victim_tlb_hit)(false, env, mmu_idx, index, addr)) {
             tlb_fill(ENV_GET_CPU(env), addr, MMU_DATA_STORE, mmu_idx, retaddr);
         }
         tlb_addr = env->tlb_table[mmu_idx][index].addr_write;
@@ -469,7 +486,7 @@ void helper_be_st_name(CPUArchState *env, target_ulong addr, DATA_TYPE val,
             cpu_unaligned_access(ENV_GET_CPU(env), addr, MMU_DATA_STORE,
                                  mmu_idx, retaddr);
         }
-        if (!VICTIM_TLB_HIT(addr_write)) {
+        if (!smmu_helper(victim_tlb_hit)(false, env, mmu_idx, index, addr)) {
             tlb_fill(ENV_GET_CPU(env), addr, MMU_DATA_STORE, mmu_idx, retaddr);
         }
         tlb_addr = env->tlb_table[mmu_idx][index].addr_write;
@@ -542,7 +559,7 @@ void probe_write(CPUArchState *env, target_ulong addr, int mmu_idx,
     if ((addr & TARGET_PAGE_MASK)
         != (tlb_addr & (TARGET_PAGE_MASK | TLB_INVALID_MASK))) {
         /* TLB entry is for a different page */
-        if (!VICTIM_TLB_HIT(addr_write)) {
+        if (!smmu_helper(victim_tlb_hit)(false, env, mmu_idx, index, addr)) {
             tlb_fill(ENV_GET_CPU(env), addr, MMU_DATA_STORE, mmu_idx, retaddr);
         }
     }
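
(Illustration only, not part of the patch: the self-contained sketch below shows
the two ideas the commit message relies on. The demo_helper/DEMO_TAG names are
hypothetical stand-ins for the SUFFIX/MMUSUFFIX tokens each template inclusion
defines; pasting the tag into the helper name keeps every inline instance
uniquely named, and passing a compile-time constant bool lets the compiler drop
the untaken leg after inlining, which is what the true/false arguments at the
converted VICTIM_TLB_HIT call sites are for.)

    /* Standalone sketch, builds with any C compiler. */
    #include <stdbool.h>
    #include <stdio.h>

    #define xglue(a, b) a##b
    #define glue(a, b) xglue(a, b)

    /* Hypothetical stand-in for the per-inclusion SUFFIX/MMUSUFFIX tag. */
    #define DEMO_TAG _q_mmu
    #define demo_helper(name) glue(glue(_demo_helper_, DEMO_TAG), name)

    /* With is_read constant at the call site, only one leg of the if/else
     * survives once the helper is inlined. */
    static inline int demo_helper(pick)(const bool is_read, int rd, int wr)
    {
        if (is_read) {
            return rd;
        } else {
            return wr;
        }
    }

    int main(void)
    {
        /* Call sites pass literal true/false, mirroring the converted
         * victim TLB call sites in the patch. */
        printf("%d %d\n", demo_helper(pick)(true, 1, 2),
               demo_helper(pick)(false, 1, 2));
        return 0;
    }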