From patchwork Sat Jun 19 17:26:25 2021
X-Patchwork-Submitter: Richard Henderson <richard.henderson@linaro.org>
X-Patchwork-Id: 463887
From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Subject: [PATCH 14/15] softmmu/memory: Support some unaligned access
Date: Sat, 19 Jun 2021 10:26:25 -0700
Message-Id: <20210619172626.875885-15-richard.henderson@linaro.org>
In-Reply-To: <20210619172626.875885-1-richard.henderson@linaro.org>
References: <20210619172626.875885-1-richard.henderson@linaro.org>
Cc: alex.bennee@linaro.org, pbonzini@redhat.com,
	mark.cave-ayland@ilande.co.uk, f4bug@amsat.org

Decline to support writes that cannot be covered by access_size_min.
Decline to support unaligned reads that require extraction from more
than two reads.

Do support exact size match when the model supports unaligned.
Do support reducing the operation size to match the alignment.
Do support loads that extract from 1 or 2 larger loads.

Diagnose anything that we do not handle via LOG_GUEST_ERROR, as any
of these cases are probably model or guest errors.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 softmmu/memory.c | 127 ++++++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 121 insertions(+), 6 deletions(-)

-- 
2.25.1

diff --git a/softmmu/memory.c b/softmmu/memory.c
index 2fe237327d..baf8573f1b 100644
--- a/softmmu/memory.c
+++ b/softmmu/memory.c
@@ -529,7 +529,7 @@ static MemTxResult access_with_adjusted_size(hwaddr addr,
     MemoryRegionAccessFn *access_fn;
     uint64_t access_mask;
     unsigned access_size;
-    unsigned i;
+    signed access_sh;
     MemTxResult r = MEMTX_OK;
 
     if (!access_size_min) {
@@ -547,7 +547,6 @@ static MemTxResult access_with_adjusted_size(hwaddr addr,
                      : memory_region_read_with_attrs_accessor);
     }
 
-    /* FIXME: support unaligned access? */
    /*
     * Check for a small access.
     */
@@ -557,6 +556,10 @@ static MemTxResult access_with_adjusted_size(hwaddr addr,
          * cycle, and many mmio registers have side-effects on read.
          * In practice, this appears to be either (1) model error,
          * or (2) guest error via the fuzzer.
+         *
+         * TODO: Are all short reads also guest or model errors, because
+         * of said side effects?  Or is this valid read-for-effect then
+         * discard the (complete) result via narrow destination register?
          */
         if (write) {
             qemu_log_mask(LOG_GUEST_ERROR, "%s: Invalid short write: %s "
@@ -566,22 +569,134 @@
                           access_size_min, access_size_max);
             return MEMTX_ERROR;
         }
+
+        /*
+         * If the original access is aligned, we can always extract
+         * from a single larger load.
+         */
+        access_size = access_size_min;
+        if (likely((addr & (size - 1)) == 0)) {
+            goto extract;
+        }
+
+        /*
+         * TODO: We could search for a larger load that happens to
+         * cover the unaligned load, but at some point we will always
+         * require two operations.  Extract from two loads.
+         */
+        goto extract2;
     }
 
+    /*
+     * Check for size in range.
+     */
+    if (likely(size <= access_size_max)) {
+        /*
+         * If the access is aligned or if the model supports
+         * unaligned accesses, use one operation directly.
+         */
+        if (likely((addr & (size - 1)) == 0) || mr->ops->impl.unaligned) {
+            access_size = size;
+            access_sh = 0;
+            goto direct;
+        }
+    }
+
+    /*
+     * It is certain that we require multiple operations.
+     * If the access is aligned (or the model supports unaligned),
+     * then we will perform N accesses which exactly cover the
+     * operation requested.
+     */
     access_size = MAX(MIN(size, access_size_max), access_size_min);
+    if (unlikely(addr & (access_size - 1))) {
+        unsigned lsb = addr & -addr;
+        if (lsb >= access_size_min) {
+            /*
+             * The model supports small enough loads that we can
+             * exactly match the operation requested.  For reads,
+             * this is preferable to touching more than requested.
+             * For writes, this is mandatory.
+             */
+            access_size = lsb;
+        } else if (write) {
+            qemu_log_mask(LOG_GUEST_ERROR, "%s: Invalid unaligned write: %s "
+                          "hwaddr: 0x%" HWADDR_PRIx " size: %u "
+                          "min: %u max: %u\n", __func__,
+                          memory_region_name(mr), addr, size,
+                          access_size_min, access_size_max);
+            return MEMTX_ERROR;
+        } else if (size <= access_size_max) {
+            /* As per above, we can use two loads to implement. */
+            access_size = size;
+            goto extract2;
+        } else {
+            /*
+             * TODO: because access_size_max is small, this case requires
+             * more than 2 loads to assemble and extract.  Bail out.
+             */
+            qemu_log_mask(LOG_GUEST_ERROR, "%s: Unhandled unaligned read: %s "
+                          "hwaddr: 0x%" HWADDR_PRIx " size: %u "
+                          "min: %u max: %u\n", __func__,
+                          memory_region_name(mr), addr, size,
+                          access_size_min, access_size_max);
+            return MEMTX_ERROR;
+        }
+    }
+
     access_mask = MAKE_64BIT_MASK(0, access_size * 8);
     if (memory_region_big_endian(mr)) {
-        for (i = 0; i < size; i += access_size) {
+        for (unsigned i = 0; i < size; i += access_size) {
             r |= access_fn(mr, addr + i, value, access_size,
-                        (size - access_size - i) * 8, access_mask, attrs);
+                           (size - access_size - i) * 8, access_mask, attrs);
         }
     } else {
-        for (i = 0; i < size; i += access_size) {
+        for (unsigned i = 0; i < size; i += access_size) {
             r |= access_fn(mr, addr + i, value, access_size, i * 8,
-                        access_mask, attrs);
+                           access_mask, attrs);
         }
     }
     return r;
+
+ extract2:
+    /*
+     * Extract from one or two loads to produce the result.
+     * Validate that we need two loads before performing them.
+     */
+    access_sh = addr & (access_size - 1);
+    if (access_sh + size > access_size) {
+        addr &= ~(access_size - 1);
+        if (memory_region_big_endian(mr)) {
+            access_sh = (access_size - access_sh) * 8;
+            r |= access_fn(mr, addr, value, access_size, access_sh, -1, attrs);
+            access_sh -= access_size * 8;
+            r |= access_fn(mr, addr, value, access_size, access_sh, -1, attrs);
+        } else {
+            access_sh = (access_sh - access_size) * 8;
+            r |= access_fn(mr, addr, value, access_size, access_sh, -1, attrs);
+            access_sh += access_size * 8;
+            r |= access_fn(mr, addr, value, access_size, access_sh, -1, attrs);
+        }
+        *value &= MAKE_64BIT_MASK(0, size * 8);
+        return r;
+    }
+
+ extract:
+    /*
+     * Extract from one larger load to produce the result.
+     */
+    access_sh = addr & (access_size - 1);
+    addr &= ~(access_size - 1);
+    if (memory_region_big_endian(mr)) {
+        access_sh = access_size - size - access_sh;
+    }
+    /* Note that with this interface, right shift is negative. */
+    access_sh *= -8;
+
+ direct:
+    access_mask = MAKE_64BIT_MASK(0, size * 8);
+    return access_fn(mr, addr, value, access_size, access_sh,
+                     access_mask, attrs);
 }
 
 static AddressSpace *memory_region_to_address_space(MemoryRegion *mr)