From patchwork Mon Oct 24 17:19:09 2022
X-Patchwork-Submitter: Alex Bennée
X-Patchwork-Id: 617892
From: Alex Bennée
To: qemu-devel@nongnu.org
Cc: Alex Bennée, Paolo Bonzini, Peter Maydell, Philippe Mathieu-Daudé, Akihiko Odaki, Gerd Hoffmann
Subject: [RFC PATCH] main-loop: introduce WITH_QEMU_IOTHREAD_LOCK
Date: Mon, 24 Oct 2022 18:19:09 +0100
Message-Id:
<20221024171909.434818-1-alex.bennee@linaro.org>

This helper intends to ape our other auto-unlocking helpers such as
WITH_QEMU_LOCK_GUARD. The principal difference is that the iothread
lock is often nested, so it needs a little extra bookkeeping to ensure
we don't double lock or unlock a lock taken higher up the call chain.

Convert some of the common routines that follow this pattern to use
the new wrapper.

Signed-off-by: Alex Bennée
Reviewed-by: Richard Henderson
---
 include/qemu/main-loop.h | 41 ++++++++++++++++++++++++++++++++++++++++
 hw/core/cpu-common.c     | 10 ++--------
 util/rcu.c               | 40 ++++++++++++++++-----------------------
 ui/cocoa.m               | 18 ++++--------------
 4 files changed, 63 insertions(+), 46 deletions(-)

diff --git a/include/qemu/main-loop.h b/include/qemu/main-loop.h
index aac707d073..604e1823da 100644
--- a/include/qemu/main-loop.h
+++ b/include/qemu/main-loop.h
@@ -341,6 +341,47 @@ void qemu_mutex_lock_iothread_impl(const char *file, int line);
  */
 void qemu_mutex_unlock_iothread(void);
 
+/**
+ * WITH_QEMU_IOTHREAD_LOCK - nested lock of iothread
+ *
+ * This is a specialised form of WITH_QEMU_LOCK_GUARD which is used to
+ * safely encapsulate code that needs the BQL.
The main difference is
+ * the BQL is often nested so we need to save the state of it on entry
+ * so we know if we need to free it once we leave the scope of the guard.
+ */
+
+typedef struct {
+    bool taken;
+} IoThreadLocked;
+
+static inline IoThreadLocked *qemu_iothread_auto_lock(IoThreadLocked *x)
+{
+    bool locked = qemu_mutex_iothread_locked();
+    if (!locked) {
+        qemu_mutex_lock_iothread();
+        x->taken = true;
+    }
+    return x;
+}
+
+static inline void qemu_iothread_auto_unlock(IoThreadLocked *x)
+{
+    if (x->taken) {
+        qemu_mutex_unlock_iothread();
+    }
+}
+
+G_DEFINE_AUTOPTR_CLEANUP_FUNC(IoThreadLocked, qemu_iothread_auto_unlock)
+
+#define WITH_QEMU_IOTHREAD_LOCK_(var)                                 \
+    for (g_autoptr(IoThreadLocked) var =                              \
+             qemu_iothread_auto_lock(&(IoThreadLocked) {});           \
+         var;                                                         \
+         qemu_iothread_auto_unlock(var), var = NULL)
+
+#define WITH_QEMU_IOTHREAD_LOCK \
+    WITH_QEMU_IOTHREAD_LOCK_(glue(qemu_lockable_auto, __COUNTER__))
+
 /*
  * qemu_cond_wait_iothread: Wait on condition for the main loop mutex
  *

diff --git a/hw/core/cpu-common.c b/hw/core/cpu-common.c
index f9fdd46b9d..0a60f916a9 100644
--- a/hw/core/cpu-common.c
+++ b/hw/core/cpu-common.c
@@ -70,14 +70,8 @@ CPUState *cpu_create(const char *typename)
  * BQL here if we need to.
cpu_interrupt assumes it is held. */
 void cpu_reset_interrupt(CPUState *cpu, int mask)
 {
-    bool need_lock = !qemu_mutex_iothread_locked();
-
-    if (need_lock) {
-        qemu_mutex_lock_iothread();
-    }
-    cpu->interrupt_request &= ~mask;
-    if (need_lock) {
-        qemu_mutex_unlock_iothread();
+    WITH_QEMU_IOTHREAD_LOCK {
+        cpu->interrupt_request &= ~mask;
     }
 }

diff --git a/util/rcu.c b/util/rcu.c
index b6d6c71cff..02e7491de1 100644
--- a/util/rcu.c
+++ b/util/rcu.c
@@ -320,35 +320,27 @@ static void drain_rcu_callback(struct rcu_head *node)
 void drain_call_rcu(void)
 {
     struct rcu_drain rcu_drain;
-    bool locked = qemu_mutex_iothread_locked();
 
     memset(&rcu_drain, 0, sizeof(struct rcu_drain));
     qemu_event_init(&rcu_drain.drain_complete_event, false);
 
-    if (locked) {
-        qemu_mutex_unlock_iothread();
-    }
-
-
-    /*
-     * RCU callbacks are invoked in the same order as in which they
-     * are registered, thus we can be sure that when 'drain_rcu_callback'
-     * is called, all RCU callbacks that were registered on this thread
-     * prior to calling this function are completed.
-     *
-     * Note that since we have only one global queue of the RCU callbacks,
-     * we also end up waiting for most of RCU callbacks that were registered
-     * on the other threads, but this is a side effect that shoudn't be
-     * assumed.
-     */
-
-    qatomic_inc(&in_drain_call_rcu);
-    call_rcu1(&rcu_drain.rcu, drain_rcu_callback);
-    qemu_event_wait(&rcu_drain.drain_complete_event);
-    qatomic_dec(&in_drain_call_rcu);
+    WITH_QEMU_IOTHREAD_LOCK {
+        /*
+         * RCU callbacks are invoked in the same order as in which they
+         * are registered, thus we can be sure that when 'drain_rcu_callback'
+         * is called, all RCU callbacks that were registered on this thread
+         * prior to calling this function are completed.
+         *
+         * Note that since we have only one global queue of the RCU callbacks,
+         * we also end up waiting for most of RCU callbacks that were registered
+         * on the other threads, but this is a side effect that shoudn't be
+         * assumed.
+         */
 
-    if (locked) {
-        qemu_mutex_lock_iothread();
+        qatomic_inc(&in_drain_call_rcu);
+        call_rcu1(&rcu_drain.rcu, drain_rcu_callback);
+        qemu_event_wait(&rcu_drain.drain_complete_event);
+        qatomic_dec(&in_drain_call_rcu);
     }
 }

diff --git a/ui/cocoa.m b/ui/cocoa.m
index 660d3e0935..f8bd315bdd 100644
--- a/ui/cocoa.m
+++ b/ui/cocoa.m
@@ -115,27 +115,17 @@ static void cocoa_switch(DisplayChangeListener *dcl,
 
 static void with_iothread_lock(CodeBlock block)
 {
-    bool locked = qemu_mutex_iothread_locked();
-    if (!locked) {
-        qemu_mutex_lock_iothread();
-    }
-    block();
-    if (!locked) {
-        qemu_mutex_unlock_iothread();
+    WITH_QEMU_IOTHREAD_LOCK {
+        block();
     }
 }
 
 static bool bool_with_iothread_lock(BoolCodeBlock block)
 {
-    bool locked = qemu_mutex_iothread_locked();
     bool val;
 
-    if (!locked) {
-        qemu_mutex_lock_iothread();
-    }
-    val = block();
-    if (!locked) {
-        qemu_mutex_unlock_iothread();
+    WITH_QEMU_IOTHREAD_LOCK {
+        val = block();
     }
     return val;
 }