From patchwork Tue Jun 23 12:21:24 2015
X-Patchwork-Submitter: Alex Bennée <alex.bennee@linaro.org>
X-Patchwork-Id: 50219
From: Alex Bennée <alex.bennee@linaro.org>
To: mttcg@listserver.greensocs.com, mark.burton@greensocs.com,
    fred.konrad@greensocs.com, pbonzini@redhat.com
Cc: peter.maydell@linaro.org, Alex Bennée <alex.bennee@linaro.org>,
    qemu-devel@nongnu.org
Date: Tue, 23 Jun 2015 13:21:24 +0100
Message-Id: <1435062084-25332-1-git-send-email-alex.bennee@linaro.org>
Subject: [Qemu-devel] [RFC PATCH] qemu_mutex: make the iothread mutex recursive (MTTCG)

While I was testing multi-threaded TCG I discovered that one consequence of
using locking around memory_region_dispatch is that virtio transactions can
deadlock trying to grab the main mutex.
This is due to the virtio device emulation writing data back into system memory:

#0  0x00007ffff119dcc9 in __GI_raise (sig=sig@entry=6) at ../nptl/sysdeps/unix/sysv/linux/raise.c:56
#1  0x00007ffff11a10d8 in __GI_abort () at abort.c:89
#2  0x00005555555f9b24 in error_exit (err=, msg=msg@entry=0x5555559f3710 <__func__.6011> "qemu_mutex_lock") at util/qemu-thread-posix.c:48
#3  0x000055555594d630 in qemu_mutex_lock (mutex=mutex@entry=0x555555e62e60) at util/qemu-thread-posix.c:79
#4  0x0000555555631a84 in qemu_mutex_lock_iothread () at /home/alex/lsrc/qemu/qemu.git/cpus.c:1128
#5  0x000055555560dd1a in stw_phys_internal (endian=DEVICE_LITTLE_ENDIAN, val=1, addr=, as=0x555555e08060) at /home/alex/lsrc/qemu/qemu.git/exec.c:3010
#6  stw_le_phys (as=as@entry=0x555555e08060, addr=, val=1) at /home/alex/lsrc/qemu/qemu.git/exec.c:3024
#7  0x0000555555696ae5 in virtio_stw_phys (vdev=, value=, pa=) at /home/alex/lsrc/qemu/qemu.git/include/hw/virtio/virtio-access.h:61
#8  vring_avail_event (vq=0x55555648dc00, vq=0x55555648dc00, vq=0x55555648dc00, val=) at /home/alex/lsrc/qemu/qemu.git/hw/virtio/virtio.c:214
#9  virtqueue_pop (vq=0x55555648dc00, elem=elem@entry=0x7fff1403fd98) at /home/alex/lsrc/qemu/qemu.git/hw/virtio/virtio.c:472
#10 0x0000555555653cd1 in virtio_blk_get_request (s=0x555556486830) at /home/alex/lsrc/qemu/qemu.git/hw/block/virtio-blk.c:122
#11 virtio_blk_handle_output (vdev=, vq=) at /home/alex/lsrc/qemu/qemu.git/hw/block/virtio-blk.c:446
#12 0x00005555556414e1 in access_with_adjusted_size (addr=addr@entry=80, value=value@entry=0x7fffa93052b0, size=size@entry=4, access_size_min=, access_size_max=, access=0x5555556413e0, mr=0x555556b80388) at /home/alex/lsrc/qemu/qemu.git/memory.c:461
#13 0x00005555556471b7 in memory_region_dispatch_write (size=4, data=0, addr=80, mr=0x555556b80388) at /home/alex/lsrc/qemu/qemu.git/memory.c:1103
#14 io_mem_write (mr=mr@entry=0x555556b80388, addr=80, val=, size=size@entry=4) at /home/alex/lsrc/qemu/qemu.git/memory.c:2003
#15 0x000055555560ad6b in address_space_rw_internal (as=, addr=167788112, buf=buf@entry=0x7fffa9305380 "", len=4, is_write=is_write@entry=true, unlocked=, unlocked@entry=false) at /home/alex/lsrc/qemu/qemu.git/exec.c:2318
#16 0x000055555560aea8 in address_space_rw (is_write=true, len=, buf=0x7fffa9305380 "", addr=, as=) at /home/alex/lsrc/qemu/qemu.git/exec.c:2392
#17 address_space_write (len=, buf=0x7fffa9305380 "", addr=, as=) at /home/alex/lsrc/qemu/qemu.git/exec.c:2404
#18 subpage_write (opaque=, addr=, value=, len=) at /home/alex/lsrc/qemu/qemu.git/exec.c:1963
#19 0x00005555556414e1 in access_with_adjusted_size (addr=addr@entry=592, value=value@entry=0x7fffa9305420, size=size@entry=4, access_size_min=, access_size_max=, access=0x5555556413e0, mr=0x555556bfca20) at /home/alex/lsrc/qemu/qemu.git/memory.c:461
#20 0x00005555556471b7 in memory_region_dispatch_write (size=4, data=0, addr=592, mr=0x555556bfca20) at /home/alex/lsrc/qemu/qemu.git/memory.c:1103
#21 io_mem_write (mr=mr@entry=0x555556bfca20, addr=addr@entry=592, val=val@entry=0, size=size@entry=4) at /home/alex/lsrc/qemu/qemu.git/memory.c:2003
#22 0x000055555564ce16 in io_writel (retaddr=140736492182797, addr=4027616848, val=0, physaddr=592, env=0x55555649e9b0) at /home/alex/lsrc/qemu/qemu.git/softmmu_template.h:386
#23 helper_le_stl_mmu (env=0x55555649e9b0, addr=, val=0, mmu_idx=, retaddr=140736492182797) at /home/alex/lsrc/qemu/qemu.git/softmmu_template.h:426
#24 0x00007fffc49f9d0f in code_gen_buffer ()
#25 0x00005555556109dc in cpu_tb_exec (tb_ptr=0x7fffc49f9c60 "A\213n\374\205\355\017\205\233\001", cpu=0x555556496750) at /home/alex/lsrc/qemu/qemu.git/cpu-exec.c:179
"A\213n\374\205\355\017\205\233\001", cpu=0x555556496750) at /home/alex/lsrc/qemu/qemu.git/cpu-exec.c:179 #26 cpu_arm_exec (env=env@entry=0x55555649e9b0) at /home/alex/lsrc/qemu/qemu.git/cpu-exec.c:524 #27 0x0000555555630f28 in tcg_cpu_exec (env=0x55555649e9b0) at /home/alex/lsrc/qemu/qemu.git/cpus.c:1344 #28 tcg_exec_all (cpu=0x555556496750) at /home/alex/lsrc/qemu/qemu.git/cpus.c:1392 #29 qemu_tcg_cpu_thread_fn (arg=0x555556496750) at /home/alex/lsrc/qemu/qemu.git/cpus.c:1037 #30 0x00007ffff1534182 in start_thread (arg=0x7fffa9306700) at pthread_create.c:312 #31 0x00007ffff126147d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111 The fix in this patch makes the global/iothread mutex recursive (only if called from the same thread-id). As other threads are still blocked memory is still consistent all the way through. This seems neater than having to do a trylock each time. Tested-by: Alex Bennée --- cpus.c | 2 +- include/qemu/thread.h | 1 + util/qemu-thread-posix.c | 12 ++++++++++++ util/qemu-thread-win32.c | 5 +++++ 4 files changed, 19 insertions(+), 1 deletion(-) diff --git a/cpus.c b/cpus.c index d161bb9..412ab04 100644 --- a/cpus.c +++ b/cpus.c @@ -804,7 +804,7 @@ void qemu_init_cpu_loop(void) qemu_cond_init(&qemu_pause_cond); qemu_cond_init(&qemu_work_cond); qemu_cond_init(&qemu_io_proceeded_cond); - qemu_mutex_init(&qemu_global_mutex); + qemu_mutex_init_recursive(&qemu_global_mutex); qemu_thread_get_self(&io_thread); } diff --git a/include/qemu/thread.h b/include/qemu/thread.h index c9389f4..4519d2f 100644 --- a/include/qemu/thread.h +++ b/include/qemu/thread.h @@ -22,6 +22,7 @@ typedef struct QemuThread QemuThread; #define QEMU_THREAD_DETACHED 1 void qemu_mutex_init(QemuMutex *mutex); +void qemu_mutex_init_recursive(QemuMutex *mutex); void qemu_mutex_destroy(QemuMutex *mutex); void __qemu_mutex_lock(QemuMutex *mutex, const char *func, int line); #define qemu_mutex_lock(mutex) __qemu_mutex_lock(mutex, __func__, __LINE__) diff --git a/util/qemu-thread-posix.c b/util/qemu-thread-posix.c index 98eb0f0..ba2fb97 100644 --- a/util/qemu-thread-posix.c +++ b/util/qemu-thread-posix.c @@ -57,6 +57,18 @@ void qemu_mutex_init(QemuMutex *mutex) error_exit(err, __func__); } +void qemu_mutex_init_recursive(QemuMutex *mutex) +{ + int err; + pthread_mutexattr_t attr; + + pthread_mutexattr_init(&attr); + pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE_NP); + err = pthread_mutex_init(&mutex->lock, &attr); + if (err) + error_exit(err, __func__); +} + void qemu_mutex_destroy(QemuMutex *mutex) { int err; diff --git a/util/qemu-thread-win32.c b/util/qemu-thread-win32.c index 406b52f..f055067 100644 --- a/util/qemu-thread-win32.c +++ b/util/qemu-thread-win32.c @@ -44,6 +44,11 @@ void qemu_mutex_init(QemuMutex *mutex) InitializeCriticalSection(&mutex->lock); } +void qemu_mutex_init_recursive(QemuMutex *mutex) +{ + error_exit(0, "%s: not implemented for win32\n", __func__); +} + void qemu_mutex_destroy(QemuMutex *mutex) { assert(mutex->owner == 0);