From patchwork Thu Aug 11 15:24:10 2016
X-Patchwork-Submitter: Alex Bennée
X-Patchwork-Id: 73781
From: Alex Bennée <alex.bennee@linaro.org>
To: mttcg@listserver.greensocs.com, qemu-devel@nongnu.org,
    fred.konrad@greensocs.com, a.rigo@virtualopensystems.com, cota@braap.org,
    bobby.prani@gmail.com, nikunj@linux.vnet.ibm.com
Cc: peter.maydell@linaro.org, claudio.fontana@huawei.com, Peter Crosthwaite,
    jan.kiszka@siemens.com, mark.burton@greensocs.com, serge.fdrv@gmail.com,
    pbonzini@redhat.com, Alex Bennée, rth@twiddle.net
Subject: [Qemu-devel] [RFC v4 14/28] tcg: add kick timer for single-threaded vCPU emulation
Date: Thu, 11 Aug 2016 16:24:10 +0100
Message-Id: <1470929064-4092-15-git-send-email-alex.bennee@linaro.org>
In-Reply-To: <1470929064-4092-1-git-send-email-alex.bennee@linaro.org>
References: <1470929064-4092-1-git-send-email-alex.bennee@linaro.org>
X-Mailer: git-send-email 2.7.4

Currently we rely on the side effect of the main loop grabbing the
iothread_mutex to give any long-running basic block chains a kick and
ensure the next vCPU is scheduled. As this code is being refactored
and rationalised, we now do it explicitly here.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
---
v2
  - re-base fixes
  - get_ticks_per_sec() -> NANOSECONDS_PER_SEC
v3
  - add define for TCG_KICK_FREQ
  - fix checkpatch warning
v4
  - wrap next calc in inline qemu_tcg_next_kick() instead of macro
---
 cpus.c | 32 ++++++++++++++++++++++++++++++++
 1 file changed, 32 insertions(+)

-- 
2.7.4

diff --git a/cpus.c b/cpus.c
index b5b45b8..8c49d6c 100644
--- a/cpus.c
+++ b/cpus.c
@@ -1185,9 +1185,34 @@ static void deal_with_unplugged_cpus(void)
     }
 }
 
+/* Single-threaded TCG
+ *
+ * In the single-threaded case each vCPU is simulated in turn. If
+ * there is more than a single vCPU we create a simple timer to kick
+ * the vCPU and ensure we don't get stuck in a tight loop in one vCPU.
+ * This is done explicitly rather than relying on side-effects
+ * elsewhere.
+ */
+static void qemu_cpu_kick_no_halt(void);
+
+#define TCG_KICK_PERIOD (NANOSECONDS_PER_SECOND / 10)
+
+static inline int64_t qemu_tcg_next_kick(void)
+{
+    return qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + TCG_KICK_PERIOD;
+}
+
+static void kick_tcg_thread(void *opaque)
+{
+    QEMUTimer *self = *(QEMUTimer **) opaque;
+    timer_mod(self, qemu_tcg_next_kick());
+    qemu_cpu_kick_no_halt();
+}
+
 static void *qemu_tcg_cpu_thread_fn(void *arg)
 {
     CPUState *cpu = arg;
+    QEMUTimer *kick_timer;
 
     rcu_register_thread();
 
@@ -1211,6 +1236,13 @@ static void *qemu_tcg_cpu_thread_fn(void *arg)
         }
     }
 
+    /* Set to kick if we have to do more than one vCPU */
+    if (CPU_NEXT(first_cpu)) {
+        kick_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL, kick_tcg_thread,
+                                  &kick_timer);
+        timer_mod(kick_timer, qemu_tcg_next_kick());
+    }
+
     /* process any pending work */
     atomic_mb_set(&exit_request, 1);
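
For readers outside the QEMU tree, the following is a minimal, self-contained
sketch of the same periodic-kick idea, using plain POSIX setitimer()/SIGALRM
rather than QEMU's QEMUTimer API. It is only an illustration: the 100 ms period
mirrors TCG_KICK_PERIOD above, and the names NR_VCPUS, run_vcpu_slice and
kick_handler are made up for the example and do not exist in QEMU.

/*
 * Standalone sketch (not QEMU code): a periodic SIGALRM sets a flag
 * that forces the round-robin loop to stop the current "vCPU" and
 * move on to the next one, so no single vCPU can hog the thread.
 */
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/time.h>

#define NR_VCPUS 4

static volatile sig_atomic_t kick_pending;

static void kick_handler(int sig)
{
    (void)sig;
    kick_pending = 1;           /* plays the role of qemu_cpu_kick_no_halt() */
}

static void run_vcpu_slice(int vcpu)
{
    while (!kick_pending) {
        /* stand-in for "execute translated blocks for this vCPU" */
    }
    printf("vCPU %d kicked, scheduling next\n", vcpu);
}

int main(void)
{
    struct sigaction sa;
    struct itimerval period = {
        .it_interval = { .tv_usec = 100 * 1000 },   /* re-arm every 100 ms */
        .it_value    = { .tv_usec = 100 * 1000 },   /* first kick after 100 ms */
    };

    memset(&sa, 0, sizeof(sa));
    sa.sa_handler = kick_handler;
    sigaction(SIGALRM, &sa, NULL);
    setitimer(ITIMER_REAL, &period, NULL);

    /* round-robin over the simulated vCPUs, one kick period each */
    for (int slice = 0; slice < 20; slice++) {
        kick_pending = 0;
        run_vcpu_slice(slice % NR_VCPUS);
    }
    return 0;
}

One difference from the patch is worth noting: setitimer's it_interval re-arms
the timer automatically, whereas a QEMUTimer fires once per timer_mod(). That
is why kick_tcg_thread() re-arms itself with timer_mod(self,
qemu_tcg_next_kick()), and why the timer is handed its own address through the
opaque pointer so the callback can find it.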