From patchwork Thu Oct 18 14:28:47 2012
X-Patchwork-Submitter: Peter Maydell
X-Patchwork-Id: 12343
From: Peter Maydell
To: qemu-devel@nongnu.org
Cc: patches@linaro.org, Corentin Chary
Subject: [PATCH] ui/vnc-jobs: Delete unused and buggy vnc_stop_worker_thread()
Date: Thu, 18 Oct 2012 15:28:47 +0100
Message-Id: <1350570527-24187-1-git-send-email-peter.maydell@linaro.org>
X-Mailer: git-send-email 1.7.2.5

The function vnc_stop_worker_thread() is buggy, because it tries to
delete jobs from the worker thread's queue but the worker thread itself
will not cope with this happening (it would end up trying to remove an
already-removed list item from its queue list). Fortunately nobody ever
calls vnc_stop_worker_thread(), so we can fix this by simply deleting
all the untested racy code.

Signed-off-by: Peter Maydell
---
Seems the easiest way to deal with this bug spotted via code inspection :-)

 ui/vnc-jobs.c | 26 --------------------------
 ui/vnc-jobs.h |  2 --
 2 files changed, 28 deletions(-)

diff --git a/ui/vnc-jobs.c b/ui/vnc-jobs.c
index 087b84d..c5ce2f1 100644
--- a/ui/vnc-jobs.c
+++ b/ui/vnc-jobs.c
@@ -136,19 +136,6 @@ bool vnc_has_job(VncState *vs)
     return ret;
 }
 
-void vnc_jobs_clear(VncState *vs)
-{
-    VncJob *job, *tmp;
-
-    vnc_lock_queue(queue);
-    QTAILQ_FOREACH_SAFE(job, &queue->jobs, next, tmp) {
-        if (job->vs == vs || !vs) {
-            QTAILQ_REMOVE(&queue->jobs, job, next);
-        }
-    }
-    vnc_unlock_queue(queue);
-}
-
 void vnc_jobs_join(VncState *vs)
 {
     vnc_lock_queue(queue);
@@ -336,16 +323,3 @@ bool vnc_worker_thread_running(void)
 {
     return queue; /* Check global queue */
 }
-
-void vnc_stop_worker_thread(void)
-{
-    if (!vnc_worker_thread_running())
-        return ;
-
-    /* Remove all jobs and wake up the thread */
-    vnc_lock_queue(queue);
-    queue->exit = true;
-    vnc_unlock_queue(queue);
-    vnc_jobs_clear(NULL);
-    qemu_cond_broadcast(&queue->cond);
-}
diff --git a/ui/vnc-jobs.h b/ui/vnc-jobs.h
index 86e6d88..b87857f 100644
--- a/ui/vnc-jobs.h
+++ b/ui/vnc-jobs.h
@@ -35,13 +35,11 @@ VncJob *vnc_job_new(VncState *vs);
 int vnc_job_add_rect(VncJob *job, int x, int y, int w, int h);
 void vnc_job_push(VncJob *job);
 bool vnc_has_job(VncState *vs);
-void vnc_jobs_clear(VncState *vs);
 void vnc_jobs_join(VncState *vs);
 
 void vnc_jobs_consume_buffer(VncState *vs);
 void vnc_start_worker_thread(void);
 bool vnc_worker_thread_running(void);
-void vnc_stop_worker_thread(void);
 
 /* Locks */
 static inline int vnc_trylock_display(VncDisplay *vd)
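
For anyone who wants to see the failure mode the commit message describes,
here is a minimal, self-contained sketch. It is not the QEMU code: it uses
the plain <sys/queue.h> TAILQ macros that QTAILQ mirrors, plays the thread
interleaving out sequentially instead of with real locks and threads, and
the job/job_list/in_flight names are invented for illustration. The point
is only that removing an already-unlinked element a second time writes
through stale link pointers and corrupts the queue.

/* Illustrative stand-in for the vnc_jobs_clear()/worker-thread race,
 * not the actual QEMU data structures. */
#include <stdio.h>
#include <sys/queue.h>

struct job {
    int id;
    TAILQ_ENTRY(job) link;
};

TAILQ_HEAD(job_list, job);

int main(void)
{
    struct job_list jobs = TAILQ_HEAD_INITIALIZER(jobs);
    struct job a = { .id = 1 }, b = { .id = 2 };

    TAILQ_INSERT_TAIL(&jobs, &a, link);
    TAILQ_INSERT_TAIL(&jobs, &b, link);

    /* Worker thread: grabs a pointer to the head job, then drops the
     * queue lock while it processes the job's rectangles. */
    struct job *in_flight = TAILQ_FIRST(&jobs);

    /* Meanwhile a vnc_jobs_clear(NULL)-style sweep unlinks every job,
     * including the one the worker is still using. */
    while (!TAILQ_EMPTY(&jobs)) {
        TAILQ_REMOVE(&jobs, TAILQ_FIRST(&jobs), link);
    }

    /* Worker thread finishes and removes "its" job a second time.  Its
     * link pointers are stale, so this write re-threads the supposedly
     * empty list through an already-unlinked job (in QEMU that job
     * would also have been freed by then). */
    TAILQ_REMOVE(&jobs, in_flight, link);

    printf("head now points at already-removed job %d\n",
           TAILQ_EMPTY(&jobs) ? -1 : TAILQ_FIRST(&jobs)->id);
    return 0;
}

Compiled with a BSD-style <sys/queue.h> this prints that the head points
at job 2, i.e. the queue has been silently re-linked through a removed
element; deleting the only code path that could trigger this interleaving
is the simplest fix.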