From patchwork Tue Jun 28 16:34:31 2016
X-Patchwork-Submitter: Julien Grall <julien.grall@arm.com>
X-Patchwork-Id: 71121
From: Julien Grall <julien.grall@arm.com>
To: xen-devel@lists.xen.org
Date: Tue, 28 Jun 2016 17:34:31 +0100
Message-Id: <1467131671-24612-1-git-send-email-julien.grall@arm.com>
Cc: Julien Grall <julien.grall@arm.com>, sstabellini@kernel.org,
    shankerd@codeaurora.org, wei.chen@linaro.org
Subject: [Xen-devel] [PATCH] xen/arm: io: Protect the handlers with a read-write lock
List-Id: Xen developer discussion

Currently, accessing the I/O handlers does not require taking a lock,
because new handlers are always added at the end of the array.
In a follow-up patch, this array will be sorted to optimize the lookup.
Given that most of the time the I/O handlers will not be modified, using
a spinlock would add contention when multiple vCPUs access the emulated
MMIO regions. So use a read-write lock to protect the handlers.

Finally, take the opportunity to re-indent domain_io_init correctly.

Signed-off-by: Julien Grall <julien.grall@arm.com>
---
 xen/arch/arm/io.c          | 47 +++++++++++++++++++++++++++-------------------
 xen/include/asm-arm/mmio.h |  3 ++-
 2 files changed, 30 insertions(+), 20 deletions(-)

diff --git a/xen/arch/arm/io.c b/xen/arch/arm/io.c
index 0156755..5a96836 100644
--- a/xen/arch/arm/io.c
+++ b/xen/arch/arm/io.c
@@ -70,23 +70,39 @@ static int handle_write(const struct mmio_handler *handler, struct vcpu *v,
                          handler->priv);
 }
 
-int handle_mmio(mmio_info_t *info)
+static const struct mmio_handler *find_mmio_handler(struct domain *d,
+                                                    paddr_t gpa)
 {
-    struct vcpu *v = current;
-    int i;
-    const struct mmio_handler *handler = NULL;
-    const struct vmmio *vmmio = &v->domain->arch.vmmio;
+    const struct mmio_handler *handler;
+    unsigned int i;
+    struct vmmio *vmmio = &d->arch.vmmio;
+
+    read_lock(&vmmio->lock);
 
     for ( i = 0; i < vmmio->num_entries; i++ )
     {
         handler = &vmmio->handlers[i];
 
-        if ( (info->gpa >= handler->addr) &&
-             (info->gpa < (handler->addr + handler->size)) )
+        if ( (gpa >= handler->addr) &&
+             (gpa < (handler->addr + handler->size)) )
             break;
     }
 
     if ( i == vmmio->num_entries )
+        handler = NULL;
+
+    read_unlock(&vmmio->lock);
+
+    return handler;
+}
+
+int handle_mmio(mmio_info_t *info)
+{
+    struct vcpu *v = current;
+    const struct mmio_handler *handler = NULL;
+
+    handler = find_mmio_handler(v->domain, info->gpa);
+    if ( !handler )
         return 0;
 
     if ( info->dabt.write )
@@ -104,7 +120,7 @@ void register_mmio_handler(struct domain *d,
 
     BUG_ON(vmmio->num_entries >= MAX_IO_HANDLER);
 
-    spin_lock(&vmmio->lock);
+    write_lock(&vmmio->lock);
 
     handler = &vmmio->handlers[vmmio->num_entries];
 
@@ -113,24 +129,17 @@ void register_mmio_handler(struct domain *d,
     handler->size = size;
     handler->priv = priv;
 
-    /*
-     * handle_mmio is not using the lock to avoid contention.
-     * Make sure the other processors see the new handler before
-     * updating the number of entries
-     */
-    dsb(ish);
-
     vmmio->num_entries++;
 
-    spin_unlock(&vmmio->lock);
+    write_unlock(&vmmio->lock);
 }
 
 int domain_io_init(struct domain *d)
 {
-  spin_lock_init(&d->arch.vmmio.lock);
-  d->arch.vmmio.num_entries = 0;
+    rwlock_init(&d->arch.vmmio.lock);
+    d->arch.vmmio.num_entries = 0;
 
-  return 0;
+    return 0;
 }
 
 /*
diff --git a/xen/include/asm-arm/mmio.h b/xen/include/asm-arm/mmio.h
index da1cc2e..32f10f2 100644
--- a/xen/include/asm-arm/mmio.h
+++ b/xen/include/asm-arm/mmio.h
@@ -20,6 +20,7 @@
 #define __ASM_ARM_MMIO_H__
 
 #include
+#include <xen/rwlock.h>
 #include
 #include
 
@@ -51,7 +52,7 @@ struct mmio_handler {
 
 struct vmmio {
     int num_entries;
-    spinlock_t lock;
+    rwlock_t lock;
     struct mmio_handler handlers[MAX_IO_HANDLER];
 };