From patchwork Mon Feb 16 14:50:48 2015
X-Patchwork-Submitter: Julien Grall
X-Patchwork-Id: 44709
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Mon, 16 Feb 2015 14:50:48 +0000
Message-Id: <1424098255-22490-9-git-send-email-julien.grall@linaro.org>
In-Reply-To: <1424098255-22490-1-git-send-email-julien.grall@linaro.org>
References: <1424098255-22490-1-git-send-email-julien.grall@linaro.org>
Cc: stefano.stabellini@citrix.com, Vijaya.Kumar@caviumnetworks.com, Julien Grall, tim@xen.org, ian.campbell@citrix.com
Subject: [Xen-devel] [PATCH v3 08/15] xen/arm: vgic-v3: Emulate correctly the re-distributor
There is a one-to-one mapping between re-distributors and processors, but each re-distributor can be accessed by any processor at any time. For instance, during the initialization of the GIC, the driver will browse the re-distributors to find the one associated with the current processor (via GICR_TYPER). So each re-distributor has its own MMIO region.

The current implementation of the vGICv3 emulation assumes that the re-distributor region is banked, so a processor is not able to access the re-distributor of another processor. While this works fine for Linux, which only accesses GICR_TYPER to find its associated re-distributor, the re-distributor emulation has to be implemented correctly in order to boot other operating systems.

All emulated registers of the re-distributors take a vCPU as parameter and acquire the necessary lock, so concurrent access is already properly handled. The missing bit is retrieving the right vCPU for the region accessed.

Retrieving the right vCPU could be slow, so the lookup has been divided into 2 paths:
    - fast path: the current vCPU is accessing its own re-distributor
    - slow path: the current vCPU is accessing another re-distributor

As each processor needs to initialize itself, the former case is very common. To handle this access quickly, the base address of the re-distributor is computed and stored per-vCPU during vCPU initialization.

The latter case is less common and more complicated to handle. The re-distributors can be spread across multiple regions in memory.
During the domain creation, Xen will browse those regions to find the first vCPU handled by each region. When an access hits the slow path, Xen will:
    1) Retrieve the region using the base address of the re-distributor accessed
    2) Find the vCPU ID attached to the re-distributor
    3) Check the validity of the vCPU. If it's not valid, a data abort is injected into the guest

Finally, this patch also correctly supports the GICR_TYPER.LAST bit, which indicates whether the re-distributor is the last one of the contiguous region.

Signed-off-by: Julien Grall <julien.grall@linaro.org>

---
    Linux doesn't access the re-distributor of another processor, except
    GICR_TYPER during processor initialization. As the region was banked,
    it would quickly get the "correct" re-distributor. But ideally this
    should be backported to Xen 4.5.

    Changes in v3:
        - Fix typos and update the commit message
        - Clarify/remove some comments in the code
        - Sort the re-distributor regions

    Changes in v2:
        - Patch added
---
 xen/arch/arm/gic-v3.c        | 24 ++++++++++-
 xen/arch/arm/vgic-v3.c       | 99 +++++++++++++++++++++++++++++++++++++++++++-
 xen/include/asm-arm/domain.h |  6 +++
 3 files changed, 126 insertions(+), 3 deletions(-)

diff --git a/xen/arch/arm/gic-v3.c b/xen/arch/arm/gic-v3.c
index fdfda0b..e7a7789 100644
--- a/xen/arch/arm/gic-v3.c
+++ b/xen/arch/arm/gic-v3.c
@@ -33,6 +33,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -895,6 +896,8 @@ static int gicv_v3_init(struct domain *d)
      */
     if ( is_hardware_domain(d) )
     {
+        unsigned int first_cpu = 0;
+
         d->arch.vgic.dbase = gicv3.dbase;
         d->arch.vgic.dbase_size = gicv3.dbase_size;
 
@@ -909,8 +912,15 @@ static int gicv_v3_init(struct domain *d)
 
         for ( i = 0; i < gicv3.rdist_count; i++ )
         {
+            paddr_t size = gicv3.rdist_regions[i].size;
+
             d->arch.vgic.rdist_regions[i].base = gicv3.rdist_regions[i].base;
-            d->arch.vgic.rdist_regions[i].size = gicv3.rdist_regions[i].size;
+            d->arch.vgic.rdist_regions[i].size = size;
+
+            /* Set the first CPU handled by this region */
+            d->arch.vgic.rdist_regions[i].first_cpu = first_cpu;
+
+            first_cpu += size / d->arch.vgic.rdist_stride;
         }
         d->arch.vgic.nr_regions = gicv3.rdist_count;
     }
@@ -929,6 +939,7 @@ static int gicv_v3_init(struct domain *d)
         BUILD_BUG_ON((GUEST_GICV3_GICR0_SIZE / GUEST_GICV3_RDIST_STRIDE) < MAX_VIRT_CPUS);
         d->arch.vgic.rdist_regions[0].base = GUEST_GICV3_GICR0_BASE;
         d->arch.vgic.rdist_regions[0].size = GUEST_GICV3_GICR0_SIZE;
+        d->arch.vgic.rdist_regions[0].first_cpu = 0;
     }
 
     return 0;
@@ -1173,6 +1184,14 @@ static const struct gic_hw_operations gicv3_ops = {
     .make_dt_node        = gicv3_make_dt_node,
 };
 
+static int __init cmp_rdist(const void *a, const void *b)
+{
+    const struct rdist_region *l = a, *r = b;
+
+    /* We assume that re-distributor regions can never overlap */
+    return ( l->base < r->base ) ? -1 : 1;
+}
+
 /* Set up the GIC */
 static int __init gicv3_init(struct dt_device_node *node, const void *data)
 {
@@ -1228,6 +1247,9 @@ static int __init gicv3_init(struct dt_device_node *node, const void *data)
         rdist_regs[i].size = rdist_size;
     }
 
+    /* The vGIC code requires the regions to be sorted */
+    sort(rdist_regs, gicv3.rdist_count, sizeof(*rdist_regs), cmp_rdist, NULL);
+
     /* If the stride is not set in the DT, default to 2 * SZ_64K */
     if ( !dt_property_read_u32(node, "redistributor-stride", &gicv3.rdist_stride) )
         gicv3.rdist_stride = 0;
diff --git a/xen/arch/arm/vgic-v3.c b/xen/arch/arm/vgic-v3.c
index 1d0e52d..97249db 100644
--- a/xen/arch/arm/vgic-v3.c
+++ b/xen/arch/arm/vgic-v3.c
@@ -114,6 +114,10 @@ static int __vgic_v3_rdistr_rd_mmio_read(struct vcpu *v, mmio_info_t *info,
                MPIDR_AFFINITY_LEVEL(v->arch.vmpidr, 1) << 40 |
                MPIDR_AFFINITY_LEVEL(v->arch.vmpidr, 0) << 32);
         *r = aff;
+
+        if ( v->arch.vgic.flags & VGIC_V3_RDIST_LAST )
+            *r |= GICR_TYPER_LAST;
+
         return 1;
     case GICR_STATUSR:
         /* Not implemented */
@@ -619,13 +623,56 @@ write_ignore:
     return 1;
 }
 
+static inline struct vcpu *get_vcpu_from_rdist(paddr_t gpa,
+                                               struct vcpu *v,
+                                               uint32_t *offset)
+{
+    struct domain *d = v->domain;
+    uint32_t stride = d->arch.vgic.rdist_stride;
+    paddr_t base;
+    int i, vcpu_id;
+    struct vgic_rdist_region *region;
+
+    *offset = gpa & (stride - 1);
+    base = gpa & ~((paddr_t)stride - 1);
+
+    /* Fast path: the VCPU is trying to access its own re-distributor */
+    if ( likely(v->arch.vgic.rdist_base == base) )
+        return v;
+
+    /* Slow path: the VCPU is trying to access another re-distributor */
+
+    /*
+     * Find the region where the re-distributor lives. For this purpose,
+     * we look one region ahead as only the MMIO ranges for the
+     * re-distributors trap here.
+     * Note: the regions have been sorted during GIC initialization.
+     */
+    for ( i = 1; i < d->arch.vgic.nr_regions; i++ )
+    {
+        if ( base < d->arch.vgic.rdist_regions[i].base )
+            break;
+    }
+
+    region = &d->arch.vgic.rdist_regions[i - 1];
+
+    vcpu_id = region->first_cpu + ((base - region->base) / stride);
+
+    if ( unlikely(vcpu_id >= d->max_vcpus) )
+        return NULL;
+
+    return d->vcpu[vcpu_id];
+}
+
 static int vgic_v3_rdistr_mmio_read(struct vcpu *v, mmio_info_t *info)
 {
     uint32_t offset;
 
     perfc_incr(vgicr_reads);
 
-    offset = info->gpa & (v->domain->arch.vgic.rdist_stride - 1);
+    v = get_vcpu_from_rdist(info->gpa, v, &offset);
+    if ( unlikely(!v) )
+        return 0;
 
     if ( offset < SZ_64K )
         return __vgic_v3_rdistr_rd_mmio_read(v, info, offset);
@@ -645,7 +692,9 @@ static int vgic_v3_rdistr_mmio_write(struct vcpu *v, mmio_info_t *info)
 
     perfc_incr(vgicr_writes);
 
-    offset = info->gpa & (v->domain->arch.vgic.rdist_stride - 1);
+    v = get_vcpu_from_rdist(info->gpa, v, &offset);
+    if ( unlikely(!v) )
+        return 0;
 
     if ( offset < SZ_64K )
         return __vgic_v3_rdistr_rd_mmio_write(v, info, offset);
@@ -1080,6 +1129,13 @@ static int vgic_v3_vcpu_init(struct vcpu *v)
 {
     int i;
     uint64_t affinity;
+    paddr_t rdist_base;
+    struct vgic_rdist_region *region;
+    unsigned int last_cpu;
+
+    /* Convenient aliases */
+    struct domain *d = v->domain;
+    uint32_t rdist_stride = d->arch.vgic.rdist_stride;
 
     /* For SGI and PPI the target is always this CPU */
     affinity = (MPIDR_AFFINITY_LEVEL(v->arch.vmpidr, 3) << 32 |
@@ -1090,6 +1146,45 @@ static int vgic_v3_vcpu_init(struct vcpu *v)
     for ( i = 0 ; i < 32 ; i++ )
         v->arch.vgic.private_irqs->v3.irouter[i] = affinity;
 
+    /*
+     * Find the region where the re-distributor lives. For this purpose,
+     * we look one region ahead as we have only the first CPU in hand.
+     */
+    for ( i = 1; i < d->arch.vgic.nr_regions; i++ )
+    {
+        if ( v->vcpu_id < d->arch.vgic.rdist_regions[i].first_cpu )
+            break;
+    }
+
+    region = &d->arch.vgic.rdist_regions[i - 1];
+
+    /* Get the base address of the re-distributor */
+    rdist_base = region->base;
+    rdist_base += (v->vcpu_id - region->first_cpu) * rdist_stride;
+
+    /* Check if a valid region was found for the re-distributor */
+    if ( (rdist_base < region->base) ||
+         ((rdist_base + rdist_stride) > (region->base + region->size)) )
+    {
+        dprintk(XENLOG_ERR,
+                "d%u: Unable to find a re-distributor for VCPU %u\n",
+                d->domain_id, v->vcpu_id);
+        return -EINVAL;
+    }
+
+    v->arch.vgic.rdist_base = rdist_base;
+
+    /*
+     * If the re-distributor is the last one of its contiguous region, or
+     * the vCPU is the last one of the domain, set the VGIC_V3_RDIST_LAST
+     * flag.
+     * Note that we are assuming max_vcpus will never change.
+     */
+    last_cpu = (region->size / rdist_stride) + region->first_cpu - 1;
+
+    if ( v->vcpu_id == last_cpu || (v->vcpu_id == (d->max_vcpus - 1)) )
+        v->arch.vgic.flags |= VGIC_V3_RDIST_LAST;
+
     return 0;
 }
 
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index 3eaa7f0..81e3185 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -106,6 +106,7 @@ struct arch_domain
     struct vgic_rdist_region {
         paddr_t base;                   /* Base address */
         paddr_t size;                   /* Size */
+        unsigned int first_cpu;         /* First CPU handled */
     } rdist_regions[MAX_RDIST_COUNT];
     int nr_regions;                     /* Number of rdist regions */
     uint32_t rdist_stride;              /* Re-Distributor stride */
@@ -239,6 +240,11 @@ struct arch_vcpu
          * lr_pending is a subset of vgic.inflight_irqs. */
         struct list_head lr_pending;
         spinlock_t lock;
+
+        /* GICv3: re-distributor base and flags for this vCPU */
+        paddr_t rdist_base;
+#define VGIC_V3_RDIST_LAST  (1 << 0)    /* last vCPU of the rdist */
+        uint8_t flags;
     } vgic;
 
     /* Timer registers */