From patchwork Thu Jan 29 18:25:43 2015
X-Patchwork-Submitter: Julien Grall
X-Patchwork-Id: 43962
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Thu, 29 Jan 2015 18:25:43 +0000
Message-Id: <1422555950-31821-9-git-send-email-julien.grall@linaro.org>
In-Reply-To: <1422555950-31821-1-git-send-email-julien.grall@linaro.org>
References: <1422555950-31821-1-git-send-email-julien.grall@linaro.org>
Cc: stefano.stabellini@citrix.com, Vijaya.Kumar@caviumnetworks.com, Julien Grall <julien.grall@linaro.org>, tim@xen.org, ian.campbell@citrix.com
Subject: [Xen-devel] [PATCH v2 08/15] xen/arm: vgic-v3: Emulate correctly the re-distributor
There is a one-to-one mapping between re-distributors and processors, and each re-distributor can be accessed by any processor at any time. For instance, during the initialization of the GIC, the driver browses the re-distributors to find the one associated with the current processor (via GICR_TYPER). So each re-distributor has its own MMIO region.

The current implementation of the vGICv3 emulation assumes that the re-distributor region is banked. Therefore, a processor is not able to access the re-distributor of another processor. While this works fine for Linux, which only reads GICR_TYPER from another processor's re-distributor to find its own, we should have a correct implementation for other operating systems.

All emulated registers of the re-distributors take a vCPU as parameter and take the necessary lock, so concurrent accesses are already properly handled. The missing bit is retrieving the right vCPU from the region accessed.

Retrieving the right vCPU could be slow, so the lookup has been divided into 2 paths:
    - fast path: the current vCPU is accessing its own re-distributor
    - slow path: the current vCPU is accessing another re-distributor

As each processor needs to initialize itself, the former case is very common. To handle this access quickly, the base address of the re-distributor is computed and stored per-vCPU during vCPU initialization.

The latter is less common and more complicated to handle. The re-distributors can be spread across multiple regions in memory.
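As a stand-alone illustration (not part of the patch itself), assuming a power-of-two re-distributor stride such as the GICv3 default of 128KB, the address split the fast path relies on can be sketched as follows; the function names are invented for this sketch:

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t paddr_t;

/*
 * Split a guest physical address inside a re-distributor region into
 * the base of the re-distributor frame being accessed and the register
 * offset within that frame. Requires a power-of-two stride.
 */
static paddr_t rdist_base_of(paddr_t gpa, uint32_t stride)
{
    /* Clear the offset bits below the stride boundary. */
    return gpa & ~((paddr_t)stride - 1);
}

static uint32_t rdist_offset_of(paddr_t gpa, uint32_t stride)
{
    /* Keep only the offset bits below the stride boundary. */
    return (uint32_t)(gpa & ((paddr_t)stride - 1));
}
```

The fast path then reduces to comparing the computed base against the per-vCPU `rdist_base` cached at vCPU initialization; only on mismatch does the slow region lookup run.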
During domain creation, Xen browses those regions to record the first vCPU handled by each region. When an access hits the slow path, Xen will:
    1) Retrieve the region using the base address of the re-distributor accessed
    2) Find the vCPU ID attached to the re-distributor
    3) Check the validity of the vCPU. If it's not valid, a data abort is injected into the guest

Finally, this patch also correctly supports the GICR_TYPER.LAST bit, which indicates whether the re-distributor is the last one of the contiguous region.

Signed-off-by: Julien Grall

---
    Linux doesn't access the re-distributor of another processor, except
    for GICR_TYPER during processor initialization. As the region is
    banked, it quickly gets the "correct" re-distributor. Still, this
    should ideally be backported to Xen 4.5.

    Changes in v2:
        - Patch added
---
 xen/arch/arm/gic-v3.c        |  12 ++++-
 xen/arch/arm/vgic-v3.c       | 111 ++++++++++++++++++++++++++++++++++++++++---
 xen/include/asm-arm/domain.h |   6 +++
 3 files changed, 122 insertions(+), 7 deletions(-)

diff --git a/xen/arch/arm/gic-v3.c b/xen/arch/arm/gic-v3.c
index fdfda0b..1b7ddb3 100644
--- a/xen/arch/arm/gic-v3.c
+++ b/xen/arch/arm/gic-v3.c
@@ -895,6 +895,8 @@ static int gicv_v3_init(struct domain *d)
      */
     if ( is_hardware_domain(d) )
     {
+        unsigned int first_cpu = 0;
+
         d->arch.vgic.dbase = gicv3.dbase;
         d->arch.vgic.dbase_size = gicv3.dbase_size;
 
@@ -909,8 +911,15 @@ static int gicv_v3_init(struct domain *d)
 
         for ( i = 0; i < gicv3.rdist_count; i++ )
         {
+            paddr_t size = gicv3.rdist_regions[i].size;
+
             d->arch.vgic.rdist_regions[i].base = gicv3.rdist_regions[i].base;
-            d->arch.vgic.rdist_regions[i].size = gicv3.rdist_regions[i].size;
+            d->arch.vgic.rdist_regions[i].size = size;
+
+            /* Set the first CPU handled by this region */
+            d->arch.vgic.rdist_regions[i].first_cpu = first_cpu;
+
+            first_cpu += size / d->arch.vgic.rdist_stride;
         }
         d->arch.vgic.nr_regions = gicv3.rdist_count;
     }
@@ -929,6 +938,7 @@ static int gicv_v3_init(struct domain *d)
         BUILD_BUG_ON((GUEST_GICV3_GICR0_SIZE / GUEST_GICV3_RDIST_STRIDE) < MAX_VIRT_CPUS);
         d->arch.vgic.rdist_regions[0].base = GUEST_GICV3_GICR0_BASE;
         d->arch.vgic.rdist_regions[0].size = GUEST_GICV3_GICR0_SIZE;
+        d->arch.vgic.rdist_regions[0].first_cpu = 0;
     }
 
     return 0;
diff --git a/xen/arch/arm/vgic-v3.c b/xen/arch/arm/vgic-v3.c
index 13481ac..378ac82 100644
--- a/xen/arch/arm/vgic-v3.c
+++ b/xen/arch/arm/vgic-v3.c
@@ -114,6 +114,10 @@ static int __vgic_v3_rdistr_rd_mmio_read(struct vcpu *v, mmio_info_t *info,
                MPIDR_AFFINITY_LEVEL(v->arch.vmpidr, 1) << 40 |
                MPIDR_AFFINITY_LEVEL(v->arch.vmpidr, 0) << 32);
         *r = aff;
+
+        if ( v->arch.vgic.flags & VGIC_V3_RDIST_LAST )
+            *r |= GICR_TYPER_LAST;
+
         return 1;
     case GICR_STATUSR:
         /* Not implemented */
@@ -619,13 +623,61 @@ write_ignore:
     return 1;
 }
 
+static inline struct vcpu *get_vcpu_from_rdist(paddr_t gpa,
+                                               struct vcpu *v,
+                                               uint32_t *offset)
+{
+    struct domain *d = v->domain;
+    uint32_t stride = d->arch.vgic.rdist_stride;
+    paddr_t base;
+    int i, vcpu_id;
+    struct vgic_rdist_region *region;
+
+    *offset = gpa & (stride - 1);
+    base = gpa & ~((paddr_t)stride - 1);
+
+    /* Fast path: the VCPU is trying to access its re-distributor */
+    if ( likely(v->arch.vgic.rdist_base == base) )
+        return v;
+
+    /* Slow path: the VCPU is trying to access another re-distributor */
+
+    /*
+     * Find the region where the re-distributor lives. For this purpose,
+     * we look one region ahead as only the MMIO range for re-distributors
+     * traps here.
+     * Note: We assume that the regions are ordered.
+     */
+    for ( i = 1; i < d->arch.vgic.nr_regions; i++ )
+    {
+        if ( base < d->arch.vgic.rdist_regions[i].base )
+            break;
+    }
+
+    region = &d->arch.vgic.rdist_regions[i - 1];
+
+    vcpu_id = region->first_cpu + ((base - region->base) / stride);
+
+    if ( unlikely(vcpu_id >= d->max_vcpus) )
+        return NULL;
+
+    /*
+     * Note: We are assuming that d->vcpu[vcpu_id] is never NULL. If
+     * that is not the case, the guest will receive a data abort and
+     * won't be able to boot.
+     */
+    return d->vcpu[vcpu_id];
+}
+
 static int vgic_v3_rdistr_mmio_read(struct vcpu *v, mmio_info_t *info)
 {
     uint32_t offset;
 
     perfc_incr(vgicr_reads);
 
-    offset = info->gpa & (v->domain->arch.vgic.rdist_stride - 1);
+    v = get_vcpu_from_rdist(info->gpa, v, &offset);
+    if ( unlikely(!v) )
+        return 0;
 
     if ( offset < SZ_64K )
         return __vgic_v3_rdistr_rd_mmio_read(v, info, offset);
@@ -645,11 +697,9 @@ static int vgic_v3_rdistr_mmio_write(struct vcpu *v, mmio_info_t *info)
 
     perfc_incr(vgicr_writes);
 
-    if ( v->domain->arch.vgic.rdist_stride != 0 )
-        offset = info->gpa & (v->domain->arch.vgic.rdist_stride - 1);
-    else
-        /* If the stride is not set, default to 128K */
-        offset = info->gpa & (SZ_128K - 1);
+    v = get_vcpu_from_rdist(info->gpa, v, &offset);
+    if ( unlikely(!v) )
+        return 0;
 
     if ( offset < SZ_64K )
         return __vgic_v3_rdistr_rd_mmio_write(v, info, offset);
@@ -1084,6 +1134,13 @@ static int vgic_v3_vcpu_init(struct vcpu *v)
 {
     int i;
     uint64_t affinity;
+    paddr_t rdist_base;
+    struct vgic_rdist_region *region;
+    unsigned int last_cpu;
+
+    /* Convenient aliases */
+    struct domain *d = v->domain;
+    uint32_t rdist_stride = d->arch.vgic.rdist_stride;
 
     /* For SGI and PPI the target is always this CPU */
     affinity = (MPIDR_AFFINITY_LEVEL(v->arch.vmpidr, 3) << 32 |
@@ -1094,6 +1151,48 @@ static int vgic_v3_vcpu_init(struct vcpu *v)
     for ( i = 0 ; i < 32 ; i++ )
         v->arch.vgic.private_irqs->v3.irouter[i] = affinity;
 
+    /*
+     * Find the region where the re-distributor lives. For this purpose,
+     * we look one region ahead as we have only the first CPU in hand.
+     */
+    for ( i = 1; i < d->arch.vgic.nr_regions; i++ )
+    {
+        if ( v->vcpu_id < d->arch.vgic.rdist_regions[i].first_cpu )
+            break;
+    }
+
+    region = &d->arch.vgic.rdist_regions[i - 1];
+
+    /* Get the base address of the re-distributor */
+    rdist_base = region->base;
+    rdist_base += (v->vcpu_id - region->first_cpu) * rdist_stride;
+
+    /*
+     * Safety check, mostly for DOM0. It's possible to have more vCPUs
+     * than physical CPUs. Maybe we should deny this case?
+     */
+    if ( (rdist_base < region->base) ||
+         ((rdist_base + rdist_stride) > (region->base + region->size)) )
+    {
+        dprintk(XENLOG_ERR,
+                "d%u: Unable to find a re-distributor for VCPU %u\n",
+                d->domain_id, v->vcpu_id);
+        return -EINVAL;
+    }
+
+    v->arch.vgic.rdist_base = rdist_base;
+
+    /*
+     * If the re-distributor is the last one of the contiguous region,
+     * or the vCPU is the last of the domain, set the
+     * VGIC_V3_RDIST_LAST flag.
+     * Note that we are assuming max_vcpus will never change.
+     */
+    last_cpu = (region->size / rdist_stride) + region->first_cpu - 1;
+
+    if ( v->vcpu_id == last_cpu || (v->vcpu_id == (d->max_vcpus - 1)) )
+        v->arch.vgic.flags |= VGIC_V3_RDIST_LAST;
+
     return 0;
 }
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index 3eaa7f0..81e3185 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -106,6 +106,7 @@ struct arch_domain
         struct vgic_rdist_region {
             paddr_t base;                   /* Base address */
             paddr_t size;                   /* Size */
+            unsigned int first_cpu;         /* First CPU handled */
         } rdist_regions[MAX_RDIST_COUNT];
         int nr_regions;                     /* Number of rdist regions */
         uint32_t rdist_stride;              /* Re-Distributor stride */
@@ -239,6 +240,11 @@ struct arch_vcpu
          * lr_pending is a subset of vgic.inflight_irqs. */
         struct list_head lr_pending;
         spinlock_t lock;
+
+        /* GICv3: re-distributor base and flags for this vCPU */
+        paddr_t rdist_base;
+#define VGIC_V3_RDIST_LAST  (1 << 0)        /* last vCPU of the rdist */
+        uint8_t flags;
     } vgic;
 
     /* Timer registers */
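For illustration only, here is a hypothetical, self-contained sketch of the region bookkeeping arithmetic the patch relies on in get_vcpu_from_rdist() and vgic_v3_vcpu_init(); the names `rdist_region`, `rdist_vcpu_id`, and `rdist_last_cpu` are invented for this sketch and are not Xen identifiers:

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t paddr_t;

/* Mirror of the per-region bookkeeping added to arch_domain. */
struct rdist_region {
    paddr_t base;           /* Base address of the region */
    paddr_t size;           /* Size of the region */
    unsigned int first_cpu; /* First vCPU handled by this region */
};

/*
 * vCPU ID backing a given re-distributor frame: the frame index within
 * the region, offset by the first vCPU the region handles.
 */
static int rdist_vcpu_id(const struct rdist_region *r, paddr_t rdist_base,
                         uint32_t stride)
{
    return r->first_cpu + (int)((rdist_base - r->base) / stride);
}

/*
 * Last vCPU covered by a contiguous region, as used to decide whether
 * to report GICR_TYPER.LAST for a vCPU's re-distributor.
 */
static unsigned int rdist_last_cpu(const struct rdist_region *r,
                                   uint32_t stride)
{
    return r->first_cpu + (unsigned int)(r->size / stride) - 1;
}
```

With a 128KB stride, a 1MB region starting at first_cpu = 4 holds eight frames, so it serves vCPUs 4 through 11; an ID at or beyond the domain's max_vcpus fails the slow-path validity check and the guest receives a data abort.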