From patchwork Fri Sep 26 15:53:15 2014
X-Patchwork-Submitter: Stefano Stabellini
X-Patchwork-Id: 38008
Date: Fri, 26 Sep 2014 16:53:15 +0100
From: Stefano Stabellini
To: Stefano Stabellini
Cc: Ian.Campbell@citrix.com, vijay.kilari@gmail.com,
    Prasun.Kapoor@caviumnetworks.com, Vijaya Kumar K,
    julien.grall@linaro.org, tim@xen.org, xen-devel@lists.xen.org,
    stefano.stabellini@citrix.com, manish.jaggi@caviumnetworks.com
References: <1411722219-29771-1-git-send-email-vijay.kilari@gmail.com>
Subject: Re: [Xen-devel] [PATCH v4] xen/arm: Deliver interrupts to vcpu
 specified in IROUTER
List-Id: xen-devel@lists.xen.org
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
On Fri, 26 Sep 2014, Stefano Stabellini wrote:
> On Fri, 26 Sep 2014, vijay.kilari@gmail.com wrote:
> > From: Vijaya Kumar K
> >
> > In GICv3 use IROUTER register contents to deliver irq to
> > specified vcpu.
> >
> > vgic irouter[irq] is used to represent vcpu number for which
> > irq affinity is assigned. Bit[31] is used to store IROUTER
> > bit[31] value to represent irq mode.
> >
> > This patch is similar to Stefano's commit
> > 5b3a817ea33b891caf7d7d788da9ce6deffa82a1 for GICv2
> >
> > Signed-off-by: Vijaya Kumar K
>
> Thanks for your work Vijaya.
> Few very small changes required, see below. With them:
>
> Acked-by: Stefano Stabellini
>
> Ian, maybe you could just apply and make the changes yourself?

For clarity, I have appended the patch with the three changes I listed:

---

xen/arm: Deliver interrupts to vcpu specified in IROUTER

In GICv3 use IROUTER register contents to deliver irq to
specified vcpu.

vgic irouter[irq] is used to represent vcpu number for which
irq affinity is assigned. Bit[31] is used to store IROUTER
bit[31] value to represent irq mode.
This patch is similar to Stefano's commit
5b3a817ea33b891caf7d7d788da9ce6deffa82a1 for GICv2

Signed-off-by: Vijaya Kumar K
Signed-off-by: Stefano Stabellini
---
 xen/arch/arm/vgic-v3.c |  108 +++++++++++++++++++++++++++++++++++++++++-------
 1 file changed, 93 insertions(+), 15 deletions(-)

diff --git a/xen/arch/arm/vgic-v3.c b/xen/arch/arm/vgic-v3.c
index ac8cf07..ff99e50 100644
--- a/xen/arch/arm/vgic-v3.c
+++ b/xen/arch/arm/vgic-v3.c
@@ -45,10 +45,42 @@
 #define GICV3_GICR_PIDR2       GICV3_GICD_PIDR2
 #define GICV3_GICR_PIDR4       GICV3_GICD_PIDR4
 
+static struct vcpu *vgic_v3_irouter_to_vcpu(struct vcpu *v, uint64_t irouter)
+{
+    irouter &= ~(GICD_IROUTER_SPI_MODE_ANY);
+    irouter = irouter & MPIDR_AFF0_MASK;
+
+    return v->domain->vcpu[irouter];
+}
+
+static uint64_t vgic_v3_vcpu_to_irouter(struct vcpu *v,
+                                        unsigned int vcpu_id)
+{
+    uint64_t irq_affinity;
+    struct vcpu *v_target;
+
+    v_target = v->domain->vcpu[vcpu_id];
+    irq_affinity = (MPIDR_AFFINITY_LEVEL(v_target->arch.vmpidr, 3) << 32 |
+                    MPIDR_AFFINITY_LEVEL(v_target->arch.vmpidr, 2) << 16 |
+                    MPIDR_AFFINITY_LEVEL(v_target->arch.vmpidr, 1) << 8  |
+                    MPIDR_AFFINITY_LEVEL(v_target->arch.vmpidr, 0));
+
+    return irq_affinity;
+}
+
 static struct vcpu *vgic_v3_get_target_vcpu(struct vcpu *v, unsigned int irq)
 {
-    /* TODO: Return vcpu0 always */
-    return v->domain->vcpu[0];
+    uint64_t target;
+    struct vgic_irq_rank *rank = vgic_rank_irq(v, irq);
+
+    ASSERT(spin_is_locked(&rank->lock));
+
+    target = rank->v3.irouter[irq % 32];
+    target &= ~(GICD_IROUTER_SPI_MODE_ANY);
+    target &= MPIDR_AFF0_MASK;
+    ASSERT(target >= 0 && target < v->domain->max_vcpus);
+
+    return v->domain->vcpu[target];
 }
 
 static int __vgic_v3_rdistr_rd_mmio_read(struct vcpu *v, mmio_info_t *info,
@@ -353,9 +385,9 @@ static int __vgic_v3_distr_common_mmio_write(struct vcpu *v, mmio_info_t *info,
         vgic_lock_rank(v, rank, flags);
         tr = rank->ienable;
         rank->ienable |= *r;
-        vgic_unlock_rank(v, rank, flags);
         /* The irq number is extracted from offset. so shift by register size */
         vgic_enable_irqs(v, (*r) & (~tr), (reg - GICD_ISENABLER) >> DABT_WORD);
+        vgic_unlock_rank(v, rank, flags);
         return 1;
     case GICD_ICENABLER ... GICD_ICENABLERN:
         if ( dabt.size != DABT_WORD ) goto bad_width;
@@ -364,9 +396,9 @@ static int __vgic_v3_distr_common_mmio_write(struct vcpu *v, mmio_info_t *info,
         vgic_lock_rank(v, rank, flags);
         tr = rank->ienable;
         rank->ienable &= ~*r;
-        vgic_unlock_rank(v, rank, flags);
         /* The irq number is extracted from offset. so shift by register size */
         vgic_disable_irqs(v, (*r) & tr, (reg - GICD_ICENABLER) >> DABT_WORD);
+        vgic_unlock_rank(v, rank, flags);
         return 1;
     case GICD_ISPENDR ... GICD_ISPENDRN:
         if ( dabt.size != DABT_WORD ) goto bad_width;
@@ -620,6 +652,8 @@ static int vgic_v3_distr_mmio_read(struct vcpu *v, mmio_info_t *info)
     register_t *r = select_user_reg(regs, dabt.reg);
     struct vgic_irq_rank *rank;
     unsigned long flags;
+    uint64_t irouter;
+    unsigned int vcpu_id;
     int gicd_reg = (int)(info->gpa - v->domain->arch.vgic.dbase);
 
     switch ( gicd_reg )
@@ -672,8 +706,17 @@ static int vgic_v3_distr_mmio_read(struct vcpu *v, mmio_info_t *info)
                                 DABT_DOUBLE_WORD);
         if ( rank == NULL ) goto read_as_zero;
         vgic_lock_rank(v, rank, flags);
-        *r = rank->v3.irouter[REG_RANK_INDEX(64,
-                              (gicd_reg - GICD_IROUTER), DABT_DOUBLE_WORD)];
+        irouter = rank->v3.irouter[REG_RANK_INDEX(64,
+                                   (gicd_reg - GICD_IROUTER), DABT_DOUBLE_WORD)];
+        /* XXX: bit[31] stores IRQ mode. Just return */
+        if ( irouter & GICD_IROUTER_SPI_MODE_ANY )
+        {
+            *r = GICD_IROUTER_SPI_MODE_ANY;
+            vgic_unlock_rank(v, rank, flags);
+            return 1;
+        }
+        vcpu_id = irouter;
+        *r = vgic_v3_vcpu_to_irouter(v, vcpu_id);
         vgic_unlock_rank(v, rank, flags);
         return 1;
     case GICD_NSACR ... GICD_NSACRN:
@@ -754,6 +797,8 @@ static int vgic_v3_distr_mmio_write(struct vcpu *v, mmio_info_t *info)
     register_t *r = select_user_reg(regs, dabt.reg);
     struct vgic_irq_rank *rank;
     unsigned long flags;
+    uint64_t new_irouter, new_target, old_target;
+    struct vcpu *old_vcpu, *new_vcpu;
     int gicd_reg = (int)(info->gpa - v->domain->arch.vgic.dbase);
 
     switch ( gicd_reg )
@@ -810,16 +855,43 @@ static int vgic_v3_distr_mmio_write(struct vcpu *v, mmio_info_t *info)
         rank = vgic_rank_offset(v, 64, gicd_reg - GICD_IROUTER,
                                 DABT_DOUBLE_WORD);
         if ( rank == NULL ) goto write_ignore_64;
-        if ( *r )
+        BUG_ON(v->domain->max_vcpus > 8);
+        new_irouter = *r;
+        vgic_lock_rank(v, rank, flags);
+
+        old_target = rank->v3.irouter[REG_RANK_INDEX(64,
+                                      (gicd_reg - GICD_IROUTER), DABT_DOUBLE_WORD)];
+        old_target &= ~(GICD_IROUTER_SPI_MODE_ANY);
+        if ( new_irouter & GICD_IROUTER_SPI_MODE_ANY )
         {
-            /* TODO: Ignored. We don't support irq delivery for vcpu != 0 */
-            gdprintk(XENLOG_DEBUG,
-                     "SPI delivery to secondary cpus not supported\n");
-            goto write_ignore_64;
+            /*
+             * IRQ routing mode set. Route any one processor in the entire
+             * system. We chose vcpu 0 and set IRQ mode bit[31] in irouter.
+             */
+            new_target = 0;
+            new_vcpu = v->domain->vcpu[0];
+            new_irouter = GICD_IROUTER_SPI_MODE_ANY;
+        }
+        else
+        {
+            new_target = new_irouter & MPIDR_AFF0_MASK;
+            if ( new_target >= v->domain->max_vcpus )
+            {
+                printk("vGICv3: vGICD: wrong irouter at offset %#08x\n val 0x%lx vcpu %x",
+                       gicd_reg, new_target, v->domain->max_vcpus);
+                vgic_unlock_rank(v, rank, flags);
+                return 0;
+            }
+            new_vcpu = vgic_v3_irouter_to_vcpu(v, new_irouter);
+        }
+
+        rank->v3.irouter[REG_RANK_INDEX(64, (gicd_reg - GICD_IROUTER),
+                         DABT_DOUBLE_WORD)] = new_irouter;
+        if ( old_target != new_target )
+        {
+            old_vcpu = v->domain->vcpu[old_target];
+            vgic_migrate_irq(old_vcpu, new_vcpu, (gicd_reg - GICD_IROUTER)/8);
         }
-        vgic_lock_rank(v, rank, flags);
-        rank->v3.irouter[REG_RANK_INDEX(64,
-                         (gicd_reg - GICD_IROUTER), DABT_DOUBLE_WORD)] = *r;
         vgic_unlock_rank(v, rank, flags);
         return 1;
     case GICD_NSACR ... GICD_NSACRN:
@@ -965,8 +1037,14 @@ static int vgic_v3_vcpu_init(struct vcpu *v)
 
 static int vgic_v3_domain_init(struct domain *d)
 {
-    int i;
+    int i, idx;
 
+    /* By default deliver to CPU0 */
+    for ( i = 0; i < DOMAIN_NR_RANKS(d); i++ )
+    {
+        for ( idx = 0; idx < 32; idx++ )
+            d->arch.vgic.shared_irqs[i].v3.irouter[idx] = 0;
+    }
     /* We rely on gicv init to get dbase and size */
     register_mmio_handler(d, &vgic_distr_mmio_handler, d->arch.vgic.dbase,
                          d->arch.vgic.dbase_size);