From patchwork Thu May 8 21:18:24 2014
X-Patchwork-Submitter: Wei Huang
X-Patchwork-Id: 29867
From: Wei Huang <w1.huang@samsung.com>
To: xen-devel@lists.xen.org
Date: Thu, 08 May 2014 16:18:24 -0500
Message-id: <1399583908-21755-3-git-send-email-w1.huang@samsung.com>
X-Mailer: git-send-email 1.7.9.5
In-reply-to: <1399583908-21755-1-git-send-email-w1.huang@samsung.com>
References: <1399583908-21755-1-git-send-email-w1.huang@samsung.com>
Cc: keir@xen.org, ian.campbell@citrix.com, stefano.stabellini@eu.citrix.com,
 andrew.cooper3@citrix.com, julien.grall@linaro.org, tim@xen.org,
 jaeyong.yoo@samsung.com, jbeulich@suse.com, ian.jackson@eu.citrix.com,
 yjhyun.yoo@samsung.com
Subject: [Xen-devel] [RFC v3 2/6] xen/arm: Add save/restore support for ARM GIC V2

This patch implements save/restore support for the ARM guest GIC. Two types
of GIC v2 state are saved separately: 1) VGICD_* contains the GIC distributor
state from the guest VM's view; 2) GICH_* is the GIC virtual control state
from the hypervisor's perspective.
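As an aside for reviewers (not part of the patch): the restore path walks the saved 32-bit ienable mask and re-enables each IRQ whose bit is set. A minimal standalone C sketch of that bit-walk, using __builtin_ctz in place of Xen's find_next_bit():

```c
/* Standalone sketch (not Xen code): collect the indices of set bits in a
 * 32-bit "ienable" mask, mirroring the find_next_bit() loop used by
 * hvm_vgicd_load() to re-enable each saved IRQ after restore. */
#include <assert.h>
#include <stdint.h>

/* Write the index of each set bit in 'mask' into 'out'; return the count. */
static unsigned int enabled_irqs(uint32_t mask, unsigned int out[32])
{
    unsigned int n = 0;

    while ( mask )
    {
        /* __builtin_ctz gives the index of the lowest set bit */
        out[n++] = (unsigned int)__builtin_ctz(mask);
        mask &= mask - 1; /* clear that bit and continue */
    }

    return n;
}
```

For example, a mask of 0x12 yields IRQs 1 and 4.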
Signed-off-by: Evgeny Fedotov
Signed-off-by: Wei Huang
---
 xen/arch/arm/vgic.c                    | 171 ++++++++++++++++++++++++++++++++
 xen/include/public/arch-arm/hvm/save.h |  34 ++++++-
 2 files changed, 204 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index 4cf6470..505e944 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -24,6 +24,7 @@
 #include
 #include
 #include
+#include
 #include
@@ -73,6 +74,110 @@ static struct vgic_irq_rank *vgic_irq_rank(struct vcpu *v, int b, int n)
     return NULL;
 }
 
+/* Save guest VM's distributor info into a context to support domain
+ * save/restore. Such info represents the guest VM's view of its GIC
+ * distributor (GICD_*).
+ */
+static int hvm_vgicd_save(struct domain *d, hvm_domain_context_t *h)
+{
+    struct hvm_arm_vgicd_v2 ctxt;
+    struct vcpu *v;
+    struct vgic_irq_rank *rank;
+    int rc = 0;
+
+    /* Save the state for each VCPU */
+    for_each_vcpu( d, v )
+    {
+        rank = &v->arch.vgic.private_irqs;
+
+        /* IENABLE, IACTIVE, IPEND, PENDSGI */
+        ctxt.ienable = rank->ienable;
+        ctxt.iactive = rank->iactive;
+        ctxt.ipend = rank->ipend;
+        ctxt.pendsgi = rank->pendsgi;
+
+        /* ICFG */
+        ctxt.icfg[0] = rank->icfg[0];
+        ctxt.icfg[1] = rank->icfg[1];
+
+        /* IPRIORITY */
+        BUILD_BUG_ON(sizeof(rank->ipriority) != sizeof(ctxt.ipriority));
+        memcpy(ctxt.ipriority, rank->ipriority, sizeof(rank->ipriority));
+
+        /* ITARGETS */
+        BUILD_BUG_ON(sizeof(rank->itargets) != sizeof(ctxt.itargets));
+        memcpy(ctxt.itargets, rank->itargets, sizeof(rank->itargets));
+
+        if ( (rc = hvm_save_entry(VGICD_V2, v->vcpu_id, h, &ctxt)) != 0 )
+            return rc;
+    }
+
+    return rc;
+}
+
+/* Load guest VM's distributor info from a context to support domain
+ * save/restore. The info is loaded into vgic_irq_rank.
+ */
+static int hvm_vgicd_load(struct domain *d, hvm_domain_context_t *h)
+{
+    struct hvm_arm_vgicd_v2 ctxt;
+    struct vgic_irq_rank *rank;
+    struct vcpu *v;
+    int vcpuid;
+    unsigned long enable_bits;
+    struct pending_irq *p;
+    unsigned int irq = 0;
+    int rc = 0;
+
+    /* Which vcpu is this? */
+    vcpuid = hvm_load_instance(h);
+    if ( vcpuid >= d->max_vcpus || (v = d->vcpu[vcpuid]) == NULL )
+    {
+        dprintk(XENLOG_ERR, "HVM restore: dom%u has no vcpu%u\n",
+                d->domain_id, vcpuid);
+        return -EINVAL;
+    }
+
+    if ( (rc = hvm_load_entry(VGICD_V2, h, &ctxt)) != 0 )
+        return rc;
+
+    /* Restore PPI states */
+    rank = &v->arch.vgic.private_irqs;
+
+    /* IENABLE, IACTIVE, IPEND, PENDSGI */
+    rank->ienable = ctxt.ienable;
+    rank->iactive = ctxt.iactive;
+    rank->ipend = ctxt.ipend;
+    rank->pendsgi = ctxt.pendsgi;
+
+    /* ICFG */
+    rank->icfg[0] = ctxt.icfg[0];
+    rank->icfg[1] = ctxt.icfg[1];
+
+    /* IPRIORITY */
+    BUILD_BUG_ON(sizeof(rank->ipriority) != sizeof(ctxt.ipriority));
+    memcpy(rank->ipriority, ctxt.ipriority, sizeof(rank->ipriority));
+
+    /* ITARGETS */
+    BUILD_BUG_ON(sizeof(rank->itargets) != sizeof(ctxt.itargets));
+    memcpy(rank->itargets, ctxt.itargets, sizeof(rank->itargets));
+
+    /* Set each IRQ's status to enabled by iterating through the
+     * rank->ienable register. This step is required; otherwise events
+     * won't be received by the VM after restore. */
+    enable_bits = ctxt.ienable;
+    while ( (irq = find_next_bit(&enable_bits, 32, irq)) < 32 )
+    {
+        p = irq_to_pending(v, irq);
+        set_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
+        irq++;
+    }
+
+    return 0;
+}
+HVM_REGISTER_SAVE_RESTORE(VGICD_V2, hvm_vgicd_save, hvm_vgicd_load,
+                          1, HVMSR_PER_VCPU);
+
 int domain_vgic_init(struct domain *d)
 {
     int i;
@@ -759,6 +864,72 @@ out:
     smp_send_event_check_mask(cpumask_of(v->processor));
 }
 
+/* Save GIC virtual control state into a context to support save/restore.
+ * The info represents most of the GICH_* registers.
+ */
+static int hvm_gich_save(struct domain *d, hvm_domain_context_t *h)
+{
+    struct hvm_arm_gich_v2 ctxt;
+    struct vcpu *v;
+    int rc = 0;
+
+    /* Save the state of GICs */
+    for_each_vcpu( d, v )
+    {
+        ctxt.gic_hcr = v->arch.gic_hcr;
+        ctxt.gic_vmcr = v->arch.gic_vmcr;
+        ctxt.gic_apr = v->arch.gic_apr;
+
+        /* Save list registers and masks */
+        BUILD_BUG_ON(sizeof(v->arch.gic_lr) > sizeof(ctxt.gic_lr));
+        memcpy(ctxt.gic_lr, v->arch.gic_lr, sizeof(v->arch.gic_lr));
+
+        ctxt.lr_mask = v->arch.lr_mask;
+        ctxt.event_mask = v->arch.event_mask;
+
+        if ( (rc = hvm_save_entry(GICH_V2, v->vcpu_id, h, &ctxt)) != 0 )
+            return rc;
+    }
+
+    return rc;
+}
+
+/* Restore GIC virtual control state from a context to support save/restore */
+static int hvm_gich_load(struct domain *d, hvm_domain_context_t *h)
+{
+    int vcpuid;
+    struct hvm_arm_gich_v2 ctxt;
+    struct vcpu *v;
+    int rc = 0;
+
+    /* Which vcpu is this? */
+    vcpuid = hvm_load_instance(h);
+    if ( vcpuid >= d->max_vcpus || (v = d->vcpu[vcpuid]) == NULL )
+    {
+        dprintk(XENLOG_ERR, "HVM restore: dom%u has no vcpu%u\n", d->domain_id,
+                vcpuid);
+        return -EINVAL;
+    }
+
+    if ( (rc = hvm_load_entry(GICH_V2, h, &ctxt)) != 0 )
+        return rc;
+
+    v->arch.gic_hcr = ctxt.gic_hcr;
+    v->arch.gic_vmcr = ctxt.gic_vmcr;
+    v->arch.gic_apr = ctxt.gic_apr;
+
+    /* Restore list registers and masks */
+    BUILD_BUG_ON(sizeof(v->arch.gic_lr) > sizeof(ctxt.gic_lr));
+    memcpy(v->arch.gic_lr, ctxt.gic_lr, sizeof(v->arch.gic_lr));
+
+    v->arch.lr_mask = ctxt.lr_mask;
+    v->arch.event_mask = ctxt.event_mask;
+
+    return rc;
+}
+
+HVM_REGISTER_SAVE_RESTORE(GICH_V2, hvm_gich_save, hvm_gich_load, 1,
+                          HVMSR_PER_VCPU);
+
 /*
  * Local variables:
  * mode: C

diff --git a/xen/include/public/arch-arm/hvm/save.h b/xen/include/public/arch-arm/hvm/save.h
index 8312e7b..421a6f6 100644
--- a/xen/include/public/arch-arm/hvm/save.h
+++ b/xen/include/public/arch-arm/hvm/save.h
@@ -40,10 +40,42 @@ struct hvm_save_header
 };
 DECLARE_HVM_SAVE_TYPE(HEADER, 1, struct hvm_save_header);
 
+/* Guest's view of GIC distributor (per-vcpu)
+ * - Based on GICv2 (see "struct vgic_irq_rank")
+ * - Stores guest's view of GIC distributor
+ * - Only supports SGI and PPI for DomU (DomU doesn't handle SPI)
+ */
+struct hvm_arm_vgicd_v2
+{
+    uint32_t ienable;
+    uint32_t iactive;
+    uint32_t ipend;
+    uint32_t pendsgi;
+    uint32_t icfg[2];
+    uint32_t ipriority[8];
+    uint32_t itargets[8];
+};
+DECLARE_HVM_SAVE_TYPE(VGICD_V2, 2, struct hvm_arm_vgicd_v2);
+
+/* Info for hypervisor to manage guests (per-vcpu)
+ * - Based on GICv2
+ * - Mainly stores GICH_* registers
+ */
+struct hvm_arm_gich_v2
+{
+    uint32_t gic_hcr;
+    uint32_t gic_vmcr;
+    uint32_t gic_apr;
+    uint32_t gic_lr[64];
+    uint64_t event_mask;
+    uint64_t lr_mask;
+};
+DECLARE_HVM_SAVE_TYPE(GICH_V2, 3, struct hvm_arm_gich_v2);
+
 /*
  * Largest type-code in use
  */
-#define HVM_SAVE_CODE_MAX 1
+#define HVM_SAVE_CODE_MAX 3
 
 #endif
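A note for reviewers (not part of the patch): since these records form the migration wire format, their layout is worth checking. A standalone sketch replicating the two public structs, assuming an ABI with 8-byte alignment for uint64_t (e.g. AArch64 or x86-64), where the GICH record picks up 4 bytes of padding before event_mask:

```c
/* Standalone layout check for the save records added by this patch.
 * Assumes uint64_t has 8-byte alignment (typical AArch64/x86-64 ABI). */
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

struct hvm_arm_vgicd_v2
{
    uint32_t ienable;
    uint32_t iactive;
    uint32_t ipend;
    uint32_t pendsgi;
    uint32_t icfg[2];
    uint32_t ipriority[8];
    uint32_t itargets[8];
};

struct hvm_arm_gich_v2
{
    uint32_t gic_hcr;
    uint32_t gic_vmcr;
    uint32_t gic_apr;
    uint32_t gic_lr[64];   /* ends at offset 268; padded to 272 */
    uint64_t event_mask;
    uint64_t lr_mask;
};
```

All-uint32_t members make the distributor record a padding-free 88 bytes, while the GICH record's trailing uint64_t fields introduce implicit padding that a consumer of the save stream must account for.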