From patchwork Mon May 4 01:25:06 2015
X-Patchwork-Submitter: Shannon Zhao
X-Patchwork-Id: 47918
From: shannon.zhao@linaro.org
To: stable@vger.kernel.org
Cc: sasha.levin@oracle.com, christoffer.dall@linaro.org, shannon.zhao@linaro.org,
    Laszlo Ersek, Ard Biesheuvel, Marc Zyngier
Subject: [PATCH for 3.18.y stable 02/22] arm, arm64: KVM: allow forced dcache flush on page faults
Date: Mon, 4 May 2015 09:25:06 +0800
Message-Id: <1430702726-2056-3-git-send-email-shannon.zhao@linaro.org>
X-Mailer: git-send-email 1.9.5.msysgit.1
In-Reply-To: <1430702726-2056-1-git-send-email-shannon.zhao@linaro.org>
References: <1430702726-2056-1-git-send-email-shannon.zhao@linaro.org>
X-Mailing-List: stable@vger.kernel.org

From: Laszlo Ersek

commit 840f4bfbe03f1ce94ade8fdf84e8cd925ef15a48 upstream.

To allow handling of incoherent memslots in a subsequent patch, this
patch adds a parameter 'ipa_uncached' to cache_coherent_guest_page()
so that we can instruct it to flush the page's contents to DRAM even
if the guest has caching globally enabled.
Signed-off-by: Laszlo Ersek
Signed-off-by: Ard Biesheuvel
Signed-off-by: Marc Zyngier
Signed-off-by: Shannon Zhao
---
 arch/arm/include/asm/kvm_mmu.h   | 5 +++--
 arch/arm/kvm/mmu.c               | 9 +++++++--
 arch/arm64/include/asm/kvm_mmu.h | 5 +++--
 3 files changed, 13 insertions(+), 6 deletions(-)

diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index acb0d57..f867060 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -161,9 +161,10 @@ static inline bool vcpu_has_cache_enabled(struct kvm_vcpu *vcpu)
 }
 
 static inline void coherent_cache_guest_page(struct kvm_vcpu *vcpu, hva_t hva,
-                                             unsigned long size)
+                                             unsigned long size,
+                                             bool ipa_uncached)
 {
-        if (!vcpu_has_cache_enabled(vcpu))
+        if (!vcpu_has_cache_enabled(vcpu) || ipa_uncached)
                 kvm_flush_dcache_to_poc((void *)hva, size);
 
         /*
diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index 8664ff1..8038e52 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -853,6 +853,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
         struct vm_area_struct *vma;
         pfn_t pfn;
         pgprot_t mem_type = PAGE_S2;
+        bool fault_ipa_uncached;
 
         write_fault = kvm_is_write_fault(vcpu);
         if (fault_status == FSC_PERM && !write_fault) {
@@ -919,6 +920,8 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
         if (!hugetlb && !force_pte)
                 hugetlb = transparent_hugepage_adjust(&pfn, &fault_ipa);
 
+        fault_ipa_uncached = false;
+
         if (hugetlb) {
                 pmd_t new_pmd = pfn_pmd(pfn, mem_type);
                 new_pmd = pmd_mkhuge(new_pmd);
@@ -926,7 +929,8 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
                         kvm_set_s2pmd_writable(&new_pmd);
                         kvm_set_pfn_dirty(pfn);
                 }
-                coherent_cache_guest_page(vcpu, hva & PMD_MASK, PMD_SIZE);
+                coherent_cache_guest_page(vcpu, hva & PMD_MASK, PMD_SIZE,
+                                          fault_ipa_uncached);
                 ret = stage2_set_pmd_huge(kvm, memcache, fault_ipa, &new_pmd);
         } else {
                 pte_t new_pte = pfn_pte(pfn, mem_type);
@@ -934,7 +938,8 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
                         kvm_set_s2pte_writable(&new_pte);
                         kvm_set_pfn_dirty(pfn);
                 }
-                coherent_cache_guest_page(vcpu, hva, PAGE_SIZE);
+                coherent_cache_guest_page(vcpu, hva, PAGE_SIZE,
+                                          fault_ipa_uncached);
                 ret = stage2_set_pte(kvm, memcache, fault_ipa, &new_pte,
                         pgprot_val(mem_type) == pgprot_val(PAGE_S2_DEVICE));
         }
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 0caf7a5..123b521 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -243,9 +243,10 @@ static inline bool vcpu_has_cache_enabled(struct kvm_vcpu *vcpu)
 }
 
 static inline void coherent_cache_guest_page(struct kvm_vcpu *vcpu, hva_t hva,
-                                             unsigned long size)
+                                             unsigned long size,
+                                             bool ipa_uncached)
 {
-        if (!vcpu_has_cache_enabled(vcpu))
+        if (!vcpu_has_cache_enabled(vcpu) || ipa_uncached)
                 kvm_flush_dcache_to_poc((void *)hva, size);
 
         if (!icache_is_aliasing()) {    /* PIPT */
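
For reference, the standalone sketch below (not kernel code) models the
decision this patch introduces: the flush to the point of coherency now
happens either when the vCPU runs with caching disabled or when the caller
forces it through the new 'ipa_uncached' argument. The names vcpu_model and
flush_dcache_stub are illustrative stand-ins for the real kvm_vcpu and
kvm_flush_dcache_to_poc() machinery.

/*
 * Illustrative sketch only -- not kernel code. Models the post-patch
 * behaviour of coherent_cache_guest_page().
 */
#include <stdbool.h>
#include <stdio.h>

struct vcpu_model {
        bool cache_enabled;     /* stands in for vcpu_has_cache_enabled() */
};

static void flush_dcache_stub(void *hva, unsigned long size)
{
        /* In the kernel this would be kvm_flush_dcache_to_poc(hva, size). */
        printf("flush %lu bytes at %p to PoC\n", size, hva);
}

/* Mirrors the post-patch condition: flush if caching is off OR forced. */
static void coherent_cache_guest_page_sketch(struct vcpu_model *vcpu,
                                             void *hva, unsigned long size,
                                             bool ipa_uncached)
{
        if (!vcpu->cache_enabled || ipa_uncached)
                flush_dcache_stub(hva, size);
}

int main(void)
{
        char page[4096];
        struct vcpu_model vcpu = { .cache_enabled = true };

        /* Before this patch, an enabled cache would always skip the flush;
         * with ipa_uncached == true the flush is now forced. */
        coherent_cache_guest_page_sketch(&vcpu, page, sizeof(page), true);
        return 0;
}

In the real caller, user_mem_abort() passes fault_ipa_uncached, which this
patch always sets to false; per the commit message, a subsequent patch in the
series uses it to handle incoherent memslots.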