From patchwork Mon Aug 12 04:13:00 2013
X-Patchwork-Submitter: Christoffer Dall
X-Patchwork-Id: 18992
From: Christoffer Dall <christoffer.dall@linaro.org>
To: Paolo Bonzini, Gleb Natapov
Cc: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu,
    linux-arm-kernel@lists.infradead.org, linaro-kernel@lists.linaro.org,
    patches@linaro.org, Marc Zyngier, Christoffer Dall
Subject: [PATCH 3/4] arm64: KVM: fix 2-level page tables unmapping
Date: Sun, 11 Aug 2013 21:13:00 -0700
Message-Id: <1376280781-6539-4-git-send-email-christoffer.dall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1376280781-6539-1-git-send-email-christoffer.dall@linaro.org>
References: <1376280781-6539-1-git-send-email-christoffer.dall@linaro.org>

From: Marc Zyngier

When using 64kB pages, we only have two levels of page tables, meaning
that the PGD, PUD and PMD are fused. In this case, trying to refcount
PUDs and PMDs independently is a complete disaster, as they are the
same page.

We manage to get it right on the allocation path (stage2_set_pte uses
{pmd,pud}_none), but the unmapping path drops both the pud and pmd
refcounts, which fails spectacularly with 2-level page tables.

The fix is to avoid calling clear_pud_entry when both the pmd and pud
pages are empty. For this, instead of introducing another pud_empty
function, consolidate pte_empty and pmd_empty into a single page_empty
helper (the code was identical anyway) and use it to also test the
validity of the pud. (A toy sketch of the refcount logic is appended
after the patch below.)
Signed-off-by: Marc Zyngier
Signed-off-by: Christoffer Dall
---
 arch/arm/kvm/mmu.c | 22 ++++++++--------------
 1 file changed, 8 insertions(+), 14 deletions(-)

diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index 80a83ec..0988d9e 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -85,6 +85,12 @@ static void *mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc)
 	return p;
 }
 
+static bool page_empty(void *ptr)
+{
+	struct page *ptr_page = virt_to_page(ptr);
+	return page_count(ptr_page) == 1;
+}
+
 static void clear_pud_entry(struct kvm *kvm, pud_t *pud, phys_addr_t addr)
 {
 	pmd_t *pmd_table = pmd_offset(pud, 0);
@@ -103,12 +109,6 @@ static void clear_pmd_entry(struct kvm *kvm, pmd_t *pmd, phys_addr_t addr)
 	put_page(virt_to_page(pmd));
 }
 
-static bool pmd_empty(pmd_t *pmd)
-{
-	struct page *pmd_page = virt_to_page(pmd);
-	return page_count(pmd_page) == 1;
-}
-
 static void clear_pte_entry(struct kvm *kvm, pte_t *pte, phys_addr_t addr)
 {
 	if (pte_present(*pte)) {
@@ -118,12 +118,6 @@ static void clear_pte_entry(struct kvm *kvm, pte_t *pte, phys_addr_t addr)
 	}
 }
 
-static bool pte_empty(pte_t *pte)
-{
-	struct page *pte_page = virt_to_page(pte);
-	return page_count(pte_page) == 1;
-}
-
 static void unmap_range(struct kvm *kvm, pgd_t *pgdp,
 			unsigned long long start, u64 size)
 {
@@ -153,10 +147,10 @@ static void unmap_range(struct kvm *kvm, pgd_t *pgdp,
 			next = addr + PAGE_SIZE;
 
 		/* If we emptied the pte, walk back up the ladder */
-		if (pte_empty(pte)) {
+		if (page_empty(pte)) {
 			clear_pmd_entry(kvm, pmd, addr);
 			next = pmd_addr_end(addr, end);
-			if (pmd_empty(pmd)) {
+			if (page_empty(pmd) && !page_empty(pud)) {
 				clear_pud_entry(kvm, pud, addr);
 				next = pud_addr_end(addr, end);
 			}
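
---

[Editor's note, not part of the patch: the toy sketch referenced in the
commit message. It is a stand-alone user-space model of the double-put
bug, not kernel code; struct page, virt_to_page_stub and put_page_stub
are invented simplifications of the kernel primitives, and the single
shared table_page models the fused 64kB-page, 2-level case.]

#include <stdbool.h>
#include <stdio.h>

/* Toy stand-in for the kernel's struct page refcounting. */
struct page { int count; };

/* With 2-level tables the pud and pmd resolve to the same table page,
 * so this stub returns one shared page for both "levels". */
static struct page table_page = { .count = 2 }; /* base ref + 1 mapping */

static struct page *virt_to_page_stub(void *ptr)
{
	(void)ptr;
	return &table_page;
}

static void put_page_stub(struct page *p)
{
	p->count--;
}

/* Same shape as the patch's consolidated helper. */
static bool page_empty(void *ptr)
{
	return virt_to_page_stub(ptr)->count == 1;
}

int main(void)
{
	int pud, pmd;	/* distinct pointers, same underlying page here */

	/* Unmap path: the pte table became empty, so the walk drops the
	 * pmd-level reference, as clear_pmd_entry() does. */
	put_page_stub(virt_to_page_stub(&pmd));

	/* The fixed condition: only clear the pud when its page still
	 * holds references. With fused levels, page_empty(&pud) is now
	 * already true, so the second put (clear_pud_entry) is skipped. */
	if (page_empty(&pmd) && !page_empty(&pud))
		put_page_stub(virt_to_page_stub(&pud));

	/* Prints 1. The old code did the second put unconditionally,
	 * dropping the count to 0 and freeing a table page that the
	 * pgd still pointed to. */
	printf("table page refcount: %d\n", table_page.count);
	return 0;
}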