From patchwork Thu Mar 13 22:44:27 2014
X-Patchwork-Submitter: John Stultz
X-Patchwork-Id: 26226
From: John Stultz <john.stultz@linaro.org>
To: dave@sr71.net
Cc: John Stultz <john.stultz@linaro.org>
Subject: [PATCH 2/3] vrange: Add purged page detection on setting memory non-volatile
Date: Thu, 13 Mar 2014 15:44:27 -0700
Message-Id: <1394750668-28654-2-git-send-email-john.stultz@linaro.org>
X-Mailer: git-send-email 1.8.3.2
In-Reply-To: <1394750668-28654-1-git-send-email-john.stultz@linaro.org>
References: <1394750668-28654-1-git-send-email-john.stultz@linaro.org>

Users of volatile ranges will need to know if memory was discarded. This
patch adds the purged-state tracking required so that, when userland marks
memory as non-volatile, it can be informed that some memory in that range
was purged and needs to be regenerated.

This is a simplified implementation that reuses some of the logic from
Minchan's earlier efforts, so credit to Minchan for his work.
Signed-off-by: John Stultz <john.stultz@linaro.org>
---
 include/linux/swap.h   | 15 +++++++++++--
 include/linux/vrange.h | 13 ++++++++++++
 mm/vrange.c            | 57 ++++++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 83 insertions(+), 2 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 46ba0c6..18c12f9 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -70,8 +70,19 @@ static inline int current_is_kswapd(void)
 #define SWP_HWPOISON_NUM 0
 #endif
 
-#define MAX_SWAPFILES \
-	((1 << MAX_SWAPFILES_SHIFT) - SWP_MIGRATION_NUM - SWP_HWPOISON_NUM)
+
+/*
+ * Purged volatile range pages
+ */
+#define SWP_VRANGE_PURGED_NUM 1
+#define SWP_VRANGE_PURGED (MAX_SWAPFILES + SWP_HWPOISON_NUM + SWP_MIGRATION_NUM)
+
+
+#define MAX_SWAPFILES ((1 << MAX_SWAPFILES_SHIFT)	\
+				- SWP_MIGRATION_NUM	\
+				- SWP_HWPOISON_NUM	\
+				- SWP_VRANGE_PURGED_NUM	\
+	)
 
 /*
  * Magic header for a swap area. The first part of the union is
diff --git a/include/linux/vrange.h b/include/linux/vrange.h
index 652396b..c4a1616 100644
--- a/include/linux/vrange.h
+++ b/include/linux/vrange.h
@@ -1,7 +1,20 @@
 #ifndef _LINUX_VRANGE_H
 #define _LINUX_VRANGE_H
 
+#include <linux/swap.h>
+#include <linux/swapops.h>
+
 #define VRANGE_NONVOLATILE 0
 #define VRANGE_VOLATILE 1
 
+static inline swp_entry_t swp_entry_mk_vrange_purged(void)
+{
+	return swp_entry(SWP_VRANGE_PURGED, 0);
+}
+
+static inline int entry_is_vrange_purged(swp_entry_t entry)
+{
+	return swp_type(entry) == SWP_VRANGE_PURGED;
+}
+
 #endif /* _LINUX_VRANGE_H */
diff --git a/mm/vrange.c b/mm/vrange.c
index d9116b1..73ef7ac 100644
--- a/mm/vrange.c
+++ b/mm/vrange.c
@@ -8,6 +8,60 @@
 #include
 #include "internal.h"
 
+struct vrange_walker {
+	struct vm_area_struct *vma;
+	int pages_purged;
+};
+
+static int vrange_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
+				struct mm_walk *walk)
+{
+	struct vrange_walker *vw = walk->private;
+	pte_t *pte;
+	spinlock_t *ptl;
+
+	if (pmd_trans_huge(*pmd))
+		return 0;
+	if (pmd_trans_unstable(pmd))
+		return 0;
+
+	pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
+	for (; addr != end; pte++, addr += PAGE_SIZE) {
+		if (!pte_present(*pte)) {
+			swp_entry_t vrange_entry = pte_to_swp_entry(*pte);
+
+			if (unlikely(entry_is_vrange_purged(vrange_entry))) {
+				vw->pages_purged = 1;
+				break;
+			}
+		}
+	}
+	pte_unmap_unlock(pte - 1, ptl);
+	cond_resched();
+
+	return 0;
+}
+
+static unsigned long vrange_check_purged(struct mm_struct *mm,
+					 struct vm_area_struct *vma,
+					 unsigned long start,
+					 unsigned long end)
+{
+	struct vrange_walker vw;
+	struct mm_walk vrange_walk = {
+		.pmd_entry = vrange_pte_range,
+		.mm = vma->vm_mm,
+		.private = &vw,
+	};
+	vw.pages_purged = 0;
+	vw.vma = vma;
+
+	walk_page_range(start, end, &vrange_walk);
+
+	return vw.pages_purged;
+
+}
+
 static ssize_t do_vrange(struct mm_struct *mm, unsigned long start,
 			unsigned long end, int mode, int *purged)
 {
@@ -57,6 +111,9 @@ static ssize_t do_vrange(struct mm_struct *mm, unsigned long start,
 		break;
 	case VRANGE_NONVOLATILE:
 		new_flags &= ~VM_VOLATILE;
+		lpurged |= vrange_check_purged(mm, vma,
+						vma->vm_start,
+						vma->vm_end);
 	}
 
 	pgoff = vma->vm_pgoff + ((start - vma->vm_start) >> PAGE_SHIFT);