From patchwork Fri Mar 14 18:33:32 2014
X-Patchwork-Submitter: John Stultz
X-Patchwork-Id: 26284
From: John Stultz <john.stultz@linaro.org>
To: LKML
Cc: John Stultz, Andrew Morton, Android Kernel Team, Johannes Weiner,
	Robert Love, Mel Gorman, Hugh Dickins, Dave Hansen, Rik van Riel,
	Dmitry Adamushko, Neil Brown, Andrea Arcangeli, Mike Hommey,
	Taras Glek, Dhaval Giani, Jan Kara, KOSAKI Motohiro,
	Michel Lespinasse, Minchan Kim, "linux-mm@kvack.org"
Subject: [PATCH 2/3] vrange: Add purged page detection on setting memory non-volatile
Date: Fri, 14 Mar 2014 11:33:32 -0700
Message-Id: <1394822013-23804-3-git-send-email-john.stultz@linaro.org>
X-Mailer: git-send-email 1.8.3.2
In-Reply-To: <1394822013-23804-1-git-send-email-john.stultz@linaro.org>
References: <1394822013-23804-1-git-send-email-john.stultz@linaro.org>

Users of volatile ranges will need to know if memory was discarded.
This patch adds the purged-state tracking required to inform userland,
when it marks memory as non-volatile, that some memory in that range
was purged and needs to be regenerated.

This is a simplified implementation that reuses some of the logic from
Minchan's earlier efforts, so credit to Minchan for his work.

Cc: Andrew Morton
Cc: Android Kernel Team
Cc: Johannes Weiner
Cc: Robert Love
Cc: Mel Gorman
Cc: Hugh Dickins
Cc: Dave Hansen
Cc: Rik van Riel
Cc: Dmitry Adamushko
Cc: Neil Brown
Cc: Andrea Arcangeli
Cc: Mike Hommey
Cc: Taras Glek
Cc: Dhaval Giani
Cc: Jan Kara
Cc: KOSAKI Motohiro
Cc: Michel Lespinasse
Cc: Minchan Kim
Cc: linux-mm@kvack.org
Signed-off-by: John Stultz <john.stultz@linaro.org>
---
 include/linux/swap.h   | 15 +++++++++++--
 include/linux/vrange.h | 13 ++++++++++++
 mm/vrange.c            | 57 ++++++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 83 insertions(+), 2 deletions(-)
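For illustration only (not part of the commit): the userland flow this
patch enables would look roughly like the sketch below. It assumes the
vrange() syscall interface added in patch 1/3 of this series, and
__NR_vrange is a placeholder, since the actual syscall number depends
on how the series is wired up in a given tree.

#include <stdio.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/syscall.h>

#ifndef __NR_vrange
#define __NR_vrange 314		/* placeholder syscall number */
#endif

#define VRANGE_NONVOLATILE 0
#define VRANGE_VOLATILE 1

/* Thin wrapper; signature assumed from patch 1/3 of this series. */
static long vrange(void *start, size_t len, int mode, int *purged)
{
	return syscall(__NR_vrange, (unsigned long)start, len, mode, purged);
}

int main(void)
{
	size_t len = 16 * 4096;
	int purged = 0;
	char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (buf == MAP_FAILED)
		return 1;
	buf[0] = 1;	/* fill the range with regenerable cache data */

	/* Mark the range volatile: the kernel may purge it under pressure. */
	vrange(buf, len, VRANGE_VOLATILE, &purged);

	/*
	 * Later, before touching the data again, make it non-volatile.
	 * This patch is what makes the purged flag report whether any
	 * page in the range was discarded in the meantime.
	 */
	purged = 0;
	vrange(buf, len, VRANGE_NONVOLATILE, &purged);
	if (purged)
		printf("range was purged; regenerate contents\n");
	else
		printf("range intact; safe to reuse\n");

	munmap(buf, len);
	return 0;
}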
diff --git a/include/linux/swap.h b/include/linux/swap.h
index 46ba0c6..18c12f9 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -70,8 +70,19 @@ static inline int current_is_kswapd(void)
 #define SWP_HWPOISON_NUM 0
 #endif
 
-#define MAX_SWAPFILES \
-	((1 << MAX_SWAPFILES_SHIFT) - SWP_MIGRATION_NUM - SWP_HWPOISON_NUM)
+
+/*
+ * Purged volatile range pages
+ */
+#define SWP_VRANGE_PURGED_NUM 1
+#define SWP_VRANGE_PURGED (MAX_SWAPFILES + SWP_HWPOISON_NUM + SWP_MIGRATION_NUM)
+
+
+#define MAX_SWAPFILES ((1 << MAX_SWAPFILES_SHIFT)	\
+				- SWP_MIGRATION_NUM	\
+				- SWP_HWPOISON_NUM	\
+				- SWP_VRANGE_PURGED_NUM	\
+			)
 
 /*
  * Magic header for a swap area. The first part of the union is
diff --git a/include/linux/vrange.h b/include/linux/vrange.h
index 652396b..c4a1616 100644
--- a/include/linux/vrange.h
+++ b/include/linux/vrange.h
@@ -1,7 +1,20 @@
 #ifndef _LINUX_VRANGE_H
 #define _LINUX_VRANGE_H
 
+#include <linux/swap.h>
+#include <linux/swapops.h>
+
 #define VRANGE_NONVOLATILE 0
 #define VRANGE_VOLATILE 1
 
+static inline swp_entry_t swp_entry_mk_vrange_purged(void)
+{
+	return swp_entry(SWP_VRANGE_PURGED, 0);
+}
+
+static inline int entry_is_vrange_purged(swp_entry_t entry)
+{
+	return swp_type(entry) == SWP_VRANGE_PURGED;
+}
+
 #endif /* _LINUX_VRANGE_H */
diff --git a/mm/vrange.c b/mm/vrange.c
index acb4356..844571b 100644
--- a/mm/vrange.c
+++ b/mm/vrange.c
@@ -8,6 +8,60 @@
 #include
 #include "internal.h"
 
+struct vrange_walker {
+	struct vm_area_struct *vma;
+	int pages_purged;
+};
+
+static int vrange_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
+				struct mm_walk *walk)
+{
+	struct vrange_walker *vw = walk->private;
+	pte_t *pte;
+	spinlock_t *ptl;
+
+	if (pmd_trans_huge(*pmd))
+		return 0;
+	if (pmd_trans_unstable(pmd))
+		return 0;
+
+	pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
+	for (; addr != end; pte++, addr += PAGE_SIZE) {
+		if (!pte_present(*pte)) {
+			swp_entry_t vrange_entry = pte_to_swp_entry(*pte);
+
+			if (unlikely(entry_is_vrange_purged(vrange_entry))) {
+				vw->pages_purged = 1;
+				break;
+			}
+		}
+	}
+	pte_unmap_unlock(pte - 1, ptl);
+	cond_resched();
+
+	return 0;
+}
+
+static unsigned long vrange_check_purged(struct mm_struct *mm,
+					 struct vm_area_struct *vma,
+					 unsigned long start,
+					 unsigned long end)
+{
+	struct vrange_walker vw;
+	struct mm_walk vrange_walk = {
+		.pmd_entry = vrange_pte_range,
+		.mm = vma->vm_mm,
+		.private = &vw,
+	};
+	vw.pages_purged = 0;
+	vw.vma = vma;
+
+	walk_page_range(start, end, &vrange_walk);
+
+	return vw.pages_purged;
+
+}
+
 static ssize_t do_vrange(struct mm_struct *mm, unsigned long start,
 		unsigned long end, int mode, int *purged)
 {
@@ -57,6 +111,9 @@ static ssize_t do_vrange(struct mm_struct *mm, unsigned long start,
 		break;
 	case VRANGE_NONVOLATILE:
 		new_flags &= ~VM_VOLATILE;
+		lpurged |= vrange_check_purged(mm, vma,
+						vma->vm_start,
+						vma->vm_end);
 	}
 
 	pgoff = vma->vm_pgoff + ((start - vma->vm_start) >> PAGE_SHIFT);
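For reference (not part of the patch): assuming MAX_SWAPFILES_SHIFT = 5
with both CONFIG_MIGRATION (SWP_MIGRATION_NUM = 2) and
CONFIG_MEMORY_FAILURE (SWP_HWPOISON_NUM = 1) enabled, the swap-type
space after this patch works out to:

	MAX_SWAPFILES     = (1 << 5) - 2 - 1 - 1 = 28	(types 0-27: real swap files)
	SWP_HWPOISON      = 28
	SWP_MIGRATION_*   = 29 and 30
	SWP_VRANGE_PURGED = 28 + 1 + 2 = 31

That is, the purged marker reserves one more of the 32 encodable swap
types, so a non-present pte carrying type 31 (with offset 0, per
swp_entry_mk_vrange_purged()) unambiguously identifies a purged
volatile page.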