From patchwork Fri Apr 11 20:15:39 2014
X-Patchwork-Submitter: John Stultz
X-Patchwork-Id: 28295
From: John Stultz <john.stultz@linaro.org>
To: LKML
Cc: John Stultz, Andrew Morton, Android Kernel Team, Johannes Weiner,
	Robert Love, Mel Gorman, Hugh Dickins, Dave Hansen, Rik van Riel,
	Dmitry Adamushko, Neil Brown, Andrea Arcangeli, Mike Hommey,
	Taras Glek, Jan Kara, KOSAKI Motohiro, Michel Lespinasse,
	Minchan Kim, Keith Packard, "linux-mm@kvack.org"
Subject: [PATCH 3/4] mvolatile: Add purged page detection on setting memory non-volatile
Date: Fri, 11 Apr 2014 13:15:39 -0700
Message-Id: <1397247340-3365-4-git-send-email-john.stultz@linaro.org>
In-Reply-To: <1397247340-3365-1-git-send-email-john.stultz@linaro.org>
References: <1397247340-3365-1-git-send-email-john.stultz@linaro.org>

Users of volatile ranges will need to know if memory was discarded. This
patch adds the purged state tracking required to inform userland, when it
marks memory as non-volatile, that some memory in that range was purged
and needs to be regenerated.

This simplified implementation uses some of the logic from Minchan's
earlier efforts, so credit to Minchan for his work.
Cc: Andrew Morton
Cc: Android Kernel Team
Cc: Johannes Weiner
Cc: Robert Love
Cc: Mel Gorman
Cc: Hugh Dickins
Cc: Dave Hansen
Cc: Rik van Riel
Cc: Dmitry Adamushko
Cc: Neil Brown
Cc: Andrea Arcangeli
Cc: Mike Hommey
Cc: Taras Glek
Cc: Jan Kara
Cc: KOSAKI Motohiro
Cc: Michel Lespinasse
Cc: Minchan Kim
Cc: Keith Packard
Cc: linux-mm@kvack.org
Acked-by: Jan Kara
Signed-off-by: John Stultz
---
 include/linux/swap.h    |  5 +++
 include/linux/swapops.h | 10 ++++++
 mm/mvolatile.c          | 86 +++++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 101 insertions(+)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index a90ea95..c372ca7 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -55,6 +55,7 @@ enum {

diff --git a/mm/mvolatile.c b/mm/mvolatile.c
--- a/mm/mvolatile.c
+++ b/mm/mvolatile.c
 #include
 #include
 #include
+#include
+#include
 #include "internal.h"
 
+struct mvolatile_walker {
+	struct vm_area_struct *vma;
+	int page_was_purged;
+};
+
+
+/**
+ * mvolatile_check_purged_pte - Checks ptes for purged pages
+ * @pmd: pmd to walk
+ * @addr: starting address
+ * @end: end address
+ * @walk: mm_walk ptr (contains ptr to mvolatile_walker)
+ *
+ * Iterates over the ptes in the pmd checking if they have
+ * purged swap entries.
+ *
+ * Sets the mvolatile_walker.page_was_purged to 1 if any were purged.
+ */
+static int mvolatile_check_purged_pte(pmd_t *pmd, unsigned long addr,
+				unsigned long end, struct mm_walk *walk)
+{
+	struct mvolatile_walker *vw = walk->private;
+	pte_t *pte;
+	spinlock_t *ptl;
+	int ret = 0;
+
+	if (pmd_trans_huge(*pmd))
+		return 0;
+	if (pmd_trans_unstable(pmd))
+		return 0;
+
+	pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
+	for (; addr != end; pte++, addr += PAGE_SIZE) {
+		if (!pte_present(*pte)) {
+			swp_entry_t mvolatile_entry = pte_to_swp_entry(*pte);
+
+			if (unlikely(is_purged_entry(mvolatile_entry))) {
+
+				vw->page_was_purged = 1;
+
+				/* clear the pte swp entry */
+				flush_cache_page(vw->vma, addr, pte_pfn(*pte));
+				ptep_clear_flush(vw->vma, addr, pte);
+			}
+		}
+	}
+	pte_unmap_unlock(pte - 1, ptl);
+	cond_resched();
+
+	return ret;
+}
+
+
+/**
+ * mvolatile_check_purged - Sets up a mm_walk to check for purged pages
+ * @vma: ptr to vma we're starting with
+ * @start: start address to walk
+ * @end: end address of walk
+ *
+ * Sets up and calls walk_page_range() to check for purged pages.
+ *
+ * Returns 1 if pages in the range were purged, 0 otherwise.
+ */
+static int mvolatile_check_purged(struct vm_area_struct *vma,
+				unsigned long start,
+				unsigned long end)
+{
+	struct mvolatile_walker vw;
+	struct mm_walk mvolatile_walk = {
+		.pmd_entry = mvolatile_check_purged_pte,
+		.mm = vma->vm_mm,
+		.private = &vw,
+	};
+	vw.page_was_purged = 0;
+	vw.vma = vma;
+
+	walk_page_range(start, end, &mvolatile_walk);
+
+	return vw.page_was_purged;
+
+}
 
 /**
  * do_mvolatile - Marks or clears VMAs in the range (start-end) as VM_VOLATILE
@@ -119,6 +202,9 @@ success:
 		vma = prev->vm_next;
 	}
 out:
+	if (count && (mode == MVOLATILE_NONVOLATILE))
+		*purged = mvolatile_check_purged(vma, orig_start,
+						orig_start+count);
 	up_write(&mm->mmap_sem);
 
 	/* report bytes successfully marked, even if we're exiting on error */