From patchwork Tue Oct  1 18:38:50 2013
X-Patchwork-Submitter: John Stultz
X-Patchwork-Id: 20731
From: John Stultz <john.stultz@linaro.org>
To: Minchan Kim, Dhaval Giani
Subject: [PATCH 06/14] vrange: Add basic functions to purge volatile pages
Date: Tue,  1 Oct 2013 11:38:50 -0700
Message-Id: <1380652738-8000-7-git-send-email-john.stultz@linaro.org>
X-Mailer: git-send-email 1.8.1.2
In-Reply-To: <1380652738-8000-1-git-send-email-john.stultz@linaro.org>
References: <1380652738-8000-1-git-send-email-john.stultz@linaro.org>

From: Minchan Kim

This patch adds discard_vpage() and related functions to purge anonymous
and file volatile pages, in preparation for purging volatile pages when
memory is tight. The logic that triggers the purging of volatile pages
will be introduced in the next patch.

Cc: Andrew Morton
Cc: Android Kernel Team
Cc: Robert Love
Cc: Mel Gorman
Cc: Hugh Dickins
Cc: Dave Hansen
Cc: Rik van Riel
Cc: Dmitry Adamushko
Cc: Dave Chinner
Cc: Neil Brown
Cc: Andrea Righi
Cc: Andrea Arcangeli
Cc: Aneesh Kumar K.V
Cc: Mike Hommey
Cc: Taras Glek
Cc: Dhaval Giani
Cc: Jan Kara
Cc: KOSAKI Motohiro
Cc: Michel Lespinasse
Cc: Rob Clark
Cc: Minchan Kim
Cc: linux-mm@kvack.org
Signed-off-by: Minchan Kim
[jstultz: Reworked to add purging of file pages, commit log tweaks]
Signed-off-by: John Stultz
---
 include/linux/vrange.h |   9 +++
 mm/internal.h          |   2 -
 mm/vrange.c            | 185 +++++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 194 insertions(+), 2 deletions(-)

diff --git a/include/linux/vrange.h b/include/linux/vrange.h
index ef153c8..778902d 100644
--- a/include/linux/vrange.h
+++ b/include/linux/vrange.h
@@ -41,6 +41,9 @@ extern int vrange_clear(struct vrange_root *vroot,
 extern void vrange_root_cleanup(struct vrange_root *vroot);
 extern int vrange_fork(struct mm_struct *new, struct mm_struct *old);
 
+int discard_vpage(struct page *page);
+bool vrange_addr_volatile(struct vm_area_struct *vma, unsigned long addr);
+
 #else
 
 static inline void vrange_root_init(struct vrange_root *vroot,
@@ -51,5 +54,11 @@ static inline int vrange_fork(struct mm_struct *new, struct mm_struct *old)
 	return 0;
 }
 
+static inline bool vrange_addr_volatile(struct vm_area_struct *vma,
+					unsigned long addr)
+{
+	return false;
+}
+static inline int discard_vpage(struct page *page) { return 0; }
 #endif
 #endif /* _LINIUX_VRANGE_H */
diff --git a/mm/internal.h b/mm/internal.h
index 4390ac6..c2c6a93 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -223,10 +223,8 @@ static inline void mlock_migrate_page(struct page *newpage, struct page *page)
 
 extern pmd_t maybe_pmd_mkwrite(pmd_t pmd, struct vm_area_struct *vma);
 
-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
 extern unsigned long vma_address(struct page *page,
 				 struct vm_area_struct *vma);
-#endif
 #else /* !CONFIG_MMU */
 static inline int mlocked_vma_newpage(struct vm_area_struct *v, struct page *p)
 {
diff --git a/mm/vrange.c b/mm/vrange.c
index 115ddb4..6ba950d 100644
--- a/mm/vrange.c
+++ b/mm/vrange.c
@@ -6,6 +6,12 @@
 #include <linux/vrange.h>
 #include <linux/slab.h>
 #include <linux/syscalls.h>
+#include <linux/rmap.h>
+#include <linux/pagemap.h>
+#include <linux/swap.h>
+#include "internal.h"
+#include <linux/mmu_notifier.h>
+#include <linux/mm_inline.h>
 
 static struct kmem_cache *vrange_cachep;
 
@@ -63,6 +69,19 @@ static inline void __vrange_resize(struct vrange *range,
 	__vrange_add(range, vroot);
 }
 
+static struct vrange *__vrange_find(struct vrange_root *vroot,
+				unsigned long start_idx,
+				unsigned long end_idx)
+{
+	struct vrange *range = NULL;
+	struct interval_tree_node *node;
+
+	node = interval_tree_iter_first(&vroot->v_rb, start_idx, end_idx);
+	if (node)
+		range = vrange_from_node(node);
+	return range;
+}
+
 static int vrange_add(struct vrange_root *vroot,
 			unsigned long start_idx, unsigned long end_idx)
 {
@@ -393,3 +412,169 @@ SYSCALL_DEFINE4(vrange, unsigned long, start,
 out:
 	return ret;
 }
+
+bool vrange_addr_volatile(struct vm_area_struct *vma, unsigned long addr)
+{
+	struct vrange_root *vroot;
+	unsigned long vstart_idx, vend_idx;
+	bool ret = false;
+
+	vroot = __vma_to_vroot(vma);
+	vstart_idx = __vma_addr_to_index(vma, addr);
+	vend_idx = vstart_idx + PAGE_SIZE - 1;
+
+	vrange_lock(vroot);
+	if (__vrange_find(vroot, vstart_idx, vend_idx))
+		ret = true;
+	vrange_unlock(vroot);
+	return ret;
+}
+
+/* Caller should hold vrange_lock */
+static void do_purge(struct vrange_root *vroot,
+		unsigned long start_idx, unsigned long end_idx)
+{
+	struct vrange *range;
+	struct interval_tree_node *node;
+
+	node = interval_tree_iter_first(&vroot->v_rb, start_idx, end_idx);
+	while (node) {
+		range = container_of(node, struct vrange, node);
+		range->purged = true;
+		node = interval_tree_iter_next(node, start_idx, end_idx);
+	}
+}
+
+static void try_to_discard_one(struct vrange_root *vroot, struct page *page,
+		struct vm_area_struct *vma, unsigned long addr)
+{
+	struct mm_struct *mm = vma->vm_mm;
+	pte_t *pte;
+	pte_t pteval;
+	spinlock_t *ptl;
+
+	VM_BUG_ON(!PageLocked(page));
+
+	pte = page_check_address(page, mm, addr, &ptl, 0);
+	if (!pte)
+		return;
+
+	BUG_ON(vma->vm_flags & (VM_SPECIAL|VM_LOCKED|VM_MIXEDMAP|VM_HUGETLB));
+
+	flush_cache_page(vma, addr, page_to_pfn(page));
+	pteval = ptep_clear_flush(vma, addr, pte);
+
+	update_hiwater_rss(mm);
+	if (PageAnon(page))
+		dec_mm_counter(mm, MM_ANONPAGES);
+	else
+		dec_mm_counter(mm, MM_FILEPAGES);
+
+	page_remove_rmap(page);
+	page_cache_release(page);
+
+	pte_unmap_unlock(pte, ptl);
+	mmu_notifier_invalidate_page(mm, addr);
+
+	addr = __vma_addr_to_index(vma, addr);
+
+	do_purge(vroot, addr, addr + PAGE_SIZE - 1);
+}
+
+static int try_to_discard_anon_vpage(struct page *page)
+{
+	struct anon_vma *anon_vma;
+	struct anon_vma_chain *avc;
+	pgoff_t pgoff;
+	struct vm_area_struct *vma;
+	struct mm_struct *mm;
+	struct vrange_root *vroot;
+
+	unsigned long address;
+
+	anon_vma = page_lock_anon_vma_read(page);
+	if (!anon_vma)
+		return -1;
+
+	pgoff = page->index << (PAGE_CACHE_SHIFT - PAGE_SHIFT);
+	/*
+	 * While iterating this loop, some processes could see a page as
+	 * purged while others could see it as not-purged, because there is
+	 * no global lock between parent and child protecting the vrange
+	 * system call during this loop. But that is not a problem, because
+	 * the page is not a *SHARED* page but a *COW* page, so parent and
+	 * child can see different data at any time anyway. The worst case
+	 * of this race is that a page is marked purged but could not be
+	 * discarded, causing an unnecessary page fault, which would not be
+	 * severe.
+	 */
+	anon_vma_interval_tree_foreach(avc, &anon_vma->rb_root, pgoff, pgoff) {
+		vma = avc->vma;
+		mm = vma->vm_mm;
+		vroot = &mm->vroot;
+		address = vma_address(page, vma);
+
+		vrange_lock(vroot);
+		if (!__vrange_find(vroot, address, address + PAGE_SIZE - 1)) {
+			vrange_unlock(vroot);
+			continue;
+		}
+
+		try_to_discard_one(vroot, page, vma, address);
+		vrange_unlock(vroot);
+	}
+
+	page_unlock_anon_vma_read(anon_vma);
+	return 0;
+}
+
+static int try_to_discard_file_vpage(struct page *page)
+{
+	struct address_space *mapping = page->mapping;
+	pgoff_t pgoff = page->index << (PAGE_CACHE_SHIFT - PAGE_SHIFT);
+	struct vm_area_struct *vma;
+
+	mutex_lock(&mapping->i_mmap_mutex);
+	vma_interval_tree_foreach(vma, &mapping->i_mmap, pgoff, pgoff) {
+		unsigned long address = vma_address(page, vma);
+		struct vrange_root *vroot = &mapping->vroot;
+		long vstart_idx;
+
+		vstart_idx = __vma_addr_to_index(vma, address);
+
+		vrange_lock(vroot);
+		if (!__vrange_find(vroot, vstart_idx,
+					vstart_idx + PAGE_SIZE - 1)) {
+			vrange_unlock(vroot);
+			continue;
+		}
+		try_to_discard_one(vroot, page, vma, address);
+		vrange_unlock(vroot);
+	}
+
+	mutex_unlock(&mapping->i_mmap_mutex);
+	return 0;
+}
+
+static int try_to_discard_vpage(struct page *page)
+{
+	if (PageAnon(page))
+		return try_to_discard_anon_vpage(page);
+	return try_to_discard_file_vpage(page);
+}
+
+int discard_vpage(struct page *page)
+{
+	VM_BUG_ON(!PageLocked(page));
+	VM_BUG_ON(PageLRU(page));
+
+	if (!try_to_discard_vpage(page)) {
+		if (PageSwapCache(page))
+			try_to_free_swap(page);
+
+		if (page_freeze_refs(page, 1)) {
+			unlock_page(page);
+			return 0;
+		}
+	}
+
+	return 1;
+}
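
For context, here is a minimal userspace sketch (not part of the patch) of how
pages become candidates for discard_vpage(): an application marks a range
volatile via the vrange() syscall added earlier in this series, and under
memory pressure the kernel can then purge those pages through the path
introduced here. The syscall number, the VRANGE_VOLATILE/VRANGE_NONVOLATILE
mode constants, and the vrange(start, len, mode, &purged) calling convention
below are assumptions for illustration only; the real definitions come from
the earlier patches in the series.

/*
 * Illustrative sketch only: the syscall number and mode constants are
 * assumed values for the example, not taken from this patch.
 */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

#define __NR_vrange		314	/* assumed value for illustration */
#define VRANGE_VOLATILE		0	/* assumed value for illustration */
#define VRANGE_NONVOLATILE	1	/* assumed value for illustration */

int main(void)
{
	size_t len = 16 * 4096;
	int purged = 0;
	char *buf;

	buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED)
		return 1;
	memset(buf, 0xaa, len);			/* fault the pages in */

	/* Mark the range volatile: its pages may now be purged. */
	syscall(__NR_vrange, (unsigned long)buf, len,
		VRANGE_VOLATILE, &purged);

	/* ... under memory pressure, discard_vpage() may zap them ... */

	/* Make the range non-volatile again before reusing it. */
	syscall(__NR_vrange, (unsigned long)buf, len,
		VRANGE_NONVOLATILE, &purged);
	if (purged)
		printf("range was purged; contents must be regenerated\n");

	return 0;
}

Note that this patch only adds the purge primitives; with this patch alone the
example would never observe purged != 0, since the logic that actually
triggers purging arrives in the next patch.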