From patchwork Mon Jul 28 07:50:09 2014
X-Patchwork-Submitter: Jiri Slaby
X-Patchwork-Id: 34345
From: Jiri Slaby
To: stable@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, Catalin Marinas, Andrew Morton,
    Linus Torvalds, Jiri Slaby
Subject: [patch added to the 3.12 stable tree] mm: kmemleak: avoid false negatives on vmalloc'ed objects
Date: Mon, 28 Jul 2014 09:50:09 +0200
Message-Id: <1406533809-8369-1-git-send-email-jslaby@suse.cz>
From: Catalin Marinas

This patch has been added to the 3.12 stable tree. If you have any
objections, please let us know.

===============

commit 7f88f88f83ed609650a01b18572e605ea50cd163 upstream.

Commit 248ac0e1943a ("mm/vmalloc: remove guard page from between vmap
blocks") had the side effect of making vmap_area.va_end member point to
the next vmap_area.va_start.  This was creating an artificial reference
to vmalloc'ed objects and kmemleak was rarely reporting vmalloc() leaks.

This patch marks the vmap_area containing pointers explicitly and
reduces the min ref_count to 2 as vm_struct still contains a reference
to the vmalloc'ed object.  The kmemleak add_scan_area() function has
been improved to allow a SIZE_MAX argument covering the rest of the
object (for simpler calling sites).
Signed-off-by: Catalin Marinas
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
Signed-off-by: Jiri Slaby
---
 mm/kmemleak.c |  4 +++-
 mm/vmalloc.c  | 14 ++++++++++----
 2 files changed, 13 insertions(+), 5 deletions(-)

diff --git a/mm/kmemleak.c b/mm/kmemleak.c
index e126b0ef9ad2..31f01c5011e5 100644
--- a/mm/kmemleak.c
+++ b/mm/kmemleak.c
@@ -753,7 +753,9 @@ static void add_scan_area(unsigned long ptr, size_t size, gfp_t gfp)
 	}
 
 	spin_lock_irqsave(&object->lock, flags);
-	if (ptr + size > object->pointer + object->size) {
+	if (size == SIZE_MAX) {
+		size = object->pointer + object->size - ptr;
+	} else if (ptr + size > object->pointer + object->size) {
 		kmemleak_warn("Scan area larger than object 0x%08lx\n", ptr);
 		dump_object_info(object);
 		kmem_cache_free(scan_area_cache, area);
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 107454312d5e..e2be0f802ccf 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -359,6 +359,12 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
 	if (unlikely(!va))
 		return ERR_PTR(-ENOMEM);
 
+	/*
+	 * Only scan the relevant parts containing pointers to other objects
+	 * to avoid false negatives.
+	 */
+	kmemleak_scan_area(&va->rb_node, SIZE_MAX, gfp_mask & GFP_RECLAIM_MASK);
+
 retry:
 	spin_lock(&vmap_area_lock);
 	/*
@@ -1646,11 +1652,11 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align,
 	clear_vm_uninitialized_flag(area);
 
 	/*
-	 * A ref_count = 3 is needed because the vm_struct and vmap_area
-	 * structures allocated in the __get_vm_area_node() function contain
-	 * references to the virtual address of the vmalloc'ed block.
+	 * A ref_count = 2 is needed because vm_struct allocated in
+	 * __get_vm_area_node() contains a reference to the virtual address of
+	 * the vmalloc'ed block.
 	 */
-	kmemleak_alloc(addr, real_size, 3, gfp_mask);
+	kmemleak_alloc(addr, real_size, 2, gfp_mask);
 
 	return addr;