From patchwork Thu Feb 13 15:20:35 2020
X-Patchwork-Submitter: Greg Kroah-Hartman
X-Patchwork-Id: 231255
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Alexander Shishkin, Song Liu, "Peter Zijlstra (Intel)", Ingo Molnar
Subject: [PATCH 4.4 78/91] perf/core: Fix mlock accounting in perf_mmap()
Date: Thu, 13 Feb 2020 07:20:35 -0800
Message-Id: <20200213151852.717231523@linuxfoundation.org>
In-Reply-To: <20200213151821.384445454@linuxfoundation.org>
References: <20200213151821.384445454@linuxfoundation.org>
User-Agent: quilt/0.66
Sender: stable-owner@vger.kernel.org
X-Mailing-List: stable@vger.kernel.org

From: Song Liu

commit 003461559ef7a9bd0239bae35a22ad8924d6e9ad upstream.

Decreasing sysctl_perf_event_mlock between two consecutive perf_mmap()s
of a perf ring buffer may lead to an integer underflow in locked memory
accounting. This may lead to undesired behaviors, such as failures in
BPF map creation.

Address this by adjusting the accounting logic to take into account the
possibility that the amount of already locked memory may exceed the
current limit.
Fixes: c4b75479741c ("perf/core: Make the mlock accounting simple again")
Suggested-by: Alexander Shishkin
Signed-off-by: Song Liu
Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Ingo Molnar
Cc:
Acked-by: Alexander Shishkin
Link: https://lkml.kernel.org/r/20200123181146.2238074-1-songliubraving@fb.com
Signed-off-by: Greg Kroah-Hartman
---
 kernel/events/core.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -4887,7 +4887,15 @@ accounting:
 	 */
 	user_lock_limit *= num_online_cpus();
 
-	user_locked = atomic_long_read(&user->locked_vm) + user_extra;
+	user_locked = atomic_long_read(&user->locked_vm);
+
+	/*
+	 * sysctl_perf_event_mlock may have changed, so that
+	 * user->locked_vm > user_lock_limit
+	 */
+	if (user_locked > user_lock_limit)
+		user_locked = user_lock_limit;
+	user_locked += user_extra;
 
 	if (user_locked > user_lock_limit)
 		extra = user_locked - user_lock_limit;