
[tip: perf/urgent] perf/core: Fix mlock accounting in perf_mmap()

Message ID 158029757616.396.9494045381575919767.tip-bot2@tip-bot2
State New
Series [tip: perf/urgent] perf/core: Fix mlock accounting in perf_mmap()

Commit Message

tip-bot2 for Song Liu Jan. 29, 2020, 11:32 a.m. UTC
The following commit has been merged into the perf/urgent branch of tip:

Commit-ID:     003461559ef7a9bd0239bae35a22ad8924d6e9ad
Gitweb:        https://git.kernel.org/tip/003461559ef7a9bd0239bae35a22ad8924d6e9ad
Author:        Song Liu <songliubraving@fb.com>
AuthorDate:    Thu, 23 Jan 2020 10:11:46 -08:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Tue, 28 Jan 2020 21:20:18 +01:00

perf/core: Fix mlock accounting in perf_mmap()

Decreasing sysctl_perf_event_mlock between two consecutive perf_mmap()s of
a perf ring buffer may lead to an integer underflow in locked memory
accounting. This may lead to undesired behavior, such as failures in
BPF map creation.

Address this by adjusting the accounting logic to take into account the
possibility that the amount of already locked memory may exceed the
current limit.

Fixes: c4b75479741c ("perf/core: Make the mlock accounting simple again")
Suggested-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Signed-off-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: <stable@vger.kernel.org>
Acked-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Link: https://lkml.kernel.org/r/20200123181146.2238074-1-songliubraving@fb.com
---
 kernel/events/core.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

Patch

diff --git a/kernel/events/core.c b/kernel/events/core.c
index 2173c23..2d9aeba 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -5916,7 +5916,15 @@  accounting:
 	 */
 	user_lock_limit *= num_online_cpus();
 
-	user_locked = atomic_long_read(&user->locked_vm) + user_extra;
+	user_locked = atomic_long_read(&user->locked_vm);
+
+	/*
+	 * sysctl_perf_event_mlock may have changed, so that
+	 *     user->locked_vm > user_lock_limit
+	 */
+	if (user_locked > user_lock_limit)
+		user_locked = user_lock_limit;
+	user_locked += user_extra;
 
 	if (user_locked > user_lock_limit) {
 		/*