From patchwork Mon Oct 16 14:38:20 2023
X-Patchwork-Submitter: Jeff Xu
X-Patchwork-Id: 734820
From: jeffxu@chromium.org
To: akpm@linux-foundation.org, keescook@chromium.org, sroettger@google.com
Cc: jeffxu@google.com, jorgelo@chromium.org, groeck@chromium.org, linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org, linux-mm@kvack.org, jannh@google.com, surenb@google.com, alex.sierra@amd.com, apopple@nvidia.com, aneesh.kumar@linux.ibm.com, axelrasmussen@google.com, ben@decadent.org.uk, catalin.marinas@arm.com, david@redhat.com, dwmw@amazon.co.uk, ying.huang@intel.com, hughd@google.com, joey.gouly@arm.com, corbet@lwn.net, wangkefeng.wang@huawei.com, Liam.Howlett@oracle.com, torvalds@linux-foundation.org, lstoakes@gmail.com, willy@infradead.org, mawupeng1@huawei.com, linmiaohe@huawei.com, namit@vmware.com, peterx@redhat.com, peterz@infradead.org, ryan.roberts@arm.com, shr@devkernel.io, vbabka@suse.cz, xiujianfeng@huawei.com, yu.ma@intel.com, zhangpeng362@huawei.com, dave.hansen@intel.com, luto@kernel.org, linux-hardening@vger.kernel.org
Subject: [RFC PATCH v1 1/8] Add mseal syscall
Date: Mon, 16 Oct 2023 14:38:20 +0000
Message-ID: <20231016143828.647848-2-jeffxu@chromium.org>
In-Reply-To: <20231016143828.647848-1-jeffxu@chromium.org>
References: <20231016143828.647848-1-jeffxu@chromium.org>

From: Jeff Xu <jeffxu@chromium.org>

mseal() prevents system calls from modifying the metadata of virtual
address ranges. Five seal types can be applied, as specified by bitmask:

MM_SEAL_MPROTECT: Deny mprotect(2)/pkey_mprotect(2).
MM_SEAL_MUNMAP:   Deny munmap(2).
MM_SEAL_MMAP:     Deny mmap(2).
MM_SEAL_MREMAP:   Deny mremap(2).
MM_SEAL_MSEAL:    Deny adding a new seal type.

Signed-off-by: Jeff Xu <jeffxu@chromium.org>
---
 include/linux/mm.h        |  14 ++
 include/linux/mm_types.h  |   7 +
 include/linux/syscalls.h  |   2 +
 include/uapi/linux/mman.h |   6 +
 kernel/sys_ni.c           |   1 +
 mm/Kconfig                |   8 ++
 mm/Makefile               |   1 +
 mm/mmap.c                 |  14 ++
 mm/mseal.c                | 268 ++++++++++++++++++++++++++++++++++++++
 9 files changed, 321 insertions(+)
 create mode 100644 mm/mseal.c

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 53efddc4d178..e790b91a0cd4 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -257,6 +257,20 @@ extern struct rw_semaphore nommu_region_sem;
 extern unsigned int kobjsize(const void *objp);
 #endif
 
+/*
+ * vm_seals in vm_area_struct, see mm_types.h.
+ */
+#define VM_SEAL_NONE		0x00000000
+#define VM_SEAL_MSEAL		0x00000001
+#define VM_SEAL_MPROTECT	0x00000002
+#define VM_SEAL_MUNMAP		0x00000004
+#define VM_SEAL_MREMAP		0x00000008
+#define VM_SEAL_MMAP		0x00000010
+
+#define VM_SEAL_ALL \
+	(VM_SEAL_MSEAL | VM_SEAL_MPROTECT | VM_SEAL_MUNMAP | VM_SEAL_MMAP | \
+	 VM_SEAL_MREMAP)
+
 /*
  * vm_flags in vm_area_struct, see mm_types.h.
  * When changing, update also include/trace/events/mmflags.h
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 36c5b43999e6..17d80f5a73dc 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -660,6 +660,13 @@ struct vm_area_struct {
 	struct vma_numab_state *numab_state;	/* NUMA Balancing state */
 #endif
 	struct vm_userfaultfd_ctx vm_userfaultfd_ctx;
+#ifdef CONFIG_MSEAL
+	/*
+	 * bit masks for seal.
+	 * need this since vm_flags is full.
+	 */
+	unsigned long vm_seals;		/* seal flags, see mm.h. */
+#endif
 } __randomize_layout;
 
 #ifdef CONFIG_SCHED_MM_CID
diff --git a/include/linux/syscalls.h b/include/linux/syscalls.h
index c0cb22cd607d..f574c7dbee76 100644
--- a/include/linux/syscalls.h
+++ b/include/linux/syscalls.h
@@ -802,6 +802,8 @@ asmlinkage long sys_process_mrelease(int pidfd, unsigned int flags);
 asmlinkage long sys_remap_file_pages(unsigned long start, unsigned long size,
 			unsigned long prot, unsigned long pgoff,
 			unsigned long flags);
+asmlinkage long sys_mseal(unsigned long start, size_t len, unsigned int types,
+			  unsigned int flags);
 asmlinkage long sys_mbind(unsigned long start, unsigned long len,
 				unsigned long mode,
 				const unsigned long __user *nmask,
diff --git a/include/uapi/linux/mman.h b/include/uapi/linux/mman.h
index a246e11988d5..d7882b5984ce 100644
--- a/include/uapi/linux/mman.h
+++ b/include/uapi/linux/mman.h
@@ -55,4 +55,10 @@ struct cachestat {
 	__u64 nr_recently_evicted;
 };
 
+#define MM_SEAL_MSEAL		0x1
+#define MM_SEAL_MPROTECT	0x2
+#define MM_SEAL_MUNMAP		0x4
+#define MM_SEAL_MMAP		0x8
+#define MM_SEAL_MREMAP		0x10
+
 #endif /* _UAPI_LINUX_MMAN_H */
diff --git a/kernel/sys_ni.c b/kernel/sys_ni.c
index 781de7cc6a4e..06fabf379e33 100644
--- a/kernel/sys_ni.c
+++ b/kernel/sys_ni.c
@@ -192,6 +192,7 @@ COND_SYSCALL(migrate_pages);
 COND_SYSCALL(move_pages);
 COND_SYSCALL(set_mempolicy_home_node);
 COND_SYSCALL(cachestat);
+COND_SYSCALL(mseal);
 
 COND_SYSCALL(perf_event_open);
 COND_SYSCALL(accept4);
diff --git a/mm/Kconfig b/mm/Kconfig
index 264a2df5ecf5..db8a567cb4d3 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -1258,6 +1258,14 @@ config LOCK_MM_AND_FIND_VMA
 	bool
 	depends on !STACK_GROWSUP
 
+config MSEAL
+	default n
+	bool "Enable mseal() system call"
+	depends on MMU
+	help
+	  Enable the mseal() system call. Make a memory area's metadata
+	  immutable to selected system calls, i.e. mprotect(), munmap(),
+	  mremap(), mmap().
+
 source "mm/damon/Kconfig"
 
 endmenu
diff --git a/mm/Makefile b/mm/Makefile
index ec65984e2ade..643d8518dac0 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -120,6 +120,7 @@ obj-$(CONFIG_PAGE_EXTENSION) += page_ext.o
 obj-$(CONFIG_PAGE_TABLE_CHECK) += page_table_check.o
 obj-$(CONFIG_CMA_DEBUGFS) += cma_debug.o
 obj-$(CONFIG_SECRETMEM) += secretmem.o
+obj-$(CONFIG_MSEAL) += mseal.o
 obj-$(CONFIG_CMA_SYSFS) += cma_sysfs.o
 obj-$(CONFIG_USERFAULTFD) += userfaultfd.o
 obj-$(CONFIG_IDLE_PAGE_TRACKING) += page_idle.o
diff --git a/mm/mmap.c b/mm/mmap.c
index 514ced13c65c..9b6c477e713e 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -730,6 +730,20 @@ static inline bool is_mergeable_vma(struct vm_area_struct *vma,
 		return false;
 	if (!anon_vma_name_eq(anon_vma_name(vma), anon_name))
 		return false;
+#ifdef CONFIG_MSEAL
+	/*
+	 * If a VMA is sealed, it won't be merged with another VMA.
+	 * This might be useful for diagnosis, i.e. the boundary used
+	 * in the mseal() call will be preserved.
+	 * Too many mseal() calls could create many small VMAs
+	 * (fragmentation), but since mseal() usually comes with a
+	 * careful memory layout designed by the application, this
+	 * might not be an issue in the real world. Merging support
+	 * could be added later if needed.
+	 */
+	if (vma->vm_seals & VM_SEAL_ALL)
+		return false;
+#endif
 	return true;
 }
diff --git a/mm/mseal.c b/mm/mseal.c
new file mode 100644
index 000000000000..615b6e06ab44
--- /dev/null
+++ b/mm/mseal.c
@@ -0,0 +1,268 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Implement mseal() syscall.
+ *
+ * Copyright (c) 2023 Google, Inc.
+ *
+ * Author: Jeff Xu <jeffxu@chromium.org>
+ */
+
+#include <linux/mman.h>
+#include <linux/mm.h>
+#include <linux/mm_inline.h>
+#include <linux/syscalls.h>
+#include "internal.h"
+
+/*
+ * MM_SEAL_ALL is all supported flags in mseal().
+ */
+#define MM_SEAL_ALL ( \
+	MM_SEAL_MSEAL | \
+	MM_SEAL_MPROTECT | \
+	MM_SEAL_MUNMAP | \
+	MM_SEAL_MMAP | \
+	MM_SEAL_MREMAP)
+
+static bool can_do_mseal(unsigned int types, unsigned int flags)
+{
+	/* check types is a valid bitmap */
+	if (types & ~MM_SEAL_ALL)
+		return false;
+
+	/* flags isn't used for now */
+	if (flags)
+		return false;
+
+	return true;
+}
+
+/*
+ * Check if a seal type can be added to VMA.
+ */
+static bool can_add_vma_seals(struct vm_area_struct *vma, unsigned int newSeals)
+{
+	/* When SEAL_MSEAL is set, reject if a new type of seal is added */
+	if ((vma->vm_seals & VM_SEAL_MSEAL) &&
+	    (newSeals & ~(vma->vm_seals & VM_SEAL_ALL)))
+		return false;
+
+	return true;
+}
+
+static int mseal_fixup(struct vma_iterator *vmi, struct vm_area_struct *vma,
+		       struct vm_area_struct **prev, unsigned long start,
+		       unsigned long end, unsigned int addtypes)
+{
+	int ret = 0;
+
+	if (addtypes & ~(vma->vm_seals & VM_SEAL_ALL)) {
+		/*
+		 * Handle split at start and end.
+		 * Note: sealed VMA doesn't merge with other VMAs.
+		 */
+		if (start != vma->vm_start) {
+			ret = split_vma(vmi, vma, start, 1);
+			if (ret)
+				goto out;
+		}
+
+		if (end != vma->vm_end) {
+			ret = split_vma(vmi, vma, end, 0);
+			if (ret)
+				goto out;
+		}
+
+		vma->vm_seals |= addtypes;
+	}
+
+out:
+	*prev = vma;
+	return ret;
+}
+
+/*
+ * convert user input to internal type for seal type.
+ */
+static unsigned int convert_user_seal_type(unsigned int types)
+{
+	unsigned int newtypes = VM_SEAL_NONE;
+
+	if (types & MM_SEAL_MSEAL)
+		newtypes |= VM_SEAL_MSEAL;
+
+	if (types & MM_SEAL_MPROTECT)
+		newtypes |= VM_SEAL_MPROTECT;
+
+	if (types & MM_SEAL_MUNMAP)
+		newtypes |= VM_SEAL_MUNMAP;
+
+	if (types & MM_SEAL_MMAP)
+		newtypes |= VM_SEAL_MMAP;
+
+	if (types & MM_SEAL_MREMAP)
+		newtypes |= VM_SEAL_MREMAP;
+
+	return newtypes;
+}
+
+/*
+ * Check for do_mseal:
+ * 1> start is part of a valid vma.
+ * 2> end is part of a valid vma.
+ * 3> No gap (unallocated address) between start and end.
+ * 4> requested seal type can be added in given address range.
+ */
+static int check_mm_seal(unsigned long start, unsigned long end,
+			 unsigned int newtypes)
+{
+	struct vm_area_struct *vma;
+	unsigned long nstart = start;
+
+	VMA_ITERATOR(vmi, current->mm, start);
+
+	/* going through each vma to check */
+	for_each_vma_range(vmi, vma, end) {
+		if (vma->vm_start > nstart)
+			/* unallocated memory found */
+			return -ENOMEM;
+
+		if (!can_add_vma_seals(vma, newtypes))
+			return -EACCES;
+
+		if (vma->vm_end >= end)
+			return 0;
+
+		nstart = vma->vm_end;
+	}
+
+	return -ENOMEM;
+}
+
+/*
+ * Apply sealing.
+ */
+static int apply_mm_seal(unsigned long start, unsigned long end,
+			 unsigned int newtypes)
+{
+	unsigned long nstart, nend;
+	struct vm_area_struct *vma, *prev = NULL;
+	struct vma_iterator vmi;
+	int error = 0;
+
+	vma_iter_init(&vmi, current->mm, start);
+	vma = vma_find(&vmi, end);
+
+	prev = vma_prev(&vmi);
+	if (start > vma->vm_start)
+		prev = vma;
+
+	nstart = start;
+
+	/* going through each vma to update */
+	for_each_vma_range(vmi, vma, end) {
+		nend = vma->vm_end;
+		if (nend > end)
+			nend = end;
+
+		error = mseal_fixup(&vmi, vma, &prev, nstart, nend, newtypes);
+		if (error)
+			break;
+
+		nstart = vma->vm_end;
+	}
+
+	return error;
+}
+
+/*
+ * mseal(2) seals the VM's metadata from
+ * selected syscalls.
+ *
+ * addr/len: VM address range.
+ *
+ * The address range given by addr/len must meet:
+ *   start (addr) must be in a valid VMA.
+ *   end (addr + len) must be in a valid VMA.
+ *   no gap (unallocated memory) between start and end.
+ *   start (addr) must be page aligned.
+ *
+ * len: len will be page aligned implicitly.
+ *
+ * types: bit mask for sealed syscalls.
+ *   MM_SEAL_MPROTECT: seal mprotect(2)/pkey_mprotect(2).
+ *   MM_SEAL_MUNMAP: seal munmap(2).
+ *   MM_SEAL_MMAP: seal mmap(2).
+ *   MM_SEAL_MREMAP: seal mremap(2).
+ *   MM_SEAL_MSEAL: adding a new seal type will be rejected.
+ *
+ * flags: reserved.
+ *
+ * return values:
+ *  zero: success.
+ *  -EINVAL:
+ *   invalid seal type.
+ *   invalid input flags.
+ *   addr is not page aligned.
+ *   addr + len overflows.
+ *  -ENOMEM:
+ *   addr is not a valid address (not allocated).
+ *   end (addr + len) is not a valid address.
+ *   a gap (unallocated memory) between start and end.
+ *  -EACCES:
+ *   MM_SEAL_MSEAL is set, so adding a new seal is rejected.
+ *
+ * Note:
+ *  user can call mseal(2) multiple times to add new seal types.
+ *  adding an already added seal type is a no-op (no error).
+ *  adding a new seal type after MM_SEAL_MSEAL will be rejected.
+ *  unsealing or removing a seal type is not supported.
+ */
+static int do_mseal(unsigned long start, size_t len_in, unsigned int types,
+		    unsigned int flags)
+{
+	int ret = 0;
+	unsigned long end;
+	struct mm_struct *mm = current->mm;
+	unsigned int newtypes;
+	size_t len;
+
+	if (!can_do_mseal(types, flags))
+		return -EINVAL;
+
+	newtypes = convert_user_seal_type(types);
+
+	start = untagged_addr(start);
+	if (!PAGE_ALIGNED(start))
+		return -EINVAL;
+
+	len = PAGE_ALIGN(len_in);
+	/* Check to see whether len was rounded up from small -ve to zero */
+	if (len_in && !len)
+		return -EINVAL;
+
+	end = start + len;
+	if (end < start)
+		return -EINVAL;
+
+	if (end == start)
+		return 0;
+
+	if (mmap_write_lock_killable(mm))
+		return -EINTR;
+
+	ret = check_mm_seal(start, end, newtypes);
+	if (ret)
+		goto out;
+
+	ret = apply_mm_seal(start, end, newtypes);
+
+out:
+	mmap_write_unlock(mm);
+	return ret;
+}
+
+SYSCALL_DEFINE4(mseal, unsigned long, start, size_t, len, unsigned int, types,
+		unsigned int, flags)
+{
+	return do_mseal(start, len, types, flags);
+}
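For illustration only (not part of the patch): a minimal userspace sketch of the interface above. It assumes the x86_64 syscall number (453) proposed later in this series and repeats the MM_SEAL_* values from include/uapi/linux/mman.h; glibc has no wrapper, so it goes through syscall(2).

	#define _GNU_SOURCE
	#include <stdio.h>
	#include <sys/mman.h>
	#include <sys/syscall.h>
	#include <unistd.h>

	#ifndef __NR_mseal
	#define __NR_mseal 453		/* number proposed by this series (x86_64) */
	#endif

	#define MM_SEAL_MPROTECT 0x2	/* from include/uapi/linux/mman.h above */
	#define MM_SEAL_MUNMAP	 0x4

	int main(void)
	{
		size_t len = 4096;
		void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
			       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		if (p == MAP_FAILED)
			return 1;

		/* Seal the mapping against mprotect(2) and munmap(2). */
		if (syscall(__NR_mseal, (unsigned long)p, len,
			    MM_SEAL_MPROTECT | MM_SEAL_MUNMAP, 0))
			perror("mseal");

		/* Both modifications should now fail with EACCES. */
		if (mprotect(p, len, PROT_READ))
			perror("mprotect");	/* expected: Permission denied */
		if (munmap(p, len))
			perror("munmap");	/* expected: Permission denied */

		return 0;
	}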
From patchwork Mon Oct 16 14:38:21 2023
X-Patchwork-Submitter: Jeff Xu
X-Patchwork-Id: 734157

From: jeffxu@chromium.org
To: akpm@linux-foundation.org, keescook@chromium.org, sroettger@google.com
Subject: [RFC PATCH v1 2/8] Wire up mseal syscall
Date: Mon, 16 Oct 2023 14:38:21 +0000
Message-ID: <20231016143828.647848-3-jeffxu@chromium.org>
In-Reply-To: <20231016143828.647848-1-jeffxu@chromium.org>

From: Jeff Xu <jeffxu@chromium.org>

Wire up mseal syscall for all architectures.
Signed-off-by: Jeff Xu <jeffxu@chromium.org>
---
 arch/alpha/kernel/syscalls/syscall.tbl      | 1 +
 arch/arm/tools/syscall.tbl                  | 1 +
 arch/arm64/include/asm/unistd.h             | 2 +-
 arch/arm64/include/asm/unistd32.h           | 2 ++
 arch/ia64/kernel/syscalls/syscall.tbl       | 1 +
 arch/m68k/kernel/syscalls/syscall.tbl       | 1 +
 arch/microblaze/kernel/syscalls/syscall.tbl | 1 +
 arch/mips/kernel/syscalls/syscall_n32.tbl   | 1 +
 arch/mips/kernel/syscalls/syscall_n64.tbl   | 1 +
 arch/mips/kernel/syscalls/syscall_o32.tbl   | 1 +
 arch/parisc/kernel/syscalls/syscall.tbl     | 1 +
 arch/powerpc/kernel/syscalls/syscall.tbl    | 1 +
 arch/s390/kernel/syscalls/syscall.tbl       | 1 +
 arch/sh/kernel/syscalls/syscall.tbl         | 1 +
 arch/sparc/kernel/syscalls/syscall.tbl      | 1 +
 arch/x86/entry/syscalls/syscall_32.tbl      | 1 +
 arch/x86/entry/syscalls/syscall_64.tbl      | 1 +
 arch/xtensa/kernel/syscalls/syscall.tbl     | 1 +
 include/uapi/asm-generic/unistd.h           | 5 ++++-
 19 files changed, 23 insertions(+), 2 deletions(-)

diff --git a/arch/alpha/kernel/syscalls/syscall.tbl b/arch/alpha/kernel/syscalls/syscall.tbl
index ad37569d0507..b5847d53102a 100644
--- a/arch/alpha/kernel/syscalls/syscall.tbl
+++ b/arch/alpha/kernel/syscalls/syscall.tbl
@@ -492,3 +492,4 @@
 560	common	set_mempolicy_home_node		sys_ni_syscall
 561	common	cachestat			sys_cachestat
 562	common	fchmodat2			sys_fchmodat2
+563	common	mseal				sys_mseal
diff --git a/arch/arm/tools/syscall.tbl b/arch/arm/tools/syscall.tbl
index c572d6c3dee0..b50c5ca5047d 100644
--- a/arch/arm/tools/syscall.tbl
+++ b/arch/arm/tools/syscall.tbl
@@ -466,3 +466,4 @@
 450	common	set_mempolicy_home_node		sys_set_mempolicy_home_node
 451	common	cachestat			sys_cachestat
 452	common	fchmodat2			sys_fchmodat2
+453	common	mseal				sys_mseal
diff --git a/arch/arm64/include/asm/unistd.h b/arch/arm64/include/asm/unistd.h
index bd77253b62e0..6a28fb91b85d 100644
--- a/arch/arm64/include/asm/unistd.h
+++ b/arch/arm64/include/asm/unistd.h
@@ -39,7 +39,7 @@
 #define __ARM_NR_compat_set_tls	(__ARM_NR_COMPAT_BASE + 5)
 #define __ARM_NR_COMPAT_END		(__ARM_NR_COMPAT_BASE + 0x800)
 
-#define __NR_compat_syscalls		453
+#define __NR_compat_syscalls		454
 #endif
 
 #define __ARCH_WANT_SYS_CLONE
diff --git a/arch/arm64/include/asm/unistd32.h b/arch/arm64/include/asm/unistd32.h
index 78b68311ec81..1e9b3c098a8e 100644
--- a/arch/arm64/include/asm/unistd32.h
+++ b/arch/arm64/include/asm/unistd32.h
@@ -911,6 +911,8 @@ __SYSCALL(__NR_set_mempolicy_home_node, sys_set_mempolicy_home_node)
 __SYSCALL(__NR_cachestat, sys_cachestat)
 #define __NR_fchmodat2 452
 __SYSCALL(__NR_fchmodat2, sys_fchmodat2)
+#define __NR_mseal 453
+__SYSCALL(__NR_mseal, sys_mseal)
 
 /*
  * Please add new compat syscalls above this comment and update
diff --git a/arch/ia64/kernel/syscalls/syscall.tbl b/arch/ia64/kernel/syscalls/syscall.tbl
index 83d8609aec03..babe34d221ee 100644
--- a/arch/ia64/kernel/syscalls/syscall.tbl
+++ b/arch/ia64/kernel/syscalls/syscall.tbl
@@ -373,3 +373,4 @@
 450	common	set_mempolicy_home_node		sys_set_mempolicy_home_node
 451	common	cachestat			sys_cachestat
 452	common	fchmodat2			sys_fchmodat2
+453	common	mseal				sys_mseal
diff --git a/arch/m68k/kernel/syscalls/syscall.tbl b/arch/m68k/kernel/syscalls/syscall.tbl
index 259ceb125367..27cd3f7dbd5e 100644
--- a/arch/m68k/kernel/syscalls/syscall.tbl
+++ b/arch/m68k/kernel/syscalls/syscall.tbl
@@ -452,3 +452,4 @@
 450	common	set_mempolicy_home_node		sys_set_mempolicy_home_node
 451	common	cachestat			sys_cachestat
 452	common	fchmodat2			sys_fchmodat2
+453	common	mseal				sys_mseal
diff --git a/arch/microblaze/kernel/syscalls/syscall.tbl b/arch/microblaze/kernel/syscalls/syscall.tbl
index a3798c2637fd..e49861f7c61f 100644
--- a/arch/microblaze/kernel/syscalls/syscall.tbl
+++ b/arch/microblaze/kernel/syscalls/syscall.tbl
@@ -458,3 +458,4 @@
 450	common	set_mempolicy_home_node		sys_set_mempolicy_home_node
 451	common	cachestat			sys_cachestat
 452	common	fchmodat2			sys_fchmodat2
+453	common	mseal				sys_mseal
diff --git a/arch/mips/kernel/syscalls/syscall_n32.tbl b/arch/mips/kernel/syscalls/syscall_n32.tbl
index 152034b8e0a0..78d15010cd77 100644
--- a/arch/mips/kernel/syscalls/syscall_n32.tbl
+++ b/arch/mips/kernel/syscalls/syscall_n32.tbl
@@ -391,3 +391,4 @@
 450	n32	set_mempolicy_home_node		sys_set_mempolicy_home_node
 451	n32	cachestat			sys_cachestat
 452	n32	fchmodat2			sys_fchmodat2
+453	n32	mseal				sys_mseal
diff --git a/arch/mips/kernel/syscalls/syscall_n64.tbl b/arch/mips/kernel/syscalls/syscall_n64.tbl
index cb5e757f6621..813614fedb72 100644
--- a/arch/mips/kernel/syscalls/syscall_n64.tbl
+++ b/arch/mips/kernel/syscalls/syscall_n64.tbl
@@ -367,3 +367,4 @@
 450	common	set_mempolicy_home_node		sys_set_mempolicy_home_node
 451	n64	cachestat			sys_cachestat
 452	n64	fchmodat2			sys_fchmodat2
+453	n64	mseal				sys_mseal
diff --git a/arch/mips/kernel/syscalls/syscall_o32.tbl b/arch/mips/kernel/syscalls/syscall_o32.tbl
index 1a646813afdc..01d88d3a6f3e 100644
--- a/arch/mips/kernel/syscalls/syscall_o32.tbl
+++ b/arch/mips/kernel/syscalls/syscall_o32.tbl
@@ -440,3 +440,4 @@
 450	o32	set_mempolicy_home_node		sys_set_mempolicy_home_node
 451	o32	cachestat			sys_cachestat
 452	o32	fchmodat2			sys_fchmodat2
+453	o32	mseal				sys_mseal
diff --git a/arch/parisc/kernel/syscalls/syscall.tbl b/arch/parisc/kernel/syscalls/syscall.tbl
index e97c175b56f9..d52d08f0a1ea 100644
--- a/arch/parisc/kernel/syscalls/syscall.tbl
+++ b/arch/parisc/kernel/syscalls/syscall.tbl
@@ -451,3 +451,4 @@
 450	common	set_mempolicy_home_node		sys_set_mempolicy_home_node
 451	common	cachestat			sys_cachestat
 452	common	fchmodat2			sys_fchmodat2
+453	common	mseal				sys_mseal
diff --git a/arch/powerpc/kernel/syscalls/syscall.tbl b/arch/powerpc/kernel/syscalls/syscall.tbl
index 20e50586e8a2..d38deba73a7b 100644
--- a/arch/powerpc/kernel/syscalls/syscall.tbl
+++ b/arch/powerpc/kernel/syscalls/syscall.tbl
@@ -539,3 +539,4 @@
 450	nospu	set_mempolicy_home_node		sys_set_mempolicy_home_node
 451	common	cachestat			sys_cachestat
 452	common	fchmodat2			sys_fchmodat2
+453	common	mseal				sys_mseal
diff --git a/arch/s390/kernel/syscalls/syscall.tbl b/arch/s390/kernel/syscalls/syscall.tbl
index 0122cc156952..cf3243c2978b 100644
--- a/arch/s390/kernel/syscalls/syscall.tbl
+++ b/arch/s390/kernel/syscalls/syscall.tbl
@@ -455,3 +455,4 @@
 450	common	set_mempolicy_home_node	sys_set_mempolicy_home_node	sys_set_mempolicy_home_node
 451	common	cachestat		sys_cachestat		sys_cachestat
 452	common	fchmodat2		sys_fchmodat2		sys_fchmodat2
+453	common	mseal			sys_mseal		sys_mseal
diff --git a/arch/sh/kernel/syscalls/syscall.tbl b/arch/sh/kernel/syscalls/syscall.tbl
index e90d585c4d3e..76f1cd33adaa 100644
--- a/arch/sh/kernel/syscalls/syscall.tbl
+++ b/arch/sh/kernel/syscalls/syscall.tbl
@@ -455,3 +455,4 @@
 450	common	set_mempolicy_home_node		sys_set_mempolicy_home_node
 451	common	cachestat			sys_cachestat
 452	common	fchmodat2			sys_fchmodat2
+453	common	mseal				sys_mseal
diff --git a/arch/sparc/kernel/syscalls/syscall.tbl b/arch/sparc/kernel/syscalls/syscall.tbl
index 4ed06c71c43f..d7728695d780 100644
--- a/arch/sparc/kernel/syscalls/syscall.tbl
+++ b/arch/sparc/kernel/syscalls/syscall.tbl
@@ -498,3 +498,4 @@
 450	common	set_mempolicy_home_node		sys_set_mempolicy_home_node
 451	common	cachestat			sys_cachestat
 452	common	fchmodat2			sys_fchmodat2
+453	common	mseal				sys_mseal
diff --git a/arch/x86/entry/syscalls/syscall_32.tbl b/arch/x86/entry/syscalls/syscall_32.tbl
index 2d0b1bd866ea..6d4cc386df22 100644
--- a/arch/x86/entry/syscalls/syscall_32.tbl
+++ b/arch/x86/entry/syscalls/syscall_32.tbl
@@ -457,3 +457,4 @@
 450	i386	set_mempolicy_home_node	sys_set_mempolicy_home_node
 451	i386	cachestat		sys_cachestat
 452	i386	fchmodat2		sys_fchmodat2
+453	i386	mseal			sys_mseal
diff --git a/arch/x86/entry/syscalls/syscall_64.tbl b/arch/x86/entry/syscalls/syscall_64.tbl
index 814768249eae..73dcfc43d921 100644
--- a/arch/x86/entry/syscalls/syscall_64.tbl
+++ b/arch/x86/entry/syscalls/syscall_64.tbl
@@ -374,6 +374,7 @@
 450	common	set_mempolicy_home_node	sys_set_mempolicy_home_node
 451	common	cachestat		sys_cachestat
 452	common	fchmodat2		sys_fchmodat2
+453	common	mseal			sys_mseal
 
 #
 # Due to a historical design error, certain syscalls are numbered differently
diff --git a/arch/xtensa/kernel/syscalls/syscall.tbl b/arch/xtensa/kernel/syscalls/syscall.tbl
index fc1a4f3c81d9..e8fd3bf35d73 100644
--- a/arch/xtensa/kernel/syscalls/syscall.tbl
+++ b/arch/xtensa/kernel/syscalls/syscall.tbl
@@ -423,3 +423,4 @@
 450	common	set_mempolicy_home_node		sys_set_mempolicy_home_node
 451	common	cachestat			sys_cachestat
 452	common	fchmodat2			sys_fchmodat2
+453	common	mseal				sys_mseal
diff --git a/include/uapi/asm-generic/unistd.h b/include/uapi/asm-generic/unistd.h
index abe087c53b4b..0c945a798208 100644
--- a/include/uapi/asm-generic/unistd.h
+++ b/include/uapi/asm-generic/unistd.h
@@ -823,8 +823,11 @@ __SYSCALL(__NR_cachestat, sys_cachestat)
 #define __NR_fchmodat2 452
 __SYSCALL(__NR_fchmodat2, sys_fchmodat2)
 
+#define __NR_mseal 453
+__SYSCALL(__NR_mseal, sys_mseal)
+
 #undef __NR_syscalls
-#define __NR_syscalls 453
+#define __NR_syscalls 454
 
 /*
  * 32 bit systems traditionally used different
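Since the syscall is optional (CONFIG_MSEAL, wired through COND_SYSCALL in patch 1), userspace may want to probe for it at runtime. A hedged sketch, not part of the series: per do_mseal() in patch 1, a zero-length call is a successful no-op, while a kernel without the syscall fails with ENOSYS.

	#define _GNU_SOURCE
	#include <errno.h>
	#include <stdio.h>
	#include <sys/syscall.h>
	#include <unistd.h>

	#ifndef __NR_mseal
	#define __NR_mseal 453		/* number proposed by this series (x86_64) */
	#endif

	int main(void)
	{
		/* len == 0: do_mseal() returns 0 without sealing anything. */
		if (syscall(__NR_mseal, 0UL, 0UL, 0U, 0U) == 0)
			puts("mseal() present");
		else if (errno == ENOSYS)
			puts("kernel lacks mseal() (CONFIG_MSEAL disabled)");
		else
			perror("mseal");
		return 0;
	}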
From patchwork Mon Oct 16 14:38:22 2023
X-Patchwork-Submitter: Jeff Xu
X-Patchwork-Id: 734819

From: jeffxu@chromium.org
To: akpm@linux-foundation.org, keescook@chromium.org, sroettger@google.com
Subject: [RFC PATCH v1 3/8] mseal: add can_modify_mm and can_modify_vma
Date: Mon, 16 Oct 2023 14:38:22 +0000
Message-ID: <20231016143828.647848-4-jeffxu@chromium.org>
In-Reply-To: <20231016143828.647848-1-jeffxu@chromium.org>

From: Jeff Xu <jeffxu@chromium.org>

can_modify_mm: checks the sealing flags for a given memory range.
can_modify_vma: checks the sealing flags for a given vma.

Signed-off-by: Jeff Xu <jeffxu@chromium.org>
---
 include/linux/mm.h | 34 ++++++++++++++++++++++++++
 mm/mseal.c         | 60 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 94 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index e790b91a0cd4..aafdb68950f8 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -257,6 +257,18 @@ extern struct rw_semaphore nommu_region_sem;
 extern unsigned int kobjsize(const void *objp);
 #endif
 
+enum caller_origin {
+	ON_BEHALF_OF_KERNEL = 0,
+	ON_BEHALF_OF_USERSPACE,
+};
+
+enum mm_action {
+	MM_ACTION_MPROTECT,
+	MM_ACTION_MUNMAP,
+	MM_ACTION_MREMAP,
+	MM_ACTION_MMAP,
+};
+
 /*
  * vm_seals in vm_area_struct, see mm_types.h.
 */
@@ -3302,6 +3314,28 @@ static inline void mm_populate(unsigned long addr, unsigned long len)
 static inline void mm_populate(unsigned long addr, unsigned long len) {}
 #endif
 
+#ifdef CONFIG_MSEAL
+extern bool can_modify_mm(struct mm_struct *mm, unsigned long start,
+			  unsigned long end, enum mm_action action,
+			  enum caller_origin called);
+
+extern bool can_modify_vma(struct vm_area_struct *vma, enum mm_action action,
+			   enum caller_origin called);
+#else
+static inline bool can_modify_mm(struct mm_struct *mm, unsigned long start,
+				 unsigned long end, enum mm_action action,
+				 enum caller_origin called)
+{
+	return true;
+}
+
+static inline bool can_modify_vma(struct vm_area_struct *vma, enum mm_action action,
+				  enum caller_origin called)
+{
+	return true;
+}
+#endif
+
 /* These take the mm semaphore themselves */
 extern int __must_check vm_brk(unsigned long, unsigned long);
 extern int __must_check vm_brk_flags(unsigned long, unsigned long, unsigned long);
diff --git a/mm/mseal.c b/mm/mseal.c
index 615b6e06ab44..3285ef6b95a6 100644
--- a/mm/mseal.c
+++ b/mm/mseal.c
@@ -36,6 +36,66 @@ static bool can_do_mseal(unsigned int types, unsigned int flags)
 	return true;
 }
 
+/*
+ * Check if a vma is sealed for modification.
+ * Return true if modification is allowed.
+ */
+bool can_modify_vma(struct vm_area_struct *vma, enum mm_action action,
+		    enum caller_origin called)
+{
+	if (called == ON_BEHALF_OF_KERNEL)
+		return true;
+
+	switch (action) {
+	case MM_ACTION_MPROTECT:
+		if (vma->vm_seals & VM_SEAL_MPROTECT)
+			return false;
+		break;
+
+	case MM_ACTION_MUNMAP:
+		if (vma->vm_seals & VM_SEAL_MUNMAP)
+			return false;
+		break;
+
+	case MM_ACTION_MREMAP:
+		if (vma->vm_seals & VM_SEAL_MREMAP)
+			return false;
+		break;
+
+	case MM_ACTION_MMAP:
+		if (vma->vm_seals & VM_SEAL_MMAP)
+			return false;
+		break;
+	}
+
+	return true;
+}
+
+/*
+ * Check if the vmas of a memory range are allowed to be modified.
+ * The memory range can have a gap (unallocated memory).
+ * Return true if modification is allowed.
+ */
+bool can_modify_mm(struct mm_struct *mm, unsigned long start, unsigned long end,
+		   enum mm_action action, enum caller_origin called)
+{
+	struct vm_area_struct *vma;
+
+	VMA_ITERATOR(vmi, mm, start);
+
+	if (called == ON_BEHALF_OF_KERNEL)
+		return true;
+
+	/* going through each vma to check */
+	for_each_vma_range(vmi, vma, end) {
+		if (!can_modify_vma(vma, action, called))
+			return false;
+	}
+
+	/* Allow by default. */
+	return true;
+}
+
 /*
  * Check if a seal type can be added to VMA.
 */
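The intended call pattern for these helpers, compressed from the way the later patches in this series wire them up (an illustrative fragment, not additional kernel code):

	/* In a syscall entry path that is about to modify a mapping: */
	if (!can_modify_mm(current->mm, start, end, MM_ACTION_MUNMAP,
			   ON_BEHALF_OF_USERSPACE))
		return -EACCES;	/* a VMA in [start, end) carries VM_SEAL_MUNMAP */

	/*
	 * The same check made with ON_BEHALF_OF_KERNEL short-circuits to
	 * "allowed", so in-kernel users of the mm are not affected by seals.
	 */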
From patchwork Mon Oct 16 14:38:23 2023
X-Patchwork-Submitter: Jeff Xu
X-Patchwork-Id: 734156

From: jeffxu@chromium.org
To: akpm@linux-foundation.org, keescook@chromium.org, sroettger@google.com
Subject: [RFC PATCH v1 4/8] mseal: seal mprotect
Date: Mon, 16 Oct 2023 14:38:23 +0000
Message-ID: <20231016143828.647848-5-jeffxu@chromium.org>
In-Reply-To: <20231016143828.647848-1-jeffxu@chromium.org>

From: Jeff Xu <jeffxu@chromium.org>

check sealing for mprotect(2).

Signed-off-by: Jeff Xu <jeffxu@chromium.org>
---
 mm/mprotect.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/mm/mprotect.c b/mm/mprotect.c
index 130db91d3a8c..5b67c66d55f7 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -753,6 +753,12 @@ static int do_mprotect_pkey(unsigned long start, size_t len,
 		}
 	}
 
+	if (!can_modify_mm(current->mm, start, end, MM_ACTION_MPROTECT,
+			   ON_BEHALF_OF_USERSPACE)) {
+		error = -EACCES;
+		goto out;
+	}
+
 	prev = vma_prev(&vmi);
 	if (start > vma->vm_start)
 		prev = vma;
From patchwork Mon Oct 16 14:38:24 2023
X-Patchwork-Submitter: Jeff Xu
X-Patchwork-Id: 734818

From: jeffxu@chromium.org
To: akpm@linux-foundation.org, keescook@chromium.org, sroettger@google.com
Subject: [RFC PATCH v1 5/8] mseal munmap
Date: Mon, 16 Oct 2023 14:38:24 +0000
Message-ID: <20231016143828.647848-6-jeffxu@chromium.org>
In-Reply-To: <20231016143828.647848-1-jeffxu@chromium.org>

From: Jeff Xu <jeffxu@chromium.org>

check seal for munmap(2).
Signed-off-by: Jeff Xu <jeffxu@chromium.org>
---
 include/linux/mm.h |  2 +-
 mm/mmap.c          | 22 ++++++++++++++--------
 mm/mremap.c        |  5 +++--
 3 files changed, 18 insertions(+), 11 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index aafdb68950f8..95b793eb3a80 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3294,7 +3294,7 @@ extern unsigned long do_mmap(struct file *file, unsigned long addr,
 	unsigned long pgoff, unsigned long *populate, struct list_head *uf);
 extern int do_vmi_munmap(struct vma_iterator *vmi, struct mm_struct *mm,
 			 unsigned long start, size_t len, struct list_head *uf,
-			 bool unlock);
+			 bool unlock, enum caller_origin called);
 extern int do_munmap(struct mm_struct *, unsigned long, size_t,
 		     struct list_head *uf);
 extern int do_madvise(struct mm_struct *mm, unsigned long start, size_t len_in, int behavior);
diff --git a/mm/mmap.c b/mm/mmap.c
index 9b6c477e713e..f4bfcc5d2c10 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -2601,6 +2601,7 @@ do_vmi_align_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
  * @len: The length of the range to munmap
  * @uf: The userfaultfd list_head
  * @unlock: set to true if the user wants to drop the mmap_lock on success
+ * @called: caller origin
  *
  * This function takes a @mas that is either pointing to the previous VMA or set
  * to MA_START and sets it up to remove the mapping(s). The @len will be
@@ -2611,7 +2612,7 @@ do_vmi_align_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
  */
 int do_vmi_munmap(struct vma_iterator *vmi, struct mm_struct *mm,
 		  unsigned long start, size_t len, struct list_head *uf,
-		  bool unlock)
+		  bool unlock, enum caller_origin called)
 {
 	unsigned long end;
 	struct vm_area_struct *vma;
@@ -2623,6 +2624,9 @@ int do_vmi_munmap(struct vma_iterator *vmi, struct mm_struct *mm,
 	if (end == start)
 		return -EINVAL;
 
+	if (!can_modify_mm(mm, start, end, MM_ACTION_MUNMAP, called))
+		return -EACCES;
+
 	/* arch_unmap() might do unmaps itself. */
 	arch_unmap(mm, start, end);
 
@@ -2650,7 +2654,8 @@ int do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
 {
 	VMA_ITERATOR(vmi, mm, start);
 
-	return do_vmi_munmap(&vmi, mm, start, len, uf, false);
+	return do_vmi_munmap(&vmi, mm, start, len, uf, false,
+			     ON_BEHALF_OF_KERNEL);
 }
 
 unsigned long mmap_region(struct file *file, unsigned long addr,
@@ -2684,7 +2689,7 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
 	}
 
 	/* Unmap any existing mapping in the area */
-	if (do_vmi_munmap(&vmi, mm, addr, len, uf, false))
+	if (do_vmi_munmap(&vmi, mm, addr, len, uf, false, ON_BEHALF_OF_KERNEL))
 		return -ENOMEM;
 
 	/*
@@ -2909,7 +2914,8 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
 	return error;
 }
 
-static int __vm_munmap(unsigned long start, size_t len, bool unlock)
+static int __vm_munmap(unsigned long start, size_t len, bool unlock,
+		       enum caller_origin called)
 {
 	int ret;
 	struct mm_struct *mm = current->mm;
@@ -2919,7 +2925,7 @@ static int __vm_munmap(unsigned long start, size_t len, bool unlock)
 	if (mmap_write_lock_killable(mm))
 		return -EINTR;
 
-	ret = do_vmi_munmap(&vmi, mm, start, len, &uf, unlock);
+	ret = do_vmi_munmap(&vmi, mm, start, len, &uf, unlock, called);
 	if (ret || !unlock)
 		mmap_write_unlock(mm);
 
@@ -2929,14 +2935,14 @@ static int __vm_munmap(unsigned long start, size_t len, bool unlock)
 
 int vm_munmap(unsigned long start, size_t len)
 {
-	return __vm_munmap(start, len, false);
+	return __vm_munmap(start, len, false, ON_BEHALF_OF_KERNEL);
 }
 EXPORT_SYMBOL(vm_munmap);
 
 SYSCALL_DEFINE2(munmap, unsigned long, addr, size_t, len)
 {
 	addr = untagged_addr(addr);
-	return __vm_munmap(addr, len, true);
+	return __vm_munmap(addr, len, true, ON_BEHALF_OF_USERSPACE);
 }
 
@@ -3168,7 +3174,7 @@ int vm_brk_flags(unsigned long addr, unsigned long request, unsigned long flags)
 	if (ret)
 		goto limits_failed;
 
-	ret = do_vmi_munmap(&vmi, mm, addr, len, &uf, 0);
+	ret = do_vmi_munmap(&vmi, mm, addr, len, &uf, 0, ON_BEHALF_OF_KERNEL);
 	if (ret)
 		goto munmap_failed;
 
diff --git a/mm/mremap.c b/mm/mremap.c
index 056478c106ee..e43f9ceaa29d 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -715,7 +715,8 @@ static unsigned long move_vma(struct vm_area_struct *vma,
 	}
 
 	vma_iter_init(&vmi, mm, old_addr);
-	if (!do_vmi_munmap(&vmi, mm, old_addr, old_len, uf_unmap, false)) {
+	if (!do_vmi_munmap(&vmi, mm, old_addr, old_len, uf_unmap, false,
+			   ON_BEHALF_OF_KERNEL)) {
 		/* OOM: unable to split vma, just get accounts right */
 		if (vm_flags & VM_ACCOUNT && !(flags & MREMAP_DONTUNMAP))
 			vm_acct_memory(old_len >> PAGE_SHIFT);
@@ -1009,7 +1010,7 @@ SYSCALL_DEFINE5(mremap, unsigned long, addr, unsigned long, old_len,
 	}
 
 	ret = do_vmi_munmap(&vmi, mm, addr + new_len, old_len - new_len,
-			    &uf_unmap, true);
+			    &uf_unmap, true, ON_BEHALF_OF_KERNEL);
 	if (ret)
 		goto out;
From patchwork Mon Oct 16 14:38:25 2023
X-Patchwork-Submitter: Jeff Xu
X-Patchwork-Id: 734155

From: jeffxu@chromium.org
To: akpm@linux-foundation.org, keescook@chromium.org, sroettger@google.com
Subject: [RFC PATCH v1 6/8] mseal mremap
Date: Mon, 16 Oct 2023 14:38:25 +0000
Message-ID: <20231016143828.647848-7-jeffxu@chromium.org>
In-Reply-To: <20231016143828.647848-1-jeffxu@chromium.org>

From: Jeff Xu <jeffxu@chromium.org>

check seal for mremap(2).

Signed-off-by: Jeff Xu <jeffxu@chromium.org>
---
 mm/mremap.c | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/mm/mremap.c b/mm/mremap.c
index e43f9ceaa29d..2288f9d0b126 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -836,7 +836,15 @@ static unsigned long mremap_to(unsigned long addr, unsigned long old_len,
 	if ((mm->map_count + 2) >= sysctl_max_map_count - 3)
 		return -ENOMEM;
 
+	if (!can_modify_mm(mm, addr, addr + old_len, MM_ACTION_MREMAP,
+			   ON_BEHALF_OF_USERSPACE))
+		return -EACCES;
+
 	if (flags & MREMAP_FIXED) {
+		if (!can_modify_mm(mm, new_addr, new_addr + new_len,
+				   MM_ACTION_MREMAP, ON_BEHALF_OF_USERSPACE))
+			return -EACCES;
+
 		ret = do_munmap(mm, new_addr, new_len, uf_unmap_early);
 		if (ret)
 			goto out;
@@ -995,6 +1003,12 @@ SYSCALL_DEFINE5(mremap, unsigned long, addr, unsigned long, old_len,
 		goto out;
 	}
 
+	if (!can_modify_mm(mm, addr, addr + old_len, MM_ACTION_MREMAP,
+			   ON_BEHALF_OF_USERSPACE)) {
+		ret = -EACCES;
+		goto out;
+	}
+
 	/*
 	 * Always allow a shrinking remap: that just unmaps
 	 * the unnecessary pages..
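For illustration only (a userspace sketch mirroring the check above, with the same assumed syscall number and UAPI values as the earlier examples): once MM_SEAL_MREMAP is applied, growing or moving the mapping is refused.

	#define _GNU_SOURCE
	#include <stdio.h>
	#include <sys/mman.h>
	#include <sys/syscall.h>
	#include <unistd.h>

	#ifndef __NR_mseal
	#define __NR_mseal 453		/* number proposed by this series (x86_64) */
	#endif
	#define MM_SEAL_MREMAP 0x10	/* from include/uapi/linux/mman.h in patch 1 */

	int main(void)
	{
		size_t len = 4096;
		void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
			       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		if (p == MAP_FAILED)
			return 1;

		if (syscall(__NR_mseal, (unsigned long)p, len, MM_SEAL_MREMAP, 0))
			perror("mseal");

		/* Growing or moving the sealed mapping should fail with EACCES. */
		if (mremap(p, len, 2 * len, MREMAP_MAYMOVE) == MAP_FAILED)
			perror("mremap");	/* expected: Permission denied */

		return 0;
	}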
From patchwork Mon Oct 16 14:38:26 2023
X-Patchwork-Submitter: Jeff Xu
X-Patchwork-Id: 734817
From: jeffxu@chromium.org
To: akpm@linux-foundation.org, keescook@chromium.org, sroettger@google.com
Cc: jeffxu@google.com, jorgelo@chromium.org, groeck@chromium.org, linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org, linux-mm@kvack.org, jannh@google.com, surenb@google.com, alex.sierra@amd.com, apopple@nvidia.com, aneesh.kumar@linux.ibm.com, axelrasmussen@google.com, ben@decadent.org.uk, catalin.marinas@arm.com, david@redhat.com, dwmw@amazon.co.uk, ying.huang@intel.com, hughd@google.com, joey.gouly@arm.com, corbet@lwn.net, wangkefeng.wang@huawei.com, Liam.Howlett@oracle.com, torvalds@linux-foundation.org, lstoakes@gmail.com, willy@infradead.org, mawupeng1@huawei.com, linmiaohe@huawei.com, namit@vmware.com, peterx@redhat.com, peterz@infradead.org, ryan.roberts@arm.com, shr@devkernel.io, vbabka@suse.cz, xiujianfeng@huawei.com, yu.ma@intel.com, zhangpeng362@huawei.com, dave.hansen@intel.com, luto@kernel.org, linux-hardening@vger.kernel.org
Subject: [RFC PATCH v1 7/8] mseal mmap
Date: Mon, 16 Oct 2023 14:38:26 +0000
Message-ID: <20231016143828.647848-8-jeffxu@chromium.org>
In-Reply-To: <20231016143828.647848-1-jeffxu@chromium.org>
References: <20231016143828.647848-1-jeffxu@chromium.org>

From: Jeff Xu

Check the seal for mmap(2).

Signed-off-by: Jeff Xu <jeffxu@chromium.org>
---
 fs/aio.c           |  5 +++--
 include/linux/mm.h |  5 ++++-
 ipc/shm.c          |  3 ++-
 mm/internal.h      |  4 ++--
 mm/mmap.c          | 13 +++++++++----
 mm/nommu.c         |  6 ++++--
 mm/util.c          |  8 +++++---
 7 files changed, 29 insertions(+), 15 deletions(-)

diff --git a/fs/aio.c b/fs/aio.c
index b3174da80ff6..81040126dd45 100644
--- a/fs/aio.c
+++ b/fs/aio.c
@@ -557,8 +557,9 @@ static int aio_setup_ring(struct kioctx *ctx, unsigned int nr_events)
 	}
 
 	ctx->mmap_base = do_mmap(ctx->aio_ring_file, 0, ctx->mmap_size,
-				 PROT_READ | PROT_WRITE,
-				 MAP_SHARED, 0, &unused, NULL);
+				 PROT_READ | PROT_WRITE, MAP_SHARED, 0, &unused,
+				 NULL, ON_BEHALF_OF_KERNEL);
+
 	mmap_write_unlock(mm);
 	if (IS_ERR((void *)ctx->mmap_base)) {
 		ctx->mmap_size = 0;
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 95b793eb3a80..ffa2eb9bd475 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3289,9 +3289,12 @@ extern unsigned long get_unmapped_area(struct file *, unsigned long, unsigned lo
 extern unsigned long mmap_region(struct file *file, unsigned long addr,
	unsigned long len, vm_flags_t vm_flags, unsigned long pgoff,
	struct list_head *uf);
+
 extern unsigned long do_mmap(struct file *file, unsigned long addr,
	unsigned long len, unsigned long prot, unsigned long flags,
-	unsigned long pgoff, unsigned long *populate, struct list_head *uf);
+	unsigned long pgoff, unsigned long *populate, struct list_head *uf,
+	enum caller_origin called);
+
 extern int do_vmi_munmap(struct vma_iterator *vmi, struct mm_struct *mm,
	unsigned long start, size_t len, struct list_head *uf,
	bool unlock, enum caller_origin called);
diff --git a/ipc/shm.c b/ipc/shm.c
index 60e45e7045d4..14aebeefe155 100644
--- a/ipc/shm.c
+++ b/ipc/shm.c
@@ -1662,7 +1662,8 @@ long do_shmat(int shmid, char __user *shmaddr, int shmflg,
 		goto invalid;
 	}
 
-	addr = do_mmap(file, addr, size, prot, flags, 0, &populate, NULL);
+	addr = do_mmap(file, addr, size, prot, flags, 0, &populate, NULL,
+		       ON_BEHALF_OF_KERNEL);
 	*raddr = addr;
 	err = 0;
 	if (IS_ERR_VALUE(addr))
diff --git a/mm/internal.h b/mm/internal.h
index d1d4bf4e63c0..4361eaf3d1c6 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -800,8 +800,8 @@ extern u64 hwpoison_filter_memcg;
 extern u32 hwpoison_filter_enable;
 
 extern unsigned long  __must_check vm_mmap_pgoff(struct file *, unsigned long,
-        unsigned long, unsigned long,
-        unsigned long, unsigned long);
+        unsigned long, unsigned long, unsigned long, unsigned long,
+        enum caller_origin called);
 
 extern void set_pageblock_order(void);
 unsigned long reclaim_pages(struct list_head *folio_list);
diff --git a/mm/mmap.c b/mm/mmap.c
index f4bfcc5d2c10..a42fe27a7d04 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1197,7 +1197,8 @@ static inline bool file_mmap_ok(struct file *file, struct inode *inode,
 unsigned long do_mmap(struct file *file, unsigned long addr,
			unsigned long len, unsigned long prot,
			unsigned long flags, unsigned long pgoff,
-			unsigned long *populate, struct list_head *uf)
+			unsigned long *populate, struct list_head *uf,
+			enum caller_origin called)
 {
 	struct mm_struct *mm = current->mm;
 	vm_flags_t vm_flags;
@@ -1365,6 +1366,9 @@ unsigned long do_mmap(struct file *file, unsigned long addr,
 			vm_flags |= VM_NORESERVE;
 	}
 
+	if (!can_modify_mm(mm, addr, addr + len, MM_ACTION_MMAP, called))
+		return -EACCES;
+
 	addr = mmap_region(file, addr, len, vm_flags, pgoff, uf);
 	if (!IS_ERR_VALUE(addr) &&
 	    ((vm_flags & VM_LOCKED) ||
@@ -1411,7 +1415,8 @@ unsigned long ksys_mmap_pgoff(unsigned long addr, unsigned long len,
 		return PTR_ERR(file);
 	}
 
-	retval = vm_mmap_pgoff(file, addr, len, prot, flags, pgoff);
+	retval = vm_mmap_pgoff(file, addr, len, prot, flags, pgoff,
+			       ON_BEHALF_OF_USERSPACE);
 out_fput:
 	if (file)
 		fput(file);
@@ -3017,8 +3022,8 @@ SYSCALL_DEFINE5(remap_file_pages, unsigned long, start, unsigned long, size,
 		flags |= MAP_LOCKED;
 
 	file = get_file(vma->vm_file);
-	ret = do_mmap(vma->vm_file, start, size,
-		      prot, flags, pgoff, &populate, NULL);
+	ret = do_mmap(vma->vm_file, start, size, prot, flags, pgoff,
+		      &populate, NULL, ON_BEHALF_OF_KERNEL);
 	fput(file);
 out:
 	mmap_write_unlock(mm);
diff --git a/mm/nommu.c b/mm/nommu.c
index 8dba41cfc44d..8c11de5dd8e6 100644
--- a/mm/nommu.c
+++ b/mm/nommu.c
@@ -1018,7 +1018,8 @@ unsigned long do_mmap(struct file *file,
			unsigned long flags,
			unsigned long pgoff,
			unsigned long *populate,
-			struct list_head *uf)
+			struct list_head *uf,
+			enum caller_origin called)
 {
 	struct vm_area_struct *vma;
 	struct vm_region *region;
@@ -1262,7 +1263,8 @@ unsigned long ksys_mmap_pgoff(unsigned long addr, unsigned long len,
 		goto out;
 	}
 
-	retval = vm_mmap_pgoff(file, addr, len, prot, flags, pgoff);
+	retval = vm_mmap_pgoff(file, addr, len, prot, flags, pgoff,
+			       ON_BEHALF_OF_USERSPACE);
 
 	if (file)
 		fput(file);
diff --git a/mm/util.c b/mm/util.c
index 4ed8b9b5273c..aaf37c3af517 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -532,7 +532,8 @@ EXPORT_SYMBOL_GPL(account_locked_vm);
 
 unsigned long vm_mmap_pgoff(struct file *file, unsigned long addr,
	unsigned long len, unsigned long prot,
-	unsigned long flag, unsigned long pgoff)
+	unsigned long flag, unsigned long pgoff,
+	enum caller_origin called)
 {
 	unsigned long ret;
 	struct mm_struct *mm = current->mm;
@@ -544,7 +545,7 @@ unsigned long vm_mmap_pgoff(struct file *file, unsigned long addr,
 	if (mmap_write_lock_killable(mm))
 		return -EINTR;
 	ret = do_mmap(file, addr, len, prot, flag, pgoff, &populate,
-		      &uf);
+		      &uf, called);
 	mmap_write_unlock(mm);
 	userfaultfd_unmap_complete(mm, &uf);
 	if (populate)
@@ -562,7 +563,8 @@ unsigned long vm_mmap(struct file *file, unsigned long addr,
 	if (unlikely(offset_in_page(offset)))
 		return -EINVAL;
 
-	return vm_mmap_pgoff(file, addr, len, prot, flag, offset >> PAGE_SHIFT);
+	return vm_mmap_pgoff(file, addr, len, prot, flag, offset >> PAGE_SHIFT,
+			     ON_BEHALF_OF_KERNEL);
 }
 EXPORT_SYMBOL(vm_mmap);
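One detail worth calling out: do_mmap() and vm_mmap_pgoff() now thread through an enum caller_origin, so the MM_SEAL_MMAP check can distinguish requests arriving from userspace (ksys_mmap_pgoff) from kernel-internal mappings such as the aio ring and do_shmat() above, which pass ON_BEHALF_OF_KERNEL. The enum itself is added earlier in the series and is not part of this patch; presumably it is no more than:

/*
 * Sketch of the flag threaded through the mmap paths above; the real
 * definition lives in an earlier patch of this series.
 */
enum caller_origin {
	ON_BEHALF_OF_KERNEL = 0,
	ON_BEHALF_OF_USERSPACE,
};

can_modify_mm() receives this value, which presumably lets it decline to enforce seals for ON_BEHALF_OF_KERNEL callers, keeping sealed regions usable by in-kernel mappers while still stopping syscall-level tampering.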
From patchwork Mon Oct 16 14:38:27 2023
X-Patchwork-Submitter: Jeff Xu
X-Patchwork-Id: 734154
From: jeffxu@chromium.org
To: akpm@linux-foundation.org, keescook@chromium.org, sroettger@google.com
Cc: jeffxu@google.com, jorgelo@chromium.org, groeck@chromium.org, linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org, linux-mm@kvack.org, jannh@google.com, surenb@google.com, alex.sierra@amd.com, apopple@nvidia.com, aneesh.kumar@linux.ibm.com, axelrasmussen@google.com, ben@decadent.org.uk, catalin.marinas@arm.com, david@redhat.com, dwmw@amazon.co.uk, ying.huang@intel.com, hughd@google.com, joey.gouly@arm.com, corbet@lwn.net, wangkefeng.wang@huawei.com, Liam.Howlett@oracle.com, torvalds@linux-foundation.org, lstoakes@gmail.com, willy@infradead.org, mawupeng1@huawei.com, linmiaohe@huawei.com, namit@vmware.com, peterx@redhat.com, peterz@infradead.org, ryan.roberts@arm.com, shr@devkernel.io, vbabka@suse.cz, xiujianfeng@huawei.com, yu.ma@intel.com, zhangpeng362@huawei.com, dave.hansen@intel.com, luto@kernel.org, linux-hardening@vger.kernel.org
Subject: [RFC PATCH v1 8/8] selftest mm/mseal mprotect/munmap/mremap/mmap
Date: Mon, 16 Oct 2023 14:38:27 +0000
Message-ID: <20231016143828.647848-9-jeffxu@chromium.org>
In-Reply-To: <20231016143828.647848-1-jeffxu@chromium.org>
References: <20231016143828.647848-1-jeffxu@chromium.org>

From: Jeff Xu

Add a selftest for sealing mprotect/munmap/mremap/mmap.

Signed-off-by: Jeff Xu <jeffxu@chromium.org>
---
 tools/testing/selftests/mm/Makefile     |    1 +
 tools/testing/selftests/mm/mseal_test.c | 1428 +++++++++++++++++++++++
 2 files changed, 1429 insertions(+)
 create mode 100644 tools/testing/selftests/mm/mseal_test.c

diff --git a/tools/testing/selftests/mm/Makefile b/tools/testing/selftests/mm/Makefile
index 6a9fc5693145..0c086cecc093 100644
--- a/tools/testing/selftests/mm/Makefile
+++ b/tools/testing/selftests/mm/Makefile
@@ -59,6 +59,7 @@ TEST_GEN_FILES += mlock2-tests
 TEST_GEN_FILES += mrelease_test
 TEST_GEN_FILES += mremap_dontunmap
 TEST_GEN_FILES += mremap_test
+TEST_GEN_FILES += mseal_test
 TEST_GEN_FILES += on-fault-limit
 TEST_GEN_FILES += thuge-gen
 TEST_GEN_FILES += transhuge-stress
diff --git a/tools/testing/selftests/mm/mseal_test.c b/tools/testing/selftests/mm/mseal_test.c
new file mode 100644
index 000000000000..d6ae09729394
--- /dev/null
+++ b/tools/testing/selftests/mm/mseal_test.c
@@ -0,0 +1,1428 @@
+// SPDX-License-Identifier: GPL-2.0
+#define _GNU_SOURCE
+#include <sys/mman.h>
+#include <stdint.h>
+#include <unistd.h>
+#include <string.h>
+#include <sys/time.h>
+#include <sys/resource.h>
+#include <stdbool.h>
+#include "../kselftest.h"
+#include <syscall.h>
+#include <errno.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <assert.h>
+
+#ifndef MM_SEAL_MSEAL
+#define MM_SEAL_MSEAL 0x1
+#endif
+
+#ifndef MM_SEAL_MPROTECT
+#define MM_SEAL_MPROTECT 0x2
+#endif
+
+#ifndef MM_SEAL_MUNMAP
+#define MM_SEAL_MUNMAP 0x4
+#endif
+
+#ifndef MM_SEAL_MMAP
+#define MM_SEAL_MMAP 0x8
+#endif
+
+#ifndef MM_SEAL_MREMAP
+#define MM_SEAL_MREMAP 0x10
+#endif
+
+#ifndef DEBUG
+#define LOG_TEST_ENTER() {}
+#else
+#define LOG_TEST_ENTER() { printf("%s\n", __func__); }
+#endif
+
+static int sys_mseal(void *start, size_t len, int types)
+{
+	int sret;
+
+	errno = 0;
+	sret = syscall(__NR_mseal, start, len, types, 0);
+	return sret;
+}
+
+int sys_mprotect(void *ptr, size_t size, unsigned long prot)
+{
+	int sret;
+
+	errno = 0;
+	sret = syscall(SYS_mprotect, ptr, size, prot);
+	return sret;
+}
+
+int sys_munmap(void *ptr, size_t size)
+{
+	int sret;
+
+	errno = 0;
+	sret = syscall(SYS_munmap, ptr, size);
+	return sret;
+}
+
+static int sys_madvise(void *start, size_t len, int types)
+{
+	int sret;
+
+	errno = 0;
+	sret = syscall(__NR_madvise, start, len, types);
+	return sret;
+}
+
+void *addr1 = (void *)0x50000000;
+void *addr2 = (void *)0x50004000;
+void *addr3 = (void *)0x50008000;
+
+void setup_single_address(int size, void **ptrOut)
+{
+	void *ptr;
+
+	ptr = mmap(NULL, size, PROT_READ, MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
+	assert(ptr != (void *)-1);
+	*ptrOut = ptr;
+}
+
+void setup_single_fixed_address(int size, void **ptrOut)
+{
+	void *ptr;
+
+	ptr = mmap(addr1, size, PROT_READ, MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
+	assert(ptr == (void *)addr1);
+
+	*ptrOut = ptr;
+}
+
+void clean_single_address(void *ptr, int size)
+{
+	int ret;
+
+	ret = munmap(ptr, size);
+	assert(!ret);
+}
+
+void seal_mprotect_single_address(void *ptr, int size)
+{
+	int ret;
+
+	ret = sys_mseal(ptr, size, MM_SEAL_MPROTECT);
+	assert(!ret);
+}
+
+static void test_seal_addseals(void)
+{
+	LOG_TEST_ENTER();
+	int ret;
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+
+	setup_single_address(size, &ptr);
+
+	/* add seals one by one */
+	ret = sys_mseal(ptr, size, MM_SEAL_MPROTECT);
+	assert(!ret);
+
+	ret = sys_mseal(ptr, size, MM_SEAL_MMAP);
+	assert(!ret);
+
+	ret = sys_mseal(ptr, size, MM_SEAL_MREMAP);
+	assert(!ret);
+
+	ret = sys_mseal(ptr, size, MM_SEAL_MSEAL);
+	assert(!ret);
+
+	clean_single_address(ptr, size);
+}
+
+static void test_seal_addseals_combined(void)
+{
+	LOG_TEST_ENTER();
+	int ret;
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+
+	setup_single_address(size, &ptr);
+
+	ret = sys_mseal(ptr, size, MM_SEAL_MPROTECT);
+	assert(!ret);
+
+	/* add multiple seals at once */
+	ret = sys_mseal(ptr, size,
+			MM_SEAL_MPROTECT | MM_SEAL_MMAP | MM_SEAL_MREMAP |
+			MM_SEAL_MSEAL);
+	assert(!ret);
+
+	/* no new seal type is added, so this is OK. */
+	ret = sys_mseal(ptr, size,
+			MM_SEAL_MMAP | MM_SEAL_MREMAP | MM_SEAL_MSEAL);
+	assert(!ret);
+
+	/* no new seal type is added, so this is OK. */
+	ret = sys_mseal(ptr, size, MM_SEAL_MSEAL);
+	assert(!ret);
+
+	clean_single_address(ptr, size);
+}
+
+static void test_seal_addseals_reject(void)
+{
+	LOG_TEST_ENTER();
+	int ret;
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+
+	setup_single_address(size, &ptr);
+
+	ret = sys_mseal(ptr, size, MM_SEAL_MPROTECT | MM_SEAL_MSEAL);
+	assert(!ret);
+
+	ret = sys_mseal(ptr, size, MM_SEAL_MPROTECT);
+	assert(!ret);
+
+	/* MM_SEAL_MSEAL is set, so no new seal type is allowed. */
+	ret = sys_mseal(ptr, size,
+			MM_SEAL_MPROTECT | MM_SEAL_MMAP | MM_SEAL_MSEAL);
+	assert(ret < 0);
+
+	ret = sys_mseal(ptr, size, MM_SEAL_MMAP);
+	assert(ret < 0);
+
+	ret = sys_mseal(ptr, size, MM_SEAL_MMAP | MM_SEAL_MSEAL);
+	assert(ret < 0);
+
+	clean_single_address(ptr, size);
+}
+
+static void test_seal_unmapped_start(void)
+{
+	LOG_TEST_ENTER();
+	int ret;
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+
+	setup_single_address(size, &ptr);
+
+	// munmap 2 pages from ptr.
+	ret = sys_munmap(ptr, 2 * page_size);
+	assert(!ret);
+
+	// mprotect will fail because 2 pages from ptr are unmapped.
+	ret = sys_mprotect(ptr, size, PROT_READ | PROT_WRITE);
+	assert(ret < 0);
+
+	// mseal will fail because 2 pages from ptr are unmapped.
+	ret = sys_mseal(ptr, size, MM_SEAL_MSEAL);
+	assert(ret < 0);
+
+	ret = sys_mseal(ptr + 2 * page_size, 2 * page_size, MM_SEAL_MSEAL);
+	assert(!ret);
+
+	clean_single_address(ptr, size);
+}
+
+static void test_seal_unmapped_middle(void)
+{
+	LOG_TEST_ENTER();
+	int ret;
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+
+	setup_single_address(size, &ptr);
+
+	// munmap 2 pages, starting at ptr + page_size.
+	ret = sys_munmap(ptr + page_size, 2 * page_size);
+	assert(!ret);
+
+	// mprotect will fail, since size is 4 pages.
+	ret = sys_mprotect(ptr, size, PROT_READ | PROT_WRITE);
+	assert(ret < 0);
+
+	// mseal will fail as well.
+	ret = sys_mseal(ptr, size, MM_SEAL_MSEAL);
+	assert(ret < 0);
+
+	/* we can still seal the first page and the last page */
+	ret = sys_mseal(ptr, page_size, MM_SEAL_MSEAL | MM_SEAL_MPROTECT);
+	assert(!ret);
+
+	ret = sys_mseal(ptr + 3 * page_size, page_size,
+			MM_SEAL_MSEAL | MM_SEAL_MPROTECT);
+	assert(!ret);
+
+	clean_single_address(ptr, size);
+}
+
+static void test_seal_unmapped_end(void)
+{
+	LOG_TEST_ENTER();
+	int ret;
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+
+	setup_single_address(size, &ptr);
+
+	// unmap the last 2 pages.
+	ret = sys_munmap(ptr + 2 * page_size, 2 * page_size);
+	assert(!ret);
+
+	// mprotect will fail since the last 2 pages are unmapped.
+	ret = sys_mprotect(ptr, size, PROT_READ | PROT_WRITE);
+	assert(ret < 0);
+
+	// mseal will fail as well.
+	ret = sys_mseal(ptr, size, MM_SEAL_MSEAL);
+	assert(ret < 0);
+
+	/* the first 2 pages are not sealed; seals can still be added */
+	ret = sys_mseal(ptr, 2 * page_size, MM_SEAL_MSEAL | MM_SEAL_MPROTECT);
+	assert(!ret);
+
+	clean_single_address(ptr, size);
+}
+
+static void test_seal_multiple_vmas(void)
+{
+	LOG_TEST_ENTER();
+	int ret;
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+
+	setup_single_address(size, &ptr);
+
+	// use mprotect to split the vma into 3.
+	ret = sys_mprotect(ptr + page_size, 2 * page_size,
+			   PROT_READ | PROT_WRITE);
+	assert(!ret);
+
+	// mprotect gets applied to all 4 pages (3 VMAs).
+	ret = sys_mprotect(ptr, size, PROT_READ);
+	assert(!ret);
+
+	// use mprotect to split the vma into 3 again.
+	ret = sys_mprotect(ptr + page_size, 2 * page_size,
+			   PROT_READ | PROT_WRITE);
+	assert(!ret);
+
+	// mseal gets applied to all 4 pages (3 VMAs).
+	ret = sys_mseal(ptr, size, MM_SEAL_MSEAL);
+	assert(!ret);
+
+	// verify that adding a seal type fails after MM_SEAL_MSEAL is set.
+	ret = sys_mseal(ptr, page_size, MM_SEAL_MSEAL | MM_SEAL_MPROTECT);
+	assert(ret < 0);
+
+	ret = sys_mseal(ptr + page_size, 2 * page_size,
+			MM_SEAL_MSEAL | MM_SEAL_MPROTECT);
+	assert(ret < 0);
+
+	ret = sys_mseal(ptr + 3 * page_size, page_size,
+			MM_SEAL_MSEAL | MM_SEAL_MPROTECT);
+	assert(ret < 0);
+
+	clean_single_address(ptr, size);
+}
+
+static void test_seal_split_start(void)
+{
+	LOG_TEST_ENTER();
+	int ret;
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+
+	setup_single_address(size, &ptr);
+
+	/* use mprotect to split at the middle */
+	ret = sys_mprotect(ptr, 2 * page_size, PROT_READ | PROT_WRITE);
+	assert(!ret);
+
+	/* seal the first page; this splits the VMA */
+	ret = sys_mseal(ptr, page_size, MM_SEAL_MSEAL);
+	assert(!ret);
+
+	/* can't add a new seal type to the first page */
+	ret = sys_mseal(ptr, page_size, MM_SEAL_MSEAL | MM_SEAL_MPROTECT);
+	assert(ret < 0);
+
+	/* add seals to the remaining 3 pages */
+	ret = sys_mseal(ptr + page_size, 3 * page_size,
+			MM_SEAL_MSEAL | MM_SEAL_MPROTECT);
+	assert(!ret);
+
+	clean_single_address(ptr, size);
+}
+
+static void test_seal_split_end(void)
+{
+	LOG_TEST_ENTER();
+	int ret;
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+
+	setup_single_fixed_address(size, &ptr);
+
+	/* use mprotect to split at the middle */
+	ret = sys_mprotect(ptr, 2 * page_size, PROT_READ | PROT_WRITE);
+	assert(!ret);
+
+	/* seal the last page */
+	ret = sys_mseal(ptr + 3 * page_size, page_size, MM_SEAL_MSEAL);
+	assert(!ret);
+
+	/* adding a new seal type to the last page is rejected. */
+	ret = sys_mseal(ptr + 3 * page_size, page_size,
+			MM_SEAL_MSEAL | MM_SEAL_MPROTECT);
+	assert(ret < 0);
+
+	/* adding seals to the first 3 pages is OK */
+	ret = sys_mseal(ptr, 3 * page_size, MM_SEAL_MSEAL | MM_SEAL_MPROTECT);
+	assert(!ret);
+
+	clean_single_address(ptr, size);
+}
+
+static void test_seal_invalid_input(void)
+{
+	LOG_TEST_ENTER();
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+
+	setup_single_fixed_address(size, &ptr);
+
+	/* invalid flag */
+	ret = sys_mseal(ptr, size, 0x20);
+	assert(ret < 0);
+
+	ret = sys_mseal(ptr, size, 0x31);
+	assert(ret < 0);
+
+	ret = sys_mseal(ptr, size, 0x3F);
+	assert(ret < 0);
+
+	/* unaligned address */
+	ret = sys_mseal(ptr + 1, 2 * page_size, MM_SEAL_MSEAL);
+	assert(ret < 0);
+
+	/* length too big */
+	ret = sys_mseal(ptr, 5 * page_size, MM_SEAL_MSEAL);
+	assert(ret < 0);
+
+	/* start is not in a valid VMA */
+	ret = sys_mseal(ptr - page_size, 5 * page_size, MM_SEAL_MSEAL);
+	assert(ret < 0);
+
+	clean_single_address(ptr, size);
+}
+
+static void test_seal_zero_length(void)
+{
+	LOG_TEST_ENTER();
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+
+	setup_single_address(size, &ptr);
+
+	ret = sys_mprotect(ptr, 0, PROT_READ | PROT_WRITE);
+	assert(!ret);
+
+	/* sealing zero length is OK, same as mprotect */
+	ret = sys_mseal(ptr, 0, MM_SEAL_MPROTECT);
+	assert(!ret);
+
+	// verify the 4 pages were not sealed by the previous call.
+	ret = sys_mprotect(ptr, size, PROT_READ | PROT_WRITE);
+	assert(!ret);
+
+	clean_single_address(ptr, size);
+}
+
+static void test_seal_twice(void)
+{
+	LOG_TEST_ENTER();
+	int ret;
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+
+	setup_single_address(size, &ptr);
+
+	ret = sys_mseal(ptr, size, MM_SEAL_MPROTECT);
+	assert(!ret);
+
+	// applying the same seal again is OK: idempotent.
+	ret = sys_mseal(ptr, size, MM_SEAL_MPROTECT);
+	assert(!ret);
+
+	ret = sys_mseal(ptr, size,
+			MM_SEAL_MPROTECT | MM_SEAL_MMAP | MM_SEAL_MREMAP |
+			MM_SEAL_MSEAL);
+	assert(!ret);
+
+	ret = sys_mseal(ptr, size,
+			MM_SEAL_MPROTECT | MM_SEAL_MMAP | MM_SEAL_MREMAP |
+			MM_SEAL_MSEAL);
+	assert(!ret);
+
+	ret = sys_mseal(ptr, size, MM_SEAL_MSEAL);
+	assert(!ret);
+
+	clean_single_address(ptr, size);
+}
+
+static void test_seal_mprotect(bool seal)
+{
+	LOG_TEST_ENTER();
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+
+	setup_single_address(size, &ptr);
+
+	if (seal)
+		seal_mprotect_single_address(ptr, size);
+
+	ret = sys_mprotect(ptr, size, PROT_READ | PROT_WRITE);
+	if (seal)
+		assert(ret < 0);
+	else
+		assert(!ret);
+
+	clean_single_address(ptr, size);
+}
+
+static void test_seal_start_mprotect(bool seal)
+{
+	LOG_TEST_ENTER();
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+
+	setup_single_address(size, &ptr);
+
+	if (seal)
+		seal_mprotect_single_address(ptr, page_size);
+
+	// the first page is sealed.
+	ret = sys_mprotect(ptr, page_size, PROT_READ | PROT_WRITE);
+	if (seal)
+		assert(ret < 0);
+	else
+		assert(!ret);
+
+	// pages after the first page are not sealed.
+	ret = sys_mprotect(ptr + page_size, page_size * 3,
+			   PROT_READ | PROT_WRITE);
+	assert(!ret);
+
+	clean_single_address(ptr, size);
+}
+
+static void test_seal_end_mprotect(bool seal)
+{
+	LOG_TEST_ENTER();
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+
+	setup_single_address(size, &ptr);
+
+	if (seal)
+		seal_mprotect_single_address(ptr + page_size, 3 * page_size);
+
+	/* the first page is not sealed */
+	ret = sys_mprotect(ptr, page_size, PROT_READ | PROT_WRITE);
+	assert(!ret);
+
+	/* the last 3 pages are sealed */
+	ret = sys_mprotect(ptr + page_size, page_size * 3,
+			   PROT_READ | PROT_WRITE);
+	if (seal)
+		assert(ret < 0);
+	else
+		assert(!ret);
+
+	clean_single_address(ptr, size);
+}
+
+static void test_seal_mprotect_unalign_len(bool seal)
+{
+	LOG_TEST_ENTER();
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+
+	setup_single_address(size, &ptr);
+
+	if (seal)
+		seal_mprotect_single_address(ptr, page_size * 2 - 1);
+
+	// 2 pages are sealed.
+	ret = sys_mprotect(ptr, page_size * 2, PROT_READ | PROT_WRITE);
+	if (seal)
+		assert(ret < 0);
+	else
+		assert(!ret);
+
+	ret = sys_mprotect(ptr + page_size * 2, page_size,
+			   PROT_READ | PROT_WRITE);
+	assert(!ret);
+
+	clean_single_address(ptr, size);
+}
+
+static void test_seal_mprotect_unalign_len_variant_2(bool seal)
+{
+	LOG_TEST_ENTER();
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+
+	setup_single_address(size, &ptr);
+	if (seal)
+		seal_mprotect_single_address(ptr, page_size * 2 + 1);
+
+	// 3 pages are sealed.
+	ret = sys_mprotect(ptr, page_size * 3, PROT_READ | PROT_WRITE);
+	if (seal)
+		assert(ret < 0);
+	else
+		assert(!ret);
+
+	ret = sys_mprotect(ptr + page_size * 3, page_size,
+			   PROT_READ | PROT_WRITE);
+	assert(!ret);
+
+	clean_single_address(ptr, size);
+}
+
+static void test_seal_mprotect_two_vma(bool seal)
+{
+	LOG_TEST_ENTER();
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+
+	setup_single_address(size, &ptr);
+
+	/* use mprotect to split */
+	ret = sys_mprotect(ptr, page_size * 2, PROT_READ | PROT_WRITE);
+	assert(!ret);
+
+	if (seal)
+		seal_mprotect_single_address(ptr, page_size * 4);
+
+	ret = sys_mprotect(ptr, page_size * 2, PROT_READ | PROT_WRITE);
+	if (seal)
+		assert(ret < 0);
+	else
+		assert(!ret);
+
+	ret = sys_mprotect(ptr + page_size * 2, page_size * 2,
+			   PROT_READ | PROT_WRITE);
+	if (seal)
+		assert(ret < 0);
+	else
+		assert(!ret);
+
+	clean_single_address(ptr, size);
+}
+
+static void test_seal_mprotect_two_vma_with_split(bool seal)
+{
+	LOG_TEST_ENTER();
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+
+	setup_single_address(size, &ptr);
+
+	// use mprotect to split into two VMAs.
+	ret = sys_mprotect(ptr, page_size * 2, PROT_READ | PROT_WRITE);
+	assert(!ret);
+
+	// mseal can apply across 2 VMAs and will split them.
+	if (seal)
+		seal_mprotect_single_address(ptr + page_size, page_size * 2);
+
+	// the first page is not sealed.
+	ret = sys_mprotect(ptr, page_size, PROT_READ | PROT_WRITE);
+	assert(!ret);
+
+	// the second page is sealed.
+	ret = sys_mprotect(ptr + page_size, page_size, PROT_READ | PROT_WRITE);
+	if (seal)
+		assert(ret < 0);
+	else
+		assert(!ret);
+
+	// the third page is sealed.
+	ret = sys_mprotect(ptr + 2 * page_size, page_size,
+			   PROT_READ | PROT_WRITE);
+	if (seal)
+		assert(ret < 0);
+	else
+		assert(!ret);
+
+	// the fourth page is not sealed.
+	ret = sys_mprotect(ptr + 3 * page_size, page_size,
+			   PROT_READ | PROT_WRITE);
+	assert(!ret);
+
+	clean_single_address(ptr, size);
+}
+
+static void test_seal_mprotect_partial_mprotect(bool seal)
+{
+	LOG_TEST_ENTER();
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+
+	setup_single_address(size, &ptr);
+
+	// seal one page.
+	if (seal)
+		seal_mprotect_single_address(ptr, page_size);
+
+	// mprotect on the first 2 pages will fail, since the first page is sealed.
+	ret = sys_mprotect(ptr, 2 * page_size, PROT_READ | PROT_WRITE);
+	if (seal)
+		assert(ret < 0);
+	else
+		assert(!ret);
+
+	clean_single_address(ptr, size);
+}
+
+static void test_seal_mprotect_two_vma_with_gap(bool seal)
+{
+	LOG_TEST_ENTER();
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+
+	setup_single_address(size, &ptr);
+
+	// use mprotect to split.
+	ret = sys_mprotect(ptr, page_size, PROT_READ | PROT_WRITE);
+	assert(!ret);
+
+	// use mprotect to split.
+	ret = sys_mprotect(ptr + 3 * page_size, page_size,
+			   PROT_READ | PROT_WRITE);
+	assert(!ret);
+
+	// use munmap to free the two middle pages.
+	ret = sys_munmap(ptr + page_size, 2 * page_size);
+	assert(!ret);
+
+	// mprotect will fail, because there is a gap in the address range.
+	// note: internally, mprotect still updated the first page.
+	ret = sys_mprotect(ptr, 4 * page_size, PROT_READ);
+	assert(ret < 0);
+
+	// mseal will fail as well.
+	ret = sys_mseal(ptr, 4 * page_size, MM_SEAL_MPROTECT);
+	assert(ret < 0);
+
+	// unlike mprotect, the first page was not sealed.
+	ret = sys_mprotect(ptr, page_size, PROT_READ);
+	assert(ret == 0);
+
+	// the last page was not sealed either.
+	ret = sys_mprotect(ptr + 3 * page_size, page_size, PROT_READ);
+	assert(ret == 0);
+
+	clean_single_address(ptr, size);
+}
+
+static void test_seal_mprotect_split(bool seal)
+{
+	LOG_TEST_ENTER();
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+
+	setup_single_address(size, &ptr);
+
+	// use mprotect to split.
+	ret = sys_mprotect(ptr, page_size, PROT_READ | PROT_WRITE);
+	assert(!ret);
+
+	// seal all 4 pages.
+	if (seal) {
+		ret = sys_mseal(ptr, 4 * page_size, MM_SEAL_MPROTECT);
+		assert(!ret);
+	}
+
+	// madvise is OK.
+	ret = sys_madvise(ptr, page_size * 2, MADV_WILLNEED);
+	assert(!ret);
+
+	// mprotect is sealed.
+	ret = sys_mprotect(ptr, 2 * page_size, PROT_READ);
+	if (seal)
+		assert(ret < 0);
+	else
+		assert(!ret);
+
+	ret = sys_mprotect(ptr + 2 * page_size, 2 * page_size, PROT_READ);
+	if (seal)
+		assert(ret < 0);
+	else
+		assert(!ret);
+
+	clean_single_address(ptr, size);
+}
+
+static void test_seal_mprotect_merge(bool seal)
+{
+	LOG_TEST_ENTER();
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+
+	setup_single_address(size, &ptr);
+
+	// use mprotect to split the first page.
+	ret = sys_mprotect(ptr, page_size, PROT_READ | PROT_WRITE);
+	assert(!ret);
+
+	// seal the first two pages.
+	if (seal) {
+		ret = sys_mseal(ptr, 2 * page_size, MM_SEAL_MPROTECT);
+		assert(!ret);
+	}
+
+	ret = sys_madvise(ptr, page_size, MADV_WILLNEED);
+	assert(!ret);
+
+	// 2 pages are sealed.
+	ret = sys_mprotect(ptr, 2 * page_size, PROT_READ);
+	if (seal)
+		assert(ret < 0);
+	else
+		assert(!ret);
+
+	// the last 2 pages are not sealed.
+	ret = sys_mprotect(ptr + 2 * page_size, 2 * page_size, PROT_READ);
+	assert(ret == 0);
+
+	clean_single_address(ptr, size);
+}
+
+static void test_seal_munmap(bool seal)
+{
+	LOG_TEST_ENTER();
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+
+	setup_single_address(size, &ptr);
+
+	if (seal) {
+		ret = sys_mseal(ptr, size, MM_SEAL_MUNMAP);
+		assert(!ret);
+	}
+
+	// all 4 pages are sealed.
+	ret = sys_munmap(ptr, size);
+	if (seal)
+		assert(ret < 0);
+	else
+		assert(!ret);
+}
+
+/*
+ * allocate 4 pages,
+ * use mprotect to split them into two VMAs,
+ * seal the whole range,
+ * munmap will fail on both VMAs.
+ */
+static void test_seal_munmap_two_vma(bool seal)
+{
+	LOG_TEST_ENTER();
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+
+	setup_single_address(size, &ptr);
+
+	/* use mprotect to split */
+	ret = sys_mprotect(ptr, page_size * 2, PROT_READ | PROT_WRITE);
+	assert(!ret);
+
+	if (seal) {
+		ret = sys_mseal(ptr, size, MM_SEAL_MUNMAP);
+		assert(!ret);
+	}
+
+	ret = sys_munmap(ptr, page_size * 2);
+	if (seal)
+		assert(ret < 0);
+	else
+		assert(!ret);
+
+	ret = sys_munmap(ptr + page_size, page_size * 2);
+	if (seal)
+		assert(ret < 0);
+	else
+		assert(!ret);
+}
+
+/*
+ * allocate a VMA with 4 pages,
+ * munmap the middle 2 pages,
+ * sealing the whole 4 pages will fail,
+ * note: none of the pages are sealed in that case,
+ * munmap of the first page will be OK,
+ * munmap of the last page will be OK.
+ */
+static void test_seal_munmap_vma_with_gap(bool seal)
+{
+	LOG_TEST_ENTER();
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+
+	setup_single_address(size, &ptr);
+
+	ret = sys_munmap(ptr + page_size, page_size * 2);
+	assert(!ret);
+
+	if (seal) {
+		// the range can't have a gap in the middle.
+		ret = sys_mseal(ptr, size, MM_SEAL_MUNMAP);
+		assert(ret < 0);
+	}
+
+	ret = sys_munmap(ptr, page_size);
+	assert(!ret);
+
+	ret = sys_munmap(ptr + page_size * 2, page_size);
+	assert(!ret);
+
+	ret = sys_munmap(ptr, size);
+	assert(!ret);
+}
+
+static void test_munmap_start_freed(bool seal)
+{
+	LOG_TEST_ENTER();
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+
+	setup_single_address(size, &ptr);
+
+	// unmap the first page.
+	ret = sys_munmap(ptr, page_size);
+	assert(!ret);
+
+	// seal the last 3 pages.
+	if (seal) {
+		ret = sys_mseal(ptr + page_size, 3 * page_size, MM_SEAL_MUNMAP);
+		assert(!ret);
+	}
+
+	// unmap from the first page.
+	ret = sys_munmap(ptr, size);
+	if (seal) {
+		assert(ret < 0);
+
+		// use mprotect to verify the pages were not unmapped.
+		ret = sys_mprotect(ptr + page_size, 3 * page_size, PROT_READ);
+		assert(!ret);
+	} else
+		// note: this is OK, even though the first page is
+		// already unmapped.
+		assert(!ret);
+}
+
+static void test_munmap_end_freed(bool seal)
+{
+	LOG_TEST_ENTER();
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+
+	setup_single_address(size, &ptr);
+	// unmap the last page.
+	ret = sys_munmap(ptr + page_size * 3, page_size);
+	assert(!ret);
+
+	// seal the first 3 pages.
+	if (seal) {
+		ret = sys_mseal(ptr, 3 * page_size, MM_SEAL_MUNMAP);
+		assert(!ret);
+	}
+
+	// unmap all pages.
+	ret = sys_munmap(ptr, size);
+	if (seal) {
+		assert(ret < 0);
+
+		// use mprotect to verify the pages were not unmapped.
+		ret = sys_mprotect(ptr, 3 * page_size, PROT_READ);
+		assert(!ret);
+	} else
+		assert(!ret);
+}
+
+static void test_munmap_middle_freed(bool seal)
+{
+	LOG_TEST_ENTER();
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+
+	setup_single_address(size, &ptr);
+	// unmap the middle 2 pages.
+	ret = sys_munmap(ptr + page_size, page_size * 2);
+	assert(!ret);
+
+	// seal the first page.
+	if (seal) {
+		ret = sys_mseal(ptr, page_size, MM_SEAL_MUNMAP);
+		assert(!ret);
+	}
+
+	// munmap all 4 pages.
+	ret = sys_munmap(ptr, size);
+	if (seal) {
+		assert(ret < 0);
+
+		// use mprotect to verify the page was not unmapped.
+		ret = sys_mprotect(ptr, page_size, PROT_READ);
+		assert(!ret);
+	} else
+		assert(!ret);
+}
+
+void test_seal_mremap_shrink(bool seal)
+{
+	LOG_TEST_ENTER();
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+	void *ret2;
+
+	setup_single_address(size, &ptr);
+
+	if (seal) {
+		ret = sys_mseal(ptr, size, MM_SEAL_MREMAP);
+		assert(!ret);
+	}
+
+	// shrink from 4 pages to 2 pages.
+	ret2 = mremap(ptr, size, 2 * page_size, 0, 0);
+	if (seal) {
+		assert(ret2 == MAP_FAILED);
+		assert(errno == EACCES);
+	} else {
+		assert(ret2 != MAP_FAILED);
+		clean_single_address(ret2, 2 * page_size);
+	}
+	clean_single_address(ptr, size);
+}
+
+void test_seal_mremap_expand(bool seal)
+{
+	LOG_TEST_ENTER();
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+	void *ret2;
+
+	setup_single_address(size, &ptr);
+	// unmap the last 2 pages.
+	ret = sys_munmap(ptr + 2 * page_size, 2 * page_size);
+	assert(!ret);
+
+	if (seal) {
+		ret = sys_mseal(ptr, 2 * page_size, MM_SEAL_MREMAP);
+		assert(!ret);
+	}
+
+	// expand from 2 pages to 4 pages.
+	ret2 = mremap(ptr, 2 * page_size, 4 * page_size, 0, 0);
+	if (seal) {
+		assert(ret2 == MAP_FAILED);
+		assert(errno == EACCES);
+	} else {
+		assert(ret2 == ptr);
+		clean_single_address(ret2, 4 * page_size);
+	}
+	clean_single_address(ptr, size);
+}
+
+void test_seal_mremap_move(bool seal)
+{
+	LOG_TEST_ENTER();
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = page_size;
+	int ret;
+	void *ret2;
+
+	setup_single_address(size, &ptr);
+	if (seal) {
+		ret = sys_mseal(ptr, size, MM_SEAL_MREMAP);
+		assert(!ret);
+	}
+
+	// move from ptr to a fixed address.
+	ret2 = mremap(ptr, size, size, MREMAP_MAYMOVE | MREMAP_FIXED, addr1);
+	if (seal) {
+		assert(ret2 == MAP_FAILED);
+		assert(errno == EACCES);
+	} else {
+		assert(ret2 != MAP_FAILED);
+		clean_single_address(ret2, size);
+	}
+	clean_single_address(ptr, size);
+}
+
+void test_seal_mmap_overwrite_prot(bool seal)
+{
+	LOG_TEST_ENTER();
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = page_size;
+	int ret;
+	void *ret2;
+
+	setup_single_address(size, &ptr);
+
+	if (seal) {
+		ret = sys_mseal(ptr, size, MM_SEAL_MMAP);
+		assert(!ret);
+	}
+
+	// use mmap to change the protection.
+	ret2 = mmap(ptr, size, PROT_NONE,
+		    MAP_ANONYMOUS | MAP_PRIVATE | MAP_FIXED, -1, 0);
+	if (seal) {
+		assert(ret2 == MAP_FAILED);
+		assert(errno == EACCES);
+	} else
+		assert(ret2 == ptr);
+
+	clean_single_address(ptr, size);
+}
+
+void test_seal_mremap_shrink_fixed(bool seal)
+{
+	LOG_TEST_ENTER();
+	void *ptr;
+	void *newAddr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+	void *ret2;
+
+	setup_single_address(size, &ptr);
+	setup_single_fixed_address(size, &newAddr);
+
+	if (seal) {
+		ret = sys_mseal(ptr, size, MM_SEAL_MREMAP);
+		assert(!ret);
+	}
+
+	// mremap to move and shrink to a fixed address.
+	ret2 = mremap(ptr, size, 2 * page_size, MREMAP_MAYMOVE | MREMAP_FIXED,
+		      newAddr);
+	if (seal) {
+		assert(ret2 == MAP_FAILED);
+		assert(errno == EACCES);
+	} else
+		assert(ret2 == newAddr);
+
+	clean_single_address(ptr, size);
+	clean_single_address(newAddr, size);
+}
+
+void test_seal_mremap_expand_fixed(bool seal)
+{
+	LOG_TEST_ENTER();
+	void *ptr;
+	void *newAddr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+	void *ret2;
+
+	setup_single_address(page_size, &ptr);
+	setup_single_fixed_address(size, &newAddr);
+
+	if (seal) {
+		ret = sys_mseal(newAddr, size, MM_SEAL_MREMAP);
+		assert(!ret);
+	}
+
+	// mremap to move and expand to a fixed address.
+	ret2 = mremap(ptr, page_size, size, MREMAP_MAYMOVE | MREMAP_FIXED,
+		      newAddr);
+	if (seal) {
+		assert(ret2 == MAP_FAILED);
+		assert(errno == EACCES);
+	} else
+		assert(ret2 == newAddr);
+
+	clean_single_address(ptr, page_size);
+	clean_single_address(newAddr, size);
+}
+
+void test_seal_mremap_move_fixed(bool seal)
+{
+	LOG_TEST_ENTER();
+	void *ptr;
+	void *newAddr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+	void *ret2;
+
+	setup_single_address(size, &ptr);
+	setup_single_fixed_address(size, &newAddr);
+
+	if (seal) {
+		ret = sys_mseal(newAddr, size, MM_SEAL_MREMAP);
+		assert(!ret);
+	}
+
+	// mremap to move to a fixed address.
+	ret2 = mremap(ptr, size, size, MREMAP_MAYMOVE | MREMAP_FIXED, newAddr);
+	if (seal) {
+		assert(ret2 == MAP_FAILED);
+		assert(errno == EACCES);
+	} else
+		assert(ret2 == newAddr);
+
+	clean_single_address(ptr, page_size);
+	clean_single_address(newAddr, size);
+}
+
+void test_seal_mremap_move_fixed_zero(bool seal)
+{
+	LOG_TEST_ENTER();
+	void *ptr;
+	void *newAddr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+	void *ret2;
+
+	setup_single_address(size, &ptr);
+
+	if (seal) {
+		ret = sys_mseal(ptr, size, MM_SEAL_MREMAP);
+		assert(!ret);
+	}
+
+	/*
+	 * MREMAP_FIXED can move the mapping to the zero address.
+	 */
+	ret2 = mremap(ptr, size, 2 * page_size, MREMAP_MAYMOVE | MREMAP_FIXED,
+		      0);
+	if (seal) {
+		assert(ret2 == MAP_FAILED);
+		assert(errno == EACCES);
+	} else {
+		assert(ret2 == 0);
+		clean_single_address(ret2, 2 * page_size);
+	}
+	clean_single_address(ptr, size);
+}
+
+void test_seal_mremap_move_dontunmap(bool seal)
+{
+	LOG_TEST_ENTER();
+	void *ptr;
+	void *newAddr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+	void *ret2;
+
+	setup_single_address(size, &ptr);
+
+	if (seal) {
+		ret = sys_mseal(ptr, size, MM_SEAL_MREMAP);
+		assert(!ret);
+	}
+
+	// mremap to move, and don't unmap the src address.
+	ret2 = mremap(ptr, size, size, MREMAP_MAYMOVE | MREMAP_DONTUNMAP, 0);
+	if (seal) {
+		assert(ret2 == MAP_FAILED);
+		assert(errno == EACCES);
+	} else {
+		assert(ret2 != MAP_FAILED);
+		clean_single_address(ret2, size);
+	}
+
+	clean_single_address(ptr, page_size);
+}
+
+void test_seal_mremap_move_dontunmap_anyaddr(bool seal)
+{
+	LOG_TEST_ENTER();
+	void *ptr;
+	void *newAddr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+	void *ret2;
+
+	setup_single_address(size, &ptr);
+
+	if (seal) {
+		ret = sys_mseal(ptr, size, MM_SEAL_MREMAP);
+		assert(!ret);
+	}
+
+	/*
+	 * The 0xdeaddead hint should have no effect on the dest address
+	 * when MREMAP_DONTUNMAP is set.
+	 */
+	ret2 = mremap(ptr, size, size, MREMAP_MAYMOVE | MREMAP_DONTUNMAP,
+		      0xdeaddead);
+	if (seal) {
+		assert(ret2 == MAP_FAILED);
+		assert(errno == EACCES);
+	} else {
+		assert(ret2 != MAP_FAILED);
+		assert((long)ret2 != 0xdeaddead);
+		clean_single_address(ret2, size);
+	}
+
+	clean_single_address(ptr, page_size);
+}
+
+int main(int argc, char **argv)
+{
+	test_seal_invalid_input();
+	test_seal_addseals();
+	test_seal_addseals_combined();
+	test_seal_addseals_reject();
+	test_seal_unmapped_start();
+	test_seal_unmapped_middle();
+	test_seal_unmapped_end();
+	test_seal_multiple_vmas();
+	test_seal_split_start();
+	test_seal_split_end();
+
+	test_seal_zero_length();
+	test_seal_twice();
+
+	test_seal_mprotect(false);
+	test_seal_mprotect(true);
+
+	test_seal_start_mprotect(false);
+	test_seal_start_mprotect(true);
+
+	test_seal_end_mprotect(false);
+	test_seal_end_mprotect(true);
+
+	test_seal_mprotect_unalign_len(false);
+	test_seal_mprotect_unalign_len(true);
+
+	test_seal_mprotect_unalign_len_variant_2(false);
+	test_seal_mprotect_unalign_len_variant_2(true);
+
+	test_seal_mprotect_two_vma(false);
+	test_seal_mprotect_two_vma(true);
+
+	test_seal_mprotect_two_vma_with_split(false);
+	test_seal_mprotect_two_vma_with_split(true);
+
+	test_seal_mprotect_partial_mprotect(false);
+	test_seal_mprotect_partial_mprotect(true);
+
+	test_seal_mprotect_two_vma_with_gap(false);
+	test_seal_mprotect_two_vma_with_gap(true);
+
+	test_seal_mprotect_merge(false);
+	test_seal_mprotect_merge(true);
+
+	test_seal_mprotect_split(false);
+	test_seal_mprotect_split(true);
+
+	test_seal_munmap(false);
+	test_seal_munmap(true);
+	test_seal_munmap_two_vma(false);
+	test_seal_munmap_two_vma(true);
+	test_seal_munmap_vma_with_gap(false);
+	test_seal_munmap_vma_with_gap(true);
+
+	test_munmap_start_freed(false);
+	test_munmap_start_freed(true);
+	test_munmap_middle_freed(false);
+	test_munmap_middle_freed(true);
+	test_munmap_end_freed(false);
+	test_munmap_end_freed(true);
+
+	test_seal_mremap_shrink(false);
+	test_seal_mremap_shrink(true);
+	test_seal_mremap_expand(false);
+	test_seal_mremap_expand(true);
+	test_seal_mremap_move(false);
+	test_seal_mremap_move(true);
+
+	test_seal_mremap_shrink_fixed(false);
+	test_seal_mremap_shrink_fixed(true);
+	test_seal_mremap_expand_fixed(false);
+	test_seal_mremap_expand_fixed(true);
+	test_seal_mremap_move_fixed(false);
+	test_seal_mremap_move_fixed(true);
+	test_seal_mremap_move_dontunmap(false);
+	test_seal_mremap_move_dontunmap(true);
+	test_seal_mremap_move_fixed_zero(false);
+	test_seal_mremap_move_fixed_zero(true);
+	test_seal_mremap_move_dontunmap_anyaddr(false);
+	test_seal_mremap_move_dontunmap_anyaddr(true);
+
+	test_seal_mmap_overwrite_prot(false);
+	test_seal_mmap_overwrite_prot(true);
+
+	printf("OK\n");
+	return 0;
+}
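Taken together, the series reduces to a simple usage pattern for userspace. The following minimal standalone program mirrors what the selftest above verifies for MM_SEAL_MREMAP; it assumes a kernel with this series applied and headers that define __NR_mseal (the fallback MM_SEAL_MREMAP value matches the selftest's):

// SPDX-License-Identifier: GPL-2.0
// Minimal demo of the proposed API: seal an anonymous mapping against
// mremap(2) and observe the kernel refusing with EACCES, as the mremap
// selftests above assert. Assumes __NR_mseal comes from the patched
// kernel headers.
#define _GNU_SOURCE
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <stdio.h>
#include <errno.h>
#include <assert.h>

#ifndef MM_SEAL_MREMAP
#define MM_SEAL_MREMAP 0x10
#endif

int main(void)
{
	size_t len = 4 * getpagesize();
	void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
	assert(p != MAP_FAILED);

	/* Deny any future mremap(2) of this range. */
	assert(syscall(__NR_mseal, p, len, MM_SEAL_MREMAP, 0) == 0);

	/* With the seal in place, growing the mapping must fail. */
	if (mremap(p, len, 2 * len, MREMAP_MAYMOVE) == MAP_FAILED &&
	    errno == EACCES)
		printf("mremap denied by MM_SEAL_MREMAP, as expected\n");

	return 0;
}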