From patchwork Mon Feb 24 16:02:06 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Micka=C3=ABl_Sala=C3=BCn?= X-Patchwork-Id: 208754 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-14.8 required=3.0 tests=HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, MENTIONS_GIT_HOSTING, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id D50FBC11D32 for ; Mon, 24 Feb 2020 17:03:08 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 9B1742084E for ; Mon, 24 Feb 2020 17:03:08 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727740AbgBXRDI (ORCPT ); Mon, 24 Feb 2020 12:03:08 -0500 Received: from smtp-sh2.infomaniak.ch ([128.65.195.6]:51329 "EHLO smtp-sh2.infomaniak.ch" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726806AbgBXRDH (ORCPT ); Mon, 24 Feb 2020 12:03:07 -0500 Received: from smtp-3-0000.mail.infomaniak.ch (smtp-3-0000.mail.infomaniak.ch [10.4.36.107]) by smtp-sh2.infomaniak.ch (8.14.4/8.14.4/Debian-8+deb8u2) with ESMTP id 01OG2O4L042016 (version=TLSv1/SSLv3 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NO); Mon, 24 Feb 2020 17:02:24 +0100 Received: from localhost (unknown [94.23.54.103]) by smtp-3-0000.mail.infomaniak.ch (Postfix) with ESMTPA id 48R6Jw04gPzljlp7; Mon, 24 Feb 2020 17:02:24 +0100 (CET) From: =?utf-8?q?Micka=C3=ABl_Sala=C3=BCn?= To: linux-kernel@vger.kernel.org Cc: =?utf-8?q?Micka=C3=ABl_Sala=C3=BCn?= , Al Viro , Andy Lutomirski , Arnd Bergmann , Casey Schaufler , Greg Kroah-Hartman , James Morris , Jann Horn , Jonathan Corbet , Kees Cook , Michael Kerrisk , =?utf-8?q?Micka=C3=ABl_Sala?= =?utf-8?b?w7xu?= , "Serge E . Hallyn" , Shuah Khan , Vincent Dagonneau , kernel-hardening@lists.openwall.com, linux-api@vger.kernel.org, linux-arch@vger.kernel.org, linux-doc@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-kselftest@vger.kernel.org, linux-security-module@vger.kernel.org, x86@kernel.org Subject: [RFC PATCH v14 01/10] landlock: Add object and rule management Date: Mon, 24 Feb 2020 17:02:06 +0100 Message-Id: <20200224160215.4136-2-mic@digikod.net> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20200224160215.4136-1-mic@digikod.net> References: <20200224160215.4136-1-mic@digikod.net> MIME-Version: 1.0 X-Antivirus: Dr.Web (R) for Unix mail servers drweb plugin ver.6.0.2.8 X-Antivirus-Code: 0x100000 Sender: linux-kselftest-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kselftest@vger.kernel.org A Landlock object makes it possible to identify a kernel object (e.g. an inode). A Landlock rule is a set of access rights allowed on an object. Rules are grouped in rulesets that may be tied to a set of processes (i.e. subjects) to enforce a scoped access-control (i.e. a domain). Because Landlock's goal is to empower any process (especially unprivileged ones) to sandbox itself, we can't rely on a system-wide object identification such as file extended attributes. Indeed, we need innocuous, composable and modular access-controls. The main challenge with these constraints is to identify kernel objects only while this identification is useful (i.e. 
when a security policy makes use of this object). But this identification data should be freed once no policy is using it. This ephemeral tagging should not and may not be written in the filesystem. We then need to manage the lifetime of a rule according to the lifetime of its object. To avoid a global lock, this implementation make use of RCU and counters to safely reference objects. A following commit uses this generic object management for inodes. Signed-off-by: Mickaël Salaün Cc: Andy Lutomirski Cc: James Morris Cc: Kees Cook Cc: Serge E. Hallyn --- Changes since v13: * New dedicated implementation, removing the need for eBPF. Previous version: https://lore.kernel.org/lkml/20190721213116.23476-6-mic@digikod.net/ --- MAINTAINERS | 10 ++ security/Kconfig | 1 + security/Makefile | 2 + security/landlock/Kconfig | 15 ++ security/landlock/Makefile | 3 + security/landlock/object.c | 339 +++++++++++++++++++++++++++++++++++++ security/landlock/object.h | 134 +++++++++++++++ 7 files changed, 504 insertions(+) create mode 100644 security/landlock/Kconfig create mode 100644 security/landlock/Makefile create mode 100644 security/landlock/object.c create mode 100644 security/landlock/object.h diff --git a/MAINTAINERS b/MAINTAINERS index fcd79fc38928..206f85768cd9 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -9360,6 +9360,16 @@ F: net/core/skmsg.c F: net/core/sock_map.c F: net/ipv4/tcp_bpf.c +LANDLOCK SECURITY MODULE +M: Mickaël Salaün +L: linux-security-module@vger.kernel.org +W: https://landlock.io +T: git https://github.com/landlock-lsm/linux.git +S: Supported +F: security/landlock/ +K: landlock +K: LANDLOCK + LANTIQ / INTEL Ethernet drivers M: Hauke Mehrtens L: netdev@vger.kernel.org diff --git a/security/Kconfig b/security/Kconfig index 2a1a2d396228..9d9981394fb0 100644 --- a/security/Kconfig +++ b/security/Kconfig @@ -238,6 +238,7 @@ source "security/loadpin/Kconfig" source "security/yama/Kconfig" source "security/safesetid/Kconfig" source "security/lockdown/Kconfig" +source "security/landlock/Kconfig" source "security/integrity/Kconfig" diff --git a/security/Makefile b/security/Makefile index 746438499029..2472ef96d40a 100644 --- a/security/Makefile +++ b/security/Makefile @@ -12,6 +12,7 @@ subdir-$(CONFIG_SECURITY_YAMA) += yama subdir-$(CONFIG_SECURITY_LOADPIN) += loadpin subdir-$(CONFIG_SECURITY_SAFESETID) += safesetid subdir-$(CONFIG_SECURITY_LOCKDOWN_LSM) += lockdown +subdir-$(CONFIG_SECURITY_LANDLOCK) += landlock # always enable default capabilities obj-y += commoncap.o @@ -29,6 +30,7 @@ obj-$(CONFIG_SECURITY_YAMA) += yama/ obj-$(CONFIG_SECURITY_LOADPIN) += loadpin/ obj-$(CONFIG_SECURITY_SAFESETID) += safesetid/ obj-$(CONFIG_SECURITY_LOCKDOWN_LSM) += lockdown/ +obj-$(CONFIG_SECURITY_LANDLOCK) += landlock/ obj-$(CONFIG_CGROUP_DEVICE) += device_cgroup.o # Object integrity file lists diff --git a/security/landlock/Kconfig b/security/landlock/Kconfig new file mode 100644 index 000000000000..4a321d5b3f67 --- /dev/null +++ b/security/landlock/Kconfig @@ -0,0 +1,15 @@ +# SPDX-License-Identifier: GPL-2.0-only + +config SECURITY_LANDLOCK + bool "Landlock support" + depends on SECURITY + default n + help + This selects Landlock, a safe sandboxing mechanism. It enables to + restrict processes on the fly (i.e. enforce an access control policy), + which can complement seccomp-bpf. The security policy is a set of access + rights tied to an object, which could be a file, a socket or a process. + + See Documentation/security/landlock/ for further information. 
+ + If you are unsure how to answer this question, answer N. diff --git a/security/landlock/Makefile b/security/landlock/Makefile new file mode 100644 index 000000000000..cb6deefbf4c0 --- /dev/null +++ b/security/landlock/Makefile @@ -0,0 +1,3 @@ +obj-$(CONFIG_SECURITY_LANDLOCK) := landlock.o + +landlock-y := object.o diff --git a/security/landlock/object.c b/security/landlock/object.c new file mode 100644 index 000000000000..38fbbb108120 --- /dev/null +++ b/security/landlock/object.c @@ -0,0 +1,339 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Landlock LSM - Object and rule management + * + * Copyright © 2016-2020 Mickaël Salaün + * Copyright © 2018-2020 ANSSI + * + * Principles and constraints of the object and rule management: + * - Do not leak memory. + * - Try as much as possible to free a memory allocation as soon as it is + * unused. + * - Do not use global lock. + * - Do not charge processes other than the one requesting a Landlock + * operation. + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "object.h" + +struct landlock_object *landlock_create_object( + const enum landlock_object_type type, void *underlying_object) +{ + struct landlock_object *object; + + if (WARN_ON_ONCE(!underlying_object)) + return NULL; + object = kzalloc(sizeof(*object), GFP_KERNEL); + if (!object) + return NULL; + refcount_set(&object->usage, 1); + refcount_set(&object->cleaners, 1); + spin_lock_init(&object->lock); + INIT_LIST_HEAD(&object->rules); + object->type = type; + WRITE_ONCE(object->underlying_object, underlying_object); + return object; +} + +struct landlock_object *landlock_get_object(struct landlock_object *object) + __acquires(object->usage) +{ + __acquire(object->usage); + /* + * If @object->usage equal 0, then it will be ignored by writers, and + * underlying_object->object may be replaced, but this is not an issue + * for release_object(). + */ + if (object && refcount_inc_not_zero(&object->usage)) { + /* + * It should not be possible to get a reference to an object if + * its underlying object is being terminated (e.g. with + * landlock_release_object()), because an object is only + * modifiable through such underlying object. This is not the + * case with landlock_get_object_cleaner(). + */ + WARN_ON_ONCE(!READ_ONCE(object->underlying_object)); + return object; + } + return NULL; +} + +static struct landlock_object *get_object_cleaner( + struct landlock_object *object) + __acquires(object->cleaners) +{ + __acquire(object->cleaners); + if (object && refcount_inc_not_zero(&object->cleaners)) + return object; + return NULL; +} + +/* + * There is two cases when an object should be free and the reference to the + * underlying object should be put: + * - when the last rule tied to this object is removed, which is handled by + * landlock_put_rule() and then release_object(); + * - when the object is being terminated (e.g. no more reference to an inode), + * which is handled by landlock_put_object(). + */ +static void put_object_free(struct landlock_object *object) + __releases(object->cleaners) +{ + __release(object->cleaners); + if (!refcount_dec_and_test(&object->cleaners)) + return; + WARN_ON_ONCE(refcount_read(&object->usage)); + /* + * Ensures a safe use of @object in the RCU block from + * landlock_put_rule(). + */ + kfree_rcu(object, rcu_free); +} + +/* + * Destroys a newly created and useless object. 
+ */ +void landlock_drop_object(struct landlock_object *object) +{ + if (WARN_ON_ONCE(!refcount_dec_and_test(&object->usage))) + return; + __acquire(object->cleaners); + put_object_free(object); +} + +/* + * Puts the underlying object (e.g. inode) if it is the first request to + * release @object, without calling landlock_put_object(). + * + * Return true if this call effectively marks @object as released, false + * otherwise. + */ +static bool release_object(struct landlock_object *object) + __releases(&object->lock) +{ + void *underlying_object; + + lockdep_assert_held(&object->lock); + + underlying_object = xchg(&object->underlying_object, NULL); + spin_unlock(&object->lock); + might_sleep(); + if (!underlying_object) + return false; + + switch (object->type) { + case LANDLOCK_OBJECT_INODE: + break; + default: + WARN_ON_ONCE(1); + } + return true; +} + +static void put_object_cleaner(struct landlock_object *object) + __releases(object->cleaners) +{ + /* Let's try an early lockless check. */ + if (list_empty(&object->rules) && + READ_ONCE(object->underlying_object)) { + /* + * Puts @object if there is no rule tied to it and the + * remaining user is the underlying object. This check is + * atomic because @object->rules and @object->underlying_object + * are protected by @object->lock. + */ + spin_lock(&object->lock); + if (list_empty(&object->rules) && + READ_ONCE(object->underlying_object) && + refcount_dec_if_one(&object->usage)) { + /* + * Releases @object, in place of + * landlock_release_object(). + * + * @object is already empty, implying that all its + * previous rules are already disabled. + * + * Unbalance the @object->cleaners counter to reflect + * the underlying object release. + */ + if (!WARN_ON_ONCE(!release_object(object))) { + __acquire(object->cleaners); + put_object_free(object); + } + } else { + spin_unlock(&object->lock); + } + } + put_object_free(object); +} + +/* + * Putting an object is easy when the object is being terminated, but it is + * much more tricky when the reason is that there is no more rule tied to this + * object. Indeed, new rules could be added at the same time. + */ +void landlock_put_object(struct landlock_object *object) + __releases(object->usage) +{ + struct landlock_object *object_cleaner; + + __release(object->usage); + might_sleep(); + if (!object) + return; + /* + * Guards against concurrent termination to be able to terminate + * @object if it is empty and not referenced by another rule-appender + * other than the underlying object. + */ + object_cleaner = get_object_cleaner(object); + if (WARN_ON_ONCE(!object_cleaner)) { + __release(object->cleaners); + return; + } + /* + * Decrements @object->usage and if it reach zero, also decrement + * @object->cleaners. If both reach zero, then release and free + * @object. + */ + if (refcount_dec_and_test(&object->usage)) { + struct landlock_rule *rule_walker, *rule_walker2; + + spin_lock(&object->lock); + /* + * Disables all the rules tied to @object when it is forbidden + * to add new rule but still allowed to remove them with + * landlock_put_rule(). This is crucial to be able to safely + * free a rule according to landlock_rule_is_disabled(). + */ + list_for_each_entry_safe(rule_walker, rule_walker2, + &object->rules, list) + list_del_rcu(&rule_walker->list); + + /* + * Releases @object if it is not already released (e.g. with + * landlock_release_object()). + */ + release_object(object); + /* + * Unbalances the @object->cleaners counter to reflect the + * underlying object release. 
+ */ + __acquire(object->cleaners); + put_object_free(object); + } + put_object_cleaner(object_cleaner); +} + +void landlock_put_rule(struct landlock_object *object, + struct landlock_rule *rule) +{ + if (!rule) + return; + WARN_ON_ONCE(!object); + /* + * Guards against a concurrent @object self-destruction with + * landlock_put_object() or put_object_cleaner(). + */ + rcu_read_lock(); + if (landlock_rule_is_disabled(rule)) { + rcu_read_unlock(); + if (refcount_dec_and_test(&rule->usage)) + kfree_rcu(rule, rcu_free); + return; + } + if (refcount_dec_and_test(&rule->usage)) { + struct landlock_object *safe_object; + + /* + * Now, @rule may still be enabled, or in the process of being + * untied to @object by put_object_cleaner(). However, we know + * that @object will not be freed until rcu_read_unlock() and + * until @object->cleaners reach zero. Furthermore, we may not + * be the only one willing to free a @rule linked with @object. + * If we succeed to hold @object with get_object_cleaner(), we + * know that until put_object_cleaner(), we can safely use + * @object to remove @rule. + */ + safe_object = get_object_cleaner(object); + rcu_read_unlock(); + if (!safe_object) { + __release(safe_object->cleaners); + /* + * We can safely free @rule because it is already + * removed from @object's list. + */ + WARN_ON_ONCE(!landlock_rule_is_disabled(rule)); + kfree_rcu(rule, rcu_free); + } else { + spin_lock(&safe_object->lock); + if (!landlock_rule_is_disabled(rule)) + list_del(&rule->list); + spin_unlock(&safe_object->lock); + kfree_rcu(rule, rcu_free); + put_object_cleaner(safe_object); + } + } else { + rcu_read_unlock(); + } + /* + * put_object_cleaner() might sleep, but it is only reachable if + * !landlock_rule_is_disabled(). Therefore, clean_ref() can not sleep. + */ + might_sleep(); +} + +void landlock_release_object(struct landlock_object __rcu *rcu_object) +{ + struct landlock_object *object; + + if (!rcu_object) + return; + rcu_read_lock(); + object = get_object_cleaner(rcu_dereference(rcu_object)); + rcu_read_unlock(); + if (unlikely(!object)) { + __release(object->cleaners); + return; + } + /* + * Makes sure that the underlying object never point to a freed object + * by firstly releasing the object (i.e. NULL the reference to it) to + * be sure no one could get a new reference to it while it is being + * terminated. Secondly, put the object globally (e.g. for the + * super-block). + * + * This can run concurrently with put_object_cleaner(), which may try + * to release @object as well. + */ + spin_lock(&object->lock); + if (release_object(object)) { + /* + * Unbalances the object to reflect the underlying object + * release. + */ + __acquire(object->usage); + landlock_put_object(object); + } + /* + * If a concurrent thread is adding a new rule, the object will be free + * at the end of this rule addition, otherwise it will be free with the + * following put_object_cleaner() or a remaining one. 
+ */ + put_object_cleaner(object); +} diff --git a/security/landlock/object.h b/security/landlock/object.h new file mode 100644 index 000000000000..15dfc9a75a82 --- /dev/null +++ b/security/landlock/object.h @@ -0,0 +1,134 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Landlock LSM - Object and rule management + * + * Copyright © 2016-2020 Mickaël Salaün + * Copyright © 2018-2020 ANSSI + */ + +#ifndef _SECURITY_LANDLOCK_OBJECT_H +#define _SECURITY_LANDLOCK_OBJECT_H + +#include +#include +#include +#include +#include +#include + +struct landlock_access { + /* + * @self: Bitfield of allowed actions on the kernel object. They are + * relative to the object type (e.g. LANDLOCK_ACTION_FS_READ). + */ + u32 self; + /* + * @beneath: Same as @self, but for the child objects (e.g. a file in a + * directory). + */ + u32 beneath; +}; + +struct landlock_rule { + struct landlock_access access; + /* + * @list: Linked list with other rules tied to the same object, which + * enable to manage their lifetimes. This is also used to identify if + * a rule is still valid, thanks to landlock_rule_is_disabled(), which + * is important in the matching process because the original object + * address might have been recycled. + */ + struct list_head list; + union { + /* + * @usage: Number of rulesets pointing to this rule. This + * field is never used by RCU readers. + */ + refcount_t usage; + struct rcu_head rcu_free; + }; +}; + +enum landlock_object_type { + LANDLOCK_OBJECT_INODE = 1, +}; + +struct landlock_object { + /* + * @usage: Main usage counter, used to tie an object to it's underlying + * object (i.e. create a lifetime) and potentially add new rules. + */ + refcount_t usage; + /* + * @cleaners: Usage counter used to free a rule from @rules (thanks to + * put_rule()). Enables to get a reference to this object until it + * really become freed. Cf. put_object(). + */ + refcount_t cleaners; + union { + /* + * The use of this struct is controlled by @usage and + * @cleaners, which makes it safe to union it with @rcu_free. + */ + struct { + /* + * @underlying_object: Used when cleaning up an object + * and to mark an object as tied to its underlying + * kernel structure. It must then be atomically read + * using READ_ONCE(). + * + * The one who clear @underlying_object must: + * 1. clear the object self-reference and + * 2. decrement @usage (and potentially free the + * object). + * + * Cf. clean_object(). + */ + void *underlying_object; + /* + * @type: Only used when cleaning up an object. + */ + enum landlock_object_type type; + spinlock_t lock; + /* + * @rules: List of struct landlock_rule linked with + * their "list" field. This list is only accessed when + * updating the list (to be able to clean up later) + * while holding @lock. + */ + struct list_head rules; + }; + struct rcu_head rcu_free; + }; +}; + +void landlock_put_rule(struct landlock_object *object, + struct landlock_rule *rule); + +void landlock_release_object(struct landlock_object __rcu *rcu_object); + +struct landlock_object *landlock_create_object( + const enum landlock_object_type type, void *underlying_object); + +struct landlock_object *landlock_get_object(struct landlock_object *object) + __acquires(object->usage); + +void landlock_put_object(struct landlock_object *object) + __releases(object->usage); + +void landlock_drop_object(struct landlock_object *object); + +static inline bool landlock_rule_is_disabled( + struct landlock_rule *rule) +{ + /* + * Disabling (i.e. unlinking) a landlock_rule is a one-way operation. 
+ * It is not possible to re-enable such a rule, then there is no need + * for smp_load_acquire(). + * + * LIST_POISON2 is set by list_del() and list_del_rcu(). + */ + return !rule || READ_ONCE(rule->list.prev) == LIST_POISON2; +} + +#endif /* _SECURITY_LANDLOCK_OBJECT_H */ From patchwork Mon Feb 24 16:02:07 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Micka=C3=ABl_Sala=C3=BCn?= X-Patchwork-Id: 208756 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-14.8 required=3.0 tests=HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, MENTIONS_GIT_HOSTING, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9E168C11D30 for ; Mon, 24 Feb 2020 16:10:59 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 7E94D20866 for ; Mon, 24 Feb 2020 16:10:59 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728028AbgBXQKy (ORCPT ); Mon, 24 Feb 2020 11:10:54 -0500 Received: from smtp-sh.infomaniak.ch ([128.65.195.4]:51414 "EHLO smtp-sh.infomaniak.ch" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727090AbgBXQKx (ORCPT ); Mon, 24 Feb 2020 11:10:53 -0500 X-Greylist: delayed 454 seconds by postgrey-1.27 at vger.kernel.org; Mon, 24 Feb 2020 11:10:31 EST Received: from smtp-2-0001.mail.infomaniak.ch ([10.5.36.108]) by smtp-sh.infomaniak.ch (8.14.5/8.14.5) with ESMTP id 01OG2Pnm021724 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-GCM-SHA384 bits=256 verify=NO); Mon, 24 Feb 2020 17:02:25 +0100 Received: from localhost (unknown [94.23.54.103]) by smtp-2-0001.mail.infomaniak.ch (Postfix) with ESMTPA id 48R6Jx2C5hzlnBxH; Mon, 24 Feb 2020 17:02:25 +0100 (CET) From: =?utf-8?q?Micka=C3=ABl_Sala=C3=BCn?= To: linux-kernel@vger.kernel.org Cc: =?utf-8?q?Micka=C3=ABl_Sala=C3=BCn?= , Al Viro , Andy Lutomirski , Arnd Bergmann , Casey Schaufler , Greg Kroah-Hartman , James Morris , Jann Horn , Jonathan Corbet , Kees Cook , Michael Kerrisk , =?utf-8?q?Micka=C3=ABl_Sala?= =?utf-8?b?w7xu?= , "Serge E . Hallyn" , Shuah Khan , Vincent Dagonneau , kernel-hardening@lists.openwall.com, linux-api@vger.kernel.org, linux-arch@vger.kernel.org, linux-doc@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-kselftest@vger.kernel.org, linux-security-module@vger.kernel.org, x86@kernel.org Subject: [RFC PATCH v14 02/10] landlock: Add ruleset and domain management Date: Mon, 24 Feb 2020 17:02:07 +0100 Message-Id: <20200224160215.4136-3-mic@digikod.net> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20200224160215.4136-1-mic@digikod.net> References: <20200224160215.4136-1-mic@digikod.net> MIME-Version: 1.0 X-Antivirus: Dr.Web (R) for Unix mail servers drweb plugin ver.6.0.2.8 X-Antivirus-Code: 0x100000 Sender: linux-kselftest-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kselftest@vger.kernel.org A Landlock ruleset is mainly a red-black tree with Landlock rules as nodes. This enables quick update and lookup to match a requested access e.g., to a file. A ruleset is usable through a dedicated file descriptor (cf. following commit adding the syscall) which enables a process to build it by adding new rules. 
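As a rough userspace illustration of this file-descriptor workflow, a ruleset could be created as shown below. This sketch is not part of the patch: it only reuses the helper and attribute names visible in the selftests later in this series (the landlock() wrapper, LANDLOCK_CMD_CREATE_RULESET, LANDLOCK_OPT_CREATE_RULESET and struct landlock_attr_ruleset); the definitive interface is the one added by the following syscall patch, which also provides __NR_landlock and the <linux/landlock.h> definitions assumed here.

	#include <errno.h>
	#include <stddef.h>
	#include <unistd.h>
	#include <sys/syscall.h>
	#include <linux/landlock.h>

	/* Same wrapper as in tools/testing/selftests/landlock/test.h. */
	static inline int landlock(unsigned int command, unsigned int options,
			size_t attr_size, void *attr_ptr)
	{
		errno = 0;
		return syscall(__NR_landlock, command, options, attr_size,
				attr_ptr, 0, NULL);
	}

	/*
	 * Creates a ruleset file descriptor which handles read and write
	 * accesses.  Rules can then be added to this FD before enforcing it
	 * on the calling process (cf. the following syscall patch).
	 */
	static int create_rw_ruleset(void)
	{
		struct landlock_attr_ruleset attr = {
			.handled_access_fs = LANDLOCK_ACCESS_FS_READ |
				LANDLOCK_ACCESS_FS_WRITE,
		};

		return landlock(LANDLOCK_CMD_CREATE_RULESET,
				LANDLOCK_OPT_CREATE_RULESET, sizeof(attr),
				&attr);
	}

The returned descriptor behaves like any other FD: it can be closed, inherited, or inspected through /proc/self/fdinfo/<fd>, as the base selftest later in this series checks.
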
A domain is a ruleset tied to a set of processes. This group of rules defined the security policy enforced on these processes and their future children. A domain can transition to a new domain which is the merge of itself with a ruleset provided by the current process. This merge is the intersection of all the constraints, which means that a process can only gain more constraints over time. Signed-off-by: Mickaël Salaün Cc: Andy Lutomirski Cc: James Morris Cc: Kees Cook Cc: Serge E. Hallyn --- Changes since v13: * New implementation, inspired by the previous inode eBPF map, but agnostic to the underlying kernel object. Previous version: https://lore.kernel.org/lkml/20190721213116.23476-7-mic@digikod.net/ --- MAINTAINERS | 1 + include/uapi/linux/landlock.h | 102 ++++++++ security/landlock/Makefile | 2 +- security/landlock/ruleset.c | 460 ++++++++++++++++++++++++++++++++++ security/landlock/ruleset.h | 106 ++++++++ 5 files changed, 670 insertions(+), 1 deletion(-) create mode 100644 include/uapi/linux/landlock.h create mode 100644 security/landlock/ruleset.c create mode 100644 security/landlock/ruleset.h diff --git a/MAINTAINERS b/MAINTAINERS index 206f85768cd9..937257925e65 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -9366,6 +9366,7 @@ L: linux-security-module@vger.kernel.org W: https://landlock.io T: git https://github.com/landlock-lsm/linux.git S: Supported +F: include/uapi/linux/landlock.h F: security/landlock/ K: landlock K: LANDLOCK diff --git a/include/uapi/linux/landlock.h b/include/uapi/linux/landlock.h new file mode 100644 index 000000000000..92760aca3645 --- /dev/null +++ b/include/uapi/linux/landlock.h @@ -0,0 +1,102 @@ +/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */ +/* + * Landlock - UAPI headers + * + * Copyright © 2017-2020 Mickaël Salaün + * Copyright © 2018-2020 ANSSI + */ + +#ifndef _UAPI__LINUX_LANDLOCK_H__ +#define _UAPI__LINUX_LANDLOCK_H__ + +/** + * DOC: fs_access + * + * A set of actions on kernel objects may be defined by an attribute (e.g. + * &struct landlock_attr_path_beneath) and a bitmask of access. + * + * Filesystem flags + * ~~~~~~~~~~~~~~~~ + * + * These flags enable to restrict a sandbox process to a set of of actions on + * files and directories. Files or directories opened before the sandboxing + * are not subject to these restrictions. + * + * - %LANDLOCK_ACCESS_FS_READ: Open or map a file with read access. + * - %LANDLOCK_ACCESS_FS_READDIR: List the content of a directory. + * - %LANDLOCK_ACCESS_FS_GETATTR: Read metadata of a file or a directory. + * - %LANDLOCK_ACCESS_FS_WRITE: Write to a file. + * - %LANDLOCK_ACCESS_FS_TRUNCATE: Truncate a file. + * - %LANDLOCK_ACCESS_FS_LOCK: Lock a file. + * - %LANDLOCK_ACCESS_FS_CHMOD: Change DAC permissions on a file or a + * directory. + * - %LANDLOCK_ACCESS_FS_CHOWN: Change the owner of a file or a directory. + * - %LANDLOCK_ACCESS_FS_CHGRP: Change the group of a file or a directory. + * - %LANDLOCK_ACCESS_FS_IOCTL: Send various command to a special file, cf. + * :manpage:`ioctl(2)`. + * - %LANDLOCK_ACCESS_FS_LINK_TO: Link a file into a directory. + * - %LANDLOCK_ACCESS_FS_RENAME_FROM: Rename a file or a directory. + * - %LANDLOCK_ACCESS_FS_RENAME_TO: Rename a file or a directory. + * - %LANDLOCK_ACCESS_FS_RMDIR: Remove an empty directory. + * - %LANDLOCK_ACCESS_FS_UNLINK: Remove a file. + * - %LANDLOCK_ACCESS_FS_MAKE_CHAR: Create a character device. + * - %LANDLOCK_ACCESS_FS_MAKE_DIR: Create a directory. + * - %LANDLOCK_ACCESS_FS_MAKE_REG: Create a regular file. 
+ * - %LANDLOCK_ACCESS_FS_MAKE_SOCK: Create a UNIX domain socket. + * - %LANDLOCK_ACCESS_FS_MAKE_FIFO: Create a named pipe. + * - %LANDLOCK_ACCESS_FS_MAKE_BLOCK: Create a block device. + * - %LANDLOCK_ACCESS_FS_MAKE_SYM: Create a symbolic link. + * - %LANDLOCK_ACCESS_FS_EXECUTE: Execute a file. + * - %LANDLOCK_ACCESS_FS_CHROOT: Change the root directory of the current + * process. + * - %LANDLOCK_ACCESS_FS_OPEN: Open a file or a directory. This flag is set + * for any actions (e.g. read, write, execute) requested to open a file or + * directory. + * - %LANDLOCK_ACCESS_FS_MAP: Map a file. This flag is set for any actions + * (e.g. read, write, execute) requested to map a file. + * + * There is currently no restriction for directory walking e.g., + * :manpage:`chdir(2)`. + */ +#define LANDLOCK_ACCESS_FS_READ (1ULL << 0) +#define LANDLOCK_ACCESS_FS_READDIR (1ULL << 1) +#define LANDLOCK_ACCESS_FS_GETATTR (1ULL << 2) +#define LANDLOCK_ACCESS_FS_WRITE (1ULL << 3) +#define LANDLOCK_ACCESS_FS_TRUNCATE (1ULL << 4) +#define LANDLOCK_ACCESS_FS_LOCK (1ULL << 5) +#define LANDLOCK_ACCESS_FS_CHMOD (1ULL << 6) +#define LANDLOCK_ACCESS_FS_CHOWN (1ULL << 7) +#define LANDLOCK_ACCESS_FS_CHGRP (1ULL << 8) +#define LANDLOCK_ACCESS_FS_IOCTL (1ULL << 9) +#define LANDLOCK_ACCESS_FS_LINK_TO (1ULL << 10) +#define LANDLOCK_ACCESS_FS_RENAME_FROM (1ULL << 11) +#define LANDLOCK_ACCESS_FS_RENAME_TO (1ULL << 12) +#define LANDLOCK_ACCESS_FS_RMDIR (1ULL << 13) +#define LANDLOCK_ACCESS_FS_UNLINK (1ULL << 14) +#define LANDLOCK_ACCESS_FS_MAKE_CHAR (1ULL << 15) +#define LANDLOCK_ACCESS_FS_MAKE_DIR (1ULL << 16) +#define LANDLOCK_ACCESS_FS_MAKE_REG (1ULL << 17) +#define LANDLOCK_ACCESS_FS_MAKE_SOCK (1ULL << 18) +#define LANDLOCK_ACCESS_FS_MAKE_FIFO (1ULL << 19) +#define LANDLOCK_ACCESS_FS_MAKE_BLOCK (1ULL << 20) +#define LANDLOCK_ACCESS_FS_MAKE_SYM (1ULL << 21) +#define LANDLOCK_ACCESS_FS_EXECUTE (1ULL << 22) +#define LANDLOCK_ACCESS_FS_CHROOT (1ULL << 23) +#define LANDLOCK_ACCESS_FS_OPEN (1ULL << 24) +#define LANDLOCK_ACCESS_FS_MAP (1ULL << 25) + +/* + * Potential future access: + * - %LANDLOCK_ACCESS_FS_SETATTR + * - %LANDLOCK_ACCESS_FS_APPEND + * - %LANDLOCK_ACCESS_FS_LINK_FROM + * - %LANDLOCK_ACCESS_FS_MOUNT_FROM + * - %LANDLOCK_ACCESS_FS_MOUNT_TO + * - %LANDLOCK_ACCESS_FS_UNMOUNT + * - %LANDLOCK_ACCESS_FS_TRANSFER + * - %LANDLOCK_ACCESS_FS_RECEIVE + * - %LANDLOCK_ACCESS_FS_CHDIR + * - %LANDLOCK_ACCESS_FS_FCNTL + */ + +#endif /* _UAPI__LINUX_LANDLOCK_H__ */ diff --git a/security/landlock/Makefile b/security/landlock/Makefile index cb6deefbf4c0..d846eba445bb 100644 --- a/security/landlock/Makefile +++ b/security/landlock/Makefile @@ -1,3 +1,3 @@ obj-$(CONFIG_SECURITY_LANDLOCK) := landlock.o -landlock-y := object.o +landlock-y := object.o ruleset.o diff --git a/security/landlock/ruleset.c b/security/landlock/ruleset.c new file mode 100644 index 000000000000..5ec013a4188d --- /dev/null +++ b/security/landlock/ruleset.c @@ -0,0 +1,460 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Landlock LSM - Ruleset management + * + * Copyright © 2016-2020 Mickaël Salaün + * Copyright © 2018-2020 ANSSI + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "object.h" +#include "ruleset.h" + +static struct landlock_ruleset *create_ruleset(void) +{ + struct landlock_ruleset *ruleset; + + ruleset = kzalloc(sizeof(*ruleset), GFP_KERNEL); + if (!ruleset) + return ERR_PTR(-ENOMEM); + refcount_set(&ruleset->usage, 1); + mutex_init(&ruleset->lock); + 
atomic_set(&ruleset->nb_rules, 0); + ruleset->root = RB_ROOT; + return ruleset; +} + +struct landlock_ruleset *landlock_create_ruleset(u64 fs_access_mask) +{ + struct landlock_ruleset *ruleset; + + /* Safely handles 32-bits conversion. */ + BUILD_BUG_ON(!__same_type(fs_access_mask, _LANDLOCK_ACCESS_FS_LAST)); + + /* Checks content. */ + if ((fs_access_mask | _LANDLOCK_ACCESS_FS_MASK) != + _LANDLOCK_ACCESS_FS_MASK) + return ERR_PTR(-EINVAL); + /* Informs about useless ruleset. */ + if (!fs_access_mask) + return ERR_PTR(-ENOMSG); + ruleset = create_ruleset(); + if (!IS_ERR(ruleset)) + ruleset->fs_access_mask = fs_access_mask; + return ruleset; +} + +/* + * The underlying kernel object must be held by the caller. + */ +static struct landlock_ruleset_elem *create_ruleset_elem( + struct landlock_object *object) +{ + struct landlock_ruleset_elem *ruleset_elem; + + ruleset_elem = kzalloc(sizeof(*ruleset_elem), GFP_KERNEL); + if (!ruleset_elem) + return ERR_PTR(-ENOMEM); + RB_CLEAR_NODE(&ruleset_elem->node); + RCU_INIT_POINTER(ruleset_elem->ref.object, object); + return ruleset_elem; +} + +static struct landlock_rule *create_rule(struct landlock_object *object, + struct landlock_access *access) +{ + struct landlock_rule *new_rule; + + if (WARN_ON_ONCE(!object)) + return ERR_PTR(-EFAULT); + if (WARN_ON_ONCE(!access)) + return ERR_PTR(-EFAULT); + new_rule = kzalloc(sizeof(*new_rule), GFP_KERNEL); + if (!new_rule) + return ERR_PTR(-ENOMEM); + refcount_set(&new_rule->usage, 1); + INIT_LIST_HEAD(&new_rule->list); + new_rule->access = *access; + + spin_lock(&object->lock); + list_add_tail(&new_rule->list, &object->rules); + spin_unlock(&object->lock); + return new_rule; +} + +/* + * An inserted rule can not be removed, only disabled (cf. struct + * landlock_ruleset_elem). + * + * The underlying kernel object must be held by the caller. + * + * @rule: Allocated struct owned by this function. The caller must hold the + * underlying kernel object (e.g., with a FD). + */ +int landlock_insert_ruleset_rule(struct landlock_ruleset *ruleset, + struct landlock_object *object, struct landlock_access *access, + struct landlock_rule *rule) +{ + struct rb_node **new; + struct rb_node *parent = NULL; + struct landlock_ruleset_elem *ruleset_elem; + struct landlock_rule *new_rule; + + might_sleep(); + /* Accesses may be set when creating a new rule. */ + if (rule) { + if (WARN_ON_ONCE(access)) + return -EINVAL; + } else { + if (WARN_ON_ONCE(!access)) + return -EFAULT; + } + + lockdep_assert_held(&ruleset->lock); + new = &(ruleset->root.rb_node); + while (*new) { + struct landlock_ruleset_elem *this = rb_entry(*new, + struct landlock_ruleset_elem, node); + uintptr_t this_object; + struct landlock_rule *this_rule; + struct landlock_access new_access; + + this_object = (uintptr_t)rcu_access_pointer(this->ref.object); + if (this_object != (uintptr_t)object) { + parent = *new; + if (this_object < (uintptr_t)object) + new = &((*new)->rb_right); + else + new = &((*new)->rb_left); + continue; + } + + /* Do not increment ruleset->nb_rules. */ + this_rule = rcu_dereference_protected(this->ref.rule, + lockdep_is_held(&ruleset->lock)); + /* + * Checks if it is a new object with the same address as a + * previously disabled one. There is no possible race + * condition because an object can not be disabled/deleted + * while being inserted in this tree. + */ + if (landlock_rule_is_disabled(this_rule)) { + if (rule) { + refcount_inc(&rule->usage); + new_rule = rule; + } else { + /* Replace the previous rule with a new one. 
*/ + new_rule = create_rule(object, access); + if (IS_ERR(new_rule)) + return PTR_ERR(new_rule); + } + rcu_assign_pointer(this->ref.rule, new_rule); + landlock_put_rule(object, this_rule); + return 0; + } + + /* this_rule is potentially enabled. */ + if (refcount_read(&this_rule->usage) == 1) { + if (rule) { + /* merge rule: intersection of access rights */ + this_rule->access.self &= rule->access.self; + this_rule->access.beneath &= + rule->access.beneath; + } else { + /* extend rule: union of access rights */ + this_rule->access.self |= access->self; + this_rule->access.beneath |= access->beneath; + } + return 0; + } + + /* + * If this_rule is shared with another ruleset, then create a + * new object rule. + */ + if (rule) { + /* Merging a rule means an intersection of access. */ + new_access.self = this_rule->access.self & + rule->access.self; + new_access.beneath = this_rule->access.beneath & + rule->access.beneath; + } else { + /* Extending a rule means a union of access. */ + new_access.self = this_rule->access.self | + access->self; + new_access.beneath = this_rule->access.self | + access->beneath; + } + new_rule = create_rule(object, &new_access); + if (IS_ERR(new_rule)) + return PTR_ERR(new_rule); + rcu_assign_pointer(this->ref.rule, new_rule); + landlock_put_rule(object, this_rule); + return 0; + } + + /* There is no match for @object. */ + ruleset_elem = create_ruleset_elem(object); + if (IS_ERR(ruleset_elem)) + return PTR_ERR(ruleset_elem); + if (rule) { + refcount_inc(&rule->usage); + new_rule = rule; + } else { + new_rule = create_rule(object, access); + if (IS_ERR(new_rule)) { + kfree(ruleset_elem); + return PTR_ERR(new_rule); + } + } + RCU_INIT_POINTER(ruleset_elem->ref.rule, new_rule); + /* + * Because of the missing RCU context annotation in struct rb_node, + * Sparse emits a warning when encountering rb_link_node_rcu(), but + * this function call is still safe. 
+ */ + rb_link_node_rcu(&ruleset_elem->node, parent, new); + rb_insert_color(&ruleset_elem->node, &ruleset->root); + atomic_inc(&ruleset->nb_rules); + return 0; +} + +static int merge_ruleset(struct landlock_ruleset *dst, + struct landlock_ruleset *src) +{ + struct rb_node *node; + int err = 0; + + might_sleep(); + if (!src) + return 0; + if (WARN_ON_ONCE(!dst)) + return -EFAULT; + if (WARN_ON_ONCE(!dst->hierarchy)) + return -EINVAL; + + mutex_lock(&dst->lock); + mutex_lock_nested(&src->lock, 1); + dst->fs_access_mask |= src->fs_access_mask; + for (node = rb_first(&src->root); node; node = rb_next(node)) { + struct landlock_ruleset_elem *elem = rb_entry(node, + struct landlock_ruleset_elem, node); + struct landlock_object *object = + rcu_dereference_protected(elem->ref.object, + lockdep_is_held(&src->lock)); + struct landlock_rule *rule = + rcu_dereference_protected(elem->ref.rule, + lockdep_is_held(&src->lock)); + + err = landlock_insert_ruleset_rule(dst, object, NULL, rule); + if (err) + goto out_unlock; + } + +out_unlock: + mutex_unlock(&src->lock); + mutex_unlock(&dst->lock); + return err; +} + +void landlock_get_ruleset(struct landlock_ruleset *ruleset) +{ + if (!ruleset) + return; + refcount_inc(&ruleset->usage); +} + +static void put_hierarchy(struct landlock_hierarchy *hierarchy) +{ + if (hierarchy && refcount_dec_and_test(&hierarchy->usage)) + kfree(hierarchy); +} + +static void put_ruleset(struct landlock_ruleset *ruleset) +{ + struct rb_node *orig; + + might_sleep(); + for (orig = rb_first(&ruleset->root); orig; orig = rb_next(orig)) { + struct landlock_ruleset_elem *freeme; + struct landlock_object *object; + struct landlock_rule *rule; + + freeme = rb_entry(orig, struct landlock_ruleset_elem, node); + object = rcu_dereference_protected(freeme->ref.object, + refcount_read(&ruleset->usage) == 0); + rule = rcu_dereference_protected(freeme->ref.rule, + refcount_read(&ruleset->usage) == 0); + landlock_put_rule(object, rule); + kfree_rcu(freeme, rcu_free); + } + put_hierarchy(ruleset->hierarchy); + kfree_rcu(ruleset, rcu_free); +} + +void landlock_put_ruleset(struct landlock_ruleset *ruleset) +{ + might_sleep(); + if (ruleset && refcount_dec_and_test(&ruleset->usage)) + put_ruleset(ruleset); +} + +static void put_ruleset_work(struct work_struct *work) +{ + struct landlock_ruleset *ruleset; + + ruleset = container_of(work, struct landlock_ruleset, work_put); + /* + * Clean up rcu_free because of previous use through union work_put. + * ruleset->rcu_free.func is already NULLed by __rcu_reclaim(). + */ + ruleset->rcu_free.next = NULL; + put_ruleset(ruleset); +} + +void landlock_put_ruleset_enqueue(struct landlock_ruleset *ruleset) +{ + if (ruleset && refcount_dec_and_test(&ruleset->usage)) { + INIT_WORK(&ruleset->work_put, put_ruleset_work); + schedule_work(&ruleset->work_put); + } +} + +static bool clean_ref(struct landlock_ref *ref) +{ + struct landlock_rule *rule; + + rule = rcu_dereference(ref->rule); + if (!rule) + return false; + if (!landlock_rule_is_disabled(rule)) + return false; + rcu_assign_pointer(ref->rule, NULL); + /* + * landlock_put_rule() will not sleep because we already checked + * !landlock_rule_is_disabled(rule). + */ + landlock_put_rule(rcu_dereference(ref->object), rule); + return true; +} + +static void clean_ruleset(struct landlock_ruleset *ruleset) +{ + struct rb_node *node; + + if (!ruleset) + return; + /* We must lock the ruleset to not have a wrong nb_rules counter. 
*/ + mutex_lock(&ruleset->lock); + rcu_read_lock(); + for (node = rb_first(&ruleset->root); node; node = rb_next(node)) { + struct landlock_ruleset_elem *elem = rb_entry(node, + struct landlock_ruleset_elem, node); + + if (clean_ref(&elem->ref)) { + rb_erase(&elem->node, &ruleset->root); + kfree_rcu(elem, rcu_free); + atomic_dec(&ruleset->nb_rules); + } + } + rcu_read_unlock(); + mutex_unlock(&ruleset->lock); +} + +/* + * Creates a new ruleset, merged of @parent and @ruleset, or return @parent if + * @ruleset is empty. If @parent is empty, return a duplicate of @ruleset. + * + * @parent: Must not be modified (i.e. locked or read-only). + */ +struct landlock_ruleset *landlock_merge_ruleset( + struct landlock_ruleset *parent, + struct landlock_ruleset *ruleset) +{ + struct landlock_ruleset *new_dom; + int err; + + might_sleep(); + /* Opportunistically put disabled rules. */ + clean_ruleset(ruleset); + + if (parent && WARN_ON_ONCE(!parent->hierarchy)) + return ERR_PTR(-EINVAL); + if (!ruleset || atomic_read(&ruleset->nb_rules) == 0 || + parent == ruleset) { + landlock_get_ruleset(parent); + return parent; + } + + new_dom = create_ruleset(); + if (IS_ERR(new_dom)) + return new_dom; + new_dom->hierarchy = kzalloc(sizeof(*new_dom->hierarchy), GFP_KERNEL); + if (!new_dom->hierarchy) { + landlock_put_ruleset(new_dom); + return ERR_PTR(-ENOMEM); + } + refcount_set(&new_dom->hierarchy->usage, 1); + + if (parent) { + new_dom->hierarchy->parent = parent->hierarchy; + refcount_inc(&parent->hierarchy->usage); + err = merge_ruleset(new_dom, parent); + if (err) { + landlock_put_ruleset(new_dom); + return ERR_PTR(err); + } + } + err = merge_ruleset(new_dom, ruleset); + if (err) { + landlock_put_ruleset(new_dom); + return ERR_PTR(err); + } + return new_dom; +} + +/* + * The return pointer must only be used in a RCU-read block. + */ +const struct landlock_access *landlock_find_access( + const struct landlock_ruleset *ruleset, + const struct landlock_object *object) +{ + struct rb_node *node; + + WARN_ON_ONCE(!rcu_read_lock_held()); + if (!object) + return NULL; + node = ruleset->root.rb_node; + while (node) { + struct landlock_ruleset_elem *this = rb_entry(node, + struct landlock_ruleset_elem, node); + uintptr_t this_object = + (uintptr_t)rcu_access_pointer(this->ref.object); + + if (this_object == (uintptr_t)object) { + struct landlock_rule *rule; + + rule = rcu_dereference(this->ref.rule); + if (!landlock_rule_is_disabled(rule)) + return &rule->access; + return NULL; + } + if (this_object < (uintptr_t)object) + node = node->rb_right; + else + node = node->rb_left; + } + return NULL; +} diff --git a/security/landlock/ruleset.h b/security/landlock/ruleset.h new file mode 100644 index 000000000000..afc88dbb8b4b --- /dev/null +++ b/security/landlock/ruleset.h @@ -0,0 +1,106 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Landlock LSM - Ruleset management + * + * Copyright © 2016-2020 Mickaël Salaün + * Copyright © 2018-2020 ANSSI + */ + +#ifndef _SECURITY_LANDLOCK_RULESET_H +#define _SECURITY_LANDLOCK_RULESET_H + +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "object.h" + +#define _LANDLOCK_ACCESS_FS_LAST LANDLOCK_ACCESS_FS_MAP +#define _LANDLOCK_ACCESS_FS_MASK ((_LANDLOCK_ACCESS_FS_LAST << 1) - 1) + +struct landlock_ref { + /* + * @object: Identify a kernel object (e.g. an inode). This is used as + * a key for a ruleset tree (cf. struct landlock_ruleset_elem). This + * pointer is set once and never modified. 
It may point to a deleted + * object and should then be dereferenced with great care, thanks to a + * call to landlock_rule_is_disabled(@rule) from inside an RCU-read + * block, cf. landlock_put_rule(). + */ + struct landlock_object __rcu *object; + /* + * @rule: Ties a rule to an object. Set once with an allocated rule, + * but can be NULLed if the rule is disabled. + */ + struct landlock_rule __rcu *rule; +}; + +/* + * Red-black tree element used in a landlock_ruleset. + */ +struct landlock_ruleset_elem { + struct landlock_ref ref; + struct rb_node node; + struct rcu_head rcu_free; +}; + +/* + * Enable hierarchy identification even when a parent domain vanishes. This is + * needed for the ptrace protection. + */ +struct landlock_hierarchy { + struct landlock_hierarchy *parent; + refcount_t usage; +}; + +/* + * Kernel representation of a ruleset. This data structure must contains + * unique entries, be updatable, and quick to match an object. + */ +struct landlock_ruleset { + /* + * @fs_access_mask: Contains the subset of filesystem actions which are + * restricted by a ruleset. This is used when merging rulesets and for + * userspace backward compatibility (i.e. future-proof). Set once and + * never changed for the lifetime of the ruleset. + */ + u32 fs_access_mask; + struct landlock_hierarchy *hierarchy; + refcount_t usage; + union { + struct rcu_head rcu_free; + struct work_struct work_put; + }; + struct mutex lock; + atomic_t nb_rules; + /* + * @root: Red-black tree containing landlock_ruleset_elem nodes. + */ + struct rb_root root; +}; + +struct landlock_ruleset *landlock_create_ruleset(u64 fs_access_mask); + +void landlock_get_ruleset(struct landlock_ruleset *ruleset); +void landlock_put_ruleset(struct landlock_ruleset *ruleset); +void landlock_put_ruleset_enqueue(struct landlock_ruleset *ruleset); + +int landlock_insert_ruleset_rule(struct landlock_ruleset *ruleset, + struct landlock_object *object, struct landlock_access *access, + struct landlock_rule *rule); + +struct landlock_ruleset *landlock_merge_ruleset( + struct landlock_ruleset *domain, + struct landlock_ruleset *ruleset); + +const struct landlock_access *landlock_find_access( + const struct landlock_ruleset *ruleset, + const struct landlock_object *object); + +#endif /* _SECURITY_LANDLOCK_RULESET_H */ From patchwork Mon Feb 24 16:02:09 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Micka=C3=ABl_Sala=C3=BCn?= X-Patchwork-Id: 208755 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-9.8 required=3.0 tests=HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id E7922C11D2F for ; Mon, 24 Feb 2020 16:13:00 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id B495F20836 for ; Mon, 24 Feb 2020 16:13:00 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727794AbgBXQNA (ORCPT ); Mon, 24 Feb 2020 11:13:00 -0500 Received: from smtp-bc08.mail.infomaniak.ch ([45.157.188.8]:58111 "EHLO smtp-bc08.mail.infomaniak.ch" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727668AbgBXQNA (ORCPT ); Mon, 24 Feb 2020 
11:13:00 -0500 Received: from smtp-2-0000.mail.infomaniak.ch (unknown [10.5.36.107]) by smtp-3-3000.mail.infomaniak.ch (Postfix) with ESMTPS id 8BE35100389F6; Mon, 24 Feb 2020 17:02:28 +0100 (CET) Received: from localhost (unknown [94.23.54.103]) by smtp-2-0000.mail.infomaniak.ch (Postfix) with ESMTPA id 48R6Jz702wzlp8Gn; Mon, 24 Feb 2020 17:02:27 +0100 (CET) From: =?utf-8?q?Micka=C3=ABl_Sala=C3=BCn?= To: linux-kernel@vger.kernel.org Cc: =?utf-8?q?Micka=C3=ABl_Sala=C3=BCn?= , Al Viro , Andy Lutomirski , Arnd Bergmann , Casey Schaufler , Greg Kroah-Hartman , James Morris , Jann Horn , Jonathan Corbet , Kees Cook , Michael Kerrisk , =?utf-8?q?Micka=C3=ABl_Sala?= =?utf-8?b?w7xu?= , "Serge E . Hallyn" , Shuah Khan , Vincent Dagonneau , kernel-hardening@lists.openwall.com, linux-api@vger.kernel.org, linux-arch@vger.kernel.org, linux-doc@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-kselftest@vger.kernel.org, linux-security-module@vger.kernel.org, x86@kernel.org Subject: [RFC PATCH v14 04/10] landlock: Add ptrace restrictions Date: Mon, 24 Feb 2020 17:02:09 +0100 Message-Id: <20200224160215.4136-5-mic@digikod.net> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20200224160215.4136-1-mic@digikod.net> References: <20200224160215.4136-1-mic@digikod.net> MIME-Version: 1.0 X-Antivirus: Dr.Web (R) for Unix mail servers drweb plugin ver.6.0.2.8 X-Antivirus-Code: 0x100000 Sender: linux-kselftest-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kselftest@vger.kernel.org Using ptrace(2) and related debug features on a target process can lead to a privilege escalation. Indeed, ptrace(2) can be used by an attacker to impersonate another task and to remain undetected while performing malicious activities. Thanks to ptrace_may_access(), various part of the kernel can check if a tracer is more privileged than a tracee. A landlocked process has fewer privileges than a non-landlocked process and must then be subject to additional restrictions when manipulating processes. To be allowed to use ptrace(2) and related syscalls on a target process, a landlocked process must have a subset of the target process' rules (i.e. the tracee must be in a sub-domain of the tracer). Signed-off-by: Mickaël Salaün Cc: Andy Lutomirski Cc: James Morris Cc: Kees Cook Cc: Serge E. Hallyn --- Changes since v13: * Make the ptrace restriction mandatory, like in the v10. * Remove the eBPF dependency. 
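The sub-domain ordering described above can be sketched with a small standalone program. This is only an illustration of the relation that domain_scope_le() (added below) implements with struct landlock_hierarchy; the type and names here are made up for the example and use plain userspace C instead of kernel types.

	#include <stdbool.h>
	#include <stdio.h>

	/* Simplified stand-in for struct landlock_hierarchy. */
	struct domain {
		const struct domain *parent;
	};

	/* Returns true if @tracer's domain covers (is an ancestor of or equal
	 * to) @tracee's domain. */
	static bool scope_le(const struct domain *tracer,
			const struct domain *tracee)
	{
		const struct domain *walker;

		if (!tracer)
			return true;	/* non-sandboxed tracer: not restricted here */
		if (!tracee)
			return false;	/* sandboxed tracer cannot reach outside its domain */
		for (walker = tracee; walker; walker = walker->parent)
			if (walker == tracer)
				return true;
		return false;
	}

	int main(void)
	{
		const struct domain parent_dom = { .parent = NULL };
		const struct domain child_dom = { .parent = &parent_dom };

		/* A domain may trace its sub-domains... */
		printf("parent traces child: %d\n",
				scope_le(&parent_dom, &child_dom));
		/* ...but a sub-domain may not trace its ancestors. */
		printf("child traces parent: %d\n",
				scope_le(&child_dom, &parent_dom));
		return 0;
	}

In other words, a process that sandboxes itself further can still be traced by its less restricted parent domain, but it loses the ability to trace anything that is not at least as restricted as itself.
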
Previous version: https://lore.kernel.org/lkml/20191104172146.30797-5-mic@digikod.net/ --- security/landlock/Makefile | 2 +- security/landlock/ptrace.c | 118 +++++++++++++++++++++++++++++++++++++ security/landlock/ptrace.h | 14 +++++ security/landlock/setup.c | 2 + 4 files changed, 135 insertions(+), 1 deletion(-) create mode 100644 security/landlock/ptrace.c create mode 100644 security/landlock/ptrace.h diff --git a/security/landlock/Makefile b/security/landlock/Makefile index 041ea242e627..f1d1eb72fa76 100644 --- a/security/landlock/Makefile +++ b/security/landlock/Makefile @@ -1,4 +1,4 @@ obj-$(CONFIG_SECURITY_LANDLOCK) := landlock.o landlock-y := setup.o object.o ruleset.o \ - cred.o + cred.o ptrace.o diff --git a/security/landlock/ptrace.c b/security/landlock/ptrace.c new file mode 100644 index 000000000000..6c7326788c46 --- /dev/null +++ b/security/landlock/ptrace.c @@ -0,0 +1,118 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Landlock LSM - Ptrace hooks + * + * Copyright © 2017-2020 Mickaël Salaün + * Copyright © 2020 ANSSI + */ + +#include +#include +#include +#include +#include +#include +#include + +#include "cred.h" +#include "ptrace.h" +#include "ruleset.h" +#include "setup.h" + +/** + * domain_scope_le - Checks domain ordering for scoped ptrace + * + * @parent: Parent domain. + * @child: Potential child of @parent. + * + * Checks if the @parent domain is less or equal to (i.e. an ancestor, which + * means a subset of) the @child domain. + */ +static bool domain_scope_le(const struct landlock_ruleset *parent, + const struct landlock_ruleset *child) +{ + const struct landlock_hierarchy *walker; + + if (!parent) + return true; + if (!child) + return false; + for (walker = child->hierarchy; walker; walker = walker->parent) { + if (walker == parent->hierarchy) + /* @parent is in the scoped hierarchy of @child. */ + return true; + } + /* There is no relationship between @parent and @child. */ + return false; +} + +static bool task_is_scoped(struct task_struct *parent, + struct task_struct *child) +{ + bool is_scoped; + const struct landlock_ruleset *dom_parent, *dom_child; + + rcu_read_lock(); + dom_parent = landlock_get_task_domain(parent); + dom_child = landlock_get_task_domain(child); + is_scoped = domain_scope_le(dom_parent, dom_child); + rcu_read_unlock(); + return is_scoped; +} + +static int task_ptrace(struct task_struct *parent, struct task_struct *child) +{ + /* Quick return for non-landlocked tasks. */ + if (!landlocked(parent)) + return 0; + if (task_is_scoped(parent, child)) + return 0; + return -EPERM; +} + +/** + * hook_ptrace_access_check - Determines whether the current process may access + * another + * + * @child: Process to be accessed. + * @mode: Mode of attachment. + * + * If the current task has Landlock rules, then the child must have at least + * the same rules. Else denied. + * + * Determines whether a process may access another, returning 0 if permission + * granted, -errno if denied. + */ +static int hook_ptrace_access_check(struct task_struct *child, + unsigned int mode) +{ + return task_ptrace(current, child); +} + +/** + * hook_ptrace_traceme - Determines whether another process may trace the + * current one + * + * @parent: Task proposed to be the tracer. + * + * If the parent has Landlock rules, then the current task must have the same + * or more rules. Else denied. + * + * Determines whether the nominated task is permitted to trace the current + * process, returning 0 if permission is granted, -errno if denied. 
+ */ +static int hook_ptrace_traceme(struct task_struct *parent) +{ + return task_ptrace(parent, current); +} + +static struct security_hook_list landlock_hooks[] __lsm_ro_after_init = { + LSM_HOOK_INIT(ptrace_access_check, hook_ptrace_access_check), + LSM_HOOK_INIT(ptrace_traceme, hook_ptrace_traceme), +}; + +__init void landlock_add_hooks_ptrace(void) +{ + security_add_hooks(landlock_hooks, ARRAY_SIZE(landlock_hooks), + LANDLOCK_NAME); +} diff --git a/security/landlock/ptrace.h b/security/landlock/ptrace.h new file mode 100644 index 000000000000..6740c6a723de --- /dev/null +++ b/security/landlock/ptrace.h @@ -0,0 +1,14 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Landlock LSM - Ptrace hooks + * + * Copyright © 2017-2019 Mickaël Salaün + * Copyright © 2019 ANSSI + */ + +#ifndef _SECURITY_LANDLOCK_PTRACE_H +#define _SECURITY_LANDLOCK_PTRACE_H + +__init void landlock_add_hooks_ptrace(void); + +#endif /* _SECURITY_LANDLOCK_PTRACE_H */ diff --git a/security/landlock/setup.c b/security/landlock/setup.c index fca5fa185465..117afb344da6 100644 --- a/security/landlock/setup.c +++ b/security/landlock/setup.c @@ -10,6 +10,7 @@ #include #include "cred.h" +#include "ptrace.h" #include "setup.h" struct lsm_blob_sizes landlock_blob_sizes __lsm_ro_after_init = { @@ -20,6 +21,7 @@ static int __init landlock_init(void) { pr_info(LANDLOCK_NAME ": Registering hooks\n"); landlock_add_hooks_cred(); + landlock_add_hooks_ptrace(); return 0; } From patchwork Mon Feb 24 16:02:13 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Micka=C3=ABl_Sala=C3=BCn?= X-Patchwork-Id: 208758 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-9.8 required=3.0 tests=HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4EF54C11D35 for ; Mon, 24 Feb 2020 16:10:21 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 246FE20836 for ; Mon, 24 Feb 2020 16:10:21 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727757AbgBXQKU (ORCPT ); Mon, 24 Feb 2020 11:10:20 -0500 Received: from smtp-42ad.mail.infomaniak.ch ([84.16.66.173]:47225 "EHLO smtp-42ad.mail.infomaniak.ch" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727763AbgBXQKT (ORCPT ); Mon, 24 Feb 2020 11:10:19 -0500 Received: from smtp-2-0001.mail.infomaniak.ch (unknown [10.5.36.108]) by smtp-2-3000.mail.infomaniak.ch (Postfix) with ESMTPS id ED8F41003709C; Mon, 24 Feb 2020 17:02:33 +0100 (CET) Received: from localhost (unknown [94.23.54.103]) by smtp-2-0001.mail.infomaniak.ch (Postfix) with ESMTPA id 48R6K53HsjzlnBxx; Mon, 24 Feb 2020 17:02:33 +0100 (CET) From: =?utf-8?q?Micka=C3=ABl_Sala=C3=BCn?= To: linux-kernel@vger.kernel.org Cc: =?utf-8?q?Micka=C3=ABl_Sala=C3=BCn?= , Al Viro , Andy Lutomirski , Arnd Bergmann , Casey Schaufler , Greg Kroah-Hartman , James Morris , Jann Horn , Jonathan Corbet , Kees Cook , Michael Kerrisk , =?utf-8?q?Micka=C3=ABl_Sala?= =?utf-8?b?w7xu?= , "Serge E . 
Hallyn" , Shuah Khan , Vincent Dagonneau , kernel-hardening@lists.openwall.com, linux-api@vger.kernel.org, linux-arch@vger.kernel.org, linux-doc@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-kselftest@vger.kernel.org, linux-security-module@vger.kernel.org, x86@kernel.org Subject: [RFC PATCH v14 08/10] selftests/landlock: Add initial tests Date: Mon, 24 Feb 2020 17:02:13 +0100 Message-Id: <20200224160215.4136-9-mic@digikod.net> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20200224160215.4136-1-mic@digikod.net> References: <20200224160215.4136-1-mic@digikod.net> MIME-Version: 1.0 X-Antivirus: Dr.Web (R) for Unix mail servers drweb plugin ver.6.0.2.8 X-Antivirus-Code: 0x100000 Sender: linux-kselftest-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kselftest@vger.kernel.org Test landlock syscall, ptrace hooks semantic and filesystem access-control. This is an initial batch, more tests will follow. Signed-off-by: Mickaël Salaün Reviewed-by: Vincent Dagonneau Cc: Andy Lutomirski Cc: James Morris Cc: Kees Cook Cc: Serge E. Hallyn Cc: Shuah Khan --- Changes since v13: * Add back the filesystem tests (from v10) and extend them. * Add tests for the new syscall. Previous version: https://lore.kernel.org/lkml/20191104172146.30797-7-mic@digikod.net/ --- tools/testing/selftests/Makefile | 1 + tools/testing/selftests/landlock/.gitignore | 3 + tools/testing/selftests/landlock/Makefile | 13 + tools/testing/selftests/landlock/config | 4 + tools/testing/selftests/landlock/test.h | 40 ++ tools/testing/selftests/landlock/test_base.c | 80 +++ tools/testing/selftests/landlock/test_fs.c | 624 ++++++++++++++++++ .../testing/selftests/landlock/test_ptrace.c | 293 ++++++++ 8 files changed, 1058 insertions(+) create mode 100644 tools/testing/selftests/landlock/.gitignore create mode 100644 tools/testing/selftests/landlock/Makefile create mode 100644 tools/testing/selftests/landlock/config create mode 100644 tools/testing/selftests/landlock/test.h create mode 100644 tools/testing/selftests/landlock/test_base.c create mode 100644 tools/testing/selftests/landlock/test_fs.c create mode 100644 tools/testing/selftests/landlock/test_ptrace.c diff --git a/tools/testing/selftests/Makefile b/tools/testing/selftests/Makefile index 6ec503912bea..5183f269c244 100644 --- a/tools/testing/selftests/Makefile +++ b/tools/testing/selftests/Makefile @@ -24,6 +24,7 @@ TARGETS += ir TARGETS += kcmp TARGETS += kexec TARGETS += kvm +TARGETS += landlock TARGETS += lib TARGETS += livepatch TARGETS += lkdtm diff --git a/tools/testing/selftests/landlock/.gitignore b/tools/testing/selftests/landlock/.gitignore new file mode 100644 index 000000000000..4ee53c733af0 --- /dev/null +++ b/tools/testing/selftests/landlock/.gitignore @@ -0,0 +1,3 @@ +/test_base +/test_fs +/test_ptrace diff --git a/tools/testing/selftests/landlock/Makefile b/tools/testing/selftests/landlock/Makefile new file mode 100644 index 000000000000..c7e26e1251c4 --- /dev/null +++ b/tools/testing/selftests/landlock/Makefile @@ -0,0 +1,13 @@ +# SPDX-License-Identifier: GPL-2.0 + +test_src := $(wildcard test_*.c) + +TEST_GEN_PROGS := $(test_src:.c=) + +usr_include := ../../../../usr/include + +CFLAGS += -Wall -O2 -I$(usr_include) + +include ../lib.mk + +$(TEST_GEN_PROGS): ../kselftest_harness.h $(usr_include)/linux/landlock.h diff --git a/tools/testing/selftests/landlock/config b/tools/testing/selftests/landlock/config new file mode 100644 index 000000000000..662f72c5a0df --- /dev/null +++ b/tools/testing/selftests/landlock/config @@ -0,0 +1,4 @@ 
+CONFIG_HEADERS_INSTALL=y +CONFIG_SECURITY_LANDLOCK=y +CONFIG_SECURITY_PATH=y +CONFIG_SECURITY=y diff --git a/tools/testing/selftests/landlock/test.h b/tools/testing/selftests/landlock/test.h new file mode 100644 index 000000000000..f9cebd8fc169 --- /dev/null +++ b/tools/testing/selftests/landlock/test.h @@ -0,0 +1,40 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Landlock test helpers + * + * Copyright © 2017-2020 Mickaël Salaün + * Copyright © 2019-2020 ANSSI + */ + +#include +#include + +#include "../kselftest_harness.h" + +#ifndef landlock +static inline int landlock(unsigned int command, unsigned int options, + size_t attr_size, void *attr_ptr) +{ + errno = 0; + return syscall(__NR_landlock, command, options, attr_size, attr_ptr, 0, + NULL); +} +#endif + +FIXTURE(ruleset_rw) { + struct landlock_attr_ruleset attr_ruleset; + int ruleset_fd; +}; + +FIXTURE_SETUP(ruleset_rw) { + self->attr_ruleset.handled_access_fs = LANDLOCK_ACCESS_FS_READ | + LANDLOCK_ACCESS_FS_WRITE; + self->ruleset_fd = landlock(LANDLOCK_CMD_CREATE_RULESET, + LANDLOCK_OPT_CREATE_RULESET, + sizeof(self->attr_ruleset), &self->attr_ruleset); + ASSERT_LE(0, self->ruleset_fd); +} + +FIXTURE_TEARDOWN(ruleset_rw) { + ASSERT_EQ(0, close(self->ruleset_fd)); +} diff --git a/tools/testing/selftests/landlock/test_base.c b/tools/testing/selftests/landlock/test_base.c new file mode 100644 index 000000000000..1ac7dbead3b2 --- /dev/null +++ b/tools/testing/selftests/landlock/test_base.c @@ -0,0 +1,80 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Landlock tests - common resources + * + * Copyright © 2017-2020 Mickaël Salaün + * Copyright © 2019-2020 ANSSI + */ + +#define _GNU_SOURCE +#include +#include +#include +#include + +#include "test.h" + +#define FDINFO_TEMPLATE "/proc/self/fdinfo/%d" +#define FDINFO_SIZE 128 + +#ifndef O_PATH +#define O_PATH 010000000 +#endif + +TEST_F(ruleset_rw, fdinfo) +{ + int fdinfo_fd, fdinfo_path_size, fdinfo_buf_size; + char fdinfo_path[sizeof(FDINFO_TEMPLATE) + 2]; + char fdinfo_buf[FDINFO_SIZE]; + + fdinfo_path_size = snprintf(fdinfo_path, sizeof(fdinfo_path), + FDINFO_TEMPLATE, self->ruleset_fd); + ASSERT_LE(fdinfo_path_size, sizeof(fdinfo_path)); + + fdinfo_fd = open(fdinfo_path, O_RDONLY | O_CLOEXEC); + ASSERT_GE(fdinfo_fd, 0); + + fdinfo_buf_size = read(fdinfo_fd, fdinfo_buf, sizeof(fdinfo_buf)); + ASSERT_LE(fdinfo_buf_size, sizeof(fdinfo_buf) - 1); + + /* + * fdinfo_buf: pos: 0 + * flags: 02000002 + * mnt_id: 13 + * handled_access_fs: 804000 + */ + EXPECT_EQ(0, close(fdinfo_fd)); +} + +TEST(features) +{ + struct landlock_attr_features attr_features = { + .options_get_features = ~0ULL, + .options_create_ruleset = ~0ULL, + .options_add_rule = ~0ULL, + .options_enforce_ruleset = ~0ULL, + .access_fs = ~0ULL, + .size_attr_ruleset = ~0ULL, + .size_attr_path_beneath = ~0ULL, + }; + + ASSERT_EQ(0, landlock(LANDLOCK_CMD_GET_FEATURES, + LANDLOCK_OPT_CREATE_RULESET, + sizeof(attr_features), &attr_features)); + ASSERT_EQ(((LANDLOCK_OPT_GET_FEATURES << 1) - 1), + attr_features.options_get_features); + ASSERT_EQ(((LANDLOCK_OPT_CREATE_RULESET << 1) - 1), + attr_features.options_create_ruleset); + ASSERT_EQ(((LANDLOCK_OPT_ADD_RULE_PATH_BENEATH << 1) - 1), + attr_features.options_add_rule); + ASSERT_EQ(((LANDLOCK_OPT_ENFORCE_RULESET << 1) - 1), + attr_features.options_enforce_ruleset); + ASSERT_EQ(((LANDLOCK_ACCESS_FS_MAP << 1) - 1), + attr_features.access_fs); + ASSERT_EQ(sizeof(struct landlock_attr_ruleset), + attr_features.size_attr_ruleset); + ASSERT_EQ(sizeof(struct landlock_attr_path_beneath), + 
attr_features.size_attr_path_beneath); +} + +TEST_HARNESS_MAIN diff --git a/tools/testing/selftests/landlock/test_fs.c b/tools/testing/selftests/landlock/test_fs.c new file mode 100644 index 000000000000..627cb3a71f89 --- /dev/null +++ b/tools/testing/selftests/landlock/test_fs.c @@ -0,0 +1,624 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Landlock tests - filesystem + * + * Copyright © 2017-2020 Mickaël Salaün + * Copyright © 2020 ANSSI + */ + +#define _GNU_SOURCE +#include +#include +#include +#include +#include +#include +#include + +#include "test.h" + +#define TMP_PREFIX "tmp_" + +/* Paths (sibling number and depth) */ +const char dir_s0_d1[] = TMP_PREFIX "a0"; +const char dir_s0_d2[] = TMP_PREFIX "a0/b0"; +const char dir_s0_d3[] = TMP_PREFIX "a0/b0/c0"; +const char dir_s1_d1[] = TMP_PREFIX "a1"; +const char dir_s2_d1[] = TMP_PREFIX "a2"; +const char dir_s2_d2[] = TMP_PREFIX "a2/b2"; + +/* dir_s3_d1 is a tmpfs mount. */ +const char dir_s3_d1[] = TMP_PREFIX "a3"; +const char dir_s3_d2[] = TMP_PREFIX "a3/b3"; + +/* dir_s4_d2 is a tmpfs mount. */ +const char dir_s4_d1[] = TMP_PREFIX "a4"; +const char dir_s4_d2[] = TMP_PREFIX "a4/b4"; + +static void cleanup_layout1(void) +{ + rmdir(dir_s2_d2); + rmdir(dir_s2_d1); + rmdir(dir_s1_d1); + rmdir(dir_s0_d3); + rmdir(dir_s0_d2); + rmdir(dir_s0_d1); + + /* dir_s3_d2 may be bind mounted */ + umount(dir_s3_d2); + rmdir(dir_s3_d2); + umount(dir_s3_d1); + rmdir(dir_s3_d1); + + umount(dir_s4_d2); + rmdir(dir_s4_d2); + rmdir(dir_s4_d1); +} + +FIXTURE(layout1) { +}; + +FIXTURE_SETUP(layout1) +{ + cleanup_layout1(); + + /* Do not pollute the rest of the system. */ + ASSERT_NE(-1, unshare(CLONE_NEWNS)); + + ASSERT_EQ(0, mkdir(dir_s0_d1, 0600)); + ASSERT_EQ(0, mkdir(dir_s0_d2, 0600)); + ASSERT_EQ(0, mkdir(dir_s0_d3, 0600)); + ASSERT_EQ(0, mkdir(dir_s1_d1, 0600)); + ASSERT_EQ(0, mkdir(dir_s2_d1, 0600)); + ASSERT_EQ(0, mkdir(dir_s2_d2, 0600)); + + ASSERT_EQ(0, mkdir(dir_s3_d1, 0600)); + ASSERT_EQ(0, mount("tmp", dir_s3_d1, "tmpfs", 0, NULL)); + ASSERT_EQ(0, mkdir(dir_s3_d2, 0600)); + + ASSERT_EQ(0, mkdir(dir_s4_d1, 0600)); + ASSERT_EQ(0, mkdir(dir_s4_d2, 0600)); + ASSERT_EQ(0, mount("tmp", dir_s4_d2, "tmpfs", 0, NULL)); +} + +FIXTURE_TEARDOWN(layout1) +{ + /* + * cleanup_layout1() would be denied here, use TEST(cleanup) instead. 
+ */ +} + +static void test_path_rel(struct __test_metadata *_metadata, int dirfd, + const char *path, int ret) +{ + int fd; + struct stat statbuf; + + /* faccessat() can not be restricted for now */ + ASSERT_EQ(ret, fstatat(dirfd, path, &statbuf, 0)) { + TH_LOG("fstatat path \"%s\" returned %s\n", path, + strerror(errno)); + } + if (ret) { + ASSERT_EQ(EACCES, errno); + } + fd = openat(dirfd, path, O_DIRECTORY); + if (ret) { + ASSERT_EQ(-1, fd); + ASSERT_EQ(EACCES, errno); + } else { + ASSERT_NE(-1, fd); + EXPECT_EQ(0, close(fd)); + } +} + +static void test_path(struct __test_metadata *_metadata, const char *path, + int ret) +{ + return test_path_rel(_metadata, AT_FDCWD, path, ret); +} + +TEST_F(layout1, no_restriction) +{ + test_path(_metadata, dir_s0_d1, 0); + test_path(_metadata, dir_s0_d2, 0); + test_path(_metadata, dir_s0_d3, 0); + test_path(_metadata, dir_s1_d1, 0); + test_path(_metadata, dir_s2_d2, 0); +} + +TEST_F(ruleset_rw, inval) +{ + int err; + struct landlock_attr_path_beneath path_beneath = { + .allowed_access = LANDLOCK_ACCESS_FS_READ | + LANDLOCK_ACCESS_FS_WRITE, + .parent_fd = -1, + }; + struct landlock_attr_enforce attr_enforce; + + path_beneath.ruleset_fd = self->ruleset_fd; + path_beneath.parent_fd = open(dir_s0_d2, O_PATH | O_NOFOLLOW | + O_DIRECTORY | O_CLOEXEC); + ASSERT_GE(path_beneath.parent_fd, 0); + err = landlock(LANDLOCK_CMD_ADD_RULE, + LANDLOCK_OPT_ADD_RULE_PATH_BENEATH, + sizeof(path_beneath), &path_beneath); + ASSERT_EQ(errno, 0); + ASSERT_EQ(err, 0); + ASSERT_EQ(0, close(path_beneath.parent_fd)); + + /* Tests without O_PATH. */ + path_beneath.parent_fd = open(dir_s0_d2, O_NOFOLLOW | O_DIRECTORY | + O_CLOEXEC); + ASSERT_GE(path_beneath.parent_fd, 0); + err = landlock(LANDLOCK_CMD_ADD_RULE, + LANDLOCK_OPT_ADD_RULE_PATH_BENEATH, + sizeof(path_beneath), &path_beneath); + ASSERT_EQ(err, -1); + ASSERT_EQ(errno, EBADR); + errno = 0; + ASSERT_EQ(0, close(path_beneath.parent_fd)); + + /* Checks un-handled access. 
*/ + path_beneath.parent_fd = open(dir_s0_d2, O_PATH | O_NOFOLLOW | + O_DIRECTORY | O_CLOEXEC); + ASSERT_GE(path_beneath.parent_fd, 0); + path_beneath.allowed_access |= LANDLOCK_ACCESS_FS_EXECUTE; + err = landlock(LANDLOCK_CMD_ADD_RULE, + LANDLOCK_OPT_ADD_RULE_PATH_BENEATH, + sizeof(path_beneath), &path_beneath); + ASSERT_EQ(errno, EINVAL); + errno = 0; + ASSERT_EQ(err, -1); + ASSERT_EQ(0, close(path_beneath.parent_fd)); + + err = prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0); + ASSERT_EQ(errno, 0); + ASSERT_EQ(err, 0); + + attr_enforce.ruleset_fd = self->ruleset_fd; + err = landlock(LANDLOCK_CMD_ENFORCE_RULESET, + LANDLOCK_OPT_ENFORCE_RULESET, sizeof(attr_enforce), + &attr_enforce); + ASSERT_EQ(errno, 0); + ASSERT_EQ(err, 0); +} + +TEST_F(ruleset_rw, nsfs) +{ + struct landlock_attr_path_beneath path_beneath = { + .allowed_access = LANDLOCK_ACCESS_FS_READ | + LANDLOCK_ACCESS_FS_WRITE, + .ruleset_fd = self->ruleset_fd, + }; + int err; + + path_beneath.parent_fd = open("/proc/self/ns/mnt", O_PATH | O_NOFOLLOW | + O_CLOEXEC); + ASSERT_GE(path_beneath.parent_fd, 0); + err = landlock(LANDLOCK_CMD_ADD_RULE, + LANDLOCK_OPT_ADD_RULE_PATH_BENEATH, + sizeof(path_beneath), &path_beneath); + ASSERT_EQ(errno, 0); + ASSERT_EQ(err, 0); + ASSERT_EQ(0, close(path_beneath.parent_fd)); +} + +static void add_path_beneath(struct __test_metadata *_metadata, int ruleset_fd, + __u64 allowed_access, const char *path) +{ + int err; + struct landlock_attr_path_beneath path_beneath = { + .ruleset_fd = ruleset_fd, + .allowed_access = allowed_access, + }; + + path_beneath.parent_fd = open(path, O_PATH | O_NOFOLLOW | O_DIRECTORY | + O_CLOEXEC); + ASSERT_GE(path_beneath.parent_fd, 0) { + TH_LOG("Failed to open directory \"%s\": %s\n", path, + strerror(errno)); + } + err = landlock(LANDLOCK_CMD_ADD_RULE, + LANDLOCK_OPT_ADD_RULE_PATH_BENEATH, + sizeof(path_beneath), &path_beneath); + ASSERT_EQ(err, 0) { + TH_LOG("Failed to update the ruleset with \"%s\": %s\n", + path, strerror(errno)); + } + ASSERT_EQ(errno, 0); + ASSERT_EQ(0, close(path_beneath.parent_fd)); +} + +static int create_ruleset(struct __test_metadata *_metadata, + const char *const dirs[]) +{ + int ruleset_fd, dirs_len, i; + struct landlock_attr_features attr_features; + struct landlock_attr_ruleset attr_ruleset = { + .handled_access_fs = + LANDLOCK_ACCESS_FS_OPEN | + LANDLOCK_ACCESS_FS_READ | + LANDLOCK_ACCESS_FS_WRITE | + LANDLOCK_ACCESS_FS_EXECUTE | + LANDLOCK_ACCESS_FS_GETATTR + }; + __u64 allowed_access = + LANDLOCK_ACCESS_FS_OPEN | + LANDLOCK_ACCESS_FS_READ | + LANDLOCK_ACCESS_FS_GETATTR; + + ASSERT_NE(NULL, dirs) { + TH_LOG("No directory list\n"); + } + ASSERT_NE(NULL, dirs[0]) { + TH_LOG("Empty directory list\n"); + } + /* Gets the number of dir entries. */ + for (dirs_len = 0; dirs[dirs_len]; dirs_len++); + + ASSERT_EQ(0, landlock(LANDLOCK_CMD_GET_FEATURES, + LANDLOCK_OPT_GET_FEATURES, + sizeof(attr_features), &attr_features)); + /* Only for test, use a binary AND for real application instead. 
*/ + ASSERT_EQ(attr_ruleset.handled_access_fs, + attr_ruleset.handled_access_fs & + attr_features.access_fs); + ASSERT_EQ(allowed_access, allowed_access & attr_features.access_fs); + ruleset_fd = landlock(LANDLOCK_CMD_CREATE_RULESET, + LANDLOCK_OPT_CREATE_RULESET, sizeof(attr_ruleset), + &attr_ruleset); + ASSERT_GE(ruleset_fd, 0) { + TH_LOG("Failed to create a ruleset: %s\n", strerror(errno)); + } + + for (i = 0; dirs[i]; i++) { + add_path_beneath(_metadata, ruleset_fd, allowed_access, + dirs[i]); + } + return ruleset_fd; +} + +static void enforce_ruleset(struct __test_metadata *_metadata, int ruleset_fd) +{ + struct landlock_attr_enforce attr_enforce = { + .ruleset_fd = ruleset_fd, + }; + int err; + + err = prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0); + ASSERT_EQ(errno, 0); + ASSERT_EQ(err, 0); + + err = landlock(LANDLOCK_CMD_ENFORCE_RULESET, + LANDLOCK_OPT_ENFORCE_RULESET, sizeof(attr_enforce), + &attr_enforce); + ASSERT_EQ(err, 0) { + TH_LOG("Failed to enforce ruleset: %s\n", strerror(errno)); + } + ASSERT_EQ(errno, 0); +} + +TEST_F(layout1, whitelist) +{ + int ruleset_fd = create_ruleset(_metadata, + (const char *const []){ dir_s0_d2, dir_s1_d1, NULL }); + ASSERT_NE(-1, ruleset_fd); + enforce_ruleset(_metadata, ruleset_fd); + EXPECT_EQ(0, close(ruleset_fd)); + + test_path(_metadata, "/", -1); + test_path(_metadata, dir_s0_d1, -1); + test_path(_metadata, dir_s0_d2, 0); + test_path(_metadata, dir_s0_d3, 0); +} + +TEST_F(layout1, unhandled_access) +{ + int ruleset_fd = create_ruleset(_metadata, + (const char *const []){ dir_s0_d2, NULL }); + ASSERT_NE(-1, ruleset_fd); + enforce_ruleset(_metadata, ruleset_fd); + EXPECT_EQ(0, close(ruleset_fd)); + + /* + * Because the policy does not handled LANDLOCK_ACCESS_FS_CHROOT, + * chroot(2) should be allowed. + */ + ASSERT_EQ(0, chroot(dir_s0_d1)); + ASSERT_EQ(0, chroot(dir_s0_d2)); + ASSERT_EQ(0, chroot(dir_s0_d3)); +} + +TEST_F(layout1, ruleset_overlap) +{ + struct stat statbuf; + int open_fd; + int ruleset_fd = create_ruleset(_metadata, + (const char *const []){ dir_s1_d1, NULL }); + ASSERT_NE(-1, ruleset_fd); + /* These rules should be ORed among them. 
*/ + add_path_beneath(_metadata, ruleset_fd, + LANDLOCK_ACCESS_FS_GETATTR, dir_s0_d2); + add_path_beneath(_metadata, ruleset_fd, + LANDLOCK_ACCESS_FS_OPEN, dir_s0_d2); + enforce_ruleset(_metadata, ruleset_fd); + EXPECT_EQ(0, close(ruleset_fd)); + + ASSERT_EQ(-1, fstatat(AT_FDCWD, dir_s0_d1, &statbuf, 0)); + ASSERT_EQ(-1, openat(AT_FDCWD, dir_s0_d1, O_DIRECTORY)); + ASSERT_EQ(0, fstatat(AT_FDCWD, dir_s0_d2, &statbuf, 0)); + open_fd = openat(AT_FDCWD, dir_s0_d2, O_DIRECTORY); + ASSERT_LE(0, open_fd); + EXPECT_EQ(0, close(open_fd)); + ASSERT_EQ(0, fstatat(AT_FDCWD, dir_s0_d3, &statbuf, 0)); + open_fd = openat(AT_FDCWD, dir_s0_d3, O_DIRECTORY); + ASSERT_LE(0, open_fd); + EXPECT_EQ(0, close(open_fd)); +} + +TEST_F(layout1, inherit_superset) +{ + struct stat statbuf; + int ruleset_fd, open_fd; + + ruleset_fd = create_ruleset(_metadata, + (const char *const []){ dir_s1_d1, NULL }); + ASSERT_NE(-1, ruleset_fd); + add_path_beneath(_metadata, ruleset_fd, + LANDLOCK_ACCESS_FS_OPEN, dir_s0_d2); + enforce_ruleset(_metadata, ruleset_fd); + + ASSERT_EQ(-1, fstatat(AT_FDCWD, dir_s0_d1, &statbuf, 0)); + ASSERT_EQ(-1, openat(AT_FDCWD, dir_s0_d1, O_DIRECTORY)); + + ASSERT_EQ(-1, fstatat(AT_FDCWD, dir_s0_d2, &statbuf, 0)); + open_fd = openat(AT_FDCWD, dir_s0_d2, O_DIRECTORY); + ASSERT_NE(-1, open_fd); + ASSERT_EQ(0, close(open_fd)); + + ASSERT_EQ(-1, fstatat(AT_FDCWD, dir_s0_d3, &statbuf, 0)); + open_fd = openat(AT_FDCWD, dir_s0_d3, O_DIRECTORY); + ASSERT_NE(-1, open_fd); + ASSERT_EQ(0, close(open_fd)); + + /* + * Test shared rule extension: the following rules should not grant any + * new access, only remove some. Once enforced, these rules are ANDed + * with the previous ones. + */ + add_path_beneath(_metadata, ruleset_fd, LANDLOCK_ACCESS_FS_GETATTR, + dir_s0_d2); + /* + * In ruleset_fd, dir_s0_d2 should now have the LANDLOCK_ACCESS_FS_OPEN + * and LANDLOCK_ACCESS_FS_GETATTR access rights (even if this directory + * is opened a second time). However, when enforcing this updated + * ruleset, the ruleset tied to the current process will still only + * have the dir_s0_d2 with LANDLOCK_ACCESS_FS_OPEN access, + * LANDLOCK_ACCESS_FS_GETATTR must not be allowed because it would be a + * privilege escalation. + */ + enforce_ruleset(_metadata, ruleset_fd); + + /* Same tests and results as above. */ + ASSERT_EQ(-1, fstatat(AT_FDCWD, dir_s0_d1, &statbuf, 0)); + ASSERT_EQ(-1, openat(AT_FDCWD, dir_s0_d1, O_DIRECTORY)); + + /* It is still forbiden to fstat(dir_s0_d2). */ + ASSERT_EQ(-1, fstatat(AT_FDCWD, dir_s0_d2, &statbuf, 0)); + open_fd = openat(AT_FDCWD, dir_s0_d2, O_DIRECTORY); + ASSERT_NE(-1, open_fd); + ASSERT_EQ(0, close(open_fd)); + + ASSERT_EQ(-1, fstatat(AT_FDCWD, dir_s0_d3, &statbuf, 0)); + open_fd = openat(AT_FDCWD, dir_s0_d3, O_DIRECTORY); + ASSERT_NE(-1, open_fd); + ASSERT_EQ(0, close(open_fd)); + + /* + * Now, dir_s0_d3 get a new rule tied to it, only allowing + * LANDLOCK_ACCESS_FS_GETATTR. The kernel internal difference is that + * there was no rule tied to it before. + */ + add_path_beneath(_metadata, ruleset_fd, LANDLOCK_ACCESS_FS_GETATTR, + dir_s0_d3); + enforce_ruleset(_metadata, ruleset_fd); + EXPECT_EQ(0, close(ruleset_fd)); + + /* + * Same tests and results as above, except for open(dir_s0_d3) which is + * now denied because the new rule mask the rule previously inherited + * from dir_s0_d2. 
+ */ + ASSERT_EQ(-1, fstatat(AT_FDCWD, dir_s0_d1, &statbuf, 0)); + ASSERT_EQ(-1, openat(AT_FDCWD, dir_s0_d1, O_DIRECTORY)); + + ASSERT_EQ(-1, fstatat(AT_FDCWD, dir_s0_d2, &statbuf, 0)); + open_fd = openat(AT_FDCWD, dir_s0_d2, O_DIRECTORY); + ASSERT_NE(-1, open_fd); + ASSERT_EQ(0, close(open_fd)); + + /* It is still forbiden to fstat(dir_s0_d3). */ + ASSERT_EQ(-1, fstatat(AT_FDCWD, dir_s0_d3, &statbuf, 0)); + open_fd = openat(AT_FDCWD, dir_s0_d3, O_DIRECTORY); + /* open(dir_s0_d3) is now forbidden. */ + ASSERT_EQ(-1, open_fd); + ASSERT_EQ(EACCES, errno); +} + +TEST_F(layout1, extend_ruleset_with_denied_path) +{ + struct landlock_attr_path_beneath path_beneath = { + .allowed_access = LANDLOCK_ACCESS_FS_GETATTR, + }; + + path_beneath.ruleset_fd = create_ruleset(_metadata, + (const char *const []){ dir_s0_d2, NULL }); + ASSERT_NE(-1, path_beneath.ruleset_fd); + enforce_ruleset(_metadata, path_beneath.ruleset_fd); + + ASSERT_EQ(-1, open(dir_s0_d1, O_NOFOLLOW | O_DIRECTORY | O_CLOEXEC)); + ASSERT_EQ(EACCES, errno); + + /* + * Tests that we can't create a rule for which we are not allowed to + * open its path. + */ + path_beneath.parent_fd = open(dir_s0_d1, O_PATH | O_NOFOLLOW + | O_DIRECTORY | O_CLOEXEC); + ASSERT_GE(path_beneath.parent_fd, 0); + ASSERT_EQ(-1, landlock(LANDLOCK_CMD_ADD_RULE, + LANDLOCK_OPT_CREATE_RULESET, + sizeof(path_beneath), &path_beneath)); + ASSERT_EQ(EACCES, errno); + ASSERT_EQ(0, close(path_beneath.parent_fd)); + EXPECT_EQ(0, close(path_beneath.ruleset_fd)); +} + +TEST_F(layout1, rule_on_mountpoint) +{ + int ruleset_fd = create_ruleset(_metadata, + (const char *const []){ dir_s0_d1, dir_s3_d1, NULL }); + ASSERT_NE(-1, ruleset_fd); + enforce_ruleset(_metadata, ruleset_fd); + EXPECT_EQ(0, close(ruleset_fd)); + + test_path(_metadata, dir_s1_d1, -1); + test_path(_metadata, dir_s0_d1, 0); + test_path(_metadata, dir_s3_d1, 0); +} + +TEST_F(layout1, rule_over_mountpoint) +{ + int ruleset_fd = create_ruleset(_metadata, + (const char *const []){ dir_s4_d1, dir_s0_d1, NULL }); + ASSERT_NE(-1, ruleset_fd); + enforce_ruleset(_metadata, ruleset_fd); + EXPECT_EQ(0, close(ruleset_fd)); + + test_path(_metadata, dir_s4_d2, 0); + test_path(_metadata, dir_s0_d1, 0); + test_path(_metadata, dir_s4_d1, 0); +} + +/* + * This test verifies that we can apply a landlock rule on the root (/), it + * might require special handling. 
+ */ +TEST_F(layout1, rule_over_root) +{ + int ruleset_fd = create_ruleset(_metadata, + (const char *const []){ "/", NULL }); + ASSERT_NE(-1, ruleset_fd); + enforce_ruleset(_metadata, ruleset_fd); + EXPECT_EQ(0, close(ruleset_fd)); + + test_path(_metadata, "/", 0); + test_path(_metadata, dir_s0_d1, 0); +} + +TEST_F(layout1, rule_inside_mount_ns) +{ + ASSERT_NE(-1, mount(NULL, "/", NULL, MS_PRIVATE | MS_REC, NULL)); + ASSERT_NE(-1, syscall(SYS_pivot_root, dir_s3_d1, dir_s3_d2)); + ASSERT_NE(-1, chdir("/")); + + int ruleset_fd = create_ruleset(_metadata, + (const char *const []){ "b3", NULL }); + ASSERT_NE(-1, ruleset_fd); + enforce_ruleset(_metadata, ruleset_fd); + EXPECT_EQ(0, close(ruleset_fd)); + + test_path(_metadata, "b3", 0); + test_path(_metadata, "/", -1); +} + +TEST_F(layout1, mount_and_pivot) +{ + int ruleset_fd = create_ruleset(_metadata, + (const char *const []){ dir_s3_d1, NULL }); + ASSERT_NE(-1, ruleset_fd); + enforce_ruleset(_metadata, ruleset_fd); + EXPECT_EQ(0, close(ruleset_fd)); + + ASSERT_EQ(-1, mount(NULL, "/", NULL, MS_PRIVATE | MS_REC, NULL)); + ASSERT_EQ(-1, syscall(SYS_pivot_root, dir_s3_d1, dir_s3_d2)); +} + +enum relative_access { + REL_OPEN, + REL_CHDIR, + REL_CHROOT, +}; + +static void check_access(struct __test_metadata *_metadata, + bool enforce, enum relative_access rel) +{ + int dirfd; + int ruleset_fd = -1; + + if (enforce) { + ruleset_fd = create_ruleset(_metadata, (const char *const []){ + dir_s0_d2, dir_s1_d1, NULL }); + ASSERT_NE(-1, ruleset_fd); + if (rel == REL_CHROOT) + ASSERT_NE(-1, chdir(dir_s0_d2)); + enforce_ruleset(_metadata, ruleset_fd); + } else if (rel == REL_CHROOT) + ASSERT_NE(-1, chdir(dir_s0_d2)); + switch (rel) { + case REL_OPEN: + dirfd = open(dir_s0_d2, O_DIRECTORY); + ASSERT_NE(-1, dirfd); + break; + case REL_CHDIR: + ASSERT_NE(-1, chdir(dir_s0_d2)); + dirfd = AT_FDCWD; + break; + case REL_CHROOT: + ASSERT_NE(-1, chroot(".")) { + TH_LOG("Failed to chroot: %s\n", strerror(errno)); + } + dirfd = AT_FDCWD; + break; + default: + ASSERT_TRUE(false); + return; + } + + test_path_rel(_metadata, dirfd, "..", (rel == REL_CHROOT) ? 
0 : -1); + test_path_rel(_metadata, dirfd, ".", 0); + if (rel != REL_CHROOT) { + test_path_rel(_metadata, dirfd, "./c0", 0); + test_path_rel(_metadata, dirfd, "../../" TMP_PREFIX "a1", 0); + test_path_rel(_metadata, dirfd, "../../" TMP_PREFIX "a2", -1); + } + + if (rel == REL_OPEN) + EXPECT_EQ(0, close(dirfd)); + if (enforce) + EXPECT_EQ(0, close(ruleset_fd)); +} + +TEST_F(layout1, deny_open) +{ + check_access(_metadata, true, REL_OPEN); +} + +TEST_F(layout1, deny_chdir) +{ + check_access(_metadata, true, REL_CHDIR); +} + +TEST_F(layout1, deny_chroot) +{ + check_access(_metadata, true, REL_CHROOT); +} + +TEST(cleanup) +{ + cleanup_layout1(); +} + +TEST_HARNESS_MAIN diff --git a/tools/testing/selftests/landlock/test_ptrace.c b/tools/testing/selftests/landlock/test_ptrace.c new file mode 100644 index 000000000000..fcdb41e172d1 --- /dev/null +++ b/tools/testing/selftests/landlock/test_ptrace.c @@ -0,0 +1,293 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Landlock tests - ptrace + * + * Copyright © 2017-2020 Mickaël Salaün + * Copyright © 2019-2020 ANSSI + */ + +#define _GNU_SOURCE +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "test.h" + +static void create_domain(struct __test_metadata *_metadata) +{ + int ruleset_fd, err; + struct landlock_attr_features attr_features; + struct landlock_attr_enforce attr_enforce; + struct landlock_attr_ruleset attr_ruleset = { + .handled_access_fs = LANDLOCK_ACCESS_FS_READ, + }; + struct landlock_attr_path_beneath path_beneath = { + .allowed_access = LANDLOCK_ACCESS_FS_READ, + }; + + ASSERT_EQ(0, landlock(LANDLOCK_CMD_GET_FEATURES, + LANDLOCK_OPT_GET_FEATURES, + sizeof(attr_features), &attr_features)); + /* Only for test, use a binary AND for real application instead. 
*/ + ASSERT_EQ(attr_ruleset.handled_access_fs, + attr_ruleset.handled_access_fs & + attr_features.access_fs); + ruleset_fd = landlock(LANDLOCK_CMD_CREATE_RULESET, + LANDLOCK_OPT_CREATE_RULESET, sizeof(attr_ruleset), + &attr_ruleset); + ASSERT_GE(ruleset_fd, 0) { + TH_LOG("Failed to create a ruleset: %s\n", strerror(errno)); + } + path_beneath.ruleset_fd = ruleset_fd; + path_beneath.parent_fd = open("/tmp", O_PATH | O_NOFOLLOW | O_DIRECTORY + | O_CLOEXEC); + ASSERT_GE(path_beneath.parent_fd, 0); + err = landlock(LANDLOCK_CMD_ADD_RULE, + LANDLOCK_OPT_ADD_RULE_PATH_BENEATH, + sizeof(path_beneath), &path_beneath); + ASSERT_EQ(err, 0); + ASSERT_EQ(errno, 0); + ASSERT_EQ(0, close(path_beneath.parent_fd)); + + err = prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0); + ASSERT_EQ(errno, 0); + ASSERT_EQ(err, 0); + + attr_enforce.ruleset_fd = ruleset_fd; + err = landlock(LANDLOCK_CMD_ENFORCE_RULESET, + LANDLOCK_OPT_ENFORCE_RULESET, sizeof(attr_enforce), + &attr_enforce); + ASSERT_EQ(err, 0); + ASSERT_EQ(errno, 0); + + ASSERT_EQ(0, close(ruleset_fd)); +} + +/* test PTRACE_TRACEME and PTRACE_ATTACH for parent and child */ +static void check_ptrace(struct __test_metadata *_metadata, + bool domain_both, bool domain_parent, bool domain_child) +{ + pid_t child, parent; + int status; + int pipe_child[2], pipe_parent[2]; + char buf_parent; + + parent = getpid(); + ASSERT_EQ(0, pipe(pipe_child)); + ASSERT_EQ(0, pipe(pipe_parent)); + if (domain_both) + create_domain(_metadata); + + child = fork(); + ASSERT_LE(0, child); + if (child == 0) { + char buf_child; + + EXPECT_EQ(0, close(pipe_parent[1])); + EXPECT_EQ(0, close(pipe_child[0])); + if (domain_child) + create_domain(_metadata); + + /* sync #1 */ + ASSERT_EQ(1, read(pipe_parent[0], &buf_child, 1)) { + TH_LOG("Failed to read() sync #1 from parent"); + } + ASSERT_EQ('.', buf_child); + + /* Tests the parent protection. */ + ASSERT_EQ(domain_child ? -1 : 0, + ptrace(PTRACE_ATTACH, parent, NULL, 0)); + if (domain_child) { + ASSERT_EQ(EPERM, errno); + } else { + ASSERT_EQ(parent, waitpid(parent, &status, 0)); + ASSERT_EQ(1, WIFSTOPPED(status)); + ASSERT_EQ(0, ptrace(PTRACE_DETACH, parent, NULL, 0)); + } + + /* sync #2 */ + ASSERT_EQ(1, write(pipe_child[1], ".", 1)) { + TH_LOG("Failed to write() sync #2 to parent"); + } + + /* Tests traceme. */ + ASSERT_EQ(domain_parent ? -1 : 0, ptrace(PTRACE_TRACEME)); + if (domain_parent) { + ASSERT_EQ(EPERM, errno); + } else { + ASSERT_EQ(0, raise(SIGSTOP)); + } + + /* sync #3 */ + ASSERT_EQ(1, read(pipe_parent[0], &buf_child, 1)) { + TH_LOG("Failed to read() sync #3 from parent"); + } + ASSERT_EQ('.', buf_child); + _exit(_metadata->passed ? EXIT_SUCCESS : EXIT_FAILURE); + } + + EXPECT_EQ(0, close(pipe_child[1])); + EXPECT_EQ(0, close(pipe_parent[0])); + if (domain_parent) + create_domain(_metadata); + + /* sync #1 */ + ASSERT_EQ(1, write(pipe_parent[1], ".", 1)) { + TH_LOG("Failed to write() sync #1 to child"); + } + + /* Tests the parent protection. */ + /* sync #2 */ + ASSERT_EQ(1, read(pipe_child[0], &buf_parent, 1)) { + TH_LOG("Failed to read() sync #2 from child"); + } + ASSERT_EQ('.', buf_parent); + + /* Tests traceme. */ + if (!domain_parent) { + ASSERT_EQ(child, waitpid(child, &status, 0)); + ASSERT_EQ(1, WIFSTOPPED(status)); + ASSERT_EQ(0, ptrace(PTRACE_DETACH, child, NULL, 0)); + } + /* Tests attach. */ + ASSERT_EQ(domain_parent ? 
-1 : 0, + ptrace(PTRACE_ATTACH, child, NULL, 0)); + if (domain_parent) { + ASSERT_EQ(EPERM, errno); + } else { + ASSERT_EQ(child, waitpid(child, &status, 0)); + ASSERT_EQ(1, WIFSTOPPED(status)); + ASSERT_EQ(0, ptrace(PTRACE_DETACH, child, NULL, 0)); + } + + /* sync #3 */ + ASSERT_EQ(1, write(pipe_parent[1], ".", 1)) { + TH_LOG("Failed to write() sync #3 to child"); + } + ASSERT_EQ(child, waitpid(child, &status, 0)); + if (WIFSIGNALED(status) || WEXITSTATUS(status)) + _metadata->passed = 0; +} + +/* + * Test multiple tracing combinations between a parent process P1 and a child + * process P2. + * + * Yama's scoped ptrace is presumed disabled. If enabled, this optional + * restriction is enforced in addition to any Landlock check, which means that + * all P2 requests to trace P1 would be denied. + */ + +/* + * No domain + * + * P1-. P1 -> P2 : allow + * \ P2 -> P1 : allow + * 'P2 + */ +TEST(allow_without_domain) { + check_ptrace(_metadata, false, false, false); +} + +/* + * Child domain + * + * P1--. P1 -> P2 : allow + * \ P2 -> P1 : deny + * .'-----. + * | P2 | + * '------' + */ +TEST(allow_with_one_domain) { + check_ptrace(_metadata, false, false, true); +} + +/* + * Parent domain + * .------. + * | P1 --. P1 -> P2 : deny + * '------' \ P2 -> P1 : allow + * ' + * P2 + */ +TEST(deny_with_parent_domain) { + check_ptrace(_metadata, false, true, false); +} + +/* + * Parent + child domain (siblings) + * .------. + * | P1 ---. P1 -> P2 : deny + * '------' \ P2 -> P1 : deny + * .---'--. + * | P2 | + * '------' + */ +TEST(deny_with_sibling_domain) { + check_ptrace(_metadata, false, true, true); +} + +/* + * Same domain (inherited) + * .-------------. + * | P1----. | P1 -> P2 : allow + * | \ | P2 -> P1 : allow + * | ' | + * | P2 | + * '-------------' + */ +TEST(allow_sibling_domain) { + check_ptrace(_metadata, true, false, false); +} + +/* + * Inherited + child domain + * .-----------------. + * | P1----. | P1 -> P2 : allow + * | \ | P2 -> P1 : deny + * | .-'----. | + * | | P2 | | + * | '------' | + * '-----------------' + */ +TEST(allow_with_nested_domain) { + check_ptrace(_metadata, true, false, true); +} + +/* + * Inherited + parent domain + * .-----------------. + * |.------. | P1 -> P2 : deny + * || P1 ----. | P2 -> P1 : allow + * |'------' \ | + * | ' | + * | P2 | + * '-----------------' + */ +TEST(deny_with_nested_and_parent_domain) { + check_ptrace(_metadata, true, true, false); +} + +/* + * Inherited + parent and child domain (siblings) + * .-----------------. + * | .------. | P1 -> P2 : deny + * | | P1 . | P2 -> P1 : deny + * | '------'\ | + * | \ | + * | .--'---. 
| + * | | P2 | | + * | '------' | + * '-----------------' + */ +TEST(deny_with_forked_domain) { + check_ptrace(_metadata, true, true, true); +} + +TEST_HARNESS_MAIN From patchwork Mon Feb 24 16:02:15 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Micka=C3=ABl_Sala=C3=BCn?= X-Patchwork-Id: 208759 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-14.8 required=3.0 tests=HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, MENTIONS_GIT_HOSTING, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id AE252C11D30 for ; Mon, 24 Feb 2020 16:08:02 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 7B27720838 for ; Mon, 24 Feb 2020 16:08:02 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727474AbgBXQIC (ORCPT ); Mon, 24 Feb 2020 11:08:02 -0500 Received: from smtp-190c.mail.infomaniak.ch ([185.125.25.12]:57843 "EHLO smtp-190c.mail.infomaniak.ch" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726762AbgBXQIC (ORCPT ); Mon, 24 Feb 2020 11:08:02 -0500 X-Greylist: delayed 328 seconds by postgrey-1.27 at vger.kernel.org; Mon, 24 Feb 2020 11:07:59 EST Received: from smtp-3-0000.mail.infomaniak.ch (unknown [10.4.36.107]) by smtp-3-3000.mail.infomaniak.ch (Postfix) with ESMTPS id 977BE100389F3; Mon, 24 Feb 2020 17:02:36 +0100 (CET) Received: from localhost (unknown [94.23.54.103]) by smtp-3-0000.mail.infomaniak.ch (Postfix) with ESMTPA id 48R6K81fbBzlh9Dl; Mon, 24 Feb 2020 17:02:36 +0100 (CET) From: =?utf-8?q?Micka=C3=ABl_Sala=C3=BCn?= To: linux-kernel@vger.kernel.org Cc: =?utf-8?q?Micka=C3=ABl_Sala=C3=BCn?= , Al Viro , Andy Lutomirski , Arnd Bergmann , Casey Schaufler , Greg Kroah-Hartman , James Morris , Jann Horn , Jonathan Corbet , Kees Cook , Michael Kerrisk , =?utf-8?q?Micka=C3=ABl_Sala?= =?utf-8?b?w7xu?= , "Serge E . Hallyn" , Shuah Khan , Vincent Dagonneau , kernel-hardening@lists.openwall.com, linux-api@vger.kernel.org, linux-arch@vger.kernel.org, linux-doc@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-kselftest@vger.kernel.org, linux-security-module@vger.kernel.org, x86@kernel.org Subject: [RFC PATCH v14 10/10] landlock: Add user and kernel documentation Date: Mon, 24 Feb 2020 17:02:15 +0100 Message-Id: <20200224160215.4136-11-mic@digikod.net> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20200224160215.4136-1-mic@digikod.net> References: <20200224160215.4136-1-mic@digikod.net> MIME-Version: 1.0 X-Antivirus: Dr.Web (R) for Unix mail servers drweb plugin ver.6.0.2.8 X-Antivirus-Code: 0x100000 Sender: linux-kselftest-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kselftest@vger.kernel.org This documentation can be built with the Sphinx framework. Another location might be more appropriate, though. Signed-off-by: Mickaël Salaün Reviewed-by: Vincent Dagonneau Cc: Andy Lutomirski Cc: James Morris Cc: Kees Cook Cc: Serge E. Hallyn --- Changes since v13: * Rewrote the documentation according to the major revamp. 
Previous version:
https://lore.kernel.org/lkml/20191104172146.30797-8-mic@digikod.net/
---
 Documentation/security/index.rst           |   1 +
 Documentation/security/landlock/index.rst  |  18 ++
 Documentation/security/landlock/kernel.rst |  44 ++++
 Documentation/security/landlock/user.rst   | 233 +++++++++++++++++++++
 4 files changed, 296 insertions(+)
 create mode 100644 Documentation/security/landlock/index.rst
 create mode 100644 Documentation/security/landlock/kernel.rst
 create mode 100644 Documentation/security/landlock/user.rst

diff --git a/Documentation/security/index.rst b/Documentation/security/index.rst
index fc503dd689a7..4d213e76ddf4 100644
--- a/Documentation/security/index.rst
+++ b/Documentation/security/index.rst
@@ -15,3 +15,4 @@ Security Documentation
    self-protection
    siphash
    tpm/index
+   landlock/index
diff --git a/Documentation/security/landlock/index.rst b/Documentation/security/landlock/index.rst
new file mode 100644
index 000000000000..dbd33b96ce60
--- /dev/null
+++ b/Documentation/security/landlock/index.rst
@@ -0,0 +1,18 @@
+=========================================
+Landlock LSM: unprivileged access control
+=========================================
+
+:Author: Mickaël Salaün
+
+The goal of Landlock is to enable restriction of ambient rights (e.g. global
+filesystem access) for a set of processes.  Because Landlock is a stackable
+LSM, it makes it possible to create safe security sandboxes as new security
+layers in addition to the existing system-wide access-controls.  This kind of
+sandbox is expected to help mitigate the security impact of bugs or
+unexpected/malicious behaviors in user-space applications.  Landlock empowers
+any process, including unprivileged ones, to securely restrict itself.
+
+.. toctree::
+
+    user
+    kernel
diff --git a/Documentation/security/landlock/kernel.rst b/Documentation/security/landlock/kernel.rst
new file mode 100644
index 000000000000..b87769909029
--- /dev/null
+++ b/Documentation/security/landlock/kernel.rst
@@ -0,0 +1,44 @@
+==============================
+Landlock: kernel documentation
+==============================
+
+Landlock's goal is to create scoped access-control (i.e. sandboxing).  To
+harden a whole system, this feature should be available to any process,
+including unprivileged ones.  Because such a process may be compromised or
+backdoored (i.e. untrusted), Landlock's features must be safe to use from the
+point of view of the kernel and other processes.  Landlock's interface must
+therefore expose a minimal attack surface.
+
+Landlock is designed to be usable by unprivileged processes while following
+the system security policy enforced by other access control mechanisms (e.g.
+DAC, LSM).  Indeed, a Landlock rule shall not interfere with other
+access-controls enforced on the system; it can only add more restrictions.
+
+Any user can enforce Landlock rulesets on their processes.  They are merged
+and evaluated according to the inherited ones in a way that ensures that only
+more constraints can be added.
+
+
+Guiding principles for safe access controls
+===========================================
+
+* A Landlock rule shall be focused on access control on kernel objects instead
+  of syscall filtering (i.e. syscall arguments), which is the purpose of
+  seccomp-bpf.
+* To avoid multiple kinds of side-channel attacks (e.g. leak of security
+  policies, CPU-based attacks), Landlock rules shall not be able to
+  programmatically communicate with user space.
+* Kernel access checks shall not slow down access requests from unsandboxed
+  processes.
+* Computation related to Landlock operations (e.g. enforcing a ruleset) shall
+  only impact the processes requesting them.
+
+
+Landlock rulesets and domains
+=============================
+
+A domain is a read-only ruleset tied to a set of subjects (i.e. tasks).  A
+domain can transition to a new one, which is the intersection of the
+constraints from the current domain and a new ruleset.  The definition of a
+subject is implicit for a task sandboxing itself, which makes the reasoning
+much easier and helps avoid pitfalls.
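[Editor's note] To make the intersection behaviour above concrete, here is a minimal,
hedged sketch of a task stacking two rulesets.  It assumes the landlock() wrapper used
by the selftests in this series, and two hypothetical, already-created descriptors
ruleset_a_fd and ruleset_b_fd; error handling is omitted.

.. code-block:: c

    struct landlock_attr_enforce attr_enforce = {
        .ruleset_fd = ruleset_a_fd,	/* hypothetical ruleset A */
    };

    prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0);

    /* First enforcement: the task transitions to a domain based on A. */
    landlock(LANDLOCK_CMD_ENFORCE_RULESET, LANDLOCK_OPT_ENFORCE_RULESET,
             sizeof(attr_enforce), &attr_enforce);

    /* Second enforcement: the new domain is the intersection of A and B. */
    attr_enforce.ruleset_fd = ruleset_b_fd;	/* hypothetical ruleset B */
    landlock(LANDLOCK_CMD_ENFORCE_RULESET, LANDLOCK_OPT_ENFORCE_RULESET,
             sizeof(attr_enforce), &attr_enforce);

An access remains allowed only if every enforced ruleset allows it, so a later
ruleset can never re-grant what an earlier one denied.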
diff --git a/Documentation/security/landlock/user.rst b/Documentation/security/landlock/user.rst
new file mode 100644
index 000000000000..cbd7f61fca8c
--- /dev/null
+++ b/Documentation/security/landlock/user.rst
@@ -0,0 +1,233 @@
+=================================
+Landlock: userspace documentation
+=================================
+
+Landlock rules
+==============
+
+A Landlock rule describes an action on an object.  An object is currently a
+file hierarchy, and the related filesystem actions are defined in
+`Access rights`_.  A set of rules is aggregated in a ruleset, which can then
+restrict the thread enforcing it, and its future children.
+
+
+Defining and enforcing a security policy
+----------------------------------------
+
+Before defining a security policy, an application should first probe for the
+features supported by the running kernel, which is important to remain
+compatible with older kernels.  This can be done thanks to the `landlock`
+syscall (cf. :ref:`syscall`).
+
+.. code-block:: c
+
+    struct landlock_attr_features attr_features;
+
+    if (landlock(LANDLOCK_CMD_GET_FEATURES, LANDLOCK_OPT_GET_FEATURES,
+                sizeof(attr_features), &attr_features)) {
+        perror("Failed to probe the Landlock supported features");
+        return 1;
+    }
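[Editor's note] On an older kernel that does not implement this syscall, the probe
above would typically fail with errno set to ENOSYS.  A hedged sketch of graceful
degradation (the fallback policy is an assumption, not part of this series) could be:

.. code-block:: c

    /*
     * Assumed fallback: keep running without a sandbox when the landlock
     * syscall is unknown to the kernel, instead of aborting.
     */
    if (landlock(LANDLOCK_CMD_GET_FEATURES, LANDLOCK_OPT_GET_FEATURES,
                sizeof(attr_features), &attr_features)) {
        if (errno == ENOSYS) {
            fprintf(stderr, "Landlock is not available, continuing unsandboxed\n");
            return 0;
        }
        perror("Failed to probe the Landlock supported features");
        return 1;
    }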
+Then, we need to create the ruleset that will contain our rules.  For this
+example, the ruleset will contain rules that only allow read actions; write
+actions will be denied.  The ruleset then needs to handle both kinds of
+actions.  For backward compatibility, these actions should be ANDed with the
+supported ones.
+
+.. code-block:: c
+
+    int ruleset_fd;
+    struct landlock_attr_ruleset ruleset = {
+        .handled_access_fs =
+            LANDLOCK_ACCESS_FS_READ |
+            LANDLOCK_ACCESS_FS_READDIR |
+            LANDLOCK_ACCESS_FS_EXECUTE |
+            LANDLOCK_ACCESS_FS_WRITE |
+            LANDLOCK_ACCESS_FS_TRUNCATE |
+            LANDLOCK_ACCESS_FS_CHMOD |
+            LANDLOCK_ACCESS_FS_CHOWN |
+            LANDLOCK_ACCESS_FS_CHGRP |
+            LANDLOCK_ACCESS_FS_LINK_TO |
+            LANDLOCK_ACCESS_FS_RENAME_FROM |
+            LANDLOCK_ACCESS_FS_RENAME_TO |
+            LANDLOCK_ACCESS_FS_RMDIR |
+            LANDLOCK_ACCESS_FS_UNLINK |
+            LANDLOCK_ACCESS_FS_MAKE_CHAR |
+            LANDLOCK_ACCESS_FS_MAKE_DIR |
+            LANDLOCK_ACCESS_FS_MAKE_REG |
+            LANDLOCK_ACCESS_FS_MAKE_SOCK |
+            LANDLOCK_ACCESS_FS_MAKE_FIFO |
+            LANDLOCK_ACCESS_FS_MAKE_BLOCK |
+            LANDLOCK_ACCESS_FS_MAKE_SYM,
+    };
+
+    ruleset.handled_access_fs &= attr_features.access_fs;
+    ruleset_fd = landlock(LANDLOCK_CMD_CREATE_RULESET,
+            LANDLOCK_OPT_CREATE_RULESET, sizeof(ruleset), &ruleset);
+    if (ruleset_fd < 0) {
+        perror("Failed to create a ruleset");
+        return 1;
+    }
+
+We can now add a new rule to this ruleset thanks to the returned file
+descriptor referring to this ruleset.  The rule will only allow reading the
+file hierarchy ``/usr``.  Without any other rule, write actions would then be
+denied by the ruleset.  To add ``/usr`` to the ruleset, we open it with the
+``O_PATH`` flag and fill the &struct landlock_attr_path_beneath with this file
+descriptor.
+
+.. code-block:: c
+
+    int err;
+    struct landlock_attr_path_beneath path_beneath = {
+        .ruleset_fd = ruleset_fd,
+        .allowed_access =
+            LANDLOCK_ACCESS_FS_READ |
+            LANDLOCK_ACCESS_FS_READDIR |
+            LANDLOCK_ACCESS_FS_EXECUTE,
+    };
+
+    path_beneath.allowed_access &= attr_features.access_fs;
+    path_beneath.parent_fd = open("/usr", O_PATH | O_CLOEXEC);
+    if (path_beneath.parent_fd < 0) {
+        perror("Failed to open file");
+        close(ruleset_fd);
+        return 1;
+    }
+    err = landlock(LANDLOCK_CMD_ADD_RULE, LANDLOCK_OPT_ADD_RULE_PATH_BENEATH,
+            sizeof(path_beneath), &path_beneath);
+    close(path_beneath.parent_fd);
+    if (err) {
+        perror("Failed to update ruleset");
+        close(ruleset_fd);
+        return 1;
+    }
+
+We now have a ruleset with one rule allowing read access to ``/usr`` while
+denying all accesses featured in ``attr_features.access_fs`` to everything
+else on the filesystem.  The next step is to restrict the current thread from
+gaining more privileges (e.g. via a SUID binary).
+
+.. code-block:: c
+
+    if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0)) {
+        perror("Failed to restrict privileges");
+        close(ruleset_fd);
+        return 1;
+    }
+
+The current thread is now ready to sandbox itself with the ruleset.
+
+.. code-block:: c
+
+    struct landlock_attr_enforce attr_enforce = {
+        .ruleset_fd = ruleset_fd,
+    };
+
+    if (landlock(LANDLOCK_CMD_ENFORCE_RULESET, LANDLOCK_OPT_ENFORCE_RULESET,
+                sizeof(attr_enforce), &attr_enforce)) {
+        perror("Failed to enforce ruleset");
+        close(ruleset_fd);
+        return 1;
+    }
+    close(ruleset_fd);
+
+If this last system call succeeds, the current thread is now restricted and
+this policy will be enforced on all its subsequently created children as well.
+Once a thread is landlocked, there is no way to remove its security policy;
+only adding more restrictions is allowed.  These threads are now in a new
+Landlock domain, which is a merge of their parent's domain (if any) with the
+new ruleset.
+
+Full working code can be found in `samples/landlock/sandboxer.c`_.
+
+
+Inheritance
+-----------
+
+Every new thread resulting from a :manpage:`clone(2)` inherits Landlock
+restrictions from its parent.  This is similar to the seccomp inheritance (cf.
+:doc:`/userspace-api/seccomp_filter`) or any other LSM dealing with a task's
+:manpage:`credentials(7)`.  For instance, one of a process's threads may apply
+Landlock rules to itself, but these rules will not be automatically applied to
+its sibling threads (unlike POSIX thread credential changes, cf.
+:manpage:`nptl(7)`).
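[Editor's note] As a hedged illustration of this inheritance (the exact outcome
depends on the ruleset actually enforced; includes and error handling are omitted),
a child forked after enforcement starts with the same restrictions as its parent:

.. code-block:: c

    /*
     * Illustration only: with the example ruleset above (read access limited
     * to /usr), a child created after enforcement is equally restricted.
     */
    pid_t child = fork();

    if (child == 0) {
        /* Denied if /etc is not covered by an allowing rule. */
        int fd = open("/etc/passwd", O_RDONLY | O_CLOEXEC);

        if (fd < 0 && errno == EACCES)
            fprintf(stderr, "child is sandboxed as expected\n");
        _exit(0);
    }
    waitpid(child, NULL, 0);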
+
+
+Ptrace restrictions
+-------------------
+
+A sandboxed process has fewer privileges than a non-sandboxed process and must
+therefore be subject to additional restrictions when manipulating another
+process.  To be allowed to use :manpage:`ptrace(2)` and related syscalls on a
+target process, a sandboxed process should have a subset of the target
+process's rules, which means the tracee must be in a sub-domain of the tracer.
+
+
+.. _syscall:
+
+The `landlock` syscall and its arguments
+========================================
+
+.. kernel-doc:: security/landlock/syscall.c
+    :functions: sys_landlock
+
+Commands
+--------
+
+.. kernel-doc:: include/uapi/linux/landlock.h
+    :functions: landlock_cmd
+
+Options
+-------
+
+.. kernel-doc:: include/uapi/linux/landlock.h
+    :functions: options_intro
+                options_get_features options_create_ruleset
+                options_add_rule options_enforce_ruleset
+
+Attributes
+----------
+
+.. kernel-doc:: include/uapi/linux/landlock.h
+    :functions: landlock_attr_features landlock_attr_ruleset
+                landlock_attr_path_beneath landlock_attr_enforce
+
+Access rights
+-------------
+
+.. kernel-doc:: include/uapi/linux/landlock.h
+    :functions: fs_access
+
+
+Questions and answers
+=====================
+
+What about user space sandbox managers?
+---------------------------------------
+
+Using a user space process to enforce restrictions on kernel resources can
+lead to race conditions or inconsistent evaluations (i.e. `Incorrect mirroring
+of the OS code and state `_).
+
+What about namespaces and containers?
+-------------------------------------
+
+Namespaces can help create sandboxes but they are not designed for
+access-control and thus lack useful features for such a use case (e.g.
+fine-grained restrictions).  Moreover, their complexity can lead to security
+issues, especially when untrusted processes can manipulate them (cf.
+`Controlling access to user namespaces `_).
+
+
+Additional documentation
+========================
+
+See https://landlock.io
+
+
+.. Links
+.. _samples/landlock/sandboxer.c: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/tree/samples/landlock/sandboxer.c
+.. _tools/testing/selftests/landlock/: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/tree/tools/testing/selftests/landlock/
+.. _tools/testing/selftests/landlock/test_ptrace.c: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/tree/tools/testing/selftests/landlock/test_ptrace.c