From: Adhemerval Zanella
To: libc-alpha@sourceware.org
Cc: Carlos O'Donell, Florian Weimer
Subject: [PATCH v8] nptl: Fix Race conditions in pthread cancellation [BZ#12683]
Date: Tue, 25 Jun 2024 16:17:44 -0300
Message-ID: <20240625191840.2810790-1-adhemerval.zanella@linaro.org>
X-Mailer: git-send-email 2.43.0

The current racy approach is to enable asynchronous cancellation before
making the syscall, restore the previous cancellation type once the
syscall returns, and check whether cancellation has happened at the
cancellation entrypoint.

As described in BZ#12683, this approach shows two problems:

  1. Cancellation can act after the syscall has returned from the
     kernel, but before userspace saves the return value.  This might
     result in a resource leak if the syscall allocated a resource or
     had a side effect (partial read/write), and there is no way for
     the program to handle it with cancellation handlers.

  2. If a signal is handled while the thread is blocked at a cancellable
     syscall, the entire signal handler runs with asynchronous
     cancellation enabled.  This can lead to issues if the signal
     handler calls functions which are async-signal-safe but not
     async-cancel-safe.

For cancellation to work correctly, there are 5 points at which the
cancellation signal could arrive:

	[ ... )[ ... )[ syscall ]( ...
	   1      2        3    4   5

  1. Before the initial testcancel, e.g. [*... testcancel)
  2. Between testcancel and syscall start, e.g. [testcancel...syscall start)
  3. While the syscall is blocked and no side effects have yet taken
     place, e.g. [ syscall ]
  4. Same as 3 but with side effects having occurred (e.g. a partial
     read or write).
  5. After syscall end, e.g. (syscall end...*]

libc wants to act on cancellation in cases 1, 2, and 3 but not in cases
4 or 5.  For cases 4 and 5, the cancellation will eventually happen at
the next cancellable entrypoint without any further external event.
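To make the race window concrete, the sketch below shows the pattern the
old SYSCALL_CANCEL macro (removed from sysdeps/unix/sysdep.h further down
in this patch) expanded to.  The wrapper name is purely illustrative and
the glibc-internal macros are assumed to be in scope; this is not the
exact expansion.

/* Illustrative only: the old racy cancellable-syscall pattern.  */
static ssize_t
old_cancellable_read (int fd, void *buf, size_t count)
{
  /* Switch the thread to asynchronous cancellation.  */
  int oldtype = LIBC_CANCEL_ASYNC ();

  ssize_t ret = INLINE_SYSCALL_CALL (read, fd, buf, count);

  /* If SIGCANCEL acts at this point, the syscall has already completed
     in the kernel but RET is never seen by the caller, so a partial
     read (or an allocated resource for other syscalls) is lost; this
     is problem 1 above.  */

  /* Restore the previous cancellation type.  */
  LIBC_CANCEL_RESET (oldtype);
  return ret;
}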
The proposed solution for each case is:

  1. Do a conditional branch based on whether the thread has received
     a cancellation request;

  2. It can be caught by the signal handler determining that the saved
     program counter (from the ucontext_t) is in some address range
     beginning just before the "testcancel" and ending with the
     syscall instruction.

  3. SIGCANCEL can be caught by the signal handler, which determines
     that the saved program counter (from the ucontext_t) is in the
     address range beginning just before "testcancel" and ending with
     the first uninterruptible (via a signal) syscall instruction that
     enters the kernel.

  4. In this case, except for certain syscalls that ALWAYS fail with
     EINTR even for non-interrupting signals, the kernel will reset the
     program counter to point at the syscall instruction during signal
     handling, so that the syscall is restarted when the signal handler
     returns.  So, from the signal handler's standpoint, this looks the
     same as case 2, and thus it is taken care of.

  5. For syscalls with side effects, the kernel cannot restart the
     syscall; when it is interrupted by a signal, the kernel must cause
     the syscall to return with whatever partial result was obtained
     (e.g. a partial read or write).

  6. The saved program counter points just after the syscall
     instruction, so the signal handler won't act on cancellation.
     This is similar to 4, since the program counter is past the
     syscall instruction.

The proposed fixes are:

  1. Remove the enable_asynccancel/disable_asynccancel function usage in
     the cancellable syscall definitions and instead make them call a
     common symbol that checks whether cancellation is enabled
     (__syscall_cancel at nptl/cancellation.c), calls the arch-specific
     cancellable entry point (__syscall_cancel_arch), and cancels the
     thread when required.

  2. Provide an arch-specific generic system call wrapper function that
     contains global markers.  These markers are used by the SIGCANCEL
     signal handler to check whether the interruption happened inside a
     valid syscall and whether the syscall has side effects.  A
     reference implementation, sysdeps/unix/sysv/linux/syscall_cancel.c,
     is provided.  However, the markers may not end up at the expected
     places depending on how INTERNAL_SYSCALL_NCS is implemented by the
     architecture, so it is expected that all architectures add an
     arch-specific implementation.

  3. Rewrite the SIGCANCEL asynchronous handler to check both the
     cancellation type and whether the interrupted IP from the signal
     context falls between the global markers, and act accordingly.

  4. Adjust the libc code to replace LIBC_CANCEL_ASYNC/LIBC_CANCEL_RESET
     with the appropriate cancellable syscalls.

  5. Adjust the 'lowlevellock-futex.h' arch-specific implementations to
     provide cancellable futex calls.

Some architectures require specific support for syscall handling:

  * On i386 the syscall cancel bridge needs to use the old int80
    instruction because, with the optimized vDSO symbol, the resulting
    PC value for an interrupted syscall points to an address outside
    the expected markers in __syscall_cancel_arch.  It has been
    discussed on LKML [1] how the kernel could help userland accomplish
    this, but as far as I know the discussion has stalled.  Also,
    sysenter should not be used directly by libc since its calling
    convention is set by the kernel depending on the underlying x86
    chip (check kernel commit 30bfa7b3488bfb1bb75c9f50a5fcac1832970c60).

  * mips o32 is the only kABI that requires 7-argument syscalls, and to
    avoid adding a requirement on all architectures to support them,
    mips support is added with extra internal defines.

Checked on aarch64-linux-gnu, arm-linux-gnueabihf, powerpc-linux-gnu,
powerpc64-linux-gnu, powerpc64le-linux-gnu, i686-linux-gnu, and
x86_64-linux-gnu.
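For comparison, here is a sketch of what a cancellable syscall wrapper
looks like with the new scheme (fix 4 above).  The shape is modeled on
glibc's read wrapper; that file is not part of the hunks quoted in this
patch, so treat the wrapper body as illustrative: only what
SYSCALL_CANCEL expands to changes.

/* Illustrative: the wrapper simply routes through SYSCALL_CANCEL, which
   now calls __syscall_cancel.  That function checks whether cancellation
   is enabled, issues the syscall through the arch-specific
   __syscall_cancel_arch bridge (which contains the global markers and
   the canceled-bit test), and cancels the thread only when the syscall
   was interrupted by SIGCANCEL with no side effects (-EINTR).  */
ssize_t
__libc_read (int fd, void *buf, size_t nbytes)
{
  return SYSCALL_CANCEL (read, fd, buf, nbytes);
}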
[1] https://lkml.org/lkml/2016/3/8/1105 Reviewed-by: Carlos O'Donell --- Changes from v7: * Improve some comments on cancellation.c and commit message. * Removed ia64 code. Changes from v6: * Fixed i686 syscall_cancel.S that triggered patchworkd buildbot regressions. --- elf/Makefile | 5 +- nptl/Makefile | 10 +- nptl/cancellation.c | 127 +++++++------ nptl/cleanup_defer.c | 5 +- nptl/descr-const.sym | 6 + nptl/descr.h | 18 ++ nptl/libc-cleanup.c | 5 +- nptl/pthread_cancel.c | 78 +++----- nptl/pthread_exit.c | 4 +- nptl/pthread_setcancelstate.c | 2 +- nptl/pthread_setcanceltype.c | 2 +- nptl/pthread_testcancel.c | 5 +- nptl/tst-cancel31.c | 100 ++++++++++ sysdeps/generic/syscall_types.h | 25 +++ sysdeps/nptl/cancellation-pc-check.h | 54 ++++++ sysdeps/nptl/lowlevellock-futex.h | 20 +- sysdeps/nptl/pthreadP.h | 11 +- sysdeps/powerpc/powerpc32/sysdep.h | 3 + sysdeps/powerpc/powerpc64/sysdep.h | 19 ++ sysdeps/pthread/tst-cancel2.c | 4 + sysdeps/sh/sysdep.h | 1 + sysdeps/unix/sysdep.h | 173 ++++++++++++++---- .../unix/sysv/linux/aarch64/syscall_cancel.S | 59 ++++++ .../unix/sysv/linux/alpha/syscall_cancel.S | 80 ++++++++ sysdeps/unix/sysv/linux/arc/syscall_cancel.S | 56 ++++++ sysdeps/unix/sysv/linux/arm/syscall_cancel.S | 78 ++++++++ sysdeps/unix/sysv/linux/csky/syscall_cancel.S | 114 ++++++++++++ sysdeps/unix/sysv/linux/hppa/syscall_cancel.S | 81 ++++++++ sysdeps/unix/sysv/linux/i386/syscall_cancel.S | 104 +++++++++++ .../sysv/linux/loongarch/syscall_cancel.S | 50 +++++ sysdeps/unix/sysv/linux/m68k/syscall_cancel.S | 84 +++++++++ .../sysv/linux/microblaze/syscall_cancel.S | 61 ++++++ .../sysv/linux/mips/mips32/syscall_cancel.S | 128 +++++++++++++ sysdeps/unix/sysv/linux/mips/mips32/sysdep.h | 4 + .../linux/mips/mips64/n32/syscall_types.h | 28 +++ .../sysv/linux/mips/mips64/syscall_cancel.S | 108 +++++++++++ sysdeps/unix/sysv/linux/mips/mips64/sysdep.h | 52 +++--- .../unix/sysv/linux/nios2/syscall_cancel.S | 95 ++++++++++ sysdeps/unix/sysv/linux/or1k/syscall_cancel.S | 63 +++++++ .../linux/powerpc/cancellation-pc-check.h | 65 +++++++ .../unix/sysv/linux/powerpc/syscall_cancel.S | 86 +++++++++ .../unix/sysv/linux/riscv/syscall_cancel.S | 67 +++++++ .../sysv/linux/s390/s390-32/syscall_cancel.S | 62 +++++++ .../sysv/linux/s390/s390-64/syscall_cancel.S | 62 +++++++ sysdeps/unix/sysv/linux/sh/syscall_cancel.S | 126 +++++++++++++ sysdeps/unix/sysv/linux/socketcall.h | 35 +++- .../sysv/linux/sparc/sparc32/syscall_cancel.S | 71 +++++++ .../sysv/linux/sparc/sparc64/syscall_cancel.S | 74 ++++++++ sysdeps/unix/sysv/linux/syscall_cancel.c | 73 ++++++++ sysdeps/unix/sysv/linux/sysdep-cancel.h | 12 -- .../unix/sysv/linux/x86_64/syscall_cancel.S | 57 ++++++ .../sysv/linux/x86_64/x32/syscall_types.h | 34 ++++ sysdeps/x86_64/nptl/tcb-offsets.sym | 3 - 53 files changed, 2521 insertions(+), 228 deletions(-) create mode 100644 nptl/descr-const.sym create mode 100644 nptl/tst-cancel31.c create mode 100644 sysdeps/generic/syscall_types.h create mode 100644 sysdeps/nptl/cancellation-pc-check.h create mode 100644 sysdeps/unix/sysv/linux/aarch64/syscall_cancel.S create mode 100644 sysdeps/unix/sysv/linux/alpha/syscall_cancel.S create mode 100644 sysdeps/unix/sysv/linux/arc/syscall_cancel.S create mode 100644 sysdeps/unix/sysv/linux/arm/syscall_cancel.S create mode 100644 sysdeps/unix/sysv/linux/csky/syscall_cancel.S create mode 100644 sysdeps/unix/sysv/linux/hppa/syscall_cancel.S create mode 100644 sysdeps/unix/sysv/linux/i386/syscall_cancel.S create mode 100644 sysdeps/unix/sysv/linux/loongarch/syscall_cancel.S 
create mode 100644 sysdeps/unix/sysv/linux/m68k/syscall_cancel.S create mode 100644 sysdeps/unix/sysv/linux/microblaze/syscall_cancel.S create mode 100644 sysdeps/unix/sysv/linux/mips/mips32/syscall_cancel.S create mode 100644 sysdeps/unix/sysv/linux/mips/mips64/n32/syscall_types.h create mode 100644 sysdeps/unix/sysv/linux/mips/mips64/syscall_cancel.S create mode 100644 sysdeps/unix/sysv/linux/nios2/syscall_cancel.S create mode 100644 sysdeps/unix/sysv/linux/or1k/syscall_cancel.S create mode 100644 sysdeps/unix/sysv/linux/powerpc/cancellation-pc-check.h create mode 100644 sysdeps/unix/sysv/linux/powerpc/syscall_cancel.S create mode 100644 sysdeps/unix/sysv/linux/riscv/syscall_cancel.S create mode 100644 sysdeps/unix/sysv/linux/s390/s390-32/syscall_cancel.S create mode 100644 sysdeps/unix/sysv/linux/s390/s390-64/syscall_cancel.S create mode 100644 sysdeps/unix/sysv/linux/sh/syscall_cancel.S create mode 100644 sysdeps/unix/sysv/linux/sparc/sparc32/syscall_cancel.S create mode 100644 sysdeps/unix/sysv/linux/sparc/sparc64/syscall_cancel.S create mode 100644 sysdeps/unix/sysv/linux/syscall_cancel.c create mode 100644 sysdeps/unix/sysv/linux/x86_64/syscall_cancel.S create mode 100644 sysdeps/unix/sysv/linux/x86_64/x32/syscall_types.h diff --git a/elf/Makefile b/elf/Makefile index bb6cd06dec..8caf5569f6 100644 --- a/elf/Makefile +++ b/elf/Makefile @@ -1291,11 +1291,8 @@ $(objpfx)dl-allobjs.os: $(all-rtld-routines:%=$(objpfx)%.os) # discovery mechanism is not compatible with the libc implementation # when compiled for libc. rtld-stubbed-symbols = \ - __GI___pthread_disable_asynccancel \ - __GI___pthread_enable_asynccancel \ __libc_assert_fail \ - __pthread_disable_asynccancel \ - __pthread_enable_asynccancel \ + __syscall_cancel \ calloc \ free \ malloc \ diff --git a/nptl/Makefile b/nptl/Makefile index b3f8af2e1c..7c2b7a5203 100644 --- a/nptl/Makefile +++ b/nptl/Makefile @@ -204,6 +204,7 @@ routines = \ sem_timedwait \ sem_unlink \ sem_wait \ + syscall_cancel \ tpp \ unwind \ vars \ @@ -235,7 +236,8 @@ CFLAGS-pthread_setcanceltype.c += -fexceptions -fasynchronous-unwind-tables # These are internal functions which similar functionality as setcancelstate # and setcanceltype. -CFLAGS-cancellation.c += -fasynchronous-unwind-tables +CFLAGS-cancellation.c += -fexceptions -fasynchronous-unwind-tables +CFLAGS-syscall_cancel.c += -fexceptions -fasynchronous-unwind-tables # Calling pthread_exit() must cause the registered cancel handlers to # be executed. Therefore exceptions have to be thrown through this @@ -279,6 +281,7 @@ tests = \ tst-cancel7 \ tst-cancel17 \ tst-cancel24 \ + tst-cancel31 \ tst-cond26 \ tst-context1 \ tst-default-attr \ @@ -404,7 +407,10 @@ xtests += tst-eintr1 test-srcs = tst-oddstacklimit -gen-as-const-headers = unwindbuf.sym +gen-as-const-headers = \ + descr-const.sym \ + unwindbuf.sym \ + # gen-as-const-headers gen-py-const-headers := nptl_lock_constants.pysym pretty-printers := nptl-printers.py diff --git a/nptl/cancellation.c b/nptl/cancellation.c index 7ce60e70d0..e71008b58b 100644 --- a/nptl/cancellation.c +++ b/nptl/cancellation.c @@ -18,74 +18,93 @@ #include #include #include "pthreadP.h" -#include - -/* The next two functions are similar to pthread_setcanceltype() but - more specialized for the use in the cancelable functions like write(). - They do not need to check parameters etc. 
These functions must be - AS-safe, with the exception of the actual cancellation, because they - are called by wrappers around AS-safe functions like write().*/ -int -__pthread_enable_asynccancel (void) +/* Called by the INTERNAL_SYSCALL_CANCEL macro, check for cancellation and + returns the syscall value or its negative error code. */ +long int +__internal_syscall_cancel (__syscall_arg_t a1, __syscall_arg_t a2, + __syscall_arg_t a3, __syscall_arg_t a4, + __syscall_arg_t a5, __syscall_arg_t a6, + __SYSCALL_CANCEL7_ARG_DEF + __syscall_arg_t nr) { - struct pthread *self = THREAD_SELF; - int oldval = atomic_load_relaxed (&self->cancelhandling); + long int result; + struct pthread *pd = THREAD_SELF; - while (1) + /* If cancellation is not enabled, call the syscall directly and also + for thread terminatation to avoid call __syscall_do_cancel while + executing cleanup handlers. */ + int ch = atomic_load_relaxed (&pd->cancelhandling); + if (SINGLE_THREAD_P || !cancel_enabled (ch) || cancel_exiting (ch)) { - int newval = oldval | CANCELTYPE_BITMASK; - - if (newval == oldval) - break; + result = INTERNAL_SYSCALL_NCS_CALL (nr, a1, a2, a3, a4, a5, a6 + __SYSCALL_CANCEL7_ARCH_ARG7); + if (INTERNAL_SYSCALL_ERROR_P (result)) + return -INTERNAL_SYSCALL_ERRNO (result); + return result; + } - if (atomic_compare_exchange_weak_acquire (&self->cancelhandling, - &oldval, newval)) - { - if (cancel_enabled_and_canceled_and_async (newval)) - { - self->result = PTHREAD_CANCELED; - __do_cancel (); - } + /* Call the arch-specific entry points that contains the globals markers + to be checked by SIGCANCEL handler. */ + result = __syscall_cancel_arch (&pd->cancelhandling, nr, a1, a2, a3, a4, a5, + a6 __SYSCALL_CANCEL7_ARCH_ARG7); - break; - } - } + /* If the cancellable syscall was interrupted by SIGCANCEL and it has no + side-effect, cancel the thread if cancellation is enabled. */ + ch = atomic_load_relaxed (&pd->cancelhandling); + /* The behaviour here assumes that EINTR is returned only if there are no + visible side effects. POSIX Issue 7 has not yet provided any stronger + language for close, and in theory the close syscall could return EINTR + and leave the file descriptor open (conforming and leaks). It expects + that no such kernel is used with glibc. */ + if (result == -EINTR && cancel_enabled_and_canceled (ch)) + __syscall_do_cancel (); - return oldval; + return result; } -libc_hidden_def (__pthread_enable_asynccancel) -/* See the comment for __pthread_enable_asynccancel regarding - the AS-safety of this function. */ -void -__pthread_disable_asynccancel (int oldtype) +/* Called by the SYSCALL_CANCEL macro, check for cancellation and return the + syscall expected success value (usually 0) or, in case of failure, -1 and + sets errno to syscall return value. */ +long int +__syscall_cancel (__syscall_arg_t a1, __syscall_arg_t a2, + __syscall_arg_t a3, __syscall_arg_t a4, + __syscall_arg_t a5, __syscall_arg_t a6, + __SYSCALL_CANCEL7_ARG_DEF __syscall_arg_t nr) { - /* If asynchronous cancellation was enabled before we do not have - anything to do. */ - if (oldtype & CANCELTYPE_BITMASK) - return; + int r = __internal_syscall_cancel (a1, a2, a3, a4, a5, a6, + __SYSCALL_CANCEL7_ARG nr); + return __glibc_unlikely (INTERNAL_SYSCALL_ERROR_P (r)) + ? SYSCALL_ERROR_LABEL (INTERNAL_SYSCALL_ERRNO (r)) + : r; +} +/* Called by __syscall_cancel_arch or function above start the thread + cancellation. 
*/ +_Noreturn void +__syscall_do_cancel (void) +{ struct pthread *self = THREAD_SELF; - int newval; + + /* Disable thread cancellation to avoid cancellable entrypoints calling + __syscall_do_cancel recursively. We atomic load relaxed to check the + state of cancelhandling, there is no particular ordering requirement + between the syscall call and the other thread setting our cancelhandling + with a atomic store acquire. + + POSIX Issue 7 notes that the cancellation occurs asynchronously on the + target thread, that implies there is no ordering requirements. It does + not need a MO release store here. */ int oldval = atomic_load_relaxed (&self->cancelhandling); - do + while (1) { - newval = oldval & ~CANCELTYPE_BITMASK; + int newval = oldval | CANCELSTATE_BITMASK; + if (oldval == newval) + break; + if (atomic_compare_exchange_weak_acquire (&self->cancelhandling, + &oldval, newval)) + break; } - while (!atomic_compare_exchange_weak_acquire (&self->cancelhandling, - &oldval, newval)); - /* We cannot return when we are being canceled. Upon return the - thread might be things which would have to be undone. The - following loop should loop until the cancellation signal is - delivered. */ - while (__glibc_unlikely ((newval & (CANCELING_BITMASK | CANCELED_BITMASK)) - == CANCELING_BITMASK)) - { - futex_wait_simple ((unsigned int *) &self->cancelhandling, newval, - FUTEX_PRIVATE); - newval = atomic_load_relaxed (&self->cancelhandling); - } + __do_cancel (PTHREAD_CANCELED); } -libc_hidden_def (__pthread_disable_asynccancel) diff --git a/nptl/cleanup_defer.c b/nptl/cleanup_defer.c index dc08eda003..db32c4fc26 100644 --- a/nptl/cleanup_defer.c +++ b/nptl/cleanup_defer.c @@ -82,10 +82,7 @@ ___pthread_unregister_cancel_restore (__pthread_unwind_buf_t *buf) &cancelhandling, newval)); if (cancel_enabled_and_canceled (cancelhandling)) - { - self->result = PTHREAD_CANCELED; - __do_cancel (); - } + __do_cancel (PTHREAD_CANCELED); } } versioned_symbol (libc, ___pthread_unregister_cancel_restore, diff --git a/nptl/descr-const.sym b/nptl/descr-const.sym new file mode 100644 index 0000000000..8608248354 --- /dev/null +++ b/nptl/descr-const.sym @@ -0,0 +1,6 @@ +#include + +-- Not strictly offsets, these values are using thread cancellation by arch +-- specific cancel entrypoint. 
+TCB_CANCELED_BIT CANCELED_BIT +TCB_CANCELED_BITMASK CANCELED_BITMASK diff --git a/nptl/descr.h b/nptl/descr.h index 8cef95810c..65d3baaee3 100644 --- a/nptl/descr.h +++ b/nptl/descr.h @@ -425,6 +425,24 @@ struct pthread + sizeof ((struct pthread) {}.rseq_area)) } __attribute ((aligned (TCB_ALIGNMENT))); +static inline bool +cancel_enabled (int value) +{ + return (value & CANCELSTATE_BITMASK) == 0; +} + +static inline bool +cancel_async_enabled (int value) +{ + return (value & CANCELTYPE_BITMASK) != 0; +} + +static inline bool +cancel_exiting (int value) +{ + return (value & EXITING_BITMASK) != 0; +} + static inline bool cancel_enabled_and_canceled (int value) { diff --git a/nptl/libc-cleanup.c b/nptl/libc-cleanup.c index fe042c85aa..20d746cbb7 100644 --- a/nptl/libc-cleanup.c +++ b/nptl/libc-cleanup.c @@ -69,10 +69,7 @@ __libc_cleanup_pop_restore (struct _pthread_cleanup_buffer *buffer) &cancelhandling, newval)); if (cancel_enabled_and_canceled (cancelhandling)) - { - self->result = PTHREAD_CANCELED; - __do_cancel (); - } + __do_cancel (PTHREAD_CANCELED); } } libc_hidden_def (__libc_cleanup_pop_restore) diff --git a/nptl/pthread_cancel.c b/nptl/pthread_cancel.c index 69701db6f9..012c4ebeb8 100644 --- a/nptl/pthread_cancel.c +++ b/nptl/pthread_cancel.c @@ -23,6 +23,7 @@ #include #include #include +#include #include #include #include @@ -40,31 +41,16 @@ sigcancel_handler (int sig, siginfo_t *si, void *ctx) || si->si_code != SI_TKILL) return; + /* Check if asynchronous cancellation mode is set or if interrupted + instruction pointer falls within the cancellable syscall bridge. For + interruptable syscalls with external side-effects (i.e. partial reads), + the kernel will set the IP to after __syscall_cancel_arch_end, thus + disabling the cancellation and allowing the process to handle such + conditions. */ struct pthread *self = THREAD_SELF; - int oldval = atomic_load_relaxed (&self->cancelhandling); - while (1) - { - /* We are canceled now. When canceled by another thread this flag - is already set but if the signal is directly send (internally or - from another process) is has to be done here. */ - int newval = oldval | CANCELING_BITMASK | CANCELED_BITMASK; - - if (oldval == newval || (oldval & EXITING_BITMASK) != 0) - /* Already canceled or exiting. */ - break; - - if (atomic_compare_exchange_weak_acquire (&self->cancelhandling, - &oldval, newval)) - { - self->result = PTHREAD_CANCELED; - - /* Make sure asynchronous cancellation is still enabled. */ - if ((oldval & CANCELTYPE_BITMASK) != 0) - /* Run the registered destructors and terminate the thread. */ - __do_cancel (); - } - } + if (cancel_async_enabled (oldval) || cancellation_pc_check (ctx)) + __syscall_do_cancel (); } int @@ -106,15 +92,13 @@ __pthread_cancel (pthread_t th) /* Some syscalls are never restarted after being interrupted by a signal handler, regardless of the use of SA_RESTART (they always fail with EINTR). So pthread_cancel cannot send SIGCANCEL unless the cancellation - is enabled and set as asynchronous (in this case the cancellation will - be acted in the cancellation handler instead by the syscall wrapper). - Otherwise the target thread is set as 'cancelling' (CANCELING_BITMASK) + is enabled. + In this case the target thread is set as 'cancelled' (CANCELED_BITMASK) by atomically setting 'cancelhandling' and the cancelation will be acted upon on next cancellation entrypoing in the target thread. 
- It also requires to atomically check if cancellation is enabled and - asynchronous, so both cancellation state and type are tracked on - 'cancelhandling'. */ + It also requires to atomically check if cancellation is enabled, so the + state are also tracked on 'cancelhandling'. */ int result = 0; int oldval = atomic_load_relaxed (&pd->cancelhandling); @@ -122,19 +106,17 @@ __pthread_cancel (pthread_t th) do { again: - newval = oldval | CANCELING_BITMASK | CANCELED_BITMASK; + newval = oldval | CANCELED_BITMASK; if (oldval == newval) break; - /* If the cancellation is handled asynchronously just send a - signal. We avoid this if possible since it's more - expensive. */ - if (cancel_enabled_and_canceled_and_async (newval)) + /* Only send the SIGANCEL signal if cancellation is enabled, since some + syscalls are never restarted even with SA_RESTART. The signal + will act iff async cancellation is enabled. */ + if (cancel_enabled (newval)) { - /* Mark the cancellation as "in progress". */ - int newval2 = oldval | CANCELING_BITMASK; if (!atomic_compare_exchange_weak_acquire (&pd->cancelhandling, - &oldval, newval2)) + &oldval, newval)) goto again; if (pd == THREAD_SELF) @@ -143,9 +125,8 @@ __pthread_cancel (pthread_t th) pthread_create, so the signal handler may not have been set up for a self-cancel. */ { - pd->result = PTHREAD_CANCELED; - if ((newval & CANCELTYPE_BITMASK) != 0) - __do_cancel (); + if (cancel_async_enabled (newval)) + __do_cancel (PTHREAD_CANCELED); } else /* The cancellation handler will take care of marking the @@ -154,19 +135,18 @@ __pthread_cancel (pthread_t th) break; } - - /* A single-threaded process should be able to kill itself, since - there is nothing in the POSIX specification that says that it - cannot. So we set multiple_threads to true so that cancellation - points get executed. */ - THREAD_SETMEM (THREAD_SELF, header.multiple_threads, 1); -#ifndef TLS_MULTIPLE_THREADS_IN_TCB - __libc_single_threaded_internal = 0; -#endif } while (!atomic_compare_exchange_weak_acquire (&pd->cancelhandling, &oldval, newval)); + /* A single-threaded process should be able to kill itself, since there is + nothing in the POSIX specification that says that it cannot. So we set + multiple_threads to true so that cancellation points get executed. 
*/ + THREAD_SETMEM (THREAD_SELF, header.multiple_threads, 1); +#ifndef TLS_MULTIPLE_THREADS_IN_TCB + __libc_single_threaded_internal = 0; +#endif + return result; } versioned_symbol (libc, __pthread_cancel, pthread_cancel, GLIBC_2_34); diff --git a/nptl/pthread_exit.c b/nptl/pthread_exit.c index dc2635f827..600ab036d9 100644 --- a/nptl/pthread_exit.c +++ b/nptl/pthread_exit.c @@ -31,9 +31,7 @@ __pthread_exit (void *value) " must be installed for pthread_exit to work\n"); } - THREAD_SETMEM (THREAD_SELF, result, value); - - __do_cancel (); + __do_cancel (value); } libc_hidden_def (__pthread_exit) weak_alias (__pthread_exit, pthread_exit) diff --git a/nptl/pthread_setcancelstate.c b/nptl/pthread_setcancelstate.c index 18fb42a5c4..787c45a32b 100644 --- a/nptl/pthread_setcancelstate.c +++ b/nptl/pthread_setcancelstate.c @@ -48,7 +48,7 @@ __pthread_setcancelstate (int state, int *oldstate) &oldval, newval)) { if (cancel_enabled_and_canceled_and_async (newval)) - __do_cancel (); + __do_cancel (PTHREAD_CANCELED); break; } diff --git a/nptl/pthread_setcanceltype.c b/nptl/pthread_setcanceltype.c index cf441cef91..3b5b0346ef 100644 --- a/nptl/pthread_setcanceltype.c +++ b/nptl/pthread_setcanceltype.c @@ -48,7 +48,7 @@ __pthread_setcanceltype (int type, int *oldtype) if (cancel_enabled_and_canceled_and_async (newval)) { THREAD_SETMEM (self, result, PTHREAD_CANCELED); - __do_cancel (); + __do_cancel (PTHREAD_CANCELED); } break; diff --git a/nptl/pthread_testcancel.c b/nptl/pthread_testcancel.c index a0197b5312..be0e8d5a78 100644 --- a/nptl/pthread_testcancel.c +++ b/nptl/pthread_testcancel.c @@ -25,10 +25,7 @@ ___pthread_testcancel (void) struct pthread *self = THREAD_SELF; int cancelhandling = atomic_load_relaxed (&self->cancelhandling); if (cancel_enabled_and_canceled (cancelhandling)) - { - self->result = PTHREAD_CANCELED; - __do_cancel (); - } + __do_cancel (PTHREAD_CANCELED); } versioned_symbol (libc, ___pthread_testcancel, pthread_testcancel, GLIBC_2_34); libc_hidden_ver (___pthread_testcancel, __pthread_testcancel) diff --git a/nptl/tst-cancel31.c b/nptl/tst-cancel31.c new file mode 100644 index 0000000000..f9cc8245e1 --- /dev/null +++ b/nptl/tst-cancel31.c @@ -0,0 +1,100 @@ +/* Verify side-effects of cancellable syscalls (BZ #12683). + Copyright (C) 2023 Free Software Foundation, Inc. + This file is part of the GNU C Library. + + The GNU C Library is free software; you can redistribute it and/or + modify it under the terms of the GNU Lesser General Public + License as published by the Free Software Foundation; either + version 2.1 of the License, or (at your option) any later version. + + The GNU C Library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + Lesser General Public License for more details. + + You should have received a copy of the GNU Lesser General Public + License along with the GNU C Library; if not, see + . */ + +/* This testcase checks if there is resource leakage if the syscall has + returned from kernelspace, but before userspace saves the return + value. The 'leaker' thread should be able to close the file descriptor + if the resource is already allocated, meaning that if the cancellation + signal arrives *after* the open syscal return from kernel, the + side-effect should be visible to application. 
*/ + +#include +#include +#include +#include +#include + +#include +#include +#include +#include +#include +#include + +static void * +writeopener (void *arg) +{ + int fd; + for (;;) + { + fd = open (arg, O_WRONLY); + xclose (fd); + } + return NULL; +} + +static void * +leaker (void *arg) +{ + int fd = open (arg, O_RDONLY); + TEST_VERIFY_EXIT (fd > 0); + pthread_setcancelstate (PTHREAD_CANCEL_DISABLE, 0); + xclose (fd); + return NULL; +} + +static int +do_test (void) +{ + enum { + iter_count = 1000 + }; + + char *dir = support_create_temp_directory ("tst-cancel28"); + char *name = xasprintf ("%s/fifo", dir); + TEST_COMPARE (mkfifo (name, 0600), 0); + add_temp_file (name); + + struct support_descriptors *descrs = support_descriptors_list (); + + srand (1); + + xpthread_create (NULL, writeopener, name); + for (int i = 0; i < iter_count; i++) + { + pthread_t td = xpthread_create (NULL, leaker, name); + struct timespec ts = + { .tv_nsec = rand () % 100000, .tv_sec = 0 }; + nanosleep (&ts, NULL); + /* Ignore pthread_cancel result because it might be the + case when pthread_cancel is called when thread is already + exited. */ + pthread_cancel (td); + xpthread_join (td); + } + + support_descriptors_check (descrs); + + support_descriptors_free (descrs); + + free (name); + + return 0; +} + +#include diff --git a/sysdeps/generic/syscall_types.h b/sysdeps/generic/syscall_types.h new file mode 100644 index 0000000000..2ddeaa2b5f --- /dev/null +++ b/sysdeps/generic/syscall_types.h @@ -0,0 +1,25 @@ +/* Types and macros used for syscall issuing. + Copyright (C) 2023 Free Software Foundation, Inc. + This file is part of the GNU C Library. + + The GNU C Library is free software; you can redistribute it and/or + modify it under the terms of the GNU Lesser General Public + License as published by the Free Software Foundation; either + version 2.1 of the License, or (at your option) any later version. + + The GNU C Library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + Lesser General Public License for more details. + + You should have received a copy of the GNU Lesser General Public + License along with the GNU C Library; if not, see + . */ + +#ifndef _SYSCALL_TYPES_H +#define _SYSCALL_TYPES_H + +typedef long int __syscall_arg_t; +#define __SSC(__x) ((__syscall_arg_t) (__x)) + +#endif diff --git a/sysdeps/nptl/cancellation-pc-check.h b/sysdeps/nptl/cancellation-pc-check.h new file mode 100644 index 0000000000..cb38ad6819 --- /dev/null +++ b/sysdeps/nptl/cancellation-pc-check.h @@ -0,0 +1,54 @@ +/* Architecture specific code for pthread cancellation handling. + Copyright (C) 2023 Free Software Foundation, Inc. + This file is part of the GNU C Library. + + The GNU C Library is free software; you can redistribute it and/or + modify it under the terms of the GNU Lesser General Public + License as published by the Free Software Foundation; either + version 2.1 of the License, or (at your option) any later version. + + The GNU C Library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + Lesser General Public License for more details. + + You should have received a copy of the GNU Lesser General Public + License along with the GNU C Library; if not, see + . 
*/ + +#ifndef _NPTL_CANCELLATION_PC_CHECK +#define _NPTL_CANCELLATION_PC_CHECK + +#include + +/* For syscalls with side-effects (e.g read that might return partial read), + the kernel cannot restart the syscall when interrupted by a signal, it must + return from the call with whatever partial result. In this case, the saved + program counter is set just after the syscall instruction, so the SIGCANCEL + handler should not act on cancellation. + + The __syscall_cancel_arch function, used for all cancellable syscalls, + contains two extra markers, __syscall_cancel_arch_start and + __syscall_cancel_arch_end. The former points to just before the initial + conditional branch that checks if the thread has received a cancellation + request, while former points to the instruction after the one responsible + to issue the syscall. + + The function check if the program counter (PC) from ucontext_t CTX is + within the start and then end boundary from the __syscall_cancel_arch + bridge. Return TRUE if the PC is within the boundary, meaning the + syscall does not have any side effects; or FALSE otherwise. */ + +static __always_inline bool +cancellation_pc_check (void *ctx) +{ + /* Both are defined in syscall_cancel.S. */ + extern const char __syscall_cancel_arch_start[1]; + extern const char __syscall_cancel_arch_end[1]; + + uintptr_t pc = sigcontext_get_pc (ctx); + return pc >= (uintptr_t) __syscall_cancel_arch_start + && pc < (uintptr_t) __syscall_cancel_arch_end; +} + +#endif diff --git a/sysdeps/nptl/lowlevellock-futex.h b/sysdeps/nptl/lowlevellock-futex.h index 278213a37b..c205806300 100644 --- a/sysdeps/nptl/lowlevellock-futex.h +++ b/sysdeps/nptl/lowlevellock-futex.h @@ -21,7 +21,6 @@ #ifndef __ASSEMBLER__ # include -# include # include #endif @@ -120,21 +119,10 @@ nr_wake, nr_move, mutex, val) /* Like lll_futex_wait, but acting as a cancellable entrypoint. */ -# define lll_futex_wait_cancel(futexp, val, private) \ - ({ \ - int __oldtype = LIBC_CANCEL_ASYNC (); \ - long int __err = lll_futex_wait (futexp, val, LLL_SHARED); \ - LIBC_CANCEL_RESET (__oldtype); \ - __err; \ - }) - -/* Like lll_futex_timed_wait, but acting as a cancellable entrypoint. */ -# define lll_futex_timed_wait_cancel(futexp, val, timeout, private) \ - ({ \ - int __oldtype = LIBC_CANCEL_ASYNC (); \ - long int __err = lll_futex_timed_wait (futexp, val, timeout, private); \ - LIBC_CANCEL_RESET (__oldtype); \ - __err; \ +# define lll_futex_wait_cancel(futexp, val, private) \ + ({ \ + int __op = __lll_private_flag (FUTEX_WAIT, private); \ + INTERNAL_SYSCALL_CANCEL (futex, futexp, __op, val, NULL); \ }) #endif /* !__ASSEMBLER__ */ diff --git a/sysdeps/nptl/pthreadP.h b/sysdeps/nptl/pthreadP.h index 30e8a2d177..7d9b95e6ac 100644 --- a/sysdeps/nptl/pthreadP.h +++ b/sysdeps/nptl/pthreadP.h @@ -261,10 +261,12 @@ libc_hidden_proto (__pthread_unregister_cancel) /* Called when a thread reacts on a cancellation request. */ static inline void __attribute ((noreturn, always_inline)) -__do_cancel (void) +__do_cancel (void *result) { struct pthread *self = THREAD_SELF; + self->result = result; + /* Make sure we get no more cancellations. 
*/ atomic_fetch_or_relaxed (&self->cancelhandling, EXITING_BITMASK); @@ -272,6 +274,13 @@ __do_cancel (void) THREAD_GETMEM (self, cleanup_jmp_buf)); } +extern long int __syscall_cancel_arch (volatile int *, __syscall_arg_t nr, + __syscall_arg_t arg1, __syscall_arg_t arg2, __syscall_arg_t arg3, + __syscall_arg_t arg4, __syscall_arg_t arg5, __syscall_arg_t arg6 + __SYSCALL_CANCEL7_ARCH_ARG_DEF) attribute_hidden; + +extern _Noreturn void __syscall_do_cancel (void) attribute_hidden; + /* Internal prototypes. */ diff --git a/sysdeps/powerpc/powerpc32/sysdep.h b/sysdeps/powerpc/powerpc32/sysdep.h index 62de4ca2e5..852a755c7c 100644 --- a/sysdeps/powerpc/powerpc32/sysdep.h +++ b/sysdeps/powerpc/powerpc32/sysdep.h @@ -104,6 +104,9 @@ GOT_LABEL: ; \ # define JUMPTARGET(name) name #endif +#define TAIL_CALL_NO_RETURN(__func) \ + b __func@local + #if defined SHARED && defined PIC && !defined NO_HIDDEN # undef HIDDEN_JUMPTARGET # define HIDDEN_JUMPTARGET(name) __GI_##name##@local diff --git a/sysdeps/powerpc/powerpc64/sysdep.h b/sysdeps/powerpc/powerpc64/sysdep.h index c363939e1a..643aadaae0 100644 --- a/sysdeps/powerpc/powerpc64/sysdep.h +++ b/sysdeps/powerpc/powerpc64/sysdep.h @@ -352,6 +352,25 @@ LT_LABELSUFFIX(name,_name_end): ; \ ENTRY (name); \ DO_CALL (SYS_ify (syscall_name)) +#ifdef SHARED +# define TAIL_CALL_NO_RETURN(__func) \ + b JUMPTARGET(__func) +#else +# define TAIL_CALL_NO_RETURN(__func) \ + .ifdef .Local ## __func; \ + b .Local ## __func; \ + .else; \ +.Local ## __func: \ + mflr 0; \ + std 0,FRAME_LR_SAVE(1); \ + stdu 1,-FRAME_MIN_SIZE(1); \ + cfi_adjust_cfa_offset(FRAME_MIN_SIZE); \ + cfi_offset(lr,FRAME_LR_SAVE); \ + bl JUMPTARGET(__func); \ + nop; \ + .endif +#endif + #ifdef SHARED #define TAIL_CALL_SYSCALL_ERROR \ b JUMPTARGET (NOTOC (__syscall_error)) diff --git a/sysdeps/pthread/tst-cancel2.c b/sysdeps/pthread/tst-cancel2.c index ac38b50115..b4f7098235 100644 --- a/sysdeps/pthread/tst-cancel2.c +++ b/sysdeps/pthread/tst-cancel2.c @@ -32,6 +32,10 @@ tf (void *arg) char buf[100000]; while (write (fd[1], buf, sizeof (buf)) > 0); + /* The write can return -1/EPIPE if the pipe was closed before the + thread calls write, which signals a side-effect that must be + signaled to the thread. */ + pthread_testcancel (); return (void *) 42l; } diff --git a/sysdeps/sh/sysdep.h b/sysdeps/sh/sysdep.h index 0c9e5626e9..377d29b950 100644 --- a/sysdeps/sh/sysdep.h +++ b/sysdeps/sh/sysdep.h @@ -24,6 +24,7 @@ #define ALIGNARG(log2) log2 #define ASM_SIZE_DIRECTIVE(name) .size name,.-name +#define L(label) .L##label #ifdef SHARED #define PLTJMP(_x) _x##@PLT diff --git a/sysdeps/unix/sysdep.h b/sysdeps/unix/sysdep.h index a19e84165b..adc8d71f49 100644 --- a/sysdeps/unix/sysdep.h +++ b/sysdeps/unix/sysdep.h @@ -24,6 +24,9 @@ #define SYSCALL__(name, args) PSEUDO (__##name, name, args) #define SYSCALL(name, args) PSEUDO (name, name, args) +#ifndef __ASSEMBLER__ +# include + #define __SYSCALL_CONCAT_X(a,b) a##b #define __SYSCALL_CONCAT(a,b) __SYSCALL_CONCAT_X (a, b) @@ -108,42 +111,148 @@ #define INLINE_SYSCALL_CALL(...) \ __INLINE_SYSCALL_DISP (__INLINE_SYSCALL, __VA_ARGS__) -#if IS_IN (rtld) -/* All cancellation points are compiled out in the dynamic loader. 
*/ -# define NO_SYSCALL_CANCEL_CHECKING 1 +#define __INTERNAL_SYSCALL_NCS0(name) \ + INTERNAL_SYSCALL_NCS (name, 0) +#define __INTERNAL_SYSCALL_NCS1(name, a1) \ + INTERNAL_SYSCALL_NCS (name, 1, a1) +#define __INTERNAL_SYSCALL_NCS2(name, a1, a2) \ + INTERNAL_SYSCALL_NCS (name, 2, a1, a2) +#define __INTERNAL_SYSCALL_NCS3(name, a1, a2, a3) \ + INTERNAL_SYSCALL_NCS (name, 3, a1, a2, a3) +#define __INTERNAL_SYSCALL_NCS4(name, a1, a2, a3, a4) \ + INTERNAL_SYSCALL_NCS (name, 4, a1, a2, a3, a4) +#define __INTERNAL_SYSCALL_NCS5(name, a1, a2, a3, a4, a5) \ + INTERNAL_SYSCALL_NCS (name, 5, a1, a2, a3, a4, a5) +#define __INTERNAL_SYSCALL_NCS6(name, a1, a2, a3, a4, a5, a6) \ + INTERNAL_SYSCALL_NCS (name, 6, a1, a2, a3, a4, a5, a6) +#define __INTERNAL_SYSCALL_NCS7(name, a1, a2, a3, a4, a5, a6, a7) \ + INTERNAL_SYSCALL_NCS (name, 7, a1, a2, a3, a4, a5, a6, a7) + +/* Issue a syscall defined by syscall number plus any other argument required. + It is similar to INTERNAL_SYSCALL_NCS macro, but without the need to pass + the expected argument number as third parameter. */ +#define INTERNAL_SYSCALL_NCS_CALL(...) \ + __INTERNAL_SYSCALL_DISP (__INTERNAL_SYSCALL_NCS, __VA_ARGS__) + +/* Cancellation macros. */ +#include + +/* Adjust both the __syscall_cancel and the SYSCALL_CANCEL macro to support + 7 arguments instead of default 6 (curently only mip32). It avoid add + the requirement to each architecture to support 7 argument macros + {INTERNAL,INLINE}_SYSCALL. */ +#ifdef HAVE_CANCELABLE_SYSCALL_WITH_7_ARGS +# define __SYSCALL_CANCEL7_ARG_DEF __syscall_arg_t a7, +# define __SYSCALL_CANCEL7_ARCH_ARG_DEF ,__syscall_arg_t a7 +# define __SYSCALL_CANCEL7_ARG 0, +# define __SYSCALL_CANCEL7_ARG7 a7, +# define __SYSCALL_CANCEL7_ARCH_ARG7 , a7 #else -# define NO_SYSCALL_CANCEL_CHECKING SINGLE_THREAD_P +# define __SYSCALL_CANCEL7_ARG_DEF +# define __SYSCALL_CANCEL7_ARCH_ARG_DEF +# define __SYSCALL_CANCEL7_ARG +# define __SYSCALL_CANCEL7_ARG7 +# define __SYSCALL_CANCEL7_ARCH_ARG7 #endif +long int __internal_syscall_cancel (__syscall_arg_t a1, __syscall_arg_t a2, + __syscall_arg_t a3, __syscall_arg_t a4, + __syscall_arg_t a5, __syscall_arg_t a6, + __SYSCALL_CANCEL7_ARG_DEF + __syscall_arg_t nr) attribute_hidden; -#define SYSCALL_CANCEL(...) \ - ({ \ - long int sc_ret; \ - if (NO_SYSCALL_CANCEL_CHECKING) \ - sc_ret = INLINE_SYSCALL_CALL (__VA_ARGS__); \ - else \ - { \ - int sc_cancel_oldtype = LIBC_CANCEL_ASYNC (); \ - sc_ret = INLINE_SYSCALL_CALL (__VA_ARGS__); \ - LIBC_CANCEL_RESET (sc_cancel_oldtype); \ - } \ - sc_ret; \ - }) +long int __syscall_cancel (__syscall_arg_t arg1, __syscall_arg_t arg2, + __syscall_arg_t arg3, __syscall_arg_t arg4, + __syscall_arg_t arg5, __syscall_arg_t arg6, + __SYSCALL_CANCEL7_ARG_DEF + __syscall_arg_t nr) attribute_hidden; -/* Issue a syscall defined by syscall number plus any other argument - required. Any error will be returned unmodified (including errno). */ -#define INTERNAL_SYSCALL_CANCEL(...) 
\ - ({ \ - long int sc_ret; \ - if (NO_SYSCALL_CANCEL_CHECKING) \ - sc_ret = INTERNAL_SYSCALL_CALL (__VA_ARGS__); \ - else \ - { \ - int sc_cancel_oldtype = LIBC_CANCEL_ASYNC (); \ - sc_ret = INTERNAL_SYSCALL_CALL (__VA_ARGS__); \ - LIBC_CANCEL_RESET (sc_cancel_oldtype); \ - } \ - sc_ret; \ - }) +#define __SYSCALL_CANCEL0(name) \ + __syscall_cancel (0, 0, 0, 0, 0, 0, __SYSCALL_CANCEL7_ARG __NR_##name) +#define __SYSCALL_CANCEL1(name, a1) \ + __syscall_cancel (__SSC (a1), 0, 0, 0, 0, 0, \ + __SYSCALL_CANCEL7_ARG __NR_##name) +#define __SYSCALL_CANCEL2(name, a1, a2) \ + __syscall_cancel (__SSC (a1), __SSC (a2), 0, 0, 0, 0, \ + __SYSCALL_CANCEL7_ARG __NR_##name) +#define __SYSCALL_CANCEL3(name, a1, a2, a3) \ + __syscall_cancel (__SSC (a1), __SSC (a2), __SSC (a3), 0, 0, 0, \ + __SYSCALL_CANCEL7_ARG __NR_##name) +#define __SYSCALL_CANCEL4(name, a1, a2, a3, a4) \ + __syscall_cancel (__SSC (a1), __SSC (a2), __SSC (a3), \ + __SSC(a4), 0, 0, __SYSCALL_CANCEL7_ARG __NR_##name) +#define __SYSCALL_CANCEL5(name, a1, a2, a3, a4, a5) \ + __syscall_cancel (__SSC (a1), __SSC (a2), __SSC (a3), __SSC(a4), \ + __SSC (a5), 0, __SYSCALL_CANCEL7_ARG __NR_##name) +#define __SYSCALL_CANCEL6(name, a1, a2, a3, a4, a5, a6) \ + __syscall_cancel (__SSC (a1), __SSC (a2), __SSC (a3), __SSC (a4), \ + __SSC (a5), __SSC (a6), __SYSCALL_CANCEL7_ARG \ + __NR_##name) +#define __SYSCALL_CANCEL7(name, a1, a2, a3, a4, a5, a6, a7) \ + __syscall_cancel (__SSC (a1), __SSC (a2), __SSC (a3), __SSC (a4), \ + __SSC (a5), __SSC (a6), __SSC (a7), __NR_##name) + +#define __SYSCALL_CANCEL_NARGS_X(a,b,c,d,e,f,g,h,n,...) n +#define __SYSCALL_CANCEL_NARGS(...) \ + __SYSCALL_CANCEL_NARGS_X (__VA_ARGS__,7,6,5,4,3,2,1,0,) +#define __SYSCALL_CANCEL_CONCAT_X(a,b) a##b +#define __SYSCALL_CANCEL_CONCAT(a,b) __SYSCALL_CANCEL_CONCAT_X (a, b) +#define __SYSCALL_CANCEL_DISP(b,...) \ + __SYSCALL_CANCEL_CONCAT (b,__SYSCALL_CANCEL_NARGS(__VA_ARGS__))(__VA_ARGS__) + +/* Issue a cancellable syscall defined first argument plus any other argument + required. If and error occurs its value, the macro returns -1 and sets + errno accordingly. */ +#define __SYSCALL_CANCEL_CALL(...) 
\ + __SYSCALL_CANCEL_DISP (__SYSCALL_CANCEL, __VA_ARGS__) + +#define __INTERNAL_SYSCALL_CANCEL0(name) \ + __internal_syscall_cancel (0, 0, 0, 0, 0, 0, __SYSCALL_CANCEL7_ARG \ + __NR_##name) +#define __INTERNAL_SYSCALL_CANCEL1(name, a1) \ + __internal_syscall_cancel (__SSC (a1), 0, 0, 0, 0, 0, \ + __SYSCALL_CANCEL7_ARG __NR_##name) +#define __INTERNAL_SYSCALL_CANCEL2(name, a1, a2) \ + __internal_syscall_cancel (__SSC (a1), __SSC (a2), 0, 0, 0, 0, \ + __SYSCALL_CANCEL7_ARG __NR_##name) +#define __INTERNAL_SYSCALL_CANCEL3(name, a1, a2, a3) \ + __internal_syscall_cancel (__SSC (a1), __SSC (a2), __SSC (a3), 0, \ + 0, 0, __SYSCALL_CANCEL7_ARG __NR_##name) +#define __INTERNAL_SYSCALL_CANCEL4(name, a1, a2, a3, a4) \ + __internal_syscall_cancel (__SSC (a1), __SSC (a2), __SSC (a3), \ + __SSC(a4), 0, 0, \ + __SYSCALL_CANCEL7_ARG __NR_##name) +#define __INTERNAL_SYSCALL_CANCEL5(name, a1, a2, a3, a4, a5) \ + __internal_syscall_cancel (__SSC (a1), __SSC (a2), __SSC (a3), \ + __SSC(a4), __SSC (a5), 0, \ + __SYSCALL_CANCEL7_ARG __NR_##name) +#define __INTERNAL_SYSCALL_CANCEL6(name, a1, a2, a3, a4, a5, a6) \ + __internal_syscall_cancel (__SSC (a1), __SSC (a2), __SSC (a3), \ + __SSC (a4), __SSC (a5), __SSC (a6), \ + __SYSCALL_CANCEL7_ARG __NR_##name) +#define __INTERNAL_SYSCALL_CANCEL7(name, a1, a2, a3, a4, a5, a6, a7) \ + __internal_syscall_cancel (__SSC (a1), __SSC (a2), __SSC (a3), \ + __SSC (a4), __SSC (a5), __SSC (a6), \ + __SSC (a7), __NR_##name) + +/* Issue a cancellable syscall defined by syscall number NAME plus any other + argument required. If an error occurs its value is returned as an negative + number unmodified and errno is not set. */ +#define __INTERNAL_SYSCALL_CANCEL_CALL(...) \ + __SYSCALL_CANCEL_DISP (__INTERNAL_SYSCALL_CANCEL, __VA_ARGS__) + +#if IS_IN (rtld) +/* The loader does not need to handle thread cancellation, use direct + syscall instead. */ +# define INTERNAL_SYSCALL_CANCEL(...) INTERNAL_SYSCALL_CALL(__VA_ARGS__) +# define SYSCALL_CANCEL(...) INLINE_SYSCALL_CALL (__VA_ARGS__) +#else +# define INTERNAL_SYSCALL_CANCEL(...) \ + __INTERNAL_SYSCALL_CANCEL_CALL (__VA_ARGS__) +# define SYSCALL_CANCEL(...) \ + __SYSCALL_CANCEL_CALL (__VA_ARGS__) +#endif + +#endif /* __ASSEMBLER__ */ /* Machine-dependent sysdep.h files are expected to define the macro PSEUDO (function_name, syscall_name) to emit assembly code to define the diff --git a/sysdeps/unix/sysv/linux/aarch64/syscall_cancel.S b/sysdeps/unix/sysv/linux/aarch64/syscall_cancel.S new file mode 100644 index 0000000000..e91a431b36 --- /dev/null +++ b/sysdeps/unix/sysv/linux/aarch64/syscall_cancel.S @@ -0,0 +1,59 @@ +/* Cancellable syscall wrapper. Linux/AArch64 version. + Copyright (C) 2023 Free Software Foundation, Inc. + This file is part of the GNU C Library. + + The GNU C Library is free software; you can redistribute it and/or + modify it under the terms of the GNU Lesser General Public + License as published by the Free Software Foundation; either + version 2.1 of the License, or (at your option) any later version. + + The GNU C Library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + Lesser General Public License for more details. + + You should have received a copy of the GNU Lesser General Public + License along with the GNU C Library; if not, see + . 
*/ + +#include +#include + +/* long int [x0] __syscall_cancel_arch (int *cancelhandling [x0], + long int nr [x1], + long int arg1 [x2], + long int arg2 [x3], + long int arg3 [x4], + long int arg4 [x5], + long int arg5 [x6], + long int arg6 [x7]) */ + +ENTRY (__syscall_cancel_arch) + + .globl __syscall_cancel_arch_start +__syscall_cancel_arch_start: + + /* if (*cancelhandling & CANCELED_BITMASK) + __syscall_do_cancel() */ + ldr w0, [x0] + tbnz w0, TCB_CANCELED_BIT, 1f + + /* Issue a 6 argument syscall, the nr [x1] being the syscall + number. */ + mov x8, x1 + mov x0, x2 + mov x1, x3 + mov x2, x4 + mov x3, x5 + mov x4, x6 + mov x5, x7 + svc 0x0 + + .globl __syscall_cancel_arch_end +__syscall_cancel_arch_end: + ret + +1: + b __syscall_do_cancel + +END (__syscall_cancel_arch) diff --git a/sysdeps/unix/sysv/linux/alpha/syscall_cancel.S b/sysdeps/unix/sysv/linux/alpha/syscall_cancel.S new file mode 100644 index 0000000000..377eef48be --- /dev/null +++ b/sysdeps/unix/sysv/linux/alpha/syscall_cancel.S @@ -0,0 +1,80 @@ +/* Cancellable syscall wrapper. Linux/alpha version. + Copyright (C) 2023 Free Software Foundation, Inc. + This file is part of the GNU C Library. + + The GNU C Library is free software; you can redistribute it and/or + modify it under the terms of the GNU Lesser General Public + License as published by the Free Software Foundation; either + version 2.1 of the License, or (at your option) any later version. + + The GNU C Library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + Lesser General Public License for more details. + + You should have received a copy of the GNU Lesser General Public + License along with the GNU C Library; if not, see + . */ + +#include +#include + +/* long int __syscall_cancel_arch (int *ch, + __syscall_arg_t nr, + __syscall_arg_t arg1, + __syscall_arg_t arg2, + __syscall_arg_t arg3, + __syscall_arg_t arg4, + __syscall_arg_t arg5, + __syscall_arg_t arg6) */ + + .set noreorder + .set noat + .set nomacro +ENTRY (__syscall_cancel_arch) + .frame sp, 16, ra, 0 + .mask 0x4000000,-16 + cfi_startproc + ldah gp, 0(t12) + lda gp, 0(gp) + lda sp, -16(sp) + cfi_def_cfa_offset (16) + mov a1, v0 + stq ra, 0(sp) + cfi_offset (26, -16) + .prologue 1 + + .global __syscall_cancel_arch_start +__syscall_cancel_arch_start: + ldl t0, 0(a0) + addl zero, t0, t0 + /* if (*ch & CANCELED_BITMASK) */ + and t0, TCB_CANCELED_BITMASK, t0 + bne t0, 1f + mov a2, a0 + mov a3, a1 + mov a4, a2 + ldq a4, 16(sp) + mov a5, a3 + ldq a5, 24(sp) + .set macro + callsys + .set nomacro + + .global __syscall_cancel_arch_end +__syscall_cancel_arch_end: + subq zero, v0, t0 + ldq ra, 0(sp) + cmovne a3, t0, v0 + lda sp, 16(sp) + cfi_remember_state + cfi_restore (26) + cfi_def_cfa_offset (0) + ret zero, (ra), 1 + .align 4 +1: + cfi_restore_state + ldq t12, __syscall_do_cancel(gp) !literal!2 + jsr ra, (t12), __syscall_do_cancel !lituse_jsr!2 + cfi_endproc +END (__syscall_cancel_arch) diff --git a/sysdeps/unix/sysv/linux/arc/syscall_cancel.S b/sysdeps/unix/sysv/linux/arc/syscall_cancel.S new file mode 100644 index 0000000000..fa02af4163 --- /dev/null +++ b/sysdeps/unix/sysv/linux/arc/syscall_cancel.S @@ -0,0 +1,56 @@ +/* Cancellable syscall wrapper. Linux/ARC version. + Copyright (C) 2023 Free Software Foundation, Inc. + This file is part of the GNU C Library. 
+ + The GNU C Library is free software; you can redistribute it and/or + modify it under the terms of the GNU Lesser General Public + License as published by the Free Software Foundation; either + version 2.1 of the License, or (at your option) any later version. + + The GNU C Library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + Lesser General Public License for more details. + + You should have received a copy of the GNU Lesser General Public + License along with the GNU C Library; if not, see + . */ + +#include +#include + +/* long int __syscall_cancel_arch (int *cancelhandling, + __syscall_arg_t nr, + __syscall_arg_t arg1, + __syscall_arg_t arg2, + __syscall_arg_t arg3, + __syscall_arg_t arg4, + __syscall_arg_t arg5, + __syscall_arg_t arg6) */ + +ENTRY (__syscall_cancel_arch) + + .globl __syscall_cancel_arch_start +__syscall_cancel_arch_start: + ld_s r12,[r0] + bbit1 r12, TCB_CANCELED_BITMASK, 1f + mov_s r8, r1 + mov_s r0, r2 + mov_s r1, r3 + mov_s r2, r4 + mov_s r3, r5 + mov_s r4, r6 + mov_s r5, r7 + trap_s 0 + + .globl __syscall_cancel_arch_end +__syscall_cancel_arch_end: + j_s [blink] + + .align 4 +1: push_s blink + cfi_def_cfa_offset (4) + cfi_offset (31, -4) + bl @__syscall_do_cancel + +END (__syscall_cancel_arch) diff --git a/sysdeps/unix/sysv/linux/arm/syscall_cancel.S b/sysdeps/unix/sysv/linux/arm/syscall_cancel.S new file mode 100644 index 0000000000..6b899306e3 --- /dev/null +++ b/sysdeps/unix/sysv/linux/arm/syscall_cancel.S @@ -0,0 +1,78 @@ +/* Cancellable syscall wrapper. Linux/arm version. + Copyright (C) 2023 Free Software Foundation, Inc. + This file is part of the GNU C Library. + + The GNU C Library is free software; you can redistribute it and/or + modify it under the terms of the GNU Lesser General Public + License as published by the Free Software Foundation; either + version 2.1 of the License, or (at your option) any later version. + + The GNU C Library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + Lesser General Public License for more details. + + You should have received a copy of the GNU Lesser General Public + License along with the GNU C Library; if not, see + . */ + +#include +#include + +/* long int [r0] __syscall_cancel_arch (int *cancelhandling [r0], + long int nr [r1], + long int arg1 [r2], + long int arg2 [r3], + long int arg3 [SP], + long int arg4 [SP+4], + long int arg5 [SP+8], + long int arg6 [SP+12]) */ + + .syntax unified + +ENTRY (__syscall_cancel_arch) + .fnstart + mov ip, sp + stmfd sp!, {r4, r5, r6, r7, lr} + .save {r4, r5, r6, r7, lr} + + cfi_adjust_cfa_offset (20) + cfi_rel_offset (r4, 0) + cfi_rel_offset (r5, 4) + cfi_rel_offset (r6, 8) + cfi_rel_offset (r7, 12) + cfi_rel_offset (lr, 16) + + .globl __syscall_cancel_arch_start +__syscall_cancel_arch_start: + + /* if (*cancelhandling & CANCELED_BITMASK) + __syscall_do_cancel() */ + ldr r0, [r0] + tst r0, #TCB_CANCELED_BITMASK + bne 1f + + /* Issue a 6 argument syscall, the nr [r1] being the syscall + number. 
*/ + mov r7, r1 + mov r0, r2 + mov r1, r3 + ldmfd ip, {r2, r3, r4, r5, r6} + svc 0x0 + + .globl __syscall_cancel_arch_end +__syscall_cancel_arch_end: + ldmfd sp!, {r4, r5, r6, r7, lr} + cfi_adjust_cfa_offset (-20) + cfi_restore (r4) + cfi_restore (r5) + cfi_restore (r6) + cfi_restore (r7) + cfi_restore (lr) + BX (lr) + +1: + ldmfd sp!, {r4, r5, r6, r7, lr} + b __syscall_do_cancel + .fnend +END (__syscall_cancel_arch) diff --git a/sysdeps/unix/sysv/linux/csky/syscall_cancel.S b/sysdeps/unix/sysv/linux/csky/syscall_cancel.S new file mode 100644 index 0000000000..2989765f8c --- /dev/null +++ b/sysdeps/unix/sysv/linux/csky/syscall_cancel.S @@ -0,0 +1,114 @@ +/* Cancellable syscall wrapper. Linux/csky version. + Copyright (C) 2023 Free Software Foundation, Inc. + This file is part of the GNU C Library. + + The GNU C Library is free software; you can redistribute it and/or + modify it under the terms of the GNU Lesser General Public + License as published by the Free Software Foundation; either + version 2.1 of the License, or (at your option) any later version. + + The GNU C Library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + Lesser General Public License for more details. + + You should have received a copy of the GNU Lesser General Public + License along with the GNU C Library; if not, see + . */ + +#include +#include + +/* long int __syscall_cancel_arch (int *cancelhandling, + __syscall_arg_t nr, + __syscall_arg_t arg1, + __syscall_arg_t arg2, + __syscall_arg_t arg3, + __syscall_arg_t arg4, + __syscall_arg_t arg5, + __syscall_arg_t arg6) */ + +#ifdef SHARED +# define STACK_ADJ 4 +#else +# define STACK_ADJ 0 +#endif + +ENTRY (__syscall_cancel_arch) + subi sp, sp, 16 + STACK_ADJ + cfi_def_cfa_offset (16 + STACK_ADJ) +#ifdef SHARED + st.w gb, (sp, 16) + lrw t1, 1f@GOTPC + cfi_offset (gb, -4) + grs gb, 1f +1: +#endif + st.w lr, (sp, 12) + st.w l3, (sp, 8) + st.w l1, (sp, 4) + st.w l0, (sp, 0) +#ifdef SHARED + addu gb, gb, t1 +#endif + subi sp, sp, 16 + cfi_def_cfa_offset (32 + STACK_ADJ) + cfi_offset (lr, -( 4 + STACK_ADJ)) + cfi_offset (l3, -( 8 + STACK_ADJ)) + cfi_offset (l1, -(12 + STACK_ADJ)) + cfi_offset (l0, -(16 + STACK_ADJ)) + + mov l3, a1 + mov a1, a3 + ld.w a3, (sp, 32 + STACK_ADJ) + st.w a3, (sp, 0) + ld.w a3, (sp, 36 + STACK_ADJ) + st.w a3, (sp, 4) + ld.w a3, (sp, 40 + STACK_ADJ) + st.w a3, (sp, 8) + ld.w a3, (sp, 44 + STACK_ADJ) + st.w a3, (sp, 12) + + .globl __syscall_cancel_arch_start +__syscall_cancel_arch_start: + ld.w t0, (a0, 0) + andi t0, t0, TCB_CANCELED_BITMASK + jbnez t0, 2f + mov a0, a2 + ld.w a3, (sp, 4) + ld.w a2, (sp, 0) + ld.w l0, (sp, 8) + ld.w l1, (sp, 12) + trap 0 + + .globl __syscall_cancel_arch_end +__syscall_cancel_arch_end: + addi sp, sp, 16 + cfi_remember_state + cfi_def_cfa_offset (16 + STACK_ADJ) +#ifdef SHARED + ld.w gb, (sp, 16) + cfi_restore (gb) +#endif + ld.w lr, (sp, 12) + cfi_restore (lr) + ld.w l3, (sp, 8) + cfi_restore (l3) + ld.w l1, (sp, 4) + cfi_restore (l1) + ld.w l0, (sp, 0) + cfi_restore (l0) + addi sp, sp, 16 + cfi_def_cfa_offset (0) + rts + +2: + cfi_restore_state +#ifdef SHARED + lrw a3, __syscall_do_cancel@GOTOFF + addu a3, a3, gb + jsr a3 +#else + jbsr __syscall_do_cancel +#endif +END (__syscall_cancel_arch) diff --git a/sysdeps/unix/sysv/linux/hppa/syscall_cancel.S b/sysdeps/unix/sysv/linux/hppa/syscall_cancel.S new file mode 100644 index 0000000000..b9c19747ea --- /dev/null +++ 
b/sysdeps/unix/sysv/linux/hppa/syscall_cancel.S @@ -0,0 +1,81 @@ +/* Cancellable syscall wrapper. Linux/hppa version. + Copyright (C) 2023 Free Software Foundation, Inc. + This file is part of the GNU C Library. + + The GNU C Library is free software; you can redistribute it and/or + modify it under the terms of the GNU Lesser General Public + License as published by the Free Software Foundation; either + version 2.1 of the License, or (at your option) any later version. + + The GNU C Library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + Lesser General Public License for more details. + + You should have received a copy of the GNU Lesser General Public + License along with the GNU C Library. If not, see + . */ + +#include +#include + +/* long int __syscall_cancel_arch (int *cancelhandling, + long int nr, + long int arg1, + long int arg2, + long int arg3, + long int arg4, + long int arg5, + long int arg6) */ + + .text +ENTRY(__syscall_cancel_arch) + stw %r2,-20(%r30) + ldo 128(%r30),%r30 + cfi_def_cfa_offset (-128) + cfi_offset (2, -20) + ldw -180(%r30),%r28 + copy %r26,%r20 + stw %r28,-108(%r30) + ldw -184(%r30),%r28 + copy %r24,%r26 + stw %r28,-112(%r30) + ldw -188(%r30),%r28 + stw %r28,-116(%r30) + ldw -192(%r30),%r28 + stw %r4,-104(%r30) + stw %r28,-120(%r30) + copy %r25,%r28 + copy %r23,%r25 +#ifdef __PIC__ + stw %r19,-32(%r30) +#endif + cfi_offset (4, 24) + + .global __syscall_cancel_arch_start +__syscall_cancel_arch_start: + ldw 0(%r20),%r20 + bb,< %r20,31-TCB_CANCELED_BIT,1f + ldw -120(%r30),%r21 + ldw -116(%r30),%r22 + ldw -112(%r30),%r23 + ldw -108(%r30),%r24 + copy %r19, %r4 + ble 0x100(%sr2, %r0) + + .global __syscall_cancel_arch_end +__syscall_cancel_arch_end: + + copy %r28,%r20 + copy %r4,%r19 + + ldw -148(%r30),%r2 + ldw -104(%r30),%r4 + bv %r0(%r2) + ldo -128(%r30),%r30 +1: + bl __syscall_do_cancel,%r2 + nop + nop + +END(__syscall_cancel_arch) diff --git a/sysdeps/unix/sysv/linux/i386/syscall_cancel.S b/sysdeps/unix/sysv/linux/i386/syscall_cancel.S new file mode 100644 index 0000000000..46fb746da0 --- /dev/null +++ b/sysdeps/unix/sysv/linux/i386/syscall_cancel.S @@ -0,0 +1,104 @@ +/* Cancellable syscall wrapper. Linux/i686 version. + Copyright (C) 2023 Free Software Foundation, Inc. + This file is part of the GNU C Library. + + The GNU C Library is free software; you can redistribute it and/or + modify it under the terms of the GNU Lesser General Public + License as published by the Free Software Foundation; either + version 2.1 of the License, or (at your option) any later version. + + The GNU C Library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + Lesser General Public License for more details. + + You should have received a copy of the GNU Lesser General Public + License along with the GNU C Library; if not, see + . 
*/ + +#include +#include + +/* long int [eax] __syscall_cancel_arch (int *cancelhandling [SP], + long int nr [SP+4], + long int arg1 [SP+8], + long int arg2 [SP+12], + long int arg3 [SP+16], + long int arg4 [SP+20], + long int arg5 [SP+24], + long int arg6 [SP+28]) */ + +ENTRY (__syscall_cancel_arch) + pushl %ebp + cfi_def_cfa_offset (8) + cfi_offset (ebp, -8) + pushl %edi + cfi_def_cfa_offset (12) + cfi_offset (edi, -12) + pushl %esi + cfi_def_cfa_offset (16) + cfi_offset (esi, -16) + pushl %ebx + cfi_def_cfa_offset (20) + cfi_offset (ebx, -20) + + .global __syscall_cancel_arch_start +__syscall_cancel_arch_start: + + /* if (*cancelhandling & CANCELED_BITMASK) + __syscall_do_cancel() */ + movl 20(%esp), %eax + testb $TCB_CANCELED_BITMASK, (%eax) + jne 1f + + /* Issue a 6 argument syscall, the nr [%eax] being the syscall + number. */ + movl 24(%esp), %eax + movl 28(%esp), %ebx + movl 32(%esp), %ecx + movl 36(%esp), %edx + movl 40(%esp), %esi + movl 44(%esp), %edi + movl 48(%esp), %ebp + + /* We cannot use the vDSO helper for the syscall (__kernel_vsyscall) + because the PC returned by the kernel would point into the vDSO page + instead of between the expected __syscall_cancel_arch_{start,end} + marks. */ + int $0x80 + + .global __syscall_cancel_arch_end +__syscall_cancel_arch_end: + + popl %ebx + cfi_restore (ebx) + cfi_def_cfa_offset (16) + popl %esi + cfi_restore (esi) + cfi_def_cfa_offset (12) + popl %edi + cfi_restore (edi) + cfi_def_cfa_offset (8) + popl %ebp + cfi_restore (ebp) + cfi_def_cfa_offset (4) + ret + +1: + /* Although __syscall_do_cancel does not return, the stack needs to be + set up correctly for the unwinder. */ + popl %ebx + cfi_restore (ebx) + cfi_def_cfa_offset (16) + popl %esi + cfi_restore (esi) + cfi_def_cfa_offset (12) + popl %edi + cfi_restore (edi) + cfi_def_cfa_offset (8) + popl %ebp + cfi_restore (ebp) + cfi_def_cfa_offset (4) + jmp __syscall_do_cancel + +END (__syscall_cancel_arch) diff --git a/sysdeps/unix/sysv/linux/loongarch/syscall_cancel.S b/sysdeps/unix/sysv/linux/loongarch/syscall_cancel.S new file mode 100644 index 0000000000..edea9632ff --- /dev/null +++ b/sysdeps/unix/sysv/linux/loongarch/syscall_cancel.S @@ -0,0 +1,50 @@ +/* Cancellable syscall wrapper. Linux/loongarch version. + Copyright (C) 2023 Free Software Foundation, Inc. + This file is part of the GNU C Library. + + The GNU C Library is free software; you can redistribute it and/or + modify it under the terms of the GNU Lesser General Public + License as published by the Free Software Foundation; either + version 2.1 of the License, or (at your option) any later version. + + The GNU C Library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + Lesser General Public License for more details. + + You should have received a copy of the GNU Lesser General Public + License along with the GNU C Library; if not, see + . */ + +#include +#include + +ENTRY (__syscall_cancel_arch) + + .global __syscall_cancel_arch_start +__syscall_cancel_arch_start: + + /* if (*cancelhandling & CANCELED_BITMASK) + __syscall_do_cancel() */ + ld.w t0, a0, 0 + andi t0, t0, TCB_CANCELED_BITMASK + bnez t0, 1f + + /* Issue a 6 argument syscall.
*/ + move t1, a1 + move a0, a2 + move a1, a3 + move a2, a4 + move a3, a5 + move a4, a6 + move a5, a7 + move a7, t1 + syscall 0 + + .global __syscall_cancel_arch_end +__syscall_cancel_arch_end: + jr ra +1: + b __syscall_do_cancel + +END (__syscall_cancel_arch) diff --git a/sysdeps/unix/sysv/linux/m68k/syscall_cancel.S b/sysdeps/unix/sysv/linux/m68k/syscall_cancel.S new file mode 100644 index 0000000000..8923bcc71c --- /dev/null +++ b/sysdeps/unix/sysv/linux/m68k/syscall_cancel.S @@ -0,0 +1,84 @@ +/* Cancellable syscall wrapper. Linux/m68k version. + Copyright (C) 2023 Free Software Foundation, Inc. + This file is part of the GNU C Library. + + The GNU C Library is free software; you can redistribute it and/or + modify it under the terms of the GNU Lesser General Public + License as published by the Free Software Foundation; either + version 2.1 of the License, or (at your option) any later version. + + The GNU C Library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + Lesser General Public License for more details. + + You should have received a copy of the GNU Lesser General Public + License along with the GNU C Library; if not, see + . */ + +#include +#include + +/* long int __syscall_cancel_arch (int *cancelhandling, + __syscall_arg_t nr, + __syscall_arg_t arg1, + __syscall_arg_t arg2, + __syscall_arg_t arg3, + __syscall_arg_t arg4, + __syscall_arg_t arg5, + __syscall_arg_t arg6) */ + + +ENTRY (__syscall_cancel_arch) +#ifdef __mcoldfire__ + lea (-16,%sp),%sp + movem.l %d2-%d5,(%sp) +#else + movem.l %d2-%d5,-(%sp) +#endif + cfi_def_cfa_offset (20) + cfi_offset (2, -20) + cfi_offset (3, -16) + cfi_offset (4, -12) + cfi_offset (5, -8) + + .global __syscall_cancel_arch_start +__syscall_cancel_arch_start: + + move.l 20(%sp),%a0 + move.l (%a0),%d0 +#ifdef __mcoldfire__ + move.w %d0,%ccr + jeq 1f +#else + btst #TCB_CANCELED_BIT,%d0 + jne 1f +#endif + + move.l 48(%sp),%a0 + move.l 44(%sp),%d5 + move.l 40(%sp),%d4 + move.l 36(%sp),%d3 + move.l 32(%sp),%d2 + move.l 28(%sp),%d1 + move.l 24(%sp),%d0 + trap #0 + + .global __syscall_cancel_arch_end +__syscall_cancel_arch_end: + +#ifdef __mcoldfire__ + movem.l (%sp),%d2-%d5 + lea (16,%sp),%sp +#else + movem.l (%sp)+,%d2-%d5 +#endif + rts + +1: +#ifdef PIC + bsr.l __syscall_do_cancel +#else + jsr __syscall_do_cancel +#endif +END (__syscall_cancel_arch) diff --git a/sysdeps/unix/sysv/linux/microblaze/syscall_cancel.S b/sysdeps/unix/sysv/linux/microblaze/syscall_cancel.S new file mode 100644 index 0000000000..1f9d202bf5 --- /dev/null +++ b/sysdeps/unix/sysv/linux/microblaze/syscall_cancel.S @@ -0,0 +1,61 @@ +/* Cancellable syscall wrapper. Linux/microblaze version. + Copyright (C) 2023 Free Software Foundation, Inc. + This file is part of the GNU C Library. + + The GNU C Library is free software; you can redistribute it and/or + modify it under the terms of the GNU Lesser General Public + License as published by the Free Software Foundation; either + version 2.1 of the License, or (at your option) any later version. + + The GNU C Library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + Lesser General Public License for more details. + + You should have received a copy of the GNU Lesser General Public + License along with the GNU C Library; if not, see + . 
*/ + +#include +#include + +/* long int __syscall_cancel_arch (int *cancelhandling, + long int nr, + long int arg1, + long int arg2, + long int arg3, + long int arg4, + long int arg5, + long int arg6) */ + +ENTRY (__syscall_cancel_arch) + + .globl __syscall_cancel_arch_start +__syscall_cancel_arch_start: + + lwi r3,r5,0 + andi r3,r3,TCB_CANCELED_BITMASK + bneid r3,1f + addk r12,r6,r0 + + addk r5,r7,r0 + addk r6,r8,r0 + addk r7,r9,r0 + addk r8,r10,r0 + lwi r9,r1,56 + lwi r10,r1,60 + brki r14,8 + + .globl __syscall_cancel_arch_end +__syscall_cancel_arch_end: + + nop + lwi r15,r1,0 + rtsd r15,8 + addik r1,r1,28 + +1: + brlid r15, __syscall_do_cancel + nop + +END (__syscall_cancel_arch) diff --git a/sysdeps/unix/sysv/linux/mips/mips32/syscall_cancel.S b/sysdeps/unix/sysv/linux/mips/mips32/syscall_cancel.S new file mode 100644 index 0000000000..eb3b2ed005 --- /dev/null +++ b/sysdeps/unix/sysv/linux/mips/mips32/syscall_cancel.S @@ -0,0 +1,128 @@ +/* Cancellable syscall wrapper. Linux/mips32 version. + Copyright (C) 2023 Free Software Foundation, Inc. + This file is part of the GNU C Library. + + The GNU C Library is free software; you can redistribute it and/or + modify it under the terms of the GNU Lesser General Public + License as published by the Free Software Foundation; either + version 2.1 of the License, or (at your option) any later version. + + The GNU C Library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + Lesser General Public License for more details. + + You should have received a copy of the GNU Lesser General Public + License along with the GNU C Library; if not, see + . */ + +#include +#include +#include + +/* long int __syscall_cancel_arch (int *cancelhandling, + __syscall_arg_t nr, + __syscall_arg_t arg1, + __syscall_arg_t arg2, + __syscall_arg_t arg3, + __syscall_arg_t arg4, + __syscall_arg_t arg5, + __syscall_arg_t arg6, + __syscall_arg_t arg7) */ + +#define FRAME_SIZE 56 + +NESTED (__syscall_cancel_arch, FRAME_SIZE, fp) + .mask 0xc0070000,-SZREG + .fmask 0x00000000,0 + + PTR_ADDIU sp, -FRAME_SIZE + cfi_def_cfa_offset (FRAME_SIZE) + + sw fp, 48(sp) + sw ra, 52(sp) + sw s2, 44(sp) + sw s1, 40(sp) + sw s0, 36(sp) +#ifdef __PIC__ + .cprestore 16 +#endif + cfi_offset (ra, -4) + cfi_offset (fp, -8) + cfi_offset (s2, -12) + cfi_offset (s1, -16) + cfi_offset (s0, -20) + + move fp ,sp + cfi_def_cfa_register (fp) + + .globl __syscall_cancel_arch_start +__syscall_cancel_arch_start: + + lw v0, 0(a0) + andi v0, v0, TCB_CANCELED_BITMASK + bne v0, zero, 2f + + addiu sp, sp, -16 + addiu v0, sp, 16 + sw v0, 24(fp) + + move s0, a1 + move a0, a2 + move a1, a3 + lw a2, 72(fp) + lw a3, 76(fp) + lw v0, 84(fp) + lw s1, 80(fp) + lw s2, 88(fp) + + .set noreorder + subu sp, 32 + sw s1, 16(sp) + sw v0, 20(sp) + sw s2, 24(sp) + move v0, s0 + syscall + + .globl __syscall_cancel_arch_end +__syscall_cancel_arch_end: + addiu sp, sp, 32 + .set reorder + + beq a3, zero, 1f + subu v0, zero, v0 +1: + move sp, fp + cfi_remember_state + cfi_def_cfa_register (sp) + lw ra, 52(fp) + lw fp, 48(sp) + lw s2, 44(sp) + lw s1, 40(sp) + lw s0, 36(sp) + + .set noreorder + .set nomacro + jr ra + addiu sp,sp,FRAME_SIZE + + .set macro + .set reorder + + cfi_def_cfa_offset (0) + cfi_restore (s0) + cfi_restore (s1) + cfi_restore (s2) + cfi_restore (fp) + cfi_restore (ra) + +2: + cfi_restore_state +#ifdef __PIC__ + PTR_LA t9, __syscall_do_cancel + jalr t9 +#else + jal __syscall_do_cancel 
+#endif + +END (__syscall_cancel_arch) diff --git a/sysdeps/unix/sysv/linux/mips/mips32/sysdep.h b/sysdeps/unix/sysv/linux/mips/mips32/sysdep.h index 1827caf595..47a1b97351 100644 --- a/sysdeps/unix/sysv/linux/mips/mips32/sysdep.h +++ b/sysdeps/unix/sysv/linux/mips/mips32/sysdep.h @@ -18,6 +18,10 @@ #ifndef _LINUX_MIPS_MIPS32_SYSDEP_H #define _LINUX_MIPS_MIPS32_SYSDEP_H 1 +/* mips32 have cancelable syscalls with 7 arguments (currently only + sync_file_range). */ +#define HAVE_CANCELABLE_SYSCALL_WITH_7_ARGS 1 + /* There is some commonality. */ #include #include diff --git a/sysdeps/unix/sysv/linux/mips/mips64/n32/syscall_types.h b/sysdeps/unix/sysv/linux/mips/mips64/n32/syscall_types.h new file mode 100644 index 0000000000..b3a8b0b634 --- /dev/null +++ b/sysdeps/unix/sysv/linux/mips/mips64/n32/syscall_types.h @@ -0,0 +1,28 @@ +/* Types and macros used for syscall issuing. MIPS64n32 version. + Copyright (C) 2023 Free Software Foundation, Inc. + This file is part of the GNU C Library. + + The GNU C Library is free software; you can redistribute it and/or + modify it under the terms of the GNU Lesser General Public + License as published by the Free Software Foundation; either + version 2.1 of the License, or (at your option) any later version. + + The GNU C Library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + Lesser General Public License for more details. + + You should have received a copy of the GNU Lesser General Public + License along with the GNU C Library; if not, see + . */ + +#ifndef _SYSCALL_TYPES_H +#define _SYSCALL_TYPES_H + +typedef long long int __syscall_arg_t; + +/* Convert X to a long long, without losing any bits if it is one + already or warning if it is a 32-bit pointer. */ +#define __SSC(__x) ((__syscall_arg_t) (__typeof__ ((__x) - (__x))) (__x)) + +#endif diff --git a/sysdeps/unix/sysv/linux/mips/mips64/syscall_cancel.S b/sysdeps/unix/sysv/linux/mips/mips64/syscall_cancel.S new file mode 100644 index 0000000000..f172041324 --- /dev/null +++ b/sysdeps/unix/sysv/linux/mips/mips64/syscall_cancel.S @@ -0,0 +1,108 @@ +/* Cancellable syscall wrapper. Linux/mips64 version. + Copyright (C) 2023 Free Software Foundation, Inc. + This file is part of the GNU C Library. + + The GNU C Library is free software; you can redistribute it and/or + modify it under the terms of the GNU Lesser General Public + License as published by the Free Software Foundation; either + version 2.1 of the License, or (at your option) any later version. + + The GNU C Library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + Lesser General Public License for more details. + + You should have received a copy of the GNU Lesser General Public + License along with the GNU C Library; if not, see + . 
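The HAVE_CANCELABLE_SYSCALL_WITH_7_ARGS definition above exists because on mips32 each 64-bit argument of sync_file_range occupies an aligned register pair, so the cancellable bridge has to carry the syscall number plus seven arguments. A hedged sketch of how such a cancellation point is built from the usual glibc helpers (the wrapper actually in the tree may differ in detail):

int
sync_file_range (int fd, __off64_t offset, __off64_t nbytes,
                 unsigned int flags)
{
  /* fd + alignment padding + two register pairs for OFFSET and NBYTES
     + FLAGS adds up to seven syscall arguments on mips32.  */
  return SYSCALL_CANCEL (sync_file_range, fd,
                         __ALIGNMENT_ARG SYSCALL_LL64 (offset),
                         SYSCALL_LL64 (nbytes), flags);
}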
*/ + +#include +#include +#include + +/* long int __syscall_cancel_arch (int *cancelhandling, + __syscall_arg_t nr, + __syscall_arg_t arg1, + __syscall_arg_t arg2, + __syscall_arg_t arg3, + __syscall_arg_t arg4, + __syscall_arg_t arg5, + __syscall_arg_t arg6, + __syscall_arg_t arg7) */ + +#define FRAME_SIZE 32 + + .text +NESTED (__syscall_cancel_arch, FRAME_SIZE, ra) + .mask 0x90010000, -SZREG + .fmask 0x00000000, 0 + LONG_ADDIU sp, sp, -FRAME_SIZE + cfi_def_cfa_offset (FRAME_SIZE) + sd gp, 16(sp) + cfi_offset (gp, -16) + lui gp, %hi(%neg(%gp_rel(__syscall_cancel_arch))) + LONG_ADDU gp, gp, t9 + sd ra, 24(sp) + sd s0, 8(sp) + cfi_offset (ra, -8) + cfi_offset (s0, -24) + LONG_ADDIU gp, gp, %lo(%neg(%gp_rel(__syscall_cancel_arch))) + + .global __syscall_cancel_arch_start +__syscall_cancel_arch_start: + + lw v0, 0(a0) + andi v0, v0, TCB_CANCELED_BITMASK + .set noreorder + .set nomacro + bne v0, zero, 2f + move s0, a1 + .set macro + .set reorder + + move a0, a2 + move a1, a3 + move a2, a4 + move a3, a5 + move a4, a6 + move a5, a7 + + .set noreorder + move v0, s0 + syscall + .set reorder + + .global __syscall_cancel_arch_end +__syscall_cancel_arch_end: + + .set noreorder + .set nomacro + bnel a3, zero, 1f + SUBU v0, zero, v0 + .set macro + .set reorder + +1: + ld ra, 24(sp) + ld gp, 16(sp) + ld s0, 8(sp) + + .set noreorder + .set nomacro + jr ra + LONG_ADDIU sp, sp, FRAME_SIZE + .set macro + .set reorder + + cfi_remember_state + cfi_def_cfa_offset (0) + cfi_restore (s0) + cfi_restore (gp) + cfi_restore (ra) + .align 3 +2: + cfi_restore_state + LONG_L t9, %got_disp(__syscall_do_cancel)(gp) + .reloc 3f, R_MIPS_JALR, __syscall_do_cancel +3: jalr t9 +END (__syscall_cancel_arch) diff --git a/sysdeps/unix/sysv/linux/mips/mips64/sysdep.h b/sysdeps/unix/sysv/linux/mips/mips64/sysdep.h index 0a1711dad2..0438bed23d 100644 --- a/sysdeps/unix/sysv/linux/mips/mips64/sysdep.h +++ b/sysdeps/unix/sysv/linux/mips/mips64/sysdep.h @@ -44,15 +44,7 @@ #undef HAVE_INTERNAL_BRK_ADDR_SYMBOL #define HAVE_INTERNAL_BRK_ADDR_SYMBOL 1 -#if _MIPS_SIM == _ABIN32 -/* Convert X to a long long, without losing any bits if it is one - already or warning if it is a 32-bit pointer. 
*/ -# define ARGIFY(X) ((long long int) (__typeof__ ((X) - (X))) (X)) -typedef long long int __syscall_arg_t; -#else -# define ARGIFY(X) ((long int) (X)) -typedef long int __syscall_arg_t; -#endif +#include /* Note that the original Linux syscall restart convention required the instruction immediately preceding SYSCALL to initialize $v0 with the @@ -120,7 +112,7 @@ typedef long int __syscall_arg_t; long int _sys_result; \ \ { \ - __syscall_arg_t _arg1 = ARGIFY (arg1); \ + __syscall_arg_t _arg1 = __SSC (arg1); \ register __syscall_arg_t __s0 asm ("$16") __attribute__ ((unused))\ = (number); \ register __syscall_arg_t __v0 asm ("$2"); \ @@ -144,8 +136,8 @@ typedef long int __syscall_arg_t; long int _sys_result; \ \ { \ - __syscall_arg_t _arg1 = ARGIFY (arg1); \ - __syscall_arg_t _arg2 = ARGIFY (arg2); \ + __syscall_arg_t _arg1 = __SSC (arg1); \ + __syscall_arg_t _arg2 = __SSC (arg2); \ register __syscall_arg_t __s0 asm ("$16") __attribute__ ((unused))\ = (number); \ register __syscall_arg_t __v0 asm ("$2"); \ @@ -170,9 +162,9 @@ typedef long int __syscall_arg_t; long int _sys_result; \ \ { \ - __syscall_arg_t _arg1 = ARGIFY (arg1); \ - __syscall_arg_t _arg2 = ARGIFY (arg2); \ - __syscall_arg_t _arg3 = ARGIFY (arg3); \ + __syscall_arg_t _arg1 = __SSC (arg1); \ + __syscall_arg_t _arg2 = __SSC (arg2); \ + __syscall_arg_t _arg3 = __SSC (arg3); \ register __syscall_arg_t __s0 asm ("$16") __attribute__ ((unused))\ = (number); \ register __syscall_arg_t __v0 asm ("$2"); \ @@ -199,10 +191,10 @@ typedef long int __syscall_arg_t; long int _sys_result; \ \ { \ - __syscall_arg_t _arg1 = ARGIFY (arg1); \ - __syscall_arg_t _arg2 = ARGIFY (arg2); \ - __syscall_arg_t _arg3 = ARGIFY (arg3); \ - __syscall_arg_t _arg4 = ARGIFY (arg4); \ + __syscall_arg_t _arg1 = __SSC (arg1); \ + __syscall_arg_t _arg2 = __SSC (arg2); \ + __syscall_arg_t _arg3 = __SSC (arg3); \ + __syscall_arg_t _arg4 = __SSC (arg4); \ register __syscall_arg_t __s0 asm ("$16") __attribute__ ((unused))\ = (number); \ register __syscall_arg_t __v0 asm ("$2"); \ @@ -229,11 +221,11 @@ typedef long int __syscall_arg_t; long int _sys_result; \ \ { \ - __syscall_arg_t _arg1 = ARGIFY (arg1); \ - __syscall_arg_t _arg2 = ARGIFY (arg2); \ - __syscall_arg_t _arg3 = ARGIFY (arg3); \ - __syscall_arg_t _arg4 = ARGIFY (arg4); \ - __syscall_arg_t _arg5 = ARGIFY (arg5); \ + __syscall_arg_t _arg1 = __SSC (arg1); \ + __syscall_arg_t _arg2 = __SSC (arg2); \ + __syscall_arg_t _arg3 = __SSC (arg3); \ + __syscall_arg_t _arg4 = __SSC (arg4); \ + __syscall_arg_t _arg5 = __SSC (arg5); \ register __syscall_arg_t __s0 asm ("$16") __attribute__ ((unused))\ = (number); \ register __syscall_arg_t __v0 asm ("$2"); \ @@ -261,12 +253,12 @@ typedef long int __syscall_arg_t; long int _sys_result; \ \ { \ - __syscall_arg_t _arg1 = ARGIFY (arg1); \ - __syscall_arg_t _arg2 = ARGIFY (arg2); \ - __syscall_arg_t _arg3 = ARGIFY (arg3); \ - __syscall_arg_t _arg4 = ARGIFY (arg4); \ - __syscall_arg_t _arg5 = ARGIFY (arg5); \ - __syscall_arg_t _arg6 = ARGIFY (arg6); \ + __syscall_arg_t _arg1 = __SSC (arg1); \ + __syscall_arg_t _arg2 = __SSC (arg2); \ + __syscall_arg_t _arg3 = __SSC (arg3); \ + __syscall_arg_t _arg4 = __SSC (arg4); \ + __syscall_arg_t _arg5 = __SSC (arg5); \ + __syscall_arg_t _arg6 = __SSC (arg6); \ register __syscall_arg_t __s0 asm ("$16") __attribute__ ((unused))\ = (number); \ register __syscall_arg_t __v0 asm ("$2"); \ diff --git a/sysdeps/unix/sysv/linux/nios2/syscall_cancel.S b/sysdeps/unix/sysv/linux/nios2/syscall_cancel.S new file mode 100644 index 
0000000000..19d0795886 --- /dev/null +++ b/sysdeps/unix/sysv/linux/nios2/syscall_cancel.S @@ -0,0 +1,95 @@ +/* Cancellable syscall wrapper. Linux/nios2 version. + Copyright (C) 2023 Free Software Foundation, Inc. + This file is part of the GNU C Library. + + The GNU C Library is free software; you can redistribute it and/or + modify it under the terms of the GNU Lesser General Public + License as published by the Free Software Foundation; either + version 2.1 of the License, or (at your option) any later version. + + The GNU C Library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + Lesser General Public License for more details. + + You should have received a copy of the GNU Lesser General Public + License along with the GNU C Library; if not, see + . */ + +#include +#include + +/* long int __syscall_cancel_arch (int *cancelhandling, + __syscall_arg_t nr, + __syscall_arg_t arg1, + __syscall_arg_t arg2, + __syscall_arg_t arg3, + __syscall_arg_t arg4, + __syscall_arg_t arg5, + __syscall_arg_t arg6) */ + +ENTRY (__syscall_cancel_arch) +#ifdef SHARED + addi sp, sp, -8 + stw r22, 0(sp) + nextpc r22 +1: + movhi r8, %hiadj(_gp_got - 1b) + addi r8, r8, %lo(_gp_got - 1b) + stw ra, 4(sp) + add r22, r22, r8 +#else + addi sp, sp, -4 + cfi_def_cfa_offset (4) + stw ra, 0(sp) + cfi_offset (31, -4) +#endif + + .globl __syscall_cancel_arch_start +__syscall_cancel_arch_start: + ldw r3, 0(r4) + andi r3, r3, TCB_CANCELED_BITMASK + bne r3, zero, 3f + mov r10, r6 + mov r2, r5 +#ifdef SHARED +# define STACK_ADJ 4 +#else +# define STACK_ADJ 0 +#endif + ldw r9, (16 + STACK_ADJ)(sp) + mov r5, r7 + ldw r8, (12 + STACK_ADJ)(sp) + ldw r7, (8 + STACK_ADJ)(sp) + ldw r6, (4 + STACK_ADJ)(sp) + mov r4, r10 + trap + + .globl __syscall_cancel_arch_end +__syscall_cancel_arch_end: + beq r7, zero, 2f + sub r2, zero, r2 +2: +#ifdef SHARED + ldw ra, 4(sp) + ldw r22, 0(sp) + addi sp, sp, 8 +#else + ldw ra, (0 + STACK_ADJ)(sp) + cfi_remember_state + cfi_restore (31) + addi sp, sp, 4 + cfi_def_cfa_offset (0) +#endif + ret + +3: +#ifdef SHARED + ldw r2, %call(__syscall_do_cancel)(r22) + callr r2 +#else + cfi_restore_state + call __syscall_do_cancel +#endif + +END (__syscall_cancel_arch) diff --git a/sysdeps/unix/sysv/linux/or1k/syscall_cancel.S b/sysdeps/unix/sysv/linux/or1k/syscall_cancel.S new file mode 100644 index 0000000000..876f5e05ab --- /dev/null +++ b/sysdeps/unix/sysv/linux/or1k/syscall_cancel.S @@ -0,0 +1,63 @@ +/* Cancellable syscall wrapper. Linux/or1k version. + Copyright (C) 2023 Free Software Foundation, Inc. + This file is part of the GNU C Library. + + The GNU C Library is free software; you can redistribute it and/or + modify it under the terms of the GNU Lesser General Public + License as published by the Free Software Foundation; either + version 2.1 of the License, or (at your option) any later version. + + The GNU C Library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + Lesser General Public License for more details. + + You should have received a copy of the GNU Lesser General Public + License along with the GNU C Library; if not, see + . 
*/ + +#include +#include + +ENTRY (__syscall_cancel_arch) + l.addi r1, r1, -4 + cfi_def_cfa_offset (4) + l.sw 0(r1), r9 + cfi_offset (9, -4) + + .global __syscall_cancel_arch_start +__syscall_cancel_arch_start: + + /* if (*cancelhandling & CANCELED_BITMASK) + __syscall_do_cancel() */ + l.movhi r19, hi(0) + l.lwz r17, 0(r3) + l.andi r17, r17, 8 + l.sfeq r17, r19 + l.bnf 1f + + /* Issue a 6 argument syscall. */ + l.or r11, r4, r4 + l.or r3, r5, r5 + l.or r4, r6, r6 + l.or r5, r7, r7 + l.or r6, r8, r8 + l.lwz r7, 4(r1) + l.lwz r8, 8(r1) + l.sys 1 + l.nop + + .global __syscall_cancel_arch_end +__syscall_cancel_arch_end: + + l.lwz r9, 0(r1) + l.jr r9 + l.addi r1, r1, 4 + cfi_remember_state + cfi_def_cfa_offset (0) + cfi_restore (9) +1: + cfi_restore_state + l.jal __syscall_do_cancel + l.nop +END (__syscall_cancel_arch) diff --git a/sysdeps/unix/sysv/linux/powerpc/cancellation-pc-check.h b/sysdeps/unix/sysv/linux/powerpc/cancellation-pc-check.h new file mode 100644 index 0000000000..1175e1a070 --- /dev/null +++ b/sysdeps/unix/sysv/linux/powerpc/cancellation-pc-check.h @@ -0,0 +1,65 @@ +/* Architecture specific code for pthread cancellation handling. + Linux/PowerPC version. + Copyright (C) 2023 Free Software Foundation, Inc. + This file is part of the GNU C Library. + + The GNU C Library is free software; you can redistribute it and/or + modify it under the terms of the GNU Lesser General Public + License as published by the Free Software Foundation; either + version 2.1 of the License, or (at your option) any later version. + + The GNU C Library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + Lesser General Public License for more details. + + You should have received a copy of the GNU Lesser General Public + License along with the GNU C Library; if not, see + . */ + +#ifndef _NPTL_CANCELLATION_PC_CHECK +#define _NPTL_CANCELLATION_PC_CHECK + +#include + +/* For syscalls with side effects (e.g. read, which might return a partial + read), the kernel cannot restart the syscall when it is interrupted by a + signal; it must return from the call with whatever partial result was + obtained. In this case, the saved program counter is set just after the + syscall instruction, so the SIGCANCEL handler should not act on + cancellation. + + The __syscall_cancel_arch function, used for all cancellable syscalls, + contains two extra markers, __syscall_cancel_arch_start and + __syscall_cancel_arch_end. The former points to just before the initial + conditional branch that checks if the thread has received a cancellation + request, while the latter points to the instruction right after the one + that issues the syscall. + + The function checks whether the program counter (PC) from the ucontext_t + CTX lies within the start and end boundaries of the __syscall_cancel_arch + bridge. It returns TRUE if the PC is within the boundary, meaning the + syscall does not have any side effects yet; or FALSE otherwise. */ + +static __always_inline bool +cancellation_pc_check (void *ctx) +{ + /* Both are defined in syscall_cancel.S.
*/ + extern const char __syscall_cancel_arch_start[1]; + extern const char __syscall_cancel_arch_end_sc[1]; +#if defined(USE_PPC_SVC) && defined(__powerpc64__) + extern const char __syscall_cancel_arch_end_svc[1]; +#endif + + uintptr_t pc = sigcontext_get_pc (ctx); + + return pc >= (uintptr_t) __syscall_cancel_arch_start +#if defined(USE_PPC_SVC) && defined(__powerpc64__) + && THREAD_GET_HWCAP() & PPC_FEATURE2_SCV + ? pc < (uintptr_t) __syscall_cancel_arch_end_sc + : pc < (uintptr_t) __syscall_cancel_arch_end_svc; +#else + && pc < (uintptr_t) __syscall_cancel_arch_end_sc; +#endif +} + +#endif diff --git a/sysdeps/unix/sysv/linux/powerpc/syscall_cancel.S b/sysdeps/unix/sysv/linux/powerpc/syscall_cancel.S new file mode 100644 index 0000000000..1f119d0889 --- /dev/null +++ b/sysdeps/unix/sysv/linux/powerpc/syscall_cancel.S @@ -0,0 +1,86 @@ +/* Cancellable syscall wrapper. Linux/powerpc version. + Copyright (C) 2023 Free Software Foundation, Inc. + This file is part of the GNU C Library. + + The GNU C Library is free software; you can redistribute it and/or + modify it under the terms of the GNU Lesser General Public + License as published by the Free Software Foundation; either + version 2.1 of the License, or (at your option) any later version. + + The GNU C Library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + Lesser General Public License for more details. + + You should have received a copy of the GNU Lesser General Public + License along with the GNU C Library; if not, see + . */ + +#include +#include + +/* long int [r3] __syscall_cancel_arch (int *cancelhandling [r3], + long int nr [r4], + long int arg1 [r5], + long int arg2 [r6], + long int arg3 [r7], + long int arg4 [r8], + long int arg5 [r9], + long int arg6 [r10]) */ + +ENTRY (__syscall_cancel_arch) + + .globl __syscall_cancel_arch_start +__syscall_cancel_arch_start: + + /* if (*cancelhandling & CANCELED_BITMASK) + __syscall_do_cancel() */ + lwz r0,0(r3) + andi. r0,r0,TCB_CANCELED_BITMASK + bne 1f + + /* Issue a 6 argument syscall, the nr [r4] being the syscall + number. */ + mr r0,r4 + mr r3,r5 + mr r4,r6 + mr r5,r7 + mr r6,r8 + mr r7,r9 + mr r8,r10 + +#if defined(USE_PPC_SVC) && defined(__powerpc64__) + CHECK_SCV_SUPPORT r9 0f + + stdu r1, -SCV_FRAME_SIZE(r1) + cfi_adjust_cfa_offset (SCV_FRAME_SIZE) + .machine "push" + .machine "power9" + scv 0 + .machine "pop" + .globl __syscall_cancel_arch_end_svc +__syscall_cancel_arch_end_svc: + ld r9, SCV_FRAME_SIZE + FRAME_LR_SAVE(r1) + mtlr r9 + addi r1, r1, SCV_FRAME_SIZE + cfi_restore (lr) + li r9, -4095 + cmpld r3, r9 + bnslr+ + neg r3,r3 + blr +0: +#endif + sc + .globl __syscall_cancel_arch_end_sc +__syscall_cancel_arch_end_sc: + bnslr+ + neg r3,r3 + blr + + /* Although the __syscall_do_cancel do not return, we need to stack + being set correctly for unwind. */ +1: + TAIL_CALL_NO_RETURN (__syscall_do_cancel) + +END (__syscall_cancel_arch) diff --git a/sysdeps/unix/sysv/linux/riscv/syscall_cancel.S b/sysdeps/unix/sysv/linux/riscv/syscall_cancel.S new file mode 100644 index 0000000000..93ff0bd90a --- /dev/null +++ b/sysdeps/unix/sysv/linux/riscv/syscall_cancel.S @@ -0,0 +1,67 @@ +/* Cancellable syscall wrapper. Linux/riscv version. + Copyright (C) 2023 Free Software Foundation, Inc. + This file is part of the GNU C Library. 
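The cancellation_pc_check predicate above is meant to be consulted from the SIGCANCEL handler. A sketch of that call site follows; the handler shape and the __do_cancel helper are assumptions based on the existing nptl code and are not part of this hunk.

#include <signal.h>

static void
sigcancel_handler (int sig, siginfo_t *si, void *ctx)
{
  /* The real handler also validates si->si_pid and si->si_code before
     getting here.  If the interrupted PC lies between
     __syscall_cancel_arch_start and the matching end marker, the syscall
     has not executed yet and it is safe to act on the cancellation
     request now; otherwise the interrupted bridge must first return its
     (possibly partial) result to the caller.  */
  if (cancellation_pc_check (ctx))
    __do_cancel ();		/* Assumed helper: unwinds the thread.  */
}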
+ + The GNU C Library is free software; you can redistribute it and/or + modify it under the terms of the GNU Lesser General Public + License as published by the Free Software Foundation; either + version 2.1 of the License, or (at your option) any later version. + + The GNU C Library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + Lesser General Public License for more details. + + You should have received a copy of the GNU Lesser General Public + License along with the GNU C Library; if not, see + . */ + +#include +#include + +/* long int __syscall_cancel_arch (int *cancelhandling, + __syscall_arg_t nr, + __syscall_arg_t arg1, + __syscall_arg_t arg2, + __syscall_arg_t arg3, + __syscall_arg_t arg4, + __syscall_arg_t arg5, + __syscall_arg_t arg6) */ + +#ifdef SHARED + .option pic +#else + .option nopic +#endif + +ENTRY (__syscall_cancel_arch) + + .globl __syscall_cancel_arch_start +__syscall_cancel_arch_start: + lw t1, 0(a0) + /* if (*ch & CANCELED_BITMASK) */ + andi t1, t1, TCB_CANCELED_BITMASK + bne t1, zero, 1f + + mv t3, a1 + mv a0, a2 + mv a1, a3 + mv a2, a4 + mv a3, a5 + mv a4, a6 + mv a5, a7 + mv a7, t3 + scall + + .globl __syscall_cancel_arch_end +__syscall_cancel_arch_end: + ret + +1: + addi sp, sp, -16 + cfi_def_cfa_offset (16) + REG_S ra, (16-SZREG)(sp) + cfi_offset (ra, -SZREG) + tail __syscall_do_cancel + +END (__syscall_cancel_arch) diff --git a/sysdeps/unix/sysv/linux/s390/s390-32/syscall_cancel.S b/sysdeps/unix/sysv/linux/s390/s390-32/syscall_cancel.S new file mode 100644 index 0000000000..9e0ad2a635 --- /dev/null +++ b/sysdeps/unix/sysv/linux/s390/s390-32/syscall_cancel.S @@ -0,0 +1,62 @@ +/* Cancellable syscall wrapper. Linux/s390 version. + Copyright (C) 2023 Free Software Foundation, Inc. + This file is part of the GNU C Library. + + The GNU C Library is free software; you can redistribute it and/or + modify it under the terms of the GNU Lesser General Public + License as published by the Free Software Foundation; either + version 2.1 of the License, or (at your option) any later version. + + The GNU C Library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + Lesser General Public License for more details. + + You should have received a copy of the GNU Lesser General Public + License along with the GNU C Library; if not, see + . */ + +#include +#include + +/* long int __syscall_cancel_arch (int *cancelhandling, + __syscall_arg_t nr, + __syscall_arg_t arg1, + __syscall_arg_t arg2, + __syscall_arg_t arg3, + __syscall_arg_t arg4, + __syscall_arg_t arg5, + __syscall_arg_t arg6) */ + +ENTRY (__syscall_cancel_arch) + stm %r6,%r7,24(%r15) + cfi_offset (%r6, -72) + cfi_offset (%r7, -68) + + .globl __syscall_cancel_arch_start +__syscall_cancel_arch_start: + /* if (*cancelhandling & CANCELED_BITMASK) + __syscall_do_cancel() */ + tm 3(%r2),TCB_CANCELED_BITMASK + jne 1f + + /* Issue a 6 argument syscall, the nr [%r1] being the syscall + number. 
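A note on the "tm 3(%r2),TCB_CANCELED_BITMASK" test above: cancelhandling is a 32-bit field and s390 is big-endian, so the byte holding the canceled bit sits at offset 3. An illustrative C equivalent only (CANCELED_BITMASK is the C-level counterpart of the TCB_CANCELED_BITMASK assembly constant and fits in the low-order byte):

static int
canceled_p (const int *cancelhandling)
{
  /* Big-endian layout: byte 3 of the 32-bit field carries bits 0-7.  */
  const unsigned char *b = (const unsigned char *) cancelhandling;
  return (b[3] & CANCELED_BITMASK) != 0;   /* Same as *cancelhandling
                                              & CANCELED_BITMASK.  */
}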
*/ + lr %r1,%r3 + lr %r2,%r4 + lr %r3,%r5 + lr %r4,%r6 + lm %r5,%r7,96(%r15) + svc 0 + + .globl __syscall_cancel_arch_end +__syscall_cancel_arch_end: + lm %r6,%r7,24(%r15) + cfi_remember_state + cfi_restore (%r7) + cfi_restore (%r6) + br %r14 +1: + cfi_restore_state + jg __syscall_do_cancel +END (__syscall_cancel_arch) diff --git a/sysdeps/unix/sysv/linux/s390/s390-64/syscall_cancel.S b/sysdeps/unix/sysv/linux/s390/s390-64/syscall_cancel.S new file mode 100644 index 0000000000..e1620add6a --- /dev/null +++ b/sysdeps/unix/sysv/linux/s390/s390-64/syscall_cancel.S @@ -0,0 +1,62 @@ +/* Cancellable syscall wrapper. Linux/s390x version. + Copyright (C) 2023 Free Software Foundation, Inc. + This file is part of the GNU C Library. + + The GNU C Library is free software; you can redistribute it and/or + modify it under the terms of the GNU Lesser General Public + License as published by the Free Software Foundation; either + version 2.1 of the License, or (at your option) any later version. + + The GNU C Library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + Lesser General Public License for more details. + + You should have received a copy of the GNU Lesser General Public + License along with the GNU C Library; if not, see + . */ + +#include +#include + +/* long int __syscall_cancel_arch (int *cancelhandling, + __syscall_arg_t nr, + __syscall_arg_t arg1, + __syscall_arg_t arg2, + __syscall_arg_t arg3, + __syscall_arg_t arg4, + __syscall_arg_t arg5, + __syscall_arg_t arg6) */ + +ENTRY (__syscall_cancel_arch) + stmg %r6,%r7,48(%r15) + cfi_offset (%r6, -112) + cfi_offset (%r7, -104) + + .globl __syscall_cancel_arch_start +__syscall_cancel_arch_start: + /* if (*cancelhandling & CANCELED_BITMASK) + __syscall_do_cancel() */ + tm 3(%r2),TCB_CANCELED_BITMASK + jne 1f + + /* Issue a 6 argument syscall, the nr [%r1] being the syscall + number. */ + lgr %r1,%r3 + lgr %r2,%r4 + lgr %r3,%r5 + lgr %r4,%r6 + lmg %r5,%r7,160(%r15) + svc 0 + + .globl __syscall_cancel_arch_end +__syscall_cancel_arch_end: + lmg %r6,%r7,48(%r15) + cfi_remember_state + cfi_restore (%r7) + cfi_restore (%r6) + br %r14 +1: + cfi_restore_state + jg __syscall_do_cancel +END (__syscall_cancel_arch) diff --git a/sysdeps/unix/sysv/linux/sh/syscall_cancel.S b/sysdeps/unix/sysv/linux/sh/syscall_cancel.S new file mode 100644 index 0000000000..2afd23928d --- /dev/null +++ b/sysdeps/unix/sysv/linux/sh/syscall_cancel.S @@ -0,0 +1,126 @@ +/* Cancellable syscall wrapper. Linux/sh version. + Copyright (C) 2023 Free Software Foundation, Inc. + This file is part of the GNU C Library. + + The GNU C Library is free software; you can redistribute it and/or + modify it under the terms of the GNU Lesser General Public + License as published by the Free Software Foundation; either + version 2.1 of the License, or (at your option) any later version. + + The GNU C Library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + Lesser General Public License for more details. + + You should have received a copy of the GNU Lesser General Public + License along with the GNU C Library; if not, see + . 
*/ + +#include +#include + +/* long int __syscall_cancel_arch (int *cancelhandling, + long int nr, + long int arg1, + long int arg2, + long int arg3, + long int arg4, + long int arg5, + long int arg6) */ + +ENTRY (__syscall_cancel_arch) + +#ifdef SHARED + mov.l r12,@-r15 + cfi_def_cfa_offset (4) + cfi_offset (12, -4) + mova L(GT),r0 + mov.l L(GT),r12 + sts.l pr,@-r15 + cfi_def_cfa_offset (8) + cfi_offset (17, -8) + add r0,r12 +#else + sts.l pr,@-r15 + cfi_def_cfa_offset (4) + cfi_offset (17, -4) +#endif + + .globl __syscall_cancel_arch_start +__syscall_cancel_arch_start: + + /* if (*cancelhandling & CANCELED_BITMASK) + __syscall_do_cancel() */ + mov.l @r4,r0 + tst #TCB_CANCELED_BITMASK,r0 + bf/s 1f + + /* Issue a 6 argument syscall. */ + mov r5,r3 + mov r6,r4 + mov r7,r5 +#ifdef SHARED + mov.l @(8,r15),r6 + mov.l @(12,r15),r7 + mov.l @(16,r15),r0 + mov.l @(20,r15),r1 +#else + mov.l @(4,r15),r6 + mov.l @(8,r15),r7 + mov.l @(12,r15),r0 + mov.l @(16,r15),r1 +#endif + trapa #0x16 + + .globl __syscall_cancel_arch_end +__syscall_cancel_arch_end: + + /* The additional or is a workaround for a hardware issue: + http://documentation.renesas.com/eng/products/mpumcu/tu/tnsh7456ae.pdf + */ + or r0,r0 + or r0,r0 + or r0,r0 + or r0,r0 + or r0,r0 + + lds.l @r15+,pr + cfi_remember_state + cfi_restore (17) +#ifdef SHARED + cfi_def_cfa_offset (4) + rts + mov.l @r15+,r12 + cfi_def_cfa_offset (0) + cfi_restore (12) + .align 1 +1: + cfi_restore_state + mov.l L(SC),r1 + bsrf r1 +L(M): + nop + + .align 2 +L(GT): + .long _GLOBAL_OFFSET_TABLE_ +L(SC): + .long __syscall_do_cancel-(L(M)+2) +#else + cfi_def_cfa_offset (0) + rts + nop + + .align 1 +1: + cfi_restore_state + mov.l 2f,r1 + jsr @r1 + nop + + .align 2 +2: + .long __syscall_do_cancel +#endif + +END (__syscall_cancel_arch) diff --git a/sysdeps/unix/sysv/linux/socketcall.h b/sysdeps/unix/sysv/linux/socketcall.h index 537fa43678..0efa5ee9e4 100644 --- a/sysdeps/unix/sysv/linux/socketcall.h +++ b/sysdeps/unix/sysv/linux/socketcall.h @@ -88,14 +88,33 @@ sc_ret; \ }) - -#define SOCKETCALL_CANCEL(name, args...) \ - ({ \ - int oldtype = LIBC_CANCEL_ASYNC (); \ - long int sc_ret = __SOCKETCALL (SOCKOP_##name, args); \ - LIBC_CANCEL_RESET (oldtype); \ - sc_ret; \ - }) +#define __SOCKETCALL_CANCEL1(__name, __a1) \ + SYSCALL_CANCEL (socketcall, __name, \ + ((long int [1]) { (long int) __a1 })) +#define __SOCKETCALL_CANCEL2(__name, __a1, __a2) \ + SYSCALL_CANCEL (socketcall, __name, \ + ((long int [2]) { (long int) __a1, (long int) __a2 })) +#define __SOCKETCALL_CANCEL3(__name, __a1, __a2, __a3) \ + SYSCALL_CANCEL (socketcall, __name, \ + ((long int [3]) { (long int) __a1, (long int) __a2, (long int) __a3 })) +#define __SOCKETCALL_CANCEL4(__name, __a1, __a2, __a3, __a4) \ + SYSCALL_CANCEL (socketcall, __name, \ + ((long int [4]) { (long int) __a1, (long int) __a2, (long int) __a3, \ + (long int) __a4 })) +#define __SOCKETCALL_CANCEL5(__name, __a1, __a2, __a3, __a4, __a5) \ + SYSCALL_CANCEL (socketcall, __name, \ + ((long int [5]) { (long int) __a1, (long int) __a2, (long int) __a3, \ + (long int) __a4, (long int) __a5 })) +#define __SOCKETCALL_CANCEL6(__name, __a1, __a2, __a3, __a4, __a5, __a6) \ + SYSCALL_CANCEL (socketcall, __name, \ + ((long int [6]) { (long int) __a1, (long int) __a2, (long int) __a3, \ + (long int) __a4, (long int) __a5, (long int) __a6 })) + +#define __SOCKETCALL_CANCEL(...) __SOCKETCALL_DISP (__SOCKETCALL_CANCEL,\ + __VA_ARGS__) + +#define SOCKETCALL_CANCEL(name, args...) 
\ + __SOCKETCALL_CANCEL (SOCKOP_##name, args) #endif /* sys/socketcall.h */ diff --git a/sysdeps/unix/sysv/linux/sparc/sparc32/syscall_cancel.S b/sysdeps/unix/sysv/linux/sparc/sparc32/syscall_cancel.S new file mode 100644 index 0000000000..aa5c658ce1 --- /dev/null +++ b/sysdeps/unix/sysv/linux/sparc/sparc32/syscall_cancel.S @@ -0,0 +1,71 @@ +/* Cancellable syscall wrapper. Linux/sparc32 version. + Copyright (C) 2023 Free Software Foundation, Inc. + This file is part of the GNU C Library. + + The GNU C Library is free software; you can redistribute it and/or + modify it under the terms of the GNU Lesser General Public + License as published by the Free Software Foundation; either + version 2.1 of the License, or (at your option) any later version. + + The GNU C Library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + Lesser General Public License for more details. + + You should have received a copy of the GNU Lesser General Public + License along with the GNU C Library; if not, see + . */ + +#include +#include + +/* long int __syscall_cancel_arch (int *cancelhandling, + long int nr, + long int arg1, + long int arg2, + long int arg3, + long int arg4, + long int arg5, + long int arg6) */ + +ENTRY (__syscall_cancel_arch) + save %sp, -96, %sp + + cfi_window_save + cfi_register (%o7, %i7) + cfi_def_cfa_register (%fp) + + .globl __syscall_cancel_arch_start +__syscall_cancel_arch_start: + + /* if (*cancelhandling & CANCELED_BITMASK) + __syscall_do_cancel() */ + ld [%i0], %g2 + andcc %g2, TCB_CANCELED_BITMASK, %g0 + bne,pn %icc, 2f + /* Issue a 6 argument syscall. */ + mov %i1, %g1 + mov %i2, %o0 + mov %i3, %o1 + mov %i4, %o2 + mov %i5, %o3 + ld [%fp+92], %o4 + ld [%fp+96], %o5 + ta 0x10 + + .globl __syscall_cancel_arch_end +__syscall_cancel_arch_end: + bcc 1f + nop + sub %g0, %o0, %o0 +1: + mov %o0, %i0 + return %i7+8 + nop + +2: + call __syscall_do_cancel, 0 + nop + nop + +END (__syscall_cancel_arch) diff --git a/sysdeps/unix/sysv/linux/sparc/sparc64/syscall_cancel.S b/sysdeps/unix/sysv/linux/sparc/sparc64/syscall_cancel.S new file mode 100644 index 0000000000..21b0728d5a --- /dev/null +++ b/sysdeps/unix/sysv/linux/sparc/sparc64/syscall_cancel.S @@ -0,0 +1,74 @@ +/* Cancellable syscall wrapper. Linux/sparc64 version. + Copyright (C) 2023 Free Software Foundation, Inc. + This file is part of the GNU C Library. + + The GNU C Library is free software; you can redistribute it and/or + modify it under the terms of the GNU Lesser General Public + License as published by the Free Software Foundation; either + version 2.1 of the License, or (at your option) any later version. + + The GNU C Library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + Lesser General Public License for more details. + + You should have received a copy of the GNU Lesser General Public + License along with the GNU C Library; if not, see + . 
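To make the socketcall dispatch added above concrete: a cancellable socket call with N arguments now expands into a single cancellable socketcall syscall whose second argument is an on-stack argument array, instead of the old LIBC_CANCEL_ASYNC/__SOCKETCALL/LIBC_CANCEL_RESET sequence. For instance, with fd, addr and len being the caller's variables,

    SOCKETCALL_CANCEL (connect, fd, addr, len)

expands roughly to

    SYSCALL_CANCEL (socketcall, SOCKOP_connect,
                    ((long int [3]) { (long int) fd, (long int) addr,
                                      (long int) len }));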
*/ + +#include +#include + + .register %g2, #scratch + +/* long int __syscall_cancel_arch (int *cancelhandling, + long int nr, + long int arg1, + long int arg2, + long int arg3, + long int arg4, + long int arg5, + long int arg6) */ + +ENTRY (__syscall_cancel_arch) + save %sp, -176, %sp + + cfi_window_save + cfi_register (%o7, %i7) + cfi_def_cfa_register (%fp) + + .globl __syscall_cancel_arch_start +__syscall_cancel_arch_start: + + /* if (*cancelhandling & CANCELED_BITMASK) + __syscall_do_cancel() */ + lduw [%i0], %g2 + andcc %g2, TCB_CANCELED_BITMASK, %g0 + bne,pn %xcc, 2f + /* Issue a 6 argument syscall. */ + mov %i1, %g1 + mov %i2, %o0 + mov %i3, %o1 + mov %i4, %o2 + mov %i5, %o3 + ldx [%fp + STACK_BIAS + 176], %o4 + ldx [%fp + STACK_BIAS + 184], %o5 + ta 0x6d + + .global __syscall_cancel_arch_end +__syscall_cancel_arch_end: + + bcc,pt %xcc, 1f + nop + sub %g0, %o0, %o0 +1: + mov %o0, %i0 + return %i7+8 + nop + +2: + call __syscall_do_cancel, 0 + nop + nop + +END (__syscall_cancel_arch) diff --git a/sysdeps/unix/sysv/linux/syscall_cancel.c b/sysdeps/unix/sysv/linux/syscall_cancel.c new file mode 100644 index 0000000000..5fa0706486 --- /dev/null +++ b/sysdeps/unix/sysv/linux/syscall_cancel.c @@ -0,0 +1,73 @@ +/* Pthread cancellation syscall bridge. Default Linux version. + Copyright (C) 2023 Free Software Foundation, Inc. + This file is part of the GNU C Library. + + The GNU C Library is free software; you can redistribute it and/or + modify it under the terms of the GNU Lesser General Public + License as published by the Free Software Foundation; either + version 2.1 of the License, or (at your option) any later version. + + The GNU C Library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + Lesser General Public License for more details. + + You should have received a copy of the GNU Lesser General Public + License along with the GNU C Library; if not, see + . */ + +#include +#include + +#warning "This implementation should be used just as a reference or for bootstrapping" + +/* This is the generic version of the cancellable syscall code, which + adds the label guards (__syscall_cancel_arch_{start,end}) used by the + SIGCANCEL handler to check whether the cancelled syscall has side effects + that need to be returned to the caller. + + This implementation should be used as a reference to document the + implementation constraints: + + 1. The __syscall_cancel_arch_start label should point just before the + test of whether the thread is already cancelled, + 2. The __syscall_cancel_arch_end label should point to the instruction + immediately after the syscall one. + 3. It should return the syscall value, or a negative result if it has + failed, similar to INTERNAL_SYSCALL_CALL. + + The __syscall_cancel_arch_end constraint exists because the kernel signals + an interrupted syscall with side effects by setting the signal frame + program counter (in the ucontext_t third argument of a SA_SIGINFO signal + handler) right after the syscall instruction. + + For some architectures, the INTERNAL_SYSCALL_NCS macro uses more + instructions to get the error condition from the kernel (as on powerpc and + sparc, which check a condition register), or uses an out-of-line helper + (ARM thumb), or uses a kernel helper gate (i686 or ia64). In such cases + the architecture should either adjust the macro or provide a custom + __syscall_cancel_arch implementation.
*/ + +long int +__syscall_cancel_arch (volatile int *ch, __syscall_arg_t nr, + __syscall_arg_t a1, __syscall_arg_t a2, + __syscall_arg_t a3, __syscall_arg_t a4, + __syscall_arg_t a5, __syscall_arg_t a6 + __SYSCALL_CANCEL7_ARG_DEF) +{ +#define ADD_LABEL(__label) \ + asm volatile ( \ + ".global " __label "\t\n" \ + __label ":\n"); + + ADD_LABEL ("__syscall_cancel_arch_start"); + if (__glibc_unlikely (*ch & CANCELED_BITMASK)) + __syscall_do_cancel(); + + long int result = INTERNAL_SYSCALL_NCS_CALL (nr, a1, a2, a3, a4, a5, a6 + __SYSCALL_CANCEL7_ARG7); + ADD_LABEL ("__syscall_cancel_arch_end"); + if (__glibc_unlikely (INTERNAL_SYSCALL_ERROR_P (result))) + return -INTERNAL_SYSCALL_ERRNO (result); + return result; +} diff --git a/sysdeps/unix/sysv/linux/sysdep-cancel.h b/sysdeps/unix/sysv/linux/sysdep-cancel.h index c48a50fa88..3f1543fec2 100644 --- a/sysdeps/unix/sysv/linux/sysdep-cancel.h +++ b/sysdeps/unix/sysv/linux/sysdep-cancel.h @@ -21,17 +21,5 @@ #define _SYSDEP_CANCEL_H #include -#include -#include - -/* Set cancellation mode to asynchronous. */ -extern int __pthread_enable_asynccancel (void); -libc_hidden_proto (__pthread_enable_asynccancel) -#define LIBC_CANCEL_ASYNC() __pthread_enable_asynccancel () - -/* Reset to previous cancellation mode. */ -extern void __pthread_disable_asynccancel (int oldtype); -libc_hidden_proto (__pthread_disable_asynccancel) -#define LIBC_CANCEL_RESET(oldtype) __pthread_disable_asynccancel (oldtype) #endif diff --git a/sysdeps/unix/sysv/linux/x86_64/syscall_cancel.S b/sysdeps/unix/sysv/linux/x86_64/syscall_cancel.S new file mode 100644 index 0000000000..cda9d20a83 --- /dev/null +++ b/sysdeps/unix/sysv/linux/x86_64/syscall_cancel.S @@ -0,0 +1,57 @@ +/* Cancellable syscall wrapper. Linux/x86_64 version. + Copyright (C) 2023 Free Software Foundation, Inc. + This file is part of the GNU C Library. + + The GNU C Library is free software; you can redistribute it and/or + modify it under the terms of the GNU Lesser General Public + License as published by the Free Software Foundation; either + version 2.1 of the License, or (at your option) any later version. + + The GNU C Library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + Lesser General Public License for more details. + + You should have received a copy of the GNU Lesser General Public + License along with the GNU C Library; if not, see + . */ + +#include +#include + +/* long int [rax] __syscall_cancel_arch (volatile int *cancelhandling [%rdi], + __syscall_arg_t nr [%rsi], + __syscall_arg_t arg1 [%rdx], + __syscall_arg_t arg2 [%rcx], + __syscall_arg_t arg3 [%r8], + __syscall_arg_t arg4 [%r9], + __syscall_arg_t arg5 [SP+8], + __syscall_arg_t arg6 [SP+16]) */ + +ENTRY (__syscall_cancel_arch) + .globl __syscall_cancel_arch_start +__syscall_cancel_arch_start: + + /* if (*cancelhandling & CANCELED_BITMASK) + __syscall_do_cancel() */ + mov (%rdi),%eax + testb $TCB_CANCELED_BITMASK, (%rdi) + jne __syscall_do_cancel + + /* Issue a 6 argument syscall, the nr [%rax] being the syscall + number. 
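With sysdep-cancel.h stripped down as above, cancellation points no longer toggle the asynchronous-cancel state around the syscall; they simply route through SYSCALL_CANCEL, which ends up in the __syscall_cancel_arch bridge. A hedged sketch of that caller side: SYSCALL_CANCEL and __set_errno are existing glibc names, __libc_read is the usual internal alias for read, and the small helper is only an illustration of how the bridge's negative-errno result maps back to the public -1/errno convention.

ssize_t
__libc_read (int fd, void *buf, size_t count)
{
  /* Expands into the cancellable bridge; no LIBC_CANCEL_ASYNC/RESET pair
     is needed any more.  */
  return SYSCALL_CANCEL (read, fd, buf, count);
}

/* Results in [-4095, -1] denote errors, following the usual Linux
   convention.  */
static long int
cancel_ret_to_errno (long int ret)
{
  if (ret < 0 && ret >= -4095L)
    {
      __set_errno ((int) -ret);
      return -1;
    }
  return ret;
}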
*/ + mov %rdi,%r11 + mov %rsi,%rax + mov %rdx,%rdi + mov %rcx,%rsi + mov %r8,%rdx + mov %r9,%r10 + mov 8(%rsp),%r8 + mov 16(%rsp),%r9 + mov %r11,8(%rsp) + syscall + + .globl __syscall_cancel_arch_end +__syscall_cancel_arch_end: + ret +END (__syscall_cancel_arch) diff --git a/sysdeps/unix/sysv/linux/x86_64/x32/syscall_types.h b/sysdeps/unix/sysv/linux/x86_64/x32/syscall_types.h new file mode 100644 index 0000000000..ac2019751d --- /dev/null +++ b/sysdeps/unix/sysv/linux/x86_64/x32/syscall_types.h @@ -0,0 +1,34 @@ +/* Types and macros used for syscall issuing. x86_64/x32 version. + Copyright (C) 2023 Free Software Foundation, Inc. + This file is part of the GNU C Library. + + The GNU C Library is free software; you can redistribute it and/or + modify it under the terms of the GNU Lesser General Public + License as published by the Free Software Foundation; either + version 2.1 of the License, or (at your option) any later version. + + The GNU C Library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + Lesser General Public License for more details. + + You should have received a copy of the GNU Lesser General Public + License along with the GNU C Library; if not, see + . */ + +#ifndef _SYSCALL_TYPES_H +#define _SYSCALL_TYPES_H + +#include + +typedef long long int __syscall_arg_t; + +/* Syscall arguments for x32 follow the x86_64 ABI; however, pointers are + 32 bits and should be zero extended. */ +#define __SSC(__x) \ + ({ \ + TYPEFY (__x, __tmp) = ARGIFY (__x); \ + (__syscall_arg_t) __tmp; \ + }) + +#endif diff --git a/sysdeps/x86_64/nptl/tcb-offsets.sym b/sysdeps/x86_64/nptl/tcb-offsets.sym index 2bbd563a6c..988a4b8593 100644 --- a/sysdeps/x86_64/nptl/tcb-offsets.sym +++ b/sysdeps/x86_64/nptl/tcb-offsets.sym @@ -13,6 +13,3 @@ MULTIPLE_THREADS_OFFSET offsetof (tcbhead_t, multiple_threads) POINTER_GUARD offsetof (tcbhead_t, pointer_guard) FEATURE_1_OFFSET offsetof (tcbhead_t, feature_1) SSP_BASE_OFFSET offsetof (tcbhead_t, ssp_base) - --- Not strictly offsets, but these values are also used in the TCB. -TCB_CANCELED_BITMASK CANCELED_BITMASK
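For completeness, the effect of the two __SSC variants added in this series (the x32 one above and the mips64 n32 one earlier) can be pictured with hypothetical variables; the point is only that pointer arguments are widened to the 64-bit __syscall_arg_t without pointer-cast warnings, while plain integers just get the ordinary widening conversion.

char buf[64];
size_t len = sizeof buf;

/* x32: TYPEFY/ARGIFY zero-extend the 32-bit pointer into a 64-bit
   register value; n32 reaches the same width through the
   __typeof__ ((__x) - (__x)) intermediate type.  */
__syscall_arg_t a1 = __SSC (buf);

/* Integer arguments pass through with a plain widening conversion.  */
__syscall_arg_t a2 = __SSC (len);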