From patchwork Tue Mar 25 12:15:42 2025
X-Patchwork-Submitter: Guo Ren
X-Patchwork-Id: 876192
From: guoren@kernel.org
To: arnd@arndb.de, gregkh@linuxfoundation.org, torvalds@linux-foundation.org, paul.walmsley@sifive.com, palmer@dabbelt.com, anup@brainfault.org, atishp@atishpatra.org, oleg@redhat.com, kees@kernel.org, tglx@linutronix.de, will@kernel.org, mark.rutland@arm.com, brauner@kernel.org, akpm@linux-foundation.org, rostedt@goodmis.org, edumazet@google.com, unicorn_wang@outlook.com, inochiama@outlook.com, gaohan@iscas.ac.cn, shihua@iscas.ac.cn, jiawei@iscas.ac.cn, wuwei2016@iscas.ac.cn, drew@pdp7.com, prabhakar.mahadev-lad.rj@bp.renesas.com, ctsai390@andestech.com, wefu@redhat.com, kuba@kernel.org, pabeni@redhat.com, josef@toxicpanda.com, dsterba@suse.com, mingo@redhat.com, peterz@infradead.org, boqun.feng@gmail.com, guoren@kernel.org, xiao.w.wang@intel.com, qingfang.deng@siflower.com.cn, leobras@redhat.com, jszhang@kernel.org, conor.dooley@microchip.com, samuel.holland@sifive.com, yongxuan.wang@sifive.com, luxu.kernel@bytedance.com, david@redhat.com, ruanjinjie@huawei.com, cuiyunhui@bytedance.com, wangkefeng.wang@huawei.com, qiaozhe@iscas.ac.cn
Cc: ardb@kernel.org, ast@kernel.org, linux-kernel@vger.kernel.org, linux-riscv@lists.infradead.org, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-mm@kvack.org, linux-crypto@vger.kernel.org, bpf@vger.kernel.org, linux-input@vger.kernel.org, linux-perf-users@vger.kernel.org, linux-serial@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-arch@vger.kernel.org, maple-tree@lists.infradead.org, linux-trace-kernel@vger.kernel.org, netdev@vger.kernel.org, linux-atm-general@lists.sourceforge.net, linux-btrfs@vger.kernel.org, netfilter-devel@vger.kernel.org, coreteam@netfilter.org, linux-nfs@vger.kernel.org,
linux-sctp@vger.kernel.org, linux-usb@vger.kernel.org, linux-media@vger.kernel.org
Subject: [RFC PATCH V3 01/43] rv64ilp32_abi: uapi: Reuse lp64 ABI interface
Date: Tue, 25 Mar 2025 08:15:42 -0400
Message-Id: <20250325121624.523258-2-guoren@kernel.org>
In-Reply-To: <20250325121624.523258-1-guoren@kernel.org>
References: <20250325121624.523258-1-guoren@kernel.org>

From: "Guo Ren (Alibaba DAMO Academy)"

The rv64ilp32 ABI kernel accommodates lp64 ABI userspace and reuses the lp64 ABI Linux interface. Hence, unify the BITS_PER_LONG == 32 memory layout to match BITS_PER_LONG == 64:

#if (__riscv_xlen == 64) && (BITS_PER_LONG == 32)
	union {
		void *datap;
		__u64 __datap;
	};
#else
	void *datap;
#endif

This is inspired by include/uapi/linux/kvm.h:

struct kvm_dirty_log {
	...
	union {
		void __user *dirty_bitmap; /* one bit per page */
		__u64 padding2;
	};
};

This is a suggested solution for __riscv_xlen == 64, but we need a general way to determine CONFIG_64BIT/32BIT in uapi headers. Any help is welcome.

TODO: Find a general way to replace __riscv_xlen for uapi headers.
Signed-off-by: Guo Ren (Alibaba DAMO Academy) --- include/linux/socket.h | 35 +++++++++++++ include/uapi/asm-generic/siginfo.h | 50 +++++++++++++++++++ include/uapi/asm-generic/signal.h | 35 +++++++++++++ include/uapi/asm-generic/stat.h | 25 ++++++++++ include/uapi/linux/atm.h | 7 +++ include/uapi/linux/atmdev.h | 7 +++ include/uapi/linux/blkpg.h | 7 +++ include/uapi/linux/btrfs.h | 19 +++++++ include/uapi/linux/capi.h | 11 ++++ include/uapi/linux/fs.h | 12 +++++ include/uapi/linux/futex.h | 18 +++++++ include/uapi/linux/if.h | 6 +++ include/uapi/linux/netfilter/x_tables.h | 8 +++ include/uapi/linux/netfilter_ipv4/ip_tables.h | 7 +++ include/uapi/linux/nfs4_mount.h | 14 ++++++ include/uapi/linux/ppp-ioctl.h | 7 +++ include/uapi/linux/sctp.h | 3 ++ include/uapi/linux/sem.h | 38 ++++++++++++++ include/uapi/linux/socket.h | 7 +++ include/uapi/linux/sysctl.h | 32 ++++++++++++ include/uapi/linux/uhid.h | 7 +++ include/uapi/linux/uio.h | 11 ++++ include/uapi/linux/usb/tmc.h | 14 ++++++ include/uapi/linux/usbdevice_fs.h | 50 +++++++++++++++++++ include/uapi/linux/uvcvideo.h | 14 ++++++ include/uapi/linux/vfio.h | 7 +++ include/uapi/linux/videodev2.h | 7 +++ 27 files changed, 458 insertions(+) diff --git a/include/linux/socket.h b/include/linux/socket.h index d18cc47e89bd..a1bc6e2b809e 100644 --- a/include/linux/socket.h +++ b/include/linux/socket.h @@ -81,12 +81,47 @@ struct msghdr { }; struct user_msghdr { +#if __riscv_xlen == 64 + union { + void __user *msg_name; /* ptr to socket address structure */ + u64 __msg_name; + }; +#else void __user *msg_name; /* ptr to socket address structure */ +#endif int msg_namelen; /* size of socket address structure */ +#if __riscv_xlen == 64 + union { + struct iovec __user *msg_iov; /* scatter/gather array */ + u64 __msg_iov; + }; +#else struct iovec __user *msg_iov; /* scatter/gather array */ +#endif +#if __riscv_xlen == 64 + union { + __kernel_size_t msg_iovlen; /* # elements in msg_iov */ + u64 __msg_iovlen; + }; +#else 
__kernel_size_t msg_iovlen; /* # elements in msg_iov */ +#endif +#if __riscv_xlen == 64 + union { + void __user *msg_control; /* ancillary data */ + u64 __msg_control; + }; +#else void __user *msg_control; /* ancillary data */ +#endif +#if __riscv_xlen == 64 + union { + __kernel_size_t msg_controllen; /* ancillary data buffer length */ + u64 __msg_controllen; + }; +#else __kernel_size_t msg_controllen; /* ancillary data buffer length */ +#endif unsigned int msg_flags; /* flags on received message */ }; diff --git a/include/uapi/asm-generic/siginfo.h b/include/uapi/asm-generic/siginfo.h index 5a1ca43b5fc6..5c87b85d7858 100644 --- a/include/uapi/asm-generic/siginfo.h +++ b/include/uapi/asm-generic/siginfo.h @@ -7,7 +7,14 @@ typedef union sigval { int sival_int; +#if __riscv_xlen == 64 + union { + void __user *sival_ptr; + __u64 __sival_ptr; + }; +#else void __user *sival_ptr; +#endif } sigval_t; #define SI_MAX_SIZE 128 @@ -67,7 +74,14 @@ union __sifields { /* SIGILL, SIGFPE, SIGSEGV, SIGBUS, SIGTRAP, SIGEMT */ struct { +#if __riscv_xlen == 64 + union { + void __user *_addr; /* faulting insn/memory ref. */ + __u64 ___addr; + }; +#else void __user *_addr; /* faulting insn/memory ref. */ +#endif #define __ADDR_BND_PKEY_PAD (__alignof__(void *) < sizeof(short) ? 
\ sizeof(short) : __alignof__(void *)) @@ -82,8 +96,23 @@ union __sifields { /* used when si_code=SEGV_BNDERR */ struct { char _dummy_bnd[__ADDR_BND_PKEY_PAD]; +#if __riscv_xlen == 64 + union { + void __user *_lower; + __u64 ___lower; + }; +#else void __user *_lower; +#endif + +#if __riscv_xlen == 64 + union { + void __user *_upper; + __u64 ___upper; + }; +#else void __user *_upper; +#endif } _addr_bnd; /* used when si_code=SEGV_PKUERR */ struct { @@ -92,7 +121,14 @@ union __sifields { } _addr_pkey; /* used when si_code=TRAP_PERF */ struct { +#if __riscv_xlen == 64 + union { + unsigned long _data; + __u64 ___data; + }; +#else unsigned long _data; +#endif __u32 _type; __u32 _flags; } _perf; @@ -101,13 +137,27 @@ union __sifields { /* SIGPOLL */ struct { +#if __riscv_xlen == 64 + union { + __ARCH_SI_BAND_T _band; /* POLL_IN, POLL_OUT, POLL_MSG */ + __u64 ___band; + }; +#else __ARCH_SI_BAND_T _band; /* POLL_IN, POLL_OUT, POLL_MSG */ +#endif int _fd; } _sigpoll; /* SIGSYS */ struct { +#if __riscv_xlen == 64 + union { + void __user *_call_addr; /* calling user insn */ + __u64 ___call_addr; + }; +#else void __user *_call_addr; /* calling user insn */ +#endif int _syscall; /* triggering system call number */ unsigned int _arch; /* AUDIT_ARCH_* of syscall */ } _sigsys; diff --git a/include/uapi/asm-generic/signal.h b/include/uapi/asm-generic/signal.h index 0eb69dc8e572..efcd31a677ee 100644 --- a/include/uapi/asm-generic/signal.h +++ b/include/uapi/asm-generic/signal.h @@ -73,19 +73,54 @@ typedef unsigned long old_sigset_t; #ifndef __KERNEL__ struct sigaction { +#if __riscv_xlen == 64 + union { + __sighandler_t sa_handler; + __u64 __sa_handler; + }; +#else __sighandler_t sa_handler; +#endif +#if __riscv_xlen == 64 + union { + unsigned long sa_flags; + __u64 __sa_flags; + }; +#else unsigned long sa_flags; +#endif #ifdef SA_RESTORER +#if __riscv_xlen == 64 + union { + __sigrestore_t sa_restorer; + __u64 __sa_restorer; + }; +#else __sigrestore_t sa_restorer; +#endif #endif 
sigset_t sa_mask; /* mask last for extensibility */ }; #endif typedef struct sigaltstack { +#if __riscv_xlen == 64 + union { + void __user *ss_sp; + __u64 __ss_sp; + }; +#else void __user *ss_sp; +#endif int ss_flags; +#if __riscv_xlen == 64 + union { + __kernel_size_t ss_size; + __u64 __ss_size; + }; +#else __kernel_size_t ss_size; +#endif } stack_t; #endif /* __ASSEMBLY__ */ diff --git a/include/uapi/asm-generic/stat.h b/include/uapi/asm-generic/stat.h index 0d962ecd1663..c8908df5213f 100644 --- a/include/uapi/asm-generic/stat.h +++ b/include/uapi/asm-generic/stat.h @@ -21,6 +21,30 @@ #define STAT_HAVE_NSEC 1 +#if __riscv_xlen == 64 +struct stat { + unsigned long long st_dev; /* Device. */ + unsigned long long st_ino; /* File serial number. */ + unsigned int st_mode; /* File mode. */ + unsigned int st_nlink; /* Link count. */ + unsigned int st_uid; /* User ID of the file's owner. */ + unsigned int st_gid; /* Group ID of the file's group. */ + unsigned long long st_rdev; /* Device number, if device. */ + unsigned long long __pad1; + long long st_size; /* Size of file, in bytes. */ + int st_blksize; /* Optimal block size for I/O. */ + int __pad2; + long long st_blocks; /* Number 512-byte blocks allocated. */ + long long st_atime; /* Time of last access. */ + unsigned long long st_atime_nsec; + long long st_mtime; /* Time of last modification. */ + unsigned long long st_mtime_nsec; + long long st_ctime; /* Time of last status change. */ + unsigned long long st_ctime_nsec; + unsigned int __unused4; + unsigned int __unused5; +}; +#else struct stat { unsigned long st_dev; /* Device. */ unsigned long st_ino; /* File serial number. */ @@ -43,6 +67,7 @@ struct stat { unsigned int __unused4; unsigned int __unused5; }; +#endif /* This matches struct stat64 in glibc2.1. Only used for 32 bit. 
*/ #if __BITS_PER_LONG != 64 || defined(__ARCH_WANT_STAT64) diff --git a/include/uapi/linux/atm.h b/include/uapi/linux/atm.h index 95ebdcf4fe88..fe0da6a5e26d 100644 --- a/include/uapi/linux/atm.h +++ b/include/uapi/linux/atm.h @@ -234,7 +234,14 @@ static __inline__ int atmpvc_addr_in_use(struct sockaddr_atmpvc addr) struct atmif_sioc { int number; int length; +#if __riscv_xlen == 64 + union { + void __user *arg; + __u64 __arg; + }; +#else void __user *arg; +#endif }; diff --git a/include/uapi/linux/atmdev.h b/include/uapi/linux/atmdev.h index 20b0215084fc..e0456ed8b698 100644 --- a/include/uapi/linux/atmdev.h +++ b/include/uapi/linux/atmdev.h @@ -155,7 +155,14 @@ struct atm_dev_stats { struct atm_iobuf { int length; +#if __riscv_xlen == 64 + union { + void __user *buffer; + __u64 __buffer; + }; +#else void __user *buffer; +#endif }; /* for ATM_GETCIRANGE / ATM_SETCIRANGE */ diff --git a/include/uapi/linux/blkpg.h b/include/uapi/linux/blkpg.h index d0a64ee97c6d..31f70c9114c2 100644 --- a/include/uapi/linux/blkpg.h +++ b/include/uapi/linux/blkpg.h @@ -12,7 +12,14 @@ struct blkpg_ioctl_arg { int op; int flags; int datalen; +#if __riscv_xlen == 64 + union { + void __user *data; + __u64 __data; + }; +#else void __user *data; +#endif }; /* The subfunctions (for the op field) */ diff --git a/include/uapi/linux/btrfs.h b/include/uapi/linux/btrfs.h index d3b222d7af24..25a9570cbb1c 100644 --- a/include/uapi/linux/btrfs.h +++ b/include/uapi/linux/btrfs.h @@ -838,7 +838,14 @@ struct btrfs_ioctl_received_subvol_args { struct btrfs_ioctl_send_args { __s64 send_fd; /* in */ __u64 clone_sources_count; /* in */ +#if __riscv_xlen == 64 + union { + __u64 __user *clone_sources; /* in */ + __u64 __pad; + }; +#else __u64 __user *clone_sources; /* in */ +#endif __u64 parent_root; /* in */ __u64 flags; /* in */ __u32 version; /* in */ @@ -959,9 +966,21 @@ struct btrfs_ioctl_encoded_io_args { * increase in the future). This must also be less than or equal to * unencoded_len. 
*/ +#if __riscv_xlen == 64 + union { + const struct iovec __user *iov; + const __u64 __iov; + }; + /* Number of iovecs. */ + union { + unsigned long iovcnt; + __u64 __iovcnt; + }; +#else const struct iovec __user *iov; /* Number of iovecs. */ unsigned long iovcnt; +#endif /* * Offset in file. * diff --git a/include/uapi/linux/capi.h b/include/uapi/linux/capi.h index 31f946f8a88d..dab4bb8e3ebb 100644 --- a/include/uapi/linux/capi.h +++ b/include/uapi/linux/capi.h @@ -77,8 +77,19 @@ typedef struct capi_profile { #define CAPI_GET_PROFILE _IOWR('C',0x09,struct capi_profile) typedef struct capi_manufacturer_cmd { +#if __riscv_xlen == 64 + union { + unsigned long cmd; + __u64 __cmd; + }; + union { + void __user *data; + __u64 __data; + }; +#else unsigned long cmd; void __user *data; +#endif } capi_manufacturer_cmd; /* diff --git a/include/uapi/linux/fs.h b/include/uapi/linux/fs.h index 2bbe00cf1248..3ccd123a23a2 100644 --- a/include/uapi/linux/fs.h +++ b/include/uapi/linux/fs.h @@ -122,15 +122,27 @@ struct file_dedupe_range { /* And dynamically-tunable limits and defaults: */ struct files_stat_struct { +#if __riscv_xlen == 64 + unsigned long long nr_files; /* read only */ + unsigned long long nr_free_files; /* read only */ + unsigned long long max_files; /* tunable */ +#else unsigned long nr_files; /* read only */ unsigned long nr_free_files; /* read only */ unsigned long max_files; /* tunable */ +#endif }; struct inodes_stat_t { +#if __riscv_xlen == 64 + long long nr_inodes; + long long nr_unused; + long long dummy[5]; /* padding for sysctl ABI compatibility */ +#else long nr_inodes; long nr_unused; long dummy[5]; /* padding for sysctl ABI compatibility */ +#endif }; diff --git a/include/uapi/linux/futex.h b/include/uapi/linux/futex.h index d2ee625ea189..ae4ee8a66de1 100644 --- a/include/uapi/linux/futex.h +++ b/include/uapi/linux/futex.h @@ -108,7 +108,14 @@ struct futex_waitv { * changed. 
*/ struct robust_list { +#if __riscv_xlen == 64 + union { + struct robust_list __user *next; + u64 __next; + }; +#else struct robust_list __user *next; +#endif }; /* @@ -131,7 +138,11 @@ struct robust_list_head { * we keep userspace flexible, to freely shape its data-structure, * without hardcoding any particular offset into the kernel: */ +#if __riscv_xlen == 64 + long long futex_offset; +#else long futex_offset; +#endif /* * The death of the thread may race with userspace setting @@ -143,7 +154,14 @@ struct robust_list_head { * _might_ have taken. We check the owner TID in any case, * so only truly owned locks will be handled. */ +#if __riscv_xlen == 64 + union { + struct robust_list __user *list_op_pending; + u64 __list_op_pending; + }; +#else struct robust_list __user *list_op_pending; +#endif }; /* diff --git a/include/uapi/linux/if.h b/include/uapi/linux/if.h index 797ba2c1562a..232ab74922fe 100644 --- a/include/uapi/linux/if.h +++ b/include/uapi/linux/if.h @@ -219,6 +219,9 @@ struct if_settings { /* interface settings */ sync_serial_settings __user *sync; te1_settings __user *te1; +#if __riscv_xlen == 64 + __u64 unused; +#endif } ifs_ifsu; }; @@ -288,6 +291,9 @@ struct ifconf { union { char __user *ifcu_buf; struct ifreq __user *ifcu_req; +#if __riscv_xlen == 64 + __u64 unused; +#endif } ifc_ifcu; }; #endif /* __UAPI_DEF_IF_IFCONF */ diff --git a/include/uapi/linux/netfilter/x_tables.h b/include/uapi/linux/netfilter/x_tables.h index 796af83a963a..7e02e34c6fad 100644 --- a/include/uapi/linux/netfilter/x_tables.h +++ b/include/uapi/linux/netfilter/x_tables.h @@ -18,7 +18,11 @@ struct xt_entry_match { __u8 revision; } user; struct { +#if __riscv_xlen == 64 + __u64 match_size; +#else __u16 match_size; +#endif /* Used inside the kernel */ struct xt_match *match; @@ -41,7 +45,11 @@ struct xt_entry_target { __u8 revision; } user; struct { +#if __riscv_xlen == 64 + __u64 target_size; +#else __u16 target_size; +#endif /* Used inside the kernel */ struct xt_target 
*target; diff --git a/include/uapi/linux/netfilter_ipv4/ip_tables.h b/include/uapi/linux/netfilter_ipv4/ip_tables.h index 1485df28b239..3a78f8f7bf5d 100644 --- a/include/uapi/linux/netfilter_ipv4/ip_tables.h +++ b/include/uapi/linux/netfilter_ipv4/ip_tables.h @@ -200,7 +200,14 @@ struct ipt_replace { /* Number of counters (must be equal to current number of entries). */ unsigned int num_counters; /* The old entries' counters. */ +#if __riscv_xlen == 64 + union { + struct xt_counters __user *counters; + __u64 __counters; + }; +#else struct xt_counters __user *counters; +#endif /* The entries (hang off end: not really an array). */ struct ipt_entry entries[]; diff --git a/include/uapi/linux/nfs4_mount.h b/include/uapi/linux/nfs4_mount.h index d20bb869bb99..6ec3cec66b6f 100644 --- a/include/uapi/linux/nfs4_mount.h +++ b/include/uapi/linux/nfs4_mount.h @@ -21,7 +21,14 @@ struct nfs_string { unsigned int len; +#if __riscv_xlen == 64 + union { + const char __user * data; + __u64 __data; + }; +#else const char __user * data; +#endif }; struct nfs4_mount_data { @@ -53,7 +60,14 @@ struct nfs4_mount_data { /* Pseudo-flavours to use for authentication. 
See RFC2623 */ int auth_flavourlen; /* 1 */ +#if __riscv_xlen == 64 + union { + int __user *auth_flavours; /* 1 */ + __u64 __auth_flavours; + }; +#else int __user *auth_flavours; /* 1 */ +#endif }; /* bits in the flags field */ diff --git a/include/uapi/linux/ppp-ioctl.h b/include/uapi/linux/ppp-ioctl.h index 1cc5ce0ae062..8d48eab430c1 100644 --- a/include/uapi/linux/ppp-ioctl.h +++ b/include/uapi/linux/ppp-ioctl.h @@ -59,7 +59,14 @@ struct npioctl { /* Structure describing a CCP configuration option, for PPPIOCSCOMPRESS */ struct ppp_option_data { +#if __riscv_xlen == 64 + union { + __u8 __user *ptr; + __u64 __ptr; + }; +#else __u8 __user *ptr; +#endif __u32 length; int transmit; }; diff --git a/include/uapi/linux/sctp.h b/include/uapi/linux/sctp.h index b7d91d4cf0db..46a06fddcd2f 100644 --- a/include/uapi/linux/sctp.h +++ b/include/uapi/linux/sctp.h @@ -1024,6 +1024,9 @@ struct sctp_getaddrs_old { #else struct sockaddr *addrs; #endif +#if (__riscv_xlen == 64) && (__SIZEOF_LONG__ == 4) + __u32 unused; +#endif }; struct sctp_getaddrs { diff --git a/include/uapi/linux/sem.h b/include/uapi/linux/sem.h index 75aa3b273cd9..de9f441913cd 100644 --- a/include/uapi/linux/sem.h +++ b/include/uapi/linux/sem.h @@ -26,10 +26,29 @@ struct semid_ds { struct ipc_perm sem_perm; /* permissions .. 
see ipc.h */ __kernel_old_time_t sem_otime; /* last semop time */ __kernel_old_time_t sem_ctime; /* create/last semctl() time */ +#if __riscv_xlen == 64 + union { + struct sem *sem_base; /* ptr to first semaphore in array */ + __u64 __sem_base; + }; + union { + struct sem_queue *sem_pending; /* pending operations to be processed */ + __u64 __sem_pending; + }; + union { + struct sem_queue **sem_pending_last; /* last pending operation */ + __u64 __sem_pending_last; + }; + union { + struct sem_undo *undo; /* undo requests on this array */ + __u64 __undo; + }; +#else struct sem *sem_base; /* ptr to first semaphore in array */ struct sem_queue *sem_pending; /* pending operations to be processed */ struct sem_queue **sem_pending_last; /* last pending operation */ struct sem_undo *undo; /* undo requests on this array */ +#endif unsigned short sem_nsems; /* no. of semaphores in array */ }; @@ -46,10 +65,29 @@ struct sembuf { /* arg for semctl system calls. */ union semun { int val; /* value for SETVAL */ +#if __riscv_xlen == 64 + union { + struct semid_ds __user *buf; /* buffer for IPC_STAT & IPC_SET */ + __u64 ___buf; + }; + union { + unsigned short __user *array; /* array for GETALL & SETALL */ + __u64 __array; + }; + union { + struct seminfo __user *__buf; /* buffer for IPC_INFO */ + __u64 ____buf; + }; + union { + void __user *__pad; + __u64 ____pad; + }; +#else struct semid_ds __user *buf; /* buffer for IPC_STAT & IPC_SET */ unsigned short __user *array; /* array for GETALL & SETALL */ struct seminfo __user *__buf; /* buffer for IPC_INFO */ void __user *__pad; +#endif }; struct seminfo { diff --git a/include/uapi/linux/socket.h b/include/uapi/linux/socket.h index d3fcd3b5ec53..5f7a83649395 100644 --- a/include/uapi/linux/socket.h +++ b/include/uapi/linux/socket.h @@ -22,7 +22,14 @@ struct __kernel_sockaddr_storage { /* space to achieve desired size, */ /* _SS_MAXSIZE value minus size of ss_family */ }; +#if __riscv_xlen == 64 + union { + void *__align; /* 
implementation specific desired alignment */ + u64 ___align; + }; +#else void *__align; /* implementation specific desired alignment */ +#endif }; }; diff --git a/include/uapi/linux/sysctl.h b/include/uapi/linux/sysctl.h index 8981f00204db..8ed7b29897f9 100644 --- a/include/uapi/linux/sysctl.h +++ b/include/uapi/linux/sysctl.h @@ -33,13 +33,45 @@ member of a struct __sysctl_args to have? */ struct __sysctl_args { +#if __riscv_xlen == 64 + union { + int __user *name; + __u64 __name; + }; +#else int __user *name; +#endif int nlen; +#if __riscv_xlen == 64 + union { + void __user *oldval; + __u64 __oldval; + }; +#else void __user *oldval; +#endif +#if __riscv_xlen == 64 + union { + size_t __user *oldlenp; + __u64 __oldlenp; + }; +#else size_t __user *oldlenp; +#endif +#if __riscv_xlen == 64 + union { + void __user *newval; + __u64 __newval; + }; +#else void __user *newval; +#endif size_t newlen; +#if __riscv_xlen == 64 + unsigned long long __unused[4]; +#else unsigned long __unused[4]; +#endif }; /* Define sysctl names first */ diff --git a/include/uapi/linux/uhid.h b/include/uapi/linux/uhid.h index cef7534d2d19..4a774dbd3de8 100644 --- a/include/uapi/linux/uhid.h +++ b/include/uapi/linux/uhid.h @@ -130,7 +130,14 @@ struct uhid_create_req { __u8 name[128]; __u8 phys[64]; __u8 uniq[64]; +#if __riscv_xlen == 64 + union { + __u8 __user *rd_data; + __u64 __rd_data; + }; +#else __u8 __user *rd_data; +#endif __u16 rd_size; __u16 bus; diff --git a/include/uapi/linux/uio.h b/include/uapi/linux/uio.h index 649739e0c404..27dfd6032dc6 100644 --- a/include/uapi/linux/uio.h +++ b/include/uapi/linux/uio.h @@ -16,8 +16,19 @@ struct iovec { +#if __riscv_xlen == 64 + union { + void __user *iov_base; /* BSD uses caddr_t (1003.1g requires void *) */ + __u64 __iov_base; + }; + union { + __kernel_size_t iov_len; /* Must be size_t (1003.1g) */ + __u64 __iov_len; + }; +#else void __user *iov_base; /* BSD uses caddr_t (1003.1g requires void *) */ __kernel_size_t iov_len; /* Must be size_t 
(1003.1g) */ +#endif }; struct dmabuf_cmsg { diff --git a/include/uapi/linux/usb/tmc.h b/include/uapi/linux/usb/tmc.h index d791cc58a7f0..443ec5356caf 100644 --- a/include/uapi/linux/usb/tmc.h +++ b/include/uapi/linux/usb/tmc.h @@ -51,7 +51,14 @@ struct usbtmc_request { struct usbtmc_ctrlrequest { struct usbtmc_request req; +#if __riscv_xlen == 64 + union { + void __user *data; /* pointer to user space */ + __u64 __data; /* pointer to user space */ + }; +#else void __user *data; /* pointer to user space */ +#endif } __attribute__ ((packed)); struct usbtmc_termchar { @@ -70,7 +77,14 @@ struct usbtmc_message { __u32 transfer_size; /* size of bytes to transfer */ __u32 transferred; /* size of received/written bytes */ __u32 flags; /* bit 0: 0 = synchronous; 1 = asynchronous */ +#if __riscv_xlen == 64 + union { + void __user *message; /* pointer to header and data in user space */ + __u64 __message; + }; +#else void __user *message; /* pointer to header and data in user space */ +#endif } __attribute__ ((packed)); /* Request values for USBTMC driver's ioctl entry point */ diff --git a/include/uapi/linux/usbdevice_fs.h b/include/uapi/linux/usbdevice_fs.h index 74a84e02422a..8c8efef74c3c 100644 --- a/include/uapi/linux/usbdevice_fs.h +++ b/include/uapi/linux/usbdevice_fs.h @@ -44,14 +44,28 @@ struct usbdevfs_ctrltransfer { __u16 wIndex; __u16 wLength; __u32 timeout; /* in milliseconds */ +#if __riscv_xlen == 64 + union { + void __user *data; + __u64 __data; + }; +#else void __user *data; +#endif }; struct usbdevfs_bulktransfer { unsigned int ep; unsigned int len; unsigned int timeout; /* in milliseconds */ +#if __riscv_xlen == 64 + union { + void __user *data; + __u64 __data; + }; +#else void __user *data; +#endif }; struct usbdevfs_setinterface { @@ -61,7 +75,14 @@ struct usbdevfs_setinterface { struct usbdevfs_disconnectsignal { unsigned int signr; +#if __riscv_xlen == 64 + union { + void __user *context; + __u64 __context; + }; +#else void __user *context; +#endif }; 
#define USBDEVFS_MAXDRIVERNAME 255 @@ -119,7 +140,14 @@ struct usbdevfs_urb { unsigned char endpoint; int status; unsigned int flags; +#if __riscv_xlen == 64 + union { + void __user *buffer; + __u64 __buffer; + }; +#else void __user *buffer; +#endif int buffer_length; int actual_length; int start_frame; @@ -130,7 +158,14 @@ struct usbdevfs_urb { int error_count; unsigned int signr; /* signal to be sent on completion, or 0 if none should be sent. */ +#if __riscv_xlen == 64 + union { + void __user *usercontext; + __u64 __usercontext; + }; +#else void __user *usercontext; +#endif struct usbdevfs_iso_packet_desc iso_frame_desc[]; }; @@ -139,7 +174,14 @@ struct usbdevfs_ioctl { int ifno; /* interface 0..N ; negative numbers reserved */ int ioctl_code; /* MUST encode size + direction of data so the * macros in give correct values */ +#if __riscv_xlen == 64 + union { + void __user *data; /* param buffer (in, or out) */ + __u64 __pad; + }; +#else void __user *data; /* param buffer (in, or out) */ +#endif }; /* You can do most things with hubs just through control messages, @@ -195,9 +237,17 @@ struct usbdevfs_streams { #define USBDEVFS_SUBMITURB _IOR('U', 10, struct usbdevfs_urb) #define USBDEVFS_SUBMITURB32 _IOR('U', 10, struct usbdevfs_urb32) #define USBDEVFS_DISCARDURB _IO('U', 11) +#if __riscv_xlen == 64 +#define USBDEVFS_REAPURB _IOW('U', 12, __u64) +#else #define USBDEVFS_REAPURB _IOW('U', 12, void *) +#endif #define USBDEVFS_REAPURB32 _IOW('U', 12, __u32) +#if __riscv_xlen == 64 +#define USBDEVFS_REAPURBNDELAY _IOW('U', 13, __u64) +#else #define USBDEVFS_REAPURBNDELAY _IOW('U', 13, void *) +#endif #define USBDEVFS_REAPURBNDELAY32 _IOW('U', 13, __u32) #define USBDEVFS_DISCSIGNAL _IOR('U', 14, struct usbdevfs_disconnectsignal) #define USBDEVFS_DISCSIGNAL32 _IOR('U', 14, struct usbdevfs_disconnectsignal32) diff --git a/include/uapi/linux/uvcvideo.h b/include/uapi/linux/uvcvideo.h index f86185456dc5..3ccb99039a43 100644 --- a/include/uapi/linux/uvcvideo.h +++ 
b/include/uapi/linux/uvcvideo.h @@ -54,7 +54,14 @@ struct uvc_xu_control_mapping { __u32 v4l2_type; __u32 data_type; +#if __riscv_xlen == 64 + union { + struct uvc_menu_info __user *menu_info; + __u64 __menu_info; + }; +#else struct uvc_menu_info __user *menu_info; +#endif __u32 menu_count; __u32 reserved[4]; @@ -66,7 +73,14 @@ struct uvc_xu_control_query { __u8 query; /* Video Class-Specific Request Code, */ /* defined in linux/usb/video.h A.8. */ __u16 size; +#if __riscv_xlen == 64 + union { + __u8 __user *data; + __u64 __data; + }; +#else __u8 __user *data; +#endif }; #define UVCIOC_CTRL_MAP _IOWR('u', 0x20, struct uvc_xu_control_mapping) diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h index c8dbf8219c4f..0a1dc2a780fb 100644 --- a/include/uapi/linux/vfio.h +++ b/include/uapi/linux/vfio.h @@ -1570,7 +1570,14 @@ struct vfio_iommu_type1_dma_map { struct vfio_bitmap { __u64 pgsize; /* page size for bitmap in bytes */ __u64 size; /* in bytes */ + #if __riscv_xlen == 64 + union { + __u64 __user *data; /* one bit per page */ + __u64 __data; + }; + #else __u64 __user *data; /* one bit per page */ + #endif }; /** diff --git a/include/uapi/linux/videodev2.h b/include/uapi/linux/videodev2.h index e7c4dce39007..8e5391f07626 100644 --- a/include/uapi/linux/videodev2.h +++ b/include/uapi/linux/videodev2.h @@ -1898,7 +1898,14 @@ struct v4l2_ext_controls { __u32 error_idx; __s32 request_fd; __u32 reserved[1]; +#if __riscv_xlen == 64 + union { + struct v4l2_ext_control *controls; + __u64 __controls; + }; +#else struct v4l2_ext_control *controls; +#endif }; #define V4L2_CTRL_ID_MASK (0x0fffffff)

From patchwork Tue Mar 25 12:15:44 2025
X-Patchwork-Submitter: Guo Ren
X-Patchwork-Id: 876191
From: guoren@kernel.org
Subject: [RFC PATCH V3 03/43] rv64ilp32_abi: riscv: Adapt ULL & UL definition
Date: Tue, 25 Mar 2025 08:15:44 -0400
Message-Id: <20250325121624.523258-4-guoren@kernel.org>
In-Reply-To: <20250325121624.523258-1-guoren@kernel.org>
References: <20250325121624.523258-1-guoren@kernel.org>
From: "Guo Ren (Alibaba DAMO Academy)"

On 64-bit systems with ILP32 ABI, BITS_PER_LONG is 32, making the
register width different from UL's. Thus, correct them into ULL.

Signed-off-by: Guo Ren (Alibaba DAMO Academy)
---
 arch/riscv/include/asm/cmpxchg.h |   4 +-
 arch/riscv/include/asm/csr.h     | 212 ++++++++++++++++---------------
 arch/riscv/net/bpf_jit_comp64.c  |   6 +-
 3 files changed, 115 insertions(+), 107 deletions(-)

diff --git a/arch/riscv/include/asm/cmpxchg.h b/arch/riscv/include/asm/cmpxchg.h
index 4cadc56220fe..938d50194dba 100644
--- a/arch/riscv/include/asm/cmpxchg.h
+++ b/arch/riscv/include/asm/cmpxchg.h
@@ -29,7 +29,7 @@
 	} else {							\
 		u32 *__ptr32b = (u32 *)((ulong)(p) & ~0x3);		\
 		ulong __s = ((ulong)(p) & (0x4 - sizeof(*p))) * BITS_PER_BYTE; \
-		ulong __mask = GENMASK(((sizeof(*p)) * BITS_PER_BYTE) - 1, 0) \
+		ulong __mask = GENMASK_ULL(((sizeof(*p)) * BITS_PER_BYTE) - 1, 0) \
 			       << __s;					\
 		ulong __newx = (ulong)(n) << __s;			\
 		ulong __retx;						\
@@ -145,7 +145,7 @@
 	} else {							\
 		u32 *__ptr32b = (u32 *)((ulong)(p) & ~0x3);		\
 		ulong __s = ((ulong)(p) & (0x4 - sizeof(*p))) * BITS_PER_BYTE; \
-		ulong __mask = GENMASK(((sizeof(*p)) * BITS_PER_BYTE) - 1, 0) \
+		ulong __mask = GENMASK_ULL(((sizeof(*p)) * BITS_PER_BYTE) - 1, 0) \
 			       << __s;					\
 		ulong __newx = (ulong)(n) << __s;			\
 		ulong __oldx = (ulong)(o) << __s;			\
diff --git a/arch/riscv/include/asm/csr.h b/arch/riscv/include/asm/csr.h
index 6fed42e37705..25f7c5afea3a 100644
--- a/arch/riscv/include/asm/csr.h
+++ b/arch/riscv/include/asm/csr.h
@@ -9,74 +9,82 @@
 #include
 #include

+#if __riscv_xlen == 64
+#define UXL ULL
+#define GENMASK_UXL GENMASK_ULL
+#else
+#define UXL UL
+#define GENMASK_UXL GENMASK
+#endif
+
 /* Status register flags */
-#define SR_SIE _AC(0x00000002, UL) /* Supervisor Interrupt Enable */
-#define SR_MIE _AC(0x00000008, UL) /* Machine Interrupt Enable */
-#define SR_SPIE _AC(0x00000020, UL) /* Previous Supervisor IE */
-#define SR_MPIE _AC(0x00000080, UL) /* Previous Machine IE */
-#define SR_SPP _AC(0x00000100, UL) /* Previously Supervisor */
-#define SR_MPP _AC(0x00001800, UL) /* Previously Machine */
-#define SR_SUM _AC(0x00040000, UL) /* Supervisor User Memory Access */
-
-#define SR_FS _AC(0x00006000, UL) /* Floating-point Status */
-#define SR_FS_OFF _AC(0x00000000, UL)
-#define SR_FS_INITIAL _AC(0x00002000, UL)
-#define SR_FS_CLEAN _AC(0x00004000, UL)
-#define SR_FS_DIRTY _AC(0x00006000, UL)
-
-#define SR_VS _AC(0x00000600, UL) /* Vector Status */
-#define SR_VS_OFF _AC(0x00000000, UL)
-#define SR_VS_INITIAL _AC(0x00000200, UL)
-#define SR_VS_CLEAN _AC(0x00000400, UL)
-#define SR_VS_DIRTY _AC(0x00000600, UL)
-
-#define SR_VS_THEAD _AC(0x01800000, UL) /* xtheadvector Status */
-#define SR_VS_OFF_THEAD _AC(0x00000000, UL)
-#define SR_VS_INITIAL_THEAD _AC(0x00800000, UL)
-#define SR_VS_CLEAN_THEAD _AC(0x01000000, UL)
-#define SR_VS_DIRTY_THEAD _AC(0x01800000, UL)
-
-#define SR_XS _AC(0x00018000, UL) /* Extension Status */
-#define SR_XS_OFF _AC(0x00000000, UL)
-#define SR_XS_INITIAL _AC(0x00008000, UL)
-#define SR_XS_CLEAN _AC(0x00010000, UL)
-#define SR_XS_DIRTY _AC(0x00018000, UL)
+#define SR_SIE _AC(0x00000002, UXL) /* Supervisor Interrupt Enable */
+#define SR_MIE _AC(0x00000008, UXL) /* Machine Interrupt Enable */
+#define SR_SPIE _AC(0x00000020, UXL) /* Previous Supervisor IE */
+#define SR_MPIE _AC(0x00000080, UXL) /* Previous Machine IE */
+#define SR_SPP _AC(0x00000100, UXL) /* Previously Supervisor */
+#define SR_MPP _AC(0x00001800, UXL) /* Previously Machine */
+#define SR_SUM _AC(0x00040000, UXL) /* Supervisor User Memory Access */
+
+#define SR_FS _AC(0x00006000, UXL) /* Floating-point Status */
+#define SR_FS_OFF _AC(0x00000000, UXL)
+#define SR_FS_INITIAL _AC(0x00002000, UXL)
+#define SR_FS_CLEAN _AC(0x00004000, UXL)
+#define SR_FS_DIRTY _AC(0x00006000, UXL)
+
+#define SR_VS _AC(0x00000600, UXL) /* Vector Status */
+#define SR_VS_OFF _AC(0x00000000, UXL)
+#define SR_VS_INITIAL _AC(0x00000200, UXL)
+#define SR_VS_CLEAN _AC(0x00000400, UXL)
+#define SR_VS_DIRTY _AC(0x00000600, UXL)
+
+#define SR_VS_THEAD _AC(0x01800000, UXL) /* xtheadvector Status */
+#define SR_VS_OFF_THEAD _AC(0x00000000, UXL)
+#define SR_VS_INITIAL_THEAD _AC(0x00800000, UXL)
+#define SR_VS_CLEAN_THEAD _AC(0x01000000, UXL)
+#define SR_VS_DIRTY_THEAD _AC(0x01800000, UXL)
+
+#define SR_XS _AC(0x00018000, UXL) /* Extension Status */
+#define SR_XS_OFF _AC(0x00000000, UXL)
+#define SR_XS_INITIAL _AC(0x00008000, UXL)
+#define SR_XS_CLEAN _AC(0x00010000, UXL)
+#define SR_XS_DIRTY _AC(0x00018000, UXL)

 #define SR_FS_VS (SR_FS | SR_VS) /* Vector and Floating-Point Unit */

-#ifndef CONFIG_64BIT
-#define SR_SD _AC(0x80000000, UL) /* FS/VS/XS dirty */
+#if __riscv_xlen == 32
+#define SR_SD _AC(0x80000000, UXL) /* FS/VS/XS dirty */
 #else
-#define SR_SD _AC(0x8000000000000000, UL) /* FS/VS/XS dirty */
+#define SR_SD _AC(0x8000000000000000, UXL) /* FS/VS/XS dirty */
 #endif

-#ifdef CONFIG_64BIT
-#define SR_UXL _AC(0x300000000, UL) /* XLEN mask for U-mode */
-#define SR_UXL_32 _AC(0x100000000, UL) /* XLEN = 32 for U-mode */
-#define SR_UXL_64 _AC(0x200000000, UL) /* XLEN = 64 for U-mode */
+#if __riscv_xlen == 64
+#define SR_UXL _AC(0x300000000, UXL) /* XLEN mask for U-mode */
+#define SR_UXL_32 _AC(0x100000000, UXL) /* XLEN = 32 for U-mode */
+#define SR_UXL_64 _AC(0x200000000, UXL) /* XLEN = 64 for U-mode */
 #endif

 /* SATP flags */
-#ifndef CONFIG_64BIT
-#define SATP_PPN _AC(0x003FFFFF, UL)
-#define SATP_MODE_32 _AC(0x80000000, UL)
+#if __riscv_xlen == 32
+#define SATP_PPN _AC(0x003FFFFF, UXL)
+#define SATP_MODE_32 _AC(0x80000000, UXL)
 #define SATP_MODE_SHIFT 31
 #define SATP_ASID_BITS 9
 #define SATP_ASID_SHIFT 22
-#define SATP_ASID_MASK _AC(0x1FF, UL)
+#define SATP_ASID_MASK _AC(0x1FF, UXL)
 #else
-#define SATP_PPN _AC(0x00000FFFFFFFFFFF, UL)
-#define SATP_MODE_39 _AC(0x8000000000000000, UL)
-#define SATP_MODE_48 _AC(0x9000000000000000, UL)
-#define SATP_MODE_57 _AC(0xa000000000000000, UL)
+#define SATP_PPN _AC(0x00000FFFFFFFFFFF, UXL)
+#define SATP_MODE_39 _AC(0x8000000000000000, UXL)
+#define SATP_MODE_48 _AC(0x9000000000000000, UXL)
+#define SATP_MODE_57 _AC(0xa000000000000000, UXL)
 #define SATP_MODE_SHIFT 60
 #define SATP_ASID_BITS 16
 #define SATP_ASID_SHIFT 44
-#define SATP_ASID_MASK _AC(0xFFFF, UL)
+#define SATP_ASID_MASK _AC(0xFFFF, UXL)
 #endif

 /* Exception cause high bit - is an interrupt if set */
-#define CAUSE_IRQ_FLAG (_AC(1, UL) << (__riscv_xlen - 1))
+#define CAUSE_IRQ_FLAG (_AC(1, UXL) << (__riscv_xlen - 1))

 /* Interrupt causes (minus the high bit) */
 #define IRQ_S_SOFT 1
@@ -91,7 +99,7 @@
 #define IRQ_S_GEXT 12
 #define IRQ_PMU_OVF 13
 #define IRQ_LOCAL_MAX (IRQ_PMU_OVF + 1)
-#define IRQ_LOCAL_MASK GENMASK((IRQ_LOCAL_MAX - 1), 0)
+#define IRQ_LOCAL_MASK GENMASK_UXL((IRQ_LOCAL_MAX - 1), 0)

 /* Exception causes */
 #define EXC_INST_MISALIGNED 0
@@ -124,45 +132,45 @@
 #define PMP_L 0x80

 /* HSTATUS flags */
-#ifdef CONFIG_64BIT
-#define HSTATUS_HUPMM _AC(0x3000000000000, UL)
-#define HSTATUS_HUPMM_PMLEN_0 _AC(0x0000000000000, UL)
-#define HSTATUS_HUPMM_PMLEN_7 _AC(0x2000000000000, UL)
-#define HSTATUS_HUPMM_PMLEN_16 _AC(0x3000000000000, UL)
-#define HSTATUS_VSXL _AC(0x300000000, UL)
+#if __riscv_xlen == 64
+#define HSTATUS_HUPMM _AC(0x3000000000000, UXL)
+#define HSTATUS_HUPMM_PMLEN_0 _AC(0x0000000000000, UXL)
+#define HSTATUS_HUPMM_PMLEN_7 _AC(0x2000000000000, UXL)
+#define HSTATUS_HUPMM_PMLEN_16 _AC(0x3000000000000, UXL)
+#define HSTATUS_VSXL _AC(0x300000000, UXL)
 #define HSTATUS_VSXL_SHIFT 32
 #endif
-#define HSTATUS_VTSR _AC(0x00400000, UL)
-#define HSTATUS_VTW _AC(0x00200000, UL)
-#define HSTATUS_VTVM _AC(0x00100000, UL)
-#define HSTATUS_VGEIN _AC(0x0003f000, UL)
+#define HSTATUS_VTSR _AC(0x00400000, UXL)
+#define HSTATUS_VTW _AC(0x00200000, UXL)
+#define HSTATUS_VTVM _AC(0x00100000, UXL)
+#define HSTATUS_VGEIN _AC(0x0003f000, UXL)
 #define HSTATUS_VGEIN_SHIFT 12
-#define HSTATUS_HU _AC(0x00000200, UL)
-#define HSTATUS_SPVP _AC(0x00000100, UL)
-#define HSTATUS_SPV _AC(0x00000080, UL)
-#define HSTATUS_GVA _AC(0x00000040, UL)
-#define HSTATUS_VSBE _AC(0x00000020, UL)
+#define HSTATUS_HU _AC(0x00000200, UXL)
+#define HSTATUS_SPVP _AC(0x00000100, UXL)
+#define HSTATUS_SPV _AC(0x00000080, UXL)
+#define HSTATUS_GVA _AC(0x00000040, UXL)
+#define HSTATUS_VSBE _AC(0x00000020, UXL)

 /* HGATP flags */
-#define HGATP_MODE_OFF _AC(0, UL)
-#define HGATP_MODE_SV32X4 _AC(1, UL)
-#define HGATP_MODE_SV39X4 _AC(8, UL)
-#define HGATP_MODE_SV48X4 _AC(9, UL)
-#define HGATP_MODE_SV57X4 _AC(10, UL)
+#define HGATP_MODE_OFF _AC(0, UXL)
+#define HGATP_MODE_SV32X4 _AC(1, UXL)
+#define HGATP_MODE_SV39X4 _AC(8, UXL)
+#define HGATP_MODE_SV48X4 _AC(9, UXL)
+#define HGATP_MODE_SV57X4 _AC(10, UXL)

 #define HGATP32_MODE_SHIFT 31
 #define HGATP32_VMID_SHIFT 22
-#define HGATP32_VMID GENMASK(28, 22)
-#define HGATP32_PPN GENMASK(21, 0)
+#define HGATP32_VMID GENMASK_UXL(28, 22)
+#define HGATP32_PPN GENMASK_UXL(21, 0)

 #define HGATP64_MODE_SHIFT 60
 #define HGATP64_VMID_SHIFT 44
-#define HGATP64_VMID GENMASK(57, 44)
-#define HGATP64_PPN GENMASK(43, 0)
+#define HGATP64_VMID GENMASK_UXL(57, 44)
+#define HGATP64_PPN GENMASK_UXL(43, 0)

 #define HGATP_PAGE_SHIFT 12

-#ifdef CONFIG_64BIT
+#if __riscv_xlen == 64
 #define HGATP_PPN HGATP64_PPN
 #define HGATP_VMID_SHIFT HGATP64_VMID_SHIFT
 #define HGATP_VMID HGATP64_VMID
@@ -176,31 +184,31 @@

 /* VSIP & HVIP relation */
 #define VSIP_TO_HVIP_SHIFT (IRQ_VS_SOFT - IRQ_S_SOFT)
-#define VSIP_VALID_MASK ((_AC(1, UL) << IRQ_S_SOFT) | \
-			 (_AC(1, UL) << IRQ_S_TIMER) | \
-			 (_AC(1, UL) << IRQ_S_EXT) | \
-			 (_AC(1, UL) << IRQ_PMU_OVF))
+#define VSIP_VALID_MASK ((_AC(1, UXL) << IRQ_S_SOFT) | \
+			 (_AC(1, UXL) << IRQ_S_TIMER) | \
+			 (_AC(1, UXL) << IRQ_S_EXT) | \
+			 (_AC(1, UXL) << IRQ_PMU_OVF))

 /* AIA CSR bits */
 #define TOPI_IID_SHIFT 16
-#define TOPI_IID_MASK GENMASK(11, 0)
-#define TOPI_IPRIO_MASK GENMASK(7, 0)
+#define TOPI_IID_MASK GENMASK_UXL(11, 0)
+#define TOPI_IPRIO_MASK GENMASK_UXL(7, 0)
 #define TOPI_IPRIO_BITS 8

 #define TOPEI_ID_SHIFT 16
-#define TOPEI_ID_MASK GENMASK(10, 0)
-#define TOPEI_PRIO_MASK GENMASK(10, 0)
+#define TOPEI_ID_MASK GENMASK_UXL(10, 0)
+#define TOPEI_PRIO_MASK GENMASK_UXL(10, 0)

 #define ISELECT_IPRIO0 0x30
 #define ISELECT_IPRIO15 0x3f
-#define ISELECT_MASK GENMASK(8, 0)
+#define ISELECT_MASK GENMASK_UXL(8, 0)

 #define HVICTL_VTI BIT(30)
-#define HVICTL_IID GENMASK(27, 16)
+#define HVICTL_IID GENMASK_UXL(27, 16)
 #define HVICTL_IID_SHIFT 16
 #define HVICTL_DPR BIT(9)
 #define HVICTL_IPRIOM BIT(8)
-#define HVICTL_IPRIO GENMASK(7, 0)
+#define HVICTL_IPRIO GENMASK_UXL(7, 0)

 /* xENVCFG flags */
 #define ENVCFG_STCE (_AC(1, ULL) << 63)
@@ -210,14 +218,14 @@
 #define ENVCFG_PMM_PMLEN_0 (_AC(0x0, ULL) << 32)
 #define ENVCFG_PMM_PMLEN_7 (_AC(0x2, ULL) << 32)
 #define ENVCFG_PMM_PMLEN_16 (_AC(0x3, ULL) << 32)
-#define ENVCFG_CBZE (_AC(1, UL) << 7)
-#define ENVCFG_CBCFE (_AC(1, UL) << 6)
+#define ENVCFG_CBZE (_AC(1, UXL) << 7)
+#define ENVCFG_CBCFE (_AC(1, UXL) << 6)
 #define ENVCFG_CBIE_SHIFT 4
-#define ENVCFG_CBIE (_AC(0x3, UL) << ENVCFG_CBIE_SHIFT)
-#define ENVCFG_CBIE_ILL _AC(0x0, UL)
-#define ENVCFG_CBIE_FLUSH _AC(0x1, UL)
-#define ENVCFG_CBIE_INV _AC(0x3, UL)
-#define ENVCFG_FIOM _AC(0x1, UL)
+#define ENVCFG_CBIE (_AC(0x3, UXL) << ENVCFG_CBIE_SHIFT)
+#define ENVCFG_CBIE_ILL _AC(0x0, UXL)
+#define ENVCFG_CBIE_FLUSH _AC(0x1, UXL)
+#define ENVCFG_CBIE_INV _AC(0x3, UXL)
+#define ENVCFG_FIOM _AC(0x1, UXL)

 /* Smstateen bits */
 #define SMSTATEEN0_AIA_IMSIC_SHIFT 58
@@ -446,12 +454,12 @@

 /* Scalar Crypto Extension - Entropy */
 #define CSR_SEED 0x015
-#define SEED_OPST_MASK _AC(0xC0000000, UL)
-#define SEED_OPST_BIST _AC(0x00000000, UL)
-#define SEED_OPST_WAIT _AC(0x40000000, UL)
-#define SEED_OPST_ES16 _AC(0x80000000, UL)
-#define SEED_OPST_DEAD _AC(0xC0000000, UL)
-#define SEED_ENTROPY_MASK _AC(0xFFFF, UL)
+#define SEED_OPST_MASK _AC(0xC0000000, UXL)
+#define SEED_OPST_BIST _AC(0x00000000, UXL)
+#define SEED_OPST_WAIT _AC(0x40000000, UXL)
+#define SEED_OPST_ES16 _AC(0x80000000, UXL)
+#define SEED_OPST_DEAD _AC(0xC0000000, UXL)
+#define SEED_ENTROPY_MASK _AC(0xFFFF, UXL)

 #ifdef CONFIG_RISCV_M_MODE
 # define CSR_STATUS CSR_MSTATUS
@@ -504,14 +512,14 @@
 # define RV_IRQ_TIMER IRQ_S_TIMER
 # define RV_IRQ_EXT IRQ_S_EXT
 # define RV_IRQ_PMU IRQ_PMU_OVF
-# define SIP_LCOFIP (_AC(0x1, UL) << IRQ_PMU_OVF)
+# define SIP_LCOFIP (_AC(0x1, UXL) << IRQ_PMU_OVF)
 #endif /* !CONFIG_RISCV_M_MODE */

 /* IE/IP (Supervisor/Machine Interrupt Enable/Pending) flags */
-#define IE_SIE (_AC(0x1, UL) << RV_IRQ_SOFT)
-#define IE_TIE (_AC(0x1, UL) << RV_IRQ_TIMER)
-#define IE_EIE (_AC(0x1, UL) << RV_IRQ_EXT)
+#define IE_SIE (_AC(0x1, UXL) << RV_IRQ_SOFT)
+#define IE_TIE (_AC(0x1, UXL) << RV_IRQ_TIMER)
+#define IE_EIE (_AC(0x1, UXL) << RV_IRQ_EXT)

 #ifndef __ASSEMBLY__

diff --git a/arch/riscv/net/bpf_jit_comp64.c b/arch/riscv/net/bpf_jit_comp64.c
index ca60db75199d..4f958722ca41 100644
--- a/arch/riscv/net/bpf_jit_comp64.c
+++ b/arch/riscv/net/bpf_jit_comp64.c
@@ -136,7 +136,7 @@ static u8 rv_tail_call_reg(struct rv_jit_context *ctx)
 static bool is_32b_int(s64 val)
 {
-	return -(1L << 31) <= val && val < (1L << 31);
+	return -(1LL << 31) <= val && val < (1LL << 31);
 }

 static bool in_auipc_jalr_range(s64 val)
@@ -145,8 +145,8 @@ static bool in_auipc_jalr_range(s64 val)
 	 * auipc+jalr can reach any signed PC-relative offset in the range
 	 * [-2^31 - 2^11, 2^31 - 2^11).
 	 */
-	return (-(1L << 31) - (1L << 11)) <= val &&
-	       val < ((1L << 31) - (1L << 11));
+	return (-(1LL << 31) - (1LL << 11)) <= val &&
+	       val < ((1LL << 31) - (1LL << 11));
 }

 /* Modify rd pointer to alternate reg to avoid corrupting original reg */

From patchwork Tue Mar 25 12:15:46 2025
X-Patchwork-Submitter: Guo Ren
X-Patchwork-Id: 876190
Subject: [RFC PATCH V3 05/43] rv64ilp32_abi: riscv: crc32: Utilize 64-bit width to improve the performance
Date: Tue, 25 Mar 2025 08:15:46 -0400
Message-Id: <20250325121624.523258-6-guoren@kernel.org>
In-Reply-To: <20250325121624.523258-1-guoren@kernel.org>
References: <20250325121624.523258-1-guoren@kernel.org>

From: "Guo Ren (Alibaba DAMO Academy)"

The RV64ILP32 ABI, derived from a 64-bit ISA, uses 32-bit
BITS_PER_LONG. Therefore, the crc32 algorithm can utilize the full
64-bit register width to improve performance.

Signed-off-by: Guo Ren (Alibaba DAMO Academy)
---
 arch/riscv/lib/crc32-riscv.c | 35 ++++++++++++++++++-----------------
 1 file changed, 18 insertions(+), 17 deletions(-)

diff --git a/arch/riscv/lib/crc32-riscv.c b/arch/riscv/lib/crc32-riscv.c
index 53d56ab422c7..68dfb0565696 100644
--- a/arch/riscv/lib/crc32-riscv.c
+++ b/arch/riscv/lib/crc32-riscv.c
@@ -8,6 +8,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
@@ -59,12 +60,12 @@
  */
 # define CRC32_POLY_QT_BE 0x04d101df481b4e5a

-static inline u64 crc32_le_prep(u32 crc, unsigned long const *ptr)
+static inline u64 crc32_le_prep(u32 crc, u64 const *ptr)
 {
 	return (u64)crc ^ (__force u64)__cpu_to_le64(*ptr);
 }

-static inline u32 crc32_le_zbc(unsigned long s, u32 poly, unsigned long poly_qt)
+static inline u32 crc32_le_zbc(u64 s, u32 poly, u64 poly_qt)
 {
 	u32 crc;
@@ -85,7 +86,7 @@ static inline u32 crc32_le_zbc(unsigned long s, u32 poly, unsigned long poly_qt)
 	return crc;
 }

-static inline u64 crc32_be_prep(u32 crc, unsigned long const *ptr)
+static inline u64 crc32_be_prep(u32 crc, u64 const *ptr)
 {
 	return ((u64)crc << 32) ^ (__force u64)__cpu_to_be64(*ptr);
 }
@@ -131,7 +132,7 @@ static inline u32 crc32_be_prep(u32 crc, unsigned long const *ptr)
 # error "Unexpected __riscv_xlen"
 #endif

-static inline u32 crc32_be_zbc(unsigned long s)
+static inline u32 crc32_be_zbc(xlen_t s)
 {
 	u32 crc;
@@ -156,16 +157,16 @@ typedef u32 (*fallback)(u32 crc, unsigned char const *p, size_t len);
 static inline u32 crc32_le_unaligned(u32 crc, unsigned char const *p,
 				     size_t len, u32 poly,
-				     unsigned long poly_qt)
+				     xlen_t poly_qt)
 {
 	size_t bits = len * 8;
-	unsigned long s = 0;
+	xlen_t s = 0;
 	u32 crc_low = 0;

 	for (int i = 0; i < len; i++)
-		s = ((unsigned long)*p++ << (__riscv_xlen - 8)) | (s >> 8);
+		s = ((xlen_t)*p++ << (__riscv_xlen - 8)) | (s >> 8);

-	s ^= (unsigned long)crc << (__riscv_xlen - bits);
+	s ^= (xlen_t)crc << (__riscv_xlen - bits);
 	if (__riscv_xlen == 32 || len < sizeof(u32))
 		crc_low = crc >> bits;
@@ -177,12 +178,12 @@ static inline u32 crc32_le_unaligned(u32 crc, unsigned char const *p,
 static inline u32 __pure crc32_le_generic(u32 crc, unsigned char const *p,
 					  size_t len, u32 poly,
-					  unsigned long poly_qt,
+					  xlen_t poly_qt,
 					  fallback crc_fb)
 {
 	size_t offset, head_len, tail_len;
-	unsigned long const *p_ul;
-	unsigned long s;
+	xlen_t const *p_ul;
+	xlen_t s;

 	asm goto(ALTERNATIVE("j %l[legacy]", "nop", 0,
 			     RISCV_ISA_EXT_ZBC, 1)
@@ -199,7 +200,7 @@ static inline u32 __pure crc32_le_generic(u32 crc, unsigned char const *p,
 	tail_len = len & OFFSET_MASK;
 	len = len >> STEP_ORDER;
-	p_ul = (unsigned long const *)p;
+	p_ul = (xlen_t const *)p;

 	for (int i = 0; i < len; i++) {
 		s = crc32_le_prep(crc, p_ul);
@@ -236,7 +237,7 @@ static inline u32 crc32_be_unaligned(u32 crc, unsigned char const *p,
 				     size_t len)
 {
 	size_t bits = len * 8;
-	unsigned long s = 0;
+	xlen_t s = 0;
 	u32 crc_low = 0;

 	s = 0;
@@ -247,7 +248,7 @@ static inline u32 crc32_be_unaligned(u32 crc, unsigned char const *p,
 		s ^= crc >> (32 - bits);
 		crc_low = crc << bits;
 	} else {
-		s ^= (unsigned long)crc << (bits - 32);
+		s ^= (xlen_t)crc << (bits - 32);
 	}

 	crc = crc32_be_zbc(s);
@@ -259,8 +260,8 @@ static inline u32 crc32_be_unaligned(u32 crc, unsigned char const *p,
 u32 __pure crc32_be_arch(u32 crc, const u8 *p, size_t len)
 {
 	size_t offset, head_len, tail_len;
-	unsigned long const *p_ul;
-	unsigned long s;
+	xlen_t const *p_ul;
+	xlen_t s;

 	asm goto(ALTERNATIVE("j %l[legacy]", "nop", 0,
 			     RISCV_ISA_EXT_ZBC, 1)
@@ -277,7 +278,7 @@ u32 __pure crc32_be_arch(u32 crc, const u8 *p, size_t len)
 	tail_len = len & OFFSET_MASK;
 	len = len >> STEP_ORDER;
-	p_ul = (unsigned long const *)p;
+	p_ul = (xlen_t const *)p;

 	for (int i = 0; i < len; i++) {
 		s = crc32_be_prep(crc, p_ul);

From patchwork Tue Mar 25 12:15:48 2025
X-Patchwork-Submitter: Guo Ren
X-Patchwork-Id: 876189
Subject: [RFC PATCH V3 07/43] rv64ilp32_abi: riscv: arch_hweight: Adapt cpopw & cpop of zbb extension
Date: Tue, 25 Mar 2025 08:15:48 -0400
Message-Id: <20250325121624.523258-8-guoren@kernel.org>
In-Reply-To: <20250325121624.523258-1-guoren@kernel.org>
References: <20250325121624.523258-1-guoren@kernel.org>

From: "Guo Ren (Alibaba DAMO Academy)"

The RV64ILP32 ABI is based on 64-bit ISA, but BITS_PER_LONG is 32.
Use cpopw for u32_weight and cpop for u64_weight.
Signed-off-by: Guo Ren (Alibaba DAMO Academy)
---
 arch/riscv/include/asm/arch_hweight.h | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/arch/riscv/include/asm/arch_hweight.h b/arch/riscv/include/asm/arch_hweight.h
index 613769b9cdc9..42577965f5bb 100644
--- a/arch/riscv/include/asm/arch_hweight.h
+++ b/arch/riscv/include/asm/arch_hweight.h
@@ -12,7 +12,11 @@
 #if (BITS_PER_LONG == 64)
 #define CPOPW "cpopw "
 #elif (BITS_PER_LONG == 32)
+#ifdef CONFIG_64BIT
+#define CPOPW "cpopw "
+#else
 #define CPOPW "cpop "
+#endif
 #else
 #error "Unexpected BITS_PER_LONG"
 #endif
@@ -47,7 +51,7 @@ static inline unsigned int __arch_hweight8(unsigned int w)
 	return __arch_hweight32(w & 0xff);
 }

-#if BITS_PER_LONG == 64
+#ifdef CONFIG_64BIT
 static __always_inline unsigned long __arch_hweight64(__u64 w)
 {
 # ifdef CONFIG_RISCV_ISA_ZBB
@@ -61,7 +65,7 @@ static __always_inline unsigned long __arch_hweight64(__u64 w)
 		".option pop\n"
 		: "=r" (w) : "r" (w) :);

-	return w;
+	return (unsigned long)w;

 legacy:
 # endif

From patchwork Tue Mar 25 12:15:50 2025
X-Patchwork-Submitter: Guo Ren
X-Patchwork-Id: 876188
Subject: [RFC PATCH V3 09/43] rv64ilp32_abi: riscv: Reuse LP64 SBI interface
Date: Tue, 25 Mar 2025 08:15:50 -0400
Message-Id: <20250325121624.523258-10-guoren@kernel.org>
In-Reply-To: <20250325121624.523258-1-guoren@kernel.org>
References: <20250325121624.523258-1-guoren@kernel.org>

From: "Guo Ren (Alibaba DAMO Academy)"

The RV64ILP32 ABI leverages the LP64 SBI interface, enabling the
RV64ILP32 Linux kernel to run seamlessly on LP64 OpenSBI or KVM.
Using RV64ILP32 Linux doesn't require changing the bootloader,
firmware, or hypervisor; it could replace the LP64 kernel directly.
Signed-off-by: Guo Ren (Alibaba DAMO Academy)
---
 arch/riscv/include/asm/cpu_ops_sbi.h |  4 ++--
 arch/riscv/include/asm/sbi.h         | 22 +++++++++++-----------
 arch/riscv/kernel/cpu_ops_sbi.c      |  4 ++--
 arch/riscv/kernel/sbi_ecall.c        | 22 +++++++++++-----------
 4 files changed, 26 insertions(+), 26 deletions(-)

diff --git a/arch/riscv/include/asm/cpu_ops_sbi.h b/arch/riscv/include/asm/cpu_ops_sbi.h
index d6e4665b3195..d967adad6b48 100644
--- a/arch/riscv/include/asm/cpu_ops_sbi.h
+++ b/arch/riscv/include/asm/cpu_ops_sbi.h
@@ -19,8 +19,8 @@ extern const struct cpu_operations cpu_ops_sbi;
  * @stack_ptr: A pointer to the hart specific sp
  */
 struct sbi_hart_boot_data {
-	void *task_ptr;
-	void *stack_ptr;
+	xlen_t task_ptr;
+	xlen_t stack_ptr;
 };
 #endif

diff --git a/arch/riscv/include/asm/sbi.h b/arch/riscv/include/asm/sbi.h
index 3d250824178b..fd9a9c723ec6 100644
--- a/arch/riscv/include/asm/sbi.h
+++ b/arch/riscv/include/asm/sbi.h
@@ -138,16 +138,16 @@ enum sbi_ext_pmu_fid {
 };

 union sbi_pmu_ctr_info {
-	unsigned long value;
+	xlen_t value;
 	struct {
-		unsigned long csr:12;
-		unsigned long width:6;
+		xlen_t csr:12;
+		xlen_t width:6;
 #if __riscv_xlen == 32
-		unsigned long reserved:13;
+		xlen_t reserved:13;
 #else
-		unsigned long reserved:45;
+		xlen_t reserved:45;
 #endif
-		unsigned long type:1;
+		xlen_t type:1;
 	};
 };

@@ -422,15 +422,15 @@ enum sbi_ext_nacl_feature {
 extern unsigned long sbi_spec_version;
 struct sbiret {
-	long error;
-	long value;
+	xlen_t error;
+	xlen_t value;
 };

 void sbi_init(void);
 long __sbi_base_ecall(int fid);
-struct sbiret __sbi_ecall(unsigned long arg0, unsigned long arg1,
-			  unsigned long arg2, unsigned long arg3,
-			  unsigned long arg4, unsigned long arg5,
+struct sbiret __sbi_ecall(xlen_t arg0, xlen_t arg1,
+			  xlen_t arg2, xlen_t arg3,
+			  xlen_t arg4, xlen_t arg5,
 			  int fid, int ext);
 #define sbi_ecall(e, f, a0, a1, a2, a3, a4, a5) \
 		__sbi_ecall(a0, a1, a2, a3, a4, a5, f, e)

diff --git a/arch/riscv/kernel/cpu_ops_sbi.c b/arch/riscv/kernel/cpu_ops_sbi.c
index e6fbaaf54956..f9ef3c0155f4 100644
--- a/arch/riscv/kernel/cpu_ops_sbi.c
+++ b/arch/riscv/kernel/cpu_ops_sbi.c
@@ -71,8 +71,8 @@ static int sbi_cpu_start(unsigned int cpuid, struct task_struct *tidle)
 	/* Make sure tidle is updated */
 	smp_mb();
-	bdata->task_ptr = tidle;
-	bdata->stack_ptr = task_pt_regs(tidle);
+	bdata->task_ptr = (ulong)tidle;
+	bdata->stack_ptr = (ulong)task_pt_regs(tidle);
 	/* Make sure boot data is updated */
 	smp_mb();
 	hsm_data = __pa(bdata);

diff --git a/arch/riscv/kernel/sbi_ecall.c b/arch/riscv/kernel/sbi_ecall.c
index 24aabb4fbde3..ee22e69d70da 100644
--- a/arch/riscv/kernel/sbi_ecall.c
+++ b/arch/riscv/kernel/sbi_ecall.c
@@ -17,23 +17,23 @@ long __sbi_base_ecall(int fid)
 }
 EXPORT_SYMBOL(__sbi_base_ecall);

-struct sbiret __sbi_ecall(unsigned long arg0, unsigned long arg1,
-			  unsigned long arg2, unsigned long arg3,
-			  unsigned long arg4, unsigned long arg5,
+struct sbiret __sbi_ecall(xlen_t arg0, xlen_t arg1,
+			  xlen_t arg2, xlen_t arg3,
+			  xlen_t arg4, xlen_t arg5,
 			  int fid, int ext)
 {
 	struct sbiret ret;

 	trace_sbi_call(ext, fid);

-	register uintptr_t a0 asm ("a0") = (uintptr_t)(arg0);
-	register uintptr_t a1 asm ("a1") = (uintptr_t)(arg1);
-	register uintptr_t a2 asm ("a2") = (uintptr_t)(arg2);
-	register uintptr_t a3 asm ("a3") = (uintptr_t)(arg3);
-	register uintptr_t a4 asm ("a4") = (uintptr_t)(arg4);
-	register uintptr_t a5 asm ("a5") = (uintptr_t)(arg5);
-	register uintptr_t a6 asm ("a6") = (uintptr_t)(fid);
-	register uintptr_t a7 asm ("a7") = (uintptr_t)(ext);
+	register xlen_t a0 asm ("a0") = (xlen_t)(arg0);
+	register xlen_t a1 asm ("a1") = (xlen_t)(arg1);
+	register xlen_t a2 asm ("a2") = (xlen_t)(arg2);
+	register xlen_t a3 asm ("a3") = (xlen_t)(arg3);
+	register xlen_t a4 asm ("a4") = (xlen_t)(arg4);
+	register xlen_t a5 asm ("a5") = (xlen_t)(arg5);
+	register xlen_t a6 asm ("a6") = (xlen_t)(fid);
+	register xlen_t a7 asm ("a7") = (xlen_t)(ext);
 	asm volatile ("ecall"
		      : "+r" (a0), "+r" (a1)
		      : "r" (a2), "r" (a3), "r" (a4), "r" (a5), "r" (a6), "r" (a7)

From patchwork Tue Mar 25 12:15:52 2025
X-Patchwork-Submitter: Guo Ren
X-Patchwork-Id: 876187
From: guoren@kernel.org
Subject: [RFC PATCH V3 11/43] rv64ilp32_abi: riscv: Introduce PTR_L and PTR_S
Date: Tue, 25 Mar 2025 08:15:52 -0400
Message-Id: <20250325121624.523258-12-guoren@kernel.org>
In-Reply-To: <20250325121624.523258-1-guoren@kernel.org>
References: <20250325121624.523258-1-guoren@kernel.org>

From: "Guo Ren (Alibaba DAMO Academy)"

REG_L and REG_S can't satisfy the rv64ilp32 ABI requirements, because
BITS_PER_LONG != __riscv_xlen. So introduce the new PTR_L and PTR_S
macros to help head.S and entry.S deal with the pointer data type.

Signed-off-by: Guo Ren (Alibaba DAMO Academy)
---
 arch/riscv/include/asm/asm.h | 13 +++++++++----
 arch/riscv/include/asm/scs.h |  4 ++--
 arch/riscv/kernel/entry.S    | 32 ++++++++++++++++----------------
 arch/riscv/kernel/head.S     |  8 ++++----
 4 files changed, 31 insertions(+), 26 deletions(-)

diff --git a/arch/riscv/include/asm/asm.h b/arch/riscv/include/asm/asm.h
index 776354895b81..e37d73abbedd 100644
--- a/arch/riscv/include/asm/asm.h
+++ b/arch/riscv/include/asm/asm.h
@@ -38,6 +38,7 @@
 #define RISCV_SZPTR	"8"
 #define RISCV_LGPTR	"3"
 #endif
+#define __PTR_SEL(a, b)	__ASM_STR(a)
 #elif __SIZEOF_POINTER__ == 4
 #ifdef __ASSEMBLY__
 #define RISCV_PTR	.word
@@ -48,10 +49,14 @@
 #define RISCV_SZPTR	"4"
 #define RISCV_LGPTR	"2"
 #endif
+#define __PTR_SEL(a, b)	__ASM_STR(b)
 #else
 #error "Unexpected __SIZEOF_POINTER__"
 #endif

+#define PTR_L		__PTR_SEL(ld, lw)
+#define PTR_S		__PTR_SEL(sd, sw)
+
 #if (__SIZEOF_INT__ == 4)
 #define RISCV_INT	__ASM_STR(.word)
 #define RISCV_SZINT	__ASM_STR(4)
@@ -83,18 +88,18 @@
 .endm

 #ifdef CONFIG_SMP
-#ifdef CONFIG_32BIT
+#if BITS_PER_LONG == 32
 #define PER_CPU_OFFSET_SHIFT 2
 #else
 #define PER_CPU_OFFSET_SHIFT 3
 #endif

 .macro asm_per_cpu dst sym tmp
-	REG_L \tmp, TASK_TI_CPU_NUM(tp)
+	PTR_L \tmp, TASK_TI_CPU_NUM(tp)
 	slli  \tmp, \tmp, PER_CPU_OFFSET_SHIFT
 	la    \dst, __per_cpu_offset
 	add   \dst, \dst, \tmp
-	REG_L \tmp, 0(\dst)
+	PTR_L \tmp, 0(\dst)
 	la    \dst, \sym
 	add   \dst, \dst, \tmp
 .endm
@@ -106,7 +111,7 @@
 .macro load_per_cpu dst ptr tmp
 	asm_per_cpu \dst \ptr \tmp
-	REG_L \dst, 0(\dst)
+	PTR_L \dst, 0(\dst)
 .endm

 #ifdef CONFIG_SHADOW_CALL_STACK

diff --git a/arch/riscv/include/asm/scs.h b/arch/riscv/include/asm/scs.h
index 0e45db78b24b..30929afb4e1a 100644
--- a/arch/riscv/include/asm/scs.h
+++ b/arch/riscv/include/asm/scs.h
@@ -20,7 +20,7 @@
 /* Load task_scs_sp(current) to gp. */
 .macro scs_load_current
-	REG_L	gp, TASK_TI_SCS_SP(tp)
+	PTR_L	gp, TASK_TI_SCS_SP(tp)
 .endm

 /* Load task_scs_sp(current) to gp, but only if tp has changed. */
@@ -32,7 +32,7 @@
 /* Save gp to task_scs_sp(current). */
 .macro scs_save_current
-	REG_S	gp, TASK_TI_SCS_SP(tp)
+	PTR_S	gp, TASK_TI_SCS_SP(tp)
 .endm

 #else /* CONFIG_SHADOW_CALL_STACK */

diff --git a/arch/riscv/kernel/entry.S b/arch/riscv/kernel/entry.S
index 33a5a9f2a0d4..2cf36e3ab6b9 100644
--- a/arch/riscv/kernel/entry.S
+++ b/arch/riscv/kernel/entry.S
@@ -117,19 +117,19 @@ SYM_CODE_START(handle_exception)
 	new_vmalloc_check
 #endif

-	REG_S sp, TASK_TI_KERNEL_SP(tp)
+	PTR_S sp, TASK_TI_KERNEL_SP(tp)

 #ifdef CONFIG_VMAP_STACK
 	addi sp, sp, -(PT_SIZE_ON_STACK)
 	srli sp, sp, THREAD_SHIFT
 	andi sp, sp, 0x1
 	bnez sp, handle_kernel_stack_overflow
-	REG_L sp, TASK_TI_KERNEL_SP(tp)
+	PTR_L sp, TASK_TI_KERNEL_SP(tp)
 #endif

 .Lsave_context:
-	REG_S sp, TASK_TI_USER_SP(tp)
-	REG_L sp, TASK_TI_KERNEL_SP(tp)
+	PTR_S sp, TASK_TI_USER_SP(tp)
+	PTR_L sp, TASK_TI_KERNEL_SP(tp)
 	addi sp, sp, -(PT_SIZE_ON_STACK)
 	REG_S x1,  PT_RA(sp)
 	REG_S x3,  PT_GP(sp)
@@ -145,7 +145,7 @@ SYM_CODE_START(handle_exception)
 	 */
 	li t0, SR_SUM | SR_FS_VS

-	REG_L s0, TASK_TI_USER_SP(tp)
+	PTR_L s0, TASK_TI_USER_SP(tp)
 	csrrc s1, CSR_STATUS, t0
 	csrr s2, CSR_EPC
 	csrr s3, CSR_TVAL
@@ -193,7 +193,7 @@ SYM_CODE_START(handle_exception)
 	add t0, t1, t0
 	/* Check if exception code lies within bounds */
 	bgeu t0, t2, 3f
-	REG_L t1, 0(t0)
+	PTR_L t1, 0(t0)
 2:	jalr t1
 	j ret_from_exception
 3:
@@ -226,7 +226,7 @@ SYM_CODE_START_NOALIGN(ret_from_exception)
 	/* Save unwound kernel stack pointer in thread_info */
 	addi s0, sp, PT_SIZE_ON_STACK
-	REG_S s0, TASK_TI_KERNEL_SP(tp)
+	PTR_S s0, TASK_TI_KERNEL_SP(tp)

 	/* Save the kernel shadow call stack pointer */
 	scs_save_current
@@ -301,7 +301,7 @@ SYM_CODE_START_LOCAL(handle_kernel_stack_overflow)
 	REG_S x5,  PT_T0(sp)
 	save_from_x6_to_x31

-	REG_L s0, TASK_TI_KERNEL_SP(tp)
+	PTR_L s0, TASK_TI_KERNEL_SP(tp)
 	csrr s1, CSR_STATUS
 	csrr s2, CSR_EPC
 	csrr s3, CSR_TVAL
@@ -341,8 +341,8 @@ SYM_CODE_END(ret_from_fork)
SYM_FUNC_START(call_on_irq_stack)
 	/* Create a frame record to save ra and s0 (fp) */
 	addi sp, sp, -STACKFRAME_SIZE_ON_STACK
-	REG_S ra, STACKFRAME_RA(sp)
-	REG_S s0, STACKFRAME_FP(sp)
+	PTR_S ra, STACKFRAME_RA(sp)
+	PTR_S s0, STACKFRAME_FP(sp)
 	addi s0, sp, STACKFRAME_SIZE_ON_STACK

 	/* Switch to the per-CPU shadow call stack */
@@ -360,8 +360,8 @@ SYM_FUNC_START(call_on_irq_stack)
 	/* Switch back to the thread stack and restore ra and s0 */
 	addi sp, s0, -STACKFRAME_SIZE_ON_STACK
-	REG_L ra, STACKFRAME_RA(sp)
-	REG_L s0, STACKFRAME_FP(sp)
+	PTR_L ra, STACKFRAME_RA(sp)
+	PTR_L s0, STACKFRAME_FP(sp)
 	addi sp, sp, STACKFRAME_SIZE_ON_STACK

 	ret
@@ -383,8 +383,8 @@ SYM_FUNC_START(__switch_to)
 	li    a4, TASK_THREAD_RA
 	add   a3, a0, a4
 	add   a4, a1, a4
-	REG_S ra, TASK_THREAD_RA_RA(a3)
-	REG_S sp, TASK_THREAD_SP_RA(a3)
+	PTR_S ra, TASK_THREAD_RA_RA(a3)
+	PTR_S sp, TASK_THREAD_SP_RA(a3)
 	REG_S s0, TASK_THREAD_S0_RA(a3)
 	REG_S s1, TASK_THREAD_S1_RA(a3)
 	REG_S s2, TASK_THREAD_S2_RA(a3)
@@ -400,8 +400,8 @@ SYM_FUNC_START(__switch_to)
 	/* Save the kernel shadow call stack pointer */
 	scs_save_current
 	/* Restore context from next->thread */
-	REG_L ra, TASK_THREAD_RA_RA(a4)
-	REG_L sp, TASK_THREAD_SP_RA(a4)
+	PTR_L ra, TASK_THREAD_RA_RA(a4)
+	PTR_L sp, TASK_THREAD_SP_RA(a4)
 	REG_L s0, TASK_THREAD_S0_RA(a4)
 	REG_L s1, TASK_THREAD_S1_RA(a4)
 	REG_L s2, TASK_THREAD_S2_RA(a4)

diff --git a/arch/riscv/kernel/head.S b/arch/riscv/kernel/head.S
index 356d5397b2a2..e55a92be12b1 100644
--- a/arch/riscv/kernel/head.S
+++ b/arch/riscv/kernel/head.S
@@ -42,7 +42,7 @@ SYM_CODE_START(_start)
 	/* Image load offset (0MB) from start of RAM for M-mode */
 	.dword 0
 #else
-#if __riscv_xlen == 64
+#ifdef CONFIG_64BIT
 	/* Image load offset(2MB) from start of RAM */
 	.dword 0x200000
 #else
@@ -75,7 +75,7 @@ relocate_enable_mmu:
 	/* Relocate return address */
 	la a1, kernel_map
 	XIP_FIXUP_OFFSET a1
-	REG_L a1, KERNEL_MAP_VIRT_ADDR(a1)
+	PTR_L a1, KERNEL_MAP_VIRT_ADDR(a1)
 	la a2, _start
 	sub a1, a1, a2
 	add ra, ra, a1
@@ -349,8 +349,8 @@ SYM_CODE_START(_start_kernel)
 	 */
.Lwait_for_cpu_up:
 	/* FIXME: We should WFI to save some energy here. */
-	REG_L sp, (a1)
-	REG_L tp, (a2)
+	PTR_L sp, (a1)
+	PTR_L tp, (a2)
 	beqz sp, .Lwait_for_cpu_up
 	beqz tp, .Lwait_for_cpu_up
 	fence

From patchwork Tue Mar 25 12:15:54 2025
X-Patchwork-Submitter: Guo Ren
X-Patchwork-Id: 876186
From: guoren@kernel.org
Subject: [RFC PATCH V3 13/43] rv64ilp32_abi: riscv: Correct stackframe layout
Date: Tue, 25 Mar 2025 08:15:54 -0400
Message-Id: <20250325121624.523258-14-guoren@kernel.org>
In-Reply-To: <20250325121624.523258-1-guoren@kernel.org>
References: <20250325121624.523258-1-guoren@kernel.org>

From: "Guo Ren (Alibaba DAMO Academy)"

In the RV64ILP32 ABI, the callee-saved fp & ra are 64 bits wide, not
long-sized. This patch corrects the layout of struct stackframe.

echo c > /proc/sysrq-trigger

Before the patch:

 sysrq: Trigger a crash
 Kernel panic - not syncing: sysrq triggered crash
 CPU: 0 PID: 102 Comm: sh Not tainted ...
 Hardware name: riscv-virtio,qemu (DT)
 Call Trace:
 ---[ end Kernel panic - not syncing: sysrq triggered crash ]---

After the patch:

 sysrq: Trigger a crash
 Kernel panic - not syncing: sysrq triggered crash
 CPU: 0 PID: 102 Comm: sh Not tainted ...
 Hardware name: riscv-virtio,qemu (DT)
 Call Trace:
 [] dump_backtrace+0x1e/0x26
 [] show_stack+0x2e/0x3c
 [] dump_stack_lvl+0x40/0x5a
 [] dump_stack+0x16/0x1e
 [] panic+0x10c/0x2a8
 [] sysrq_reset_seq_param_set+0x0/0x76
 [] __handle_sysrq+0x9c/0x19c
 [] write_sysrq_trigger+0x64/0x78
 [] proc_reg_write+0x4a/0xa2
 [] vfs_write+0xac/0x308
 [] ksys_write+0x62/0xda
 [] sys_write+0xe/0x16
 [] do_trap_ecall_u+0xd8/0xda
 [] ret_from_exception+0x0/0x66
 ---[ end Kernel panic - not syncing: sysrq triggered crash ]---

Signed-off-by: Guo Ren (Alibaba DAMO Academy)
---
 arch/riscv/include/asm/stacktrace.h | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/arch/riscv/include/asm/stacktrace.h b/arch/riscv/include/asm/stacktrace.h
index b1495a7e06ce..556655cab09d 100644
--- a/arch/riscv/include/asm/stacktrace.h
+++ b/arch/riscv/include/asm/stacktrace.h
@@ -8,7 +8,13 @@
 struct stackframe {
 	unsigned long fp;
+#if IS_ENABLED(CONFIG_64BIT) && (BITS_PER_LONG == 32)
+	unsigned long __fp;
+#endif
 	unsigned long ra;
+#if IS_ENABLED(CONFIG_64BIT) && (BITS_PER_LONG == 32)
+	unsigned long __ra;
+#endif
 };

 extern void notrace walk_stackframe(struct task_struct *task, struct pt_regs *regs,

From patchwork Tue Mar 25 12:15:56 2025
X-Patchwork-Submitter: Guo Ren
X-Patchwork-Id: 876185
From: guoren@kernel.org
Subject: [RFC PATCH V3 15/43] rv64ilp32_abi: riscv: mm: Adapt MMU_SV39 for 2GiB address space
Date: Tue, 25 Mar 2025 08:15:56 -0400
Message-Id: <20250325121624.523258-16-guoren@kernel.org>
In-Reply-To: <20250325121624.523258-1-guoren@kernel.org>
References: <20250325121624.523258-1-guoren@kernel.org>

From: "Guo Ren (Alibaba DAMO Academy)"

The RV64ILP32 ABI has two independent 2GiB address spaces, one for the
kernel and one for userspace. There is no Sv32 MMU mode in the xlen=64
ISA.
This commit enables MMU_SV39 for RV64ILP32 to satisfy the user & kernel
2GiB mapping requirements. Sv39 is the mandatory MMU mode when an rv64
hart's satp != Bare, so we needn't care about Sv48 & Sv57.

2GiB virtual userspace memory layout (u64lp64 ABI):

55555000-5560c000 r-xp 00000000 fe:00 17   /bin/busybox
5560c000-5560f000 r--p 000b7000 fe:00 17   /bin/busybox
5560f000-55610000 rw-p 000ba000 fe:00 17   /bin/busybox
55610000-55631000 rw-p 00000000 00:00 0    [heap]
77e69000-77e6b000 rw-p 00000000 00:00 0
77e6b000-77fba000 r-xp 00000000 fe:00 140  /lib/libc.so.6
77fba000-77fbd000 r--p 0014f000 fe:00 140  /lib/libc.so.6
77fbd000-77fbf000 rw-p 00152000 fe:00 140  /lib/libc.so.6
77fbf000-77fcb000 rw-p 00000000 00:00 0
77fcb000-77fd5000 r-xp 00000000 fe:00 148  /lib/libresolv.so.2
77fd5000-77fd6000 r--p 0000a000 fe:00 148  /lib/libresolv.so.2
77fd6000-77fd7000 rw-p 0000b000 fe:00 148  /lib/libresolv.so.2
77fd7000-77fd9000 rw-p 00000000 00:00 0
77fd9000-77fdb000 r--p 00000000 00:00 0    [vvar]
77fdb000-77fdc000 r-xp 00000000 00:00 0    [vdso]
77fdc000-77ffc000 r-xp 00000000 fe:00 135  /lib/ld-linux-riscv64-lp64d.so.1
77ffc000-77ffe000 r--p 0001f000 fe:00 135  /lib/ld-linux-riscv64-lp64d.so.1
77ffe000-78000000 rw-p 00021000 fe:00 135  /lib/ld-linux-riscv64-lp64d.so.1
7ffdf000-80000000 rw-p 00000000 00:00 0    [stack]

2GiB virtual kernel memory layout:

fixmap  : 0x90a00000 - 0x90ffffff (6144 kB)
pci io  : 0x91000000 - 0x91ffffff (  16 MB)
vmemmap : 0x92000000 - 0x93ffffff (  32 MB)
vmalloc : 0x94000000 - 0xb3ffffff ( 512 MB)
modules : 0xb4000000 - 0xb7ffffff (  64 MB)
lowmem  : 0xc0000000 - 0xc7ffffff ( 128 MB)
kasan   : 0x80000000 - 0x8fffffff ( 256 MB)
kernel  : 0xb8000000 - 0xbfffffff ( 128 MB)

For satp=sv39, introduce a double mapping to make the sign-extended
virtual address identical to the zero-extended virtual address:

	    PGD (4GB)           points to
	+---------------+
	| 511: PUD1     | --> PUD1 (1GB)
	| 510: PUD0     | --> PUD0 (1GB)
	| ...  INVALID  |
	|   3: PUD1     | --> PUD1 (same table as entry 511)
	|   2: PUD0     | --> PUD0 (same table as entry 510)
	|   1: USR_PUD  |
	|   0: USR_PUD  |
	+---------------+
	        ^
	        |
	  +-----------+
	  | Sv39 PGDP |
	  +-----------+
	      SATP

Signed-off-by: Guo Ren (Alibaba DAMO Academy)
---
 arch/riscv/Kconfig                  |  2 +-
 arch/riscv/include/asm/page.h       | 23 ++++++-----
 arch/riscv/include/asm/pgtable-64.h | 55 ++++++++++++++------------
 arch/riscv/include/asm/pgtable.h    | 60 ++++++++++++++++++++++++-----
 arch/riscv/include/asm/processor.h  |  2 +-
 arch/riscv/kernel/cpu.c             |  4 +-
 arch/riscv/mm/fault.c               | 10 ++---
 arch/riscv/mm/init.c                | 55 ++++++++++++++++++--------
 arch/riscv/mm/pageattr.c            |  4 +-
 arch/riscv/mm/pgtable.c             |  2 +-
 10 files changed, 145 insertions(+), 72 deletions(-)

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index 884235cf4092..9469cdc51ba4 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -293,7 +293,7 @@ config PAGE_OFFSET
 	hex
 	default 0x80000000 if !MMU && RISCV_M_MODE
 	default 0x80200000 if !MMU
-	default 0xc0000000 if 32BIT
+	default 0xc0000000 if 32BIT || ABI_RV64ILP32
 	default 0xff60000000000000 if 64BIT

 config KASAN_SHADOW_OFFSET

diff --git a/arch/riscv/include/asm/page.h b/arch/riscv/include/asm/page.h
index 125f5ecd9565..45091a9de0d4 100644
--- a/arch/riscv/include/asm/page.h
+++ b/arch/riscv/include/asm/page.h
@@ -24,7 +24,7 @@
 * When not using MMU this corresponds to the first free page in
 * physical memory (aligned on a page boundary).
 */
-#ifdef CONFIG_64BIT
+#if BITS_PER_LONG == 64
 #ifdef CONFIG_MMU
 #define PAGE_OFFSET		kernel_map.page_offset
 #else
@@ -38,7 +38,7 @@
 #define PAGE_OFFSET_L3		_AC(0xffffffd600000000, UL)
 #else
 #define PAGE_OFFSET		_AC(CONFIG_PAGE_OFFSET, UL)
-#endif /* CONFIG_64BIT */
+#endif /* BITS_PER_LONG == 64 */

 #ifndef __ASSEMBLY__

@@ -56,19 +56,24 @@ void clear_page(void *page);
 /*
  * Use struct definitions to apply C type checking
  */
+#if CONFIG_PGTABLE_LEVELS > 2
+typedef u64 ptval_t;
+#else
+typedef ulong ptval_t;
+#endif

 /* Page Global Directory entry */
 typedef struct {
-	unsigned long pgd;
+	ptval_t pgd;
 } pgd_t;

 /* Page Table entry */
 typedef struct {
-	unsigned long pte;
+	ptval_t pte;
 } pte_t;

 typedef struct {
-	unsigned long pgprot;
+	ptval_t pgprot;
 } pgprot_t;

 typedef struct page *pgtable_t;

@@ -81,13 +86,13 @@ typedef struct page *pgtable_t;
 #define __pgd(x)	((pgd_t) { (x) })
 #define __pgprot(x)	((pgprot_t) { (x) })

-#ifdef CONFIG_64BIT
-#define PTE_FMT "%016lx"
+#if CONFIG_PGTABLE_LEVELS > 2
+#define PTE_FMT "%016llx"
 #else
 #define PTE_FMT "%08lx"
 #endif

-#if defined(CONFIG_64BIT) && defined(CONFIG_MMU)
+#if (CONFIG_PGTABLE_LEVELS > 2) && defined(CONFIG_MMU)
 /*
  * We override this value as its generic definition uses __pa too early in
  * the boot process (before kernel_map.va_pa_offset is set).
 */
@@ -128,7 +133,7 @@ extern unsigned long vmemmap_start_pfn;
	((x) >= kernel_map.virt_addr && (x) < (kernel_map.virt_addr + kernel_map.size))

 #define is_linear_mapping(x) \
-	((x) >= PAGE_OFFSET && (!IS_ENABLED(CONFIG_64BIT) || (x) < PAGE_OFFSET + KERN_VIRT_SIZE))
+	((x) >= PAGE_OFFSET && ((BITS_PER_LONG == 32) || (x) < PAGE_OFFSET + KERN_VIRT_SIZE))

 #ifndef CONFIG_DEBUG_VIRTUAL
 #define linear_mapping_pa_to_va(x)	((void *)((unsigned long)(x) + kernel_map.va_pa_offset))

diff --git a/arch/riscv/include/asm/pgtable-64.h b/arch/riscv/include/asm/pgtable-64.h
index 0897dd99ab8d..401c012d0b66 100644
--- a/arch/riscv/include/asm/pgtable-64.h
+++ b/arch/riscv/include/asm/pgtable-64.h
@@ -19,7 +19,12 @@ extern bool pgtable_l5_enabled;
 #define PGDIR_SHIFT     (pgtable_l5_enabled ? PGDIR_SHIFT_L5 : \
		(pgtable_l4_enabled ? PGDIR_SHIFT_L4 : PGDIR_SHIFT_L3))
 /* Size of region mapped by a page global directory */
+#if BITS_PER_LONG == 64
 #define PGDIR_SIZE      (_AC(1, UL) << PGDIR_SHIFT)
+#else
+#define PGDIR_SIZE      (_AC(1, ULL) << PGDIR_SHIFT)
+#endif
+
 #define PGDIR_MASK      (~(PGDIR_SIZE - 1))

 /* p4d is folded into pgd in case of 4-level page table */
@@ -28,7 +33,7 @@ extern bool pgtable_l5_enabled;
 #define P4D_SHIFT_L5   39
 #define P4D_SHIFT      (pgtable_l5_enabled ? P4D_SHIFT_L5 : \
		(pgtable_l4_enabled ? P4D_SHIFT_L4 : P4D_SHIFT_L3))
-#define P4D_SIZE       (_AC(1, UL) << P4D_SHIFT)
+#define P4D_SIZE       (_AC(1, ULL) << P4D_SHIFT)
 #define P4D_MASK       (~(P4D_SIZE - 1))

 /* pud is folded into pgd in case of 3-level page table */
@@ -43,7 +48,7 @@ extern bool pgtable_l5_enabled;

 /* Page 4th Directory entry */
 typedef struct {
-	unsigned long p4d;
+	u64 p4d;
 } p4d_t;

 #define p4d_val(x)	((x).p4d)
@@ -52,7 +57,7 @@ typedef struct {

 /* Page Upper Directory entry */
 typedef struct {
-	unsigned long pud;
+	u64 pud;
 } pud_t;

 #define pud_val(x)	((x).pud)
@@ -61,7 +66,7 @@ typedef struct {

 /* Page Middle Directory entry */
 typedef struct {
-	unsigned long pmd;
+	u64 pmd;
 } pmd_t;

 #define pmd_val(x)	((x).pmd)
@@ -74,7 +79,7 @@ typedef struct {
 * | 63 | 62 61 | 60 54 | 53 10 | 9 8 | 7 | 6 | 5 | 4 | 3 | 2 | 1 | 0
 *   N     MT     RSV     PFN    reserved for SW   D A G U X W R V
 */
-#define _PAGE_PFN_MASK  GENMASK(53, 10)
+#define _PAGE_PFN_MASK  GENMASK_ULL(53, 10)

 /*
 * [63] Svnapot definitions:
 * 1 Svnapot enabled
 */
 #define _PAGE_NAPOT_SHIFT	63
-#define _PAGE_NAPOT		BIT(_PAGE_NAPOT_SHIFT)
+#define _PAGE_NAPOT		BIT_ULL(_PAGE_NAPOT_SHIFT)

 /*
 * Only 64KB (order 4) napot ptes supported.
*/ @@ -100,9 +105,9 @@ enum napot_cont_order { #define napot_cont_order(val) (__builtin_ctzl((val.pte >> _PAGE_PFN_SHIFT) << 1)) #define napot_cont_shift(order) ((order) + PAGE_SHIFT) -#define napot_cont_size(order) BIT(napot_cont_shift(order)) +#define napot_cont_size(order) BIT_ULL(napot_cont_shift(order)) #define napot_cont_mask(order) (~(napot_cont_size(order) - 1UL)) -#define napot_pte_num(order) BIT(order) +#define napot_pte_num(order) BIT_ULL(order) #ifdef CONFIG_RISCV_ISA_SVNAPOT #define HUGE_MAX_HSTATE (2 + (NAPOT_ORDER_MAX - NAPOT_CONT_ORDER_BASE)) @@ -118,8 +123,8 @@ enum napot_cont_order { * 10 - IO Non-cacheable, non-idempotent, strongly-ordered I/O memory * 11 - Rsvd Reserved for future standard use */ -#define _PAGE_NOCACHE_SVPBMT (1UL << 61) -#define _PAGE_IO_SVPBMT (1UL << 62) +#define _PAGE_NOCACHE_SVPBMT (1ULL << 61) +#define _PAGE_IO_SVPBMT (1ULL << 62) #define _PAGE_MTMASK_SVPBMT (_PAGE_NOCACHE_SVPBMT | _PAGE_IO_SVPBMT) /* @@ -133,10 +138,10 @@ enum napot_cont_order { * 01110 - PMA Weakly-ordered, Cacheable, Bufferable, Shareable, Non-trustable * 10010 - IO Strongly-ordered, Non-cacheable, Non-bufferable, Shareable, Non-trustable */ -#define _PAGE_PMA_THEAD ((1UL << 62) | (1UL << 61) | (1UL << 60)) -#define _PAGE_NOCACHE_THEAD ((1UL << 61) | (1UL << 60)) -#define _PAGE_IO_THEAD ((1UL << 63) | (1UL << 60)) -#define _PAGE_MTMASK_THEAD (_PAGE_PMA_THEAD | _PAGE_IO_THEAD | (1UL << 59)) +#define _PAGE_PMA_THEAD ((1ULL << 62) | (1ULL << 61) | (1ULL << 60)) +#define _PAGE_NOCACHE_THEAD ((1ULL << 61) | (1ULL << 60)) +#define _PAGE_IO_THEAD ((1ULL << 63) | (1ULL << 60)) +#define _PAGE_MTMASK_THEAD (_PAGE_PMA_THEAD | _PAGE_IO_THEAD | (1ULL << 59)) static inline u64 riscv_page_mtmask(void) { @@ -167,7 +172,7 @@ static inline u64 riscv_page_io(void) #define _PAGE_MTMASK riscv_page_mtmask() /* Set of bits to preserve across pte_modify() */ -#define _PAGE_CHG_MASK (~(unsigned long)(_PAGE_PRESENT | _PAGE_READ | \ +#define _PAGE_CHG_MASK (~(u64)(_PAGE_PRESENT | 
_PAGE_READ | \ _PAGE_WRITE | _PAGE_EXEC | \ _PAGE_USER | _PAGE_GLOBAL | \ _PAGE_MTMASK)) @@ -208,12 +213,12 @@ static inline void pud_clear(pud_t *pudp) set_pud(pudp, __pud(0)); } -static inline pud_t pfn_pud(unsigned long pfn, pgprot_t prot) +static inline pud_t pfn_pud(u64 pfn, pgprot_t prot) { return __pud((pfn << _PAGE_PFN_SHIFT) | pgprot_val(prot)); } -static inline unsigned long _pud_pfn(pud_t pud) +static inline u64 _pud_pfn(pud_t pud) { return __page_val_to_pfn(pud_val(pud)); } @@ -248,16 +253,16 @@ static inline bool mm_pud_folded(struct mm_struct *mm) #define pmd_index(addr) (((addr) >> PMD_SHIFT) & (PTRS_PER_PMD - 1)) -static inline pmd_t pfn_pmd(unsigned long pfn, pgprot_t prot) +static inline pmd_t pfn_pmd(u64 pfn, pgprot_t prot) { - unsigned long prot_val = pgprot_val(prot); + u64 prot_val = pgprot_val(prot); ALT_THEAD_PMA(prot_val); return __pmd((pfn << _PAGE_PFN_SHIFT) | prot_val); } -static inline unsigned long _pmd_pfn(pmd_t pmd) +static inline u64 _pmd_pfn(pmd_t pmd) { return __page_val_to_pfn(pmd_val(pmd)); } @@ -265,13 +270,13 @@ static inline unsigned long _pmd_pfn(pmd_t pmd) #define mk_pmd(page, prot) pfn_pmd(page_to_pfn(page), prot) #define pmd_ERROR(e) \ - pr_err("%s:%d: bad pmd %016lx.\n", __FILE__, __LINE__, pmd_val(e)) + pr_err("%s:%d: bad pmd " PTE_FMT ".\n", __FILE__, __LINE__, pmd_val(e)) #define pud_ERROR(e) \ - pr_err("%s:%d: bad pud %016lx.\n", __FILE__, __LINE__, pud_val(e)) + pr_err("%s:%d: bad pud " PTE_FMT ".\n", __FILE__, __LINE__, pud_val(e)) #define p4d_ERROR(e) \ - pr_err("%s:%d: bad p4d %016lx.\n", __FILE__, __LINE__, p4d_val(e)) + pr_err("%s:%d: bad p4d " PTE_FMT ".\n", __FILE__, __LINE__, p4d_val(e)) static inline void set_p4d(p4d_t *p4dp, p4d_t p4d) { @@ -311,12 +316,12 @@ static inline void p4d_clear(p4d_t *p4d) set_p4d(p4d, __p4d(0)); } -static inline p4d_t pfn_p4d(unsigned long pfn, pgprot_t prot) +static inline p4d_t pfn_p4d(u64 pfn, pgprot_t prot) { return __p4d((pfn << _PAGE_PFN_SHIFT) | pgprot_val(prot)); } 
-static inline unsigned long _p4d_pfn(p4d_t p4d) +static inline u64 _p4d_pfn(p4d_t p4d) { return __page_val_to_pfn(p4d_val(p4d)); } diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h index 050fdc49b5ad..5f1b48cb3311 100644 --- a/arch/riscv/include/asm/pgtable.h +++ b/arch/riscv/include/asm/pgtable.h @@ -9,6 +9,7 @@ #include #include +#include #include #ifndef CONFIG_MMU @@ -19,8 +20,13 @@ #define ADDRESS_SPACE_END (UL(-1)) #ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 /* Leave 2GB for kernel and BPF at the end of the address space */ #define KERNEL_LINK_ADDR (ADDRESS_SPACE_END - SZ_2G + 1) +#elif BITS_PER_LONG == 32 +/* Leave 64MB for kernel and BPF below PAGE_OFFSET */ +#define KERNEL_LINK_ADDR (PAGE_OFFSET - SZ_64M) +#endif #else #define KERNEL_LINK_ADDR PAGE_OFFSET #endif @@ -34,31 +40,45 @@ * Half of the kernel address space (1/4 of the entries of the page global * directory) is for the direct mapping. */ +#if (BITS_PER_LONG == 32) && (CONFIG_PGTABLE_LEVELS > 2) +#define KERN_VIRT_SIZE (PTRS_PER_PGD * PMD_SIZE) +#else #define KERN_VIRT_SIZE ((PTRS_PER_PGD / 2 * PGDIR_SIZE) / 2) +#endif #define VMALLOC_SIZE (KERN_VIRT_SIZE >> 1) +#if defined(CONFIG_64BIT) && (BITS_PER_LONG == 32) +#define VMALLOC_END MODULES_LOWEST_VADDR +#else #define VMALLOC_END PAGE_OFFSET -#define VMALLOC_START (PAGE_OFFSET - VMALLOC_SIZE) +#endif +#define VMALLOC_START (VMALLOC_END - VMALLOC_SIZE) #define BPF_JIT_REGION_SIZE (SZ_128M) -#ifdef CONFIG_64BIT #define BPF_JIT_REGION_START (BPF_JIT_REGION_END - BPF_JIT_REGION_SIZE) +#if BITS_PER_LONG == 64 #define BPF_JIT_REGION_END (MODULES_END) #else -#define BPF_JIT_REGION_START (PAGE_OFFSET - BPF_JIT_REGION_SIZE) #define BPF_JIT_REGION_END (VMALLOC_END) #endif /* Modules always live before the kernel */ -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 /* This is used to define the end of the KASAN shadow region */ #define MODULES_LOWEST_VADDR (KERNEL_LINK_ADDR - SZ_2G) #define MODULES_VADDR (PFN_ALIGN((unsigned 
long)&_end) - SZ_2G) #define MODULES_END (PFN_ALIGN((unsigned long)&_start)) #else +#ifdef CONFIG_64BIT +#define MODULES_LOWEST_VADDR (KERNEL_LINK_ADDR - SZ_64M) +#define MODULES_VADDR MODULES_LOWEST_VADDR +#define MODULES_END KERNEL_LINK_ADDR +#else +#define MODULES_LOWEST_VADDR VMALLOC_START #define MODULES_VADDR VMALLOC_START #define MODULES_END VMALLOC_END #endif +#endif /* * Roughly size the vmemmap space to be large enough to fit enough @@ -66,7 +86,7 @@ * position vmemmap directly below the VMALLOC region. */ #define VA_BITS_SV32 32 -#ifdef CONFIG_64BIT +#if defined(CONFIG_64BIT) && (BITS_PER_LONG == 64) #define VA_BITS_SV39 39 #define VA_BITS_SV48 48 #define VA_BITS_SV57 57 @@ -126,9 +146,14 @@ #define MMAP_VA_BITS_64 ((VA_BITS >= VA_BITS_SV48) ? VA_BITS_SV48 : VA_BITS) #define MMAP_MIN_VA_BITS_64 (VA_BITS_SV39) +#if BITS_PER_LONG == 64 #define MMAP_VA_BITS (is_compat_task() ? VA_BITS_SV32 : MMAP_VA_BITS_64) #define MMAP_MIN_VA_BITS (is_compat_task() ? VA_BITS_SV32 : MMAP_MIN_VA_BITS_64) #else +#define MMAP_VA_BITS VA_BITS_SV32 +#define MMAP_MIN_VA_BITS VA_BITS_SV32 +#endif +#else #include #endif /* CONFIG_64BIT */ @@ -252,7 +277,7 @@ static inline void pmd_clear(pmd_t *pmdp) static inline pgd_t pfn_pgd(unsigned long pfn, pgprot_t prot) { - unsigned long prot_val = pgprot_val(prot); + ptval_t prot_val = pgprot_val(prot); ALT_THEAD_PMA(prot_val); @@ -591,7 +616,11 @@ extern int ptep_test_and_clear_young(struct vm_area_struct *vma, unsigned long a static inline pte_t ptep_get_and_clear(struct mm_struct *mm, unsigned long address, pte_t *ptep) { +#if CONFIG_PGTABLE_LEVELS > 2 + pte_t pte = __pte(atomic64_xchg((atomic64_t *)ptep, 0)); +#else pte_t pte = __pte(atomic_long_xchg((atomic_long_t *)ptep, 0)); +#endif page_table_check_pte_clear(mm, pte); @@ -602,7 +631,11 @@ static inline pte_t ptep_get_and_clear(struct mm_struct *mm, static inline void ptep_set_wrprotect(struct mm_struct *mm, unsigned long address, pte_t *ptep) { +#if CONFIG_PGTABLE_LEVELS > 2 + 
atomic64_and(~(u64)_PAGE_WRITE, (atomic64_t *)ptep); +#else atomic_long_and(~(unsigned long)_PAGE_WRITE, (atomic_long_t *)ptep); +#endif } #define __HAVE_ARCH_PTEP_CLEAR_YOUNG_FLUSH @@ -636,7 +669,7 @@ static inline pgprot_t pgprot_nx(pgprot_t _prot) #define pgprot_noncached pgprot_noncached static inline pgprot_t pgprot_noncached(pgprot_t _prot) { - unsigned long prot = pgprot_val(_prot); + ptval_t prot = pgprot_val(_prot); prot &= ~_PAGE_MTMASK; prot |= _PAGE_IO; @@ -647,7 +680,7 @@ static inline pgprot_t pgprot_noncached(pgprot_t _prot) #define pgprot_writecombine pgprot_writecombine static inline pgprot_t pgprot_writecombine(pgprot_t _prot) { - unsigned long prot = pgprot_val(_prot); + ptval_t prot = pgprot_val(_prot); prot &= ~_PAGE_MTMASK; prot |= _PAGE_NOCACHE; @@ -905,8 +938,12 @@ static inline pte_t pte_swp_clear_exclusive(pte_t pte) * and give the kernel the other (upper) half. */ #ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 #define KERN_VIRT_START (-(BIT(VA_BITS)) + TASK_SIZE) #else +#define KERN_VIRT_START TASK_SIZE_32 +#endif +#else #define KERN_VIRT_START FIXADDR_START #endif @@ -915,6 +952,7 @@ static inline pte_t pte_swp_clear_exclusive(pte_t pte) * Note that PGDIR_SIZE must evenly divide TASK_SIZE. * Task size is: * - 0x9fc00000 (~2.5GB) for RV32. + * - 0x80000000 ( 2GB) for RV32_COMPAT & RV64ILP32 * - 0x4000000000 ( 256GB) for RV64 using SV39 mmu * - 0x800000000000 ( 128TB) for RV64 using SV48 mmu * - 0x100000000000000 ( 64PB) for RV64 using SV57 mmu @@ -928,15 +966,19 @@ static inline pte_t pte_swp_clear_exclusive(pte_t pte) #ifdef CONFIG_64BIT #define TASK_SIZE_64 (PGDIR_SIZE * PTRS_PER_PGD / 2) #define TASK_SIZE_MAX LONG_MAX +#define TASK_SIZE_32 _AC(0x80000000, UL) +#if BITS_PER_LONG == 64 #ifdef CONFIG_COMPAT -#define TASK_SIZE_32 (_AC(0x80000000, UL) - PAGE_SIZE) #define TASK_SIZE (is_compat_task() ? 
\ TASK_SIZE_32 : TASK_SIZE_64) #else #define TASK_SIZE TASK_SIZE_64 #endif +#else +#define TASK_SIZE TASK_SIZE_32 +#endif #else #define TASK_SIZE FIXADDR_START #endif diff --git a/arch/riscv/include/asm/processor.h b/arch/riscv/include/asm/processor.h index ca57a650c3d2..9f4e0be595fd 100644 --- a/arch/riscv/include/asm/processor.h +++ b/arch/riscv/include/asm/processor.h @@ -24,7 +24,7 @@ base; \ }) -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 #define DEFAULT_MAP_WINDOW (UL(1) << (MMAP_VA_BITS - 1)) #define STACK_TOP_MAX TASK_SIZE_64 #else diff --git a/arch/riscv/kernel/cpu.c b/arch/riscv/kernel/cpu.c index f6b13e9f5e6c..ce1440c63606 100644 --- a/arch/riscv/kernel/cpu.c +++ b/arch/riscv/kernel/cpu.c @@ -291,9 +291,9 @@ static void print_mmu(struct seq_file *f) const char *sv_type; #ifdef CONFIG_MMU -#if defined(CONFIG_32BIT) +#if CONFIG_PGTABLE_LEVELS == 2 sv_type = "sv32"; -#elif defined(CONFIG_64BIT) +#else if (pgtable_l5_enabled) sv_type = "sv57"; else if (pgtable_l4_enabled) diff --git a/arch/riscv/mm/fault.c b/arch/riscv/mm/fault.c index fcc23350610e..1e854e9633b3 100644 --- a/arch/riscv/mm/fault.c +++ b/arch/riscv/mm/fault.c @@ -40,25 +40,25 @@ static void show_pte(unsigned long addr) pgdp = pgd_offset(mm, addr); pgd = pgdp_get(pgdp); - pr_alert("[%016lx] pgd=%016lx", addr, pgd_val(pgd)); + pr_alert("[%016lx] pgd=" REG_FMT, addr, pgd_val(pgd)); if (pgd_none(pgd) || pgd_bad(pgd) || pgd_leaf(pgd)) goto out; p4dp = p4d_offset(pgdp, addr); p4d = p4dp_get(p4dp); - pr_cont(", p4d=%016lx", p4d_val(p4d)); + pr_cont(", p4d=" REG_FMT, p4d_val(p4d)); if (p4d_none(p4d) || p4d_bad(p4d) || p4d_leaf(p4d)) goto out; pudp = pud_offset(p4dp, addr); pud = pudp_get(pudp); - pr_cont(", pud=%016lx", pud_val(pud)); + pr_cont(", pud=" REG_FMT, pud_val(pud)); if (pud_none(pud) || pud_bad(pud) || pud_leaf(pud)) goto out; pmdp = pmd_offset(pudp, addr); pmd = pmdp_get(pmdp); - pr_cont(", pmd=%016lx", pmd_val(pmd)); + pr_cont(", pmd=" REG_FMT, pmd_val(pmd)); if (pmd_none(pmd) || 
pmd_bad(pmd) || pmd_leaf(pmd)) goto out; @@ -67,7 +67,7 @@ static void show_pte(unsigned long addr) goto out; pte = ptep_get(ptep); - pr_cont(", pte=%016lx", pte_val(pte)); + pr_cont(", pte=" REG_FMT, pte_val(pte)); pte_unmap(ptep); out: pr_cont("\n"); diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c index 15b2eda4c364..3cdbb033860e 100644 --- a/arch/riscv/mm/init.c +++ b/arch/riscv/mm/init.c @@ -46,16 +46,20 @@ EXPORT_SYMBOL(kernel_map); #define kernel_map (*(struct kernel_mapping *)XIP_FIXUP(&kernel_map)) #endif -#ifdef CONFIG_64BIT +#if CONFIG_PGTABLE_LEVELS > 2 +#if BITS_PER_LONG == 64 u64 satp_mode __ro_after_init = !IS_ENABLED(CONFIG_XIP_KERNEL) ? SATP_MODE_57 : SATP_MODE_39; #else +u64 satp_mode __ro_after_init = SATP_MODE_39; +#endif +#else u64 satp_mode __ro_after_init = SATP_MODE_32; #endif EXPORT_SYMBOL(satp_mode); #ifdef CONFIG_64BIT -bool pgtable_l4_enabled __ro_after_init = !IS_ENABLED(CONFIG_XIP_KERNEL); -bool pgtable_l5_enabled __ro_after_init = !IS_ENABLED(CONFIG_XIP_KERNEL); +bool pgtable_l4_enabled __ro_after_init = !IS_ENABLED(CONFIG_XIP_KERNEL) && (BITS_PER_LONG == 64); +bool pgtable_l5_enabled __ro_after_init = !IS_ENABLED(CONFIG_XIP_KERNEL) && (BITS_PER_LONG == 64); EXPORT_SYMBOL(pgtable_l4_enabled); EXPORT_SYMBOL(pgtable_l5_enabled); #endif @@ -117,7 +121,7 @@ static inline void print_mlg(char *name, unsigned long b, unsigned long t) (((t) - (b)) >> LOG2_SZ_1G)); } -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 static inline void print_mlt(char *name, unsigned long b, unsigned long t) { pr_notice("%12s : 0x%08lx - 0x%08lx (%4ld TB)\n", name, b, t, @@ -131,7 +135,7 @@ static inline void print_ml(char *name, unsigned long b, unsigned long t) { unsigned long diff = t - b; - if (IS_ENABLED(CONFIG_64BIT) && (diff >> LOG2_SZ_1T) >= 10) + if ((BITS_PER_LONG == 64) && (diff >> LOG2_SZ_1T) >= 10) print_mlt(name, b, t); else if ((diff >> LOG2_SZ_1G) >= 10) print_mlg(name, b, t); @@ -164,7 +168,9 @@ static void __init print_vm_layout(void) 
#endif print_ml("kernel", (unsigned long)kernel_map.virt_addr, - (unsigned long)ADDRESS_SPACE_END); + (BITS_PER_LONG == 64) ? + (unsigned long)ADDRESS_SPACE_END : + (unsigned long)PAGE_OFFSET); } } #else @@ -173,7 +179,8 @@ static void print_vm_layout(void) { } void __init mem_init(void) { - bool swiotlb = max_pfn > PFN_DOWN(dma32_phys_limit); + bool swiotlb = (BITS_PER_LONG == 32) ? false: + (max_pfn > PFN_DOWN(dma32_phys_limit)); #ifdef CONFIG_FLATMEM BUG_ON(!mem_map); #endif /* CONFIG_FLATMEM */ @@ -319,7 +326,7 @@ static void __init setup_bootmem(void) memblock_reserve(dtb_early_pa, fdt_totalsize(dtb_early_va)); dma_contiguous_reserve(dma32_phys_limit); - if (IS_ENABLED(CONFIG_64BIT)) + if (BITS_PER_LONG == 64) hugetlb_cma_reserve(PUD_SHIFT - PAGE_SHIFT); } @@ -685,16 +692,26 @@ void __meminit create_pgd_mapping(pgd_t *pgdp, uintptr_t va, phys_addr_t pa, phy pgd_next_t *nextp; phys_addr_t next_phys; uintptr_t pgd_idx = pgd_index(va); +#if (CONFIG_PGTABLE_LEVELS > 2) && (BITS_PER_LONG == 32) + uintptr_t pgd_idh = pgd_index(sign_extend64((u64)va, 31)); +#endif if (sz == PGDIR_SIZE) { - if (pgd_val(pgdp[pgd_idx]) == 0) + if (pgd_val(pgdp[pgd_idx]) == 0) { pgdp[pgd_idx] = pfn_pgd(PFN_DOWN(pa), prot); +#if (CONFIG_PGTABLE_LEVELS > 2) && (BITS_PER_LONG == 32) + pgdp[pgd_idh] = pfn_pgd(PFN_DOWN(pa), prot); +#endif + } return; } if (pgd_val(pgdp[pgd_idx]) == 0) { next_phys = alloc_pgd_next(va); pgdp[pgd_idx] = pfn_pgd(PFN_DOWN(next_phys), PAGE_TABLE); +#if (CONFIG_PGTABLE_LEVELS > 2) && (BITS_PER_LONG == 32) + pgdp[pgd_idh] = pfn_pgd(PFN_DOWN(next_phys), PAGE_TABLE); +#endif nextp = get_pgd_next_virt(next_phys); memset(nextp, 0, PAGE_SIZE); } else { @@ -775,7 +792,7 @@ static __meminit pgprot_t pgprot_from_va(uintptr_t va) } #endif /* CONFIG_STRICT_KERNEL_RWX */ -#if defined(CONFIG_64BIT) && !defined(CONFIG_XIP_KERNEL) +#if (BITS_PER_LONG == 64) && !defined(CONFIG_XIP_KERNEL) u64 __pi_set_satp_mode_from_cmdline(uintptr_t dtb_pa); static void __init 
disable_pgtable_l5(void) @@ -981,8 +998,8 @@ static void __init create_fdt_early_page_table(uintptr_t fix_fdt_va, /* Make sure the fdt fixmap address is always aligned on PMD size */ BUILD_BUG_ON(FIX_FDT % (PMD_SIZE / PAGE_SIZE)); - /* In 32-bit only, the fdt lies in its own PGD */ - if (!IS_ENABLED(CONFIG_64BIT)) { + /* In Sv32 only, the fdt lies in its own PGD */ + if (CONFIG_PGTABLE_LEVELS == 2) { create_pgd_mapping(early_pg_dir, fix_fdt_va, pa, MAX_FDT_SIZE, PAGE_KERNEL); } else { @@ -1108,7 +1125,7 @@ asmlinkage void __init setup_vm(uintptr_t dtb_pa) kernel_map.virt_addr = KERNEL_LINK_ADDR + kernel_map.virt_offset; #ifdef CONFIG_XIP_KERNEL -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 kernel_map.page_offset = PAGE_OFFSET_L3; #else kernel_map.page_offset = _AC(CONFIG_PAGE_OFFSET, UL); @@ -1133,7 +1150,7 @@ asmlinkage void __init setup_vm(uintptr_t dtb_pa) kernel_map.va_kernel_pa_offset = kernel_map.virt_addr - kernel_map.phys_addr; #endif -#if defined(CONFIG_64BIT) && !defined(CONFIG_XIP_KERNEL) +#if (BITS_PER_LONG == 64) && !defined(CONFIG_XIP_KERNEL) set_satp_mode(dtb_pa); set_mmap_rnd_bits_max(); #endif @@ -1164,7 +1181,11 @@ asmlinkage void __init setup_vm(uintptr_t dtb_pa) * The last 4K bytes of the addressable memory can not be mapped because * of IS_ERR_VALUE macro. 
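The mirrored PGD write added to create_pgd_mapping() above (the pgd_idh index) can be illustrated with index arithmetic. A hedged sketch, assuming Sv39 parameters (PGDIR_SHIFT = 30, PTRS_PER_PGD = 512): the hart still runs Sv39, so a 32-bit virtual address with bit 31 set is sign-extended by hardware before translation, and the same mapping must therefore appear both at the index 32-bit arithmetic computes and at the index the sign-extended 64-bit address selects.

```python
# Why create_pgd_mapping() writes both pgd_idx and pgd_idh under RV64ILP32.
PGDIR_SHIFT = 30      # Sv39
PTRS_PER_PGD = 512

def pgd_index(va: int) -> int:
    return (va >> PGDIR_SHIFT) & (PTRS_PER_PGD - 1)

def sign_extend64(value: int, sign_bit: int) -> int:
    # Mirrors the kernel helper: extend bit `sign_bit` up through bit 63.
    m = 1 << sign_bit
    return ((value ^ m) - m) & ((1 << 64) - 1)

va = 0xC000_0000                                 # PAGE_OFFSET for the ILP32 ABI
assert pgd_index(va) == 3                        # index from 32-bit arithmetic
assert pgd_index(sign_extend64(va, 31)) == 511   # index Sv39 hardware selects
```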
*/ +#if BITS_PER_LONG == 64 BUG_ON((kernel_map.virt_addr + kernel_map.size) > ADDRESS_SPACE_END - SZ_4K); +#else + BUG_ON((kernel_map.virt_addr + kernel_map.size) > PAGE_OFFSET - SZ_4K); +#endif #endif #ifdef CONFIG_RELOCATABLE @@ -1246,7 +1267,7 @@ asmlinkage void __init setup_vm(uintptr_t dtb_pa) fix_bmap_epmd = fixmap_pmd[pmd_index(__fix_to_virt(FIX_BTMAP_END))]; if (pmd_val(fix_bmap_spmd) != pmd_val(fix_bmap_epmd)) { WARN_ON(1); - pr_warn("fixmap btmap start [%08lx] != end [%08lx]\n", + pr_warn("fixmap btmap start [" PTE_FMT "] != end [" PTE_FMT "]\n", pmd_val(fix_bmap_spmd), pmd_val(fix_bmap_epmd)); pr_warn("fix_to_virt(FIX_BTMAP_BEGIN): %08lx\n", fix_to_virt(FIX_BTMAP_BEGIN)); @@ -1336,7 +1357,7 @@ static void __init create_linear_mapping_page_table(void) static void __init setup_vm_final(void) { /* Setup swapper PGD for fixmap */ -#if !defined(CONFIG_64BIT) +#if CONFIG_PGTABLE_LEVELS == 2 /* * In 32-bit, the device tree lies in a pgd entry, so it must be copied * directly in swapper_pg_dir in addition to the pgd entry that points @@ -1354,7 +1375,7 @@ static void __init setup_vm_final(void) create_linear_mapping_page_table(); /* Map the kernel */ - if (IS_ENABLED(CONFIG_64BIT)) + if (CONFIG_PGTABLE_LEVELS > 2) create_kernel_page_table(swapper_pg_dir, false); #ifdef CONFIG_KASAN diff --git a/arch/riscv/mm/pageattr.c b/arch/riscv/mm/pageattr.c index d815448758a1..45927f713cb9 100644 --- a/arch/riscv/mm/pageattr.c +++ b/arch/riscv/mm/pageattr.c @@ -15,10 +15,10 @@ struct pageattr_masks { pgprot_t clear_mask; }; -static unsigned long set_pageattr_masks(unsigned long val, struct mm_walk *walk) +static unsigned long set_pageattr_masks(ptval_t val, struct mm_walk *walk) { struct pageattr_masks *masks = walk->private; - unsigned long new_val = val; + ptval_t new_val = val; new_val &= ~(pgprot_val(masks->clear_mask)); new_val |= (pgprot_val(masks->set_mask)); diff --git a/arch/riscv/mm/pgtable.c b/arch/riscv/mm/pgtable.c index 4ae67324f992..564679b4c48e 100644 --- 
a/arch/riscv/mm/pgtable.c +++ b/arch/riscv/mm/pgtable.c @@ -37,7 +37,7 @@ int ptep_test_and_clear_young(struct vm_area_struct *vma, { if (!pte_young(ptep_get(ptep))) return 0; - return test_and_clear_bit(_PAGE_ACCESSED_OFFSET, &pte_val(*ptep)); + return test_and_clear_bit(_PAGE_ACCESSED_OFFSET, (unsigned long *)&pte_val(*ptep)); } EXPORT_SYMBOL_GPL(ptep_test_and_clear_young);

From patchwork Tue Mar 25 12:15:58 2025
X-Patchwork-Submitter: Guo Ren X-Patchwork-Id: 876184
From: guoren@kernel.org
Subject: [RFC PATCH V3 17/43] rv64ilp32_abi: riscv: Adapt kasan memory layout
Date: Tue, 25 Mar 2025 08:15:58 -0400
Message-Id: <20250325121624.523258-18-guoren@kernel.org>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20250325121624.523258-1-guoren@kernel.org>
References: <20250325121624.523258-1-guoren@kernel.org>
From: "Guo Ren (Alibaba DAMO Academy)"

For generic KASAN, the size of each memory granule is 8 bytes, so the shadow
needs 1/8 of the address space. The kernel space is 2GiB in rv64ilp32, so we
need a 256MiB shadow range (0x80000000 ~ 0x90000000), and the offset is
0x70000000 for the whole 4GiB address space.

Virtual kernel memory layout:
  fixmap  : 0x90a00000 - 0x90ffffff (6144 kB)
  pci io  : 0x91000000 - 0x91ffffff (  16 MB)
  vmemmap : 0x92000000 - 0x93ffffff (  32 MB)
  vmalloc : 0x94000000 - 0xb3ffffff ( 512 MB)
  modules : 0xb4000000 - 0xb7ffffff (  64 MB)
  lowmem  : 0xc0000000 - 0xc7ffffff ( 128 MB)
  kasan   : 0x80000000 - 0x8fffffff ( 256 MB) <=
  kernel  : 0xb8000000 - 0xbfffffff ( 128 MB)

Signed-off-by: Guo Ren (Alibaba DAMO Academy)
---
 arch/riscv/include/asm/kasan.h | 6 +++++-
 arch/riscv/mm/kasan_init.c     | 2 +-
 2 files changed, 6 insertions(+), 2 deletions(-)

diff --git a/arch/riscv/include/asm/kasan.h b/arch/riscv/include/asm/kasan.h index e6a0071bdb56..dd3a211bc5d0 100644 --- a/arch/riscv/include/asm/kasan.h +++ b/arch/riscv/include/asm/kasan.h @@ -21,7 +21,7 @@ * [KASAN_SHADOW_OFFSET, KASAN_SHADOW_END) cover all 64-bits of virtual * addresses.
So KASAN_SHADOW_OFFSET should satisfy the following equation: * KASAN_SHADOW_OFFSET = KASAN_SHADOW_END - - * (1ULL << (64 - KASAN_SHADOW_SCALE_SHIFT)) + * (1ULL << (BITS_PER_LONG - KASAN_SHADOW_SCALE_SHIFT)) */ #define KASAN_SHADOW_SCALE_SHIFT 3 @@ -31,7 +31,11 @@ * aligned on PGDIR_SIZE, so force its alignment to ease its population. */ #define KASAN_SHADOW_START ((KASAN_SHADOW_END - KASAN_SHADOW_SIZE) & PGDIR_MASK) +#if defined(CONFIG_64BIT) && (BITS_PER_LONG == 32) +#define KASAN_SHADOW_END 0x90000000UL +#else #define KASAN_SHADOW_END MODULES_LOWEST_VADDR +#endif #ifdef CONFIG_KASAN #define KASAN_SHADOW_OFFSET _AC(CONFIG_KASAN_SHADOW_OFFSET, UL) diff --git a/arch/riscv/mm/kasan_init.c b/arch/riscv/mm/kasan_init.c index 41c635d6aca4..1e864598779a 100644 --- a/arch/riscv/mm/kasan_init.c +++ b/arch/riscv/mm/kasan_init.c @@ -324,7 +324,7 @@ asmlinkage void __init kasan_early_init(void) uintptr_t i; BUILD_BUG_ON(KASAN_SHADOW_OFFSET != - KASAN_SHADOW_END - (1UL << (64 - KASAN_SHADOW_SCALE_SHIFT))); + KASAN_SHADOW_END - (1UL << (BITS_PER_LONG - KASAN_SHADOW_SCALE_SHIFT))); for (i = 0; i < PTRS_PER_PTE; ++i) set_pte(kasan_early_shadow_pte + i,

From patchwork Tue Mar 25 12:16:00 2025
X-Patchwork-Submitter: Guo Ren X-Patchwork-Id: 876183
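The shadow layout in patch 17's commit message above follows from the generic KASAN mapping, shadow = (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET. A hedged arithmetic check (Python, not kernel code), assuming a 32-bit address space and KASAN_SHADOW_END = 0x90000000 as in the patch:

```python
# Derive KASAN_SHADOW_OFFSET for rv64ilp32 and check the stated layout.
KASAN_SHADOW_SCALE_SHIFT = 3          # 8-byte granules -> 1/8 of address space
KASAN_SHADOW_END = 0x9000_0000
KASAN_SHADOW_OFFSET = KASAN_SHADOW_END - (1 << (32 - KASAN_SHADOW_SCALE_SHIFT))

def shadow(addr: int) -> int:
    return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET

assert KASAN_SHADOW_OFFSET == 0x7000_0000
assert shadow(0xC000_0000) == 0x8800_0000     # lowmem start maps into the range
assert shadow(0xFFFF_FFFF) < 0x9000_0000      # top of the 4 GiB space stays inside
```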
From: guoren@kernel.org
Subject: [RFC PATCH V3 19/43] rv64ilp32_abi: irqchip: irq-riscv-intc: Use xlen_t instead of ulong
Date: Tue, 25 Mar 2025 08:16:00 -0400
Message-Id: <20250325121624.523258-20-guoren@kernel.org>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20250325121624.523258-1-guoren@kernel.org>
References: <20250325121624.523258-1-guoren@kernel.org>
From: "Guo Ren (Alibaba DAMO Academy)"

The RV64ILP32 ABI is based on CONFIG_64BIT, so use xlen/xlen_t instead of BITS_PER_LONG/ulong.
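A hedged sketch (Python, not driver code) of the distinction the commit message draws: under RV64ILP32 the C long is 32 bits, but the hart's XLEN, and with it the width of CSRs such as scause and the number of standard local interrupts, is still 64. Sizing the INTC by BITS_PER_LONG would halve the interrupt space, and masking the interrupt flag out of scause needs 64-bit arithmetic.

```python
# BITS_PER_LONG vs __riscv_xlen under the RV64ILP32 ABI.
BITS_PER_LONG = 32        # ILP32 data model
__riscv_xlen = 64         # RV64 hardware

# The interrupt flag sits in scause bit XLEN-1, i.e. bit 63, not bit 31.
CAUSE_IRQ_FLAG = 1 << (__riscv_xlen - 1)

scause = CAUSE_IRQ_FLAG | 9           # e.g. supervisor external interrupt
cause = scause & ~CAUSE_IRQ_FLAG      # what riscv_intc_irq() computes

assert cause == 9
assert CAUSE_IRQ_FLAG >= 1 << 32      # unrepresentable in a 32-bit long
assert __riscv_xlen != BITS_PER_LONG  # the two can no longer be conflated
```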
Signed-off-by: Guo Ren (Alibaba DAMO Academy)
---
 drivers/irqchip/irq-riscv-intc.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/drivers/irqchip/irq-riscv-intc.c b/drivers/irqchip/irq-riscv-intc.c
index f653c13de62b..4fc7d5704acf 100644
--- a/drivers/irqchip/irq-riscv-intc.c
+++ b/drivers/irqchip/irq-riscv-intc.c
@@ -20,18 +20,19 @@
 #include
 #include
+#include

 static struct irq_domain *intc_domain;

-static unsigned int riscv_intc_nr_irqs __ro_after_init = BITS_PER_LONG;
-static unsigned int riscv_intc_custom_base __ro_after_init = BITS_PER_LONG;
+static unsigned int riscv_intc_nr_irqs __ro_after_init = __riscv_xlen;
+static unsigned int riscv_intc_custom_base __ro_after_init = __riscv_xlen;
 static unsigned int riscv_intc_custom_nr_irqs __ro_after_init;

 static void riscv_intc_irq(struct pt_regs *regs)
 {
-        unsigned long cause = regs->cause & ~CAUSE_IRQ_FLAG;
+        xlen_t cause = regs->cause & ~CAUSE_IRQ_FLAG;

         if (generic_handle_domain_irq(intc_domain, cause))
-                pr_warn_ratelimited("Failed to handle interrupt (cause: %ld)\n", cause);
+                pr_warn_ratelimited("Failed to handle interrupt (cause: " REG_FMT ")\n", cause);
 }

 static void riscv_intc_aia_irq(struct pt_regs *regs)

From patchwork Tue Mar 25 12:16:02 2025
Subject: [RFC PATCH V3 21/43] rv64ilp32_abi: asm-generic: Add custom BITS_PER_LONG definition
Date: Tue, 25 Mar 2025 08:16:02 -0400
Message-Id: <20250325121624.523258-22-guoren@kernel.org>
In-Reply-To: <20250325121624.523258-1-guoren@kernel.org>
References: <20250325121624.523258-1-guoren@kernel.org>

From: "Guo Ren (Alibaba DAMO Academy)"

The RV64ILP32 ABI Linux kernel is based on CONFIG_64BIT, but BITS_PER_LONG is 32.
So, provide a custom architectural definition of BITS_PER_LONG so the correct macro definition is used.

Signed-off-by: Guo Ren (Alibaba DAMO Academy)
---
 arch/riscv/include/uapi/asm/bitsperlong.h | 6 ++++++
 include/asm-generic/bitsperlong.h         | 2 ++
 2 files changed, 8 insertions(+)

diff --git a/arch/riscv/include/uapi/asm/bitsperlong.h b/arch/riscv/include/uapi/asm/bitsperlong.h
index 7d0b32e3b701..fec2ad91597c 100644
--- a/arch/riscv/include/uapi/asm/bitsperlong.h
+++ b/arch/riscv/include/uapi/asm/bitsperlong.h
@@ -9,6 +9,12 @@

 #define __BITS_PER_LONG (__SIZEOF_POINTER__ * 8)

+#if __BITS_PER_LONG == 64
+#define BITS_PER_LONG 64
+#else
+#define BITS_PER_LONG 32
+#endif
+
 #include

 #endif /* _UAPI_ASM_RISCV_BITSPERLONG_H */
diff --git a/include/asm-generic/bitsperlong.h b/include/asm-generic/bitsperlong.h
index 1023e2a4bd37..7ccbb7ce6610 100644
--- a/include/asm-generic/bitsperlong.h
+++ b/include/asm-generic/bitsperlong.h
@@ -6,7 +6,9 @@

 #ifdef CONFIG_64BIT
+#ifndef BITS_PER_LONG
 #define BITS_PER_LONG 64
+#endif
 #else
 #define BITS_PER_LONG 32
 #endif /* CONFIG_64BIT */

From patchwork Tue Mar 25 12:16:04 2025
Subject: [RFC PATCH V3 23/43] rv64ilp32_abi: compat: Correct compat_ulong_t cast
Date: Tue, 25 Mar 2025 08:16:04 -0400
Message-Id: <20250325121624.523258-24-guoren@kernel.org>
In-Reply-To: <20250325121624.523258-1-guoren@kernel.org>
References: <20250325121624.523258-1-guoren@kernel.org>

From: "Guo Ren (Alibaba DAMO Academy)"

RV64ILP32 ABI systems have BITS_PER_LONG set to 32, matching sizeof(compat_ulong_t). Adjust code involving compat_ulong_t accordingly.
Signed-off-by: Guo Ren (Alibaba DAMO Academy)
---
 include/uapi/linux/auto_fs.h |  6 ++++++
 kernel/compat.c              | 15 ++++++++++++---
 2 files changed, 18 insertions(+), 3 deletions(-)

diff --git a/include/uapi/linux/auto_fs.h b/include/uapi/linux/auto_fs.h
index 8081df849743..7d925ee810b6 100644
--- a/include/uapi/linux/auto_fs.h
+++ b/include/uapi/linux/auto_fs.h
@@ -80,9 +80,15 @@ enum {
 #define AUTOFS_IOC_SETTIMEOUT32 _IOWR(AUTOFS_IOCTL, \
                                       AUTOFS_IOC_SETTIMEOUT_CMD, \
                                       compat_ulong_t)
+#if __riscv_xlen == 64
+#define AUTOFS_IOC_SETTIMEOUT _IOWR(AUTOFS_IOCTL, \
+                                    AUTOFS_IOC_SETTIMEOUT_CMD, \
+                                    unsigned long long)
+#else
 #define AUTOFS_IOC_SETTIMEOUT _IOWR(AUTOFS_IOCTL, \
                                     AUTOFS_IOC_SETTIMEOUT_CMD, \
                                     unsigned long)
+#endif
 #define AUTOFS_IOC_EXPIRE _IOR(AUTOFS_IOCTL, \
                                AUTOFS_IOC_EXPIRE_CMD, \
                                struct autofs_packet_expire)
diff --git a/kernel/compat.c b/kernel/compat.c
index fb50f29d9b36..46ffdc5e7cc4 100644
--- a/kernel/compat.c
+++ b/kernel/compat.c
@@ -203,11 +203,17 @@ long compat_get_bitmap(unsigned long *mask, const compat_ulong_t __user *umask,
                 return -EFAULT;

         while (nr_compat_longs > 1) {
-                compat_ulong_t l1, l2;
+                compat_ulong_t l1;

                 unsafe_get_user(l1, umask++, Efault);
+                nr_compat_longs -= 1;
+#if BITS_PER_LONG == 64
+                compat_ulong_t l2;
                 unsafe_get_user(l2, umask++, Efault);
                 *mask++ = ((unsigned long)l2 << BITS_PER_COMPAT_LONG) | l1;
-                nr_compat_longs -= 2;
+                nr_compat_longs -= 1;
+#else
+                *mask++ = l1;
+#endif
         }
         if (nr_compat_longs)
                 unsafe_get_user(*mask, umask++, Efault);
@@ -234,8 +240,11 @@ long compat_put_bitmap(compat_ulong_t __user *umask, unsigned long *mask,
         while (nr_compat_longs > 1) {
                 unsigned long m = *mask++;

                 unsafe_put_user((compat_ulong_t)m, umask++, Efault);
+                nr_compat_longs -= 1;
+#if BITS_PER_LONG == 64
                 unsafe_put_user(m >> BITS_PER_COMPAT_LONG, umask++, Efault);
-                nr_compat_longs -= 2;
+                nr_compat_longs -= 1;
+#endif
         }
         if (nr_compat_longs)
                 unsafe_put_user((compat_ulong_t)*mask, umask++, Efault);

From patchwork Tue Mar 25 12:16:06 2025
Subject: [RFC PATCH V3 25/43] rv64ilp32_abi: exec: Adapt 64lp64 env and argv
Date: Tue, 25 Mar 2025 08:16:06 -0400
Message-Id: <20250325121624.523258-26-guoren@kernel.org>
In-Reply-To: <20250325121624.523258-1-guoren@kernel.org>
References: <20250325121624.523258-1-guoren@kernel.org>

From: "Guo Ren (Alibaba DAMO Academy)"

The rv64ilp32 ABI reuses the env and argv memory layout of the lp64 ABI, so leave enough space to fit the lp64 struct layout.

Signed-off-by: Guo Ren (Alibaba DAMO Academy)
---
 fs/exec.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/fs/exec.c b/fs/exec.c
index 506cd411f4ac..548d18b7ae92 100644
--- a/fs/exec.c
+++ b/fs/exec.c
@@ -424,6 +424,10 @@ static const char __user *get_user_arg_ptr(struct user_arg_ptr argv, int nr)
         }
 #endif

+#if defined(CONFIG_64BIT) && (BITS_PER_LONG == 32)
+        nr = nr * 2;
+#endif
+
         if (get_user(native, argv.ptr.native + nr))
                 return ERR_PTR(-EFAULT);

From patchwork Tue Mar 25 12:16:08 2025
Subject: [RFC PATCH V3 27/43] rv64ilp32_abi: input: Adapt BITS_PER_LONG to dword
Date: Tue, 25 Mar 2025 08:16:08 -0400
Message-Id: <20250325121624.523258-28-guoren@kernel.org>
In-Reply-To: <20250325121624.523258-1-guoren@kernel.org>
References: <20250325121624.523258-1-guoren@kernel.org>

From: "Guo Ren (Alibaba DAMO Academy)"

The RV64ILP32 ABI Linux kernel is based on CONFIG_64BIT, but BITS_PER_LONG is 32. So, select the bits-to-dword conversion based on BITS_PER_LONG.
Signed-off-by: Guo Ren (Alibaba DAMO Academy)
---
 drivers/input/input.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/input/input.c b/drivers/input/input.c
index c9e3ac64bcd0..7af5e8c66f25 100644
--- a/drivers/input/input.c
+++ b/drivers/input/input.c
@@ -1006,7 +1006,11 @@ static int input_bits_to_string(char *buf, int buf_size,
         int len = 0;

         if (in_compat_syscall()) {
+#if BITS_PER_LONG == 64
                 u32 dword = bits >> 32;
+#else
+                u32 dword = bits;
+#endif

                 if (dword || !skip_empty)
                         len += snprintf(buf, buf_size, "%x ", dword);

From patchwork Tue Mar 25 12:16:10 2025
Subject: [RFC PATCH V3 29/43] rv64ilp32_abi: locking/atomic: Use BITS_PER_LONG for scripts
Date: Tue, 25 Mar 2025 08:16:10 -0400
Message-Id: <20250325121624.523258-30-guoren@kernel.org>
In-Reply-To: <20250325121624.523258-1-guoren@kernel.org>
References: <20250325121624.523258-1-guoren@kernel.org>

From: "Guo Ren (Alibaba DAMO Academy)"

In RV64ILP32 ABI systems, BITS_PER_LONG equals 32 and determines code selection, not CONFIG_64BIT.
Signed-off-by: Guo Ren (Alibaba DAMO Academy)
---
 include/linux/atomic/atomic-long.h | 174 ++++++++++++++---------------
 scripts/atomic/gen-atomic-long.sh  |   4 +-
 2 files changed, 89 insertions(+), 89 deletions(-)

diff --git a/include/linux/atomic/atomic-long.h b/include/linux/atomic/atomic-long.h
index f86b29d90877..e31e0bdf9e26 100644
--- a/include/linux/atomic/atomic-long.h
+++ b/include/linux/atomic/atomic-long.h
@@ -9,7 +9,7 @@
 #include
 #include

-#ifdef CONFIG_64BIT
+#if BITS_PER_LONG == 64
 typedef atomic64_t atomic_long_t;
 #define ATOMIC_LONG_INIT(i)            ATOMIC64_INIT(i)
 #define atomic_long_cond_read_acquire  atomic64_cond_read_acquire
@@ -34,7 +34,7 @@ typedef atomic_t atomic_long_t;
 static __always_inline long
 raw_atomic_long_read(const atomic_long_t *v)
 {
-#ifdef CONFIG_64BIT
+#if BITS_PER_LONG == 64
         return raw_atomic64_read(v);
 #else
         return raw_atomic_read(v);
@@ -54,7 +54,7 @@ raw_atomic_long_read(const atomic_long_t *v)
 static __always_inline long
 raw_atomic_long_read_acquire(const atomic_long_t *v)
 {
-#ifdef CONFIG_64BIT
+#if BITS_PER_LONG == 64
         return raw_atomic64_read_acquire(v);
 #else
         return raw_atomic_read_acquire(v);
@@ -75,7 +75,7 @@ raw_atomic_long_read_acquire(const atomic_long_t *v)
 static __always_inline void
 raw_atomic_long_set(atomic_long_t *v, long i)
 {
-#ifdef CONFIG_64BIT
+#if BITS_PER_LONG == 64
         raw_atomic64_set(v, i);
 #else
         raw_atomic_set(v, i);
@@ -96,7 +96,7 @@ raw_atomic_long_set(atomic_long_t *v, long i)
 static __always_inline void
 raw_atomic_long_set_release(atomic_long_t *v, long i)
 {
-#ifdef CONFIG_64BIT
+#if BITS_PER_LONG == 64
         raw_atomic64_set_release(v, i);
 #else
         raw_atomic_set_release(v, i);
@@ -117,7 +117,7 @@ raw_atomic_long_set_release(atomic_long_t *v, long i)
 static __always_inline void
 raw_atomic_long_add(long i, atomic_long_t *v)
 {
-#ifdef CONFIG_64BIT
+#if BITS_PER_LONG == 64
         raw_atomic64_add(i, v);
 #else
         raw_atomic_add(i, v);
@@ -138,7 +138,7 @@ raw_atomic_long_add(long i, atomic_long_t *v)
 static __always_inline long
 raw_atomic_long_add_return(long i, atomic_long_t *v)
 {
-#ifdef CONFIG_64BIT
+#if BITS_PER_LONG == 64
         return raw_atomic64_add_return(i, v);
 #else
         return raw_atomic_add_return(i, v);
@@ -159,7 +159,7 @@ raw_atomic_long_add_return(long i, atomic_long_t *v)
 static __always_inline long
 raw_atomic_long_add_return_acquire(long i, atomic_long_t *v)
 {
-#ifdef CONFIG_64BIT
+#if BITS_PER_LONG == 64
         return raw_atomic64_add_return_acquire(i, v);
 #else
         return raw_atomic_add_return_acquire(i, v);
@@ -180,7 +180,7 @@ raw_atomic_long_add_return_acquire(long i, atomic_long_t *v)
 static __always_inline long
 raw_atomic_long_add_return_release(long i, atomic_long_t *v)
 {
-#ifdef CONFIG_64BIT
+#if BITS_PER_LONG == 64
         return raw_atomic64_add_return_release(i, v);
 #else
         return raw_atomic_add_return_release(i, v);
@@ -201,7 +201,7 @@ raw_atomic_long_add_return_release(long i, atomic_long_t *v)
 static __always_inline long
 raw_atomic_long_add_return_relaxed(long i, atomic_long_t *v)
 {
-#ifdef CONFIG_64BIT
+#if BITS_PER_LONG == 64
         return raw_atomic64_add_return_relaxed(i, v);
 #else
         return raw_atomic_add_return_relaxed(i, v);
@@ -222,7 +222,7 @@ raw_atomic_long_add_return_relaxed(long i, atomic_long_t *v)
 static __always_inline long
 raw_atomic_long_fetch_add(long i, atomic_long_t *v)
 {
-#ifdef CONFIG_64BIT
+#if BITS_PER_LONG == 64
         return raw_atomic64_fetch_add(i, v);
 #else
         return raw_atomic_fetch_add(i, v);
@@ -243,7 +243,7 @@ raw_atomic_long_fetch_add(long i, atomic_long_t *v)
 static __always_inline long
 raw_atomic_long_fetch_add_acquire(long i, atomic_long_t *v)
 {
-#ifdef CONFIG_64BIT
+#if BITS_PER_LONG == 64
         return raw_atomic64_fetch_add_acquire(i, v);
 #else
         return raw_atomic_fetch_add_acquire(i, v);
@@ -264,7 +264,7 @@ raw_atomic_long_fetch_add_acquire(long i, atomic_long_t *v)
 static __always_inline long
 raw_atomic_long_fetch_add_release(long i, atomic_long_t *v)
 {
-#ifdef CONFIG_64BIT
+#if BITS_PER_LONG == 64
         return raw_atomic64_fetch_add_release(i, v);
 #else
         return raw_atomic_fetch_add_release(i, v);
@@ -285,7 +285,7 @@ raw_atomic_long_fetch_add_release(long i, atomic_long_t *v)
 static __always_inline long
 raw_atomic_long_fetch_add_relaxed(long i, atomic_long_t *v)
 {
-#ifdef CONFIG_64BIT
+#if BITS_PER_LONG == 64
         return raw_atomic64_fetch_add_relaxed(i, v);
 #else
         return raw_atomic_fetch_add_relaxed(i, v);
@@ -306,7 +306,7 @@ raw_atomic_long_fetch_add_relaxed(long i, atomic_long_t *v)
 static __always_inline void
 raw_atomic_long_sub(long i, atomic_long_t *v)
 {
-#ifdef CONFIG_64BIT
+#if BITS_PER_LONG == 64
         raw_atomic64_sub(i, v);
 #else
         raw_atomic_sub(i, v);
@@ -327,7 +327,7 @@ raw_atomic_long_sub(long i, atomic_long_t *v)
 static __always_inline long
 raw_atomic_long_sub_return(long i, atomic_long_t *v)
 {
-#ifdef CONFIG_64BIT
+#if BITS_PER_LONG == 64
         return raw_atomic64_sub_return(i, v);
 #else
         return raw_atomic_sub_return(i, v);
@@ -348,7 +348,7 @@ raw_atomic_long_sub_return(long i, atomic_long_t *v)
 static __always_inline long
 raw_atomic_long_sub_return_acquire(long i, atomic_long_t *v)
 {
-#ifdef CONFIG_64BIT
+#if BITS_PER_LONG == 64
         return raw_atomic64_sub_return_acquire(i, v);
 #else
         return raw_atomic_sub_return_acquire(i, v);
@@ -369,7 +369,7 @@ raw_atomic_long_sub_return_acquire(long i, atomic_long_t *v)
 static __always_inline long
 raw_atomic_long_sub_return_release(long i, atomic_long_t *v)
 {
-#ifdef CONFIG_64BIT
+#if BITS_PER_LONG == 64
         return raw_atomic64_sub_return_release(i, v);
 #else
         return raw_atomic_sub_return_release(i, v);
@@ -390,7 +390,7 @@ raw_atomic_long_sub_return_release(long i, atomic_long_t *v)
 static __always_inline long
 raw_atomic_long_sub_return_relaxed(long i, atomic_long_t *v)
 {
-#ifdef CONFIG_64BIT
+#if BITS_PER_LONG == 64
         return raw_atomic64_sub_return_relaxed(i, v);
 #else
         return raw_atomic_sub_return_relaxed(i, v);
@@ -411,7 +411,7 @@ raw_atomic_long_sub_return_relaxed(long i, atomic_long_t *v)
 static __always_inline long
 raw_atomic_long_fetch_sub(long i, atomic_long_t *v)
 {
-#ifdef CONFIG_64BIT
+#if BITS_PER_LONG == 64
         return raw_atomic64_fetch_sub(i, v);
 #else
         return raw_atomic_fetch_sub(i, v);
@@ -432,7 +432,7 @@ raw_atomic_long_fetch_sub(long i, atomic_long_t *v)
 static __always_inline long
 raw_atomic_long_fetch_sub_acquire(long i, atomic_long_t *v)
 {
-#ifdef CONFIG_64BIT
+#if BITS_PER_LONG == 64
         return raw_atomic64_fetch_sub_acquire(i, v);
 #else
         return raw_atomic_fetch_sub_acquire(i, v);
@@ -453,7 +453,7 @@ raw_atomic_long_fetch_sub_acquire(long i, atomic_long_t *v)
 static __always_inline long
 raw_atomic_long_fetch_sub_release(long i, atomic_long_t *v)
 {
-#ifdef CONFIG_64BIT
+#if BITS_PER_LONG == 64
         return raw_atomic64_fetch_sub_release(i, v);
 #else
         return raw_atomic_fetch_sub_release(i, v);
@@ -474,7 +474,7 @@ raw_atomic_long_fetch_sub_release(long i, atomic_long_t *v)
 static __always_inline long
 raw_atomic_long_fetch_sub_relaxed(long i, atomic_long_t *v)
 {
-#ifdef CONFIG_64BIT
+#if BITS_PER_LONG == 64
         return raw_atomic64_fetch_sub_relaxed(i, v);
 #else
         return raw_atomic_fetch_sub_relaxed(i, v);
@@ -494,7 +494,7 @@ raw_atomic_long_fetch_sub_relaxed(long i, atomic_long_t *v)
 static __always_inline void
 raw_atomic_long_inc(atomic_long_t *v)
 {
-#ifdef CONFIG_64BIT
+#if BITS_PER_LONG == 64
         raw_atomic64_inc(v);
 #else
         raw_atomic_inc(v);
@@ -514,7 +514,7 @@ raw_atomic_long_inc(atomic_long_t *v)
 static __always_inline long
 raw_atomic_long_inc_return(atomic_long_t *v)
 {
-#ifdef CONFIG_64BIT
+#if BITS_PER_LONG == 64
         return raw_atomic64_inc_return(v);
 #else
         return raw_atomic_inc_return(v);
@@ -534,7 +534,7 @@ raw_atomic_long_inc_return(atomic_long_t *v)
 static __always_inline long
 raw_atomic_long_inc_return_acquire(atomic_long_t *v)
 {
-#ifdef CONFIG_64BIT
+#if BITS_PER_LONG == 64
         return raw_atomic64_inc_return_acquire(v);
 #else
         return raw_atomic_inc_return_acquire(v);
@@ -554,7 +554,7 @@ raw_atomic_long_inc_return_acquire(atomic_long_t *v)
 static __always_inline long
 raw_atomic_long_inc_return_release(atomic_long_t *v)
 {
-#ifdef CONFIG_64BIT
+#if BITS_PER_LONG == 64
return raw_atomic64_inc_return_release(v); #else return raw_atomic_inc_return_release(v); @@ -574,7 +574,7 @@ raw_atomic_long_inc_return_release(atomic_long_t *v) static __always_inline long raw_atomic_long_inc_return_relaxed(atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_inc_return_relaxed(v); #else return raw_atomic_inc_return_relaxed(v); @@ -594,7 +594,7 @@ raw_atomic_long_inc_return_relaxed(atomic_long_t *v) static __always_inline long raw_atomic_long_fetch_inc(atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_fetch_inc(v); #else return raw_atomic_fetch_inc(v); @@ -614,7 +614,7 @@ raw_atomic_long_fetch_inc(atomic_long_t *v) static __always_inline long raw_atomic_long_fetch_inc_acquire(atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_fetch_inc_acquire(v); #else return raw_atomic_fetch_inc_acquire(v); @@ -634,7 +634,7 @@ raw_atomic_long_fetch_inc_acquire(atomic_long_t *v) static __always_inline long raw_atomic_long_fetch_inc_release(atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_fetch_inc_release(v); #else return raw_atomic_fetch_inc_release(v); @@ -654,7 +654,7 @@ raw_atomic_long_fetch_inc_release(atomic_long_t *v) static __always_inline long raw_atomic_long_fetch_inc_relaxed(atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_fetch_inc_relaxed(v); #else return raw_atomic_fetch_inc_relaxed(v); @@ -674,7 +674,7 @@ raw_atomic_long_fetch_inc_relaxed(atomic_long_t *v) static __always_inline void raw_atomic_long_dec(atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 raw_atomic64_dec(v); #else raw_atomic_dec(v); @@ -694,7 +694,7 @@ raw_atomic_long_dec(atomic_long_t *v) static __always_inline long raw_atomic_long_dec_return(atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_dec_return(v); #else return raw_atomic_dec_return(v); @@ -714,7 
+714,7 @@ raw_atomic_long_dec_return(atomic_long_t *v) static __always_inline long raw_atomic_long_dec_return_acquire(atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_dec_return_acquire(v); #else return raw_atomic_dec_return_acquire(v); @@ -734,7 +734,7 @@ raw_atomic_long_dec_return_acquire(atomic_long_t *v) static __always_inline long raw_atomic_long_dec_return_release(atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_dec_return_release(v); #else return raw_atomic_dec_return_release(v); @@ -754,7 +754,7 @@ raw_atomic_long_dec_return_release(atomic_long_t *v) static __always_inline long raw_atomic_long_dec_return_relaxed(atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_dec_return_relaxed(v); #else return raw_atomic_dec_return_relaxed(v); @@ -774,7 +774,7 @@ raw_atomic_long_dec_return_relaxed(atomic_long_t *v) static __always_inline long raw_atomic_long_fetch_dec(atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_fetch_dec(v); #else return raw_atomic_fetch_dec(v); @@ -794,7 +794,7 @@ raw_atomic_long_fetch_dec(atomic_long_t *v) static __always_inline long raw_atomic_long_fetch_dec_acquire(atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_fetch_dec_acquire(v); #else return raw_atomic_fetch_dec_acquire(v); @@ -814,7 +814,7 @@ raw_atomic_long_fetch_dec_acquire(atomic_long_t *v) static __always_inline long raw_atomic_long_fetch_dec_release(atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_fetch_dec_release(v); #else return raw_atomic_fetch_dec_release(v); @@ -834,7 +834,7 @@ raw_atomic_long_fetch_dec_release(atomic_long_t *v) static __always_inline long raw_atomic_long_fetch_dec_relaxed(atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_fetch_dec_relaxed(v); #else return raw_atomic_fetch_dec_relaxed(v); @@ -855,7 +855,7 @@ 
raw_atomic_long_fetch_dec_relaxed(atomic_long_t *v) static __always_inline void raw_atomic_long_and(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 raw_atomic64_and(i, v); #else raw_atomic_and(i, v); @@ -876,7 +876,7 @@ raw_atomic_long_and(long i, atomic_long_t *v) static __always_inline long raw_atomic_long_fetch_and(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_fetch_and(i, v); #else return raw_atomic_fetch_and(i, v); @@ -897,7 +897,7 @@ raw_atomic_long_fetch_and(long i, atomic_long_t *v) static __always_inline long raw_atomic_long_fetch_and_acquire(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_fetch_and_acquire(i, v); #else return raw_atomic_fetch_and_acquire(i, v); @@ -918,7 +918,7 @@ raw_atomic_long_fetch_and_acquire(long i, atomic_long_t *v) static __always_inline long raw_atomic_long_fetch_and_release(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_fetch_and_release(i, v); #else return raw_atomic_fetch_and_release(i, v); @@ -939,7 +939,7 @@ raw_atomic_long_fetch_and_release(long i, atomic_long_t *v) static __always_inline long raw_atomic_long_fetch_and_relaxed(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_fetch_and_relaxed(i, v); #else return raw_atomic_fetch_and_relaxed(i, v); @@ -960,7 +960,7 @@ raw_atomic_long_fetch_and_relaxed(long i, atomic_long_t *v) static __always_inline void raw_atomic_long_andnot(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 raw_atomic64_andnot(i, v); #else raw_atomic_andnot(i, v); @@ -981,7 +981,7 @@ raw_atomic_long_andnot(long i, atomic_long_t *v) static __always_inline long raw_atomic_long_fetch_andnot(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_fetch_andnot(i, v); #else return raw_atomic_fetch_andnot(i, v); @@ -1002,7 +1002,7 @@ 
raw_atomic_long_fetch_andnot(long i, atomic_long_t *v) static __always_inline long raw_atomic_long_fetch_andnot_acquire(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_fetch_andnot_acquire(i, v); #else return raw_atomic_fetch_andnot_acquire(i, v); @@ -1023,7 +1023,7 @@ raw_atomic_long_fetch_andnot_acquire(long i, atomic_long_t *v) static __always_inline long raw_atomic_long_fetch_andnot_release(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_fetch_andnot_release(i, v); #else return raw_atomic_fetch_andnot_release(i, v); @@ -1044,7 +1044,7 @@ raw_atomic_long_fetch_andnot_release(long i, atomic_long_t *v) static __always_inline long raw_atomic_long_fetch_andnot_relaxed(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_fetch_andnot_relaxed(i, v); #else return raw_atomic_fetch_andnot_relaxed(i, v); @@ -1065,7 +1065,7 @@ raw_atomic_long_fetch_andnot_relaxed(long i, atomic_long_t *v) static __always_inline void raw_atomic_long_or(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 raw_atomic64_or(i, v); #else raw_atomic_or(i, v); @@ -1086,7 +1086,7 @@ raw_atomic_long_or(long i, atomic_long_t *v) static __always_inline long raw_atomic_long_fetch_or(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_fetch_or(i, v); #else return raw_atomic_fetch_or(i, v); @@ -1107,7 +1107,7 @@ raw_atomic_long_fetch_or(long i, atomic_long_t *v) static __always_inline long raw_atomic_long_fetch_or_acquire(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_fetch_or_acquire(i, v); #else return raw_atomic_fetch_or_acquire(i, v); @@ -1128,7 +1128,7 @@ raw_atomic_long_fetch_or_acquire(long i, atomic_long_t *v) static __always_inline long raw_atomic_long_fetch_or_release(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return 
raw_atomic64_fetch_or_release(i, v); #else return raw_atomic_fetch_or_release(i, v); @@ -1149,7 +1149,7 @@ raw_atomic_long_fetch_or_release(long i, atomic_long_t *v) static __always_inline long raw_atomic_long_fetch_or_relaxed(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_fetch_or_relaxed(i, v); #else return raw_atomic_fetch_or_relaxed(i, v); @@ -1170,7 +1170,7 @@ raw_atomic_long_fetch_or_relaxed(long i, atomic_long_t *v) static __always_inline void raw_atomic_long_xor(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 raw_atomic64_xor(i, v); #else raw_atomic_xor(i, v); @@ -1191,7 +1191,7 @@ raw_atomic_long_xor(long i, atomic_long_t *v) static __always_inline long raw_atomic_long_fetch_xor(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_fetch_xor(i, v); #else return raw_atomic_fetch_xor(i, v); @@ -1212,7 +1212,7 @@ raw_atomic_long_fetch_xor(long i, atomic_long_t *v) static __always_inline long raw_atomic_long_fetch_xor_acquire(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_fetch_xor_acquire(i, v); #else return raw_atomic_fetch_xor_acquire(i, v); @@ -1233,7 +1233,7 @@ raw_atomic_long_fetch_xor_acquire(long i, atomic_long_t *v) static __always_inline long raw_atomic_long_fetch_xor_release(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_fetch_xor_release(i, v); #else return raw_atomic_fetch_xor_release(i, v); @@ -1254,7 +1254,7 @@ raw_atomic_long_fetch_xor_release(long i, atomic_long_t *v) static __always_inline long raw_atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_fetch_xor_relaxed(i, v); #else return raw_atomic_fetch_xor_relaxed(i, v); @@ -1275,7 +1275,7 @@ raw_atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v) static __always_inline long raw_atomic_long_xchg(atomic_long_t *v, 
long new) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_xchg(v, new); #else return raw_atomic_xchg(v, new); @@ -1296,7 +1296,7 @@ raw_atomic_long_xchg(atomic_long_t *v, long new) static __always_inline long raw_atomic_long_xchg_acquire(atomic_long_t *v, long new) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_xchg_acquire(v, new); #else return raw_atomic_xchg_acquire(v, new); @@ -1317,7 +1317,7 @@ raw_atomic_long_xchg_acquire(atomic_long_t *v, long new) static __always_inline long raw_atomic_long_xchg_release(atomic_long_t *v, long new) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_xchg_release(v, new); #else return raw_atomic_xchg_release(v, new); @@ -1338,7 +1338,7 @@ raw_atomic_long_xchg_release(atomic_long_t *v, long new) static __always_inline long raw_atomic_long_xchg_relaxed(atomic_long_t *v, long new) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_xchg_relaxed(v, new); #else return raw_atomic_xchg_relaxed(v, new); @@ -1361,7 +1361,7 @@ raw_atomic_long_xchg_relaxed(atomic_long_t *v, long new) static __always_inline long raw_atomic_long_cmpxchg(atomic_long_t *v, long old, long new) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_cmpxchg(v, old, new); #else return raw_atomic_cmpxchg(v, old, new); @@ -1384,7 +1384,7 @@ raw_atomic_long_cmpxchg(atomic_long_t *v, long old, long new) static __always_inline long raw_atomic_long_cmpxchg_acquire(atomic_long_t *v, long old, long new) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_cmpxchg_acquire(v, old, new); #else return raw_atomic_cmpxchg_acquire(v, old, new); @@ -1407,7 +1407,7 @@ raw_atomic_long_cmpxchg_acquire(atomic_long_t *v, long old, long new) static __always_inline long raw_atomic_long_cmpxchg_release(atomic_long_t *v, long old, long new) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_cmpxchg_release(v, old, new); #else return raw_atomic_cmpxchg_release(v, old, 
new); @@ -1430,7 +1430,7 @@ raw_atomic_long_cmpxchg_release(atomic_long_t *v, long old, long new) static __always_inline long raw_atomic_long_cmpxchg_relaxed(atomic_long_t *v, long old, long new) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_cmpxchg_relaxed(v, old, new); #else return raw_atomic_cmpxchg_relaxed(v, old, new); @@ -1454,7 +1454,7 @@ raw_atomic_long_cmpxchg_relaxed(atomic_long_t *v, long old, long new) static __always_inline bool raw_atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_try_cmpxchg(v, (s64 *)old, new); #else return raw_atomic_try_cmpxchg(v, (int *)old, new); @@ -1478,7 +1478,7 @@ raw_atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new) static __always_inline bool raw_atomic_long_try_cmpxchg_acquire(atomic_long_t *v, long *old, long new) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_try_cmpxchg_acquire(v, (s64 *)old, new); #else return raw_atomic_try_cmpxchg_acquire(v, (int *)old, new); @@ -1502,7 +1502,7 @@ raw_atomic_long_try_cmpxchg_acquire(atomic_long_t *v, long *old, long new) static __always_inline bool raw_atomic_long_try_cmpxchg_release(atomic_long_t *v, long *old, long new) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_try_cmpxchg_release(v, (s64 *)old, new); #else return raw_atomic_try_cmpxchg_release(v, (int *)old, new); @@ -1526,7 +1526,7 @@ raw_atomic_long_try_cmpxchg_release(atomic_long_t *v, long *old, long new) static __always_inline bool raw_atomic_long_try_cmpxchg_relaxed(atomic_long_t *v, long *old, long new) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_try_cmpxchg_relaxed(v, (s64 *)old, new); #else return raw_atomic_try_cmpxchg_relaxed(v, (int *)old, new); @@ -1547,7 +1547,7 @@ raw_atomic_long_try_cmpxchg_relaxed(atomic_long_t *v, long *old, long new) static __always_inline bool raw_atomic_long_sub_and_test(long i, atomic_long_t *v) { -#ifdef 
CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_sub_and_test(i, v); #else return raw_atomic_sub_and_test(i, v); @@ -1567,7 +1567,7 @@ raw_atomic_long_sub_and_test(long i, atomic_long_t *v) static __always_inline bool raw_atomic_long_dec_and_test(atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_dec_and_test(v); #else return raw_atomic_dec_and_test(v); @@ -1587,7 +1587,7 @@ raw_atomic_long_dec_and_test(atomic_long_t *v) static __always_inline bool raw_atomic_long_inc_and_test(atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_inc_and_test(v); #else return raw_atomic_inc_and_test(v); @@ -1608,7 +1608,7 @@ raw_atomic_long_inc_and_test(atomic_long_t *v) static __always_inline bool raw_atomic_long_add_negative(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_add_negative(i, v); #else return raw_atomic_add_negative(i, v); @@ -1629,7 +1629,7 @@ raw_atomic_long_add_negative(long i, atomic_long_t *v) static __always_inline bool raw_atomic_long_add_negative_acquire(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_add_negative_acquire(i, v); #else return raw_atomic_add_negative_acquire(i, v); @@ -1650,7 +1650,7 @@ raw_atomic_long_add_negative_acquire(long i, atomic_long_t *v) static __always_inline bool raw_atomic_long_add_negative_release(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_add_negative_release(i, v); #else return raw_atomic_add_negative_release(i, v); @@ -1671,7 +1671,7 @@ raw_atomic_long_add_negative_release(long i, atomic_long_t *v) static __always_inline bool raw_atomic_long_add_negative_relaxed(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_add_negative_relaxed(i, v); #else return raw_atomic_add_negative_relaxed(i, v); @@ -1694,7 +1694,7 @@ raw_atomic_long_add_negative_relaxed(long i, atomic_long_t 
*v) static __always_inline long raw_atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_fetch_add_unless(v, a, u); #else return raw_atomic_fetch_add_unless(v, a, u); @@ -1717,7 +1717,7 @@ raw_atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u) static __always_inline bool raw_atomic_long_add_unless(atomic_long_t *v, long a, long u) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_add_unless(v, a, u); #else return raw_atomic_add_unless(v, a, u); @@ -1738,7 +1738,7 @@ raw_atomic_long_add_unless(atomic_long_t *v, long a, long u) static __always_inline bool raw_atomic_long_inc_not_zero(atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_inc_not_zero(v); #else return raw_atomic_inc_not_zero(v); @@ -1759,7 +1759,7 @@ raw_atomic_long_inc_not_zero(atomic_long_t *v) static __always_inline bool raw_atomic_long_inc_unless_negative(atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_inc_unless_negative(v); #else return raw_atomic_inc_unless_negative(v); @@ -1780,7 +1780,7 @@ raw_atomic_long_inc_unless_negative(atomic_long_t *v) static __always_inline bool raw_atomic_long_dec_unless_positive(atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_dec_unless_positive(v); #else return raw_atomic_dec_unless_positive(v); @@ -1801,7 +1801,7 @@ raw_atomic_long_dec_unless_positive(atomic_long_t *v) static __always_inline long raw_atomic_long_dec_if_positive(atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_dec_if_positive(v); #else return raw_atomic_dec_if_positive(v); @@ -1809,4 +1809,4 @@ raw_atomic_long_dec_if_positive(atomic_long_t *v) } #endif /* _LINUX_ATOMIC_LONG_H */ -// eadf183c3600b8b92b91839dd3be6bcc560c752d +// 1b27315f1248fc8d43401372db7dd5895889c5be diff --git a/scripts/atomic/gen-atomic-long.sh b/scripts/atomic/gen-atomic-long.sh 
index 9826be3ba986..7667305381fc 100755
--- a/scripts/atomic/gen-atomic-long.sh
+++ b/scripts/atomic/gen-atomic-long.sh
@@ -55,7 +55,7 @@ cat <
 #include

-#ifdef CONFIG_64BIT
+#if BITS_PER_LONG == 64
 typedef atomic64_t atomic_long_t;
 #define ATOMIC_LONG_INIT(i) ATOMIC64_INIT(i)
 #define atomic_long_cond_read_acquire atomic64_cond_read_acquire

From patchwork Tue Mar 25 12:16:12 2025
X-Patchwork-Submitter: Guo Ren
X-Patchwork-Id: 876177
From: guoren@kernel.org
Subject: [RFC PATCH V3 31/43] rv64ilp32_abi: maple_tree: Use BITS_PER_LONG instead of CONFIG_64BIT
Date: Tue, 25 Mar 2025 08:16:12 -0400
Message-Id: <20250325121624.523258-32-guoren@kernel.org>
In-Reply-To: <20250325121624.523258-1-guoren@kernel.org>
References: <20250325121624.523258-1-guoren@kernel.org>

From: "Guo Ren (Alibaba DAMO Academy)"

The Maple tree algorithm uses the unsigned long type for each element.
The number of slots per node is therefore based on BITS_PER_LONG, which
differs from CONFIG_64BIT under the RV64ILP32 ABI, so use BITS_PER_LONG
instead of CONFIG_64BIT.

Signed-off-by: Guo Ren (Alibaba DAMO Academy)
---
 include/linux/maple_tree.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/linux/maple_tree.h b/include/linux/maple_tree.h
index cbbcd18d4186..ff6265b6468b 100644
--- a/include/linux/maple_tree.h
+++ b/include/linux/maple_tree.h
@@ -24,7 +24,7 @@
  *
  * Nodes in the tree point to their parent unless bit 0 is set.
 */
-#if defined(CONFIG_64BIT) || defined(BUILD_VDSO32_64)
+#if (BITS_PER_LONG == 64) || defined(BUILD_VDSO32_64)
 /* 64bit sizes */
 #define MAPLE_NODE_SLOTS 31 /* 256 bytes including ->parent */
 #define MAPLE_RANGE64_SLOTS 16 /* 256 bytes */

From patchwork Tue Mar 25 12:16:14 2025
X-Patchwork-Submitter: Guo Ren
X-Patchwork-Id: 876176
From: guoren@kernel.org
Subject: [RFC PATCH V3 33/43] rv64ilp32_abi: mm/auxvec: Adapt mm->saved_auxv[] to Elf64
Date: Tue, 25 Mar 2025 08:16:14 -0400
Message-Id: <20250325121624.523258-34-guoren@kernel.org>
In-Reply-To: <20250325121624.523258-1-guoren@kernel.org>
References: <20250325121624.523258-1-guoren@kernel.org>

From: "Guo Ren (Alibaba DAMO Academy)"

Unable to handle kernel paging request at virtual address 60723de0
Oops [#1]
Modules linked in:
CPU: 0 UID: 0 PID: 1 Comm: init Not tainted 6.13.0-rc4-00031-g01dc3ca797b3-dirty #161
Hardware name: riscv-virtio,qemu (DT)
epc : percpu_counter_add_batch+0x38/0xc4
 ra : filemap_map_pages+0x3ec/0x54c
epc : ffffffffbc4ea02e ra : ffffffffbc1722e4 sp : ffffffffc1c4fc60
 gp : ffffffffbd6d3918 tp : ffffffffc1c50000 t0 : 0000000000000000
 t1 : 000000003fffefff t2 : 0000000000000000 s0 : ffffffffc1c4fca0
 s1 : 0000000000000022 a0 : ffffffffc25c8250 a1 : 0000000000000003
 a2 : 0000000000000020 a3 : 000000003fffefff a4 : 000000000b1c2000
 a5 : 0000000060723de0 a6 : ffffffffbffff000 a7 : 000000003fffffff
 s2 : ffffffffc25c8250 s3 : ffffffffc246e240 s4 : ffffffffc2138240
 s5 : ffffffffbd70c4d0 s6 : 0000000000000003 s7 : 0000000000000000
 s8 : ffffffff9a02d780 s9 : 0000000000000100 s10: ffffffffc1c4fda8
 s11: 0000000000000003 t3 : 0000000000000000 t4 : 00000000000004f7
 t5 : 0000000000000000 t6 : 0000000000000001
status: 0000000200000100 badaddr: 0000000060723de0 cause: 000000000000000d
[] percpu_counter_add_batch+0x38/0xc4
[] filemap_map_pages+0x3ec/0x54c
[] handle_mm_fault+0xb6c/0xe9c
[] handle_page_fault+0xd0/0x418
[] do_page_fault+0x20/0x3a
[] _new_vmalloc_restore_context_a0+0xb0/0xbc
Code: 8a93 4baa 511c 171b 0027 873b 00ea 4318 2481 9fb9 (aa03) 0007

Signed-off-by: Guo Ren (Alibaba DAMO Academy)
---
 include/linux/mm_types.h | 4 ++++
 kernel/sys.c             | 8 ++++++++
 2 files changed, 12 insertions(+)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index da3ba1a79ad5..0d436b0217fd 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -962,7 +962,11 @@ struct mm_struct {
 		unsigned long start_brk, brk, start_stack;
 		unsigned long arg_start, arg_end, env_start, env_end;
+#ifdef CONFIG_64BIT
+		unsigned long long saved_auxv[AT_VECTOR_SIZE]; /* for /proc/PID/auxv */
+#else
 		unsigned long saved_auxv[AT_VECTOR_SIZE]; /* for /proc/PID/auxv */
+#endif
 		struct percpu_counter rss_stat[NR_MM_COUNTERS];
diff --git a/kernel/sys.c b/kernel/sys.c
index cb366ff8703a..81c0d94ff50d 100644
--- a/kernel/sys.c
+++ b/kernel/sys.c
@@ -2008,7 +2008,11 @@ static int validate_prctl_map_addr(struct prctl_mm_map *prctl_map)
 static int prctl_set_mm_map(int opt, const void __user *addr, unsigned long data_size)
 {
 	struct prctl_mm_map prctl_map = { .exe_fd = (u32)-1, };
+#ifdef CONFIG_64BIT
+	unsigned long long user_auxv[AT_VECTOR_SIZE];
+#else
 	unsigned long user_auxv[AT_VECTOR_SIZE];
+#endif
 	struct mm_struct *mm = current->mm;
 	int error;
@@ -2122,7 +2126,11 @@ static int prctl_set_auxv(struct mm_struct *mm, unsigned long addr,
 	 * up to the caller to provide sane values here, otherwise userspace
 	 * tools which use this vector might be unhappy.
 	 */
+#ifdef CONFIG_64BIT
+	unsigned long long user_auxv[AT_VECTOR_SIZE] = {};
+#else
 	unsigned long user_auxv[AT_VECTOR_SIZE] = {};
+#endif
 	if (len > sizeof(user_auxv))
 		return -EINVAL;

From patchwork Tue Mar 25 12:16:16 2025
X-Patchwork-Submitter: Guo Ren
X-Patchwork-Id: 876175
Subject: [RFC PATCH V3 35/43] rv64ilp32_abi: net: Use BITS_PER_LONG in struct dst_entry
Date: Tue, 25 Mar 2025 08:16:16 -0400
Message-Id: <20250325121624.523258-36-guoren@kernel.org>
From: "Guo Ren (Alibaba DAMO Academy)"

The rv64ilp32 ABI depends on CONFIG_64BIT but uses the smaller ILP32
data types. To align the layout of struct dst_entry with ILP32
requirements, replace its CONFIG_64BIT tests with BITS_PER_LONG checks.

Signed-off-by: Guo Ren (Alibaba DAMO Academy)
---
 include/net/dst.h | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/include/net/dst.h b/include/net/dst.h
index 78c78cdce0e9..af1c74c4836e 100644
--- a/include/net/dst.h
+++ b/include/net/dst.h
@@ -65,7 +65,7 @@ struct dst_entry {
	 * __rcuref wants to be on a different cache line from
	 * input/output/ops or performance tanks badly
	 */
-#ifdef CONFIG_64BIT
+#if BITS_PER_LONG == 64
	rcuref_t		__rcuref;	/* 64-bit offset 64 */
 #endif
	int			__use;
@@ -74,7 +74,7 @@ struct dst_entry {
	short			error;
	short			__pad;
	__u32			tclassid;
-#ifndef CONFIG_64BIT
+#if BITS_PER_LONG == 32
	struct lwtunnel_state	*lwtstate;
	rcuref_t		__rcuref;	/* 32-bit offset 64 */
 #endif
@@ -89,7 +89,7 @@ struct dst_entry {
	 */
	struct list_head	rt_uncached;
	struct uncached_list	*rt_uncached_list;
-#ifdef CONFIG_64BIT
+#if BITS_PER_LONG == 64
	struct lwtunnel_state	*lwtstate;
 #endif
 };
Subject: [RFC PATCH V3 37/43] rv64ilp32_abi: random: Adapt fast_pool struct
Date: Tue, 25 Mar 2025 08:16:18 -0400
Message-Id: <20250325121624.523258-38-guoren@kernel.org>
From: "Guo Ren (Alibaba DAMO Academy)"

RV64ILP32 ABI systems have BITS_PER_LONG set to 32, matching
sizeof(compat_ulong_t). Adjust the fast_pool code to use u64 on
CONFIG_64BIT kernels, so the SipHash state stays 64 bits wide even
though unsigned long is only 32 bits.

Signed-off-by: Guo Ren (Alibaba DAMO Academy)
---
 drivers/char/random.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 2581186fa61b..0bfbe02ee255 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -1015,7 +1015,11 @@ EXPORT_SYMBOL_GPL(unregister_random_vmfork_notifier);
 #endif
 struct fast_pool {
+#ifdef CONFIG_64BIT
+	u64 pool[4];
+#else
	unsigned long pool[4];
+#endif
	unsigned long last;
	unsigned int count;
	struct timer_list mix;
@@ -1040,7 +1044,11 @@ static DEFINE_PER_CPU(struct fast_pool, irq_randomness) = {
 * and therefore this has no security on its own. s represents the
 * four-word SipHash state, while v represents a two-word input.
 */
+#ifdef CONFIG_64BIT
+static void fast_mix(u64 s[4], u64 v1, u64 v2)
+#else
 static void fast_mix(unsigned long s[4], unsigned long v1, unsigned long v2)
+#endif
 {
	s[3] ^= v1;
	FASTMIX_PERM(s[0], s[1], s[2], s[3]);
Subject: [RFC PATCH V3 39/43] rv64ilp32_abi: sysinfo: Adapt sysinfo structure to lp64 uapi
Date: Tue, 25 Mar 2025 08:16:20 -0400
Message-Id: <20250325121624.523258-40-guoren@kernel.org>
From: "Guo Ren (Alibaba DAMO Academy)"

The RISC-V 64ilp32 ABI leverages the LP64 uapi and serves LP64 ABI
userspace directly, so the sysinfo struct's unsigned long scalars and
arrays must be widened to u64.

Signed-off-by: Guo Ren (Alibaba DAMO Academy)
---
 fs/proc/loadavg.c             | 10 +++++++---
 include/linux/sched/loadavg.h |  4 ++++
 include/uapi/linux/sysinfo.h  | 20 ++++++++++++++++++++
 kernel/sched/loadavg.c        |  4 ++++
 4 files changed, 35 insertions(+), 3 deletions(-)

diff --git a/fs/proc/loadavg.c b/fs/proc/loadavg.c
index 817981e57223..643e06de3446 100644
--- a/fs/proc/loadavg.c
+++ b/fs/proc/loadavg.c
@@ -13,14 +13,18 @@
 static int loadavg_proc_show(struct seq_file *m, void *v)
 {
+#if defined(CONFIG_64BIT) && (BITS_PER_LONG == 32)
+	unsigned long long avnrun[3];
+#else
	unsigned long avnrun[3];
+#endif
	get_avenrun(avnrun, FIXED_1/200, 0);
	seq_printf(m, "%lu.%02lu %lu.%02lu %lu.%02lu %u/%d %d\n",
-		LOAD_INT(avnrun[0]), LOAD_FRAC(avnrun[0]),
-		LOAD_INT(avnrun[1]), LOAD_FRAC(avnrun[1]),
-		LOAD_INT(avnrun[2]), LOAD_FRAC(avnrun[2]),
+		LOAD_INT((ulong)avnrun[0]), LOAD_FRAC((ulong)avnrun[0]),
+		LOAD_INT((ulong)avnrun[1]), LOAD_FRAC((ulong)avnrun[1]),
+		LOAD_INT((ulong)avnrun[2]), LOAD_FRAC((ulong)avnrun[2]),
		nr_running(), nr_threads,
		idr_get_cursor(&task_active_pid_ns(current)->idr) - 1);
	return 0;
diff --git a/include/linux/sched/loadavg.h b/include/linux/sched/loadavg.h
index 83ec54b65e79..8f2d6a827ee9 100644
--- a/include/linux/sched/loadavg.h
+++ b/include/linux/sched/loadavg.h
@@ -13,7 +13,11 @@
 * 11 bit fractions.
 */
 extern unsigned long avenrun[];		/* Load averages */
+#if defined(CONFIG_64BIT) && (BITS_PER_LONG == 32)
+extern void get_avenrun(unsigned long long *loads, unsigned long offset, int shift);
+#else
 extern void get_avenrun(unsigned long *loads, unsigned long offset, int shift);
+#endif
 #define FSHIFT		11		/* nr of bits of precision */
 #define FIXED_1		(1<<FSHIFT)	/* 1.0 as fixed-point */
diff --git a/include/uapi/linux/sysinfo.h b/include/uapi/linux/sysinfo.h
--- a/include/uapi/linux/sysinfo.h
+++ b/include/uapi/linux/sysinfo.h
 #define SI_LOAD_SHIFT	16
+
+#if (__riscv_xlen == 64) && (__BITS_PER_LONG == 32)
+struct sysinfo {
+	__s64 uptime;			/* Seconds since boot */
+	__u64 loads[3];			/* 1, 5, and 15 minute load averages */
+	__u64 totalram;			/* Total usable main memory size */
+	__u64 freeram;			/* Available memory size */
+	__u64 sharedram;		/* Amount of shared memory */
+	__u64 bufferram;		/* Memory used by buffers */
+	__u64 totalswap;		/* Total swap space size */
+	__u64 freeswap;			/* swap space still available */
+	__u16 procs;			/* Number of current processes */
+	__u16 pad;			/* Explicit padding for m68k */
+	__u64 totalhigh;		/* Total high memory size */
+	__u64 freehigh;			/* Available high memory size */
+	__u32 mem_unit;			/* Memory unit size in bytes */
+	char _f[20-2*sizeof(__u64)-sizeof(__u32)];	/* Padding: libc5 uses this.. */
+};
+#else
 struct sysinfo {
	__kernel_long_t uptime;		/* Seconds since boot */
	__kernel_ulong_t loads[3];	/* 1, 5, and 15 minute load averages */
@@ -21,5 +40,6 @@ struct sysinfo {
	__u32 mem_unit;			/* Memory unit size in bytes */
	char _f[20-2*sizeof(__kernel_ulong_t)-sizeof(__u32)];	/* Padding: libc5 uses this.. */
 };
+#endif
 #endif /* _LINUX_SYSINFO_H */
diff --git a/kernel/sched/loadavg.c b/kernel/sched/loadavg.c
index c48900b856a2..f1f5abc64dea 100644
--- a/kernel/sched/loadavg.c
+++ b/kernel/sched/loadavg.c
@@ -68,7 +68,11 @@ EXPORT_SYMBOL(avenrun); /* should be removed */
 *
 * These values are estimates at best, so no need for locking.
 */
+#if defined(CONFIG_64BIT) && (BITS_PER_LONG == 32)
+void get_avenrun(unsigned long long *loads, unsigned long offset, int shift)
+#else
 void get_avenrun(unsigned long *loads, unsigned long offset, int shift)
+#endif
 {
	loads[0] = (avenrun[0] + offset) << shift;
	loads[1] = (avenrun[1] + offset) << shift;
Subject: [RFC PATCH V3 41/43] rv64ilp32_abi: tty: Adapt ptr_to_compat
Date: Tue, 25 Mar 2025 08:16:22 -0400
Message-Id: <20250325121624.523258-42-guoren@kernel.org>
From: "Guo Ren (Alibaba DAMO Academy)"

The RV64ILP32 ABI is based on the 64-bit ISA, but BITS_PER_LONG is 32,
so unsigned long is the same size as compat_ulong_t and the
"(unsigned long)v.iomem_base >> 32 ? 0xfffffff : ..." overflow check
is unnecessary.

Signed-off-by: Guo Ren (Alibaba DAMO Academy)
---
 drivers/tty/tty_io.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/tty/tty_io.c b/drivers/tty/tty_io.c
index 449dbd216460..75e256e879d0 100644
--- a/drivers/tty/tty_io.c
+++ b/drivers/tty/tty_io.c
@@ -2873,8 +2873,12 @@ static int compat_tty_tiocgserial(struct tty_struct *tty,
	err = tty->ops->get_serial(tty, &v);
	if (!err) {
		memcpy(&v32, &v, offsetof(struct serial_struct32, iomem_base));
+#if BITS_PER_LONG == 64
		v32.iomem_base = (unsigned long)v.iomem_base >> 32 ?
			0xfffffff : ptr_to_compat(v.iomem_base);
+#else
+		v32.iomem_base = ptr_to_compat(v.iomem_base);
+#endif
		v32.iomem_reg_shift = v.iomem_reg_shift;
		v32.port_high = v.port_high;
		if (copy_to_user(ss, &v32, sizeof(v32)))
Subject: [RFC PATCH V3 42/43] rv64ilp32_abi: memfd: Use vm_flags_t
Date: Tue, 25 Mar 2025 08:16:23 -0400
Message-Id: <20250325121624.523258-43-guoren@kernel.org>
From: "Guo Ren (Alibaba DAMO Academy)"

The RV64ILP32 ABI Linux kernel is built with CONFIG_64BIT and uses
unsigned long long as vm_flags_t, so passing the flags as plain
unsigned long would break the rv64ilp32 ABI. The vm_flags_t typedef
already exists, hence its usage is preferred even where it is not
strictly essential.

Signed-off-by: Guo Ren (Alibaba DAMO Academy)
---
 include/linux/memfd.h | 4 ++--
 mm/memfd.c            | 8 ++++----
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/include/linux/memfd.h b/include/linux/memfd.h
index 246daadbfde8..6f606d9573c3 100644
--- a/include/linux/memfd.h
+++ b/include/linux/memfd.h
@@ -14,7 +14,7 @@ struct folio *memfd_alloc_folio(struct file *memfd, pgoff_t idx);
 * We also update VMA flags if appropriate by manipulating the VMA flags pointed
 * to by vm_flags_ptr.
 */
-int memfd_check_seals_mmap(struct file *file, unsigned long *vm_flags_ptr);
+int memfd_check_seals_mmap(struct file *file, vm_flags_t *vm_flags_ptr);
 #else
 static inline long memfd_fcntl(struct file *f, unsigned int c, unsigned int a)
 {
@@ -25,7 +25,7 @@ static inline struct folio *memfd_alloc_folio(struct file *memfd, pgoff_t idx)
	return ERR_PTR(-EINVAL);
 }
 static inline int memfd_check_seals_mmap(struct file *file,
-					 unsigned long *vm_flags_ptr)
+					 vm_flags_t *vm_flags_ptr)
 {
	return 0;
 }
diff --git a/mm/memfd.c b/mm/memfd.c
index 37f7be57c2f5..50dad90ffedc 100644
--- a/mm/memfd.c
+++ b/mm/memfd.c
@@ -332,10 +332,10 @@ static inline bool is_write_sealed(unsigned int seals)
	return seals & (F_SEAL_WRITE | F_SEAL_FUTURE_WRITE);
 }
-static int check_write_seal(unsigned long *vm_flags_ptr)
+static int check_write_seal(vm_flags_t *vm_flags_ptr)
 {
-	unsigned long vm_flags = *vm_flags_ptr;
-	unsigned long mask = vm_flags & (VM_SHARED | VM_WRITE);
+	vm_flags_t vm_flags = *vm_flags_ptr;
+	vm_flags_t mask = vm_flags & (VM_SHARED | VM_WRITE);
	/* If a private matting then writability is irrelevant. */
	if (!(mask & VM_SHARED))
@@ -357,7 +357,7 @@ static int check_write_seal(unsigned long *vm_flags_ptr)
	return 0;
 }
-int memfd_check_seals_mmap(struct file *file, unsigned long *vm_flags_ptr)
+int memfd_check_seals_mmap(struct file *file, vm_flags_t *vm_flags_ptr)
 {
	int err = 0;
	unsigned int *seals_ptr = memfd_file_seals_ptr(file);