From patchwork Fri Oct 9 19:49:36 2020
X-Patchwork-Submitter: Ira Weiny
X-Patchwork-Id: 287213
From: ira.weiny@intel.com
To: Andrew Morton, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
 Andy Lutomirski, Peter Zijlstra
Cc: Ira Weiny, Dave Hansen, Dan Williams, Fenghua Yu, x86@kernel.org,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-nvdimm@lists.01.org
Subject: [PATCH RFC PKS/PMEM 01/58] x86/pks: Add a global pkrs option
Date: Fri, 9 Oct 2020 12:49:36 -0700
Message-Id: <20201009195033.3208459-2-ira.weiny@intel.com>
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>

From: Ira Weiny

Some users, such as kmap(), sometimes require PKS protections to be
global.  However, updating all CPUs, and worse yet all threads, is
expensive.

Introduce a global PKRS state which is checked at critical times to
allow the state to enable access when global PKS is required.

To accomplish this with minimal locking, the code is carefully designed
around the following key concepts.

1) Borrow the idea of lazy TLB invalidations from the fault handler
   code.  When enabling PKS access we anticipate that other threads are
   not yet running.  However, if they are, we catch the fault and clean
   up the MSR value.

2) When disabling PKS access we force the MSR values across all CPUs.
   This is required to block access as soon as possible.[1]  However,
   it is key that we never attempt to update the per-task PKS values
   directly.  See the next point.

3) Per-task PKS values never get updated with global PKS values.  This
   is key to avoiding locking requirements and a nearly intractable
   problem of trying to update every task in the system.  Here are a
   few key points.

   3a) The MSR value can be updated with the global PKS value if that
       global value happened to change while the task was running.

   3b) If the task was sleeping while the global PKS was updated, then
       the global value is added in when tasks are scheduled.

   3c) If the global PKS value restricts access, the MSR is updated as
       soon as possible[1] and the thread value is not updated, which
       ensures the thread does not retain the elevated privileges after
       a context switch.

4) Follow-on patches must be careful to preserve the separation of the
   thread PKRS value and the MSR value.

5) Access Disable on any individual pkey is turned into (Access Disable
   | Write Disable) to facilitate faster integration of the global
   value into the thread-local MSR through a simple '&' operation.
   Doing otherwise would result in complicated individual bit
   manipulation for each pkey.

[1] There is a race condition which is ignored for performance reasons.
    It potentially allows access on a thread until the end of its time
    slice.  After the context switch the global value will be restored.
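For illustration, here is a minimal sketch of how a hypothetical
in-kernel user might drive this API; the "my_driver" name and its hooks
are assumptions for this example, not code from the series:

	static int my_pkey;

	static int my_driver_init(void)
	{
		my_pkey = pks_key_alloc("my_driver");
		if (my_pkey < 0)
			return my_pkey;

		/* Default all threads, on every CPU, to no access. */
		pks_mknoaccess(my_pkey, true);
		return 0;
	}

	static void my_driver_read_window(void)
	{
		/*
		 * global == true: open the domain for read on all
		 * threads at the cost of a global PKRS update; the
		 * documentation below says to use this sparingly.
		 */
		pks_mkread(my_pkey, true);
		/* ... read through PAGE_KERNEL_PKEY(my_pkey) mappings ... */
		pks_mknoaccess(my_pkey, true);
	}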
Signed-off-by: Ira Weiny
---
 Documentation/core-api/protection-keys.rst |  11 +-
 arch/x86/entry/common.c                    |   7 +
 arch/x86/include/asm/pkeys.h               |   6 +-
 arch/x86/include/asm/pkeys_common.h        |   8 +-
 arch/x86/kernel/process.c                  |  74 +++++++-
 arch/x86/mm/fault.c                        | 189 ++++++++++++++++-----
 arch/x86/mm/pkeys.c                        |  88 ++++++++--
 include/linux/pkeys.h                      |   6 +-
 lib/pks/pks_test.c                         |  16 +-
 9 files changed, 329 insertions(+), 76 deletions(-)

diff --git a/Documentation/core-api/protection-keys.rst b/Documentation/core-api/protection-keys.rst
index c60366921d60..9e8a98653e13 100644
--- a/Documentation/core-api/protection-keys.rst
+++ b/Documentation/core-api/protection-keys.rst
@@ -121,9 +121,9 @@ mapping adds that mapping to the protection domain.
         int pks_key_alloc(const char * const pkey_user);
         #define PAGE_KERNEL_PKEY(pkey)
         #define _PAGE_KEY(pkey)
-        void pks_mknoaccess(int pkey);
-        void pks_mkread(int pkey);
-        void pks_mkrdwr(int pkey);
+        void pks_mknoaccess(int pkey, bool global);
+        void pks_mkread(int pkey, bool global);
+        void pks_mkrdwr(int pkey, bool global);
         void pks_key_free(int pkey);
 
 pks_key_alloc() allocates keys dynamically to allow better use of the limited
@@ -141,7 +141,10 @@ _PAGE_KEY().
 The pks_mk*() family of calls allows kernel users the ability to change the
 protections for the domain identified by the pkey specified.  3 states are
 available pks_mknoaccess(), pks_mkread(), and pks_mkrdwr() which set the access
-to none, read, and read/write respectively.
+to none, read, and read/write respectively.  'global' specifies that the
+protection should be set across all threads (logical CPUs), not just the
+currently running thread/CPU.  This increases the overhead of PKS and lessens
+the protection, so it should be used sparingly.
 
 Finally, pks_key_free() allows a user to return the key to the allocator for
 use by others.
diff --git a/arch/x86/entry/common.c b/arch/x86/entry/common.c
index 324a8fd5ac10..86ad32e0095e 100644
--- a/arch/x86/entry/common.c
+++ b/arch/x86/entry/common.c
@@ -261,12 +261,19 @@ noinstr void idtentry_exit_nmi(struct pt_regs *regs, irqentry_state_t *irq_state
  * current running value and set the default PKRS value for the duration of the
  * exception.  Thus preventing exception handlers from having the elevated
  * access of the interrupted task.
+ *
+ * NOTE that the thread-saved PKRS must be preserved separately to ensure
+ * global overrides do not 'stick' on a thread.
  */
 noinstr void irq_save_pkrs(irqentry_state_t *state)
 {
 	if (!cpu_feature_enabled(X86_FEATURE_PKS))
 		return;
 
+	/*
+	 * The thread_pkrs must be maintained separately to prevent global
+	 * overrides from 'sticking' on a thread.
+	 */
 	state->thread_pkrs = current->thread.saved_pkrs;
 	state->pkrs = this_cpu_read(pkrs_cache);
 	write_pkrs(INIT_PKRS_VALUE);
diff --git a/arch/x86/include/asm/pkeys.h b/arch/x86/include/asm/pkeys.h
index 79952216474e..cae0153a5480 100644
--- a/arch/x86/include/asm/pkeys.h
+++ b/arch/x86/include/asm/pkeys.h
@@ -143,9 +143,9 @@ u32 update_pkey_val(u32 pk_reg, int pkey, unsigned int flags);
 
 int pks_key_alloc(const char *const pkey_user);
 void pks_key_free(int pkey);
-void pks_mknoaccess(int pkey);
-void pks_mkread(int pkey);
-void pks_mkrdwr(int pkey);
+void pks_mknoaccess(int pkey, bool global);
+void pks_mkread(int pkey, bool global);
+void pks_mkrdwr(int pkey, bool global);
 
 #endif /* CONFIG_ARCH_HAS_SUPERVISOR_PKEYS */
diff --git a/arch/x86/include/asm/pkeys_common.h b/arch/x86/include/asm/pkeys_common.h
index 8961e2ddd6ff..e380679ba1bb 100644
--- a/arch/x86/include/asm/pkeys_common.h
+++ b/arch/x86/include/asm/pkeys_common.h
@@ -6,7 +6,12 @@
 #define PKR_WD_BIT 0x2
 #define PKR_BITS_PER_PKEY 2
 
-#define PKR_AD_KEY(pkey)	(PKR_AD_BIT << ((pkey) * PKR_BITS_PER_PKEY))
+/*
+ * We must define 11b as the default to make global overrides efficient.
+ * See arch/x86/kernel/process.c where the global pkrs is factored in during
+ * context switch.
+ */
+#define PKR_AD_KEY(pkey)	((PKR_WD_BIT | PKR_AD_BIT) << ((pkey) * PKR_BITS_PER_PKEY))
 
 /*
  * Define a default PKRS value for each task.
@@ -27,6 +32,7 @@
 #define PKS_NUM_KEYS 16
 
 #ifdef CONFIG_ARCH_HAS_SUPERVISOR_PKEYS
+extern u32 pkrs_global_cache;
 DECLARE_PER_CPU(u32, pkrs_cache);
 noinstr void write_pkrs(u32 new_pkrs);
 #else
diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index eb3a95a69392..58edd162d9cb 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -43,7 +43,7 @@
 #include
 #include
 #include
-#include
+#include
 
 #include "process.h"
@@ -189,15 +189,83 @@ int copy_thread(unsigned long clone_flags, unsigned long sp, unsigned long arg,
 }
 
 #ifdef CONFIG_ARCH_HAS_SUPERVISOR_PKEYS
-DECLARE_PER_CPU(u32, pkrs_cache);
 static inline void pks_init_task(struct task_struct *tsk)
 {
 	/* New tasks get the most restrictive PKRS value */
 	tsk->thread.saved_pkrs = INIT_PKRS_VALUE;
 }
+
+extern u32 pkrs_global_cache;
+
+/**
+ * The global PKRS value can only increase access, because 01b and 11b both
+ * disable access.  The following truth table is our desired result for each
+ * of the pkeys when we add in the global permissions:
+ *
+ *	00 R/W    - Write enabled (all access)
+ *	10 Read   - Write disabled (read only)
+ *	01 No Acc - Access disabled
+ *	11 No Acc - Also access disabled
+ *
+ *	local	global	desired	required
+ *			result	operation
+ *	00	00	00	&
+ *	00	10	00	&
+ *	00	01	00	&
+ *	00	11	00	&
+ *
+ *	10	00	00	&
+ *	10	10	10	&
+ *	10	01	10	^ special case
+ *	10	11	10	&
+ *
+ *	01	00	00	&
+ *	01	10	10	^ special case
+ *	01	01	01	&
+ *	01	11	01	&
+ *
+ *	11	00	00	&
+ *	11	10	10	&
+ *	11	01	01	&
+ *	11	11	11	&
+ *
+ * In order to eliminate the need to loop through each pkey and deal with the
+ * 2 special cases above, we force all 01b values to 11b through the API.
+ * With 01b banned, only 9 distinct combinations remain, giving the
+ * simplified truth table below:
+ *
+ *	00 R/W    - Write enabled (all access)
+ *	10 Read   - Write disabled (read only)
+ *	11 No Acc - Access disabled
+ *	(01b is not allowed in the API; always use 11b)
+ *
+ *	local	global	desired	effective
+ *			result	operation
+ *	00	00	00	&
+ *	00	10	00	&
+ *	00	11	00	&
+ *
+ *	10	00	00	&
+ *	10	10	10	&
+ *	10	11	10	&
+ *
+ *	11	00	00	&
+ *	11	10	10	&
+ *	11	11	11	&
+ *
+ * Thus we can simply 'AND' in the global pkrs value.
+ */
 static inline void pks_sched_in(void)
 {
-	write_pkrs(current->thread.saved_pkrs);
+	write_pkrs(current->thread.saved_pkrs & pkrs_global_cache);
 }
 #else
 static inline void pks_init_task(struct task_struct *tsk) { }
diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index dd5af9399131..4b4ff9efa298 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -32,6 +32,8 @@
 #include			/* VMALLOC_START, ...	*/
 #include			/* kvm_handle_async_pf	*/
 
+#include
+
 #define CREATE_TRACE_POINTS
 #include
@@ -995,9 +997,124 @@ mm_fault_error(struct pt_regs *regs, unsigned long error_code,
 	}
 }
 
-static int spurious_kernel_fault_check(unsigned long error_code, pte_t *pte)
+#ifdef CONFIG_ARCH_HAS_SUPERVISOR_PKEYS
+/*
+ * Check if we have had a 'global' pkey update.  If so, handle this like a
+ * lazy TLB; fix up the local MSR and return.
+ *
+ * See arch/x86/kernel/process.c for the explanation of how global is handled
+ * with a simple '&' operation.
+ *
+ * Also we don't update the current thread's saved_pkrs because we don't want
+ * the global value to 'stick' with the thread.  Rather we want this to be
+ * valid only for the remainder of this time slice.
+ * For subsequent time slices the global value will be factored in during
+ * schedule; see arch/x86/kernel/process.c.
+ *
+ * Finally, we have a trade-off between performance and forcing a restriction
+ * of permissions across all CPUs on a global update.
+ *
+ * Given the following window:
+ *
+ *   Global PKRS           CPU #0                 CPU #1
+ *      cache                MSR                    MSR
+ *
+ *        |                   |                      |
+ * Global |----------\        |                      |
+ * Restriction |  ------------> read                 |      <= T1
+ * (on CPU #0) |          |       |                  |
+ *   ------\  |           |       |                  |
+ *   ------>| |           |       |                  |
+ *          | |           |       |                  |
+ * Update CPU #1 |--------\ |     |                  |
+ *          |    --------------\  |                  |
+ *          |             |   --|------------------>|
+ * Global remote |        |       |                  |
+ * MSR update    |        |       |                  |
+ * (CPU 2-n)     |        |       |                  |
+ *   |-----> CPU's        |       v                  |
+ *   local   (2-N)        |     local --\            |
+ *   update               |     update   ------>|(Update     <= T2
+ *   ----------------\    |                     | Incorrect)
+ *          |    -----------\                   |
+ *          |            --->|(Update OK)       |
+ * Context  |             |                     |
+ * Switch   |----------\  |                     |
+ *          |      ------------> read           |
+ *          |             |       |             |
+ *          |             |       |             |
+ *          |             |       v             |
+ *          |             |     local --\       |
+ *          |             |     update   ------>|(Update
+ *          |             |                     | Correct)
+ *
+ * We allow for a larger window of the global pkey being open because global
+ * updates should be rare and we don't want to burden normal faults with
+ * having to read the global state.
+ */
+static bool global_pkey_is_enabled(pte_t *pte, bool is_write,
+				   irqentry_state_t *irq_state)
+{
+	u8 pkey = pte_flags_pkey(pte->pte);
+	int pkey_shift = pkey * PKR_BITS_PER_PKEY;
+	u32 mask = (((1 << PKR_BITS_PER_PKEY) - 1) << pkey_shift);
+	u32 global = READ_ONCE(pkrs_global_cache);
+	u32 val;
+
+	/* Return early if global access is not valid */
+	val = (global & mask) >> pkey_shift;
+	if ((val & PKR_AD_BIT) || (is_write && (val & PKR_WD_BIT)))
+		return false;
+
+	irq_state->pkrs &= global;
+
+	return true;
+}
+
+#else /* !CONFIG_ARCH_HAS_SUPERVISOR_PKEYS */
+
+__always_inline bool global_pkey_is_enabled(pte_t *pte, bool is_write,
+					    irqentry_state_t *irq_state)
+{
+	return false;
+}
+#endif /* CONFIG_ARCH_HAS_SUPERVISOR_PKEYS */
+
+#ifdef CONFIG_PKS_TESTING
+bool pks_test_callback(irqentry_state_t *irq_state);
+static bool handle_pks_testing(unsigned long hw_error_code, irqentry_state_t *irq_state)
+{
+	/*
+	 * If we get a protection key exception it could be because we
+	 * are running the PKS test.  If so, pks_test_callback() will
+	 * clear the protection mechanism and return true to indicate
+	 * the fault was handled.
+	 */
+	return pks_test_callback(irq_state);
+}
+#else /* !CONFIG_PKS_TESTING */
+static bool handle_pks_testing(unsigned long hw_error_code, irqentry_state_t *irq_state)
+{
+	return false;
+}
+#endif /* CONFIG_PKS_TESTING */
+
+
+static int spurious_kernel_fault_check(unsigned long error_code, pte_t *pte,
+				       irqentry_state_t *irq_state)
 {
-	if ((error_code & X86_PF_WRITE) && !pte_write(*pte))
+	bool is_write = (error_code & X86_PF_WRITE);
+
+	if (IS_ENABLED(CONFIG_ARCH_HAS_SUPERVISOR_PKEYS) &&
+	    error_code & X86_PF_PK) {
+		if (global_pkey_is_enabled(pte, is_write, irq_state))
+			return 1;
+
+		if (handle_pks_testing(error_code, irq_state))
+			return 1;
+
+		return 0;
+	}
+
+	if (is_write && !pte_write(*pte))
 		return 0;
 
 	if ((error_code & X86_PF_INSTR) && !pte_exec(*pte))
@@ -1007,7 +1124,7 @@ static int spurious_kernel_fault_check(unsigned long error_code, pte_t *pte)
 
 /*
- * Handle a spurious fault caused by a stale TLB entry.
+ * Handle a spurious fault caused by a stale TLB entry or a lazy PKRS update.
  *
  * This allows us to lazily refresh the TLB when increasing the
 * permissions of a kernel page (RO -> RW or NX -> X).  Doing it
@@ -1022,13 +1139,19 @@ static int spurious_kernel_fault_check(unsigned long error_code, pte_t *pte)
  * There are no security implications to leaving a stale TLB when
  * increasing the permissions on a page.
  *
+ * Similarly, PKRS increases in permissions are done on a thread-local level.
+ * But if the caller indicates the permission should be allowed globally, we
+ * can lazily update only those threads which fault and avoid a global IPI
+ * MSR update.
+ *
  * Returns non-zero if a spurious fault was handled, zero otherwise.
  *
  * See Intel Developer's Manual Vol 3 Section 4.10.4.3, bullet 3
  * (Optional Invalidation).
  */
 static noinline int
-spurious_kernel_fault(unsigned long error_code, unsigned long address)
+spurious_kernel_fault(unsigned long error_code, unsigned long address,
+		      irqentry_state_t *irq_state)
 {
 	pgd_t *pgd;
 	p4d_t *p4d;
@@ -1038,17 +1161,19 @@ spurious_kernel_fault(unsigned long error_code, unsigned long address)
 	int ret;
 
 	/*
-	 * Only writes to RO or instruction fetches from NX may cause
-	 * spurious faults.
+	 * Only PKey faults or writes to RO or instruction fetches from NX may
+	 * cause spurious faults.
 	 *
 	 * These could be from user or supervisor accesses but the TLB
 	 * is only lazily flushed after a kernel mapping protection
 	 * change, so user accesses are not expected to cause spurious
 	 * faults.
 	 */
-	if (error_code != (X86_PF_WRITE | X86_PF_PROT) &&
-	    error_code != (X86_PF_INSTR | X86_PF_PROT))
-		return 0;
+	if (!(error_code & X86_PF_PK)) {
+		if (error_code != (X86_PF_WRITE | X86_PF_PROT) &&
+		    error_code != (X86_PF_INSTR | X86_PF_PROT))
+			return 0;
+	}
 
 	pgd = init_mm.pgd + pgd_index(address);
 	if (!pgd_present(*pgd))
@@ -1059,27 +1184,31 @@ spurious_kernel_fault(unsigned long error_code, unsigned long address)
 		return 0;
 
 	if (p4d_large(*p4d))
-		return spurious_kernel_fault_check(error_code, (pte_t *) p4d);
+		return spurious_kernel_fault_check(error_code, (pte_t *) p4d,
+						   irq_state);
 
 	pud = pud_offset(p4d, address);
 	if (!pud_present(*pud))
 		return 0;
 
 	if (pud_large(*pud))
-		return spurious_kernel_fault_check(error_code, (pte_t *) pud);
+		return spurious_kernel_fault_check(error_code, (pte_t *) pud,
+						   irq_state);
 
 	pmd = pmd_offset(pud, address);
 	if (!pmd_present(*pmd))
 		return 0;
 
 	if (pmd_large(*pmd))
-		return spurious_kernel_fault_check(error_code, (pte_t *) pmd);
+		return spurious_kernel_fault_check(error_code, (pte_t *) pmd,
+						   irq_state);
 
 	pte = pte_offset_kernel(pmd, address);
 	if (!pte_present(*pte))
 		return 0;
 
-	ret = spurious_kernel_fault_check(error_code, pte);
+	ret = spurious_kernel_fault_check(error_code, pte,
+					  irq_state);
 	if (!ret)
 		return 0;
 
@@ -1087,7 +1216,8 @@ spurious_kernel_fault(unsigned long error_code, unsigned long address)
 	 * Make sure we have permissions in PMD.
 	 * If not, then there's a bug in the page tables:
 	 */
-	ret = spurious_kernel_fault_check(error_code, (pte_t *) pmd);
+	ret = spurious_kernel_fault_check(error_code, (pte_t *) pmd,
+					  irq_state);
 	WARN_ONCE(!ret, "PMD has incorrect permission bits\n");
 
 	return ret;
@@ -1150,25 +1280,6 @@ static int fault_in_kernel_space(unsigned long address)
 	return address >= TASK_SIZE_MAX;
 }
 
-#ifdef CONFIG_PKS_TESTING
-bool pks_test_callback(irqentry_state_t *irq_state);
-static bool handle_pks_testing(unsigned long hw_error_code, irqentry_state_t *irq_state)
-{
-	/*
-	 * If we get a protection key exception it could be because we
-	 * are running the PKS test. If so, pks_test_callback() will
-	 * clear the protection mechanism and return true to indicate
-	 * the fault was handled.
-	 */
-	return (hw_error_code & X86_PF_PK) && pks_test_callback(irq_state);
-}
-#else
-static bool handle_pks_testing(unsigned long hw_error_code, irqentry_state_t *irq_state)
-{
-	return false;
-}
-#endif
-
 /*
  * Called for all faults where 'address' is part of the kernel address
  * space.  Might get called for faults that originate from *code* that
@@ -1186,9 +1297,6 @@ do_kern_addr_fault(struct pt_regs *regs, unsigned long hw_error_code,
 	    !cpu_feature_enabled(X86_FEATURE_PKS))
 		WARN_ON_ONCE(hw_error_code & X86_PF_PK);
 
-	if (handle_pks_testing(hw_error_code, irq_state))
-		return;
-
 #ifdef CONFIG_X86_32
 	/*
 	 * We can fault-in kernel-space virtual memory on-demand. The
@@ -1220,8 +1328,11 @@ do_kern_addr_fault(struct pt_regs *regs, unsigned long hw_error_code,
 	}
 #endif
 
-	/* Was the fault spurious, caused by lazy TLB invalidation? */
-	if (spurious_kernel_fault(hw_error_code, address))
+	/*
+	 * Was the fault spurious, caused by lazy TLB invalidation or a PKRS
+	 * update?
+	 */
+	if (spurious_kernel_fault(hw_error_code, address, irq_state))
 		return;
 
 	/* kprobes don't want to hook the spurious faults: */
@@ -1492,7 +1603,7 @@ DEFINE_IDTENTRY_RAW_ERRORCODE(exc_page_fault)
 	 *
 	 * Fingers crossed.
 	 *
-	 * The async #PF handling code takes care of idtentry handling
+	 * The async #PF handling code takes care of irqentry handling
 	 * itself.
 	 */
 	if (kvm_handle_async_pf(regs, (u32)address))
diff --git a/arch/x86/mm/pkeys.c b/arch/x86/mm/pkeys.c
index 2431c68ef752..a45893069877 100644
--- a/arch/x86/mm/pkeys.c
+++ b/arch/x86/mm/pkeys.c
@@ -263,33 +263,84 @@ noinstr void write_pkrs(u32 new_pkrs)
 }
 EXPORT_SYMBOL_GPL(write_pkrs);
 
+/*
+ * NOTE: The pkrs_global_cache is _never_ stored in the per-thread PKRS cache
+ * values [thread.saved_pkrs] by design.
+ *
+ * This allows us to invalidate access on running threads immediately upon
+ * invalidation.  Sleeping threads will not be enabled due to the algorithm
+ * during pks_sched_in().
+ */
+DEFINE_SPINLOCK(pkrs_global_cache_lock);
+u32 pkrs_global_cache = INIT_PKRS_VALUE;
+EXPORT_SYMBOL_GPL(pkrs_global_cache);
+
+static inline void update_global_pkrs(int pkey, unsigned long protection)
+{
+	int pkey_shift = pkey * PKR_BITS_PER_PKEY;
+	u32 mask = (((1 << PKR_BITS_PER_PKEY) - 1) << pkey_shift);
+	u32 old_val;
+
+	spin_lock(&pkrs_global_cache_lock);
+	old_val = (pkrs_global_cache & mask) >> pkey_shift;
+	pkrs_global_cache &= ~mask;
+	if (protection & PKEY_DISABLE_ACCESS)
+		pkrs_global_cache |= PKR_AD_BIT << pkey_shift;
+	if (protection & PKEY_DISABLE_WRITE)
+		pkrs_global_cache |= PKR_WD_BIT << pkey_shift;
+
+	/*
+	 * If we are restricting access relative to the old value, force the
+	 * update on all running threads.
+	 */
+	if (((old_val == 0) && protection) ||
+	    ((old_val & PKR_WD_BIT) && (protection & PKEY_DISABLE_ACCESS))) {
+		int cpu;
+
+		for_each_online_cpu(cpu) {
+			u32 *ptr = per_cpu_ptr(&pkrs_cache, cpu);
+
+			*ptr = update_pkey_val(*ptr, pkey, protection);
+			wrmsrl_on_cpu(cpu, MSR_IA32_PKRS, *ptr);
+			put_cpu_ptr(ptr);
+		}
+	}
+	spin_unlock(&pkrs_global_cache_lock);
+}
+
 /**
  * Do not call this directly, see pks_mk*() below.
  *
  * @pkey: Key for the domain to change
  * @protection: protection bits to be used
+ * @global: should this change be made globally or not.
 *
 * Protection utilizes the same protection bits specified for User pkeys
 *	PKEY_DISABLE_ACCESS
 *	PKEY_DISABLE_WRITE
 *
 */
-static inline void pks_update_protection(int pkey, unsigned long protection)
+static inline void pks_update_protection(int pkey, unsigned long protection,
+					 bool global)
 {
-	current->thread.saved_pkrs = update_pkey_val(current->thread.saved_pkrs,
-						     pkey, protection);
 	preempt_disable();
+	if (global)
+		update_global_pkrs(pkey, protection);
+
+	current->thread.saved_pkrs = update_pkey_val(current->thread.saved_pkrs, pkey,
+						     protection);
 	write_pkrs(current->thread.saved_pkrs);
+
 	preempt_enable();
 }
 
 /**
  * PKS access control functions
  *
- * Change the access of the domain specified by the pkey.  These are global
- * updates.  They only affects the current running thread.  It is undefined and
- * a bug for users to call this without having allocated a pkey and using it as
- * pkey here.
+ * Change the access of the domain specified by the pkey.  These may be global
+ * updates, depending on the value of global.  It is undefined and a bug for
+ * users to call this without having allocated a pkey and using it as the
+ * pkey here.
  *
  * pks_mknoaccess()
  *     Disable all access to the domain
@@ -299,23 +350,30 @@ static inline void pks_update_protection(int pkey, unsigned long protection)
  *     Make the domain Read/Write
  *
  * @pkey the pkey for which the access should change.
- *
+ * @global if true the access is enabled on all threads/logical cpus
  */
-void pks_mknoaccess(int pkey)
+void pks_mknoaccess(int pkey, bool global)
 {
-	pks_update_protection(pkey, PKEY_DISABLE_ACCESS);
+	/*
+	 * We force disable access to be 11b
+	 * (PKEY_DISABLE_ACCESS | PKEY_DISABLE_WRITE)
+	 * instead of 01b.  See arch/x86/kernel/process.c where the global
+	 * pkrs is factored in during context switch.
+	 */
+	pks_update_protection(pkey, PKEY_DISABLE_ACCESS | PKEY_DISABLE_WRITE,
+			      global);
 }
 EXPORT_SYMBOL_GPL(pks_mknoaccess);
 
-void pks_mkread(int pkey)
+void pks_mkread(int pkey, bool global)
 {
-	pks_update_protection(pkey, PKEY_DISABLE_WRITE);
+	pks_update_protection(pkey, PKEY_DISABLE_WRITE, global);
 }
 EXPORT_SYMBOL_GPL(pks_mkread);
 
-void pks_mkrdwr(int pkey)
+void pks_mkrdwr(int pkey, bool global)
 {
-	pks_update_protection(pkey, 0);
+	pks_update_protection(pkey, 0, global);
 }
 EXPORT_SYMBOL_GPL(pks_mkrdwr);
 
@@ -377,7 +435,7 @@ void pks_key_free(int pkey)
 		return;
 
 	/* Restore to default of no access */
-	pks_mknoaccess(pkey);
+	pks_mknoaccess(pkey, true);
 	pks_key_users[pkey] = NULL;
 	__clear_bit(pkey, &pks_key_allocation_map);
 }
diff --git a/include/linux/pkeys.h b/include/linux/pkeys.h
index f9552bd9341f..8f3bfec83949 100644
--- a/include/linux/pkeys.h
+++ b/include/linux/pkeys.h
@@ -57,15 +57,15 @@ static inline int pks_key_alloc(const char * const pkey_user)
 static inline void pks_key_free(int pkey)
 {
 }
-static inline void pks_mknoaccess(int pkey)
+static inline void pks_mknoaccess(int pkey, bool global)
 {
 	WARN_ON_ONCE(1);
 }
-static inline void pks_mkread(int pkey)
+static inline void pks_mkread(int pkey, bool global)
 {
 	WARN_ON_ONCE(1);
 }
-static inline void pks_mkrdwr(int pkey)
+static inline void pks_mkrdwr(int pkey, bool global)
 {
 	WARN_ON_ONCE(1);
 }
diff --git a/lib/pks/pks_test.c b/lib/pks/pks_test.c
index d7dbf92527bd..286c8b8457da 100644
--- a/lib/pks/pks_test.c
+++ b/lib/pks/pks_test.c
@@ -163,12 +163,12 @@ static void check_exception(irqentry_state_t *irq_state)
 	 * Check we can update the value during exception without affecting the
 	 * calling thread.  The calling thread is checked after exception...
 	 */
-	pks_mkrdwr(test_armed_key);
+	pks_mkrdwr(test_armed_key, false);
 	if (!check_pkrs(test_armed_key, 0)) {
 		pr_err("  FAIL: exception did not change register to 0\n");
 		test_exception_ctx->pass = false;
 	}
-	pks_mknoaccess(test_armed_key);
+	pks_mknoaccess(test_armed_key, false);
 	if (!check_pkrs(test_armed_key, PKEY_DISABLE_ACCESS | PKEY_DISABLE_WRITE)) {
 		pr_err("  FAIL: exception did not change register to 0x3\n");
 		test_exception_ctx->pass = false;
@@ -314,13 +314,13 @@ static int run_access_test(struct pks_test_ctx *ctx,
 {
 	switch (test->mode) {
 	case PKS_TEST_NO_ACCESS:
-		pks_mknoaccess(ctx->pkey);
+		pks_mknoaccess(ctx->pkey, false);
 		break;
 	case PKS_TEST_RDWR:
-		pks_mkrdwr(ctx->pkey);
+		pks_mkrdwr(ctx->pkey, false);
 		break;
 	case PKS_TEST_RDONLY:
-		pks_mkread(ctx->pkey);
+		pks_mkread(ctx->pkey, false);
 		break;
 	default:
 		pr_err("BUG in test invalid mode\n");
@@ -476,7 +476,7 @@ static void run_exception_test(void)
 		goto free_context;
 	}
 
-	pks_mkread(ctx->pkey);
+	pks_mkread(ctx->pkey, false);
 
 	spin_lock(&test_lock);
 	WRITE_ONCE(test_exception_ctx, ctx);
@@ -556,7 +556,7 @@ static void crash_it(void)
 		return;
 	}
 
-	pks_mknoaccess(ctx->pkey);
+	pks_mknoaccess(ctx->pkey, false);
 
 	spin_lock(&test_lock);
 	WRITE_ONCE(test_armed_key, 0);
@@ -618,7 +618,7 @@ static ssize_t pks_write_file(struct file *file, const char __user *user_buf,
 
 	/* start of context switch test */
 	if (!strcmp(buf, "1")) {
 		/* Ensure a known state to test context switch */
-		pks_mknoaccess(ctx->pkey);
+		pks_mknoaccess(ctx->pkey, false);
 	}
 
 	/* After context switch msr should be restored */
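The merge rule described in this patch is easy to sanity-check outside
the kernel.  The following is a standalone user-space sketch, not
kernel code; it assumes only the two-bit-per-pkey layout from
pkeys_common.h (PKR_AD_BIT = 0x1, PKR_WD_BIT = 0x2) and verifies that,
once 01b is banned, a plain AND yields the desired results:

	/* merge_check.c -- illustrative sketch only */
	#include <assert.h>

	#define PKR_AD_BIT 0x1
	#define PKR_WD_BIT 0x2

	#define NO_ACCESS  (PKR_AD_BIT | PKR_WD_BIT)	/* 11b; 01b is never used */
	#define READ_ONLY  PKR_WD_BIT			/* 10b */
	#define READ_WRITE 0x0				/* 00b */

	int main(void)
	{
		/* A permissive global value widens a restrictive local value... */
		assert((NO_ACCESS & READ_WRITE) == READ_WRITE);
		assert((NO_ACCESS & READ_ONLY)  == READ_ONLY);
		assert((READ_ONLY & READ_WRITE) == READ_WRITE);

		/* ...but a restrictive global value never narrows a local one. */
		assert((READ_WRITE & NO_ACCESS) == READ_WRITE);
		assert((READ_ONLY  & NO_ACCESS) == READ_ONLY);

		/*
		 * Had 01b been allowed, 10b & 01b would have produced 00b
		 * (full access) -- the 'special case' in the truth table.
		 */
		return 0;
	}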
From patchwork Fri Oct 9 19:49:39 2020
X-Patchwork-Submitter: Ira Weiny
X-Patchwork-Id: 287185
From: ira.weiny@intel.com
To: Andrew Morton, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
 Andy Lutomirski, Peter Zijlstra
Cc: Ira Weiny, Randy Dunlap, Dave Hansen, Dan Williams, Fenghua Yu,
 x86@kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 linux-nvdimm@lists.01.org
Subject: [PATCH RFC PKS/PMEM 04/58] kmap: Add stray access protection for device pages
Date: Fri, 9 Oct 2020 12:49:39 -0700
Message-Id: <20201009195033.3208459-5-ira.weiny@intel.com>
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>

From: Ira Weiny

Device managed pages may have additional protections.  These
protections need to be removed prior to valid use by kernel users.

Check for special treatment of device managed pages in kmap and take
action if needed.  We use kmap as an interface for generic kernel code
because under normal circumstances it would be a bug for general kernel
code to not use kmap prior to accessing kernel memory.  Therefore, this
should allow any valid kernel users to seamlessly use these pages
without issues.

Because of the critical nature of kmap it must be pointed out that the
overhead on regular DRAM is carefully implemented to be as fast as
possible.  Furthermore, the underlying MSR write required on device
pages when protected is better than a normal MSR write.

Specifically, WRMSR(MSR_IA32_PKRS) is not serializing but still
maintains ordering properties similar to WRPKRU.  The current SDM
section on PKRS needs updating but should be the same as that of
WRPKRU.  So to quote from the WRPKRU text:

	WRPKRU will never execute speculatively.  Memory accesses
	affected by PKRU register will not execute (even speculatively)
	until all prior executions of WRPKRU have completed execution
	and updated the PKRU register.

Still, this will make accessing pmem more expensive from the kernel,
but the overhead is minimized and many pmem users access this memory
through user page mappings which are not affected at all.
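To make the kmap() interaction concrete, here is a minimal sketch of a
copy helper; the function name is an assumption for illustration.  With
this patch, the kmap()/kunmap() pair transparently drops and restores
the PKS protection when page_is_access_protected(page) is true:

	/* Illustrative sketch only; copy_from_dev_page() is hypothetical. */
	static void copy_from_dev_page(void *dst, struct page *page,
				       size_t off, size_t len)
	{
		char *src = kmap(page);	/* enables access if page is protected */

		memcpy(dst, src + off, len);
		kunmap(page);		/* restores the protection */
	}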
Cc: Randy Dunlap
Signed-off-by: Ira Weiny
---
 include/linux/highmem.h | 32 +++++++++++++++++++++++++++++++-
 1 file changed, 31 insertions(+), 1 deletion(-)

diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index 14e6202ce47f..2a9806e3b8d2 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -8,6 +8,7 @@
 #include
 #include
 #include
+#include
 
 #include
 
@@ -31,6 +32,20 @@ static inline void invalidate_kernel_vmap_range(void *vaddr, int size)
 
 #include
 
+static inline void dev_page_enable_access(struct page *page, bool global)
+{
+	if (!page_is_access_protected(page))
+		return;
+	dev_access_enable(global);
+}
+
+static inline void dev_page_disable_access(struct page *page, bool global)
+{
+	if (!page_is_access_protected(page))
+		return;
+	dev_access_disable(global);
+}
+
 #ifdef CONFIG_HIGHMEM
 extern void *kmap_atomic_high_prot(struct page *page, pgprot_t prot);
 extern void kunmap_atomic_high(void *kvaddr);
@@ -55,6 +70,11 @@ static inline void *kmap(struct page *page)
 	else
 		addr = kmap_high(page);
 	kmap_flush_tlb((unsigned long)addr);
+	/*
+	 * Even non-highmem pages may have additional access protections which
+	 * need to be checked and potentially enabled.
+	 */
+	dev_page_enable_access(page, true);
 	return addr;
 }
 
@@ -63,6 +83,11 @@ void kunmap_high(struct page *page);
 static inline void kunmap(struct page *page)
 {
 	might_sleep();
+	/*
+	 * Even non-highmem pages may have additional access protections which
+	 * need to be checked and potentially disabled.
+	 */
+	dev_page_disable_access(page, true);
 	if (!PageHighMem(page))
 		return;
 	kunmap_high(page);
@@ -85,6 +110,7 @@ static inline void *kmap_atomic_prot(struct page *page, pgprot_t prot)
 {
 	preempt_disable();
 	pagefault_disable();
+	dev_page_enable_access(page, false);
 	if (!PageHighMem(page))
 		return page_address(page);
 	return kmap_atomic_high_prot(page, prot);
@@ -137,6 +163,7 @@ static inline unsigned long totalhigh_pages(void) { return 0UL; }
 static inline void *kmap(struct page *page)
 {
 	might_sleep();
+	dev_page_enable_access(page, true);
 	return page_address(page);
 }
 
@@ -146,6 +173,7 @@ static inline void kunmap_high(struct page *page)
 
 static inline void kunmap(struct page *page)
 {
+	dev_page_disable_access(page, true);
 #ifdef ARCH_HAS_FLUSH_ON_KUNMAP
 	kunmap_flush_on_unmap(page_address(page));
 #endif
@@ -155,6 +183,7 @@ static inline void *kmap_atomic(struct page *page)
 {
 	preempt_disable();
 	pagefault_disable();
+	dev_page_enable_access(page, false);
 	return page_address(page);
 }
 #define kmap_atomic_prot(page, prot)	kmap_atomic(page)
@@ -216,7 +245,8 @@ static inline void kmap_atomic_idx_pop(void)
 #define kunmap_atomic(addr)                                     \
 do {                                                            \
 	BUILD_BUG_ON(__same_type((addr), struct page *));       \
-	kunmap_atomic_high(addr);                               \
+	dev_page_disable_access(kmap_to_page(addr), false);     \
+	kunmap_atomic_high(addr);                               \
 	pagefault_enable();                                     \
 	preempt_enable();                                       \
} while (0)
From patchwork Fri Oct 9 19:49:40 2020
X-Patchwork-Submitter: Ira Weiny
X-Patchwork-Id: 287186
From: ira.weiny@intel.com
To: Andrew Morton, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
 Andy Lutomirski, Peter Zijlstra
Cc: Ira Weiny, Randy Dunlap, Dave Hansen, Dan Williams, Fenghua Yu,
 x86@kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH RFC PKS/PMEM 05/58] kmap: Introduce k[un]map_thread
Date: Fri, 9 Oct 2020 12:49:40 -0700
Message-Id: <20201009195033.3208459-6-ira.weiny@intel.com>
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>

From: Ira Weiny

To correctly support the semantics of kmap() with kernel protection
keys (PKS), kmap() may be required to set the protections on multiple
processors (globally).  Enabling PKS globally can be very expensive
depending on the requested operation.
Furthermore, enabling a domain globally reduces the protection afforded
by PKS.

Most kmap() callers (approximately 209 of 229) use the mapping within a
single thread and have no need for the protection domain to be enabled
globally.  However, the remaining callers do not follow this pattern
and, as best I can tell, expect the mapping to be 'global' and
available to any thread which may access the mapping.[1]

We don't anticipate global mappings to pmem; however, in general there
is a danger in changing the semantics of kmap().  Effectively, this
would cause an unresolved page fault with little to no information
about why the failure occurred.

To resolve this a number of options were considered.

1) Attempt to change all the thread-local kmap() calls to
   kmap_atomic()[2]
2) Introduce a flags parameter to kmap() to indicate if the mapping
   should be global or not
3) Change ~20 call sites to 'kmap_global()' to indicate that they
   require a global enablement of the pages.
4) Change ~209 call sites to 'kmap_thread()' to indicate that the
   mapping is to be used within that thread of execution only

Option 1 is simply not feasible.  Option 2 would require all of the
call sites of kmap() to change.  Option 3 seems like a good minimal
change, but there is a danger that new code may miss the semantic
change of kmap() and not get the behavior the developer intended.
Therefore, option 4 was chosen.

Subsequent patches will convert most (~90%) of the kmap() callers to
this new call, leaving about 10% of the existing kmap() callers to
enable PKS globally.
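As a sketch of the thread-local pattern option 4 targets, consider the
following; the helper name and the checksum call are illustrative
assumptions, not code from the series.  Because the mapping never
escapes the function, kmap_thread() avoids the global PKRS update:

	/* Illustrative sketch only; assumes linux/crc32.h. */
	static void checksum_page(struct page *page, u32 *csum)
	{
		void *addr = kmap_thread(page);	/* thread-local enable */

		*csum = crc32(*csum, addr, PAGE_SIZE);
		kunmap_thread(page);		/* mapping never escaped */
	}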
Cc: Randy Dunlap
Signed-off-by: Ira Weiny
---
 include/linux/highmem.h | 34 ++++++++++++++++++++++++++--------
 1 file changed, 26 insertions(+), 8 deletions(-)

diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index 2a9806e3b8d2..ef7813544719 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -60,7 +60,7 @@ static inline void kmap_flush_tlb(unsigned long addr) { }
 #endif
 
 void *kmap_high(struct page *page);
-static inline void *kmap(struct page *page)
+static inline void *__kmap(struct page *page, bool global)
 {
 	void *addr;
 
@@ -74,20 +74,20 @@ static inline void *kmap(struct page *page)
 	 * Even non-highmem pages may have additional access protections which
 	 * need to be checked and potentially enabled.
 	 */
-	dev_page_enable_access(page, true);
+	dev_page_enable_access(page, global);
 	return addr;
 }
 
 void kunmap_high(struct page *page);
-static inline void kunmap(struct page *page)
+static inline void __kunmap(struct page *page, bool global)
 {
 	might_sleep();
 	/*
 	 * Even non-highmem pages may have additional access protections which
 	 * need to be checked and potentially disabled.
 	 */
-	dev_page_disable_access(page, true);
+	dev_page_disable_access(page, global);
 	if (!PageHighMem(page))
 		return;
 	kunmap_high(page);
@@ -160,10 +160,10 @@ static inline struct page *kmap_to_page(void *addr)
 
 static inline unsigned long totalhigh_pages(void) { return 0UL; }
 
-static inline void *kmap(struct page *page)
+static inline void *__kmap(struct page *page, bool global)
 {
 	might_sleep();
-	dev_page_enable_access(page, true);
+	dev_page_enable_access(page, global);
 	return page_address(page);
 }
 
@@ -171,9 +171,9 @@ static inline void kunmap_high(struct page *page)
 {
 }
 
-static inline void kunmap(struct page *page)
+static inline void __kunmap(struct page *page, bool global)
 {
-	dev_page_disable_access(page, true);
+	dev_page_disable_access(page, global);
 #ifdef ARCH_HAS_FLUSH_ON_KUNMAP
 	kunmap_flush_on_unmap(page_address(page));
 #endif
@@ -238,6 +238,24 @@ static inline void kmap_atomic_idx_pop(void)
 
 #endif
 
+static inline void *kmap(struct page *page)
+{
+	return __kmap(page, true);
+}
+static inline void kunmap(struct page *page)
+{
+	__kunmap(page, true);
+}
+
+static inline void *kmap_thread(struct page *page)
+{
+	return __kmap(page, false);
+}
+static inline void kunmap_thread(struct page *page)
+{
+	__kunmap(page, false);
+}
+
 /*
  * Prevent people trying to call kunmap_atomic() as if it were kunmap()
  * kunmap_atomic() should get the return value of kmap_atomic, not the page.
From patchwork Fri Oct 9 19:49:45 2020
X-Patchwork-Submitter: Ira Weiny
X-Patchwork-Id: 287212
From: ira.weiny@intel.com
To: Andrew Morton, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
 Andy Lutomirski, Peter Zijlstra
Cc: Ira Weiny, Mike Marciniszyn, Dennis Dalessandro, Doug Ledford,
 Jason Gunthorpe, Faisal Latif, Shiraz Saleem, Bernard Metzler,
 Dave Hansen, Dan Williams, Fenghua Yu, x86@kernel.org,
 linux-rdma@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH RFC PKS/PMEM 10/58] drivers/rdma: Utilize new kmap_thread()
Date: Fri, 9 Oct 2020 12:49:45 -0700
Message-Id: <20201009195033.3208459-11-ira.weiny@intel.com>
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>

From: Ira Weiny

The kmap() calls in these drivers are localized to a single thread.  To
avoid the overhead of global PKRS updates use the new kmap_thread()
call.
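The rule of thumb these driver conversions apply can be sketched as
follows; the names here (build_packet() in particular) are hypothetical
and only illustrate the pattern.  If the page is mapped and unmapped
within one function and the address is never published to another
context, kmap_thread() is a drop-in replacement:

	/* Before: enables access on every CPU. */
	vaddr = kmap(page);
	len = build_packet(buf, vaddr + off);	/* hypothetical helper */
	kunmap(page);

	/* After: thread-local enable, same mapping semantics. */
	vaddr = kmap_thread(page);
	len = build_packet(buf, vaddr + off);
	kunmap_thread(page);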
Cc: Mike Marciniszyn
Cc: Dennis Dalessandro
Cc: Doug Ledford
Cc: Jason Gunthorpe
Cc: Faisal Latif
Cc: Shiraz Saleem
Cc: Bernard Metzler
Signed-off-by: Ira Weiny
---
 drivers/infiniband/hw/hfi1/sdma.c      |  4 ++--
 drivers/infiniband/hw/i40iw/i40iw_cm.c | 10 +++++-----
 drivers/infiniband/sw/siw/siw_qp_tx.c  | 14 +++++++-------
 3 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/drivers/infiniband/hw/hfi1/sdma.c b/drivers/infiniband/hw/hfi1/sdma.c
index 04575c9afd61..09d206e3229a 100644
--- a/drivers/infiniband/hw/hfi1/sdma.c
+++ b/drivers/infiniband/hw/hfi1/sdma.c
@@ -3130,7 +3130,7 @@ int ext_coal_sdma_tx_descs(struct hfi1_devdata *dd, struct sdma_txreq *tx,
 	}
 
 	if (type == SDMA_MAP_PAGE) {
-		kvaddr = kmap(page);
+		kvaddr = kmap_thread(page);
 		kvaddr += offset;
 	} else if (WARN_ON(!kvaddr)) {
 		__sdma_txclean(dd, tx);
@@ -3140,7 +3140,7 @@ int ext_coal_sdma_tx_descs(struct hfi1_devdata *dd, struct sdma_txreq *tx,
 	memcpy(tx->coalesce_buf + tx->coalesce_idx, kvaddr, len);
 	tx->coalesce_idx += len;
 	if (type == SDMA_MAP_PAGE)
-		kunmap(page);
+		kunmap_thread(page);
 
 	/* If there is more data, return */
 	if (tx->tlen - tx->coalesce_idx)
diff --git a/drivers/infiniband/hw/i40iw/i40iw_cm.c b/drivers/infiniband/hw/i40iw/i40iw_cm.c
index a3b95805c154..122d7a5642a1 100644
--- a/drivers/infiniband/hw/i40iw/i40iw_cm.c
+++ b/drivers/infiniband/hw/i40iw/i40iw_cm.c
@@ -3721,7 +3721,7 @@ int i40iw_accept(struct iw_cm_id *cm_id, struct iw_cm_conn_param *conn_param)
 		ibmr->device = iwpd->ibpd.device;
 		iwqp->lsmm_mr = ibmr;
 		if (iwqp->page)
-			iwqp->sc_qp.qp_uk.sq_base = kmap(iwqp->page);
+			iwqp->sc_qp.qp_uk.sq_base = kmap_thread(iwqp->page);
 		dev->iw_priv_qp_ops->qp_send_lsmm(&iwqp->sc_qp,
 						  iwqp->ietf_mem.va,
 						  (accept.size + conn_param->private_data_len),
@@ -3729,12 +3729,12 @@ int i40iw_accept(struct iw_cm_id *cm_id, struct iw_cm_conn_param *conn_param)
 
 	} else {
 		if (iwqp->page)
-			iwqp->sc_qp.qp_uk.sq_base = kmap(iwqp->page);
+			iwqp->sc_qp.qp_uk.sq_base = kmap_thread(iwqp->page);
 		dev->iw_priv_qp_ops->qp_send_lsmm(&iwqp->sc_qp, NULL, 0, 0);
 	}
 
 	if (iwqp->page)
-		kunmap(iwqp->page);
+		kunmap_thread(iwqp->page);
 
 	iwqp->cm_id = cm_id;
 	cm_node->cm_id = cm_id;
@@ -4102,10 +4102,10 @@ static void i40iw_cm_event_connected(struct i40iw_cm_event *event)
 	i40iw_cm_init_tsa_conn(iwqp, cm_node);
 	read0 = (cm_node->send_rdma0_op == SEND_RDMA_READ_ZERO);
 	if (iwqp->page)
-		iwqp->sc_qp.qp_uk.sq_base = kmap(iwqp->page);
+		iwqp->sc_qp.qp_uk.sq_base = kmap_thread(iwqp->page);
 	dev->iw_priv_qp_ops->qp_send_rtt(&iwqp->sc_qp, read0);
 	if (iwqp->page)
-		kunmap(iwqp->page);
+		kunmap_thread(iwqp->page);
 
 	memset(&attr, 0, sizeof(attr));
 	attr.qp_state = IB_QPS_RTS;
diff --git a/drivers/infiniband/sw/siw/siw_qp_tx.c b/drivers/infiniband/sw/siw/siw_qp_tx.c
index d19d8325588b..4ed37c328d02 100644
--- a/drivers/infiniband/sw/siw/siw_qp_tx.c
+++ b/drivers/infiniband/sw/siw/siw_qp_tx.c
@@ -76,7 +76,7 @@ static int siw_try_1seg(struct siw_iwarp_tx *c_tx, void *paddr)
 			if (unlikely(!p))
 				return -EFAULT;
 
-			buffer = kmap(p);
+			buffer = kmap_thread(p);
 
 			if (likely(PAGE_SIZE - off >= bytes)) {
 				memcpy(paddr, buffer + off, bytes);
@@ -84,7 +84,7 @@ static int siw_try_1seg(struct siw_iwarp_tx *c_tx, void *paddr)
 				unsigned long part = bytes - (PAGE_SIZE - off);
 
 				memcpy(paddr, buffer + off, part);
-				kunmap(p);
+				kunmap_thread(p);
 
 				if (!mem->is_pbl)
 					p = siw_get_upage(mem->umem,
@@ -96,10 +96,10 @@ static int siw_try_1seg(struct siw_iwarp_tx *c_tx, void *paddr)
 				if (unlikely(!p))
 					return -EFAULT;
 
-				buffer = kmap(p);
+				buffer = kmap_thread(p);
 				memcpy(paddr + part, buffer, bytes - part);
 			}
-			kunmap(p);
+			kunmap_thread(p);
 		}
 	}
 	return (int)bytes;
@@ -505,7 +505,7 @@ static int siw_tx_hdt(struct siw_iwarp_tx *c_tx, struct socket *s)
 				page_array[seg] = p;
 
 				if (!c_tx->use_sendpage) {
-					iov[seg].iov_base = kmap(p) + fp_off;
+					iov[seg].iov_base = kmap_thread(p) + fp_off;
 					iov[seg].iov_len = plen;
 
 					/* Remember for later kunmap() */
@@ -518,9 +518,9 @@ static int siw_tx_hdt(struct siw_iwarp_tx *c_tx, struct socket *s)
 							   plen);
 				} else if (do_crc) {
 					crypto_shash_update(c_tx->mpa_crc_hd,
-							    kmap(p) + fp_off,
+							    kmap_thread(p) + fp_off,
 							    plen);
-					kunmap(p);
+					kunmap_thread(p);
 				}
 			} else {
 				u64 va = sge->laddr + sge_off;
Miller" , Jakub Kicinski , Jesse Brandeburg , x86@kernel.org, Dave Hansen , Dan Williams , Fenghua Yu , linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kselftest@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, kvm@vger.kernel.org, netdev@vger.kernel.org, bpf@vger.kernel.org, kexec@lists.infradead.org, linux-bcache@vger.kernel.org, linux-mtd@lists.infradead.org, devel@driverdev.osuosl.org, linux-efi@vger.kernel.org, linux-mmc@vger.kernel.org, linux-scsi@vger.kernel.org, target-devel@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, linux-ext4@vger.kernel.org, linux-aio@kvack.org, io-uring@vger.kernel.org, linux-erofs@lists.ozlabs.org, linux-um@lists.infradead.org, linux-ntfs-dev@lists.sourceforge.net, reiserfs-devel@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net, linux-nilfs@vger.kernel.org, cluster-devel@redhat.com, ecryptfs@vger.kernel.org, linux-cifs@vger.kernel.org, linux-btrfs@vger.kernel.org, linux-afs@lists.infradead.org, linux-rdma@vger.kernel.org, amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, drbd-dev@lists.linbit.com, linux-block@vger.kernel.org, xen-devel@lists.xenproject.org, linux-cachefs@redhat.com, samba-technical@lists.samba.org, intel-wired-lan@lists.osuosl.org Subject: [PATCH RFC PKS/PMEM 11/58] drivers/net: Utilize new kmap_thread() Date: Fri, 9 Oct 2020 12:49:46 -0700 Message-Id: <20201009195033.3208459-12-ira.weiny@intel.com> X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9 In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com> References: <20201009195033.3208459-1-ira.weiny@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org From: Ira Weiny The kmap() calls in these drivers are localized to a single thread. To avoid the over head of global PKRS updates use the new kmap_thread() call. Cc: "David S. 
Miller" Cc: Jakub Kicinski Cc: Jesse Brandeburg Signed-off-by: Ira Weiny --- drivers/net/ethernet/intel/igb/igb_ethtool.c | 4 ++-- drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c | 4 ++-- 2 files changed, 4 insertions(+), 4 deletions(-) diff --git a/drivers/net/ethernet/intel/igb/igb_ethtool.c b/drivers/net/ethernet/intel/igb/igb_ethtool.c index 6e8231c1ddf0..ac9189752012 100644 --- a/drivers/net/ethernet/intel/igb/igb_ethtool.c +++ b/drivers/net/ethernet/intel/igb/igb_ethtool.c @@ -1794,14 +1794,14 @@ static int igb_check_lbtest_frame(struct igb_rx_buffer *rx_buffer, frame_size >>= 1; - data = kmap(rx_buffer->page); + data = kmap_thread(rx_buffer->page); if (data[3] != 0xFF || data[frame_size + 10] != 0xBE || data[frame_size + 12] != 0xAF) match = false; - kunmap(rx_buffer->page); + kunmap_thread(rx_buffer->page); return match; } diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c index 71ec908266a6..7d469425f8b4 100644 --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c @@ -1963,14 +1963,14 @@ static bool ixgbe_check_lbtest_frame(struct ixgbe_rx_buffer *rx_buffer, frame_size >>= 1; - data = kmap(rx_buffer->page) + rx_buffer->page_offset; + data = kmap_thread(rx_buffer->page) + rx_buffer->page_offset; if (data[3] != 0xFF || data[frame_size + 10] != 0xBE || data[frame_size + 12] != 0xAF) match = false; - kunmap(rx_buffer->page); + kunmap_thread(rx_buffer->page); return match; } From patchwork Fri Oct 9 19:49:47 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ira Weiny X-Patchwork-Id: 287189 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2C9C5C43457 for ; Fri, 9 Oct 2020 20:07:46 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id D11FE22282 for ; Fri, 9 Oct 2020 20:07:45 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2391727AbgJIUHo (ORCPT ); Fri, 9 Oct 2020 16:07:44 -0400 Received: from mga09.intel.com ([134.134.136.24]:28486 "EHLO mga09.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2403855AbgJITve (ORCPT ); Fri, 9 Oct 2020 15:51:34 -0400 IronPort-SDR: xxLTFcykYmc5KgZD1HbS1g7i4OYmICVIRRif8W7Jpite0DKStiZN2ggNBumfLr6hCgxjqqigjG UdsgKzTVcxow== X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="165642918" X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="165642918" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga007.fm.intel.com ([10.253.24.52]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:51:29 -0700 IronPort-SDR: 0+lGONiHFSeyE/0kQGUe/v/Z0vqstKsQGCedA4TvP63fkNzDWGjPYFj9hsR1yECUQTFrwG697Z McTBAsxZaLwQ== X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="298419323" Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147]) by fmsmga007-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 
12:51:28 -0700 From: ira.weiny@intel.com To: Andrew Morton , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Andy Lutomirski , Peter Zijlstra Cc: Ira Weiny , David Howells , x86@kernel.org, Dave Hansen , Dan Williams , Fenghua Yu , linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kselftest@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, kvm@vger.kernel.org, netdev@vger.kernel.org, bpf@vger.kernel.org, kexec@lists.infradead.org, linux-bcache@vger.kernel.org, linux-mtd@lists.infradead.org, devel@driverdev.osuosl.org, linux-efi@vger.kernel.org, linux-mmc@vger.kernel.org, linux-scsi@vger.kernel.org, target-devel@vger.kernel.org, linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, linux-ext4@vger.kernel.org, linux-aio@kvack.org, io-uring@vger.kernel.org, linux-erofs@lists.ozlabs.org, linux-um@lists.infradead.org, linux-ntfs-dev@lists.sourceforge.net, reiserfs-devel@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net, linux-nilfs@vger.kernel.org, cluster-devel@redhat.com, ecryptfs@vger.kernel.org, linux-cifs@vger.kernel.org, linux-btrfs@vger.kernel.org, linux-afs@lists.infradead.org, linux-rdma@vger.kernel.org, amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, drbd-dev@lists.linbit.com, linux-block@vger.kernel.org, xen-devel@lists.xenproject.org, linux-cachefs@redhat.com, samba-technical@lists.samba.org, intel-wired-lan@lists.osuosl.org Subject: [PATCH RFC PKS/PMEM 12/58] fs/afs: Utilize new kmap_thread() Date: Fri, 9 Oct 2020 12:49:47 -0700 Message-Id: <20201009195033.3208459-13-ira.weiny@intel.com> X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9 In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com> References: <20201009195033.3208459-1-ira.weiny@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org From: Ira Weiny The kmap() calls in this FS are localized to a single thread. To avoid the over head of global PKRS updates use the new kmap_thread() call. 
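[ Illustrative sketch, not part of this patch: every conversion in this
  series has the same shape; addr, buffer and len below are placeholder
  names.

	/* Before: kmap() creates a mapping usable by any thread, so a
	 * PKS-protected page would require a global PKRS update. */
	addr = kmap(page);
	memcpy(buffer, addr, len);
	kunmap(page);

	/* After: kmap_thread() touches only the calling thread's PKRS,
	 * which is safe because the mapping never leaves this thread. */
	addr = kmap_thread(page);
	memcpy(buffer, addr, len);
	kunmap_thread(page);
]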
Cc: David Howells Signed-off-by: Ira Weiny --- fs/afs/dir.c | 16 ++++++++-------- fs/afs/dir_edit.c | 16 ++++++++-------- fs/afs/mntpt.c | 4 ++-- fs/afs/write.c | 4 ++-- 4 files changed, 20 insertions(+), 20 deletions(-) diff --git a/fs/afs/dir.c b/fs/afs/dir.c index 1d2e61e0ab04..5d01cdb590de 100644 --- a/fs/afs/dir.c +++ b/fs/afs/dir.c @@ -127,14 +127,14 @@ static bool afs_dir_check_page(struct afs_vnode *dvnode, struct page *page, qty /= sizeof(union afs_xdr_dir_block); /* check them */ - dbuf = kmap(page); + dbuf = kmap_thread(page); for (tmp = 0; tmp < qty; tmp++) { if (dbuf->blocks[tmp].hdr.magic != AFS_DIR_MAGIC) { printk("kAFS: %s(%lx): bad magic %d/%d is %04hx\n", __func__, dvnode->vfs_inode.i_ino, tmp, qty, ntohs(dbuf->blocks[tmp].hdr.magic)); trace_afs_dir_check_failed(dvnode, off, i_size); - kunmap(page); + kunmap_thread(page); trace_afs_file_error(dvnode, -EIO, afs_file_error_dir_bad_magic); goto error; } @@ -146,7 +146,7 @@ static bool afs_dir_check_page(struct afs_vnode *dvnode, struct page *page, ((u8 *)&dbuf->blocks[tmp])[AFS_DIR_BLOCK_SIZE - 1] = 0; } - kunmap(page); + kunmap_thread(page); checked: afs_stat_v(dvnode, n_read_dir); @@ -177,13 +177,13 @@ static bool afs_dir_check_pages(struct afs_vnode *dvnode, struct afs_read *req) req->pos, req->index, req->nr_pages, req->offset); for (i = 0; i < req->nr_pages; i++) { - dbuf = kmap(req->pages[i]); + dbuf = kmap_thread(req->pages[i]); for (j = 0; j < qty; j++) { union afs_xdr_dir_block *block = &dbuf->blocks[j]; pr_warn("[%02x] %32phN\n", i * qty + j, block); } - kunmap(req->pages[i]); + kunmap_thread(req->pages[i]); } return false; } @@ -481,7 +481,7 @@ static int afs_dir_iterate(struct inode *dir, struct dir_context *ctx, limit = blkoff & ~(PAGE_SIZE - 1); - dbuf = kmap(page); + dbuf = kmap_thread(page); /* deal with the individual blocks stashed on this page */ do { @@ -489,7 +489,7 @@ static int afs_dir_iterate(struct inode *dir, struct dir_context *ctx, sizeof(union afs_xdr_dir_block)]; ret = afs_dir_iterate_block(dvnode, ctx, dblock, blkoff); if (ret != 1) { - kunmap(page); + kunmap_thread(page); goto out; } @@ -497,7 +497,7 @@ static int afs_dir_iterate(struct inode *dir, struct dir_context *ctx, } while (ctx->pos < dir->i_size && blkoff < limit); - kunmap(page); + kunmap_thread(page); ret = 0; } diff --git a/fs/afs/dir_edit.c b/fs/afs/dir_edit.c index b108528bf010..35ed6828e205 100644 --- a/fs/afs/dir_edit.c +++ b/fs/afs/dir_edit.c @@ -218,7 +218,7 @@ void afs_edit_dir_add(struct afs_vnode *vnode, need_slots = round_up(12 + name->len + 1 + 4, AFS_DIR_DIRENT_SIZE); need_slots /= AFS_DIR_DIRENT_SIZE; - meta_page = kmap(page0); + meta_page = kmap_thread(page0); meta = &meta_page->blocks[0]; if (i_size == 0) goto new_directory; @@ -247,7 +247,7 @@ void afs_edit_dir_add(struct afs_vnode *vnode, set_page_private(page, 1); SetPagePrivate(page); } - dir_page = kmap(page); + dir_page = kmap_thread(page); } /* Abandon the edit if we got a callback break. 
*/ @@ -284,7 +284,7 @@ void afs_edit_dir_add(struct afs_vnode *vnode, if (page != page0) { unlock_page(page); - kunmap(page); + kunmap_thread(page); put_page(page); } } @@ -323,7 +323,7 @@ void afs_edit_dir_add(struct afs_vnode *vnode, afs_set_contig_bits(block, slot, need_slots); if (page != page0) { unlock_page(page); - kunmap(page); + kunmap_thread(page); put_page(page); } @@ -337,7 +337,7 @@ void afs_edit_dir_add(struct afs_vnode *vnode, out_unmap: unlock_page(page0); - kunmap(page0); + kunmap_thread(page0); put_page(page0); _leave(""); return; @@ -346,7 +346,7 @@ void afs_edit_dir_add(struct afs_vnode *vnode, trace_afs_edit_dir(vnode, why, afs_edit_dir_create_inval, 0, 0, 0, 0, name->name); clear_bit(AFS_VNODE_DIR_VALID, &vnode->flags); if (page != page0) { - kunmap(page); + kunmap_thread(page); put_page(page); } goto out_unmap; @@ -398,7 +398,7 @@ void afs_edit_dir_remove(struct afs_vnode *vnode, need_slots = round_up(12 + name->len + 1 + 4, AFS_DIR_DIRENT_SIZE); need_slots /= AFS_DIR_DIRENT_SIZE; - meta_page = kmap(page0); + meta_page = kmap_thread(page0); meta = &meta_page->blocks[0]; /* Find a page that has sufficient slots available. Each VM page @@ -410,7 +410,7 @@ void afs_edit_dir_remove(struct afs_vnode *vnode, page = find_lock_page(vnode->vfs_inode.i_mapping, index); if (!page) goto error; - dir_page = kmap(page); + dir_page = kmap_thread(page); } else { page = page0; dir_page = meta_page;

diff --git a/fs/afs/mntpt.c b/fs/afs/mntpt.c index 79bc5f1338ed..562454e2fd5c 100644 --- a/fs/afs/mntpt.c +++ b/fs/afs/mntpt.c @@ -139,11 +139,11 @@ static int afs_mntpt_set_params(struct fs_context *fc, struct dentry *mntpt) return ret; } - buf = kmap(page); + buf = kmap_thread(page); ret = -EINVAL; if (buf[size - 1] == '.') ret = vfs_parse_fs_string(fc, "source", buf, size - 1); - kunmap(page); + kunmap_thread(page); put_page(page); if (ret < 0) return ret;

diff --git a/fs/afs/write.c b/fs/afs/write.c index 4b2265cb1891..c56e5b4db4ae 100644 --- a/fs/afs/write.c +++ b/fs/afs/write.c @@ -38,9 +38,9 @@ static int afs_fill_page(struct afs_vnode *vnode, struct key *key, if (pos >= vnode->vfs_inode.i_size) { p = pos & ~PAGE_MASK; ASSERTCMP(p + len, <=, PAGE_SIZE); - data = kmap(page); + data = kmap_thread(page); memset(data + p, 0, len); - kunmap(page); + kunmap_thread(page); return 0; }

From patchwork Fri Oct 9 19:49:48 2020
X-Patchwork-Id: 287188
From: ira.weiny@intel.com
Subject: [PATCH RFC PKS/PMEM 13/58] fs/btrfs: Utilize new kmap_thread()
Date: Fri, 9 Oct 2020 12:49:48 -0700
Message-Id: <20201009195033.3208459-14-ira.weiny@intel.com>
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>

From: Ira Weiny

The kmap() calls in this FS are localized to a single thread. To avoid the overhead of global PKRS updates, use the new kmap_thread() call.
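[ Illustrative sketch, not part of this series: many of the btrfs hunks
  below are a map/copy/unmap triple, which a hypothetical helper could
  capture in one place.

	/* Copy out of a page through a short-lived thread-local mapping. */
	static inline void copy_from_page_thread(void *to, struct page *page,
						 size_t offset, size_t len)
	{
		char *from = kmap_thread(page);

		memcpy(to, from + offset, len);
		kunmap_thread(page);
	}
]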
Cc: Chris Mason Cc: Josef Bacik Cc: David Sterba Signed-off-by: Ira Weiny --- fs/btrfs/check-integrity.c | 4 ++-- fs/btrfs/compression.c | 4 ++-- fs/btrfs/inode.c | 16 ++++++++-------- fs/btrfs/lzo.c | 24 ++++++++++++------------ fs/btrfs/raid56.c | 34 +++++++++++++++++----------------- fs/btrfs/reflink.c | 8 ++++---- fs/btrfs/send.c | 4 ++-- fs/btrfs/zlib.c | 32 ++++++++++++++++---------------- fs/btrfs/zstd.c | 20 ++++++++++---------- 9 files changed, 73 insertions(+), 73 deletions(-) diff --git a/fs/btrfs/check-integrity.c b/fs/btrfs/check-integrity.c index 81a8c87a5afb..9e5a02512ab5 100644 --- a/fs/btrfs/check-integrity.c +++ b/fs/btrfs/check-integrity.c @@ -2706,7 +2706,7 @@ static void __btrfsic_submit_bio(struct bio *bio) bio_for_each_segment(bvec, bio, iter) { BUG_ON(bvec.bv_len != PAGE_SIZE); - mapped_datav[i] = kmap(bvec.bv_page); + mapped_datav[i] = kmap_thread(bvec.bv_page); i++; if (dev_state->state->print_mask & @@ -2720,7 +2720,7 @@ static void __btrfsic_submit_bio(struct bio *bio) bio, &bio_is_patched, bio->bi_opf); bio_for_each_segment(bvec, bio, iter) - kunmap(bvec.bv_page); + kunmap_thread(bvec.bv_page); kfree(mapped_datav); } else if (NULL != dev_state && (bio->bi_opf & REQ_PREFLUSH)) { if (dev_state->state->print_mask & diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c index 1ab56a734e70..5944fb36d68a 100644 --- a/fs/btrfs/compression.c +++ b/fs/btrfs/compression.c @@ -1626,7 +1626,7 @@ static void heuristic_collect_sample(struct inode *inode, u64 start, u64 end, curr_sample_pos = 0; while (index < index_end) { page = find_get_page(inode->i_mapping, index); - in_data = kmap(page); + in_data = kmap_thread(page); /* Handle case where the start is not aligned to PAGE_SIZE */ i = start % PAGE_SIZE; while (i < PAGE_SIZE - SAMPLING_READ_SIZE) { @@ -1639,7 +1639,7 @@ static void heuristic_collect_sample(struct inode *inode, u64 start, u64 end, start += SAMPLING_INTERVAL; curr_sample_pos += SAMPLING_READ_SIZE; } - kunmap(page); + kunmap_thread(page); put_page(page); index++; diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c index 9570458aa847..9710a52c6c42 100644 --- a/fs/btrfs/inode.c +++ b/fs/btrfs/inode.c @@ -4603,7 +4603,7 @@ int btrfs_truncate_block(struct inode *inode, loff_t from, loff_t len, if (offset != blocksize) { if (!len) len = blocksize - offset; - kaddr = kmap(page); + kaddr = kmap_thread(page); if (front) memset(kaddr + (block_start - page_offset(page)), 0, offset); @@ -4611,7 +4611,7 @@ int btrfs_truncate_block(struct inode *inode, loff_t from, loff_t len, memset(kaddr + (block_start - page_offset(page)) + offset, 0, len); flush_dcache_page(page); - kunmap(page); + kunmap_thread(page); } ClearPageChecked(page); set_page_dirty(page); @@ -6509,9 +6509,9 @@ static noinline int uncompress_inline(struct btrfs_path *path, */ if (max_size + pg_offset < PAGE_SIZE) { - char *map = kmap(page); + char *map = kmap_thread(page); memset(map + pg_offset + max_size, 0, PAGE_SIZE - max_size - pg_offset); - kunmap(page); + kunmap_thread(page); } kfree(tmp); return ret; @@ -6704,7 +6704,7 @@ struct extent_map *btrfs_get_extent(struct btrfs_inode *inode, goto out; } } else { - map = kmap(page); + map = kmap_thread(page); read_extent_buffer(leaf, map + pg_offset, ptr, copy_size); if (pg_offset + copy_size < PAGE_SIZE) { @@ -6712,7 +6712,7 @@ struct extent_map *btrfs_get_extent(struct btrfs_inode *inode, PAGE_SIZE - pg_offset - copy_size); } - kunmap(page); + kunmap_thread(page); } flush_dcache_page(page); } @@ -8326,10 +8326,10 @@ vm_fault_t btrfs_page_mkwrite(struct 
vm_fault *vmf) zero_start = PAGE_SIZE; if (zero_start != PAGE_SIZE) { - kaddr = kmap(page); + kaddr = kmap_thread(page); memset(kaddr + zero_start, 0, PAGE_SIZE - zero_start); flush_dcache_page(page); - kunmap(page); + kunmap_thread(page); } ClearPageChecked(page); set_page_dirty(page); diff --git a/fs/btrfs/lzo.c b/fs/btrfs/lzo.c index aa9cd11f4b78..f29dcc9ec573 100644 --- a/fs/btrfs/lzo.c +++ b/fs/btrfs/lzo.c @@ -140,7 +140,7 @@ int lzo_compress_pages(struct list_head *ws, struct address_space *mapping, *total_in = 0; in_page = find_get_page(mapping, start >> PAGE_SHIFT); - data_in = kmap(in_page); + data_in = kmap_thread(in_page); /* * store the size of all chunks of compressed data in @@ -151,7 +151,7 @@ int lzo_compress_pages(struct list_head *ws, struct address_space *mapping, ret = -ENOMEM; goto out; } - cpage_out = kmap(out_page); + cpage_out = kmap_thread(out_page); out_offset = LZO_LEN; tot_out = LZO_LEN; pages[0] = out_page; @@ -209,7 +209,7 @@ int lzo_compress_pages(struct list_head *ws, struct address_space *mapping, if (out_len == 0 && tot_in >= len) break; - kunmap(out_page); + kunmap_thread(out_page); if (nr_pages == nr_dest_pages) { out_page = NULL; ret = -E2BIG; @@ -221,7 +221,7 @@ int lzo_compress_pages(struct list_head *ws, struct address_space *mapping, ret = -ENOMEM; goto out; } - cpage_out = kmap(out_page); + cpage_out = kmap_thread(out_page); pages[nr_pages++] = out_page; pg_bytes_left = PAGE_SIZE; @@ -243,12 +243,12 @@ int lzo_compress_pages(struct list_head *ws, struct address_space *mapping, break; bytes_left = len - tot_in; - kunmap(in_page); + kunmap_thread(in_page); put_page(in_page); start += PAGE_SIZE; in_page = find_get_page(mapping, start >> PAGE_SHIFT); - data_in = kmap(in_page); + data_in = kmap_thread(in_page); in_len = min(bytes_left, PAGE_SIZE); } @@ -258,10 +258,10 @@ int lzo_compress_pages(struct list_head *ws, struct address_space *mapping, } /* store the size of all chunks of compressed data */ - cpage_out = kmap(pages[0]); + cpage_out = kmap_thread(pages[0]); write_compress_length(cpage_out, tot_out); - kunmap(pages[0]); + kunmap_thread(pages[0]); ret = 0; *total_out = tot_out; @@ -269,10 +269,10 @@ int lzo_compress_pages(struct list_head *ws, struct address_space *mapping, out: *out_pages = nr_pages; if (out_page) - kunmap(out_page); + kunmap_thread(out_page); if (in_page) { - kunmap(in_page); + kunmap_thread(in_page); put_page(in_page); } @@ -305,7 +305,7 @@ int lzo_decompress_bio(struct list_head *ws, struct compressed_bio *cb) u64 disk_start = cb->start; struct bio *orig_bio = cb->orig_bio; - data_in = kmap(pages_in[0]); + data_in = kmap_thread(pages_in[0]); tot_len = read_compress_length(data_in); /* * Compressed data header check. 
@@ -387,7 +387,7 @@ int lzo_decompress_bio(struct list_head *ws, struct compressed_bio *cb) else kunmap(pages_in[page_in_index]); - data_in = kmap(pages_in[++page_in_index]); + data_in = kmap_thread(pages_in[++page_in_index]); in_page_bytes_left = PAGE_SIZE; in_offset = 0; diff --git a/fs/btrfs/raid56.c b/fs/btrfs/raid56.c index 255490f42b5d..34e646e4548c 100644 --- a/fs/btrfs/raid56.c +++ b/fs/btrfs/raid56.c @@ -262,13 +262,13 @@ static void cache_rbio_pages(struct btrfs_raid_bio *rbio) if (!rbio->bio_pages[i]) continue; - s = kmap(rbio->bio_pages[i]); - d = kmap(rbio->stripe_pages[i]); + s = kmap_thread(rbio->bio_pages[i]); + d = kmap_thread(rbio->stripe_pages[i]); copy_page(d, s); - kunmap(rbio->bio_pages[i]); - kunmap(rbio->stripe_pages[i]); + kunmap_thread(rbio->bio_pages[i]); + kunmap_thread(rbio->stripe_pages[i]); SetPageUptodate(rbio->stripe_pages[i]); } set_bit(RBIO_CACHE_READY_BIT, &rbio->flags); @@ -1241,13 +1241,13 @@ static noinline void finish_rmw(struct btrfs_raid_bio *rbio) /* first collect one page from each data stripe */ for (stripe = 0; stripe < nr_data; stripe++) { p = page_in_rbio(rbio, stripe, pagenr, 0); - pointers[stripe] = kmap(p); + pointers[stripe] = kmap_thread(p); } /* then add the parity stripe */ p = rbio_pstripe_page(rbio, pagenr); SetPageUptodate(p); - pointers[stripe++] = kmap(p); + pointers[stripe++] = kmap_thread(p); if (has_qstripe) { @@ -1257,7 +1257,7 @@ static noinline void finish_rmw(struct btrfs_raid_bio *rbio) */ p = rbio_qstripe_page(rbio, pagenr); SetPageUptodate(p); - pointers[stripe++] = kmap(p); + pointers[stripe++] = kmap_thread(p); raid6_call.gen_syndrome(rbio->real_stripes, PAGE_SIZE, pointers); @@ -1269,7 +1269,7 @@ static noinline void finish_rmw(struct btrfs_raid_bio *rbio) for (stripe = 0; stripe < rbio->real_stripes; stripe++) - kunmap(page_in_rbio(rbio, stripe, pagenr, 0)); + kunmap_thread(page_in_rbio(rbio, stripe, pagenr, 0)); } /* @@ -1835,7 +1835,7 @@ static void __raid_recover_end_io(struct btrfs_raid_bio *rbio) } else { page = rbio_stripe_page(rbio, stripe, pagenr); } - pointers[stripe] = kmap(page); + pointers[stripe] = kmap_thread(page); } /* all raid6 handling here */ @@ -1940,7 +1940,7 @@ static void __raid_recover_end_io(struct btrfs_raid_bio *rbio) } else { page = rbio_stripe_page(rbio, stripe, pagenr); } - kunmap(page); + kunmap_thread(page); } } @@ -2379,18 +2379,18 @@ static noinline void finish_parity_scrub(struct btrfs_raid_bio *rbio, /* first collect one page from each data stripe */ for (stripe = 0; stripe < nr_data; stripe++) { p = page_in_rbio(rbio, stripe, pagenr, 0); - pointers[stripe] = kmap(p); + pointers[stripe] = kmap_thread(p); } /* then add the parity stripe */ - pointers[stripe++] = kmap(p_page); + pointers[stripe++] = kmap_thread(p_page); if (has_qstripe) { /* * raid6, add the qstripe and call the * library function to fill in our p/q */ - pointers[stripe++] = kmap(q_page); + pointers[stripe++] = kmap_thread(q_page); raid6_call.gen_syndrome(rbio->real_stripes, PAGE_SIZE, pointers); @@ -2402,17 +2402,17 @@ static noinline void finish_parity_scrub(struct btrfs_raid_bio *rbio, /* Check scrubbing parity and repair it */ p = rbio_stripe_page(rbio, rbio->scrubp, pagenr); - parity = kmap(p); + parity = kmap_thread(p); if (memcmp(parity, pointers[rbio->scrubp], PAGE_SIZE)) copy_page(parity, pointers[rbio->scrubp]); else /* Parity is right, needn't writeback */ bitmap_clear(rbio->dbitmap, pagenr, 1); - kunmap(p); + kunmap_thread(p); for (stripe = 0; stripe < nr_data; stripe++) - kunmap(page_in_rbio(rbio, 
stripe, pagenr, 0)); - kunmap(p_page); + kunmap_thread(page_in_rbio(rbio, stripe, pagenr, 0)); + kunmap_thread(p_page); } __free_page(p_page); diff --git a/fs/btrfs/reflink.c b/fs/btrfs/reflink.c index 5cd02514cf4d..10e53d7eba8c 100644 --- a/fs/btrfs/reflink.c +++ b/fs/btrfs/reflink.c @@ -92,10 +92,10 @@ static int copy_inline_to_page(struct inode *inode, if (comp_type == BTRFS_COMPRESS_NONE) { char *map; - map = kmap(page); + map = kmap_thread(page); memcpy(map, data_start, datal); flush_dcache_page(page); - kunmap(page); + kunmap_thread(page); } else { ret = btrfs_decompress(comp_type, data_start, page, 0, inline_size, datal); @@ -119,10 +119,10 @@ static int copy_inline_to_page(struct inode *inode, if (datal < block_size) { char *map; - map = kmap(page); + map = kmap_thread(page); memset(map + datal, 0, block_size - datal); flush_dcache_page(page); - kunmap(page); + kunmap_thread(page); } SetPageUptodate(page); diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c index d9813a5b075a..06c383d3dc43 100644 --- a/fs/btrfs/send.c +++ b/fs/btrfs/send.c @@ -4863,9 +4863,9 @@ static ssize_t fill_read_buf(struct send_ctx *sctx, u64 offset, u32 len) } } - addr = kmap(page); + addr = kmap_thread(page); memcpy(sctx->read_buf + ret, addr + pg_offset, cur_len); - kunmap(page); + kunmap_thread(page); unlock_page(page); put_page(page); index++; diff --git a/fs/btrfs/zlib.c b/fs/btrfs/zlib.c index 05615a1099db..45b7a907bab3 100644 --- a/fs/btrfs/zlib.c +++ b/fs/btrfs/zlib.c @@ -126,7 +126,7 @@ int zlib_compress_pages(struct list_head *ws, struct address_space *mapping, ret = -ENOMEM; goto out; } - cpage_out = kmap(out_page); + cpage_out = kmap_thread(out_page); pages[0] = out_page; nr_pages = 1; @@ -149,12 +149,12 @@ int zlib_compress_pages(struct list_head *ws, struct address_space *mapping, for (i = 0; i < in_buf_pages; i++) { if (in_page) { - kunmap(in_page); + kunmap_thread(in_page); put_page(in_page); } in_page = find_get_page(mapping, start >> PAGE_SHIFT); - data_in = kmap(in_page); + data_in = kmap_thread(in_page); memcpy(workspace->buf + i * PAGE_SIZE, data_in, PAGE_SIZE); start += PAGE_SIZE; @@ -162,12 +162,12 @@ int zlib_compress_pages(struct list_head *ws, struct address_space *mapping, workspace->strm.next_in = workspace->buf; } else { if (in_page) { - kunmap(in_page); + kunmap_thread(in_page); put_page(in_page); } in_page = find_get_page(mapping, start >> PAGE_SHIFT); - data_in = kmap(in_page); + data_in = kmap_thread(in_page); start += PAGE_SIZE; workspace->strm.next_in = data_in; } @@ -196,7 +196,7 @@ int zlib_compress_pages(struct list_head *ws, struct address_space *mapping, * the stream end if required */ if (workspace->strm.avail_out == 0) { - kunmap(out_page); + kunmap_thread(out_page); if (nr_pages == nr_dest_pages) { out_page = NULL; ret = -E2BIG; @@ -207,7 +207,7 @@ int zlib_compress_pages(struct list_head *ws, struct address_space *mapping, ret = -ENOMEM; goto out; } - cpage_out = kmap(out_page); + cpage_out = kmap_thread(out_page); pages[nr_pages] = out_page; nr_pages++; workspace->strm.avail_out = PAGE_SIZE; @@ -234,7 +234,7 @@ int zlib_compress_pages(struct list_head *ws, struct address_space *mapping, goto out; } else if (workspace->strm.avail_out == 0) { /* get another page for the stream end */ - kunmap(out_page); + kunmap_thread(out_page); if (nr_pages == nr_dest_pages) { out_page = NULL; ret = -E2BIG; @@ -245,7 +245,7 @@ int zlib_compress_pages(struct list_head *ws, struct address_space *mapping, ret = -ENOMEM; goto out; } - cpage_out = kmap(out_page); + cpage_out = 
kmap_thread(out_page); pages[nr_pages] = out_page; nr_pages++; workspace->strm.avail_out = PAGE_SIZE; @@ -265,10 +265,10 @@ int zlib_compress_pages(struct list_head *ws, struct address_space *mapping, out: *out_pages = nr_pages; if (out_page) - kunmap(out_page); + kunmap_thread(out_page); if (in_page) { - kunmap(in_page); + kunmap_thread(in_page); put_page(in_page); } return ret; @@ -289,7 +289,7 @@ int zlib_decompress_bio(struct list_head *ws, struct compressed_bio *cb) u64 disk_start = cb->start; struct bio *orig_bio = cb->orig_bio; - data_in = kmap(pages_in[page_in_index]); + data_in = kmap_thread(pages_in[page_in_index]); workspace->strm.next_in = data_in; workspace->strm.avail_in = min_t(size_t, srclen, PAGE_SIZE); workspace->strm.total_in = 0; @@ -311,7 +311,7 @@ int zlib_decompress_bio(struct list_head *ws, struct compressed_bio *cb) if (Z_OK != zlib_inflateInit2(&workspace->strm, wbits)) { pr_warn("BTRFS: inflateInit failed\n"); - kunmap(pages_in[page_in_index]); + kunmap_thread(pages_in[page_in_index]); return -EIO; } while (workspace->strm.total_in < srclen) { @@ -339,13 +339,13 @@ int zlib_decompress_bio(struct list_head *ws, struct compressed_bio *cb) if (workspace->strm.avail_in == 0) { unsigned long tmp; - kunmap(pages_in[page_in_index]); + kunmap_thread(pages_in[page_in_index]); page_in_index++; if (page_in_index >= total_pages_in) { data_in = NULL; break; } - data_in = kmap(pages_in[page_in_index]); + data_in = kmap_thread(pages_in[page_in_index]); workspace->strm.next_in = data_in; tmp = srclen - workspace->strm.total_in; workspace->strm.avail_in = min(tmp, @@ -359,7 +359,7 @@ int zlib_decompress_bio(struct list_head *ws, struct compressed_bio *cb) done: zlib_inflateEnd(&workspace->strm); if (data_in) - kunmap(pages_in[page_in_index]); + kunmap_thread(pages_in[page_in_index]); if (!ret) zero_fill_bio(orig_bio); return ret; diff --git a/fs/btrfs/zstd.c b/fs/btrfs/zstd.c index 9a4871636c6c..48e03f6dcef7 100644 --- a/fs/btrfs/zstd.c +++ b/fs/btrfs/zstd.c @@ -399,7 +399,7 @@ int zstd_compress_pages(struct list_head *ws, struct address_space *mapping, /* map in the first page of input data */ in_page = find_get_page(mapping, start >> PAGE_SHIFT); - workspace->in_buf.src = kmap(in_page); + workspace->in_buf.src = kmap_thread(in_page); workspace->in_buf.pos = 0; workspace->in_buf.size = min_t(size_t, len, PAGE_SIZE); @@ -411,7 +411,7 @@ int zstd_compress_pages(struct list_head *ws, struct address_space *mapping, goto out; } pages[nr_pages++] = out_page; - workspace->out_buf.dst = kmap(out_page); + workspace->out_buf.dst = kmap_thread(out_page); workspace->out_buf.pos = 0; workspace->out_buf.size = min_t(size_t, max_out, PAGE_SIZE); @@ -446,7 +446,7 @@ int zstd_compress_pages(struct list_head *ws, struct address_space *mapping, if (workspace->out_buf.pos == workspace->out_buf.size) { tot_out += PAGE_SIZE; max_out -= PAGE_SIZE; - kunmap(out_page); + kunmap_thread(out_page); if (nr_pages == nr_dest_pages) { out_page = NULL; ret = -E2BIG; @@ -458,7 +458,7 @@ int zstd_compress_pages(struct list_head *ws, struct address_space *mapping, goto out; } pages[nr_pages++] = out_page; - workspace->out_buf.dst = kmap(out_page); + workspace->out_buf.dst = kmap_thread(out_page); workspace->out_buf.pos = 0; workspace->out_buf.size = min_t(size_t, max_out, PAGE_SIZE); @@ -479,7 +479,7 @@ int zstd_compress_pages(struct list_head *ws, struct address_space *mapping, start += PAGE_SIZE; len -= PAGE_SIZE; in_page = find_get_page(mapping, start >> PAGE_SHIFT); - workspace->in_buf.src = kmap(in_page); + 
workspace->in_buf.src = kmap_thread(in_page); workspace->in_buf.pos = 0; workspace->in_buf.size = min_t(size_t, len, PAGE_SIZE); } @@ -518,7 +518,7 @@ int zstd_compress_pages(struct list_head *ws, struct address_space *mapping, goto out; } pages[nr_pages++] = out_page; - workspace->out_buf.dst = kmap(out_page); + workspace->out_buf.dst = kmap_thread(out_page); workspace->out_buf.pos = 0; workspace->out_buf.size = min_t(size_t, max_out, PAGE_SIZE); } @@ -565,7 +565,7 @@ int zstd_decompress_bio(struct list_head *ws, struct compressed_bio *cb) goto done; } - workspace->in_buf.src = kmap(pages_in[page_in_index]); + workspace->in_buf.src = kmap_thread(pages_in[page_in_index]); workspace->in_buf.pos = 0; workspace->in_buf.size = min_t(size_t, srclen, PAGE_SIZE); @@ -601,14 +601,14 @@ int zstd_decompress_bio(struct list_head *ws, struct compressed_bio *cb) break; if (workspace->in_buf.pos == workspace->in_buf.size) { - kunmap(pages_in[page_in_index++]); + kunmap_thread(pages_in[page_in_index++]); if (page_in_index >= total_pages_in) { workspace->in_buf.src = NULL; ret = -EIO; goto done; } srclen -= PAGE_SIZE; - workspace->in_buf.src = kmap(pages_in[page_in_index]); + workspace->in_buf.src = kmap_thread(pages_in[page_in_index]); workspace->in_buf.pos = 0; workspace->in_buf.size = min_t(size_t, srclen, PAGE_SIZE); } @@ -617,7 +617,7 @@ int zstd_decompress_bio(struct list_head *ws, struct compressed_bio *cb) zero_fill_bio(orig_bio); done: if (workspace->in_buf.src) - kunmap(pages_in[page_in_index]); + kunmap_thread(pages_in[page_in_index]); return ret; }

From patchwork Fri Oct 9 19:49:51 2020
X-Patchwork-Id: 287211
From: ira.weiny@intel.com
Subject: [PATCH RFC PKS/PMEM 16/58] fs/gfs2: Utilize new kmap_thread()
Date: Fri, 9 Oct 2020 12:49:51 -0700
Message-Id: <20201009195033.3208459-17-ira.weiny@intel.com>
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>

From: Ira Weiny

The kmap() calls in this FS are localized to a single thread. To avoid the overhead of global PKRS updates, use the new kmap_thread() call.
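[ Illustrative sketch, not part of this patch: "localized to a single
  thread" is the safety requirement for these conversions.  A mapping
  that is handed to another context must stay on plain kmap();
  async_submit() here is a placeholder for any such hand-off.

	addr = kmap(page);		/* must remain kmap() */
	async_submit(work, addr);	/* another thread dereferences addr */
	/* ... kunmap(page) runs later, possibly on another CPU ... */
]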
Cc: Bob Peterson
Cc: Andreas Gruenbacher
Signed-off-by: Ira Weiny
---
 fs/gfs2/bmap.c | 4 ++--
 fs/gfs2/ops_fstype.c | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/fs/gfs2/bmap.c b/fs/gfs2/bmap.c index 0f69fbd4af66..375af4528411 100644 --- a/fs/gfs2/bmap.c +++ b/fs/gfs2/bmap.c @@ -67,7 +67,7 @@ static int gfs2_unstuffer_page(struct gfs2_inode *ip, struct buffer_head *dibh, } if (!PageUptodate(page)) { - void *kaddr = kmap(page); + void *kaddr = kmap_thread(page); u64 dsize = i_size_read(inode); if (dsize > gfs2_max_stuffed_size(ip)) @@ -75,7 +75,7 @@ static int gfs2_unstuffer_page(struct gfs2_inode *ip, struct buffer_head *dibh, memcpy(kaddr, dibh->b_data + sizeof(struct gfs2_dinode), dsize); memset(kaddr + dsize, 0, PAGE_SIZE - dsize); - kunmap(page); + kunmap_thread(page); SetPageUptodate(page); }

diff --git a/fs/gfs2/ops_fstype.c b/fs/gfs2/ops_fstype.c index 6d18d2c91add..a5d20d9b504a 100644 --- a/fs/gfs2/ops_fstype.c +++ b/fs/gfs2/ops_fstype.c @@ -263,9 +263,9 @@ static int gfs2_read_super(struct gfs2_sbd *sdp, sector_t sector, int silent) __free_page(page); return -EIO; } - p = kmap(page); + p = kmap_thread(page); gfs2_sb_in(sdp, p); - kunmap(page); + kunmap_thread(page); __free_page(page); return gfs2_check_sb(sdp, silent); }

From patchwork Fri Oct 9 19:49:52 2020
X-Patchwork-Id: 287210
From: ira.weiny@intel.com
Subject: [PATCH RFC PKS/PMEM 17/58] fs/nilfs2: Utilize new kmap_thread()
Date: Fri, 9 Oct 2020 12:49:52 -0700
Message-Id: <20201009195033.3208459-18-ira.weiny@intel.com>
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>

From: Ira Weiny

The kmap() calls in this FS are localized to a single thread. To avoid the overhead of global PKRS updates, use the new kmap_thread() call.
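[ Illustrative sketch, not part of this patch: as in the nilfs2 hunks
  below, kmap_thread() must be paired with kunmap_thread() on every
  exit, including error paths; entry_valid() is a placeholder check.

	kaddr = kmap_thread(bh->b_page);
	if (!entry_valid(kaddr)) {
		kunmap_thread(bh->b_page);	/* unwind on error */
		return -ENOENT;
	}
	/* ... use kaddr ... */
	kunmap_thread(bh->b_page);
	return 0;
]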
Cc: Ryusuke Konishi Signed-off-by: Ira Weiny --- fs/nilfs2/alloc.c | 34 +++++++++++++++++----------------- fs/nilfs2/cpfile.c | 4 ++-- 2 files changed, 19 insertions(+), 19 deletions(-) diff --git a/fs/nilfs2/alloc.c b/fs/nilfs2/alloc.c index adf3bb0a8048..2aa4c34094ef 100644 --- a/fs/nilfs2/alloc.c +++ b/fs/nilfs2/alloc.c @@ -524,7 +524,7 @@ int nilfs_palloc_prepare_alloc_entry(struct inode *inode, ret = nilfs_palloc_get_desc_block(inode, group, 1, &desc_bh); if (ret < 0) return ret; - desc_kaddr = kmap(desc_bh->b_page); + desc_kaddr = kmap_thread(desc_bh->b_page); desc = nilfs_palloc_block_get_group_desc( inode, group, desc_bh, desc_kaddr); n = nilfs_palloc_rest_groups_in_desc_block(inode, group, @@ -536,7 +536,7 @@ int nilfs_palloc_prepare_alloc_entry(struct inode *inode, inode, group, 1, &bitmap_bh); if (ret < 0) goto out_desc; - bitmap_kaddr = kmap(bitmap_bh->b_page); + bitmap_kaddr = kmap_thread(bitmap_bh->b_page); bitmap = bitmap_kaddr + bh_offset(bitmap_bh); pos = nilfs_palloc_find_available_slot( bitmap, group_offset, @@ -547,21 +547,21 @@ int nilfs_palloc_prepare_alloc_entry(struct inode *inode, desc, lock, -1); req->pr_entry_nr = entries_per_group * group + pos; - kunmap(desc_bh->b_page); - kunmap(bitmap_bh->b_page); + kunmap_thread(desc_bh->b_page); + kunmap_thread(bitmap_bh->b_page); req->pr_desc_bh = desc_bh; req->pr_bitmap_bh = bitmap_bh; return 0; } - kunmap(bitmap_bh->b_page); + kunmap_thread(bitmap_bh->b_page); brelse(bitmap_bh); } group_offset = 0; } - kunmap(desc_bh->b_page); + kunmap_thread(desc_bh->b_page); brelse(desc_bh); } @@ -569,7 +569,7 @@ int nilfs_palloc_prepare_alloc_entry(struct inode *inode, return -ENOSPC; out_desc: - kunmap(desc_bh->b_page); + kunmap_thread(desc_bh->b_page); brelse(desc_bh); return ret; } @@ -605,10 +605,10 @@ void nilfs_palloc_commit_free_entry(struct inode *inode, spinlock_t *lock; group = nilfs_palloc_group(inode, req->pr_entry_nr, &group_offset); - desc_kaddr = kmap(req->pr_desc_bh->b_page); + desc_kaddr = kmap_thread(req->pr_desc_bh->b_page); desc = nilfs_palloc_block_get_group_desc(inode, group, req->pr_desc_bh, desc_kaddr); - bitmap_kaddr = kmap(req->pr_bitmap_bh->b_page); + bitmap_kaddr = kmap_thread(req->pr_bitmap_bh->b_page); bitmap = bitmap_kaddr + bh_offset(req->pr_bitmap_bh); lock = nilfs_mdt_bgl_lock(inode, group); @@ -620,8 +620,8 @@ void nilfs_palloc_commit_free_entry(struct inode *inode, else nilfs_palloc_group_desc_add_entries(desc, lock, 1); - kunmap(req->pr_bitmap_bh->b_page); - kunmap(req->pr_desc_bh->b_page); + kunmap_thread(req->pr_bitmap_bh->b_page); + kunmap_thread(req->pr_desc_bh->b_page); mark_buffer_dirty(req->pr_desc_bh); mark_buffer_dirty(req->pr_bitmap_bh); @@ -646,10 +646,10 @@ void nilfs_palloc_abort_alloc_entry(struct inode *inode, spinlock_t *lock; group = nilfs_palloc_group(inode, req->pr_entry_nr, &group_offset); - desc_kaddr = kmap(req->pr_desc_bh->b_page); + desc_kaddr = kmap_thread(req->pr_desc_bh->b_page); desc = nilfs_palloc_block_get_group_desc(inode, group, req->pr_desc_bh, desc_kaddr); - bitmap_kaddr = kmap(req->pr_bitmap_bh->b_page); + bitmap_kaddr = kmap_thread(req->pr_bitmap_bh->b_page); bitmap = bitmap_kaddr + bh_offset(req->pr_bitmap_bh); lock = nilfs_mdt_bgl_lock(inode, group); @@ -661,8 +661,8 @@ void nilfs_palloc_abort_alloc_entry(struct inode *inode, else nilfs_palloc_group_desc_add_entries(desc, lock, 1); - kunmap(req->pr_bitmap_bh->b_page); - kunmap(req->pr_desc_bh->b_page); + kunmap_thread(req->pr_bitmap_bh->b_page); + kunmap_thread(req->pr_desc_bh->b_page); 
brelse(req->pr_bitmap_bh); brelse(req->pr_desc_bh); @@ -754,7 +754,7 @@ int nilfs_palloc_freev(struct inode *inode, __u64 *entry_nrs, size_t nitems) /* Get the first entry number of the group */ group_min_nr = (__u64)group * epg; - bitmap_kaddr = kmap(bitmap_bh->b_page); + bitmap_kaddr = kmap_thread(bitmap_bh->b_page); bitmap = bitmap_kaddr + bh_offset(bitmap_bh); lock = nilfs_mdt_bgl_lock(inode, group); @@ -800,7 +800,7 @@ int nilfs_palloc_freev(struct inode *inode, __u64 *entry_nrs, size_t nitems) entry_start = rounddown(group_offset, epb); } while (true); - kunmap(bitmap_bh->b_page); + kunmap_thread(bitmap_bh->b_page); mark_buffer_dirty(bitmap_bh); brelse(bitmap_bh);

diff --git a/fs/nilfs2/cpfile.c b/fs/nilfs2/cpfile.c index 86d4d850d130..402ab8bfce29 100644 --- a/fs/nilfs2/cpfile.c +++ b/fs/nilfs2/cpfile.c @@ -235,11 +235,11 @@ int nilfs_cpfile_get_checkpoint(struct inode *cpfile, ret = nilfs_cpfile_get_checkpoint_block(cpfile, cno, create, &cp_bh); if (ret < 0) goto out_header; - kaddr = kmap(cp_bh->b_page); + kaddr = kmap_thread(cp_bh->b_page); cp = nilfs_cpfile_block_get_checkpoint(cpfile, cno, cp_bh, kaddr); if (nilfs_checkpoint_invalid(cp)) { if (!create) { - kunmap(cp_bh->b_page); + kunmap_thread(cp_bh->b_page); brelse(cp_bh); ret = -ENOENT; goto out_header;

From patchwork Fri Oct 9 19:49:55 2020
X-Patchwork-Id: 287190
From: ira.weiny@intel.com
Subject: [PATCH RFC PKS/PMEM 20/58] fs/jffs2: Utilize new kmap_thread()
Date: Fri, 9 Oct 2020 12:49:55 -0700
Message-Id: <20201009195033.3208459-21-ira.weiny@intel.com>
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>

From: Ira Weiny

The kmap() calls in this FS are localized to a single thread. To avoid the overhead of global PKRS updates, use the new kmap_thread() call.

Cc: David Woodhouse
Cc: Richard Weinberger
Signed-off-by: Ira Weiny
---
 fs/jffs2/file.c | 4 ++--
 fs/jffs2/gc.c | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/fs/jffs2/file.c b/fs/jffs2/file.c index f8fb89b10227..3e6d54f9b011 100644 --- a/fs/jffs2/file.c +++ b/fs/jffs2/file.c @@ -88,7 +88,7 @@ static int jffs2_do_readpage_nolock (struct inode *inode, struct page *pg) BUG_ON(!PageLocked(pg)); - pg_buf = kmap(pg); + pg_buf = kmap_thread(pg); /* FIXME: Can kmap fail?
*/ ret = jffs2_read_inode_range(c, f, pg_buf, pg->index << PAGE_SHIFT, @@ -103,7 +103,7 @@ static int jffs2_do_readpage_nolock (struct inode *inode, struct page *pg) } flush_dcache_page(pg); - kunmap(pg); + kunmap_thread(pg); jffs2_dbg(2, "readpage finished\n"); return ret;

diff --git a/fs/jffs2/gc.c b/fs/jffs2/gc.c index 373b3b7c9f44..a7259783ab84 100644 --- a/fs/jffs2/gc.c +++ b/fs/jffs2/gc.c @@ -1335,7 +1335,7 @@ static int jffs2_garbage_collect_dnode(struct jffs2_sb_info *c, struct jffs2_era return PTR_ERR(page); } - pg_ptr = kmap(page); + pg_ptr = kmap_thread(page); mutex_lock(&f->sem); offset = start; @@ -1400,7 +1400,7 @@ static int jffs2_garbage_collect_dnode(struct jffs2_sb_info *c, struct jffs2_era } } - kunmap(page); + kunmap_thread(page); put_page(page); return ret; }

From patchwork Fri Oct 9 19:49:58 2020
X-Patchwork-Id: 287191
From: ira.weiny@intel.com
Subject: [PATCH RFC PKS/PMEM 23/58] fs/fuse: Utilize new kmap_thread()
Date: Fri, 9 Oct 2020 12:49:58 -0700
Message-Id: <20201009195033.3208459-24-ira.weiny@intel.com>
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>

From: Ira Weiny

The kmap() calls in this FS are localized to a single thread. To avoid the overhead of global PKRS updates, use the new kmap_thread() call.

Cc: Miklos Szeredi
Signed-off-by: Ira Weiny
---
 fs/fuse/readdir.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/fs/fuse/readdir.c b/fs/fuse/readdir.c index 90e3f01bd796..953ffe6f56e3 100644 --- a/fs/fuse/readdir.c +++ b/fs/fuse/readdir.c @@ -536,9 +536,9 @@ static int fuse_readdir_cached(struct file *file, struct dir_context *ctx) * Contents of the page are now protected against changing by holding * the page lock.
 	 */
-	addr = kmap(page);
+	addr = kmap_thread(page);
 	res = fuse_parse_cache(ff, addr, size, ctx);
-	kunmap(page);
+	kunmap_thread(page);
 	unlock_page(page);
 	put_page(page);
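
Taken together, the hunks above show the whole shape of the conversion: wherever a page mapping is created and torn down by the same thread, kmap()/kunmap() becomes kmap_thread()/kunmap_thread() and nothing else changes. A minimal sketch of the pattern, with the helper and parse function names invented for illustration:

#include <linux/highmem.h>	/* kmap_thread()/kunmap_thread() per this series */

/* Hypothetical caller: the mapping never escapes this thread. */
static int parse_one_page(struct page *page, size_t size)
{
	void *addr = kmap_thread(page);
	int res = parse_buffer(addr, size);	/* parse_buffer() is illustrative */

	kunmap_thread(page);	/* must be called from the mapping thread */
	return res;
}

The constraint this buys is the one each commit message states: the mapping is only guaranteed valid to the thread that created it, in exchange for skipping the global PKRS update.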

From patchwork Fri Oct 9 19:50:00 2020
X-Patchwork-Id: 287192
From: ira.weiny@intel.com
Subject: [PATCH RFC PKS/PMEM 25/58] fs/reiserfs: Utilize new kmap_thread()
Date: Fri, 9 Oct 2020 12:50:00 -0700
Message-Id: <20201009195033.3208459-26-ira.weiny@intel.com>
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>

From: Ira Weiny

The kmap() calls in this FS are localized to a single thread. To avoid
the overhead of global PKRS updates, use the new kmap_thread() call.

Cc: Jan Kara
Cc: "Theodore Ts'o"
Cc: Randy Dunlap
Cc: Alex Shi
Signed-off-by: Ira Weiny
---
 fs/reiserfs/journal.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/fs/reiserfs/journal.c b/fs/reiserfs/journal.c
index e98f99338f8f..be8f56261e8c 100644
--- a/fs/reiserfs/journal.c
+++ b/fs/reiserfs/journal.c
@@ -4194,11 +4194,11 @@ static int do_journal_end(struct reiserfs_transaction_handle *th, int flags)
 				  SB_ONDISK_JOURNAL_SIZE(sb)));
 		set_buffer_uptodate(tmp_bh);
 		page = cn->bh->b_page;
-		addr = kmap(page);
+		addr = kmap_thread(page);
 		memcpy(tmp_bh->b_data,
 		       addr + offset_in_page(cn->bh->b_data),
 		       cn->bh->b_size);
-		kunmap(page);
+		kunmap_thread(page);
 		mark_buffer_dirty(tmp_bh);
 		jindex++;
 		set_buffer_journal_dirty(cn->bh);

From patchwork Fri Oct 9 19:50:02 2020
X-Patchwork-Id: 287193
From: ira.weiny@intel.com
Subject: [PATCH RFC PKS/PMEM 27/58] fs/ubifs: Utilize new kmap_thread()
Date: Fri, 9 Oct 2020 12:50:02 -0700
Message-Id: <20201009195033.3208459-28-ira.weiny@intel.com>
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>

From: Ira Weiny

The kmap() calls in this FS are localized to a single thread. To avoid
the overhead of global PKRS updates, use the new kmap_thread() call.
Cc: Richard Weinberger
Signed-off-by: Ira Weiny
---
 fs/ubifs/file.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/fs/ubifs/file.c b/fs/ubifs/file.c
index b77d1637bbbc..a3537447a885 100644
--- a/fs/ubifs/file.c
+++ b/fs/ubifs/file.c
@@ -111,7 +111,7 @@ static int do_readpage(struct page *page)
 	ubifs_assert(c, !PageChecked(page));
 	ubifs_assert(c, !PagePrivate(page));
-	addr = kmap(page);
+	addr = kmap_thread(page);
 	block = page->index << UBIFS_BLOCKS_PER_PAGE_SHIFT;
 	beyond = (i_size + UBIFS_BLOCK_SIZE - 1) >> UBIFS_BLOCK_SHIFT;
@@ -174,7 +174,7 @@ static int do_readpage(struct page *page)
 	SetPageUptodate(page);
 	ClearPageError(page);
 	flush_dcache_page(page);
-	kunmap(page);
+	kunmap_thread(page);
 	return 0;
 error:
@@ -182,7 +182,7 @@ static int do_readpage(struct page *page)
 	ClearPageUptodate(page);
 	SetPageError(page);
 	flush_dcache_page(page);
-	kunmap(page);
+	kunmap_thread(page);
 	return err;
 }
@@ -616,7 +616,7 @@ static int populate_page(struct ubifs_info *c, struct page *page,
 	dbg_gen("ino %lu, pg %lu, i_size %lld, flags %#lx",
 		inode->i_ino, page->index, i_size, page->flags);
-	addr = zaddr = kmap(page);
+	addr = zaddr = kmap_thread(page);
 	end_index = (i_size - 1) >> PAGE_SHIFT;
 	if (!i_size || page->index > end_index) {
@@ -692,7 +692,7 @@ static int populate_page(struct ubifs_info *c, struct page *page,
 	SetPageUptodate(page);
 	ClearPageError(page);
 	flush_dcache_page(page);
-	kunmap(page);
+	kunmap_thread(page);
 	*n = nn;
 	return 0;
@@ -700,7 +700,7 @@ static int populate_page(struct ubifs_info *c, struct page *page,
 	ClearPageUptodate(page);
 	SetPageError(page);
 	flush_dcache_page(page);
-	kunmap(page);
+	kunmap_thread(page);
 	ubifs_err(c, "bad data node (block %u, inode %lu)",
 		  page_block, inode->i_ino);
 	return -EINVAL;
@@ -918,7 +918,7 @@ static int do_writepage(struct page *page, int len)
 	/* Update radix tree tags */
 	set_page_writeback(page);
-	addr = kmap(page);
+	addr = kmap_thread(page);
 	block = page->index << UBIFS_BLOCKS_PER_PAGE_SHIFT;
 	i = 0;
 	while (len) {
@@ -950,7 +950,7 @@ static int do_writepage(struct page *page, int len)
 	ClearPagePrivate(page);
 	ClearPageChecked(page);
-	kunmap(page);
+	kunmap_thread(page);
 	unlock_page(page);
 	end_page_writeback(page);
 	return err;
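
Note what makes the ubifs conversion safe to do mechanically: do_readpage() and populate_page() already unmap on every exit path, success and error alike, so replacing the pair leaves no path where a thread-local mapping could leak past the function. A sketch of that discipline, with the I/O helper name invented:

static int fill_page(struct page *page)
{
	void *addr = kmap_thread(page);
	int err = read_block_into(addr);	/* illustrative I/O helper */

	if (err) {
		SetPageError(page);
		flush_dcache_page(page);
		kunmap_thread(page);	/* the error path unmaps too */
		return err;
	}
	SetPageUptodate(page);
	flush_dcache_page(page);
	kunmap_thread(page);
	return 0;
}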

From patchwork Fri Oct 9 19:50:03 2020
X-Patchwork-Id: 287194
From: ira.weiny@intel.com
Subject: [PATCH RFC PKS/PMEM 28/58] fs/cachefiles: Utilize new kmap_thread()
Date: Fri, 9 Oct 2020 12:50:03 -0700
Message-Id: <20201009195033.3208459-29-ira.weiny@intel.com>
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>

From: Ira Weiny

The kmap() calls in this FS are localized to a single thread. To avoid
the overhead of global PKRS updates, use the new kmap_thread() call.
Cc: David Howells
Signed-off-by: Ira Weiny
---
 fs/cachefiles/rdwr.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/fs/cachefiles/rdwr.c b/fs/cachefiles/rdwr.c
index 3080cda9e824..2468e5c067ba 100644
--- a/fs/cachefiles/rdwr.c
+++ b/fs/cachefiles/rdwr.c
@@ -936,9 +936,9 @@ int cachefiles_write_page(struct fscache_storage *op, struct page *page)
 		}
 	}
-	data = kmap(page);
+	data = kmap_thread(page);
 	ret = kernel_write(file, data, len, &pos);
-	kunmap(page);
+	kunmap_thread(page);
 	fput(file);
 	if (ret != len)
 		goto error_eio;
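
The cachefiles case is the simplest justification for the thread-local variant: the only consumer of the mapping is the kernel_write() issued by the same thread, strictly between the map and the unmap. A sketch under that assumption (the wrapper function is illustrative; kernel_write() is the real API):

static int write_page_to_file(struct file *file, struct page *page,
			      size_t len, loff_t *pos)
{
	void *data = kmap_thread(page);
	ssize_t ret = kernel_write(file, data, len, pos);

	kunmap_thread(page);	/* unmap before reporting the result */
	return ret == len ? 0 : -EIO;
}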

From patchwork Fri Oct 9 19:50:04 2020
X-Patchwork-Id: 287199
From: ira.weiny@intel.com
Subject: [PATCH RFC PKS/PMEM 29/58] fs/ntfs: Utilize new kmap_thread()
Date: Fri, 9 Oct 2020 12:50:04 -0700
Message-Id: <20201009195033.3208459-30-ira.weiny@intel.com>
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>

From: Ira Weiny

The kmap() calls in this FS are localized to a single thread. To avoid
the overhead of global PKRS updates, use the new kmap_thread() call.

Cc: Anton Altaparmakov
Signed-off-by: Ira Weiny
---
 fs/ntfs/aops.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/fs/ntfs/aops.c b/fs/ntfs/aops.c
index bb0a43860ad2..11633d732809 100644
--- a/fs/ntfs/aops.c
+++ b/fs/ntfs/aops.c
@@ -1099,7 +1099,7 @@ static int ntfs_write_mst_block(struct page *page,
 	if (!nr_bhs)
 		goto done;
 	/* Map the page so we can access its contents. */
-	kaddr = kmap(page);
+	kaddr = kmap_thread(page);
 	/* Clear the page uptodate flag whilst the mst fixups are applied. */
 	BUG_ON(!PageUptodate(page));
 	ClearPageUptodate(page);
@@ -1276,7 +1276,7 @@ static int ntfs_write_mst_block(struct page *page,
 		iput(VFS_I(base_tni));
 	}
 	SetPageUptodate(page);
-	kunmap(page);
+	kunmap_thread(page);
 done:
 	if (unlikely(err && err != -ENOMEM)) {
 		/*

From patchwork Fri Oct 9 19:50:06 2020
X-Patchwork-Id: 287198
From: ira.weiny@intel.com
Subject: [PATCH RFC PKS/PMEM 31/58] fs/vboxsf: Utilize new kmap_thread()
Date: Fri, 9 Oct 2020 12:50:06 -0700
Message-Id: <20201009195033.3208459-32-ira.weiny@intel.com>
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>

From: Ira Weiny

The kmap() calls in this FS are localized to a single thread. To avoid
the overhead of global PKRS updates, use the new kmap_thread() call.
Cc: Hans de Goede
Signed-off-by: Ira Weiny
---
 fs/vboxsf/file.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/fs/vboxsf/file.c b/fs/vboxsf/file.c
index c4ab5996d97a..d9c7e6b7b4cc 100644
--- a/fs/vboxsf/file.c
+++ b/fs/vboxsf/file.c
@@ -216,7 +216,7 @@ static int vboxsf_readpage(struct file *file, struct page *page)
 	u8 *buf;
 	int err;
-	buf = kmap(page);
+	buf = kmap_thread(page);
 	err = vboxsf_read(sf_handle->root, sf_handle->handle, off, &nread, buf);
 	if (err == 0) {
@@ -227,7 +227,7 @@ static int vboxsf_readpage(struct file *file, struct page *page)
 		SetPageError(page);
 	}
-	kunmap(page);
+	kunmap_thread(page);
 	unlock_page(page);
 	return err;
 }
@@ -268,10 +268,10 @@ static int vboxsf_writepage(struct page *page, struct writeback_control *wbc)
 	if (!sf_handle)
 		return -EBADF;
-	buf = kmap(page);
+	buf = kmap_thread(page);
 	err = vboxsf_write(sf_handle->root, sf_handle->handle,
 			   off, &nwrite, buf);
-	kunmap(page);
+	kunmap_thread(page);
 	kref_put(&sf_handle->refcount, vboxsf_handle_release);
@@ -302,10 +302,10 @@ static int vboxsf_write_end(struct file *file, struct address_space *mapping,
 	if (!PageUptodate(page) && copied < len)
 		zero_user(page, from + copied, len - copied);
-	buf = kmap(page);
+	buf = kmap_thread(page);
 	err = vboxsf_write(sf_handle->root, sf_handle->handle,
 			   pos, &nwritten, buf + from);
-	kunmap(page);
+	kunmap_thread(page);
 	if (err) {
 		nwritten = 0;
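
The vboxsf_write_end() hunk also shows the usual partial-page arithmetic: only bytes [from, from + copied) of the page are valid, so the host write starts at buf + from within the mapping. Generically, and with the backend call invented for illustration:

static int write_partial(struct page *page, unsigned int from,
			 unsigned int copied, loff_t pos)
{
	u8 *buf = kmap_thread(page);
	int err = backend_write(pos, buf + from, copied);	/* illustrative */

	kunmap_thread(page);
	return err;
}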

From patchwork Fri Oct 9 19:50:07 2020
X-Patchwork-Id: 287195
From: ira.weiny@intel.com
Subject: [PATCH RFC PKS/PMEM 32/58] fs/hostfs: Utilize new kmap_thread()
Date: Fri, 9 Oct 2020 12:50:07 -0700
Message-Id: <20201009195033.3208459-33-ira.weiny@intel.com>
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>

From: Ira Weiny

The kmap() calls in this FS are localized to a single thread. To avoid
the overhead of global PKRS updates, use the new kmap_thread() call.
Cc: Jeff Dike
Cc: Richard Weinberger
Cc: Anton Ivanov
Signed-off-by: Ira Weiny
---
 fs/hostfs/hostfs_kern.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/fs/hostfs/hostfs_kern.c b/fs/hostfs/hostfs_kern.c
index c070c0d8e3e9..608efd0f83cb 100644
--- a/fs/hostfs/hostfs_kern.c
+++ b/fs/hostfs/hostfs_kern.c
@@ -409,7 +409,7 @@ static int hostfs_writepage(struct page *page, struct writeback_control *wbc)
 	if (page->index >= end_index)
 		count = inode->i_size & (PAGE_SIZE-1);
-	buffer = kmap(page);
+	buffer = kmap_thread(page);
 	err = write_file(HOSTFS_I(inode)->fd, &base, buffer, count);
 	if (err != count) {
@@ -425,7 +425,7 @@ static int hostfs_writepage(struct page *page, struct writeback_control *wbc)
 	err = 0;
 out:
-	kunmap(page);
+	kunmap_thread(page);
 	unlock_page(page);
 	return err;
@@ -437,7 +437,7 @@ static int hostfs_readpage(struct file *file, struct page *page)
 	loff_t start = page_offset(page);
 	int bytes_read, ret = 0;
-	buffer = kmap(page);
+	buffer = kmap_thread(page);
 	bytes_read = read_file(FILE_HOSTFS_I(file)->fd, &start, buffer,
 			       PAGE_SIZE);
 	if (bytes_read < 0) {
@@ -454,7 +454,7 @@ static int hostfs_readpage(struct file *file, struct page *page)
 out:
 	flush_dcache_page(page);
-	kunmap(page);
+	kunmap_thread(page);
 	unlock_page(page);
 	return ret;
 }
@@ -480,9 +480,9 @@ static int hostfs_write_end(struct file *file, struct address_space *mapping,
 	unsigned from = pos & (PAGE_SIZE - 1);
 	int err;
-	buffer = kmap(page);
+	buffer = kmap_thread(page);
 	err = write_file(FILE_HOSTFS_I(file)->fd, &pos, buffer + from, copied);
-	kunmap(page);
+	kunmap_thread(page);
 	if (!PageUptodate(page) && err == PAGE_SIZE)
 		SetPageUptodate(page);
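
One ordering detail worth noting on the hostfs read side: the page is filled through the kernel mapping, then flush_dcache_page() runs before kunmap_thread(), so user-space aliases of the page see the new data on architectures with aliasing caches. Sketched, with the host read helper invented:

static int read_into_page(int fd, struct page *page)
{
	char *buffer = kmap_thread(page);
	int n = host_read(fd, buffer, PAGE_SIZE);	/* illustrative */

	flush_dcache_page(page);	/* after writing through the kernel alias */
	kunmap_thread(page);		/* only then drop the mapping */
	return n < 0 ? n : 0;
}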

From patchwork Fri Oct 9 19:50:09 2020
X-Patchwork-Id: 287196
From: ira.weiny@intel.com
Subject: [PATCH RFC PKS/PMEM 34/58] fs/erofs: Utilize new kmap_thread()
Date: Fri, 9 Oct 2020 12:50:09 -0700
Message-Id: <20201009195033.3208459-35-ira.weiny@intel.com>
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>

From: Ira Weiny

The kmap() calls in this FS are localized to a single thread. To avoid
the overhead of global PKRS updates, use the new kmap_thread() call.
Cc: Gao Xiang
Cc: Chao Yu
Signed-off-by: Ira Weiny
---
 fs/erofs/super.c | 4 ++--
 fs/erofs/xattr.c | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/fs/erofs/super.c b/fs/erofs/super.c
index ddaa516c008a..41696b60f1b3 100644
--- a/fs/erofs/super.c
+++ b/fs/erofs/super.c
@@ -139,7 +139,7 @@ static int erofs_read_superblock(struct super_block *sb)
 	sbi = EROFS_SB(sb);
-	data = kmap(page);
+	data = kmap_thread(page);
 	dsb = (struct erofs_super_block *)(data + EROFS_SUPER_OFFSET);
 	ret = -EINVAL;
@@ -189,7 +189,7 @@ static int erofs_read_superblock(struct super_block *sb)
 	}
 	ret = 0;
 out:
-	kunmap(page);
+	kunmap_thread(page);
 	put_page(page);
 	return ret;
 }
diff --git a/fs/erofs/xattr.c b/fs/erofs/xattr.c
index c8c381eadcd6..1771baa99d77 100644
--- a/fs/erofs/xattr.c
+++ b/fs/erofs/xattr.c
@@ -20,7 +20,7 @@ static inline void xattr_iter_end(struct xattr_iter *it, bool atomic)
 {
 	/* the only user of kunmap() is 'init_inode_xattrs' */
 	if (!atomic)
-		kunmap(it->page);
+		kunmap_thread(it->page);
 	else
 		kunmap_atomic(it->kaddr);
@@ -96,7 +96,7 @@ static int init_inode_xattrs(struct inode *inode)
 	}
 	/* read in shared xattr array (non-atomic, see kmalloc below) */
-	it.kaddr = kmap(it.page);
+	it.kaddr = kmap_thread(it.page);
 	atomic_map = false;
 	ih = (struct erofs_xattr_ibody_header *)(it.kaddr + it.ofs);
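
The erofs xattr code is the one spot in this patch where two mapping types meet, and the unmap must match how the page was mapped: kunmap_atomic() takes the mapped address, while kunmap_thread() takes the page. A sketch of the dispatch that xattr_iter_end() performs:

/* Pair each mapping with its matching unmap.  Atomic mappings keep
 * kunmap_atomic(); only the sleepable, single-thread mapping moves to
 * kunmap_thread(). */
static void iter_end(struct page *page, void *kaddr, bool atomic)
{
	if (atomic)
		kunmap_atomic(kaddr);	/* address-based unmap */
	else
		kunmap_thread(page);	/* page-based unmap */
}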

From patchwork Fri Oct 9 19:50:11 2020
X-Patchwork-Id: 287197
From: ira.weiny@intel.com
Subject: [PATCH RFC PKS/PMEM 36/58] fs/ext2: Use ext2_put_page
Date: Fri, 9 Oct 2020 12:50:11 -0700
Message-Id: <20201009195033.3208459-37-ira.weiny@intel.com>
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>

From: Ira Weiny

There are 3 places in namei.c where the equivalent of ext2_put_page() is
open coded. We want to use k[un]map_thread() instead of k[un]map() in
ext2_[get|put]_page(). Move ext2_put_page() to ext2.h and use it in
namei.c in preparation for converting the k[un]map() code.

Cc: Jan Kara
Signed-off-by: Ira Weiny
---
 fs/ext2/dir.c   |  6 ------
 fs/ext2/ext2.h  |  8 ++++++++
 fs/ext2/namei.c | 15 +++++----------
 3 files changed, 13 insertions(+), 16 deletions(-)

diff --git a/fs/ext2/dir.c b/fs/ext2/dir.c
index 70355ab6740e..f3194bf20733 100644
--- a/fs/ext2/dir.c
+++ b/fs/ext2/dir.c
@@ -66,12 +66,6 @@ static inline unsigned ext2_chunk_size(struct inode *inode)
 	return inode->i_sb->s_blocksize;
 }
-static inline void ext2_put_page(struct page *page)
-{
-	kunmap(page);
-	put_page(page);
-}
-
 /*
  * Return the offset into page `page_nr' of the last valid
  * byte in that page, plus one.
diff --git a/fs/ext2/ext2.h b/fs/ext2/ext2.h
index 5136b7289e8d..021ec8b42ac3 100644
--- a/fs/ext2/ext2.h
+++ b/fs/ext2/ext2.h
@@ -16,6 +16,8 @@
 #include
 #include
 #include
+#include
+#include
 /* XXX Here for now...
    not interested in restructing headers JUST now */
@@ -745,6 +747,12 @@ extern int ext2_delete_entry (struct ext2_dir_entry_2 *, struct page *);
 extern int ext2_empty_dir (struct inode *);
 extern struct ext2_dir_entry_2 * ext2_dotdot (struct inode *, struct page **);
 extern void ext2_set_link(struct inode *, struct ext2_dir_entry_2 *, struct page *, struct inode *, int);
+static inline void ext2_put_page(struct page *page)
+{
+	kunmap(page);
+	put_page(page);
+}
+
 /* ialloc.c */
 extern struct inode * ext2_new_inode (struct inode *, umode_t, const struct qstr *);
diff --git a/fs/ext2/namei.c b/fs/ext2/namei.c
index 5bf2c145643b..ea980f1e2e99 100644
--- a/fs/ext2/namei.c
+++ b/fs/ext2/namei.c
@@ -389,23 +389,18 @@ static int ext2_rename (struct inode * old_dir, struct dentry * old_dentry,
 	if (dir_de) {
 		if (old_dir != new_dir)
 			ext2_set_link(old_inode, dir_de, dir_page, new_dir, 0);
-		else {
-			kunmap(dir_page);
-			put_page(dir_page);
-		}
+		else
+			ext2_put_page(dir_page);
 		inode_dec_link_count(old_dir);
 	}
 	return 0;
 out_dir:
-	if (dir_de) {
-		kunmap(dir_page);
-		put_page(dir_page);
-	}
+	if (dir_de)
+		ext2_put_page(dir_page);
 out_old:
-	kunmap(old_page);
-	put_page(old_page);
+	ext2_put_page(old_page);
 out:
 	return err;
 }
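
With the helper visible in ext2.h, every caller releases a directory page through one function, which is what makes the follow-on conversion a two-line change. A sketch of the resulting caller shape (the function and the modification step are invented; ext2_find_entry()'s exact signature aside):

static int update_dir_entry(struct inode *dir, const struct qstr *name)
{
	struct page *page;
	struct ext2_dir_entry_2 *de = ext2_find_entry(dir, name, &page);

	if (!de)
		return -ENOENT;
	mangle_entry(de);	/* illustrative in-place modification */
	ext2_put_page(page);	/* kunmap + put_page in one place */
	return 0;
}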

From patchwork Fri Oct 9 19:50:12 2020
X-Patchwork-Id: 287200
From: ira.weiny@intel.com
Subject: [PATCH RFC PKS/PMEM 37/58] fs/ext2: Utilize new kmap_thread()
Date: Fri, 9 Oct 2020 12:50:12 -0700
Message-Id: <20201009195033.3208459-38-ira.weiny@intel.com>
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>

From: Ira Weiny

These kmap() calls are localized to a single thread. To avoid the
overhead of global PKRS updates, use the new kmap_thread() call instead.
Cc: Jan Kara
Signed-off-by: Ira Weiny
---
 fs/ext2/dir.c  | 2 +-
 fs/ext2/ext2.h | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/fs/ext2/dir.c b/fs/ext2/dir.c
index f3194bf20733..abe97ba458c8 100644
--- a/fs/ext2/dir.c
+++ b/fs/ext2/dir.c
@@ -196,7 +196,7 @@ static struct page * ext2_get_page(struct inode *dir, unsigned long n,
 	struct address_space *mapping = dir->i_mapping;
 	struct page *page = read_mapping_page(mapping, n, NULL);
 	if (!IS_ERR(page)) {
-		kmap(page);
+		kmap_thread(page);
 		if (unlikely(!PageChecked(page))) {
 			if (PageError(page) || !ext2_check_page(page, quiet))
 				goto fail;
diff --git a/fs/ext2/ext2.h b/fs/ext2/ext2.h
index 021ec8b42ac3..9bcb6714c255 100644
--- a/fs/ext2/ext2.h
+++ b/fs/ext2/ext2.h
@@ -749,7 +749,7 @@ extern struct ext2_dir_entry_2 * ext2_dotdot (struct inode *, struct page **);
 extern void ext2_set_link(struct inode *, struct ext2_dir_entry_2 *, struct page *, struct inode *, int);
 static inline void ext2_put_page(struct page *page)
 {
-	kunmap(page);
+	kunmap_thread(page);
 	put_page(page);
 }
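
The two hunks have to land together: ext2_get_page() now maps with kmap_thread(), so ext2_put_page() must unmap with kunmap_thread(); pairing kmap() with kunmap_thread() (or the reverse) would unbalance the mapping state. What callers see afterwards, sketched with an invented scan helper:

struct page *page = ext2_get_page(dir, n, 0);	/* kmap_thread() inside */

if (!IS_ERR(page)) {
	scan_dir_block(page_address(page));	/* illustrative use */
	ext2_put_page(page);			/* kunmap_thread() inside */
}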

From patchwork Fri Oct 9 19:50:14 2020
X-Patchwork-Id: 287209
From: ira.weiny@intel.com
Subject: [PATCH RFC PKS/PMEM 39/58] fs/jffs2: Utilize new kmap_thread()
Date: Fri, 9 Oct 2020 12:50:14 -0700
Message-Id: <20201009195033.3208459-40-ira.weiny@intel.com>
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>

From: Ira Weiny

These kmap() calls are localized to a single thread. To avoid the
overhead of global PKRS updates, use the new kmap_thread() call.

Signed-off-by: Ira Weiny
---
 fs/jffs2/file.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/fs/jffs2/file.c b/fs/jffs2/file.c
index 3e6d54f9b011..14dd2b18cc16 100644
--- a/fs/jffs2/file.c
+++ b/fs/jffs2/file.c
@@ -287,13 +287,13 @@ static int jffs2_write_end(struct file *filp, struct address_space *mapping,
 	/* In 2.4, it was already kmapped by generic_file_write(). Doesn't
 	   hurt to do it again. The alternative is ifdefs, which are ugly. */
-	kmap(pg);
+	kmap_thread(pg);
 	ret = jffs2_write_inode_range(c, f, ri, page_address(pg) + aligned_start,
 				      (pg->index << PAGE_SHIFT) + aligned_start,
 				      end - aligned_start, &writtenlen);
-	kunmap(pg);
+	kunmap_thread(pg);
 	if (ret) {
 		/* There was an error writing.
		 */

From patchwork Fri Oct 9 19:50:19 2020
X-Patchwork-Id: 287201
From: ira.weiny@intel.com
RFC PKS/PMEM 44/58] drivers/xen: Utilize new kmap_thread() Date: Fri, 9 Oct 2020 12:50:19 -0700 Message-Id: <20201009195033.3208459-45-ira.weiny@intel.com> X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9 In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com> References: <20201009195033.3208459-1-ira.weiny@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org From: Ira Weiny These kmap() calls are localized to a single thread. To avoid the over head of global PKRS updates use the new kmap_thread() call. Cc: Stefano Stabellini Signed-off-by: Ira Weiny --- drivers/xen/gntalloc.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/xen/gntalloc.c b/drivers/xen/gntalloc.c index 3fa40c723e8e..3b78e055feff 100644 --- a/drivers/xen/gntalloc.c +++ b/drivers/xen/gntalloc.c @@ -184,9 +184,9 @@ static int add_grefs(struct ioctl_gntalloc_alloc_gref *op, static void __del_gref(struct gntalloc_gref *gref) { if (gref->notify.flags & UNMAP_NOTIFY_CLEAR_BYTE) { - uint8_t *tmp = kmap(gref->page); + uint8_t *tmp = kmap_thread(gref->page); tmp[gref->notify.pgoff] = 0; - kunmap(gref->page); + kunmap_thread(gref->page); } if (gref->notify.flags & UNMAP_NOTIFY_SEND_EVENT) { notify_remote_via_evtchn(gref->notify.event); From patchwork Fri Oct 9 19:50:21 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ira Weiny X-Patchwork-Id: 287202 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8EA9DC83034 for ; Fri, 9 Oct 2020 19:57:55 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 63B1622267 for ; Fri, 9 Oct 2020 19:57:55 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2391060AbgJIT5v (ORCPT ); Fri, 9 Oct 2020 15:57:51 -0400 Received: from mga01.intel.com ([192.55.52.88]:3762 "EHLO mga01.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2391129AbgJITxd (ORCPT ); Fri, 9 Oct 2020 15:53:33 -0400 IronPort-SDR: aPoTh4HYeyMS5wGKHIAu0q7/f7DJkClVaj4/EXRmOBmEgcBC+Tg0jAYwjzvyrMLVRskCkInAMu EgiQjSb9DIJw== X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="182976460" X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="182976460" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga002.fm.intel.com ([10.253.24.26]) by fmsmga101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:53:31 -0700 IronPort-SDR: tmjEkPyqc7ovqgXQGaOUDyTXP+wKCpLd9SWVbWzn6wrJD59J17IHvt5wfRHDxw49UwqlYvaTbK hGdPiXGJDlXQ== X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="349957787" Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147]) by fmsmga002-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:53:31 -0700 From: ira.weiny@intel.com To: Andrew Morton , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Andy Lutomirski , Peter Zijlstra Cc: Ira Weiny , Greg Kroah-Hartman , x86@kernel.org, Dave Hansen , Dan Williams , 
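For orientation while reading this and the following conversions, the
kmap_thread()/kunmap_thread() pair introduced earlier in the series can be
pictured roughly as below.  This is a sketch only, not the series' literal
implementation: dev_access_enable()/dev_access_disable() are real (they
appear in patch 56 at the end of this posting), but the exact wrappers, and
any check that the page actually needs PKS handling, live in earlier patches
not quoted here.

/* Sketch only.  The 'false' argument requests a thread-local PKRS
 * update; the global update path (what a plain kmap() of protected
 * pmem would need) is exactly the overhead these conversions avoid. */
static inline void *kmap_thread(struct page *page)
{
	dev_access_enable(false);	/* thread-local: no global PKRS update */
	return kmap(page);
}

static inline void kunmap_thread(struct page *page)
{
	kunmap(page);
	dev_access_disable(false);
}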
From patchwork Fri Oct 9 19:50:21 2020
X-Patchwork-Submitter: Ira Weiny
X-Patchwork-Id: 287202
From: ira.weiny@intel.com
To: Andrew Morton, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Andy Lutomirski, Peter Zijlstra
Subject: [PATCH RFC PKS/PMEM 46/58] drivers/staging: Utilize new kmap_thread()
Date: Fri, 9 Oct 2020 12:50:21 -0700
Message-Id: <20201009195033.3208459-47-ira.weiny@intel.com>
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>

From: Ira Weiny

These kmap() calls are localized to a single thread.  To avoid the
overhead of global PKRS updates, use the new kmap_thread() call.

Cc: Greg Kroah-Hartman
Signed-off-by: Ira Weiny
---
 drivers/staging/rts5208/rtsx_transport.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/staging/rts5208/rtsx_transport.c b/drivers/staging/rts5208/rtsx_transport.c
index 0027bcf638ad..f747cc23951b 100644
--- a/drivers/staging/rts5208/rtsx_transport.c
+++ b/drivers/staging/rts5208/rtsx_transport.c
@@ -92,13 +92,13 @@ unsigned int rtsx_stor_access_xfer_buf(unsigned char *buffer,
 		while (sglen > 0) {
 			unsigned int plen = min(sglen, (unsigned int)
 					PAGE_SIZE - poff);
-			unsigned char *ptr = kmap(page);
+			unsigned char *ptr = kmap_thread(page);
 
 			if (dir == TO_XFER_BUF)
 				memcpy(ptr + poff, buffer + cnt, plen);
 			else
 				memcpy(buffer + cnt, ptr + poff, plen);
-			kunmap(page);
+			kunmap_thread(page);
 
 			/* Start at the beginning of the next page */
 			poff = 0;
From patchwork Fri Oct 9 19:50:24 2020
X-Patchwork-Submitter: Ira Weiny
X-Patchwork-Id: 287208
From: ira.weiny@intel.com
To: Andrew Morton, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Andy Lutomirski, Peter Zijlstra
Subject: [PATCH RFC PKS/PMEM 49/58] drivers/misc: Utilize new kmap_thread()
Date: Fri, 9 Oct 2020 12:50:24 -0700
Message-Id: <20201009195033.3208459-50-ira.weiny@intel.com>
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>

From: Ira Weiny

These kmap() calls are localized to a single thread.  To avoid the
overhead of global PKRS updates, use the new kmap_thread() call.

Cc: Greg Kroah-Hartman
Signed-off-by: Ira Weiny
---
 drivers/misc/vmw_vmci/vmci_queue_pair.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/drivers/misc/vmw_vmci/vmci_queue_pair.c b/drivers/misc/vmw_vmci/vmci_queue_pair.c
index 8531ae781195..f308abb8ad03 100644
--- a/drivers/misc/vmw_vmci/vmci_queue_pair.c
+++ b/drivers/misc/vmw_vmci/vmci_queue_pair.c
@@ -343,7 +343,7 @@ static int qp_memcpy_to_queue_iter(struct vmci_queue *queue,
 		size_t to_copy;
 
 		if (kernel_if->host)
-			va = kmap(kernel_if->u.h.page[page_index]);
+			va = kmap_thread(kernel_if->u.h.page[page_index]);
 		else
 			va = kernel_if->u.g.vas[page_index + 1];
 			/* Skip header. */
@@ -357,12 +357,12 @@ static int qp_memcpy_to_queue_iter(struct vmci_queue *queue,
 		if (!copy_from_iter_full((u8 *)va + page_offset, to_copy,
 					 from)) {
 			if (kernel_if->host)
-				kunmap(kernel_if->u.h.page[page_index]);
+				kunmap_thread(kernel_if->u.h.page[page_index]);
 			return VMCI_ERROR_INVALID_ARGS;
 		}
 		bytes_copied += to_copy;
 		if (kernel_if->host)
-			kunmap(kernel_if->u.h.page[page_index]);
+			kunmap_thread(kernel_if->u.h.page[page_index]);
 	}
 
 	return VMCI_SUCCESS;
@@ -391,7 +391,7 @@ static int qp_memcpy_from_queue_iter(struct iov_iter *to,
 		int err;
 
 		if (kernel_if->host)
-			va = kmap(kernel_if->u.h.page[page_index]);
+			va = kmap_thread(kernel_if->u.h.page[page_index]);
 		else
 			va = kernel_if->u.g.vas[page_index + 1];
 			/* Skip header. */
@@ -405,12 +405,12 @@ static int qp_memcpy_from_queue_iter(struct iov_iter *to,
 		err = copy_to_iter((u8 *)va + page_offset, to_copy, to);
 		if (err != to_copy) {
 			if (kernel_if->host)
-				kunmap(kernel_if->u.h.page[page_index]);
+				kunmap_thread(kernel_if->u.h.page[page_index]);
 			return VMCI_ERROR_INVALID_ARGS;
 		}
 		bytes_copied += to_copy;
 		if (kernel_if->host)
-			kunmap(kernel_if->u.h.page[page_index]);
+			kunmap_thread(kernel_if->u.h.page[page_index]);
 	}
 
 	return VMCI_SUCCESS;
From patchwork Fri Oct 9 19:50:25 2020
X-Patchwork-Submitter: Ira Weiny
X-Patchwork-Id: 287203
From: ira.weiny@intel.com
To: Andrew Morton, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Andy Lutomirski, Peter Zijlstra
Subject: [PATCH RFC PKS/PMEM 50/58] drivers/android: Utilize new kmap_thread()
Date: Fri, 9 Oct 2020 12:50:25 -0700
Message-Id: <20201009195033.3208459-51-ira.weiny@intel.com>
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>

From: Ira Weiny

These kmap() calls are localized to a single thread.  To avoid the
overhead of global PKRS updates, use the new kmap_thread() call.

Cc: Greg Kroah-Hartman
Signed-off-by: Ira Weiny
---
 drivers/android/binder_alloc.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
index 69609696a843..5f50856caad7 100644
--- a/drivers/android/binder_alloc.c
+++ b/drivers/android/binder_alloc.c
@@ -1118,9 +1118,9 @@ binder_alloc_copy_user_to_buffer(struct binder_alloc *alloc,
 		page = binder_alloc_get_page(alloc, buffer,
 					     buffer_offset, &pgoff);
 		size = min_t(size_t, bytes, PAGE_SIZE - pgoff);
-		kptr = kmap(page) + pgoff;
+		kptr = kmap_thread(page) + pgoff;
 		ret = copy_from_user(kptr, from, size);
-		kunmap(page);
+		kunmap_thread(page);
 		if (ret)
 			return bytes - size + ret;
 		bytes -= size;
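One subtlety in the hunk above is worth spelling out.  copy_from_user()
returns the number of bytes it could not copy, and binder folds that into a
remaining-bytes count for its caller.  The numbers in the comment below are
illustrative, not taken from the source:

/* copy_from_user() returns the count of bytes NOT copied.  On a
 * partial copy the function reports everything still outstanding:
 * the untouched tail of the request (bytes - size) plus the part of
 * this page-sized chunk that faulted (ret).
 *
 * Illustrative numbers: bytes = 8192 outstanding, size = 4096 for
 * this page, ret = 100 uncopied => return (8192 - 4096) + 100 = 4196.
 */
if (ret)
	return bytes - size + ret;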
From patchwork Fri Oct 9 19:50:27 2020
X-Patchwork-Submitter: Ira Weiny
X-Patchwork-Id: 287204
From: ira.weiny@intel.com
To: Andrew Morton, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Andy Lutomirski, Peter Zijlstra
Subject: [PATCH RFC PKS/PMEM 52/58] mm: Utilize new kmap_thread()
Date: Fri, 9 Oct 2020 12:50:27 -0700
Message-Id: <20201009195033.3208459-53-ira.weiny@intel.com>
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>

From: Ira Weiny

These kmap() calls are localized to a single thread.  To avoid the
overhead of global PKRS updates, use the new kmap_thread() call.

Signed-off-by: Ira Weiny
---
 mm/memory.c      | 8 ++++----
 mm/swapfile.c    | 4 ++--
 mm/userfaultfd.c | 4 ++--
 3 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index fcfc4ca36eba..75a054882d7a 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4945,7 +4945,7 @@ int __access_remote_vm(struct task_struct *tsk, struct mm_struct *mm,
 			if (bytes > PAGE_SIZE-offset)
 				bytes = PAGE_SIZE-offset;
 
-			maddr = kmap(page);
+			maddr = kmap_thread(page);
 			if (write) {
 				copy_to_user_page(vma, page, addr,
 						  maddr + offset, buf, bytes);
@@ -4954,7 +4954,7 @@ int __access_remote_vm(struct task_struct *tsk, struct mm_struct *mm,
 				copy_from_user_page(vma, page, addr,
 						    buf, maddr + offset, bytes);
 			}
-			kunmap(page);
+			kunmap_thread(page);
 			put_page(page);
 		}
 		len -= bytes;
@@ -5216,14 +5216,14 @@ long copy_huge_page_from_user(struct page *dst_page,
 
 	for (i = 0; i < pages_per_huge_page; i++) {
 		if (allow_pagefault)
-			page_kaddr = kmap(dst_page + i);
+			page_kaddr = kmap_thread(dst_page + i);
 		else
 			page_kaddr = kmap_atomic(dst_page + i);
 		rc = copy_from_user(page_kaddr,
 				(const void __user *)(src + i * PAGE_SIZE),
 				PAGE_SIZE);
 		if (allow_pagefault)
-			kunmap(dst_page + i);
+			kunmap_thread(dst_page + i);
 		else
 			kunmap_atomic(page_kaddr);
diff --git a/mm/swapfile.c b/mm/swapfile.c
index debc94155f74..e3296ff95648 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -3219,7 +3219,7 @@ SYSCALL_DEFINE2(swapon, const char __user *, specialfile, int, swap_flags)
 		error = PTR_ERR(page);
 		goto bad_swap_unlock_inode;
 	}
-	swap_header = kmap(page);
+	swap_header = kmap_thread(page);
 
 	maxpages = read_swap_header(p, swap_header, inode);
 	if (unlikely(!maxpages)) {
@@ -3395,7 +3395,7 @@ SYSCALL_DEFINE2(swapon, const char __user *, specialfile, int, swap_flags)
 		filp_close(swap_file, NULL);
out:
 	if (page && !IS_ERR(page)) {
-		kunmap(page);
+		kunmap_thread(page);
 		put_page(page);
 	}
 	if (name)
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 9a3d451402d7..4d38c881bb2d 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -586,11 +586,11 @@ static __always_inline ssize_t __mcopy_atomic(struct mm_struct *dst_mm,
 			mmap_read_unlock(dst_mm);
 			BUG_ON(!page);
 
-			page_kaddr = kmap(page);
+			page_kaddr = kmap_thread(page);
 			err = copy_from_user(page_kaddr,
 					     (const void __user *) src_addr,
 					     PAGE_SIZE);
-			kunmap(page);
+			kunmap_thread(page);
 			if (unlikely(err)) {
 				err = -EFAULT;
 				goto out;
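The copy_huge_page_from_user() hunk in mm/memory.c above is the one place in
this patch where the choice of primitive matters: kmap_atomic() runs with
pagefaults disabled, so the sleeping variant is required whenever the copy is
allowed to fault user pages in.  A condensed restatement (assuming
kmap_thread(), like kmap(), may sleep; usr_src is a stand-in for the real
source expression):

/* Condensed from the hunk above: match the mapping primitive to
 * whether the user copy is allowed to fault.  kmap_thread(), like
 * kmap(), may sleep; kmap_atomic() runs with pagefaults disabled,
 * so its copy must not need to fault anything in. */
if (allow_pagefault)
	page_kaddr = kmap_thread(dst_page + i);	/* may sleep; faults OK */
else
	page_kaddr = kmap_atomic(dst_page + i);	/* atomic; no faulting */

rc = copy_from_user(page_kaddr, usr_src, PAGE_SIZE);

if (allow_pagefault)
	kunmap_thread(dst_page + i);
else
	kunmap_atomic(page_kaddr);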
From patchwork Fri Oct 9 19:50:28 2020
X-Patchwork-Submitter: Ira Weiny
X-Patchwork-Id: 287205
From: ira.weiny@intel.com
To: Andrew Morton, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Andy Lutomirski, Peter Zijlstra
Subject: [PATCH RFC PKS/PMEM 53/58] lib: Utilize new kmap_thread()
Date: Fri, 9 Oct 2020 12:50:28 -0700
Message-Id: <20201009195033.3208459-54-ira.weiny@intel.com>
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>

From: Ira Weiny

These kmap() calls are localized to a single thread.  To avoid the
overhead of global PKRS updates, use the new kmap_thread() call.

Cc: Alexander Viro
Cc: "Jérôme Glisse"
Cc: Martin KaFai Lau
Cc: Song Liu
Cc: Yonghong Song
Cc: Andrii Nakryiko
Cc: John Fastabend
Cc: KP Singh
Signed-off-by: Ira Weiny
---
 lib/iov_iter.c | 12 ++++++------
 lib/test_bpf.c |  4 ++--
 lib/test_hmm.c |  8 ++++----
 3 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/lib/iov_iter.c b/lib/iov_iter.c
index 5e40786c8f12..1d47f957cf95 100644
--- a/lib/iov_iter.c
+++ b/lib/iov_iter.c
@@ -208,7 +208,7 @@ static size_t copy_page_to_iter_iovec(struct page *page, size_t offset, size_t bytes,
 	}
 	/* Too bad - revert to non-atomic kmap */
 
-	kaddr = kmap(page);
+	kaddr = kmap_thread(page);
 	from = kaddr + offset;
 	left = copyout(buf, from, copy);
 	copy -= left;
@@ -225,7 +225,7 @@ static size_t copy_page_to_iter_iovec(struct page *page, size_t offset, size_t bytes,
 		from += copy;
 		bytes -= copy;
 	}
-	kunmap(page);
+	kunmap_thread(page);
 
 done:
 	if (skip == iov->iov_len) {
@@ -292,7 +292,7 @@ static size_t copy_page_from_iter_iovec(struct page *page, size_t offset, size_t bytes,
 	}
 	/* Too bad - revert to non-atomic kmap */
 
-	kaddr = kmap(page);
+	kaddr = kmap_thread(page);
 	to = kaddr + offset;
 	left = copyin(to, buf, copy);
 	copy -= left;
@@ -309,7 +309,7 @@ static size_t copy_page_from_iter_iovec(struct page *page, size_t offset, size_t bytes,
 		to += copy;
 		bytes -= copy;
 	}
-	kunmap(page);
+	kunmap_thread(page);
 
 done:
 	if (skip == iov->iov_len) {
@@ -1742,10 +1742,10 @@ int iov_iter_for_each_range(struct iov_iter *i, size_t bytes,
 		return 0;
 
 	iterate_all_kinds(i, bytes, v, -EINVAL, ({
-		w.iov_base = kmap(v.bv_page) + v.bv_offset;
+		w.iov_base = kmap_thread(v.bv_page) + v.bv_offset;
 		w.iov_len = v.bv_len;
 		err = f(&w, context);
-		kunmap(v.bv_page);
+		kunmap_thread(v.bv_page);
 		err;}), ({
 		w = v;
 		err = f(&w, context);})
diff --git a/lib/test_bpf.c b/lib/test_bpf.c
index ca7d635bccd9..441f822f56ba 100644
--- a/lib/test_bpf.c
+++ b/lib/test_bpf.c
@@ -6506,11 +6506,11 @@ static void *generate_test_data(struct bpf_test *test, int sub)
 		if (!page)
 			goto err_kfree_skb;
 
-		ptr = kmap(page);
+		ptr = kmap_thread(page);
 		if (!ptr)
 			goto err_free_page;
 		memcpy(ptr, test->frag_data, MAX_DATA);
-		kunmap(page);
+		kunmap_thread(page);
 		skb_add_rx_frag(skb, 0, page, 0, MAX_DATA, MAX_DATA);
 	}
diff --git a/lib/test_hmm.c b/lib/test_hmm.c
index e7dc3de355b7..e40d26f97f45 100644
--- a/lib/test_hmm.c
+++ b/lib/test_hmm.c
@@ -329,9 +329,9 @@ static int dmirror_do_read(struct dmirror *dmirror, unsigned long start,
 		if (!page)
 			return -ENOENT;
 
-		tmp = kmap(page);
+		tmp = kmap_thread(page);
 		memcpy(ptr, tmp, PAGE_SIZE);
-		kunmap(page);
+		kunmap_thread(page);
 
 		ptr += PAGE_SIZE;
 		bounce->cpages++;
@@ -398,9 +398,9 @@ static int dmirror_do_write(struct dmirror *dmirror, unsigned long start,
 		if (!page || xa_pointer_tag(entry) != DPT_XA_TAG_WRITE)
 			return -ENOENT;
 
-		tmp = kmap(page);
+		tmp = kmap_thread(page);
 		memcpy(tmp, ptr, PAGE_SIZE);
-		kunmap(page);
+		kunmap_thread(page);
 
 		ptr += PAGE_SIZE;
 		bounce->cpages++;
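The lib/iov_iter.c hunks above sit on the slow path of a two-tier pattern
that is easy to miss in diff form.  A heavily condensed sketch of
copy_page_to_iter_iovec() follows; copyout() is the file's internal
copy-to-user helper, and the structure is simplified from the real function:

/* Condensed sketch: the fast path copies out of an atomic mapping,
 * which cannot fault user pages in; only when it comes up short does
 * the code pay for the sleeping map -- the kmap() these hunks turn
 * into kmap_thread(). */
kaddr = kmap_atomic(page);
left = copyout(buf, kaddr + offset, copy);	/* pagefaults disabled */
kunmap_atomic(kaddr);
if (left) {
	/* Too bad - revert to non-atomic kmap */
	kaddr = kmap_thread(page);
	left = copyout(buf, kaddr + offset, copy);	/* may fault pages in */
	kunmap_thread(page);
}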
From patchwork Fri Oct 9 19:50:30 2020
X-Patchwork-Submitter: Ira Weiny
X-Patchwork-Id: 287206
From: ira.weiny@intel.com
To: Andrew Morton, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Andy Lutomirski, Peter Zijlstra
Subject: [PATCH RFC PKS/PMEM 55/58] samples: Utilize new kmap_thread()
Date: Fri, 9 Oct 2020 12:50:30 -0700
Message-Id: <20201009195033.3208459-56-ira.weiny@intel.com>
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>

From: Ira Weiny

These kmap() calls are localized to a single thread.  To avoid the
overhead of global PKRS updates, use the new kmap_thread() call.

Cc: Kirti Wankhede
Signed-off-by: Ira Weiny
---
 samples/vfio-mdev/mbochs.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/samples/vfio-mdev/mbochs.c b/samples/vfio-mdev/mbochs.c
index 3cc5e5921682..6d95422c0b46 100644
--- a/samples/vfio-mdev/mbochs.c
+++ b/samples/vfio-mdev/mbochs.c
@@ -479,12 +479,12 @@ static ssize_t mdev_access(struct mdev_device *mdev, char *buf, size_t count,
 		pos -= MBOCHS_MMIO_BAR_OFFSET;
 		poff = pos & ~PAGE_MASK;
 		pg = __mbochs_get_page(mdev_state, pos >> PAGE_SHIFT);
-		map = kmap(pg);
+		map = kmap_thread(pg);
 		if (is_write)
 			memcpy(map + poff, buf, count);
 		else
 			memcpy(buf, map + poff, count);
-		kunmap(pg);
+		kunmap_thread(pg);
 		put_page(pg);
 
 	} else {
From patchwork Fri Oct 9 19:50:31 2020
X-Patchwork-Submitter: Ira Weiny
X-Patchwork-Id: 287207
From: ira.weiny@intel.com
To: Andrew Morton, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Andy Lutomirski, Peter Zijlstra
Subject: [PATCH RFC PKS/PMEM 56/58] dax: Stray access protection for dax_direct_access()
Date: Fri, 9 Oct 2020 12:50:31 -0700
Message-Id: <20201009195033.3208459-57-ira.weiny@intel.com>
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>

From: Ira Weiny

dax_direct_access() is a special case: it accesses pmem via a page
offset and without a struct page.  Because the dax driver is well
aware of the special protections it has mapped memory with, call
dev_access_[en|dis]able() directly instead of paying the unnecessary
overhead of looking up a page to kmap.

Similar to the kmap conversions, we leverage the existing functions
dax_read_[un]lock() because they are already required to surround
the use of the memory returned from dax_direct_access().

Signed-off-by: Ira Weiny
---
 drivers/dax/super.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/dax/super.c b/drivers/dax/super.c
index e84070b55463..0ddb3ee73e36 100644
--- a/drivers/dax/super.c
+++ b/drivers/dax/super.c
@@ -30,6 +30,7 @@ static DEFINE_SPINLOCK(dax_host_lock);
 
 int dax_read_lock(void)
 {
+	dev_access_enable(false);
 	return srcu_read_lock(&dax_srcu);
 }
 EXPORT_SYMBOL_GPL(dax_read_lock);
@@ -37,6 +38,7 @@ EXPORT_SYMBOL_GPL(dax_read_lock);
 void dax_read_unlock(int id)
 {
 	srcu_read_unlock(&dax_srcu, id);
+	dev_access_disable(false);
 }
 EXPORT_SYMBOL_GPL(dax_read_unlock);
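To see why piggybacking on dax_read_lock()/dax_read_unlock() is sufficient,
consider the shape of a typical consumer.  The sketch below is illustrative:
dax_dev, pgoff, buf, and len are stand-ins and error handling is elided.
The point is that the lock pair callers must already hold around use of the
returned address now doubles as the PKS access bracket.

/* Hypothetical consumer of dax_direct_access(); with this patch the
 * mandatory dax_read_lock()/dax_read_unlock() pair also opens and
 * closes stray-access protection for the underlying pmem. */
void *kaddr;
pfn_t pfn;
long nr;
int id;

id = dax_read_lock();		/* now also: dev_access_enable(false) */
nr = dax_direct_access(dax_dev, pgoff, 1, &kaddr, &pfn);
if (nr >= 1)
	memcpy(buf, kaddr, len);	/* safe inside the bracket */
dax_read_unlock(id);		/* now also: dev_access_disable(false) */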