
[v2,1/4] kprobes: Fix spelling mistakes

Message ID 20210529110305.9446-2-thunder.leizhen@huawei.com
State New
Series kernel: fix some spelling mistakes

Commit Message

Leizhen (ThunderTown) May 29, 2021, 11:03 a.m. UTC
Fix some spelling mistakes in comments:
decrese ==> decrease
immmediately ==> immediately

Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>

---
 include/linux/freelist.h | 2 +-
 kernel/kprobes.c         | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

-- 
2.25.1

Comments

Masami Hiramatsu (Google) June 1, 2021, 11:54 p.m. UTC | #1
On Sat, 29 May 2021 19:03:02 +0800
Zhen Lei <thunder.leizhen@huawei.com> wrote:

> Fix some spelling mistakes in comments:
> decrese ==> decrease
> immmediately ==> immediately

This looks good to me.

Acked-by: Masami Hiramatsu <mhiramat@kernel.org>

Thanks!

> Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
> ---
>  include/linux/freelist.h | 2 +-
>  kernel/kprobes.c         | 2 +-
>  2 files changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/include/linux/freelist.h b/include/linux/freelist.h
> index fc1842b96469..1811c1f3f8cb 100644
> --- a/include/linux/freelist.h
> +++ b/include/linux/freelist.h
> @@ -39,7 +39,7 @@ static inline void __freelist_add(struct freelist_node *node, struct freelist_he
>  	 * and a refcount increment of a node in try_get, then back up to
>  	 * something non-zero, then the refcount increment is done by the other
>  	 * thread) -- so if the CAS to add the node to the actual list fails,
> -	 * decrese the refcount and leave the add operation to the next thread
> +	 * decrease the refcount and leave the add operation to the next thread
>  	 * who puts the refcount back to zero (which could be us, hence the
>  	 * loop).
>  	 */
> diff --git a/kernel/kprobes.c b/kernel/kprobes.c
> index 8c0a6fdef771..d4156082d5a5 100644
> --- a/kernel/kprobes.c
> +++ b/kernel/kprobes.c
> @@ -641,7 +641,7 @@ void wait_for_kprobe_optimizer(void)
>  	while (!list_empty(&optimizing_list) || !list_empty(&unoptimizing_list)) {
>  		mutex_unlock(&kprobe_mutex);
>
> -		/* this will also make optimizing_work execute immmediately */
> +		/* this will also make optimizing_work execute immediately */
>  		flush_delayed_work(&optimizing_work);
>  		/* @optimizing_work might not have been queued yet, relax */
>  		cpu_relax();
> --
> 2.25.1

-- 
Masami Hiramatsu <mhiramat@kernel.org>

Patch

diff --git a/include/linux/freelist.h b/include/linux/freelist.h
index fc1842b96469..1811c1f3f8cb 100644
--- a/include/linux/freelist.h
+++ b/include/linux/freelist.h
@@ -39,7 +39,7 @@ static inline void __freelist_add(struct freelist_node *node, struct freelist_he
 	 * and a refcount increment of a node in try_get, then back up to
 	 * something non-zero, then the refcount increment is done by the other
 	 * thread) -- so if the CAS to add the node to the actual list fails,
-	 * decrese the refcount and leave the add operation to the next thread
+	 * decrease the refcount and leave the add operation to the next thread
 	 * who puts the refcount back to zero (which could be us, hence the
 	 * loop).
 	 */
diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index 8c0a6fdef771..d4156082d5a5 100644
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -641,7 +641,7 @@ void wait_for_kprobe_optimizer(void)
 	while (!list_empty(&optimizing_list) || !list_empty(&unoptimizing_list)) {
 		mutex_unlock(&kprobe_mutex);
 
-		/* this will also make optimizing_work execute immmediately */
+		/* this will also make optimizing_work execute immediately */
 		flush_delayed_work(&optimizing_work);
 		/* @optimizing_work might not have been queued yet, relax */
 		cpu_relax();