[9/9] cpuidle/poll_state: limit POLL_IDLE_RELAX_COUNT on arm64

Message ID 20240430183730.561960-10-ankur.a.arora@oracle.com
State Superseded
Series Enable haltpoll for arm64

Commit Message

Ankur Arora April 30, 2024, 6:37 p.m. UTC
smp_cond_load_relaxed(), in its generic polling variant, polls on the
loop condition, waiting for it to change, and eventually exits the loop
if the time limit has been exceeded.

To limit the frequency of the time check, it is done only once every
POLL_IDLE_RELAX_COUNT iterations.
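
For context, the throttled time check in the generic polling path looks
roughly like the sketch below (a simplification of the mainline
poll_idle() loop in drivers/cpuidle/poll_state.c; the exact loop in this
series may differ):

	unsigned int loop_count = 0;
	u64 limit = cpuidle_poll_time(drv, dev);

	while (!need_resched()) {
		cpu_relax();
		if (loop_count++ < POLL_IDLE_RELAX_COUNT)
			continue;

		/* Only read the clock every POLL_IDLE_RELAX_COUNT spins */
		loop_count = 0;
		if (local_clock_noinstr() - time_start > limit) {
			dev->poll_time_limit = true;
			break;
		}
	}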

arm64, however, uses an event-based mechanism where, instead of polling,
we wait for a store to a memory region.

Limit POLL_IDLE_RELAX_COUNT to 1 for that case.
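
For reference, arm64's smp_cond_load_relaxed() has roughly the following
shape (see arch/arm64/include/asm/barrier.h; simplified here):

	/*
	 * __cmpwait_relaxed() arms the exclusive monitor on the address
	 * and executes WFE, so the loop wakes on a store to *__PTR (or
	 * an event/interrupt) rather than by spinning.
	 */
	for (;;) {
		VAL = READ_ONCE(*__PTR);
		if (cond_expr)
			break;
		__cmpwait_relaxed(__PTR, VAL);
	}

With the waiter sleeping in WFE until woken, spinning many times between
time checks buys nothing, so a relax count of 1 suffices.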

Suggested-by: Haris Okanovic <harisokn@amazon.com>
Signed-off-by: Ankur Arora <ankur.a.arora@oracle.com>
---
 drivers/cpuidle/poll_state.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

Patch

diff --git a/drivers/cpuidle/poll_state.c b/drivers/cpuidle/poll_state.c
index 532e4ed19e0f..b69fe7b67cb4 100644
--- a/drivers/cpuidle/poll_state.c
+++ b/drivers/cpuidle/poll_state.c
@@ -8,7 +8,18 @@ 
 #include <linux/sched/clock.h>
 #include <linux/sched/idle.h>
 
+#ifdef CONFIG_ARM64
+/*
+ * POLL_IDLE_RELAX_COUNT determines how often we check for timeout
+ * while polling for TIF_NEED_RESCHED in thread_info->flags.
+ *
+ * Set this to a low value since arm64, instead of polling, uses an
+ * event-based mechanism.
+ */
+#define POLL_IDLE_RELAX_COUNT	1
+#else
 #define POLL_IDLE_RELAX_COUNT	200
+#endif
 
 static int __cpuidle poll_idle(struct cpuidle_device *dev,
 			       struct cpuidle_driver *drv, int index)