On 12/27, Manfred Spraul wrote:
I _think_ that
	wait_event_whatever(WQ, CONDITION);
vs
	CONDITION = 1;
	if (wq_has_sleeper(WQ))
		wake_up_xxx(WQ, ...);
is fine.
This pattern is documented in wait.h:
https://elixir.bootlin.com/linux/v6.12.6/source/include/linux/wait.h#L96
Thus if there is an issue, then the documentation should be updated.
Agreed, it is basically the same pattern; prepare_to_wait_event() is similar to prepare_to_wait().
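Just to spell out where the barriers in that pattern come from (from memory, so only roughly): the waker-side smp_mb() is inside wq_has_sleeper() itself,

	static inline bool wq_has_sleeper(struct wait_queue_head *wq_head)
	{
		/* orders the CONDITION store above against the waitqueue_active() load */
		smp_mb();
		return waitqueue_active(wq_head);
	}

and it pairs with the barrier the waiter does in set_current_state() after adding itself to the wait queue.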
But I do not understand this comment (from 2.6.0)
https://git.kernel.org/pub/scm/linux/kernel/git/tglx/history.git/commit/kern...
/*
 * Note: we use "set_current_state()" _after_ the wait-queue add,
 * because we need a memory barrier there on SMP, so that any
 * wake-function that tests for the wait-queue being active
 * will be guaranteed to see waitqueue addition _or_ subsequent
 * tests in this thread will see the wakeup having taken place.
 *
 * The spin_unlock() itself is semi-permeable and only protects
 * one way (it only protects stuff inside the critical region and
 * stops them from bleeding out - it would still allow subsequent
 * loads to move into the the critical region).
 */
...
set_current_state() now uses smp_store_mb(), which places the memory barrier _after_ the store.
And afaics this is what we actually need.
Thus I do not see what enforces that this store is ordered against the stores done by __add_wait_queue().
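(For reference, ignoring the debug/tracing variants, this boils down to roughly

	#define set_current_state(state_value)				\
		smp_store_mb(current->__state, (state_value))

	/* generic fallback; architectures may do this differently, e.g. with xchg() */
	#define smp_store_mb(var, value)				\
		do { WRITE_ONCE(var, value); smp_mb(); } while (0)

i.e. a plain store followed by a full barrier, so nothing in set_current_state() itself orders the __state store against the stores done by __add_wait_queue().)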
IIUC this is fine; there is no need to serialize list_add() and STORE(tsk->__state), they can be reordered.
But we do need an mb() between __add_wait_queue + __set_current_state (in either order) and the subsequent "if (CONDITION)" check.
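To spell it out, on the waiter side the sequence is roughly (a simplified sketch of prepare_to_wait_event() plus the CONDITION check in ___wait_event()):

	spin_lock_irqsave(&wq_head->lock, flags);
	__add_wait_queue(wq_head, wq_entry);	/* store: queue the entry */
	set_current_state(state);		/* store __state, then smp_mb() */
	spin_unlock_irqrestore(&wq_head->lock, flags);

	if (!CONDITION)				/* load, after the smp_mb() above */
		schedule();

The smp_mb() from set_current_state() separates both stores from the CONDITION load, and it pairs with the barrier the waker does before its waitqueue_active()/wq_has_sleeper() check.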
--- a/kernel/sched/wait.c
+++ b/kernel/sched/wait.c
@@ -124,6 +124,23 @@ static int __wake_up_common_lock(struct wait_queue_head *wq_head, unsigned int m
 int __wake_up(struct wait_queue_head *wq_head, unsigned int mode,
 	      int nr_exclusive, void *key)
 {
-	if (list_empty(&wq_head->head)) {
+	struct list_head *pn;
+
+	/*
+	 * pairs with spin_unlock_irqrestore(&wq_head->lock);
+	 * We actually do not need to acquire wq_head->lock, we just
+	 * need to be sure that there is no prepare_to_wait() that
+	 * completed on any CPU before __wake_up was called.
+	 * Thus instead of load_acquiring the spinlock and dropping
+	 * it again, we load_acquire the next list entry and check
+	 * that the list is not empty.
+	 */
+	pn = smp_load_acquire(&wq_head->head.next);
+	if(pn == &wq_head->head)
 		return 0;
-	}
Too subtle for me ;)
I have some concerns, but I need to think a bit more to (try to) actually understand this change.
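To spell out the pairing this relies on: prepare_to_wait() publishes the entry and then releases the lock,

	/* waiter: prepare_to_wait(), paraphrased */
	spin_lock_irqsave(&wq_head->lock, flags);
	__add_wait_queue(wq_head, wq_entry);		/* head.next != &head from now on */
	set_current_state(state);
	spin_unlock_irqrestore(&wq_head->lock, flags);	/* RELEASE */

	/* waker: the proposed __wake_up() fast path */
	pn = smp_load_acquire(&wq_head->head.next);	/* ACQUIRE */
	if (pn == &wq_head->head)
		return 0;

and whether that RELEASE/ACQUIRE pair is really equivalent to taking and dropping wq_head->lock for every current caller is what I need to convince myself of.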
Oleg.