| Message ID | 20220701154604.2211008-1-imran.f.khan@oracle.com |
|---|---|
| State | Superseded |
| Series | [RESEND] kernfs: Avoid re-adding kernfs_node into kernfs_notify_list. |
Hello Tejun,

On 6/7/22 4:33 am, Tejun Heo wrote:
> Hello,
>
> On Sun, Jul 03, 2022 at 09:09:05PM +1000, Imran Khan wrote:
>> Can we use kernfs_notify_lock like the below snippet to serialize producers
>> (kernfs_notify):
>>
>>	spin_lock_irqsave(&kernfs_notify_lock, flags);
>>	if (kn->attr.notify_next.next != NULL) {
>>		kernfs_get(kn);
>>		llist_add(&kn->attr.notify_next, &kernfs_notify_list);
>>		schedule_work(&kernfs_notify_work);
>>	}
>>	spin_unlock_irqrestore(&kernfs_notify_lock, flags);
>
> But then what's the point of using llist?
>

In this case, the point of using llist would be to avoid taking the locks in
the consumer.

>> As per the following comments at the beginning of llist.h:
>>
>>  * Cases where locking is not needed:
>>  * If there are multiple producers and multiple consumers, llist_add can be
>>  * used in producers and llist_del_all can be used in consumers simultaneously
>>  * without locking. Also a single consumer can use llist_del_first while
>>  * multiple producers simultaneously use llist_add, without any locking.
>>
>> Multiple producers and a single consumer can work in parallel, but since in
>> our case the addition depends on kn->attr.notify_next.next != NULL, we may
>> keep the check and the list addition under kernfs_notify_lock, and for the
>> consumer just reset free->next = NULL under kernfs_notify_lock.
>
> It supports multiple producers in the sense that multiple producers can try
> to add their own llist_nodes concurrently. It doesn't support multiple
> producers trying to add the same llist_node, whether that depends on a NULL
> check or not.
>

Hmm. My idea was that we would never run into a situation where multiple
producers end up adding the same node, because as soon as the first producer
adds the node (while the other potential adders are spinning on
kernfs_notify_lock), kn->attr.notify_next.next will get a non-NULL value and
the (kn->attr.notify_next.next != NULL) check will avoid the node getting
re-added.
I must be missing something here, so as per your suggestion I have reverted
this change at [1].

Thanks,
-- Imran

[1]: https://lore.kernel.org/lkml/20220705201026.2487665-1-imran.f.khan@oracle.com/
Hello,

On Wed, Jul 06, 2022 at 06:18:28AM +1000, Imran Khan wrote:
> In this case, the point of using llist would be to avoid taking the locks in
> consumer.

Given that the consumer can dispatch the whole list, I doubt that's worth the
complication.

> Hmm. My idea was that eventually we will never run into situation where multiple
> producers will end up adding the same node because as soon as first producer
> adds the node (the other potential adders are spinning on kernfs_notify_lock),
> kn->attr.notify_next.next will get a non-NULL value and checking
> (kn->attr.notify_next.next != NULL) will avoid the node getting re-added.

So, here, I don't see how llist can be used without a surrounding lock, and I
don't see much point in using llist if we need to use a lock anyway. If this
needs to be made scalable, we need a different strategy (e.g. a per-cpu lock /
pending list can be an option).

I'm a bit swamped with other stuff and will likely be less engaged from now
on. I'll try to review patches where possible.

Thanks.
On Fri, Jul 1, 2022 at 5:49 PM Imran Khan <imran.f.khan@oracle.com> wrote:
...
> +	if (kn->attr.notify_next.next != NULL) {

Isn't there a helper to get the next pointer from an llist pointer?

> +		kernfs_get(kn);
> +		llist_add(&kn->attr.notify_next, &kernfs_notify_list);
> +		schedule_work(&kernfs_notify_work);
> +	}
```diff
diff --git a/fs/kernfs/file.c b/fs/kernfs/file.c
index bb933221b4bae..e8ec054e11c63 100644
--- a/fs/kernfs/file.c
+++ b/fs/kernfs/file.c
@@ -917,6 +917,7 @@ static void kernfs_notify_workfn(struct work_struct *work)
 	if (free == NULL)
 		return;
 
+	free->next = NULL;
 	attr = llist_entry(free, struct kernfs_elem_attr, notify_next);
 	kn = attribute_to_node(attr, struct kernfs_node, attr);
 	root = kernfs_root(kn);
@@ -992,9 +993,11 @@ void kernfs_notify(struct kernfs_node *kn)
 	rcu_read_unlock();
 
 	/* schedule work to kick fsnotify */
-	kernfs_get(kn);
-	llist_add(&kn->attr.notify_next, &kernfs_notify_list);
-	schedule_work(&kernfs_notify_work);
+	if (kn->attr.notify_next.next != NULL) {
+		kernfs_get(kn);
+		llist_add(&kn->attr.notify_next, &kernfs_notify_list);
+		schedule_work(&kernfs_notify_work);
+	}
 }
 EXPORT_SYMBOL_GPL(kernfs_notify);
```