| Message ID | 20240508093005.1044815-1-kovalev@altlinux.org |
|---|---|
| State | New |
| Series | tty: Fix possible deadlock in tty_buffer_flush |
On 08. 05. 24, 11:30, kovalev@altlinux.org wrote:
> From: Vasiliy Kovalev <kovalev@altlinux.org>
>
> A possible scenario in which a deadlock may occur is as follows:
>
> flush_to_ldisc() {
>
>   mutex_lock(&buf->lock);
>
>   tty_port_default_receive_buf() {
>     tty_ldisc_receive_buf() {
>       n_tty_receive_buf2() {
>         n_tty_receive_buf_common() {
>           n_tty_receive_char_special() {
>             isig() {
>               tty_driver_flush_buffer() {
>                 pty_flush_buffer() {
>                   tty_buffer_flush() {
>
>                     mutex_lock(&buf->lock); (DEADLOCK)
>
> The flush_to_ldisc() and tty_buffer_flush() functions use the same mutex
> (&buf->lock), but not necessarily the same struct tty_bufhead object.

"not necessarily" -- so does it mean that it actually can happen (and we
should fix it) or not at all (and we should annotate the mutex)?

> However, you should probably use a separate mutex for the
> tty_buffer_flush() function to exclude such a situation.

...

> Cc: stable@vger.kernel.org

What commit does this fix?

> --- a/drivers/tty/tty_buffer.c
> +++ b/drivers/tty/tty_buffer.c
> @@ -226,7 +226,7 @@ void tty_buffer_flush(struct tty_struct *tty, struct tty_ldisc *ld)
>
>  	atomic_inc(&buf->priority);
>
> -	mutex_lock(&buf->lock);
> +	mutex_lock(&buf->flush_mtx);

Hmm, how does this protect against concurrent buf pickup? We free it
here and the racing thread can start using it, or?

>  	/* paired w/ release in __tty_buffer_request_room; ensures there are
>  	 * no pending memory accesses to the freed buffer
>  	 */

thanks,
09.05.2024 09:41, Jiri Slaby wrote:
> On 08. 05. 24, 11:30, kovalev@altlinux.org wrote:
>> From: Vasiliy Kovalev <kovalev@altlinux.org>
>>
>> A possible scenario in which a deadlock may occur is as follows:
>>
>> flush_to_ldisc() {
>>
>>   mutex_lock(&buf->lock);
>>
>>   tty_port_default_receive_buf() {
>>     tty_ldisc_receive_buf() {
>>       n_tty_receive_buf2() {
>>         n_tty_receive_buf_common() {
>>           n_tty_receive_char_special() {
>>             isig() {
>>               tty_driver_flush_buffer() {
>>                 pty_flush_buffer() {
>>                   tty_buffer_flush() {
>>
>>                     mutex_lock(&buf->lock); (DEADLOCK)
>>
>> The flush_to_ldisc() and tty_buffer_flush() functions use the same mutex
>> (&buf->lock), but not necessarily the same struct tty_bufhead object.
>
> "not necessarily" -- so does it mean that it actually can happen (and we
> should fix it) or not at all (and we should annotate the mutex)?

During debugging, when running the reproducer multiple times, I failed to
catch a situation where these mutexes have the same address in memory in
the above call scenario, so I am not sure that such a situation is
possible. But earlier a thread runs that accesses the same structure (and
mutex), so lockdep throws a warning:

thread 0:
flush_to_ldisc() {
  mutex_lock(&buf->lock)    // mutex address == 0xA
  n_tty_receive_buf_common();
  mutex_unlock(&buf->lock)  // mutex address == 0xA
}

thread 1:
flush_to_ldisc() {
  mutex_lock(&buf->lock)    // mutex address == 0xB
  n_tty_receive_buf_common() {
    isig() {
      tty_driver_flush_buffer() {
        pty_flush_buffer() {
          tty_buffer_flush() {
            mutex_lock(&buf->lock)  // mutex address == 0xA -> throws warning
            // successful continuation
...
}

>> However, you should probably use a separate mutex for the
>> tty_buffer_flush() function to exclude such a situation.
> ...
>
>> Cc: stable@vger.kernel.org
>
> What commit does this fix?

I will assume it is the commit that introduced the mutex in these
functions:
e9975fdec013 ("tty: Ensure single-threaded flip buffer consumer with mutex")

>> --- a/drivers/tty/tty_buffer.c
>> +++ b/drivers/tty/tty_buffer.c
>> @@ -226,7 +226,7 @@ void tty_buffer_flush(struct tty_struct *tty,
>> struct tty_ldisc *ld)
>>
>>  	atomic_inc(&buf->priority);
>>
>> -	mutex_lock(&buf->lock);
>> +	mutex_lock(&buf->flush_mtx);
>
> Hmm, how does this protect against concurrent buf pickup? We free it
> here and the racing thread can start using it, or?

Yes, assuming that such a scenario is possible. Otherwise, if such a
scenario is not possible and the patch is inappropriate, this mutex needs
to be annotated in some way to tell lockdep to ignore this place.

>>  	/* paired w/ release in __tty_buffer_request_room; ensures there are
>>  	 * no pending memory accesses to the freed buffer
>>  	 */
>
> thanks,
diff --git a/drivers/tty/tty_buffer.c b/drivers/tty/tty_buffer.c
index 79f0ff94ce00da..e777bd5f3a2fca 100644
--- a/drivers/tty/tty_buffer.c
+++ b/drivers/tty/tty_buffer.c
@@ -226,7 +226,7 @@ void tty_buffer_flush(struct tty_struct *tty, struct tty_ldisc *ld)
 
 	atomic_inc(&buf->priority);
 
-	mutex_lock(&buf->lock);
+	mutex_lock(&buf->flush_mtx);
 	/* paired w/ release in __tty_buffer_request_room; ensures there are
 	 * no pending memory accesses to the freed buffer
 	 */
@@ -241,7 +241,7 @@ void tty_buffer_flush(struct tty_struct *tty, struct tty_ldisc *ld)
 		ld->ops->flush_buffer(tty);
 
 	atomic_dec(&buf->priority);
-	mutex_unlock(&buf->lock);
+	mutex_unlock(&buf->flush_mtx);
 }
 
 /**
@@ -577,6 +577,7 @@ void tty_buffer_init(struct tty_port *port)
 {
 	struct tty_bufhead *buf = &port->buf;
 
+	mutex_init(&buf->flush_mtx);
 	mutex_init(&buf->lock);
 	tty_buffer_reset(&buf->sentinel, 0);
 	buf->head = &buf->sentinel;
diff --git a/include/linux/tty_buffer.h b/include/linux/tty_buffer.h
index 31125e3be3c55e..cea4eacc3b70d3 100644
--- a/include/linux/tty_buffer.h
+++ b/include/linux/tty_buffer.h
@@ -35,6 +35,7 @@ static inline u8 *flag_buf_ptr(struct tty_buffer *b, unsigned int ofs)
 struct tty_bufhead {
 	struct tty_buffer *head;	/* Queue head */
 	struct work_struct work;
+	struct mutex flush_mtx;		/* For use in tty_buffer_flush() */
 	struct mutex lock;
 	atomic_t priority;
 	struct tty_buffer sentinel;