Message ID: 20180504171701.6876-1-paolo.valente@linaro.org
State:      Accepted
Commit:     18e5a57d79878b205d39b2f160082d9098e9bfd6
Series:     [BUGFIX] block, bfq: postpone rq preparation to insert or merge
Tentatively, I suspect you've just fixed the nasty stalls I reported a
while back.  Not a hint of stall as yet (should have shown itself by
now), spinning rust buckets are being all they can be, box feels good.

Later mq-deadline (I hope to eventually forget the module dependency
eternities we've spent together;), welcome back bfq (maybe.. I hope).

	-Mike
On Fri, 2018-05-04 at 21:46 +0200, Mike Galbraith wrote:
> Tentatively, I suspect you've just fixed the nasty stalls I reported a
> while back.

Oh well, so much for optimism.  It took a lot, but just hung.
> Il giorno 05 mag 2018, alle ore 10:19, Mike Galbraith <efault@gmx.de> ha scritto:
> 
> On Fri, 2018-05-04 at 21:46 +0200, Mike Galbraith wrote:
>> Tentatively, I suspect you've just fixed the nasty stalls I reported a
>> while back.
> 
> Oh well, so much for optimism.  It took a lot, but just hung.

Yep, that would have been a lot of luck, given that your hang is related
to operations different from those touched by this fix.  Maybe the
time-before-failure stretched because your system suffered from the
illness cured by this fix too.

BTW, if you didn't run out of patience with this permanent issue yet, I
was thinking of two or three changes to retry to trigger your failure
reliably.  In case of success, I could restart racking my brains from
there.

Thanks,
Paolo
On Sat, 2018-05-05 at 12:39 +0200, Paolo Valente wrote:
> 
> BTW, if you didn't run out of patience with this permanent issue yet,
> I was thinking of two or three changes to retry to trigger your failure
> reliably.

Sure, fire away, I'll happily give the annoying little bugger
opportunities to show its tender belly.

	-Mike
Hi.

On 04.05.2018 19:17, Paolo Valente wrote:
> When invoked for an I/O request rq, the prepare_request hook of bfq
> increments reference counters in the destination bfq_queue for rq. In
> this respect, after this hook has been invoked, rq may still be
> transformed into a request with no icq attached, i.e., for bfq, a
> request not associated with any bfq_queue.
> [...]

No harm is observed on both the test VM with the smartctl hammer and my
laptop.  So,

Tested-by: Oleksandr Natalenko <oleksandr@natalenko.name>

Thanks.

-- 
  Oleksandr Natalenko (post-factum)
> Il giorno 05 mag 2018, alle ore 16:56, Mike Galbraith <efault@gmx.de> ha scritto:
> 
> On Sat, 2018-05-05 at 12:39 +0200, Paolo Valente wrote:
>> 
>> BTW, if you didn't run out of patience with this permanent issue yet,
>> I was thinking of two or three changes to retry to trigger your failure
>> reliably.
> 
> Sure, fire away, I'll happily give the annoying little bugger
> opportunities to show its tender belly.

I've attached a compressed patch (to avoid possible corruption from my
mailer).  I'm not very confident, but no pain, no gain, right?

If possible, apply this patch on top of the fix I proposed in this
thread, just to eliminate possible further noise.  Finally, the patch
content follows.

Hoping for a stroke of luck,
Paolo

diff --git a/block/bfq-mq-iosched.c b/block/bfq-mq-iosched.c
index 118f319af7c0..6662efe29b69 100644
--- a/block/bfq-mq-iosched.c
+++ b/block/bfq-mq-iosched.c
@@ -525,8 +525,13 @@ static void bfq_limit_depth(unsigned int op, struct blk_mq_alloc_data *data)
 	if (unlikely(bfqd->sb_shift != bt->sb.shift))
 		bfq_update_depths(bfqd, bt);
 
+#if 0
 	data->shallow_depth =
 		bfqd->word_depths[!!bfqd->wr_busy_queues][op_is_sync(op)];
+#else
+	data->shallow_depth = 1;
+#endif
+
 	bfq_log(bfqd, "wr_busy %d sync %d depth %u",
 		bfqd->wr_busy_queues, op_is_sync(op),

> 
> -Mike
On Sun, 2018-05-06 at 09:42 +0200, Paolo Valente wrote:
> 
> I've attached a compressed patch (to avoid possible corruption from my
> mailer).  I'm not very confident, but no pain, no gain, right?
> 
> If possible, apply this patch on top of the fix I proposed in this
> thread, just to eliminate possible further noise.  Finally, the
> patch content follows.
> 
> Hoping for a stroke of luck,

FWIW, box didn't survive the first full build of the morning.

> Paolo
> 
> diff --git a/block/bfq-mq-iosched.c b/block/bfq-mq-iosched.c
> index 118f319af7c0..6662efe29b69 100644
> --- a/block/bfq-mq-iosched.c
> +++ b/block/bfq-mq-iosched.c

That doesn't exist in master, so I applied it like so.

---
 block/bfq-iosched.c |    4 ++++
 1 file changed, 4 insertions(+)

--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -554,8 +554,12 @@ static void bfq_limit_depth(unsigned int
 	if (unlikely(bfqd->sb_shift != bt->sb.shift))
 		bfq_update_depths(bfqd, bt);
 
+#if 0
 	data->shallow_depth =
 		bfqd->word_depths[!!bfqd->wr_busy_queues][op_is_sync(op)];
+#else
+	data->shallow_depth = 1;
+#endif
 
 	bfq_log(bfqd, "[%s] wr_busy %d sync %d depth %u", __func__,
 		bfqd->wr_busy_queues, op_is_sync(op),
On Mon, 2018-05-07 at 04:43 +0200, Mike Galbraith wrote:
> On Sun, 2018-05-06 at 09:42 +0200, Paolo Valente wrote:
>> 
>> I've attached a compressed patch (to avoid possible corruption from my
>> mailer).  I'm not very confident, but no pain, no gain, right?
>> 
>> If possible, apply this patch on top of the fix I proposed in this
>> thread, just to eliminate possible further noise.  Finally, the
>> patch content follows.
>> 
>> Hoping for a stroke of luck,
> 
> FWIW, box didn't survive the first full build of the morning.

Nor the second.

	-Mike
On Sun, 2018-05-06 at 09:42 +0200, Paolo Valente wrote:
> 
> diff --git a/block/bfq-mq-iosched.c b/block/bfq-mq-iosched.c
> index 118f319af7c0..6662efe29b69 100644
> --- a/block/bfq-mq-iosched.c
> +++ b/block/bfq-mq-iosched.c
> @@ -525,8 +525,13 @@ static void bfq_limit_depth(unsigned int op, struct blk_mq_alloc_data *data)
> 	if (unlikely(bfqd->sb_shift != bt->sb.shift))
> 		bfq_update_depths(bfqd, bt);
> 
> +#if 0
> 	data->shallow_depth =
> 		bfqd->word_depths[!!bfqd->wr_busy_queues][op_is_sync(op)];
                                                          ^^^^^^^^^^^^^

Q: why doesn't the top of this function look like so?

---
 block/bfq-iosched.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -539,7 +539,7 @@ static void bfq_limit_depth(unsigned int
 	struct bfq_data *bfqd = data->q->elevator->elevator_data;
 	struct sbitmap_queue *bt;
 
-	if (op_is_sync(op) && !op_is_write(op))
+	if (!op_is_write(op))
 		return;
 
 	if (data->flags & BLK_MQ_REQ_RESERVED) {

It looks a bit odd that these elements exist...

+	/*
+	 * no more than 75% of tags for sync writes (25% extra tags
+	 * w.r.t. async I/O, to prevent async I/O from starving sync
+	 * writes)
+	 */
+	bfqd->word_depths[0][1] = max(((1U<<bfqd->sb_shift) * 3)>>2, 1U);
+
+	/* no more than ~37% of tags for sync writes (~20% extra tags) */
+	bfqd->word_depths[1][1] = max(((1U<<bfqd->sb_shift) * 6)>>4, 1U);

...yet we index via and log a guaranteed zero.

	-Mike
> Il giorno 07 mag 2018, alle ore 07:56, Mike Galbraith <efault@gmx.de> ha scritto:
> 
> On Sun, 2018-05-06 at 09:42 +0200, Paolo Valente wrote:
>> 
>> +#if 0
>> 	data->shallow_depth =
>> 		bfqd->word_depths[!!bfqd->wr_busy_queues][op_is_sync(op)];
>                                                           ^^^^^^^^^^^^^
> 
> Q: why doesn't the top of this function look like so?
> 
> -	if (op_is_sync(op) && !op_is_write(op))
> +	if (!op_is_write(op))
> 		return;
> 
> It looks a bit odd that these elements exist...
> [...]
> ...yet we index via and log a guaranteed zero.

I'm not sure I got your point, so, to help you help me quickly, I'll
repeat what I expect the code you highlight to do:

- sync reads must have no limitation, and the lines

	if (op_is_sync(op) && !op_is_write(op))
		return;

  make sure they don't

- sync writes must be limited, and the code you pasted above computes
  those limits

- for sync writes, for which op_is_sync(op) is true (but the condition
  "op_is_sync(op) && !op_is_write(op)" is false), the line

	bfqd->word_depths[!!bfqd->wr_busy_queues][op_is_sync(op)];

  becomes

	bfqd->word_depths[!!bfqd->wr_busy_queues][1];

  and yields the right limit for sync writes, depending on
  bfqd->wr_busy_queues.

Where is the bug?

Thanks,
Paolo
> Il giorno 07 mag 2018, alle ore 05:23, Mike Galbraith <efault@gmx.de> ha scritto:
> 
> On Mon, 2018-05-07 at 04:43 +0200, Mike Galbraith wrote:
>> On Sun, 2018-05-06 at 09:42 +0200, Paolo Valente wrote:
>>> [...]
>>> Hoping for a stroke of luck,
>> 
>> FWIW, box didn't survive the first full build of the morning.
> 
> Nor the second.

Great, finally the first good news!  Help from blk-mq experts would be
fundamental here.  To increase the chances of getting it, I'm going to
open a new thread on this.  In that thread I'll ask you to provide an
oops or something; I hope it is now easier for you to get one.

Thanks,
Paolo
On Mon, 2018-05-07 at 11:27 +0200, Paolo Valente wrote:
> 
> Where is the bug?

Hm, seems potent pain-killers and C don't mix all that well.
> Il giorno 07 mag 2018, alle ore 12:01, Mike Galbraith <efault@gmx.de> ha scritto:
> 
> On Mon, 2018-05-07 at 11:27 +0200, Paolo Valente wrote:
>> 
>> Where is the bug?
> 
> Hm, seems potent pain-killers and C don't mix all that well.

I'll try to keep it in mind, and I do wish you get well soon.

Thanks,
Paolo
On Fri, 2018-05-04 at 19:17 +0200, Paolo Valente wrote:
> When invoked for an I/O request rq, [ ... ]

Tested-by: Bart Van Assche <bart.vanassche@wdc.com>
> Il giorno 10 mag 2018, alle ore 18:14, Bart Van Assche <bart.vanassche@wdc.com> ha scritto:
> 
> On Fri, 2018-05-04 at 19:17 +0200, Paolo Valente wrote:
>> When invoked for an I/O request rq, [ ... ]
> 
> Tested-by: Bart Van Assche <bart.vanassche@wdc.com>

Any decision for this fix, Jens?

Thanks,
Paolo
On 5/14/18 11:16 AM, Paolo Valente wrote:
>> Il giorno 10 mag 2018, alle ore 18:14, Bart Van Assche <bart.vanassche@wdc.com> ha scritto:
>> 
>> On Fri, 2018-05-04 at 19:17 +0200, Paolo Valente wrote:
>>> When invoked for an I/O request rq, [ ... ]
>> 
>> Tested-by: Bart Van Assche <bart.vanassche@wdc.com>
> 
> Any decision for this fix, Jens?

Guess I didn't reply, but I did commit this on Thursday.

-- 
Jens Axboe
> Il giorno 14 mag 2018, alle ore 19:31, Jens Axboe <axboe@kernel.dk> ha scritto:
> 
> On 5/14/18 11:16 AM, Paolo Valente wrote:
>> 
>> Any decision for this fix, Jens?
> 
> Guess I didn't reply, but I did commit this on Thursday.

Great, thank you!

Paolo
diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index 771ae9730ac6..ea02162df6c7 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -1858,6 +1858,8 @@ static int bfq_request_merge(struct request_queue *q, struct request **req,
 	return ELEVATOR_NO_MERGE;
 }
 
+static struct bfq_queue *bfq_init_rq(struct request *rq);
+
 static void bfq_request_merged(struct request_queue *q, struct request *req,
 			       enum elv_merge type)
 {
@@ -1866,7 +1868,7 @@ static void bfq_request_merged(struct request_queue *q, struct request *req,
 	    blk_rq_pos(req) <
 	    blk_rq_pos(container_of(rb_prev(&req->rb_node),
 				    struct request, rb_node))) {
-		struct bfq_queue *bfqq = RQ_BFQQ(req);
+		struct bfq_queue *bfqq = bfq_init_rq(req);
 		struct bfq_data *bfqd = bfqq->bfqd;
 		struct request *prev, *next_rq;
 
@@ -1894,7 +1896,8 @@ static void bfq_request_merged(struct request_queue *q, struct request *req,
 static void bfq_requests_merged(struct request_queue *q, struct request *rq,
 				struct request *next)
 {
-	struct bfq_queue *bfqq = RQ_BFQQ(rq), *next_bfqq = RQ_BFQQ(next);
+	struct bfq_queue *bfqq = bfq_init_rq(rq),
+		*next_bfqq = bfq_init_rq(next);
 
 	if (!RB_EMPTY_NODE(&rq->rb_node))
 		goto end;
@@ -4540,14 +4543,12 @@ static inline void bfq_update_insert_stats(struct request_queue *q,
 					   unsigned int cmd_flags) {}
 #endif
 
-static void bfq_prepare_request(struct request *rq, struct bio *bio);
-
 static void bfq_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
 			       bool at_head)
 {
 	struct request_queue *q = hctx->queue;
 	struct bfq_data *bfqd = q->elevator->elevator_data;
-	struct bfq_queue *bfqq = RQ_BFQQ(rq);
+	struct bfq_queue *bfqq;
 	bool idle_timer_disabled = false;
 	unsigned int cmd_flags;
 
@@ -4562,24 +4563,13 @@ static void bfq_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
 	blk_mq_sched_request_inserted(rq);
 
 	spin_lock_irq(&bfqd->lock);
+	bfqq = bfq_init_rq(rq);
 	if (at_head || blk_rq_is_passthrough(rq)) {
 		if (at_head)
 			list_add(&rq->queuelist, &bfqd->dispatch);
 		else
 			list_add_tail(&rq->queuelist, &bfqd->dispatch);
-	} else {
-		if (WARN_ON_ONCE(!bfqq)) {
-			/*
-			 * This should never happen. Most likely rq is
-			 * a requeued regular request, being
-			 * re-inserted without being first
-			 * re-prepared. Do a prepare, to avoid
-			 * failure.
-			 */
-			bfq_prepare_request(rq, rq->bio);
-			bfqq = RQ_BFQQ(rq);
-		}
-
+	} else { /* bfqq is assumed to be non null here */
 		idle_timer_disabled = __bfq_insert_request(bfqd, rq);
 		/*
 		 * Update bfqq, because, if a queue merge has occurred
@@ -4922,11 +4912,48 @@ static struct bfq_queue *bfq_get_bfqq_handle_split(struct bfq_data *bfqd,
 }
 
 /*
- * Allocate bfq data structures associated with this request.
+ * Only reset private fields. The actual request preparation will be
+ * performed by bfq_init_rq, when rq is either inserted or merged. See
+ * comments on bfq_init_rq for the reason behind this delayed
+ * preparation.
  */
 static void bfq_prepare_request(struct request *rq, struct bio *bio)
+{
+	/*
+	 * Regardless of whether we have an icq attached, we have to
+	 * clear the scheduler pointers, as they might point to
+	 * previously allocated bic/bfqq structs.
+	 */
+	rq->elv.priv[0] = rq->elv.priv[1] = NULL;
+}
+
+/*
+ * If needed, init rq, allocate bfq data structures associated with
+ * rq, and increment reference counters in the destination bfq_queue
+ * for rq. Return the destination bfq_queue for rq, or NULL if rq is
+ * not associated with any bfq_queue.
+ *
+ * This function is invoked by the functions that perform rq insertion
+ * or merging. One may have expected the above preparation operations
+ * to be performed in bfq_prepare_request, and not delayed to when rq
+ * is inserted or merged. The rationale behind this delayed
+ * preparation is that, after the prepare_request hook is invoked for
+ * rq, rq may still be transformed into a request with no icq, i.e., a
+ * request not associated with any queue. No bfq hook is invoked to
+ * signal this transformation. As a consequence, should these
+ * preparation operations be performed when the prepare_request hook
+ * is invoked, and should rq be transformed one moment later, bfq
+ * would end up in an inconsistent state, because it would have
+ * incremented some queue counters for an rq destined to
+ * transformation, without any chance to correctly lower these
+ * counters back. In contrast, no transformation can happen any longer
+ * for rq after rq has been inserted or merged. So, it is safe to
+ * execute these preparation operations when rq is finally inserted
+ * or merged.
+ */
+static struct bfq_queue *bfq_init_rq(struct request *rq)
 {
 	struct request_queue *q = rq->q;
+	struct bio *bio = rq->bio;
 	struct bfq_data *bfqd = q->elevator->elevator_data;
 	struct bfq_io_cq *bic;
 	const int is_sync = rq_is_sync(rq);
@@ -4934,20 +4961,21 @@ static void bfq_prepare_request(struct request *rq, struct bio *bio)
 	bool new_queue = false;
 	bool bfqq_already_existing = false, split = false;
 
+	if (unlikely(!rq->elv.icq))
+		return NULL;
+
 	/*
-	 * Even if we don't have an icq attached, we should still clear
-	 * the scheduler pointers, as they might point to previously
-	 * allocated bic/bfqq structs.
+	 * Assuming that elv.priv[1] is set only if everything is set
+	 * for this rq. This holds true, because this function is
+	 * invoked only for insertion or merging, and, after such
+	 * events, a request cannot be manipulated any longer before
+	 * being removed from bfq.
 	 */
-	if (!rq->elv.icq) {
-		rq->elv.priv[0] = rq->elv.priv[1] = NULL;
-		return;
-	}
+	if (rq->elv.priv[1])
+		return rq->elv.priv[1];
 
 	bic = icq_to_bic(rq->elv.icq);
 
-	spin_lock_irq(&bfqd->lock);
-
 	bfq_check_ioprio_change(bic, bio);
 
 	bfq_bic_update_cgroup(bic, bio);
@@ -5006,7 +5034,7 @@ static void bfq_prepare_request(struct request *rq, struct bio *bio)
 	if (unlikely(bfq_bfqq_just_created(bfqq)))
 		bfq_handle_burst(bfqd, bfqq);
 
-	spin_unlock_irq(&bfqd->lock);
+	return bfqq;
 }
 
 static void bfq_idle_slice_timer_body(struct bfq_queue *bfqq)
When invoked for an I/O request rq, the prepare_request hook of bfq
increments reference counters in the destination bfq_queue for rq. In
this respect, after this hook has been invoked, rq may still be
transformed into a request with no icq attached, i.e., for bfq, a
request not associated with any bfq_queue. No further hook is invoked
to signal this transformation to bfq (in general, to the destination
elevator for rq). This leads bfq into an inconsistent state, because
bfq has no chance to correctly lower these counters back. This
inconsistency may in its turn cause incorrect scheduling and hangs. It
certainly causes memory leaks, by making it impossible for bfq to free
the involved bfq_queue.

On the bright side, no transformation can happen any longer for rq
after rq has been inserted into bfq, or merged with another, already
inserted, request. Exploiting this fact, this commit addresses the
above issue by delaying the preparation of an I/O request to when the
request is inserted or merged.

This change also gives a performance bonus: a lock-contention point
gets removed. To prepare a request, bfq needs to hold its scheduler
lock. After postponing request preparation to insertion or merging, no
lock needs to be grabbed any longer in the prepare_request hook, while
the lock already taken to perform insertion or merging is used to
prepare the request as well.

Signed-off-by: Paolo Valente <paolo.valente@linaro.org>
---
 block/bfq-iosched.c | 86 +++++++++++++++++++++++++++++++++++------------------
 1 file changed, 57 insertions(+), 29 deletions(-)

-- 
2.16.1