Message ID | Y1EQdafQlKNAsutk@T590
---|---
State | New
Series | [Bug] double ->queue_rq() because of timeout in ->queue_rq()
On Fri, Oct 21, 2022 at 10:23:57AM +0800, Ming Lei wrote:
> On Thu, Oct 20, 2022 at 04:01:11PM -0400, Stefan Hajnoczi wrote:
> > On Thu, Oct 20, 2022 at 05:10:13PM +0800, Ming Lei wrote:
> > > Hi,
> > >
> > > David Jeffery found one double ->queue_rq() issue, so far it can
> > > be triggered in the following two cases:
> > >
> > > 1) scsi driver in guest kernel
> > >
> > > - the story could be long vmexit latency or long preempt latency of
> > > vCPU pthread, then IO req is timed out before queuing the request
> > > to hardware but after calling blk_mq_start_request() during ->queue_rq(),
> > > then timeout handler handles it by requeue, then double ->queue_rq() is
> > > caused, and kernel panic
> > >
> > > 2) burst of kernel messages from irq handler
> > >
> > > For 1), I think it is one reasonable case, given latency from host side
> > > can come anytime in theory because vCPU is emulated by one normal host
> > > pthread which can be preempted anywhere. For 2), I guess kernel message is
> > > supposed to be rate limited.
> > >
> > > Firstly, is this kind of so long(30sec) random latency when running kernel
> > > code something normal? Or do we need to take care of it? IMO, it looks
> > > reasonable in case of VM, but our VM experts may have better idea about this
> > > situation. Also the default 30sec timeout could be reduced via sysfs or
> > > drivers.
> >
> > 30 seconds is a long latency that does not occur during normal
> > operation, but unfortunately does happen on occasion.
>
> Thanks for the confirmation!
>
> > I think there's an interest in understanding the root cause and solving
> > long latencies (if possible) in the QEMU/KVM communities. We can
> > investigate specific cases on kvm@vger.kernel.org and/or
> > qemu-devel@nongnu.org.
>
> The issue was original reported on VMware VM, but maybe David can figure
> out how to trigger it on QEMU/KVM.

A very basic question:

The virtio_blk driver has no q->mq_ops->timeout() callback. Why does the
block layer still enable the timeout mechanism when the driver doesn't
implement ->timeout()?

I saw there was some "idle" hctx logic and I guess the requests are
resubmitted (although it wasn't obvious to me how that happens in the
code)? Maybe that's why the timer is still used if the driver doesn't
care about timeouts...

Stefan
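To make case 1) above concrete: the window sits between blk_mq_start_request() and the actual hand-off to hardware inside the driver's ->queue_rq(). The sketch below is a simplified, hypothetical ->queue_rq() -- not the real scsi or virtio_blk code, and example_hw_submit() is an invented placeholder -- marking where a long vCPU preemption lets the timeout handler run first:

```c
#include <linux/blk-mq.h>

static void example_hw_submit(struct request *rq);	/* hypothetical device submit */

/*
 * Hypothetical ->queue_rq() sketch (not taken from any real driver)
 * showing the window in which the double ->queue_rq() can happen.
 */
static blk_status_t example_queue_rq(struct blk_mq_hw_ctx *hctx,
				     const struct blk_mq_queue_data *bd)
{
	struct request *rq = bd->rq;

	/* Arms the timeout: roughly rq->deadline = jiffies + rq->timeout. */
	blk_mq_start_request(rq);

	/*
	 * If the vCPU thread is preempted here for longer than the request
	 * timeout (30s by default), blk_mq_timeout_work() can already see
	 * rq as expired and, e.g. via the scsi timeout path, requeue it.
	 * The requeued copy is then dispatched through ->queue_rq() again
	 * while this invocation is still pending: a double ->queue_rq().
	 */

	example_hw_submit(rq);		/* hand the request to the device */
	return BLK_STS_OK;
}
```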
On Mon, Oct 24, 2022 at 11:30:39AM -0400, Stefan Hajnoczi wrote:
> On Fri, Oct 21, 2022 at 10:23:57AM +0800, Ming Lei wrote:
> > On Thu, Oct 20, 2022 at 04:01:11PM -0400, Stefan Hajnoczi wrote:
> > > On Thu, Oct 20, 2022 at 05:10:13PM +0800, Ming Lei wrote:
> > > > Hi,
> > > >
> > > > David Jeffery found one double ->queue_rq() issue, so far it can
> > > > be triggered in the following two cases:
> > > >
> > > > 1) scsi driver in guest kernel
> > > >
> > > > - the story could be long vmexit latency or long preempt latency of
> > > > vCPU pthread, then IO req is timed out before queuing the request
> > > > to hardware but after calling blk_mq_start_request() during ->queue_rq(),
> > > > then timeout handler handles it by requeue, then double ->queue_rq() is
> > > > caused, and kernel panic
> > > >
> > > > 2) burst of kernel messages from irq handler
> > > >
> > > > For 1), I think it is one reasonable case, given latency from host side
> > > > can come anytime in theory because vCPU is emulated by one normal host
> > > > pthread which can be preempted anywhere. For 2), I guess kernel message is
> > > > supposed to be rate limited.
> > > >
> > > > Firstly, is this kind of so long(30sec) random latency when running kernel
> > > > code something normal? Or do we need to take care of it? IMO, it looks
> > > > reasonable in case of VM, but our VM experts may have better idea about this
> > > > situation. Also the default 30sec timeout could be reduced via sysfs or
> > > > drivers.
> > >
> > > 30 seconds is a long latency that does not occur during normal
> > > operation, but unfortunately does happen on occasion.
> >
> > Thanks for the confirmation!
> >
> > > I think there's an interest in understanding the root cause and solving
> > > long latencies (if possible) in the QEMU/KVM communities. We can
> > > investigate specific cases on kvm@vger.kernel.org and/or
> > > qemu-devel@nongnu.org.
> >
> > The issue was original reported on VMware VM, but maybe David can figure
> > out how to trigger it on QEMU/KVM.
>
> A very basic question:
>
> The virtio_blk driver has no q->mq_ops->timeout() callback. Why does the
> block layer still enable the timeout mechanism when the driver doesn't
> implement ->timeout()?

No matter if ->timeout() is implemented or not, a request may still time
out, and it is better for the block layer to notice that and simply reset
the timer when there is no ->timeout().

>
> I saw there was some "idle" hctx logic and I guess the requests are

The timeout timer is also reused for idle hctx detection.

> resubmitted (although it wasn't obvious to me how that happens in the
> code)? Maybe that's why the timer is still used if the driver doesn't
> care about timeouts...

Timeout handling is decided entirely by the driver's ->timeout() callback.
If the driver doesn't implement ->timeout(), the request's timer is simply
reset.

Thanks,
Ming
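The behaviour Ming describes lives in blk_mq_rq_timed_out() in block/blk-mq.c, the same function whose tail appears in the first hunk of the patch below. Roughly, on the kernels discussed here it behaves as follows; the comments are added and the exact source should be checked against the tree:

```c
static void blk_mq_rq_timed_out(struct request *req)
{
	req->rq_flags |= RQF_TIMED_OUT;
	if (req->q->mq_ops->timeout) {
		enum blk_eh_timer_return ret;

		/* The driver decides: BLK_EH_DONE means "I handled it". */
		ret = req->q->mq_ops->timeout(req);
		if (ret == BLK_EH_DONE)
			return;
		WARN_ON_ONCE(ret != BLK_EH_RESET_TIMER);
	}

	/*
	 * No ->timeout() callback (e.g. virtio_blk), or the driver asked
	 * for BLK_EH_RESET_TIMER: just re-arm the deadline and keep waiting.
	 */
	blk_add_timer(req);
}
```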
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 8070b6c10e8d..ca57c060bb65 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1523,7 +1523,12 @@ static void blk_mq_rq_timed_out(struct request *req)
 	blk_add_timer(req);
 }
 
-static bool blk_mq_req_expired(struct request *rq, unsigned long *next)
+struct blk_expired_data {
+	unsigned long next;
+	unsigned long now;
+};
+
+static bool blk_mq_req_expired(struct request *rq, struct blk_expired_data *expired)
 {
 	unsigned long deadline;
 
@@ -1533,13 +1538,13 @@ static bool blk_mq_req_expired(struct request *rq, unsigned long *next)
 		return false;
 
 	deadline = READ_ONCE(rq->deadline);
-	if (time_after_eq(jiffies, deadline))
+	if (time_after_eq(expired->now, deadline))
 		return true;
 
-	if (*next == 0)
-		*next = deadline;
-	else if (time_after(*next, deadline))
-		*next = deadline;
+	if (expired->next == 0)
+		expired->next = deadline;
+	else if (time_after(expired->next, deadline))
+		expired->next = deadline;
 	return false;
 }
 
@@ -1555,7 +1560,7 @@ void blk_mq_put_rq_ref(struct request *rq)
 
 static bool blk_mq_check_expired(struct request *rq, void *priv)
 {
-	unsigned long *next = priv;
+	struct blk_expired_data *expired = priv;
 
 	/*
 	 * blk_mq_queue_tag_busy_iter() has locked the request, so it cannot
@@ -1564,7 +1569,7 @@ static bool blk_mq_check_expired(struct request *rq, void *priv)
 	 * it was completed and reallocated as a new request after returning
 	 * from blk_mq_check_expired().
 	 */
-	if (blk_mq_req_expired(rq, next))
+	if (blk_mq_req_expired(rq, expired))
 		blk_mq_rq_timed_out(rq);
 	return true;
 }
@@ -1573,7 +1578,7 @@ static void blk_mq_timeout_work(struct work_struct *work)
 {
 	struct request_queue *q =
 		container_of(work, struct request_queue, timeout_work);
-	unsigned long next = 0;
+	struct blk_expired_data expired = {.next = 0, .now = jiffies};
 	struct blk_mq_hw_ctx *hctx;
 	unsigned long i;
 
@@ -1593,10 +1598,17 @@ static void blk_mq_timeout_work(struct work_struct *work)
 	if (!percpu_ref_tryget(&q->q_usage_counter))
 		return;
 
-	blk_mq_queue_tag_busy_iter(q, blk_mq_check_expired, &next);
+	/* Before walking tags, we must ensure any submit started before the
+	 * current time has finished. Since the submit uses srcu or rcu, wait
+	 * for a synchronization point to ensure all running submits have
+	 * finished
+	 */
+	blk_mq_wait_quiesce_done(q);
+
+	blk_mq_queue_tag_busy_iter(q, blk_mq_check_expired, &expired);
 
-	if (next != 0) {
-		mod_timer(&q->timeout, next);
+	if (expired.next != 0) {
+		mod_timer(&q->timeout, expired.next);
 	} else {
 		/*
 		 * Request timeouts are handled as a forward rolling timer. If
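The key point of the last hunk is the combination of the jiffies snapshot taken before blk_mq_wait_quiesce_done() and the expiry check against that snapshot rather than against the current time. Restated below with added comments (a paraphrase of the patch's reasoning, not additional code):

```c
	/* Snapshot "now" before waiting for in-flight submissions. */
	struct blk_expired_data expired = {.next = 0, .now = jiffies};

	/*
	 * ->queue_rq() runs inside an RCU/SRCU read-side section, so after
	 * this wait every submission that had already started (and thus may
	 * have called blk_mq_start_request()) before the snapshot has
	 * returned.
	 */
	blk_mq_wait_quiesce_done(q);

	/*
	 * A request is treated as expired only if
	 *     time_after_eq(expired.now, rq->deadline)
	 * and deadline = start time + timeout, so an expired request must
	 * have been started before the snapshot, i.e. its ->queue_rq() has
	 * finished by the time the tags are walked.  A request started after
	 * the snapshot (possibly still stuck inside ->queue_rq()) cannot be
	 * timed out in this pass, which closes the double ->queue_rq() window.
	 */
	blk_mq_queue_tag_busy_iter(q, blk_mq_check_expired, &expired);
```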