block: block new I/O just after queue is set as dying
author Ming Lei <tom.leiming@gmail.com>
Mon, 27 Mar 2017 12:06:58 +0000 (20:06 +0800)
committer Jens Axboe <axboe@fb.com>
Wed, 29 Mar 2017 14:03:42 +0000 (08:03 -0600)
Before commit 780db2071a ("blk-mq: decouble blk-mq freezing
from generic bypassing"), the dying flag was checked before
entering the queue. Tejun converted that check into one on
.mq_freeze_depth, assuming the counter is increased just after
the dying flag is set. Unfortunately blk_set_queue_dying()
doesn't do that.
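
A simplified sketch of the entry gate (based on the 4.11-era
blk_queue_enter(); details may differ from the exact tree state)
shows why this matters: nothing fails the fast path when only the
dying flag is set, so new I/O keeps slipping through.

    int blk_queue_enter(struct request_queue *q, bool nowait)
    {
            while (true) {
                    /* Fast path: succeeds until q_usage_counter is killed. */
                    if (percpu_ref_tryget_live(&q->q_usage_counter))
                            return 0;

                    if (nowait)
                            return -EBUSY;

                    /* Slow path: wait for unfreeze, or for the queue to die. */
                    wait_event(q->mq_freeze_wq,
                               !atomic_read(&q->mq_freeze_depth) ||
                               blk_queue_dying(q));
                    if (blk_queue_dying(q))
                            return -ENODEV;
            }
    }

Setting QUEUE_FLAG_DYING alone changes neither q_usage_counter nor
.mq_freeze_depth, so the fast path above keeps succeeding.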

This patch calls blk_freeze_queue_start() in blk_set_queue_dying(),
so that new I/O is blocked from entering once the queue is set as
dying.

Given blk_set_queue_dying() is always called in the removal path
of a block device, and the queue will be cleaned up later, we don't
need to worry about undoing the counter.
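
As an illustration (a hypothetical driver removal path, not code
from this patch), the extra freeze reference taken here is simply
absorbed by the normal teardown sequence:

    /* Sketch: typical removal flow for a block device. */
    static void example_remove(struct request_queue *q)
    {
            blk_set_queue_dying(q);  /* now also calls blk_freeze_queue_start() */
            blk_cleanup_queue(q);    /* freezes, drains and releases the queue */
    }

Since the queue never comes back to life, the freeze depth is never
decremented and no matching unfreeze is required.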

Cc: Tejun Heo <tj@kernel.org>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Ming Lei <tom.leiming@gmail.com>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Reviewed-by: Bart Van Assche <bart.vanassche@sandisk.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
block/blk-core.c

index 7b66f76f9cff6123732c2c4ab6130b2d094c6a60..43b7d06ced695b7d0fef474be402bfee360413ec 100644
@@ -500,6 +500,13 @@ void blk_set_queue_dying(struct request_queue *q)
        queue_flag_set(QUEUE_FLAG_DYING, q);
        spin_unlock_irq(q->queue_lock);
 
+       /*
+        * When queue DYING flag is set, we need to block new req
+        * entering queue, so we call blk_freeze_queue_start() to
+        * prevent I/O from crossing blk_queue_enter().
+        */
+       blk_freeze_queue_start(q);
+
        if (q->mq_ops)
                blk_mq_wake_waiters(q);
        else {
@@ -672,9 +679,9 @@ int blk_queue_enter(struct request_queue *q, bool nowait)
                /*
                 * read pair of barrier in blk_freeze_queue_start(),
                 * we need to order reading __PERCPU_REF_DEAD flag of
-                * .q_usage_counter and reading .mq_freeze_depth,
-                * otherwise the following wait may never return if the
-                * two reads are reordered.
+                * .q_usage_counter and reading .mq_freeze_depth or
+                * queue dying flag, otherwise the following wait may
+                * never return if the two reads are reordered.
                 */
                smp_rmb();
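
For context, the write side this smp_rmb() pairs with sits in
blk_freeze_queue_start(); roughly (a sketch of the 4.11-era code,
simplified):

    void blk_freeze_queue_start(struct request_queue *q)
    {
            int freeze_depth;

            /*
             * atomic_inc_return() implies a full barrier, ordering the
             * freeze-depth increment before the DEAD flag store below.
             */
            freeze_depth = atomic_inc_return(&q->mq_freeze_depth);
            if (freeze_depth == 1) {
                    /* Marks __PERCPU_REF_DEAD; tryget_live() now fails. */
                    percpu_ref_kill(&q->q_usage_counter);
                    blk_mq_run_hw_queues(q, false);
            }
    }

A reader that observes __PERCPU_REF_DEAD and then issues smp_rmb()
is thus guaranteed to also observe the updated .mq_freeze_depth (or
the dying flag), so the wait in blk_queue_enter() cannot miss its
wake-up condition.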