workqueue: better define synchronization rule around rescuer->pool updates
authorLai Jiangshan <laijs@cn.fujitsu.com>
Tue, 19 Feb 2013 20:17:02 +0000 (12:17 -0800)
committerTejun Heo <tj@kernel.org>
Mon, 4 Mar 2013 17:44:58 +0000 (09:44 -0800)
Rescuers visit different worker_pools to process work items from pools
under pressure.  Currently, rescuer->pool is updated outside any
locking and when an outsider looks at a rescuer, there's no way to
tell when and whether rescuer->pool is going to change.  While this
doesn't currently cause any problem, it is nasty.

With recent worker_maybe_bind_and_lock() changes, we can move
rescuer->pool updates inside pool locks such that if rescuer->pool
equals a locked pool, it's guaranteed to stay that way until the pool
is unlocked.

Move rescuer->pool updates inside pool->lock.

This patch doesn't introduce any visible behavior difference.

tj: Updated the description.

Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
kernel/workqueue.c
kernel/workqueue_internal.h

index 09545d4..fd9a28a 100644 (file)
@@ -2357,8 +2357,8 @@ repeat:
                mayday_clear_cpu(cpu, wq->mayday_mask);
 
                /* migrate to the target cpu if possible */
-               rescuer->pool = pool;
                worker_maybe_bind_and_lock(pool);
+               rescuer->pool = pool;
 
                /*
                 * Slurp in all works issued via this workqueue and
@@ -2379,6 +2379,7 @@ repeat:
                if (keep_working(pool))
                        wake_up_worker(pool);
 
+               rescuer->pool = NULL;
                spin_unlock_irq(&pool->lock);
        }
 
index 0765026..f9c8877 100644 (file)
@@ -32,6 +32,7 @@ struct worker {
        struct list_head        scheduled;      /* L: scheduled works */
        struct task_struct      *task;          /* I: worker task */
        struct worker_pool      *pool;          /* I: the associated pool */
+                                               /* L: for rescuers */
        /* 64 bytes boundary on 64bit, 32 on 32bit */
        unsigned long           last_active;    /* L: last active timestamp */
        unsigned int            flags;          /* X: flags */