tmpfs: change final i_blocks BUG to WARNING
author Hugh Dickins <hughd@google.com>
Fri, 16 Nov 2012 22:15:04 +0000 (14:15 -0800)
committer Linus Torvalds <torvalds@linux-foundation.org>
Fri, 16 Nov 2012 22:33:04 +0000 (14:33 -0800)
commit 0f3c42f522dc1ad7e27affc0a4aa8c790bce0a66
tree bfd85ce2e0fede607ad196258d8071c67897d9eb
parent 215c02bc33bbd5ff4d7379a909462d11f0103218
tmpfs: change final i_blocks BUG to WARNING

Under a particular load on one machine, I have hit shmem_evict_inode()'s
BUG_ON(inode->i_blocks) enough times to narrow it down to a particular
race between swapout and eviction.

It comes from the "if (freed > 0)" asymmetry in shmem_recalc_inode(),
and the lack of coherent locking between the mapping's nrpages and
shmem's swapped count.  There is a window in shmem_writepage(), between
lowering nrpages in shmem_delete_from_page_cache() and raising the
swapped count, during which the freed count appears to be +1 when it
should be 0; the asymmetry then stops it from being corrected with -1
before the BUG is hit.
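
For reference, a simplified sketch of that asymmetry (field names as in
mm/shmem.c; the real function also returns the freed blocks to the
superblock's used_blocks percpu counter, which is omitted here):

static void shmem_recalc_inode(struct inode *inode)
{
	struct shmem_inode_info *info = SHMEM_I(inode);
	long freed;

	/*
	 * nrpages and swapped are not updated under one lock, so in
	 * shmem_writepage()'s window this can see nrpages already
	 * lowered but swapped not yet raised: freed comes out as +1.
	 */
	freed = info->alloced - info->swapped - inode->i_mapping->nrpages;
	if (freed > 0) {
		info->alloced -= freed;
		inode->i_blocks -= freed * BLOCKS_PER_PAGE;
	}
	/*
	 * No branch for freed < 0: the -1 that would undo the spurious
	 * +1 is never applied, so i_blocks can still be nonzero by the
	 * time shmem_evict_inode() checks it.
	 */
}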

One answer is coherent locking: using tree_lock throughout, without
info->lock; reasonable, but the raw_spin_lock in percpu_counter_add() on
used_blocks makes that messier than expected.  Another answer may be a
further effort to eliminate the weird shmem_recalc_inode() altogether,
but previous attempts at that failed.

So far undecided, but for now change the BUG_ON to WARN_ON: in usual
circumstances it remains a useful consistency check.
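
Sketched against the final i_blocks check in shmem_evict_inode(), the
change amounts to (context trimmed, not the verbatim hunk):

	WARN_ON(inode->i_blocks);	/* was: BUG_ON(inode->i_blocks); */

WARN_ON() reports the inconsistency with a backtrace and lets eviction
continue, where BUG_ON() would oops: the check stays visible for
debugging without being fatal to the machine over an accounting blip.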

Signed-off-by: Hugh Dickins <hughd@google.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
mm/shmem.c