mmu_gather: move minimal range calculations into generic code
author Will Deacon <will.deacon@arm.com>
Wed, 29 Oct 2014 10:03:09 +0000 (10:03 +0000)
committer Will Deacon <will.deacon@arm.com>
Mon, 17 Nov 2014 10:12:42 +0000 (10:12 +0000)
commit fb7332a9fedfd62b1ba6530c86f39f0fa38afd49
tree 5e77bd4944da750634c4438df64257cdeaa58888
parent 63648dd20fa0780ab6c1e923b5c276d257422cb3
mmu_gather: move minimal range calculations into generic code

On architectures with hardware broadcasting of TLB invalidation
messages, it makes sense to reduce the range of the mmu_gather
structure when unmapping page ranges based on the dirty address
information passed to tlb_remove_tlb_entry.

arm64 already does this by directly manipulating the start/end fields
of the gather structure, but this confuses the generic code which
does not expect these fields to change and can end up calculating
invalid, negative ranges when forcing a flush in zap_pte_range.

This patch moves the minimal range calculation out of the arm64 code
and into the generic implementation, simplifying zap_pte_range in the
process (which no longer needs to care about start/end, since they will
point to the appropriate ranges already). With the range being tracked
by core code, the need_flush flag is dropped in favour of checking that
the end of the range has actually been set.
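
As a rough sketch of the idea (helper and field names here are
illustrative and assume the usual asm-generic/tlb.h context; they may
not match the final patch exactly), the generic code can grow the
tracked range as entries are removed and skip the flush when nothing
was accumulated:

	static inline void __tlb_adjust_range(struct mmu_gather *tlb,
					      unsigned long address)
	{
		/* Grow the gathered range to cover this address. */
		tlb->start = min(tlb->start, address);
		tlb->end   = max(tlb->end, address + PAGE_SIZE);
	}

	/*
	 * Architectures still provide __tlb_remove_tlb_entry(); the
	 * generic wrapper records the range so callers such as
	 * zap_pte_range need not touch tlb->start/end themselves.
	 */
	#define tlb_remove_tlb_entry(tlb, ptep, address)		\
		do {							\
			__tlb_adjust_range(tlb, address);		\
			__tlb_remove_tlb_entry(tlb, ptep, address);	\
		} while (0)

	static inline void tlb_flush_mmu_tlbonly(struct mmu_gather *tlb)
	{
		/* An unset end means nothing was gathered: no flush needed. */
		if (!tlb->end)
			return;

		tlb_flush(tlb);
		/* Reset to an empty range for the next batch. */
		tlb->start = TASK_SIZE;
		tlb->end   = 0;
	}

Because the range is reset to an empty interval after each flush, a
non-zero tlb->end effectively takes over the role of the old
need_flush flag.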

Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Russell King - ARM Linux <linux@arm.linux.org.uk>
Cc: Michal Simek <monstr@monstr.eu>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
arch/arm64/include/asm/tlb.h
arch/microblaze/include/asm/tlb.h
arch/powerpc/include/asm/pgalloc.h
arch/powerpc/include/asm/tlb.h
arch/powerpc/mm/hugetlbpage.c
include/asm-generic/tlb.h
mm/memory.c