x86/power/64: Always create temporary identity mapping correctly
author	Rafael J. Wysocki <rafael.j.wysocki@intel.com>
	Mon, 8 Aug 2016 13:31:31 +0000 (15:31 +0200)
committer	Rafael J. Wysocki <rafael.j.wysocki@intel.com>
	Mon, 8 Aug 2016 20:04:30 +0000 (22:04 +0200)
commit	e4630fdd47637168927905983205d7b7c5c08c09
tree	3528e218e396d17ab5db1e74346e39b59424ec74
parent	c226fab474291e3c6ac5fa30a2b0778acc311e61
x86/power/64: Always create temporary identity mapping correctly

The low-level resume-from-hibernation code on x86-64 uses
kernel_ident_mapping_init() to create the temporary identity mapping,
but that function assumes that the offset between kernel virtual
addresses and physical addresses is aligned on the PGD level.

However, with a randomized identity mapping base, it may be aligned
on the PUD level and if that happens, the temporary identity mapping
created by set_up_temporary_mappings() will not reflect the actual
kernel identity mapping and the image restoration will fail as a
result (leading to a kernel panic most of the time).

To fix this problem, rework kernel_ident_mapping_init() to support
unaligned offsets between KVA and PA up to the PMD level and make
set_up_temporary_mappings() use it as appropriate.

Reported-and-tested-by: Thomas Garnier <thgarnie@google.com>
Reported-by: Borislav Petkov <bp@suse.de>
Suggested-by: Yinghai Lu <yinghai@kernel.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Yinghai Lu <yinghai@kernel.org>
arch/x86/include/asm/init.h
arch/x86/mm/ident_map.c
arch/x86/power/hibernate_64.c