https://github.com/torvalds/linux
Revision 0d3e4d4fade6b04e933b11e69e80044f35e9cd60 authored by Marc Zyngier on 05 January 2015, 21:13:24 UTC, committed by Christoffer Dall on 29 January 2015, 22:24:57 UTC
When handling a fault in stage-2, we need to resync I$ and D$, just
to be sure we don't leave any old cache line behind.

That's very good, except that we do so using the *user* address.
Under heavy load (swapping like crazy), we may end up in a situation
where the page gets mapped in stage-2 while being unmapped from
userspace by another CPU.

At that point, the DC/IC instructions can generate a fault, which
we end up handling with kvm->mmu_lock held. The box quickly
deadlocks, and the user is unhappy.

Instead, perform this invalidation through the kernel mapping,
which is guaranteed to be present. The box is much happier, and so
am I.
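
The idea can be illustrated with a simplified, hypothetical sketch (not the literal patch; function and helper names are approximate): instead of invalidating via the userspace virtual address, which another CPU may unmap concurrently, walk the guest page frame by frame through its kernel mapping, which cannot disappear while we hold it.

```c
/*
 * Hypothetical sketch of the approach, for illustration only.
 * Invalidate caches through the kernel mapping of each page,
 * which is guaranteed to be present, rather than the userspace
 * alias, which may be unmapped under our feet.
 */
static void coherent_cache_guest_page(pfn_t pfn, unsigned long size)
{
	while (size) {
		/* Kernel virtual address for this page; always valid. */
		void *va = kmap_atomic(pfn_to_page(pfn));

		flush_icache_range((unsigned long)va,
				   (unsigned long)va + PAGE_SIZE);

		kunmap_atomic(va);
		pfn++;
		size -= PAGE_SIZE;
	}
}
```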

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
1 parent 363ef89
arm/arm64: KVM: Use kernel mapping to perform invalidation on page fault
gcc-goto.sh
#!/bin/sh
# Test for gcc 'asm goto' support
# Copyright (C) 2010, Jason Baron <jbaron@redhat.com>

cat << "END" | $@ -x c - -c -o /dev/null >/dev/null 2>&1 && echo "y"
int main(void)
{
#if defined(__arm__) || defined(__aarch64__)
	/*
	 * Not related to asm goto, but used by jump label
	 * and broken on some ARM GCC versions (see GCC Bug 48637).
	 */
	static struct { int dummy; int state; } tp;
	asm (".long %c0" :: "i" (&tp.state));
#endif

entry:
	asm goto ("" :::: entry);
	return 0;
}
END