https://github.com/torvalds/linux
Revision a9e5c87ca7443d09fb530fffa4d96ce1c76dbe4d authored by David Howells on 22 November 2020, 13:13:45 UTC, committed by Linus Torvalds on 22 November 2020, 19:27:03 UTC

afs: Fix speculative status fetch going out of order wrt to modifications
When doing a lookup in a directory, the afs filesystem uses a bulk
status fetch to speculatively retrieve the statuses of up to 48 other
vnodes found in the same directory, and it then either updates extant
inodes or creates new ones, effectively doing 'lookup ahead'.

To avoid the possibility of deadlocking itself, however, the filesystem
doesn't lock all of those inodes; only the directory inode is locked
(by the VFS).

When the operation completes, afs_inode_init_from_status() or
afs_apply_status() is called, depending on whether the inode already
exists, to commit the new status.
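
As a rough illustration, the following is a minimal sketch of that flow
in plain C.  All of the names here (collect_dir_fids(),
bulk_fetch_status(), apply_status() and so on) are hypothetical
stand-ins for the real directory-scan, RPC and inode code, not the
kernel's actual afs functions:

	#define MAX_BULK 48	/* up to 48 extra vnodes per bulk fetch */

	struct afs_fid_s    { unsigned int vnode; };
	struct afs_status_s { unsigned long long data_version; };

	/* Hypothetical helpers standing in for the real code. */
	extern unsigned int collect_dir_fids(struct afs_fid_s *fids,
					     unsigned int max);
	extern int bulk_fetch_status(const struct afs_fid_s *fids,
				     unsigned int n,
				     struct afs_status_s *out);
	extern int inode_exists(const struct afs_fid_s *fid);
	extern void apply_status(const struct afs_fid_s *fid,
				 const struct afs_status_s *st);
	extern void init_inode_from_status(const struct afs_fid_s *fid,
					   const struct afs_status_s *st);

	static void lookup_ahead(void)
	{
		struct afs_fid_s fids[MAX_BULK];
		struct afs_status_s st[MAX_BULK];
		unsigned int i, n = collect_dir_fids(fids, MAX_BULK);

		if (bulk_fetch_status(fids, n, st) < 0)
			return;	/* purely speculative - failure is harmless */

		for (i = 0; i < n; i++) {
			if (inode_exists(&fids[i]))
				apply_status(&fids[i], &st[i]);
			else
				init_inode_from_status(&fids[i], &st[i]);
		}
	}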

A case exists, however, where the speculative status fetch operation may
straddle a modification operation on one of those vnodes.  What can then
happen is that the speculative bulk status RPC retrieves the old status,
and whilst that RPC is in flight, the modification happens and returns
an updated status; the modification status is then committed, after
which we attempt to commit the now-stale speculative status.
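
Laid out as a timeline (the data version numbers match the dmesg
example below):

	speculative bulk fetch              modification
	----------------------              ------------
	RPC issued; server returns DV 8
	                                    write happens, DV 8 -> 9
	                                    updated status (DV 9) committed
	reply (DV 8) committed  <-- stale; looks like a DV regression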

This results in something like the following being seen in dmesg:

	kAFS: vnode modified {100058:861} 8->9 YFS.InlineBulkStatus

showing that for vnode 861 on volume 100058, we saw YFS.InlineBulkStatus
say that the vnode had data version 8 when we'd already recorded version
9 due to a local modification.  This was causing the cache to be
invalidated for that vnode when it shouldn't have been.  If it happens
on a data file, this might lead to local changes being lost.

Fix this by ignoring speculative status updates if the data version
doesn't match the expected value.
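
The logic of the fix, sketched with the same hypothetical types as the
earlier example (this is an illustration of the idea, not the actual
patch): the caller records the data version it expects the vnode to
have, and the commit path discards the speculative result on a
mismatch:

	struct afs_vnode_s {
		struct afs_fid_s    fid;
		struct afs_status_s status;
	};

	static void commit_speculative_status(struct afs_vnode_s *v,
					      const struct afs_status_s *fetched,
					      unsigned long long expected_dv)
	{
		/* A modification overtook the bulk fetch; committing this
		 * status would regress the data version and needlessly
		 * invalidate the cache, so drop it.
		 */
		if (fetched->data_version != expected_dv)
			return;

		apply_status(&v->fid, fetched);
	}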

Note that it is possible to get a DV regression if a volume gets
restored from a backup - but we should get a callback break in such a
case, which should trigger a recheck anyway.  It might be worth checking
the volume creation time in the volsync info and, if a change is
observed there (as would happen on a restore), invalidating all caches
associated with the volume.
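
A sketch of what that volume-creation-time check might look like
(hypothetical names again; this is the suggestion above, not something
this patch implements):

	struct afs_volume_s {
		long long creation;	/* creation time from volsync info */
	};

	/* Hypothetical helper that drops all cached state for a volume. */
	extern void invalidate_all_caches_on_volume(struct afs_volume_s *vol);

	static void note_volsync(struct afs_volume_s *vol, long long creation)
	{
		/* A changed creation time suggests the volume was restored
		 * from a backup, so everything cached for it is suspect.
		 */
		if (vol->creation != creation) {
			vol->creation = creation;
			invalidate_all_caches_on_volume(vol);
		}
	}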

Fixes: 5cf9dd55a0ec ("afs: Prospectively look up extra files when doing a single lookup")
Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>