Revision a742994aa2e271eb8cd8e043d276515ec858ed73 authored by Filipe Manana on 13 February 2015, 16:56:14 UTC, committed by Chris Mason on 14 February 2015, 16:22:49 UTC
If we are recording in the tree log that an inode has new names (new hard
links were added), we would drop items belonging to the inode that we
shouldn't:

1) When the flag BTRFS_INODE_NEEDS_FULL_SYNC is set in the inode's runtime
   flags, we ended up dropping all the extent and xattr items that were
   previously logged. This was done only in memory, since logging a new
   name doesn't imply syncing the log;

2) When the flag BTRFS_INODE_COPY_EVERYTHING is set in the inode's runtime
   flags, we ended up dropping all the xattr items that were previously
   logged. Like the case before, this was done only in memory because
   logging a new name doesn't imply syncing the log.

This led to some surprises in scenarios such as the following:

1) write some extents to an inode;
2) fsync the inode;
3) truncate the inode or delete/modify some of its xattrs;
4) add a new hard link for that inode
5) fsync some other file, to force the log tree to be durably persisted
6) power failure happens

The next time the fs is mounted, the fsync log replay code is executed,
and the resulting file doesn't have the content it had when the last fsync
against it was performed; instead it has content matching what it had
when the last transaction commit happened.

So change the behaviour such that when a new name is logged, only the inode
item and reference items are processed.

This is easy to reproduce with the test I just made for xfstests, whose
main body is:

  _scratch_mkfs >> $seqres.full 2>&1
  _init_flakey
  _mount_flakey

  # Create our test file with some data.
  $XFS_IO_PROG -f -c "pwrite -S 0xaa -b 8K 0 8K" \
      $SCRATCH_MNT/foo | _filter_xfs_io

  # Make sure the file is durably persisted.
  sync

  # Append some data to our file, to increase its size.
  $XFS_IO_PROG -f -c "pwrite -S 0xcc -b 4K 8K 4K" \
      $SCRATCH_MNT/foo | _filter_xfs_io

  # Fsync the file, so from this point on if a crash/power failure happens, our
  # new data is guaranteed to be there next time the fs is mounted.
  $XFS_IO_PROG -c "fsync" $SCRATCH_MNT/foo

  # Now shrink our file to 5000 bytes.
  $XFS_IO_PROG -c "truncate 5000" $SCRATCH_MNT/foo

  # Now do an expanding truncate to a size larger than what we had when we last
  # fsync'ed our file. This is just to verify that after power failure and
  # replaying the fsync log, our file matches what it was when we last fsync'ed
  # it - 12Kb size, first 8Kb of data had a value of 0xaa and the last 4Kb of
  # data had a value of 0xcc.
  $XFS_IO_PROG -c "truncate 32K" $SCRATCH_MNT/foo

  # Add one hard link to our file. This made btrfs drop all of our file's
  # metadata from the fsync log, including the metadata relative to the
  # extent we just wrote and fsync'ed. This change was made only to the fsync
  # log in memory, so adding the hard link alone doesn't change the persisted
  # fsync log. This happened because the previous truncates set the runtime
  # flag BTRFS_INODE_NEEDS_FULL_SYNC in the btrfs inode structure.
  ln $SCRATCH_MNT/foo $SCRATCH_MNT/foo_link

  # Now make sure the in memory fsync log is durably persisted.
  # Creating and fsync'ing another file will do it.
  # After this our persisted fsync log will no longer have metadata for our file
  # foo that points to the extent we wrote and fsync'ed before.
  touch $SCRATCH_MNT/bar
  $XFS_IO_PROG -c "fsync" $SCRATCH_MNT/bar

  # As expected, before the crash/power failure, we should be able to see a file
  # with a size of 32Kb, with its first 5000 bytes having the value 0xaa and all
  # the remaining bytes with value 0x00.
  echo "File content before:"
  od -t x1 $SCRATCH_MNT/foo

  # Simulate a crash/power loss.
  _load_flakey_table $FLAKEY_DROP_WRITES
  _unmount_flakey

  _load_flakey_table $FLAKEY_ALLOW_WRITES
  _mount_flakey

  # After mounting the fs again, the fsync log was replayed.
  # The expected result is to see a file with a size of 12Kb, with its first 8Kb
  # of data having the value 0xaa and its last 4Kb of data having a value of 0xcc.
  # The btrfs bug used to leave the file as it used to be as of the last
  # transaction commit - that is, with a size of 8Kb with all bytes having a
  # value of 0xaa.
  echo "File content after:"
  od -t x1 $SCRATCH_MNT/foo

The test case for xfstests follows soon.

Signed-off-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Chris Mason <clm@fb.com>
xz.txt

XZ data compression in Linux
============================

Introduction

    XZ is a general purpose data compression format with high compression
    ratio and relatively fast decompression. The primary compression
    algorithm (filter) is LZMA2. Additional filters can be used to improve
    compression ratio even further. E.g. Branch/Call/Jump (BCJ) filters
    improve compression ratio of executable data.

    The XZ decompressor in Linux is called XZ Embedded. It supports
    the LZMA2 filter and optionally also BCJ filters. CRC32 is supported
    for integrity checking. The home page of XZ Embedded is at
    <http://tukaani.org/xz/embedded.html>, where you can find the
    latest version and also information about using the code outside
    the Linux kernel.

    For userspace, XZ Utils provide a zlib-like compression library
    and a gzip-like command line tool. XZ Utils can be downloaded from
    <http://tukaani.org/xz/>.

XZ related components in the kernel

    The xz_dec module provides an XZ decompressor with single-call (buffer
    to buffer) and multi-call (stateful) APIs. The usage of the xz_dec
    module is documented in include/linux/xz.h.
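
    A rough sketch of the multi-call API is below (the function name,
    buffer handling, and error values are illustrative, not taken from
    any in-kernel user; include/linux/xz.h is the authoritative
    reference):

        #include <linux/errno.h>
        #include <linux/xz.h>

        /*
         * Decompress a fully buffered .xz stream with one xz_dec_run()
         * call. A real multi-call user would loop, refilling b.in and
         * draining b.out whenever xz_dec_run() returns XZ_OK with one
         * of the buffers exhausted.
         */
        static int xz_decompress_buf(const u8 *in, size_t in_size,
                                     u8 *out, size_t out_size)
        {
                struct xz_dec *s;
                struct xz_buf b;
                enum xz_ret ret;

                /* XZ_DYNALLOC allocates the LZMA2 dictionary on
                 * demand, up to the given limit (1 MiB here). */
                s = xz_dec_init(XZ_DYNALLOC, 1 << 20);
                if (!s)
                        return -ENOMEM;

                b.in = in;
                b.in_pos = 0;
                b.in_size = in_size;
                b.out = out;
                b.out_pos = 0;
                b.out_size = out_size;

                ret = xz_dec_run(s, &b);
                xz_dec_end(s);

                return ret == XZ_STREAM_END ? 0 : -EINVAL;
        }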

    The xz_dec_test module is for testing xz_dec. xz_dec_test is not
    useful unless you are hacking the XZ decompressor. xz_dec_test
    allocates a char device major dynamically to which one can write
    .xz files from userspace. The decompressed output is thrown away.
    Keep an eye on dmesg to see diagnostics printed by xz_dec_test.
    See the xz_dec_test source code for the details.

    For decompressing the kernel image, initramfs, and initrd, there
    is a wrapper function in lib/decompress_unxz.c. Its API is the
    same as in other decompress_*.c files, which is defined in
    include/linux/decompress/generic.h.

    scripts/xz_wrap.sh is a wrapper for the xz command line tool found
    from XZ Utils. The wrapper sets compression options to values suitable
    for compressing the kernel image.

    For kernel makefiles, two commands are provided for use with
    $(call if_changed). The kernel image should be compressed with
    $(call if_changed,xzkern) which will use a BCJ filter and a big LZMA2
    dictionary. It will also append a four-byte trailer containing the
    uncompressed size of the file, which is needed by the boot code.
    Other things should be compressed with $(call if_changed,xzmisc)
    which will use no BCJ filter and a 1 MiB LZMA2 dictionary.

Notes on compression options

    Since XZ Embedded supports only streams with no integrity check or
    CRC32, make sure that you don't use some other integrity check type
    when encoding files that are supposed to be decoded by the kernel. With
    liblzma, you need to use either LZMA_CHECK_NONE or LZMA_CHECK_CRC32
    when encoding. With the xz command line tool, use --check=none or
    --check=crc32.
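
    With liblzma, for example, the check type is chosen when the encoder
    is initialized. A minimal hedged sketch (the helper name is made up,
    preset 6 is just liblzma's default, and error handling is trimmed):

        #include <lzma.h>

        /* Initialize an encoder whose output the kernel decoder can
         * accept: CRC32 is the strongest check XZ Embedded understands. */
        static int init_kernel_friendly_encoder(lzma_stream *strm)
        {
                lzma_ret ret = lzma_easy_encoder(strm, 6, LZMA_CHECK_CRC32);
                return ret == LZMA_OK ? 0 : -1;
        }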

    Using CRC32 is strongly recommended unless there is some other layer
    which will verify the integrity of the uncompressed data anyway.
    Double checking the integrity would probably be a waste of CPU cycles.
    Note that the headers will always have a CRC32 which will be validated
    by the decoder; you can only change the integrity check type (or
    disable it) for the actual uncompressed data.

    In userspace, LZMA2 is typically used with dictionary sizes of several
    megabytes. The decoder needs to have the dictionary in RAM, thus big
    dictionaries cannot be used for files that are intended to be decoded
    by the kernel. 1 MiB is probably the maximum reasonable dictionary
    size for in-kernel use (maybe more is OK for initramfs). The presets
    in XZ Utils may not be optimal when creating files for the kernel,
    so don't hesitate to use custom settings. Example:

        xz --check=crc32 --lzma2=dict=512KiB inputfile
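
    A roughly equivalent liblzma sketch (hypothetical helper; it builds
    an explicit LZMA2 filter chain so the dictionary size can be set):

        #include <lzma.h>

        static int init_small_dict_encoder(lzma_stream *strm)
        {
                lzma_options_lzma opt;
                lzma_filter filters[2];

                /* Start from the default preset, then shrink the
                 * dictionary to 512 KiB as in the xz example above. */
                if (lzma_lzma_preset(&opt, LZMA_PRESET_DEFAULT))
                        return -1;
                opt.dict_size = 512 * 1024;

                filters[0].id = LZMA_FILTER_LZMA2;
                filters[0].options = &opt;
                filters[1].id = LZMA_VLI_UNKNOWN;  /* chain terminator */
                filters[1].options = NULL;

                if (lzma_stream_encoder(strm, filters,
                                        LZMA_CHECK_CRC32) != LZMA_OK)
                        return -1;
                return 0;
        }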

    An exception to the above dictionary size limitation is when the decoder
    is used in single-call mode. Decompressing the kernel itself is an
    example of this situation. In single-call mode, the memory usage
    doesn't depend on the dictionary size, and it is perfectly fine to
    use a big dictionary: for maximum compression, the dictionary should
    be at least as big as the uncompressed data itself.
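
    In terms of the xz_dec API sketched above, single-call mode only
    changes the initialization (illustrative fragment):

        /* In XZ_SINGLE mode dict_max is ignored and the output buffer
         * doubles as the dictionary, so it must hold the whole
         * uncompressed image; xz_dec_run() must return XZ_STREAM_END on
         * the first call, anything else means failure. */
        struct xz_dec *s = xz_dec_init(XZ_SINGLE, 0);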

Future plans

    Creating a limited XZ encoder may be considered if people think it is
    useful. LZMA2 is slower to compress than e.g. Deflate or LZO even at
    the fastest settings, so it isn't clear if an LZMA2 encoder is wanted
    in the kernel.

    Support for limited random-access reading is planned for the
    decompression code. I don't know if it could have any use in the
    kernel, but I know that it would be useful in some embedded projects
    outside the Linux kernel.

Conformance to the .xz file format specification

    There are a couple of corner cases where things have been simplified
    at the expense of detecting errors as early as possible. These should not
    matter in practice at all, since they don't cause security issues. But
    it is good to know this if testing the code e.g. with the test files
    from XZ Utils.

Reporting bugs

    Before reporting a bug, please check that it hasn't already been fixed
    upstream. See <http://tukaani.org/xz/embedded.html> to get the
    latest code.

    Report bugs to <lasse.collin@tukaani.org> or visit #tukaani on
    Freenode and talk to Larhzu. I don't actively read LKML or other
    kernel-related mailing lists, so if there's something I should know,
    you should email to me personally or use IRC.

    Don't bother Igor Pavlov with questions about the XZ implementation
    in the kernel or about XZ Utils. While these two implementations
    include essential code that is directly based on Igor Pavlov's code,
    these implementations aren't maintained nor supported by him.