https://github.com/facebook/rocksdb
- HEAD
- refs/heads/2.2.fb.branch
- refs/heads/2.3.fb.branch
- refs/heads/2.4.fb.branch
- refs/heads/2.5.fb.branch
- refs/heads/2.6.fb.branch
- refs/heads/2.7
- refs/heads/2.7.fb.branch
- refs/heads/2.8.1.fb
- refs/heads/2.8.fb
- refs/heads/2.8.fb.trunk
- refs/heads/3.0.fb
- refs/heads/3.0.fb.branch
- refs/heads/3.1.fb
- refs/heads/3.10.fb
- refs/heads/3.11.fb
- refs/heads/3.12.fb
- refs/heads/3.13.fb
- refs/heads/3.2.fb
- refs/heads/3.3.fb
- refs/heads/3.4.fb
- refs/heads/3.5.fb
- refs/heads/3.6.fb
- refs/heads/3.7.fb
- refs/heads/3.8.fb
- refs/heads/3.9.fb
- refs/heads/4.0.fb
- refs/heads/4.1.fb
- refs/heads/4.10.fb
- refs/heads/4.11.fb
- refs/heads/4.12.fb
- refs/heads/4.13.fb
- refs/heads/4.2.fb
- refs/heads/4.3.fb
- refs/heads/4.4.fb
- refs/heads/4.5.fb
- refs/heads/4.6.fb
- refs/heads/4.7.fb
- refs/heads/4.8.fb
- refs/heads/4.9.fb
- refs/heads/5.0.fb
- refs/heads/5.1.fb
- refs/heads/5.10.fb
- refs/heads/5.11.fb
- refs/heads/5.12.fb
- refs/heads/5.13.fb
- refs/heads/5.13.fb.myrocks
- refs/heads/5.14.fb
- refs/heads/5.14.fb.myrocks
- refs/heads/5.15.fb
- refs/heads/5.16.fb
- refs/heads/5.17.fb
- refs/heads/5.17.fb.myrocks
- refs/heads/5.18.fb
- refs/heads/5.2.fb
- refs/heads/5.3.fb
- refs/heads/5.4.fb
- refs/heads/5.5.fb
- refs/heads/5.6.fb
- refs/heads/5.7.fb
- refs/heads/5.7.fb.myrocks
- refs/heads/5.8.3
- refs/heads/5.8.fb
- refs/heads/5.9.fb
- refs/heads/5.9.fb.myrocks
- refs/heads/6.0.fb
- refs/heads/6.0.fb.myrocks
- refs/heads/6.1.fb
- refs/heads/6.1.fb.myrocks
- refs/heads/6.1.fb.prod201905
- refs/heads/6.10.fb
- refs/heads/6.11.fb
- refs/heads/6.12.fb
- refs/heads/6.13.fb
- refs/heads/6.13.fb.laser
- refs/heads/6.14.fb
- refs/heads/6.14.fb.laser
- refs/heads/6.15.fb
- refs/heads/6.16.fb
- refs/heads/6.17.fb
- refs/heads/6.17.fb.laser
- refs/heads/6.18.fb
- refs/heads/6.19.fb
- refs/heads/6.2.fb
- refs/heads/6.20.fb
- refs/heads/6.21.fb
- refs/heads/6.22-history.md-fixup
- refs/heads/6.22.fb
- refs/heads/6.23.fb
- refs/heads/6.24.fb
- refs/heads/6.25.fb
- refs/heads/6.26.fb
- refs/heads/6.27.fb
- refs/heads/6.28.fb
- refs/heads/6.29.fb
- refs/heads/6.3.fb
- refs/heads/6.3.fb.myrocks
- refs/heads/6.3.fb.myrocks2
- refs/heads/6.3fb
- refs/heads/6.4.fb
- refs/heads/6.5.fb
- refs/heads/6.6.fb
- refs/heads/6.7.fb
- refs/heads/6.8.fb
- refs/heads/6.9.fb
- refs/heads/7.0.fb
- refs/heads/7.1.fb
- refs/heads/7.10.fb
- refs/heads/7.2.fb
- refs/heads/7.3.fb
- refs/heads/7.4.fb
- refs/heads/7.5.fb
- refs/heads/7.6.fb
- refs/heads/7.7.fb
- refs/heads/7.8.fb
- refs/heads/7.9.fb
- refs/heads/8.0.fb
- refs/heads/8.1.fb
- refs/heads/8.10.fb
- refs/heads/8.11.2_zippydb
- refs/heads/8.11.fb
- refs/heads/8.11.fb_zippydb
- refs/heads/8.2.fb
- refs/heads/8.3.fb
- refs/heads/8.4.fb
- refs/heads/8.5.fb
- refs/heads/8.6.fb
- refs/heads/8.7.fb
- refs/heads/8.8.fb
- refs/heads/8.9.fb
- refs/heads/9.0.fb
- refs/heads/9.1.fb
- refs/heads/adaptive
- refs/heads/ajkr-patch-1
- refs/heads/ajkr-patch-2
- refs/heads/blob_shadow
- refs/heads/bottom-pri-level
- refs/heads/bugfix-build-detect
- refs/heads/checksum_readahead_mmap_fix
- refs/heads/draft-myrocks-and-fbcode-8.0.fb
- refs/heads/feature/debug-rocksdbjavastatic
- refs/heads/feature/travis-arm64
- refs/heads/fix-release-notes
- refs/heads/fix-win2022-build
- refs/heads/fix-write-batch-comment
- refs/heads/format_compatible_4
- refs/heads/getmergeops
- refs/heads/gh-pages-old
- refs/heads/history-update
- refs/heads/hotfix/lambda-capture
- refs/heads/improve-support
- refs/heads/jijiew-patch-1
- refs/heads/katherinez-patch-1
- refs/heads/katherinez-patch-2
- refs/heads/main
- refs/heads/master
- refs/heads/mdcallag_benchmark_oct22
- refs/heads/nvm_cache_proto
- refs/heads/pr-sanity-check-as-GHAction
- refs/heads/pr/11267
- refs/heads/pr/6062
- refs/heads/ramvadiv-patch-1
- refs/heads/release_fix
- refs/heads/revert-10606-7.6.1
- refs/heads/ribbon_bloom_hybrid
- refs/heads/scaffold
- refs/heads/siying-patch-1
- refs/heads/siying-patch-10
- refs/heads/siying-patch-2
- refs/heads/siying-patch-3
- refs/heads/siying-patch-4
- refs/heads/siying-patch-5
- refs/heads/siying-patch-6
- refs/heads/siying-patch-7
- refs/heads/siying-patch-8
- refs/heads/skip_memtable_flush
- refs/heads/testing_ppc_build
- refs/heads/tests
- refs/heads/unschedule_issue_test_base
- refs/heads/unused-var
- refs/heads/v6.6.4
- refs/heads/xxhash_merge_base
- refs/heads/yiwu_stackable
- refs/heads/yuslepukhin
- refs/remotes/origin/5.13.fb
- refs/tags/2.5.fb
- refs/tags/2.6.fb
- refs/tags/3.0.fb
- refs/tags/do-not-use-me2
- refs/tags/rocksdb-3.1
- refs/tags/rocksdb-3.10.2
- refs/tags/rocksdb-3.11
- refs/tags/rocksdb-3.11.1
- refs/tags/rocksdb-3.11.2
- refs/tags/rocksdb-3.2
- refs/tags/rocksdb-3.3
- refs/tags/rocksdb-3.4
- refs/tags/rocksdb-3.5
- refs/tags/rocksdb-3.5.1
- refs/tags/rocksdb-3.6.1
- refs/tags/rocksdb-3.6.2
- refs/tags/rocksdb-3.7
- refs/tags/rocksdb-3.8
- refs/tags/rocksdb-3.9
- refs/tags/rocksdb-3.9.1
- refs/tags/rocksdb-4.1
- refs/tags/rocksdb-5.10.2
- refs/tags/rocksdb-5.10.3
- refs/tags/rocksdb-5.10.4
- refs/tags/rocksdb-5.11.2
- refs/tags/rocksdb-5.11.3
- refs/tags/rocksdb-5.14.3
- refs/tags/rocksdb-5.2.1
- refs/tags/rocksdb-5.3.3
- refs/tags/rocksdb-5.3.4
- refs/tags/rocksdb-5.3.5
- refs/tags/rocksdb-5.3.6
- refs/tags/rocksdb-5.4.10
- refs/tags/rocksdb-5.4.5
- refs/tags/rocksdb-5.4.6
- refs/tags/rocksdb-5.5.2
- refs/tags/rocksdb-5.5.3
- refs/tags/rocksdb-5.5.4
- refs/tags/rocksdb-5.5.5
- refs/tags/rocksdb-5.5.6
- refs/tags/rocksdb-5.6.1
- refs/tags/rocksdb-5.6.2
- refs/tags/rocksdb-5.7.1
- refs/tags/rocksdb-5.7.2
- refs/tags/rocksdb-5.7.3
- refs/tags/rocksdb-5.7.5
- refs/tags/rocksdb-5.8.6
- refs/tags/rocksdb-5.8.7
- refs/tags/rocksdb-5.8.8
- refs/tags/rocksdb-5.9.2
- refs/tags/v4.0
- refs/tags/v4.1
- refs/tags/v5.10.2
- refs/tags/v5.10.3
- refs/tags/v5.10.4
- refs/tags/v5.11.2
- refs/tags/v5.11.3
- refs/tags/v5.13.3
- refs/tags/v5.14.3
- refs/tags/v5.15.10
- refs/tags/v5.18.3
- refs/tags/v5.2.1
- refs/tags/v5.3.3
- refs/tags/v5.3.4
- refs/tags/v5.3.5
- refs/tags/v5.3.6
- refs/tags/v5.4.10
- refs/tags/v5.4.5
- refs/tags/v5.4.6
- refs/tags/v5.5.2
- refs/tags/v5.5.3
- refs/tags/v5.5.4
- refs/tags/v5.5.5
- refs/tags/v5.5.6
- refs/tags/v5.6.1
- refs/tags/v5.6.2
- refs/tags/v5.7.1
- refs/tags/v5.7.2
- refs/tags/v5.7.3
- refs/tags/v5.7.5
- refs/tags/v5.8.6
- refs/tags/v5.8.7
- refs/tags/v5.8.8
- refs/tags/v5.9.2
- refs/tags/v6.0.1
- refs/tags/v6.0.2
- refs/tags/v6.1.1
- refs/tags/v6.1.2
- refs/tags/v6.10.1
- refs/tags/v6.10.2
- refs/tags/v6.11.4
- refs/tags/v6.11.6
- refs/tags/v6.12.6
- refs/tags/v6.12.7
- refs/tags/v6.13.2
- refs/tags/v6.13.3
- refs/tags/v6.14.5
- refs/tags/v6.14.6
- refs/tags/v6.15.4
- refs/tags/v6.15.5
- refs/tags/v6.16.3
- refs/tags/v6.16.4
- refs/tags/v6.17.3
- refs/tags/v6.2.2
- refs/tags/v6.2.4
- refs/tags/v6.20.3
- refs/tags/v6.22.1
- refs/tags/v6.25.3
- refs/tags/v6.26.1
- refs/tags/v6.28.2
- refs/tags/v6.29.3
- refs/tags/v6.29.4
- refs/tags/v6.29.5
- refs/tags/v6.3.6
- refs/tags/v6.4.6
- refs/tags/v6.5.2
- refs/tags/v6.5.3
- refs/tags/v6.6.3
- refs/tags/v6.6.4
- refs/tags/v6.7.3
- refs/tags/v6.8.1
- refs/tags/v7.0.1
- refs/tags/v7.0.2
- refs/tags/v7.0.4
- refs/tags/v7.2.0
- refs/tags/v7.2.2
- refs/tags/v7.5.3
- refs/tags/v7.7.2
- refs/tags/v7.9.2
- refs/tags/v8.0.0
- refs/tags/v8.11.4
- refs/tags/v8.3.2
- refs/tags/v8.3.3
- refs/tags/v8.4.4
- refs/tags/v8.5.3
- refs/tags/v8.6.7
- refs/tags/v8.7.3
- v9.0.0
- v8.9.1
- v8.8.1
- v8.5.4
- v8.11.3
- v8.10.2
- v8.10.0
- v8.1.1
- v7.8.3
- v7.7.8
- v7.7.3
- v7.6.0
- v7.4.5
- v7.4.4
- v7.4.3
- v7.3.1
- v7.10.2
- v7.1.2
- v7.1.1
- v7.0.3
- v6.27.3
- v6.26.0
- v6.25.1
- v6.24.2
- v6.23.3
- v6.23.2
- v6.19.3
- v6.15.2
- v5.8
- v5.5.1
- v5.4.7
- v5.18.4
- v5.17.2
- v5.16.6
- v5.14.2
- v5.13.4
- v5.13.2
- v5.13.1
- v5.12.5
- v5.12.4
- v5.12.3
- v5.12.2
- v5.1.4
- v5.1.3
- v5.1.2
- v5.0.2
- v5.0.1
- v4.9
- v4.8
- v4.6.1
- v4.5.1
- v4.4.1
- v4.4
- v4.3.1
- v4.3
- v4.2
- v4.13.5
- v4.13
- v4.11.2
- v3.9
- v3.8
- v3.7
- v3.6.1
- v3.5
- v3.4
- v3.3
- v3.2
- v3.13.1
- v3.13
- v3.12.1
- v3.12
- v3.11
- v3.10
- v3.1
- v3.0
- v2.8
- v2.7
- v2.6
- v2.5
- v2.4
- v2.3
- v2.2
- v2.1
- v2.0
- v1.5.9.1
- v1.5.8.2
- v1.5.8.1
- v1.5.8
- v1.5.7
- rocksdb-5.8
- rocksdb-5.4.7
- rocksdb-5.1.4
- rocksdb-5.1.3
- rocksdb-5.1.2
- rocksdb-5.0.2
- rocksdb-5.0.1
- rocksdb-4.9
- rocksdb-4.8
- rocksdb-4.6.1
- rocksdb-4.5.1
- rocksdb-4.4.1
- rocksdb-4.4
- rocksdb-4.3.1
- rocksdb-4.3
- rocksdb-4.2
- rocksdb-4.13.5
- rocksdb-4.13
- rocksdb-4.11.2
- rocksdb-3.10.1
- blob_st_lvl-pre
- 2.8.fb
- 2.7.fb
- 2.4.fb
- 2.3.fb
- 2.2.fb
- 2.1.fb
- 2.0.fb
- 1.5.9.fb
- 1.5.9.2.fb
- 1.5.9.1.fb
- 1.5.8.fb
- 1.5.8.2.fb
- 1.5.8.1.fb
- 1.5.7.fb
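The branch and tag names above are listed in lexicographic order, which is why, for example, `3.10.fb` appears before `3.2.fb` and `6.29.fb` before `6.3.fb`. A minimal sketch (sample names taken from the list above) of sorting such version-style refs numerically instead:

```python
import re

def version_key(ref):
    """Split a ref name into alternating text and numeric pieces so that
    digit runs compare as integers: '3.2.fb' then sorts before '3.10.fb'."""
    parts = re.split(r"(\d+)", ref)
    return [int(p) if p.isdigit() else p for p in parts]

refs = ["3.10.fb", "3.2.fb", "3.1.fb", "2.8.fb", "2.7"]
print(sorted(refs, key=version_key))
# ['2.7', '2.8.fb', '3.1.fb', '3.2.fb', '3.10.fb']
```

This is the usual "natural sort" trick; plain `sorted(refs)` would reproduce the lexicographic order shown on this page.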
Permalinks
To reference or cite the objects present in the Software Heritage archive, permalinks based on SoftWare Hash IDentifiers (SWHIDs) must be used.
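The permalink note above refers to SWHIDs, whose core form is `swh:1:<object type>:<40 hex digits>` with object types `cnt`, `dir`, `rev`, `rel`, and `snp`. A minimal validity check for that core syntax (the example hash below is made up; a real SWHID for a commit in the table would use the commit's full 40-character hash):

```python
import re

# Core SWHID syntax per the SWHID specification:
# "swh:1:<type>:<40 lowercase hex digits>", type in {cnt, dir, rev, rel, snp}.
# Qualifiers (";origin=...", ";anchor=..." etc.) are not handled here.
SWHID_RE = re.compile(r"^swh:1:(cnt|dir|rev|rel|snp):[0-9a-f]{40}$")

def is_valid_swhid(s):
    return bool(SWHID_RE.match(s))

example = "swh:1:rev:" + "ab" * 20  # hypothetical revision hash
print(is_valid_swhid(example))
```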
Revision | Author | Date | Message | Commit Date |
---|---|---|---|---|
bf2c335 | akankshamahajan | 29 November 2022, 14:51:03 UTC | Update HISTORY.md and version to 7.8.3 Summary: Test Plan: Reviewers: Subscribers: Tasks: Tags: | 29 November 2022, 14:51:03 UTC |
7aa573e | Peter Dillinger | 28 November 2022, 17:37:05 UTC | Update HISTORY for reverts | 28 November 2022, 17:37:05 UTC |
e0cf5cd | Peter Dillinger | 28 November 2022, 17:34:16 UTC | Revert "Improve / refactor anonymous mmap capabilities (#10810)" This reverts commit 8367f0d2d76de0f7d096cc65f5f9ebfb907d551a. | 28 November 2022, 17:34:16 UTC |
7ad900e | Peter Dillinger | 28 November 2022, 17:33:56 UTC | Revert "Fix include of windows.h in mmap.h (#10885)" This reverts commit 49b7f219de87e4429067666cd92f826fe202f2f1. | 28 November 2022, 17:33:56 UTC |
491615c | Andrew Kryczka | 28 November 2022, 07:14:56 UTC | update version.h for 7.8.2 | 28 November 2022, 07:14:56 UTC |
1cf4539 | Andrew Kryczka | 28 November 2022, 07:12:09 UTC | batch latest fixes into 7.8.2 | 28 November 2022, 07:12:09 UTC |
442b6b8 | Andrew Kryczka | 22 November 2022, 00:14:03 UTC | Fix CompactionIterator flag for penultimate level output (#10967) Summary: We were not resetting it in non-debug mode so it could be true once and then stay true for future keys where it should be false. This PR adds the reset logic. Pull Request resolved: https://github.com/facebook/rocksdb/pull/10967 Test Plan: - built `db_bench` with DEBUG_LEVEL=0 - ran benchmark: `TEST_TMPDIR=/dev/shm/prefix ./db_bench -benchmarks=fillrandom -compaction_style=1 -preserve_internal_time_seconds=100 -preclude_last_level_data_seconds=10 -write_buffer_size=1048576 -target_file_size_base=1048576 -subcompactions=8 -duration=120` - compared "output_to_penultimate_level: X bytes + last: Y bytes" lines in LOG output - Before this fix, Y was always zero - After this fix, Y gradually increased throughout the benchmark Reviewed By: riversand963 Differential Revision: D41417726 Pulled By: ajkr fbshipit-source-id: ace1e9a289e751a5b0c2fbaa8addd4eda5525329 | 24 November 2022, 21:44:34 UTC |
8c06988 | Andrew Kryczka | 04 November 2022, 22:55:54 UTC | Fix flush picking non-consecutive memtables (#10921) Summary: Prevents `MemTableList::PickMemtablesToFlush()` from picking non-consecutive memtables. It leads to wrong ordering in L0 if the files are committed, or an error like below if force_consistency_checks=true catches it: ``` Corruption: force_consistency_checks: VersionBuilder: L0 file https://github.com/facebook/rocksdb/issues/25 with seqno 320416 368066 vs. file https://github.com/facebook/rocksdb/issues/24 with seqno 336037 352068 ``` Pull Request resolved: https://github.com/facebook/rocksdb/pull/10921 Test Plan: fix the expectation in the existing test of this behavior Reviewed By: riversand963 Differential Revision: D41046935 Pulled By: ajkr fbshipit-source-id: 783696bff56115063d5dc5856dfaed6a9881d1ab | 24 November 2022, 21:43:36 UTC |
8d5edb6 | Changyu Bi | 24 November 2022, 01:29:25 UTC | Prevent iterating over range tombstones beyond `iterate_upper_bound` (#10966) (#10985) Summary: Currently, `iterate_upper_bound` is not checked for range tombstone keys in MergingIterator. This may impact performance when there is a large number of range tombstones right after `iterate_upper_bound`. This PR fixes this issue by checking `iterate_upper_bound` in MergingIterator for range tombstone keys. Pull Request resolved: https://github.com/facebook/rocksdb/pull/10966 Test Plan: - added unit test - stress test: `python3 tools/db_crashtest.py whitebox --simple --verify_iterator_with_expected_state_one_in=5 --delrangepercent=5 --prefixpercent=18 --writepercent=48 --readpercen=15 --duration=36000 --range_deletion_width=100` - ran different stress tests over sandcastle - Falcon team ran some test traffic and saw reduced CPU usage on processing range tombstones. | 24 November 2022, 01:29:25 UTC |
32d853f | Yanqin Jin | 23 November 2022, 06:53:31 UTC | Make best-efforts recovery verify SST unique ID before Version construction (#10962) Summary: The check for SST unique IDs added to best-efforts recovery (`Options::best_efforts_recovery` is true). With best_efforts_recovery being true, RocksDB will recover to the latest point in MANIFEST such that all valid SST files included up to this point pass unique ID checks as well. Pull Request resolved: https://github.com/facebook/rocksdb/pull/10962 Test Plan: make check Reviewed By: pdillinger Differential Revision: D41378241 Pulled By: riversand963 fbshipit-source-id: a036064e2c17dec13d080a24ef2a9f85d607b16c | 24 November 2022, 00:03:15 UTC |
e0b1793 | anand76 | 14 November 2022, 05:38:35 UTC | Add some async read stats (#10947) Summary: Add stats for time spent in the ReadAsync call, and async read errors. Pull Request resolved: https://github.com/facebook/rocksdb/pull/10947 Test Plan: Run db_bench and look at stats Reviewed By: akankshamahajan15 Differential Revision: D41236637 Pulled By: anand1976 fbshipit-source-id: 70539b69a28491d57acead449436a761f7108acf | 21 November 2022, 18:37:00 UTC |
f690f24 | Akanksha Mahajan | 15 November 2022, 00:14:41 UTC | Fix db_stress failure in async_io in FilePrefetchBuffer (#10949) Summary: Fix db_stress failure in async_io in FilePrefetchBuffer. From the logs, assertion was caused when - prev_offset_ = offset but somehow prev_len != 0 and explicit_prefetch_submitted_ = true. That scenario is when we send async request to prefetch buffer during seek but in second seek that data is found in cache. prev_offset_ and prev_len_ get updated but we were not setting explicit_prefetch_submitted_ = false because of which buffers were getting out of sync. It's possible a read by another thread might have loaded the block into the cache in the meantime. Particular assertion example: ``` prev_offset: 0, prev_len_: 8097 , offset: 0, length: 8097, actual_length: 8097 , actual_offset: 0 , curr_: 0, bufs_[curr_].offset_: 4096 ,bufs_[curr_].CurrentSize(): 48541 , async_len_to_read: 278528, bufs_[curr_].async_in_progress_: false second: 1, bufs_[second].offset_: 282624 ,bufs_[second].CurrentSize(): 0, async_len_to_read: 262144 ,bufs_[second].async_in_progress_: true , explicit_prefetch_submitted_: true , copy_to_third_buffer: false ``` As we can see curr_ was expected to read 278528 but it read 48541. Also buffers are out of sync. Also `explicit_prefetch_submitted_` is set true but prev_len not 0. Pull Request resolved: https://github.com/facebook/rocksdb/pull/10949 Test Plan: - Ran db_bench for regression to make sure there is no regression; - Ran db_stress failing without this fix, - Ran build-linux-mini-crashtest 7- 8 times locally + CircleCI Reviewed By: anand1976 Differential Revision: D41257786 Pulled By: akankshamahajan15 fbshipit-source-id: 1d100f94f8c06bbbe4cc76ca27f1bbc820c2494f | 21 November 2022, 18:36:43 UTC |
c097b01 | akankshamahajan | 11 November 2022, 21:34:49 UTC | Fix async_io regression in scans (#10939) Summary: Fix async_io regression in scans due to incorrect check which was causing the valid data in buffer to be cleared during seek. Pull Request resolved: https://github.com/facebook/rocksdb/pull/10939 Test Plan: - stress tests export CRASH_TEST_EXT_ARGS="--async_io=1" make crash_test -j32 - Ran db_bench command which was caught the regression: ./db_bench --db=/rocksdb_async_io_testing/prefix_scan --disable_wal=1 --use_existing_db=true --benchmarks="seekrandom" -key_size=32 -value_size=512 -num=50000000 -use_direct_reads=false -seek_nexts=963 -duration=30 -ops_between_duration_checks=1 --async_io=true --compaction_readahead_size=4194304 --log_readahead_size=0 --blob_compaction_readahead_size=0 --initial_auto_readahead_size=65536 --num_file_reads_for_auto_readahead=0 --max_auto_readahead_size=524288 seekrandom : 3777.415 micros/op 264 ops/sec 30.000 seconds 7942 operations; 132.3 MB/s (7942 of 7942 found) Reviewed By: anand1976 Differential Revision: D41173899 Pulled By: akankshamahajan15 fbshipit-source-id: 2d75b06457d65b1851c92382565d9c3fac329dfe | 21 November 2022, 18:36:09 UTC |
6b2f41f | akankshamahajan | 02 November 2022, 16:56:28 UTC | Update HISTORY.md and version.h for 7.8.1 Summary: Test Plan: Reviewers: Subscribers: Tasks: Tags: | 02 November 2022, 16:56:28 UTC |
d53da91 | akankshamahajan | 01 November 2022, 23:06:51 UTC | Fix async_io failures in case there is error in reading data (#10890) Summary: Fix memory corruption error in scans if async_io is enabled. Memory corruption happened if data is overlapping between two buffers. If there is IOError while reading the data, it leads to empty buffer and other buffer already in progress of async read goes again for reading causing the error. Fix: Added check to abort IO in second buffer if curr_ got empty. This PR also fixes db_stress failures which happened when buffers are not aligned. Pull Request resolved: https://github.com/facebook/rocksdb/pull/10890 Test Plan: - Ran make crash_test -j32 with async_io enabled. - Ran benchmarks to make sure there is no regression. Reviewed By: anand1976 Differential Revision: D40881731 Pulled By: akankshamahajan15 fbshipit-source-id: 39fcf2134c7b1bbb08415ede3e1ef261ac2dbc58 | 01 November 2022, 23:26:20 UTC |
0ffd94d | Changyu Bi | 27 October 2022, 21:28:50 UTC | Reduce heap operations for range tombstone keys in iterator (#10877) Summary: Right now in MergingIterator, for each range tombstone start and end key, we pop one end from heap and push the other end into the heap. This involves extra downheap and upheap cost. In the likely cases when a range tombstone iterator emits relatively adjacent keys, these keys should have similar order within all keys in the heap. This can happen when there is a burst of consecutive range tombstones, and most of the keys covered by them are dropped already. This PR uses `replace_top()` when inserting new range tombstone keys, which is more efficient in these common cases. Pull Request resolved: https://github.com/facebook/rocksdb/pull/10877 Test Plan: - existing UT - ran all flavors of stress test through sandcastle - benchmark: ``` TEST_TMPDIR=/tmp/rocksdb-rangedel-test-all-tombstone ./db_bench --benchmarks=fillseq,levelstats --writes_per_range_tombstone=1 --max_num_range_tombstones=1000000 --range_tombstone_width=2 --num=100000000 --writes=800000 --max_bytes_for_level_base=4194304 --disable_auto_compactions --write_buffer_size=33554432 --key_size=64 Level Files Size(MB) -------------------- 0 8 152 1 0 0 2 0 0 3 0 0 4 0 0 5 0 0 6 0 0 TEST_TMPDIR=/tmp/rocksdb-rangedel-test-all-tombstone/ ./db_bench --benchmarks=readseq[-W1][-X5],levelstats --use_existing_db=true --cache_size=3221225472 --num=100000000 --reads=1000000 --disable_auto_compactions=true --avoid_flush_during_recovery=true readseq [AVG 5 runs] : 1432116 (± 59664) ops/sec; 224.0 (± 9.3) MB/sec readseq [MEDIAN 5 runs] : 1454886 ops/sec; 227.5 MB/sec readseq [AVG 5 runs] : 1944425 (± 29521) ops/sec; 304.1 (± 4.6) MB/sec readseq [MEDIAN 5 runs] : 1959430 ops/sec; 306.5 MB/sec ``` Reviewed By: ajkr Differential Revision: D40710936 Pulled By: cbi42 fbshipit-source-id: cb782fb9cdcd26c0c3eb9443215a4ef4d2f79022 | 31 October 2022, 16:41:30 UTC |
fb7b420 | Levi Tamasi | 27 October 2022, 22:39:29 UTC | Use malloc/free for LRUHandle instead of new[]/delete[] (#10884) Summary: It's unsafe to call `malloc_usable_size` with an address not returned by a function from the `malloc` family (see https://github.com/facebook/rocksdb/issues/10798). The patch switches from using `new[]` / `delete[]` for `LRUHandle` to `malloc` / `free`. Pull Request resolved: https://github.com/facebook/rocksdb/pull/10884 Test Plan: `make check` Reviewed By: pdillinger Differential Revision: D40738089 Pulled By: ltamasi fbshipit-source-id: ac5583f88125fee49c314639be6b6df85937fbee | 28 October 2022, 22:35:11 UTC |
3453a88 | anand76 | 27 October 2022, 05:34:36 UTC | Fix a potential std::vector use after move bug (#10845) Summary: The call to `folly::coro::collectAllRange()` should move the input `mget_tasks`. But just in case, assert and clear the std::vector before reusing. Pull Request resolved: https://github.com/facebook/rocksdb/pull/10845 Reviewed By: akankshamahajan15 Differential Revision: D40611719 Pulled By: anand1976 fbshipit-source-id: 0f32b387cf5a2894b13389016c020b01ab479b5e | 27 October 2022, 05:41:35 UTC |
49b7f21 | Peter Dillinger | 27 October 2022, 01:07:57 UTC | Fix include of windows.h in mmap.h (#10885) Summary: If windows.h is not included in a particular way, it can conflict with other code including it. I don't know all the details, but having just one standard place where we include windows.h in header files seems best and seems to fix the internal issue we hit. Pull Request resolved: https://github.com/facebook/rocksdb/pull/10885 Test Plan: CI and internal validation Reviewed By: anand1976 Differential Revision: D40738945 Pulled By: pdillinger fbshipit-source-id: 88f635e895b1c7b810baad159e6dbb8351344cac | 27 October 2022, 02:34:46 UTC |
0d82c62 | akankshamahajan | 25 October 2022, 00:13:26 UTC | Fix override error in system_clock.h (#10858) Summary: Fix error ``` rocksdb/system_clock.h:30:11: error: '~SystemClock' overrides a destructor but is not marked 'override' [-Werror,-Wsuggest-destructor-override] virtual ~SystemClock() {} ``` Pull Request resolved: https://github.com/facebook/rocksdb/pull/10858 Test Plan: Ran internally Reviewed By: siying Differential Revision: D40652374 Pulled By: akankshamahajan15 fbshipit-source-id: 5dda8ca03ea57d709442c87e23e5fe097d7db672 | 25 October 2022, 00:38:41 UTC |
3ecef27 | akankshamahajan | 24 October 2022, 23:13:16 UTC | Update header file to include right copyright (#10854) Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/10854 Reviewed By: siying Differential Revision: D40651483 Pulled By: akankshamahajan15 fbshipit-source-id: 95ce53297e9699a34cc80439bc7553f6cc3ac957 | 25 October 2022, 00:38:19 UTC |
9e0c4a0 | Changyu Bi | 24 October 2022, 03:17:14 UTC | Remove range tombstone test code from sst_file_reader (#10847) Summary: `#include "db/range_tombstone_fragmenter.h"` seems to break some internal test for 7.8 release. I'm removing it from sst_file_reader.h for now to unblock release. This should be fine as it is only used in a unit test for DeleteRange with timestamp. In addition, it does not seem to be useful to support delete range for sst file writer, since the range tombstone won't cover any key (its sequence number is 0). So maybe we can remove it in the future. Pull Request resolved: https://github.com/facebook/rocksdb/pull/10847 Test Plan: CI. Reviewed By: akankshamahajan15 Differential Revision: D40620865 Pulled By: cbi42 fbshipit-source-id: be44b2f31e062bff87ed1b8d94482c3f7eaa370c | 24 October 2022, 16:17:32 UTC |
9a55e5d | akankshamahajan | 22 October 2022, 17:09:07 UTC | Update HISTORY.md for 7.8 release (#10844) Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/10844 Reviewed By: ajkr Differential Revision: D40592956 Pulled By: akankshamahajan15 fbshipit-source-id: 6656f4bc5faa30fa7882bf44155f7931895590e2 | 22 October 2022, 17:09:07 UTC |
f726d29 | Jay Zhuang | 22 October 2022, 15:57:38 UTC | Allow penultimate level output for the last level only compaction (#10822) Summary: Allow the last level only compaction able to output result to penultimate level if the penultimate level is empty. Which will also block the other compaction output to the penultimate level. (it includes the PR https://github.com/facebook/rocksdb/issues/10829) Pull Request resolved: https://github.com/facebook/rocksdb/pull/10822 Reviewed By: siying Differential Revision: D40389180 Pulled By: jay-zhuang fbshipit-source-id: 4e5dcdce307795b5e07b5dd1fa29dd75bb093bad | 22 October 2022, 15:57:38 UTC |
27c9705 | Peter Dillinger | 22 October 2022, 01:09:12 UTC | Use kXXH3 as default checksum (CPU efficiency) (#10778) Summary: Since this has been supported for about a year, I think it's time to make it the default. This should improve CPU efficiency slightly on most hardware. A current DB performance comparison using buck+clang build: ``` TEST_TMPDIR=/dev/shm ./db_bench -checksum_type={1,4} -benchmarks=fillseq[-X1000] -num=3000000 -disable_wal ``` kXXH3 (+0.2% DB write throughput): `fillseq [AVG 1000 runs] : 822149 (± 1004) ops/sec; 91.0 (± 0.1) MB/sec` kCRC32c: `fillseq [AVG 1000 runs] : 820484 (± 1203) ops/sec; 90.8 (± 0.1) MB/sec` Micro benchmark comparison: ``` ./db_bench --benchmarks=xxh3[-X20],crc32c[-X20] ``` Machine 1, buck+clang build: `xxh3 [AVG 20 runs] : 3358616 (± 19091) ops/sec; 13119.6 (± 74.6) MB/sec` `crc32c [AVG 20 runs] : 2578725 (± 7742) ops/sec; 10073.1 (± 30.2) MB/sec` Machine 2, make+gcc build, DEBUG_LEVEL=0 PORTABLE=0: `xxh3 [AVG 20 runs] : 6182084 (± 137223) ops/sec; 24148.8 (± 536.0) MB/sec` `crc32c [AVG 20 runs] : 5032465 (± 42454) ops/sec; 19658.1 (± 165.8) MB/sec` Pull Request resolved: https://github.com/facebook/rocksdb/pull/10778 Test Plan: make check, unit tests updated Reviewed By: ajkr Differential Revision: D40112510 Pulled By: pdillinger fbshipit-source-id: e59a8d50a60346137732f8668ba7cfac93be2b37 | 22 October 2022, 01:09:12 UTC |
5d17297 | sdong | 21 October 2022, 19:27:50 UTC | Make UserComparatorWrapper not Customizable (#10837) Summary: Right now UserComparatorWrapper is a Customizable object, although it is not, which introduces some intialization overhead for the object. In some benchmarks, it shows up in CPU profiling. Make it not configurable by defining most functions needed by UserComparatorWrapper to an interface and implement the interface. Pull Request resolved: https://github.com/facebook/rocksdb/pull/10837 Test Plan: Make sure existing tests pass Reviewed By: pdillinger Differential Revision: D40528511 fbshipit-source-id: 70eaac89ecd55401a26e8ed32abbc413a9617c62 | 21 October 2022, 19:27:50 UTC |
0e7b27b | akankshamahajan | 21 October 2022, 19:15:35 UTC | Refactor block cache tracing APIs (#10811) Summary: Refactor the classes, APIs and data structures for block cache tracing to allow a user provided trace writer to be used. Currently, only a TraceWriter is supported, with a default built-in implementation of FileTraceWriter. The TraceWriter, however, takes a flat trace record and is thus only suitable for file tracing. This PR introduces an abstract BlockCacheTraceWriter class that takes a structured BlockCacheTraceRecord. The BlockCacheTraceWriter implementation can then format and log the record in whatever way it sees fit. The default BlockCacheTraceWriterImpl does file tracing using a user provided TraceWriter. `DB::StartBlockTrace` will internally redirect to changed `BlockCacheTrace::StartBlockCacheTrace`. New API `DB::StartBlockTrace` is also added that directly takes `BlockCacheTraceWriter` pointer. This same philosophy can be applied to KV and IO tracing as well. Pull Request resolved: https://github.com/facebook/rocksdb/pull/10811 Test Plan: existing unit tests Old API DB::StartBlockTrace checked with db_bench tool create database ``` ./db_bench --benchmarks="fillseq" \ --key_size=20 --prefix_size=20 --keys_per_prefix=0 --value_size=100 \ --cache_index_and_filter_blocks --cache_size=1048576 \ --disable_auto_compactions=1 --disable_wal=1 --compression_type=none \ --min_level_to_compress=-1 --compression_ratio=1 --num=10000000 ``` To trace block cache accesses when running readrandom benchmark: ``` ./db_bench --benchmarks="readrandom" --use_existing_db --duration=60 \ --key_size=20 --prefix_size=20 --keys_per_prefix=0 --value_size=100 \ --cache_index_and_filter_blocks --cache_size=1048576 \ --disable_auto_compactions=1 --disable_wal=1 --compression_type=none \ --min_level_to_compress=-1 --compression_ratio=1 --num=10000000 \ --threads=16 \ -block_cache_trace_file="/tmp/binary_trace_test_example" \ -block_cache_trace_max_trace_file_size_in_bytes=1073741824 \ -block_cache_trace_sampling_frequency=1 ``` Reviewed By: anand1976 Differential Revision: D40435289 Pulled By: akankshamahajan15 fbshipit-source-id: fa2755f4788185e19f4605e731641cfd21ab3282 | 21 October 2022, 19:15:35 UTC |
b6e33db | Peter Dillinger | 21 October 2022, 19:09:03 UTC | Fix HyperClockCache Rollback bug in #10801 (#10843) Summary: In https://github.com/facebook/rocksdb/issues/10801 in ClockHandleTable::Evict, we saved a reference to the hash value (`const UniqueId64x2& hashed_key`) instead of saving the hash value itself before marking the handle as empty and thus free for use by other threads. This could lead to Rollback seeing the wrong hash value for updating the `displacements` after an entry is removed. The fix is (like other places) to copy the hash value before it's released. (We could Rollback while we own the entry, but that creates more dependences between atomic updates, because in that case, based on the code, the Rollback writes would have to happen before or after the entry is released by marking empty. By doing the relaxed Rollback after marking empty, there's more opportunity for re-ordering / ILP.) Intended follow-up: refactoring for better code sharing in clock_cache.cc Pull Request resolved: https://github.com/facebook/rocksdb/pull/10843 Test Plan: watch for clean crash test, TSAN Reviewed By: siying Differential Revision: D40579680 Pulled By: pdillinger fbshipit-source-id: 258e43b3b80bc980a161d5c675ccc6708ecb8025 | 21 October 2022, 19:09:03 UTC |
333abe9 | Changyu Bi | 21 October 2022, 17:22:41 UTC | Ignore max_compaction_bytes for compaction input that are within output key-range (#10835) Summary: When picking compaction input files, we sometimes stop picking a file that is fully included in the output key-range due to hitting max_compaction_bytes. Including these input files can potentially reduce WA at the expense of larger compactions. Larger compaction should be fine as files from input level are usually 10X smaller than files from output level. This PR adds a mutable CF option `ignore_max_compaction_bytes_for_input` that is enabled by default. We can remove this option once we are sure it is safe. Pull Request resolved: https://github.com/facebook/rocksdb/pull/10835 Test Plan: - CI, a unit test on max_compaction_bytes fails before turning this flag off. - Benchmark does not show much difference in WA: `./db_bench --benchmarks=fillrandom,waitforcompaction,stats,levelstats -max_background_jobs=12 -num=2000000000 -target_file_size_base=33554432 --write_buffer_size=33554432` ``` main: ** Compaction Stats [default] ** Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ L0 3/0 91.59 MB 0.8 70.9 0.0 70.9 200.8 129.9 0.0 1.5 25.2 71.2 2886.55 2463.45 9725 0.297 1093M 254K 0.0 0.0 L1 9/0 248.03 MB 1.0 392.0 129.8 262.2 391.7 129.5 0.0 3.0 69.0 68.9 5821.71 5536.90 804 7.241 6029M 5814K 0.0 0.0 L2 87/0 2.50 GB 1.0 537.0 128.5 408.5 533.8 125.2 0.7 4.2 69.5 69.1 7912.24 7323.70 4417 1.791 8299M 36M 0.0 0.0 L3 836/0 24.99 GB 1.0 616.9 118.3 498.7 594.5 95.8 5.2 5.0 66.9 64.5 9442.38 8490.28 4204 2.246 9749M 306M 0.0 0.0 L4 2355/0 62.95 GB 0.3 67.3 37.1 30.2 54.2 24.0 38.9 1.5 72.2 58.2 954.37 821.18 917 1.041 1076M 173M 0.0 0.0 Sum 3290/0 90.77 GB 0.0 1684.2 413.7 1270.5 1775.0 504.5 44.9 13.7 63.8 67.3 27017.25 24635.52 20067 1.346 26G 522M 0.0 0.0 Cumulative compaction: 1774.96 GB write, 154.29 MB/s write, 1684.19 GB read, 146.40 MB/s read, 27017.3 seconds This PR: ** Compaction Stats [default] ** Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ L0 3/0 45.71 MB 0.8 72.9 0.0 72.9 202.8 129.9 0.0 1.6 25.4 70.7 2938.16 2510.36 9741 0.302 1124M 265K 0.0 0.0 L1 8/0 234.54 MB 0.9 384.5 129.8 254.7 384.2 129.6 0.0 3.0 69.0 68.9 5708.08 5424.43 791 7.216 5913M 5753K 0.0 0.0 L2 84/0 2.47 GB 1.0 543.1 128.6 414.5 539.9 125.4 0.7 4.2 69.6 69.2 7989.31 7403.13 4418 1.808 8393M 36M 0.0 0.0 L3 839/0 24.96 GB 1.0 615.6 118.4 497.2 593.2 96.0 5.1 5.0 66.6 64.1 9471.23 8489.31 4193 2.259 9726M 306M 0.0 0.0 L4 2360/0 63.04 GB 0.3 67.6 37.3 30.3 54.4 24.1 38.9 1.5 71.5 57.6 967.30 827.99 907 1.066 1080M 173M 0.0 0.0 Sum 3294/0 90.75 GB 0.0 1683.8 414.2 1269.6 1774.5 504.9 44.8 13.7 63.7 67.1 27074.08 24655.22 20050 1.350 26G 522M 0.0 0.0 Cumulative compaction: 1774.52 GB write, 157.09 MB/s write, 1683.77 GB read, 149.06 MB/s read, 27074.1 seconds ``` Reviewed By: ajkr Differential Revision: D40518319 Pulled By: cbi42 fbshipit-source-id: f4ea614bc0ebefe007ffaf05bb9aec9a8ca25b60 | 21 October 2022, 17:22:41 UTC |
8dd4bf6 | Levi Tamasi | 21 October 2022, 17:05:46 UTC | Separate the handling of value types in SaveValue (#10840) Summary: Currently, the code in `SaveValue` that handles `kTypeValue` and `kTypeBlobIndex` (and more recently, `kTypeWideColumnEntity`) is mostly shared. This made sense originally; however, by now the handling of these three value types has diverged significantly. The patch makes the logic cleaner and also eliminates quite a bit of branching by giving each value type its own `case` and removing a fall-through. Pull Request resolved: https://github.com/facebook/rocksdb/pull/10840 Test Plan: `make check` Reviewed By: riversand963 Differential Revision: D40568420 Pulled By: ltamasi fbshipit-source-id: 2e614606afd1c3d9c76d9b5f1efa0959fc174103 | 21 October 2022, 17:05:46 UTC |
2564215 | dependabot[bot] | 21 October 2022, 05:13:41 UTC | Bump nokogiri from 1.13.6 to 1.13.9 in /docs (#10842) Summary: Bumps [nokogiri](https://github.com/sparklemotion/nokogiri) from 1.13.6 to 1.13.9. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/sparklemotion/nokogiri/releases">nokogiri's releases</a>.</em></p> <blockquote> <h2>1.13.9 / 2022-10-18</h2> <h3>Security</h3> <ul> <li>[CRuby] Vendored libxml2 is updated to address <a href="https://nvd.nist.gov/vuln/detail/CVE-2022-2309">CVE-2022-2309</a>, <a href="https://nvd.nist.gov/vuln/detail/CVE-2022-40304">CVE-2022-40304</a>, and <a href="https://nvd.nist.gov/vuln/detail/CVE-2022-40303">CVE-2022-40303</a>. See <a href="https://github.com/sparklemotion/nokogiri/security/advisories/GHSA-2qc6-mcvw-92cw">GHSA-2qc6-mcvw-92cw</a> for more information.</li> <li>[CRuby] Vendored zlib is updated to address <a href="https://ubuntu.com/security/CVE-2022-37434">CVE-2022-37434</a>. Nokogiri was not affected by this vulnerability, but this version of zlib was being flagged up by some vulnerability scanners, see <a href="https://github-redirect.dependabot.com/sparklemotion/nokogiri/issues/2626">https://github.com/facebook/rocksdb/issues/2626</a> for more information.</li> </ul> <h3>Dependencies</h3> <ul> <li>[CRuby] Vendored libxml2 is updated to <a href="https://gitlab.gnome.org/GNOME/libxml2/-/releases/v2.10.3">v2.10.3</a> from v2.9.14.</li> <li>[CRuby] Vendored libxslt is updated to <a href="https://gitlab.gnome.org/GNOME/libxslt/-/releases/v1.1.37">v1.1.37</a> from v1.1.35.</li> <li>[CRuby] Vendored zlib is updated from 1.2.12 to 1.2.13. 
(See <a href="https://github.com/sparklemotion/nokogiri/blob/v1.13.x/LICENSE-DEPENDENCIES.md#platform-releases">LICENSE-DEPENDENCIES.md</a> for details on which packages redistribute this library.)</li> </ul> <h3>Fixed</h3> <ul> <li>[CRuby] <code>Nokogiri::XML::Namespace</code> objects, when compacted, update their internal struct's reference to the Ruby object wrapper. Previously, with GC compaction enabled, a segmentation fault was possible after compaction was triggered. [<a href="https://github-redirect.dependabot.com/sparklemotion/nokogiri/issues/2658">https://github.com/facebook/rocksdb/issues/2658</a>] (Thanks, <a href="https://github.com/eightbitraptor"><code>@​eightbitraptor</code></a> and <a href="https://github.com/peterzhu2118"><code>@​peterzhu2118</code></a>!)</li> <li>[CRuby] <code>Document#remove_namespaces!</code> now defers freeing the underlying <code>xmlNs</code> struct until the <code>Document</code> is GCed. Previously, maintaining a reference to a <code>Namespace</code> object that was removed in this way could lead to a segfault. 
[<a href="https://github-redirect.dependabot.com/sparklemotion/nokogiri/issues/2658">https://github.com/facebook/rocksdb/issues/2658</a>]</li> </ul> <hr /> <p>sha256 checksums:</p> <pre><code>9b69829561d30c4461ea803baeaf3460e8b145cff7a26ce397119577a4083a02 nokogiri-1.13.9-aarch64-linux.gem e76ebb4b7b2e02c72b2d1541289f8b0679fb5984867cf199d89b8ef485764956 nokogiri-1.13.9-arm64-darwin.gem 15bae7d08bddeaa898d8e3f558723300137c26a2dc2632a1f89c8574c4467165 nokogiri-1.13.9-java.gem f6a1dbc7229184357f3129503530af73cc59ceba4932c700a458a561edbe04b9 nokogiri-1.13.9-x64-mingw-ucrt.gem 36d935d799baa4dc488024f71881ff0bc8b172cecdfc54781169c40ec02cbdb3 nokogiri-1.13.9-x64-mingw32.gem ebaf82aa9a11b8fafb67873d19ee48efb565040f04c898cdce8ca0cd53ff1a12 nokogiri-1.13.9-x86-linux.gem 11789a2a11b28bc028ee111f23311461104d8c4468d5b901ab7536b282504154 nokogiri-1.13.9-x86-mingw32.gem 01830e1646803ff91c0fe94bc768ff40082c6de8cfa563dafd01b3f7d5f9d795 nokogiri-1.13.9-x86_64-darwin.gem 8e93b8adec22958013799c8690d81c2cdf8a90b6f6e8150ab22e11895844d781 nokogiri-1.13.9-x86_64-linux.gem 96f37c1baf0234d3ae54c2c89aef7220d4a8a1b03d2675ff7723565b0a095531 nokogiri-1.13.9.gem </code></pre> <h2>1.13.8 / 2022-07-23</h2> <h3>Deprecated</h3> <ul> <li><code>XML::Reader#attribute_nodes</code> is deprecated due to incompatibility between libxml2's <code>xmlReader</code> memory semantics and Ruby's garbage collector. Although this method continues to exist for backwards compatibility, it is unsafe to call and may segfault. This method will be removed in a future version of Nokogiri, and callers should use <code>#attribute_hash</code> instead. [<a href="https://github-redirect.dependabot.com/sparklemotion/nokogiri/issues/2598">https://github.com/facebook/rocksdb/issues/2598</a>]</li> </ul> <h3>Improvements</h3> <ul> <li><code>XML::Reader#attribute_hash</code> is a new method to safely retrieve the attributes of a node from <code>XML::Reader</code>. 
[<a href="https://github-redirect.dependabot.com/sparklemotion/nokogiri/issues/2598">https://github.com/facebook/rocksdb/issues/2598</a>, <a href="https://github-redirect.dependabot.com/sparklemotion/nokogiri/issues/2599">https://github.com/facebook/rocksdb/issues/2599</a>]</li> </ul> <h3>Fixed</h3> </blockquote> <p>... (truncated)</p> </details> <details> <summary>Commits</summary> <ul> <li><a href="https://github.com/sparklemotion/nokogiri/commit/897759cc25b57ebf2754897e910c86931dec7d39"><code>897759c</code></a> version bump to v1.13.9</li> <li><a href="https://github.com/sparklemotion/nokogiri/commit/aeb1ac32830a34369a46625613f21ee17e3e445e"><code>aeb1ac3</code></a> doc: update CHANGELOG</li> <li><a href="https://github.com/sparklemotion/nokogiri/commit/c663e4905a35edd23f7cc05a80126b4e446e4fd2"><code>c663e49</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/sparklemotion/nokogiri/issues/2671">https://github.com/facebook/rocksdb/issues/2671</a> from sparklemotion/flavorjones-update-zlib-1.2.13_v1...</li> <li><a 
href="https://github.com/sparklemotion/nokogiri/commit/212e07da28096db7d2cbda697bc2a38d71f6dc3a"><code>212e07d</code></a> ext: hack to cross-compile zlib v1.2.13 on darwin</li> <li><a href="https://github.com/sparklemotion/nokogiri/commit/76dbc8c5bef99467f3403297e29da4297fbddeb7"><code>76dbc8c</code></a> dep: update zlib to v1.2.13</li> <li><a href="https://github.com/sparklemotion/nokogiri/commit/24e3a9c41428195c66745fef8ce697101167bd08"><code>24e3a9c</code></a> doc: update CHANGELOG</li> <li><a href="https://github.com/sparklemotion/nokogiri/commit/4db3b4daa9ca8d1c1996cc9741c76ba2b8d1673b"><code>4db3b4d</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/sparklemotion/nokogiri/issues/2668">https://github.com/facebook/rocksdb/issues/2668</a> from sparklemotion/flavorjones-namespace-scopes-comp...</li> <li><a href="https://github.com/sparklemotion/nokogiri/commit/73d73d6e433f17f39e188f5c03ec176b60719416"><code>73d73d6</code></a> fix: Document#remove_namespaces! use-after-free bug</li> <li><a href="https://github.com/sparklemotion/nokogiri/commit/5f58b34724a6e48c7c478cfda5fc9c4cac581e08"><code>5f58b34</code></a> fix: namespace nodes behave properly when compacted</li> <li><a href="https://github.com/sparklemotion/nokogiri/commit/b08a8586c7c34831be0f13f9147b84016d17d94b"><code>b08a858</code></a> test: repro namespace_scopes compaction issue</li> <li>Additional commits viewable in <a href="https://github.com/sparklemotion/nokogiri/compare/v1.13.6...v1.13.9">compare view</a></li> </ul> </details> <br /> [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=nokogiri&package-manager=bundler&previous-version=1.13.6&new-version=1.13.9)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. 
You can also trigger a rebase manually by commenting `dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) --- <details> <summary>Dependabot commands and options</summary> <br /> You can trigger Dependabot actions by commenting on this PR with the standard `dependabot` commands (rebase, recreate, merge, squash and merge, cancel merge, reopen, close, and the various `ignore`/`use` defaults). You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/facebook/rocksdb/network/alerts). 
</details> Pull Request resolved: https://github.com/facebook/rocksdb/pull/10842 Reviewed By: siying Differential Revision: D40579643 Pulled By: ajkr fbshipit-source-id: 45035f691035cdbb111dc0b36489c4e91fe31cae | 21 October 2022, 05:13:41 UTC |
1663f77 | Jay Zhuang | 21 October 2022, 00:11:38 UTC | Fix no internal time recorded for small preclude_last_level (#10829) Summary: When the `preclude_last_level_data_seconds` or `preserve_internal_time_seconds` is smaller than 100 seconds, no seqno->time information was recorded. Also make sure all data will be compacted to the last level even if there's no write to record the time information. Pull Request resolved: https://github.com/facebook/rocksdb/pull/10829 Test Plan: added unit test Reviewed By: siying Differential Revision: D40443934 Pulled By: jay-zhuang fbshipit-source-id: 2ecf1361daf9f3e5c3385aee6dc924fa59e2813a | 21 October 2022, 00:11:38 UTC |
865d557 | Levi Tamasi | 20 October 2022, 23:00:58 UTC | Support providing the default column separately when serializing columns (#10839) Summary: The patch makes it possible to provide the value of the default column separately when calling `WideColumnSerialization::Serialize`. This eliminates the need to construct a new `WideColumns` vector in certain cases (for example, it will come in handy when implementing `Merge`). Pull Request resolved: https://github.com/facebook/rocksdb/pull/10839 Test Plan: `make check` Reviewed By: riversand963 Differential Revision: D40561448 Pulled By: ltamasi fbshipit-source-id: 69becdd510e6a83ab1feb956c12772110e1040d6 | 20 October 2022, 23:00:58 UTC |
33ceea9 | Andrew Kryczka | 20 October 2022, 22:04:29 UTC | Add DB property for fast block cache stats collection (#10832) Summary: This new property allows users to trigger the background block cache stats collection mode through the `GetProperty()` and `GetMapProperty()` APIs. The background mode has much lower overhead at the expense of returning stale values in more cases. Pull Request resolved: https://github.com/facebook/rocksdb/pull/10832 Test Plan: updated unit test Reviewed By: pdillinger Differential Revision: D40497883 Pulled By: ajkr fbshipit-source-id: bdcc93402f426463abb2153756aad9e295447343 | 20 October 2022, 22:04:29 UTC |
7555243 | Peter Dillinger | 19 October 2022, 05:06:57 UTC | Refactor ShardedCache for more sharing, static polymorphism (#10801) Summary: The motivations for this change include * Free up space in ClockHandle so that we can add data for secondary cache handling while still keeping within single cache line (64 byte) size. * This change frees up space by eliminating the need for the `hash` field by making the fixed-size key itself a hash, using a 128-bit bijective (lossless) hash. * Generally more customizability of ShardedCache (such as hashing) without worrying about virtual call overheads * ShardedCache now uses static polymorphism (template) instead of dynamic polymorphism (virtual overrides) for the CacheShard. No obvious performance benefit is seen from the change (as mostly expected; most calls to virtual functions in CacheShard could already be optimized to static calls), but offers more flexibility without incurring the runtime cost of adhering to a common interface (without type parameters or static callbacks). * You'll also notice less `reinterpret_cast`ing and other boilerplate in the Cache implementations, as this can go in ShardedCache. More detail: * Don't have LRUCacheShard maintain `std::shared_ptr<SecondaryCache>` copies (extra refcount) when LRUCache can be in charge of keeping a `shared_ptr`. * Renamed `capacity_mutex_` to `config_mutex_` to better represent the scope of what it guards. * Some preparation for 64-bit hash and indexing in LRUCache, but didn't include the full change because of slight performance regression. Pull Request resolved: https://github.com/facebook/rocksdb/pull/10801 Test Plan: Unit test updates were non-trivial because of major changes to the ClockCacheShard interface in handling of key vs. hash. 
Performance: Create with `TEST_TMPDIR=/dev/shm ./db_bench -benchmarks=fillrandom -num=30000000 -disable_wal=1 -bloom_bits=16` Test with ``` TEST_TMPDIR=/dev/shm ./db_bench -benchmarks=readrandom[-X1000] -readonly -num=30000000 -bloom_bits=16 -cache_index_and_filter_blocks=1 -cache_size=610000000 -duration 20 -threads=16 ``` Before: `readrandom [AVG 150 runs] : 321147 (± 253) ops/sec` After: `readrandom [AVG 150 runs] : 321530 (± 326) ops/sec` So possibly ~0.1% improvement. And with `-cache_type=hyper_clock_cache`: Before: `readrandom [AVG 30 runs] : 614126 (± 7978) ops/sec` After: `readrandom [AVG 30 runs] : 645349 (± 8087) ops/sec` So roughly 5% improvement! Reviewed By: anand1976 Differential Revision: D40252236 Pulled By: pdillinger fbshipit-source-id: ff8fc70ef569585edc95bcbaaa0386f61355ae5b | 19 October 2022, 05:06:57 UTC |
e267909 | Yueh-Hsuan Chiang | 18 October 2022, 21:38:13 UTC | Enable a multi-level db to smoothly migrate to FIFO via DB::Open (#10348) Summary: FIFO compaction should theoretically be able to open a DB created with any compaction style. However, the current code only allows FIFO compaction to open a DB with a single level. This PR relaxes the limitation of FIFO compaction and allows it to open a DB with multiple levels. Below is the read / write / compaction behavior: * The read behavior is untouched, and it works like a regular rocksdb instance. * The write behavior is untouched as well. When a FIFO compacted DB is opened with multiple levels, all new files will still be in level 0, and no files will be moved to a different level. * Compaction logic is extended. It will first identify the bottom-most non-empty level. Then, it will delete the oldest file in that level. Pull Request resolved: https://github.com/facebook/rocksdb/pull/10348 Test Plan: Added a new test to verify the migration from level to FIFO where the db has multiple levels. Extended existing test cases in db_test and db_basic_test to also verify all entries of a key after reopening the DB with FIFO compaction. Reviewed By: jay-zhuang Differential Revision: D40233744 fbshipit-source-id: 6cc011d6c3467e6bfb9b6a4054b87619e69815e1 | 18 October 2022, 21:38:13 UTC |
e466173 | Peter Dillinger | 18 October 2022, 07:35:35 UTC | Print stack traces on frozen tests in CI (#10828) Summary: Instead of existing calls to ps from gnu_parallel, call a new wrapper that does ps, looks for unit test like processes, and uses pstack or gdb to print thread stack traces. Also, using `ps -wwf` instead of `ps -wf` ensures output is not cut off. For security, CircleCI runs with security restrictions on ptrace (/proc/sys/kernel/yama/ptrace_scope = 1), and this change adds a work-around to `InstallStackTraceHandler()` (only used by testing tools) to allow any process from the same user to debug it. (I've also touched >100 files to ensure all the unit tests call this function.) Pull Request resolved: https://github.com/facebook/rocksdb/pull/10828 Test Plan: local manual + temporary infinite loop in a unit test to observe in CircleCI Reviewed By: hx235 Differential Revision: D40447634 Pulled By: pdillinger fbshipit-source-id: 718a4c4a5b54fa0f9af2d01a446162b45e5e84e1 | 18 October 2022, 07:35:35 UTC |
8367f0d | Peter Dillinger | 18 October 2022, 00:10:16 UTC | Improve / refactor anonymous mmap capabilities (#10810) Summary: The motivation for this change is a planned feature (related to HyperClockCache) that will depend on a large array that can essentially grow automatically, up to some bound, without the pointer address changing and with guaranteed zero-initialization of the data. Anonymous mmaps provide such functionality, and this change provides an internal API for that. The other existing use of anonymous mmap in RocksDB is for allocating in huge pages. That code and other related Arena code used some awkward non-RAII and pre-C++11 idioms, so I cleaned up much of that as well, with RAII, move semantics, constexpr, etc. More specifcs: * Minimize conditional compilation * Add Windows support for anonymous mmaps * Use std::deque instead of std::vector for more efficient bag Pull Request resolved: https://github.com/facebook/rocksdb/pull/10810 Test Plan: unit test added for new functionality Reviewed By: riversand963 Differential Revision: D40347204 Pulled By: pdillinger fbshipit-source-id: ca83fcc47e50fabf7595069380edd2954f4f879c | 18 October 2022, 00:10:16 UTC |
11c0d13 | Levi Tamasi | 17 October 2022, 21:32:59 UTC | Do not adjust test_batches_snapshots to avoid mixing runs (#10830) Summary: This is a small follow-up to https://github.com/facebook/rocksdb/pull/10821. The goal of that PR was to hold `test_batches_snapshots` fixed across all `db_stress` invocations; however, that patch didn't address the case when `test_batches_snapshots` is unset due to a conflicting `enable_compaction_filter` or `prefix_size` setting. This PR updates the logic so the other parameter is sanitized instead in the case of such conflicts. Pull Request resolved: https://github.com/facebook/rocksdb/pull/10830 Reviewed By: riversand963 Differential Revision: D40444548 Pulled By: ltamasi fbshipit-source-id: 0331265704904b729262adec37139292fcbb7805 | 17 October 2022, 21:32:59 UTC |
8142223 | Peter Dillinger | 17 October 2022, 15:33:58 UTC | Git ignore .clangd/ (#10817) Summary: Used for IDE integration Pull Request resolved: https://github.com/facebook/rocksdb/pull/10817 Test Plan: CI Reviewed By: riversand963 Differential Revision: D40348563 Pulled By: pdillinger fbshipit-source-id: ae2151017de7df6afc55363276105a7dac53683c | 17 October 2022, 15:33:58 UTC |
8124bc3 | Jay Zhuang | 16 October 2022, 16:28:43 UTC | Enable preclude_last_level_data_seconds in stress test (#10824) Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/10824 Reviewed By: siying Differential Revision: D40390535 Pulled By: jay-zhuang fbshipit-source-id: 700803a1aff8a1e77c038740d87931577e79bcf6 | 16 October 2022, 16:28:43 UTC |
2f3042d | Levi Tamasi | 14 October 2022, 21:25:05 UTC | Check wide columns in TestIterateAgainstExpected (#10820) Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/10820 Reviewed By: riversand963 Differential Revision: D40363653 Pulled By: ltamasi fbshipit-source-id: d347547d8cdd3f8926b35b6af4d1fa0f827e4a10 | 14 October 2022, 21:25:05 UTC |
3cd78bc | Levi Tamasi | 14 October 2022, 01:00:30 UTC | Temporarily disable mixing batched and non-batched runs (#10821) Summary: We have recently made some stress test improvements that rely on decoding the "value base" from the values stored in the database. This logic does not currently support the case when some KVs are written by a non-batched ops run and some by a batched ops run. The patch temporarily disables mixing these two. Pull Request resolved: https://github.com/facebook/rocksdb/pull/10821 Reviewed By: riversand963 Differential Revision: D40367326 Pulled By: ltamasi fbshipit-source-id: 66f2e0cbc097ab6b1f9e4b39b833bd466f1aaab5 | 14 October 2022, 01:00:30 UTC |
eae3a68 | Levi Tamasi | 13 October 2022, 19:06:36 UTC | Check wide columns in TestIterate (#10818) Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/10818 Test Plan: Tested using some simple blackbox crash test runs in the various modes (non-batched, batched, CF consistency). Reviewed By: riversand963 Differential Revision: D40349527 Pulled By: ltamasi fbshipit-source-id: 2918bc26adbbeac314beaa958aafe770b01e5cc6 | 13 October 2022, 19:06:36 UTC |
1ee747d | Peter Dillinger | 13 October 2022, 16:08:09 UTC | Deflake^2 DBBloomFilterTest.OptimizeFiltersForHits (#10816) Summary: This reverts https://github.com/facebook/rocksdb/issues/10792 and uses a different strategy to stabilize the test: remove the unnecessary randomness by providing a constant seed for shuffling keys. Pull Request resolved: https://github.com/facebook/rocksdb/pull/10816 Test Plan: `gtest-parallel ./db_bloom_filter_test -r1000 --gtest_filter=*ForHits*` Reviewed By: jay-zhuang Differential Revision: D40347957 Pulled By: pdillinger fbshipit-source-id: a270e157485cbd94ed03b80cdd21b954ebd57d57 | 13 October 2022, 16:08:09 UTC |
a2eea18 | Peter Dillinger | 13 October 2022, 16:00:37 UTC | Fix file modes (#10815) Summary: *.sh files need execute permission. Benchmark-linux failing in CircleCI due to https://github.com/facebook/rocksdb/issues/10803 Pull Request resolved: https://github.com/facebook/rocksdb/pull/10815 Test Plan: CI Reviewed By: ltamasi Differential Revision: D40346922 Pulled By: pdillinger fbshipit-source-id: 658f185b5d2e906ee50e1de1b12f27fa9968ba5d | 13 October 2022, 16:00:37 UTC |
6ff0c20 | Mark Callaghan | 12 October 2022, 22:13:28 UTC | Several small improvements (#10803) Summary: This PR makes several small improvements.

benchmark.sh
* add BYTES_PER_SYNC as an env variable
* use --prepopulate_block_cache when O_DIRECT is used
* use --undefok to list options that don't work for all 7.x releases
* print "failure" in report.tsv when a benchmark fails
* parse the slightly different throughput line used by db_bench for multireadrandom
* remove the trailing comma for BlobDB size before printing it in report.tsv
* use the last line of the output from /bin/time, as there can be more than one line when db_bench has a non-zero exit
* fix more bash lint warnings
* add ",stats" to the --benchmarks=... lines to get stats at the end of each benchmark

benchmark_compare.sh
* run revrange immediately after fillseq to let compaction debt get removed
* add --multiread_batched when --benchmarks=multireadrandom is used
* use --benchmarks=overwriteandwait when supported to get a more accurate measure of write-amp

Pull Request resolved: https://github.com/facebook/rocksdb/pull/10803 Test Plan: Run it for leveled, universal and BlobDB Reviewed By: jay-zhuang Differential Revision: D40278315 Pulled By: mdcallag fbshipit-source-id: 793134ddc7d48d05a07436cd8942c375a23983a7 | 12 October 2022, 22:13:28 UTC |
23b7dc2 | Levi Tamasi | 12 October 2022, 18:43:34 UTC | Check columns in CfConsistencyStressTest::VerifyDb (#10804) Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/10804 Reviewed By: riversand963 Differential Revision: D40279057 Pulled By: ltamasi fbshipit-source-id: 9efc3dae7f5eaab162d55a41c58c2535b0a53054 | 12 October 2022, 18:43:34 UTC |
85399b1 | Levi Tamasi | 11 October 2022, 21:40:25 UTC | Consider wide columns when checksumming in the stress tests (#10788) Summary: There are two places in the stress test code where we compute the CRC for a range of KVs for the purposes of checking consistency, namely in the CF consistency test (to make sure CFs contain the same data), and when performing `CompactRange` (to make sure the pre- and post-compaction states are equivalent). The patch extends the logic so that wide columns are also considered in both cases. Pull Request resolved: https://github.com/facebook/rocksdb/pull/10788 Test Plan: Tested using some simple blackbox crash test runs. Reviewed By: riversand963 Differential Revision: D40191134 Pulled By: ltamasi fbshipit-source-id: 542c21cac9077c6d225780deb210319bb5eee955 | 11 October 2022, 21:40:25 UTC |
5a5f21c | Jay Zhuang | 11 October 2022, 05:50:34 UTC | Allow the last level data moving up to penultimate level (#10782) Summary: Lock the penultimate level for the whole compaction inputs range, so any key in that compaction is safe to move up from the last level to penultimate level. Pull Request resolved: https://github.com/facebook/rocksdb/pull/10782 Reviewed By: siying Differential Revision: D40231540 Pulled By: siying fbshipit-source-id: ca115cc8b4018b35d797329fa85a19b06cc8c13e | 11 October 2022, 05:50:34 UTC |
2d0380a | Peter Dillinger | 11 October 2022, 00:59:17 UTC | Allow manifest fix-up without requiring prior state (#10796) Summary: This change is motivated by ensuring that `ldb update_manifest` or `UpdateManifestForFilesState` can run without expecting files to open when the old temperature is provided (in case the FileSystem strictly interprets non-kUnknown), but ended up fixing a problem in `OfflineManifestWriter` (used by `ldb unsafe_remove_sst_file`) where it would open some SST files during recovery and expect them to match the prior manifest state, even if not required by the intended new state. Also update BackupEngine to retry with Temperature kUnknown when reading file with potentially "wrong" temperature. Pull Request resolved: https://github.com/facebook/rocksdb/pull/10796 Test Plan: tests added/updated, that fail before the change(s) and now pass Reviewed By: jay-zhuang Differential Revision: D40232645 Pulled By: jay-zhuang fbshipit-source-id: b5aa2688aecfe0c320b80a7da689b315414c20be | 11 October 2022, 00:59:17 UTC |
f6a0065 | Hui Xiao | 10 October 2022, 22:52:10 UTC | Allow Flush(sync=true) not supported in DB::Open() and db_stress (#10784) Summary: **Context:** https://github.com/facebook/rocksdb/pull/10698 made `Flush(sync=true)` required for `DB::Open()` (to pass the original but now deleted assertion `impl->TEST_WALBufferIsEmpty()` under `manual_wal_flush=true`; see the https://github.com/facebook/rocksdb/pull/10698 summary for more) as well as for db_stress to pass. However, RocksDB users may not implement SyncWAL() (used in Flush(sync=true)). Therefore we replace such calls in DB::Open and db_stress in this PR and align with https://github.com/facebook/rocksdb/blob/main/db/db_impl/db_impl_open.cc#L1883-L1887 and https://github.com/facebook/rocksdb/blob/main/db_stress_tool/db_stress_test_base.cc#L847-L849 Pull Request resolved: https://github.com/facebook/rocksdb/pull/10784 Test Plan: make check Reviewed By: anand1976 Differential Revision: D40193354 Pulled By: anand1976 fbshipit-source-id: e80d53880799ae01bdd717641d07997d3bfe2b54 | 10 October 2022, 22:52:10 UTC
ebf8c45 | akankshamahajan | 10 October 2022, 22:48:48 UTC | Provide support for async_io with tailing iterators (#10781) Summary: Provide support for async_io if ReadOptions.tailing is set true. Pull Request resolved: https://github.com/facebook/rocksdb/pull/10781 Test Plan: - Update unit tests - Ran db_bench: ./db_bench --benchmarks="readrandom" --use_existing_db --use_tailing_iterator=1 --async_io=1 Reviewed By: anand1976 Differential Revision: D40128882 Pulled By: anand1976 fbshipit-source-id: 55e17855536871a5c47e2de92d238ae005c32d01 | 10 October 2022, 22:48:48 UTC |
5182bf3 | Levi Tamasi | 10 October 2022, 22:07:07 UTC | Skip column validation for non-value types when iter_start_ts is set (#10799) Summary: When the `iter_start_ts` read option is set, iterator exposes internal keys. This also includes tombstones, which by definition do not have a value (or columns). The patch makes sure we skip the wide-column consistency check in this case. Pull Request resolved: https://github.com/facebook/rocksdb/pull/10799 Test Plan: Tested using a simple blackbox crash test with timestamps enabled. Reviewed By: jay-zhuang, riversand963 Differential Revision: D40235628 fbshipit-source-id: 49519fb55d8fe2bb9249ced809f7a81bff2b9df2 | 10 October 2022, 22:07:07 UTC |
a6ce195 | Changyu Bi | 10 October 2022, 20:58:55 UTC | Fix flaky test ShuttingDownNotBlockStalledWrites (#10800) Summary: DBTest::ShuttingDownNotBlockStalledWrites is flaky, added new sync point dependency to fix it. Pull Request resolved: https://github.com/facebook/rocksdb/pull/10800 Test Plan: gtest-parallel --repeat=1000 ./db_test --gtest_filter="*ShuttingDownNotBlockStalledWrites" Reviewed By: jay-zhuang Differential Revision: D40239116 Pulled By: jay-zhuang fbshipit-source-id: 8c2d7e7df58f202d287bd9f5c9b60b7eff270d0c | 10 October 2022, 20:58:55 UTC |
62ba5c8 | Jay Zhuang | 10 October 2022, 19:34:25 UTC | Deflake DBBloomFilterTest.OptimizeFiltersForHits (#10792) Summary: The test may fail because the L5 files may only cover a small portion of the whole key range. Pull Request resolved: https://github.com/facebook/rocksdb/pull/10792 Test Plan: ``` gtest-parallel ./db_bloom_filter_test --gtest_filter=DBBloomFilterTest.OptimizeFiltersForHits -r 1000 -w 100 ``` Reviewed By: siying Differential Revision: D40217600 Pulled By: siying fbshipit-source-id: 18db549184bccf5e513eaa7e31ab17385b71ef71 | 10 October 2022, 19:34:25 UTC
fac7a31 | anand76 | 10 October 2022, 17:47:07 UTC | Fix a few errors in async IO blog post (#10795) Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/10795 Reviewed By: jay-zhuang, akankshamahajan15 Differential Revision: D40229329 fbshipit-source-id: 7ec5347e0a8a52f80a0a9cc2a0c17b094736d6d9 | 10 October 2022, 17:47:07 UTC |
a45e687 | Qingping Wang | 10 October 2022, 16:46:09 UTC | fix issue 10751 (#10765) Summary: Fix https://github.com/facebook/rocksdb/issues/10751 where a stalled write could be blocked forever when DB shutdown. Pull Request resolved: https://github.com/facebook/rocksdb/pull/10765 Reviewed By: ajkr Differential Revision: D40110069 Pulled By: ajkr fbshipit-source-id: 598c05777db9be85913a0a85e421b3295ecdff5e | 10 October 2022, 16:46:09 UTC |
c401f28 | Jay Zhuang | 08 October 2022, 01:49:40 UTC | Add option `preserve_internal_time_seconds` to preserve the time info (#10747) Summary: Add option `preserve_internal_time_seconds` to preserve the internal time information. It's mostly for the migration of existing data to tiered storage (`preclude_last_level_data_seconds`). When the tiering feature is just enabled, the existing data won't have the time information needed to decide if it's hot or cold. Enabling this feature will start collecting and preserving the time information for the new data. Pull Request resolved: https://github.com/facebook/rocksdb/pull/10747 Reviewed By: siying Differential Revision: D39910141 Pulled By: siying fbshipit-source-id: 25c21638e37b1a7c44006f636b7d714fe7242138 | 08 October 2022, 01:49:40 UTC
f366f90 | anand76 | 08 October 2022, 00:42:48 UTC | Blog post for asynchronous IO (#10789) Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/10789 Reviewed By: akankshamahajan15 Differential Revision: D40198988 Pulled By: akankshamahajan15 fbshipit-source-id: 5db74f12dd8854f6288fbbf8775c8e759778c307 | 08 October 2022, 00:42:48 UTC |
11943e8 | Yanqin Jin | 07 October 2022, 21:11:23 UTC | Exclude timestamp when checking compaction boundaries (#10787) Summary: When checking if a range [start, end) overlaps with a compaction whose range is [start1, end1), always exclude timestamp from start, end, start1 and end1, otherwise some versions of one user key may be compacted to bottommost layer while others remain in the original level. Pull Request resolved: https://github.com/facebook/rocksdb/pull/10787 Test Plan: make check Reviewed By: ltamasi Differential Revision: D40187672 Pulled By: ltamasi fbshipit-source-id: 81226267fd3e33ffa79665c62abadf2ebec45496 | 07 October 2022, 21:11:23 UTC |
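The boundary fix in the entry above can be illustrated with a small sketch (hypothetical helper names; in RocksDB the timestamp is a fixed-width suffix appended to each user key, and the suffix width here is an assumption): the overlap test strips the timestamp so that all versions of a user key are compacted together.

```python
TS_LEN = 8  # assumed fixed-width timestamp suffix on every user key

def strip_ts(key_with_ts: bytes) -> bytes:
    """Drop the timestamp suffix so comparisons see only the user key."""
    return key_with_ts[:-TS_LEN]

def ranges_overlap(start1: bytes, end1: bytes,
                   start2: bytes, end2: bytes) -> bool:
    """Overlap test on half-open [start, end) ranges, ignoring timestamps,
    so no version of a user key is left behind in the original level."""
    return (strip_ts(start1) < strip_ts(end2)
            and strip_ts(start2) < strip_ts(end1))
```

Comparing the full keys (timestamp included) could declare two ranges disjoint even though they contain versions of the same user keys, which is the inconsistency the patch avoids.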
7af47c5 | Levi Tamasi | 07 October 2022, 18:17:57 UTC | Verify wide columns during prefix scan in stress tests (#10786) Summary: The patch adds checks to the `{NonBatchedOps,BatchedOps,CfConsistency}StressTest::TestPrefixScan` methods to make sure the wide columns exposed by the iterators are as expected (based on the value base encoded into the iterator value). It also makes some code hygiene improvements in these methods. Pull Request resolved: https://github.com/facebook/rocksdb/pull/10786 Test Plan: Ran some simple blackbox tests in the various modes (non-batched, batched, CF consistency). Reviewed By: riversand963 Differential Revision: D40163623 Pulled By: riversand963 fbshipit-source-id: 72f4c3b51063e48c15f974c4ec64d751d3ed0a83 | 07 October 2022, 18:17:57 UTC |
943247b | Yanqin Jin | 07 October 2022, 01:08:19 UTC | Expand stress test coverage for min_write_buffer_number_to_merge (#10785) Summary: As title. Pull Request resolved: https://github.com/facebook/rocksdb/pull/10785 Test Plan: CI Reviewed By: ltamasi Differential Revision: D40162583 Pulled By: ltamasi fbshipit-source-id: 4e01f9b682f397130e286cf5d82190b7973fa3c1 | 07 October 2022, 01:08:19 UTC |
23fa5b7 | Jay Zhuang | 06 October 2022, 22:54:58 UTC | Use `sstableKeyCompare()` for compaction output boundary check (#10763) Summary: To make it consistent with the compaction picker, which uses `sstableKeyCompare()` to pick the overlapping files. For example, without this change, it may cut L1 files like: ``` L1: [2-21] [22-30] L2: [1-10] [21-30] ``` Because "21" on L1 is smaller than "21" on L2. But for compaction, these 2 files are overlapped. `sstableKeyCompare()` also takes range deletes into consideration, which may cut a file for the same key. It also makes the `max_compaction_bytes` calculation more accurate for cases like the above, where the overlapped bytes were underestimated. Also make sure the 2 keys won't be split into 2 files because of reaching `max_compaction_bytes`. Pull Request resolved: https://github.com/facebook/rocksdb/pull/10763 Reviewed By: cbi42 Differential Revision: D39971904 Pulled By: cbi42 fbshipit-source-id: bcc309e9c3dc61a8f50667a6f633e6132c0154a8 | 06 October 2022, 22:54:58 UTC
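Why a raw internal-key comparison mis-judges the `[2-21]` / `[21-30]` example above can be sketched with a simplified model (illustrative only; in RocksDB internal keys order by user key ascending, then sequence number descending, which the negated seqno mimics here):

```python
def internal_key(user_key: bytes, seqno: int):
    """Simplified internal key: equal user keys sort newest (largest seqno) first."""
    return (user_key, -seqno)

def overlap_by_internal_key(f1_largest, f2_smallest) -> bool:
    # Boundary check on full internal keys: can call two files disjoint
    # even though they share a user key.
    return f2_smallest <= f1_largest

def overlap_by_user_key(f1_largest, f2_smallest) -> bool:
    # sstableKeyCompare()-style check: equal user keys always overlap.
    return f2_smallest[0] <= f1_largest[0]
```

If the L1 file ends at user key `21` with seqno 100 and the L2 file starts at `21` with seqno 50, the internal-key check sees no overlap while the user-key check correctly does.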
d6d8c00 | Levi Tamasi | 06 October 2022, 22:07:16 UTC | Verify columns in NonBatchedOpsStressTest::VerifyDb (#10783) Summary: As the first step of covering the wide-column functionality of iterators in our stress tests, the patch adds verification logic to `NonBatchedOpsStressTest::VerifyDb` that checks whether the iterator's value and columns are in sync. Note: I plan to update the other types of stress tests and add similar verification for prefix scans etc. in separate PRs. Pull Request resolved: https://github.com/facebook/rocksdb/pull/10783 Test Plan: Ran some simple blackbox crash tests. Reviewed By: riversand963 Differential Revision: D40152370 Pulled By: riversand963 fbshipit-source-id: 8f9d17d7af5da58ccf1bd2057cab53cc9645ac35 | 06 October 2022, 22:07:16 UTC |
b205c6d | Peter Dillinger | 06 October 2022, 21:54:21 UTC | Fix bug in HyperClockCache ApplyToEntries; cleanup (#10768) Summary: We have seen some rare crash test failures in HyperClockCache, and the source could certainly be a bug fixed in this change, in ClockHandleTable::ConstApplyToEntriesRange. It wasn't properly accounting for the fact that incrementing the acquire counter could be ineffective, due to parallel updates. (When incrementing the acquire counter is ineffective, it is incorrect to then decrement it.) This change includes some other minor clean-up in HyperClockCache, and adds stats_dump_period_sec with a much lower period to the crash test. This should be the primary caller of ApplyToEntries, in collecting cache entry stats. Pull Request resolved: https://github.com/facebook/rocksdb/pull/10768 Test Plan: haven't been able to reproduce the failure, but should be in a better state (bug fix and improved crash test) Reviewed By: anand1976 Differential Revision: D40034747 Pulled By: anand1976 fbshipit-source-id: a06fcefe146e17ee35001984445cedcf3b63eb68 | 06 October 2022, 21:54:21 UTC |
f461e06 | Andrew Kryczka | 05 October 2022, 22:31:04 UTC | Address feedback on recent recovery testing blog post (#10780) Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/10780 Reviewed By: hx235 Differential Revision: D40120327 Pulled By: hx235 fbshipit-source-id: 08b43a11cee11743b4428dd2a9aff44270668e05 | 05 October 2022, 22:31:04 UTC |
4d82b94 | Yanqin Jin | 05 October 2022, 19:24:39 UTC | Sanitize min_write_buffer_number_to_merge to 1 with atomic_flush (#10773) Summary: With the current implementation, within the same RocksDB instance, all column families with non-empty memtables will be scheduled for flush if RocksDB determines that any column family needs to be flushed, e.g. memtable full, write buffer manager, etc., if atomic flush is enabled. Not doing so can lead to data loss and inconsistency when WAL is disabled, which is a common setting when atomic flush is enabled. Therefore, setting the per-column-family knob min_write_buffer_number_to_merge to a value greater than 1 is not compatible with atomic flush, and it should be sanitized during column family creation and db open. Pull Request resolved: https://github.com/facebook/rocksdb/pull/10773 Test Plan: Reproduce: D39993203 has detailed steps. Run the test with and without the fix. Reviewed By: cbi42 Differential Revision: D40077955 Pulled By: cbi42 fbshipit-source-id: 451a9179eb531ac42eaccf40b451b9dec4085240 | 05 October 2022, 19:24:39 UTC
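The sanitization rule described in this entry reduces to a one-line check; a minimal sketch (hypothetical function, not RocksDB's actual sanitizer):

```python
def sanitize_min_write_buffer_number_to_merge(value: int,
                                              atomic_flush: bool) -> int:
    """With atomic_flush, a value > 1 could leave some CFs with unflushed
    memtables when others flush, risking inconsistency with WAL disabled;
    force the knob down to 1 in that case."""
    if atomic_flush and value > 1:
        return 1
    return value
```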
eca47fb | Changyu Bi | 05 October 2022, 16:27:14 UTC | Ignore kBottommostFiles compaction logic when allow_ingest_behind (#10767) Summary: fix for https://github.com/facebook/rocksdb/issues/10752 where RocksDB could be in an infinite compaction loop (with compaction reason kBottommostFiles) if allow_ingest_behind is enabled and the bottommost level is unfilled. Pull Request resolved: https://github.com/facebook/rocksdb/pull/10767 Test Plan: Added a unit test to reproduce the compaction loop. Reviewed By: ajkr Differential Revision: D40031861 Pulled By: ajkr fbshipit-source-id: 71c4b02931fbe507a847632905404c9b8fa8c96b | 05 October 2022, 16:27:14 UTC |
00d697b | Andrew Kryczka | 05 October 2022, 06:24:54 UTC | blog post: Verifying crash-recovery with lost buffered writes (#10775) Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/10775 Reviewed By: hx235 Differential Revision: D40090300 Pulled By: hx235 fbshipit-source-id: 1358f0a4a1583b49548305cfd1477e520c8985ba | 05 October 2022, 06:24:54 UTC |
ffde463 | Changyu Bi | 05 October 2022, 05:23:24 UTC | Cleanup SuperVersion in Iterator::Refresh() (#10770) Summary: Fix a bug in Iterator::Refresh() where the local SV it obtained could be obsolete upon return, and should be cleaned up. Pull Request resolved: https://github.com/facebook/rocksdb/pull/10770 Test Plan: added a unit test to reproduce the issue. Reviewed By: ajkr Differential Revision: D40063809 Pulled By: ajkr fbshipit-source-id: 619e728eb0f1ac9540b4d0ad38e43acc37a514b2 | 05 October 2022, 05:23:24 UTC |
edda219 | Yanqin Jin | 04 October 2022, 23:43:01 UTC | Manual flush with `wait=false` should not stall when writes stopped (#10001) Summary: When `FlushOptions::wait` is set to false, manual flush should not stall forever. If the database has already stopped writes, then the thread calling `DB::Flush()` with `FlushOptions::wait=false` should not enter the `DBImpl::write_thread_`. To prevent this, we should do a check at the beginning and return `TryAgain()` Resolves: https://github.com/facebook/rocksdb/issues/9892 Pull Request resolved: https://github.com/facebook/rocksdb/pull/10001 Reviewed By: siying Differential Revision: D36422303 Pulled By: siying fbshipit-source-id: 723bd3065e8edc4f17c82449d0d6b95a2381ac0a | 04 October 2022, 23:43:01 UTC |
f007ad8 | Jay Zhuang | 04 October 2022, 21:53:32 UTC | RoundRobin TTL compaction (#10725) Summary: For RoundRobin compaction, the data should be mostly sorted per level and within level. Use normal compaction picker for RR until all expired data is compacted. Pull Request resolved: https://github.com/facebook/rocksdb/pull/10725 Reviewed By: ajkr Differential Revision: D39771069 Pulled By: jay-zhuang fbshipit-source-id: 7ccf88d7c093fad5673bda73a7b08cc4757780cd | 04 October 2022, 21:53:32 UTC |
626eaa4 | Varun Sharma | 04 October 2022, 19:10:30 UTC | ci: add GitHub token permissions for workflow (#10549) Summary: This PR adds minimum token permissions for the GITHUB_TOKEN in GitHub Actions workflows using https://github.com/step-security/secure-workflows. GitHub recommends defining minimum GITHUB_TOKEN permissions for securing GitHub Actions workflows - https://github.blog/changelog/2021-04-20-github-actions-control-permissions-for-github_token/ - https://docs.github.com/en/actions/security-guides/automatic-token-authentication#modifying-the-permissions-for-the-github_token - The Open Source Security Foundation (OpenSSF) [Scorecards](https://github.com/ossf/scorecard) treats not setting token permissions as a high-risk issue This project is part of the top 100 critical projects as per OpenSSF (https://github.com/ossf/wg-securing-critical-projects), so fixing the token permissions to improve security. Before the change: `GITHUB_TOKEN` has `write` permissions for multiple scopes, e.g. https://github.com/facebook/rocksdb/runs/7936368166?check_suite_focus=true#step:1:19 After the change: `GITHUB_TOKEN` will have minimum permissions needed for the jobs. Signed-off-by: Varun Sharma <varunsh@stepsecurity.io> Pull Request resolved: https://github.com/facebook/rocksdb/pull/10549 Reviewed By: ajkr Differential Revision: D38923184 Pulled By: jay-zhuang fbshipit-source-id: 0c48f98fe90665e53724f57a7d3b01dd80f34a93 | 04 October 2022, 19:10:30 UTC |
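Minimum token permissions of the kind this PR adds are declared at the top of a workflow file; a generic example (not the exact blocks added by this PR):

```yaml
# Restrict the automatic GITHUB_TOKEN to read-only repository access.
permissions:
  contents: read

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
```

Jobs that genuinely need write scopes can re-widen permissions at the job level, so the default stays least-privilege.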
5f4391d | Peter Dillinger | 04 October 2022, 05:23:38 UTC | Some clean-up of secondary cache (#10730) Summary: This is intended as a step toward possibly separating secondary cache integration from the Cache implementation as much as possible, to (hopefully) minimize code duplication in adding secondary cache support to HyperClockCache. * Major clarifications to API docs of secondary cache compatible parts of Cache. For example, previously the docs seemed to suggest that Wait() was not needed if IsReady()==true. And it wasn't clear what operations were actually supported on pending handles. * Add some assertions related to these requirements, such as that we don't Release() before Wait() (which would leak a secondary cache handle). * Fix a leaky abstraction with dummy handles, which are supposed to be internal to the Cache. Previously, these just used value=nullptr to indicate dummy handle, which meant that they could be confused with legitimate value=nullptr cases like cache reservations. Also fixed blob_source_test which was relying on this leaky abstraction. * Drop "incomplete" terminology, which was another name for "pending". * Split handle flags into "mutable" ones requiring mutex and "immutable" ones which do not. Because of single-threaded access to pending handles, the "Is Pending" flag can be in the "immutable" set. This allows removal of a TSAN work-around and removing a mutex acquire-release in IsReady(). * Remove some unnecessary handling of charges on handles of failed lookups. Keeping total_charge=0 means no special handling needed. (Removed one unnecessary mutex acquire/release.) * Simplify handling of dummy handle in Lookup(). There is no need to explicitly Ref & Release w/Erase if we generally overwrite the dummy anyway. (Removed one mutex acquire/release, a call to Release().) 
Intended follow-up: * Clarify APIs in secondary_cache.h * Doesn't SecondaryCacheResultHandle transfer ownership of the Value() on success (implementations should not release the value in destructor)? * Does Wait() need to be called if IsReady() == true? (This would be different from Cache.) * Do Value() and Size() have undefined behavior if IsReady() == false? * Why have a custom API for what is essentially a std::future<std::pair<void*, size_t>>? * Improve unit testing of standalone handle case * Apparent null `e` bug in `free_standalone_handle` case * Clean up secondary cache testing in lru_cache_test * Why does TestSecondaryCacheResultHandle hold on to a Cache::Handle? * Why does TestSecondaryCacheResultHandle::Wait() do nothing? Shouldn't it establish the post-condition IsReady() == true? * (Assuming that is sorted out...) Shouldn't TestSecondaryCache::WaitAll simply wait on each handle in order (no casting required)? How about making that the default implementation? * Why does TestSecondaryCacheResultHandle::Size() check Value() first? If the API is intended to be returning 0 before IsReady(), then that is weird but should at least be documented. Otherwise, if it's intended to be undefined behavior, we should assert IsReady(). * Consider replacing "standalone" and "dummy" entries with a single kind of "weak" entry that deletes its value when it reaches zero refs. Suppose you are using compressed secondary cache and have two iterators at similar places. It will probably be common for one iterator to have standalone results pinned (out of cache) when the second iterator needs those same blocks and has to re-load them from secondary cache and duplicate the memory. Combining the dummy and the standalone should fix this. 
Pull Request resolved: https://github.com/facebook/rocksdb/pull/10730 Test Plan: existing tests (minor update), and crash test with sanitizers and secondary cache Performance test for any regressions in LRUCache (primary only): Create DB with ``` TEST_TMPDIR=/dev/shm ./db_bench -benchmarks=fillrandom -num=30000000 -disable_wal=1 -bloom_bits=16 ``` Test before & after (run at same time) with ``` TEST_TMPDIR=/dev/shm ./db_bench -benchmarks=readrandom[-X100] -readonly -num=30000000 -bloom_bits=16 -cache_index_and_filter_blocks=1 -cache_size=233000000 -duration 30 -threads=16 ``` Before: readrandom [AVG 100 runs] : 22234 (± 63) ops/sec; 1.6 (± 0.0) MB/sec After: readrandom [AVG 100 runs] : 22197 (± 64) ops/sec; 1.6 (± 0.0) MB/sec That's within 0.2%, which is not significant by the confidence intervals. Reviewed By: anand1976 Differential Revision: D39826010 Pulled By: anand1976 fbshipit-source-id: 3202b4a91f673231c97648ae070e502ae16b0f44 | 04 October 2022, 05:23:38 UTC |
3ae00de | Levi Tamasi | 04 October 2022, 01:09:56 UTC | Disable ingestion in stress tests when PutEntity is used (#10769) Summary: `SstFileWriter` currently does not support the `PutEntity` API, so in `TestIngestExternalFile` all key-values are written using regular `Put`s. This violates the assumption that whether or not a key corresponds to a plain old key-value or a wide-column entity can be determined by solely looking at the "value base" used when generating the value. The patch fixes this issue by disabling ingestion when `PutEntity` is enabled in the stress tests. Pull Request resolved: https://github.com/facebook/rocksdb/pull/10769 Test Plan: Ran a simple blackbox stress test. Reviewed By: akankshamahajan15 Differential Revision: D40042132 Pulled By: ltamasi fbshipit-source-id: 93e75ff55545b7b69fa4ddef1d96093c961158a0 | 04 October 2022, 01:09:56 UTC |
8b430e0 | Changyu Bi | 03 October 2022, 23:22:39 UTC | Add iterator refresh to stress test (#10766) Summary: added calls to `Iterator::Refresh()` in `NonBatchedOpsStressTest::TestIterateAgainstExpected()`. The testing key range is locked in `TestIterateAgainstExpected` so I do not expect this change to provide thorough stress test to `Iterator::Refresh()`. However, it can still be helpful for catching bugs like https://github.com/facebook/rocksdb/issues/10739. Will add calls to refresh in `TestIterate` once we support iterator refresh with snapshots. Pull Request resolved: https://github.com/facebook/rocksdb/pull/10766 Test Plan: `python3 tools/db_crashtest.py whitebox --simple --verify_iterator_with_expected_state_one_in=2` Reviewed By: ajkr Differential Revision: D40008320 Pulled By: ajkr fbshipit-source-id: cec93b07f915ef6476d41c1fee9b23c115188085 | 03 October 2022, 23:22:39 UTC |
ae0f9c3 | akankshamahajan | 03 October 2022, 17:59:45 UTC | Add new property in IOOptions to skip recursing through directories and list only files during GetChildren. (#10668) Summary: Add new property "do_not_recurse" in IOOptions for the underlying file system to skip iteration of directories during DB::Open if there are no subdirectories and list only files. By default this property is set to false. This property is set true currently in the code where RocksDB is sure only files are needed during DB::Open. Provided support in PosixFileSystem to use "do_not_recurse". Test Plan: - Existing tests Pull Request resolved: https://github.com/facebook/rocksdb/pull/10668 Reviewed By: anand1976 Differential Revision: D39471683 Pulled By: akankshamahajan15 fbshipit-source-id: 90e32f0b86d5346d53bc2714d3a0e7002590527f | 03 October 2022, 17:59:45 UTC
9f2363f | Changyu Bi | 30 September 2022, 23:13:03 UTC | User-defined timestamp support for `DeleteRange()` (#10661) Summary: Add user-defined timestamp support for range deletion. The new API is `DeleteRange(opt, cf, begin_key, end_key, ts)`. Most of the change is to update the comparator to compare without timestamp. Other than that, major changes are - internal range tombstone data structures (`FragmentedRangeTombstoneList`, `RangeTombstone`, etc.) to store timestamps. - Garbage collection of range tombstones and range tombstone covered keys during compaction. - Get()/MultiGet() to return the timestamp of a range tombstone when needed. - Get/Iterator with range tombstones bounded by readoptions.timestamp. - timestamp crash test now issues DeleteRange by default. Pull Request resolved: https://github.com/facebook/rocksdb/pull/10661 Test Plan: - Added unit test: `make check` - Stress test: `python3 tools/db_crashtest.py --enable_ts whitebox --readpercent=57 --prefixpercent=4 --writepercent=25 -delpercent=5 --iterpercent=5 --delrangepercent=4` - Ran `db_bench` to measure regression when timestamp is not enabled. The tests are for write (with some range deletion) and iterate with DB fitting in memory: `./db_bench --benchmarks=fillrandom,seekrandom --writes_per_range_tombstone=200 --max_write_buffer_number=100 --min_write_buffer_number_to_merge=100 --writes=500000 --reads=500000 --seek_nexts=10 --disable_auto_compactions -disable_wal=true --max_num_range_tombstones=1000`. Did not see consistent regression in the no-timestamp case. | micros/op | fillrandom | seekrandom | | --- | --- | --- | |main| 2.58 |10.96| |PR 10661| 2.68 |10.63| Reviewed By: riversand963 Differential Revision: D39441192 Pulled By: cbi42 fbshipit-source-id: f05aca3c41605caf110daf0ff405919f300ddec2 | 30 September 2022, 23:13:03 UTC
3b81649 | Hui Xiao | 30 September 2022, 22:48:33 UTC | Add manual_wal_flush, FlushWAL() to stress/crash test (#10698) Summary: **Context/Summary:** Introduce `manual_wal_flush_one_in` as titled. - When `manual_wal_flush_one_in > 0`, we also need tracing to correctly verify recovery because WAL data can be lost in this case when `FlushWAL()` is not explicitly called by users of RocksDB (in our case, db stress) and the recovery from such potential WAL data loss is a prefix recovery that requires tracing to verify. As another consequence, we need to disable features that can't run under unsynced data loss with `manual_wal_flush_one_in`. Incompatibilities fixed along the way: ``` db_stress: db/db_impl/db_impl_open.cc:2063: static rocksdb::Status rocksdb::DBImpl::Open(const rocksdb::DBOptions&, const string&, const std::vector<rocksdb::ColumnFamilyDescriptor>&, std::vector<rocksdb::ColumnFamilyHandle*>*, rocksdb::DB**, bool, bool): Assertion `impl->TEST_WALBufferIsEmpty()' failed. ``` - It turns out that `Writer::AddCompressionTypeRecord` runs `EmitPhysicalRecord(kSetCompressionType, encode.data(), encode.size());` before this assertion but does not trigger a flush if `manual_wal_flush` is set. This leads to `impl->TEST_WALBufferIsEmpty()` being false. - As suggested, the assertion is removed and the violation case is handled by `FlushWAL(sync=true)` along with refactoring `TEST_WALBufferIsEmpty()` to be `WALBufferIsEmpty()` since it is used in prod code now. Pull Request resolved: https://github.com/facebook/rocksdb/pull/10698 Test Plan: - Locally running `python3 tools/db_crashtest.py blackbox --manual_wal_flush_one_in=1 --manual_wal_flush=1 --sync_wal_one_in=100 --atomic_flush=1 --flush_one_in=100 --column_families=3` - Joined https://github.com/facebook/rocksdb/pull/10624 in auto CI testings with all RocksDB stress/crash test jobs Reviewed By: ajkr Differential Revision: D39593752 Pulled By: ajkr fbshipit-source-id: 3a2135bb792c52d2ffa60257d4fbc557fb04d2ce | 30 September 2022, 22:48:33 UTC
793fd09 | anand76 | 30 September 2022, 20:37:05 UTC | Track expected state only if expected values dir is non-empty (#10764) Summary: If the `-expected_values_dir` argument to db_stress is empty, then verification against expected state is effectively disabled. But `RunStressTest` still calls `TrackExpectedState`, which returns `NotSupported`, causing the crash test to fail with a false alarm. Fix it by only calling `TrackExpectedState` if necessary. Pull Request resolved: https://github.com/facebook/rocksdb/pull/10764 Reviewed By: ajkr Differential Revision: D39980129 Pulled By: anand1976 fbshipit-source-id: d02651746fe3a297877a4b2b2fbcb7274860f49c | 30 September 2022, 20:37:05 UTC
9078fcc | Levi Tamasi | 30 September 2022, 18:11:07 UTC | Add the PutEntity API to the stress/crash tests (#10760) Summary: The patch adds the `PutEntity` API to the non-batched, batched, and CF consistency stress tests. Namely, when the new `db_stress` command line parameter `use_put_entity_one_in` is greater than zero, one in N writes on average is performed using `PutEntity` rather than `Put`. The wide-column entity written has the generated value in its default column; in addition, it contains up to three additional columns where the original generated value is divided up between the column name and the column value (with the column name containing the first k characters of the generated value, and the column value containing the rest). Whether `PutEntity` is used (and if so, how many columns the entity has) is completely determined by the "value base" used to generate the value (that is, there is no randomness involved). Assuming the same `use_put_entity_one_in` setting is used across `db_stress` invocations, this enables us to reconstruct and validate the entity during subsequent `db_stress` runs. Note that `PutEntity` is currently incompatible with `Merge`, transactions, and user-defined timestamps; these combinations are currently disabled/disallowed. Pull Request resolved: https://github.com/facebook/rocksdb/pull/10760 Test Plan: Ran some batched, non-batched, and CF consistency stress tests using the script. Reviewed By: riversand963 Differential Revision: D39939032 Pulled By: ltamasi fbshipit-source-id: eafdf124e95993fb7d73158e3b006d11819f7fa9 | 30 September 2022, 18:11:07 UTC |
fd71a82 | Changyu Bi | 30 September 2022, 17:50:44 UTC | Use actual file size when checking max_compaction_size (#10728) Summary: Currently, there are places in compaction_picker where we add up the `compensated_file_size` of files being compacted and limit the sum to be under `max_compaction_bytes`. `compensated_file_size` contains a booster for point tombstones and should be used only for determining a file's compaction priority. This PR replaces `compensated_file_size` with the actual file size in such places. Pull Request resolved: https://github.com/facebook/rocksdb/pull/10728 Test Plan: CI Reviewed By: ajkr Differential Revision: D39789427 Pulled By: cbi42 fbshipit-source-id: 1f89fb6c0159c53bf01d8dc783f465959f442c81 | 30 September 2022, 17:50:44 UTC
f3cc666 | Jay Zhuang | 30 September 2022, 02:43:55 UTC | Align compaction output file boundaries to the next level ones (#10655) Summary: Try to align the compaction output file boundaries to the next level ones (grandparent level), to reduce the level compaction write-amplification. In level compaction, there are "wasted" data at the beginning and end of the output level files. Align the file boundary can avoid such "wasted" compaction. With this PR, it tries to align the non-bottommost level file boundaries to its next level ones. It may cut file when the file size is large enough (at least 50% of target_file_size) and not too large (2x target_file_size). db_bench shows about 12.56% compaction reduction: ``` TEST_TMPDIR=/data/dbbench2 ./db_bench --benchmarks=fillrandom,readrandom -max_background_jobs=12 -num=400000000 -target_file_size_base=33554432 # baseline: Flush(GB): cumulative 25.882, interval 7.216 Cumulative compaction: 285.90 GB write, 162.36 MB/s write, 269.68 GB read, 153.15 MB/s read, 2926.7 seconds # with this change: Flush(GB): cumulative 25.882, interval 7.753 Cumulative compaction: 249.97 GB write, 141.96 MB/s write, 233.74 GB read, 132.74 MB/s read, 2534.9 seconds ``` The compaction simulator shows a similar result (14% with 100G random data). As a side effect, with this PR, the SST file size can exceed the target_file_size, but is capped at 2x target_file_size. And there will be smaller files. Here are file size statistics when loading 100GB with the target file size 32MB: ``` baseline this_PR count 1.656000e+03 1.705000e+03 mean 3.116062e+07 3.028076e+07 std 7.145242e+06 8.046139e+06 ``` The feature is enabled by default, to revert to the old behavior disable it with `AdvancedColumnFamilyOptions.level_compaction_dynamic_file_size = false` Also includes https://github.com/facebook/rocksdb/issues/1963 to cut file before skippable grandparent file. 
This helps use cases such as a user adding two or more non-overlapping data ranges at the same time; it can reduce the overlap of the two datasets in the lower levels. Pull Request resolved: https://github.com/facebook/rocksdb/pull/10655 Reviewed By: cbi42 Differential Revision: D39552321 Pulled By: jay-zhuang fbshipit-source-id: 640d15f159ab0cd973f2426cfc3af266fc8bdde2 | 30 September 2022, 02:43:55 UTC |
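The cutting rule described in the entry above can be sketched as a small predicate (an illustrative model, not the actual RocksDB implementation): at a grandparent-file boundary, cut the output file only once it has reached at least 50% of target_file_size, and never let any file grow past 2x target_file_size.

```cpp
#include <cassert>
#include <cstdint>

// Decide whether the compaction output file should be cut here.
bool ShouldCutOutputFile(uint64_t current_file_size,
                         uint64_t target_file_size,
                         bool at_grandparent_boundary) {
  if (current_file_size >= 2 * target_file_size) {
    return true;  // hard cap at 2x target_file_size
  }
  if (at_grandparent_boundary && current_file_size >= target_file_size / 2) {
    return true;  // align to the next-level file boundary
  }
  return false;
}
```

The 50% floor explains the wider spread in file sizes reported above: some files are cut early to hit a boundary, while others run past the target up to the 2x cap.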
47b57a3 | gitbw95 | 30 September 2022, 02:15:04 UTC | add SetCapacity and GetCapacity for secondary cache (#10712) Summary: To support tuning the secondary cache dynamically, add `SetCapacity()` and `GetCapacity()` to CompressedSecondaryCache. Pull Request resolved: https://github.com/facebook/rocksdb/pull/10712 Test Plan: Unit Tests Reviewed By: anand1976 Differential Revision: D39685212 Pulled By: gitbw95 fbshipit-source-id: 19573c67237011927320207732b5de083cb87240 | 30 September 2022, 02:15:04 UTC |
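A minimal sketch of the new tuning surface follows. The method names mirror the commit message, but the class and its behavior here are hypothetical stand-ins, not the CompressedSecondaryCache implementation.

```cpp
#include <cassert>
#include <cstdint>
#include <mutex>

// Hypothetical cache whose capacity can be read and adjusted at runtime.
class TunableSecondaryCache {
 public:
  explicit TunableSecondaryCache(uint64_t capacity) : capacity_(capacity) {}

  void SetCapacity(uint64_t capacity) {
    std::lock_guard<std::mutex> guard(mu_);
    capacity_ = capacity;
    // A real cache would also evict entries here to fit the new limit.
  }

  uint64_t GetCapacity() {
    std::lock_guard<std::mutex> guard(mu_);
    return capacity_;
  }

 private:
  std::mutex mu_;
  uint64_t capacity_;
};

// Helper for the self-check below: set a new capacity, then read it back.
uint64_t DemoSetThenGet(uint64_t initial, uint64_t updated) {
  TunableSecondaryCache cache(initial);
  cache.SetCapacity(updated);
  return cache.GetCapacity();
}
```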
aa71464 | Hui Xiao | 29 September 2022, 23:29:51 UTC | Remove and recreate expected values dir in white-box testing 2nd half (#10743) Summary: **Context:** https://github.com/facebook/rocksdb/pull/10732#pullrequestreview-1121076205 Pull Request resolved: https://github.com/facebook/rocksdb/pull/10743 Test Plan: - Locally run `python3 ./tools/db_crashtest.py whitebox --simple -max_key=1000000 -value_size_mult=33 -write_buffer_size=524288 -target_file_size_base=524288 -max_bytes_for_level_base=2097152 --duration=120 --interval=10 --ops_per_thread=1000 --random_kill_odd=887` - CI jobs testing Reviewed By: ajkr Differential Revision: D39838733 Pulled By: ajkr fbshipit-source-id: 9e819b66b0293dfc7a31a908a9d42c6baca4aeaa | 29 September 2022, 23:29:51 UTC |
5f4b736 | Joel Andres Granados | 29 September 2022, 19:42:52 UTC | cmake : Add ALL plugin LIBS to THIRD_PARTYLIBS (#10727) Summary: Bringing in multiple libraries failed as they were not treated as separate arguments. In this commit we make sure to add *all* the libraries to THIRD_PARTYLIBS. Additionally, we add more informative status messages for when the plugins get added. Signed-off-by: Joel Granados <joel.granados@gmail.com> Pull Request resolved: https://github.com/facebook/rocksdb/pull/10727 Reviewed By: riversand963 Differential Revision: D39778566 Pulled By: ajkr fbshipit-source-id: 34306b26ab4c726d17353ddd765f368967a1b59f | 29 September 2022, 19:42:52 UTC |
dc9f499 | Andrew Kryczka | 28 September 2022, 23:21:43 UTC | db_stress TestIngestExternalFile avoid empty files (#10754) Summary: If all the keys in range [key_base, shared->GetMaxKey()) are non-overwritable `TestIngestExternalFile()` would attempt to ingest a file with zero keys, leading to the following error: "Cannot create sst file with no entries". This PR changes `TestIngestExternalFile()` to return early in that case instead of going through with the ingestion attempt. Pull Request resolved: https://github.com/facebook/rocksdb/pull/10754 Reviewed By: hx235 Differential Revision: D39909195 Pulled By: ajkr fbshipit-source-id: e06e6b9cc24826fbd450e5130885e6f07164badd | 28 September 2022, 23:21:43 UTC |
b0d8ccb | Andrew Kryczka | 28 September 2022, 22:17:12 UTC | db_stress print TestMultiGet error value in hex (#10753) Summary: Without this fix, db_crashtest.py could fail with useless output such as: `UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa0 in position 267: invalid start byte` Pull Request resolved: https://github.com/facebook/rocksdb/pull/10753 Reviewed By: hx235 Differential Revision: D39905809 Pulled By: ajkr fbshipit-source-id: 50ba2cf20d206eeb168309cec137e827a34c8f0b | 28 September 2022, 22:17:12 UTC |
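The idea behind the fix above can be shown with a minimal sketch (not the actual db_stress code): values read back in the test are arbitrary bytes, so printing them verbatim can emit invalid UTF-8, which is what made db_crashtest.py raise the UnicodeDecodeError. Hex-encoding keeps the log output ASCII-safe.

```cpp
#include <cassert>
#include <cstdio>
#include <string>

// Encode an arbitrary byte string as uppercase hex for safe logging.
std::string ToHex(const std::string& value) {
  std::string out;
  char buf[3];
  for (unsigned char c : value) {
    std::snprintf(buf, sizeof(buf), "%02X", static_cast<unsigned>(c));
    out += buf;
  }
  return out;
}
```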
d2578ab | Yanqin Jin | 28 September 2022, 03:12:13 UTC | Add DECLARE_uint32 to gflags compatibility (#10729) Summary: Older versions of gflags do not have `DEFINE_uint32` and `DECLARE_uint32`. In util/gflag_compat.h, we already add a hack for `DEFINE_uint32`. This PR adds a hack for `DECLARE_uint32`. Pull Request resolved: https://github.com/facebook/rocksdb/pull/10729 Test Plan: ROCKSDB_NO_FBCODE=1 make V=1 -j16 db_stress make check Resolves https://github.com/facebook/rocksdb/issues/10704 Reviewed By: pdillinger Differential Revision: D39789183 Pulled By: riversand963 fbshipit-source-id: a58747e0163dcf55dd762733aa5c40d8f0ae70a6 | 28 September 2022, 03:12:13 UTC |
f3b359a | Hui Xiao | 27 September 2022, 19:18:28 UTC | Set options.num_levels in db_stress_test_base (#10732) Summary: An add-on to https://github.com/facebook/rocksdb/pull/6818 to complete adding single-level universal compaction to stress/crash testing. Pull Request resolved: https://github.com/facebook/rocksdb/pull/10732 Test Plan: - Locally run for 10 min `python3 ./tools/db_crashtest.py whitebox --simple --compaction_style=1 --num_levels=1 -max_key=1000000 -value_size_mult=33 -write_buffer_size=524288 -target_file_size_base=524288 -max_bytes_for_level_base=2097152 --duration=120 --interval=10 --ops_per_thread=1000 --random_kill_odd=887` - Check LOG to confirm single-level universal compaction is called - Manual testing and log checking to ensure destroy_db_initially=1 is correctly set across runs with different compaction styles (i.e, in the second half of whitebox testing). - [ongoing]CI jobs stress test Reviewed By: ajkr Differential Revision: D39797612 Pulled By: ajkr fbshipit-source-id: 16f5c40c3464c57360c06c8305f92118e426149c | 27 September 2022, 19:18:28 UTC |
7045b74 | Yanqin Jin | 27 September 2022, 16:04:57 UTC | Remove timestamp before inserting to WBWI's index (#10742) Summary: Currently, the original behavior should not lead to incorrect results, but it violates the contract of CompareWithTimestamp(): when a_has_ts or b_has_ts is false, the corresponding slice does not include a timestamp. Resolves https://github.com/facebook/rocksdb/issues/10709 Pull Request resolved: https://github.com/facebook/rocksdb/pull/10742 Test Plan: make check Reviewed By: ltamasi Differential Revision: D39834096 Pulled By: riversand963 fbshipit-source-id: c597600f5a7820734f07d0926cdc224cea5eabe1 | 27 September 2022, 16:04:57 UTC |
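The contract being enforced can be illustrated with a small sketch (sizes and names are hypothetical, not the WriteBatchWithIndex internals): when a user key carries a trailing fixed-width timestamp, strip it before handing the key to a comparator path that expects timestamp-less slices.

```cpp
#include <cassert>
#include <string>

// Drop a trailing fixed-width timestamp from a user key.
std::string StripTimestamp(const std::string& user_key, size_t ts_size) {
  if (ts_size > user_key.size()) {
    return user_key;  // defensive: key shorter than the timestamp width
  }
  return user_key.substr(0, user_key.size() - ts_size);
}
```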
df49279 | Changyu Bi | 27 September 2022, 01:57:23 UTC | Fix segfault in Iterator::Refresh() (#10739) Summary: When a new internal iterator is constructed during iterator refresh, the pointer to the previous memtable range tombstone iterator was not cleared. This could cause a segfault for future `Refresh()` calls when they try to free the memtable range tombstones. This PR fixes the issue. Pull Request resolved: https://github.com/facebook/rocksdb/pull/10739 Test Plan: added a unit test in db_range_del_test.cc to reproduce this issue. Reviewed By: ajkr, riversand963 Differential Revision: D39825283 Pulled By: cbi42 fbshipit-source-id: 3b59a2b73865aed39e28cdd5c1b57eed7991b94c | 27 September 2022, 01:57:23 UTC |
aed30dd | Hui Xiao | 27 September 2022, 01:01:59 UTC | Support WriteCommit policy with sync_fault_injection=1 (#10624) Summary: **Context:** Prior to this PR, correctness testing with unsynced data loss [disabled](https://github.com/facebook/rocksdb/pull/10605) transactions (`use_txn=1`) and thus all of the `txn_write_policy` options. This PR improves that by adding support for one policy - WriteCommit (`txn_write_policy=0`). **Summary:** The key to this support is (a) correctly handling Mark{Begin, End}Prepare/MarkCommit/MarkRollback when constructing ExpectedState under the WriteCommit policy and (b) monitoring CI jobs and solving any test incompatibility issues until the jobs are stable. (b) will be part of the test plan. For (a): - During prepare (i.e., between `MarkBeginPrepare()` and `MarkEndPrepare(xid)`), `ExpectedStateTraceRecordHandler` buffers all writes by adding them to an internal `WriteBatch`. - On `MarkEndPrepare()`, that `WriteBatch` is associated with the transaction's `xid`. - During commit (i.e., on `MarkCommit(xid)`), `ExpectedStateTraceRecordHandler` retrieves and iterates the internal `WriteBatch` and finally applies those writes to `ExpectedState`. - During rollback (i.e., on `MarkRollback(xid)`), `ExpectedStateTraceRecordHandler` erases the internal `WriteBatch` from the map. For (b), one major issue is described below: - TransactionDB in db_stress recovers prepared-but-not-committed txns from the previous crashed run by randomly committing or rolling them back at the start of the current run; see a historical [PR](https://github.com/facebook/rocksdb/commit/6d06be22c083ccf185fd38dba49fde73b644b4c1) that predates correctness testing. - We then verify those processed keys in the recovered db against their expected state. - However, we now turn on `sync_fault_injection=1`, where the expected state is constructed from the trace instead of from the LATEST.state of the previous run. 
The expected state now used to verify those processed keys won't contain UNKNOWN_SENTINEL as they should - see test 1 for a failed case. - Therefore, we decided to manually update its expected state to be UNKNOWN_SENTINEL as part of the processing. Pull Request resolved: https://github.com/facebook/rocksdb/pull/10624 Test Plan: 1. Test exposed the major issue described above. This test will fail without setting UNKNOWN_SENTINEL in expected state during the processing and pass after ``` db=/dev/shm/rocksdb_crashtest_blackbox exp=/dev/shm/rocksdb_crashtest_expected dbt=$db.tmp expt=$exp.tmp rm -rf $db $exp mkdir -p $exp echo "RUN 1" ./db_stress \ --clear_column_family_one_in=0 --column_families=1 --db=$db --delpercent=10 --delrangepercent=0 --destroy_db_initially=0 --expected_values_dir=$exp --iterpercent=0 --key_len_percent_dist=1,30,69 --max_key=1000000 --max_key_len=3 --prefixpercent=0 --readpercent=0 --reopen=0 --ops_per_thread=100000000 --test_batches_snapshots=0 --value_size_mult=32 --writepercent=90 \ --use_txn=1 --txn_write_policy=0 --sync_fault_injection=1 & pid=$! sleep 0.2 sleep 20 kill $pid sleep 0.2 echo "RUN 2" ./db_stress \ --clear_column_family_one_in=0 --column_families=1 --db=$db --delpercent=10 --delrangepercent=0 --destroy_db_initially=0 --expected_values_dir=$exp --iterpercent=0 --key_len_percent_dist=1,30,69 --max_key=1000000 --max_key_len=3 --prefixpercent=0 --readpercent=0 --reopen=0 --ops_per_thread=100000000 --test_batches_snapshots=0 --value_size_mult=32 --writepercent=90 \ --use_txn=1 --txn_write_policy=0 --sync_fault_injection=1 & pid=$! 
sleep 0.2 sleep 20 kill $pid sleep 0.2 echo "RUN 3" ./db_stress \ --clear_column_family_one_in=0 --column_families=1 --db=$db --delpercent=10 --delrangepercent=0 --destroy_db_initially=0 --expected_values_dir=$exp --iterpercent=0 --key_len_percent_dist=1,30,69 --max_key=1000000 --max_key_len=3 --prefixpercent=0 --readpercent=0 --reopen=0 --ops_per_thread=100000000 --test_batches_snapshots=0 --value_size_mult=32 --writepercent=90 \ --use_txn=1 --txn_write_policy=0 --sync_fault_injection=1 ``` 2. Manual testing to ensure ExpectedState is constructed correctly during recovery by verifying it against previously crashed TransactionDB's WAL. - Run the following command to crash a TransactionDB with WriteCommit policy. Then `./ldb dump_wal` on its WAL file ``` db=/dev/shm/rocksdb_crashtest_blackbox exp=/dev/shm/rocksdb_crashtest_expected rm -rf $db $exp mkdir -p $exp ./db_stress \ --clear_column_family_one_in=0 --column_families=1 --db=$db --delpercent=10 --delrangepercent=0 --destroy_db_initially=0 --expected_values_dir=$exp --iterpercent=0 --key_len_percent_dist=1,30,69 --max_key=1000000 --max_key_len=3 --prefixpercent=0 --readpercent=0 --reopen=0 --ops_per_thread=100000000 --test_batches_snapshots=0 --value_size_mult=32 --writepercent=90 \ --use_txn=1 --txn_write_policy=0 --sync_fault_injection=1 & pid=$! sleep 30 kill $pid sleep 1 ``` - Run the following command to verify recovery of the crashed db under debugger. Compare the step-wise result with WAL records (e.g, WriteBatch content, xid, prepare/commit/rollback marker) ``` ./db_stress \ --clear_column_family_one_in=0 --column_families=1 --db=$db --delpercent=10 --delrangepercent=0 --destroy_db_initially=0 --expected_values_dir=$exp --iterpercent=0 --key_len_percent_dist=1,30,69 --max_key=1000000 --max_key_len=3 --prefixpercent=0 --readpercent=0 --reopen=0 --ops_per_thread=100000000 --test_batches_snapshots=0 --value_size_mult=32 --writepercent=90 \ --use_txn=1 --txn_write_policy=0 --sync_fault_injection=1 ``` 3. 
Automatic testing by triggering all RocksDB stress/crash test jobs for 3 rounds with no failure. Reviewed By: ajkr, riversand963 Differential Revision: D39199373 Pulled By: hx235 fbshipit-source-id: 7a1dec0e3e2ee6ea86ddf5dd19ceb5543a3d6f0c | 27 September 2022, 01:01:59 UTC |
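The (a) logic from the entry above can be captured in a self-contained sketch. The real code lives in ExpectedStateTraceRecordHandler and uses RocksDB's WriteBatch; here the batch and the expected state are modeled with plain containers.

```cpp
#include <cassert>
#include <map>
#include <string>
#include <utility>
#include <vector>

// Simplified model: buffer writes during prepare, bind them to the xid at
// MarkEndPrepare, apply them at MarkCommit, and drop them at MarkRollback.
class PreparedTxnHandler {
 public:
  void MarkBeginPrepare() {
    in_prepare_ = true;
    pending_.clear();
  }
  void Put(const std::string& key, const std::string& value) {
    if (in_prepare_) {
      pending_.emplace_back(key, value);  // buffered, not yet visible
    } else {
      state_[key] = value;  // non-transactional write applies directly
    }
  }
  void MarkEndPrepare(const std::string& xid) {
    prepared_[xid] = std::move(pending_);  // bind the buffered batch to xid
    pending_.clear();
    in_prepare_ = false;
  }
  void MarkCommit(const std::string& xid) {
    for (const auto& kv : prepared_[xid]) {
      state_[kv.first] = kv.second;  // apply the buffered writes
    }
    prepared_.erase(xid);
  }
  void MarkRollback(const std::string& xid) {
    prepared_.erase(xid);  // discard the buffered writes
  }
  const std::map<std::string, std::string>& state() const { return state_; }

 private:
  bool in_prepare_ = false;
  std::vector<std::pair<std::string, std::string>> pending_;
  std::map<std::string, std::vector<std::pair<std::string, std::string>>>
      prepared_;
  std::map<std::string, std::string> state_;
};

// Runs the prepare/commit/rollback scenario end to end.
bool RunScenario() {
  PreparedTxnHandler h;
  h.MarkBeginPrepare();
  h.Put("a", "1");
  h.MarkEndPrepare("x1");
  if (h.state().count("a") != 0) return false;  // invisible before commit
  h.MarkCommit("x1");
  if (h.state().at("a") != "1") return false;   // applied on commit
  h.MarkBeginPrepare();
  h.Put("b", "2");
  h.MarkEndPrepare("x2");
  h.MarkRollback("x2");
  return h.state().count("b") == 0;             // erased on rollback
}
```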
5d7cf31 | anand76 | 27 September 2022, 00:36:57 UTC | Add OpenSSL to docker image (#10741) Summary: Update the docker image with OpenSSL, required by the folly build. Pull Request resolved: https://github.com/facebook/rocksdb/pull/10741 Reviewed By: jay-zhuang Differential Revision: D39831081 Pulled By: anand1976 fbshipit-source-id: 900154f70a456d1b6f9e384b8bdbcc227af4adbc | 27 September 2022, 00:36:57 UTC |
52f2411 | Yanqin Jin | 26 September 2022, 22:59:30 UTC | Update HISTORY to mention PR #10724 (#10737) Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/10737 Reviewed By: cbi42 Differential Revision: D39825386 Pulled By: riversand963 fbshipit-source-id: a3c55f2777e034d6ae6ff44ef0219d9fbbf1cc96 | 26 September 2022, 22:59:30 UTC |
2280b26 | Levi Tamasi | 26 September 2022, 22:33:36 UTC | Small cleanup in NonBatchedOpsStressTest::VerifyDb (#10740) Summary: The PR cleans up the logic in `NonBatchedOpsStressTest::VerifyDb` so that the verification method is picked using a single random number generation. It also eliminates some repeated key comparisons and makes some small code hygiene improvements. Pull Request resolved: https://github.com/facebook/rocksdb/pull/10740 Test Plan: Ran a simple blackbox crash test. Reviewed By: riversand963 Differential Revision: D39828646 Pulled By: ltamasi fbshipit-source-id: 60ee5a3bb1851278f62c7d83b0c93b902ed9702e | 26 September 2022, 22:33:36 UTC |
07249fe | Yanqin Jin | 24 September 2022, 00:29:05 UTC | Fix DBImpl::GetLatestSequenceForKey() for Merge (#10724) Summary: Currently, without this fix, DBImpl::GetLatestSequenceForKey() may not return the latest sequence number for merge operands of the key. This can cause conflict checking during optimistic transaction commit phase to fail. Fix it by always returning the latest sequence number of the key, also considering range tombstones. Pull Request resolved: https://github.com/facebook/rocksdb/pull/10724 Test Plan: make check Reviewed By: cbi42 Differential Revision: D39756847 Pulled By: riversand963 fbshipit-source-id: 0764c3dd4cb24960b37e18adccc6e7feed0e6876 | 24 September 2022, 00:29:05 UTC |
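The fixed behavior can be modeled with an illustrative sketch (hypothetical types, not the DBImpl internals): conflict checking needs the latest sequence number among all entries for a key, including Merge operands, rather than the sequence of the first non-merge entry encountered.

```cpp
#include <cassert>
#include <cstdint>
#include <string>
#include <vector>

// Hypothetical flattened view of the entries visible for lookup.
struct Entry {
  std::string key;
  uint64_t seq;
};

// Return the highest sequence number among all entries for `key`;
// 0 models "no entry found".
uint64_t LatestSequenceForKey(const std::vector<Entry>& entries,
                              const std::string& key) {
  uint64_t latest = 0;
  for (const auto& e : entries) {
    if (e.key == key && e.seq > latest) {
      latest = e.seq;
    }
  }
  return latest;
}
```

Returning anything less than the true maximum would let the optimistic-transaction conflict check miss a newer write, which is the failure mode the commit describes.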
c76a90c | Alan Paxton | 23 September 2022, 16:39:40 UTC | CI benchmarks return NUM_KEYS to previous size (#10649) Summary: A larger size is necessary to stress levels 2 and 3 of the LSM tree. Pull Request resolved: https://github.com/facebook/rocksdb/pull/10649 Reviewed By: ajkr Differential Revision: D39744515 Pulled By: jay-zhuang fbshipit-source-id: 62ff097bfbfdfc26ff1e6290e1e3b71506b7042c | 23 September 2022, 16:39:40 UTC |
6d2a983 | Levi Tamasi | 23 September 2022, 15:27:41 UTC | Clarify API comments for blob_cache/prepopulate_blob_cache (#10723) Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/10723 Reviewed By: riversand963 Differential Revision: D39749277 Pulled By: ltamasi fbshipit-source-id: 4bda94b4620a0db1fcd4309c7ad03fc23e8718cb | 23 September 2022, 15:27:41 UTC |