https://github.com/facebook/rocksdb
- HEAD
- refs/heads/2.2.fb.branch
- refs/heads/2.3.fb.branch
- refs/heads/2.4.fb.branch
- refs/heads/2.5.fb.branch
- refs/heads/2.6.fb.branch
- refs/heads/2.7
- refs/heads/2.7.fb.branch
- refs/heads/2.8.1.fb
- refs/heads/2.8.fb
- refs/heads/2.8.fb.trunk
- refs/heads/3.0.fb
- refs/heads/3.0.fb.branch
- refs/heads/3.1.fb
- refs/heads/3.10.fb
- refs/heads/3.11.fb
- refs/heads/3.12.fb
- refs/heads/3.13.fb
- refs/heads/3.2.fb
- refs/heads/3.3.fb
- refs/heads/3.4.fb
- refs/heads/3.5.fb
- refs/heads/3.6.fb
- refs/heads/3.7.fb
- refs/heads/3.8.fb
- refs/heads/3.9.fb
- refs/heads/4.0.fb
- refs/heads/4.1.fb
- refs/heads/4.10.fb
- refs/heads/4.11.fb
- refs/heads/4.12.fb
- refs/heads/4.13.fb
- refs/heads/4.2.fb
- refs/heads/4.3.fb
- refs/heads/4.4.fb
- refs/heads/4.5.fb
- refs/heads/4.6.fb
- refs/heads/4.7.fb
- refs/heads/4.8.fb
- refs/heads/4.9.fb
- refs/heads/5.0.fb
- refs/heads/5.1.fb
- refs/heads/5.10.fb
- refs/heads/5.11.fb
- refs/heads/5.12.fb
- refs/heads/5.13.fb
- refs/heads/5.13.fb.myrocks
- refs/heads/5.14.fb
- refs/heads/5.14.fb.myrocks
- refs/heads/5.15.fb
- refs/heads/5.16.fb
- refs/heads/5.17.fb
- refs/heads/5.17.fb.myrocks
- refs/heads/5.18.fb
- refs/heads/5.2.fb
- refs/heads/5.3.fb
- refs/heads/5.4.fb
- refs/heads/5.5.fb
- refs/heads/5.6.fb
- refs/heads/5.7.fb
- refs/heads/5.7.fb.myrocks
- refs/heads/5.8.3
- refs/heads/5.8.fb
- refs/heads/5.9.fb
- refs/heads/5.9.fb.myrocks
- refs/heads/6.0.fb
- refs/heads/6.0.fb.myrocks
- refs/heads/6.1.fb
- refs/heads/6.1.fb.myrocks
- refs/heads/6.1.fb.prod201905
- refs/heads/6.10.fb
- refs/heads/6.11.fb
- refs/heads/6.12.fb
- refs/heads/6.13.fb
- refs/heads/6.13.fb.laser
- refs/heads/6.14.fb
- refs/heads/6.14.fb.laser
- refs/heads/6.15.fb
- refs/heads/6.16.fb
- refs/heads/6.17.fb
- refs/heads/6.17.fb.laser
- refs/heads/6.18.fb
- refs/heads/6.19.fb
- refs/heads/6.2.fb
- refs/heads/6.20.fb
- refs/heads/6.21.fb
- refs/heads/6.22-history.md-fixup
- refs/heads/6.22.fb
- refs/heads/6.23.fb
- refs/heads/6.24.fb
- refs/heads/6.25.fb
- refs/heads/6.26.fb
- refs/heads/6.27.fb
- refs/heads/6.28.fb
- refs/heads/6.29.fb
- refs/heads/6.3.fb
- refs/heads/6.3.fb.myrocks
- refs/heads/6.3.fb.myrocks2
- refs/heads/6.3fb
- refs/heads/6.4.fb
- refs/heads/6.5.fb
- refs/heads/6.6.fb
- refs/heads/6.7.fb
- refs/heads/6.8.fb
- refs/heads/6.9.fb
- refs/heads/7.0.fb
- refs/heads/7.1.fb
- refs/heads/7.10.fb
- refs/heads/7.2.fb
- refs/heads/7.3.fb
- refs/heads/7.4.fb
- refs/heads/7.5.fb
- refs/heads/7.6.fb
- refs/heads/7.7.fb
- refs/heads/7.8.fb
- refs/heads/7.9.fb
- refs/heads/8.0.fb
- refs/heads/8.1.fb
- refs/heads/8.10.fb
- refs/heads/8.11.2_zippydb
- refs/heads/8.11.fb
- refs/heads/8.11.fb_zippydb
- refs/heads/8.2.fb
- refs/heads/8.3.fb
- refs/heads/8.4.fb
- refs/heads/8.5.fb
- refs/heads/8.6.fb
- refs/heads/8.7.fb
- refs/heads/8.8.fb
- refs/heads/8.9.fb
- refs/heads/9.0.fb
- refs/heads/9.1.fb
- refs/heads/9.1.fb.myrocks
- refs/heads/9.2.fb
- refs/heads/9.3.fb
- refs/heads/adaptive
- refs/heads/ajkr-patch-1
- refs/heads/ajkr-patch-2
- refs/heads/blob_shadow
- refs/heads/bottom-pri-level
- refs/heads/bugfix-build-detect
- refs/heads/checksum_readahead_mmap_fix
- refs/heads/draft-myrocks-and-fbcode-8.0.fb
- refs/heads/feature/debug-rocksdbjavastatic
- refs/heads/feature/travis-arm64
- refs/heads/fix-release-notes
- refs/heads/fix-win2022-build
- refs/heads/fix-write-batch-comment
- refs/heads/format_compatible_4
- refs/heads/getmergeops
- refs/heads/gh-pages-old
- refs/heads/history-update
- refs/heads/hotfix/lambda-capture
- refs/heads/improve-support
- refs/heads/jijiew-patch-1
- refs/heads/katherinez-patch-1
- refs/heads/katherinez-patch-2
- refs/heads/main
- refs/heads/master
- refs/heads/mdcallag_benchmark_oct22
- refs/heads/nvm_cache_proto
- refs/heads/pr-sanity-check-as-GHAction
- refs/heads/pr/11267
- refs/heads/pr/6062
- refs/heads/ramvadiv-patch-1
- refs/heads/release_fix
- refs/heads/revert-10606-7.6.1
- refs/heads/ribbon_bloom_hybrid
- refs/heads/scaffold
- refs/heads/siying-patch-1
- refs/heads/siying-patch-10
- refs/heads/siying-patch-2
- refs/heads/siying-patch-3
- refs/heads/siying-patch-4
- refs/heads/siying-patch-5
- refs/heads/siying-patch-6
- refs/heads/siying-patch-7
- refs/heads/siying-patch-8
- refs/heads/skip_memtable_flush
- refs/heads/testing_ppc_build
- refs/heads/tests
- refs/heads/unschedule_issue_test_base
- refs/heads/unused-var
- refs/heads/v6.6.4
- refs/heads/xxhash_merge_base
- refs/heads/yiwu_stackable
- refs/heads/yuslepukhin
- refs/remotes/origin/5.13.fb
- refs/tags/2.5.fb
- refs/tags/2.6.fb
- refs/tags/3.0.fb
- refs/tags/do-not-use-me2
- refs/tags/rocksdb-3.1
- refs/tags/rocksdb-3.10.2
- refs/tags/rocksdb-3.11
- refs/tags/rocksdb-3.11.1
- refs/tags/rocksdb-3.11.2
- refs/tags/rocksdb-3.2
- refs/tags/rocksdb-3.3
- refs/tags/rocksdb-3.4
- refs/tags/rocksdb-3.5
- refs/tags/rocksdb-3.5.1
- refs/tags/rocksdb-3.6.1
- refs/tags/rocksdb-3.6.2
- refs/tags/rocksdb-3.7
- refs/tags/rocksdb-3.8
- refs/tags/rocksdb-3.9
- refs/tags/rocksdb-3.9.1
- refs/tags/rocksdb-4.1
- refs/tags/rocksdb-5.10.2
- refs/tags/rocksdb-5.10.3
- refs/tags/rocksdb-5.10.4
- refs/tags/rocksdb-5.11.2
- refs/tags/rocksdb-5.11.3
- refs/tags/rocksdb-5.14.3
- refs/tags/rocksdb-5.2.1
- refs/tags/rocksdb-5.3.3
- refs/tags/rocksdb-5.3.4
- refs/tags/rocksdb-5.3.5
- refs/tags/rocksdb-5.3.6
- refs/tags/rocksdb-5.4.10
- refs/tags/rocksdb-5.4.5
- refs/tags/rocksdb-5.4.6
- refs/tags/rocksdb-5.5.2
- refs/tags/rocksdb-5.5.3
- refs/tags/rocksdb-5.5.4
- refs/tags/rocksdb-5.5.5
- refs/tags/rocksdb-5.5.6
- refs/tags/rocksdb-5.6.1
- refs/tags/rocksdb-5.6.2
- refs/tags/rocksdb-5.7.1
- refs/tags/rocksdb-5.7.2
- refs/tags/rocksdb-5.7.3
- refs/tags/rocksdb-5.7.5
- refs/tags/rocksdb-5.8.6
- refs/tags/rocksdb-5.8.7
- refs/tags/rocksdb-5.8.8
- refs/tags/rocksdb-5.9.2
- refs/tags/v4.0
- refs/tags/v4.1
- refs/tags/v5.10.2
- refs/tags/v5.10.3
- refs/tags/v5.10.4
- refs/tags/v5.11.2
- refs/tags/v5.11.3
- refs/tags/v5.13.3
- refs/tags/v5.14.3
- refs/tags/v5.15.10
- refs/tags/v5.18.3
- refs/tags/v5.2.1
- refs/tags/v5.3.3
- refs/tags/v5.3.4
- refs/tags/v5.3.5
- refs/tags/v5.3.6
- refs/tags/v5.4.10
- refs/tags/v5.4.5
- refs/tags/v5.4.6
- refs/tags/v5.5.2
- refs/tags/v5.5.3
- refs/tags/v5.5.4
- refs/tags/v5.5.5
- refs/tags/v5.5.6
- refs/tags/v5.6.1
- refs/tags/v5.6.2
- refs/tags/v5.7.1
- refs/tags/v5.7.2
- refs/tags/v5.7.3
- refs/tags/v5.7.5
- refs/tags/v5.8.6
- refs/tags/v5.8.7
- refs/tags/v5.8.8
- refs/tags/v5.9.2
- refs/tags/v6.0.1
- refs/tags/v6.0.2
- refs/tags/v6.1.1
- refs/tags/v6.1.2
- refs/tags/v6.10.1
- refs/tags/v6.10.2
- refs/tags/v6.11.4
- refs/tags/v6.11.6
- refs/tags/v6.12.6
- refs/tags/v6.12.7
- refs/tags/v6.13.2
- refs/tags/v6.13.3
- refs/tags/v6.14.5
- refs/tags/v6.14.6
- refs/tags/v6.15.4
- refs/tags/v6.15.5
- refs/tags/v6.16.3
- refs/tags/v6.16.4
- refs/tags/v6.17.3
- refs/tags/v6.2.2
- refs/tags/v6.2.4
- refs/tags/v6.20.3
- refs/tags/v6.22.1
- refs/tags/v6.25.3
- refs/tags/v6.26.1
- refs/tags/v6.28.2
- refs/tags/v6.29.3
- refs/tags/v6.29.4
- refs/tags/v6.29.5
- refs/tags/v6.3.6
- refs/tags/v6.4.6
- refs/tags/v6.5.2
- refs/tags/v6.5.3
- refs/tags/v6.6.3
- refs/tags/v6.6.4
- refs/tags/v6.7.3
- refs/tags/v6.8.1
- refs/tags/v7.0.1
- refs/tags/v7.0.2
- refs/tags/v7.0.4
- refs/tags/v7.2.0
- refs/tags/v7.2.2
- refs/tags/v7.5.3
- refs/tags/v7.7.2
- refs/tags/v7.9.2
- refs/tags/v8.0.0
- refs/tags/v8.11.4
- refs/tags/v8.3.2
- refs/tags/v8.3.3
- refs/tags/v8.4.4
- refs/tags/v8.5.3
- refs/tags/v8.6.7
- refs/tags/v8.7.3
- refs/tags/v9.0.1
- refs/tags/v9.1.1
- refs/tags/v9.2.1
- v9.1.0
- v9.0.0
- v8.9.1
- v8.8.1
- v8.5.4
- v8.11.3
- v8.10.2
- v8.10.0
- v8.1.1
- v7.8.3
- v7.7.8
- v7.7.3
- v7.6.0
- v7.4.5
- v7.4.4
- v7.4.3
- v7.3.1
- v7.10.2
- v7.1.2
- v7.1.1
- v7.0.3
- v6.27.3
- v6.26.0
- v6.25.1
- v6.24.2
- v6.23.3
- v6.23.2
- v6.19.3
- v6.15.2
- v5.8
- v5.5.1
- v5.4.7
- v5.18.4
- v5.17.2
- v5.16.6
- v5.14.2
- v5.13.4
- v5.13.2
- v5.13.1
- v5.12.5
- v5.12.4
- v5.12.3
- v5.12.2
- v5.1.4
- v5.1.3
- v5.1.2
- v5.0.2
- v5.0.1
- v4.9
- v4.8
- v4.6.1
- v4.5.1
- v4.4.1
- v4.4
- v4.3.1
- v4.3
- v4.2
- v4.13.5
- v4.13
- v4.11.2
- v3.9
- v3.8
- v3.7
- v3.6.1
- v3.5
- v3.4
- v3.3
- v3.2
- v3.13.1
- v3.13
- v3.12.1
- v3.12
- v3.11
- v3.10
- v3.1
- v3.0
- v2.8
- v2.7
- v2.6
- v2.5
- v2.4
- v2.3
- v2.2
- v2.1
- v2.0
- v1.5.9.1
- v1.5.8.2
- v1.5.8.1
- v1.5.8
- v1.5.7
- rocksdb-5.8
- rocksdb-5.4.7
- rocksdb-5.1.4
- rocksdb-5.1.3
- rocksdb-5.1.2
- rocksdb-5.0.2
- rocksdb-5.0.1
- rocksdb-4.9
- rocksdb-4.8
- rocksdb-4.6.1
- rocksdb-4.5.1
- rocksdb-4.4.1
- rocksdb-4.4
- rocksdb-4.3.1
- rocksdb-4.3
- rocksdb-4.2
- rocksdb-4.13.5
- rocksdb-4.13
- rocksdb-4.11.2
- rocksdb-3.10.1
- blob_st_lvl-pre
- 2.8.fb
- 2.7.fb
- 2.4.fb
- 2.3.fb
- 2.2.fb
- 2.1.fb
- 2.0.fb
- 1.5.9.fb
- 1.5.9.2.fb
- 1.5.9.1.fb
- 1.5.8.fb
- 1.5.8.2.fb
- 1.5.8.1.fb
- 1.5.7.fb
Revision | Author | Date | Message | Commit Date |
---|---|---|---|---|
6d113fc | Yanqin Jin | 07 June 2019, 23:23:07 UTC | Update HISTORY and bump version | 07 June 2019, 23:23:07 UTC |
11afcbe | Yanqin Jin | 05 June 2019, 20:56:46 UTC | Disable dynamic extension support by default for CMake (#5419) Summary: We have users reporting a linking error while building RocksDB using CMake, and we do not enable the dynamic extension feature for them. The fix is to add `-DROCKSDB_NO_DYNAMIC_EXTENSION` to CMake by default. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5419 Differential Revision: D15676792 Pulled By: riversand963 fbshipit-source-id: d45aaacfc64ea61646fd7329c352cd760145baf3 | 05 June 2019, 21:09:27 UTC |
b6e554e | Levi Tamasi | 04 June 2019, 23:38:51 UTC | Bump version to 6.2.1 | 04 June 2019, 23:38:51 UTC |
a1f08cc | Andrew Kryczka | 04 June 2019, 17:17:24 UTC | Fix merging range tombstone covering put during flush/compaction (#5406) Summary: Flush/compaction use `MergeUntil` which has a special code path to handle a merge ending with a non-`Merge` point key. In particular if that key is a `Put` we forgot to check whether it is covered by a range tombstone. If it is covered then we must not include it in the following call to `TimedFullMerge`. Fixes #5392. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5406 Differential Revision: D15611144 Pulled By: sagar0 fbshipit-source-id: ba6a7863ca2d043f591de78fd0c4f4561f0c500e | 04 June 2019, 20:44:37 UTC |
0d9bfa6 | anand76 | 14 May 2019, 18:54:52 UTC | Fix a bug in db_stress and an incorrect assertion in FilePickerMultiGet (#5301) Summary: This PR has two fixes for crash test failures - 1. Fix a bug in TestMultiGet() in db_stress that was passing the list of keys to MultiGet() in the wrong order, thus ensuring that actual values don't match expected values 2. Remove an incorrect assertion in FilePickerMultiGet::GetNextFileInLevelWithKeys() that checks that files in a level are in sorted order. This is not true with MultiGet(), especially if there are duplicate keys and we may have to go back one file for the next key. Furthermore, this assertion makes more sense when a new version is created, rather than at lookup time. Test - asan_crash and ubsan_crash tests Pull Request resolved: https://github.com/facebook/rocksdb/pull/5301 Differential Revision: D15337383 Pulled By: anand1976 fbshipit-source-id: 35092cb15bbc1700e5e823cbe07bfa62f1e9e6c6 | 24 May 2019, 20:22:04 UTC |
9feb730 | anand76 | 09 May 2019, 20:03:37 UTC | Fix bugs in FilePickerMultiGet (#5292) Summary: This PR fixes a couple of bugs in FilePickerMultiGet that were causing db_stress test failures. The failures were caused by - 1. Improper handling of a key that matches the user key portion of an L0 file's largest key. In this case, the curr_index_in_curr_level file index in L0 for that key was getting incremented, but batch_iter_ was not advanced. By design, all keys in a batch are supposed to be checked against an L0 file before advancing to the next L0 file. Not advancing to the next key in the batch was causing a double increment of curr_index_in_curr_level due to the same key being processed again. 2. Improper handling of a key that matches the user key portion of the largest key in the last file of L1 and higher. This was resulting in a premature end to the processing of the batch for that level when the next key in the batch is a duplicate. Typically, the keys in MultiGet will not be duplicates, but it's good to handle that case correctly. Test - asan_crash make check Pull Request resolved: https://github.com/facebook/rocksdb/pull/5292 Differential Revision: D15282530 Pulled By: anand1976 fbshipit-source-id: d1a6a86e0af273169c3632db22a44d79c66a581f | 24 May 2019, 20:19:55 UTC |
70dca18 | Zhongyi Xie | 03 May 2019, 22:55:48 UTC | multiget: fix memory issues due to vector auto resizing (#5279) Summary: This PR fixes three memory issues found by ASAN * in db_stress, the key vector for MultiGet is created using `emplace_back` which could potentially invalidate references to the underlying storage (vector<string>) due to auto resizing. Fix by calling reserve in advance. * Similar issue in construction of GetContext autovector in version_set.cc * In multiget_context.h use T[] specialization for unique_ptr that holds a char array Pull Request resolved: https://github.com/facebook/rocksdb/pull/5279 Differential Revision: D15202893 Pulled By: miasantreble fbshipit-source-id: 14cc2cda0ed64d29f2a1e264a6bfdaa4294ee75d | 24 May 2019, 20:19:06 UTC |
5f703af | anand76 | 02 May 2019, 06:04:03 UTC | Add option to use MultiGet in db_stress (#5264) Summary: The new option will pick a batch size randomly in the range 1-64. It will then space the keys in the batch by random intervals. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5264 Differential Revision: D15175522 Pulled By: anand1976 fbshipit-source-id: c16baa69d0f1ff4cf53c55c813ddd82c8aeb58fc | 24 May 2019, 20:18:02 UTC |
8baa66a | Yanqin Jin | 15 May 2019, 18:26:03 UTC | Update HISTORY.md | 15 May 2019, 18:26:03 UTC |
570d490 | Yanqin Jin | 11 May 2019, 00:53:41 UTC | Fix a race condition caused by unlocking db mutex (#5294) Summary: Previous code may call `~ColumnFamilyData` in `DBImpl::AtomicFlushMemTablesToOutputFiles` if the column family is dropped or `cfd->IsFlushPending() == false`. In `~ColumnFamilyData`, the db mutex is released briefly and re-acquired. This can cause a correctness issue. The reason is as follows. Assume there are multiple bg flush threads. After bg_flush_thr1 releases the db mutex, bg_flush_thr2 can grab it and pop an element from the flush queue. This will cause bg_flush_thr2 to accidentally pick some memtables which should have been picked by bg_flush_thr1. To make matters worse, bg_flush_thr2 can clear the `flush_requested_` flag for the memtable list, causing a subsequent call to `MemTableList::IsFlushPending()` by bg_flush_thr1 to return false, which is wrong. The fix is to delay `ColumnFamilyData::Unref` and `~ColumnFamilyData` for column families not selected for flush until `AtomicFlushMemTablesToOutputFiles` returns. Furthermore, a bg flush thread should not clear `MemTableList::flush_requested_` in `MemTableList::PickMemtablesToFlush` unless atomic flush is not used **or** the memtable list does not have unpicked memtables. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5294 Differential Revision: D15295297 Pulled By: riversand963 fbshipit-source-id: 03b101205ca22c242647cbf488bcf0ed80b2ecbd | 15 May 2019, 18:23:47 UTC |
55320de | Fosco Marotto | 30 April 2019, 20:07:04 UTC | Update history and version for future 6.2.0 | 30 April 2019, 20:07:04 UTC |
03c7ae2 | Yuqi Gu | 30 April 2019, 17:56:06 UTC | RocksDB CRC32c optimization with ARMv8 Intrinsic (#5221) Summary: 1. Add Arm linear crc32c implementation for RocksDB. 2. Add Arm runtime check for crc32. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5221 Differential Revision: D15013685 Pulled By: siying fbshipit-source-id: 2c2983743d26656d93f212dc7c1a3cf66a1acf12 | 30 April 2019, 17:59:05 UTC |
a5debd7 | David Palm | 30 April 2019, 17:08:13 UTC | Add rocksdb_property_int_cf (#5268) Summary: Adds the missing `rocksdb_property_int_cf` function to the C API to let consuming libraries avoid parsing strings. Fixes https://github.com/facebook/rocksdb/issues/5249 Pull Request resolved: https://github.com/facebook/rocksdb/pull/5268 Differential Revision: D15149461 Pulled By: maysamyabandeh fbshipit-source-id: e9fe5f1ad7c64066d921dba8473507269b51d331 | 30 April 2019, 17:13:28 UTC |
b02d0c2 | Andrew Kryczka | 30 April 2019, 16:46:40 UTC | Init compression dict handle before reading meta-blocks (#5267) Summary: At least one of the meta-block loading functions (`ReadRangeDelBlock`) uses the same block reading function (`NewDataBlockIterator`) as data block reads, which means it uses the dictionary handle. However, the dictionary handle was uninitialized while reading meta-blocks, causing readers to receive an error. This situation was only noticed when `cache_index_and_filter_blocks=true`. This PR initializes the handle to null while reading meta-blocks to prevent the error. It also adds support to `db_stress` / `db_crashtest.py` for `cache_index_and_filter_blocks`. Fixes #5263. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5267 Differential Revision: D15149264 Pulled By: maysamyabandeh fbshipit-source-id: 991d38a306c62db5976778bfb050fa3cd4a0671b | 30 April 2019, 16:50:49 UTC |
25810ca | bxq2011hust | 30 April 2019, 16:30:46 UTC | compile gtest only when tests are enabled Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/5248 Differential Revision: D15149190 Pulled By: maysamyabandeh fbshipit-source-id: fd6d799e80bb502a7ddbc07032ea87e2e3f1e24f | 30 April 2019, 16:33:44 UTC |
210b49c | Yanqin Jin | 30 April 2019, 15:09:13 UTC | Disable pipelined write in atomic flush stress test (#5266) Summary: Since currently pipelined write allows one thread to perform memtable writes while another thread is traversing the `flush_scheduler_`, it will cause an assertion failure in `FlushScheduler::Clear`. To unblock crash recovery tests, we temporarily disable pipelined write when atomic flush is enabled. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5266 Differential Revision: D15142285 Pulled By: riversand963 fbshipit-source-id: a0c20fe4ac543e08feaed602414f982054df7831 | 30 April 2019, 15:12:42 UTC |
1886456 | Tongliang Liao | 29 April 2019, 22:27:09 UTC | CMake has stock FindZLIB in upper case. (#5261) Summary: More details in https://cmake.org/cmake/help/v3.14/module/FindZLIB.html This resolves the cmake config error of not finding `Findzlib` on Linux (CentOS 7 + cmake 3.14.3 + gcc-8). Pull Request resolved: https://github.com/facebook/rocksdb/pull/5261 Differential Revision: D15138052 Pulled By: maysamyabandeh fbshipit-source-id: 2f4445f49a36c16e6f1e05c090018c02379c0de4 | 29 April 2019, 22:30:29 UTC |
35e6ba7 | Yanqin Jin | 29 April 2019, 19:29:57 UTC | Fix a bug when triggering atomic flush and closing db (#5254) Summary: With atomic flush, RocksDB background flush will flush memtables of a column family up to the largest memtable id in the immutable memtable list. This can introduce a bug in the following scenario. A user thread inserts into a column family until the memtable is full and triggers a flush. This will add the column family to flush_scheduler_. Then the user thread writes another record to the column family. In the PreprocessWrite function, the user thread picks the column family from flush_scheduler_ and schedules a flush request. The flush request guarantees to flush all the memtables up to the current largest memtable ID of the immutable memtable list. Then the user thread writes new data to the newly-created active memtable. After the write returns, the user thread closes the db. This can cause an assertion failure when the background flush thread tries to install a superversion for the column family. The solution is to not install flush results if the db has already set `shutting_down_` to true. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5254 Differential Revision: D15124149 Pulled By: riversand963 fbshipit-source-id: 0a667a41339dedb5a18bcb01b0bf11c275c04df0 | 29 April 2019, 19:48:32 UTC |
3548e42 | Sagar Vemuri | 27 April 2019, 04:20:25 UTC | Improve explicit user readahead performance (#5246) Summary: Improve the iterators' performance when the user explicitly sets the readahead size via `ReadOptions.readahead_size`. 1. Stop creating new table readers when the user explicitly sets readahead size. 2. Make use of an internal buffer based on `FilePrefetchBuffer` instead of using `ReadaheadRandomAccessFileReader`, to handle the user readahead requests (for both buffered and direct io cases). 3. Add `readahead_size` to db_bench. **Benchmarks:** https://gist.github.com/sagar0/53693edc320a18abeaeca94ca32f5737 For 1 MB readahead, Buffered IO performance improves by 28% and Direct IO performance improves by 50%. For 512KB readahead, Buffered IO performance improves by 30% and Direct IO performance improves by 67%. **Test Plan:** Updated `DBIteratorTest.ReadAhead` test to make sure that: - no new table readers are created for iterators on setting ReadOptions.readahead_size - At least "readahead" number of bytes are actually getting read on each iterator read. TODO later: - Use similar logic for compactions as well. - This ties in nicely with #4052 and paves the way for removing ReadaheadRandomAccessFile later. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5246 Differential Revision: D15107946 Pulled By: sagar0 fbshipit-source-id: 2c1149729ca7d779e4e8b7710ba6f4e8cbfd3bea | 27 April 2019, 04:24:10 UTC |
8c7eb59 | Maysam Yabandeh | 27 April 2019, 00:26:29 UTC | Fix ubsan failure in snapshot refresh (#5257) Summary: The newly added test CompactionJobTest.SnapshotRefresh sets the snapshot refresh period to 0 to stress the feature. This results in a large number of refresh events, which in turn results in an UBSAN failure when a bitwise shift operand goes beyond the uint64_t size. The patch fixes that by simplifying the shift logic to be done only by 2 bits after each refresh. Furthermore, it verifies that the shift operation does not result in decreasing the refresh period. Testing: COMPILE_WITH_UBSAN=1 make -j32 compaction_job_test ./compaction_job_test --gtest_filter=CompactionJobTest.SnapshotRefresh Pull Request resolved: https://github.com/facebook/rocksdb/pull/5257 Differential Revision: D15106463 Pulled By: maysamyabandeh fbshipit-source-id: f2718898ea7ba4fa9f7e87b70cf98fe647c0de80 | 27 April 2019, 00:30:30 UTC |
506e844 | Maysam Yabandeh | 26 April 2019, 01:14:10 UTC | Refresh snapshot list during long compactions (#5099) Summary: Part of compaction CPU time goes to processing the snapshot list; the larger the list, the bigger the overhead. Although the lifetime of most of the snapshots is much shorter than the lifetime of compactions, the compaction conservatively operates on the list of snapshots that it initially obtained. This patch allows the snapshot list to be updated via a callback if the compaction is taking long. This should let the compaction continue more efficiently with a much smaller snapshot list. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5099 Differential Revision: D15086710 Pulled By: maysamyabandeh fbshipit-source-id: 7649f56c3b6b2fb334962048150142a3bf9c1a12 | 26 April 2019, 01:17:22 UTC |
6eb317b | Andrew Kryczka | 25 April 2019, 18:31:58 UTC | Option string/map/file can set env from object registry (#5237) Summary: - By providing the "env" field in any text-based options (i.e., string, map, or file), we can use `NewCustomObject` to deserialize the text value into an actual `Env` object. - Currently factory functions for `Env` registered with the object registry should only return pointers to static `Env` objects. That's because `DBOptions::env` is a raw pointer so we cannot easily delegate cleanup. - Note I did not add `env` to `db_option_type_info`. It wasn't needed for (de)serialization, and I believe we don't want to do verification on `env`, even by checking the name. That's because the user should be able to copy their DB from Linux to Windows, change envs, and not see an option verification error. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5237 Differential Revision: D15056360 Pulled By: siying fbshipit-source-id: 4b5f0b83297a5058f8949ec955dbf27d98d73d7e | 25 April 2019, 18:35:09 UTC |
084a3c6 | niukuo | 25 April 2019, 18:20:01 UTC | add missing rocksdb_flush_cf in c (#5243) Summary: same as #5229 Pull Request resolved: https://github.com/facebook/rocksdb/pull/5243 Differential Revision: D15082800 Pulled By: siying fbshipit-source-id: f4a68a480db0e40e1ba7cf37e18b88e43dff7c08 | 25 April 2019, 18:25:43 UTC |
da96f2f | Yanqin Jin | 25 April 2019, 17:04:57 UTC | Close WAL files before deletion (#5233) Summary: Currently one thread in RocksDB keeps a WAL file open while another thread deletes it. Although the first thread never writes to the WAL again, it still tries to close it in the end. This is fine on POSIX, but can be problematic on other platforms, e.g. HDFS, etc.. It will either cause a lot of warning messages or throw exceptions. The solution is to let the second thread close the WAL before deleting it. RocksDB keeps the writers of the logs to delete in `logs_to_free_`, which is passed to `job_context` during `FindObsoleteFiles` (holding mutex). Then in `PurgeObsoleteFiles` (without mutex), these writers should close the logs. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5233 Differential Revision: D15032670 Pulled By: riversand963 fbshipit-source-id: c55e8a612db8cc2306644001a5e6d53842a8f754 | 25 April 2019, 17:11:41 UTC |
66d8360 | Zhongyi Xie | 25 April 2019, 04:25:50 UTC | update history.md (#5245) Summary: update history.md for `BottommostLevelCompaction::kForceOptimized` to mention possible user impact. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5245 Differential Revision: D15073712 Pulled By: miasantreble fbshipit-source-id: d40f698c42e8a6368be4eac0a00d02279615edea | 25 April 2019, 04:30:00 UTC |
cd77d3c | Mike Kolupaev | 25 April 2019, 00:07:31 UTC | Don't call FindObsoleteFiles() in ~ColumnFamilyHandleImpl() if CF is not dropped (#5238) Summary: We have a DB with ~4k column families and ~70k files. On shutdown, destroying the 4k ColumnFamilyHandle-s takes over 2 minutes. Most of this time is spent in VersionSet::AddLiveFiles() called from FindObsoleteFiles() from ~ColumnFamilyHandleImpl(). It's just iterating over the list of files in memory. This seems completely unnecessary as no obsolete files are actually found since the CFs are not even dropped. This PR fixes that. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5238 Differential Revision: D15056342 Pulled By: siying fbshipit-source-id: 2aa342ef3770b4aa384ce81f8768e485480e4f08 | 25 April 2019, 00:11:36 UTC |
aa56b7e | Zhongyi Xie | 24 April 2019, 19:05:29 UTC | secondary instance: add support for WAL tailing on `OpenAsSecondary` Summary: PR https://github.com/facebook/rocksdb/pull/4899 implemented the general framework for RocksDB secondary instances. This PR adds the support for WAL tailing in `OpenAsSecondary`, which means after the `OpenAsSecondary` call, the secondary is now able to see the primary's writes that are yet to be flushed. The secondary can see the primary's writes in the WAL up to the moment the `OpenAsSecondary` call starts. Differential Revision: D15059905 Pulled By: miasantreble fbshipit-source-id: 44f71f548a30b38179a7940165e138f622de1f10 | 24 April 2019, 19:08:44 UTC |
1c8cbf3 | anand76 | 23 April 2019, 21:08:24 UTC | Extend MultiGet batching to Transactions (#5210) Summary: MultiGet batching was implemented in #5011 in order to reduce CPU utilization when looking up multiple keys at once. This PR implements the corresponding `MultiGet` and `MultiGetSingleCFForUpdate` in `rocksdb::Transaction` that call the underlying batching implementation. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5210 Differential Revision: D15048164 Pulled By: anand1976 fbshipit-source-id: c52f6043102ab0cbc723f4cba2a7b7d1767f6f52 | 23 April 2019, 21:11:26 UTC |
a7d1031 | qinzuoyan | 23 April 2019, 18:14:41 UTC | Print smallest and largest seqno in Version::DebugString() for more details (#5231) Summary: In some cases, we want to know the smallest and largest sequence numbers of sstable files, to help us get more details. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5231 Differential Revision: D15038087 Pulled By: siying fbshipit-source-id: c473c1ca07b53efe2f1884fa1ecdc8686f455ed8 | 23 April 2019, 18:22:02 UTC |
990b2f4 | Adam Retter | 23 April 2019, 18:12:39 UTC | Fix compilation on db_bench_tool.cc on Windows (#5227) Summary: I needed this change to be able to build the v6.0.1 release on Windows. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5227 Differential Revision: D15033815 Pulled By: sagar0 fbshipit-source-id: 579f3b8e694c34c0d43527eb2fa37175e37f5911 | 23 April 2019, 18:16:51 UTC |
72c8533 | Siying Dong | 23 April 2019, 17:51:50 UTC | DBIter to use IteratorWrapper for inner iterator (#5214) Summary: It's hard to get DBIter to directly use InternalIterator::NextAndGetResult() because the code change would be complicated. Instead, use IteratorWrapper, where Next() is already using NextAndGetResult(). The performance gain is hard to measure because it is small and there is variation. I ran readseq many times, and there seems to be a 1% gain. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5214 Differential Revision: D15003635 Pulled By: siying fbshipit-source-id: 17af1965c409c2fe90cd85037fbd2c5a1364f82a | 23 April 2019, 17:55:01 UTC |
78a6e07 | Yuchi Chen | 22 April 2019, 22:59:16 UTC | Fix compilation errors for 32bits/LITE/ios build. (#5220) Summary: When I build RocksDB for a 32bits/LITE/iOS environment, errors like the following occur. ` table/block_based_table_reader.cc:971:44: error: implicit conversion loses integer precision: 'uint64_t' (aka 'unsigned long long') to 'size_t' (aka 'unsigned long') [-Werror,-Wshorten-64-to-32] size_t block_size = props_block_handle.size(); ~~~~~~~~~~ ~~~~~~~~~~~~~~~~~~~^~~~~~ ./util/file_reader_writer.h:177:8: error: private field 'env_' is not used [-Werror,-Wunused-private-field] Env* env_; ^ ` Pull Request resolved: https://github.com/facebook/rocksdb/pull/5220 Differential Revision: D15023481 Pulled By: siying fbshipit-source-id: 1b5d121d3016f2b0a8a9a2cc1bd638479357f9f7 | 22 April 2019, 23:02:16 UTC |
47fd574 | Sagar Vemuri | 22 April 2019, 22:24:04 UTC | Log file_creation_time table property (#5232) Summary: Log file_creation_time table property when a new table file is created. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5232 Differential Revision: D15033069 Pulled By: sagar0 fbshipit-source-id: aaac56a4c03a8f96c338cad1b0cdb7fbfb887647 | 22 April 2019, 22:30:07 UTC |
8272a6d | Andrew Kryczka | 22 April 2019, 18:48:45 UTC | Optionally wait on bytes_per_sync to smooth I/O (#5183) Summary: The existing implementation does not guarantee bytes reach disk every `bytes_per_sync` when writing SST files, or every `wal_bytes_per_sync` when writing WALs. This can cause confusing behavior for users who enable this feature to avoid large syncs during flush and compaction, but then end up hitting them anyways. My understanding of the existing behavior is we used `sync_file_range` with `SYNC_FILE_RANGE_WRITE` to submit ranges for async writeback, such that we could continue processing the next range of bytes while that I/O is happening. I believe we can preserve that benefit while also limiting how far the processing can get ahead of the I/O, which prevents huge syncs from happening when the file finishes. Consider this `sync_file_range` usage: `sync_file_range(fd_, 0, static_cast<off_t>(offset + nbytes), SYNC_FILE_RANGE_WAIT_BEFORE | SYNC_FILE_RANGE_WRITE)`. Expanding the range to start at 0 and adding the `SYNC_FILE_RANGE_WAIT_BEFORE` flag causes any pending writeback (like from a previous call to `sync_file_range`) to finish before it proceeds to submit the latest `nbytes` for writeback. The latest `nbytes` are still written back asynchronously, unless processing exceeds I/O speed, in which case the following `sync_file_range` will need to wait on it. There is a second change in this PR to use `fdatasync` when `sync_file_range` is unavailable (determined statically) or has some known problem with the underlying filesystem (determined dynamically). The above two changes only apply when the user enables a new option, `strict_bytes_per_sync`. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5183 Differential Revision: D14953553 Pulled By: siying fbshipit-source-id: 445c3862e019fb7b470f9c7f314fc231b62706e9 | 22 April 2019, 18:51:39 UTC |
df38c1c | Mike Kolupaev | 22 April 2019, 15:17:45 UTC | Add BlockBasedTableOptions::index_shortening (#5174) Summary: Introduce BlockBasedTableOptions::index_shortening to give users control over which key shortening techniques are used in building index blocks. Before this patch, both separators and successor keys were shortened in indexes. With this patch, the default is set to kShortenSeparators to only shorten the separators. Since each index block has many separators and only one successor (last key), the change should not have a negative impact on index block size. However, it should prevent many unnecessary block loads where, due to the approximation introduced by the shortened successor, a seek would land us in the previous block and then fix it by moving to the next one. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5174 Differential Revision: D14884185 Pulled By: al13n321 fbshipit-source-id: 1b08bc8c03edcf09b6b8c16e9a7eea08ad4dd534 | 22 April 2019, 15:20:35 UTC |
de76909 | jsteemann | 20 April 2019, 03:30:03 UTC | refactor SavePoints (#5192) Summary: Savepoints are assumed to be used in a stack-wise fashion (only the top element should be used), so they were stored by `WriteBatch` in a member variable `save_points` using an std::stack. Conceptually this is fine, but the implementation had a few issues: - the `save_points_` instance variable was a plain pointer to a heap-allocated `SavePoints` struct. The destructor of `WriteBatch` simply deletes this pointer. However, the copy constructor of WriteBatch just copied that pointer, meaning that copying a WriteBatch with active savepoints would very likely have crashed before. Now a proper copy of the savepoints is made in the copy constructor, and not just a copy of the pointer - `save_points_` was an std::stack, which defaults to `std::deque` for the underlying container. A deque is a bit over the top here, as we only need access to the most recent savepoint (i.e. stack.top()) but never any elements at the front. std::deque is rather expensive to initialize in common environments. For example, the STL implementation shipped with GNU g++ will perform a heap allocation of more than 500 bytes to create an empty deque object. Although the `save_points_` container is created lazily by RocksDB, moving from a deque to a plain `std::vector` is much more memory-efficient. So `save_points_` is now a vector. - `save_points_` was changed from a plain pointer to an `std::unique_ptr`, making ownership more explicit. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5192 Differential Revision: D15024074 Pulled By: maysamyabandeh fbshipit-source-id: 5b128786d3789cde94e46465c9e91badd07a25d7 | 20 April 2019, 03:33:04 UTC |
dc64c2f | Sagar Vemuri | 19 April 2019, 19:58:06 UTC | Fix history to not include some features in 6.1 (#5224) Summary: Fix HISTORY.md by removing a few items from 6.1.1 history as they did not make it into the 6.1.fb branch. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5224 Differential Revision: D15017030 Pulled By: sagar0 fbshipit-source-id: 090724d326d29168952e06dc1a5090c03fdd739e | 19 April 2019, 20:00:53 UTC |
c77aab5 | Yanqin Jin | 19 April 2019, 18:52:13 UTC | Force read existing data during db repair (#5209) Summary: Setting read_opts.total_order_seek achieves this, even with a different prefix extractor. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5209 Differential Revision: D14980388 Pulled By: riversand963 fbshipit-source-id: 16527989a3d6b3e3ae8241c894d011326429d66e | 19 April 2019, 18:55:13 UTC |
5265c57 | anand76 | 19 April 2019, 18:07:22 UTC | Remove a couple of non-public includes from public header file (#5219) Summary: Cleanup a couple of stray includes left by #5011. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5219 Differential Revision: D15007244 Pulled By: anand1976 fbshipit-source-id: 15ca1d4f977b5b60e99df3bfb8fc3db217d19bdd | 19 April 2019, 18:10:33 UTC |
7a73add | Siying Dong | 19 April 2019, 17:34:23 UTC | Add some "inline" annotation to DBIter functions (#5217) Summary: My compiler doesn't inline DBIter::Next() into the arena wrapped iterator, even though it is a direct forward. Adding this annotation makes it inlined. It might not always work but inlining this function into the arena wrapped iterator always feels like the right decision. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5217 Differential Revision: D15004086 Pulled By: siying fbshipit-source-id: a4cffd79c6fb092669a3a90633c9aa5e494f8a66 | 19 April 2019, 17:38:43 UTC |
efa9487 | Sagar Vemuri | 19 April 2019, 05:36:32 UTC | Use creation_time or mtime when file_creation_time=0 (#5184) Summary: We found an issue in Periodic Compactions (introduced in #5166) where files were not being picked up for compactions as all the SST files created with older versions of RocksDB have `file_creation_time` as 0. (Note that `file_creation_time` is a new table property introduced in #5166). To address this, Periodic compactions now fall back to looking at the `creation_time` table property or the file's modification time (as given by the Env) when the `file_creation_time` table property is found to be 0. Here is how the file's modification time (and, in turn, the file age) is computed now: 1. Use `file_creation_time` table property if it is > 0. 2. If not, then use `creation_time` table property if it is > 0. 3. If not, then use the file's mtime stat metadata given by the underlying Env. Don't consider the file at all for compaction if the modification time cannot be correctly determined based on the above conditions. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5184 Differential Revision: D14907795 Pulled By: sagar0 fbshipit-source-id: 4bb2f3631f9a3e04470c674a1d13544584e1e56c | 19 April 2019, 05:39:34 UTC |
3bdce20 | Zhongyi Xie | 18 April 2019, 21:52:54 UTC | reorganize history.md to list unreleased changes separately Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/5216 Differential Revision: D15003749 Pulled By: miasantreble fbshipit-source-id: a52c264e694cd7c55813be33ee22b4f3046b545a | 18 April 2019, 21:55:57 UTC |
d6862b3 | Siying Dong | 18 April 2019, 19:22:29 UTC | Make ReadRangeDelAggregator::ShouldDelete() more inline friendly (#5202) Summary: Reorganize the code so that no function call into ReadRangeDelAggregator is needed if there is no range tombstone. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5202 Differential Revision: D14968155 Pulled By: siying fbshipit-source-id: 0bd61911293c7a27b4e1b8d57c66d0c4ad6a6a5f | 18 April 2019, 19:27:25 UTC |
01cfea6 | Siying Dong | 18 April 2019, 19:07:48 UTC | Some small code changes to improve Next() (#5200) Summary: Several small changes for Next(): 1. Reducing branching by always updating local_stats_.next_count_++ even if statistics is null. This should be faster than branching. 2. Replacing ResetInternalKeysSkippedCounter() in Next() because the valid_ check is not needed in this case. 3. iter_->Valid() should always be true for the non-merge case. Remove this check. 4. Adding an inline annotation. It ends up not being picked up by my compiler, but it shouldn't hurt. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5200 Differential Revision: D15000391 Pulled By: siying fbshipit-source-id: be97f61c708968234fb8e5cf272b5c2ac07dc4dd | 18 April 2019, 19:18:11 UTC |
992dfc7 | Siying Dong | 18 April 2019, 18:08:33 UTC | Introduce InternalIteratorBase::NextAndGetResult() (#5197) Summary: In long scans, virtual function calls of Next(), Valid(), key() and value() are not trivial. By introducing NextAndGetResult(), some of the Next(), Valid() and key() calls are consolidated into one virtual function call to reduce CPU. Also did some inline tricks and added some "final" annotations randomly in some functions. Even without the "final" annotation, most Next() calls are inlined with -O3, but sometimes with a "final" it is inlined by -O2 too. It doesn't hurt to add those final annotations. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5197 Differential Revision: D14945977 Pulled By: siying fbshipit-source-id: 7003969f9a5f1d5717f0bda503b91d19ba75ed88 | 18 April 2019, 18:12:39 UTC |
6c2bf9e | Fosco Marotto | 18 April 2019, 17:51:19 UTC | Add copyright headers per FB open-source checkup tool. (#5199) Summary: internal task: T35568575 Pull Request resolved: https://github.com/facebook/rocksdb/pull/5199 Differential Revision: D14962794 Pulled By: gfosco fbshipit-source-id: 93838ede6d0235eaecff90d200faed9a8515bbbe | 18 April 2019, 17:55:01 UTC |
392f6d4 | Yanqin Jin | 18 April 2019, 16:17:41 UTC | Fix a bug in GetOverlappingInputsRangeBinarySearch (#5211) Summary: As title. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5211 Differential Revision: D14992018 Pulled By: riversand963 fbshipit-source-id: b5720ea4742029e2fb47ff6d9f8d9de006db4ed4 | 18 April 2019, 16:22:16 UTC |
5b7e09b | JiYou | 18 April 2019, 01:12:20 UTC | VersionSet: optimize GetOverlappingInputsRangeBinarySearch (#4987) Summary: `GetOverlappingInputsRangeBinarySearch` first uses binary search to find an index in the given range `[begin, end]`. But after finding the index, it then uses linear search to find the `start_index` and `end_index`, so the search process degrades to linear time. This change optimizes the search process as follows: - use `std::lower_bound` and `std::upper_bound` to get `lg(n)` search complexity. - use a uniform lambda for the search process. - simplify the process for `within_interval` true or false. - remove the functions `ExtendFileRangeWithinInterval` and `ExtendFileRangeOverlappingInterval`. Signed-off-by: JiYou <jiyou09@gmail.com> Pull Request resolved: https://github.com/facebook/rocksdb/pull/4987 Differential Revision: D14984192 Pulled By: riversand963 fbshipit-source-id: fae4b8e59a21b7e350718d60cdc94dd55ac81e89 | 18 April 2019, 01:15:20 UTC |
248b6b5 | Zhongyi Xie | 17 April 2019, 17:08:36 UTC | rename variable to avoid shadowing (#5204) Summary: this PR fixes the following compile warning: ``` db/memtable.cc: In member function ‘virtual void rocksdb::MemTableIterator::Seek(const rocksdb::Slice&)’: db/memtable.cc:321:22: error: declaration of ‘user_key’ shadows a member of 'this' [-Werror=shadow] Slice user_key(ExtractUserKey(k)); ^ db/memtable.cc: In member function ‘virtual void rocksdb::MemTableIterator::SeekForPrev(const rocksdb::Slice&)’: db/memtable.cc:338:22: error: declaration of ‘user_key’ shadows a member of 'this' [-Werror=shadow] Slice user_key(ExtractUserKey(k)); ^ ``` Pull Request resolved: https://github.com/facebook/rocksdb/pull/5204 Differential Revision: D14970160 Pulled By: miasantreble fbshipit-source-id: 388eb089f90c4528cc6d615dd4607fb53ceac705 | 17 April 2019, 17:15:05 UTC |
baa5302 | Zhongyi Xie | 17 April 2019, 06:29:32 UTC | Avoid double-compacting data in bottom level in manual compactions (#5138) Summary: Depending on the config, manual compaction (leveled compaction style) does the following compactions: L0->L1 L1->L2 ... Ln-1 -> Ln Ln -> Ln The final Ln -> Ln compaction is partly unnecessary as it recompacts all the files that were just generated by the Ln-1 -> Ln compaction. We should avoid recompacting such files. This rule should be applied to Lmax only. Resolves issue https://github.com/facebook/rocksdb/issues/4995 Pull Request resolved: https://github.com/facebook/rocksdb/pull/5138 Differential Revision: D14940106 Pulled By: miasantreble fbshipit-source-id: 8d3cf5507a17e76f3333cfd4bac5256d005636e5 | 17 April 2019, 06:32:20 UTC |
d9280ff | Yanqin Jin | 17 April 2019, 03:24:52 UTC | Add back NewEmptyIterator (#5203) Summary: #4905 removed the implementation of `NewEmptyIterator` but kept its declaration in the public header. This breaks some systems that depend on RocksDB if the systems use `NewEmptyIterator`. Therefore, add it back to fix. cc maysamyabandeh please remind me if I miss anything here. Thanks Pull Request resolved: https://github.com/facebook/rocksdb/pull/5203 Differential Revision: D14968382 Pulled By: riversand963 fbshipit-source-id: 5fb86e99c8cfaf9f7a9473cdb1355d7558ff6e01 | 17 April 2019, 03:28:05 UTC |
beb44ec | Siying Dong | 16 April 2019, 18:59:35 UTC | WriteBufferManager's dummy entry size to block cache 1MB -> 256KB (#5175) Summary: The dummy cache size of 1MB is too large for small block sizes. Our GetDefaultCacheShardBits() uses min_shard_size = 512L * 1024L to determine the number of shards, so 1MB will exceed the size of the whole shard and make the cache exceed the budget. Change it to 256KB accordingly. There shouldn't be an obvious performance impact, since inserting a cache entry every 256KB of memtable inserts is still infrequent enough. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5175 Differential Revision: D14954289 Pulled By: siying fbshipit-source-id: 2c275255c1ac3992174e06529e44c55538325c94 | 16 April 2019, 19:03:07 UTC |
f1239d5 | yiwu-arbug | 16 April 2019, 18:32:03 UTC | Avoid per-key upper bound check in BlockBasedTableIterator (#5142) Summary: This is the second attempt for #5101. Original commit message: `BlockBasedTableIterator` avoids reading the next block on `Next()` if it detects the iterator will be out of bound, by checking against the index key. The optimization was added in #2239, and at the time it only checked the bound per block. It seems a later change made it a per-key check, which introduces unnecessary key comparisons. This patch comes with two fixes: Fix 1: To optimize checking for bounds, we need to compare the bounds with the index key as well. However, BlockBasedTableIterator doesn't know whether its index iterator is internally using user keys or internal keys. The patch fixes that by extending InternalIterator with a user_key() function that is overridden in IndexBlockIter. Fix 2: In #5101 we return `IsOutOfBound()=true` when the block index key is out of bound. But the index key can be larger than the smallest key of the next file on the level. That file can be within the upper bound and should not be filtered out. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5142 Differential Revision: D14907113 Pulled By: siying fbshipit-source-id: ac95775c5b4e7b700f76ab43e39f45402c98fbfb | 16 April 2019, 18:37:47 UTC |
71a82a0 | Vijay Nadimpalli | 16 April 2019, 01:47:24 UTC | Consolidating WAL creation which currently has duplicate logic in db_impl_write.cc and db_impl_open.cc (#5188) Summary: Right now, two separate pieces of code are used to create WAL files in DBImpl::Open function of db_impl_open.cc and DBImpl::SwitchMemtable function of db_impl_write.cc. This code change simply creates 1 function called DBImpl::CreateWAL in db_impl_open.cc which is used to replace existing WAL creation logic in DBImpl::Open and DBImpl::SwitchMemtable. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5188 Differential Revision: D14942832 Pulled By: vjnadimpalli fbshipit-source-id: d49230e04c36176015c8c1b422575872f92157fb | 16 April 2019, 01:51:04 UTC |
3e63e55 | Yi Zhang | 15 April 2019, 18:32:31 UTC | Fix MultiGet ASSERT bug when passing unsorted result (#5195) Summary: Found this when test driving the new MultiGet. If you pass an unsorted result with sorted_result = false you'll incorrectly trigger the ASSERT even though we'll sort down below. I've also added a simple test covering the sorted_result=true/false scenarios, copied from MultiGetSimple. anand1976 Pull Request resolved: https://github.com/facebook/rocksdb/pull/5195 Differential Revision: D14935475 Pulled By: yizhang82 fbshipit-source-id: 1d2af5e3a003847d965066a16e3b19da68acf170 | 15 April 2019, 18:35:21 UTC |
b70967a | Yi Wu | 15 April 2019, 17:49:47 UTC | db_bench: support seek to non-exist prefix (#5163) Summary: Add `--seek_missing_prefix` flag to db_bench to allow benchmarking seeking to non-existing prefix. Usage example: ``` ./db_bench --db=/dev/shm/db_bench --use_existing_db=false --benchmarks=fillrandom --num=100000000 --prefix_size=9 --keys_per_prefix=10 ./db_bench --db=/dev/shm/db_bench --use_existing_db=true --benchmarks=seekrandom --disable_auto_compactions=true --num=100000000 --prefix_size=9 --keys_per_prefix=10 --reads=1000 --prefix_same_as_start=true --seek_missing_prefix=true ``` Also adding `--total_order_seek` and `--prefix_same_as_start` flags. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5163 Differential Revision: D14935724 Pulled By: riversand963 fbshipit-source-id: 7c41023f007febe373eb1589861f215432a9e18a | 15 April 2019, 17:54:58 UTC |
b5cad5c | Fosco Marotto | 15 April 2019, 17:40:47 UTC | Update history and version to 6.1.1 (#5171) Summary: Including latest fixes. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5171 Differential Revision: D14875157 Pulled By: gfosco fbshipit-source-id: 86ec7ee3553a9b25ab71ed98966ce08a16322e2c | 15 April 2019, 17:49:38 UTC |
8295d36 | jsteemann | 15 April 2019, 17:30:48 UTC | Improve transaction lock details (#5193) Summary: This branch contains two small improvements: * Create `LockMap` entries using `std::make_shared`. This saves one heap allocation per LockMap entry but also locates the control block and the LockMap object closely together in memory, which can help with caching * Reorder the members of `TrackedTrxInfo`, so that the resulting struct uses less memory (at least on 64bit systems) Pull Request resolved: https://github.com/facebook/rocksdb/pull/5193 Differential Revision: D14934536 Pulled By: maysamyabandeh fbshipit-source-id: f7b49812bb4b6029eef9d131e7cd56260df5b28e | 15 April 2019, 17:44:03 UTC |
29111e9 | anand76 | 13 April 2019, 01:01:00 UTC | Add bounds check in FilePickerMultiGet::PrepareNextLevel() (#5189) Summary: Add bounds check when looping through empty levels in FilePickerMultiGet Pull Request resolved: https://github.com/facebook/rocksdb/pull/5189 Differential Revision: D14925334 Pulled By: anand1976 fbshipit-source-id: 65d53247cf443153e28ce2b8b753fa51c6ae4566 | 13 April 2019, 01:05:09 UTC |
cca141e | yiwu-arbug | 13 April 2019, 00:03:08 UTC | Fix crash with memtable prefix bloom and key out of prefix extractor domain (#5190) Summary: Before using the prefix extractor, `InDomain()` should be checked. The uses in memtable.cc didn't check `InDomain()`. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5190 Differential Revision: D14923773 Pulled By: miasantreble fbshipit-source-id: b3ad60bcca5f3a1a2b929a6eb34b0b7ba6326f04 | 13 April 2019, 00:07:49 UTC |
d655a3a | Manuel Ung | 12 April 2019, 23:34:20 UTC | Remove extraneous call to TrackKey (#5173) Summary: In `PessimisticTransaction::TryLock`, we were calling `TrackKey` even when assume_tracked=true, which defeats the purpose of assume_tracked. Remove this. For keys that are already tracked, TrackKey will actually bump some counters (num_reads/num_writes) which are consumed in `TransactionBaseImpl::GetTrackedKeysSinceSavePoint`, and this is used to determine which keys were tracked since the last savepoint. I believe this functionality should still work, since I think the user should not call GetForUpdate/Put(assume_tracked=true) across savepoints, and if they do, they should not expect the Put(assume_tracked=true) to show up as a tracked key in the second savepoint. This is another 2-3% cpu improvement. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5173 Differential Revision: D14883809 Pulled By: lth fbshipit-source-id: 7d09f0772da422384af0519773e310c22b0cbca3 | 12 April 2019, 23:37:12 UTC |
fe642cb | Maysam Yabandeh | 12 April 2019, 21:36:36 UTC | WritePrepared: fix race condition in reading batch with duplicate keys (#5147) Summary: When ReadOption doesn't specify a snapshot, WritePrepared::Get used kMaxSequenceNumber to avoid the cost of creating a new snapshot object (that requires sync over db_mutex). This creates a race condition if it is reading from the writes of a transaction that had duplicate keys: each instance of a duplicate key is inserted with a different sequence number and, depending on the ordering, the ::Get might skip the newer one and read the older one that is obsolete. The patch fixes that by using the last published seq as the snapshot sequence number. It also adds a check after the read is done to ensure that the max_evicted_seq has not advanced past the aforementioned seq, which is a very unlikely event. If it did, then the read is not valid since the seq is not backed by an actual snapshot to let IsInSnapshot handle that properly when an overlapping commit is evicted from the commit cache. A unit test is added to reproduce the race condition with duplicate keys. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5147 Differential Revision: D14758815 Pulled By: maysamyabandeh fbshipit-source-id: a56915657132cf6ba5e3f5ea1b5d78c803407719 | 12 April 2019, 21:40:41 UTC |
1966a7c | ableegoldman | 12 April 2019, 20:58:25 UTC | Expose JavaAPI for getting the filter policy of a BlockBasedTableConfig (#5186) Summary: I would like to be able to read out the current Filter that has been set (or not) for a BlockBasedTableConfig. Added one public method to BlockBasedTableConfig: public Filter filterPolicy() { return filterPolicy; } Pull Request resolved: https://github.com/facebook/rocksdb/pull/5186 Differential Revision: D14921415 Pulled By: siying fbshipit-source-id: 2a63c8685480197862b49fc48916c757cd6daf95 | 12 April 2019, 21:01:36 UTC |
85b2bde | Siying Dong | 12 April 2019, 17:55:14 UTC | Still implement StatisticsImpl::measureTime() (#5181) Summary: Since Statistics::measureTime() is deprecated, StatisticsImpl::measureTime() is not implemented. We realized that users might have a wrapped Statistics implementation in which measureTime() is implemented as forwarded to StatisticsImpl, and causes assert failure. In order to make the change less intrusive, we implement StatisticsImpl::measureTime(). We will revisit whether we need to remove it after several releases. Also, add a test to make sure that a Statistics implementation using the old interface still works. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5181 Differential Revision: D14907089 Pulled By: siying fbshipit-source-id: 29b6202fd04e30ed6f6adcaeb1000e87f10d1e1a | 12 April 2019, 18:00:35 UTC |
3189398 | Yanqin Jin | 12 April 2019, 17:42:06 UTC | Fix bugs detected by clang analyzer (#5185) Summary: as titled. False positive included, fixed anyway to make the check pass. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5185 Differential Revision: D14909384 Pulled By: riversand963 fbshipit-source-id: dc5177e72b1929ccfd6175a60e2cd7bdb9bd80f3 | 12 April 2019, 17:45:56 UTC |
f49e12b | vijaynadimpalli | 11 April 2019, 21:28:08 UTC | Added missing table properties in log (#5168) Summary: When a new SST file is created via flush or compaction, we dump out the table properties, however only a few table properties are logged. The change here is to log all the table properties Pull Request resolved: https://github.com/facebook/rocksdb/pull/5168 Differential Revision: D14876928 Pulled By: vjnadimpalli fbshipit-source-id: 1aca42ad00f9f650761d39e187f8beeb8700149b | 11 April 2019, 21:33:49 UTC |
fefd4b9 | anand76 | 11 April 2019, 21:24:09 UTC | Introduce a new MultiGet batching implementation (#5011) Summary: This PR introduces a new MultiGet() API, with the underlying implementation grouping keys based on SST file and batching lookups in a file. The reason for the new API is twofold - the definition allows callers to allocate storage for status and values on the stack instead of std::vector, as well as return values as PinnableSlices in order to avoid copying, and it keeps the original MultiGet() implementation intact while we experiment with batching. Batching is useful when there is some spatial locality to the keys being queried, as well as larger batch sizes. The main benefits are due to - 1. Fewer function calls, especially to BlockBasedTableReader::MultiGet() and FullFilterBlockReader::KeysMayMatch(). 2. Bloom filter cachelines can be prefetched, hiding the cache miss latency. The next step is to optimize the binary searches in the level_storage_info, index blocks and data blocks, since we could reduce the number of key comparisons if the keys are relatively close to each other. The batching optimizations also need to be extended to other formats, such as PlainTable and filter formats. This also needs to be added to db_stress. Benchmark results from db_bench for various batch size/locality of reference combinations are given below. Locality was simulated by offsetting the keys in a batch by a stride length. Each SST file is about 8.6MB uncompressed and key/value size is 16/100 uncompressed. To focus on the CPU benefit of batching, the runs were single threaded and bound to the same CPU to eliminate interference from other system events. The results show a 10-25% improvement in micros/op from smaller to larger batch sizes (4 - 32). Batch Sizes 1 | 2 | 4 | 8 | 16 | 32 Random pattern (Stride length 0) 4.158 | 4.109 | 4.026 | 4.05 | 4.1 | 4.074 - Get 4.438 | 4.302 | 4.165 | 4.122 | 4.096 | 4.075 - MultiGet (no batching) 4.461 | 4.256 | 4.277 | 4.11 | 4.182 | 4.14 - MultiGet (w/ batching) Good locality (Stride length 16) 4.048 | 3.659 | 3.248 | 2.99 | 2.84 | 2.753 4.429 | 3.728 | 3.406 | 3.053 | 2.911 | 2.781 4.452 | 3.45 | 2.833 | 2.451 | 2.233 | 2.135 Good locality (Stride length 256) 4.066 | 3.786 | 3.581 | 3.447 | 3.415 | 3.232 4.406 | 4.005 | 3.644 | 3.49 | 3.381 | 3.268 4.393 | 3.649 | 3.186 | 2.882 | 2.676 | 2.62 Medium locality (Stride length 4096) 4.012 | 3.922 | 3.768 | 3.61 | 3.582 | 3.555 4.364 | 4.057 | 3.791 | 3.65 | 3.57 | 3.465 4.479 | 3.758 | 3.316 | 3.077 | 2.959 | 2.891 db_bench command used (on a DB with 4 levels, 12 million keys)- TEST_TMPDIR=/dev/shm numactl -C 10 ./db_bench.tmp -use_existing_db=true -benchmarks="readseq,multireadrandom" -write_buffer_size=4194304 -target_file_size_base=4194304 -max_bytes_for_level_base=16777216 -num=12000000 -reads=12000000 -duration=90 -threads=1 -compression_type=none -cache_size=4194304000 -batch_size=32 -disable_auto_compactions=true -bloom_bits=10 -cache_index_and_filter_blocks=true -pin_l0_filter_and_index_blocks_in_cache=true -multiread_batched=true -multiread_stride=4 Pull Request resolved: https://github.com/facebook/rocksdb/pull/5011 Differential Revision: D14348703 Pulled By: anand1976 fbshipit-source-id: 774406dab3776d979c809522a67bedac6c17f84b | 11 April 2019, 21:28:26 UTC |
ed9f5e2 | Siying Dong | 11 April 2019, 17:22:07 UTC | Change OptimizeForPointLookup() and OptimizeForSmallDb() (#5165) Summary: Change the behavior of OptimizeForSmallDb() so that it is less likely to run out of memory. Change the behavior of OptimizeForPointLookup() to take advantage of the new memtable whole key filter, and move away from prefix extractor as well as hash-based indexing, as they are prone to misuse. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5165 Differential Revision: D14880709 Pulled By: siying fbshipit-source-id: 9af30e3c9e151eceea6d6b38701a58f1f9fb692d | 11 April 2019, 17:45:36 UTC |
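For context, these helpers tune an Options instance before DB::Open; a minimal sketch (the 64 MB block cache size is just an illustrative value):
```
#include "rocksdb/options.h"

rocksdb::Options MakePointLookupOptions() {
  rocksdb::Options options;
  // Tune for get-heavy workloads; the argument is the block cache size
  // in MB (64 is an arbitrary example value).
  options.OptimizeForPointLookup(/*block_cache_size_mb=*/64);
  return options;
}

rocksdb::Options MakeSmallDbOptions() {
  rocksdb::Options options;
  // Dial memory-related settings down for small databases.
  options.OptimizeForSmallDb();
  return options;
}
```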
d3d20dc | Sagar Vemuri | 11 April 2019, 02:24:25 UTC | Periodic Compactions (#5166) Summary: Introducing Periodic Compactions. This feature allows all the files in a CF to be periodically compacted. It could proactively help in catching any corruptions that creep into the DB, as every file constantly gets re-compacted. And also, of course, it helps to clean up data older than a certain threshold.
- Introduced a new option `periodic_compaction_time` to control how long a file can live without being compacted in a CF.
- This works across all levels.
- The files are put back in the same level after going through the compaction. (Related files in the same level are picked up, as `ExpandInputsToCleanCut` is used.)
- Compaction filters, if any, are invoked as usual.
- A new table property, `file_creation_time`, is introduced to implement this feature. This property is set to the time at which the SST file was created (and that time is given by the underlying Env/OS).
This feature can be enabled on its own, or in conjunction with `ttl`. It is possible to set a different time threshold for the bottom level when used in conjunction with ttl: since `ttl` works only on levels 0 through the second-to-last, you could set `ttl` to, say, 1 day, and `periodic_compaction_time` to, say, 7 days (see the sketch after this entry). Since `ttl < periodic_compaction_time`, files in all levels except the bottom one keep getting picked up based on ttl, and almost never based on periodic_compaction_time. The files in the bottom level get picked up for compaction based on `periodic_compaction_time`. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5166 Differential Revision: D14884441 Pulled By: sagar0 fbshipit-source-id: 408426cbacb409c06386a98632dcf90bfa1bda47 | 11 April 2019, 02:31:18 UTC |
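A sketch of the ttl + periodic compaction combination described above. Note this commit introduced the option as `periodic_compaction_time`; current headers spell it `periodic_compaction_seconds`, so verify the name against your version:
```
#include "rocksdb/options.h"

rocksdb::Options MakeTtlPlusPeriodicOptions() {
  rocksdb::Options options;
  // Non-bottom levels: files older than 1 day get picked up via ttl.
  options.ttl = 1 * 24 * 60 * 60;  // seconds
  // Bottom level: every file is re-compacted at least every 7 days,
  // catching creeping corruption and re-running compaction filters.
  options.periodic_compaction_seconds = 7 * 24 * 60 * 60;
  return options;
}
```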
ef0fc1b | Manuel Ung | 10 April 2019, 22:54:53 UTC | Reduce copies of LockInfo (#5172) Summary: The LockInfo struct is expensive to copy because it contains a std::vector. Reduce copies by using the move constructor and `unordered_map::emplace`. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5172 Differential Revision: D14882053 Pulled By: lth fbshipit-source-id: 93999ec6ab1a5841fb5115abb764b6c1831a6de1 | 10 April 2019, 22:58:58 UTC |
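A generic illustration of the copy-reduction technique named above (move construction plus `unordered_map::emplace`); the struct here only mimics LockInfo's shape:
```
#include <cstdint>
#include <string>
#include <unordered_map>
#include <utility>
#include <vector>

struct LockInfoLike {
  std::vector<uint64_t> txn_ids;  // the expensive-to-copy part
  uint64_t expiration_time = 0;
};

void Insert(std::unordered_map<std::string, LockInfoLike>& lock_map,
            const std::string& key, LockInfoLike info) {
  // emplace constructs the pair in place, and std::move transfers the
  // vector's buffer instead of copying every element.
  lock_map.emplace(key, std::move(info));
}
```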
313e877 | jsteemann | 08 April 2019, 21:54:36 UTC | fix reading encrypted files beyond file boundaries (#5160) Summary: This fix should help reading from encrypted files if the file-to-be-read is smaller than expected. For example, when using the encrypted env and making it read a journal file of exactly 0 bytes size, the encrypted env code crashes with SIGSEGV in its Decrypt function, as there is no check whether the read attempts to read over the file's boundaries (as specified originally by the `dataSize` parameter). The most important problem this patch addresses, however, is that there is no size underflow check in `CTREncryptionProvider::CreateCipherStream`: the stream to be read will always be initialized to a size of `prefix.size() - (2 * blockSize)`. If the prefix is smaller than twice the block size, this will obviously assume a _very_ large stream and read over the bounds. The patch adds a check here as follows:
```
// If the prefix is smaller than twice the block size, we would below read a
// very large chunk of the file (and very likely read over the bounds)
assert(prefix.size() >= 2 * blockSize);
if (prefix.size() < 2 * blockSize) {
  return Status::Corruption("Unable to read from file " + fname +
                            ": read attempt would read beyond file bounds");
}
```
so embedders can catch the error in their release builds. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5160 Differential Revision: D14834633 Pulled By: sagar0 fbshipit-source-id: 47aa39a6db8977252cede054c7eb9a663b9a3484 | 08 April 2019, 21:57:25 UTC |
0bb5556 | Siying Dong | 08 April 2019, 20:24:29 UTC | Consolidate hash function used for non-persistent data in a new function (#5155) Summary: Create new functions NPHash64() and GetSliceNPHash64(), which are currently implemented using murmurhash. Replace the current direct calls of murmurhash() with the new functions where the hash results are not used in an on-disk format. This will make it easier to try out or switch to alternative functions in the uses where data format compatibility doesn't need to be considered. This part shouldn't have any performance impact. Also, the sharded cache hash function is changed to the new format, because it falls into this category. It doesn't show visible performance impact in db_bench results. CPU shown by perf is increased from about 0.2% to 0.4% in an extreme benchmark setting (4KB blocks, no compression, everything cached in block cache). We've known that the currently used hash function, our own Hash(), has a serious hash quality problem: it can generate a lot of collisions with similar inputs. In this use case, it means extra lock contention for reads from the same file. This slight CPU regression is worth it to me to counter the potential bad performance with hot keys. And hopefully this will get further improved in the future with a better hash function. cache_test's condition is relaxed a little bit too. The new hash is slightly more skewed in this use case, but I manually checked the data and the hash results are still in a reasonable range. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5155 Differential Revision: D14834821 Pulled By: siying fbshipit-source-id: ec9a2c0a2f8ae4b54d08b13a5c2e9cc97aa80cb5 | 08 April 2019, 20:32:06 UTC |
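A sketch of the indirection this commit introduces: a wrapper whose name advertises "non-persistent" ("NP"), so the underlying function can be swapped without an on-disk format change. The hash body below is a self-contained stand-in (FNV-1a), not RocksDB's murmurhash:
```
#include <cstddef>
#include <cstdint>

// Illustrative stand-in for the real non-persistent hash (murmurhash in
// this commit). FNV-1a here just keeps the sketch self-contained.
inline uint64_t PlaceholderHash64(const char* data, size_t n, uint64_t seed) {
  uint64_t h = 1469598103934665603ull ^ seed;
  for (size_t i = 0; i < n; ++i) {
    h = (h ^ static_cast<unsigned char>(data[i])) * 1099511628211ull;
  }
  return h;
}

// NPHash64: "NP" = non-persistent. Callers whose hash results never hit
// an on-disk format route through this wrapper, so the implementation
// can be swapped later without a data format change.
inline uint64_t NPHash64(const char* data, size_t n, uint64_t seed = 0) {
  return PlaceholderHash64(data, n, seed);
}
```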
de00f28 | Yanqin Jin | 08 April 2019, 18:12:25 UTC | Refactor ExternalSSTFileTest (#5129) Summary: remove an unnecessary function `GenerateAndAddFileIngestBehind` Pull Request resolved: https://github.com/facebook/rocksdb/pull/5129 Differential Revision: D14686710 Pulled By: riversand963 fbshipit-source-id: 5698ae63e10f8ef76c2da753bbb07a36024ac065 | 08 April 2019, 18:16:34 UTC |
39c6c5f | Sergei Glushchenko | 06 April 2019, 13:36:42 UTC | Expose DB methods to lock and unlock the WAL (#5146) Summary: Expose DB methods to lock and unlock the WAL. These methods are intended to be used by MyRocks in order to obtain WAL coordinates in a consistent way. The usage scenario is the following: MySQL has performance_schema.log_status, which provides information that enables a backup tool to copy the required log files without locking for the duration of the copy. To populate this table, MySQL does the following:
1. Lock the binary log. Transactions are not allowed to commit now.
2. Save the binary log coordinates.
3. Walk through the storage engines and lock writes on each engine. For InnoDB, the redo log is locked. For MyRocks, the WAL should be locked.
4. Ask storage engines for their coordinates. InnoDB reports its current LSN and checkpoint LSN. MyRocks should report active WAL file names and sizes.
5. Release the storage engines' locks.
6. Unlock the binary log.
The backup tool will then use this information to copy InnoDB, RocksDB and MySQL binary logs up to the specified positions, to end up with a consistent DB state after restore. Currently, RocksDB allows obtaining the list of WAL files. The only missing bit is a method to lock writes to the WAL files. The LockWAL method must flush the WAL in order for the reported size to be accurate (GetSortedWalFiles uses a file system stat call to return the file size); also, since the backup tool is going to copy the WAL, it is better for it to be flushed. A sketch of the resulting flow follows below. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5146 Differential Revision: D14815447 Pulled By: maysamyabandeh fbshipit-source-id: eec9535a6025229ed471119f19fe7b3d8ae888a3 | 06 April 2019, 13:40:36 UTC |
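A sketch of steps 3 and 4 above from the RocksDB side, assuming the DB::LockWAL()/DB::UnlockWAL() methods exposed here together with the existing GetSortedWalFiles():
```
#include <iostream>
#include "rocksdb/db.h"
#include "rocksdb/transaction_log.h"

// Report active WAL file names and sizes while writes to the WAL are
// blocked, so the reported coordinates stay consistent.
rocksdb::Status ReportWalCoordinates(rocksdb::DB* db) {
  rocksdb::Status s = db->LockWAL();  // also flushes the WAL
  if (!s.ok()) return s;

  rocksdb::VectorLogPtr wal_files;
  s = db->GetSortedWalFiles(wal_files);
  if (s.ok()) {
    for (const auto& f : wal_files) {
      std::cout << f->PathName() << " " << f->SizeFileBytes() << "\n";
    }
  }

  rocksdb::Status u = db->UnlockWAL();
  return s.ok() ? u : s;
}
```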
479c566 | Siying Dong | 05 April 2019, 23:05:10 UTC | Add final annotations to some cache functions (#5156) Summary: Cache functions heavily use virtual functions. Add some `final` annotations to give compilers more information to optimize. The compiler doesn't seem to take advantage of it yet, but it doesn't hurt. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5156 Differential Revision: D14814837 Pulled By: siying fbshipit-source-id: 4423f58eafc93f7dd3c5f04b02b5c993dba2ea94 | 05 April 2019, 23:08:01 UTC |
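A generic illustration (with hypothetical class names) of why `final` can help: once the compiler knows no further override can exist, it may devirtualize and inline the calls:
```
#include <cstddef>

struct CacheShardBase {
  virtual ~CacheShardBase() = default;
  virtual size_t Usage() const = 0;
};

// 'final' on the class (or on individual overrides) tells the compiler
// no subclass can override further, enabling direct, inlinable calls.
struct FixedCacheShard final : CacheShardBase {
  size_t Usage() const override { return usage_; }
  size_t usage_ = 0;
};

size_t TotalUsage(const FixedCacheShard& s) {
  // The static type is final, so these virtual calls may be
  // devirtualized (not guaranteed, as the commit above notes).
  return s.Usage() + s.Usage();
}
```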
8d1e521 | Harry Wong | 05 April 2019, 22:16:15 UTC | Removed const fields in copyable classes (#5095) Summary: This fixed the compile error in Clang-8: ``` error: explicitly defaulted copy assignment operator is implicitly deleted [-Werror,-Wdefaulted-function-deleted] ``` Pull Request resolved: https://github.com/facebook/rocksdb/pull/5095 Differential Revision: D14811961 Pulled By: riversand963 fbshipit-source-id: d935d1f85a4e8694dca10033fb5af92d8777eca0 | 05 April 2019, 22:40:30 UTC |
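The root cause in miniature, for illustration: a const data member implicitly deletes the copy assignment operator, so an explicitly defaulted one is "defaulted as deleted", which Clang 8 diagnoses under -Wdefaulted-function-deleted:
```
struct BadCopyable {
  const int id;  // const member => copy assignment implicitly deleted
  // Clang 8 warns here: explicitly defaulted copy assignment operator
  // is implicitly deleted.
  BadCopyable& operator=(const BadCopyable&) = default;
};

struct GoodCopyable {
  int id;  // drop the const (the fix taken in this PR) and it works
  GoodCopyable& operator=(const GoodCopyable&) = default;
};
```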
59ef2ba | Levi Tamasi | 04 April 2019, 23:18:07 UTC | Evict the uncompression dictionary from the block cache upon table close (#5150) Summary: The uncompression dictionary object has a Statistics pointer that might dangle if the database is closed. This patch evicts the dictionary from the block cache when a table is closed, similarly to how index and filter readers are handled. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5150 Differential Revision: D14782422 Pulled By: ltamasi fbshipit-source-id: 0cec9336c742c479aa92206e04521767f1aa9622 | 04 April 2019, 23:21:12 UTC |
306b9ad | Mike Kolupaev | 04 April 2019, 21:43:29 UTC | Add missing methods to EnvWrapper, and more wrappers in Env.h (#5131) Summary:
- Some newer methods of Env weren't wrapped in EnvWrapper. Fixed.
- Added more wrapper classes similar to WritableFileWrapper: SequentialFileWrapper, RandomAccessFileWrapper, RandomRWFileWrapper, DirectoryWrapper, LoggerWrapper.
- Moved the code around a bit, removed some unused friendships, added some comments.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/5131 Differential Revision: D14738932 Pulled By: al13n321 fbshipit-source-id: 99a9b1af28f2c629e7b7501389fa920b5ce30218 | 04 April 2019, 21:47:41 UTC |
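For context, a sketch of how such wrappers are meant to be used: forward everything to a wrapped Env by default and override only the methods of interest. The subclass below is illustrative, not part of the patch:
```
#include <atomic>
#include <memory>
#include <string>
#include "rocksdb/env.h"

// Counts sequential-file opens and forwards everything else untouched.
class CountingEnv : public rocksdb::EnvWrapper {
 public:
  explicit CountingEnv(rocksdb::Env* base) : rocksdb::EnvWrapper(base) {}

  rocksdb::Status NewSequentialFile(
      const std::string& fname,
      std::unique_ptr<rocksdb::SequentialFile>* result,
      const rocksdb::EnvOptions& options) override {
    opens_.fetch_add(1, std::memory_order_relaxed);
    return rocksdb::EnvWrapper::NewSequentialFile(fname, result, options);
  }

  std::atomic<uint64_t> opens_{0};
};
```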
c06c4c0 | Adam Simpkins | 04 April 2019, 19:05:42 UTC | Fix many bugs in log statement arguments (#5089) Summary: Annotate all of the logging functions to inform the compiler that these use printf-style formatting arguments. This allows the compiler to emit warnings if the format arguments are incorrect. This also fixes many problems reported now that format string checking is enabled. Many of these are simply mix-ups in the argument type (e.g, int vs uint64_t), but in several cases the wrong number of arguments were being passed in which can cause the code to crash. The primary motivation for this was to fix the log message in `DBImpl::SwitchMemtable()` which caused a segfault due to an extra %s format parameter with no argument supplied. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5089 Differential Revision: D14574795 Pulled By: simpkins fbshipit-source-id: 0921b03f0743652bf4ae21e414ff54b3bb65422a | 04 April 2019, 19:12:11 UTC |
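The annotation pattern in question, sketched generically (the macro name and logging function here are illustrative; RocksDB's actual declarations differ):
```
#include <cstdarg>
#include <cstdio>

#if defined(__GNUC__) || defined(__clang__)
// Tells the compiler that argument 1 is a printf-style format string
// and the variadic arguments start at position 2, enabling -Wformat
// checks of argument count and types at every call site.
#define PRINTF_LIKE(fmt_idx, first_arg_idx) \
  __attribute__((__format__(__printf__, fmt_idx, first_arg_idx)))
#else
#define PRINTF_LIKE(fmt_idx, first_arg_idx)
#endif

void LogToStderr(const char* format, ...) PRINTF_LIKE(1, 2);

void LogToStderr(const char* format, ...) {
  va_list ap;
  va_start(ap, format);
  vfprintf(stderr, format, ap);
  va_end(ap);
}

// With the annotation, mismatches like the SwitchMemtable bug (an extra
// %s with no argument) become compile-time warnings:
//   LogToStderr("flushed %s");  // warning: format expects an argument
```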
f0edf9d | datonli | 04 April 2019, 18:31:04 UTC | #5145, rename port/dirent.h to port/port_dirent.h to avoid a compile error when the port dir is used as a header output dir (#5152) Summary: Move port/dirent.h to port/port_dirent.h to avoid a compile error when the port directory is used as a header output directory. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5152 Differential Revision: D14779409 Pulled By: siying fbshipit-source-id: d4162c47c979c6e8cc6a9e601802864ab3768ecb | 04 April 2019, 18:38:19 UTC |
75e8b6d | Maysam Yabandeh | 03 April 2019, 23:01:38 UTC | Fix race condition in IteratorWithLocalStatistics (#5149) Summary: The ReadCallback was shared between all threads in IteratorWithLocalStatistics. A race condition was hence introduced by recent changes that change the content of ReadCallback. The patch fixes that by using a separate callback per thread. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5149 Differential Revision: D14761612 Pulled By: maysamyabandeh fbshipit-source-id: 814a316aed046c318cb90e22379a6e32ac528949 | 03 April 2019, 23:04:38 UTC |
7441a0e | Maysam Yabandeh | 03 April 2019, 22:43:34 UTC | WriteUnPrepared: fix ubsan complaint (#5148) Summary: UBSan complains that in the initialization of WriteUnpreparedTxnReadCallback a method of the child class is used before the parent class is constructed. The patch fixes that by making the aforementioned method static. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5148 Differential Revision: D14760098 Pulled By: maysamyabandeh fbshipit-source-id: cf19b7c1fdb5de0a54e62c1deebe09a0fa048ded | 03 April 2019, 22:51:30 UTC |
ebb9b2e | Zhichao Cao | 03 April 2019, 20:23:48 UTC | Fix the potential DB crash caused by calling EndTrace before StartTrace (#5130) Summary: Although the user should first call StartTrace to begin the RocksDB tracing function and call EndTrace to stop the tracing process, the user can accidentally call EndTrace first. This caused a segmentation fault and crashed the DB instance. The issue is fixed by checking the pointer first. A test case was added in db_test2. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5130 Differential Revision: D14691420 Pulled By: zhichao-cao fbshipit-source-id: 3be13d2f944bc453728ef8eef67b68d7ad0939c8 | 03 April 2019, 20:26:34 UTC |
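The intended call order, sketched with the public tracing API as declared around this release (NewFileTraceWriter, DB::StartTrace, DB::EndTrace); the file path is a placeholder:
```
#include <memory>
#include <string>
#include "rocksdb/db.h"
#include "rocksdb/trace_reader_writer.h"

rocksdb::Status TraceForAWhile(rocksdb::DB* db, const std::string& path) {
  std::unique_ptr<rocksdb::TraceWriter> writer;
  rocksdb::Status s = rocksdb::NewFileTraceWriter(
      rocksdb::Env::Default(), rocksdb::EnvOptions(), path, &writer);
  if (!s.ok()) return s;

  // Correct order: StartTrace first...
  s = db->StartTrace(rocksdb::TraceOptions(), std::move(writer));
  if (!s.ok()) return s;

  // ... issue some traced operations here ...

  // ...then EndTrace. Calling EndTrace without a matching StartTrace
  // used to dereference a null tracer; after this fix it is checked.
  return db->EndTrace();
}
```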
e8480d4 | Zhongyi Xie | 03 April 2019, 04:13:01 UTC | add assert to silence clang analyzer and fix variable shadowing (#5140) Summary: This PR addresses two open issues:
1. clang analyzer is paranoid about db_ being nullptr after DB::Open calls in the test. See https://github.com/facebook/rocksdb/pull/5043#discussion_r271394579 Add an assert to keep clang happy.
2. PR https://github.com/facebook/rocksdb/pull/5049 introduced variable shadowing:
```
db/db_iterator_test.cc: In constructor ‘rocksdb::DBIteratorWithReadCallbackTest_ReadCallback_Test::TestBody()::TestReadCallback::TestReadCallback(rocksdb::SequenceNumber)’:
db/db_iterator_test.cc:2484:9: error: declaration of ‘max_visible_seq’ shadows a member of 'this' [-Werror=shadow]
   : ReadCallback(max_visible_seq) {}
   ^
```
Pull Request resolved: https://github.com/facebook/rocksdb/pull/5140 Differential Revision: D14735497 Pulled By: miasantreble fbshipit-source-id: 3219ea75cf4ae04f64d889323f6779e84be98144 | 03 April 2019, 04:15:44 UTC |
5234fc1 | Maysam Yabandeh | 02 April 2019, 22:14:41 UTC | Mark logs with prepare in PreReleaseCallback (#5121) Summary: In the prepare phase of 2PC, the db promises to remember the prepared data for possible future commits. To fulfill the promise, the prepared data must be persisted in the WAL so that it can be recovered after a crash. A log that contains a prepare batch that is not yet committed is marked so that it is not garbage collected before the transaction commits/rollbacks. The bug was that the write to the log file and the marking of the file were not atomic, and WAL gc could have happened before the WAL log is actually marked. This patch moves the marking logic to PreReleaseCallback so that the WAL gc logic that joins both write threads sees the WAL write and WAL mark atomically. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5121 Differential Revision: D14665210 Pulled By: maysamyabandeh fbshipit-source-id: 1d66aeb1c66a296cb4899a5a20c4d40c59e4b534 | 02 April 2019, 22:17:47 UTC |
26015f3 | Zhongyi Xie | 02 April 2019, 21:48:52 UTC | add compression options to table properties (#5081) Summary: Since we are planning to use dictionary compression and different compression levels, it is quite useful to add compression options to TableProperties. For example, in MyRocks, if the feature is available, we can query information_schema.rocksdb_sst_props to see if all sst files have been converted to ZSTD dictionary compression. Resolves https://github.com/facebook/rocksdb/issues/4992 With this PR, users can query table properties through the `GetPropertiesOfAllTables` API and get the compression options as a std::string: `window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0;` Alternatively, table_properties->ToString() will also contain it: `# data blocks=1; # entries=13; # deletions=0; # merge operands=0; # range deletions=0; raw key size=143; raw average key size=11.000000; raw value size=39; raw average value size=3.000000; data block size=120; index block size (user-key? 0, delta-value? 0)=27; filter block size=0; (estimated) table size=147; filter policy name=N/A; prefix extractor name=nullptr; column family ID=0; column family name=default; comparator name=leveldb.BytewiseComparator; merge operator name=nullptr; property collectors names=[]; SST file compression algo=Snappy; SST file compression options=window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; ; creation time=1552946632; time stamp of earliest key=1552946632;` Pull Request resolved: https://github.com/facebook/rocksdb/pull/5081 Differential Revision: D14716692 Pulled By: miasantreble fbshipit-source-id: 7d2f2cf84e052bff876e71b4212cfdebf5be32dd | 02 April 2019, 21:52:34 UTC |
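A sketch of the query path described above; the `compression_options` member name follows this PR's description, so double-check it against your table_properties.h:
```
#include <iostream>
#include "rocksdb/db.h"
#include "rocksdb/table_properties.h"

// Walk every live SST file's properties and print its compression
// options string (member name per this PR; verify against your header).
rocksdb::Status DumpCompressionOptions(rocksdb::DB* db) {
  rocksdb::TablePropertiesCollection props;
  rocksdb::Status s = db->GetPropertiesOfAllTables(&props);
  if (!s.ok()) return s;
  for (const auto& entry : props) {
    std::cout << entry.first << ": "
              << entry.second->compression_options << "\n";
  }
  return s;
}
```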
14b3f68 | Maysam Yabandeh | 02 April 2019, 21:43:03 UTC | WriteUnPrepared: less virtual in iterator callback (#5049) Summary: WriteUnPrepared adds a virtual function, MaxUnpreparedSequenceNumber, to ReadCallback, which returns 0 unless WriteUnPrepared is enabled and the transaction has uncommitted data written to the DB. Together with the snapshot sequence number, this determines the last sequence that is visible to reads. The patch clarifies the guarantees of the GetIterator API in WriteUnPrepared transactions and makes use of that to statically initialize the read callback and thus avoid the virtual call. Furthermore, it increases the minimum value for min_uncommitted from 0 to 1, as seq 0 is used only for last-level keys that are committed in all snapshots. The following benchmark shows +0.26% higher throughput in the seekrandom benchmark.
Benchmark:
./db_bench --benchmarks=fillrandom --use_existing_db=0 --num=1000000 --db=/dev/shm/dbbench
./db_bench --benchmarks=seekrandom[X10] --use_existing_db=1 --db=/dev/shm/dbbench --num=1000000 --duration=60 --seek_nexts=100
seekrandom [AVG 10 runs] : 20355 ops/sec; 225.2 MB/sec
seekrandom [MEDIAN 10 runs] : 20425 ops/sec; 225.9 MB/sec
./db_bench_lessvirtual3 --benchmarks=seekrandom[X10] --use_existing_db=1 --db=/dev/shm/dbbench --num=1000000 --duration=60 --seek_nexts=100
seekrandom [AVG 10 runs] : 20409 ops/sec; 225.8 MB/sec
seekrandom [MEDIAN 10 runs] : 20487 ops/sec; 226.6 MB/sec
Pull Request resolved: https://github.com/facebook/rocksdb/pull/5049 Differential Revision: D14366459 Pulled By: maysamyabandeh fbshipit-source-id: ebaff8908332a5ae9af7defeadabcb624be660ef | 02 April 2019, 21:47:16 UTC |
d9d3cac | Simon Grätzer | 02 April 2019, 18:52:53 UTC | Add a missing define to monitoring/iostats_context_imp.h (#5136) Summary: I think when PR https://github.com/facebook/rocksdb/pull/4889 added the `IOSTATS_CPU_TIMER_GUARD` define to this header file, the no-op version in the `#else` branch was forgotten. Not sure if this is common, but on my macOS machine it breaks my build. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5136 Differential Revision: D14727727 Pulled By: siying fbshipit-source-id: 1076e56bdbe6ecda01d461b371dabf7f1593a149 | 02 April 2019, 18:56:18 UTC |
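The general shape of the fix, with hypothetical names throughout: every build configuration must see some definition of the macro, even if the fallback expands to nothing:
```
#include <cstdint>

// Hypothetical names; this only illustrates the pattern.
struct ScopedCpuTimerLike {
  explicit ScopedCpuTimerLike(uint64_t* out) : out_(out) {}
  ~ScopedCpuTimerLike() { *out_ += 1; /* real code adds elapsed CPU ns */ }
  uint64_t* out_;
};

#ifdef HAS_CPU_TIMER_SUPPORT
#define CPU_TIMER_GUARD(counter) ScopedCpuTimerLike guard_(&(counter))
#else
// The easy-to-forget no-op fallback: without it, every use site fails
// to compile in configurations where the feature is disabled.
#define CPU_TIMER_GUARD(counter)
#endif
```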
ebcc8ae | Siying Dong | 02 April 2019, 16:57:35 UTC | Revert "Avoid per-key upper bound check in BlockBasedTableIterator (#5101)" (#5132) Summary: This reverts commit f29dc1b90641e7f44b14f932e3866c5840391cd5. In BlockBasedTableIterator, index_iter_->key() is sometimes a user key, so it is wrong to call ExtractUserKey() against it. This is a bug introduced by #5101. Temporarily revert the diff to keep the branch clean. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5132 Differential Revision: D14718584 Pulled By: siying fbshipit-source-id: 0ac55dc9b5dbc18c7809092146bdf7eb9364b9ad | 02 April 2019, 17:00:38 UTC |
fa1b558 | xinbenlv | 02 April 2019, 01:16:26 UTC | Add LevelDB repository link in the Readme Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/5134 Differential Revision: D14719068 Pulled By: siying fbshipit-source-id: c09a544f06ff414dbe2f90792aaf2bb5b8550bee | 02 April 2019, 01:19:09 UTC |
120bc47 | Mike Kolupaev | 02 April 2019, 00:07:38 UTC | Add DBOptions. avoid_unnecessary_blocking_io to defer file deletions (#5043) Summary: Just like ReadOptions::background_purge_on_iterator_cleanup but for ColumnFamilyHandle instead of Iterator. In our use case we sometimes call ColumnFamilyHandle's destructor from low-latency threads, and sometimes it blocks the thread for a few seconds deleting the files. To avoid that, we can either offload ColumnFamilyHandle's destruction to a background thread on our side, or add this option on rocksdb side. This PR does the latter, to be consistent with how we solve exactly the same problem for iterators using background_purge_on_iterator_cleanup option. (EDIT: It's avoid_unnecessary_blocking_io now, and affects both CF drops and iterator destructors.) I'm not quite comfortable with having two separate options (background_purge_on_iterator_cleanup and background_purge_on_cf_cleanup) for such a rarely used thing. Maybe we should merge them? Rename background_purge_on_cf_cleanup to something like delete_files_on_background_threads_only or avoid_blocking_io_in_unexpected_places, and make iterators use it instead of the one in ReadOptions? I can do that here if you guys think it's better. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5043 Differential Revision: D14339233 Pulled By: al13n321 fbshipit-source-id: ccf7efa11c85c9a5b91d969bb55627d0fb01e7b8 | 02 April 2019, 00:10:40 UTC |
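How the resulting option is set, as a minimal sketch:
```
#include <string>
#include "rocksdb/db.h"
#include "rocksdb/options.h"

rocksdb::Status OpenWithDeferredDeletes(const std::string& path,
                                        rocksdb::DB** db) {
  rocksdb::Options options;
  options.create_if_missing = true;
  // Defer file deletions triggered by CF drops and iterator/handle
  // destruction to a background thread, keeping callers low-latency.
  options.avoid_unnecessary_blocking_io = true;
  return rocksdb::DB::Open(options, path, db);
}
```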
127a850 | Remington Brasga | 29 March 2019, 22:05:29 UTC | Fix arena allocation size in NewEmptyInternalIterator (#4905) Summary: NewEmptyInternalIterator with an arena mistakenly used EmptyIterator to compute the size to allocate from the arena, but then initialized it to a totally different object: EmptyInternalIterator. The patch fixes that. Pull Request resolved: https://github.com/facebook/rocksdb/pull/4905 Differential Revision: D14689840 Pulled By: maysamyabandeh fbshipit-source-id: af64fd8ee93d5a4ad54691c792e5ecc5efabc887 | 29 March 2019, 22:09:35 UTC |
a703f16 | Maysam Yabandeh | 29 March 2019, 20:09:51 UTC | WriteUnPrepared: Enable auto-compaction after max_evicted_seq_ init (#5128) Summary: Compaction depends on the max_evicted_seq_ value, so the ::Initialize method should enable auto-compaction only after max_evicted_seq_ is properly initialized. The patch also backports #4853 from WritePrepared txn to WriteUnPrepared. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5128 Differential Revision: D14686562 Pulled By: maysamyabandeh fbshipit-source-id: b2355025712a72676ac3b20a95258adcf4774490 | 29 March 2019, 20:18:57 UTC |
f29dc1b | Yi Wu | 29 March 2019, 20:07:44 UTC | Avoid per-key upper bound check in BlockBasedTableIterator (#5101) Summary: `BlockBasedTableIterator` avoids reading the next block on `Next()` if it detects that the iterator will be out of bound, by checking against the index key. The optimization was added in #2239, and at the time it only checked the bound once per block. It seems a later change made it a per-key check, which introduces unnecessary key comparisons. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5101 Differential Revision: D14678707 Pulled By: siying fbshipit-source-id: 2372446116753c7892ea4cec7b4b49ef87ba463e | 29 March 2019, 20:11:46 UTC |
09957de | Yanqin Jin | 29 March 2019, 17:03:02 UTC | Update RepeatableThreadTest with MockTimeEnv (#5107) Summary: **This PR updates RepeatableThread::wait, breaking some tests on OS X. The rest of the PR fixes the tests on OS X.** `RepeatableThreadTest.MockEnvTest` uses `MockTimeEnv` and `RepeatableThread`. If `RepeatableThread::wait` calls `TimedWait` with a time smaller than or equal to the current (real) time, `TimedWait` returns immediately on certain platforms, e.g. OS X. #4560 addresses this issue by replacing `TimedWait` with `Wait` in test. This fixes the test but makes test/production code diverge, which is not optimal for test coverage. This PR proposes an alternative fix which unifies test and production code path for `RepeatableThread::wait`. We obtain the current (real) time in seconds and add 10 extra seconds to ensure that `RepeatableThread::wait` invokes `TimedWait` with a time greater than (real) current time. This is to prevent the `TimedWait` function from returning immediately without sleeping and releasing the mutex. If `TimedWait` returns immediately, the mutex will not be released, and `RepeatableThread::TEST_WaitForRun` never has a chance to execute the callback which, in this case, updates the result returned by `mock_env->NowMicros()`. Consequently, `RepeatableThread::wait` cannot break out of the loop, causing test to hang. The extra 10 seconds is a best-effort approach because there seems no reliable and deterministic way to provide the aforementioned guarantee. By the time `RepeatableThread::wait` is called, there is no guarantee that the `delay + mock_env->NowMicros()` will be greater than the current real time. However, 10 seconds should be sufficient in most cases. We will keep an eye for possible flakiness of this test. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5107 Differential Revision: D14680885 Pulled By: riversand963 fbshipit-source-id: d1ecbe10e1dacd110bd464cd01e188bfee72b89e | 29 March 2019, 17:08:50 UTC |
d77476e | Yanqin Jin | 29 March 2019, 02:16:58 UTC | Fix db_stress for custom env (#5122) Summary: Fix some hdfs-related code so that it can compile and run 'db_stress' Pull Request resolved: https://github.com/facebook/rocksdb/pull/5122 Differential Revision: D14675495 Pulled By: riversand963 fbshipit-source-id: cac280479efcf5451982558947eac1732e8bc45a | 29 March 2019, 02:20:27 UTC |
dae3b55 | anand76 | 28 March 2019, 22:13:02 UTC | Smooth the deletion of WAL files (#5116) Summary: WAL files are currently not subject to deletion rate limiting by DeleteScheduler. If the size of the WAL files is significant, this can cause a high delete rate on SSDs that may affect other operations. To fix it, force WAL file deletions to go through the SstFileManager. Original PR for this is #2768 Pull Request resolved: https://github.com/facebook/rocksdb/pull/5116 Differential Revision: D14669437 Pulled By: anand1976 fbshipit-source-id: c5f62d0640cebaa1574de841a1d01e4ce2faadf0 | 28 March 2019, 22:17:13 UTC |
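A sketch of configuring an SstFileManager delete rate limit, which after this change also governs WAL deletions (the 64 MB/s figure is an arbitrary example):
```
#include "rocksdb/env.h"
#include "rocksdb/options.h"
#include "rocksdb/sst_file_manager.h"

rocksdb::Options MakeRateLimitedDeleteOptions() {
  rocksdb::Options options;
  // Cap file deletions (now including WAL files) at ~64 MB/s so delete
  // bursts don't compete with foreground I/O on SSDs.
  options.sst_file_manager.reset(rocksdb::NewSstFileManager(
      rocksdb::Env::Default(), /*info_log=*/nullptr, /*trash_dir=*/"",
      /*rate_bytes_per_sec=*/64 << 20));
  return options;
}
```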
a98317f | Siying Dong | 28 March 2019, 21:50:06 UTC | Option string/map can set merge operator from object registry (#5123) Summary: Allow customized merge operators to be loaded from an option file/map/string by allowing users to pre-register merge operators with the object registry. Also update HISTORY.md and header files for the same feature for comparators. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5123 Differential Revision: D14658488 Pulled By: siying fbshipit-source-id: 86ea2fbd2a0a04632d8ea9fceaffefd041f6ae61 | 28 March 2019, 21:54:29 UTC |
106a94a | Siying Dong | 28 March 2019, 20:11:53 UTC | Improve obsolete_files_test (#5125) Summary: We see a failure of obsolete_files_test but aren't able to identify the issue. Improve the test in the following ways and hope we can debug better next time:
1. Place the sync point before the automatic compaction runs, so the race condition will always trigger.
2. Disable the sync point before the test finishes.
3. Use ASSERT_OK() instead of ASSERT_TRUE(status.ok()).
Pull Request resolved: https://github.com/facebook/rocksdb/pull/5125 Differential Revision: D14669456 Pulled By: siying fbshipit-source-id: dccb7648e334501ad651eb212880096eef1f4ab2 | 28 March 2019, 20:16:02 UTC |
d1edf4e | Burton Li | 27 March 2019, 23:31:30 UTC | Format env_win coding style by google c++ style guide (#5096) Summary: The existing code for the env_win source and header files doesn't fully follow the recommended code style (https://google.github.io/styleguide/cppguide.html#Functions). Fix it for better readability. anand1976 siying Pull Request resolved: https://github.com/facebook/rocksdb/pull/5096 Differential Revision: D14585358 Pulled By: anand1976 fbshipit-source-id: 7ce35ffe9e922f5c1421b0bbaa5fce7abad57617 | 27 March 2019, 23:38:20 UTC |