swh:1:snp:5115096b921df712aeb2a08114fede57fb3331fb

3513d4e Making platform 007 (gcc 7) default in build_detect_platform.sh (#5947) Summary: Making platform 007 (gcc 7) default in build_detect_platform.sh. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5947 Differential Revision: D18038837 Pulled By: vjnadimpalli fbshipit-source-id: 9ac2ddaa93bf328a416faec028970e039886378e 30 October 2019, 17:29:05 UTC
76a56d8 Bump version to 6.2.4 18 September 2019, 21:42:43 UTC
dd757ff Disable snapshot refresh feature when snap_refresh_nanos is 0 (#5724) Summary: The comments of snap_refresh_nanos advertise that the snapshot refresh feature will be disabled when the option is set to 0. This contract is however not honored in the code: https://github.com/facebook/rocksdb/pull/5278 The patch fixes that and also adds an assert to ensure that the feature is not used when the option is zero. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5724 Differential Revision: D16918185 Pulled By: maysamyabandeh fbshipit-source-id: fec167287df7d85093e087fc39c0eb243e3bbd7e 18 September 2019, 21:39:45 UTC
fdc56e2 Declare snapshot refresh incompatible with delete range (#5625) Summary: The ::snap_refresh_nanos option is incompatible with the DeleteRange feature. Currently the code relies on range_del_agg.IsEmpty() to disable it if there are range delete tombstones. However ::IsEmpty does not guarantee that there are no RangeDelete tombstones in the SST files. The patch declares the two features incompatible in inline comments until we later figure out how to properly detect the presence of RangeDelete tombstones in compaction inputs. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5625 Differential Revision: D16468218 Pulled By: maysamyabandeh fbshipit-source-id: bd7beca278bc7e1db75e7ee4522d05a3a6ca86f4 18 September 2019, 21:37:22 UTC
cfd6eb3 Disable refresh snapshot feature by default (#5606) Summary: There are concerns about the correctness of this patch. Disabling by default until the concerns are resolved. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5606 Differential Revision: D16428064 Pulled By: maysamyabandeh fbshipit-source-id: a89280f0ea85796c9c9dfbfd9a8e91dad9b000b3 18 September 2019, 21:37:16 UTC
9623a66 Refresh snapshot list during long compactions (2nd attempt) (#5278) Summary: Part of compaction cpu goes to processing the snapshot list; the larger the list, the bigger the overhead. Although the lifetime of most of the snapshots is much shorter than the lifetime of compactions, the compaction conservatively operates on the list of snapshots that it initially obtained. This patch allows the snapshot list to be updated via a callback if the compaction is taking long. This should let the compaction continue more efficiently with a much smaller snapshot list. For simplicity, the feature is disabled in two cases: i) when more than one sub-compaction is sharing the same snapshot list, ii) when Range Delete is used, in which case the range delete aggregator has its own copy of the snapshot list. This fixes the reverted https://github.com/facebook/rocksdb/pull/5099 issue with range deletes. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5278 Differential Revision: D15203291 Pulled By: maysamyabandeh fbshipit-source-id: fa645611e606aa222c7ce53176dc5bb6f259c258 18 September 2019, 21:35:50 UTC
47f0d5f Revert snap_refresh_nanos feature (#5269) Summary: Our daily stress tests are failing after this feature. Reverting temporarily until we figure the reason for test failures. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5269 Differential Revision: D15151285 Pulled By: maysamyabandeh fbshipit-source-id: e4002b99690a97df30d4b4b58bf0f61e9591bc6e 18 September 2019, 21:30:55 UTC
3542f7e Bump patch version 03 September 2019, 20:41:22 UTC
5c12a47 Fix a bug in file ingestion (#5760) Summary: Before this PR, when the number of column families involved in a file ingestion exceeds 2, a bug in the looping logic prevents the correct file number from being assigned to each ingestion job. Also skip deleting non-existing hard links during cleanup-after-failure. Test plan (devserver) ``` $COMPILE_WITH_ASAN=1 make all $./external_sst_file_test --gtest_filter=ExternalSSTFileTest/ExternalSSTFileTest.IngestFilesIntoMultipleColumnFamilies_*/* $make check ``` Pull Request resolved: https://github.com/facebook/rocksdb/pull/5760 Differential Revision: D17142982 Pulled By: riversand963 fbshipit-source-id: 06c1847a4e7a402647bcf28d124e70f2a0f9daf6 03 September 2019, 20:29:55 UTC
6d113fc Update HISTORY and bump version 07 June 2019, 23:23:07 UTC
11afcbe Disable dynamic extension support by default for CMake (#5419) Summary: We have users reporting linking error while building RocksDB using CMake, and we do not enable dynamic extension feature for them. The fix is to add `-DROCKSDB_NO_DYNAMIC_EXTENSION` to CMake by default. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5419 Differential Revision: D15676792 Pulled By: riversand963 fbshipit-source-id: d45aaacfc64ea61646fd7329c352cd760145baf3 05 June 2019, 21:09:27 UTC
b6e554e Bump version to 6.2.1 04 June 2019, 23:38:51 UTC
a1f08cc Fix merging range tombstone covering put during flush/compaction (#5406) Summary: Flush/compaction use `MergeUntil` which has a special code path to handle a merge ending with a non-`Merge` point key. In particular if that key is a `Put` we forgot to check whether it is covered by a range tombstone. If it is covered then we must not include it in the following call to `TimedFullMerge`. Fixes #5392. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5406 Differential Revision: D15611144 Pulled By: sagar0 fbshipit-source-id: ba6a7863ca2d043f591de78fd0c4f4561f0c500e 04 June 2019, 20:44:37 UTC
0d9bfa6 Fix a bug in db_stress and an incorrect assertion in FilePickerMultiGet (#5301) Summary: This PR has two fixes for crash test failures - 1. Fix a bug in TestMultiGet() in db_stress that was passing the list of keys to MultiGet() in the wrong order, thus ensuring that actual values don't match expected values 2. Remove an incorrect assertion in FilePickerMultiGet::GetNextFileInLevelWithKeys() that checks that files in a level are in sorted order. This is not true with MultiGet(), especially if there are duplicate keys and we may have to go back one file for the next key. Furthermore, this assertion makes more sense when a new version is created, rather than at lookup time. Test - asan_crash and ubsan_crash tests Pull Request resolved: https://github.com/facebook/rocksdb/pull/5301 Differential Revision: D15337383 Pulled By: anand1976 fbshipit-source-id: 35092cb15bbc1700e5e823cbe07bfa62f1e9e6c6 24 May 2019, 20:22:04 UTC
9feb730 Fix bugs in FilePickerMultiGet (#5292) Summary: This PR fixes a couple of bugs in FilePickerMultiGet that were causing db_stress test failures. The failures were caused by - 1. Improper handling of a key that matches the user key portion of an L0 file's largest key. In this case, the curr_index_in_curr_level file index in L0 for that key was getting incremented, but batch_iter_ was not advanced. By design, all keys in a batch are supposed to be checked against an L0 file before advancing to the next L0 file. Not advancing to the next key in the batch was causing a double increment of curr_index_in_curr_level due to the same key being processed again 2. Improper handling of a key that matches the user key portion of the largest key in the last file of L1 and higher. This was resulting in a premature end to the processing of the batch for that level when the next key in the batch is a duplicate. Typically, the keys in MultiGet will not be duplicates, but its good to handle that case correctly Test - asan_crash make check Pull Request resolved: https://github.com/facebook/rocksdb/pull/5292 Differential Revision: D15282530 Pulled By: anand1976 fbshipit-source-id: d1a6a86e0af273169c3632db22a44d79c66a581f 24 May 2019, 20:19:55 UTC
70dca18 multiget: fix memory issues due to vector auto resizing (#5279) Summary: This PR fixes three memory issues found by ASAN * in db_stress, the key vector for MultiGet is created using `emplace_back`, which could potentially invalidate references to the underlying storage (vector<string>) due to auto resizing. Fix by calling reserve in advance. * Similar issue in construction of GetContext autovector in version_set.cc * In multiget_context.h use T[] specialization for unique_ptr that holds a char array Pull Request resolved: https://github.com/facebook/rocksdb/pull/5279 Differential Revision: D15202893 Pulled By: miasantreble fbshipit-source-id: 14cc2cda0ed64d29f2a1e264a6bfdaa4294ee75d 24 May 2019, 20:19:06 UTC
5f703af Add option to use MultiGet in db_stress (#5264) Summary: The new option will pick a batch size randomly in the range 1-64. It will then space the keys in the batch by random intervals. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5264 Differential Revision: D15175522 Pulled By: anand1976 fbshipit-source-id: c16baa69d0f1ff4cf53c55c813ddd82c8aeb58fc 24 May 2019, 20:18:02 UTC
8baa66a Update HISTORY.md 15 May 2019, 18:26:03 UTC
570d490 Fix a race condition caused by unlocking db mutex (#5294) Summary: Previous code may call `~ColumnFamilyData` in `DBImpl::AtomicFlushMemTablesToOutputFiles` if the column family is dropped or `cfd->IsFlushPending() == false`. In `~ColumnFamilyData`, the db mutex is released briefly and re-acquired. This can cause a correctness issue. The reason is as follows. Assume there are multiple bg flush threads. After bg_flush_thr1 releases the db mutex, bg_flush_thr2 can grab it and pop an element from the flush queue. This will cause bg_flush_thr2 to accidentally pick some memtables which should have been picked by bg_flush_thr1. To make matters worse, bg_flush_thr2 can clear the `flush_requested_` flag for the memtable list, causing a subsequent call to `MemTableList::IsFlushPending()` by bg_flush_thr1 to return false, which is wrong. The fix is to delay `ColumnFamilyData::Unref` and `~ColumnFamilyData` for column families not selected for flush until `AtomicFlushMemTablesToOutputFiles` returns. Furthermore, a bg flush thread should not clear `MemTableList::flush_requested_` in `MemTableList::PickMemtablesToFlush` unless atomic flush is not used **or** the memtable list does not have unpicked memtables. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5294 Differential Revision: D15295297 Pulled By: riversand963 fbshipit-source-id: 03b101205ca22c242647cbf488bcf0ed80b2ecbd 15 May 2019, 18:23:47 UTC
55320de Update history and version for future 6.2.0 30 April 2019, 20:07:04 UTC
03c7ae2 RocksDB CRC32c optimization with ARMv8 Intrinsic (#5221) Summary: 1. Add Arm linear crc32c implementation for RocksDB. 2. Arm runtime check for crc32 Pull Request resolved: https://github.com/facebook/rocksdb/pull/5221 Differential Revision: D15013685 Pulled By: siying fbshipit-source-id: 2c2983743d26656d93f212dc7c1a3cf66a1acf12 30 April 2019, 17:59:05 UTC
a5debd7 Add rocksdb_property_int_cf (#5268) Summary: Adds the missing `rocksdb_property_int_cf` function to the C API to let consuming libraries avoid parsing strings. Fixes https://github.com/facebook/rocksdb/issues/5249 Pull Request resolved: https://github.com/facebook/rocksdb/pull/5268 Differential Revision: D15149461 Pulled By: maysamyabandeh fbshipit-source-id: e9fe5f1ad7c64066d921dba8473507269b51d331 30 April 2019, 17:13:28 UTC
b02d0c2 Init compression dict handle before reading meta-blocks (#5267) Summary: At least one of the meta-block loading functions (`ReadRangeDelBlock`) uses the same block reading function (`NewDataBlockIterator`) as data block reads, which means it uses the dictionary handle. However, the dictionary handle was uninitialized while reading meta-blocks, causing readers to receive an error. This situation was only noticed when `cache_index_and_filter_blocks=true`. This PR initializes the handle to null while reading meta-blocks to prevent the error. It also adds support to `db_stress` / `db_crashtest.py` for `cache_index_and_filter_blocks`. Fixes #5263. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5267 Differential Revision: D15149264 Pulled By: maysamyabandeh fbshipit-source-id: 991d38a306c62db5976778bfb050fa3cd4a0671b 30 April 2019, 16:50:49 UTC
25810ca compile gtest only when tests are enabled Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/5248 Differential Revision: D15149190 Pulled By: maysamyabandeh fbshipit-source-id: fd6d799e80bb502a7ddbc07032ea87e2e3f1e24f 30 April 2019, 16:33:44 UTC
210b49c Disable pipelined write in atomic flush stress test (#5266) Summary: Since currently pipelined write allows one thread to perform memtable writes while another thread is traversing the `flush_scheduler_`, it will cause an assertion failure in `FlushScheduler::Clear`. To unblock crash recovery tests, we temporarily disable pipelined write when atomic flush is enabled. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5266 Differential Revision: D15142285 Pulled By: riversand963 fbshipit-source-id: a0c20fe4ac543e08feaed602414f982054df7831 30 April 2019, 15:12:42 UTC
1886456 CMake has stock FindZLIB in upper case. (#5261) Summary: More details in https://cmake.org/cmake/help/v3.14/module/FindZLIB.html This resolves the cmake config error of not finding `Findzlib` on Linux (CentOS 7 + cmake 3.14.3 + gcc-8). Pull Request resolved: https://github.com/facebook/rocksdb/pull/5261 Differential Revision: D15138052 Pulled By: maysamyabandeh fbshipit-source-id: 2f4445f49a36c16e6f1e05c090018c02379c0de4 29 April 2019, 22:30:29 UTC
35e6ba7 Fix a bug when triggering atomic flush and closing db (#5254) Summary: With atomic flush, RocksDB background flush will flush memtables of a column family up to the largest memtable id in the immutable memtable list. This can introduce a bug in the following scenario. A user thread inserts into a column family until the memtable is full and triggers a flush. This will add the column family to flush_scheduler_. Then the user thread writes another record to the column family. In the PreprocessWrite function, the user thread picks the column family from flush_scheduler_ and schedules a flush request. The flush request guarantees to flush all the memtables up to the current largest memtable ID of the immutable memtable list. Then the user thread writes new data to the newly-created active memtable. After the write returns, the user thread closes the db. This can cause an assertion failure when the background flush thread tries to install superversion for the column family. The solution is to not install flush results if the db has already set `shutting_down_` to true. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5254 Differential Revision: D15124149 Pulled By: riversand963 fbshipit-source-id: 0a667a41339dedb5a18bcb01b0bf11c275c04df0 29 April 2019, 19:48:32 UTC
3548e42 Improve explicit user readahead performance (#5246) Summary: Improve the iterators performance when the user explicitly sets the readahead size via `ReadOptions.readahead_size`. 1. Stop creating new table readers when the user explicitly sets readahead size. 2. Make use of an internal buffer based on `FilePrefetchBuffer` instead of using `ReadaheadRandomAccessFileReader`, to handle the user readahead requests (for both buffered and direct io cases). 3. Add `readahead_size` to db_bench. **Benchmarks:** https://gist.github.com/sagar0/53693edc320a18abeaeca94ca32f5737 For 1 MB readahead, Buffered IO performance improves by 28% and Direct IO performance improves by 50%. For 512KB readahead, Buffered IO performance improves by 30% and Direct IO performance improves by 67%. **Test Plan:** Updated `DBIteratorTest.ReadAhead` test to make sure that: - no new table readers are created for iterators on setting ReadOptions.readahead_size - At least "readahead" number of bytes are actually getting read on each iterator read. TODO later: - Use similar logic for compactions as well. - This ties in nicely with #4052 and paves the way for removing ReadaheadRandomAcessFile later. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5246 Differential Revision: D15107946 Pulled By: sagar0 fbshipit-source-id: 2c1149729ca7d779e4e8b7710ba6f4e8cbfd3bea 27 April 2019, 04:24:10 UTC
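For reference, a minimal sketch (not part of the PR) of how a caller opts into explicit iterator readahead; the function name and the 1 MB value are illustrative, and a valid `rocksdb::DB*` is assumed:
```
#include <memory>
#include <rocksdb/db.h>

// Sketch: ask the iterator to prefetch ~1 MB ahead instead of relying on
// automatic readahead ramp-up.
void ScanWithReadahead(rocksdb::DB* db) {
  rocksdb::ReadOptions ro;
  ro.readahead_size = 1 << 20;  // 1 MB, matching the benchmark setting above
  std::unique_ptr<rocksdb::Iterator> it(db->NewIterator(ro));
  for (it->SeekToFirst(); it->Valid(); it->Next()) {
    // consume it->key() / it->value()
  }
}
```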
8c7eb59 Fix ubsan failure in snapshot refresh (#5257) Summary: The newly added test CompactionJobTest.SnapshotRefresh sets the snapshot refresh period to 0 to stress the feature. This results in a large number of refresh events, which in turn results in an UBSAN failure when a bitwise shift operand goes beyond the uint64_t size. The patch fixes that by simplifying the shift logic to be done only by 2 bits after each refresh. Furthermore it verifies that the shift operation does not result in decreasing the refresh period. Testing: COMPILE_WITH_UBSAN=1 make -j32 compaction_job_test ./compaction_job_test --gtest_filter=CompactionJobTest.SnapshotRefresh Pull Request resolved: https://github.com/facebook/rocksdb/pull/5257 Differential Revision: D15106463 Pulled By: maysamyabandeh fbshipit-source-id: f2718898ea7ba4fa9f7e87b70cf98fe647c0de80 27 April 2019, 00:30:30 UTC
506e844 Refresh snapshot list during long compactions (#5099) Summary: Part of compaction cpu goes to processing the snapshot list; the larger the list, the bigger the overhead. Although the lifetime of most of the snapshots is much shorter than the lifetime of compactions, the compaction conservatively operates on the list of snapshots that it initially obtained. This patch allows the snapshot list to be updated via a callback if the compaction is taking long. This should let the compaction continue more efficiently with a much smaller snapshot list. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5099 Differential Revision: D15086710 Pulled By: maysamyabandeh fbshipit-source-id: 7649f56c3b6b2fb334962048150142a3bf9c1a12 26 April 2019, 01:17:22 UTC
6eb317b Option string/map/file can set env from object registry (#5237) Summary: - By providing the "env" field in any text-based options (i.e., string, map, or file), we can use `NewCustomObject` to deserialize the text value into an actual `Env` object. - Currently factory functions for `Env` registered with object registry should only return pointer to static `Env` objects. That's because `DBOptions::env` is a raw pointer so we cannot easily delegate cleanup. - Note I did not add `env` to `db_option_type_info`. It wasn't needed for (de)serialization, and I believe we don't want to do verification on `env`, even by checking name. That's because the user should be able to copy their DB from Linux to Windows, change envs, and not see an option verification error. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5237 Differential Revision: D15056360 Pulled By: siying fbshipit-source-id: 4b5f0b83297a5058f8949ec955dbf27d98d73d7e 25 April 2019, 18:35:09 UTC
084a3c6 add missing rocksdb_flush_cf in c (#5243) Summary: same to #5229 Pull Request resolved: https://github.com/facebook/rocksdb/pull/5243 Differential Revision: D15082800 Pulled By: siying fbshipit-source-id: f4a68a480db0e40e1ba7cf37e18b88e43dff7c08 25 April 2019, 18:25:43 UTC
da96f2f Close WAL files before deletion (#5233) Summary: Currently one thread in RocksDB keeps a WAL file open while another thread deletes it. Although the first thread never writes to the WAL again, it still tries to close it in the end. This is fine on POSIX, but can be problematic on other platforms, e.g. HDFS, etc.. It will either cause a lot of warning messages or throw exceptions. The solution is to let the second thread close the WAL before deleting it. RocksDB keeps the writers of the logs to delete in `logs_to_free_`, which is passed to `job_context` during `FindObsoleteFiles` (holding mutex). Then in `PurgeObsoleteFiles` (without mutex), these writers should close the logs. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5233 Differential Revision: D15032670 Pulled By: riversand963 fbshipit-source-id: c55e8a612db8cc2306644001a5e6d53842a8f754 25 April 2019, 17:11:41 UTC
66d8360 update history.md (#5245) Summary: update history.md for `BottommostLevelCompaction::kForceOptimized` to mention possible user impact. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5245 Differential Revision: D15073712 Pulled By: miasantreble fbshipit-source-id: d40f698c42e8a6368be4eac0a00d02279615edea 25 April 2019, 04:30:00 UTC
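A minimal sketch of how a user selects the `BottommostLevelCompaction::kForceOptimized` mode mentioned above when issuing a manual compaction; the helper name is illustrative and a valid `rocksdb::DB*` is assumed:
```
#include <rocksdb/db.h>
#include <rocksdb/options.h>

// Sketch: full-range manual compaction that lets RocksDB skip recompacting
// bottommost files that were just produced by the previous level's compaction.
rocksdb::Status CompactAllOptimized(rocksdb::DB* db) {
  rocksdb::CompactRangeOptions cro;
  cro.bottommost_level_compaction =
      rocksdb::BottommostLevelCompaction::kForceOptimized;
  return db->CompactRange(cro, /*begin=*/nullptr, /*end=*/nullptr);
}
```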
cd77d3c Don't call FindObsoleteFiles() in ~ColumnFamilyHandleImpl() if CF is not dropped (#5238) Summary: We have a DB with ~4k column families and ~70k files. On shutdown, destroying the 4k ColumnFamilyHandle-s takes over 2 minutes. Most of this time is spent in VersionSet::AddLiveFiles() called from FindObsoleteFiles() from ~ColumnFamilyHandleImpl(). It's just iterating over the list of files in memory. This seems completely unnecessary as no obsolete files are actually found since the CFs are not even dropped. This PR fixes that. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5238 Differential Revision: D15056342 Pulled By: siying fbshipit-source-id: 2aa342ef3770b4aa384ce81f8768e485480e4f08 25 April 2019, 00:11:36 UTC
aa56b7e secondary instance: add support for WAL tailing on `OpenAsSecondary` Summary: PR https://github.com/facebook/rocksdb/pull/4899 implemented the general framework for RocksDB secondary instances. This PR adds the support for WAL tailing in `OpenAsSecondary`, which means after the `OpenAsSecondary` call, the secondary is now able to see the primary's writes that are yet to be flushed. The secondary can see the primary's writes in the WAL up to the moment the `OpenAsSecondary` call starts. Differential Revision: D15059905 Pulled By: miasantreble fbshipit-source-id: 44f71f548a30b38179a7940165e138f622de1f10 24 April 2019, 19:08:44 UTC
1c8cbf3 Extend MultiGet batching to Transactions (#5210) Summary: MultiGet batching was implemented in #5011 in order to reduce CPU utilization when looking up multiple keys at once. This PR implements corresponding ```MultiGet``` and ```MultiGetSingleCFForUpdate``` in ```rocksdb::Transaction``` that call the underlying batching implementation. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5210 Differential Revision: D15048164 Pulled By: anand1976 fbshipit-source-id: c52f6043102ab0cbc723f4cba2a7b7d1767f6f52 23 April 2019, 21:11:26 UTC
a7d1031 Print smallest and largest seqno in Version::DebugString() for more details (#5231) Summary: In some cases, we want to know the smallest and largest sequence numbers of sstable files, to help us get more details. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5231 Differential Revision: D15038087 Pulled By: siying fbshipit-source-id: c473c1ca07b53efe2f1884fa1ecdc8686f455ed8 23 April 2019, 18:22:02 UTC
990b2f4 Fix compilation on db_bench_tool.cc on Windows (#5227) Summary: I needed this change to be able to build the v6.0.1 release on Windows. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5227 Differential Revision: D15033815 Pulled By: sagar0 fbshipit-source-id: 579f3b8e694c34c0d43527eb2fa37175e37f5911 23 April 2019, 18:16:51 UTC
72c8533 DBIter to use IteratorWrapper for inner iterator (#5214) Summary: It's hard to get DBIter to directly use InternalIterator::NextAndGetResult() because the code change would be complicated. Instead, use IteratorWrapper, where Next() is already using NextAndGetResult(). The performance gain is hard to measure because it is small and there is variation. I ran readseq many times, and there seems to be a 1% gain. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5214 Differential Revision: D15003635 Pulled By: siying fbshipit-source-id: 17af1965c409c2fe90cd85037fbd2c5a1364f82a 23 April 2019, 17:55:01 UTC
78a6e07 Fix compilation errors for 32bits/LITE/ios build. (#5220) Summary: When I build RocksDB for 32bits/LITE/iOS environment, some errors like the following. ` table/block_based_table_reader.cc:971:44: error: implicit conversion loses integer precision: 'uint64_t' (aka 'unsigned long long') to 'size_t' (aka 'unsigned long') [-Werror,-Wshorten-64-to-32] size_t block_size = props_block_handle.size(); ~~~~~~~~~~ ~~~~~~~~~~~~~~~~~~~^~~~~~ ./util/file_reader_writer.h:177:8: error: private field 'env_' is not used [-Werror,-Wunused-private-field] Env* env_; ^ ` Pull Request resolved: https://github.com/facebook/rocksdb/pull/5220 Differential Revision: D15023481 Pulled By: siying fbshipit-source-id: 1b5d121d3016f2b0a8a9a2cc1bd638479357f9f7 22 April 2019, 23:02:16 UTC
47fd574 Log file_creation_time table property (#5232) Summary: Log file_creation_time table property when a new table file is created. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5232 Differential Revision: D15033069 Pulled By: sagar0 fbshipit-source-id: aaac56a4c03a8f96c338cad1b0cdb7fbfb887647 22 April 2019, 22:30:07 UTC
8272a6d Optionally wait on bytes_per_sync to smooth I/O (#5183) Summary: The existing implementation does not guarantee bytes reach disk every `bytes_per_sync` when writing SST files, or every `wal_bytes_per_sync` when writing WALs. This can cause confusing behavior for users who enable this feature to avoid large syncs during flush and compaction, but then end up hitting them anyways. My understanding of the existing behavior is we used `sync_file_range` with `SYNC_FILE_RANGE_WRITE` to submit ranges for async writeback, such that we could continue processing the next range of bytes while that I/O is happening. I believe we can preserve that benefit while also limiting how far the processing can get ahead of the I/O, which prevents huge syncs from happening when the file finishes. Consider this `sync_file_range` usage: `sync_file_range(fd_, 0, static_cast<off_t>(offset + nbytes), SYNC_FILE_RANGE_WAIT_BEFORE | SYNC_FILE_RANGE_WRITE)`. Expanding the range to start at 0 and adding the `SYNC_FILE_RANGE_WAIT_BEFORE` flag causes any pending writeback (like from a previous call to `sync_file_range`) to finish before it proceeds to submit the latest `nbytes` for writeback. The latest `nbytes` are still written back asynchronously, unless processing exceeds I/O speed, in which case the following `sync_file_range` will need to wait on it. There is a second change in this PR to use `fdatasync` when `sync_file_range` is unavailable (determined statically) or has some known problem with the underlying filesystem (determined dynamically). The above two changes only apply when the user enables a new option, `strict_bytes_per_sync`. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5183 Differential Revision: D14953553 Pulled By: siying fbshipit-source-id: 445c3862e019fb7b470f9c7f314fc231b62706e9 22 April 2019, 18:51:39 UTC
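A minimal sketch of opting into the stricter sync cadence described above; the 1 MB values and the helper name are illustrative:
```
#include <rocksdb/options.h>

// Sketch: request incremental writeback every ~1 MB and (with the new option)
// wait on previously submitted ranges so the final sync stays small.
rocksdb::Options MakeSmoothSyncOptions() {
  rocksdb::Options options;
  options.bytes_per_sync = 1 << 20;      // SST file writes
  options.wal_bytes_per_sync = 1 << 20;  // WAL writes
  options.strict_bytes_per_sync = true;  // new option introduced by this change
  return options;
}
```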
df38c1c Add BlockBasedTableOptions::index_shortening (#5174) Summary: Introduce BlockBasedTableOptions::index_shortening to give users control over which key shortening techniques are used in building index blocks. Before this patch, both separators and successor keys were shortened in indexes. With this patch, the default is set to kShortenSeparators to only shorten the separators. Since each index block has many separators and only one successor (last key), the change should not have a negative impact on index block size. However it should prevent many unnecessary block loads where, due to the approximation introduced by the shortened successor, a seek would land us in the previous block and then fix it by moving to the next one. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5174 Differential Revision: D14884185 Pulled By: al13n321 fbshipit-source-id: 1b08bc8c03edcf09b6b8c16e9a7eea08ad4dd534 22 April 2019, 15:20:35 UTC
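A minimal sketch of selecting the shortening mode; the factory wiring is the usual BlockBasedTable setup and the function name is illustrative:
```
#include <rocksdb/options.h>
#include <rocksdb/table.h>

// Sketch: shorten only separators (the new default), leaving the successor key intact.
rocksdb::Options MakeIndexShorteningOptions() {
  rocksdb::BlockBasedTableOptions table_options;
  table_options.index_shortening =
      rocksdb::BlockBasedTableOptions::IndexShorteningMode::kShortenSeparators;
  rocksdb::Options options;
  options.table_factory.reset(rocksdb::NewBlockBasedTableFactory(table_options));
  return options;
}
```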
de76909 refactor SavePoints (#5192) Summary: Savepoints are assumed to be used in a stack-wise fashion (only the top element should be used), so they were stored by `WriteBatch` in a member variable `save_points` using an std::stack. Conceptually this is fine, but the implementation had a few issues: - the `save_points_` instance variable was a plain pointer to a heap- allocated `SavePoints` struct. The destructor of `WriteBatch` simply deletes this pointer. However, the copy constructor of WriteBatch just copied that pointer, meaning that copying a WriteBatch with active savepoints will very likely have crashed before. Now a proper copy of the savepoints is made in the copy constructor, and not just a copy of the pointer - `save_points_` was an std::stack, which defaults to `std::deque` for the underlying container. A deque is a bit over the top here, as we only need access to the most recent savepoint (i.e. stack.top()) but never any elements at the front. std::deque is rather expensive to initialize in common environments. For example, the STL implementation shipped with GNU g++ will perform a heap allocation of more than 500 bytes to create an empty deque object. Although the `save_points_` container is created lazily by RocksDB, moving from a deque to a plain `std::vector` is much more memory-efficient. So `save_points_` is now a vector. - `save_points_` was changed from a plain pointer to an `std::unique_ptr`, making ownership more explicit. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5192 Differential Revision: D15024074 Pulled By: maysamyabandeh fbshipit-source-id: 5b128786d3789cde94e46465c9e91badd07a25d7 20 April 2019, 03:33:04 UTC
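For context, a minimal sketch of the stack-wise savepoint usage this refactoring assumes; the keys, values, and function name are illustrative:
```
#include <rocksdb/write_batch.h>

// Sketch: savepoints are pushed and popped in LIFO order, so only the most
// recent one is ever consulted.
rocksdb::Status BuildBatch(rocksdb::WriteBatch* batch) {
  batch->Put("k1", "v1");
  batch->SetSavePoint();                // push a savepoint
  batch->Put("k2", "v2");
  return batch->RollbackToSavePoint();  // pop it, discarding the second Put
}
```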
dc64c2f Fix history to not include some features in 6.1 (#5224) Summary: Fix HISTORY.md by removing a few items from 6.1.1 history as they did not make into the 6.1.fb branch. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5224 Differential Revision: D15017030 Pulled By: sagar0 fbshipit-source-id: 090724d326d29168952e06dc1a5090c03fdd739e 19 April 2019, 20:00:53 UTC
c77aab5 Force read existing data during db repair (#5209) Summary: Setting read_opts.total_order_seek achieves this, even with a different prefix extractor. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5209 Differential Revision: D14980388 Pulled By: riversand963 fbshipit-source-id: 16527989a3d6b3e3ae8241c894d011326429d66e 19 April 2019, 18:55:13 UTC
5265c57 Remove a couple of non-public includes from public header file (#5219) Summary: Cleanup a couple of stray includes left by #5011. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5219 Differential Revision: D15007244 Pulled By: anand1976 fbshipit-source-id: 15ca1d4f977b5b60e99df3bfb8fc3db217d19bdd 19 April 2019, 18:10:33 UTC
7a73add Add some "inline" annotation to DBIter functions (#5217) Summary: My compiler doesn't inline DBIter::Next() into the arena-wrapped iterator, even though it is a direct forward. Adding this annotation makes it inlined. It might not always work, but inlining this function into the arena-wrapped iterator always feels like the right decision. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5217 Differential Revision: D15004086 Pulled By: siying fbshipit-source-id: a4cffd79c6fb092669a3a90633c9aa5e494f8a66 19 April 2019, 17:38:43 UTC
efa9487 Use creation_time or mtime when file_creation_time=0 (#5184) Summary: We found an issue in Periodic Compactions (introduced in #5166) where files were not being picked up for compactions as all the SST files created with older versions of RocksDB have `file_creation_time` as 0. (Note that `file_creation_time` is a new table property introduced in #5166). To address this, Periodic compactions now fall back to looking at the `creation_time` table property or the file's modification time (as given by the Env) when the `file_creation_time` table property is found to be 0. Here is how the file's modification time (and, in turn, the file age) is computed now: 1. Use the `file_creation_time` table property if it is > 0. 2. If not, then use the `creation_time` table property if it is > 0. 3. If not, then use the file's mtime stat metadata given by the underlying Env. Don't consider the file at all for compaction if the modification time cannot be correctly determined based on the above conditions. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5184 Differential Revision: D14907795 Pulled By: sagar0 fbshipit-source-id: 4bb2f3631f9a3e04470c674a1d13544584e1e56c 19 April 2019, 05:39:34 UTC
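The fallback order can be summarized with a small sketch (illustrative only, not the actual RocksDB code; `file_mtime` stands in for the Env stat result):
```
#include <cstdint>

// Sketch of the fallback order: file_creation_time, then creation_time,
// then the file's mtime; 0 means the age cannot be determined and the file
// is skipped for periodic compaction.
uint64_t EffectiveFileTime(uint64_t file_creation_time, uint64_t creation_time,
                           uint64_t file_mtime) {
  if (file_creation_time > 0) return file_creation_time;
  if (creation_time > 0) return creation_time;
  return file_mtime;
}
```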
3bdce20 reorganize history.md to list unreleased changes separately Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/5216 Differential Revision: D15003749 Pulled By: miasantreble fbshipit-source-id: a52c264e694cd7c55813be33ee22b4f3046b545a 18 April 2019, 21:55:57 UTC
d6862b3 Make ReadRangeDelAggregator::ShouldDelete() more inline friendly (#5202) Summary: Reorganize the code so that no function call into ReadRangeDelAggregator is needed if there is no range tombstone. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5202 Differential Revision: D14968155 Pulled By: siying fbshipit-source-id: 0bd61911293c7a27b4e1b8d57c66d0c4ad6a6a5f 18 April 2019, 19:27:25 UTC
01cfea6 Some small code changes to improve Next() (#5200) Summary: Several small changes for Next(): 1. Reduce branching by always incrementing local_stats_.next_count_ even if statistics is null. This should be faster than branching. 2. Replace ResetInternalKeysSkippedCounter() in Next() because the valid_ check is not needed in this case. 3. iter_->Valid() should always be true for the non-merge case. Remove this check. 4. Add an inline annotation. It ends up not being picked up by my compiler, but it shouldn't hurt. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5200 Differential Revision: D15000391 Pulled By: siying fbshipit-source-id: be97f61c708968234fb8e5cf272b5c2ac07dc4dd 18 April 2019, 19:18:11 UTC
992dfc7 Introduce InternalIteratorBase::NextAndGetResult() (#5197) Summary: In long scans, virtual function calls of Next(), Valid(), key() and value() are not trivial. By introducing NextAndGetResult(), Some of the Next(), Valid() and key() calls are consolidated into one virtual function call to reduce CPU. Also did some inline tricks and add some "final" randomly in some functions. Even without the "final" annotation, most Next() calls are inlined with -O3, but sometimes with a final it is inlined by O2 too. It doesn't hurt to add those final annotations. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5197 Differential Revision: D14945977 Pulled By: siying fbshipit-source-id: 7003969f9a5f1d5717f0bda503b91d19ba75ed88 18 April 2019, 18:12:39 UTC
6c2bf9e Add copyright headers per FB open-source checkup tool. (#5199) Summary: internal task: T35568575 Pull Request resolved: https://github.com/facebook/rocksdb/pull/5199 Differential Revision: D14962794 Pulled By: gfosco fbshipit-source-id: 93838ede6d0235eaecff90d200faed9a8515bbbe 18 April 2019, 17:55:01 UTC
392f6d4 Fix a bug in GetOverlappingInputsRangeBinarySearch (#5211) Summary: As title. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5211 Differential Revision: D14992018 Pulled By: riversand963 fbshipit-source-id: b5720ea4742029e2fb47ff6d9f8d9de006db4ed4 18 April 2019, 16:22:16 UTC
5b7e09b VersionSet: optimize GetOverlappingInputsRangeBinarySearch (#4987) Summary: `GetOverlappingInputsRangeBinarySearch` first uses binary search to find an index in the given range `[begin, end]`. But after finding the index, it uses linear search to find the `start_index` and `end_index`, so the search process degrades to linear time. This change optimizes the search process with the following changes: - use `std::lower_bound` and `std::upper_bound` to get `lg(n)` search complexity. - use a uniform lambda for the search process. - simplify the process for `within_interval` true or false. - remove the functions `ExtendFileRangeWithinInterval` and `ExtendFileRangeOverlappingInterval`. Signed-off-by: JiYou <jiyou09@gmail.com> Pull Request resolved: https://github.com/facebook/rocksdb/pull/4987 Differential Revision: D14984192 Pulled By: riversand963 fbshipit-source-id: fae4b8e59a21b7e350718d60cdc94dd55ac81e89 18 April 2019, 01:15:20 UTC
248b6b5 rename variable to avoid shadowing (#5204) Summary: this PR fixes the following compile warning: ``` db/memtable.cc: In member function ‘virtual void rocksdb::MemTableIterator::Seek(const rocksdb::Slice&)’: db/memtable.cc:321:22: error: declaration of ‘user_key’ shadows a member of 'this' [-Werror=shadow] Slice user_key(ExtractUserKey(k)); ^ db/memtable.cc: In member function ‘virtual void rocksdb::MemTableIterator::SeekForPrev(const rocksdb::Slice&)’: db/memtable.cc:338:22: error: declaration of ‘user_key’ shadows a member of 'this' [-Werror=shadow] Slice user_key(ExtractUserKey(k)); ^ ``` Pull Request resolved: https://github.com/facebook/rocksdb/pull/5204 Differential Revision: D14970160 Pulled By: miasantreble fbshipit-source-id: 388eb089f90c4528cc6d615dd4607fb53ceac705 17 April 2019, 17:15:05 UTC
baa5302 Avoid double-compacting data in bottom level in manual compactions (#5138) Summary: Depending on the config, manual compaction (leveled compaction style) does following compactions: L0->L1 L1->L2 ... Ln-1 -> Ln Ln -> Ln The final Ln -> Ln compaction is partly unnecessary as it recompacts all the files that were just generated by the Ln-1 -> Ln. We should avoid recompacting such files. This rule should be applied to Lmax only. Resolves issue https://github.com/facebook/rocksdb/issues/4995 Pull Request resolved: https://github.com/facebook/rocksdb/pull/5138 Differential Revision: D14940106 Pulled By: miasantreble fbshipit-source-id: 8d3cf5507a17e76f3333cfd4bac5256d005636e5 17 April 2019, 06:32:20 UTC
d9280ff Add back NewEmptyIterator (#5203) Summary: #4905 removed the implementation of `NewEmptyIterator` but kept its declaration in the public header. This breaks some systems that depend on RocksDB if the systems use `NewEmptyIterator`. Therefore, add it back to fix. cc maysamyabandeh please remind me if I miss anything here. Thanks Pull Request resolved: https://github.com/facebook/rocksdb/pull/5203 Differential Revision: D14968382 Pulled By: riversand963 fbshipit-source-id: 5fb86e99c8cfaf9f7a9473cdb1355d7558ff6e01 17 April 2019, 03:28:05 UTC
beb44ec WriteBufferManager's dummy entry size to block cache 1MB -> 256KB (#5175) Summary: A dummy cache size of 1MB is too large for small block sizes. Our GetDefaultCacheShardBits() uses min_shard_size = 512L * 1024L to determine the number of shards, so 1MB will exceed the size of the whole shard and make the cache exceed the budget. Change it to 256KB accordingly. There shouldn't be an obvious performance impact, since inserting a cache entry every 256KB of memtable inserts is still infrequent enough. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5175 Differential Revision: D14954289 Pulled By: siying fbshipit-source-id: 2c275255c1ac3992174e06529e44c55538325c94 16 April 2019, 19:03:07 UTC
f1239d5 Avoid per-key upper bound check in BlockBasedTableIterator (#5142) Summary: This is the second attempt at #5101. Original commit message: `BlockBasedTableIterator` avoids reading the next block on `Next()` if it detects the iterator will be out of bound, by checking against the index key. The optimization was added in #2239, and at the time it only checked the bound per block. It seems a later change made it a per-key check, which introduced unnecessary key comparisons. This patch comes with two fixes: Fix 1: To optimize checking for bounds, we need to compare the bounds with the index key as well. However BlockBasedTableIterator doesn't know whether its index iterator is internally using user keys or internal keys. The patch fixes that by extending InternalIterator with a user_key() function that is overridden in IndexBlockIter. Fix 2: In #5101 we return `IsOutOfBound()=true` when the block index key is out of bound. But the index key can be larger than the smallest key of the next file on the level. That file can be within the upper bound and should not be filtered out. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5142 Differential Revision: D14907113 Pulled By: siying fbshipit-source-id: ac95775c5b4e7b700f76ab43e39f45402c98fbfb 16 April 2019, 18:37:47 UTC
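For context, a minimal sketch of the bounded-scan pattern that this upper-bound check speeds up; the helper name is illustrative and a valid `rocksdb::DB*` is assumed:
```
#include <memory>
#include <rocksdb/db.h>

// Sketch: iterate all keys strictly below `upper`; the table iterator can stop
// fetching blocks once the index key shows the bound has been reached.
void ScanUpTo(rocksdb::DB* db, const rocksdb::Slice& upper) {
  rocksdb::ReadOptions ro;
  ro.iterate_upper_bound = &upper;  // must outlive the iterator
  std::unique_ptr<rocksdb::Iterator> it(db->NewIterator(ro));
  for (it->SeekToFirst(); it->Valid(); it->Next()) {
    // all returned keys compare < upper
  }
}
```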
71a82a0 Consolidating WAL creation which currently has duplicate logic in db_impl_write.cc and db_impl_open.cc (#5188) Summary: Right now, two separate pieces of code are used to create WAL files in DBImpl::Open function of db_impl_open.cc and DBImpl::SwitchMemtable function of db_impl_write.cc. This code change simply creates 1 function called DBImpl::CreateWAL in db_impl_open.cc which is used to replace existing WAL creation logic in DBImpl::Open and DBImpl::SwitchMemtable. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5188 Differential Revision: D14942832 Pulled By: vjnadimpalli fbshipit-source-id: d49230e04c36176015c8c1b422575872f92157fb 16 April 2019, 01:51:04 UTC
3e63e55 Fix MultiGet ASSERT bug when passing unsorted result (#5195) Summary: Found this when test driving the new MultiGet. If you pass an unsorted result with sorted_result = false you'll incorrectly trigger the ASSERT even though we'll sort down below. I've also added a simple test covering the sorted_result=true/false scenarios, copied from MultiGetSimple. anand1976 Pull Request resolved: https://github.com/facebook/rocksdb/pull/5195 Differential Revision: D14935475 Pulled By: yizhang82 fbshipit-source-id: 1d2af5e3a003847d965066a16e3b19da68acf170 15 April 2019, 18:35:21 UTC
b70967a db_bench: support seek to non-exist prefix (#5163) Summary: Add `--seek_missing_prefix` flag to db_bench to allow benchmarking seeking to non-existing prefix. Usage example: ``` ./db_bench --db=/dev/shm/db_bench --use_existing_db=false --benchmarks=fillrandom --num=100000000 --prefix_size=9 --keys_per_prefix=10 ./db_bench --db=/dev/shm/db_bench --use_existing_db=true --benchmarks=seekrandom --disable_auto_compactions=true --num=100000000 --prefix_size=9 --keys_per_prefix=10 --reads=1000 --prefix_same_as_start=true --seek_missing_prefix=true ``` Also adding `--total_order_seek` and `--prefix_same_as_start` flags. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5163 Differential Revision: D14935724 Pulled By: riversand963 fbshipit-source-id: 7c41023f007febe373eb1589861f215432a9e18a 15 April 2019, 17:54:58 UTC
b5cad5c Update history and version to 6.1.1 (#5171) Summary: Including latest fixes. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5171 Differential Revision: D14875157 Pulled By: gfosco fbshipit-source-id: 86ec7ee3553a9b25ab71ed98966ce08a16322e2c 15 April 2019, 17:49:38 UTC
8295d36 Improve transaction lock details (#5193) Summary: This branch contains two small improvements: * Create `LockMap` entries using `std::make_shared`. This saves one heap allocation per LockMap entry but also locates the control block and the LockMap object closely together in memory, which can help with caching * Reorder the members of `TrackedTrxInfo`, so that the resulting struct uses less memory (at least on 64bit systems) Pull Request resolved: https://github.com/facebook/rocksdb/pull/5193 Differential Revision: D14934536 Pulled By: maysamyabandeh fbshipit-source-id: f7b49812bb4b6029eef9d131e7cd56260df5b28e 15 April 2019, 17:44:03 UTC
29111e9 Add bounds check in FilePickerMultiGet::PrepareNextLevel() (#5189) Summary: Add bounds check when looping through empty levels in FilePickerMultiGet Pull Request resolved: https://github.com/facebook/rocksdb/pull/5189 Differential Revision: D14925334 Pulled By: anand1976 fbshipit-source-id: 65d53247cf443153e28ce2b8b753fa51c6ae4566 13 April 2019, 01:05:09 UTC
cca141e Fix crash with memtable prefix bloom and key out of prefix extractor domain (#5190) Summary: Before using the prefix extractor, `InDomain()` should be checked. The uses in memtable.cc didn't check `InDomain()`. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5190 Differential Revision: D14923773 Pulled By: miasantreble fbshipit-source-id: b3ad60bcca5f3a1a2b929a6eb34b0b7ba6326f04 13 April 2019, 00:07:49 UTC
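To illustrate why the check matters, here is a sketch (not from the PR) of a fixed-length prefix extractor whose `InDomain()` rejects keys that are too short; calling `Transform()` on such keys without the check is exactly the crash scenario above:
```
#include <rocksdb/slice.h>
#include <rocksdb/slice_transform.h>

// Sketch: 4-byte prefix extractor; keys shorter than 4 bytes are out of domain
// and must not be passed to Transform().
class FourBytePrefix : public rocksdb::SliceTransform {
 public:
  const char* Name() const override { return "FourBytePrefix"; }
  rocksdb::Slice Transform(const rocksdb::Slice& key) const override {
    return rocksdb::Slice(key.data(), 4);
  }
  bool InDomain(const rocksdb::Slice& key) const override {
    return key.size() >= 4;
  }
};
```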
d655a3a Remove extraneous call to TrackKey (#5173) Summary: In `PessimisticTransaction::TryLock`, we were calling `TrackKey` even when assume_tracked=true, which defeats the purpose of assume_tracked. Remove this. For keys that are already tracked, TrackKey will actually bump some counters (num_reads/num_writes) which are consumed in `TransactionBaseImpl::GetTrackedKeysSinceSavePoint`, and this is used to determine which keys were tracked since the last savepoint. I believe this functionality should still work, since I think the user should not call GetForUpdate/Put(assume_tracked=true) across savepoints, and if they do, they should not expect the Put(assume_tracked=true) to show up as a tracked key in the second savepoint. This is another 2-3% cpu improvement. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5173 Differential Revision: D14883809 Pulled By: lth fbshipit-source-id: 7d09f0772da422384af0519773e310c22b0cbca3 12 April 2019, 23:37:12 UTC
fe642cb WritePrepared: fix race condition in reading batch with duplicate keys (#5147) Summary: When ReadOption doesn't specify a snapshot, WritePrepared::Get used kMaxSequenceNumber to avoid the cost of creating a new snapshot object (that requires sync over db_mutex). This creates a race condition if it is reading from the writes of a transaction that had duplicate keys: each instance of the duplicate key is inserted with a different sequence number, and depending on the ordering the ::Get might skip the newer one and read the older one that is obsolete. The patch fixes that by using the last published seq as the snapshot sequence number. It also adds a check after the read is done to ensure that max_evicted_seq has not advanced past the aforementioned seq, which is a very unlikely event. If it did, then the read is not valid since the seq is not backed by an actual snapshot to let IsInSnapshot handle that properly when an overlapping commit is evicted from the commit cache. A unit test is added to reproduce the race condition with duplicate keys. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5147 Differential Revision: D14758815 Pulled By: maysamyabandeh fbshipit-source-id: a56915657132cf6ba5e3f5ea1b5d78c803407719 12 April 2019, 21:40:41 UTC
1966a7c Expose JavaAPI for getting the filter policy of a BlockBasedTableConfig (#5186) Summary: I would like to be able to read out the current Filter that has been set (or not) for a BlockBasedTableConfig. Added one public method to BlockBasedTableConfig: public Filter filterPolicy() { return filterPolicy; } Pull Request resolved: https://github.com/facebook/rocksdb/pull/5186 Differential Revision: D14921415 Pulled By: siying fbshipit-source-id: 2a63c8685480197862b49fc48916c757cd6daf95 12 April 2019, 21:01:36 UTC
85b2bde Still implement StatisticsImpl::measureTime() (#5181) Summary: Since Statistics::measureTime() is deprecated, StatisticsImpl::measureTime() is not implemented. We realized that users might have a wrapped Statistics implementation in which measureTime() is implemented as forwarded to StatisticsImpl, and causes assert failure. In order to make the change less intrusive, we implement StatisticsImpl::measureTime(). We will revisit whether we need to remove it after several releases. Also, add a test to make sure that a Statistics implementation using the old interface still works. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5181 Differential Revision: D14907089 Pulled By: siying fbshipit-source-id: 29b6202fd04e30ed6f6adcaeb1000e87f10d1e1a 12 April 2019, 18:00:35 UTC
3189398 Fix bugs detected by clang analyzer (#5185) Summary: as titled. False positive included, fixed anyway to make the check pass. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5185 Differential Revision: D14909384 Pulled By: riversand963 fbshipit-source-id: dc5177e72b1929ccfd6175a60e2cd7bdb9bd80f3 12 April 2019, 17:45:56 UTC
f49e12b Added missing table properties in log (#5168) Summary: When a new SST file is created via flush or compaction, we dump out the table properties, however only a few table properties are logged. The change here is to log all the table properties Pull Request resolved: https://github.com/facebook/rocksdb/pull/5168 Differential Revision: D14876928 Pulled By: vjnadimpalli fbshipit-source-id: 1aca42ad00f9f650761d39e187f8beeb8700149b 11 April 2019, 21:33:49 UTC
fefd4b9 Introduce a new MultiGet batching implementation (#5011) Summary: This PR introduces a new MultiGet() API, with the underlying implementation grouping keys based on SST file and batching lookups in a file. The reason for the new API is twofold - the definition allows callers to allocate storage for status and values on stack instead of std::vector, as well as return values as PinnableSlices in order to avoid copying, and it keeps the original MultiGet() implementation intact while we experiment with batching. Batching is useful when there is some spatial locality to the keys being queries, as well as larger batch sizes. The main benefits are due to - 1. Fewer function calls, especially to BlockBasedTableReader::MultiGet() and FullFilterBlockReader::KeysMayMatch() 2. Bloom filter cachelines can be prefetched, hiding the cache miss latency The next step is to optimize the binary searches in the level_storage_info, index blocks and data blocks, since we could reduce the number of key comparisons if the keys are relatively close to each other. The batching optimizations also need to be extended to other formats, such as PlainTable and filter formats. This also needs to be added to db_stress. Benchmark results from db_bench for various batch size/locality of reference combinations are given below. Locality was simulated by offsetting the keys in a batch by a stride length. Each SST file is about 8.6MB uncompressed and key/value size is 16/100 uncompressed. To focus on the cpu benefit of batching, the runs were single threaded and bound to the same cpu to eliminate interference from other system events. The results show a 10-25% improvement in micros/op from smaller to larger batch sizes (4 - 32). Batch Sizes 1 | 2 | 4 | 8 | 16 | 32 Random pattern (Stride length 0) 4.158 | 4.109 | 4.026 | 4.05 | 4.1 | 4.074 - Get 4.438 | 4.302 | 4.165 | 4.122 | 4.096 | 4.075 - MultiGet (no batching) 4.461 | 4.256 | 4.277 | 4.11 | 4.182 | 4.14 - MultiGet (w/ batching) Good locality (Stride length 16) 4.048 | 3.659 | 3.248 | 2.99 | 2.84 | 2.753 4.429 | 3.728 | 3.406 | 3.053 | 2.911 | 2.781 4.452 | 3.45 | 2.833 | 2.451 | 2.233 | 2.135 Good locality (Stride length 256) 4.066 | 3.786 | 3.581 | 3.447 | 3.415 | 3.232 4.406 | 4.005 | 3.644 | 3.49 | 3.381 | 3.268 4.393 | 3.649 | 3.186 | 2.882 | 2.676 | 2.62 Medium locality (Stride length 4096) 4.012 | 3.922 | 3.768 | 3.61 | 3.582 | 3.555 4.364 | 4.057 | 3.791 | 3.65 | 3.57 | 3.465 4.479 | 3.758 | 3.316 | 3.077 | 2.959 | 2.891 dbbench command used (on a DB with 4 levels, 12 million keys)- TEST_TMPDIR=/dev/shm numactl -C 10 ./db_bench.tmp -use_existing_db=true -benchmarks="readseq,multireadrandom" -write_buffer_size=4194304 -target_file_size_base=4194304 -max_bytes_for_level_base=16777216 -num=12000000 -reads=12000000 -duration=90 -threads=1 -compression_type=none -cache_size=4194304000 -batch_size=32 -disable_auto_compactions=true -bloom_bits=10 -cache_index_and_filter_blocks=true -pin_l0_filter_and_index_blocks_in_cache=true -multiread_batched=true -multiread_stride=4 Pull Request resolved: https://github.com/facebook/rocksdb/pull/5011 Differential Revision: D14348703 Pulled By: anand1976 fbshipit-source-id: 774406dab3776d979c809522a67bedac6c17f84b 11 April 2019, 21:28:26 UTC
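A minimal sketch of calling the new batched overload (stack-friendly statuses and PinnableSlice values); the helper name is illustrative and a valid DB and column family handle are assumed:
```
#include <vector>
#include <rocksdb/db.h>

// Sketch: one batched lookup; statuses[i] and values[i] correspond to keys[i].
void BatchedLookup(rocksdb::DB* db, rocksdb::ColumnFamilyHandle* cf,
                   const std::vector<rocksdb::Slice>& keys) {
  std::vector<rocksdb::PinnableSlice> values(keys.size());
  std::vector<rocksdb::Status> statuses(keys.size());
  db->MultiGet(rocksdb::ReadOptions(), cf, keys.size(), keys.data(),
               values.data(), statuses.data());
}
```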
ed9f5e2 Change OptimizeForPointLookup() and OptimizeForSmallDb() (#5165) Summary: Change the behavior of OptimizeForSmallDb() so that it is less likely to go out of memory. Change the behavior of OptimizeForPointLookup() to take advantage of the new memtable whole key filter, and move away from prefix extractor as well as hash-based indexing, as they are prone to misuse. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5165 Differential Revision: D14880709 Pulled By: siying fbshipit-source-id: 9af30e3c9e151eceea6d6b38701a58f1f9fb692d 11 April 2019, 17:45:36 UTC
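For reference, a minimal sketch of the two presets whose behavior this change adjusts; the 64 MB block cache size and the function name are illustrative:
```
#include <rocksdb/options.h>

// Sketch: apply the small-DB preset, then tune for Get()-heavy workloads.
rocksdb::Options MakePointLookupOptions() {
  rocksdb::Options options;
  options.OptimizeForSmallDb();
  options.OptimizeForPointLookup(/*block_cache_size_mb=*/64);
  return options;
}
```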
d3d20dc Periodic Compactions (#5166) Summary: Introducing Periodic Compactions. This feature allows all the files in a CF to be periodically compacted. It could help in catching any corruptions that could creep into the DB proactively as every file is constantly getting re-compacted. And also, of course, it helps to cleanup data older than certain threshold. - Introduced a new option `periodic_compaction_time` to control how long a file can live without being compacted in a CF. - This works across all levels. - The files are put in the same level after going through the compaction. (Related files in the same level are picked up as `ExpandInputstoCleanCut` is used). - Compaction filters, if any, are invoked as usual. - A new table property, `file_creation_time`, is introduced to implement this feature. This property is set to the time at which the SST file was created (and that time is given by the underlying Env/OS). This feature can be enabled on its own, or in conjunction with `ttl`. It is possible to set a different time threshold for the bottom level when used in conjunction with ttl. Since `ttl` works only on 0 to last but one levels, you could set `ttl` to, say, 1 day, and `periodic_compaction_time` to, say, 7 days. Since `ttl < periodic_compaction_time` all files in last but one levels keep getting picked up based on ttl, and almost never based on periodic_compaction_time. The files in the bottom level get picked up for compaction based on `periodic_compaction_time`. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5166 Differential Revision: D14884441 Pulled By: sagar0 fbshipit-source-id: 408426cbacb409c06386a98632dcf90bfa1bda47 11 April 2019, 02:31:18 UTC
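A minimal sketch of the ttl + periodic compaction combination described above, using the option name as it appears in released versions (`periodic_compaction_seconds`); the commit message refers to it as `periodic_compaction_time`, and the 1-day/7-day values are just the example from the text:
```
#include <rocksdb/options.h>

// Sketch: ttl drives compactions for levels 0..last-but-one, while periodic
// compaction eventually picks up files in the bottom level as well.
rocksdb::Options MakePeriodicCompactionOptions() {
  rocksdb::Options options;
  options.ttl = 1 * 24 * 60 * 60;                          // 1 day
  options.periodic_compaction_seconds = 7 * 24 * 60 * 60;  // 7 days
  return options;
}
```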
ef0fc1b Reduce copies of LockInfo (#5172) Summary: The LockInfo struct is not easy to copy because it contains std::vector. Reduce copies by using move constructor and `unordered_map::emplace`. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5172 Differential Revision: D14882053 Pulled By: lth fbshipit-source-id: 93999ec6ab1a5841fb5115abb764b6c1831a6de1 10 April 2019, 22:58:58 UTC
313e877 fix reading encrypted files beyond file boundaries (#5160) Summary: This fix should help reading from encrypted files if the file-to-be-read is smaller than expected. For example, when using the encrypted env and making it read a journal file of exactly 0 bytes size, the encrypted env code crashes with SIGSEGV in its Decrypt function, as there is no check if the read attempts to read over the file's boundaries (as specified originally by the `dataSize` parameter). The most important problem this patch addresses is however that there is no size underflow check in `CTREncryptionProvider::CreateCipherStream`: The stream to be read will always be initialized to a size of `prefix.size() - (2 * blockSize)`. If the prefix however is smaller than twice the block size, this will obviously assume a _very_ large stream and read over the bounds. The patch adds a check here as follows: // If the prefix is smaller than twice the block size, we would below read a // very large chunk of the file (and very likely read over the bounds) assert(prefix.size() >= 2 * blockSize); if (prefix.size() < 2 * blockSize) { return Status::Corruption("Unable to read from file " + fname + ": read attempt would read beyond file bounds"); } so embedders can catch the error in their release builds. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5160 Differential Revision: D14834633 Pulled By: sagar0 fbshipit-source-id: 47aa39a6db8977252cede054c7eb9a663b9a3484 08 April 2019, 21:57:25 UTC
0bb5556 Consolidate hash function used for non-persistent data in a new function (#5155) Summary: Create new functions NPHash64() and GetSliceNPHash64(), which are currently implemented using murmurhash. Replace the current direct calls of murmurhash() with the new functions where the hash results are not used in the on-disk format. This will make it easier to try out or switch to alternative functions in the uses where data format compatibility doesn't need to be considered. This part shouldn't have any performance impact. Also, the sharded cache hash function is changed to the new format, because it falls into this category. It doesn't show visible performance impact in db_bench results. CPU shown by perf is increased from about 0.2% to 0.4% in an extreme benchmark setting (4KB blocks, no-compression, everything cached in block cache). We've known that the current hash function used, our own Hash(), has a serious hash quality problem. It can generate a lot of conflicts with similar input. In this use case, it means extra lock contention for reads from the same file. This slight CPU regression is worth it to me to counter the potential bad performance with hot keys. And hopefully this will get further improved in the future with a better hash function. cache_test's condition is relaxed a little bit, too. The new hash is slightly more skewed in this use case, but I manually checked the data and see the hash results are still in a reasonable range. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5155 Differential Revision: D14834821 Pulled By: siying fbshipit-source-id: ec9a2c0a2f8ae4b54d08b13a5c2e9cc97aa80cb5 08 April 2019, 20:32:06 UTC
de00f28 Refactor ExternalSSTFileTest (#5129) Summary: remove an unnecessary function `GenerateAndAddFileIngestBehind` Pull Request resolved: https://github.com/facebook/rocksdb/pull/5129 Differential Revision: D14686710 Pulled By: riversand963 fbshipit-source-id: 5698ae63e10f8ef76c2da753bbb07a36024ac065 08 April 2019, 18:16:34 UTC
39c6c5f Expose DB methods to lock and unlock the WAL (#5146) Summary: Expose DB methods to lock and unlock the WAL. These methods are intended to be used by MyRocks in order to obtain WAL coordinates in a consistent way. The usage scenario is the following: MySQL has performance_schema.log_status which provides information that enables a backup tool to copy the required log files without locking for the duration of the copy. To populate this table MySQL does the following: 1. Lock the binary log. Transactions are not allowed to commit now 2. Save the binary log coordinates 3. Walk through the storage engines and lock writes on each engine. For InnoDB, the redo log is locked. For MyRocks, the WAL should be locked. 4. Ask the storage engines for their coordinates. InnoDB reports its current LSN and checkpoint LSN. MyRocks should report active WAL file names and sizes. 5. Release the storage engines' locks 6. Unlock the binary log The backup tool will then use this information to copy InnoDB, RocksDB and MySQL binary logs up to the specified positions to end up with a consistent DB state after restore. Currently, RocksDB allows obtaining the list of WAL files. The only missing bit is a method to lock writes to the WAL files. The LockWAL method must flush the WAL in order for the reported size to be accurate (GetSortedWALFiles uses a file system stat call to return the file size); also, since the backup tool is going to copy the WAL, it is better for it to be flushed. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5146 Differential Revision: D14815447 Pulled By: maysamyabandeh fbshipit-source-id: eec9535a6025229ed471119f19fe7b3d8ae888a3 06 April 2019, 13:40:36 UTC
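A hedged sketch of how a backup tool might use the new methods together with GetSortedWalFiles; error handling and the actual copying are omitted, and `db` is assumed to be an already-open instance.
```
#include <cstdio>

#include "rocksdb/db.h"
#include "rocksdb/transaction_log.h"

// Report consistent WAL coordinates while writers are blocked.
void ReportWalCoordinates(rocksdb::DB* db) {
  rocksdb::Status s = db->LockWAL();  // flushes the WAL and blocks writers
  if (!s.ok()) return;

  rocksdb::VectorLogPtr wal_files;
  s = db->GetSortedWalFiles(wal_files);
  if (s.ok()) {
    for (const auto& wal : wal_files) {
      // Sizes are accurate because LockWAL() flushed the WAL first.
      std::printf("%s %llu bytes\n", wal->PathName().c_str(),
                  static_cast<unsigned long long>(wal->SizeFileBytes()));
    }
  }
  db->UnlockWAL();  // resume normal write traffic
}
```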
479c566 Add final annotations to some cache functions (#5156) Summary: Cache functions heavily use virtual functions. Add some "final" annotations to give compilers more information to optimize with. The compiler doesn't seem to take advantage of it so far, but it doesn't hurt. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5156 Differential Revision: D14814837 Pulled By: siying fbshipit-source-id: 4423f58eafc93f7dd3c5f04b02b5c993dba2ea94 05 April 2019, 23:08:01 UTC
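A generic illustration of the annotation (hypothetical class names, not the actual RocksDB cache classes): marking an override `final` tells the compiler that no further override exists, which can enable devirtualization of calls made through the derived type.
```
#include <cstddef>

class CacheShardBase {
 public:
  virtual ~CacheShardBase() = default;
  virtual size_t GetUsage() const = 0;
};

// `final` on the class and on the override closes the hierarchy here.
class LRUCacheShardLike final : public CacheShardBase {
 public:
  size_t GetUsage() const final { return usage_; }

 private:
  size_t usage_ = 0;
};
```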
8d1e521 Removed const fields in copyable classes (#5095) Summary: This fixed the compile error in Clang-8: ``` error: explicitly defaulted copy assignment operator is implicitly deleted [-Werror,-Wdefaulted-function-deleted] ``` Pull Request resolved: https://github.com/facebook/rocksdb/pull/5095 Differential Revision: D14811961 Pulled By: riversand963 fbshipit-source-id: d935d1f85a4e8694dca10033fb5af92d8777eca0 05 April 2019, 22:40:30 UTC
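A minimal illustration of the class of error being fixed, with hypothetical names: a const data member makes the defaulted copy assignment operator deleted, which Clang 8 flags under -Wdefaulted-function-deleted.
```
// The problematic pattern (shown as a comment so the example compiles cleanly
// even with -Werror):
//
//   struct Broken {
//     const int level;
//     Broken& operator=(const Broken&) = default;  // defaulted as deleted
//   };
//
// The fix applied by the PR: drop the const qualifier so the defaulted copy
// assignment operator stays well-formed.
struct Fixed {
  int level = 0;
  Fixed& operator=(const Fixed&) = default;
};
```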
59ef2ba Evict the uncompression dictionary from the block cache upon table close (#5150) Summary: The uncompression dictionary object has a Statistics pointer that might dangle if the database is closed. This patch evicts the dictionary from the block cache when a table is closed, similarly to how index and filter readers are handled. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5150 Differential Revision: D14782422 Pulled By: ltamasi fbshipit-source-id: 0cec9336c742c479aa92206e04521767f1aa9622 04 April 2019, 23:21:12 UTC
306b9ad Add missing methods to EnvWrapper, and more wrappers in Env.h (#5131) Summary: - Some newer methods of Env weren't wrapped in EnvWrapper. Fixed. - Added more wrapper classes similar to WritableFileWrapper: SequentialFileWrapper, RandomAccessFileWrapper, RandomRWFileWrapper, DirectoryWrapper, LoggerWrapper. - Moved the code around a bit, removed some unused friendships, added some comments. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5131 Differential Revision: D14738932 Pulled By: al13n321 fbshipit-source-id: 99a9b1af28f2c629e7b7501389fa920b5ce30218 04 April 2019, 21:47:41 UTC
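A hedged sketch of the forwarding-wrapper pattern this commit extends to more Env-related classes; the interface below is a stand-in, not RocksDB's actual SequentialFile API.
```
#include <cstddef>
#include <cstdint>
#include <string>

// A wrapper forwards every virtual call to a wrapped target, so subclasses
// only need to override the methods they want to intercept.
class SequentialSource {
 public:
  virtual ~SequentialSource() = default;
  virtual bool Read(size_t n, std::string* out) = 0;
  virtual bool Skip(uint64_t n) = 0;
};

class SequentialSourceWrapper : public SequentialSource {
 public:
  explicit SequentialSourceWrapper(SequentialSource* target) : target_(target) {}
  bool Read(size_t n, std::string* out) override { return target_->Read(n, out); }
  bool Skip(uint64_t n) override { return target_->Skip(n); }

 private:
  SequentialSource* target_;  // not owned
};
```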
c06c4c0 Fix many bugs in log statement arguments (#5089) Summary: Annotate all of the logging functions to inform the compiler that these use printf-style formatting arguments. This allows the compiler to emit warnings if the format arguments are incorrect. This also fixes many problems reported now that format string checking is enabled. Many of these are simply mix-ups in the argument type (e.g, int vs uint64_t), but in several cases the wrong number of arguments were being passed in which can cause the code to crash. The primary motivation for this was to fix the log message in `DBImpl::SwitchMemtable()` which caused a segfault due to an extra %s format parameter with no argument supplied. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5089 Differential Revision: D14574795 Pulled By: simpkins fbshipit-source-id: 0921b03f0743652bf4ae21e414ff54b3bb65422a 04 April 2019, 19:12:11 UTC
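An illustrative example of the printf-format annotation described here (hypothetical macro and function names, not RocksDB's actual logging declarations): the attribute lets GCC and Clang check format arguments at compile time, turning the kind of mismatch that crashed DBImpl::SwitchMemtable() into a warning.
```
#include <cstdarg>
#include <cstdio>

// Argument 1 is the printf-style format string; variadic args start at 2.
#if defined(__GNUC__) || defined(__clang__)
#define PRINTF_LIKE(fmt_idx, first_arg_idx) \
  __attribute__((format(printf, fmt_idx, first_arg_idx)))
#else
#define PRINTF_LIKE(fmt_idx, first_arg_idx)
#endif

void InfoLog(const char* format, ...) PRINTF_LIKE(1, 2);

void InfoLog(const char* format, ...) {
  va_list ap;
  va_start(ap, format);
  std::vfprintf(stderr, format, ap);
  va_end(ap);
}

int main() {
  InfoLog("flushed %d memtables in %s\n", 2, "default");
  // InfoLog("flushed %d memtables in %s\n", 2);  // would now warn: missing arg
  return 0;
}
```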
f0edf9d #5145, rename port/dirent.h to port/port_dirent.h to avoid a compile error when using the port dir as the header output dir (#5152) Summary: Move port/dirent.h to port/port_dirent.h to avoid a compile error when using the port dir as the header output dir. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5152 Differential Revision: D14779409 Pulled By: siying fbshipit-source-id: d4162c47c979c6e8cc6a9e601802864ab3768ecb 04 April 2019, 18:38:19 UTC
75e8b6d Fix race condition in IteratorWithLocalStatistics (#5149) Summary: The ReadCallback was shared between all threads in IteratorWithLocalStatistics. A race condition was hence introduced with recent changes that change the content of ReadCallback. The patch fixes that by using a separate callback per thread. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5149 Differential Revision: D14761612 Pulled By: maysamyabandeh fbshipit-source-id: 814a316aed046c318cb90e22379a6e32ac528949 03 April 2019, 23:04:38 UTC
7441a0e WriteUnPrepared: fix ubsan complaint (#5148) Summary: UBSan complains that in the initialization of WriteUnpreparedTxnReadCallback a method of the child class is used before the parent class is constructed. The patch fixes that by making the aforementioned method static. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5148 Differential Revision: D14760098 Pulled By: maysamyabandeh fbshipit-source-id: cf19b7c1fdb5de0a54e62c1deebe09a0fa048ded 03 April 2019, 22:51:30 UTC
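A minimal illustration of the pattern UBSan flags and of the static-method fix, using hypothetical names rather than the actual WriteUnprepared classes: computing the base-constructor argument with a non-static member function of the derived class would use the object before its lifetime begins, whereas a static helper needs no object at all.
```
#include <cstdint>

class ReadCallbackBase {
 public:
  explicit ReadCallbackBase(uint64_t max_visible_seq)
      : max_visible_seq_(max_visible_seq) {}
  virtual ~ReadCallbackBase() = default;

 protected:
  uint64_t max_visible_seq_;
};

class TxnReadCallback : public ReadCallbackBase {
 public:
  TxnReadCallback(uint64_t snapshot_seq, uint64_t max_unprep_seq)
      // OK: CalcMaxVisibleSeq is static, so no constructed object is needed.
      : ReadCallbackBase(CalcMaxVisibleSeq(snapshot_seq, max_unprep_seq)) {}

 private:
  static uint64_t CalcMaxVisibleSeq(uint64_t snapshot_seq,
                                    uint64_t max_unprep_seq) {
    return snapshot_seq > max_unprep_seq ? snapshot_seq : max_unprep_seq;
  }
};
```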
ebb9b2e Fix the potential DB crash caused by calling EndTrace before StartTrace (#5130) Summary: Although the user should first call StartTrace to begin the RocksDB tracing function and then call EndTrace to stop the tracing process, the user can accidentally call EndTrace first. This causes a segmentation fault and crashes the DB instance. The issue is fixed by checking the pointer first. A test case was added in db_test2. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5130 Differential Revision: D14691420 Pulled By: zhichao-cao fbshipit-source-id: 3be13d2f944bc453728ef8eef67b68d7ad0939c8 03 April 2019, 20:26:34 UTC
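A hedged sketch of the defensive check described here (hypothetical types, not the actual DBImpl code): EndTrace must tolerate being called before StartTrace instead of dereferencing a null tracer.
```
#include <memory>

struct Tracer {
  void Close() {}
};

class TraceOwner {
 public:
  bool EndTrace() {
    if (!tracer_) {
      return false;  // EndTrace called before StartTrace: report, don't crash
    }
    tracer_->Close();
    tracer_.reset();
    return true;
  }

 private:
  std::unique_ptr<Tracer> tracer_;  // set by StartTrace(), may be null
};
```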
e8480d4 Add an assert to silence clang analyzer and fix variable shadowing (#5140) Summary: This PR addresses two open issues: 1. The clang analyzer is paranoid about db_ being nullptr after DB::Open calls in the test. See https://github.com/facebook/rocksdb/pull/5043#discussion_r271394579 Add an assert to keep clang happy. 2. PR https://github.com/facebook/rocksdb/pull/5049 introduced a variable shadowing: ``` db/db_iterator_test.cc: In constructor ‘rocksdb::DBIteratorWithReadCallbackTest_ReadCallback_Test::TestBody()::TestReadCallback::TestReadCallback(rocksdb::SequenceNumber)’: db/db_iterator_test.cc:2484:9: error: declaration of ‘max_visible_seq’ shadows a member of 'this' [-Werror=shadow] : ReadCallback(max_visible_seq) {} ^ ``` Pull Request resolved: https://github.com/facebook/rocksdb/pull/5140 Differential Revision: D14735497 Pulled By: miasantreble fbshipit-source-id: 3219ea75cf4ae04f64d889323f6779e84be98144 03 April 2019, 04:15:44 UTC
5234fc1 Mark logs with prepare in PreReleaseCallback (#5121) Summary: In the prepare phase of 2PC, the db promises to remember the prepared data for a possible future commit. To fulfill the promise, the prepared data must be persisted in the WAL so that it can be recovered after a crash. A log that contains a prepare batch that is not yet committed is marked so that it is not garbage collected before the transaction commits or rolls back. The bug was that the write to the log file and the marking of the file were not atomic, and WAL GC could have happened before the WAL log was actually marked. This patch moves the marking logic to PreReleaseCallback so that the WAL GC logic that joins both write threads sees the WAL write and WAL mark atomically. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5121 Differential Revision: D14665210 Pulled By: maysamyabandeh fbshipit-source-id: 1d66aeb1c66a296cb4899a5a20c4d40c59e4b534 02 April 2019, 22:17:47 UTC
26015f3 Add compression options to table properties (#5081) Summary: Since we are planning to use dictionary compression and to use different compression levels, it is quite useful to add compression options to TableProperties. For example, in MyRocks, if the feature is available, we can query from information_schema.rocksdb_sst_props to see if all sst files are converted to ZSTD dictionary compression. Resolves https://github.com/facebook/rocksdb/issues/4992 With this PR, the user can query table properties through the `GetPropertiesOfAllTables` API and get compression options as std::string: `window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0;` and table_properties->ToString() will also contain it: `# data blocks=1; # entries=13; # deletions=0; # merge operands=0; # range deletions=0; raw key size=143; raw average key size=11.000000; raw value size=39; raw average value size=3.000000; data block size=120; index block size (user-key? 0, delta-value? 0)=27; filter block size=0; (estimated) table size=147; filter policy name=N/A; prefix extractor name=nullptr; column family ID=0; column family name=default; comparator name=leveldb.BytewiseComparator; merge operator name=nullptr; property collectors names=[]; SST file compression algo=Snappy; SST file compression options=window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; ; creation time=1552946632; time stamp of earliest key=1552946632;` Pull Request resolved: https://github.com/facebook/rocksdb/pull/5081 Differential Revision: D14716692 Pulled By: miasantreble fbshipit-source-id: 7d2f2cf84e052bff876e71b4212cfdebf5be32dd 02 April 2019, 21:52:34 UTC
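A hedged usage sketch of reading the new per-SST compression options through the table-properties API; it assumes an already-open `db`, elides error handling, and the compression_options field name follows what this PR describes.
```
#include <cstdio>

#include "rocksdb/db.h"
#include "rocksdb/table_properties.h"

void DumpCompressionOptions(rocksdb::DB* db) {
  rocksdb::TablePropertiesCollection props;
  rocksdb::Status s = db->GetPropertiesOfAllTables(&props);
  if (!s.ok()) return;
  for (const auto& entry : props) {
    // entry.first is the SST file name, entry.second its TableProperties.
    std::printf("%s: compression=%s options=[%s]\n", entry.first.c_str(),
                entry.second->compression_name.c_str(),
                entry.second->compression_options.c_str());
  }
}
```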
14b3f68 WriteUnPrepared: less virtual in iterator callback (#5049) Summary: WriteUnPrepared adds a virtual function, MaxUnpreparedSequenceNumber, to ReadCallback, which returns 0 unless WriteUnPrepared is enabled and the transaction has uncommitted data written to the DB. Together with the snapshot sequence number, this determines the last sequence that is visible to reads. The patch clarifies the guarantees of the GetIterator API in WriteUnPrepared transactions and makes use of that to statically initialize the read callback and thus avoid the virtual call. Furthermore, it increases the minimum value for min_uncommitted from 0 to 1 as seq 0 is used only for last level keys that are committed in all snapshots. The following benchmark shows +0.26% higher throughput in the seekrandom benchmark. Benchmark: ./db_bench --benchmarks=fillrandom --use_existing_db=0 --num=1000000 --db=/dev/shm/dbbench ./db_bench --benchmarks=seekrandom[X10] --use_existing_db=1 --db=/dev/shm/dbbench --num=1000000 --duration=60 --seek_nexts=100 seekrandom [AVG 10 runs] : 20355 ops/sec; 225.2 MB/sec seekrandom [MEDIAN 10 runs] : 20425 ops/sec; 225.9 MB/sec ./db_bench_lessvirtual3 --benchmarks=seekrandom[X10] --use_existing_db=1 --db=/dev/shm/dbbench --num=1000000 --duration=60 --seek_nexts=100 seekrandom [AVG 10 runs] : 20409 ops/sec; 225.8 MB/sec seekrandom [MEDIAN 10 runs] : 20487 ops/sec; 226.6 MB/sec Pull Request resolved: https://github.com/facebook/rocksdb/pull/5049 Differential Revision: D14366459 Pulled By: maysamyabandeh fbshipit-source-id: ebaff8908332a5ae9af7defeadabcb624be660ef 02 April 2019, 21:47:16 UTC
d9d3cac Add a missing define to monitoring/iostats_context_imp.h (#5136) Summary: I think when PR https://github.com/facebook/rocksdb/pull/4889 added the `IOSTATS_CPU_TIMER_GUARD` define to this header file, the noop version in the `#else` branch was forgotten. Not sure if this is common, but on my MacOS machine it breaks my build Pull Request resolved: https://github.com/facebook/rocksdb/pull/5136 Differential Revision: D14727727 Pulled By: siying fbshipit-source-id: 1076e56bdbe6ecda01d461b371dabf7f1593a149 02 April 2019, 18:56:18 UTC
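A generic illustration of the pattern being fixed, with hypothetical macro names (not the actual contents of monitoring/iostats_context_imp.h): a feature-gated macro needs a no-op definition in the #else branch so that builds without the flag still compile, as reported for macOS.
```
#include <cstdio>

#ifdef EXAMPLE_PERF_ENABLED
#define EXAMPLE_TIMER_GUARD(name) std::printf("timing %s\n", #name)
#else
// The forgotten branch: without this no-op definition, any translation unit
// built without the feature flag fails to compile.
#define EXAMPLE_TIMER_GUARD(name) \
  do {                            \
  } while (0)
#endif

int main() {
  EXAMPLE_TIMER_GUARD(write_nanos);  // compiles whether or not the flag is set
  return 0;
}
```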
ebcc8ae Revert "Avoid per-key upper bound check in BlockBasedTableIterator (#5101)" (#5132) Summary: This reverts commit f29dc1b90641e7f44b14f932e3866c5840391cd5. In BlockBasedTableIterator, index_iter_->key() is sometimes a user key, so it is wrong to call ExtractUserKey() against it. This is a bug introduced by #5101. Temporarily revert the diff to keep the branch clean. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5132 Differential Revision: D14718584 Pulled By: siying fbshipit-source-id: 0ac55dc9b5dbc18c7809092146bdf7eb9364b9ad 02 April 2019, 17:00:38 UTC
fa1b558 Add LevelDB repository link in the Readme Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/5134 Differential Revision: D14719068 Pulled By: siying fbshipit-source-id: c09a544f06ff414dbe2f90792aaf2bb5b8550bee 02 April 2019, 01:19:09 UTC
120bc47 Add DBOptions::avoid_unnecessary_blocking_io to defer file deletions (#5043) Summary: Just like ReadOptions::background_purge_on_iterator_cleanup but for ColumnFamilyHandle instead of Iterator. In our use case we sometimes call ColumnFamilyHandle's destructor from low-latency threads, and sometimes it blocks the thread for a few seconds deleting the files. To avoid that, we can either offload ColumnFamilyHandle's destruction to a background thread on our side, or add this option on the rocksdb side. This PR does the latter, to be consistent with how we solve exactly the same problem for iterators using the background_purge_on_iterator_cleanup option. (EDIT: It's avoid_unnecessary_blocking_io now, and affects both CF drops and iterator destructors.) I'm not quite comfortable with having two separate options (background_purge_on_iterator_cleanup and background_purge_on_cf_cleanup) for such a rarely used thing. Maybe we should merge them? Rename background_purge_on_cf_cleanup to something like delete_files_on_background_threads_only or avoid_blocking_io_in_unexpected_places, and make iterators use it instead of the one in ReadOptions? I can do that here if you guys think it's better. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5043 Differential Revision: D14339233 Pulled By: al13n321 fbshipit-source-id: ccf7efa11c85c9a5b91d969bb55627d0fb01e7b8 02 April 2019, 00:10:40 UTC
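A hedged usage sketch of the new option: with avoid_unnecessary_blocking_io set, dropping a column family handle (or destroying an iterator) defers file deletions to background threads instead of blocking the calling thread. The path and the minimal error handling are illustrative only.
```
#include <cassert>

#include "rocksdb/db.h"
#include "rocksdb/options.h"

int main() {
  rocksdb::Options options;
  options.create_if_missing = true;
  options.avoid_unnecessary_blocking_io = true;  // defer blocking file deletes

  rocksdb::DB* db = nullptr;
  rocksdb::Status s = rocksdb::DB::Open(options, "/tmp/example_db", &db);
  assert(s.ok());

  // ... use the DB from latency-sensitive threads ...

  delete db;
  return 0;
}
```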