e1c468d | sdong | 16 August 2019, 23:40:09 UTC | Do readahead in VerifyChecksum() (#5713) Summary: Right now VerifyChecksum() doesn't do read-ahead. In some use cases, users won't be able to achieve good performance. With this change, RocksDB performs a default readahead, and users are able to override the readahead size by passing in a ReadOptions. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5713 Test Plan: Add a new unit test. Differential Revision: D16860874 fbshipit-source-id: 0cff0fe79ac855d3d068e6ccd770770854a68413 | 16 August 2019, 23:42:56 UTC |
e89b1c9 | Zhongyi Xie | 16 August 2019, 23:37:20 UTC | add missing check for hash index when calling BlockBasedTableIterator (#5712) Summary: Previous PR https://github.com/facebook/rocksdb/pull/3601 added support for making prefix_extractor dynamically mutable. However, there was a missing check for hash index when creating a new BlockBasedTableIterator. While the check may be redundant because no other type of IndexReader makes use of the flag, it is less error-prone to add the missing check so that future index reader implementations will not need to worry about violating the contract. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5712 Differential Revision: D16842052 Pulled By: miasantreble fbshipit-source-id: aef11c0ff7a690ed248f5b8fe23481cac486b381 | 16 August 2019, 23:39:49 UTC |
f2bf0b2 | Adam Retter | 16 August 2019, 23:25:11 UTC | Fixes for building RocksJava releases on arm64v8 Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/5674 Differential Revision: D16870338 fbshipit-source-id: c8dac644b1479fa734b491f3a8d50151772290f7 | 16 August 2019, 23:27:50 UTC |
35fe685 | Kefu Chai | 16 August 2019, 22:48:09 UTC | cmake: s/SNAPPY_LIBRARIES/snappy_LIBRARIES/ (#5687) Summary: fix the regression introduced by cc9fa7fc Signed-off-by: Kefu Chai <tchaikov@gmail.com> Pull Request resolved: https://github.com/facebook/rocksdb/pull/5687 Differential Revision: D16870212 fbshipit-source-id: 78b5519e1d2b03262d102ca530491254ddffdc38 | 16 August 2019, 22:49:23 UTC |
e051560 | sdong | 16 August 2019, 22:34:49 UTC | Blacklist TransactionTest.GetWithoutSnapshot from valgrind_test (#5715) Summary: In valgrind_test, TransactionTest.GetWithoutSnapshot ran for 2 hours and still didn't finish. Blacklist it from valgrind_test to prevent a timeout. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5715 Test Plan: run "make valgrind_test" and see whether the test is still generated. Differential Revision: D16866009 fbshipit-source-id: 92c78049b0bc1c2b9a0dfc1b7c8a9206b36f02f0 | 16 August 2019, 22:36:49 UTC |
353a68d | Yanqin Jin | 16 August 2019, 22:05:56 UTC | Update HISTORY.md for 6.4.0 (#5714) Summary: Update HISTORY.md by moving a feature from "Unreleased" to 6.4.0 after cherry-picking related commits to the 6.4.fb branch. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5714 Differential Revision: D16865334 Pulled By: riversand963 fbshipit-source-id: f17ede905a1dfbbcdf98806ca398c618cf54748a | 16 August 2019, 22:09:20 UTC |
a2e46ea | jsteemann | 16 August 2019, 21:36:41 UTC | fix compiling with `-DNPERF_CONTEXT` (#5704) Summary: This was previously broken, as the performance context-related macro signatures in file monitoring/perf_context_imp.h deviated for the case when NPERF_CONTEXT was defined and when it was not. Update the macros for the `-DNPERF_CONTEXT` case, so it compiles. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5704 Differential Revision: D16867746 fbshipit-source-id: 05539724cb1f7955ecc42828365836a677759ad9 | 16 August 2019, 21:38:08 UTC |
c2404d9 | Eli Pozniansky | 16 August 2019, 21:16:49 UTC | Optimizing ApproximateSize to create index iterator just once (#5693) Summary: VersionSet::ApproximateSize doesn't need to create two separate index iterators and do binary search for each in BlockBasedTable. So BlockBasedTable::ApproximateSize was added that creates the iterator once and uses it to calculate the data size between start and end keys. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5693 Differential Revision: D16774056 Pulled By: elipoz fbshipit-source-id: 53ce262e1a057788243bf30cd9b8aa6581df1a18 | 16 August 2019, 21:18:28 UTC |
c762efc | sheng qiu | 16 August 2019, 20:55:37 UTC | fix compile error: ‘FALLOC_FL_KEEP_SIZE’ undeclared (#5708) Summary: add "linux/falloc.h" in env/io_posix.cc to fix compile error: ‘FALLOC_FL_KEEP_SIZE’ undeclared Signed-off-by: sheng qiu <herbert1984106@gmail.com> Pull Request resolved: https://github.com/facebook/rocksdb/pull/5708 Differential Revision: D16832922 fbshipit-source-id: 30e787c4a1b5a9724a8acfd68962ff5ec5f27d3e | 16 August 2019, 20:58:05 UTC |
40712df | Kefu Chai | 16 August 2019, 20:54:23 UTC | ThreadPoolImpl::Impl::BGThreadWrapper() returns void (#5709) Summary: there is no need to return void*, as std::thread::thread(Func&& f, Args&&... args ) only requires `Func` to be callable. Signed-off-by: Kefu Chai <tchaikov@gmail.com> Pull Request resolved: https://github.com/facebook/rocksdb/pull/5709 Differential Revision: D16832894 fbshipit-source-id: a1e1b876fa8d55589ef5feb5b27f3a435068b747 | 16 August 2019, 20:55:41 UTC |
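The rationale in #5709 can be sketched in a few self-contained lines (the wrapper name and counter below are illustrative, not RocksDB's actual thread pool code): std::thread only requires a Callable, so the thread function can simply return void instead of the pthread-style void*.

```cpp
#include <atomic>
#include <cassert>
#include <thread>

std::atomic<int> counter{0};

// Returns void; no pthread-style `void*` return value is needed, because
// std::thread::thread(Func&&, Args&&...) only requires Func to be callable.
void BGThreadWrapper() {
  counter.fetch_add(1, std::memory_order_relaxed);
}

int RunOnce() {
  std::thread t(BGThreadWrapper);  // compiles fine with a void callable
  t.join();
  return counter.load(std::memory_order_relaxed);
}
```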
3a3dc29 | Levi Tamasi | 16 August 2019, 18:15:13 UTC | Update HISTORY.md for 6.3.2/6.4.0 and add a not-yet-released change Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/5710 Test Plan: HISTORY.md-only change, no testing required. Differential Revision: D16836869 Pulled By: ltamasi fbshipit-source-id: 978148f1d14b0c46839a94d7ada8a5e8ecf73965 | 16 August 2019, 18:17:03 UTC |
bd2c753 | sdong | 15 August 2019, 23:59:42 UTC | Add command "list_file_range_deletes" in ldb (#5615) Summary: Add a command in ldb so that users can print out tombstones in SST files. In order to test the code, change the interface of LDBCommandRunner::RunCommand() so that it returns the status code instead of exiting the program. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5615 Test Plan: Add a new unit test Differential Revision: D16550326 fbshipit-source-id: 88ddfe6984bdcbb3a528abdd115089df09eba52e | 16 August 2019, 00:01:03 UTC |
6ec2bf3 | Maysam Yabandeh | 15 August 2019, 21:39:47 UTC | Blog post for write_unprepared (#5711) Summary: Introducing the write_unprepared feature. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5711 Differential Revision: D16838307 Pulled By: maysamyabandeh fbshipit-source-id: d9a4daf63dd0f855bea49c14ce84e6299f1401c7 | 15 August 2019, 21:41:13 UTC |
d61d450 | Jeffrey Xiao | 15 August 2019, 03:58:59 UTC | Fix IngestExternalFile overlapping check (#5649) Summary: Previously, the end key of a range deletion tombstone was considered exclusive for the purposes of deletion, but considered inclusive when checking if two SSTables overlap. For example, an SSTable with a range deletion tombstone [a, b) would be considered overlapping with an SSTable with a range deletion tombstone [b, c). This commit fixes this check. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5649 Differential Revision: D16808765 Pulled By: anand1976 fbshipit-source-id: 5c7ad1c027e4f778d35070e5dae1b8e6037e0d68 | 15 August 2019, 04:02:28 UTC |
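The corrected semantics in #5649 can be illustrated with a small sketch (hypothetical helper, not RocksDB's actual check): two half-open key ranges [s1, e1) and [s2, e2) overlap iff s1 < e2 and s2 < e1, so [a, b) and [b, c) do not overlap.

```cpp
#include <cassert>
#include <string>

// Half-open range overlap check: end keys are exclusive, matching the fixed
// range-tombstone semantics described in the commit.
bool RangesOverlap(const std::string& s1, const std::string& e1,
                   const std::string& s2, const std::string& e2) {
  return s1 < e2 && s2 < e1;
}
```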
d92a59b | Levi Tamasi | 15 August 2019, 01:13:14 UTC | Fix regression affecting partitioned indexes/filters when cache_index_and_filter_blocks is false (#5705) Summary: PR https://github.com/facebook/rocksdb/issues/5298 (and subsequent related patches) unintentionally changed the semantics of cache_index_and_filter_blocks: historically, this option only affected the main index/filter block; with the changes, it affects index/filter partitions as well. This can cause performance issues when cache_index_and_filter_blocks is false since in this case, partitions are neither cached nor preloaded (i.e. they are loaded on demand upon each access). The patch reverts to the earlier behavior, that is, partitions are cached similarly to data blocks regardless of the value of the above option. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5705 Test Plan: make check ./db_bench -benchmarks=fillrandom --statistics --stats_interval_seconds=1 --duration=30 --num=500000000 --bloom_bits=20 --partition_index_and_filters=true --cache_index_and_filter_blocks=false ./db_bench -benchmarks=readrandom --use_existing_db --statistics --stats_interval_seconds=1 --duration=10 --num=500000000 --bloom_bits=20 --partition_index_and_filters=true --cache_index_and_filter_blocks=false --cache_size=8000000000 Relevant statistics from the readrandom benchmark with the old code: rocksdb.block.cache.index.miss COUNT : 0 rocksdb.block.cache.index.hit COUNT : 0 rocksdb.block.cache.index.add COUNT : 0 rocksdb.block.cache.index.bytes.insert COUNT : 0 rocksdb.block.cache.index.bytes.evict COUNT : 0 rocksdb.block.cache.filter.miss COUNT : 0 rocksdb.block.cache.filter.hit COUNT : 0 rocksdb.block.cache.filter.add COUNT : 0 rocksdb.block.cache.filter.bytes.insert COUNT : 0 rocksdb.block.cache.filter.bytes.evict COUNT : 0 With the new code: rocksdb.block.cache.index.miss COUNT : 2500 rocksdb.block.cache.index.hit COUNT : 42696 rocksdb.block.cache.index.add COUNT : 2500 
rocksdb.block.cache.index.bytes.insert COUNT : 4050048 rocksdb.block.cache.index.bytes.evict COUNT : 0 rocksdb.block.cache.filter.miss COUNT : 2500 rocksdb.block.cache.filter.hit COUNT : 4550493 rocksdb.block.cache.filter.add COUNT : 2500 rocksdb.block.cache.filter.bytes.insert COUNT : 10331040 rocksdb.block.cache.filter.bytes.evict COUNT : 0 Differential Revision: D16817382 Pulled By: ltamasi fbshipit-source-id: 28a516b0da1f041a03313e0b70b28cf5cf205d00 | 15 August 2019, 01:16:06 UTC |
77273d4 | Aaryaman Sagar | 14 August 2019, 23:58:11 UTC | Fix TSAN failures in DistributedMutex tests (#5684) Summary: TSAN was not able to correctly instrument atomic bts and btr instructions, so when TSAN is enabled implement those with std::atomic::fetch_or and std::atomic::fetch_and. Also disable tests that fail on TSAN with false negatives (we know these are false negatives because this other verifiably correct program fails with the same TSAN error <link>) ``` make clean TEST_TMPDIR=/dev/shm/rocksdb OPT=-g COMPILE_WITH_TSAN=1 make J=1 -j56 folly_synchronization_distributed_mutex_test ``` This is the code that fails with the same false-negative with TSAN ``` namespace { class ExceptionWithConstructionTrack : public std::exception { public: explicit ExceptionWithConstructionTrack(int id) : id_{folly::to<std::string>(id)}, constructionTrack_{id} {} const char* what() const noexcept override { return id_.c_str(); } private: std::string id_; TestConstruction constructionTrack_; }; template <typename Storage, typename Atomic> void transferCurrentException(Storage& storage, Atomic& produced) { assert(std::current_exception()); new (&storage) std::exception_ptr(std::current_exception()); produced->store(true, std::memory_order_release); } void concurrentExceptionPropagationStress( int numThreads, std::chrono::milliseconds milliseconds) { auto&& stop = std::atomic<bool>{false}; auto&& exceptions = std::vector<std::aligned_storage<48, 8>::type>{}; auto&& produced = std::vector<std::unique_ptr<std::atomic<bool>>>{}; auto&& consumed = std::vector<std::unique_ptr<std::atomic<bool>>>{}; auto&& consumers = std::vector<std::thread>{}; for (auto i = 0; i < numThreads; ++i) { produced.emplace_back(new std::atomic<bool>{false}); consumed.emplace_back(new std::atomic<bool>{false}); exceptions.push_back({}); } auto producer = std::thread{[&]() { auto counter = std::vector<int>(numThreads, 0); for (auto i = 0; true; i = ((i + 1) % numThreads)) { try { throw 
ExceptionWithConstructionTrack{counter.at(i)++}; } catch (...) { transferCurrentException(exceptions.at(i), produced.at(i)); } while (!consumed.at(i)->load(std::memory_order_acquire)) { if (stop.load(std::memory_order_acquire)) { return; } } consumed.at(i)->store(false, std::memory_order_release); } }}; for (auto i = 0; i < numThreads; ++i) { consumers.emplace_back([&, i]() { auto counter = 0; while (true) { while (!produced.at(i)->load(std::memory_order_acquire)) { if (stop.load(std::memory_order_acquire)) { return; } } produced.at(i)->store(false, std::memory_order_release); try { auto storage = &exceptions.at(i); auto exc = folly::launder( reinterpret_cast<std::exception_ptr*>(storage)); auto copy = std::move(*exc); exc->std::exception_ptr::~exception_ptr(); std::rethrow_exception(std::move(copy)); } catch (std::exception& exc) { auto value = std::stoi(exc.what()); EXPECT_EQ(value, counter++); } consumed.at(i)->store(true, std::memory_order_release); } }); } std::this_thread::sleep_for(milliseconds); stop.store(true); producer.join(); for (auto& thread : consumers) { thread.join(); } } } // namespace ``` Pull Request resolved: https://github.com/facebook/rocksdb/pull/5684 Differential Revision: D16746077 Pulled By: miasantreble fbshipit-source-id: 8af88dcf9161c05daec1a76290f577918638f79d | 15 August 2019, 00:01:31 UTC |
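The TSAN workaround described in #5684, emulating the x86 bts/btr instructions with portable atomic read-modify-write operations, can be sketched as follows (function names are illustrative, not the exact ones used in the port):

```cpp
#include <atomic>
#include <cassert>
#include <cstdint>

// Set bit `bit` and return its previous value, via fetch_or instead of the
// raw x86 `bts` instruction, so TSAN can instrument the access.
inline bool AtomicBitSet(std::atomic<std::uint64_t>& word, unsigned bit) {
  const std::uint64_t mask = std::uint64_t{1} << bit;
  return (word.fetch_or(mask, std::memory_order_acq_rel) & mask) != 0;
}

// Clear bit `bit` and return its previous value, via fetch_and (for `btr`).
inline bool AtomicBitReset(std::atomic<std::uint64_t>& word, unsigned bit) {
  const std::uint64_t mask = std::uint64_t{1} << bit;
  return (word.fetch_and(~mask, std::memory_order_acq_rel) & mask) != 0;
}
```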
7785f61 | Manuel Ung | 14 August 2019, 23:08:38 UTC | WriteUnPrepared: Fix bug in savepoints (#5703) Summary: Fix a bug in write unprepared savepoints. When flushing the write batch according to savepoint boundaries, we were forgetting to flush the last write batch after the last savepoint, meaning that some data was not written to DB. Also, add a small optimization where we avoid flushing empty batches. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5703 Differential Revision: D16811996 Pulled By: lth fbshipit-source-id: 600c7e0e520ad7a8fad32d77e11d932453e68e3f | 14 August 2019, 23:15:46 UTC |
0a97125 | Levi Tamasi | 14 August 2019, 23:07:03 UTC | Fix data races in BlobDB (#5698) Summary: Some accesses to blob_files_ and open_ttl_files_ in BlobDBImpl, as well as to expiration_range_ in BlobFile were not properly synchronized. The patch fixes this and also makes sure the invariant that obsolete_files_ is a subset of blob_files_ holds even when an attempt to delete an obsolete blob file fails. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5698 Test Plan: COMPILE_WITH_TSAN=1 make blob_db_test gtest-parallel --repeat=1000 ./blob_db_test --gtest_filter="*ShutdownWait*" The test fails with TSAN errors ~20 times out of 1000 without the patch but completes successfully 1000 out of 1000 times with the fix. Differential Revision: D16793235 Pulled By: ltamasi fbshipit-source-id: 8034b987598d4fdc9f15098d4589cc49cde484e9 | 14 August 2019, 23:10:36 UTC |
4c70cb7 | Manuel Ung | 14 August 2019, 21:25:00 UTC | WriteUnPrepared: support iterating while writing to transaction (#5699) Summary: In MyRocks, there are cases where we write while iterating through keys. This currently breaks WBWIIterator, because if a write batch flushes during iteration, the delta iterator would point to invalid memory. For now, fix by disallowing flush if there are active iterators. In the future, we will loop through all the iterators on a transaction, and refresh the iterators when a write batch is flushed. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5699 Differential Revision: D16794157 Pulled By: lth fbshipit-source-id: 5d5bf70688bd68fe58e8a766475ae88fd1be3190 | 14 August 2019, 21:28:53 UTC |
90cd6c2 | Zhongyi Xie | 14 August 2019, 04:51:42 UTC | Fix double deletion in transaction_test (#5700) Summary: Fix the following clang analyze failures: ``` In file included from utilities/transactions/transaction_test.cc:8: ./utilities/transactions/transaction_test.h:174:14: warning: Attempt to delete released memory delete root_db; ^ ``` The destructor of StackableDB already deletes the root db and there is no need to delete the db separately. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5700 Test Plan: USE_CLANG=1 TEST_TMPDIR=/dev/shm/rocksdb OPT=-g make -j24 analyze Differential Revision: D16800579 Pulled By: maysamyabandeh fbshipit-source-id: 64c2d70f23e07e6a15242add97c744902ea33be5 | 14 August 2019, 04:54:55 UTC |
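The ownership rule behind the fix in #5700 generalizes: a stacking wrapper that deletes its wrapped pointer in its destructor must be the sole owner. A minimal sketch with hypothetical toy types (not RocksDB's StackableDB):

```cpp
#include <cassert>

// Counts destructor invocations so a double delete would be observable.
struct Inner {
  int* deletions;
  explicit Inner(int* d) : deletions(d) {}
  ~Inner() { ++*deletions; }
};

// Like StackableDB, the wrapper owns `inner` and deletes it exactly once in
// its own destructor; callers must not delete `inner` separately.
struct StackableWrapper {
  Inner* inner;
  explicit StackableWrapper(Inner* i) : inner(i) {}
  ~StackableWrapper() { delete inner; }
};
```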
8a678a5 | Manuel Ung | 13 August 2019, 20:08:48 UTC | WriteUnPrepared: Relax restriction on iterators and writes with no snapshot (#5697) Summary: Currently, if a write is done without a snapshot, then `largest_validated_seq_` is set to `kMaxSequenceNumber`. This is too aggressive, because an iterator with a snapshot created after this write should be valid. Set `largest_validated_seq_` to `GetLastPublishedSequence` instead. The variable means that no keys in the current tracked key set have been changed by other transactions since `largest_validated_seq_`. Also, do some extra cleanup in Clear() for safety. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5697 Differential Revision: D16788613 Pulled By: lth fbshipit-source-id: f2aa40b8b12e0c0cf9e38c940fecc8f1cc0d2385 | 13 August 2019, 20:11:51 UTC |
04a849b | Yi Zhang | 12 August 2019, 23:39:02 UTC | Fix compiler error by deleting GetContext default ctor (#5685) Summary: When updating compiler version for MyRocks I'm seeing this error with rocksdb: ``` ome/yzha/mysql/mysql-fork2/rocksdb/table/get_context.h:91:3: error: explicitly defaulted default constructor is implicitly deleted [-Werror,-Wdefaulted-function-deleted] GetContext() = default; ^ /home/yzha/mysql/mysql-fork2/rocksdb/table/get_context.h:166:18: note: default constructor of 'GetContext' is implicitly deleted because field 'tracing_get_id_' of const-qualified type 'const uint64_t' (aka 'const unsigned long') would not be initialized const uint64_t tracing_get_id_; ^ ``` The error itself is rather self explanatory and makes sense. Given that no one seems to be using the default ctor (they shouldn't, anyway), I'm deleting it. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5685 Differential Revision: D16747712 Pulled By: yizhang82 fbshipit-source-id: 95c0acb958a1ed41154c0047d2e6fce7644de53f | 12 August 2019, 23:42:10 UTC |
6485597 | Maysam Yabandeh | 12 August 2019, 19:17:26 UTC | WriteUnPrepared: Pass snap_released to the callback (#5691) Summary: With changes made in https://github.com/facebook/rocksdb/pull/5664 we meant to pass snap_released parameter of ::IsInSnapshot from the read callbacks. Although the variable was defined, passing it to the callback in WritePreparedTxnReadCallback was missing, which is fixed in this PR. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5691 Differential Revision: D16767310 Pulled By: maysamyabandeh fbshipit-source-id: 3bf53f5964a2756a66ceef7c8f6b3ac75f102f48 | 12 August 2019, 19:20:46 UTC |
6f0f82d | Manuel Ung | 12 August 2019, 19:11:21 UTC | WriteUnPrepared: increase test coverage in transaction_test (#5658) Summary: This changes transaction_test to set `txn_db_options.default_write_batch_flush_threshold = 1` in order to give better test coverage for WriteUnprepared. As part of the change, some tests had to be updated. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5658 Differential Revision: D16740468 Pulled By: lth fbshipit-source-id: 3821eec20baf13917c8c1fab444332f75a509de9 | 12 August 2019, 19:16:04 UTC |
de3fb9a | Zhongyi Xie | 11 August 2019, 02:12:09 UTC | exclude TEST_ENV_URI from rocksdb lite (#5686) Summary: PR https://github.com/facebook/rocksdb/pull/5676 added some test coverage for `TEST_ENV_URI`, which unfortunately isn't supported in lite mode, causing some test failures for rocksdb lite. For example, ``` db/db_test_util.cc: In constructor ‘rocksdb::DBTestBase::DBTestBase(std::__cxx11::string)’: db/db_test_util.cc:57:16: error: ‘ObjectRegistry’ has not been declared Status s = ObjectRegistry::NewInstance()->NewSharedObject(test_env_uri, ^ ``` This PR fixes these errors by excluding the new code from test functions for lite mode. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5686 Differential Revision: D16749000 Pulled By: miasantreble fbshipit-source-id: e8b3088c31a78b3dffc5fe7814261909d2c3e369 | 11 August 2019, 02:15:05 UTC |
12eaacb | Maysam Yabandeh | 09 August 2019, 23:35:16 UTC | WritePrepared: Fix SmallestUnCommittedSeq bug (#5683) Summary: SmallestUnCommittedSeq reads two data structures, prepared_txns_ and delayed_prepared_. These two are updated in CheckPreparedAgainstMax when max_evicted_seq_ advances some prepared entries. To avoid the cost of acquiring a mutex, the read from them in SmallestUnCommittedSeq is not atomic. This creates a potential race condition. The fix is to read the two data structures in the reverse order of their update. CheckPreparedAgainstMax copies the prepared entry to delayed_prepared_ before removing it from prepared_txns_ and SmallestUnCommittedSeq looks into prepared_txns_ before reading delayed_prepared_. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5683 Differential Revision: D16744699 Pulled By: maysamyabandeh fbshipit-source-id: b1bdb134018beb0b9de58827f512662bea35cad0 | 09 August 2019, 23:40:00 UTC |
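The ordering argument in #5683 can be modeled sequentially: because the writer copies into delayed_prepared_ before erasing from prepared_txns_, while the reader probes the sets in the opposite order, the entry is visible at every intermediate state. A toy single-threaded sketch (illustrative types, not the actual WritePrepared data structures):

```cpp
#include <cassert>
#include <cstdint>
#include <set>

struct ToyState {
  std::set<std::uint64_t> prepared_txns;
  std::set<std::uint64_t> delayed_prepared;
};

// Reader: probe prepared_txns first, then delayed_prepared -- the reverse of
// the writer's update order, as the fix prescribes.
bool ReaderSees(const ToyState& s, std::uint64_t seq) {
  if (s.prepared_txns.count(seq)) return true;
  return s.delayed_prepared.count(seq) > 0;
}
```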
5d9a67e | Yanqin Jin | 09 August 2019, 22:08:36 UTC | Support loading custom objects in unit tests (#5676) Summary: Most existing RocksDB unit tests run on `Env::Default()`. It will be useful to port the unit tests to non-default environments, e.g. `HdfsEnv`, etc. This pull request is one step towards this goal. If RocksDB unit tests are built with a static library exposing a function `RegisterCustomObjects()`, then it is possible to implement custom object registrar logic in the library. RocksDB unit test can call `RegisterCustomObjects()` at the beginning. By default, `ROCKSDB_UNITTESTS_WITH_CUSTOM_OBJECTS_FROM_STATIC_LIBS` is not defined, thus this PR has no impact on existing RocksDB because `RegisterCustomObjects()` is a noop. Test plan (on devserver): ``` $make clean && COMPILE_WITH_ASAN=1 make -j32 all $make check ``` All unit tests must pass. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5676 Differential Revision: D16679157 Pulled By: riversand963 fbshipit-source-id: aca571af3fd0525277cdc674248d0fe06e060f9d | 09 August 2019, 22:12:08 UTC |
3da2257 | haoyuhuang | 09 August 2019, 20:09:04 UTC | Block cache analyzer: Support reading from human readable trace file. (#5679) Summary: This PR adds support in block cache trace analyzer to read from human readable trace file. This is needed when a user does not have access to the binary trace file. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5679 Test Plan: USE_CLANG=1 make check -j32 Differential Revision: D16697239 Pulled By: HaoyuHuang fbshipit-source-id: f2e29d7995816c389b41458f234ec8e184a924db | 09 August 2019, 20:13:54 UTC |
e0b8453 | Zhongyi Xie | 08 August 2019, 03:15:21 UTC | Fix clang_check and lite failures (#5680) Summary: This PR fixes two test failures: 1. clang check: ``` third-party/folly/folly/detail/Futex.cpp:52:12: error: implicit conversion loses integer precision: 'long' to 'int' [-Werror,-Wshorten-64-to-32] int rv = syscall( ~~ ^~~~~~~~ third-party/folly/folly/detail/Futex.cpp:114:12: error: implicit conversion loses integer precision: 'long' to 'int' [-Werror,-Wshorten-64-to-32] int rv = syscall( ~~ ^~~~~~~~ ``` 2. lite ``` ./third-party/folly/folly/synchronization/DistributedMutex-inl.h:1337:7: error: exception handling disabled, use -fexceptions to enable } catch (...) { ^ ``` Pull Request resolved: https://github.com/facebook/rocksdb/pull/5680 Differential Revision: D16704042 Pulled By: miasantreble fbshipit-source-id: a53cb06128365d9e864f07476b0af8fc27140f07 | 08 August 2019, 03:19:39 UTC |
38b03c8 | Aaryaman Sagar | 07 August 2019, 21:29:35 UTC | Port folly/synchronization/DistributedMutex to rocksdb (#5642) Summary: This ports `folly::DistributedMutex` into RocksDB. The PR includes everything else needed to compile and use DistributedMutex as a component within folly. Most files are unchanged except for some portability stuff and includes. For now, I've put this under `rocksdb/third-party`, but if there is a better folder to put this under, let me know. I also am not sure how or where to put unit tests for third-party stuff like this. It seems like gtest is included already, but I need to link with it from another third-party folder. This also includes some other common components from folly - folly/Optional - folly/ScopeGuard (In particular `SCOPE_EXIT`) - folly/synchronization/ParkingLot (A portable futex-like interface) - folly/synchronization/AtomicNotification (The standard C++ interface for futexes) - folly/Indestructible (For singletons that don't get destroyed without allocations) Pull Request resolved: https://github.com/facebook/rocksdb/pull/5642 Differential Revision: D16544439 fbshipit-source-id: 179b98b5dcddc3075926d31a30f92fd064245731 | 07 August 2019, 21:34:19 UTC |
6e78fe3 | haoyuhuang | 07 August 2019, 01:47:39 UTC | Pysim more algorithms (#5644) Summary: This PR adds four more eviction policies. - OPT [1] - Hyperbolic caching [2] - ARC [3] - GreedyDualSize [4] [1] L. A. Belady. 1966. A Study of Replacement Algorithms for a Virtual-storage Computer. IBM Syst. J. 5, 2 (June 1966), 78-101. DOI=http://dx.doi.org/10.1147/sj.52.0078 [2] Aaron Blankstein, Siddhartha Sen, and Michael J. Freedman. 2017. Hyperbolic caching: flexible caching for web applications. In Proceedings of the 2017 USENIX Conference on Usenix Annual Technical Conference (USENIX ATC '17). USENIX Association, Berkeley, CA, USA, 499-511. [3] Nimrod Megiddo and Dharmendra S. Modha. 2003. ARC: A Self-Tuning, Low Overhead Replacement Cache. In Proceedings of the 2nd USENIX Conference on File and Storage Technologies (FAST '03). USENIX Association, Berkeley, CA, USA, 115-130. [4] N. Young. The k-server dual and loose competitiveness for paging. Algorithmica, June 1994, vol. 11,(no.6):525-41. Rewritten version of ''On-line caching as cache size varies'', in The 2nd Annual ACM-SIAM Symposium on Discrete Algorithms, 241-250, 1991. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5644 Differential Revision: D16548817 Pulled By: HaoyuHuang fbshipit-source-id: 838f76db9179f07911abaab46c97e1c929cfcd63 | 07 August 2019, 01:50:59 UTC |
d150e01 | Vijay Nadimpalli | 06 August 2019, 21:22:34 UTC | New API to get all merge operands for a Key (#5604) Summary: This is a new API added to db.h to allow for fetching all merge operands associated with a Key. The main motivation for this API is to support use cases where doing a full online merge is not necessary as it is performance sensitive. Example use-cases: 1. Update subset of columns and read subset of columns - Imagine a SQL Table, a row is encoded as a K/V pair (as it is done in MyRocks). If there are many columns and users only updated one of them, we can use merge operator to reduce write amplification. While users only read one or two columns in the read query, this feature can avoid a full merging of the whole row, and save some CPU. 2. Updating very few attributes in a value which is a JSON-like document - Updating one attribute can be done efficiently using merge operator, while reading back one attribute can be done more efficiently if we don't need to do a full merge. ---------------------------------------------------------------------------------------------------- API : Status GetMergeOperands( const ReadOptions& options, ColumnFamilyHandle* column_family, const Slice& key, PinnableSlice* merge_operands, GetMergeOperandsOptions* get_merge_operands_options, int* number_of_operands) Example usage : int size = 100; int number_of_operands = 0; std::vector<PinnableSlice> values(size); GetMergeOperandsOptions merge_operands_info; db_->GetMergeOperands(ReadOptions(), db_->DefaultColumnFamily(), "k1", values.data(), merge_operands_info, &number_of_operands); Description : Returns all the merge operands corresponding to the key. If the number of merge operands in DB is greater than merge_operands_options.expected_max_number_of_operands no merge operands are returned and status is Incomplete. Merge operands returned are in the order of insertion. 
merge_operands -> Points to an array of at least merge_operands_options.expected_max_number_of_operands PinnableSlice entries, which the caller is responsible for allocating. If the status returned is Incomplete, then number_of_operands will contain the total number of merge operands found in DB for key. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5604 Test Plan: Added unit test and perf test in db_bench that can be run using the command: ./db_bench -benchmarks=getmergeoperands --merge_operator=sortlist Differential Revision: D16657366 Pulled By: vjnadimpalli fbshipit-source-id: 0faadd752351745224ee12d4ae9ef3cb529951bf | 06 August 2019, 21:26:44 UTC |
4f98b43 | Yun Tang | 06 August 2019, 16:10:32 UTC | Correct the default write buffer size of java doc (#5670) Summary: The actual value of default write buffer size within `rocksdb/include/rocksdb/options.h` is 64 MB, we should correct this value in java doc. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5670 Differential Revision: D16668815 Pulled By: maysamyabandeh fbshipit-source-id: cc3a981c9f1c2cd4a8392b0ed5f1fd0a2d729afb | 06 August 2019, 16:13:48 UTC |
cc9fa7f | Kefu Chai | 06 August 2019, 02:47:33 UTC | cmake: cmake related cleanups (#5662) Summary: - cmake: use the builtin FindBzip2.cmake from CMake - cmake: require CMake v3.5.1 - cmake: add imported target for 3rd party libraries - cmake: extract ReadVersion.cmake out and refactor it Pull Request resolved: https://github.com/facebook/rocksdb/pull/5662 Differential Revision: D16660974 Pulled By: maysamyabandeh fbshipit-source-id: 681594910e74253251fe14ad0befc41a4d0f4fd4 | 06 August 2019, 02:51:20 UTC |
f4a616e | haoyuhuang | 06 August 2019, 01:31:42 UTC | Block cache analyzer: python script to plot graphs (#5673) Summary: This PR updated the python script to plot graphs for stats output from block cache analyzer. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5673 Test Plan: Manually run the script to generate graphs. Differential Revision: D16657145 Pulled By: HaoyuHuang fbshipit-source-id: fd510b5fd4307835f9a986fac545734dbe003d28 | 06 August 2019, 01:35:52 UTC |
b1a02ff | Yanqin Jin | 05 August 2019, 22:40:31 UTC | Fix make target 'all' and 'check' (#5672) Summary: If a test is one of parallel tests, then it should also be one of the 'tests'. Otherwise, `make all` won't build the binaries. For example, ``` $COMPILE_WITH_ASAN=1 make -j32 all ``` Then if you do ``` $make check ``` The second command will invoke the compilation and building for db_bloom_test and file_reader_writer_test **without** the `COMPILE_WITH_ASAN=1`, causing the command to fail. Test plan (on devserver): ``` $make -j32 all ``` Verify all binaries are built so that `make check` won't have to compile anything. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5672 Differential Revision: D16655834 Pulled By: riversand963 fbshipit-source-id: 050131412b5313496f85ae3deeeeb8d28af75746 | 05 August 2019, 22:45:56 UTC |
208556e | Maysam Yabandeh | 05 August 2019, 20:30:56 UTC | WritePrepared: fix Get without snapshot (#5664) Summary: if read_options.snapshot is not set, ::Get will take the last sequence number after taking a super-version and uses that as the sequence number. Theoretically, max_evicted_seq_ could advance past this sequence number. This could lead ::IsInSnapshot, which is invoked by the ReadCallback, to notice the absence of the snapshot. In this case, the ReadCallback should have passed a non-value to snap_released so that it could be set by the ::IsInSnapshot. The patch does that, and adds a unit test to verify it. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5664 Differential Revision: D16614033 Pulled By: maysamyabandeh fbshipit-source-id: 06fb3fd4aacd75806ed1a1acec7961f5d02486f2 | 05 August 2019, 20:41:21 UTC |
e579e32 | Maysam Yabandeh | 05 August 2019, 20:30:31 UTC | Disable ReadYourOwnWriteStress when run under Valgrind (#5671) Summary: It sometimes times out when run under Valgrind, taking around 20 minutes. The patch skips the test under Valgrind. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5671 Differential Revision: D16652382 Pulled By: maysamyabandeh fbshipit-source-id: 0f6f4f76d37337d56226b689e01b14523dd07aae | 05 August 2019, 20:35:39 UTC |
30edf18 | Yanqin Jin | 02 August 2019, 17:40:32 UTC | Change buckifier to support parameterized dependencies (#5648) Summary: Users may desire to specify extra dependencies via buck. This PR allows users to pass additional dependencies as a JSON object so that the buckifier script can generate TARGETS file with desired extra dependencies. Test plan (on dev server) ``` $python buckifier/buckify_rocksdb.py '{"fake": {"extra_deps": [":test_dep", "//fakes/module:mock1"], "extra_compiler_flags": ["-DROCKSDB_LITE", "-Os"]}}' Generating TARGETS Extra dependencies: {'': {'extra_compiler_flags': [], 'extra_deps': []}, 'test_dep1': {'extra_compiler_flags': ['-O2', '-DROCKSDB_LITE'], 'extra_deps': [':fake', '//dep1/mock']}} Generated TARGETS Summary: - 5 libs - 0 binarys - 296 tests ``` Verify the TARGETS file. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5648 Differential Revision: D16565043 Pulled By: riversand963 fbshipit-source-id: a6ef02274174fcf159692d7b846e828454d01e89 | 02 August 2019, 17:55:17 UTC |
d1c9ede | Zhongyi Xie | 01 August 2019, 22:45:19 UTC | Fix duplicated file names in PurgeObsoleteFiles (#5603) Summary: Currently in `DBImpl::PurgeObsoleteFiles`, the list of candidate files is created through a combination of calling LogFileName using `log_delete_files` and `full_scan_candidate_files`. In full_scan_candidate_files, the filenames look like this {file_name = "074715.log", file_path = "/txlogs/3306"}, but LogFileName produces filenames that prepend a slash, like this: {file_name = "/074715.log", file_path = "/txlogs/3306"}. This confuses the dedup step here: https://github.com/facebook/rocksdb/blob/bb4178066dc4f18b9b7f1d371e641db027b3edbe/db/db_impl/db_impl_files.cc#L339-L345 Because duplicates still exist, DeleteFile is called on the same file twice, and hits an error on the second try. Error message: Failed to mark /txlogs/3302/764418.log as trash. The root cause is the use of `kDumbDbName` when generating file names; it creates file names like /074715.log. This PR removes the use of `kDumbDbName` and creates paths without a leading '/' when dbname can be ignored. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5603 Test Plan: make check Differential Revision: D16413203 Pulled By: miasantreble fbshipit-source-id: 6ba8288382c55f7d5e3892d722fc94b57d2e4491 | 01 August 2019, 22:50:05 UTC |
1dfc5ea | Levi Tamasi | 31 July 2019, 22:16:01 UTC | Test the various configurations in parallel in MergeOperatorPinningTest (#5659) Summary: MergeOperatorPinningTest.Randomized frequently times out under TSAN because it tests ~40 option configurations sequentially in a loop. The patch parallelizes the tests of the various configurations to make the test complete faster. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5659 Test Plan: Tested using buck test mode/dev-tsan ... Differential Revision: D16587518 Pulled By: ltamasi fbshipit-source-id: 65bd25c0ad9a23587fed5592e69c1a0097fa27f6 | 31 July 2019, 22:20:26 UTC |
f622ca2 | Manuel Ung | 31 July 2019, 20:36:22 UTC | WriteUnPrepared: savepoint support (#5627) Summary: Add savepoint support when the current transaction has flushed unprepared batches. Rolling back to a savepoint is similar to rolling back a transaction. It requires the set of keys that have changed since the savepoint, re-reading the keys at the snapshot at that savepoint, and then restoring the old keys by writing out another unprepared batch. For this strategy to work though, we must be capable of reading keys at a savepoint. This does not work if keys were written out using the same sequence number before and after a savepoint. Therefore, when we flush out unprepared batches, we must split the batch by savepoint if any savepoints exist. E.g. if we have the following: ``` Put(A) Put(B) Put(C) SetSavePoint() Put(D) Put(E) SetSavePoint() Put(F) ``` Then we will write out 3 separate unprepared batches: ``` Put(A) 1 Put(B) 1 Put(C) 1 Put(D) 2 Put(E) 2 Put(F) 3 ``` This is so that when we roll back to, e.g., the first savepoint, we can just read keys at snapshot_seq = 1. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5627 Differential Revision: D16584130 Pulled By: lth fbshipit-source-id: 6d100dd548fb20c4b76661bd0f8a2647e64477fa | 31 July 2019, 20:39:39 UTC |
d599135 | Manuel Ung | 31 July 2019, 17:41:05 UTC | WriteUnPrepared: use WriteUnpreparedTxnReadCallback for ValidateSnapshot (#5657) Summary: In DeferSnapshotSavePointTest, writes were failing with snapshot validation error because the key with the latest sequence number was an unprepared key from the current transaction. Fix this by passing down the correct read callback. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5657 Differential Revision: D16582466 Pulled By: lth fbshipit-source-id: 11645dac0e7c1374d917ef5fdf757d13c1d1108d | 31 July 2019, 17:44:56 UTC |
4834dab | Eli Pozniansky | 31 July 2019, 15:46:48 UTC | Improve CPU Efficiency of ApproximateSize (part 2) (#5609) Summary: In some cases, we don't have to get a really accurate number. Something like 10% off is fine, so we can create a new option for that use case. In this case, we can calculate the size for full files first, and avoid estimation inside SST files if full files got us a huge number. For example, if we already covered 100GB of data, we should be able to skip partial dives into 10 SST files of 30MB. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5609 Differential Revision: D16433481 Pulled By: elipoz fbshipit-source-id: 5830b31e1c656d0fd3a00d7fd2678ddc8f6e601b | 31 July 2019, 15:50:00 UTC |
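A minimal sketch of the two-tier estimate described above, under assumed names (`FileRange`, `ApproximateRangeSize`, and the fallback heuristic are all illustrative; the real logic lives inside RocksDB's version/table code, not in this standalone form):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

struct FileRange {
  uint64_t size;         // total file size in bytes
  bool fully_contained;  // whole file falls inside the queried key range
};

// Approximate the total size of a key range. Files fully inside the range
// contribute their whole size cheaply; partially overlapping files normally
// need a dive into the SST index, which we skip once the cheap total already
// dwarfs the worst-case additional contribution of the partial files.
uint64_t ApproximateRangeSize(const std::vector<FileRange>& files,
                              double error_margin /* e.g. 0.1 for ~10% */) {
  uint64_t full_total = 0, partial_total = 0;
  for (const auto& f : files) {
    if (f.fully_contained) {
      full_total += f.size;
    } else {
      partial_total += f.size;  // upper bound on what partial files may add
    }
  }
  if (full_total > 0 &&
      static_cast<double>(partial_total) <
          static_cast<double>(full_total) * error_margin) {
    return full_total;  // skip the expensive per-file estimation
  }
  // Fallback: the real code estimates each partial file via its index block;
  // this sketch just assumes half of each partial file is in range.
  return full_total + partial_total / 2;
}
```

With a 10% margin, 10 partial files of 30MB are skipped once 100GB of fully covered files have been summed, which is exactly the shortcut the commit describes.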
b538e75 | Levi Tamasi | 31 July 2019, 00:41:15 UTC | Split the recent block based table changes between 6.3 and 6.4 in HISTORY.md Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/5653 Differential Revision: D16573445 Pulled By: ltamasi fbshipit-source-id: 19c639044fcfd43b5d5c627c8def33ff2dbb2af8 | 31 July 2019, 00:46:02 UTC |
265db3e | Fosco Marotto | 30 July 2019, 23:05:19 UTC | Update history and version for 6.4.0 (#5652) Summary: The master branch had been left at 6.2 while the history of 6.3 and beyond was merged. Updated this to be correct. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5652 Differential Revision: D16570498 Pulled By: gfosco fbshipit-source-id: 79f62ec570539a3e3d7d7c84a6cf7b722395fafe | 30 July 2019, 23:10:06 UTC |
55f4f54 | Yanqin Jin | 30 July 2019, 22:56:41 UTC | Update buckifier templates (#5647) Summary: Update buckifier templates in the scripts. Test plan (on devserver) ``` $python buckifier/buckify_rocksdb.py ``` Then ``` $git diff ``` Verify that generated TARGETS file is the same (except for indentation). Pull Request resolved: https://github.com/facebook/rocksdb/pull/5647 Differential Revision: D16555647 Pulled By: riversand963 fbshipit-source-id: 32574a4d0e820858eab2391304dd731141719bcd | 30 July 2019, 23:00:35 UTC |
849a8c0 | Yi Wu | 30 July 2019, 21:09:02 UTC | fix sign compare warnings (#5651) Summary: Fix -Wsign-compare warnings for gcc9. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5651 Test Plan: Tested with ubuntu19.10+gcc9 Differential Revision: D16567428 fbshipit-source-id: 730b2704d42ba0c4e4ea946a3199bbb34be4c25c | 30 July 2019, 21:12:54 UTC |
399f477 | Manuel Ung | 30 July 2019, 00:51:30 UTC | WriteUnPrepared: Use WriteUnpreparedTxnReadCallback for MultiGet (#5634) Summary: The `TransactionTest.MultiGetBatchedTest` was failing with unprepared batches because we were not using the correct callbacks. Override MultiGet to pass down the correct ReadCallback. A similar problem is also fixed in WritePrepared. This PR also fixes an issue similar to (https://github.com/facebook/rocksdb/pull/5147), but for MultiGet instead of Get. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5634 Differential Revision: D16552674 Pulled By: lth fbshipit-source-id: 736eaf8e919c6b13d5f5655b1c0d36b57ad04804 | 30 July 2019, 00:56:13 UTC |
e648c1d | haoyuhuang | 29 July 2019, 17:52:32 UTC | Cache simulator: Optimize hybrid row-block cache. (#5616) Summary: This PR optimizes the hybrid row-block cache simulator. If a Get request hits the cache, we treat all its future accesses as hits. Consider a Get request (no snapshot) accesses multiple files, e.g, file1, file2, file3. We construct the row key as "fdnumber_key_0". Before this PR, if it hits the cache when searching the key in file1, we continue to process its accesses in file2 and file3 which is unnecessary. With this PR, if "file1_key_0" is in the cache, we treat all future accesses of this Get request as hits. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5616 Differential Revision: D16453187 Pulled By: HaoyuHuang fbshipit-source-id: 56f3169cc322322305baaf5543226a0824fae19f | 29 July 2019, 17:58:15 UTC |
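The short-circuit described in the hybrid row-block cache entry above can be sketched as follows. This is a toy model under assumed names (`RowCacheSim`, the `"<fd>_<key>_0"` row-key format is taken from the commit message, everything else is illustrative of the optimization, not the analyzer's actual code):

```cpp
#include <string>
#include <unordered_set>
#include <vector>

// A Get request probes the row cache once per file it would search.
// On the first hit, all remaining file accesses of the same Get are
// counted as hits without any further cache lookups.
class RowCacheSim {
 public:
  // Returns the number of row-cache hits this Get produced.
  int SimulateGet(const std::string& user_key,
                  const std::vector<int>& file_numbers) {
    int hits = 0;
    for (size_t i = 0; i < file_numbers.size(); ++i) {
      std::string row_key =
          std::to_string(file_numbers[i]) + "_" + user_key + "_0";
      if (cache_.count(row_key) != 0) {
        // First hit: treat all future accesses of this Get as hits too,
        // skipping the per-file processing the pre-optimization code did.
        hits += static_cast<int>(file_numbers.size() - i);
        return hits;
      }
      cache_.insert(row_key);  // miss: populate so a repeat Get will hit
    }
    return hits;
  }

 private:
  std::unordered_set<std::string> cache_;
};
```

A repeated Get over files {1, 2, 3} hits on the first file and immediately accounts for all three accesses, instead of probing files 2 and 3 again.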
80d7067 | Manuel Ung | 26 July 2019, 23:28:38 UTC | Use int64_t instead of ssize_t (#5638) Summary: The ssize_t type was introduced in https://github.com/facebook/rocksdb/pull/5633, but it seems like it's a POSIX specific type. I just need a signed type to represent number of bytes, so use int64_t instead. It seems like we have a typedef from SSIZE_T for Windows, but it doesn't seem like we ever include "port/port.h" in our public header files. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5638 Differential Revision: D16526269 Pulled By: lth fbshipit-source-id: 8d3a5c41003951b74b29bc5f1d949b2b22da0cee | 26 July 2019, 23:36:49 UTC |
3f89af1 | Levi Tamasi | 26 July 2019, 22:48:35 UTC | Reduce the number of random iterations in compact_on_deletion_collector_test (#5635) Summary: This test frequently times out under TSAN; reducing the number of random iterations to make it complete faster. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5635 Test Plan: buck test mode/dev-tsan internal_repo_rocksdb/repo:compact_on_deletion_collector_test Differential Revision: D16523505 Pulled By: ltamasi fbshipit-source-id: 6a69909bce9d204c891150fcb3d536547b3253d0 | 26 July 2019, 22:53:34 UTC |
70c7302 | haoyuhuang | 26 July 2019, 21:36:16 UTC | Block cache simulator: Add pysim to simulate caches using reinforcement learning. (#5610) Summary: This PR implements cache eviction using reinforcement learning. It includes two implementations: 1. An implementation of Thompson Sampling for the Bernoulli Bandit [1]. 2. An implementation of LinUCB with disjoint linear models [2]. The idea is that a cache uses multiple eviction policies, e.g., MRU, LRU, and LFU. The cache learns which eviction policy is the best and uses it upon a cache miss. Thompson Sampling is contextless and does not include any features. LinUCB includes features such as level, block type, caller, column family id to decide which eviction policy to use. [1] Daniel J. Russo, Benjamin Van Roy, Abbas Kazerouni, Ian Osband, and Zheng Wen. 2018. A Tutorial on Thompson Sampling. Found. Trends Mach. Learn. 11, 1 (July 2018), 1-96. DOI: https://doi.org/10.1561/2200000070 [2] Lihong Li, Wei Chu, John Langford, and Robert E. Schapire. 2010. A contextual-bandit approach to personalized news article recommendation. In Proceedings of the 19th international conference on World wide web (WWW '10). ACM, New York, NY, USA, 661-670. DOI=http://dx.doi.org/10.1145/1772690.1772758 Pull Request resolved: https://github.com/facebook/rocksdb/pull/5610 Differential Revision: D16435067 Pulled By: HaoyuHuang fbshipit-source-id: 6549239ae14115c01cb1e70548af9e46d8dc21bb | 26 July 2019, 21:41:13 UTC |
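For reference, Thompson Sampling for the Bernoulli bandit from [1] can be sketched as below, with each "arm" standing for an eviction policy (LRU, LFU, MRU, ...). This is an illustrative implementation only; class and method names are assumptions, and the pysim code's feature handling (level, block type, caller, column family id in the LinUCB variant) is not modeled here. Beta samples are drawn via two Gamma draws because `<random>` has no Beta distribution:

```cpp
#include <random>
#include <vector>

// Thompson Sampling over Bernoulli arms: keep a Beta(successes, failures)
// posterior per eviction policy, sample from each posterior, and use the
// policy with the largest sample on the next cache miss.
class ThompsonSampler {
 public:
  explicit ThompsonSampler(int num_policies)
      : successes_(num_policies, 1.0), failures_(num_policies, 1.0) {}

  int PickPolicy(std::mt19937& rng) {
    int best = 0;
    double best_sample = -1.0;
    for (size_t i = 0; i < successes_.size(); ++i) {
      double sample = BetaSample(successes_[i], failures_[i], rng);
      if (sample > best_sample) {
        best_sample = sample;
        best = static_cast<int>(i);
      }
    }
    return best;
  }

  // Reward = the policy's eviction choice later proved good
  // (e.g. the evicted block was not re-requested soon after).
  void Update(int policy, bool reward) {
    if (reward) {
      successes_[policy] += 1.0;
    } else {
      failures_[policy] += 1.0;
    }
  }

 private:
  static double BetaSample(double a, double b, std::mt19937& rng) {
    std::gamma_distribution<double> ga(a, 1.0), gb(b, 1.0);
    double x = ga(rng), y = gb(rng);
    return x / (x + y);  // Beta(a, b) via two Gamma(shape, 1) draws
  }
  std::vector<double> successes_, failures_;
};
```

After enough feedback rounds the posterior of the consistently rewarded policy concentrates near 1, so it is chosen almost always, which is the "cache learns which eviction policy is best" behavior the commit describes.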
41df734 | Manuel Ung | 26 July 2019, 19:52:07 UTC | WriteUnPrepared: Add new variable write_batch_flush_threshold (#5633) Summary: Instead of reusing `TransactionOptions::max_write_batch_size` for determining when to flush a write batch for write unprepared, add a new variable called `write_batch_flush_threshold` for this use case instead. Also add `TransactionDBOptions::default_write_batch_flush_threshold` which sets the default value if `TransactionOptions::write_batch_flush_threshold` is unspecified. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5633 Differential Revision: D16520364 Pulled By: lth fbshipit-source-id: d75ae5a2141ce7708982d5069dc3f0b58d250e8c | 26 July 2019, 19:56:26 UTC |
3617287 | Levi Tamasi | 26 July 2019, 18:44:32 UTC | Parallelize db_bloom_filter_test (#5632) Summary: This test frequently times out under TSAN; parallelizing it should fix this issue. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5632 Test Plan: make check buck test mode/dev-tsan internal_repo_rocksdb/repo:db_bloom_filter_test Differential Revision: D16519399 Pulled By: ltamasi fbshipit-source-id: 66e05a644d6f79c6d544255ffcf6de195d2d62fe | 26 July 2019, 18:48:17 UTC |
230b909 | Manuel Ung | 26 July 2019, 18:31:46 UTC | Fix PopSavePoint to merge info into the previous savepoint (#5628) Summary: Transaction::RollbackToSavePoint undoes the modifications made since the SavePoint began, and also unlocks the corresponding keys, which are tracked in the last SavePoint. Currently ::PopSavePoint simply discards these tracked keys, leaving them locked in the lock manager. This breaks a subsequent ::RollbackToSavePoint behavior as it loses track of such keys, and thus cannot unlock them. The patch fixes ::PopSavePoint by passing on the tracked key information to the previous SavePoint. Fixes https://github.com/facebook/rocksdb/issues/5618 Pull Request resolved: https://github.com/facebook/rocksdb/pull/5628 Differential Revision: D16505325 Pulled By: lth fbshipit-source-id: 2bc3b30963ab4d36d996d1f66543c93abf358980 | 26 July 2019, 18:39:30 UTC |
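The merge-on-pop fix can be sketched with a standalone savepoint stack (names like `SavePointStack` and `TrackedInTop` are illustrative; the real logic lives in RocksDB's TransactionBaseImpl):

```cpp
#include <set>
#include <string>
#include <vector>

// Each savepoint tracks the keys locked since it was set.
struct SavePoint {
  std::set<std::string> tracked_keys;
};

class SavePointStack {
 public:
  void SetSavePoint() { stack_.emplace_back(); }

  void TrackKey(const std::string& key) {
    if (!stack_.empty()) stack_.back().tracked_keys.insert(key);
  }

  // Before the fix, popping simply discarded the top savepoint's tracked
  // keys, so a later RollbackToSavePoint could no longer unlock them.
  // The fix merges them into the previous savepoint before popping.
  bool PopSavePoint() {
    if (stack_.empty()) return false;
    if (stack_.size() >= 2) {
      auto& parent = stack_[stack_.size() - 2].tracked_keys;
      for (const auto& k : stack_.back().tracked_keys) {
        parent.insert(k);
      }
    }
    stack_.pop_back();
    return true;
  }

  size_t TrackedInTop() const {
    return stack_.empty() ? 0 : stack_.back().tracked_keys.size();
  }

 private:
  std::vector<SavePoint> stack_;
};
```

After popping the inner savepoint, the key it tracked is now owned by the outer one, so rolling back to the outer savepoint can still release its lock.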
74782ce | Yanqin Jin | 26 July 2019, 16:52:23 UTC | Fix target 'clean' to include parallel test binaries (#5629) Summary: The current `clean` target in the Makefile does not remove parallel test binaries. Fix this. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5629 Test Plan: (on devserver) Take file_reader_writer_test for instance. ``` $make -j32 file_reader_writer_test $make clean ``` Verify that the binary file 'file_reader_writer_test' is deleted by `make clean`. Differential Revision: D16513176 Pulled By: riversand963 fbshipit-source-id: 70acb9f56c928a494964121b86aacc0090f31ff6 | 26 July 2019, 16:56:09 UTC |
9625a2b | Eli Pozniansky | 26 July 2019, 05:38:53 UTC | Added SizeApproximationOptions to DB::GetApproximateSizes (#5626) Summary: Adds a new DB::GetApproximateSizes overload with a SizeApproximationOptions argument, which allows adding more options/knobs to the DB::GetApproximateSizes call (beyond only the include_flags). Pull Request resolved: https://github.com/facebook/rocksdb/pull/5626 Differential Revision: D16496913 Pulled By: elipoz fbshipit-source-id: ee8c6c182330a285fa056ecfc3905a592b451720 | 26 July 2019, 05:42:30 UTC |
ae152ee | Yanqin Jin | 25 July 2019, 22:23:46 UTC | Avoid user key copying for Get/Put/Write with user-timestamp (#5502) Summary: In previous https://github.com/facebook/rocksdb/issues/5079, we added user-specified timestamp to `DB::Get()` and `DB::Put()`. Limitation is that these two functions may cause extra memory allocation and key copy. The reason is that `WriteBatch` does not allocate extra memory for timestamps because it is not aware of timestamp size, and we did not provide an API to assign/update timestamp of each key within a `WriteBatch`. We address these issues in this PR by doing the following. 1. Add a `timestamp_size_` to `WriteBatch` so that `WriteBatch` can take timestamps into account when calling `WriteBatch::Put`, `WriteBatch::Delete`, etc. 2. Add APIs `WriteBatch::AssignTimestamp` and `WriteBatch::AssignTimestamps` so that application can assign/update timestamps for each key in a `WriteBatch`. 3. Avoid key copy in `GetImpl` by adding new constructor to `LookupKey`. Test plan (on devserver): ``` $make clean && COMPILE_WITH_ASAN=1 make -j32 all $./db_basic_test --gtest_filter=Timestamp/DBBasicTestWithTimestampWithParam.PutAndGet/* $make check ``` If the API extension looks good, I will add more unit tests. Some simple benchmark using db_bench. ``` $rm -rf /dev/shm/dbbench/* && TEST_TMPDIR=/dev/shm ./db_bench -benchmarks=fillseq,readrandom -num=1000000 $rm -rf /dev/shm/dbbench/* && TEST_TMPDIR=/dev/shm ./db_bench -benchmarks=fillrandom -num=1000000 -disable_wal=true ``` Master is at a78503bd6c80a3c4137df1962a972fe406b4d90b. ``` | | readrandom | fillrandom | | master | 15.53 MB/s | 25.97 MB/s | | PR5502 | 16.70 MB/s | 25.80 MB/s | ``` Pull Request resolved: https://github.com/facebook/rocksdb/pull/5502 Differential Revision: D16340894 Pulled By: riversand963 fbshipit-source-id: 51132cf792be07d1efc3ac33f5768c4ee2608bb8 | 25 July 2019, 22:27:39 UTC |
0d16fad | Chad Austin | 25 July 2019, 18:42:31 UTC | rocksdb: build on macosx Summary: Make rocksdb build on macos: 1) Reorganize OS-specific flags and deps in rocksdb/src/TARGETS 2) Sandbox fbcode apple platform builds from repo root include path (which conflicts with layout of rocksdb headers). 3) Fix dep-translation for bzip2. Reviewed By: andrewjcg Differential Revision: D15125826 fbshipit-source-id: 8e143c689b88b5727e54881a5e80500f879a320b | 25 July 2019, 18:45:54 UTC |
d9dc6b4 | Maysam Yabandeh | 24 July 2019, 22:17:55 UTC | Declare snapshot refresh incompatible with delete range (#5625) Summary: The ::snap_refresh_nanos option is incompatible with the DeleteRange feature. Currently the code relies on range_del_agg.IsEmpty() to disable it if there are range delete tombstones. However ::IsEmpty does not guarantee that there are no RangeDelete tombstones in the SST files. The patch declares the two features incompatible in inline comments until we later figure out how to properly detect the presence of RangeDelete tombstones in compaction inputs. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5625 Differential Revision: D16468218 Pulled By: maysamyabandeh fbshipit-source-id: bd7beca278bc7e1db75e7ee4522d05a3a6ca86f4 | 24 July 2019, 22:22:14 UTC |
7260347 | sdong | 24 July 2019, 22:11:36 UTC | Auto Roll Logger to add some extra checking to avoid segfault. (#5623) Summary: AutoRollLogger sets GetStatus() to be non-OK if the log file fails to be created and logger_ is set to null. It is left to the caller to check the status before calling functions of this class. There is no harm in adding another null check on logger_ before using it, so that if users misuse the logger, they don't get a segfault. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5623 Test Plan: Run all existing tests. Differential Revision: D16466251 fbshipit-source-id: 262b885eec28bf741d91e9191c3cb5ff964e1bce | 24 July 2019, 22:14:40 UTC |
5daa426 | sdong | 24 July 2019, 19:04:58 UTC | Fix regression bug of Auto rolling logger when handling failures (#5622) Summary: The auto roll logger fails to handle file creation errors in the correct way, which may expose users to a segfault condition. Fix it. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5622 Test Plan: Add a unit test on creating a file under a non-existing directory. The test fails without the fix. Differential Revision: D16460853 fbshipit-source-id: e96da4bef4f16db171ea04a11b2ec5a9448ddbde | 24 July 2019, 19:08:40 UTC |
66b524a | Manuel Ung | 24 July 2019, 17:21:18 UTC | Simplify WriteUnpreparedTxnReadCallback and fix some comments (#5621) Summary: Simplify WriteUnpreparedTxnReadCallback so we just have one function `CalcMaxVisibleSeq`. Also, there's no need for the read callback to hold onto the transaction any more, so just hold the set of unprep_seqs, reducing the amount of indirection in `IsVisibleFullCheck`. Also, some comments about using the transaction snapshot were out of date, so remove them. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5621 Differential Revision: D16459883 Pulled By: lth fbshipit-source-id: cd581323fd18982e817d99af57b6eaba59e599bb | 24 July 2019, 17:25:26 UTC |
f5b951f | sdong | 24 July 2019, 02:34:56 UTC | Fix wrong info log printing for num_range_deletions (#5617) Summary: num_range_deletions printing is wrong in this log line: 2019/07/18-12:59:15.309271 7f869f9ff700 EVENT_LOG_v1 {"time_micros": 1563479955309228, "cf_name": "5", "job": 955, "event": "table_file_creation", "file_number": 34579, "file_size": 2239842, "table_properties": {"data_size": 1988792, "index_size": 3067, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 0, "index_value_is_delta_encoded": 1, "filter_size": 170821, "raw_key_size": 1951792, "raw_average_key_size": 16, "raw_value_size": 1731720, "raw_average_value_size": 14, "num_data_blocks": 199, "num_entries": 121987, "num_deletions": 15184, "num_merge_operands": 86512, "num_range_deletions": 86512, "format_version": 0, "fixed_key_len": 0, "filter_policy": "rocksdb.BuiltinBloomFilter", "column_family_name": "5", "column_family_id": 5, "comparator": "leveldb.BytewiseComparator", "merge_operator": "PutOperator", "prefix_extractor_name": "rocksdb.FixedPrefix.7", "property_collectors": "[]", "compression": "ZSTD", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; ", "creation_time": 1563479951, "oldest_key_time": 0, "file_creation_time": 1563479954}} It actually prints "num_merge_operands" number. Fix it. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5617 Test Plan: Just build. Differential Revision: D16453110 fbshipit-source-id: fc1024b3cd5650312ed47a1379f0d2cf8b2d8a8f | 24 July 2019, 02:38:16 UTC |
cfcf045 | Mark Rambacher | 24 July 2019, 00:08:26 UTC | The ObjectRegistry class replaces the Registrar and NewCustomObjects.… (#5293) Summary: The ObjectRegistry class replaces the Registrar and NewCustomObjects. Objects are registered with the registry by Type (the class must implement the static const char *Type() method). This change is necessary for a few reasons: - By having a class (rather than static template instances), the class can be passed between compilation units, meaning that objects could be registered and shared from a dynamic library with an executable. - By having a class with instances, different units could have different objects registered. This could be useful if, for example, one Option allowed for a dynamic library and one did not. When combined with some other PRs (being able to load shared libraries, a Configurable interface to configure objects to/from string), this code will allow objects in external shared libraries to be added to a RocksDB image at run-time, rather than requiring every new extension to be built into the main library and called explicitly by every program. Test plan (on riversand963's devserver) ``` $COMPILE_WITH_ASAN=1 make -j32 all && sleep 1 && make check ``` All tests pass. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5293 Differential Revision: D16363396 Pulled By: riversand963 fbshipit-source-id: fbe4acb615bfc11103eef40a0b288845791c0180 | 24 July 2019, 00:13:05 UTC |
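A type-keyed registry of the kind the ObjectRegistry entry above describes can be sketched as below. This is a simplified illustration under assumed names (`Register`, `NewObject`, the `Cache`/`LRUCache` example types); RocksDB's actual ObjectRegistry API differs, but the key ideas match: objects are looked up by their class's static `Type()` discriminator plus a name, and the registry is an instance rather than static template state, so it can be passed across compilation units or populated from a dynamic library:

```cpp
#include <functional>
#include <map>
#include <memory>
#include <string>

class ObjectRegistry {
 public:
  // Register a factory under (T::Type(), name).
  template <typename T>
  void Register(const std::string& name, std::function<T*()> factory) {
    factories_[std::string(T::Type()) + "/" + name] =
        [factory] { return static_cast<void*>(factory()); };
  }

  // Create a new T by name, or nullptr if nothing was registered.
  template <typename T>
  T* NewObject(const std::string& name) const {
    auto it = factories_.find(std::string(T::Type()) + "/" + name);
    return it == factories_.end() ? nullptr
                                  : static_cast<T*>(it->second());
  }

 private:
  std::map<std::string, std::function<void*()>> factories_;
};

// Registered classes expose a static Type() discriminator.
struct Cache {
  static const char* Type() { return "Cache"; }
  virtual ~Cache() = default;
  virtual const char* Name() const = 0;
};

struct LRUCache : Cache {
  const char* Name() const override { return "lru"; }
};
```

Because the factory map lives in an instance, two registries can hold different sets of factories, which is the "different units could have different objects registered" property the commit calls out.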
092f417 | Levi Tamasi | 23 July 2019, 22:57:43 UTC | Move the uncompression dictionary object out of the block cache (#5584) Summary: RocksDB has historically stored uncompression dictionary objects in the block cache as opposed to storing just the block contents. This necessitated evicting the object upon table close. With the new code, only the raw blocks are stored in the cache, eliminating the need for eviction. In addition, the patch makes the following improvements: 1) Compression dictionary blocks are now prefetched/pinned similarly to index/filter blocks. 2) A copy operation got eliminated when the uncompression dictionary is retrieved. 3) Errors related to retrieving the uncompression dictionary are propagated as opposed to silently ignored. Note: the patch temporarily breaks the compression dictionary eviction stats. They will be fixed in a separate phase. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5584 Test Plan: make asan_check Differential Revision: D16344151 Pulled By: ltamasi fbshipit-source-id: 2962b295f5b19628f9da88a3fcebbce5a5017a7b | 23 July 2019, 23:01:44 UTC |
6b7fcc0 | Eli Pozniansky | 23 July 2019, 22:30:59 UTC | Improve CPU Efficiency of ApproximateSize (part 1) (#5613) Summary: 1. Avoid creating the iterator in order to call BlockBasedTable::ApproximateOffsetOf(). Instead, directly call into it. 2. Optimize BlockBasedTable::ApproximateOffsetOf() keeps the index block iterator in stack. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5613 Differential Revision: D16442660 Pulled By: elipoz fbshipit-source-id: 9320be3e918c139b10e758cbbb684706d172e516 | 23 July 2019, 22:34:33 UTC |
3782acc | sdong | 23 July 2019, 20:56:52 UTC | ldb sometimes specify a string-append merge operator (#5607) Summary: Right now, ldb cannot scan a DB with merge operands by default. There is no harm in providing a general merge operator so that it can at least print out something. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5607 Test Plan: Run ldb against a DB with merge operands and see the outputs. Differential Revision: D16442634 fbshipit-source-id: c66c414ec07f219cfc6e6ec2cc14c783ee95df54 | 23 July 2019, 21:25:18 UTC |
112702a | anand76 | 23 July 2019, 18:12:25 UTC | Parallelize file_reader_writer_test in order to reduce timeouts Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/5608 Test Plan: make check buck test mode/dev-tsan internal_repo_rocksdb/repo:file_reader_writer_test -- --run-disabled Differential Revision: D16441796 Pulled By: anand1976 fbshipit-source-id: afbb88a9fcb1c0ba22215118767e8eab3d1d6a4a | 23 July 2019, 18:50:10 UTC |
eae8327 | Manuel Ung | 23 July 2019, 15:04:58 UTC | WriteUnPrepared: improve read your own write functionality (#5573) Summary: There are a number of fixes in this PR (with most bugs found via the added stress tests): 1. Re-enable reseek optimization. This was initially disabled to avoid infinite loops in https://github.com/facebook/rocksdb/pull/3955 but this can be resolved by remembering not to reseek after a reseek has already been done. This problem only affects forward iteration in `DBIter::FindNextUserEntryInternal`, as we already disable reseeking in `DBIter::FindValueForCurrentKeyUsingSeek`. 2. Verify that ReadOption.snapshot can be safely used for iterator creation. Some snapshots would not give correct results because snapshot validation would not be enforced, breaking some assumptions in Prev() iteration. 3. In the non-snapshot Get() case, reads done at `LastPublishedSequence` may not be enough, because unprepared sequence numbers are not published. Use `std::max(published_seq, max_visible_seq)` to do lookups instead. 4. Add stress test to test reading your own writes. 5. Minor bug in the allow_concurrent_memtable_write case where we forgot to pass in batch_per_txn_. 6. Minor performance optimization in `CalcMaxUnpreparedSequenceNumber` by assigning by reference instead of value. 7. Add some more comments everywhere. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5573 Differential Revision: D16276089 Pulled By: lth fbshipit-source-id: 18029c944eb427a90a87dee76ac1b23f37ec1ccb | 23 July 2019, 15:08:19 UTC |
327c480 | Maysam Yabandeh | 23 July 2019, 03:01:25 UTC | Disable refresh snapshot feature by default (#5606) Summary: There are concerns about the correctness of this patch. Disabling by default until the concerns are resolved. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5606 Differential Revision: D16428064 Pulled By: maysamyabandeh fbshipit-source-id: a89280f0ea85796c9c9dfbfd9a8e91dad9b000b3 | 23 July 2019, 03:05:00 UTC |
66b5613 | sdong | 23 July 2019, 01:53:03 UTC | row_cache to share entry for recent snapshots (#5600) Summary: Right now, users cannot take advantage of the row cache, unless no snapshot is used, or Get() is repeated for the same snapshots. This limits the usage of the row cache. This change eliminates this restriction in some cases. If the snapshot used is newer than the largest sequence number in the file, and the write callback function is not registered, the same row cache key is used as if no snapshot were given. We still need the callback function restriction for now because the callback function may filter out different keys for different snapshots even if the snapshots are new. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5600 Test Plan: Add a unit test. Differential Revision: D16386616 fbshipit-source-id: 6b7d214bd215d191b03ccf55926ad4b703ec2e53 | 23 July 2019, 01:56:19 UTC |
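The key-sharing rule above can be sketched as a single function (the function name, key format, and sequence-number-0 convention are all illustrative assumptions, not RocksDB's actual row cache key layout):

```cpp
#include <cstdint>
#include <string>

// If the read snapshot is newer than every sequence number in the file (and
// no read callback is installed), the file's contents look identical to a
// no-snapshot read, so the same row cache entry can be shared across
// different new snapshots. Older snapshots keep distinct keys.
std::string RowCacheKey(uint64_t file_number, const std::string& user_key,
                        uint64_t snapshot_seq, uint64_t file_largest_seq,
                        bool has_read_callback) {
  uint64_t seq_for_key;
  if (!has_read_callback && snapshot_seq >= file_largest_seq) {
    seq_for_key = 0;  // share the entry used for no-snapshot reads
  } else {
    seq_for_key = snapshot_seq;  // must stay distinct per snapshot
  }
  return std::to_string(file_number) + "_" + user_key + "_" +
         std::to_string(seq_for_key);
}
```

Two Gets at different snapshots both newer than the file collapse to one cache entry, which is the restriction this change lifts.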
3778470 | haoyuhuang | 23 July 2019, 00:47:54 UTC | Block cache analyzer: Compute correlation of features and human readable trace file. (#5596) Summary: - Compute correlation between a few features and predictions, e.g., number of accesses since the last access vs number of accesses till the next access on a block. - Output human readable trace file so python can consume it. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5596 Test Plan: make clean && USE_CLANG=1 make check -j32 Differential Revision: D16373200 Pulled By: HaoyuHuang fbshipit-source-id: c848d26bc2e9210461f317d7dbee42d55be5a0cc | 23 July 2019, 00:51:34 UTC |
a78503b | Yanqin Jin | 22 July 2019, 21:35:03 UTC | Temporarily disable snapshot list refresh for atomic flush stress test (#5581) Summary: The atomic flush test started to fail after https://github.com/facebook/rocksdb/issues/5099. Then https://github.com/facebook/rocksdb/issues/5278 provided a fix after which the same error occurred much less frequently. However, it still occurs occasionally. Not sure what the root cause is. This PR disables the feature of snapshot list refresh, and we should keep an eye on the failure in the future. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5581 Differential Revision: D16295985 Pulled By: riversand963 fbshipit-source-id: c9e62e65133c52c21b07097de359632ca62571e4 | 22 July 2019, 21:38:16 UTC |
0be1fee | Eli Pozniansky | 19 July 2019, 21:55:07 UTC | Added .watchmanconfig file to rocksdb repo (#5593) Summary: Added .watchmanconfig file to rocksdb repo. It is currently .gitignored. This allows to auto sync modified files with watchman when editing them remotely. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5593 Differential Revision: D16363860 Pulled By: elipoz fbshipit-source-id: 5ae221e21c6c757ceb08877771550d508f773d55 | 19 July 2019, 22:00:33 UTC |
4f7ba3a | anand76 | 19 July 2019, 20:20:45 UTC | Fix tsan and valgrind failures in import_column_family_test Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/5598 Test Plan: tsan_check valgrind_test Differential Revision: D16380167 Pulled By: anand1976 fbshipit-source-id: 2d0caea7d2d02a9606457f62811175d762b89d5c | 19 July 2019, 20:25:36 UTC |
c129c75 | Eli Pozniansky | 19 July 2019, 18:54:38 UTC | Added log_readahead_size option to control prefetching for Log::Reader (#5592) Summary: Added log_readahead_size option to control prefetching for Log::Reader. This is mostly useful for reading a remotely located log, as it can save the number of round-trips when reading it. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5592 Differential Revision: D16362989 Pulled By: elipoz fbshipit-source-id: c5d4d5245a44008cd59879640efff70c091ad3e8 | 19 July 2019, 19:00:19 UTC |
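Why readahead saves round-trips when reading a remote log can be shown with a toy buffered reader. Names (`ReadaheadReader`, `backend_reads`) are illustrative and not Log::Reader's API; the point is only that fetching `readahead_size` bytes per backend call lets many small record reads share one round-trip:

```cpp
#include <algorithm>
#include <cstddef>
#include <string>

class ReadaheadReader {
 public:
  ReadaheadReader(std::string source, size_t readahead_size)
      : source_(std::move(source)), readahead_(readahead_size) {}

  // Read n bytes sequentially, refilling the buffer from the "backend"
  // (one simulated round-trip) only when it runs dry.
  std::string Read(size_t n) {
    std::string out;
    while (out.size() < n && (pos_ < buffer_end_ || Refill())) {
      size_t take = std::min(n - out.size(), buffer_end_ - pos_);
      out.append(source_, pos_, take);
      pos_ += take;
    }
    return out;
  }

  int backend_reads() const { return backend_reads_; }

 private:
  bool Refill() {
    if (buffer_end_ >= source_.size()) return false;
    ++backend_reads_;  // one round-trip fetches up to readahead_ bytes
    buffer_end_ = std::min(source_.size(), buffer_end_ + readahead_);
    return true;
  }

  std::string source_;
  size_t readahead_;
  size_t pos_ = 0, buffer_end_ = 0;
  int backend_reads_ = 0;
};
```

Reading 100 bytes as ten 10-byte records with a 32-byte readahead costs 4 backend reads instead of 10, which is the round-trip saving the option targets.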
6bb3b4b | sdong | 19 July 2019, 18:31:52 UTC | ldb idump to support non-default column families. (#5594) Summary: ldb idump now only works for default column family. Extend it. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5594 Test Plan: Compile and run the tool against a multiple CF DB. Differential Revision: D16380684 fbshipit-source-id: bfb8af36fdad1806837c90aaaab492d71528aceb | 19 July 2019, 18:36:59 UTC |
abd1fdd | anand76 | 18 July 2019, 21:38:23 UTC | Fix asan_check failures Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/5589 Test Plan: TEST_TMPDIR=/dev/shm/rocksdb COMPILE_WITH_ASAN=1 OPT=-g make J=64 -j64 asan_check Differential Revision: D16361081 Pulled By: anand1976 fbshipit-source-id: 09474832b9cfb318a840d4b633e22dfad105d58c | 18 July 2019, 21:51:25 UTC |
3a6e83b | Venki Pallipadi | 18 July 2019, 17:13:05 UTC | HISTORY update for export and import column family APIs Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/5587 Differential Revision: D16359919 fbshipit-source-id: cfd9c448d79a8b8e7ac1d2b661d10151df269dba | 18 July 2019, 17:16:38 UTC |
ec2b996 | anand76 | 18 July 2019, 05:02:49 UTC | Fix LITE mode build failure Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/5588 Test Plan: make LITE=1 all check Differential Revision: D16354543 Pulled By: anand1976 fbshipit-source-id: 327a171439e183ac3a5e5057c511d6bca445e97d | 18 July 2019, 05:06:12 UTC |
9f5cfb8 | Eli Pozniansky | 18 July 2019, 00:01:30 UTC | Fix for ReadaheadSequentialFile crash in ldb_cmd_test (#5586) Summary: Fixes a corner-case crash when no data was read from the file but the status was still OK. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5586 Differential Revision: D16348117 Pulled By: elipoz fbshipit-source-id: f97973308024f020d8be79ca3c56466b84d80656 | 18 July 2019, 00:04:39 UTC |
8a008d4 | haoyuhuang | 17 July 2019, 20:02:00 UTC | Block access tracing: Trace referenced key for Get on non-data blocks. (#5548) Summary: This PR traces the referenced key for Get for all types of blocks. This is useful when evaluating hybrid row-block caches. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5548 Test Plan: make clean && USE_CLANG=1 make check -j32 Differential Revision: D16157979 Pulled By: HaoyuHuang fbshipit-source-id: f6327411c9deb74e35e22a35f66cdbae09ab9d87 | 17 July 2019, 20:05:58 UTC |
22ce462 | Venki Pallipadi | 17 July 2019, 19:22:21 UTC | Export Import sst files (#5495) Summary: Refresh of the earlier change here - https://github.com/facebook/rocksdb/issues/5135 This is a review request for the code change needed for https://github.com/facebook/rocksdb/issues/3469 "Add support for taking snapshot of a column family and creating column family from a given CF snapshot". We have an implementation for this that we have been testing internally. Two new APIs together provide this functionality. (1) ExportColumnFamily() - modelled after CreateCheckpoint(): it exports all live SST files of a specified column family into export_dir, returning SST file information in metadata. SST files are created as hard links when the specified directory is in the same partition as the db directory, and copied otherwise; export_dir must not already exist and is created by this API; a flush is always triggered. Signature: virtual Status ExportColumnFamily(ColumnFamilyHandle* handle, const std::string& export_dir, ExportImportFilesMetaData** metadata); Internally, the API will DisableFileDeletions(), GetColumnFamilyMetaData(), parse through the metadata creating links/copies of all the sst files, EnableFileDeletions(), and complete the call by returning the list of file metadata. (2) CreateColumnFamilyWithImport() - modelled after IngestExternalFile(), but invoked only during CF creation: it creates a new column family named column_family_name and imports the external SST files specified in metadata into it. External SST files can be created using SstFileWriter, or exported from a particular column family in an existing DB. An option in import_options specifies whether the external files are copied or moved (default is copy). When the option specifies copy, managing the files at external_file_path is the caller's responsibility; when it specifies move, the call ensures that the specified files at external_file_path are deleted on successful return and are not modified on any error return. On error return, the column family handle returned will be nullptr; the column family will be present on successful return and absent on error return, but may be present after any crash during this call. Signature: virtual Status CreateColumnFamilyWithImport(const ColumnFamilyOptions& options, const std::string& column_family_name, const ImportColumnFamilyOptions& import_options, const ExportImportFilesMetaData& metadata, ColumnFamilyHandle** handle); Internally, this API creates a new CF, parses all the sst files, and adds them to the specified column family at the same level and with the same sequence numbers as in the metadata, performing safety checks for overlaps between the sst files being imported. If an incoming sequence number is higher than the current local sequence number, the local sequence number is updated to reflect this. Note that as the sst files are being moved across column families, the column family name recorded in each sst file will no longer match the actual column family on the destination DB; the API does not modify the column family name or id in the sst files being imported. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5495 Differential Revision: D16018881 fbshipit-source-id: 9ae2251025d5916d35a9fc4ea4d6707f6be16ff9 | 17 July 2019, 19:27:14 UTC |
a3c1832 | Yuqi Gu | 17 July 2019, 18:19:06 UTC | Arm64 CRC32 parallel computation optimization for RocksDB (#5494) Summary: Crc32c parallel computation optimization. The algorithm comes from the Intel whitepaper: [crc-iscsi-polynomial-crc32-instruction-paper](https://www.intel.com/content/dam/www/public/us/en/documents/white-papers/crc-iscsi-polynomial-crc32-instruction-paper.pdf). Input data is divided into three equal-sized blocks, processed as three parallel streams (crc0, crc1, crc2) per 1024 bytes; one block is 42 (BLK_LENGTH) * 8 (step length: crc32c_u64) bytes. 1. crc32c_test: ``` [==========] Running 4 tests from 1 test case. [----------] Global test environment set-up. [----------] 4 tests from CRC [ RUN ] CRC.StandardResults [ OK ] CRC.StandardResults (1 ms) [ RUN ] CRC.Values [ OK ] CRC.Values (0 ms) [ RUN ] CRC.Extend [ OK ] CRC.Extend (0 ms) [ RUN ] CRC.Mask [ OK ] CRC.Mask (0 ms) [----------] 4 tests from CRC (1 ms total) [----------] Global test environment tear-down [==========] 4 tests from 1 test case ran. (1 ms total) [ PASSED ] 4 tests. ``` 2. RocksDB benchmark: db_bench --benchmarks="crc32c" ``` Linear Arm crc32c: crc32c: 1.005 micros/op 995133 ops/sec; 3887.2 MB/s (4096 per op) ``` ``` Parallel optimization with Armv8 crypto extension: crc32c: 0.419 micros/op 2385078 ops/sec; 9316.7 MB/s (4096 per op) ``` It gets ~2.4x speedup compared to linear Arm crc32c instructions. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5494 Differential Revision: D16340806 fbshipit-source-id: 95dae9a5b646fd20a8303671d82f17b2e162e945 | 17 July 2019, 18:22:38 UTC |
74fb7f0 | Eli Pozniansky | 17 July 2019, 02:13:35 UTC | Cleaned up and simplified LRU cache implementation (#5579) Summary: The 'refs' field in LRUHandle now counts only external references, since we already have the IN_CACHE flag. This simplifies the reference accounting logic a bit. Also cleaned up a few asserts and comments to be more readable. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5579 Differential Revision: D16286747 Pulled By: elipoz fbshipit-source-id: 7186d88f80f512ce584d0a303437494b5cbefd7f | 17 July 2019, 02:17:45 UTC |
0f4d90e | Eli Pozniansky | 17 July 2019, 01:18:07 UTC | Added support for sequential read-ahead file (#5580) Summary: Added support for sequential read-ahead file that can prefetch the read data and later serve it from internal cache buffer. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5580 Differential Revision: D16287082 Pulled By: elipoz fbshipit-source-id: a3e7ad9643d377d39352ff63058ce050ec31dcf3 | 17 July 2019, 01:21:18 UTC |
699a569 | sdong | 16 July 2019, 23:27:32 UTC | Remove RandomAccessFileReader.for_compaction_ (#5572) Summary: RandomAccessFileReader.for_compaction_ doesn't seem to be used anymore. Remove it. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5572 Test Plan: USE_CLANG=1 make all check -j Differential Revision: D16286178 fbshipit-source-id: aa338049761033dfbe5e8b1707bbb0be2df5be7e | 16 July 2019, 23:32:18 UTC |
0acaa1a | Manuel Ung | 16 July 2019, 22:19:45 UTC | WriteUnPrepared: use tracked_keys_ to track keys needed for rollback (#5562) Summary: Currently, we are tracking keys we need to rollback via a separate structure specific to WriteUnprepared in write_set_keys_. We already have a data structure called tracked_keys_ used to track which keys to unlock on transaction termination. This is exactly what we want, since we should only rollback keys that we have locked anyway. Save some memory by reusing that data structure instead of making our own. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5562 Differential Revision: D16206484 Pulled By: lth fbshipit-source-id: 5894d2b824a4b19062d84adbd6e6e86f00047488 | 16 July 2019, 22:24:56 UTC |
3bde41b | Levi Tamasi | 16 July 2019, 20:11:23 UTC | Move the filter readers out of the block cache (#5504) Summary: Currently, when the block cache is used for the filter block, it is not really the block itself that is stored in the cache but a FilterBlockReader object. Since this object is not pure data (it has, for instance, pointers that might dangle, including in one case a back pointer to the TableReader), it's not really sharable. To avoid the issues around this, the current code erases the cache entries when the TableReader is closed (which, BTW, is not sufficient since a concurrent TableReader might have picked up the object in the meantime). Instead of doing this, the patch moves the FilterBlockReader out of the cache altogether, and decouples the filter reader object from the filter block. In particular, instead of the TableReader owning, or caching/pinning the FilterBlockReader (based on the customer's settings), with the change the TableReader unconditionally owns the FilterBlockReader, which in turn owns/caches/pins the filter block. This change also enables us to reuse the code paths historically used for data blocks for filters as well. Note: Eviction statistics for filter blocks are temporarily broken. We plan to fix this in a separate phase. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5504 Test Plan: make asan_check Differential Revision: D16036974 Pulled By: ltamasi fbshipit-source-id: 770f543c5fb4ed126fd1e04bfd3809cf4ff9c091 | 16 July 2019, 20:14:58 UTC |
cd25203 | Jim Lin | 15 July 2019, 19:55:37 UTC | Fix memory leak in `rocksdb_wal_iter_get_batch` function (#5515) Summary: `wal_batch.writeBatchPtr.release()` gives up the ownership of the original `WriteBatch`, but there is no new owner, which causes a memory leak. The patch is simple: removing `release()` prevents the ownership change, and `std::move` is for speed. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5515 Differential Revision: D16264281 Pulled By: riversand963 fbshipit-source-id: 51c556b7a1c977325c3aa24acb636303847151fa | 15 July 2019, 19:59:39 UTC |
6e8a135 | Tomas Kolda | 15 July 2019, 19:15:21 UTC | Fix regression - 100% CPU - Regression for Windows 7 (#5557) Summary: Fixes https://github.com/facebook/rocksdb/issues/5552 Pull Request resolved: https://github.com/facebook/rocksdb/pull/5557 Differential Revision: D16266329 fbshipit-source-id: a8f6b50298a6f7c8d6c7e172bb26dd7eb6bd8a4d | 15 July 2019, 19:19:49 UTC |
b0259e4 | Zhongyi Xie | 15 July 2019, 18:39:18 UTC | add more tracing for stats history (#5566) Summary: Sample info log output from db_bench: In-memory: ``` 2019/07/12-21:39:19.478490 7fa01b3f5700 [_impl/db_impl.cc:702] ------- PERSISTING STATS ------- 2019/07/12-21:39:19.478633 7fa01b3f5700 [_impl/db_impl.cc:753] Storing 145 stats with timestamp 1562992759 to in-memory stats history 2019/07/12-21:39:19.478670 7fa01b3f5700 [_impl/db_impl.cc:766] [Pre-GC] In-memory stats history size: 1051218 bytes, slice count: 103 2019/07/12-21:39:19.478704 7fa01b3f5700 [_impl/db_impl.cc:775] [Post-GC] In-memory stats history size: 1051218 bytes, slice count: 102 ``` On-disk: ``` 2019/07/12-21:48:53.862548 7f24943f5700 [_impl/db_impl.cc:702] ------- PERSISTING STATS ------- 2019/07/12-21:48:53.862553 7f24943f5700 [_impl/db_impl.cc:709] Reading 145 stats from statistics 2019/07/12-21:48:53.862852 7f24943f5700 [_impl/db_impl.cc:737] Writing 145 stats with timestamp 1562993333 to persistent stats CF succeeded ``` ``` 2019/07/12-21:48:51.861711 7f24943f5700 [_impl/db_impl.cc:702] ------- PERSISTING STATS ------- 2019/07/12-21:48:51.861729 7f24943f5700 [_impl/db_impl.cc:709] Reading 145 stats from statistics 2019/07/12-21:48:51.861921 7f24943f5700 [_impl/db_impl.cc:732] Writing to persistent stats CF failed -- Result incomplete: Write stall ... 2019/07/12-21:48:51.873032 7f2494bf6700 [WARN] [lumn_family.cc:749] [default] Stopping writes because we have 2 immutable memtables (waiting for flush), max_write_buffer_number is set to 2 ``` Pull Request resolved: https://github.com/facebook/rocksdb/pull/5566 Differential Revision: D16258187 Pulled By: miasantreble fbshipit-source-id: 292497099b941418590ed4312411bee36e244dc5 | 15 July 2019, 18:49:17 UTC |
f064d74 | Yikun Jiang | 15 July 2019, 18:16:55 UTC | Cleanup the Arm64 CRC32 unused warning (#5565) Summary: When 'HAVE_ARM64_CRC' is set, the methods below: - bool rocksdb::crc32c::isSSE42() - bool rocksdb::crc32c::isPCLMULQDQ() are defined but not used, and unused-function warnings are raised when building RocksDB. This patch cleans up these warnings by adding an ifndef: when building under HAVE_ARM64_CRC, `isSSE42` and `isPCLMULQDQ` are not defined. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5565 Differential Revision: D16233654 fbshipit-source-id: c32a9dda7465dbf65f9ccafef159124db92cdffd | 15 July 2019, 18:20:26 UTC |
68d43b4 | haoyuhuang | 13 July 2019, 01:52:48 UTC | A Python script to plot graphs for csv files generated by block_cache_trace_analyzer Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/5563 Test Plan: Manually run the script on files generated by block_cache_trace_analyzer. Differential Revision: D16214400 Pulled By: HaoyuHuang fbshipit-source-id: 94485eed995e9b2b63e197c5dfeb80129fa7897f | 13 July 2019, 01:56:20 UTC |
6187661 | Sergei Petrunia | 13 July 2019, 00:26:19 UTC | Fix MyRocks compile warnings-treated-as-errors on Fedora 30, gcc 9.1.1 (#5553) Summary: - Provide assignment operator in CompactionStats - Provide a copy constructor for FileDescriptor - Remove std::move from "return std::move(t)" in BoundedQueue Pull Request resolved: https://github.com/facebook/rocksdb/pull/5553 Differential Revision: D16230170 fbshipit-source-id: fd7c6e52390b2db1be24141e25649cf62424d078 | 13 July 2019, 00:30:51 UTC |
3e9c5a3 | haoyuhuang | 12 July 2019, 23:52:15 UTC | Block cache analyzer: Add more stats (#5516) Summary: This PR provides more command line options for block cache analyzer to better understand block cache access pattern. -analyze_bottom_k_access_count_blocks -analyze_top_k_access_count_blocks -reuse_lifetime_labels -reuse_lifetime_buckets -analyze_callers -access_count_buckets -analyze_blocks_reuse_k_reuse_window Pull Request resolved: https://github.com/facebook/rocksdb/pull/5516 Test Plan: make clean && COMPILE_WITH_ASAN=1 make check -j32 Differential Revision: D16037440 Pulled By: HaoyuHuang fbshipit-source-id: b9a4ac0d4712053fab910732077a4d4b91400bc8 | 12 July 2019, 23:55:34 UTC |
1a59b6e | haoyuhuang | 11 July 2019, 19:40:08 UTC | Cache simulator: Add a ghost cache for admission control and a hybrid row-block cache. (#5534) Summary: This PR adds a ghost cache for admission control. Specifically, it admits an entry on its second access. It also adds a hybrid row-block cache that caches the referenced key-value pairs of a Get/MultiGet request instead of its blocks. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5534 Test Plan: make clean && COMPILE_WITH_ASAN=1 make check -j32 Differential Revision: D16101124 Pulled By: HaoyuHuang fbshipit-source-id: b99edda6418a888e94eb40f71ece45d375e234b1 | 11 July 2019, 19:43:29 UTC |
82d8ca8 | Yanqin Jin | 10 July 2019, 18:26:22 UTC | Upload db directory during cleanup for certain tests (#5554) Summary: Add an extra cleanup step so that db directory can be saved and uploaded. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5554 Reviewed By: yancouto Differential Revision: D16168844 Pulled By: riversand963 fbshipit-source-id: ec7b2cee5f11c7d388c36531f8b076d648e2fb19 | 10 July 2019, 18:29:55 UTC |