849a8c0 fix sign compare warnings (#5651) Summary: Fix -Wsign-compare warnings for gcc9. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5651 Test Plan: Tested with ubuntu19.10+gcc9 Differential Revision: D16567428 fbshipit-source-id: 730b2704d42ba0c4e4ea946a3199bbb34be4c25c 30 July 2019, 21:12:54 UTC
399f477 WriteUnPrepared: Use WriteUnpreparedTxnReadCallback for MultiGet (#5634) Summary: The `TransactionTest.MultiGetBatchedTest` were failing with unprepared batches because we were not using the correct callbacks. Override MultiGet to pass down the correct ReadCallback. A similar problem is also fixed in WritePrepared. This PR also fixes an issue similar to (https://github.com/facebook/rocksdb/pull/5147), but for MultiGet instead of Get. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5634 Differential Revision: D16552674 Pulled By: lth fbshipit-source-id: 736eaf8e919c6b13d5f5655b1c0d36b57ad04804 30 July 2019, 00:56:13 UTC
e648c1d Cache simulator: Optimize hybrid row-block cache. (#5616) Summary: This PR optimizes the hybrid row-block cache simulator. If a Get request hits the cache, we treat all its future accesses as hits. Consider a Get request (no snapshot) accesses multiple files, e.g, file1, file2, file3. We construct the row key as "fdnumber_key_0". Before this PR, if it hits the cache when searching the key in file1, we continue to process its accesses in file2 and file3 which is unnecessary. With this PR, if "file1_key_0" is in the cache, we treat all future accesses of this Get request as hits. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5616 Differential Revision: D16453187 Pulled By: HaoyuHuang fbshipit-source-id: 56f3169cc322322305baaf5543226a0824fae19f 29 July 2019, 17:58:15 UTC
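The short-circuit described in #5616 can be sketched as follows. This is a minimal, self-contained illustration (not the actual simulator code): it assumes the row key format "fdnumber_key_0" from the commit message and counts every remaining file access of a Get as a hit once one per-file row key hits.

```cpp
#include <cstdint>
#include <string>
#include <unordered_set>
#include <vector>

// Sketch of the optimized hybrid row-block cache simulator: a Get request
// (no snapshot) that would consult several files probes per-file row keys,
// and once any of them hits, all remaining accesses of that Get are
// counted as hits without further probing.
struct HybridRowBlockCacheSketch {
  std::unordered_set<std::string> row_cache;  // row keys currently cached

  // Processes one Get that would consult `files` in order.
  // Returns the number of accesses counted as hits.
  int ProcessGet(const std::string& key, const std::vector<uint64_t>& files) {
    int hits = 0;
    bool remaining_are_hits = false;
    for (uint64_t fd : files) {
      std::string row_key = std::to_string(fd) + "_" + key + "_0";
      if (remaining_are_hits || row_cache.count(row_key) > 0) {
        // One hit makes all later accesses of this Get hits.
        remaining_are_hits = true;
        ++hits;
      } else {
        row_cache.insert(row_key);  // miss: admit the row entry
      }
    }
    return hits;
  }
};
```

A cold Get over files {1, 2, 3} misses three times; repeating it hits on the first file and, per the optimization, counts the remaining two accesses as hits as well.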
80d7067 Use int64_t instead of ssize_t (#5638) Summary: The ssize_t type was introduced in https://github.com/facebook/rocksdb/pull/5633, but it seems like it's a POSIX specific type. I just need a signed type to represent number of bytes, so use int64_t instead. It seems like we have a typedef from SSIZE_T for Windows, but it doesn't seem like we ever include "port/port.h" in our public header files. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5638 Differential Revision: D16526269 Pulled By: lth fbshipit-source-id: 8d3a5c41003951b74b29bc5f1d949b2b22da0cee 26 July 2019, 23:36:49 UTC
3f89af1 Reduce the number of random iterations in compact_on_deletion_collector_test (#5635) Summary: This test frequently times out under TSAN; reducing the number of random iterations to make it complete faster. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5635 Test Plan: buck test mode/dev-tsan internal_repo_rocksdb/repo:compact_on_deletion_collector_test Differential Revision: D16523505 Pulled By: ltamasi fbshipit-source-id: 6a69909bce9d204c891150fcb3d536547b3253d0 26 July 2019, 22:53:34 UTC
70c7302 Block cache simulator: Add pysim to simulate caches using reinforcement learning. (#5610) Summary: This PR implements cache eviction using reinforcement learning. It includes two implementations: 1. An implementation of Thompson Sampling for the Bernoulli Bandit [1]. 2. An implementation of LinUCB with disjoint linear models [2]. The idea is that a cache uses multiple eviction policies, e.g., MRU, LRU, and LFU. The cache learns which eviction policy is the best and uses it upon a cache miss. Thompson Sampling is contextless and does not include any features. LinUCB includes features such as level, block type, caller, column family id to decide which eviction policy to use. [1] Daniel J. Russo, Benjamin Van Roy, Abbas Kazerouni, Ian Osband, and Zheng Wen. 2018. A Tutorial on Thompson Sampling. Found. Trends Mach. Learn. 11, 1 (July 2018), 1-96. DOI: https://doi.org/10.1561/2200000070 [2] Lihong Li, Wei Chu, John Langford, and Robert E. Schapire. 2010. A contextual-bandit approach to personalized news article recommendation. In Proceedings of the 19th international conference on World wide web (WWW '10). ACM, New York, NY, USA, 661-670. DOI=http://dx.doi.org/10.1145/1772690.1772758 Pull Request resolved: https://github.com/facebook/rocksdb/pull/5610 Differential Revision: D16435067 Pulled By: HaoyuHuang fbshipit-source-id: 6549239ae14115c01cb1e70548af9e46d8dc21bb 26 July 2019, 21:41:13 UTC
41df734 WriteUnPrepared: Add new variable write_batch_flush_threshold (#5633) Summary: Instead of reusing `TransactionOptions::max_write_batch_size` for determining when to flush a write batch for write unprepared, add a new variable called `write_batch_flush_threshold` for this use case instead. Also add `TransactionDBOptions::default_write_batch_flush_threshold` which sets the default value if `TransactionOptions::write_batch_flush_threshold` is unspecified. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5633 Differential Revision: D16520364 Pulled By: lth fbshipit-source-id: d75ae5a2141ce7708982d5069dc3f0b58d250e8c 26 July 2019, 19:56:26 UTC
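The fallback between the per-transaction and DB-wide option in #5633 can be sketched like this. The struct and function names here are simplified stand-ins for the real classes, and the convention that a negative value means "unspecified" is an assumption of this sketch, not necessarily RocksDB's actual encoding.

```cpp
#include <cstdint>

// Simplified stand-ins for TransactionDBOptions / TransactionOptions.
struct TransactionDBOptionsSketch {
  int64_t default_write_batch_flush_threshold = 0;  // 0 = no limit
};
struct TransactionOptionsSketch {
  // Assumption in this sketch: negative means "unspecified", so the
  // DB-wide default applies.
  int64_t write_batch_flush_threshold = -1;
};

// Resolves the effective threshold for one transaction.
int64_t EffectiveFlushThreshold(const TransactionDBOptionsSketch& db_opts,
                                const TransactionOptionsSketch& txn_opts) {
  return txn_opts.write_batch_flush_threshold >= 0
             ? txn_opts.write_batch_flush_threshold
             : db_opts.default_write_batch_flush_threshold;
}

// A write-unprepared transaction would flush its write batch once its
// size reaches a non-zero effective threshold.
bool ShouldFlushWriteBatch(int64_t batch_size, int64_t threshold) {
  return threshold > 0 && batch_size >= threshold;
}
```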
3617287 Parallelize db_bloom_filter_test (#5632) Summary: This test frequently times out under TSAN; parallelizing it should fix this issue. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5632 Test Plan: make check buck test mode/dev-tsan internal_repo_rocksdb/repo:db_bloom_filter_test Differential Revision: D16519399 Pulled By: ltamasi fbshipit-source-id: 66e05a644d6f79c6d544255ffcf6de195d2d62fe 26 July 2019, 18:48:17 UTC
230b909 Fix PopSavePoint to merge info into the previous savepoint (#5628) Summary: Transaction::RollbackToSavePoint undoes the modifications made since the SavePoint began, and also unlocks the corresponding keys, which are tracked in the last SavePoint. Currently ::PopSavePoint simply discards these tracked keys, leaving them locked in the lock manager. This breaks a subsequent ::RollbackToSavePoint behavior as it loses track of such keys, and thus cannot unlock them. The patch fixes ::PopSavePoint by passing the tracked key information on to the previous SavePoint. Fixes https://github.com/facebook/rocksdb/issues/5618 Pull Request resolved: https://github.com/facebook/rocksdb/pull/5628 Differential Revision: D16505325 Pulled By: lth fbshipit-source-id: 2bc3b30963ab4d36d996d1f66543c93abf358980 26 July 2019, 18:39:30 UTC
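The fix in #5628 can be illustrated with a small sketch of a save-point stack. The names are illustrative, not the actual RocksDB classes: the point is that the popped save point's tracked keys are merged into the previous save point instead of being dropped.

```cpp
#include <set>
#include <string>
#include <vector>

// Sketch of the corrected PopSavePoint behavior.
struct SavePointSketch {
  std::set<std::string> tracked_keys;  // keys locked since this save point
};

struct TxnSketch {
  std::vector<SavePointSketch> save_points;

  void SetSavePoint() { save_points.push_back({}); }

  void TrackKey(const std::string& key) {
    if (!save_points.empty()) save_points.back().tracked_keys.insert(key);
  }

  // The fixed PopSavePoint: merge the popped save point's keys into the
  // previous one (the buggy version simply dropped them, leaving the
  // keys locked with no save point aware of them).
  bool PopSavePoint() {
    if (save_points.empty()) return false;
    SavePointSketch popped = save_points.back();
    save_points.pop_back();
    if (!save_points.empty()) {
      save_points.back().tracked_keys.insert(popped.tracked_keys.begin(),
                                             popped.tracked_keys.end());
    }
    return true;
  }
};
```

After popping the inner save point, a rollback to the outer one still knows about both keys and can unlock them.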
74782ce Fix target 'clean' to include parallel test binaries (#5629) Summary: The current `clean` target in the Makefile does not remove parallel test binaries. Fix this. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5629 Test Plan: (on devserver) Take file_reader_writer_test for instance. ``` $make -j32 file_reader_writer_test $make clean ``` Verify that the binary file 'file_reader_writer_test' is deleted by `make clean`. Differential Revision: D16513176 Pulled By: riversand963 fbshipit-source-id: 70acb9f56c928a494964121b86aacc0090f31ff6 26 July 2019, 16:56:09 UTC
9625a2b Added SizeApproximationOptions to DB::GetApproximateSizes (#5626) Summary: The new DB::GetApproximateSizes overload takes a SizeApproximationOptions argument, which allows adding more options/knobs to the DB::GetApproximateSizes call (beyond only the include_flags). Pull Request resolved: https://github.com/facebook/rocksdb/pull/5626 Differential Revision: D16496913 Pulled By: elipoz fbshipit-source-id: ee8c6c182330a285fa056ecfc3905a592b451720 26 July 2019, 05:42:30 UTC
ae152ee Avoid user key copying for Get/Put/Write with user-timestamp (#5502) Summary: In previous https://github.com/facebook/rocksdb/issues/5079, we added user-specified timestamp to `DB::Get()` and `DB::Put()`. Limitation is that these two functions may cause extra memory allocation and key copy. The reason is that `WriteBatch` does not allocate extra memory for timestamps because it is not aware of timestamp size, and we did not provide an API to assign/update timestamp of each key within a `WriteBatch`. We address these issues in this PR by doing the following. 1. Add a `timestamp_size_` to `WriteBatch` so that `WriteBatch` can take timestamps into account when calling `WriteBatch::Put`, `WriteBatch::Delete`, etc. 2. Add APIs `WriteBatch::AssignTimestamp` and `WriteBatch::AssignTimestamps` so that application can assign/update timestamps for each key in a `WriteBatch`. 3. Avoid key copy in `GetImpl` by adding new constructor to `LookupKey`. Test plan (on devserver): ``` $make clean && COMPILE_WITH_ASAN=1 make -j32 all $./db_basic_test --gtest_filter=Timestamp/DBBasicTestWithTimestampWithParam.PutAndGet/* $make check ``` If the API extension looks good, I will add more unit tests. Some simple benchmark using db_bench. ``` $rm -rf /dev/shm/dbbench/* && TEST_TMPDIR=/dev/shm ./db_bench -benchmarks=fillseq,readrandom -num=1000000 $rm -rf /dev/shm/dbbench/* && TEST_TMPDIR=/dev/shm ./db_bench -benchmarks=fillrandom -num=1000000 -disable_wal=true ``` Master is at a78503bd6c80a3c4137df1962a972fe406b4d90b. ``` | | readrandom | fillrandom | | master | 15.53 MB/s | 25.97 MB/s | | PR5502 | 16.70 MB/s | 25.80 MB/s | ``` Pull Request resolved: https://github.com/facebook/rocksdb/pull/5502 Differential Revision: D16340894 Pulled By: riversand963 fbshipit-source-id: 51132cf792be07d1efc3ac33f5768c4ee2608bb8 25 July 2019, 22:27:39 UTC
0d16fad rocksdb: build on macosx Summary: Make rocksdb build on macos: 1) Reorganize OS-specific flags and deps in rocksdb/src/TARGETS 2) Sandbox fbcode apple platform builds from repo root include path (which conflicts with layout of rocksdb headers). 3) Fix dep-translation for bzip2. Reviewed By: andrewjcg Differential Revision: D15125826 fbshipit-source-id: 8e143c689b88b5727e54881a5e80500f879a320b 25 July 2019, 18:45:54 UTC
d9dc6b4 Declare snapshot refresh incompatible with delete range (#5625) Summary: The ::snap_refresh_nanos option is incompatible with the DeleteRange feature. Currently the code relies on range_del_agg.IsEmpty() to disable it if there are range delete tombstones. However ::IsEmpty does not guarantee that there are no RangeDelete tombstones in the SST files. The patch declares the two features incompatible in inline comments until we later figure out how to properly detect the presence of RangeDelete tombstones in compaction inputs. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5625 Differential Revision: D16468218 Pulled By: maysamyabandeh fbshipit-source-id: bd7beca278bc7e1db75e7ee4522d05a3a6ca86f4 24 July 2019, 22:22:14 UTC
7260347 Auto Roll Logger to add some extra checking to avoid segfault. (#5623) Summary: AutoRollLogger sets GetStatus() to be non-OK if the log file fails to be created, and logger_ is set to null. It is left to the caller to check the status before calling functions of this class. There is no harm in adding another null check on logger_ before using it, so that in case users misuse the logger, they don't get a segfault. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5623 Test Plan: Run all existing tests. Differential Revision: D16466251 fbshipit-source-id: 262b885eec28bf741d91e9191c3cb5ff964e1bce 24 July 2019, 22:14:40 UTC
5daa426 Fix regression bug of Auto rolling logger when handling failures (#5622) Summary: The auto roll logger fails to handle file creation errors in the correct way, which may expose users to a segfault condition. Fix it. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5622 Test Plan: Add a unit test on creating a file under a non-existing directory. The test fails without the fix. Differential Revision: D16460853 fbshipit-source-id: e96da4bef4f16db171ea04a11b2ec5a9448ddbde 24 July 2019, 19:08:40 UTC
66b524a Simplify WriteUnpreparedTxnReadCallback and fix some comments (#5621) Summary: Simplify WriteUnpreparedTxnReadCallback so we just have one function `CalcMaxVisibleSeq`. Also, there's no need for the read callback to hold onto the transaction any more, so just hold the set of unprep_seqs, reducing the amount of indirection in `IsVisibleFullCheck`. Also, some comments about using the transaction snapshot were out of date, so remove them. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5621 Differential Revision: D16459883 Pulled By: lth fbshipit-source-id: cd581323fd18982e817d99af57b6eaba59e599bb 24 July 2019, 17:25:26 UTC
f5b951f Fix wrong info log printing for num_range_deletions (#5617) Summary: num_range_deletions printing is wrong in this log line: 2019/07/18-12:59:15.309271 7f869f9ff700 EVENT_LOG_v1 {"time_micros": 1563479955309228, "cf_name": "5", "job": 955, "event": "table_file_creation", "file_number": 34579, "file_size": 2239842, "table_properties": {"data_size": 1988792, "index_size": 3067, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 0, "index_value_is_delta_encoded": 1, "filter_size": 170821, "raw_key_size": 1951792, "raw_average_key_size": 16, "raw_value_size": 1731720, "raw_average_value_size": 14, "num_data_blocks": 199, "num_entries": 121987, "num_deletions": 15184, "num_merge_operands": 86512, "num_range_deletions": 86512, "format_version": 0, "fixed_key_len": 0, "filter_policy": "rocksdb.BuiltinBloomFilter", "column_family_name": "5", "column_family_id": 5, "comparator": "leveldb.BytewiseComparator", "merge_operator": "PutOperator", "prefix_extractor_name": "rocksdb.FixedPrefix.7", "property_collectors": "[]", "compression": "ZSTD", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; ", "creation_time": 1563479951, "oldest_key_time": 0, "file_creation_time": 1563479954}} It actually prints "num_merge_operands" number. Fix it. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5617 Test Plan: Just build. Differential Revision: D16453110 fbshipit-source-id: fc1024b3cd5650312ed47a1379f0d2cf8b2d8a8f 24 July 2019, 02:38:16 UTC
cfcf045 The ObjectRegistry class replaces the Registrar and NewCustomObjects.… (#5293) Summary: The ObjectRegistry class replaces the Registrar and NewCustomObjects. Objects are registered with the registry by Type (the class must implement the static const char *Type() method). This change is necessary for a few reasons: - By having a class (rather than static template instances), the class can be passed between compilation units, meaning that objects could be registered and shared from a dynamic library with an executable. - By having a class with instances, different units could have different objects registered. This could be useful if, for example, one Option allowed for a dynamic library and one did not. When combined with some other PRs (being able to load shared libraries, a Configurable interface to configure objects to/from string), this code will allow objects in external shared libraries to be added to a RocksDB image at run-time, rather than requiring every new extension to be built into the main library and called explicitly by every program. Test plan (on riversand963's devserver) ``` $COMPILE_WITH_ASAN=1 make -j32 all && sleep 1 && make check ``` All tests pass. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5293 Differential Revision: D16363396 Pulled By: riversand963 fbshipit-source-id: fbe4acb615bfc11103eef40a0b288845791c0180 24 July 2019, 00:13:05 UTC
092f417 Move the uncompression dictionary object out of the block cache (#5584) Summary: RocksDB has historically stored uncompression dictionary objects in the block cache as opposed to storing just the block contents. This necessitated evicting the object upon table close. With the new code, only the raw blocks are stored in the cache, eliminating the need for eviction. In addition, the patch makes the following improvements: 1) Compression dictionary blocks are now prefetched/pinned similarly to index/filter blocks. 2) A copy operation got eliminated when the uncompression dictionary is retrieved. 3) Errors related to retrieving the uncompression dictionary are propagated as opposed to silently ignored. Note: the patch temporarily breaks the compression dictionary eviction stats. They will be fixed in a separate phase. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5584 Test Plan: make asan_check Differential Revision: D16344151 Pulled By: ltamasi fbshipit-source-id: 2962b295f5b19628f9da88a3fcebbce5a5017a7b 23 July 2019, 23:01:44 UTC
6b7fcc0 Improve CPU Efficiency of ApproximateSize (part 1) (#5613) Summary: 1. Avoid creating an iterator in order to call BlockBasedTable::ApproximateOffsetOf(). Instead, directly call into it. 2. Optimize BlockBasedTable::ApproximateOffsetOf() to keep the index block iterator on the stack. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5613 Differential Revision: D16442660 Pulled By: elipoz fbshipit-source-id: 9320be3e918c139b10e758cbbb684706d172e516 23 July 2019, 22:34:33 UTC
3782acc ldb sometimes specify a string-append merge operator (#5607) Summary: Right now, ldb cannot scan a DB containing merge operands with the default settings. There is no harm in providing a generic merge operator so that ldb can at least print out something. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5607 Test Plan: Run ldb against a DB with merge operands and see the outputs. Differential Revision: D16442634 fbshipit-source-id: c66c414ec07f219cfc6e6ec2cc14c783ee95df54 23 July 2019, 21:25:18 UTC
112702a Parallelize file_reader_writer_test in order to reduce timeouts Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/5608 Test Plan: make check buck test mode/dev-tsan internal_repo_rocksdb/repo:file_reader_writer_test -- --run-disabled Differential Revision: D16441796 Pulled By: anand1976 fbshipit-source-id: afbb88a9fcb1c0ba22215118767e8eab3d1d6a4a 23 July 2019, 18:50:10 UTC
eae8327 WriteUnPrepared: improve read your own write functionality (#5573) Summary: There are a number of fixes in this PR (with most bugs found via the added stress tests): 1. Re-enable reseek optimization. This was initially disabled to avoid infinite loops in https://github.com/facebook/rocksdb/pull/3955 but this can be resolved by remembering not to reseek after a reseek has already been done. This problem only affects forward iteration in `DBIter::FindNextUserEntryInternal`, as we already disable reseeking in `DBIter::FindValueForCurrentKeyUsingSeek`. 2. Verify that ReadOption.snapshot can be safely used for iterator creation. Some snapshots would not give correct results because snapshot validation would not be enforced, breaking some assumptions in Prev() iteration. 3. In the non-snapshot Get() case, reads done at `LastPublishedSequence` may not be enough, because unprepared sequence numbers are not published. Use `std::max(published_seq, max_visible_seq)` to do lookups instead. 4. Add a stress test for reading your own writes. 5. Minor bug in the allow_concurrent_memtable_write case where we forgot to pass in batch_per_txn_. 6. Minor performance optimization in `CalcMaxUnpreparedSequenceNumber` by assigning by reference instead of value. 7. Add some more comments everywhere. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5573 Differential Revision: D16276089 Pulled By: lth fbshipit-source-id: 18029c944eb427a90a87dee76ac1b23f37ec1ccb 23 July 2019, 15:08:19 UTC
327c480 Disable refresh snapshot feature by default (#5606) Summary: There are concerns about the correctness of this patch. Disabling by default until the concerns are resolved. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5606 Differential Revision: D16428064 Pulled By: maysamyabandeh fbshipit-source-id: a89280f0ea85796c9c9dfbfd9a8e91dad9b000b3 23 July 2019, 03:05:00 UTC
66b5613 row_cache to share entry for recent snapshots (#5600) Summary: Right now, users cannot take advantage of the row cache unless no snapshot is used, or Get() is repeated for the same snapshots. This limits the usage of the row cache. This change eliminates this restriction in some cases: if the snapshot used is newer than the largest sequence number in the file, and the write callback function is not registered, the same row cache key is used as if no snapshot were given. We still need the callback function restriction for now because the callback function may filter out different keys for different snapshots even if the snapshots are new. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5600 Test Plan: Add a unit test. Differential Revision: D16386616 fbshipit-source-id: 6b7d214bd215d191b03ccf55926ad4b703ec2e53 23 July 2019, 01:56:19 UTC
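The key-sharing rule in #5600 can be sketched as a small key-construction function. The key layout below (and the use of 0 for "no snapshot") is illustrative, not RocksDB's actual row cache key encoding: the point is that a sufficiently new snapshot with no read callback maps to the same key as a snapshot-less Get, so the entry is shared.

```cpp
#include <cstdint>
#include <string>

// Sketch: build a row cache key for a Get. If the read snapshot is at
// least as new as the file's largest sequence number and no read
// callback is registered, reuse the snapshot-less key so the cached
// entry is shared across such recent snapshots.
std::string RowCacheKeySketch(uint64_t fd_number, const std::string& user_key,
                              uint64_t snapshot_seq,  // 0 = no snapshot
                              uint64_t file_largest_seq,
                              bool has_read_callback) {
  std::string base = std::to_string(fd_number) + "_" + user_key;
  if (snapshot_seq == 0 ||
      (!has_read_callback && snapshot_seq >= file_largest_seq)) {
    return base + "_0";  // shared entry, as if no snapshot was given
  }
  // Otherwise the snapshot may see an older state of this file, so the
  // entry must be qualified by the snapshot's sequence number.
  return base + "_" + std::to_string(snapshot_seq);
}
```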
3778470 Block cache analyzer: Compute correlation of features and human readable trace file. (#5596) Summary: - Compute correlation between a few features and predictions, e.g., number of accesses since the last access vs number of accesses till the next access on a block. - Output human readable trace file so python can consume it. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5596 Test Plan: make clean && USE_CLANG=1 make check -j32 Differential Revision: D16373200 Pulled By: HaoyuHuang fbshipit-source-id: c848d26bc2e9210461f317d7dbee42d55be5a0cc 23 July 2019, 00:51:34 UTC
a78503b Temporarily disable snapshot list refresh for atomic flush stress test (#5581) Summary: The atomic flush test started to fail after https://github.com/facebook/rocksdb/issues/5099. Then https://github.com/facebook/rocksdb/issues/5278 provided a fix after which the same error occurred much less frequently. However it still occurs occasionally. Not sure what the root cause is. This PR disables the feature of snapshot list refresh, and we should keep an eye on the failure in the future. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5581 Differential Revision: D16295985 Pulled By: riversand963 fbshipit-source-id: c9e62e65133c52c21b07097de359632ca62571e4 22 July 2019, 21:38:16 UTC
0be1fee Added .watchmanconfig file to rocksdb repo (#5593) Summary: Added .watchmanconfig file to rocksdb repo. It is currently .gitignored. This allows to auto sync modified files with watchman when editing them remotely. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5593 Differential Revision: D16363860 Pulled By: elipoz fbshipit-source-id: 5ae221e21c6c757ceb08877771550d508f773d55 19 July 2019, 22:00:33 UTC
4f7ba3a Fix tsan and valgrind failures in import_column_family_test Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/5598 Test Plan: tsan_check valgrind_test Differential Revision: D16380167 Pulled By: anand1976 fbshipit-source-id: 2d0caea7d2d02a9606457f62811175d762b89d5c 19 July 2019, 20:25:36 UTC
c129c75 Added log_readahead_size option to control prefetching for Log::Reader (#5592) Summary: Added log_readahead_size option to control prefetching for Log::Reader. This is mostly useful for reading a remotely located log, as it can save the number of round-trips when reading it. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5592 Differential Revision: D16362989 Pulled By: elipoz fbshipit-source-id: c5d4d5245a44008cd59879640efff70c091ad3e8 19 July 2019, 19:00:19 UTC
6bb3b4b ldb idump to support non-default column families. (#5594) Summary: ldb idump currently only works for the default column family. Extend it to support non-default column families. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5594 Test Plan: Compile and run the tool against a multi-CF DB. Differential Revision: D16380684 fbshipit-source-id: bfb8af36fdad1806837c90aaaab492d71528aceb 19 July 2019, 18:36:59 UTC
abd1fdd Fix asan_check failures Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/5589 Test Plan: TEST_TMPDIR=/dev/shm/rocksdb COMPILE_WITH_ASAN=1 OPT=-g make J=64 -j64 asan_check Differential Revision: D16361081 Pulled By: anand1976 fbshipit-source-id: 09474832b9cfb318a840d4b633e22dfad105d58c 18 July 2019, 21:51:25 UTC
3a6e83b HISTORY update for export and import column family APIs Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/5587 Differential Revision: D16359919 fbshipit-source-id: cfd9c448d79a8b8e7ac1d2b661d10151df269dba 18 July 2019, 17:16:38 UTC
ec2b996 Fix LITE mode build failure Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/5588 Test Plan: make LITE=1 all check Differential Revision: D16354543 Pulled By: anand1976 fbshipit-source-id: 327a171439e183ac3a5e5057c511d6bca445e97d 18 July 2019, 05:06:12 UTC
9f5cfb8 Fix for ReadaheadSequentialFile crash in ldb_cmd_test (#5586) Summary: Fixing a corner case crash when there was no data read from file, but status is still OK Pull Request resolved: https://github.com/facebook/rocksdb/pull/5586 Differential Revision: D16348117 Pulled By: elipoz fbshipit-source-id: f97973308024f020d8be79ca3c56466b84d80656 18 July 2019, 00:04:39 UTC
8a008d4 Block access tracing: Trace referenced key for Get on non-data blocks. (#5548) Summary: This PR traces the referenced key for Get for all types of blocks. This is useful when evaluating hybrid row-block caches. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5548 Test Plan: make clean && USE_CLANG=1 make check -j32 Differential Revision: D16157979 Pulled By: HaoyuHuang fbshipit-source-id: f6327411c9deb74e35e22a35f66cdbae09ab9d87 17 July 2019, 20:05:58 UTC
22ce462 Export Import sst files (#5495) Summary: Refresh of the earlier change here - https://github.com/facebook/rocksdb/issues/5135 This is a review request for code change needed for - https://github.com/facebook/rocksdb/issues/3469 "Add support for taking snapshot of a column family and creating column family from a given CF snapshot" We have an implementation for this that we have been testing internally. We have two new APIs that together provide this functionality. (1) ExportColumnFamily() - This API is modelled after CreateCheckpoint() as below. // Exports all live SST files of a specified Column Family onto export_dir, // returning SST files information in metadata. // - SST files will be created as hard links when the directory specified // is in the same partition as the db directory, copied otherwise. // - export_dir should not already exist and will be created by this API. // - Always triggers a flush. virtual Status ExportColumnFamily(ColumnFamilyHandle* handle, const std::string& export_dir, ExportImportFilesMetaData** metadata); Internally, the API will DisableFileDeletions(), GetColumnFamilyMetaData(), Parse through metadata, creating links/copies of all the sst files, EnableFileDeletions() and complete the call by returning the list of file metadata. (2) CreateColumnFamilyWithImport() - This API is modeled after IngestExternalFile(), but invoked only during a CF creation as below. // CreateColumnFamilyWithImport() will create a new column family with // column_family_name and import external SST files specified in metadata into // this column family. // (1) External SST files can be created using SstFileWriter. // (2) External SST files can be exported from a particular column family in // an existing DB. // Option in import_options specifies whether the external files are copied or // moved (default is copy). When option specifies copy, managing files at // external_file_path is caller's responsibility. 
When option specifies a // move, the call ensures that the specified files at external_file_path are // deleted on successful return and files are not modified on any error // return. // On error return, column family handle returned will be nullptr. // ColumnFamily will be present on successful return and will not be present // on error return. ColumnFamily may be present on any crash during this call. virtual Status CreateColumnFamilyWithImport( const ColumnFamilyOptions& options, const std::string& column_family_name, const ImportColumnFamilyOptions& import_options, const ExportImportFilesMetaData& metadata, ColumnFamilyHandle** handle); Internally, this API creates a new CF, parses all the sst files and adds them to the specified column family, at the same level and with the same sequence number as in the metadata. Also performs safety checks with respect to overlaps between the sst files being imported. If the incoming sequence number is higher than the current local sequence number, the local sequence number is updated to reflect this. Note: as the sst files are being moved across Column Families, the Column Family name in the sst file will no longer match the actual column family on the destination DB. The API does not modify the Column Family name or id in the sst files being imported. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5495 Differential Revision: D16018881 fbshipit-source-id: 9ae2251025d5916d35a9fc4ea4d6707f6be16ff9 17 July 2019, 19:27:14 UTC
a3c1832 Arm64 CRC32 parallel computation optimization for RocksDB (#5494) Summary: Crc32c Parallel computation optimization: Algorithm comes from Intel whitepaper: [crc-iscsi-polynomial-crc32-instruction-paper](https://www.intel.com/content/dam/www/public/us/en/documents/white-papers/crc-iscsi-polynomial-crc32-instruction-paper.pdf) Input data is divided into three equal-sized blocks Three parallel blocks (crc0, crc1, crc2) for 1024 Bytes One Block: 42(BLK_LENGTH) * 8(step length: crc32c_u64) bytes 1. crc32c_test: ``` [==========] Running 4 tests from 1 test case. [----------] Global test environment set-up. [----------] 4 tests from CRC [ RUN ] CRC.StandardResults [ OK ] CRC.StandardResults (1 ms) [ RUN ] CRC.Values [ OK ] CRC.Values (0 ms) [ RUN ] CRC.Extend [ OK ] CRC.Extend (0 ms) [ RUN ] CRC.Mask [ OK ] CRC.Mask (0 ms) [----------] 4 tests from CRC (1 ms total) [----------] Global test environment tear-down [==========] 4 tests from 1 test case ran. (1 ms total) [ PASSED ] 4 tests. ``` 2. RocksDB benchmark: db_bench --benchmarks="crc32c" ``` Linear Arm crc32c: crc32c: 1.005 micros/op 995133 ops/sec; 3887.2 MB/s (4096 per op) ``` ``` Parallel optimization with Armv8 crypto extension: crc32c: 0.419 micros/op 2385078 ops/sec; 9316.7 MB/s (4096 per op) ``` It gets ~2.4x speedup compared to linear Arm crc32c instructions. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5494 Differential Revision: D16340806 fbshipit-source-id: 95dae9a5b646fd20a8303671d82f17b2e162e945 17 July 2019, 18:22:38 UTC
74fb7f0 Cleaned up and simplified LRU cache implementation (#5579) Summary: The 'refs' field in LRUHandle now counts only external references, since anyway we already have the IN_CACHE flag. This simplifies reference accounting logic a bit. Also cleaned up few asserts code as well as the comments - to be more readable. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5579 Differential Revision: D16286747 Pulled By: elipoz fbshipit-source-id: 7186d88f80f512ce584d0a303437494b5cbefd7f 17 July 2019, 02:17:45 UTC
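The reference-accounting simplification in #5579 can be sketched with a toy handle. This is illustrative code, not the actual LRUHandle: `refs` counts only external references, residency is the separate IN_CACHE flag, and an entry is freed once it has no external references and is no longer in the cache.

```cpp
#include <cstdint>

// Sketch of the simplified LRU reference accounting described above.
struct HandleSketch {
  uint32_t refs = 0;      // external references only
  bool in_cache = false;  // the IN_CACHE flag
  bool freed = false;

  bool HasRefs() const { return refs > 0; }
  void Ref() { ++refs; }

  // Releases one external reference; returns true if the entry was
  // freed as a result (no external refs left and not in the cache).
  bool Unref() {
    --refs;
    if (!HasRefs() && !in_cache) {
      freed = true;
    }
    return freed;
  }

  // Eviction drops the IN_CACHE flag; the entry is freed immediately
  // only if no external reference is outstanding.
  bool Evict() {
    in_cache = false;
    if (!HasRefs()) freed = true;
    return freed;
  }
};
```

An entry evicted while a reader still holds it stays alive until that last external reference is released; an unreferenced entry is freed at eviction time.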
0f4d90e Added support for sequential read-ahead file (#5580) Summary: Added support for sequential read-ahead file that can prefetch the read data and later serve it from internal cache buffer. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5580 Differential Revision: D16287082 Pulled By: elipoz fbshipit-source-id: a3e7ad9643d377d39352ff63058ce050ec31dcf3 17 July 2019, 01:21:18 UTC
699a569 Remove RandomAccessFileReader.for_compaction_ (#5572) Summary: RandomAccessFileReader.for_compaction_ doesn't seem to be used anymore. Remove it. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5572 Test Plan: USE_CLANG=1 make all check -j Differential Revision: D16286178 fbshipit-source-id: aa338049761033dfbe5e8b1707bbb0be2df5be7e 16 July 2019, 23:32:18 UTC
0acaa1a WriteUnPrepared: use tracked_keys_ to track keys needed for rollback (#5562) Summary: Currently, we are tracking keys we need to rollback via a separate structure specific to WriteUnprepared in write_set_keys_. We already have a data structure called tracked_keys_ used to track which keys to unlock on transaction termination. This is exactly what we want, since we should only rollback keys that we have locked anyway. Save some memory by reusing that data structure instead of making our own. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5562 Differential Revision: D16206484 Pulled By: lth fbshipit-source-id: 5894d2b824a4b19062d84adbd6e6e86f00047488 16 July 2019, 22:24:56 UTC
3bde41b Move the filter readers out of the block cache (#5504) Summary: Currently, when the block cache is used for the filter block, it is not really the block itself that is stored in the cache but a FilterBlockReader object. Since this object is not pure data (it has, for instance, pointers that might dangle, including in one case a back pointer to the TableReader), it's not really sharable. To avoid the issues around this, the current code erases the cache entries when the TableReader is closed (which, BTW, is not sufficient since a concurrent TableReader might have picked up the object in the meantime). Instead of doing this, the patch moves the FilterBlockReader out of the cache altogether, and decouples the filter reader object from the filter block. In particular, instead of the TableReader owning, or caching/pinning the FilterBlockReader (based on the customer's settings), with the change the TableReader unconditionally owns the FilterBlockReader, which in turn owns/caches/pins the filter block. This change also enables us to reuse the code paths historically used for data blocks for filters as well. Note: Eviction statistics for filter blocks are temporarily broken. We plan to fix this in a separate phase. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5504 Test Plan: make asan_check Differential Revision: D16036974 Pulled By: ltamasi fbshipit-source-id: 770f543c5fb4ed126fd1e04bfd3809cf4ff9c091 16 July 2019, 20:14:58 UTC
cd25203 Fix memory leak in `rocksdb_wal_iter_get_batch` function (#5515) Summary: `wal_batch.writeBatchPtr.release()` gives up the ownership of the original `WriteBatch`, but there is no new owner, which causes a memory leak. The patch is simple: removing `release()` prevents the ownership change, and `std::move` is used for speed. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5515 Differential Revision: D16264281 Pulled By: riversand963 fbshipit-source-id: 51c556b7a1c977325c3aa24acb636303847151fa 15 July 2019, 19:59:39 UTC
6e8a135 Fix regression - 100% CPU on Windows 7 (#5557) Summary: Fixes https://github.com/facebook/rocksdb/issues/5552 Pull Request resolved: https://github.com/facebook/rocksdb/pull/5557 Differential Revision: D16266329 fbshipit-source-id: a8f6b50298a6f7c8d6c7e172bb26dd7eb6bd8a4d 15 July 2019, 19:19:49 UTC
b0259e4 add more tracing for stats history (#5566) Summary: Sample info log output from db_bench: In-memory: ``` 2019/07/12-21:39:19.478490 7fa01b3f5700 [_impl/db_impl.cc:702] ------- PERSISTING STATS ------- 2019/07/12-21:39:19.478633 7fa01b3f5700 [_impl/db_impl.cc:753] Storing 145 stats with timestamp 1562992759 to in-memory stats history 2019/07/12-21:39:19.478670 7fa01b3f5700 [_impl/db_impl.cc:766] [Pre-GC] In-memory stats history size: 1051218 bytes, slice count: 103 2019/07/12-21:39:19.478704 7fa01b3f5700 [_impl/db_impl.cc:775] [Post-GC] In-memory stats history size: 1051218 bytes, slice count: 102 ``` On-disk: ``` 2019/07/12-21:48:53.862548 7f24943f5700 [_impl/db_impl.cc:702] ------- PERSISTING STATS ------- 2019/07/12-21:48:53.862553 7f24943f5700 [_impl/db_impl.cc:709] Reading 145 stats from statistics 2019/07/12-21:48:53.862852 7f24943f5700 [_impl/db_impl.cc:737] Writing 145 stats with timestamp 1562993333 to persistent stats CF succeeded ``` ``` 2019/07/12-21:48:51.861711 7f24943f5700 [_impl/db_impl.cc:702] ------- PERSISTING STATS ------- 2019/07/12-21:48:51.861729 7f24943f5700 [_impl/db_impl.cc:709] Reading 145 stats from statistics 2019/07/12-21:48:51.861921 7f24943f5700 [_impl/db_impl.cc:732] Writing to persistent stats CF failed -- Result incomplete: Write stall ... 2019/07/12-21:48:51.873032 7f2494bf6700 [WARN] [lumn_family.cc:749] [default] Stopping writes because we have 2 immutable memtables (waiting for flush), max_write_buffer_number is set to 2 ``` Pull Request resolved: https://github.com/facebook/rocksdb/pull/5566 Differential Revision: D16258187 Pulled By: miasantreble fbshipit-source-id: 292497099b941418590ed4312411bee36e244dc5 15 July 2019, 18:49:17 UTC
f064d74 Clean up the Arm64 CRC32 unused warning (#5565) Summary: When 'HAVE_ARM64_CRC' is set, the below methods: - bool rocksdb::crc32c::isSSE42() - bool rocksdb::crc32c::isPCLMULQDQ() are defined but not used, so an unused-function warning is raised during the RocksDB build. This patch cleans up these warnings by adding an ifndef: when building under HAVE_ARM64_CRC, `isSSE42` and `isPCLMULQDQ` are not defined. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5565 Differential Revision: D16233654 fbshipit-source-id: c32a9dda7465dbf65f9ccafef159124db92cdffd 15 July 2019, 18:20:26 UTC
68d43b4 A python script to plot graphs for csv files generated by block_cache_trace_analyzer Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/5563 Test Plan: Manually run the script on files generated by block_cache_trace_analyzer. Differential Revision: D16214400 Pulled By: HaoyuHuang fbshipit-source-id: 94485eed995e9b2b63e197c5dfeb80129fa7897f 13 July 2019, 01:56:20 UTC
6187661 Fix MyRocks compile warnings-treated-as-errors on Fedora 30, gcc 9.1.1 (#5553) Summary: - Provide assignment operator in CompactionStats - Provide a copy constructor for FileDescriptor - Remove std::move from "return std::move(t)" in BoundedQueue Pull Request resolved: https://github.com/facebook/rocksdb/pull/5553 Differential Revision: D16230170 fbshipit-source-id: fd7c6e52390b2db1be24141e25649cf62424d078 13 July 2019, 00:30:51 UTC
3e9c5a3 Block cache analyzer: Add more stats (#5516) Summary: This PR provides more command line options for block cache analyzer to better understand block cache access pattern. -analyze_bottom_k_access_count_blocks -analyze_top_k_access_count_blocks -reuse_lifetime_labels -reuse_lifetime_buckets -analyze_callers -access_count_buckets -analyze_blocks_reuse_k_reuse_window Pull Request resolved: https://github.com/facebook/rocksdb/pull/5516 Test Plan: make clean && COMPILE_WITH_ASAN=1 make check -j32 Differential Revision: D16037440 Pulled By: HaoyuHuang fbshipit-source-id: b9a4ac0d4712053fab910732077a4d4b91400bc8 12 July 2019, 23:55:34 UTC
1a59b6e Cache simulator: Add a ghost cache for admission control and a hybrid row-block cache. (#5534) Summary: This PR adds a ghost cache for admission control. Specifically, it admits an entry on its second access. It also adds a hybrid row-block cache that caches the referenced key-value pairs of a Get/MultiGet request instead of its blocks. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5534 Test Plan: make clean && COMPILE_WITH_ASAN=1 make check -j32 Differential Revision: D16101124 Pulled By: HaoyuHuang fbshipit-source-id: b99edda6418a888e94eb40f71ece45d375e234b1 11 July 2019, 19:43:29 UTC
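The ghost-cache admission policy described above (admit an entry on its second access) can be sketched as follows; class and function names are illustrative, not the simulator's actual API.

```cpp
#include <string>
#include <unordered_set>

// Sketch: the "ghost" set remembers keys seen once; an entry is admitted
// to the real cache only when it is accessed again.
class GhostCache {
 public:
  // Returns true if the key should be admitted to the cache.
  bool Admit(const std::string& key) {
    if (seen_.count(key) > 0) return true;  // second (or later) access: admit
    seen_.insert(key);                      // first access: remember only
    return false;
  }

 private:
  std::unordered_set<std::string> seen_;
};

// Helper for demonstration: was the key admitted on its n-th access?
inline bool AdmittedOnNthAccess(int n) {
  GhostCache gc;
  bool admitted = false;
  for (int i = 0; i < n; ++i) admitted = gc.Admit("key");
  return admitted;
}
```

This filters out one-hit-wonder accesses so they never occupy real cache capacity.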
82d8ca8 Upload db directory during cleanup for certain tests (#5554) Summary: Add an extra cleanup step so that db directory can be saved and uploaded. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5554 Reviewed By: yancouto Differential Revision: D16168844 Pulled By: riversand963 fbshipit-source-id: ec7b2cee5f11c7d388c36531f8b076d648e2fb19 10 July 2019, 18:29:55 UTC
60d8b19 Implemented a file logger that uses WritableFileWriter (#5491) Summary: The current PosixLogger performs IO operations using posix calls, so it will not work for a non-posix env. Created a new logger class, EnvLogger, that uses the env-specific WritableFileWriter for IO operations. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5491 Test Plan: make check Differential Revision: D15909002 Pulled By: ggaurav28 fbshipit-source-id: 13a8105176e8e42db0c59798d48cb6a0dbccc965 09 July 2019, 23:27:22 UTC
f786b4a Improve result print on atomic flush stress test failure (#5549) Summary: When atomic flush stress test fails, we print internal keys within the range with mismatched key/values for all column families. Test plan (on devserver) Manually hack the code to randomly insert wrong data. Run the test. ``` $make clean && COMPILE_WITH_TSAN=1 make -j32 db_stress $./db_stress -test_atomic_flush=true -ops_per_thread=10000 ``` Check that proper error messages are printed, as follows: ``` 2019/07/08-17:40:14 Starting verification Verification failed Latest Sequence Number: 190903 [default] 000000000000050B => 56290000525350515E5F5C5D5A5B5859 [3] 0000000000000533 => EE100000EAEBE8E9E6E7E4E5E2E3E0E1FEFFFCFDFAFBF8F9 Internal keys in CF 'default', [000000000000050B, 0000000000000533] (max 8) key 000000000000050B seq 139920 type 1 key 0000000000000533 seq 0 type 1 Internal keys in CF '3', [000000000000050B, 0000000000000533] (max 8) key 0000000000000533 seq 0 type 1 ``` Pull Request resolved: https://github.com/facebook/rocksdb/pull/5549 Differential Revision: D16158709 Pulled By: riversand963 fbshipit-source-id: f07fa87763f87b3bd908da03c956709c6456bcab 09 July 2019, 23:27:22 UTC
aa0367a Allow ldb to open DB as secondary (#5537) Summary: Right now ldb can open a running DB through a read-only DB. However, it might leave info log files in the read-only DB directory. Add an option to open the DB as secondary to avoid this. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5537 Test Plan: Run ./ldb scan --max_keys=10 --db=/tmp/rocksdbtest-2491/dbbench --secondary_path=/tmp --no_value --hex and ./ldb get 0x00000000000000103030303030303030 --hex --db=/tmp/rocksdbtest-2491/dbbench --secondary_path=/tmp against a normal db_bench run and observe the output changes. Also observe that no new info log files are created under /tmp/rocksdbtest-2491/dbbench. Run without --secondary_path and observe that new info logs are created under /tmp/rocksdbtest-2491/dbbench. Differential Revision: D16113886 fbshipit-source-id: 4e09dec47c2528f6ca08a9e7a7894ba2d9daebbb 09 July 2019, 19:51:28 UTC
cb19e74 Fix bugs in DBWALTest.kTolerateCorruptedTailRecords triggered by #5520 (#5550) Summary: https://github.com/facebook/rocksdb/pull/5520 caused a buffer overflow bug in DBWALTest.kTolerateCorruptedTailRecords. Fix it. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5550 Test Plan: Run the test in UBSAN. It used to fail; now it succeeds. Differential Revision: D16165516 fbshipit-source-id: 42c56a6bc64eb091f054b87757fcbef60da825f7 09 July 2019, 18:18:32 UTC
a6a9213 Fix interpreter lines for files with python2-only syntax. Reviewed By: lisroach Differential Revision: D15362271 fbshipit-source-id: 48fab12ab6e55a8537b19b4623d2545ca9950ec5 09 July 2019, 17:51:37 UTC
872a261 db_stress to print some internal keys after verification failure (#5543) Summary: Print out some more information when db_stress fails with verification failures, to help debug problems. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5543 Test Plan: Manually ingest some failures and observe the outputs are like this: Verification failed [default] 0000000000199A5A => 7C3D000078797A7B74757677707172736C6D6E6F68696A6B [6] 000000000019C8BD => 65380000616063626D6C6F6E69686B6A internal keys in default CF [0000000000199A5A, 000000000019C8BD] (max 8) key 0000000000199A5A seq 179246 type 1 key 000000000019C8BD seq 163970 type 1 Lastest Sequence Number: 292234 Differential Revision: D16153717 fbshipit-source-id: b33fa50a828c190cbf8249a37955432044f92daf 08 July 2019, 20:36:37 UTC
6ca3fee Fix -Werror=shadow (#5546) Summary: This PR fixes shadow errors. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5546 Test Plan: make clean && make check -j32 && make clean && USE_CLANG=1 make check -j32 && make clean && COMPILE_WITH_ASAN=1 make check -j32 Differential Revision: D16147841 Pulled By: HaoyuHuang fbshipit-source-id: 1043500d70c134185f537ab4c3900452752f1534 08 July 2019, 07:12:43 UTC
7c76a7f Support GetAllKeyVersions() for non-default cf (#5544) Summary: Previously `GetAllKeyVersions()` supported the default column family only. This PR adds support for other column families. Test plan (devserver): ``` $make clean && COMPILE_WITH_ASAN=1 make -j32 db_basic_test $./db_basic_test --gtest_filter=DBBasicTest.GetAllKeyVersions ``` All other unit tests must pass. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5544 Differential Revision: D16147551 Pulled By: riversand963 fbshipit-source-id: 5a61aece2a32d789e150226a9b8d53f4a5760168 08 July 2019, 05:43:52 UTC
8d34806 setup wal_in_db_path_ for secondary instance (#5545) Summary: PR https://github.com/facebook/rocksdb/pull/5520 adds DBImpl:: wal_in_db_path_ and initializes it in DBImpl::Open, this PR fixes the valgrind error for secondary instance: ``` ==236417== Conditional jump or move depends on uninitialised value(s) ==236417== at 0x62242A: rocksdb::DeleteDBFile(rocksdb::ImmutableDBOptions const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, bool, bool) (file_util.cc:96) ==236417== by 0x512432: rocksdb::DBImpl::DeleteObsoleteFileImpl(int, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, rocksdb::FileType, unsigned long) (db_impl_files.cc:261) ==236417== by 0x515A7A: rocksdb::DBImpl::PurgeObsoleteFiles(rocksdb::JobContext&, bool) (db_impl_files.cc:492) ==236417== by 0x499153: rocksdb::ColumnFamilyHandleImpl::~ColumnFamilyHandleImpl() (column_family.cc:75) ==236417== by 0x499880: rocksdb::ColumnFamilyHandleImpl::~ColumnFamilyHandleImpl() (column_family.cc:84) ==236417== by 0x4C9AF9: rocksdb::DB::DestroyColumnFamilyHandle(rocksdb::ColumnFamilyHandle*) (db_impl.cc:3105) ==236417== by 0x44E853: CloseSecondary (db_secondary_test.cc:53) ==236417== by 0x44E853: rocksdb::DBSecondaryTest::~DBSecondaryTest() (db_secondary_test.cc:31) ==236417== by 0x44EC77: ~DBSecondaryTest_PrimaryDropColumnFamily_Test (db_secondary_test.cc:443) ==236417== by 0x44EC77: rocksdb::DBSecondaryTest_PrimaryDropColumnFamily_Test::~DBSecondaryTest_PrimaryDropColumnFamily_Test() (db_secondary_test.cc:443) ==236417== by 0x83D1D7: HandleSehExceptionsInMethodIfSupported<testing::Test, void> (gtest-all.cc:3824) ==236417== by 0x83D1D7: void testing::internal::HandleExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void 
(testing::Test::*)(), char const*) (gtest-all.cc:3860) ==236417== by 0x8346DB: testing::TestInfo::Run() [clone .part.486] (gtest-all.cc:4078) ==236417== by 0x8348D4: Run (gtest-all.cc:4047) ==236417== by 0x8348D4: testing::TestCase::Run() [clone .part.487] (gtest-all.cc:4190) ==236417== by 0x834D14: Run (gtest-all.cc:6100) ==236417== by 0x834D14: testing::internal::UnitTestImpl::RunAllTests() (gtest-all.cc:6062) ``` Pull Request resolved: https://github.com/facebook/rocksdb/pull/5545 Differential Revision: D16146224 Pulled By: miasantreble fbshipit-source-id: 184c90e451352951da4e955f054d4b1a1f29ea29 08 July 2019, 04:32:50 UTC
e0d9d57 Fix bugs in WAL trash file handling (#5520) Summary: 1. Cleanup WAL trash files on open 2. Don't apply deletion rate limit if WAL dir is different from db dir Pull Request resolved: https://github.com/facebook/rocksdb/pull/5520 Test Plan: Add new unit tests and make check Differential Revision: D16096750 Pulled By: anand1976 fbshipit-source-id: 6f07858ad864b754b711db416f0389c45ede599b 07 July 2019, 04:07:32 UTC
2de61d9 Assert get_context not null in BlockBasedTable::Get() (#5542) Summary: clang analyze fails after https://github.com/facebook/rocksdb/pull/5514 with this failure: table/block_based/block_based_table_reader.cc:3450:16: warning: Called C++ object pointer is null if (!get_context->SaveValue( ^~~~~~~~~~~~~~~~~~~~~~~ 1 warning generated. The reason is that a branch on whether get_context is null was added earlier in the function, so clang analyze thinks it can be null and that we make the function call without a null check. Fix the issue by removing the branch and adding an assert. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5542 Test Plan: "make all check" passes and the clang analyze failure goes away. Differential Revision: D16133988 fbshipit-source-id: d4627d03c4746254cc11926c523931086ccebcda 05 July 2019, 19:34:13 UTC
4f66ec9 Fix lower bound check error when iterate across file boundary (#5540) Summary: Since https://github.com/facebook/rocksdb/issues/5468, `LevelIterator` compares the lower bound and the file's smallest key in `NewFileIterator` and caches the result to reduce per-key lower bound checks. However, when iterating across a file boundary, it doesn't update the cached result, since `Valid()=false` because `Valid()` still reflects the status of the previous file iterator. Fix it by removing the `Valid()` check from `CheckMayBeOutOfLowerBound()`. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5540 Test Plan: See the new test. Signed-off-by: Yi Wu <yiwu@pingcap.com> Differential Revision: D16127653 fbshipit-source-id: a0691e1164658d485c17971aaa97028812f74678 05 July 2019, 00:28:30 UTC
e4dcf5f db_bench to add a new "benchmark" to print out all stats history (#5532) Summary: Sometimes it is helpful to fetch the whole history of stats after benchmark runs. Add such an option. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5532 Test Plan: Run the benchmark manually and observe the output is as expected. Differential Revision: D16097764 fbshipit-source-id: 10b5b735a22a18be198b8f348be11f11f8806904 04 July 2019, 03:03:28 UTC
6edc5d0 Block cache tracing: Associate a unique id with Get and MultiGet (#5514) Summary: This PR associates a unique id with Get and MultiGet. This enables us to track how many blocks a Get/MultiGet request accesses. We can also measure the impact of row cache vs block cache. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5514 Test Plan: make clean && COMPILE_WITH_ASAN=1 make check -j32 Differential Revision: D16032681 Pulled By: HaoyuHuang fbshipit-source-id: 775b05f4440badd58de6667e3ec9f4fc87a0af4c 04 July 2019, 02:35:41 UTC
84c5c9a Fix a bug in compaction reads causing checksum mismatches and asan errors (#5531) Summary: Fixed a bug in compaction reads due to which incorrect number of bytes were being read/utilized. The bug was introduced in https://github.com/facebook/rocksdb/issues/5498 , resulting in "Corruption: block checksum mismatch" and "heap-buffer-overflow" asan errors in our tests. https://github.com/facebook/rocksdb/issues/5498 was introduced recently and is not in any released versions. ASAN: ``` > ==2280939==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x6250005e83da at pc 0x000000d57f62 bp 0x7f954f483770 sp 0x7f954f482f20 > === How to use this, how to get the raw stack trace, and more: fburl.com/ASAN === > READ of size 4 at 0x6250005e83da thread T4 > SCARINESS: 27 (4-byte-read-heap-buffer-overflow-far-from-bounds) > #0 tests+0xd57f61 __asan_memcpy > https://github.com/facebook/rocksdb/issues/1 rocksdb/src/util/coding.h:124 rocksdb::DecodeFixed32(char const*) > https://github.com/facebook/rocksdb/issues/2 rocksdb/src/table/block_fetcher.cc:39 rocksdb::BlockFetcher::CheckBlockChecksum() > https://github.com/facebook/rocksdb/issues/3 rocksdb/src/table/block_fetcher.cc:99 rocksdb::BlockFetcher::TryGetFromPrefetchBuffer() > https://github.com/facebook/rocksdb/issues/4 rocksdb/src/table/block_fetcher.cc:209 rocksdb::BlockFetcher::ReadBlockContents() > https://github.com/facebook/rocksdb/issues/5 rocksdb/src/table/block_based/block_based_table_reader.cc:93 rocksdb::(anonymous namespace)::ReadBlockFromFile(rocksdb::RandomAccessFileReader*, rocksdb::FilePrefetchBuffer*, rocksdb::Footer const&, rocksdb::ReadOptions const&, rocksdb::BlockHandle const&, std::unique_ptr<...>*, rocksdb::ImmutableCFOptions const&, bool, bool, rocksdb::UncompressionDict const&, rocksdb::PersistentCacheOptions const&, unsigned long, unsigned long, rocksdb::MemoryAllocator*, bool) > https://github.com/facebook/rocksdb/issues/6 rocksdb/src/table/block_based/block_based_table_reader.cc:2331 
rocksdb::BlockBasedTable::RetrieveBlock(rocksdb::FilePrefetchBuffer*, rocksdb::ReadOptions const&, rocksdb::BlockHandle const&, rocksdb::UncompressionDict const&, rocksdb::CachableEntry<...>*, rocksdb::BlockType, rocksdb::GetContext*, rocksdb::BlockCacheLookupContext*, bool) const > https://github.com/facebook/rocksdb/issues/7 rocksdb/src/table/block_based/block_based_table_reader.cc:2090 rocksdb::DataBlockIter* rocksdb::BlockBasedTable::NewDataBlockIterator<...>(rocksdb::ReadOptions const&, rocksdb::BlockHandle const&, rocksdb::DataBlockIter*, rocksdb::BlockType, bool, bool, rocksdb::GetContext*, rocksdb::BlockCacheLookupContext*, rocksdb::Status, rocksdb::FilePrefetchBuffe r*, bool) const > https://github.com/facebook/rocksdb/issues/8 rocksdb/src/table/block_based/block_based_table_reader.cc:2720 rocksdb::BlockBasedTableIterator<...>::InitDataBlock() > https://github.com/facebook/rocksdb/issues/9 rocksdb/src/table/block_based/block_based_table_reader.cc:2607 rocksdb::BlockBasedTableIterator<...>::SeekToFirst() > https://github.com/facebook/rocksdb/issues/10 rocksdb/src/table/iterator_wrapper.h:83 rocksdb::IteratorWrapperBase<...>::SeekToFirst() > https://github.com/facebook/rocksdb/issues/11 rocksdb/src/table/merging_iterator.cc:100 rocksdb::MergingIterator::SeekToFirst() > https://github.com/facebook/rocksdb/issues/12 rocksdb/compaction/compaction_job.cc:877 rocksdb::CompactionJob::ProcessKeyValueCompaction(rocksdb::CompactionJob::SubcompactionState*) > https://github.com/facebook/rocksdb/issues/13 rocksdb/compaction/compaction_job.cc:590 rocksdb::CompactionJob::Run() > https://github.com/facebook/rocksdb/issues/14 rocksdb/db_impl/db_impl_compaction_flush.cc:2689 rocksdb::DBImpl::BackgroundCompaction(bool*, rocksdb::JobContext*, rocksdb::LogBuffer*, rocksdb::DBImpl::PrepickedCompaction*, rocksdb::Env::Priority) > https://github.com/facebook/rocksdb/issues/15 rocksdb/db_impl/db_impl_compaction_flush.cc:2248 
rocksdb::DBImpl::BackgroundCallCompaction(rocksdb::DBImpl::PrepickedCompaction*, rocksdb::Env::Priority) > https://github.com/facebook/rocksdb/issues/16 rocksdb/db_impl/db_impl_compaction_flush.cc:2024 rocksdb::DBImpl::BGWorkCompaction(void*) > https://github.com/facebook/rocksdb/issues/23 rocksdb/src/util/threadpool_imp.cc:266 rocksdb::ThreadPoolImpl::Impl::BGThread(unsigned long) > https://github.com/facebook/rocksdb/issues/24 rocksdb/src/util/threadpool_imp.cc:307 rocksdb::ThreadPoolImpl::Impl::BGThreadWrapper(void*) ``` Pull Request resolved: https://github.com/facebook/rocksdb/pull/5531 Test Plan: Verified that this fixes the fb-internal Logdevice test which caught the issue. Differential Revision: D16109702 Pulled By: sagar0 fbshipit-source-id: 1fc08549cf7b553e338a133ae11eb9f4d5011914 04 July 2019, 02:06:46 UTC
09ea5d8 Fix clang build with jemalloc (#5522) Summary: Fixes the below build failure for clang compiler using glibc and jemalloc. Platform: linux x86-64 Compiler: clang version 6.0.0-1ubuntu2 Build failure: ``` $ CXX=clang++ CC=clang USE_CLANG=1 WITH_JEMALLOC_FLAG=1 JEMALLOC=1 EXTRA_LDFLAGS="-L/home/andrew/jemalloc/lib/" EXTRA_CXXFLAGS="-I/home/andrew/jemalloc/include/" make check -j12 ... CC memory/jemalloc_nodump_allocator.o In file included from memory/jemalloc_nodump_allocator.cc:6: In file included from ./memory/jemalloc_nodump_allocator.h:11: In file included from ./port/jemalloc_helper.h:16: /usr/include/clang/6.0.0/include/mm_malloc.h:39:16: error: 'posix_memalign' is missing exception specification 'throw()' extern "C" int posix_memalign(void **__memptr, size_t __alignment, size_t __size); ^ /home/andrew/jemalloc/include/jemalloc/jemalloc.h:388:26: note: expanded from macro 'posix_memalign' # define posix_memalign je_posix_memalign ^ /home/andrew/jemalloc/include/jemalloc/jemalloc.h:77:29: note: expanded from macro 'je_posix_memalign' # define je_posix_memalign posix_memalign ^ /home/andrew/jemalloc/include/jemalloc/jemalloc.h:232:38: note: previous declaration is here JEMALLOC_EXPORT int JEMALLOC_NOTHROW je_posix_memalign(void **memptr, ^ /home/andrew/jemalloc/include/jemalloc/jemalloc.h:77:29: note: expanded from macro 'je_posix_memalign' # define je_posix_memalign posix_memalign ^ 1 error generated. Makefile:1972: recipe for target 'memory/jemalloc_nodump_allocator.o' failed make: *** [memory/jemalloc_nodump_allocator.o] Error 1 ``` Pull Request resolved: https://github.com/facebook/rocksdb/pull/5522 Differential Revision: D16069869 Pulled By: miasantreble fbshipit-source-id: c489bbc993adee194b9a550134c6237a264bc443 02 July 2019, 20:02:12 UTC
0d57d93 Support jemalloc compiled with `--with-jemalloc-prefix` (#5521) Summary: Previously, if the jemalloc was built with nonempty string for `--with-jemalloc-prefix`, then `HasJemalloc()` would return false on Linux, so jemalloc would not be used at runtime. On Mac, it would cause a linker failure due to no definitions found for the weak functions declared in "port/jemalloc_helper.h". This should be a rare problem because (1) on Linux the default `--with-jemalloc-prefix` value is the empty string, and (2) Homebrew's build explicitly sets `--with-jemalloc-prefix` to the empty string. However, there are cases where `--with-jemalloc-prefix` is nonempty. For example, when building jemalloc from source on Mac, the default setting is `--with-jemalloc-prefix=je_`. Such jemalloc builds should be usable by RocksDB. The fix is simple. Defining `JEMALLOC_MANGLE` before including "jemalloc.h" causes it to define unprefixed symbols that are aliases for each of the prefixed symbols. Thanks to benesch for figuring this out and explaining it to me. Fixes https://github.com/facebook/rocksdb/issues/1462. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5521 Test Plan: build jemalloc with prefixed symbols: ``` $ ./configure --with-jemalloc-prefix=lol $ make ``` compile rocksdb against it: ``` $ WITH_JEMALLOC_FLAG=1 JEMALLOC=1 EXTRA_LDFLAGS="-L/home/andrew/jemalloc/lib/" EXTRA_CXXFLAGS="-I/home/andrew/jemalloc/include/" make -j12 ./db_bench ``` run db_bench and verify jemalloc actually used: ``` $ ./db_bench -benchmarks=fillrandom -statistics=true -dump_malloc_stats=true -stats_dump_period_sec=1 $ grep jemalloc /tmp/rocksdbtest-1000/dbbench/LOG 2019/06/29-12:20:52.088658 7fc5fb7f6700 [_impl/db_impl.cc:837] ___ Begin jemalloc statistics ___ ... ``` Differential Revision: D16092758 fbshipit-source-id: c2c358346190ed62ceb2a3547a6c4c180b12f7c4 02 July 2019, 19:07:01 UTC
662ce62 Reduce iterator key comparison for upper/lower bound check (2nd attempt) (#5468) Summary: This is a second attempt for https://github.com/facebook/rocksdb/issues/5111, with the fix to redo iterate bounds check after `SeekXXX()`. This is because MyRocks may change iterate bounds between seek. See https://github.com/facebook/rocksdb/issues/5111 for original benchmark result and discussion. Closes https://github.com/facebook/rocksdb/issues/5463. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5468 Test Plan: Existing rocksdb tests, plus myrocks test `rocksdb.optimizer_loose_index_scans` and `rocksdb.group_min_max`. Differential Revision: D15863332 fbshipit-source-id: ab4aba5899838591806b8673899bd465f3f53e18 02 July 2019, 18:48:46 UTC
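The per-file bound check described above can be sketched as a single comparison done once per file instead of per key; this standalone function is illustrative (it loosely mirrors the idea behind `CheckMayBeOutOfLowerBound()`), not RocksDB's actual code.

```cpp
#include <string>

// Sketch: compare the file's smallest key against the iterate lower bound
// once, when the file iterator is created. If the smallest key is already
// >= the lower bound, no key in this file can be below the bound, so the
// per-key comparison can be skipped for the whole file.
inline bool MayBeOutOfLowerBound(const std::string& file_smallest_key,
                                 const std::string& lower_bound) {
  return file_smallest_key < lower_bound;
}
```

The entry for #5540 above shows why this cached result must be recomputed when the iterator moves across a file boundary.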
cfdf211 Exclude StatsHistoryTest.ForceManualFlushStatsCF test from lite mode (#5529) Summary: Recent commit 3886dddc3b44bf5061c0f93eab578c51e8bad7bd introduced a new test which is not compatible with lite mode and breaks contrun test: ``` [ RUN ] StatsHistoryTest.ForceManualFlushStatsCF monitoring/stats_history_test.cc:642: Failure Expected: (cfd_stats->GetLogNumber()) < (cfd_test->GetLogNumber()), actual: 15 vs 15 ``` This PR excludes the test from lite mode to appease the failing test Pull Request resolved: https://github.com/facebook/rocksdb/pull/5529 Differential Revision: D16080892 Pulled By: miasantreble fbshipit-source-id: 2f8a22758f71250cd9f204046404226ddc13b028 01 July 2019, 23:37:08 UTC
66464d1 Remove multiple declarations of kMicrosInSecond. Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/5526 Test Plan: OPT=-g V=1 make J=1 unity_test -j32 make clean && make -j32 Differential Revision: D16079315 Pulled By: HaoyuHuang fbshipit-source-id: 294ab439cf0db8dd5da44e30eabf0cbb2bb8c4f6 01 July 2019, 22:15:12 UTC
3e6c185 Formatting fixes in db_bench_tool (#5525) Summary: Formatting fixes in db_bench_tool that were accidentally omitted Pull Request resolved: https://github.com/facebook/rocksdb/pull/5525 Test Plan: Unit tests Differential Revision: D16078516 Pulled By: elipoz fbshipit-source-id: bf8df0e3f08092a91794ebf285396d9b8a335bb9 01 July 2019, 21:57:28 UTC
1e87f2b Ref and unref cfd before and after calling WaitForFlushMemTables (#5513) Summary: This is to prevent the bg flush thread from unrefing and deleting a cfd that has been dropped by a concurrent thread. Before RocksDB calls `DBImpl::WaitForFlushMemTables`, we should increase the refcount of each `ColumnFamilyData` so that its ref count will not drop to 0 even if the column family is dropped by another thread. Otherwise the bg flush thread can deref the cfd and delete it, causing a segfault in `WaitForFlushMemTables` upon accessing `cfd`. Test plan (on devserver): ``` $make clean && COMPILE_WITH_ASAN=1 make -j32 $make check ``` All unit tests must pass. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5513 Differential Revision: D16062898 Pulled By: riversand963 fbshipit-source-id: 37dc511f1dc99f036d0201bbd7f0a8f5677c763d 01 July 2019, 21:12:02 UTC
f872009 Fix some C-style casting (#5524) Summary: Fixed some C-style casts in bloom.cc and tools/db_bench_tool.cc. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5524 Differential Revision: D16075626 Pulled By: elipoz fbshipit-source-id: 352948885efb64a7ef865942c75c3c727a914207 01 July 2019, 20:05:34 UTC
9f0bd56 Cache simulator: Refactor the cache simulator so that we can add alternative policies easily (#5517) Summary: This PR creates cache_simulator.h file. It contains a CacheSimulator that runs against a block cache trace record. We can add alternative cache simulators derived from CacheSimulator later. For example, this PR adds a PrioritizedCacheSimulator that inserts filter/index/uncompressed dictionary blocks with high priority. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5517 Test Plan: make clean && COMPILE_WITH_ASAN=1 make check -j32 Differential Revision: D16043689 Pulled By: HaoyuHuang fbshipit-source-id: 65f28ed52b866ffb0e6eceffd7f9ca7c45bb680d 01 July 2019, 19:46:32 UTC
3886ddd force flushing stats CF to avoid holding old logs (#5509) Summary: The WAL records RocksDB writes to all column families. When a user flushes a column family, the old WAL will not accept new writes but cannot be deleted yet, because it may still contain live data for other column families. (See https://github.com/facebook/rocksdb/wiki/Write-Ahead-Log#life-cycle-of-a-wal for a detailed explanation.) Because of this, a column family that receives very infrequent writes and never gets a manual flush could prevent a lot of WALs from being deleted. PR https://github.com/facebook/rocksdb/pull/5046 introduced the persistent stats column family, which is a good example of such a column family: depending on the config, it may have long intervals between writes, and the user is unaware of it, which makes it difficult to call manual flush for it. This PR addresses the problem for the persistent stats column family by forcing a flush for it when 1) another column family is flushed and 2) the persistent stats column family's log number is the smallest among all column families. This way the persistent stats column family will keep advancing its log number when necessary, allowing RocksDB to delete old WAL files. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5509 Differential Revision: D16045896 Pulled By: miasantreble fbshipit-source-id: 286837b633e988417f0096ff38384742d3b40ef4 01 July 2019, 18:56:43 UTC
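The trigger condition described above can be sketched as follows; the function name and argument shapes are illustrative assumptions, not RocksDB's actual internals.

```cpp
#include <cstdint>
#include <vector>

// Sketch: when some other CF is flushed, also force-flush the stats CF
// if its log number is the smallest among all CFs, i.e. it is the one
// pinning the oldest WAL.
inline bool ShouldForceFlushStatsCF(uint64_t stats_cf_log,
                                    const std::vector<uint64_t>& other_cf_logs) {
  for (uint64_t log : other_cf_logs) {
    if (log < stats_cf_log) return false;  // another CF pins an older WAL
  }
  return true;  // the stats CF holds the smallest log number
}
```

When the predicate is true, flushing the stats CF advances its log number, so the oldest WAL stops being pinned and can be deleted.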
c360675 Add secondary instance to stress test (#5479) Summary: This PR allows users to run stress tests on secondary instance. Test plan (on devserver) ``` ./db_stress -ops_per_thread=100000 -enable_secondary=true -threads=32 -secondary_catch_up_one_in=10000 -clear_column_family_one_in=1000 -reopen=100 ``` Pull Request resolved: https://github.com/facebook/rocksdb/pull/5479 Differential Revision: D16074325 Pulled By: riversand963 fbshipit-source-id: c0ed959e7b6c7cda3efd0b3070ab379de3b29f1c 01 July 2019, 18:49:50 UTC
7259e28 MultiGet parallel IO (#5464) Summary: Enhancement to MultiGet batching to read data blocks required for keys in a batch in parallel from disk. It uses Env::MultiRead() API to read multiple blocks and reduce latency. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5464 Test Plan: 1. make check 2. make asan_check 3. make asan_crash Differential Revision: D15911771 Pulled By: anand1976 fbshipit-source-id: 605036b9af0f90ca0020dc87c3a86b4da6e83394 01 July 2019, 03:56:04 UTC
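The batching idea above is: collect one read request per data block needed by the MultiGet batch, then hand the whole vector to a single MultiRead-style call instead of issuing the reads one at a time. The `ReadRequest` shape and the in-memory `MultiRead` below are simplified stand-ins so the sketch is self-contained; they do not match the real `Env::MultiRead` signature.

```cpp
#include <cstddef>
#include <cstdint>
#include <string>
#include <utility>
#include <vector>

// Simplified read request: where to read and where the result lands.
struct ReadRequest {
  uint64_t offset;
  size_t len;
  std::string result;  // filled in by the reader
};

// Stand-in for a batched read API: serves all requests from an in-memory
// "file" in one call, where a real implementation could issue them in
// parallel to reduce latency.
void MultiRead(const std::string& file, std::vector<ReadRequest>* reqs) {
  for (auto& r : *reqs) {
    r.result = file.substr(r.offset, r.len);
  }
}

// Gather the (offset, length) of every block the batch needs, then issue
// one batched call instead of N sequential reads.
std::vector<std::string> FetchBlocks(
    const std::string& file,
    const std::vector<std::pair<uint64_t, size_t>>& blocks) {
  std::vector<ReadRequest> reqs;
  for (const auto& b : blocks) reqs.push_back({b.first, b.second, ""});
  MultiRead(file, &reqs);
  std::vector<std::string> out;
  for (auto& r : reqs) out.push_back(std::move(r.result));
  return out;
}
```

The latency win comes entirely from the single batched call: the per-request bookkeeping is unchanged.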
68b46a2 Block cache tracer: StartTrace return busy if trace is already started. (#5519) Summary: This PR is needed for integration into MyRocks. A second call on StartTrace returns Busy so that MyRocks may return an error to the user. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5519 Test Plan: make clean && USE_CLANG=1 make check -j32 Differential Revision: D16055476 Pulled By: HaoyuHuang fbshipit-source-id: a51772fb0965c873922757eb470a332b1e02a91d 01 July 2019, 03:03:01 UTC
10bae8c Add more release versions to tools/check_format_compatible.sh (#5518) Summary: tools/check_format_compatible.sh has lagged behind. Catch up. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5518 Test Plan: Run the command Differential Revision: D16063180 fbshipit-source-id: d063eb42df9653dec06a2cf0fb982b8a60ca3d2f 29 June 2019, 00:41:58 UTC
5c2f13f add create_column_family and drop_column_family cmd to ldb tool (#5503) Summary: The `create_column_family` cmd already exists but was somehow missing from the help message. This also adds a `drop_column_family` cmd, which can drop a cf without opening the db. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5503 Test Plan: Updated existing ldb_test.py to test deleting a column family. Differential Revision: D16018414 Pulled By: lightmark fbshipit-source-id: 1fc33680b742104fea86b10efc8499f79e722301 27 June 2019, 18:11:48 UTC
15fd3be LRU Cache to enable mid-point insertion by default (#5508) Summary: Mid-point insertion is a useful feature and is mature now. Make it the default. Also changed cache_index_and_filter_blocks_with_high_priority to true by default accordingly, so that index and filter blocks are not evicted more easily after the change, to avoid too many surprises to users. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5508 Test Plan: Run all existing tests. Differential Revision: D16021179 fbshipit-source-id: ce8456e8d43b3bfb48df6c304b5290a9d19817eb 27 June 2019, 17:20:57 UTC
c08c0ae Add C binding for secondary instance (#5505) Summary: Add C binding for secondary instance as well as unit test. Test plan (on devserver) ``` $make clean && COMPILE_WITH_ASAN=1 make -j20 all $./c_test $make check ``` Pull Request resolved: https://github.com/facebook/rocksdb/pull/5505 Differential Revision: D16000043 Pulled By: riversand963 fbshipit-source-id: 3361ef6bfdf4ce12438cee7290a0ac203b5250bd 27 June 2019, 15:58:54 UTC
a8975b6 Block cache tracer: Do not populate block cache trace record when tracing is disabled. (#5510) Summary: This PR makes sure that trace record is not populated when tracing is disabled.
Before this PR:
DB path: [/data/mysql/rocks_regression_tests/OPTIONS-myrocks-40-33-10000000/2019-06-26-13-04-41/db]
readwhilewriting : 9.803 micros/op 1550408 ops/sec; 107.9 MB/s (5000000 of 5000000 found)
Microseconds per read: Count: 80000000 Average: 9.8045 StdDev: 12.64 Min: 1 Median: 7.5246 Max: 25343 Percentiles: P50: 7.52 P75: 12.10 P99: 37.44 P99.9: 75.07 P99.99: 133.60
After this PR:
DB path: [/data/mysql/rocks_regression_tests/OPTIONS-myrocks-40-33-10000000/2019-06-26-14-08-21/db]
readwhilewriting : 8.723 micros/op 1662882 ops/sec; 115.8 MB/s (5000000 of 5000000 found)
Microseconds per read: Count: 80000000 Average: 8.7236 StdDev: 12.19 Min: 1 Median: 6.7262 Max: 25229 Percentiles: P50: 6.73 P75: 10.50 P99: 31.54 P99.9: 74.81 P99.99: 132.82
Pull Request resolved: https://github.com/facebook/rocksdb/pull/5510 Differential Revision: D16016428 Pulled By: HaoyuHuang fbshipit-source-id: 3b3d11e6accf207d18ec2545b802aa01ee65901f 27 June 2019, 15:34:08 UTC
9dbcda9 Fix uninitialized prev_block_offset_ in BlockBasedTableReader (#5507) Summary: Found by valgrind_check. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5507 Differential Revision: D16002612 Pulled By: miasantreble fbshipit-source-id: 13c11c183190e0a0571844635457d434da3ac59a 26 June 2019, 06:02:01 UTC
b4d7209 Add an option to put first key of each sst block in the index (#5289) Summary: The first key is used to defer reading the data block until this file gets to the top of merging iterator's heap. For short range scans, most files never make it to the top of the heap, so this change can reduce read amplification by a lot sometimes. Consider the following workload. There are a few data streams (we'll be calling them "logs"), each stream consisting of a sequence of blobs (we'll be calling them "records"). Each record is identified by log ID and a sequence number within the log. RocksDB key is concatenation of log ID and sequence number (big endian). Reads are mostly relatively short range scans, each within a single log. Writes are mostly sequential for each log, but writes to different logs are randomly interleaved. Compactions are disabled; instead, when we accumulate a few tens of sst files, we create a new column family and start writing to it. So, a typical sst file consists of a few ranges of blocks, each range corresponding to one log ID (we use FlushBlockPolicy to cut blocks at log boundaries). A typical read would go like this. First, iterator Seek() reads one block from each sst file. Then a series of Next()s move through one sst file (since writes to each log are mostly sequential) until the subiterator reaches the end of this log in this sst file; then Next() switches to the next sst file and reads sequentially from that, and so on. Often a range scan will only return records from a small number of blocks in small number of sst files; in this case, the cost of initial Seek() reading one block from each file may be bigger than the cost of reading the actually useful blocks. Neither iterate_upper_bound nor bloom filters can prevent reading one block from each file in Seek(). 
But this PR can: if the index contains the first key of each block, we don't have to read the block until it actually makes it to the top of the merging iterator's heap, so for short range scans we won't read any blocks from most of the sst files. This PR does the deferred block loading inside the value() call. This is not ideal: there's no good way to report an IO error from inside value(). As discussed with siying offline, it would probably be better to change InternalIterator's interface to explicitly fetch the deferred value and get the status. I'll do it in a separate PR. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5289 Differential Revision: D15256423 Pulled By: al13n321 fbshipit-source-id: 750e4c39ce88e8d41662f701cf6275d9388ba46a 25 June 2019, 03:54:04 UTC
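The deferred-loading idea above can be illustrated with a toy iterator: positioning in the merging heap uses only the index entry's first key, and the block body is "read" only when value() is called, so a load counter stays at zero for files that never reach the top of the heap. The class and field names below are hypothetical, not the RocksDB implementation.

```cpp
#include <cstddef>
#include <string>
#include <utility>
#include <vector>

// Toy index entry: the block's first key plus the block body that would
// normally live on disk.
struct IndexEntry {
  std::string first_key;
  std::string block_contents;
};

class LazyBlockIterator {
 public:
  explicit LazyBlockIterator(std::vector<IndexEntry> index)
      : index_(std::move(index)) {}

  // Positioning and heap comparisons use only the index; no block load.
  const std::string& key() const { return index_[pos_].first_key; }

  // The block body is fetched only when value() is actually called.
  const std::string& value() {
    ++loads_;  // a real version would do the (possibly failing) IO here
    return index_[pos_].block_contents;
  }

  bool Next() { return ++pos_ < index_.size(); }
  int loads() const { return loads_; }

 private:
  std::vector<IndexEntry> index_;
  size_t pos_ = 0;
  int loads_ = 0;
};
```

The awkwardness the PR mentions is visible here too: value() is where the IO happens, and it has no way to return a status.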
554a645 Block cache trace analysis: Write time series graphs in csv files (#5490) Summary: This PR adds a feature in block cache trace analysis tool to write statistics into csv files. 1. The analysis tool supports grouping the number of accesses per second by various labels, e.g., block, column family, block type, or a combination of them. 2. It also computes reuse distance and reuse interval. Reuse distance: The cumulated size of unique blocks read between two consecutive accesses on the same block. Reuse interval: The time between two consecutive accesses on the same block. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5490 Differential Revision: D15901322 Pulled By: HaoyuHuang fbshipit-source-id: b5454fea408a32757a80be63de6fe1c8149ca70e 25 June 2019, 03:42:12 UTC
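The two metrics defined in this entry are directly computable from a trace. The sketch below, on a toy access list, follows those definitions: reuse distance is the cumulated size of unique blocks read between two consecutive accesses to the same block, and reuse interval is the time between them. The `Access` struct and function name are illustrative, not the analysis tool's actual types.

```cpp
#include <cstddef>
#include <cstdint>
#include <map>
#include <string>
#include <utility>
#include <vector>

// One trace entry: (timestamp, block id, block size).
struct Access {
  uint64_t ts;
  std::string block;
  uint64_t size;
};

// Returns (reuse_distance, reuse_interval) for the first pair of accesses
// to `target`, or (UINT64_MAX, UINT64_MAX) if it is accessed fewer than
// twice in the trace.
std::pair<uint64_t, uint64_t> ReuseStats(const std::vector<Access>& trace,
                                         const std::string& target) {
  long first = -1;
  for (size_t i = 0; i < trace.size(); ++i) {
    if (trace[i].block != target) continue;
    if (first < 0) {
      first = static_cast<long>(i);
      continue;
    }
    // Reuse distance: total size of *unique* blocks touched in between.
    std::map<std::string, uint64_t> uniq;
    for (size_t j = first + 1; j < i; ++j) uniq[trace[j].block] = trace[j].size;
    uint64_t dist = 0;
    for (const auto& kv : uniq) dist += kv.second;
    // Reuse interval: wall time between the two accesses.
    return {dist, trace[i].ts - trace[first].ts};
  }
  return {UINT64_MAX, UINT64_MAX};
}
```

Grouping such per-block results by column family or block type gives the csv time series the tool writes out.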
acb8053 Fix build jemalloc api (#5470) Summary: There is a compile error on Windows with MSVC in malloc_stats.cc where malloc_stats_print is referenced. The compiler only knows je_malloc_stats_print from jemalloc.h. Adding JEMALLOC_NO_RENAME replaces malloc_stats_print with je_malloc_stats_print. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5470 Differential Revision: D15978720 fbshipit-source-id: c05757a2e89e2e015a661d9626c352e4f32f97e4 25 June 2019, 00:40:32 UTC
e731f44 C file should not include <cinttypes>, it is a C++ header. (#5499) Summary: Include <inttypes.h> instead. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5499 Differential Revision: D15966937 Pulled By: miasantreble fbshipit-source-id: 2156c4329b91d26d447de94f1231264d52786350 24 June 2019, 23:12:39 UTC
c92c58f JNI: Do not create 8M block cache for negative blockCacheSize values (#5465) Summary: As [BlockBasedTableConfig setBlockCacheSize()](https://github.com/facebook/rocksdb/blob/1966a7c055f6e182d627275051f5c09441aa922d/java/src/main/java/org/rocksdb/BlockBasedTableConfig.java#L728) says, if cacheSize is non-positive, then the cache will not be used. But when we configured a negative number or 0, there was an unexpected result: the block cache became 8M. - Allow 0 as a valid size. When block cache size is 0, an 8MB block cache is created, as that is the default C++ API behavior. Also updated the comment. - Set no_block_cache to true if a negative value is passed as the block cache size, so that no block cache is created. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5465 Differential Revision: D15968788 Pulled By: sagar0 fbshipit-source-id: ee02d6e95841c9e2c316a64bfdf192d46ff5638a 24 June 2019, 18:37:04 UTC
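The corrected decision logic reads naturally as a three-way branch. The sketch below is a C++ rendering of the behavior described (the actual fix lives in the JNI layer); the `TableOptions` struct and function name are illustrative stand-ins.

```cpp
#include <cstdint>

// Hypothetical, simplified options struct mirroring the two fields the
// fix touches.
struct TableOptions {
  bool no_block_cache = false;
  int64_t block_cache_size = 8 * 1024 * 1024;  // default 8MB cache
};

TableOptions ConfigureBlockCache(int64_t requested_size) {
  TableOptions opts;
  if (requested_size < 0) {
    // Negative: disable the block cache entirely (no 8MB surprise).
    opts.no_block_cache = true;
    opts.block_cache_size = 0;
  } else if (requested_size == 0) {
    // 0 is a valid size: keep the default 8MB C++ API behavior.
  } else {
    opts.block_cache_size = requested_size;
  }
  return opts;
}
```

The key change is that negative now maps to `no_block_cache = true` instead of silently falling back to the 8MB default.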
68980df Also build compression libraries on AppVeyor CI (#5226) Summary: This adds some compression dependencies to AppVeyor CI (those whose builds can be easily scripted on Windows, i.e. Snappy, LZ4, and ZStd). Let's see if the CI passes ;-) Pull Request resolved: https://github.com/facebook/rocksdb/pull/5226 Differential Revision: D15967223 fbshipit-source-id: 0914c613ac358cbb248df75cdee8099e836828dc 24 June 2019, 17:41:07 UTC
22028aa Compaction Reads should read no more than compaction_readahead_size bytes, when set! (#5498) Summary: As a result of https://github.com/facebook/rocksdb/issues/5431, the compaction_readahead_size given by a user was not used exactly. The reason is that the code behind readahead for user reads and compaction reads was unified in the above PR, and the behavior for user reads is to read readahead_size+n bytes (see the FilePrefetchBuffer::TryReadFromCache method). Before the unification, ReadaheadRandomAccessFileReader used compaction_readahead_size as-is. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5498 Test Plan: Ran strace command: strace -e pread64 -f -T -t ./db_compaction_test --gtest_filter=DBCompactionTest.PartialManualCompaction In the test the compaction_readahead_size was configured to 2MB, and verified the pread syscall did indeed request 2MB. Before the change it was requesting more than 2MB. Strace Output: strace: Process 3798982 attached Note: Google Test filter = DBCompactionTest.PartialManualCompaction [==========] Running 1 test from 1 test case. [----------] Global test environment set-up. 
[----------] 1 test from DBCompactionTest
[ RUN ] DBCompactionTest.PartialManualCompaction
(... repetitive strace process attach/exit lines omitted ...)
[pid 3798984] 12:07:06 pread64(9, "\1\203W!\241QE\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 53, 11177) = 53 <0.000121>
[pid 3798984] 12:07:06 pread64(9, "\0\22\4rocksdb.properties\353Q\223\5\0\0\0\0\1\0\0"..., 38, 11139) = 38 <0.000106>
[pid 3798984] 12:07:06 pread64(9, "\0$\4rocksdb.block.based.table.ind"..., 664, 10475) = 664 <0.000081>
[pid 3798984] 12:07:06 pread64(9, "\0\v\3foo\2\7\0\0\0\0\0\0\0\270 \0\v\4foo\2\3\0\0\0\0\0\0\275"..., 74, 10401) = 74 <0.000138>
(... analogous footer/properties/index reads on fds 11-17 omitted ...)
[pid 3798983] 12:07:06 pread64(17, "\0\v\200\10foo\2P\0\0\0\0\0\0)U?MSg_)j(roFn($e"..., 2097152, 0) = 11230 <0.000091>
[pid 3798983] 12:07:06 pread64(17, "", 2085922, 11230) = 0 <0.000073>
[pid 3798983] 12:07:06 pread64(16, "\0\v\200\10foo\2F\0\0\0\0\0\0k[h3%.OPH_^:\\S7T&"..., 2097152, 0) = 11230 <0.000083>
[pid 3798983] 12:07:06 pread64(16, "", 2085922, 11230) = 0 <0.000078>
(... analogous 2097152-byte (2MB) data reads on fds 15 down to 9, and reads of the compaction output on fd 8, omitted ...)
[ OK ] DBCompactionTest.PartialManualCompaction (757 ms)
[----------] 1 test from DBCompactionTest (758 ms total)
[----------] Global test environment tear-down
[==========] 1 test from 1 test case ran. (759 ms total)
[ PASSED ] 1 test.
Differential Revision: D15948422 Pulled By: vjnadimpalli fbshipit-source-id: 9b189d1e8675d290c7784e4b33e5d3b5761d2ac8 22 June 2019, 04:31:49 UTC
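The fix in this entry can be summarized as a size computation that differs between user reads and compaction reads. The sketch below is a toy model under assumed names (not the real FilePrefetchBuffer code): user reads prefetch n + readahead_size bytes, while a compaction read with compaction_readahead_size set should request exactly that many bytes, capped below by the n bytes actually needed.

```cpp
#include <algorithm>
#include <cstddef>

// Illustrative: how many bytes a single read request should ask for.
size_t ReadRequestSize(size_t n, size_t readahead_size, bool for_compaction) {
  if (for_compaction) {
    // With compaction_readahead_size = 2MB, the pread should request
    // exactly 2097152 bytes (unless the caller needs more than that).
    return std::max(n, readahead_size);
  }
  // User-read behavior: fetch the needed bytes plus the readahead window.
  return n + readahead_size;
}
```

Before the fix, the compaction path effectively used the user-read formula and over-read; the strace test plan above confirms the exact 2MB requests afterwards.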
2730fe6 Fix ingested file and directory not being synced (#5435) Summary: It is not safe to assume the application has synced the SST file before ingesting it into the DB. Also, the directory holding the ingested file needs to be fsynced, otherwise the file can be lost. For the integrity of RocksDB we need to sync the ingested file and directory before applying the change to the manifest. Also, syncing after writing the global sequence number when write_global_seqno=true was removed in https://github.com/facebook/rocksdb/issues/4172. Adding it back. Fixes https://github.com/facebook/rocksdb/issues/5287. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5435 Test Plan: Test ingest file with the ldb command and observe fsync/fdatasync in strace output. Tried both move_files=true and move_files=false. https://gist.github.com/yiwu-arbug/650a4023f57979056d83485fa863bef9 More test suggestions are welcome. Differential Revision: D15941675 Pulled By: riversand963 fbshipit-source-id: 389533f3923065a96df2cdde23ff4724a1810d78 21 June 2019, 17:15:38 UTC
1bfeffa Stop printing after verification fails (#5493) Summary: Stop verification and printing once verification fails. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5493 Differential Revision: D15928992 Pulled By: riversand963 fbshipit-source-id: 699feac034a217d57280aa3fb50f5aba06adf317 21 June 2019, 05:16:58 UTC
705b8ee Add more callers for table reader. (#5454) Summary: This PR adds more callers for table readers. This information is only used for block cache analysis, so that we can know which caller accesses a block. 1. It renames BlockCacheLookupCaller to TableReaderCaller, as passing the caller from upstream requires changes to table_reader.h and TableReaderCaller is a more appropriate name. 2. It adds more table reader callers in table/table_reader_caller.h, e.g., kCompactionRefill, kExternalSSTIngestion, and kBuildTable. This PR is long as it requires modification of interfaces in table_reader.h, e.g., NewIterator. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5454 Test Plan: make clean && COMPILE_WITH_ASAN=1 make check -j32. Differential Revision: D15819451 Pulled By: HaoyuHuang fbshipit-source-id: b6caa704c8fb96ddd15b9a934b7e7ea87f88092d 20 June 2019, 21:31:48 UTC
0b0cb6f Fix segfault in ~DBWithTTLImpl() when called after Close() (#5485) Summary: ~DBWithTTLImpl() fails after calling the Close() function (which invokes the Close() function of DBImpl), because Close() deletes default_cf_handle_, which is used in the GetOptions() call made in ~DBWithTTLImpl(), leading to a segfault. Fixed by creating a Close() function for the DBWithTTLImpl class that does the close and the work originally done in ~DBWithTTLImpl(). If the Close() function is not called, it will be called in ~DBWithTTLImpl(). Pull Request resolved: https://github.com/facebook/rocksdb/pull/5485 Test Plan: make clean; USE_CLANG=1 make all check -j Differential Revision: D15924498 fbshipit-source-id: 567397fb972961059083a1ae0f9f99ff74872b78 20 June 2019, 20:08:17 UTC
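The fix pattern in this entry is a common one: give the wrapper its own idempotent Close() and have the destructor call it only when the user has not. A minimal sketch under assumed names (the class below is not the real DBWithTTLImpl):

```cpp
// Illustrative wrapper showing the Close-then-destruct pattern.
class DBWithTTLLike {
 public:
  ~DBWithTTLLike() {
    // Destructor only closes if the user did not already do so, so it
    // never re-touches handles that Close() has deleted.
    if (!closed_) Close();
  }

  void Close() {
    if (closed_) return;  // idempotent: safe to call more than once
    // ... do the teardown work that used to live in the destructor,
    // while member handles are still valid ...
    closed_ = true;
  }

  bool closed() const { return closed_; }

 private:
  bool closed_ = false;
};
```

Making Close() idempotent is what removes the double-free: calling Close() and then destroying the object performs the teardown exactly once.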
24f7343 sanitize and limit block_size under 4GB (#5492) Summary: `Block::restart_index_`, `Block::restarts_`, and `Block::current_` are defined as uint32_t, but `BlockBasedTableOptions::block_size` is defined as a size_t, so users might see corruption as in https://github.com/facebook/rocksdb/issues/5486. This PR adds a check in `BlockBasedTableFactory::SanitizeOptions` to disallow such configurations. yiwu-arbug Pull Request resolved: https://github.com/facebook/rocksdb/pull/5492 Differential Revision: D15914047 Pulled By: miasantreble fbshipit-source-id: c943f153d967e15aee7f2795730ab8259e2be201 20 June 2019, 18:45:08 UTC
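The sanitization amounts to rejecting any block_size that does not fit in the uint32_t fields Block uses internally. A minimal sketch of that check (illustrative function name, taking uint64_t so the bound is well-defined on any platform):

```cpp
#include <cstdint>

// A block_size wider than 32 bits would be silently truncated when stored
// in Block's uint32_t offset fields, so reject it up front.
bool BlockSizeFitsUint32(uint64_t block_size) {
  const uint64_t kMaxUint32 = (uint64_t{1} << 32) - 1;
  return block_size <= kMaxUint32;
}
```

In the real fix, this style of check runs in option sanitization so the misconfiguration fails at open time rather than surfacing later as corruption.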
68614a9 Fix AlignedBuffer's usage in Encryption Env (#5396) Summary: The usage of `AlignedBuffer` in env_encryption.cc writes and reads to/from the AlignedBuffer's internal buffer directly without going through AlignedBuffer's APIs (like `Append` and `Read`), causing encapsulation to break in some cases. The writes are especially problematic as after the data is written to the buffer (directly using either memmove or memcpy), the size of the buffer is not updated, causing the AlignedBuffer to lose track of the encapsulated buffer's current size. Fixed this by updating the buffer size after every write. Todo for later: Add an overloaded method to AlignedBuffer to support a memmove in addition to a memcpy. The encryption env does a memmove, and hence I couldn't switch to using `AlignedBuffer.Append()`. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5396 Test Plan: `make check` Differential Revision: D15764756 Pulled By: sagar0 fbshipit-source-id: 2e24b52bd3b4b5056c5c1da157f91ddf89370183 19 June 2019, 23:46:20 UTC