swh:1:snp:5115096b921df712aeb2a08114fede57fb3331fb

abd1fdd Fix asan_check failures Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/5589 Test Plan: TEST_TMPDIR=/dev/shm/rocksdb COMPILE_WITH_ASAN=1 OPT=-g make J=64 -j64 asan_check Differential Revision: D16361081 Pulled By: anand1976 fbshipit-source-id: 09474832b9cfb318a840d4b633e22dfad105d58c 18 July 2019, 21:51:25 UTC
3a6e83b HISTORY update for export and import column family APIs Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/5587 Differential Revision: D16359919 fbshipit-source-id: cfd9c448d79a8b8e7ac1d2b661d10151df269dba 18 July 2019, 17:16:38 UTC
ec2b996 Fix LITE mode build failure Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/5588 Test Plan: make LITE=1 all check Differential Revision: D16354543 Pulled By: anand1976 fbshipit-source-id: 327a171439e183ac3a5e5057c511d6bca445e97d 18 July 2019, 05:06:12 UTC
9f5cfb8 Fix for ReadaheadSequentialFile crash in ldb_cmd_test (#5586) Summary: Fixing a corner case crash when there was no data read from file, but status is still OK Pull Request resolved: https://github.com/facebook/rocksdb/pull/5586 Differential Revision: D16348117 Pulled By: elipoz fbshipit-source-id: f97973308024f020d8be79ca3c56466b84d80656 18 July 2019, 00:04:39 UTC
8a008d4 Block access tracing: Trace referenced key for Get on non-data blocks. (#5548) Summary: This PR traces the referenced key for Get for all types of blocks. This is useful when evaluating hybrid row-block caches. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5548 Test Plan: make clean && USE_CLANG=1 make check -j32 Differential Revision: D16157979 Pulled By: HaoyuHuang fbshipit-source-id: f6327411c9deb74e35e22a35f66cdbae09ab9d87 17 July 2019, 20:05:58 UTC
22ce462 Export Import sst files (#5495) Summary: Refresh of the earlier change here - https://github.com/facebook/rocksdb/issues/5135 This is a review request for code change needed for - https://github.com/facebook/rocksdb/issues/3469 "Add support for taking snapshot of a column family and creating column family from a given CF snapshot" We have an implementation for this that we have been testing internally. We have two new APIs that together provide this functionality. (1) ExportColumnFamily() - This API is modelled after CreateCheckpoint() as below. // Exports all live SST files of a specified Column Family onto export_dir, // returning SST files information in metadata. // - SST files will be created as hard links when the directory specified // is in the same partition as the db directory, copied otherwise. // - export_dir should not already exist and will be created by this API. // - Always triggers a flush. virtual Status ExportColumnFamily(ColumnFamilyHandle* handle, const std::string& export_dir, ExportImportFilesMetaData** metadata); Internally, the API will DisableFileDeletions(), GetColumnFamilyMetaData(), Parse through metadata, creating links/copies of all the sst files, EnableFileDeletions() and complete the call by returning the list of file metadata. (2) CreateColumnFamilyWithImport() - This API is modeled after IngestExternalFile(), but invoked only during a CF creation as below. // CreateColumnFamilyWithImport() will create a new column family with // column_family_name and import external SST files specified in metadata into // this column family. // (1) External SST files can be created using SstFileWriter. // (2) External SST files can be exported from a particular column family in // an existing DB. // Option in import_options specifies whether the external files are copied or // moved (default is copy). When option specifies copy, managing files at // external_file_path is caller's responsibility. 
When option specifies a // move, the call ensures that the specified files at external_file_path are // deleted on successful return and files are not modified on any error // return. // On error return, column family handle returned will be nullptr. // ColumnFamily will be present on successful return and will not be present // on error return. ColumnFamily may be present on any crash during this call. virtual Status CreateColumnFamilyWithImport( const ColumnFamilyOptions& options, const std::string& column_family_name, const ImportColumnFamilyOptions& import_options, const ExportImportFilesMetaData& metadata, ColumnFamilyHandle** handle); Internally, this API creates a new CF, parses all the sst files and adds them to the specified column family, at the same level and with the same sequence numbers as in the metadata. Also performs safety checks with respect to overlaps between the sst files being imported. If the incoming sequence number is higher than the current local sequence number, the local sequence number is updated to reflect this. Note, as the sst files are being moved across Column Families, the Column Family name in the sst files will no longer match the actual column family on the destination DB. The API does not modify Column Family name or id in the sst files being imported. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5495 Differential Revision: D16018881 fbshipit-source-id: 9ae2251025d5916d35a9fc4ea4d6707f6be16ff9 17 July 2019, 19:27:14 UTC
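The two APIs above are meant to be used together. A hedged sketch of the round trip, following the signatures quoted in the commit message (illustrative only, not compiled here; the DB handles, the column family handle, the export directory, and the `move_files` option name are assumptions for this example):

```cpp
// Sketch: export CF "cf1" from db1 and recreate it in db2.
// Assumes db1/db2 are open DB* handles and cf1_handle is a
// ColumnFamilyHandle* for the source column family.
ExportImportFilesMetaData* metadata = nullptr;
Status s = db1->ExportColumnFamily(cf1_handle, "/tmp/cf1_export", &metadata);

if (s.ok()) {
  ImportColumnFamilyOptions import_opts;
  import_opts.move_files = false;  // copy (the stated default); caller
                                   // remains responsible for the originals
  ColumnFamilyHandle* imported = nullptr;
  s = db2->CreateColumnFamilyWithImport(ColumnFamilyOptions(), "cf1_copy",
                                        import_opts, *metadata, &imported);
}
```

Per the commit message, the export always triggers a flush and uses hard links when the export directory is on the same partition as the db directory.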
a3c1832 Arm64 CRC32 parallel computation optimization for RocksDB (#5494) Summary: Crc32c Parallel computation optimization: Algorithm comes from Intel whitepaper: [crc-iscsi-polynomial-crc32-instruction-paper](https://www.intel.com/content/dam/www/public/us/en/documents/white-papers/crc-iscsi-polynomial-crc32-instruction-paper.pdf) Input data is divided into three equal-sized blocks Three parallel blocks (crc0, crc1, crc2) for 1024 Bytes One Block: 42(BLK_LENGTH) * 8(step length: crc32c_u64) bytes 1. crc32c_test: ``` [==========] Running 4 tests from 1 test case. [----------] Global test environment set-up. [----------] 4 tests from CRC [ RUN ] CRC.StandardResults [ OK ] CRC.StandardResults (1 ms) [ RUN ] CRC.Values [ OK ] CRC.Values (0 ms) [ RUN ] CRC.Extend [ OK ] CRC.Extend (0 ms) [ RUN ] CRC.Mask [ OK ] CRC.Mask (0 ms) [----------] 4 tests from CRC (1 ms total) [----------] Global test environment tear-down [==========] 4 tests from 1 test case ran. (1 ms total) [ PASSED ] 4 tests. ``` 2. RocksDB benchmark: db_bench --benchmarks="crc32c" ``` Linear Arm crc32c: crc32c: 1.005 micros/op 995133 ops/sec; 3887.2 MB/s (4096 per op) ``` ``` Parallel optimization with Armv8 crypto extension: crc32c: 0.419 micros/op 2385078 ops/sec; 9316.7 MB/s (4096 per op) ``` It gets ~2.4x speedup compared to linear Arm crc32c instructions. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5494 Differential Revision: D16340806 fbshipit-source-id: 95dae9a5b646fd20a8303671d82f17b2e162e945 17 July 2019, 18:22:38 UTC
74fb7f0 Cleaned up and simplified LRU cache implementation (#5579) Summary: The 'refs' field in LRUHandle now counts only external references, since we already have the IN_CACHE flag anyway. This simplifies the reference accounting logic a bit. Also cleaned up a few asserts and comments to make them more readable. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5579 Differential Revision: D16286747 Pulled By: elipoz fbshipit-source-id: 7186d88f80f512ce584d0a303437494b5cbefd7f 17 July 2019, 02:17:45 UTC
0f4d90e Added support for sequential read-ahead file (#5580) Summary: Added support for sequential read-ahead file that can prefetch the read data and later serve it from internal cache buffer. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5580 Differential Revision: D16287082 Pulled By: elipoz fbshipit-source-id: a3e7ad9643d377d39352ff63058ce050ec31dcf3 17 July 2019, 01:21:18 UTC
699a569 Remove RandomAccessFileReader.for_compaction_ (#5572) Summary: RandomAccessFileReader.for_compaction_ doesn't seem to be used anymore. Remove it. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5572 Test Plan: USE_CLANG=1 make all check -j Differential Revision: D16286178 fbshipit-source-id: aa338049761033dfbe5e8b1707bbb0be2df5be7e 16 July 2019, 23:32:18 UTC
0acaa1a WriteUnPrepared: use tracked_keys_ to track keys needed for rollback (#5562) Summary: Currently, we are tracking keys we need to rollback via a separate structure specific to WriteUnprepared in write_set_keys_. We already have a data structure called tracked_keys_ used to track which keys to unlock on transaction termination. This is exactly what we want, since we should only rollback keys that we have locked anyway. Save some memory by reusing that data structure instead of making our own. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5562 Differential Revision: D16206484 Pulled By: lth fbshipit-source-id: 5894d2b824a4b19062d84adbd6e6e86f00047488 16 July 2019, 22:24:56 UTC
3bde41b Move the filter readers out of the block cache (#5504) Summary: Currently, when the block cache is used for the filter block, it is not really the block itself that is stored in the cache but a FilterBlockReader object. Since this object is not pure data (it has, for instance, pointers that might dangle, including in one case a back pointer to the TableReader), it's not really sharable. To avoid the issues around this, the current code erases the cache entries when the TableReader is closed (which, BTW, is not sufficient since a concurrent TableReader might have picked up the object in the meantime). Instead of doing this, the patch moves the FilterBlockReader out of the cache altogether, and decouples the filter reader object from the filter block. In particular, instead of the TableReader owning, or caching/pinning the FilterBlockReader (based on the customer's settings), with the change the TableReader unconditionally owns the FilterBlockReader, which in turn owns/caches/pins the filter block. This change also enables us to reuse the code paths historically used for data blocks for filters as well. Note: Eviction statistics for filter blocks are temporarily broken. We plan to fix this in a separate phase. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5504 Test Plan: make asan_check Differential Revision: D16036974 Pulled By: ltamasi fbshipit-source-id: 770f543c5fb4ed126fd1e04bfd3809cf4ff9c091 16 July 2019, 20:14:58 UTC
cd25203 Fix memory leak in `rocksdb_wal_iter_get_batch` function (#5515) Summary: `wal_batch.writeBatchPtr.release()` gives up the ownership of the original `WriteBatch`, but there is no new owner, which causes a memory leak. The patch is simple. Removing `release()` prevents the ownership change. `std::move` is for speed. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5515 Differential Revision: D16264281 Pulled By: riversand963 fbshipit-source-id: 51c556b7a1c977325c3aa24acb636303847151fa 15 July 2019, 19:59:39 UTC
6e8a135 Fix regression - 100% CPU - Regression for Windows 7 (#5557) Summary: Fixes https://github.com/facebook/rocksdb/issues/5552 Pull Request resolved: https://github.com/facebook/rocksdb/pull/5557 Differential Revision: D16266329 fbshipit-source-id: a8f6b50298a6f7c8d6c7e172bb26dd7eb6bd8a4d 15 July 2019, 19:19:49 UTC
b0259e4 add more tracing for stats history (#5566) Summary: Sample info log output from db_bench: In-memory: ``` 2019/07/12-21:39:19.478490 7fa01b3f5700 [_impl/db_impl.cc:702] ------- PERSISTING STATS ------- 2019/07/12-21:39:19.478633 7fa01b3f5700 [_impl/db_impl.cc:753] Storing 145 stats with timestamp 1562992759 to in-memory stats history 2019/07/12-21:39:19.478670 7fa01b3f5700 [_impl/db_impl.cc:766] [Pre-GC] In-memory stats history size: 1051218 bytes, slice count: 103 2019/07/12-21:39:19.478704 7fa01b3f5700 [_impl/db_impl.cc:775] [Post-GC] In-memory stats history size: 1051218 bytes, slice count: 102 ``` On-disk: ``` 2019/07/12-21:48:53.862548 7f24943f5700 [_impl/db_impl.cc:702] ------- PERSISTING STATS ------- 2019/07/12-21:48:53.862553 7f24943f5700 [_impl/db_impl.cc:709] Reading 145 stats from statistics 2019/07/12-21:48:53.862852 7f24943f5700 [_impl/db_impl.cc:737] Writing 145 stats with timestamp 1562993333 to persistent stats CF succeeded ``` ``` 2019/07/12-21:48:51.861711 7f24943f5700 [_impl/db_impl.cc:702] ------- PERSISTING STATS ------- 2019/07/12-21:48:51.861729 7f24943f5700 [_impl/db_impl.cc:709] Reading 145 stats from statistics 2019/07/12-21:48:51.861921 7f24943f5700 [_impl/db_impl.cc:732] Writing to persistent stats CF failed -- Result incomplete: Write stall ... 2019/07/12-21:48:51.873032 7f2494bf6700 [WARN] [lumn_family.cc:749] [default] Stopping writes because we have 2 immutable memtables (waiting for flush), max_write_buffer_number is set to 2 ``` Pull Request resolved: https://github.com/facebook/rocksdb/pull/5566 Differential Revision: D16258187 Pulled By: miasantreble fbshipit-source-id: 292497099b941418590ed4312411bee36e244dc5 15 July 2019, 18:49:17 UTC
f064d74 Cleanup the Arm64 CRC32 unused warning (#5565) Summary: When 'HAVE_ARM64_CRC' is set, the below methods: - bool rocksdb::crc32c::isSSE42() - bool rocksdb::crc32c::isPCLMULQDQ() are defined but not used, and an unused-function warning is raised during the RocksDB build. This patch cleans up these warnings by adding an ifndef: when building under HAVE_ARM64_CRC, we do not define `isSSE42` and `isPCLMULQDQ`. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5565 Differential Revision: D16233654 fbshipit-source-id: c32a9dda7465dbf65f9ccafef159124db92cdffd 15 July 2019, 18:20:26 UTC
68d43b4 A python script to plot graphs for csv files generated by block_cache_trace_analyzer Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/5563 Test Plan: Manually run the script on files generated by block_cache_trace_analyzer. Differential Revision: D16214400 Pulled By: HaoyuHuang fbshipit-source-id: 94485eed995e9b2b63e197c5dfeb80129fa7897f 13 July 2019, 01:56:20 UTC
6187661 Fix MyRocks compile warnings-treated-as-errors on Fedora 30, gcc 9.1.1 (#5553) Summary: - Provide assignment operator in CompactionStats - Provide a copy constructor for FileDescriptor - Remove std::move from "return std::move(t)" in BoundedQueue Pull Request resolved: https://github.com/facebook/rocksdb/pull/5553 Differential Revision: D16230170 fbshipit-source-id: fd7c6e52390b2db1be24141e25649cf62424d078 13 July 2019, 00:30:51 UTC
3e9c5a3 Block cache analyzer: Add more stats (#5516) Summary: This PR provides more command line options for block cache analyzer to better understand block cache access pattern. -analyze_bottom_k_access_count_blocks -analyze_top_k_access_count_blocks -reuse_lifetime_labels -reuse_lifetime_buckets -analyze_callers -access_count_buckets -analyze_blocks_reuse_k_reuse_window Pull Request resolved: https://github.com/facebook/rocksdb/pull/5516 Test Plan: make clean && COMPILE_WITH_ASAN=1 make check -j32 Differential Revision: D16037440 Pulled By: HaoyuHuang fbshipit-source-id: b9a4ac0d4712053fab910732077a4d4b91400bc8 12 July 2019, 23:55:34 UTC
1a59b6e Cache simulator: Add a ghost cache for admission control and a hybrid row-block cache. (#5534) Summary: This PR adds a ghost cache for admission control. Specifically, it admits an entry on its second access. It also adds a hybrid row-block cache that caches the referenced key-value pairs of a Get/MultiGet request instead of its blocks. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5534 Test Plan: make clean && COMPILE_WITH_ASAN=1 make check -j32 Differential Revision: D16101124 Pulled By: HaoyuHuang fbshipit-source-id: b99edda6418a888e94eb40f71ece45d375e234b1 11 July 2019, 19:43:29 UTC
82d8ca8 Upload db directory during cleanup for certain tests (#5554) Summary: Add an extra cleanup step so that db directory can be saved and uploaded. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5554 Reviewed By: yancouto Differential Revision: D16168844 Pulled By: riversand963 fbshipit-source-id: ec7b2cee5f11c7d388c36531f8b076d648e2fb19 10 July 2019, 18:29:55 UTC
60d8b19 Implemented a file logger that uses WritableFileWriter (#5491) Summary: Current PosixLogger performs IO operations using posix calls. Thus the current implementation will not work for non-posix env. Created a new logger class EnvLogger that uses env specific WritableFileWriter for IO operations. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5491 Test Plan: make check Differential Revision: D15909002 Pulled By: ggaurav28 fbshipit-source-id: 13a8105176e8e42db0c59798d48cb6a0dbccc965 09 July 2019, 23:27:22 UTC
f786b4a Improve result print on atomic flush stress test failure (#5549) Summary: When atomic flush stress test fails, we print internal keys within the range with mismatched key/values for all column families. Test plan (on devserver) Manually hack the code to randomly insert wrong data. Run the test. ``` $make clean && COMPILE_WITH_TSAN=1 make -j32 db_stress $./db_stress -test_atomic_flush=true -ops_per_thread=10000 ``` Check that proper error messages are printed, as follows: ``` 2019/07/08-17:40:14 Starting verification Verification failed Latest Sequence Number: 190903 [default] 000000000000050B => 56290000525350515E5F5C5D5A5B5859 [3] 0000000000000533 => EE100000EAEBE8E9E6E7E4E5E2E3E0E1FEFFFCFDFAFBF8F9 Internal keys in CF 'default', [000000000000050B, 0000000000000533] (max 8) key 000000000000050B seq 139920 type 1 key 0000000000000533 seq 0 type 1 Internal keys in CF '3', [000000000000050B, 0000000000000533] (max 8) key 0000000000000533 seq 0 type 1 ``` Pull Request resolved: https://github.com/facebook/rocksdb/pull/5549 Differential Revision: D16158709 Pulled By: riversand963 fbshipit-source-id: f07fa87763f87b3bd908da03c956709c6456bcab 09 July 2019, 23:27:22 UTC
aa0367a Allow ldb to open DB as secondary (#5537) Summary: Right now ldb can open running DB through read-only DB. However, it might leave info logs files to the read-only DB directory. Add an option to open the DB as secondary to avoid it. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5537 Test Plan: Run ./ldb scan --max_keys=10 --db=/tmp/rocksdbtest-2491/dbbench --secondary_path=/tmp --no_value --hex and ./ldb get 0x00000000000000103030303030303030 --hex --db=/tmp/rocksdbtest-2491/dbbench --secondary_path=/tmp against a normal db_bench run and observe the output changes. Also observe that no new info logs files are created under /tmp/rocksdbtest-2491/dbbench. Run without --secondary_path and observe that new info logs created under /tmp/rocksdbtest-2491/dbbench. Differential Revision: D16113886 fbshipit-source-id: 4e09dec47c2528f6ca08a9e7a7894ba2d9daebbb 09 July 2019, 19:51:28 UTC
cb19e74 Fix bugs in DBWALTest.kTolerateCorruptedTailRecords triggered by #5520 (#5550) Summary: https://github.com/facebook/rocksdb/pull/5520 caused a buffer overflow bug in DBWALTest.kTolerateCorruptedTailRecords. Fix it. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5550 Test Plan: Run the test in UBSAN. It used to fail. Now it succeeds. Differential Revision: D16165516 fbshipit-source-id: 42c56a6bc64eb091f054b87757fcbef60da825f7 09 July 2019, 18:18:32 UTC
a6a9213 Fix interpreter lines for files with python2-only syntax. Reviewed By: lisroach Differential Revision: D15362271 fbshipit-source-id: 48fab12ab6e55a8537b19b4623d2545ca9950ec5 09 July 2019, 17:51:37 UTC
872a261 db_stress to print some internal keys after verification failure (#5543) Summary: Print out some more information when db_stress fails with verification failures to help debug problems. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5543 Test Plan: Manually ingest some failures and observe the outputs are like this: Verification failed [default] 0000000000199A5A => 7C3D000078797A7B74757677707172736C6D6E6F68696A6B [6] 000000000019C8BD => 65380000616063626D6C6F6E69686B6A internal keys in default CF [0000000000199A5A, 000000000019C8BD] (max 8) key 0000000000199A5A seq 179246 type 1 key 000000000019C8BD seq 163970 type 1 Latest Sequence Number: 292234 Differential Revision: D16153717 fbshipit-source-id: b33fa50a828c190cbf8249a37955432044f92daf 08 July 2019, 20:36:37 UTC
6ca3fee Fix -Werror=shadow (#5546) Summary: This PR fixes shadow errors. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5546 Test Plan: make clean && make check -j32 && make clean && USE_CLANG=1 make check -j32 && make clean && COMPILE_WITH_ASAN=1 make check -j32 Differential Revision: D16147841 Pulled By: HaoyuHuang fbshipit-source-id: 1043500d70c134185f537ab4c3900452752f1534 08 July 2019, 07:12:43 UTC
7c76a7f Support GetAllKeyVersions() for non-default cf (#5544) Summary: Previously `GetAllKeyVersions()` supports default column family only. This PR add support for other column families. Test plan (devserver): ``` $make clean && COMPILE_WITH_ASAN=1 make -j32 db_basic_test $./db_basic_test --gtest_filter=DBBasicTest.GetAllKeyVersions ``` All other unit tests must pass. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5544 Differential Revision: D16147551 Pulled By: riversand963 fbshipit-source-id: 5a61aece2a32d789e150226a9b8d53f4a5760168 08 July 2019, 05:43:52 UTC
8d34806 setup wal_in_db_path_ for secondary instance (#5545) Summary: PR https://github.com/facebook/rocksdb/pull/5520 adds DBImpl:: wal_in_db_path_ and initializes it in DBImpl::Open, this PR fixes the valgrind error for secondary instance: ``` ==236417== Conditional jump or move depends on uninitialised value(s) ==236417== at 0x62242A: rocksdb::DeleteDBFile(rocksdb::ImmutableDBOptions const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, bool, bool) (file_util.cc:96) ==236417== by 0x512432: rocksdb::DBImpl::DeleteObsoleteFileImpl(int, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, rocksdb::FileType, unsigned long) (db_impl_files.cc:261) ==236417== by 0x515A7A: rocksdb::DBImpl::PurgeObsoleteFiles(rocksdb::JobContext&, bool) (db_impl_files.cc:492) ==236417== by 0x499153: rocksdb::ColumnFamilyHandleImpl::~ColumnFamilyHandleImpl() (column_family.cc:75) ==236417== by 0x499880: rocksdb::ColumnFamilyHandleImpl::~ColumnFamilyHandleImpl() (column_family.cc:84) ==236417== by 0x4C9AF9: rocksdb::DB::DestroyColumnFamilyHandle(rocksdb::ColumnFamilyHandle*) (db_impl.cc:3105) ==236417== by 0x44E853: CloseSecondary (db_secondary_test.cc:53) ==236417== by 0x44E853: rocksdb::DBSecondaryTest::~DBSecondaryTest() (db_secondary_test.cc:31) ==236417== by 0x44EC77: ~DBSecondaryTest_PrimaryDropColumnFamily_Test (db_secondary_test.cc:443) ==236417== by 0x44EC77: rocksdb::DBSecondaryTest_PrimaryDropColumnFamily_Test::~DBSecondaryTest_PrimaryDropColumnFamily_Test() (db_secondary_test.cc:443) ==236417== by 0x83D1D7: HandleSehExceptionsInMethodIfSupported<testing::Test, void> (gtest-all.cc:3824) ==236417== by 0x83D1D7: void testing::internal::HandleExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void 
(testing::Test::*)(), char const*) (gtest-all.cc:3860) ==236417== by 0x8346DB: testing::TestInfo::Run() [clone .part.486] (gtest-all.cc:4078) ==236417== by 0x8348D4: Run (gtest-all.cc:4047) ==236417== by 0x8348D4: testing::TestCase::Run() [clone .part.487] (gtest-all.cc:4190) ==236417== by 0x834D14: Run (gtest-all.cc:6100) ==236417== by 0x834D14: testing::internal::UnitTestImpl::RunAllTests() (gtest-all.cc:6062) ``` Pull Request resolved: https://github.com/facebook/rocksdb/pull/5545 Differential Revision: D16146224 Pulled By: miasantreble fbshipit-source-id: 184c90e451352951da4e955f054d4b1a1f29ea29 08 July 2019, 04:32:50 UTC
e0d9d57 Fix bugs in WAL trash file handling (#5520) Summary: 1. Cleanup WAL trash files on open 2. Don't apply deletion rate limit if WAL dir is different from db dir Pull Request resolved: https://github.com/facebook/rocksdb/pull/5520 Test Plan: Add new unit tests and make check Differential Revision: D16096750 Pulled By: anand1976 fbshipit-source-id: 6f07858ad864b754b711db416f0389c45ede599b 07 July 2019, 04:07:32 UTC
2de61d9 Assert get_context not null in BlockBasedTable::Get() (#5542) Summary: clang analyze fails after https://github.com/facebook/rocksdb/pull/5514 for this failure: table/block_based/block_based_table_reader.cc:3450:16: warning: Called C++ object pointer is null if (!get_context->SaveValue( ^~~~~~~~~~~~~~~~~~~~~~~ 1 warning generated. The reason is that a branch on whether get_context is null was added earlier in the function, so clang analyze thinks it can be null and that we later make the call without a null check. Fix the issue by removing the branch and adding an assert. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5542 Test Plan: "make all check" passes and the clang analyze failure goes away. Differential Revision: D16133988 fbshipit-source-id: d4627d03c4746254cc11926c523931086ccebcda 05 July 2019, 19:34:13 UTC
4f66ec9 Fix lower bound check error when iterate across file boundary (#5540) Summary: Since https://github.com/facebook/rocksdb/issues/5468 `LevelIterator` compare lower bound and file smallest key on `NewFileIterator` and cache the result to reduce per key lower bound check. However when iterate across file boundary, it doesn't update the cached result since `Valid()=false` because `Valid()` still reflect the status of the previous file iterator. Fixing it by remove the `Valid()` check from `CheckMayBeOutOfLowerBound()`. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5540 Test Plan: See the new test. Signed-off-by: Yi Wu <yiwu@pingcap.com> Differential Revision: D16127653 fbshipit-source-id: a0691e1164658d485c17971aaa97028812f74678 05 July 2019, 00:28:30 UTC
e4dcf5f db_bench to add a new "benchmark" to print out all stats history (#5532) Summary: Sometimes it is helpful to fetch the whole history of stats after benchmark runs. Add such an option Pull Request resolved: https://github.com/facebook/rocksdb/pull/5532 Test Plan: Run the benchmark manually and observe the output is as expected. Differential Revision: D16097764 fbshipit-source-id: 10b5b735a22a18be198b8f348be11f11f8806904 04 July 2019, 03:03:28 UTC
6edc5d0 Block cache tracing: Associate a unique id with Get and MultiGet (#5514) Summary: This PR associates a unique id with Get and MultiGet. This enables us to track how many blocks a Get/MultiGet request accesses. We can also measure the impact of row cache vs block cache. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5514 Test Plan: make clean && COMPILE_WITH_ASAN=1 make check -j32 Differential Revision: D16032681 Pulled By: HaoyuHuang fbshipit-source-id: 775b05f4440badd58de6667e3ec9f4fc87a0af4c 04 July 2019, 02:35:41 UTC
84c5c9a Fix a bug in compaction reads causing checksum mismatches and asan errors (#5531) Summary: Fixed a bug in compaction reads due to which incorrect number of bytes were being read/utilized. The bug was introduced in https://github.com/facebook/rocksdb/issues/5498 , resulting in "Corruption: block checksum mismatch" and "heap-buffer-overflow" asan errors in our tests. https://github.com/facebook/rocksdb/issues/5498 was introduced recently and is not in any released versions. ASAN: ``` > ==2280939==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x6250005e83da at pc 0x000000d57f62 bp 0x7f954f483770 sp 0x7f954f482f20 > === How to use this, how to get the raw stack trace, and more: fburl.com/ASAN === > READ of size 4 at 0x6250005e83da thread T4 > SCARINESS: 27 (4-byte-read-heap-buffer-overflow-far-from-bounds) > #0 tests+0xd57f61 __asan_memcpy > #1 rocksdb/src/util/coding.h:124 rocksdb::DecodeFixed32(char const*) > #2 rocksdb/src/table/block_fetcher.cc:39 rocksdb::BlockFetcher::CheckBlockChecksum() > #3 rocksdb/src/table/block_fetcher.cc:99 rocksdb::BlockFetcher::TryGetFromPrefetchBuffer() > #4 rocksdb/src/table/block_fetcher.cc:209 rocksdb::BlockFetcher::ReadBlockContents() > #5 rocksdb/src/table/block_based/block_based_table_reader.cc:93 rocksdb::(anonymous namespace)::ReadBlockFromFile(rocksdb::RandomAccessFileReader*, rocksdb::FilePrefetchBuffer*, rocksdb::Footer const&, rocksdb::ReadOptions const&, rocksdb::BlockHandle const&, std::unique_ptr<...>*, rocksdb::ImmutableCFOptions const&, bool, bool, rocksdb::UncompressionDict const&, rocksdb::PersistentCacheOptions const&, unsigned long, unsigned long, rocksdb::MemoryAllocator*, bool) > #6 rocksdb/src/table/block_based/block_based_table_reader.cc:2331 rocksdb::BlockBasedTable::RetrieveBlock(rocksdb::FilePrefetchBuffer*, rocksdb::ReadOptions const&, rocksdb::BlockHandle const&, rocksdb::UncompressionDict const&, rocksdb::CachableEntry<...>*, rocksdb::BlockType, rocksdb::GetContext*, rocksdb::BlockCacheLookupContext*, bool) const > #7 rocksdb/src/table/block_based/block_based_table_reader.cc:2090 rocksdb::DataBlockIter* rocksdb::BlockBasedTable::NewDataBlockIterator<...>(rocksdb::ReadOptions const&, rocksdb::BlockHandle const&, rocksdb::DataBlockIter*, rocksdb::BlockType, bool, bool, rocksdb::GetContext*, rocksdb::BlockCacheLookupContext*, rocksdb::Status, rocksdb::FilePrefetchBuffer*, bool) const > #8 rocksdb/src/table/block_based/block_based_table_reader.cc:2720 rocksdb::BlockBasedTableIterator<...>::InitDataBlock() > #9 rocksdb/src/table/block_based/block_based_table_reader.cc:2607 rocksdb::BlockBasedTableIterator<...>::SeekToFirst() > #10 rocksdb/src/table/iterator_wrapper.h:83 rocksdb::IteratorWrapperBase<...>::SeekToFirst() > #11 rocksdb/src/table/merging_iterator.cc:100 rocksdb::MergingIterator::SeekToFirst() > #12 rocksdb/compaction/compaction_job.cc:877 rocksdb::CompactionJob::ProcessKeyValueCompaction(rocksdb::CompactionJob::SubcompactionState*) > #13 rocksdb/compaction/compaction_job.cc:590 rocksdb::CompactionJob::Run() > #14 rocksdb/db_impl/db_impl_compaction_flush.cc:2689 rocksdb::DBImpl::BackgroundCompaction(bool*, rocksdb::JobContext*, rocksdb::LogBuffer*, rocksdb::DBImpl::PrepickedCompaction*, rocksdb::Env::Priority) > #15 rocksdb/db_impl/db_impl_compaction_flush.cc:2248 rocksdb::DBImpl::BackgroundCallCompaction(rocksdb::DBImpl::PrepickedCompaction*, rocksdb::Env::Priority) > #16 rocksdb/db_impl/db_impl_compaction_flush.cc:2024 rocksdb::DBImpl::BGWorkCompaction(void*) > #23 rocksdb/src/util/threadpool_imp.cc:266 rocksdb::ThreadPoolImpl::Impl::BGThread(unsigned long) > #24 rocksdb/src/util/threadpool_imp.cc:307 rocksdb::ThreadPoolImpl::Impl::BGThreadWrapper(void*) ``` Pull Request resolved: https://github.com/facebook/rocksdb/pull/5531 Test Plan: Verified that this fixes the fb-internal Logdevice test which caught the issue. Differential Revision: D16109702 Pulled By: sagar0 fbshipit-source-id: 1fc08549cf7b553e338a133ae11eb9f4d5011914 04 July 2019, 02:06:46 UTC
09ea5d8 Fix clang build with jemalloc (#5522) Summary: Fixes the below build failure for clang compiler using glibc and jemalloc. Platform: linux x86-64 Compiler: clang version 6.0.0-1ubuntu2 Build failure: ``` $ CXX=clang++ CC=clang USE_CLANG=1 WITH_JEMALLOC_FLAG=1 JEMALLOC=1 EXTRA_LDFLAGS="-L/home/andrew/jemalloc/lib/" EXTRA_CXXFLAGS="-I/home/andrew/jemalloc/include/" make check -j12 ... CC memory/jemalloc_nodump_allocator.o In file included from memory/jemalloc_nodump_allocator.cc:6: In file included from ./memory/jemalloc_nodump_allocator.h:11: In file included from ./port/jemalloc_helper.h:16: /usr/include/clang/6.0.0/include/mm_malloc.h:39:16: error: 'posix_memalign' is missing exception specification 'throw()' extern "C" int posix_memalign(void **__memptr, size_t __alignment, size_t __size); ^ /home/andrew/jemalloc/include/jemalloc/jemalloc.h:388:26: note: expanded from macro 'posix_memalign' # define posix_memalign je_posix_memalign ^ /home/andrew/jemalloc/include/jemalloc/jemalloc.h:77:29: note: expanded from macro 'je_posix_memalign' # define je_posix_memalign posix_memalign ^ /home/andrew/jemalloc/include/jemalloc/jemalloc.h:232:38: note: previous declaration is here JEMALLOC_EXPORT int JEMALLOC_NOTHROW je_posix_memalign(void **memptr, ^ /home/andrew/jemalloc/include/jemalloc/jemalloc.h:77:29: note: expanded from macro 'je_posix_memalign' # define je_posix_memalign posix_memalign ^ 1 error generated. Makefile:1972: recipe for target 'memory/jemalloc_nodump_allocator.o' failed make: *** [memory/jemalloc_nodump_allocator.o] Error 1 ``` Pull Request resolved: https://github.com/facebook/rocksdb/pull/5522 Differential Revision: D16069869 Pulled By: miasantreble fbshipit-source-id: c489bbc993adee194b9a550134c6237a264bc443 02 July 2019, 20:02:12 UTC
0d57d93 Support jemalloc compiled with `--with-jemalloc-prefix` (#5521) Summary: Previously, if the jemalloc was built with nonempty string for `--with-jemalloc-prefix`, then `HasJemalloc()` would return false on Linux, so jemalloc would not be used at runtime. On Mac, it would cause a linker failure due to no definitions found for the weak functions declared in "port/jemalloc_helper.h". This should be a rare problem because (1) on Linux the default `--with-jemalloc-prefix` value is the empty string, and (2) Homebrew's build explicitly sets `--with-jemalloc-prefix` to the empty string. However, there are cases where `--with-jemalloc-prefix` is nonempty. For example, when building jemalloc from source on Mac, the default setting is `--with-jemalloc-prefix=je_`. Such jemalloc builds should be usable by RocksDB. The fix is simple. Defining `JEMALLOC_MANGLE` before including "jemalloc.h" causes it to define unprefixed symbols that are aliases for each of the prefixed symbols. Thanks to benesch for figuring this out and explaining it to me. Fixes https://github.com/facebook/rocksdb/issues/1462. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5521 Test Plan: build jemalloc with prefixed symbols: ``` $ ./configure --with-jemalloc-prefix=lol $ make ``` compile rocksdb against it: ``` $ WITH_JEMALLOC_FLAG=1 JEMALLOC=1 EXTRA_LDFLAGS="-L/home/andrew/jemalloc/lib/" EXTRA_CXXFLAGS="-I/home/andrew/jemalloc/include/" make -j12 ./db_bench ``` run db_bench and verify jemalloc actually used: ``` $ ./db_bench -benchmarks=fillrandom -statistics=true -dump_malloc_stats=true -stats_dump_period_sec=1 $ grep jemalloc /tmp/rocksdbtest-1000/dbbench/LOG 2019/06/29-12:20:52.088658 7fc5fb7f6700 [_impl/db_impl.cc:837] ___ Begin jemalloc statistics ___ ... ``` Differential Revision: D16092758 fbshipit-source-id: c2c358346190ed62ceb2a3547a6c4c180b12f7c4 02 July 2019, 19:07:01 UTC
662ce62 Reduce iterator key comparison for upper/lower bound check (2nd attempt) (#5468) Summary: This is a second attempt for https://github.com/facebook/rocksdb/issues/5111, with the fix to redo iterate bounds check after `SeekXXX()`. This is because MyRocks may change iterate bounds between seek. See https://github.com/facebook/rocksdb/issues/5111 for original benchmark result and discussion. Closes https://github.com/facebook/rocksdb/issues/5463. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5468 Test Plan: Existing rocksdb tests, plus myrocks test `rocksdb.optimizer_loose_index_scans` and `rocksdb.group_min_max`. Differential Revision: D15863332 fbshipit-source-id: ab4aba5899838591806b8673899bd465f3f53e18 02 July 2019, 18:48:46 UTC
cfdf211 Exclude StatsHistoryTest.ForceManualFlushStatsCF test from lite mode (#5529) Summary: Recent commit 3886dddc3b44bf5061c0f93eab578c51e8bad7bd introduced a new test which is not compatible with lite mode and breaks contrun test: ``` [ RUN ] StatsHistoryTest.ForceManualFlushStatsCF monitoring/stats_history_test.cc:642: Failure Expected: (cfd_stats->GetLogNumber()) < (cfd_test->GetLogNumber()), actual: 15 vs 15 ``` This PR excludes the test from lite mode to appease the failing test Pull Request resolved: https://github.com/facebook/rocksdb/pull/5529 Differential Revision: D16080892 Pulled By: miasantreble fbshipit-source-id: 2f8a22758f71250cd9f204046404226ddc13b028 01 July 2019, 23:37:08 UTC
66464d1 Remove multiple declarations of kMicrosInSecond. Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/5526 Test Plan: OPT=-g V=1 make J=1 unity_test -j32 make clean && make -j32 Differential Revision: D16079315 Pulled By: HaoyuHuang fbshipit-source-id: 294ab439cf0db8dd5da44e30eabf0cbb2bb8c4f6 01 July 2019, 22:15:12 UTC
3e6c185 Formatting fixes in db_bench_tool (#5525) Summary: Formatting fixes in db_bench_tool that were accidentally omitted Pull Request resolved: https://github.com/facebook/rocksdb/pull/5525 Test Plan: Unit tests Differential Revision: D16078516 Pulled By: elipoz fbshipit-source-id: bf8df0e3f08092a91794ebf285396d9b8a335bb9 01 July 2019, 21:57:28 UTC
1e87f2b Ref and unref cfd before and after calling WaitForFlushMemTables (#5513) Summary: This is to prevent the bg flush thread from unrefing and deleting the cfd that has been dropped by a concurrent thread. Before RocksDB calls `DBImpl::WaitForFlushMemTables`, we should increase the refcount of each `ColumnFamilyData` so that its ref count will not drop to 0 even if the column family is dropped by another thread. Otherwise the bg flush thread can deref the cfd and delete it, causing a segfault in `WaitForFlushMemTables` upon accessing `cfd`. Test plan (on devserver): ``` $make clean && COMPILE_WITH_ASAN=1 make -j32 $make check ``` All unit tests must pass. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5513 Differential Revision: D16062898 Pulled By: riversand963 fbshipit-source-id: 37dc511f1dc99f036d0201bbd7f0a8f5677c763d 01 July 2019, 21:12:02 UTC
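The ref/unref pattern the fix relies on can be sketched as follows. This is a minimal, hypothetical stand-in (the `CfData` class, its fields, and `WaitThenUnref` are made up for illustration; RocksDB's real `ColumnFamilyData` is far more involved): the waiting thread pins the object with `Ref()` before blocking, so a concurrent drop can at most decrement the count to 1, never free the object under the waiter.

```cpp
#include <atomic>

// Hypothetical stand-in for ColumnFamilyData: the last Unref() deletes it.
struct CfData {
  std::atomic<int> refs{1};
  bool* deleted_flag;
  explicit CfData(bool* flag) : deleted_flag(flag) {}
  void Ref() { refs.fetch_add(1); }
  // Returns true if this call dropped the last reference and freed the object.
  bool Unref() {
    if (refs.fetch_sub(1) == 1) {
      *deleted_flag = true;
      delete this;
      return true;
    }
    return false;
  }
};

// Sketch of the fixed WaitForFlushMemTables flow: take a ref before waiting
// so a concurrent DropColumnFamily cannot free cfd while we still use it.
bool WaitThenUnref(CfData* cfd, bool concurrent_drop) {
  cfd->Ref();                         // pin cfd for the duration of the wait
  if (concurrent_drop) cfd->Unref();  // another thread drops the CF meanwhile
  // ... wait for flush; cfd is still valid here because we hold a ref ...
  return cfd->Unref();                // release our pin; may be the last ref
}
```

With the pin in place, the deletion happens only when the waiter releases its own reference, never mid-wait.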
f872009 Fix some C-style casting (#5524) Summary: Fixes some C-style casts in bloom.cc and ./tools/db_bench_tool.cc Pull Request resolved: https://github.com/facebook/rocksdb/pull/5524 Differential Revision: D16075626 Pulled By: elipoz fbshipit-source-id: 352948885efb64a7ef865942c75c3c727a914207 01 July 2019, 20:05:34 UTC
9f0bd56 Cache simulator: Refactor the cache simulator so that we can add alternative policies easily (#5517) Summary: This PR creates cache_simulator.h file. It contains a CacheSimulator that runs against a block cache trace record. We can add alternative cache simulators derived from CacheSimulator later. For example, this PR adds a PrioritizedCacheSimulator that inserts filter/index/uncompressed dictionary blocks with high priority. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5517 Test Plan: make clean && COMPILE_WITH_ASAN=1 make check -j32 Differential Revision: D16043689 Pulled By: HaoyuHuang fbshipit-source-id: 65f28ed52b866ffb0e6eceffd7f9ca7c45bb680d 01 July 2019, 19:46:32 UTC
3886ddd force flushing stats CF to avoid holding old logs (#5509) Summary: The WAL records RocksDB writes to all column families. When a user flushes a column family, the old WAL will not accept new writes but cannot be deleted yet, because it may still contain live data for other column families. (See https://github.com/facebook/rocksdb/wiki/Write-Ahead-Log#life-cycle-of-a-wal for a detailed explanation.) Because of this, a column family that receives very infrequent writes and no manual flushes can prevent a lot of WALs from being deleted. PR https://github.com/facebook/rocksdb/pull/5046 introduced the persistent stats column family, which is a good example of such a column family: depending on the config, it may have long intervals between writes, and the user is unaware of it, which makes it difficult to call manual flush for it. This PR addresses the problem for the persistent stats column family by forcing a flush for it when 1) another column family is flushed and 2) the persistent stats column family's log number is the smallest among all column families. This way the persistent stats column family keeps advancing its log number when necessary, allowing RocksDB to delete old WAL files. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5509 Differential Revision: D16045896 Pulled By: miasantreble fbshipit-source-id: 286837b633e988417f0096ff38384742d3b40ef4 01 July 2019, 18:56:43 UTC
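The trigger condition described above can be sketched as a small predicate (a simplified, hypothetical helper; `ShouldForceFlushStatsCf` and its signature are made up, not RocksDB's internal API): when some other CF flushes, also flush the stats CF if it currently pins the oldest WAL, i.e. its log number is the minimum across all column families.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Sketch of the trigger added in this PR (simplified): when another column
// family is being flushed, force-flush the persistent stats CF if its log
// number is the smallest among all CFs, because in that case it is what
// keeps the oldest WAL alive.
bool ShouldForceFlushStatsCf(std::uint64_t stats_cf_log_number,
                             const std::vector<std::uint64_t>& all_cf_log_numbers) {
  if (all_cf_log_numbers.empty()) return false;
  std::uint64_t min_log = *std::min_element(all_cf_log_numbers.begin(),
                                            all_cf_log_numbers.end());
  // If the stats CF holds the minimum log number, flushing it lets its log
  // number advance, and the old WAL becomes deletable.
  return stats_cf_log_number <= min_log;
}
```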
c360675 Add secondary instance to stress test (#5479) Summary: This PR allows users to run stress tests on secondary instance. Test plan (on devserver) ``` ./db_stress -ops_per_thread=100000 -enable_secondary=true -threads=32 -secondary_catch_up_one_in=10000 -clear_column_family_one_in=1000 -reopen=100 ``` Pull Request resolved: https://github.com/facebook/rocksdb/pull/5479 Differential Revision: D16074325 Pulled By: riversand963 fbshipit-source-id: c0ed959e7b6c7cda3efd0b3070ab379de3b29f1c 01 July 2019, 18:49:50 UTC
7259e28 MultiGet parallel IO (#5464) Summary: Enhancement to MultiGet batching to read data blocks required for keys in a batch in parallel from disk. It uses Env::MultiRead() API to read multiple blocks and reduce latency. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5464 Test Plan: 1. make check 2. make asan_check 3. make asan_crash Differential Revision: D15911771 Pulled By: anand1976 fbshipit-source-id: 605036b9af0f90ca0020dc87c3a86b4da6e83394 01 July 2019, 03:56:04 UTC
68b46a2 Block cache tracer: StartTrace return busy if trace is already started. (#5519) Summary: This PR is needed for integration into MyRocks. A second call on StartTrace returns Busy so that MyRocks may return an error to the user. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5519 Test Plan: make clean && USE_CLANG=1 make check -j32 Differential Revision: D16055476 Pulled By: HaoyuHuang fbshipit-source-id: a51772fb0965c873922757eb470a332b1e02a91d 01 July 2019, 03:03:01 UTC
10bae8c Add more release versions to tools/check_format_compatible.sh (#5518) Summary: tools/check_format_compatible.sh is lagged behind. Catch up. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5518 Test Plan: Run the command Differential Revision: D16063180 fbshipit-source-id: d063eb42df9653dec06a2cf0fb982b8a60ca3d2f 29 June 2019, 00:41:58 UTC
5c2f13f add create_column_family and drop_column_family cmd to ldb tool (#5503) Summary: `create_column_family` cmd already exists but was somehow missed in the help message. also add `drop_column_family` cmd which can drop a cf without opening db. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5503 Test Plan: Updated existing ldb_test.py to test deleting a column family. Differential Revision: D16018414 Pulled By: lightmark fbshipit-source-id: 1fc33680b742104fea86b10efc8499f79e722301 27 June 2019, 18:11:48 UTC
15fd3be LRU Cache to enable mid-point insertion by default (#5508) Summary: Mid-point insertion is a useful feature and is mature now. Make it default. Also changed cache_index_and_filter_blocks_with_high_priority=true as default accordingly, so that we won't evict index and filter blocks easier after the change, to avoid too many surprises to users. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5508 Test Plan: Run all existing tests. Differential Revision: D16021179 fbshipit-source-id: ce8456e8d43b3bfb48df6c304b5290a9d19817eb 27 June 2019, 17:20:57 UTC
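Mid-point insertion can be illustrated with a toy two-segment LRU (a sketch only; the class, its methods, and the eviction rule here are simplified stand-ins, not RocksDB's actual `LRUCache`): high-priority entries such as index and filter blocks enter at the head of the list, low-priority entries enter at the mid-point, so a burst of cold data blocks evicts other low-priority entries before touching the hot half.

```cpp
#include <cstddef>
#include <list>
#include <string>

// Toy two-segment LRU illustrating mid-point insertion. Conceptually the
// high_ segment precedes the low_ segment in one LRU list; the boundary
// between them is the "mid-point".
class MidpointLru {
 public:
  explicit MidpointLru(std::size_t capacity) : capacity_(capacity) {}

  void Insert(const std::string& key, bool high_priority) {
    if (high_priority) high_.push_front(key);  // head of the whole list
    else low_.push_front(key);                 // mid-point of the whole list
    if (high_.size() + low_.size() > capacity_) Evict();
  }

  bool Contains(const std::string& key) const {
    for (const auto& k : high_) if (k == key) return true;
    for (const auto& k : low_) if (k == key) return true;
    return false;
  }

 private:
  void Evict() {
    if (!low_.empty()) low_.pop_back();        // coldest low-priority first
    else if (!high_.empty()) high_.pop_back();
  }
  std::size_t capacity_;
  std::list<std::string> high_, low_;
};
```

In this sketch, inserting a stream of low-priority data blocks only ever evicts other low-priority entries while any exist, which is why flipping `cache_index_and_filter_blocks_with_high_priority` to true goes hand in hand with the new default.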
c08c0ae Add C binding for secondary instance (#5505) Summary: Add C binding for secondary instance as well as unit test. Test plan (on devserver) ``` $make clean && COMPILE_WITH_ASAN=1 make -j20 all $./c_test $make check ``` Pull Request resolved: https://github.com/facebook/rocksdb/pull/5505 Differential Revision: D16000043 Pulled By: riversand963 fbshipit-source-id: 3361ef6bfdf4ce12438cee7290a0ac203b5250bd 27 June 2019, 15:58:54 UTC
a8975b6 Block cache tracer: Do not populate block cache trace record when tracing is disabled. (#5510) Summary: This PR makes sure that trace record is not populated when tracing is disabled. Before this PR: DB path: [/data/mysql/rocks_regression_tests/OPTIONS-myrocks-40-33-10000000/2019-06-26-13-04-41/db] readwhilewriting : 9.803 micros/op 1550408 ops/sec; 107.9 MB/s (5000000 of 5000000 found) Microseconds per read: Count: 80000000 Average: 9.8045 StdDev: 12.64 Min: 1 Median: 7.5246 Max: 25343 Percentiles: P50: 7.52 P75: 12.10 P99: 37.44 P99.9: 75.07 P99.99: 133.60 After this PR: DB path: [/data/mysql/rocks_regression_tests/OPTIONS-myrocks-40-33-10000000/2019-06-26-14-08-21/db] readwhilewriting : 8.723 micros/op 1662882 ops/sec; 115.8 MB/s (5000000 of 5000000 found) Microseconds per read: Count: 80000000 Average: 8.7236 StdDev: 12.19 Min: 1 Median: 6.7262 Max: 25229 Percentiles: P50: 6.73 P75: 10.50 P99: 31.54 P99.9: 74.81 P99.99: 132.82 Pull Request resolved: https://github.com/facebook/rocksdb/pull/5510 Differential Revision: D16016428 Pulled By: HaoyuHuang fbshipit-source-id: 3b3d11e6accf207d18ec2545b802aa01ee65901f 27 June 2019, 15:34:08 UTC
9dbcda9 Fix uninitialized prev_block_offset_ in BlockBasedTableReader (#5507) Summary: Found by valgrind_check. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5507 Differential Revision: D16002612 Pulled By: miasantreble fbshipit-source-id: 13c11c183190e0a0571844635457d434da3ac59a 26 June 2019, 06:02:01 UTC
b4d7209 Add an option to put first key of each sst block in the index (#5289) Summary: The first key is used to defer reading the data block until this file gets to the top of merging iterator's heap. For short range scans, most files never make it to the top of the heap, so this change can reduce read amplification by a lot sometimes. Consider the following workload. There are a few data streams (we'll be calling them "logs"), each stream consisting of a sequence of blobs (we'll be calling them "records"). Each record is identified by log ID and a sequence number within the log. RocksDB key is concatenation of log ID and sequence number (big endian). Reads are mostly relatively short range scans, each within a single log. Writes are mostly sequential for each log, but writes to different logs are randomly interleaved. Compactions are disabled; instead, when we accumulate a few tens of sst files, we create a new column family and start writing to it. So, a typical sst file consists of a few ranges of blocks, each range corresponding to one log ID (we use FlushBlockPolicy to cut blocks at log boundaries). A typical read would go like this. First, iterator Seek() reads one block from each sst file. Then a series of Next()s move through one sst file (since writes to each log are mostly sequential) until the subiterator reaches the end of this log in this sst file; then Next() switches to the next sst file and reads sequentially from that, and so on. Often a range scan will only return records from a small number of blocks in small number of sst files; in this case, the cost of initial Seek() reading one block from each file may be bigger than the cost of reading the actually useful blocks. Neither iterate_upper_bound nor bloom filters can prevent reading one block from each file in Seek(). 
But this PR can: if the index contains first key from each block, we don't have to read the block until this block actually makes it to the top of merging iterator's heap, so for short range scans we won't read any blocks from most of the sst files. This PR does the deferred block loading inside value() call. This is not ideal: there's no good way to report an IO error from inside value(). As discussed with siying offline, it would probably be better to change InternalIterator's interface to explicitly fetch deferred value and get status. I'll do it in a separate PR. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5289 Differential Revision: D15256423 Pulled By: al13n321 fbshipit-source-id: 750e4c39ce88e8d41662f701cf6275d9388ba46a 25 June 2019, 03:54:04 UTC
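The read-amplification win described above can be simulated with a small model (hypothetical structures, not RocksDB's iterators: each "file" is just a sorted list of keys standing in for one block): the merging heap is ordered by each file's first key, and a block is "read" only when its entry reaches the top of the heap and a key in it is consumed.

```cpp
#include <cstddef>
#include <functional>
#include <queue>
#include <string>
#include <utility>
#include <vector>

// Returns how many blocks a short range scan of `keys_to_read` keys loads
// when block loading is deferred until a file reaches the top of the heap.
std::size_t CountBlocksReadForScan(
    const std::vector<std::vector<std::string>>& files,
    std::size_t keys_to_read) {
  using Entry = std::pair<std::string, std::size_t>;  // (next key, file idx)
  std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry>> heap;
  for (std::size_t i = 0; i < files.size(); ++i)
    if (!files[i].empty()) heap.push({files[i][0], i});
  std::vector<bool> block_read(files.size(), false);
  std::vector<std::size_t> pos(files.size(), 0);
  std::size_t blocks_read = 0;
  while (keys_to_read > 0 && !heap.empty()) {
    Entry top = heap.top();
    heap.pop();
    std::size_t idx = top.second;
    if (!block_read[idx]) {  // the deferred block load happens only here
      block_read[idx] = true;
      ++blocks_read;
    }
    --keys_to_read;          // consume the key at files[idx][pos[idx]]
    if (++pos[idx] < files[idx].size())
      heap.push({files[idx][pos[idx]], idx});
  }
  return blocks_read;
}
```

A scan that stays inside one "log" consumes keys from a single file and never loads the other files' blocks, whereas a Seek() without first keys in the index would have read one block per file up front.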
554a645 Block cache trace analysis: Write time series graphs in csv files (#5490) Summary: This PR adds a feature in block cache trace analysis tool to write statistics into csv files. 1. The analysis tool supports grouping the number of accesses per second by various labels, e.g., block, column family, block type, or a combination of them. 2. It also computes reuse distance and reuse interval. Reuse distance: The cumulated size of unique blocks read between two consecutive accesses on the same block. Reuse interval: The time between two consecutive accesses on the same block. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5490 Differential Revision: D15901322 Pulled By: HaoyuHuang fbshipit-source-id: b5454fea408a32757a80be63de6fe1c8149ca70e 25 June 2019, 03:42:12 UTC
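The two metrics defined in that message can be computed from a trace along these lines (a minimal sketch; the `Access` struct and function names are hypothetical, and the reuse-distance pass is O(n^2) for clarity rather than the stack-based algorithm a real tool would use):

```cpp
#include <cstddef>
#include <cstdint>
#include <string>
#include <unordered_map>
#include <vector>

struct Access {
  std::uint64_t time;
  std::string block;
  std::uint64_t block_size;
};

// Reuse interval: time between two consecutive accesses to the same block.
std::vector<std::uint64_t> ReuseIntervals(const std::vector<Access>& trace) {
  std::unordered_map<std::string, std::uint64_t> last_seen;
  std::vector<std::uint64_t> intervals;
  for (const auto& a : trace) {
    auto it = last_seen.find(a.block);
    if (it != last_seen.end()) intervals.push_back(a.time - it->second);
    last_seen[a.block] = a.time;
  }
  return intervals;
}

// Reuse distance: cumulated size of unique blocks read between two
// consecutive accesses to the same block.
std::vector<std::uint64_t> ReuseDistances(const std::vector<Access>& trace) {
  std::vector<std::uint64_t> distances;
  std::unordered_map<std::string, std::size_t> last_index;
  for (std::size_t i = 0; i < trace.size(); ++i) {
    auto it = last_index.find(trace[i].block);
    if (it != last_index.end()) {
      std::unordered_map<std::string, std::uint64_t> unique;  // dedup blocks
      for (std::size_t j = it->second + 1; j < i; ++j)
        unique[trace[j].block] = trace[j].block_size;
      std::uint64_t sum = 0;
      for (const auto& kv : unique) sum += kv.second;
      distances.push_back(sum);
    }
    last_index[trace[i].block] = i;
  }
  return distances;
}
```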
acb8053 Fix build jemalloc api (#5470) Summary: There is a compile error on Windows with MSVC in malloc_stats.cc where malloc_stats_print is referenced. The compiler only knows je_malloc_stats_print from jemalloc.h. Adding JEMALLOC_NO_RENAME replaces malloc_stats_print with je_malloc_stats_print. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5470 Differential Revision: D15978720 fbshipit-source-id: c05757a2e89e2e015a661d9626c352e4f32f97e4 25 June 2019, 00:40:32 UTC
e731f44 C file should not include <cinttypes>, it is a C++ header. (#5499) Summary: Include <inttypes.h> instead. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5499 Differential Revision: D15966937 Pulled By: miasantreble fbshipit-source-id: 2156c4329b91d26d447de94f1231264d52786350 24 June 2019, 23:12:39 UTC
c92c58f JNI: Do not create 8M block cache for negative blockCacheSize values (#5465) Summary: As [BlockBasedTableConfig setBlockCacheSize()](https://github.com/facebook/rocksdb/blob/1966a7c055f6e182d627275051f5c09441aa922d/java/src/main/java/org/rocksdb/BlockBasedTableConfig.java#L728) said, If cacheSize is non-positive, then cache will not be used. but when we configure a negative number or 0, there is an unexpected result: the block cache becomes 8M. - Allow 0 as a valid size. When block cache size is 0, an 8MB block cache is created, as it is the default C++ API behavior. Also updated the comment. - Set no_block_cache true if negative value is passed to block cache size, and no block cache will be created. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5465 Differential Revision: D15968788 Pulled By: sagar0 fbshipit-source-id: ee02d6e95841c9e2c316a64bfdf192d46ff5638a 24 June 2019, 18:37:04 UTC
68980df Also build compression libraries on AppVeyor CI (#5226) Summary: This adds some compression dependencies to AppVeyor CI (those whose builds can be easily scripted on Windows, i.e. Snappy, LZ4, and ZStd). Let's see if the CI passes ;-) Pull Request resolved: https://github.com/facebook/rocksdb/pull/5226 Differential Revision: D15967223 fbshipit-source-id: 0914c613ac358cbb248df75cdee8099e836828dc 24 June 2019, 17:41:07 UTC
22028aa Compaction Reads should read no more than compaction_readahead_size bytes, when set! (#5498) Summary: As a result of https://github.com/facebook/rocksdb/issues/5431 the compaction_readahead_size given by a user was not used exactly, the reason being the code behind readahead for user-read and compaction-read was unified in the above PR and the behavior for user-read is to read readahead_size+n bytes (see FilePrefetchBuffer::TryReadFromCache method). Before the unification the ReadaheadRandomAccessFileReader used compaction_readahead_size as it is. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5498 Test Plan: Ran strace command : strace -e pread64 -f -T -t ./db_compaction_test --gtest_filter=DBCompactionTest.PartialManualCompaction In the test the compaction_readahead_size was configured to 2MB and verified the pread syscall did indeed request 2MB. Before the change it was requesting more than 2MB. Strace Output: strace: Process 3798982 attached Note: Google Test filter = DBCompactionTest.PartialManualCompaction [==========] Running 1 test from 1 test case. [----------] Global test environment set-up. 
[----------] 1 test from DBCompactionTest [ RUN ] DBCompactionTest.PartialManualCompaction strace: Process 3798983 attached strace: Process 3798984 attached strace: Process 3798985 attached strace: Process 3798986 attached strace: Process 3798987 attached strace: Process 3798992 attached [pid 3798987] 12:07:05 +++ exited with 0 +++ strace: Process 3798993 attached [pid 3798993] 12:07:05 +++ exited with 0 +++ strace: Process 3798994 attached strace: Process 3799008 attached strace: Process 3799009 attached [pid 3799008] 12:07:05 +++ exited with 0 +++ strace: Process 3799010 attached [pid 3799009] 12:07:05 +++ exited with 0 +++ strace: Process 3799011 attached [pid 3799010] 12:07:05 +++ exited with 0 +++ [pid 3799011] 12:07:05 +++ exited with 0 +++ strace: Process 3799012 attached [pid 3799012] 12:07:05 +++ exited with 0 +++ strace: Process 3799013 attached strace: Process 3799014 attached [pid 3799013] 12:07:05 +++ exited with 0 +++ strace: Process 3799015 attached [pid 3799014] 12:07:05 +++ exited with 0 +++ [pid 3799015] 12:07:05 +++ exited with 0 +++ strace: Process 3799016 attached [pid 3799016] 12:07:05 +++ exited with 0 +++ strace: Process 3799017 attached [pid 3799017] 12:07:05 +++ exited with 0 +++ strace: Process 3799019 attached [pid 3799019] 12:07:05 +++ exited with 0 +++ strace: Process 3799020 attached strace: Process 3799021 attached [pid 3799020] 12:07:05 +++ exited with 0 +++ [pid 3799021] 12:07:05 +++ exited with 0 +++ strace: Process 3799022 attached [pid 3799022] 12:07:05 +++ exited with 0 +++ strace: Process 3799023 attached [pid 3799023] 12:07:05 +++ exited with 0 +++ strace: Process 3799047 attached strace: Process 3799048 attached [pid 3799047] 12:07:06 +++ exited with 0 +++ [pid 3799048] 12:07:06 +++ exited with 0 +++ [pid 3798994] 12:07:06 +++ exited with 0 +++ strace: Process 3799052 attached [pid 3799052] 12:07:06 +++ exited with 0 +++ strace: Process 3799054 attached strace: Process 3799069 attached strace: Process 3799070 attached [pid 
3799069] 12:07:06 +++ exited with 0 +++ strace: Process 3799071 attached [pid 3799070] 12:07:06 +++ exited with 0 +++ [pid 3799071] 12:07:06 +++ exited with 0 +++ strace: Process 3799072 attached strace: Process 3799073 attached [pid 3799072] 12:07:06 +++ exited with 0 +++ [pid 3799073] 12:07:06 +++ exited with 0 +++ strace: Process 3799074 attached [pid 3799074] 12:07:06 +++ exited with 0 +++ strace: Process 3799075 attached [pid 3799075] 12:07:06 +++ exited with 0 +++ strace: Process 3799076 attached [pid 3799076] 12:07:06 +++ exited with 0 +++ strace: Process 3799077 attached [pid 3799077] 12:07:06 +++ exited with 0 +++ strace: Process 3799078 attached [pid 3799078] 12:07:06 +++ exited with 0 +++ strace: Process 3799079 attached [pid 3799079] 12:07:06 +++ exited with 0 +++ strace: Process 3799080 attached [pid 3799080] 12:07:06 +++ exited with 0 +++ strace: Process 3799081 attached [pid 3799081] 12:07:06 +++ exited with 0 +++ strace: Process 3799082 attached [pid 3799082] 12:07:06 +++ exited with 0 +++ strace: Process 3799083 attached [pid 3799083] 12:07:06 +++ exited with 0 +++ strace: Process 3799086 attached strace: Process 3799087 attached [pid 3798984] 12:07:06 pread64(9, "\1\203W!\241QE\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 53, 11177) = 53 <0.000121> [pid 3798984] 12:07:06 pread64(9, "\0\22\4rocksdb.properties\353Q\223\5\0\0\0\0\1\0\0"..., 38, 11139) = 38 <0.000106> [pid 3798984] 12:07:06 pread64(9, "\0$\4rocksdb.block.based.table.ind"..., 664, 10475) = 664 <0.000081> [pid 3798984] 12:07:06 pread64(9, "\0\v\3foo\2\7\0\0\0\0\0\0\0\270 \0\v\4foo\2\3\0\0\0\0\0\0\275"..., 74, 10401) = 74 <0.000138> [pid 3798984] 12:07:06 pread64(11, "\1\203W!\241QE\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 53, 11177) = 53 <0.000097> [pid 3798984] 12:07:06 pread64(11, "\0\22\4rocksdb.properties\353Q\223\5\0\0\0\0\1\0\0"..., 38, 11139) = 38 <0.000086> [pid 3798984] 12:07:06 pread64(11, "\0$\4rocksdb.block.based.table.ind"..., 664, 10475) = 664 
<0.000064> [pid 3798984] 12:07:06 pread64(11, "\0\v\3foo\2\21\0\0\0\0\0\0\0\270 \0\v\4foo\2\r\0\0\0\0\0\0\275"..., 74, 10401) = 74 <0.000064> [pid 3798984] 12:07:06 pread64(12, "\1\203W!\241QE\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 53, 11177) = 53 <0.000080> [pid 3798984] 12:07:06 pread64(12, "\0\22\4rocksdb.properties\353Q\223\5\0\0\0\0\1\0\0"..., 38, 11139) = 38 <0.000090> [pid 3798984] 12:07:06 pread64(12, "\0$\4rocksdb.block.based.table.ind"..., 664, 10475) = 664 <0.000059> [pid 3798984] 12:07:06 pread64(12, "\0\v\3foo\2\33\0\0\0\0\0\0\0\270 \0\v\4foo\2\27\0\0\0\0\0\0\275"..., 74, 10401) = 74 <0.000065> [pid 3798984] 12:07:06 pread64(13, "\1\203W!\241QE\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 53, 11177) = 53 <0.000070> [pid 3798984] 12:07:06 pread64(13, "\0\22\4rocksdb.properties\353Q\223\5\0\0\0\0\1\0\0"..., 38, 11139) = 38 <0.000059> [pid 3798984] 12:07:06 pread64(13, "\0$\4rocksdb.block.based.table.ind"..., 664, 10475) = 664 <0.000061> [pid 3798984] 12:07:06 pread64(13, "\0\v\3foo\2%\0\0\0\0\0\0\0\270 \0\v\4foo\2!\0\0\0\0\0\0\275"..., 74, 10401) = 74 <0.000065> [pid 3798984] 12:07:06 pread64(14, "\1\203W!\241QE\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 53, 11177) = 53 <0.000118> [pid 3798984] 12:07:06 pread64(14, "\0\22\4rocksdb.properties\353Q\223\5\0\0\0\0\1\0\0"..., 38, 11139) = 38 <0.000093> [pid 3798984] 12:07:06 pread64(14, "\0$\4rocksdb.block.based.table.ind"..., 664, 10475) = 664 <0.000050> [pid 3798984] 12:07:06 pread64(14, "\0\v\3foo\2/\0\0\0\0\0\0\0\270 \0\v\4foo\2+\0\0\0\0\0\0\275"..., 74, 10401) = 74 <0.000082> [pid 3798984] 12:07:06 pread64(15, "\1\203W!\241QE\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 53, 11177) = 53 <0.000080> [pid 3798984] 12:07:06 pread64(15, "\0\22\4rocksdb.properties\353Q\223\5\0\0\0\0\1\0\0"..., 38, 11139) = 38 <0.000086> [pid 3798984] 12:07:06 pread64(15, "\0$\4rocksdb.block.based.table.ind"..., 664, 10475) = 664 <0.000091> [pid 3798984] 12:07:06 pread64(15, 
"\0\v\3foo\0029\0\0\0\0\0\0\0\270 \0\v\4foo\0025\0\0\0\0\0\0\275"..., 74, 10401) = 74 <0.000174> [pid 3798984] 12:07:06 pread64(16, "\1\203W!\241QE\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 53, 11177) = 53 <0.000080> [pid 3798984] 12:07:06 pread64(16, "\0\22\4rocksdb.properties\353Q\223\5\0\0\0\0\1\0\0"..., 38, 11139) = 38 <0.000093> [pid 3798984] 12:07:06 pread64(16, "\0$\4rocksdb.block.based.table.ind"..., 664, 10475) = 664 <0.000194> [pid 3798984] 12:07:06 pread64(16, "\0\v\3foo\2C\0\0\0\0\0\0\0\270 \0\v\4foo\2?\0\0\0\0\0\0\275"..., 74, 10401) = 74 <0.000086> [pid 3798984] 12:07:06 pread64(17, "\1\203W!\241QE\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 53, 11177) = 53 <0.000079> [pid 3798984] 12:07:06 pread64(17, "\0\22\4rocksdb.properties\353Q\223\5\0\0\0\0\1\0\0"..., 38, 11139) = 38 <0.000047> [pid 3798984] 12:07:06 pread64(17, "\0$\4rocksdb.block.based.table.ind"..., 664, 10475) = 664 <0.000045> [pid 3798984] 12:07:06 pread64(17, "\0\v\3foo\2M\0\0\0\0\0\0\0\270 \0\v\4foo\2I\0\0\0\0\0\0\275"..., 74, 10401) = 74 <0.000107> [pid 3798983] 12:07:06 pread64(17, "\0\v\200\10foo\2P\0\0\0\0\0\0)U?MSg_)j(roFn($e"..., 2097152, 0) = 11230 <0.000091> [pid 3798983] 12:07:06 pread64(17, "", 2085922, 11230) = 0 <0.000073> [pid 3798983] 12:07:06 pread64(16, "\0\v\200\10foo\2F\0\0\0\0\0\0k[h3%.OPH_^:\\S7T&"..., 2097152, 0) = 11230 <0.000083> [pid 3798983] 12:07:06 pread64(16, "", 2085922, 11230) = 0 <0.000078> [pid 3798983] 12:07:06 pread64(15, "\0\v\200\10foo\2<\0\0\0\0\0\0+qToi_c{*S+4:N(:"..., 2097152, 0) = 11230 <0.000095> [pid 3798983] 12:07:06 pread64(15, "", 2085922, 11230) = 0 <0.000067> [pid 3798983] 12:07:06 pread64(14, "\0\v\200\10foo\0022\0\0\0\0\0\0%hw%OMa\"}9I609Q!B"..., 2097152, 0) = 11230 <0.000111> [pid 3798983] 12:07:06 pread64(14, "", 2085922, 11230) = 0 <0.000093> [pid 3798983] 12:07:06 pread64(13, "\0\v\200\10foo\2(\0\0\0\0\0\0p}Y&mu^DcaSGb2&nP"..., 2097152, 0) = 11230 <0.000128> [pid 3798983] 12:07:06 pread64(13, "", 2085922, 11230) 
= 0 <0.000076> [pid 3798983] 12:07:06 pread64(12, "\0\v\200\10foo\2\36\0\0\0\0\0\0YIyW#]oSs^6VHfB<`"..., 2097152, 0) = 11230 <0.000092> [pid 3798983] 12:07:06 pread64(12, "", 2085922, 11230) = 0 <0.000073> [pid 3798983] 12:07:06 pread64(11, "\0\v\200\10foo\2\24\0\0\0\0\0\0mfF8Jel/*Zf :-#s("..., 2097152, 0) = 11230 <0.000088> [pid 3798983] 12:07:06 pread64(11, "", 2085922, 11230) = 0 <0.000067> [pid 3798983] 12:07:06 pread64(9, "\0\v\200\10foo\2\n\0\0\0\0\0\0\\X'cjiHX)D,RSj1X!"..., 2097152, 0) = 11230 <0.000115> [pid 3798983] 12:07:06 pread64(9, "", 2085922, 11230) = 0 <0.000073> [pid 3798983] 12:07:06 pread64(8, "\1\315\5 \36\30\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 53, 754) = 53 <0.000098> [pid 3798983] 12:07:06 pread64(8, "\0\22\3rocksdb.properties;\215\5\0\0\0\0\1\0\0\0"..., 37, 717) = 37 <0.000064> [pid 3798983] 12:07:06 pread64(8, "\0$\4rocksdb.block.based.table.ind"..., 658, 59) = 658 <0.000074> [pid 3798983] 12:07:06 pread64(8, "\0\v\2foo\1\0\0\0\0\0\0\0\0\31\0\0\0\0\1\0\0\0\0\212\216\222P", 29, 30) = 29 <0.000064> [pid 3799086] 12:07:06 +++ exited with 0 +++ [pid 3799087] 12:07:06 +++ exited with 0 +++ [pid 3799054] 12:07:06 +++ exited with 0 +++ strace: Process 3799104 attached [pid 3799104] 12:07:06 +++ exited with 0 +++ [ OK ] DBCompactionTest.PartialManualCompaction (757 ms) [----------] 1 test from DBCompactionTest (758 ms total) [----------] Global test environment tear-down [==========] 1 test from 1 test case ran. (759 ms total) [ PASSED ] 1 test. [pid 3798983] 12:07:06 +++ exited with 0 +++ [pid 3798984] 12:07:06 +++ exited with 0 +++ [pid 3798992] 12:07:06 +++ exited with 0 +++ [pid 3798986] 12:07:06 +++ exited with 0 +++ [pid 3798982] 12:07:06 +++ exited with 0 +++ [pid 3798985] 12:07:06 +++ exited with 0 +++ 12:07:06 +++ exited with 0 +++ Differential Revision: D15948422 Pulled By: vjnadimpalli fbshipit-source-id: 9b189d1e8675d290c7784e4b33e5d3b5761d2ac8 22 June 2019, 04:31:49 UTC
2730fe6 Fix ingested file and directory not being synced (#5435) Summary: It is not safe to assume the application has synced the SST file before ingesting it into the DB. Also, the directory holding the ingested file needs to be fsynced, otherwise the file can be lost. For the integrity of RocksDB we need to sync the ingested file and directory before applying the change to the manifest. Also, syncing after writing the global sequence number when write_global_seqno=true was removed in https://github.com/facebook/rocksdb/issues/4172. Adding it back. Fixes https://github.com/facebook/rocksdb/issues/5287. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5435 Test Plan: Ingest a file with the ldb command and observe fsync/fdatasync in strace output. Tried both move_files=true and move_files=false. https://gist.github.com/yiwu-arbug/650a4023f57979056d83485fa863bef9 More test suggestions are welcome. Differential Revision: D15941675 Pulled By: riversand963 fbshipit-source-id: 389533f3923065a96df2cdde23ff4724a1810d78 21 June 2019, 17:15:38 UTC
1bfeffa Stop printing after verification fails (#5493) Summary: Stop verification and printing once verification fails. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5493 Differential Revision: D15928992 Pulled By: riversand963 fbshipit-source-id: 699feac034a217d57280aa3fb50f5aba06adf317 21 June 2019, 05:16:58 UTC
705b8ee Add more callers for table reader. (#5454) Summary: This PR adds more callers for table readers. These information are only used for block cache analysis so that we can know which caller accesses a block. 1. It renames the BlockCacheLookupCaller to TableReaderCaller as passing the caller from upstream requires changes to table_reader.h and TableReaderCaller is a more appropriate name. 2. It adds more table reader callers in table/table_reader_caller.h, e.g., kCompactionRefill, kExternalSSTIngestion, and kBuildTable. This PR is long as it requires modification of interfaces in table_reader.h, e.g., NewIterator. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5454 Test Plan: make clean && COMPILE_WITH_ASAN=1 make check -j32. Differential Revision: D15819451 Pulled By: HaoyuHuang fbshipit-source-id: b6caa704c8fb96ddd15b9a934b7e7ea87f88092d 20 June 2019, 21:31:48 UTC
0b0cb6f Fix segfault in ~DBWithTTLImpl() when called after Close() (#5485) Summary: ~DBWithTTLImpl() fails after Close() is called (which invokes DBImpl::Close()), because Close() deletes default_cf_handle_, which is used by the GetOptions() call in ~DBWithTTLImpl(), leading to a segfault. Fix by creating a Close() function for the DBWithTTLImpl class that does the close and the work originally done in ~DBWithTTLImpl(). If the Close() function is not called, it will be called from ~DBWithTTLImpl(). Pull Request resolved: https://github.com/facebook/rocksdb/pull/5485 Test Plan: make clean; USE_CLANG=1 make all check -j Differential Revision: D15924498 fbshipit-source-id: 567397fb972961059083a1ae0f9f99ff74872b78 20 June 2019, 20:08:17 UTC
24f7343 sanitize and limit block_size under 4GB (#5492) Summary: `Block::restart_index_`, `Block::restarts_`, and `Block::current_` are defined as uint32_t, but `BlockBasedTableOptions::block_size` is defined as a size_t, so a user might see corruption as in https://github.com/facebook/rocksdb/issues/5486. This PR adds a check in `BlockBasedTableFactory::SanitizeOptions` to disallow such configurations. yiwu-arbug Pull Request resolved: https://github.com/facebook/rocksdb/pull/5492 Differential Revision: D15914047 Pulled By: miasantreble fbshipit-source-id: c943f153d967e15aee7f2795730ab8259e2be201 20 June 2019, 18:45:08 UTC
68614a9 Fix AlignedBuffer's usage in Encryption Env (#5396) Summary: The usage of `AlignedBuffer` in env_encryption.cc writes and reads to/from the AlignedBuffer's internal buffer directly, without going through AlignedBuffer's APIs (like `Append` and `Read`), causing encapsulation to break in some cases. The writes are especially problematic: after the data is written to the buffer (directly using either memmove or memcpy), the size of the buffer is not updated, causing the AlignedBuffer to lose track of the encapsulated buffer's current size. Fixed this by updating the buffer size after every write. Todo for later: Add an overloaded method to AlignedBuffer to support a memmove in addition to a memcpy. Encryption env does a memmove, and hence I couldn't switch to using `AlignedBuffer.Append()`. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5396 Test Plan: `make check` Differential Revision: D15764756 Pulled By: sagar0 fbshipit-source-id: 2e24b52bd3b4b5056c5c1da157f91ddf89370183 19 June 2019, 23:46:20 UTC
5830c61 Java: Make the generics of the Options interfaces more strict (#5461) Summary: Make the generics of the Options interfaces more strict so they are usable in a Kotlin Multiplatform expect/actual typealias implementation without causing a Violation of Finite Bound Restriction. This fix would enable the creation of a generic Kotlin multiplatform library by just typealiasing the JVM implementation to the current Java implementation. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5461 Differential Revision: D15903288 Pulled By: sagar0 fbshipit-source-id: 75e83fdf5d2fcede40744a17e767563d6a4b0696 19 June 2019, 21:43:52 UTC
24b118a Combine the read-ahead logic for user reads and compaction reads (#5431) Summary: Currently the read-ahead logic for user reads and compaction reads go through different code paths where compaction reads create new table readers and use `ReadaheadRandomAccessFile`. This change is to unify read-ahead logic to use read-ahead in BlockBasedTableReader::InitDataBlock(). As a result of the change `ReadAheadRandomAccessFile` class and `new_table_reader_for_compaction_inputs` option will no longer be used. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5431 Test Plan: make check Here is the benchmarking - https://gist.github.com/vjnadimpalli/083cf423f7b6aa12dcdb14c858bc18a5 Differential Revision: D15772533 Pulled By: vjnadimpalli fbshipit-source-id: b71dca710590471ede6fb37553388654e2e479b9 19 June 2019, 21:10:46 UTC
fe90ed7 Replace Corruption with TryAgain status when new tail is not visible to TransactionLogIterator (#5474) Summary: When tailing the WAL with TransactionLogIterator, it used to return a Corruption status to indicate that the WAL has a new tail that is not visible to the iterator, which is misleading. The patch replaces it with TryAgain, a more descriptive status indicating that the user needs to create a new iterator to fetch the recent tail. Fixes https://github.com/facebook/rocksdb/issues/5455 Pull Request resolved: https://github.com/facebook/rocksdb/pull/5474 Differential Revision: D15898953 Pulled By: maysamyabandeh fbshipit-source-id: 40966f6457cb539e1aeb104daeada6b0e46059fc 19 June 2019, 15:10:08 UTC
5355e52 Make the 'block read count' performance counters consistent (#5484) Summary: The patch brings the semantics of per-block-type read performance context counters in sync with the generic block_read_count by only incrementing the counter if the block was actually read from the file. It also fixes index_block_read_count, which fell victim to the refactoring in PR https://github.com/facebook/rocksdb/issues/5298. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5484 Test Plan: Extended the unit tests. Differential Revision: D15887431 Pulled By: ltamasi fbshipit-source-id: a3889759d0ac5759d56625d692cd828d1b9207a6 19 June 2019, 02:03:24 UTC
2e8ad03 Add more stats in the block cache trace analyzer (#5482) Summary: This PR adds more stats in the block cache trace analyzer. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5482 Differential Revision: D15883553 Pulled By: HaoyuHuang fbshipit-source-id: 6d440e4f657af75690420102d532d0ee1ed4e9cf 19 June 2019, 01:38:42 UTC
f46a2a0 Export Cache::GetCharge (#5476) Summary: Exporting GetCharge to cache.h Pull Request resolved: https://github.com/facebook/rocksdb/pull/5476 Differential Revision: D15881882 Pulled By: riversand963 fbshipit-source-id: 3d99084d10059b4fcaaaba240606ed50bc23351c 19 June 2019, 00:35:41 UTC
92f631d replace sprintf with its safe version snprintf (#5475) Summary: sprintf is unsafe and has buffer overrun risk. Replace it with the safer version snprintf where buffer size is supplied to avoid overrun. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5475 Differential Revision: D15879481 Pulled By: sagar0 fbshipit-source-id: 7ae1958ffc9727fa50261dfbb98ddd74e70a72d8 18 June 2019, 23:42:26 UTC
d0c6aea Revert to respecting only the read_tier read option for index blocks (#5481) Summary: PR https://github.com/facebook/rocksdb/issues/5298 subtly changed how read options are applied to the index block during a Get, MultiGet, or iteration. Earlier, only the read_tier option applied to the index block read; since PR https://github.com/facebook/rocksdb/issues/5298, fill_cache and verify_checksums also have an effect. This patch restores the earlier behavior to prevent surprise memory increases for clients due to the index block not being cached. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5481 Test Plan: make check Differential Revision: D15883082 Pulled By: ltamasi fbshipit-source-id: 9a065ec3a6db5a365cf6dd5e95190a20c5756356 18 June 2019, 22:02:09 UTC
2208705 Fix compilation with USE_HDFS (#5444) Summary: The changes in https://github.com/facebook/rocksdb/commit/8272a6de57ed701fb25bb660e074cab703ed3fe7 were untested with `USE_HDFS=1`. There were a couple compiler errors. This PR fixes them. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5444 Test Plan: ``` $ EXTRA_LDFLAGS="-L/tmp/hadoop-3.1.2/lib/native/" EXTRA_CXXFLAGS="-I/tmp/hadoop-3.1.2/include" USE_HDFS=1 make -j12 check ``` Differential Revision: D15885009 fbshipit-source-id: 2a0a63739e0b9a2819b461ad63ce1292c4833fe2 18 June 2019, 21:55:59 UTC
5dc9fbd Update the version of ZStd for the Rocks Java static build Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/5228 Differential Revision: D15880451 Pulled By: sagar0 fbshipit-source-id: 84da6f42cac15367d95bffa5336ebd002e7c3308 18 June 2019, 18:57:01 UTC
4bd0cf5 build on ARM64 (#5450) Summary: Support building RocksDB on AWS ARM64 ``` uname -m aarch64 ``` Pull Request resolved: https://github.com/facebook/rocksdb/pull/5450 Differential Revision: D15879851 fbshipit-source-id: a9b56520a2cd9921338305a06d7103a40a3300b8 18 June 2019, 18:27:45 UTC
f287f8d Fix a bug caused by secondary not skipping the beginning of new MANIFEST (#5472) Summary: While the secondary is replaying after the primary, the primary may switch to a new MANIFEST. The secondary is already able to detect and follow the primary to the new MANIFEST. However, the current implementation has a bug, described as follows. The new MANIFEST's first records have been generated by VersionSet::WriteSnapshot to describe the current state of the column families and the db as of the MANIFEST creation. Since the secondary instance has already finished recovering upon start, there is no need for the secondary to process these records. Actually, if the secondary were to replay these records, the secondary may end up adding the same SST files **again** to each column family, causing consistency checks done by VersionBuilder to fail. Therefore, we record the number of records to skip at the beginning of the new MANIFEST and ignore them. Test plan (on dev server) ``` $make clean && make -j32 all $./db_secondary_test ``` All existing unit tests must pass as well. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5472 Differential Revision: D15866771 Pulled By: riversand963 fbshipit-source-id: a1eec4837fb2ad13059398efb0f437e74fd53bed 18 June 2019, 18:21:37 UTC
ddd088c fix rocksdb lite and clang contrun test failures (#5477) Summary: recent commit 671d15cbdd3839acb54cb21a2aa82efca4917155 introduced some test failures: ``` ===== Running stats_history_test [==========] Running 9 tests from 1 test case. [----------] Global test environment set-up. [----------] 9 tests from StatsHistoryTest [ RUN ] StatsHistoryTest.RunStatsDumpPeriodSec monitoring/stats_history_test.cc:63: Failure dbfull()->SetDBOptions({{"stats_dump_period_sec", "0"}}) Not implemented: Not supported in ROCKSDB LITE db/db_options_test.cc:28:11: error: unused variable 'kMicrosInSec' [-Werror,-Wunused-const-variable] const int kMicrosInSec = 1000000; ``` This PR fixes these failures Pull Request resolved: https://github.com/facebook/rocksdb/pull/5477 Differential Revision: D15871814 Pulled By: miasantreble fbshipit-source-id: 0a7023914d2c1784d9d2d3f5bfb47310d4855394 18 June 2019, 04:16:29 UTC
bcfc53b Block cache tracing: Fix minor bugs with downsampling and some benchmark results. (#5473) Summary: As the code changes for block cache tracing are almost complete, I did a benchmark to compare the performance when block cache tracing is enabled/disabled. With a 1% downsampling ratio, the performance overhead of block cache tracing is negligible. When we trace all block accesses, throughput drops by a factor of 6 with 16 threads issuing random reads and all reads served from the block cache.

Setup: RocksDB version 6.2; Date: Mon Jun 17 17:11:13 2019; CPU: 24 * Intel Core Processor (Skylake); CPUCache: 16384 KB; Keys: 20 bytes each; Values: 100 bytes each (100 bytes after compression); Entries: 10000000; Prefix: 20 bytes; Keys per prefix: 0; RawSize: 1144.4 MB (estimated); FileSize: 1144.4 MB (estimated); Write rate: 0 bytes/second; Read rate: 0 ops/second; Compression: NoCompression; Compression sampling rate: 0; Memtablerep: skip_list; Perf Level: 1

I ran the readrandom workload for 1 minute. Detailed throughput results (ops/second). Sample rate 0: no block cache tracing. Sample rate 1: trace all block accesses. Sample rate 100: trace accesses to 1% of blocks.

1 thread:
Sample rate            | 0       | 1       | 100
1 MB block cache size  | 13,094  | 13,166  | 13,341
10 GB block cache size | 202,243 | 188,677 | 229,182

16 threads:
Sample rate            | 0         | 1       | 100
1 MB block cache size  | 208,761   | 178,700 | 201,872
10 GB block cache size | 2,645,996 | 426,295 | 2,587,605

Pull Request resolved: https://github.com/facebook/rocksdb/pull/5473 Differential Revision: D15869479 Pulled By: HaoyuHuang fbshipit-source-id: 7ae802abe84811281a6af8649f489887cd7c4618 18 June 2019, 00:59:02 UTC
2d1dd5b Support computing miss ratio curves using sim_cache. (#5449) Summary: This PR adds a BlockCacheTraceSimulator that reports the miss ratios given different cache configurations. A cache configuration contains "cache_name,num_shard_bits,cache_capacities". For example, "lru, 1, 1K, 2K, 4M, 4G". When we replay the trace, we also perform lookups and inserts on the simulated caches. In the end, it reports the miss ratio for each tuple <cache_name, num_shard_bits, cache_capacity> in a output file. This PR also adds a main source block_cache_trace_analyzer so that we can run the analyzer in command line. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5449 Test Plan: Added tests for block_cache_trace_analyzer. COMPILE_WITH_ASAN=1 make check -j32. Differential Revision: D15797073 Pulled By: HaoyuHuang fbshipit-source-id: aef0c5c2e7938f3e8b6a10d4a6a50e6928ecf408 17 June 2019, 23:41:12 UTC
7d8d564 Override check consistency for DBImplSecondary (#5469) Summary: `DBImplSecondary` calls `CheckConsistency()` during open. In the past, `DBImplSecondary` did not override this function thus `DBImpl::CheckConsistency()` is called. The following can happen. The secondary instance is performing consistency check which calls `GetFileSize(file_path)` but the file at `file_path` is deleted by the primary instance. `DBImpl::CheckConsistency` does not account for this and fails the consistency check. This is undesirable. The solution is that, we call `DBImpl::CheckConsistency()` first. If it passes, then we are good. If not, we give it a second chance and handles the case of file(s) being deleted. Test plan (on dev server): ``` $make clean && make -j20 all $./db_secondary_test ``` All other existing unit tests must pass as well. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5469 Differential Revision: D15861845 Pulled By: riversand963 fbshipit-source-id: 507d72392508caed3cd003bb2e2aa43f993dd597 17 June 2019, 22:39:55 UTC
671d15c Persistent Stats: persist stats history to disk (#5046) Summary: This PR continues the work in https://github.com/facebook/rocksdb/pull/4748 and https://github.com/facebook/rocksdb/pull/4535 by adding a new DBOption `persist_stats_to_disk` which instructs RocksDB to persist stats history to RocksDB itself. When statistics is enabled, and both options `stats_persist_period_sec` and `persist_stats_to_disk` are set, RocksDB will periodically write stats to a built-in column family in the following form: key -> (timestamp in microseconds)#(stats name), value -> stats value. The existing API `GetStatsHistory` will detect the current value of `persist_stats_to_disk` and either read from in-memory data structure or from the hidden column family on disk. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5046 Differential Revision: D15863138 Pulled By: miasantreble fbshipit-source-id: bb82abdb3f2ca581aa42531734ac799f113e931b 17 June 2019, 22:21:50 UTC
ee294c2 Make db_bloom_filter_test parallel (#5467) Summary: When run under TSAN it sometimes goes over 10m and times out. The slowest ones are `DBBloomFilterTestWithParam.BloomFilter` which we have 6 of them. Making the tests run in parallel should take care of the timeout issue. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5467 Differential Revision: D15856912 Pulled By: maysamyabandeh fbshipit-source-id: 26c43c55312974c1b809c070342dee037d0219f4 17 June 2019, 18:13:45 UTC
d43b4cd Integrate block cache tracing into db_bench (#5459) Summary: This PR integrates the block cache tracing into db_bench. It adds three command line arguments. -block_cache_trace_file (Block cache trace file path.) type: string default: "" -block_cache_trace_max_trace_file_size_in_bytes (The maximum block cache trace file size in bytes. Block cache accesses will not be logged if the trace file size exceeds this threshold. Default is 64 GB.) type: int64 default: 68719476736 -block_cache_trace_sampling_frequency (Block cache trace sampling frequency, termed s. It uses spatial downsampling and samples accesses to one out of s blocks.) type: int32 default: 1 Pull Request resolved: https://github.com/facebook/rocksdb/pull/5459 Differential Revision: D15832031 Pulled By: HaoyuHuang fbshipit-source-id: 0ecf2f2686557251fe741a2769b21170777efa3d 17 June 2019, 18:08:21 UTC
d1ae67b Switch Travis to Xenial build (#4789) Summary: I think this should now also run on Travis's new virtualised infrastructure which affords more memory and CPU. We also need to think about migrating from travis-ci.org to travis-ci.com. Pull Request resolved: https://github.com/facebook/rocksdb/pull/4789 Differential Revision: D15856272 fbshipit-source-id: 10b41d21924e8a362bc9646a63ccd1a5dfc437c6 17 June 2019, 17:20:02 UTC
7a8d735 Integrate block cache tracer in block based table reader. (#5441) Summary: This PR integrates the block cache tracer into block based table reader. The tracer will write the block cache accesses using the trace_writer. The tracer is null in this PR so that nothing will be logged. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5441 Differential Revision: D15772029 Pulled By: HaoyuHuang fbshipit-source-id: a64adb92642cd23222e0ba8b10d86bf522b42f9b 15 June 2019, 00:40:31 UTC
f121964 Validate CF Options when creating a new column family (#5453) Summary: It seems like CF Options are not properly validated when creating a new column family with `CreateColumnFamily` API; only a selected few checks are done. Calling `ColumnFamilyData::ValidateOptions`, which is the single source for all CFOptions validations, will help fix this. (`ColumnFamilyData::ValidateOptions` is already called at the time of `DB::Open`). **Test Plan:** Added a new test: `DBTest.CreateColumnFamilyShouldFailOnIncompatibleOptions` ``` TEST_TMPDIR=/dev/shm ./db_test --gtest_filter=DBTest.CreateColumnFamilyShouldFailOnIncompatibleOptions ``` Also ran gtest-parallel to make sure the new test is not flaky. ``` TEST_TMPDIR=/dev/shm ~/gtest-parallel/gtest-parallel ./db_test --gtest_filter=DBTest.CreateColumnFamilyShouldFailOnIncompatibleOptions --repeat=10000 [10000/10000] DBTest.CreateColumnFamilyShouldFailOnIncompatibleOptions (15 ms) ``` Pull Request resolved: https://github.com/facebook/rocksdb/pull/5453 Differential Revision: D15816851 Pulled By: sagar0 fbshipit-source-id: 9e702b9850f5c4a7e0ef8d39e1e6f9b81e7fe1e5 14 June 2019, 21:11:10 UTC
b47cfec fix compilation error on MSVC (#5458) Summary: "__attribute__((__weak__))" was introduced in port\jemalloc_helper.h. It's not supported by Microsoft VS 2015, resulting in compile error. This fix adds a #if branch to work around the compile issue. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5458 Differential Revision: D15827285 fbshipit-source-id: 8c5f7ad31de1ac677bd96f16c4450767de834beb 14 June 2019, 18:28:13 UTC
58c7835 Set executeLocal on child lego jobs (#5456) Summary: This property is needed to run the child jobs on the same host and thus propagate the child job status back to the parent's. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5456 Reviewed By: yancouto Differential Revision: D15824382 Pulled By: maysamyabandeh fbshipit-source-id: 42f2efbedaa3a8b399281105f0ce793c1c9a6191 14 June 2019, 17:38:04 UTC
89695bf Remove unused variable (#5457) Summary: This PR removes the unused variable that causes CLANG build to fail. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5457 Differential Revision: D15825027 Pulled By: HaoyuHuang fbshipit-source-id: 72c847c39ca310560efcbc5938cffa6f31164068 14 June 2019, 16:17:09 UTC
bb41780 Integrate block cache tracer into db_impl (#5433) Summary: This PR integrates the block cache tracer class into db_impl.cc. db_impl.cc contains a member variable of AtomicBlockCacheTraceWriter class and passes its reference to the block_based_table_reader. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5433 Differential Revision: D15728016 Pulled By: HaoyuHuang fbshipit-source-id: 23d5659e8c82d556833dcc1a5558aac8c1f7db71 13 June 2019, 22:43:10 UTC
a3b8c76 Add missing check before calling PurgeObsoleteFiles in EnableFileDeletions (#5448) Summary: Calling PurgeObsoleteFiles with a JobContext for which HaveSomethingToDelete is false is a precondition violation. This would trigger an assertion in debug builds; however, in release builds with assertions disabled, this can result in the pending_purge_obsolete_files_ counter in DBImpl underflowing, which in turn can lead to the process hanging during database close. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5448 Differential Revision: D15792569 Pulled By: ltamasi fbshipit-source-id: 82d92c9b4f6a9efcdc69dbb3d5a52a1ae2dd2472 13 June 2019, 21:43:13 UTC
2c9df9f Dynamic test whether sync_file_range returns ENOSYS (#5416) Summary: `sync_file_range` returns `ENOSYS` on Windows Subsystem for Linux even when using a supposedly supported filesystem like ext4. To handle this case we can do a dynamic check that a no-op `sync_file_range` invocation, which is accomplished by passing zero for the `flags` argument, succeeds. Also I rearranged the function and comments to hopefully make it more easily understandable. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5416 Differential Revision: D15807061 fbshipit-source-id: d31d94e1f228b7850ea500e6199f8b5daf8cfbd3 13 June 2019, 20:56:10 UTC
ec8111c Add Alluxio to USERS.md (#5434) Summary: Add Alluxio's use case of RocksDB to `USERS.md` for metadata service Pull Request resolved: https://github.com/facebook/rocksdb/pull/5434 Differential Revision: D15766559 Pulled By: riversand963 fbshipit-source-id: b68ef851f8f92e0925c31e55296260225fdf849e 13 June 2019, 19:25:26 UTC
5c76ba9 Support rocksdbjava aarch64 build and test (#5258) Summary: Verified with an Ampere Computing eMAG aarch64 system. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5258 Differential Revision: D15807309 Pulled By: maysamyabandeh fbshipit-source-id: ab85d2fd3fe40e6094430ab0eba557b1e979510d 13 June 2019, 18:48:10 UTC
60f3ec2 Fix appveyor complaint about passing const to thread (#5447) Summary: CLANG would complain if we pass const to a lambda function, and appveyor complains if we don't (https://github.com/facebook/rocksdb/pull/5443). The patch fixes that by using the default capture mode. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5447 Differential Revision: D15788722 Pulled By: maysamyabandeh fbshipit-source-id: 47e7f49264afe31fdafe42cb8bf93da126abfca9 12 June 2019, 22:06:22 UTC
f984286 Disable pipelined writes in stress test (#5445) Summary: The tsan crash tests are failing with a data race complaint when the pipelined write option is enabled. Temporarily disable it until its concurrency issues are fixed. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5445 Differential Revision: D15783824 Pulled By: maysamyabandeh fbshipit-source-id: 413a0c3230b86f524fc7eeea2cf8e8375406e65b 12 June 2019, 18:12:36 UTC