swh:1:snp:5115096b921df712aeb2a08114fede57fb3331fb

aad2110 Updating README.fb to have newest version 2.4 Summary: Test Plan: visual 04 October 2013, 19:17:44 UTC
a143ef9 Change namespace from leveldb to rocksdb Summary: Change namespace from leveldb to rocksdb. This allows a single application to link in open-source leveldb code as well as rocksdb code into the same process. Test Plan: compile rocksdb Reviewers: emayanke Reviewed By: emayanke CC: leveldb Differential Revision: https://reviews.facebook.net/D13287 04 October 2013, 18:59:26 UTC
b3ed081 Add a statistic to count the number of calls to GetUpdatesSince Summary: This is useful to keep track of refreshes in transaction log iterator Test Plan: make; db_stress --statistics=1 shows it Reviewers: dhruba, haobo Reviewed By: dhruba CC: leveldb Differential Revision: https://reviews.facebook.net/D13281 04 October 2013, 17:47:20 UTC
854d236 Add backward compatible option in GetLiveFiles to choose whether or not to Flush first Summary: As explained in comments in GetLiveFiles in db.h, this option will cause flush to be skipped in GetLiveFiles because some use-cases use GetSortedWalFiles after GetLiveFiles to generate more complete snapshots. Using GetSortedWalFiles after GetLiveFiles allows us to not Flush in GetLiveFiles first because wals have everything. Note: file deletions will be disabled before calling GLF or GSWF so live logs will not move to archive logs or get deleted. Note: Manifest file is truncated to a proper value in GLF, so it will always replay from the proper wal files on a restart Test Plan: make Reviewers: dhruba, haobo Reviewed By: dhruba CC: leveldb Differential Revision: https://reviews.facebook.net/D13257 04 October 2013, 17:20:10 UTC
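The backup flow this option enables can be sketched roughly as follows. This is a minimal illustration, not the exact internal call sequence; it assumes the DisableFileDeletions/GetLiveFiles/GetSortedWalFiles API of this era, and the flush_memtable parameter name is an assumption based on the option described above.

```cpp
#include <cstdint>
#include <string>
#include <vector>
#include "rocksdb/db.h"
#include "rocksdb/transaction_log.h"

// Sketch of a "no-flush" snapshot; error handling is abbreviated.
rocksdb::Status SnapshotFileList(rocksdb::DB* db,
                                 std::vector<std::string>* live_files,
                                 rocksdb::VectorLogPtr* wal_files) {
  // Stop file deletions so live WALs are not archived or deleted mid-snapshot.
  rocksdb::Status s = db->DisableFileDeletions();
  if (!s.ok()) return s;

  uint64_t manifest_size = 0;
  // flush_memtable=false: skip the flush, because the WALs collected below
  // already contain everything that has not yet reached the SST files.
  s = db->GetLiveFiles(*live_files, &manifest_size, /*flush_memtable=*/false);
  if (s.ok()) {
    s = db->GetSortedWalFiles(*wal_files);
  }

  // Re-enable deletions regardless of success.
  db->EnableFileDeletions();
  return s;
}
```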
200c05a [RocksDB] Still honor DisableFileDeletions when purge_log_after_memtable_flush is on Summary: as title Test Plan: make check Reviewers: emayanke Reviewed By: emayanke CC: leveldb Differential Revision: https://reviews.facebook.net/D13263 03 October 2013, 23:12:43 UTC
fa798e9 [RocksDB] Submit mem table flush job in a different thread pool Summary: As title. This is just a quick hack and not ready for commit. It fails a lot of unit tests. I will test/debug it directly in ViewState shadow. Test Plan: Try it in shadow test. Reviewers: dhruba, xjin CC: leveldb Differential Revision: https://reviews.facebook.net/D12933 03 October 2013, 21:37:19 UTC
658a3ce Fix SIGSEGV issue in universal compaction Summary: We saw SIGSEGV when setting options.num_levels=1 in universal compaction style. Dug into this issue for a while, and finally found the root cause (thanks to Haobo for the discussion). Test Plan: Add new unit test. It throws SIGSEGV without this change. Also run "make all check". Reviewers: haobo, dhruba CC: leveldb Differential Revision: https://reviews.facebook.net/D13251 03 October 2013, 00:33:31 UTC
6b34021 Triggering verify for gets also Summary: Will use iterators to verify keys in the db for half of its keys and Gets for the other half. Test Plan: ./db_stress --max_key=1000 --ops_per_thread=100 Reviewers: dhruba, haobo Reviewed By: dhruba CC: leveldb Differential Revision: https://reviews.facebook.net/D13227 02 October 2013, 18:22:17 UTC
7104697 [RocksDB] Added perf counters to track skipped internal keys during iteration Summary: as title. unit test not polished. this is for a quick live test Test Plan: live Reviewers: dhruba Reviewed By: dhruba CC: leveldb Differential Revision: https://reviews.facebook.net/D13221 02 October 2013, 17:48:41 UTC
861f6e4 Remove the hard-coded enum value in statistics.h Summary: I am planning to add more to the statistics classes but found the current way of using enums very verbose; it unnecessarily increases the difficulty of adding new statistics. In this diff I removed the code that explicitly specifies the value of each enum entry. This will help us add new statistic items more conveniently without manually incrementing the value of other enum entries by one. Test Plan: make; make check; Reviewers: haobo, dhruba, xjin, emayanke, vamsi CC: leveldb Differential Revision: https://reviews.facebook.net/D13197 01 October 2013, 21:14:06 UTC
7edb92b Phase 2 of iterator stress test Summary: Using an iterator instead of the Get method, each thread goes through a portion of the database and verifies values by comparing to the shared state. Test Plan: ./db_stress --db=/tmp/tmppp --max_key=10000 --ops_per_thread=10000 To test some basic cases, the following lines can be added (each set in turn) to the verifyDb method with the following expected results: // Should abort with "Unexpected value found" shared.Delete(start); // Should abort with "Value not found" WriteOptions write_opts; db_->Delete(write_opts, Key(start)); // Should succeed WriteOptions write_opts; shared.Delete(start); db_->Delete(write_opts, Key(start)); // Should abort with "Value not found" WriteOptions write_opts; db_->Delete(write_opts, Key(start + (end-start)/2)); // Should abort with "Value not found" db_->Delete(write_opts, Key(end-1)); // Should abort with "Unexpected value" shared.Delete(end-1); // Should abort with "Unexpected value" shared.Delete(start + (end-start)/2); // Should abort with "Value not found" db_->Delete(write_opts, Key(start)); shared.Delete(start); db_->Delete(write_opts, Key(end-1)); db_->Delete(write_opts, Key(end-2)); To test the out of range abort, change the key in the for loop to Key(i+1), so that the key defined by the index i is now outside of the supposed range of the database. Reviewers: emayanke Reviewed By: emayanke CC: dhruba, xjin Differential Revision: https://reviews.facebook.net/D13071 30 September 2013, 23:48:00 UTC
22bb7c7 [RocksDB] print the name of options.memtable_factory in LOG so we know Summary: as title Test Plan: make check Reviewers: dhruba, emayanke Reviewed By: dhruba CC: leveldb Differential Revision: https://reviews.facebook.net/D13179 29 September 2013, 03:57:29 UTC
8eb552b New unit test for iterator with snapshot Summary: I played with the reported bug about iterator with snapshot: https://code.google.com/p/leveldb/issues/detail?id=200. I turned the original test program (https://code.google.com/p/leveldb/issues/attachmentText?id=200&aid=2000000000&name=test.cc&token=7uOUQW-HFlbAFMUm7EqtaAEy7Tw%3A1378320724136) into a new unit test, but I cannot reproduce the problem. Notice lines 31-34 in the above link. I have run the new test with and without such Put() operations. Both succeed. So this diff simply adds the test, without changing any source code. Test Plan: run new test. Reviewers: dhruba, haobo, emayanke Reviewed By: dhruba CC: leveldb Differential Revision: https://reviews.facebook.net/D12735 28 September 2013, 18:39:08 UTC
0c40406 [RocksDB] Move last_sequence and last_flushed_sequence_ update back into lock protected area Summary: A previous diff moved these outside of lock protected area. Moved back in now. Also moved tmp_batch_ update outside of lock protected area, as only the single write thread can access it. Test Plan: make check Reviewers: dhruba Reviewed By: dhruba CC: leveldb Differential Revision: https://reviews.facebook.net/D13137 27 September 2013, 03:43:11 UTC
08740b1 [RocksDB] Fix skiplist sequential insertion optimization Summary: The original optimization missed updating links other than the lowest level. Test Plan: make check; perf_context_test Reviewers: dhruba Reviewed By: dhruba CC: leveldb, adsharma Differential Revision: https://reviews.facebook.net/D13119 26 September 2013, 22:17:03 UTC
e0aa19a [RocksDB] Add an option to enable set based memtable for perf_context_test Summary: as title. Some results: -- Sequential insertion of 1M key/value with stock skip list (all in one memtable) time ./perf_context_test --total_keys=1000000 --use_set_based_memetable=0 Inserting 1000000 key/value pairs ... Put uesr key comparison: Count: 1000000 Average: 8.0179 StdDev: 176.34 Min: 0.0000 Median: 2.5555 Max: 88933.0000 Percentiles: P50: 2.56 P75: 2.83 P99: 58.21 P99.9: 133.62 P99.99: 987.50 Get uesr key comparison: Count: 1000000 Average: 43.4465 StdDev: 379.03 Min: 2.0000 Median: 36.0195 Max: 88939.0000 Percentiles: P50: 36.02 P75: 43.66 P99: 112.98 P99.9: 824.84 P99.99: 7615.38 real 0m21.345s user 0m14.723s sys 0m5.677s -- Sequential insertion of 1M key/value with set based memtable (all in one memtable) time ./perf_context_test --total_keys=1000000 --use_set_based_memetable=1 Inserting 1000000 key/value pairs ... Put uesr key comparison: Count: 1000000 Average: 61.5022 StdDev: 6.49 Min: 0.0000 Median: 62.4295 Max: 71.0000 Percentiles: P50: 62.43 P75: 66.61 P99: 71.00 P99.9: 71.00 P99.99: 71.00 Get uesr key comparison: Count: 1000000 Average: 29.3810 StdDev: 3.20 Min: 1.0000 Median: 29.1801 Max: 34.0000 Percentiles: P50: 29.18 P75: 32.06 P99: 34.00 P99.9: 34.00 P99.99: 34.00 real 0m28.875s user 0m21.699s sys 0m5.749s The worst case comparison count for a Put is 88933 (skiplist) vs 71 (set based memtable). Of course, there is other inefficiency in the set based memtable implementation, which leads to the overall worse performance. However, the P99 behavior advantage is very obvious. Test Plan: ./perf_context_test and viewstate shadow testing Reviewers: dhruba Reviewed By: dhruba CC: leveldb Differential Revision: https://reviews.facebook.net/D13095 26 September 2013, 05:49:18 UTC
f1a60e5 The vector rep implementation was segfaulting because of incorrect initialization of vector. Summary: The constructor for Vector memtable has a parameter called 'count' that specifies the capacity of the vector to be reserved at allocation time. It was incorrectly used to initialize the size of the vector. Test Plan: Enhanced db_test. Reviewers: haobo, xjin, emayanke Reviewed By: haobo CC: leveldb Differential Revision: https://reviews.facebook.net/D13083 25 September 2013, 18:33:52 UTC
87d6eb2 Implement apis in the Environment to clear out pages in the OS cache. Summary: Added a new api to the Environment that allows clearing out not-needed pages from the OS cache. This will be helpful when the compressed block cache replaces the OS cache. Test Plan: EnvPosixTest.InvalidateCache Reviewers: haobo Reviewed By: haobo CC: leveldb Differential Revision: https://reviews.facebook.net/D13041 24 September 2013, 05:05:03 UTC
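On Linux, clearing not-needed pages for a file region from the OS cache is typically done with posix_fadvise(POSIX_FADV_DONTNEED). The sketch below shows that underlying mechanism in isolation; it is an illustration of the idea, not the exact Env method or signature added by this commit.

```cpp
#include <cerrno>
#include <fcntl.h>
#include <sys/types.h>
#include <unistd.h>

// Hypothetical helper: drop `length` bytes starting at `offset` of an open
// file from the OS page cache, roughly what an Env-level InvalidateCache
// call would do on Linux.
int DropPageCache(int fd, off_t offset, off_t length) {
  // Flush dirty pages first so DONTNEED can actually evict them.
  if (fdatasync(fd) != 0) {
    return errno;
  }
  // posix_fadvise returns an errno-style value directly (0 on success).
  return posix_fadvise(fd, offset, length, POSIX_FADV_DONTNEED);
}
```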
9262061 Fixing crashing tests to include iterpercent param Summary: Adding in the iterpercent flag to tests. Test Plan: make crash_test Reviewers: emayanke Reviewed By: emayanke Differential Revision: https://reviews.facebook.net/D13035 20 September 2013, 23:27:22 UTC
5e9f3a9 Better locking in vectorrep that increases throughput to match speed of storage. Summary: There is a use-case where we want to insert data into rocksdb as fast as possible. Vector rep is used for this purpose. The background flush thread needs to flush the vectorrep to storage. It acquires the dblock then sorts the vector, releases the dblock and then writes the sorted vector to storage. This is suboptimal because the lock is held during the sort, which prevents new writes from occurring. This patch moves the sorting of the vector rep outside the db mutex. Performance is now as fast as the underlying storage system. If you are doing buffered writes to rocksdb files, then you can observe throughput upwards of 200 MB/sec writes. This is an early draft and not yet ready to be reviewed. Test Plan: make check Task ID: # Blame Rev: Reviewers: haobo Reviewed By: haobo CC: leveldb, haobo Differential Revision: https://reviews.facebook.net/D12987 20 September 2013, 04:48:10 UTC
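The locking change can be illustrated generically: detach the buffer under the mutex, then sort the private copy with the lock released. A minimal sketch in plain C++ with std::mutex, not the actual vectorrep code:

```cpp
#include <algorithm>
#include <mutex>
#include <string>
#include <vector>

std::mutex db_mutex;                 // stands in for the db lock
std::vector<std::string> write_buf;  // stands in for the vector rep

// Flush path: hold the lock only long enough to detach the buffer, then do
// the expensive sort outside the critical section.
std::vector<std::string> DetachAndSortForFlush() {
  std::vector<std::string> to_flush;
  {
    std::lock_guard<std::mutex> guard(db_mutex);
    to_flush.swap(write_buf);        // new writes can proceed immediately
  }
  std::sort(to_flush.begin(), to_flush.end());  // done without the db lock
  return to_flush;
}
```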
4335418 Phase 1 of an iterator stress test Summary: Added MultiIterate() which does a seek and some Next/Prev calls. Iterator status is checked only, no data integrity check Test Plan: make db_stress ./db_stress --iterpercent=<nonzero value> --readpercent=, etc. Reviewers: emayanke, dhruba, xjin Reviewed By: emayanke CC: leveldb Differential Revision: https://reviews.facebook.net/D12915 19 September 2013, 23:47:24 UTC
4734dbb [RocksDB] Unit test to show Seek key comparison number Summary: Added SeekKeyComparison to show the user key comparisons incurred by Seek. Test Plan: make perf_context_test export LEVELDB_TESTS=DBTest.SeekKeyComparison ./perf_context_test --write_buffer_size=500000 --total_keys=10000 ./perf_context_test --write_buffer_size=250000 --total_keys=10000 Reviewers: dhruba, xjin Reviewed By: xjin CC: leveldb Differential Revision: https://reviews.facebook.net/D12843 19 September 2013, 04:43:41 UTC
72fcbf0 [RocksDB] Fix DBTest.UniversalCompactionSizeAmplification too Summary: as title Test Plan: make db_test; ./db_test Reviewers: dhruba Reviewed By: dhruba CC: leveldb Differential Revision: https://reviews.facebook.net/D13005 18 September 2013, 04:29:33 UTC
5b76338 [RocksDB] Fix DBTest.UniversalCompactionTrigger to reflect the correct compaction trigger condition. Summary: as title Test Plan: make db_test; ./db_test Reviewers: dhruba Reviewed By: dhruba CC: leveldb Differential Revision: https://reviews.facebook.net/D12981 17 September 2013, 21:17:48 UTC
11c6502 Revert "Minor fixes found while trying to compile it using clang on Mac OS X" This reverts commit 5f2c136c328a8dbb6c3cb3818881e30eeb916cd6. 16 September 2013, 06:01:26 UTC
1d8c57d [RocksDB] Universal compaction trigger condition minor fix Summary: Currently, when total number of files reaches level0_file_num_compaction_trigger, universal compaction will schedule a compaction job, but the job will not honor the compaction until the total number of files is level0_file_num_compaction_trigger+1. Fixed the condition for consistent behavior (start compaction on reaching level0_file_num_compaction_trigger). Test Plan: make check; db_stress Reviewers: dhruba CC: leveldb Differential Revision: https://reviews.facebook.net/D12945 16 September 2013, 05:35:59 UTC
5f2c136 Minor fixes found while trying to compile it using clang on Mac OS X 16 September 2013, 05:06:14 UTC
8866448 [RocksDB] fix build env_test Summary: move the TwoPools test to the end of thread related tests. Otherwise, the SetBackgroundThreads call would increase the Low pool size and affect the result of other tests. Test Plan: make env_test; ./env_test Reviewers: dhruba, emayanke, xjin Reviewed By: xjin CC: leveldb Differential Revision: https://reviews.facebook.net/D12939 14 September 2013, 04:13:20 UTC
4012ca1 Added a parameter to limit the maximum space amplification for universal compaction. Summary: Added a new field called max_size_amplification_ratio in the CompactionOptionsUniversal structure. This determines the maximum percentage overhead of space amplification. The size amplification is defined to be the ratio between the size of the oldest file and the sum of the sizes of all other files. If the size amplification exceeds the specified value, then min_merge_width and max_merge_width are ignored and a full compaction of all files is done. A value of 10 means that a database that stores 100 bytes of user data could occupy 110 bytes of physical storage. Test Plan: Unit test DBTest.UniversalCompactionSpaceAmplification added. Reviewers: haobo, emayanke, xjin Reviewed By: haobo CC: leveldb Differential Revision: https://reviews.facebook.net/D12825 13 September 2013, 23:27:18 UTC
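Using the option is a one-line change on the universal compaction settings; a minimal sketch, assuming the field lives on Options::compaction_options_universal as the commit describes:

```cpp
#include "rocksdb/options.h"

rocksdb::Options MakeUniversalOptions() {
  rocksdb::Options options;
  options.compaction_style = rocksdb::kCompactionStyleUniversal;
  // Tolerate at most ~10% space amplification; beyond that, min_merge_width
  // and max_merge_width are ignored and a full compaction of all files runs.
  options.compaction_options_universal.max_size_amplification_ratio = 10;
  return options;
}
```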
e2a093a Fix delete in db_ttl.cc Summary: should delete the proper variable Test Plan: make all check Reviewers: haobo, dhruba Reviewed By: haobo CC: leveldb Differential Revision: https://reviews.facebook.net/D12921 13 September 2013, 18:16:27 UTC
eeb90c7 Update README file for public interface Summary: public interface is in include/* Test Plan: visual Reviewers: dhruba CC: leveldb Differential Revision: https://reviews.facebook.net/D12927 13 September 2013, 18:15:47 UTC
5e73c4d Update README file and check arc diff with proxy Summary: export http_proxy='http://172.31.255.99:8080' export https_proxy="$http_proxy" in bashrc makes arc work. Also README file needed to be updated Test Plan: visual Reviewers: dhruba, haobo Reviewed By: haobo CC: leveldb Differential Revision: https://reviews.facebook.net/D12903 13 September 2013, 05:19:44 UTC
1565dab [RocksDB] Enhance Env to support two thread pools LOW and HIGH Summary: This is the groundwork for separating memtable flush jobs into their own thread pool. Both SetBackgroundThreads and Schedule take a third parameter Priority to indicate which thread pool they are working on. The names LOW and HIGH are just identifiers for two different thread pools, and do not indicate a real difference in 'priority'. We can set the number of threads in the pools independently. The thread pool implementation is refactored. Test Plan: make check Reviewers: dhruba, emayanke CC: leveldb Differential Revision: https://reviews.facebook.net/D12885 12 September 2013, 23:15:36 UTC
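Configuring the two pools happens on the Env shared by the database; a minimal sketch using Env::Priority::LOW and HIGH (the split of compactions vs. flushes in the comments follows the plan described above, not a requirement of the API):

```cpp
#include "rocksdb/env.h"
#include "rocksdb/options.h"

void ConfigureThreadPools(rocksdb::Options* options) {
  rocksdb::Env* env = rocksdb::Env::Default();
  // Size the two pools independently; the names only identify the pools.
  env->SetBackgroundThreads(4, rocksdb::Env::Priority::LOW);   // e.g. compactions
  env->SetBackgroundThreads(1, rocksdb::Env::Priority::HIGH);  // e.g. memtable flushes
  options->env = env;
}
```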
0e42230 [RocksDB] Remove Log file immediately after memtable flush Summary: As title. The DB log file life cycle is tied up with the memtable it backs. Once the memtable is flushed to sst and committed, we should be able to delete the log file, without holding the mutex. This is part of the bigger change to avoid FindObsoleteFiles at runtime. It deals with log files. sst files will be dealt with later. Test Plan: make check; db_bench Reviewers: dhruba CC: leveldb Differential Revision: https://reviews.facebook.net/D11709 12 September 2013, 18:54:44 UTC
6e2b580 Updating readme file for version 2.3 Summary: Test Plan: Reviewers: CC: Task ID: # Blame Rev: 12 September 2013, 04:58:07 UTC
f2f4c80 [RocksDB] Added nanosecond stopwatch and new perf counters to track block read cost Summary: The purpose of this diff is to expose per user-call level precise timing of block reads, so that we can answer questions like: a Get() costs me 100ms, is that somehow related to loading blocks from the file system, or something else? We will answer that with EXACTLY how many blocks have been read, how much time was spent on transferring the bytes from the OS, how much time was spent on checksum verification and how much time was spent on block decompression, just for that one Get. A nanosecond stopwatch was introduced to track time with higher precision. The cost/precision of the stopwatch is also measured in a unit test. On my dev box, retrieving one time instance costs about 30ns, on average. The deviation of timing results is good enough to track 100ns-1us level events. And the overhead could be safely ignored for 100us level events (10000 instances/s), for example, a viewstate thrift call. Test Plan: perf_context_test, also testing with viewstate shadow traffic. Reviewers: dhruba Reviewed By: dhruba CC: leveldb, xjin Differential Revision: https://reviews.facebook.net/D12351 08 September 2013, 04:14:54 UTC
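Reading the per-call counters looks roughly like the sketch below. It assumes the thread-local rocksdb::perf_context object and block-read fields of this era (later releases reach the same structure through get_perf_context()); treat the exact field names as assumptions.

```cpp
#include <cstdio>
#include <string>
#include "rocksdb/db.h"
#include "rocksdb/perf_context.h"

void TimedGet(rocksdb::DB* db, const std::string& key) {
  rocksdb::perf_context.Reset();  // clear the per-thread counters for this call

  std::string value;
  db->Get(rocksdb::ReadOptions(), key, &value);

  // Per-call breakdown of where the time went for this one Get.
  std::printf("blocks read: %llu, read time: %llu ns, checksum: %llu ns, decompress: %llu ns\n",
              (unsigned long long)rocksdb::perf_context.block_read_count,
              (unsigned long long)rocksdb::perf_context.block_read_time,
              (unsigned long long)rocksdb::perf_context.block_checksum_time,
              (unsigned long long)rocksdb::perf_context.block_decompress_time);
}
```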
32c965d Flush was hanging because the configured options specified that more than 1 memtable needs to be merged. Summary: There is a config option called Options.min_write_buffer_number_to_merge that specifies the minimum number of write buffers to merge in memory before flushing to a file in L0. But in the case when the db is being closed, we should not be using this config; instead we should flush whatever write buffers were available at that time. Test Plan: Unit test attached. Reviewers: haobo, emayanke Reviewed By: haobo CC: leveldb Differential Revision: https://reviews.facebook.net/D12717 06 September 2013, 23:28:33 UTC
197034e An iterator may automatically invoke reseeks. Summary: An iterator invokes reseek if the number of sequential skips over the same userkey exceeds a configured number. This makes iter->Next() faster (because of fewer key compares) if a large number of adjacent internal keys in a table (sst or memtable) have the same userkey. Test Plan: Unit test DBTest.IterReseek. Reviewers: emayanke, haobo, xjin Reviewed By: xjin CC: leveldb, xjin Differential Revision: https://reviews.facebook.net/D11865 06 September 2013, 18:50:53 UTC
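The skip threshold is a regular option; a minimal sketch, assuming the field name max_sequential_skip_in_iterations is the knob being configured:

```cpp
#include "rocksdb/options.h"

rocksdb::Options MakeIteratorOptions() {
  rocksdb::Options options;
  // After this many sequential internal keys with the same user key have been
  // skipped inside Next(), the iterator stops skipping and reseeks instead.
  options.max_sequential_skip_in_iterations = 8;
  return options;
}
```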
de98c1d Update documentation for backups and LogData Summary: LogData doesn't consume sequence numbers and doesn't increase the count of the write-batch. Also it was discussed that GetLiveFiles will have to be followed by GetSortedWalFiles to get a lossless backup Test Plan: visual Reviewers: dhruba, haobo Reviewed By: dhruba CC: leveldb Differential Revision: https://reviews.facebook.net/D12753 05 September 2013, 22:33:37 UTC
4b785aa Add logdata to ttl Summary: Ttl-write makes a new writebatch and calls Write on the base db. It should recognize LogData also Test Plan: make Reviewers: dhruba, haobo Reviewed By: haobo CC: leveldb Differential Revision: https://reviews.facebook.net/D12747 05 September 2013, 20:52:47 UTC
aa5c897 Return pathname relative to db dir in LogFile and cleanup AppendSortedWalsOfType Summary: So that replication can just download from wherever LogFile.Pathname is pointing them. Test Plan: make all check;./db_repl_stress Reviewers: dhruba, haobo Reviewed By: dhruba CC: leveldb Differential Revision: https://reviews.facebook.net/D12609 04 September 2013, 20:44:43 UTC
42c109c New ldb command to convert compaction style Summary: Add new command "change_compaction_style" to ldb tool. For universal->level, it shows "nothing to do". For level->universal, it compacts all files into a single one and moves the file to level 0. Also add check for number of files at level 1+ when opening db with universal compaction style. Test Plan: 'make all check'. New unit test for the internal conversion function. Also manually test various cmds like: ./ldb change_compaction_style --old_compaction_style=0 --new_compaction_style=1 --db=/tmp/leveldbtest-3088/db_test Reviewers: haobo, dhruba Reviewed By: haobo CC: vamsi, emayanke Differential Revision: https://reviews.facebook.net/D12603 04 September 2013, 20:13:08 UTC
352f063 Fix memory leak in table.cc Summary: In InternalGet, BlockReader returns an Iterator which is legitimately freed at the end of the 'else' scope. BUT there is a break statement in between, so the iterator must be freed there too! The best solution would be to move to unique_ptr and let it handle this. Changed it to a unique_ptr. Test Plan: valgrind ./db_test; make all check Reviewers: dhruba, haobo Reviewed By: dhruba CC: leveldb Differential Revision: https://reviews.facebook.net/D12681 03 September 2013, 05:13:29 UTC
b1d09f1 Fix build failing because of ttl-keymayexist Summary: PutValues calls Flush in ttl_test which clears memtables. KeyMayExist called after that will not be able to read those key-values Test Plan: make all check OPT=-g Reviewers: leveldb 02 September 2013, 04:06:04 UTC
c34271a Fix bug in Counters and record SequenceNumber using only TickerCount Summary: The way counters/statistics are implemented in rocksdb demands that enum Tickers and TickerNameMap follow the same order, otherwise statistics exposed from fbcode/rocks get out-of-sync. 2 counters for prefix had violated this order and when I built counters for fbcode/mcrocksdb, statistics for sequence number were appearing out-of-sync. The other change is to record sequence-number using setTickerCount only and not recordTick. This is because of a difference in statistics as understood by rocks/utils which uses ServiceData::statistics function and rocksdb statistics. In rocksdb there is just 1 counter for a countername. But in ServiceData there are 4 independent buckets for every countername: Count, Sum, Average and Rate. SetTickerCount and RecordTick update the same variable in rocksdb but different buckets in ServiceData. Therefore, I had to choose one consistent function from RecordTick or SetTickerCount for sequence number in rocksdb. I chose SetTickerCount because the statistics object in options passed during rocksdb-open is user-dependent and SetTickerCount makes sense there. There will be a corresponding diff to mcrocksdb in fbcode shortly. Test Plan: make all check; check ticker value using fprintfs Reviewers: dhruba Reviewed By: dhruba CC: leveldb Differential Revision: https://reviews.facebook.net/D12669 02 September 2013, 00:59:32 UTC
ab5c5c2 Fix build caused by DeleteFile not tolerating / at the beginning Summary: db->DeleteFile calls ParseFileName to check name that was returned for sst file. Now, sst filename is returned using TableFileName which uses MakeFileName. This puts a / at the front of the name and ParseFileName doesn't like that. Changed ParseFileName to tolerate /s at the beginning. The test delet_file_test used to pass earlier because this behaviour of MakeFileName had been changed a while back to not return a / during which delete_file_test was checked in. But MakeFileName had to be reverted to add / at the front because GetLiveFiles used at many places outside rocksdb used the previous behaviour of MakeFileName. Test Plan: make;./delete_filetest;make all check Reviewers: dhruba, haobo, vamsi Reviewed By: dhruba CC: leveldb Differential Revision: https://reviews.facebook.net/D12663 02 September 2013, 00:59:13 UTC
f121c4f KeyMayExist for ttl Summary: value needed to be filtered of timestamp Test Plan: ./ttl_test Reviewers: dhruba, haobo, vamsi Reviewed By: dhruba CC: leveldb Differential Revision: https://reviews.facebook.net/D12657 01 September 2013, 07:28:18 UTC
7afdf5e Correct status in options.h from WouldBlock to Incomplete Summary: WouldBlock was an intermediate status but was changed to Incomplete Test Plan: visual Reviewers: dhruba Differential Revision: https://reviews.facebook.net/D12651 31 August 2013, 15:50:04 UTC
46dcf51 Return a '/' before names of all files through MakeFileName Summary: // won't hurt but a missing / hurts sometimes Test Plan: make all check; ./db_repl_stress Reviewers: vamsi Reviewed By: vamsi CC: dhruba Differential Revision: https://reviews.facebook.net/D12621 30 August 2013, 21:25:22 UTC
59de2db Cleanup DeleteFile API Summary: The DeleteFile API was removing files inside the db-lock. This is now changed to remove files outside the db-lock. The GetLiveFilesMetadata() returns the smallest and largest sequence number of each file as well. Test Plan: deletefile_test Reviewers: emayanke, haobo Reviewed By: haobo CC: leveldb Maniphest Tasks: T63 Differential Revision: https://reviews.facebook.net/D12567 29 August 2013, 04:18:58 UTC
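Together the two calls look roughly like this; a minimal sketch, assuming GetLiveFilesMetaData/DeleteFile and the metadata field names as they appear in the public headers:

```cpp
#include <vector>
#include "rocksdb/db.h"

// List every live SST file with its level, size and sequence-number range,
// then (purely as an illustration) delete one of them by name.
void InspectAndDeleteOneFile(rocksdb::DB* db) {
  std::vector<rocksdb::LiveFileMetaData> files;
  db->GetLiveFilesMetaData(&files);
  for (const auto& f : files) {
    (void)f;  // inspect f.name, f.level, f.size, f.smallest_seqno, f.largest_seqno
  }
  if (!files.empty()) {
    // Per the cleanup above, the actual file removal happens outside the db-lock.
    db->DeleteFile(files.back().name);
  }
}
```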
48e5ea0 [RocksDB] Fix TransformRepFactory related valgrind problem Summary: Let TransformRepFactory own the passed in transform. Also make it better encapsulated. Test Plan: make valgrind_check; Reviewers: dhruba, emayanke Reviewed By: emayanke CC: leveldb Differential Revision: https://reviews.facebook.net/D12591 29 August 2013, 02:27:54 UTC
fc0c399 Introduced a new flag non_blocking_io in ReadOptions. Summary: If ReadOptions.non_blocking_io is set to true, then KeyMayExist and Iterators will return data that is cached in RAM. If the Iterator needs to do IO from storage to serve the data, then the Iterator.status() will return Status::IsRetry(). Test Plan: Enhanced unit test DBTest.KeyMayExist to detect whether any IOs were issued from storage. Added DBTest.NonBlockingIteration to verify nonblocking Iterations. Reviewers: emayanke, haobo Reviewed By: haobo CC: leveldb Maniphest Tasks: T63 Differential Revision: https://reviews.facebook.net/D12531 28 August 2013, 17:49:14 UTC
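Per the description above, usage looks roughly like the sketch below. The field and status-check names (non_blocking_io, IsRetry) are taken directly from the commit message; later releases expose this behavior under different names, so treat them as assumptions.

```cpp
#include <memory>
#include "rocksdb/db.h"
#include "rocksdb/options.h"

// Scan only data that is already cached in RAM; fall back if storage IO
// would be required to continue.
void CachedOnlyScan(rocksdb::DB* db) {
  rocksdb::ReadOptions ro;
  ro.non_blocking_io = true;  // field name as described in the commit
  std::unique_ptr<rocksdb::Iterator> it(db->NewIterator(ro));
  for (it->SeekToFirst(); it->Valid(); it->Next()) {
    // consume it->key() / it->value()
  }
  if (it->status().IsRetry()) {
    // The iterator would need storage IO; retry with a blocking read instead.
  }
}
```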
43eef52 [RocksDB] move stats counting outside of mutex protected region for DB::Get() Summary: As title. This is possible as tickers are atomic now. db_bench on a high qps in-memory multi-thread random get workload showed ~5% throughput improvement. Test Plan: make check; db_bench; db_stress Reviewers: dhruba Reviewed By: dhruba CC: leveldb Differential Revision: https://reviews.facebook.net/D12555 27 August 2013, 20:36:10 UTC
dad2731 Fix bug in KeyMayExist Summary: In KeyMayExist.db_test we do a Flush which causes an sst file to be written and added as an open file in TableCache, but the block cache for the file is not populated. So value_found should have been false where it was true, and KeyMayExist.db_test should not have passed earlier. But it passed because BlockReader in table/table.cc takes 2 default arguments at the end called for_compaction and no_io. Although I passed no_io=true from InternalGet to BlockReader, it understood it as for_compaction=true and defaulted no_io to false. This is a bug and, although it will be removed by Dhruba's new patch to incorporate no_io in readoptions, I'm submitting this patch to fix this bug independently of that patch. Test Plan: make all check Reviewers: dhruba, haobo Reviewed By: dhruba CC: leveldb Differential Revision: https://reviews.facebook.net/D12537 26 August 2013, 15:45:58 UTC
b1074ac Use initializer list for VersionSet Summary: An initializer list is faster/preferable because it can call the constructor for this object straight away; otherwise it will be created first and then initialized again. Although the gain may not be much in this case because files_ is just a pointer and not a complex object, this is recommended practice. Test Plan: make all check Reviewers: dhruba, haobo Reviewed By: dhruba CC: leveldb Differential Revision: https://reviews.facebook.net/D12519 25 August 2013, 01:16:01 UTC
5738448 Fix for no_io Summary: Oops. My bad. Test Plan: Make all check Reviewers: emayanke Reviewed By: emayanke CC: haobo, leveldb, dhruba Differential Revision: https://reviews.facebook.net/D12525 23 August 2013, 23:36:01 UTC
5c3b254 Fix memory leak Summary: There is a memory leak because TransformRepFactory does not delete its SliceTransform pointer. This patch adds a delete to the destructor. Test Plan: make check make valgrind_check Reviewers: dhruba, emayanke, haobo Reviewed By: dhruba CC: leveldb Differential Revision: https://reviews.facebook.net/D12513 23 August 2013, 22:39:49 UTC
4504c99 Internal/user key bug fix. Summary: Fix code so that the filter_block layer only assumes keys are internal when prefix_extractor is set. Test Plan: ./filter_block_test Reviewers: dhruba, haobo Reviewed By: dhruba CC: leveldb Differential Revision: https://reviews.facebook.net/D12501 23 August 2013, 21:49:57 UTC
1186192 Replace include/leveldb with include/rocksdb. Summary: Replace include/leveldb with include/rocksdb. Test Plan: make clean; make check make clean; make release Differential Revision: https://reviews.facebook.net/D12489 23 August 2013, 17:51:00 UTC
6f4e3ee Added include guards to stringappend and redis-list Summary: added "#pragma once" in the .h files Test Plan: make and run: stringappend_test, redis_test Reviewers: emayanke, haobo Reviewed By: emayanke CC: leveldb Differential Revision: https://reviews.facebook.net/D12495 23 August 2013, 17:28:16 UTC
74781a0 Add three new MemTableRep's Summary: This patch adds three new MemTableRep's: UnsortedRep, PrefixHashRep, and VectorRep. UnsortedRep stores keys in an std::unordered_map of std::sets. When an iterator is requested, it dumps the keys into an std::set and iterates over that. VectorRep stores keys in an std::vector. When an iterator is requested, it creates a copy of the vector and sorts it using std::sort. The iterator accesses that new vector. PrefixHashRep stores keys in an unordered_map mapping prefixes to ordered sets. I also added one API change. I added a function MemTableRep::MarkImmutable. This function is called when the rep is added to the immutable list. It doesn't do anything yet, but it seems like that could be useful. In particular, for the vectorrep, it means we could elide the extra copy and just sort in place. The only reason I haven't done that yet is because the use of the ArenaAllocator complicates things (I can elaborate on this if needed). Test Plan: make -j32 check ./db_stress --memtablerep=vector ./db_stress --memtablerep=unsorted ./db_stress --memtablerep=prefixhash --prefix_size=10 Reviewers: dhruba, haobo, emayanke Reviewed By: dhruba CC: leveldb Differential Revision: https://reviews.facebook.net/D12117 23 August 2013, 06:10:02 UTC
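Selecting one of these reps is done through Options::memtable_factory; a minimal sketch using the vector rep (the VectorRepFactory class name, its reserve-count constructor argument, and the shared_ptr-typed option are assumptions based on the memtablerep header):

```cpp
#include "rocksdb/memtablerep.h"
#include "rocksdb/options.h"

rocksdb::Options MakeVectorRepOptions() {
  rocksdb::Options options;
  // Reserve space for ~100k entries up front; keys stay unsorted until an
  // iterator is requested (or, per the MarkImmutable idea above, potentially
  // until the rep becomes immutable).
  options.memtable_factory.reset(new rocksdb::VectorRepFactory(100000));
  return options;
}
```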
17dc128 Pull from https://reviews.facebook.net/D10917 Summary: Pull Mark's patch and slightly revise it. I revised another place in db_impl.cc with similar new formula. Test Plan: make all check. Also run "time ./db_bench --num=2500000000 --numdistinct=2200000000". It has run for 20+ hours and hasn't finished. Looks good so far: Installed stack trace handler for SIGILL SIGSEGV SIGBUS SIGABRT LevelDB: version 2.0 Date: Tue Aug 20 23:11:55 2013 CPU: 32 * Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz CPUCache: 20480 KB Keys: 16 bytes each Values: 100 bytes each (50 bytes after compression) Entries: 2500000000 RawSize: 276565.6 MB (estimated) FileSize: 157356.3 MB (estimated) Write rate limit: 0 Compression: snappy WARNING: Assertions are enabled; benchmarks unnecessarily slow ------------------------------------------------ DB path: [/tmp/leveldbtest-3088/dbbench] fillseq : 7202.000 micros/op 138 ops/sec; DB path: [/tmp/leveldbtest-3088/dbbench] fillsync : 7148.000 micros/op 139 ops/sec; (2500000 ops) DB path: [/tmp/leveldbtest-3088/dbbench] fillrandom : 7105.000 micros/op 140 ops/sec; DB path: [/tmp/leveldbtest-3088/dbbench] overwrite : 6930.000 micros/op 144 ops/sec; DB path: [/tmp/leveldbtest-3088/dbbench] readrandom : 1.020 micros/op 980507 ops/sec; (0 of 2500000000 found) DB path: [/tmp/leveldbtest-3088/dbbench] readrandom : 1.021 micros/op 979620 ops/sec; (0 of 2500000000 found) DB path: [/tmp/leveldbtest-3088/dbbench] readseq : 113.000 micros/op 8849 ops/sec; DB path: [/tmp/leveldbtest-3088/dbbench] readreverse : 102.000 micros/op 9803 ops/sec; DB path: [/tmp/leveldbtest-3088/dbbench] Created bg thread 0x7f0ac17f7700 compact : 111701.000 micros/op 8 ops/sec; DB path: [/tmp/leveldbtest-3088/dbbench] readrandom : 1.020 micros/op 980376 ops/sec; (0 of 2500000000 found) DB path: [/tmp/leveldbtest-3088/dbbench] readseq : 120.000 micros/op 8333 ops/sec; DB path: [/tmp/leveldbtest-3088/dbbench] readreverse : 29.000 micros/op 34482 ops/sec; DB path: [/tmp/leveldbtest-3088/dbbench] ... finished 618100000 ops Reviewers: MarkCallaghan, haobo, dhruba, chip Reviewed By: dhruba Differential Revision: https://reviews.facebook.net/D12441 23 August 2013, 05:37:13 UTC
94cf218 Revert "Prefix scan: db_bench and bug fixes" This reverts commit c2bd8f4824bda98db8699f1e08d6969cf21ef86f. 23 August 2013, 01:01:11 UTC
4c6dc7a Fix the gcov/lcov related issues Summary: Jenkin reports errors that: * Linking error on some machines. The error message shows it cannot find some gcov related symbols. * lcov error due to the version issues. Test Plan: run make in different platforms Reviewers: CC: Task ID: # Blame Rev: 23 August 2013, 00:01:06 UTC
c2bd8f4 Prefix scan: db_bench and bug fixes Summary: If use_prefix_filters is set and read_range>1, then the random seeks will set the prefix filter to be the prefix of the key which was randomly selected as the target. Still need to add statistics (perhaps in a separate diff). Test Plan: ./db_bench --benchmarks=fillseq,prefixscanrandom --num=10000000 --statistics=1 --use_prefix_blooms=1 --use_prefix_api=1 --bloom_bits=10 Reviewers: dhruba Reviewed By: dhruba CC: leveldb, haobo Differential Revision: https://reviews.facebook.net/D12273 22 August 2013, 23:06:50 UTC
60bf2b7 Add APIs to query SST file metadata and to delete specific SST files Summary: An api to query the level, key ranges, size etc for each SST file and an api to delete a specific file from the db and all associated state in the bookkeeping datastructures. Notes: Editing the manifest version does not release the obsolete files right away. However deleting the file directly will mess up the iterator. We may need a more aggressive/timely file deletion api. I have used std::unique_ptr - will switch to boost:: since this is external. thoughts? Unit test is fragile right now as it expects the compaction at certain levels. Test Plan: unittest Reviewers: dhruba, vamsi, emayanke CC: zshao, leveldb, haobo Task ID: # Blame Rev: 22 August 2013, 22:27:19 UTC
bc8eed1 Do not use relative paths in build system Summary: Previously, RocksDB's build scripts used relative pathnames like ./build_detect_platform. This can cause problems if the user uses CDPATH. Also, it just doesn't seem right to me. Test Plan: make clean make -j32 check Reviewers: MarkCallaghan, dhruba, kailiu Reviewed By: kailiu CC: leveldb Differential Revision: https://reviews.facebook.net/D12459 22 August 2013, 21:53:51 UTC
cb703c9 Allow WriteBatch::Handler to abort iteration Summary: Sometimes you don't need to iterate through the whole WriteBatch. This diff makes the Handler member functions return a bool that indicates whether to abort or not. If they return true, the iteration stops. One thing I just thought of is that this will break backwards-compatibility. Maybe it would be better to add a virtual member function WriteBatch::Handler::ShouldAbort() that returns false by default. Comments requested. I still have to add a new unit test for the abort code, but let's finalize the API first. Test Plan: make -j32 check Reviewers: dhruba, haobo, vamsi, emayanke Reviewed By: dhruba CC: leveldb Differential Revision: https://reviews.facebook.net/D12339 22 August 2013, 01:27:48 UTC
f9e2dec [RocksDB] Minor iterator cleanup Summary: Was going through the iterator related code, did some cleanup along the way. Basically replaced array with vector and adopted range based loop where applicable. Test Plan: make check; make valgrind_check Reviewers: dhruba, emayanke Reviewed By: emayanke CC: leveldb Differential Revision: https://reviews.facebook.net/D12435 21 August 2013, 23:54:48 UTC
404d63a Add TODO for DBToStackableDB function which doesn't work yet Summary: Test Plan: Reviewers: CC: Task ID: # Blame Rev: 21 August 2013, 04:33:53 UTC
af732c7 Add universal compaction to db_stress nightly build Summary: Most code change in this diff is code cleanup/rewrite. The logic changes include: (1) add universal compaction to db_crashtest2.py (2) randomly set --test_batches_snapshots to be 0 or 1 in db_crashtest2.py. Old codes always use 1. (3) use different tmp directory as db directory in different runs. I saw some intermittent errors in my local tests. Use of different tmp directory seems to be able to solve the issue. Test Plan: Have run "make crashtest" for multiple times. Also run "make all check" Reviewers: emayanke, dhruba, haobo Reviewed By: emayanke Differential Revision: https://reviews.facebook.net/D12369 21 August 2013, 00:37:49 UTC
b87dcae Made merge_operator a shared_ptr; and added TTL unit tests Test Plan: - make all check; - make release; - make stringappend_test; ./stringappend_test Reviewers: haobo, emayanke Reviewed By: haobo CC: leveldb, kailiu Differential Revision: https://reviews.facebook.net/D12381 20 August 2013, 20:35:28 UTC
3ab2792 Add default info to comment in leveldb/options.h for no_block_cache Summary: to let clients know Test Plan: visual 19 August 2013, 21:29:40 UTC
28e6fe5 Correct documentation for no_block_cache in leveldb/options.h Summary: false should have been true in comment Test Plan: visual 19 August 2013, 21:27:19 UTC
8a3547d API for getting archived log files Summary: Also expanded class LogFile to have startSequence and FileSize and exposed it publicly Test Plan: make all check Reviewers: dhruba, haobo Reviewed By: dhruba CC: leveldb Differential Revision: https://reviews.facebook.net/D12087 19 August 2013, 20:37:04 UTC
e134696 Merge operator fixes part 1. Summary: -Added null checks and revisions to DBIter::MergeValuesNewToOld() -Added DBIter test to stringappend_test -Major fix with Merge and TTL More plans for fixes later. Test Plan: -make clean; make stringappend_test -j 32; ./stringappend_test -make all check; Reviewers: haobo, emayanke, vamsi, dhruba Reviewed By: haobo CC: leveldb Differential Revision: https://reviews.facebook.net/D12315 19 August 2013, 18:42:47 UTC
1635ea0 Remove PLATFORM_SHARED_CFLAGS when compiling .o files Summary: The flag was accidentally introduced, and resulted in problems in the 3rd party release. Test Plan: run make on all platforms (when doing the 3rd party release) 16 August 2013, 23:39:23 UTC
fd2f47d Improve the build files to simplify the 3rd party release process Summary: * Added LIBNAME to enable configurable library name. * remove/check fPIC in linux platform from build_detect_platform Test Plan: make Reviewers: emayanke Differential Revision: https://reviews.facebook.net/D12321 16 August 2013, 19:05:27 UTC
387ac0f Expose statistic for sequence number and implement setTickerCount Summary: A statistic for sequence number is needed by wormhole. setTickerCount is demanded for this statistic. I can't simply recordTick(max_sequence) when the db recovers because the statistics object is owned by the client and may or may not be reset during reopen. E.g. the statistic is reset in mcrocksdb whereas it is not in db_stress. Therefore it is best to go with setTickerCount Test Plan: ./db_stress ... --statistics=1 and observed expected sequence number Reviewers: dhruba CC: leveldb Differential Revision: https://reviews.facebook.net/D12327 16 August 2013, 06:00:20 UTC
d1d3d15 Tiny fix to db_bench for make release. Summary: In release, "found variable assigned but not used anywhere". Changed it to work with assert. Someone accept this :). Test Plan: make release -j 32 Reviewers: haobo, dhruba, emayanke Reviewed By: haobo CC: leveldb Differential Revision: https://reviews.facebook.net/D12309 16 August 2013, 00:50:12 UTC
ad48c3c Benchmarking for Merge Operator Summary: Updated db_bench and utilities/merge_operators.h to allow for dynamic benchmarking of merge operators in db_bench. Added a new test (--benchmarks=mergerandom), which performs a bunch of random Merge() operations over random keys. Also added a "--merge_operator=" flag so that the tester can easily benchmark different merge operators. Currently supports the PutOperator and UInt64Add operator. Support for stringappend or list append may come later. Test Plan: 1. make db_bench 2. Test the PutOperator (simulating Put) as follows: ./db_bench --benchmarks=fillrandom,readrandom,updaterandom,readrandom,mergerandom,readrandom --merge_operator=put --threads=2 3. Test the UInt64AddOperator (simulating numeric addition) similarly: ./db_bench --value_size=8 --benchmarks=fillrandom,readrandom,updaterandom,readrandom,mergerandom,readrandom --merge_operator=uint64add --threads=2 Reviewers: haobo, dhruba, zshao, MarkCallaghan Reviewed By: haobo CC: leveldb Differential Revision: https://reviews.facebook.net/D11535 16 August 2013, 00:13:07 UTC
f3dea8c Commit the correct fix for Jenkin failure Summary: My last commit is not the correct one. Fix it in this diff. Test Plan: Reviewers: CC: Task ID: # Blame Rev: 15 August 2013, 21:57:44 UTC
159c19a Fix Jenkin build failure Summary: Previously I changed the line `source ./fbcode.gcc471.sh` to `source fbcode.gcc471.sh`. It works in my devbox but failed in some jenkin servers. I revert the previous code to make sure it works well under all circumstances. Test Plan: Test in the jenkin server as well as dev box. Reviewers: CC: Task ID: # Blame Rev: 15 August 2013, 21:49:31 UTC
457dcc6 Clean up the Makefile and the build scripts Summary: As Aaron suggested, there are quite some problems with our Makefile and scripts. So in this diff I did some cleanup for them and revise some part of the scripts/makefile to help people better understand some mysterious parts. Test Plan: Ran make in several modes; Ran the updated scripts. Reviewers: dhruba, emayanke, akushner Differential Revision: https://reviews.facebook.net/D12285 15 August 2013, 19:59:45 UTC
85d83a1 Update crashtests to match D12267 Summary: I changed the db_stress configs, but forgot to update the scripts using the old configs. Test Plan: 'make blackbox_crash_test' and 'make whitebox_crash_test' start running normally now (I haven't run them til the end, though). Reviewers: vamsi Reviewed By: vamsi CC: leveldb, dhruba Differential Revision: https://reviews.facebook.net/D12303 15 August 2013, 17:14:32 UTC
0a5afd1 Minor fix to current code Summary: Minor fix to current code, including: coding style, output format, comments. No major logic change. There are only 2 real changes, please see my inline comments. Test Plan: make all check Reviewers: haobo, dhruba, emayanke Differential Revision: https://reviews.facebook.net/D12297 15 August 2013, 06:03:57 UTC
7612d49 Add prefix scans to db_stress (and bug fix in prefix scan) Summary: Added support for prefix scans. Test Plan: ./db_stress --max_key=4096 --ops_per_thread=10000 Reviewers: dhruba, vamsi Reviewed By: vamsi CC: leveldb Differential Revision: https://reviews.facebook.net/D12267 14 August 2013, 23:58:36 UTC
0307c5f Implement log blobs Summary: This patch adds the ability for the user to add sequences of arbitrary data (blobs) to write batches. These blobs are saved to the log along with everything else in the write batch. You can add multiple blobs per WriteBatch and the ordering of blobs, puts, merges, and deletes is preserved. Blobs are not saved to SST files. RocksDB ignores blobs in every way except for writing them to the log. Before committing this patch, I need to add some test code. But I'm submitting it now so people can comment on the API. Test Plan: make -j32 check Reviewers: dhruba, haobo, vamsi Reviewed By: dhruba CC: leveldb Differential Revision: https://reviews.facebook.net/D12195 14 August 2013, 23:32:46 UTC
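Combined with the LogData behavior noted in the backup/TTL entries above, usage looks roughly like this; a minimal sketch, assuming the blob call on WriteBatch is named PutLogData:

```cpp
#include "rocksdb/db.h"
#include "rocksdb/write_batch.h"

// Write a batch whose blob is recorded in the WAL but never lands in an SST.
rocksdb::Status WriteWithBlob(rocksdb::DB* db) {
  rocksdb::WriteBatch batch;
  batch.Put("key1", "value1");
  // Blob for replication/bookkeeping: kept in order within the log, does not
  // consume a sequence number, and is ignored everywhere else.
  batch.PutLogData("replication-metadata-blob");
  batch.Delete("key2");
  return db->Write(rocksdb::WriteOptions(), &batch);
}
```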
d9dd2a1 [RocksDB] Expose thread local perf counter for low overhead, per call level performance statistics. Summary: As title. No locking/atomic is needed due to thread local. There is also no need to modify the existing client interface, in order to expose related counters. perf_context_test shows a simple example of retrieving the number of user key comparison done for each put and get call. More counters could be added later. Sample output ./perf_context_test 1000000 ==== Test PerfContextTest.KeyComparisonCount Inserting 1000000 key/value pairs ... total user key comparison get: 43446523 total user key comparison put: 8017877 max user key comparison get: 88939 avg user key comparison get:43 Basically, the current skiplist does well on average, but could perform poorly in extreme cases. Test Plan: run perf_context_test <total number of entries to put/get> Reviewers: dhruba Differential Revision: https://reviews.facebook.net/D12225 14 August 2013, 22:24:06 UTC
a8f47a4 Add options to dump. Summary: added options to Dump() I missed in D12027. I also ran a script to look for other missing options and found a couple which I added. Should we also print anything for "PrepareForBulkLoad", "memtable_factory", and "statistics"? Or should we leave those alone since it's not easy to print useful info for those? Test Plan: run anything and look at LOG file to make sure these are printed now. Reviewers: dhruba Reviewed By: dhruba CC: leveldb Differential Revision: https://reviews.facebook.net/D12219 14 August 2013, 16:06:10 UTC
f1bf169 Counter for merge failure Summary: With Merge returning bool, it can keep failing silently (e.g. while failing to fetch the timestamp in TTL). We need to detect this through a rocksdb counter which can get bumped whenever Merge returns false. This will also be super-useful for the mcrocksdb-counter service where Merge may fail. Added a counter NUMBER_MERGE_FAILURES and appropriately updated db/merge_helper.cc. I felt that it would be better to directly add counter-bumping in Merge as a default function of the MergeOperator class, but the user should not be aware of this, so this approach seems better to me. Test Plan: make all check Reviewers: dnicholas, haobo, dhruba, vamsi CC: leveldb Differential Revision: https://reviews.facebook.net/D12129 13 August 2013, 21:25:42 UTC
f5f1842 Prefix filters for scans (v4) Summary: Similar to v2 (db and table code understands prefixes), but use ReadOptions as in v3. Also, make the CreateFilter code faster and cleaner. Test Plan: make db_test; export LEVELDB_TESTS=PrefixScan; ./db_test Reviewers: dhruba Reviewed By: dhruba CC: haobo, emayanke Differential Revision: https://reviews.facebook.net/D12027 13 August 2013, 21:04:56 UTC
3b81df3 Separate compaction filter for each compaction Summary: If we have the same compaction filter for every compaction, the application cannot know about the different compaction processes. Later on, we can put more details in the compaction filter for the application to consume and use according to its needs. For example, in universal compaction we have a compaction process involving all the files while others don't involve all the files. Applications may want to collect some stats only during a full compaction. Test Plan: run existing unit tests Reviewers: haobo, dhruba Reviewed By: dhruba CC: xinyaohu, leveldb Differential Revision: https://reviews.facebook.net/D12057 13 August 2013, 17:56:20 UTC
9f6b8f0 Add automatic coverage report scripts Summary: Ultimate goals of the coverage report are: * Report the coverage for all files (done in this diff) * Report the coverage for recently updated files (not fully finished) * Report is available in html form (done in this diff, but need some extra work to integrate it in Jenkin) Task link: https://our.intern.facebook.com/intern/tasks/?s=1154818042&t=2604914 Test Plan: Ran: coverage/coverage_test.sh The sample output can be found here: https://phabricator.fb.com/P2433892 Reviewers: dhruba, emayanke CC: leveldb Differential Revision: https://reviews.facebook.net/D11943 13 August 2013, 06:53:37 UTC
03bd446 Merge branch 'performance' of github.com:facebook/rocksdb into performance 12 August 2013, 17:31:21 UTC
f3967a5 Merge remote-tracking branch 'origin' into performance 12 August 2013, 16:58:50 UTC
93d77a2 Universal Compaction should keep DeleteMarkers unless it is the earliest file. Summary: The pre-existing code was purging a DeleteMarker if that key did not exist in deeper levels. But in the Universal Compaction Style, all files are in Level0. For compaction runs that did not include the earliest file, we were erroneously purging the DeleteMarkers. The fix is to purge DeleteMarkers only if the compaction includes the earliest file. Test Plan: DBTest.Randomized triggers this code path. Differential Revision: https://reviews.facebook.net/D12081 09 August 2013, 21:03:57 UTC
8ae905e Fix unit tests for universal compaction (step 2) Summary: Continue fixing existing unit tests for universal compaction. I have tried to apply universal compaction to all unit tests that haven't called ChangeOptions(). I left a few which are either apparently not applicable to universal compaction (because they check files/keys/values at level 1 or above levels), or apparently not related to compaction (e.g., open a file, open a db). I also added a new unit test for universal compaction. The good news is I didn't see any bugs during this round. Test Plan: Ran "make all check" yesterday. Have rebased and am rerunning. Reviewers: haobo, dhruba Differential Revision: https://reviews.facebook.net/D12135 09 August 2013, 20:35:44 UTC
3a3b1c3 [RocksDB] Improve manifest dump to print internal keys in hex for version edits. Summary: Currently, VersionEdit::DebugString always displays internal keys in the original ascii format. This could cause the manifest dump to be truncated if internal keys contain special characters (like null). Also added an option --input_key_hex for ldb idump to indicate that the passed-in user keys are in hex. Test Plan: run ldb manifest_dump Reviewers: dhruba, emayanke CC: leveldb Differential Revision: https://reviews.facebook.net/D12111 08 August 2013, 23:19:01 UTC
58a0ae0 [RocksDB] Improve sst_dump to take user key range Summary: The ability to dump internal keys associated with certain user keys, directly from sst files, is very useful for diagnosis. Will incorporate it directly into ldb later. Test Plan: run it Reviewers: dhruba CC: leveldb Differential Revision: https://reviews.facebook.net/D12075 08 August 2013, 22:31:12 UTC