https://github.com/facebook/rocksdb

fe63723 [FB Internal] Point to the latest tool chain. 31 October 2019, 22:41:33 UTC
66e57a5 fb internal tool chain upgrade to gcc-5 06 June 2018, 23:08:03 UTC
98368ee Fix the previous commit 29 May 2018, 17:35:40 UTC
701e9db Set env_options.allow_mmap_writes = false if not from DB Options 29 May 2018, 17:33:05 UTC
677ca7d LDB to use allow_mmap_writes = false 29 May 2018, 17:23:12 UTC
9810827 Missed a file in the previous commit 18 July 2017, 01:21:52 UTC
f854a85 [FB Internal] Use gcc-5 17 July 2017, 21:57:11 UTC
9761a06 fb internal: Allow compiling with GCC 4.8.1. Some errors had to be disabled to make the build pass. 13 October 2016, 21:57:05 UTC
af732c7 Add universal compaction to db_stress nightly build Summary: Most of the code change in this diff is cleanup/rewrite. The logic changes include: (1) add universal compaction to db_crashtest2.py (2) randomly set --test_batches_snapshots to be 0 or 1 in db_crashtest2.py. The old code always used 1. (3) use a different tmp directory as the db directory in different runs. I saw some intermittent errors in my local tests. Using a different tmp directory seems to solve the issue. Test Plan: Have run "make crashtest" multiple times. Also ran "make all check" Reviewers: emayanke, dhruba, haobo Reviewed By: emayanke Differential Revision: https://reviews.facebook.net/D12369 21 August 2013, 00:37:49 UTC
b87dcae Made merge_operator a shared_ptr and added TTL unit tests Test Plan: - make all check; - make release; - make stringappend_test; ./stringappend_test Reviewers: haobo, emayanke Reviewed By: haobo CC: leveldb, kailiu Differential Revision: https://reviews.facebook.net/D12381 20 August 2013, 20:35:28 UTC
3ab2792 Add default info to comment in leveldb/options.h for no_block_cache Summary: to let clients know Test Plan: visual 19 August 2013, 21:29:40 UTC
28e6fe5 Correct documentation for no_block_cache in leveldb/options.h Summary: false should have been true in the comment Test Plan: visual 19 August 2013, 21:27:19 UTC
8a3547d API for getting archived log files Summary: Also expanded class LogFile to have startSequence and FileSize and exposed them publicly Test Plan: make all check Reviewers: dhruba, haobo Reviewed By: dhruba CC: leveldb Differential Revision: https://reviews.facebook.net/D12087 19 August 2013, 20:37:04 UTC
e134696 Merge operator fixes part 1. Summary: -Added null checks and revisions to DBIter::MergeValuesNewToOld() -Added DBIter test to stringappend_test -Major fix with Merge and TTL More plans for fixes later. Test Plan: -make clean; make stringappend_test -j 32; ./stringappend_test -make all check; Reviewers: haobo, emayanke, vamsi, dhruba Reviewed By: haobo CC: leveldb Differential Revision: https://reviews.facebook.net/D12315 19 August 2013, 18:42:47 UTC
1635ea0 Remove PLATFORM_SHARED_CFLAGS when compiling .o files Summary: The flags is accidentally introduced, and resulted in problems in 3rd party release. Test Plan: run make in all platforms (when doing the 3rd party release) 16 August 2013, 23:39:23 UTC
fd2f47d Improve the build files to simplify the 3rd party release process Summary: * Added LIBNAME to enable configurable library name. * remove/check fPIC in linux platform from build_detect_platform Test Plan: make Reviewers: emayanke Differential Revision: https://reviews.facebook.net/D12321 16 August 2013, 19:05:27 UTC
387ac0f Expose statistic for sequence number and implement setTickerCount Summary: statistic for sequence number is needed by wormhole. setTickerCount is demanded for this statistic. I can't simply recordTick(max_sequence) when db recovers because the statistics object is owned by the client and may/may not be reset during reopen. E.g. statistics are reset in mcrocksdb whereas they are not in db_stress. Therefore it is best to go with setTickerCount Test Plan: ./db_stress ... --statistics=1 and observed expected sequence number Reviewers: dhruba CC: leveldb Differential Revision: https://reviews.facebook.net/D12327 16 August 2013, 06:00:20 UTC
d1d3d15 Tiny fix to db_bench for make release. Summary: In release, "found variable assigned but not used anywhere". Changed it to work with assert. Someone accept this :). Test Plan: make release -j 32 Reviewers: haobo, dhruba, emayanke Reviewed By: haobo CC: leveldb Differential Revision: https://reviews.facebook.net/D12309 16 August 2013, 00:50:12 UTC
ad48c3c Benchmarking for Merge Operator Summary: Updated db_bench and utilities/merge_operators.h to allow for dynamic benchmarking of merge operators in db_bench. Added a new test (--benchmarks=mergerandom), which performs a bunch of random Merge() operations over random keys. Also added a "--merge_operator=" flag so that the tester can easily benchmark different merge operators. Currently supports the PutOperator and UInt64Add operator. Support for stringappend or list append may come later. Test Plan: 1. make db_bench 2. Test the PutOperator (simulating Put) as follows: ./db_bench --benchmarks=fillrandom,readrandom,updaterandom,readrandom,mergerandom,readrandom --merge_operator=put --threads=2 3. Test the UInt64AddOperator (simulating numeric addition) similarly: ./db_bench --value_size=8 --benchmarks=fillrandom,readrandom,updaterandom,readrandom,mergerandom,readrandom --merge_operator=uint64add --threads=2 Reviewers: haobo, dhruba, zshao, MarkCallaghan Reviewed By: haobo CC: leveldb Differential Revision: https://reviews.facebook.net/D11535 16 August 2013, 00:13:07 UTC
f3dea8c Commit the correct fix for the Jenkins failure Summary: My last commit was not the correct one. Fix it in this diff. 15 August 2013, 21:57:44 UTC
159c19a Fix Jenkins build failure Summary: Previously I changed the line `source ./fbcode.gcc471.sh` to `source fbcode.gcc471.sh`. It works in my devbox but failed on some Jenkins servers. I reverted the previous change to make sure it works well under all circumstances. Test Plan: Tested on the Jenkins server as well as the dev box. 15 August 2013, 21:49:31 UTC
457dcc6 Clean up the Makefile and the build scripts Summary: As Aaron suggested, there are quite a few problems with our Makefile and scripts. So in this diff I did some cleanup for them and revised parts of the scripts/Makefile to help people better understand some mysterious parts. Test Plan: Ran make in several modes; ran the updated scripts. Reviewers: dhruba, emayanke, akushner Differential Revision: https://reviews.facebook.net/D12285 15 August 2013, 19:59:45 UTC
85d83a1 Update crashtests to match D12267 Summary: I changed the db_stress configs, but forgot to update the scripts using the old configs. Test Plan: 'make blackbox_crash_test' and 'make whitebox_crash_test' start running normally now (I haven't run them til the end, though). Reviewers: vamsi Reviewed By: vamsi CC: leveldb, dhruba Differential Revision: https://reviews.facebook.net/D12303 15 August 2013, 17:14:32 UTC
0a5afd1 Minor fixes to current code Summary: Minor fixes to the current code, including: coding style, output format, comments. No major logic change. There are only 2 real changes, please see my inline comments. Test Plan: make all check Reviewers: haobo, dhruba, emayanke Differential Revision: https://reviews.facebook.net/D12297 15 August 2013, 06:03:57 UTC
7612d49 Add prefix scans to db_stress (and bug fix in prefix scan) Summary: Added support for prefix scans. Test Plan: ./db_stress --max_key=4096 --ops_per_thread=10000 Reviewers: dhruba, vamsi Reviewed By: vamsi CC: leveldb Differential Revision: https://reviews.facebook.net/D12267 14 August 2013, 23:58:36 UTC
0307c5f Implement log blobs Summary: This patch adds the ability for the user to add sequences of arbitrary data (blobs) to write batches. These blobs are saved to the log along with everything else in the write batch. You can add multiple blobs per WriteBatch and the ordering of blobs, puts, merges, and deletes is preserved. Blobs are not saved to SST files. RocksDB ignores blobs in every way except for writing them to the log. Before committing this patch, I need to add some test code. But I'm submitting it now so people can comment on the API. Test Plan: make -j32 check Reviewers: dhruba, haobo, vamsi Reviewed By: dhruba CC: leveldb Differential Revision: https://reviews.facebook.net/D12195 14 August 2013, 23:32:46 UTC
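The blob semantics described in this commit can be sketched with a toy write batch (standalone and illustrative only; `ToyWriteBatch`, `Record`, and `MemtableOps` are hypothetical names, not RocksDB's actual types — the real API entry point is `WriteBatch::PutLogData`):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical record kinds; kBlob entries stay in the batch (and hence
// reach the log on replay) but are skipped when applying to the memtable.
enum class RecordType { kPut, kDelete, kBlob };

struct Record {
  RecordType type;
  std::string key;    // empty for blobs
  std::string value;  // blob payload for kBlob
};

struct ToyWriteBatch {
  std::vector<Record> records;  // insertion order is preserved

  void Put(std::string k, std::string v) {
    records.push_back({RecordType::kPut, std::move(k), std::move(v)});
  }
  void PutLogData(std::string blob) {
    records.push_back({RecordType::kBlob, "", std::move(blob)});
  }
  // Number of records a memtable insert would actually consume:
  // blobs are log-only, so they are excluded.
  size_t MemtableOps() const {
    size_t n = 0;
    for (const auto& r : records)
      if (r.type != RecordType::kBlob) ++n;
    return n;
  }
};
```

A replication system might use such blobs to tag each batch with metadata that travels through the WAL without affecting the stored data.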
d9dd2a1 [RocksDB] Expose thread local perf counter for low overhead, per call level performance statistics. Summary: As title. No locking/atomic is needed due to thread local. There is also no need to modify the existing client interface, in order to expose related counters. perf_context_test shows a simple example of retrieving the number of user key comparison done for each put and get call. More counters could be added later. Sample output ./perf_context_test 1000000 ==== Test PerfContextTest.KeyComparisonCount Inserting 1000000 key/value pairs ... total user key comparison get: 43446523 total user key comparison put: 8017877 max user key comparison get: 88939 avg user key comparison get:43 Basically, the current skiplist does well on average, but could perform poorly in extreme cases. Test Plan: run perf_context_test <total number of entries to put/get> Reviewers: dhruba Differential Revision: https://reviews.facebook.net/D12225 14 August 2013, 22:24:06 UTC
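The "no locking/atomic needed" claim rests on thread-local storage: each thread bumps its own counter instance, so there is never a data race. A minimal sketch (the struct and function names below mirror the commit's description but are illustrative, not the exact RocksDB API):

```cpp
#include <cassert>
#include <thread>

// Per-thread performance counters; thread_local gives every thread its
// own independent copy, so increments need no synchronization.
struct PerfContext {
  unsigned long user_key_comparison_count = 0;
};
thread_local PerfContext perf_context;

// A comparison wrapper that records how many times it was invoked
// on the calling thread.
bool CountedCompare(int a, int b) {
  ++perf_context.user_key_comparison_count;
  return a < b;
}
```

Because the counter is per-thread, a caller reads its own `perf_context` immediately after an operation to attribute the work to that specific call, with essentially zero overhead.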
a8f47a4 Add options to dump. Summary: added options to Dump() I missed in D12027. I also ran a script to look for other missing options and found a couple which I added. Should we also print anything for "PrepareForBulkLoad", "memtable_factory", and "statistics"? Or should we leave those alone since it's not easy to print useful info for those? Test Plan: run anything and look at LOG file to make sure these are printed now. Reviewers: dhruba Reviewed By: dhruba CC: leveldb Differential Revision: https://reviews.facebook.net/D12219 14 August 2013, 16:06:10 UTC
f1bf169 Counter for merge failure Summary: With Merge returning bool, it can keep failing silently (e.g., while failing to fetch the timestamp in TTL). We need to detect this through a rocksdb counter which can get bumped whenever Merge returns false. This will also be super-useful for the mcrocksdb-counter service where Merge may fail. Added a counter NUMBER_MERGE_FAILURES and appropriately updated db/merge_helper.cc I felt that it would be better to directly add counter-bumping in Merge as a default function of the MergeOperator class, but the user should not be aware of this, so this approach seems better to me. Test Plan: make all check Reviewers: dnicholas, haobo, dhruba, vamsi CC: leveldb Differential Revision: https://reviews.facebook.net/D12129 13 August 2013, 21:25:42 UTC
f5f1842 Prefix filters for scans (v4) Summary: Similar to v2 (db and table code understands prefixes), but use ReadOptions as in v3. Also, make the CreateFilter code faster and cleaner. Test Plan: make db_test; export LEVELDB_TESTS=PrefixScan; ./db_test Reviewers: dhruba Reviewed By: dhruba CC: haobo, emayanke Differential Revision: https://reviews.facebook.net/D12027 13 August 2013, 21:04:56 UTC
3b81df3 Separate compaction filter for each compaction Summary: If we have the same compaction filter for every compaction, the application cannot distinguish between the different compaction processes. Later on, we can put more details in the compaction filter for the application to consume and use according to its needs. For example, in universal compaction we have compaction processes involving all the files while others don't involve all the files. Applications may want to collect some stats only during full compaction. Test Plan: run existing unit tests Reviewers: haobo, dhruba Reviewed By: dhruba CC: xinyaohu, leveldb Differential Revision: https://reviews.facebook.net/D12057 13 August 2013, 17:56:20 UTC
9f6b8f0 Add automatic coverage report scripts Summary: Ultimate goals of the coverage report are: * Report the coverage for all files (done in this diff) * Report the coverage for recently updated files (not fully finished) * Report is available in html form (done in this diff, but need some extra work to integrate it in Jenkin) Task link: https://our.intern.facebook.com/intern/tasks/?s=1154818042&t=2604914 Test Plan: Ran: coverage/coverage_test.sh The sample output can be found here: https://phabricator.fb.com/P2433892 Reviewers: dhruba, emayanke CC: leveldb Differential Revision: https://reviews.facebook.net/D11943 13 August 2013, 06:53:37 UTC
03bd446 Merge branch 'performance' of github.com:facebook/rocksdb into performance 12 August 2013, 17:31:21 UTC
f3967a5 Merge remote-tracking branch 'origin' into performance 12 August 2013, 16:58:50 UTC
93d77a2 Universal Compaction should keep DeleteMarkers unless it is the earliest file. Summary: The pre-existing code was purging a DeleteMarker if that key did not exist in deeper levels. But in the Universal Compaction Style, all files are in Level0. For compaction runs that did not include the earliest file, we were erroneously purging the DeleteMarkers. The fix is to purge DeleteMarkers only if the compaction includes the earliest file. Test Plan: DBTest.Randomized triggers this code path. Differential Revision: https://reviews.facebook.net/D12081 09 August 2013, 21:03:57 UTC
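The fixed rule can be stated as a tiny predicate (a sketch with hypothetical names, modelling level-0 files as a newest-to-oldest list from which a compaction picks a contiguous run):

```cpp
#include <cstddef>

// In universal compaction all files sit in level 0, ordered newest to
// oldest. A compaction run covers files [begin, end). A DeleteMarker in
// the run may be dropped only if the run reaches the oldest file
// (end == total_files); otherwise the deleted key may still live in an
// older file outside the run, and dropping the marker would wrongly
// resurrect it.
bool CanPurgeDeleteMarker(std::size_t run_end, std::size_t total_files) {
  return run_end == total_files;
}
```

The pre-fix bug corresponds to dropping markers even when `run_end < total_files`.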
8ae905e Fix unit tests for universal compaction (step 2) Summary: Continue fixing existing unit tests for universal compaction. I have tried to apply universal compaction to all unit tests that haven't called ChangeOptions(). I left a few which are either apparently not applicable to universal compaction (because they check files/keys/values at level 1 or above), or apparently not related to compaction (e.g., open a file, open a db). I also add a new unit test for universal compaction. Good news is I didn't see any bugs during this round. Test Plan: Ran "make all check" yesterday. Have rebased and am rerunning Reviewers: haobo, dhruba Differential Revision: https://reviews.facebook.net/D12135 09 August 2013, 20:35:44 UTC
3a3b1c3 [RocksDB] Improve manifest dump to print internal keys in hex for version edits. Summary: Currently, VersionEdit::DebugString always displays internal keys in the original ascii format. This could cause manifest dump to be truncated if internal keys contain special characters (like null). Also added an option --input_key_hex for ldb idump to indicate that the passed-in user keys are in hex. Test Plan: run ldb manifest_dump Reviewers: dhruba, emayanke CC: leveldb Differential Revision: https://reviews.facebook.net/D12111 08 August 2013, 23:19:01 UTC
58a0ae0 [RocksDB] Improve sst_dump to take user key range Summary: The ability to dump internal keys associated with certain user keys, directly from sst files, is very useful for diagnosis. Will incorporate it directly into ldb later. Test Plan: run it Reviewers: dhruba CC: leveldb Differential Revision: https://reviews.facebook.net/D12075 08 August 2013, 22:31:12 UTC
17b8f78 Fix unit tests/bugs for universal compaction (first step) Summary: This is the first step to fix unit tests and bugs for universal compaction. I added the universal compaction option to ChangeOptions(), and fixed all unit tests calling ChangeOptions(). Some of these tests obviously assume more than 1 level and check file number/values in level 1 or above. I set kSkipUniversalCompaction for these tests. The major bug I found is that manual compaction with universal compaction never stops. I have put in a fix for it. I have also set universal compaction as the default compaction and found at least 20+ unit tests failing. I haven't looked into the details. The next step is to check all unit tests without calling ChangeOptions(). Test Plan: make all check Reviewers: dhruba, haobo Differential Revision: https://reviews.facebook.net/D12051 07 August 2013, 21:05:44 UTC
f5fa26b Merge branch 'performance' of github.com:facebook/rocksdb into performance Conflicts: db/builder.cc db/db_impl.cc db/version_set.cc include/leveldb/statistics.h 07 August 2013, 18:58:06 UTC
678d3a1 Updating readme to 2.1 06 August 2013, 21:43:16 UTC
68a4cdf Build fix with merge_test and ttl 06 August 2013, 18:42:21 UTC
e37eb21 minor change to fix build 06 August 2013, 18:02:19 UTC
c2d7826 [RocksDB] [MergeOperator] The new Merge Interface! Uses merge sequences. Summary: Here are the major changes to the Merge Interface. It has been expanded to handle cases where the MergeOperator is not associative. It does so by stacking up merge operations while scanning through the key history (i.e.: during Get() or Compaction), until a valid Put/Delete/end-of-history is encountered; it then applies all of the merge operations in the correct sequence starting with the base/sentinel value. I have also introduced an "AssociativeMerge" function which allows the user to take advantage of associative merge operations (such as in the case of counters). The implementation will always attempt to merge the operations/operands themselves together when they are encountered, and will resort to the "stacking" method if and only if the "associative-merge" fails. This implementation is conjectured to allow MergeOperator to handle the general case, while still providing the user with the ability to take advantage of certain efficiencies in their own merge-operator / data-structure. NOTE: This is a preliminary diff. This must still go through a lot of review, revision, and testing. Feedback welcome! Test Plan: -This is a preliminary diff. I have only just begun testing/debugging it. -I will be testing this with the existing MergeOperator use-cases and unit-tests (counters, string-append, and redis-lists) -I will be "desk-checking" and walking through the code with the help of gdb. -I will find a way of stress-testing the new interface / implementation using db_bench, db_test, merge_test, and/or db_stress. -I will ensure that my tests cover all cases: Get-Memtable, Get-Immutable-Memtable, Get-from-Disk, Iterator-Range-Scan, Flush-Memtable-to-L0, Compaction-L0-L1, Compaction-Ln-L(n+1), Put/Delete found, Put/Delete not-found, end-of-history, end-of-file, etc. -A lot of feedback from the reviewers. Reviewers: haobo, dhruba, zshao, emayanke Reviewed By: haobo CC: leveldb Differential Revision: https://reviews.facebook.net/D11499 06 August 2013, 03:14:32 UTC
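The "stacking" resolution described in this commit can be sketched as follows (a standalone toy, not RocksDB's implementation; the history layout and `ResolveMerge` are hypothetical, and string concatenation stands in for any user MergeOperator):

```cpp
#include <string>
#include <vector>

enum class OpType { kPut, kDelete, kMerge };

// One write in the key's history; the vector is ordered newest first,
// as encountered during a Get() or compaction scan.
struct HistoryEntry {
  OpType type;
  std::string operand;
};

// Scan newest-to-oldest, stacking merge operands until a Put, Delete,
// or end-of-history supplies the base/sentinel value; then apply the
// stacked operations oldest-first on top of that base.
std::string ResolveMerge(const std::vector<HistoryEntry>& history) {
  std::vector<std::string> stack;  // operands, newest pushed first
  std::string base;                // empty sentinel for Delete / no history
  for (const auto& e : history) {
    if (e.type == OpType::kMerge) {
      stack.push_back(e.operand);
      continue;
    }
    if (e.type == OpType::kPut) base = e.operand;
    break;  // Put or Delete terminates the backward scan
  }
  std::string result = base;
  // Reverse iteration replays the operands in original write order.
  for (auto it = stack.rbegin(); it != stack.rend(); ++it) result += *it;
  return result;
}
```

The point of the stacking is that operands are never combined with each other until the base is known, which is exactly what a non-associative operator requires.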
73f9518 Fix build Summary: remove reference Test Plan: make OPT=-g 06 August 2013, 02:22:12 UTC
8e792e5 Add soft_rate_limit stats Summary: This diff adds histogram stats for soft_rate_limit stalls. It also renames the old rate_limit stats to hard_rate_limit. Test Plan: make -j32 check Reviewers: dhruba, haobo, MarkCallaghan Reviewed By: dhruba CC: leveldb Differential Revision: https://reviews.facebook.net/D12021 06 August 2013, 01:45:23 UTC
1d7b476 Expose base db object from ttl wrapper Summary: rocksdb replication will need this when writing value+TS from master to slave 'as is' Test Plan: make Reviewers: dhruba, vamsi, haobo Reviewed By: dhruba CC: leveldb Differential Revision: https://reviews.facebook.net/D11919 06 August 2013, 01:44:14 UTC
1036537 Add soft and hard rate limit support Summary: This diff adds support for both soft and hard rate limiting. The following changes are included: 1) Options.rate_limit is renamed to Options.hard_rate_limit. 2) Options.rate_limit_delay_milliseconds is renamed to Options.rate_limit_delay_max_milliseconds. 3) Options.soft_rate_limit is added. 4) If the maximum compaction score is > hard_rate_limit and rate_limit_delay_max_milliseconds == 0, then writes are delayed by 1 ms at a time until the max compaction score falls below hard_rate_limit. 5) If the max compaction score is > soft_rate_limit but <= hard_rate_limit, then writes are delayed by 0-1 ms depending on how close we are to hard_rate_limit. 6) Users can disable (4) by setting hard_rate_limit = 0. They can add a limit to the maximum amount of time waited by setting rate_limit_delay_max_milliseconds > 0. Thus, the old behavior can be preserved by setting soft_rate_limit = 0, which is the default. Test Plan: make -j32 check ./db_stress Reviewers: dhruba, haobo, MarkCallaghan Reviewed By: dhruba CC: leveldb Differential Revision: https://reviews.facebook.net/D12003 05 August 2013, 22:43:49 UTC
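Points (4)-(6) above amount to a small delay policy, which can be sketched like this (hypothetical function and signature; RocksDB's actual logic lives in the write path, this is just the arithmetic):

```cpp
// Per-write delay in milliseconds given the max compaction score:
//   - 0 below the soft limit (or if the soft limit is disabled, i.e. 0)
//   - a fraction of 1 ms between soft and hard limits, growing linearly
//     with how close the score is to the hard limit
//   - the full 1 ms once the hard limit is exceeded
// Setting hard_limit = 0 disables the hard slowdown as well.
double RateLimitDelayMs(double score, double soft_limit, double hard_limit) {
  if (hard_limit > 0 && score > hard_limit) return 1.0;
  if (soft_limit > 0 && hard_limit > soft_limit && score > soft_limit) {
    return (score - soft_limit) / (hard_limit - soft_limit);
  }
  return 0.0;
}
```

With `soft_rate_limit = 0` (the default) the middle branch never fires, preserving the old behavior as the commit notes.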
cacd812 Support user's compaction filter in TTL logic Summary: TTL uses a compaction filter to purge key-values and required the user to not pass one. This diff makes it accommodate the user's compaction filter. Added a test to ttl_test Test Plan: make; ./ttl_test Reviewers: dhruba, haobo, vamsi Reviewed By: dhruba CC: leveldb Differential Revision: https://reviews.facebook.net/D11973 05 August 2013, 18:28:01 UTC
7c9093a Changing Makefile to have rocksdb instead of leveldb in binary-names Summary: did a find-replace Test Plan: make Reviewers: dhruba, haobo Reviewed By: dhruba CC: leveldb Differential Revision: https://reviews.facebook.net/D11979 05 August 2013, 18:14:01 UTC
711a30c Merge branch 'master' into performance Conflicts: include/leveldb/options.h include/leveldb/statistics.h util/options.cc 02 August 2013, 17:22:08 UTC
c42485f Merge operator for ttl Summary: Implemented a TtlMergeOperator class which inherits from MergeOperator and is TTL aware. It strips out timestamp from existing_value and attaches timestamp to new_value, calling user-provided-Merge in between. Test Plan: make all check Reviewers: haobo, dhruba Reviewed By: haobo CC: leveldb Differential Revision: https://reviews.facebook.net/D11775 01 August 2013, 16:27:17 UTC
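The TTL wrapping described above (strip the timestamp, run the user merge, re-attach a timestamp) can be sketched in a few lines. Everything here is illustrative: the 4-byte suffix width, the names, and the stand-in user merge (string append) are assumptions, not the real TtlMergeOperator:

```cpp
#include <cassert>
#include <string>

// Stored values carry a fixed-width timestamp suffix.
constexpr std::size_t kTsLen = 4;

// Strip the timestamp from the existing value, apply the user-provided
// merge (string append here), then attach a fresh timestamp so the
// merged result is again a valid TTL-encoded value.
std::string TtlMerge(const std::string& existing_with_ts,
                     const std::string& operand,
                     const std::string& new_ts) {
  assert(existing_with_ts.size() >= kTsLen);
  assert(new_ts.size() == kTsLen);
  std::string stripped =
      existing_with_ts.substr(0, existing_with_ts.size() - kTsLen);
  std::string merged = stripped + operand;  // user merge goes here
  return merged + new_ts;
}
```

The key invariant is that the user merge only ever sees plain values, while everything read from or written to storage carries the timestamp.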
59d0b02 Expand KeyMayExist to return the proper value if it can be found in memory and also check block_cache Summary: Removed KeyMayExistImpl because KeyMayExist demanded Get-like semantics now. Removed no_io from memtable and imm because we need the proper value now and shouldn't just stop when we see Merge in the memtable. Added checks to block_cache. Updated documentation and unit-test Test Plan: make all check;db_stress for 1 hour Reviewers: dhruba, haobo Reviewed By: dhruba CC: leveldb Differential Revision: https://reviews.facebook.net/D11853 01 August 2013, 16:07:46 UTC
9700677 Slow down writes gradually rather than suddenly Summary: Currently, when a certain number of level0 files (level0_slowdown_writes_trigger) are present, RocksDB will slow down each write by 1ms. There is a second limit of level0 files at which RocksDB will stop writes altogether (level0_stop_writes_trigger). This patch enables the user to supply a third parameter specifying the number of files at which Rocks will start slowing down writes (level0_start_slowdown_writes). When this number is reached, Rocks will slow down writes as a quadratic function of level0_slowdown_writes_trigger - num_level0_files. For some workloads, this improves latency and throughput. I will post some stats momentarily in https://our.intern.facebook.com/intern/tasks/?t=2613384. Test Plan: make -j32 check ./db_stress ./db_bench Reviewers: dhruba, haobo, MarkCallaghan, xjin Reviewed By: xjin CC: leveldb, xjin, zshao Differential Revision: https://reviews.facebook.net/D11859 31 July 2013, 23:20:48 UTC
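The gradual slowdown can be sketched as a delay curve (parameter names are hypothetical and the microsecond scale is an assumption; the commit describes a quadratic ramp between a start threshold and the full-slowdown trigger):

```cpp
// No delay below `start_slowdown` level-0 files; the full 1000 us delay
// at or past `slowdown_trigger`; in between, the delay grows as the
// square of the fraction of the window covered, so the first few extra
// files cost little and the penalty ramps up smoothly instead of
// jumping from 0 to 1 ms at a single threshold.
unsigned SlowdownDelayMicros(int num_l0_files, int start_slowdown,
                             int slowdown_trigger) {
  if (num_l0_files <= start_slowdown) return 0;
  if (num_l0_files >= slowdown_trigger) return 1000;
  double frac = double(num_l0_files - start_slowdown) /
                double(slowdown_trigger - start_slowdown);
  return static_cast<unsigned>(1000.0 * frac * frac);
}
```

The quadratic shape is the point: latency degrades progressively under load rather than all at once.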
0f0a24e Make arena block size configurable Summary: Add an option for arena block size, default value 4096 bytes. Arena will allocate blocks of that size. I am not sure about passing the parameter to the skiplist in the new virtualized framework, though I talked to Jim a bit. So add Jim as reviewer. Test Plan: new unit test, I am running db_test. For passing the parameter from the configured option to Arena, I tried tests like: TEST(DBTest, Arena_Option) { std::string dbname = test::TmpDir() + "/db_arena_option_test"; DestroyDB(dbname, Options()); DB* db = nullptr; Options opts; opts.create_if_missing = true; opts.arena_block_size = 1000000; // tested 99, 999999 Status s = DB::Open(opts, dbname, &db); db->Put(WriteOptions(), "a", "123"); } and printed some debug info. The results look good. Any suggestion for such a unit-test? Reviewers: haobo, dhruba, emayanke, jpaton Reviewed By: dhruba CC: leveldb, zshao Differential Revision: https://reviews.facebook.net/D11799 31 July 2013, 19:42:23 UTC
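What a block-sized arena does can be shown with a toy (a sketch only; RocksDB's real Arena also special-cases allocations larger than a block, which this omits):

```cpp
#include <cassert>
#include <cstddef>
#include <memory>
#include <vector>

// Bump-pointer arena: allocations are carved out of fixed-size blocks,
// and a fresh block is grabbed whenever the current one can't satisfy
// a request. The configurable block size trades per-block overhead
// against wasted tail space.
class ToyArena {
 public:
  explicit ToyArena(std::size_t block_size) : block_size_(block_size) {}

  char* Allocate(std::size_t bytes) {
    assert(bytes <= block_size_);  // oversized requests not handled here
    if (blocks_.empty() || used_ + bytes > block_size_) {
      blocks_.push_back(std::make_unique<char[]>(block_size_));
      used_ = 0;
    }
    char* p = blocks_.back().get() + used_;
    used_ += bytes;
    return p;
  }

  std::size_t NumBlocks() const { return blocks_.size(); }

 private:
  std::size_t block_size_;
  std::size_t used_ = 0;
  std::vector<std::unique_ptr<char[]>> blocks_;
};
```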
542cc10 Fix README contents. Summary: Fix README contents. 30 July 2013, 15:30:13 UTC
6db52b5 Don't use redundant Env::NowMicros() calls Summary: After my patch for stall histograms, there are redundant calls to NowMicros() by both the stop watches and DBImpl::MakeRoomForWrites. So I removed the redundant calls such that the information is obtained from the stopwatch. Test Plan: make clean make -j32 check Reviewers: dhruba, haobo, MarkCallaghan Reviewed By: dhruba CC: leveldb Differential Revision: https://reviews.facebook.net/D11883 29 July 2013, 22:46:36 UTC
abc90b0 Use specific DB name in merge_test Summary: Currently, merge_test uses /tmp/testdb for the test database. It should really use something more specific to merge_test. Most of the other tests use test::TmpDir() + "/<test name>db". This patch implements such behavior for merge_test; it makes merge_test use test::TmpDir() + "/merge_testdb" Test Plan: make clean make -j32 merge_test ./merge_test Reviewers: dhruba, haobo Reviewed By: dhruba CC: leveldb Differential Revision: https://reviews.facebook.net/D11877 29 July 2013, 20:26:38 UTC
18afff2 Add stall counts to statistics Summary: Previously, statistics are kept on how much time is spent on stalls of different types. This patch adds support for keeping number of stalls of each type. For example, instead of just reporting how many microseconds are spent waiting for memtables to be compacted, it will also report how many times a write stalled for that to occur. Test Plan: make -j32 check ./db_stress # Not really sure what else should be done... Reviewers: dhruba, MarkCallaghan, haobo Reviewed By: dhruba CC: leveldb Differential Revision: https://reviews.facebook.net/D11841 29 July 2013, 17:34:23 UTC
a91fdf1 The target file size for L0 files was incorrectly set to LLONG_MAX. Summary: The target file size should be a valid value; only if UniversalCompactionStyle is enabled should the max file size be set to LLONG_MAX. 24 July 2013, 21:28:00 UTC
d7ba5bc Revert 6fbe4e981a3d74270a0160445bd993c464c23d76: If disable wal is set, then batch commits are avoided Summary: Revert "If disable wal is set, then batch commits are avoided" because keeping the mutex while inserting into the skiplist means that readers and writers are all serialized on the mutex. 24 July 2013, 17:01:13 UTC
52d7ecf Virtualize SkipList Interface Summary: This diff virtualizes the skiplist interface so that users can provide their own implementation of a backing store for MemTables. Eventually, the backing store will be responsible for its own synchronization, allowing users (and us) to experiment with different lockless implementations. Test Plan: make clean make -j32 check ./db_stress Reviewers: dhruba, emayanke, haobo Reviewed By: dhruba CC: leveldb Differential Revision: https://reviews.facebook.net/D11739 23 July 2013, 21:42:27 UTC
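"Virtualizing" the skiplist means callers program against an abstract interface so any ordered backing store can be plugged in behind it. A minimal sketch (interface and names are illustrative, not RocksDB's actual MemTableRep API; a `std::set` stands in for the skiplist):

```cpp
#include <memory>
#include <set>
#include <string>

// Abstract backing store for a MemTable. A concrete implementation
// could be a skiplist, a hash table, or eventually a lockless
// structure responsible for its own synchronization.
class MemTableRep {
 public:
  virtual ~MemTableRep() = default;
  virtual void Insert(const std::string& key) = 0;
  virtual bool Contains(const std::string& key) const = 0;
};

// A std::set-backed stand-in implementation, swappable for any other
// MemTableRep without changing calling code.
class SetRep : public MemTableRep {
 public:
  void Insert(const std::string& key) override { keys_.insert(key); }
  bool Contains(const std::string& key) const override {
    return keys_.count(key) > 0;
  }

 private:
  std::set<std::string> keys_;
};
```

The cost of the indirection is a virtual call per operation; the benefit is that experiments with alternative (including lockless) implementations need no changes in the MemTable code itself.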
6fbe4e9 If disable wal is set, then batch commits are avoided. Summary: rocksdb uses batch commit to write to the transaction log. But if disable wal is set, then writes to the transaction log are avoided anyway. In this case, there is not much value-add to batching; it can cause unnecessary delays to Puts(). This patch avoids batching when disableWal is set. Test Plan: make check. I am running db_stress now. Reviewers: haobo Reviewed By: haobo CC: leveldb Differential Revision: https://reviews.facebook.net/D11763 23 July 2013, 21:22:57 UTC
f3baeec Adding filter_deletes to crash_tests run in jenkins Summary: The filter_deletes option introduced in db_stress makes it drop Deletes on a key if KeyMayExist(key) returns false on the key. The code change was simple and tested, so not wasting reviewers' time. Test Plan: make crash_test; python tools/db_crashtest[1|2].py CC: dhruba, vamsi Differential Revision: https://reviews.facebook.net/D11769 23 July 2013, 20:49:16 UTC
bf66c10 Use KeyMayExist for WriteBatch-Deletes Summary: Introduced KeyMayExist checking during writebatch-delete and removed it from the outer Delete API because it uses writebatch-delete. Added code to skip getting the Table from disk if it is not already present in table_cache. Some renaming of variables. Introduced KeyMayExistImpl which allows checking since a specified sequence number in GetImpl, useful to check a partially written writebatch. Changed KeyMayExist to not be pure virtual and provided a default implementation. Expanded unit-tests in db_test to check appropriately. Ran db_stress for 1 hour with ./db_stress --max_key=100000 --ops_per_thread=10000000 --delpercent=50 --filter_deletes=1 --statistics=1. Test Plan: db_stress;make check Reviewers: dhruba, haobo Reviewed By: dhruba CC: leveldb, xjin Differential Revision: https://reviews.facebook.net/D11745 23 July 2013, 20:36:50 UTC
d364eea [RocksDB] Fix FindMinimumEmptyLevelFitting Summary: as title Test Plan: make check; Reviewers: xjin CC: leveldb Differential Revision: https://reviews.facebook.net/D11751 22 July 2013, 19:31:43 UTC
9ee6887 [RocksDB] Enable manual compaction to move files back to an appropriate level. Summary: As title. This diff added an option reduce_level to CompactRange. When set to true, it will try to move the files back to the minimum level sufficient to hold the data set. Note that the default is set to true now, just to exercise it in all existing tests. Will set the default to false before check-in, for backward compatibility. Test Plan: make check; Reviewers: dhruba, emayanke CC: leveldb Differential Revision: https://reviews.facebook.net/D11553 19 July 2013, 23:20:36 UTC
9357a53 Fix merge problems with options. Summary: Fix merge problems with options. 17 July 2013, 22:08:56 UTC
4a745a5 Merge branch 'master' into performance Conflicts: db/version_set.cc include/leveldb/options.h util/options.cc 17 July 2013, 22:05:57 UTC
76a4923 Make file-sizes and grandparentoverlap unsigned to avoid bad comparisons. Summary: The maxGrandParentOverlapBytes_ was signed, which was causing an erroneous comparison between signed and unsigned longs. This, in turn, was causing compaction-created output files to be very small in size. Test Plan: make check Differential Revision: https://reviews.facebook.net/D11727 17 July 2013, 21:26:06 UTC
e9b675b Fix memory leak in KeyMayExist test part of db_test Summary: NewBloomFilterPolicy call requires Delete to be called later on Test Plan: make; valgrind ./db_test Reviewers: haobo, dhruba, vamsi Differential Revision: https://reviews.facebook.net/D11667 12 July 2013, 23:58:57 UTC
2a98691 Make rocksdb-deletes faster using bloom filter Summary: Wrote a new function in db_impl.cc, CheckKeyMayExist, that calls Get but with a new parameter turned on which makes Get return false only if bloom filters can guarantee that the key is not in the database. Delete calls this function and if the option deletes_use_filter is turned on and CheckKeyMayExist returns false, the delete will be dropped, saving: 1. the Put of delete type, 2. space in the db, and 3. compaction time Test Plan: make all check; will run db_stress and db_bench and enhance unit-test once the basic design gets approved Reviewers: dhruba, haobo, vamsi Reviewed By: haobo CC: leveldb Differential Revision: https://reviews.facebook.net/D11607 11 July 2013, 19:11:11 UTC
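The reason this optimization is safe is that a Bloom filter never yields a false negative: when it says a key is definitely absent, there is nothing for the Delete to shadow, so the tombstone can be dropped. A toy sketch (the filter, hashing, and names are illustrative, not RocksDB's bloom implementation):

```cpp
#include <bitset>
#include <functional>
#include <string>

// Tiny two-hash Bloom filter. May report false positives, never false
// negatives, which is exactly the property the delete-dropping relies on.
class ToyBloom {
 public:
  void Add(const std::string& key) {
    bits_.set(H1(key));
    bits_.set(H2(key));
  }
  bool KeyMayExist(const std::string& key) const {
    return bits_.test(H1(key)) && bits_.test(H2(key));
  }

 private:
  static constexpr std::size_t kBits = 1 << 16;
  static std::size_t H1(const std::string& k) {
    return std::hash<std::string>{}(k) % kBits;
  }
  static std::size_t H2(const std::string& k) {
    return std::hash<std::string>{}(k + "#salt") % kBits;
  }
  std::bitset<kBits> bits_;
};

// Drop the tombstone only when the feature is on AND the filter proves
// the key cannot be present; a "maybe present" answer keeps the Delete.
bool ShouldDropDelete(bool filter_deletes, bool key_may_exist) {
  return filter_deletes && !key_may_exist;
}
```

Note the asymmetry: a false positive merely costs a wasted tombstone, while a false negative would silently lose a deletion, so only the no-false-negative direction may be trusted.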
8a5341e Newbie code question Summary: This diff is more about my question when reading compaction codes, instead of a normal diff. I don't quite understand the logic here. Test Plan: I didn't do any test. If this is a bug, I will continue doing some test. Reviewers: haobo, dhruba, emayanke Reviewed By: dhruba CC: leveldb Differential Revision: https://reviews.facebook.net/D11661 11 July 2013, 16:03:40 UTC
821889e Print complete statistics in db_stress Summary: db_stress should also print complete statistics like db_bench. Needed this when I wanted to measure the number of delete-IOs dropped due to CheckKeyMayExist, to be introduced to the rocksdb codebase later, to make deletes in rocksdb faster Test Plan: make db_stress;./db_stress --max_key=100 --ops_per_thread=1000 --statistics=1 Reviewers: sheki, dhruba, vamsi, haobo Reviewed By: dhruba Differential Revision: https://reviews.facebook.net/D11655 11 July 2013, 01:07:13 UTC
289efe9 Update statistics only if needed. Summary: Update statistics only if needed. 09 July 2013, 23:17:00 UTC
7cb8d46 Rename PickCompactionHybrid to PickCompactionUniversal. Summary: Rename PickCompactionHybrid to PickCompactionUniversal. Changed a few LOG messages from "Hybrid:" to "Universal:". 09 July 2013, 23:08:54 UTC
a8d5f8d [RocksDB] Remove old readahead options Summary: As title. Test Plan: make check; db_bench Reviewers: dhruba, MarkCallaghan CC: leveldb Differential Revision: https://reviews.facebook.net/D11643 09 July 2013, 18:22:33 UTC
9ba8278 [RocksDB] Provide contiguous sequence number even in case of write failure Summary: Replication logic would be simplified if we can guarantee that the write sequence number is always contiguous, even if a write failure occurs. Dhruba and I looked at the sequence number generation part of the code. It seems fixable. Note that if the WAL was successful and the insert into the memtable was not, we would be in an unfortunate state. The approach in this diff is: an IO error is expected and the error status will be returned to the client, and the sequence number will not be advanced; an in-mem error is not expected and we panic. Test Plan: make check; db_stress Reviewers: dhruba, sheki CC: leveldb Differential Revision: https://reviews.facebook.net/D11439 08 July 2013, 22:31:09 UTC
116ec52 Renamed 'hybrid_compaction' to be 'Universal Compaction'. Summary: All the universal compaction parameters are encapsulated in a new file universal_compaction.h Test Plan: make check 03 July 2013, 22:47:53 UTC
92ca816 [RocksDB] Support internal key/value dump for ldb Summary: This diff added a command 'idump' to ldb tool, which dumps the internal key/value pairs. It could be useful for diagnosis and estimating the per user key 'overhead'. Also cleaned up the ldb code a bit where I touched. Test Plan: make check; ldb idump Reviewers: emayanke, sheki, dhruba CC: leveldb Differential Revision: https://reviews.facebook.net/D11517 03 July 2013, 17:41:31 UTC
d56523c Update rocksdb version Summary: rocksdb-2.0 released to third party Test Plan: visual inspection Reviewers: dhruba, haobo, sheki Reviewed By: dhruba CC: leveldb Differential Revision: https://reviews.facebook.net/D11559 01 July 2013, 21:30:04 UTC
a84f547 Merge branch 'performance' of github.com:facebook/rocksdb into performance Conflicts: db/db_bench.cc db/version_set.cc include/leveldb/options.h util/options.cc 01 July 2013, 03:14:44 UTC
47c4191 Reduce write amplification by merging files in L0 back into L0 Summary: There is a new option called hybrid_mode which, when switched on, causes HBase-style compactions. Files from L0 are compacted back into L0. The meat of this compaction algorithm is in PickCompactionHybrid(). All files reside in L0. That means all files have overlapping keys. Each file has a time bound, i.e. each file contains a range of keys that were inserted around the same time. The start-seqno and the end-seqno refer to the timeframe when these keys were inserted. Files that have contiguous seqnos are compacted together into a larger file. All files are ordered from most recent to oldest. The current compaction algorithm starts to look for candidate files starting from the most recent file. It continues to add more files to the same compaction run as long as the sum of the files chosen so far is smaller than the next candidate file size. This logic needs to be debated and validated. The above logic should reduce write amplification to a large extent... will publish numbers shortly. Test Plan: dbstress runs for 6 hours with no data corruption (tested so far). Differential Revision: https://reviews.facebook.net/D11289 01 July 2013, 03:07:04 UTC
554c06d Reduce write amplification by merging files in L0 back into L0 Summary: There is a new option called hybrid_mode which, when switched on, causes HBase-style compactions. Files from L0 are compacted back into L0. The meat of this compaction algorithm is in PickCompactionHybrid(). All files reside in L0. That means all files have overlapping keys. Each file has a time bound, i.e. each file contains a range of keys that were inserted around the same time. The start-seqno and the end-seqno refer to the timeframe when these keys were inserted. Files that have contiguous seqnos are compacted together into a larger file. All files are ordered from most recent to oldest. The current compaction algorithm starts to look for candidate files starting from the most recent file. It continues to add more files to the same compaction run as long as the sum of the files chosen so far is smaller than the next candidate file size. This logic needs to be debated and validated. The above logic should reduce write amplification to a large extent... will publish numbers shortly. Test Plan: dbstress runs for 6 hours with no data corruption (tested so far). Differential Revision: https://reviews.facebook.net/D11289 30 June 2013, 16:11:02 UTC
71e0f69 [RocksDB] Expose count for WriteBatch Summary: As title. Exposed a Count function that returns the number of updates in a batch. Could be handy for replication sequence number check. Test Plan: make check; Reviewers: emayanke, sheki, dhruba CC: leveldb Differential Revision: https://reviews.facebook.net/D11523 26 June 2013, 22:13:21 UTC
34ef873 Added stringappend_test back into the unit tests. Summary: With the Makefile now updated to correctly update all .o files, this should fix the issues recompiling stringappend_test. This should also fix the "segmentation-fault" that we were getting earlier. Now, stringappend_test should be clean, and I have added it back to the unit-tests. Also made some minor updates to the tests themselves. Test Plan: 1. make clean; make stringappend_test -j 32 (will test it by itself) 2. make clean; make all check -j 32 (to run all unit tests) 3. make clean; make release (test in release mode) 4. valgrind ./stringappend_test (valgrind tests) Reviewers: haobo, jpaton, dhruba Reviewed By: haobo CC: leveldb Differential Revision: https://reviews.facebook.net/D11505 26 June 2013, 18:41:13 UTC
6894a50 Updated "make clean" to remove all .o files Summary: The old Makefile did not remove ALL .o and .d files, but rather only those that happened to be in the root folder and one-level deep. This was causing issues when recompiling files in deeper folders. This fix now causes make clean to find ALL .o and .d files via a unix "find" command, and then remove them. Test Plan: make clean; make all -j 32; Reviewers: haobo, jpaton, dhruba Reviewed By: haobo CC: leveldb Differential Revision: https://reviews.facebook.net/D11493 25 June 2013, 18:30:37 UTC
b858da7 Simplify bucketing logic in ldb-ttl Summary: [start_time, end_time) is what I'm following for the buckets and the whole time-range. Also cleaned up some code in db_ttl.* Not correcting the spacing/indenting convention for util/ldb_cmd.cc in this diff. Test Plan: python ldb_test.py, make ttl_test, Run mcrocksdb-backup tool, Run the ldb tool on 2 mcrocksdb production backups from sigmafio033.prn1 Reviewers: vamsi, haobo Reviewed By: vamsi Differential Revision: https://reviews.facebook.net/D11433 21 June 2013, 16:49:24 UTC
61f1baa Introducing timeranged scan, timeranged dump in ldb. Also the ability to count in time-batches during Dump Summary: Scan and Dump commands in ldb use an iterator. We need to also print the timestamp for ttl databases for debugging. For this I create a TtlIterator class pointer in these functions and assign it the value of the Iterator pointer, which actually points to a TtlIterator object, and access the new function ValueWithTS which can return the TS also. Buckets feature for the dump command: gives a count of different key-values in the specified time-range, distributed across the time-range partitioned according to bucket-size. start_time and end_time are specified in unix timestamp and bucket in seconds on the user command line. Have commented out 3 lines from ldb_test.py so that the test does not break right now. It breaks because the timestamp is also printed now and I have to look at wildcards in python to compare properly. Test Plan: python tools/ldb_test.py Reviewers: vamsi, dhruba, haobo, sheki Reviewed By: vamsi CC: leveldb Differential Revision: https://reviews.facebook.net/D11403 20 June 2013, 01:45:13 UTC
0f78fad [RocksDB] add back --mmap_read options to crashtest Summary: As title, now that db_stress supports --mmap_read properly Test Plan: make crash_test Reviewers: vamsi, emayanke, dhruba CC: leveldb Differential Revision: https://reviews.facebook.net/D11391 19 June 2013, 23:15:59 UTC
4deaa0d [RocksDB] Minor change to statistics.h Summary: as title, use an initializer list so that lines fit in 80 chars. Test Plan: make check; Reviewers: sheki, dhruba Differential Revision: https://reviews.facebook.net/D11385 19 June 2013, 19:44:42 UTC
96be2c4 [RocksDB] Add mmap_read option for db_stress Summary: as title, also removed an incorrect assertion Test Plan: make check; db_stress --mmap_read=1; db_stress --mmap_read=0 Reviewers: dhruba, emayanke CC: leveldb Differential Revision: https://reviews.facebook.net/D11367 19 June 2013, 17:28:32 UTC
5ef6bb8 [rocksdb][refactor] statistic printing code to one place Summary: $title Test Plan: db_bench --statistics=1 Reviewers: haobo Reviewed By: haobo CC: leveldb Differential Revision: https://reviews.facebook.net/D11373 19 June 2013, 03:28:41 UTC
09de7a3 Fix Zlib_Compress and Zlib_Uncompress Summary: Zlib_{Compress,Uncompress} did not handle very small input buffers properly. In addition, they did not keep calling inflate/deflate until Z_STREAM_END was returned; it was possible for them to exit when only Z_OK had been returned. This diff also fixes a bunch of lint errors. Test Plan: Run make check Reviewers: dhruba, sheki, haobo Reviewed By: dhruba CC: leveldb Differential Revision: https://reviews.facebook.net/D11301 18 June 2013, 23:57:42 UTC
3cc1af2 [RocksDB] Option for incremental sync Summary: This diff added an option to control the incremental sync frequency. db_bench has a new flag bytes_per_sync for easy tuning exercise. Test Plan: make check; db_bench Reviewers: dhruba CC: leveldb Differential Revision: https://reviews.facebook.net/D11295 18 June 2013, 22:00:32 UTC
79f4fd2 [Rocksdb] Simplify Printing code in db_bench Summary: simplify the printing code in db_bench using the TickersMap and HistogramsNameMap introduced in previous diffs. Test Plan: ./db_bench --statistics=1 and see if all the statistics are printed Reviewers: haobo, dhruba Reviewed By: haobo CC: leveldb Differential Revision: https://reviews.facebook.net/D11355 18 June 2013, 21:58:00 UTC
6acbe0f Compact multiple memtables before flushing to storage. Summary: Merge multiple memtables in memory before writing them out to a file in L0. There is a new config parameter min_write_buffer_number_to_merge that specifies the number of write buffers that should be merged together into a single file in storage. The system will not flush write buffers to storage unless at least this many buffers have accumulated in memory. The default value of this new parameter is 1, which means that a write buffer will be flushed to disk as soon as it is ready. Test Plan: make check Differential Revision: https://reviews.facebook.net/D11241 18 June 2013, 21:28:04 UTC
f561b3a [Rocksdb] Rename one stat key from leveldb to rocksdb 17 June 2013, 21:33:05 UTC
836534d Enhance dbstress to allow specifying compaction trigger for L0. Summary: Rocksdb allows specifying the number of files in L0 that triggers compactions. Expose this api as a command line parameter for running db_stress. Test Plan: Run test Reviewers: sheki, emayanke Reviewed By: emayanke CC: leveldb Differential Revision: https://reviews.facebook.net/D11343 17 June 2013, 21:15:09 UTC
0012468 [rocksdb] do not trim range for level0 in manual compaction Summary: https://code.google.com/p/leveldb/issues/detail?can=1&q=178&colspec=ID%20Type%20Status%20Priority%20Milestone%20Owner%20Summary&id=178 Ported the solution as is to RocksDB. Test Plan: moved the unit test as manual_compaction_test Reviewers: dhruba Reviewed By: dhruba CC: leveldb Differential Revision: https://reviews.facebook.net/D11331 17 June 2013, 20:58:17 UTC