https://github.com/facebook/rocksdb

4c5d3d2 revert to use cfh instead of cf_id 05 December 2016, 21:09:17 UTC
1421cb1 Verify CF id in IngestExternalFile and add a test 05 December 2016, 19:18:50 UTC
1704de8 Pass cf_id instead of cfh 03 December 2016, 01:39:49 UTC
14045ba Allow user to specify a cf id to persist in generated external sst file 03 December 2016, 01:24:48 UTC
edde954 fix clang build Summary: override is missing for FilterV2 Closes https://github.com/facebook/rocksdb/pull/1606 Differential Revision: D4263832 Pulled By: IslamAbdelRahman fbshipit-source-id: d8b337a 02 December 2016, 02:39:10 UTC
56281f3 Add memtable_insert_with_hint_prefix_size option to db_bench Summary: Add memtable_insert_with_hint_prefix_size option to db_bench Closes https://github.com/facebook/rocksdb/pull/1604 Differential Revision: D4260549 Pulled By: yiwu-arbug fbshipit-source-id: cee5ef7 02 December 2016, 00:54:16 UTC
4a21b14 Cache heap::downheap() root comparison (optimize heap cmp call) Summary: Reduce the number of comparisons in the heap by caching which child node in the first level is smallest (left_child or right_child), so that next time we can compare directly against the smallest child. The total number of calls to the comparator drops significantly with this optimization.

Before caching (~2 million key comparisons for iterating the DB):

```
$ DEBUG_LEVEL=0 make db_bench -j64 && ./db_bench --benchmarks="readseq" --db="/dev/shm/heap_opt" --use_existing_db --disable_auto_compactions --cache_size=1000000000 --perf_level=2
readseq      : 0.338 micros/op 2959201 ops/sec; 327.4 MB/s
user_key_comparison_count = 2000008
```

After caching (~1 million key comparisons for iterating the DB):

```
$ DEBUG_LEVEL=0 make db_bench -j64 && ./db_bench --benchmarks="readseq" --db="/dev/shm/heap_opt" --use_existing_db --disable_auto_compactions --cache_size=1000000000 --perf_level=2
readseq      : 0.309 micros/op 3236801 ops/sec; 358.1 MB/s
user_key_comparison_count = 1000011
```

It also improves Closes https://github.com/facebook/rocksdb/pull/1600 Differential Revision: D4256027 Pulled By: IslamAbdelRahman fbshipit-source-id: 76fcc66 01 December 2016, 21:39:14 UTC
e39d080 Fix travis (compile for clang < 3.9) Summary: Travis fails because it uses clang 3.6, which doesn't recognize `__attribute__((__no_sanitize__("undefined")))` Closes https://github.com/facebook/rocksdb/pull/1601 Differential Revision: D4257175 Pulled By: IslamAbdelRahman fbshipit-source-id: fb4d1ab 01 December 2016, 18:09:22 UTC
3f407b0 Kill flashcache code in RocksDB Summary: Now that we have userspace persisted cache, we don't need flashcache anymore. Closes https://github.com/facebook/rocksdb/pull/1588 Differential Revision: D4245114 Pulled By: igorcanadi fbshipit-source-id: e2c1c72 01 December 2016, 18:09:22 UTC
b77007d Bug: parallel_group status updated in WriteThread::CompleteParallelWorker Summary: A multi-write thread may update the status of the parallel_group in WriteThread::CompleteParallelWorker if the status of its Writer is not ok. When copying the write status to the parallel_group, the write thread holds only the mutex of the Writer it processed itself, which is useless; it should hold the mutex of the leader of the parallel_group instead. Closes https://github.com/facebook/rocksdb/pull/1598 Differential Revision: D4252335 Pulled By: siying fbshipit-source-id: 3864cf7 01 December 2016, 17:54:11 UTC
247d097 Support for range skips in compaction filter Summary: This adds the ability for compaction filter to say "drop this key-value, and also drop everything up to key x". This will cause the compaction to seek input iterator to x, without reading the data. This can make compaction much faster when large consecutive chunks of data are filtered out. See the changes in include/rocksdb/compaction_filter.h for the new API. Along the way this diff also adds ability for compaction filter changing merge operands, similar to how it can change values; we're not going to use this feature, it just seemed easier and cleaner to implement it than to document that it's not implemented :) The diff is not as big as it may seem, about half of the lines are a test. Closes https://github.com/facebook/rocksdb/pull/1599 Differential Revision: D4252092 Pulled By: al13n321 fbshipit-source-id: 41e1e48 01 December 2016, 15:09:15 UTC
96fcefb c api: expose option for dynamic level size target Summary: Closes https://github.com/facebook/rocksdb/pull/1587 Differential Revision: D4245923 Pulled By: yiwu-arbug fbshipit-source-id: 6ee7291 30 November 2016, 19:24:14 UTC
00197cf Add C API to set base_background_compactions Summary: Add C API to set base_background_compactions Closes https://github.com/facebook/rocksdb/pull/1571 Differential Revision: D4245709 Pulled By: yiwu-arbug fbshipit-source-id: 792c6b8 30 November 2016, 19:09:13 UTC
5b219ec deleterange end-to-end test improvements for lite/robustness Summary: Closes https://github.com/facebook/rocksdb/pull/1591 Differential Revision: D4246019 Pulled By: ajkr fbshipit-source-id: 0c4aa37 29 November 2016, 20:24:13 UTC
aad1191 pass rocksdb oncall to mysql_mtr_filter otherwise tasks get created with the wrong owner Summary: the mysql_mtr_filter script needs a proper oncall Closes https://github.com/facebook/rocksdb/pull/1586 Differential Revision: D4245150 Pulled By: anirbanr-fb fbshipit-source-id: fd8577c 29 November 2016, 20:09:12 UTC
e333528 DeleteRange write path end-to-end tests Summary: Closes https://github.com/facebook/rocksdb/pull/1578 Differential Revision: D4241171 Pulled By: ajkr fbshipit-source-id: ce5fd83 29 November 2016, 19:09:22 UTC
7784980 Fix mis-reporting of compaction read bytes to the base level Summary: In dynamic leveled compaction, when calculating read bytes, output level bytes may be wrongly counted as input level bytes. Fix it. Closes https://github.com/facebook/rocksdb/pull/1475 Differential Revision: D4148412 Pulled By: siying fbshipit-source-id: f2f475a 29 November 2016, 19:09:22 UTC
3c6b49e Fix implicit conversion between int64_t to int Summary: Make conversion explicit, implicit conversion breaks the build Closes https://github.com/facebook/rocksdb/pull/1589 Differential Revision: D4245158 Pulled By: IslamAbdelRahman fbshipit-source-id: aaec00d 29 November 2016, 18:54:15 UTC
b3b8756 Remove unused assignment in db/db_iter.cc Summary: "make analyze" complains the assignment is not useful. Remove it. Closes https://github.com/facebook/rocksdb/pull/1581 Differential Revision: D4241697 Pulled By: siying fbshipit-source-id: 178f67a 29 November 2016, 17:09:14 UTC
4f6e89b Fix range deletion covering key in same SST file Summary: AddTombstones() needs to be before t->Get(), oops :'( Closes https://github.com/facebook/rocksdb/pull/1576 Differential Revision: D4241041 Pulled By: ajkr fbshipit-source-id: 781ceea 29 November 2016, 06:54:13 UTC
a2bf265 Avoid intentional overflow in GetL0ThresholdSpeedupCompaction Summary: https://github.com/facebook/rocksdb/commit/99c052a34f93d119b75eccdcd489ecd581d48ee9 fixes integer overflow in GetL0ThresholdSpeedupCompaction() by checking whether the int became negative. UBSAN will still complain, since the overflow itself still happens; we can fix the issue properly by simply using int64_t Closes https://github.com/facebook/rocksdb/pull/1582 Differential Revision: D4241525 Pulled By: IslamAbdelRahman fbshipit-source-id: b3ae21f 29 November 2016, 02:39:13 UTC
52fd1ff disable UBSAN for functions with intentional -ve shift / overflow Summary: disable UBSAN for functions with an intentional left shift of a negative number / overflow. These functions are rocksdb::Hash, FixedLengthColBufEncoder::Append, and FaultInjectionTest::Key Closes https://github.com/facebook/rocksdb/pull/1577 Differential Revision: D4240801 Pulled By: IslamAbdelRahman fbshipit-source-id: 3e1caf6 29 November 2016, 01:54:12 UTC
1886c43 Fix CompactionJob::Install division by zero Summary: Fix CompactionJob::Install division by zero Closes https://github.com/facebook/rocksdb/pull/1580 Differential Revision: D4240794 Pulled By: IslamAbdelRahman fbshipit-source-id: 7286721 29 November 2016, 00:54:16 UTC
63c30de fix options_test ubsan Summary: Having a negative value for max_write_buffer_number does not make sense and causes us to do a left shift on a negative number Closes https://github.com/facebook/rocksdb/pull/1579 Differential Revision: D4240798 Pulled By: IslamAbdelRahman fbshipit-source-id: bd6267e 29 November 2016, 00:39:14 UTC
13e66a8 Fix compaction_job.cc division by zero Summary: Fix division by zero in compaction_job.cc Closes https://github.com/facebook/rocksdb/pull/1575 Differential Revision: D4240818 Pulled By: IslamAbdelRahman fbshipit-source-id: a8bc757 29 November 2016, 00:39:13 UTC
01eabf7 Fix double-counted deletion stat Summary: Both the single deletion and the value are included in compaction outputs, so no need to update the stat for the value's deletion yet, otherwise it'd be double-counted. Closes https://github.com/facebook/rocksdb/pull/1574 Differential Revision: D4241181 Pulled By: ajkr fbshipit-source-id: c9aaa15 28 November 2016, 23:54:12 UTC
7ffb10f DeleteRange compaction statistics Summary: - "rocksdb.compaction.key.drop.range_del" - number of keys dropped during compaction due to a range tombstone covering them - "rocksdb.compaction.range_del.drop.obsolete" - number of range tombstones dropped due to compaction to bottom level and no snapshot saving them - s/CompactionIteratorStats/CompactionIterationStats/g since this class is no longer specific to CompactionIterator -- it's also updated for range tombstone iteration during compaction - Move the above class into a separate .h file to avoid circular dependency. Closes https://github.com/facebook/rocksdb/pull/1520 Differential Revision: D4187179 Pulled By: ajkr fbshipit-source-id: 10c2103 28 November 2016, 19:54:12 UTC
236d4c6 Less linear search in DBIter::Seek() when keys are overwritten a lot Summary: In one deployment we saw high latencies (presumably from slow iterator operations) and a lot of CPU time reported by perf with this stack:

```
rocksdb::MergingIterator::Next
rocksdb::DBIter::FindNextUserEntryInternal
rocksdb::DBIter::Seek
```

I think what's happening is:
1. we create a snapshot iterator,
2. we do lots of Put()s for the same key x; this creates lots of entries in the memtable,
3. we seek the iterator to a key slightly smaller than x,
4. the seek walks over lots of entries in the memtable for key x, skipping them because of their high sequence numbers.
CC IslamAbdelRahman Closes https://github.com/facebook/rocksdb/pull/1413 Differential Revision: D4083879 Pulled By: IslamAbdelRahman fbshipit-source-id: a83ddae 28 November 2016, 18:24:11 UTC
cd7c414 Improve Write Stalling System Summary: The current write stalling system lacks positive feedback if the restricted rate is already too low, so users sometimes get stuck at a very low slowdown value. With this diff, we add positive feedback (increasing the slowdown value) when we recover from the slowdown state back to normal. To keep the positive feedback from pushing the slowdown value too high, we also issue negative feedback every time we are close to the stop condition. Experiments show it is easier to reach a relative balance than before. Also increase the level0_stop_writes_trigger default from 24 to 32. Since the level0_slowdown_writes_trigger default is 20, a stop trigger of 24 gives only four files of buffer time to slow down writes. To avoid stopping within four files while 20 files have accumulated, the slowdown value must be very low, which is almost the same as stopping. It also doesn't give the slowdown value enough time to converge. Increasing it to 32 smooths out the system. Closes https://github.com/facebook/rocksdb/pull/1562 Differential Revision: D4218519 Pulled By: siying fbshipit-source-id: 95e4088 23 November 2016, 17:24:15 UTC
dfb6fe6 Unified InlineSkipList::Insert algorithm with hinting Summary: This PR is based on nbronson's diff with small modifications to wire it up with the existing interface. Compared to the previous version, this approach works better for inserting keys in decreasing order or updating the same key, and imposes fewer restrictions on the prefix extractor. ---- Summary from original diff ---- This diff introduces a single InlineSkipList::Insert that unifies the existing sequential insert optimization (prev_), concurrent insertion, and insertion using externally-managed insertion point hints. There's a deep symmetry between insertion hints (cursors) and the concurrent algorithm. In both cases we have partial information from the recent past that is likely but not certain to be accurate. This diff introduces the struct InlineSkipList::Splice, which encodes predecessor and successor information in the same form that was previously only used within a single call to InsertConcurrently. Splice holds information about an insertion point that can be used to levera Closes https://github.com/facebook/rocksdb/pull/1561 Differential Revision: D4217283 Pulled By: yiwu-arbug fbshipit-source-id: 33ee437 22 November 2016, 22:09:13 UTC
3068870 Making persistent cache more resilient to filesystem failures Summary: The persistent cache is designed to hop over errors and return key not found. So far, it has shown resilience to write errors, encoding errors, data corruption, etc. It is not resilient against disappearing files/directories. This was exposed during testing when multiple instances of the persistent cache were started sharing the same directory, simulating an unpredictable filesystem environment. This patch - makes the write code path more resilient to errors while creating files - makes the read code path more resilient in situations where files are not found - adds a test that does negative write/read testing by removing the directory while writes are in progress Closes https://github.com/facebook/rocksdb/pull/1472 Differential Revision: D4143413 Pulled By: kradhakrishnan fbshipit-source-id: fd25e9b 22 November 2016, 18:39:10 UTC
734e4ac Eliminate redundant cache lookup with range deletion Summary: When we introduced range deletion block, TableCache::Get() and TableCache::NewIterator() each did two table cache lookups, one for range deletion block iterator and another for getting the table reader to which the Get()/NewIterator() is delegated. This extra cache lookup was very CPU-intensive (about 10% overhead in a read-heavy benchmark). We can avoid it by reusing the Cache::Handle created for range deletion block iterator to get the file reader. Closes https://github.com/facebook/rocksdb/pull/1537 Differential Revision: D4201167 Pulled By: ajkr fbshipit-source-id: d33ffd8 22 November 2016, 05:24:11 UTC
182b940 Add WriteOptions.no_slowdown Summary: If the WriteOptions.no_slowdown flag is set AND we need to wait or sleep for the write request, then fail immediately with Status::Incomplete(). Closes https://github.com/facebook/rocksdb/pull/1527 Differential Revision: D4191405 Pulled By: maysamyabandeh fbshipit-source-id: 7f3ce3f 22 November 2016, 02:09:13 UTC
4118e13 Persistent Cache: Expose stats to user via public API Summary: Exposing persistent cache stats (counters) to the user via public API. Closes https://github.com/facebook/rocksdb/pull/1485 Differential Revision: D4155274 Pulled By: siying fbshipit-source-id: 30a9f50 22 November 2016, 01:39:13 UTC
f2a8f92 rocks_lua_compaction_filter: add unused attribute to a variable Summary: Release build shows warning without this fix. Closes https://github.com/facebook/rocksdb/pull/1558 Differential Revision: D4215831 Pulled By: yiwu-arbug fbshipit-source-id: 888a755 21 November 2016, 22:54:14 UTC
4444256 Remove use of deprecated LZ4 function Summary: LZ4 1.7.3 emits warnings when calling the deprecated function `LZ4_compress_limitedOutput_continue()`. Starting in r129, LZ4 introduces `LZ4_compress_fast_continue()` as a replacement, and the two function calls are [exactly equivalent](https://github.com/lz4/lz4/blob/dev/lib/lz4.c#L1408). Closes https://github.com/facebook/rocksdb/pull/1532 Differential Revision: D4199240 Pulled By: siying fbshipit-source-id: 138c2bc 21 November 2016, 20:24:14 UTC
548d7fb Fix fd leak when using direct IOs Summary: We should close the fd, before overriding it. This bug was introduced by f89caa127baa086cb100976b14da1a531cf0e823 Closes https://github.com/facebook/rocksdb/pull/1553 Differential Revision: D4214101 Pulled By: siying fbshipit-source-id: 0d65de0 21 November 2016, 20:24:13 UTC
fd43ee0 Range deletion microoptimizations Summary: - Made RangeDelAggregator's InternalKeyComparator member a reference-to-const so we don't need to copy-construct it. Also added InternalKeyComparator to ImmutableCFOptions so we don't need to construct one for each DBIter. - Made MemTable::NewRangeTombstoneIterator and the table readers' NewRangeTombstoneIterator() functions return nullptr instead of NewEmptyInternalIterator to avoid the allocation. Updated callers accordingly. Closes https://github.com/facebook/rocksdb/pull/1548 Differential Revision: D4208169 Pulled By: ajkr fbshipit-source-id: 2fd65cf 21 November 2016, 20:24:13 UTC
23a18ca Reword support a little bit to be more clear and concise Summary: I tried to do this in #1556, but it landed before the change could be imported. Closes https://github.com/facebook/rocksdb/pull/1557 Differential Revision: D4214572 Pulled By: siying fbshipit-source-id: 718d4a4 21 November 2016, 19:39:13 UTC
481856a Update support to separate code issues with general questions Summary: Closes https://github.com/facebook/rocksdb/pull/1556 Differential Revision: D4214184 Pulled By: siying fbshipit-source-id: c1abf47 21 November 2016, 18:54:12 UTC
a0deec9 Fix deadlock when calling getMergedHistogram Summary: When calling StatisticsImpl::HistogramInfo::getMergedHistogram(), if there is a dying thread that is calling ThreadLocalPtr::StaticMeta::OnThreadExit() to merge its thread values into HistogramInfo, a deadlock will occur, because the former tries to hold merge_lock then ThreadMeta::mutex_, while the latter tries to hold ThreadMeta::mutex_ then merge_lock. In short, the locking orders differ. This patch addresses the issue by releasing merge_lock before folding thread values. Closes https://github.com/facebook/rocksdb/pull/1552 Differential Revision: D4211942 Pulled By: ajkr fbshipit-source-id: ef89bcb 21 November 2016, 02:24:12 UTC
fe349db Remove Arena in RangeDelAggregator Summary: The Arena construction/destruction introduced significant overhead to read-heavy workload just by creating empty vectors for its blocks, so avoid it in RangeDelAggregator. Closes https://github.com/facebook/rocksdb/pull/1547 Differential Revision: D4207781 Pulled By: ajkr fbshipit-source-id: 9d1c130 19 November 2016, 22:24:12 UTC
e63350e Use more efficient hash map for deadlock detection Summary: Currently, deadlock cycles are held in std::unordered_map. The problem with it is that it allocates/deallocates memory on every insertion/deletion. This limits throughput since we're doing this expensive operation while holding a global mutex. Fix this by using a vector which caches memory instead. Running the deadlock stress test, this change increased throughput from 39k txns/s -> 49k txns/s. The effect is more noticeable in MyRocks. Closes https://github.com/facebook/rocksdb/pull/1545 Differential Revision: D4205662 Pulled By: lth fbshipit-source-id: ff990e4 19 November 2016, 19:39:15 UTC
a13bde3 Skip ldb test in Travis Summary: Travis now builds the ldb tests. Disable them for now to unblock other tests while we investigate. Closes https://github.com/facebook/rocksdb/pull/1546 Differential Revision: D4209404 Pulled By: siying fbshipit-source-id: 47edd97 19 November 2016, 03:24:13 UTC
73843aa Direct I/O Reads Handle the last sector correctly. Summary: Currently, in Direct I/O read mode, the last sector of the file, if not full, is not handled correctly. If the return value of pread is not a multiple of kSectorSize, we still go ahead and continue reading, even though the buffer is no longer aligned. With this commit, if the return value is not a multiple of kSectorSize and all but the last sector has been read, we simply return. Closes https://github.com/facebook/rocksdb/pull/1550 Differential Revision: D4209609 Pulled By: lightmark fbshipit-source-id: cb0b439 19 November 2016, 03:24:13 UTC
9d60151 Implement PositionedAppend for PosixWritableFile Summary: This patch clarifies the contract of PositionedAppend with some unit tests and also implements it for PosixWritableFile. (Tasks: 14524071) Closes https://github.com/facebook/rocksdb/pull/1514 Differential Revision: D4204907 Pulled By: maysamyabandeh fbshipit-source-id: 06eabd2 19 November 2016, 01:24:13 UTC
3f62215 Lazily initialize RangeDelAggregator's map and pinning manager Summary: Since a RangeDelAggregator is created for each read request, these heap-allocating member variables were consuming significant CPU (~3% total) which slowed down request throughput. The map and pinning manager are only necessary when range deletions exist, so we can defer their initialization until the first range deletion is encountered. Currently lazy initialization is done for reads only since reads pass us a single snapshot, which is easier to store on the stack for later insertion into the map than the vector passed to us by flush or compaction. Note the Arena member variable is still expensive, I will figure out what to do with it in a subsequent diff. It cannot be lazily initialized because we currently use this arena even to allocate empty iterators, which is necessary even when no range deletions exist. Closes https://github.com/facebook/rocksdb/pull/1539 Differential Revision: D4203488 Pulled By: ajkr fbshipit-source-id: 3b36279 19 November 2016, 01:09:11 UTC
41e77b8 cmake: s/STEQUAL/STREQUAL/ Summary: Signed-off-by: Kefu Chai <tchaikov@gmail.com> Closes https://github.com/facebook/rocksdb/pull/1540 Differential Revision: D4207564 Pulled By: siying fbshipit-source-id: 567415b 18 November 2016, 22:54:14 UTC
c1038d2 Release RocksDB 5.0 Summary: Update HISTORY.md and version.h Closes https://github.com/facebook/rocksdb/pull/1536 Differential Revision: D4202987 Pulled By: IslamAbdelRahman fbshipit-source-id: 94985e3 18 November 2016, 02:39:15 UTC
635a7bd refactor TableCache Get/NewIterator for single exit points Summary: these functions were too complicated to change with exit points everywhere, so refactored them. btw, please review urgently, this is a prereq to fix the 5.0 perf regression Closes https://github.com/facebook/rocksdb/pull/1534 Differential Revision: D4198972 Pulled By: ajkr fbshipit-source-id: 04ebfb7 17 November 2016, 22:39:13 UTC
f39452e Fix heap use after free ASAN/Valgrind Summary: Dont use c_str() of temp std::string in RocksLuaCompactionFilter::Name() Closes https://github.com/facebook/rocksdb/pull/1535 Differential Revision: D4199094 Pulled By: IslamAbdelRahman fbshipit-source-id: e56ce62 17 November 2016, 20:24:12 UTC
a4eb738 Allow plain table to store index on file with bloom filter disabled Summary: Currently plain table bloom filter is required if storing metadata on file. Remove the constraint. Closes https://github.com/facebook/rocksdb/pull/1525 Differential Revision: D4190977 Pulled By: siying fbshipit-source-id: be60442 17 November 2016, 19:09:13 UTC
36e4762 Remove Ticker::SEQUENCE_NUMBER Summary: Remove the ticker count because: * Having to reset the ticker count in WriteImpl is inefficient; * It doesn't make sense to have it as a ticker count if multiple db instances share a statistics object. Closes https://github.com/facebook/rocksdb/pull/1531 Differential Revision: D4194442 Pulled By: yiwu-arbug fbshipit-source-id: e2110a9 17 November 2016, 06:39:09 UTC
86eb2b9 Fix src.mk 17 November 2016, 02:05:19 UTC
0765bab Remove LATEST_BACKUP file Summary: This has been unused since D42069 but kept around for backward compatibility. I think it is unlikely anyone will use a much older version of RocksDB for restore than they use for backup, so I propose removing it. It is also causing recurring confusion, e.g., https://www.facebook.com/groups/rocksdb.dev/permalink/980454015386446/ Ported from https://reviews.facebook.net/D60735 Closes https://github.com/facebook/rocksdb/pull/1529 Differential Revision: D4194199 Pulled By: ajkr fbshipit-source-id: 82f9bf4 17 November 2016, 01:24:15 UTC
647eafd Introduce Lua Extension: RocksLuaCompactionFilter Summary: This diff includes an implementation of CompactionFilter that allows users to write CompactionFilter in Lua. With this ability, users can dynamically change compaction filter logic without requiring building the rocksdb binary and restarting the database. To compile, WITH_LUA_PATH must be specified to the base directory of lua. Closes https://github.com/facebook/rocksdb/pull/1478 Differential Revision: D4150138 Pulled By: yhchiang fbshipit-source-id: ed84222 16 November 2016, 23:39:12 UTC
760ef68 fix deleterange asan issue Summary: pinned_iters_mgr_ pins iterators allocated with arena_, so we should order the instance variable declarations such that the pinned iterators have their destructors executed before the arena is destroyed. Closes https://github.com/facebook/rocksdb/pull/1528 Differential Revision: D4191984 Pulled By: ajkr fbshipit-source-id: 1386f20 16 November 2016, 22:09:07 UTC
327085b fix valgrind Summary: Closes https://github.com/facebook/rocksdb/pull/1526 Differential Revision: D4191257 Pulled By: ajkr fbshipit-source-id: d09dc76 16 November 2016, 20:09:11 UTC
715591b Ask travis to use JDK 7 Summary: yhchiang This may or may not help Closes https://github.com/facebook/rocksdb/pull/1385 Differential Revision: D4098424 Pulled By: yhchiang fbshipit-source-id: 9f9782e 16 November 2016, 18:54:12 UTC
972e3ff Enable allow_concurrent_memtable_write and enable_write_thread_adaptive_yield by default Summary: Closes https://github.com/facebook/rocksdb/pull/1496 Differential Revision: D4168080 Pulled By: siying fbshipit-source-id: 056ae62 16 November 2016, 17:39:09 UTC
420bdb4 option_change_migration_test: force full compaction when needed Summary: When option_change_migration_test decides to go with a full compaction, we don't force a compaction but allow trivial move. This can cause an assert failure if the destination is level 0. Fix it by forcing the full compaction to skip trivial move when the destination level is L0. Closes https://github.com/facebook/rocksdb/pull/1518 Differential Revision: D4183610 Pulled By: siying fbshipit-source-id: dea482b 16 November 2016, 06:09:34 UTC
1543d5d Report memory usage by memtable insert hints map. Summary: It is hard to measure the actual memory usage of std containers; even providing a custom allocator will miscount some of the usage. Here we only make a rough estimate of the memory usage. Closes https://github.com/facebook/rocksdb/pull/1511 Differential Revision: D4179945 Pulled By: yiwu-arbug fbshipit-source-id: 32ab929 16 November 2016, 04:24:13 UTC
018bb2e DeleteRange support for db_bench Summary: Added a few options to configure when to add range tombstones during any benchmark involving writes. Closes https://github.com/facebook/rocksdb/pull/1522 Differential Revision: D4187388 Pulled By: ajkr fbshipit-source-id: 2c8a473 16 November 2016, 01:39:47 UTC
dc51bd7 CMakeLists.txt: FreeBSD has jemalloc as default malloc Summary: This will allow reference to `malloc_stats_print` Closes https://github.com/facebook/rocksdb/pull/1516 Differential Revision: D4187258 Pulled By: siying fbshipit-source-id: 34ae9f9 16 November 2016, 01:39:47 UTC
48e8bae Decouple data iterator and range deletion iterator in TableCache Summary: Previously we used TableCache::NewIterator() for multiple purposes (data block iterator and range deletion iterator), and returned non-ok status in the data block iterator. In one case where the caller only used the range deletion block iterator (https://github.com/facebook/rocksdb/blob/9e7cf3469bc626b092ec48366d12873ecab22b4e/db/version_set.cc#L965-L973), we didn't check/free the data block iterator containing non-ok status, which caused a valgrind error. So, this diff decouples creation of data block and range deletion block iterators, and updates the callers accordingly. Both functions can return non-ok status in an InternalIterator. Since the non-ok status is returned in an iterator that the callers will definitely use, it should be more usable/less error-prone. Closes https://github.com/facebook/rocksdb/pull/1513 Differential Revision: D4181423 Pulled By: ajkr fbshipit-source-id: 835b8f5 16 November 2016, 01:24:28 UTC
4b0aa3c Fix failed compaction_filter_example and add it into make all Summary: Simple patch as title Closes https://github.com/facebook/rocksdb/pull/1512 Differential Revision: D4186994 Pulled By: siying fbshipit-source-id: 880f9b8 16 November 2016, 01:09:10 UTC
53b693f ldb support for range delete Summary: Add a subcommand to ldb with which we can delete a range of keys. Closes https://github.com/facebook/rocksdb/pull/1521 Differential Revision: D4186338 Pulled By: ajkr fbshipit-source-id: b8e9861 15 November 2016, 23:54:20 UTC
661e4c9 DeleteRange unsupported in non-block-based tables Summary: Return an error from DeleteRange() (or Write() if the user is using the low-level WriteBatch API) if an unsupported table type is configured. Closes https://github.com/facebook/rocksdb/pull/1519 Differential Revision: D4185933 Pulled By: ajkr fbshipit-source-id: abcdf84 15 November 2016, 23:24:16 UTC
489d142 DeleteRange interface Summary: Expose DeleteRange() interface since we think the implementation is functionally correct now. Closes https://github.com/facebook/rocksdb/pull/1503 Differential Revision: D4171921 Pulled By: ajkr fbshipit-source-id: 5e21c98 15 November 2016, 23:24:16 UTC
eba99c2 Fix min_write_buffer_number_to_merge = 0 bug Summary: It's possible that we set min_write_buffer_number_to_merge to 0. This should never happen Closes https://github.com/facebook/rocksdb/pull/1515 Differential Revision: D4183356 Pulled By: yiwu-arbug fbshipit-source-id: c9d39d7 15 November 2016, 21:54:08 UTC
2ef92fe Remove all instances of relative_url until GitHub pages problem is fixed. I am in email thread with GitHub support about what is happening here. 15 November 2016, 15:40:18 UTC
91300d0 Dynamic max_total_wal_size option Summary: Closes https://github.com/facebook/rocksdb/pull/1509 Differential Revision: D4176426 Pulled By: yiwu-arbug fbshipit-source-id: b57689d 15 November 2016, 06:54:17 UTC
ec2f647 Consider subcompaction boundaries when updating file boundaries for range deletion Summary: Adjusted AddToBuilder() to take lower_bound and upper_bound, which serve two purposes: (1) only range deletions overlapping with the interval [lower_bound, upper_bound) will be added to the output file, and (2) the output file's boundaries will not be extended before lower_bound or after upper_bound. Our computation of lower_bound/upper_bound consider both subcompaction boundaries and previous/next files within the subcompaction. Test cases are here (level subcompactions: https://gist.github.com/ajkr/63c7eae3e9667c5ebdc0a7efb74ac332, and universal subcompactions: https://gist.github.com/ajkr/5a62af77c4ebe4052a1955c496d51fdb) but can't be included in this diff as they depend on committing the API first. They fail before this change and pass after. Closes https://github.com/facebook/rocksdb/pull/1501 Reviewed By: yhchiang Differential Revision: D4171685 Pulled By: ajkr fbshipit-source-id: ee99db8 15 November 2016, 04:24:21 UTC
800e515 Fix CSS issues again :( I have an email to GitHub support about this. 15 November 2016, 04:11:26 UTC
b952c89 Parallize persistent_cache_test and transaction_test Summary: Parallize persistent_cache_test and transaction_test Closes https://github.com/facebook/rocksdb/pull/1506 Differential Revision: D4179392 Pulled By: IslamAbdelRahman fbshipit-source-id: 05507a1 15 November 2016, 04:09:19 UTC
3b192f6 Handle full final subcompaction output file with range deletions Summary: This conditional should only open a new file that's dedicated to range deletions when it's the sole output of the subcompaction. Previously, we created such a file whenever the table builder was nullptr, which would've also been the case whenever the CompactionIterator's final key coincided with the final output table becoming full. Closes https://github.com/facebook/rocksdb/pull/1507 Differential Revision: D4174613 Pulled By: ajkr fbshipit-source-id: 9ffacea 15 November 2016, 01:54:20 UTC
6c57952 Make range deletion inclusive-exclusive Summary: This makes it easier to implement future optimizations like range collapsing. Closes https://github.com/facebook/rocksdb/pull/1504 Differential Revision: D4172214 Pulled By: ajkr fbshipit-source-id: ac4942f 15 November 2016, 01:39:13 UTC
425210c CSS issues are arising on the GitHub Pages side. Temp fix. Need to figure out why `relative_url` is still not prepending the right value at seemingly random times. 14 November 2016, 15:08:52 UTC
1ea79a7 Optimize sequential insert into memtable - Part 1: Interface Summary: Currently our skip-list has an optimization to speed up sequential inserts from a single stream, by remembering the last insert position. We extend the idea to support sequential inserts from multiple streams, and even tolerate small reordering within each stream. This PR is the interface part, adding the following: - Add `memtable_insert_prefix_extractor` to allow specifying a prefix for each key. - Add an `InsertWithHint()` interface to the memtable, to allow the underlying implementation to return a hint of the insert position, which can later be passed back to optimize inserts. - The memtable will maintain a map from prefix to hints and pass the hint via `InsertWithHint()` if `memtable_insert_prefix_extractor` is non-null. Closes https://github.com/facebook/rocksdb/pull/1419 Differential Revision: D4079367 Pulled By: yiwu-arbug fbshipit-source-id: 3555326 14 November 2016, 03:09:18 UTC
df5eeb8 Optimize sequential insert into memtable - Part 2: Implementation Summary: Implement an insert hint into the skip-list to hint the insert position. This optimizes for write workloads with multiple streams of sequential writes. For example, there is a stream of keys a1, a2, a3... but also b1, b2, b3... Each stream is not necessarily strictly sequential, but can get reordered a little bit. The user can specify a prefix extractor, and the `SkipListRep` can thus maintain a hint for each stream for fast insert into the memtable. This is the internal implementation part. See #1419 for the interface part. See inline comments for details. Closes https://github.com/facebook/rocksdb/pull/1449 Differential Revision: D4106781 Pulled By: yiwu-arbug fbshipit-source-id: f4d48c4 13 November 2016, 21:09:16 UTC
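The per-prefix hint idea from the two commits above can be sketched with standard containers. This is an illustration, not RocksDB's `SkipListRep`: `std::set` stands in for the skip list, its hinted `insert(iterator, value)` overload stands in for `InsertWithHint()`, and the `HintedMemtable` name is hypothetical. One remembered position is kept per key prefix, so sequential inserts within a stream reuse their stream's hint; a stale hint is still safe because the container falls back to an ordinary search.

```cpp
#include <map>
#include <set>
#include <string>

// Sketch of a memtable that keeps one insert-position hint per prefix.
class HintedMemtable {
 public:
  explicit HintedMemtable(size_t prefix_len) : prefix_len_(prefix_len) {}

  void Insert(const std::string& key) {
    std::string prefix = key.substr(0, prefix_len_);
    auto it = hints_.find(prefix);
    if (it != hints_.end()) {
      // Reuse the remembered position for this stream. When the new key
      // belongs adjacent to the hint this is amortized O(1); a stale
      // hint degrades gracefully to a normal O(log n) insert.
      it->second = keys_.insert(it->second, key);
    } else {
      hints_[prefix] = keys_.insert(key).first;
    }
  }

  bool Contains(const std::string& key) const { return keys_.count(key) > 0; }
  size_t size() const { return keys_.size(); }

 private:
  size_t prefix_len_;
  std::set<std::string> keys_;                                  // "skip list"
  std::map<std::string, std::set<std::string>::iterator> hints_;  // prefix -> hint
};
```

The graceful degradation on a stale hint is what lets the scheme tolerate the small reordering within a stream that the commit mentions.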
5ed6508 Fix SstFileWriter destructor Summary: If the user did not call SstFileWriter::Finish(), or called Finish() but it failed, we need to abandon the builder to avoid destructing it while it's open. Closes https://github.com/facebook/rocksdb/pull/1502 Differential Revision: D4171660 Pulled By: IslamAbdelRahman fbshipit-source-id: ab6f434 13 November 2016, 04:11:19 UTC
adb665e Allowed delayed_write_rate option to be dynamically set. Summary: Closes https://github.com/facebook/rocksdb/pull/1488 Differential Revision: D4157784 Pulled By: siying fbshipit-source-id: f150081 12 November 2016, 23:54:11 UTC
307a4e8 sst_dump support for range deletion Summary: Change DumpTable() so we can see the range deletion meta-block. Closes https://github.com/facebook/rocksdb/pull/1505 Differential Revision: D4172227 Pulled By: ajkr fbshipit-source-id: ae35665 12 November 2016, 17:39:23 UTC
361010d Exporting compaction stats in the form of a map Summary: Currently the compaction stats are printed to stdout. We want to export the compaction stats in a map format so that the upper layer apps (e.g., MySQL) could present the stats in any format they require. Closes https://github.com/facebook/rocksdb/pull/1477 Differential Revision: D4149836 Pulled By: maysamyabandeh fbshipit-source-id: b3df19f 12 November 2016, 04:54:14 UTC
672300f Use relative URLs for stylesheets 10 November 2016, 22:54:55 UTC
b39b2ee do not call get() in recovery mode Summary: Fix a typo in a previous fix Closes https://github.com/facebook/rocksdb/pull/1487 Differential Revision: D4157381 Pulled By: lightmark fbshipit-source-id: f079be8 10 November 2016, 19:24:20 UTC
1ca5f6d Fix 2PC Recovery SeqId Miscount Summary: Originally, sequence ids were calculated in recovery based off of the first seqid found in the first log recovered. The working seqid was then incremented from that value based on every insertion that took place. This was faulty because of the potential for missing log files or inserts that skipped the WAL. The current recovery scheme grabs the sequence from the current recovering batch and increments using MemTableInserter to track how many actual inserts take place. This works for 2PC batches as well as for scenarios where some logs are missing or inserts skipped the WAL. Closes https://github.com/facebook/rocksdb/pull/1486 Differential Revision: D4156064 Pulled By: reidHoruff fbshipit-source-id: a6da8d9 10 November 2016, 19:09:22 UTC
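The counting scheme the commit above describes can be sketched as follows. The names `RecoveredBatch` and `NextSequenceAfterRecovery` are hypothetical, not RocksDB's actual recovery code: the point is that each batch re-anchors the count at its own sequence number and advances by the inserts actually applied, rather than extrapolating from the first log's first sequence number.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical view of one recovered write batch.
struct RecoveredBatch {
  uint64_t start_seq;    // sequence number carried by the batch itself
  uint64_t num_inserts;  // inserts actually applied (as a MemTableInserter
                         // would count them)
};

// Next usable sequence number after replaying all recovered batches.
// Gaps from missing logs or WAL-skipping writes are tolerated because
// each batch supplies its own starting sequence.
uint64_t NextSequenceAfterRecovery(const std::vector<RecoveredBatch>& batches) {
  uint64_t next_seq = 0;
  for (const auto& b : batches) {
    uint64_t end = b.start_seq + b.num_inserts;
    if (end > next_seq) next_seq = end;
  }
  return next_seq;
}
```

With the old scheme, a gap between log files (e.g. a batch at sequence 10 following one at sequence 1 with 3 inserts) would leave the working seqid behind the true value; re-anchoring per batch avoids that.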
e095d0c Rocksdb contruns to new Sandcastle API Reviewed By: IslamAbdelRahman Differential Revision: D4114816 fbshipit-source-id: 8082936 10 November 2016, 18:54:20 UTC
14c0380 Convenience option to parse an internal key on command line Summary: enhancing sst_dump to be able to parse internal key Closes https://github.com/facebook/rocksdb/pull/1482 Differential Revision: D4154175 Pulled By: siying fbshipit-source-id: b0e28b1 10 November 2016, 18:09:21 UTC
c90fef8 fix open failure with empty wal Summary: Closes https://github.com/facebook/rocksdb/pull/1490 Differential Revision: D4158821 Pulled By: IslamAbdelRahman fbshipit-source-id: 59b73f4 10 November 2016, 06:24:26 UTC
4e20c5d Store internal keys in TombstoneMap Summary: This fixes a correctness issue where ranges with same begin key would overwrite each other. This diff uses InternalKey as TombstoneMap's key such that all tombstones have unique keys even when their start keys overlap. We also update TombstoneMap to use an internal key comparator. End-to-end tests pass and are here (https://gist.github.com/ajkr/851ffe4c1b8a15a68d33025be190a7d9) but cannot be included yet since the DeleteRange() API is yet to be checked in. Note both tests failed before this fix. Closes https://github.com/facebook/rocksdb/pull/1484 Differential Revision: D4155248 Pulled By: ajkr fbshipit-source-id: 304b4b9 09 November 2016, 23:09:18 UTC
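The fix in the commit above can be sketched as a change of map key. This is an illustration, not RocksDB's actual InternalKey or TombstoneMap; `InternalKeyLite` is a hypothetical stand-in. Keyed by user key alone, two tombstones sharing a begin key collide; keyed by (user key, sequence number) with an internal-key-style comparator, both survive.

```cpp
#include <cstdint>
#include <map>
#include <string>

// Hypothetical simplified internal key: user key plus sequence number.
struct InternalKeyLite {
  std::string user_key;
  uint64_t seq;
  bool operator<(const InternalKeyLite& o) const {
    // Internal-key-style ordering: user key ascending, then sequence
    // number descending (newer entries sort first).
    if (user_key != o.user_key) return user_key < o.user_key;
    return seq > o.seq;
  }
};

// Tombstone map from begin (internal) key to end (user) key. Two
// tombstones with the same begin user key but different sequence
// numbers get distinct map entries instead of overwriting each other.
using TombstoneMap = std::map<InternalKeyLite, std::string>;
```

With a plain `std::map<std::string, std::string>` keyed by user key, the second insert below would silently replace the first.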
a9fb346 Fix RocksDB Lite build failure in c_test.cc Summary: Fix the following RocksDB Lite build failure in c_test.cc db/c_test.c:1051:3: error: implicit declaration of function 'fprintf' is invalid in C99 [-Werror,-Wimplicit-function-declaration] fprintf(stderr, "SKIPPED\n"); ^ db/c_test.c:1051:3: error: declaration of built-in function 'fprintf' requires inclusion of the header <stdio.h> [-Werror,-Wbuiltin-requires-header] db/c_test.c:1051:11: error: use of undeclared identifier 'stderr' fprintf(stderr, "SKIPPED\n"); ^ 3 errors generated. Closes https://github.com/facebook/rocksdb/pull/1479 Differential Revision: D4151160 Pulled By: yhchiang fbshipit-source-id: a471a30 09 November 2016, 20:24:18 UTC
d133b08 Use correct sequence number when creating memtable Summary: copied from: https://github.com/mdlugajczyk/rocksdb/commit/5ebfd2623a01e69a4cbeae3ed2b788f2a84056ad Opening existing RocksDB attempts recovery from log files, which uses wrong sequence number to create the memtable. This is a regression introduced in change a400336. This change includes a test demonstrating the problem, without the fix the test fails with "Operation failed. Try again.: Transaction could not check for conflicts for operation at SequenceNumber 1 as the MemTable only contains changes newer than SequenceNumber 2. Increasing the value of the max_write_buffer_number_to_maintain option could reduce the frequency of this error" This change is a joint effort by Peter 'Stig' Edwards thatsafunnyname and me. Closes https://github.com/facebook/rocksdb/pull/1458 Differential Revision: D4143791 Pulled By: reidHoruff fbshipit-source-id: 5a25033 09 November 2016, 20:24:17 UTC
144cdb8 16384 as e.g. value for compression_max_dict_bytes Summary: Use 16384 as the example value for the ldb --compression_max_dict_bytes option. I think 14 was copied and pasted from the options in the lines above. Closes https://github.com/facebook/rocksdb/pull/1483 Differential Revision: D4154393 Pulled By: siying fbshipit-source-id: ef53a69 09 November 2016, 19:24:20 UTC
9bd191d Fix deadlock between (WriterThread/Compaction/IngestExternalFile) Summary: A deadlock is possible if the following happens: (1) The writer thread is stopped because it's waiting for compaction to finish (2) Compaction is waiting for current IngestExternalFile() calls to finish (3) IngestExternalFile() is waiting to be able to acquire the writer thread (4) The writer thread is held by stopped writes that are waiting for compactions to finish This patch fixes the issue by not incrementing num_running_ingest_file_ except when we acquire the writer thread. This patch includes a unit test to reproduce the described scenario Closes https://github.com/facebook/rocksdb/pull/1480 Differential Revision: D4151646 Pulled By: IslamAbdelRahman fbshipit-source-id: 09b39db 09 November 2016, 18:54:10 UTC
a9fae0a CSS problems again :( Trying to remove baseurl term. 08 November 2016, 23:22:31 UTC
193221e Fix Forward Iterator Seek()/SeekToFirst() Summary: In ForwardIterator::SeekInternal(), we may end up passing an empty Slice representing an internal key to InternalKeyComparator::Compare, and when we try to extract the user key from this empty Slice, we will create a slice with size = 0 - 8 (which will overflow and cause us to read invalid memory as well). Scenarios to reproduce these issues are in the unit tests Closes https://github.com/facebook/rocksdb/pull/1467 Differential Revision: D4136660 Pulled By: lightmark fbshipit-source-id: 151e128 08 November 2016, 21:54:31 UTC
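The "size = 0 - 8" overflow the commit above describes is ordinary unsigned wraparound. A tiny illustration (not RocksDB code; `BuggyUserKeySize` is a hypothetical name): an internal key carries an 8-byte trailer, so the user key size is the key size minus 8, and when an empty slice slips through, the unsigned subtraction wraps to an enormous value instead of -8, so any read of that "length" walks into invalid memory.

```cpp
#include <cstddef>
#include <cstdint>

// Computes a user key size the way the buggy path effectively did:
// subtract the 8-byte internal-key trailer without checking that the
// key is at least 8 bytes long. size_t arithmetic wraps modulo 2^N.
size_t BuggyUserKeySize(size_t internal_key_size) {
  return internal_key_size - 8;  // wraps when internal_key_size < 8
}
```

For an empty key this yields `SIZE_MAX - 7` (i.e. 2^64 - 8 on a 64-bit platform) rather than a negative number, which is why the bug manifests as an out-of-bounds read instead of an obvious error.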
e48f3f8 remove tabs and duplicate #include in c api Summary: fix lint error about tabs and duplicate includes. Closes https://github.com/facebook/rocksdb/pull/1476 Differential Revision: D4149646 Pulled By: lightmark fbshipit-source-id: 2e0a632 08 November 2016, 21:54:31 UTC
85bd8f5 Minor fix to GFLAGS usage in persistent cache Summary: The general convention in RocksDB is to use GFLAGS instead of google. Fixing the anomaly. Closes https://github.com/facebook/rocksdb/pull/1470 Differential Revision: D4149213 Pulled By: kradhakrishnan fbshipit-source-id: 2dafa53 08 November 2016, 21:09:20 UTC
a787527 c: support seek_for_prev Summary: support seek_for_prev in c abi. Closes https://github.com/facebook/rocksdb/pull/1457 Differential Revision: D4135360 Pulled By: lightmark fbshipit-source-id: 61256b0 08 November 2016, 20:54:13 UTC