swh:1:snp:5115096b921df712aeb2a08114fede57fb3331fb

2adddee 1.5.8.1.fb release. Summary: Test Plan: Reviewers: CC: Task ID: # Blame Rev: 20 March 2013, 21:16:53 UTC
a6f4275 Removing boost from ldb_cmd.cc Summary: Getting rid of boost in our github codebase which caused problems on third-party Test Plan: make ldb; python tools/ldb_test.py Reviewers: sheki, dhruba Reviewed By: sheki Differential Revision: https://reviews.facebook.net/D9543 20 March 2013, 18:19:12 UTC
48abc06 Using return value of fwrite in posix_logger.h Summary: Was causing an error (warning) in the third-party build about an unused result Test Plan: make Reviewers: sheki, dhruba Reviewed By: dhruba Differential Revision: https://reviews.facebook.net/D9447 20 March 2013, 04:33:01 UTC
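The fix above is small but worth illustrating: consuming fwrite's return value both silences unused-result warnings and surfaces short writes. A minimal C++ sketch, not the actual posix_logger.h code; the function name and signature are made up:

    #include <cstdio>

    // Minimal sketch (not the actual posix_logger.h code): consume fwrite's
    // return value so builds with -Werror=unused-result do not fail, and
    // surface short writes instead of silently ignoring them.
    bool AppendToLog(FILE* file, const char* data, size_t size) {
      size_t written = fwrite(data, 1, size, file);  // bytes actually written
      if (written != size) {
        // A short write indicates an I/O error (e.g. disk full).
        return false;
      }
      return true;
    }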
b1bea58 Fix more signed-unsigned comparisons Summary: Some comparisons left in log_test.cc and db_test.cc were flagged by make Test Plan: make Reviewers: dhruba, sheki Reviewed By: dhruba Differential Revision: https://reviews.facebook.net/D9537 20 March 2013, 00:21:36 UTC
487168c Fixed sign-comparison in rocksdb code-base and fixed Makefile Summary: Makefile had options to ignore sign-comparisons and unused-parameters, which should not have been there. Also fixed the specific errors in the code-base Test Plan: make Reviewers: chip, dhruba Reviewed By: dhruba CC: leveldb Differential Revision: https://reviews.facebook.net/D9531 19 March 2013, 21:35:23 UTC
72d14ea add --benchmarks=levelstats option to db_bench, prevent "nan" in stats output Summary: Add --benchmarks=levelstats option to report per-level stats (#files, #bytes) Change readwhilewriting test to report response time for writes but exclude them from the stats merged by all threads. Prevent "NaN" in stats output by preventing division by 0. Remove "o" file I committed by mistake. Task ID: # Blame Rev: Test Plan: make check Revert Plan: Database Impact: Memcache Impact: Other Notes: EImportant: - begin *PUBLIC* platform impact section - Bugzilla: # - end platform impact - Reviewers: dhruba Reviewed By: dhruba Differential Revision: https://reviews.facebook.net/D9513 19 March 2013, 20:14:44 UTC
02c4598 Ignore a zero-sized file while looking for a seq-no in GetUpdatesSince Summary: Rocksdb can create 0 sized log files when it is opened and closed without any operations. The GetUpdatesSince fails currently if there is a log file of size zero. This diff fixes this. If a log file's size is 0, it is removed from the probable_file_list Test Plan: unit test Reviewers: dhruba, heyongqiang Reviewed By: heyongqiang CC: leveldb Differential Revision: https://reviews.facebook.net/D9507 19 March 2013, 18:00:09 UTC
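A rough C++ sketch of the filtering described in that commit; the struct and function names are hypothetical, not the RocksDB API:

    #include <algorithm>
    #include <cstdint>
    #include <string>
    #include <vector>

    struct LogFileInfo {
      std::string path;
      uint64_t size_bytes;
    };

    // Hypothetical sketch of the idea: drop zero-sized WAL files from the
    // candidate list before probing them for a sequence number, since an
    // empty file can never contain the requested updates.
    void RemoveEmptyLogs(std::vector<LogFileInfo>* probable_files) {
      probable_files->erase(
          std::remove_if(probable_files->begin(), probable_files->end(),
                         [](const LogFileInfo& f) { return f.size_bytes == 0; }),
          probable_files->end());
    }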
7b9db9c Do not report level size as zero when there are no files in L0 Summary: Instead of checking the number of files in L0, check the number of files in the requested level. Bug introduced in D4929 (diff trying to do too many things). Test Plan: db_test. Reviewers: dhruba, MarkCallaghan Reviewed By: dhruba Differential Revision: https://reviews.facebook.net/D9483 18 March 2013, 19:04:38 UTC
f04cc36 Fixing a careless mistake in ldb Summary: the negation of the condition currently checked was what actually had to be checked Test Plan: make ldb; python ldb_test.py Reviewers: sheki, dhruba Reviewed By: sheki Differential Revision: https://reviews.facebook.net/D9459 15 March 2013, 20:59:11 UTC
a78fb5e Doing away with boost in ldb_cmd.h Summary: boost functions cause complications while deploying to third-party Test Plan: make Reviewers: sheki, dhruba Reviewed By: sheki Differential Revision: https://reviews.facebook.net/D9441 15 March 2013, 01:16:46 UTC
5a8c884 Enhance db_bench Summary: Add --benchmarks=updaterandom for read-modify-write workloads. This is different from --benchmarks=readrandomwriterandom in a few ways. First, an "operation" is the combined time to do the read & write rather than treating them as two ops. Second, the same key is used for the read & write. Change RandomGenerator to support rows larger than 1M. That was using "assert" to fail and assert is compiled-away when -DNDEBUG is used. Add more options to db_bench --duration - sets the number of seconds for tests to run. When not set the operation count continues to be the limit. This is used by random operation tests. --use_snapshot - when set GetSnapshot() is called prior to each random read. This is to measure the overhead from using snapshots. --get_approx - when set GetApproximateSizes() is called prior to each random read. This is to measure the overhead for a query optimizer. Task ID: # Blame Rev: Test Plan: run db_bench Revert Plan: Database Impact: Memcache Impact: Other Notes: EImportant: - begin *PUBLIC* platform impact section - Bugzilla: # - end platform impact - Reviewers: dhruba Reviewed By: dhruba Differential Revision: https://reviews.facebook.net/D9267 14 March 2013, 23:00:23 UTC
e93dc3c Updating fbcode.gcc471.sh to use jemalloc 3.3.1 Summary: Updated TOOL_CHAIN_LIB_BASE to use the third-party version for jemalloc-3.3.1 which contains a bug fix in quarantine.cc. This was detected while debugging valgrind issues with the rocksdb table_test Test Plan: make table_test;valgrind --leak-check=full ./table_test Reviewers: dhruba, sheki, vamsi Reviewed By: sheki Differential Revision: https://reviews.facebook.net/D9387 13 March 2013, 22:34:50 UTC
1ba5abc Use posix_fallocate as default. Summary: Ftruncate does not return an error on disk-full. This causes a SIGBUS when the database tries to issue a Put call on a full disk. Use posix_fallocate for allocation instead of truncate. Add a check to use mmapped files only on ext4, xfs and tmpfs, as posix_fallocate is very slow on ext3 and older. Test Plan: make all check Reviewers: dhruba, chip Reviewed By: dhruba CC: adsharma, leveldb Differential Revision: https://reviews.facebook.net/D9291 13 March 2013, 20:50:26 UTC
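The distinction the commit relies on: posix_fallocate() reserves blocks and reports ENOSPC up front, while ftruncate() only grows the file logically, so a later store through an mmapped page on a full disk faults with SIGBUS. A minimal sketch of the allocation call (error handling simplified, not the actual env code):

    #include <fcntl.h>

    // Minimal sketch: posix_fallocate() returns 0 on success or an error
    // number (e.g. ENOSPC when the filesystem is full), so disk-full is
    // detected at allocation time rather than at the first mmapped write.
    bool ReserveFileSpace(int fd, off_t size) {
      int err = posix_fallocate(fd, /*offset=*/0, size);
      return err == 0;
    }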
4e581c6 Fix ldb_test.py to hide garbage from std output Summary: ldb_test.py did a lot of assertFalse checks and displayed all the failure messages on the standard output, making it hard to tell a successful run from a failed one. Also many empty lines used to be needlessly printed. Also added some progression-"feel-good" lines in the tests Test Plan: python ldb_test.py Reviewers: dhruba, sheki, dilipj, chip Reviewed By: dilipj CC: leveldb Differential Revision: https://reviews.facebook.net/D9297 13 March 2013, 04:07:07 UTC
5b278b5 Fix valgrind errors in rocksdb tests: auto_roll_logger_test, reduce_levels_test Summary: Fix for memory leaks in rocksdb tests. Also modified the variable NUM_FAILED_TESTS to print the actual number of failed tests. Test Plan: make <test>; valgrind --leak-check=full ./<test> Reviewers: sheki, dhruba Reviewed By: sheki CC: leveldb Differential Revision: https://reviews.facebook.net/D9333 12 March 2013, 23:03:16 UTC
ebf16f5 Prevent segfault because SizeUnderCompaction was called without any locks. Summary: SizeBeingCompacted was called without any lock protection. This causes crashes, especially when running db_bench with value_size=128K. The fix is to compute SizeUnderCompaction while holding the mutex and passing in these values into the call to Finalize. (gdb) where #4 leveldb::VersionSet::SizeBeingCompacted (this=this@entry=0x7f0b490931c0, level=level@entry=4) at db/version_set.cc:1827 #5 0x000000000043a3c8 in leveldb::VersionSet::Finalize (this=this@entry=0x7f0b490931c0, v=v@entry=0x7f0b3b86b480) at db/version_set.cc:1420 #6 0x00000000004418d1 in leveldb::VersionSet::LogAndApply (this=0x7f0b490931c0, edit=0x7f0b3dc8c200, mu=0x7f0b490835b0, new_descriptor_log=<optimized out>) at db/version_set.cc:1016 #7 0x00000000004222b2 in leveldb::DBImpl::InstallCompactionResults (this=this@entry=0x7f0b49083400, compact=compact@entry=0x7f0b2b8330f0) at db/db_impl.cc:1473 #8 0x0000000000426027 in leveldb::DBImpl::DoCompactionWork (this=this@entry=0x7f0b49083400, compact=compact@entry=0x7f0b2b8330f0) at db/db_impl.cc:1757 #9 0x0000000000426690 in leveldb::DBImpl::BackgroundCompaction (this=this@entry=0x7f0b49083400, madeProgress=madeProgress@entry=0x7f0b41bf2d1e, deletion_state=...) at db/db_impl.cc:1268 #10 0x0000000000428f42 in leveldb::DBImpl::BackgroundCall (this=0x7f0b49083400) at db/db_impl.cc:1170 #11 0x000000000045348e in BGThread (this=0x7f0b49023100) at util/env_posix.cc:941 #12 leveldb::(anonymous namespace)::PosixEnv::BGThreadWrapper (arg=0x7f0b49023100) at util/env_posix.cc:874 #13 0x00007f0b4a7cf10d in start_thread (arg=0x7f0b41bf3700) at pthread_create.c:301 #14 0x00007f0b49b4b11d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:115 Test Plan: make check I am running db_bench with a value size of 128K to see if the segfault is fixed. Reviewers: MarkCallaghan, sheki, emayanke Reviewed By: sheki CC: leveldb Differential Revision: https://reviews.facebook.net/D9279 11 March 2013, 21:09:01 UTC
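The pattern behind that fix, as a simplified C++ sketch (names are illustrative, not the actual VersionSet interface): take the snapshot of per-level compaction sizes while the DB mutex is held and pass it into Finalize, rather than letting Finalize read shared state unlocked:

    #include <cstdint>
    #include <mutex>
    #include <vector>

    // Simplified sketch of the locking pattern described in this commit.
    static std::mutex db_mutex;
    static std::vector<uint64_t> sizes_being_compacted = {0, 0, 0, 0, 0, 0, 0};

    static std::vector<uint64_t> SizesBeingCompactedLocked() {
      return sizes_being_compacted;  // caller must hold db_mutex
    }

    static void Finalize(const std::vector<uint64_t>& compacting) {
      // Pick the next compaction level using the snapshot taken under the mutex.
      (void)compacting;
    }

    void LogAndApplyStep() {
      std::lock_guard<std::mutex> lock(db_mutex);
      Finalize(SizesBeingCompactedLocked());  // never reads shared state unlocked
    }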
c04c956 Make the build-time show up in the leveldb library. Summary: This is a regression caused by https://github.com/facebook/rocksdb/commit/772f75b3fbc5cfcf4d519114751efeae04411fa1 If you do "strings libleveldb.a | grep leveldb_build_git_datetime" it will show you the time when the binary was built. Test Plan: make check Reviewers: emayanke Reviewed By: emayanke CC: leveldb Differential Revision: https://reviews.facebook.net/D9273 11 March 2013, 17:33:15 UTC
8ade935 [Report the #gets and #founds in db_stress] Summary: Also added some comments and fixed some bugs in stats reporting. Now the stats seem to match what is expected. Test Plan: [nponnekanti@dev902 /data/users/nponnekanti/rocksdb] ./db_stress --test_batches_snapshots=1 --ops_per_thread=1000 --threads=1 --max_key=320 LevelDB version : 1.5 Number of threads : 1 Ops per thread : 1000 Read percentage : 10 Delete percentage : 30 Max key : 320 Ratio #ops/#keys : 3 Num times DB reopens: 10 Batches/snapshots : 1 Num keys per lock : 4 Compression : snappy ------------------------------------------------ No lock creation because test_batches_snapshots set 2013/03/04-15:58:56 Starting database operations 2013/03/04-15:58:56 Reopening database for the 1th time 2013/03/04-15:58:56 Reopening database for the 2th time 2013/03/04-15:58:56 Reopening database for the 3th time 2013/03/04-15:58:56 Reopening database for the 4th time Created bg thread 0x7f4542bff700 2013/03/04-15:58:56 Reopening database for the 5th time 2013/03/04-15:58:56 Reopening database for the 6th time 2013/03/04-15:58:56 Reopening database for the 7th time 2013/03/04-15:58:57 Reopening database for the 8th time 2013/03/04-15:58:57 Reopening database for the 9th time 2013/03/04-15:58:57 Reopening database for the 10th time 2013/03/04-15:58:57 Reopening database for the 11th time 2013/03/04-15:58:57 Limited verification already done during gets Stress Test : 1811.551 micros/op 552 ops/sec : Wrote 0.10 MB (0.05 MB/sec) (598% of 1011 ops) : Wrote 6050 times : Deleted 3050 times : 500/900 gets found the key : Got errors 0 times [nponnekanti@dev902 /data/users/nponnekanti/rocksdb] ./db_stress --ops_per_thread=1000 --threads=1 --max_key=320 LevelDB version : 1.5 Number of threads : 1 Ops per thread : 1000 Read percentage : 10 Delete percentage : 30 Max key : 320 Ratio #ops/#keys : 3 Num times DB reopens: 10 Batches/snapshots : 0 Num keys per lock : 4 Compression : snappy ------------------------------------------------ Creating 80 locks 2013/03/04-15:58:17 Starting database operations 2013/03/04-15:58:17 Reopening database for the 1th time 2013/03/04-15:58:17 Reopening database for the 2th time 2013/03/04-15:58:17 Reopening database for the 3th time 2013/03/04-15:58:17 Reopening database for the 4th time Created bg thread 0x7fc0f5bff700 2013/03/04-15:58:17 Reopening database for the 5th time 2013/03/04-15:58:17 Reopening database for the 6th time 2013/03/04-15:58:18 Reopening database for the 7th time 2013/03/04-15:58:18 Reopening database for the 8th time 2013/03/04-15:58:18 Reopening database for the 9th time 2013/03/04-15:58:18 Reopening database for the 10th time 2013/03/04-15:58:18 Reopening database for the 11th time 2013/03/04-15:58:18 Starting verification Stress Test : 1836.258 micros/op 544 ops/sec : Wrote 0.01 MB (0.01 MB/sec) (59% of 1011 ops) : Wrote 605 times : Deleted 305 times : 50/90 gets found the key : Got errors 0 times 2013/03/04-15:58:18 Verification successful Revert Plan: OK Task ID: # Reviewers: emayanke, dhruba Reviewed By: emayanke CC: leveldb Differential Revision: https://reviews.facebook.net/D9081 11 March 2013, 04:57:00 UTC
3eed5c9 Putting option -std=gnu++0x in Makefile rather than fbcode.gcc471.sh Summary: This option is needed for compilation and the open-sourced rocksdb version will need to get it from the Makefile Test Plan: make clean;make Reviewers: MarkCallaghan, dhruba, sheki, chip CC: leveldb Differential Revision: https://reviews.facebook.net/D9243 08 March 2013, 19:56:18 UTC
9e1c89c Moving VALGRIND_VER which takes the valgrind version from third party to fbcode.gcc471.sh file Summary: the valgrind version being used is in a facebook-specific path and should be moved to the fbcode.gcc471.sh file instead of the makefile. The execution takes the environment's default valgrind version if the fbcode.gcc471.sh's valgrind_version is not available. Test Plan: make valgrind_check Reviewers: dhruba, sheki, akushner Reviewed By: dhruba Differential Revision: https://reviews.facebook.net/D9213 08 March 2013, 19:51:26 UTC
469724b Add appropriate parameters to make bulk-load go faster. Summary: 1. Create only 2 levels so that manual compactions are fast. 2. Set target file size to a large value Test Plan: make clean check Reviewers: kailiu, zshao Reviewed By: zshao CC: leveldb Differential Revision: https://reviews.facebook.net/D9231 08 March 2013, 18:52:16 UTC
3b6653b Make db_stress not purge redundant keys on some opens Summary: In light of the new option introduced by commit 806e26435037f5e2eb3b8c2d1e5f278a86fdb2ba where the database has an option to compact before flushing to disk, we want the stress test to test both sides of the option. Made it 'deterministically' and configurably change that option across reopens. Test Plan: make db_stress; ./db_stress with some different options Reviewers: dhruba, vamsi Reviewed By: dhruba CC: leveldb, sheki Differential Revision: https://reviews.facebook.net/D9165 08 March 2013, 12:55:07 UTC
6d812b6 A mechanism to detect manifest file write errors and put db in readonly mode. Summary: If there is an error while writing an edit to the manifest file, the manifest file is closed and reopened to check if the edit made it in. However, if the re-opening of the manifest is unsuccessful and options.paranoid_checks is set to true, then the db refuses to accept new puts, effectively putting the db in readonly mode. In a future diff, I would like to make the default value of paranoid_checks true. Test Plan: make check Reviewers: sheki Reviewed By: sheki CC: leveldb Differential Revision: https://reviews.facebook.net/D9201 07 March 2013, 17:45:49 UTC
3b87e2b Use version 3.8.1 for valgrind in third_party and do away with log files Summary: valgrind 3.7.0 used currently has a bug that needs LD_PRELOAD being set as a workaround. This caused problems when run on jenkins. 3.8.1 has fixed this issue and we should use it from third party Also, have done away with log files. The whole output will be there on the terminal and the failed tests will be listed at the end. This is done because jenkins only lets us download the different files and not view them in the browser which is undesirable. Test Plan: make valgrind_check Reviewers: akushner, dhruba, vamsi, sheki, heyongqiang Reviewed By: sheki CC: leveldb Differential Revision: https://reviews.facebook.net/D9171 07 March 2013, 01:47:31 UTC
d68880a Do not allow Transaction Log Iterator to fall ahead when writer is writing the same file Summary: Store the last flushed seq no. in db_impl. Check against it in transaction Log iterator. Do not attempt to read ahead if we do not know if the data is flushed completely. Does not work if flush is disabled. Any ideas on fixing that? * Minor change: iter->Next is called automatically the first time. Test Plan: existing tests pass. More ideas on testing this? Planning to run some stress test. Reviewers: dhruba, heyongqiang CC: leveldb Differential Revision: https://reviews.facebook.net/D9087 06 March 2013, 22:05:53 UTC
afed609 Fix db_stress crash by copying keys before changing sequence num to zero. Summary: The compaction process zeros out sequence numbers if the output is part of the bottommost level. The Slice is supposed to refer to an immutable data buffer. The merger that implements the priority queue while reading kvs as the input of a compaction run relies on this fact. The bug was that we were updating the sequence number of a record in-place and that was causing succeeding invocations of the merger to return kvs in arbitrary order of sequence numbers. The fix is to copy the key to a local memory buffer before setting its seqno to 0. Test Plan: Set Options.purge_redundant_kvs_while_flush = false and then run db_stress --ops_per_thread=1000 --max_key=320 Reviewers: emayanke, sheki Reviewed By: emayanke CC: leveldb Differential Revision: https://reviews.facebook.net/D9147 06 March 2013, 18:52:08 UTC
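A small sketch of the fix's shape, assuming LevelDB's internal-key layout (user key followed by an 8-byte little-endian tag holding (seqno << 8) | type); the helper below is illustrative, not the actual compaction code:

    #include <cstdint>
    #include <cstring>
    #include <string>

    static void EncodeFixed64(char* dst, uint64_t value) {
      std::memcpy(dst, &value, sizeof(value));  // little-endian, as LevelDB assumes
    }

    // Instead of rewriting the sequence number in place inside the buffer the
    // merging iterator still points at, copy the internal key into a local
    // std::string and zero the sequence number in the copy.
    std::string ZeroSequenceNumber(const std::string& internal_key) {
      std::string copy = internal_key;                        // original buffer untouched
      if (copy.size() < 8) return copy;                       // not a valid internal key
      const size_t n = copy.size();
      const uint8_t type = static_cast<uint8_t>(copy[n - 8]); // low byte of the 8-byte tag
      EncodeFixed64(&copy[n - 8], static_cast<uint64_t>(type)); // seqno := 0, keep the type
      return copy;
    }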
2fb47d6 add -j nproc in valgrind check script 06 March 2013, 01:38:13 UTC
760d511 Fix Bash Script to run valgrind. Test Plan: bash -n Reviewers: emayanke Differential Revision: https://reviews.facebook.net/D9129 06 March 2013, 00:50:40 UTC
e7b726d Downgrade optimization level from -O3 to -O2. Summary: When we use -O3, the gcc 4.7.1 compiler generates 'pinsrd' which is not supported on machines with "vendor_id : AuthenticAMD". Previous release of rocksdb used -O2. Optimization -O2 was introduced at https://github.com/facebook/rocksdb/commit/772f75b3fbc5cfcf4d519114751efeae04411fa1 Test Plan: make check Reviewers: chip, heyongqiang, sheki Reviewed By: sheki CC: leveldb Differential Revision: https://reviews.facebook.net/D9093 05 March 2013, 18:54:02 UTC
7b43500 [RocksDB] Add bulk_load option to Options and ldb Summary: Add a shortcut function to make it easier for people to efficiently bulk_load data into RocksDB. Test Plan: Tried ldb with "--bulk_load" and "--bulk_load --compact" and verified the outcome. Needs to consult the team on how to test this automatically. Reviewers: sheki, dhruba, emayanke, heyongqiang Reviewed By: dhruba CC: leveldb Differential Revision: https://reviews.facebook.net/D8907 05 March 2013, 08:34:53 UTC
f589668 Removed unnecessary file object in table_cache. Summary: TableCache->file is not used. Remove it. I kept the TableAndFile structure and will clean it up in a future patch. Test Plan: make clean check Reviewers: sheki, chip Reviewed By: chip CC: leveldb Differential Revision: https://reviews.facebook.net/D9075 04 March 2013, 21:56:23 UTC
993543d Add rate_delay_limit_milliseconds Summary: This adds the rate_delay_limit_milliseconds option to make the delay configurable in MakeRoomForWrite when the max compaction score is too high. This delay is called the Ln slowdown. This change also counts the Ln slowdown per level to make it possible to see where the stalls occur. From IO-bound performance testing, the Level N stalls occur: * with compression -> at the largest uncompressed level. This makes sense because compaction for compressed levels is much slower. When Lx is uncompressed and Lx+1 is compressed then files pile up at Lx because the (Lx,Lx+1)->Lx+1 compaction process is the first to be slowed by compression. * without compression -> at level 1 Task ID: #1832108 Blame Rev: Test Plan: run with real data, added test Revert Plan: Database Impact: Memcache Impact: Other Notes: EImportant: - begin *PUBLIC* platform impact section - Bugzilla: # - end platform impact - Reviewers: dhruba Reviewed By: dhruba Differential Revision: https://reviews.facebook.net/D9045 04 March 2013, 15:41:15 UTC
806e264 Ability for rocksdb to compact when flushing the in-memory memtable to a file in L0. Summary: Rocks accumulates recent writes and deletes in the in-memory memtable. When the memtable is full, it writes the contents of the memtable to a file in L0. This patch removes redundant records at the time of the flush. If there are multiple versions of the same key in the memtable, then only the most recent one is dumped into the output file. The purging of redundant records occurs only if the most recent snapshot is earlier than the earliest record in the memtable. Should we switch on this feature by default or should we keep this feature turned off in the default settings? Test Plan: Added test case to db_test.cc Reviewers: sheki, vamsi, emayanke, heyongqiang Reviewed By: sheki CC: leveldb Differential Revision: https://reviews.facebook.net/D8991 04 March 2013, 08:01:47 UTC
4992633 Enable the ability to set key size in db_bench in rocksdb Summary: 1. The default value for key size is still 16 2. Enable the ability to set the key size via command line --key_size= Test Plan: build & run db_bench and pass some value via the command line. Verify it works correctly. Reviewers: sheki Reviewed By: sheki CC: leveldb Differential Revision: https://reviews.facebook.net/D8943 01 March 2013, 22:10:09 UTC
ec96ad5 Automating valgrind to run with jenkins Summary: The script valgrind_test.sh runs Valgrind for all tests in the makefile including leak-checks and outputs the logs for every test in a separate file with the name "valgrind_log_<testname>". It prints the failed tests in the file "valgrind_failed_tests". All these files are created in the directory "VALGRIND_LOGS" which can be changed in the Makefile. Finally it checks the line-count for the file "valgrind_failed_tests" and returns 0 if no tests failed and 1 otherwise. Test Plan: ./valgrind_test.sh; Changed the tests to incorporate leaks and verified correctness Reviewers: dhruba, sheki, MarkCallaghan Reviewed By: sheki CC: zshao Differential Revision: https://reviews.facebook.net/D8877 01 March 2013, 19:44:40 UTC
c41f1e9 Codemod NULL to nullptr Summary: scripted NULL to nullptr in * include/leveldb/ * db/ * table/ * util/ Test Plan: make all check Reviewers: dhruba, emayanke Reviewed By: emayanke CC: leveldb Differential Revision: https://reviews.facebook.net/D9003 01 March 2013, 02:04:58 UTC
e45c7a8 Ability to support up to a million .sst files in the database Summary: There was an artificial limit of 50K files per database. This is insufficient if the database is 1 TB in size and each file is 2 MB. Test Plan: make check Reviewers: sheki, emayanke Reviewed By: emayanke CC: leveldb Differential Revision: https://reviews.facebook.net/D8919 27 February 2013, 00:27:51 UTC
a9866b7 Refactor statistics. Remove individual functions like incNumFileOpens Summary: Use only the counter mechanism. Do away with incNumFileOpens, incNumFileClose, incNumFileErrors s/NULL/nullptr/g in db/table_cache.cc Test Plan: make clean check Reviewers: dhruba, heyongqiang, emayanke Reviewed By: dhruba CC: leveldb Differential Revision: https://reviews.facebook.net/D8841 25 February 2013, 21:58:34 UTC
465b910 [Add a second kind of verification to db_stress] Summary: Currently the test tracks all writes in memory and uses it for verification at the end. This has 4 problems: (a) It needs mutex for each write to ensure in-memory update and leveldb update are done atomically. This slows down the benchmark. (b) Verification phase at the end is time consuming as well (c) Does not test batch writes or snapshots (d) We cannot kill the test and restart multiple times in a loop because in-memory state will be lost. I am adding a FLAGS_multi that does MultiGet/MultiPut/MultiDelete instead of get/put/delete to get/put/delete a group of related keys with same values atomically. Every get retrieves the group of keys and checks that their values are same. This does not have the above problems but the downside is that it does less amount of validation than the other approach. Test Plan: This whole thing is a test! Here is a small run. I am doing larger run now. [nponnekanti@dev902 /data/users/nponnekanti/rocksdb] ./db_stress --ops_per_thread=10000 --multi=1 --ops_per_key=25 LevelDB version : 1.5 Number of threads : 32 Ops per thread : 10000 Read percentage : 10 Delete percentage : 30 Max key : 2147483648 Num times DB reopens: 10 Num keys per lock : 4 Compression : snappy ------------------------------------------------ Creating 536870912 locks 2013/02/20-16:59:32 Starting database operations Created bg thread 0x7f9ebcfff700 2013/02/20-16:59:37 Reopening database for the 1th time 2013/02/20-16:59:46 Reopening database for the 2th time 2013/02/20-16:59:57 Reopening database for the 3th time 2013/02/20-17:00:11 Reopening database for the 4th time 2013/02/20-17:00:25 Reopening database for the 5th time 2013/02/20-17:00:36 Reopening database for the 6th time 2013/02/20-17:00:47 Reopening database for the 7th time 2013/02/20-17:00:59 Reopening database for the 8th time 2013/02/20-17:01:10 Reopening database for the 9th time 2013/02/20-17:01:20 Reopening database for the 10th time 2013/02/20-17:01:31 Reopening database for the 11th time 2013/02/20-17:01:31 Starting verification Stress Test : 109.125 micros/op 22191 ops/sec : Wrote 0.00 MB (0.23 MB/sec) (59% of 32 ops) : Deleted 10 times 2013/02/20-17:01:31 Verification successful Revert Plan: OK Task ID: # Reviewers: dhruba, emayanke Reviewed By: emayanke CC: leveldb Differential Revision: https://reviews.facebook.net/D8733 22 February 2013, 20:20:11 UTC
959337e Measure compaction time. Summary: just record time consumed in compaction Test Plan: compile Reviewers: dhruba Reviewed By: dhruba CC: leveldb Differential Revision: https://reviews.facebook.net/D8781 22 February 2013, 19:38:40 UTC
5024d72 Adding a rule in the Makefile to run valgrind on the rocksdb tests Summary: Added automated valgrind testing for rocksdb by adding valgrind_check in the Makefile Test Plan: make clean; make all check Reviewers: dhruba, sheki, MarkCallaghan, zshao Reviewed By: sheki CC: leveldb Differential Revision: https://reviews.facebook.net/D8787 22 February 2013, 02:41:00 UTC
ec77366 Counters for bytes written and read. Summary: * Counters for bytes read and written. * As a part of this diff, I also want to measure compaction times. @dhruba, can you point to which function I should time to get compaction times? Was looking at CompactRange. Test Plan: db_test Reviewers: dhruba, emayanke CC: leveldb Differential Revision: https://reviews.facebook.net/D8763 22 February 2013, 00:06:32 UTC
6abb30d [Missed adding cmdline parsing for new flags added in D8685] Summary: I had added FLAGS_numdistinct and FLAGS_deletepercent for randomwithverify but forgot to add cmdline parsing for those flags. Test Plan: [nponnekanti@dev902 /data/users/nponnekanti/rocksdb] ./db_bench --benchmarks=randomwithverify --numdistinct=500 LevelDB: version 1.5 Date: Thu Feb 21 10:34:40 2013 CPU: 24 * Intel(R) Xeon(R) CPU X5650 @ 2.67GHz CPUCache: 12288 KB Keys: 16 bytes each Values: 100 bytes each (50 bytes after compression) Entries: 1000000 RawSize: 110.6 MB (estimated) FileSize: 62.9 MB (estimated) Compression: snappy WARNING: Assertions are enabled; benchmarks unnecessarily slow ------------------------------------------------ Created bg thread 0x7fbf90bff700 randomwithverify : 4.693 micros/op 213098 ops/sec; ( get:900000 put:80000 del:20000 total:1000000 found:714556) [nponnekanti@dev902 /data/users/nponnekanti/rocksdb] ./db_bench --benchmarks=randomwithverify --deletepercent=5 LevelDB: version 1.5 Date: Thu Feb 21 10:35:03 2013 CPU: 24 * Intel(R) Xeon(R) CPU X5650 @ 2.67GHz CPUCache: 12288 KB Keys: 16 bytes each Values: 100 bytes each (50 bytes after compression) Entries: 1000000 RawSize: 110.6 MB (estimated) FileSize: 62.9 MB (estimated) Compression: snappy WARNING: Assertions are enabled; benchmarks unnecessarily slow ------------------------------------------------ Created bg thread 0x7fe14dfff700 randomwithverify : 4.883 micros/op 204798 ops/sec; ( get:900000 put:50000 del:50000 total:1000000 found:443847) [nponnekanti@dev902 /data/users/nponnekanti/rocksdb] [nponnekanti@dev902 /data/users/nponnekanti/rocksdb] ./db_bench --benchmarks=randomwithverify --deletepercent=5 --numdistinct=500 LevelDB: version 1.5 Date: Thu Feb 21 10:36:18 2013 CPU: 24 * Intel(R) Xeon(R) CPU X5650 @ 2.67GHz CPUCache: 12288 KB Keys: 16 bytes each Values: 100 bytes each (50 bytes after compression) Entries: 1000000 RawSize: 110.6 MB (estimated) FileSize: 62.9 MB (estimated) Compression: snappy WARNING: Assertions are enabled; benchmarks unnecessarily slow ------------------------------------------------ Created bg thread 0x7fc31c7ff700 randomwithverify : 4.920 micros/op 203233 ops/sec; ( get:900000 put:50000 del:50000 total:1000000 found:445522) Revert Plan: OK Task ID: # Reviewers: dhruba, emayanke Reviewed By: dhruba CC: leveldb Differential Revision: https://reviews.facebook.net/D8769 21 February 2013, 20:26:32 UTC
1052ea2 Exploring the rocksdb stress test Summary: Fixed a bug in the stress-test where the correct size was not being passed to GenerateValue. This bug was there since the beginning but assertions were switched on in our code-base only recently. Added comments on the top detailing how the stress test works and how to quicken/slow it down after investigation. Test Plan: make all check. ./db_stress Reviewers: dhruba, asad Reviewed By: dhruba CC: vamsi, sheki, heyongqiang, zshao Differential Revision: https://reviews.facebook.net/D8727 21 February 2013, 19:27:28 UTC
945d2b5 [Add randomwithverify benchmark option] Summary: Added RandomWithVerify benchmark option. Test Plan: This whole diff is to test. [nponnekanti@dev902 /data/users/nponnekanti/rocksdb] ./db_bench --benchmarks=randomwithverify LevelDB: version 1.5 Date: Tue Feb 19 17:50:28 2013 CPU: 24 * Intel(R) Xeon(R) CPU X5650 @ 2.67GHz CPUCache: 12288 KB Keys: 16 bytes each Values: 100 bytes each (50 bytes after compression) Entries: 1000000 RawSize: 110.6 MB (estimated) FileSize: 62.9 MB (estimated) Compression: snappy WARNING: Assertions are enabled; benchmarks unnecessarily slow ------------------------------------------------ Created bg thread 0x7fa9c3fff700 randomwithverify : 5.004 micros/op 199836 ops/sec; ( get:900000 put:80000 del:20000 total:1000000 found:711992) Revert Plan: OK Task ID: # Reviewers: dhruba, emayanke Reviewed By: dhruba CC: leveldb Differential Revision: https://reviews.facebook.net/D8685 21 February 2013, 18:27:02 UTC
9bf91c7 ldb waldump to print the keys along with other stats + NULL to nullptr in ldb_cmd.cc Summary: LDB tool to print the deleted/put keys in hex in the wal file. Test Plan: run ldb on a db to check if output was satisfactory Reviewers: dhruba Reviewed By: dhruba CC: leveldb Differential Revision: https://reviews.facebook.net/D8691 20 February 2013, 19:01:37 UTC
b2c50f1 Fix for the weird behaviour encountered by ldb Get where it could read only the second-latest value Summary: Changed the Get and Scan options with openForReadOnly mode to have access to the memtable. Changed the visibility of NewInternalIterator in db_impl from private to protected so that the derived class db_impl_read_only can call that in its NewIterator function for the scan case. The previous approach which changed the default for flush_on_destroy_ from false to true caused many problems in the unit tests due to empty sst files that it created. All unit tests pass now. Test Plan: make clean; make all check; ldb put and get and scans Reviewers: dhruba, heyongqiang, sheki Reviewed By: dhruba CC: kosievdmerwe, zshao, dilipj, kailiu Differential Revision: https://reviews.facebook.net/D8697 20 February 2013, 18:45:52 UTC
fe10200 Introduce histogram in statistics.h Summary: * Introduce a histogram in statistics.h * A stop watch to measure time. * Introduce two timers as a POC. Replaced NULL with nullptr to fight some lint errors. Should be useful for google. Test Plan: ran db_bench and checked stats. make all check Reviewers: dhruba, heyongqiang Reviewed By: dhruba CC: leveldb Differential Revision: https://reviews.facebook.net/D8637 20 February 2013, 18:43:32 UTC
61a3e67 Cleanup README.fb Summary: Cleanup README.fb Test Plan: Reviewers: CC: Task ID: # Blame Rev: 19 February 2013, 17:54:54 UTC
45f0030 Fix the "IO error" in auto_roll_logger_test Summary: I missed InitTestDb() in one of my tests. InitTestDb() initializes the test directory, without which the test will throw an IO error. This problem didn't occur before because I had already run the tests, so the test directory was already there. Test Plan: Reviewers: dhruba CC: Task ID: # Blame Rev: 19 February 2013, 08:13:22 UTC
ae09544 Temporarily remove the auto_roll_logger_test. Summary: auto_roll_logger_test is failing because it cannot create the test dir, leading to IO error: /tmp/leveldbtest-6108/db_log_test/LOG: No such file or directory. I'll temporarily remove the unit test and will bring it back once this problem is solved. Test Plan: make all check Reviewers: dhruba CC: leveldb Task ID: # Blame Rev: 19 February 2013, 07:30:26 UTC
f3901e0 Revert "Fix for the weird behaviour encountered by ldb Get where it could read only the second-latest value" This reverts commit 4c696ed0018800b62e2448a4ead438255140fc25. 19 February 2013, 06:32:27 UTC
fd367e6 Fix unit test failure in db_filename.cc Summary: c_test: db/filename.cc:74: std::string leveldb::DescriptorFileName(const string&,.... Test Plan: this is a failure in a unit test Differential Revision: https://reviews.facebook.net/D8667 19 February 2013, 05:53:56 UTC
4564915 Zero out redundant sequence numbers for kvs to increase compression efficiency Summary: The sequence numbers in each record eat up plenty of space on storage. The optimization zeroes out sequence numbers on kvs in the Lmax layer that are earlier than the earliest snapshot. Test Plan: Unit test attached. Differential Revision: https://reviews.facebook.net/D8619 19 February 2013, 05:51:15 UTC
27e26df cleanup README. Summary: Test Plan: Reviewers: CC: Task ID: # Blame Rev: 19 February 2013, 03:42:29 UTC
4c696ed Fix for the weird behaviour encountered by ldb Get where it could read only the second-latest value Summary: flush_on_destroy has a default value of false and the memtable is flushed in the dbimpl-destructor only when that is set to true. Because we want the memtable to be flushed every time the destructor is called (db is closed), and the cases where we work with the memtable only are rare, it is a good idea to give this a default value of true. Thus the put from ldb will have its data flushed to disk in the destructor and the next Get will be able to read it when opened with OpenForReadOnly. The reason that ldb could read the latest value when the db was opened in the normal Open mode is that the Get from normal Open first reads the memtable and directly finds the latest value written there, while the Get from OpenForReadOnly doesn't have access to the memtable (which is correct because all its Put/Modify operations are disabled) Test Plan: make all; ldb put and get and scans Reviewers: dhruba, heyongqiang, sheki Reviewed By: heyongqiang CC: kosievdmerwe, zshao, dilipj, kailiu Differential Revision: https://reviews.facebook.net/D8631 16 February 2013, 00:56:06 UTC
aaa0cbb Fix the warning introduced by auto_roll_logger_test Summary: Fix the warning [-Werror=format-security] and [-Werror=unused-result]. Test Plan: enforced the Werror and run make Task ID: 2101673 Blame Rev: Reviewers: heyongqiang Differential Revision: https://reviews.facebook.net/D8553 13 February 2013, 23:29:35 UTC
f02db1c Add zlib to our builds and tweak histogram output Summary: $SUBJECT -- cosmetic fix for histograms, print P75/P99, and make sure zlib is enabled for our command line tools. Test Plan: compile, test db_bench with --compression_type=zlib Reviewers: heyongqiang Reviewed By: heyongqiang CC: adsharma, leveldb Differential Revision: https://reviews.facebook.net/D8445 07 February 2013, 23:31:53 UTC
b63aafc Allow the logs to be purged by TTL. Summary: * Add a SplitByTTLLogger to enable this feature. In this diff I implemented a generalized AutoSplitLoggerBase class to simplify the development of such classes. * Refactor the existing AutoSplitLogger and fix several bugs. Test Plan: * Added unit tests for different types of "auto-splittable" loggers individually. * Tested the composite logger which allows the log files to be split by both TTL and log size. Reviewers: heyongqiang, dhruba Reviewed By: heyongqiang CC: zshao, leveldb Differential Revision: https://reviews.facebook.net/D8037 05 February 2013, 03:42:40 UTC
19012c2 Enable linting in arc. Summary: Just change some config. Test Plan: arc lint Reviewers: chip CC: leveldb Differential Revision: https://reviews.facebook.net/D8355 01 February 2013, 19:34:25 UTC
4dc02f7 Initialize all doubles to 0 in histogram.cc Summary: The existing code did not initialize a few doubles in histogram.cc. Cropped up when I wrote a unit-test. Test Plan: make all check Reviewers: chip Reviewed By: chip CC: leveldb Differential Revision: https://reviews.facebook.net/D8319 01 February 2013, 01:31:43 UTC
009034c Performant util/histogram. Summary: Earlier way to record in histogram => linearly search the BucketLimit array to find the bucket and increment the counter. Current way to record in histogram => store a HistMap statically which maps each value in the range [kFirstValue, kLastValue) to its bucket. In the process, use vectors instead of arrays and refactor some code into a HistogramHelper class. Test Plan: run db_bench with histogram=1 and see a histogram being printed. Reviewers: dhruba, chip, heyongqiang Reviewed By: chip CC: leveldb Differential Revision: https://reviews.facebook.net/D8265 01 February 2013, 00:10:34 UTC
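A rough sketch of that precomputed-lookup idea (constants and the class name are made up, not the util/histogram code): build, once, a table mapping every value in a bounded range to its bucket index so that recording a sample is an O(1) lookup instead of a linear scan over bucket limits:

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    class FastHistogram {
     public:
      // bucket_limits must be ascending; bucket i covers values below limits[i].
      FastHistogram(const std::vector<uint64_t>& bucket_limits, uint64_t last_value)
          : limits_(bucket_limits), counts_(bucket_limits.size(), 0),
            value_to_bucket_(static_cast<size_t>(last_value), 0) {
        size_t bucket = 0;
        for (uint64_t v = 0; v < last_value; ++v) {
          while (bucket + 1 < limits_.size() && v >= limits_[bucket]) ++bucket;
          value_to_bucket_[static_cast<size_t>(v)] = bucket;  // precomputed once
        }
      }

      void Add(uint64_t value) {
        size_t bucket = value < value_to_bucket_.size()
                            ? value_to_bucket_[static_cast<size_t>(value)]  // O(1)
                            : counts_.size() - 1;  // out of range -> last bucket
        counts_[bucket] += 1;
      }

     private:
      std::vector<uint64_t> limits_;
      std::vector<uint64_t> counts_;
      std::vector<size_t> value_to_bucket_;
    };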
4dcc0c8 Fixed cache key for block cache Summary: Added a function to `RandomAccessFile` to generate a unique ID for that file. Currently only `PosixRandomAccessFile` has this behaviour implemented and only on Linux. Changed how the key is generated in `Table::BlockReader`. Added tests to check whether the unique ID is stable, unique and not a prefix of another unique ID. Added tests to see that `Table` uses the cache more efficiently. Test Plan: make check Reviewers: chip, vamsi, dhruba Reviewed By: chip CC: leveldb Differential Revision: https://reviews.facebook.net/D8145 31 January 2013, 23:20:24 UTC
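For illustration only, since the real PosixRandomAccessFile unique-ID code is Linux-specific and may use different inputs: one plausible way to derive a stable per-file identifier is from the device and inode numbers returned by fstat, so block-cache keys stay tied to the file rather than to its path or table-cache slot:

    #include <sys/stat.h>
    #include <cstring>
    #include <string>

    // Illustrative sketch, not the actual GetUniqueID implementation.
    std::string FileUniqueId(int fd) {
      struct stat st;
      if (fstat(fd, &st) != 0) return std::string();  // empty => caller falls back
      char buf[sizeof(st.st_dev) + sizeof(st.st_ino)];
      std::memcpy(buf, &st.st_dev, sizeof(st.st_dev));
      std::memcpy(buf + sizeof(st.st_dev), &st.st_ino, sizeof(st.st_ino));
      return std::string(buf, sizeof(buf));
    }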
2c35652 Add OS_LINUX ifdef protections around fallocate parts Summary: fallocate is linux only, so let's protect it with ifdef's Test Plan: make Reviewers: sheki, dhruba Reviewed By: dhruba CC: leveldb Differential Revision: https://reviews.facebook.net/D8223 28 January 2013, 20:03:35 UTC
11ce6a0 Enhanced ldb to support data access commands Summary: Added put/get/scan/batchput/delete/approxsize Test Plan: Added pyunit script to test the newly added commands Reviewers: chip, leveldb Reviewed By: chip CC: zshao, emayanke Differential Revision: https://reviews.facebook.net/D7947 28 January 2013, 19:38:26 UTC
0b83a83 Fix poor error on num_levels mismatch and few other minor improvements Summary: Previously, if you opened a db with num_levels set lower than the database, you received the unhelpful message "Corruption: VersionEdit: new-file entry." Now you get a more verbose message describing the issue. Also, fix handling of compression_levels (both the run-over-the-end issue and the memory management of it). Lastly, unique_ptr'ify a couple of minor calls. Test Plan: make check Reviewers: dhruba Reviewed By: dhruba CC: leveldb Differential Revision: https://reviews.facebook.net/D8151 25 January 2013, 23:37:26 UTC
16e96b1 Cleanup TODO/NEWS/AUTHORS files Summary: These files are not relevant anymore. Test Plan: Reviewers: CC: Task ID: # Blame Rev: 25 January 2013, 17:11:26 UTC
772f75b Stop continually re-creating build_version.c Summary: We continually rebuilt build_version.c because we put the current date into it, but that's what __DATE__ already is. This makes builds faster. This also fixes an issue with 'make clean FOO' not working properly. Also tweak the build rules to be more consistent, always have warnings, and add a 'make release' rule to handle flags for release builds. Test Plan: make, make clean Reviewers: dhruba Reviewed By: dhruba Differential Revision: https://reviews.facebook.net/D8139 25 January 2013, 01:51:39 UTC
3dafdfb Use fallocate to prevent excessive allocation of sst files and logs Summary: On some filesystems, pre-allocation can be a considerable amount of space. xfs in our production environment pre-allocates by 1GB, for instance. By using fallocate to inform the kernel of our expected file sizes, we eliminate this wastage (that isn't recovered until the file is closed which, in the case of LOG files, can be a considerable amount of time). Test Plan: created an xfs loopback filesystem, mounted with allocsize=4M, and ran db_stress. LOG file without this change was 4M, and with it it was 128k then grew to normal size. Reviewers: dhruba Reviewed By: dhruba CC: adsharma, leveldb Differential Revision: https://reviews.facebook.net/D7953 24 January 2013, 20:25:13 UTC
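A hedged sketch of the pre-allocation hint (the exact flag choice is an assumption, not necessarily what the commit shipped): on Linux, fallocate with FALLOC_FL_KEEP_SIZE tells the kernel the size the file is expected to reach without changing its visible length, guarded by the OS_LINUX ifdef the earlier entry above mentions:

    #include <sys/types.h>
    #ifdef OS_LINUX
    #include <fcntl.h>
    #include <linux/falloc.h>
    #endif

    // Sketch: best-effort hint to the kernel about the expected file size so
    // aggressive filesystem pre-allocation (e.g. xfs allocsize) is not wasted.
    void HintFileSize(int fd, off_t offset, off_t len) {
    #ifdef OS_LINUX
      fallocate(fd, FALLOC_FL_KEEP_SIZE, offset, len);  // errors intentionally ignored
    #else
      (void)fd; (void)offset; (void)len;                // no-op on other platforms
    #endif
    }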
2fdf91a Fix a number of object lifetime/ownership issues Summary: Replace manual memory management with std::unique_ptr in a number of places; not exhaustive, but this fixes a few leaks with file handles as well as clarifies semantics of the ownership of file handles with log classes. Test Plan: db_stress, make check Reviewers: dhruba Reviewed By: dhruba CC: zshao, leveldb, heyongqiang Differential Revision: https://reviews.facebook.net/D8043 24 January 2013, 00:54:11 UTC
88b79b2 Fixed didIO not being set with no block_cache Summary: In `Table::BlockReader()` when there was no block cache `didIO` was not set. This didn't seem to matter as `didIO` is only used to trigger seek compactions. However, I would like it if someone else could check that is the case. Test Plan: `make check OPT="-g -O3"` Reviewers: dhruba, vamsi Reviewed By: dhruba CC: leveldb Differential Revision: https://reviews.facebook.net/D8133 23 January 2013, 20:49:10 UTC
16903c3 Add counters to count gets and writes Summary: Add Tickers to count Write's and Get's Test Plan: make check Reviewers: dhruba, chip Reviewed By: dhruba CC: leveldb Differential Revision: https://reviews.facebook.net/D7977 17 January 2013, 20:27:56 UTC
3c3df74 Fixed issues Valgrind found. Summary: Found issues with `db_test` and `db_stress` when running valgrind. `DBImpl` had an issue where, if a compaction failed, the uninitialised file size of an output file was used. This manifested as the final call to output to the log in `DoCompactionWork()` branching on uninitialized memory (all the way down in printf's innards). Test Plan: Ran `valgrind --track_origins=yes ./db_test` and `valgrind ./db_stress` to see if issues disappeared. Ran `make check` to see if there were no regressions. Reviewers: vamsi, dhruba Reviewed By: dhruba CC: leveldb Differential Revision: https://reviews.facebook.net/D8001 17 January 2013, 18:04:45 UTC
dfcf613 Minor improvements to the regression testing Summary: Several fixes: 1) Use mktemp to create the files and directories 2) Take the stat file as an argument so that the buildservers can specify a file in the WORKSPACE and not in /tmp 3) Use nproc to set make -j value. 4) Check for valid values before sending to ODS 5) Cleanup the grep/cut pipeline to just use awk Test Plan: Verify tests run and complete Reviewers: sheki, dhruba Reviewed By: sheki Differential Revision: https://reviews.facebook.net/D7995 16 January 2013, 22:47:20 UTC
4b1e9f0 Added an API in rocksdb for checking for "invalid argument" and "not supported" for leveldb::Status Summary: a function added to status.h to check whether Status::code is InvalidArgument and similarly for NotSupported state Test Plan: visual inspection Reviewers: heyongqiang, dhruba Reviewed By: dhruba CC: leveldb Differential Revision: https://reviews.facebook.net/D7989 16 January 2013, 22:30:45 UTC
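A brief usage sketch of the kind of predicate this entry describes; the names below are what later RocksDB releases expose, and the exact spelling in this 2013 commit may differ:

    #include "leveldb/status.h"  // header path as shipped in the rocksdb tree of this era

    // Usage sketch: branch on the status code instead of string-matching the message.
    void HandleOpenResult(const leveldb::Status& s) {
      if (s.IsNotSupported()) {
        // e.g. fall back to a supported option
      } else if (s.IsInvalidArgument()) {
        // e.g. report a configuration error to the caller
      }
    }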
7d5a438 rollover manifest file. Summary: Check in LogAndApply if the file size is more than the limit set in Options. Things to consider : will this be expensive? Test Plan: make all check. Inputs on a new unit test? Reviewers: dhruba Reviewed By: dhruba CC: leveldb Differential Revision: https://reviews.facebook.net/D7701 16 January 2013, 20:09:44 UTC
c54884f Fix Regression script. Run it for a shorter time. 16 January 2013, 19:23:35 UTC
a2dcd79 Add optional clang compile mode Summary: clang is an alternate compiler based on llvm. It produces nicer error messages and finds some bugs that gcc doesn't, such as the size_t change in this file (which caused some write return values to be misinterpreted!) Clang isn't the default; to try it, do "USE_CLANG=1 make" or "export USE_CLANG=1" then make as normal Test Plan: "make check" and "USE_CLANG=1 make check" Reviewers: dhruba Reviewed By: dhruba Differential Revision: https://reviews.facebook.net/D7899 16 January 2013, 02:48:37 UTC
9bbcab5 Fix broken build Summary: Mis-merged from HEAD, had a duplicate declaration. Test Plan: make -j32 OPT=-g Reviewers: dhruba Reviewed By: dhruba Differential Revision: https://reviews.facebook.net/D7911 15 January 2013, 22:05:49 UTC
28fe86c Fixed bug with seek compactions on Level 0 Summary: Due to how the code handled compactions in Level 0 in `PickCompaction()` it could be the case that two compactions on level 0 ran that produced tables in level 1 that overlap. However, this case seems like it would only occur on a seek compaction which is unlikely on level 0. Furthermore, level 0 and level 1 had to have a certain arrangement of files. Test Plan: make check Reviewers: dhruba, vamsi Reviewed By: dhruba CC: leveldb, sheki Differential Revision: https://reviews.facebook.net/D7923 15 January 2013, 20:43:09 UTC
8ce418c Change default regression test location to /tmp from /data/users/abhishekk 15 January 2013, 20:23:11 UTC
917377c Bash script to run db_bench with options and send data to ods. Summary: Basic Regression test. Plan to run this every-night and record qps in ods. Test Plan: ran locally and checked Reviewers: dhruba Reviewed By: dhruba CC: leveldb Differential Revision: https://reviews.facebook.net/D7929 15 January 2013, 20:18:01 UTC
c0cb289 Various build cleanups/improvements Summary: Specific changes: 1) Turn on -Werror so all warnings are errors 2) Fix some warnings the above now complains about 3) Add proper dependency support so changing a .h file forces a .c file to rebuild 4) Automatically use fbcode gcc on any internal machine rather than whatever system compiler is laying around 5) Fix jemalloc to once again be used in the builds (seemed like it wasn't being?) 6) Fix issue where 'git' would fail in build_detect_version because of LD_LIBRARY_PATH being set in the third-party build system Test Plan: make, make check, make clean, touch a header file, make sure rebuild is expected Reviewers: dhruba Reviewed By: dhruba Differential Revision: https://reviews.facebook.net/D7887 15 January 2013, 02:40:22 UTC
2ba125f fix warning for unused variable Test Plan: compile Reviewers: dhruba Reviewed By: dhruba CC: leveldb Differential Revision: https://reviews.facebook.net/D7857 11 January 2013, 23:00:47 UTC
85ad13b Port fix for Leveldb manifest writing bug from Open-Source Summary: Pretty much a blind copy of the patch in open source. Hope to get this in before we make a release Test Plan: make clean check Reviewers: dhruba, heyongqiang Reviewed By: dhruba CC: leveldb Differential Revision: https://reviews.facebook.net/D7809 10 January 2013, 20:06:03 UTC
41d7809 Pom changes to make release 1.5.7 for java. Summary: Ran ./build_java.sh bump_version 1.5.7 Test Plan: automated change Reviewers: dhruba Reviewed By: dhruba CC: leveldb Differential Revision: https://reviews.facebook.net/D7833 10 January 2013, 18:43:43 UTC
4e9d9d9 Fixed wrong assumption in Table::Open() Summary: `Table::Open()` assumes that `size` correctly describes the size of `file`, added a check that the footer is actually the right size and for good measure added assertions to `Footer::DecodeFrom()`. This was discovered by running `valgrind ./db_test` and seeing that `Footer::DecodeFrom()` was accessing uninitialized memory. Test Plan: make clean check ran `valgrind ./db_test` and saw DBTest.NoSpace no longer complains about a conditional jump being dependent on uninitialized memory. Reviewers: dhruba, vamsi, emayanke, sheki Reviewed By: dhruba CC: leveldb Differential Revision: https://reviews.facebook.net/D7815 09 January 2013, 18:44:30 UTC
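The guard amounts to a size check before decoding. A simplified sketch, using the classic LevelDB footer size (two max-length block handles plus an 8-byte magic number); not the actual Table::Open code:

    #include <cstdint>

    // Sketch: refuse to decode a footer when the caller-supplied file size is
    // smaller than one footer, instead of reading past the buffer.
    constexpr uint64_t kFooterEncodedLength = 48;  // 2 * 20-byte handles + 8-byte magic

    bool FooterFitsInFile(uint64_t file_size) {
      return file_size >= kFooterEncodedLength;
    }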
f881d6f Release 1.5.7.fb Summary: Test Plan: Reviewers: CC: Task ID: # Blame Rev: 09 January 2013, 18:26:03 UTC
2e1ad2c Remove unnecessary asserts in table/merger.cc Summary: The asserts introduced in https://reviews.facebook.net/D7629 are wrong. The direction of iteration is changed after the function call so the asserts fail. Test Plan: make clean check Reviewers: dhruba Reviewed By: dhruba CC: leveldb Differential Revision: https://reviews.facebook.net/D7827 09 January 2013, 17:52:45 UTC
d8371ef Fixing some issues Valgrind found Summary: Found some issues running Valgrind on `db_test` (there are still some outstanding ones) and fixed them. Test Plan: make check ran `valgrind ./db_test` and saw that errors no longer occur Reviewers: dhruba, vamsi, emayanke, sheki Reviewed By: dhruba CC: leveldb Differential Revision: https://reviews.facebook.net/D7803 08 January 2013, 20:16:40 UTC
4d339d7 Fixed memory leak in ShardedLRUCache Summary: `~ShardedLRUCache()` was empty despite `init()` allocating memory on the heap. Fixed the leak by freeing memory allocated by `init()`. Test Plan: make check Ran valgrind on db_test before and after patch and saw leaked memory went down Reviewers: vamsi, dhruba, emayanke, sheki Reviewed By: dhruba CC: leveldb Differential Revision: https://reviews.facebook.net/D7791 08 January 2013, 19:24:15 UTC
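The shape of the leak and its fix, as a minimal sketch with illustrative member names (not the actual ShardedLRUCache internals):

    // Sketch: if init() allocates the shard array with new[], the destructor
    // must delete[] it; an empty ~ShardedLRUCache() leaks every shard.
    class LRUCacheShard { /* ... */ };

    class ShardedLRUCache {
     public:
      explicit ShardedLRUCache(int num_shards) { init(num_shards); }
      ~ShardedLRUCache() { delete[] shards_; }  // the fix: release what init() acquired

     private:
      void init(int num_shards) { shards_ = new LRUCacheShard[num_shards]; }
      LRUCacheShard* shards_ = nullptr;
    };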
628dc2a db_bench should use the default value for max_grandparent_overlap_factor. Summary: This was a performance regression caused by https://reviews.facebook.net/D6729. The default value of max_grandparent_overlap_factor was erroneously set to 0 in db_bench. This was causing compactions to create really small files. Test Plan: Run --benchmarks=overwrite Reviewers: heyongqiang, emayanke, sheki, MarkCallaghan Reviewed By: sheki CC: leveldb Differential Revision: https://reviews.facebook.net/D7797 08 January 2013, 19:21:11 UTC
d6e873f Added clearer error message for failure to create db directory in DBImpl::Recover() Summary: Changed CreateDir() to CreateDirIfMissing() so a directory that already exists no longer causes an error. Fixed CreateDirIfMissing() and added Env.DirExists() Test Plan: make check to test for regressions Ran the following to test if the error message is not about lock files not existing ./db_bench --db=dir/testdb After creating a file "testdb", ran the following to see if it failed with sane error message: ./db_bench --db=testdb Reviewers: dhruba, emayanke, vamsi, sheki Reviewed By: emayanke CC: leveldb Differential Revision: https://reviews.facebook.net/D7707 07 January 2013, 18:11:18 UTC
4069f66 Add --seed, --read_range to db_bench Summary: Adds the option --seed to db_bench to specify the base for the per-thread RNG. When not set each thread uses the same value across runs of db_bench which defeats IO stress testing. Adds the option --read_range. When set to a value > 1 an iterator is created and each query done for the randomread benchmark will do a range scan for that many rows. When not set or set to 1 the existing behavior (a point lookup) is done. Fixes a bug where a printf format string was missing. Test Plan: run db_bench Reviewers: dhruba Reviewed By: dhruba CC: leveldb Differential Revision: https://reviews.facebook.net/D7749 07 January 2013, 17:56:10 UTC
8cd86a7 Fixing and adding some comments Summary: `MemTableList::Add()` neglected to mention that it took ownership of the reference held by its caller. The comment in `MemTable::Get()` was wrong in describing the format of the key. Test Plan: None Reviewers: dhruba, sheki, emayanke, vamsi Reviewed By: dhruba CC: leveldb Differential Revision: https://reviews.facebook.net/D7755 04 January 2013, 01:13:56 UTC
3f7af03 Use a priority queue to merge files. Summary: Use a std::priority_queue in merger.cc instead of doing a o(n) search every time. Currently only the ForwardIteration uses a Priority Queue. Test Plan: make all check Reviewers: dhruba Reviewed By: dhruba CC: emayanke, zshao Differential Revision: https://reviews.facebook.net/D7629 02 January 2013, 21:52:25 UTC
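A simplified, self-contained C++ sketch of the technique (not the actual merger.cc code): keep child iterators in a min-heap ordered by their current key, so advancing the merged stream costs O(log n) per step instead of an O(n) scan:

    #include <queue>
    #include <string>
    #include <vector>

    struct ChildIter {
      std::vector<std::string> keys;  // already sorted
      size_t pos = 0;
      bool Valid() const { return pos < keys.size(); }
      const std::string& key() const { return keys[pos]; }
      void Next() { ++pos; }
    };

    struct GreaterByKey {
      bool operator()(const ChildIter* a, const ChildIter* b) const {
        return a->key() > b->key();  // min-heap on the current key
      }
    };

    std::vector<std::string> MergeForward(std::vector<ChildIter>& children) {
      std::priority_queue<ChildIter*, std::vector<ChildIter*>, GreaterByKey> heap;
      for (auto& c : children) {
        if (c.Valid()) heap.push(&c);
      }
      std::vector<std::string> merged;
      while (!heap.empty()) {
        ChildIter* smallest = heap.top();
        heap.pop();
        merged.push_back(smallest->key());
        smallest->Next();
        if (smallest->Valid()) heap.push(smallest);  // re-insert with its new key
      }
      return merged;
    }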
d7d43ae ExtendOverlappingInputs too slow for large databases. Summary: There was a bug in the ExtendOverlappingInputs method so that the terminating condition for the backward search was incorrect. Test Plan: make clean check Reviewers: sheki, emayanke, MarkCallaghan Reviewed By: MarkCallaghan CC: leveldb Differential Revision: https://reviews.facebook.net/D7725 02 January 2013, 21:19:06 UTC
2fc394a Do not compile thrift for fbcode build. Summary: 1. The thrift libraries do not need to be built anymore. 2. SSE is dynamically detected via https://github.com/facebook/rocksdb/commit/1aae609b920f8cd4d93ac49798fa96367b9b864c Test Plan: compile and build Reviewers: sheki, emayanke Reviewed By: sheki CC: leveldb Differential Revision: https://reviews.facebook.net/D7665 28 December 2012, 00:11:11 UTC
5b05417 Compilation error when using gcc 4.7.1. Summary: There is a compilation error while using gcc 4.7.1. util/ldb_cmd.cc:381:3: error: ‘leveldb::ReadOptions::ReadOptions’ names the constructor, not the type util/ldb_cmd.cc:381:37: error: expected ‘;’ before ‘read_options’ util/ldb_cmd.cc:381:49: error: statement cannot resolve address of overloaded function Test Plan: make clean check Reviewers: sheki, emayanke, zshao Reviewed By: emayanke CC: leveldb Differential Revision: https://reviews.facebook.net/D7659 27 December 2012, 20:38:20 UTC
0f762ac ldb: Add command "ldb query" to support random read from the database Summary: The queries will come from stdin. One key per line. The output will be in stdout, in the format of "<key> ==> <value>" if found, or "<key>" if not found. "--hex" uses HEX-encoded keys and values in both input and output. Test Plan: ldb query --db=leveldb_db --hex Reviewers: dhruba, emayanke, sheki Reviewed By: dhruba CC: leveldb Differential Revision: https://reviews.facebook.net/D7617 27 December 2012, 04:37:42 UTC