https://github.com/voldemort/voldemort

3a935aa Releasing Voldemort 1.9.18 20 July 2015, 22:54:37 UTC
724596d Voldemort BnP pushes to all colos in parallel. Also contains many logging improvements to discriminate between hosts and clusters. 20 July 2015, 19:43:28 UTC
9c61ada Rewrite of the EventThrottler code to use Tehuti. - Makes throttling less vulnerable to spiky traffic sneaking in "between the interval". - Also fixes throttling for the HdfsFetcher when compression is enabled. 16 July 2015, 01:18:31 UTC
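The weakness this commit addresses is that a counter reset every interval can be fooled by a burst straddling the interval boundary. A trailing-window rate check avoids that. The sketch below is a deque-based illustration of the idea only; Tehuti's actual windowed-stats API and the real EventThrottler differ, and all names here are hypothetical.

```java
import java.util.ArrayDeque;

// Minimal sliding-window throttler sketch (illustrative, not Voldemort's code).
// Unlike a reset-every-interval counter, a trailing window cannot be fooled by
// spiky traffic sneaking in "between the interval".
class SlidingWindowThrottler {
    private static final long WINDOW_MS = 1000;
    private final long bytesPerSecond;
    private final ArrayDeque<long[]> events = new ArrayDeque<>(); // {timestampMs, bytes}
    private long bytesInWindow;

    SlidingWindowThrottler(long bytesPerSecond) {
        this.bytesPerSecond = bytesPerSecond;
    }

    /** Record 'bytes' sent at 'nowMs'; return the suggested pause in ms (0 = none). */
    long record(long bytes, long nowMs) {
        events.addLast(new long[] { nowMs, bytes });
        bytesInWindow += bytes;
        // Drop events that have slid out of the trailing window.
        while (!events.isEmpty() && events.peekFirst()[0] <= nowMs - WINDOW_MS) {
            bytesInWindow -= events.removeFirst()[1];
        }
        if (bytesInWindow <= bytesPerSecond) return 0;
        // Pause long enough for the surplus to drain at the target rate.
        return (bytesInWindow - bytesPerSecond) * 1000 / bytesPerSecond;
    }
}
```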
82f80b6 Fix the whitespace changes The previous refactor was done from my MacBook, which did not replace tabs with spaces. This messed up a lot of the editing. Instead of re-doing the change with spaces, I just formatted the code, which is easier and requires no re-verification. You can review the commit by adding ?w=1 to the GitHub URL, or use git diff -w on the command line, to ignore the whitespace; there are not many changes. 10 July 2015, 20:25:23 UTC
168cb69 Pass in additional parameters to fetch 1) Currently the AsyncOperationStatus is set for HdfsFetcher; if 2 or more fetches are going on, this would produce erroneous results. 2) Add StoreName, Version, MetadataStore for use in future fetches. 3) Enabled the Hadoop* tests; I don't know why they were not run in the ant tests. When I ported them for parity reasons I disabled them too, but I am enabling them now as the tests seem valid. 4) Made the fetch throw IOException instead of Throwable, which was less reliable and caught more than intended. 01 July 2015, 06:41:08 UTC
d70ed85 Refactor file fetcher to Strategy interface/class Refactored the file fetcher to a Strategy interface and class. In the future this lets you modify the file-fetching strategy, e.g. having BuildAndPush build only one copy per partition and chunk, with the fetcher fetching them under different names. There is no logic change; the code is just refactored. 01 July 2015, 06:41:08 UTC
6a42f59 Improved path handling and validation in VoldemortSwapJob 01 July 2015, 01:04:12 UTC
aa51b0b Merge pull request #271 from dallasmarlow/coordinator-class Thanks for the fix @dallasmarlow update coordinator class name in server script 30 June 2015, 21:47:13 UTC
a45fc83 Fixed voldemort.cluster.ClusterTest 30 June 2015, 21:06:29 UTC
c2db8fd First-cut implementation of Build and Push High Availability. This commit introduces a limited form of HA for BnP. The new functionality is disabled by default and can be enabled via the following server-side configurations, all of which are necessary: push.ha.enabled=true push.ha.cluster.id=<some arbitrary name which is unique per physical cluster> push.ha.lock.path=<some arbitrary HDFS path used for shared state> push.ha.lock.implementation=voldemort.store.readonly.swapper.HdfsFailedFetchLock push.ha.max.node.failure=1 The Build and Push job will interrogate each cluster it pushes to and honor each cluster's individual settings (i.e., one can enable HA on one cluster at a time, if desired). However, even if the server settings enable HA, this should be considered best-effort behavior, since some BnP users may be running older versions of BnP which will not honor HA settings. Furthermore, up-to-date BnP users can also set the following config to disable HA, regardless of server-side settings: push.ha.enabled=false Below is a description of the behavior of BnP HA, when enabled. When a Voldemort server fails to do some fetch(es), the BnP job attempts to acquire a lock by moving a file into a shared directory in HDFS. Once the lock is acquired, it will check the state in HDFS to see if any nodes have already been marked as disabled by other BnP jobs. It then determines whether the Voldemort node(s) which failed the current BnP job would bring the total number of unique failed nodes above the configured maximum, with the following outcome in each case: - If the total number of failed nodes is equal to or lower than the max allowed, then metadata is added to HDFS to mark the store/version currently being pushed as disabled on the problematic node. Afterwards, if the Voldemort server that failed the fetch is still online, it will be asked to go into offline mode (this is best-effort, as the server could be down). 
Finally, BnP proceeds with swapping the new data set version on, as if all nodes had fetched properly. - If, on the other hand, the total number of unique failed nodes is above the configured max, then the BnP job will fail and the nodes that succeeded the fetch will be asked to delete the new data, just like before. In either case, BnP will then release the shared lock by moving the lock file outside of the lock directory, so that other BnP instances can go through the same process one at a time, in a globally coordinated (mutually exclusive) fashion. All HA-related HDFS operations are retried every 10 seconds, up to 90 times (thus for a total of 15 minutes). These are configurable in the BnP job via push.ha.lock.hdfs.timeout and push.ha.lock.hdfs.retries respectively. When a Voldemort server is in offline mode, in order for BnP to continue working properly, the BnP jobs must be configured so that push.cluster points to the admin port, not the socket port. Configured in this way, transient HDFS issues may lead to the Voldemort server being put in offline mode, but wouldn't prevent future pushes from populating the newer data organically. External systems can be notified of the occurrences of the BnP HA code getting triggered via two new BuildAndPushStatus values passed to the custom BuildAndPushHooks registered with the job: SWAPPED (when things work normally) and SWAPPED_WITH_FAILURES (when a swap occurred despite some failed Voldemort node(s)). BnP jobs that failed because the maximum number of failed Voldemort nodes would have been exceeded still fail normally and trigger the FAILED hook. Future work: - Auto-recovery: Transitioning the server from offline to online mode, as well as cleaning up the shared metadata in HDFS, is not handled automatically as part of this commit (which is the main reason why BnP HA should not be enabled by default). 
The recovery process currently needs to be handled manually, though it could be automated (at least for the common cases) as part of future work. - Support non-HDFS-based locking mechanisms: the HdfsFailedFetchLock is an implementation of a new FailedFetchLock interface, which can serve as the basis for other distributed state/locking mechanisms (such as ZooKeeper, or a native Voldemort-based solution). Unrelated minor fixes and clean-ups included in this commit: - Cleaned up some dead code. - Cleaned up abusive admin client instantiations in BnP. - Cleaned up the closing of resources at the end of the BnP job. - Fixed an NPE in the ReadOnlyStorageEngine. - Fixed a broken sanity check in Cluster.getNumberOfTags(). - Improved some server-side logging statements. - Fixed the exception type thrown in ConfigurationStorageEngine's and FileBackedCachingStorageEngine's getCapability(). 30 June 2015, 18:11:45 UTC
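The HA commit above hinges on acquiring a cluster-wide lock through a filesystem operation. On HDFS the job moves a file into the lock location, relying on HDFS rename failing when the destination exists; a local filesystem rename silently overwrites, so this sketch uses the equivalent atomic create-if-absent instead. Class and method names are hypothetical stand-ins for HdfsFailedFetchLock, and java.nio.file stands in for the HDFS FileSystem API.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Illustrative sketch of lock-by-filesystem-operation (not Voldemort's code).
// Only one job can win the atomic create; everyone else backs off and retries.
class SharedLock {
    private final Path lockFile;

    SharedLock(Path lockDir) throws IOException {
        Files.createDirectories(lockDir);
        this.lockFile = lockDir.resolve("bnp.lock"); // well-known shared path
    }

    boolean tryAcquire() {
        try {
            Files.createFile(lockFile); // atomically fails if another job holds it
            return true;
        } catch (IOException held) {
            return false;
        }
    }

    /** The commit retries every 10s up to 90 times (15 minutes total). */
    boolean acquireWithRetries(int retries, long sleepMs) throws InterruptedException {
        for (int i = 0; i < retries; i++) {
            if (tryAcquire()) return true;
            Thread.sleep(sleepMs);
        }
        return false;
    }

    void release() throws IOException {
        Files.delete(lockFile);
    }
}
```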
050ec92 Merge pull request #273 from bitti/master @bitti thanks for the fix, merged it in. Fix SecurityException when running HadoopStoreJobRunner in an oozie java action 29 June 2015, 21:18:22 UTC
fb9cab6 Fix SecurityException when running HadoopStoreJobRunner in oozie 17 June 2015, 16:24:58 UTC
f3801cf Releasing voldemort 1.9.17 12 June 2015, 23:48:22 UTC
88fcf8d ConnectionException is not catastrophic 1) If a connection times out or fails during protocol negotiation, it is treated as a normal error instead of a catastrophic error. The connection timeout was a regression from the NIO connect fix. The protocol negotiation timeout is a new change to detect failed servers faster. 2) When a node is marked down, the outstanding queued requests are not failed and are let go through the connection creation cycle. When there are no outstanding requests, they can wait infinitely until the next request comes up. 3) UnreachableStoreException is sometimes double wrapped. This causes catastrophic errors to not be detected accurately. Created a utility method: when you are not sure whether a thrown exception could be an UnreachableStoreException, use this method, which handles the case correctly. 4) In a non-blocking connect, if the DNS does not resolve, Java throws UnresolvedAddressException instead of UnknownHostException. Probably an issue in Java. Also, UnresolvedAddressException is derived not from IOException but from IllegalArgumentException, which is weird. Fixed the code to handle this. 5) Tuned the remembered-exceptions timeout to twice the connection timeout. Previously it was hardcoded to 3 seconds, which was too aggressive for use cases where the connection timeout was set to more than 5 seconds. Added unit tests to verify all the above cases. 12 June 2015, 23:23:22 UTC
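Point 3 above describes a normalize-instead-of-rewrap helper. A minimal sketch of that shape, assuming a stand-in exception class (the real one lives in Voldemort's store package, and the method name here is hypothetical):

```java
// Sketch of a "wrap only if not already wrapped" helper (illustrative names).
// Double wrapping hides the original type from instanceof checks downstream,
// which is exactly what made catastrophic-error detection unreliable.
class UnreachableStoreException extends RuntimeException {
    UnreachableStoreException(String msg, Throwable cause) {
        super(msg, cause);
    }

    static UnreachableStoreException wrapIfNeeded(String msg, Throwable t) {
        if (t instanceof UnreachableStoreException) {
            return (UnreachableStoreException) t; // already the right type: reuse it
        }
        return new UnreachableStoreException(msg, t);
    }
}
```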
2b95f0d update coordinator class name in server script 12 June 2015, 16:37:50 UTC
d65f7db Releasing Voldemort 1.9.16 09 June 2015, 13:46:34 UTC
c574a37 Standardized recent release_notes formatting. 09 June 2015, 13:41:07 UTC
97d8694 Some more AvroUtils and BnP clean ups. 09 June 2015, 13:33:27 UTC
e13f6a2 Fix error reporting in AvroUtils.getSchemaFromPath() - report errors with an exception - report errors exactly once - provide the failing pathname - don't generate spurious cascading NPE failures 09 June 2015, 01:56:31 UTC
037a0dc Merge pull request #269 from FelixGV/VoldemortConfig_bug Fixed VoldemortConfig bug introduced in 3692fa3. 08 June 2015, 23:01:48 UTC
c7e6cec Fixed VoldemortConfig bug introduced in 3692fa3f493acf717b1431d624af4c997df4f2fd. 08 June 2015, 22:38:57 UTC
5f0cd8b Merge pull request #265 from gnb/VOLDENG-1912 Unregister the "-streaming-stats" mbean correctly 06 June 2015, 00:28:12 UTC
924c72f Unregister the "-streaming-stats" mbean correctly This avoids littering up the logs with JMX exceptions like this 2015/06/04 23:55:58.105 ERROR [JmxUtils] [voldemort-admin-server-t21] [voldemort] [] Error unregistering mbean javax.management.InstanceNotFoundException: voldemort.server.StoreRepository:type=cmp_comparative_insights at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1095) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.exclusiveUnregisterMBean(DefaultMBeanServerInterceptor.java:427) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.unregisterMBean(DefaultMBeanServerInterceptor.java:415) at com.sun.jmx.mbeanserver.JmxMBeanServer.unregisterMBean(JmxMBeanServer.java:546) at voldemort.utils.JmxUtils.unregisterMbean(JmxUtils.java:348) at voldemort.server.StoreRepository.removeStorageEngine(StoreRepository.java:187) at voldemort.server.storage.StorageService.removeEngine(StorageService.java:749) at voldemort.server.protocol.admin.AdminServiceRequestHandler.handleDeleteStore(AdminServiceRequestHandler.java:1487) at voldemort.server.protocol.admin.AdminServiceRequestHandler.handleRequest(AdminServiceRequestHandler.java:238) at voldemort.server.niosocket.AsyncRequestHandler.read(AsyncRequestHandler.java:190) at voldemort.common.nio.SelectorManagerWorker.run(SelectorManagerWorker.java:105) at voldemort.common.nio.SelectorManager.run(SelectorManager.java:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 06 June 2015, 00:22:19 UTC
e63bc53 Releasing Voldemort build 1.9.15 06 June 2015, 00:07:15 UTC
139e441 Fix log message HdfsFile does not have a toString method, which causes the object id to be printed in the log message; this broke the script we had for collecting the download speed. (Speed can now be calculated better using the stats file, but that is a separate project.) Added the number of directories and files being downloaded, in addition to the size. This will help track some more details, since dummy files are created in place for files that do not exist. Renamed HDFSFetcherAdvancedTest to HdfsFetcherAdvancedTest to keep it in sync with other naming conventions. 05 June 2015, 23:58:49 UTC
1592db0 Merge pull request #263 from FelixGV/hung_async_task_mitigation Added SO_TIMEOUT config (default 30 mins) in ConfigurableSocketFactory. Looks good. 04 June 2015, 18:49:05 UTC
3692fa3 Added SO_TIMEOUT config (default 30 mins) in ConfigurableSocketFactory and VoldemortConfig. Added logging to detect hung async jobs in AdminClient.waitForCompletion 04 June 2015, 18:24:00 UTC
13a4b81 HdfsCopyStatsTest fails intermittently The OS returns the expected files in random order. Use set instead of list. 31 May 2015, 16:32:33 UTC
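The fix above is a general testing pattern: a directory listing has no guaranteed order, so assertions must compare the set of names, not the sequence. A tiny illustration (helper name hypothetical):

```java
import java.util.HashSet;
import java.util.List;

// Order-insensitive comparison of file listings (illustrative sketch).
// new HashSet<>(list) discards order, so "b, a" equals "a, b".
class ListingCheck {
    static boolean sameFiles(List<String> listed, List<String> expected) {
        return new HashSet<>(listed).equals(new HashSet<>(expected));
    }
}
```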
b8d9525 Add more testing for Serialization I was doing some tests on the expected input and output for the serializers. I thought it would be a good idea, instead of just documenting them, to write unit tests to validate them. Most of the serializers have very poor test coverage, so I decided to add the unit tests. I will add more tests as I start working more on the expected input/output. 27 May 2015, 22:50:09 UTC
b540533 Release 1.9.14 Release version 1.9.14 22 May 2015, 16:49:55 UTC
df12409 RO Hdfs fetcher allocates too much memory 1) The Hdfs Fetcher in 1.0.4 uses ByteRangeInputStream. This class does not override the method read(byte[], int, int), so it defaults to the method from InputStream, which reads one byte at a time from the input stream. HttpInputStream's version of this method creates byte arrays for each read. So if you are downloading 2 TB of data, the server will allocate/free 2 TB before the data is downloaded. This creates too much garbage: new gen fills up in a few milliseconds and GC happens. Though each GC is fast, this much GC causes latency to spike and the JVM to run out of memory. 2) http://svn.apache.org/viewvc?view=revision&revision=1330500 fixed this issue in April 2012, knowingly or unknowingly. I tried upgrading to the latest Hadoop, but it brings in ProtoBuf 2.5.0 and Avro 1.7. When I disabled the dependencies it failed at runtime expecting protobuf 2.5.0. I enabled only protobuf and it has no runtime dependency on Avro 1.7. But I am saving that fix for a later day. The branch is hadoop_Version_Upgrade, which uses Hadoop 2.6.0 and ProtoBuf 2.6.1. 18 May 2015, 18:49:17 UTC
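The root cause in point 1 is a documented InputStream contract: the default read(byte[], int, int) loops over the single-byte read(). The counting stream below (purely illustrative, not Hadoop's class) makes the per-byte behavior visible:

```java
import java.io.IOException;
import java.io.InputStream;

// A stream that only overrides read() and inherits InputStream's default
// read(byte[], int, int), which services a bulk read one byte at a time.
// The counter shows why a 2 TB download degenerates into per-byte work
// (and, in HttpInputStream's case, per-byte allocation).
class OneByteAtATimeStream extends InputStream {
    final byte[] data;
    int pos = 0;
    int singleByteReads = 0;

    OneByteAtATimeStream(byte[] data) {
        this.data = data;
    }

    @Override
    public int read() throws IOException {
        singleByteReads++;
        return pos < data.length ? (data[pos++] & 0xFF) : -1;
    }
    // No read(byte[], int, int) override: the byte-by-byte default applies.
}
```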
e2d845c Output stats file for RO files download A .stats directory will be created and will contain the last X (default: 50) stats files. If a version-X is fetched, a file with the same name as that version directory will contain the stats for the download. The stats file contains the individual file names, the time each took to download, and a few other details. Added unit tests in HdfsCopyStatsTest. 18 May 2015, 18:49:17 UTC
b5db5ed fix slop pusher unit test 15 May 2015, 21:22:29 UTC
45cce9e fix store-delete command 13 May 2015, 22:54:19 UTC
705b6ff add admin command for meta get-ro and add test config for readonly-two-nodes-cluster 13 May 2015, 20:49:30 UTC
eabb057 Add Admin API to list/stop/enable scheduled jobs 13 May 2015, 17:48:54 UTC
20f1037 add storeops.delete and deleteQuotaForNode, fix vector clock for setQuotaForNode 12 May 2015, 21:39:11 UTC
b4fa1cb Refactor HdfsFetcher 1) Created directory and File class to help me in the future. 2) Cleaned up some code to make for easier readability. 12 May 2015, 17:54:57 UTC
3378d6c Code compiled on Java 8 fails to run on Java 6 Ever witnessed: Exception in thread "main" java.lang.NoSuchMethodError: java.util.concurrent.ConcurrentHashMap.keySet()Ljava/util/concurrent/ConcurrentHashMap$KeySetView; at voldemort.store.metadata.MetadataStore.updateRoutingStrategies(MetadataStore.java:855) at voldemort.store.metadata.MetadataStore.init(MetadataStore.java:1189) This is because of the issue documented here: https://gist.github.com/AlainODea/1375759b8720a3f9f094 11 May 2015, 22:30:00 UTC
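The trap: Java 8 added a covariant ConcurrentHashMap.keySet() returning KeySetView, so code compiled on Java 8 against a ConcurrentHashMap-typed variable links against that method and throws NoSuchMethodError on Java 6/7. Declaring the variable as Map keeps the call bound to Map.keySet(). A minimal sketch (class name hypothetical, not Voldemort's fix verbatim):

```java
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Binding keySet() through the Map interface avoids the Java 8-only
// ConcurrentHashMap.keySet()Ljava/util/concurrent/ConcurrentHashMap$KeySetView;
// descriptor in the compiled bytecode.
class KeySetCompat {
    static Set<String> keysOf(Map<String, Integer> map) {
        return map.keySet(); // compiles against Map.keySet(): safe on Java 6
    }
}
```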
20455c7 Releasing Voldemort 1.9.13 11 May 2015, 21:45:23 UTC
b4674b5 Suppress ObsoleteVersionException in logs During the refactoring of the server buffers, all errors from the storage engine were logged. The previous code did not log any errors on writes. I looked at the exception stack and could not see other errors that need to be suppressed. Verified that ProtocolBuffer does not log any error, so only the Voldemort native request handler is affected. 11 May 2015, 21:09:02 UTC
1c8e0d4 NIO style connect Problems: 1) Connect blocks the selector. This causes other operations (read/write) queued on the selector to incur additional latency or timeouts. This is worse when you have data centers that are far away. 2) The ProtocolNegotiation request is done after the connection establishment, which blocks the selector in the same manner. 3) If exceptions are encountered while getting connections from the queue, they are ignored. Solutions: The connection creation is now async. The create method is modified to createAsync and takes in the pool object. For NIO, createAsync triggers an async operation which checks in the connection when it is ready. For blocking connections, createAsync blocks, creates the connection, and checks the connection in to the pool before returning. As the connection creation is async now, exceptions are remembered (for 5 seconds) in the pool. When some thread asks for a connection and an exception is remembered, it will get that exception. There is no ordering in the way connections are handed out: one thread can request a connection and, before it can wait, another thread could steal this connection. This is avoided to a certain extent by having the thread split the blocking wait into two halves and create a connection if required, instead of doing one blocking wait. This should not be a problem in the real world, as when you reach steady state (the required number of connections has been created) this can't happen. Upgraded the source compatibility from Java 5 to 6. Most of the code is written with the assumption of Java 6; I don't believe you can run this code on Java 5. So the impact should be minimal, but if it goes in the Client V2 branch, it will get the benefit of additional testing. 06 May 2015, 22:06:42 UTC
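The "remembered exceptions" part of this change is a small fail-fast pattern: once an async connect fails, checkouts during the next few seconds get the cached failure instead of blocking. A sketch of the shape, with time passed in explicitly to keep it testable (all names illustrative; the real pool ties this to the connection timeout as described in a later commit):

```java
// Illustrative sketch of remembering a connection failure for a bounded
// time so that subsequent checkouts fail fast instead of blocking.
class RememberedFailure {
    private final long rememberMs; // e.g. 5000 in this commit's description
    private Exception lastFailure;
    private long failureAtMs;

    RememberedFailure(long rememberMs) {
        this.rememberMs = rememberMs;
    }

    void recordFailure(Exception e, long nowMs) {
        lastFailure = e;
        failureAtMs = nowMs;
    }

    /** Returns the remembered exception if still fresh, else null (proceed normally). */
    Exception checkoutError(long nowMs) {
        if (lastFailure != null && nowMs - failureAtMs < rememberMs) {
            return lastFailure;
        }
        return null;
    }
}
```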
5c03ea6 Releasing Voldemort 1.9.12 01 May 2015, 22:27:47 UTC
706a1f3 Add more tests and fix buffer size of GZIP Streams 01 May 2015, 21:09:59 UTC
495a234 Merge pull request #256 from FelixGV/disable_ant_build Fully disabled the Ant build in favor of the Gradle one. Though the docs task is not yet ported to gradle, we can always fetch the build.xml from an older trunk and generate the docs. Given the amount of confusion it causes, I will merge this change in. 30 April 2015, 18:18:37 UTC
d310bf2 Fully disabled the Ant build in favor of the Gradle one. 30 April 2015, 18:11:43 UTC
8addbf7 Fix the Readme to use Gradle Remove the ant and fix the readme to use Gradle. 30 April 2015, 17:44:09 UTC
c01f26e Rebalance unit tests fail intermittently There are 2 issues. 1) Put is asynchronous, so there needs to be wait time before the put is verified on all the nodes. 2) Repeated puts need to generate different vector clocks. 27 April 2015, 22:01:41 UTC
9af4da4 turn on reset-quota by default for rebalance-controller-cli 25 April 2015, 01:22:27 UTC
a831610 split quota-resetting logic to QuotaResetter class and add unit test 25 April 2015, 01:22:27 UTC
8e39e55 add reset-quota logic in RebalanceControllerCLI 25 April 2015, 01:22:27 UTC
a103fca Releasing Voldemort 1.9.11 24 April 2015, 00:43:21 UTC
2ec72c4 Adding compression to RO path - first-pass commit VoldemortConfig - Added a new config for the compression codec. The default value for this property is GZIP. This is used by the AdminServiceRequestHandler to respond to the VoldemortBuildAndPushJob with the supported codec. VAdminProto - Added a new request type for getting the supported compression codecs from the RO Voldemort server. AdminServiceRequestHandler - New method to handle the above request type. AdminClient - Provides a method, getSupportedROStorageCompressionCodecs, that supports the above request type. VoldemortBuildAndPushJob - Inside run(), immediately after the cluster equality checks, an admin request is issued to the VoldemortServer (specified by the property "push.node") to fetch the RO compression codec supported by the server. - If any of the supported codecs match COMPRESSION_CODEC, then compression-specific properties are set. Otherwise no compression is enabled. AbstractHadoopJob - This is where the RO compression-specific properties are set in the JobConf, inside the createJobConf() method. HadoopStoreWriter and HadoopStoreWriterPerBucket - Added dummy test-only constructors - Creating index and value file streams based on compression settings - Got rid of some unused variables - Minor movement of code. HdfsFetcher - Changed copyFileWithCheckSum() to check whether the files end with ".gz" and create a GZIPInputStream based on that. - The GZIPInputStream (if compression is enabled) wraps the original FSDataInputStream. Tests for HadoopStoreWriter and HadoopStoreWriterPerBucket - These are parameterized tests - they take in a boolean to either save keys or not - Run two tests, compressed and uncompressed - Have tighter assumptions and use the test-specific constructors in the corresponding classes. 23 April 2015, 23:14:37 UTC
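The fetcher-side check in miniature: wrap the raw stream in a GZIPInputStream only when the file name ends with ".gz". In the commit the raw stream is HDFS's FSDataInputStream; here a ByteArrayInputStream stands in, and the helper names are hypothetical.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.zip.GZIPInputStream;

// Illustrative sketch of suffix-based decompression, as copyFileWithCheckSum()
// is described to do for ".gz" files.
class MaybeGzip {
    static InputStream open(String fileName, InputStream raw) throws IOException {
        return fileName.endsWith(".gz") ? new GZIPInputStream(raw) : raw;
    }

    static byte[] readAll(InputStream in) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[4096];
        for (int n; (n = in.read(buf)) != -1; ) {
            out.write(buf, 0, n);
        }
        return out.toByteArray();
    }
}
```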
9de1042 Fix mode option in cluster fork lift 21 April 2015, 22:39:15 UTC
9e21ccb create admin api for quota operations 1. Get quota by node id 2. Set quota by node id 3. Rebalance quota 4. Unit test for the new admin apis 20 April 2015, 21:26:50 UTC
b83e3e7 add metadata key for quota.enforcement.enabled 20 April 2015, 21:26:50 UTC
c8e583e Releasing Voldemort 1.9.10 16 April 2015, 01:10:50 UTC
32e2e0b Client buffer cleanup and isCompleteResponse 1) The client's isCompleteResponse for Get and GetAll allocated the entire key and value, then discarded them immediately. Now the byte array is not deserialized; validity is verified by advancing the pointers. 2) The put request size is calculated and the buffer is grown to the required size, to avoid double allocation. 16 April 2015, 00:00:40 UTC
3f425ef Vector clock deserializer from InputStream Avoid double-allocating the value for puts, which can potentially be a few kilobytes. The vector clock now has a deserializer from InputStream, and it is used to avoid the double allocation on the hot path. 16 April 2015, 00:00:40 UTC
94be1a5 ShareBuffer Refactoring Refactored the Shared Buffer code to eliminate the separate read and write buffers. Now a common buffer is used and the code is refactored into its own classes. running the unit test. 16 April 2015, 00:00:40 UTC
b3becf3 Separate Client and Admin Request Handlers Separated the Admin and Client request handlers. Currently the client port will answer admin requests and the admin port will answer client requests. You can bootstrap from either of these ports, and the client after bootstrapping sends queries to the correct ports. This is dangerous, as most security implementations for Voldemort rely on blocking the admin port via firewall, and an attacker could change the Voldemort source code to send admin requests to the client port. My intention with this fix is to make sure that the client port answers only client requests. This will help me make the client request handler share the read and write buffer without touching the admin request handler. Though it could be done for both client and admin, admin requests are too few and there are too many places to touch, so I will fix only the client request handler. The AdminClient expects both the client and admin request handlers: it does some get-remote-metadata calls which use the Voldemort native V1 requests on the admin port. So the admin request handler is left unchanged; I just moved some code so that the client request handlers are isolated. 16 April 2015, 00:00:40 UTC
4a87d69 Client sharing read/write buffer The client either writes to or reads from the socket, never both at once. So the buffer can be shared, which will bring down the memory requirement for the client by half. But the client has to watch for 2 things: 1) On write, the buffer expands as necessary, so it needs to be reinitialized if it grows. 2) On read, if the buffer can't accommodate the data it grows as necessary; this case also needs to be handled. This works as expected and the unit tests are passing. I will put it through VPL to measure the efficiency of the fixes. Created a new class to hold the buffer reference. This helps to share the buffer between input and output streams easily. Previously you had to watch out for places where one buffer moved away from the other and call an explicit method to update it. Also moved much of the buffer growing and resetting logic into common code, so it is more readable and understandable. Should I rename the ByteBufferContainer to MutableByteBuffer? That would fit the MutableInt pattern nicely, where a single int can be shared by multiple classes and an update in one is visible to the others. 16 April 2015, 00:00:40 UTC
298bdc1 Increase the heap size for tests Increased the heap size for tests to 8GB. The ZoneShrinkage tests fail from time to time with errors, as they run out of heap. 13 April 2015, 22:01:53 UTC
d546a02 Releasing Voldemort 1.9.9 10 April 2015, 18:34:22 UTC
ca08a06 Merge pull request #251 from voldemort/revert-223-master Revert "Steps towards automating cluster zone expansion" 09 April 2015, 21:15:01 UTC
d2190fb Revert "Steps towards automating cluster zone expansion" 09 April 2015, 21:14:16 UTC
a754f35 Merge pull request #223 from gnb/master Steps towards automating cluster zone expansion 09 April 2015, 21:12:19 UTC
46df86e Merge pull request #250 from gnb/roswap2 Improve error messages in ROReplicationHelperCLI 08 April 2015, 23:08:25 UTC
0361984 Improve error messages in ROReplicationHelperCLI Split up one "Unqualified store" error message into three with three separate checks, so that the person who runs this code can actually tell which of those conditions went wrong. 08 April 2015, 21:22:48 UTC
db6de76 Refactor the HDFS fetcher 1) Move some code into a method 2) Allocate the buffer per fetch instead of per file. Tested by fetching 2 directories on HDFS and verified the output. 07 April 2015, 21:52:46 UTC
9b417b6 Incorrectly pushed a logging change to master I had a log.info statement to see where the queries were being sent. It was stashed, but I am not sure how it made it into master. Reverting the change. 07 April 2015, 21:41:45 UTC
f144780 Metadata queries are not sent to the same zone Metadata queries for system stores are sent to the lowest-numbered node in the cluster instead of in the zone. Added a hack to the local-pref strategy: if the client zone is set, use zone-local routing. The code is (unnecessarily) very complicated; I did not clean it up, as I didn't want to re-run it for all the scenarios and wanted to make a safe fix. 06 April 2015, 22:11:56 UTC
874bef9 RouteToAllStrategy routes to nodes 0..n RouteToAllStrategy always tries the nodes in a fixed order. This creates too many metadata queries on node 0. For a zoned cluster, the node with the lowest id gets bombarded with too many connections and get queries. Create a shuffled node list when the cluster is initialized and use it in the routing strategy. The random seed is set at initialization to make the order random every time the cluster is re-initialized. 06 April 2015, 19:27:13 UTC
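The gist of the fix above: compute a shuffled copy of the node list once, at cluster initialization, so route-to-all traffic no longer always starts at node 0, while every request in that cluster instance still uses the same order. A minimal sketch (class name hypothetical, seed made explicit for testability):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

// Illustrative sketch: shuffle once at initialization, then reuse the order.
// A fresh seed per cluster initialization spreads the first-tried node
// across the fleet; within one instance the order stays stable.
class ShuffledNodeOrder {
    final List<Integer> order;

    ShuffledNodeOrder(List<Integer> nodeIds, long seed) {
        order = new ArrayList<>(nodeIds);
        Collections.shuffle(order, new Random(seed));
    }
}
```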
d004dd8 Added more tests for the ClientRequestFormat Added more tests to validate isCompleteResponse for the clientRequestFormat. Noticed that protocolBuffers will break if the server sends in less than 4 bytes of data. 31 March 2015, 16:10:11 UTC
224ce69 Test cases for client request response 1) Validates the request response. 2) Added more validation for missing version timestamps and other issues 3) Added backward compatibility tests. 31 March 2015, 15:41:44 UTC
f1f4853 Merge pull request #246 from gnb/hdfs-fetcher HdfsFetcher fixes 30 March 2015, 20:55:07 UTC
cff89fc Add a destination dir arg to HdfsFetcher main 30 March 2015, 20:01:23 UTC
0b576cd Allow HdfsFetcher to fetch individual files but only from the main(), not when fetch() is invoked by the server. 30 March 2015, 19:59:33 UTC
3a1f460 BnP improvements: - Removed a bunch of redundant constructors that made the code unreadable. - Added a min.number.of.records config (defaults to 1) to prevent pushing empty stores. - Improved error handling and reporting in BnP's run function. 27 March 2015, 22:04:47 UTC
258a7c0 add local option for ReadOnlyReplicationHelperCLI 26 March 2015, 01:08:40 UTC
e55cf79 Merge pull request #244 from bhasudha/playing_around_with_rocksdb Rocksdb StorageEngine support for Voldemort 24 March 2015, 06:05:38 UTC
ad87133 Incorporating code review feedbacks - Adding optional read lock for get API - Config change for RocksDb default data directory - minor log fixes 22 March 2015, 19:46:47 UTC
e3e2bf7 Warn on add/delete store in SetMetadata command 1) Currently, if you add or delete a store using set metadata, the cluster will be in an inconsistent state. Added a warning to the server-side log if this happens. 2) ReplaceNodeCLI does not work correctly if you start the node with an empty stores.xml. Fixed that; now it accepts an empty stores.xml or the same stores.xml as the other nodes. 3) get stores.xml returns a different order at different times. Made the ordering constant, sorted by storeName. 4) vadmin.sh meta check stores.xml verifies that the store exists and is queryable on the node. 22 March 2015, 19:23:53 UTC
de59607 Warn on add/delete store in SetMetadata command 1) Currently, if you add or delete a store using set metadata, the cluster will be in an inconsistent state. Added a warning to the server-side log if this happens. 2) ReplaceNodeCLI does not work correctly if you start the node with an empty stores.xml. Fixed that; now it accepts an empty stores.xml or the same stores.xml as the other nodes. 3) get stores.xml returns a different order at different times. Made the ordering constant, sorted by storeName. 4) vadmin.sh meta check stores.xml verifies that the store exists and is queryable on the node. 18 March 2015, 19:51:42 UTC
4758860 Deleting recently added test store 18 March 2015, 05:09:21 UTC
4cba78e Using rocksdbjni jar from maven 18 March 2015, 04:57:44 UTC
2c17d1d New test store New test store with long key and string value. 18 March 2015, 04:28:46 UTC
ab005b9 Add logging messages 18 March 2015, 04:28:46 UTC
fad1e28 Adding a parameterized StorageEngineTest for RocksDB In this commit: * RocksdbStorageEngineTest, which extends AbstractStorageEngineTest * Some fixes to the RocksdbStorageEngine * Adding support for getVersions(ByteArray key) 18 March 2015, 04:28:46 UTC
c5a63e1 Modifying the unit test to be a parameterized test * The unit test now tests both RocksdbStorageEngine and PartitionPrefixedRocksDbStorageEngine * Fixed the getAll unit test. 18 March 2015, 04:28:46 UTC
25ff98c Adding test case for getall 18 March 2015, 04:28:46 UTC
5c2f931 Adding more unit test cases 18 March 2015, 04:28:46 UTC
478d8a4 Adding new rocksdbjni jar and librocksdbjni.so 18 March 2015, 04:28:45 UTC
547bd4e Missed adding the PartitionPrefixedRocksDbStorageEngine class in the previous commit. Adding it now. 18 March 2015, 04:28:41 UTC
a223c7d Adding basic functional implementation for PartitionPrefixedRocksdbStorageEngine This is similar to PartitionPrefixedBdbStorageEngine. TODO: May need to refactor both storage engines later 18 March 2015, 04:28:41 UTC
8023068 Adding first cut implementations for: * multiVersionPut * Iterators for keys and entries Also adding stubs for other BATCH APIs which can be implemented later based on performance. 18 March 2015, 04:28:41 UTC
7f577e3 Adding a basic get-after-put test to exercise the RocksDB APIs * My tests fail with "java.lang.UnsatisfiedLinkError: no rocksdbjni in java.library.path". Need to fix this later. 18 March 2015, 04:28:41 UTC
ea60a8d Adding first implementation for delete API 18 March 2015, 04:28:41 UTC
fda545e Adding first implementations for the getAll and put APIs Also added a common locking mechanism to be used for write operations 18 March 2015, 04:28:41 UTC
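A common shape for a shared write-locking mechanism like the one this commit mentions is lock striping: hash the key onto a fixed array of locks so writers to different keys rarely contend. Whether the RocksDB engine here does exactly this is not stated in the message; the class, stripe count, and method names below are purely illustrative.

```java
import java.util.Arrays;
import java.util.concurrent.locks.ReentrantLock;

// Illustrative striped-lock sketch for per-key write serialization.
// Equal keys always map to the same stripe; distinct keys usually don't.
class StripedLocks {
    private final ReentrantLock[] locks;

    StripedLocks(int stripes) {
        locks = new ReentrantLock[stripes];
        for (int i = 0; i < stripes; i++) {
            locks[i] = new ReentrantLock();
        }
    }

    ReentrantLock lockFor(byte[] key) {
        int h = Arrays.hashCode(key) & 0x7fffffff; // force non-negative index
        return locks[h % locks.length];
    }
}
```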
751d9f7 Changed rdb.data.dir to rocksdb.data.dir, as suggested by Vinoth. Added the RocksDB storage config to VoldemortConfig's default list. Added RocksDB's library loading in RocksDbStorageConfiguration. Fails for me at runtime because of missing dependencies... will need to be revisited. 18 March 2015, 04:28:41 UTC