https://github.com/apache/flink

Revision  Message  Commit Date
45f7825 Commit for release 1.1.0 02 August 2016, 18:30:53 UTC
c71a0c7 [FLINK-4307] [streaming API] Restore ListState behavior for user-facing ListStates 02 August 2016, 18:24:35 UTC
ac7028e Revert "[FLINK-4154] [core] Correction of murmur hash breaks backwards compatibility" This reverts commit 81cf2296683a473db4061dd3bed0aeb249e05058. We had an incorrect implementation of Murmur hash in Flink 1.0. This was fixed in 641a0d4 for Flink 1.1. Then we thought that we needed to revert this in order to ensure backwards compatibility between Flink 1.0 and 1.1 savepoints (81cf22). It turns out that savepoint backwards compatibility is broken for other reasons, too. Therefore, we revert 81cf22 here, ending up with a correct implementation of Murmur hash again. 02 August 2016, 18:24:35 UTC
12bf7c1 [FLINK-4207] WindowOperator becomes very slow with allowed lateness 26 July 2016, 19:12:05 UTC
884d3e2 [FLINK-4258] fix potential NPE in SavepointCoordinator 26 July 2016, 16:37:58 UTC
9bcbcf4 [FLINK-4246] Allow Specifying Multiple Metrics Reporters This also updates documentation and tests. Reporters can now be specified like this:
    metrics.reporters: foo,bar
    metrics.reporter.foo.class: JMXReporter.class
    metrics.reporter.foo.port: 10
    metrics.reporter.bar.class: GangliaReporter.class
    metrics.reporter.bar.port: 11
    metrics.reporter.bar.something: 42
26 July 2016, 15:05:21 UTC
cd232e6 [FLINK-4261][tools] retry in case of failed snapshot deployment Unfortunately, we can't deploy snapshots atomically using the Nexus repository. The staged process which leads to an atomic deployment is only designed to work for releases. Best we can do is to retry deploying artifacts in case of failures. - introduce retry in case of failure of snapshot deployment - simplify deployment script This closes #2296 26 July 2016, 14:46:47 UTC
2648bc1 [FLINK-4152] Allow re-registration of TMs at resource manager
- Add YarnFlinkResourceManager test to reaccept task manager registrations from a re-elected job manager
- Remove unnecessary sync logic between JobManager and ResourceManager
- Avoid duplicate registration attempts in case of a refused registration
- Add test case to check that an excessive number of RegisterTaskManager messages is not sent
- Remove containersLaunched from YarnFlinkResourceManager and instead do not clear registeredWorkers when JobManager loses leadership
- Let YarnFlinkResourceManagerTest extend TestLogger
- Harden YarnFlinkResourceManager.getContainersFromPreviousAttempts
- Add FatalErrorOccurred message handler to FlinkResourceManager; increase timeout for YarnFlinkResourceManagerTest; add additional constructor to TestingYarnFlinkResourceManager for tests
- Rename registeredWorkers field into startedWorkers. Additionally, the RegisterResource message is renamed into NotifyResourceStarted, which tells the RM that a resource has been started. This reflects the current semantics of the startedWorkers map in the resource manager.
- Fix concurrency issues in TestingLeaderRetrievalService
This closes #2257 26 July 2016, 14:39:22 UTC
c6715b7 [travis] remove outdated comment regarding snapshots 26 July 2016, 11:46:39 UTC
e3fec1f [FLINK-4192] [metrics] Move metrics classes out of 'flink-core' - moved user-facing API to 'flink-metrics/flink-metrics-core' - moved JMXReporter to 'flink-metrics/flink-metrics-jmx' - moved remaining metric classes to 'flink-runtime' This closes #2226 26 July 2016, 10:30:15 UTC
e4fe89d [FLINK-4210][metrics] Move close()/isClosed() out of MetricGroup This closes #2286 26 July 2016, 09:50:01 UTC
0b4c04d [FLINK-4239] Set Default Allowed Lateness to Zero and Make Triggers Non-Purging This closes #2278. 26 July 2016, 09:32:42 UTC
f0ac261 [FLINK-4067] [runtime] Add savepoint headers Savepoints were previously persisted without any meta data using default Java serialization of `CompletedCheckpoint`. This commit introduces a savepoint interface with version-specific serializers and stores savepoints with meta data. Savepoints expose a version number and a Collection<TaskState> for savepoint restore. Currently, there is only one savepoint version: SavepointV0 (Flink 1.1). This is the current savepoint version, which holds a reference to the Checkpoint task state collection, but is serialized with a custom serializer not relying on default Java serialization. Therefore, it should not happen again that we need to stick to certain classes in future Flink versions. The savepoints are stored in `FsSavepointStore` with the following format:
    MagicNumber SavepointVersion Savepoint
    - MagicNumber => int
    - SavepointVersion => int (returned by Savepoint#getVersion())
    - Savepoint => bytes (serialized via version-specific SavepointSerializer)
The header is minimal (magic number, version). All savepoint-specific meta data can be moved to the savepoint itself. This is also where we would have to add new meta data in future versions, allowing us to differentiate between different savepoint versions when we change the serialization stack. All savepoint related classes have been moved from checkpoint to a new sub package `checkpoint.savepoint`. This closes #2194. 26 July 2016, 09:30:43 UTC
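A minimal sketch of the on-disk layout described above (magic number, version, then version-specific savepoint bytes); the class name and magic number constant here are illustrative assumptions, not Flink's actual FsSavepointStore code:

    import java.io.DataOutputStream;
    import java.io.FileOutputStream;
    import java.io.IOException;

    public class SavepointLayoutSketch {
        // placeholder value for illustration; the real store defines its own magic number
        private static final int MAGIC_NUMBER = 0xCAFED00D;

        static void write(String path, int version, byte[] serializedSavepoint) throws IOException {
            try (DataOutputStream out = new DataOutputStream(new FileOutputStream(path))) {
                out.writeInt(MAGIC_NUMBER);      // MagicNumber => int
                out.writeInt(version);           // SavepointVersion => int (Savepoint#getVersion())
                out.write(serializedSavepoint);  // Savepoint => bytes from the version-specific serializer
            }
        }
    }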
110bba3 [FLINK-4103] [table] Add CsvTableSource docs and Java accessibility 26 July 2016, 09:03:29 UTC
7e309ee [FLINK-4103] [table] Modify CsvTableSource to implement StreamTableSource This closes #2162. 26 July 2016, 08:47:57 UTC
0360fb9 [hotfix] [gelly] Reduce maximum number of generator blocks The default maximum akka transfer size is 10 MB. This commit reduces the number of generator blocks from 2^20 to 2^15 which removes the limit on graph size. The original limit of one million blocks was intended to future-proof scalability. This is a temporary fix as graph generation will be reworked in FLINK-3997. 25 July 2016, 17:58:21 UTC
177168b [FLINK-2125][streaming] Delimiter change from char to string This closes #2233 25 July 2016, 15:37:44 UTC
588830a [FLINK-4244] [docs] Field names for union operator do not have to be equal This closes #2280. 25 July 2016, 13:47:07 UTC
b6f9308 [FLINK-3891] [table] Add a class containing all supported Table API types This closes #2292. 25 July 2016, 13:31:21 UTC
2f9a28a [FLINK-3901] [table] Convert Java implementation to Scala and fix bugs This closes #2283. 25 July 2016, 13:23:38 UTC
c5d1d12 [FLINK-3901] [table] Create a RowCsvInputFormat to use as default CSV IF in Table API 25 July 2016, 13:23:38 UTC
3213016 [FLINK-4150] [runtime] Don't clean up BlobStore on BlobServer shut down The `BlobServer` acts as a local cache for uploaded BLOBs. The life-cycle of each BLOB is bound to the life-cycle of the `BlobServer`. If the BlobServer shuts down (on JobManager shut down), all local files will be removed. With HA, BLOBs are persisted to another file system (e.g. HDFS) via the `BlobStore` in order to have BLOBs available after a JobManager failure (or shut down). These BLOBs are only allowed to be removed when the job that requires them enters a globally terminal state (`FINISHED`, `CANCELLED`, `FAILED`). This commit removes the `BlobStore` clean up call from the `BlobServer` shutdown. The `BlobStore` files will only be cleaned up via the `BlobLibraryCacheManager`'s clean up task (periodically or on BlobLibraryCacheManager shutdown). This means that there is a chance that BLOBs will linger around after the job has terminated, if the job manager fails before the clean up. This closes #2256. 25 July 2016, 13:20:26 UTC
2818ee0 [hotfix] Prevent CheckpointCommitter from failing job Prevents the CheckpointCommitter from failing a job, if either commitCheckpoint() or isCheckpointCommitter() failed. Instead, we will try again on the next notify(). This closes #2287 25 July 2016, 12:52:55 UTC
b71ac35 [FLINK-4217] [gelly] Gelly drivers should read CSV values as strings The user must now specify the ID type as "integer" or "string" when reading a graph from a CSV file. This closes #2250 22 July 2016, 18:07:19 UTC
e2ef74e [FLINK-4213] [gelly] Provide CombineHint in Gelly algorithms This closes #2248 22 July 2016, 18:07:19 UTC
54f02ec [FLINK-4201] [runtime] Forward suspend to checkpoint coordinator Suspended jobs were leading to shutdown of the checkpoint coordinator and hence removal of checkpoint state. For standalone recovery mode this is OK as no state can be recovered anyways (unchanged in this PR). For HA though this led to removal of checkpoint state, which we actually want to keep for recovery. We have the following behaviour now:
    -----------+------------+-------------------
               | Standalone | High Availability
    -----------+------------+-------------------
    SUSPENDED  | Discard    | Keep
    -----------+------------+-------------------
    FINISHED/  | Discard    | Discard
    FAILED/    |            |
    CANCELED   |            |
    -----------+------------+-------------------
This closes #2276. 22 July 2016, 12:26:40 UTC
b3fa459 [FLINK-4202] [metrics] Update restarting time metric documentation This closes #2284. 22 July 2016, 12:18:52 UTC
73836f7 [FLINK-4229] [metrics] Only start JMX server when port is specified This closes #2279 21 July 2016, 15:02:51 UTC
5dd85e2 [FLINK-4202] Add restarting time JM metric This PR adds a JM metric which shows the time it took to restart a job. The time is measured from entering the JobStatus.RESTARTING state until reaching the JobStatus.RUNNING state. During this time, the restarting time is continuously updated. The metric only shows the time for the last restart attempt. The metric is published in the job metric group under the name "restartingTime". This closes #2271. 21 July 2016, 14:16:31 UTC
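A hedged sketch of how such a gauge could compute its value from the two transition timestamps; the class and field names are assumptions for illustration, not necessarily those used in the Flink code:

    import org.apache.flink.metrics.Gauge;

    public class RestartingTimeGaugeSketch implements Gauge<Long> {
        private volatile long restartingTimestamp = -1L; // set when the job enters RESTARTING
        private volatile long runningTimestamp = -1L;    // set when the job reaches RUNNING again

        @Override
        public Long getValue() {
            if (restartingTimestamp < 0) {
                return 0L;                                                // job never restarted
            } else if (runningTimestamp < restartingTimestamp) {
                return System.currentTimeMillis() - restartingTimestamp;  // restart still in progress
            } else {
                return runningTimestamp - restartingTimestamp;            // duration of the last restart
            }
        }
    }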
d3bc556 [FLINK-4238] Only allow/require query for Tuple Stream in CassandraSink 21 July 2016, 13:21:13 UTC
f81cda3 [FLINK-4237] [runtime] Cancel savepoints on declined snapshots 21 July 2016, 11:50:12 UTC
ccd4fd9 [FLINK-4230] [DataStreamAPI] PR feedback on Session Windowing ITCase 21 July 2016, 08:11:30 UTC
78af1e9 [FLINK-4230] [DataStreamAPI] Add Session Windowing ITCase 21 July 2016, 08:11:30 UTC
19dae21 [FLINK-4229] Do not start any Metrics Reporter by default 20 July 2016, 16:02:48 UTC
25b6f22 [FLINK-3956] Make FileInputFormats independent from Configuration Parameters of some input formats that could previously only be set through the Configuration object now have setter methods that allow the user to set them directly. Values set by the setters cannot be reset by the configuration object. 20 July 2016, 15:59:35 UTC
6ab96a7 [FLINK-3729] [table] Several SQL tests fail on Windows OS This closes #2238. 20 July 2016, 14:25:53 UTC
130511f [FLINK-4070] [table] Support literals on left side of binary expressions This closes #2120. 20 July 2016, 13:01:47 UTC
2391683 [hotfix] Fixes broken TopSpeedWindowing scala example Integrated PR comments This closes #2259. 20 July 2016, 12:24:06 UTC
082d87e [FLINK-4166] [Distributed Coordination] zookeeper namespaces (cli parameter -z) This closes #2249 19 July 2016, 16:00:21 UTC
17589d4 [FLINK-4199] fix misleading CLI messages during job submission - change CLI message upon cluster retrieval - save JobExecutionResult for interactive executions - only print Collection size in accumulator results - remove unused helper method This closes #2264 19 July 2016, 16:00:04 UTC
e85f787 [FLINK-4183] [table] Move checking for StreamTableEnvironment into validation layer This closes #2221. 19 July 2016, 14:27:35 UTC
dd53831 [FLINK-4130] [table] CallGenerator could generate illegal code when taking no operands This closes #2182. 19 July 2016, 12:45:58 UTC
5758c91 [FLINK-2985] [table] Allow different field names for unionAll() in Table API This closes #2078. 19 July 2016, 12:16:39 UTC
de8406a [FLINK-4232] Make sure ./bin/flink returns correct pid This closes #2268 19 July 2016, 10:22:54 UTC
ceef8ad [hotfix] fix warning b/c of unspecified exception type 18 July 2016, 15:19:31 UTC
ea13637 [hotfix] correct typo in method name 18 July 2016, 15:16:45 UTC
b67e508 [FLINK-3713] [clients, runtime] Use user code class loader when disposing savepoint Disposing savepoints via the JobManager fails for state handles or descriptors, which contain user classes (for example custom folding state or RocksDB handles). With this change, the user can provide the job JAR when disposing a savepoint in order to use the user code class loader of that job. The JAR is optional, hence not breaking the CLI API. This closes #2083. 18 July 2016, 15:04:09 UTC
bd8d7f5 [FLINK-4209] Remove supervisord dependency for the docker image 18 July 2016, 13:05:56 UTC
7d53902 [FLINK-4209] Simplify docker-compose script (volumes are now local) 18 July 2016, 13:05:56 UTC
4ffda0f [FLINK-4209] Add debug information of the build steps 18 July 2016, 13:05:56 UTC
a9a75ce [FLINK-4209] Separate build dependencies in the docker image and remove them once it is ready 18 July 2016, 13:05:56 UTC
e1e8c2d [FLINK-4209] Change hostname resolution from IP to name This solves issues when a host has multiple IPs 18 July 2016, 13:05:56 UTC
a710fc3 [FLINK-4169] Fix State Handling in CEP Before, ValueState.update() was not called after changing the NFA or the priority queue in CEP operators. This means that the operators don't work with state backends such as RocksDB that strictly require that update() be called when state changes. This changes the operators to always call update() and also introduces a test that verifies the changes. 18 July 2016, 09:16:04 UTC
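A generic sketch of the read-modify-update pattern the fix enforces (not the CEP operator's actual code): with backends such as RocksDB, mutating the object returned by value() is not persisted until update() is called with the modified value.

    import java.util.ArrayList;
    import java.util.List;
    import org.apache.flink.api.common.state.ValueState;

    public class StateUpdateSketch {
        static void append(ValueState<List<String>> bufferState, String element) throws Exception {
            List<String> buffer = bufferState.value();
            if (buffer == null) {
                buffer = new ArrayList<>();
            }
            buffer.add(element);         // mutate the in-memory copy
            bufferState.update(buffer);  // write it back so the backend re-serializes the state
        }
    }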
4e6d89c Allow Setting StateBackend in OneInputStreamOperatorTestHarness 18 July 2016, 09:16:03 UTC
7329074 [FLINK-4165] Add warning about equals/hashCode to CEP doc 18 July 2016, 09:16:03 UTC
3d8d192 Fix Displayed CEP Operator Names 18 July 2016, 09:16:03 UTC
254379b [FLINK-4162] Fix Event Queue Serialization in Abstract(Keyed)CEPPatternOperator Before, these were using StreamRecordSerializer, which does not serialize timestamps. Now it uses MultiplexingStreamRecordSerializer. This also extends the tests in CEPOperatorTest to test that timestamps are correctly checkpointed/restored. 18 July 2016, 09:16:03 UTC
ac06146 [FLINK-4149] Fix Serialization of NFA in AbstractKeyedCEPPatternOperator NFA is Serializable and has readObject()/writeObject() methods. In AbstractKeyedCEPPatternOperator a KryoSerializer was used as the TypeSerializer for the ValueState that holds NFA instances. Kryo does not call readObject()/writeObject() therefore the state of the NFA was invalid after deserialization. This change adds a new TypeSerializer for NFA that uses Java Serialization. In the long run it will be better to get rid of the readObject()/writeObject() methods and instead efficiently serialize using a specialized TypeSerializer. 18 July 2016, 09:16:03 UTC
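An illustrative sketch (not Flink's actual NFA serializer) of the difference that matters here: plain Java serialization, unlike Kryo's default path, does invoke a class's custom readObject()/writeObject() methods.

    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.io.ObjectInputStream;
    import java.io.ObjectOutputStream;
    import java.io.Serializable;

    public class JavaSerializationSketch {
        static byte[] serialize(Serializable value) throws IOException {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
                out.writeObject(value);  // calls the value's writeObject() if it defines one
            }
            return bytes.toByteArray();
        }

        static Object deserialize(byte[] data) throws IOException, ClassNotFoundException {
            try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(data))) {
                return in.readObject();  // calls the value's readObject() if it defines one
            }
        }
    }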
a4c9d66 Replace StreamEvent by StreamRecord in CEP Tests 18 July 2016, 09:15:18 UTC
e9f660d [FLINK-3466] [runtime] Make state handles cancelable. State handles are cancelable, to make sure long running checkpoint restore operations finish early on cancellation, even if the code does not properly react to interrupts. This is especially important since HDFS client code is so buggy that it deadlocks when interrupted without closing. 15 July 2016, 15:18:04 UTC
2837c60 [FLINK-3466] [tests] Add serialization validation for state handles 15 July 2016, 15:18:04 UTC
41f5818 [FLINK-4186] Use Flink metrics to report Kafka metrics This commit also adds monitoring for the current and committed offset This closes #2236 15 July 2016, 14:30:45 UTC
70094a1 [FLINK-4184] [metrics] Replace invalid characters in ScheduledDropwizardReporter The GraphiteReporter and GangliaReporter report metric names which can contain invalid characters. These characters include quotes and dots. In order to properly report metrics to these systems, the aforementioned characters have to be replaced in metric names. The PR also removes quotes from the garbage collector metric name. The PR sets the default value for TTL in the GangliaReporter to 1, because -1 causes the reporter to fail. Introduce CharacterFilter to filter invalid characters out of the metric name. The character filter is applied to all components of the fully qualified metric name. The ScheduledDropwizardReporter and AbstractReporter implement this interface to generate compatible metric names. Correct AbstractMetricGroup.getMetricIdentifier; add test cases to check that reporters filter out invalid characters. This closes #2220. 15 July 2016, 14:03:53 UTC
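A hedged sketch of the kind of name filtering described above; the concrete replacement rules (strip quotes, replace dots with dashes) are assumptions for illustration, not necessarily the exact rules applied by the Flink reporters.

    public class MetricNameFilterSketch {
        static String filterCharacters(String input) {
            return input
                .replace("\"", "")   // quotes are rejected by Graphite/Ganglia
                .replace(".", "-");  // dots would otherwise be read as scope separators
        }

        public static void main(String[] args) {
            // taskmanager."PS MarkSweep".Count -> taskmanager-PS MarkSweep-Count
            System.out.println(filterCharacters("taskmanager.\"PS MarkSweep\".Count"));
        }
    }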
2218cb4 [FLINK-4017] [py] Add Aggregation support to Python API Assembles and applies a GroupReduceFunction using pre-defined AggregationOperations in Python. References to aggregations in PythonOperationInfo and other Java classes in the Python API were removed since aggregations are now handled by Python. This closes #2115 15 July 2016, 10:21:57 UTC
a0c3b87 [FLINK-4053] Add tests for RMQ sink and check connection for null This closes #2128 15 July 2016, 10:16:53 UTC
45edafd [licenses] Remove not included dependency from LICENSE This closes #2186 Flink does not include the anchor.js file but loads it dynamically when displaying the documentation. Therefore, we don't have to include anchor.js in the LICENSE file. 15 July 2016, 10:05:45 UTC
d08b189 [FLINK-4142][docs] Add warning about YARN HA bug This closes #2255 15 July 2016, 10:00:48 UTC
7f95800 [hotfix] [metrics] Prevent log flooding from collisions This closes #2135 15 July 2016, 09:58:15 UTC
9a20573 [FLINK-3630] [docs] Little mistake in documentation This closes #2254 15 July 2016, 09:55:21 UTC
273f54b [FLINK-3666] Remove all remaining Nephele references 15 July 2016, 09:46:34 UTC
2346468 [FLINK-4216] [docs] Fixed example. This closes #2246 This closes #2247 14 July 2016, 19:42:41 UTC
91d5c63 [FLINK-4214] [web dashboard] Properly increment the exceptions counter This closes #2242 14 July 2016, 19:38:04 UTC
de6a3d3 [FLINK-4196] [runtime] Remove the 'recoveryTimestamp' from checkpoint restores. The 'recoveryTimestamp' was an unsafe wall clock timestamp attached by the master upon recovery. Since this timestamp cannot be relied upon in distributed setups, it is removed. 14 July 2016, 19:11:48 UTC
2477161 [hotfix] [runtime] Minor code cleanups. 14 July 2016, 19:11:48 UTC
f8cd9ba [hotfix] [kafka connector] Minor code cleanups in the Kafka Producer 14 July 2016, 19:11:48 UTC
64aa7c8 [doc] [hotfix] Fix Gelly readCsvFile example 14 July 2016, 13:55:34 UTC
f1d79f1 [FLINK-4170] Simplify Kinesis connector config keys to be less verbose This closes #2228 14 July 2016, 12:34:21 UTC
cc60ba4 [FLINK-4206][metrics] Remove alphanumeric name restriction This closes #2237 14 July 2016, 09:11:02 UTC
0db804b [FLINK-2246] [runtime] Add ChainedReduceCombineDriver This closes #1517 (2/2) 13 July 2016, 20:26:03 UTC
52e191a [FLINK-3477] [runtime] Add hash-based combine strategy for ReduceFunction This closes #1517 (1/2) 13 July 2016, 20:25:47 UTC
ca5016c [hotfix][kinesis-connector] Remove duplicate info in KinesisDeserializationSchema This closes #2234 13 July 2016, 17:12:28 UTC
bc3a96f [FLINK-4197] Allow Kinesis endpoint to be overridden via config This closes #2227 13 July 2016, 17:07:07 UTC
565f941 [hotfix] [network] Add DEBUG log messages to intermediate results This adds log messages about created result partition consumers and spilling. 13 July 2016, 15:16:29 UTC
d17fe4f [FLINK-4167] [metrics] Make IOMetricGroup register its metrics at the parent metric group Introduce ProxyMetricGroup which forwards metric registration calls to its parent. The IOMetricGroup then extends ProxyMetricGroup so that all metric registrations are executed on the parent of IOMetricGroup. This closes #2210. 13 July 2016, 12:24:52 UTC
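A reduced, illustrative sketch of the forwarding idea (the real MetricGroup interface has many more methods, and these type names are assumptions): every registration call on the proxy is delegated to its parent group.

    interface SimpleMetricGroup {
        void counter(String name);
        <T> void gauge(String name, T gauge);
    }

    class ProxyGroupSketch implements SimpleMetricGroup {
        private final SimpleMetricGroup parent;

        ProxyGroupSketch(SimpleMetricGroup parent) {
            this.parent = parent;
        }

        @Override
        public void counter(String name) {
            parent.counter(name);       // forwarded to the parent group
        }

        @Override
        public <T> void gauge(String name, T gauge) {
            parent.gauge(name, gauge);  // forwarded to the parent group
        }
    }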
21d6a29 [hotfix] Reuse getMaxJvmHeapMemory() in EnvironmentInformation#getSizeOfFreeHeapMemory() This closes #2235 13 July 2016, 12:10:26 UTC
e9f576c [hotfix] [documentation] Add missing Gem. Include 'json' as a required dependency for building the documentation. 13 July 2016, 11:59:39 UTC
790a654 [FLINK-4143][metrics] Configurable delimiter This closes #2219 13 July 2016, 09:43:43 UTC
90658c8 [FLINK-4159] Remove Quickstart exclusions for unused dependencies This closes #2217 13 July 2016, 09:39:43 UTC
435ffd9 [FLINK-4200] [Kafka Connector] Kafka consumers log the offset from which they restore This closes #2230 13 July 2016, 09:27:24 UTC
6b7bb76 [FLINK-4127] Check API compatibility for 1.1 in flink-core This closes #2177 13 July 2016, 08:43:40 UTC
74b09ce [FLINK-4123] [cassandra] Fix concurrency issue in CassandraTupleWriteAheadSink The updatesCount variable in CassandraTupleWriteAheadSink.sendValues did not have guaranteed visibility. Thus, it was possible that the callback thread would read an outdated value for updatesCount, resulting in a deadlock. Replacing IntValue updatesCount with AtomicInteger updatesCount fixes this issue. Furthermore, the PR hardens the CassandraTupleWriteAheadSinkTest, which could have failed with an NPE if the callback runnable was not set in time. 12 July 2016, 16:07:13 UTC
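A minimal sketch, assuming illustrative names, of why the AtomicInteger fixes the visibility problem: the decrement performed by the Cassandra callback thread is guaranteed to be seen by the thread waiting for all pending updates to complete.

    import java.util.concurrent.atomic.AtomicInteger;

    public class PendingUpdatesSketch {
        private final AtomicInteger updatesCount = new AtomicInteger(0);
        private final Object updatesLock = new Object();

        void sendValue() {
            updatesCount.incrementAndGet();
            // ... issue the asynchronous write; its completion callback invokes onComplete()
        }

        void onComplete() {
            if (updatesCount.decrementAndGet() == 0) {
                synchronized (updatesLock) {
                    updatesLock.notifyAll();   // wake up the thread waiting in awaitCompletion()
                }
            }
        }

        void awaitCompletion() throws InterruptedException {
            synchronized (updatesLock) {
                while (updatesCount.get() != 0) {
                    updatesLock.wait();
                }
            }
        }
    }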
5c2da21 [FLINK-4123] Cassandra sink checks for exceptions in ack phase: add serialVersionUID; switch to AtomicReference wait-notify; disable logging; add test case for leaving ackPhaseLoopOnException; prevent infinite loop in test. This closes #2183. 12 July 2016, 16:07:13 UTC
508965e [FLINK-4111] [table] Flink Table & SQL doesn't work in very simple example This closes #2209. 12 July 2016, 13:47:32 UTC
cc9ee2c [hotfix] [execution graph] Null restart strategy field in ExecutionGraph when archiving 12 July 2016, 13:40:42 UTC
1b327f1 [FLINK-3916] [table] Allow generic types passing through the Table API This closes #2197. 12 July 2016, 13:33:11 UTC
971dcc5 [hotfix][docs] Add note about Kinesis producer limitations This closes #2229 12 July 2016, 11:57:55 UTC
f0387ac [FLINK-4018][kinesis-connector] Add configuration for idle time between get requests to Kinesis shards This closes #2071 12 July 2016, 08:17:29 UTC
662b458 [FLINK-4191] Expose shard information in kinesis deserialization schema This closes #2225 12 July 2016, 08:09:41 UTC
a7274d5 [FLINK-3190] failure rate restart strategy
- Add toString method to Time
- Reintroduce org.apache.flink.streaming.api.windowing.time.Time for backwards compatibility
- Remove Duration from FailureRateRestartStrategy
This closes #1954. 11 July 2016, 22:00:38 UTC
81cf229 [FLINK-4154] [core] Correction of murmur hash breaks backwards compatibility Revert "[FLINK-3623] [runtime] Adjust MurmurHash Algorithm" This reverts commit 641a0d436c9b7a34ff33ceb370cf29962cac4dee. This closes #2223 11 July 2016, 12:30:46 UTC