https://github.com/apache/spark
Name Target Message Date
HEAD c34baeb [SPARK-47719][SQL] Change spark.sql.legacy.timeParserPolicy default to CORRECTED ### What changes were proposed in this pull request? We changed the time parser policy in Spark 3.0.0. The config has since defaulted to raising an exception if there is a potential conflict between the legacy and the new policy. Spark 4.0.0 is a good time to default to the new policy. ### Why are the changes needed? Move the product forward and retire legacy behavior over time. ### Does this PR introduce _any_ user-facing change? No ### How was this patch tested? Run existing unit tests and verify changes. ### Was this patch authored or co-authored using generative AI tooling? No Closes #45859 from srielau/SPARK-47719-parser-policy-default-to-corrected. Lead-authored-by: Serge Rielau <serge@rielau.com> Co-authored-by: Wenchen Fan <cloud0fan@gmail.com> Signed-off-by: Gengliang Wang <gengliang@apache.org> 05 April 2024, 18:35:38 UTC
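For context on the change above, a minimal sketch of how a workload could opt back into the old behavior after this default flip; the config name and policy values come from the commit, while the session setup is illustrative:
```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").getOrCreate()

// With SPARK-47719, CORRECTED becomes the default; workloads that still
// depend on the pre-3.0 parser can pin the LEGACY policy explicitly.
spark.conf.set("spark.sql.legacy.timeParserPolicy", "LEGACY")
```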
refs/heads/branch-0.5 5b021ce Change version to 0.5.3-SNAPSHOT 23 November 2012, 00:26:15 UTC
refs/heads/branch-0.6 d46c54c Merge pull request #485 from andyk/branch-0.6 Fixes link to issue tracker in documentation page "Contributing to Spark" 20 February 2013, 03:59:01 UTC
refs/heads/branch-0.7 379567f fix block manager UI display issue when enable spark.cleaner.ttl Conflicts: core/src/main/scala/spark/storage/StorageUtils.scala 30 May 2013, 09:55:11 UTC
refs/heads/branch-0.8 62b3158 Merge pull request #583 from colorant/zookeeper. Minor fix for ZooKeeperPersistenceEngine to use configured working dir Author: Raymond Liu <raymond.liu@intel.com> Closes #583 and squashes the following commits: 91b0609 [Raymond Liu] Minor fix for ZooKeeperPersistenceEngine to use configured working dir (cherry picked from commit 68b2c0d02dbdca246ca686b871c06af53845d5b5) Signed-off-by: Aaron Davidson <aaron@databricks.com> Conflicts: core/src/main/scala/org/apache/spark/deploy/master/ZooKeeperPersistenceEngine.scala 12 February 2014, 06:39:48 UTC
refs/heads/branch-0.9 e63783a [MAINTENANCE] Closes #2854 This commit exists to close a pull request on github. 04 June 2015, 06:35:16 UTC
refs/heads/branch-1.0 117843f [SPARK-9633] [BUILD] SBT download locations outdated; need an update Remove 2 defunct SBT download URLs and replace with the 1 known download URL. Also, use https. Follow up on https://github.com/apache/spark/pull/7792 Author: Sean Owen <sowen@cloudera.com> Closes #7956 from srowen/SPARK-9633 and squashes the following commits: caa40bd [Sean Owen] Remove 2 defunct SBT download URLs and replace with the 1 known download URL. Also, use https. Conflicts: sbt/sbt-launch-lib.bash 06 August 2015, 22:43:52 UTC
refs/heads/branch-1.0-jdbc 9caf3a9 [SPARK-2696] Reduce default value of spark.serializer.objectStreamReset The current default value of spark.serializer.objectStreamReset is 10,000. When trying to re-partition (e.g., to 64 partitions) a large file (e.g., 500MB) containing 1MB records, the serializer will cache 10000 x 1MB x 64 ~= 640 GB, which will cause out-of-memory errors. This patch sets the default to a more reasonable value (100). Author: Hossein <hossein@databricks.com> Closes #1595 from falaki/objectStreamReset and squashes the following commits: 650a935 [Hossein] Updated documentation 1aa0df8 [Hossein] Reduce default value of spark.serializer.objectStreamReset (cherry picked from commit 66f26a4610aede57322cb7e193a50aecb6c57d22) Signed-off-by: Matei Zaharia <matei@databricks.com> 26 July 2014, 08:04:56 UTC
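A hedged sketch of tuning the setting discussed above; the property name is taken from the commit, the rest is illustrative:
```scala
import org.apache.spark.SparkConf

// Resetting the Java serialization stream more frequently releases the
// serializer's cached back-references sooner; 100 is the new default
// this commit introduces (down from 10,000).
val conf = new SparkConf().set("spark.serializer.objectStreamReset", "100")
```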
refs/heads/branch-1.1 11ee9d1 [SPARK-11813][MLLIB] Avoid serialization of vocab in Word2Vec jira: https://issues.apache.org/jira/browse/SPARK-11813 I found the problem while training on a large corpus. Avoiding serialization of vocab in Word2Vec has 2 benefits. 1. Performance improvement from less serialization. 2. A large increase in Word2Vec's capacity. Currently, in Word2Vec's fit, the closure mainly includes the serialized Word2Vec object and 2 global tables. The main part of Word2Vec is the vocab, of size vocab * 40 * 2 * 4 = 320 * vocab bytes; the 2 global tables: vocab * vectorSize * 8 bytes. If vectorSize = 20, that's 160 * vocab bytes. Their sum cannot exceed Int.max due to the restriction of ByteArrayOutputStream. In any case, avoiding serialization of vocab helps decrease the size of the closure serialization, especially when vectorSize is small, and thus allows a larger vocabulary. Actually there's another possible fix: make local copies of fields to avoid including Word2Vec in the closure. Let me know if that's preferred. Author: Yuhao Yang <hhbyyh@gmail.com> Closes #9803 from hhbyyh/w2vVocab. (cherry picked from commit e391abdf2cb6098a35347bd123b815ee9ac5b689) Signed-off-by: Xiangrui Meng <meng@databricks.com> 18 November 2015, 21:25:15 UTC
refs/heads/branch-1.2 307f27e [SPARK-11813][MLLIB] Avoid serialization of vocab in Word2Vec jira: https://issues.apache.org/jira/browse/SPARK-11813 I found the problem while training on a large corpus. Avoiding serialization of vocab in Word2Vec has 2 benefits. 1. Performance improvement from less serialization. 2. A large increase in Word2Vec's capacity. Currently, in Word2Vec's fit, the closure mainly includes the serialized Word2Vec object and 2 global tables. The main part of Word2Vec is the vocab, of size vocab * 40 * 2 * 4 = 320 * vocab bytes; the 2 global tables: vocab * vectorSize * 8 bytes. If vectorSize = 20, that's 160 * vocab bytes. Their sum cannot exceed Int.max due to the restriction of ByteArrayOutputStream. In any case, avoiding serialization of vocab helps decrease the size of the closure serialization, especially when vectorSize is small, and thus allows a larger vocabulary. Actually there's another possible fix: make local copies of fields to avoid including Word2Vec in the closure. Let me know if that's preferred. Author: Yuhao Yang <hhbyyh@gmail.com> Closes #9803 from hhbyyh/w2vVocab. (cherry picked from commit e391abdf2cb6098a35347bd123b815ee9ac5b689) Signed-off-by: Xiangrui Meng <meng@databricks.com> 18 November 2015, 21:25:15 UTC
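A minimal sketch (hypothetical class and field names, not Spark's actual Word2Vec internals) of the closure-size problem the two Word2Vec backports above address: referencing a field inside an RDD closure drags the whole enclosing object, vocab included, into the serialized task, while a local copy does not:
```scala
import org.apache.spark.rdd.RDD

class Word2VecLike extends Serializable {
  val vocab: Array[Byte] = new Array[Byte](32 << 20) // large field not needed on executors
  val vectorSize: Int = 20

  // The closure references a field, so it captures `this` and ships vocab with every task.
  def fitCapturesAll(data: RDD[Int]): RDD[Int] =
    data.map(_ * vectorSize)

  // Local copy: the closure captures only the small `size`, keeping vocab out.
  def fitLocalCopy(data: RDD[Int]): RDD[Int] = {
    val size = vectorSize
    data.map(_ * size)
  }
}
```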
refs/heads/branch-1.3 65cc451 [SPARK-12363] [MLLIB] [BACKPORT-1.3] Remove setRun and fix PowerIterationClustering failed test ## What changes were proposed in this pull request? Backport JIRA-SPARK-12363 to branch-1.3. ## How was this patch tested? Unit test. cc mengxr Author: Liang-Chi Hsieh <viirya@gmail.com> Author: Xiangrui Meng <meng@databricks.com> Closes #11265 from viirya/backport-12363-1.3 and squashes the following commits: ec076dd [Liang-Chi Hsieh] Fix scala style. 7a3ef5f [Xiangrui Meng] use Graph instead of GraphImpl and update tests and example based on PIC paper b86018d [Liang-Chi Hsieh] Remove setRun and fix PowerIterationClustering failed test. 26 February 2016, 05:15:59 UTC
refs/heads/branch-1.4 b2680ae [SPARK-14468] Always enable OutputCommitCoordinator ## What changes were proposed in this pull request? `OutputCommitCoordinator` was introduced to deal with concurrent task attempts racing to write output, leading to data loss or corruption. For more detail, read the [JIRA description](https://issues.apache.org/jira/browse/SPARK-14468). Before: `OutputCommitCoordinator` is enabled only if speculation is enabled. After: `OutputCommitCoordinator` is always enabled. Users may still disable this through `spark.hadoop.outputCommitCoordination.enabled`, but they really shouldn't... ## How was this patch tested? `OutputCommitCoordinator*Suite` Author: Andrew Or <andrew@databricks.com> Closes #12244 from andrewor14/always-occ. (cherry picked from commit 3e29e372ff518827bae9dcd26087946fde476843) Signed-off-by: Andrew Or <andrew@databricks.com> 08 April 2016, 00:49:39 UTC
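A hedged sketch of the opt-out the commit above mentions; the property name is quoted from the message, everything else is illustrative (and, as the author notes, disabling it re-opens the data-corruption race the coordinator exists to prevent):
```scala
import org.apache.spark.SparkConf

// Discouraged: lets racing task attempts commit output without coordination.
val conf = new SparkConf()
  .set("spark.hadoop.outputCommitCoordination.enabled", "false")
```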
refs/heads/branch-1.5 0a04721 [SPARK-17721][MLLIB][BACKPORT] Fix for multiplying transposed SparseMatrix with SparseVector Backport PR of changes relevant to mllib only, but otherwise identical to #15296 jkbradley Author: Bjarne Fruergaard <bwahlgreen@gmail.com> Closes #15311 from bwahlgreen/bugfix-spark-17721-1.6. (cherry picked from commit 376545e4d38cd41b4a3233819d63bb81f5c83283) Signed-off-by: Joseph K. Bradley <joseph@databricks.com> 02 October 2016, 02:28:51 UTC
refs/heads/branch-1.6 a233fac [SPARK-19688][STREAMING] Not to read `spark.yarn.credentials.file` from checkpoint. ## What changes were proposed in this pull request? Reload the `spark.yarn.credentials.file` property when restarting a streaming application from checkpoint. ## How was this patch tested? Manually tested with 1.6.3 and 2.1.1. I didn't test this with master because of some compile problems, but I think it will be the same result. ## Notice This should be merged into maintenance branches too. jira: [SPARK-21008](https://issues.apache.org/jira/browse/SPARK-21008) Author: saturday_s <shi.indetail@gmail.com> Closes #18230 from saturday-shi/SPARK-21008. (cherry picked from commit e92ffe6f1771e3fe9ea2e62ba552c1b5cf255368) Signed-off-by: Marcelo Vanzin <vanzin@cloudera.com> 19 June 2017, 17:24:29 UTC
refs/heads/branch-2.0 5ed89ce [SPARK-25089][R] removing lintr checks for 2.0 ## What changes were proposed in this pull request? Since 2.0 will be EOLed some time in the not-too-distant future, and we'll be moving the builds from CentOS to Ubuntu, I think it's fine to disable R linting rather than going down the rabbit hole of trying to fix this stuff. ## How was this patch tested? The build system will test this. Closes #22074 from shaneknapp/removing-lintr-2.0. Authored-by: shane knapp <incomplete@gmail.com> Signed-off-by: Sean Owen <srowen@gmail.com> 10 August 2018, 23:07:18 UTC
refs/heads/branch-2.1 4d2d3d4 [SPARK-23207][SPARK-22905][SPARK-24564][SPARK-25114][SQL][BACKPORT-2.1] Shuffle+Repartition on a DataFrame could lead to incorrect answers ## What changes were proposed in this pull request? Backport of #20393 and #22079. Currently shuffle repartition uses RoundRobinPartitioning; the generated result is nondeterministic since the sequence of input rows is not determined. The bug can be triggered when there is a repartition call following a shuffle (which would lead to non-deterministic row ordering), as the pattern shows below: upstream stage -> repartition stage -> result stage (-> indicates a shuffle) When one of the executor processes goes down, some tasks on the repartition stage will be retried and generate inconsistent ordering, and some tasks of the result stage will be retried generating different data. The following code returns 931532, instead of 1000000: ``` import scala.sys.process._ import org.apache.spark.TaskContext val res = spark.range(0, 1000 * 1000, 1).repartition(200).map { x => x }.repartition(200).map { x => if (TaskContext.get.attemptNumber == 0 && TaskContext.get.partitionId < 2) { throw new Exception("pkill -f java".!!) } x } res.distinct().count() ``` In this PR, we propose the most straightforward way to fix this problem: performing a local sort before partitioning. Once we make the input row ordering deterministic, the function from rows to partitions is fully deterministic too. The downside of the approach is that with the extra local sort inserted, the performance of repartition() will go down, so we add a new config named `spark.sql.execution.sortBeforeRepartition` to control whether this patch is applied. The patch is enabled by default to be safe, but users may choose to manually turn it off to avoid the performance regression. This patch also changes the output row ordering of repartition(), which leads to a bunch of test-case failures because they compare the results directly. A unit test is added in ExchangeSuite. With this patch (and `spark.sql.execution.sortBeforeRepartition` set to true), the following query returns 1000000: ``` import scala.sys.process._ import org.apache.spark.TaskContext spark.conf.set("spark.sql.execution.sortBeforeRepartition", "true") val res = spark.range(0, 1000 * 1000, 1).repartition(200).map { x => x }.repartition(200).map { x => if (TaskContext.get.attemptNumber == 0 && TaskContext.get.partitionId < 2) { throw new Exception("pkill -f java".!!) } x } res.distinct().count() res7: Long = 1000000 ``` Author: Xingbo Jiang <xingbo.jiang@databricks.com> Author: Xingbo Jiang <xingbo.jiang@databricks.com> Author: Henry Robinson <henry@apache.org> Closes #22211 from henryr/spark-23207-branch-2.1. 27 August 2018, 23:20:19 UTC
refs/heads/branch-2.2 7c7d7f6 [SPARK-26806][SS] EventTimeStats.merge should handle zeros correctly ## What changes were proposed in this pull request? Right now, EventTimeStats.merge doesn't handle `zero.merge(zero)` correctly. This will make `avg` become `NaN`. And whatever gets merged with the result of `zero.merge(zero)`, `avg` will still be `NaN`. Then finally, we call `NaN.toLong` and get `0`, and the user will see the following incorrect report: ``` "eventTime" : { "avg" : "1970-01-01T00:00:00.000Z", "max" : "2019-01-31T12:57:00.000Z", "min" : "2019-01-30T18:44:04.000Z", "watermark" : "1970-01-01T00:00:00.000Z" } ``` This issue was reported by liancheng. This PR fixes the above issue. ## How was this patch tested? The new unit tests. Closes #23718 from zsxwing/merge-zero. Authored-by: Shixiong Zhu <zsxwing@gmail.com> Signed-off-by: Shixiong Zhu <zsxwing@gmail.com> (cherry picked from commit 03a928cbecaf38bbbab3e6b957fcbb542771cfbd) Signed-off-by: Shixiong Zhu <zsxwing@gmail.com> 01 February 2019, 19:15:05 UTC
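A minimal sketch (a simplified stand-in, not Spark's actual `EventTimeStats`) of why `zero.merge(zero)` produces `NaN` in the commit above:
```scala
// Weighted-average merge: with both counts zero the division is 0.0 / 0.0,
// i.e. NaN, which survives every subsequent merge and finally surfaces as
// NaN.toLong == 0 -- the 1970-01-01 timestamps in the report above.
case class EvtStats(avg: Double, count: Long) {
  def merge(that: EvtStats): EvtStats = {
    val total = count + that.count
    EvtStats((avg * count + that.avg * that.count) / total, total)
  }
}

val zero = EvtStats(0.0, 0L)
assert(zero.merge(zero).avg.isNaN)
```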
refs/heads/branch-2.3 75cc3b2 [SPARK-28891][BUILD][2.3] backport do-release-docker.sh to branch-2.3 ### What changes were proposed in this pull request? This PR re-enables `do-release-docker.sh` for branch-2.3. According to maropu, the release manager of Spark 2.3.3, releases are cut with `do-release-docker.sh` from the master branch; after applying #23098, the script does not work for branch-2.3. ### Why are the changes needed? This PR keeps the release process in branch-2.3 simple. While Spark 2.3.x will not be released further, as dongjoon-hyun [suggested](https://github.com/apache/spark/pull/23098#issuecomment-524682234), it would be good to land this change 1. so others can reproduce this release, and 2. to keep any future urgent release simple ### Does this PR introduce any user-facing change? No ### How was this patch tested? No test is added. This PR is used to create Spark 2.3.4-rc1 Closes #25607 from kiszk/SPARK-28891. Authored-by: Kazuaki Ishizaki <ishizaki@jp.ibm.com> Signed-off-by: Dongjoon Hyun <dhyun@apple.com> 30 August 2019, 21:21:11 UTC
refs/heads/branch-2.4 4be5660 Update Spark key negotiation protocol 11 August 2021, 23:04:55 UTC
refs/heads/branch-3.0 2f3e4e3 [SPARK-39932][SQL] WindowExec should clear the final partition buffer ### What changes were proposed in this pull request? Explicitly clear the final partition buffer if no next partition can be found in `WindowExec`. The same fix is applied in `WindowInPandasExec`. ### Why are the changes needed? We do a repartition after a window, so we need to do a local sort after the window due to the RoundRobinPartitioning shuffle. The error stack: ```java ExternalAppendOnlyUnsafeRowArray INFO - Reached spill threshold of 4096 rows, switching to org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter org.apache.spark.memory.SparkOutOfMemoryError: Unable to acquire 65536 bytes of memory, got 0 at org.apache.spark.memory.MemoryConsumer.throwOom(MemoryConsumer.java:157) at org.apache.spark.memory.MemoryConsumer.allocateArray(MemoryConsumer.java:97) at org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter.growPointerArrayIfNecessary(UnsafeExternalSorter.java:352) at org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter.allocateMemoryForRecordIfNecessary(UnsafeExternalSorter.java:435) at org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter.insertRecord(UnsafeExternalSorter.java:455) at org.apache.spark.sql.execution.UnsafeExternalRowSorter.insertRow(UnsafeExternalRowSorter.java:138) at org.apache.spark.sql.execution.UnsafeExternalRowSorter.sort(UnsafeExternalRowSorter.java:226) at org.apache.spark.sql.execution.exchange.ShuffleExchangeExec$.$anonfun$prepareShuffleDependency$10(ShuffleExchangeExec.scala:355) ``` `WindowExec` only clears the buffer in `fetchNextPartition`, so the final partition's buffer is never cleared there. It is not a big problem since we have a task-completion listener. ```scala taskContext.addTaskCompletionListener(context -> { cleanupResources(); }); ``` This bug only matters if the window is not the last operator in the task and is followed by an operator like a sort. ### Does this PR introduce _any_ user-facing change? Yes, bug fix. ### How was this patch tested? N/A Closes #37358 from ulysses-you/window. Authored-by: ulysses-you <ulyssesyou18@gmail.com> Signed-off-by: Hyukjin Kwon <gurwls223@apache.org> (cherry picked from commit 1fac870126c289a7ec75f45b6b61c93b9a4965d4) Signed-off-by: Hyukjin Kwon <gurwls223@apache.org> 02 August 2022, 09:05:48 UTC
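A hedged sketch of the fix pattern described above; the member names are illustrative rather than the exact `WindowExec` internals:
```scala
import org.apache.spark.sql.execution.ExternalAppendOnlyUnsafeRowArray

// When neither the current partition nor a freshly fetched one has rows
// left, release the final buffer immediately instead of waiting for the
// task-completion listener, so a downstream sort can acquire that memory.
def hasNextSketch(
    buffer: ExternalAppendOnlyUnsafeRowArray,
    rowAvailable: Boolean,
    fetchNextPartition: () => Boolean): Boolean = {
  val found = rowAvailable || fetchNextPartition()
  if (!found) buffer.clear()
  found
}
```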
refs/heads/branch-3.1 4a418a4 [SPARK-45210][DOCS][3.4] Switch languages consistently across docs for all code snippets (Spark 3.4 and below) ### What changes were proposed in this pull request? This PR proposes to recover the ability to switch languages consistently across docs for all code snippets in Spark 3.4 and below by using the proper class selector in the jQuery. Previously the selector was the string `.nav-link tab_python`, which did not comply with multiple-class selection: https://www.w3.org/TR/CSS21/selector.html#class-html. I assume it worked as a legacy behaviour somewhere. Now it uses the standard way `.nav-link.tab_python`. Note that https://github.com/apache/spark/pull/42657 works because there's only a single class assigned (after we refactored the site at https://github.com/apache/spark/pull/40269) ### Why are the changes needed? This is a regression in our documentation site. ### Does this PR introduce _any_ user-facing change? Yes, once you click the language tab, it will apply to the examples in the whole page. ### How was this patch tested? Manually tested after building the site. ![Screenshot 2023-09-19 at 12 08 17 PM](https://github.com/apache/spark/assets/6477701/09d0c117-9774-4404-8e2e-d454b7f700a3) ### Was this patch authored or co-authored using generative AI tooling? No. Closes #42989 from HyukjinKwon/SPARK-45210. Authored-by: Hyukjin Kwon <gurwls223@apache.org> Signed-off-by: Hyukjin Kwon <gurwls223@apache.org> (cherry picked from commit 796d8785c61e09d1098350657fd44707763687db) Signed-off-by: Hyukjin Kwon <gurwls223@apache.org> 19 September 2023, 05:51:27 UTC
refs/heads/branch-3.2 e428fe9 [SPARK-45210][DOCS][3.4] Switch languages consistently across docs for all code snippets (Spark 3.4 and below) ### What changes were proposed in this pull request? This PR proposes to recover the ability to switch languages consistently across docs for all code snippets in Spark 3.4 and below by using the proper class selector in the jQuery. Previously the selector was the string `.nav-link tab_python`, which did not comply with multiple-class selection: https://www.w3.org/TR/CSS21/selector.html#class-html. I assume it worked as a legacy behaviour somewhere. Now it uses the standard way `.nav-link.tab_python`. Note that https://github.com/apache/spark/pull/42657 works because there's only a single class assigned (after we refactored the site at https://github.com/apache/spark/pull/40269) ### Why are the changes needed? This is a regression in our documentation site. ### Does this PR introduce _any_ user-facing change? Yes, once you click the language tab, it will apply to the examples in the whole page. ### How was this patch tested? Manually tested after building the site. ![Screenshot 2023-09-19 at 12 08 17 PM](https://github.com/apache/spark/assets/6477701/09d0c117-9774-4404-8e2e-d454b7f700a3) ### Was this patch authored or co-authored using generative AI tooling? No. Closes #42989 from HyukjinKwon/SPARK-45210. Authored-by: Hyukjin Kwon <gurwls223@apache.org> Signed-off-by: Hyukjin Kwon <gurwls223@apache.org> (cherry picked from commit 796d8785c61e09d1098350657fd44707763687db) Signed-off-by: Hyukjin Kwon <gurwls223@apache.org> 19 September 2023, 05:51:27 UTC
refs/heads/branch-3.3 45ba922 [SPARK-47385] Fix tuple encoders with Option inputs https://github.com/apache/spark/pull/40755 adds a null check on the input of the child deserializer in the tuple encoder. It breaks the deserializer for the `Option` type, because null should be deserialized into `None` rather than null. This PR adds a boolean parameter to `ExpressionEncoder.tuple` so that only the caller that https://github.com/apache/spark/pull/40755 intended to fix gets this null check. Tested with a unit test. Closes #45508 from chenhao-db/SPARK-47385. Authored-by: Chenhao Li <chenhao.li@databricks.com> Signed-off-by: Wenchen Fan <wenchen@databricks.com> (cherry picked from commit 9986462811f160eacd766da8a4e14a9cbb4b8710) Signed-off-by: Wenchen Fan <wenchen@databricks.com> 14 March 2024, 06:27:36 UTC
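A short sketch of the round-trip behavior the fix above preserves, using the standard Dataset API (the session setup is illustrative):
```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").getOrCreate()
import spark.implicits._

// A null tuple component of Option type must decode to None, never to null.
val ds = Seq((Some(1): Option[Int], "a"), (None: Option[Int], "b")).toDS()
assert(ds.collect()(1)._1 == None)
```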
refs/heads/branch-3.4 2a453b1 [SPARK-47111][SQL][TESTS][3.4] Upgrade `PostgreSQL` JDBC driver to 42.7.2 and docker image to 16.2 ### What changes were proposed in this pull request? This is a backport of #45191. This PR aims to upgrade the `PostgreSQL` JDBC driver and docker images. - JDBC Driver: `org.postgresql:postgresql` to 42.7.2 - Docker Image: `postgres` from `15.1-alpine` to `16.2-alpine` ### Why are the changes needed? To use the latest PostgreSQL combination in the following integration tests. - PostgresIntegrationSuite - PostgresKrbIntegrationSuite - v2/PostgresIntegrationSuite - v2/PostgresNamespaceSuite ### Does this PR introduce _any_ user-facing change? No. This is a pure test-environment update. ### How was this patch tested? Pass the CIs. ### Was this patch authored or co-authored using generative AI tooling? No. Closes #45900 from dongjoon-hyun/SPARK-47111-3.4. Authored-by: Dongjoon Hyun <dhyun@apple.com> Signed-off-by: Dongjoon Hyun <dhyun@apple.com> 05 April 2024, 19:57:33 UTC
refs/heads/branch-3.5 44cc676 [SPARK-47111][SQL][TESTS][3.5] Upgrade `PostgreSQL` JDBC driver to 42.7.2 and docker image to 16.2 ### What changes were proposed in this pull request? This PR aims to upgrade `PostgreSQL` JDBC driver and docker images. - JDBC Driver: `org.postgresql:postgresql` from 42.7.0 to 42.7.2 - Docker Image: `postgres` from `15.1-alpine` to `16.2-alpine` ### Why are the changes needed? To use the latest PostgreSQL combination in the following integration tests. - PostgresIntegrationSuite - PostgresKrbIntegrationSuite - v2/PostgresIntegrationSuite - v2/PostgresNamespaceSuite ### Does this PR introduce _any_ user-facing change? No. This is a pure test-environment update. ### How was this patch tested? Pass the CIs. ### Was this patch authored or co-authored using generative AI tooling? No. Closes #45899 from dongjoon-hyun/SPARK-47111. Authored-by: Dongjoon Hyun <dhyun@apple.com> Signed-off-by: Dongjoon Hyun <dhyun@apple.com> 05 April 2024, 18:56:34 UTC
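For reference, the upgraded driver coordinate from the two backports above expressed as an sbt test dependency; a hedged sketch, since Spark's actual build wiring differs:
```scala
// Paired with the postgres:16.2-alpine Docker image in the integration suites.
libraryDependencies += "org.postgresql" % "postgresql" % "42.7.2" % Test
```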
refs/heads/master c34baeb [SPARK-47719][SQL] Change spark.sql.legacy.timeParserPolicy default to CORRECTED ### What changes were proposed in this pull request? We changed the time parser policy in Spark 3.0.0. The config has since defaulted to raising an exception if there is a potential conflict between the legacy and the new policy. Spark 4.0.0 is a good time to default to the new policy. ### Why are the changes needed? Move the product forward and retire legacy behavior over time. ### Does this PR introduce _any_ user-facing change? No ### How was this patch tested? Run existing unit tests and verify changes. ### Was this patch authored or co-authored using generative AI tooling? No Closes #45859 from srielau/SPARK-47719-parser-policy-default-to-corrected. Lead-authored-by: Serge Rielau <serge@rielau.com> Co-authored-by: Wenchen Fan <cloud0fan@gmail.com> Signed-off-by: Gengliang Wang <gengliang@apache.org> 05 April 2024, 18:35:38 UTC
refs/remotes/origin/branch-0.8 0a6c051 Merge pull request #918 from pwendell/branch-0.8 Update versions for 0.8.0 release. 10 September 2013, 06:37:57 UTC
refs/remotes/origin/td-rdd-save 9bb448a Catch Throwable instead of Exception in LocalScheduler and Executor. Fixes #57. 01 June 2011, 18:45:47 UTC
refs/tags/0.3-scala-2.8 c86af80 Change version to 0.3 14 July 2011, 21:38:43 UTC
refs/tags/0.3-scala-2.9 7c77b2f Merge branch 'master' into scala-2.9 Conflicts: project/build.properties 14 July 2011, 21:39:34 UTC
refs/tags/2.0.0-preview 8f5a04b Preparing Spark release 2.0.0-preview 18 May 2016, 01:15:42 UTC
refs/tags/alpha-0.1 9f20b6b Added reduceByKey operation for RDDs containing pairs 04 October 2010, 03:28:20 UTC
refs/tags/alpha-0.2 f707856 Removed java-opts.template 24 May 2011, 22:59:01 UTC
refs/tags/v0.5.0 0472cf8 Update version in SBT 12 June 2012, 18:30:49 UTC
refs/tags/v0.5.1 d1538eb Change version in REPL 07 October 2012, 17:40:29 UTC
refs/tags/v0.5.2 8eec96f Change version to 0.5.2 21 November 2012, 02:23:34 UTC
refs/tags/v0.6.0 63fe4e9 Merge pull request #279 from pwendell/dev Removing credentials line in build. 15 October 2012, 02:36:41 UTC
refs/tags/v0.6.0-yarn 2f011b9 Comment out PGP stuff for publish-local to work 15 October 2012, 00:36:20 UTC
refs/tags/v0.6.1 edb91a3 Addressing Matei's comment: SPARK_LOCAL_IP environment variable 19 November 2012, 19:52:10 UTC
refs/tags/v0.6.2 0c37622 Update version number to 0.6.2 07 February 2013, 06:35:26 UTC
refs/tags/v0.7.0 baa30fc Use new Spark EC2 scripts by default 27 February 2013, 07:38:50 UTC
refs/tags/v0.7.0-bizo-1 02b4a0d Merge branches 'subtract' and 'bettersplits' into bizo * subtract: Add RDD.subtract. * bettersplits: Update more javadocs. Tweak test names. Remove fileServerSuite.txt. Update default.parallelism docs, have StandaloneSchedulerBackend use it. Change defaultPartitioner to use upstream split size. Conflicts: core/src/main/scala/spark/scheduler/cluster/StandaloneSchedulerBackend.scala core/src/test/scala/spark/ShuffleSuite.scala 16 February 2013, 20:31:30 UTC
refs/tags/v0.7.1 00e78b6 Release v0.7.1 26 April 2013, 17:58:40 UTC
refs/tags/v0.7.2 86cc03b Revert "Update version to 0.7.3-SNAPSHOT" This reverts commit e5fbdac22acba28d6691901a2a9afcf37a80cc74. 02 June 2013, 00:12:22 UTC
refs/tags/v0.9.1 4c43182 [maven-release-plugin] prepare release v0.9.1-rc3 27 March 2014, 05:14:46 UTC
refs/tags/v0.9.2 4322c0b [maven-release-plugin] prepare release v0.9.2-rc1 17 July 2014, 07:48:28 UTC
refs/tags/v1.0.0 c69d97c [maven-release-plugin] prepare release v1.0.0-rc11 26 May 2014, 06:46:48 UTC
refs/tags/v1.0.1 7d1043c [maven-release-plugin] prepare release v1.0.1-rc2 04 July 2014, 17:37:42 UTC
refs/tags/v1.0.2 8fb6f00 [maven-release-plugin] prepare release v1.0.2-rc1 25 July 2014, 21:21:15 UTC
refs/tags/v1.1.0 2f9b2bd [maven-release-plugin] prepare release v1.1.0-rc4 03 September 2014, 05:27:53 UTC
refs/tags/v1.1.1 3693ae5 [maven-release-plugin] prepare release v1.1.1-rc2 19 November 2014, 20:10:56 UTC
refs/tags/v1.2.0 a428c44 Preparing Spark release v1.2.0-rc2 10 December 2014, 09:03:21 UTC
refs/tags/v1.2.1 b6eaf77 Preparing Spark release v1.2.1-rc3 03 February 2015, 00:39:27 UTC
refs/tags/v1.2.2 7531b50 Preparing Spark release v1.2.2-rc1 05 April 2015, 12:17:47 UTC
refs/tags/v1.3.0 4aaf48d Preparing Spark release v1.3.0-rc3 05 March 2015, 23:02:07 UTC
refs/tags/v1.3.1 3e83913 Preparing Spark release v1.3.1-rc3 11 April 2015, 04:04:37 UTC
refs/tags/v1.4.0 22596c5 Preparing Spark release v1.4.0-rc4 03 June 2015, 01:06:35 UTC
refs/tags/v1.4.1 dbaa5c2 Preparing Spark release v1.4.1-rc4 08 July 2015, 22:40:49 UTC
refs/tags/v1.5.0-rc1 4c56ad7 Preparing Spark release v1.5.0-rc1 20 August 2015, 23:24:07 UTC
refs/tags/v1.5.0-rc2 7277713 Preparing Spark release v1.5.0-rc2 25 August 2015, 22:56:37 UTC
refs/tags/v1.5.0-rc3 908e37b Preparing Spark release v1.5.0-rc3 31 August 2015, 22:57:42 UTC
refs/tags/v1.5.1 4f894dd Preparing Spark release v1.5.1-rc1 24 September 2015, 04:32:10 UTC
refs/tags/v1.6.0 4062cda Preparing Spark release v1.6.0-rc4 22 December 2015, 01:50:29 UTC
refs/tags/v1.6.1 15de51c Preparing Spark release v1.6.1-rc1 27 February 2016, 04:09:04 UTC
refs/tags/v1.6.2 54b1121 Preparing Spark release v1.6.2-rc2 19 June 2016, 21:06:21 UTC
refs/tags/v1.6.3 1e86074 Preparing Spark release v1.6.3-rc2 02 November 2016, 21:45:51 UTC
refs/tags/v2.0.0 13650fc Preparing Spark release v2.0.0-rc5 19 July 2016, 21:02:27 UTC
refs/tags/v2.0.1 933d2c1 Preparing Spark release v2.0.1-rc4 28 September 2016, 23:27:45 UTC
refs/tags/v2.0.2 584354e Preparing Spark release v2.0.2-rc3 07 November 2016, 20:26:31 UTC
refs/tags/v2.1.0 cd0a083 Preparing Spark release v2.1.0-rc5 16 December 2016, 01:57:04 UTC
refs/tags/v2.1.1 267aca5 Preparing Spark release v2.1.1-rc4 25 April 2017, 23:28:22 UTC
refs/tags/v2.1.2 2abaea9 Preparing Spark release v2.1.2-rc4 02 October 2017, 18:57:15 UTC
refs/tags/v2.1.2-rc1 6f47032 Preparing Spark release v2.1.2-rc1 14 September 2017, 02:34:41 UTC
refs/tags/v2.1.2-rc2 fabbb7f Preparing Spark release v2.1.2-rc2 22 September 2017, 15:07:37 UTC
refs/tags/v2.1.2-rc3 efdbef4 Preparing Spark release v2.1.2-rc3 29 September 2017, 16:04:26 UTC
refs/tags/v2.1.2-rc4 2abaea9 Preparing Spark release v2.1.2-rc4 02 October 2017, 18:57:15 UTC
refs/tags/v2.1.3 b7eac07 Preparing Spark release v2.1.3-rc2 26 June 2018, 16:22:59 UTC
refs/tags/v2.1.3-rc1 bbec382 Preparing Spark release v2.1.3-rc1 18 June 2018, 20:53:49 UTC
refs/tags/v2.1.3-rc2 b7eac07 Preparing Spark release v2.1.3-rc2 26 June 2018, 16:22:59 UTC
refs/tags/v2.2.0 a2c7b21 Preparing Spark release v2.2.0-rc6 30 June 2017, 22:54:34 UTC
refs/tags/v2.2.1 e30e269 Preparing Spark release v2.2.1-rc2 24 November 2017, 21:11:35 UTC
refs/tags/v2.2.1-rc1 41116ab Preparing Spark release v2.2.1-rc1 13 November 2017, 19:04:27 UTC
refs/tags/v2.2.1-rc2 e30e269 Preparing Spark release v2.2.1-rc2 24 November 2017, 21:11:35 UTC
refs/tags/v2.2.2 fc28ba3 Preparing Spark release v2.2.2-rc2 27 June 2018, 13:55:11 UTC
refs/tags/v2.2.2-rc1 8ce9e2a Preparing Spark release v2.2.2-rc1 18 June 2018, 14:45:11 UTC
refs/tags/v2.2.2-rc2 fc28ba3 Preparing Spark release v2.2.2-rc2 27 June 2018, 13:55:11 UTC
refs/tags/v2.2.3 4acb6ba Preparing Spark release v2.2.3-rc1 07 January 2019, 17:48:24 UTC
refs/tags/v2.2.3-rc1 4acb6ba Preparing Spark release v2.2.3-rc1 07 January 2019, 17:48:24 UTC
refs/tags/v2.3.0 992447f Preparing Spark release v2.3.0-rc5 22 February 2018, 17:56:57 UTC
refs/tags/v2.3.0-rc1 964cc2e Preparing Spark release v2.3.0-rc1 11 January 2018, 23:23:10 UTC
refs/tags/v2.3.0-rc2 489ecb0 Preparing Spark release v2.3.0-rc2 22 January 2018, 18:49:08 UTC
refs/tags/v2.3.0-rc3 89f6fcb Preparing Spark release v2.3.0-rc3 12 February 2018, 19:08:28 UTC
refs/tags/v2.3.0-rc4 44095cb Preparing Spark release v2.3.0-rc4 17 February 2018, 01:29:46 UTC
refs/tags/v2.3.1 30aaa5a Preparing Spark release v2.3.1-rc4 01 June 2018, 20:34:19 UTC
refs/tags/v2.3.1-rc1 cc93bc9 Preparing Spark release v2.3.1-rc1 15 May 2018, 00:57:16 UTC
refs/tags/v2.3.1-rc2 93258d8 Preparing Spark release v2.3.1-rc2 22 May 2018, 16:37:04 UTC
refs/tags/v2.3.1-rc3 1cc5f68 Preparing Spark release v2.3.1-rc3 01 June 2018, 17:56:26 UTC
refs/tags/v2.3.1-rc4 30aaa5a Preparing Spark release v2.3.1-rc4 01 June 2018, 20:34:19 UTC
refs/tags/v2.3.2 02b5107 Preparing Spark release v2.3.2-rc6 16 September 2018, 03:31:17 UTC
refs/tags/v2.3.2-rc1 4df06b4 Preparing Spark release v2.3.2-rc1 08 July 2018, 01:24:42 UTC