https://github.com/apache/spark

Revision Message Commit Date
807e0a4 Preparing Spark release v2.4.6-rc8 29 May 2020, 23:28:32 UTC
8307f1a [SPARK-26095][BUILD] Disable parallelization in make-distribution.sh. It makes the build slower, but at least it doesn't hang. It seems that maven-shade-plugin has some issue with parallelization. Closes #23061 from vanzin/SPARK-26095. Authored-by: Marcelo Vanzin <vanzin@cloudera.com> Signed-off-by: Marcelo Vanzin <vanzin@cloudera.com> (cherry picked from commit d2792046a1b10a07b65fc30be573983f1237e450) Signed-off-by: Holden Karau <hkarau@apple.com> 29 May 2020, 21:00:06 UTC
1c36f8e Preparing development version 2.4.7-SNAPSHOT 29 May 2020, 07:51:23 UTC
105de05 Preparing Spark release v2.4.6-rc7 29 May 2020, 07:51:19 UTC
d53363d Preparing development version 2.4.7-SNAPSHOT 29 May 2020, 00:40:59 UTC
787c947 Preparing Spark release v2.4.6-rc6 29 May 2020, 00:40:54 UTC
7f522d5 [BUILD][INFRA] bump the timeout to match the jenkins PRB ### What changes were proposed in this pull request? bump the timeout to match what's set in jenkins ### Why are the changes needed? tests be timing out! ### Does this PR introduce _any_ user-facing change? no ### How was this patch tested? via jenkins Closes #28666 from shaneknapp/increase-jenkins-timeout. Authored-by: shane knapp <incomplete@gmail.com> Signed-off-by: shane knapp <incomplete@gmail.com> (cherry picked from commit 9e68affd13a3875b92f0700b8ab7c9d902f1a08c) Signed-off-by: shane knapp <incomplete@gmail.com> 28 May 2020, 21:26:59 UTC
8bde6ed [SPARK-31839][TESTS] Delete duplicate code in CastSuite ### What changes were proposed in this pull request? Delete duplicate code in CastSuite ### Why are the changes needed? Keep the Spark code clean ### Does this PR introduce _any_ user-facing change? No ### How was this patch tested? No tests needed Closes #28655 from GuoPhilipse/delete-duplicate-code-castsuit. Lead-authored-by: GuoPhilipse <46367746+GuoPhilipse@users.noreply.github.com> Co-authored-by: GuoPhilipse <guofei_ok@126.com> Signed-off-by: HyukjinKwon <gurwls223@apache.org> (cherry picked from commit dfbc5edf20040e8163ee3beef61f2743a948c508) Signed-off-by: HyukjinKwon <gurwls223@apache.org> 28 May 2020, 00:57:48 UTC
e1adbac Preparing development version 2.4.7-SNAPSHOT 27 May 2020, 16:38:38 UTC
f5962ca Preparing Spark release v2.4.6-rc5 27 May 2020, 16:38:33 UTC
6b055a4 Preparing development version 2.4.7-SNAPSHOT 27 May 2020, 00:07:42 UTC
e08970b Preparing Spark release v2.4.6-rc4 27 May 2020, 00:07:39 UTC
48ba885 [SPARK-31819][K8S][DOCS][TESTS][2.4] Add a workaround for Java 8u251+/K8s 1.17 and update integration test cases ### What changes were proposed in this pull request? This PR aims to add a workaround `HTTP2_DISABLE=true` to the document and to update the K8s integration test. ### Why are the changes needed? SPARK-31786 reported fabric8 kubernetes-client library fails to talk K8s 1.17.x client on Java 8u251+ environment. It's fixed at Apache Spark 3.0.0 by upgrading the library, but it turns out that we can not use the same way in `branch-2.4` (https://github.com/apache/spark/pull/28625) ### Does this PR introduce _any_ user-facing change? Yes. This will provide a workaround at the document and testing environment. ### How was this patch tested? This PR is irrelevant to Jenkins UT because it's only updating docs and integration tests. We need to the followings. - [x] Pass the Jenkins K8s IT with old JDK8 and K8s versions (https://github.com/apache/spark/pull/28638#issuecomment-633837179) - [x] Manually run K8s IT on K8s 1.17/Java 8u251+ with `export HTTP2_DISABLE=true`. **K8s v1.17.6 / JDK 1.8.0_252** ``` KubernetesSuite: - Run SparkPi with no resources - Run SparkPi with a very long application name. - Use SparkLauncher.NO_RESOURCE - Run SparkPi with a master URL without a scheme. - Run SparkPi with an argument. - Run SparkPi with custom labels, annotations, and environment variables. - Run extraJVMOptions check on driver - Run SparkRemoteFileTest using a remote data file - Run SparkPi with env and mount secrets. - Run PySpark on simple pi.py example - Run PySpark with Python2 to test a pyfiles example - Run PySpark with Python3 to test a pyfiles example - Run PySpark with memory customization - Run in client mode. Run completed in 5 minutes, 7 seconds. Total number of tests run: 14 Suites: completed 2, aborted 0 Tests: succeeded 14, failed 0, canceled 0, ignored 0, pending 0 All tests passed. ``` Closes #28638 from dongjoon-hyun/SPARK-31819. Authored-by: Dongjoon Hyun <dongjoon@apache.org> Signed-off-by: Dongjoon Hyun <dongjoon@apache.org> 26 May 2020, 21:30:35 UTC
1d1a207 [SPARK-31787][K8S][TESTS][2.4] Fix Minikube.getIfNewMinikubeStatus to understand 1.5+ ### What changes were proposed in this pull request? This PR aims to fix the testing infra to support Minikube 1.5+ in K8s IT. Also, note that this is a subset of #26488 with the same ownership. ### Why are the changes needed? This helps us testing `master/3.0/2.4` in the same Minikube version. ### Does this PR introduce _any_ user-facing change? No. ### How was this patch tested? - Pass the Jenkins K8s IT with Minikube v0.34.1. - Manually, test with Minikube 1.5.x. ``` KubernetesSuite: - Run SparkPi with no resources - Run SparkPi with a very long application name. - Use SparkLauncher.NO_RESOURCE - Run SparkPi with a master URL without a scheme. - Run SparkPi with an argument. - Run SparkPi with custom labels, annotations, and environment variables. - Run extraJVMOptions check on driver - Run SparkRemoteFileTest using a remote data file - Run SparkPi with env and mount secrets. - Run PySpark on simple pi.py example - Run PySpark with Python2 to test a pyfiles example - Run PySpark with Python3 to test a pyfiles example - Run PySpark with memory customization - Run in client mode. Run completed in 4 minutes, 37 seconds. Total number of tests run: 14 Suites: completed 2, aborted 0 Tests: succeeded 14, failed 0, canceled 0, ignored 0, pending 0 All tests passed. ``` Closes #28599 from dongjoon-hyun/SPARK-31787. Authored-by: Marcelo Vanzin <vanzin@cloudera.com> Signed-off-by: Dongjoon Hyun <dongjoon@apache.org> 21 May 2020, 21:09:51 UTC
11e97b9 [K8S][MINOR] Log minikube version when running integration tests. Closes #23893 from vanzin/minikube-version. Authored-by: Marcelo Vanzin <vanzin@cloudera.com> Signed-off-by: Marcelo Vanzin <vanzin@cloudera.com> (cherry picked from commit 9f16af636661ec1f7af057764409fe359da0026a) Signed-off-by: Dongjoon Hyun <dongjoon@apache.org> 21 May 2020, 17:59:14 UTC
281c591 [SPARK-31399][CORE][2.4] Support indylambda Scala closure in ClosureCleaner This is a backport of https://github.com/apache/spark/pull/28463 from Apache Spark master/3.0 to 2.4. Minor adaptation include: - Retain the Spark 2.4-specific behavior of skipping the indylambda check when using Scala 2.11 - Remove unnecessary LMF restrictions in ClosureCleaner tests - Address review comments in the original PR from kiszk Tested with the default Scala 2.11 build, and also tested ClosureCleaner-related tests in Scala 2.12 build as well: - repl: `SingletonReplSuite` - core: `ClosureCleanerSuite` and `ClosureCleanerSuite2` --- ### What changes were proposed in this pull request? This PR proposes to enhance Spark's `ClosureCleaner` to support "indylambda" style of Scala closures to the same level as the existing implementation for the old (inner class) style ones. The goal is to reach feature parity with the support of the old style Scala closures, with as close to bug-for-bug compatibility as possible. Specifically, this PR addresses one lacking support for indylambda closures vs the inner class closures: - When a closure is declared in a Scala REPL and captures the enclosing REPL line object, such closure should be cleanable (unreferenced fields on the enclosing REPL line object should be cleaned) This PR maintains the same limitations in the new indylambda closure support as the old inner class closures, in particular the following two: - Cleaning is only available for one level of REPL line object. If a closure captures state from a REPL line object further out from the immediate enclosing one, it won't be subject to cleaning. See example below. - "Sibling" closures are not handled yet. A "sibling" closure is defined here as a closure that is directly or indirectly referenced by the starting closure, but isn't lexically enclosing. e.g. ```scala { val siblingClosure = (x: Int) => x + this.fieldA // captures `this`, references `fieldA` on `this`. val startingClosure = (y: Int) => y + this.fieldB + siblingClosure(y) // captures `this` and `siblingClosure`, references `fieldB` on `this`. } ``` The changes are intended to be minimal, with further code cleanups planned in separate PRs. Jargons: - old, inner class style Scala closures, aka `delambdafy:inline`: default in Scala 2.11 and before - new, "indylambda" style Scala closures, aka `delambdafy:method`: default in Scala 2.12 and later ### Why are the changes needed? There had been previous effortsto extend Spark's `ClosureCleaner` to support "indylambda" Scala closures, which is necessary for proper Scala 2.12 support. Most notably the work done for [SPARK-14540](https://issues.apache.org/jira/browse/SPARK-14540). But the previous efforts had missed one import scenario: a Scala closure declared in a Scala REPL, and it captures the enclosing `this` -- a REPL line object. e.g. in a Spark Shell: ```scala :pa class NotSerializableClass(val x: Int) val ns = new NotSerializableClass(42) val topLevelValue = "someValue" val func = (j: Int) => { (1 to j).flatMap { x => (1 to x).map { y => y + topLevelValue } } } <Ctrl+D> sc.parallelize(0 to 2).map(func).collect ``` In this example, `func` refers to a Scala closure that captures the enclosing `this` because it needs to access `topLevelValue`, which is in turn implemented as a field on the enclosing REPL line object. The existing `ClosureCleaner` in Spark supports cleaning this case in Scala 2.11-, and this PR brings feature parity to Scala 2.12+. 
Note that the existing cleaning logic only supported one level of REPL line object nesting. This PR does not go beyond that. When a closure references state declared a few commands earlier, the cleaning will fail in both Scala 2.11 and Scala 2.12. e.g. ```scala scala> :pa // Entering paste mode (ctrl-D to finish) class NotSerializableClass1(val x: Int) case class Foo(id: String) val ns = new NotSerializableClass1(42) val topLevelValue = "someValue" // Exiting paste mode, now interpreting. defined class NotSerializableClass1 defined class Foo ns: NotSerializableClass1 = NotSerializableClass1615b1baf topLevelValue: String = someValue scala> :pa // Entering paste mode (ctrl-D to finish) val closure2 = (j: Int) => { (1 to j).flatMap { x => (1 to x).map { y => y + topLevelValue } // 2 levels } } // Exiting paste mode, now interpreting. closure2: Int => scala.collection.immutable.IndexedSeq[String] = <function1> scala> sc.parallelize(0 to 2).map(closure2).collect org.apache.spark.SparkException: Task not serializable ... ``` in the Scala 2.11 / Spark 2.4.x case: ``` Caused by: java.io.NotSerializableException: NotSerializableClass1 Serialization stack: - object not serializable (class: NotSerializableClass1, value: NotSerializableClass1615b1baf) - field (class: $iw, name: ns, type: class NotSerializableClass1) - object (class $iw, $iw64df3f4b) - field (class: $iw, name: $iw, type: class $iw) - object (class $iw, $iw66e6e5e9) - field (class: $line14.$read, name: $iw, type: class $iw) - object (class $line14.$read, $line14.$readc310aa3) - field (class: $iw, name: $line14$read, type: class $line14.$read) - object (class $iw, $iw79224636) - field (class: $iw, name: $outer, type: class $iw) - object (class $iw, $iw636d4cdc) - field (class: $anonfun$1, name: $outer, type: class $iw) - object (class $anonfun$1, <function1>) ``` in the Scala 2.12 / Spark 2.4.x case after this PR: ``` Caused by: java.io.NotSerializableException: NotSerializableClass1 Serialization stack: - object not serializable (class: NotSerializableClass1, value: NotSerializableClass16f3b4c9a) - field (class: $iw, name: ns, type: class NotSerializableClass1) - object (class $iw, $iw2945a3c1) - field (class: $iw, name: $iw, type: class $iw) - object (class $iw, $iw152705d0) - field (class: $line14.$read, name: $iw, type: class $iw) - object (class $line14.$read, $line14.$read7cf311eb) - field (class: $iw, name: $line14$read, type: class $line14.$read) - object (class $iw, $iwd980dac) - field (class: $iw, name: $outer, type: class $iw) - object (class $iw, $iw557d9532) - element of array (index: 0) - array (class [Ljava.lang.Object;, size 1) - field (class: java.lang.invoke.SerializedLambda, name: capturedArgs, type: class [Ljava.lang.Object;) - object (class java.lang.invoke.SerializedLambda, SerializedLambda[capturingClass=class $iw, functionalInterfaceMethod=scala/Function1.apply:(Ljava/lang/Object;)Ljava/lang/Object;, implementation=invokeStatic $anonfun$closure2$1$adapted:(L$iw;Ljava/lang/Object;)Lscala/collection/immutable/IndexedSeq;, instantiatedMethodType=(Ljava/lang/Object;)Lscala/collection/immutable/IndexedSeq;, numCaptured=1]) - writeReplace data (class: java.lang.invoke.SerializedLambda) - object (class $Lambda$2103/815179920, $Lambda$2103/815179920569b57c4) ``` For more background of the new and old ways Scala lowers closures to Java bytecode, please see [A note on how NSC (New Scala Compiler) lowers lambdas](https://gist.github.com/rednaxelafx/e9ecd09bbd1c448dbddad4f4edf25d48#file-notes-md). 
For more background on how Spark's `ClosureCleaner` works and what's needed to make it support "indylambda" Scala closures, please refer to [A Note on Apache Spark's ClosureCleaner](https://gist.github.com/rednaxelafx/e9ecd09bbd1c448dbddad4f4edf25d48#file-spark_closurecleaner_notes-md). #### tl;dr The `ClosureCleaner` works like a mark-sweep algorithm on fields: - Finding (a chain of) outer objects referenced by the starting closure; - Scanning the starting closure and its inner closures and marking the fields on the outer objects accessed; - Cloning the outer objects, nulling out fields that are not accessed by any closure of concern. ##### Outer Objects For the old, inner class style Scala closures, the "outer objects" is defined as the lexically enclosing closures of the starting closure, plus an optional enclosing REPL line object if these closures are defined in a Scala REPL. All of them are on a singly-linked `$outer` chain. For the new, "indylambda" style Scala closures, the capturing implementation changed, so closures no longer refer to their enclosing closures via an `$outer` chain. However, a closure can still capture its enclosing REPL line object, much like the old style closures. The name of the field that captures this reference would be `arg$1` (instead of `$outer`). So what's missing in the `ClosureCleaner` for the "indylambda" support is find and potentially clone+clean the captured enclosing `this` REPL line object. That's what this PR implements. ##### Inner Closures The old, inner class style of Scala closures are compiled into separate inner classes, one per lambda body. So in order to discover the implementation (bytecode) of the inner closures, one has to jump over multiple classes. The name of such a class would contain the marker substring `$anonfun$`. The new, "indylambda" style Scala closures are compiled into **static methods** in the class where the lambdas were declared. So for lexically nested closures, their lambda bodies would all be compiled into static methods **in the same class**. This makes it much easier to discover the implementation (bytecode) of the nested lambda bodies. The name of such a static method would contain the marker substring `$anonfun$`. Discovery of inner closures involves scanning bytecode for certain patterns that represent the creation of a closure object for the inner closure. - For inner class style: the closure object creation site is like `new <InnerClassForTheClosure>(captured args)` - For "indylambda" style: the closure object creation site would be compiled into an `invokedynamic` instruction, with its "bootstrap method" pointing to the same one used by Java 8 for its serializable lambdas, and with the bootstrap method arguments pointing to the implementation method. ### Does this PR introduce _any_ user-facing change? Yes. Before this PR, Spark 2.4 / 3.0 / master on Scala 2.12 would not support Scala closures declared in a Scala REPL that captures anything from the REPL line objects. After this PR, such scenario is supported. ### How was this patch tested? Added new unit test case to `org.apache.spark.repl.SingletonReplSuite`. The new test case fails without the fix in this PR, and pases with the fix. Closes #28463 from rednaxelafx/closure-cleaner-indylambda. 
Authored-by: Kris Mok <kris.mok@databricks.com> Signed-off-by: Wenchen Fan <wenchen@databricks.com> (cherry picked from commit dc01b7556f74e4a9873ceb1f78bc7df4e2ab4a8a) Signed-off-by: Kris Mok <kris.mok@databricks.com> Closes #28577 from rednaxelafx/backport-spark-31399-2.4. Authored-by: Kris Mok <kris.mok@databricks.com> Signed-off-by: DB Tsai <d_tsai@apple.com> 19 May 2020, 19:29:33 UTC
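The commit above describes how "indylambda" closures are recognized through the same machinery Java 8 uses for serializable lambdas. A minimal, hedged sketch of that detection idea (not Spark's actual ClosureCleaner code; the helper name is made up):

```scala
import java.lang.invoke.SerializedLambda

// Hedged sketch: probe whether a closure is an "indylambda" (Scala 2.12+) closure by invoking
// the writeReplace method that LambdaMetafactory adds to serializable lambdas. For Scala 2.11
// inner-class closures the method does not exist, so this returns None.
def serializedLambdaOf(closure: AnyRef): Option[SerializedLambda] =
  scala.util.Try {
    val m = closure.getClass.getDeclaredMethod("writeReplace")
    m.setAccessible(true)
    m.invoke(closure).asInstanceOf[SerializedLambda]
  }.toOption

val f = (x: Int) => x + 1
serializedLambdaOf(f).foreach { sl =>
  // capturingClass is where the lambda body lives as a static method; capturedArgs may hold
  // the enclosing REPL line object (field arg$1) that the cleaner would clone and clean.
  println(s"impl=${sl.getImplMethodName}, captured=${sl.getCapturedArgCount}")
}
```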
fdbd32e [SPARK-31692][SQL] Pass hadoop confs specified via Spark confs to URLStreamHandlerFactory Pass hadoop confs specified via Spark confs to URLStreamHandlerFactory **BEFORE** ``` ➜ spark git:(SPARK-31692) ✗ ./bin/spark-shell --conf spark.hadoop.fs.file.impl=org.apache.hadoop.fs.RawLocalFileSystem scala> spark.sharedState res0: org.apache.spark.sql.internal.SharedState = org.apache.spark.sql.internal.SharedState@5793cd84 scala> new java.net.URL("file:///tmp/1.txt").openConnection.getInputStream res1: java.io.InputStream = org.apache.hadoop.fs.ChecksumFileSystem$FSDataBoundedInputStream@22846025 scala> import org.apache.hadoop.fs._ import org.apache.hadoop.fs._ scala> FileSystem.get(new Path("file:///tmp/1.txt").toUri, spark.sparkContext.hadoopConfiguration) res2: org.apache.hadoop.fs.FileSystem = org.apache.hadoop.fs.LocalFileSystem@5a930c03 ``` **AFTER** ``` ➜ spark git:(SPARK-31692) ✗ ./bin/spark-shell --conf spark.hadoop.fs.file.impl=org.apache.hadoop.fs.RawLocalFileSystem scala> spark.sharedState res0: org.apache.spark.sql.internal.SharedState = org.apache.spark.sql.internal.SharedState@5c24a636 scala> new java.net.URL("file:///tmp/1.txt").openConnection.getInputStream res1: java.io.InputStream = org.apache.hadoop.fs.FSDataInputStream@2ba8f528 scala> import org.apache.hadoop.fs._ import org.apache.hadoop.fs._ scala> FileSystem.get(new Path("file:///tmp/1.txt").toUri, spark.sparkContext.hadoopConfiguration) res2: org.apache.hadoop.fs.FileSystem = LocalFS scala> FileSystem.get(new Path("file:///tmp/1.txt").toUri, spark.sparkContext.hadoopConfiguration).getClass res3: Class[_ <: org.apache.hadoop.fs.FileSystem] = class org.apache.hadoop.fs.RawLocalFileSystem ``` The type of the FileSystem object created (you can check the last statement in the above snippets) differs between the two cases, which should not have been the case. No. Tested locally. Added a unit test. Closes #28516 from karuppayya/SPARK-31692. Authored-by: Karuppayya Rajendran <karuppayya1990@gmail.com> Signed-off-by: Dongjoon Hyun <dongjoon@apache.org> (cherry picked from commit 72601460ada41761737f39d5dff8e69444fce2ba) Signed-off-by: Dongjoon Hyun <dongjoon@apache.org> (cherry picked from commit d639a12ef243e1e8d20bd06d3a97d00e47f05517) Signed-off-by: Dongjoon Hyun <dongjoon@apache.org> 19 May 2020, 00:07:44 UTC
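For illustration only, a hedged sketch of the idea behind this fix: build the stream-handler factory from the session's Hadoop configuration (which carries the `spark.hadoop.*` settings) rather than a default `Configuration`. It assumes a fresh spark-shell where no factory has been registered yet; the real change lives in Spark's `SharedState`.

```scala
import org.apache.hadoop.fs.FsUrlStreamHandlerFactory

// Use the session's Hadoop configuration so spark.hadoop.* confs (e.g. fs.file.impl above)
// take effect for file:// URLs. Note: the JVM allows setURLStreamHandlerFactory only once.
val hadoopConf = spark.sparkContext.hadoopConfiguration
java.net.URL.setURLStreamHandlerFactory(new FsUrlStreamHandlerFactory(hadoopConf))

// Opening a file:// URL now goes through the FileSystem selected by those confs.
new java.net.URL("file:///tmp/1.txt").openConnection.getInputStream
```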
d52ff4a [SPARK-25694][SQL][FOLLOW-UP] Move 'spark.sql.defaultUrlStreamHandlerFactory.enabled' into StaticSQLConf.scala This PR is a followup of https://github.com/apache/spark/pull/26530 and proposes to move the configuration `spark.sql.defaultUrlStreamHandlerFactory.enabled` to `StaticSQLConf.scala` for consistency. To put the similar configurations together and for readability. No. Manually tested as described in https://github.com/apache/spark/pull/26530. Closes #26570 from HyukjinKwon/SPARK-25694. Authored-by: HyukjinKwon <gurwls223@apache.org> Signed-off-by: HyukjinKwon <gurwls223@apache.org> (cherry picked from commit 8469614c0513fbed87977d4e741649db3fdd8add) Signed-off-by: Dongjoon Hyun <dongjoon@apache.org> 18 May 2020, 23:57:14 UTC
19cb475 [SPARK-25694][SQL] Add a config for `URL.setURLStreamHandlerFactory` Add a property `spark.fsUrlStreamHandlerFactory.enabled` to allow users turn off the default registration of `org.apache.hadoop.fs.FsUrlStreamHandlerFactory` This [SPARK-25694](https://issues.apache.org/jira/browse/SPARK-25694) is a long-standing issue. Originally, [[SPARK-12868][SQL] Allow adding jars from hdfs](https://github.com/apache/spark/pull/17342 ) added this for better Hive support. However, this have a side-effect when the users use Apache Spark without `-Phive`. This causes exceptions when the users tries to use another custom factories or 3rd party library (trying to set this). This configuration will unblock those non-hive users. Yes. This provides a new user-configurable property. By default, the behavior is unchanged. Manual testing. **BEFORE** ``` $ build/sbt package $ bin/spark-shell scala> sql("show tables").show +--------+---------+-----------+ |database|tableName|isTemporary| +--------+---------+-----------+ +--------+---------+-----------+ scala> java.net.URL.setURLStreamHandlerFactory(new org.apache.hadoop.fs.FsUrlStreamHandlerFactory()) java.lang.Error: factory already defined at java.net.URL.setURLStreamHandlerFactory(URL.java:1134) ... 47 elided ``` **AFTER** ``` $ build/sbt package $ bin/spark-shell --conf spark.sql.defaultUrlStreamHandlerFactory.enabled=false scala> sql("show tables").show +--------+---------+-----------+ |database|tableName|isTemporary| +--------+---------+-----------+ +--------+---------+-----------+ scala> java.net.URL.setURLStreamHandlerFactory(new org.apache.hadoop.fs.FsUrlStreamHandlerFactory()) ``` Closes #26530 from jiangzho/master. Lead-authored-by: Zhou Jiang <zhou_jiang@apple.com> Co-authored-by: Dongjoon Hyun <dhyun@apple.com> Co-authored-by: zhou-jiang <zhou_jiang@apple.com> Signed-off-by: DB Tsai <d_tsai@apple.com> (cherry picked from commit ee3bd6d76887ccc4961fd520c5d03f7edd3742ac) Signed-off-by: Dongjoon Hyun <dongjoon@apache.org> 18 May 2020, 23:45:27 UTC
5c57171 [SPARK-31740][K8S][TESTS] Use github URL instead of a broken link This PR aims to use GitHub URL instead of a broken link in `BasicTestsSuite.scala`. Currently, K8s integration test is broken: https://amplab.cs.berkeley.edu/jenkins/view/Spark%20K8s%20Builds/job/spark-master-test-k8s/534/console ``` - Run SparkRemoteFileTest using a remote data file *** FAILED *** The code passed to eventually never returned normally. Attempted 130 times over 2.00109555135 minutes. Last failure message: false was not true. (KubernetesSuite.scala:370) ``` No. Pass the K8s integration test. Closes #28561 from williamhyun/williamhyun-patch-1. Authored-by: williamhyun <62487364+williamhyun@users.noreply.github.com> Signed-off-by: Dongjoon Hyun <dongjoon@apache.org> (cherry picked from commit 5bb1a09b5f3a0f91409c7245847ab428c3c58322) Signed-off-by: Dongjoon Hyun <dongjoon@apache.org> 18 May 2020, 05:14:16 UTC
6d8cef5 [SPARK-31655][BUILD][2.4] Upgrade snappy-java to 1.1.7.5 ### What changes were proposed in this pull request? snappy-java have release v1.1.7.5, upgrade to latest version. Fixed in v1.1.7.4 - Caching internal buffers for SnappyFramed streams #234 - Fixed the native lib for ppc64le to work with glibc 2.17 (Previously it depended on 2.22) Fixed in v1.1.7.5 - Fixes java.lang.NoClassDefFoundError: org/xerial/snappy/pool/DefaultPoolFactory in 1.1.7.4 https://github.com/xerial/snappy-java/compare/1.1.7.3...1.1.7.5 v 1.1.7.5 release note: https://github.com/xerial/snappy-java/commit/edc4ec28bdb15a32b6c41ca9e8b195e635bec3a3 ### Why are the changes needed? Fix bug ### Does this PR introduce _any_ user-facing change? No ### How was this patch tested? No need Closes #28506 from AngersZhuuuu/spark-31655-2.4. Authored-by: angerszhu <angers.zhu@gmail.com> Signed-off-by: Dongjoon Hyun <dongjoon@apache.org> 17 May 2020, 19:01:36 UTC
a4885f3 [SPARK-31663][SQL] Grouping sets with having clause returns the wrong result - Resolve the having condition together with expanding the GROUPING SETS/CUBE/ROLLUP expressions in `ResolveGroupingAnalytics`: - Change the operation's resolving direction to top-down. - Try resolving the condition of the filter as though it is in the aggregate clause by reusing the function in `ResolveAggregateFunctions` - Push the aggregate expressions into the aggregate which contains the expanded operations. - Use UnresolvedHaving for all having clauses. Correctness bug fix. See the demo and analysis in SPARK-31663. Yes, correctness bug fix for HAVING with GROUPING SETS. New UTs added. Closes #28501 from xuanyuanking/SPARK-31663. Authored-by: Yuanjian Li <xyliyuanjian@gmail.com> Signed-off-by: Wenchen Fan <wenchen@databricks.com> (cherry picked from commit 86bd37f37eb1e534c520dc9a02387debf9fa05a1) Signed-off-by: Wenchen Fan <wenchen@databricks.com> 16 May 2020, 05:32:47 UTC
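A hedged sketch of the query shape this fix targets (column names and data are made up; the exact demo and the wrong pre-fix result are in SPARK-31663):

```scala
// A GROUPING SETS aggregation with a HAVING clause, the combination affected by SPARK-31663.
spark.sql("""
  SELECT c1, sum(c2) AS total
  FROM VALUES (1, 10), (1, 20), (2, 30) AS t(c1, c2)
  GROUP BY c1 GROUPING SETS ((c1), ())
  HAVING sum(c2) > 15
""").show()
```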
18e0767 Preparing development version 2.4.7-SNAPSHOT 16 May 2020, 02:03:49 UTC
570848d Preparing Spark release v2.4.6-rc3 16 May 2020, 02:03:44 UTC
d2027e7 Preparing development version 2.4.7-SNAPSHOT 15 May 2020, 22:49:14 UTC
d6a488a Preparing Spark release v2.4.6-rc2 15 May 2020, 22:49:09 UTC
8b72eb7 [SPARK-31712][SQL][TESTS][2.4] Check casting timestamps before the epoch to Byte/Short/Int/Long types ### What changes were proposed in this pull request? Added tests to check casting timestamps before 1970-01-01 00:00:00Z to ByteType, ShortType, IntegerType and LongType in ansi and non-ansi modes. This is a backport of https://github.com/apache/spark/pull/28531. ### Why are the changes needed? To improve test coverage and prevent errors while modifying the CAST expression code. ### Does this PR introduce _any_ user-facing change? No ### How was this patch tested? By running the modified test suites: ``` $ ./build/sbt "test:testOnly *CastSuite" ``` Closes #28542 from MaxGekk/test-cast-timestamp-to-byte-2.4. Authored-by: Maxim Gekk <max.gekk@gmail.com> Signed-off-by: Dongjoon Hyun <dongjoon@apache.org> 15 May 2020, 20:31:08 UTC
c3ae928 [SPARK-31716][SQL] Use fallback versions in HiveExternalCatalogVersionsSuite # What changes were proposed in this pull request? This PR aims to provide a fallback version instead of `Nil` in `HiveExternalCatalogVersionsSuite`. The provided fallback Spark versions recovers Jenkins jobs instead of failing. ### Why are the changes needed? Currently, `HiveExternalCatalogVersionsSuite` is aborted in all Jenkins jobs except JDK11 Jenkins jobs which don't have old Spark releases supporting JDK11. ``` HiveExternalCatalogVersionsSuite: org.apache.spark.sql.hive.HiveExternalCatalogVersionsSuite *** ABORTED *** Exception encountered when invoking run on a nested suite - Fail to get the lates Spark versions to test. (HiveExternalCatalogVersionsSuite.scala:180) ``` ### Does this PR introduce _any_ user-facing change? No. ### How was this patch tested? Pass the Jenkins Closes #28536 from dongjoon-hyun/SPARK-HiveExternalCatalogVersionsSuite. Authored-by: Dongjoon Hyun <dongjoon@apache.org> Signed-off-by: Dongjoon Hyun <dongjoon@apache.org> (cherry picked from commit 5d90886523c415768c65ea9cba7db24bc508a23b) Signed-off-by: Dongjoon Hyun <dongjoon@apache.org> 15 May 2020, 07:30:59 UTC
630f4dd [SPARK-31713][INFRA] Make test-dependencies.sh detect version string correctly ### What changes were proposed in this pull request? This PR makes `test-dependencies.sh` detect the version string correctly by ignoring all the other lines. ### Why are the changes needed? Currently, all SBT jobs are broken like the following. - https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test%20(Dashboard)/job/spark-branch-3.0-test-sbt-hadoop-3.2-hive-2.3/476/console ``` [error] running /home/jenkins/workspace/spark-branch-3.0-test-sbt-hadoop-3.2-hive-2.3/dev/test-dependencies.sh ; received return code 1 Build step 'Execute shell' marked build as failure ``` The reason is that the script detects the old version like `Falling back to archive.apache.org to download Maven 3.1.0-SNAPSHOT` when `build/mvn` did fallback. Specifically, in the script, `OLD_VERSION` became `Falling back to archive.apache.org to download Maven 3.1.0-SNAPSHOT` instead of `3.1.0-SNAPSHOT` if build/mvn did fallback. Then, `pom.xml` file is corrupted like the following at the end and the exit code become `1` instead of `0`. It causes Jenkins jobs fails ``` - <version>3.1.0-SNAPSHOT</version> + <version>Falling</version> ``` **NO FALLBACK** ``` $ build/mvn -q -Dexec.executable="echo" -Dexec.args='${project.version}' --non-recursive org.codehaus.mojo:exec-maven-plugin:1.6.0:exec Using `mvn` from path: /Users/dongjoon/APACHE/spark-merge/build/apache-maven-3.6.3/bin/mvn 3.1.0-SNAPSHOT ``` **FALLBACK** ``` $ build/mvn -q -Dexec.executable="echo" -Dexec.args='${project.version}' --non-recursive org.codehaus.mojo:exec-maven-plugin:1.6.0:exec Falling back to archive.apache.org to download Maven Using `mvn` from path: /Users/dongjoon/APACHE/spark-merge/build/apache-maven-3.6.3/bin/mvn 3.1.0-SNAPSHOT ``` **In the script** ``` $ echo $(build/mvn -q -Dexec.executable="echo" -Dexec.args='${project.version}' --non-recursive org.codehaus.mojo:exec-maven-plugin:1.6.0:exec) Using `mvn` from path: /Users/dongjoon/APACHE/spark-merge/build/apache-maven-3.6.3/bin/mvn Falling back to archive.apache.org to download Maven 3.1.0-SNAPSHOT ``` This PR will prevent irrelevant logs like `Falling back to archive.apache.org to download Maven`. ### Does this PR introduce _any_ user-facing change? No. ### How was this patch tested? Pass the PR Builder. Closes #28532 from dongjoon-hyun/SPARK-31713. Authored-by: Dongjoon Hyun <dongjoon@apache.org> Signed-off-by: Dongjoon Hyun <dongjoon@apache.org> (cherry picked from commit cd5fbcf9a0151f10553f67bcaa22b8122b3cf263) Signed-off-by: Dongjoon Hyun <dongjoon@apache.org> 15 May 2020, 02:28:58 UTC
1ea5844 [SPARK-31676][ML] QuantileDiscretizer raise error parameter splits given invalid value (splits array includes -0.0 and 0.0) In QuantileDiscretizer.getDistinctSplits, before invoking distinct, normalize all -0.0 and 0.0 to be 0.0 ``` for (i <- 0 until splits.length) { if (splits(i) == -0.0) { splits(i) = 0.0 } } ``` Fix bug. No Unit test. ~~~scala import scala.util.Random val rng = new Random(3) val a1 = Array.tabulate(200)(_=>rng.nextDouble * 2.0 - 1.0) ++ Array.fill(20)(0.0) ++ Array.fill(20)(-0.0) import spark.implicits._ val df1 = sc.parallelize(a1, 2).toDF("id") import org.apache.spark.ml.feature.QuantileDiscretizer val qd = new QuantileDiscretizer().setInputCol("id").setOutputCol("out").setNumBuckets(200).setRelativeError(0.0) val model = qd.fit(df1) // will raise error in spark master. ~~~ scala `0.0 == -0.0` is True but `0.0.hashCode == -0.0.hashCode()` is False. This break the contract between equals() and hashCode() If two objects are equal, then they must have the same hash code. And array.distinct will rely on elem.hashCode so it leads to this error. Test code on distinct ``` import scala.util.Random val rng = new Random(3) val a1 = Array.tabulate(200)(_=>rng.nextDouble * 2.0 - 1.0) ++ Array.fill(20)(0.0) ++ Array.fill(20)(-0.0) a1.distinct.sorted.foreach(x => print(x.toString + "\n")) ``` Then you will see output like: ``` ... -0.009292684662246975 -0.0033280686465135823 -0.0 0.0 0.0022219556032221366 0.02217419561977274 ... ``` Closes #28498 from WeichenXu123/SPARK-31676. Authored-by: Weichen Xu <weichen.xu@databricks.com> Signed-off-by: Sean Owen <srowen@gmail.com> (cherry picked from commit b2300fca1e1a22d74c6eeda37942920a6c6299ff) Signed-off-by: Sean Owen <srowen@gmail.com> 14 May 2020, 14:27:30 UTC
4ba9421 [SPARK-31632][CORE][WEBUI] Enrich the exception message when application information is unavailable ### What changes were proposed in this pull request? This PR caught the `NoSuchElementException` and enriched the error message for `AppStatusStore.applicationInfo()` when Spark is starting up and the application information is unavailable. ### Why are the changes needed? During the initialization of `SparkContext`, it first starts the Web UI and then set up the `LiveListenerBus` thread for dispatching the `SparkListenerApplicationStart` event (which will trigger writing the requested `ApplicationInfo` to `InMemoryStore`). If the Web UI is accessed before this info's being written to `InMemoryStore`, the following `NoSuchElementException` will be thrown. ``` WARN org.eclipse.jetty.server.HttpChannel: /jobs/ java.util.NoSuchElementException at java.util.Collections$EmptyIterator.next(Collections.java:4191) at org.apache.spark.util.kvstore.InMemoryStore$InMemoryIterator.next(InMemoryStore.java:467) at org.apache.spark.status.AppStatusStore.applicationInfo(AppStatusStore.scala:39) at org.apache.spark.ui.jobs.AllJobsPage.render(AllJobsPage.scala:266) at org.apache.spark.ui.WebUI.$anonfun$attachPage$1(WebUI.scala:89) at org.apache.spark.ui.JettyUtils$$anon$1.doGet(JettyUtils.scala:80) at javax.servlet.http.HttpServlet.service(HttpServlet.java:687) at javax.servlet.http.HttpServlet.service(HttpServlet.java:790) at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:873) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1623) at org.apache.spark.ui.HttpSecurityFilter.doFilter(HttpSecurityFilter.scala:95) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1610) at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540) at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255) at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1345) at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203) at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:480) at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201) at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1247) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144) at org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:753) at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:220) at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132) at org.eclipse.jetty.server.Server.handle(Server.java:505) at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:370) at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:267) at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:305) at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103) at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:117) at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:698) at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:804) at java.lang.Thread.run(Thread.java:748) ``` ### Does this PR introduce _any_ user-facing change? No ### How was this patch tested? Manually tested. This can be reproduced: 1. `./bin/spark-shell` 2. 
at the same time, open `http://localhost:4040/jobs/` in your browser with quickly refreshing. Closes #28444 from xccui/SPARK-31632. Authored-by: Xingcan Cui <xccui@apache.org> Signed-off-by: HyukjinKwon <gurwls223@apache.org> (cherry picked from commit 42951e6786319481220ba4abfad015a8d11749f3) Signed-off-by: HyukjinKwon <gurwls223@apache.org> 14 May 2020, 03:08:03 UTC
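A hedged sketch of the enrichment described above (the method shape and message wording are assumptions, not Spark's exact code):

```scala
import java.util.NoSuchElementException

// Catch the NoSuchElementException thrown when the UI is hit during startup and rethrow it
// with a hint, instead of surfacing a bare exception from the in-memory store lookup.
def applicationInfoOrExplain[T](lookup: () => T): T =
  try {
    lookup()
  } catch {
    case _: NoSuchElementException =>
      throw new NoSuchElementException(
        "Failed to get the application information. " +
          "If you are starting up Spark, please wait a while until it's ready.")
  }
```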
5b51880 [SPARK-31691][INFRA] release-build.sh should ignore a fallback output from `build/mvn` ### What changes were proposed in this pull request? This PR adds `i` option to ignore additional `build/mvn` output which is irrelevant to version string. ### Why are the changes needed? SPARK-28963 added additional output message, `Falling back to archive.apache.org to download Maven` in build/mvn. This breaks `dev/create-release/release-build.sh` and currently Spark Packaging Jenkins job is hitting this issue consistently and broken. - https://amplab.cs.berkeley.edu/jenkins/view/Spark%20Packaging/job/spark-master-maven-snapshots/2912/console ### Does this PR introduce _any_ user-facing change? No. ### How was this patch tested? This happens only when the mirror fails. So, this is verified manually hiject the script. It works like the following. ``` $ echo 'Falling back to archive.apache.org to download Maven' > out $ build/mvn help:evaluate -Dexpression=project.version >> out Using `mvn` from path: /Users/dongjoon/PRS/SPARK_RELEASE_2/build/apache-maven-3.6.3/bin/mvn $ cat out | grep -v INFO | grep -v WARNING | grep -v Download Falling back to archive.apache.org to download Maven 3.1.0-SNAPSHOT $ cat out | grep -v INFO | grep -v WARNING | grep -vi Download 3.1.0-SNAPSHOT ``` Closes #28514 from dongjoon-hyun/SPARK_RELEASE_2. Authored-by: Dongjoon Hyun <dongjoon@apache.org> Signed-off-by: Dongjoon Hyun <dongjoon@apache.org> (cherry picked from commit 3772154442e6341ea97a2f41cd672413de918e85) Signed-off-by: Dongjoon Hyun <dongjoon@apache.org> (cherry picked from commit ce52f61f720783e8eeb3313c763493054599091a) Signed-off-by: Dongjoon Hyun <dongjoon@apache.org> 12 May 2020, 21:26:26 UTC
1f85cd7 [SPARK-31671][ML] Wrong error message in VectorAssembler ### What changes were proposed in this pull request? When input column lengths can not be inferred and handleInvalid = "keep", VectorAssembler will throw a runtime exception. However the error message with this exception is not consistent. I change the content of this error message to make it work properly. ### Why are the changes needed? This is a bug. Here is a simple example to reproduce it. ``` // create a df without vector size val df = Seq( (Vectors.dense(1.0), Vectors.dense(2.0)) ).toDF("n1", "n2") // only set vector size hint for n1 column val hintedDf = new VectorSizeHint() .setInputCol("n1") .setSize(1) .transform(df) // assemble n1, n2 val output = new VectorAssembler() .setInputCols(Array("n1", "n2")) .setOutputCol("features") .setHandleInvalid("keep") .transform(hintedDf) // because only n1 has vector size, the error message should tell us to set vector size for n2 too output.show() ``` Expected error message: ``` Can not infer column lengths with handleInvalid = "keep". Consider using VectorSizeHint to add metadata for columns: [n2]. ``` Actual error message: ``` Can not infer column lengths with handleInvalid = "keep". Consider using VectorSizeHint to add metadata for columns: [n1, n2]. ``` This introduce difficulties when I try to resolve this exception, for I do not know which column required vectorSizeHint. This is especially troublesome when you have a large number of columns to deal with. ### Does this PR introduce _any_ user-facing change? No. ### How was this patch tested? Add test in VectorAssemblerSuite. Closes #28487 from fan31415/SPARK-31671. Lead-authored-by: fan31415 <fan12356789@gmail.com> Co-authored-by: yijiefan <fanyije@gmail.com> Signed-off-by: Sean Owen <srowen@gmail.com> (cherry picked from commit 64fb358a994d3fff651a742fa067c194b7455853) Signed-off-by: Sean Owen <srowen@gmail.com> 11 May 2020, 23:23:51 UTC
b28c1fb [SPARK-31456][CORE] Fix shutdown hook priority edge cases ### What changes were proposed in this pull request? Fix application order for shutdown hooks for the priorities of Int.MaxValue, Int.MinValue ### Why are the changes needed? The bug causes out-of-order execution of shutdown hooks if their priorities were Int.MinValue or Int.MaxValue ### Does this PR introduce _any_ user-facing change? No ### How was this patch tested? Added a test covering the change. Closes #28494 from oleg-smith/SPARK-31456_shutdown_hook_priority. Authored-by: oleg <oleg@nexla.com> Signed-off-by: Dongjoon Hyun <dongjoon@apache.org> (cherry picked from commit d7c3e9e53e01011f809b6cb145349ee8a9c5e5f0) Signed-off-by: Dongjoon Hyun <dongjoon@apache.org> 11 May 2020, 20:11:10 UTC
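The commit does not spell out the root cause here, but priority comparisons written as a subtraction are a classic source of exactly this edge case; a small illustration of that failure mode (an assumption about the bug class, not a quote of the patched code):

```scala
val first = Int.MaxValue   // highest priority, should run first
val last  = Int.MinValue   // lowest priority, should run last

println(first - last)                            // -1: the subtraction overflows, inverting the order
println(java.lang.Integer.compare(first, last))  // 1: overflow-safe comparison keeps the order
```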
3936b14 [SPARK-26908][SQL][2.4] Fix DateTimeUtils.toMillis and millisToDays ## What changes were proposed in this pull request? The `DateTimeUtils.toMillis` can produce inaccurate result for some negative values (timestamps before epoch). The error can be around 1ms. In the PR, I propose to use `Math.floorDiv` in casting microseconds to milliseconds, and milliseconds to days since epoch. ## How was this patch tested? Added new test to `DateTimeUtilsSuite`, and tested by `CastSuite` as well. Closes #28475 from MaxGekk/micros-to-millis-2.4. Lead-authored-by: Max Gekk <max.gekk@gmail.com> Co-authored-by: Maxim Gekk <max.gekk@gmail.com> Signed-off-by: Wenchen Fan <wenchen@databricks.com> 07 May 2020, 15:01:11 UTC
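A minimal illustration of why `Math.floorDiv` matters for pre-epoch values (the value below is only for illustration):

```scala
val microsBeforeEpoch = -1L                         // 1 microsecond before 1970-01-01T00:00:00Z
println(microsBeforeEpoch / 1000L)                  // 0  -> truncation toward zero, ~1ms off
println(Math.floorDiv(microsBeforeEpoch, 1000L))    // -1 -> the millisecond the value actually falls in
```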
8504bdb Preparing development version 2.4.7-SNAPSHOT 06 May 2020, 23:13:27 UTC
a3cffc9 Preparing Spark release v2.4.6-rc1 06 May 2020, 23:13:21 UTC
a00eddc [SPARK-31653][BUILD] Setuptools is needed before installing any other python packages ### What changes were proposed in this pull request? Allow the docker build to succeed ### Why are the changes needed? The base packages depend on having setuptools installed now ### Does this PR introduce _any_ user-facing change? No ### How was this patch tested? Ran the release script, pip installs succeeded Closes #28467 from holdenk/SPARK-31653-setuptools-needs-to-be-isntalled-before-anything-else. Authored-by: Holden Karau <hkarau@apple.com> Signed-off-by: Holden Karau <hkarau@apple.com> 06 May 2020, 21:56:42 UTC
579b33b [SPARK-31590][SQL] Metadata-only queries should not include subquery in partition filters ### What changes were proposed in this pull request? Metadata-only queries should not include subqueries in partition filters. ### Why are the changes needed? Applying the `OptimizeMetadataOnlyQuery` rule again will raise the exception `Cannot evaluate expression: scalar-subquery`. ### Does this PR introduce any user-facing change? Yes. When `spark.sql.optimizer.metadataOnly` is enabled, queries that include a subquery in partition filters now succeed. ### How was this patch tested? Added a UT Closes #28383 from cxzl25/fix_SPARK-31590. Authored-by: sychen <sychen@ctrip.com> Signed-off-by: HyukjinKwon <gurwls223@apache.org> (cherry picked from commit 588966d696373c11e963116a0e08ee33c30f0dfb) Signed-off-by: HyukjinKwon <gurwls223@apache.org> 06 May 2020, 01:56:54 UTC
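A hedged sketch of the affected query shape (table and column names are hypothetical): a metadata-only query over a partition column whose filter contains a scalar subquery.

```scala
// With the metadata-only optimization on, a partition-column query filtered by a subquery.
spark.conf.set("spark.sql.optimizer.metadataOnly", "true")
spark.sql("""
  SELECT DISTINCT part_col
  FROM partitioned_table
  WHERE part_col = (SELECT max(part_col) FROM partitioned_table)
""").show()
```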
1222ce0 [SPARK-31500][SQL] collect_set() of BinaryType returns duplicate elements ### What changes were proposed in this pull request? The collect_set() aggregate function should produce a set of distinct elements. When the column argument's type is BinaryType this is not the case. Example: ```scala import org.apache.spark.sql.functions._ import org.apache.spark.sql.expressions.Window case class R(id: String, value: String, bytes: Array[Byte]) def makeR(id: String, value: String) = R(id, value, value.getBytes) val df = Seq(makeR("a", "dog"), makeR("a", "cat"), makeR("a", "cat"), makeR("b", "fish")).toDF() // In the example below "bytesSet" erroneously has duplicates but "stringSet" does not (as expected). df.agg(collect_set('value) as "stringSet", collect_set('bytes) as "byteSet").show(truncate=false) // The same problem is displayed when using window functions. val win = Window.partitionBy('id).rowsBetween(Window.unboundedPreceding, Window.unboundedFollowing) val result = df.select( collect_set('value).over(win) as "stringSet", collect_set('bytes).over(win) as "bytesSet" ) .select('stringSet, 'bytesSet, size('stringSet) as "stringSetSize", size('bytesSet) as "bytesSetSize") .show() ``` We use a HashSet buffer to accumulate the results. The problem is that array equality in Scala doesn't behave as expected: arrays are just plain Java arrays, and their equality doesn't compare the content of the arrays (Array(1, 2, 3) == Array(1, 2, 3) => False), so duplicates are not removed in the HashSet. The solution proposed is that in the last stage, when we have all the data in the HashSet buffer, we delete duplicates by changing the type of the elements and then transform them back to the original type. This transformation is only applied when we have a BinaryType. ### Why are the changes needed? Fix the bug explained above. ### Does this PR introduce any user-facing change? Yes. Now `collect_set()` correctly deduplicates arrays of bytes. ### How was this patch tested? Unit testing Closes #28351 from planga82/feature/SPARK-31500_COLLECT_SET_bug. Authored-by: Pablo Langa <soypab@gmail.com> Signed-off-by: Takeshi Yamamuro <yamamuro@apache.org> (cherry picked from commit 4fecc20f6ecdfe642890cf0a368a85558c40a47c) Signed-off-by: Takeshi Yamamuro <yamamuro@apache.org> 01 May 2020, 13:10:12 UTC
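A quick REPL illustration of the root cause described above (reference equality of Java arrays), independent of Spark:

```scala
Array(1, 2, 3) == Array(1, 2, 3)                                 // false: arrays compare by reference
Array(1, 2, 3).toSeq == Array(1, 2, 3).toSeq                     // true: wrapping gives structural equality
scala.collection.mutable.HashSet(Array(1, 2), Array(1, 2)).size  // 2, not 1: no content-based dedup
```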
a2897d2 [SPARK-31601][K8S] Fix spark.kubernetes.executor.podNamePrefix to work This PR aims to fix `spark.kubernetes.executor.podNamePrefix` to work. Currently, the configuration is broken like the following. ``` bin/spark-submit \ --master k8s://$K8S_MASTER \ --deploy-mode cluster \ --name spark-pi \ --class org.apache.spark.examples.SparkPi \ -c spark.kubernetes.container.image=spark:pr \ -c spark.kubernetes.driver.pod.name=mypod \ -c spark.kubernetes.executor.podNamePrefix=mypod \ local:///opt/spark/examples/jars/spark-examples_2.12-3.1.0-SNAPSHOT.jar ``` **BEFORE SPARK-31601** ``` pod/mypod 1/1 Running 0 9s pod/spark-pi-7469dd71c499fafb-exec-1 1/1 Running 0 4s pod/spark-pi-7469dd71c499fafb-exec-2 1/1 Running 0 4s ``` **AFTER SPARK-31601** ``` pod/mypod 1/1 Running 0 8s pod/mypod-exec-1 1/1 Running 0 3s pod/mypod-exec-2 1/1 Running 0 3s ``` Yes. This is a bug fix. The conf will work as described in the documentation. Pass the Jenkins and run the above comment manually. Closes #28401 from dongjoon-hyun/SPARK-31601. Authored-by: Dongjoon Hyun <dongjoon@apache.org> Signed-off-by: Prashant Sharma <prashsh1@in.ibm.com> (cherry picked from commit 85dad37f69ebb617c8ac015dbbbda11054170298) Signed-off-by: Prashant Sharma <prashsh1@in.ibm.com> (cherry picked from commit 82b8f7fc9d21f4ac506d8cd613158f0511f5cb1d) Signed-off-by: Dongjoon Hyun <dongjoon@apache.org> 30 April 2020, 04:55:36 UTC
da3748f [SPARK-31449][SQL][2.4] Fix getting time zone offsets from local milliseconds ### What changes were proposed in this pull request? Replace current implementation of `getOffsetFromLocalMillis()` by the code from JDK https://github.com/AdoptOpenJDK/openjdk-jdk8u/blob/aa318070b27849f1fe00d14684b2a40f7b29bf79/jdk/src/share/classes/java/util/GregorianCalendar.java#L2795-L2801: ```java if (zone instanceof ZoneInfo) { ((ZoneInfo)zone).getOffsetsByWall(millis, zoneOffsets); } else { int gmtOffset = isFieldSet(fieldMask, ZONE_OFFSET) ? internalGet(ZONE_OFFSET) : zone.getRawOffset(); zone.getOffsets(millis - gmtOffset, zoneOffsets); } ``` ### Why are the changes needed? Current domestic implementation of `getOffsetFromLocalMillis()` is incompatible with other date-time functions used from JDK's `GregorianCalendar` like `ZoneInfo.getOffsets`, and can return wrong results as it is demonstrated in SPARK-31449. For example, currently the function returns 1h offset but JDK function 0h: ``` Europe/Paris 1916-10-01 23:50:39.0 3600000 0 ``` Actually, the timestamp is in a DST interval of shifting wall clocks by 1 hour back Year | Date & Time | Abbreviation | Time Change | Offset After -- | -- | -- | -- | -- 1916 |Tue, 14 Jun, 23:00 | WET → WEST | +1 hour (DST start) | UTC+1h 1916 |Sun, 2 Oct, 00:00 | WEST → WET | -1 hour (DST end) | UTC And according the default JDK policy, the latest timestamp should be taken in the case of overlapping but current implementation takes the earliest one. That makes it incompatible with other JDK calls. ### Does this PR introduce any user-facing change? Yes, see differences in SPARK-31449. ### How was this patch tested? By existing test suite `DateTimeUtilsSuite`, `DateFunctionsSuite` and `DateExpressionsSuite`. Closes #28410 from MaxGekk/fix-tz-offset-by-wallclock-2.4. Authored-by: Max Gekk <max.gekk@gmail.com> Signed-off-by: Wenchen Fan <wenchen@databricks.com> 30 April 2020, 03:16:37 UTC
3b17ad3 [SPARK-31582][YARN][2.4] Being able to not populate Hadoop classpath ### What changes were proposed in this pull request? We are adding a new Spark Yarn configuration, `spark.yarn.populateHadoopClasspath` to not populate Hadoop classpath from `yarn.application.classpath` and `mapreduce.application.classpath`. ### Why are the changes needed? Spark Yarn client populates extra Hadoop classpath from `yarn.application.classpath` and `mapreduce.application.classpath` when a job is submitted to a Yarn Hadoop cluster. However, for `with-hadoop` Spark build that embeds Hadoop runtime, it can cause jar conflicts because Spark distribution can contain different version of Hadoop jars. One case we have is when a user uses an Apache Spark distribution with its-own embedded hadoop, and submits a job to Cloudera or Hortonworks Yarn clusters, because of two different incompatible Hadoop jars in the classpath, it runs into errors. By not populating the Hadoop classpath from the clusters can address this issue. ### Does this PR introduce any user-facing change? No. ### How was this patch tested? An UT is added, but very hard to add a new integration test since this requires using different incompatible versions of Hadoop. We also manually tested this PR, and we are able to submit a Spark job using Spark distribution built with Apache Hadoop 2.10 to CDH 5.6 without populating CDH classpath. Closes #28411 from dbtsai/SPARK-31582. Authored-by: DB Tsai <d_tsai@apple.com> Signed-off-by: DB Tsai <d_tsai@apple.com> 29 April 2020, 21:39:03 UTC
4be3390 [SPARK-31519][SQL][2.4] Datetime functions in having aggregate expressions returns the wrong result ### What changes were proposed in this pull request? Add a new logical node AggregateWithHaving, and the parser should create this plan for HAVING. The analyzer resolves it to Filter(..., Aggregate(...)). ### Why are the changes needed? The SQL parser in Spark creates Filter(..., Aggregate(...)) for the HAVING query, and Spark has a special analyzer rule ResolveAggregateFunctions to resolve the aggregate functions and grouping columns in the Filter operator. It works for simple cases in a very tricky way as it relies on rule execution order: 1. Rule ResolveReferences hits the Aggregate operator and resolves attributes inside aggregate functions, but the function itself is still unresolved as it's an UnresolvedFunction. This stops resolving the Filter operator as the child Aggrege operator is still unresolved. 2. Rule ResolveFunctions resolves UnresolvedFunction. This makes the Aggrege operator resolved. 3. Rule ResolveAggregateFunctions resolves the Filter operator if its child is a resolved Aggregate. This rule can correctly resolve the grouping columns. In the example query, I put a datetime function `hour`, which needs to be resolved by rule ResolveTimeZone, which runs after ResolveAggregateFunctions. This breaks step 3 as the Aggregate operator is unresolved at that time. Then the analyzer starts next round and the Filter operator is resolved by ResolveReferences, which wrongly resolves the grouping columns. See the demo below: ``` SELECT SUM(a) AS b, '2020-01-01 12:12:12' AS fake FROM VALUES (1, 10), (2, 20) AS T(a, b) GROUP BY b HAVING b > 10 ``` The query's result is ``` +---+-------------------+ | b| fake| +---+-------------------+ | 2|2020-01-01 12:12:12| +---+-------------------+ ``` But if we use `hour` function, it will return an empty result. ``` SELECT SUM(a) AS b, hour('2020-01-01 12:12:12') AS fake FROM VALUES (1, 10), (2, 20) AS T(a, b) GROUP BY b HAVING b > 10 ``` ### Does this PR introduce any user-facing change? Yes, bug fix for cast in having aggregate expressions. ### How was this patch tested? New UT added. Closes #28397 from xuanyuanking/SPARK-31519-backport. Authored-by: Yuanjian Li <xyliyuanjian@gmail.com> Signed-off-by: Dongjoon Hyun <dongjoon@apache.org> 29 April 2020, 16:18:40 UTC
2828495 [SPARK-31563][SQL][FOLLOWUP] Create literals directly from Catalyst's internal value in InSet.sql ### What changes were proposed in this pull request? In the PR, I propose to simplify the code of `InSet.sql` and create `Literal` instances directly from Catalyst's internal values by using the default `Literal` constructor. ### Why are the changes needed? This simplifies code and avoids unnecessary conversions to external types. ### Does this PR introduce any user-facing change? No ### How was this patch tested? By existing test `SPARK-31563: sql of InSet for UTF8String collection` in `ColumnExpressionSuite`. Closes #28399 from MaxGekk/fix-InSet-sql-followup. Authored-by: Max Gekk <max.gekk@gmail.com> Signed-off-by: Wenchen Fan <wenchen@databricks.com> (cherry picked from commit 86761861c28e1c854b4f78f1c078591b11b7daf3) Signed-off-by: Wenchen Fan <wenchen@databricks.com> 29 April 2020, 06:45:00 UTC
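A hedged sketch of the simplification described above: the default `Literal` constructor accepts Catalyst's internal values (such as `UTF8String`) directly, so no conversion to an external type is needed before rendering SQL text.

```scala
import org.apache.spark.sql.catalyst.expressions.Literal
import org.apache.spark.sql.types.StringType
import org.apache.spark.unsafe.types.UTF8String

// Build a literal straight from an internal value and render its SQL form.
val lit = Literal(UTF8String.fromString("abc"), StringType)
println(lit.sql)   // 'abc'
```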
eab7b88 [SPARK-31589][INFRA][2.4] Use `r-lib/actions/setup-r` in GitHub Action ### What changes were proposed in this pull request? This PR aims to use `r-lib/actions/setup-r` because it's more stable and maintained by 3rd party. ### Why are the changes needed? This will recover the current outage. In addition, this will be more robust in the future. ### Does this PR introduce any user-facing change? No. ### How was this patch tested? Pass the GitHub Actions, especially `Linter R` and `Generate Documents`. Closes #28384 from dongjoon-hyun/SPARK-31589-2.4. Authored-by: Dongjoon Hyun <dongjoon@apache.org> Signed-off-by: Dongjoon Hyun <dongjoon@apache.org> 28 April 2020, 04:58:05 UTC
4a23bd0 [SPARK-31568][R][DOCS] Add detail about func/key in gapply to documentation Improve documentation for `gapply` in `SparkR` Spent a long time this weekend trying to figure out just what exactly `key` is in `gapply`'s `func`. I had assumed it would be a _named_ list, but apparently not -- the examples are working because `schema` is applying the name and the names of the output `data.frame` don't matter. As near as I can tell the description I've added is correct, namely, that `key` is an unnamed list. No? Not in code. Only documentation. Not. Documentation only Closes #28350 from MichaelChirico/r-gapply-key-doc. Authored-by: Michael Chirico <michael.chirico@grabtaxi.com> Signed-off-by: HyukjinKwon <gurwls223@apache.org> 27 April 2020, 08:04:44 UTC
9275865 [SPARK-31485][CORE][2.4] Avoid application hang if only partial barrier tasks launched ### What changes were proposed in this pull request? Use `TaskSetManger.abort` to abort a barrier stage instead of throwing exception within `resourceOffers`. ### Why are the changes needed? Any non fatal exception thrown within Spark RPC framework can be swallowed: https://github.com/apache/spark/blob/100fc58da54e026cda87832a10e2d06eaeccdf87/core/src/main/scala/org/apache/spark/rpc/netty/Inbox.scala#L202-L211 The method `TaskSchedulerImpl.resourceOffers` is also within the scope of Spark RPC framework. Thus, throw exception inside `resourceOffers` won't fail the application. As a result, if a barrier stage fail the require check at `require(addressesWithDescs.size == taskSet.numTasks, ...)`, the barrier stage will fail the check again and again util all tasks from `TaskSetManager` dequeued. But since the barrier stage isn't really executed, the application will hang. The issue can be reproduced by the following test: ```scala initLocalClusterSparkContext(2) val rdd0 = sc.parallelize(Seq(0, 1, 2, 3), 2) val dep = new OneToOneDependency[Int](rdd0) val rdd = new MyRDD(sc, 2, List(dep), Seq(Seq("executor_h_0"),Seq("executor_h_0"))) rdd.barrier().mapPartitions { iter => BarrierTaskContext.get().barrier() iter }.collect() ``` ### Does this PR introduce any user-facing change? Yes, application hang previously but fail-fast after this fix. ### How was this patch tested? Added a regression test. Closes #28357 from Ngone51/bp-spark-31485-24. Authored-by: yi.wu <yi.wu@databricks.com> Signed-off-by: Wenchen Fan <wenchen@databricks.com> 27 April 2020, 06:10:02 UTC
218c114 [SPARK-25595][2.4] Ignore corrupt Avro files if flag IGNORE_CORRUPT_FILES enabled ## What changes were proposed in this pull request? With flag `IGNORE_CORRUPT_FILES` enabled, schema inference should ignore corrupt Avro files, which is consistent with Parquet and Orc data source. ## How was this patch tested? Unit test Closes #28334 from gengliangwang/SPARK-25595-2.4. Authored-by: Gengliang Wang <gengliang.wang@databricks.com> Signed-off-by: HyukjinKwon <gurwls223@apache.org> 27 April 2020, 03:51:25 UTC
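A hedged usage sketch (the input path is hypothetical): with the flag enabled, Avro schema inference and reads skip corrupt files instead of failing, matching the Parquet and ORC behavior.

```scala
spark.conf.set("spark.sql.files.ignoreCorruptFiles", "true")
val events = spark.read.format("avro").load("/data/events_avro/")  // requires the spark-avro module
events.printSchema()
```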
61fc1f7 [SPARK-31552][SQL][2.4] Fix ClassCastException in ScalaReflection arrayClassFor This PR backports https://github.com/apache/spark/pull/28324 to branch-2.4 ### What changes were proposed in this pull request? the 2 method `arrayClassFor` and `dataTypeFor` in `ScalaReflection` call each other circularly, the cases in `dataTypeFor` are not fully handled in `arrayClassFor` For example: ```scala scala> implicit def newArrayEncoder[T <: Array[_] : TypeTag]: Encoder[T] = ExpressionEncoder() newArrayEncoder: [T <: Array[_]](implicit evidence$1: reflect.runtime.universe.TypeTag[T])org.apache.spark.sql.Encoder[T] scala> val decOne = Decimal(1, 38, 18) decOne: org.apache.spark.sql.types.Decimal = 1E-18 scala> val decTwo = Decimal(2, 38, 18) decTwo: org.apache.spark.sql.types.Decimal = 2E-18 scala> val decSpark = Array(decOne, decTwo) decSpark: Array[org.apache.spark.sql.types.Decimal] = Array(1E-18, 2E-18) scala> Seq(decSpark).toDF() java.lang.ClassCastException: org.apache.spark.sql.types.DecimalType cannot be cast to org.apache.spark.sql.types.ObjectType at org.apache.spark.sql.catalyst.ScalaReflection$.$anonfun$arrayClassFor$1(ScalaReflection.scala:131) at scala.reflect.internal.tpe.TypeConstraints$UndoLog.undo(TypeConstraints.scala:69) at org.apache.spark.sql.catalyst.ScalaReflection.cleanUpReflectionObjects(ScalaReflection.scala:879) at org.apache.spark.sql.catalyst.ScalaReflection.cleanUpReflectionObjects$(ScalaReflection.scala:878) at org.apache.spark.sql.catalyst.ScalaReflection$.cleanUpReflectionObjects(ScalaReflection.scala:49) at org.apache.spark.sql.catalyst.ScalaReflection$.arrayClassFor(ScalaReflection.scala:120) at org.apache.spark.sql.catalyst.ScalaReflection$.$anonfun$dataTypeFor$1(ScalaReflection.scala:105) at scala.reflect.internal.tpe.TypeConstraints$UndoLog.undo(TypeConstraints.scala:69) at org.apache.spark.sql.catalyst.ScalaReflection.cleanUpReflectionObjects(ScalaReflection.scala:879) at org.apache.spark.sql.catalyst.ScalaReflection.cleanUpReflectionObjects$(ScalaReflection.scala:878) at org.apache.spark.sql.catalyst.ScalaReflection$.cleanUpReflectionObjects(ScalaReflection.scala:49) at org.apache.spark.sql.catalyst.ScalaReflection$.dataTypeFor(ScalaReflection.scala:88) at org.apache.spark.sql.catalyst.ScalaReflection$.$anonfun$serializerForType$1(ScalaReflection.scala:399) at scala.reflect.internal.tpe.TypeConstraints$UndoLog.undo(TypeConstraints.scala:69) at org.apache.spark.sql.catalyst.ScalaReflection.cleanUpReflectionObjects(ScalaReflection.scala:879) at org.apache.spark.sql.catalyst.ScalaReflection.cleanUpReflectionObjects$(ScalaReflection.scala:878) at org.apache.spark.sql.catalyst.ScalaReflection$.cleanUpReflectionObjects(ScalaReflection.scala:49) at org.apache.spark.sql.catalyst.ScalaReflection$.serializerForType(ScalaReflection.scala:393) at org.apache.spark.sql.catalyst.encoders.ExpressionEncoder$.apply(ExpressionEncoder.scala:57) at newArrayEncoder(<console>:57) ... 53 elided scala> ``` In this PR, we add the missing cases to `arrayClassFor` ### Why are the changes needed? bugfix as described above ### Does this PR introduce any user-facing change? no ### How was this patch tested? add a test for array encoders Closes #28341 from yaooqinn/SPARK-31552-24. Authored-by: Kent Yao <yaooqinn@hotmail.com> Signed-off-by: Dongjoon Hyun <dongjoon@apache.org> 26 April 2020, 06:14:03 UTC
5e6bcca [SPARK-31563][SQL] Fix failure of InSet.sql for collections of Catalyst's internal types In the PR, I propose to fix the `InSet.sql` method for the cases when the input collection contains values of Catalyst's internal types, for instance `UTF8String`. Elements of the input set `hset` are converted to Scala types and wrapped by `Literal` to properly form the SQL view of the input collection. The changes fix the bug in `InSet.sql` that made a wrong assumption about the types of the collection elements. See more details in SPARK-31563. A user-facing change is highly unlikely. Added a test to `ColumnExpressionSuite`. Closes #28343 from MaxGekk/fix-InSet-sql. Authored-by: Max Gekk <max.gekk@gmail.com> Signed-off-by: Dongjoon Hyun <dongjoon@apache.org> (cherry picked from commit 7d8216a6642f40af0d1b623129b1d5f4c86bec68) Signed-off-by: Dongjoon Hyun <dongjoon@apache.org> 25 April 2020, 16:35:35 UTC
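A minimal Catalyst-level sketch of the case this fixes: build an `InSet` whose set holds internal `UTF8String` values and render its SQL string. This uses internal APIs for illustration only, and the exact rendered string is an assumption.
```scala
import org.apache.spark.sql.catalyst.expressions.{InSet, Literal}
import org.apache.spark.unsafe.types.UTF8String

// The set holds Catalyst's internal string type; InSet.sql must convert the
// elements back to external values and wrap them in Literal before printing.
val hset: Set[Any] = Set("a", "b", "c").map(s => UTF8String.fromString(s): Any)
val expr = InSet(Literal("a"), hset)
println(expr.sql)  // expected to render roughly ('a' IN ('a', 'b', 'c'))
```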
0477f21 [SPARK-31532][SPARK-31234][SQL][2.4][FOLLOWUP] Use lowercases for GLOBAL_TEMP_DATABASE config in SparkSessionBuilderSuite ### What changes were proposed in this pull request? This PR intends to fix test code for using lowercases for the `GLOBAL_TEMP_DATABASE` config in `SparkSessionBuilderSuite`. The handling of the config is different between branch-3.0+ and branch-2.4. In branch-3.0+, Spark always lowercases a value in the config, so I think we had better always use lowercases for it in the test. This comes from the dongjoon-hyun comment: https://github.com/apache/spark/pull/28316#issuecomment-619303160 ### Why are the changes needed? To fix the test failure in branch-2.4. ### Does this PR introduce any user-facing change? No. ### How was this patch tested? Fixed the test. Closes #28339 from maropu/SPARK-31532. Authored-by: Takeshi Yamamuro <yamamuro@apache.org> Signed-off-by: Dongjoon Hyun <dongjoon@apache.org> 25 April 2020, 05:23:57 UTC
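For reference, a test-style sketch of what the fix amounts to (the database name is made up): the value is written in all lowercase so the comparison behaves the same whether or not the running branch lowercases the config.
```scala
import org.apache.spark.sql.SparkSession

// branch-2.4 keeps the configured value as-is while 3.0+ lowercases it, so an
// all-lowercase name works identically on both branches.
val session = SparkSession.builder()
  .master("local[1]")
  .config("spark.sql.globalTempDatabase", "globaltempdb_spark_31532")
  .getOrCreate()
assert(session.conf.get("spark.sql.globalTempDatabase") == "globaltempdb_spark_31532")
session.stop()
```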
a2a0c52 [SPARK-31532][SQL] Builder should not propagate static sql configs to the existing active or default SparkSession ### What changes were proposed in this pull request? SparkSessionBuilder shoud not propagate static sql configurations to the existing active/default SparkSession This seems a long-standing bug. ```scala scala> spark.sql("set spark.sql.warehouse.dir").show +--------------------+--------------------+ | key| value| +--------------------+--------------------+ |spark.sql.warehou...|file:/Users/kenty...| +--------------------+--------------------+ scala> spark.sql("set spark.sql.warehouse.dir=2"); org.apache.spark.sql.AnalysisException: Cannot modify the value of a static config: spark.sql.warehouse.dir; at org.apache.spark.sql.RuntimeConfig.requireNonStaticConf(RuntimeConfig.scala:154) at org.apache.spark.sql.RuntimeConfig.set(RuntimeConfig.scala:42) at org.apache.spark.sql.execution.command.SetCommand.$anonfun$x$7$6(SetCommand.scala:100) at org.apache.spark.sql.execution.command.SetCommand.run(SetCommand.scala:156) at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70) at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68) at org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:79) at org.apache.spark.sql.Dataset.$anonfun$logicalPlan$1(Dataset.scala:229) at org.apache.spark.sql.Dataset.$anonfun$withAction$1(Dataset.scala:3644) at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:103) at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:163) at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:90) at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:764) at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64) at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3642) at org.apache.spark.sql.Dataset.<init>(Dataset.scala:229) at org.apache.spark.sql.Dataset$.$anonfun$ofRows$2(Dataset.scala:100) at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:764) at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:97) at org.apache.spark.sql.SparkSession.$anonfun$sql$1(SparkSession.scala:607) at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:764) at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:602) ... 47 elided scala> import org.apache.spark.sql.SparkSession import org.apache.spark.sql.SparkSession scala> SparkSession.builder.config("spark.sql.warehouse.dir", "xyz").get getClass getOrCreate scala> SparkSession.builder.config("spark.sql.warehouse.dir", "xyz").getOrCreate 20/04/23 23:49:13 WARN SparkSession$Builder: Using an existing SparkSession; some configuration may not take effect. res7: org.apache.spark.sql.SparkSession = org.apache.spark.sql.SparkSession6403d574 scala> spark.sql("set spark.sql.warehouse.dir").show +--------------------+-----+ | key|value| +--------------------+-----+ |spark.sql.warehou...| xyz| +--------------------+-----+ scala> OptionsAttachments ``` ### Why are the changes needed? bugfix as shown in the previous section ### Does this PR introduce any user-facing change? Yes, static SQL configurations with SparkSession.builder.config do not propagate to any existing or new SparkSession instances. ### How was this patch tested? new ut. Closes #28316 from yaooqinn/SPARK-31532. 
Authored-by: Kent Yao <yaooqinn@hotmail.com> Signed-off-by: Takeshi Yamamuro <yamamuro@apache.org> (cherry picked from commit 8424f552293677717da7411ed43e68e73aa7f0d6) Signed-off-by: Takeshi Yamamuro <yamamuro@apache.org> 24 April 2020, 23:53:48 UTC
2fefb60 [SPARK-30199][DSTREAM] Recover `spark.(ui|blockManager).port` from checkpoint ### What changes were proposed in this pull request? This is a backport of #26827. This PR aims to recover `spark.ui.port` and `spark.blockManager.port` from checkpoint like `spark.driver.port`. ### Why are the changes needed? When the user configures these values, we can respect them. ### Does this PR introduce any user-facing change? No. ### How was this patch tested? Pass the Jenkins with the newly added test cases. Closes #28320 from dongjoon-hyun/SPARK-30199-2.4. Authored-by: Aaruna <aaruna@apple.com> Signed-off-by: Dongjoon Hyun <dongjoon@apache.org> 24 April 2020, 04:23:46 UTC
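To make the recovered settings concrete, here is a minimal DStreams sketch (checkpoint path and port numbers are hypothetical): with this backport, the user-set UI and block manager ports survive a restart from the checkpoint, just as `spark.driver.port` already did.
```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

// User-chosen ports; after this change they are recovered from the checkpoint.
val conf = new SparkConf()
  .setMaster("local[2]")
  .setAppName("port-recovery-sketch")
  .set("spark.ui.port", "4050")
  .set("spark.blockManager.port", "40200")

def createContext(): StreamingContext = {
  val ssc = new StreamingContext(conf, Seconds(10))
  ssc.checkpoint("/tmp/checkpoint-dir")  // hypothetical checkpoint location
  ssc
}

// On restart, getOrCreate rebuilds the context from the checkpoint and now
// keeps the configured ports instead of falling back to new ones.
val ssc = StreamingContext.getOrCreate("/tmp/checkpoint-dir", createContext _)
```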
5183984 [SPARK-31503][SQL][2.4] fix the SQL string of the TRIM functions backport https://github.com/apache/spark/pull/28281 to 2.4 This backport has one difference: there is no `EXTRACT(... FROM ...)` SQL syntax in 2.4, so this PR just uses the common function call syntax. Closes #28299 from cloud-fan/pick. Authored-by: Wenchen Fan <wenchen@databricks.com> Signed-off-by: Dongjoon Hyun <dongjoon@apache.org> 22 April 2020, 16:35:45 UTC
43cc620 [SPARK-31256][SQL] DataFrameNaFunctions.drop should work for nested columns For example, for the following `df`: ``` val schema = new StructType() .add("c1", new StructType() .add("c1-1", StringType) .add("c1-2", StringType)) val data = Seq(Row(Row(null, "a2")), Row(Row("b1", "b2")), Row(null)) val df = spark.createDataFrame(spark.sparkContext.parallelize(data), schema) df.show +--------+ | c1| +--------+ | [, a2]| |[b1, b2]| | null| +--------+ ``` In Spark 2.4.4, ``` df.na.drop("any", Seq("c1.c1-1")).show +--------+ | c1| +--------+ |[b1, b2]| +--------+ ``` In Spark 2.4.5 or Spark 3.0.0-preview2, if nested columns are specified, they are ignored. ``` df.na.drop("any", Seq("c1.c1-1")).show +--------+ | c1| +--------+ | [, a2]| |[b1, b2]| | null| +--------+ ``` This seems like a regression. Now, the nested column can be specified: ``` df.na.drop("any", Seq("c1.c1-1")).show +--------+ | c1| +--------+ |[b1, b2]| +--------+ ``` Also, if `*` is specified as a column, it will throw an `AnalysisException` that `*` cannot be resolved, which was the behavior in 2.4.4. Currently, in master, it has no effect. Updated existing tests. Closes #28266 from imback82/SPARK-31256. Authored-by: Terry Kim <yuminkim@gmail.com> Signed-off-by: Wenchen Fan <wenchen@databricks.com> (cherry picked from commit d7499aed9cc943304e2ec89379d3651410f6ca90) Signed-off-by: Wenchen Fan <wenchen@databricks.com> 20 April 2020, 03:07:24 UTC
1af3b48 [SPARK-31234][SQL][2.4] ResetCommand should not affect static SQL Configuration ### What changes were proposed in this pull request? This PR is to backport the fix of https://github.com/apache/spark/pull/28003, add a migration guide, update the PR description and add an end-to-end test case. Before this PR, the SQL `RESET` command will reset the values of static SQL configuration to the default and remove the cached values of Spark Context Configurations in the current session. This PR fixes the bugs. After this PR, the `RESET` command follows its definition and only updates the runtime SQL configuration values to the default. ### Why are the changes needed? When we introduced the feature of Static SQL Configuration, we did not update the implementation of SQL `RESET` command. The static SQL configuration should not be changed by any command at runtime. However, the `RESET` command resets the values to the default. We should fix them. ### Does this PR introduce any user-facing change? Before Spark 2.4.6, the `RESET` command resets both the runtime and static SQL configuration values to the default. It also removes the cached values of Spark Context Configurations in the current session, although these configuration values are for displaying/querying only. ### How was this patch tested? Added an end-to-end test and a unit test Closes #28262 from gatorsmile/spark-31234followup2.4. Authored-by: gatorsmile <gatorsmile@gmail.com> Signed-off-by: Dongjoon Hyun <dongjoon@apache.org> 19 April 2020, 23:24:56 UTC
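A small sketch of the corrected semantics (it assumes the session still uses the default of 200 shuffle partitions): after `RESET`, only runtime SQL confs return to their defaults, while a static conf such as `spark.sql.warehouse.dir` keeps the value the session was started with.
```scala
// Runtime conf: changed, then restored to its default by RESET.
spark.sql("SET spark.sql.shuffle.partitions=10")
spark.sql("RESET")
assert(spark.conf.get("spark.sql.shuffle.partitions") == "200")

// Static conf: RESET must leave it untouched.
val warehouseBefore = spark.conf.get("spark.sql.warehouse.dir")
spark.sql("RESET")
assert(spark.conf.get("spark.sql.warehouse.dir") == warehouseBefore)
```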
9416b7c Apply appropriate RPC handler to receive, receiveStream when auth enabled 18 April 2020, 15:20:43 UTC
ea75c15 [SPARK-31420][WEBUI][FOLLOWUP] Make locale of timeline-view 'en' ### What changes were proposed in this pull request? This change explicitly sets the locale of the timeline view to 'en' so that it has the same appearance as before the vis-timeline upgrade. ### Why are the changes needed? We upgraded vis-timeline in #28192, and the upgraded version differs from the one we used before in its date notation, which seems to depend on the locale. The following is the appearance in my Japanese environment. <img width="557" alt="locale-changed" src="https://user-images.githubusercontent.com/4736016/79265314-de886700-7ed0-11ea-8641-fa76b993c0d9.png"> Although the notation is in Japanese, the default format is a little bit unnatural (e.g. 4月9日 05:39 is natural rather than 9 四月 05:39). I found we can get the same appearance as before by explicitly setting the locale to 'en'. ### Does this PR introduce any user-facing change? No. ### How was this patch tested? I visited JobsPage, JobPage and StagePage and confirmed that the timeline view shows dates with the 'en' locale. 
<img width="735" alt="fix-date-appearance" src="https://user-images.githubusercontent.com/4736016/79267107-8bfc7a00-7ed3-11ea-8a25-f6681d04a83c.png"> NOTE: #28192 will be backported to branch-2.4 and branch-3.0 so this PR should be follow #28214 and #28213 . Closes #28218 from sarutak/fix-locale-issue. Authored-by: Kousuke Saruta <sarutak@oss.nttdata.com> Signed-off-by: Kousuke Saruta <sarutak@oss.nttdata.com> (cherry picked from commit df27350142d81a3e8941939870bfc0ab50e37a43) Signed-off-by: Kousuke Saruta <sarutak@oss.nttdata.com> 16 April 2020, 17:32:37 UTC
d34590c [SPARK-31441][PYSPARK][SQL][2.4] Support duplicated column names for toPandas with arrow execution ### What changes were proposed in this pull request? This is to backport #28210. This PR is adding support duplicated column names for `toPandas` with Arrow execution. ### Why are the changes needed? When we execute `toPandas()` with Arrow execution, it fails if the column names have duplicates. ```py >>> spark.sql("select 1 v, 1 v").toPandas() Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/path/to/lib/python3.7/site-packages/pyspark/sql/dataframe.py", line 2132, in toPandas pdf = table.to_pandas() File "pyarrow/array.pxi", line 441, in pyarrow.lib._PandasConvertible.to_pandas File "pyarrow/table.pxi", line 1367, in pyarrow.lib.Table._to_pandas File "/path/to/lib/python3.7/site-packages/pyarrow/pandas_compat.py", line 653, in table_to_blockmanager columns = _deserialize_column_index(table, all_columns, column_indexes) File "/path/to/lib/python3.7/site-packages/pyarrow/pandas_compat.py", line 704, in _deserialize_column_index columns = _flatten_single_level_multiindex(columns) File "/path/to/lib/python3.7/site-packages/pyarrow/pandas_compat.py", line 937, in _flatten_single_level_multiindex raise ValueError('Found non-unique column index') ValueError: Found non-unique column index ``` ### Does this PR introduce any user-facing change? Yes, previously we will face an error above, but after this PR, we will see the result: ```py >>> spark.sql("select 1 v, 1 v").toPandas() v v 0 1 1 ``` ### How was this patch tested? Added and modified related tests. Closes #28221 from ueshin/issues/SPARK-31441/2.4/to_pandas. Authored-by: Takuya UESHIN <ueshin@databricks.com> Signed-off-by: Takuya UESHIN <ueshin@databricks.com> 15 April 2020, 18:41:43 UTC
49abdc4 [SPARK-31186][PYSPARK][SQL][2.4] toPandas should not fail on duplicate column names ### What changes were proposed in this pull request? When `toPandas` API works on duplicate column names produced from operators like join, we see the error like: ``` ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all(). ``` This patch fixes the error in `toPandas` API. This is the backport of original patch to branch-2.4. ### Why are the changes needed? To make `toPandas` work on dataframe with duplicate column names. ### Does this PR introduce any user-facing change? Yes. Previously calling `toPandas` API on a dataframe with duplicate column names will fail. After this patch, it will produce correct result. ### How was this patch tested? Unit test. Closes #28219 from viirya/SPARK-31186-2.4. Authored-by: Liang-Chi Hsieh <viirya@gmail.com> Signed-off-by: HyukjinKwon <gurwls223@apache.org> 15 April 2020, 04:57:23 UTC
775e958 [SPARK-31420][WEBUI][2.4] Infinite timeline redraw in job details page ### What changes were proposed in this pull request? This PR backports #28192 to branch-2.4. ### Why are the changes needed? SPARK-31420 affects branch-2.4 too. ### Does this PR introduce any user-facing change? No. ### How was this patch tested? Same as #28192 Closes #28214 from sarutak/SPARK-31420-branch-2.4. Authored-by: Kousuke Saruta <sarutak@oss.nttdata.com> Signed-off-by: Gengliang Wang <gengliang.wang@databricks.com> 15 April 2020, 04:33:22 UTC
1c65b1d [SPARK-31422][CORE][FOLLOWUP] Fix a test case compilation error 11 April 2020, 15:53:57 UTC
26d5e8f [SPARK-31422][CORE] Fix NPE when BlockManagerSource is used after BlockManagerMaster stops ### What changes were proposed in this pull request? This PR (SPARK-31422) aims to return empty result in order to avoid `NullPointerException` at `getStorageStatus` and `getMemoryStatus` which happens after `BlockManagerMaster` stops. The empty result is consistent with the current status of `SparkContext` because `BlockManager` and `BlockManagerMaster` is already stopped. ### Why are the changes needed? In `SparkEnv.stop`, the following stop sequence is used and `metricsSystem.stop` invokes `sink.stop`. ``` blockManager.master.stop() metricsSystem.stop() --> sinks.foreach(_.stop) ``` However, some sink can invoke `BlockManagerSource` and ends up with `NullPointerException` because `BlockManagerMaster` is already stopped and `driverEndpoint` became `null`. ``` java.lang.NullPointerException at org.apache.spark.storage.BlockManagerMaster.getStorageStatus(BlockManagerMaster.scala:170) at org.apache.spark.storage.BlockManagerSource$$anonfun$10.apply(BlockManagerSource.scala:63) at org.apache.spark.storage.BlockManagerSource$$anonfun$10.apply(BlockManagerSource.scala:63) at org.apache.spark.storage.BlockManagerSource$$anon$1.getValue(BlockManagerSource.scala:31) at org.apache.spark.storage.BlockManagerSource$$anon$1.getValue(BlockManagerSource.scala:30) ``` Since `SparkContext` registers and forgets `BlockManagerSource` without deregistering, we had better avoid `NullPointerException` inside `BlockManagerMaster` preventively. ```scala _env.metricsSystem.registerSource(new BlockManagerSource(_env.blockManager)) ``` ### Does this PR introduce any user-facing change? Yes. This will remove NPE for the users who uses `BlockManagerSource`. ### How was this patch tested? Pass the Jenkins with the newly added test cases. Closes #28187 from dongjoon-hyun/SPARK-31422. Authored-by: Dongjoon Hyun <dongjoon@apache.org> Signed-off-by: Dongjoon Hyun <dongjoon@apache.org> (cherry picked from commit a6e6fbf2ca23e51d43f175907ce6f29c946e1acf) Signed-off-by: Dongjoon Hyun <dongjoon@apache.org> 11 April 2020, 15:28:25 UTC
9657575 [SPARK-31382][BUILD] Show a better error message for different python and pip installation mistake ### What changes were proposed in this pull request? This PR proposes to show a better error message when a user mistakenly installs `pyspark` from PIP but the default `python` does not point out the corresponding `pip`. See https://stackoverflow.com/questions/46286436/running-pyspark-after-pip-install-pyspark/49587560 as an example. It can be reproduced as below: I have two Python executables. `python` is Python 3.7, `pip` binds with Python 3.7 and `python2.7` is Python 2.7. ```bash pip install pyspark ``` ```bash pyspark ``` ``` ... Welcome to ____ __ / __/__ ___ _____/ /__ _\ \/ _ \/ _ `/ __/ '_/ /__ / .__/\_,_/_/ /_/\_\ version 2.4.5 /_/ Using Python version 3.7.3 (default, Mar 27 2019 09:23:15) SparkSession available as 'spark'. ... ``` ```bash PYSPARK_PYTHON=python2.7 pyspark ``` ``` Could not find valid SPARK_HOME while searching ['/Users', '/usr/local/Cellar/python/3.7.5/Frameworks/Python.framework/Versions/3.7/bin'] /usr/local/Cellar/python/3.7.5/Frameworks/Python.framework/Versions/3.7/bin/pyspark: line 24: /bin/load-spark-env.sh: No such file or directory /usr/local/Cellar/python/3.7.5/Frameworks/Python.framework/Versions/3.7/bin/pyspark: line 77: /bin/spark-submit: No such file or directory /usr/local/Cellar/python/3.7.5/Frameworks/Python.framework/Versions/3.7/bin/pyspark: line 77: exec: /bin/spark-submit: cannot execute: No such file or directory ``` ### Why are the changes needed? There are multiple questions outside about this error and they have no idea what's going on. See: - https://stackoverflow.com/questions/46286436/running-pyspark-after-pip-install-pyspark/49587560 - https://stackoverflow.com/questions/45991888/path-issue-could-not-find-valid-spark-home-while-searching - https://stackoverflow.com/questions/49707239/pyspark-could-not-find-valid-spark-home - https://stackoverflow.com/questions/55569985/pyspark-could-not-find-valid-spark-home - https://stackoverflow.com/questions/48296474/error-could-not-find-valid-spark-home-while-searching-pycharm-in-windows - https://github.com/ContinuumIO/anaconda-issues/issues/8076 The answer is usually setting `SPARK_HOME`; however this isn't completely correct. It works if you set `SPARK_HOME` because `pyspark` executable script directly imports the library by using `SPARK_HOME` (see https://github.com/apache/spark/blob/master/bin/pyspark#L52-L53) instead of the default package location specified via `python` executable. So, this way you use a package installed in a different Python, which isn't ideal. ### Does this PR introduce any user-facing change? Yes, it fixes the error message better. **Before:** ``` Could not find valid SPARK_HOME while searching ['/Users', '/usr/local/Cellar/python/3.7.5/Frameworks/Python.framework/Versions/3.7/bin'] ... ``` **After:** ``` Could not find valid SPARK_HOME while searching ['/Users', '/usr/local/Cellar/python/3.7.5/Frameworks/Python.framework/Versions/3.7/bin'] Did you install PySpark via a package manager such as pip or Conda? If so, PySpark was not found in your Python environment. It is possible your Python environment does not properly bind with your package manager. Please check your default 'python' and if you set PYSPARK_PYTHON and/or PYSPARK_DRIVER_PYTHON environment variables, and see if you can import PySpark, for example, 'python -c 'import pyspark'. 
If you cannot import, you can install by using the Python executable directly, for example, 'python -m pip install pyspark [--user]'. Otherwise, you can also explicitly set the Python executable, that has PySpark installed, to PYSPARK_PYTHON or PYSPARK_DRIVER_PYTHON environment variables, for example, 'PYSPARK_PYTHON=python3 pyspark'. ... ``` ### How was this patch tested? Manually tested as described above. Closes #28152 from HyukjinKwon/SPARK-31382. Authored-by: HyukjinKwon <gurwls223@apache.org> Signed-off-by: HyukjinKwon <gurwls223@apache.org> (cherry picked from commit 0248b329726527a07d688122f56dd2ada0e51337) Signed-off-by: HyukjinKwon <gurwls223@apache.org> 09 April 2020, 02:05:11 UTC
1a2c233 [SPARK-31327][SQL][2.4] Write Spark version into Avro file metadata Backport https://github.com/apache/spark/commit/6b1ca886c0066f4e10534336f3fce64cdebc79a5, similar to https://github.com/apache/spark/pull/28142 ### What changes were proposed in this pull request? Write Spark version into Avro file metadata ### Why are the changes needed? The version info is very useful for backward compatibility. This is also done in parquet/orc. ### Does this PR introduce any user-facing change? no ### How was this patch tested? new test Closes #28150 from cloud-fan/pick. Authored-by: Wenchen Fan <wenchen@databricks.com> Signed-off-by: Dongjoon Hyun <dongjoon@apache.org> 08 April 2020, 15:27:06 UTC
fd6b0bc [SPARK-25102][SQL][2.4] Write Spark version to ORC/Parquet file metadata ### What changes were proposed in this pull request? This is a backport of https://github.com/apache/spark/pull/22932 . Currently, Spark writes Spark version number into Hive Table properties with `spark.sql.create.version`. ``` parameters:{ spark.sql.sources.schema.part.0={ "type":"struct", "fields":[{"name":"a","type":"integer","nullable":true,"metadata":{}}] }, transient_lastDdlTime=1541142761, spark.sql.sources.schema.numParts=1, spark.sql.create.version=2.4.0 } ``` This PR aims to write Spark versions to ORC/Parquet file metadata with `org.apache.spark.sql.create.version` because we used `org.apache.` prefix in Parquet metadata already. It's different from Hive Table property key `spark.sql.create.version`, but it seems that we cannot change Hive Table property for backward compatibility. After this PR, ORC and Parquet file generated by Spark will have the following metadata. **ORC (`native` and `hive` implmentation)** ``` $ orc-tools meta /tmp/o File Version: 0.12 with ... ... User Metadata: org.apache.spark.sql.create.version=3.0.0 ``` **PARQUET** ``` $ parquet-tools meta /tmp/p ... creator: parquet-mr version 1.10.0 (build 031a6654009e3b82020012a18434c582bd74c73a) extra: org.apache.spark.sql.create.version = 3.0.0 extra: org.apache.spark.sql.parquet.row.metadata = {"type":"struct","fields":[{"name":"id","type":"long","nullable":false,"metadata":{}}]} ``` ### Why are the changes needed? This backport helps us handle this files differently in Apache Spark 3.0.0. ### Does this PR introduce any user-facing change? No. ### How was this patch tested? Pass the Jenkins with newly added test cases. Closes #28142 from dongjoon-hyun/SPARK-25102-2.4. Authored-by: Dongjoon Hyun <dongjoon@apache.org> Signed-off-by: Dongjoon Hyun <dongjoon@apache.org> 07 April 2020, 20:09:30 UTC
aa9701b [SPARK-31231][BUILD] Unset setuptools version in pip packaging test ### What changes were proposed in this pull request? This PR unsets the `setuptools` version in CI. The underlying issue was fixed in `setuptools` 46.1.2+ - pypa/setuptools#2046; `setuptools` 46.1.0 and 46.1.1 still have this problem. ### Why are the changes needed? To test the latest setuptools out to see if users can actually install and use it. ### Does this PR introduce any user-facing change? No. ### How was this patch tested? Jenkins will test. Closes #28111 from HyukjinKwon/SPARK-31231-revert. Authored-by: HyukjinKwon <gurwls223@apache.org> Signed-off-by: HyukjinKwon <gurwls223@apache.org> (cherry picked from commit 9f58f0385758f31179943a681e4353eb9279cb20) Signed-off-by: HyukjinKwon <gurwls223@apache.org> 03 April 2020, 23:09:48 UTC
22e0a5a [SPARK-31312][SQL][2.4] Cache Class instance for the UDF instance in HiveFunctionWrapper ### What changes were proposed in this pull request? This patch proposes to cache the Class instance for the UDF instance in HiveFunctionWrapper, to fix the case where a Hive simple UDF is somehow transformed (its expression is copied) and evaluated later with another classloader (when the current thread context classloader has somehow changed). In this case, Spark currently throws a ClassNotFoundException. The problem only occurs for Hive simple UDFs, because HiveFunctionWrapper caches the UDF instance for other function types whereas it doesn't for the `UDF` type. The comment says Spark has to create an instance every time for `UDF`, so we cannot simply do the same. This patch caches the Class instance instead, and switches the current thread context classloader to the one that loaded that Class instance. This patch extends the test boundary as well. We only tested with GenericUDTF for SPARK-26560, and this patch strictly requires only UDF, but to avoid regressions for other types as well, this patch adds all available types (UDF, GenericUDF, AbstractGenericUDAFResolver, UDAF, GenericUDTF) into the boundary of tests. Credit to cloud-fan, who discovered the problem and proposed the solution. ### Why are the changes needed? The section above describes why it's a bug and how it's fixed. ### Does this PR introduce any user-facing change? No. ### How was this patch tested? New UTs added. Closes #28086 from HeartSaVioR/SPARK-31312-branch-2.4. Authored-by: Jungtaek Lim (HeartSaVioR) <kabhwan.opensource@gmail.com> Signed-off-by: Wenchen Fan <wenchen@databricks.com> 01 April 2020, 03:38:53 UTC
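For orientation, this is roughly the user-level shape that exercises the fixed path; the UDF class name below is hypothetical and the session is assumed to have Hive support enabled.
```scala
// Register a Hive simple UDF (a subclass of org.apache.hadoop.hive.ql.exec.UDF;
// the class name here is made up) and use it in a query. With the fix, the
// resolved Class is cached, so re-evaluating a copied expression under a
// different context classloader no longer fails with ClassNotFoundException.
spark.sql("CREATE TEMPORARY FUNCTION my_upper AS 'com.example.hive.MyUpperUDF'")
spark.sql("SELECT my_upper('spark')").show()
```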
e226f68 [SPARK-31306][DOCS] update rand() function documentation to indicate exclusive upper bound ### What changes were proposed in this pull request? A small documentation change to clarify that the `rand()` function produces values in `[0.0, 1.0)`. ### Why are the changes needed? `rand()` uses `Rand()` - which generates values in [0, 1) ([documented here](https://github.com/apache/spark/blob/a1dbcd13a3eeaee50cc1a46e909f9478d6d55177/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/randomExpressions.scala#L71)). The existing documentation suggests that 1.0 is a possible value returned by rand (i.e for a distribution written as `X ~ U(a, b)`, x can be a or b, so `U[0.0, 1.0]` suggests the value returned could include 1.0). ### Does this PR introduce any user-facing change? Only documentation changes. ### How was this patch tested? Documentation changes only. Closes #28071 from Smeb/master. Authored-by: Ben Ryves <benjamin.ryves@getyourguide.com> Signed-off-by: HyukjinKwon <gurwls223@apache.org> 31 March 2020, 06:17:05 UTC
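To restate the documented range concretely, a quick sketch (run in a session with an active `spark`):
```scala
import org.apache.spark.sql.functions.rand

// Values are drawn uniformly from [0.0, 1.0): 0.0 is a possible output, 1.0 is not.
spark.range(5).select(rand(42).as("r")).show()
```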
4add8ad [SPARK-31101][BUILD][2.4] Upgrade Janino to 3.0.16 ### What changes were proposed in this pull request? This PR(SPARK-31101) proposes to upgrade Janino to 3.0.16 which is released recently. * Merged pull request janino-compiler/janino#114 "Grow the code for relocatables, and do fixup, and relocate". Please see the commit log. - https://github.com/janino-compiler/janino/commits/3.0.16 You can see the changelog from the link: http://janino-compiler.github.io/janino/changelog.html / though release note for Janino 3.0.16 is actually incorrect. ### Why are the changes needed? We got some report on failure on user's query which Janino throws error on compiling generated code. The issue is here: janino-compiler/janino#113 It contains the information of generated code, symptom (error), and analysis of the bug, so please refer the link for more details. Janino 3.0.16 contains the PR janino-compiler/janino#114 which would enable Janino to succeed to compile user's query properly. ### Does this PR introduce any user-facing change? No. ### How was this patch tested? Existing UTs. Below test code fails on branch-2.4 and passes with this patch. (Note that there seems to be the case where another UT affects this UT to not fail - adding this to SQLQuerySuite won't fail this UT, but adding this to DateFunctionsSuite will fail this UT, and if you run this UT solely in SQLQuerySuite via `build/sbt "sql/testOnly *.SQLQuerySuite -- -z SPARK-31115"` then it fails.) ``` /** * NOTE: The test code tries to control the size of for/switch statement in expand_doConsume, * as well as the overall size of expand_doConsume, so that the query triggers known Janino * bug - https://github.com/janino-compiler/janino/issues/113. * * The expected exception message from Janino when we use switch statement for "ExpandExec": * - "Operand stack inconsistent at offset xxx: Previous size 1, now 0" * which will not happen when we use if-else-if statement for "ExpandExec". * * "The number of fields" and "The number of distinct aggregation functions" are the major * factors to increase the size of generated code: while these values should be large enough * to trigger the Janino bug, these values should not also too big; otherwise one of below * exceptions might be thrown: * - "expand_doConsume would be beyond 64KB" * - "java.lang.ClassFormatError: Too many arguments in method signature in class file" */ test("SPARK-31115 Lots of columns and distinct aggregations shouldn't break code generation") { withSQLConf( (SQLConf.WHOLESTAGE_CODEGEN_ENABLED.key, "true"), (SQLConf.WHOLESTAGE_MAX_NUM_FIELDS.key, "10000"), (SQLConf.CODEGEN_FALLBACK.key, "false"), (SQLConf.CODEGEN_LOGGING_MAX_LINES.key, "-1") ) { var df = Seq(("1", "2", 1), ("1", "2", 2), ("2", "3", 3), ("2", "3", 4)).toDF("a", "b", "c") // The value is tested under commit "244405fe57d7737d81c34ba9e8917df6285889eb": // the query fails with switch statement, whereas it passes with if-else statement. // Note that the value depends on the Spark logic as well - different Spark versions may // require different value to ensure the test failing with switch statement. 
val numNewFields = 100 df = df.withColumns( (1 to numNewFields).map { idx => s"a$idx" }, (1 to numNewFields).map { idx => when(col("c").mod(lit(2)).===(lit(0)), lit(idx)).otherwise(col("c")) } ) val aggExprs: Array[Column] = Range(1, numNewFields).map { idx => if (idx % 2 == 0) { coalesce(countDistinct(s"a$idx"), lit(0)) } else { coalesce(count(s"a$idx"), lit(0)) } }.toArray val aggDf = df .groupBy("a", "b") .agg(aggExprs.head, aggExprs.tail: _*) // We are only interested in whether the code compilation fails or not, so skipping // verification on outputs. aggDf.collect() } } ``` Closes #27997 from HeartSaVioR/SPARK-31101-branch-2.4. Authored-by: Jungtaek Lim (HeartSaVioR) <kabhwan.opensource@gmail.com> Signed-off-by: Dongjoon Hyun <dongjoon@apache.org> 30 March 2020, 02:25:57 UTC
f05ac28 [SPARK-31293][DSTREAMS][KINESIS][DOC] Fix wrong examples and help messages for Kinesis integration This PR (SPARK-31293) fixes wrong command examples, parameter descriptions and the help message format for the Amazon Kinesis integration with Spark Streaming, in order to improve the usability of those commands. It introduces no user-facing behavior change. I ran the fixed commands manually and confirmed they worked as expected. Closes #28063 from sekikn/SPARK-31293. Authored-by: Kengo Seki <sekikn@apache.org> Signed-off-by: Dongjoon Hyun <dongjoon@apache.org> (cherry picked from commit 60dd1a690fed62b1d6442cdc8cf3f89ef4304d5a) Signed-off-by: Dongjoon Hyun <dongjoon@apache.org> 29 March 2020, 21:29:56 UTC
801d6a9 [SPARK-31261][SQL] Avoid npe when reading bad csv input with `columnNameCorruptRecord` specified ### What changes were proposed in this pull request? SPARK-25387 avoids npe for bad csv input, but when reading bad csv input with `columnNameCorruptRecord` specified, `getCurrentInput` is called and it still throws npe. ### Why are the changes needed? Bug fix. ### Does this PR introduce any user-facing change? No. ### How was this patch tested? Add a test. Closes #28029 from wzhfy/corrupt_column_npe. Authored-by: Zhenhua Wang <wzh_zju@163.com> Signed-off-by: HyukjinKwon <gurwls223@apache.org> (cherry picked from commit 791d2ba346f3358fc280adbbbe27f2cd50fd3732) Signed-off-by: HyukjinKwon <gurwls223@apache.org> 29 March 2020, 04:30:43 UTC
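The option in question is the corrupt-record column of the CSV reader; a minimal sketch of the affected read path follows (the input path is hypothetical and assumed to contain malformed rows).
```scala
import org.apache.spark.sql.types.{IntegerType, StringType, StructType}

// PERMISSIVE mode routes malformed input into the designated corrupt-record
// column; this is the path that previously could still hit a NullPointerException.
val schema = new StructType()
  .add("a", IntegerType)
  .add("b", IntegerType)
  .add("_corrupt_record", StringType)

val df = spark.read
  .schema(schema)
  .option("mode", "PERMISSIVE")
  .option("columnNameOfCorruptRecord", "_corrupt_record")
  .csv("/tmp/bad-input.csv")  // hypothetical file containing malformed rows
df.show(false)
```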
4217f75 Revert "[SPARK-31258][BUILD] Pin the avro version in SBT" This reverts commit 916a25a46bca7196416372bacc3fc260a6ef658f. 26 March 2020, 20:49:39 UTC
844f207 [SPARK-31231][BUILD][FOLLOW-UP] Set the upper bound (before 46.1.0) for setuptools in pip package test This PR is a followup of apache/spark#27995. Rather than pinning the setuptools version, it sets an upper bound so the Python 3.5 tests on branch-2.4 can pass too. This is needed to make the CI build stable. No user-facing change; this is a dev-only change. Jenkins will test. Closes #28005 from HyukjinKwon/investigate-pip-packaging-followup. Authored-by: HyukjinKwon <gurwls223@apache.org> Signed-off-by: HyukjinKwon <gurwls223@apache.org> (cherry picked from commit 178d472e1d9f9f61fa54866d00d0a5b88ee87619) Signed-off-by: HyukjinKwon <gurwls223@apache.org> 26 March 2020, 03:34:12 UTC
916a25a [SPARK-31258][BUILD] Pin the avro version in SBT Add the avro dependency in SparkBuild to fix sbt unidoc failures like https://github.com/apache/spark/pull/28017#issuecomment-603828597 ```scala [warn] Multiple main classes detected. Run 'show discoveredMainClasses' to see the list [warn] Multiple main classes detected. Run 'show discoveredMainClasses' to see the list [info] Main Scala API documentation to /home/jenkins/workspace/SparkPullRequestBuilder6/target/scala-2.12/unidoc... [info] Main Java API documentation to /home/jenkins/workspace/SparkPullRequestBuilder6/target/javaunidoc... [error] /home/jenkins/workspace/SparkPullRequestBuilder6/core/src/main/scala/org/apache/spark/serializer/GenericAvroSerializer.scala:123: value createDatumWriter is not a member of org.apache.avro.generic.GenericData [error] writerCache.getOrElseUpdate(schema, GenericData.get.createDatumWriter(schema)) [error] ^ [info] No documentation generated with unsuccessful compiler run [error] one error found ``` No user-facing change. Passed Jenkins and verified manually with `sbt dependencyTree`: ```scala kentyaohulk ~/spark (dep)$ build/sbt dependencyTree | grep avro | grep -v Resolving [info] +-org.apache.avro:avro-mapred:1.8.2 [info] | +-org.apache.avro:avro-ipc:1.8.2 [info] | | +-org.apache.avro:avro:1.8.2 [info] +-org.apache.avro:avro:1.8.2 [info] | | +-org.apache.avro:avro:1.8.2 [info] org.apache.spark:spark-avro_2.12:3.1.0-SNAPSHOT [S] [info] | | | +-org.apache.avro:avro-mapred:1.8.2 [info] | | | | +-org.apache.avro:avro-ipc:1.8.2 [info] | | | | | +-org.apache.avro:avro:1.8.2 [info] | | | +-org.apache.avro:avro:1.8.2 [info] | | | | | +-org.apache.avro:avro:1.8.2 ``` Closes #28020 from yaooqinn/dep. Authored-by: Kent Yao <yaooqinn@hotmail.com> Signed-off-by: HyukjinKwon <gurwls223@apache.org> (cherry picked from commit 336621e277794ab2e6e917391a928ec662498fcf) Signed-off-by: HyukjinKwon <gurwls223@apache.org> 26 March 2020, 01:50:26 UTC
4381ad5 [SPARK-30494][SQL][2.4] Fix cached data leakage during replacing an existing view ### What changes were proposed in this pull request? This is backport of #27185 to branch-2.4. The cached RDD for plan "select 1" stays in memory forever until the session close. This cached data cannot be used since the view temp1 has been replaced by another plan. It's a memory leak. We can reproduce by below commands: ``` Welcome to ____ __ / __/__ ___ _____/ /__ _\ \/ _ \/ _ `/ __/ '_/ /___/ .__/\_,_/_/ /_/\_\ version 3.0.0-SNAPSHOT /_/ Using Scala version 2.12.10 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_201) Type in expressions to have them evaluated. Type :help for more information. scala> spark.sql("create or replace temporary view temp1 as select 1") scala> spark.sql("cache table temp1") scala> spark.sql("create or replace temporary view temp1 as select 1, 2") scala> spark.sql("cache table temp1") scala> assert(spark.sharedState.cacheManager.lookupCachedData(sql("select 1, 2")).isDefined) scala> assert(spark.sharedState.cacheManager.lookupCachedData(sql("select 1")).isDefined) ``` ### Why are the changes needed? Fix the memory leak, specially for long running mode. ### Does this PR introduce any user-facing change? No. ### How was this patch tested? Add a unit test. Closes #28000 from LantaoJin/SPARK-30494_2.4. Lead-authored-by: lajin <lajin@ebay.com> Co-authored-by: LantaoJin <jinlantao@gmail.com> Signed-off-by: Dongjoon Hyun <dongjoon@apache.org> 24 March 2020, 20:29:07 UTC
e37f664 Revert "[SPARK-31231][BUILD] Explicitly setuptools version as 46.0.0 in pip package test" This reverts commit 223b9fb1eadeba0e05b1a300512c31c4f99f41e8. Signed-off-by: Dongjoon Hyun <dongjoon@apache.org> 24 March 2020, 18:55:57 UTC
223b9fb [SPARK-31231][BUILD] Explicitly setuptools version as 46.0.0 in pip package test ### What changes were proposed in this pull request? For a bit of background, PIP packaging test started to fail (see [this logs](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/120218/testReport/)) as of setuptools 46.1.0 release. In https://github.com/pypa/setuptools/issues/1424, they decided to don't keep the modes in `package_data`. In PySpark pip installation, we keep the executable scripts in `package_data` https://github.com/apache/spark/blob/fc4e56a54c15e20baf085e6061d3d83f5ce1185d/python/setup.py#L199-L200, and expose their symbolic links as executable scripts. So, the symbolic links (or copied scripts) executes the scripts copied from `package_data`, which doesn't have the executable permission in its mode: ``` /tmp/tmp.UmkEGNFdKF/3.6/bin/spark-submit: line 27: /tmp/tmp.UmkEGNFdKF/3.6/lib/python3.6/site-packages/pyspark/bin/spark-class: Permission denied /tmp/tmp.UmkEGNFdKF/3.6/bin/spark-submit: line 27: exec: /tmp/tmp.UmkEGNFdKF/3.6/lib/python3.6/site-packages/pyspark/bin/spark-class: cannot execute: Permission denied ``` The current issue is being tracked at https://github.com/pypa/setuptools/issues/2041 </br> For what this PR proposes: It sets the upper bound in PR builder for now to unblock other PRs. _This PR does not solve the issue yet. I will make a fix after monitoring https://github.com/pypa/setuptools/issues/2041_ ### Why are the changes needed? It currently affects users who uses the latest setuptools. So, _users seem unable to use PySpark with the latest setuptools._ See also https://github.com/pypa/setuptools/issues/2041#issuecomment-602566667 ### Does this PR introduce any user-facing change? It makes CI pass for now. No user-facing change yet. ### How was this patch tested? Jenkins will test. Closes #27995 from HyukjinKwon/investigate-pip-packaging. Authored-by: HyukjinKwon <gurwls223@apache.org> Signed-off-by: HyukjinKwon <gurwls223@apache.org> (cherry picked from commit c181c45f863ba55e15ab9b41f635ffbddad9bac0) Signed-off-by: HyukjinKwon <gurwls223@apache.org> 24 March 2020, 09:00:25 UTC
244405f [SPARK-26293][SQL][2.4] Cast exception when having python udf in subquery ## What changes were proposed in this pull request? This PR backports https://github.com/apache/spark/pull/23248 which seems mistakenly not backported. This is a regression introduced by https://github.com/apache/spark/pull/22104 at Spark 2.4.0. When we have Python UDF in subquery, we will hit an exception ``` Caused by: java.lang.ClassCastException: org.apache.spark.sql.catalyst.expressions.AttributeReference cannot be cast to org.apache.spark.sql.catalyst.expressions.PythonUDF at scala.collection.immutable.Stream.map(Stream.scala:414) at org.apache.spark.sql.execution.python.EvalPythonExec.$anonfun$doExecute$2(EvalPythonExec.scala:98) at org.apache.spark.rdd.RDD.$anonfun$mapPartitions$2(RDD.scala:815) ... ``` https://github.com/apache/spark/pull/22104 turned `ExtractPythonUDFs` from a physical rule to optimizer rule. However, there is a difference between a physical rule and optimizer rule. A physical rule always runs once, an optimizer rule may be applied twice on a query tree even the rule is located in a batch that only runs once. For a subquery, the `OptimizeSubqueries` rule will execute the entire optimizer on the query plan inside subquery. Later on subquery will be turned to joins, and the optimizer rules will be applied to it again. Unfortunately, the `ExtractPythonUDFs` rule is not idempotent. When it's applied twice on a query plan inside subquery, it will produce a malformed plan. It extracts Python UDF from Python exec plans. This PR proposes 2 changes to be double safe: 1. `ExtractPythonUDFs` should skip python exec plans, to make the rule idempotent 2. `ExtractPythonUDFs` should skip subquery ## How was this patch tested? a new test. Closes #27960 from HyukjinKwon/backport-SPARK-26293. Lead-authored-by: Wenchen Fan <wenchen@databricks.com> Co-authored-by: HyukjinKwon <gurwls223@apache.org> Signed-off-by: HyukjinKwon <gurwls223@apache.org> 20 March 2020, 03:16:38 UTC
73cc8b5 [SPARK-31164][SQL][2.4] Inconsistent rdd and output partitioning for bucket table when output doesn't contain all bucket columns ### What changes were proposed in this pull request? This is a backport for [pr#27924](https://github.com/apache/spark/pull/27924). For a bucketed table, when deciding output partitioning, if the output doesn't contain all bucket columns, the result is `UnknownPartitioning`. But when generating rdd, current Spark uses `createBucketedReadRDD` because it doesn't check if the output contains all bucket columns. So the rdd and its output partitioning are inconsistent. ### Why are the changes needed? To fix a bug. ### Does this PR introduce any user-facing change? No. ### How was this patch tested? Modified existing tests. Closes #27934 from wzhfy/inconsistent_rdd_partitioning_2.4. Authored-by: Zhenhua Wang <wzh_zju@163.com> Signed-off-by: Wenchen Fan <wenchen@databricks.com> 17 March 2020, 12:17:50 UTC
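To picture the query shape involved, here is a rough sketch (the table name is made up): the table is bucketed by `id`, but the projection drops the bucket column, so the reported output partitioning has to be `UnknownPartitioning` and the scan RDD must be built consistently with that.
```scala
// Write a table bucketed by "id" (table name is hypothetical).
spark.range(0, 100)
  .selectExpr("id", "id % 10 AS part")
  .write
  .bucketBy(8, "id")
  .sortBy("id")
  .saveAsTable("bucketed_t")

// The output no longer contains the bucket column "id", so the plan cannot
// claim a bucketed (hash) output partitioning for this scan.
spark.table("bucketed_t").select("part").explain()
```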
6a60c66 [MINOR][SQL] Update the DataFrameWriter.bucketBy comment ### What changes were proposed in this pull request? This PR intends to update the `DataFrameWriter.bucketBy` comment to clearly describe that the bucketBy scheme is Spark-specific. I saw questions from users about the current bucketing compatibility with Hive in [SPARK-31162](https://issues.apache.org/jira/browse/SPARK-31162?focusedCommentId=17060408&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17060408) and [SPARK-17495](https://issues.apache.org/jira/browse/SPARK-17495?focusedCommentId=17059847&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17059847), and IMHO the comment is a bit confusing to users about that compatibility. ### Why are the changes needed? To help users understand the bucketing scheme correctly. ### Does this PR introduce any user-facing change? No. ### How was this patch tested? N/A Closes #27930 from maropu/UpdateBucketByComment. Authored-by: Takeshi Yamamuro <yamamuro@apache.org> Signed-off-by: Dongjoon Hyun <dongjoon@apache.org> (cherry picked from commit 124b4ce2e6e8f84294f8fc13d3e731a82325dacb) Signed-off-by: Dongjoon Hyun <dongjoon@apache.org> 17 March 2020, 07:53:38 UTC
26ad3fe [SPARK-31163][SQL] TruncateTableCommand with acl/permission should handle non-existed path ### What changes were proposed in this pull request? This fixes an issue introduced by #26956: wrap `fs.getFileStatus(path)` in a try-catch within the acl/permission handling, in case the path doesn't exist. ### Why are the changes needed? `truncate table` may fail to re-create the path in case of interruption or other failures. As a result, the next time we `truncate table` on the same table with acl/permission, it will fail due to `FileNotFoundException`. It also introduces a behavior change compared to previous Spark versions, which could still `truncate table` successfully even if the path didn't exist. ### Does this PR introduce any user-facing change? No. ### How was this patch tested? Added UT. Closes #27923 from Ngone51/fix_truncate. Authored-by: yi.wu <yi.wu@databricks.com> Signed-off-by: Dongjoon Hyun <dongjoon@apache.org> (cherry picked from commit cb26f636b08aea4c5c6bf5035a359cd3cbf335c0) Signed-off-by: Dongjoon Hyun <dongjoon@apache.org> 16 March 2020, 18:45:58 UTC
51ccb6f [SPARK-31144][SQL][2.4] Wrap Error with QueryExecutionException to notify QueryExecutionListener ### What changes were proposed in this pull request? When `java.lang.Error` is thrown during the execution, `ExecutionListenerManager` will wrap it with `QueryExecutionException` so that we can send it to `QueryExecutionListener.onFailure` which only accepts `Exception`. ### Why are the changes needed? If ` java.lang.Error` is thrown during the execution, QueryExecutionListener doesn't get notified right now. ### Does this PR introduce any user-facing change? No ### How was this patch tested? The new added unit test. Closes #27904 from zsxwing/SPARK-31144-2.4. Authored-by: Shixiong Zhu <zsxwing@gmail.com> Signed-off-by: Dongjoon Hyun <dongjoon@apache.org> 13 March 2020, 20:58:12 UTC
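A minimal listener sketch of what becomes observable with this change (the listener body is illustrative): a `java.lang.Error` thrown during execution now reaches `onFailure`, wrapped in a `QueryExecutionException` so that it fits the `Exception` parameter.
```scala
import org.apache.spark.sql.execution.QueryExecution
import org.apache.spark.sql.util.QueryExecutionListener

val listener = new QueryExecutionListener {
  override def onSuccess(funcName: String, qe: QueryExecution, durationNs: Long): Unit = ()
  override def onFailure(funcName: String, qe: QueryExecution, exception: Exception): Unit = {
    // With this change, an Error raised during execution shows up here as the
    // cause of a QueryExecutionException instead of never being reported.
    println(s"$funcName failed, caused by: ${exception.getCause}")
  }
}
spark.listenerManager.register(listener)
```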
e6bcaaa [SPARK-31130][BUILD] Use the same version of `commons-io` in SBT This PR (SPARK-31130) aims to pin `Commons IO` version to `2.4` in SBT build like Maven build. [HADOOP-15261](https://issues.apache.org/jira/browse/HADOOP-15261) upgraded `commons-io` from 2.4 to 2.5 at Apache Hadoop 3.1. In `Maven`, Apache Spark always uses `Commons IO 2.4` based on `pom.xml`. ``` $ git grep commons-io.version pom.xml: <commons-io.version>2.4</commons-io.version> pom.xml: <version>${commons-io.version}</version> ``` However, `SBT` choose `2.5`. **branch-3.0** ``` $ build/sbt -Phadoop-3.2 "core/dependencyTree" | grep commons-io:commons-io | head -n1 [info] | | +-commons-io:commons-io:2.5 ``` **branch-2.4** ``` $ build/sbt -Phadoop-3.1 "core/dependencyTree" | grep commons-io:commons-io | head -n1 [info] | | +-commons-io:commons-io:2.5 ``` No. Pass the Jenkins with `[test-hadoop3.2]` (the default PR Builder is `SBT`) and manually do the following locally. ``` build/sbt -Phadoop-3.2 "core/dependencyTree" | grep commons-io:commons-io | head -n1 ``` Closes #27886 from dongjoon-hyun/SPARK-31130. Authored-by: Dongjoon Hyun <dongjoon@apache.org> Signed-off-by: Dongjoon Hyun <dongjoon@apache.org> (cherry picked from commit 972e23d18186c73026ebed95b37a886ca6eecf3e) Signed-off-by: Dongjoon Hyun <dongjoon@apache.org> 12 March 2020, 16:08:15 UTC
c017422 [SPARK-29295][SQL][2.4] Insert overwrite to Hive external table partition should delete old data ### What changes were proposed in this pull request? This patch proposes to delete old Hive external partition directory even the partition does not exist in Hive, when insert overwrite Hive external table partition. This is backport of #25979 to branch-2.4. ### Why are the changes needed? When insert overwrite to a Hive external table partition, if the partition does not exist, Hive will not check if the external partition directory exists or not before copying files. So if users drop the partition, and then do insert overwrite to the same partition, the partition will have both old and new data. For example: ```scala withSQLConf(HiveUtils.CONVERT_METASTORE_PARQUET.key -> "false") { // test is an external Hive table. sql("INSERT OVERWRITE TABLE test PARTITION(name='n1') SELECT 1") sql("ALTER TABLE test DROP PARTITION(name='n1')") sql("INSERT OVERWRITE TABLE test PARTITION(name='n1') SELECT 2") sql("SELECT id FROM test WHERE name = 'n1' ORDER BY id") // Got both 1 and 2. } ``` ### Does this PR introduce any user-facing change? Yes. This fix a correctness issue when users drop partition on a Hive external table partition and then insert overwrite it. ### How was this patch tested? Added test. Closes #27887 from viirya/SPARK-29295-2.4. Authored-by: Liang-Chi Hsieh <viirya@gmail.com> Signed-off-by: Dongjoon Hyun <dongjoon@apache.org> 12 March 2020, 10:00:35 UTC
8e1021d [SPARK-31095][BUILD][2.4] Upgrade netty-all to 4.1.47.Final ### What changes were proposed in this pull request? This is a backport of https://github.com/apache/spark/pull/27869. This PR aims to bring the bug fixes from the latest netty-all. ### Why are the changes needed? - 4.1.47.Final: https://github.com/netty/netty/milestone/222?closed=1 (15 patches or issues) - 4.1.46.Final: https://github.com/netty/netty/milestone/221?closed=1 (80 patches or issues) - 4.1.45.Final: https://github.com/netty/netty/milestone/220?closed=1 (23 patches or issues) - 4.1.44.Final: https://github.com/netty/netty/milestone/218?closed=1 (113 patches or issues) - 4.1.43.Final: https://github.com/netty/netty/milestone/217?closed=1 (63 patches or issues) ### Does this PR introduce any user-facing change? No. ### How was this patch tested? Pass the Jenkins with the existing tests. Closes #27870 from dongjoon-hyun/netty. Authored-by: Dongjoon Hyun <dongjoon@apache.org> Signed-off-by: Dongjoon Hyun <dongjoon@apache.org> 11 March 2020, 18:50:03 UTC
f378c7f [SPARK-30941][PYSPARK] Add a note to asDict to document its behavior when there are duplicate fields ### What changes were proposed in this pull request? Adding a note to document `Row.asDict` behavior when there are duplicate fields. ### Why are the changes needed? When a row contains duplicate fields, `asDict` and `_get_item_` behaves differently. We should document it to let users know the difference explicitly. ### Does this PR introduce any user-facing change? No. Only document change. ### How was this patch tested? Existing test. Closes #27853 from viirya/SPARK-30941. Authored-by: Liang-Chi Hsieh <viirya@gmail.com> Signed-off-by: Dongjoon Hyun <dongjoon@apache.org> (cherry picked from commit d21aab403a0a32e8b705b38874c0b335e703bd5d) Signed-off-by: Dongjoon Hyun <dongjoon@apache.org> 09 March 2020, 18:07:25 UTC
7c237cc [MINOR][SQL] Remove an ignored test from JsonSuite ### What changes were proposed in this pull request? Remove ignored and outdated test `Type conflict in primitive field values (Ignored)` from JsonSuite. ### Why are the changes needed? The test is not maintained for long time. It can be removed to reduce size of JsonSuite, and improve maintainability. ### Does this PR introduce any user-facing change? No ### How was this patch tested? By running the command `./build/sbt "test:testOnly *JsonV2Suite"` Closes #27795 from MaxGekk/remove-ignored-test-in-JsonSuite. Authored-by: Maxim Gekk <max.gekk@gmail.com> Signed-off-by: HyukjinKwon <gurwls223@apache.org> (cherry picked from commit cf7c397ede05fd106697bd5cc8062f394623bf22) Signed-off-by: HyukjinKwon <gurwls223@apache.org> 06 March 2020, 01:36:17 UTC
1c17ede [MINOR][CORE] Expose the alias -c flag of --conf for spark-submit ### What changes were proposed in this pull request? `-c` is short for `--conf`; it has existed since v1.1.0 but was hidden from users until now. ### Why are the changes needed? To expose the hidden feature. ### Does this PR introduce any user-facing change? No, other than showing the flag in the help output. ### How was this patch tested? Not tested; this only changes the help message. Closes #27802 from yaooqinn/conf. Authored-by: Kent Yao <yaooqinn@hotmail.com> Signed-off-by: Dongjoon Hyun <dhyun@apple.com> (cherry picked from commit 3edab6cc1d70c102093e973a2cf97208db19be8c) Signed-off-by: Dongjoon Hyun <dhyun@apple.com> 05 March 2020, 04:38:23 UTC
0ea91da [MINOR][DOCS] ForeachBatch java example fix ### What changes were proposed in this pull request? ForEachBatch Java example was incorrect ### Why are the changes needed? Example did not compile ### Does this PR introduce any user-facing change? Yes, to docs. ### How was this patch tested? In IDE. Closes #27740 from roland1982/foreachwriter_java_example_fix. Authored-by: roland-ondeviceresearch <roland@ondeviceresearch.com> Signed-off-by: Sean Owen <srowen@gmail.com> (cherry picked from commit a4aaee01fa8e71d51f49b24889d862422e0727c7) Signed-off-by: Sean Owen <srowen@gmail.com> 03 March 2020, 15:25:10 UTC
f4c8c48 [SPARK-30998][SQL][2.4] ClassCastException when a generator having nested inner generators ### What changes were proposed in this pull request? A query below failed in branch-2.4; ``` scala> sql("select array(array(1, 2), array(3)) ar").select(explode(explode($"ar"))).show() 20/03/01 13:51:56 ERROR Executor: Exception in task 0.0 in stage 0.0 (TID 0)/ 1] java.lang.ClassCastException: scala.collection.mutable.ArrayOps$ofRef cannot be cast to org.apache.spark.sql.catalyst.util.ArrayData at org.apache.spark.sql.catalyst.expressions.ExplodeBase.eval(generators.scala:313) at org.apache.spark.sql.execution.GenerateExec.$anonfun$doExecute$8(GenerateExec.scala:108) at scala.collection.Iterator$$anon$11.nextCur(Iterator.scala:484) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:490) at scala.collection.Iterator$ConcatIterator.hasNext(Iterator.scala:222) at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458) ... ``` This pr modified the `hasNestedGenerator` code in `ExtractGenerator` for correctly catching nested inner generators. This backport PR comes from https://github.com/apache/spark/pull/27750# ### Why are the changes needed? A bug fix. ### Does this PR introduce any user-facing change? No. ### How was this patch tested? Added tests. Closes #27769 from maropu/SPARK-20998-BRANCH-2.4. Authored-by: Takeshi Yamamuro <yamamuro@apache.org> Signed-off-by: Takeshi Yamamuro <yamamuro@apache.org> 03 March 2020, 14:47:40 UTC
7216099 [SPARK-30993][SQL][2.4] Use its sql type for UDT when checking the type of length (fixed/var) or mutable ### What changes were proposed in this pull request? This patch fixes the bug of UnsafeRow which misses to handle the UDT specifically, in `isFixedLength` and `isMutable`. These methods don't check its SQL type for UDT, always treating UDT as variable-length, and non-mutable. It doesn't bring any issue if UDT is used to represent complicated type, but when UDT is used to represent some type which is matched with fixed length of SQL type, it exposes the chance of correctness issues, as these informations sometimes decide how the value should be handled. We got report from user mailing list which suspected as mapGroupsWithState looks like handling UDT incorrectly, but after some investigation it was from GenerateUnsafeRowJoiner in shuffle phase. https://github.com/apache/spark/blob/0e2ca11d80c3921387d7b077cb64c3a0c06b08d7/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/codegen/GenerateUnsafeRowJoiner.scala#L32-L43 Here updating position should not happen on fixed-length column, but due to this bug, the value of UDT having fixed-length as sql type would be modified, which actually corrupts the value. ### Why are the changes needed? Misclassifying of the type of length for UDT can corrupt the value when the row is presented to the input of GenerateUnsafeRowJoiner, which brings correctness issue. ### Does this PR introduce any user-facing change? No. ### How was this patch tested? New UT added. Closes #27761 from HeartSaVioR/SPARK-30993-branch-2.4. Authored-by: Jungtaek Lim (HeartSaVioR) <kabhwan.opensource@gmail.com> Signed-off-by: Wenchen Fan <wenchen@databricks.com> 03 March 2020, 09:50:43 UTC
0b71b4d [SPARK-31003][TESTS] Fix incorrect uses of assume() in tests ### What changes were proposed in this pull request? This patch fixes several incorrect uses of `assume()` in our tests. If a call to `assume(condition)` fails then it will cause the test to be marked as skipped instead of failed: this feature allows test cases to be skipped if certain prerequisites are missing. For example, we use this to skip certain tests when running on Windows (or when Python dependencies are unavailable). In contrast, `assert(condition)` will fail the test if the condition doesn't hold. If `assume()` is accidentally substituted for `assert()` then the resulting test will be marked as skipped in cases where it should have failed, undermining the purpose of the test. This patch fixes several such cases, replacing certain `assume()` calls with `assert()`. Credit to ahirreddy for spotting this problem. ### Does this PR introduce any user-facing change? No. ### How was this patch tested? Existing tests. Closes #27754 from JoshRosen/fix-assume-vs-assert. Lead-authored-by: Josh Rosen <rosenville@gmail.com> Co-authored-by: Josh Rosen <joshrosen@databricks.com> Signed-off-by: Dongjoon Hyun <dhyun@apple.com> (cherry picked from commit f0010c81e2ef9b8859b39917bb62b48d739a4a22) Signed-off-by: Dongjoon Hyun <dhyun@apple.com> 02 March 2020, 23:21:24 UTC
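A small ScalaTest-style sketch of the distinction this patch restores, with an invented suite name and conditions: `assume()` cancels a test whose prerequisites are missing, while `assert()` fails a test whose checked behaviour is wrong.

```scala
import org.scalatest.FunSuite

class AssumeVsAssertSketch extends FunSuite {

  test("skipped when the environment cannot run it") {
    // assume(): if this is false the test is reported as canceled, not failed.
    assume(!System.getProperty("os.name").startsWith("Windows"), "requires a POSIX shell")
    // ... test body ...
  }

  test("failed when the checked behaviour is wrong") {
    val result = 1 + 1
    // assert(): if this is false the test fails, which is what real
    // verification logic needs.
    assert(result == 2)
  }
}
```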
cd8f86a [SPARK-30813][ML] Fix Matrices.sprand comments ### What changes were proposed in this pull request? Fix mistakes in comments ### Why are the changes needed? There are mistakes in comments ### Does this PR introduce any user-facing change? No ### How was this patch tested? N/A Closes #27564 from xwu99/fix-mllib-sprand-comment. Authored-by: Wu, Xiaochang <xiaochang.wu@intel.com> Signed-off-by: Sean Owen <srowen@gmail.com> (cherry picked from commit ac122762f5091a7abf62d17595e0f5a99374ac5c) Signed-off-by: Sean Owen <srowen@gmail.com> 02 March 2020, 14:56:49 UTC
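For reference, a minimal hedged usage sketch of the `ml.linalg` variant of the API whose comments were corrected; the dimensions, density, and seed are arbitrary.

```scala
import java.util.Random
import org.apache.spark.ml.linalg.Matrices

// Generate a 4x6 sparse matrix with roughly 20% non-zero entries drawn
// from i.i.d. uniform random numbers; the seed is arbitrary.
val rng = new Random(42L)
val m = Matrices.sprand(4, 6, 0.2, rng)
println(m)
```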
0d1664c [SPARK-29419][SQL] Fix Encoder thread-safety bug in createDataset(Seq) ### What changes were proposed in this pull request? This PR fixes a thread-safety bug in `SparkSession.createDataset(Seq)`: if the caller-supplied `Encoder` is used in multiple threads then createDataset's usage of the encoder may lead to incorrect / corrupt results because the Encoder's internal mutable state will be updated from multiple threads. Here is an example demonstrating the problem: ```scala import org.apache.spark.sql._ val enc = implicitly[Encoder[(Int, Int)]] val datasets = (1 to 100).par.map { _ => val pairs = (1 to 100).map(x => (x, x)) spark.createDataset(pairs)(enc) } datasets.reduce(_ union _).collect().foreach { pair => require(pair._1 == pair._2, s"Pair elements are mismatched: $pair") } ``` Before this PR's change, the above example fails because Spark produces corrupted records where different input records' fields have been co-mingled. This bug is similar to SPARK-22355 / #19577, an analogous problem in `Dataset.collect()`. The fix implemented here is based on #24735's updated version of the `Dataset.collect()` bugfix: use `.copy()`. For consistency, I used the same [code comment](https://github.com/apache/spark/blob/d841b33ba3a9b0504597dbccd4b0d11fa810abf3/sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala#L3414) / explanation as that PR. ### Does this PR introduce any user-facing change? No. ### How was this patch tested? Tested manually using the example listed above. Thanks to smcnamara-stripe for identifying this bug. Closes #26076 from JoshRosen/SPARK-29419. Authored-by: Josh Rosen <rosenville@gmail.com> Signed-off-by: HyukjinKwon <gurwls223@apache.org> (cherry picked from commit f4499f678dc2e9f72c3ee5d2af083aa6b98f3fc2) Signed-off-by: HyukjinKwon <gurwls223@apache.org> 02 March 2020, 01:19:53 UTC
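On builds without this fix, a hedged workaround sketch is to resolve a fresh encoder inside each parallel task rather than sharing a single instance across threads; this mirrors the reproducer above but is not the project's fix itself.

```scala
// Assuming a spark-shell session on an unpatched build; creating the encoder
// inside each task means its mutable internal state is never shared.
import org.apache.spark.sql.{Encoder, Encoders}

val datasets = (1 to 100).par.map { _ =>
  val enc: Encoder[(Int, Int)] = Encoders.product[(Int, Int)]  // one encoder per thread
  val pairs = (1 to 100).map(x => (x, x))
  spark.createDataset(pairs)(enc)
}

datasets.reduce(_ union _).collect().foreach { pair =>
  require(pair._1 == pair._2, s"Pair elements are mismatched: $pair")
}
```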
ff5ba49 [SPARK-30970][K8S][CORE][2.4] Fix NPE while resolving k8s master url ### What changes were proposed in this pull request? Backport of https://github.com/apache/spark/pull/27721 ``` bin/spark-sql --master k8s:///https://kubernetes.docker.internal:6443 --conf spark.kubernetes.container.image=yaooqinn/spark:v2.4.4 Exception in thread "main" java.lang.NullPointerException at org.apache.spark.util.Utils$.checkAndGetK8sMasterUrl(Utils.scala:2739) at org.apache.spark.deploy.SparkSubmit.prepareSubmitEnvironment(SparkSubmit.scala:261) at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:774) at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:161) at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:184) at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86) at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:920) at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:929) at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala) ``` Although `k8s:///https://kubernetes.docker.internal:6443` is an invalid master URL, it should not cause an NPE. The `case null` branch is never reached. https://github.com/apache/spark/blob/3f4060c340d6bac412e8819c4388ccba226efcf3/core/src/main/scala/org/apache/spark/util/Utils.scala#L2772-L2776 ### Why are the changes needed? Bug fix. ### Does this PR introduce any user-facing change? No. ### How was this patch tested? Added a unit test case. Closes #27736 from yaooqinn/SPARK-30970-2. Authored-by: Kent Yao <yaooqinn@hotmail.com> Signed-off-by: Dongjoon Hyun <dhyun@apple.com> 28 February 2020, 19:15:26 UTC
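To illustrate the failure mode (not the actual patch), a sketch of defensive `k8s://` master-URL parsing: `URI.getScheme` can return null, so it must be wrapped before lowercasing or matching. The object and method names here are invented.

```scala
import java.net.URI

// Illustrative only; the real logic lives in Utils.checkAndGetK8sMasterUrl.
object K8sMasterUrlSketch {
  def resolve(rawMasterURL: String): String = {
    require(rawMasterURL.startsWith("k8s://"),
      s"Kubernetes master URL must start with k8s://, got: $rawMasterURL")
    val withoutPrefix = rawMasterURL.substring("k8s://".length)

    // getScheme can be null (e.g. for "k8s:///https://host:6443"), so wrap it
    // in Option instead of calling methods on it directly.
    Option(new URI(withoutPrefix).getScheme).map(_.toLowerCase) match {
      case Some("https") | Some("http") => s"k8s://$withoutPrefix"
      case Some(other) =>
        throw new IllegalArgumentException(s"Invalid Kubernetes master scheme: $other")
      case None =>
        // No scheme given: default to https.
        s"k8s://https://$withoutPrefix"
    }
  }
}

// For the malformed URL above the scheme is null, so resolution takes the
// None branch instead of throwing a NullPointerException.
```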
7574d99 [SPARK-30968][BUILD] Upgrade aws-java-sdk-sts to 1.11.655 ### What changes were proposed in this pull request? This PR aims to upgrade `aws-java-sdk-sts` to `1.11.655`. ### Why are the changes needed? [SPARK-29677](https://github.com/apache/spark/pull/26333) upgrades AWS Kinesis Client to 1.12.0 for Apache Spark 2.4.5 and 3.0.0. Since AWS Kinesis Client 1.12.0 is using AWS SDK 1.11.665, `aws-java-sdk-sts` should be consistent with Kinesis client dependency. - https://github.com/awslabs/amazon-kinesis-client/releases/tag/v1.12.0 ### Does this PR introduce any user-facing change? No. ### How was this patch tested? Pass the Jenkins. Closes #27720 from dongjoon-hyun/SPARK-30968. Authored-by: Dongjoon Hyun <dhyun@apple.com> Signed-off-by: Dongjoon Hyun <dhyun@apple.com> (cherry picked from commit 3995728c3ce9d85b0436c0220f957b9d9133d64a) Signed-off-by: Dongjoon Hyun <dhyun@apple.com> 28 February 2020, 01:06:26 UTC
2749043 [MINOR][ML] Fix confusing error message in VectorAssembler ### What changes were proposed in this pull request? When VectorAssembler encounters a NULL with handleInvalid="error", it throws an exception. That exception message, though, contains a typo that makes it confusing, while the corresponding exception for NaN values is fine. This fixes the NULL message to match the correct one. ### Why are the changes needed? Encountering this error with such a message was very confusing; improving it should save fellow engineers some time. ### Does this PR introduce any user-facing change? No. ### How was this patch tested? It's just an error message... Closes #27709 from Saluev/patch-1. Authored-by: Tigran Saluev <tigran@saluev.com> Signed-off-by: Sean Owen <srowen@gmail.com> (cherry picked from commit 6f4a2e4c99dadf111e43c6e5b4d7ee5e4d4bd8f6) Signed-off-by: Sean Owen <srowen@gmail.com> 27 February 2020, 17:06:25 UTC
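A hedged sketch of the setting in which the corrected message appears, assuming a spark-shell session; column names and values are illustrative. With handleInvalid = "error" (the default) VectorAssembler throws on null feature values, while "skip" drops the offending rows.

```scala
// Assuming a spark-shell session; column names and values are made up.
import org.apache.spark.ml.feature.VectorAssembler
import spark.implicits._

val df = Seq((Some(1.0), 2.0), (None, 3.0)).toDF("a", "b")  // second row has a null in "a"

val assembler = new VectorAssembler()
  .setInputCols(Array("a", "b"))
  .setOutputCol("features")
  .setHandleInvalid("error")  // default: throws (with the corrected message) on the null

// assembler.transform(df).show()  // would fail because of the null in column "a"

// "skip" silently drops rows containing invalid values instead.
assembler.setHandleInvalid("skip").transform(df).show()
```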
a549a07 [SPARK-23435][INFRA][FOLLOW-UP] Remove unnecessary dependency in AppVeyor ### What changes were proposed in this pull request? The `testthat` version was pinned to `1.0.2` at https://github.com/apache/spark/commit/f15102b1702b64a54233ae31357e32335722f4e5 due to a compatibility issue in SparkR. The compatibility issue is finally fixed as of https://github.com/apache/spark/commit/298d0a5102e54ddc24f114e83d2b936762722eec and we now use the latest testthat version. Now we don't need to install 'crayon', 'praise' and 'R6' as they are dependencies of testthat (https://github.com/r-lib/testthat/blob/master/DESCRIPTION). ### Why are the changes needed? To minimise the build specification and prevent dependency confusion. ### Does this PR introduce any user-facing change? No. Dev only change. ### How was this patch tested? AppVeyor build will test it out. Closes #27717 from HyukjinKwon/SPARK-23435-followup. Authored-by: HyukjinKwon <gurwls223@apache.org> Signed-off-by: Dongjoon Hyun <dhyun@apple.com> (cherry picked from commit b2a74107d50a0f69d727049581e01d9e2f6b4778) Signed-off-by: Dongjoon Hyun <dhyun@apple.com> 27 February 2020, 08:19:41 UTC