https://github.com/apache/spark

8fb6f00 [maven-release-plugin] prepare release v1.0.2-rc1 25 July 2014, 21:21:15 UTC
4be1dbd Revert "[maven-release-plugin] prepare release v1.0.2-rc1" This reverts commit 08f601328ad9e7334ef7deb3a9fff1343a3c4f30. 25 July 2014, 20:54:14 UTC
57b5531 Revert "[maven-release-plugin] prepare for next development iteration" This reverts commit 54df1b8c31fa2de5b04ee4a5563706b2664f34f3. 25 July 2014, 20:54:02 UTC
76117ba Updated CHANGES.txt 25 July 2014, 20:50:50 UTC
54df1b8 [maven-release-plugin] prepare for next development iteration 25 July 2014, 18:43:25 UTC
08f6013 [maven-release-plugin] prepare release v1.0.2-rc1 25 July 2014, 18:43:18 UTC
01fc6d8 Revert "[maven-release-plugin] prepare release v1.0.2-rc1" This reverts commit 919c87f26a2655bfd5ae03958915b6804367c1d6. 25 July 2014, 18:18:06 UTC
d9ccf7f Revert "[maven-release-plugin] prepare for next development iteration" This reverts commit edbd02fc6873676e080101d407916efb64bdf71a. 25 July 2014, 18:18:00 UTC
edbd02f [maven-release-plugin] prepare for next development iteration 25 July 2014, 11:30:10 UTC
919c87f [maven-release-plugin] prepare release v1.0.2-rc1 25 July 2014, 11:30:01 UTC
797c663 [SPARK-2529] Clean closures in foreach and foreachPartition. Author: Reynold Xin <rxin@apache.org> Closes #1583 from rxin/closureClean and squashes the following commits: 8982fe6 [Reynold Xin] [SPARK-2529] Clean closures in foreach and foreachPartition. (cherry picked from commit eb82abd8e3d25c912fa75201cf4f429aab8d73c7) Signed-off-by: Reynold Xin <rxin@apache.org> 25 July 2014, 08:10:16 UTC
70109da Updating versions for 1.0.2 release. 25 July 2014, 03:09:36 UTC
b1e1917 Revert "[maven-release-plugin] prepare release v1.0.1-rc3" This reverts commit 70ee14f76d6c3d3f162db6bbe12797c252a0295a. 25 July 2014, 02:25:52 UTC
d10455c Revert "[maven-release-plugin] prepare for next development iteration" This reverts commit baf92a0f2119867b1be540085ebe9f1a1c411ae8. 25 July 2014, 02:20:14 UTC
53b4e0f [SPARK-2464][Streaming] Fixed Twitter stream stopping bug Stopping the Twitter Receiver would call twitter4j's TwitterStream.shutdown, which in turn causes an Exception to be thrown to the listener. This exception caused the Receiver to be restarted. This patch checks whether the receiver was stopped or not, and restarts on exception only if it was not stopped. Author: Tathagata Das <tathagata.das1565@gmail.com> Closes #1577 from tdas/twitter-stop and squashes the following commits: 011b525 [Tathagata Das] Fixed Twitter stream stopping bug. (cherry picked from commit a45d5480f65d2e969fc7fbd8f358b1717fb99bef) Signed-off-by: Tathagata Das <tathagata.das1565@gmail.com> 24 July 2014, 23:00:49 UTC
9124159 [SPARK-2603][SQL] Remove unnecessary toMap and toList in converting Java collections to Scala collections JsonRDD.scala In JsonRDD.scalafy, we are using toMap/toList to convert a Java Map/List to a Scala one. These two operations are pretty expensive because they read elements from a Java Map/List and then load them into a Scala Map/List. We can use Scala wrappers to wrap those Java collections instead of using toMap/toList. I did a quick test to see the performance. I had a 2.9GB cached RDD[String] storing one JSON object per record (twitter dataset). My simple test program is attached below.
```scala
val sqlContext = new org.apache.spark.sql.SQLContext(sc)
import sqlContext._
val jsonData = sc.textFile("...")
jsonData.cache.count
val jsonSchemaRDD = sqlContext.jsonRDD(jsonData)
jsonSchemaRDD.registerAsTable("jt")
sqlContext.sql("select count(*) from jt").collect
```
Stages for the schema inference and the table scan both had 48 tasks. These tasks were executed sequentially. For the current implementation, scanning the JSON dataset will materialize values of all fields of a record. The inferred schema of the dataset can be accessed at https://gist.github.com/yhuai/05fe8a57c638c6666f8d. From the result, there was no significant difference in running `jsonRDD`. For the simple aggregation query, results are attached below.
```
Original:
Run 1: 26.1s
Run 2: 27.03s
Run 3: 27.035s

With this change:
Run 1: 21.086s
Run 2: 21.035s
Run 3: 21.029s
```
JIRA: https://issues.apache.org/jira/browse/SPARK-2603 Author: Yin Huai <huai@cse.ohio-state.edu> Closes #1504 from yhuai/removeToMapToList and squashes the following commits: 6831b77 [Yin Huai] Fix failed tests. 09b9bca [Yin Huai] Merge remote-tracking branch 'upstream/master' into removeToMapToList d1abdb8 [Yin Huai] Remove unnecessary toMap and toList. (cherry picked from commit b352ef175c234a2ea86b72c2f40da2ac69658b2e) Signed-off-by: Michael Armbrust <michael@databricks.com> 24 July 2014, 18:19:58 UTC
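
For readers unfamiliar with the wrapper approach this commit describes, a minimal standalone sketch using the standard library (illustrative only, not the actual JsonRDD code):
```scala
import scala.collection.JavaConverters._

// A wrapper view is O(1): it adapts the Java map in place, while toMap copies
// every entry into a new immutable Scala map.
val javaMap = new java.util.HashMap[String, Integer]()
javaMap.put("field", 1)

val view   = javaMap.asScala        // cheap wrapper, backed by javaMap
val copied = javaMap.asScala.toMap  // full copy, the cost the patch removes
```
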
6b08046 [SPARK-2658][SQL] Add rule for true = 1. Author: Michael Armbrust <michael@databricks.com> Closes #1556 from marmbrus/fixBooleanEqualsOne and squashes the following commits: ad8edd4 [Michael Armbrust] Add rule for true = 1 and false = 0. (cherry picked from commit 78d18fdbaa62d8ed235c29b2e37fd6607263c639) Signed-off-by: Reynold Xin <rxin@apache.org> 24 July 2014, 05:53:05 UTC
c6421b6 [SPARK-2615] [SQL] Add Equal Sign "==" Support for HiveQl Currently, "==" in a HiveQL expression causes an exception to be thrown; this patch fixes it. Author: Cheng Hao <hao.cheng@intel.com> Closes #1522 from chenghao-intel/equal and squashes the following commits: f62a0ff [Cheng Hao] Add == Support for HiveQl (cherry picked from commit 79fe7634f6817eb2443bc152c6790a4439721fda) Signed-off-by: Michael Armbrust <michael@databricks.com> 23 July 2014, 01:13:41 UTC
84bbfbd [SPARK-2561][SQL] Fix apply schema We need to use the analyzed attributes otherwise we end up with a tree that will never resolve. Author: Michael Armbrust <michael@databricks.com> Closes #1470 from marmbrus/fixApplySchema and squashes the following commits: f968195 [Michael Armbrust] Use analyzed attributes when applying the schema. 4969015 [Michael Armbrust] Add test case. (cherry picked from commit 511a7314037219c23e824ea5363bf7f1df55bab3) Signed-off-by: Michael Armbrust <michael@databricks.com> 22 July 2014, 01:18:35 UTC
cdcd467 [SPARK-2494] [PySpark] make hash of None consistent across machines In CPython, the hash of None differs across machines, which causes wrong results during shuffles. This PR fixes that. Author: Davies Liu <davies.liu@gmail.com> Closes #1371 from davies/hash_of_none and squashes the following commits: d01745f [Davies Liu] add comments, remove outdated unit tests 5467141 [Davies Liu] disable hijack of hash, use it only for partitionBy() b7118aa [Davies Liu] use __builtin__ instead of __builtins__ 839e417 [Davies Liu] hijack hash to make hash of None consistent across machines (cherry picked from commit 872538c600a452ead52638c1ccba90643a9fa41c) Signed-off-by: Matei Zaharia <matei@databricks.com> 21 July 2014, 19:00:17 UTC
e0cc384 Revert "[SPARK-1199][REPL] Remove VALId and use the original import style for defined classes." This reverts commit 6e0b7e5308263bef60120debe05577868ebaeea9. 21 July 2014, 18:54:38 UTC
480669f [SPARK-2598] RangePartitioner's binary search does not use the given Ordering We should fix this in branch-1.0 as well. Author: Reynold Xin <rxin@apache.org> Closes #1500 from rxin/rangePartitioner and squashes the following commits: c0a94f5 [Reynold Xin] [SPARK-2598] RangePartitioner's binary search does not use the given Ordering. (cherry picked from commit fa51b0fb5bee95a402c7b7f13dcf0b46cf5bb429) Signed-off-by: Reynold Xin <rxin@apache.org> 20 July 2014, 18:06:16 UTC
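
For context, a sketch of the bug class this commit fixes, as a hand-rolled binary search (illustrative, not RangePartitioner's exact code): the comparisons must go through the supplied `Ordering` rather than built-in operators.
```scala
// Binary search that respects a caller-supplied Ordering. Comparing with
// built-in operators instead of `ord` is exactly the bug described above.
def binarySearch[K](input: Array[K], key: K)(implicit ord: Ordering[K]): Int = {
  var low = 0
  var high = input.length - 1
  while (low <= high) {
    val mid = (low + high) >>> 1
    val cmp = ord.compare(input(mid), key)
    if (cmp == 0) return mid
    else if (cmp < 0) low = mid + 1
    else high = mid - 1
  }
  -(low + 1) // not found: encode the insertion point, java.util.Arrays-style
}
```
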
11670bf [SPARK-2524] missing documentation for spark.deploy.retainedDrivers https://issues.apache.org/jira/browse/SPARK-2524 The spark.deploy.retainedDrivers configuration is undocumented but actually used https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/deploy/master/Master.scala#L60 Author: lianhuiwang <lianhuiwang09@gmail.com> Author: Wang Lianhui <lianhuiwang09@gmail.com> Author: unknown <Administrator@taguswang-PC1.tencent.com> Closes #1443 from lianhuiwang/SPARK-2524 and squashes the following commits: 64660fd [Wang Lianhui] address pwendell's comments 5f6bbb7 [Wang Lianhui] missing document about spark.deploy.retainedDrivers 44a3f50 [unknown] Merge remote-tracking branch 'upstream/master' eacf933 [lianhuiwang] Merge remote-tracking branch 'upstream/master' 8bbfe76 [lianhuiwang] Merge remote-tracking branch 'upstream/master' 480ce94 [lianhuiwang] address aarondav comments f2b5970 [lianhuiwang] bugfix worker DriverStateChanged state should match DriverState.FAILED (cherry picked from commit 4da01e3813f0a0413fe691358c14278bbd5508ed) Signed-off-by: Patrick Wendell <pwendell@gmail.com> 20 July 2014, 03:47:10 UTC
a0624e8 Typo fix to the programming guide in the docs. Changed the word "distibuted" to "distributed". Author: Cesar Arevalo <cesar@zephyrhealthinc.com> Closes #1495 from cesararevalo/master and squashes the following commits: 0c2e3a7 [Cesar Arevalo] Typo fix to the programming guide in the docs (cherry picked from commit 0d01e85f42f3c997df7fee942b05b509968bac4b) Signed-off-by: Patrick Wendell <pwendell@gmail.com> 20 July 2014, 03:20:33 UTC
7611840 [SPARK-2540] [SQL] Add HiveDecimal & HiveVarchar support in unwrapping data Author: Cheng Hao <hao.cheng@intel.com> Closes #1436 from chenghao-intel/unwrapdata and squashes the following commits: 34cc21a [Cheng Hao] update the table scan accordingly since the unwrapData function changed afc39da [Cheng Hao] Polish the code 39d6475 [Cheng Hao] Add HiveDecimal & HiveVarchar support in unwrap data (cherry picked from commit 7f1720813793e155743b58eae5228298e894b90d) Signed-off-by: Michael Armbrust <michael@databricks.com> 18 July 2014, 21:38:32 UTC
284bf10 Added t2 instance types New t2 instance types require HVM AMIs; the bailout assumption of PVM causes failures when using t2 instance types. Author: Basit Mustafa <basitmustafa@computes-things-for-basit.local> Closes #1446 from 24601/master and squashes the following commits: 01fe128 [Basit Mustafa] Makin' it pretty 392a95e [Basit Mustafa] Added t2 instance types Conflicts: ec2/spark_ec2.py 18 July 2014, 19:25:59 UTC
d35837a [SPARK-2570] [SQL] Fix the bug of ClassCastException Exception thrown when running the example of HiveFromSpark:
```
Exception in thread "main" java.lang.ClassCastException: java.lang.Long cannot be cast to java.lang.Integer
    at scala.runtime.BoxesRunTime.unboxToInt(BoxesRunTime.java:106)
    at org.apache.spark.sql.catalyst.expressions.GenericRow.getInt(Row.scala:145)
    at org.apache.spark.examples.sql.hive.HiveFromSpark$.main(HiveFromSpark.scala:45)
    at org.apache.spark.examples.sql.hive.HiveFromSpark.main(HiveFromSpark.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:303)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:55)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
```
Author: Cheng Hao <hao.cheng@intel.com> Closes #1475 from chenghao-intel/hive_from_spark and squashes the following commits: d4c0500 [Cheng Hao] Fix the bug of ClassCastException (cherry picked from commit 29809a6d58bfe3700350ce1988ff7083881c4382) Signed-off-by: Reynold Xin <rxin@apache.org> 18 July 2014, 06:25:13 UTC
26c428a [SPARK-2534] Avoid pulling in the entire RDD in various operators (branch-1.0 backport) This backports #1450 into branch-1.0. Author: Reynold Xin <rxin@apache.org> Closes #1469 from rxin/closure-1.0 and squashes the following commits: b474a92 [Reynold Xin] [SPARK-2534] Avoid pulling in the entire RDD in various operators 17 July 2014, 23:33:30 UTC
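
The pattern behind this closure-related fix, sketched with invented names: copy the fields a closure needs into local vals so the closure stops capturing `this`, which for an RDD operator would drag the whole RDD into the serialized task.
```scala
import org.apache.spark.rdd.RDD

class Scaler(val factor: Int) extends Serializable {
  // Bad: rdd.map(_ * factor) references the field, so the closure captures
  // `this` (the whole Scaler). Copying to a local val keeps the closure small.
  def scale(rdd: RDD[Int]): RDD[Int] = {
    val f = factor
    rdd.map(_ * f)
  }
}
```
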
3bb5d2f [SPARK-2412] CoalescedRDD throws exception with certain pref locs If the first pass of CoalescedRDD does not find the target number of locations AND the second pass finds new locations, an exception is thrown, as "groupHash.get(nxt_replica).get" is not valid. The fix is just to add an ArrayBuffer to groupHash for that replica if it didn't already exist. Author: Aaron Davidson <aaron@databricks.com> Closes #1337 from aarondav/2412 and squashes the following commits: f587b5d [Aaron Davidson] getOrElseUpdate 3ad8a3c [Aaron Davidson] [SPARK-2412] CoalescedRDD throws exception with certain pref locs (cherry picked from commit 7c23c0dc3ed721c95690fc49f435d9de6952523c) Signed-off-by: Patrick Wendell <pwendell@gmail.com> 17 July 2014, 08:01:25 UTC
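
The shape of the fix, per the `getOrElseUpdate` squash commit (simplified, with assumed names rather than CoalescedRDD's actual fields):
```scala
import scala.collection.mutable

val groupHash = mutable.Map.empty[String, mutable.ArrayBuffer[Int]]

// Before: groupHash.get(nxtReplica).get += group  -- throws if the key is absent.
// After: create the buffer on first use, then append.
def addToGroup(nxtReplica: String, group: Int): Unit =
  groupHash.getOrElseUpdate(nxtReplica, mutable.ArrayBuffer.empty[Int]) += group
```
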
0b0b895 [SPARK-2154] Schedule next Driver when one completes (standalone mode) Author: Aaron Davidson <aaron@databricks.com> Closes #1405 from aarondav/2154 and squashes the following commits: 24e9ef9 [Aaron Davidson] [SPARK-2154] Schedule next Driver when one completes (standalone mode) (cherry picked from commit 9c249743eaabe5fc8d961c7aa581cc0197f6e950) Signed-off-by: Patrick Wendell <pwendell@gmail.com> 16 July 2014, 21:16:57 UTC
91e7a71 SPARK-1097: Do not introduce deadlock while fixing concurrency bug We recently added this lock on 'conf' in order to prevent concurrent creation. However, it turns out that this can introduce a deadlock because Hadoop also synchronizes on the Configuration objects when creating new Configurations (and they do so via a static REGISTRY which contains all created Configurations). This fix forces all Spark initialization of Configuration objects to occur serially by using a static lock that we control, and thus also prevents introducing the deadlock. Author: Aaron Davidson <aaron@databricks.com> Closes #1409 from aarondav/1054 and squashes the following commits: 7d1b769 [Aaron Davidson] SPARK-1097: Do not introduce deadlock while fixing concurrency bug (cherry picked from commit 8867cd0bc2961fefed84901b8b14e9676ae6ab18) Signed-off-by: Patrick Wendell <pwendell@gmail.com> 16 July 2014, 21:10:33 UTC
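
A sketch of the locking strategy described above, assuming a lock object that Spark controls (simplified from the actual change):
```scala
import org.apache.hadoop.conf.Configuration

// Hadoop synchronizes on Configuration objects via a static REGISTRY, so locking
// on a Configuration itself can deadlock. Serializing Spark's constructions on a
// static lock we own avoids both the concurrency bug and the deadlock.
object HadoopConfLock

def newSparkConfiguration(): Configuration = HadoopConfLock.synchronized {
  new Configuration()
}
```
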
bf1ddc7 [SPARK-2518][SQL] Fix foldability of Substring expression. This is a follow-up of #1428. Author: Takuya UESHIN <ueshin@happy-camper.st> Closes #1432 from ueshin/issues/SPARK-2518 and squashes the following commits: 37d1ace [Takuya UESHIN] Fix foldability of Substring expression. (cherry picked from commit cc965eea510397642830acb21f61127b68c098d6) Signed-off-by: Reynold Xin <rxin@apache.org> 16 July 2014, 18:13:48 UTC
fb38b9c [SPARK-2525][SQL] Remove as many compilation warning messages as possible in Spark SQL JIRA: https://issues.apache.org/jira/browse/SPARK-2525. Author: Yin Huai <huai@cse.ohio-state.edu> Closes #1444 from yhuai/SPARK-2517 and squashes the following commits: edbac3f [Yin Huai] Removed some compiler type erasure warnings. (cherry picked from commit df95d82da7c76c074fd4064f7c870d55d99e0d8e) Signed-off-by: Reynold Xin <rxin@apache.org> 16 July 2014, 17:54:41 UTC
e61149d [SPARK-2504][SQL] Fix nullability of Substring expression. This is a follow-up of #1359 with nullability narrowing. Author: Takuya UESHIN <ueshin@happy-camper.st> Closes #1426 from ueshin/issues/SPARK-2504 and squashes the following commits: 5157832 [Takuya UESHIN] Remove unnecessary white spaces. 80958ac [Takuya UESHIN] Fix nullability of Substring expression. (cherry picked from commit 632fb3d9a9ebb3d2218385403145d5b89c41c025) Signed-off-by: Reynold Xin <rxin@apache.org> 16 July 2014, 05:43:59 UTC
16c8d56 [SPARK-2509][SQL] Add optimization for Substring. `Substring` including `null` literal cases could be added to `NullPropagation`. Author: Takuya UESHIN <ueshin@happy-camper.st> Closes #1428 from ueshin/issues/SPARK-2509 and squashes the following commits: d9eb85f [Takuya UESHIN] Add Substring cases to NullPropagation. (cherry picked from commit 9b38b7c71352bb5e6d359515111ad9ca33299127) Signed-off-by: Reynold Xin <rxin@apache.org> 16 July 2014, 05:35:43 UTC
96fdc7c [SPARK-2314][SQL] Override collect and take in JavaSchemaRDD, forwarding to SchemaRDD implementations. Author: Aaron Staple <aaron.staple@gmail.com> Closes #1421 from staple/SPARK-2314 and squashes the following commits: 73e04dc [Aaron Staple] [SPARK-2314] Override collect and take in JavaSchemaRDD, forwarding to SchemaRDD implementations. (cherry picked from commit 90ca532a0fd95dc85cff8c5722d371e8368b2687) Signed-off-by: Reynold Xin <rxin@apache.org> 16 July 2014, 04:35:48 UTC
9c5f096 [SPARK-2498] [SQL] Synchronize on a lock when using scala reflection inside data type objects. JIRA ticket: https://issues.apache.org/jira/browse/SPARK-2498 Author: Zongheng Yang <zongheng.y@gmail.com> Closes #1423 from concretevitamin/scala-ref-catalyst and squashes the following commits: 325a149 [Zongheng Yang] Synchronize on a lock when initializing data type objects in Catalyst. (cherry picked from commit c2048a5165b270f5baf2003fdfef7bc6c5875715) Signed-off-by: Michael Armbrust <michael@databricks.com> 16 July 2014, 00:58:39 UTC
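
The underlying issue is that Scala 2.10's runtime reflection mirrors are not thread-safe, so initialization is serialized on a dedicated lock. A sketch with illustrative names:
```scala
import scala.reflect.runtime.universe._

object ReflectionLock

// Guard reflective type inspection with a lock we own, since concurrent use of
// the 2.10 runtime mirror can corrupt its internal state.
def typeOfSafely[T: TypeTag]: Type = ReflectionLock.synchronized { typeOf[T] }
```
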
8da0fd8 [SQL] Attribute equality comparisons should be done by exprId. Author: Michael Armbrust <michael@databricks.com> Closes #1414 from marmbrus/exprIdResolution and squashes the following commits: 97b47bc [Michael Armbrust] Attribute equality comparisons should be done by exprId. (cherry picked from commit 502f90782ad474e2630ed5be4d3c4be7dab09c34) Signed-off-by: Michael Armbrust <michael@databricks.com> 16 July 2014, 00:56:42 UTC
2db77e9 SPARK-2407: Added internal implementation of SQL SUBSTR() This replaces the Hive UDF for SUBSTR(ING) with an implementation in Catalyst and adds tests to verify correct operation. Author: William Benton <willb@redhat.com> Closes #1359 from willb/internalSqlSubstring and squashes the following commits: ccedc47 [William Benton] Fixed too-long line. a30a037 [William Benton] replace view bounds with implicit parameters ec35c80 [William Benton] Adds fixes from review: 4f3bfdb [William Benton] Added internal implementation of SQL SUBSTR() (cherry picked from commit 61de65bc69f9a5fc396b76713193c6415436d452) Signed-off-by: Michael Armbrust <michael@databricks.com> 15 July 2014, 21:12:15 UTC
f2bf651 [SQL] Whitelist more Hive tests. Author: Michael Armbrust <michael@databricks.com> Closes #1396 from marmbrus/moreTests and squashes the following commits: 6660b60 [Michael Armbrust] Blacklist a test that requires DFS command. 8b6001c [Michael Armbrust] Add golden files. ccd8f97 [Michael Armbrust] Whitelist more tests. (cherry picked from commit bcd0c30c7eea4c50301cb732c733fdf4d4142060) Signed-off-by: Michael Armbrust <michael@databricks.com> 15 July 2014, 21:04:19 UTC
3aa120c [SPARK-2483][SQL] Fix parsing of repeated, nested data access. Author: Michael Armbrust <michael@databricks.com> Closes #1411 from marmbrus/nestedRepeated and squashes the following commits: 044fa09 [Michael Armbrust] Fix parsing of repeated, nested data access. (cherry picked from commit 0f98ef1a2c9ecf328f6c5918808fa5ca486e8afd) Signed-off-by: Michael Armbrust <michael@databricks.com> 15 July 2014, 21:02:14 UTC
53a6399 [SPARK-2485][SQL] Lock usage of hive client. Author: Michael Armbrust <michael@databricks.com> Closes #1412 from marmbrus/lockHiveClient and squashes the following commits: 4bc9d5a [Michael Armbrust] protected[hive] 22e9177 [Michael Armbrust] Add comments. 7aa8554 [Michael Armbrust] Don't lock on hive's object. a6edc5f [Michael Armbrust] Lock usage of hive client. (cherry picked from commit c7c7ac83392b10abb011e6aead1bf92e7c73695e) Signed-off-by: Aaron Davidson <aaron@databricks.com> 15 July 2014, 07:14:07 UTC
0e27279 Add/increase severity of warning in documentation of groupBy() groupBy()/groupByKey() is notorious for being a very convenient API that can lead to poor performance when used incorrectly. This PR just makes it clear that users should be cautious not to rely on this API when they really want a different (more performant) one, such as reduceByKey(). (Note that one source of confusion is the name; this groupBy() is not the same as a SQL GROUP-BY, which is used for aggregation and is more similar in nature to Spark's reduceByKey().) Author: Aaron Davidson <aaron@databricks.com> Closes #1380 from aarondav/warning and squashes the following commits: f60da39 [Aaron Davidson] Give better advice d0afb68 [Aaron Davidson] Add/increase severity of warning in documentation of groupBy() (cherry picked from commit a2aa7bebae31e1e7ec23d31aaa436283743b283b) Signed-off-by: Patrick Wendell <pwendell@gmail.com> 15 July 2014, 06:38:24 UTC
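
The advice in concrete form, assuming a SparkContext `sc` as in the shell (older Spark versions also need the `SparkContext._` implicits, which the shell imports):
```scala
val pairs = sc.parallelize(Seq(("a", 1), ("b", 1), ("a", 1)))

// groupByKey ships every value across the network before summing.
val viaGroup = pairs.groupByKey().mapValues(_.sum)

// reduceByKey combines values within each partition first, so far less data
// is shuffled -- the more performant choice the warning points to.
val viaReduce = pairs.reduceByKey(_ + _)
```
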
2ec7d7a [SPARK-2443][SQL] Fix slow read from partitioned tables This fix obtains a performance boost comparable to [PR #1390](https://github.com/apache/spark/pull/1390) by moving an array update and deserializer initialization out of a potentially very long loop. Suggested by yhuai. The below results are updated for this fix.

## Benchmarks

Generated a local text file with 10M rows of simple key-value pairs. The data is loaded as a table through Hive. Results are obtained on my local machine using hive/console.

Without the fix:

| Type | Non-partitioned | Partitioned (1 part) |
| ------------ | ------------ | ------------- |
| First run | 9.52s end-to-end (1.64s Spark job) | 36.6s (28.3s) |
| Stabilized runs | 1.21s (1.18s) | 27.6s (27.5s) |

With this fix:

| Type | Non-partitioned | Partitioned (1 part) |
| ------------ | ------------ | ------------- |
| First run | 9.57s (1.46s) | 11.0s (1.69s) |
| Stabilized runs | 1.13s (1.10s) | 1.23s (1.19s) |

Author: Zongheng Yang <zongheng.y@gmail.com> Closes #1408 from concretevitamin/slow-read-2 and squashes the following commits: d86e437 [Zongheng Yang] Move update & initialization out of potentially long loop. (cherry picked from commit d60b09bb60cff106fa0acddebf35714503b20f03) Signed-off-by: Michael Armbrust <michael@databricks.com> 14 July 2014, 20:22:39 UTC
baf92a0 [maven-release-plugin] prepare for next development iteration 14 July 2014, 07:46:37 UTC
70ee14f [maven-release-plugin] prepare release v1.0.1-rc3 14 July 2014, 07:46:30 UTC
effa69f [SPARK-2405][SQL] Reuse the same byte buffers when creating a new instance of InMemoryRelation Reuse byte buffers when creating unique attributes for multiple instances of an InMemoryRelation in a single query plan. Author: Michael Armbrust <michael@databricks.com> Closes #1332 from marmbrus/doubleCache and squashes the following commits: 4a19609 [Michael Armbrust] Clean up concurrency story by calculating buffers in the constructor. b39c931 [Michael Armbrust] Allocations are kind of a side effect. f67eff7 [Michael Armbrust] Reuse the same byte buffers when creating a new instance of InMemoryRelation (cherry picked from commit 1a7d7cc85fb24de21f1cde67d04467171b82e845) Signed-off-by: Reynold Xin <rxin@apache.org> 12 July 2014, 19:13:44 UTC
37e4943 [SPARK-2441][SQL] Add more efficient distinct operator. Author: Michael Armbrust <michael@databricks.com> Closes #1366 from marmbrus/partialDistinct and squashes the following commits: 12a31ab [Michael Armbrust] Add more efficient distinct operator. (cherry picked from commit 7e26b57615f6c1d3f9058f9c19c05ec91f017f4c) Signed-off-by: Reynold Xin <rxin@apache.org> 12 July 2014, 19:08:35 UTC
354ce4d [SPARK-2455] Mark (Shippable)VertexPartition serializable VertexPartition and ShippableVertexPartition are contained in RDDs but are not marked Serializable, leading to NotSerializableExceptions when using Java serialization. The fix is simply to mark them as Serializable. This PR does that and adds a test for serializing them using Java and Kryo serialization. Author: Ankur Dave <ankurdave@gmail.com> Closes #1376 from ankurdave/SPARK-2455 and squashes the following commits: ed4a51b [Ankur Dave] Make (Shippable)VertexPartition serializable 1fd42c5 [Ankur Dave] Add failing tests for Java serialization (cherry picked from commit 7a0135293192aaefc6ae20b57e15a90945bd8a4e) Signed-off-by: Reynold Xin <rxin@apache.org> 12 July 2014, 19:05:48 UTC
2a5514f Updating versions for branch-1.0 12 July 2014, 15:31:24 UTC
ea434cf HOTFIX: Updating Python doc version 12 July 2014, 15:27:39 UTC
390ae9b [SPARK-2415] [SQL] RowWriteSupport should handle empty ArrayType correctly. `RowWriteSupport` doesn't write empty `ArrayType` values, so the value read back becomes `null`. It should write empty `ArrayType` values as they are. Author: Takuya UESHIN <ueshin@happy-camper.st> Closes #1339 from ueshin/issues/SPARK-2415 and squashes the following commits: 32afc87 [Takuya UESHIN] Merge branch 'master' into issues/SPARK-2415 2f05196 [Takuya UESHIN] Fix RowWriteSupport to handle empty ArrayType correctly. (cherry picked from commit f5abd271292f5c98eb8b1974c1df31d08ed388dd) Signed-off-by: Michael Armbrust <michael@databricks.com> 11 July 2014, 02:24:16 UTC
1fef57b [SPARK-2431][SQL] Refine StringComparison and related code. Refine `StringComparison` and related code as follows:
- `StringComparison` could be similar to `StringRegexExpression` or `CaseConversionExpression`.
- Nullability of `StringRegexExpression` could depend on children's nullabilities.
- Add a case that the like condition includes no wildcard to `LikeSimplification`.

Author: Takuya UESHIN <ueshin@happy-camper.st> Closes #1357 from ueshin/issues/SPARK-2431 and squashes the following commits: 77766f5 [Takuya UESHIN] Add a case that the like condition includes no wildcard to LikeSimplification. b9da9d2 [Takuya UESHIN] Fix nullability of StringRegexExpression. 680bb72 [Takuya UESHIN] Refine StringComparison. (cherry picked from commit f62c42728990266d5d5099abe241f699189ba025) Signed-off-by: Michael Armbrust <michael@databricks.com> 11 July 2014, 02:20:13 UTC
5bfd319 SPARK-2427: Fix Scala examples that use the wrong command line arguments index The Scala examples HBaseTest and HdfsTest don't use the correct indexes for the command line arguments. This is due to the fix of JIRA 1565, where these examples were not correctly adapted to the new usage of the submit script. Author: Artjom-Metro <Artjom-Metro@users.noreply.github.com> Author: Artjom-Metro <artjom31415@googlemail.com> Closes #1353 from Artjom-Metro/fix_examples and squashes the following commits: 6111801 [Artjom-Metro] Reduce the default number of iterations cfaa73c [Artjom-Metro] Fix some examples that use the wrong index to access the command line arguments (cherry picked from commit ae8ca4dfbacd5a5197fb41722607ad99c190f768) Signed-off-by: Reynold Xin <rxin@apache.org> 10 July 2014, 23:03:38 UTC
ca19cfb [SPARK-1341] [Streaming] Throttle BlockGenerator to limit rate of data consumption. Author: Issac Buenrostro <buenrostro@ooyala.com> Closes #945 from ibuenros/SPARK-1341-throttle and squashes the following commits: 5514916 [Issac Buenrostro] Formatting changes, added documentation for streaming throttling, stricter unit tests for throttling. 62f395f [Issac Buenrostro] Add comments and license to streaming RateLimiter.scala 7066438 [Issac Buenrostro] Moved throttle code to RateLimiter class, smoother pushing when throttling active ccafe09 [Issac Buenrostro] Throttle BlockGenerator to limit rate of data consumption. (cherry picked from commit 2dd67248503306bb08946b1796821e9f9ed4d00e) Signed-off-by: Tathagata Das <tathagata.das1565@gmail.com> 10 July 2014, 23:01:28 UTC
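
A minimal sketch of the throttling idea (not Spark's actual RateLimiter API; the one-second windowing is deliberately simplistic): block the producing thread once it exceeds the configured rate.
```scala
class SimpleRateLimiter(maxPerSecond: Int) {
  private var windowStart = System.currentTimeMillis()
  private var count = 0

  // Called before pushing each record; sleeps the producer when over the limit.
  def waitToPush(): Unit = synchronized {
    count += 1
    if (count > maxPerSecond) {
      val elapsed = System.currentTimeMillis() - windowStart
      if (elapsed < 1000) Thread.sleep(1000 - elapsed)
      windowStart = System.currentTimeMillis()
      count = 0
    }
  }
}
```
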
cb443cf [SPARK-2417][MLlib] Fix DecisionTree tests Fixes test failures introduced by https://github.com/apache/spark/pull/1316. For both the regression and classification cases, val stats is the InformationGainStats for the best tree split. stats.predict is the predicted value for the data, before the split is made. Since 600 of the 1,000 values generated by DecisionTreeSuite.generateCategoricalDataPoints() are 1.0 and the rest 0.0, the regression tree and classification tree both correctly predict a value of 0.6 for this data now, and the assertions have been changed to reflect that. Author: johnnywalleye <jsondag@gmail.com> Closes #1343 from johnnywalleye/decision-tree-tests and squashes the following commits: ef80603 [johnnywalleye] [SPARK-2417][MLlib] Fix DecisionTree tests (cherry picked from commit d35e3db2325931492b64890125a70579bc3b587b) Signed-off-by: Xiangrui Meng <meng@databricks.com> 09 July 2014, 18:06:48 UTC
21fae6d [STREAMING] SPARK-2343: Fix QueueInputDStream with oneAtATime false Fix QueueInputDStream which was not removing dequeued items when used with the oneAtATime flag disabled. Author: Manuel Laflamme <manuel.laflamme@gmail.com> Closes #1285 from mlaflamm/spark-2343 and squashes the following commits: 61c9e38 [Manuel Laflamme] Unit tests for queue input stream c51d029 [Manuel Laflamme] Fix QueueInputDStream with oneAtATime false (cherry picked from commit 0eb11527d13083ced215e3fda44ed849198a57cb) Signed-off-by: Tathagata Das <tathagata.das1565@gmail.com> 09 July 2014, 17:46:12 UTC
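
The corrected dequeue semantics, as a sketch (an assumed helper, not the actual DStream code): with `oneAtATime` disabled, every queued batch must be removed from the queue, not merely read.
```scala
import scala.collection.mutable

def nextBatches[T](queue: mutable.Queue[T], oneAtATime: Boolean): scala.collection.Seq[T] =
  if (oneAtATime && queue.nonEmpty) Seq(queue.dequeue())
  else queue.dequeueAll(_ => true) // drains AND removes; the bug left items behind
```
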
d569838 [SPARK-2152][MLlib] fix bin offset in DecisionTree node aggregations (also resolves SPARK-2160) Hi, this pull request fixes (what I believe to be) a bug in DecisionTree.scala. In the extractLeftRightNodeAggregates function, the first set of rightNodeAgg values for Regression are set in line 792 as follows:
```scala
rightNodeAgg(featureIndex)(2 * (numBins - 2)) = binData(shift + (2 * numBins - 1))
```
Then there is a loop that sets the rest of the values, as in line 809:
```scala
rightNodeAgg(featureIndex)(2 * (numBins - 2 - splitIndex)) =
  binData(shift + (2 * (numBins - 2 - splitIndex))) +
  rightNodeAgg(featureIndex)(2 * (numBins - 1 - splitIndex))
```
But since splitIndex starts at 1, this ends up skipping a set of binData values. The changes here address this issue, for both the Regression and Classification cases. Author: johnnywalleye <jsondag@gmail.com> Closes #1316 from johnnywalleye/master and squashes the following commits: 73809da [johnnywalleye] fix bin offset in DecisionTree node aggregations (cherry picked from commit 1114207cc8e4ef94cb97bbd5a2ef3ae4d51f73fa) Signed-off-by: Xiangrui Meng <meng@databricks.com> 09 July 2014, 02:17:43 UTC
8854891 [SPARK-2362] Fix for newFilesOnly logic in file DStream The newFilesOnly logic should be inverted: the logic should be that if the flag newFilesOnly==true then only start reading files newer than the current time. As the code is now, if newFilesOnly==true then it will start to read files that are newer than 0L (that is: every file in the directory). Author: Gabriele Nizzoli <mail@nizzoli.net> Closes #1077 from gabrielenizzoli/master and squashes the following commits: 4f1d261 [Gabriele Nizzoli] Fix for newFilesOnly logic in file DStream (cherry picked from commit e6f7bfcfbf6aff7a9f8cd8e0a2166d0bf62b0912) Signed-off-by: Tathagata Das <tathagata.das1565@gmail.com> 08 July 2014, 21:24:50 UTC
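
The fix's logic, reduced to a sketch with assumed names:
```scala
// With newFilesOnly == true the stream should ignore anything modified before
// it started; the bug left the threshold at 0L, admitting every file.
def modTimeThreshold(newFilesOnly: Boolean, startTimeMs: Long): Long =
  if (newFilesOnly) startTimeMs else 0L
```
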
1c12b0b [SPARK-2409] Make SQLConf thread safe. Author: Reynold Xin <rxin@apache.org> Closes #1334 from rxin/sqlConfThreadSafetuy and squashes the following commits: c1e0a5a [Reynold Xin] Fixed the duplicate comment. 7614372 [Reynold Xin] [SPARK-2409] Make SQLConf thread safe. (cherry picked from commit 32516f866a32d51bfaa04685ae77ba216b4202d9) Signed-off-by: Reynold Xin <rxin@apache.org> 08 July 2014, 21:01:14 UTC
3bd32f0 [SPARK-2403] Catch all errors during serialization in DAGScheduler https://issues.apache.org/jira/browse/SPARK-2403 Spark hangs for us whenever we forget to register a class with Kryo. This should be a simple fix for that. But let me know if you have a better suggestion. I did not write a new test for this. It would be pretty complicated and I'm not sure it's worthwhile for such a simple change. Let me know if you disagree. Author: Daniel Darabos <darabos.daniel@gmail.com> Closes #1329 from darabos/spark-2403 and squashes the following commits: 3aceaad [Daniel Darabos] Print full stack trace for miscellaneous exceptions during serialization. 52c22ba [Daniel Darabos] Only catch NonFatal exceptions. 361e962 [Daniel Darabos] Catch all errors during serialization in DAGScheduler. (cherry picked from commit c8a2313cdf825e0191680a423d17619b5504ff89) Signed-off-by: Aaron Davidson <aaron@databricks.com> 08 July 2014, 17:44:02 UTC
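
The catch pattern from the squash commits ("Only catch NonFatal exceptions"), sketched around a plain Java serialization step standing in for the DAGScheduler's work:
```scala
import scala.util.control.NonFatal

def trySerialize(task: AnyRef): Either[Throwable, Array[Byte]] =
  try {
    val out = new java.io.ByteArrayOutputStream()
    val oos = new java.io.ObjectOutputStream(out)
    oos.writeObject(task) // fails e.g. if a referenced class isn't serializable
    oos.close()
    Right(out.toByteArray)
  } catch {
    // NonFatal skips VM errors and the like; the caller fails the stage with a
    // clear message instead of hanging.
    case NonFatal(e) => Left(e)
  }
```
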
4bf8dda [SPARK-2395][SQL] Optimize common LIKE patterns. Author: Michael Armbrust <michael@databricks.com> Closes #1325 from marmbrus/slowLike and squashes the following commits: 023c3eb [Michael Armbrust] add comment. 8b421c2 [Michael Armbrust] Handle the case where the final % is actually escaped. d34d37e [Michael Armbrust] add periods. 3bbf35f [Michael Armbrust] Roll back changes to SparkBuild 53894b1 [Michael Armbrust] Fix grammar. 4094462 [Michael Armbrust] Fix grammar. 6d3d0a0 [Michael Armbrust] Optimize common LIKE patterns. (cherry picked from commit cc3e0a14daf756ff5c2d4e7916438e175046e5bb) Signed-off-by: Michael Armbrust <michael@databricks.com> 08 July 2014, 17:38:30 UTC
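
The flavor of the optimization, as a simplified sketch that ignores escaping (the real rule also handles the escaped trailing `%` case mentioned in the squash commits):
```scala
// Common LIKE shapes compile to cheap String operations instead of a regex.
def simpleLike(value: String, pattern: String): Boolean = pattern match {
  case p if !p.contains("%")                           => value == p
  case p if p.endsWith("%") && !p.init.contains("%")   => value.startsWith(p.init)
  case p if p.startsWith("%") && !p.tail.contains("%") => value.endsWith(p.tail)
  case p => value.matches(p.replace("%", ".*")) // general fallback: a regex
}
```
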
3e95225 [EC2] Add default history server port to ec2 script Right now I have to open it manually Author: Andrew Or <andrewor14@gmail.com> Closes #1296 from andrewor14/hist-serv-port and squashes the following commits: 8895a1f [Andrew Or] Add default history server port to ec2 script (cherry picked from commit 56e009d4f05d990c60e109838fa70457f97f44aa) Conflicts: ec2/spark_ec2.py 08 July 2014, 07:54:27 UTC
faa0e9f [SPARK-2391][SQL] Custom take() for LIMIT queries. Using Spark's take can result in an entire in-memory partition being shipped in order to retrieve a single row. Author: Michael Armbrust <michael@databricks.com> Closes #1318 from marmbrus/takeLimit and squashes the following commits: 77289a5 [Michael Armbrust] Update scala doc 32f0674 [Michael Armbrust] Custom take implementation for LIMIT queries. (cherry picked from commit 5a4063645dd7bb4cd8bda890785235729804ab09) Signed-off-by: Reynold Xin <rxin@apache.org> 08 July 2014, 07:41:55 UTC
23b01a3 Resolve sbt warnings during build Ⅱ Author: witgo <witgo@qq.com> Closes #1153 from witgo/expectResult and squashes the following commits: 97541d8 [witgo] merge master ead26e7 [witgo] Resolve sbt warnings during build (cherry picked from commit 3cd5029be709307415f911236472a685e406e763) Signed-off-by: Reynold Xin <rxin@apache.org> 08 July 2014, 07:31:58 UTC
9dce7be [SPARK-2376][SQL] Selecting list values inside nested JSON objects raises java.lang.IllegalArgumentException JIRA: https://issues.apache.org/jira/browse/SPARK-2376 Author: Yin Huai <huai@cse.ohio-state.edu> Closes #1320 from yhuai/SPARK-2376 and squashes the following commits: 0107417 [Yin Huai] Merge remote-tracking branch 'upstream/master' into SPARK-2376 480803d [Yin Huai] Correctly handling JSON arrays in PySpark. (cherry picked from commit 4352a2fdaa64efee7158eabef65703460ff284ec) Signed-off-by: Michael Armbrust <michael@databricks.com> 08 July 2014, 01:52:51 UTC
1032c28 [SPARK-2375][SQL] JSON schema inference may not resolve type conflicts correctly for a field inside an array of structs For example, for
```
{"array": [{"field":214748364700}, {"field":1}]}
```
the type of field is resolved as IntType. While, for
```
{"array": [{"field":1}, {"field":214748364700}]}
```
the type of field is resolved as LongType. JIRA: https://issues.apache.org/jira/browse/SPARK-2375 Author: Yin Huai <huaiyin.thu@gmail.com> Closes #1308 from yhuai/SPARK-2375 and squashes the following commits: 3e2e312 [Yin Huai] Update unit test. 1b2ff9f [Yin Huai] Merge remote-tracking branch 'upstream/master' into SPARK-2375 10794eb [Yin Huai] Correctly resolve the type of a field inside an array of structs. (cherry picked from commit f0496ee10847db921a028a34f70385f9b740b3f3) Signed-off-by: Michael Armbrust <michael@databricks.com> 08 July 2014, 00:06:10 UTC
691b554 [SPARK-2386] [SQL] RowWriteSupport should use the exact types to cast. When executing `saveAsParquetFile` with a non-primitive type, `RowWriteSupport` uses the wrong type `Int` for `ByteType` and `ShortType`. Author: Takuya UESHIN <ueshin@happy-camper.st> Closes #1315 from ueshin/issues/SPARK-2386 and squashes the following commits: 20d89ec [Takuya UESHIN] Use None instead of null. bd88741 [Takuya UESHIN] Add a test. 323d1d2 [Takuya UESHIN] Modify RowWriteSupport to use the exact types to cast. (cherry picked from commit 4deeed17c4847f212a4fa1a8685cfe8a12179263) Signed-off-by: Michael Armbrust <michael@databricks.com> 08 July 2014, 00:04:11 UTC
e522971 [SPARK-2339][SQL] SQL parser in sql-core is case sensitive, but a table alias is converted to lower case when we create Subquery Reported by http://apache-spark-user-list.1001560.n3.nabble.com/Spark-SQL-Join-throws-exception-td8599.html After we get the table from the catalog, because the table has an alias, we will temporarily insert a Subquery. Then, we convert the table alias to lower case no matter whether the parser is case sensitive or not. To see the issue ...
```scala
val sqlContext = new org.apache.spark.sql.SQLContext(sc)
import sqlContext.createSchemaRDD
case class Person(name: String, age: Int)
val people = sc.textFile("examples/src/main/resources/people.txt").map(_.split(",")).map(p => Person(p(0), p(1).trim.toInt))
people.registerAsTable("people")
sqlContext.sql("select PEOPLE.name from people PEOPLE")
```
The plan is ...
```
== Query Plan ==
Project ['PEOPLE.name]
 ExistingRdd [name#0,age#1], MapPartitionsRDD[4] at mapPartitions at basicOperators.scala:176
```
You can find that `PEOPLE.name` is not resolved. This PR introduces three changes. 1. If a table has an alias, the catalog will not lowercase the alias. If a lowercase alias is needed, the analyzer will do the work. 2. A catalog has a new val caseSensitive that indicates if this catalog is case sensitive or not. For example, a SimpleCatalog is case sensitive, but 3. Corresponding unit tests. With this PR, case sensitivity of database names and table names is handled by the catalog. Case sensitivity of other identifiers is handled by the analyzer. JIRA: https://issues.apache.org/jira/browse/SPARK-2339 Author: Yin Huai <huai@cse.ohio-state.edu> Closes #1317 from yhuai/SPARK-2339 and squashes the following commits: 12d8006 [Yin Huai] Handling case sensitivity correctly. This patch introduces three changes. 1. If a table has an alias, the catalog will not lowercase the alias. If a lowercase alias is needed, the analyzer will do the work. 2. A catalog has a new val caseSensitive that indicates if this catalog is case sensitive or not. For example, a SimpleCatalog is case sensitive, but 3. Corresponding unit tests. With this patch, case sensitivity of database names and table names is handled by the catalog. Case sensitivity of other identifiers is handled by the analyzer. (cherry picked from commit c0b4cf097de50eb2c4b0f0e67da53ee92efc1f77) Signed-off-by: Michael Armbrust <michael@databricks.com> 08 July 2014, 00:01:59 UTC
5044ba6 [SPARK-1977][MLLIB] register mutable BitSet in MovieLenseALS Author: Neville Li <neville@spotify.com> Closes #1319 from nevillelyh/gh/SPARK-1977 and squashes the following commits: 1f0a355 [Neville Li] [SPARK-1977][MLLIB] register mutable BitSet in MovieLenseALS (cherry picked from commit f7ce1b3b48f0354434456241188c6a5d954852e2) Signed-off-by: Xiangrui Meng <meng@databricks.com> 07 July 2014, 22:08:10 UTC
b459aa7 [SPARK-2327] [SQL] Fix nullabilities of Join/Generate/Aggregate. Fix nullabilities of `Join`/`Generate`/`Aggregate` because:
- Output attributes of the opposite side of an `OuterJoin` should be nullable.
- Output attributes of the generator side of `Generate` should be nullable if `join` is `true` and `outer` is `true`.
- `AttributeReference` of `computedAggregates` of `Aggregate` should be the same as `aggregateExpression`'s.

Author: Takuya UESHIN <ueshin@happy-camper.st> Closes #1266 from ueshin/issues/SPARK-2327 and squashes the following commits: 3ace83a [Takuya UESHIN] Add withNullability to Attribute and use it to change nullabilities. df1ae53 [Takuya UESHIN] Modify nullabilize to leave attribute if not resolved. 799ce56 [Takuya UESHIN] Add nullabilization to Generate of SparkPlan. a0fc9bc [Takuya UESHIN] Fix scalastyle errors. 0e31e37 [Takuya UESHIN] Fix Aggregate resultAttribute nullabilities. 09532ec [Takuya UESHIN] Fix Generate output nullabilities. f20f196 [Takuya UESHIN] Fix Join output nullabilities. (cherry picked from commit 9d5ecf8205b924dc8a3c13fed68beb78cc5c7553) Signed-off-by: Michael Armbrust <michael@databricks.com> 05 July 2014, 18:52:11 UTC
3aa52be [SPARK-2366] [SQL] Add column pruning for the right side of LeftSemi join. The right side of a `LeftSemi` join needs only the columns used in the join condition. Author: Takuya UESHIN <ueshin@happy-camper.st> Closes #1301 from ueshin/issues/SPARK-2366 and squashes the following commits: 7677a39 [Takuya UESHIN] Update comments. 786d3a0 [Takuya UESHIN] Rename method name. e0957b1 [Takuya UESHIN] Add column pruning for the right side of LeftSemi join. (cherry picked from commit 3da8df939ec63064692ba64d9188aeea908b305c) Signed-off-by: Michael Armbrust <michael@databricks.com> 05 July 2014, 18:48:30 UTC
b77715a [SPARK-2370][SQL] Decrease metadata retrieved for partitioned hive queries. Author: Michael Armbrust <michael@databricks.com> Closes #1305 from marmbrus/usePrunerPartitions and squashes the following commits: 744aa20 [Michael Armbrust] Use getAllPartitionsForPruner instead of getPartitions, which avoids retrieving auth data (cherry picked from commit 9d006c97371ddf357e0b821d5c6d1535d9b6fe41) Signed-off-by: Reynold Xin <rxin@apache.org> 05 July 2014, 02:16:08 UTC
d9b5a8e [maven-release-plugin] prepare for next development iteration 04 July 2014, 17:37:49 UTC
7d1043c [maven-release-plugin] prepare release v1.0.1-rc2 04 July 2014, 17:37:42 UTC
d9cec4e Updating CHANGES.txt file 04 July 2014, 17:14:49 UTC
fb2fe80 HOTFIX: Merge issue with cf1d46e4. The tests in that patch used a newer constructor for TaskInfo. 04 July 2014, 17:09:58 UTC
354a627 [SPARK-2059][SQL] Add analysis checks This replaces #1263 with a test case. Author: Reynold Xin <rxin@apache.org> Author: Michael Armbrust <michael@databricks.com> Closes #1265 from rxin/sql-analysis-error and squashes the following commits: a639e01 [Reynold Xin] Added a test case for unresolved attribute analysis. 7371e1b [Reynold Xin] Merge pull request #1263 from marmbrus/analysisChecks 448c088 [Michael Armbrust] Add analysis checks (cherry picked from commit b3e768e154bd7175db44c3ffc3d8f783f15ab776) Signed-off-by: Reynold Xin <rxin@apache.org> 04 July 2014, 07:54:37 UTC
dc73ee1 Update SQLConf.scala: use concurrent.ConcurrentHashMap instead of util.Collections.synchronizedMap Author: baishuo(白硕) <vc_java@hotmail.com> Closes #1272 from baishuo/master and squashes the following commits: 51ec55d [baishuo(白硕)] Update SQLConf.scala 63da043 [baishuo(白硕)] Update SQLConf.scala 36b6dbd [baishuo(白硕)] Update SQLConf.scala 864faa0 [baishuo(白硕)] Update SQLConf.scala 593096b [baishuo(白硕)] Update SQLConf.scala 7304d9b [baishuo(白硕)] Update SQLConf.scala 843581c [baishuo(白硕)] Update SQLConf.scala 1d3e4a2 [baishuo(白硕)] Update SQLConf.scala 0740f28 [baishuo(白硕)] Update SQLConf.scala (cherry picked from commit 0bbe61223eda3f33bbf8992d2a8f0d47813f4873) Signed-off-by: Reynold Xin <rxin@apache.org> 04 July 2014, 07:25:46 UTC
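
The gist of the change, sketched:
```scala
import java.util.concurrent.ConcurrentHashMap

// A ConcurrentHashMap allows lock-free reads, unlike
// java.util.Collections.synchronizedMap, which serializes every access.
val settings = new ConcurrentHashMap[String, String]()
settings.put("spark.sql.shuffle.partitions", "200")
val partitions = Option(settings.get("spark.sql.shuffle.partitions")).getOrElse("200")
```
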
6e0b7e5 [SPARK-1199][REPL] Remove VALId and use the original import style for defined classes. This is an alternate solution to #1176. Author: Prashant Sharma <prashant.s@imaginea.com> Closes #1179 from ScrapCodes/SPARK-1199/repl-fix-second-approach and squashes the following commits: 820b34b [Prashant Sharma] Here we generate two kinds of import wrappers based on whether it is a class or not. (cherry picked from commit d43415075b3468fe8aa56de5d2907d409bb96347) Signed-off-by: Patrick Wendell <pwendell@gmail.com> 04 July 2014, 07:05:41 UTC
5c43758 [SPARK-2059][SQL] Don't throw TreeNodeException in `execution.ExplainCommand` This is a fix for the problem revealed by PR #1265. Currently `HiveComparisonSuite` ignores the output of `ExplainCommand` since the Catalyst query plan is quite different from the Hive query plan. But exceptions thrown from `CheckResolution` still break test cases. This PR catches any `TreeNodeException` and reports it as part of the query explanation. After merging this PR, PR #1265 can also be merged safely. For a normal query:
```
scala> hql("explain select key from src").foreach(println)
...
[Physical execution plan:]
[HiveTableScan [key#9], (MetastoreRelation default, src, None), None]
```
For a wrong query with unresolved attribute(s):
```
scala> hql("explain select kay from src").foreach(println)
...
[Error occurred during query planning: ]
[Unresolved attributes: 'kay, tree:]
[Project ['kay]]
[ LowerCaseSchema ]
[ MetastoreRelation default, src, None]
```
Author: Cheng Lian <lian.cs.zju@gmail.com> Closes #1294 from liancheng/safe-explain and squashes the following commits: 4318911 [Cheng Lian] Don't throw TreeNodeException in `execution.ExplainCommand` (cherry picked from commit 544880457de556d1ad52e8cb7e1eca19da95f517) Signed-off-by: Reynold Xin <rxin@apache.org> 04 July 2014, 06:42:04 UTC
313f202 SPARK-2282: Reuse PySpark Accumulator sockets to avoid crashing Spark JIRA: https://issues.apache.org/jira/browse/SPARK-2282 This issue is caused by a buildup of sockets in the TIME_WAIT stage of TCP, a stage that lasts for some period of time after the communication closes. This solution simply allows us to reuse sockets that are in TIME_WAIT, avoiding the buildup caused by the rapid creation of these sockets. Author: Aaron Davidson <aaron@databricks.com> Closes #1220 from aarondav/SPARK-2282 and squashes the following commits: 2e5cab3 [Aaron Davidson] SPARK-2282: Reuse PySpark Accumulator sockets to avoid crashing Spark (cherry picked from commit 97a0bfe1c0261384f09d53f9350de52fb6446d59) Signed-off-by: Patrick Wendell <pwendell@gmail.com> 04 July 2014, 06:02:47 UTC
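
The fix itself is in PySpark, but the mechanism is general. In JVM terms it corresponds to enabling address reuse before binding (illustrative, not the patched code):
```scala
import java.net.{InetSocketAddress, ServerSocket}

val server = new ServerSocket()
// SO_REUSEADDR lets the bind succeed even while old sockets on the port sit in
// TCP's TIME_WAIT state, so rapid connect/close cycles don't exhaust ports.
server.setReuseAddress(true)
server.bind(new InetSocketAddress(0))
```
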
cf1d46e [SPARK-2307][Reprise] Correctly report RDD blocks on SparkUI **Problem.** The existing code in `ExecutorPage.scala` requires a linear scan through all the blocks to filter out the uncached ones. Every refresh could be expensive if there are many blocks and many executors. **Solution.** The proper semantics should be the following: `StorageStatusListener` should contain only block statuses that are cached. This means as soon as a block is unpersisted by any means, its status should be removed. This is reflected in the changes made in `StorageStatusListener.scala`. Further, the `StorageTab` must stop relying on the `StorageStatusListener` changing a dropped block's status to `StorageLevel.NONE` (which no longer happens). This is reflected in the changes made in `StorageTab.scala` and `StorageUtils.scala`. ---------- If you have been following this chain of PRs like pwendell, you will quickly notice that this reverts the changes in #1249, which reverts the changes in #1080. In other words, we are adding back the changes from #1080, and fixing SPARK-2307 on top of those changes. Please ask questions if you are confused. Author: Andrew Or <andrewor14@gmail.com> Closes #1255 from andrewor14/storage-ui-fix-reprise and squashes the following commits: 45416fa [Andrew Or] Merge branch 'master' of github.com:apache/spark into storage-ui-fix-reprise a82ea25 [Andrew Or] Add tests for StorageStatusListener 8773b01 [Andrew Or] Update comment / minor changes 3afde3f [Andrew Or] Correctly report the number of blocks on SparkUI (cherry picked from commit 3894a49be9b532cc026d908a0f49bca850504498) Signed-off-by: Patrick Wendell <pwendell@gmail.com> 04 July 2014, 05:48:33 UTC
27a2afe [SPARK-2350] Don't NPE while launching drivers Prior to this change, we could throw a NPE if we launch a driver while another one is waiting, because removing from an iterator while iterating over it is not safe. Author: Aaron Davidson <aaron@databricks.com> Closes #1289 from aarondav/master-fail and squashes the following commits: 1cf1cf4 [Aaron Davidson] SPARK-2350: Don't NPE while launching drivers (cherry picked from commit 586feb5c9528042420f678f78bacb6c254a5eaf8) Signed-off-by: Patrick Wendell <pwendell@gmail.com> 04 July 2014, 05:32:05 UTC
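
The bug class in miniature, with invented names: removing from a collection while iterating over it is unsafe, so iterate over a snapshot and mutate the original.
```scala
import scala.collection.mutable

val waitingDrivers = mutable.ArrayBuffer("driver-1", "driver-2", "driver-3")
def canLaunch(d: String): Boolean = d.endsWith("2") // stand-in predicate

// Unsafe: waitingDrivers.foreach(d => if (canLaunch(d)) waitingDrivers -= d)
// Safe: snapshot with toList, then remove from the live buffer.
for (d <- waitingDrivers.toList if canLaunch(d)) waitingDrivers -= d
```
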
d2f2534 [SPARK-1097] Workaround Hadoop conf ConcurrentModification issue Author: Raymond Liu <raymond.liu@intel.com> Closes #1273 from colorant/hadoopRDD and squashes the following commits: 994e98b [Raymond Liu] Address comments e2cda3d [Raymond Liu] Workaround Hadoop conf ConcurrentModification issue (cherry picked from commit 5fa0a05763ab1d527efe20e3b10539ac5ffc36de) Signed-off-by: Aaron Davidson <aaron@databricks.com> 04 July 2014, 02:24:37 UTC
ff6ec25 Streaming programming guide typos Fix a bad Java code sample and a broken link in the streaming programming guide. Author: Clément MATHIEU <clement@unportant.info> Closes #1286 from cykl/streaming-programming-guide-typos and squashes the following commits: b0908cb [Clément MATHIEU] Fix broken URL 9d3c535 [Clément MATHIEU] Spark streaming requires at least two working threads (scala version was OK) (cherry picked from commit fdc4c112e7c2ac585d108d03209a642aa8bab7c8) Signed-off-by: Tathagata Das <tathagata.das1565@gmail.com> 04 July 2014, 01:35:09 UTC
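
The point the fixed guide makes, shown on the Scala side: a local streaming app needs at least two threads, one to run the receiver and one to process data.
```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

// "local" (one thread) would starve processing; "local[2]" is the minimum.
val conf = new SparkConf().setMaster("local[2]").setAppName("NetworkWordCount")
val ssc = new StreamingContext(conf, Seconds(1))
```
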
1d36165 [SPARK-2109] Setting SPARK_MEM for bin/pyspark does not work. Trivial fix. Author: Prashant Sharma <prashant.s@imaginea.com> Closes #1050 from ScrapCodes/SPARK-2109/pyspark-script-bug and squashes the following commits: 77072b9 [Prashant Sharma] Changed echos to redirect to STDERR. 13f48a0 [Prashant Sharma] [SPARK-2109] Setting SPARK_MEM for bin/pyspark does not work. (cherry picked from commit 731f683b1bd8abbb83030b6bae14876658bbf098) Signed-off-by: Patrick Wendell <pwendell@gmail.com> 03 July 2014, 22:09:11 UTC
7766c9d [SPARK-2342] Evaluation helper's output type doesn't conform to input type The `cast` function doesn't conform to the intention of the comment "Those expressions are supposed to be in the same data type, and also the return type." Author: Yijie Shen <henry.yijieshen@gmail.com> Closes #1283 from yijieshen/master and squashes the following commits: c7aaa4b [Yijie Shen] [SPARK-2342] Evaluation helper's output type doesn't conform to input type (cherry picked from commit a9b52e5623f7fc77fca96b095f9eeaef76e35d54) Signed-off-by: Michael Armbrust <michael@databricks.com> 03 July 2014, 20:22:24 UTC
fdee6ee [SPARK] Fix NPE for ExternalAppendOnlyMap It did not handle null keys very gracefully before. Author: Andrew Or <andrewor14@gmail.com> Closes #1288 from andrewor14/fix-external and squashes the following commits: 312b8d8 [Andrew Or] Abstract key hash code ed5adf9 [Andrew Or] Fix NPE for ExternalAppendOnlyMap (cherry picked from commit c480537739f9329ebfd580f09c69778e6c976366) Signed-off-by: Aaron Davidson <aaron@databricks.com> 03 July 2014, 17:28:06 UTC
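
The "Abstract key hash code" squash commit suggests the shape of the fix; a null-safe hash helper sketch (assumed name):
```scala
// Map null to a fixed hash instead of calling hashCode on it (NPE otherwise).
def keyHashCode(key: Any): Int = if (key == null) 0 else key.hashCode()
```
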
87b74a9 [SPARK-2287] [SQL] Make ScalaReflection be able to handle Generic case classes. Author: Takuya UESHIN <ueshin@happy-camper.st> Closes #1226 from ueshin/issues/SPARK-2287 and squashes the following commits: 32ef7c3 [Takuya UESHIN] Add execution of `SHOW TABLES` before `TestHive.reset()`. 541dc8d [Takuya UESHIN] Merge branch 'master' into issues/SPARK-2287 fac5fae [Takuya UESHIN] Remove unnecessary method receiver. d306e60 [Takuya UESHIN] Merge branch 'master' into issues/SPARK-2287 7de5706 [Takuya UESHIN] Make ScalaReflection be able to handle Generic case classes. (cherry picked from commit bc7041a42dfa84312492ea8cae6fdeaeac4f6d1c) Signed-off-by: Michael Armbrust <michael@databricks.com> 02 July 2014, 17:11:02 UTC
552e28b [SPARK-2328] [SQL] Add execution of `SHOW TABLES` before `TestHive.reset()`. `PruningSuite` is unfortunately executed first among the Hive tests, and `TestHive.reset()` breaks the test environment. To prevent this, we must run a query before calling reset the first time. Author: Takuya UESHIN <ueshin@happy-camper.st> Closes #1268 from ueshin/issues/SPARK-2328 and squashes the following commits: 043ceac [Takuya UESHIN] Add execution of `SHOW TABLES` before `TestHive.reset()`. (cherry picked from commit 1e2c26c83dd2e807cf0031ceca8b338a1a57cac6) Signed-off-by: Michael Armbrust <michael@databricks.com> 02 July 2014, 17:07:22 UTC
69112b0 SPARK-2186: Spark SQL DSL support for simple aggregations such as SUM and AVG **Description** This patch enables using the `.select()` function in SchemaRDD with functions such as `Sum`, `Count` and others. **Testing** Unit tests added. Author: Ximo Guanter Gonzalbez <ximo@tid.es> Closes #1211 from edrevo/add-expression-support-in-select and squashes the following commits: fe4a1e1 [Ximo Guanter Gonzalbez] Extend SQL DSL to functions e1d344a [Ximo Guanter Gonzalbez] SPARK-2186: Spark SQL DSL support for simple aggregations such as SUM and AVG (cherry picked from commit 5c6ec94da1bacd8e65a43acb92b6721493484e7b) Signed-off-by: Michael Armbrust <michael@databricks.com> 02 July 2014, 17:04:08 UTC
a4c7541 update the comments in SqlParser SqlParser has been case-insensitive after https://github.com/apache/spark/commit/dab5439a083b5f771d5d5b462d0d517fa8e9aaf2 was merged Author: CodingCat <zhunansjtu@gmail.com> Closes #1275 from CodingCat/master and squashes the following commits: 17931cd [CodingCat] update the comments in SqlParser (cherry picked from commit 6596392da0fc0fee89e22adfca239a3477dfcbab) Signed-off-by: Reynold Xin <rxin@apache.org> 02 July 2014, 03:37:37 UTC
d468b3d [SPARK-2322] Exception in resultHandler should NOT crash DAGScheduler and shutdown SparkContext. This should go into 1.0.1. Author: Reynold Xin <rxin@apache.org> Closes #1264 from rxin/SPARK-2322 and squashes the following commits: c77c07f [Reynold Xin] Added comment to SparkDriverExecutionException and a test case for accumulator. 5d8d920 [Reynold Xin] [SPARK-2322] Exception in resultHandler could crash DAGScheduler and shutdown SparkContext. (cherry picked from commit 358ae1534d01ad9e69364a21441a7ef23c2cb516) Signed-off-by: Reynold Xin <rxin@apache.org> Conflicts: core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala 30 June 2014, 18:52:30 UTC
febec74 [SPARK-1394] Remove SIGCHLD handler in worker subprocess It should not be the responsibility of the worker subprocess, which does not intentionally fork, to try to clean up child processes. Doing so is complex and interferes with operations such as platform.system(). If it is desirable to have tighter control over subprocesses, then namespaces should be used and it should be the manager's responsibility to handle cleanup. Author: Matthew Farrellee <matt@redhat.com> Closes #1247 from mattf/SPARK-1394 and squashes the following commits: c36f308 [Matthew Farrellee] [SPARK-1394] Remove SIGCHLD handler in worker subprocess (cherry picked from commit 3c104c79d24425786cec0034f269ba19cf465b31) Signed-off-by: Aaron Davidson <aaron@databricks.com> 29 June 2014, 01:39:48 UTC
2844fbb Revert "[maven-release-plugin] prepare release v1.0.1-rc1" This reverts commit 7feeda3d729f9397aa15ee8750c01ef5aa601962. 28 June 2014, 03:49:39 UTC
afc96a7 Revert "[maven-release-plugin] prepare for next development iteration" This reverts commit ea1a455a755f83f46fc8bf242410917d93d0c52c. 28 June 2014, 03:49:34 UTC
44d70de [SPARK-2003] Fix python SparkContext example Author: Matthew Farrellee <matt@redhat.com> Closes #1246 from mattf/SPARK-2003 and squashes the following commits: b12e7ca [Matthew Farrellee] [SPARK-2003] Fix python SparkContext example (cherry picked from commit 0e0686d3ef88e024fcceafe36a0cdbb953f5aeae) Signed-off-by: Patrick Wendell <pwendell@gmail.com> 28 June 2014, 01:20:47 UTC
9013197 [SPARK-2259] Fix highly misleading docs on cluster / client deploy modes The existing docs are highly misleading. For standalone mode, for example, it encourages the user to use standalone-cluster mode, which is not officially supported. The safeguards have been added in Spark submit itself to prevent bad documentation from leading users down the wrong path in the future. This PR is prompted by countless headaches users of Spark have run into on the mailing list. Author: Andrew Or <andrewor14@gmail.com> Closes #1200 from andrewor14/submit-docs and squashes the following commits: 5ea2460 [Andrew Or] Rephrase cluster vs client explanation c827f32 [Andrew Or] Clarify spark submit messages 9f7ed8f [Andrew Or] Clarify client vs cluster deploy mode + add safeguards (cherry picked from commit f17510e371dfbeaada3c72b884d70c36503ea30a) Signed-off-by: Patrick Wendell <pwendell@gmail.com> 27 June 2014, 23:11:53 UTC
2a49a8d [SPARK-2307] SparkUI - storage tab displays incorrect RDDs The issue here is that the `StorageTab` listens for updates from the `StorageStatusListener`, but when a block is kicked out of the cache, `StorageStatusListener` removes it from its list. Thus, there is no way for the `StorageTab` to know whether a block has been dropped. This issue was introduced in #1080, which was itself a bug fix. Here we revert that PR and offer a different fix for the original bug (SPARK-2144). Author: Andrew Or <andrewor14@gmail.com> Closes #1249 from andrewor14/storage-ui-fix and squashes the following commits: af019ce [Andrew Or] Fix SPARK-2307 (cherry picked from commit 21e0f77b6321590ed86223a60cdb8ae08ea4057f) Signed-off-by: Patrick Wendell <pwendell@gmail.com> 27 June 2014, 22:23:34 UTC