Revision 14fda6f313c63c9d5a86595c12acfb1e36df43ad authored by jerryshao on 01 June 2017, 05:34:53 UTC, committed by Wenchen Fan on 01 June 2017, 05:35:02 UTC
## What changes were proposed in this pull request?

Hadoop FileSystem's statistics are based on thread-local variables, which is fine as long as the whole RDD computation chain runs in the same thread. But if a child RDD creates another thread to consume the iterator obtained from a Hadoop RDD, the `bytesRead` computation will be wrong, because the iterator's `next()` and `close()` may then run in different threads. This can happen when using PySpark with `PythonRDD`.

So this patch builds a map to track the `bytesRead` for each thread and adds the values together. This approach applies to three RDDs: `HadoopRDD`, `NewHadoopRDD` and `FileScanRDD`. I assume `FileScanRDD` cannot be called directly, so I only fixed `HadoopRDD` and `NewHadoopRDD`. A minimal sketch of the idea is shown below.
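
The following is a hedged, illustrative sketch (not the actual Spark code) of the per-thread aggregation idea: each thread records its own `bytesRead` under its thread id, and the total is the sum over all threads, so the metric stays correct even when `next()` and `close()` run in different threads. The class and method names here are hypothetical.

```scala
import java.util.concurrent.ConcurrentHashMap
import scala.collection.JavaConverters._

// Hypothetical helper, for illustration only: aggregates bytesRead across threads.
class ThreadAwareBytesReadTracker {
  // Maps thread id -> bytes read observed by that thread.
  private val bytesReadPerThread = new ConcurrentHashMap[Long, Long]()

  /** Record the bytes read observed by the calling thread. */
  def setBytesReadForCurrentThread(bytesRead: Long): Unit = {
    bytesReadPerThread.put(Thread.currentThread().getId, bytesRead)
  }

  /** Total bytes read across all threads that touched this partition. */
  def totalBytesRead: Long = bytesReadPerThread.values().asScala.sum
}
```

With something like this, the thread driving `next()` and the thread calling `close()` can each report their own statistics, and the task-level metric is computed from the sum rather than from a single thread-local value.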

## How was this patch tested?

Unit test and local cluster verification.

Author: jerryshao <sshao@hortonworks.com>

Closes #17617 from jerryshao/SPARK-20244.

(cherry picked from commit 5854f77ce1d3b9491e2a6bd1f352459da294e369)
Signed-off-by: Wenchen Fan <wenchen@databricks.com>