Revision 7a04def920438ef0e08b66a95befeec981e5571e authored by Xianyang Liu on 07 August 2017, 09:04:53 UTC, committed by Wenchen Fan on 07 August 2017, 09:05:02 UTC
## What changes were proposed in this pull request?

We should reset `numRecordsWritten` to zero after `DiskBlockObjectWriter.commitAndGet` is called.
When `revertPartialWritesAndClose` is called, we decrease the written-record count in `ShuffleWriteMetrics`. However, we currently decrease it all the way to zero, which is wrong: we should only subtract the records written after the last call to `commitAndGet`.
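
For illustration, here is a minimal Scala sketch of the intended behaviour. It is not the actual Spark source; the simplified constructor and the `records` accessor are assumptions made for the example:

```scala
// Minimal sketch; names mirror DiskBlockObjectWriter but this is not the real implementation.
class ShuffleWriteMetrics {
  private var recordsWritten: Long = 0L
  def incRecordsWritten(n: Long): Unit = recordsWritten += n
  def decRecordsWritten(n: Long): Unit = recordsWritten -= n
  def records: Long = recordsWritten
}

class DiskBlockObjectWriter(writeMetrics: ShuffleWriteMetrics) {
  // Records written since the last commitAndGet.
  private var numRecordsWritten = 0

  def recordWritten(): Unit = {
    numRecordsWritten += 1
    writeMetrics.incRecordsWritten(1)
  }

  def commitAndGet(): Unit = {
    // ... flush and commit the current file segment ...
    // The fix: reset the per-segment counter so a later revert only
    // subtracts records written after this commit.
    numRecordsWritten = 0
  }

  def revertPartialWritesAndClose(): Unit = {
    // Only undo the records that were never committed, not all of them.
    writeMetrics.decRecordsWritten(numRecordsWritten)
    // ... truncate the file back to the last committed position and close ...
  }
}
```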

## How was this patch tested?
Modified existing test.

Please review http://spark.apache.org/contributing.html before opening a pull request.

Author: Xianyang Liu <xianyang.liu@intel.com>

Closes #18830 from ConeyLiu/DiskBlockObjectWriter.

(cherry picked from commit 534a063f7c693158437d13224f50d4ae789ff6fb)
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
1 parent 098aaec
Raw file: _config.yml
highlighter: pygments
markdown: kramdown
gems:
  - jekyll-redirect-from

# For some reason kramdown seems to behave differently on different
# OS/packages wrt encoding. So we hard code this config.
kramdown:
  entity_output: numeric

include:
  - _static
  - _modules

# These allow the documentation to be updated with newer releases
# of Spark, Scala, and Mesos.
SPARK_VERSION: 2.2.1-SNAPSHOT
SPARK_VERSION_SHORT: 2.2.1
SCALA_BINARY_VERSION: "2.11"
SCALA_VERSION: "2.11.8"
MESOS_VERSION: 1.0.0
SPARK_ISSUE_TRACKER_URL: https://issues.apache.org/jira/browse/SPARK
SPARK_GITHUB_URL: https://github.com/apache/spark
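
For context, Jekyll exposes these keys as site variables to the documentation pages. A hypothetical snippet from a docs page (illustrative only, not a file from this commit) could reference them like this:

```liquid
Documentation for Spark {{site.SPARK_VERSION_SHORT}}, built for Scala {{site.SCALA_BINARY_VERSION}}.
Report issues at {{site.SPARK_ISSUE_TRACKER_URL}} or browse the source at {{site.SPARK_GITHUB_URL}}.
```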