Revision 6764408f68495e2ca7c1b9959db53ee12cabb197 authored by Taaffy on 19 September 2017, 09:20:04 UTC, committed by Sean Owen on 19 September 2017, 09:20:14 UTC
The current implementation of processingRate-total uses the wrong metric:
it mistakenly reads inputRowsPerSecond instead of processedRowsPerSecond.

## What changes were proposed in this pull request?
Change processingRate-total to report processedRowsPerSecond instead of inputRowsPerSecond.
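The shape of the fix can be sketched as follows. This is a minimal, self-contained stand-in for the gauge registration in MetricsReporter.scala, not the verbatim Spark source: `StreamingQueryProgress`, `registerGauge`, and the map-backed registry here are simplified assumptions used only to illustrate the bug and its correction.

```scala
object MetricsReporterSketch {
  // Simplified stand-in for a streaming query's last progress report.
  final case class StreamingQueryProgress(
      inputRowsPerSecond: Double,
      processedRowsPerSecond: Double)

  // Simplified stand-in for Codahale gauge registration: name -> value supplier.
  private val gauges = scala.collection.mutable.Map.empty[String, () => Double]
  def registerGauge(name: String, f: () => Double): Unit = gauges(name) = f

  def main(args: Array[String]): Unit = {
    def lastProgress = StreamingQueryProgress(
      inputRowsPerSecond = 120.0,
      processedRowsPerSecond = 95.0)

    registerGauge("inputRate-total", () => lastProgress.inputRowsPerSecond)
    // Before the fix, this gauge also read inputRowsPerSecond, which is why
    // the inputRate-total and processingRate-total CSV files showed the
    // same values. The fix reads processedRowsPerSecond instead:
    registerGauge("processingRate-total", () => lastProgress.processedRowsPerSecond)

    gauges.toSeq.sortBy(_._1).foreach { case (n, f) => println(s"$n=${f()}") }
  }
}
```

With the fix applied, the two gauges report distinct values whenever the processing rate lags the input rate.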

## How was this patch tested?

Built Spark from source with the proposed change and verified the output against the correct metric. Before the change, the CSV metrics files for inputRate-total and processingRate-total showed identical values because of the error. After the change to MetricsReporter.scala, the processingRate-total CSV file reported the correct metric.
<img width="963" alt="processed rows per second" src="https://user-images.githubusercontent.com/32072374/30554340-82eea12c-9ca4-11e7-8370-8168526ff9a2.png">

Please review http://spark.apache.org/contributing.html before opening a pull request.

Author: Taaffy <32072374+Taaffy@users.noreply.github.com>

Closes #19268 from Taaffy/patch-1.

(cherry picked from commit 1bc17a6b8add02772a8a0a1048ac6a01d045baf4)
Signed-off-by: Sean Owen <sowen@cloudera.com>