Revision 82ae1f0aca9c00fddba130c144adfe0777172cc8 authored by Tathagata Das on 15 May 2017, 17:46:38 UTC, committed by Shixiong Zhu on 15 May 2017, 17:46:45 UTC
## What changes were proposed in this pull request?

StateStore.abort() should make a best-effort attempt to clean up temporary resources. It should not throw errors, especially because it is called in a TaskCompletionListener, where such an error could hide earlier, real errors in the task.

## How was this patch tested?

No unit test.

Author: Tathagata Das <tathagata.das1565@gmail.com>

Closes #17958 from tdas/SPARK-20716.

(cherry picked from commit 271175e2bd0f7887a068db92de73eff60f5ef2b2)

Signed-off-by: Shixiong Zhu <shixiong@databricks.com>
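The pattern described above — a cleanup method that swallows non-fatal errors so a completion listener cannot mask the task's original failure — can be sketched as follows. This is a minimal illustration, not Spark's actual StateStore code; `StateStoreLike`, `TempFileBackedStore`, and their members are hypothetical stand-ins.

```scala
import scala.util.control.NonFatal

// Hypothetical stand-in for the store interface; not Spark's API.
trait StateStoreLike {
  /** Abort must never throw: it runs in a TaskCompletionListener,
    * where an exception would hide the task's real error. */
  def abort(): Unit
}

// Hypothetical store that keeps its state in a temporary file.
class TempFileBackedStore(tempFile: java.io.File) extends StateStoreLike {
  override def abort(): Unit = {
    try {
      // Best-effort cleanup of temporary resources.
      if (tempFile.exists()) tempFile.delete()
    } catch {
      case NonFatal(e) =>
        // Log and swallow: cleanup failures must not replace the
        // exception that actually failed the task.
        System.err.println(s"Ignoring error while aborting store: $e")
    }
  }
}
```

Wrapping only `NonFatal` exceptions is deliberate: fatal JVM errors (e.g. `OutOfMemoryError`) should still propagate rather than be silenced by cleanup code.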
1 parent 0bd918f
File | Mode | Size |
---|---|---|
beeline | -rwxr-xr-x | 1.1 KB |
beeline.cmd | -rw-r--r-- | 899 bytes |
find-spark-home | -rwxr-xr-x | 1.9 KB |
load-spark-env.cmd | -rw-r--r-- | 1.9 KB |
load-spark-env.sh | -rw-r--r-- | 2.1 KB |
pyspark | -rwxr-xr-x | 2.9 KB |
pyspark.cmd | -rw-r--r-- | 1002 bytes |
pyspark2.cmd | -rw-r--r-- | 1.5 KB |
run-example | -rwxr-xr-x | 1.0 KB |
run-example.cmd | -rw-r--r-- | 988 bytes |
spark-class | -rwxr-xr-x | 3.1 KB |
spark-class.cmd | -rw-r--r-- | 1012 bytes |
spark-class2.cmd | -rw-r--r-- | 2.4 KB |
spark-shell | -rwxr-xr-x | 2.9 KB |
spark-shell.cmd | -rw-r--r-- | 1010 bytes |
spark-shell2.cmd | -rw-r--r-- | 1.5 KB |
spark-sql | -rwxr-xr-x | 1.0 KB |
spark-submit | -rwxr-xr-x | 1.0 KB |
spark-submit.cmd | -rw-r--r-- | 1012 bytes |
spark-submit2.cmd | -rw-r--r-- | 1.1 KB |
sparkR | -rwxr-xr-x | 1.0 KB |
sparkR.cmd | -rw-r--r-- | 1000 bytes |
sparkR2.cmd | -rw-r--r-- | 1014 bytes |