https://github.com/apache/spark
Revision 98141088888a4f51aeb281f14a8421ac1d735c85 authored by Shixiong Zhu on 22 January 2019, 17:00:52 UTC, committed by Shixiong Zhu on 22 January 2019, 17:01:34 UTC

[SPARK-26665][CORE] Fix a bug that BlockTransferService.fetchBlockSync may hang forever

## What changes were proposed in this pull request?

`ByteBuffer.allocate` may throw `OutOfMemoryError` when the block is large but not enough memory is available. When this happens, `BlockTransferService.fetchBlockSync` currently hangs forever because its `BlockFetchingListener.onBlockFetchSuccess` never completes the `Promise`.

This PR catches `Throwable` and completes the `Promise` with the error.
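
For reference, a minimal sketch of the completion pattern described above. `SimpleBuffer`, `SimpleListener`, and `FetchSketch` are simplified stand-ins for Spark's `ManagedBuffer`, `BlockFetchingListener`, and `BlockTransferService.fetchBlockSync`, not the actual diff:

```scala
import java.nio.ByteBuffer
import scala.concurrent.{Await, Promise}
import scala.concurrent.duration._

// Hypothetical, simplified stand-ins for Spark's ManagedBuffer and
// BlockFetchingListener; only the Promise-completion pattern matters here.
trait SimpleBuffer { def size: Long; def nioByteBuffer(): ByteBuffer }
trait SimpleListener {
  def onBlockFetchSuccess(blockId: String, data: SimpleBuffer): Unit
  def onBlockFetchFailure(blockId: String, t: Throwable): Unit
}

object FetchSketch {
  /** Blocks until the listener completes the Promise, as fetchBlockSync does. */
  def fetchSync(startFetch: SimpleListener => Unit): ByteBuffer = {
    val result = Promise[ByteBuffer]()
    startFetch(new SimpleListener {
      override def onBlockFetchSuccess(blockId: String, data: SimpleBuffer): Unit = {
        try {
          // ByteBuffer.allocate may throw OutOfMemoryError for a large block.
          // Without the catch below, the Promise would never be completed and
          // the Await at the bottom would block forever.
          val ret = ByteBuffer.allocate(data.size.toInt)
          ret.put(data.nioByteBuffer())
          ret.flip()
          result.trySuccess(ret)
        } catch {
          // Catch Throwable, not just NonFatal, so an OOM also fails the Promise.
          case t: Throwable => result.tryFailure(t)
        }
      }
      override def onBlockFetchFailure(blockId: String, t: Throwable): Unit = {
        result.tryFailure(t)
      }
    })
    Await.result(result.future, 60.seconds)
  }
}
```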

## How was this patch tested?

Added a unit test. Since I cannot make `ByteBuffer.allocate` throw `OutOfMemoryError` in a test, the test passes a negative size to make `ByteBuffer.allocate` fail. Although the error type is different, it should exercise the same code path.
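
A minimal sketch (not the actual unit test) of why a negative size is a usable stand-in for an allocation failure:

```scala
import java.nio.ByteBuffer
import scala.util.Try

// ByteBuffer.allocate rejects a negative capacity with IllegalArgumentException.
// In the listener sketch above, that lands in the same `catch { case t: Throwable => ... }`
// branch an OutOfMemoryError would, so the Promise fails instead of hanging.
val attempt = Try(ByteBuffer.allocate(-1))
assert(attempt.isFailure)
assert(attempt.failed.get.isInstanceOf[IllegalArgumentException])
```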

Closes #23590 from zsxwing/SPARK-26665.

Authored-by: Shixiong Zhu <zsxwing@gmail.com>
Signed-off-by: Shixiong Zhu <zsxwing@gmail.com>
(cherry picked from commit 66450bbc1bb4397f06ca9a6ecba4d16c82d711fd)
Signed-off-by: Shixiong Zhu <zsxwing@gmail.com>