HDFS-2054 BlockSender.sendChunk() prints ERROR for connection closures encountered during transferToFully()

git-svn-id: https://svn.apache.org/repos/asf/hadoop/common/trunk@1145751 13f79535-47bb-0310-9956-ffa450edef68
Michael Stack 2011-07-12 20:23:39 +00:00
parent fd5a762df0
commit 714edd65ac
2 changed files with 15 additions and 3 deletions

@@ -546,6 +546,9 @@ Trunk (unreleased changes)
     HDFS-2134. Move DecommissionManager to the blockmanagement package.
     (szetszwo)
 
+    HDFS-2054 BlockSender.sendChunk() prints ERROR for connection closures
+    encountered during transferToFully() (Kihwal Lee via stack)
+
   OPTIMIZATIONS
 
     HDFS-1458. Improve checkpoint performance by avoiding unnecessary image

@@ -401,10 +401,19 @@ private int sendChunks(ByteBuffer pkt, int maxChunks, OutputStream out)
       }
     } catch (IOException e) {
-      /* exception while writing to the client (well, with transferTo(),
-       * it could also be while reading from the local file).
+      /* Exception while writing to the client. Connection closure from
+       * the other end is mostly the case and we do not care much about
+       * it. But other things can go wrong, especially in transferTo(),
+       * which we do not want to ignore.
+       *
+       * The message parsing below should not be considered as a good
+       * coding example. NEVER do it to drive a program logic. NEVER.
+       * It was done here because the NIO throws an IOException for EPIPE.
        */
-      LOG.error("BlockSender.sendChunks() exception: " + StringUtils.stringifyException(e));
+      String ioem = e.getMessage();
+      if (!ioem.startsWith("Broken pipe") && !ioem.startsWith("Connection reset")) {
+        LOG.error("BlockSender.sendChunks() exception: ", e);
+      }
       throw ioeToSocketException(e);
     }
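
The filtering technique the patch uses (classifying an IOException by its message prefix, because NIO surfaces EPIPE/ECONNRESET only as an IOException message) can be sketched outside Hadoop. This is a minimal, hypothetical sketch, not the Hadoop code: the class and method names are invented, and the null check on getMessage() is an extra guard the patch itself does not add.

```java
import java.io.IOException;

// Hypothetical sketch of the patch's message-prefix filtering.
public class ErrorFilter {
    // Returns true when the IOException looks like a routine client
    // disconnect (EPIPE / ECONNRESET) that is not worth an ERROR log.
    // As the patch's own comment warns, driving program logic off
    // exception messages is fragile; it is done only because NIO gives
    // no typed exception for these cases.
    static boolean isClientDisconnect(IOException e) {
        String msg = e.getMessage();  // may be null; guard added here
        return msg != null
            && (msg.startsWith("Broken pipe")
                || msg.startsWith("Connection reset"));
    }

    public static void main(String[] args) {
        // Typical kernel-originated messages seen through NIO:
        assert isClientDisconnect(new IOException("Broken pipe"));
        assert isClientDisconnect(new IOException("Connection reset by peer"));
        // A genuinely unexpected error should still be logged:
        assert !isClientDisconnect(new IOException("No space left on device"));
        assert !isClientDisconnect(new IOException());
        System.out.println("ok");
    }
}
```

Either way the exception is still rethrown (as in the patch's `throw ioeToSocketException(e)`); only the ERROR-level logging is suppressed for the disconnect cases.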