diff --git a/hadoop-mapreduce-project/CHANGES.txt b/hadoop-mapreduce-project/CHANGES.txt
index 3b2470d270..de5b703ba6 100644
--- a/hadoop-mapreduce-project/CHANGES.txt
+++ b/hadoop-mapreduce-project/CHANGES.txt
@@ -113,6 +113,8 @@ Release 0.23.3 - UNRELEASED
     MAPREDUCE-3885. Avoid an unnecessary copy for all requests/responses in
     MRs ProtoOverHadoopRpcEngine. (Devaraj Das via sseth)
 
+    MAPREDUCE-3991. Streaming FAQ has some wrong instructions about input files splitting. (harsh)
+
   OPTIMIZATIONS
 
   BUG FIXES
diff --git a/hadoop-mapreduce-project/src/docs/src/documentation/content/xdocs/streaming.xml b/hadoop-mapreduce-project/src/docs/src/documentation/content/xdocs/streaming.xml
index a1013e8dc9..2ae6858b70 100644
--- a/hadoop-mapreduce-project/src/docs/src/documentation/content/xdocs/streaming.xml
+++ b/hadoop-mapreduce-project/src/docs/src/documentation/content/xdocs/streaming.xml
@@ -750,7 +750,7 @@
 You can use Hadoop Streaming to do this. As an example, consider the problem of zipping (compressing) a set of files across the hadoop cluster. You can achieve this using either of these methods:

  1. Hadoop Streaming and custom mapper script:
  2. The existing Hadoop Framework:
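For reference, a minimal sketch of the mapper script behind method 1, under these assumptions: each input line fed to the mapper is the full HDFS path of one file to compress, the hadoop and gzip commands are on the task nodes' PATH, and the names zip_mapper.sh, OUTPUT_DIR, filelist.txt, and /zipped are hypothetical, not part of the documented example:

    #!/usr/bin/env bash
    # Hypothetical mapper (zip_mapper.sh) for method 1: each input record is
    # the full HDFS path of one file to compress. The loop deliberately does
    # not assume a split carries exactly one path, since a map task may
    # receive several lines of the file list -- the splitting behavior this
    # JIRA's doc fix is concerned with.
    while read -r hdfs_path; do
      name=$(basename "$hdfs_path")
      hadoop fs -get "$hdfs_path" "$name"        # copy the file to local disk
      gzip "$name"                               # compress it locally
      hadoop fs -put "$name.gz" "$OUTPUT_DIR/"   # put the result back in HDFS
    done

It could then be launched as a map-only streaming job over the file list, e.g.:

    hadoop jar hadoop-streaming.jar \
      -input filelist.txt \
      -output ziplogs \
      -mapper zip_mapper.sh \
      -file zip_mapper.sh \
      -cmdenv OUTPUT_DIR=/zipped \
      -numReduceTasks 0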