From d07356e60e5a46357df0b7d883e89547ccea8f52 Mon Sep 17 00:00:00 2001
From: Nikita Eshkeev
Date: Thu, 20 Apr 2023 13:42:44 +0300
Subject: [PATCH] HADOOP-18597. Simplify single node instructions for creating
 directories for Map Reduce. (#5305)

Signed-off-by: Ayush Saxena
---
 .../hadoop-common/src/site/markdown/SingleCluster.md.vm | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/hadoop-common-project/hadoop-common/src/site/markdown/SingleCluster.md.vm b/hadoop-common-project/hadoop-common/src/site/markdown/SingleCluster.md.vm
index bbea16855e..8153dce5c3 100644
--- a/hadoop-common-project/hadoop-common/src/site/markdown/SingleCluster.md.vm
+++ b/hadoop-common-project/hadoop-common/src/site/markdown/SingleCluster.md.vm
@@ -157,8 +157,7 @@ The following instructions are to run a MapReduce job locally. If you want to ex
 
 4. Make the HDFS directories required to execute MapReduce jobs:
 
-        $ bin/hdfs dfs -mkdir /user
-        $ bin/hdfs dfs -mkdir /user/<username>
+        $ bin/hdfs dfs -mkdir -p /user/<username>
 
 5. Copy the input files into the distributed filesystem:
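
For context, `hdfs dfs -mkdir -p` behaves like the POSIX `mkdir -p`: it creates the target directory along with any missing parent directories and does not fail if the directory already exists, which is why the two separate `mkdir` invocations collapse into one. A minimal sketch of the revised step, assuming a running single-node HDFS and with `<username>` standing in for the local user; the trailing `-ls` check is illustrative only and not part of the documented instructions:

        $ bin/hdfs dfs -mkdir -p /user/<username>   # creates /user and /user/<username> in one step
        $ bin/hdfs dfs -ls /user                    # verify the new home directory exists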