HADOOP-9982. Fix dead links in hadoop site docs. (Contributed by Akira Ajisaka)
git-svn-id: https://svn.apache.org/repos/asf/hadoop/common/trunk@1561813 13f79535-47bb-0310-9956-ffa450edef68
Commit e9f7f3624a (parent 70165d84eb)
@@ -24,8 +24,7 @@ Configuration
 
 * Server Side Configuration Setup
 
-  The {{{./apidocs/org/apache/hadoop/auth/server/AuthenticationFilter.html}
-  AuthenticationFilter filter}} is Hadoop Auth's server side component.
+  The AuthenticationFilter filter is Hadoop Auth's server side component.
 
   This filter must be configured in front of all the web application resources
   that required authenticated requests. For example:
@@ -46,9 +45,7 @@ Configuration
   must start with the prefix. The default value is no prefix.
 
   * <<<[PREFIX.]type>>>: the authentication type keyword (<<<simple>>> or
-    <<<kerberos>>>) or a
-    {{{./apidocs/org/apache/hadoop/auth/server/AuthenticationHandler.html}
-    Authentication handler implementation}}.
+    <<<kerberos>>>) or a Authentication handler implementation.
 
   * <<<[PREFIX.]signature.secret>>>: The secret to SHA-sign the generated
     authentication tokens. If a secret is not provided a random secret is
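The filter setup the hunk above refers to is done in the web application's <<<web.xml>>>. A minimal sketch, with illustrative filter name, secret, and URL pattern (the filter class and init-param names follow Hadoop Auth's published examples; adjust values for a real deployment):

```xml
<!-- Sketch of a web.xml entry for Hadoop Auth's AuthenticationFilter.
     Values here are placeholders, not a recommended configuration. -->
<filter>
  <filter-name>authFilter</filter-name>
  <filter-class>org.apache.hadoop.security.authentication.server.AuthenticationFilter</filter-class>
  <init-param>
    <param-name>type</param-name>
    <param-value>simple</param-value>
  </init-param>
  <init-param>
    <param-name>signature.secret</param-name>
    <param-value>my-signature-secret</param-value>
  </init-param>
</filter>
<filter-mapping>
  <filter-name>authFilter</filter-name>
  <url-pattern>/*</url-pattern>
</filter-mapping>
```

Mapping the filter to <<</*>>> puts it in front of all resources, matching the "must be configured in front of all the web application resources" requirement above.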
@@ -52,7 +52,3 @@ Hadoop Auth, Java HTTP SPNEGO ${project.version}
 
   * {{{./BuildingIt.html}Building It}}
 
-  * {{{./apidocs/index.html}JavaDocs}}
-
-  * {{{./dependencies.html}Dependencies}}
-
@@ -18,8 +18,6 @@
 
 Hadoop MapReduce Next Generation - CLI MiniCluster.
 
-\[ {{{./index.html}Go Back}} \]
-
 %{toc|section=1|fromDepth=0}
 
 * {Purpose}
@@ -42,7 +40,8 @@ Hadoop MapReduce Next Generation - CLI MiniCluster.
 $ mvn clean install -DskipTests
 $ mvn package -Pdist -Dtar -DskipTests -Dmaven.javadoc.skip
 +---+
-  <<NOTE:>> You will need protoc 2.5.0 installed.
+  <<NOTE:>> You will need {{{http://code.google.com/p/protobuf/}protoc 2.5.0}}
+  installed.
 
   The tarball should be available in <<<hadoop-dist/target/>>> directory.
 
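The protoc 2.5.0 requirement in the hunk above is a common build stumbling block; a pre-flight check can be sketched in shell. The helper name is made up for illustration; it expects the `libprotoc X.Y.Z` format that `protoc --version` prints:

```shell
# Hypothetical pre-flight check for the protoc 2.5.0 build requirement.
# Pass it the output of `protoc --version` (format: "libprotoc X.Y.Z").
check_protoc() {
  ver=$(printf '%s' "$1" | awk '{print $2}')
  if [ "$ver" = "2.5.0" ]; then
    echo "ok"
  else
    echo "need protoc 2.5.0, found ${ver:-none}"
  fi
}

check_protoc "$(protoc --version 2>/dev/null)"
```

Running this before `mvn package` turns a mid-build native-compile failure into an immediate, readable message.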
@@ -16,8 +16,6 @@
   ---
   ${maven.build.timestamp}
 
-\[ {{{../index.html}Go Back}} \]
-
 %{toc|section=1|fromDepth=0}
 
 Hadoop MapReduce Next Generation - Cluster Setup
@@ -29,7 +27,7 @@ Hadoop MapReduce Next Generation - Cluster Setup
   with thousands of nodes.
 
   To play with Hadoop, you may first want to install it on a single
-  machine (see {{{SingleCluster}Single Node Setup}}).
+  machine (see {{{./SingleCluster.html}Single Node Setup}}).
 
 * {Prerequisites}
 
@@ -44,8 +44,9 @@ Overview
 Generic Options
 
   The following options are supported by {{dfsadmin}}, {{fs}}, {{fsck}},
-  {{job}} and {{fetchdt}}. Applications should implement {{{some_useful_url}Tool}} to support
-  {{{another_useful_url}GenericOptions}}.
+  {{job}} and {{fetchdt}}. Applications should implement
+  {{{../../api/org/apache/hadoop/util/Tool.html}Tool}} to support
+  GenericOptions.
 
 *------------------------------------------------+-----------------------------+
 || GENERIC_OPTION || Description
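The generic options discussed above include the <<<-Dproperty=value>>> form. As a sketch of that argument shape only (this toy parser is not Hadoop's GenericOptionsParser, and the property names below are just examples):

```shell
# Toy illustration of the -Dproperty=value generic-option shape.
# Real parsing is done by Hadoop's GenericOptionsParser; this just
# echoes each key=value pair and ignores everything else.
print_d_opts() {
  for arg in "$@"; do
    case "$arg" in
      -D?*=*) echo "${arg#-D}" ;;   # strip the -D prefix, keep key=value
    esac
  done
}

print_d_opts -Dfs.defaultFS=hdfs://nn:8020 -Dmapreduce.job.reduces=2
```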
@@ -123,7 +124,8 @@ User Commands
 
 * <<<fsck>>>
 
-  Runs a HDFS filesystem checking utility. See {{Fsck}} for more info.
+  Runs a HDFS filesystem checking utility.
+  See {{{../hadoop-hdfs/HdfsUserGuide.html#fsck}fsck}} for more info.
 
   Usage: <<<hadoop fsck [GENERIC_OPTIONS] <path> [-move | -delete | -openforwrite] [-files [-blocks [-locations | -racks]]]>>>
 
@@ -149,7 +151,8 @@ User Commands
 
 * <<<fetchdt>>>
 
-  Gets Delegation Token from a NameNode. See {{fetchdt}} for more info.
+  Gets Delegation Token from a NameNode.
+  See {{{../hadoop-hdfs/HdfsUserGuide.html#fetchdt}fetchdt}} for more info.
 
   Usage: <<<hadoop fetchdt [GENERIC_OPTIONS] [--webservice <namenode_http_addr>] <path> >>>
 
@@ -302,7 +305,8 @@ Administration Commands
 * <<<balancer>>>
 
   Runs a cluster balancing utility. An administrator can simply press Ctrl-C
-  to stop the rebalancing process. See Rebalancer for more details.
+  to stop the rebalancing process. See
+  {{{../hadoop-hdfs/HdfsUserGuide.html#Rebalancer}Rebalancer}} for more details.
 
   Usage: <<<hadoop balancer [-threshold <threshold>]>>>
 
@@ -445,7 +449,7 @@ Administration Commands
 * <<<namenode>>>
 
   Runs the namenode. More info about the upgrade, rollback and finalize is
-  at Upgrade Rollback
+  at {{{../hadoop-hdfs/HdfsUserGuide.html#Upgrade_and_Rollback}Upgrade Rollback}}.
 
   Usage: <<<hadoop namenode [-format] | [-upgrade] | [-rollback] | [-finalize] | [-importCheckpoint]>>>
 
@@ -474,8 +478,9 @@ Administration Commands
 
 * <<<secondarynamenode>>>
 
-  Runs the HDFS secondary namenode. See Secondary Namenode for more
-  info.
+  Runs the HDFS secondary namenode.
+  See {{{../hadoop-hdfs/HdfsUserGuide.html#Secondary_NameNode}Secondary Namenode}}
+  for more info.
 
   Usage: <<<hadoop secondarynamenode [-checkpoint [force]] | [-geteditsize]>>>
 
@@ -233,9 +233,10 @@ hand-in-hand to address this.
 
   * In particular for MapReduce applications, the developer community will
     try our best to support provide binary compatibility across major
-    releases e.g. applications using org.apache.hadop.mapred.* APIs are
-    supported compatibly across hadoop-1.x and hadoop-2.x. See
-    {{{../hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduce_Compatibility_Hadoop1_Hadoop2.html}
+    releases e.g. applications using org.apache.hadoop.mapred.
+    * APIs are supported compatibly across hadoop-1.x and hadoop-2.x. See
+    {{{../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduce_Compatibility_Hadoop1_Hadoop2.html}
     Compatibility for MapReduce applications between hadoop-1.x and hadoop-2.x}}
     for more details.
 
@@ -248,13 +249,13 @@ hand-in-hand to address this.
 
   * {{{../hadoop-hdfs/WebHDFS.html}WebHDFS}} - Stable
 
-  * {{{../hadoop-yarn/hadoop-yarn-site/ResourceManagerRest.html}ResourceManager}}
+  * {{{../../hadoop-yarn/hadoop-yarn-site/ResourceManagerRest.html}ResourceManager}}
 
-  * {{{../hadoop-yarn/hadoop-yarn-site/NodeManagerRest.html}NodeManager}}
+  * {{{../../hadoop-yarn/hadoop-yarn-site/NodeManagerRest.html}NodeManager}}
 
-  * {{{../hadoop-yarn/hadoop-yarn-site/MapredAppMasterRest.html}MR Application Master}}
+  * {{{../../hadoop-yarn/hadoop-yarn-site/MapredAppMasterRest.html}MR Application Master}}
 
-  * {{{../hadoop-yarn/hadoop-yarn-site/HistoryServerRest.html}History Server}}
+  * {{{../../hadoop-yarn/hadoop-yarn-site/HistoryServerRest.html}History Server}}
 
 *** Policy
 
@@ -512,7 +513,8 @@ hand-in-hand to address this.
     {{{https://issues.apache.org/jira/browse/HADOOP-9517}HADOOP-9517}}
 
   * Binary compatibility for MapReduce end-user applications between hadoop-1.x and hadoop-2.x -
-    {{{../hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduce_Compatibility_Hadoop1_Hadoop2.html}MapReduce Compatibility between hadoop-1.x and hadoop-2.x}}
+    {{{../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduce_Compatibility_Hadoop1_Hadoop2.html}
+    MapReduce Compatibility between hadoop-1.x and hadoop-2.x}}
 
   * Annotations for interfaces as per interface classification
     schedule -
@@ -88,7 +88,7 @@ chgrp
 
   Change group association of files. The user must be the owner of files, or
   else a super-user. Additional information is in the
-  {{{betterurl}Permissions Guide}}.
+  {{{../hadoop-hdfs/HdfsPermissionsGuide.html}Permissions Guide}}.
 
   Options
 
@@ -101,7 +101,7 @@ chmod
   Change the permissions of files. With -R, make the change recursively
   through the directory structure. The user must be the owner of the file, or
   else a super-user. Additional information is in the
-  {{{betterurl}Permissions Guide}}.
+  {{{../hadoop-hdfs/HdfsPermissionsGuide.html}Permissions Guide}}.
 
   Options
 
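The <<<-R>>> semantics described in the chmod hunk above mirror the local chmod(1). A quick local-filesystem sketch (standing in for HDFS, which is not assumed to be available here):

```shell
# Local-filesystem analogue of `hdfs dfs -chmod -R` (no HDFS required).
tmp=$(mktemp -d)
mkdir -p "$tmp/dir/sub"
touch "$tmp/dir/sub/file"

chmod -R 750 "$tmp/dir"            # recursive change, like the -R flag above

# The mode propagates to everything under dir/:
ls -ld "$tmp/dir/sub/file" | cut -c1-10   # prints -rwxr-x---

rm -rf "$tmp"
```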
@@ -112,7 +112,7 @@ chown
   Usage: <<<hdfs dfs -chown [-R] [OWNER][:[GROUP]] URI [URI ]>>>
 
   Change the owner of files. The user must be a super-user. Additional information
-  is in the {{{betterurl}Permissions Guide}}.
+  is in the {{{../hadoop-hdfs/HdfsPermissionsGuide.html}Permissions Guide}}.
 
   Options
 
@@ -210,8 +210,8 @@ expunge
 
   Usage: <<<hdfs dfs -expunge>>>
 
-  Empty the Trash. Refer to the {{{betterurl}HDFS Architecture Guide}} for
-  more information on the Trash feature.
+  Empty the Trash. Refer to the {{{../hadoop-hdfs/HdfsDesign.html}
+  HDFS Architecture Guide}} for more information on the Trash feature.
 
 get
 
@@ -439,7 +439,9 @@ test
   Options:
 
   * The -e option will check to see if the file exists, returning 0 if true.
+
   * The -z option will check to see if the file is zero length, returning 0 if true.
+
   * The -d option will check to see if the path is directory, returning 0 if true.
 
   Example:
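The three flags in the test hunk above follow the Unix test(1) exit-code convention (0 = true). A local-filesystem sketch of the same checks (local test(1) has no file-is-empty flag, so the -z analogue is written as "not -s"):

```shell
# Local equivalents of the -e / -z / -d checks above (exit status 0 = true).
tmp=$(mktemp -d)
touch "$tmp/empty"

test -e "$tmp/empty" && echo "exists"           # -e: path exists
test -s "$tmp/empty" || echo "zero length"      # -z analogue: file has no data
test -d "$tmp"       && echo "is a directory"   # -d: path is a directory

rm -rf "$tmp"
```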
@@ -18,8 +18,6 @@
 
 Hadoop Interface Taxonomy: Audience and Stability Classification
 
-\[ {{{./index.html}Go Back}} \]
-
 %{toc|section=1|fromDepth=0}
 
 * Motivation
@@ -29,8 +29,10 @@ Service Level Authorization Guide
 
   Make sure Hadoop is installed, configured and setup correctly. For more
   information see:
-  * Single Node Setup for first-time users.
-  * Cluster Setup for large, distributed clusters.
+  * {{{./SingleCluster.html}Single Node Setup}} for first-time users.
 
+  * {{{./ClusterSetup.html}Cluster Setup}} for large, distributed clusters.
+
 * Overview
 
@@ -18,8 +18,6 @@
 
 Hadoop MapReduce Next Generation - Setting up a Single Node Cluster.
 
-\[ {{{./index.html}Go Back}} \]
-
 %{toc|section=1|fromDepth=0}
 
 * Mapreduce Tarball
@@ -32,7 +30,8 @@ $ mvn clean install -DskipTests
 $ cd hadoop-mapreduce-project
 $ mvn clean install assembly:assembly -Pnative
 +---+
-  <<NOTE:>> You will need protoc 2.5.0 installed.
+  <<NOTE:>> You will need {{{http://code.google.com/p/protobuf}protoc 2.5.0}}
+  installed.
 
   To ignore the native builds in mapreduce you can omit the <<<-Pnative>>> argument
   for maven. The tarball should be available in <<<target/>>> directory.
@@ -1151,6 +1151,9 @@ Release 2.3.0 - UNRELEASED
     HDFS-5343. When cat command is issued on snapshot files getting unexpected result.
     (Sathish via umamahesh)
 
+    HADOOP-9982. Fix dead links in hadoop site docs. (Akira Ajisaka via Arpit
+    Agarwal)
+
 Release 2.2.0 - 2013-10-13
 
   INCOMPATIBLE CHANGES