From 10b2cfa96cec8582799c9ae864dfb4eb8a42aeb7 Mon Sep 17 00:00:00 2001
From: Chen Liang
Date: Wed, 13 Sep 2017 10:49:34 -0700
Subject: [PATCH] HADOOP-14804. correct wrong parameters format order in
 core-default.xml. Contributed by Chen Hongfei.

---
 .../src/main/resources/core-default.xml       | 96 +++++++++----------
 1 file changed, 48 insertions(+), 48 deletions(-)

diff --git a/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml b/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
index 3d5ff4d538..6cce6472f2 100644
--- a/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
+++ b/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
@@ -1973,38 +1973,38 @@
 <property>
-  <description>Enable/disable the cross-origin (CORS) filter.</description>
   <name>hadoop.http.cross-origin.enabled</name>
   <value>false</value>
+  <description>Enable/disable the cross-origin (CORS) filter.</description>
 </property>
 
 <property>
+  <name>hadoop.http.cross-origin.allowed-origins</name>
+  <value>*</value>
   <description>Comma separated list of origins that are allowed for web
     services needing cross-origin (CORS) support. Wildcards (*) and patterns
     allowed</description>
-  <name>hadoop.http.cross-origin.allowed-origins</name>
-  <value>*</value>
 </property>
 
 <property>
-  <description>Comma separated list of methods that are allowed for web
-    services needing cross-origin (CORS) support.</description>
   <name>hadoop.http.cross-origin.allowed-methods</name>
   <value>GET,POST,HEAD</value>
+  <description>Comma separated list of methods that are allowed for web
+    services needing cross-origin (CORS) support.</description>
 </property>
 
 <property>
-  <description>Comma separated list of headers that are allowed for web
-    services needing cross-origin (CORS) support.</description>
   <name>hadoop.http.cross-origin.allowed-headers</name>
   <value>X-Requested-With,Content-Type,Accept,Origin</value>
+  <description>Comma separated list of headers that are allowed for web
+    services needing cross-origin (CORS) support.</description>
 </property>
 
 <property>
-  <description>The number of seconds a pre-flighted request can be cached
-    for web services needing cross-origin (CORS) support.</description>
   <name>hadoop.http.cross-origin.max-age</name>
   <value>1800</value>
+  <description>The number of seconds a pre-flighted request can be cached
+    for web services needing cross-origin (CORS) support.</description>
 </property>
@@ -2095,13 +2095,13 @@
 <property>
+  <name>hadoop.http.staticuser.user</name>
+  <value>dr.who</value>
   <description>
     The user name to filter as, on static web filters
     while rendering content. An example use is the HDFS
     web UI (user to be used for browsing files).
   </description>
-  <name>hadoop.http.staticuser.user</name>
-  <value>dr.who</value>
 </property>
@@ -2464,6 +2464,8 @@
 <property>
+  <name>hadoop.registry.rm.enabled</name>
+  <value>false</value>
   <description>
     Is the registry enabled in the YARN Resource Manager?
@@ -2475,50 +2477,50 @@
     If false, the paths must be created by other means,
     and no automatic cleanup of service records will take place.
   </description>
-  <name>hadoop.registry.rm.enabled</name>
-  <value>false</value>
 </property>
 
 <property>
+  <name>hadoop.registry.zk.root</name>
+  <value>/registry</value>
   <description>
     The root zookeeper node for the registry
   </description>
-  <name>hadoop.registry.zk.root</name>
-  <value>/registry</value>
 </property>
 
 <property>
+  <name>hadoop.registry.zk.session.timeout.ms</name>
+  <value>60000</value>
   <description>
     Zookeeper session timeout in milliseconds
   </description>
-  <name>hadoop.registry.zk.session.timeout.ms</name>
-  <value>60000</value>
 </property>
 
 <property>
+  <name>hadoop.registry.zk.connection.timeout.ms</name>
+  <value>15000</value>
   <description>
     Zookeeper connection timeout in milliseconds
   </description>
-  <name>hadoop.registry.zk.connection.timeout.ms</name>
-  <value>15000</value>
 </property>
 
 <property>
+  <name>hadoop.registry.zk.retry.times</name>
+  <value>5</value>
   <description>
     Zookeeper connection retry count before failing
   </description>
-  <name>hadoop.registry.zk.retry.times</name>
-  <value>5</value>
 </property>
 
 <property>
-  <description>
-  </description>
   <name>hadoop.registry.zk.retry.interval.ms</name>
   <value>1000</value>
+  <description>
+  </description>
 </property>
 
 <property>
+  <name>hadoop.registry.zk.retry.ceiling.ms</name>
+  <value>60000</value>
   <description>
     Zookeeper retry limit in milliseconds, during exponential backoff.
@@ -2528,20 +2530,20 @@
     with the backoff policy, result in a long retry period
   </description>
-  <name>hadoop.registry.zk.retry.ceiling.ms</name>
-  <value>60000</value>
 </property>
 
 <property>
+  <name>hadoop.registry.zk.quorum</name>
+  <value>localhost:2181</value>
   <description>
     List of hostname:port pairs defining the zookeeper quorum binding for
     the registry
   </description>
-  <name>hadoop.registry.zk.quorum</name>
-  <value>localhost:2181</value>
 </property>
 
 <property>
+  <name>hadoop.registry.secure</name>
+  <value>false</value>
   <description>
     Key to set if the registry is secure. Turning it on
     changes the permissions policy from "open access"
@@ -2549,11 +2551,11 @@
     a user adding one or more auth key pairs down their
     own tree.
   </description>
-  <name>hadoop.registry.secure</name>
-  <value>false</value>
 </property>
 
 <property>
+  <name>hadoop.registry.system.acls</name>
+  <value>sasl:yarn@, sasl:mapred@, sasl:hdfs@</value>
   <description>
     A comma separated list of Zookeeper ACL identifiers with
     system access to the registry in a secure cluster.
@@ -2563,11 +2565,11 @@
     If there is an "@" at the end of a SASL entry it
     instructs the registry client to append the default kerberos domain.
   </description>
-  <name>hadoop.registry.system.acls</name>
-  <value>sasl:yarn@, sasl:mapred@, sasl:hdfs@</value>
 </property>
 
 <property>
+  <name>hadoop.registry.kerberos.realm</name>
+  <value></value>
   <description>
     The kerberos realm: used to set the realm of
     system principals which do not declare their realm,
@@ -2579,26 +2581,24 @@
     If neither are known and the realm is needed, then the registry
     service/client will fail.
   </description>
-  <name>hadoop.registry.kerberos.realm</name>
-  <value></value>
 </property>
 
 <property>
+  <name>hadoop.registry.jaas.context</name>
+  <value>Client</value>
   <description>
     Key to define the JAAS context. Used in secure mode
   </description>
-  <name>hadoop.registry.jaas.context</name>
-  <value>Client</value>
 </property>
 
 <property>
+  <name>hadoop.shell.missing.defaultFs.warning</name>
+  <value>false</value>
   <description>
     Enable hdfs shell commands to display warnings if (fs.defaultFS)
     property is not set.
   </description>
-  <name>hadoop.shell.missing.defaultFs.warning</name>
-  <value>false</value>
 </property>
@@ -2628,13 +2628,13 @@
 <property>
+  <name>hadoop.http.logs.enabled</name>
+  <value>true</value>
   <description>
     Enable the "/logs" endpoint on all Hadoop daemons, which serves local
     logs, but may be considered a security risk due to it listing the contents
     of a directory.
   </description>
-  <name>hadoop.http.logs.enabled</name>
-  <value>true</value>
 </property>
@@ -2799,48 +2799,48 @@
 <property>
-  <description>Host:Port of the ZooKeeper server to be used.
-  </description>
   <name>hadoop.zk.address</name>
+  <description>Host:Port of the ZooKeeper server to be used.
+  </description>
 </property>
 
 <property>
-  <description>Number of tries to connect to ZooKeeper.</description>
   <name>hadoop.zk.num-retries</name>
   <value>1000</value>
+  <description>Number of tries to connect to ZooKeeper.</description>
 </property>
 
 <property>
-  <description>Retry interval in milliseconds when connecting to ZooKeeper.
-  </description>
   <name>hadoop.zk.retry-interval-ms</name>
   <value>1000</value>
+  <description>Retry interval in milliseconds when connecting to ZooKeeper.
+  </description>
 </property>
 
 <property>
+  <name>hadoop.zk.timeout-ms</name>
+  <value>10000</value>
   <description>ZooKeeper session timeout in milliseconds. Session expiration
     is managed by the ZooKeeper cluster itself, not by the client. This value is
     used by the cluster to determine when the client's session expires.
     Expirations happens when the cluster does not hear from the client within
     the specified session timeout period (i.e. no heartbeat).
   </description>
-  <name>hadoop.zk.timeout-ms</name>
-  <value>10000</value>
 </property>
 
 <property>
-  <description>ACL's to be used for ZooKeeper znodes.</description>
   <name>hadoop.zk.acl</name>
   <value>world:anyone:rwcda</value>
+  <description>ACL's to be used for ZooKeeper znodes.</description>
 </property>
 
 <property>
+  <name>hadoop.zk.auth</name>
   <description>
     Specify the auths to be used for the ACL's specified in hadoop.zk.acl.
     This takes a comma-separated list of authentication mechanisms, each of the
     form 'scheme:auth' (the same syntax used for the 'addAuth' command in the
     ZK CLI).
   </description>
-  <name>hadoop.zk.auth</name>
 </property>
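For context (this note is not part of the patch): the change is purely a reordering. Before it, some entries in core-default.xml placed `<description>` ahead of `<name>` and `<value>`; after it, every entry follows the conventional Hadoop property layout, sketched here using one property from the patch:

```xml
<!-- Canonical ordering this patch normalizes to: name, then value, then description. -->
<property>
  <name>hadoop.http.cross-origin.enabled</name>
  <value>false</value>
  <description>Enable/disable the cross-origin (CORS) filter.</description>
</property>
```

Since Hadoop's `Configuration` loader keys each `<property>` element by its `<name>` child regardless of child order, the patch changes no runtime behavior; it only makes the file consistent and easier to read.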