hadoop.common.configuration.version
3.0.0
Version of this configuration file.
hadoop.tmp.dir
/tmp/hadoop-${user.name}
A base for other temporary directories.
io.native.lib.available
true
Should native hadoop libraries, if present, be used.
hadoop.http.filter.initializers
org.apache.hadoop.http.lib.StaticUserWebFilter
A comma separated list of class names. Each class in the list
must extend org.apache.hadoop.http.FilterInitializer. The corresponding
Filter will be initialized. Then, the Filter will be applied to all user
facing jsp and servlet web pages. The ordering of the list defines the
ordering of the filters.
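For illustration, a core-site.xml entry chaining a second initializer after the default one might look as follows (com.example.MyFilterInitializer is a hypothetical class that extends org.apache.hadoop.http.FilterInitializer):

  <property>
    <name>hadoop.http.filter.initializers</name>
    <value>org.apache.hadoop.http.lib.StaticUserWebFilter,com.example.MyFilterInitializer</value>
  </property>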
hadoop.security.authorization
false
Is service-level authorization enabled?
hadoop.security.instrumentation.requires.admin
false
Indicates if administrator ACLs are required to access
instrumentation servlets (JMX, METRICS, CONF, STACKS).
hadoop.security.authentication
simple
Possible values are simple (no authentication) and kerberos.
hadoop.security.group.mapping
org.apache.hadoop.security.ShellBasedUnixGroupsMapping
Class for user to group mapping (get groups for a given user) for ACL
hadoop.security.groups.cache.secs
300
This is the config controlling the validity of the entries in the cache
containing the user->group mapping. When this duration has expired,
the group mapping provider is invoked again to fetch
the user's groups, and the result is cached anew.
hadoop.security.group.mapping.ldap.url
The URL of the LDAP server to use for resolving user groups when using
the LdapGroupsMapping user to group mapping.
hadoop.security.group.mapping.ldap.ssl
false
Whether or not to use SSL when connecting to the LDAP server.
hadoop.security.group.mapping.ldap.ssl.keystore
File path to the SSL keystore that contains the SSL certificate required
by the LDAP server.
hadoop.security.group.mapping.ldap.ssl.keystore.password.file
The path to a file containing the password of the LDAP SSL keystore.
IMPORTANT: This file should be readable only by the Unix user running
the daemons.
hadoop.security.group.mapping.ldap.bind.user
The distinguished name of the user to bind as when connecting to the LDAP
server. This may be left blank if the LDAP server supports anonymous binds.
hadoop.security.group.mapping.ldap.bind.password.file
The path to a file containing the password of the bind user.
IMPORTANT: This file should be readable only by the Unix user running
the daemons.
hadoop.security.group.mapping.ldap.base
The search base for the LDAP connection. This is a distinguished name,
and will typically be the root of the LDAP directory.
hadoop.security.group.mapping.ldap.search.filter.user
(&(objectClass=user)(sAMAccountName={0}))
An additional filter to use when searching for LDAP users. The default will
usually be appropriate for Active Directory installations. If connecting to
an LDAP server with a non-AD schema, this should be replaced with
(&(objectClass=inetOrgPerson)(uid={0})). {0} is a special string used to
denote where the username fits into the filter.
hadoop.security.group.mapping.ldap.search.filter.group
(objectClass=group)
An additional filter to use when searching for LDAP groups. This should be
changed when resolving groups against a non-Active Directory installation.
posixGroups are currently not a supported group class.
hadoop.security.group.mapping.ldap.search.attr.member
member
The attribute of the group object that identifies the users that are
members of the group. The default will usually be appropriate for
any LDAP installation.
hadoop.security.group.mapping.ldap.search.attr.group.name
cn
The attribute of the group object that identifies the group name. The
default will usually be appropriate for all LDAP systems.
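Putting the LDAP properties above together, a minimal core-site.xml sketch for resolving groups against a directory server might look as follows (the server URL, search base, bind DN and password file path are all illustrative):

  <property>
    <name>hadoop.security.group.mapping</name>
    <value>org.apache.hadoop.security.LdapGroupsMapping</value>
  </property>
  <property>
    <name>hadoop.security.group.mapping.ldap.url</name>
    <value>ldap://ldap.example.com:389</value>
  </property>
  <property>
    <name>hadoop.security.group.mapping.ldap.base</name>
    <value>dc=example,dc=com</value>
  </property>
  <property>
    <name>hadoop.security.group.mapping.ldap.bind.user</name>
    <value>cn=hadoop,ou=services,dc=example,dc=com</value>
  </property>
  <property>
    <name>hadoop.security.group.mapping.ldap.bind.password.file</name>
    <value>/etc/hadoop/conf/ldap-bind.password</value>
  </property>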
hadoop.security.service.user.name.key
For those cases where the same RPC protocol is implemented by multiple
servers, this configuration is required for specifying the principal
name to use for the service when the client wishes to make an RPC call.
hadoop.rpc.protection
authentication
This field sets the quality of protection for secured SASL
connections. Possible values are authentication, integrity and privacy.
authentication means authentication only and no integrity or privacy;
integrity implies authentication and integrity are enabled; and privacy
implies all of authentication, integrity and privacy are enabled.
hadoop.work.around.non.threadsafe.getpwuid
false
Some operating systems or authentication modules are known to
have broken implementations of getpwuid_r and getgrgid_r, such that these
calls are not thread-safe. Symptoms of this problem include JVM crashes
with a stack trace inside these functions. If your system exhibits this
issue, enable this configuration parameter to include a lock around the
calls as a workaround.
An incomplete list of some systems known to have this issue is available
at http://wiki.apache.org/hadoop/KnownBrokenPwuidImplementations
hadoop.kerberos.kinit.command
kinit
Used to periodically renew Kerberos credentials when provided
to Hadoop. The default setting assumes that kinit is in the PATH of users
running the Hadoop client. Change this to the absolute path to kinit if this
is not the case.
hadoop.security.auth_to_local
Maps Kerberos principals to local user names.
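As a sketch, a rule set for a hypothetical EXAMPLE.COM realm that strips the realm from one- and two-component principals, and otherwise falls back to the default handling, could be:

  <property>
    <name>hadoop.security.auth_to_local</name>
    <value>
      RULE:[1:$1@$0](.*@EXAMPLE\.COM)s/@.*//
      RULE:[2:$1@$0](.*@EXAMPLE\.COM)s/@.*//
      DEFAULT
    </value>
  </property>

Each rule first formats the principal's components into a string ($1 is the first component, $0 the realm), then applies the sed-style substitution when the regular expression matches.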
io.file.buffer.size
4096
The size of the buffer for use in sequence files.
The size of this buffer should probably be a multiple of hardware
page size (4096 on Intel x86), and it determines how much data is
buffered during read and write operations.
io.bytes.per.checksum
512
The number of bytes per checksum. Must not be larger than
io.file.buffer.size.
io.skip.checksum.errors
false
If true, when a checksum error is encountered while
reading a sequence file, entries are skipped, instead of throwing an
exception.
io.compression.codecs
A comma-separated list of the compression codec classes that can
be used for compression/decompression. In addition to any classes specified
with this property (which take precedence), codec classes on the classpath
are discovered using a Java ServiceLoader.
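For example, to register the codec classes that ship with Hadoop explicitly, one might set:

  <property>
    <name>io.compression.codecs</name>
    <value>org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.BZip2Codec</value>
  </property>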
io.serializations
org.apache.hadoop.io.serializer.WritableSerialization,org.apache.hadoop.io.serializer.avro.AvroSpecificSerialization,org.apache.hadoop.io.serializer.avro.AvroReflectSerialization
A list of serialization classes that can be used for
obtaining serializers and deserializers.
io.seqfile.local.dir
${hadoop.tmp.dir}/io/local
The local directory where sequence files store intermediate
data files during merges. May be a comma-separated list of
directories on different devices in order to spread disk i/o.
Directories that do not exist are ignored.
io.map.index.skip
0
Number of index entries to skip between each entry.
Zero by default. Setting this to values larger than zero can
facilitate opening large MapFiles using less memory.
io.map.index.interval
128
A MapFile consists of two files: a data file (tuples) and an index file
(keys). For every io.map.index.interval records written in the
data file, an entry (record-key, data-file-position) is written
in the index file. This is to allow for doing binary search later
within the index file to look up records by their keys and get their
closest positions in the data file.
fs.defaultFS
file:///
The name of the default file system. A URI whose
scheme and authority determine the FileSystem implementation. The
URI's scheme determines the config property (fs.SCHEME.impl) naming
the FileSystem implementation class. The URI's authority is used to
determine the host, port, etc. for a filesystem.
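A typical core-site.xml override points the default filesystem at an HDFS NameNode (the host name below is hypothetical):

  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://namenode.example.com:8020</value>
  </property>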
fs.default.name
file:///
Deprecated. Use the fs.defaultFS property instead.
fs.trash.interval
0
Number of minutes after which the checkpoint
gets deleted.
If zero, the trash feature is disabled.
fs.trash.checkpoint.interval
0
Number of minutes between trash checkpoints.
Should be smaller than or equal to fs.trash.interval.
Every time the checkpointer runs it creates a new checkpoint
out of the current trash contents and removes checkpoints created more than
fs.trash.interval minutes ago.
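As a sketch, enabling trash with 24-hour retention and hourly checkpointing (both values are illustrative, in minutes):

  <property>
    <name>fs.trash.interval</name>
    <value>1440</value>
  </property>
  <property>
    <name>fs.trash.checkpoint.interval</name>
    <value>60</value>
  </property>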
fs.AbstractFileSystem.file.impl
org.apache.hadoop.fs.local.LocalFs
The AbstractFileSystem for file: uris.
fs.AbstractFileSystem.hdfs.impl
org.apache.hadoop.fs.Hdfs
The AbstractFileSystem for hdfs: uris.
fs.AbstractFileSystem.viewfs.impl
org.apache.hadoop.fs.viewfs.ViewFs
The AbstractFileSystem for the view file system (viewfs: uris),
i.e., the client-side mount table.
fs.ftp.host
0.0.0.0
FTP filesystem connects to this server
fs.ftp.host.port
21
FTP filesystem connects to fs.ftp.host on this port
fs.df.interval
60000
Disk usage statistics refresh interval in msec.
fs.s3.block.size
67108864
Block size to use when writing files to S3.
fs.s3.buffer.dir
${hadoop.tmp.dir}/s3
Determines where on the local filesystem the S3 filesystem
should store files before sending them to S3
(or after retrieving them from S3).
fs.s3.maxRetries
4
The maximum number of retries for reading or writing files to S3,
before we signal failure to the application.
fs.s3.sleepTimeSeconds
10
The number of seconds to sleep between each S3 retry.
fs.automatic.close
true
By default, FileSystem instances are automatically closed at program
exit using a JVM shutdown hook. Setting this property to false disables this
behavior. This is an advanced option that should only be used by server applications
requiring a more carefully orchestrated shutdown sequence.
fs.s3n.block.size
67108864
Block size to use when reading files using the native S3
filesystem (s3n: URIs).
io.seqfile.compress.blocksize
1000000
The minimum block size for compression in block compressed
SequenceFiles.
io.seqfile.lazydecompress
true
Should values of block-compressed SequenceFiles be decompressed
only when necessary.
io.seqfile.sorter.recordlimit
1000000
The limit on the number of records to be kept in memory in a spill
in SequenceFiles.Sorter.
io.mapfile.bloom.size
1048576
The size of BloomFilter-s used in BloomMapFile. Each time this many
keys are appended, the next BloomFilter is created (inside a DynamicBloomFilter).
Larger values minimize the number of filters, which slightly increases the performance,
but may waste too much space if the total number of keys is usually much smaller
than this number.
io.mapfile.bloom.error.rate
0.005
The rate of false positives in BloomFilter-s used in BloomMapFile.
As this value decreases, the size of BloomFilter-s increases exponentially. This
value is the probability of encountering false positives (default is 0.5%).
hadoop.util.hash.type
murmur
The default implementation of Hash. Currently this can take one of the
two values: 'murmur' to select MurmurHash and 'jenkins' to select JenkinsHash.
ipc.client.idlethreshold
4000
Defines the threshold number of connections after which
connections will be inspected for idleness.
ipc.client.kill.max
10
Defines the maximum number of clients to disconnect in one go.
ipc.client.connection.maxidletime
10000
The maximum time in msec after which a client will bring down the
connection to the server.
ipc.client.connect.max.retries
10
Indicates the number of retries a client will make to establish
a server connection.
ipc.client.connect.max.retries.on.timeouts
45
Indicates the number of retries a client will make on socket timeout
to establish a server connection.
ipc.server.listen.queue.size
128
Indicates the length of the listen queue for servers accepting
client connections.
ipc.server.tcpnodelay
false
Turn on/off Nagle's algorithm for the TCP socket connection on
the server. Setting to true disables the algorithm and may decrease latency
with a cost of more/smaller packets.
ipc.client.tcpnodelay
false
Turn on/off Nagle's algorithm for the TCP socket connection on
the client. Setting to true disables the algorithm and may decrease latency
with a cost of more/smaller packets.
hadoop.rpc.socket.factory.class.default
org.apache.hadoop.net.StandardSocketFactory
Default SocketFactory to use. This parameter is expected to be
formatted as "package.FactoryClassName".
hadoop.rpc.socket.factory.class.ClientProtocol
SocketFactory to use to connect to a DFS. If null or empty, use
hadoop.rpc.socket.factory.class.default. This socket factory is also used by
DFSClient to create sockets to DataNodes.
hadoop.socks.server
Address (host:port) of the SOCKS server to be used by the
SocksSocketFactory.
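To route client connections through a SOCKS proxy, the default socket factory is switched together with the proxy address (the gateway host below is hypothetical):

  <property>
    <name>hadoop.rpc.socket.factory.class.default</name>
    <value>org.apache.hadoop.net.SocksSocketFactory</value>
  </property>
  <property>
    <name>hadoop.socks.server</name>
    <value>socks-gw.example.com:1080</value>
  </property>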
net.topology.node.switch.mapping.impl
org.apache.hadoop.net.ScriptBasedMapping
The default implementation of the DNSToSwitchMapping. It
invokes a script specified in net.topology.script.file.name to resolve
node names. If the value for net.topology.script.file.name is not set, the
default value of DEFAULT_RACK is returned for all node names.
net.topology.impl
org.apache.hadoop.net.NetworkTopology
The default implementation of NetworkTopology, which is the classic three-layer one.
net.topology.script.file.name
The script name that should be invoked to resolve DNS names to
NetworkTopology names. Example: the script would take host.foo.bar as an
argument, and return /rack1 as the output.
net.topology.script.number.args
100
The max number of args that the script configured with
net.topology.script.file.name should be run with. Each arg is an
IP address.
net.topology.table.file.name
The file name for a topology file, which is used when the
net.topology.node.switch.mapping.impl property is set to
org.apache.hadoop.net.TableMapping. The file format is a two-column text
file, with columns separated by whitespace. The first column is a DNS or
IP address and the second column specifies the rack where the address maps.
If no entry corresponding to a host in the cluster is found, then
/default-rack is assumed.
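A sketch of a static rack mapping using TableMapping (the file path and its contents are illustrative):

  <property>
    <name>net.topology.node.switch.mapping.impl</name>
    <value>org.apache.hadoop.net.TableMapping</value>
  </property>
  <property>
    <name>net.topology.table.file.name</name>
    <value>/etc/hadoop/conf/topology.table</value>
  </property>

where /etc/hadoop/conf/topology.table might contain lines such as:

  host1.example.com /rack1
  host2.example.com /rack2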
file.stream-buffer-size
4096
The size of buffer to stream files.
The size of this buffer should probably be a multiple of hardware
page size (4096 on Intel x86), and it determines how much data is
buffered during read and write operations.
file.bytes-per-checksum
512
The number of bytes per checksum. Must not be larger than
file.stream-buffer-size
file.client-write-packet-size
65536
Packet size for clients to write
file.blocksize
67108864
Block size
file.replication
1
Replication factor
s3.stream-buffer-size
4096
The size of buffer to stream files.
The size of this buffer should probably be a multiple of hardware
page size (4096 on Intel x86), and it determines how much data is
buffered during read and write operations.
s3.bytes-per-checksum
512
The number of bytes per checksum. Must not be larger than
s3.stream-buffer-size
s3.client-write-packet-size
65536
Packet size for clients to write
s3.blocksize
67108864
Block size
s3.replication
3
Replication factor
s3native.stream-buffer-size
4096
The size of buffer to stream files.
The size of this buffer should probably be a multiple of hardware
page size (4096 on Intel x86), and it determines how much data is
buffered during read and write operations.
s3native.bytes-per-checksum
512
The number of bytes per checksum. Must not be larger than
s3native.stream-buffer-size
s3native.client-write-packet-size
65536
Packet size for clients to write
s3native.blocksize
67108864
Block size
s3native.replication
3
Replication factor
kfs.stream-buffer-size
4096
The size of buffer to stream files.
The size of this buffer should probably be a multiple of hardware
page size (4096 on Intel x86), and it determines how much data is
buffered during read and write operations.
kfs.bytes-per-checksum
512
The number of bytes per checksum. Must not be larger than
kfs.stream-buffer-size
kfs.client-write-packet-size
65536
Packet size for clients to write
kfs.blocksize
67108864
Block size
kfs.replication
3
Replication factor
ftp.stream-buffer-size
4096
The size of buffer to stream files.
The size of this buffer should probably be a multiple of hardware
page size (4096 on Intel x86), and it determines how much data is
buffered during read and write operations.
ftp.bytes-per-checksum
512
The number of bytes per checksum. Must not be larger than
ftp.stream-buffer-size
ftp.client-write-packet-size
65536
Packet size for clients to write
ftp.blocksize
67108864
Block size
ftp.replication
3
Replication factor
tfile.io.chunk.size
1048576
Value chunk size in bytes. Defaults to
1MB. Values shorter than the chunk size are
guaranteed to have a known value length at read time (see also
TFile.Reader.Scanner.Entry.isValueLengthKnown()).
tfile.fs.output.buffer.size
262144
Buffer size used for FSDataOutputStream in bytes.
tfile.fs.input.buffer.size
262144
Buffer size used for FSDataInputStream in bytes.
hadoop.http.authentication.type
simple
Defines the authentication used for Hadoop's HTTP web endpoints.
Supported values are: simple | kerberos | #AUTHENTICATION_HANDLER_CLASSNAME#
hadoop.http.authentication.token.validity
36000
Indicates how long (in seconds) an authentication token is valid before it has
to be renewed.
hadoop.http.authentication.signature.secret.file
${user.home}/hadoop-http-auth-signature-secret
The signature secret for signing the authentication tokens.
If not set, a random secret is generated at startup time.
The same secret should be used for JT/NN/DN/TT configurations.
hadoop.http.authentication.cookie.domain
The domain to use for the HTTP cookie that stores the authentication token.
For authentication to work correctly across all Hadoop nodes' web-consoles,
the domain must be set correctly.
IMPORTANT: when using IP addresses, browsers ignore cookies with domain settings.
For this setting to work properly, all nodes in the cluster must be configured
to generate URLs with hostname.domain names in them.
hadoop.http.authentication.simple.anonymous.allowed
true
Indicates if anonymous requests are allowed when using 'simple' authentication.
hadoop.http.authentication.kerberos.principal
HTTP/_HOST@LOCALHOST
Indicates the Kerberos principal to be used for HTTP endpoint.
The principal MUST start with 'HTTP/' as per Kerberos HTTP SPNEGO specification.
hadoop.http.authentication.kerberos.keytab
${user.home}/hadoop.keytab
Location of the keytab file with the credentials for the principal.
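Taken together, a hypothetical kerberized web-console setup (realm, domain and keytab path are illustrative) might look as follows:

  <property>
    <name>hadoop.http.authentication.type</name>
    <value>kerberos</value>
  </property>
  <property>
    <name>hadoop.http.authentication.kerberos.principal</name>
    <value>HTTP/_HOST@EXAMPLE.COM</value>
  </property>
  <property>
    <name>hadoop.http.authentication.kerberos.keytab</name>
    <value>/etc/hadoop/conf/http.keytab</value>
  </property>
  <property>
    <name>hadoop.http.authentication.cookie.domain</name>
    <value>example.com</value>
  </property>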
dfs.ha.fencing.methods
List of fencing methods to use for service fencing. May contain
builtin methods (e.g. shell and sshfence) or user-defined methods.
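For illustration, a value that tries sshfence first and falls back to a shell command that always succeeds (methods are listed one per line):

  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>
      sshfence
      shell(/bin/true)
    </value>
  </property>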
dfs.ha.fencing.ssh.connect-timeout
30000
SSH connection timeout, in milliseconds, to use with the builtin
sshfence fencer.
dfs.ha.fencing.ssh.private-key-files
The SSH private key files to use with the builtin sshfence fencer.
ha.zookeeper.quorum
A list of ZooKeeper server addresses, separated by commas, that are
to be used by the ZKFailoverController in automatic failover.
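A sketch with three hypothetical ZooKeeper servers:

  <property>
    <name>ha.zookeeper.quorum</name>
    <value>zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181</value>
  </property>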
ha.zookeeper.session-timeout.ms
5000
The session timeout to use when the ZKFC connects to ZooKeeper.
Setting this to a lower value means that server crashes
will be detected more quickly, but risks triggering failover too
aggressively in the case of a transient error or network blip.
ha.zookeeper.parent-znode
/hadoop-ha
The ZooKeeper znode under which the ZK failover controller stores
its information. Note that the nameservice ID is automatically
appended to this znode, so it is not normally necessary to
configure this, even in a federated environment.
ha.zookeeper.acl
world:anyone:rwcda
A comma-separated list of ZooKeeper ACLs to apply to the znodes
used by automatic failover. These ACLs are specified in the same
format as used by the ZooKeeper CLI.
If the ACL itself contains secrets, you may instead specify a
path to a file, prefixed with the '@' symbol, and the value of
this configuration will be loaded from within.
ha.zookeeper.auth
A comma-separated list of ZooKeeper authentications to add when
connecting to ZooKeeper. These are specified in the same format
as used by the "addauth" command in the ZK CLI. It is
important that the authentications specified here are sufficient
to access znodes with the ACL specified in ha.zookeeper.acl.
If the auths contain secrets, you may instead specify a
path to a file, prefixed with the '@' symbol, and the value of
this configuration will be loaded from within.
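As a sketch of the '@' file indirection described above (the path and digest credentials are illustrative):

  <property>
    <name>ha.zookeeper.auth</name>
    <value>@/etc/hadoop/conf/zk-auth.txt</value>
  </property>

where zk-auth.txt might contain a single line such as digest:hdfs-zkfcs:mypassword.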
hadoop.http.staticuser.user
dr.who
The user name to filter as, on static web filters
while rendering content. An example use is the HDFS
web UI (user to be used for browsing files).
hadoop.ssl.keystores.factory.class
org.apache.hadoop.security.ssl.FileBasedKeyStoresFactory
The keystores factory to use for retrieving certificates.
hadoop.ssl.require.client.cert
false
Whether client certificates are required
hadoop.ssl.hostname.verifier
DEFAULT
The hostname verifier to provide for HttpsURLConnections.
Valid values are: DEFAULT, STRICT, STRICT_IE6, DEFAULT_AND_LOCALHOST and
ALLOW_ALL.
hadoop.ssl.server.conf
ssl-server.xml
Resource file from which ssl server keystore information will be extracted.
This file is looked up in the classpath; typically it should be in the Hadoop
conf/ directory.
hadoop.ssl.client.conf
ssl-client.xml
Resource file from which ssl client keystore information will be extracted.
This file is looked up in the classpath; typically it should be in the Hadoop
conf/ directory.