mapreduce.jobtracker.jobhistory.location (no default)
    If the job tracker is static, the history files are stored in this single well-known place. If no value is set here, by default they are stored on the local file system at ${hadoop.log.dir}/history.

mapreduce.jobtracker.jobhistory.task.numberprogresssplits (default: 12)
    Every task attempt progresses from 0.0 to 1.0 (unless it fails or is killed). For each task attempt, certain statistics are recorded over each twelfth of the progress range. This property sets the number of intervals the entire progress range is divided into. Higher values give more precision to the recorded data but cost more memory in the job tracker at runtime; each increment in this attribute costs 16 bytes per running task.

mapreduce.job.userhistorylocation (no default)
    A user-specified location to store the history files of a particular job. If nothing is specified, the logs are stored in the output directory, under "_logs/history/". Logging can be disabled by setting the value to "none".

mapreduce.jobtracker.jobhistory.completed.location (no default)
    The single well-known location where completed job history files are stored. If nothing is specified, the files are stored at ${mapreduce.jobtracker.jobhistory.location}/done.

mapreduce.job.committer.setup.cleanup.needed (default: true)
    true if the job needs job-setup and job-cleanup; false otherwise.

mapreduce.task.io.sort.factor (default: 10)
    The number of streams to merge at once while sorting files. This determines the number of open file handles.

mapreduce.task.io.sort.mb (default: 100)
    The total amount of buffer memory to use while sorting files, in megabytes. The default gives each merge stream 1 MB, which should minimize seeks.

mapreduce.map.sort.spill.percent (default: 0.80)
    The soft limit in the serialization buffer. Once it is reached, a thread begins spilling the contents to disk in the background. Note that collection does not block if this threshold is exceeded while a spill is already in progress, so spills may be larger than this threshold when it is set to less than 0.5.

mapreduce.jobtracker.address (default: local)
    The host and port that the MapReduce job tracker runs at. If "local", jobs are run in-process as a single map and reduce task.

mapreduce.jobtracker.http.address (default: 0.0.0.0:50030)
    The job tracker HTTP server address and port the server will listen on. If the port is 0, the server starts on a free port.

mapreduce.jobtracker.handler.count (default: 10)
    The number of server threads for the JobTracker. This should be roughly 4% of the number of tasktracker nodes.

mapreduce.tasktracker.report.address (default: 127.0.0.1:0)
    The interface and port that the task tracker server listens on. Since it is only connected to by the tasks, it uses the local interface. EXPERT ONLY. Should only be changed if your host does not have the loopback interface.

mapreduce.cluster.local.dir (default: ${hadoop.tmp.dir}/mapred/local)
    The local directory where MapReduce stores intermediate data files. May be a comma-separated list of directories on different devices in order to spread disk I/O. Directories that do not exist are ignored.

mapreduce.jobtracker.system.dir (default: ${hadoop.tmp.dir}/mapred/system)
    The directory where MapReduce stores control files.

mapreduce.jobtracker.staging.root.dir (default: ${hadoop.tmp.dir}/mapred/staging)
    The root of the staging area for users' job files. In practice, this should be the directory where users' home directories are located (usually /user).

mapreduce.cluster.temp.dir (default: ${hadoop.tmp.dir}/mapred/temp)
    A shared directory for temporary files.
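The map-side sort properties above (mapreduce.task.io.sort.mb, mapreduce.task.io.sort.factor and mapreduce.map.sort.spill.percent) are usually tuned together in mapred-site.xml. A minimal sketch; the values are illustrative, not recommendations:

    <configuration>
      <!-- Larger map-side sort buffer, in MB; illustrative value -->
      <property>
        <name>mapreduce.task.io.sort.mb</name>
        <value>200</value>
      </property>
      <!-- Merge more spill streams per pass, at the cost of more open file handles -->
      <property>
        <name>mapreduce.task.io.sort.factor</name>
        <value>25</value>
      </property>
      <!-- Begin background spills slightly earlier -->
      <property>
        <name>mapreduce.map.sort.spill.percent</name>
        <value>0.75</value>
      </property>
    </configuration>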
mapreduce.tasktracker.local.dir.minspacestart (default: 0)
    If the space in mapreduce.cluster.local.dir drops under this, do not ask for more tasks. Value in bytes.

mapreduce.tasktracker.local.dir.minspacekill (default: 0)
    If the space in mapreduce.cluster.local.dir drops under this, do not ask for more tasks until all the current ones have finished and cleaned up. Also, to save the rest of the running tasks, kill one of them to free up some space. Start with the reduce tasks, then go with the ones that have finished the least. Value in bytes.

mapreduce.jobtracker.expire.trackers.interval (default: 600000)
    Expert: The time interval, in milliseconds, after which a tasktracker is declared 'lost' if it does not send heartbeats.

mapreduce.tasktracker.instrumentation (default: org.apache.hadoop.mapred.TaskTrackerMetricsInst)
    Expert: The instrumentation class to associate with each TaskTracker.

mapreduce.tasktracker.resourcecalculatorplugin (no default)
    Name of the class whose instance will be used to query resource information on the tasktracker. The class must be an instance of org.apache.hadoop.util.ResourceCalculatorPlugin. If the value is null, the tasktracker attempts to use a class appropriate to the platform. Currently, the only supported platform is Linux.

mapreduce.tasktracker.taskmemorymanager.monitoringinterval (default: 5000)
    The interval, in milliseconds, that the tasktracker waits between two cycles of monitoring its tasks' memory usage. Used only if tasks' memory management is enabled via mapred.tasktracker.tasks.maxmemory.

mapreduce.tasktracker.tasks.sleeptimebeforesigkill (default: 5000)
    The time, in milliseconds, the tasktracker waits before sending a SIGKILL to a task after it has been sent a SIGTERM. This is currently not used on Windows, where tasks are just sent a SIGTERM.

mapreduce.job.maps (default: 2)
    The default number of map tasks per job. Ignored when mapreduce.jobtracker.address is "local".

mapreduce.job.reduces (default: 1)
    The default number of reduce tasks per job. Typically set to 99% of the cluster's reduce capacity, so that if a node fails the reduces can still be executed in a single wave. Ignored when mapreduce.jobtracker.address is "local".

mapreduce.jobtracker.restart.recover (default: false)
    "true" to enable (job) recovery upon restart, "false" to start afresh.

mapreduce.jobtracker.jobhistory.block.size (default: 3145728)
    The block size of the job history file. Since job recovery uses job history, it's important to dump job history to disk as soon as possible. Note that this is an expert-level parameter. The default value is 3 MB.

mapreduce.jobtracker.taskscheduler (default: org.apache.hadoop.mapred.JobQueueTaskScheduler)
    The class responsible for scheduling the tasks.

mapreduce.jobtracker.split.metainfo.maxsize (default: 10000000)
    The maximum permissible size of the split metainfo file. The JobTracker will not attempt to read split metainfo files bigger than the configured value. No limit if set to -1.

mapreduce.jobtracker.taskscheduler.maxrunningtasks.perjob (no default)
    The maximum number of running tasks for a job before it gets preempted. No limit if undefined.

mapreduce.map.maxattempts (default: 4)
    Expert: The maximum number of attempts per map task. In other words, the framework will try to execute a map task this many times before giving up on it.

mapreduce.reduce.maxattempts (default: 4)
    Expert: The maximum number of attempts per reduce task. In other words, the framework will try to execute a reduce task this many times before giving up on it.

mapreduce.reduce.shuffle.parallelcopies (default: 5)
    The default number of parallel transfers run by reduce during the copy (shuffle) phase.
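One concrete reading of the sizing guidance for mapreduce.job.reduces: on a hypothetical cluster of 100 tasktrackers with 2 reduce slots each (200 slots in total), 99% of capacity is 198 reduces, and losing one node (2 slots) still leaves exactly enough slots to finish in a single wave:

    <configuration>
      <!-- 99% of the 200 reduce slots on the assumed 100-node, 2-slot cluster -->
      <property>
        <name>mapreduce.job.reduces</name>
        <value>198</value>
      </property>
    </configuration>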
mapreduce.reduce.shuffle.connect.timeout (default: 180000)
    Expert: The maximum amount of time (in milliseconds) a reduce task spends trying to connect to a tasktracker to fetch map output.

mapreduce.reduce.shuffle.read.timeout (default: 180000)
    Expert: The maximum amount of time (in milliseconds) a reduce task waits for map output data to become available for reading after obtaining a connection.

mapreduce.task.timeout (default: 600000)
    The number of milliseconds before a task is terminated if it neither reads an input, writes an output, nor updates its status string.

mapreduce.tasktracker.map.tasks.maximum (default: 2)
    The maximum number of map tasks that will be run simultaneously by a task tracker.

mapreduce.tasktracker.reduce.tasks.maximum (default: 2)
    The maximum number of reduce tasks that will be run simultaneously by a task tracker.

mapreduce.jobtracker.retiredjobs.cache.size (default: 1000)
    The number of retired job statuses to keep in the cache.

mapreduce.tasktracker.outofband.heartbeat (default: false)
    Expert: Set this to true to let the tasktracker send an out-of-band heartbeat on task completion for better latency.

mapreduce.jobtracker.jobhistory.lru.cache.size (default: 5)
    The number of job history files loaded in memory. The jobs are loaded when they are first accessed. The cache is cleared based on LRU.

mapreduce.jobtracker.instrumentation (default: org.apache.hadoop.mapred.JobTrackerMetricsInst)
    Expert: The instrumentation class to associate with each JobTracker.

mapred.child.java.opts (default: -Xmx200m)
    Java opts for the task tracker child processes. The following symbol, if present, is interpolated: @taskid@ is replaced by the current TaskID. Any other occurrences of '@' go unchanged. For example, to enable verbose GC logging to a file named for the taskid in /tmp and to set the heap maximum to one gigabyte, pass a value of: -Xmx1024m -verbose:gc -Xloggc:/tmp/@taskid@.gc
    The configuration variable mapred.child.ulimit can be used to control the maximum virtual memory of the child processes.

mapred.child.env (no default)
    User-added environment variables for the task tracker child processes. Examples: 1) A=foo sets the env variable A to foo. 2) B=$B:c inherits the tasktracker's B env variable.

mapred.child.ulimit (no default)
    The maximum virtual memory, in KB, of a process launched by the Map-Reduce framework. This can be used to control both the Mapper/Reducer tasks and applications using Hadoop Pipes, Hadoop Streaming, etc. By default it is left unspecified so cluster admins can control it via limits.conf and other such relevant mechanisms. Note: mapred.child.ulimit must be greater than or equal to the -Xmx passed to the JVM, or the VM might not start.

mapreduce.task.tmp.dir (default: ./tmp)
    The tmp directory for map and reduce tasks. If the value is an absolute path, it is used directly. Otherwise, it is prepended with the task's working directory. Java tasks are executed with the option -Djava.io.tmpdir='the absolute path of the tmp dir'. Pipes and streaming are set up with the environment variable TMPDIR='the absolute path of the tmp dir'.

mapreduce.map.log.level (default: INFO)
    The logging level for the map task. The allowed levels are OFF, FATAL, ERROR, WARN, INFO, DEBUG, TRACE and ALL.

mapreduce.reduce.log.level (default: INFO)
    The logging level for the reduce task. The allowed levels are OFF, FATAL, ERROR, WARN, INFO, DEBUG, TRACE and ALL.

mapreduce.reduce.merge.inmem.threshold (default: 1000)
    The threshold, in terms of the number of files, for the in-memory merge process. When the threshold number of files accumulates, the in-memory merge is initiated and spilled to disk. A value of 0 or less means there is no threshold, and the merge is triggered solely by the ramfs's memory consumption.
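Returning to mapred.child.java.opts and mapred.child.ulimit above: the heap given to the child JVM via -Xmx must fit under the ulimit, which is expressed in KB. A sketch using the GC-logging example from the description, with an assumed 1.5 GB virtual-memory cap:

    <configuration>
      <!-- 1 GB heap; @taskid@ is interpolated into the GC log file name -->
      <property>
        <name>mapred.child.java.opts</name>
        <value>-Xmx1024m -verbose:gc -Xloggc:/tmp/@taskid@.gc</value>
      </property>
      <!-- Virtual-memory cap in KB; must be >= the -Xmx above (1.5 GB here) -->
      <property>
        <name>mapred.child.ulimit</name>
        <value>1572864</value>
      </property>
    </configuration>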
mapreduce.reduce.shuffle.merge.percent (default: 0.66)
    The usage threshold at which an in-memory merge is initiated, expressed as a percentage of the total memory allocated to storing in-memory map outputs, as defined by mapreduce.reduce.shuffle.input.buffer.percent.

mapreduce.reduce.shuffle.input.buffer.percent (default: 0.70)
    The percentage of memory to be allocated from the maximum heap size for storing map outputs during the shuffle.

mapreduce.reduce.input.buffer.percent (default: 0.0)
    The percentage of memory, relative to the maximum heap size, to retain map outputs during the reduce. When the shuffle is concluded, any remaining map outputs in memory must consume less than this threshold before the reduce can begin.

mapreduce.reduce.markreset.buffer.percent (default: 0.0)
    The percentage of memory, relative to the maximum heap size, to be used for caching values when using the mark-reset functionality.

mapreduce.map.speculative (default: true)
    If true, multiple instances of some map tasks may be executed in parallel.

mapreduce.reduce.speculative (default: true)
    If true, multiple instances of some reduce tasks may be executed in parallel.

mapreduce.job.speculative.speculativecap (default: 0.1)
    The maximum percentage (0-1) of running tasks that can be speculatively re-executed at any time.

mapreduce.job.speculative.slowtaskthreshold (default: 1.0)
    The number of standard deviations by which a task's average progress rate must be lower than the average of all running tasks for the task to be considered too slow.

mapreduce.job.speculative.slownodethreshold (default: 1.0)
    The number of standard deviations by which a tasktracker's average map and reduce progress rates (finishTime - dispatchTime) must be lower than the average of all successful map/reduce tasks for the tasktracker to be considered too slow to be given a speculative task.

mapreduce.job.jvm.numtasks (default: 1)
    How many tasks to run per JVM. If set to -1, there is no limit.

mapreduce.input.fileinputformat.split.minsize (default: 0)
    The minimum size chunk that map input should be split into. Note that some file formats may have minimum split sizes that take priority over this setting.

mapreduce.jobtracker.maxtasks.perjob (default: -1)
    The maximum number of tasks for a single job. A value of -1 indicates that there is no maximum.

mapreduce.client.submit.file.replication (default: 10)
    The replication level for submitted job files. This should be around the square root of the number of nodes.

mapreduce.tasktracker.dns.interface (default: default)
    The name of the network interface from which a task tracker should report its IP address.

mapreduce.tasktracker.dns.nameserver (default: default)
    The host name or IP address of the name server (DNS) which a TaskTracker should use to determine the host name used by the JobTracker for communication and display purposes.

mapreduce.tasktracker.http.threads (default: 40)
    The number of worker threads for the HTTP server. This is used for map output fetching.

mapreduce.tasktracker.http.address (default: 0.0.0.0:50060)
    The task tracker HTTP server address and port. If the port is 0, the server starts on a free port.

mapreduce.task.files.preserve.failedtasks (default: false)
    Should the files for failed tasks be kept? This should only be used on jobs that are failing, because the storage is never reclaimed. It also prevents the map outputs from being erased from the reduce directory as they are consumed.
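Speculative execution and JVM reuse are commonly adjusted per job. A sketch that keeps map-side speculation, disables it for reduces, and reuses each JVM for up to 10 tasks (illustrative values):

    <configuration>
      <!-- Do not launch duplicate attempts of slow reduce tasks -->
      <property>
        <name>mapreduce.reduce.speculative</name>
        <value>false</value>
      </property>
      <!-- Amortize JVM startup cost across tasks; -1 would mean unlimited reuse -->
      <property>
        <name>mapreduce.job.jvm.numtasks</name>
        <value>10</value>
      </property>
    </configuration>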
mapreduce.output.fileoutputformat.compress (default: false)
    Should the job outputs be compressed?

mapreduce.output.fileoutputformat.compression.type (default: RECORD)
    If the job outputs are to be compressed as SequenceFiles, how should they be compressed? Should be one of NONE, RECORD or BLOCK.

mapreduce.output.fileoutputformat.compression.codec (default: org.apache.hadoop.io.compress.DefaultCodec)
    If the job outputs are compressed, how should they be compressed?

mapreduce.map.output.compress (default: false)
    Should the outputs of the maps be compressed before being sent across the network? Uses SequenceFile compression.

mapreduce.map.output.compress.codec (default: org.apache.hadoop.io.compress.DefaultCodec)
    If the map outputs are compressed, how should they be compressed?

map.sort.class (default: org.apache.hadoop.util.QuickSort)
    The default sort class for sorting keys.

mapreduce.task.userlog.limit.kb (default: 0)
    The maximum size of the user logs of each task, in KB. 0 disables the cap.

mapreduce.job.userlog.retain.hours (default: 24)
    The maximum time, in hours, for which the user logs are retained after job completion.

mapreduce.jobtracker.hosts.filename (no default)
    Names a file that contains the list of nodes that may connect to the jobtracker. If the value is empty, all hosts are permitted.

mapreduce.jobtracker.hosts.exclude.filename (no default)
    Names a file that contains the list of hosts that should be excluded by the jobtracker. If the value is empty, no hosts are excluded.

mapreduce.jobtracker.heartbeats.in.second (default: 100)
    Expert: Approximate number of heartbeats that can arrive at the JobTracker in a second. Assuming each RPC can be processed in 10 ms, the default allows 100 RPCs per second.

mapreduce.jobtracker.tasktracker.maxblacklists (default: 4)
    The number of times a tasktracker may be blacklisted by individual jobs before it can be blacklisted across all jobs. The tracker will be given tasks again later (after a day), and becomes a healthy tracker after a restart.

mapreduce.job.maxtaskfailures.per.tracker (default: 4)
    The number of task failures on a tasktracker for a given job after which new tasks of that job are no longer assigned to it.

mapreduce.client.output.filter (default: FAILED)
    The filter controlling which task userlogs are sent to the console of the JobClient. The permissible options are NONE, KILLED, FAILED, SUCCEEDED and ALL.

mapreduce.client.completion.pollinterval (default: 5000)
    The interval (in milliseconds) at which the JobClient polls the JobTracker for updates about job status. You may want to set this to a lower value to make tests run faster on a single-node system. Adjusting this value in production may lead to unwanted client-server traffic.

mapreduce.client.progressmonitor.pollinterval (default: 1000)
    The interval (in milliseconds) at which the JobClient reports status to the console and checks for job completion. You may want to set this to a lower value to make tests run faster on a single-node system. Adjusting this value in production may lead to unwanted client-server traffic.

mapreduce.jobtracker.persist.jobstatus.active (default: true)
    Indicates whether persistence of job status information is active.

mapreduce.jobtracker.persist.jobstatus.hours (default: 1)
    The number of hours job status information is persisted in DFS. The job status information will be available after it drops out of the memory queue and between jobtracker restarts. With a value of zero, the job status information is not persisted in DFS at all.

mapreduce.jobtracker.persist.jobstatus.dir (default: /jobtracker/jobsInfo)
    The directory where job status information is persisted in a file system, to be available after it drops out of the memory queue and between jobtracker restarts.
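A common combination of the compression properties listed above: compress the intermediate map output to cut shuffle traffic, and block-compress the final job output. A sketch using the default codec named in this file:

    <configuration>
      <!-- Compress map output before it crosses the network -->
      <property>
        <name>mapreduce.map.output.compress</name>
        <value>true</value>
      </property>
      <property>
        <name>mapreduce.map.output.compress.codec</name>
        <value>org.apache.hadoop.io.compress.DefaultCodec</value>
      </property>
      <!-- Compress job output as BLOCK-compressed SequenceFiles -->
      <property>
        <name>mapreduce.output.fileoutputformat.compress</name>
        <value>true</value>
      </property>
      <property>
        <name>mapreduce.output.fileoutputformat.compression.type</name>
        <value>BLOCK</value>
      </property>
    </configuration>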
mapreduce.task.profile (default: false)
    Whether the system should collect profiler information for some of the tasks in this job. The information is stored in the user log directory. The value is "true" if task profiling is enabled.

mapreduce.task.profile.maps (default: 0-2)
    The ranges of map tasks to profile. mapreduce.task.profile has to be set to true for this value to take effect.

mapreduce.task.profile.reduces (default: 0-2)
    The ranges of reduce tasks to profile. mapreduce.task.profile has to be set to true for this value to take effect.

mapreduce.task.skip.start.attempts (default: 2)
    The number of task attempts AFTER which skip mode will be kicked off. When skip mode is kicked off, the task reports to the TaskTracker the range of records it will process next, so that on failures the TaskTracker knows which records are possibly bad. On further executions, those records are skipped.

mapreduce.map.skip.proc.count.autoincr (default: true)
    If set to true, SkipBadRecords.COUNTER_MAP_PROCESSED_RECORDS is incremented by MapRunner after invoking the map function. This value must be set to false for applications that process records asynchronously or buffer input records, for example streaming; in such cases the application should increment this counter on its own.

mapreduce.reduce.skip.proc.count.autoincr (default: true)
    If set to true, SkipBadRecords.COUNTER_REDUCE_PROCESSED_GROUPS is incremented by the framework after invoking the reduce function. This value must be set to false for applications that process records asynchronously or buffer input records, for example streaming; in such cases the application should increment this counter on its own.

mapreduce.job.skip.outdir (no default)
    If no value is specified here, the skipped records are written to the output directory at _logs/skip. Writing skipped records can be disabled by setting the value to "none".

mapreduce.map.skip.maxrecords (default: 0)
    The number of acceptable skip records surrounding the bad record, PER bad record, in the mapper. The number includes the bad record as well. To turn off detection/skipping of bad records, set the value to 0. The framework tries to narrow down the skipped range by retrying until this threshold is met OR all attempts are exhausted for the task. Set the value to Long.MAX_VALUE to indicate that the framework need not narrow down; whatever records get skipped (application-dependent) are acceptable.

mapreduce.reduce.skip.maxgroups (default: 0)
    The number of acceptable skip groups surrounding the bad group, PER bad group, in the reducer. The number includes the bad group as well. To turn off detection/skipping of bad groups, set the value to 0. The framework tries to narrow down the skipped range by retrying until this threshold is met OR all attempts are exhausted for the task. Set the value to Long.MAX_VALUE to indicate that the framework need not narrow down; whatever groups get skipped (application-dependent) are acceptable.

mapreduce.job.end-notification.retry.attempts (default: 0)
    How many times Hadoop should attempt to contact the notification URL.

mapreduce.job.end-notification.retry.interval (default: 30000)
    Time, in milliseconds, between notification URL retry calls.

mapreduce.jobtracker.taskcache.levels (default: 2)
    The maximum level of the task cache. For example, if the level is 2, the tasks cached are at the host level and at the rack level.

mapreduce.job.queuename (default: default)
    Queue to which a job is submitted. This must match one of the queues defined in mapred-queues.xml for the system. Also, the ACL setup for the queue must allow the current user to submit a job to it. Before specifying a queue, ensure that the system is configured with the queue and that access is allowed for submitting jobs to it.
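Profiling is enabled per job with the three profile properties above. A sketch that profiles the first two map tasks and the first three reduce tasks (the ranges are illustrative; they use the same "m-n" syntax as the defaults):

    <configuration>
      <property>
        <name>mapreduce.task.profile</name>
        <value>true</value>
      </property>
      <!-- Profile only map tasks 0 and 1 -->
      <property>
        <name>mapreduce.task.profile.maps</name>
        <value>0-1</value>
      </property>
      <!-- Profile reduce tasks 0 through 2 -->
      <property>
        <name>mapreduce.task.profile.reduces</name>
        <value>0-2</value>
      </property>
    </configuration>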
mapreduce.cluster.acls.enabled (default: false)
    Specifies whether ACLs should be checked to authorize users performing various queue and job level operations. ACLs are disabled by default. If enabled, access control checks are made by the JobTracker and TaskTracker when users request queue operations, such as submitting a job to a queue or killing a job in the queue, and job operations, such as viewing job details (see mapreduce.job.acl-view-job) or modifying the job (see mapreduce.job.acl-modify-job), via Map/Reduce APIs, RPCs, or the console and web user interfaces. To enable this flag, set mapreduce.cluster.acls.enabled to true in mapred-site.xml on the JobTracker node and on all TaskTracker nodes.

mapreduce.job.acl-modify-job (no default)
    Job-specific access-control list for 'modifying' the job. It is only used if authorization is enabled in Map/Reduce by setting the configuration property mapreduce.cluster.acls.enabled to true. It specifies the list of users and/or groups who can perform modification operations on the job. The format for specifying a list of users and groups is "user1,user2 group1,group2". If set to '*', all users/groups may modify this job. If set to ' ' (i.e. a space), no one may. This configuration guards all modifications to the job, covering the following operations:
    o killing this job
    o killing a task of this job, failing a task of this job
    o setting the priority of this job
    Each of these operations is also protected by the per-queue ACL "acl-administer-jobs" configured via mapred-queues.xml, so a caller needs to satisfy either the queue-level ACL or the job-level ACL. Irrespective of this ACL configuration, (a) the job owner, (b) the user who started the cluster, (c) cluster administrators configured via mapreduce.cluster.administrators, and (d) queue administrators of the queue to which the job was submitted (configured via acl-administer-jobs for the specific queue in mapred-queues.xml) can perform all modification operations on a job. By default, nobody else can perform modification operations on a job.

mapreduce.job.acl-view-job (no default)
    Job-specific access-control list for 'viewing' the job. It is only used if authorization is enabled in Map/Reduce by setting the configuration property mapreduce.cluster.acls.enabled to true. It specifies the list of users and/or groups who can view private details about the job. The format for specifying a list of users and groups is "user1,user2 group1,group2". If set to '*', all users/groups may view this job. If set to ' ' (i.e. a space), no one may. This configuration guards some of the job views and at present only protects APIs that can return possibly sensitive information about the job owner:
    o job-level counters
    o task-level counters
    o tasks' diagnostic information
    o task logs displayed on the TaskTracker web UI
    o job.xml shown by the JobTracker's web UI
    Every other piece of job information is still accessible by any user, e.g. JobStatus, JobProfile, the list of jobs in the queue, etc. Irrespective of this ACL configuration, (a) the job owner, (b) the user who started the cluster, (c) cluster administrators configured via mapreduce.cluster.administrators, and (d) queue administrators of the queue to which the job was submitted (configured via acl-administer-jobs for the specific queue in mapred-queues.xml) can perform all view operations on a job. By default, nobody else can perform view operations on a job.
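Tying the ACL properties together: the cluster-wide switch belongs in mapred-site.xml on the JobTracker and every TaskTracker, while the per-job lists go in the job's own configuration. A sketch with hypothetical user and group names:

    <!-- mapred-site.xml on the JobTracker and all TaskTracker nodes -->
    <property>
      <name>mapreduce.cluster.acls.enabled</name>
      <value>true</value>
    </property>

    <!-- per-job configuration; alice, bob and ops-team are hypothetical -->
    <property>
      <name>mapreduce.job.acl-view-job</name>
      <value>alice,bob ops-team</value>
    </property>
    <property>
      <name>mapreduce.job.acl-modify-job</name>
      <value>alice</value>
    </property>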
mapreduce.jobtracker.webinterface.trusted (default: false)
    If set to true, the web interface of the JobTracker will include security-sensitive actions such as killing a job. Leave this option false if untrusted users have access to the web interface.

mapreduce.tasktracker.indexcache.mb (default: 10)
    The maximum memory a task tracker allows for the index cache used when serving map outputs to reducers.

mapreduce.tasktracker.cache.local.size (default: 10737418240)
    The number of bytes to allocate in each local TaskTracker directory for holding Distributed Cache data.

mapreduce.tasktracker.cache.local.numberdirectories (default: 10000)
    The maximum number of subdirectories that should be created in any particular distributed cache store. After this many directories have been created, cache items will be expunged regardless of whether the total size threshold has been exceeded.

mapreduce.task.combine.progress.records (default: 10000)
    The number of records to process during combine output collection before sending a progress notification to the TaskTracker.

mapreduce.task.merge.progress.records (default: 10000)
    The number of records to process during merge before sending a progress notification to the TaskTracker.

mapreduce.job.reduce.slowstart.completedmaps (default: 0.05)
    Fraction of the job's map tasks that should be complete before reduces are scheduled for the job.

mapreduce.job.complete.cancel.delegation.tokens (default: true)
    If false, delegation tokens are not unregistered or cancelled from renewal when the job completes, because the same tokens may be used by spawned jobs.

mapreduce.tasktracker.taskcontroller (default: org.apache.hadoop.mapred.DefaultTaskController)
    The TaskController used to launch and manage task execution.

mapreduce.tasktracker.group (no default)
    Expert: Group to which the TaskTracker belongs. If LinuxTaskController is configured via mapreduce.tasktracker.taskcontroller, the group owner of the task-controller binary should be the same as this group.

mapreduce.tasktracker.healthchecker.script.path (no default)
    Absolute path to the script that is periodically run by the node health monitoring service to determine whether the node is healthy. If the value of this key is empty or the file does not exist at the configured location, the node health monitoring service is not started.

mapreduce.tasktracker.healthchecker.interval (default: 60000)
    Frequency, in milliseconds, at which the node health script is run.

mapreduce.tasktracker.healthchecker.script.timeout (default: 600000)
    Time after which the node health script is killed if unresponsive, and considered to have failed.

mapreduce.tasktracker.healthchecker.script.args (no default)
    Comma-separated list of arguments to be passed to the node health script when it is launched.
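As a closing example, the four health-checker properties above work as a unit: the monitoring service only starts when the script path exists. A sketch with a hypothetical script path and arguments:

    <configuration>
      <!-- /opt/hadoop/bin/check_node.sh is a hypothetical script -->
      <property>
        <name>mapreduce.tasktracker.healthchecker.script.path</name>
        <value>/opt/hadoop/bin/check_node.sh</value>
      </property>
      <!-- Run every 2 minutes; kill the script if it hangs past 5 minutes -->
      <property>
        <name>mapreduce.tasktracker.healthchecker.interval</name>
        <value>120000</value>
      </property>
      <property>
        <name>mapreduce.tasktracker.healthchecker.script.timeout</name>
        <value>300000</value>
      </property>
      <!-- Hypothetical arguments, comma-separated as the description requires -->
      <property>
        <name>mapreduce.tasktracker.healthchecker.script.args</name>
        <value>-v,--check-disks</value>
      </property>
    </configuration>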