diff --git a/BUILDING.txt b/BUILDING.txt index 662bd2577a..c4252d757b 100644 --- a/BUILDING.txt +++ b/BUILDING.txt @@ -8,7 +8,7 @@ Requirements: * Maven 3.0 * Forrest 0.8 (if generating docs) * Findbugs 1.3.9 (if running findbugs) -* ProtocolBuffer 2.4.1+ (for MapReduce) +* ProtocolBuffer 2.4.1+ (for MapReduce and HDFS) * Autotools (if compiling native code) * Internet connection for first build (to fetch all Maven and Hadoop dependencies) diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt b/hadoop-common-project/hadoop-common/CHANGES.txt index 694cebe001..9d73cfd4bc 100644 --- a/hadoop-common-project/hadoop-common/CHANGES.txt +++ b/hadoop-common-project/hadoop-common/CHANGES.txt @@ -3,17 +3,11 @@ Hadoop Change Log Trunk (unreleased changes) INCOMPATIBLE CHANGES - - HADOOP-7920. Remove Avro Rpc. (suresh) NEW FEATURES - HADOOP-7773. Add support for protocol buffer based RPC engine. - (suresh) - - HADOOP-7875. Add helper class to unwrap protobuf ServiceException. - (suresh) IMPROVEMENTS + HADOOP-8017. Configure hadoop-main pom to get rid of M2E plugin execution not covered (Eric Charles via bobby) @@ -22,22 +16,6 @@ Trunk (unreleased changes) HADOOP-7595. Upgrade dependency to Avro 1.5.3. (Alejandro Abdelnur via atm) - HADOOP-7524. Change RPC to allow multiple protocols including multuple - versions of the same protocol (sanjay Radia) - - HADOOP-7607. Simplify the RPC proxy cleanup process. (atm) - - HADOOP-7635. RetryInvocationHandler should release underlying resources on - close (atm) - - HADOOP-7687 Make getProtocolSignature public (sanjay) - - HADOOP-7693. Enhance AvroRpcEngine to support the new #addProtocol - interface introduced in HADOOP-7524. (cutting) - - HADOOP-7716. RPC protocol registration on SS does not log the protocol name - (only the class which may be different) (sanjay) - HADOOP-7717. Move handling of concurrent client fail-overs to RetryInvocationHandler (atm) @@ -54,45 +32,20 @@ Trunk (unreleased changes) HADOOP-7792. Add verifyToken method to AbstractDelegationTokenSecretManager. (jitendra) - HADOOP-7776 Make the Ipc-Header in a RPC-Payload an explicit header (sanjay) - HADOOP-7688. Add servlet handler check in HttpServer.start(). (Uma Maheswara Rao G via szetszwo) - HADOOP-7862. Move the support for multiple protocols to lower layer so - that Writable, PB and Avro can all use it (Sanjay) - - HADOOP-7876. Provided access to encoded key in DelegationKey for - use in protobuf based RPCs. (suresh) - HADOOP-7886. Add toString to FileStatus. (SreeHari via jghoman) - HADOOP-7899. Generate proto java files as part of the build. (tucu) - HADOOP-7808. Port HADOOP-7510 - Add configurable option to use original hostname in token instead of IP to allow server IP change. (Daryn Sharp via suresh) - - HADOOP-7957. Classes deriving GetGroupsBase should be able to override - proxy creation. (jitendra) - - HADOOP-7968. Errant println left in RPC.getHighestSupportedProtocol (Sho Shimauchi via harsh) HADOOP-7987. Support setting the run-as user in unsecure mode. (jitendra) - HADOOP-7965. Support for protocol version and signature in PB. (jitendra) - - HADOOP-7988. Upper case in hostname part of the principals doesn't work with + HADOOP-7988. Upper case in hostname part of the principals doesn't work with kerberos. (jitendra) - HADOOP-8070. Add a standalone benchmark for RPC call performance. (todd) - - HADOOP-8084. Updates ProtoBufRpc engine to not do an unnecessary copy - for RPC request/response. (ddas) - - HADOOP-8085. Add RPC metrics to ProtobufRpcEngine. 
(Hari Mankude via - suresh) - HADOOP-8108. Move method getHostPortString() from NameNode to NetUtils. (Brandon Li via jitendra) @@ -133,30 +86,14 @@ Trunk (unreleased changes) HADOOP-7704. Reduce number of object created by JMXJsonServlet. (Devaraj K via Eric Yang) - HADOOP-7695. RPC.stopProxy can throw unintended exception while logging - error (atm) - HADOOP-7769. TestJMXJsonServlet is failing. (tomwhite) HADOOP-7770. ViewFS getFileChecksum throws FileNotFoundException for files in /tmp and /user. (Ravi Prakash via jitendra) - HADOOP-7833. Fix findbugs warnings in protobuf generated code. - (John Lee via suresh) - HADOOP-7888. TestFailoverProxy fails intermittently on trunk. (Jason Lowe via atm) - HADOOP-7897. ProtobufRpcEngine client side exception mechanism is not - consistent with WritableRpcEngine. (suresh) - - HADOOP-7913 Fix bug in ProtoBufRpcEngine (sanjay) - - HADOOP-7892. IPC logs too verbose after "RpcKind" introduction (todd) - - HADOOP-7931. o.a.h.ipc.WritableRpcEngine should have a way to force - initialization (atm) - OPTIMIZATIONS HADOOP-7761. Improve the performance of raw comparisons. (todd) @@ -165,14 +102,81 @@ Release 0.23.3 - UNRELEASED INCOMPATIBLE CHANGES - NEW FEATURES - + HADOOP-7920. Remove Avro Rpc. (suresh) + + NEW FEATURES + + HADOOP-7773. Add support for protocol buffer based RPC engine. + (suresh) + + HADOOP-7875. Add helper class to unwrap protobuf ServiceException. + (suresh) + IMPROVEMENTS + HADOOP-7524. Change RPC to allow multiple protocols including multuple + versions of the same protocol (sanjay Radia) + + HADOOP-7607. Simplify the RPC proxy cleanup process. (atm) + + HADOOP-7687. Make getProtocolSignature public (sanjay) + + HADOOP-7693. Enhance AvroRpcEngine to support the new #addProtocol + interface introduced in HADOOP-7524. (cutting) + + HADOOP-7716. RPC protocol registration on SS does not log the protocol name + (only the class which may be different) (sanjay) + + HADOOP-7776. Make the Ipc-Header in a RPC-Payload an explicit header. + (sanjay) + + HADOOP-7862. Move the support for multiple protocols to lower layer so + that Writable, PB and Avro can all use it (Sanjay) + + HADOOP-7876. Provided access to encoded key in DelegationKey for + use in protobuf based RPCs. (suresh) + + HADOOP-7899. Generate proto java files as part of the build. (tucu) + + HADOOP-7957. Classes deriving GetGroupsBase should be able to override + proxy creation. (jitendra) + + HADOOP-7965. Support for protocol version and signature in PB. (jitendra) + + HADOOP-8070. Add a standalone benchmark for RPC call performance. (todd) + + HADOOP-8084. Updates ProtoBufRpc engine to not do an unnecessary copy + for RPC request/response. (ddas) + + HADOOP-8085. Add RPC metrics to ProtobufRpcEngine. (Hari Mankude via + suresh) + OPTIMIZATIONS BUG FIXES + HADOOP-7635. RetryInvocationHandler should release underlying resources on + close. (atm) + + HADOOP-7695. RPC.stopProxy can throw unintended exception while logging + error. (atm) + + HADOOP-7833. Fix findbugs warnings in protobuf generated code. + (John Lee via suresh) + + HADOOP-7897. ProtobufRpcEngine client side exception mechanism is not + consistent with WritableRpcEngine. (suresh) + + HADOOP-7913. Fix bug in ProtoBufRpcEngine. (sanjay) + + HADOOP-7892. IPC logs too verbose after "RpcKind" introduction. (todd) + + HADOOP-7968. Errant println left in RPC.getHighestSupportedProtocol. (Sho + Shimauchi via harsh) + + HADOOP-7931. o.a.h.ipc.WritableRpcEngine should have a way to force + initialization. 
(atm) + Release 0.23.2 - UNRELEASED INCOMPATIBLE CHANGES diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt index 37912e30ec..0d586cfc1e 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt +++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt @@ -3,93 +3,24 @@ Hadoop HDFS Change Log Trunk (unreleased changes) INCOMPATIBLE CHANGES - - HDFS-2676. Remove Avro RPC. (suresh) NEW FEATURES - - HDFS-395. DFS Scalability: Incremental block reports. (Tomasz Nykiel - via hairong) - - HDFS-2517. Add protobuf service for JounralProtocol. (suresh) - - HDFS-2518. Add protobuf service for NamenodeProtocol. (suresh) - - HDFS-2520. Add protobuf service for InterDatanodeProtocol. (suresh) - - HDFS-2519. Add protobuf service for DatanodeProtocol. (suresh) - - HDFS-2581. Implement protobuf service for JournalProtocol. (suresh) - - HDFS-2618. Implement protobuf service for NamenodeProtocol. (suresh) - - HDFS-2629. Implement protobuf service for InterDatanodeProtocol. (suresh) - - HDFS-2636. Implement protobuf service for ClientDatanodeProtocol. (suresh) HDFS-2430. The number of failed or low-resource volumes the NN can tolerate should be configurable. (atm) - HDFS-2642. Protobuf translators for DatanodeProtocol. (jitendra) - - HDFS-2647. Used protobuf based RPC for InterDatanodeProtocol, - ClientDatanodeProtocol, JournalProtocol, NamenodeProtocol. (suresh) - - HDFS-2666. Fix TestBackupNode failure. (suresh) - HDFS-234. Integration with BookKeeper logging system. (Ivan Kelly via jitendra) - HDFS-2663. Optional protobuf parameters are not handled correctly. - (suresh) - - HDFS-2661. Enable protobuf RPC for DatanodeProtocol. (jitendra) - - HDFS-2697. Move RefreshAuthPolicy, RefreshUserMappings, GetUserMappings - protocol to protocol buffers. (jitendra) - - HDFS-2880. Protobuf changes in DatanodeProtocol to add multiple storages. - (suresh) - - HDFS-2899. Service protocol changes in DatanodeProtocol to add multiple - storages. (suresh) - IMPROVEMENTS - HADOOP-7524 Change RPC to allow multiple protocols including multuple - versions of the same protocol (Sanjay Radia) - HDFS-1620. Rename HdfsConstants -> HdfsServerConstants, FSConstants -> HdfsConstants. (Harsh J Chouraria via atm) + HDFS-2197. Refactor RPC call implementations out of NameNode class (todd) - HDFS-2018. Move all journal stream management code into one place. - (Ivan Kelly via jitendra) - - HDFS-2223. Untangle depencencies between NN components (todd) - - HDFS-2337. DFSClient shouldn't keep multiple RPC proxy references (atm) - - HDFS-2351 Change Namenode and Datanode to register each of their protocols - seperately. (Sanjay Radia) - HDFS-2158. Add JournalSet to manage the set of journals. (jitendra) - HDFS-2459. Separate datatypes for JournalProtocol. (suresh) - - HDFS-2480. Separate datatypes for NamenodeProtocol. (suresh) - - HDFS-2181 Separate HDFS Client wire protocol data types (sanjay) - - HDFS-2489. Move Finalize and Register to separate file out of - DatanodeCommand.java. (suresh) - - HDFS-2488. Separate datatypes for InterDatanodeProtocol. (suresh) - - HDFS-2496. Separate datatypes for DatanodeProtocol. (suresh) - - HDFS-2479 HDFS Client Data Types in Protocol Buffers (sanjay) - HDFS-2334. Add Closeable to JournalManager. (Ivan Kelly via jitendra) HDFS-2572. Remove unnecessary double-check in DN#getHostName. (harsh) @@ -102,30 +33,12 @@ Trunk (unreleased changes) HDFS-2857. Cleanup BlockInfo class. 
(suresh) - HADOOP-7862 Hdfs changes to work with HADOOP 7862: - Move the support for multiple protocols to lower layer so that Writable, - PB and Avro can all use it (Sanjay) - HDFS-1580. Add interface for generic Write Ahead Logging mechanisms. (Ivan Kelly via jitendra) - HDFS-2597 ClientNameNodeProtocol in Protocol Buffers (sanjay) - - HDFS-2651 ClientNameNodeProtocol Translators for Protocol Buffers (sanjay) - - HDFS-2650. Replace @inheritDoc with @Override. (Hari Mankude via suresh) - - HDFS-2669. Enable protobuf rpc for ClientNamenodeProtocol. (Sanjay Radia) - - HDFS-2801. Provide a method in client side translators to check for a - methods supported in underlying protocol. (jitendra) - HDFS-208. name node should warn if only one dir is listed in dfs.name.dir. (Uma Maheswara Rao G via eli) - HDS-2895. Remove Writable wire protocol types and translators to - complete transition to protocol buffers. (suresh) - HDFS-2786. Fix host-based token incompatibilities in DFSUtil. (Kihwal Lee via jitendra) @@ -148,6 +61,7 @@ Trunk (unreleased changes) (suresh) OPTIMIZATIONS + HDFS-2477. Optimize computing the diff between a block report and the namenode state. (Tomasz Nykiel via hairong) @@ -158,6 +72,7 @@ Trunk (unreleased changes) over-replicated, and invalidated blocks. (Tomasz Nykiel via todd) BUG FIXES + HDFS-2299. TestOfflineEditsViewer is failing on trunk. (Uma Maheswara Rao G via atm) HDFS-2310. TestBackupNode fails since HADOOP-7524 went in. @@ -180,10 +95,118 @@ Trunk (unreleased changes) HDFS-2188. Make FSEditLog create its journals from a list of URIs rather than NNStorage. (Ivan Kelly via jitendra) - HDFS-2481 Unknown protocol: org.apache.hadoop.hdfs.protocol.ClientProtocol. + HDFS-1765. Block Replication should respect under-replication + block priority. (Uma Maheswara Rao G via eli) + + HDFS-2765. TestNameEditsConfigs is incorrectly swallowing IOE. (atm) + + HDFS-2776. Missing interface annotation on JournalSet. + (Brandon Li via jitendra) + + HDFS-2759. Pre-allocate HDFS edit log files after writing version number. + (atm) + + HDFS-2908. Add apache license header for StorageReport.java. (Brandon Li + via jitendra) + +Release 0.23.3 - UNRELEASED + + INCOMPATIBLE CHANGES + + HDFS-2676. Remove Avro RPC. (suresh) + + NEW FEATURES + + HDFS-2978. The NameNode should expose name dir statuses via JMX. (atm) + + HDFS-395. DFS Scalability: Incremental block reports. (Tomasz Nykiel + via hairong) + + HDFS-2517. Add protobuf service for JounralProtocol. (suresh) + + HDFS-2518. Add protobuf service for NamenodeProtocol. (suresh) + + HDFS-2520. Add protobuf service for InterDatanodeProtocol. (suresh) + + HDFS-2519. Add protobuf service for DatanodeProtocol. (suresh) + + HDFS-2581. Implement protobuf service for JournalProtocol. (suresh) + + HDFS-2618. Implement protobuf service for NamenodeProtocol. (suresh) + + HDFS-2629. Implement protobuf service for InterDatanodeProtocol. (suresh) + + HDFS-2636. Implement protobuf service for ClientDatanodeProtocol. (suresh) + + HDFS-2642. Protobuf translators for DatanodeProtocol. (jitendra) + + HDFS-2647. Used protobuf based RPC for InterDatanodeProtocol, + ClientDatanodeProtocol, JournalProtocol, NamenodeProtocol. (suresh) + + HDFS-2661. Enable protobuf RPC for DatanodeProtocol. (jitendra) + + HDFS-2697. Move RefreshAuthPolicy, RefreshUserMappings, GetUserMappings + protocol to protocol buffers. (jitendra) + + HDFS-2880. Protobuf changes in DatanodeProtocol to add multiple storages. + (suresh) + + HDFS-2899. 
Service protocol changes in DatanodeProtocol to add multiple + storages. (suresh) + + IMPROVEMENTS + + HDFS-2018. Move all journal stream management code into one place. + (Ivan Kelly via jitendra) + + HDFS-2223. Untangle depencencies between NN components (todd) + + HDFS-2351. Change Namenode and Datanode to register each of their protocols + seperately (sanjay) + + HDFS-2337. DFSClient shouldn't keep multiple RPC proxy references (atm) + + HDFS-2181. Separate HDFS Client wire protocol data types (sanjay) + + HDFS-2459. Separate datatypes for Journal Protocol. (suresh) + + HDFS-2480. Separate datatypes for NamenodeProtocol. (suresh) + + HDFS-2489. Move Finalize and Register to separate file out of + DatanodeCommand.java. (suresh) + + HDFS-2488. Separate datatypes for InterDatanodeProtocol. (suresh) + + HDFS-2496. Separate datatypes for DatanodeProtocol. (suresh) + + HDFS-2479. HDFS Client Data Types in Protocol Buffers (sanjay) + + HADOOP-7862. Hdfs changes to work with HADOOP-7862: Move the support for + multiple protocols to lower layer so that Writable, PB and Avro can all + use it. (sanjay) + + HDFS-2597. ClientNameNodeProtocol in Protocol Buffers. (sanjay) + + HDFS-2651. ClientNameNodeProtocol Translators for Protocol Buffers. (sanjay) + + HDFS-2650. Replace @inheritDoc with @Override. (Hari Mankude via suresh). + + HDFS-2669. Enable protobuf rpc for ClientNamenodeProtocol. (sanjay) + + HDFS-2801. Provide a method in client side translators to check for a + methods supported in underlying protocol. (jitendra) + + HDFS-2895. Remove Writable wire protocol types and translators to + complete transition to protocol buffers. (suresh) + + OPTIMIZATIONS + + BUG FIXES + + HDFS-2481. Unknown protocol: org.apache.hadoop.hdfs.protocol.ClientProtocol. (sanjay) - HDFS-2497 Fix TestBackupNode failure. (suresh) + HDFS-2497. Fix TestBackupNode failure. (suresh) HDFS-2499. RPC client is created incorrectly introduced in HDFS-2459. (suresh) @@ -194,8 +217,9 @@ Trunk (unreleased changes) HDFS-2532. TestDfsOverAvroRpc timing out in trunk (Uma Maheswara Rao G via todd) - HDFS-1765. Block Replication should respect under-replication - block priority. (Uma Maheswara Rao G via eli) + HDFS-2666. Fix TestBackupNode failure. (suresh) + + HDFS-2663. Optional protobuf parameters are not handled correctly. (suresh) HDFS-2694. Removal of Avro broke non-PB NN services. (atm) @@ -205,39 +229,14 @@ Trunk (unreleased changes) HDFS-2700. Fix failing TestDataNodeMultipleRegistrations in trunk (Uma Maheswara Rao G via todd) - HDFS-2765. TestNameEditsConfigs is incorrectly swallowing IOE. (atm) - HDFS-2739. SecondaryNameNode doesn't start up. (jitendra) - HDFS-2776. Missing interface annotation on JournalSet. - (Brandon Li via jitendra) - HDFS-2768. BackupNode stop can not close proxy connections because it is not a proxy instance. (Uma Maheswara Rao G via eli) - HDFS-2759. Pre-allocate HDFS edit log files after writing version number. - (atm) - - HDFS-2908. Add apache license header for StorageReport.java. (Brandon Li - via jitendra) - HDFS-2968. Protocol translator for BlockRecoveryCommand broken when multiple blocks need recovery. (todd) -Release 0.23.3 - UNRELEASED - - INCOMPATIBLE CHANGES - - NEW FEATURES - - HDFS-2978. The NameNode should expose name dir statuses via JMX. 
(atm) - - IMPROVEMENTS - - OPTIMIZATIONS - - BUG FIXES - Release 0.23.2 - UNRELEASED INCOMPATIBLE CHANGES diff --git a/hadoop-mapreduce-project/CHANGES.txt b/hadoop-mapreduce-project/CHANGES.txt index d858553a79..ef661b3f7f 100644 --- a/hadoop-mapreduce-project/CHANGES.txt +++ b/hadoop-mapreduce-project/CHANGES.txt @@ -4,8 +4,6 @@ Trunk (unreleased changes) INCOMPATIBLE CHANGES - MAPREDUCE-3545. Remove Avro RPC. (suresh) - NEW FEATURES MAPREDUCE-778. Rumen Anonymizer. (Amar Kamat and Chris Douglas via amarrk) @@ -32,12 +30,6 @@ Trunk (unreleased changes) MAPREDUCE-3008. Improvements to cumulative CPU emulation for short running tasks in Gridmix. (amarrk) - MAPREDUCE-2887 due to HADOOP-7524 Change RPC to allow multiple protocols - including multuple versions of the same protocol (sanjay Radia) - - MAPREDUCE-2934. MR portion of HADOOP-7607 - Simplify the RPC proxy cleanup - process (atm) - MAPREDUCE-2836. Provide option to fail jobs when submitted to non-existent fair scheduler pools. (Ahmed Radwan via todd) @@ -50,14 +42,8 @@ Trunk (unreleased changes) MAPREDUCE-3169. Create a new MiniMRCluster equivalent which only provides client APIs cross MR1 and MR2 (Ahmed via tucu) - HADOOP-7862 MR changes to work with HADOOP 7862: - Move the support for multiple protocols to lower layer so that Writable, - PB and Avro can all use it (Sanjay) - MAPREDUCE-2944. Improve checking of input for JobClient.displayTasks() (XieXianshan via harsh) - MAPREDUCE-3909 Javadoc the Service interfaces (stevel) - BUG FIXES MAPREDUCE-3757. [Rumen] Fixed Rumen Folder to adjust shuffleFinished and @@ -89,24 +75,41 @@ Trunk (unreleased changes) MAPREDUCE-3664. Federation Documentation has incorrect configuration example. (Brandon Li via jitendra) - MAPREDUCE-3740. Fixed broken mapreduce compilation after the patch for - HADOOP-7965. (Devaraj K via vinodkv) - - MAPREDUCE-3818. Fixed broken compilation in TestSubmitJob after the patch - for HDFS-2895. (Suresh Srinivas via vinodkv) - Release 0.23.3 - UNRELEASED INCOMPATIBLE CHANGES + MAPREDUCE-3545. Remove Avro RPC. (suresh) + NEW FEATURES IMPROVEMENTS + MAPREDUCE-2887. Due to HADOOP-7524, change RPC to allow multiple protocols + including multuple versions of the same protocol (Sanjay Radia) + + MAPREDUCE-2934. MR portion of HADOOP-7607 - Simplify the RPC proxy cleanup + process (atm) + + HADOOP-7862. MR changes to work with HADOOP 7862: Move the support for + multiple protocols to lower layer so that Writable, PB and Avro can all + use it (Sanjay Radia) + + MAPREDUCE-3909 Javadoc the Service interfaces (stevel) + OPTIMIZATIONS BUG FIXES + MAPREDUCE-3740. Fixed broken mapreduce compilation after the patch for + HADOOP-7965. (Devaraj K via vinodkv) + + MAPREDUCE-3818. Fixed broken compilation in TestSubmitJob after the patch + for HDFS-2895. (Suresh Srinivas via vinodkv) + + MAPREDUCE-2942. TestNMAuditLogger.testNMAuditLoggerWithIP failing (Thomas + Graves via mahadev) + Release 0.23.2 - UNRELEASED INCOMPATIBLE CHANGES @@ -135,6 +138,12 @@ Release 0.23.2 - UNRELEASED OPTIMIZATIONS + MAPREDUCE-3901. Modified JobHistory records in YARN to lazily load job and + task reports so as to improve UI response times. (Siddarth Seth via vinodkv) + + MAPREDUCE-2855. Passing a cached class-loader to ResourceBundle creator to + minimize counter names lookup time. (Siddarth Seth via vinodkv) + BUG FIXES MAPREDUCE-3918 proc_historyserver no longer in command line arguments for HistoryServer (Jon Eagles via bobby) @@ -2270,9 +2279,6 @@ Release 0.23.0 - 2011-11-01 MAPREDUCE-2908. 
Fix all findbugs warnings. (vinodkv via acmurthy) - MAPREDUCE-2942. TestNMAuditLogger.testNMAuditLoggerWithIP failing (Thomas Graves - via mahadev) - MAPREDUCE-2947. Fixed race condition in AuxiliaryServices. (vinodkv via acmurthy) diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/util/MRBuilderUtils.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/util/MRBuilderUtils.java index 2b5b21c867..b4dfc22357 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/util/MRBuilderUtils.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/util/MRBuilderUtils.java @@ -30,6 +30,7 @@ import org.apache.hadoop.yarn.api.records.ApplicationAttemptId; import org.apache.hadoop.yarn.api.records.ApplicationId; import org.apache.hadoop.yarn.api.records.ContainerId; +import org.apache.hadoop.yarn.util.BuilderUtils; import org.apache.hadoop.yarn.util.Records; public class MRBuilderUtils { @@ -41,6 +42,11 @@ public static JobId newJobId(ApplicationId appId, int id) { return jobId; } + public static JobId newJobId(long clusterTs, int appIdInt, int id) { + ApplicationId appId = BuilderUtils.newApplicationId(clusterTs, appIdInt); + return MRBuilderUtils.newJobId(appId, id); + } + public static TaskId newTaskId(JobId jobId, int id, TaskType taskType) { TaskId taskId = Records.newRecord(TaskId.class); taskId.setJobId(jobId); diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/util/ResourceBundles.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/util/ResourceBundles.java index aede782ea0..52addcfa86 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/util/ResourceBundles.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/util/ResourceBundles.java @@ -18,6 +18,7 @@ package org.apache.hadoop.mapreduce.util; +import java.util.Locale; import java.util.ResourceBundle; import java.util.MissingResourceException; @@ -33,7 +34,8 @@ public class ResourceBundles { * @throws MissingResourceException */ public static ResourceBundle getBundle(String bundleName) { - return ResourceBundle.getBundle(bundleName.replace('$', '_')); + return ResourceBundle.getBundle(bundleName.replace('$', '_'), + Locale.getDefault(), Thread.currentThread().getContextClassLoader()); } /** diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/CompletedJob.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/CompletedJob.java index 9584d05dcd..dcd2cf571f 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/CompletedJob.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/CompletedJob.java @@ -19,13 +19,16 @@ package org.apache.hadoop.mapreduce.v2.hs; import java.io.IOException; -import 
java.util.ArrayList; +import java.net.UnknownHostException; import java.util.Collections; import java.util.Comparator; import java.util.HashMap; import java.util.LinkedList; import java.util.List; import java.util.Map; +import java.util.concurrent.atomic.AtomicBoolean; +import java.util.concurrent.locks.Lock; +import java.util.concurrent.locks.ReentrantLock; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; @@ -34,6 +37,7 @@ import org.apache.hadoop.mapred.JobACLsManager; import org.apache.hadoop.mapreduce.Counters; import org.apache.hadoop.mapreduce.JobACL; +import org.apache.hadoop.mapreduce.TaskID; import org.apache.hadoop.mapreduce.TypeConverter; import org.apache.hadoop.mapreduce.jobhistory.JobHistoryParser; import org.apache.hadoop.mapreduce.jobhistory.JobHistoryParser.JobInfo; @@ -54,7 +58,7 @@ import org.apache.hadoop.security.UserGroupInformation; import org.apache.hadoop.security.authorize.AccessControlList; import org.apache.hadoop.yarn.YarnException; -import org.apache.hadoop.yarn.factory.providers.RecordFactoryProvider; +import org.apache.hadoop.yarn.util.Records; /** @@ -64,50 +68,31 @@ public class CompletedJob implements org.apache.hadoop.mapreduce.v2.app.job.Job { static final Log LOG = LogFactory.getLog(CompletedJob.class); - private final Counters counters; private final Configuration conf; - private final JobId jobId; - private final List diagnostics = new ArrayList(); - private final JobReport report; - private final Map tasks = new HashMap(); - private final Map mapTasks = new HashMap(); - private final Map reduceTasks = new HashMap(); - private final String user; + private final JobId jobId; //Can be picked from JobInfo with a conversion. + private final String user; //Can be picked up from JobInfo private final Path confFile; - private JobACLsManager aclsMgr; - private List completionEvents = null; private JobInfo jobInfo; - + private JobReport report; + AtomicBoolean tasksLoaded = new AtomicBoolean(false); + private Lock tasksLock = new ReentrantLock(); + private Map tasks = new HashMap(); + private Map mapTasks = new HashMap(); + private Map reduceTasks = new HashMap(); + private List completionEvents = null; + private JobACLsManager aclsMgr; + + public CompletedJob(Configuration conf, JobId jobId, Path historyFile, boolean loadTasks, String userName, Path confFile, JobACLsManager aclsMgr) throws IOException { LOG.info("Loading job: " + jobId + " from file: " + historyFile); this.conf = conf; this.jobId = jobId; + this.user = userName; this.confFile = confFile; this.aclsMgr = aclsMgr; - loadFullHistoryData(loadTasks, historyFile); - user = userName; - counters = jobInfo.getTotalCounters(); - diagnostics.add(jobInfo.getErrorInfo()); - report = - RecordFactoryProvider.getRecordFactory(null).newRecordInstance( - JobReport.class); - report.setJobId(jobId); - report.setJobState(JobState.valueOf(jobInfo.getJobStatus())); - report.setSubmitTime(jobInfo.getSubmitTime()); - report.setStartTime(jobInfo.getLaunchTime()); - report.setFinishTime(jobInfo.getFinishTime()); - report.setJobName(jobInfo.getJobname()); - report.setUser(jobInfo.getUsername()); - report.setMapProgress((float) getCompletedMaps() / getTotalMaps()); - report.setReduceProgress((float) getCompletedReduces() / getTotalReduces()); - report.setJobFile(confFile.toString()); - report.setTrackingUrl(JobHistoryUtils.getHistoryUrl(conf, TypeConverter - .toYarn(TypeConverter.fromYarn(jobId)).getAppId())); - report.setAMInfos(getAMInfos()); - report.setIsUber(isUber()); } @Override 
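
Note on the ResourceBundles.java hunk above: it replaces the one-argument ResourceBundle.getBundle(String) call with the three-argument overload, pinning the lookup to the calling thread's context class loader (per the MAPREDUCE-2855 changelog entry, passing a cached class loader to minimize counter-name lookup time). A minimal standalone sketch of the same call follows; the class name BundleLookup is illustrative and not part of the patch.

import java.util.Locale;
import java.util.MissingResourceException;
import java.util.ResourceBundle;

public class BundleLookup {
  /**
   * Resolve a bundle through the calling thread's context class loader
   * instead of the caller's own class loader. getBundle() caches per
   * (name, locale, loader), so repeated lookups hit the cache.
   */
  public static ResourceBundle getBundle(String bundleName)
      throws MissingResourceException {
    return ResourceBundle.getBundle(bundleName.replace('$', '_'),
        Locale.getDefault(), Thread.currentThread().getContextClassLoader());
  }
}
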
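Several hunks below (CompletedJob#loadAllTasks, and the matching CompletedTask#loadAllTaskAttempts later in the patch) introduce the same double-checked lazy-load idiom: an AtomicBoolean fast path plus a ReentrantLock around the one-time parse. A self-contained sketch of that idiom, with a toy String map standing in for the TaskId-to-Task map built from the parsed history file:

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

// Illustrative class; only the loading idiom mirrors the patch.
public class LazyTaskMap {
  private final AtomicBoolean loaded = new AtomicBoolean(false);
  private final Lock lock = new ReentrantLock();
  private final Map<String, String> tasks = new HashMap<String, String>();

  public Map<String, String> getTasks() {
    loadAll();
    return tasks;
  }

  private void loadAll() {
    if (loaded.get()) {
      return;                 // fast path: no lock once loaded
    }
    lock.lock();
    try {
      if (loaded.get()) {
        return;               // another thread loaded while we waited
      }
      // The expensive one-time parse would populate 'tasks' here.
      tasks.put("task_0", "MAP");
      // Set the flag last: the AtomicBoolean write publishes the fully
      // populated map, so a thread that reads 'true' without the lock
      // also sees the puts (happens-before).
      loaded.set(true);
    } finally {
      lock.unlock();
    }
  }
}
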
@@ -122,7 +107,7 @@ public int getCompletedReduces() { @Override public Counters getAllCounters() { - return counters; + return jobInfo.getTotalCounters(); } @Override @@ -131,10 +116,36 @@ public JobId getID() { } @Override - public JobReport getReport() { + public synchronized JobReport getReport() { + if (report == null) { + constructJobReport(); + } return report; } + private void constructJobReport() { + report = Records.newRecord(JobReport.class); + report.setJobId(jobId); + report.setJobState(JobState.valueOf(jobInfo.getJobStatus())); + report.setSubmitTime(jobInfo.getSubmitTime()); + report.setStartTime(jobInfo.getLaunchTime()); + report.setFinishTime(jobInfo.getFinishTime()); + report.setJobName(jobInfo.getJobname()); + report.setUser(jobInfo.getUsername()); + report.setMapProgress((float) getCompletedMaps() / getTotalMaps()); + report.setReduceProgress((float) getCompletedReduces() / getTotalReduces()); + report.setJobFile(confFile.toString()); + String historyUrl = "N/A"; + try { + historyUrl = JobHistoryUtils.getHistoryUrl(conf, jobId.getAppId()); + } catch (UnknownHostException e) { + //Ignore. + } + report.setTrackingUrl(historyUrl); + report.setAMInfos(getAMInfos()); + report.setIsUber(isUber()); + } + @Override public float getProgress() { return 1.0f; @@ -142,16 +153,23 @@ public float getProgress() { @Override public JobState getState() { - return report.getJobState(); + return JobState.valueOf(jobInfo.getJobStatus()); } @Override public Task getTask(TaskId taskId) { - return tasks.get(taskId); + if (tasksLoaded.get()) { + return tasks.get(taskId); + } else { + TaskID oldTaskId = TypeConverter.fromYarn(taskId); + CompletedTask completedTask = + new CompletedTask(taskId, jobInfo.getAllTasks().get(oldTaskId)); + return completedTask; + } } @Override - public TaskAttemptCompletionEvent[] getTaskAttemptCompletionEvents( + public synchronized TaskAttemptCompletionEvent[] getTaskAttemptCompletionEvents( int fromEventId, int maxEvents) { if (completionEvents == null) { constructTaskAttemptCompletionEvents(); @@ -167,6 +185,7 @@ public TaskAttemptCompletionEvent[] getTaskAttemptCompletionEvents( } private void constructTaskAttemptCompletionEvents() { + loadAllTasks(); completionEvents = new LinkedList(); List allTaskAttempts = new LinkedList(); for (TaskId taskId : tasks.keySet()) { @@ -205,8 +224,8 @@ public int compare(TaskAttempt o1, TaskAttempt o2) { int eventId = 0; for (TaskAttempt taskAttempt : allTaskAttempts) { - TaskAttemptCompletionEvent tace = RecordFactoryProvider.getRecordFactory( - null).newRecordInstance(TaskAttemptCompletionEvent.class); + TaskAttemptCompletionEvent tace = + Records.newRecord(TaskAttemptCompletionEvent.class); int attemptRunTime = -1; if (taskAttempt.getLaunchTime() != 0 && taskAttempt.getFinishTime() != 0) { @@ -237,15 +256,42 @@ public int compare(TaskAttempt o1, TaskAttempt o2) { @Override public Map getTasks() { + loadAllTasks(); return tasks; } + private void loadAllTasks() { + if (tasksLoaded.get()) { + return; + } + tasksLock.lock(); + try { + if (tasksLoaded.get()) { + return; + } + for (Map.Entry entry : jobInfo.getAllTasks().entrySet()) { + TaskId yarnTaskID = TypeConverter.toYarn(entry.getKey()); + TaskInfo taskInfo = entry.getValue(); + Task task = new CompletedTask(yarnTaskID, taskInfo); + tasks.put(yarnTaskID, task); + if (task.getType() == TaskType.MAP) { + mapTasks.put(task.getID(), task); + } else if (task.getType() == TaskType.REDUCE) { + reduceTasks.put(task.getID(), task); + } + } + tasksLoaded.set(true); + } finally { + 
tasksLock.unlock(); + } + } + //History data is leisurely loaded when task level data is requested private synchronized void loadFullHistoryData(boolean loadTasks, Path historyFileAbsolute) throws IOException { LOG.info("Loading history file: [" + historyFileAbsolute + "]"); - if (jobInfo != null) { - return; //data already loaded + if (this.jobInfo != null) { + return; } if (historyFileAbsolute != null) { @@ -254,7 +300,7 @@ private synchronized void loadFullHistoryData(boolean loadTasks, parser = new JobHistoryParser(historyFileAbsolute.getFileSystem(conf), historyFileAbsolute); - jobInfo = parser.parse(); + this.jobInfo = parser.parse(); } catch (IOException e) { throw new YarnException("Could not load history file " + historyFileAbsolute, e); @@ -268,27 +314,15 @@ private synchronized void loadFullHistoryData(boolean loadTasks, } else { throw new IOException("History file not found"); } - if (loadTasks) { - for (Map.Entry entry : jobInfo - .getAllTasks().entrySet()) { - TaskId yarnTaskID = TypeConverter.toYarn(entry.getKey()); - TaskInfo taskInfo = entry.getValue(); - Task task = new CompletedTask(yarnTaskID, taskInfo); - tasks.put(yarnTaskID, task); - if (task.getType() == TaskType.MAP) { - mapTasks.put(task.getID(), task); - } else if (task.getType() == TaskType.REDUCE) { - reduceTasks.put(task.getID(), task); - } - } - } - LOG.info("TaskInfo loaded"); + loadAllTasks(); + LOG.info("TaskInfo loaded"); + } } @Override public List getDiagnostics() { - return diagnostics; + return Collections.singletonList(jobInfo.getErrorInfo()); } @Override @@ -318,6 +352,7 @@ public boolean isUber() { @Override public Map getTasks(TaskType taskType) { + loadAllTasks(); if (TaskType.MAP.equals(taskType)) { return mapTasks; } else {//we have only two types of tasks diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/CompletedTask.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/CompletedTask.java index ced2ddb937..669eaa4d6f 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/CompletedTask.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/CompletedTask.java @@ -20,10 +20,13 @@ import java.util.ArrayList; import java.util.LinkedHashMap; +import java.util.LinkedList; +import java.util.List; import java.util.Map; +import java.util.concurrent.atomic.AtomicBoolean; +import java.util.concurrent.locks.Lock; +import java.util.concurrent.locks.ReentrantLock; -import org.apache.commons.logging.Log; -import org.apache.commons.logging.LogFactory; import org.apache.hadoop.mapreduce.Counters; import org.apache.hadoop.mapreduce.TypeConverter; import org.apache.hadoop.mapreduce.jobhistory.JobHistoryParser.TaskAttemptInfo; @@ -35,59 +38,24 @@ import org.apache.hadoop.mapreduce.v2.api.records.TaskType; import org.apache.hadoop.mapreduce.v2.app.job.Task; import org.apache.hadoop.mapreduce.v2.app.job.TaskAttempt; -import org.apache.hadoop.yarn.factory.providers.RecordFactoryProvider; +import org.apache.hadoop.yarn.util.Records; public class CompletedTask implements Task { - - private final TaskType type; - private Counters counters; - private final long startTime; - private final long finishTime; - private TaskState state; private final TaskId taskId; - private final TaskReport report; + 
private final TaskInfo taskInfo; + private TaskReport report; + private TaskAttemptId successfulAttempt; + private List reportDiagnostics = new LinkedList(); + private Lock taskAttemptsLock = new ReentrantLock(); + private AtomicBoolean taskAttemptsLoaded = new AtomicBoolean(false); private final Map attempts = new LinkedHashMap(); - - private static final Log LOG = LogFactory.getLog(CompletedTask.class); CompletedTask(TaskId taskId, TaskInfo taskInfo) { //TODO JobHistoryParser.handleTaskFailedAttempt should use state from the event. - LOG.debug("HandlingTaskId: [" + taskId + "]"); + this.taskInfo = taskInfo; this.taskId = taskId; - this.startTime = taskInfo.getStartTime(); - this.finishTime = taskInfo.getFinishTime(); - this.type = TypeConverter.toYarn(taskInfo.getTaskType()); - if (taskInfo.getCounters() != null) - this.counters = taskInfo.getCounters(); - if (taskInfo.getTaskStatus() != null) { - this.state = TaskState.valueOf(taskInfo.getTaskStatus()); - } else { - this.state = TaskState.KILLED; - } - report = RecordFactoryProvider.getRecordFactory(null).newRecordInstance(TaskReport.class); - for (TaskAttemptInfo attemptHistory : taskInfo.getAllTaskAttempts() - .values()) { - CompletedTaskAttempt attempt = new CompletedTaskAttempt(taskId, - attemptHistory); - report.addAllDiagnostics(attempt.getDiagnostics()); //TODO TMI? - attempts.put(attempt.getID(), attempt); - if (attemptHistory.getTaskStatus() != null - && attemptHistory.getTaskStatus().equals( - TaskState.SUCCEEDED.toString()) - && report.getSuccessfulAttempt() == null) { - report.setSuccessfulAttempt(TypeConverter.toYarn(attemptHistory - .getAttemptId())); - } - } - report.setTaskId(taskId); - report.setStartTime(startTime); - report.setFinishTime(finishTime); - report.setTaskState(state); - report.setProgress(getProgress()); - report.setCounters(TypeConverter.toYarn(getCounters())); - report.addAllRunningAttempts(new ArrayList(attempts.keySet())); } @Override @@ -97,17 +65,19 @@ public boolean canCommit(TaskAttemptId taskAttemptID) { @Override public TaskAttempt getAttempt(TaskAttemptId attemptID) { + loadAllTaskAttempts(); return attempts.get(attemptID); } @Override public Map getAttempts() { + loadAllTaskAttempts(); return attempts; } @Override public Counters getCounters() { - return counters; + return taskInfo.getCounters(); } @Override @@ -121,13 +91,18 @@ public float getProgress() { } @Override - public TaskReport getReport() { + public synchronized TaskReport getReport() { + if (report == null) { + constructTaskReport(); + } return report; } + + @Override public TaskType getType() { - return type; + return TypeConverter.toYarn(taskInfo.getTaskType()); } @Override @@ -137,7 +112,54 @@ public boolean isFinished() { @Override public TaskState getState() { - return state; + return taskInfo.getTaskStatus() == null ? 
TaskState.KILLED : TaskState + .valueOf(taskInfo.getTaskStatus()); } + private void constructTaskReport() { + loadAllTaskAttempts(); + this.report = Records.newRecord(TaskReport.class); + report.setTaskId(taskId); + report.setStartTime(taskInfo.getStartTime()); + report.setFinishTime(taskInfo.getFinishTime()); + report.setTaskState(getState()); + report.setProgress(getProgress()); + report.setCounters(TypeConverter.toYarn(getCounters())); + if (successfulAttempt != null) { + report.setSuccessfulAttempt(successfulAttempt); + } + report.addAllDiagnostics(reportDiagnostics); + report + .addAllRunningAttempts(new ArrayList(attempts.keySet())); + } + + private void loadAllTaskAttempts() { + if (taskAttemptsLoaded.get()) { + return; + } + taskAttemptsLock.lock(); + try { + if (taskAttemptsLoaded.get()) { + return; + } + + for (TaskAttemptInfo attemptHistory : taskInfo.getAllTaskAttempts() + .values()) { + CompletedTaskAttempt attempt = + new CompletedTaskAttempt(taskId, attemptHistory); + reportDiagnostics.addAll(attempt.getDiagnostics()); + attempts.put(attempt.getID(), attempt); + if (successfulAttempt == null + && attemptHistory.getTaskStatus() != null + && attemptHistory.getTaskStatus().equals( + TaskState.SUCCEEDED.toString())) { + successfulAttempt = + TypeConverter.toYarn(attemptHistory.getAttemptId()); + } + } + taskAttemptsLoaded.set(true); + } finally { + taskAttemptsLock.unlock(); + } + } } diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/CompletedTaskAttempt.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/CompletedTaskAttempt.java index 09819c3922..84ec23e63b 100644 --- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/CompletedTaskAttempt.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/CompletedTaskAttempt.java @@ -30,25 +30,21 @@ import org.apache.hadoop.mapreduce.v2.api.records.TaskId; import org.apache.hadoop.mapreduce.v2.app.job.TaskAttempt; import org.apache.hadoop.yarn.api.records.ContainerId; -import org.apache.hadoop.yarn.factory.providers.RecordFactoryProvider; +import org.apache.hadoop.yarn.util.Records; public class CompletedTaskAttempt implements TaskAttempt { private final TaskAttemptInfo attemptInfo; private final TaskAttemptId attemptId; - private Counters counters; private final TaskAttemptState state; - private final TaskAttemptReport report; private final List diagnostics = new ArrayList(); + private TaskAttemptReport report; private String localDiagMessage; CompletedTaskAttempt(TaskId taskId, TaskAttemptInfo attemptInfo) { this.attemptInfo = attemptInfo; this.attemptId = TypeConverter.toYarn(attemptInfo.getAttemptId()); - if (attemptInfo.getCounters() != null) { - this.counters = attemptInfo.getCounters(); - } if (attemptInfo.getTaskStatus() != null) { this.state = TaskAttemptState.valueOf(attemptInfo.getTaskStatus()); } else { @@ -56,37 +52,9 @@ public class CompletedTaskAttempt implements TaskAttempt { localDiagMessage = "Attmpt state missing from History : marked as KILLED"; diagnostics.add(localDiagMessage); } - if (attemptInfo.getError() != null) { diagnostics.add(attemptInfo.getError()); } - - report = RecordFactoryProvider.getRecordFactory(null).newRecordInstance(TaskAttemptReport.class); - - report.setTaskAttemptId(attemptId); - 
report.setTaskAttemptState(state); - report.setProgress(getProgress()); - report.setStartTime(attemptInfo.getStartTime()); - - report.setFinishTime(attemptInfo.getFinishTime()); - report.setShuffleFinishTime(attemptInfo.getShuffleFinishTime()); - report.setSortFinishTime(attemptInfo.getSortFinishTime()); - if (localDiagMessage != null) { - report.setDiagnosticInfo(attemptInfo.getError() + ", " + localDiagMessage); - } else { - report.setDiagnosticInfo(attemptInfo.getError()); - } -// report.setPhase(attemptInfo.get); //TODO - report.setStateString(attemptInfo.getState()); - report.setCounters(TypeConverter.toYarn(getCounters())); - report.setContainerId(attemptInfo.getContainerId()); - if (attemptInfo.getHostname() == null) { - report.setNodeManagerHost("UNKNOWN"); - } else { - report.setNodeManagerHost(attemptInfo.getHostname()); - report.setNodeManagerPort(attemptInfo.getPort()); - } - report.setNodeManagerHttpPort(attemptInfo.getHttpPort()); } @Override @@ -111,7 +79,7 @@ public String getNodeRackName() { @Override public Counters getCounters() { - return counters; + return attemptInfo.getCounters(); } @Override @@ -125,7 +93,10 @@ public float getProgress() { } @Override - public TaskAttemptReport getReport() { + public synchronized TaskAttemptReport getReport() { + if (report == null) { + constructTaskAttemptReport(); + } return report; } @@ -146,26 +117,55 @@ public List getDiagnostics() { @Override public long getLaunchTime() { - return report.getStartTime(); + return attemptInfo.getStartTime(); } @Override public long getFinishTime() { - return report.getFinishTime(); + return attemptInfo.getFinishTime(); } @Override public long getShuffleFinishTime() { - return report.getShuffleFinishTime(); + return attemptInfo.getShuffleFinishTime(); } @Override public long getSortFinishTime() { - return report.getSortFinishTime(); + return attemptInfo.getSortFinishTime(); } @Override public int getShufflePort() { - throw new UnsupportedOperationException("Not supported yet."); + return attemptInfo.getShufflePort(); + } + + private void constructTaskAttemptReport() { + report = Records.newRecord(TaskAttemptReport.class); + + report.setTaskAttemptId(attemptId); + report.setTaskAttemptState(state); + report.setProgress(getProgress()); + report.setStartTime(attemptInfo.getStartTime()); + report.setFinishTime(attemptInfo.getFinishTime()); + report.setShuffleFinishTime(attemptInfo.getShuffleFinishTime()); + report.setSortFinishTime(attemptInfo.getSortFinishTime()); + if (localDiagMessage != null) { + report + .setDiagnosticInfo(attemptInfo.getError() + ", " + localDiagMessage); + } else { + report.setDiagnosticInfo(attemptInfo.getError()); + } + // report.setPhase(attemptInfo.get); //TODO + report.setStateString(attemptInfo.getState()); + report.setCounters(TypeConverter.toYarn(getCounters())); + report.setContainerId(attemptInfo.getContainerId()); + if (attemptInfo.getHostname() == null) { + report.setNodeManagerHost("UNKNOWN"); + } else { + report.setNodeManagerHost(attemptInfo.getHostname()); + report.setNodeManagerPort(attemptInfo.getPort()); + } + report.setNodeManagerHttpPort(attemptInfo.getHttpPort()); } } diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/JobHistory.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/JobHistory.java index 8cc05ee3c6..77d7872e66 100644 --- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/JobHistory.java +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/JobHistory.java @@ -24,6 +24,7 @@ import java.util.Collections; import java.util.HashMap; import java.util.HashSet; +import java.util.LinkedHashMap; import java.util.List; import java.util.Map; import java.util.Set; @@ -117,9 +118,8 @@ public class JobHistory extends AbstractService implements HistoryContext { //Maintains a list of known done subdirectories. Not currently used. private final Set existingDoneSubdirs = new HashSet(); - - private final SortedMap loadedJobCache = - new ConcurrentSkipListMap(); + + private Map loadedJobCache = null; /** * Maintains a mapping between intermediate user directories and the last @@ -167,6 +167,7 @@ public class JobHistory extends AbstractService implements HistoryContext { * .....${DONE_DIR}/VERSION_STRING/YYYY/MM/DD/HH/SERIAL_NUM/jh{index_entries}.jhist */ + @SuppressWarnings("serial") @Override public void init(Configuration conf) throws YarnException { LOG.info("JobHistory Init"); @@ -224,6 +225,16 @@ public void init(Configuration conf) throws YarnException { DEFAULT_MOVE_THREAD_INTERVAL); numMoveThreads = conf.getInt(JHAdminConfig.MR_HISTORY_MOVE_THREAD_COUNT, DEFAULT_MOVE_THREAD_COUNT); + + loadedJobCache = + Collections.synchronizedMap(new LinkedHashMap( + loadedJobCacheSize + 1, 0.75f, true) { + @Override + public boolean removeEldestEntry(final Map.Entry eldest) { + return super.size() > loadedJobCacheSize; + } + }); + try { initExisting(); } catch (IOException e) { @@ -465,9 +476,6 @@ private void addToLoadedJobCache(Job job) { LOG.debug("Adding "+job.getID()+" to loaded job cache"); } loadedJobCache.put(job.getID(), job); - if (loadedJobCache.size() > loadedJobCacheSize ) { - loadedJobCache.remove(loadedJobCache.firstKey()); - } } @@ -655,7 +663,7 @@ private Job loadJob(MetaInfo metaInfo) { synchronized(metaInfo) { try { Job job = new CompletedJob(conf, metaInfo.getJobIndexInfo().getJobId(), - metaInfo.getHistoryFile(), true, metaInfo.getJobIndexInfo().getUser(), + metaInfo.getHistoryFile(), false, metaInfo.getJobIndexInfo().getUser(), metaInfo.getConfFile(), this.aclsMgr); addToLoadedJobCache(job); return job; @@ -938,7 +946,7 @@ public Map getAllJobs() { LOG.debug("Called getAllJobs()"); return getAllJobsInternal(); } - + static class MetaInfo { private Path historyFile; private Path confFile; diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/test/java/org/apache/hadoop/mapreduce/v2/hs/TestJobHistoryEntities.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/test/java/org/apache/hadoop/mapreduce/v2/hs/TestJobHistoryEntities.java new file mode 100644 index 0000000000..34462ece0e --- /dev/null +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/test/java/org/apache/hadoop/mapreduce/v2/hs/TestJobHistoryEntities.java @@ -0,0 +1,145 @@ +package org.apache.hadoop.mapreduce.v2.hs; + +import static junit.framework.Assert.assertEquals; + +import java.util.ArrayList; +import java.util.Collection; +import java.util.List; +import java.util.Map; + +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.fs.Path; +import org.apache.hadoop.mapred.JobACLsManager; +import org.apache.hadoop.mapreduce.v2.api.records.JobId; +import 
org.apache.hadoop.mapreduce.v2.api.records.JobReport; +import org.apache.hadoop.mapreduce.v2.api.records.JobState; +import org.apache.hadoop.mapreduce.v2.api.records.TaskAttemptId; +import org.apache.hadoop.mapreduce.v2.api.records.TaskAttemptReport; +import org.apache.hadoop.mapreduce.v2.api.records.TaskAttemptState; +import org.apache.hadoop.mapreduce.v2.api.records.TaskId; +import org.apache.hadoop.mapreduce.v2.api.records.TaskReport; +import org.apache.hadoop.mapreduce.v2.api.records.TaskState; +import org.apache.hadoop.mapreduce.v2.api.records.TaskType; +import org.apache.hadoop.mapreduce.v2.app.job.Task; +import org.apache.hadoop.mapreduce.v2.app.job.TaskAttempt; +import org.apache.hadoop.mapreduce.v2.util.MRBuilderUtils; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.Parameterized; +import org.junit.runners.Parameterized.Parameters; + +@RunWith(value = Parameterized.class) +public class TestJobHistoryEntities { + + private final String historyFileName = + "job_1329348432655_0001-1329348443227-user-Sleep+job-1329348468601-10-1-SUCCEEDED-default.jhist"; + private final String confFileName = "job_1329348432655_0001_conf.xml"; + private final Configuration conf = new Configuration(); + private final JobACLsManager jobAclsManager = new JobACLsManager(conf); + private boolean loadTasks; + private JobId jobId = MRBuilderUtils.newJobId(1329348432655L, 1, 1); + Path fullHistoryPath = + new Path(this.getClass().getClassLoader().getResource(historyFileName) + .getFile()); + Path fullConfPath = + new Path(this.getClass().getClassLoader().getResource(confFileName) + .getFile()); + private CompletedJob completedJob; + + public TestJobHistoryEntities(boolean loadTasks) throws Exception { + this.loadTasks = loadTasks; + } + + @Parameters + public static Collection<Object[]> data() { + List<Object[]> list = new ArrayList<Object[]>(2); + list.add(new Object[] { true }); + list.add(new Object[] { false }); + return list; + } + + /* Verify some expected values based on the history file */ + @Test + public void testCompletedJob() throws Exception { + //Re-initialize to verify the delayed load. + completedJob = + new CompletedJob(conf, jobId, fullHistoryPath, loadTasks, "user", + fullConfPath, jobAclsManager); + //Verify tasks loaded based on loadTasks parameter. + assertEquals(loadTasks, completedJob.tasksLoaded.get()); + assertEquals(1, completedJob.getAMInfos().size()); + assertEquals(10, completedJob.getCompletedMaps()); + assertEquals(1, completedJob.getCompletedReduces()); + assertEquals(11, completedJob.getTasks().size()); + //Verify tasks loaded at this point.
+ assertEquals(true, completedJob.tasksLoaded.get()); + assertEquals(10, completedJob.getTasks(TaskType.MAP).size()); + assertEquals(1, completedJob.getTasks(TaskType.REDUCE).size()); + assertEquals("user", completedJob.getUserName()); + assertEquals(JobState.SUCCEEDED, completedJob.getState()); + JobReport jobReport = completedJob.getReport(); + assertEquals("user", jobReport.getUser()); + assertEquals(JobState.SUCCEEDED, jobReport.getJobState()); + } + + @Test + public void testCompletedTask() throws Exception { + completedJob = + new CompletedJob(conf, jobId, fullHistoryPath, loadTasks, "user", + fullConfPath, jobAclsManager); + TaskId mt1Id = MRBuilderUtils.newTaskId(jobId, 0, TaskType.MAP); + TaskId rt1Id = MRBuilderUtils.newTaskId(jobId, 0, TaskType.REDUCE); + + Map<TaskId, Task> mapTasks = completedJob.getTasks(TaskType.MAP); + Map<TaskId, Task> reduceTasks = completedJob.getTasks(TaskType.REDUCE); + assertEquals(10, mapTasks.size()); + assertEquals(1, reduceTasks.size()); + + Task mt1 = mapTasks.get(mt1Id); + assertEquals(1, mt1.getAttempts().size()); + assertEquals(TaskState.SUCCEEDED, mt1.getState()); + TaskReport mt1Report = mt1.getReport(); + assertEquals(TaskState.SUCCEEDED, mt1Report.getTaskState()); + assertEquals(mt1Id, mt1Report.getTaskId()); + Task rt1 = reduceTasks.get(rt1Id); + assertEquals(1, rt1.getAttempts().size()); + assertEquals(TaskState.SUCCEEDED, rt1.getState()); + TaskReport rt1Report = rt1.getReport(); + assertEquals(TaskState.SUCCEEDED, rt1Report.getTaskState()); + assertEquals(rt1Id, rt1Report.getTaskId()); + } + + @Test + public void testCompletedTaskAttempt() throws Exception { + completedJob = + new CompletedJob(conf, jobId, fullHistoryPath, loadTasks, "user", + fullConfPath, jobAclsManager); + TaskId mt1Id = MRBuilderUtils.newTaskId(jobId, 0, TaskType.MAP); + TaskId rt1Id = MRBuilderUtils.newTaskId(jobId, 0, TaskType.REDUCE); + TaskAttemptId mta1Id = MRBuilderUtils.newTaskAttemptId(mt1Id, 0); + TaskAttemptId rta1Id = MRBuilderUtils.newTaskAttemptId(rt1Id, 0); + + Task mt1 = completedJob.getTask(mt1Id); + Task rt1 = completedJob.getTask(rt1Id); + + TaskAttempt mta1 = mt1.getAttempt(mta1Id); + assertEquals(TaskAttemptState.SUCCEEDED, mta1.getState()); + assertEquals("localhost:45454", mta1.getAssignedContainerMgrAddress()); + assertEquals("localhost:9999", mta1.getNodeHttpAddress()); + TaskAttemptReport mta1Report = mta1.getReport(); + assertEquals(TaskAttemptState.SUCCEEDED, mta1Report.getTaskAttemptState()); + assertEquals("localhost", mta1Report.getNodeManagerHost()); + assertEquals(45454, mta1Report.getNodeManagerPort()); + assertEquals(9999, mta1Report.getNodeManagerHttpPort()); + + TaskAttempt rta1 = rt1.getAttempt(rta1Id); + assertEquals(TaskAttemptState.SUCCEEDED, rta1.getState()); + assertEquals("localhost:45454", rta1.getAssignedContainerMgrAddress()); + assertEquals("localhost:9999", rta1.getNodeHttpAddress()); + TaskAttemptReport rta1Report = rta1.getReport(); + assertEquals(TaskAttemptState.SUCCEEDED, rta1Report.getTaskAttemptState()); + assertEquals("localhost", rta1Report.getNodeManagerHost()); + assertEquals(45454, rta1Report.getNodeManagerPort()); + assertEquals(9999, rta1Report.getNodeManagerHttpPort()); + } +} diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/test/resources/job_1329348432655_0001-1329348443227-user-Sleep+job-1329348468601-10-1-SUCCEEDED-default.jhist
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/test/resources/job_1329348432655_0001-1329348443227-user-Sleep+job-1329348468601-10-1-SUCCEEDED-default.jhist new file mode 100644 index 0000000000..484971898e --- /dev/null +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/test/resources/job_1329348432655_0001-1329348443227-user-Sleep+job-1329348468601-10-1-SUCCEEDED-default.jhist @@ -0,0 +1,51 @@ +Avro-Json +{"type":"record","name":"Event","namespace":"org.apache.hadoop.mapreduce.jobhistory","fields":[{"name":"type","type":{"type":"enum","name":"EventType","symbols":["JOB_SUBMITTED","JOB_INITED","JOB_FINISHED","JOB_PRIORITY_CHANGED","JOB_STATUS_CHANGED","JOB_FAILED","JOB_KILLED","JOB_INFO_CHANGED","TASK_STARTED","TASK_FINISHED","TASK_FAILED","TASK_UPDATED","NORMALIZED_RESOURCE","MAP_ATTEMPT_STARTED","MAP_ATTEMPT_FINISHED","MAP_ATTEMPT_FAILED","MAP_ATTEMPT_KILLED","REDUCE_ATTEMPT_STARTED","REDUCE_ATTEMPT_FINISHED","REDUCE_ATTEMPT_FAILED","REDUCE_ATTEMPT_KILLED","SETUP_ATTEMPT_STARTED","SETUP_ATTEMPT_FINISHED","SETUP_ATTEMPT_FAILED","SETUP_ATTEMPT_KILLED","CLEANUP_ATTEMPT_STARTED","CLEANUP_ATTEMPT_FINISHED","CLEANUP_ATTEMPT_FAILED","CLEANUP_ATTEMPT_KILLED","AM_STARTED"]}},{"name":"event","type":[{"type":"record","name":"JobFinished","fields":[{"name":"jobid","type":"string"},{"name":"finishTime","type":"long"},{"name":"finishedMaps","type":"int"},{"name":"finishedReduces","type":"int"},{"name":"failedMaps","type":"int"},{"name":"failedReduces","type":"int"},{"name":"totalCounters","type":{"type":"record","name":"JhCounters","fields":[{"name":"name","type":"string"},{"name":"groups","type":{"type":"array","items":{"type":"record","name":"JhCounterGroup","fields":[{"name":"name","type":"string"},{"name":"displayName","type":"string"},{"name":"counts","type":{"type":"array","items":{"type":"record","name":"JhCounter","fields":[{"name":"name","type":"string"},{"name":"displayName","type":"string"},{"name":"value","type":"long"}]}}}]}}}]}},{"name":"mapCounters","type":"JhCounters"},{"name":"reduceCounters","type":"JhCounters"}]},{"type":"record","name":"JobInfoChange","fields":[{"name":"jobid","type":"string"},{"name":"submitTime","type":"long"},{"name":"launchTime","type":"long"}]},{"type":"record","name":"JobInited","fields":[{"name":"jobid","type":"string"},{"name":"launchTime","type":"long"},{"name":"totalMaps","type":"int"},{"name":"totalReduces","type":"int"},{"name":"jobStatus","type":"string"},{"name":"uberized","type":"boolean"}]},{"type":"record","name":"AMStarted","fields":[{"name":"applicationAttemptId","type":"string"},{"name":"startTime","type":"long"},{"name":"containerId","type":"string"},{"name":"nodeManagerHost","type":"string"},{"name":"nodeManagerPort","type":"int"},{"name":"nodeManagerHttpPort","type":"int"}]},{"type":"record","name":"JobPriorityChange","fields":[{"name":"jobid","type":"string"},{"name":"priority","type":"string"}]},{"type":"record","name":"JobStatusChanged","fields":[{"name":"jobid","type":"string"},{"name":"jobStatus","type":"string"}]},{"type":"record","name":"JobSubmitted","fields":[{"name":"jobid","type":"string"},{"name":"jobName","type":"string"},{"name":"userName","type":"string"},{"name":"submitTime","type":"long"},{"name":"jobConfPath","type":"string"},{"name":"acls","type":{"type":"map","values":"string"}},{"name":"jobQueueName","type":"string"}]},{"type":"record","name":"JobUnsuccessfulCompletion","fields":[{"name":"jobid","type":"string"},{"name":"finishTime","type":"long"},{"nam
e":"finishedMaps","type":"int"},{"name":"finishedReduces","type":"int"},{"name":"jobStatus","type":"string"}]},{"type":"record","name":"MapAttemptFinished","fields":[{"name":"taskid","type":"string"},{"name":"attemptId","type":"string"},{"name":"taskType","type":"string"},{"name":"taskStatus","type":"string"},{"name":"mapFinishTime","type":"long"},{"name":"finishTime","type":"long"},{"name":"hostname","type":"string"},{"name":"port","type":"int"},{"name":"rackname","type":"string"},{"name":"state","type":"string"},{"name":"counters","type":"JhCounters"},{"name":"clockSplits","type":{"type":"array","items":"int"}},{"name":"cpuUsages","type":{"type":"array","items":"int"}},{"name":"vMemKbytes","type":{"type":"array","items":"int"}},{"name":"physMemKbytes","type":{"type":"array","items":"int"}}]},{"type":"record","name":"ReduceAttemptFinished","fields":[{"name":"taskid","type":"string"},{"name":"attemptId","type":"string"},{"name":"taskType","type":"string"},{"name":"taskStatus","type":"string"},{"name":"shuffleFinishTime","type":"long"},{"name":"sortFinishTime","type":"long"},{"name":"finishTime","type":"long"},{"name":"hostname","type":"string"},{"name":"port","type":"int"},{"name":"rackname","type":"string"},{"name":"state","type":"string"},{"name":"counters","type":"JhCounters"},{"name":"clockSplits","type":{"type":"array","items":"int"}},{"name":"cpuUsages","type":{"type":"array","items":"int"}},{"name":"vMemKbytes","type":{"type":"array","items":"int"}},{"name":"physMemKbytes","type":{"type":"array","items":"int"}}]},{"type":"record","name":"TaskAttemptFinished","fields":[{"name":"taskid","type":"string"},{"name":"attemptId","type":"string"},{"name":"taskType","type":"string"},{"name":"taskStatus","type":"string"},{"name":"finishTime","type":"long"},{"name":"rackname","type":"string"},{"name":"hostname","type":"string"},{"name":"state","type":"string"},{"name":"counters","type":"JhCounters"}]},{"type":"record","name":"TaskAttemptStarted","fields":[{"name":"taskid","type":"string"},{"name":"taskType","type":"string"},{"name":"attemptId","type":"string"},{"name":"startTime","type":"long"},{"name":"trackerName","type":"string"},{"name":"httpPort","type":"int"},{"name":"shufflePort","type":"int"},{"name":"containerId","type":"string"}]},{"type":"record","name":"TaskAttemptUnsuccessfulCompletion","fields":[{"name":"taskid","type":"string"},{"name":"taskType","type":"string"},{"name":"attemptId","type":"string"},{"name":"finishTime","type":"long"},{"name":"hostname","type":"string"},{"name":"port","type":"int"},{"name":"rackname","type":"string"},{"name":"status","type":"string"},{"name":"error","type":"string"},{"name":"clockSplits","type":{"type":"array","items":"int"}},{"name":"cpuUsages","type":{"type":"array","items":"int"}},{"name":"vMemKbytes","type":{"type":"array","items":"int"}},{"name":"physMemKbytes","type":{"type":"array","items":"int"}}]},{"type":"record","name":"TaskFailed","fields":[{"name":"taskid","type":"string"},{"name":"taskType","type":"string"},{"name":"finishTime","type":"long"},{"name":"error","type":"string"},{"name":"failedDueToAttempt","type":["null","string"]},{"name":"status","type":"string"}]},{"type":"record","name":"TaskFinished","fields":[{"name":"taskid","type":"string"},{"name":"taskType","type":"string"},{"name":"finishTime","type":"long"},{"name":"status","type":"string"},{"name":"counters","type":"JhCounters"}]},{"type":"record","name":"TaskStarted","fields":[{"name":"taskid","type":"string"},{"name":"taskType","type":"string"},{"name":"startTime","type":
"long"},{"name":"splitLocations","type":"string"}]},{"type":"record","name":"TaskUpdated","fields":[{"name":"taskid","type":"string"},{"name":"finishTime","type":"long"}]}]}]} +{"type":"AM_STARTED","event":{"org.apache.hadoop.mapreduce.jobhistory.AMStarted":{"applicationAttemptId":"appattempt_1329348432655_0001_000001","startTime":1329348445605,"containerId":"container_1329348432655_0001_01_000001","nodeManagerHost":"localhost","nodeManagerPort":45454,"nodeManagerHttpPort":9999}}} + {"type":"JOB_SUBMITTED","event":{"org.apache.hadoop.mapreduce.jobhistory.JobSubmitted":{"jobid":"job_1329348432655_0001","jobName":"Sleep job","userName":"user","submitTime":1329348443227,"jobConfPath":"hdfs://localhost:8021/tmp/hadoop-yarn/staging/user/.staging/job_1329348432655_0001/job.xml","acls":{},"jobQueueName":"default"}}} + {"type":"JOB_INITED","event":{"org.apache.hadoop.mapreduce.jobhistory.JobInited":{"jobid":"job_1329348432655_0001","launchTime":1329348448308,"totalMaps":10,"totalReduces":1,"jobStatus":"INITED","uberized":false}}} + {"type":"JOB_INFO_CHANGED","event":{"org.apache.hadoop.mapreduce.jobhistory.JobInfoChange":{"jobid":"job_1329348432655_0001","submitTime":1329348443227,"launchTime":1329348448308}}} + {"type":"TASK_STARTED","event":{"org.apache.hadoop.mapreduce.jobhistory.TaskStarted":{"taskid":"task_1329348432655_0001_m_000000","taskType":"MAP","startTime":1329348448373,"splitLocations":""}}} + {"type":"TASK_STARTED","event":{"org.apache.hadoop.mapreduce.jobhistory.TaskStarted":{"taskid":"task_1329348432655_0001_m_000001","taskType":"MAP","startTime":1329348448387,"splitLocations":""}}} + {"type":"TASK_STARTED","event":{"org.apache.hadoop.mapreduce.jobhistory.TaskStarted":{"taskid":"task_1329348432655_0001_m_000002","taskType":"MAP","startTime":1329348448387,"splitLocations":""}}} + {"type":"TASK_STARTED","event":{"org.apache.hadoop.mapreduce.jobhistory.TaskStarted":{"taskid":"task_1329348432655_0001_m_000003","taskType":"MAP","startTime":1329348448387,"splitLocations":""}}} + {"type":"TASK_STARTED","event":{"org.apache.hadoop.mapreduce.jobhistory.TaskStarted":{"taskid":"task_1329348432655_0001_m_000004","taskType":"MAP","startTime":1329348448387,"splitLocations":""}}} + {"type":"TASK_STARTED","event":{"org.apache.hadoop.mapreduce.jobhistory.TaskStarted":{"taskid":"task_1329348432655_0001_m_000005","taskType":"MAP","startTime":1329348448388,"splitLocations":""}}} + {"type":"TASK_STARTED","event":{"org.apache.hadoop.mapreduce.jobhistory.TaskStarted":{"taskid":"task_1329348432655_0001_m_000006","taskType":"MAP","startTime":1329348448388,"splitLocations":""}}} + {"type":"TASK_STARTED","event":{"org.apache.hadoop.mapreduce.jobhistory.TaskStarted":{"taskid":"task_1329348432655_0001_m_000007","taskType":"MAP","startTime":1329348448388,"splitLocations":""}}} + {"type":"TASK_STARTED","event":{"org.apache.hadoop.mapreduce.jobhistory.TaskStarted":{"taskid":"task_1329348432655_0001_m_000008","taskType":"MAP","startTime":1329348448388,"splitLocations":""}}} + {"type":"TASK_STARTED","event":{"org.apache.hadoop.mapreduce.jobhistory.TaskStarted":{"taskid":"task_1329348432655_0001_m_000009","taskType":"MAP","startTime":1329348448388,"splitLocations":""}}} + {"type":"TASK_STARTED","event":{"org.apache.hadoop.mapreduce.jobhistory.TaskStarted":{"taskid":"task_1329348432655_0001_r_000000","taskType":"REDUCE","startTime":1329348448388,"splitLocations":""}}} + 
{"type":"MAP_ATTEMPT_STARTED","event":{"org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStarted":{"taskid":"task_1329348432655_0001_m_000000","taskType":"MAP","attemptId":"attempt_1329348432655_0001_m_000000_0","startTime":1329348450485,"trackerName":"localhost","httpPort":9999,"shufflePort":8080,"containerId":"container_1329348432655_0001_01_000002"}}} + {"type":"MAP_ATTEMPT_STARTED","event":{"org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStarted":{"taskid":"task_1329348432655_0001_m_000002","taskType":"MAP","attemptId":"attempt_1329348432655_0001_m_000002_0","startTime":1329348450537,"trackerName":"localhost","httpPort":9999,"shufflePort":8080,"containerId":"container_1329348432655_0001_01_000004"}}} + {"type":"MAP_ATTEMPT_STARTED","event":{"org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStarted":{"taskid":"task_1329348432655_0001_m_000004","taskType":"MAP","attemptId":"attempt_1329348432655_0001_m_000004_0","startTime":1329348450538,"trackerName":"localhost","httpPort":9999,"shufflePort":8080,"containerId":"container_1329348432655_0001_01_000006"}}} + {"type":"MAP_ATTEMPT_STARTED","event":{"org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStarted":{"taskid":"task_1329348432655_0001_m_000001","taskType":"MAP","attemptId":"attempt_1329348432655_0001_m_000001_0","startTime":1329348450576,"trackerName":"localhost","httpPort":9999,"shufflePort":8080,"containerId":"container_1329348432655_0001_01_000003"}}} + {"type":"MAP_ATTEMPT_STARTED","event":{"org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStarted":{"taskid":"task_1329348432655_0001_m_000003","taskType":"MAP","attemptId":"attempt_1329348432655_0001_m_000003_0","startTime":1329348450579,"trackerName":"localhost","httpPort":9999,"shufflePort":8080,"containerId":"container_1329348432655_0001_01_000005"}}} + {"type":"MAP_ATTEMPT_STARTED","event":{"org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStarted":{"taskid":"task_1329348432655_0001_m_000005","taskType":"MAP","attemptId":"attempt_1329348432655_0001_m_000005_0","startTime":1329348450580,"trackerName":"localhost","httpPort":9999,"shufflePort":8080,"containerId":"container_1329348432655_0001_01_000007"}}} + {"type":"MAP_ATTEMPT_STARTED","event":{"org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStarted":{"taskid":"task_1329348432655_0001_m_000006","taskType":"MAP","attemptId":"attempt_1329348432655_0001_m_000006_0","startTime":1329348450581,"trackerName":"localhost","httpPort":9999,"shufflePort":8080,"containerId":"container_1329348432655_0001_01_000008"}}} + {"type":"MAP_ATTEMPT_FINISHED","event":{"org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished":{"taskid":"task_1329348432655_0001_m_000005","attemptId":"attempt_1329348432655_0001_m_000005_0","taskType":"MAP","taskStatus":"SUCCEEDED","mapFinishTime":1329348458880,"finishTime":1329348461951,"hostname":"localhost","port":45454,"rackname":"/default-rack","state":"Sleeping... 
(1) ms left","counters":{"name":"COUNTERS","groups":[{"name":"org.apache.hadoop.mapreduce.FileSystemCounter","displayName":"File System Counters","counts":[{"name":"FILE_BYTES_READ","displayName":"FILE: Number of bytes read","value":120},{"name":"FILE_BYTES_WRITTEN","displayName":"FILE: Number of bytes written","value":48051},{"name":"FILE_READ_OPS","displayName":"FILE: Number of read operations","value":0},{"name":"FILE_LARGE_READ_OPS","displayName":"FILE: Number of large read operations","value":0},{"name":"FILE_WRITE_OPS","displayName":"FILE: Number of write operations","value":0},{"name":"HDFS_BYTES_READ","displayName":"HDFS: Number of bytes read","value":48},{"name":"HDFS_BYTES_WRITTEN","displayName":"HDFS: Number of bytes written","value":0},{"name":"HDFS_READ_OPS","displayName":"HDFS: Number of read operations","value":1},{"name":"HDFS_LARGE_READ_OPS","displayName":"HDFS: Number of large read operations","value":0},{"name":"HDFS_WRITE_OPS","displayName":"HDFS: Number of write operations","value":0}]},{"name":"org.apache.hadoop.mapreduce.TaskCounter","displayName":"Map-Reduce Framework","counts":[{"name":"MAP_INPUT_RECORDS","displayName":"Map input records","value":1},{"name":"MAP_OUTPUT_RECORDS","displayName":"Map output records","value":1},{"name":"MAP_OUTPUT_BYTES","displayName":"Map output bytes","value":4},{"name":"MAP_OUTPUT_MATERIALIZED_BYTES","displayName":"Map output materialized bytes","value":12},{"name":"SPLIT_RAW_BYTES","displayName":"Input split bytes","value":48},{"name":"COMBINE_INPUT_RECORDS","displayName":"Combine input records","value":0},{"name":"SPILLED_RECORDS","displayName":"Spilled Records","value":1},{"name":"FAILED_SHUFFLE","displayName":"Failed Shuffles","value":0},{"name":"MERGED_MAP_OUTPUTS","displayName":"Merged Map outputs","value":0},{"name":"GC_TIME_MILLIS","displayName":"GC time elapsed (ms)","value":518},{"name":"CPU_MILLISECONDS","displayName":"CPU time spent (ms)","value":340},{"name":"PHYSICAL_MEMORY_BYTES","displayName":"Physical memory (bytes) snapshot","value":183832576},{"name":"VIRTUAL_MEMORY_BYTES","displayName":"Virtual memory (bytes) snapshot","value":701161472},{"name":"COMMITTED_HEAP_BYTES","displayName":"Total committed heap usage (bytes)","value":181272576}]},{"name":"org.apache.hadoop.mapreduce.lib.input.FileInputFormatCounter","displayName":"File Input Format Counters ","counts":[{"name":"BYTES_READ","displayName":"Bytes Read","value":0}]}]},"clockSplits":[8691,129,128,129,128,129,128,129,128,129,128,129],"cpuUsages":[28,28,29,28,28,29,28,28,29,28,28,29],"vMemKbytes":[28530,85590,142651,199711,256772,313833,370894,427954,485015,542076,599136,656197],"physMemKbytes":[7479,22440,37400,52360,67321,82281,97242,112201,127162,142123,157082,172043]}}} + {"type":"MAP_ATTEMPT_FINISHED","event":{"org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished":{"taskid":"task_1329348432655_0001_m_000002","attemptId":"attempt_1329348432655_0001_m_000002_0","taskType":"MAP","taskStatus":"SUCCEEDED","mapFinishTime":1329348459298,"finishTime":1329348461952,"hostname":"localhost","port":45454,"rackname":"/default-rack","state":"Sleeping... 
(1) ms left","counters":{"name":"COUNTERS","groups":[{"name":"org.apache.hadoop.mapreduce.FileSystemCounter","displayName":"File System Counters","counts":[{"name":"FILE_BYTES_READ","displayName":"FILE: Number of bytes read","value":120},{"name":"FILE_BYTES_WRITTEN","displayName":"FILE: Number of bytes written","value":48051},{"name":"FILE_READ_OPS","displayName":"FILE: Number of read operations","value":0},{"name":"FILE_LARGE_READ_OPS","displayName":"FILE: Number of large read operations","value":0},{"name":"FILE_WRITE_OPS","displayName":"FILE: Number of write operations","value":0},{"name":"HDFS_BYTES_READ","displayName":"HDFS: Number of bytes read","value":48},{"name":"HDFS_BYTES_WRITTEN","displayName":"HDFS: Number of bytes written","value":0},{"name":"HDFS_READ_OPS","displayName":"HDFS: Number of read operations","value":1},{"name":"HDFS_LARGE_READ_OPS","displayName":"HDFS: Number of large read operations","value":0},{"name":"HDFS_WRITE_OPS","displayName":"HDFS: Number of write operations","value":0}]},{"name":"org.apache.hadoop.mapreduce.TaskCounter","displayName":"Map-Reduce Framework","counts":[{"name":"MAP_INPUT_RECORDS","displayName":"Map input records","value":1},{"name":"MAP_OUTPUT_RECORDS","displayName":"Map output records","value":1},{"name":"MAP_OUTPUT_BYTES","displayName":"Map output bytes","value":4},{"name":"MAP_OUTPUT_MATERIALIZED_BYTES","displayName":"Map output materialized bytes","value":12},{"name":"SPLIT_RAW_BYTES","displayName":"Input split bytes","value":48},{"name":"COMBINE_INPUT_RECORDS","displayName":"Combine input records","value":0},{"name":"SPILLED_RECORDS","displayName":"Spilled Records","value":1},{"name":"FAILED_SHUFFLE","displayName":"Failed Shuffles","value":0},{"name":"MERGED_MAP_OUTPUTS","displayName":"Merged Map outputs","value":0},{"name":"GC_TIME_MILLIS","displayName":"GC time elapsed (ms)","value":3},{"name":"CPU_MILLISECONDS","displayName":"CPU time spent (ms)","value":330},{"name":"PHYSICAL_MEMORY_BYTES","displayName":"Physical memory (bytes) snapshot","value":188891136},{"name":"VIRTUAL_MEMORY_BYTES","displayName":"Virtual memory (bytes) snapshot","value":706727936},{"name":"COMMITTED_HEAP_BYTES","displayName":"Total committed heap usage (bytes)","value":181272576}]},{"name":"org.apache.hadoop.mapreduce.lib.input.FileInputFormatCounter","displayName":"File Input Format Counters ","counts":[{"name":"BYTES_READ","displayName":"Bytes Read","value":0}]}]},"clockSplits":[9743,109,109,109,109,110,109,109,109,109,109,110],"cpuUsages":[27,28,27,28,27,28,27,28,27,28,27,28],"vMemKbytes":[28756,86270,143784,201297,258811,316324,373838,431351,488866,546379,603892,661407],"physMemKbytes":[7686,23058,38430,53801,69174,84545,99918,115289,130662,146034,161405,176778]}}} + {"type":"MAP_ATTEMPT_FINISHED","event":{"org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished":{"taskid":"task_1329348432655_0001_m_000006","attemptId":"attempt_1329348432655_0001_m_000006_0","taskType":"MAP","taskStatus":"SUCCEEDED","mapFinishTime":1329348459712,"finishTime":1329348461952,"hostname":"localhost","port":45454,"rackname":"/default-rack","state":"Sleeping... 
(1) ms left","counters":{"name":"COUNTERS","groups":[{"name":"org.apache.hadoop.mapreduce.FileSystemCounter","displayName":"File System Counters","counts":[{"name":"FILE_BYTES_READ","displayName":"FILE: Number of bytes read","value":120},{"name":"FILE_BYTES_WRITTEN","displayName":"FILE: Number of bytes written","value":48051},{"name":"FILE_READ_OPS","displayName":"FILE: Number of read operations","value":0},{"name":"FILE_LARGE_READ_OPS","displayName":"FILE: Number of large read operations","value":0},{"name":"FILE_WRITE_OPS","displayName":"FILE: Number of write operations","value":0},{"name":"HDFS_BYTES_READ","displayName":"HDFS: Number of bytes read","value":48},{"name":"HDFS_BYTES_WRITTEN","displayName":"HDFS: Number of bytes written","value":0},{"name":"HDFS_READ_OPS","displayName":"HDFS: Number of read operations","value":1},{"name":"HDFS_LARGE_READ_OPS","displayName":"HDFS: Number of large read operations","value":0},{"name":"HDFS_WRITE_OPS","displayName":"HDFS: Number of write operations","value":0}]},{"name":"org.apache.hadoop.mapreduce.TaskCounter","displayName":"Map-Reduce Framework","counts":[{"name":"MAP_INPUT_RECORDS","displayName":"Map input records","value":1},{"name":"MAP_OUTPUT_RECORDS","displayName":"Map output records","value":1},{"name":"MAP_OUTPUT_BYTES","displayName":"Map output bytes","value":4},{"name":"MAP_OUTPUT_MATERIALIZED_BYTES","displayName":"Map output materialized bytes","value":12},{"name":"SPLIT_RAW_BYTES","displayName":"Input split bytes","value":48},{"name":"COMBINE_INPUT_RECORDS","displayName":"Combine input records","value":0},{"name":"SPILLED_RECORDS","displayName":"Spilled Records","value":1},{"name":"FAILED_SHUFFLE","displayName":"Failed Shuffles","value":0},{"name":"MERGED_MAP_OUTPUTS","displayName":"Merged Map outputs","value":0},{"name":"GC_TIME_MILLIS","displayName":"GC time elapsed (ms)","value":105},{"name":"CPU_MILLISECONDS","displayName":"CPU time spent (ms)","value":340},{"name":"PHYSICAL_MEMORY_BYTES","displayName":"Physical memory (bytes) snapshot","value":175243264},{"name":"VIRTUAL_MEMORY_BYTES","displayName":"Virtual memory (bytes) snapshot","value":699457536},{"name":"COMMITTED_HEAP_BYTES","displayName":"Total committed heap usage (bytes)","value":165478400}]},{"name":"org.apache.hadoop.mapreduce.lib.input.FileInputFormatCounter","displayName":"File Input Format Counters ","counts":[{"name":"BYTES_READ","displayName":"Bytes Read","value":0}]}]},"clockSplits":[10240,61,62,61,61,62,61,61,62,61,61,62],"cpuUsages":[28,28,29,28,28,29,28,28,29,28,28,29],"vMemKbytes":[28461,85383,142305,199226,256149,313070,369993,426914,483837,540759,597680,654603],"physMemKbytes":[7130,21391,35653,49914,64175,78436,92698,106959,121221,135482,149743,164005]}}} + {"type":"TASK_FINISHED","event":{"org.apache.hadoop.mapreduce.jobhistory.TaskFinished":{"taskid":"task_1329348432655_0001_m_000005","taskType":"MAP","finishTime":1329348461951,"status":"SUCCEEDED","counters":{"name":"COUNTERS","groups":[{"name":"org.apache.hadoop.mapreduce.FileSystemCounter","displayName":"File System Counters","counts":[{"name":"FILE_BYTES_READ","displayName":"FILE: Number of bytes read","value":120},{"name":"FILE_BYTES_WRITTEN","displayName":"FILE: Number of bytes written","value":48051},{"name":"FILE_READ_OPS","displayName":"FILE: Number of read operations","value":0},{"name":"FILE_LARGE_READ_OPS","displayName":"FILE: Number of large read operations","value":0},{"name":"FILE_WRITE_OPS","displayName":"FILE: Number of write 
operations","value":0},{"name":"HDFS_BYTES_READ","displayName":"HDFS: Number of bytes read","value":48},{"name":"HDFS_BYTES_WRITTEN","displayName":"HDFS: Number of bytes written","value":0},{"name":"HDFS_READ_OPS","displayName":"HDFS: Number of read operations","value":1},{"name":"HDFS_LARGE_READ_OPS","displayName":"HDFS: Number of large read operations","value":0},{"name":"HDFS_WRITE_OPS","displayName":"HDFS: Number of write operations","value":0}]},{"name":"org.apache.hadoop.mapreduce.TaskCounter","displayName":"Map-Reduce Framework","counts":[{"name":"MAP_INPUT_RECORDS","displayName":"Map input records","value":1},{"name":"MAP_OUTPUT_RECORDS","displayName":"Map output records","value":1},{"name":"MAP_OUTPUT_BYTES","displayName":"Map output bytes","value":4},{"name":"MAP_OUTPUT_MATERIALIZED_BYTES","displayName":"Map output materialized bytes","value":12},{"name":"SPLIT_RAW_BYTES","displayName":"Input split bytes","value":48},{"name":"COMBINE_INPUT_RECORDS","displayName":"Combine input records","value":0},{"name":"SPILLED_RECORDS","displayName":"Spilled Records","value":1},{"name":"FAILED_SHUFFLE","displayName":"Failed Shuffles","value":0},{"name":"MERGED_MAP_OUTPUTS","displayName":"Merged Map outputs","value":0},{"name":"GC_TIME_MILLIS","displayName":"GC time elapsed (ms)","value":518},{"name":"CPU_MILLISECONDS","displayName":"CPU time spent (ms)","value":340},{"name":"PHYSICAL_MEMORY_BYTES","displayName":"Physical memory (bytes) snapshot","value":183832576},{"name":"VIRTUAL_MEMORY_BYTES","displayName":"Virtual memory (bytes) snapshot","value":701161472},{"name":"COMMITTED_HEAP_BYTES","displayName":"Total committed heap usage (bytes)","value":181272576}]},{"name":"org.apache.hadoop.mapreduce.lib.input.FileInputFormatCounter","displayName":"File Input Format Counters ","counts":[{"name":"BYTES_READ","displayName":"Bytes Read","value":0}]}]}}}} + {"type":"TASK_FINISHED","event":{"org.apache.hadoop.mapreduce.jobhistory.TaskFinished":{"taskid":"task_1329348432655_0001_m_000002","taskType":"MAP","finishTime":1329348461952,"status":"SUCCEEDED","counters":{"name":"COUNTERS","groups":[{"name":"org.apache.hadoop.mapreduce.FileSystemCounter","displayName":"File System Counters","counts":[{"name":"FILE_BYTES_READ","displayName":"FILE: Number of bytes read","value":120},{"name":"FILE_BYTES_WRITTEN","displayName":"FILE: Number of bytes written","value":48051},{"name":"FILE_READ_OPS","displayName":"FILE: Number of read operations","value":0},{"name":"FILE_LARGE_READ_OPS","displayName":"FILE: Number of large read operations","value":0},{"name":"FILE_WRITE_OPS","displayName":"FILE: Number of write operations","value":0},{"name":"HDFS_BYTES_READ","displayName":"HDFS: Number of bytes read","value":48},{"name":"HDFS_BYTES_WRITTEN","displayName":"HDFS: Number of bytes written","value":0},{"name":"HDFS_READ_OPS","displayName":"HDFS: Number of read operations","value":1},{"name":"HDFS_LARGE_READ_OPS","displayName":"HDFS: Number of large read operations","value":0},{"name":"HDFS_WRITE_OPS","displayName":"HDFS: Number of write operations","value":0}]},{"name":"org.apache.hadoop.mapreduce.TaskCounter","displayName":"Map-Reduce Framework","counts":[{"name":"MAP_INPUT_RECORDS","displayName":"Map input records","value":1},{"name":"MAP_OUTPUT_RECORDS","displayName":"Map output records","value":1},{"name":"MAP_OUTPUT_BYTES","displayName":"Map output bytes","value":4},{"name":"MAP_OUTPUT_MATERIALIZED_BYTES","displayName":"Map output materialized bytes","value":12},{"name":"SPLIT_RAW_BYTES","displayName":"Input split 
bytes","value":48},{"name":"COMBINE_INPUT_RECORDS","displayName":"Combine input records","value":0},{"name":"SPILLED_RECORDS","displayName":"Spilled Records","value":1},{"name":"FAILED_SHUFFLE","displayName":"Failed Shuffles","value":0},{"name":"MERGED_MAP_OUTPUTS","displayName":"Merged Map outputs","value":0},{"name":"GC_TIME_MILLIS","displayName":"GC time elapsed (ms)","value":3},{"name":"CPU_MILLISECONDS","displayName":"CPU time spent (ms)","value":330},{"name":"PHYSICAL_MEMORY_BYTES","displayName":"Physical memory (bytes) snapshot","value":188891136},{"name":"VIRTUAL_MEMORY_BYTES","displayName":"Virtual memory (bytes) snapshot","value":706727936},{"name":"COMMITTED_HEAP_BYTES","displayName":"Total committed heap usage (bytes)","value":181272576}]},{"name":"org.apache.hadoop.mapreduce.lib.input.FileInputFormatCounter","displayName":"File Input Format Counters ","counts":[{"name":"BYTES_READ","displayName":"Bytes Read","value":0}]}]}}}} + {"type":"TASK_FINISHED","event":{"org.apache.hadoop.mapreduce.jobhistory.TaskFinished":{"taskid":"task_1329348432655_0001_m_000006","taskType":"MAP","finishTime":1329348461952,"status":"SUCCEEDED","counters":{"name":"COUNTERS","groups":[{"name":"org.apache.hadoop.mapreduce.FileSystemCounter","displayName":"File System Counters","counts":[{"name":"FILE_BYTES_READ","displayName":"FILE: Number of bytes read","value":120},{"name":"FILE_BYTES_WRITTEN","displayName":"FILE: Number of bytes written","value":48051},{"name":"FILE_READ_OPS","displayName":"FILE: Number of read operations","value":0},{"name":"FILE_LARGE_READ_OPS","displayName":"FILE: Number of large read operations","value":0},{"name":"FILE_WRITE_OPS","displayName":"FILE: Number of write operations","value":0},{"name":"HDFS_BYTES_READ","displayName":"HDFS: Number of bytes read","value":48},{"name":"HDFS_BYTES_WRITTEN","displayName":"HDFS: Number of bytes written","value":0},{"name":"HDFS_READ_OPS","displayName":"HDFS: Number of read operations","value":1},{"name":"HDFS_LARGE_READ_OPS","displayName":"HDFS: Number of large read operations","value":0},{"name":"HDFS_WRITE_OPS","displayName":"HDFS: Number of write operations","value":0}]},{"name":"org.apache.hadoop.mapreduce.TaskCounter","displayName":"Map-Reduce Framework","counts":[{"name":"MAP_INPUT_RECORDS","displayName":"Map input records","value":1},{"name":"MAP_OUTPUT_RECORDS","displayName":"Map output records","value":1},{"name":"MAP_OUTPUT_BYTES","displayName":"Map output bytes","value":4},{"name":"MAP_OUTPUT_MATERIALIZED_BYTES","displayName":"Map output materialized bytes","value":12},{"name":"SPLIT_RAW_BYTES","displayName":"Input split bytes","value":48},{"name":"COMBINE_INPUT_RECORDS","displayName":"Combine input records","value":0},{"name":"SPILLED_RECORDS","displayName":"Spilled Records","value":1},{"name":"FAILED_SHUFFLE","displayName":"Failed Shuffles","value":0},{"name":"MERGED_MAP_OUTPUTS","displayName":"Merged Map outputs","value":0},{"name":"GC_TIME_MILLIS","displayName":"GC time elapsed (ms)","value":105},{"name":"CPU_MILLISECONDS","displayName":"CPU time spent (ms)","value":340},{"name":"PHYSICAL_MEMORY_BYTES","displayName":"Physical memory (bytes) snapshot","value":175243264},{"name":"VIRTUAL_MEMORY_BYTES","displayName":"Virtual memory (bytes) snapshot","value":699457536},{"name":"COMMITTED_HEAP_BYTES","displayName":"Total committed heap usage (bytes)","value":165478400}]},{"name":"org.apache.hadoop.mapreduce.lib.input.FileInputFormatCounter","displayName":"File Input Format Counters 
","counts":[{"name":"BYTES_READ","displayName":"Bytes Read","value":0}]}]}}}} + {"type":"MAP_ATTEMPT_STARTED","event":{"org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStarted":{"taskid":"task_1329348432655_0001_m_000007","taskType":"MAP","attemptId":"attempt_1329348432655_0001_m_000007_0","startTime":1329348462091,"trackerName":"localhost","httpPort":9999,"shufflePort":8080,"containerId":"container_1329348432655_0001_01_000009"}}} + {"type":"MAP_ATTEMPT_FINISHED","event":{"org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished":{"taskid":"task_1329348432655_0001_m_000004","attemptId":"attempt_1329348432655_0001_m_000004_0","taskType":"MAP","taskStatus":"SUCCEEDED","mapFinishTime":1329348459434,"finishTime":1329348462091,"hostname":"localhost","port":45454,"rackname":"/default-rack","state":"Sleeping... (1) ms left","counters":{"name":"COUNTERS","groups":[{"name":"org.apache.hadoop.mapreduce.FileSystemCounter","displayName":"File System Counters","counts":[{"name":"FILE_BYTES_READ","displayName":"FILE: Number of bytes read","value":120},{"name":"FILE_BYTES_WRITTEN","displayName":"FILE: Number of bytes written","value":48051},{"name":"FILE_READ_OPS","displayName":"FILE: Number of read operations","value":0},{"name":"FILE_LARGE_READ_OPS","displayName":"FILE: Number of large read operations","value":0},{"name":"FILE_WRITE_OPS","displayName":"FILE: Number of write operations","value":0},{"name":"HDFS_BYTES_READ","displayName":"HDFS: Number of bytes read","value":48},{"name":"HDFS_BYTES_WRITTEN","displayName":"HDFS: Number of bytes written","value":0},{"name":"HDFS_READ_OPS","displayName":"HDFS: Number of read operations","value":1},{"name":"HDFS_LARGE_READ_OPS","displayName":"HDFS: Number of large read operations","value":0},{"name":"HDFS_WRITE_OPS","displayName":"HDFS: Number of write operations","value":0}]},{"name":"org.apache.hadoop.mapreduce.TaskCounter","displayName":"Map-Reduce Framework","counts":[{"name":"MAP_INPUT_RECORDS","displayName":"Map input records","value":1},{"name":"MAP_OUTPUT_RECORDS","displayName":"Map output records","value":1},{"name":"MAP_OUTPUT_BYTES","displayName":"Map output bytes","value":4},{"name":"MAP_OUTPUT_MATERIALIZED_BYTES","displayName":"Map output materialized bytes","value":12},{"name":"SPLIT_RAW_BYTES","displayName":"Input split bytes","value":48},{"name":"COMBINE_INPUT_RECORDS","displayName":"Combine input records","value":0},{"name":"SPILLED_RECORDS","displayName":"Spilled Records","value":1},{"name":"FAILED_SHUFFLE","displayName":"Failed Shuffles","value":0},{"name":"MERGED_MAP_OUTPUTS","displayName":"Merged Map outputs","value":0},{"name":"GC_TIME_MILLIS","displayName":"GC time elapsed (ms)","value":323},{"name":"CPU_MILLISECONDS","displayName":"CPU time spent (ms)","value":330},{"name":"PHYSICAL_MEMORY_BYTES","displayName":"Physical memory (bytes) snapshot","value":189214720},{"name":"VIRTUAL_MEMORY_BYTES","displayName":"Virtual memory (bytes) snapshot","value":707006464},{"name":"COMMITTED_HEAP_BYTES","displayName":"Total committed heap usage (bytes)","value":181272576}]},{"name":"org.apache.hadoop.mapreduce.lib.input.FileInputFormatCounter","displayName":"File Input Format Counters ","counts":[{"name":"BYTES_READ","displayName":"Bytes 
Read","value":0}]}]},"clockSplits":[10303,81,82,81,82,81,81,82,81,82,81,82],"cpuUsages":[27,28,27,28,27,28,27,28,27,28,27,28],"vMemKbytes":[28767,86304,143840,201376,258913,316449,373986,431521,489058,546595,604130,661667],"physMemKbytes":[7698,23097,38495,53893,69292,84690,100089,115486,130885,146284,161681,177080]}}} + {"type":"TASK_FINISHED","event":{"org.apache.hadoop.mapreduce.jobhistory.TaskFinished":{"taskid":"task_1329348432655_0001_m_000004","taskType":"MAP","finishTime":1329348462091,"status":"SUCCEEDED","counters":{"name":"COUNTERS","groups":[{"name":"org.apache.hadoop.mapreduce.FileSystemCounter","displayName":"File System Counters","counts":[{"name":"FILE_BYTES_READ","displayName":"FILE: Number of bytes read","value":120},{"name":"FILE_BYTES_WRITTEN","displayName":"FILE: Number of bytes written","value":48051},{"name":"FILE_READ_OPS","displayName":"FILE: Number of read operations","value":0},{"name":"FILE_LARGE_READ_OPS","displayName":"FILE: Number of large read operations","value":0},{"name":"FILE_WRITE_OPS","displayName":"FILE: Number of write operations","value":0},{"name":"HDFS_BYTES_READ","displayName":"HDFS: Number of bytes read","value":48},{"name":"HDFS_BYTES_WRITTEN","displayName":"HDFS: Number of bytes written","value":0},{"name":"HDFS_READ_OPS","displayName":"HDFS: Number of read operations","value":1},{"name":"HDFS_LARGE_READ_OPS","displayName":"HDFS: Number of large read operations","value":0},{"name":"HDFS_WRITE_OPS","displayName":"HDFS: Number of write operations","value":0}]},{"name":"org.apache.hadoop.mapreduce.TaskCounter","displayName":"Map-Reduce Framework","counts":[{"name":"MAP_INPUT_RECORDS","displayName":"Map input records","value":1},{"name":"MAP_OUTPUT_RECORDS","displayName":"Map output records","value":1},{"name":"MAP_OUTPUT_BYTES","displayName":"Map output bytes","value":4},{"name":"MAP_OUTPUT_MATERIALIZED_BYTES","displayName":"Map output materialized bytes","value":12},{"name":"SPLIT_RAW_BYTES","displayName":"Input split bytes","value":48},{"name":"COMBINE_INPUT_RECORDS","displayName":"Combine input records","value":0},{"name":"SPILLED_RECORDS","displayName":"Spilled Records","value":1},{"name":"FAILED_SHUFFLE","displayName":"Failed Shuffles","value":0},{"name":"MERGED_MAP_OUTPUTS","displayName":"Merged Map outputs","value":0},{"name":"GC_TIME_MILLIS","displayName":"GC time elapsed (ms)","value":323},{"name":"CPU_MILLISECONDS","displayName":"CPU time spent (ms)","value":330},{"name":"PHYSICAL_MEMORY_BYTES","displayName":"Physical memory (bytes) snapshot","value":189214720},{"name":"VIRTUAL_MEMORY_BYTES","displayName":"Virtual memory (bytes) snapshot","value":707006464},{"name":"COMMITTED_HEAP_BYTES","displayName":"Total committed heap usage (bytes)","value":181272576}]},{"name":"org.apache.hadoop.mapreduce.lib.input.FileInputFormatCounter","displayName":"File Input Format Counters ","counts":[{"name":"BYTES_READ","displayName":"Bytes Read","value":0}]}]}}}} + {"type":"MAP_ATTEMPT_FINISHED","event":{"org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished":{"taskid":"task_1329348432655_0001_m_000001","attemptId":"attempt_1329348432655_0001_m_000001_0","taskType":"MAP","taskStatus":"SUCCEEDED","mapFinishTime":1329348461344,"finishTime":1329348462170,"hostname":"localhost","port":45454,"rackname":"/default-rack","state":"Sleeping... 
(1) ms left","counters":{"name":"COUNTERS","groups":[{"name":"org.apache.hadoop.mapreduce.FileSystemCounter","displayName":"File System Counters","counts":[{"name":"FILE_BYTES_READ","displayName":"FILE: Number of bytes read","value":120},{"name":"FILE_BYTES_WRITTEN","displayName":"FILE: Number of bytes written","value":48051},{"name":"FILE_READ_OPS","displayName":"FILE: Number of read operations","value":0},{"name":"FILE_LARGE_READ_OPS","displayName":"FILE: Number of large read operations","value":0},{"name":"FILE_WRITE_OPS","displayName":"FILE: Number of write operations","value":0},{"name":"HDFS_BYTES_READ","displayName":"HDFS: Number of bytes read","value":48},{"name":"HDFS_BYTES_WRITTEN","displayName":"HDFS: Number of bytes written","value":0},{"name":"HDFS_READ_OPS","displayName":"HDFS: Number of read operations","value":1},{"name":"HDFS_LARGE_READ_OPS","displayName":"HDFS: Number of large read operations","value":0},{"name":"HDFS_WRITE_OPS","displayName":"HDFS: Number of write operations","value":0}]},{"name":"org.apache.hadoop.mapreduce.TaskCounter","displayName":"Map-Reduce Framework","counts":[{"name":"MAP_INPUT_RECORDS","displayName":"Map input records","value":1},{"name":"MAP_OUTPUT_RECORDS","displayName":"Map output records","value":1},{"name":"MAP_OUTPUT_BYTES","displayName":"Map output bytes","value":4},{"name":"MAP_OUTPUT_MATERIALIZED_BYTES","displayName":"Map output materialized bytes","value":12},{"name":"SPLIT_RAW_BYTES","displayName":"Input split bytes","value":48},{"name":"COMBINE_INPUT_RECORDS","displayName":"Combine input records","value":0},{"name":"SPILLED_RECORDS","displayName":"Spilled Records","value":1},{"name":"FAILED_SHUFFLE","displayName":"Failed Shuffles","value":0},{"name":"MERGED_MAP_OUTPUTS","displayName":"Merged Map outputs","value":0},{"name":"GC_TIME_MILLIS","displayName":"GC time elapsed (ms)","value":257},{"name":"CPU_MILLISECONDS","displayName":"CPU time spent (ms)","value":380},{"name":"PHYSICAL_MEMORY_BYTES","displayName":"Physical memory (bytes) snapshot","value":184819712},{"name":"VIRTUAL_MEMORY_BYTES","displayName":"Virtual memory (bytes) snapshot","value":708714496},{"name":"COMMITTED_HEAP_BYTES","displayName":"Total committed heap usage (bytes)","value":165478400}]},{"name":"org.apache.hadoop.mapreduce.lib.input.FileInputFormatCounter","displayName":"File Input Format Counters ","counts":[{"name":"BYTES_READ","displayName":"Bytes Read","value":0}]}]},"clockSplits":[10971,50,51,50,50,51,50,50,51,50,50,51],"cpuUsages":[31,32,32,31,32,32,31,32,32,31,32,32],"vMemKbytes":[28837,86512,144188,201863,259538,317213,374889,432564,490240,547915,605590,663266],"physMemKbytes":[7520,22560,37601,52641,67682,82723,97764,112804,127845,142886,157926,172967]}}} + {"type":"TASK_FINISHED","event":{"org.apache.hadoop.mapreduce.jobhistory.TaskFinished":{"taskid":"task_1329348432655_0001_m_000001","taskType":"MAP","finishTime":1329348462170,"status":"SUCCEEDED","counters":{"name":"COUNTERS","groups":[{"name":"org.apache.hadoop.mapreduce.FileSystemCounter","displayName":"File System Counters","counts":[{"name":"FILE_BYTES_READ","displayName":"FILE: Number of bytes read","value":120},{"name":"FILE_BYTES_WRITTEN","displayName":"FILE: Number of bytes written","value":48051},{"name":"FILE_READ_OPS","displayName":"FILE: Number of read operations","value":0},{"name":"FILE_LARGE_READ_OPS","displayName":"FILE: Number of large read operations","value":0},{"name":"FILE_WRITE_OPS","displayName":"FILE: Number of write 
operations","value":0},{"name":"HDFS_BYTES_READ","displayName":"HDFS: Number of bytes read","value":48},{"name":"HDFS_BYTES_WRITTEN","displayName":"HDFS: Number of bytes written","value":0},{"name":"HDFS_READ_OPS","displayName":"HDFS: Number of read operations","value":1},{"name":"HDFS_LARGE_READ_OPS","displayName":"HDFS: Number of large read operations","value":0},{"name":"HDFS_WRITE_OPS","displayName":"HDFS: Number of write operations","value":0}]},{"name":"org.apache.hadoop.mapreduce.TaskCounter","displayName":"Map-Reduce Framework","counts":[{"name":"MAP_INPUT_RECORDS","displayName":"Map input records","value":1},{"name":"MAP_OUTPUT_RECORDS","displayName":"Map output records","value":1},{"name":"MAP_OUTPUT_BYTES","displayName":"Map output bytes","value":4},{"name":"MAP_OUTPUT_MATERIALIZED_BYTES","displayName":"Map output materialized bytes","value":12},{"name":"SPLIT_RAW_BYTES","displayName":"Input split bytes","value":48},{"name":"COMBINE_INPUT_RECORDS","displayName":"Combine input records","value":0},{"name":"SPILLED_RECORDS","displayName":"Spilled Records","value":1},{"name":"FAILED_SHUFFLE","displayName":"Failed Shuffles","value":0},{"name":"MERGED_MAP_OUTPUTS","displayName":"Merged Map outputs","value":0},{"name":"GC_TIME_MILLIS","displayName":"GC time elapsed (ms)","value":257},{"name":"CPU_MILLISECONDS","displayName":"CPU time spent (ms)","value":380},{"name":"PHYSICAL_MEMORY_BYTES","displayName":"Physical memory (bytes) snapshot","value":184819712},{"name":"VIRTUAL_MEMORY_BYTES","displayName":"Virtual memory (bytes) snapshot","value":708714496},{"name":"COMMITTED_HEAP_BYTES","displayName":"Total committed heap usage (bytes)","value":165478400}]},{"name":"org.apache.hadoop.mapreduce.lib.input.FileInputFormatCounter","displayName":"File Input Format Counters ","counts":[{"name":"BYTES_READ","displayName":"Bytes Read","value":0}]}]}}}} + {"type":"MAP_ATTEMPT_FINISHED","event":{"org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished":{"taskid":"task_1329348432655_0001_m_000003","attemptId":"attempt_1329348432655_0001_m_000003_0","taskType":"MAP","taskStatus":"SUCCEEDED","mapFinishTime":1329348461821,"finishTime":1329348462178,"hostname":"localhost","port":45454,"rackname":"/default-rack","state":"Sleeping... 
(1) ms left","counters":{"name":"COUNTERS","groups":[{"name":"org.apache.hadoop.mapreduce.FileSystemCounter","displayName":"File System Counters","counts":[{"name":"FILE_BYTES_READ","displayName":"FILE: Number of bytes read","value":120},{"name":"FILE_BYTES_WRITTEN","displayName":"FILE: Number of bytes written","value":48051},{"name":"FILE_READ_OPS","displayName":"FILE: Number of read operations","value":0},{"name":"FILE_LARGE_READ_OPS","displayName":"FILE: Number of large read operations","value":0},{"name":"FILE_WRITE_OPS","displayName":"FILE: Number of write operations","value":0},{"name":"HDFS_BYTES_READ","displayName":"HDFS: Number of bytes read","value":48},{"name":"HDFS_BYTES_WRITTEN","displayName":"HDFS: Number of bytes written","value":0},{"name":"HDFS_READ_OPS","displayName":"HDFS: Number of read operations","value":1},{"name":"HDFS_LARGE_READ_OPS","displayName":"HDFS: Number of large read operations","value":0},{"name":"HDFS_WRITE_OPS","displayName":"HDFS: Number of write operations","value":0}]},{"name":"org.apache.hadoop.mapreduce.TaskCounter","displayName":"Map-Reduce Framework","counts":[{"name":"MAP_INPUT_RECORDS","displayName":"Map input records","value":1},{"name":"MAP_OUTPUT_RECORDS","displayName":"Map output records","value":1},{"name":"MAP_OUTPUT_BYTES","displayName":"Map output bytes","value":4},{"name":"MAP_OUTPUT_MATERIALIZED_BYTES","displayName":"Map output materialized bytes","value":12},{"name":"SPLIT_RAW_BYTES","displayName":"Input split bytes","value":48},{"name":"COMBINE_INPUT_RECORDS","displayName":"Combine input records","value":0},{"name":"SPILLED_RECORDS","displayName":"Spilled Records","value":1},{"name":"FAILED_SHUFFLE","displayName":"Failed Shuffles","value":0},{"name":"MERGED_MAP_OUTPUTS","displayName":"Merged Map outputs","value":0},{"name":"GC_TIME_MILLIS","displayName":"GC time elapsed (ms)","value":796},{"name":"CPU_MILLISECONDS","displayName":"CPU time spent (ms)","value":380},{"name":"PHYSICAL_MEMORY_BYTES","displayName":"Physical memory (bytes) snapshot","value":188272640},{"name":"VIRTUAL_MEMORY_BYTES","displayName":"Virtual memory (bytes) snapshot","value":705773568},{"name":"COMMITTED_HEAP_BYTES","displayName":"Total committed heap usage (bytes)","value":181272576}]},{"name":"org.apache.hadoop.mapreduce.lib.input.FileInputFormatCounter","displayName":"File Input Format Counters ","counts":[{"name":"BYTES_READ","displayName":"Bytes Read","value":0}]}]},"clockSplits":[11518,6,6,7,6,6,6,7,6,6,6,7],"cpuUsages":[31,32,32,31,32,32,31,32,32,31,32,32],"vMemKbytes":[28718,86154,143590,201025,258462,315897,373334,430769,488206,545642,603077,660514],"physMemKbytes":[7660,22982,38304,53625,68947,84268,99590,114911,130234,145555,160876,176199]}}} + {"type":"TASK_FINISHED","event":{"org.apache.hadoop.mapreduce.jobhistory.TaskFinished":{"taskid":"task_1329348432655_0001_m_000003","taskType":"MAP","finishTime":1329348462178,"status":"SUCCEEDED","counters":{"name":"COUNTERS","groups":[{"name":"org.apache.hadoop.mapreduce.FileSystemCounter","displayName":"File System Counters","counts":[{"name":"FILE_BYTES_READ","displayName":"FILE: Number of bytes read","value":120},{"name":"FILE_BYTES_WRITTEN","displayName":"FILE: Number of bytes written","value":48051},{"name":"FILE_READ_OPS","displayName":"FILE: Number of read operations","value":0},{"name":"FILE_LARGE_READ_OPS","displayName":"FILE: Number of large read operations","value":0},{"name":"FILE_WRITE_OPS","displayName":"FILE: Number of write operations","value":0},{"name":"HDFS_BYTES_READ","displayName":"HDFS: 
Number of bytes read","value":48},{"name":"HDFS_BYTES_WRITTEN","displayName":"HDFS: Number of bytes written","value":0},{"name":"HDFS_READ_OPS","displayName":"HDFS: Number of read operations","value":1},{"name":"HDFS_LARGE_READ_OPS","displayName":"HDFS: Number of large read operations","value":0},{"name":"HDFS_WRITE_OPS","displayName":"HDFS: Number of write operations","value":0}]},{"name":"org.apache.hadoop.mapreduce.TaskCounter","displayName":"Map-Reduce Framework","counts":[{"name":"MAP_INPUT_RECORDS","displayName":"Map input records","value":1},{"name":"MAP_OUTPUT_RECORDS","displayName":"Map output records","value":1},{"name":"MAP_OUTPUT_BYTES","displayName":"Map output bytes","value":4},{"name":"MAP_OUTPUT_MATERIALIZED_BYTES","displayName":"Map output materialized bytes","value":12},{"name":"SPLIT_RAW_BYTES","displayName":"Input split bytes","value":48},{"name":"COMBINE_INPUT_RECORDS","displayName":"Combine input records","value":0},{"name":"SPILLED_RECORDS","displayName":"Spilled Records","value":1},{"name":"FAILED_SHUFFLE","displayName":"Failed Shuffles","value":0},{"name":"MERGED_MAP_OUTPUTS","displayName":"Merged Map outputs","value":0},{"name":"GC_TIME_MILLIS","displayName":"GC time elapsed (ms)","value":796},{"name":"CPU_MILLISECONDS","displayName":"CPU time spent (ms)","value":380},{"name":"PHYSICAL_MEMORY_BYTES","displayName":"Physical memory (bytes) snapshot","value":188272640},{"name":"VIRTUAL_MEMORY_BYTES","displayName":"Virtual memory (bytes) snapshot","value":705773568},{"name":"COMMITTED_HEAP_BYTES","displayName":"Total committed heap usage (bytes)","value":181272576}]},{"name":"org.apache.hadoop.mapreduce.lib.input.FileInputFormatCounter","displayName":"File Input Format Counters ","counts":[{"name":"BYTES_READ","displayName":"Bytes Read","value":0}]}]}}}} + {"type":"MAP_ATTEMPT_FINISHED","event":{"org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished":{"taskid":"task_1329348432655_0001_m_000000","attemptId":"attempt_1329348432655_0001_m_000000_0","taskType":"MAP","taskStatus":"SUCCEEDED","mapFinishTime":1329348462400,"finishTime":1329348462562,"hostname":"localhost","port":45454,"rackname":"/default-rack","state":"Sleeping... 
(1) ms left","counters":{"name":"COUNTERS","groups":[{"name":"org.apache.hadoop.mapreduce.FileSystemCounter","displayName":"File System Counters","counts":[{"name":"FILE_BYTES_READ","displayName":"FILE: Number of bytes read","value":120},{"name":"FILE_BYTES_WRITTEN","displayName":"FILE: Number of bytes written","value":48051},{"name":"FILE_READ_OPS","displayName":"FILE: Number of read operations","value":0},{"name":"FILE_LARGE_READ_OPS","displayName":"FILE: Number of large read operations","value":0},{"name":"FILE_WRITE_OPS","displayName":"FILE: Number of write operations","value":0},{"name":"HDFS_BYTES_READ","displayName":"HDFS: Number of bytes read","value":48},{"name":"HDFS_BYTES_WRITTEN","displayName":"HDFS: Number of bytes written","value":0},{"name":"HDFS_READ_OPS","displayName":"HDFS: Number of read operations","value":1},{"name":"HDFS_LARGE_READ_OPS","displayName":"HDFS: Number of large read operations","value":0},{"name":"HDFS_WRITE_OPS","displayName":"HDFS: Number of write operations","value":0}]},{"name":"org.apache.hadoop.mapreduce.TaskCounter","displayName":"Map-Reduce Framework","counts":[{"name":"MAP_INPUT_RECORDS","displayName":"Map input records","value":1},{"name":"MAP_OUTPUT_RECORDS","displayName":"Map output records","value":1},{"name":"MAP_OUTPUT_BYTES","displayName":"Map output bytes","value":4},{"name":"MAP_OUTPUT_MATERIALIZED_BYTES","displayName":"Map output materialized bytes","value":12},{"name":"SPLIT_RAW_BYTES","displayName":"Input split bytes","value":48},{"name":"COMBINE_INPUT_RECORDS","displayName":"Combine input records","value":0},{"name":"SPILLED_RECORDS","displayName":"Spilled Records","value":1},{"name":"FAILED_SHUFFLE","displayName":"Failed Shuffles","value":0},{"name":"MERGED_MAP_OUTPUTS","displayName":"Merged Map outputs","value":0},{"name":"GC_TIME_MILLIS","displayName":"GC time elapsed (ms)","value":11},{"name":"CPU_MILLISECONDS","displayName":"CPU time spent (ms)","value":320},{"name":"PHYSICAL_MEMORY_BYTES","displayName":"Physical memory (bytes) snapshot","value":181645312},{"name":"VIRTUAL_MEMORY_BYTES","displayName":"Virtual memory (bytes) snapshot","value":706129920},{"name":"COMMITTED_HEAP_BYTES","displayName":"Total committed heap usage (bytes)","value":165478400}]},{"name":"org.apache.hadoop.mapreduce.lib.input.FileInputFormatCounter","displayName":"File Input Format Counters ","counts":[{"name":"BYTES_READ","displayName":"Bytes Read","value":0}]}]},"clockSplits":[11930,13,13,13,13,13,13,13,13,13,13,14],"cpuUsages":[26,27,27,26,27,27,26,27,27,26,27,27],"vMemKbytes":[28732,86197,143662,201127,258592,316057,373522,430987,488452,545917,603382,660847],"physMemKbytes":[7391,22173,36955,51737,66520,81302,96085,110866,125649,140432,155213,169996]}}} + {"type":"TASK_FINISHED","event":{"org.apache.hadoop.mapreduce.jobhistory.TaskFinished":{"taskid":"task_1329348432655_0001_m_000000","taskType":"MAP","finishTime":1329348462562,"status":"SUCCEEDED","counters":{"name":"COUNTERS","groups":[{"name":"org.apache.hadoop.mapreduce.FileSystemCounter","displayName":"File System Counters","counts":[{"name":"FILE_BYTES_READ","displayName":"FILE: Number of bytes read","value":120},{"name":"FILE_BYTES_WRITTEN","displayName":"FILE: Number of bytes written","value":48051},{"name":"FILE_READ_OPS","displayName":"FILE: Number of read operations","value":0},{"name":"FILE_LARGE_READ_OPS","displayName":"FILE: Number of large read operations","value":0},{"name":"FILE_WRITE_OPS","displayName":"FILE: Number of write 
operations","value":0},{"name":"HDFS_BYTES_READ","displayName":"HDFS: Number of bytes read","value":48},{"name":"HDFS_BYTES_WRITTEN","displayName":"HDFS: Number of bytes written","value":0},{"name":"HDFS_READ_OPS","displayName":"HDFS: Number of read operations","value":1},{"name":"HDFS_LARGE_READ_OPS","displayName":"HDFS: Number of large read operations","value":0},{"name":"HDFS_WRITE_OPS","displayName":"HDFS: Number of write operations","value":0}]},{"name":"org.apache.hadoop.mapreduce.TaskCounter","displayName":"Map-Reduce Framework","counts":[{"name":"MAP_INPUT_RECORDS","displayName":"Map input records","value":1},{"name":"MAP_OUTPUT_RECORDS","displayName":"Map output records","value":1},{"name":"MAP_OUTPUT_BYTES","displayName":"Map output bytes","value":4},{"name":"MAP_OUTPUT_MATERIALIZED_BYTES","displayName":"Map output materialized bytes","value":12},{"name":"SPLIT_RAW_BYTES","displayName":"Input split bytes","value":48},{"name":"COMBINE_INPUT_RECORDS","displayName":"Combine input records","value":0},{"name":"SPILLED_RECORDS","displayName":"Spilled Records","value":1},{"name":"FAILED_SHUFFLE","displayName":"Failed Shuffles","value":0},{"name":"MERGED_MAP_OUTPUTS","displayName":"Merged Map outputs","value":0},{"name":"GC_TIME_MILLIS","displayName":"GC time elapsed (ms)","value":11},{"name":"CPU_MILLISECONDS","displayName":"CPU time spent (ms)","value":320},{"name":"PHYSICAL_MEMORY_BYTES","displayName":"Physical memory (bytes) snapshot","value":181645312},{"name":"VIRTUAL_MEMORY_BYTES","displayName":"Virtual memory (bytes) snapshot","value":706129920},{"name":"COMMITTED_HEAP_BYTES","displayName":"Total committed heap usage (bytes)","value":165478400}]},{"name":"org.apache.hadoop.mapreduce.lib.input.FileInputFormatCounter","displayName":"File Input Format Counters ","counts":[{"name":"BYTES_READ","displayName":"Bytes Read","value":0}]}]}}}} + {"type":"MAP_ATTEMPT_STARTED","event":{"org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStarted":{"taskid":"task_1329348432655_0001_m_000008","taskType":"MAP","attemptId":"attempt_1329348432655_0001_m_000008_0","startTime":1329348462765,"trackerName":"localhost","httpPort":9999,"shufflePort":8080,"containerId":"container_1329348432655_0001_01_000010"}}} + {"type":"MAP_ATTEMPT_STARTED","event":{"org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStarted":{"taskid":"task_1329348432655_0001_m_000009","taskType":"MAP","attemptId":"attempt_1329348432655_0001_m_000009_0","startTime":1329348462792,"trackerName":"localhost","httpPort":9999,"shufflePort":8080,"containerId":"container_1329348432655_0001_01_000011"}}} + {"type":"REDUCE_ATTEMPT_STARTED","event":{"org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStarted":{"taskid":"task_1329348432655_0001_r_000000","taskType":"REDUCE","attemptId":"attempt_1329348432655_0001_r_000000_0","startTime":1329348464995,"trackerName":"localhost","httpPort":9999,"shufflePort":8080,"containerId":"container_1329348432655_0001_01_000014"}}} + {"type":"MAP_ATTEMPT_FINISHED","event":{"org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished":{"taskid":"task_1329348432655_0001_m_000007","attemptId":"attempt_1329348432655_0001_m_000007_0","taskType":"MAP","taskStatus":"SUCCEEDED","mapFinishTime":1329348465534,"finishTime":1329348465965,"hostname":"localhost","port":45454,"rackname":"/default-rack","state":"Sleeping... 
(1) ms left","counters":{"name":"COUNTERS","groups":[{"name":"org.apache.hadoop.mapreduce.FileSystemCounter","displayName":"File System Counters","counts":[{"name":"FILE_BYTES_READ","displayName":"FILE: Number of bytes read","value":120},{"name":"FILE_BYTES_WRITTEN","displayName":"FILE: Number of bytes written","value":48051},{"name":"FILE_READ_OPS","displayName":"FILE: Number of read operations","value":0},{"name":"FILE_LARGE_READ_OPS","displayName":"FILE: Number of large read operations","value":0},{"name":"FILE_WRITE_OPS","displayName":"FILE: Number of write operations","value":0},{"name":"HDFS_BYTES_READ","displayName":"HDFS: Number of bytes read","value":48},{"name":"HDFS_BYTES_WRITTEN","displayName":"HDFS: Number of bytes written","value":0},{"name":"HDFS_READ_OPS","displayName":"HDFS: Number of read operations","value":1},{"name":"HDFS_LARGE_READ_OPS","displayName":"HDFS: Number of large read operations","value":0},{"name":"HDFS_WRITE_OPS","displayName":"HDFS: Number of write operations","value":0}]},{"name":"org.apache.hadoop.mapreduce.TaskCounter","displayName":"Map-Reduce Framework","counts":[{"name":"MAP_INPUT_RECORDS","displayName":"Map input records","value":1},{"name":"MAP_OUTPUT_RECORDS","displayName":"Map output records","value":1},{"name":"MAP_OUTPUT_BYTES","displayName":"Map output bytes","value":4},{"name":"MAP_OUTPUT_MATERIALIZED_BYTES","displayName":"Map output materialized bytes","value":12},{"name":"SPLIT_RAW_BYTES","displayName":"Input split bytes","value":48},{"name":"COMBINE_INPUT_RECORDS","displayName":"Combine input records","value":0},{"name":"SPILLED_RECORDS","displayName":"Spilled Records","value":1},{"name":"FAILED_SHUFFLE","displayName":"Failed Shuffles","value":0},{"name":"MERGED_MAP_OUTPUTS","displayName":"Merged Map outputs","value":0},{"name":"GC_TIME_MILLIS","displayName":"GC time elapsed (ms)","value":194},{"name":"CPU_MILLISECONDS","displayName":"CPU time spent (ms)","value":320},{"name":"PHYSICAL_MEMORY_BYTES","displayName":"Physical memory (bytes) snapshot","value":185327616},{"name":"VIRTUAL_MEMORY_BYTES","displayName":"Virtual memory (bytes) snapshot","value":713089024},{"name":"COMMITTED_HEAP_BYTES","displayName":"Total committed heap usage (bytes)","value":165478400}]},{"name":"org.apache.hadoop.mapreduce.lib.input.FileInputFormatCounter","displayName":"File Input Format Counters ","counts":[{"name":"BYTES_READ","displayName":"Bytes Read","value":0}]}]},"clockSplits":[3464,18,19,18,18,19,18,18,19,18,18,19],"cpuUsages":[26,27,27,26,27,27,26,27,27,26,27,27],"vMemKbytes":[29015,87046,145078,203109,261140,319171,377203,435234,493266,551297,609328,667360],"physMemKbytes":[7541,22623,37705,52786,67869,82950,98033,113114,128197,143279,158360,173443]}}} + {"type":"TASK_FINISHED","event":{"org.apache.hadoop.mapreduce.jobhistory.TaskFinished":{"taskid":"task_1329348432655_0001_m_000007","taskType":"MAP","finishTime":1329348465965,"status":"SUCCEEDED","counters":{"name":"COUNTERS","groups":[{"name":"org.apache.hadoop.mapreduce.FileSystemCounter","displayName":"File System Counters","counts":[{"name":"FILE_BYTES_READ","displayName":"FILE: Number of bytes read","value":120},{"name":"FILE_BYTES_WRITTEN","displayName":"FILE: Number of bytes written","value":48051},{"name":"FILE_READ_OPS","displayName":"FILE: Number of read operations","value":0},{"name":"FILE_LARGE_READ_OPS","displayName":"FILE: Number of large read operations","value":0},{"name":"FILE_WRITE_OPS","displayName":"FILE: Number of write 
operations","value":0},{"name":"HDFS_BYTES_READ","displayName":"HDFS: Number of bytes read","value":48},{"name":"HDFS_BYTES_WRITTEN","displayName":"HDFS: Number of bytes written","value":0},{"name":"HDFS_READ_OPS","displayName":"HDFS: Number of read operations","value":1},{"name":"HDFS_LARGE_READ_OPS","displayName":"HDFS: Number of large read operations","value":0},{"name":"HDFS_WRITE_OPS","displayName":"HDFS: Number of write operations","value":0}]},{"name":"org.apache.hadoop.mapreduce.TaskCounter","displayName":"Map-Reduce Framework","counts":[{"name":"MAP_INPUT_RECORDS","displayName":"Map input records","value":1},{"name":"MAP_OUTPUT_RECORDS","displayName":"Map output records","value":1},{"name":"MAP_OUTPUT_BYTES","displayName":"Map output bytes","value":4},{"name":"MAP_OUTPUT_MATERIALIZED_BYTES","displayName":"Map output materialized bytes","value":12},{"name":"SPLIT_RAW_BYTES","displayName":"Input split bytes","value":48},{"name":"COMBINE_INPUT_RECORDS","displayName":"Combine input records","value":0},{"name":"SPILLED_RECORDS","displayName":"Spilled Records","value":1},{"name":"FAILED_SHUFFLE","displayName":"Failed Shuffles","value":0},{"name":"MERGED_MAP_OUTPUTS","displayName":"Merged Map outputs","value":0},{"name":"GC_TIME_MILLIS","displayName":"GC time elapsed (ms)","value":194},{"name":"CPU_MILLISECONDS","displayName":"CPU time spent (ms)","value":320},{"name":"PHYSICAL_MEMORY_BYTES","displayName":"Physical memory (bytes) snapshot","value":185327616},{"name":"VIRTUAL_MEMORY_BYTES","displayName":"Virtual memory (bytes) snapshot","value":713089024},{"name":"COMMITTED_HEAP_BYTES","displayName":"Total committed heap usage (bytes)","value":165478400}]},{"name":"org.apache.hadoop.mapreduce.lib.input.FileInputFormatCounter","displayName":"File Input Format Counters ","counts":[{"name":"BYTES_READ","displayName":"Bytes Read","value":0}]}]}}}} + {"type":"MAP_ATTEMPT_FINISHED","event":{"org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished":{"taskid":"task_1329348432655_0001_m_000009","attemptId":"attempt_1329348432655_0001_m_000009_0","taskType":"MAP","taskStatus":"SUCCEEDED","mapFinishTime":1329348465986,"finishTime":1329348466363,"hostname":"localhost","port":45454,"rackname":"/default-rack","state":"Sleeping... 
(1) ms left","counters":{"name":"COUNTERS","groups":[{"name":"org.apache.hadoop.mapreduce.FileSystemCounter","displayName":"File System Counters","counts":[{"name":"FILE_BYTES_READ","displayName":"FILE: Number of bytes read","value":120},{"name":"FILE_BYTES_WRITTEN","displayName":"FILE: Number of bytes written","value":48051},{"name":"FILE_READ_OPS","displayName":"FILE: Number of read operations","value":0},{"name":"FILE_LARGE_READ_OPS","displayName":"FILE: Number of large read operations","value":0},{"name":"FILE_WRITE_OPS","displayName":"FILE: Number of write operations","value":0},{"name":"HDFS_BYTES_READ","displayName":"HDFS: Number of bytes read","value":48},{"name":"HDFS_BYTES_WRITTEN","displayName":"HDFS: Number of bytes written","value":0},{"name":"HDFS_READ_OPS","displayName":"HDFS: Number of read operations","value":1},{"name":"HDFS_LARGE_READ_OPS","displayName":"HDFS: Number of large read operations","value":0},{"name":"HDFS_WRITE_OPS","displayName":"HDFS: Number of write operations","value":0}]},{"name":"org.apache.hadoop.mapreduce.TaskCounter","displayName":"Map-Reduce Framework","counts":[{"name":"MAP_INPUT_RECORDS","displayName":"Map input records","value":1},{"name":"MAP_OUTPUT_RECORDS","displayName":"Map output records","value":1},{"name":"MAP_OUTPUT_BYTES","displayName":"Map output bytes","value":4},{"name":"MAP_OUTPUT_MATERIALIZED_BYTES","displayName":"Map output materialized bytes","value":12},{"name":"SPLIT_RAW_BYTES","displayName":"Input split bytes","value":48},{"name":"COMBINE_INPUT_RECORDS","displayName":"Combine input records","value":0},{"name":"SPILLED_RECORDS","displayName":"Spilled Records","value":1},{"name":"FAILED_SHUFFLE","displayName":"Failed Shuffles","value":0},{"name":"MERGED_MAP_OUTPUTS","displayName":"Merged Map outputs","value":0},{"name":"GC_TIME_MILLIS","displayName":"GC time elapsed (ms)","value":23},{"name":"CPU_MILLISECONDS","displayName":"CPU time spent (ms)","value":330},{"name":"PHYSICAL_MEMORY_BYTES","displayName":"Physical memory (bytes) snapshot","value":182169600},{"name":"VIRTUAL_MEMORY_BYTES","displayName":"Virtual memory (bytes) snapshot","value":705945600},{"name":"COMMITTED_HEAP_BYTES","displayName":"Total committed heap usage (bytes)","value":165478400}]},{"name":"org.apache.hadoop.mapreduce.lib.input.FileInputFormatCounter","displayName":"File Input Format Counters ","counts":[{"name":"BYTES_READ","displayName":"Bytes Read","value":0}]}]},"clockSplits":[3223,21,21,21,21,21,20,21,21,21,21,21],"cpuUsages":[27,28,27,28,27,28,27,28,27,28,27,28],"vMemKbytes":[28725,86175,143625,201074,258525,315974,373425,430874,488325,545775,603224,660675],"physMemKbytes":[7412,22237,37062,51887,66712,81537,96362,111187,126012,140837,155662,170487]}}} + {"type":"TASK_FINISHED","event":{"org.apache.hadoop.mapreduce.jobhistory.TaskFinished":{"taskid":"task_1329348432655_0001_m_000009","taskType":"MAP","finishTime":1329348466363,"status":"SUCCEEDED","counters":{"name":"COUNTERS","groups":[{"name":"org.apache.hadoop.mapreduce.FileSystemCounter","displayName":"File System Counters","counts":[{"name":"FILE_BYTES_READ","displayName":"FILE: Number of bytes read","value":120},{"name":"FILE_BYTES_WRITTEN","displayName":"FILE: Number of bytes written","value":48051},{"name":"FILE_READ_OPS","displayName":"FILE: Number of read operations","value":0},{"name":"FILE_LARGE_READ_OPS","displayName":"FILE: Number of large read operations","value":0},{"name":"FILE_WRITE_OPS","displayName":"FILE: Number of write 
operations","value":0},{"name":"HDFS_BYTES_READ","displayName":"HDFS: Number of bytes read","value":48},{"name":"HDFS_BYTES_WRITTEN","displayName":"HDFS: Number of bytes written","value":0},{"name":"HDFS_READ_OPS","displayName":"HDFS: Number of read operations","value":1},{"name":"HDFS_LARGE_READ_OPS","displayName":"HDFS: Number of large read operations","value":0},{"name":"HDFS_WRITE_OPS","displayName":"HDFS: Number of write operations","value":0}]},{"name":"org.apache.hadoop.mapreduce.TaskCounter","displayName":"Map-Reduce Framework","counts":[{"name":"MAP_INPUT_RECORDS","displayName":"Map input records","value":1},{"name":"MAP_OUTPUT_RECORDS","displayName":"Map output records","value":1},{"name":"MAP_OUTPUT_BYTES","displayName":"Map output bytes","value":4},{"name":"MAP_OUTPUT_MATERIALIZED_BYTES","displayName":"Map output materialized bytes","value":12},{"name":"SPLIT_RAW_BYTES","displayName":"Input split bytes","value":48},{"name":"COMBINE_INPUT_RECORDS","displayName":"Combine input records","value":0},{"name":"SPILLED_RECORDS","displayName":"Spilled Records","value":1},{"name":"FAILED_SHUFFLE","displayName":"Failed Shuffles","value":0},{"name":"MERGED_MAP_OUTPUTS","displayName":"Merged Map outputs","value":0},{"name":"GC_TIME_MILLIS","displayName":"GC time elapsed (ms)","value":23},{"name":"CPU_MILLISECONDS","displayName":"CPU time spent (ms)","value":330},{"name":"PHYSICAL_MEMORY_BYTES","displayName":"Physical memory (bytes) snapshot","value":182169600},{"name":"VIRTUAL_MEMORY_BYTES","displayName":"Virtual memory (bytes) snapshot","value":705945600},{"name":"COMMITTED_HEAP_BYTES","displayName":"Total committed heap usage (bytes)","value":165478400}]},{"name":"org.apache.hadoop.mapreduce.lib.input.FileInputFormatCounter","displayName":"File Input Format Counters ","counts":[{"name":"BYTES_READ","displayName":"Bytes Read","value":0}]}]}}}} + {"type":"MAP_ATTEMPT_FINISHED","event":{"org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished":{"taskid":"task_1329348432655_0001_m_000008","attemptId":"attempt_1329348432655_0001_m_000008_0","taskType":"MAP","taskStatus":"SUCCEEDED","mapFinishTime":1329348467231,"finishTime":1329348467421,"hostname":"localhost","port":45454,"rackname":"/default-rack","state":"Sleeping... 
(1) ms left","counters":{"name":"COUNTERS","groups":[{"name":"org.apache.hadoop.mapreduce.FileSystemCounter","displayName":"File System Counters","counts":[{"name":"FILE_BYTES_READ","displayName":"FILE: Number of bytes read","value":120},{"name":"FILE_BYTES_WRITTEN","displayName":"FILE: Number of bytes written","value":48051},{"name":"FILE_READ_OPS","displayName":"FILE: Number of read operations","value":0},{"name":"FILE_LARGE_READ_OPS","displayName":"FILE: Number of large read operations","value":0},{"name":"FILE_WRITE_OPS","displayName":"FILE: Number of write operations","value":0},{"name":"HDFS_BYTES_READ","displayName":"HDFS: Number of bytes read","value":48},{"name":"HDFS_BYTES_WRITTEN","displayName":"HDFS: Number of bytes written","value":0},{"name":"HDFS_READ_OPS","displayName":"HDFS: Number of read operations","value":1},{"name":"HDFS_LARGE_READ_OPS","displayName":"HDFS: Number of large read operations","value":0},{"name":"HDFS_WRITE_OPS","displayName":"HDFS: Number of write operations","value":0}]},{"name":"org.apache.hadoop.mapreduce.TaskCounter","displayName":"Map-Reduce Framework","counts":[{"name":"MAP_INPUT_RECORDS","displayName":"Map input records","value":1},{"name":"MAP_OUTPUT_RECORDS","displayName":"Map output records","value":1},{"name":"MAP_OUTPUT_BYTES","displayName":"Map output bytes","value":4},{"name":"MAP_OUTPUT_MATERIALIZED_BYTES","displayName":"Map output materialized bytes","value":12},{"name":"SPLIT_RAW_BYTES","displayName":"Input split bytes","value":48},{"name":"COMBINE_INPUT_RECORDS","displayName":"Combine input records","value":0},{"name":"SPILLED_RECORDS","displayName":"Spilled Records","value":1},{"name":"FAILED_SHUFFLE","displayName":"Failed Shuffles","value":0},{"name":"MERGED_MAP_OUTPUTS","displayName":"Merged Map outputs","value":0},{"name":"GC_TIME_MILLIS","displayName":"GC time elapsed (ms)","value":12},{"name":"CPU_MILLISECONDS","displayName":"CPU time spent (ms)","value":320},{"name":"PHYSICAL_MEMORY_BYTES","displayName":"Physical memory (bytes) snapshot","value":181297152},{"name":"VIRTUAL_MEMORY_BYTES","displayName":"Virtual memory (bytes) snapshot","value":705019904},{"name":"COMMITTED_HEAP_BYTES","displayName":"Total committed heap usage (bytes)","value":165478400}]},{"name":"org.apache.hadoop.mapreduce.lib.input.FileInputFormatCounter","displayName":"File Input Format Counters ","counts":[{"name":"BYTES_READ","displayName":"Bytes Read","value":0}]}]},"clockSplits":[4483,15,16,15,16,15,15,16,15,16,15,16],"cpuUsages":[26,27,27,26,27,27,26,27,27,26,27,27],"vMemKbytes":[28686,86061,143436,200810,258185,315560,372935,430309,487684,545059,602433,659808],"physMemKbytes":[7377,22131,36885,51638,66393,81146,95901,110654,125409,140163,154916,169671]}}} + {"type":"TASK_FINISHED","event":{"org.apache.hadoop.mapreduce.jobhistory.TaskFinished":{"taskid":"task_1329348432655_0001_m_000008","taskType":"MAP","finishTime":1329348467421,"status":"SUCCEEDED","counters":{"name":"COUNTERS","groups":[{"name":"org.apache.hadoop.mapreduce.FileSystemCounter","displayName":"File System Counters","counts":[{"name":"FILE_BYTES_READ","displayName":"FILE: Number of bytes read","value":120},{"name":"FILE_BYTES_WRITTEN","displayName":"FILE: Number of bytes written","value":48051},{"name":"FILE_READ_OPS","displayName":"FILE: Number of read operations","value":0},{"name":"FILE_LARGE_READ_OPS","displayName":"FILE: Number of large read operations","value":0},{"name":"FILE_WRITE_OPS","displayName":"FILE: Number of write 
operations","value":0},{"name":"HDFS_BYTES_READ","displayName":"HDFS: Number of bytes read","value":48},{"name":"HDFS_BYTES_WRITTEN","displayName":"HDFS: Number of bytes written","value":0},{"name":"HDFS_READ_OPS","displayName":"HDFS: Number of read operations","value":1},{"name":"HDFS_LARGE_READ_OPS","displayName":"HDFS: Number of large read operations","value":0},{"name":"HDFS_WRITE_OPS","displayName":"HDFS: Number of write operations","value":0}]},{"name":"org.apache.hadoop.mapreduce.TaskCounter","displayName":"Map-Reduce Framework","counts":[{"name":"MAP_INPUT_RECORDS","displayName":"Map input records","value":1},{"name":"MAP_OUTPUT_RECORDS","displayName":"Map output records","value":1},{"name":"MAP_OUTPUT_BYTES","displayName":"Map output bytes","value":4},{"name":"MAP_OUTPUT_MATERIALIZED_BYTES","displayName":"Map output materialized bytes","value":12},{"name":"SPLIT_RAW_BYTES","displayName":"Input split bytes","value":48},{"name":"COMBINE_INPUT_RECORDS","displayName":"Combine input records","value":0},{"name":"SPILLED_RECORDS","displayName":"Spilled Records","value":1},{"name":"FAILED_SHUFFLE","displayName":"Failed Shuffles","value":0},{"name":"MERGED_MAP_OUTPUTS","displayName":"Merged Map outputs","value":0},{"name":"GC_TIME_MILLIS","displayName":"GC time elapsed (ms)","value":12},{"name":"CPU_MILLISECONDS","displayName":"CPU time spent (ms)","value":320},{"name":"PHYSICAL_MEMORY_BYTES","displayName":"Physical memory (bytes) snapshot","value":181297152},{"name":"VIRTUAL_MEMORY_BYTES","displayName":"Virtual memory (bytes) snapshot","value":705019904},{"name":"COMMITTED_HEAP_BYTES","displayName":"Total committed heap usage (bytes)","value":165478400}]},{"name":"org.apache.hadoop.mapreduce.lib.input.FileInputFormatCounter","displayName":"File Input Format Counters ","counts":[{"name":"BYTES_READ","displayName":"Bytes Read","value":0}]}]}}}} + {"type":"REDUCE_ATTEMPT_FINISHED","event":{"org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished":{"taskid":"task_1329348432655_0001_r_000000","attemptId":"attempt_1329348432655_0001_r_000000_0","taskType":"REDUCE","taskStatus":"SUCCEEDED","shuffleFinishTime":1329348468462,"sortFinishTime":1329348468517,"finishTime":1329348468600,"hostname":"localhost","port":45454,"rackname":"/default-rack","state":"Sleeping... 
(1) ms left > reduce","counters":{"name":"COUNTERS","groups":[{"name":"org.apache.hadoop.mapreduce.FileSystemCounter","displayName":"File System Counters","counts":[{"name":"FILE_BYTES_READ","displayName":"FILE: Number of bytes read","value":186},{"name":"FILE_BYTES_WRITTEN","displayName":"FILE: Number of bytes written","value":48074},{"name":"FILE_READ_OPS","displayName":"FILE: Number of read operations","value":0},{"name":"FILE_LARGE_READ_OPS","displayName":"FILE: Number of large read operations","value":0},{"name":"FILE_WRITE_OPS","displayName":"FILE: Number of write operations","value":0},{"name":"HDFS_BYTES_READ","displayName":"HDFS: Number of bytes read","value":0},{"name":"HDFS_BYTES_WRITTEN","displayName":"HDFS: Number of bytes written","value":0},{"name":"HDFS_READ_OPS","displayName":"HDFS: Number of read operations","value":0},{"name":"HDFS_LARGE_READ_OPS","displayName":"HDFS: Number of large read operations","value":0},{"name":"HDFS_WRITE_OPS","displayName":"HDFS: Number of write operations","value":0}]},{"name":"org.apache.hadoop.mapreduce.TaskCounter","displayName":"Map-Reduce Framework","counts":[{"name":"COMBINE_INPUT_RECORDS","displayName":"Combine input records","value":0},{"name":"COMBINE_OUTPUT_RECORDS","displayName":"Combine output records","value":0},{"name":"REDUCE_INPUT_GROUPS","displayName":"Reduce input groups","value":1},{"name":"REDUCE_SHUFFLE_BYTES","displayName":"Reduce shuffle bytes","value":120},{"name":"REDUCE_INPUT_RECORDS","displayName":"Reduce input records","value":10},{"name":"REDUCE_OUTPUT_RECORDS","displayName":"Reduce output records","value":0},{"name":"SPILLED_RECORDS","displayName":"Spilled Records","value":10},{"name":"SHUFFLED_MAPS","displayName":"Shuffled Maps ","value":10},{"name":"FAILED_SHUFFLE","displayName":"Failed Shuffles","value":0},{"name":"MERGED_MAP_OUTPUTS","displayName":"Merged Map outputs","value":10},{"name":"GC_TIME_MILLIS","displayName":"GC time elapsed (ms)","value":14},{"name":"CPU_MILLISECONDS","displayName":"CPU time spent (ms)","value":1070},{"name":"PHYSICAL_MEMORY_BYTES","displayName":"Physical memory (bytes) snapshot","value":82780160},{"name":"VIRTUAL_MEMORY_BYTES","displayName":"Virtual memory (bytes) snapshot","value":714436608},{"name":"COMMITTED_HEAP_BYTES","displayName":"Total committed heap usage (bytes)","value":60555264}]},{"name":"Shuffle Errors","displayName":"Shuffle Errors","counts":[{"name":"BAD_ID","displayName":"BAD_ID","value":0},{"name":"CONNECTION","displayName":"CONNECTION","value":0},{"name":"IO_ERROR","displayName":"IO_ERROR","value":0},{"name":"WRONG_LENGTH","displayName":"WRONG_LENGTH","value":0},{"name":"WRONG_MAP","displayName":"WRONG_MAP","value":0},{"name":"WRONG_REDUCE","displayName":"WRONG_REDUCE","value":0}]},{"name":"org.apache.hadoop.mapreduce.lib.output.FileOutputFormatCounter","displayName":"File Output Format Counters ","counts":[{"name":"BYTES_WRITTEN","displayName":"Bytes Written","value":0}]}]},"clockSplits":[3530,6,7,6,7,6,6,7,6,7,6,7],"cpuUsages":[89,89,89,89,89,90,89,89,89,89,89,90],"vMemKbytes":[29070,87211,145352,203493,261634,319775,377916,436057,494198,552339,610480,668621],"physMemKbytes":[3367,10104,16841,23577,30314,37051,43788,50524,57261,63998,70734,77471]}}} + {"type":"TASK_FINISHED","event":{"org.apache.hadoop.mapreduce.jobhistory.TaskFinished":{"taskid":"task_1329348432655_0001_r_000000","taskType":"REDUCE","finishTime":1329348468600,"status":"SUCCEEDED","counters":{"name":"COUNTERS","groups":[{"name":"org.apache.hadoop.mapreduce.FileSystemCounter","displayName":"File 
System Counters","counts":[{"name":"FILE_BYTES_READ","displayName":"FILE: Number of bytes read","value":186},{"name":"FILE_BYTES_WRITTEN","displayName":"FILE: Number of bytes written","value":48074},{"name":"FILE_READ_OPS","displayName":"FILE: Number of read operations","value":0},{"name":"FILE_LARGE_READ_OPS","displayName":"FILE: Number of large read operations","value":0},{"name":"FILE_WRITE_OPS","displayName":"FILE: Number of write operations","value":0},{"name":"HDFS_BYTES_READ","displayName":"HDFS: Number of bytes read","value":0},{"name":"HDFS_BYTES_WRITTEN","displayName":"HDFS: Number of bytes written","value":0},{"name":"HDFS_READ_OPS","displayName":"HDFS: Number of read operations","value":0},{"name":"HDFS_LARGE_READ_OPS","displayName":"HDFS: Number of large read operations","value":0},{"name":"HDFS_WRITE_OPS","displayName":"HDFS: Number of write operations","value":0}]},{"name":"org.apache.hadoop.mapreduce.TaskCounter","displayName":"Map-Reduce Framework","counts":[{"name":"COMBINE_INPUT_RECORDS","displayName":"Combine input records","value":0},{"name":"COMBINE_OUTPUT_RECORDS","displayName":"Combine output records","value":0},{"name":"REDUCE_INPUT_GROUPS","displayName":"Reduce input groups","value":1},{"name":"REDUCE_SHUFFLE_BYTES","displayName":"Reduce shuffle bytes","value":120},{"name":"REDUCE_INPUT_RECORDS","displayName":"Reduce input records","value":10},{"name":"REDUCE_OUTPUT_RECORDS","displayName":"Reduce output records","value":0},{"name":"SPILLED_RECORDS","displayName":"Spilled Records","value":10},{"name":"SHUFFLED_MAPS","displayName":"Shuffled Maps ","value":10},{"name":"FAILED_SHUFFLE","displayName":"Failed Shuffles","value":0},{"name":"MERGED_MAP_OUTPUTS","displayName":"Merged Map outputs","value":10},{"name":"GC_TIME_MILLIS","displayName":"GC time elapsed (ms)","value":14},{"name":"CPU_MILLISECONDS","displayName":"CPU time spent (ms)","value":1070},{"name":"PHYSICAL_MEMORY_BYTES","displayName":"Physical memory (bytes) snapshot","value":82780160},{"name":"VIRTUAL_MEMORY_BYTES","displayName":"Virtual memory (bytes) snapshot","value":714436608},{"name":"COMMITTED_HEAP_BYTES","displayName":"Total committed heap usage (bytes)","value":60555264}]},{"name":"Shuffle Errors","displayName":"Shuffle Errors","counts":[{"name":"BAD_ID","displayName":"BAD_ID","value":0},{"name":"CONNECTION","displayName":"CONNECTION","value":0},{"name":"IO_ERROR","displayName":"IO_ERROR","value":0},{"name":"WRONG_LENGTH","displayName":"WRONG_LENGTH","value":0},{"name":"WRONG_MAP","displayName":"WRONG_MAP","value":0},{"name":"WRONG_REDUCE","displayName":"WRONG_REDUCE","value":0}]},{"name":"org.apache.hadoop.mapreduce.lib.output.FileOutputFormatCounter","displayName":"File Output Format Counters ","counts":[{"name":"BYTES_WRITTEN","displayName":"Bytes Written","value":0}]}]}}}} + {"type":"JOB_FINISHED","event":{"org.apache.hadoop.mapreduce.jobhistory.JobFinished":{"jobid":"job_1329348432655_0001","finishTime":1329348468601,"finishedMaps":10,"finishedReduces":1,"failedMaps":0,"failedReduces":0,"totalCounters":{"name":"TOTAL_COUNTERS","groups":[{"name":"Shuffle Errors","displayName":"Shuffle 
Errors","counts":[{"name":"BAD_ID","displayName":"BAD_ID","value":0},{"name":"CONNECTION","displayName":"CONNECTION","value":0},{"name":"IO_ERROR","displayName":"IO_ERROR","value":0},{"name":"WRONG_LENGTH","displayName":"WRONG_LENGTH","value":0},{"name":"WRONG_MAP","displayName":"WRONG_MAP","value":0},{"name":"WRONG_REDUCE","displayName":"WRONG_REDUCE","value":0}]},{"name":"org.apache.hadoop.mapreduce.FileSystemCounter","displayName":"File System Counters","counts":[{"name":"FILE_BYTES_READ","displayName":"FILE: Number of bytes read","value":1386},{"name":"FILE_BYTES_WRITTEN","displayName":"FILE: Number of bytes written","value":528584},{"name":"FILE_READ_OPS","displayName":"FILE: Number of read operations","value":0},{"name":"FILE_LARGE_READ_OPS","displayName":"FILE: Number of large read operations","value":0},{"name":"FILE_WRITE_OPS","displayName":"FILE: Number of write operations","value":0},{"name":"HDFS_BYTES_READ","displayName":"HDFS: Number of bytes read","value":480},{"name":"HDFS_BYTES_WRITTEN","displayName":"HDFS: Number of bytes written","value":0},{"name":"HDFS_READ_OPS","displayName":"HDFS: Number of read operations","value":10},{"name":"HDFS_LARGE_READ_OPS","displayName":"HDFS: Number of large read operations","value":0},{"name":"HDFS_WRITE_OPS","displayName":"HDFS: Number of write operations","value":0}]},{"name":"org.apache.hadoop.mapreduce.JobCounter","displayName":"Job Counters ","counts":[{"name":"TOTAL_LAUNCHED_MAPS","displayName":"Launched map tasks","value":10},{"name":"TOTAL_LAUNCHED_REDUCES","displayName":"Launched reduce tasks","value":1},{"name":"OTHER_LOCAL_MAPS","displayName":"Other local map tasks","value":10},{"name":"SLOTS_MILLIS_MAPS","displayName":"Total time spent by all maps in occupied slots (ms)","value":0},{"name":"SLOTS_MILLIS_REDUCES","displayName":"Total time spent by all reduces in occupied slots (ms)","value":0}]},{"name":"org.apache.hadoop.mapreduce.TaskCounter","displayName":"Map-Reduce Framework","counts":[{"name":"MAP_INPUT_RECORDS","displayName":"Map input records","value":10},{"name":"MAP_OUTPUT_RECORDS","displayName":"Map output records","value":10},{"name":"MAP_OUTPUT_BYTES","displayName":"Map output bytes","value":40},{"name":"MAP_OUTPUT_MATERIALIZED_BYTES","displayName":"Map output materialized bytes","value":120},{"name":"SPLIT_RAW_BYTES","displayName":"Input split bytes","value":480},{"name":"COMBINE_INPUT_RECORDS","displayName":"Combine input records","value":0},{"name":"COMBINE_OUTPUT_RECORDS","displayName":"Combine output records","value":0},{"name":"REDUCE_INPUT_GROUPS","displayName":"Reduce input groups","value":1},{"name":"REDUCE_SHUFFLE_BYTES","displayName":"Reduce shuffle bytes","value":120},{"name":"REDUCE_INPUT_RECORDS","displayName":"Reduce input records","value":10},{"name":"REDUCE_OUTPUT_RECORDS","displayName":"Reduce output records","value":0},{"name":"SPILLED_RECORDS","displayName":"Spilled Records","value":20},{"name":"SHUFFLED_MAPS","displayName":"Shuffled Maps ","value":10},{"name":"FAILED_SHUFFLE","displayName":"Failed Shuffles","value":0},{"name":"MERGED_MAP_OUTPUTS","displayName":"Merged Map outputs","value":10},{"name":"GC_TIME_MILLIS","displayName":"GC time elapsed (ms)","value":2256},{"name":"CPU_MILLISECONDS","displayName":"CPU time spent (ms)","value":4460},{"name":"PHYSICAL_MEMORY_BYTES","displayName":"Physical memory (bytes) snapshot","value":1923493888},{"name":"VIRTUAL_MEMORY_BYTES","displayName":"Virtual memory (bytes) snapshot","value":7773462528},{"name":"COMMITTED_HEAP_BYTES","displayName":"Total 
committed heap usage (bytes)","value":1778515968}]},{"name":"org.apache.hadoop.mapreduce.lib.input.FileInputFormatCounter","displayName":"File Input Format Counters ","counts":[{"name":"BYTES_READ","displayName":"Bytes Read","value":0}]},{"name":"org.apache.hadoop.mapreduce.lib.output.FileOutputFormatCounter","displayName":"File Output Format Counters ","counts":[{"name":"BYTES_WRITTEN","displayName":"Bytes Written","value":0}]}]},"mapCounters":{"name":"MAP_COUNTERS","groups":[{"name":"org.apache.hadoop.mapreduce.FileSystemCounter","displayName":"File System Counters","counts":[{"name":"FILE_BYTES_READ","displayName":"FILE: Number of bytes read","value":1200},{"name":"FILE_BYTES_WRITTEN","displayName":"FILE: Number of bytes written","value":480510},{"name":"FILE_READ_OPS","displayName":"FILE: Number of read operations","value":0},{"name":"FILE_LARGE_READ_OPS","displayName":"FILE: Number of large read operations","value":0},{"name":"FILE_WRITE_OPS","displayName":"FILE: Number of write operations","value":0},{"name":"HDFS_BYTES_READ","displayName":"HDFS: Number of bytes read","value":480},{"name":"HDFS_BYTES_WRITTEN","displayName":"HDFS: Number of bytes written","value":0},{"name":"HDFS_READ_OPS","displayName":"HDFS: Number of read operations","value":10},{"name":"HDFS_LARGE_READ_OPS","displayName":"HDFS: Number of large read operations","value":0},{"name":"HDFS_WRITE_OPS","displayName":"HDFS: Number of write operations","value":0}]},{"name":"org.apache.hadoop.mapreduce.TaskCounter","displayName":"Map-Reduce Framework","counts":[{"name":"MAP_INPUT_RECORDS","displayName":"Map input records","value":10},{"name":"MAP_OUTPUT_RECORDS","displayName":"Map output records","value":10},{"name":"MAP_OUTPUT_BYTES","displayName":"Map output bytes","value":40},{"name":"MAP_OUTPUT_MATERIALIZED_BYTES","displayName":"Map output materialized bytes","value":120},{"name":"SPLIT_RAW_BYTES","displayName":"Input split bytes","value":480},{"name":"COMBINE_INPUT_RECORDS","displayName":"Combine input records","value":0},{"name":"SPILLED_RECORDS","displayName":"Spilled Records","value":10},{"name":"FAILED_SHUFFLE","displayName":"Failed Shuffles","value":0},{"name":"MERGED_MAP_OUTPUTS","displayName":"Merged Map outputs","value":0},{"name":"GC_TIME_MILLIS","displayName":"GC time elapsed (ms)","value":2242},{"name":"CPU_MILLISECONDS","displayName":"CPU time spent (ms)","value":3390},{"name":"PHYSICAL_MEMORY_BYTES","displayName":"Physical memory (bytes) snapshot","value":1840713728},{"name":"VIRTUAL_MEMORY_BYTES","displayName":"Virtual memory (bytes) snapshot","value":7059025920},{"name":"COMMITTED_HEAP_BYTES","displayName":"Total committed heap usage (bytes)","value":1717960704}]},{"name":"org.apache.hadoop.mapreduce.lib.input.FileInputFormatCounter","displayName":"File Input Format Counters ","counts":[{"name":"BYTES_READ","displayName":"Bytes Read","value":0}]}]},"reduceCounters":{"name":"REDUCE_COUNTERS","groups":[{"name":"Shuffle Errors","displayName":"Shuffle Errors","counts":[{"name":"BAD_ID","displayName":"BAD_ID","value":0},{"name":"CONNECTION","displayName":"CONNECTION","value":0},{"name":"IO_ERROR","displayName":"IO_ERROR","value":0},{"name":"WRONG_LENGTH","displayName":"WRONG_LENGTH","value":0},{"name":"WRONG_MAP","displayName":"WRONG_MAP","value":0},{"name":"WRONG_REDUCE","displayName":"WRONG_REDUCE","value":0}]},{"name":"org.apache.hadoop.mapreduce.FileSystemCounter","displayName":"File System Counters","counts":[{"name":"FILE_BYTES_READ","displayName":"FILE: Number of bytes 
read","value":186},{"name":"FILE_BYTES_WRITTEN","displayName":"FILE: Number of bytes written","value":48074},{"name":"FILE_READ_OPS","displayName":"FILE: Number of read operations","value":0},{"name":"FILE_LARGE_READ_OPS","displayName":"FILE: Number of large read operations","value":0},{"name":"FILE_WRITE_OPS","displayName":"FILE: Number of write operations","value":0},{"name":"HDFS_BYTES_READ","displayName":"HDFS: Number of bytes read","value":0},{"name":"HDFS_BYTES_WRITTEN","displayName":"HDFS: Number of bytes written","value":0},{"name":"HDFS_READ_OPS","displayName":"HDFS: Number of read operations","value":0},{"name":"HDFS_LARGE_READ_OPS","displayName":"HDFS: Number of large read operations","value":0},{"name":"HDFS_WRITE_OPS","displayName":"HDFS: Number of write operations","value":0}]},{"name":"org.apache.hadoop.mapreduce.TaskCounter","displayName":"Map-Reduce Framework","counts":[{"name":"COMBINE_INPUT_RECORDS","displayName":"Combine input records","value":0},{"name":"COMBINE_OUTPUT_RECORDS","displayName":"Combine output records","value":0},{"name":"REDUCE_INPUT_GROUPS","displayName":"Reduce input groups","value":1},{"name":"REDUCE_SHUFFLE_BYTES","displayName":"Reduce shuffle bytes","value":120},{"name":"REDUCE_INPUT_RECORDS","displayName":"Reduce input records","value":10},{"name":"REDUCE_OUTPUT_RECORDS","displayName":"Reduce output records","value":0},{"name":"SPILLED_RECORDS","displayName":"Spilled Records","value":10},{"name":"SHUFFLED_MAPS","displayName":"Shuffled Maps ","value":10},{"name":"FAILED_SHUFFLE","displayName":"Failed Shuffles","value":0},{"name":"MERGED_MAP_OUTPUTS","displayName":"Merged Map outputs","value":10},{"name":"GC_TIME_MILLIS","displayName":"GC time elapsed (ms)","value":14},{"name":"CPU_MILLISECONDS","displayName":"CPU time spent (ms)","value":1070},{"name":"PHYSICAL_MEMORY_BYTES","displayName":"Physical memory (bytes) snapshot","value":82780160},{"name":"VIRTUAL_MEMORY_BYTES","displayName":"Virtual memory (bytes) snapshot","value":714436608},{"name":"COMMITTED_HEAP_BYTES","displayName":"Total committed heap usage (bytes)","value":60555264}]},{"name":"org.apache.hadoop.mapreduce.lib.output.FileOutputFormatCounter","displayName":"File Output Format Counters ","counts":[{"name":"BYTES_WRITTEN","displayName":"Bytes Written","value":0}]}]}}}} diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/test/resources/job_1329348432655_0001_conf.xml b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/test/resources/job_1329348432655_0001_conf.xml new file mode 100644 index 0000000000..72d2dbb29f --- /dev/null +++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/test/resources/job_1329348432655_0001_conf.xml @@ -0,0 +1,397 @@ + +mapreduce.job.ubertask.enablefalse +yarn.resourcemanager.max-completed-applications10000 +yarn.resourcemanager.delayed.delegation-token.removal-interval-ms30000 +mapreduce.client.submit.file.replication10 +yarn.nodemanager.container-manager.thread-count20 +mapred.queue.default.acl-administer-jobs* +dfs.image.transfer.bandwidthPerSec0 +mapreduce.tasktracker.healthchecker.interval60000 +mapreduce.jobtracker.staging.root.dir${hadoop.tmp.dir}/mapred/staging +dfs.block.access.token.lifetime600 +yarn.resourcemanager.am.max-retries2 +fs.AbstractFileSystem.file.implorg.apache.hadoop.fs.local.LocalFs +mapreduce.client.completion.pollinterval5000 +mapreduce.job.ubertask.maxreduces1 +mapreduce.reduce.shuffle.memory.limit.percent0.25 
+hadoop.http.authentication.kerberos.keytab${user.home}/hadoop.keytab +yarn.nodemanager.keytab/etc/krb5.keytab +io.seqfile.sorter.recordlimit1000000 +s3.blocksize67108864 +mapreduce.task.io.sort.factor10 +yarn.nodemanager.disk-health-checker.interval-ms120000 +mapreduce.job.working.dirhdfs://localhost:8021/user/user +yarn.admin.acl* +mapreduce.job.speculative.speculativecap0.1 +dfs.namenode.num.checkpoints.retained2 +dfs.namenode.delegation.token.renew-interval86400000 +yarn.nodemanager.resource.memory-mb8192 +io.map.index.interval128 +s3.client-write-packet-size65536 +dfs.namenode.http-address0.0.0.0:50070 +mapreduce.task.files.preserve.failedtasksfalse +mapreduce.job.reduce.classorg.apache.hadoop.mapreduce.SleepJob$SleepReducer +hadoop.hdfs.configuration.version1 +s3.replication3 +dfs.datanode.balance.bandwidthPerSec1048576 +mapreduce.reduce.shuffle.connect.timeout180000 +yarn.nodemanager.aux-servicesmapreduce.shuffle +dfs.datanode.block.volume.choice.policyorg.apache.hadoop.hdfs.server.datanode.RoundRobinVolumesPolicy +mapreduce.job.complete.cancel.delegation.tokenstrue +yarn.server.nodemanager.connect.rmtrue +dfs.namenode.checkpoint.dirfile://${hadoop.tmp.dir}/dfs/namesecondary +fs.trash.interval0 +yarn.resourcemanager.admin.address0.0.0.0:8141 +mapreduce.job.outputformat.classorg.apache.hadoop.mapreduce.lib.output.NullOutputFormat +yarn.log.server.urlhttp://localhost:19888/jobhistory/nmlogs +hadoop.http.authentication.kerberos.principalHTTP/localhost@LOCALHOST +mapreduce.tasktracker.taskmemorymanager.monitoringinterval5000 +s3native.blocksize67108864 +dfs.namenode.edits.dir${dfs.namenode.name.dir} +mapreduce.job.map.classorg.apache.hadoop.mapreduce.SleepJob$SleepMapper +dfs.datanode.http.address0.0.0.0:50075 +mapreduce.jobtracker.jobhistory.task.numberprogresssplits12 +yarn.acl.enabletrue +yarn.nodemanager.localizer.fetch.thread-count4 +hadoop.proxyuser.user.hosts127.0.0.1 +hadoop.security.authorizationfalse +dfs.namenode.safemode.extension30000 +mapreduce.reduce.log.levelINFO +yarn.log-aggregation-enablefalse +dfs.https.server.keystore.resourcessl-server.xml +mapreduce.jobtracker.instrumentationorg.apache.hadoop.mapred.JobTrackerMetricsInst +dfs.namenode.replication.min1 +mapreduce.map.java.opts-Xmx500m +s3native.bytes-per-checksum512 +mapreduce.tasktracker.tasks.sleeptimebeforesigkill5000 +tfile.fs.output.buffer.size262144 +yarn.nodemanager.local-dirs/home/user/local-dir/ +mapreduce.jobtracker.persist.jobstatus.activetrue +fs.AbstractFileSystem.hdfs.implorg.apache.hadoop.fs.Hdfs +dfs.namenode.safemode.min.datanodes0 +mapreduce.tasktracker.local.dir.minspacestart0 +dfs.client.https.need-authfalse +fs.har.impl.disable.cachetrue +dfs.client.https.keystore.resourcessl-client.xml +dfs.namenode.max.objects0 +dfs.namenode.safemode.threshold-pct0.999f +mapreduce.tasktracker.local.dir.minspacekill0 +mapreduce.jobtracker.retiredjobs.cache.size1000 +dfs.blocksize67108864 +mapreduce.job.reduce.slowstart.completedmaps0.05 +mapreduce.job.end-notification.retry.attempts5 +mapreduce.job.inputformat.classorg.apache.hadoop.mapreduce.SleepJob$SleepInputFormat +fs.s3n.implorg.apache.hadoop.fs.s3native.NativeS3FileSystem +mapreduce.map.memory.mb512 +mapreduce.job.user.nameuser +mapreduce.tasktracker.outofband.heartbeatfalse +io.native.lib.availabletrue +mapreduce.jobtracker.persist.jobstatus.hours1 +dfs.client-write-packet-size65536 +mapreduce.client.progressmonitor.pollinterval1000 +dfs.namenode.name.dirfile:///home/user/hadoop-user/dfs/name 
+mapreduce.output.fileoutputformat.compression.codecorg.apache.hadoop.io.compress.DefaultCodec +mapreduce.reduce.input.buffer.percent0.0 +mapreduce.map.output.compress.codecorg.apache.hadoop.io.compress.DefaultCodec +yarn.resourcemanager.delegation-token.keepalive-time-ms300000 +mapreduce.map.skip.proc.count.autoincrtrue +dfs.datanode.directoryscan.threads1 +mapreduce.jobtracker.addresslocal +mapreduce.cluster.local.dir${hadoop.tmp.dir}/mapred/local +mapreduce.job.application.attempt.id1 +dfs.permissions.enabledtrue +mapreduce.tasktracker.taskcontrollerorg.apache.hadoop.mapred.DefaultTaskController +mapreduce.reduce.shuffle.parallelcopies5 +yarn.nodemanager.env-whitelistJAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,YARN_HOME +mapreduce.jobtracker.heartbeats.in.second100 +mapreduce.job.maxtaskfailures.per.tracker4 +ipc.client.connection.maxidletime10000 +dfs.blockreport.intervalMsec21600000 +fs.s3.sleepTimeSeconds10 +dfs.namenode.replication.considerLoadtrue +dfs.client.block.write.retries3 +hadoop.proxyuser.user.groupsusers +dfs.namenode.name.dir.restorefalse +io.seqfile.lazydecompresstrue +dfs.https.enablefalse +mapreduce.reduce.merge.inmem.threshold1000 +mapreduce.input.fileinputformat.split.minsize0 +dfs.replication3 +ipc.client.tcpnodelayfalse +mapreduce.map.output.value.classorg.apache.hadoop.io.NullWritable +dfs.namenode.accesstime.precision3600000 +s3.stream-buffer-size4096 +mapreduce.jobtracker.tasktracker.maxblacklists4 +rpc.engine.com.google.protobuf.BlockingServiceorg.apache.hadoop.yarn.ipc.ProtoOverHadoopRpcEngine +mapreduce.job.jvm.numtasks1 +mapreduce.task.io.sort.mb100 +io.compression.codecsorg.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.BZip2Codec,org.apache.hadoop.io.compress.DeflateCodec,org.apache.hadoop.io.compress.SnappyCodec,org.apache.hadoop.io.compress.Lz4Codec +io.file.buffer.size4096 +mapreduce.job.jar/tmp/hadoop-yarn/staging/user/.staging/job_1329348432655_0001/job.jar +dfs.namenode.checkpoint.txns40000 +yarn.nodemanager.admin-envMALLOC_ARENA_MAX=$MALLOC_ARENA_MAX +mapreduce.job.split.metainfo.maxsize10000000 +mapreduce.output.fileoutputformat.compression.typeRECORD +kfs.replication3 +yarn.app.mapreduce.am.scheduler.heartbeat.interval-ms1000 +mapreduce.reduce.maxattempts4 +mapreduce.sleepjob.map.sleep.time1 +kfs.stream-buffer-size4096 +fs.har.implorg.apache.hadoop.fs.HarFileSystem +hadoop.security.authenticationsimple +fs.s3.buffer.dir${hadoop.tmp.dir}/s3 +mapreduce.jobtracker.taskschedulerorg.apache.hadoop.mapred.JobQueueTaskScheduler +yarn.app.mapreduce.am.job.task.listener.thread-count30 +mapreduce.job.reduces1 +mapreduce.map.sort.spill.percent0.80 +mapreduce.job.end-notification.retry.interval1 +mapreduce.job.maps10 +mapreduce.job.speculative.slownodethreshold1.0 +dfs.block.access.token.enablefalse +tfile.fs.input.buffer.size262144 +mapreduce.map.speculativefalse +mapreduce.job.acl-view-job +mapreduce.map.output.key.classorg.apache.hadoop.io.IntWritable +yarn.ipc.serializer.typeprotocolbuffers +mapreduce.job.end-notification.max.retry.interval5 +ftp.blocksize67108864 +mapreduce.tasktracker.http.threads40 +mapreduce.reduce.java.opts-Xmx500m +dfs.datanode.data.dirfile:///home/user/hadoop-user/dfs/data +dfs.namenode.replication.interval3 +fs.file.implorg.apache.hadoop.fs.LocalFileSystem +dfs.namenode.https-address0.0.0.0:50470 +mapreduce.task.skip.start.attempts2 +mapreduce.jobtracker.persist.jobstatus.dir/jobtracker/jobsInfo +ipc.client.kill.max10 
+mapreduce.job.end-notification.max.attempts5 +mapreduce.jobhistory.max-age-ms10000000000 +yarn.resourcemanager.zookeeper-store.session.timeout-ms60000 +mapreduce.task.tmp.dir./tmp +dfs.default.chunk.view.size32768 +kfs.bytes-per-checksum512 +mapreduce.reduce.memory.mb512 +hadoop.http.filter.initializersorg.apache.hadoop.yarn.server.webproxy.amfilter.AmFilterInitializer +dfs.datanode.failed.volumes.tolerated0 +mapreduce.sleepjob.reduce.sleep.count1 +hadoop.http.authentication.typesimple +dfs.datanode.data.dir.perm700 +yarn.resourcemanager.client.thread-count50 +ipc.server.listen.queue.size128 +mapreduce.reduce.skip.maxgroups0 +file.stream-buffer-size4096 +dfs.namenode.fs-limits.max-directory-items0 +io.mapfile.bloom.size1048576 +fs.hsftp.implorg.apache.hadoop.hdfs.HsftpFileSystem +yarn.nodemanager.container-executor.classorg.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor +mapreduce.map.maxattempts4 +mapreduce.jobtracker.jobhistory.block.size3145728 +ftp.replication3 +mapreduce.jobtracker.http.address0.0.0.0:50030 +yarn.nodemanager.health-checker.script.timeout-ms1200000 +mapreduce.jobhistory.address0.0.0.0:10020 +dfs.datanode.dns.nameserverdefault +mapreduce.jobtracker.taskcache.levels2 +yarn.nodemanager.log.retain-seconds12000 +mapred.child.java.opts-Xmx200m +dfs.replication.max512 +map.sort.classorg.apache.hadoop.util.QuickSort +dfs.stream-buffer-size4096 +dfs.namenode.backup.address0.0.0.0:50100 +hadoop.util.hash.typemurmur +dfs.block.access.key.update.interval600 +mapreduce.jobhistory.move.interval-ms30000 +dfs.datanode.dns.interfacedefault +mapreduce.reduce.skip.proc.count.autoincrtrue +dfs.namenode.backup.http-address0.0.0.0:50105 +yarn.nodemanager.container-monitor.interval-ms3000 +mapred.reducer.new-apitrue +yarn.nodemanager.disk-health-checker.min-healthy-disks0.25 +kfs.client-write-packet-size65536 +yarn.nodemanager.sleep-delay-before-sigkill.ms250 +mapreduce.job.dir/tmp/hadoop-yarn/staging/user/.staging/job_1329348432655_0001 +io.map.index.skip0 +net.topology.node.switch.mapping.implorg.apache.hadoop.net.ScriptBasedMapping +dfs.namenode.logging.levelinfo +fs.s3.maxRetries4 +s3native.client-write-packet-size65536 +yarn.resourcemanager.amliveliness-monitor.interval-ms1000 +mapreduce.reduce.speculativefalse +mapreduce.client.output.filterFAILED +mapreduce.tasktracker.report.address127.0.0.1:0 +mapreduce.task.userlog.limit.kb0 +mapreduce.tasktracker.map.tasks.maximum2 +hadoop.http.authentication.simple.anonymous.allowedtrue +hadoop.rpc.socket.factory.class.defaultorg.apache.hadoop.net.StandardSocketFactory +mapreduce.job.submithostnamelocalhost +fs.hftp.implorg.apache.hadoop.hdfs.HftpFileSystem +dfs.namenode.handler.count10 +fs.automatic.closetrue +fs.kfs.implorg.apache.hadoop.fs.kfs.KosmosFileSystem +mapreduce.job.submithostaddress127.0.0.1 +mapreduce.tasktracker.healthchecker.script.timeout600000 +dfs.datanode.directoryscan.interval21600 +yarn.resourcemanager.address0.0.0.0:8040 +yarn.nodemanager.log-aggregation-enablefalse +fs.hdfs.implorg.apache.hadoop.hdfs.DistributedFileSystem +yarn.nodemanager.health-checker.interval-ms600000 +mapreduce.reduce.markreset.buffer.percent0.0 +mapreduce.map.log.levelINFO +yarn.nodemanager.localizer.address0.0.0.0:4344 +dfs.bytes-per-checksum512 +ftp.stream-buffer-size4096 +yarn.resourcemanager.keytab/etc/krb5.keytab +mapreduce.sleepjob.map.sleep.count1 +dfs.blockreport.initialDelay0 +yarn.nm.liveness-monitor.expiry-interval-ms600000 +hadoop.http.authentication.token.validity36000 +dfs.namenode.delegation.token.max-lifetime604800000 
+mapreduce.job.hdfs-servers${fs.default.name} +fs.ftp.implorg.apache.hadoop.fs.ftp.FTPFileSystem +dfs.web.ugiwebuser,webgroup +s3native.replication3 +dfs.heartbeat.interval3 +yarn.nodemanager.localizer.client.thread-count5 +yarn.resourcemanager.container.liveness-monitor.interval-ms600000 +yarn.am.liveness-monitor.expiry-interval-ms600000 +mapreduce.task.profilefalse +mapreduce.tasktracker.instrumentationorg.apache.hadoop.mapred.TaskTrackerMetricsInst +mapreduce.tasktracker.http.address0.0.0.0:50060 +mapreduce.jobhistory.webapp.address0.0.0.0:19888 +rpc.engine.org.apache.hadoop.yarn.proto.AMRMProtocol$AMRMProtocolService$BlockingInterfaceorg.apache.hadoop.yarn.ipc.ProtoOverHadoopRpcEngine +yarn.ipc.rpc.classorg.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC +mapreduce.job.nameSleep job +kfs.blocksize67108864 +mapreduce.job.ubertask.maxmaps9 +yarn.nodemanager.heartbeat.interval-ms1000 +dfs.namenode.secondary.http-address0.0.0.0:50090 +mapreduce.job.userlog.retain.hours24 +mapreduce.task.timeout600000 +mapreduce.jobhistory.loadedjobs.cache.size1 +mapreduce.framework.nameyarn +ipc.client.idlethreshold4000 +ipc.server.tcpnodelayfalse +ftp.bytes-per-checksum512 +s3.bytes-per-checksum512 +mapreduce.job.speculative.slowtaskthreshold1.0 +yarn.nodemanager.localizer.cache.target-size-mb1 +yarn.nodemanager.remote-app-log-dir/tmp/logs +fs.s3.block.size67108864 +mapreduce.job.queuenamedefault +mapreduce.sleepjob.reduce.sleep.time1 +hadoop.rpc.protectionauthentication +yarn.app.mapreduce.client-am.ipc.max-retries1 +ftp.client-write-packet-size65536 +yarn.nodemanager.address0.0.0.0:45454 +fs.defaultFShdfs://localhost:8021 +mapreduce.task.merge.progress.records10000 +yarn.resourcemanager.scheduler.client.thread-count50 +file.client-write-packet-size65536 +mapreduce.job.partitioner.classorg.apache.hadoop.mapreduce.SleepJob$SleepJobPartitioner +yarn.nodemanager.delete.thread-count4 +yarn.resourcemanager.scheduler.address0.0.0.0:8030 +fs.trash.checkpoint.interval0 +s3native.stream-buffer-size4096 +yarn.scheduler.fifo.minimum-allocation-mb1024 +mapreduce.reduce.shuffle.read.timeout180000 +yarn.app.mapreduce.am.command-opts-Xmx500m +mapreduce.admin.user.envLD_LIBRARY_PATH=$HADOOP_COMMON_HOME/lib/native +dfs.namenode.checkpoint.edits.dir${dfs.namenode.checkpoint.dir} +mapreduce.local.clientfactory.class.nameorg.apache.hadoop.mapred.LocalClientFactory +hadoop.common.configuration.version0.23.0 +mapreduce.tasktracker.dns.interfacedefault +io.serializationsorg.apache.hadoop.io.serializer.WritableSerialization,org.apache.hadoop.io.serializer.avro.AvroSpecificSerialization,org.apache.hadoop.io.serializer.avro.AvroReflectSerialization +yarn.nodemanager.aux-service.mapreduce.shuffle.classorg.apache.hadoop.mapred.ShuffleHandler +yarn.nodemanager.aux-services.mapreduce.shuffle.classorg.apache.hadoop.mapred.ShuffleHandler +fs.df.interval60000 +mapreduce.reduce.shuffle.input.buffer.percent0.70 +io.seqfile.compress.blocksize1000000 +ipc.client.connect.max.retries10 +fs.viewfs.implorg.apache.hadoop.fs.viewfs.ViewFileSystem +hadoop.security.groups.cache.secs300 +dfs.namenode.delegation.key.update-interval86400000 +yarn.nodemanager.process-kill-wait.ms2000 +yarn.application.classpath + $HADOOP_CONF_DIR, + $HADOOP_COMMON_HOME/share/hadoop/common/*, + $HADOOP_COMMON_HOME/share/hadoop/common/lib/*, + $HADOOP_HDFS_HOME/share/hadoop/hdfs/*, + $HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*, + $YARN_HOME/share/hadoop/mapreduce/*, + $YARN_HOME/share/hadoop/mapreduce/lib/* + +yarn.nodemanager.log-aggregation.compression-typegz 
+dfs.image.compressfalse +yarn.nodemanager.localizer.cache.cleanup.interval-ms30000 +mapred.mapper.new-apitrue +yarn.nodemanager.log-dirs/home/user/logs +fs.s3n.block.size67108864 +fs.ftp.host0.0.0.0 +hadoop.security.group.mappingorg.apache.hadoop.security.ShellBasedUnixGroupsMapping +dfs.datanode.address0.0.0.0:50010 +mapreduce.map.skip.maxrecords0 +dfs.datanode.https.address0.0.0.0:50475 +fs.s3.implorg.apache.hadoop.fs.s3.S3FileSystem +file.replication1 +yarn.resourcemanager.resource-tracker.address0.0.0.0:8025 +mapreduce.jobtracker.restart.recoverfalse +hadoop.work.around.non.threadsafe.getpwuidfalse +mapreduce.client.genericoptionsparser.usedtrue +mapreduce.tasktracker.indexcache.mb10 +mapreduce.output.fileoutputformat.compressfalse +hadoop.tmp.dir/tmp/hadoop-${user.name} +dfs.client.block.write.replace-datanode-on-failure.policyDEFAULT +hadoop.kerberos.kinit.commandkinit +mapreduce.job.committer.setup.cleanup.neededtrue +dfs.datanode.du.reserved0 +mapreduce.task.profile.reduces0-2 +file.bytes-per-checksum512 +mapreduce.input.fileinputformat.inputdirhdfs://localhost:8021/user/user/ignored +dfs.client.block.write.replace-datanode-on-failure.enableture +mapreduce.jobtracker.handler.count10 +net.topology.script.number.args100 +mapreduce.task.profile.maps0-2 +dfs.namenode.decommission.interval30 +dfs.image.compression.codecorg.apache.hadoop.io.compress.DefaultCodec +yarn.resourcemanager.webapp.address0.0.0.0:8088 +mapreduce.jobtracker.system.dir${hadoop.tmp.dir}/mapred/system +dfs.namenode.support.allow.formattrue +yarn.nodemanager.vmem-pmem-ratio2.1 +io.mapfile.bloom.error.rate0.005 +dfs.permissions.superusergroupsupergroup +mapreduce.jobtracker.expire.trackers.interval600000 +mapreduce.cluster.acls.enabledfalse +yarn.nodemanager.remote-app-log-dir-suffixlogs +dfs.namenode.checkpoint.check.period60 +io.seqfile.local.dir${hadoop.tmp.dir}/io/local +yarn.app.mapreduce.am.resource.mb512 +mapreduce.reduce.shuffle.merge.percent0.66 +tfile.io.chunk.size1048576 +file.blocksize67108864 +mapreduce.jobtracker.jobhistory.lru.cache.size5 +mapreduce.jobtracker.maxtasks.perjob-1 +yarn.resourcemanager.nm.liveness-monitor.interval-ms1000 +yarn.nodemanager.webapp.address0.0.0.0:9999 +mapreduce.job.acl-modify-job +mapreduce.tasktracker.reduce.tasks.maximum2 +mapreduce.cluster.temp.dir${hadoop.tmp.dir}/mapred/temp +io.skip.checksum.errorsfalse +yarn.app.mapreduce.am.staging-dir/tmp/hadoop-yarn/staging +dfs.datanode.handler.count3 +hadoop.http.authentication.signature.secrethadoop +dfs.namenode.decommission.nodes.per.interval5 +fs.ftp.host.port21 +dfs.namenode.checkpoint.period3600 +dfs.namenode.fs-limits.max-component-length0 +yarn.resourcemanager.admin.client.thread-count1 +fs.AbstractFileSystem.viewfs.implorg.apache.hadoop.fs.viewfs.ViewFs +yarn.resourcemanager.resource-tracker.client.thread-count50 +mapreduce.tasktracker.dns.nameserverdefault +mapreduce.clientfactory.class.nameorg.apache.hadoop.mapred.YarnClientFactory +mapreduce.map.output.compressfalse +mapreduce.job.counters.limit120 +dfs.datanode.ipc.address0.0.0.0:50020 +fs.webhdfs.implorg.apache.hadoop.hdfs.web.WebHdfsFileSystem +yarn.nodemanager.delete.debug-delay-sec0 +dfs.datanode.max.transfer.threads4096 +
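Each line of the .jhist test resource added above is one self-contained JSON job-history event, and the conf.xml resource is the matching job configuration dump (loadable in a test with new Configuration(false) plus Configuration.addResource(...)). As a minimal, illustrative sketch of how such a history file could be sanity-checked outside the test suite, assuming Java 8 and Jackson 2.x on the classpath (the class name and file path are hypothetical, not part of this patch):

    // JhistEventCounts.java -- standalone sketch, not part of this patch.
    // Counts the job-history event types in a .jhist file like the one above.
    import com.fasterxml.jackson.databind.JsonNode;
    import com.fasterxml.jackson.databind.ObjectMapper;

    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.HashMap;
    import java.util.Map;

    public class JhistEventCounts {
      public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        Map<String, Integer> counts = new HashMap<>();
        for (String line : Files.readAllLines(Paths.get(args[0]), StandardCharsets.UTF_8)) {
          line = line.trim();
          if (!line.startsWith("{")) {
            continue; // skip the "Avro-Json" marker line and blank lines
          }
          JsonNode node = mapper.readTree(line);
          // Event records carry a top-level "type" plus an "event" payload;
          // the embedded Avro schema line lacks that shape and is skipped.
          if (node.has("type") && node.has("event")) {
            counts.merge(node.get("type").asText(), 1, Integer::sum);
          }
        }
        counts.forEach((type, n) -> System.out.println(type + "\t" + n));
      }
    }

Counting event types this way (MAP_ATTEMPT_FINISHED, TASK_FINISHED, REDUCE_ATTEMPT_FINISHED, JOB_FINISHED, and so on) gives a quick check that a captured fixture matches the job it claims to describe; here the JOB_FINISHED record reports 10 finished maps and 1 finished reduce.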