YARN-11007. Correct words in YARN documents (#3680)
Reviewed-by: cxorm <lianp964@gmail.com>
Signed-off-by: Akira Ajisaka <aajisaka@apache.org>
(cherry picked from commit c9d64bad37)
This commit is contained in:
parent 4a032bc88d
commit a13a03ec10
@@ -282,7 +282,7 @@ ApplicationReport report = yarnClient.getApplicationReport(appId);
  * *Application tracking information*: If the application supports some form of progress tracking, it can set a tracking url which is available via `ApplicationReport`'s `getTrackingUrl()` method that a client can look at to monitor progress.
-  * *Application status*: The state of the application as seen by the ResourceManager is available via `ApplicationReport#getYarnApplicationState`. If the `YarnApplicationState` is set to `FINISHED`, the client should refer to `ApplicationReport#getFinalApplicationStatus` to check for the actual success/failure of the application task itself. In case of failures, `ApplicationReport#getDiagnostics` may be useful to shed some more light on the the failure.
+  * *Application status*: The state of the application as seen by the ResourceManager is available via `ApplicationReport#getYarnApplicationState`. If the `YarnApplicationState` is set to `FINISHED`, the client should refer to `ApplicationReport#getFinalApplicationStatus` to check for the actual success/failure of the application task itself. In case of failures, `ApplicationReport#getDiagnostics` may be useful to shed some more light on the failure.
* If the ApplicationMaster supports it, a client can directly query the AM itself for progress updates via the host:rpcport information obtained from the application report. It can also use the tracking url obtained from the report if available.
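As a rough illustration of the client-side monitoring described in the hunk above, a polling loop over `ApplicationReport` might look like the sketch below, written in the guide's method-snippet style. The method name, the one-second sleep, and the error handling are assumptions made here for illustration; only `getApplicationReport`, `getYarnApplicationState`, `getFinalApplicationStatus`, and `getDiagnostics` come from the documentation text being corrected.

```java
import org.apache.hadoop.yarn.api.records.ApplicationId;
import org.apache.hadoop.yarn.api.records.ApplicationReport;
import org.apache.hadoop.yarn.api.records.FinalApplicationStatus;
import org.apache.hadoop.yarn.api.records.YarnApplicationState;
import org.apache.hadoop.yarn.client.api.YarnClient;

// Illustrative sketch: poll the RM until the application reaches a terminal
// state, then check the final status reported by the ApplicationMaster.
private boolean monitorApplication(YarnClient yarnClient, ApplicationId appId)
    throws Exception {
  while (true) {
    Thread.sleep(1000);
    ApplicationReport report = yarnClient.getApplicationReport(appId);
    YarnApplicationState state = report.getYarnApplicationState();
    if (state == YarnApplicationState.FINISHED) {
      // FINISHED only means YARN is done with the app; the work itself may
      // still have failed, so consult the final application status.
      return report.getFinalApplicationStatus() == FinalApplicationStatus.SUCCEEDED;
    }
    if (state == YarnApplicationState.KILLED
        || state == YarnApplicationState.FAILED) {
      System.err.println("Application did not finish: " + report.getDiagnostics());
      return false;
    }
  }
}
```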
@@ -416,7 +416,7 @@ private ContainerRequest setupContainerAskForRM() {
}
```
-* After container allocation requests have been sent by the application manager, contailers will be launched asynchronously, by the event handler of the `AMRMClientAsync` client. The handler should implement `AMRMClientAsync.CallbackHandler` interface.
+* After container allocation requests have been sent by the application manager, containers will be launched asynchronously, by the event handler of the `AMRMClientAsync` client. The handler should implement `AMRMClientAsync.CallbackHandler` interface.
  * When there are containers allocated, the handler sets up a thread that runs the code to launch containers. Here we use the name `LaunchContainerRunnable` to demonstrate. We will talk about the `LaunchContainerRunnable` class in the following part of this article.
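For readers of the corrected sentence above, a bare-bones `AMRMClientAsync.CallbackHandler` could be sketched as follows. The stubbed method bodies and the `LaunchContainerRunnable` placeholder are illustrative assumptions only; this is not the distributed-shell implementation the guide goes on to describe.

```java
import java.util.List;

import org.apache.hadoop.yarn.api.records.Container;
import org.apache.hadoop.yarn.api.records.ContainerStatus;
import org.apache.hadoop.yarn.api.records.NodeReport;
import org.apache.hadoop.yarn.client.api.async.AMRMClientAsync;

// Sketch of an AM-side handler: when the RM allocates containers, hand each
// one to a worker thread that builds the launch context and starts it.
public class RMCallbackHandler implements AMRMClientAsync.CallbackHandler {

  @Override
  public void onContainersAllocated(List<Container> allocated) {
    for (Container container : allocated) {
      // The worker would set up a ContainerLaunchContext and start the
      // container (for example via NMClientAsync), as the guide describes.
      new Thread(new LaunchContainerRunnable(container)).start();
    }
  }

  @Override
  public void onContainersCompleted(List<ContainerStatus> statuses) {
    // Track completions here; re-request containers for failed work if needed.
  }

  @Override
  public void onShutdownRequest() {
    // Stop the application when the RM asks us to shut down.
  }

  @Override
  public void onNodesUpdated(List<NodeReport> updatedNodes) {
    // No-op in this sketch.
  }

  @Override
  public float getProgress() {
    return 0.0f; // report real progress in a real AM
  }

  @Override
  public void onError(Throwable e) {
    // Fail fast and clean up in a real AM.
  }

  /** Placeholder for the LaunchContainerRunnable class the guide introduces later. */
  private static class LaunchContainerRunnable implements Runnable {
    private final Container container;

    LaunchContainerRunnable(Container container) {
      this.container = container;
    }

    @Override
    public void run() {
      System.out.println("Launching container " + container.getId());
      // Container launch logic goes here.
    }
  }
}
```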
@@ -556,7 +556,7 @@ The `ApplicationAttemptId` will be passed to the AM via the environment and the
### Why my container is killed by the NodeManager?
-This is likely due to high memory usage exceeding your requested container memory size. There are a number of reasons that can cause this. First, look at the process tree that the NodeManager dumps when it kills your container. The two things you're interested in are physical memory and virtual memory. If you have exceeded physical memory limits your app is using too much physical memory. If you're running a Java app, you can use -hprof to look at what is taking up space in the heap. If you have exceeded virtual memory, you may need to increase the value of the the cluster-wide configuration variable `yarn.nodemanager.vmem-pmem-ratio`.
+This is likely due to high memory usage exceeding your requested container memory size. There are a number of reasons that can cause this. First, look at the process tree that the NodeManager dumps when it kills your container. The two things you're interested in are physical memory and virtual memory. If you have exceeded physical memory limits your app is using too much physical memory. If you're running a Java app, you can use -hprof to look at what is taking up space in the heap. If you have exceeded virtual memory, you may need to increase the value of the cluster-wide configuration variable `yarn.nodemanager.vmem-pmem-ratio`.
### How do I include native libraries?
@@ -33,7 +33,7 @@ The ApplicationsManager is responsible for accepting job-submissions, negotiatin
MapReduce in hadoop-2.x maintains **API compatibility** with previous stable release (hadoop-1.x). This means that all MapReduce jobs should still run unchanged on top of YARN with just a recompile.
-YARN supports the notion of **resource reservation** via the [ReservationSystem](./ReservationSystem.html), a component that allows users to specify a profile of resources over-time and temporal constraints (e.g., deadlines), and reserve resources to ensure the predictable execution of important jobs.The *ReservationSystem* tracks resources over-time, performs admission control for reservations, and dynamically instruct the underlying scheduler to ensure that the reservation is fullfilled.
+YARN supports the notion of **resource reservation** via the [ReservationSystem](./ReservationSystem.html), a component that allows users to specify a profile of resources over-time and temporal constraints (e.g., deadlines), and reserve resources to ensure the predictable execution of important jobs.The *ReservationSystem* tracks resources over-time, performs admission control for reservations, and dynamically instruct the underlying scheduler to ensure that the reservation is fulfilled.
In order to scale YARN beyond few thousands nodes, YARN supports the notion of **Federation** via the [YARN Federation](./Federation.html) feature. Federation allows to transparently wire together multiple yarn (sub-)clusters, and
make them appear as a single massive cluster. This can be used to achieve larger scale, and/or to allow multiple independent clusters to be used together for very large jobs, or for tenants who have capacity across all of them.
@@ -175,7 +175,7 @@ More precisely
1. The token passed by the RM to the NM for localization is refreshed/updated as needed.
1. Tokens in the app launch context for use by the application are *not* refreshed.
That is, if it has an out of date HDFS token —that token is not renewed. This
-also holds for tokens for for Hive, HBase, etc.
+also holds for tokens for Hive, HBase, etc.
1. Therefore, to survive AM restart after token expiry, your AM has to get the
NMs to localize the keytab or make no HDFS accesses until (somehow) a new token has been passed to them from a client.
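One common way the client side of this last item is handled, sketched here as an assumption rather than anything stated in the commit or the quoted document: the client obtains fresh HDFS delegation tokens with `FileSystem#addDelegationTokens` and serializes them for the AM container launch context. The method name and the renewer principal below are placeholders.

```java
import java.io.IOException;
import java.nio.ByteBuffer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.io.DataOutputBuffer;
import org.apache.hadoop.security.Credentials;

// Sketch: gather HDFS delegation tokens and serialize them so they can be
// attached to the AM's ContainerLaunchContext via setTokens().
private ByteBuffer setupTokens(Configuration conf) throws IOException {
  Credentials credentials = new Credentials();
  FileSystem fs = FileSystem.get(conf);
  // The renewer is normally the RM principal; this value is a placeholder.
  fs.addDelegationTokens("rm/_HOST@EXAMPLE.COM", credentials);
  DataOutputBuffer dob = new DataOutputBuffer();
  credentials.writeTokenStorageToStream(dob);
  return ByteBuffer.wrap(dob.getData(), 0, dob.getLength());
}
```

The returned buffer would then be handed to the launch context with `ContainerLaunchContext#setTokens`; as the hunk above notes, tokens passed this way are not refreshed for the application once they expire.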
@@ -546,7 +546,7 @@ the list of resources to localize.
is readable by principals other than the current user, warn,
and consider actually failing the launch (similar to the normal `ssh` application.)
-`[ ]` Client acquires HDFS delegation token and and attaches to the AM Container
+`[ ]` Client acquires HDFS delegation token and attaches to the AM Container
Launch Context,
`[ ]` AM logs in as principal in keytab via `loginUserFromKeytab()`.
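For the `loginUserFromKeytab()` item above, the AM-side login usually reduces to a pair of `UserGroupInformation` calls, sketched below. The class name, principal, and keytab path are made-up placeholders, not values from the checklist.

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

// Sketch: log the AM in from a keytab that was localized into its container,
// so it can keep authenticating to HDFS after launch-time tokens expire.
public final class AmKerberosLogin {
  public static void login(Configuration conf) throws IOException {
    UserGroupInformation.setConfiguration(conf);
    UserGroupInformation.loginUserFromKeytab(
        "myservice/host.example.com@EXAMPLE.COM", // placeholder principal
        "./myservice.keytab");                    // placeholder keytab path
  }
}
```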
@@ -242,7 +242,7 @@ Usage:
| -directlyAccessNodeLabelStore | This is DEPRECATED, will be removed in future releases. Directly access node label store, with this option, all node label related operations will not connect RM. Instead, they will access/modify stored node labels directly. By default, it is false (access via RM). AND PLEASE NOTE: if you configured yarn.node-labels.fs-store.root-dir to a local directory (instead of NFS or HDFS), this option will only work when the command run on the machine where RM is running. |
| -refreshClusterMaxPriority | Refresh cluster max priority |
| -updateNodeResource [NodeID] [MemSize] [vCores] \([OvercommitTimeout]\) | Update resource on specific node. |
-| -updateNodeResource [NodeID] [ResourceTypes] \([OvercommitTimeout]\) | Update resource types on specific node. Resource Types is comma-delimited key value pairs of any resources availale at Resource Manager. For example, memory-mb=1024Mi,vcores=1,resource1=2G,resource2=4m|
+| -updateNodeResource [NodeID] [ResourceTypes] \([OvercommitTimeout]\) | Update resource types on specific node. Resource Types is comma-delimited key value pairs of any resources available at Resource Manager. For example, memory-mb=1024Mi,vcores=1,resource1=2G,resource2=4m|
| -transitionToActive [--forceactive] [--forcemanual] \<serviceId\> | Transitions the service into Active state. Try to make the target active without checking that there is no active node if the --forceactive option is used. This command can not be used if automatic failover is enabled. Though you can override this by --forcemanual option, you need caution. This command can not be used if automatic failover is enabled.|
| -transitionToStandby [--forcemanual] \<serviceId\> | Transitions the service into Standby state. This command can not be used if automatic failover is enabled. Though you can override this by --forcemanual option, you need caution. |
| -getServiceState \<serviceId\> | Returns the state of the service. |