diff --git a/hadoop-common-project/hadoop-annotations/src/main/java/org/apache/hadoop/classification/tools/RootDocProcessor.java b/hadoop-common-project/hadoop-annotations/src/main/java/org/apache/hadoop/classification/tools/RootDocProcessor.java
index 8042f17b8d..60c2a6f6e9 100644
--- a/hadoop-common-project/hadoop-annotations/src/main/java/org/apache/hadoop/classification/tools/RootDocProcessor.java
+++ b/hadoop-common-project/hadoop-annotations/src/main/java/org/apache/hadoop/classification/tools/RootDocProcessor.java
@@ -127,6 +127,10 @@ public Object invoke(Object proxy, Method method, Object[] args)
return filter(((ClassDoc) target).constructors(true),
ConstructorDoc.class);
}
+ } else {
+ if (methodName.equals("methods")) {
+ return filter(((ClassDoc) target).methods(true), MethodDoc.class);
+ }
}
} else if (target instanceof PackageDoc) {
if (methodName.equals("allClasses")) {
diff --git a/hadoop-yarn-project/hadoop-yarn/dev-support/jdiff/Apache_Hadoop_YARN_API_2.6.0.xml b/hadoop-yarn-project/hadoop-yarn/dev-support/jdiff/Apache_Hadoop_YARN_API_2.6.0.xml
new file mode 100644
index 0000000000..5d58600a0c
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/dev-support/jdiff/Apache_Hadoop_YARN_API_2.6.0.xml
@@ -0,0 +1,13076 @@
+ The ResourceManager responds with a new, monotonically
+ increasing, {@link ApplicationId} which is used by the client to submit
+ a new application.
+
+ The ResourceManager also responds with details such
+ as maximum resource capabilities in the cluster as specified in
+ {@link GetNewApplicationResponse}.
+
+ @param request request to get a new ApplicationId
+ @return response containing the new ApplicationId to be used
+ to submit an application
+ @throws YarnException
+ @throws IOException
+ @see #submitApplication(SubmitApplicationRequest)]]>
+
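A minimal, hypothetical sketch of the flow above, assuming the YarnClient convenience library (which wraps this protocol); none of the names below come from this file:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.api.records.ApplicationId;
import org.apache.hadoop.yarn.client.api.YarnClient;
import org.apache.hadoop.yarn.client.api.YarnClientApplication;

public class NewApplicationSketch {
  public static void main(String[] args) throws Exception {
    YarnClient yarnClient = YarnClient.createYarnClient();
    yarnClient.init(new Configuration());
    yarnClient.start();
    // Issues GetNewApplicationRequest under the hood; the RM answers
    // with a new, monotonically increasing ApplicationId.
    YarnClientApplication app = yarnClient.createApplication();
    ApplicationId appId = app.getNewApplicationResponse().getApplicationId();
    System.out.println("New ApplicationId: " + appId);
    yarnClient.stop();
  }
}
```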
+
+ The interface used by clients to submit a new application to the
+ ResourceManager.
+
+ The client is required to provide details such as queue,
+ {@link Resource} required to run the ApplicationMaster,
+ the equivalent of {@link ContainerLaunchContext} for launching
+ the ApplicationMaster etc. via the
+ {@link SubmitApplicationRequest}.
+
+ Currently the ResourceManager sends an immediate (empty)
+ {@link SubmitApplicationResponse} on accepting the submission and throws
+ an exception if it rejects the submission. However, this call needs to be
+ followed by {@link #getApplicationReport(GetApplicationReportRequest)}
+ to make sure that the application gets properly submitted - obtaining a
+ {@link SubmitApplicationResponse} from ResourceManager doesn't guarantee
+ that RM 'remembers' this application beyond failover or restart. If RM
+ failover or RM restart happens before ResourceManager saves the
+ application's state successfully, the subsequent
+ {@link #getApplicationReport(GetApplicationReportRequest)} will throw
+ an {@link ApplicationNotFoundException}. Clients need to re-submit
+ the application with the same {@link ApplicationSubmissionContext} when
+ they encounter an {@link ApplicationNotFoundException} on the
+ {@link #getApplicationReport(GetApplicationReportRequest)} call.
+
+ During the submission process, the ResourceManager checks whether the
+ application already exists. If it does, the ResourceManager simply
+ returns the SubmitApplicationResponse.
+
+ In secure mode, the ResourceManager verifies access to
+ queues etc. before accepting the application submission.
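A hedged illustration of the submit-then-verify contract described above (a sketch, not part of this file; `appContext` is assumed to be fully populated):

```java
import org.apache.hadoop.yarn.api.records.ApplicationId;
import org.apache.hadoop.yarn.api.records.ApplicationReport;
import org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext;
import org.apache.hadoop.yarn.client.api.YarnClient;
import org.apache.hadoop.yarn.exceptions.ApplicationNotFoundException;

public class SubmitSketch {
  static ApplicationReport submitAndConfirm(YarnClient yarnClient,
      ApplicationSubmissionContext appContext) throws Exception {
    ApplicationId appId = yarnClient.submitApplication(appContext);
    try {
      // Submission is only durable once the RM has saved the app state.
      return yarnClient.getApplicationReport(appId);
    } catch (ApplicationNotFoundException e) {
      // RM failed over or restarted before persisting the state:
      // re-submit with the same context, as prescribed above.
      yarnClient.submitApplication(appContext);
      return yarnClient.getApplicationReport(appId);
    }
  }
}
```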
+ The interface used by clients to request the ResourceManager
+ to abort a submitted application.
+
+ The client, via {@link KillApplicationRequest} provides the
+ {@link ApplicationId} of the application to be aborted.
+
+ In secure mode, the ResourceManager verifies access to the
+ application, queue etc. before terminating the application.
+
+ Currently, the ResourceManager returns an empty response
+ on success and throws an exception on rejecting the request.
+
+ @return ResourceManager returns an empty response
+ on success and throws an exception on rejecting the request
+ @throws YarnException
+ @throws IOException
+ @see #getQueueUserAcls(GetQueueUserAclsInfoRequest)]]>
+ The interface used by clients to get a report of an application from the
+ ResourceManager.
+
+ The client, via {@link GetApplicationReportRequest} provides the
+ {@link ApplicationId} of the application.
+
+ In secure mode, the ResourceManager verifies access to the
+ application, queue etc. before accepting the request.
+
+ The ResourceManager responds with a
+ {@link GetApplicationReportResponse} which includes the
+ {@link ApplicationReport} for the application.
+
+ If the user does not have VIEW_APP access then the
+ following fields in the report will be set to stubbed values:
+
+ The interface used by clients to get metrics about the cluster from the
+ ResourceManager.
+
+ The ResourceManager responds with a
+ {@link GetClusterMetricsResponse} which includes the
+ {@link YarnClusterMetrics} with details such as number of current
+ nodes in the cluster.
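For example, a client-side sketch (assuming the YarnClient wrapper) that reads these metrics:

```java
import org.apache.hadoop.yarn.api.records.YarnClusterMetrics;
import org.apache.hadoop.yarn.client.api.YarnClient;

public class ClusterMetricsSketch {
  // Returns the current number of NodeManagers reported by the RM.
  static int nodeManagerCount(YarnClient yarnClient) throws Exception {
    YarnClusterMetrics metrics = yarnClient.getYarnClusterMetrics();
    return metrics.getNumNodeManagers();
  }
}
```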
+ The interface used by clients to get a report of applications
+ in the cluster from the ResourceManager.
+
+ The ResourceManager responds with a
+ {@link GetApplicationsResponse} which includes the
+ {@link ApplicationReport} for the applications.
+
+ If the user does not have VIEW_APP access for an
+ application then the corresponding report will be filtered as
+ described in {@link #getApplicationReport(GetApplicationReportRequest)}.
+
+ The interface used by clients to get a report of all nodes
+ in the cluster from the ResourceManager.
+
+ The ResourceManager responds with a
+ {@link GetClusterNodesResponse} which includes the
+ {@link NodeReport} for all the nodes in the cluster.
+ The interface used by clients to get information about queues
+ from the ResourceManager.
+
+ The client, via {@link GetQueueInfoRequest}, can ask for details such
+ as used/total resources, child queues, running applications etc.
+
+ In secure mode, the ResourceManager verifies access before
+ providing the information.
+ The interface used by clients to get information about queue
+ acls for the current user from the ResourceManager.
+
+ The ResourceManager responds with queue acls for all
+ existing queues.
+ The ResourceManager responds with the delegation
+ {@link Token} that can be used by the client to speak to this
+ service.
+ @param request request to get a delegation token for the client.
+ @return delegation token that can be used to talk to this service
+ @throws YarnException
+ @throws IOException]]>
+
+ The interface used by clients to get a report of an application attempt
+ from the ResourceManager.
+
+ The client, via {@link GetApplicationAttemptReportRequest} provides the
+ {@link ApplicationAttemptId} of the application attempt.
+
+ In secure mode, the ResourceManager verifies access to
+ the method before accepting the request.
+
+ The ResourceManager responds with a
+ {@link GetApplicationAttemptReportResponse} which includes the
+ {@link ApplicationAttemptReport} for the application attempt.
+
+ If the user does not have VIEW_APP access then the following
+ fields in the report will be set to stubbed values:
+
+ The interface used by clients to get a report of all application attempts
+ of an application from the ResourceManager.
+
+ The ResourceManager responds with a
+ {@link GetApplicationAttemptsResponse} which includes the
+ {@link ApplicationAttemptReport} for all the application attempts of a
+ specified application.
+
+ If the user does not have VIEW_APP access for an application
+ then the corresponding report will be filtered as described in
+ {@link #getApplicationAttemptReport(GetApplicationAttemptReportRequest)}.
+
+ The interface used by clients to get a report of a container from the
+ ResourceManager.
+
+ The client, via {@link GetContainerReportRequest} provides the
+ {@link ContainerId} of the container.
+
+ In secure mode, the ResourceManager verifies access to the
+ method before accepting the request.
+
+ The ResourceManager responds with a
+ {@link GetContainerReportResponse} which includes the
+ {@link ContainerReport} for the container.
+
+ The interface used by clients to get reports of all containers of an
+ application attempt from the ResourceManager.
+
+ The client, via {@link GetContainersRequest} provides the
+ {@link ApplicationAttemptId} of the application attempt.
+
+ In secure mode, the ResourceManager verifies access to the
+ method before accepting the request.
+
+ The ResourceManager responds with a
+ {@link GetContainersResponse} which includes a list of
+ {@link ContainerReport} for all the containers of a specific application
+ attempt.
+
+ The client packages all details of its request in a + {@link ReservationSubmissionRequest} object. This contains information + about the amount of capacity, temporal constraints, and concurrency needs. + Furthermore, the reservation might be composed of multiple stages, with + ordering dependencies among them. +
+
+ In order to respond, a new admission control component in the
+ {@code ResourceManager} performs an analysis of the resources that have
+ been committed over the period of time the user is requesting, verifies that
+ the user's request can be fulfilled, and that it respects the sharing policy
+ (e.g., {@code CapacityOverTimePolicy}). Once it has positively determined
+ that the ReservationSubmissionRequest is satisfiable, the
+ {@code ResourceManager} answers with a
+ {@link ReservationSubmissionResponse} that includes a non-null
+ {@link ReservationId}. Upon failure to find a valid allocation, the response
+ is an exception with the reason.
+
+ On application submission the client can use this {@link ReservationId} to
+ obtain access to the reserved resources.
+
+ The system guarantees that during the time-range specified by the user, the
+ reservationID will correspond to a valid reservation. The amount of
+ capacity dedicated to such a queue can vary over time, depending on the
+ allocation that has been determined. But it is guaranteed to satisfy all
+ the constraints expressed by the user in the
+ {@link ReservationSubmissionRequest}.
+ + @param request the request to submit a new Reservation + @return response the {@link ReservationId} on accepting the submission + @throws YarnException if the request is invalid or reservation cannot be + created successfully + @throws IOException]]> ++ The allocation is attempted by virtually substituting all previous + allocations related to this Reservation with new ones, that satisfy the new + {@link ReservationUpdateRequest}. Upon success the previous allocation is + substituted by the new one, and on failure (i.e., if the system cannot find + a valid allocation for the updated request), the previous allocation + remains valid. + + The {@link ReservationId} is not changed, and applications currently + running within this reservation will automatically receive the resources + based on the new allocation. +
+
+ @param request to update an existing Reservation (the ReservationRequest
+ should refer to an existing valid {@link ReservationId})
+ @return response empty on successfully updating the existing reservation
+ @throws YarnException if the request is invalid or reservation cannot be
+ updated successfully
+ @throws IOException]]>
+
+ The protocol between clients and the ResourceManager
+ to submit/abort jobs and to get information on applications, cluster metrics,
+ nodes, queues and ACLs.]]>
+ ResourceManager
.
+
+
+ + The client, via {@link GetApplicationReportRequest} provides the + {@link ApplicationId} of the application. +
+ +
+ In secure mode, the ApplicationHistoryServer verifies access to
+ the application, queue etc. before accepting the request.
+
+ The ApplicationHistoryServer
responds with a
+ {@link GetApplicationReportResponse} which includes the
+ {@link ApplicationReport} for the application.
+
+ If the user does not have VIEW_APP
access then the following
+ fields in the report will be set to stubbed values:
+
ApplicationHistoryServer
.
+
+
+
+ The ApplicationHistoryServer
responds with a
+ {@link GetApplicationsResponse} which includes a list of
+ {@link ApplicationReport} for all the applications.
+
+ If the user does not have VIEW_APP
access for an application
+ then the corresponding report will be filtered as described in
+ {@link #getApplicationReport(GetApplicationReportRequest)}.
+
ApplicationHistoryServer
.
+
+
+ + The client, via {@link GetApplicationAttemptReportRequest} provides the + {@link ApplicationAttemptId} of the application attempt. +
+ +
+ In secure mode, the ApplicationHistoryServer verifies access to
+ the method before accepting the request.
+
+ The ApplicationHistoryServer
responds with a
+ {@link GetApplicationAttemptReportResponse} which includes the
+ {@link ApplicationAttemptReport} for the application attempt.
+
+ If the user does not have VIEW_APP
access then the following
+ fields in the report will be set to stubbed values:
+
ApplicationHistoryServer
.
+
+
+
+ The ApplicationHistoryServer
responds with a
+ {@link GetApplicationAttemptsResponse} which includes the
+ {@link ApplicationAttemptReport} for all the application attempts of a
+ specified application.
+
+ If the user does not have VIEW_APP
access for an application
+ then the corresponding report will be filtered as described in
+ {@link #getApplicationAttemptReport(GetApplicationAttemptReportRequest)}.
+
ApplicationHistoryServer
.
+
+
+ + The client, via {@link GetContainerReportRequest} provides the + {@link ContainerId} of the container. +
+ +
+ In secure mode, the ApplicationHistoryServer verifies access to
+ the method before accepting the request.
+
+ The ApplicationHistoryServer
responds with a
+ {@link GetContainerReportResponse} which includes the
+ {@link ContainerReport} for the container.
+
+ ApplicationHistoryServer.
+
+
+ + The client, via {@link GetContainersRequest} provides the + {@link ApplicationAttemptId} of the application attempt. +
+ +
+ In secure mode, the ApplicationHistoryServer verifies access to
+ the method before accepting the request.
+
+ The ApplicationHistoryServer
responds with a
+ {@link GetContainersResponse} which includes a list of
+ {@link ContainerReport} for all the containers of a specific application
+ attempt.
+
+ The ApplicationHistoryServer
responds with the delegation
+ token {@link Token} that can be used by the client to speak to this
+ service.
+
ApplicationHistoryServer
to
+ get the information of completed applications etc.
+ ]]>
+ ApplicationMaster
to register with
+ the ResourceManager
.
+
+
+
+ The ApplicationMaster
needs to provide details such as RPC
+ Port, HTTP tracking url etc. as specified in
+ {@link RegisterApplicationMasterRequest}.
+
+ The ResourceManager
responds with critical details such as
+ maximum resource capabilities in the cluster as specified in
+ {@link RegisterApplicationMasterResponse}.
+
ApplicationMaster
to notify the
+ ResourceManager
about its completion (success or failure).
+
+ The ApplicationMaster
has to provide details such as
+ final state, diagnostics (in case of failures) etc. as specified in
+ {@link FinishApplicationMasterRequest}.
The ResourceManager
responds with
+ {@link FinishApplicationMasterResponse}.
ApplicationMaster
and the
+ ResourceManager
.
+
+
+
+ The ApplicationMaster
uses this interface to provide a list of
+ {@link ResourceRequest} and returns unused {@link Container} allocated to
+ it via {@link AllocateRequest}. Optionally, the
+ ApplicationMaster
can also blacklist resources which
+ it doesn't want to use.
+
+ This also doubles up as a heartbeat to let the
+ ResourceManager
know that the ApplicationMaster
+ is alive. Thus, applications should periodically make this call to be kept
+ alive. The frequency depends on
+ {@link YarnConfiguration#RM_AM_EXPIRY_INTERVAL_MS} which defaults to
+ {@link YarnConfiguration#DEFAULT_RM_AM_EXPIRY_INTERVAL_MS}.
+
+ The ResourceManager
responds with list of allocated
+ {@link Container}, status of completed containers and headroom information
+ for the application.
+
+ The ApplicationMaster
can use the available headroom
+ (resources) to decide how to utilize allocated resources and make informed
+ decisions about future resource requests.
+
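A hypothetical sketch of the allocate/heartbeat loop described above, using the AMRMClient convenience library rather than the raw protocol (the progress value and sleep interval are arbitrary):

```java
import java.util.List;
import org.apache.hadoop.yarn.api.protocolrecords.AllocateResponse;
import org.apache.hadoop.yarn.api.records.Container;
import org.apache.hadoop.yarn.client.api.AMRMClient;

public class HeartbeatSketch {
  static void heartbeatLoop(AMRMClient<AMRMClient.ContainerRequest> rmClient)
      throws Exception {
    while (true) {
      // Doubles as the liveness heartbeat; progress is reported in [0, 1].
      AllocateResponse response = rmClient.allocate(0.5f);
      List<Container> allocated = response.getAllocatedContainers();
      // Launch work on newly allocated containers, inspect completed
      // container statuses, and consult headroom here.
      Thread.sleep(1000); // stay well under RM_AM_EXPIRY_INTERVAL_MS
    }
  }
}
```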
ApplicationMaster
+ and the ResourceManager
.
+
+ This is used by the ApplicationMaster
to register/unregister
+ and to request and obtain resources in the cluster from the
+ ResourceManager
.
ApplicationMaster
provides a list of
+ {@link StartContainerRequest}s to a NodeManager
to
+ start {@link Container}s allocated to it using this interface.
+
+
+
+ The ApplicationMaster
has to provide details such as allocated
+ resource capability, security tokens (if enabled), command to be executed
+ to start the container, environment for the process, necessary
+ binaries/jar/shared-objects etc. via the {@link ContainerLaunchContext} in
+ the {@link StartContainerRequest}.
+
+ The NodeManager
sends a response via
+ {@link StartContainersResponse} which includes a list of
+ successfully launched {@link Container}s, a
+ containerId-to-exception map for each failed {@link StartContainerRequest} in
+ which the exception indicates errors for that container, and an
+ allServicesMetaData map between the names of auxiliary services and their
+ corresponding meta-data. Note: Non-container-specific exceptions will
+ still be thrown by the API method itself.
+
+ The ApplicationMaster
can use
+ {@link #getContainerStatuses(GetContainerStatusesRequest)} to get updated
+ statuses of the to-be-launched or launched containers.
+
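A sketch of one container start, assuming the NMClient convenience library (which sends the StartContainerRequest described above); the command is a placeholder:

```java
import java.util.Collections;
import org.apache.hadoop.yarn.api.records.Container;
import org.apache.hadoop.yarn.api.records.ContainerLaunchContext;
import org.apache.hadoop.yarn.client.api.NMClient;
import org.apache.hadoop.yarn.util.Records;

public class StartContainerSketch {
  static void launch(NMClient nmClient, Container container) throws Exception {
    ContainerLaunchContext ctx = Records.newRecord(ContainerLaunchContext.class);
    // Command, environment, local resources, tokens etc. would go here.
    ctx.setCommands(Collections.singletonList("echo hello"));
    nmClient.startContainer(container, ctx);
  }
}
```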
ApplicationMaster
requests a NodeManager
to
+ stop a list of {@link Container}s allocated to it using this
+ interface.
+
+
+
+ The ApplicationMaster
sends a {@link StopContainersRequest}
+ which includes the {@link ContainerId}s of the containers to be stopped.
+
+ The NodeManager
sends a response via
+ {@link StopContainersResponse} which includes a list of {@link ContainerId}
+ s of successfully stopped containers, a containerId-to-exception map for
+ each failed request in which the exception indicates errors for that
+ container. Note: Non-container-specific exceptions will still be thrown by
+ the API method itself. ApplicationMaster
can use
+ {@link #getContainerStatuses(GetContainerStatusesRequest)} to get updated
+ statuses of the containers.
+
ApplicationMaster
to request for current
+ statuses of Container
s from the NodeManager
.
+
+
+
+ The ApplicationMaster
sends a
+ {@link GetContainerStatusesRequest} which includes the {@link ContainerId}s
+ of all containers whose statuses are needed.
+
+ The NodeManager
responds with
+ {@link GetContainerStatusesResponse} which includes a list of
+ {@link ContainerStatus} of the successfully queried containers and a
+ containerId-to-exception map for each failed request in which the exception
+ indicates errors for that container. Note: Non-container-specific
+ exceptions will still be thrown by the API method itself.
+
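The single-container convenience form of this query, as a hedged sketch via NMClient:

```java
import org.apache.hadoop.yarn.api.records.ContainerId;
import org.apache.hadoop.yarn.api.records.ContainerStatus;
import org.apache.hadoop.yarn.api.records.NodeId;
import org.apache.hadoop.yarn.client.api.NMClient;

public class ContainerStatusSketch {
  // Queries the NodeManager at nodeId for one container's status.
  static ContainerStatus query(NMClient nmClient, ContainerId containerId,
      NodeId nodeId) throws Exception {
    return nmClient.getContainerStatus(containerId, nodeId);
  }
}
```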
ContainerStatus
es of containers with
+ the specified ContainerId
s
+ @return response containing the list of ContainerStatus
of the
+ successfully queried containers and a containerId-to-exception map
+ for failed requests.
+
+ @throws YarnException
+ @throws IOException]]>
+ ApplicationMaster
and a
+ NodeManager
to start/stop containers and to get status
+ of running containers.
+
+ If security is enabled the NodeManager
verifies that the
+ ApplicationMaster
has truly been allocated the container
+ by the ResourceManager
and also verifies all interactions such
+ as stopping the container or obtaining status information for the container.
+
ResourceManager
about the application's resource requirements.
+ @return the list of ResourceRequest
+ @see ResourceRequest]]>
+ ResourceManager
about the application's resource requirements.
+ @param resourceRequests list of ResourceRequest
to update the
+ ResourceManager
about the application's
+ resource requirements
+ @see ResourceRequest]]>
+ ApplicationMaster
.
+ @return list of ContainerId
of containers being
+ released by the ApplicationMaster
]]>
+ ApplicationMaster
+ @param releaseContainers list of ContainerId
of
+ containers being released by the
+ ApplicationMaster
]]>
+ ApplicationMaster
.
+ @return the ResourceBlacklistRequest
being sent by the
+ ApplicationMaster
+ @see ResourceBlacklistRequest]]>
+ ResourceManager
about the blacklist additions and removals
+ per the ApplicationMaster
.
+
+ @param resourceBlacklistRequest the ResourceBlacklistRequest
+ to inform the ResourceManager
about
+ the blacklist additions and removals
+ per the ApplicationMaster
+ @see ResourceBlacklistRequest]]>
+ ApplicationMaster
]]>
+ ResourceManager
about some container's resources need to be
+ increased]]>
+ ApplicationMaster
to the
+ ResourceManager
to obtain resources in the cluster.
+
+ The request includes: +
ResourceManager
about the application's
+ resource requirements.
+ ApplicationMaster
to take some action then it will send an
+ AMCommand to the ApplicationMaster
. See AMCommand
+ for details on commands and actions for them.
+ @return AMCommand
if the ApplicationMaster
should
+ take action, null
otherwise
+ @see AMCommand]]>
+ Container
by the
+ ResourceManager
.
+ @return list of newly allocated Container
]]>
+ NodeReport
s. Updates could
+ be changes in health, availability etc of the nodes.
+ @return The delta of updated nodes since the last response]]>
+ + +
The message is a snapshot of the resources the RM wants back from the AM.
+ While demand persists, the RM will repeat its request; applications should
+ not interpret each message as a request for additional
+ resources on top of previous messages. Resources requested consistently
+ over some duration may be forcibly killed by the RM.
+
+ @return A specification of the resources to reclaim from this AM.]]>
+
+
1) AM is receiving first container on underlying NodeManager.
+ OR
+ 2) NMToken master key rolled over in ResourceManager and AM is getting new
+ container on the same underlying NodeManager.
+
AM will receive one NMToken per NM irrespective of the number of containers + issued on same NM. AM is expected to store these tokens until issued a + new token for the same NM.
]]> +
+ The response sent by the ResourceManager to the
+ ApplicationMaster during resource negotiation.
+
+ The response, includes: +
ApplicationMaster
+ take some actions (resync, shutdown etc.).
+ ApplicationMaster
.
+ @return final state of the ApplicationMaster
]]>
+ ApplicationMaster
+ @param finalState final state of the ApplicationMaster
]]>
+ ApplicationMaster
.
+ If this URL contains a scheme then it will be used by the resource
+ manager web application proxy; otherwise it will default to http.
+ @return tracking URL for the ApplicationMaster
]]>
+ ApplicationMaster
.
+ This is the web-URL to which ResourceManager or web-application proxy will
+ redirect client/users once the application is finished and the
+ ApplicationMaster
is gone.
+ + If the passed url has a scheme then that will be used by the + ResourceManager and web-application proxy, otherwise the scheme will + default to http. +
+
+ Besides a real URL, the strings empty, null, and "N/A" are all valid. If a
+ URL isn't explicitly passed, it defaults to "N/A" on the ResourceManager.
+
+ @param url
+ tracking URL for the ApplicationMaster
]]>
+
ApplicationMaster
to
+ inform the ResourceManager
about its completion.
+
+ The final request includes details such: +
ApplicationMaster
ApplicationMaster
+ ResourceManager
to a
+ ApplicationMaster
on it's completion.
+
+
+ + The response, includes: +
ApplicationAttemptId
of an application attempt]]>
+ ApplicationAttemptId
of an application attempt]]>
+ ResourceManager
to get an
+ {@link ApplicationAttemptReport} for an application attempt.
+
+
+ + The request should include the {@link ApplicationAttemptId} of the + application attempt. +
+ + @see ApplicationAttemptReport + @see ApplicationHistoryProtocol#getApplicationAttemptReport(GetApplicationAttemptReportRequest)]]> +ApplicationAttemptReport
for the application attempt]]>
+ ApplicationAttemptReport
for the application attempt]]>
+ ResourceManager
to a client requesting
+ an application attempt report.
+
+
+ + The response includes an {@link ApplicationAttemptReport} which has the + details about the particular application attempt +
+ + @see ApplicationAttemptReport + @see ApplicationHistoryProtocol#getApplicationAttemptReport(GetApplicationAttemptReportRequest)]]> +ApplicationId
of an application]]>
+ ApplicationId
of an application]]>
+ ResourceManager
.
+
+
+ @see ApplicationHistoryProtocol#getApplicationAttempts(GetApplicationAttemptsRequest)]]>
+ ApplicationReport
of an application]]>
+ ApplicationReport
of an application]]>
+ ResourceManager
to a client requesting
+ a list of {@link ApplicationAttemptReport} for application attempts.
+
+
+
+ The ApplicationAttemptReport
for each application includes the
+ details of an application attempt.
+
ApplicationId
of the application]]>
+ ApplicationId
of the application]]>
+ ResourceManager
to
+ get an {@link ApplicationReport} for an application.
+
+ The request should include the {@link ApplicationId} of the + application.
+ + @see ApplicationClientProtocol#getApplicationReport(GetApplicationReportRequest) + @see ApplicationReport]]> +ApplicationReport
for the application]]>
+ ResourceManager
to a client
+ requesting an application report.
+
+ The response includes an {@link ApplicationReport} which has details such
+ as user, queue, name, host on which the ApplicationMaster
is
+ running, RPC port, tracking URL, diagnostics, start time etc.
ResourceManager
.
+
+
+ @see ApplicationClientProtocol#getApplications(GetApplicationsRequest)
+
+
+ Setting any of the parameters to null just disables that filter.
+ + @param scope {@link ApplicationsRequestScope} to filter by + @param users list of users to filter by + @param queues list of scheduler queues to filter by + @param applicationTypes types of applications + @param applicationTags application tags to filter by + @param applicationStates application states to filter by + @param startRange range of application start times to filter by + @param finishRange range of application finish times to filter by + @param limit number of applications to limit to + @return {@link GetApplicationsRequest} to be used with + {@link ApplicationClientProtocol#getApplications(GetApplicationsRequest)}]]> +ResourceManager
.
+
+
+ @param scope {@link ApplicationsRequestScope} to filter by
+ @see ApplicationClientProtocol#getApplications(GetApplicationsRequest)]]>
+ ResourceManager
.
+
+
+
+ @see ApplicationClientProtocol#getApplications(GetApplicationsRequest)]]>
+ ResourceManager
.
+
+
+
+ @see ApplicationClientProtocol#getApplications(GetApplicationsRequest)]]>
+ ResourceManager
.
+
+
+
+ @see ApplicationClientProtocol#getApplications(GetApplicationsRequest)]]>
+ ResourceManager
.
+
+
+ @see ApplicationClientProtocol#getApplications(GetApplicationsRequest)]]>
+ ApplicationReport
for applications]]>
+ ResourceManager
to a client
+ requesting an {@link ApplicationReport} for applications.
+
+ The ApplicationReport
for each application includes details
+ such as user, queue, name, host on which the ApplicationMaster
+ is running, RPC port, tracking URL, diagnostics, start time etc.
ResourceManager
.
+
+ Currently, this is empty.
+ + @see ApplicationClientProtocol#getClusterMetrics(GetClusterMetricsRequest)]]> +YarnClusterMetrics
for the cluster]]>
+ ResourceManager
to a client
+ requesting cluster metrics.+ + @see YarnClusterMetrics + @see ApplicationClientProtocol#getClusterMetrics(GetClusterMetricsRequest)]]> +
ResourceManager
.
+
+ The request will ask for all nodes in the given {@link NodeState}s.
+
+ @see ApplicationClientProtocol#getClusterNodes(GetClusterNodesRequest)]]>
+ NodeReport
for all nodes in the cluster]]>
+ ResourceManager
to a client
+ requesting a {@link NodeReport} for all nodes.
+
+ The NodeReport
contains per-node information such as
+ available resources, number of containers, tracking url, rack name, health
+ status etc.
+
+ @see NodeReport
+ @see ApplicationClientProtocol#getClusterNodes(GetClusterNodesRequest)]]>
+
ContainerId
of the Container]]>
+ ContainerId
of the container]]>
+ ResourceManager
to get an
+ {@link ContainerReport} for a container.
+ ]]>
+ ContainerReport
for the container]]>
+ ResourceManager
to a client requesting
+ a container report.
+
+
+ + The response includes a {@link ContainerReport} which has details of a + container. +
]]> +ApplicationAttemptId
of an application attempt]]>
+ ApplicationAttemptId
of an application attempt]]>
+ ResourceManager
.
+
+
+ @see ApplicationHistoryProtocol#getContainers(GetContainersRequest)]]>
+ ContainerReport
for all the containers of an
+ application attempt]]>
+ ContainerReport
for all the containers of
+ an application attempt]]>
+ ResourceManager
to a client requesting
+ a list of {@link ContainerReport} for containers.
+
+
+
+ The ContainerReport
for each container includes the container
+ details.
+
ContainerStatus
.
+
+ @return the list of ContainerId
s of containers for which to
+ obtain the ContainerStatus
.]]>
+ ContainerStatus
+
+ @param containerIds
+ a list of ContainerId
s of containers for which to
+ obtain the ContainerStatus
]]>
+ ApplicationMaster
to the
+ NodeManager
to get {@link ContainerStatus} of requested
+ containers.
+
+
+ @see ContainerManagementProtocol#getContainerStatuses(GetContainerStatusesRequest)]]>
+ ContainerStatus
es of the requested containers.]]>
+ NodeManager
to the
+ ApplicationMaster
when asked to obtain the
+ ContainerStatus
of requested containers.
+
+
+ @see ContainerManagementProtocol#getContainerStatuses(GetContainerStatusesRequest)]]>
+ Currently, this is empty.
+ + @see ApplicationClientProtocol#getNewApplication(GetNewApplicationRequest)]]> +ApplicationId
allocated by the
+ ResourceManager
.
+ @return new ApplicationId
allocated by the
+ ResourceManager
]]>
+ ResourceManager
to the client for
+ a request to get a new {@link ApplicationId} for submitting applications.
+
+ Clients can submit an application with the returned + {@link ApplicationId}.
+ + @see ApplicationClientProtocol#getNewApplication(GetNewApplicationRequest)]]> +true
if applications' information is to be included,
+ else false
]]>
+ true
if information about child queues is required,
+ else false
]]>
+ true
if information about entire hierarchy is
+ required, false
otherwise]]>
+ ResourceManager
.
+
+ @see ApplicationClientProtocol#getQueueInfo(GetQueueInfoRequest)]]>
+ QueueInfo
for the specified queue]]>
+ ResourceManager
to a client
+ requesting information about queues in the system.
+
+ The response includes a {@link QueueInfo} which has details such as
+ queue name, used/total capacities, running applications, child queues etc.
+ + @see QueueInfo + @see ApplicationClientProtocol#getQueueInfo(GetQueueInfoRequest)]]> +ResourceManager
to
+ get queue acls for the current user.
+
+ Currently, this is empty.
+ + @see ApplicationClientProtocol#getQueueUserAcls(GetQueueUserAclsInfoRequest)]]> +QueueUserACLInfo
per queue for the user]]>
+ ResourceManager
to clients
+ seeking queue acls for the user.
+
+ The response contains a list of {@link QueueUserACLInfo} which + provides information about {@link QueueACL} per queue.
+ + @see QueueACL + @see QueueUserACLInfo + @see ApplicationClientProtocol#getQueueUserAcls(GetQueueUserAclsInfoRequest)]]> +ApplicationId
of the application to be aborted]]>
+ ResourceManager
+ to abort a submitted application.
+
+ The request includes the {@link ApplicationId} of the application to be + aborted.
+ + @see ApplicationClientProtocol#forceKillApplication(KillApplicationRequest)]]> +ResourceManager
to the client aborting
+ a submitted application.
+
+ + The response, includes: +
ResourceManager
crashes before the process of killing the
+ application is completed, the ResourceManager
may retry this
+ application on recovery.
+
+
+ @see ApplicationClientProtocol#forceKillApplication(KillApplicationRequest)]]>
+ ApplicationId
of the application to be moved]]>
+ ApplicationId
of the application to be moved]]>
+ ResourceManager
+ to move a submitted application to a different queue.
+
+ The request includes the {@link ApplicationId} of the application to be + moved and the queue to place it in.
+ + @see ApplicationClientProtocol#moveApplicationAcrossQueues(MoveApplicationAcrossQueuesRequest)]]> +ResourceManager
to the client moving
+ a submitted application to a different queue.
+
+ + A response without exception means that the move has completed successfully. +
+ + @see ApplicationClientProtocol#moveApplicationAcrossQueues(MoveApplicationAcrossQueuesRequest)]]> +RegisterApplicationMasterRequest
]]>
+ ApplicationMaster
is
+ running.
+ @return host on which the ApplicationMaster
is running]]>
+ ApplicationMaster
is
+ running.
+ @param host host on which the ApplicationMaster
+ is running]]>
+ ApplicationMaster
+ is responding.
+ @return the RPC port on which the ApplicationMaster
is
+ responding]]>
+ ApplicationMaster
is
+ responding.
+ @param port RPC port on which the ApplicationMaster
is
+ responding]]>
+ ApplicationMaster
.
+ If this URL contains a scheme then it will be used by the resource
+ manager web application proxy; otherwise it will default to http.
+ @return tracking URL for the ApplicationMaster
]]>
+ ApplicationMaster
while
+ it is running. This is the web-URL to which ResourceManager or
+ web-application proxy will redirect client/users while the application and
+ the ApplicationMaster
are still running.
+ + If the passed url has a scheme then that will be used by the + ResourceManager and web-application proxy, otherwise the scheme will + default to http. +
+
+ Besides a real URL, the strings empty, null, and "N/A" are all valid. If a
+ URL isn't explicitly passed, it defaults to "N/A" on the ResourceManager.
+
+ @param trackingUrl
+ tracking URLfor the ApplicationMaster
]]>
+
ApplicationMaster
to
+ ResourceManager
on registration.
+
+ The registration includes details such as: +
ApplicationACL
s]]>
+ The ClientToAMToken master key is sent to ApplicationMaster
+ by ResourceManager
via {@link RegisterApplicationMasterResponse}
+ , used to verify corresponding ClientToAMToken.
]]> +
]]> +
ResourceManager
from previous application attempts.
+
+
+ @return the list of running containers as viewed by
+ ResourceManager
from previous application attempts
+ @see RegisterApplicationMasterResponse#getNMTokensFromPreviousAttempts()]]>
+ ResourceManager
to a new
+ ApplicationMaster
on registration.
+
+ The response contains critical details such as: +
ApplicationACL
s for the application.NodeManager
.
+
+ @return ContainerLaunchContext
for the container to be started
+ by the NodeManager
]]>
+ NodeManager
+ @param context ContainerLaunchContext
for the container to be
+ started by the NodeManager
]]>
+ Note: {@link NMToken} will be used for authenticating communication with + NodeManager.
+ @return the container token to be used for authorization during starting + container. + @see NMToken + @see ContainerManagementProtocol#startContainers(StartContainersRequest)]]> +ApplicationMaster
to the
+ NodeManager
to start a container.
+
+ The ApplicationMaster
has to provide details such as
+ allocated resource capability, security tokens (if enabled), command
+ to be executed to start the container, environment for the process,
+ necessary binaries/jar/shared-objects etc. via the
+ {@link ContainerLaunchContext}.
ApplicationMaster
to the NodeManager
to
+ start containers.
+
+
+
+ In each {@link StartContainerRequest}, the ApplicationMaster
has
+ to provide details such as allocated resource capability, security tokens (if
+ enabled), command to be executed to start the container, environment for the
+ process, necessary binaries/jar/shared-objects etc. via the
+ {@link ContainerLaunchContext}.
+
ContainerId
s of the containers that are
+ started successfully.
+ @see ContainerManagementProtocol#startContainers(StartContainersRequest)]]>
+ NodeManager
.
+
+
+ The meta-data is returned as a Map between the auxiliary service names and
+ their corresponding per service meta-data as an opaque blob
+ ByteBuffer
+
+ To be able to interpret the per-service meta-data, you should consult the + documentation for the Auxiliary-service configured on the NodeManager +
+ + @return a Map between the names of auxiliary services and their + corresponding meta-data]]> +NodeManager
to the
+ ApplicationMaster
when asked to start an allocated
+ container.
+
+
+ @see ContainerManagementProtocol#startContainers(StartContainersRequest)]]>
+ ContainerId
s of containers to be stopped]]>
+ ContainerId
s of the containers to be stopped]]>
+ ApplicationMaster
to the
+ NodeManager
to stop containers.
+
+ @see ContainerManagementProtocol#stopContainers(StopContainersRequest)]]>
+ NodeManager
to the
+ ApplicationMaster
when asked to stop allocated
+ containers.
+
+
+ @see ContainerManagementProtocol#stopContainers(StopContainersRequest)]]>
+ ApplicationSubmissionContext
for the application]]>
+ ApplicationSubmissionContext
for the
+ application]]>
+ ResourceManager
.
+
+ The request, via {@link ApplicationSubmissionContext}, contains
+ details such as queue, {@link Resource} required to run the
+ ApplicationMaster
, the equivalent of
+ {@link ContainerLaunchContext} for launching the
+ ApplicationMaster
etc.
+
+ @see ApplicationClientProtocol#submitApplication(SubmitApplicationRequest)]]>
+
ResourceManager
to a client on
+ application submission.
+
+ Currently, this is empty.
+ + @see ApplicationClientProtocol#submitApplication(SubmitApplicationRequest)]]> +ApplicationAttempId
.
+ @return ApplicationId
of the ApplicationAttempId
]]>
+ Application
.
+ @return attempt id
of the Application
]]>
+ ApplicationAttemptId
denotes the particular attempt
+ of an ApplicationMaster
for a given {@link ApplicationId}.
+
+ Multiple attempts might be needed to run an application to completion due
+ to transient failures of the ApplicationMaster
such as hardware
+ failures, connectivity issues etc. on the node on which it was scheduled.
ApplicationMaster
.
+
+ @return RPC port of this attempt ApplicationMaster
]]>
+ ApplicationMaster
is running.
+
+ @return host on which this attempt of
+ ApplicationMaster
is running]]>
+ ApplicationAttemptId
of the attempt]]>
+ ContainerId
of the attempt]]>
+ ApplicationAttemptReport
is a report of an application attempt.
+
+
+ + It includes details such as: +
ApplicationMaster
of this attempt is
+ running.ApplicationMaster
of this attempt.ResourceManager
.
+ @return short integer identifier of the ApplicationId
]]>
+ ResourceManager
which is
+ used to generate globally unique ApplicationId
.
+ @return start time of the ResourceManager
]]>
+ ApplicationId
represents the globally unique
+ identifier for an application.
+
+ The globally unique nature of the identifier is achieved by using the
+ cluster timestamp i.e. start-time of the
+ ResourceManager
along with a monotonically increasing counter
+ for the application.
ApplicationId
of the application]]>
+ ApplicationAttemptId
of the attempt]]>
+ ApplicationMaster
+ is running.
+ @return host on which the ApplicationMaster
+ is running]]>
+ ApplicationMaster
.
+ @return RPC port of the ApplicationMaster
]]>
+ ApplicationMaster
.
+
+ ClientToAMToken is the security token used by the AMs to verify
+ authenticity of any client
.
+
+ The ResourceManager
, provides a secure token (via
+ {@link ApplicationReport#getClientToAMToken()}) which is verified by the
+ ApplicationMaster when the client directly talks to an AM.
+
ApplicationMaster
]]>
+ YarnApplicationState
of the application]]>
+ ApplicationReport
is a report of an application.
+
+ It includes details such as: +
ApplicationMaster
is running.ApplicationMaster
.Resource
]]>
+ Resource
]]>
+ Resource
]]>
+ ApplicationId
of the submitted application]]>
+ ApplicationId
of the submitted
+ application]]>
+ Priority
of the application]]>
+ Container
with which the ApplicationMaster
is
+ launched.
+ @return ContainerLaunchContext
for the
+ ApplicationMaster
container]]>
+ Container
with which the ApplicationMaster
is
+ launched.
+ @param amContainer ContainerLaunchContext
for the
+ ApplicationMaster
container]]>
+ ApplicationMaster
for
+ this application.]]>
+ ApplicationMaster
+ for this application.]]>
+ LogAggregationContext
of the application]]>
+ ApplicationSubmissionContext
represents all of the
+ information needed by the ResourceManager
to launch
+ the ApplicationMaster
for an application.
+
+ It includes details such as: +
ApplicationMaster
is executed.
+ Resource
allocated to the container]]>
+ Container
was
+ allocated.
+ @return Priority
at which the Container
was
+ allocated]]>
+ ContainerToken
is the security token used by the framework
+ to verify authenticity of any Container
.
The ResourceManager
, on container allocation provides a
+ secure token which is verified by the NodeManager
on
+ container launch.
Applications do not need to care about ContainerToken
, they
+ are transparently handled by the framework - the allocated
+ Container
includes the ContainerToken
.
ContainerToken
for the container]]>
+ Container
represents an allocated resource in the cluster.
+
+
+ The ResourceManager
is the sole authority to allocate any
+ Container
to applications. The allocated Container
+ is always on a single node and has a unique {@link ContainerId}. It has
+ a specific amount of {@link Resource} allocated.
It includes details such as: +
Typically, an ApplicationMaster
receives the
+ Container
from the ResourceManager
during
+ resource-negotiation and then talks to the NodeManager
to
+ start/stop containers.
Container
was assigned.
+
+ Note: If containers are kept alive across application attempts via
+ {@link ApplicationSubmissionContext#setKeepContainersAcrossApplicationAttempts(boolean)}
+ the ContainerId
does not necessarily contain the current
+ running application attempt's ApplicationAttemptId
This
+ container can be allocated by previously exited application attempt and
+ managed by the current running attempt thus have the previous application
+ attempt's ApplicationAttemptId
.
+
ApplicationAttemptId
of the application to which the
+ Container
was assigned]]>
+ getContainerId
instead.
+ @return lower 32 bits of identifier of the ContainerId
]]>
+ ContainerId
]]>
+ ContainerId
represents a globally unique identifier
+ for a {@link Container} in the cluster.]]>
+ LocalResource
required by the container]]>
+ LocalResource
required by the container]]>
+ + This will be used to initialize this application on the specific + {@link AuxiliaryService} running on the NodeManager by calling + {@link AuxiliaryService#initializeApplication(ApplicationInitializationContext)} +
+ + @return application-specific binary service data]]> +ApplicationACL
s]]>
+ ApplicationACL
s for the application]]>
+ ContainerLaunchContext
represents all of the information
+ needed by the NodeManager
to launch a container.
+
+ It includes details such as: +
ContainerId
of the container.]]>
+ Resource
of the container.]]>
+ NodeId
where container is running.]]>
+ Priority
of the container.]]>
+ ContainerState
of the container.]]>
+ exit status
of the container.]]>
+ ContainerReport
is a report of an container.
+
+
+ + It includes details such as: +
Container
.]]>
+ ContainerId
of the container]]>
+ ContainerState
of the container]]>
+ Note: This is valid only for completed containers i.e. containers + with state {@link ContainerState#COMPLETE}. + Otherwise, it returns an ContainerExitStatus.INVALID. +
+ +Containers killed by the framework, either due to being released by + the application or being 'lost' due to node failures etc. have a special + exit code of ContainerExitStatus.ABORTED.
+ +When threshold number of the nodemanager-local-directories or + threshold number of the nodemanager-log-directories become bad, then + container is not launched and is exited with ContainersExitStatus.DISKS_FAILED. +
+ + @return exit status for the container]]> +ContainerStatus
represents the current status of a
+ Container
.
+
+ It provides details such as: +
ContainerId
of the container.ContainerState
of the container.LocalResourceType
of the resource to be localized]]>
+ LocalResourceType
of the resource to be localized]]>
+ LocalResourceVisibility
of the resource to be
+ localized]]>
+ LocalResourceVisibility
of the resource to be
+ localized]]>
+ PATTERN
).
+ @return pattern that should be used to extract entries from the
+ archive.]]>
+ PATTERN
).
+ @param pattern pattern that should be used to extract entries
+ from the archive.]]>
+ LocalResource
represents a local resource required to
+ run a container.
+
+ The NodeManager
is responsible for localizing the resource
+ prior to launching the container.
Applications can specify {@link LocalResourceType} and + {@link LocalResourceVisibility}.
+ + @see LocalResourceType + @see LocalResourceVisibility + @see ContainerLaunchContext + @see ApplicationSubmissionContext + @see ContainerManagementProtocol#startContainers(org.apache.hadoop.yarn.api.protocolrecords.StartContainersRequest)]]> +LocalResourceType
specifies the type
+ of a resource localized by the NodeManager
.
+
+ The type can be one of: +
NodeManager
.
+ LocalResourceVisibility
specifies the visibility
+ of a resource localized by the NodeManager
.
+
+ The visibility can be one of: +
LogAggregationContext
represents all of the
+ information needed by the NodeManager
to handle
+ the logs for an application.
+
+ It includes details such as: +
NodeManager
for which the
+ NMToken is used to authenticate.]]>
+ NodeManager
]]>
+ NodeManager
+ It is issued by ResourceMananger
when ApplicationMaster
+ negotiates resource with ResourceManager
and
+ validated on NodeManager
side.
NodeId
is the unique identifier for a node.
+
+ It includes the hostname and port to uniquely
+ identify the node. Thus, it is unique across restarts of any
+ NodeManager
.
NodeId
of the node]]>
+ NodeState
of the node]]>
+ Resource
on the node.
+ @return used Resource
on the node]]>
+ Resource
on the node.
+ @return total Resource
on the node]]>
+ NodeReport
is a summary of runtime information of a
+ node in the cluster.
+
+ It includes details such as: +
Node
.]]>
+ ResourceManager
.
+ @see AllocateRequest#setAskList(List)]]>
+ ResourceManager
. If the AM prefers a different set of
+ containers, then it may checkpoint or kill containers matching the
+ description in {@link #getResourceRequest}.
+ @return Set of containers at risk if the contract is not met.]]>
+ ApplicationMaster
(AM) can satisfy this request according
+ to its own priorities to prevent containers from being forcibly killed by
+ the platform.
+ @see PreemptionMessage]]>
+ ApplicationMaster
(AM). The AM receives a {@link
+ StrictPreemptionContract} message encoding which containers the platform may
+ forcibly kill, granting it an opportunity to checkpoint state or adjust its
+ execution plan. The message may also include a {@link PreemptionContract}
+ granting the AM more latitude in selecting which resources to return to the
+ cluster.+ +
The AM should decode both parts of the message. The {@link + StrictPreemptionContract} specifies particular allocations that the RM + requires back. The AM can checkpoint containers' state, adjust its execution + plan to move the computation, or take no action and hope that conditions that + caused the RM to ask for the container will change.
+ +
In contrast, the {@link PreemptionContract} also includes a description of + resources with a set of containers. If the AM releases containers matching + that profile, then the containers enumerated in {@link + PreemptionContract#getContainers()} may not be killed.
+ +
Each preemption message reflects the RM's current understanding of the
+ cluster state, so a request to return
+ +
The policy enforced by the RM is part of the scheduler. Generally, only + containers that have been requested consistently should be killed, but the + details are not specified.
]]> +
QueueACL
enumerates the various ACLs for queues.
+
+
+ + The ACL is one of: +
QueueState
of the queue]]>
+ accessible node labels
of the queue]]>
+ ApplicationSubmissionContext
and
+ ResourceRequest
don't specify their
+ NodeLabelExpression
.
+
+ @return default node label expression
of the queue]]>
+ It includes information such as: +
A queue is in one of: +
QueueACL
for the given user]]>
+ QueueUserACLInfo
provides information {@link QueueACL} for
+ the given user.
+
+ @see QueueACL
+ @see ApplicationClientProtocol#getQueueUserAcls(org.apache.hadoop.yarn.api.protocolrecords.GetQueueUserAclsInfoRequest)]]>
+ + The globally unique nature of the identifier is achieved by using the + cluster timestamp i.e. start-time of the {@code ResourceManager} + along with a monotonically increasing counter for the reservation. +
]]> ++ It includes: +
Resource
models a set of computer resources in the
+ cluster.
+
+ Currently it models both memory and CPU.
+ +The unit for memory is megabytes. CPU is modeled with virtual cores + (vcores), a unit for expressing parallelism. A node's capacity should + be configured with virtual cores equal to its number of physical cores. A + container should be requested with the number of cores it can saturate, i.e. + the average number of threads it expects to have runnable at a time.
+ +Virtual cores take integer values and thus currently CPU-scheduling is + very coarse. A complementary axis for CPU requests that represents processing + power will likely be added in the future to enable finer-grained resource + configuration.
+ +Typically, applications request Resource
of suitable
+ capability to run their component tasks.
Priority
of the request]]>
+ Priority
of the request]]>
+ Resource
capability of the request]]>
+ Resource
capability of the request]]>
+ ResourceRequest
.]]>
+ + +
If the flag is off on a rack-level ResourceRequest
,
+ containers at that request's priority will not be assigned to nodes on that
+ request's rack unless requests specifically for those nodes have also been
+ submitted.
+ +
If the flag is off on an {@link ResourceRequest#ANY}-level
+ ResourceRequest
, containers at that request's priority will
+ only be assigned on racks for which specific requests have also been
+ submitted.
+ +
For example, to request a container strictly on a specific node, the + corresponding rack-level and any-level requests should have locality + relaxation set to false. Similarly, to request a container strictly on a + specific rack, the corresponding any-level request should have locality + relaxation set to false.
+
+ @param relaxLocality whether locality relaxation is enabled with this
+ ResourceRequest
.]]>
+
ResourceRequest
represents the request made by an
+ application to the ResourceManager
to obtain various
+ Container
allocations.
+
+ It includes: +
true
,
+ which tells the ResourceManager
if the application wants
+ locality to be loose (i.e. allows fall-through to rack or any)
+ or strict (i.e. specify hard constraint on resource allocation).
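A sketch of a strictly node-local request per the discussion above (priority and capability values are illustrative):

```java
import org.apache.hadoop.yarn.api.records.Priority;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.api.records.ResourceRequest;

public class ResourceRequestSketch {
  static ResourceRequest strictNodeLocal(String host) {
    Priority priority = Priority.newInstance(1);
    Resource capability = Resource.newInstance(1024, 1);
    // relaxLocality=false: only the named host may satisfy this request.
    return ResourceRequest.newInstance(priority, host, capability, 1, false);
  }
}
```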
+ ResourceManager
.
+ @return the set of {@link ContainerId} to be preempted.]]>
+ Token
is the security entity used by the framework
+ to verify authenticity of any resource.]]>
+ URL
represents a serializable {@link java.net.URL}.]]>
+ NodeManager
s in the cluster]]>
+ YarnClusterMetrics
represents cluster metrics.
+
+ Currently only number of NodeManager
s is provided.
+ The reader and writer users/groups pattern that the user can supply is the
+ same as what AccessControlList
takes.
+
+ Primary filters will be used to index the entities in
+ TimelineStore
, such that users should carefully choose the
+ information they want to store as the primary filters. The remaining can be
+ stored as other information.
+
InetSocketAddress
. On a HA cluster,
+ this fetches the address corresponding to the RM identified by
+ {@link #RM_HA_ID}.
+ @param name property name.
+ @param defaultAddress the default value
+ @param defaultPort the default port
+ @return InetSocketAddress]]>
+ + Note: Use {@link DEFAULT_YARN_CROSS_PLATFORM_APPLICATION_CLASSPATH} for + cross-platform practice i.e. submit an application from a Windows client to + a Linux/Unix server or vice versa. +
]]> +The ResourceManager
responds with a new, monotonically
+ increasing, {@link ApplicationId} which is used by the client to submit
+ a new application.
The ResourceManager
also responds with details such
+ as maximum resource capabilities in the cluster as specified in
+ {@link GetNewApplicationResponse}.
ApplicationId
+ @return response containing the new ApplicationId
to be used
+ to submit an application
+ @throws YarnException
+ @throws IOException
+ @see #submitApplication(SubmitApplicationRequest)]]>
+ ResourceManager.
+
+ The client is required to provide details such as queue,
+ {@link Resource} required to run the ApplicationMaster
,
+ the equivalent of {@link ContainerLaunchContext} for launching
+ the ApplicationMaster
etc. via the
+ {@link SubmitApplicationRequest}.
Currently the ResourceManager
sends an immediate (empty)
+ {@link SubmitApplicationResponse} on accepting the submission and throws
+ an exception if it rejects the submission. However, this call needs to be
+ followed by {@link #getApplicationReport(GetApplicationReportRequest)}
+ to make sure that the application gets properly submitted - obtaining a
+ {@link SubmitApplicationResponse} from ResourceManager doesn't guarantee
+ that RM 'remembers' this application beyond failover or restart. If RM
+ failover or RM restart happens before ResourceManager saves the
+ application's state successfully, the subsequent
+ {@link #getApplicationReport(GetApplicationReportRequest)} will throw
+ a {@link ApplicationNotFoundException}. The Clients need to re-submit
+ the application with the same {@link ApplicationSubmissionContext} when
+ it encounters the {@link ApplicationNotFoundException} on the
+ {@link #getApplicationReport(GetApplicationReportRequest)} call.
During the submission process, it checks whether the application + already exists. If the application exists, it will simply return + SubmitApplicationResponse
+ + In secure mode,the ResourceManager
verifies access to
+ queues etc. before accepting the application submission.
ResourceManager
to abort submitted application.
+
+ The client, via {@link KillApplicationRequest} provides the + {@link ApplicationId} of the application to be aborted.
+ + In secure mode,the ResourceManager
verifies access to the
+ application, queue etc. before terminating the application.
Currently, the ResourceManager
returns an empty response
+ on success and throws an exception on rejecting the request.
ResourceManager
returns an empty response
+ on success and throws an exception on rejecting the request
+ @throws YarnException
+ @throws IOException
+ @see #getQueueUserAcls(GetQueueUserAclsInfoRequest)]]>
+ ResourceManager
.
+
+ The ResourceManager
responds with a
+ {@link GetClusterMetricsResponse} which includes the
+ {@link YarnClusterMetrics} with details such as number of current
+ nodes in the cluster.
ResourceManager
.
+
+ The ResourceManager
responds with a
+ {@link GetClusterNodesResponse} which includes the
+ {@link NodeReport} for all the nodes in the cluster.
ResourceManager
.
+
+ The client, via {@link GetQueueInfoRequest}, can ask for details such + as used/total resources, child queues, running applications etc.
+ + In secure mode,the ResourceManager
verifies access before
+ providing the information.
ResourceManager
.
+
+
+ The ResourceManager
responds with queue acls for all
+ existing queues.
+ The client packages all details of its request in a + {@link ReservationSubmissionRequest} object. This contains information + about the amount of capacity, temporal constraints, and concurrency needs. + Furthermore, the reservation might be composed of multiple stages, with + ordering dependencies among them. +
+ ++ In order to respond, a new admission control component in the + {@code ResourceManager} performs an analysis of the resources that have + been committed over the period of time the user is requesting, verify that + the user requests can be fulfilled, and that it respect a sharing policy + (e.g., {@code CapacityOverTimePolicy}). Once it has positively determined + that the ReservationSubmissionRequest is satisfiable the + {@code ResourceManager} answers with a + {@link ReservationSubmissionResponse} that include a non-null + {@link ReservationId}. Upon failure to find a valid allocation the response + is an exception with the reason. + + On application submission the client can use this {@link ReservationId} to + obtain access to the reserved resources. +
+ ++ The system guarantees that during the time-range specified by the user, the + reservationID will be corresponding to a valid reservation. The amount of + capacity dedicated to such queue can vary overtime, depending of the + allocation that has been determined. But it is guaranteed to satisfy all + the constraint expressed by the user in the + {@link ReservationSubmissionRequest}. +
+ + @param request the request to submit a new Reservation + @return response the {@link ReservationId} on accepting the submission + @throws YarnException if the request is invalid or reservation cannot be + created successfully + @throws IOException]]> ++ The allocation is attempted by virtually substituting all previous + allocations related to this Reservation with new ones, that satisfy the new + {@link ReservationUpdateRequest}. Upon success the previous allocation is + substituted by the new one, and on failure (i.e., if the system cannot find + a valid allocation for the updated request), the previous allocation + remains valid. + + The {@link ReservationId} is not changed, and applications currently + running within this reservation will automatically receive the resources + based on the new allocation. +
+ + @param request to update an existing Reservation (the ReservationRequest + should refer to an existing valid {@link ReservationId}) + @return response empty on successfully updating the existing reservation + @throws YarnException if the request is invalid or reservation cannot be + updated successfully + @throws IOException]]> +ResourceManager
+ to submit/abort jobs and to get information on applications, cluster metrics,
+ nodes, queues and ACLs.]]>
+ ApplicationHistoryServer
to
+ get the information of completed applications etc.
+ ]]>
+ ApplicationMaster
to register with
+ the ResourceManager
.
+
+
+
+ The ApplicationMaster
needs to provide details such as RPC
+ Port, HTTP tracking url etc. as specified in
+ {@link RegisterApplicationMasterRequest}.
+
+ The ResourceManager
responds with critical details such as
+ maximum resource capabilities in the cluster as specified in
+ {@link RegisterApplicationMasterResponse}.
+
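+ <p>
+ A minimal sketch (not part of this javadoc) of registration through the
+ {@code AMRMClient} library, which wraps this protocol call; the host, port
+ and tracking URL values are hypothetical.
+ </p>
+ <pre>
+ {@code
+ AMRMClient<ContainerRequest> rmClient = AMRMClient.createAMRMClient();
+ rmClient.init(conf);
+ rmClient.start();
+
+ // Register this ApplicationMaster and read back the cluster's
+ // maximum resource capability from the response.
+ RegisterApplicationMasterResponse registration =
+     rmClient.registerApplicationMaster("am-host", 0, "");
+ Resource maxCapability = registration.getMaximumResourceCapability();
+ }
+ </pre>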
ApplicationMaster
to notify the
+ ResourceManager
about its completion (success or failure).
+
+ The ApplicationMaster
has to provide details such as
+ final state, diagnostics (in case of failures) etc. as specified in
+ {@link FinishApplicationMasterRequest}.
The ResourceManager
responds with
+ {@link FinishApplicationMasterResponse}.
ApplicationMaster
and the
+ ResourceManager
.
+
+
+
+ The ApplicationMaster
uses this interface to provide a list of
+ {@link ResourceRequest} and returns unused {@link Container} allocated to
+ it via {@link AllocateRequest}. Optionally, the
+ ApplicationMaster
can also blacklist resources which
+ it doesn't want to use.
+
+ This also doubles up as a heartbeat to let the
+ ResourceManager
know that the ApplicationMaster
+ is alive. Thus, applications should periodically make this call to be kept
+ alive. The frequency depends on
+ {@link YarnConfiguration#RM_AM_EXPIRY_INTERVAL_MS} which defaults to
+ {@link YarnConfiguration#DEFAULT_RM_AM_EXPIRY_INTERVAL_MS}.
+
+ The ResourceManager
responds with list of allocated
+ {@link Container}, status of completed containers and headroom information
+ for the application.
+
+ The ApplicationMaster
can use the available headroom
+ (resources) to decide how to utilize allocated resources and make informed
+ decisions about future resource requests.
+
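+ <p>
+ A minimal sketch (not part of this javadoc) of the heartbeat loop as it
+ looks through {@code AMRMClient}; the {@code done} flag, {@code progress}
+ value and sleep interval are illustrative assumptions.
+ </p>
+ <pre>
+ {@code
+ while (!done) {
+   // Doubles as the heartbeat; must be called well within the AM expiry interval.
+   AllocateResponse response = rmClient.allocate(progress);
+   for (Container allocated : response.getAllocatedContainers()) {
+     // hand the new container to a launcher thread
+   }
+   for (ContainerStatus completed : response.getCompletedContainersStatuses()) {
+     // account for finished work and update progress
+   }
+   Thread.sleep(1000);
+ }
+ }
+ </pre>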
ApplicationMaster
+ and the ResourceManager
.
+
+ This is used by the ApplicationMaster
to register/unregister
+ and to request and obtain resources in the cluster from the
+ ResourceManager
.
SharedCacheManager.
The client uses a checksum to identify the
+ resource and an {@link ApplicationId} to identify which application will be
+ using the resource.
+
+
+
+ The SharedCacheManager
responds with whether or not the
+ resource exists in the cache. If the resource exists, a Path
+ to the resource in the shared cache is returned. If the resource does not
+ exist, the response is empty.
+
SharedCacheManager.
This method is called once an application
+ is no longer using a claimed resource in the shared cache. The client uses
+ a checksum to identify the resource and an {@link ApplicationId} to
+ identify which application is releasing the resource.
+
+
+ + Note: This method is an optimization and the client is not required to call + it for correctness. +
+ +
+ Currently the SharedCacheManager
sends an empty response.
+
SharedCacheManager
to claim
+ and release resources in the shared cache.
+ ]]>
+ ApplicationMaster
provides a list of
+ {@link StartContainerRequest}s to a NodeManager
to
+ start {@link Container}s allocated to it using this interface.
+
+
+
+ The ApplicationMaster
has to provide details such as allocated
+ resource capability, security tokens (if enabled), command to be executed
+ to start the container, environment for the process, necessary
+ binaries/jar/shared-objects etc. via the {@link ContainerLaunchContext} in
+ the {@link StartContainerRequest}.
+
+ The NodeManager
sends a response via
+ {@link StartContainersResponse} which includes a list of
+ successfully launched {@link Container}s, a
+ containerId-to-exception map for each failed {@link StartContainerRequest} in
+ which the exception indicates per-container errors, and an
+ allServicesMetaData map between the names of auxiliary services and their
+ corresponding meta-data. Note: Non-container-specific exceptions will
+ still be thrown by the API method itself.
+
+ The ApplicationMaster
can use
+ {@link #getContainerStatuses(GetContainerStatusesRequest)} to get updated
+ statuses of the to-be-launched or launched containers.
+
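+ <p>
+ A minimal sketch (not part of this javadoc) of building and sending a
+ start-container request; {@code localResources}, {@code environment},
+ {@code acls}, {@code allTokens}, {@code container} and {@code cmProxy}
+ are assumed to exist already.
+ </p>
+ <pre>
+ {@code
+ ContainerLaunchContext ctx = ContainerLaunchContext.newInstance(
+     localResources, environment,
+     Collections.singletonList("/bin/sh -c 'echo hello'"),
+     null, allTokens, acls);
+ StartContainerRequest start =
+     StartContainerRequest.newInstance(ctx, container.getContainerToken());
+ StartContainersResponse response = cmProxy.startContainers(
+     StartContainersRequest.newInstance(Collections.singletonList(start)));
+ // Per-service meta-data of the auxiliary services, as described above.
+ Map<String, ByteBuffer> servicesMetaData = response.getAllServicesMetaData();
+ }
+ </pre>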
ApplicationMaster
requests a NodeManager
to
+ stop a list of {@link Container}s allocated to it using this
+ interface.
+
+
+
+ The ApplicationMaster
sends a {@link StopContainersRequest}
+ which includes the {@link ContainerId}s of the containers to be stopped.
+
+ The NodeManager
sends a response via
+ {@link StopContainersResponse} which includes a list of {@link ContainerId}
+ s of successfully stopped containers, a containerId-to-exception map for
+ each failed request in which the exception indicates per-container
+ errors. Note: Non-container-specific exceptions will still be thrown by
+ the API method itself. ApplicationMaster
can use
+ {@link #getContainerStatuses(GetContainerStatusesRequest)} to get updated
+ statuses of the containers.
+
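+ <p>
+ A minimal sketch (not part of this javadoc) of stopping containers and
+ inspecting both halves of the response; {@code containerIds} and
+ {@code cmProxy} are assumptions carried over from the start step.
+ </p>
+ <pre>
+ {@code
+ StopContainersResponse stopped = cmProxy.stopContainers(
+     StopContainersRequest.newInstance(containerIds));
+ List<ContainerId> succeeded = stopped.getSuccessfullyStoppedContainers();
+ Map<ContainerId, SerializedException> failed = stopped.getFailedRequests();
+ }
+ </pre>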
ApplicationMaster
to request for current
+ statuses of Container
s from the NodeManager
.
+
+
+
+ The ApplicationMaster
sends a
+ {@link GetContainerStatusesRequest} which includes the {@link ContainerId}s
+ of all containers whose statuses are needed.
+
+ The NodeManager
responds with
+ {@link GetContainerStatusesResponse} which includes a list of
+ {@link ContainerStatus} of the successfully queried containers and a
+ containerId-to-exception map for each failed request in which the exception
+ indicates per-container errors. Note: Non-container-specific
+ exceptions will still be thrown by the API method itself.
+
ContainerStatus
es of containers with
+ the specified ContainerId
s
+ @return response containing the list of ContainerStatus
of the
+ successfully queried containers and a containerId-to-exception map
+ for failed requests.
+
+ @throws YarnException
+ @throws IOException]]>
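+ <p>
+ A minimal sketch (not part of this javadoc) of querying statuses;
+ {@code containerIds} and {@code cmProxy} are assumed from earlier steps.
+ </p>
+ <pre>
+ {@code
+ GetContainerStatusesResponse statuses = cmProxy.getContainerStatuses(
+     GetContainerStatusesRequest.newInstance(containerIds));
+ for (ContainerStatus status : statuses.getContainerStatuses()) {
+   // inspect status.getState() and, for completed containers,
+   // status.getExitStatus()
+ }
+ }
+ </pre>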
+ ApplicationMaster
and a
+ NodeManager
to start/stop containers and to get status
+ of running containers.
+
+ If security is enabled the NodeManager
verifies that the
+ ApplicationMaster
has truly been allocated the container
+ by the ResourceManager
and also verifies all interactions such
+ as stopping the container or obtaining status information for the container.
+
ResourceManager
about the application's resource requirements.
+ @return the list of ResourceRequest
+ @see ResourceRequest]]>
+ ResourceManager
about the application's resource requirements.
+ @param resourceRequests list of ResourceRequest
to update the
+ ResourceManager
about the application's
+ resource requirements
+ @see ResourceRequest]]>
+ ApplicationMaster
.
+ @return list of ContainerId
of containers being
+ released by the ApplicationMaster
]]>
+ ApplicationMaster
+ @param releaseContainers list of ContainerId
of
+ containers being released by the
+ ApplicationMaster
]]>
+ ApplicationMaster
.
+ @return the ResourceBlacklistRequest
being sent by the
+ ApplicationMaster
+ @see ResourceBlacklistRequest]]>
+ ResourceManager
about the blacklist additions and removals
+ per the ApplicationMaster
.
+
+ @param resourceBlacklistRequest the ResourceBlacklistRequest
+ to inform the ResourceManager
about
+ the blacklist additions and removals
+ per the ApplicationMaster
+ @see ResourceBlacklistRequest]]>
+ ApplicationMaster
]]>
+ ResourceManager
that some containers' resources need to be
+ increased]]>
+ ApplicationMaster
to the
+ ResourceManager
to obtain resources in the cluster.
+
+ The request includes: +
ResourceManager
about the application's
+ resource requirements.
+ ApplicationMaster
to take some action then it will send an
+ AMCommand to the ApplicationMaster
. See AMCommand
+ for details on commands and actions for them.
+ @return AMCommand
if the ApplicationMaster
should
+ take action, null
otherwise
+ @see AMCommand]]>
+ Container
by the
+ ResourceManager
.
+ @return list of newly allocated Container
]]>
+ NodeReport
s. Updates could
+ be changes in health, availability etc of the nodes.
+ @return The delta of updated nodes since the last response]]>
+ + AM will receive one NMToken per NM irrespective of the number of containers + issued on same NM. AM is expected to store these tokens until issued a + new token for the same NM.]]> +
ApplicationMaster
during resource negotiation.
+ + The response, includes: +
ApplicationMaster
.
+ @return final state of the ApplicationMaster
]]>
+ ApplicationMaster
+ @param finalState final state of the ApplicationMaster
]]>
+ ApplicationMaster
.
+ If this URL contains a scheme then it will be used by the ResourceManager
+ web application proxy; otherwise it will default to http.
+ @return tracking URL for the ApplicationMaster
]]>
+ ApplicationMaster
.
+ This is the web-URL to which ResourceManager or web-application proxy will
+ redirect client/users once the application is finished and the
+ ApplicationMaster
is gone.
+ + If the passed url has a scheme then that will be used by the + ResourceManager and web-application proxy, otherwise the scheme will + default to http. +
++ Empty, null, "N/A" strings are all valid besides a real URL. In case an url + isn't explicitly passed, it defaults to "N/A" on the ResourceManager. +
+
+ @param url
+ tracking URL for the ApplicationMaster
]]>
+
ApplicationMaster
on its completion.
+ + The response, includes: +
+ Note: The flag indicates whether the application has successfully + unregistered and is safe to stop. The application may stop after the flag is + true. If the application stops before the flag is true then the RM may retry + the application. + + @see ApplicationMasterProtocol#finishApplicationMaster(FinishApplicationMasterRequest)]]> +
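+ <p>
+ A minimal sketch (not part of this javadoc) of unregistration through
+ {@code AMRMClient}; the final status and message are illustrative.
+ </p>
+ <pre>
+ {@code
+ rmClient.unregisterApplicationMaster(
+     FinalApplicationStatus.SUCCEEDED, "all work complete", null);
+ rmClient.stop();
+ }
+ </pre>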
ApplicationAttemptId
of an application attempt]]>
+ ApplicationAttemptId
of an application attempt]]>
+ ResourceManager
to get an
+ {@link ApplicationAttemptReport} for an application attempt.
+
+
+ + The request should include the {@link ApplicationAttemptId} of the + application attempt. +
+ + @see ApplicationAttemptReport + @see ApplicationHistoryProtocol#getApplicationAttemptReport(GetApplicationAttemptReportRequest)]]> +ApplicationAttemptReport
for the application attempt]]>
+ ApplicationAttemptReport
for the application attempt]]>
+ ResourceManager
to a client requesting
+ an application attempt report.
+
+
+ + The response includes an {@link ApplicationAttemptReport} which has the + details about the particular application attempt +
+ + @see ApplicationAttemptReport + @see ApplicationHistoryProtocol#getApplicationAttemptReport(GetApplicationAttemptReportRequest)]]> +ApplicationId
of an application]]>
+ ApplicationId
of an application]]>
+ ResourceManager
.
+
+
+ @see ApplicationHistoryProtocol#getApplicationAttempts(GetApplicationAttemptsRequest)]]>
+ ApplicationReport
of an application]]>
+ ApplicationReport
of an application]]>
+ ResourceManager
to a client requesting
+ a list of {@link ApplicationAttemptReport} for application attempts.
+
+
+
+ The ApplicationAttemptReport
for each application includes the
+ details of an application attempt.
+
ApplicationId
of the application]]>
+ ApplicationId
of the application]]>
+ ResourceManager
to
+ get an {@link ApplicationReport} for an application.
+
+ The request should include the {@link ApplicationId} of the + application.
+ + @see ApplicationClientProtocol#getApplicationReport(GetApplicationReportRequest) + @see ApplicationReport]]> +ApplicationReport
for the application]]>
+ ResourceManager
to a client
+ requesting an application report.
+
+ The response includes an {@link ApplicationReport} which has details such
+ as user, queue, name, host on which the ApplicationMaster
is
+ running, RPC port, tracking URL, diagnostics, start time etc.
ResourceManager
.
+
+
+ @see ApplicationClientProtocol#getApplications(GetApplicationsRequest)
+
+ <p>Setting any of the parameters to null would simply disable that
+ filter</p>
+ + @param scope {@link ApplicationsRequestScope} to filter by + @param users list of users to filter by + @param queues list of scheduler queues to filter by + @param applicationTypes types of applications + @param applicationTags application tags to filter by + @param applicationStates application states to filter by + @param startRange range of application start times to filter by + @param finishRange range of application finish times to filter by + @param limit number of applications to limit to + @return {@link GetApplicationsRequest} to be used with + {@link ApplicationClientProtocol#getApplications(GetApplicationsRequest)}]]> +ResourceManager
.
+
+
+ @param scope {@link ApplicationsRequestScope} to filter by
+ @see ApplicationClientProtocol#getApplications(GetApplicationsRequest)]]>
+ ResourceManager
.
+
+
+
+ @see ApplicationClientProtocol#getApplications(GetApplicationsRequest)]]>
+ ResourceManager
.
+
+
+
+ @see ApplicationClientProtocol#getApplications(GetApplicationsRequest)]]>
+ ResourceManager
.
+
+
+
+ @see ApplicationClientProtocol#getApplications(GetApplicationsRequest)]]>
+ ResourceManager
.
+
+ @see ApplicationClientProtocol#getApplications(GetApplicationsRequest)]]>
+ ApplicationReport
for applications]]>
+ ResourceManager
to a client
+ requesting an {@link ApplicationReport} for applications.
+
+ The ApplicationReport
for each application includes details
+ such as user, queue, name, host on which the ApplicationMaster
+ is running, RPC port, tracking URL, diagnostics, start time etc.
ResourceManager
.
+
+ Currently, this is empty.
+ + @see ApplicationClientProtocol#getClusterMetrics(GetClusterMetricsRequest)]]> +YarnClusterMetrics
for the cluster]]>
+ ResourceManager
.
+
+ The request will ask for all nodes in the given {@link NodeState}s.
+
+ @see ApplicationClientProtocol#getClusterNodes(GetClusterNodesRequest)]]>
+ NodeReport
for all nodes in the cluster]]>
+ ResourceManager
to a client
+ requesting a {@link NodeReport} for all nodes.
+
+ The NodeReport
contains per-node information such as
+ available resources, number of containers, tracking url, rack name, health
+ status etc.
+
+ @see NodeReport
+ @see ApplicationClientProtocol#getClusterNodes(GetClusterNodesRequest)]]>
+
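+ <p>
+ A minimal sketch (not part of this javadoc) of listing nodes through
+ {@code YarnClient} instead of the raw protocol records.
+ </p>
+ <pre>
+ {@code
+ List<NodeReport> nodes = yarnClient.getNodeReports(NodeState.RUNNING);
+ for (NodeReport node : nodes) {
+   System.out.println(node.getNodeId() + " capacity=" + node.getCapability());
+ }
+ }
+ </pre>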
ContainerId
of the Container]]>
+ ContainerId
of the container]]>
+ ResourceManager
to get an
+ {@link ContainerReport} for a container.
+ ]]>
+ ContainerReport
for the container]]>
+ ResourceManager
to a client requesting
+ a container report.
+
+
+ + The response includes a {@link ContainerReport} which has details of a + container. +
]]> +ApplicationAttemptId
of an application attempt]]>
+ ApplicationAttemptId
of an application attempt]]>
+ ResourceManager
.
+
+
+ @see ApplicationHistoryProtocol#getContainers(GetContainersRequest)]]>
+ ContainerReport
for all the containers of an
+ application attempt]]>
+ ContainerReport
for all the containers of
+ an application attempt]]>
+ ResourceManager
to a client requesting
+ a list of {@link ContainerReport} for containers.
+
+
+
+ The ContainerReport
for each container includes the container
+ details.
+
ContainerStatus
.
+
+ @return the list of ContainerId
s of containers for which to
+ obtain the ContainerStatus
.]]>
+ ContainerStatus
+
+ @param containerIds
+ a list of ContainerId
s of containers for which to
+ obtain the ContainerStatus
]]>
+ NodeManager
to get {@link ContainerStatus} of requested
+ containers.
+
+ @see ContainerManagementProtocol#getContainerStatuses(GetContainerStatusesRequest)]]>
+ ContainerStatus
es of the requested containers.]]>
+ ApplicationMaster
when asked to obtain the
+ ContainerStatus
of requested containers.
+
+ @see ContainerManagementProtocol#getContainerStatuses(GetContainerStatusesRequest)]]>
+ Currently, this is empty.
+ + @see ApplicationClientProtocol#getNewApplication(GetNewApplicationRequest)]]> +ApplicationId
allocated by the
+ ResourceManager
.
+ @return new ApplicationId
allocated by the
+ ResourceManager
]]>
+ ResourceManager
to the client for
+ a request to get a new {@link ApplicationId} for submitting applications.
+
+ Clients can submit an application with the returned + {@link ApplicationId}.
+ + @see ApplicationClientProtocol#getNewApplication(GetNewApplicationRequest)]]> +true
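+ <p>
+ A minimal sketch (not part of this javadoc) of obtaining a new application
+ id through {@code YarnClient}, which wraps this request/response pair.
+ </p>
+ <pre>
+ {@code
+ YarnClientApplication app = yarnClient.createApplication();
+ GetNewApplicationResponse newApp = app.getNewApplicationResponse();
+ ApplicationId appId = newApp.getApplicationId();
+ Resource maxResources = newApp.getMaximumResourceCapability();
+ }
+ </pre>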
if applications' information is to be included,
+ else false
]]>
+ true
if information about child queues is required,
+ else false
]]>
+ true
if information about entire hierarchy is
+ required, false
otherwise]]>
+ ResourceManager
.
+
+ @see ApplicationClientProtocol#getQueueInfo(GetQueueInfoRequest)]]>
+ QueueInfo
for the specified queue]]>
+ ResourceManager
to
+ get queue acls for the current user.
+
+ Currently, this is empty.
+ + @see ApplicationClientProtocol#getQueueUserAcls(GetQueueUserAclsInfoRequest)]]> +QueueUserACLInfo
per queue for the user]]>
+ ResourceManager
to clients
+ seeking queue acls for the user.
+
+ The response contains a list of {@link QueueUserACLInfo} which + provides information about {@link QueueACL} per queue.
+ + @see QueueACL + @see QueueUserACLInfo + @see ApplicationClientProtocol#getQueueUserAcls(GetQueueUserAclsInfoRequest)]]> +ApplicationId
of the application to be aborted]]>
+ ResourceManager
+ to abort a submitted application.
+
+ The request includes the {@link ApplicationId} of the application to be + aborted.
+ + @see ApplicationClientProtocol#forceKillApplication(KillApplicationRequest)]]> ++ The response, includes: +
ResourceManager
crashes before the process of killing the
+ application is completed, the ResourceManager
may retry this
+ application on recovery.
+
+ @see ApplicationClientProtocol#forceKillApplication(KillApplicationRequest)]]>
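+ <p>
+ A minimal sketch (not part of this javadoc) of the same operation through
+ {@code YarnClient}, assuming an initialized client and a known {@code appId}.
+ </p>
+ <pre>
+ {@code
+ yarnClient.killApplication(appId);
+ }
+ </pre>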
+ ApplicationId
of the application to be moved]]>
+ ApplicationId
of the application to be moved]]>
+ ResourceManager
+ to move a submitted application to a different queue.
+
+ The request includes the {@link ApplicationId} of the application to be + moved and the queue to place it in.
+ + @see ApplicationClientProtocol#moveApplicationAcrossQueues(MoveApplicationAcrossQueuesRequest)]]> +ResourceManager
to the client moving
+ a submitted application to a different queue.
+
+ + A response without exception means that the move has completed successfully. +
+ + @see ApplicationClientProtocol#moveApplicationAcrossQueues(MoveApplicationAcrossQueuesRequest)]]> +RegisterApplicationMasterRequest
]]>
+ ApplicationMaster
is
+ running.
+ @return host on which the ApplicationMaster
is running]]>
+ ApplicationMaster
is
+ running.
+ @param host host on which the ApplicationMaster
+ is running]]>
+ ApplicationMaster
.
+ If this URL contains a scheme then it will be used by the ResourceManager
+ web application proxy; otherwise it will default to http.
+ @return tracking URL for the ApplicationMaster
]]>
+ ApplicationMaster
while
+ it is running. This is the web-URL to which ResourceManager or
+ web-application proxy will redirect client/users while the application and
+ the ApplicationMaster
are still running.
+ + If the passed url has a scheme then that will be used by the + ResourceManager and web-application proxy, otherwise the scheme will + default to http. +
++ Empty, null, "N/A" strings are all valid besides a real URL. In case an url + isn't explicitly passed, it defaults to "N/A" on the ResourceManager. +
+
+ @param trackingUrl
+ tracking URLfor the ApplicationMaster
]]>
+
ApplicationACL
s]]>
+ The ClientToAMToken master key is sent to ApplicationMaster
+ by ResourceManager
via {@link RegisterApplicationMasterResponse}
+ and is used to verify the corresponding ClientToAMToken.
]]> +
]]> +
ResourceManager
from previous application attempts.
+
+
+ @return the list of running containers as viewed by
+ ResourceManager
from previous application attempts
+ @see RegisterApplicationMasterResponse#getNMTokensFromPreviousAttempts()]]>
+ ApplicationId
]]>
+ ApplicationId
]]>
+ key
]]>
+ SharedCacheManager
when
+ releasing a resource in the shared cache.
+
+
+ + Currently, this is empty. +
]]> +NodeManager
.
+
+ @return ContainerLaunchContext
for the container to be started
+ by the NodeManager
]]>
+ NodeManager
+ @param context ContainerLaunchContext
for the container to be
+ started by the NodeManager
]]>
+ ApplicationMaster
to the
+ NodeManager
to start a container.
+
+ The ApplicationMaster
has to provide details such as
+ allocated resource capability, security tokens (if enabled), command
+ to be executed to start the container, environment for the process,
+ necessary binaries/jar/shared-objects etc. via the
+ {@link ContainerLaunchContext}.
ApplicationMaster
to the NodeManager
to
+ start containers.
+
+
+
+ In each {@link StartContainerRequest}, the ApplicationMaster
has
+ to provide details such as allocated resource capability, security tokens (if
+ enabled), command to be executed to start the container, environment for the
+ process, necessary binaries/jar/shared-objects etc. via the
+ {@link ContainerLaunchContext}.
+
ContainerId
s of the containers that are
+ started successfully.
+ @see ContainerManagementProtocol#startContainers(StartContainersRequest)]]>
+ NodeManager
.
+
+
+ The meta-data is returned as a Map between the auxiliary service names and
+ their corresponding per-service meta-data as an opaque blob
+ ByteBuffer
+
+ To be able to interpret the per-service meta-data, you should consult the + documentation for the Auxiliary-service configured on the NodeManager +
+ + @return a Map between the names of auxiliary services and their + corresponding meta-data]]> +NodeManager
to the
+ ApplicationMaster
when asked to start an allocated
+ container.
+
+
+ @see ContainerManagementProtocol#startContainers(StartContainersRequest)]]>
+ ContainerId
s of containers to be stopped]]>
+ ContainerId
s of the containers to be stopped]]>
+ ApplicationMaster
to the
+ NodeManager
to stop containers.
+
+ @see ContainerManagementProtocol#stopContainers(StopContainersRequest)]]>
+ NodeManager
to the
+ ApplicationMaster
when asked to stop allocated
+ containers.
+
+
+ @see ContainerManagementProtocol#stopContainers(StopContainersRequest)]]>
+ ApplicationSubmissionContext
for the application]]>
+ ApplicationSubmissionContext
for the
+ application]]>
+ ResourceManager
.
+
+ The request, via {@link ApplicationSubmissionContext}, contains
+ details such as queue, {@link Resource} required to run the
+ ApplicationMaster
, the equivalent of
+ {@link ContainerLaunchContext} for launching the
+ ApplicationMaster
etc.
+
+ @see ApplicationClientProtocol#submitApplication(SubmitApplicationRequest)]]>
+
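+ <p>
+ A minimal sketch (not part of this javadoc) of filling in the submission
+ context through {@code YarnClient}; the application name, queue and sizes
+ are hypothetical, and {@code amContainerLaunchContext} is assumed to be
+ built elsewhere.
+ </p>
+ <pre>
+ {@code
+ ApplicationSubmissionContext appContext =
+     app.getApplicationSubmissionContext();
+ appContext.setApplicationName("my-app");
+ appContext.setQueue("default");
+ appContext.setResource(Resource.newInstance(1024, 1));
+ appContext.setAMContainerSpec(amContainerLaunchContext);
+ yarnClient.submitApplication(appContext);
+ }
+ </pre>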
ResourceManager
to a client on
+ application submission.
+
+ Currently, this is empty.
+ + @see ApplicationClientProtocol#submitApplication(SubmitApplicationRequest)]]> +ApplicationId
]]>
+ ApplicationId
]]>
+ key
]]>
+ SharedCacheManager
that claims a
+ resource in the shared cache.
+ ]]>
+ Path
if the resource exists in the shared
+ cache, null
otherwise]]>
+ Path
corresponding to a resource in the shared
+ cache]]>
+ ApplicationAttempId
.
+ @return ApplicationId
of the ApplicationAttempId
]]>
+ Application
.
+ @return attempt id
of the Application
]]>
+ ApplicationAttemptId
denotes the particular attempt
+ of an ApplicationMaster
for a given {@link ApplicationId}.
+
+ Multiple attempts might be needed to run an application to completion due
+ to temporal failures of the ApplicationMaster
such as hardware
+ failures, connectivity issues etc. on the node on which it was scheduled.
ApplicationMaster
.
+
+ @return RPC port of this attempt ApplicationMaster
]]>
+ ApplicationMaster
is running.
+
+ @return host on which this attempt of
+ ApplicationMaster
is running]]>
+ ApplicationAttemptId
of the attempt]]>
+ ContainerId
of the attempt]]>
+ ApplicationMaster
of this attempt is
+ running.ApplicationMaster
of this attempt.ResourceManager
.
+ @return short integer identifier of the ApplicationId
]]>
+ ResourceManager
which is
+ used to generate globally unique ApplicationId
.
+ @return start time of the ResourceManager
]]>
+ ApplicationId
represents the globally unique
+ identifier for an application.
+
+ The globally unique nature of the identifier is achieved by using the
+ cluster timestamp i.e. start-time of the
+ ResourceManager
along with a monotonically increasing counter
+ for the application.
ApplicationId
of the application]]>
+ ApplicationAttemptId
of the attempt]]>
+ ApplicationMaster
+ is running.
+ @return host on which the ApplicationMaster
+ is running]]>
+ ApplicationMaster
.
+ @return RPC port of the ApplicationMaster
]]>
+ ApplicationMaster
.
+
+ ClientToAMToken is the security token used by the AMs to verify
+ authenticity of any client
.
+
+ The ResourceManager
provides a secure token (via
+ {@link ApplicationReport#getClientToAMToken()}) which is verified by the
+ ApplicationMaster when the client directly talks to an AM.
+
ApplicationMaster
]]>
+ YarnApplicationState
of the application]]>
+ + The AMRM token will be returned only if all the following conditions are + met: +
ApplicationMaster
is running.ApplicationMaster
.Resource
]]>
+ Resource
]]>
+ Resource
]]>
+ ApplicationId
of the submitted application]]>
+ ApplicationId
of the submitted
+ application]]>
+ Priority
of the application]]>
+ Container
with which the ApplicationMaster
is
+ launched.
+ @return ContainerLaunchContext
for the
+ ApplicationMaster
container]]>
+ Container
with which the ApplicationMaster
is
+ launched.
+ @param amContainer ContainerLaunchContext
for the
+ ApplicationMaster
container]]>
+ ApplicationMaster
for
+ this application.]]>
+ ApplicationMaster
+ for this application.]]>
+ LogAggregationContext
of the application]]>
+ ApplicationMaster
is executed.
+ Resource
allocated to the container]]>
+ Container
was
+ allocated.
+ @return Priority
at which the Container
was
+ allocated]]>
+ ContainerToken
is the security token used by the framework
+ to verify authenticity of any Container
.
The ResourceManager
, on container allocation provides a
+ secure token which is verified by the NodeManager
on
+ container launch.
Applications do not need to care about ContainerToken
, they
+ are transparently handled by the framework - the allocated
+ Container
includes the ContainerToken
.
ContainerToken
for the container]]>
+ + It includes details such as: +
Container
was assigned.
+
+ Note: If containers are kept alive across application attempts via
+ {@link ApplicationSubmissionContext#setKeepContainersAcrossApplicationAttempts(boolean)}
+ the ContainerId
does not necessarily contain the current
+ running application attempt's ApplicationAttemptId
This
+ container can be allocated by previously exited application attempt and
+ managed by the current running attempt thus have the previous application
+ attempt's ApplicationAttemptId
.
+
ApplicationAttemptId
of the application to which the
+ Container
was assigned]]>
+ getContainerId
instead.
+ @return lower 32 bits of identifier of the ContainerId
]]>
+ ContainerId
]]>
+ ContainerId
represents a globally unique identifier
+ for a {@link Container} in the cluster.]]>
+ LocalResource
required by the container]]>
+ LocalResource
required by the container]]>
+ + This will be used to initialize this application on the specific + {@link AuxiliaryService} running on the NodeManager by calling + {@link AuxiliaryService#initializeApplication(ApplicationInitializationContext)} +
+ + @return application-specific binary service data]]> +ApplicationACL
s]]>
+ ApplicationACL
s for the application]]>
+ ContainerId
of the container.]]>
+ Resource
of the container.]]>
+ NodeId
where container is running.]]>
+ Priority
of the container.]]>
+ ContainerState
of the container.]]>
+ exit status
of the container.]]>
+ Container
.]]>
+ ContainerId
of the container]]>
+ ContainerState
of the container]]>
+ Note: This is valid only for completed containers i.e. containers + with state {@link ContainerState#COMPLETE}. + Otherwise, it returns an ContainerExitStatus.INVALID. +
+ +Containers killed by the framework, either due to being released by + the application or being 'lost' due to node failures etc. have a special + exit code of ContainerExitStatus.ABORTED.
+ +When threshold number of the nodemanager-local-directories or + threshold number of the nodemanager-log-directories become bad, then + container is not launched and is exited with ContainersExitStatus.DISKS_FAILED. +
+ + @return exit status for the container]]> +LocalResourceType
of the resource to be localized]]>
+ LocalResourceType
of the resource to be localized]]>
+ LocalResourceVisibility
of the resource to be
+ localized]]>
+ LocalResourceVisibility
of the resource to be
+ localized]]>
+ PATTERN
).
+ @return pattern that should be used to extract entries from the
+ archive.]]>
+ PATTERN
).
+ @param pattern pattern that should be used to extract entries
+ from the archive.]]>
+ LocalResource
represents a local resource required to
+ run a container.
+
+ The NodeManager
is responsible for localizing the resource
+ prior to launching the container.
Applications can specify {@link LocalResourceType} and + {@link LocalResourceVisibility}.
+ + @see LocalResourceType + @see LocalResourceVisibility + @see ContainerLaunchContext + @see ApplicationSubmissionContext + @see ContainerManagementProtocol#startContainers(org.apache.hadoop.yarn.api.protocolrecords.StartContainersRequest)]]> ++ The type can be one of: +
NodeManager
.
+ + The visibility can be one of: +
NodeManager
for which the
+ NMToken is used to authenticate.]]>
+ NodeManager
]]>
+ NodeManager
+ It is issued by ResourceMananger
when ApplicationMaster
+ negotiates resource with ResourceManager
and
+ validated on NodeManager
side.
NodeId
is the unique identifier for a node.
+
+ It includes the hostname and port to uniquely
+ identify the node. Thus, it is unique across restarts of any
+ NodeManager
.
NodeId
of the node]]>
+ NodeState
of the node]]>
+ Resource
on the node.
+ @return used Resource
on the node]]>
+ Resource
on the node.
+ @return total Resource
on the node]]>
+ Node
.]]>
+ ResourceManager
.
+ @see AllocateRequest#setAskList(List)]]>
+ ResourceManager
. If the AM prefers a different set of
+ containers, then it may checkpoint or kill containers matching the
+ description in {@link #getResourceRequest}.
+ @return Set of containers at risk if the contract is not met.]]>
+ ApplicationMaster
(AM) can satisfy this request according
+ to its own priorities to prevent containers from being forcibly killed by
+ the platform.
+ @see PreemptionMessage]]>
+ + In contrast, the {@link PreemptionContract} also includes a description of + resources with a set of containers. If the AM releases containers matching + that profile, then the containers enumerated in {@link + PreemptionContract#getContainers()} may not be killed. +
+ Each preemption message reflects the RM's current understanding of the + cluster state, so a request to return N containers may not + reflect containers the AM is releasing, recently exited containers the RM has + yet to learn about, or new containers allocated before the message was + generated. Conversely, an RM may request a different profile of containers in + subsequent requests. +
+ The policy enforced by the RM is part of the scheduler. Generally, only + containers that have been requested consistently should be killed, but the + details are not specified.]]> +
QueueState
of the queue]]>
+ accessible node labels
of the queue]]>
+ ApplicationSubmissionContext
and
+ ResourceRequest
don't specify their
+ NodeLabelExpression
.
+
+ @return default node label expression
of the queue]]>
+ QueueACL
for the given user]]>
+ QueueUserACLInfo
provides information {@link QueueACL} for
+ the given user.
+
+ @see QueueACL
+ @see ApplicationClientProtocol#getQueueUserAcls(org.apache.hadoop.yarn.api.protocolrecords.GetQueueUserAclsInfoRequest)]]>
+ + The globally unique nature of the identifier is achieved by using the + cluster timestamp i.e. start-time of the {@code ResourceManager} + along with a monotonically increasing counter for the reservation. +
]]> +Resource
models a set of computer resources in the
+ cluster.
+
+ Currently it models both memory and CPU.
+ +The unit for memory is megabytes. CPU is modeled with virtual cores + (vcores), a unit for expressing parallelism. A node's capacity should + be configured with virtual cores equal to its number of physical cores. A + container should be requested with the number of cores it can saturate, i.e. + the average number of threads it expects to have runnable at a time.
+ +Virtual cores take integer values and thus currently CPU-scheduling is + very coarse. A complementary axis for CPU requests that represents processing + power will likely be added in the future to enable finer-grained resource + configuration.
+ +Typically, applications request Resource
of suitable
+ capability to run their component tasks.
Priority
of the request]]>
+ Priority
of the request]]>
+ Resource
capability of the request]]>
+ Resource
capability of the request]]>
+ ResourceRequest
.]]>
+ + +
If the flag is off on a rack-level ResourceRequest
,
+ containers at that request's priority will not be assigned to nodes on that
+ request's rack unless requests specifically for those nodes have also been
+ submitted.
+ +
If the flag is off on an {@link ResourceRequest#ANY}-level
+ ResourceRequest
, containers at that request's priority will
+ only be assigned on racks for which specific requests have also been
+ submitted.
+ +
For example, to request a container strictly on a specific node, the + corresponding rack-level and any-level requests should have locality + relaxation set to false. Similarly, to request a container strictly on a + specific rack, the corresponding any-level request should have locality + relaxation set to false.
+
+ @param relaxLocality whether locality relaxation is enabled with this
+ ResourceRequest
.]]>
+
ResourceManager
.
+ @return the set of {@link ContainerId} to be preempted.]]>
+ Token
is the security entity used by the framework
+ to verify authenticity of any resource.]]>
+ URL
represents a serializable {@link java.net.URL}.]]>
+ NodeManager
s in the cluster]]>
+ YarnClusterMetrics
represents cluster metrics.
+
+ Currently only number of NodeManager
s is provided.
+ The reader and writer users/groups pattern that the user can supply is the
+ same as what AccessControlList
takes.
+
+ Primary filters will be used to index the entities in
+ TimelineStore
, such that users should carefully choose the
+ information they want to store as the primary filters. The remaining can be
+ stored as other information.
+
InetSocketAddress
. On a HA cluster,
+ this fetches the address corresponding to the RM identified by
+ {@link #RM_HA_ID}.
+ @param name property name.
+ @param defaultAddress the default value
+ @param defaultPort the default port
+ @return InetSocketAddress]]>
+ + Note: Use {@link #DEFAULT_YARN_CROSS_PLATFORM_APPLICATION_CLASSPATH} for + cross-platform practice i.e. submit an application from a Windows client to + a Linux/Unix server or vice versa. +
]]> +SharedCacheManager
to run a cleaner task
+ @return SharedCacheManager
returns an empty response
+ on success and throws an exception on rejecting the request
+ @throws YarnException
+ @throws IOException]]>
+ SharedCacheManager
+ ]]>
+
+ In secure mode, YARN
verifies access to the application, queue
+ etc. before accepting the request.
+
+ If the user does not have VIEW_APP
access then the following
+ fields in the report will be set to stubbed values:
+
+ If the user does not have VIEW_APP
access for an application
+ then the corresponding report will be filtered as described in
+ {@link #getApplicationReport(ApplicationId)}.
+
+ In secure mode, YARN
verifies access to the application, queue
+ etc. before accepting the request.
+
+ In secure mode, YARN
verifies access to the application, queue
+ etc. before accepting the request.
+
ResourceManager
. New containers assigned to the master are
+ retrieved. Status of completed containers and node health updates are also
+ retrieved. This also doubles up as a heartbeat to the ResourceManager and
+ must be made periodically. The call may not always return any new
+ allocations of containers. App should not make concurrent allocate
+ requests. May cause request loss.
+
+ + Note : If the user has not removed container requests that have already + been satisfied, then the re-register may end up sending the entire + container requests to the RM (including matched requests). Which would mean + the RM could end up giving it a lot of new allocated containers. +
+ + @param progressIndicator Indicates progress made by the master + @return the response of the allocate request + @throws YarnException + @throws IOException]]> +addContainerRequest
earlier in the lifecycle. For performance,
+ the AMRMClient may return its internal collection directly without creating
+ a copy. Users should not perform mutable operations on the return value.
+ Each collection in the list contains requests with identical
+ Resource
size that fit in the given capability. In a
+ collection, requests will be returned in the same order as they were added.
+ @return Collection of request matching the parameters]]>
+ AMRMClient
+
+ If a NM token cache is not set, the {@link NMTokenCache#getSingleton()}
+ singleton instance will be used.
+
+ @param nmTokenCache the NM token cache to use.]]>
+ AMRMClient
.
+
+ If a NM token cache is not set, the {@link NMTokenCache#getSingleton()}
+ singleton instance will be used.
+
+ @return the NM token cache.]]>
+ checkEveryMillis
ms.
+ See also {@link #waitFor(com.google.common.base.Supplier, int, int)}
+ @param check user defined checker
+ @param checkEveryMillis interval to call check
]]>
+ checkEveryMillis
ms. In the main loop, this method will log
+ the message "waiting in main loop" for each logInterval
times
+ iteration to confirm the thread is alive.
+ @param check user defined checker
+ @param checkEveryMillis interval to call check
+ @param logInterval interval to log for each]]>
+ The ApplicationMaster
or other applications that use the
+ client must provide the details of the allocated container, including the
+ Id, the assigned node's Id and the token via {@link Container}. In
+ addition, the AM needs to provide the {@link ContainerLaunchContext} as
+ well.
NodeManager
to launch the
+ container
+ @return a map between the auxiliary service names and their outputs
+ @throws YarnException
+ @throws IOException]]>
+ NodeManager
+
+ @throws YarnException
+ @throws IOException]]>
+ NodeManager
+
+ @return the status of a container
+ @throws YarnException
+ @throws IOException]]>
+ NMClient
+
+ If a NM token cache is not set, the {@link NMTokenCache#getSingleton()}
+ singleton instance will be used.
+
+ @param nmTokenCache the NM token cache to use.]]>
+ NMClient
+
+ If a NM token cache is not set, the {@link NMTokenCache#getSingleton()}
+ singleton instance will be used.
+
+ @return the NM token cache]]>
+ + NMTokenCache nmTokenCache = new NMTokenCache(); + AMRMClient rmClient = AMRMClient.createAMRMClient(); + NMClient nmClient = NMClient.createNMClient(); + nmClient.setNMTokenCache(nmTokenCache); + ... ++
+ NMTokenCache nmTokenCache = new NMTokenCache(); + AMRMClient rmClient = AMRMClient.createAMRMClient(); + NMClient nmClient = NMClient.createNMClient(); + nmClient.setNMTokenCache(nmTokenCache); + AMRMClientAsync rmClientAsync = new AMRMClientAsync(rmClient, 1000, [AMRM_CALLBACK]); + NMClientAsync nmClientAsync = new NMClientAsync("nmClient", nmClient, [NM_CALLBACK]); + ... ++
+ NMTokenCache nmTokenCache = new NMTokenCache(); + ... + ApplicationMasterProtocol amPro = ClientRMProxy.createRMProxy(conf, ApplicationMasterProtocol.class); + ... + AllocateRequest allocateRequest = ... + ... + AllocateResponse allocateResponse = rmClient.allocate(allocateRequest); + for (NMToken token : allocateResponse.getNMTokens()) { + nmTokenCache.setToken(token.getNodeId().toString(), token.getToken()); + } + ... + ContainerManagementProtocolProxy nmPro = ContainerManagementProtocolProxy(conf, nmTokenCache); + ... + nmPro.startContainer(container, containerContext); + ... ++
AMRMClient
or
+ NMClient
, or the async versions of them) with a protocol proxy (
+ ContainerManagementProtocolProxy
or
+ ApplicationMasterProtocol
).]]>
+ YARN.
It is a blocking call - it
+ will not return {@link ApplicationId} until the submitted application is
+ submitted successfully and accepted by the ResourceManager.
+
+
+ + Users should provide an {@link ApplicationId} as part of the parameter + {@link ApplicationSubmissionContext} when submitting a new application, + otherwise it will throw the {@link ApplicationIdNotProvidedException}. +
+ +This internally calls {@link ApplicationClientProtocol#submitApplication + (SubmitApplicationRequest)}, and after that, it internally invokes + {@link ApplicationClientProtocol#getApplicationReport + (GetApplicationReportRequest)} and waits till it can make sure that the + application gets properly submitted. If RM fails over or RM restart + happens before ResourceManager saves the application's state, + {@link ApplicationClientProtocol + #getApplicationReport(GetApplicationReportRequest)} will throw + the {@link ApplicationNotFoundException}. This API automatically resubmits + the application with the same {@link ApplicationSubmissionContext} when it + catches the {@link ApplicationNotFoundException}
+ + @param appContext + {@link ApplicationSubmissionContext} containing all the details + needed to submit a new application + @return {@link ApplicationId} of the accepted application + @throws YarnException + @throws IOException + @see #createApplication()]]> +
+ In secure mode, YARN
verifies access to the application, queue
+ etc. before accepting the request.
+
+ If the user does not have VIEW_APP
access then the following
+ fields in the report will be set to stubbed values:
+
+ If the user does not have VIEW_APP
access for an application
+ then the corresponding report will be filtered as described in
+ {@link #getApplicationReport(ApplicationId)}.
+
+ If the user does not have VIEW_APP
access for an application
+ then the corresponding report will be filtered as described in
+ {@link #getApplicationReport(ApplicationId)}.
+
+ If the user does not have VIEW_APP
access for an application
+ then the corresponding report will be filtered as described in
+ {@link #getApplicationReport(ApplicationId)}.
+
+ If the user does not have VIEW_APP
access for an application
+ then the corresponding report will be filtered as described in
+ {@link #getApplicationReport(ApplicationId)}.
+
+ In secure mode, YARN
verifies access to the application, queue
+ etc. before accepting the request.
+
+ In secure mode, YARN
verifies access to the application, queue
+ etc. before accepting the request.
+
+ The client packages all details of its request in a + {@link ReservationSubmissionRequest} object. This contains information + about the amount of capacity, temporal constraints, and gang needs. + Furthermore, the reservation might be composed of multiple stages, with + ordering dependencies among them. +
+ ++ In order to respond, a new admission control component in the + {@code ResourceManager} performs an analysis of the resources that have + been committed over the period of time the user is requesting, verify that + the user requests can be fulfilled, and that it respect a sharing policy + (e.g., {@code CapacityOverTimePolicy}). Once it has positively determined + that the ReservationRequest is satisfiable the {@code ResourceManager} + answers with a {@link ReservationSubmissionResponse} that includes a + {@link ReservationId}. Upon failure to find a valid allocation the response + is an exception with the message detailing the reason of failure. +
+ ++ The semantics guarantees that the {@link ReservationId} returned, + corresponds to a valid reservation existing in the time-range request by + the user. The amount of capacity dedicated to such reservation can vary + overtime, depending of the allocation that has been determined. But it is + guaranteed to satisfy all the constraint expressed by the user in the + {@link ReservationDefinition} +
+ + @param request request to submit a new Reservation + @return response contains the {@link ReservationId} on accepting the + submission + @throws YarnException if the reservation cannot be created successfully + @throws IOException]]> ++ The allocation is attempted by virtually substituting all previous + allocations related to this Reservation with new ones, that satisfy the new + {@link ReservationDefinition}. Upon success the previous allocation is + atomically substituted by the new one, and on failure (i.e., if the system + cannot find a valid allocation for the updated request), the previous + allocation remains valid. +
+ + @param request to update an existing Reservation (the + {@link ReservationUpdateRequest} should refer to an existing valid + {@link ReservationId}) + @return response empty on successfully updating the existing reservation + @throws YarnException if the request is invalid or reservation cannot be + updated successfully + @throws IOException]]> +checkEveryMillis
ms.
+ See also {@link #waitFor(com.google.common.base.Supplier, int, int)}
+ @param check user defined checker
+ @param checkEveryMillis interval to call check
]]>
+ checkEveryMillis
ms. In the main loop, this method will log
+ the message "waiting in main loop" for each logInterval
times
+ iteration to confirm the thread is alive.
+ @param check user defined checker
+ @param checkEveryMillis interval to call check
+ @param logInterval interval to log for each]]>
+ + {@code + class MyCallbackHandler implements AMRMClientAsync.CallbackHandler { + public void onContainersAllocated(List+ + The client's lifecycle should be managed similarly to the following: + +containers) { + [run tasks on the containers] + } + + public void onContainersCompleted(List statuses) { + [update progress, check whether app is done] + } + + public void onNodesUpdated(List updated) {} + + public void onReboot() {} + } + } +
+ {@code + AMRMClientAsync asyncClient = + createAMRMClientAsync(appAttId, 1000, new MyCallbackhandler()); + asyncClient.init(conf); + asyncClient.start(); + RegisterApplicationMasterResponse response = asyncClient + .registerApplicationMaster(appMasterHostname, appMasterRpcPort, + appMasterTrackingUrl); + asyncClient.addContainerRequest(containerRequest); + [... wait for application to complete] + asyncClient.unregisterApplicationMaster(status, appMsg, trackingUrl); + asyncClient.stop(); + } +]]> +
+ {@code + class MyCallbackHandler implements NMClientAsync.CallbackHandler { + public void onContainerStarted(ContainerId containerId, + Map+ + The client's life-cycle should be managed like the following: + +allServiceResponse) { + [post process after the container is started, process the response] + } + + public void onContainerStatusReceived(ContainerId containerId, + ContainerStatus containerStatus) { + [make use of the status of the container] + } + + public void onContainerStopped(ContainerId containerId) { + [post process after the container is stopped] + } + + public void onStartContainerError( + ContainerId containerId, Throwable t) { + [handle the raised exception] + } + + public void onGetContainerStatusError( + ContainerId containerId, Throwable t) { + [handle the raised exception] + } + + public void onStopContainerError( + ContainerId containerId, Throwable t) { + [handle the raised exception] + } + } + } +
+ {@code + NMClientAsync asyncClient = + NMClientAsync.createNMClientAsync(new MyCallbackhandler()); + asyncClient.init(conf); + asyncClient.start(); + asyncClient.startContainer(container, containerLaunchContext); + [... wait for container being started] + asyncClient.getContainerStatus(container.getId(), container.getNodeId(), + container.getContainerToken()); + [... handle the status in the callback instance] + asyncClient.stopContainer(container.getId(), container.getNodeId(), + container.getContainerToken()); + [... wait for container being stopped] + asyncClient.stop(); + } +]]> +
NodeManager
are
+ available.
+
+
+ + Once a callback happens, the users can chose to act on it in blocking or + non-blocking manner. If the action on callback is done in a blocking + manner, some of the threads performing requests on NodeManagers may get + blocked depending on how many threads in the pool are busy. +
+ ++ The implementation of the callback function should not throw the + unexpected exception. Otherwise, {@link NMClientAsync} will just + catch, log and then ignore it. +
]]> +YARN
verifies access to the application, queue
+ etc. before accepting the request.
+
+ If the user does not have VIEW_APP
access then the following
+ fields in the report will be set to stubbed values:
+
+ If the user does not have VIEW_APP
access for an application
+ then the corresponding report will be filtered as described in
+ {@link #getApplicationReport(ApplicationId)}.
+
+ In secure mode, YARN
verifies access to the application, queue
+ etc. before accepting the request.
+
+ In secure mode, YARN
verifies access to the application, queue
+ etc. before accepting the request.
+
ResourceManager
. New containers assigned to the master are
+ retrieved. Status of completed containers and node health updates are also
+ retrieved. This also doubles up as a heartbeat to the ResourceManager and
+ must be made periodically. The call may not always return any new
+ allocations of containers. App should not make concurrent allocate
+ requests. May cause request loss.
+
+ + Note : If the user has not removed container requests that have already + been satisfied, then the re-register may end up sending the entire + container requests to the RM (including matched requests). Which would mean + the RM could end up giving it a lot of new allocated containers. +
+ + @param progressIndicator Indicates progress made by the master + @return the response of the allocate request + @throws YarnException + @throws IOException]]> +addContainerRequest
earlier in the lifecycle. For performance,
+ the AMRMClient may return its internal collection directly without creating
+ a copy. Users should not perform mutable operations on the return value.
+ Each collection in the list contains requests with identical
+ Resource
size that fit in the given capability. In a
+ collection, requests will be returned in the same order as they were added.
+ @return Collection of request matching the parameters]]>
+ AMRMClient
+ + If a NM token cache is not set, the {@link NMTokenCache#getSingleton()} + singleton instance will be used. + + @param nmTokenCache the NM token cache to use.]]> +
AMRMClient
.
+ + If a NM token cache is not set, the {@link NMTokenCache#getSingleton()} + singleton instance will be used. + + @return the NM token cache.]]> +
checkEveryMillis
ms.
+ See also {@link #waitFor(com.google.common.base.Supplier, int, int)}
+ @param check user defined checker
+ @param checkEveryMillis interval to call check
]]>
+ checkEveryMillis
ms. In the main loop, this method will log
+ the message "waiting in main loop" for each logInterval
times
+ iteration to confirm the thread is alive.
+ @param check user defined checker
+ @param checkEveryMillis interval to call check
+ @param logInterval interval to log for each]]>
+ The ApplicationMaster
or other applications that use the
+ client must provide the details of the allocated container, including the
+ Id, the assigned node's Id and the token via {@link Container}. In
+ addition, the AM needs to provide the {@link ContainerLaunchContext} as
+ well.
NodeManager
to launch the
+ container
+ @return a map between the auxiliary service names and their outputs
+ @throws YarnException
+ @throws IOException]]>
+ NodeManager
+
+ @throws YarnException
+ @throws IOException]]>
+ NodeManager
+
+ @return the status of a container
+ @throws YarnException
+ @throws IOException]]>
+ NMClient
+ + If a NM token cache is not set, the {@link NMTokenCache#getSingleton()} + singleton instance will be used. + + @param nmTokenCache the NM token cache to use.]]> +
NMClient
+ + If a NM token cache is not set, the {@link NMTokenCache#getSingleton()} + singleton instance will be used. + + @return the NM token cache]]> +
+ NMTokenCache nmTokenCache = new NMTokenCache(); + AMRMClient rmClient = AMRMClient.createAMRMClient(); + NMClient nmClient = NMClient.createNMClient(); + nmClient.setNMTokenCache(nmTokenCache); + ... ++
+ NMTokenCache nmTokenCache = new NMTokenCache(); + AMRMClient rmClient = AMRMClient.createAMRMClient(); + NMClient nmClient = NMClient.createNMClient(); + nmClient.setNMTokenCache(nmTokenCache); + AMRMClientAsync rmClientAsync = new AMRMClientAsync(rmClient, 1000, [AMRM_CALLBACK]); + NMClientAsync nmClientAsync = new NMClientAsync("nmClient", nmClient, [NM_CALLBACK]); + ... ++
+ NMTokenCache nmTokenCache = new NMTokenCache(); + ... + ApplicationMasterProtocol amPro = ClientRMProxy.createRMProxy(conf, ApplicationMasterProtocol.class); + ... + AllocateRequest allocateRequest = ... + ... + AllocateResponse allocateResponse = rmClient.allocate(allocateRequest); + for (NMToken token : allocateResponse.getNMTokens()) { + nmTokenCache.setToken(token.getNodeId().toString(), token.getToken()); + } + ... + ContainerManagementProtocolProxy nmPro = ContainerManagementProtocolProxy(conf, nmTokenCache); + ... + nmPro.startContainer(container, containerContext); + ... ++
SharedCacheManager.
+ The client uses a checksum to identify the resource and an
+ {@link ApplicationId} to identify which application will be using the
+ resource.
+
+
+
+ The SharedCacheManager
responds with whether or not the
+ resource exists in the cache. If the resource exists, a Path
+ to the resource in the shared cache is returned. If the resource does not
+ exist, null is returned instead.
+
SharedCacheManager.
+ This method is called once an application is no longer using a claimed
+ resource in the shared cache. The client uses a checksum to identify the
+ resource and an {@link ApplicationId} to identify which application is
+ releasing the resource.
+
+
+ + Note: This method is an optimization and the client is not required to call + it for correctness. +
+ + @param applicationId ApplicationId of the application releasing the + resource + @param resourceKey the key (i.e. checksum) that identifies the resource]]> +YARN.
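+ <p>
+ A minimal sketch (not part of this javadoc) of the use/release cycle with
+ {@code SharedCacheClient}; {@code appId} and the {@code resourceChecksum}
+ key are illustrative assumptions.
+ </p>
+ <pre>
+ {@code
+ SharedCacheClient scClient = SharedCacheClient.createSharedCacheClient();
+ scClient.init(conf);
+ scClient.start();
+
+ Path cached = scClient.use(appId, resourceChecksum);
+ if (cached == null) {
+   // not in the shared cache: fall back to normal localization/upload
+ }
+ scClient.release(appId, resourceChecksum);
+ scClient.stop();
+ }
+ </pre>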
It is a blocking call - it
+ will not return {@link ApplicationId} until the submitted application is
+ submitted successfully and accepted by the ResourceManager.
+
+
+ + Users should provide an {@link ApplicationId} as part of the parameter + {@link ApplicationSubmissionContext} when submitting a new application, + otherwise it will throw the {@link ApplicationIdNotProvidedException}. +
+ +This internally calls {@link ApplicationClientProtocol#submitApplication + (SubmitApplicationRequest)}, and after that, it internally invokes + {@link ApplicationClientProtocol#getApplicationReport + (GetApplicationReportRequest)} and waits till it can make sure that the + application gets properly submitted. If RM fails over or RM restart + happens before ResourceManager saves the application's state, + {@link ApplicationClientProtocol + #getApplicationReport(GetApplicationReportRequest)} will throw + the {@link ApplicationNotFoundException}. This API automatically resubmits + the application with the same {@link ApplicationSubmissionContext} when it + catches the {@link ApplicationNotFoundException}
+ + @param appContext + {@link ApplicationSubmissionContext} containing all the details + needed to submit a new application + @return {@link ApplicationId} of the accepted application + @throws YarnException + @throws IOException + @see #createApplication()]]> +
+ In secure mode, YARN
verifies access to the application, queue
+ etc. before accepting the request.
+
+ If the user does not have VIEW_APP
access then the following
+ fields in the report will be set to stubbed values:
+
+ The AMRM token will be returned only if all the following conditions are + met: +
+ If the user does not have VIEW_APP
access for an application
+ then the corresponding report will be filtered as described in
+ {@link #getApplicationReport(ApplicationId)}.
+
+ If the user does not have VIEW_APP
access for an application
+ then the corresponding report will be filtered as described in
+ {@link #getApplicationReport(ApplicationId)}.
+
+ If the user does not have VIEW_APP
access for an application
+ then the corresponding report will be filtered as described in
+ {@link #getApplicationReport(ApplicationId)}.
+
+ If the user does not have VIEW_APP
access for an application
+ then the corresponding report will be filtered as described in
+ {@link #getApplicationReport(ApplicationId)}.
+
+ In secure mode, YARN
verifies access to the application, queue
+ etc. before accepting the request.
+
+ In secure mode, YARN
verifies access to the application, queue
+ etc. before accepting the request.
+
+ The client packages all details of its request in a + {@link ReservationSubmissionRequest} object. This contains information + about the amount of capacity, temporal constraints, and gang needs. + Furthermore, the reservation might be composed of multiple stages, with + ordering dependencies among them. +
+
+ In order to respond, a new admission control component in the
+ {@code ResourceManager} performs an analysis of the resources that have
+ been committed over the period of time the user is requesting, verifies
+ that the user's request can be fulfilled, and that it respects a sharing
+ policy (e.g., {@code CapacityOverTimePolicy}). Once it has positively
+ determined that the ReservationRequest is satisfiable, the
+ {@code ResourceManager} answers with a {@link ReservationSubmissionResponse}
+ that includes a {@link ReservationId}. Upon failure to find a valid
+ allocation, the response is an exception with a message detailing the
+ reason for the failure.
+
+
+ The semantics guarantee that the {@link ReservationId} returned
+ corresponds to a valid reservation existing in the time range requested by
+ the user. The amount of capacity dedicated to such a reservation can vary
+ over time, depending on the allocation that has been determined, but it is
+ guaranteed to satisfy all the constraints expressed by the user in the
+ {@link ReservationDefinition}.
+
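+
+ A submission sketch (illustrative; arrival, deadline, and requests are
+ caller-supplied values describing the reservation, and client is a started
+ YarnClient using the 2.6.0 reservation APIs):
+
+ {@code
+ ReservationDefinition definition = ReservationDefinition.newInstance(
+     arrival, deadline, requests, "my-reservation");
+ ReservationSubmissionRequest request =
+     ReservationSubmissionRequest.newInstance(definition, "default");
+ ReservationSubmissionResponse response = client.submitReservation(request);
+ ReservationId reservationId = response.getReservationId();
+ }
+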
+
+ @param request request to submit a new Reservation
+ @return response containing the {@link ReservationId} on accepting the
+ submission
+ @throws YarnException if the reservation cannot be created successfully
+ @throws IOException]]>
+
+ The allocation is attempted by virtually substituting all previous
+ allocations related to this Reservation with new ones that satisfy the new
+ {@link ReservationDefinition}. Upon success, the previous allocation is
+ atomically substituted by the new one; on failure (i.e., if the system
+ cannot find a valid allocation for the updated request), the previous
+ allocation remains valid.
+
+
+ @param request to update an existing Reservation (the
+ {@link ReservationUpdateRequest} should refer to an existing valid
+ {@link ReservationId})
+ @return response empty on successfully updating the existing reservation
+ @throws YarnException if the request is invalid or the reservation cannot
+ be updated successfully
+ @throws IOException]]>
+ checkEveryMillis ms.
+ See also {@link #waitFor(com.google.common.base.Supplier, int, int)}
+ @param check a user-defined checker
+ @param checkEveryMillis interval, in milliseconds, between calls to check
]]>
+ checkEveryMillis ms. In the main loop, this method will log
+ the message "waiting in main loop" every logInterval
+ iterations to confirm the thread is alive.
+ @param check a user-defined checker
+ @param checkEveryMillis interval, in milliseconds, between calls to check
+ @param logInterval number of iterations between log messages
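+
+ For example (a sketch; assuming this overload is exposed by the client in
+ use, with a Guava Supplier for the check):
+
+ {@code
+ client.waitFor(new com.google.common.base.Supplier<Boolean>() {
+   @Override
+   public Boolean get() {
+     return [whether the desired condition holds];
+   }
+ }, 1000, 10);
+ }
+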
+
+ {@code
+ class MyCallbackHandler implements AMRMClientAsync.CallbackHandler {
+   public void onContainersAllocated(List<Container> containers) {
+     [run tasks on the containers]
+   }
+
+   public void onContainersCompleted(List<ContainerStatus> statuses) {
+     [update progress, check whether app is done]
+   }
+
+   public void onNodesUpdated(List<NodeReport> updated) {}
+
+   public void onReboot() {}
+ }
+ }
+
+ The client's lifecycle should be managed similarly to the following:
+
+ {@code
+ AMRMClientAsync asyncClient =
+     createAMRMClientAsync(appAttId, 1000, new MyCallbackHandler());
+ asyncClient.init(conf);
+ asyncClient.start();
+ RegisterApplicationMasterResponse response = asyncClient
+     .registerApplicationMaster(appMasterHostname, appMasterRpcPort,
+         appMasterTrackingUrl);
+ asyncClient.addContainerRequest(containerRequest);
+ [... wait for application to complete]
+ asyncClient.unregisterApplicationMaster(status, appMsg, trackingUrl);
+ asyncClient.stop();
+ }
+ ]]>
+
+
+ {@code
+ class MyCallbackHandler implements NMClientAsync.CallbackHandler {
+   public void onContainerStarted(ContainerId containerId,
+       Map<String, ByteBuffer> allServiceResponse) {
+     [post process after the container is started, process the response]
+   }
+
+   public void onContainerStatusReceived(ContainerId containerId,
+       ContainerStatus containerStatus) {
+     [make use of the status of the container]
+   }
+
+   public void onContainerStopped(ContainerId containerId) {
+     [post process after the container is stopped]
+   }
+
+   public void onStartContainerError(
+       ContainerId containerId, Throwable t) {
+     [handle the raised exception]
+   }
+
+   public void onGetContainerStatusError(
+       ContainerId containerId, Throwable t) {
+     [handle the raised exception]
+   }
+
+   public void onStopContainerError(
+       ContainerId containerId, Throwable t) {
+     [handle the raised exception]
+   }
+ }
+ }
+
+ The client's life-cycle should be managed like the following:
+
+ {@code
+ NMClientAsync asyncClient =
+     NMClientAsync.createNMClientAsync(new MyCallbackHandler());
+ asyncClient.init(conf);
+ asyncClient.start();
+ asyncClient.startContainer(container, containerLaunchContext);
+ [... wait for container being started]
+ asyncClient.getContainerStatus(container.getId(), container.getNodeId(),
+     container.getContainerToken());
+ [... handle the status in the callback instance]
+ asyncClient.stopContainer(container.getId(), container.getNodeId(),
+     container.getContainerToken());
+ [... wait for container being stopped]
+ asyncClient.stop();
+ }
+ ]]>
+
NodeManager are
+ available.
+
+
+
+ Once a callback happens, users can choose to act on it in a blocking or
+ non-blocking manner. If the callback action is blocking, some of the
+ threads performing requests on NodeManagers may get blocked depending on
+ how many threads in the pool are busy.
+
+
+ The implementation of the callback functions should not throw unexpected
+ exceptions. Otherwise, {@link NMClientAsync} will just catch, log, and
+ then ignore them.
+
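+
+ For instance, a callback can stay non-blocking by handing the real work to
+ an application-owned executor (an illustrative sketch; executor is a
+ caller-created ExecutorService):
+
+ {@code
+ public void onContainerStarted(final ContainerId containerId,
+     final Map<String, ByteBuffer> allServiceResponse) {
+   // Return quickly; do the heavy lifting on a separate thread.
+   executor.submit(new Runnable() {
+     public void run() {
+       [post process after the container is started]
+     }
+   });
+ }
+ }
+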
]]> +Node
s]]>
+ Node
s]]>
+ yarn.sharedcache.checksum.algo.impl
)
+
+ @return SharedCacheChecksum
object]]>
+ false
]]>
+ NodeHealthStatus
is a summary of the health status of the
+ node.
+
+ It includes information such as: +
false
]]>
+