YARN-5646. Add documentation and update config parameter names for scheduling of OPPORTUNISTIC containers. (Konstantinos Karanasos via asuresh)

Arun Suresh 2016-12-16 08:14:34 -08:00
parent cee0c468b0
commit 2273a74c1f
14 changed files with 296 additions and 65 deletions

View File

@@ -238,8 +238,8 @@ protected void serviceInit(Configuration conf) throws Exception {
       // first attempt to contact RM.
       retrystartTime = System.currentTimeMillis();
       this.scheduledRequests.setNumOpportunisticMapsPer100(
-          conf.getInt(MRJobConfig.MR_NUM_OPPORTUNISTIC_MAPS_PER_100,
-              MRJobConfig.DEFAULT_MR_NUM_OPPORTUNISTIC_MAPS_PER_100));
+          conf.getInt(MRJobConfig.MR_NUM_OPPORTUNISTIC_MAPS_PERCENTAGE,
+              MRJobConfig.DEFAULT_MR_NUM_OPPORTUNISTIC_MAPS_PERCENTAGE));
       LOG.info(this.scheduledRequests.getNumOpportunisticMapsPer100() +
           "% of the mappers will be scheduled using OPPORTUNISTIC containers");
     }

View File

@@ -1005,9 +1005,9 @@ public interface MRJobConfig {
    * requested by the AM will be opportunistic. If the total number of maps
    * for the job is less than 'x', then ALL maps will be OPPORTUNISTIC
    */
-  public static final String MR_NUM_OPPORTUNISTIC_MAPS_PER_100 =
-      "mapreduce.job.num-opportunistic-maps-per-100";
-  public static final int DEFAULT_MR_NUM_OPPORTUNISTIC_MAPS_PER_100 = 0;
+  public static final String MR_NUM_OPPORTUNISTIC_MAPS_PERCENTAGE =
+      "mapreduce.job.num-opportunistic-maps-percentage";
+  public static final int DEFAULT_MR_NUM_OPPORTUNISTIC_MAPS_PERCENTAGE = 0;

   /**
    * A comma-separated list of properties whose value will be redacted.

View File

@@ -145,7 +145,7 @@ private void runMergeTest(JobConf job, FileSystem fileSystem, int
     job.setNumReduceTasks(numReducers);

     // All OPPORTUNISTIC
-    job.setInt(MRJobConfig.MR_NUM_OPPORTUNISTIC_MAPS_PER_100, percent);
+    job.setInt(MRJobConfig.MR_NUM_OPPORTUNISTIC_MAPS_PERCENTAGE, percent);
     job.setInt("mapreduce.map.maxattempts", 1);
     job.setInt("mapreduce.reduce.maxattempts", 1);
     job.setInt("mapred.test.num_lines", numLines);

View File

@@ -312,69 +312,65 @@ public static boolean isAclEnabled(Configuration conf) {
   /** ACL used in case none is found. Allows nothing. */
   public static final String DEFAULT_YARN_APP_ACL = " ";

-  /** Setting that controls whether distributed scheduling is enabled or not. */
-  public static final String DIST_SCHEDULING_ENABLED =
-      YARN_PREFIX + "distributed-scheduling.enabled";
-  public static final boolean DIST_SCHEDULING_ENABLED_DEFAULT = false;
-
   /** Setting that controls whether opportunistic container allocation
    * is enabled or not. */
   public static final String OPPORTUNISTIC_CONTAINER_ALLOCATION_ENABLED =
-      YARN_PREFIX + "opportunistic-container-allocation.enabled";
+      RM_PREFIX + "opportunistic-container-allocation.enabled";
   public static final boolean
-      OPPORTUNISTIC_CONTAINER_ALLOCATION_ENABLED_DEFAULT = false;
+      DEFAULT_OPPORTUNISTIC_CONTAINER_ALLOCATION_ENABLED = false;

   /** Number of nodes to be used by the Opportunistic Container allocator for
    * dispatching containers during container allocation. */
   public static final String OPP_CONTAINER_ALLOCATION_NODES_NUMBER_USED =
-      YARN_PREFIX + "opportunistic-container-allocation.nodes-used";
-  public static final int OPP_CONTAINER_ALLOCATION_NODES_NUMBER_USED_DEFAULT =
+      RM_PREFIX + "opportunistic-container-allocation.nodes-used";
+  public static final int DEFAULT_OPP_CONTAINER_ALLOCATION_NODES_NUMBER_USED =
       10;

   /** Frequency for computing least loaded NMs. */
   public static final String NM_CONTAINER_QUEUING_SORTING_NODES_INTERVAL_MS =
-      YARN_PREFIX + "nm-container-queuing.sorting-nodes-interval-ms";
+      RM_PREFIX + "nm-container-queuing.sorting-nodes-interval-ms";
   public static final long
-      NM_CONTAINER_QUEUING_SORTING_NODES_INTERVAL_MS_DEFAULT = 1000;
+      DEFAULT_NM_CONTAINER_QUEUING_SORTING_NODES_INTERVAL_MS = 1000;

-  /** Comparator for determining node load for Distributed Scheduling. */
+  /** Comparator for determining node load for scheduling of opportunistic
+   * containers. */
   public static final String NM_CONTAINER_QUEUING_LOAD_COMPARATOR =
-      YARN_PREFIX + "nm-container-queuing.load-comparator";
-  public static final String NM_CONTAINER_QUEUING_LOAD_COMPARATOR_DEFAULT =
+      RM_PREFIX + "nm-container-queuing.load-comparator";
+  public static final String DEFAULT_NM_CONTAINER_QUEUING_LOAD_COMPARATOR =
       "QUEUE_LENGTH";

   /** Value of standard deviation used for calculation of queue limit
    * thresholds. */
   public static final String NM_CONTAINER_QUEUING_LIMIT_STDEV =
-      YARN_PREFIX + "nm-container-queuing.queue-limit-stdev";
-  public static final float NM_CONTAINER_QUEUING_LIMIT_STDEV_DEFAULT =
+      RM_PREFIX + "nm-container-queuing.queue-limit-stdev";
+  public static final float DEFAULT_NM_CONTAINER_QUEUING_LIMIT_STDEV =
       1.0f;

   /** Min length of container queue at NodeManager. This is a cluster-wide
    * configuration that acts as the lower-bound of optimal queue length
    * calculated by the NodeQueueLoadMonitor */
   public static final String NM_CONTAINER_QUEUING_MIN_QUEUE_LENGTH =
-      YARN_PREFIX + "nm-container-queuing.min-queue-length";
-  public static final int NM_CONTAINER_QUEUING_MIN_QUEUE_LENGTH_DEFAULT = 1;
+      RM_PREFIX + "nm-container-queuing.min-queue-length";
+  public static final int DEFAULT_NM_CONTAINER_QUEUING_MIN_QUEUE_LENGTH = 5;

   /** Max length of container queue at NodeManager. This is a cluster-wide
    * configuration that acts as the upper-bound of optimal queue length
    * calculated by the NodeQueueLoadMonitor */
   public static final String NM_CONTAINER_QUEUING_MAX_QUEUE_LENGTH =
-      YARN_PREFIX + "nm-container-queuing.max-queue-length";
-  public static final int NM_CONTAINER_QUEUING_MAX_QUEUE_LENGTH_DEFAULT = 10;
+      RM_PREFIX + "nm-container-queuing.max-queue-length";
+  public static final int DEFAULT_NM_CONTAINER_QUEUING_MAX_QUEUE_LENGTH = 15;

   /** Min queue wait time for a container at a NodeManager. */
   public static final String NM_CONTAINER_QUEUING_MIN_QUEUE_WAIT_TIME_MS =
-      YARN_PREFIX + "nm-container-queuing.min-queue-wait-time-ms";
-  public static final int NM_CONTAINER_QUEUING_MIN_QUEUE_WAIT_TIME_MS_DEFAULT =
-      1;
+      RM_PREFIX + "nm-container-queuing.min-queue-wait-time-ms";
+  public static final int DEFAULT_NM_CONTAINER_QUEUING_MIN_QUEUE_WAIT_TIME_MS =
+      10;

   /** Max queue wait time for a container queue at a NodeManager. */
   public static final String NM_CONTAINER_QUEUING_MAX_QUEUE_WAIT_TIME_MS =
-      YARN_PREFIX + "nm-container-queuing.max-queue-wait-time-ms";
-  public static final int NM_CONTAINER_QUEUING_MAX_QUEUE_WAIT_TIME_MS_DEFAULT =
-      10;
+      RM_PREFIX + "nm-container-queuing.max-queue-wait-time-ms";
+  public static final int DEFAULT_NM_CONTAINER_QUEUING_MAX_QUEUE_WAIT_TIME_MS =
+      100;

   /**
    * Enable/disable intermediate-data encryption at YARN level. For now, this

@@ -812,9 +808,14 @@ public static boolean isAclEnabled(Configuration conf) {
   /** Max Queue length of <code>OPPORTUNISTIC</code> containers on the NM. */
   public static final String NM_OPPORTUNISTIC_CONTAINERS_MAX_QUEUE_LENGTH =
       NM_PREFIX + "opportunistic-containers-max-queue-length";
-  public static final int NM_OPPORTUNISTIC_CONTAINERS_MAX_QUEUE_LENGTH_DEFAULT =
+  public static final int DEFAULT_NM_OPPORTUNISTIC_CONTAINERS_MAX_QUEUE_LENGTH =
       0;

+  /** Setting that controls whether distributed scheduling is enabled or not. */
+  public static final String DIST_SCHEDULING_ENABLED =
+      NM_PREFIX + "distributed-scheduling.enabled";
+  public static final boolean DEFAULT_DIST_SCHEDULING_ENABLED = false;
+
   /** Environment variables that will be sent to containers.*/
   public static final String NM_ADMIN_USER_ENV = NM_PREFIX + "admin-env";
   public static final String DEFAULT_NM_ADMIN_USER_ENV = "MALLOC_ARENA_MAX=$MALLOC_ARENA_MAX";

@@ -2844,14 +2845,14 @@ public static String getClusterId(Configuration conf) {
   public static boolean isDistSchedulingEnabled(Configuration conf) {
     return conf.getBoolean(YarnConfiguration.DIST_SCHEDULING_ENABLED,
-        YarnConfiguration.DIST_SCHEDULING_ENABLED_DEFAULT);
+        YarnConfiguration.DEFAULT_DIST_SCHEDULING_ENABLED);
   }

   public static boolean isOpportunisticContainerAllocationEnabled(
       Configuration conf) {
     return conf.getBoolean(
         YarnConfiguration.OPPORTUNISTIC_CONTAINER_ALLOCATION_ENABLED,
-        YarnConfiguration.OPPORTUNISTIC_CONTAINER_ALLOCATION_ENABLED_DEFAULT);
+        YarnConfiguration.DEFAULT_OPPORTUNISTIC_CONTAINER_ALLOCATION_ENABLED);
   }

   // helper methods for timeline service configuration

View File

@@ -104,7 +104,6 @@ public void doBefore() throws Exception {
     cluster = new MiniYARNCluster("testDistributedSchedulingE2E", 1, 1, 1);

     conf = new YarnConfiguration();
-    conf.setBoolean(YarnConfiguration.AMRM_PROXY_ENABLED, true);
     conf.setBoolean(YarnConfiguration.
         OPPORTUNISTIC_CONTAINER_ALLOCATION_ENABLED, true);
     conf.setBoolean(YarnConfiguration.DIST_SCHEDULING_ENABLED, true);

View File

@@ -2751,7 +2751,7 @@
     <description>
       Setting that controls whether distributed scheduling is enabled.
     </description>
-    <name>yarn.distributed-scheduling.enabled</name>
+    <name>yarn.nodemanager.distributed-scheduling.enabled</name>
     <value>false</value>
   </property>

@@ -2760,7 +2760,7 @@
       Setting that controls whether opportunistic container allocation
       is enabled.
     </description>
-    <name>yarn.opportunistic-container-allocation.enabled</name>
+    <name>yarn.resourcemanager.opportunistic-container-allocation.enabled</name>
     <value>false</value>
   </property>

@@ -2769,7 +2769,7 @@
       Number of nodes to be used by the Opportunistic Container Allocator for
       dispatching containers during container allocation.
     </description>
-    <name>yarn.opportunistic-container-allocation.nodes-used</name>
+    <name>yarn.resourcemanager.opportunistic-container-allocation.nodes-used</name>
     <value>10</value>
   </property>

@@ -2777,7 +2777,7 @@
     <description>
      Frequency for computing least loaded NMs.
     </description>
-    <name>yarn.nm-container-queuing.sorting-nodes-interval-ms</name>
+    <name>yarn.resourcemanager.nm-container-queuing.sorting-nodes-interval-ms</name>
     <value>1000</value>
   </property>

@@ -2785,7 +2785,7 @@
     <description>
      Comparator for determining node load for Distributed Scheduling.
     </description>
-    <name>yarn.nm-container-queuing.load-comparator</name>
+    <name>yarn.resourcemanager.nm-container-queuing.load-comparator</name>
     <value>QUEUE_LENGTH</value>
   </property>

@@ -2793,7 +2793,7 @@
     <description>
      Value of standard deviation used for calculation of queue limit thresholds.
     </description>
-    <name>yarn.nm-container-queuing.queue-limit-stdev</name>
+    <name>yarn.resourcemanager.nm-container-queuing.queue-limit-stdev</name>
     <value>1.0f</value>
   </property>

@@ -2801,32 +2801,32 @@
     <description>
      Min length of container queue at NodeManager.
     </description>
-    <name>yarn.nm-container-queuing.min-queue-length</name>
-    <value>1</value>
+    <name>yarn.resourcemanager.nm-container-queuing.min-queue-length</name>
+    <value>5</value>
   </property>

   <property>
     <description>
      Max length of container queue at NodeManager.
     </description>
-    <name>yarn.nm-container-queuing.max-queue-length</name>
-    <value>10</value>
+    <name>yarn.resourcemanager.nm-container-queuing.max-queue-length</name>
+    <value>15</value>
   </property>

   <property>
     <description>
      Min queue wait time for a container at a NodeManager.
     </description>
-    <name>yarn.nm-container-queuing.min-queue-wait-time-ms</name>
-    <value>1</value>
+    <name>yarn.resourcemanager.nm-container-queuing.min-queue-wait-time-ms</name>
+    <value>10</value>
   </property>

   <property>
     <description>
      Max queue wait time for a container queue at a NodeManager.
     </description>
-    <name>yarn.nm-container-queuing.max-queue-wait-time-ms</name>
-    <value>10</value>
+    <name>yarn.resourcemanager.nm-container-queuing.max-queue-wait-time-ms</name>
+    <value>100</value>
   </property>

   <property>

View File

@@ -329,7 +329,7 @@ protected void serviceInit(Configuration conf) throws Exception {
     boolean isDistSchedulingEnabled =
         conf.getBoolean(YarnConfiguration.DIST_SCHEDULING_ENABLED,
-            YarnConfiguration.DIST_SCHEDULING_ENABLED_DEFAULT);
+            YarnConfiguration.DEFAULT_DIST_SCHEDULING_ENABLED);

     this.context = createNMContext(containerTokenSecretManager,
         nmTokenSecretManager, nmStore, isDistSchedulingEnabled, conf);

View File

@@ -79,7 +79,7 @@
  * to intercept and inspect messages from application master to the cluster
  * resource manager. It listens to messages from the application master and
  * creates a request intercepting pipeline instance for each application. The
- * pipeline is a chain of intercepter instances that can inspect and modify the
+ * pipeline is a chain of interceptor instances that can inspect and modify the
  * request/response as needed.
  */
 public class AMRMProxyService extends AbstractService implements

View File

@@ -305,7 +305,9 @@ public void serviceInit(Configuration conf) throws Exception {
   protected void createAMRMProxyService(Configuration conf) {
     this.amrmProxyEnabled =
         conf.getBoolean(YarnConfiguration.AMRM_PROXY_ENABLED,
-            YarnConfiguration.DEFAULT_AMRM_PROXY_ENABLED);
+            YarnConfiguration.DEFAULT_AMRM_PROXY_ENABLED) ||
+        conf.getBoolean(YarnConfiguration.DIST_SCHEDULING_ENABLED,
+            YarnConfiguration.DEFAULT_DIST_SCHEDULING_ENABLED);

     if (amrmProxyEnabled) {
       LOG.info("AMRMProxyService is enabled. "

View File

@@ -106,7 +106,7 @@ public ContainerScheduler(Context context, AsyncDispatcher dispatcher,
     this(context, dispatcher, metrics, context.getConf().getInt(
         YarnConfiguration.NM_OPPORTUNISTIC_CONTAINERS_MAX_QUEUE_LENGTH,
         YarnConfiguration.
-            NM_OPPORTUNISTIC_CONTAINERS_MAX_QUEUE_LENGTH_DEFAULT));
+            DEFAULT_NM_OPPORTUNISTIC_CONTAINERS_MAX_QUEUE_LENGTH));
   }

   @VisibleForTesting

View File

@@ -112,11 +112,11 @@ public OpportunisticContainerAllocatorAMService(RMContext rmContext,
         rmContext.getContainerTokenSecretManager());
     this.k = rmContext.getYarnConfiguration().getInt(
         YarnConfiguration.OPP_CONTAINER_ALLOCATION_NODES_NUMBER_USED,
-        YarnConfiguration.OPP_CONTAINER_ALLOCATION_NODES_NUMBER_USED_DEFAULT);
+        YarnConfiguration.DEFAULT_OPP_CONTAINER_ALLOCATION_NODES_NUMBER_USED);
     long nodeSortInterval = rmContext.getYarnConfiguration().getLong(
         YarnConfiguration.NM_CONTAINER_QUEUING_SORTING_NODES_INTERVAL_MS,
         YarnConfiguration.
-            NM_CONTAINER_QUEUING_SORTING_NODES_INTERVAL_MS_DEFAULT);
+            DEFAULT_NM_CONTAINER_QUEUING_SORTING_NODES_INTERVAL_MS);
     this.cacheRefreshInterval = nodeSortInterval;
     this.lastCacheUpdateTime = System.currentTimeMillis();
     NodeQueueLoadMonitor.LoadComparator comparator =
@@ -124,14 +124,14 @@ public OpportunisticContainerAllocatorAMService(RMContext rmContext,
         rmContext.getYarnConfiguration().get(
             YarnConfiguration.NM_CONTAINER_QUEUING_LOAD_COMPARATOR,
             YarnConfiguration.
-                NM_CONTAINER_QUEUING_LOAD_COMPARATOR_DEFAULT));
+                DEFAULT_NM_CONTAINER_QUEUING_LOAD_COMPARATOR));

     NodeQueueLoadMonitor topKSelector =
         new NodeQueueLoadMonitor(nodeSortInterval, comparator);

     float sigma = rmContext.getYarnConfiguration()
         .getFloat(YarnConfiguration.NM_CONTAINER_QUEUING_LIMIT_STDEV,
-            YarnConfiguration.NM_CONTAINER_QUEUING_LIMIT_STDEV_DEFAULT);
+            YarnConfiguration.DEFAULT_NM_CONTAINER_QUEUING_LIMIT_STDEV);

     int limitMin, limitMax;
@@ -139,22 +139,22 @@ public OpportunisticContainerAllocatorAMService(RMContext rmContext,
       limitMin = rmContext.getYarnConfiguration()
           .getInt(YarnConfiguration.NM_CONTAINER_QUEUING_MIN_QUEUE_LENGTH,
               YarnConfiguration.
-                  NM_CONTAINER_QUEUING_MIN_QUEUE_LENGTH_DEFAULT);
+                  DEFAULT_NM_CONTAINER_QUEUING_MIN_QUEUE_LENGTH);
       limitMax = rmContext.getYarnConfiguration()
           .getInt(YarnConfiguration.NM_CONTAINER_QUEUING_MAX_QUEUE_LENGTH,
               YarnConfiguration.
-                  NM_CONTAINER_QUEUING_MAX_QUEUE_LENGTH_DEFAULT);
+                  DEFAULT_NM_CONTAINER_QUEUING_MAX_QUEUE_LENGTH);
     } else {
       limitMin = rmContext.getYarnConfiguration()
           .getInt(
               YarnConfiguration.NM_CONTAINER_QUEUING_MIN_QUEUE_WAIT_TIME_MS,
               YarnConfiguration.
-                  NM_CONTAINER_QUEUING_MIN_QUEUE_WAIT_TIME_MS_DEFAULT);
+                  DEFAULT_NM_CONTAINER_QUEUING_MIN_QUEUE_WAIT_TIME_MS);
       limitMax = rmContext.getYarnConfiguration()
           .getInt(
               YarnConfiguration.NM_CONTAINER_QUEUING_MAX_QUEUE_WAIT_TIME_MS,
               YarnConfiguration.
-                  NM_CONTAINER_QUEUING_MAX_QUEUE_WAIT_TIME_MS_DEFAULT);
+                  DEFAULT_NM_CONTAINER_QUEUING_MAX_QUEUE_WAIT_TIME_MS);
     }

     topKSelector.initThresholdCalculator(sigma, limitMin, limitMax);

View File

@@ -947,7 +947,7 @@ protected void serviceStop() {
   protected ApplicationMasterService createApplicationMasterService() {
     if (this.rmContext.getYarnConfiguration().getBoolean(
         YarnConfiguration.OPPORTUNISTIC_CONTAINER_ALLOCATION_ENABLED,
-        YarnConfiguration.OPPORTUNISTIC_CONTAINER_ALLOCATION_ENABLED_DEFAULT)) {
+        YarnConfiguration.DEFAULT_OPPORTUNISTIC_CONTAINER_ALLOCATION_ENABLED)) {
       return new OpportunisticContainerAllocatorAMService(getRMContext(),
           scheduler) {
         @Override

View File

@@ -852,7 +852,9 @@ public CustomContainerManagerImpl(Context context, ContainerExecutor exec,
     protected void createAMRMProxyService(Configuration conf) {
       this.amrmProxyEnabled =
           conf.getBoolean(YarnConfiguration.AMRM_PROXY_ENABLED,
-              YarnConfiguration.DEFAULT_AMRM_PROXY_ENABLED);
+              YarnConfiguration.DEFAULT_AMRM_PROXY_ENABLED) ||
+          conf.getBoolean(YarnConfiguration.DIST_SCHEDULING_ENABLED,
+              YarnConfiguration.DEFAULT_DIST_SCHEDULING_ENABLED);

       if (this.amrmProxyEnabled) {
         LOG.info("CustomAMRMProxyService is enabled. "
@@ -882,7 +884,9 @@ public CustomQueueingContainerManagerImpl(Context context,
     protected void createAMRMProxyService(Configuration conf) {
       this.amrmProxyEnabled =
           conf.getBoolean(YarnConfiguration.AMRM_PROXY_ENABLED,
-              YarnConfiguration.DEFAULT_AMRM_PROXY_ENABLED);
+              YarnConfiguration.DEFAULT_AMRM_PROXY_ENABLED) ||
+          conf.getBoolean(YarnConfiguration.DIST_SCHEDULING_ENABLED,
+              YarnConfiguration.DEFAULT_DIST_SCHEDULING_ENABLED);

       if (this.amrmProxyEnabled) {
         LOG.info("CustomAMRMProxyService is enabled. "

View File

@@ -0,0 +1,225 @@
<!---
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->
Opportunistic Containers
========================
* [Purpose](#Purpose)
* [Quick Guide](#Quick_Guide)
* [Main Goal](#Main_Goal)
* [Enabling Opportunistic Containers](#Enabling_Opportunistic_Containers)
* [Running a Sample Job](#Running_a_Sample_Job)
* [Opportunistic Containers in Web UI](#Opportunistic_Containers_in_Web_UI)
* [Overview](#Overview)
* [Container Execution Types](#Container_Execution_Types)
* [Execution of Opportunistic Containers](#Execution_of_Opportunistic_Containers)
* [Allocation of Opportunistic Containers](#Allocation_of_Opportunistic_Containers)
* [Centralized Allocation](#Centralized_Allocation)
* [Distributed Allocation](#Distributed_Allocation)
* [Determining Nodes for Allocation](#Determining_Nodes_for_Allocation)
* [Rebalancing Node Load](#Rebalancing_Node_Load)
* [Advanced Configuration](#Advanced_Configuration)
* [Items for Future Work](#Items_for_Future_Work)
<a name="Purpose"></a>Purpose
-----------------------------
This document introduces the notion of **opportunistic** container execution, and discusses how opportunistic containers are allocated and executed.
<a name="Quick_Guide"></a>Quick Guide
--------------------------------------------------------------------
We start by providing a brief overview of opportunistic containers, including how a user can enable this feature and run a sample job using such containers.
###<a name="Main_Goal"></a>Main Goal
Unlike existing YARN containers that are scheduled in a node only if there are unallocated resources, opportunistic containers can be dispatched to an NM, even if their execution at that node cannot start immediately. In such a case, opportunistic containers will be queued at that NM until resources become available.
The main goal of opportunistic container execution is to improve cluster resource utilization, and therefore increase task throughput. Resource utilization and task throughput improvements are more pronounced for workloads that include relatively short tasks (in the order of seconds).
###<a name="Enabling_Opportunistic_Containers"></a>Enabling Opportunistic Containers
To enable opportunistic container allocation, the following two properties have to be present in **conf/yarn-site.xml**:
| Property | Description | Default value |
|:-------- |:----- |:----- |
| `yarn.resourcemanager.opportunistic-container-allocation.enabled` | Enables opportunistic container allocation. | `false` |
| `yarn.nodemanager.opportunistic-containers-max-queue-length` | Determines the max number of opportunistic containers that can be queued at an NM. | `0` |
The first parameter above has to be set to `true`. The second one has to be set to a positive value to allow queuing of opportunistic containers at the NM. A value of `10` can be used to start experimenting with opportunistic containers. The optimal value depends on the job characteristics, the cluster configuration, and the target utilization.
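The same two settings can also be applied programmatically, e.g., when spinning up a test cluster; a minimal sketch using the `YarnConfiguration` constants introduced by this commit (the queue length of `10` is just the experimental starting point suggested above):
```java
YarnConfiguration conf = new YarnConfiguration();
// Enable allocation of OPPORTUNISTIC containers at the RM.
conf.setBoolean(
    YarnConfiguration.OPPORTUNISTIC_CONTAINER_ALLOCATION_ENABLED, true);
// Allow up to 10 opportunistic containers to be queued at each NM
// (the default of 0 disables queuing altogether).
conf.setInt(
    YarnConfiguration.NM_OPPORTUNISTIC_CONTAINERS_MAX_QUEUE_LENGTH, 10);
```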
By default, allocation of opportunistic containers is performed centrally through the RM. However, a user can choose to enable distributed allocation of opportunistic containers, which can further improve allocation latency for short tasks. Distributed scheduling can be enabled by setting the following parameter to `true` (note that non-opportunistic containers will continue to be scheduled through the RM):
| Property | Description | Default value |
|:-------- |:----- |:----- |
| `yarn.nodemanager.distributed-scheduling.enabled` | Enables distributed scheduling. | `false` |
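Programmatically, this corresponds to one more flag on top of the sketch above; note that, per the `createAMRMProxyService` change in this commit, turning this flag on also implicitly enables the `AMRMProxyService` on the NM:
```java
// Enable distributed scheduling of opportunistic containers
// (this also implicitly enables the AMRMProxyService on the NM).
conf.setBoolean(YarnConfiguration.DIST_SCHEDULING_ENABLED, true);
```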
###<a name="Running_a_Sample_Job"></a>Running a Sample Job
The following command can be used to run a sample pi map-reduce job, executing 40% of mappers using opportunistic containers (substitute `3.0.0-alpha2-SNAPSHOT` below with the version of Hadoop you are using):
```
$ hadoop jar hadoop-3.0.0-alpha2-SNAPSHOT/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.0.0-alpha2-SNAPSHOT.jar pi -Dmapreduce.job.num-opportunistic-maps-percentage="40" 50 100
```
By changing the value of `mapreduce.job.num-opportunistic-maps-percentage` in the above command, we can specify the percentage of mappers that can be executed through opportunistic containers.
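The same knob can be set when building a MapReduce job programmatically; a sketch using the `MRJobConfig` constant renamed in this commit (the job name and exception handling are elided for brevity):
```java
Configuration conf = new Configuration();
// Run 40% of this job's map tasks as OPPORTUNISTIC containers.
conf.setInt(MRJobConfig.MR_NUM_OPPORTUNISTIC_MAPS_PERCENTAGE, 40);
Job job = Job.getInstance(conf, "pi-with-opportunistic-maps");
```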
###<a name="Opportunistic_Containers_in_Web_UI"></a>Opportunistic Containers in Web UI
When opportunistic container allocation is enabled, the following new columns can be observed in the Nodes page of the Web UI (`rm-address:8088/cluster/nodes`):
* Running Containers (O): Number of running opportunistic containers on each node;
* Mem Used (O): Total memory used by opportunistic containers on each node;
* VCores Used (O): Total CPU virtual cores used by opportunistic containers on each node;
* Queued Containers: Number of containers queued at each node.
When clicking on a specific container running on a node, the execution type of the container is also shown.
In the rest of the document, we provide an in-depth description of opportunistic containers, including details about their allocation and execution.
Overview <a name="Overview"></a>
--------------------------------
The existing schedulers in YARN (Fair and Capacity Scheduler) allocate containers to a node only if there are unallocated resources at that node at the moment of scheduling the containers. This **guaranteed** type of execution has the advantage that once the AM dispatches a container to a node, the container execution will start immediately, since it is guaranteed that there will be available resources. Moreover, unless fairness or capacity constraints are violated, containers are guaranteed to run to completion without being preempted.
Although this design offers a more predictable task execution, it has two main drawbacks that can lead to suboptimal cluster resource utilization:
* **Feedback delays.** When a container finishes its execution at a node, the RM gets notified that there are available resources through the next NM-RM heartbeat, then the RM schedules a new container at that node, the AM gets notified through the next AM-RM heartbeat, and finally the AM launches the new container at the node. These delays result in idle node resources, which in turn lead to lower resource utilization, especially when workloads involve tasks whose duration is relatively short.
* **Allocated vs. utilized resources.** The RM allocates containers based on the *allocated* resources at each node, which might be significantly higher than the actually *utilized* resources (e.g., think of a container for which 4GB memory have been allocated, but only 2GB are being utilized). This lowers effective resource utilization, and can be avoided if the RM takes into account the utilized resources during scheduling. However, this has to be done in a way that allows resources to be reclaimed in case the utilized resources of a running container increase.
To mitigate the above problems, in addition to the existing containers (which we term **guaranteed** containers hereafter), we introduce the notion of **opportunistic** containers. An opportunistic container can be dispatched to an NM, even if there are no available (unallocated) resources for it at the moment of scheduling. In such a case, the opportunistic container will be queued at the NM, waiting for resources to become available for its execution to start. The opportunistic containers are of lower priority than the guaranteed ones, which means that they can be preempted for guaranteed containers to start their execution. Therefore, they can be used to improve cluster resource utilization without impacting the execution of existing guaranteed containers.
An additional advantage of opportunistic containers is that they introduce a notion of **execution priority at the NMs**. For instance, a lower priority job that does not require strict execution guarantees can use opportunistic containers or a mix of container execution types for its tasks.
We have introduced two ways of allocating opportunistic containers: a **centralized** and a **distributed** one. In the centralized scheduling, opportunistic containers are allocated through the YARN RM, whereas in the distributed one, through local schedulers that reside at each NM. Centralized allocation allows for higher quality placement decisions and for implementing more involved sharing policies across applications (e.g., fairness). On the other hand, distributed scheduling can offer faster container allocation, which is useful for short tasks, as it avoids the round-trip to the RM. In both cases, the scheduling of guaranteed containers remains intact and happens through the YARN RM (using the existing Fair or Capacity Scheduler).
Note that in the current implementation, we are allocating containers based on allocated (and not utilized) resources. Therefore, we tackle the "feedback delays" problem mentioned above, but not the "allocated vs. utilized resources" one. There is ongoing work (`YARN-1011`) that employs opportunistic containers to address the latter problem too.
Below, we describe in more detail the [container execution types](#Container_Execution_Types), as well as the [execution](#Execution_of_Opportunistic_Containers) (including the container queuing at the NMs) and [allocation](#Allocation_of_Opportunistic_Containers) of opportunistic containers. Then we discuss how to fine-tune opportunistic containers through some [advanced configuration parameters](#Advanced_Configuration). Finally, we discuss open items for [future work](#Items_for_Future_Work).
<a name="Container_Execution_Types"></a>Container Execution Types
-----------------------------------------------------------------
We introduce the following two types of containers:
* **Guaranteed containers** correspond to the existing YARN containers. They are allocated by the Fair or Capacity Scheduler, and once dispatched to a node, it is guaranteed that there are available resources for their execution to start immediately. Moreover, these containers run to completion (as long as there are no failures). They can be preempted only in case the scheduler's queue to which they belong violates fairness or capacity constraints.
* **Opportunistic containers** are not guaranteed to have resources for their execution to start when they get dispatched to a node. Instead, they might be queued at the NM until resources become available. In case a guaranteed container arrives at a node and there are no resources available for it, one or more opportunistic containers will be preempted to execute the guaranteed one.
When an AM submits its resource requests to the RM, it specifies the type for each container (default is guaranteed), determining the way the container will be [allocated](#Allocation_of_Opportunistic_Containers). Subsequently, when the container is launched by the AM at an NM, its type determines how it will be [executed](#Execution_of_Opportunistic_Containers) by the NM.
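For illustration, a hedged sketch of how an AM could mark a request as opportunistic, assuming the `ExecutionTypeRequest` API of this release line (imports from `org.apache.hadoop.yarn.api.records` and all values are illustrative):
```java
ResourceRequest req = ResourceRequest.newInstance(
    Priority.newInstance(1), ResourceRequest.ANY,
    Resource.newInstance(1024, 1), /* numContainers */ 1);
// Requests carry GUARANTEED semantics unless an execution type is set.
req.setExecutionTypeRequest(
    ExecutionTypeRequest.newInstance(ExecutionType.OPPORTUNISTIC, true));
```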
<a name="Execution_of_Opportunistic_Containers"></a>Execution of Opportunistic Containers
---------------------------------------------------------------------------
When a container arrives at an NM, its execution is determined by the available resources at the NM and the container type. Guaranteed containers start their execution immediately, and if needed, the NM will kill running opportunistic containers to ensure there are sufficient resources for the guaranteed ones to start. On the other hand, opportunistic containers can be queued at the NM, if there are no resources available to start their execution when they arrive at the NM. To enable this, we extended the NM by allowing queuing of containers at each node. The NM monitors the local resources, and when there are sufficient resources available, it starts the execution of the opportunistic container that is at the head of the queue.
In particular, when a container arrives at an NM, localization is performed (i.e., all required resources are downloaded), and then the container moves to a `SCHEDULED` state, in which the container is queued, waiting for its execution to begin:
* If there are available resources, the execution of the container starts immediately, irrespective of its execution type.
* If there are no available resources:
* If the container is guaranteed, we kill as many running opportunistic containers as required for the guaranteed container to be executed, and then start its execution.
    * If the container is opportunistic, it remains in the queue until resources become available.
* When a container (guaranteed or opportunistic) finishes its execution and resources get freed up, we examine the queued containers and, if there are available resources, we start their execution. We pick containers from the queue in a FIFO order; a toy sketch of these rules is given after this list.
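The following toy model captures these rules; it is purely illustrative (resources are abstracted to integer "units") and is not the actual NM `ContainerScheduler` code:
```java
import java.util.ArrayDeque;
import java.util.Queue;

class NmQueuingSketch {
  enum ExecType { GUARANTEED, OPPORTUNISTIC }

  private int freeUnits = 8;                  // abstract node capacity
  private final Queue<Integer> queued = new ArrayDeque<>();

  void schedule(ExecType type, int demand) {
    if (demand <= freeUnits) {
      freeUnits -= demand;                    // start immediately, either type
    } else if (type == ExecType.GUARANTEED) {
      // kill running opportunistic containers until the guaranteed
      // container fits (elided here), then start it
    } else {
      queued.add(demand);                     // opportunistic: wait in FIFO order
    }
  }

  void onContainerFinished(int releasedUnits) {
    freeUnits += releasedUnits;
    // drain the FIFO queue while the container at its head fits
    while (!queued.isEmpty() && queued.peek() <= freeUnits) {
      freeUnits -= queued.poll();
    }
  }
}
```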
In the [future work items](#Items_for_Future_Work) below, we discuss different ways of prioritizing task execution (queue reordering) and of killing opportunistic containers to make space for guaranteed ones.
<a name="Allocation_of_Opportunistic_Containers"></a>Allocation of Opportunistic Containers
-----------------------------------------------------------------------------
As mentioned above, we provide both a centralized and a distributed way of allocating opportunistic containers, which we describe below.
###<a name="Centralized_Allocation"></a>Centralized Allocation
We have introduced a new service at the RM, namely the `OpportunisticContainerAllocatorAMService`, which extends the `ApplicationMasterService`. When the centralized opportunistic allocation is enabled, the resource requests from the AMs are served at the RM side by the `OpportunisticContainerAllocatorAMService`, which splits them into two sets of resource requests:
* The guaranteed set is forwarded to the existing `ApplicationMasterService` and is subsequently handled by the Fair or Capacity Scheduler.
* The opportunistic set is handled by the new `OpportunisticContainerAllocator`, which performs the scheduling of opportunistic containers to nodes.
The `OpportunisticContainerAllocator` maintains a list with the [least loaded nodes](#Determining_Nodes_for_Allocation) of the cluster at each moment, and assigns containers to them in a round-robin fashion. Note that in the current implementation, we purposely do not take into account node locality constraints. Since an opportunistic container (unlike the guaranteed ones) might wait at the queue of an NM before its execution starts, it is more important to allocate it at a node that is less loaded (i.e., where queuing delay will be smaller) rather than respect its locality constraints. Moreover, we do not take into account sharing (fairness/capacity) constraints for opportunistic containers at the moment. Support for both locality and sharing constraints can be added in the future if required.
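A hedged sketch of the round-robin assignment just described (illustrative only, not the actual `OpportunisticContainerAllocator` implementation):
```java
import java.util.List;

class RoundRobinSketch {
  private int cursor = 0;

  /** Pick the next target NM from the current least-loaded list. */
  String nextNode(List<String> leastLoadedNodes) {
    String node = leastLoadedNodes.get(cursor % leastLoadedNodes.size());
    cursor++;
    return node;
  }
}
```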
###<a name="Distributed_Allocation"></a>Distributed Allocation
In order to enable distributed scheduling of opportunistic containers, we have introduced a new service at each NM, called `AMRMProxyService`. The `AMRMProxyService` implements the `ApplicationMasterService` protocol, and acts as a proxy between the AMs running at that node and the RM. When the `AMRMProxyService` is enabled (through a parameter), we force all AMs running at a particular node to communicate with the `AMRMProxyService` of the same node, instead of going directly to the RM. Moreover, to ensure that the AMs will not talk directly with the RM, when a new AM gets initialized, we replace its `AMRMToken` with a token signed by the `AMRMProxyService`.
A chain of interceptors can be registered with the `AMRMProxyService`. One of these interceptors is the `DistributedScheduler` that is responsible for allocating opportunistic containers in a distributed way, without needing to contact the RM. This modular design makes the `AMRMProxyService` instrumental in other scenarios too, such as YARN federation (`YARN-2915`) or throttling down misbehaving AMs, which can be enabled simply by adding additional interceptors at the interceptor chain.
When distributed opportunistic scheduling is enabled, each AM sends its resource requests to the `AMRMProxyService` running at the same node. The `AMRMProxyService` splits the resource requests into two sets:
* The guaranteed set is forwarded to the RM. In this case the `AMRMProxyService` simply acts as a proxy between the AM and the RM, and the container allocation remains intact (using the Fair or Capacity Scheduler).
* The opportunistic set is not forwarded to the RM. Instead, it is handled by the `DistributedScheduler` that is running locally at the node. In particular, the `DistributedScheduler` maintains a list with the least loaded nodes in the cluster, and allocates containers to them in a round-robin fashion. The RM informs the `DistributedScheduler` about the least loaded nodes at regular intervals through the NM-RM heartbeats.
The above procedure is similar to the one performed by the `OpportunisticContainerAllocatorAMService` in the case of centralized opportunistic scheduling described above. The main difference is that in the distributed case, the splitting of requests into guaranteed and opportunistic happens locally at the node, and only the guaranteed requests are forwarded to the RM, while the opportunistic ones are handled without contacting the RM.
###<a name="Determining_Nodes_for_Allocation"></a>Determining Nodes for Allocation
Each NM informs the RM periodically through the NM-RM heartbeats about the number of running guaranteed and opportunistic containers, as well as the number of queued opportunistic containers. The RM gathers this information from all nodes and determines the least loaded ones.
In the case of centralized allocation of opportunistic containers, this information is immediately available, since the allocation happens centrally. In the case of distributed scheduling, the list with the least loaded nodes is propagated to all NMs (and thus becomes available to the `DistributedSchedulers`) through the heartbeat responses from the RM to the NMs. The number of least loaded nodes sent to the NMs is configurable.
At the moment, we take into account only the number of queued opportunistic containers at each node in order to estimate the time an opportunistic container would have to wait if sent to that node and, thus, determine the least loaded nodes. If the AM provided us with information about the estimated task durations, we could take them into account in order to have better estimates of the queue waiting times.
###<a name="Rebalancing_Node_Load"></a>Rebalancing Node Load
Occasionally poor placement choices for opportunistic containers may be made (due to stale queue length estimates), which can lead to load imbalance between nodes. The problem is more pronounced under high cluster load, and also in the case of distributed scheduling (multiple `DistributedSchedulers` may place containers at the same NM, since they do not coordinate with each other). To deal with this load imbalance between the NM queues, we perform load shedding to dynamically re-balance the load between NMs. In particular, while aggregating at the RM the queue time estimates published by each NM, we construct a distribution and find a targeted maximal value for the length of the NM queues (based on the mean and standard deviation of the distribution). Then the RM disseminates this value to the various NMs through the heartbeat responses. Subsequently, using this information, an NM on a node whose queue length is above the threshold discards opportunistic containers to meet this maximal value. This forces the associated individual AMs to reschedule those containers elsewhere.
<a name="Advanced_Configuration"></a>Advanced Configuration
--------------------------------------------------
The main properties for enabling opportunistic container allocation and choosing between centralized and distributed allocation were described in the [quick guide](#Quick_Guide) in the beginning of this document. Here we present more advanced configuration. Note that using default values for those parameters should be sufficient in most cases. All parameters below have to be defined in the **conf/yarn-site.xml** file.
To determine the number of [least loaded nodes](#Determining_Nodes_for_Allocation) that will be used when scheduling opportunistic containers and how often this list will be refreshed, we use the following parameters:
| Property | Description | Default value |
|:-------- |:----- |:----- |
| `yarn.resourcemanager.opportunistic-container-allocation.nodes-used` | Number of least loaded nodes to be used by the Opportunistic Container allocator for dispatching containers during container allocation. A higher value can improve load balance in large clusters. | `10` |
| `yarn.resourcemanager.nm-container-queuing.sorting-nodes-interval-ms` | Frequency for computing least loaded nodes. | `1000` |
As discussed in the [node load rebalancing](#Rebalancing_Node_Load) section above, at regular intervals, the RM gathers all NM queue lengths and computes their mean value (`avg`) and standard deviation (`stdev`), as well as the value `avg + k*stdev` (where `k` is a float). This value gets propagated through the NM-RM heartbeats to all NMs, which should respect it by dequeuing containers (if required), as long as their current queue length is between a `queue_min_length` and a `queue_max_length` value (these values are used to avoid dequeuing tasks from very short queues and to aggressively dequeue tasks from long queues, respectively).
The parameters `k`, `queue_min_length` and `queue_max_length` can be specified as follows:
| Property | Description | Default value |
|:-------- |:----- |:----- |
| `yarn.resourcemanager.nm-container-queuing.queue-limit-stdev` | The `k` parameter. | `1.0f` |
| `yarn.resourcemanager.nm-container-queuing.min-queue-length` | The `queue_min_length` parameter. | `5` |
| `yarn.resourcemanager.nm-container-queuing.max-queue-length` | The `queue_max_length` parameter. | `15` |
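A hedged sketch of this computation (illustrative only; in the commit the actual logic lives in the RM's threshold calculator, initialized via `initThresholdCalculator`):
```java
/** limit = avg + k * stdev, clamped to [min, max]. */
static int queueLimit(double avg, double stdev, float k, int min, int max) {
  int raw = (int) Math.round(avg + k * stdev);
  return Math.max(min, Math.min(max, raw));
}
```
For example, with `avg = 4` queued containers, `stdev = 2` and the default `k = 1.0f`, the raw limit is `6`; since `6` lies inside the default `[5, 15]` range, NMs whose queues are longer than `6` will shed opportunistic containers.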
Finally, two more properties can further tune the `AMRMProxyService` in case distributed scheduling is used:
| Property | Description | Default value |
|:-------- |:----- |:----- |
| `yarn.nodemanager.amrmproxy.address` | The address/port to which the `AMRMProxyService` is bound. | `0.0.0.0:8049` |
| `yarn.nodemanager.amrmproxy.client.thread-count` | The number of threads that are used at each NM for serving the interceptors registered with the `AMRMProxyService` by different jobs. | `3` |
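These can also be set programmatically; a minimal sketch using the property names from the table above (the values shown are simply the defaults):
```java
YarnConfiguration conf = new YarnConfiguration();
conf.set("yarn.nodemanager.amrmproxy.address", "0.0.0.0:8049");
conf.setInt("yarn.nodemanager.amrmproxy.client.thread-count", 3);
```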
<a name="Items_for_Future_Work"></a>Items for Future Work
-----------------------------------------------
Here we describe multiple ways in which we can extend/enhance the allocation and execution of opportunistic containers. We also provide the JIRAs that track each item.
* **Resource overcommitment** (`YARN-1011`). As already discussed, in order to further improve the cluster resource utilization, we can schedule containers not based on the allocated resources but on the actually utilized ones. When over-committing resources, there is the risk of running out of resources in case we have an increase in the utilized resources of the already running containers. Therefore, opportunistic execution should be used for containers whose allocation goes beyond the capacity of a node. This way, we can choose opportunistic containers to kill for reclaiming resources.
* **NM Queue reordering** (`YARN-5886`). Instead of executing queued containers in a FIFO order, we can employ reordering strategies that dynamically determine which opportunistic container will be executed next. For example, we can prioritize containers that are expected to be short-running or which belong to applications that are close to completion.
* **Out of order killing at NMs** (`YARN-5887`). As described above, when we need to free up resources for a guaranteed container to start its execution, we kill opportunistic containers in reverse order of arrival (first the most recently started ones). This might not always be the right decision. For example, we might want to minimize the number of containers killed or to refrain from killing containers of jobs that are very close to completion.
* **Container pausing** (`YARN-5292`): At the moment we kill opportunistic containers to make room for guaranteed ones in case of resource contention. In busy clusters this can lower the effective cluster utilization: whenever we kill a running opportunistic container, it has to be restarted, and thus we lose work. To this end, we can instead pause running opportunistic containers. Note that this will require support from the container executor (e.g., the container technology used) and from the application.
* **Container promotion** (`YARN-5085`). There are cases where changing the execution type of a container during its execution can be beneficial. For instance, an application might submit a container as opportunistic, and when its execution starts, it can request its promotion to a guaranteed container to avoid it getting killed.