YARN-7039. Fix javac and javadoc errors in YARN-3926 branch. (Sunil G via wangda)

Change-Id: I442bf6d838b3aba83f1f6779cf9dcf8596a2102d
Wangda Tan 2017-08-22 16:18:01 -07:00
parent a333ba54e3
commit e490602e9b
29 changed files with 100 additions and 2048 deletions

File diff suppressed because it is too large.


@@ -1,283 +0,0 @@
This product includes software developed by The Apache Software
Foundation (http://www.apache.org/).
The binary distribution of this product bundles binaries of
org.iq80.leveldb:leveldb-api (https://github.com/dain/leveldb), which has the
following notices:
* Copyright 2011 Dain Sundstrom <dain@iq80.com>
* Copyright 2011 FuseSource Corp. http://fusesource.com
The binary distribution of this product bundles binaries of
org.fusesource.hawtjni:hawtjni-runtime (https://github.com/fusesource/hawtjni),
which has the following notices:
* This product includes software developed by FuseSource Corp.
http://fusesource.com
* This product includes software developed at
Progress Software Corporation and/or its subsidiaries or affiliates.
* This product includes software developed by IBM Corporation and others.
The binary distribution of this product bundles binaries of
AWS Java SDK 1.10.6,
which has the following notices:
* This software includes third party software subject to the following
copyrights: - XML parsing and utility functions from JetS3t - Copyright
2006-2009 James Murty. - JSON parsing and utility functions from JSON.org -
Copyright 2002 JSON.org. - PKCS#1 PEM encoded private key parsing and utility
functions from oauth.googlecode.com - Copyright 1998-2010 AOL Inc.
The binary distribution of this product bundles binaries of
Gson 2.2.4,
which has the following notices:
The Netty Project
=================
Please visit the Netty web site for more information:
* http://netty.io/
Copyright 2014 The Netty Project
The Netty Project licenses this file to you under the Apache License,
version 2.0 (the "License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at:
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
Also, please refer to each LICENSE.<component>.txt file, which is located in
the 'license' directory of the distribution file, for the license terms of the
components that this product depends on.
-------------------------------------------------------------------------------
This product contains the extensions to Java Collections Framework which has
been derived from the works by JSR-166 EG, Doug Lea, and Jason T. Greene:
* LICENSE:
* license/LICENSE.jsr166y.txt (Public Domain)
* HOMEPAGE:
* http://gee.cs.oswego.edu/cgi-bin/viewcvs.cgi/jsr166/
* http://viewvc.jboss.org/cgi-bin/viewvc.cgi/jbosscache/experimental/jsr166/
This product contains a modified version of Robert Harder's Public Domain
Base64 Encoder and Decoder, which can be obtained at:
* LICENSE:
* license/LICENSE.base64.txt (Public Domain)
* HOMEPAGE:
* http://iharder.sourceforge.net/current/java/base64/
This product contains a modified portion of 'Webbit', an event based
WebSocket and HTTP server, which can be obtained at:
* LICENSE:
* license/LICENSE.webbit.txt (BSD License)
* HOMEPAGE:
* https://github.com/joewalnes/webbit
This product contains a modified portion of 'SLF4J', a simple logging
facade for Java, which can be obtained at:
* LICENSE:
* license/LICENSE.slf4j.txt (MIT License)
* HOMEPAGE:
* http://www.slf4j.org/
This product contains a modified portion of 'ArrayDeque', written by Josh
Bloch of Google, Inc:
* LICENSE:
* license/LICENSE.deque.txt (Public Domain)
This product contains a modified portion of 'Apache Harmony', an open source
Java SE, which can be obtained at:
* LICENSE:
* license/LICENSE.harmony.txt (Apache License 2.0)
* HOMEPAGE:
* http://archive.apache.org/dist/harmony/
This product contains a modified version of Roland Kuhn's ASL2
AbstractNodeQueue, which is based on Dmitriy Vyukov's non-intrusive MPSC queue.
It can be obtained at:
* LICENSE:
* license/LICENSE.abstractnodequeue.txt (Public Domain)
* HOMEPAGE:
* https://github.com/akka/akka/blob/wip-2.2.3-for-scala-2.11/akka-actor/src/main/java/akka/dispatch/AbstractNodeQueue.java
This product contains a modified portion of 'jbzip2', a Java bzip2 compression
and decompression library written by Matthew J. Francis. It can be obtained at:
* LICENSE:
* license/LICENSE.jbzip2.txt (MIT License)
* HOMEPAGE:
* https://code.google.com/p/jbzip2/
This product contains a modified portion of 'libdivsufsort', a C API library to construct
the suffix array and the Burrows-Wheeler transformed string for any input string of
a constant-size alphabet written by Yuta Mori. It can be obtained at:
* LICENSE:
* license/LICENSE.libdivsufsort.txt (MIT License)
* HOMEPAGE:
* https://code.google.com/p/libdivsufsort/
This product contains a modified portion of Nitsan Wakart's 'JCTools', Java Concurrency Tools for the JVM,
which can be obtained at:
* LICENSE:
* license/LICENSE.jctools.txt (ASL2 License)
* HOMEPAGE:
* https://github.com/JCTools/JCTools
This product optionally depends on 'JZlib', a re-implementation of zlib in
pure Java, which can be obtained at:
* LICENSE:
* license/LICENSE.jzlib.txt (BSD style License)
* HOMEPAGE:
* http://www.jcraft.com/jzlib/
This product optionally depends on 'Compress-LZF', a Java library for encoding and
decoding data in LZF format, written by Tatu Saloranta. It can be obtained at:
* LICENSE:
* license/LICENSE.compress-lzf.txt (Apache License 2.0)
* HOMEPAGE:
* https://github.com/ning/compress
This product optionally depends on 'lz4', a LZ4 Java compression
and decompression library written by Adrien Grand. It can be obtained at:
* LICENSE:
* license/LICENSE.lz4.txt (Apache License 2.0)
* HOMEPAGE:
* https://github.com/jpountz/lz4-java
This product optionally depends on 'lzma-java', a LZMA Java compression
and decompression library, which can be obtained at:
* LICENSE:
* license/LICENSE.lzma-java.txt (Apache License 2.0)
* HOMEPAGE:
* https://github.com/jponge/lzma-java
This product contains a modified portion of 'jfastlz', a Java port of FastLZ compression
and decompression library written by William Kinney. It can be obtained at:
* LICENSE:
* license/LICENSE.jfastlz.txt (MIT License)
* HOMEPAGE:
* https://code.google.com/p/jfastlz/
This product contains a modified portion of and optionally depends on 'Protocol Buffers', Google's data
interchange format, which can be obtained at:
* LICENSE:
* license/LICENSE.protobuf.txt (New BSD License)
* HOMEPAGE:
* http://code.google.com/p/protobuf/
This product optionally depends on 'Bouncy Castle Crypto APIs' to generate
a temporary self-signed X.509 certificate when the JVM does not provide the
equivalent functionality. It can be obtained at:
* LICENSE:
* license/LICENSE.bouncycastle.txt (MIT License)
* HOMEPAGE:
* http://www.bouncycastle.org/
This product optionally depends on 'Snappy', a compression library produced
by Google Inc, which can be obtained at:
* LICENSE:
* license/LICENSE.snappy.txt (New BSD License)
* HOMEPAGE:
* http://code.google.com/p/snappy/
This product optionally depends on 'JBoss Marshalling', an alternative Java
serialization API, which can be obtained at:
* LICENSE:
* license/LICENSE.jboss-marshalling.txt (GNU LGPL 2.1)
* HOMEPAGE:
* http://www.jboss.org/jbossmarshalling
This product optionally depends on 'Caliper', Google's micro-
benchmarking framework, which can be obtained at:
* LICENSE:
* license/LICENSE.caliper.txt (Apache License 2.0)
* HOMEPAGE:
* http://code.google.com/p/caliper/
This product optionally depends on 'Apache Commons Logging', a logging
framework, which can be obtained at:
* LICENSE:
* license/LICENSE.commons-logging.txt (Apache License 2.0)
* HOMEPAGE:
* http://commons.apache.org/logging/
This product optionally depends on 'Apache Log4J', a logging framework, which
can be obtained at:
* LICENSE:
* license/LICENSE.log4j.txt (Apache License 2.0)
* HOMEPAGE:
* http://logging.apache.org/log4j/
This product optionally depends on 'Aalto XML', an ultra-high performance
non-blocking XML processor, which can be obtained at:
* LICENSE:
* license/LICENSE.aalto-xml.txt (Apache License 2.0)
* HOMEPAGE:
* http://wiki.fasterxml.com/AaltoHome
This product contains a modified version of 'HPACK', a Java implementation of
the HTTP/2 HPACK algorithm written by Twitter. It can be obtained at:
* LICENSE:
* license/LICENSE.hpack.txt (Apache License 2.0)
* HOMEPAGE:
* https://github.com/twitter/hpack
This product contains a modified portion of 'Apache Commons Lang', a Java library
provides utilities for the java.lang API, which can be obtained at:
* LICENSE:
* license/LICENSE.commons-lang.txt (Apache License 2.0)
* HOMEPAGE:
* https://commons.apache.org/proper/commons-lang/
The binary distribution of this product bundles binaries of
Commons Codec 1.4,
which has the following notices:
* src/test/org/apache/commons/codec/language/DoubleMetaphoneTest.java
contains test data from http://aspell.net/test/orig/batch0.tab.
Copyright (C) 2002 Kevin Atkinson (kevina@gnu.org)
===============================================================================
The content of package org.apache.commons.codec.language.bm has been translated
from the original php source code available at http://stevemorse.org/phoneticinfo.htm
with permission from the original authors.
Original source copyright:
Copyright (c) 2008 Alexander Beider & Stephen P. Morse.
The binary distribution of this product bundles binaries of
Commons Lang 2.6,
which has the following notices:
* This product includes software from the Spring Framework,
under the Apache License 2.0 (see: StringUtils.containsWhitespace())
The binary distribution of this product bundles binaries of
Apache Log4j 1.2.17,
which has the following notices:
* ResolverUtil.java
Copyright 2005-2006 Tim Fennell
Dumbster SMTP test server
Copyright 2004 Jason Paul Kitchen
TypeUtil.java
Copyright 2002-2012 Ramnivas Laddad, Juergen Hoeller, Chris Beams


@@ -24,7 +24,6 @@
import org.apache.hadoop.classification.InterfaceStability.Unstable;
import org.apache.hadoop.yarn.util.Records;
import java.util.HashMap;
import java.util.Map;
/**


@@ -19,9 +19,7 @@
package org.apache.hadoop.yarn.api.records;
-import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.classification.InterfaceAudience.Public;
-import org.apache.hadoop.classification.InterfaceStability;
import org.apache.hadoop.classification.InterfaceStability.Evolving;
import org.apache.hadoop.yarn.api.protocolrecords.ResourceTypes;
import org.apache.hadoop.yarn.util.Records;
@@ -111,7 +109,7 @@ public static ResourceTypeInfo newInstance(String name, String units,
}
/**
- * Create a new instance of ResourceTypeInfo from name, units
+ * Create a new instance of ResourceTypeInfo from name, units.
*
* @param name name of resource type
* @param units units of resource type
@@ -124,7 +122,7 @@ public static ResourceTypeInfo newInstance(String name, String units) {
}
/**
- * Create a new instance of ResourceTypeInfo from name
+ * Create a new instance of ResourceTypeInfo from name.
*
* @param name name of resource type
* @return the new ResourceTypeInfo object

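For context, a minimal sketch of how the two newInstance overloads documented above might be used; the "gpu" resource name and the empty units string are invented example values, not part of this commit:

import org.apache.hadoop.yarn.api.records.ResourceTypeInfo;

public class ResourceTypeInfoSketch {
  public static void main(String[] args) {
    // Name plus units ("gpu" and "" are illustrative values only).
    ResourceTypeInfo gpu = ResourceTypeInfo.newInstance("gpu", "");
    // Name-only overload; units fall back to the record's default.
    ResourceTypeInfo gpuDefaultUnits = ResourceTypeInfo.newInstance("gpu");
    System.out.println(gpu.getName() + " / " + gpuDefaultUnits.getName());
  }
}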

@@ -15,6 +15,10 @@
* See the License for the specific language governing permissions and
* limitations under the License.
*/
+/**
+ * Package org.apache.hadoop.yarn.api.records.impl contains classes
+ * which define basic resources.
+ */
@InterfaceAudience.Public
@InterfaceStability.Unstable
package org.apache.hadoop.yarn.api.records.impl;


@@ -45,9 +45,8 @@ public static class Converter {
}
}
-private static final String[] UNITS =
-{ "p", "n", "u", "m", "", "k", "M", "G", "T", "P", "Ki", "Mi", "Gi", "Ti",
-"Pi" };
+private static final String[] UNITS = {"p", "n", "u", "m", "", "k", "M", "G",
+"T", "P", "Ki", "Mi", "Gi", "Ti", "Pi"};
private static final List<String> SORTED_UNITS = Arrays.asList(UNITS);
public static final Set<String> KNOWN_UNITS = createKnownUnitsSet();
private static final Converter PICO =


@@ -466,8 +466,8 @@ public static Map<String, ResourceInformation> getNodeResourceInformation(
return readOnlyNodeResources;
}
-private static Map<String, ResourceInformation>
-initializeNodeResourceInformation(Configuration conf) {
+private static Map<String, ResourceInformation> initializeNodeResourceInformation(
+Configuration conf) {
Map<String, ResourceInformation> nodeResources = new HashMap<>();
try {
addResourcesFileToConf(


@@ -15,6 +15,10 @@
* See the License for the specific language governing permissions and
* limitations under the License.
*/
+/**
+ * Package org.apache.hadoop.yarn.util.resource contains classes
+ * which is used as utility class for resource profile computations.
+ */
@InterfaceAudience.Public
@InterfaceStability.Unstable
package org.apache.hadoop.yarn.util.resource;


@@ -22,6 +22,9 @@
import org.junit.Assert;
import org.junit.Test;
+/**
+ * Test class to verify various resource informations in a given resource.
+ */
public class TestResourceInformation {
@Test


@@ -21,6 +21,10 @@
import org.junit.Assert;
import org.junit.Test;
+/**
+ * Test class to handle all test cases needed to verify basic unit conversion
+ * scenarios.
+ */
public class TestUnitsConversionUtil {
@Test
@@ -87,13 +91,13 @@ public void testOverflow() {
UnitsConversionUtil.convert("P", "p", test);
Assert.fail("this operation should result in an overflow");
} catch (IllegalArgumentException ie) {
-; // do nothing
+// do nothing
}
try {
UnitsConversionUtil.convert("m", "p", Long.MAX_VALUE - 1);
Assert.fail("this operation should result in an overflow");
} catch (IllegalArgumentException ie) {
-; // do nothing
+// do nothing
}
}

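A short sketch of the conversion behavior this test pins down, assuming the convert(String, String, long) signature exercised above; the Gi-to-Mi values are illustrative:

import org.apache.hadoop.yarn.util.UnitsConversionUtil;

public class UnitsConversionSketch {
  public static void main(String[] args) {
    // Downscaling within range: 1 Gi should become 1024 Mi.
    long mi = UnitsConversionUtil.convert("Gi", "Mi", 1L);
    System.out.println("1 Gi = " + mi + " Mi");

    // Peta ("P") down to pico ("p") multiplies by 10^27, which cannot fit
    // in a long, so the utility is expected to reject it rather than wrap.
    try {
      UnitsConversionUtil.convert("P", "p", Long.MAX_VALUE - 1);
    } catch (IllegalArgumentException expected) {
      System.out.println("overflow rejected as expected");
    }
  }
}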

@@ -81,7 +81,7 @@
import org.apache.hadoop.yarn.client.api.YarnClientApplication;
import org.apache.hadoop.yarn.client.util.YarnClientUtils;
import org.apache.hadoop.yarn.conf.YarnConfiguration;
-import org.apache.hadoop.yarn.exceptions.ResourceProfilesNotEnabledException;
+import org.apache.hadoop.yarn.exceptions.YARNFeatureNotEnabledException;
import org.apache.hadoop.yarn.exceptions.YarnException;
import org.apache.hadoop.yarn.util.resource.Resources;
import org.apache.hadoop.yarn.util.timeline.TimelineUtils;
@@ -553,7 +553,7 @@ public boolean run() throws IOException, YarnException {
Map<String, Resource> profiles;
try {
profiles = yarnClient.getResourceProfiles();
-} catch (ResourceProfilesNotEnabledException re) {
+} catch (YARNFeatureNotEnabledException re) {
profiles = null;
}
@@ -994,15 +994,17 @@ private void setAMResourceCapability(ApplicationSubmissionContext appContext,
if (profile.isEmpty()) {
tmp = "default";
}
-if (appContext.getAMContainerResourceRequest() == null) {
-appContext.setAMContainerResourceRequest(ResourceRequest
-.newInstance(Priority.newInstance(priority), "*",
-Resources.clone(Resources.none()), 1));
+if (appContext.getAMContainerResourceRequests() == null) {
+List<ResourceRequest> amResourceRequests = new ArrayList<ResourceRequest>();
+amResourceRequests
+.add(ResourceRequest.newInstance(Priority.newInstance(priority), "*",
+Resources.clone(Resources.none()), 1));
+appContext.setAMContainerResourceRequests(amResourceRequests);
}
-if (appContext.getAMContainerResourceRequest().getProfileCapability()
-== null) {
-appContext.getAMContainerResourceRequest().setProfileCapability(
+if (appContext.getAMContainerResourceRequests().get(0)
+.getProfileCapability() == null) {
+appContext.getAMContainerResourceRequests().get(0).setProfileCapability(
ProfileCapability.newInstance(tmp, Resource.newInstance(0, 0)));
}
Resource capability = Resource.newInstance(0, 0);
@@ -1018,7 +1020,7 @@ private void setAMResourceCapability(ApplicationSubmissionContext appContext,
capability.setMemorySize(memory);
capability.setVirtualCores(vcores);
}
-appContext.getAMContainerResourceRequest().getProfileCapability()
+appContext.getAMContainerResourceRequests().get(0).getProfileCapability()
.setProfileCapabilityOverride(capability);
}

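The hunks above follow the API move from the single-valued getAMContainerResourceRequest() to the list-valued getAMContainerResourceRequests(), where index 0 plays the old singleton role. A hedged sketch of that migration pattern (the priority 0 and the use of ResourceRequest.ANY are illustrative choices):

import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext;
import org.apache.hadoop.yarn.api.records.Priority;
import org.apache.hadoop.yarn.api.records.ResourceRequest;
import org.apache.hadoop.yarn.util.resource.Resources;

public class AmRequestListSketch {
  static void ensureAmRequest(ApplicationSubmissionContext appContext) {
    // Old, removed pattern: appContext.setAMContainerResourceRequest(single);
    // New pattern: a list of requests, with the first entry as the default.
    if (appContext.getAMContainerResourceRequests() == null) {
      List<ResourceRequest> amResourceRequests = new ArrayList<ResourceRequest>();
      amResourceRequests.add(ResourceRequest.newInstance(
          Priority.newInstance(0), ResourceRequest.ANY,
          Resources.clone(Resources.none()), 1));
      appContext.setAMContainerResourceRequests(amResourceRequests);
    }
    // Per-request state, such as the profile capability set above, is then
    // read and written through get(0).
  }
}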

@@ -19,7 +19,6 @@
package org.apache.hadoop.yarn.client.api.impl;
import java.io.IOException;
import java.io.Serializable;
import java.util.ArrayList;
import java.util.Collection;
import java.util.Collections;


@@ -134,9 +134,4 @@ private ResourceProto convertToProtoFormat(Resource res) {
r.setVirtualCores(res.getVirtualCores());
return r.getProto();
}
-@Override
-public int hashCode() {
-return getProto().hashCode();
-}
}


@@ -60,8 +60,9 @@ public int hashCode() {
@Override
public boolean equals(Object other) {
-if (other == null)
+if (other == null) {
return false;
+}
if (other.getClass().isAssignableFrom(this.getClass())) {
return this.getProto().equals(this.getClass().cast(other).getProto());
}


@@ -93,9 +93,4 @@ public String getProfileName() {
}
return profile;
}
-@Override
-public int hashCode() {
-return getProto().hashCode();
-}
}


@@ -104,9 +104,4 @@ private void maybeInitBuilder() {
}
viaProto = false;
}
-@Override
-public int hashCode() {
-return getProto().hashCode();
-}
}


@@ -29,7 +29,6 @@
import com.google.protobuf.TextFormat;
import java.util.HashMap;
import java.util.Map;
@Private


@@ -126,9 +126,4 @@ private YarnProtos.ResourceProto convertToProtoFormat(Resource res) {
r.setVirtualCores(res.getVirtualCores());
return r.getProto();
}
-@Override
-public int hashCode() {
-return getProto().hashCode();
-}
}


@@ -26,8 +26,6 @@
import org.apache.hadoop.yarn.proto.YarnProtos.ResourceTypeInfoProto;
import org.apache.hadoop.yarn.proto.YarnProtos.ResourceTypesProto;
import com.google.common.base.Preconditions;
/**
* {@code ResourceTypeInfoPBImpl} which implements the
* {@link ResourceTypeInfo} class which represents different resource types


@@ -30,6 +30,10 @@
import java.util.Arrays;
+/**
+ * Resources is a computation class which provides a set of apis to do
+ * mathematical operations on Resource object.
+ */
@InterfaceAudience.LimitedPrivate({ "YARN", "MapReduce" })
@Unstable
public class Resources {
@@ -47,7 +51,7 @@ static class FixedValueResource extends BaseResource {
private String name;
/**
- * Constructor for a fixed value resource
+ * Constructor for a fixed value resource.
* @param rName the name of the resource
* @param value the fixed value to be returned for all resource types
*/


@@ -29,8 +29,6 @@
import org.apache.hadoop.yarn.api.protocolrecords.CommitResponse;
import org.apache.hadoop.yarn.api.protocolrecords.ContainerUpdateRequest;
import org.apache.hadoop.yarn.api.protocolrecords.ContainerUpdateResponse;
-import org.apache.hadoop.yarn.api.protocolrecords.GetAllResourceTypeInfoRequest;
-import org.apache.hadoop.yarn.api.protocolrecords.GetAllResourceTypeInfoResponse;
import org.apache.hadoop.yarn.api.protocolrecords.IncreaseContainersResourceRequest;
import org.apache.hadoop.yarn.api.protocolrecords.IncreaseContainersResourceResponse;
import org.apache.hadoop.yarn.api.protocolrecords.ReInitializeContainerRequest;
@@ -147,7 +145,6 @@
import org.apache.hadoop.yarn.api.records.ReservationRequests;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.api.records.ResourceAllocationRequest;
-import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.api.records.ResourceInformation;
import org.apache.hadoop.yarn.api.records.ResourceBlacklistRequest;
import org.apache.hadoop.yarn.api.records.ResourceOption;
@@ -350,7 +347,6 @@
import org.junit.Test;
import com.google.common.collect.ImmutableSet;
import java.io.IOException;
import java.util.Arrays;
/**


@@ -33,6 +33,9 @@
import java.util.HashMap;
import java.util.Map;
+/**
+ * Test class to verify all resource utility methods.
+ */
public class TestResourceUtils {
static class ResourceFileInformation {
@@ -106,8 +109,8 @@ public void testGetResourceTypesConfigs() throws Exception {
testFile4.resourceNameUnitsMap.put("resource1", "G");
testFile4.resourceNameUnitsMap.put("resource2", "m");
-ResourceFileInformation[] tests =
-{ testFile1, testFile2, testFile3, testFile4 };
+ResourceFileInformation[] tests = {testFile1, testFile2, testFile3,
+testFile4};
Map<String, ResourceInformation> res;
for (ResourceFileInformation testInformation : tests) {
ResourceUtils.resetResourceTypes();
@@ -134,9 +137,9 @@ public void testGetResourceTypesConfigs() throws Exception {
public void testGetResourceTypesConfigErrors() throws Exception {
Configuration conf = new YarnConfiguration();
-String[] resourceFiles =
-{ "resource-types-error-1.xml", "resource-types-error-2.xml",
-"resource-types-error-3.xml", "resource-types-error-4.xml" };
+String[] resourceFiles = {"resource-types-error-1.xml",
+"resource-types-error-2.xml", "resource-types-error-3.xml",
+"resource-types-error-4.xml"};
for (String resourceFile : resourceFiles) {
ResourceUtils.resetResourceTypes();
File dest = null;
@@ -159,15 +162,15 @@ public void testGetResourceTypesConfigErrors() throws Exception {
@Test
public void testInitializeResourcesMap() throws Exception {
-String[] empty = { "", "" };
-String[] res1 = { "resource1", "m" };
-String[] res2 = { "resource2", "G" };
-String[][] test1 = { empty };
-String[][] test2 = { res1 };
-String[][] test3 = { res2 };
-String[][] test4 = { res1, res2 };
+String[] empty = {"", ""};
+String[] res1 = {"resource1", "m"};
+String[] res2 = {"resource2", "G"};
+String[][] test1 = {empty};
+String[][] test2 = {res1};
+String[][] test3 = {res2};
+String[][] test4 = {res1, res2};
-String[][][] allTests = { test1, test2, test3, test4 };
+String[][][] allTests = {test1, test2, test3, test4};
for (String[][] test : allTests) {
@@ -221,19 +224,19 @@ public void testInitializeResourcesMap() throws Exception {
@Test
public void testInitializeResourcesMapErrors() throws Exception {
-String[] mem1 = { "memory-mb", "" };
-String[] vcores1 = { "vcores", "M" };
+String[] mem1 = {"memory-mb", ""};
+String[] vcores1 = {"vcores", "M"};
-String[] mem2 = { "memory-mb", "m" };
-String[] vcores2 = { "vcores", "G" };
+String[] mem2 = {"memory-mb", "m"};
+String[] vcores2 = {"vcores", "G"};
-String[] mem3 = { "memory", "" };
+String[] mem3 = {"memory", ""};
-String[][] test1 = { mem1, vcores1 };
-String[][] test2 = { mem2, vcores2 };
-String[][] test3 = { mem3 };
+String[][] test1 = {mem1, vcores1};
+String[][] test2 = {mem2, vcores2};
+String[][] test3 = {mem3};
-String[][][] allTests = { test1, test2, test3 };
+String[][][] allTests = {test1, test2, test3};
for (String[][] test : allTests) {


@@ -175,12 +175,12 @@ public NodeStatusUpdaterImpl(Context context, Dispatcher dispatcher,
@Override
protected void serviceInit(Configuration conf) throws Exception {
this.totalResource = NodeManagerHardwareUtils.getNodeResources(conf);
-int memoryMb = totalResource.getMemory();
+long memoryMb = totalResource.getMemorySize();
float vMemToPMem =
conf.getFloat(
YarnConfiguration.NM_VMEM_PMEM_RATIO,
YarnConfiguration.DEFAULT_NM_VMEM_PMEM_RATIO);
-int virtualMemoryMb = (int)Math.ceil(memoryMb * vMemToPMem);
+long virtualMemoryMb = (long)Math.ceil(memoryMb * vMemToPMem);
int virtualCores = totalResource.getVirtualCores();
LOG.info("Nodemanager resources: memory set to " + memoryMb + "MB.");
@@ -190,12 +190,12 @@ protected void serviceInit(Configuration conf) throws Exception {
metrics.addResource(totalResource);
// Get actual node physical resources
-int physicalMemoryMb = memoryMb;
+long physicalMemoryMb = memoryMb;
int physicalCores = virtualCores;
ResourceCalculatorPlugin rcp =
ResourceCalculatorPlugin.getNodeResourceMonitorPlugin(conf);
if (rcp != null) {
-physicalMemoryMb = (int) (rcp.getPhysicalMemorySize() / (1024 * 1024));
+physicalMemoryMb = rcp.getPhysicalMemorySize() / (1024 * 1024);
physicalCores = rcp.getNumProcessors();
}
this.physicalResource =

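The int-to-long changes in this file track the Resource API's move from the deprecated int getMemory() to the long-valued getMemorySize(). A minimal sketch of the widened arithmetic; the 8192 MB resource and the 2.1 ratio (the default vmem-pmem ratio) are example inputs:

import org.apache.hadoop.yarn.api.records.Resource;

public class MemorySizeSketch {
  public static void main(String[] args) {
    Resource total = Resource.newInstance(8192, 8); // example node resource
    // long-valued accessor instead of the deprecated int getMemory().
    long memoryMb = total.getMemorySize();
    float vMemToPMem = 2.1f; // stand-in for the configured vmem-pmem ratio
    long virtualMemoryMb = (long) Math.ceil(memoryMb * vMemToPMem);
    System.out.println(memoryMb + " MB physical -> "
        + virtualMemoryMb + " MB virtual");
  }
}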

@@ -246,8 +246,8 @@ private static int getVCoresInternal(ResourceCalculatorPlugin plugin,
return cores;
}
-private static int getConfiguredMemoryMB(Configuration conf) {
-int memoryMb = conf.getInt(YarnConfiguration.NM_PMEM_MB,
+private static long getConfiguredMemoryMB(Configuration conf) {
+long memoryMb = conf.getLong(YarnConfiguration.NM_PMEM_MB,
YarnConfiguration.DEFAULT_NM_PMEM_MB);
if (memoryMb == -1) {
memoryMb = YarnConfiguration.DEFAULT_NM_PMEM_MB;
@@ -270,7 +270,7 @@ private static int getConfiguredMemoryMB(Configuration conf) {
* - the configuration for the NodeManager
* @return the amount of memory that will be used for YARN containers in MB.
*/
-public static int getContainerMemoryMB(Configuration conf) {
+public static long getContainerMemoryMB(Configuration conf) {
if (!isHardwareDetectionEnabled(conf)) {
return getConfiguredMemoryMB(conf);
}
@@ -299,7 +299,7 @@ public static int getContainerMemoryMB(Configuration conf) {
* - the configuration for the NodeManager
* @return the amount of memory that will be used for YARN containers in MB.
*/
-public static int getContainerMemoryMB(ResourceCalculatorPlugin plugin,
+public static long getContainerMemoryMB(ResourceCalculatorPlugin plugin,
Configuration conf) {
if (!isHardwareDetectionEnabled(conf) || plugin == null) {
return getConfiguredMemoryMB(conf);
@@ -307,26 +307,24 @@ public static int getContainerMemoryMB(ResourceCalculatorPlugin plugin,
return getContainerMemoryMBInternal(plugin, conf);
}
-private static int getContainerMemoryMBInternal(ResourceCalculatorPlugin plugin,
+private static long getContainerMemoryMBInternal(ResourceCalculatorPlugin plugin,
Configuration conf) {
-int memoryMb = conf.getInt(YarnConfiguration.NM_PMEM_MB, -1);
+long memoryMb = conf.getInt(YarnConfiguration.NM_PMEM_MB, -1);
if (memoryMb == -1) {
-int physicalMemoryMB =
-(int) (plugin.getPhysicalMemorySize() / (1024 * 1024));
-int hadoopHeapSizeMB =
-(int) (Runtime.getRuntime().maxMemory() / (1024 * 1024));
-int containerPhysicalMemoryMB =
-(int) (0.8f * (physicalMemoryMB - (2 * hadoopHeapSizeMB)));
-int reservedMemoryMB =
-conf.getInt(YarnConfiguration.NM_SYSTEM_RESERVED_PMEM_MB, -1);
+long physicalMemoryMB = (plugin.getPhysicalMemorySize() / (1024 * 1024));
+long hadoopHeapSizeMB = (Runtime.getRuntime().maxMemory()
+/ (1024 * 1024));
+long containerPhysicalMemoryMB = (long) (0.8f
+* (physicalMemoryMB - (2 * hadoopHeapSizeMB)));
+long reservedMemoryMB = conf
+.getInt(YarnConfiguration.NM_SYSTEM_RESERVED_PMEM_MB, -1);
if (reservedMemoryMB != -1) {
containerPhysicalMemoryMB = physicalMemoryMB - reservedMemoryMB;
}
-if(containerPhysicalMemoryMB <= 0) {
+if (containerPhysicalMemoryMB <= 0) {
LOG.error("Calculated memory for YARN containers is too low."
+ " Node memory is " + physicalMemoryMB
-+ " MB, system reserved memory is "
-+ reservedMemoryMB + " MB.");
++ " MB, system reserved memory is " + reservedMemoryMB + " MB.");
}
containerPhysicalMemoryMB = Math.max(containerPhysicalMemoryMB, 0);
memoryMb = containerPhysicalMemoryMB;
@@ -365,8 +363,8 @@ public static Resource getNodeResources(Configuration configuration) {
}
ResourceInformation memResInfo = resourceInformation.get(memory);
if(memResInfo.getValue() == 0) {
-ret.setMemory(getContainerMemoryMB(conf));
-LOG.debug("Set memory to " + ret.getMemory());
+ret.setMemorySize(getContainerMemoryMB(conf));
+LOG.debug("Set memory to " + ret.getMemorySize());
}
}
if (resourceInformation.containsKey(vcores)) {

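When yarn.nodemanager.resource.memory-mb is unset (-1) and hardware detection is enabled, getContainerMemoryMBInternal above derives a default: 80% of physical memory after subtracting twice the JVM heap, unless a system-reserved amount is configured. A worked example of that arithmetic with made-up node numbers:

public class ContainerMemoryMathSketch {
  public static void main(String[] args) {
    long physicalMemoryMB = 16384; // assumed 16 GB node
    long hadoopHeapSizeMB = 1024;  // assumed 1 GB NodeManager heap
    long reservedMemoryMB = -1;    // -1: no explicit reservation configured

    // Same shape as the formula above: 0.8 * (physical - 2 * heap).
    long containerMB =
        (long) (0.8f * (physicalMemoryMB - (2 * hadoopHeapSizeMB)));
    if (reservedMemoryMB != -1) {
      containerMB = physicalMemoryMB - reservedMemoryMB;
    }
    containerMB = Math.max(containerMB, 0);
    // 0.8 * (16384 - 2048) = 11468 MB available for containers.
    System.out.println(containerMB + " MB for containers");
  }
}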

@@ -172,7 +172,7 @@ public void testGetContainerMemoryMB() throws Exception {
YarnConfiguration conf = new YarnConfiguration();
conf.setBoolean(YarnConfiguration.NM_ENABLE_HARDWARE_CAPABILITY_DETECTION,
true);
-int mem = NodeManagerHardwareUtils.getContainerMemoryMB(null, conf);
+long mem = NodeManagerHardwareUtils.getContainerMemoryMB(null, conf);
Assert.assertEquals(YarnConfiguration.DEFAULT_NM_PMEM_MB, mem);
mem = NodeManagerHardwareUtils.getContainerMemoryMB(plugin, conf);


@@ -58,7 +58,6 @@
import org.apache.hadoop.yarn.ipc.YarnRPC;
import org.apache.hadoop.yarn.security.AMRMTokenIdentifier;
import org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger.AuditConstants;
-import org.apache.hadoop.yarn.server.resourcemanager.resource.ResourceProfilesManager;
import org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMApp;
import org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl;
import org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.AMLivelinessMonitor;


@@ -239,7 +239,7 @@ protected void serviceInit(Configuration conf) throws Exception {
new ResourceProfilesManagerImpl();
resourceProfilesManager.init(conf);
rmContext.setResourceProfilesManager(resourceProfilesManager);
this.configurationProvider =
ConfigurationProviderFactory.getConfigurationProvider(conf);
this.configurationProvider.init(this.conf);


@@ -45,6 +45,10 @@
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantReadWriteLock;
+/**
+ * PBImpl class to handle all proto related implementation for
+ * ResourceProfilesManager.
+ */
public class ResourceProfilesManagerImpl implements ResourceProfilesManager {
private static final Log LOG =
@@ -66,8 +70,8 @@ public class ResourceProfilesManagerImpl implements ResourceProfilesManager {
protected final ReentrantReadWriteLock.ReadLock readLock;
protected final ReentrantReadWriteLock.WriteLock writeLock;
-private static final String[] MANDATORY_PROFILES =
-{ DEFAULT_PROFILE, MINIMUM_PROFILE, MAXIMUM_PROFILE };
+private static final String[] MANDATORY_PROFILES = {DEFAULT_PROFILE,
+MINIMUM_PROFILE, MAXIMUM_PROFILE};
private static final String FEATURE_NOT_ENABLED_MSG =
"Resource profile is not enabled, please "
+ "enable resource profile feature before using its functions."

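Putting the pieces above together, a hedged sketch of the minimal setup this manager appears to expect: the feature flag must be on, the source file must define the mandatory minimum, default, and maximum profiles, and clients read the result as a name-to-Resource map. The "resource-profiles.json" file name and the getResourceProfiles() accessor are assumptions based on the surrounding code:

import java.util.Map;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.conf.YarnConfiguration;
import org.apache.hadoop.yarn.server.resourcemanager.resource.ResourceProfilesManager;
import org.apache.hadoop.yarn.server.resourcemanager.resource.ResourceProfilesManagerImpl;

public class ResourceProfilesSetupSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new YarnConfiguration();
    // Without this flag the manager fails with the message quoted above.
    conf.setBoolean(YarnConfiguration.RM_RESOURCE_PROFILES_ENABLED, true);
    // Assumed file name; it must define minimum, default, and maximum.
    conf.set(YarnConfiguration.RM_RESOURCE_PROFILES_SOURCE_FILE,
        "resource-profiles.json");

    ResourceProfilesManager manager = new ResourceProfilesManagerImpl();
    manager.init(conf);
    Map<String, Resource> profiles = manager.getResourceProfiles();
    System.out.println("loaded profiles: " + profiles.keySet());
  }
}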

@@ -29,6 +29,9 @@
import java.util.HashMap;
import java.util.Map;
+/**
+ * Common test class for resource profile related tests.
+ */
public class TestResourceProfiles {
@Test
@@ -86,9 +89,8 @@ public void testLoadProfilesMissingMandatoryProfile() throws Exception {
Configuration conf = new Configuration();
conf.setBoolean(YarnConfiguration.RM_RESOURCE_PROFILES_ENABLED, true);
-String[] badProfiles = { "profiles/illegal-profiles-1.json",
-"profiles/illegal-profiles-2.json",
-"profiles/illegal-profiles-3.json" };
+String[] badProfiles = {"profiles/illegal-profiles-1.json",
+"profiles/illegal-profiles-2.json", "profiles/illegal-profiles-3.json"};
for (String file : badProfiles) {
ResourceProfilesManager manager = new ResourceProfilesManagerImpl();
conf.set(YarnConfiguration.RM_RESOURCE_PROFILES_SOURCE_FILE, file);