dbe2d61258
ABFS has a client-side throttling mechanism that works on metrics collected from past requests. When requests fail due to server-side throttling, it updates its metrics and recalculates any client-side backoff. The choice of which requests should be used to compute the client-side backoff interval is based on the HTTP status code:

- Status code in the 2xx range (successful operations): should contribute.
- Status code in the 3xx range (redirection): should not contribute.
- Status code in the 4xx range (user errors): should not contribute.
- Status code 503 (throttling error): should contribute only if the throttling is due to a client limit breach, as follows:
  * 503, Ingress Over Account Limit: should contribute.
  * 503, Egress Over Account Limit: should contribute.
  * 503, TPS Over Account Limit: should contribute.
  * 503, Other Server Throttling: should not contribute.
- Status code in the 5xx range other than 503: should not contribute.
- IOException and UnknownHostException: should not contribute.

Contributed by Anuj Modi
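To make the decision rule concrete, here is a minimal Java sketch of the contribution check described above. The class and method names (`ThrottlingMetricsFilter`, `shouldContribute`) and the account-limit error strings are hypothetical illustrations, not the actual ABFS classes or service error text.

```java
import java.util.Set;

// Hypothetical sketch of the contribution rule above; not the real ABFS code.
public final class ThrottlingMetricsFilter {

  // Assumed placeholder wording for the three account-limit breaches; the
  // actual service error messages may differ.
  private static final Set<String> ACCOUNT_LIMIT_ERRORS = Set.of(
      "Ingress is over the account limit",
      "Egress is over the account limit",
      "TPS is over the account limit");

  private ThrottlingMetricsFilter() {
  }

  /**
   * Decides whether a completed request should update the client-side
   * throttling metrics used to compute the backoff interval.
   *
   * @param statusCode    HTTP status code, or -1 when the request failed with
   *                      an IOException/UnknownHostException before a
   *                      response was received.
   * @param serverMessage error message returned by the service, may be null.
   * @return true if the request should contribute to the backoff metrics.
   */
  public static boolean shouldContribute(int statusCode, String serverMessage) {
    if (statusCode >= 200 && statusCode < 300) {
      return true;  // 2xx: successful operations contribute
    }
    if (statusCode == 503) {
      // 503 contributes only when the throttling was caused by the client
      // breaching an account limit (ingress, egress, or TPS).
      return serverMessage != null
          && ACCOUNT_LIMIT_ERRORS.contains(serverMessage.trim());
    }
    // 3xx redirects, 4xx user errors, other 5xx responses, and network
    // exceptions (statusCode == -1) do not contribute.
    return false;
  }
}
```

In a client following this scheme, such a check would typically run once per completed request, with contributing requests feeding the metrics that drive the next backoff calculation.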
Top-level contents of the repository:

.github
.yetus
dev-support
hadoop-assemblies
hadoop-build-tools
hadoop-client-modules
hadoop-cloud-storage-project
hadoop-common-project
hadoop-dist
hadoop-hdfs-project
hadoop-mapreduce-project
hadoop-maven-plugins
hadoop-minicluster
hadoop-project
hadoop-project-dist
hadoop-tools
hadoop-yarn-project
licenses
licenses-binary
.asf.yaml
.gitattributes
.gitignore
BUILDING.txt
LICENSE-binary
LICENSE.txt
NOTICE-binary
NOTICE.txt
pom.xml
README.txt
start-build-env.sh
For the latest information about Hadoop, please visit our website at: http://hadoop.apache.org/ and our wiki, at: https://cwiki.apache.org/confluence/display/HADOOP/