As part of work done under HADOOP-19120 ([ABFS]: ApacheHttpClient adaptation as network library),
Apache HTTP Client was introduced as an alternative network library that can be used with the ABFS driver. Earlier, the JDK HTTP client was the only supported network library.
Apache HTTP Client was found to be more helpful in terms of the controls and knobs it provides for managing the network aspects of the driver. Hence, it was made the default network client for the ABFS driver.
Recently, while running scale workloads, we observed a regression in which unexpected wait times occurred while establishing connections. A possible fix has been identified and we are working on getting it in.
A possible NPE scenario was also identified in the new network client code.
Until the code fixes are in and the whole Apache client flow has been revalidated, we would like to make the JDK client the default again. Apache HTTP Client support will still be there, but it will be disabled behind a config.
Contributed by: manika137
1. The class WrappedIO has been extended with more filesystem operations
- openFile()
- PathCapabilities
- StreamCapabilities
- ByteBufferPositionedReadable
All these static methods raise UncheckedIOExceptions rather than
checked ones.
2. The adjacent class org.apache.hadoop.io.wrappedio.WrappedStatistics
provides similar access to IOStatistics/IOStatisticsContext classes
and operations.
Allows callers to:
* Get a serializable IOStatisticsSnapshot from an IOStatisticsSource or
IOStatistics instance
* Save an IOStatisticsSnapshot to file
* Convert an IOStatisticsSnapshot to JSON
* Given an object which may be an IOStatisticsSource, return an object
whose toString() value is a dynamically generated, human readable summary.
This is for logging.
* Separate getters to the different sections of IOStatistics.
* Mean values are returned as a Map.Entry<Long, Long> of (samples, sum)
from which means may be calculated.
There are examples of the dynamic bindings to these classes in:
org.apache.hadoop.io.wrappedio.impl.DynamicWrappedIO
org.apache.hadoop.io.wrappedio.impl.DynamicWrappedStatistics
These use DynMethods and other classes in the package
org.apache.hadoop.util.dynamic which are based on the
Apache Parquet equivalents.
This makes it easier to re-implement these bindings in that library and
in others which have their own fork of the classes (example: Apache Iceberg).
3. The openFile() option "fs.option.openfile.read.policy" has
added specific file format policies for the core filetypes
* avro
* columnar
* csv
* hbase
* json
* orc
* parquet
S3A chooses the appropriate sequential/random policy for each of these.
A policy `parquet, columnar, vector, random, adaptive` will use the parquet policy for
any filesystem aware of it, falling back to the first entry in the list which
the specific version of the filesystem recognizes (see the sketch after this list).
4. New Path capability fs.capability.virtual.block.locations
Indicates that locations are generated client side
and don't refer to real hosts.
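A minimal sketch of using the read policy option (item 3) and the new path
capability (item 4) from Java, assuming the standard FileSystem.openFile()
builder and hasPathCapability() probe; the file path is illustrative:

    import java.util.concurrent.CompletableFuture;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ReadPolicyExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Path path = new Path(args[0]);              // e.g. a parquet file
        FileSystem fs = path.getFileSystem(conf);

        // Ask for the parquet read policy; filesystems which do not know it
        // fall back to the first entry in the list which they recognize.
        CompletableFuture<FSDataInputStream> future = fs.openFile(path)
            .opt("fs.option.openfile.read.policy",
                "parquet, columnar, vector, random, adaptive")
            .build();

        try (FSDataInputStream in = future.get()) {
          // New capability: are block locations generated client side?
          boolean virtualLocations =
              fs.hasPathCapability(path, "fs.capability.virtual.block.locations");
          System.out.println("virtual block locations: " + virtualLocations);
          System.out.println("opened " + path + " with stream " + in);
        }
      }
    }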
Contributed by Steve Loughran
Apache httpclient 4.5.x is the new default implementation of http connections;
this supports a large configurable pool of connections along with
the ability to limit their lifespan.
The networking library can be chosen using the configuration
option fs.azure.networking.library
The supported values are
- APACHE_HTTP_CLIENT : Use Apache HttpClient [Default]
- JDK_HTTP_URL_CONNECTION : Use JDK networking library
Important: unless the networking library is switched back to
the JDK, the Apache httpcore and httpclient JARs must be on the classpath.
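For example, to switch back to the JDK client explicitly, a minimal sketch
using the Hadoop Configuration API (the account URI is illustrative):

    import java.net.URI;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;

    public class NetworkingLibraryExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Supported values: APACHE_HTTP_CLIENT and JDK_HTTP_URL_CONNECTION.
        conf.set("fs.azure.networking.library", "JDK_HTTP_URL_CONNECTION");
        try (FileSystem fs = FileSystem.newInstance(
            new URI("abfs://container@account.dfs.core.windows.net/"), conf)) {
          System.out.println("ABFS filesystem created: " + fs.getUri());
        }
      }
    }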
Contributed by Pranav Saxena
Add support for Azure Active Directory (Azure AD) workload identities, which integrate with Kubernetes's native capabilities to federate with any external identity provider.
Contributed By: Anuj Modi
Customer-provided-key (CPK) configs are not allowed with non-hierarchical-namespace (non-HNS) accounts for ABFS. This patch prevents ABFS initialization for non-HNS accounts if CPK configs are provided.
Contributed by: Pranav Saxena
Adds support for metric collection at the filesystem instance level.
Metrics are pushed to the store upon the closure of a filesystem instance, encompassing all operations
that utilized that specific instance.
Collected Metrics:
- Number of successful requests without any retries.
- Count of requests that succeeded after a specified number of retries (x retries).
- Request count subjected to throttling.
- Number of requests that failed despite exhausting all retry attempts, etc.
Implementation Details:
Incorporated logic in the AbfsClient to facilitate metric pushing through an additional request.
This occurs in scenarios where no requests are sent to the backend for a defined idle period.
By implementing these enhancements, we ensure comprehensive monitoring and analysis of filesystem interactions, enabling a deeper understanding of success rates, retry scenarios, throttling instances, and exhaustive failure scenarios. Additionally, the AbfsClient logic ensures that metrics are proactively pushed even during idle periods, maintaining a continuous and accurate representation of filesystem performance.
Contributed by Anmol Asrani
Applications can create a BulkDelete instance from a
BulkDeleteSource; the BulkDelete interface provides
pageSize(), the maximum number of entries which can be
deleted in one call, and a bulkDelete(Collection paths)
method which can take a collection up to pageSize() long.
This is optimized for object stores with bulk delete APIs;
the S3A connector will offer the page size of
fs.s3a.bulk.delete.page.size unless bulk delete has
been disabled.
Even with a page size of 1, the S3A implementation is
more efficient than delete(path)
as there are no safety checks for the path being a directory
or probes for the need to recreate directories.
The interface BulkDeleteSource is implemented by
all FileSystem implementations, with a page size
of 1 and mapped to delete(pathToDelete, false).
This means that callers do not need to have special
case handling for object stores versus classic filesystems.
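A hedged sketch of how an application might use this API. The factory method
name createBulkDelete() and the file name "part-0000" are assumptions of this
sketch; pageSize() and bulkDelete(Collection paths) are as described above.

    import java.util.Arrays;
    import java.util.List;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.BulkDelete;
    import org.apache.hadoop.fs.BulkDeleteSource;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class BulkDeleteExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Path base = new Path(args[0]);
        FileSystem fs = base.getFileSystem(conf);

        // Every FileSystem is a BulkDeleteSource; the factory method name
        // createBulkDelete() used here is an assumption of this sketch.
        BulkDeleteSource source = fs;
        BulkDelete bulkDelete = source.createBulkDelete(base);

        // pageSize() is 1 on classic filesystems, larger on stores such as S3A.
        int pageSize = bulkDelete.pageSize();
        System.out.println("bulk delete page size under " + base + ": " + pageSize);

        // Delete one page of paths; a single path fits any page size.
        List<Path> page = Arrays.asList(new Path(base, "part-0000"));
        bulkDelete.bulkDelete(page);
      }
    }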
To aid use through reflection APIs, the class
org.apache.hadoop.io.wrappedio.WrappedIO
has been created with "reflection friendly" methods.
Contributed by Mukund Thakur and Steve Loughran
Improve task commit resilience everywhere
and add an option to reduce delete IO requests on
job cleanup (relevant for ABFS and HDFS).
Task Commit Resilience
----------------------
Task manifest saving is re-attempted on failure; the number of
attempts made is configurable with the option:
mapreduce.manifest.committer.manifest.save.attempts
* The default is 5.
* The minimum is 1; asking for less is ignored.
* A retry policy adds 500ms of sleep per attempt.
* Move from classic rename() to commitFile() to rename the file,
after calling getFileStatus() to get its length and possibly etag.
This becomes a rename() on gcs/hdfs anyway, but on abfs it does reach
the ResilientCommitByRename callbacks, which report the
outcome to the caller...which is then logged at WARN.
* New statistic task_stage_save_summary_file to distinguish from
other saving operations (job success/report file).
This is only saved to the manifest on task commit retries, and
provides statistics on all previous unsuccessful attempts to save
the manifests
+ test changes to match the codepath changes, including improvements
in fault injection.
Directory size for deletion
---------------------------
New option
mapreduce.manifest.committer.cleanup.parallel.delete.base.first
This makes an initial attempt at deleting the base directory, only falling
back to parallel deletes if that delete times out.
This option is disabled by default; consider enabling it for abfs to
reduce IO load. Consult the documentation for more details.
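For example, both options described in this entry can be set on the job
configuration (a sketch; the values shown are only illustrative):

    import org.apache.hadoop.conf.Configuration;

    public class ManifestCommitterOptions {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Retry manifest saving up to 7 times instead of the default 5.
        conf.setInt("mapreduce.manifest.committer.manifest.save.attempts", 7);
        // Try deleting the base directory first during cleanup (off by default);
        // worth considering on abfs to reduce IO load.
        conf.setBoolean(
            "mapreduce.manifest.committer.cleanup.parallel.delete.base.first", true);
        System.out.println("save attempts: " + conf.getInt(
            "mapreduce.manifest.committer.manifest.save.attempts", 5));
      }
    }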
Success file printing
---------------------
The command to print a JSON _SUCCESS file from this committer and
any S3A committer is now something which can be invoked from
the mapred command:
mapred successfile <path to file>
Contributed by Steve Loughran
ABFS has a client-side throttling mechanism which works on the metrics collected
from past requests.
When requests fail due to server-side throttling, it updates its
metrics and recalculates any client-side backoff.
The choice of which requests should be used to compute the client-side
backoff interval is based on the HTTP status code (see the sketch after the list):
- Status code in 2xx range: Successful Operations should contribute.
- Status code in 3xx range: Redirection Operations should not contribute.
- Status code in 4xx range: User Errors should not contribute.
- Status code is 503: Throttling errors should contribute only if they
  are due to a client limit breach, as follows:
* 503, Ingress Over Account Limit: Should Contribute
* 503, Egress Over Account Limit: Should Contribute
* 503, TPS Over Account Limit: Should Contribute
* 503, Other Server Throttling: Should not Contribute.
- Status code in 5xx range other than 503: Should not Contribute.
- IOException and UnknownHostExceptions: Should not Contribute.
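A rough illustration of that selection rule; this is a hypothetical helper,
not the actual ABFS throttling-intercept code, and the 503 message strings
are illustrative:

    public final class ThrottlingContributionRule {

      private ThrottlingContributionRule() {
      }

      /** Should a completed request contribute to client-side backoff metrics? */
      public static boolean contributesToBackoff(int statusCode, String serverErrorMessage) {
        if (statusCode >= 200 && statusCode < 300) {
          return true;                       // successful operations contribute
        }
        if (statusCode == 503 && serverErrorMessage != null) {
          // only throttling caused by breaching account/client limits contributes
          return serverErrorMessage.contains("Ingress Over Account Limit")
              || serverErrorMessage.contains("Egress Over Account Limit")
              || serverErrorMessage.contains("TPS Over Account Limit");
        }
        // 3xx, 4xx, other 5xx and other 503 throttling do not contribute;
        // IOExceptions/UnknownHostExceptions never reach this check.
        return false;
      }
    }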
Contributed by Anuj Modi
Clarifies behaviour of VectorIO methods with contract tests as well as
specification.
* Add precondition range checks to all implementations
* Identify and fix a bug where direct buffer reads were broken
(HADOOP-19101; this surfaced in ABFS contract tests)
* Logging in VectoredReadUtils.
* TestVectoredReadUtils verifies validation logic.
* FileRangeImpl toString() improvements
* CombinedFileRange tracks bytes in range which are wanted;
toString() output logs this.
HDFS
* Add test TestHDFSContractVectoredRead
ABFS
* Add test ITestAbfsFileSystemContractVectoredRead
S3A
* checks for vector IO being stopped in all iterative
vector operations, including draining
* maps read() returning -1 to failure
* passes in file length to validation
* Error reporting to only completeExceptionally() those ranges
which had not yet read data in.
* Improved logging.
readVectored()
* made synchronized. This is only for the invocation;
the actual async retrieves are unsynchronized.
* closes input stream on invocation
* switches to random IO, so avoids keeping any long-lived connection around.
+ AbstractSTestS3AHugeFiles enhancements.
+ ADDENDUM: test fix in ITestS3AContractVectoredRead
Contains: HADOOP-19101. Vectored Read into off-heap buffer broken in fallback
implementation
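A minimal usage sketch of the vectored read API covered above, assuming the
standard FileRange factory and PositionedReadable.readVectored() signature;
the offsets and lengths are illustrative and must lie within the file:

    import java.nio.ByteBuffer;
    import java.util.Arrays;
    import java.util.List;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileRange;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class VectoredReadExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Path path = new Path(args[0]);
        FileSystem fs = path.getFileSystem(conf);

        // Two ranges to fetch from the file.
        List<FileRange> ranges = Arrays.asList(
            FileRange.createFileRange(0, 4096),
            FileRange.createFileRange(1_048_576, 8192));

        try (FSDataInputStream in = fs.open(path)) {
          // The invocation is synchronized; the retrievals themselves are async.
          in.readVectored(ranges, ByteBuffer::allocateDirect);
          for (FileRange range : ranges) {
            ByteBuffer data = range.getData().get();   // wait for this range
            System.out.println("read " + data.remaining()
                + " bytes at offset " + range.getOffset());
          }
        }
      }
    }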
Contributed by Steve Loughran
This reverts most of
HADOOP-18869: [ABFS] Fix behavior of a File System APIs on root path (#6003).
Calling getXAttr("/") or setXAttr("/") on an abfs container will fail with
`Operation failed: "The request URI is invalid.", HTTP 400 Bad Request`
This change is to ensure:
* Consistency across ADLS clients
* Consistency across authentication mechanisms.
Contributed by Anuj Modi
Co-authored-by: Wei-Chiu Chuang <weichiu@apache.org>
Includes HADOOP-18354. Upgrade reload4j to 1.22.2 due to XXE vulnerability (#4607).
Log4j 1.2.17 has been replaced by reload4j 1.22.2
SLF4J is at 1.7.36
Address JDK bug JDK-8314978 related to handling of HTTP 100
responses.
https://bugs.openjdk.org/browse/JDK-8314978
In AbfsHttpOperation, after sendRequest() we call the processResponse()
method from AbfsRestOperation.
Even if conn.getOutputStream() fails due to an expect-100 error,
we consume the exception and let the code go ahead.
This may call getHeaderField() / getHeaderFields() / getHeaderFieldLong() after
getOutputStream() has failed. These invocations all lead to server calls.
This commit aims to prevent this.
If connection.getOutputStream() fails due to an Expect-100 error,
the ABFS client does not invoke getHeaderField(), getHeaderFields(),
getHeaderFieldLong() or getInputStream().
getResponseCode() is safe, as on failure it sets the
responseCode variable in the HttpUrlConnection object.
Contributed by Pranav Saxena
The option fs.azure.footer.read.request.size sets the size of the footer to
read and cache; the default value of 524288 has been measured to
be good for most workloads running on parquet, ORC and similar file formats.
Contributed by Anuj Modi
The default value for fs.azure.data.blocks.buffer is changed from "disk" to "bytebuffer"
This will speed up writing to Azure storage, at the risk of running out of memory,
especially if there are many threads writing to abfs at the same time and the
upload bandwidth is limited.
If jobs do run out of memory writing to abfs, change the option back to "disk".
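For example, a job hitting memory pressure could switch back as follows (a
sketch using the Hadoop Configuration API):

    import org.apache.hadoop.conf.Configuration;

    public class AbfsBlockBufferOption {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Default is now "bytebuffer"; revert to "disk" if uploads run out of memory.
        conf.set("fs.azure.data.blocks.buffer", "disk");
        System.out.println("fs.azure.data.blocks.buffer = "
            + conf.get("fs.azure.data.blocks.buffer"));
      }
    }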
Contributed by Anmol Asrani
To avoid the ABFS instance getting closed due to GC while its streams are still working, attach the ABFS instance to a backReference opaque object and pass it down to the streams, so that there is a hard reference for as long as the streams are in use.
Contributed by: Mehakmeet Singh