Add support for S3 client side encryption (CSE).
CSE can be configured in two modes:
- CSE-KMS where keys are provided by AWS KMS
- CSE-CUSTOM where custom keys are provided by implementing
a custom keyring.
CSE requires an encryption library:
amazon-s3-encryption-client-java.jar
This is _not_ included in the shaded bundle.jar
and is released separately.
The version used is currently 3.1.1
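A minimal configuration sketch for the KMS mode. This assumes the existing fs.s3a.encryption.algorithm option accepts the value "CSE-KMS" and that the KMS key is supplied through fs.s3a.encryption.key, as it is for SSE-KMS; consult the S3A encryption documentation for the CSE-CUSTOM keyring binding.

    // Sketch: enable CSE-KMS on an S3A configuration.
    Configuration conf = new Configuration();
    conf.set("fs.s3a.encryption.algorithm", "CSE-KMS");
    // Placeholder KMS key ARN; substitute your own key.
    conf.set("fs.s3a.encryption.key",
        "arn:aws:kms:us-east-1:000000000000:key/example-key-id");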
Contributed by Syed Shameerur Rahman.
This sets a different timeout for data upload PUT/POST calls than for all
other requests, so that slow block uploads do not trigger timeouts
as rapidly as normal requests. This was always the behavior
in the V1 AWS SDK; for V2 we have to explicitly set it on the operations
we want to give extended timeouts.
Option: fs.s3a.connection.part.upload.timeout
Default: 15m
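A sketch of raising it further for very slow networks:

    Configuration conf = new Configuration();
    // Give each part upload PUT/POST up to 30 minutes.
    conf.set("fs.s3a.connection.part.upload.timeout", "30m");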
Contributed by Steve Loughran
* HttpReferrerAuditHeader is thread safe, copying the lists/maps passed
in and using synchronized methods when necessary.
* All exceptions raised when building referrer header are caught
and swallowed.
* The first such error is logged at warn;
* all errors plus stack traces are logged at debug.
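The thread-safety pattern used is the standard one of defensive copies plus synchronized mutators; an illustrative sketch (not the actual class):

    import java.util.ArrayList;
    import java.util.List;

    // Illustrative only: copy mutable inputs, synchronize state changes.
    public final class CopyingHolder {
      private final List<String> attributes;

      public CopyingHolder(List<String> attributes) {
        // Defensive copy: later mutation by the caller is not visible here.
        this.attributes = new ArrayList<>(attributes);
      }

      public synchronized void add(String attribute) {
        attributes.add(attribute);
      }

      public synchronized List<String> snapshot() {
        return new ArrayList<>(attributes);
      }
    }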
Contributed by Steve Loughran
Adds a new option
fs.s3a.cross.region.access.enabled
which is true by default.
This makes cross-region access a separate config option, enabling/disabling it irrespective of whether a region/endpoint is set.
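A sketch of disabling it where strict single-region access is required:

    Configuration conf = new Configuration();
    // Disable SDK cross-region redirection; requests stay in the
    // configured region.
    conf.setBoolean("fs.s3a.cross.region.access.enabled", false);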
Contributed by Syed Shameerur Rahman
This moves Hadoop to Apache commons-collections4.
Apache commons-collections has been removed and is completely banned from the source code.
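Downstream code which picked up commons-collections transitively from Hadoop needs the matching package change, for example:

    // The 3.x package org.apache.commons.collections is gone;
    // collections4 is the replacement.
    import java.util.Collections;
    import org.apache.commons.collections4.CollectionUtils;

    boolean empty = CollectionUtils.isEmpty(Collections.emptyList());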
Contributed by Nihal Jain
Disables all logging below error in the AWS SDK Transfer Manager.
This is done in ClientManagerImpl construction so is automatically done
during S3A FS initialization.
ITests verify that
* It is possible to restore the warning log. This verifies the validity of
the test suite, and will identify when an SDK update fixes this regression.
* Constructing an S3A FS instance will disable the logging.
The log manipulation code is lifted from Cloudstore, where it was used to
dynamically enable logging. It uses reflection to load the Log4J binding;
all uses of the API catch and swallow exceptions.
This is needed to avoid failures when running against different log backends.
This is an emergency fix; we could come up with a better design for
the reflection-based code using the new DynMethods classes.
But this is based on working code, which is always good.
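A sketch of the reflection technique (Log4J 1.x class and method names; every failure is swallowed so other log backends are unaffected):

    // Set a logger's level via reflection: no compile-time dependency
    // on the Log4J binding, and a no-op on any other backend.
    static void setLogLevel(String logName, String levelName) {
      try {
        Class<?> loggerClass = Class.forName("org.apache.log4j.Logger");
        Class<?> levelClass = Class.forName("org.apache.log4j.Level");
        Object logger = loggerClass.getMethod("getLogger", String.class)
            .invoke(null, logName);
        Object level = levelClass.getField(levelName).get(null);
        loggerClass.getMethod("setLevel", levelClass).invoke(logger, level);
      } catch (Exception | LinkageError e) {
        // Different/absent backend: catch and swallow.
      }
    }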
Contributed by Steve Loughran
This is a major change which handles 400 error responses when uploading
large files from memory heap/buffer (or staging committer) and the remote S3
store returns a 500 response from an upload of a block in a multipart upload.
The SDK's own streaming code seems unable to fully replay the upload;
it attempts to, but then blocks, and the S3 store returns a 400 response
"Your socket connection to the server was not read from or written to
within the timeout period. Idle connections will be closed.
(Service: S3, Status Code: 400...)"
There is an option to control whether or not the S3A client itself
attempts to retry on a 50x error other than 503 throttling events
(which are independently processed as before)
Option: fs.s3a.retry.http.5xx.errors
Default: true
500 errors are very rare from standard AWS S3, which has a five nines
SLA. It may be more common against S3 Express which has lower
guarantees.
Third party stores have unknown guarantees, and the exception may
indicate a bad server configuration. Consider setting
fs.s3a.retry.http.5xx.errors to false when working with
such stores.
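For example, when targeting a third-party store:

    Configuration conf = new Configuration();
    // Fail fast on 500 responses rather than retrying; they may indicate
    // a server misconfiguration on third-party stores.
    conf.setBoolean("fs.s3a.retry.http.5xx.errors", false);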
Significant code changes:
There is now a custom set of implementations of
software.amazon.awssdk.http.ContentStreamProvider in
the class org.apache.hadoop.fs.s3a.impl.UploadContentProviders.
These:
* Restart on failures.
* Do not copy buffers/byte buffers into new private byte arrays,
so avoid exacerbating memory problems.
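The SDK contract here is simply "supply a fresh InputStream on demand"; a minimal sketch of a replayable provider over an existing buffer (illustrative, not the production code):

    import java.io.ByteArrayInputStream;
    import java.io.InputStream;
    import software.amazon.awssdk.http.ContentStreamProvider;

    // Every retry gets a new stream over the same byte array;
    // the array itself is never copied.
    final class ByteArrayContentProvider implements ContentStreamProvider {
      private final byte[] data;

      ByteArrayContentProvider(byte[] data) {
        this.data = data;
      }

      @Override
      public InputStream newStream() {
        // Restartable: always positioned at the start of the data.
        return new ByteArrayInputStream(data);
      }
    }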
There are new IOStatistics for specific HTTP error codes; these are collected
even when all recovery is performed within the SDK.
S3ABlockOutputStream has major changes, including handling of
Thread.interrupt() on the main thread, which now triggers and briefly
awaits cancellation of any ongoing uploads.
If the writing thread is interrupted in close(), it is mapped to
an InterruptedIOException. Applications like Hive and Spark must
catch these after cancelling a worker thread.
Contributed by Steve Loughran
1. The class WrappedIO has been extended with more filesystem operations
- openFile()
- PathCapabilities
- StreamCapabilities
- ByteBufferPositionedReadable
All these static methods raise UncheckedIOExceptions rather than
checked ones.
2. The adjacent class org.apache.hadoop.io.wrappedio.WrappedStatistics
provides similar access to IOStatistics/IOStatisticsContext classes
and operations.
Allows callers to:
* Get a serializable IOStatisticsSnapshot from an IOStatisticsSource or
IOStatistics instance
* Save an IOStatisticsSnapshot to file
* Convert an IOStatisticsSnapshot to JSON
* Given an object which may be an IOStatisticsSource, return an object
whose toString() value is a dynamically generated, human readable summary.
This is for logging.
* Separate getters to the different sections of IOStatistics.
* Mean values are returned as a Map.Entry<Long, Long> of (samples, sum)
from which means may be calculated.
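A sketch of two of these operations in terms of the underlying statistics APIs, given an open FSDataInputStream in and an SLF4J LOG:

    import org.apache.hadoop.fs.statistics.IOStatisticsLogging;
    import org.apache.hadoop.fs.statistics.IOStatisticsSnapshot;
    import org.apache.hadoop.fs.statistics.IOStatisticsSupport;

    // Serializable snapshot of the stream's statistics.
    IOStatisticsSnapshot snapshot =
        IOStatisticsSupport.snapshotIOStatistics(in.getIOStatistics());

    // Lazily evaluated, human-readable summary for logging.
    Object summary = IOStatisticsLogging.demandStringifyIOStatisticsSource(in);
    LOG.info("IOStatistics: {}", summary);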
There are examples of the dynamic bindings to these classes in:
org.apache.hadoop.io.wrappedio.impl.DynamicWrappedIO
org.apache.hadoop.io.wrappedio.impl.DynamicWrappedStatistics
These use DynMethods and other classes in the package
org.apache.hadoop.util.dynamic which are based on the
Apache Parquet equivalents.
This simplifies re-implementing these in that library, and in others
which have their own fork of the classes (example: Apache Iceberg).
3. The openFile() option "fs.option.openfile.read.policy" now has
specific file format policies for the core filetypes:
* avro
* columnar
* csv
* hbase
* json
* orc
* parquet
S3A chooses the appropriate sequential/random policy for each of these.
A policy list `parquet, columnar, vector, random, adaptive` will use the parquet policy for
any filesystem aware of it, falling back to the first entry in the list which
the specific version of the filesystem recognizes.
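For example, a columnar reader can declare its access pattern when opening a file (sketch; fs is a FileSystem, and FutureIO.awaitFuture is the standard helper for the returned future):

    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.util.functional.FutureIO;

    // The store picks the first policy in the list which it recognizes.
    FSDataInputStream in = FutureIO.awaitFuture(
        fs.openFile(new Path("s3a://bucket/data.parquet"))
            .opt("fs.option.openfile.read.policy",
                "parquet, columnar, vector, random, adaptive")
            .build());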
4. New Path capability fs.capability.virtual.block.locations
Indicates that locations are generated client side
and don't refer to real hosts.
Contributed by Steve Loughran
This is a followup to #6543 which ensures all tests pass in configurations where
fs.s3a.performance.flags is set to "*" or contains "mkdir".
Contributed by VJ Jasani
If the flag list in fs.s3a.performance.flags includes "mkdir",
then the safety check of walking up the tree to look for a parent directory
(done to verify a directory isn't being created under a file) is skipped.
This saves the cost of multiple list operations.
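For example:

    Configuration conf = new Configuration();
    // Trusted job setup: skip the parent-path probes on mkdirs().
    conf.set("fs.s3a.performance.flags", "mkdir");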
Contributed by Viraj Jasani
1. Configuration adds new method `getEnumSet()` to get a set of enum values from
a configuration string.
<E extends Enum<E>> EnumSet<E> getEnumSet(String key, Class<E> enumClass, boolean ignoreUnknown)
Whitespace is ignored, case is ignored and the value "*" is mapped to "all values of the enum".
If "ignoreUnknown" is true then when parsing, unknown values are ignored.
This is recommended for forward compatiblity with later versions.
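A usage sketch against the S3A performance flags described below, given a Configuration conf (the PerformanceFlagEnum name is taken from that description; its import is omitted here):

    import java.util.EnumSet;

    // "*" maps to all values; unknown entries are ignored for
    // forward compatibility.
    EnumSet<PerformanceFlagEnum> flags = conf.getEnumSet(
        "fs.s3a.performance.flags", PerformanceFlagEnum.class, true);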
2. This support is implemented in org.apache.hadoop.fs.s3a.impl.ConfigurationHelper; it can be used
elsewhere in the hadoop codebase.
3. A new private FlagSet class in hadoop common manages a set of enum flags.
It implements StreamCapabilities and can be probed for a specific option being set
(with a prefix)
S3A adds an option fs.s3a.performance.flags which builds a FlagSet with enum
type PerformanceFlagEnum
* which initially contains {Create, Delete, Mkdir, Open}
* the existing fs.s3a.create.performance option sets the flag "Create".
* tests which configure fs.s3a.create.performance MUST clear
fs.s3a.performance.flags in test setup.
Future performance flags are planned, with different levels of safety
and/or backwards compatibility.
Contributed by Steve Loughran
The new property fs.s3a.encryption.context allows users to specify the AWS KMS Encryption Context to be used in S3A.
The value of the encryption context is a key/value string which will be Base64 encoded and passed to the S3 client via its ssekmsEncryptionContext parameter.
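For example (a sketch; the exact key=value list syntax is described in the S3A encryption documentation):

    Configuration conf = new Configuration();
    // Context pairs are Base64 encoded and sent with KMS requests.
    conf.set("fs.s3a.encryption.context", "project=hadoop, team=storage");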
Contributed by Raphael Azzolini
Adds a new ClientManager interface/implementation which provides on-demand
creation of synchronous and asynchronous S3 clients and the S3 transfer manager,
and terminates these in close().
S3A FS is modified to
* Create a ClientManagerImpl instance and pass down to its S3Store.
* Use the same ClientManager interface against S3Store to demand-create
the services.
* Only create the async client as part of the transfer manager creation,
which will take place during the first rename() operation.
* Statistics on client creation count and duration are recorded.
+ Statistics on the time to initialize and shutdown the S3A FS are collected
in IOStatistics for reporting.
Adds to hadoop common class
LazyAtomicReference<T> implements CallableRaisingIOE<T>, Supplier<T>
and subclass
LazyAutoCloseableReference<T extends AutoCloseable>
extends LazyAtomicReference<T> implements AutoCloseable
These evaluate the Supplier<T>/CallableRaisingIOE<T> they were
constructed with on the first (successful) read of the value.
Any exception raised during this operation will be rethrown, and on future
evaluations the same operation retried.
These classes implement the Supplier and CallableRaisingIOE
interfaces so can actually be used to implement lazy function evaluation,
as Haskell and some other functional languages do.
LazyAutoCloseableReference is AutoCloseable; its close() method will
close the inner reference if it is set.
This class is used in ClientManagerImpl for the lazy S3 client creation
and closure.
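A usage sketch; the package and the constructor form (taking the supplier described above) are assumptions here:

    import org.apache.hadoop.util.functional.LazyAutoCloseableReference;
    import software.amazon.awssdk.services.s3.S3Client;

    // Creation is deferred to the first get(); close() closes the
    // client only if it was ever created.
    LazyAutoCloseableReference<S3Client> clientRef =
        new LazyAutoCloseableReference<>(() -> S3Client.create());

    S3Client client = clientRef.get();  // first call triggers creation
    clientRef.close();                  // closes the inner client if set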
Contributed by Steve Loughran.
Speed up slow tests
* TestS3AAWSCredentialsProvider: decrease thread pool shutdown time
* TestS3AInputStreamRetry: reduce retry limit and intervals
Contributed by Steve Loughran
The new test TestAWSV2SDK scans the AWS SDK bundle.jar and prints out all classes
which are unshaded, so at risk of creating classpath problems.
It does not fail the test if this holds, because the current SDKs
do ship with unshaded classes; the test would always fail.
The SDK upgrade process should include inspecting the output
of this test to see if it has got worse (do a before/after check).
Once the AWS SDK does shade everything, we can have this
test fail on any regression
Contributed by Harshit Gupta
It is now possible to provide a job ID in the maven "job.id" property
for hadoop-aws test runs, to isolate paths under the test bucket
under which all tests will be executed.
This will allow independent builds *in different source trees*
to test against the same bucket in parallel, and is designed for
CI testing.
Example:
mvn verify -Dparallel-tests -Droot.tests.enabled=false -Djob.id=1
mvn verify -Droot.tests.enabled=false -Djob.id=2
- Root tests must be disabled to stop them cleaning up
the test paths of other test runs.
- Do still regularly run the root tests just to force cleanup
of the output of any interrupted test suites.
Contributed by Steve Loughran
* parameterize the test run rather than do it from within the test suite.
* log what the committer factory is up to (and improve its logging)
* close all filesystems, then create the test filesystem with cache enabled.
The cache is critical: we want the fs from the cache to be used when querying
filesystem properties, rather than one created from the committer jobconf,
which will have the same options as the task committer and so would not
actually validate the override logic.
Contributed by Steve Loughran
WrappedIO.bulkDelete_PageSize() => bulkDelete_pageSize()
Makes it consistent with the HADOOP-19131 naming scheme.
The name needs to be fixed before it is invoked through reflection,
as once that is attempted the binding won't work at run time,
though compilation will be happy.
Contributed by Steve Loughran
Applications can create a BulkDelete instance from a
BulkDeleteSource; the BulkDelete interface provides
pageSize(): the maximum number of entries which can be
deleted in a single call, and a bulkDelete(Collection paths)
method which can take a collection up to pageSize() long.
This is optimized for object stores with bulk delete APIs;
the S3A connector will offer the page size of
fs.s3a.bulk.delete.page.size unless bulk delete has
been disabled.
Even with a page size of 1, the S3A implementation is
more efficient than delete(path)
as there are no safety checks for the path being a directory
or probes for the need to recreate directories.
The interface BulkDeleteSource is implemented by
all FileSystem implementations, with a page size
of 1 and mapped to delete(pathToDelete, false).
This means that callers do not need to have special
case handling for object stores versus classic filesystems.
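A usage sketch, given a FileSystem fs (method names as described above; exact signatures are in the interface javadocs):

    import java.util.Collections;
    import org.apache.hadoop.fs.BulkDelete;
    import org.apache.hadoop.fs.Path;

    Path base = new Path("s3a://bucket/table");
    // FileSystem implements BulkDeleteSource.
    try (BulkDelete bulk = fs.createBulkDelete(base)) {
      int pageSize = bulk.pageSize();
      // Delete up to pageSize() paths in one store round trip.
      bulk.bulkDelete(Collections.singletonList(new Path(base, "file1")));
    }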
To aid use through reflection APIs, the class
org.apache.hadoop.io.wrappedio.WrappedIO
has been created with "reflection friendly" methods.
Contributed by Mukund Thakur and Steve Loughran
HADOOP-19057 switched the hadoop-aws test bucket from landsat-pds to
noaa-cors-pds
This new bucket isn't accessible if the client configuration
sets an fs.s3a.endpoint/region value other than us-east-1.
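Where a different default is set, the standard per-bucket override can pin this bucket back to its home region, for example:

    Configuration conf = new Configuration();
    // Per-bucket setting overrides any global fs.s3a.endpoint.region.
    conf.set("fs.s3a.bucket.noaa-cors-pds.endpoint.region", "us-east-1");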
Contributed by Viraj Jasani
The description of `fs.s3a.committer.abort.pending.uploads` in the section `Concurrent Jobs writing to the same destination` is not correct. Its default value is `true`.
Contributed by Xi Chen
Clarifies behaviour of VectorIO methods with contract tests as well as
specification.
* Add precondition range checks to all implementations.
* Identify and fix a bug where direct buffer reads were broken
(HADOOP-19101; this surfaced in ABFS contract tests).
* Logging in VectoredReadUtils.
* TestVectoredReadUtils verifies validation logic.
* FileRangeImpl toString() improvements
* CombinedFileRange tracks bytes in range which are wanted;
toString() output logs this.
HDFS
* Add test TestHDFSContractVectoredRead
ABFS
* Add test ITestAbfsFileSystemContractVectoredRead
S3A
* checks for vector IO being stopped in all iterative
vector operations, including draining
* maps read() returning -1 to failure
* passes in file length to validation
* Error reporting to only completeExceptionally() those ranges
which had not yet read data in.
* Improved logging.
readVectored()
* made synchronized. This is only for the invocation;
the actual async retrieves are unsynchronized.
* closes input stream on invocation
* switches to random IO, so avoids keeping any long-lived connection around.
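A minimal use of the readVectored() API whose semantics these changes tighten (sketch; in is an open FSDataInputStream):

    import java.nio.ByteBuffer;
    import java.util.Arrays;
    import java.util.List;
    import org.apache.hadoop.fs.FileRange;
    import org.apache.hadoop.util.functional.FutureIO;

    // Two ranges in one call; each range's future completes (or fails)
    // independently.
    List<FileRange> ranges = Arrays.asList(
        FileRange.createFileRange(0, 4096),
        FileRange.createFileRange(1 << 20, 4096));
    in.readVectored(ranges, ByteBuffer::allocateDirect);
    ByteBuffer first = FutureIO.awaitFuture(ranges.get(0).getData());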
+ AbstractSTestS3AHugeFiles enhancements.
+ ADDENDUM: test fix in ITestS3AContractVectoredRead
Contains: HADOOP-19101. Vectored Read into off-heap buffer broken in fallback
implementation
Contributed by Steve Loughran
If the option fs.s3a.committer.magic.track.commits.in.memory.enabled
is set to true, then rather than save data about in-progress uploads
to S3, this information is cached in memory.
If the number of files being committed is low, this will save network IO
in both the generation of .pending and marker files, and in the scanning
of task attempt directory trees during task commit.
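For example:

    Configuration conf = new Configuration();
    // Keep commit data in memory rather than .pending files in S3.
    conf.setBoolean(
        "fs.s3a.committer.magic.track.commits.in.memory.enabled", true);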
Contributed by Syed Shameerur Rahman
This is a followup to #6406:
HADOOP-18980. S3A credential provider remapping: make extensible
It adds extra validation of key-value pairs in a configuration
option, with tests.
Contributed by Viraj Jasani
FIPS is only supported in North America AWS regions; relevant tests in
ITestS3AEndpointRegion are skipped for buckets with different endpoints/regions.
Disables the new tests added in:
HADOOP-19027. S3A: S3AInputStream doesn't recover from HTTP/channel exceptions #6425
The underlying issue here is that the block prefetch code can identify
when there's a mismatch between declared and actual length, and doesn't
store any of the incomplete buffer.
This should be addressed in HADOOP-18184.
Contributed by Steve Loughran
The AWS landsat data previously used in some S3A tests is no
longer accessible
This PR moves to the new external file
s3a://noaa-cors-pds/raw/2024/001/akse/AKSE001x.24_.gz
* Large enough file for scale tests
* Bucket supports anonymous access
* Ends in .gz to keep codec tests happy
* No spaces in path to keep bucket-info happy
Test Code Changes
* Leaves the test key name alone: fs.s3a.scale.test.csvfile
* Rename all methods and fields to remove "csv" from their names and
move to "external file"; we no longer require it to be CSV.
* Path definition and helper methods have been moved to PublicDatasetTestUtils
* Improve error reporting in ITestS3AInputStreamPerformance if the file
is too short
With S3 Select removed, there is no need for the file to be
a CSV file; there is a test which tries to unzip it; other
tests have a minimum file size.
Consult the JIRA for the settings to add to auth-keys.xml
to switch earlier builds to this same file.
Contributed by Steve Loughran
This is a followup to PR:
HADOOP-19045. S3A: Validate CreateSession Timeout Propagation (#6470)
Remove all declarations of fs.s3a.connection.request.timeout
in
- hadoop-common/src/main/resources/core-default.xml
- hadoop-aws/src/test/resources/core-site.xml
New test in TestAwsClientConfig to verify that the value
defined in fs.s3a.Constants class is used.
This is brittle to someone overriding it in their test setups,
but as this test is intended to verify that the option is not
explicitly set, there's no workaround.
Contributed by Steve Loughran