The new test TestAWSV2SDK scans the AWS SDK bundle.jar and prints out all classes
which are unshaded, and so at risk of creating classpath problems.
It does not fail when unshaded classes are found, because the current SDKs
do ship with unshaded classes; the test would otherwise always fail.
The SDK upgrade process should include inspecting the output
of this test to see whether it has got worse (do a before/after check).
Once the AWS SDK does shade everything, we can have this
test fail on any regression.
Contributed by Harshit Gupta
It is now possible to provide a job ID in the maven "job.id" property
for hadoop-aws test runs, to isolate paths under the test bucket
under which all tests will be executed.
This will allow independent builds *in different source trees*
to test against the same bucket in parallel, and is designed for
CI testing.
Example:
mvn verify -Dparallel-tests -Droot.tests.enabled=false -Djob.id=1
mvn verify -Droot.tests.enabled=false -Djob.id=2
- Root tests must be disabled to stop them cleaning up
the test paths of other test runs.
- Do still regularly run the root tests just to force cleanup
of the output of any interrupted test suites.
Contributed by Steve Loughran
* parameterize the test run rather than do it from within the test suite.
* log what the committer factory is up to (and improve its logging)
* close all filesystems, then create the test filesystem with cache enabled.
The cache is critical: we want the filesystem from the cache to be used when querying
filesystem properties, rather than one created from the committer jobconf,
which would have the same options as the task committer and so would not actually
validate the override logic.
Contributed by Steve Loughran
WrappedIO.bulkDelete_PageSize() => bulkDelete_pageSize()
Makes it consistent with the HADOOP-19131 naming scheme.
The name needs to be fixed before external code starts invoking it through
reflection: a mismatched method name still compiles happily,
but the binding fails at run time.
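A minimal reflection sketch of why the name matters (not part of the patch; the
(FileSystem, Path) parameter list is an assumption for illustration):

  import java.lang.reflect.Method;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  public class WrappedIOReflectionSketch {
    static int pageSize(FileSystem fs, Path path) throws Exception {
      Class<?> wrappedIO = Class.forName("org.apache.hadoop.io.wrappedio.WrappedIO");
      // The method name is just a string: "bulkDelete_PageSize" would still compile
      // here but throw NoSuchMethodException at run time.
      Method m = wrappedIO.getMethod("bulkDelete_pageSize", FileSystem.class, Path.class);
      return (int) m.invoke(null, fs, path);
    }
  }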
Contributed by Steve Loughran
Applications can create a BulkDelete instance from a
BulkDeleteSource; the BulkDelete interface provides
the pageSize(): the maximum number of entries which can be
deleted in one call, and a bulkDelete(Collection paths)
method which takes a collection of up to pageSize() entries.
This is optimized for object stores with bulk delete APIs;
the S3A connector will offer the page size of
fs.s3a.bulk.delete.page.size unless bulk delete has
been disabled.
Even with a page size of 1, the S3A implementation is
more efficient than delete(path)
as there are no safety checks for the path being a directory
or probes for the need to recreate directories.
The interface BulkDeleteSource is implemented by
all FileSystem implementations, with a page size
of 1 and mapped to delete(pathToDelete, false).
This means that callers do not need to have special
case handling for object stores versus classic filesystems.
To aid use through reflection APIs, the class
org.apache.hadoop.io.wrappedio.WrappedIO
has been created with "reflection friendly" methods.
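A hedged usage sketch of the API described above (the createBulkDelete() factory
method name and the Closeable handling are assumptions for illustration):

  import java.util.Arrays;
  import java.util.List;
  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.BulkDelete;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  public class BulkDeleteSketch {
    public static void main(String[] args) throws Exception {
      Path base = new Path("s3a://example-bucket/data/");
      FileSystem fs = base.getFileSystem(new Configuration());
      // every FileSystem is a BulkDeleteSource; factory method name assumed
      try (BulkDelete bulk = fs.createBulkDelete(base)) {
        int page = bulk.pageSize();   // 1 on classic filesystems, larger on S3A
        List<Path> paths = Arrays.asList(
            new Path(base, "part-0000"),
            new Path(base, "part-0001"));
        // the collection passed in must not exceed pageSize() entries
        bulk.bulkDelete(paths.subList(0, Math.min(page, paths.size())));
      }
    }
  }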
Contributed by Mukund Thakur and Steve Loughran
HADOOP-19057 switched the hadoop-aws test bucket from landsat-pds to
noaa-cors-pds
This new bucket isn't accessible if the client configuration
sets an fs.s3a.endpoint/region value other than us-east-1.
Contributed by Viraj Jasani
The description of `fs.s3a.committer.abort.pending.uploads` in the section `Concurrent Jobs writing to the same destination` is not correct. Its default value is `true`.
Contributed by Xi Chen
Clarifies behaviour of VectorIO methods with contract tests as well as
specification.
* Add precondition range checks to all implementations
* Identify and fix a bug where direct buffer reads were broken
(HADOOP-19101; this surfaced in ABFS contract tests)
* Logging in VectoredReadUtils.
* TestVectoredReadUtils verifies validation logic.
* FileRangeImpl toString() improvements
* CombinedFileRange tracks bytes in range which are wanted;
toString() output logs this.
HDFS
* Add test TestHDFSContractVectoredRead
ABFS
* Add test ITestAbfsFileSystemContractVectoredRead
S3A
* checks for vector IO being stopped in all iterative
vector operations, including draining
* maps read() returning -1 to failure
* passes in file length to validation
* Error reporting to only completeExceptionally() those ranges
which had not yet read data in.
* Improved logging.
readVectored()
* made synchronized. This is only for the invocation;
the actual async retrieves are unsynchronized.
* closes input stream on invocation
* switches to random IO, so avoids keeping any long-lived connection around.
+ AbstractSTestS3AHugeFiles enhancements.
+ ADDENDUM: test fix in ITestS3AContractVectoredRead
Contains: HADOOP-19101. Vectored Read into off-heap buffer broken in fallback
implementation
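For reference, a minimal readVectored() usage sketch (not from the patch; the path
and range values are purely illustrative):

  import java.nio.ByteBuffer;
  import java.util.Arrays;
  import java.util.List;
  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FSDataInputStream;
  import org.apache.hadoop.fs.FileRange;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  public class VectoredReadSketch {
    public static void main(String[] args) throws Exception {
      Path path = new Path("s3a://example-bucket/data.bin");
      FileSystem fs = path.getFileSystem(new Configuration());
      List<FileRange> ranges = Arrays.asList(
          FileRange.createFileRange(0, 4096),
          FileRange.createFileRange(1_000_000, 8192));
      try (FSDataInputStream in = fs.open(path)) {
        // direct buffers exercise the HADOOP-19101 fallback-path fix
        in.readVectored(ranges, ByteBuffer::allocateDirect);
        for (FileRange range : ranges) {
          ByteBuffer data = range.getData().get();  // each range completes asynchronously
          // ... process data ...
        }
      }
    }
  }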
Contributed by Steve Loughran
If the option fs.s3a.committer.magic.track.commits.in.memory.enabled
is set to true, then rather than save data about in-progress uploads
to S3, this information is cached in memory.
If the number of files being committed is low, this will save network IO
in both the generation of .pending and marker files, and in the scanning
of task attempt directory trees during task commit.
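A minimal configuration fragment for enabling this (the magic committer itself must
already be in use; the option name is taken from the text above):

  Configuration conf = new Configuration();  // org.apache.hadoop.conf.Configuration
  // keep data about in-progress uploads in memory rather than in .pending files on S3
  conf.setBoolean("fs.s3a.committer.magic.track.commits.in.memory.enabled", true);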
Contributed by Syed Shameerur Rahman
This is a followup to #6406:
HADOOP-18980. S3A credential provider remapping: make extensible
It adds extra validation of key-value pairs in a configuration
option, with tests.
Contributed by Viraj Jasani
FIPS is only supported in North America AWS regions; relevant tests in
ITestS3AEndpointRegion are skipped for buckets with different endpoints/regions.
Disables the new tests added in:
HADOOP-19027. S3A: S3AInputStream doesn't recover from HTTP/channel exceptions #6425
The underlying issue here is that the block prefetch code can identify
when there's a mismatch between declared and actual length, and doesn't
store any of the incomplete buffer.
This should be addressed in HADOOP-18184.
Contributed by Steve Loughran
The AWS landsat data previously used in some S3A tests is no
longer accessible
This PR moves to the new external file
s3a://noaa-cors-pds/raw/2024/001/akse/AKSE001x.24_.gz
* Large enough file for scale tests
* Bucket supports anonymous access
* Ends in .gz to keep codec tests happy
* No spaces in path to keep bucket-info happy
Test Code Changes
* Leaves the test key name alone: fs.s3a.scale.test.csvfile
* Rename all methods and fields to remove "csv" from their names and
move to "external file", as we no longer require it to be CSV.
* Path definition and helper methods have been moved to PublicDatasetTestUtils
* Improve error reporting in ITestS3AInputStreamPerformance if the file
is too short
With S3 Select removed, there is no need for the file to be
a CSV file; there is a test which tries to unzip it; other
tests have a minimum file size.
Consult the JIRA for the settings to add to auth-keys.xml
to switch earlier builds to this same file.
Contributed by Steve Loughran
This is a followup to PR:
HADOOP-19045. S3A: Validate CreateSession Timeout Propagation (#6470)
Remove all declarations of fs.s3a.connection.request.timeout
in
- hadoop-common/src/main/resources/core-default.xml
- hadoop-aws/src/test/resources/core-site.xml
New test in TestAwsClientConfig to verify that the value
defined in fs.s3a.Constants class is used.
This is brittle to someone overriding it in their test setups,
but as this test is intended to verify that the option is not
explicitly set, there's no workaround.
Contributed by Steve Loughran
The option fs.s3a.classloader.isolation (default: true) can be set to false to disable S3A classloader isolation.
This can assist in using custom credential providers and other extension points.
Contributed by Antonio Murgia
Improves region handling in the S3A connector, including enabling cross-region support
when that is considered necessary.
Consult the documentation in connecting.md/connecting.html for the current
resolution process.
Contributed by Viraj Jasani
Cut out S3 Select
* leave public/unstable constants alone
* s3guard tool will fail with error
* s3afs path capability will fail
* openFile() will fail with specific error
* s3 select doc updated
* Cut eventstream jar
* New test: ITestSelectUnsupported verifies new failure
handling above
Contributed by Steve Loughran
New test ITestCreateSessionTimeout to verify that the duration set
in fs.s3a.connection.request.timeout is passed all the way down.
This is done by adding a sleep() in a custom signer and verifying
that it is interrupted and that an AWSApiCallTimeoutException is
raised.
+ Fix testRequestTimeout()
* doesn't skip if considered cross-region
* sets a minimum duration of 0 before invocation
* resets the minimum afterwards
Contributed by Steve Loughran
This update ensures that the timeout set in fs.s3a.connection.request.timeout is passed down
to calls to CreateSession made in the AWS SDK to get S3 Express session tokens.
Contributed by Steve Loughran
Add new option fs.s3a.checksum.validation, default false, which
is used when creating s3 clients to enable/disable checksum
validation.
When false, GET response processing is measurably faster.
Contributed by Steve Loughran.
HADOOP-19015. Increase fs.s3a.connection.maximum to 500 to minimize the risk of Timeout waiting for connection from the pool
Contributed By: Mukund Thakur
Adds a new option `fs.s3a.endpoint.fips` to switch the SDK client to use
FIPS endpoints, as an alternative to explicitly declaring them.
* The option is available as a path capability for probes (see the sketch at the end of this entry).
* SDK v2 itself doesn't know that some regions don't have FIPS endpoints
* The SDK only fails on the endpoint + FIPS flag combination with a retried exception; with this
change the S3A client fails fast instead.
* Adds a new "connecting.md" doc; moves existing docs there and restructures.
* New Tests in ITestS3AEndpointRegion
bucket-info command support:
* added to list of path capabilities
* added -fips flag and test for explicit probe
* also now prints bucket region
* and removed some of the obsolete s3guard options
* updated docs
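A hedged sketch of enabling the option and probing for it; it assumes the path
capability string matches the option name, which is not stated explicitly above:

  import java.net.URI;
  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  public class FipsProbeSketch {
    public static void main(String[] args) throws Exception {
      Configuration conf = new Configuration();
      conf.setBoolean("fs.s3a.endpoint.fips", true);   // switch the SDK client to FIPS endpoints
      FileSystem fs = FileSystem.get(URI.create("s3a://example-bucket/"), conf);
      // capability name assumed to be the same string as the option
      boolean fips = fs.hasPathCapability(new Path("/"), "fs.s3a.endpoint.fips");
      System.out.println("FIPS endpoint in use: " + fips);
    }
  }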
Contributed by Steve Loughran
Differentiates "EOF out of range/end of GET" from
"EOF channel problems" through
two different subclasses of EOFException, and has input streams always
retry on http channel errors; out of range GET requests are not retried.
Previously an EOFException was always treated as a fail-fast call in read().
This allows all existing external code catching EOFException to handle
both, while S3AInputStream cleanly differentiates range errors (mapped to -1)
from channel errors (retried).
- HttpChannelEOFException is subclass of EOFException, so all code
which catches EOFException is still happy.
retry policy: connectivityFailure
- RangeNotSatisfiableEOFException is the subclass of EOFException
raised on 416 GET range errors.
retry policy: fail
- Method ErrorTranslation.maybeExtractChannelException() to create this
from shaded/unshaded NoHttpResponseException, using string match to
avoid classpath problems.
- And do this for SdkClientExceptions with OpenSSL error code WFOPENSSL0035.
We believe this is the OpenSSL equivalent.
- ErrorTranslation.maybeExtractIOException() to perform this translation as
appropriate.
S3AInputStream.reopen() code retries on EOF, except on
RangeNotSatisfiableEOFException,
which is converted to a -1 response to the caller
as is done historically.
S3AInputStream knows to handle these with
read(): HttpChannelEOFException: stream aborting close then retry
lazySeek(): Map RangeNotSatisfiableEOFException to -1, but do not map
any other EOFException class raised.
This means that
* out of range reads map to -1
* channel problems in reopen are retried
* channel problems in read() abort the failed http connection so it
isn't recycled
Tests for this using/abusing mocking.
Testing through actually raising 416 exceptions and verifying that
readFully(), char read() and vector reads are all good.
There is no attempt to recover within a readFully(); there's
a boolean constant switch to turn this on, but if anyone does
it a test will spin forever as the inner PositionedReadable.read(position, buffer, len)
downgrades all EOF exceptions to -1.
A new method would need to be added which controls whether to downgrade/rethrow
exceptions.
What does that mean? Possibly reduced resilience to non-retried failures
on the inner stream, even though more channel exceptions are retried on.
Contributed by Steve Loughran
Move to the new auth-flow-based signers for AWS.
* Implement a new Signer Initialization Chain
* Add a new instantiation method
* Add a new test
* Fix Reflection Code for SignerInitialization
Contributed by Harshit Gupta
Adds a new option fs.s3a.s3express.create.session; default is true.
When false, this disables the CreateSession call to create/refresh temporary
session credentials when working with an Amazon S3 Express store.
This avoids having to give the caller the new IAM role permission,
at the expense of every remote call on the S3 Express store having to
include the latency of a checkup of IAM permissions.
* fs.s3a.s3express.create.session is set to false in tests which generate new
role permissions and call AssumeRole
* move ApiCallTimeoutException logic until after SDK exceptions get translated
to IOEs. This lines up the code for any future AWS SDK which surfaces the underlying cause here.
* Tests will automatically skip ACL, storage class, S3 Select or encryption tests
when target fs is S3Express.
* same for the out of order multipart uploader test cases, v1 listing.
* bucket tool s3 test treats invalid location error as a successful invocation
of the create bucket attempt
Contributed by Steve Loughran
This adds broad support for Amazon S3 Express One Zone to the S3A connector,
particularly resilience of other parts of the codebase to LIST operations returning
paths under which only in-progress uploads are taking place.
hadoop-common and hadoop-mapreduce treewalking routines all cope with this;
distcp is left alone.
There are still some outstanding followup issues, and we expect more to surface
with extended use.
Contains HADOOP-18955. AWS SDK v2: add path capability probe "fs.s3a.capability.aws.v2
* lets us probe for AWS SDK version
* bucket-info reports it
Contains HADOOP-18961 S3A: add s3guard command "bucket"
hadoop s3guard bucket -create -region us-west-2 -zone usw2-az2 \
s3a://stevel--usw2-az2--x-s3/
* requires -zone if bucket is zonal
* rejects it if not
* rejects zonal bucket suffixes if endpoint is not aws (safety feature)
* imperfect, but a functional starting point.
New path capability "fs.s3a.capability.zonal.storage"
* Used in tests to determine whether pending uploads manifest as paths
* cli tests can probe for this
* bucket-info reports it
* some tests disable/change assertions as appropriate
----
Shell commands fail on S3Express buckets if there are pending uploads.
New path capability in hadoop-common
"fs.capability.directory.listing.inconsistent"
1. S3AFS returns true on a S3 Express bucket
2. FileUtil.maybeIgnoreMissingDirectory(fs, path, fnfe)
decides whether to swallow the exception or not.
3. This is used in: Shell, FileInputFormat, LocatedFileStatusFetcher
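A hedged sketch of the caller-side pattern this enables; the process() helper is
hypothetical, and FileUtil.maybeIgnoreMissingDirectory() wraps the same decision
shown inline here:

  import java.io.FileNotFoundException;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.LocatedFileStatus;
  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.fs.RemoteIterator;

  public class TreewalkSketch {
    static void walk(FileSystem fs, Path dir) throws Exception {
      try {
        RemoteIterator<LocatedFileStatus> it = fs.listFiles(dir, true);
        while (it.hasNext()) {
          process(it.next());   // hypothetical per-file handler
        }
      } catch (FileNotFoundException fnfe) {
        // On stores declaring inconsistent listings (S3 Express), a directory seen in a
        // parent listing may contain only in-progress uploads; swallow the FNFE there.
        if (!fs.hasPathCapability(dir, "fs.capability.directory.listing.inconsistent")) {
          throw fnfe;
        }
      }
    }

    static void process(LocatedFileStatus status) {
      System.out.println(status.getPath());
    }
  }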
Fixes with tests
* fs -ls -R
* fs -du
* fs -df
* fs -find
* S3AFS.getContentSummary() (maybe...should discuss)
* mapred LocatedFileStatusFetcher
* Globber, HADOOP-15478 already fixed that when dealing with
S3 inconsistencies
* FileInputFormat
S3Express CreateSession request is permitted outside audit spans.
S3 Bulk Delete calls request the store to return the list of deleted objects
if RequestFactoryImpl is set to trace.
log4j.logger.org.apache.hadoop.fs.s3a.impl.RequestFactoryImpl=TRACE
Test Changes
* ITestS3AMiscOperations removes all tests which require unencrypted
buckets. AWS S3 defaults to SSE-S3 everywhere.
* ITestBucketTool to test new tool without actually creating new
buckets.
* S3ATestUtils add methods to skip test suites/cases if store is/is not
S3Express
* Cutting down on "is this a S3Express bucket" logic to a trailing --x-s3 string
and not worrying about AZ naming logic. Commented out relevant tests.
* ITestTreewalkProblems validated against standard and S3Express stores
Outstanding
* Distcp: tests show it fails. Proposed: release notes.
---
x-amz-checksum header not found when signing S3Express messages
This modifies the custom signer in ITestCustomSigner to be a subclass
of AwsS3V4Signer with a goal of preventing signing problems with
S3 Express stores.
----
RemoteFileChanged renaming multipart file
Maps 412 status code to RemoteFileChangedException
Modifies huge file tests
- Adds a check on etag match for stat vs list
- ITestS3AHugeFilesByteBufferBlocks renames parent dirs, rather than
files, to replicate distcp better.
----
S3Express custom Signing cannot handle bulk delete
Copy custom signer into production JAR, to enable downstream testing
Extend ITestCustomSigner to cover more filesystem operations
- PUT
- POST
- COPY
- LIST
- Bulk delete through delete() and rename()
- list + abort multipart uploads
Suite is parameterized on bulk delete enabled/disabled.
To use the new signer for a full test run:
<property>
<name>fs.s3a.custom.signers</name>
<value>CustomSdkSigner:org.apache.hadoop.fs.s3a.auth.CustomSdkSigner</value>
</property>
<property>
<name>fs.s3a.s3.signing-algorithm</name>
<value>CustomSdkSigner</value>
</property>
Increases existing pool sizes, as larger pools are needed
with server scale and vector IO:
fs.s3a.connection.maximum 200
fs.s3a.threads.max 96
Adds new configuration options for v2 sdk internal timeouts,
both with default of 60s:
fs.s3a.connection.acquisition.timeout
fs.s3a.connection.idle.time
All the pool/timeout options are covered in performance.md
Moves all timeout/duration options in the s3a FS to taking
temporal units (h, m, s, ms,...); retaining the previous default
unit (normally millisecond)
Adds a minimum duration for most of these, in order to recover from
deployments where a timeout has been set on the assumption the unit
was seconds, not millis.
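A minimal fragment showing the new unit-suffixed form (the values are illustrative
only, not recommendations):

  Configuration conf = new Configuration();  // org.apache.hadoop.conf.Configuration
  // a unit suffix may now be given; a bare number keeps the previous default unit (normally ms)
  conf.set("fs.s3a.connection.acquisition.timeout", "30s");
  conf.set("fs.s3a.connection.idle.time", "60s");
  conf.set("fs.s3a.connection.request.timeout", "5m");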
Uses java.time.Duration throughout the codebase;
retaining the older numeric constants in
org.apache.hadoop.fs.s3a.Constants for backwards compatibility;
these are now deprecated.
Adds new class AWSApiCallTimeoutException to be raised on
sdk-related methods and also gateway timeouts. This is a subclass
of org.apache.hadoop.net.ConnectTimeoutException to support
existing retry logic.
+ reverted default value of fs.s3a.create.performance to false;
inadvertently set to true during testing.
Contributed by Steve Loughran.
Add a new option:
fs.s3a.optimized.copy.from.local.enabled
This will enable (default) or disable the
optimized CopyFromLocalOperation upload operation
when copyFromLocalFile() is invoked.
When false the superclass implementation is used; duration
statistics are still collected, though audit span entries
in logs will be for the individual fs operations, not the
overall operation.
Contributed by Steve Loughran
If fs.s3a.create.performance is set on a bucket
- All file overwrite checks are skipped, even if the caller says
otherwise.
- All directory existence checks are skipped.
- Marker deletion is *always* skipped.
This eliminates a HEAD and a LIST for every creation.
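A minimal fragment for enabling it; the per-bucket form follows the usual
fs.s3a.bucket.<name>. override pattern and is an assumption here, not part of the text:

  Configuration conf = new Configuration();  // org.apache.hadoop.conf.Configuration
  // enable for every s3a:// bucket
  conf.setBoolean("fs.s3a.create.performance", true);
  // or for a single bucket only, via the per-bucket override pattern (assumption)
  conf.setBoolean("fs.s3a.bucket.example-bucket.create.performance", true);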
* New path capability "fs.s3a.create.performance.enabled" true
if the option is enabled.
* Parameterize ITestS3AContractCreate to expect the different
outcomes
* Parameterize ITestCreateFileCost similarly, with
changed cost assertions there.
* create(/) raises an IOE; existing bug, only noticed here.
Contributed by Steve Loughran
S3A directory delete and rename will optionally abort all pending multipart uploads
under their to-be-deleted paths when
fs.s3a.directory.operations.purge.upload is true.
It is off by default.
The filesystem's hasPathCapability("fs.s3a.directory.operations.purge.upload")
probe will return true when this feature is enabled.
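A hedged sketch of enabling the feature and probing for it, using the option and
capability names exactly as given above:

  import java.net.URI;
  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  public class PurgeUploadsSketch {
    public static void main(String[] args) throws Exception {
      Configuration conf = new Configuration();
      conf.setBoolean("fs.s3a.directory.operations.purge.upload", true);  // off by default
      FileSystem fs = FileSystem.get(URI.create("s3a://example-bucket/"), conf);
      boolean purges = fs.hasPathCapability(new Path("/"),
          "fs.s3a.directory.operations.purge.upload");
      System.out.println("purge pending uploads on delete/rename: " + purges);
    }
  }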
Multipart uploads may accrue from interrupted data writes, uncommitted
staging/magic committer jobs and other operations/applications. On AWS S3,
lifecycle rules are the recommended way to clean these; this change improves
support for stores which lack these rules.
Contributed by Steve Loughran