New test ITestCreateSessionTimeout to verify that the duration set
in fs.s3a.connection.request.timeout is passed all the way down.
This is done by adding a sleep() in a custom signer and verifying
that it is interrupted and that an AWSApiCallTimeoutException is
raised.
+ Fix testRequestTimeout()
* doesn't skip if considered cross-region
* sets a minimum duration of 0 before invocation
* resets the minimum afterwards
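A minimal sketch of the signer trick, assuming the SDK v2 Signer interface; the class name and sleep duration are illustrative, not the actual test code:

```java
import software.amazon.awssdk.core.interceptor.ExecutionAttributes;
import software.amazon.awssdk.core.signer.Signer;
import software.amazon.awssdk.http.SdkHttpFullRequest;

/** Illustrative custom signer that sleeps so the request timeout fires. */
public class SleepingSigner implements Signer {
  @Override
  public SdkHttpFullRequest sign(SdkHttpFullRequest request,
      ExecutionAttributes executionAttributes) {
    try {
      // Sleep far longer than fs.s3a.connection.request.timeout; when the
      // timeout is propagated correctly, the SDK interrupts this sleep and
      // the call surfaces as an AWSApiCallTimeoutException.
      Thread.sleep(60_000);
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
    }
    return request;
  }
}
```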
Contributed by Steve Loughran
Address JDK bug JDK-8314978 related to handling of HTTP 100
responses.
https://bugs.openjdk.org/browse/JDK-8314978
In AbfsHttpOperation, after sendRequest() we call the processResponse()
method from AbfsRestOperation.
Even if conn.getOutputStream() fails due to an expect-100 error,
we consume the exception and let the code proceed.
This may call getHeaderField() / getHeaderFields() / getHeaderFieldLong() after
getOutputStream() has failed. These invocations all lead to server calls.
This commit aims to prevent this.
If connection.getOutputStream() fails due to an Expect-100 error,
the ABFS client does not invoke getHeaderField(), getHeaderFields(),
getHeaderFieldLong() or getInputStream().
getResponseCode() is safe, as the failure already sets the
responseCode field in the HttpURLConnection object.
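A minimal sketch of the guard, assuming a flag such as connectionDisconnectedOnError; the class, field and method names are illustrative, not the actual AbfsHttpOperation code:

```java
import java.io.IOException;
import java.io.OutputStream;
import java.net.HttpURLConnection;

/** Illustrative expect-100 guard; names are placeholders, not ABFS internals. */
class Expect100Guard {
  private final HttpURLConnection connection;
  private boolean connectionDisconnectedOnError;

  Expect100Guard(HttpURLConnection connection) {
    this.connection = connection;
  }

  OutputStream openOutputStream() throws IOException {
    try {
      return connection.getOutputStream();
    } catch (IOException e) {
      // Expect: 100-continue was rejected; remember it and let the caller
      // continue so that the response code can still be processed.
      connectionDisconnectedOnError = true;
      return null;
    }
  }

  String responseHeader(String name) {
    // After the expect-100 failure, getHeaderField()/getHeaderFields()/
    // getInputStream() would trigger a fresh server call, so skip them.
    return connectionDisconnectedOnError ? null : connection.getHeaderField(name);
  }

  int responseCode() throws IOException {
    // Safe: on the failure the status code is already recorded in the connection.
    return connection.getResponseCode();
  }
}
```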
Contributed by Pranav Saxena
This update ensures that the timeout set in fs.s3a.connection.request.timeout is passed down
to calls to CreateSession made in the AWS SDK to get S3 Express session tokens.
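For illustration, the timeout is set through the normal Configuration API; the value is an example only:

```java
import org.apache.hadoop.conf.Configuration;

public class RequestTimeoutExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Illustrative value; the same duration now also bounds the SDK's
    // CreateSession call used to obtain S3 Express session tokens.
    conf.set("fs.s3a.connection.request.timeout", "30s");
  }
}
```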
Contributed by Steve Loughran
Add new option fs.s3a.checksum.validation, default false, which
is used when creating s3 clients to enable/disable checksum
validation.
When false, GET response processing is measurably faster.
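For illustration, enabling the option through the normal Configuration API:

```java
import org.apache.hadoop.conf.Configuration;

public class ChecksumValidationExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Default is false; enable only if GET-side checksum validation is wanted,
    // at a measurable cost in read performance.
    conf.setBoolean("fs.s3a.checksum.validation", true);
  }
}
```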
Contributed by Steve Loughran.
HADOOP-19015. Increase fs.s3a.connection.maximum to 500 to minimize the risk of "Timeout waiting for connection from the pool" errors.
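For illustration, the new default shown as an explicit override:

```java
import org.apache.hadoop.conf.Configuration;

public class ConnectionPoolExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // 500 is now the default size of the http connection pool.
    conf.setInt("fs.s3a.connection.maximum", 500);
  }
}
```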
Contributed By: Mukund Thakur
Adds a new option `fs.s3a.endpoint.fips` to switch the SDK client to use
FIPS endpoints, as an alternative to explicitly declaring them.
* The option is available as a path capability for probes.
* SDK v2 itself doesn't know that some regions don't have FIPS endpoints
* SDK only fails on endpoint + fips flag with a retried exception; with this
change the S3A client fails fast.
* Adds a new "connecting.md" doc; moves existing docs there and restructures.
* New Tests in ITestS3AEndpointRegion
bucket-info command support:
* added to list of path capabilities
* added -fips flag and test for explicit probe
* also now prints bucket region
* and removed some of the obsolete s3guard options
* updated docs
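A hedged sketch of enabling the option and probing it as a path capability (assuming, as is usual for S3A options, that the capability name matches the configuration key; the bucket name is a placeholder):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class FipsEndpointExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Switch the SDK client to FIPS endpoints instead of declaring them explicitly.
    conf.setBoolean("fs.s3a.endpoint.fips", true);

    Path bucket = new Path("s3a://example-bucket/");   // placeholder bucket
    try (FileSystem fs = bucket.getFileSystem(conf)) {
      // The option is also exposed as a path capability for probes.
      boolean fips = fs.hasPathCapability(bucket, "fs.s3a.endpoint.fips");
      System.out.println("FIPS endpoint in use: " + fips);
    }
  }
}
```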
Contributed by Steve Loughran
Differentiate from "EOF out of range/end of GET" from
"EOF channel problems" through
two different subclasses of EOFException and input streams to always
retry on http channel errors; out of range GET requests are not retried.
Currently an EOFException is always treated as a fail-fast call in read()
This allows for all existing external code catching EOFException to handle
both, but S3AInputStream to cleanly differentiate range errors (map to -1)
from channel errors (retry)
- HttpChannelEOFException is a subclass of EOFException, so all code
which catches EOFException is still happy.
retry policy: connectivityFailure
- RangeNotSatisfiableEOFException is the subclass of EOFException
raised on 416 GET range errors.
retry policy: fail
- Method ErrorTranslation.maybeExtractChannelException() to create this
from shaded/unshaded NoHttpResponseException, using string match to
avoid classpath problems.
- And do this for SdkClientExceptions with OpenSSL error code WFOPENSSL0035.
We believe this is the OpenSSL equivalent.
- ErrorTranslation.maybeExtractIOException() to perform this translation as
appropriate.
S3AInputStream.reopen() code retries on EOF, except on
RangeNotSatisfiableEOFException,
which is converted to a -1 response to the caller
as is done historically.
S3AInputStream knows to handle these as follows:
- read(): on HttpChannelEOFException, do a stream-aborting close, then retry.
- lazySeek(): map RangeNotSatisfiableEOFException to -1, but do not map
any other EOFException class raised.
This means that
* out of range reads map to -1
* channel problems in reopen are retried
* channel problems in read() abort the failed http connection so it
isn't recycled
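A hedged sketch of caller-side handling, assuming both new classes live in org.apache.hadoop.fs.s3a; the method is illustrative, not code from S3AInputStream:

```java
import java.io.EOFException;
import java.io.IOException;
import org.apache.hadoop.fs.FSDataInputStream;
// Assumed package for the two new exception classes:
import org.apache.hadoop.fs.s3a.HttpChannelEOFException;
import org.apache.hadoop.fs.s3a.RangeNotSatisfiableEOFException;

final class EofHandlingSketch {
  static int readAt(FSDataInputStream in, long position, byte[] buffer)
      throws IOException {
    try {
      return in.read(position, buffer, 0, buffer.length);
    } catch (RangeNotSatisfiableEOFException e) {
      // 416 / out-of-range GET: not retried, mapped to -1 as before.
      return -1;
    } catch (HttpChannelEOFException e) {
      // Channel problem: safe to retry the read; rethrown here for brevity.
      throw e;
    } catch (EOFException e) {
      // Pre-existing catch blocks written against EOFException still match,
      // since both new classes are subclasses of it.
      throw e;
    }
  }
}
```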
Tests for this using/abusing mocking.
Testing through actually raising 416 exceptions and verifying that
readFully(), char read() and vector reads are all good.
There is no attempt to recover within a readFully(); there's
a boolean constant switch to turn this on, but if anyone enables it
a test will spin forever, as the inner PositionedReadable.read(position, buffer, len)
downgrades all EOF exceptions to -1.
A new method would need to be added which controls whether to downgrade/rethrow
exceptions.
What does that mean? Possibly reduced resilience to non-retried failures
on the inner stream, even though more channel exceptions are retried on.
Contributed by Steve Loughran
Move to the new auth flow based signers for AWS.
* Implement a new Signer Initialization Chain
* Add a new instantiation method
* Add a new test
* Fix Reflection Code for SignerInitialization
Contributed by Harshit Gupta
Move the org.apache.hadoop.{oncrpc, portmap} packages from the hadoop-nfs module
to the hadoop-common module.
This allows the protocol to be used beyond just NFS, including within HDFS itself.
Contributed by Xing Lin