S3A region logic improved for better inference and
to be compatible with previous releases
1. If you are using an AWS S3 AccessPoint, its region is determined
from the ARN itself.
2. If fs.s3a.endpoint.region is set and non-empty, it is used.
3. If fs.s3a.endpoint is an s3.*.amazonaws.com URL,
the region is determined by parsing the URL.
Note: vpce endpoints are not handled by this.
4. If fs.s3a.endpoint.region==null, and none could be determined
from the endpoint, us-east-2 is used as the default.
5. If fs.s3a.endpoint.region=="" then it is handed off to
the default AWS SDK resolution process.
Consult the AWS SDK documentation for the details on its resolution
process, knowing that it is complicated and may use environment variables,
entries in ~/.aws/config, IAM instance information within
EC2 deployments and possibly even JSON resources on the classpath.
Put differently: it is somewhat brittle across deployments.
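As an illustration of case 2, the region can be pinned explicitly in
configuration so that neither endpoint parsing nor the SDK fallback is
needed (a minimal sketch using the standard Configuration API; the
bucket name is hypothetical):

```java
import org.apache.hadoop.conf.Configuration;

Configuration conf = new Configuration();
// Set the region explicitly: rule 2 above then applies.
conf.set("fs.s3a.endpoint.region", "eu-west-1");
// Or per bucket, via the standard per-bucket configuration pattern:
conf.set("fs.s3a.bucket.example-bucket.endpoint.region", "eu-west-1");
```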
Contributed by Ahmar Suhail
Protobuf 2.5 JAR is no longer needed at runtime.
The option common.protobuf2.scope defines whether the protobuf 2.5.0
dependency is marked as provided or not.
* New package org.apache.hadoop.ipc.internal for internal-only protobuf classes,
with a ShadedProtobufHelper there which uses shaded protobuf references
only, so is guaranteed not to need protobuf-2.5 on the classpath.
* All uses of org.apache.hadoop.ipc.ProtobufHelper have
been replaced by uses of org.apache.hadoop.ipc.internal.ShadedProtobufHelper
* The scope of protobuf-2.5 is set by the option common.protobuf2.scope;
in this patch it is still "compile".
* There is explicit reference to it in modules where it may be needed.
* The maven scope of the dependency can be set with the common.protobuf2.scope
option. It can be set to "provided" in a build:
-Dcommon.protobuf2.scope=provided
* Add a new ipc(callable) method to catch and convert shaded protobuf
exceptions raised during invocation of the supplied lambda expression,
as sketched after this list.
* This is adopted in the code where the migration is not traumatically
over-complex. RouterAdminProtocolTranslatorPB is left alone for this
reason.
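A sketch of the migration pattern, assuming the ipc() signature implied
by the description above; the stub interface is a hypothetical stand-in
for a protobuf-generated proxy:

```java
import java.io.IOException;
import org.apache.hadoop.thirdparty.protobuf.ServiceException;

import static org.apache.hadoop.ipc.internal.ShadedProtobufHelper.ipc;

// Hypothetical stand-in for a protobuf-generated service stub.
interface EchoProxy {
  String echo(String message) throws ServiceException;
}

class EchoTranslator {
  // The call runs inside ipc(), which catches the shaded ServiceException
  // and rethrows it as an IOException, so this class never references
  // unshaded protobuf classes.
  static String echo(EchoProxy proxy, String message) throws IOException {
    return ipc(() -> proxy.echo(message));
  }
}
```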
Contributed by Steve Loughran
Tune AWS v2 SDK changes based on testing with third party stores
including GCS.
Contains HADOOP-18889. S3A v2 SDK error translations and troubleshooting docs
* Changes needed to work with multiple third party stores
* New third_party_stores document on how to bind to and test
third party stores, including google gcs (which works!)
* Troubleshooting docs mostly updated for v2 SDK
Exception translation/resilience
* New AWSUnsupportedFeatureException for unsupported/unavailable errors
* Handle 501 method unimplemented as one of these
* Error codes > 500 mapped to the AWSStatus500Exception if no explicit
handler.
* Precondition errors handled a bit better
* GCS throttle exception also recognized.
* GCS raises 404 on a delete of a file which doesn't exist: swallow it.
* Error translation uses reflection to create an IOE of the right type.
All IOEs at the bottom of an AWS stack chain are regenerated:
a new exception of that specific type is created, with the top-level
exception as its cause. This is done to retain the whole stack chain.
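An illustrative sketch of that regeneration technique (not the actual
S3A translation code):

```java
import java.io.IOException;

final class ExceptionRegeneration {

  /**
   * Rebuild an IOException of the same concrete class as the innermost
   * one, with the top-level exception as its cause, so the whole stack
   * chain is retained.
   */
  static IOException regenerate(IOException innermost, Exception topLevel) {
    try {
      IOException ioe = innermost.getClass()
          .getConstructor(String.class)
          .newInstance(innermost.getMessage());
      ioe.initCause(topLevel);
      return ioe;
    } catch (ReflectiveOperationException | IllegalStateException e) {
      // No suitable constructor, or the cause was already set:
      // fall back to the original exception.
      return innermost;
    }
  }
}
```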
* Reduce the number of retries within the AWS SDK
* And those of the s3a code.
* S3ARetryPolicy explicitly declares SocketException a connectivity failure,
but excludes its subclass BindException
* SocketTimeoutException is also considered a connectivity failure
* Log at debug whenever retry policies are looked up
* Reorder exceptions to alphabetical order, with commentary
* Review use of the Invoke.retry() method
The reduction in retries is because it is clear, when you try to create
a bucket whose hostname doesn't resolve, that even an UnknownHostException
can take over 90 seconds to eventually fail inside the SDK before it ever
reaches the s3a retry code.
- Reducing the SDK retries means these failures escalate to our code sooner.
- Cutting back on our own retries makes it a bit more responsive for most real
deployments.
- maybeTranslateNetworkException() and the s3a retry policy mean that
UnknownHostException is recognised and fails fast.
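For reference, both layers of retries can be tuned further in site
configuration (a minimal sketch; the values are illustrative, not the
new defaults):

```java
import org.apache.hadoop.conf.Configuration;

Configuration conf = new Configuration();
// Attempts made within the AWS SDK before the error surfaces to s3a.
conf.setInt("fs.s3a.attempts.maximum", 2);
// Retries performed by the s3a client's own retry policy.
conf.setInt("fs.s3a.retry.limit", 3);
```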
Contributed by Steve Loughran
This patch migrates the S3A connector to use the V2 AWS SDK.
This is a significant change at the source code level.
Any applications using the internal extension/override points in
the filesystem connector are likely to break.
This includes but is not limited to:
- Code invoking methods on the S3AFileSystem class
which used classes from the V1 SDK.
- The ability to define the factory for the `AmazonS3` client, and
to retrieve it from the S3AFileSystem. There is a new factory
API and a special interface S3AInternals to access a limited
set of internal classes and operations (see the sketch after this list).
- Delegation token and auditing extensions.
- Classes trying to integrate with the AWS SDK.
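A minimal sketch of the new access path (treat the accessor name as
illustrative; consult aws_sdk_upgrade for the actual API, and the bucket
name is hypothetical):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.s3a.S3AFileSystem;
import org.apache.hadoop.fs.s3a.S3AInternals;

Configuration conf = new Configuration();
S3AFileSystem s3a = (S3AFileSystem) FileSystem.get(
    new Path("s3a://example-bucket/").toUri(), conf);

// Instead of reaching for an AmazonS3 instance (the V1 pattern), go
// through the limited S3AInternals interface.
S3AInternals internals = s3a.getS3AInternals();
```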
All standard V1 credential providers listed in the option
fs.s3a.aws.credentials.provider will be automatically remapped to their
V2 equivalent.
Other V1 Credential Providers are supported, but only if the V1 SDK is
added back to the classpath.
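For example, this remapping means a configuration such as the following
keeps working without the V1 SDK (a minimal sketch; this is one of the
standard remapped providers):

```java
import org.apache.hadoop.conf.Configuration;

Configuration conf = new Configuration();
// A standard V1 provider name...
conf.set("fs.s3a.aws.credentials.provider",
    "com.amazonaws.auth.EnvironmentVariableCredentialsProvider");
// ...is remapped at filesystem creation to its V2 equivalent,
// software.amazon.awssdk.auth.credentials.EnvironmentVariableCredentialsProvider,
// without needing the V1 SDK on the classpath.
```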
The SDK Signing plugin has changed; all v1 signers are incompatible.
There is no support for the S3 "v2" signing algorithm.
Finally, the aws-sdk-bundle JAR has been replaced by the shaded V2
equivalent, "bundle.jar", which is now exported by the hadoop-aws module.
Consult the document aws_sdk_upgrade for the full details.
Contributed by Ahmar Suhail + some bits by Steve Loughran
As well as the POM update, this patch moves to the (renamed) verify methods.
Backporting Mockito test changes may now require cherry-picking this patch;
otherwise, use the old method names.
Contributed by Anmol Asrani
To avoid the ABFS instance being closed due to GC while its streams are still working, attach the ABFS instance to an opaque backReference object and pass that down to the streams, so there is a hard reference to the filesystem for as long as the streams are in use.
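A hypothetical sketch of the pattern (class and field names illustrative):

```java
// The stream keeps a hard reference back to its owning filesystem, so
// the FS cannot be garbage collected (and its resources closed) while
// the stream is still in use.
final class BackReference {
  private final Object reference;   // the owning ABFS instance

  BackReference(Object reference) {
    this.reference = reference;
  }
}

class InputStreamSketch {
  private final BackReference fsBackRef;  // keeps the FS strongly reachable

  InputStreamSketch(BackReference fsBackRef) {
    this.fsBackRef = fsBackRef;
  }
}
```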
Contributed by: Mehakmeet Singh
* Add jdiff xml files from 3.3.6 release.
* Declare 3.3.6 as the latest stable release.
* Copy release notes.
(cherry picked from commit 7db9895000)
(cherry picked from commit cc121e2124aa01458dc296a060edc5e21a295268)
This modifies the manifest committer so that the list of files
to rename is passed between stages as a file of
writeable entries on the local filesystem.
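A generic sketch of the technique (not the committer's actual classes):
stream the entries to a local file in one stage, then read them back one
at a time in the next, so heap use is independent of the file count.

```java
import java.io.BufferedInputStream;
import java.io.BufferedOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

final class EntrySpill {

  // Stage 1: spill (source, dest) rename pairs to a local file.
  static void writeEntries(Path file, List<String[]> renames)
      throws IOException {
    try (DataOutputStream out = new DataOutputStream(
        new BufferedOutputStream(Files.newOutputStream(file)))) {
      out.writeInt(renames.size());
      for (String[] pair : renames) {
        out.writeUTF(pair[0]);
        out.writeUTF(pair[1]);
      }
    }
  }

  // Stage 2: iterate the entries back, one at a time.
  static void readEntries(Path file) throws IOException {
    try (DataInputStream in = new DataInputStream(
        new BufferedInputStream(Files.newInputStream(file)))) {
      int count = in.readInt();
      for (int i = 0; i < count; i++) {
        String source = in.readUTF();
        String dest = in.readUTF();
        // the rename(source, dest) operation would happen here
      }
    }
  }
}
```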
The map of directories to create is still passed in memory;
this map is built across all tasks, so even if many tasks
create files, as long as they all write into the same set of
directories the memory needed is O(directories), with the
task count not a factor.
The _SUCCESS file reports on heap size through gauges.
This should give a warning if there are problems.
Contributed by Steve Loughran
This:
1. Adds optLong, optDouble, mustLong and mustDouble
methods to the FSBuilder interface to let callers explicitly
pass in long and double arguments.
2. The opt() and must() builder calls which take float/double values
now only set long values instead, so as to avoid problems
related to overloaded methods resulting in a ".0" being appended
to a long value.
3. All of the relevant opt/must calls in the hadoop codebase move to
the new methods.
4. And the s3a code is resilient to parse errors in its numeric options:
it will downgrade to the default.
This is nominally incompatible, but the floating-point builder methods
were never used: nothing currently expects floating point numbers.
For anyone who wants to safely set numeric builder options across all compatible
releases, convert the number to a string and then use the opt(String, String)
and must(String, String) methods.
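A minimal usage sketch of both approaches (fs.option.openfile.length is
a standard openFile option; the value is illustrative):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

FileSystem fs = FileSystem.get(new Configuration());
Path path = new Path("/tmp/example");

// New typed methods: no long-vs-double overload ambiguity.
fs.openFile(path)
    .optLong("fs.option.openfile.length", 1024L)
    .build();

// Portable across releases with and without this change: pass the
// number as a string through opt(String, String).
fs.openFile(path)
    .opt("fs.option.openfile.length", Long.toString(1024L))
    .build();
```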
Contributed by Steve Loughran