Adds a class DelegationBindingInfo which contains binding info
beyond just the AWS credential list.
The binding class can be expanded when needed. Until then, all existing
implementations will work, as the new method
DelegationBindingInfo deploy(AbstractS3ATokenIdentifier retrievedIdentifier)
falls back to the original methods.
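A minimal sketch of that fallback, assuming the binding's
pre-existing credential-list method is named bindToTokenIdentifier()
and that DelegationBindingInfo exposes a withCredentialProviders()
setter; both of those names are illustrative:

    import java.io.IOException;

    // Default implementation inherited by existing bindings: wrap the
    // credential list from the original method in the new holder class.
    public DelegationBindingInfo deploy(AbstractS3ATokenIdentifier retrievedIdentifier)
        throws IOException {
      return new DelegationBindingInfo()
          .withCredentialProviders(bindToTokenIdentifier(retrievedIdentifier));
    }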
To avoid the ABFS instance being closed due to GC while its streams
are still working, the instance is attached to an opaque backReference
object which is passed down to the streams, so that they hold a hard
reference to it for as long as they are in use.
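A minimal sketch of the pattern, with hypothetical class names; the
real ABFS code differs:

    // The stream keeps a strong reference to an opaque holder wrapping
    // the filesystem, so the FS cannot be garbage collected while any
    // stream is still in use.
    final class BackReference {
      private final Object source;

      BackReference(Object source) {
        this.source = source;
      }
    }

    class StreamSketch extends java.io.InputStream {
      private final BackReference fsBackRef; // hard reference held for the stream's lifetime

      StreamSketch(BackReference fsBackRef) {
        this.fsBackRef = fsBackRef;
      }

      @Override
      public int read() {
        return -1; // placeholder; a real stream reads from the service
      }
    }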
Contributed by: Mehakmeet Singh
This modifies the manifest committer so that the list of files
to rename is passed between stages as a file of
writable entries on the local filesystem.
The map of directories to create is still passed in memory;
this map is built across all tasks, so even if many tasks
create files, as long as they all write into the same set of
directories the memory needed is O(directories), with the
task count not a factor.
The _SUCCESS file reports on heap size through gauges,
which should give early warning of memory problems.
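A minimal sketch of the idea, not the committer's actual file format;
RenameEntry here is a hypothetical Writable holding one
source/destination pair, streamed to a local file so that memory use
does not grow with the number of files:

    import java.io.DataInput;
    import java.io.DataOutput;
    import java.io.IOException;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.io.Writable;

    // One rename entry; a stage writes these one by one with write(out)
    // and the next stage reads them back with readFields(in).
    final class RenameEntry implements Writable {
      private final Text source = new Text();
      private final Text dest = new Text();

      @Override
      public void write(DataOutput out) throws IOException {
        source.write(out);
        dest.write(out);
      }

      @Override
      public void readFields(DataInput in) throws IOException {
        source.readFields(in);
        dest.readFields(in);
      }
    }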
Contributed by Steve Loughran
This
1. changes the default value of fs.s3a.directory.marker.retention
to "keep"
2. no longer prints a message when an S3A FS instance is
instantiated with any option other than delete.
Switching to marker retention improves performance
on any S3 bucket as there are no needless marker DELETE requests
-leading to a reduction in write IOPS and in any delays waiting
for the DELETE call to finish.
There are *very* significant improvements on versioned buckets,
where tombstone markers slow down LIST operations: the more
tombstones there are, the worse query planning gets.
Having versioning enabled on production stores is the foundation
of any data protection strategy, so this has tangible benefits
in production.
It is *not* compatible with older hadoop releases; specifically
- Hadoop branch 2 < 2.10.2
- Any release of Hadoop 3.0.x and Hadoop 3.1.x
- Hadoop 3.2.0 and 3.2.1
- Hadoop 3.3.0
Incompatible releases have no problems reading data in stores
where markers are retained, but can get confused when deleting
or renaming directories.
If you are still using older versions to write data, and cannot
yet upgrade, switch the option back to "delete".
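Only the default changes; a deployment which must keep
interoperating with the incompatible releases above can restore the
old behaviour explicitly, for example:

    import org.apache.hadoop.conf.Configuration;

    Configuration conf = new Configuration();
    // revert to the old default while older clients still
    // delete/rename directories in the same store
    conf.set("fs.s3a.directory.marker.retention", "delete");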
Contributed by Steve Loughran
This:
1. Adds optLong, optDouble, mustLong and mustDouble
methods to the FSBuilder interface to let callers explicitly
pass in long and double arguments.
2. The opt() and must() builder calls which take float/double values
now set long values instead, so as to avoid problems
where overloaded methods result in a ".0" being appended
to a long value.
3. All of the relevant opt/must calls in the hadoop codebase move to
the new methods
4. And the s3a code is resilient to parse errors in its numeric options
-it will downgrade to the default.
This is nominally incompatible, but the floating-point builder methods
were never used: nothing currently expects floating point numbers.
For anyone who wants to safely set numeric builder options across all compatible
releases, convert the number to a string and then use the opt(String, String)
and must(String, String) methods.
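A sketch of the new methods against the standard openFile() builder,
assuming an existing FileSystem instance fs; the fs.option.openfile.*
keys are the standard openfile options:

    import org.apache.hadoop.fs.FutureDataInputStreamBuilder;
    import org.apache.hadoop.fs.Path;

    FutureDataInputStreamBuilder builder = fs.openFile(new Path("s3a://bucket/data.csv"))
        .optLong("fs.option.openfile.length", 1_000_000L)  // explicit long: no ".0" suffix
        .opt("fs.option.openfile.read.policy", "random");
    // portable across all releases: pass the number as a string
    builder.opt("fs.option.openfile.split.start", Long.toString(0));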
Contributed by Steve Loughran
Explicitly sets fs.s3a.endpoint.region to eu-west-1 so that
creation of the ARN-referenced FS fails with an unknown-store
error rather than an IllegalArgumentException.
Contributed by Steve Loughran
To support recovery from network failures during rename, the abfs client
fetches the etag of the source file and, when recovering from a
failure, uses this etag to determine whether the rename succeeded
before the failure happened.
* This works for files, but not directories
* It adds the overhead of a HEAD request before each rename.
* The option can be disabled by setting "fs.azure.enable.rename.resilience"
to false
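For example, to turn the feature off and so skip the extra HEAD
request per rename:

    import org.apache.hadoop.conf.Configuration;

    Configuration conf = new Configuration();
    conf.setBoolean("fs.azure.enable.rename.resilience", false);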
Contributed by Sree Bhattacharyya
This change lets the client react pre-emptively to server load without
first hitting 503 responses and the exponential backoff which follows.
This reduces the performance degradation as an account approaches
its capacity limits.
Contributed by Anmol Asrani
POM and LICENSE fixup of transitive dependencies
* Exclude hadoop-cloud-storage imports which come in with hadoop-common
* Add explicit import of hadoop's org.codehaus.jettison declaration
to hadoop-aliyun
* Tune aliyun jars imports
* Update LICENSE-binary for the current set of libraries.
Contributed by Steve Loughran
Followup to the original HADOOP-18582.
Temporary path cleanup is re-enabled for -append jobs
as these will create temporary files when creating or overwriting files.
Contributed by Ayush Saxena
Adds toggleable support for using modification time during distcp
-update between two stores with incompatible checksum comparison.
Contributed by: Mehakmeet Singh <mehakmeet.singh.behl@gmail.com>
Fixes a javadoc error which came with
HADOOP-18577. ABFS: Add probes of readahead fix (#5205)
Part of the HADOOP-18521 ABFS readahead fix; MUST be included.
Contributed by Steve Loughran
Followup patch to HADOOP-18456 as part of HADOOP-18521,
ABFS ReadBufferManager buffer sharing across concurrent HTTP requests
Adds probes of the readahead fix to aid in checking the safety of the
hadoop ABFS client across different releases.
* The ReadBufferManager constructor logs at TRACE that it is safe
* AbfsInputStream declares it is fixed in toString()
by including "fs.azure.capability.readahead.safe" in the
result.
The ABFS FileSystem hasPathCapability("fs.azure.capability.readahead.safe")
probe returns true to indicate the client's readahead manager has been fixed
to be safe when prefetching.
All Hadoop releases for which this probe returns false,
and for which the probe "fs.capability.etags.available"
returns true, are at risk of returning invalid data when reading
ADLS Gen2/Azure storage data.
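A sketch of running both probes from application code, assuming an
existing abfs FileSystem instance fs; both capability names come
from this entry:

    import org.apache.hadoop.fs.Path;

    Path root = new Path("/");
    boolean readaheadSafe =
        fs.hasPathCapability(root, "fs.azure.capability.readahead.safe");
    boolean etagsAvailable =
        fs.hasPathCapability(root, "fs.capability.etags.available");
    if (etagsAvailable && !readaheadSafe) {
      // this release may return invalid data when prefetching:
      // disable readahead or upgrade before reading through this client
    }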
Contributed by Steve Loughran.
This has triggered an OOM in a process which was churning through s3a fs
instances; the increased memory footprint of IOStatistics amplified what
must have been a long-standing issue with FS instances being created
and never closed.
* Makes sure instrumentation is closed when the FS is closed.
* Uses a weak reference from metrics to instrumentation, so even
if the FS wasn't closed (see HADOOP-18478), this back reference
would not cause the S3AInstrumentation reference to be retained.
* If S3AFileSystem is configured to log at TRACE it will log the
calling stack of initialize(), to help identify where the
instance is being created; this should help track down
the cause of instance leakage.
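A minimal sketch of the weak-reference pattern, using illustrative
names rather than the actual S3A metrics classes:

    import java.lang.ref.WeakReference;

    class MetricsSourceSketch {
      private final WeakReference<Object> instrumentation;

      MetricsSourceSketch(Object instrumentation) {
        this.instrumentation = new WeakReference<>(instrumentation);
      }

      void update() {
        Object target = instrumentation.get();
        if (target == null) {
          return;  // already collected: the metrics registration no longer pins it
        }
        // ... forward the update to target ...
      }
    }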
Contributed by Steve Loughran.
This is a followup to the original HADOOP-18546
patch; cherry-picks of that should include this
or follow up with it.
Removes the risk of race conditions in the assertions
of ITestReadBufferManager on the state of the in-progress
and completed queues, by removing assertions brittle
to the timing of scheduling and network IO
* Waits for the executor pool shutdown to complete before
making any assertions (see the sketch after this list)
* Assertions that there are no in-progress reads MUST be
cut, as there may be some and they will not be cancelled.
* Assertions that the completed list contains no buffers
of a closed stream are brittle, because if there was
an in-progress read which completed after stream.close(),
its buffer will end up in the list.
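A minimal sketch of the wait-before-asserting pattern referenced
above; the method name, the sixty-second timeout and the JUnit
assertion style are illustrative, not the test's actual code:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.TimeUnit;
    import static org.junit.Assert.assertTrue;

    // Block until the prefetch executor has drained all in-flight work,
    // so later assertions on queue state cannot race with ongoing reads.
    static void awaitPoolShutdown(ExecutorService pool) throws InterruptedException {
      pool.shutdown();
      assertTrue("timed out awaiting executor termination",
          pool.awaitTermination(60, TimeUnit.SECONDS));
    }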
Contributed by Steve Loughran
This addresses HADOOP-18521, "ABFS ReadBufferManager buffer sharing
across concurrent HTTP requests" by not trying to cancel
in progress reads.
It supersedes HADOOP-18528, which disables the prefetching.
If that patch is applied *after* this one, prefetching
will be disabled.
As well as changing the default value in the code,
core-default.xml is updated to set
fs.azure.enable.readahead = true
As a result, if Configuration.get("fs.azure.enable.readahead")
returns a non-null value, then it can be inferred that
it was set either in core-default.xml (the fix is present)
or in core-site.xml (someone asked for it).
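A sketch of that inference:

    import org.apache.hadoop.conf.Configuration;

    Configuration conf = new Configuration();  // loads core-default.xml and core-site.xml
    if (conf.get("fs.azure.enable.readahead") != null) {
      // set in core-default.xml (the fix is present)
      // or in core-site.xml (someone asked for it)
    }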
Contributed by Pranav Saxena.
This allows abfs request throttling to be shared across all
abfs connections talking to containers belonging to the same abfs storage
account -as that is the level at which IO throttling is applied.
The feature is enabled/disabled by the configuration option
"fs.azure.account.throttling.enabled";
the default is "true".
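To opt out of the shared account-level throttling for a client:

    import org.apache.hadoop.conf.Configuration;

    Configuration conf = new Configuration();
    // return to per-filesystem-instance throttling
    conf.setBoolean("fs.azure.account.throttling.enabled", false);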
Contributed by Anmol Asrani