S3A input stream support for the fs.option.openfile settings.
As well as supporting the read policy option and values,
if the file length is declared in fs.option.openfile.length
then no HEAD request will be issued when opening a file.
This can cut a few tens of milliseconds off the operation.
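A minimal sketch of the two options in use (the helper name, path and
length values are illustrative, not part of this patch):

  import java.util.concurrent.CompletableFuture;
  import org.apache.hadoop.fs.FSDataInputStream;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  // Open a file whose length is already known (e.g. from a listing),
  // declaring both the read policy and the length up front.
  static FSDataInputStream openWithKnownLength(FileSystem fs, Path path,
      long knownLength) throws Exception {
    CompletableFuture<FSDataInputStream> future = fs.openFile(path)
        .opt("fs.option.openfile.read.policy", "random")
        .opt("fs.option.openfile.length", Long.toString(knownLength))
        .build();
    return future.get(); // no HEAD request is needed to open the file
  }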
The patch adds a new openfile parameter/FS configuration option
fs.s3a.input.async.drain.threshold (default: 16000).
It declares the number of bytes remaining in the HTTP input stream
above which any operation to read and discard the rest of the stream
("draining") is executed asynchronously.
This asynchronous draining offers some performance benefit on seek-heavy
file IO.
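The threshold can be tuned like any other S3A option; a sketch, using an
arbitrary example value:

  import org.apache.hadoop.conf.Configuration;

  Configuration conf = new Configuration();
  // Drain asynchronously whenever more than 8 KB remain (example value).
  conf.setLong("fs.s3a.input.async.drain.threshold", 8192);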
Contributed by Steve Loughran.
Change-Id: I9b0626bbe635e9fd97ac0f463f5e7167e0111e39
Completely removes S3Guard support from the S3A codebase.
If the connector is configured to use any metastore other than
the null and local stores (i.e. DynamoDB is selected), the S3A client
will raise an exception and refuse to initialize.
This is to ensure that there is no mix of S3Guard-enabled and -disabled
deployments with the same configuration but different Hadoop releases;
it must be turned off completely.
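As a sketch, the only explicit metastore setting a client may still use
is the null store (the property name below is the long-standing S3Guard
one, quoted from memory rather than from this patch):

  import org.apache.hadoop.conf.Configuration;

  Configuration conf = new Configuration();
  // Explicitly selecting the null store is still permitted...
  conf.set("fs.s3a.metadatastore.impl",
      "org.apache.hadoop.fs.s3a.s3guard.NullMetadataStore");
  // ...whereas selecting the DynamoDB store now fails initialization.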
The "hadoop s3guard" command has been retained -but the supported
subcommands have been reduced to those which are not purely S3Guard
related: "bucket-info" and "uploads".
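For example:

hadoop s3guard bucket-info s3a://bucket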
This is a major change in terms of the number of files
changed; before cherry-picking subsequent s3a patches into
older releases, this patch will probably need backporting
first.
Goodbye S3Guard, your work is done. Time to die.
Contributed by Steve Loughran.
S3A connector to support the IOStatistics API of HADOOP-16830.
This is a major rework of the S3A Statistics collection to
* Embrace the IOStatistics APIs
* Move from direct references of S3AInstrumentation statistics
collectors to interface/implementation classes in new packages.
* Ubiquitous support of IOStatistics, including:
S3AFileSystem, input and output streams, RemoteIterator instances
provided in list calls.
* Adoption of new statistic names from hadoop-common
Regarding statistic collection, as well as all existing
statistics, the connector now records min/max/mean durations
of HTTP GET and HEAD requests, and those of LIST operations.
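A sketch of pulling the statistics off one of these sources, here an
input stream (the helper name is ours, not part of the patch):

  import org.apache.hadoop.fs.FSDataInputStream;
  import org.apache.hadoop.fs.statistics.IOStatistics;
  import org.apache.hadoop.fs.statistics.IOStatisticsLogging;
  import org.apache.hadoop.fs.statistics.IOStatisticsSupport;

  static void dumpStreamStatistics(FSDataInputStream in) {
    // Returns null if the wrapped stream is not an IOStatisticsSource.
    IOStatistics stats = IOStatisticsSupport.retrieveIOStatistics(in);
    System.out.println(IOStatisticsLogging.ioStatisticsToString(stats));
  }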
Contributed by Steve Loughran.
Contains:
HADOOP-16474. S3Guard ProgressiveRenameTracker to mark destination
directory as authoritative on success.
HADOOP-16684. S3guard bucket info to list a bit more about
authoritative paths.
HADOOP-16722. S3GuardTool to support FilterFileSystem.
This patch improves the marking of newly created/imported directory
trees in S3Guard DynamoDB tables as authoritative.
Specific changes:
* Renamed directories are marked as authoritative if the entire
operation succeeded (HADOOP-16474).
* When updating parent table entries as part of any table write,
there's no overwriting of their authoritative flag.
s3guard import changes:
* new -verbose flag to print out what is going on.
* The "s3guard import" command lets you declare that a directory tree
is to be marked as authoritative:
hadoop s3guard import -authoritative -verbose s3a://bucket/path
When importing a listing and a file is found, the import tool queries
the metastore and only updates the entry if the file is different from
before, where different == new timestamp, etag, or length. S3Guard can get
timestamp differences due to clock skew in PUT operations.
As the recursive list performed by the import command doesn't retrieve the
versionID, the existing entry may in fact be more complete.
When updating an existing entry due to clock skew, the existing version ID
is propagated to the new entry (note: the etags must match; this is needed
to deal with inconsistent listings).
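A hypothetical sketch of that update rule (plain fields standing in for
the real S3Guard metadata types):

  import java.util.Objects;

  // Update only when the listed file really differs from the stored entry.
  static boolean entryChanged(long oldModTime, String oldEtag, long oldLen,
      long newModTime, String newEtag, long newLen) {
    return newModTime != oldModTime
        || !Objects.equals(newEtag, oldEtag)
        || newLen != oldLen;
  }
  // When only the timestamp differs and the etags match, the existing
  // entry's versionID is copied into the updated entry.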
There is a new s3guard command to audit an S3Guard bucket/path's
authoritative state:
hadoop s3guard authoritative -check-config s3a://bucket/path
This is primarily for testing/auditing.
The s3guard bucket-info command also provides some more details on the
authoritative state of a store (HADOOP-16684).
Change-Id: I58001341c04f6f3597fcb4fcb1581ccefeb77d91
S3A to implement S3 Select through the new openFile() API.
The openFile() API is asynchronous, and is implemented across FileSystem and FileContext.
The MapReduce V2 inputs are moved to this API, and you can actually set must/may
options to pass in.
This is more useful for setting things like s3a seek policy than for S3 select,
as the existing input format/record readers can't handle S3 select output where
the stream is shorter than the file length, and splitting plain text is suboptimal.
Future work is needed there.
In the meantime, any/all filesystem connectors are now free to add their own filesystem-specific
configuration parameters which can be set in jobs and used to set filesystem input stream
options (seek policy, retry, encryption secrets, etc).
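A sketch of the builder in a job context, requesting the S3A seek policy
as a may-option so that other filesystems simply ignore it (the key shown
is the S3A seek-policy option of this era, an assumption on our part):

  import java.util.concurrent.CompletableFuture;
  import org.apache.hadoop.fs.FSDataInputStream;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  static FSDataInputStream openForRandomIO(FileSystem fs, Path path)
      throws Exception {
    CompletableFuture<FSDataInputStream> future = fs.openFile(path)
        .opt("fs.s3a.experimental.input.fadvise", "random") // may-option
        .build();
    return future.get();
  }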
Contributed by Steve Loughran.
Patch includes:
HADOOP-12844 Recover when S3A fails on IOException in read()
HADOOP-13058 S3A FS fails during init against a read-only FS if multipart purge
HADOOP-13047 S3a Forward seek in stream length to be configurable