Contributed by Steve Loughran.
This is part of the ongoing refactoring of the S3A codebase, with the
delegation token support (HADOOP-14556) no longer given a direct reference
to the owning S3AFileSystem. Instead it gets a StoreContext and a new
interface, DelegationOperations, to access those operations offered by S3AFS
which are specifically needed by the DT bindings.
The sole operation needed is listAWSPolicyRules(), which is used to allow
S3A FS and the S3Guard metastore to return the AWS policy rules needed to
access their specific services/buckets/tables, allowing the AssumedRole
delegation token to be locked down.
As further restructuring takes place, that interface's implementation
can be moved to wherever the new home for those operations ends up.
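A minimal sketch of the shape of such an interface, with the policy rules and
access levels modelled as plain strings for brevity; the real signatures in the
patch are richer.

    import java.util.List;
    import java.util.Set;

    // Sketch only: illustrates the callback interface described above;
    // the actual parameter and return types in the patch differ.
    public interface DelegationOperations {

      /**
       * Ask the owning store for the AWS policy statements needed to access
       * its bucket/S3Guard table, so that an AssumedRole delegation token
       * can be locked down to exactly those rights.
       */
      List<String> listAWSPolicyRules(Set<String> accessLevels);
    }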
Although it changes the API of an extension point, that feature (S3
Delegation Tokens) has not shipped; backwards compatibility is not a
problem except for anyone who has implemented DT support against trunk.
To those developers: sorry.
Change-Id: I770f58b49ff7634a34875ba37b7d51c94d7c21da
Contributed by Steve Loughran.
This downgrades the checks for leftover __magic entries from fail to warn, now that the parallel
test runs make speculation more likely.
Change-Id: Ia4df2e90f82a06dbae69f3fdaadcbb0e0d713b38
Contributed by Steve Loughran.
This changes FileSystem instantiation so that if an IOException or RuntimeException is
raised in the invocation of FileSystem.initialize(), a best-effort
attempt is made to close the FS instance; exceptions raised there
are swallowed.
The S3AFileSystem is also modified to do its own cleanup if an
IOException is raised during its initialize() process, it being the
FS we know has the "potential" to leak threads, especially in
extension points (e.g. AWS Authenticators) which spawn threads.
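A rough sketch of the instantiation pattern described above; createRawFileSystem()
is a hypothetical stand-in for the reflective construction step inside FileSystem.

    import java.io.IOException;
    import java.net.URI;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;

    // Sketch only: best-effort close of a half-initialized FS instance.
    final class FileSystemCreationSketch {
      static FileSystem instantiate(URI uri, Configuration conf) throws IOException {
        FileSystem fs = createRawFileSystem(uri, conf);
        try {
          fs.initialize(uri, conf);
          return fs;
        } catch (IOException | RuntimeException e) {
          try {
            fs.close();
          } catch (IOException | RuntimeException ignored) {
            // deliberately swallowed so the original failure surfaces
          }
          throw e;
        }
      }

      private static FileSystem createRawFileSystem(URI uri, Configuration conf) {
        throw new UnsupportedOperationException("illustration only");
      }
    }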
Change-Id: Ib84073a606c9d53bf53cbfca4629876a03894f04
Contributed by Steve Loughran.
Includes HADOOP-16651. S3 getBucketLocation() can return "US" for us-east.
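The fix-up amounts to mapping that legacy value onto the canonical region name;
a one-line illustration with a hypothetical helper:

    // Hypothetical helper: getBucketLocation() can return "US" (or an empty
    // string) for the US Standard region, so map those onto "us-east-1".
    static String fixBucketRegion(String region) {
      return (region == null || region.isEmpty() || "US".equals(region))
          ? "us-east-1" : region;
    }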
Change-Id: Ifc0dca76e51495ed1a8fc0f077b86bf125deff40
Contributed by Steve Loughran.
This addresses two scale issues which have surfaced in large scale benchmarks
of the S3A Committers.
* Thread pools are not cleaned up.
This now happens, with tests.
* OOM on job commit for jobs with many thousands of tasks,
each generating tens of (very large) files.
Instead of loading all pending commits into memory as a single list, the list
of files to load is the sole list which is passed around; .pendingset files are
loaded and processed in isolation -and reloaded if necessary for any
abort/rollback operation.
The parallel commit/abort/revert operations now work at the .pendingset level,
rather than that of individual pending commit files. The existing parallelized
Tasks API is still used to commit those files, but with a null thread pool, so
as to serialize the operations.
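A simplified sketch of the commit loop, with hypothetical helpers standing in
for the committer's PendingSet handling; only the list of .pendingset paths is
ever held in memory.

    import java.io.IOException;
    import java.util.List;

    import org.apache.hadoop.fs.Path;

    // Sketch only: hypothetical helpers stand in for the committer internals
    // (the real code uses the PendingSet/SinglePendingCommit classes and the
    // parallelized Tasks API).
    abstract class PendingSetCommitSketch {

      // Each .pendingset file is loaded, committed and released before the
      // next one is touched.
      void commitJob(List<Path> pendingsetFiles) throws IOException {
        for (Path pendingset : pendingsetFiles) {
          List<String> uploads = loadPendingUploads(pendingset);
          try {
            for (String upload : uploads) {
              commitOneUpload(upload);          // serialized within the set
            }
          } catch (IOException e) {
            // the .pendingset file can be reloaded later to drive abort/rollback
            abortPendingSet(pendingset);
            throw e;
          }
        }
      }

      abstract List<String> loadPendingUploads(Path pendingset) throws IOException;
      abstract void commitOneUpload(String uploadId) throws IOException;
      abstract void abortPendingSet(Path pendingset) throws IOException;
    }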
Change-Id: I5c8240cd31800eaa83d112358770ca0eb2bca797
Contributed by Steve Loughran.
Replaces the committer-specific terasort and MR test jobs with parameterization
of the (now single) tests and use of file:// over hdfs:// as the cluster FS.
The parameterization ensures that only one of the specific committer tests
runs at a time -overloads of the test machines are less likely, and so the
suites can be pulled back into the parallel phase.
There's also more detailed validation of the stage outputs of the terasorting;
if one test fails the rest are all skipped. This and the fact that job
output is stored under target/yarn-${timestamp} means failures should
be more debuggable.
Change-Id: Iefa370ba73c6419496e6e69dd6673d00f37ff095
Contributed by Steve Loughran.
Includes
-S3A glob scans don't bother trying to resolve symlinks
-stack traces don't get lost in getFileStatuses() when exceptions are wrapped
-debug level logging of what is up in Globber
-Contains HADOOP-13373. Add S3A implementation of FSMainOperationsBaseTest.
-ITestRestrictedReadAccess tests incomplete read access to files.
This adds a builder API for constructing globbers which other stores can use
so that they too can skip symlink resolution when not needed.
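A sketch of how a filesystem implementation might use such a builder to glob
without symlink resolution; the builder and method names here are assumptions
for illustration, not the exact API added.

    import java.io.IOException;

    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // Sketch only: assumed factory/builder method names, shown as they might
    // be called from within a FileSystem implementation's globStatus() path.
    class GlobberUseSketch {
      static FileStatus[] globWithoutSymlinks(FileSystem fs, Path pattern)
          throws IOException {
        return Globber.createGlobber(fs)            // assumed factory method
            .withPathPattern(pattern)
            .withResolveSymlinks(false)             // object stores: no symlinks
            .build()
            .glob();
      }
    }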
Change-Id: I23bcdb2783d6bd77cf168fdc165b1b4b334d91c7
Contributed by Steve Loughran.
This complements the StreamCapabilities interface by allowing applications to probe whether a specific path on a specific instance of a FileSystem client
offers a specific capability.
This is intended to allow applications to determine
* Whether a method is implemented before calling it and dealing with UnsupportedOperationException.
* Whether a specific feature is believed to be available in the remote store.
As well as a common set of capabilities defined in CommonPathCapabilities,
file systems are free to add their own capabilities, prefixed with
fs. + schema + .
The plan is to identify and document more capabilities -and, for file systems which add new features, to always declare the availability of the feature through a capability.
Note
* The remote store is not expected to be checked for the feature;
It is more a check of client API and the client's configuration/knowledge
of the state of the remote system.
* Permissions are not checked.
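For example, a client can probe for append support on a path before calling
append(); a short illustration using a representative constant from
CommonPathCapabilities:

    import java.io.IOException;

    import org.apache.hadoop.fs.CommonPathCapabilities;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // Probe the client for a capability on a specific path before using it.
    // This reflects the client's API/configuration, not the remote store,
    // and no permission checks are performed.
    static boolean canAppend(FileSystem fs, Path path) throws IOException {
      return fs.hasPathCapability(path, CommonPathCapabilities.FS_APPEND);
    }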
Change-Id: I80bfebe94f4a8bdad8f3ac055495735b824968f5
Contributed by Sahil Takiar.
This moves the SSLSocketFactoryEx class from hadoop-azure into hadoop-common
as the DelegatingSSLSocketFactory and binds the S3A connector to it so that
it can avoid using those HTTPS algorithms which perform poorly on Java 8.
Change-Id: Ie9e6ac24deac1aa05e136e08899620efa7d22abd
Contributed by Steve Loughran.
This patch avoids issuing any HEAD path request when creating a file with overwrite=true,
so 404s will not end up in the S3 load balancers unless someone calls getFileStatus/exists/isFile
in their own code.
The Hadoop FsShell CommandWithDestination class is modified to not register uncreated files
for deleteOnExit(), because that calls exists() and so can place the 404 in the cache, even
after S3A is patched to not do it itself.
Because S3Guard knows when a file should be present, it adds a special FileNotFound retry policy
independently configurable from other retry policies; it is also exponential, but with
different parameters. This is because every HEAD request will refresh any 404 cached in
the S3 Load Balancers. It's not enough to retry: we have to have a suitable gap between
attempts to (hopefully) ensure any cached entry will be gone.
The options and values are:
fs.s3a.s3guard.consistency.retry.interval: 2s
fs.s3a.s3guard.consistency.retry.limit: 7
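A sketch of setting these programmatically (the same keys can go in
core-site.xml); the values shown are the defaults listed above.

    import org.apache.hadoop.conf.Configuration;

    // Tune the S3Guard "file should be there" retry policy.
    static Configuration withConsistencyRetries(Configuration conf) {
      conf.set("fs.s3a.s3guard.consistency.retry.interval", "2s");
      conf.setInt("fs.s3a.s3guard.consistency.retry.limit", 7);
      return conf;
    }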
The S3A copy() method used during rename() raises a RemoteFileChangedException which is not caught,
and so is not downgraded to a false return value. Thus: when a rename is unrecoverable, this fact is propagated.
Copy operations without S3Guard lack the confidence that the file exists, so they don't retry the same way:
the copy will fail fast with a different error message. However, because create(path, overwrite=false) no
longer does HEAD path, we can at least be confident that S3A itself is not creating those cached
404 markers.
Change-Id: Ia7807faad8b9a8546836cb19f816cccf17cca26d
Contributed by Steve Loughran.
This overlaps the scanning for directory entries with batched calls to S3 DELETE and updates of the S3Guard tables.
It also uses S3Guard to list the files to delete, so it finds newly created files even when S3 listings are not consistent.
For paths which the client considers S3Guard to be authoritative, we also do a recursive LIST of the store and delete those files; this is to find unindexed files and to guarantee that the delete(path, true) call really does delete everything underneath.
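A rough sketch of the overlapped list/delete flow, with a hypothetical helper;
the real operation also updates the S3Guard tables as each batch completes.

    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.Iterator;
    import java.util.List;

    // Sketch only: listing is driven from S3Guard (plus a raw LIST for
    // authoritative paths) while DELETE batches are issued as soon as a page
    // of keys has been gathered, rather than after a full scan.
    abstract class IncrementalDeleteSketch {
      private static final int DELETE_PAGE_SIZE = 1000; // S3 bulk delete limit

      void deleteTree(Iterator<String> keysToDelete) throws IOException {
        List<String> batch = new ArrayList<>(DELETE_PAGE_SIZE);
        while (keysToDelete.hasNext()) {
          batch.add(keysToDelete.next());
          if (batch.size() == DELETE_PAGE_SIZE) {
            submitDeleteBatch(batch);      // async S3 DELETE + S3Guard update
            batch = new ArrayList<>(DELETE_PAGE_SIZE);
          }
        }
        if (!batch.isEmpty()) {
          submitDeleteBatch(batch);
        }
      }

      abstract void submitDeleteBatch(List<String> keys) throws IOException;
    }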
Change-Id: Ice2f6e940c506e0b3a78fa534a99721b1698708e
Contributed by Gabor Bota.
This pulls the tracking of the lastUpdated timestamp of metadata entries up from the DDB metastore into all s3guard stores, and then uses this to filter out expired tombstones from listings.
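Very roughly, the filtering amounts to something like the following; the entry
type and accessor names here are stand-ins for the real metadata classes.

    import java.util.List;
    import java.util.stream.Collectors;

    // Sketch only: drop tombstones whose lastUpdated timestamp has aged past
    // the configured metadata TTL.
    final class TombstoneFilterSketch {
      interface PathMetadataView {
        boolean isDeleted();      // is this entry a tombstone?
        long getLastUpdated();    // when was the entry last written?
      }

      static <E extends PathMetadataView> List<E> filterExpired(
          List<E> listing, long ttlMillis, long nowMillis) {
        return listing.stream()
            .filter(e -> !(e.isDeleted()
                && e.getLastUpdated() + ttlMillis < nowMillis))
            .collect(Collectors.toList());
      }
    }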
Change-Id: I80f121236b49c75a024116f65a3ef29d3580b462
Contributed by Steve Loughran and Gabor Bota.
This
* Asks S3Guard to determine the empty directory status.
* Makes S3A's root directory rm("/") call always return false (as abfs does)
* Documents that object stores MAY do this
* Overloads ContractTestUtils.assertDeleted to let assertions declare that the source directory does not need to exist. This stops inconsistencies in directory listings failing a root test.
It avoids a recent regression (HADOOP-16279) where, if there was a tombstone above the first element found in a directory listing, the directory would be considered empty when in fact there were child entries. That could downgrade an rm(path, recursive) to a no-op, while also confusing rename(src, dest), as dest could be mistaken for an empty directory and so permit the copy above it, rather than rejecting it with "destination path exists and is not empty".
Change-Id: I136a3d1a5a48a67e6155d790a40ff558d0d2c108
Contributed by Steve Loughran
Contains
- HADOOP-16397. Hadoop S3Guard Prune command to support a -tombstone option.
- HADOOP-16406. ITestDynamoDBMetadataStore.testProvisionTable times out intermittently
This patch doesn't fix the underlying problem but it
* changes some tests to clean up better
* does a lot more logging of operations against DDB, if enabled
* adds an entry point to dump the state of the metastore and s3 tables (precursor to fsck)
* adds a purge entry point to help clean up after a test run has got a store into a mess
* s3guard prune command adds a -tombstone option to only clear tombstones
The outcome is that tests should pass consistently and if problems occur we have better diagnostics.
Change-Id: I3eca3f5529d7f6fec398c0ff0472919f08f054eb
Contributed by Steve Loughran.
This patch
* changes the default conflict policy for the staging committer to append, as we get with the classic FileOutputFormat committer
* adds a check for the dest path being a file not a dir
* adds tests for this
* Changes AbstractCommitTerasortIT. to not use the simple parser, so it fails if the file is present.
Change-Id: Id53742958ed1cf321ff96c9063505d64f3254f53
Contributed by Ben Roling.
S3Guard will now track the etag of uploaded files and, if an S3
bucket is versioned, the object version.
You can then control how to react to a mismatch between the data
in the DynamoDB table and that in the store: warn, fail, or, when
using versions, return the original value.
This adds two new columns to the table: etag and version.
This is transparent to older S3A clients -but when such clients
add/update data to the S3Guard table, they will not add these values.
As a result, the etag/version checks will not work with files uploaded by older clients.
For a consistent experience, upgrade all clients to use the latest hadoop version.
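How a client reacts to a mismatch is controlled by the S3A change-detection
options; an illustrative snippet using the documented keys (shown here as an
example of the configuration surface, not as part of this patch's diff).

    import org.apache.hadoop.conf.Configuration;

    // "versionid" requires a versioned bucket; "warn" only logs on mismatch.
    static Configuration withChangeDetection(Configuration conf) {
      conf.set("fs.s3a.change.detection.source", "versionid"); // or "etag"
      conf.set("fs.s3a.change.detection.mode", "warn"); // or "client", "server", "none"
      return conf;
    }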
This is the first step for on-demand operations: things recognize when they are using on-demand tables,
as do the tests.
Contributed by Steve Loughran.
This is needed to fix up some confusion about the job.addCache() handling of S3A paths; if a file and all its parent dirs look world-readable, the files are downloaded by the NM without using the DTs of the user submitting the job. This means that when you submit jobs to an EC2 cluster with lower IAM permissions than the user, cached resources don't get downloaded and the job doesn't start.
Production code changes:
* S3AFileStatus passes "true" for the superclass's encrypted flag during construction (see the sketch after the docs notes below).
Tests
* Base AbstractContractOpenTest can control whether zero byte files created in tests are encrypted. Not done via an XML attribute, just a subclass point. Thoughts?
* Verify that the filecache considers paths to not have the permissions which trigger reduce-privilege downloads
* And extend ITestDelegatedMRJob to test a completely different bucket (open street map), to verify that cached resources do get their tokens picked up
Docs:
* Advise FS developers to say all files are encrypted. It's otherwise harmless and it'll stop other people seeing impossible-to-debug error messages on app launch.
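A minimal sketch of the production change, using one of the FileStatus
constructors that takes an encrypted flag; the real S3AFileStatus constructors
take different arguments.

    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.fs.permission.FsPermission;

    // Sketch only: marking every status as "encrypted" so the YARN filecache
    // never treats the files as world-readable public resources.
    class EncryptedStatusSketch extends FileStatus {
      EncryptedStatusSketch(long length, boolean isdir, long blocksize,
          long modTime, Path path, String owner) {
        super(length, isdir, 1, blocksize, modTime, 0,
            FsPermission.getFileDefault(), owner, owner, null, path,
            false /* hasAcl */, true /* isEncrypted */, false /* isErasureCoded */);
      }
    }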
Contributed by Steve Loughran.
Change-Id: Ifaae4c9d735ccc5eafeebd2584b65daf2d4e5da3
Contributed by Steve Loughran.
This includes
- HADOOP-15890. Some S3A committer tests don't match ITest* pattern; don't run in maven
- MAPREDUCE-7090. BigMapOutput example doesn't work with paths off cluster fs
- MAPREDUCE-7091. Terasort on S3A to switch to new committers
- MAPREDUCE-7092. MR examples to work better against cloud stores
S3A to implement S3 Select through this API.
The new openFile() API is asynchronous, and implemented across FileSystem and FileContext.
The MapReduce V2 inputs are moved to this API, and you can actually set must/may
options to pass in.
This is more useful for setting things like s3a seek policy than for S3 select,
as the existing input format/record readers can't handle S3 select output where
the stream is shorter than the file length, and splitting plain text is suboptimal.
Future work is needed there.
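A sketch of the builder in use, with the S3A fadvise key as a representative
"may" option; option keys are passed through to the filesystem opening the
stream.

    import java.io.IOException;
    import java.util.concurrent.ExecutionException;

    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // The builder is asynchronous: build() returns a future which completes
    // (or fails) once the file has actually been opened.
    static FSDataInputStream openForRandomIO(FileSystem fs, Path path)
        throws IOException, ExecutionException, InterruptedException {
      return fs.openFile(path)
          .opt("fs.s3a.experimental.input.fadvise", "random") // "may" option
          .build()
          .get();
    }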
In the meantime, any/all filesystem connectors are now free to add their own filesystem-specific
configuration parameters which can be set in jobs and used to set filesystem input stream
options (seek policy, retry, encryption secrets, etc).
Contributed by Steve Loughran